Hi all,
I am trying to make an API call to a source that returns JSON. The call is successful and I can get to the data; the issue is the pagination. There is a limit of 100 rows per page, and the system uses currentToken and nextToken pagination. I have built a custom pagination and this is how my script looks:
LIB CONNECT TO 'REST_CALL';
// Action required: Implement the logic to retrieve the total records from the REST source and assign to the 'total' local variable.
Let total = 0;
Let totalfetched = 0;
Let startAt = 0;
Let pageSize = 100;
for startAt = 0 to total step pageSize
RestConnectorMasterTable:
SQL SELECT
    "__KEY_root",
    (SELECT
        "limit",
        "totalCount",
        "nextToken",
        "currentToken",
        "__KEY_paging",
        "__FK_paging",
        (SELECT
            "__KEY__links",
            "__FK__links",
            (SELECT
                "href",
                "__FK_next"
            FROM "next" FK "__FK_next"),
            (SELECT
                "href" AS "href_u0",
                "__FK_self"
            FROM "self" FK "__FK_self")
        FROM "_links" PK "__KEY__links" FK "__FK__links")
    FROM "paging" PK "__KEY_paging" FK "__FK_paging"),
    (SELECT
        "id" AS "id_u0",
        "title",
        "code",
        "description",
        "start",
        "end",
        "closeAfter",
        "archiveAfter",
        "launchAfter",
        "timezone",
        "defaultLocale",
        "currency",
        "registrationSecurityLevel",
        "status",
        "eventStatus",
        "testMode",
        "created",
        "lastModified",
        "virtual",
        "format",
        "type" AS "type_u0",
        "capacity",
        "__KEY_data",
        "__FK_data",
        (SELECT
            "name",
            "__KEY_venues",
            "__FK_venues",
            (SELECT
                "city",
                "country",
                "countryCode",
                "latitude",
                "longitude",
                "address1",
                "region",
                "regionCode",
                "postalCode",
                "__FK_address"
            FROM "address" FK "__FK_address")
        FROM "venues" PK "__KEY_venues" FK "__FK_venues"),
        (SELECT
            "firstName",
            "lastName",
            "email",
            "prefix",
            "__FK_planners"
        FROM "planners" FK "__FK_planners"),
        (SELECT
            "id",
            "name" AS "name_u0",
            "type",
            "order",
            "__KEY_customFields",
            "__FK_customFields",
            (SELECT
                "@Value",
                "__FK_value"
            FROM "value" FK "__FK_value" ArrayValueAlias "@Value")
        FROM "customFields" PK "__KEY_customFields" FK "__FK_customFields"),
        (SELECT
            "name" AS "name_u1",
            "__FK_category"
        FROM "category" FK "__FK_category"),
        (SELECT
            "__KEY__links_u0",
            "__FK__links_u0",
            (SELECT
                "href" AS "href_u1",
                "__FK_invitation"
            FROM "invitation" FK "__FK_invitation"),
            (SELECT
                "href" AS "href_u2",
                "__FK_agenda"
            FROM "agenda" FK "__FK_agenda"),
            (SELECT
                "href" AS "href_u3",
                "__FK_summary"
            FROM "summary" FK "__FK_summary"),
            (SELECT
                "href" AS "href_u4",
                "__FK_registration"
            FROM "registration" FK "__FK_registration")
        FROM "_links" PK "__KEY__links_u0" FK "__FK__links_u0"),
        (SELECT
            "@Value" AS "@Value_u0",
            "__FK_languages"
        FROM "languages" FK "__FK_languages" ArrayValueAlias "@Value_u0")
    FROM "data" PK "__KEY_data" FK "__FK_data")
FROM JSON (wrap on) "root" PK "__KEY_root"
WITH CONNECTION(
    QUERY "startAt" "$(startAt)",
    URL "https://api-platform.sample.com/sample/sample",
    QUERY "filter" "sample.id eq '123456789'",
    HTTPHEADER "Authorization" "Bearer $(vToken)"
);
NEXT startAt;
Can anybody help me understand how to implement the nextToken in the URL so I can paginate through all records? I have read all the documentation related to this, but it only explains what nextToken means; it doesn't give any examples of how to actually implement it in the script.
Hello, Silviya. I hope all is well. This question can be very specific to the API you intend to retrieve data from. Do you happen to have an OpenAPI spec for this endpoint? (https://editor.swagger.io/)
If so, you'll need to increment the values used in the tokens you introduce at the top of your loop.
Alternatively, you can use the REST connector's pagination option and override it, where needed, to append your intended pagination scheme. See this article for reference: https://community.qlik.com/t5/Integration-Extension-APIs/GETTING-DATA-FROM-GITLAB-REST-API/m-p/20142...
Hi Steve,
Thanks a lot for your response!
I am not sure I understand what I have to do, though. Do you happen to have an example of what you described? I have a currentToken and a nextToken field in my data. As the platform I am trying to get the data from (Cvent) has a limit of 100 records per page, I need to somehow use these two fields to go through all the pages of data and get all the rows I need.
Hello SilviyaK.
Not a problem. I hope all is well. Without a standards-based doc describing your endpoint, we will step through your example as written. If your loop is working OK at this point, it has these initialization parameters:
// Action required: Implement the logic to retrieve the total records from the REST source and assign to the 'total' local variable.
Let total = 0;
Let totalfetched = 0;
Let startAt = 0;
Let pageSize = 100;
for startAt = 0 to total step pageSize
Check this page if you're unsure of the syntax: https://help.qlik.com/en-US/sense/November2021/Subsystems/Hub/Content/Sense_Hub/Scripting/ScriptCont...
Going back to your question, you would need to Peek() the total record count reported by the API, and, given your example, your loop is already set to step through 100 records per iteration. The following assignment would fall just before your next startAt:
// Action required: Implement the logic to retrieve the total records from the REST source and assign to the 'total' local variable.
Let total = 0;
Let totalfetched = 0;
Let startAt = 0;
Let pageSize = 100;
for startAt = 0 to total step pageSize
...
// Pull the record count reported by the API back out of the loaded table.
let total = Peek('totalCount', 0, 'RestConnectorMasterTable');
...
next startAt;
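One caveat: if I read the For..next documentation correctly, the end value of the loop is evaluated only the first time the loop is entered, so raising total inside the loop may not extend the iteration. For token-based paging it is often simpler to drop the counter loop and keep requesting until nextToken comes back empty. Here is a minimal sketch along those lines, trimmed to the paging fields; the "token" query parameter name is an assumption you would need to verify against the Cvent API spec:

LIB CONNECT TO 'REST_CALL';

Let vNextToken = '';

Do

    // The "token" query parameter name is an assumption - take the
    // real name from the Cvent API spec. The first request sends an
    // empty token, which some APIs reject, so check that behaviour too.
    RestConnectorMasterTable:
    SQL SELECT
        "__KEY_root",
        (SELECT
            "totalCount",
            "nextToken",
            "__KEY_paging",
            "__FK_paging"
        FROM "paging" PK "__KEY_paging" FK "__FK_paging")
    FROM JSON (wrap on) "root" PK "__KEY_root"
    WITH CONNECTION(
        URL "https://api-platform.sample.com/sample/sample",
        QUERY "filter" "sample.id eq '123456789'",
        QUERY "token" "$(vNextToken)",
        HTTPHEADER "Authorization" "Bearer $(vToken)"
    );

    // Rows append on every pass, so Peek the token of the page just
    // loaded from the last row; an empty or missing nextToken ends the loop.
    Let vNextToken = Peek('nextToken', -1, 'RestConnectorMasterTable');

Loop While Len('$(vNextToken)') > 0

In practice you would keep your full field list inside the SELECT; the loop shape stays the same.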
Hi Steve,
It still fetches only the first 100 rows. The endpoint is Cvent. This is a link to the documentation: https://developer-portal-sandbox.cvent.com/documentation.
Thanks!
Hello Silviya,
Based on the documentation for the endpoint, it looks like you want to leave the script you have aside and leverage the approach here as your starting point. You'll need to ensure the 'append missing parameters' option is enabled when setting up the connection: https://help.qlik.com/en-US/connectors/Subsystems/REST_connector_help/Content/Connectors_REST/Create...
The nextToken pagination option in the REST connector will let you page through the data right out of the box.
What you would use for your 'Next token path' would be something like this: 'root/paging/nextToken'
The path options give you this ability.
You have two root elements for a successful response: paging and data (per the screenshot from their site).
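Reconstructed from the field names in your own script (the values here are invented), a successful response is shaped roughly like this:

{
  "paging": {
    "limit": 100,
    "totalCount": 960,
    "currentToken": "...",
    "nextToken": "...",
    "_links": { ... }
  },
  "data": [ ... ]
}

So 'root/paging/nextToken' simply walks from the root element into paging and picks up nextToken for the next request.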
Hi Steve,
Thanks for your reply.
Please find attached screenshots of how my connection is currently set up (screenshots 1, 2 and 3); the connection tests successfully.
Afterwards, I selected a couple of fields from the data (id, first name and email) so I could run a test (screenshot 4 of my script). I then did two tests with two different event IDs for which I know the row count. One of them has 16 rows and loaded successfully within 5 seconds. The second ID has 960 rows, and when I ran the load with that event ID, Qlik started loading an unrealistically high number of lines (it got to 1,270,980) until my access token, which is valid for one hour, expired; I then got an error and the data load stopped. The error says I am unauthorized, which is down to the expired access token, but in any case I do not think the more than a million lines is correct to begin with. It seems that when there are more rows than the 100-per-page limit, something goes wrong.
Any ideas why this might be happening and how to make it work?
Thanks!
Additionally, I tried playing around with the 'Pass via header' and 'Look in header' options, but that didn't work either. If both are ticked, I get the first 100 rows. If only 'Look in header' is ticked, I get the first 100 rows, and if only 'Pass via header' is ticked, I again get far too many lines and my script crashes after about an hour.