Hello,
I'm wondering if it's possible to use Filter Conditions and Record Selection Conditions on the same column / table / task?
What I'm hoping to do is use the Filter Conditions to first limit what Replicate extracts, then use the Record Selection Conditions to do a more complex or dynamic filtering before writing to the target.
My source has billions of rows spanning the last two decades, but I only want to replicate the last two years of data dynamically.
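For context, a Record Selection Condition in Replicate is an expression in SQLite syntax over $-prefixed column names. A sketch of a rolling two-year window (the ORDER_DATE column name here is a hypothetical placeholder for your actual date column) might look like:

```
$ORDER_DATE >= date('now', '-2 years')
```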
Thank you in advance.
Hello All,
We are in the process of replicating 1200+ tables from an Oracle ERP source to a target Oracle data warehouse. Since there is a huge number of tables, we decided on a Log Stream and child-task setup. We split the Log Stream tasks into 4 buckets based on the daily volume of changes (Low, Medium, High, and Temp); the tables in Temp will be sorted into the other 3 buckets at a later point in time. The child tasks are divided into 18: 8 dedicated tasks with one high-volume table each, 6 tasks for Low volume, 3 tasks for Medium volume, and 1 for Temp.
The Log Stream task is running fine with no latency. However, about 5 child tasks have extremely high latency. Changes are accumulating on disk at the target: close to 220,991 transactions, with 2654 transactions waiting until target commit. There are no updates happening on the table. Initially I found this error in target_apply: "01984869: 2025-11-18T00:19:20 [TARGET_APPLY ]T: Failed to execute statement. Error is: ORA-20999: 0 rows affected ORA-06512: at line 1 ORA-06512: at line 1
~{AgAAACJPDXuDj7NuzKKZuhSA2i723IjjATLTcyL3ERn8+QUzNE8RMERqiUYA7kOMnOaX3Wc4W281UX8IYxudA3YrCAEVr5+i0nB5RsPT7yk48GiwfHqoQpYV7luwBuxIIyMYI5MFJmByPPt2Z2ZbsMlhESBsVz"
But after enabling UPSERT mode on the Apply Conflicts tab, that error no longer appears; instead I now see the message below in target_apply:
Got non-Insert change for table 200 that has no PK. Going to finish bulk since the Inserts in the current bulk exceeded the threshold
Could you please advise how I can resolve this issue?
I have attached the screenshot of the UI and also the task logs and DR package.
Can anyone help to fix this?
Hi there,
we are developing a service to trigger automations via the Qlik REST API.
For this we are using OAuth allowing for impersonation.
While working with the API, I stumbled upon some issues with the API:
"fields" paramerter in endpoints:
The documentation e.g. for getting a list of all automations get-api-v1-automations states that there is a "fields" parameter.
However, the parameter is completely ignored.
The only endpoint where it seems to be working is /api/v1/users. Is this simply not implemented yet for automations (although the docs say otherwise), or am I missing something when querying e.g. /api/v1/automations?fields=name ?
Getting automation details (executionToken for triggered automations):
As stated above, we are using OAuth and impersonation to allow the service to run all existing automations.
When not impersonating, the client is acting as a Tenant Admin, which allows us to get a list of all automations.
However, the list of all automations does not include the executionToken for triggered automations unless you are explicitly the owner of that automation.
So to get the executionToken, we have to impersonate the owner, which means we have to find out who the owner is first.
This leads to the following scenario:
Our service is supposed to trigger automation "XYZ12345" (identified only by its ID).
At this point, the owner is unknown - the endpoint to get information on the automation (GET /api/v1/automations/XYZ12345) is forbidden to the OAuth client, since it's not the owner.
The list of all automations strangely does not allow for an ID-filter.
This then means we have to query all automations every time, filter for the ID in question client-side to get its ownerId, and then impersonate the owner to get the automation details including the executionToken.
This seems like a very convoluted way to go about things, so again I am wondering if I am missing something here?
We tried giving all available roles to the OAuth client in the UI and tried the appropriate scopes ("admin_classic" / "admin.automations") for the access token, but that didn't help.
I saw that you can actually provide a filter for other fields like "name" for the automations endpoint, but since that property is potentially subject to change and the information on which automation to run in which case is stored outside of Qlik, it wouldn't be safe to use.
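The client-side filtering step described above can be sketched as follows. This is a minimal illustration, assuming the `id` and `ownerId` field names in the items returned by GET /api/v1/automations (and that paging has already been handled):

```python
def find_owner_id(automations, automation_id):
    """Scan a merged list of automation dicts (as returned by
    GET /api/v1/automations) and return the ownerId for the given
    automation id, or None if it is not present.

    The 'id' and 'ownerId' keys are assumptions based on the API docs.
    """
    for automation in automations:
        if automation.get("id") == automation_id:
            return automation.get("ownerId")
    return None
```

The returned owner ID would then be used as the impersonation subject when requesting the access token for the follow-up GET /api/v1/automations/{id} call.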
Thanks a lot!
Hi, I have this result:
| itemcode | distnumber | onhandqty |
|---|---|---|
| 1800000038 | 127-07 | 952.540000 |
| 1800000038 | 127-08 | 952.540000 |
| 1800000038 | 127-09 | 952.540000 |
| 1800000038 | 127-10 | 952.540000 |
| 1800000038 | 127-13 | 952.540000 |
| 1800000038 | 127-14 | 952.540000 |
| 1800000038 | 127-15 | 952.540000 |
| 1800000038 | 127-16 | 952.540000 |
| 1800000038 | 127-17 | 952.540000 |
| 1800000038 | 127-18 | 952.540000 |
| 1800000038 | 127-19 | 952.540000 |
| 1800000038 | 127-20 | 952.540000 |
| 1800000038 | 128-01 | 952.540000 |
| 1800000038 | 128-02 | 952.540000 |
| 1800000038 | 128-03 | 952.540000 |
| 1800000038 | 128-04 | 952.540000 |
| 1800000038 | 128-05 | 952.540000 |
| 1800000038 | 128-06 | 952.540000 |
| 1800000038 | 128-07 | 952.540000 |
| 1800000038 | 128-08 | 952.540000 |
| 1800000038 | 128-09 | 952.540000 |
| 1800000038 | 128-10 | 952.540000 |
| 1800000038 | 128-11 | 952.540000 |
| 1800000038 | 128-12 | 952.540000 |
| 1800000038 | 128-13 | 952.540000 |
| 1800000038 | 128-14 | 952.540000 |
| 1800000038 | 128-15 | 952.540000 |
| 1800000038 | 128-16 | 952.540000 |
| 1800000038 | 128-17 | 952.540000 |
| 1800000038 | 128-18 | 952.540000 |
| 1800000038 | 128-19 | 952.540000 |
| 1800000038 | 128-20 | 952.540000 |
| 1800000040 | 132-35 | 400.000000 |
| 1800000040 | 132-36 | 400.000000 |
| 1800000040 | 132-37 | 400.000000 |
| 1800000040 | 132-38 | 400.000000 |
| 1800000040 | 132-39 | 400.000000 |
| 1800000040 | 132-40 | 400.000000 |
| 1800000040 | 133-01 | 400.000000 |
| 1800000040 | 133-02 | 400.000000 |
| 1800000040 | 133-03 | 400.000000 |
| 1800000040 | 133-04 | 400.000000 |
| 1800000040 | 133-05 | 400.000000 |
| 1800000040 | 133-06 | 400.000000 |
| 1800000040 | 133-07 | 400.000000 |
| 1800000040 | 133-08 | 400.000000 |
| 1800000040 | 133-09 | 400.000000 |
| 1800000040 | 133-10 | 400.000000 |
| 1800000040 | 133-11 | 400.000000 |
| 1800000040 | 133-12 | 400.000000 |
| 1800000040 | 133-13 | 400.000000 |
| 1800000040 | 133-14 | 400.000000 |
| 1800000040 | 133-15 | 400.000000 |
| 1800000040 | 133-16 | 400.000000 |
| 1800000040 | 133-17 | 400.000000 |
| 1800000040 | 133-18 | 400.000000 |
| 1800000040 | 133-19 | 400.000000 |
| 1800000040 | 133-20 | 400.000000 |
| 1800000040 | 133-21 | 400.000000 |
| 1800000040 | 133-22 | 400.000000 |
| 1800000040 | 133-23 | 400.000000 |
| 1800000040 | 133-24 | 400.000000 |
| 1800000040 | 133-25 | 400.000000 |
| 1800000040 | 133-26 | 400.000000 |
| 1800000040 | 133-27 | 400.000000 |
| 1800000040 | 133-28 | 400.000000 |
| 1800000040 | 133-29 | 400.000000 |
| 1800000040 | 133-30 | 400.000000 |
| 1800000040 | 133-31 | 400.000000 |
| 1800000040 | 133-32 | 400.000000 |
| 1800000040 | 133-33 | 400.000000 |
| 1800000040 | 133-34 | 400.000000 |
| 1800000040 | 133-35 | 400.000000 |
| 1800000040 | 133-36 | 400.000000 |
| 1800000040 | 133-37 | 400.000000 |
| 1800000040 | 133-38 | 400.000000 |
| 1800000040 | 133-39 | 400.000000 |
| 1800000040 | 133-40 | 400.000000 |
I want only one result per item code, with the min of the first three digits of distnumber and a count of the rows matching that condition.
In my table chart I use:
first column = the dimension
second column: Min( SubField(distnumber, '-', 1) ) (takes only the min of the distnumber)
last column:
count(
{<
distnumber = {"*$(=Aggr(Min(Left(distnumber,3)), U_CapacItemcode))*"}
>} distnumber
)
This gives an incorrect result: for 1800000038 (O-71) I get the total count of distnumbers across both prefixes 127 and 128, whereas the correct count is 12.
Below is the correct result. How can I get it? Thanks.
| 1800000038 | 127 | 12 |
| 1800000040 | 132 | 6 |
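One approach worth trying (a sketch, not tested against your model, assuming itemcode is the chart dimension; your expression uses U_CapacItemcode): compare each distnumber's prefix with the per-itemcode minimum via Aggr with NODISTINCT, so the comparison happens per row instead of once per chart:

```
Count( If( Left(distnumber, 3) = Aggr(NODISTINCT Min(Left(distnumber, 3)), itemcode), distnumber ) )
```

The set-analysis version tends to fail here because the dollar expansion is evaluated once for the whole chart, not per dimension value, so both prefixes end up matching.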
Hello, I have the following table:
I have selected the above project names;
the expression is sum(amount)
now total amount is 35.79 which is correct
but in the expression I want to eliminate the projects whose parent_project_pfolio_1_id = 1 and pfolio_level_id = 18
so the last row shouldn't be counted, and thus the total should be 34.73
so I wrote the below expression:
sum(
{$-<parent_project_pfolio_1_id = {1}, pfolio_level_id = {18}>}
amount)
and I got the following result:
the total is correct however the results came under null scenario type and null year
but if I exclude the SUPPORT-BSRM_BSM project I get the fine result:
So how can I get the correct result when the project to be excluded is part of the current selection?
Why, when that project is selected, do the other included projects come under a null year and null scenario?
Note that all the fields except year and scenario are in the project (dimension) table, linked to the fact table via prjh_key; the year is part of the calendar table linked to the fact table via month_key, and the scenario is part of the scenario table linked via scenario_key.
Kindly advise.
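One pattern worth trying (a sketch only; field names taken from your description): indirect set analysis with E(), excluding the matching projects through the link key instead of subtracting fact rows directly, which can avoid the null dimension rows:

```
Sum({<prjh_key = E({<parent_project_pfolio_1_id = {1}, pfolio_level_id = {18}>} prjh_key)>} amount)
```

Here E() returns the prjh_key values that would be excluded if those two field values were selected, i.e. the keys of all projects that do not match the exclusion condition.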
As per a client requirement, we have to remove all Java client related files from all Replicate servers. The client found Java client files in the paths below under the Oracle client:
E:\app\client\product\19.0.0\client1\jdk\bin\
E:\app\client\product\19.0.0\client1\jdk\jre\bin\
If we remove this JDK folder (along with its internal files), is there any impact on existing or new replication tasks/processes, and what is the impact on Qlik Replicate?
Hello,
Our company is in the process of moving to Qlik Cloud. We are in the process of migrating Nprinting reports to the cloud, mainly tabular reports, as they are based on Excel models.
On some reports, we are encountering migration problems. For example, we are applying a filter on a line graph for data from a specific week (filter CSL_Last_Week_Actual_Year on graph <wyEBjY>, screenshot below of Nprinting design).
When I export this NPrinting template to a tabular report template (with "Export Qlik Cloud reports" in the Qlik NPrinting web console), I get an error because this functionality is not available in tabular reports: "Object filters are not supported. The following tags will not be filtered"
How can I apply a specific filter to a specific graph in Qlik Cloud Reporting (tabular reports, PixelPerfect, automation, etc.)?
Thanks,
Florian
Hello,
I wonder if it is possible to migrate my '.qvw' files to Qlik Sense Desktop or similar.
I have already installed Qlik Sense Desktop locally, but I find it totally different from QlikView.
Thank you
Hi all.
Please advise whether there is any option to highlight, in colour, only the very top Totals row in a pivot table.
I have a table like the one below and tried to solve this using the Dimensionality() function, but for some reason my formula does not work.
=If(Dimensionality() = 0,LightGray(), White())
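A quick way to debug this (a sketch): temporarily add Dimensionality() as a plain measure so you can see which value each row actually reports. The top-level Totals row is usually 0, but the value you observe depends on where the colour expression is placed (measure background colour vs. dimension):

```
=Dimensionality()   // add temporarily as a measure to inspect the level of each row
```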