Welcome to Qlik Community!

  • 258,114 members
  • 3,064 online
  • 2,040,572 posts
  • 153,560 Solutions
Announcements
Week 5: Getting Answers With AI + A New Era of Data Governance - WATCH NOW


Recent Discussions

  • Talend Studio

    tDBOutput extremely slow

    I have a process that pulls from a MySQL database, compares to existing entries in a destination database, and then Inserts or Updates depending on existence.

    The Updates are extremely slow, along the lines of ~2 rows/sec. This is fairly consistent regardless of batch size or commit size. Destination has an appropriate index for the key. 

    When the job is modified to send Updates to a tLogRow component, output jumps to nearly 8k rows/sec.

    Any ideas why this tDBOutput component is going so slow?
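
    For comparison, here is a minimal sketch (outside Talend, using the mysql-connector-python driver and hypothetical table and column names) of sending the updates in batches with one commit per batch, which is roughly the behavior the component's batch size / commit settings are meant to provide:

    import mysql.connector  # pip install mysql-connector-python

    # Hypothetical connection details and schema.
    conn = mysql.connector.connect(host="dest-host", user="etl",
                                   password="secret", database="dest_db")
    cur = conn.cursor()

    # rows_to_update: (new_value, key) pairs produced by the compare step.
    rows_to_update = [("new value 1", 101), ("new value 2", 102)]

    BATCH = 1000
    sql = "UPDATE target_table SET some_col = %s WHERE key_col = %s"
    for i in range(0, len(rows_to_update), BATCH):
        cur.executemany(sql, rows_to_update[i:i + BATCH])
        conn.commit()  # commit once per batch, not per row

    cur.close()
    conn.close()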

  • Women Who Qlik

    Get excited for a NEW Women Who Qlik LinkedIn Live!

    Join us on LinkedIn Live August 21 for a conversation with #WomenWhoQlik on challenging bias, redesigning systems, and building a more equitable data culture.

    Featuring voices from across data, tech, and ethics, we’ll explore how women can lead the charge for transparency, fairness, and inclusion in the systems that shape our world.

    Share in the excitement with @Aklidas, @StephanieR, @jeannine_boot, and Kelly Forbes, and let them know you will be attending HERE!

  • Qlik Replicate

    Qlik Replicate: is it possible in CDC to filter only the updates?

    Hi, for a certain scenario, is it possible to filter so that only update changes are applied to the target? In this case we do not need inserts or deletes. Thanks
  • Qlik Replicate

    Qlik Replicate target as Databricks Lakehouse (Delta) on GCP

    Hi all,

    We have Qlik Replicate with a Databricks Lakehouse (Delta) target on GCP.

    Some of our tables have a primary key, and a few of them do not.

    The docs I checked say that Apply Changes works only for tables with PKs.

    How can we support the tables without PKs? Kindly share your suggestions.

    Also, even if we use Store Changes to __CT tables, will a truncate-and-load approach work?
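
    To illustrate the truncate-and-load idea for a Store Changes (__CT) table, here is a minimal PySpark sketch; the table names and column list are hypothetical, and deduplication and ordering of the change records are omitted. This is only a sketch of the approach, not a Replicate setting.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    target = "target_schema.my_table"       # hypothetical Delta target table
    changes = "target_schema.my_table__ct"  # hypothetical Store Changes (__CT) table

    # Rebuild the target from the change table: truncate, then reload.
    # (Deduplication and ordering of the change records are omitted in this sketch.)
    spark.sql(f"TRUNCATE TABLE {target}")
    spark.sql(f"INSERT INTO {target} SELECT col1, col2, col3 FROM {changes}")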

     

  • Qlik Predict

    Time Series Forecasting for Multiple Customers in Qlik AutoML / ML Experiment

    Hi,

    I’m working on building a time series forecasting model to predict Gross Profit for the next 18 months (Aug 2025 through Dec 2026), and I’m looking for expert guidance on the best approach within Qlik’s ML environment—especially for handling multiple time series (one per customer).

    Dataset Details:

    • Time Range: Jan 2022 – July 2025 (43 months of historical data)

    • Columns: CustomerID, EOM (End of Month), Amount (Gross Profit)

    • Scale: 500+ unique CustomerIDs

    • Goal: Generate individual forecasts per customer for the next 18 months

    Below is a screenshot of the dataset.

    I’m specifically exploring the "ML Experiment" feature and wondering:

    Is there a tutorial or example that shows how to structure a time series forecast for multiple entities (like CustomerID)? I’ve reviewed the documentation but haven’t found a clear example of this use case.

    If you’ve implemented something similar or know of any Qlik-native resources, tutorials, or best practices, I’d love your insights.
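
    Not an AutoML answer, but as a generic illustration of the data shape described above (long format, one row per CustomerID and month, plus a future monthly spine per customer that the forecast has to fill), here is a small pandas sketch with made-up values:

    import pandas as pd

    # Historical data in the long format described above: one row per CustomerID and EOM.
    hist = pd.DataFrame({
        "CustomerID": ["C001", "C001", "C002", "C002"],
        "EOM": pd.to_datetime(["2025-06-30", "2025-07-31", "2025-06-30", "2025-07-31"]),
        "Amount": [1200.0, 1350.0, 800.0, 790.0],
    })

    # Build the future month-end spine (Aug 2025 through Dec 2026) for every customer.
    future_months = pd.date_range("2025-08-31", "2026-12-31", freq="M")
    spine = pd.MultiIndex.from_product(
        [hist["CustomerID"].unique(), future_months],
        names=["CustomerID", "EOM"],
    ).to_frame(index=False)

    print(spine.groupby("CustomerID").size())  # future rows per customer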

    Thanks in advance!

  • Qlik Replicate

    Amazon RDS for DB2 update operations taking a long time

    Source : DB2 for zOS v12

    Target : AWS RDS for DB2 v11.5.9

    Qlik Replicate Server : Windows 2016 (2024.0.11.177)

    We have 3 environments (Prod, Test, Dev). Prod and Dev apply update operations as Delete and Insert DML statements, whereas our Test environment applies update operations as UPDATE DML statements:

    UPDATE "BRONZE"."ECS_CONTR_TO_ENV"
    SET "EMAIL_ADDR"= (
    SELECT CASE WHEN "BRONZE"."attrep_changesA871FDC750E1E5C9"."col1" IS NULL THEN "BRONZE"."ECS_CONTR_TO_ENV"."EMAIL_ADDR" WHEN "BRONZE"."attrep_changesA871FDC750E1E5C9"."col1" = '<att_null>' THEN NULL ELSE CAST( "BRONZE"."attrep_changesA871FDC750E1E5C9"."col1" AS VARCHAR(100)) END
    FROM "BRONZE"."attrep_changesA871FDC750E1E5C9"
    WHERE "BRONZE"."ECS_CONTR_TO_ENV"."CUST_ID"= CAST( "BRONZE"."attrep_changesA871FDC750E1E5C9"."seg1" AS INTEGER) AND "BRONZE"."ECS_CONTR_TO_ENV"."REP_ID"= CAST( "BRONZE"."attrep_changesA871FDC750E1E5C9"."seg2" AS VARCHAR(5)) AND "BRONZE"."ECS_CONTR_TO_ENV"."DATE_OF_BIRTH"= CAST( "BRONZE"."attrep_changesA871FDC750E1E5C9"."seg3" AS DATE) AND "BRONZE"."ECS_CONTR_TO_ENV"."TAX_CNTRY_CODE"= CAST( "BRONZE"."attrep_changesA871FDC750E1E5C9"."seg4" AS VARCHAR(2)) AND "BRONZE"."attrep_changesA871FDC750E1E5C9"."seq" >= ? AND "BRONZE"."attrep_changesA871FDC750E1E5C9"."seq" <= ? ) ,"ADDR1"= (
    SELECT CASE WHEN "BRONZE"."attrep_changesA871FDC750E1E5C9"."col2" IS NULL THEN "BRONZE"."ECS_CONTR_TO_ENV"."ADDR1" WHEN "BRONZE"."attrep_changesA871FDC750E1E5C9"."col2" = '<att_null>' THEN NULL ELSE CAST( "BRONZE"."attrep_changesA871FDC750E1E5C9"."col2" AS VARCHAR(50)) END
    FROM "BRONZE"."attrep_changesA871FDC750E1E5C9"
    WHERE "BRONZE"."ECS_CONTR_TO_ENV"."CUST_ID"= CAST( "BRONZE"."attrep_changesA871FDC750E1E5C9"."seg1" AS INTEGER) AND "BRONZE"."ECS_CONTR_TO_ENV"."REP_ID"= CAST( "BRONZE"."attrep_changesA871FDC750E1E5C9"."seg2" AS VARCHAR(5)) AND "BRONZE"."ECS_CONTR_TO_ENV"."DATE_OF_BIRTH"= CAST( "BRONZE"."attrep_changesA871FDC750E1E5C9"."seg3" AS DATE) AND "BRONZE"."ECS_CONTR_TO_ENV"."TAX_CNTRY_CODE"= CAST( "BRONZE"."attrep_changesA871FDC750E1E5C9"."seg4" AS VARCHAR(2)) AND "BRONZE"."attrep_changesA871FDC750E1E5C9"."seq" >= ? AND "BRONZE"."attrep_changesA871FDC750E1E5C9"."seq" <= ? ) ,"ADDR2"= (
    SELECT CASE WHEN "BRONZE"."attrep_changesA871FDC750E1E5C9"."col3" IS NULL THEN "BRONZE"."ECS_CONTR_TO_ENV"."ADDR2" WHEN "BRONZE"."attrep_changesA871FDC750E1E5C9"."col3" = '<att_null>' THEN NULL ELSE CAST( "BRONZE"."attrep_changesA871FDC750E1E5C9"."col3" AS VARCHAR(50)) END

    Each update statement updates every column based on the seq number, and I am not sure whether untouched columns are included as well. This has a huge impact: it fills the sorter sub-directory of the task directory (on the Qlik Replicate server).

    We want the Test environment to behave the same way as the Prod and Dev environments (i.e., execute updates as Delete and Insert operations).

    Please let me know how to achieve this. Which setting at the task or endpoint connection level controls this behavior?

    Thank you,

    Raghavan Sampath

  • Qlik Replicate

    Qlik Replicate interface loads slowly after upgrading to May 2024 (2024.5.0.563)

    Hi,

    I have noticed that the QDI interface loads more slowly in the browser after upgrading to May 2024 (2024.5.0.563).

    I am using Chrome Version 138.0.7204.158 (Official Build) (64-bit).

    Is there a way to check the logs to see what's going on, or to increase the cache size?

     

    Thank you.
    Desmond

  • App Development

    Load Inline Table Formula Help

    Hello,

    I have an inline table with the following data: Product Code, Period, and Total.

    TempData:
    LOAD * INLINE [
    Cod_Product, Period, Total
    A, 202004, 675
    A, 202007, 646
    A, 202104, 668
    A, 202404, 570
    B, 202108, 901
    B, 202008, 955
    B, 202504, 597
    B, 202203, 682
    C, 202511, 135
    C, 202311, 551
    C, 202510, 292
    C, 202001, 867
    D, 202502, 912
    D, 202202, 126
    D, 201805, 681
    D, 202507, 406
    ];

    I would like to know if you can help me find the formula that calculates, for each product, its maximum period up to the period selected in the filter (or the overall maximum when no period is selected). For example:

    Example if I choose 202502
    202502 D
    202311 C
    202203 B
    202104 A

    Example if I choose 202203
    202202 D
    202001 C
    202203 B
    202104 A

    Example if I choose 202008
    201805 D
    202001 C
    202008 B
    202007 A

    Example with no period selected

    202507 D
    202511 C
    202504 B
    202404 A

    And once this is done, I want to show the sum of Total for those maximum-period rows.
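
    To make the intended logic explicit, here is a generic pandas sketch of the rule only (not the Qlik expression itself): for each product, take the latest Period that does not exceed the selected Period, or the overall latest when nothing is selected, then sum Total over those rows.

    import pandas as pd

    df = pd.DataFrame({
        "Cod_Product": ["A", "A", "A", "A", "B", "B", "B", "B",
                        "C", "C", "C", "C", "D", "D", "D", "D"],
        "Period": [202004, 202007, 202104, 202404, 202108, 202008, 202504, 202203,
                   202511, 202311, 202510, 202001, 202502, 202202, 201805, 202507],
        "Total": [675, 646, 668, 570, 901, 955, 597, 682,
                  135, 551, 292, 867, 912, 126, 681, 406],
    })

    def max_period_rows(selected_period=None):
        """Latest Period per product, limited to the selected Period if one is chosen."""
        sub = df if selected_period is None else df[df["Period"] <= selected_period]
        idx = sub.groupby("Cod_Product")["Period"].idxmax()
        return df.loc[idx, ["Cod_Product", "Period", "Total"]]

    rows = max_period_rows(202203)
    print(rows)                 # A 202104, B 202203, C 202001, D 202202
    print(rows["Total"].sum())  # 2343: sum of Total over those maximum-period rows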

    I attach an Excel file for better understanding.

    Thank you!!!!

  • Integration, Extension & APIs

    GoHighLevel integration in Qlik Sense Cloud

    Hi,

    I am integrating GoHighLevel with Qlik using the REST connector.
    The method I am using is:
    URL: 'https://rest.gohighlevel.com/v1/contacts/'
    Query parameters: Authorization, Bearer API KEY
    Pagination type: Next URL, configured as follows:
    With next URL path root/nextPageUrl: it connects, but the load returns very little data (106 out of 2000 rows).
    With next URL path root/meta/nextPageUrl: it connects, but reloading in the data load editor shows error 502.
    I have tested the same call in Postman and I get the whole dataset.
    Please help me figure out how to proceed.
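
    For reference, a minimal Python sketch (outside Qlik) of the paging behavior the connector should reproduce; it assumes, as in the post, that the response exposes nextPageUrl either at the root or under meta, that records come back under a "contacts" key, and it sends the Bearer token as an HTTP header:

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder
    url = "https://rest.gohighlevel.com/v1/contacts/"
    headers = {"Authorization": f"Bearer {API_KEY}"}

    contacts = []
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        contacts.extend(body.get("contacts", []))  # assumption: records under "contacts"
        # Follow whichever next-page field is present (root or under meta),
        # mirroring the two "Next URL path" settings tried in the connector.
        url = body.get("nextPageUrl") or body.get("meta", {}).get("nextPageUrl")

    print(len(contacts))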

  • Talend Studio

    Talend Big Data job failed due to null pointer issue at tSQLROW

    Hi,

    I am running a Talend Big Data job on Spark, but it failed with a null pointer exception and I could not figure out the root cause.

    Please refer to the log below:

    org.apache.spark.SparkException: Job aborted due to stage failure: java.lang.RuntimeException: java.lang.NullPointerException
    at org.talend.bigdata.dataflow.functions.FlatMapperIterator.hasNext(FlatMapperIterator.java:75)
    at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
    at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply$mcV$sp(PairRDDFunctions.scala:1195)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply(PairRDDFunctions.scala:1195)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply(PairRDDFunctions.scala:1195)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1277)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1203)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1183)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

     

     

    I am doing a right outer join using tSQLROW, which is where I suspect the issue lies, but I don't know the root cause.
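
    Not a diagnosis, but to illustrate the kind of null handling that often matters in a right outer join like the one described above, here is a small PySpark sketch with hypothetical column names; in a right outer join the left-side columns come back as null for unmatched rows, so downstream expressions need coalesce or explicit null checks:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    left = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "left_val"])
    right = spark.createDataFrame([(2, "x"), (3, "y")], ["id", "right_val"])

    # Right outer join: id=3 has no match on the left, so left_val is NULL for that row.
    joined = left.join(right, on="id", how="right_outer")

    # Guard nullable columns before using them downstream.
    result = joined.withColumn("left_val", F.coalesce(F.col("left_val"), F.lit("missing")))
    result.show()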

     

    Thanks


Customer Story

Catalyst Cloud (Fusion)

Catalyst Cloud developed Fusion, a no-code portal that integrates with existing Qlik licenses to deliver critical insights across the organization. The results? Time savings, transparency, scalability to expand, and increased adoption.
