Has anyone successfully run code to extract the scripts from QVW/QVF files in QlikView into QVS files using an external tool such as PowerShell, VBS, Python, etc.? The catch is that the code must:
1. Loop through all QVW/QVF files in a shared directory (including all sub-folders and the files within them).
2. Run in non-interactive mode (back-end).
I have created a PowerShell script and it runs as expected, but only in an interactive session. I used the approach from the Document Analyzer Compare tool built by Rob Wunderlich (https://qlikviewcookbook.com/2017/01/the-document-analyzer-compare-tool/). My challenge is running it in non-interactive mode, because the script can only open the QlikView application through the COM object:
$qv = New-Object -ComObject QlikTech.QlikView
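For reference, this is roughly what the loop looks like (a simplified sketch; the share path is a placeholder, and the GetProperties().Script call and the .qvs output naming are my assumptions to verify against the QlikView automation API documentation):

# Sketch only: loop the share recursively and dump each document's load script
$qv = New-Object -ComObject QlikTech.QlikView
Get-ChildItem -Path '\\server\share' -Recurse -Include *.qvw, *.qvf | ForEach-Object {
    $doc = $qv.OpenDoc($_.FullName)           # opens the document in QlikView Desktop
    $script = $doc.GetProperties().Script     # assumed member holding the load script
    Set-Content -Path ($_.FullName + '.qvs') -Value $script
    $doc.CloseDoc()
}
[System.Runtime.InteropServices.Marshal]::ReleaseComObject($qv) | Out-Null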
How can I open the QlikView application the way it would run from a QMC task?
Thank you for any advice
I am getting the following issue during the Talend upgrade from 8.0 R2025-07v2 to R2025-10. Can you please help?
After the Talend upgrade, the Snowflake JDBC tDBInput component fails with the error below.
INFO: Connecting to GLOBAL Snowflake domain
[FATAL] 00:49:45 etl.main- tDBInput_1 null
java.lang.NullPointerException: null
    at java.util.Objects.requireNonNull(Objects.java:209) ~[?:?]
    at java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Collectors.java:180) ~[?:?]
    at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169) ~[?:?]
    at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) ~[?:?]
    at java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1850) ~[?:?]
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
For the Snowflake database connection, we need to use the JDBC components with the snowflake-jdbc-3.22 driver and Key Pair Authentication. We cannot use the native Snowflake components.
I reversed the upgrade back to R2025-07v2 and the error does not occur.
I tried R2025-09 and got the same issue.
We are setting up a new task to replicate from a source Azure SQL Server endpoint into a target SQL Server 2024 endpoint.
The task runs, but none of the tables get replicated. We hit the following error:
Table 'dbo'.'table1' has encrypted column(s), but the 'Capture data from Always Encrypted database' option is disabled. The table will be suspended.
Can someone share how to fix this, please?
Thank you.
Desmond
Hi all,
Are there any tuning points to improve Qlik Replicate performance
when using Oracle XStream as a source endpoint?
When using Oracle as the source endpoint, the apply throughput is 46,000–48,000 records per second.
With Oracle XStream as the source endpoint, it is 7,000–9,000 records per second.
Are there any Qlik Replicate configuration settings (e.g., internal parameters) to improve processing performance?
Thanks.
After upgrading to the latest patch in December, the visualization bundles stopped working and displayed as invalid visualizations.
Looking into the issue, they had disappeared from the extension list.
I have tried the following:
Modifying / changing / repairing the bundle installation - no change. I am unable to remove them, as that option is greyed out.
When I check the logs, I am getting the following errors:
350 20250204T084038.700-0500 WARN System.Repository.Repository.Core.Resource.Support.ExtensionResourceSupport 50 d0117ad3-fb66-4122-8803-96f6472d5efa <username> Failed to add extension Exception of type 'System.Exception' was thrown.
ZIP file error at Repository.Core.Util.ZipUtil.UnzipFile(String filePath, String destinationFolder, String password, Boolean deleteZipFile, String fileFilter)
   at Repository.Core.Util.ZipUtil.UnzipFile(String filePath, String destinationFolder, String password, Boolean deleteZipFile, String fileFilter)
   at Repository.Core.Resource.Support.ExtensionResourceSupport.Add(String path, String password, Boolean appendPrivileges, Boolean replaceExtension) c0de923a-c1c9-2f6b-db41-ae357858ccba d0117ad3-fb66-4122-8803-96f6472d5efa
This is from the install log:
Error: POST /qrs/extension/upload?privileges=true&pwd=&xrfkey=4yAjqKO6JQCbPLow: 400
Nothing else has changed other than the patch, which was released as a security upgrade.
Hi,
Our client has a CDC task that reads from Oracle. Recently, the task was not capturing any changes in the source due to a database refresh that occurred months back. However, the task kept running and did not indicate any errors or warnings.
It also did not indicate that "Changes are not being captured", or "Changes have not been captured in the last N minutes". This is because if Oracle is the source endpoint, and there are changes occurring to other tables that aren't in the task, the messages will not be displayed.
My colleague then suggested using the attrep_status table to detect when a task has not captured changes in a while. The current time can be compared against a timestamp from the table to determine if changes have not been captured in some time. However, I want to be sure of which column I can suggest to the client to use that will work all the time.
I've narrowed it down to these two columns: Source_Timestamp_Applied and Source_Current_Timestamp.
In my testing, I've found that Source_Current_Timestamp seems to keep increasing. Would it keep increasing if there is an issue with Replicate reading the source logs (i.e., the logs are not found)? My assumption is that it would not.
I've also found that Source_Current_Timestamp does not increase if none of the tables in the task experience changes. It also does not change if unrelated tables not in that task are updated. I believe that this may be a good column to use.
However, there could be some info that I'm missing, so can someone please provide extra insight into this? Thank you.
Kind regards,
Mohammed
Ask Michael what’s kept him connected to Qlik since 2012, and he’ll tell you it’s simple: the people. From his first days as a QlikView developer to his role today, he’s found inspiration, support, and shared curiosity in the Qlik Community, a place that’s helped him grow while empowering others to do the same. It’s that spirit of continuous learning and contribution that makes Michael our November 2025 Featured Member!
Over the years, Michael has seen Qlik evolve from QlikView to the powerful analytics platform it is today, and he’s been right there growing with it. “The Qlik Community has been the best resource I’ve come across for learning all things about Qlik,” he shares. “An engaged community is underappreciated among BI software, and the Qlik Community has helped me immensely over the years.”
Michael embodies what it means to be a rising star, someone who continues to learn, contribute, and light the way for others through his curiosity and collaborative spirit.
Outside of work, Michael enjoys following baseball, riding his motorcycle, reading science fiction, and playing video games with his kids, a mix of hobbies that reflect both focus and imagination.
Michael, thank you for bringing your curiosity, energy, and drive to the Qlik Community. Please join us in celebrating Michael as our November 2025 Featured Member and leave a comment below!
@mgranillo @Sue_Macaluso @Jamie_Gregory @nicole_ulloa @Brett_Cunningham
Hi,
for example, this expression "=today() - date(date#('20200920','YYYYMMDD'))"
returns 1879 as a result, and I need to show the number of years, months, and days between those two dates.
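One possible way to split that day count is sketched below (my assumptions: vStart is a variable holding the earlier date, e.g. =Date#('20200920','YYYYMMDD'), and the other variables are defined in the variable panel with a leading '=' so they can reference each other):

// vYears: full years between vStart and today
=Year(Today()) - Year(vStart)
 - If(AddYears(vStart, Year(Today()) - Year(vStart)) > Today(), 1, 0)

// vMonths: full months remaining after the full years
=(Year(Today())*12 + Month(Today()))
 - (Year(AddYears(vStart, vYears))*12 + Month(AddYears(vStart, vYears)))
 - If(Day(Today()) < Day(AddYears(vStart, vYears)), 1, 0)

// vDays: days remaining after the full years and months
=Today() - AddMonths(AddYears(vStart, vYears), vMonths)

// Final display expression, e.g. '5 years, 1 month, 23 days'
=vYears & ' years, ' & vMonths & ' months, ' & vDays & ' days'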
Thanks
Hello,
I am working with a pivot table in Qlik Sense with dynamic dimensions. The dimensionality is not fixed and can be affected by selections on two different data islands. I also have to handle hidden dimensions carefully, because GetObjectField() can return a blank result ('') if a dimension's show condition is not fulfilled.
For my app, I need to compute percentages for each dimension by using the TOTAL qualifier with a different number of fields according to the dimensionality of the pivot table. The problem is users are allowed to change the order of the dimensions by moving the fields in the pivot table. For instance, they might put field2 before field1, and all other possible combinations.
Therefore, the order for the dimensions is really important as I compute the TOTAL according to the dimensionality of the table and the order of the fields. I use GetObjectField() along with variables to define the first dimension that is shown, vFirstDimension, defined as:
if(len(GetObjectDimension(0)) > 0, 0,
if(len(GetObjectDimension(1)) > 0, 1,
if(len(GetObjectDimension(2)) > 0, 2,
''
)
)
)
Therefore, I can now use GetObjectField($(vFirstDimension)) to get the first dimension shown. For the second available dimension, I define vSecondDimension:
if(len(GetObjectDimension($(vFirstDimension)+1)) > 0, $(vFirstDimension)+1,
if(len(GetObjectDimension($(vFirstDimension)+2)) > 0, $(vFirstDimension)+2, ''))
This does work, but as you can imagine, there are plenty of nested if statements, and the final app is getting slow because of this.
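For context, these variables end up inside the measures roughly like this (a sketch only; Sales is a placeholder field name):

// Percentage within the first shown dimension
Sum(Sales) / Sum(TOTAL <[$(=GetObjectField($(vFirstDimension)))]> Sales)

// Percentage within the first two shown dimensions, when a second one exists
Sum(Sales) / Sum(TOTAL <[$(=GetObjectField($(vFirstDimension)))], [$(=GetObjectField($(vSecondDimension)))]> Sales)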
Note: dimensions are shown according to the selections made on a data island; however, I can't simply concatenate the fields, because that won't respect the order when the fields are rearranged in the pivot table itself.
Do you have any suggestions for resolving this performance problem?
Thanks in advance.
Greetings,
Alex
Hi everyone, in the Qlik Cloud capacity-based model, the service account owner is notified via email when approaching a capacity limit (at 90%, 95%, and 100% of the limit). Is it possible to automatically send the same email to other recipients? Additionally, is it possible to customize the alert thresholds?
Data has stories to tell—let Qlik help you hear them. Start your FREE TRIAL today!
Only at Qlik Connect! Guest keynote Jesse Cole shares his secrets for daring to be different.
The Qlik Product Recap spotlights the latest features, plus a short demo and shareable resources to help you stay current and make the most of Qlik.
Making a platform decision for 2025? See why IDC named Qlik a Leader in BI and Analytics.
With Qlik Public ILT Passport, embark on live sessions with our top trainers — helping you grow faster and get the most out of Qlik. See where your passport can take you.
Take advantage of this ISV on-demand webinar series that includes 12 sessions of detailed examples and product demonstrations presented by Qlik experts.
Catalyst Cloud developed Fusion, a no-code portal that integrates with existing Qlik licenses to deliver critical insights across the organization. The results? Time savings, transparency, scalability to expand, and increased adoption.
Catalyst Cloud developed Coeus SEP, a Qlik‑based platform for sharing supply chain data with suppliers.
Billion-dollar organization delivers quality service and consistency at scale with Qlik Answers, powered by Amazon Bedrock.
Thomas More University works with the Qlik Academic Program and EpicData to encourage and inspire students.
Qlik Data Integration transforms Airbus' aircraft production, leading to over 150 replication tasks and informing more efficient analysis.
Join one of our Location and Language groups. Find one that suits you today!
Join the conversation with Qlik users across Mexico: share ideas, ask questions, and connect in Spanish.
This is the Japanese-language group of the Qlik Community. You can download Japanese-language materials about Qlik products and post questions in Japanese.
Connect with French-speaking Qlik users to collaborate, ask questions, and share ideas.
The Qlik Product Recap showcases the newest features with quick demos and shareable resources to help you stay current.
You can test-drive Qlik for free. Try Qlik Talend Cloud to integrate and clean data without code, or explore Qlik Cloud Analytics to create AI-powered visualizations and uncover insights hands-on.
Salesforce’s acquisition of Informatica could put your flexibility on hold. Qlik makes it easy to keep your data initiatives moving forward.
You can move beyond batch processing and harness real-time data to power AI and faster decisions. Discover how in our new eBook, Mastering Change Data Capture.