Qlik Ideation
Vote for your favorite product ideas and suggest your own
Please search for an idea before you post a new one. Your idea may already have been submitted!
Hello everyone and welcome to the new Ideation Discussion Board!
The Product Insight & Ideas blog has been retired and replaced by the Ideation Discussion board, where the whole Ideation community can post and comment on discussions related to the Ideation process. The Ideation rules from the original blog have been reposted here and pinned to the top; please review them before you post.
So check it out and start posting! I can't wait to engage with you all.
Meghann
We are using Snowflake as a target and are replicating 800+ tables from various sources using 75 tasks.
We have enabled the attrep_status control table and want to leverage it for latency monitoring.
Each Replicate task updates this table. Because of the way Snowflake handles locking, concurrent updates on a table can fail: by default, Snowflake kills transactions when more than 20 transactions are waiting on the same table.
With 75 tasks, it is possible that at a given point in time more than 20 tasks try to update the table simultaneously, and as a result these updates fail.
Because Replicate has a hard limitation that the control tables must be stored in the target DB, I would like Replicate to support hundreds of tasks writing to Snowflake while keeping the information in attrep_status consistent.
One option is to insert a new status row into the table instead of updating in place. Inserts do not contend for the same rows and hence would not have this issue.
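The insert-only idea can be sketched as follows. This is a minimal illustration, not the actual attrep_status schema (the column names here are made up): each task appends a status row instead of updating one in place, and the monitoring side simply keeps the newest row per task, so concurrent writers never touch the same row.

```python
from datetime import datetime

def latest_status_per_task(rows):
    """Return the most recent status row for each (server, task) pair."""
    latest = {}
    for row in rows:
        key = (row["server_name"], row["task_name"])
        if key not in latest or row["status_time"] > latest[key]["status_time"]:
            latest[key] = row
    return latest

# Appended rows: two tasks, with task_a having reported twice.
rows = [
    {"server_name": "srv1", "task_name": "task_a",
     "status_time": datetime(2022, 1, 1, 10, 0), "latency_sec": 12},
    {"server_name": "srv1", "task_name": "task_a",
     "status_time": datetime(2022, 1, 1, 10, 5), "latency_sec": 3},
    {"server_name": "srv1", "task_name": "task_b",
     "status_time": datetime(2022, 1, 1, 10, 2), "latency_sec": 40},
]

current = latest_status_per_task(rows)
print(current[("srv1", "task_a")]["latency_sec"])  # the newest row wins
```

In Snowflake itself, the equivalent monitoring query could use `QUALIFY ROW_NUMBER() OVER (PARTITION BY server_name, task_name ORDER BY status_time DESC) = 1` to pick the latest row per task.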
When Replicate encounters changes that it cannot immediately process, it writes them to sorter files on the Qlik server. Qlik has said these are essentially memory dumps. The problem is that they are huge compared with the same changes stored as LogStream files, so they can very rapidly fill up the storage space on the Qlik server and cause all tasks to be stopped when the critical disk space threshold is reached.
These files should instead be compressed and stored in a more space efficient manner similar to how LogStream files are stored.
Hi Team,
We have a client requirement to charge the users based on the size of the downloaded reports.
Based on the response from the Qlik Sense support team on ticket no. 00035264, Qlik Sense currently does not provide a way to get the size of a downloaded file. This is a feature request for that capability. Can you please prioritize it, as it is a must-have requirement from our client.
Thanks!
Venkatesh
PS: Link to ticket no 00035264 - community.qlik.com/t5/crmsupport/casepage/issue-id/00035264/issue-guid/5003z00002V6FmfAAF/issue-provider/salesforce
@Patrick @qliksupport @qlik @shilpan @DeepakVadithala @Manish @nitesh_s @techsupportqlik @QlikProductUpdates
Columnstore indexes are the standard for storing and querying large data warehousing fact tables.
Microsoft recommends columnstore indexes for large tables in data warehouses because they help improve query performance.
As data stored in SQL Server-based data warehouses is moved to cloud and streaming platforms, it would be great if Qlik Replicate could support columnstore indexes for the SQL Server endpoint.
I want to see many axis values at once, but the width is adjusted automatically, so you have to scroll to see them.
I'd like to be able to adjust the column width as desired.
Hi Team,
I came across a great Application Automation feature, Loop and Reduce, but I found that there is no mechanism to hide apps from specific users (i.e., grant access only to specific users) within the same managed space.
This means I would have to create as many managed spaces as there are values in the loop-and-reduce field. I understand that this process can also be automated, but managing so many managed spaces is not a good idea.
So my request is to please add functionality, if possible, to hide a few apps from specific users within a single managed space. This would help admins like me manage the tenant more efficiently.
Thanks
Hello,
Apps that manage security with section access can only be exported by a tenant or analytics administrator from their personal space.
Apps can't be exported easily, so we can't back up Qlik Sense apps with the CLI or Automation.
I don't understand why. I want to use the automation template to back up applications, but it only works in demo mode; in real life, with section access in the application, it doesn't work.
Regards,
Currently, exporting bookmarks from one Qlik Sense platform to another and/or from one app to another is not supported.
Right now, sheet actions don't get triggered if the sheet is embedded. We would like sheet actions to be triggered so we don't have to rely on extra buttons or third-party extensions (if any are available).
This would help users to easily use qlik sense actions when embedded as sheets in their products.
Authenticate to an Azure DevOps Git repo from within Qlik Compose with Azure credentials.
Currently, authentication to a Git repo in Qlik Compose requires manual entry of a username and password. It would be beneficial to link authentication to Azure Active Directory in Azure DevOps.
This enables integrated authentication and a user experience based on a single set of credentials. The existing username and password combination should remain as an option for public Git repos, such as on GitHub.
This helps organizations who manage work tasks in Azure Dev Ops to integrate authentication with Qlik Compose and the Windows desktop.
Hi Qlik,
My customer has strict e-mail security. It was a challenge to set up the e-mail server connection, and it only works with internal e-mail addresses. The e-mail configuration in the QMC must have security set to "none" (see screenshot); otherwise it won't work with the e-mail server. A difficult story, but it was the only way, and it works perfectly for alerts etc.
I created an Automation with a task chain and want to receive an e-mail when an app has failed. When I set up the connection, I cannot choose security "none", so this option does not work for my customer.
IDEA: would it be possible to make this option available in the E-mail block in Automation as well?
Thank you!
It would be nice if there were also a connector for the Microsoft ERP system Dynamics 365 Business Central in Qlik Sense Cloud. This is the API: https://docs.microsoft.com/en-us/dynamics-nav/api-reference/v2.0/
For Dynamics 365 Business Central Cloud, Azure Active Directory authentication would be necessary. Basic Authentication is currently also possible but will be turned off in 2022 (https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/upgrade/deprecated-features-w1#accesskeys). For Dynamics 365 Business Central on-premises, both Basic and Azure Active Directory authentication are supported, but most customers have only set up Basic Authentication.
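For reference, a connector along these lines would ultimately issue requests like the sketch below. The tenant, environment, and token values are placeholders, and acquiring the Azure AD token itself is out of scope here; the URL shape follows the public Business Central v2.0 API documentation linked above.

```python
def bc_request(tenant_id, environment, access_token):
    """Build the URL and headers for listing companies via the BC v2.0 API."""
    url = (
        "https://api.businesscentral.dynamics.com/v2.0/"
        f"{tenant_id}/{environment}/api/v2.0/companies"
    )
    # Azure AD (OAuth2) bearer token, as Basic Authentication is deprecated.
    headers = {"Authorization": f"Bearer {access_token}"}
    return url, headers

# Placeholder values for illustration only.
url, headers = bc_request("contoso.onmicrosoft.com", "production", "<token>")
print(url)
```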
Postgres is free, yes, but support resources for it are limited. Please add SQL Server Express as an option for the QEM analytics data and Compose software. There are far more SQL Server support resources than Postgres ones.
We have had issues with our Postgres DB during this latest upgrade and have already spent more than 8 hours trying to debug the connection issue. If this were a SQL Express DB, I would have had it resolved within minutes.
Thank you
Lori Halsey
Our SAP HANA team has recommended, as part of our risk assessment, that any connection to HANA be encrypted with SSL.
We would like to request this feature in Qlik Replicate as soon as possible, as SAP HANA is the major source (80%+) of data on our QR servers.
Reference QR case: 00036053: HANA end-point with SSL.
It would be helpful to have the ability to delete community bookmarks. Some bookmarks may become irrelevant, and when a person leaves their job, their bookmarks will keep hanging around forever.
Underlying situation
When performing an initial load ("Reload Target...") with Qlik Replicate towards a Kafka target, an error can occur on the Kafka (target) side for one of the loaded tables. In this situation, several tables are usually loading in parallel (the default parallelism degree is 5). When a target error occurs for one of these tables, Qlik Replicate aborts the whole task, including the tables that are still loading fine. After this abort, the aborted tables are moved back to the "Queued" list. Qlik Replicate then retries the reload several times: the same tables are picked up and reloaded again and again, while the error on the target side keeps happening for one of them. Illustrated:
Reload Target - parallelism degree 5
Trial #1: after a few seconds of loading, the task aborts.
Trial #2: after a few seconds of loading, the task aborts.
Trial #3: same behavior as #2, etc.
-> This behavior leads to (several) duplicates in the Kafka target topics for tables 1-4.
Idea description
Qlik Replicate could implement smarter error handling in this case. In the example above, 4 of the 5 tables have no errors and are loading fine; they would finish if the Kafka problem of table 5 didn't abort the whole task after a few seconds. There are two possible options / solution designs:
1) Qlik Replicate finishes loading the tables that have no problems and then aborts the task. In this case, Qlik Replicate should not pick further tables from the "Queued" list once an error has occurred; it should just wait until the currently loading tables are finished and then abort.
2) Qlik Replicate moves the table with the error to the "Error" list and continues with the other tables. All tables without Kafka target errors can thereby be finished. At the end, the task can finish or abort.
Either solution would prevent duplicates during initial load.
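Solution design 2 can be illustrated with a small simulation. The table names and the failing table are made up, and real Replicate loads tables in parallel, which is omitted here for clarity; the point is only that a failing table is parked in an "Error" list while the remaining queued tables keep loading, so no completed table is ever re-queued and re-sent to Kafka.

```python
def reload_target(queued, load_table):
    """Load each queued table; park failures instead of aborting the task."""
    finished, errored = [], []
    for table in queued:
        try:
            load_table(table)
            finished.append(table)
        except RuntimeError:
            errored.append(table)  # move to "Error" list, keep going
    return finished, errored

def load_table(table):
    # table5 stands in for the table that hits the Kafka-side error.
    if table == "table5":
        raise RuntimeError("target error")

finished, errored = reload_target(
    ["table1", "table2", "table3", "table4", "table5"], load_table
)
print(finished, errored)
```

Because tables 1-4 complete exactly once, no duplicates reach their Kafka topics, which is the behavior the current retry loop cannot guarantee.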
Target audience
Qlik Replicate and Kafka developers, as well as the developers of the target systems that receive the duplicates.
Value proposition
Duplicates lead to a range of negative impacts on the target side, all of which can be prevented by implementing this idea.
Case reference
This issue was described in case 31305 and an ideation was suggested.
There is a current limitation when using Change table option with DB2 LUW as source:
https://help.qlik.com/en-US/replicate/November2021/Content/Replicate/Main/IBM%20DB2%20for%20LUW/limitations_db2.htm#ar_ibm_db2_456845099_1404155
"When the Change table option is enabled in the Store Changes Settings tab, the first timestamp record in the table may be Zero in some cases (i.e. 1970-01-01 00:00:00.000000)."
CDC functionality is thus not working - DDL changes are in some cases ignored by Compose.
Could this be fixed or a workable solution provided?
(Support case: 00037045: DB2 LUW source - DDL changes)
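Until the limitation is addressed, a downstream guard could at least flag the affected records before Compose processes them. This is only a sketch under assumptions: it supposes the change records expose the change-table header timestamp as a datetime value, and the column name used here is illustrative.

```python
from datetime import datetime

# The documented symptom: the first timestamp comes through as epoch zero.
EPOCH_ZERO = datetime(1970, 1, 1)

def has_valid_timestamp(record):
    """Flag change records whose header timestamp came through as epoch zero."""
    return record["header__timestamp"] > EPOCH_ZERO

good = {"header__timestamp": datetime(2021, 11, 3, 12, 0), "op": "UPDATE"}
bad = {"header__timestamp": EPOCH_ZERO, "op": "UPDATE"}

print(has_valid_timestamp(good), has_valid_timestamp(bad))
```

Flagged records could then be quarantined or assigned a substitute timestamp, rather than silently causing DDL changes to be ignored.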
Qlik Sense does NOT support cycle groups today. At best, it supports alternate dimensions.
This is one of the features that is a show stopper for customers to move from QlikView to Qlik Sense.
The strength and value of QlikView-style Cycle Groups is that they are dimensional and global. When you change a Cycle Group, it works like any other field used as a dimension: the value applies to every chart where it is used.
For example, if you have 20 charts spread across multiple sheets and you change a cycle group to a new dimension, then ALL 20 charts reflect that new dimension. The charts stay synced together. The charts are 'Product' charts …cycle... now they are all about 'Store' …cycle… now they all show 'Product Category' as the dimension.
It is a very powerful way of looking at a group of visualizations because you see a collective story across the charts.
Plus, it helps solve the expression problems associated with the way Qlik Sense handles alternate dimensions and drill-down groups today. Without the getCurrentField(groupname) function, we cannot support complex aggr expressions without creating many different charts that cannot use drill-down groups or alternate dimensions.
Because of the lack of cycle group support, to implement a set of 6 charts in Qlik Sense that exists in QlikView means replacing 6 QlikView charts with 1,296 Qlik Sense charts. That doesn't scale well.
Along with implementing Cycle Groups, this means bringing back the getCurrentField(groupname) function, which returns the currently selected dimension for a group, for both Cycle Groups and Drill Down Groups.