We need a way to automatically pass Qlik metadata (mainly app ID, app name, app owner, and Qlik identifier) when apps submit query jobs to BigQuery. This is needed so we can identify query jobs in the BigQuery INFORMATION_SCHEMA views.
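One way this could work is for Qlik to attach the metadata as BigQuery job labels, which then appear in the `labels` column of the `INFORMATION_SCHEMA.JOBS` views. The sketch below shows the label-building half of that idea; the label keys (`qlik_app_id`, etc.) and the helper names are assumptions for illustration, not an existing Qlik feature. It only relies on BigQuery's documented label constraints (lowercase letters, digits, underscores, and hyphens; at most 63 characters).

```python
import re

# Characters that are NOT allowed in a BigQuery label value.
_LABEL_RE = re.compile(r"[^a-z0-9_-]")

def to_label(value: str) -> str:
    """Normalize an arbitrary string into a valid BigQuery label value:
    lowercase, disallowed characters replaced, truncated to 63 chars."""
    return _LABEL_RE.sub("_", value.lower())[:63]

def qlik_job_labels(app_id: str, app_name: str, app_owner: str) -> dict:
    """Hypothetical label dict that Qlik could attach to each query job,
    e.g. via google.cloud.bigquery.QueryJobConfig(labels=...)."""
    return {
        "qlik_app_id": to_label(app_id),
        "qlik_app_name": to_label(app_name),
        "qlik_app_owner": to_label(app_owner),
    }
```

With labels like these in place, a query against `INFORMATION_SCHEMA.JOBS` could filter or group by the Qlik app that submitted each job.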
Our SAP HANA team has recommended, as part of our risk assessment, that all connections to HANA be encrypted with SSL.
We would like to request this feature in Qlik Replicate as soon as possible, as SAP HANA is the major source (over 80%) of data on our QR servers.
Reference QR case: 00036053: HANA end-point with SSL.
Underlying situation
When performing an initial load ("Reload Target...") with Qlik Replicate towards a Kafka target, an error can occur on the Kafka (target) side for one of the loaded tables. In this situation, several tables are usually loading in parallel (the default parallelism degree is 5). When a target error occurs for one of these loading tables, Qlik Replicate aborts the whole task, including the tables that are still loading fine. After this abort, the aborted tables are moved back to the "Queued" list. Qlik Replicate then retries the reload several times: the same tables are picked and reloaded again and again, while the target-side error keeps occurring for one of them. Illustrated:
Reload Target - parallelism degree 5
Trial #1: after a few seconds of loading, the task aborts.
Trial #2: after a few seconds of loading, the task aborts.
Trial #3: same behavior as trial #2, and so on.
-> This behavior leads to (several) duplicates in the Kafka target topics for tables 1-4.
Idea description
Qlik Replicate could implement smarter error handling in this case. In the example above, four of the five tables have no errors and are loading fine; they would finish if the Kafka problem with table 5 did not abort the whole task after a few seconds. There are two possible options / solution designs:
1) Qlik Replicate finishes loading the tables that have no problems and then aborts the task. In this case, Qlik Replicate should not pick further tables from the "Queued" list once an error has occurred; it should simply wait until the currently loading tables are finished, and then abort.
2) Qlik Replicate moves the table with the error to the "Error" list and continues with the other tables. All tables without Kafka target errors can thereby be finished. At the end, the task can finish or abort.
Either solution would prevent duplicates during initial load.
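The scheduling change behind option 2 can be sketched as a minimal simulation. Everything here is hypothetical (the `load_table` callback stands in for an actual table load, and the table names in the usage are made up); it only illustrates the proposed behavior of isolating a failing table instead of aborting the whole batch.

```python
from collections import deque

def reload_with_error_isolation(queued, load_table, parallelism=5):
    """Sketch of option 2: a table whose load fails goes to an error list
    instead of aborting the task, so all other queued tables still finish.
    `load_table(table)` returns True on success, False on a target error."""
    queue = deque(queued)
    loaded, errored = [], []
    while queue:
        # Take up to `parallelism` tables, mirroring the default degree of 5.
        batch = [queue.popleft() for _ in range(min(parallelism, len(queue)))]
        for table in batch:
            if load_table(table):
                loaded.append(table)
            else:
                errored.append(table)  # isolate the failure, keep going
    return loaded, errored
```

For example, if table "t5" keeps failing on the Kafka side, `reload_with_error_isolation(["t1", "t2", "t3", "t4", "t5"], lambda t: t != "t5")` would finish tables t1-t4 exactly once and report only t5 as errored, avoiding the repeated reloads that produce duplicates.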
Target audience
Qlik Replicate and Kafka developers, as well as the developers of the target systems that receive the duplicates.
Value proposition
Duplicates in the target topics lead to negative impacts in the downstream systems that consume them. All of these negative impacts can be prevented by implementing this idea.
Case reference
This issue was described in case 31305, where creating this ideation entry was suggested.
At the moment, the Quotas API on Qlik Sense SaaS returns a single record each for app_mem_size and app_upload_disk_size. The Spaces API doesn't expose this information either.
When a customer has Dedicated and/or Expanded Capacity, these quotas can differ based on which Spaces are allocated to that capacity.
Could the Quota API be enhanced to show the quotas per space / per capacity type or similar?
and/or
Could the Spaces API be enhanced to show whether it's Normal or Dedicated or Expanded and what the quota limit is?
It would also be useful if they supported/showed Forts, although the quotas there would presumably be meaningless, as a Fort is limited only by the customer's infrastructure.
These improvements could then be leveraged by the App Analyzer.
If you apply color expressions, cells can, for example, be colored red for negative values. If the total value is also negative, the developer should be offered the option to color the Totals row red as well, or to keep the standard black as implemented today.
Hi,
a customer asked whether it is possible to disable the search tool in table columns.
This might be useful when a dimension different from the real one (the one available in filter panes and in the data model in general) is used in the visualization.
This way it is possible to avoid confusion, or apparently missing data, caused by different styling of fields.
Currently, exporting bookmarks from one Qlik Sense platform to another, and/or from one app to another, is not supported.
I just had a problem debugging my Qlik Sense app: it hung and I could not stop it. I tried refreshing the page, but it didn't recover. I closed the tab, but then I could not open it again because it said "Reload in Progress".
I found a previous entry in "Ideas" (1525202) to have the ability to kill a hung task but it was closed saying that there is an abort button in the Data Load Editor.
The problem we need to solve is that I can't access the Data Load Editor, because it says "Reload in Progress", so I am locked out of killing the task.
We need the ability somewhere in the hub or QMC to find these hung developer reload tasks and kill them from outside the Data Load Editor.
Thanks, Barnaby.
It would be helpful to have the possibility to delete community bookmarks. Some bookmarks become irrelevant over time, and when a person leaves their job, their bookmarks keep hanging around forever.
We are all aware of the buttons in Qlik Sense with various functionalities, including navigation to the next sheet. It would be very user-friendly if we could navigate to other sheets by selecting a dimension value in a chart, without using buttons.
For instance, I have a pie chart 'Usage by Devices'. Upon selecting a device (Mobile, for example) in the chart, the user should be navigated to a sheet called 'Mobile Analysis'.
This is a request from a customer:
They would like to have owners in NPrinting in the same way as in Qlik Sense, which may help them manage authorization in a better manner.
When a sheet is embedded in an iFrame on a web page, and the sheet contains a KPI or multi-KPI with a link to a sheet, the link does not work. The enhancement/idea is to enable "link to sheet" when embedding KPIs or multi-KPIs in an iFrame on a web page.
When printing to PDF in Qlik Sense server, all of the features, extensions, and values are exported to the file, with no difference in the visualization.
However, when the same app is in the cloud and I download it as a PDF, the visualization is limited: some buttons are blank and some fields are missing.
I looked for this issue in the community and found that there are some limitations when exporting in the cloud, compared to the server app.
I also found that this is an idea in progress to improve export PDF in the cloud: Sneak peak of reporting improvements on Qlik Cloud - Qlik Community - 1922236
It would be great if the functionality matched in both versions of the product, so we could work in the cloud without any limitations, as we do on the server.
For the sake of comparison, see the attached screenshots of the same app downloaded as a PDF in the cloud and on the server.
Longer app names are trimmed in both the tile and list views in the Qlik Cloud hub / Qlik Sense SaaS. This makes things more difficult when a dashboard has different versions, divided by region/country, tagged at the end of the app name. Users can only see the full name by hovering the mouse over the trimmed app name.
In list view, please add the flexibility to adjust the column width, so users can see the full app name in the Qlik Cloud hub. For example, users can already adjust the column width in the Qlik Cloud console under Users -> 'All Users'.
Many countries, including those in continental Europe, South America, and much of Africa and Asia, use a comma as the decimal separator and a dot as the thousands separator.
Qlik has accounted for this: the number format and separators are set in the initial variables of every Qlik Sense document, and have been since the first version of Qlik Sense.
For some reason, the new option to export table formats introduced in February 2022 ignores those number formats and does not apply any number format when it finds a comma as the decimal separator.
I thought this should be reported as a bug, but R&D considers it a limitation. I can accept that, so here is an idea: remove that limitation and allow half the world to keep reading numbers in the format they are used to.
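To make the expected behavior concrete, here is a small sketch of comma-decimal formatting, i.e. what a correct export would produce for these locales. This is a standalone illustration, not Qlik's export code; the function name and defaults are made up for the example.

```python
def format_number(value: float, decimals: int = 2,
                  decimal_sep: str = ",", thousands_sep: str = ".") -> str:
    """Format a number with explicit separators, defaulting to the
    comma-decimal / dot-thousands convention used in continental Europe."""
    # Start from the US-style grouped format, e.g. '1,234,567.89'.
    us = f"{value:,.{decimals}f}"
    # Swap the separators via a placeholder so they don't clobber each other.
    return (us.replace(",", "\x00")
              .replace(".", decimal_sep)
              .replace("\x00", thousands_sep))
```

For instance, `format_number(1234567.891)` yields "1.234.567,89", which is how the value should appear in an exported table for a comma-decimal document.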
We are using Snowflake as a target and are replicating 800+ tables from various sources using 75 tasks.
We have enabled the attrep_status table and want to leverage it for latency monitoring.
Replicate tasks update this table. Because of the nature of Snowflake, concurrent updates on a table fail: by default, Snowflake kills transactions if more than 20 transactions are waiting on a table.
With 75 tasks, there is a possibility that at a given point in time more than 20 tasks try to update the table simultaneously. As a result, these updates fail.
Because Replicate has a hard limitation of storing the control tables in the target DB, I would like Replicate to support hundreds of tasks writing to Snowflake while keeping the information in attrep_status consistent.
One option is to insert a new status row into the table instead of updating an existing one. Inserts do not take locks and hence would not have this issue.
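The insert-only pattern proposed above can be sketched as follows. This is a minimal in-memory model, not Replicate's implementation: the class, column names, and sequence counter are assumptions used to show how append-only writes plus a "latest row per task" read keep the status consistent without row locks.

```python
import itertools

class AppendOnlyStatus:
    """Sketch of an insert-only attrep_status: every task appends a new
    status row (no locking UPDATEs); readers take the latest row per task."""

    def __init__(self):
        self.rows = []                    # stands in for the status table
        self._seq = itertools.count()     # monotonic insert order

    def report(self, task: str, latency_seconds: float) -> None:
        # INSERT instead of UPDATE: appends never contend for a row lock,
        # so hundreds of tasks can report concurrently.
        self.rows.append({
            "task": task,
            "latency_seconds": latency_seconds,
            "seq": next(self._seq),
        })

    def latest(self, task: str):
        """Latest status per task; in SQL this would be a window query,
        e.g. ROW_NUMBER() OVER (PARTITION BY task ORDER BY seq DESC) = 1."""
        rows = [r for r in self.rows if r["task"] == task]
        return max(rows, key=lambda r: r["seq"]) if rows else None
```

A monitoring query then reads only the newest row per task, and older rows can be pruned periodically to keep the table small.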
The Google Analytics Web Connector is currently only compatible with Google Universal Analytics (UA). GA4 is due to replace the current version of Google Analytics (UA) in June 2023, and more and more customers are now migrating to GA4. GA4 comes with a new API (the Google Analytics Data API). It would be great if the web connector became compatible with this new API, or if a new web connector were created.
Hi Team,
We have a client requirement to charge the users based on the size of the downloaded reports.
Based on the response from the Qlik Sense support team on ticket no. 00035264, Qlik Sense currently does not provide a way to get the size of a downloaded file. This is a feature request for that capability. Can you please prioritize it, as it is a must-have requirement from our client?
Thanks!
Venkatesh
PS: Link to ticket no 00035264 - community.qlik.com/t5/crmsupport/casepage/issue-id/00035264/issue-guid/5003z00002V6FmfAAF/issue-provider/salesforce
Currently, when we replicate an endpoint, only the primary keys are created in the target endpoint. We would like foreign keys to also be replicated from the source to the target. Please note that all of our use cases are full load.