QlikSenseUpdates
Employee

We are pleased to announce the release of Qlik Sense June 2017!

With releases five times per year, it can be difficult to stay up to date on what’s new. The document attached to this post will help: it looks back at some of the key features released over the past 12 months and gives a detailed outline of the most current release.

But first, check out this overview video:

Some of the most exciting enhancements in this release include:

  • New Visualizations – New chart types including the box plot, distribution plot, and histogram.
  • Advanced Analytics Integration – The ability to call out to third-party engines (such as R and Python) during analysis.
  • Visual Data Prep Enhancements – A wide array of improvements to the visual data preparation capabilities of Qlik Sense, including visual data profiling, data binning, visual table concatenation, data quality transformations, filtering, and the inclusion of scripted data sets in visual data preparation.
  • On Demand App Generation – User-generated on-demand analysis apps drawn from Big Data.

WHAT YOU SHOULD DO NEXT:

  • Customers can visit the Qlik Customer Download Site HERE
  • If you are new to Qlik, you can download the Qlik Sense June 2017 desktop version HERE.
  • Stay informed and learn more by joining us on an upcoming webinar or local event.
43 Comments
Anonymous
Not applicable

Thank you. And what about the Python connector?

ToniKautto
Employee

The qlik-oss/server-side-extension repository on GitHub has the general overview of Server Side Extension and related documentation. At the bottom of the page you will find some examples, including Python references.
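
For anyone wondering what the call site looks like once a plugin is up: after an analytic connection has been configured in the QMC, the plugin's functions can be used directly in chart expressions and load scripts. A minimal sketch, assuming a connection named SSEPython and a plugin that exposes the standard ScriptEvalStr script function (as the Python examples in that repository do):

// Hypothetical chart expression: evaluate a trivial Python snippet via SSE
SSEPython.ScriptEvalStr('return "hello from Python"')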

luizcdepaula
Creator III

Hi Harish,

You can only upgrade directly to Qlik Sense June 2017 if the version you have is 3.1 SR2 or higher. If not, follow the instructions below.

Upgrading from any version of Qlik Sense earlier than 3.1 SR2 to Qlik Sense June 2017 cannot be done using the setup program. To upgrade from earlier versions of Qlik Sense with a synchronized persistence model to Qlik Sense June 2017, see Upgrading to Qlik Sense June 2017 from Qlik Sense versions earlier than 3.1 SR2.

I hope it helps.

Cheers,

LD

sebasdpereira
Partner - Contributor III

Hi guys. Warning!

I have upgraded to June 2017, but now I must downgrade!

There are two bugs:

1- Look at this simple graph. I have more than one value for the same dimension.

2- When you set persistent colors, if you set alternative dimensions this only works with one of the alternatives.

[Screenshot: a chart showing more than one value for the same dimension]

robert99
Specialist III

Hi Sebastian

2- When you set persistent colors, if you set alternative dimensions this only works with one of the alternatives.

This has always been a slight issue, but it's worse now. Before this release, if I selected 'Colors Auto' it defaulted in some charts to the selected alternative dimension. Now it doesn't. In other words, coloring by dimension doesn't work with alternatives anymore. So the great new June 2017 feature (coloring by master dimension) is a bit limited if alternatives are used.

The obvious way for Qlik to get around this problem (and it has existed since alternatives were first introduced) is to have two selection options along with all of the dimensions. Either

  • Selected 1st Data alternative dimension
  • Selected 2nd Data alternative dimension.

This would be fine for every chart except the tree map when three or more dimensions are used, and I don't use alternatives for the tree map anyway. For tree maps Qlik could link colors to the alternative as with tables, but as a temporary fix the above should be easy to introduce.

TBH I almost always color by the selected dimension anyway, so I would always select one or the other of these two options.

As a workaround, 'By expression' can be used, but hopefully Qlik will sort this soon:

if (wildmatch([Product Type], 'Rhythm*'), '#282828',
if (wildmatch([Product Type], 'Herbal*'), '#f8981d',
if (wildmatch([Product Type], 'coco i*'), '#545352',
if (wildmatch([Product Type], 'Chocolate*'), '#cb7c18',
if (wildmatch([Product Type], 'vitamin*'), '#ffcf02',
if (wildmatch([Product Type], 'coco y*'), '#7b7a78',
if (wildmatch([Product Type], 'organic*'), '#e3b902',
if (wildmatch([VP Sales], 'April*'), '#282828',
if (wildmatch([VP Sales], 'Dan*'), '#f8981d',
if (wildmatch([VP Sales], 'Rand*'), '#545352',
black()))))))))))
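
A more compact variant of the same workaround uses pick() with wildmatch(); the patterns and colors below are taken from the Product Type part of the expression above, and the VP Sales part could be handled the same way:

// wildmatch() returns the index of the first matching pattern (0 if none),
// so +1 makes pick() fall back to black() when nothing matches
pick(wildmatch([Product Type],
    'Rhythm*', 'Herbal*', 'coco i*', 'Chocolate*', 'vitamin*', 'coco y*', 'organic*') + 1,
    black(), '#282828', '#f8981d', '#545352', '#cb7c18', '#ffcf02', '#7b7a78', '#e3b902')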

robert99
Specialist III

1- Look at this simple graph. I have more than one value for the same dimension.

Are you using a time stamp formatted as a date and a stacked chart?
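
If that is the case, the usual fix is to truncate the underlying value rather than just its display format, so that all timestamps within a day collapse to a single dimension value. A minimal sketch for the load script (the field name is a placeholder):

// Floor() strips the time part; Date() applies the date format
Date(Floor(OrderTimestamp)) as OrderDate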

sirpod90
Contributor III

The On Demand App Generation feature is, in my eyes, just a better form of document chaining and/or the script builder.

It's a nice feature that you are able to hand over multiple parameters to other apps (I hope there are no limits anymore, unlike document chaining via URL before, which was capped at 2000 characters), but the key point of On Demand App Generation should be not having to load the data again from the sources (DB/QVD).

But if I understand it right, the only way Qlik has presented is to get the list of field values from one app, pass them to the other app, and then start loading QVDs with those filters.

With bigger data, this concept doesn't make much sense to me.

For example: my original app has around 20 GB of data and takes about 1.5 hours to (re-)load the data from the source files. If I now use the "odso_" prefix to bind the user-selected fields into my template app, the template app also takes about an hour to load the data from the source.

Why doesn't Qlik take the data directly from RAM?

The only workaround I came up with is to use a binary load and shrink the data afterwards.

For example:

// Load all data from the OnDemandMaster app (Binary must be the first statement)
Binary [LIB://QlikSenseApps/myBigApp_ODG_Master.qvf];

SET ThousandSep='.';
SET DecimalSep=',';
SET .......

// Get the "ID" filter from the OnDemandMaster app via the ODAG binding
TMP_OdagBindings_ID:
LOAD * INLINE [
ID
$(odso_ID){"quote": "", "delimiter": ""}
];

// Drop all rows without the selected or associated IDs:
// Right Keep reduces TABLE_WITH_ID to the rows matching the loaded ID list
[TMP]:
Right Keep (TABLE_WITH_ID) LOAD * RESIDENT TMP_OdagBindings_ID;

// Clean up both temporary tables
Drop Tables TMP, TMP_OdagBindings_ID;

This works a lot faster and takes only about 5 minutes in my example.

But still, why can't Qlik just pass the currently selected and available data from one app to the other in RAM?

I would really like to open a new discussion about ODAG and how to handle bigger data in RAM in Qlik.

BTW: in the big Qlik Sense June presentation they show off the new On Demand App Generation feature, but what they are really showing is just a button that opens another app, because nothing is actually generated during the presentation.

But I like the direction you are heading, and I hope you will go more into technical aspects in the future instead of announcing marketing taglines.

And one more question: I cannot find the Advanced properties panel. Is it located in the QMC or the hub? Where is it exactly?

Ian_Crosland
Employee

The key function of the on-demand approach is to load a slice of data from source, rather than just chaining between in-memory applications, in order to cater for Big Data scenarios. It also allows our customers to retain the majority of data at source but still have the associative experience.

An example customer deployment might have a requirement to do basket analysis on billions of transactions stored in an in-memory MPP database, with a further requirement of allowing a set of users access to all of the data without overtly "replicating" it in another layer, for example QVD files.

We can satisfy this requirement by deploying two applications. The first is an aggregated view of the underlying source (for example, your 20 GB app could be reduced by using GROUP BY on a number of dimensions to cut the size and reload time); this app would only go down to a certain level of granularity.

A detail app could then be constructed (potentially with the same data model) that allows the customer to "drill to details", accessing the billions of transaction rows with a series of filters in the WHERE clause, potentially building an app with under 1M rows that reloads in under 60 seconds (most of our customer deployments follow this model). Users can be "forced" to select a combination of dimensions, e.g. no more than 10 products and two quarters of data in the aggregated selection app, to ensure the detail app returns in good time. The user can generate as many detail slices as they wish, and these can be configured to auto-delete on a time basis.
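
For what it's worth, a minimal sketch of what such a detail (template) app's load could look like, using the odso_ binding prefix mentioned earlier in this thread; the connection, table, and field names here are assumptions:

// Hypothetical detail load: fetch only the rows matching the user's selections.
// $(odso_ProductID) expands to the ProductID values selected in the selection app.
LIB CONNECT TO 'MyDatabase';

Transactions:
SQL SELECT *
FROM Transactions
WHERE ProductID IN ( $(odso_ProductID) );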

sirpod90
Contributor III

I understand your point of view, but I think most companies outside the marketing or sales field have a different data structure than you expect.

In your examples you always have groups of details by areas, products, countries, etc. In more technical fields we also have some groups, but if the user wants to look into a detailed problem, he also wants to look back at the rest of the data associated with that problem.

For example: we have one specific part that has a failure. Now the engineer wants to find the reason for it. He can create an on-demand app by searching for the type, group, or production date of the part and start analyzing. If he then finds out we don't have any problems in our production, but the supplier had a problem with the batch, the user has to go back and create a new on-demand app for the vendor batch and the product details he used before.

The problem I am trying to describe is that we have to provide all detailed information for the engineers to find everything associated with the actual problem, not a group of information where the problem could be!

On the other hand, we could create 1000 types of groups by any kind of aggregation and would still need all the details behind them. That leads to the problem that we always have two sets of QVDs, one for the groups (on-demand master app) and one for the detail view (template app), which makes the data redundant and means a lot of overhead and administration for no reward. Alternatively, we could have just one or more QVDs with all the information; but then it takes quite a while to find all the group elements and distinct values, and the resources for that kind of calculation nearly kill our server(s).

The best approach would be to load everything into RAM (without any transformation, aggregation, or analysis), provide the user with the data he needs directly from RAM, and then start analyzing.

The current concept always loads the data twice, or prepares the data for both an aggregated and a detailed view. But why, when it is already quickly accessible in RAM?
