Digital Support

STT - Getting to Your Data with Qlik Connectors

This session will cover:
• What connectors are available with Qlik Sense Enterprise on Windows
• Advice for using connectors
• Configuration best practices
• Troubleshooting tips


00:00 - Intro

00:33 - Agenda

01:19 - ODBC Connector Package

01:45 - REST Connector

02:41 - Salesforce Connector

03:19 - Why is it called ODBC Connector 'Package'?

04:05 - 'How connectors work' diagram

04:59 - Creating a new connection

05:56 - Single Sign-On

07:10 - Troubleshooting the connection

09:18 - Where to find connector logs

11:26 - Where to find the driver logs

14:42 - Analyzing log files

16:08 - How to improve connector performance

18:08 - Q&A

22:02 - Qlik Connector documentation


Connectors Documentation

Connectors Forum on Qlik Community

Support Portal

For more information:

Article (with Q&A) - Getting to Your Data with Qlik Connectors


Video transcript:

Today's presentation is Getting to Your Data with Qlik connectors.

I'd like to introduce our presenter for today, our own data connector expert, Alexey Dubovenko.

Alexey –

Hi everyone. My name is Alexey. I am an R&D manager at Qlik. My primary focus is Enterprise connectivity within Qlik, across different products. I have about seven or eight years of experience with Qlik.

Today we're going to be talking about the Enterprise connectivity options, and specifically about the data connectors my team is focused on; we call them Enterprise connectors. Then we're going to talk a little bit about how to configure the data connections: what authentication methods and protocols the data connectors support to connect to data sources. Then we're going to talk about how to find the connector log files, and what information we can get from the log files if something goes wrong. And finally, I'll give you some troubleshooting tips in case the connection is not working as expected, especially if the connection is not working at all, including how to test the connection against the data source.

So, Alexey, what are the Enterprise connectors that we're talking about here? 

We are primarily talking about four big things. The first and the biggest one is the ODBC Connector Package. The ODBC connector allows users to connect to different ODBC data sources, including but not limited to Microsoft SQL Server, Oracle, Apache Hive, Impala, Spark, Drill, Phoenix, Presto, and many others.

The next big one is the REST connector, which allows users to connect to any web service that exposes REST APIs. This connector can consume XML, JSON, and CSV data formats, and it lets you create GET and POST requests to remote REST API servers. It's also very flexible, because it allows a lot of scripting, so you can build very complicated algorithms for loading REST API data. The REST connector can be a very good option for connecting to a remote server even when there is no dedicated connector for that data source.
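To make the idea concrete, here is a minimal sketch (in Python, not Qlik script) of the kind of work a REST-style loader does: issue GET requests page by page and flatten the JSON records into rows. The endpoint paths and the "records"/"next" field names are assumptions for illustration, not the Qlik REST connector's actual behavior.

```python
import json

def load_paged(fetch, url):
    """Follow 'next' links and collect all 'records' entries into one row list."""
    rows = []
    while url:
        page = json.loads(fetch(url))   # fetch() returns the raw JSON body for a GET
        rows.extend(page["records"])
        url = page.get("next")          # None/null when there are no more pages
    return rows

# Stub fetcher standing in for a real HTTP GET (e.g. urllib.request.urlopen).
PAGES = {
    "/api/items?page=1": '{"records": [{"id": 1}, {"id": 2}], "next": "/api/items?page=2"}',
    "/api/items?page=2": '{"records": [{"id": 3}], "next": null}',
}

rows = load_paged(PAGES.__getitem__, "/api/items?page=1")
print(rows)  # → [{'id': 1}, {'id': 2}, {'id': 3}]
```

In the real connector the same pagination logic is expressed through the connection's pagination settings and the load script rather than hand-written code.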

Quick question, are those first two connectors: are those free with a regular Qlik Sense Enterprise license?

Yes, all the connectors we're going to be talking about are bundled with the general Qlik Sense Enterprise license.

The third one is the Salesforce connector, which allows you to connect to the CRM. It's very high-performance and flexible, and supports the two major APIs Salesforce provides: the SOAP API and the BULK API, including PK chunking (primary key chunking) technology, which significantly speeds up data acquisition from Salesforce objects that support primary key chunking.

And the fourth one is the Oracle Hyperion Essbase connector.

Okay, can you explain why most of these are referred to as connectors, but we have the ODBC connector package?  What does that mean?

So, if you look at the standard Add Data wizard in Qlik Sense Enterprise, you will see different connectivity options. The Salesforce connectivity option, or the REST connectivity option, each represents one connector.

But you can also see a number of options like Redshift, Phoenix, Impala, BigQuery, Presto, Postgres, Microsoft SQL, and so on, all represented by one connector. We call this connector the "ODBC Connector Package." So technically this is one connector that exposes 16+ different ODBC data sources with different connectivity options, and that's why we call it the ODBC Connector Package.

Alexey, can you explain first how these all work?

If we look at the diagram, we have the connector itself, which exposes the UI in a browser or in QlikView, depending on the platform…

It's like the script load editor?

Yes, and we have an engine, and we have a script load editor. And inside the connector, there are a number of ODBC drivers. When the time comes to load the data, the engine consumes the Qlik Sense script and transfers it to the connector. The connector separates out the SELECT statement and sends it to one of the drivers, depending on which connection was configured in the script. The driver addresses the database over the network, gets the data, and sends it back to the connector, and the connector sends the data back to the engine.
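The hand-off described above can be sketched in a few lines: the connector receives the whole load script and peels off the SQL SELECT part to forward to the ODBC driver. This is an illustrative Python toy, not Qlik's actual code; the script shape is the common `LOAD ...; SQL SELECT ...;` pattern.

```python
# Toy model of the connector's split: Qlik-side LOAD stays with the engine,
# the SQL SELECT goes to the ODBC driver.

def split_script(script):
    """Return (qlik_part, sql_part) for a simple 'LOAD ...; SQL SELECT ...;' script."""
    upper = script.upper()
    idx = upper.find("SQL SELECT")
    if idx == -1:
        return script.strip(), None       # no SQL statement to forward
    qlik_part = script[:idx].strip()
    sql_part = script[idx + len("SQL "):].strip().rstrip(";")
    return qlik_part, sql_part

script = """LOAD Department, GroupName;
SQL SELECT Department, GroupName FROM HumanResources.Department;"""

qlik_part, select_stmt = split_script(script)
print(select_stmt)  # → SELECT Department, GroupName FROM HumanResources.Department
```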

Do you think we can see how this all looks in the data editor?

Yeah, sure. If you go to the Data Load Editor and try to create a new connection, we can go with something primitive, like Amazon Redshift, let's say. This is the Create new connection window, where users can configure a hostname, port, database, username, and password (that's the most basic authentication).

Is everything for these connections username and password?  Are there other authentication options supported?

Username and password is the very simplest method of getting to a data source and authenticating. However, if we look at, let's say, Microsoft SQL Server, then we have different authentication options. The authentication might support NTLM, or, if not NTLM, just username and password (the built-in users of Microsoft SQL Server), as well as single sign-on based on Kerberos. Single sign-on means that once a user is authenticated in Qlik Sense against Active Directory, the identity of this user is transferred across all Qlik sessions. So the user does not even have to enter credentials when configuring the connection: you sign in once in Qlik Sense, and your identity is carried over for data connectivity purposes.

Alexey, what are some best practices when managing all those data connections to avoid having any issues connecting to data?

Yeah, the connection should just work, right? So the expected behavior is: you configure the connection, you press the test connection button, and it works; you open a select dialog and pick up some data, and that works; you create a script, do a reload, and that works and brings the data inside the Qlik engine. That's the expected behavior; however, different things might happen. Usually the problem does not depend on the connector, but on the infrastructure that surrounds the connector. There is a data source, which might be unavailable, or network architecture which might have some issues. There are a few tips on how to troubleshoot the connection in case something goes wrong.

So first, if I try to test the connection and it does not work, I check the server's availability. Usually an RDP session is required to the machine where the Qlik Sense server is installed. Once I open the RDP session, I can open a command window. Let's try to test the connection to Microsoft SQL Server. For this purpose, I usually use the telnet command, just because telnet is most likely installed with Windows Server; and if not, it can easily be added through the Control Panel.

So, I run telnet with the Microsoft SQL Server's domain name, followed by the correct port: 1433. And if I get a blank screen, that means the connection worked.
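When telnet is not available, the same reachability check can be done with a small script. This sketch uses Python's standard socket module; the hostname and port are placeholders for your own SQL Server.

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port can be established,
    i.e. the network path is open and something is listening on that port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:           # refused, timed out, DNS failure, etc.
        return False

# Example (placeholder host): port_open("rd-ado-demo", 1433)
# True  -> the path is open, like telnet's blank screen
# False -> a firewall, DNS problem, or wrong port is in the way
```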

I know just from working in support, that most connection issues tend to be network related.  Can you show us what it looks like when it doesn't work as expected?

Let me show you what happens if there is something wrong. Maybe there is a firewall in between where the ports are not open, so something is blocking this machine from connecting to the remote data source. I will just put a typo in the port and use 1434.

Telnet reports that, connecting to "RD-ADO-demo," it could not open a connection to the host. So, the connection failed. That means there is no direct connectivity from this machine to this server and port, and that's a problem which needs to be addressed separately. Additionally, I can show you: if we specify the right port but a server name that does not exist, it's the same story. Connection failed. That's my technique for quickly troubleshooting connectivity.

Okay what if there's a scheduled reload that fails or there are some intermittent errors that require some more investigation?

Yes. Let's configure the connection in the wrong way and see what we can take from the log file. I will try to create a connection to Microsoft SQL Server. I'm going to put the correct server name and the correct port. I will put the correct database, and I will even put the correct username; but a garbage password. If I try to test the connection, or use the connection in a scheduled or manual reload, I will get an error. In the case of a test connection, I can see this error on the screen. In the case of a scheduled reload, the log files are the only way to dig into the problem and see where it is.

Yeah, where do we find those log files?

If I open the C: drive, then the ProgramData folder, then Qlik, Custom Data, ODBC Connector Package, and logs; I will find the log file of the ODBC Connector Package, prefixed with the name of the machine where it was taken.

By the way, the Custom Data folder under C:\ProgramData\Qlik contains subfolders for all the connectors installed with Qlik Sense Enterprise Server, meaning that the other connectors also have a log subfolder where their connector log is produced. If we open the log file and scroll down to the bottom, we can see that there was an error at this timestamp, and the error was "login failed for user Alexey." The log lines indicate the specific ODBC driver; in our case, Microsoft SQL Server (MSSQL). Each line also has a level: it can be INFO, WARNING, DEBUG, or ERROR.

Okay, but ultimately this is where we can find more information about what's going wrong. And if it comes to that, Support will probably ask for these kinds of logs if any support cases get created.

Of course. As R&D, we always ask Support to provide us at least two sets of logs: the connector log, which we're looking at right now, and the driver log.

Yeah, where do we find the driver logs?

So, by default, the driver logs are disabled. To enable them, we need to open the Registry Editor and locate, under "local machine," "software," "Qlik," the subfolders for every ODBC driver that comes bundled in the ODBC Connector Package. In our case, we are interested in the Microsoft SQL Server ODBC driver, so we locate it and open its "driver" subfolder. All these folders for the different ODBC data sources have exactly the same structure. We open the driver folder, and here is the typical set of registry settings.

There are a log path and a log level. Both are empty by default. The log path is basically a path on the hard drive where the log file is going to be produced by the driver. The log level is an integer value from 0 to 6, where 0 is no logs and 6 is the maximum amount of logging. Let's use the most aggressive mode and say that we want all the information, including trace information.

For the log path, I prefer to use exactly the same folder where the connector produces its own logs. Practically, it's very convenient to have all the logs in the same folder. I paste it into this registry value and save.

The next time I try to use the connection and hit the same problem, I will see a second file, called "qliksqlserverodbcdriver." If I open this file, there is a lot of information, some trace data; you can see something like a network dump. That helps us troubleshoot tricky situations.

Yeah, it's nice knowing where to find these logs and how to set the log verbosity level. That's great.

I would like to warn you about something, though. Let's configure a connection with the right credentials and do a very simple reload.

So now we’re entering the correct password?

Yes, that's the correct password. And now it works. I create the connection. Let's open the select dialog and pick some database; let's say "adventure works 2019" should work well. Let's take the "human resources" owner and load two tables: "departments" and "employees."

This is just a demo SQL database you set up?

Yes. Let's insert the script and load the data: 800 records from one table, almost 300 records from the other. If we look at the log file, that reload produced 19 MB of data.

What would you recommend for a production environment? What should that be set at?

I would recommend setting it to zero, because normally in a production environment you don't want driver logs at all. The driver logs are only for troubleshooting. I would recommend using log level 6, reproducing the problem, taking the logs, and then switching it off.

Now we've got all those log files, and it's good to know where to find them, but they're kind of difficult to read. Do you have any recommendations on how to analyze them?

Happy to tell you a couple of tricks. The connector log file is somewhat human-readable. It is a CSV file, but instead of commas we use a tab character to separate the columns, to make the file a bit more readable. And it can be read by Qlik Sense. The simplest way is to drag and drop the connector log file onto Qlik Sense, go through the data wizard, load the data, and go directly to the edit sheet. As you can see, an application was immediately built from this log file, where we can filter by data source, filter by errors, and see the error message, application ID, and session ID. In general, we can build any application we would like over this log data. Of course, that's a very simple example of dragging and dropping a single log file; but it's possible to build an application which takes the logs from some location, loads them (maybe even incrementally), and analyzes and shows the number of errors and the different issues that happen with the connector.
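Outside Qlik Sense, the same tab-separated log can be filtered with a few lines of Python. This is a sketch under an assumed column layout (Timestamp, Level, Source, Message); the real connector log has its own schema, so adjust the field names to what you actually see in the file's header.

```python
import csv
import io

# Assumed, simplified log layout for illustration (not the exact Qlik schema).
SAMPLE_LOG = (
    "Timestamp\tLevel\tSource\tMessage\n"
    "2021-05-10 10:00:01\tINFO\tMSSQL\tConnection opened\n"
    "2021-05-10 10:00:02\tERROR\tMSSQL\tLogin failed for user 'alexey'\n"
    "2021-05-10 10:00:03\tINFO\tMSSQL\tConnection closed\n"
)

def errors(log_text):
    """Return the Message field of every ERROR-level row in a tab-separated log."""
    reader = csv.DictReader(io.StringIO(log_text), delimiter="\t")
    return [row["Message"] for row in reader if row["Level"] == "ERROR"]

print(errors(SAMPLE_LOG))  # → ["Login failed for user 'alexey'"]
```

For a real file you would pass `open(path, newline="")` to `csv.DictReader` instead of the in-memory sample.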

Nice that there's software available to help you analyze data like this.


Do you have any tips for improving data connectivity performance overall?

Yes, we have a few tips. Again, in normal situations the performance should be enough. However, sometimes when it isn't, there is one very common trick to increase the performance of an existing ODBC connection.

You will need to locate the folder where the connector binaries live. That's usually C:\Program Files\Common Files\Qlik\Custom Data\ODBC Connector Package. There you'll find the connector .exe and its .exe.config file, which is an XML file.

If we open the config file and scroll down, we'll find the "appSettings" section. There are several options. Each starts with a driver abbreviation, like "mysql," "oracle," or "postgres," followed by "reading-strategy;" and by default the value is set to "connector."

In some cases, that performance is not enough. You can modify this config file: instead of "connector" as the reading strategy, specify "engine." This allows the engine, at reload time, to deal with the ODBC driver directly without involving the connector. The connector is still used to create the connection, manage the select dialog, bring up the list of tables and fields, and create the script in the script editor. However, when you press reload, or when a scheduled reload happens, there is no connector between the engine and the driver, which in some cases might increase performance. Be warned, though, that the metadata the engine gets from the ODBC driver might be slightly different from the metadata it gets from the connector; everything will still work in Qlik Sense, but just don't be surprised.
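As a sketch of the change described above, here is how such an appSettings edit could be scripted. The fragment below is a minimal stand-in modeled on the description: the key name "mssql-reading-strategy" and the file layout are assumptions, not the verbatim contents of the real connector config, so always check the actual file (and back it up) before editing.

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for the connector's .exe.config appSettings section.
CONFIG = """<configuration>
  <appSettings>
    <add key="mssql-reading-strategy" value="connector" />
  </appSettings>
</configuration>"""

def set_reading_strategy(xml_text, key, strategy):
    """Flip the reading-strategy value for one driver key, e.g. 'connector' -> 'engine'."""
    root = ET.fromstring(xml_text)
    for add in root.iter("add"):
        if add.get("key") == key:
            add.set("value", strategy)
    return ET.tostring(root, encoding="unicode")

updated = set_reading_strategy(CONFIG, "mssql-reading-strategy", "engine")
print('value="engine"' in updated)  # → True
```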

Okay, it's time for Q and A. Please submit your questions through the Q and A panel on the left side of your On24 console. Alexey, which question would you like to address first?

So, let's start from the top. The first question is: is there a specific version that has fewer bugs with data connectors that you recommend?

Basically, every new version we release contains bug fixes as issues appear, and we always keep an eye on quality; each next release includes as many bug fixes as we can fit. So technically, every new version should have fewer bugs than the previous one. I would strongly recommend always using the newest version of a connector available.

Okay, next question: Are these connectors the same for QlikView as they are for Qlik Sense?

Yeah, as I said earlier in the webinar, that's how we build them: technically, they are the same. They just expose a different UI in QlikView and in Qlik Sense, but technically the connectors are the same. And we try to update and release newer versions for QlikView as often as we can.


Is there a connector available for Hadoop?

So, not for HDFS specifically, but we support a number of services from the Hadoop ecosystem, including Apache Hive, Cloudera Impala, Spark, Drill, and Phoenix. So you do have the possibility to extract data from the Hadoop ecosystem.

Okay, I see there's a question there about Qlik Sense cloud.

Yes. Are these connectors the same in a Qlik Sense cloud? 

Yes, they are the same, and again our goal is to support the same connectivity on all the platforms: Qlik Sense Cloud, Qlik Sense Enterprise, Kubernetes, and QlikView. However, sometimes the cloud dictates some limitations on a connector's functionality, mostly related to security; but in general they are the same, or at least they support the same set of data sources. In the case of the ODBC connector, it's the same set of data sources for sure.

The next one is: what types of connectors require a separate license?

So nowadays, the Qlik Web Connectors package (for non-subscription users) and the SAP connectors require a separate license.

And the related question: Is the SAP connector part of the ODBC connector package?

No, it is not part of the ODBC Connector Package. The ODBC Connector Package is bundled with Qlik Sense Enterprise and available on all other platforms with the Qlik Sense product. The set of SAP connectors requires separate licensing; that's a separate connector.

And again, very related to that: Is it possible to download the connector packages separately?

The answer is: yes, for QlikView. Because in QlikView we do not bundle the connectors with the product, whatever connector you would like to use with QlikView, you need to go to the customer support portal, to the download page, and download the connectors. I would always recommend the latest available version. You can download it for QlikView as a separate MSI package and install it separately.


The next question is about Amazon Redshift connectivity: is Amazon Redshift connector available?

Yes, it is absolutely available on all our platforms. That includes, again, Qlik Cloud, Qlik Sense Enterprise on Kubernetes, Enterprise on Windows, and QlikView.

The next one is: where can we find more documentation or instructions on setting up connections?

Yeah, so there are a few places to get help with that. The first is the documentation site, which has a special section for the Qlik connectors. There you can find information about the connection dialogs, the different options they offer, and the authentication methods supported for all the connectors; not only ODBC, REST, and Essbase, not only the Enterprise connectors, but also the SAP connectors and the Web Connectors package.

The second resource I would like to point your attention to is the Qlik Community website, which is mostly a forum where you can first look through the topics that have already been discussed; there are millions of them.

And second, you can always submit your question there and get help with it.

That's great.

So, the next question is: when I try to load custom fields in Jira using the REST connector, they show up as a label, not the field name. Any help?

It's hard to give a straight answer right now. But know that you can reach out to the support team. Create a support ticket, and Support will send it to R&D; our engineers will take a closer look at what's going on in your specific case.


The next question: is Qlik Sense better at connecting to data than Tableau? Are there data connectors with Qlik that aren't available with Tableau?

Well, I can only answer from the cloud perspective. It looks like the Tableau Online service has fewer connectivity options than the Qlik Cloud services; specifically, I'm talking about SAP connectivity. Qlik Sense Cloud supports SAP.

The next one is: what's the best connector for adding Google Sheets?

That is part of the Web Connectors package, as far as I know. If you go to the help portal, open the Qlik Web Connectors section, and open the chapter listing the data sources included in Qlik Web Connectors, you will see Google Drive and Spreadsheets.

That's great. Okay, let's see we have time for one last question.

The last one is: where can I find resources for setting up a connection to SharePoint?

And again, SharePoint is part of the Web Connectors. Here is the help page on that. We have a separate connector called Office 365 SharePoint. If you follow this documentation link, you can find all the information you need; it's part of the Web Connectors documentation.

Great. Well, thank you very much Alexey.

Thank you Troy and thank you everyone for joining our webinar today.

Okay great. Thank you everyone. We hope you enjoyed this session.  Thank you to Alexey for presenting. We appreciate getting experts like Alexey to share with us.  Here's our legal disclaimer, and thank you once again.  Have a great rest of your day.
