I am trying to export data from a QlikView chart/table in the Access Point or QlikView Desktop and I am receiving the message below:
'Microsoft Excel cannot open or save any more documents because there is not enough memory or space available'
When looking at memory and disk space, everything looks fine, so I am not sure why this is happening. I am trying to do this on Windows Server 2012 R2 with Excel 2013.
Hi Qlikers,
Sometimes we have a requirement to calculate the business working hours between two dates. There are many solutions available on the community, but I thought I would create this document so that everyone can make use of it.
Consider the case below.
Let's say a ticket is logged into the system on a certain day (call it the Start Date) and the same ticket gets resolved on another day (call it the End Date). We may want to calculate the working hours between these two dates to measure the efficiency of ticket closure.
Below are the considerations:
1) Count only standard working hours (9 AM - 6 PM)
2) Exclude Saturdays and Sundays
3) Exclude Holidays
Below is the Setup in QlikView
You can change the date format of the variables below to suit your convenience, e.g. 'DD/MM/YYYY hh:mm:ss' to 'DD-MMM-YYYY hh:mm:ss':
SET TimeFormat='hh:mm:ss';
SET DateFormat='DD/MM/YYYY';
SET TimestampFormat='DD/MM/YYYY hh:mm:ss';
First, convert your timestamp to 24-hour format using one of the functions below:
a) Timestamp#() (if your timestamp values are text)
b) Timestamp() (if your timestamp values are already in a proper timestamp format)
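For instance, the conversion could look like this (a minimal sketch; the table name, field names and the input format 'DD/MM/YYYY hh:mm:ss tt' are assumptions, so adjust them to your data):

```qlikview
// Interpret a textual AM/PM timestamp, then render it in 24-hour format
Tickets:
LOAD TicketID,
     Timestamp(Timestamp#(StartDate, 'DD/MM/YYYY hh:mm:ss tt'),
               'DD/MM/YYYY hh:mm:ss') as StartDate,
     Timestamp(Timestamp#(EndDate, 'DD/MM/YYYY hh:mm:ss tt'),
               'DD/MM/YYYY hh:mm:ss') as EndDate
INLINE [
TicketID, StartDate, EndDate
1, 01/03/2021 10:30:00 AM, 03/03/2021 04:15:00 PM
];
```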

Now you can set up the variables below for the standard working hours on weekdays. You can change the variables according to your working hours (here 9 AM - 6 PM) and the rest of the calculation will be done automatically.

Now set up the holiday list as below. You can use your own holiday list.

Below is the logic to calculate the business working hours between two dates.
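The general shape of such logic can be sketched in script as follows. This is only a rough sketch; the table, field and variable names are assumptions, and the attached QVW has the authoritative version. It assumes the default FirstWeekDay setting (Monday = 0) and that Holiday and the generated day share the same date format.

```qlikview
SET vWorkStart = '09:00:00';
SET vWorkEnd   = '18:00:00';

Holidays:
LOAD Date(Date#(Holiday, 'DD/MM/YYYY')) as Holiday INLINE [
Holiday
25/12/2020
01/01/2021
];

BusinessHours:
// 3) sum the per-day overlaps per ticket
LOAD TicketID,
     Sum(DayHours) as BusinessHours
GROUP BY TicketID;
// 2) keep weekdays that are not holidays, and clip each day to the
//    overlap of [StartDate, EndDate] with the working window
LOAD TicketID,
     RangeMax(0,
       RangeMin(EndDate,   WorkDay + Time#('$(vWorkEnd)',   'hh:mm:ss'))
     - RangeMax(StartDate, WorkDay + Time#('$(vWorkStart)', 'hh:mm:ss'))
     ) * 24 as DayHours
WHERE WeekDay(WorkDay) < 5              // Sat = 5, Sun = 6 by default
  AND Not Exists(Holiday, WorkDay)
;
// 1) expand each ticket into one row per calendar day
LOAD TicketID,
     StartDate,
     EndDate,
     Date(Floor(StartDate) + IterNo() - 1) as WorkDay
RESIDENT Tickets
WHILE Floor(StartDate) + IterNo() - 1 <= Floor(EndDate);
```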

Here is the output

Below are the considerations:
1) Count only standard working hours (9 AM - 6 PM)
2) Exclude Saturdays and Sundays
3) Exclude the lunch break (1 PM - 2 PM)
4) Exclude Holidays
Set the Variables for standard working hours and lunch breaks. You can change the values according to your needs

Include the Holidays

Below is the logic to calculate the business working hours between two dates

Here is the output

Please see the attached QVW.
Feel free to provide any suggestions.
Regards,
Kushal
Hello All,
I have created a PDF report, but on one page I could fit only one field's information. How would I fit two or three fields' data on the same page, based on field size?
Thanks in Advance
NR
What it is used for?
The Qualify statement is used for switching on the qualification of field names, i.e. field names will get the table name as a prefix.
In a situation where we have two (or more) Tables, such as Product and Payment below, with the same field names:
The Qualify statement will prefix the fields with the name of the Table:
(Note: you will need to change the path to [Qualify.xlsx] in the scripts below to match your own file location.)
QUALIFY *;
Product:
LOAD [Serial No],
Category,
Value
FROM
Qualify.xlsx
(ooxml, embedded labels, table is Product);
QUALIFY *;
Payment:
LOAD [Serial No],
Category,
Value
FROM
Qualify.xlsx
(ooxml, embedded labels, table is Payment);
The Outcome:

Table Viewer:

Read only if you need to know more....
If we had not used the QUALIFY statement in the situation above, QlikView would map both of the Tables with an outcome like below:

The end result, a merge of those two Tables, is correct only for the "Serial No" field.
The "Value" and "Category" fields, although merged, are of no use to us.
To fix this we can Qualify only the fields that we do not want to be associated:
QUALIFY Category,
Value;
Product:
LOAD [Serial No],
Category,
Value
FROM
Qualify.xlsx
(ooxml, embedded labels, table is Product);
QUALIFY Category,
Value;
Payment:
LOAD [Serial No],
Category,
Value
FROM
Qualify.xlsx
(ooxml, embedded labels, table is Payment);
or by using UNQUALIFY statement:
(which works as opposite to QUALIFY)
QUALIFY *;
UNQUALIFY [Serial No];
Product:
LOAD [Serial No],
Category,
Value
FROM
Qualify.xlsx
(ooxml, embedded labels, table is Product);
QUALIFY *;
UNQUALIFY [Serial No];
Payment:
LOAD [Serial No],
Category,
Value
FROM
Qualify.xlsx
(ooxml, embedded labels, table is Payment);
In the second example the 'QUALIFY *' part will add the Table name to all fields, and the UNQUALIFY statement will then remove those prefixes only from the specified fields (this method is very useful if we have a large number of fields).
Outcome is as below:
Fields:

Table Viewer:

Feeling Qlikngry?
Mapping Load is definitely one of those essential tools for your QlikView and Qlik Sense scripting. It is similar to a left join, but different in distinct ways that let you do cool things in your scripts that would otherwise be difficult or involve many steps.
Basically a mapping load allows you to “map in” a set of values to a recipient table based on a key field.
We will first look at the properties of these functions and examine the syntax. Then we will review 8 different applications of this to help you understand where these functions are a good fit.
The script below uses QlikView, but we can apply these scripting techniques directly in Qlik Sense as well, with the exception of the "from" statements, which change to library connections.
We first must create the map. Simply use a load prefix of "Mapping" to indicate that this is a mapping table.
Then we can apply the map wherever needed. We will use the ApplyMap function where the first parameter is the map name (in single quotes), the second parameter is the key we use to translate the map from the current table and an optional third parameter will define a value when no match is found.
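Putting the two steps together, a minimal sketch looks like this (the table, field and file-free inline data are illustrative assumptions, not from the original post):

```qlikview
// 1) Create the map: a mapping table has exactly two columns,
//    the lookup key and the value to return
CustomerMap:
MAPPING LOAD CustomerID,
             CustomerName
INLINE [
CustomerID, CustomerName
1, Walmart
2, Target
];

// 2) Apply the map while loading the recipient table; the optional
//    third parameter is returned when no match is found
Sales:
LOAD OrderID,
     CustomerID,
     ApplyMap('CustomerMap', CustomerID, 'Unknown') as CustomerName,
     Amount
INLINE [
OrderID, CustomerID, Amount
101, 1, 500
102, 3, 250
];
```

Here order 102 has no entry in the map, so its CustomerName falls back to 'Unknown'.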
Our central table is sacred. If we need to sum up sales, count rows or calculate any sort of metric, this is likely your fact table. It is important that we maintain the integrity of this table and do not inadvertently add rows to it, since that would mess up your calculations.
But there are times when we need to join data to this table. You might think that a left join is the way to do this. But what if the fields we use to join this table to our fact table actually have repeats? Notice in the charts below, when we do a left join, it will actually add records to the fact table. Now our totals are exaggerated for that customer.

If instead we do a mapping load, we ensure that only one of the ID records actually maps into the receiving table. In fact, the first record “Walmart” is mapped in and the second value “Wal-Mart” is discarded only because the load will encounter the “Walmart” value first in the current ordering of the map table.

This trait of mapping loads is also handy when we only want one value from the map but we want a specific value. Similar to the scenario above, if Walmart had changed their name at some point, we might want only the latest name. If we sort the table by the “Date Modified” field in descending order, we will only get the latest value. This can be helpful for mapping in slowly changing dimensions where we only want one of the values.


Let’s consider a scenario where we have survey results on a numbered scale from 1-5. It might be handy to add the description for the numbers. We could create a “nested if” statement to handle this. This will work perfectly fine, but as this list gets larger, it might be difficult to maintain.

It might be better to map these values in, especially as the number of values in the list increases. Because the values can be provided in a table, this process becomes much easier to maintain as the number of distinct values grows.


Oftentimes we must do a calculation within our script, like "Quantity * Price as Extended_Amount". But what about when the calculation we need must occur between fields that are not in the same table?
A mapping load can provide a quick answer here as well. The benefit here is that we can map the value in AND do the math in the same step. Without the mapping load function, we would be forced to join the second field to the first table and then reload the whole table using resident load.
In the example below, we need to come up with an extended cost of goods sold. But the standard cost of the item is retained in the item master, a separate table from our sales table. We will apply the cost to the sale record and do the calculation at the same time.
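A sketch of that pattern follows (table and field names are assumptions):

```qlikview
// Map each item number to its standard cost from the item master
CostMap:
MAPPING LOAD ItemID, StandardCost
INLINE [
ItemID, StandardCost
A1, 4.50
B2, 2.25
];

// Map the cost in AND do the math in the same step
Sales:
LOAD OrderID,
     ItemID,
     Quantity,
     Quantity * ApplyMap('CostMap', ItemID, 0) as Extended_COGS
INLINE [
OrderID, ItemID, Quantity
1, A1, 10
2, B2, 4
];
```

Without the map we would need a join followed by a resident reload; here the lookup and the multiplication happen in one pass.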


Oftentimes we will encounter an "attributes" table. This is a catch-all table that holds all textual descriptions for many ID fields. This single table might hold item descriptions, product families, reporting segments, etc. These tables have at least three columns: one for the value ID, one for the field ID and one for the text descriptor.
In this case, we can load this table as a map and then refer to it multiple times, wherever needed throughout our script, loading these text descriptions into appropriately named fields in our data model. Notice that we are concatenating the ValueID and the FieldID in the map and then using both fields in the map application as well. This ensures that if a ValueID has a repeat, we are not accidentally pulling text for a different field.
Below, we are applying the map once for our item description and then again for our category description.


Use Case 6 – Reference a Default Value When No Match Is Found
Oftentimes a business requirement will state that, rather than a null value, we would like to see "No Description" for an item, or "n/a", for example.
If you need to define a default value when there is no match, we can provide this value as a third parameter in the applymap function.
This example shows the simple use of a “no description” label when no match is found.


This is my favorite use case for mapping loads. I have definitely had the situation where a client has said, "Well, first try to find a match in this SQL table, but if there is no match, then try this spreadsheet, and if there is still no match, put n/a".
Again, using the applymap third parameter, we can nest a backup applymap in the third parameter to meet this requirement. We could have an endless list of maps to satisfy the requirement, although I have never gone beyond three levels!
In this example, we will first go to the SQL table, then to a spreadsheet as a backup and then if there is still no match, we will add a manual description of “no description”.
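Nesting the fallback map in the third parameter looks roughly like this (the map names and inline data are assumptions; in practice the maps would load from the SQL table and the spreadsheet):

```qlikview
PrimaryMap:
MAPPING LOAD ItemID, Description INLINE [
ItemID, Description
1, Widget
];

BackupMap:
MAPPING LOAD ItemID, Description INLINE [
ItemID, Description
2, Gadget
];

Items:
LOAD ItemID,
     // Try the primary map first, then the backup map,
     // then fall back to a manual label
     ApplyMap('PrimaryMap', ItemID,
       ApplyMap('BackupMap', ItemID,
         'no description')) as Description
INLINE [
ItemID
1
2
3
];
```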


There might be times when our key needs to be a little more complex. Sometimes two or three fields are needed to key the mapped values to your receiving table, but as you recall, the mapping table can only have two fields.
In these cases, you can create one field that combines the keys. Just remember that you will have to build the same composite key when you apply the map.
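A sketch of a composite map key (the '|' delimiter and all names are assumptions):

```qlikview
// Combine two fields into one key inside the mapping table
PriceMap:
MAPPING LOAD ItemID & '|' & Region as Key,
             Price
INLINE [
ItemID, Region, Price
A1, US, 10
A1, EU, 12
];

Orders:
LOAD OrderID,
     ItemID,
     Region,
     // Build the same composite key when applying the map
     ApplyMap('PriceMap', ItemID & '|' & Region, 0) as Price
INLINE [
OrderID, ItemID, Region
1, A1, US
2, A1, EU
];
```

Pick a delimiter that cannot occur inside the key fields themselves, so that distinct key pairs cannot collide.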


Source: Living Qlikview
When showing data in a month-on-month style, the current month can often show a large drop-off. This can be prevented by providing a run rate for the current month. This is where the known values for the current period are extrapolated forward for the rest of the period, providing an estimate of where the period will end. For example, if you have a value of 1,000 on day 10 of a 30-day month, you could calculate a run rate of 3,000 for the month.
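As a sketch, a run-rate expression for the current month could look like this (the Sales and Date field names are assumptions; the attached application has the full worked version):

```qlikview
// month-to-date actuals, divided by days elapsed,
// scaled up to the full number of days in the month
Sum({<Date = {"<=$(=Today())"}>} Sales)
  / Day(Today())
  * Day(MonthEnd(Today()))
```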
This solution was created in response to a question on Qlik Community, which you can find here:
Re: Last time value in time line chart
I have documented how this document works and the reasons why you might use it in a blog post here:
http://www.quickintelligence.co.uk/qlik-run-rate/
There is also a link to the QlikView version of this application on the blog post, and further discussions on why run rates are good to have.

I hope that you find it useful. You will find other applications that I have uploaded under my profile on QlikCommunity.
Steve
Hi All,
This document helps you load multiple Excel files and Excel sheets, together with the sheet name and data.
//to read each file from the specified folder
FOR EACH file IN FileList('filepath\*.xlsx')
//use an ODBC connection so the SQLtables command can list the sheets in the file
ODBC CONNECT32 TO [Excel Files;DBQ=$(file)];
tables:
SQLtables;
DISCONNECT;
FOR i = 0 TO NoOfRows('tables') - 1
//strip the quote (') and dollar ($) characters from the sheet name
LET sheetName = PurgeChar(PurgeChar(Peek('TABLE_NAME', i, 'tables'), Chr(39)), Chr(36));
Table:
LOAD *,
FileBaseName() as File,
FileDir() as Dir,
FileName() as File_Name,
'$(sheetName)' as Sheet_name
FROM [$(file)] (ooxml, embedded labels, table is [$(sheetName)]);
NEXT i
DROP TABLE tables; //otherwise the sheet list keeps accumulating across files
NEXT file
Hope this helps !!!
Please find attached the example QVW and test files.
Regards,
This example demonstrates the SQL WriteBack capability of QlikView via the use of an Extension object. It can be useful for passing a filtered subset of data to a database, writing a comment to a record or simply amend/correct something in the database. Since there’s no open source code available I decided to write this example in my free time to make it available for everyone just like me trying to achieve a writing back functionality to a database. Hopefully this might be useful to some.
Please note that this example was created to be as simple as possible and does not serve to fill any business necessities. It should rather demonstrate that it works and provide the code for free to anyone who wants to extend it to their own business needs.
A detailed installation guide is included in the .zip file.

I will try to support and answer as many questions as possible in the comments.
Thanks,
Martin
I have two tables, Table1 and Table2. Table1 has DATE; Table2 has START_DT and END_DT. We want to join these two tables on the date columns. The IntervalMatch function helps us make this join in Qlik, and if we use a left join, we can avoid synthetic tables too. Here is the example. If you want to test it, prepare the test data.
Let me know if you need any more info/clarification.
Table1:
LOAD COUNTRY,
DATE,
C1,
C2
FROM Table1;
Table2:
LOAD COUNTRY, C3,C4,
date(START_DT) as START_DT,
date(END_DT) as END_DT
FROM Table2;
// IntervalMatch links each DATE in Table1 to the matching [START_DT, END_DT] interval
LEFT JOIN (Table1)
INTERVALMATCH (DATE)
LOAD START_DT, END_DT RESIDENT Table2;
// then bring the interval attributes into Table1
LEFT JOIN (Table1)
LOAD DISTINCT * RESIDENT Table2;
DROP TABLE Table2;
The United States Government has implemented requirements for software in order to make all of the software it uses easily accessible to people with disabilities. Specifically for access through the web, these requirements primarily target the needs of people with vision impairments who may require special keyboard navigation features and third party Screen Readers to be used with QlikView documents.
To become section 508 compliant, QlikView requires users to access the QlikView applications through AJAX and to have the 508 Accessibility extension installed on the server. These extensions were previously available in the Qlik Marketplace, which has been discontinued. These extensions are now being released to the community and will be accessible from this page.
Additional information "DS-Government-Section-508-VPAT.pdf" can be downloaded on Qlik.com:
https://www.qlik.com/us/company/press-room/press-releases/1030-qliktech-achieves-section-508-compliance
We were looking for a solution to update Excel pivot tables before they are loaded into QlikView. I came up with the macro script below, which I want to share with you. I added an extra option to run the Workbook_Open macro; when opening an Excel file via QlikView, this macro will not start (in my experience). Choosing action option 2 will run the macro from QlikView, thereby starting whatever process would run when you open the Excel file manually.
In QV press Ctrl+M to add the macro to the QV document. Be sure to allow system access for this macro function to work. Check the example script in the header of the function. Be aware that the filename should be quoted with single quotes.
Please click the Like link if you are going to use this macro.
' *****************************************************
' Description: Function to open an Excel document and
' refresh all external data queries and
' pivottables in the Excel document or
' run the Workbook_Open macro that runs
' when manually opening the Excel document.
' Parameters:
' fileName > Specify the Excel document path to open.
' The path should be the absolute (complete) path on the filesystem.
' action > 1 = Open Excel file and refresh all external data tables.
' > 2 = Open Excel file and run Workbook_Open macro.
'
' Example:
' LET result = OpenExcel('<path>\filename.xlsx', 1);
' If $(result) Then
' LOAD * FROM '<path>\filename.xlsx';
' EndIf
' *****************************************************
Function OpenExcel(fileName, action)
Dim objExcel, objWb, fso
' Get file extension from the document name
ext = Mid(fileName, InStrRev(fileName, ".") + 1)
' Check if specified document name is an Excel document
If ext <> "xlsx" And ext <> "xlsm" And ext <> "xls" Then
OpenExcel = False
Exit Function
End If
' Create a filesystem instance
Set fso = CreateObject("Scripting.FileSystemObject")
' Exit the function if the document does not exist
If Not fso.FileExists (fileName) Then
OpenExcel = False
Exit Function
End If
' Create an Excel instance
Set objExcel = CreateObject("Excel.Application")
' Exit the function if Excel object cannot be created. Excel is not installed?
If objExcel Is Nothing Then
OpenExcel = False
Exit Function
End If
' Disable Excel prompts and alerts
objExcel.DisplayAlerts = False
' Open the Excel document
Set objWb = objExcel.Workbooks.Open(fileName)
' Exit the function if document cannot be opened. Is the document already opened?
If objWb Is Nothing Then
OpenExcel = False
Exit Function
End If
' Do some stuff according to the specified action parameter
Select Case action
Case 1
' For Excel 2007 and later documents refresh all external data sources
' and pivottables in the document
If ext = "xlsx" Or ext = "xlsm" Then
objWb.RefreshAll
' For Excel 2003 document refresh each pivottable on each sheet
ElseIf ext = "xls" Then
' First refresh each external query table on all sheets.
' Pivottables may depend on these query tables
For Each sh in objWb.Worksheets
For Each qt In sh.QueryTables
qt.Refresh
Next
Next
' Refresh every pivottable on all sheets
For Each sh in objWb.Worksheets
For Each pvt In sh.PivotTables
pvt.RefreshTable
Next
Next
End If
Case 2
' Run the Workbook_Open macro
objWb.Application.Run "ThisWorkbook.Workbook_Open"
End Select
' Save Excel document and quit Excel
objWb.Close True
objExcel.Quit
' Cleanup memory
Set fso = Nothing
Set objWb = Nothing
Set objExcel = Nothing
' Return true if the function finishes without an error.
' This does not mean that the Excel refresh was successful
OpenExcel = True
End Function
Hi All
I have combined all the important topics by @Henric_Cronström in one place. I hope it will be helpful for you all while preparing for certification.
Thanks
Kushal
Need to add a printable report to your QlikView document? Use QlikView's Report Editor to create a report that users can print and take with them. Check out this technical brief for details on how to do this as well as this blog on the topic.
Jennell
About
What is it? What does it do?
This is a tool that I use to quickly check shared file content and do some usual maintenance jobs on it. After playing with it with colleagues for a while, I thought it would be nice to share it with the community and get some feedback about if/how I should proceed with this personal project.
This tool is a very simple one: it can open both the legacy ".Shared" and the new ".TShared" formats of the QlikView shared file, show helpful info from it, and provide some very basic operations on shared objects (currently I have only added support for Bookmarks, because they are the most commonly used day-to-day).
Why another Shared File Viewer?
There has been a Shared File Viewer already for quite a while (in PowerTools package)
The limitation of the existing one is that it can't open the new "TShared" format that was introduced into QlikView recently. So if you want to view the new format, you have to convert "TShared" to "Shared" first and convert it back afterwards, which is really annoying, especially when the shared file is *big*.
Another limitation of the current one is that it exposes only a small subset of the shared file content and doesn't embed many shared file functions (cleaning, filtering), because its development toolchain is outdated.
Lastly, I found it's not easy to run a Shared File Cleaner without a GUI and wanted something more intuitive.
In short, the legacy shared file viewer is inconvenient to use (to me at least 😅), especially when it comes to the new "TShared" format.
So I thought, why not just write another tool myself to meet my needs - here it comes.
Release Note
Current Stable Release: 0.2
You can find it in the attachment, where the zip file simply contains an exe file that you can run on Windows.
Features:
Hopefully you have time to download and play with it, and, most importantly, give me some feedback about how you think of it, and what other functions you want to include in it in future.
NOTE:
This tool is currently in preview only, so please be CAUTIOUS if you use it with production shared files. I know the shared content is critically important, so make sure you have a backup before touching any shared files.
Hi Folks,
If the Instagram account is public, we can fetch the number of likes and comments using the script below in QlikView.
Script:
LOAD
TextBetween([@1:n],'edge_media_preview_like":{"count":',',') as Likes,
TextBetween([@1:n],'"edge_media_to_parent_comment":{"count":',',') as Comments,
SubField(FilePath(), '/', -1) as PhotoID,
FilePath() as Url
FROM
[https://www.instagram.com/p/B4-I3zrAlTn]
(fix, utf8, no labels)
where wildmatch([@1:n], '*"edge_media_preview_like"*');
output:
Hope it helps someone.
"Because there can be so many different ways to visualize your data, it helps to have a basic guideline to follow."
A document by Qlik written by Patrik Lundblad
with Dr. Abela's permission
Hi Qlikers,
Firstly, kudos to avinashelite for giving us the method below to load all Excel files with multiple sheets.
Loading Multiple Excel Sheets Dynamically along with file name and sheet name
Using the above method we can load multiple Excel files with multiple sheets, even though the sheets have different names.
The above method works only in the scenarios below:
1) When all the files have the same number of sheets.
2) When all the sheets of the Excel files have the same columns with the same names.
But the method below works even when the files don't have the same number of columns and sheets, or when an Excel file contains a blank sheet.
// Define the Path
LET vExcelFilePath = 'D:\Test';
LET vQVDFilePath='D:\Test';
SUB CreateQVDFromAllExcelFiles(vPath)
FOR EACH vFileExtension IN 'xlsx'
FOR EACH vFile IN FILELIST(vPath & '\*.' & vFileExtension);
ODBC CONNECT TO [Excel Files;DBQ=$(vFile)];
Temp:
LOAD *;
SQLtables;
DISCONNECT;
Data:
LOAD * INLINE [
junk ];
FOR i = 0 TO NOOFROWS('Temp')-1
LET vSheetName = PURGECHAR(PURGECHAR(PEEK('TABLE_NAME', i, 'Temp'), CHR(39)), CHR(36));
CONCATENATE(Data)
LOAD *,
FILEBASENAME() AS FileName,
'$(vSheetName)' AS Sheet_name
FROM $(vFile)(ooxml, embedded labels, table is [$(vSheetName)]);
NEXT i
DROP TABLE Temp;
DROP FIELD junk;
NEXT vFile
NEXT vFileExtension
Set ErrorMode=0;
Drop Field A; // When there is blank sheet in excel file, field A is created
Set ErrorMode=1;
STORE Data into $(vQVDFilePath)\FullData.qvd;
DROP Table Data;
END SUB
CALL CreateQVDFromAllExcelFiles('$(vExcelFilePath)');
LET i = Null();
Feel free to provide your suggestions
Thanks & Regards,
Kushal Chawda
Generic Keys is a way to define keys between tables in a more general way so that their values can represent other things than individual key values; they can represent groups of key values or any key value. As an example, you can combine product IDs, product group IDs and a symbol for all products into one key field.
You can use generic keys to solve many data modeling problems:
See more in the attached files.
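As a tiny illustration of the idea (the field names and the <All> symbol here are assumptions; the attached files contain the full worked examples):

```qlikview
// A generic key value can be a product ID, a product group ID,
// or a symbol meaning "all products"
Budget:
LOAD * INLINE [
GenericKey, BudgetAmount
<P1>, 100
<GroupA>, 500
<All>, 2000
];

// A bridge table links each real product to every generic key
// value that should match it, so selections propagate correctly
Bridge:
LOAD * INLINE [
ProductID, GenericKey
P1, <P1>
P1, <GroupA>
P1, <All>
P2, <GroupA>
P2, <All>
];
```

Selecting product P1 then associates it with its own budget, its group's budget and the all-products budget through the bridge.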
PS I have been asked to make an example on comparing budget and actual numbers, so here it comes as a second attachment. It is a zip file with a mock-up order database with a budget. Create a new empty qvw; expand the zipped files in the same folder as the qvw; insert one of the two included script files into the qvw and run the script.
Hi All,
The script below dynamically removes empty columns (columns where every value is null) from the data model. To use it, just copy the script below and paste it at the end of your own script.
FOR a = 0 TO NoOfTables() - 1
LET vTable = TableName($(a));
LET d = 0;
FOR i = 1 TO NoOfFields('$(vTable)')
LET j = i - d;
LET vField = FieldName($(j), '$(vTable)');
LET vFieldValueCount = Alt(FieldValueCount('$(vField)'), 0);
IF (vFieldValueCount <= 0 ) THEN
DROP FIELD [$(vField)] FROM $(vTable);
TRACE DROP FIELD [$(vField)] FROM $(vTable);
LET d = d + 1; // dropping fields impacts the internal field no.
END IF
NEXT i
NEXT a
If you also want to drop columns that contain only whitespace values, then use the script below:
FOR a = 0 TO NoOfTables() - 1
LET vTable = TableName($(a));
LET d = 0;
FOR i = 1 TO NoOfFields('$(vTable)')
LET j = i - d;
LET vField = FieldName($(j), '$(vTable)');
DropField:
LOAD 1 as Check2
Resident $(vTable)
WHERE Len(Trim($(vField))) > 0;
LET vNoOfRows = NoOfRows('DropField');
DROP TABLE DropField;
IF vNoOfRows = 0 THEN
DROP FIELD [$(vField)] FROM $(vTable);
TRACE DROP FIELD [$(vField)] FROM $(vTable);
LET d = d + 1; // dropping fields impacts the internal field no.
END IF
NEXT i
NEXT a
You can test the above script using below Inline table.
Data:
LOAD
*,
' ' AS Field3,
Null() AS Field4
INLINE [
Field1, Field2
1,2
3,4
5,6];
When you use the first script, only Field4 will be dropped; when you use the second script, both Field3 and Field4 are dropped.
Hope this helps.
Regards,
jagan.