Originally published on 07-21-2011 10:31 AM
In expressor datascript, the logical operators – and, or – do not necessarily return Boolean values, but rather the value of one of their inputs.
The and operator returns its first input if that input value is false or nil, otherwise it returns the value of its second input.
The or operator returns the value of its first input if that input value is not false or nil, otherwise it returns the value of its second input.
A Boolean value is returned only if the appropriate input is itself a Boolean value.
Only the values false and nil are interpreted as false.
The values zero, one, and the empty string are all interpreted as non-nil, and therefore true, values.
The and operator has higher precedence than the or operator and both of these operators have lower precedence than the relational and mathematical operators.
Since logical expressions can return values, they may be used in situations in which you would not normally think to use a logical expression. The most unusual example is to use these operators to implement the equivalent of an if..then..else block as in the following statement.
input_1 and input_2 or input_3
Think of input_1 as the conditional statement tested in the if clause. What you enter as input_1 can simply be a reference to a record attribute or a typical conditional statement that includes relational operators. Then input_2 represents the value returned from the then clause, and input_3 is the value returned from the else clause. You can't use this syntax when the then and else clauses contain multiple statements, and keep in mind that the idiom breaks down if input_2 can itself evaluate to false or nil (the expression would then return input_3 instead), but it does have some very interesting uses.
For example, what if you want to supply a default value if an attribute is nil? The following statement will do the trick.
field_name and field_name or default_value
If field_name contains a non-nil value, this statement will return this value, otherwise it will return default_value.
Note that you can also nest the conditional statements. The following statement will return Democrat if the attribute party contains the value Democratic, will return Republican if the attribute party contains the value Republican, and will return Neither a Democrat nor Republican if the attribute party contains any other value. Be careful to use parentheses to control the precedence in an expression.
party=="Democratic" and "Democrat" or (party=="Republican" and "Republican" or "Neither Democrat nor Republican")
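Because expressor datascript's logical operators behave the same way as plain Lua's, the idiom can be sketched and tested standalone outside of expressor; the function name here is an assumption for illustration:

```lua
-- A minimal sketch of the and/or "if..then..else" idiom (plain Lua,
-- whose logical operators behave the same way as datascript's).
local function classify(party)
  return party == "Democratic" and "Democrat"
      or (party == "Republican" and "Republican"
          or "Neither Democrat nor Republican")
end

print(classify("Democratic"))   -- Democrat
print(classify("Republican"))   -- Republican
print(classify("Green"))        -- Neither Democrat nor Republican
```

Note that the parentheses around the nested expression are not strictly required here, since and binds tighter than or, but they make the intended precedence explicit.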
New - Version 5.1.23 (posted 9/25/2014)
***added workaround for "UserAgent" bug where the condition is not met but QV reads the Else field anyway
***Edited expressions for TotalMaxCalsInUse charts to reflect max of cluster, not sum of nodes
***Corrected QDS variable typo in script
This app was created by Michael Terenzi of QlikView Support, and has a number of new features:
Includes Salesforce Connector and SAP Connector logging integration
Includes Offline Client logging integration
Includes Windows OS Event Application/System/Security logging integration (see instructions)
Specify how many days’ worth of logs you want to retrieve (default is 30 days)
Hyperlinks added to data tables so you can quickly link an error message to Salesforce Help (SF Connector messages), EventID.net (Windows logs include event and source automagically in the URL), or QlikCommunity for all others. Now if you get an "Invalid Query Locator" error from the Salesforce logs, you get sent to the right place to find out why.
Added capability for DD/MM/YYYY webserver logs to be picked up for those of you with different time formatting (see instructions)
Optimization – CPU, Table structure
New UI (simpler, easier interface)
Supports up to 10 QVS nodes
Supports up to 2 QDS instances (will read as a "node" but is really an entire resource)
Shared file corruption detection (only on builds 11440+; please contact Support if you encounter the alert)
Analyze concurrency trends over the hour or the minute, even with Performance logging turned off!
Includes Deployment Framework integration (by Magnus Berg)
Includes Scalability Center Environmental Analytics (see attachments and please email them your results!)
Only includes support for QV 11
Removed HTTP ERROR, QMS Audit logging due to time constraints
Verify your logs are in the default locations and adjust as needed in the Setup tab
Save and reload!
For custom/enterprise environments, Clear Defaults (button) in Setup tab and enter Required Components
Must be running on QV 11 and must have QVS Event logs to run bare bones
Archive your logs and stow them away if not necessary for your analysis. Webserver/IIS logs have the potential to greatly increase table sizes, so limit your source data if possible to optimize performance
When navigating to the Troubleshooting tab you will need at least 2 GB of RAM on your machine with the sample data (sorry!)
Over time you are looking at potentially millions and millions of records. Don't be a hoarder!
QV10 is not supported, especially on the Publisher side. Feel free to try your luck if necessary.
“The Qlikview web server service on Local computer started and then stopped. Some services stop automatically if they are not in use by other services or programs.”
If the above error message is displayed when you are trying to start the QlikView Web Server, it means some other service is running on the port (normally 80) that the QlikView Web Server is trying to acquire. The application running on that port can be another server such as IIS, a browser application, or a program such as Skype, Ammyy, or TeamViewer. For QlikView Server, the port is 80 when using http:// and 443 when using https://. The port shown alongside the IP address (4750 or similar) is an internal port used by QlikView Server; it is not the port the web server is accessing. The web server always accesses 80 or 443, according to the protocol.
We need to stop whatever services are running on that port and then start the QlikView Web Server service. To do that, follow the steps given below.
1. Check which services are running on that port. Use the command netstat -aon | findstr [port no] in a command prompt.
-a Displays all active connections and the TCP and UDP ports on which the computer is listening.
-o Displays active TCP connections and includes the process ID (PID) for each connection.
-n Displays active TCP connections with addresses and port numbers expressed numerically; no attempt is made to determine names.
2. Get the Process ID of the services running on that port (Process ID will be displayed in the column to the right).
3. Open Task Manager, go to the Processes tab, and end the processes whose PIDs are using that port.
4. Restart the service and enjoy QlikView.
When you list the services on that port using the cmd command above, there may be two default services running on it, such as IIS or TCP/IP system services, and you cannot find those in Task Manager either.
What actually happens is this: when you start the QlikView Web Server service, it cannot function because an application such as Skype has been holding the port since it started. The system starts the service, checks for running threads on it, and, finding none, switches back to the previous service on that port. As a result, the service starts and then stops almost immediately.
I often use a function that usually shortens my loading process.
This is a step-by-step guide that explains how to create and use a JavaScript function in the load script (Edit Script).
In the example below, the function returns the number being searched for if it exists in an array list; otherwise it returns the closest smaller number; and if no smaller number exists, it returns the closest larger number.
Enable calling the function from the load script:
HELP > About QlikView > right-click on the QV icon
Change AllowMacroFunctionsInExpression to "1"
Then open the Edit Module dialog (Ctrl+M) and paste the following script
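The module script itself is not reproduced here; a hypothetical sketch of such a function, with the name and exact behavior assumed from the description above, might look like:

```javascript
// Hypothetical macro function: returns x if it is present in arr;
// otherwise the closest smaller value; if no value is smaller,
// the closest larger value. Name and signature are assumptions.
function ClosestMatch(x, arr) {
    var best = null;
    for (var i = 0; i < arr.length; i++) {
        if (arr[i] == x) return x;                 // exact match found
        if (arr[i] < x && (best === null || arr[i] > best)) {
            best = arr[i];                          // closest smaller so far
        }
    }
    if (best !== null) return best;
    // no smaller value exists: return the closest larger value
    best = arr[0];
    for (var j = 1; j < arr.length; j++) {
        if (arr[j] < best) best = arr[j];
    }
    return best;
}
```

Once AllowMacroFunctionsInExpression is enabled, a module function like this can be called from the load script the same way as a built-in function.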
To download QlikView Expressor Desktop edition, please follow these instructions:
If you are logged into the QlikView forums or other site content:
NOTE: Please make sure to clear your browser's cookies before trying each of these methods. The web team is aware of some of these issues and is working on correcting the problem to make the download experience better.
This document is based on the Development Checklist post. The intention is for it to serve Spanish-speaking developers, but this time the document should accompany the project documentation rather than being merely a manual development-verification tool, as in the original version.
The attached extension object (bottom of post) is to be used with QlikView Expressor 3.9.x and can be installed and enabled in a QVE Workspace. The extension object has been built using QlikView Expressor Datascript and the new SDK Extension Builder. Please see the above mentioned link for more on building your own extensions.
A customer needed the ability to enrich their data sets with additional location information, including geocoding those locations for mapping purposes. All they had as input was a list of IP addresses (from logs or DB queries) that were accessing their systems. They wanted this process to be reusable, repeatable and shareable with other applications and development groups.
The attached extension is provided as a QlikView Expressor Transform Operator which accepts incoming IP addresses. It then geocodes them using a free RESTful web service API (http://freegeoip.net/) and provides the following additional location data:
country_code, country_name, region_code, region_name, city, zipcode, latitude, longitude, metro_code, areacode
Custom properties for the extension simply include the ability to toggle the returned results in the Results pane, as well as to set a sleep time between API calls. This may be necessary, as the web service may limit the number of queries it answers within a given time period.
Using QlikView Expressor (QVE) to manage and prepare data for QlikView is a great step towards adding data governance and data management to your QlikView deployment. Not only can you visualize where data originates and its final destination, but you can also create reusable parameterized business rules that can be shared across multiple applications.
By design, QVE uses a Transform Operator to store Expression and Function rules that manipulate and/or add new data. When transforming data, a simple QVE expression is used. The result is one or more transformed or new data columns in the output of the final QlikView table model.
But what if you want to store and reuse an actual QlikView-specific scripted expression, and not just have the resulting column output? This would be an ideal way to reference a single version of that expression in a unified manner. In turn, it could reduce maintenance significantly when changes are made, since there is only one place, QlikView Expressor, to make modifications. This approach would also increase productivity and data confidence, as it creates a single common expression stored in a centralized reusable repository.
Cultivating a culture that emphasizes consistency and reusability is vital when introducing successful data governance practices. Common problems with many decision support systems are the amount of variation, redundancy and overlap that exists within the data models and business logic used across multiple analytical applications. These problems can delay critical decisions and disrupt IT operations while users struggle to verify the truth in data.

Having data is one thing; having "good data" is another. With the volume of data increasing, it is important to have tools to monitor and create a structured and consolidated data management layer that contains reusable and consistent definitions. This in turn gives developers and business users assurance that the data they are using, whether to develop applications or make decisions, is "good data". It also expedites the process of creating new applications and eliminates much of the guesswork in maintaining applications as business requirements evolve over time.
Data Governance can be considered ambiguous as it has an emerging definition – it can be simply defined as the exercise of authority for data related matters. It ensures that important information assets are formally managed throughout the enterprise and can be trusted to provide effective decisions.
Some of the goals of applying Data Governance practices include:
Improving regulatory compliance
Introducing best practices and repeatable processes
Conforming column definitions across all applications
As with most business intelligence solutions, Data Governance should work with some sort of metadata repository / data dictionary in order to answer critical deployment questions. Once in place, Data Governance will influence the actions and conduct of the people who implement and follow these practices.
The term Metadata is also ambiguous and has an evolving definition. It can and always will be defined differently by those who work with it. But, when used in the context of QlikView applications or Business Discovery – metadata can be defined simply as - data about data. Within a QlikView ecosystem - there are two types of “data” that can be described:
Source Data - DATA that is used to make business decisions such as organizational data
QlikView Deployment Data - DATA about the structural elements that make up a QlikView deployment
Metadata's overall purpose is to increase the value of data by providing additional context. When managed effectively, it can be created once, centralized, and reused in a self-service manner across multiple applications. It can also be used to answer questions about data lineage and impact analysis for the data or the applications it describes. In turn, it ensures consistency and understanding of data across the entire deployment for both IT and QlikView users. When applied correctly it can help with the overall effectiveness and efficiency of a QlikView deployment.
Whether it describes data used to make business decisions or data about a QlikView deployment, metadata helps bridge the gap between the way users work with data and how computer applications process it.
1. What 2 products are used to introduce Data Governance and Metadata Management to a QlikView deployment?
a) The first product is the QlikView Governance Dashboard (QVGD). This is a free product available on QlikMarket which contains a QlikView dashboard (.QVW file) and a run-time processing engine. Its overall function is to retroactively scan a QlikView deployment(s), create a QlikView associative data model, and present various KPIs/metrics about the deployment(s). It is intended to be used largely by IT and other technical staff to gain visibility and insight, helping them answer the question: "What is going on in my QlikView deployment?". The overall value and benefit of the QVGD is that it allows users to act on their findings, such as by instituting data governance practices in their QlikView environment, in turn allowing them to measure its overall effectiveness and efficiency.
Some examples of the questions answered include:
What QVD/QVX files/fields are/are not being used?
How many QlikView applications exist in my deployment?
What data is or is not being used and by which QV apps?
Which expressions/labels are being used the most (recurring / overlapping)?
What and how many of each sheet objects are being used?
What sources of data are being accessed?
Please refer to the QVGD product landing page on our web site for more information.
b) The second product is QlikView Expressor Desktop / Server, which comprises four components: a design environment (QVE Desktop), a version control and team development Repository, a server-side Engine so created content can be deployed and executed on a server (QV Server / Publisher), and the QlikView Expressor Connector.
There are 3 license options for QlikView Expressor:
A free Desktop edition (interactive execution only)
Standard ( 8 core processing limitation, repository, engine)
Enterprise (unlimited cores, repository, engine)
QlikView Expressor Desktop - is used to prepare and manage data for QlikView applications. Its primary function is to create a Dataflow that visually provisions (access, conform, cleanse, etc.) data for QlikView. There are components to access data, cleanse, transform and control its flow and output to QlikView and other target systems. QlikView Expressor defines and captures the source, target and business rule metadata along the way which can be reused in other projects and reused amongst multiple QlikView applications. It can help reduce QlikView scripting in certain cases and offers a repeatable way of defining meta-driven QlikView applications. It provides an easy to use interface that most QlikView developers will feel comfortable with.
The Repository allows the storage and version control of what are called design-time model components used to create the Dataflow. (connections, schemas, business rules, templates, etc.)
The Server (engine component known as etask.exe) - will just execute what is created on the QV Server / Publisher machines.
QlikView Expressor Desktop and a Dataflow with data output to QlikView
QlikView Expressor Desktop Rules Editor - defining a parameterized, reusable business rule
2. What are some uses of QlikView Expressor within QlikView?
In summary, both the QlikView Governance Dashboard and QlikView Expressor enable discovery and understanding of a QlikView deployment and its data by applying data governance, increasing reuse and facilitating the creation of metadata driven QlikView applications across the entire QlikView environment.
When creating QlikView applications there are few ways one can prepare data for QlikView.
One can provide direct access to the data via its connectors to databases, files and web services directly in the QlikView application (.QVW) - then use SQL and LOAD script functionality to further transform the data needed for the application.
QVWs can also be used to just prepare the data with the LOAD scripts, without the layout and chart objects. Connectors, SQL and LOAD scripts are used to access, conform, cleanse the data to create a QlikView datafile known as a .QVD file (QlikView Data layer). Other QlikView applications can use that QVD file if needed. These processes can be scheduled and refreshed as needed using QlikView Publisher (Distribution Service and its task manager)
Due to the extremely user-friendly and addictive nature of QlikView, anyone can rapidly create content to answer business questions easily. What happens when QlikView deployments start to expand throughout an organization is that multiple versions of the rules, metrics, and column definitions may exist, or may be defined differently across similar applications. This can create differences in conclusions, reducing confidence in the data and therefore delaying decisions. The QlikView Governance Dashboard can help identify these areas of concern, and QlikView Expressor can help provide a way to manage reusable and consistent data for those QlikView applications as the environment continues to grow.
3. What data sources / targets can QlikView Expressor read / write?
QlikView Expressor can read and write a variety of data using Read and Write Operators. For data sources where an operator does not exist, Read Custom and Write Custom operators can be used along with Datascript syntax.
QVX Connector - any QlikView connector that has been built using the QVX specification
4. How do you connect QlikView to QlikView Expressor?
QlikView Expressor can read and write QlikView QVD files, so the QVD output that is created is used as you would normally use it with QlikView. It can then be used as a data source file within QlikView application design like any other QlikView data file. If you output to QVX with QlikView Expressor, you have the option of using the QlikView Expressor Connector (QVEC), which allows you to source data directly from the QVE Dataflow without having to explicitly reference the .QVX file from a LOAD script. The QVEC gives you access to what is similar to a traditional metadata repository: "Deployment Packages" defined within QVE projects can be accessed, exposing all the Dataflows that will be used to provision data for the QlikView application. The QlikView Expressor Connector works specifically with Dataflows that output QVX only.
5. Where can QlikView Expressor Fit?
QlikView Expressor (QVE) provides data governance and data management within a QlikView environment, providing visibility and data confidence in QlikView deployments. Its strengths enable the creation of a single conformed data management layer that can be used to drive QlikView applications. QlikView Expressor has also been used as an ETL (Extract Transform Load) / data integration tool to supplement other data preparation needs, such as the creation of various data stores. This is common in a setting where other ETL tools are not available. QVE can help consolidate multiple data sources, augment data and create a data store/mart/warehouse to be accessed by QlikView and other applications. Other benefits of QlikView Expressor include its ability to graphically prepare and control the flow of data while storing, sharing and reusing various components of the development process.
QlikView Server can support any combination of different CALs on a single QlikView Server. Often it makes sense to combine different CALs on a single QlikView Server based on the user requirements. When CALs are combined on the Server, the order of precedence in CAL assignment is as follows:
When trying to create a connection to Microsoft SQL Server with QlikView Expressor using the provided 32 bit native drivers, you may receive the following error:
SQLSTATE:, Code:, Msg:[expressor][ODBC SQL Server Wire Protocol driver]Connection refused. Verify Host Name and Port Number.
You may be able to connect to the MS SQL Server with other applications; however, they might be using different drivers/protocols to connect, so the MS SQL Server configuration may appear to be valid.
This commonly occurs when MS SQL Server is a new, local installation (on the same PC).
Please verify the following:
That it works when creating an ODBC DSN using the 32-bit driver "SQL Server Native Client 11.0".
Programs -> expressor -> expressor3 -> system tools -> Data Sources (ODBC) - this ensures that the proper 32 bit ODBC admin tool is being used
If this works - then verify the following on the MS SQL Server side.
The problem may be the protocols that are enabled on the MS SQL Server. After installing MS SQL 2012 - you will need to enable TCP/IP in the SQL Server Configuration Manager in order to get the provided QlikView Expressor drivers to work.
Programs ->MS SQL->Configuration Tools->SQL Server Configuration Manager
By default, TCP/IP is not enabled. Also check your TCP/IP Dynamic Ports: make sure the property is blank and does not have 0 as a value. Scroll through the list and do this for all IP addresses to be sure. I have encountered a number of machines with named instances of MS SQL Server installed, and these settings needed to be modified on them. Please refer to this document for more information on using Named Instances: http://community.qlik.com/docs/DOC-3247
Then configure the QlikView Expressor Database Connection as usual:
When using Expressor Desktop 3.10 and later (Expressor), you now have the ability to debug your Expressor Datascript code in Decoda, a Lua coding and debugging IDE. This open source product must be downloaded and installed separately from Expressor.
Normally when you run a dataflow from within Desktop, code debugging is disabled. If you want to debug your code, you must specifically enable the Prompt for debugging property on the operator whose code you want to examine and select the new Start With Debugging menu item that is under the Start ribbon bar button. You will then be able to attach Decoda to the code you want to examine, set break points and watch variables, and step through the code.
The operators for which debugging is available are: Aggregate, Filter, Join, Multi-Transform, Read Custom, Write Custom, and Transform.
You can use this debugging feature with both expression and function rules. If your operator includes multiple expression rules, you will be able to examine each rule in turn as the debugger cycles through the rules, which will be listed separately. If your operator includes a function rule with code in both required and optional functions (for example, the transform operator's filter and transform functions), you will be able to examine all the coding during the same execution of the dataflow.
Let's see how this all comes together.
When designing your dataflow, select the Prompt for debugging property on the operator whose code you want to examine.
You may only select this property on one operator on each step of the dataflow; it is not possible to simultaneously debug multiple operators on a single step.
Before running the dataflow start the Decoda application.
Run the dataflow by selecting the Start With Debugging menu item from the drop-down list under the ribbon bar's Start button, or by pressing the Ctrl+F5 key combination. If you simply click the Start button, the dataflow will run without connecting to Decoda.
A message window will appear. Note the process ID that is associated with the process running the operator. DO NOT click OK at this time. You must first start the debugging process within Decoda.
Return to the Decoda IDE and select the Debug - Processes... menu item. This opens a window that lists all of the running processes. In this window, click on the ID column header to sort the processes by process ID, highlight the process corresponding to the operator whose code you are examining, and click Attach.
Momentarily the Decoda IDE will acknowledge that it is ready to start the debugging session.
Return to the message window of Step 4 and click OK. Execution of the dataflow will begin and a second message window will appear. DO NOT click OK at this time. You must first set break points and identify watch variables.
In the Decoda IDE, select the entry within the Project Explorer panel that corresponds to the code being examined and double-click on the function name. In the following screen shot both the filter optional function and the transform required function have code.
To set break points, place the cursor in front of a statement and press the F9 key. The F9 key can also be used to toggle the break point off and on.
To set watch variables, either: highlight the variable and drag-and-drop it into the Watch panel, or click in the Watch panel and enter the name of a variable.
Once break points and watch variables have been set, return to the message window of step 7 and click OK.
The application runs to the first break point; in this example to the break point in the filter function.
After examining the watch variables, press F5 to allow the code to execute to the next break point (in transform function).
Continue pressing F5 to process the remaining records.
This document covers integrating QlikView files in third-party applications using web ticketing and an iframe. I have also covered QlikView Server installation and configuration, and included a Java program to generate a web ticket and integrate the document in an iframe.
In order to read file data into an expressor data integration application, you must create a schema file that describes the structure of each record. expressor Studio has wizards that you use to create schemas for delimited files or database tables. A wizard for files with fixed width fields is not yet included in the product. However, since data within files is read into expressor Studio as strings, it is a simple matter to use the substring function to parse each fixed width record into individual fields.
Let's assume that your application needs to read a file where each record includes four fixed width fields of the following sizes: 2 characters, 10 characters, 15 characters, and 30 characters. Each 57 character record ends with a new line terminator. You can easily write a delimited file schema that describes this record format. The schema contains a single field, perhaps named "line," the record delimiter is the new line (or carriage return/new line), and, since the record contains only a single field, any character or character combination not in the actual data is an acceptable field delimiter. The corresponding composite type contains a single attribute (also named "line") of type string.
When your application reads this file, each incoming record will be a 57 character string. Immediately following the Read File operator, place a Transform operator. Define, in the composite type describing the output record, four attributes that correspond to the four fixed width fields in the incoming record. If appropriate, assign non-string types, such as integer, decimal, or datetime to these attributes. For the purposes of this discussion, let's assume that the first attribute should be handled as an integer, the second and third attributes are strings, and the fourth attribute is a datetime.
Now, within the Transform Editor, map the attribute in the incoming record to all four attributes in the outgoing record. For each mapping, use the string.substring function to extract the desired characters from the single incoming string and then change the type of the data if necessary. For example, the expression used to initialize the first string output attribute would be string.substring(input.line,3,12). Note the pattern used to set the starting and ending characters of the substring. The starting character is the ending character of the previous field plus one and the ending character is the ending character of the previous field plus the width of the next field.
Initializing the fourth output attribute is a little more involved as you must use the string.datetime function to convert the extracted string into a datetime value. And since the data in each fixed width field may not actually require the full field width, it's a good practice to use the string.trim function to trim trailing space characters from each substring.
If you choose to use an expression rule, you can perform all four extractions in a single rule.
Alternatively, you may choose to use a function rule.
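Since the screenshots are not reproduced here, a function rule along these lines can be sketched in plain Lua, with string.sub standing in for datascript's string.substring and string.gsub for string.trim; the attribute names, and treating the datetime field as a trimmed string, are assumptions for illustration:

```lua
-- Hypothetical function rule: parse one 57-character fixed-width record
-- with field widths 2, 10, 15, and 30. Attribute names are assumptions.
function transform(input)
  local line = input.line
  local output = {}
  output.id    = tonumber(string.sub(line, 1, 2))   -- 2-char integer field
  output.code  = string.sub(line, 3, 12)            -- 10-char string field
  output.name  = string.sub(line, 13, 27)           -- 15-char string field
  output.stamp = string.sub(line, 28, 57)           -- 30-char field (a datetime in datascript)
  -- trim trailing spaces, since the data may not fill the full field width
  output.code  = (output.code:gsub("%s+$", ""))
  output.name  = (output.name:gsub("%s+$", ""))
  output.stamp = (output.stamp:gsub("%s+$", ""))
  return output
end
```

In the real datascript version, the trimmed stamp string would additionally be passed through string.datetime to produce a datetime value, as described above.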
The preceding approach is fine if the number of fields in the incoming record is relatively small. But how would you handle a record with many fields? In this case, it might be too tedious to write an assignment statement for each field, so you need to use some sort of a loop to handle the processing. While there are many ways to approach this objective, the following screen shot illustrates the basic logic.
Beginning on the first line, you define a numerically indexed Datascript table where each element's value is another numerically indexed table with two elements. The first element in this nested table is the name of an output attribute while the second element is the length of the field. Note that the elements within the table fields are in the same order as the fields in the input record.
Then within the transform function, parsing of the input record is performed within the ipairs iterator function. As each element of the table fields is retrieved the code extracts the name of the output attribute and the corresponding characters from the input string. If necessary, the extracted value is then converted into a different data type such as integer or datetime. Each value parsed from the input is then used to initialize an output attribute and when the ipairs loop completes, the output record is emitted.
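The table-driven logic described above can be sketched standalone in plain Lua (field names and widths here are assumptions; a real datascript version would also convert types such as integer and datetime as each field is extracted):

```lua
-- Hypothetical table-driven parser: one { name, width } pair per field,
-- listed in the same order as the fields in the input record.
local fields = {
  { "id",    2  },
  { "code",  10 },
  { "name",  15 },
  { "stamp", 30 },
}

function transform(input)
  local output = {}
  local pos = 1
  for _, field in ipairs(fields) do
    local name, width = field[1], field[2]
    local value = string.sub(input.line, pos, pos + width - 1)
    output[name] = (value:gsub("%s+$", ""))  -- trim trailing spaces
    pos = pos + width
  end
  return output
end
```

Adding a field then only requires a new entry in the fields table, rather than another assignment statement.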