
How to: Getting started with the OpenAI Connector in Qlik Automate


Last Update:

Mar 5, 2026 5:55:26 AM

Updated By:

Sonja_Bauernfeind

Created date:

Jun 20, 2023 3:52:07 PM


This article provides an overview of getting started with the OpenAI connector in Qlik Automate. 

The OpenAI connector offers developers a range of powerful natural language processing capabilities. It allows for tasks such as text generation, translating between languages, analyzing sentiment, summarizing content, and building question-answering systems. These features enable you to bring additional value to your existing automations.


Authentication

Create a new automation and search for the OpenAI connector in the block library on the left side. Drag a block inside the automation editor canvas, and make sure to select the block to show the block configuration menu on the right side of the editor. Open the Connect tab in the configuration menu and provide your OpenAI API key. Visit your API Keys page to retrieve the API key you'll use in your requests.
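If you want to verify an API key before wiring it into a connection, the connector's authentication boils down to a Bearer token header on each request. A minimal sketch using only the Python standard library (the endpoint path is just an example; the request is built but never sent):

```python
import urllib.request

def build_openai_request(api_key: str, path: str = "/v1/models") -> urllib.request.Request:
    """Build an authenticated GET request to the OpenAI API.

    The connector sends the same Authorization header when a block runs.
    """
    return urllib.request.Request(
        url="https://api.openai.com" + path,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="GET",
    )

# Build (but do not send) a request with a placeholder key.
req = build_openai_request("sk-...")
print(req.get_header("Authorization"))  # prints: Bearer sk-...
```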

[Image: the Connect tab of the OpenAI connector block configuration]

 

Working with OpenAI blocks 

Once the connection to your OpenAI account has been created, you can start building an automation that uses the connector. 

The available blocks are:

  • List Models: This block returns a list of available models, including their names, descriptions, and capabilities.

  • Retrieve Model: This block returns detailed information about a specific model, including its parameters, training data, and performance metrics.

  • Chat Completion: This block generates a model response for a given conversation. The request body includes the messages, the model, and other parameters; the response body contains the generated reply. More information about each parameter in this block can be found at the end of this article.

  • Chat Completion Message: This block generates a message that can be passed into the Messages parameter of the Chat Completion block. The Messages parameter requires a list of messages, but this block outputs a single JSON object. To convert it into a list, use a Variable block of type List, append the output of this block to that list variable, and then pass the list variable to the Chat Completion block.

  • Function Tool: This block is used to configure a tool parameter of type function in the Chat Completion block. Since the block returns a JSON object, you’ll need to convert it into a list before using it. To do that, use a Variable block of type List and append the output of this block to the list variable. Then, reference that List variable in the Chat Completion block.

  • Raw API Request: This block lets you make a generic request to the OpenAI API, with a configurable HTTP method and query or body parameters.

  • Raw API List Request: This block lets you perform a generic GET request to the OpenAI API and returns the results as a list over which you can loop.
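The object-to-list conversion described for the Chat Completion Message and Function Tool blocks can be sketched in plain Python; the field layouts below are illustrative assumptions, not the blocks' exact output:

```python
# Each Chat Completion Message block emits one JSON object (role + content).
message = {"role": "user", "content": "Summarize this quarter's sales."}

# The Chat Completion block's Messages parameter expects a list, so append
# each object to a list variable (the Variable block of type List).
chat_messages = []          # corresponds to a Variable block of type List
chat_messages.append(message)

# The same wrapping applies to Function Tool output and the Tools parameter.
tool = {"type": "function", "function": {"name": "get_weather"}}
tools = [tool]

print(chat_messages)
print(tools)
```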

For more details, please refer to the OpenAI API documentation.

At the time of writing this article, the Images and Audio endpoints in the OpenAI API are in beta state but can be used through the Raw API Request blocks.

 

Tips and limitations

Below are a couple of tips and limitations to keep in mind when working with the OpenAI connector in automations.

  1. If the usage quota is exceeded with the Free OpenAI account, the following error message will be displayed. Upgrading to a paid OpenAI account will resolve the issue.
    {
        "error": {
            "message": "You exceeded your current quota, please check your plan and billing details.",
            "type": "insufficient_quota",
            "param": null,
            "code": null
        }
    }​
  2.  For an overview of the rate limitations in the OpenAI API, please refer to this documentation.
  3. Raw API Request block: The connector does not support OpenAI API endpoints that use the multipart/form-data content type. The following endpoints are therefore not supported in the Raw API Request block:
    • Upload file
    • Create image edit
    • Create image variation
    • Create transcription
    • Create translation
  4. No paging for list blocks: The OpenAI API does not support paging, so all records a list block retrieves are returned in a single API response. The List Models and Raw API List Request blocks are examples of list blocks.
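If you handle API responses yourself (for example, from a Raw API Request block), the quota error shown above can be detected programmatically. A minimal sketch:

```python
import json

def is_quota_error(response_body: str) -> bool:
    """Return True if an OpenAI error response reports an exceeded quota."""
    try:
        payload = json.loads(response_body)
    except json.JSONDecodeError:
        return False
    return payload.get("error", {}).get("type") == "insufficient_quota"

sample = (
    '{"error": {"message": "You exceeded your current quota, '
    'please check your plan and billing details.", '
    '"type": "insufficient_quota", "param": null, "code": null}}'
)
print(is_quota_error(sample))  # prints: True
```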

 

A Deep Dive into Chat Completion Block

Chat completion block: The Model and Messages parameters, the only required parameters in this block, are enough to produce a basic response. For a more predictable response, provide at least the Messages, Model, and Max Completion Tokens input parameters. The other input parameters can be used to refine the response further.

  1. Messages: This parameter expects a list of messages representing the conversation so far.

    Use the Chat Completion Message block to generate a message. Since that block returns a JSON object, you must convert it into a list before passing it to this parameter. To do this, create a Variable of type List and append the output of the Chat Completion Message block to that list variable. Finally, use this List variable as the input for the parameter.

  2. Model: Specifies which model to use for generating completions. You can use the lookup functionality or the List Models block to see all available models, or see OpenAI's Model overview for their descriptions.

    Example value: "gpt-4o-mini"

  3. Max Completion Tokens: Specifies the maximum number of tokens in the generated completion response. More information about how the token count is calculated by OpenAI can be found here: Tokenizer.

    Example: 50
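As a raw request body, the minimum configuration described above might look like this (the model name is only an example):

```python
import json

# Minimal Chat Completion request body: model, messages, and a token cap.
minimal_body = {
    "model": "gpt-4o-mini",  # example model name
    "messages": [
        {"role": "user", "content": "Explain tokens in one sentence."}
    ],
    "max_completion_tokens": 50,
}

print(json.dumps(minimal_body, indent=2))
```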

The following parameters are optional in most use cases but could be used to fine-tune the response:

  1. Tools: A list of tools that the model can invoke. At present, only functions are supported as tools. Use the Function Tool block to define a function. This block returns a JSON object, which must then be converted into a list. To do so, create a Variable of type List and add the output of the Function Tool block to that list variable. Finally, use this List variable as the input of this parameter.

  2. Tool Choice: Controls which (if any) function is called by the model.

    none means the model will not invoke any functions and will generate a regular message instead.

    auto allows the model to decide whether to produce a message or call one or more functions.

    required forces the model to call at least one function. You can also enforce a specific function by setting: { "type": "function", "function": { "name": "my_function" } }, which compels the model to call that exact function.

    none is the default when no tools are present; auto is the default if tools are present.

  3. Logit Bias: The `logit_bias` parameter lets you make specific tokens more or less likely to appear in the generated text. It maps token IDs (not words) to a bias value between -100 and 100; positive values make a token more likely, and negative values make it less likely. For instance, boosting the token ID for "cat" while penalizing the token ID for "bird" steers the model toward cat-related content.

    Example (hypothetical token IDs):

    {
    "9246": 5,
    "12717": -5
    }

  4. Number Of Completions: Specifies the number of completions to generate.

    Example: 3

  5. Presence Penalty: Penalizes tokens that have already appeared in the text, making the model more likely to move on to new topics. Values range from -2.0 to 2.0; positive values (e.g., 0.6) reduce repetition, while negative values make repetition more likely.

    Example: 0.6

  6. Temperature: Controls the randomness of the output. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic.

    Example: 0.6

  7. Top p: Controls the diversity of the output through nucleus sampling: the model considers only the most likely tokens whose cumulative probability stays within the top_p threshold.

    Example: 0.8 

  8. User: An optional identifier for your end user, which can help OpenAI monitor usage and detect abuse.

    Example: "12345"

  9. Response Format:  Defines the format of the model’s response output. Setting it to { "type": "json_schema", "json_schema": { ... } } enables Structured Outputs, ensuring the model strictly adheres to the provided JSON schema. Setting it to { "type": "json_object" } enables the legacy JSON mode, which guarantees that the model’s response is valid JSON but does not enforce a specific schema structure. For models that support it, using json_schema is recommended.
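Combined into a single raw request body, the required and optional parameters above might look like the following sketch. All values are illustrative, and the logit_bias keys are hypothetical token IDs:

```python
import json

# A Chat Completion request body combining the required parameters with
# several optional tuning parameters. All values are illustrative.
body = {
    "model": "gpt-4o-mini",                        # example model name
    "messages": [{"role": "user", "content": "Name three cat breeds."}],
    "max_completion_tokens": 50,
    "n": 3,                                        # Number Of Completions
    "temperature": 0.6,
    "top_p": 0.8,
    "presence_penalty": 0.6,
    "user": "12345",
    # logit_bias maps token IDs (as strings) to a bias from -100 to 100.
    "logit_bias": {"9246": 5, "12717": -5},        # hypothetical token IDs
}

print(json.dumps(body, indent=2))
```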

 

How to use Structured Outputs with Chat Completion Block

Structured Outputs is an advanced OpenAI feature that ensures AI responses always adhere to a JSON schema you define. This guide demonstrates step-by-step how to use the Chat Completion block in Qlik Automate to implement Structured Outputs using a practical math tutoring example.

 

What are Structured Outputs?

Structured Outputs guarantee that OpenAI models return responses in a specific JSON format you define, eliminating the need for post-processing validation or retries. Unlike JSON mode (which only ensures valid JSON), Structured Outputs enforce schema adherence.

 

Supported Models

Structured Outputs requires the gpt-4o-mini, gpt-4o-mini-2024-07-18, or gpt-4o-2024-08-06 model snapshots, or later.

This example demonstrates how to create a Qlik automation that uses Structured Outputs to provide step-by-step math solutions with guaranteed formatting.

 

Step 1: Define Your Response Schema

Before configuring the Chat Completion block, define the JSON schema that describes the exact structure of the response.

Math Reasoning Schema:

{
  "type": "json_schema",
  "json_schema": {
    "name": "math_reasoning",
    "schema": {
      "type": "object",
      "properties": {
        "steps": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "explanation": {
                "type": "string"
              },
              "output": {
                "type": "string"
              }
            },
            "required": [
              "explanation",
              "output"
            ],
            "additionalProperties": false
          }
        },
        "final_answer": {
          "type": "string"
        }
      },
      "required": [
        "steps",
        "final_answer"
      ],
      "additionalProperties": false
    },
    "strict": true
  }
}

This schema requires:

  • A steps array containing objects with explanation and output fields
  • A final_answer string
  • All fields are required
  • No additional properties are allowed
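Before wiring the response into downstream blocks, you can sanity-check that a parsed response matches these requirements. A minimal sketch using only the standard library, with a made-up sample response:

```python
import json

def check_math_reasoning(payload: dict) -> bool:
    """Verify an object has the shape the math_reasoning schema requires."""
    if set(payload) != {"steps", "final_answer"}:
        return False  # exactly the two required fields, no extras
    if not isinstance(payload["final_answer"], str):
        return False
    return all(
        isinstance(step, dict) and set(step) == {"explanation", "output"}
        for step in payload["steps"]
    )

sample = json.loads(
    '{"steps": [{"explanation": "Subtract 7 from both sides.", '
    '"output": "8x = -30"}], "final_answer": "x = -3.75"}'
)
print(check_math_reasoning(sample))  # prints: True
```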

 

Step 2: Create the Automation

  1. Go to your Qlik Sense tenant and create a new automation.

  2. Search for the OpenAI connector in the block library.

  3. Drag the Chat Completion Message block into the automation canvas.

  4. Open the Connection tab in the configuration panel on the right, and either choose an existing connection or enter a new key.

  5. Select the Chat Completion Message block to display the configuration panel on the right.

    This produces one message object (role + content) each time it runs.

    Configure the role (system/user/assistant) and the message text.

  6. Use a Variable block of type List to collect messages:

    Create a list variable (e.g. chatMessages).

  7. Repeat step 5 for all messages you need in the conversation.

  8. Add each Chat Completion Message block output as an item to chatMessages.

 

Step 3: Configure the Chat Completion Block

  1. Drag a Chat Completion block into the automation.

  2. In the Inputs tab of the Chat Completion block, set the following:

    1. Model

      Set to: gpt-4o-mini, gpt-4o-mini-2024-07-18, gpt-4o-2024-08-06 (or later). Required for Structured Outputs support.

    2. Messages

      Map this to the list variable (e.g., chatMessages) that you populated using the Chat Completion Message blocks.

      If you prefer not to use a variable, you can still input a literal JSON array manually, but the variable approach ensures dynamic message construction.

      Example manual array:

      [
        {
          "role": "system",
          "content": "You are a helpful math tutor. Guide the user through solving math problems step by step. Provide clear explanations and show all work."
        },
        {
          "role": "user",
          "content": "How can I solve 8x + 7 = -23?"
        }
      ]

       

    3. Response Format (Critical for Structured Outputs)

      Set the response_format parameter to include your schema:

      {
        "type": "json_schema",
        "json_schema": {
          "name": "math_reasoning",
          "schema": {
            "type": "object",
            "properties": {
              "steps": {
                "type": "array",
                "items": {
                  "type": "object",
                  "properties": {
                    "explanation": {
                      "type": "string"
                    },
                    "output": {
                      "type": "string"
                    }
                  },
                  "required": [
                    "explanation",
                    "output"
                  ],
                  "additionalProperties": false
                }
              },
              "final_answer": {
                "type": "string"
              }
            },
            "required": [
              "steps",
              "final_answer"
            ],
            "additionalProperties": false
          },
          "strict": true
        }
      }

      Add an object formula on top of this JSON schema to convert it into a proper JSON object.

      Important: Set "strict": true to enable Structured Outputs mode.

 

Step 4: Run the Automation

After the Chat Completion block executes, the response is parsed and available in a structured format. You can then use the structured output to build the formatted message and send it over Slack or email.
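As an illustration of that last step, the structured content can be parsed and flattened into a plain-text message; a sketch assuming the usual Chat Completion response layout, with a made-up response body:

```python
import json

# A made-up Chat Completion response carrying Structured Outputs content.
response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": json.dumps({
                    "steps": [
                        {"explanation": "Subtract 7 from both sides.", "output": "8x = -30"},
                        {"explanation": "Divide both sides by 8.", "output": "x = -3.75"},
                    ],
                    "final_answer": "x = -3.75",
                }),
            }
        }
    ]
}

def format_solution(response: dict) -> str:
    """Turn the structured math_reasoning content into a readable message."""
    content = json.loads(response["choices"][0]["message"]["content"])
    lines = [
        f"Step {i}: {step['explanation']} -> {step['output']}"
        for i, step in enumerate(content["steps"], start=1)
    ]
    lines.append(f"Answer: {content['final_answer']}")
    return "\n".join(lines)

print(format_solution(response))
```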

You can find the demo automation attached to this article (open-ai-structured-output.json).

 

Best Practices

  • Keep schemas simple: Start with basic schemas before adding complexity.
  • Use descriptions: Add clear descriptions to all properties.
  • Provide examples: Include examples in your system prompt.
  • Test schemas: Validate with sample prompts before production.
  • Version control: Track schema changes for consistency.
  • Handle refusals: Always check for model refusals in responses.

Comments
Antoine04
Partner - Creator III

Hello,

First, thank you for the article.

I am currently working on the subject and I noticed that some new blocks are now available, such as:

- Chat Completion

- Chat Completion Message

- Chat Completion Function

Any chance of finding an article about these?

Thank you

Regards,

Antoine L.

Sonja_Bauernfeind
Digital Support

Hello @Antoine04 

Let me look into this for you.

All the best,
Sonja 

Jayarams
Support

@Antoine04 We don't currently have an article covering the above blocks.

We will either write a new article or update the existing article with the information on those blocks. We will keep you updated on this.
Antoine04
Partner - Creator III

Hello to both of you,

Thanks for the reply. Can't wait to see it!

I have to say I already tried it and it works in my case, but it would be nice to have best practices on how to configure it 🙂

Regards 

MichielHofsteenge
Partner - Contributor III

Hi @Sonja_Bauernfeind

I'm wondering if a new article or documentation update about the new OpenAI blocks in Automate will be published soon. I haven't been able to find any recent information, and I'm currently running into an issue where either the endpoint isn't recognized, or the variable for the Chat Completion block isn't being populated correctly.

Kind regards,

Michiel

 

 

 

 
