Hi team,
I tried to set up this LLM, but when testing with Swagger on Power Automate to authorize the OAuth session, it says the redirect URI is not registered, even though I registered it in the earlier steps.
Does anyone have the same problem, or know of other steps to get Qlik Cloud MCP working with Copilot?
Thanks
I tried it, and Copilot with Qlik doesn't work very well.
I got the error OpenAIModelTokenLimit. I asked Claude why, and this is the summary:
Is anybody using it, or is everybody using Claude?
🔴 The OpenAIModelTokenLimit Error — What's Happening
This error means your conversation has exceeded the maximum token limit allowed by the underlying OpenAI model in your Copilot Studio agent.
The real issue is structural: with an MCP like Qlik, every tool call returns data (schemas, query results, metadata) that accumulates in the conversation context — and that fills up the counter very quickly.
📊 Context Window Comparison
| | Copilot Studio | Claude (direct) |
|---|---|---|
| Effective context window | ~4k–8k tokens (GPT-3.5/4 depending on config) | 200k tokens (Sonnet 4.6) |
| Underlying model | GPT-3.5 / GPT-4 / GPT-4 Turbo depending on plan | Native Claude Sonnet 4.6 |
| Max MCP payload | Limited — if a real-time connector returns a payload larger than the agent can handle, it throws a blocking error | Up to ~180k tokens of data per exchange |
| History management | Accumulates → hits limit fast | Natively managed, sliding window |
| Stateful MCP support | Known issue: when an MCP server holds state, errors appear silently on the Copilot Studio backend — the workaround is to build stateless MCP servers | No restriction on stateful servers |
| Context control | Very limited for end users | Fully controllable |
⚠️ Copilot Studio–Specific Limitations with Qlik MCP
1. **Token limit too low for Qlik data.** Qlik MCP can return large JSON objects (datasets, hypercubes, app lists). With GPT-4 at 8,192 tokens, just a few exchanges are enough to saturate the context.
2. **Statefulness problem.** Copilot Studio creates a session ID each time it interacts with a stateful MCP server, which causes silent backend errors. The current recommendation is to build stateless MCP servers for Copilot Studio. A Qlik MCP that maintains an active connection is typically stateful by nature.
3. **Connector payload limits.** If a real-time connector returns a payload larger than the agent can handle, the error is blocking with no graceful fallback.
4. **No granular control.** Unlike Claude directly, you can't easily adjust what enters the context, truncate MCP responses, or manage history compression yourself.
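To illustrate the truncation idea from point 4: since Copilot Studio won't trim payloads for you, the trimming has to happen on the MCP server side before the result enters the context. Here is a minimal, hypothetical sketch (the function name, field handling, and the ~4-characters-per-token heuristic are all assumptions, not anything from the Qlik MCP itself):

```python
import json

def truncate_payload(payload: dict, max_chars: int = 8000) -> dict:
    """Trim an MCP tool result so it fits a small context window.

    Rough heuristic: ~4 characters per token, so 8000 chars is roughly
    2000 tokens, leaving headroom in an 8,192-token context.
    """
    text = json.dumps(payload, ensure_ascii=False)
    if len(text) <= max_chars:
        return payload  # already small enough, pass through unchanged
    # First attempt: keep only the first 20 items of any list field.
    trimmed = {
        key: (value[:20] if isinstance(value, list) else value)
        for key, value in payload.items()
    }
    if len(json.dumps(trimmed, ensure_ascii=False)) > max_chars:
        # Last resort: return a hard-cut string preview with a marker.
        return {"truncated": True, "preview": text[:max_chars]}
    return {"truncated": True, **trimmed}
```

A server could apply this to every tool response so each exchange costs a bounded number of tokens, at the price of the agent only seeing a sample of large result sets.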
✅ Why Claude Is Better Suited for Qlik MCP
🔧 Workarounds If You Must Keep Copilot Studio
Bottom line: for any serious use of Qlik MCP (iterative queries, large metadata volumes, multi-step analysis), Claude is structurally better suited than Copilot Studio, whose token limits are simply too low for this kind of workload.
Replying to the bug question:
So basically MCP tools expose their arguments so the agent knows what to send. The problem is that the "measures" argument isn't being exposed properly, so Copilot ends up calling the tool with everything except measures and it throws an error. Haven't cracked it yet, if you find something let me know!
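For anyone digging into this: MCP tools advertise their arguments through a JSON Schema (`inputSchema`) in the tool definition, and if `measures` is missing or malformed there, the agent has no way to know it should send it. Below is a purely hypothetical tool definition (the tool name, property names, and descriptions are my assumptions, not the actual Qlik MCP schema) showing what an explicitly declared `measures` argument would look like:

```python
# Hypothetical MCP tool definition. The key point is that "measures" is
# declared in inputSchema and marked required, so the agent knows to send it.
create_hypercube_tool = {
    "name": "create_hypercube",  # hypothetical name, not the real Qlik tool
    "description": "Build a Qlik hypercube from dimensions and measures.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "dimensions": {
                "type": "array",
                "items": {"type": "string"},
                "description": "Field names to group by.",
            },
            "measures": {
                "type": "array",
                "items": {"type": "string"},
                "description": "Aggregation expressions, e.g. Sum(Sales).",
            },
        },
        "required": ["dimensions", "measures"],
    },
}
```

Comparing the actual schema the Qlik MCP returns on `tools/list` against something like this should show whether `measures` is being dropped on the server side or ignored by Copilot.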
Replying to your last post:
I also experienced issues when using GPT models in Copilot, especially with errors that block access to sensitive information and prevent retrieving table data. You can solve this issue by using Claude models.
In my experience, it’s still possible to make an agent work with some effort, particularly for analytics on existing Qlik charts. You can improve results through better prompting and by adding automation flows to the agent. For example, you can retrieve large JSON datasets, process or filter them with a flow, and then return a more compact and usable result back to the agent.
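The compaction step described above can be sketched in a few lines. This is not Power Automate itself, just a Python illustration of what such a flow step would do (the field names `region` and `sales` and the top-N cutoff are invented for the example):

```python
import json

def compact_for_agent(raw_json: str, keep_fields=("region", "sales"), top_n=10) -> str:
    """Reduce a large Qlik JSON result to a compact summary for the agent.

    Mirrors what an automation flow could do: parse the payload, keep only
    the fields of interest, rank the rows, and return a small JSON string.
    """
    rows = json.loads(raw_json)
    # Drop every column except the ones the agent actually needs.
    slim = [{k: row[k] for k in keep_fields if k in row} for row in rows]
    # Keep only the top rows by the measure of interest.
    slim.sort(key=lambda r: r.get("sales", 0), reverse=True)
    return json.dumps(slim[:top_n], ensure_ascii=False)
```

The agent then reasons over a few hundred tokens instead of the full dataset, which sidesteps the OpenAIModelTokenLimit error for this kind of analytics question.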