Omniscope now supports OpenRouter, a powerful platform that provides unified access to a wide range of large language models (LLMs) from all major providers — including OpenAI, Anthropic, Google, Mistral, and others. This integration allows you to experiment with and switch between models directly inside Omniscope’s AI-powered features.
Whether you’re generating reports, building workflows, or querying data, OpenRouter gives you flexibility, choice, and cost control, all from within Omniscope.
Federated OpenRouter API Key Usage
To use OpenRouter in Omniscope, you’ll need to obtain a single OpenRouter API key.
This key serves as a federated gateway to all supported model providers, meaning you don’t need to manage or enter separate API keys for OpenAI, Anthropic, Mistral, or others directly in Omniscope.
Once configured, Omniscope connects through OpenRouter, which securely routes your requests to the chosen provider and model behind the scenes.
If you have existing API keys for specific providers, you can link them to your OpenRouter account, enabling direct billing through those providers while still managing everything from one place.
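Under the hood, each request travels through OpenRouter's OpenAI-compatible API using that one key. As an illustration of what Omniscope does on your behalf, here is a minimal Python sketch (the `OPENROUTER_API_KEY` environment variable name and the prompt are our own conventions, not Omniscope settings):

```python
import os

import requests

# One federated key covers every provider; Omniscope issues the
# equivalent of this request on your behalf when you use its
# AI integrations.
api_key = os.environ["OPENROUTER_API_KEY"]  # your single OpenRouter key

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        # Changing this one string switches provider and model --
        # no separate OpenAI/Anthropic/Mistral keys required.
        "model": "anthropic/claude-3.5-sonnet",
        "messages": [{"role": "user", "content": "Say hello."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```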
What is OpenRouter?
OpenRouter acts as a single gateway to multiple LLMs from different providers. Instead of managing separate APIs for OpenAI, Anthropic, Mistral, and others, you can use OpenRouter to access them all through a single integration.
It also gives you access to models that may not otherwise be directly available in Omniscope — including open-source and local models running in the cloud. This makes it easy to test different model capabilities and choose the right one for your specific task.
How the Integration Works in Omniscope
You can use OpenRouter models in several of Omniscope’s AI integrations:
- **AI Completion Block (Workflow Executions)** – For direct text completions, code generation, or structured outputs.
- **Report Ninja** – For AI-assisted report generation.
- **Workflow Ninja** – For end-to-end workflow automation using AI.
- **Data Q&A** – For natural language querying and analysis of your datasets.
Once you connect your OpenRouter account in Omniscope, you’ll be able to select any available model when configuring these AI integrations.
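If you'd like to preview the available models outside Omniscope, OpenRouter publishes its catalogue via a public endpoint; a minimal sketch:

```python
import requests

# List every model currently available through OpenRouter.
# The identifiers printed here are the same ones that appear
# in Omniscope's model selection dropdowns.
resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
resp.raise_for_status()

for model in resp.json()["data"]:
    print(model["id"])
```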
Model Compatibility Across Integrations
Not all OpenRouter models are compatible with every Omniscope AI integration. This depends on the model’s capabilities — particularly whether it supports tools or structured JSON.
Integrations like Report Ninja and Workflow Ninja work best with models that support tool calling (function calling). A model without tool support can still be used, but it won't perform as well as one that does.
Integrations like the AI Completion Block can work with a wider range of models, including those that don’t support tools.
In practice, this means a given model might work perfectly in one Omniscope feature but only partially, or not at all, in another.
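At the time of writing, OpenRouter's public model catalogue also reports which request parameters each model accepts, which can serve as a rough capability check before assigning a model to a tool-dependent integration. The sketch below assumes the `supported_parameters` field currently returned by that endpoint; treat it as illustrative, since response fields may change:

```python
import requests


def supports_tools(model_id: str) -> bool:
    """Best-effort check: does this OpenRouter model advertise
    tool-calling support in the public model catalogue?"""
    resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
    resp.raise_for_status()
    for model in resp.json()["data"]:
        if model["id"] == model_id:
            # "tools" in supported_parameters indicates tool/function calling.
            return "tools" in model.get("supported_parameters", [])
    raise ValueError(f"Model not found on OpenRouter: {model_id}")


# Models that support tools suit Report Ninja / Workflow Ninja;
# the rest are better paired with the AI Completion Block.
print(supports_tools("anthropic/claude-3.5-sonnet"))
```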
Tested models
The following OpenRouter models have been tested to work out of the box with most OpenRouter-compatible AI integrations. They support a range of capabilities including text generation, structured outputs, and tool calling where available.
| Model | Description | Advantages | Considerations |
|---|---|---|---|
| amazon/nova-lite-v1 | A fast, lightweight model from Amazon’s “Nova” family designed for efficiency and general-purpose reasoning. | Low latency, cost-effective, good for everyday completions and summarisation. | Less capable for complex reasoning or multi-step analysis. |
| anthropic/claude-3.5-sonnet | Anthropic’s latest mid-tier Claude model balancing reasoning ability and speed. Excellent for structured tasks, writing, and analysis. | Strong tool-use capabilities, excellent comprehension, safe and consistent outputs. | Slightly slower than smaller models; higher cost per token. |
| deepseek/deepseek-v3.1-terminus | A high-performance open model from DeepSeek optimised for code generation, logical reasoning, and math. | Great for technical workflows and structured outputs. | Can be verbose; requires careful prompting for creative writing tasks. |
| meta-llama/llama-4-maverick | Meta’s next-generation Llama model combining strong open-source reasoning with efficiency. | Good reasoning quality, supports JSON mode, widely compatible. | Tool-calling support may vary depending on configuration. |
| mistralai/mistral-7b-instruct | A small, instruction-tuned open model offering fast responses and low compute cost. | Ideal for lightweight tasks, drafts, and rapid iterations. | Limited reasoning depth; does not support advanced tool calling. |
| qwen/qwen3-coder-30b-a3b-instruct | A large-scale multilingual model tuned for programming and technical writing. | Excellent code generation and debugging; supports structured responses. | High resource usage and slower latency for long completions. |
| x-ai/grok-4-fast | A performance-optimised variant of Grok-4 from xAI, designed for conversational and reasoning tasks. | Fast, witty, and adaptable; performs well in natural dialogue. | May not always support structured JSON or tool-based integrations. |
Handling Tool Call Errors and Configuration Options
Tool Calls
When using an OpenRouter model that doesn’t support tool calls, OpenRouter may return an error indicating that the model does not implement this capability (e.g. a message stating that no endpoints supporting tool calling are available).
If this occurs, you can still use the model by disabling tool calls for the entire OpenRouter provider within Omniscope’s Admin app.
This change applies at the provider level, meaning all models accessed through that specific provider will be treated as not supporting tool calls.
The AI integrations will continue to work, but certain features may operate with reduced accuracy or capability.
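For comparison, the programmatic equivalent of that provider-level toggle is to retry a failed request without tools. The sketch below is illustrative; the exact error wording and status code vary by model, so the check is deliberately loose:

```python
import os

import requests

API_KEY = os.environ["OPENROUTER_API_KEY"]
URL = "https://openrouter.ai/api/v1/chat/completions"


def complete(model, messages, tools=None):
    """Try a completion with tools; if the model's endpoints reject
    tool calling, retry without tools -- the programmatic analogue of
    disabling tool calls at the provider level in Omniscope."""
    payload = {"model": model, "messages": messages}
    if tools:
        payload["tools"] = tools
    resp = requests.post(
        URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=60,
    )
    if not resp.ok and tools and "tool" in resp.text.lower():
        # Error body mentions tool support: fall back to a plain request.
        return complete(model, messages, tools=None)
    resp.raise_for_status()
    return resp.json()
```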
General errors
Likewise, if you encounter a general error, the cause may be that the model does not support structured JSON output. In this case, you can disable structured JSON for the provider in Omniscope’s Admin app.
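The same retry pattern covers structured JSON: if a request specifying OpenAI-style JSON mode fails, drop the `response_format` field and parse the reply yourself. Again a sketch, not Omniscope's actual implementation:

```python
import json
import os

import requests

API_KEY = os.environ["OPENROUTER_API_KEY"]
URL = "https://openrouter.ai/api/v1/chat/completions"


def complete_json(model, messages):
    """Ask for structured JSON output; if the model rejects the
    response_format parameter, retry as a plain completion -- the
    programmatic analogue of the Admin-app toggle above."""
    payload = {
        "model": model,
        "messages": messages,
        "response_format": {"type": "json_object"},  # OpenAI-style JSON mode
    }
    headers = {"Authorization": f"Bearer {API_KEY}"}
    resp = requests.post(URL, headers=headers, json=payload, timeout=60)
    if not resp.ok:
        # Structured JSON may be unsupported: retry without it.
        payload.pop("response_format")
        resp = requests.post(URL, headers=headers, json=payload, timeout=60)
    resp.raise_for_status()
    content = resp.json()["choices"][0]["message"]["content"]
    return json.loads(content)  # may still fail if the model ignores JSON
```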
Configuring Different Models with Different Capabilities
If you want to use both models with and without tool calling capabilities, you’ll need to create two separate OpenRouter providers within Omniscope:
- **Provider A (Tool Calls Enabled):** For models that support tool/function calling (e.g., GPT-4o, Claude 3.5).
- **Provider B (Tool Calls Disabled):** For models that do not support tool calls or return errors when used with them.
Each integration or workflow block can then be configured to use the appropriate provider, allowing you to mix and match models according to their strengths and compatibility.
Using Your Own API Key (BYOK)
OpenRouter supports Bring Your Own Key (BYOK) functionality. This means that if you already have an API key for a provider such as OpenAI, you can link it through OpenRouter and use it within Omniscope without paying additional OpenRouter usage fees (currently up to 1 million queries).
This gives you maximum flexibility and helps you manage costs efficiently, while still benefiting from Omniscope’s seamless AI integration layer.
Example Use Cases
- **Report generation:** Use Anthropic’s Claude or OpenAI’s GPT models in Report Ninja to automatically explain findings or draft narrative reports.
- **Workflow automation:** Leverage Mistral or Gemini models in Workflow Ninja to perform intelligent data transformations or assist in decision automation.
- **Data exploration:** Query your datasets conversationally in Data Q&A using your preferred model, whether it’s a top-tier commercial one or an efficient open-source model from OpenRouter.