Use any AI model locally on your device.

Connect the providers you already use, approve tool access before actions run, and see every thread, file, and result in one place.


Now in your pocket.

Connect Decubed to your own model provider

Use your keys, linked accounts, local runtimes, or any OpenAI-compatible endpoint without changing the thread workflow.
OpenAI
Anthropic
Google Gemini
Mistral AI
Cohere
xAI
Groq
Together AI
Perplexity
Azure OpenAI
AWS Bedrock
Ollama
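As an illustration of what "any OpenAI-compatible endpoint" means, here is a minimal sketch: the same chat-completions request shape works against a hosted API or a local runtime such as Ollama, which serves the OpenAI schema at `http://localhost:11434/v1` by default. The helper name, the placeholder key, and the model names are assumptions for this sketch, not part of Decubed.

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, user_message: str):
    """Build one OpenAI-compatible chat-completions request.

    The same shape works for hosted APIs and local runtimes, so the
    target can change without changing the calling code.
    """
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # local runtimes accept any key
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    })
    return url, headers, body

# Same call shape whether the target is hosted or local:
hosted = build_chat_request("https://api.openai.com/v1", "YOUR_KEY", "gpt-4o-mini", "hi")
local = build_chat_request("http://localhost:11434/v1", "ollama", "llama3", "hi")
```

Because only `base_url` changes between the two calls, switching providers is a configuration change rather than a code change.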

One Runtime

We are building the workspace layer for the creators, maintainers, and contributors of the next generation of AI infrastructure.

Unified stack

Decubed manages your threads, files, tools, memory, approvals, and model paths in one portable workspace layer. Read more

Execution log

Threads become durable timelines: intent, reasoning, tool calls, approvals, files, and final outputs remain inspectable. Read more

Permissioned actions

Browsers, files, calendars, automations, shell commands, and app connectors are visible capabilities, not hidden model magic. Read more
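The idea of "visible capabilities" can be sketched as a toy approval gate (not Decubed's code; the class and field names are hypothetical): a tool call only executes after an explicit approval decision, and the decision itself is recorded for the execution log.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class ApprovalGate:
    # Every decision is appended here, so the log shows denials too.
    log: list = field(default_factory=list)

    def run(self, tool: str, action: Callable[[], str], approved: bool) -> Optional[str]:
        self.log.append({"tool": tool, "approved": approved})
        if not approved:
            return None          # denied actions never execute
        return action()

gate = ApprovalGate()
gate.run("shell", lambda: "ls output", approved=False)        # blocked
result = gate.run("files", lambda: "file contents", approved=True)
```

The point of the pattern is that capability use and user consent travel together: the log entry exists whether or not the action ran.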

Switch models

Run local, remote, linked-account, or custom models while the environment keeps the same memory and permissions. Read more
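A toy sketch of that separation (not Decubed's internals; all names here are hypothetical): the workspace owns memory and permissions, while the model is just a swappable callable, so switching models leaves the accumulated context untouched.

```python
class Workspace:
    """Holds state that outlives any single model choice."""

    def __init__(self):
        self.memory = []                              # persists across switches
        self.permissions = {"files": True, "shell": False}

    def ask(self, model, prompt):
        self.memory.append(prompt)
        # The model sees the same memory regardless of which backend it is.
        return model(prompt, context=list(self.memory))

local_model = lambda prompt, context: f"local:{prompt}"
hosted_model = lambda prompt, context: f"hosted:{prompt}"

ws = Workspace()
ws.ask(local_model, "first question")
answer = ws.ask(hosted_model, "second question")       # same memory, new model
```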

Control your environment and steer models your own way.

Providers available now: 12

Provider coverage

Local runtimes, hosted APIs, and linked accounts in one workspace.

100+ tools and connectors
3 ways to connect models
Local-first state and run history

Move between local models, API endpoints, linked accounts, threads, approvals, and files without losing the trail of what happened.
