Intelligence, woven in.
Silk AI is a family of reasoning & coding-first language models by Loomic — built to think deeper, code better, and cost less. Drop-in OpenAI-compatible. No bloat. No lock-in.
Multi-agent system
A team of agents,
not a single model.
Silk AI powers coordinated agent pipelines where each model owns a distinct role — just like a real team.
Decomposes goals, delegates to specialists, synthesises results.
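The decompose → delegate → synthesise loop can be sketched in a few lines. This is a minimal illustration, not Loomic's actual orchestration code: `call_model` is a hypothetical stub standing in for a real chat call to a Silk model.

```python
def call_model(role: str, prompt: str) -> str:
    # Stub standing in for a real Silk chat call; in production each role
    # would get its own system prompt and model (e.g. model="silk-base").
    return f"[{role}] {prompt}"

def run_pipeline(goal: str, specialists: list[str]) -> str:
    """Pass the goal through each specialist in turn, then have an
    integrator synthesise the final result."""
    result = goal
    for role in specialists:
        result = call_model(role, result)
    return call_model("integrator", result)

output = run_pipeline("Fix ticket", ["researcher", "coder", "reviewer", "tester"])
```

Each card below is one such pipeline with a different set of specialist roles.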
Software build
From ticket to production, fully automated.
Researcher
Context & prior art
Coder
Implementation
Reviewer
Code review
Tester
QA
Integrator
Assembly
Research report
Question in, cited report out.
Researcher
Source gathering
Analyst
Evidence analysis
Writer
Drafting
Reviewer
Fact-checking
Publisher
Delivery
Content pipeline
Brief in, polished piece out.
Researcher
Topic research
Writer
First draft
Editor
Refinement
Reviewer
Quality check
Publisher
Distribution
Data analysis
Raw data in, actionable insights out.
* Estimated wall-clock time on Silk Base with a typical workload. Actual times vary by task complexity.
Model family
The Silk model family
Three models. One architecture. Reasoning- and coding-first, from the ground up.
Silk Mini
Fast and efficient for everyday tasks — low latency, high throughput.
- Instruction following
- Summarisation & rewriting
- Quick Q&A
- Light code generation
Silk Base
Balanced performance and reasoning for complex workloads.
- Advanced reasoning
- Long-context analysis
- Code review & refactoring
- Agentic tool use
Silk Large
Maximum capability — state-of-the-art coding, logic, and research.
- Deep multi-step reasoning
- 100K+ LOC codebases
- Research & synthesis
- FIM & autonomous agents
model: "silk-mini"
model: "silk-base"
model: "silk-large"
Why Silk
Everything you need.
Nothing you don't.
Silk models are designed to be the last models you need to evaluate. Sensible defaults, powerful capabilities.
Reasoning first
Every Silk model is trained with explicit <think> token reasoning traces, letting you observe the model's internal chain-of-thought before the final answer.
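If the trace is returned inline, separating it from the final answer is a one-liner. A minimal sketch, assuming the `<think>…</think>` block appears once at the start of the completion (check the Silk docs for the exact output format):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Separate a <think>...</think> trace from the final answer."""
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", text, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return "", text.strip()  # no trace present

trace, answer = split_reasoning("<think>2+2 is 4</think>The answer is 4.")
```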
Built for code
Native fill-in-the-middle (FIM) support, context windows sized for 100K+ LOC codebases, and a training corpus weighted toward real-world code in 40+ languages.
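A FIM request wraps the code before and after the cursor in sentinel tokens and asks the model for the middle. The sentinel names below are hypothetical placeholders — the actual tokens are model-specific, so consult the Silk documentation for the real ones:

```python
# Hypothetical sentinel tokens; substitute the ones from the Silk docs.
FIM_PREFIX, FIM_SUFFIX, FIM_MIDDLE = "<|fim_prefix|>", "<|fim_suffix|>", "<|fim_middle|>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Ask the model to generate the code between prefix and suffix."""
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}"

prompt = build_fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
```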
OpenAI-compatible
Drop in with a single URL swap. Our API surface mirrors OpenAI's Chat Completions spec — no SDK changes, no migration friction.
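Because the surface mirrors the Chat Completions spec, the request body is the standard shape and only the base URL and key change. A stdlib-only sketch — the endpoint URL here is an assumed placeholder, not Loomic's real one:

```python
import json
import urllib.request

BASE_URL = "https://api.loomic.example/v1"  # hypothetical endpoint

def build_chat_request(model: str, content: str, api_key: str) -> urllib.request.Request:
    """Build a standard Chat Completions request against the Silk base URL."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request("silk-base", "Hello", "sk-demo")
# urllib.request.urlopen(req) would send it; omitted here.
```

With the official OpenAI SDK, the same swap is just pointing `base_url` at the Silk endpoint and keeping the rest of your code unchanged.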
Long context
Silk Base and Large support up to 128K context tokens. Analyse full codebases, legal documents, or extended conversations without chunking.
Fast inference
Our inference stack is optimised for low time-to-first-token and high throughput. Silk Mini delivers sub-200ms TTFT on standard hardware.
Your data stays yours
Requests are never logged for training. EU-hosted infrastructure, GDPR-compliant, with per-key granular access controls built in.
Be first to build with Silk AI.
Join the waitlist and get notified the moment Silk AI launches. Early access members get priority onboarding and locked-in launch pricing.