Intelligence, woven in.

Silk AI is a family of reasoning- and coding-first language models by Loomic — built to think deeper, code better, and cost less. Drop-in OpenAI-compatible. No bloat. No lock-in.

Coming soon — join the waitlist for early access.
3B–13B · Parameter range
128K · Context window
<200ms · Time to first token
40+ · Languages supported

Multi-agent system

A team of agents, not a single model.

Silk AI powers coordinated agent pipelines where each model owns a distinct role — just like a real team.

Director · Orchestrator

Decomposes goals, delegates to specialists, synthesises results.

Software build

5 agents · ~12 min*

From ticket to production, fully automated.

Director · Orchestrator
CI/CD ready · test coverage · zero-touch deploy

Researcher · Context & prior art
Coder · Implementation
Reviewer · Code review
Tester · QA
Integrator · Assembly

Research report

5 agents · ~8 min*

Question in, cited report out.

Director · Orchestrator
cited sources · structured output · fact-checked
1. Researcher · Source gathering
2. Analyst · Evidence analysis
3. Writer · Drafting
4. Reviewer · Fact-checking
5. Publisher · Delivery

Content pipeline

5 agents · ~6 min*

Brief in, polished piece out.

Director · Orchestrator
SEO-optimised · multi-format · brand voice

Researcher · Topic research
Writer · First draft
Editor · Refinement
Reviewer · Quality check
Publisher · Distribution

Data analysis

5 agents · ~10 min*

Raw data in, actionable insights out.

Director · Orchestrator
chart export · anomaly detection · narrative summary
Collector · Data ingestion
Analyst · Processing
Researcher · Enrichment
Visualiser · Charting
Reporter · Summary

* Estimated wall-clock time on Silk Base with a typical workload. Actual times vary by task complexity.
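The pipelines above share one shape: a Director decomposes a goal, delegates each step to a specialist, and synthesises the result. A minimal sketch of that pattern, with each agent reduced to a plain function so the orchestration logic is visible (all names here are illustrative, not Silk's actual API):

```python
# Illustrative role-based pipeline: each "agent" transforms a shared context,
# and the director threads the context through the specialists in order.
from typing import Callable

Agent = Callable[[str], str]

def researcher(context: str) -> str:
    return f"{context} | research: prior art gathered"

def coder(context: str) -> str:
    return f"{context} | code: implementation drafted"

def reviewer(context: str) -> str:
    return f"{context} | review: issues flagged"

def director(goal: str, specialists: list[Agent]) -> str:
    """Delegate the goal to each specialist in turn, then return the result."""
    context = goal
    for agent in specialists:
        context = agent(context)
    return context

result = director("ship feature X", [researcher, coder, reviewer])
print(result)
```

In a real deployment each function would wrap a model call (e.g. Silk Base with a role-specific system prompt) rather than a string transform.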

Model family

The Silk model family

Three models. One architecture. Reasoning- and coding-first, from the ground up.

Built on open-source foundations from Hugging Face, fine-tuned by Loomic to meet our reasoning, coding, and safety standards.

Silk Mini

3B · 32K ctx

Fast and efficient for everyday tasks — low latency, high throughput.

  • Instruction following
  • Summarisation & rewriting
  • Quick Q&A
  • Light code generation

Silk Base

7B · 128K ctx

Balanced performance and reasoning for complex workloads.

  • Advanced reasoning
  • Long-context analysis
  • Code review & refactoring
  • Agentic tool use

Silk Large

13B · 128K ctx

Maximum capability — state-of-the-art coding, logic, and research.

  • Deep multi-step reasoning
  • 100K+ LOC codebases
  • Research & synthesis
  • FIM & autonomous agents

model: "silk-mini" · model: "silk-base" · model: "silk-large"

Why Silk

Everything you need. Nothing you don't.

Silk models are designed to be the last models you need to evaluate. Sensible defaults, powerful capabilities.

Reasoning first

Every Silk model is trained with explicit <think> token reasoning traces, letting you observe the model's internal chain-of-thought before the final answer.
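Since the trace arrives in-band before the final answer, a client that only wants the answer has to split the two apart. A small sketch, assuming the trace is delimited by literal `<think>…</think>` tags (the exact delimiter format is an assumption, not documented here):

```python
import re

# Hypothetical raw completion: an explicit reasoning trace wrapped in
# <think> tags, followed by the user-facing answer.
raw = "<think>The user wants 12 * 7. 12 * 7 = 84.</think>The answer is 84."

match = re.match(r"<think>(.*?)</think>(.*)", raw, re.DOTALL)
if match:
    trace, answer = match.group(1).strip(), match.group(2).strip()
else:
    # No trace present: treat the whole completion as the answer.
    trace, answer = "", raw.strip()

print(answer)  # → The answer is 84.
```

Keeping the trace around (for logging or debugging) while showing only `answer` to end users is the usual pattern with reasoning models.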

Built for code

Native fill-in-the-middle (FIM) support, context windows large enough for 100K+ LOC codebases, and a training corpus weighted toward real-world code in 40+ languages.
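FIM means the model completes a gap between a known prefix and suffix, rather than only continuing left-to-right. A sketch of how such a prompt is typically assembled; the sentinel tokens below follow a common open-model convention and are placeholders, since Silk's actual FIM tokens are not documented here:

```python
# Fill-in-the-middle prompt layout: the model sees the code before and after
# the gap, then generates the missing middle. Sentinel token names are
# assumptions borrowed from common open-model conventions.
prefix = "def mean(xs):\n    "
suffix = "\n    return total / len(xs)\n"

fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
# The model would be expected to generate the middle, e.g. "total = sum(xs)".
print(fim_prompt)
```

This is what lets an editor integration complete code at the cursor while respecting everything that comes after it.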

OpenAI-compatible

Drop in with a single URL swap. Our API surface mirrors OpenAI's Chat Completions spec — no SDK changes, no migration friction.
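Concretely, "OpenAI-compatible" means the request body is the standard Chat Completions wire format, so an existing client only needs its base URL repointed. A sketch of the request shape; the endpoint below is a placeholder, since Silk's real URL is not yet public (the `silk-base` model ID is from the model family above):

```python
import json

# OpenAI-style Chat Completions request. With an OpenAI SDK client, only the
# base URL would change; the URL here is a placeholder, not Silk's endpoint.
BASE_URL = "https://api.example.com/v1"  # swap in Silk's endpoint at launch
ENDPOINT = f"{BASE_URL}/chat/completions"

payload = {
    "model": "silk-base",
    "messages": [{"role": "user", "content": "Explain this stack trace."}],
    "max_tokens": 256,
}

body = json.dumps(payload)  # POST this JSON to ENDPOINT with your API key
```

Because the shape matches the Chat Completions spec, tooling built against OpenAI's API (SDKs, proxies, eval harnesses) should work unchanged.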

Long context

Silk Base and Large support up to 128K context tokens. Analyse full codebases, legal documents, or extended conversations without chunking.

Fast inference

Our inference stack is optimised for low time-to-first-token and high throughput. Silk Mini delivers sub-200ms TTFT on standard hardware.

Your data stays yours

Requests are never logged for training. EU-hosted infrastructure, GDPR-compliant, with per-key granular access controls built in.

Early Access

Be first to build with Silk AI.

Join the waitlist and get notified the moment Silk AI launches. Early access members get priority onboarding and locked-in launch pricing.

No spam. Just a launch notification.