
Deterministic search
without the inference cost

Search 64.6M knowledge entries or bring your own data. Tokenized against a 3.8M whole-word vocabulary. No LLM embeddings. No inference costs.

3.8M Whole-Word Vocabulary
<3s Cold-State Response
0 LLM Inference Calls
100% Deterministic
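To make the "whole-word, no embeddings" claim concrete, here is a minimal sketch of what whole-word tokenization looks like. The vocabulary, normalization, and `-1` out-of-vocabulary marker are purely illustrative, not ColdState's actual implementation:

```javascript
// Hypothetical whole-word tokenizer: each token is an exact
// vocabulary entry, so the same query always maps to the same IDs.
const vocab = new Map(
  ["parental", "leave", "policy", "remote", "employees"]
    .map((word, i) => [word, i])
);

function tokenize(query) {
  return query
    .toLowerCase()
    .split(/\W+/)
    .filter(Boolean)
    .map(word => vocab.get(word) ?? -1); // -1 = out-of-vocabulary
}

// Identical input always yields identical token IDs,
// with no model inference anywhere in the path.
tokenize("Parental leave policy"); // → [0, 1, 2]
```

Because lookup is a pure table operation, tokenization cost is constant per word and the output never varies between runs.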
Quick Start

Two calls. That's it.

quickstart.js
// 1. Create an index — no embedding pipeline
await fetch("https://cold-api.coldstate.ai/v1/indexes", {
  method: "POST",
  headers: {
    "Authorization": "Bearer cs_live_...",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    name: "acme-docs",
    domain_preset: "general",
    documents: [
      { id: "doc_001", content: "Parental leave policy..." },
      { id: "doc_002", content: "Remote work guidelines..." }
    ]
  })
});

// 2. Search — deterministic, E-scored, sub-3s
const res = await fetch(
  "https://cold-api.coldstate.ai/v1/indexes/idx_.../search",
  {
    method: "POST",
    headers: {
      "Authorization": "Bearer cs_live_...",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      query: "parental leave policy for remote employees",
      limit: 5
    })
  }
).then(r => r.json());

// res.state: "CRYSTALLINE"
// res.results[0].score.E: 0.94
// res.diagnostics.execution_time_ms: 847
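The `res.state` topology signal can gate what you do with the results. A sketch of one way to use it; the state names come from this page, but the function, thresholds, and fallback behavior are illustrative assumptions, not a documented API:

```javascript
// Hypothetical handling of the topology signal on a search response.
// Thresholds here are made up for illustration.
function shouldTrustTopResult(response) {
  switch (response.state) {
    case "CRYSTALLINE": // stable topology: accept a solid E-score
      return response.results[0]?.score.E >= 0.9;
    case "FLUID":       // looser topology: demand a stronger score
      return response.results[0]?.score.E >= 0.97;
    case "REACTIVE":    // unstable topology: defer to a fallback path
      return false;
    default:
      return false;
  }
}

shouldTrustTopResult({
  state: "CRYSTALLINE",
  results: [{ score: { E: 0.94 } }]
}); // → true
```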
Comparison

vs. the traditional RAG stack

| Criterion | Traditional RAG | ColdState |
| --- | --- | --- |
| Embedding pipeline | Required at index + query | None — zero LLM cost |
| Per-query cost | Embed + vector + rerank | Single QST navigation |
| Result consistency | Probabilistic (varies) | Deterministic (identical) |
| Infrastructure | Multi-node clusters | Single Cold-State engine |
| Topology signal | Not available | CRYSTALLINE · FLUID · REACTIVE |
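The "deterministic (identical)" row is directly testable: issue the same query twice and compare the result sets. A minimal sketch, where `search` stands in for the `/search` call from the quickstart and the stub below is only for illustration:

```javascript
// Sketch: check result consistency by running the same query twice.
// A deterministic engine returns byte-identical result sets.
async function isDeterministic(search, query) {
  const first = await search(query);
  const second = await search(query);
  return JSON.stringify(first.results) === JSON.stringify(second.results);
}

// Illustrative stub that always returns the same output,
// standing in for a real API call.
const stubSearch = async () => ({
  results: [{ id: "doc_001", score: { E: 0.94 } }]
});

isDeterministic(stubSearch, "parental leave"); // resolves to true
```

With a probabilistic retriever, the same check can fail from run to run; with a deterministic one it should always pass.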

Ready to go cold?

Search our knowledge base or bring your own data. Get your API key and start in under a minute.

Get API Key