# SDK Reference
The free Decision Memos SDK. Query multiple AI models in parallel with typed responses — one call, four models, zero boilerplate.
> **Note**
> The free SDK provides raw multi-model querying. For structured verdicts with consensus scoring, advisor personas, and Decision Memos, use the hosted API.
## Installation

```sh
npm install decisionmemos
```

Requires Node.js 18+ and TypeScript 5+.
## createMultiModelQuery(options?)
Factory function that creates a MultiModelQuery instance configured with your API keys.
```ts
import { createMultiModelQuery } from 'decisionmemos';

// Reads API keys from process.env by default
const query = createMultiModelQuery();

// Or pass keys explicitly
const queryWithKeys = createMultiModelQuery({
  keys: {
    openai: 'sk-...',
    anthropic: 'sk-ant-...',
    xai: 'xai-...',
    google: 'AIza...',
  },
});
```

### Options
| Property | Type | Default | Description |
|---|---|---|---|
| `keys` | `object` | `process.env` | Provider API keys |
| `clients` | `AIModelClient[]` | auto-detected | Custom model client instances |
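For intuition, auto-detection can be thought of as configuring one client per provider whose key is present. This is a hypothetical sketch of the idea, not the SDK's actual implementation (`detectProviders` is invented for this example):

```ts
// Hypothetical sketch: one model client would be configured per
// provider whose API key is present. Not the SDK's actual code.
type Provider = 'openai' | 'anthropic' | 'xai' | 'google';

function detectProviders(keys: Partial<Record<Provider, string>>): Provider[] {
  // Skip providers whose key is missing or empty
  return (Object.keys(keys) as Provider[]).filter((p) => !!keys[p]);
}

console.log(detectProviders({ openai: 'sk-...', xai: 'xai-...' }));
// → [ 'openai', 'xai' ]
```

Passing the `clients` option skips this detection entirely and uses the instances you supply.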
## query.ask(question, systemPrompt?)
Query all configured models in parallel with the same question. Returns typed responses from every model.
```ts
const result = await query.ask(
  "Should we adopt Kubernetes for our staging environment?"
);

// result.responses    — ModelResponse[] (one per model)
// result.successCount — how many responded successfully
// result.errorCount   — how many failed
// result.totalLatency — wall-clock time (ms)
```

### Parameters
| Parameter | Type | Description |
|---|---|---|
| `question` | `string` | The question to send to all models |
| `systemPrompt` | `string?` | Optional system-level instructions sent to every model |
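Because individual models can fail without failing the whole call, it's worth splitting responses on the `error` field. A minimal sketch with mock data (the `ModelResponse` shape is abridged from the SDK's documented interface; `partition` is a hypothetical helper, not part of the SDK):

```ts
// ModelResponse abridged from the SDK's documented interface.
interface ModelResponse {
  modelName: string;
  provider: string;
  response: string;
  latency: number; // ms
  error?: string;  // set if this model failed
}

// Hypothetical helper: separate successful answers from failures.
function partition(responses: ModelResponse[]) {
  const ok = responses.filter((r) => !r.error);
  const failed = responses.filter((r) => r.error);
  return { ok, failed };
}

// Mock data standing in for (await query.ask(...)).responses
const mock: ModelResponse[] = [
  { modelName: 'GPT-5.2', provider: 'openai', response: 'Yes, with caveats.', latency: 820 },
  { modelName: 'Grok', provider: 'xai', response: '', latency: 120, error: 'timeout' },
];

const { ok, failed } = partition(mock);
console.log(`${ok.length} succeeded, ${failed.length} failed`);
// → 1 succeeded, 1 failed
```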
### Return value: `MultiModelResult`
```ts
interface MultiModelResult {
  question: string;
  responses: ModelResponse[];
  successCount: number;
  errorCount: number;
  totalLatency: number; // ms
  timestamp: Date;
}

interface ModelResponse {
  modelName: string;  // e.g. "GPT-5.2"
  provider: string;   // e.g. "openai"
  response: string;   // The model's full response
  timestamp: Date;
  tokensUsed?: number;
  latency: number;    // ms
  error?: string;     // Set if this model failed
}
```

## query.testConnections()
Test connectivity to all configured providers.
```ts
const results = await query.testConnections();
// [
//   { model: "GPT-5.2", provider: "openai", ok: true },
//   { model: "Claude Opus 4.6", provider: "anthropic", ok: true },
//   ...
// ]
```

## query.getStatus()
Get the number and names of configured models.
```ts
const status = query.getStatus();
// { count: 4, models: [{ name: "GPT-5.2", provider: "openai" }, ...] }
```

## Using individual clients
You can also import and use model clients directly:
```ts
import { OpenAIClient, AnthropicClient } from 'decisionmemos';

const openai = new OpenAIClient('sk-...', 'gpt-5.2');
const anthropic = new AnthropicClient('sk-ant-...', 'claude-sonnet-4-6');

const [gptResponse, claudeResponse] = await Promise.all([
  openai.query("Should we use GraphQL?"),
  anthropic.query("Should we use GraphQL?"),
]);
```

> **Tip**
> Want structured verdicts with consensus scoring instead of raw responses? See the POST /v1/deliberate API reference. BYOK (bring your own key) gets you 20% off per key (all 4 = 80% off).