# Custom Embeddings
The recommended way to use a custom embedding provider is via a config file that the CLI and MCP server load automatically; no changes to Lucerna itself are required.
## Custom Providers

Implement `EmbeddingFunction` in a local file or an npm package, then wire it up in `lucerna.config.ts`.
## Interface

```ts
interface EmbeddingFunction {
  /** Dimensionality of output vectors */
  readonly dimensions: number;

  /**
   * Stable model identifier (e.g. "text-embedding-3-small").
   * Used to detect model changes between runs and warn about vector-space corruption.
   */
  readonly modelId?: string;

  /** Produce one embedding vector per input text */
  generate(texts: string[]): Promise<number[][]>;

  /** Optional: pre-load the model before the first generate() call */
  warmup?(): Promise<void>;
}
```
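For tests and offline development it can be handy to have a provider with no network dependency. The sketch below is illustrative only (the `HashEmbeddings` class and its `modelId` string are made up, not part of Lucerna): it hashes character trigrams into a fixed number of buckets and L2-normalizes the result, which is deterministic but not semantically meaningful.

```typescript
// Re-declared locally so this sketch runs standalone; in a real project,
// import the interface from '@upstart.gg/lucerna' instead.
interface EmbeddingFunction {
  readonly dimensions: number;
  readonly modelId?: string;
  generate(texts: string[]): Promise<number[][]>;
  warmup?(): Promise<void>;
}

// Deterministic, dependency-free embedder: buckets character trigrams via
// an FNV-1a hash, then L2-normalizes. Useful for tests, not for real search.
export class HashEmbeddings implements EmbeddingFunction {
  readonly modelId = 'hash-ngram-v1'; // illustrative identifier

  constructor(readonly dimensions: number = 64) {}

  async generate(texts: string[]): Promise<number[][]> {
    return texts.map((text) => {
      const v = new Array<number>(this.dimensions).fill(0);
      for (let i = 0; i <= text.length - 3; i++) {
        // FNV-1a hash of each trigram selects a bucket to increment
        let h = 0x811c9dc5;
        for (const ch of text.slice(i, i + 3)) {
          h = Math.imul(h ^ ch.charCodeAt(0), 0x01000193) >>> 0;
        }
        v[h % this.dimensions] += 1;
      }
      const norm = Math.hypot(...v) || 1; // avoid dividing by zero
      return v.map((x) => x / norm);
    });
  }
}
```

Because the output depends only on the input text, repeated runs produce identical vectors, which makes index corruption bugs easy to reproduce in tests.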
## Quick example

```ts
// my-embeddings.ts
import OpenAI from 'openai';
import type { EmbeddingFunction } from '@upstart.gg/lucerna';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

export class OpenAIEmbeddings implements EmbeddingFunction {
  readonly dimensions = 1536;
  readonly modelId = 'text-embedding-3-small';

  async generate(texts: string[]): Promise<number[][]> {
    const response = await openai.embeddings.create({
      model: this.modelId,
      input: texts,
    });
    return response.data.map((d) => d.embedding);
  }
}
```

```ts
// lucerna.config.ts (at project root)
import { OpenAIEmbeddings } from './my-embeddings.ts';

export default {
  embeddingFunction: new OpenAIEmbeddings(),
};
```
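The optional `modelId` field exists so stored vectors can be matched against the model that produced them: vectors from two different models live in different spaces and must not be compared. A hypothetical check of that kind (the `assertSameModel` helper below is illustrative, not a Lucerna API) could look like this:

```typescript
// Hypothetical guard: refuse to query an index built by a different model.
// `storedModelId` would come from index metadata written at build time.
function assertSameModel(
  storedModelId: string | undefined,
  current: { modelId?: string },
): void {
  if (storedModelId && current.modelId && storedModelId !== current.modelId) {
    throw new Error(
      `Embedding model changed from "${storedModelId}" to "${current.modelId}"; ` +
        'rebuild the index to avoid comparing vectors from different spaces.',
    );
  }
}
```

Exposing `modelId` on your provider costs nothing and lets this kind of mismatch fail loudly instead of silently degrading search quality.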