
Local Models

All local models run via @huggingface/transformers + ONNX. Models are downloaded from Hugging Face Hub on first use and cached locally.
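By default, downloaded weights land in the transformers.js cache directory. If you want them somewhere specific (for example inside the project so CI can save and restore them), transformers.js exposes an `env` object you can configure before creating an indexer. A minimal sketch — the cache path here is an arbitrary example, not a library convention:

```ts
import { env } from '@huggingface/transformers';

// Store downloaded model weights under the project instead of the
// default global cache, so a CI job can cache/restore this directory.
env.cacheDir = './.model-cache';
```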

A code-optimised embedding model. This is the default when no Cloudflare environment variables are set.

```ts
import { CodeIndexer, NomicCodeEmbeddings } from '@upstart.gg/lucerna';

const indexer = new CodeIndexer({
  projectRoot: '.',
  embeddingFunction: new NomicCodeEmbeddings(),
});
```

General-purpose — Xenova/bge-small-en-v1.5 (MTEB 62.17).

```ts
import { CodeIndexer, BGESmallEmbeddings } from '@upstart.gg/lucerna';

const indexer = new CodeIndexer({
  projectRoot: '.',
  embeddingFunction: new BGESmallEmbeddings(),
});
```

Use any ONNX model from the Hugging Face Hub:

```ts
import { CodeIndexer, HFEmbeddings } from '@upstart.gg/lucerna';

const indexer = new CodeIndexer({
  projectRoot: '.',
  embeddingFunction: new HFEmbeddings(
    'Xenova/bge-small-en-v1.5', // modelId
    384,    // dimensions — must match model output
    'fp32', // dtype: 'auto' | 'fp32' | 'fp16' | 'q8' | 'q4' | …
  ),
});
```
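Passing the wrong `dimensions` for a model typically surfaces only once vectors reach the index, not at construction time (an assumption based on the constructor signature above). A tiny standalone sanity check can catch it earlier; `assertDims` below is illustrative only, not part of the library API:

```ts
// Illustrative helper, not part of @upstart.gg/lucerna: verify that every
// embedding vector has the expected length before handing it to an index.
function assertDims(vectors: number[][], expected: number): void {
  for (const [i, v] of vectors.entries()) {
    if (v.length !== expected) {
      throw new Error(`vector ${i} has ${v.length} dims, expected ${expected}`);
    }
  }
}
```

Run it once against a sample embedding from the model you configured; if it throws, the `dimensions` argument does not match the model's actual output size.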