Class: LlamaModel

Defined in: evaluator/LlamaModel/LlamaModel.ts:173

Properties

tokenizer

```ts
readonly tokenizer: Tokenizer;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:197


onDispose

```ts
readonly onDispose: EventRelay<void>;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:198

Accessors

disposed

Get Signature

```ts
get disposed(): boolean;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:307

Returns

boolean


llama

Get Signature

```ts
get llama(): Llama;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:311

Returns

Llama


tokens

Get Signature

```ts
get tokens(): LlamaModelTokens;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:315

Returns

LlamaModelTokens


filename

Get Signature

```ts
get filename(): undefined | string;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:319

Returns

undefined | string


fileInfo

Get Signature

```ts
get fileInfo(): GgufFileInfo;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:323

Returns

GgufFileInfo


fileInsights

Get Signature

```ts
get fileInsights(): GgufInsights;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:327

Returns

GgufInsights


gpuLayers

Get Signature

```ts
get gpuLayers(): number;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:335

Number of layers offloaded to the GPU. If GPU support is disabled, this will always be 0.

Returns

number


size

Get Signature

```ts
get size(): number;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:344

Total model size in memory in bytes.

When using mmap, actual memory usage may be higher than this value due to llama.cpp's performance optimizations.

Returns

number


flashAttentionSupported

Get Signature

```ts
get flashAttentionSupported(): boolean;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:350

Returns

boolean


defaultContextFlashAttention

Get Signature

```ts
get defaultContextFlashAttention(): boolean;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:354

Returns

boolean


defaultContextSwaFullCache

Get Signature

```ts
get defaultContextSwaFullCache(): boolean;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:358

Returns

boolean


trainContextSize

Get Signature

```ts
get trainContextSize(): number;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:647

The context size the model was trained on

Returns

number


embeddingVectorSize

Get Signature

```ts
get embeddingVectorSize(): number;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:657

The size of an embedding vector the model can produce

Returns

number


vocabularyType

Get Signature

```ts
get vocabularyType(): LlamaVocabularyType;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:666

Returns

LlamaVocabularyType
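
Taken together, these accessors expose a loaded model's metadata. A minimal sketch of inspecting a model after loading it (the model path below is a placeholder you would replace with a real GGUF file):

```ts
import {getLlama} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: "path/to/model.gguf" // placeholder path
});

console.log("File:", model.filename);
console.log("Train context size:", model.trainContextSize);
console.log("GPU layers:", model.gpuLayers); // 0 when GPU support is disabled
console.log("Size in memory:", model.size, "bytes");
console.log("Embedding vector size:", model.embeddingVectorSize);
console.log("Vocabulary type:", model.vocabularyType);
```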

Methods

dispose()

```ts
dispose(): Promise<void>;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:293

Returns

Promise<void>


tokenize()

Call Signature

```ts
tokenize(
   text: string,
   specialTokens?: boolean,
   options?: "trimLeadingSpace"): Token[];
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:372

Transform text into tokens that can be fed to the model

Parameters
| Parameter | Type | Description |
| ------ | ------ | ------ |
| `text` | `string` | the text to tokenize |
| `specialTokens?` | `boolean` | if set to `true`, text that corresponds to special tokens will be tokenized to those tokens. For example, `<s>` will be tokenized to the BOS token if `specialTokens` is set to `true`; otherwise it will be tokenized to tokens that correspond to the plaintext `<s>` string. |
| `options?` | `"trimLeadingSpace"` | additional tokenization options. If set to `"trimLeadingSpace"`, a leading space will be trimmed from the tokenized output if the output has an additional space at the beginning. |
Returns

Token[]

Call Signature

```ts
tokenize(text: BuiltinSpecialTokenValue, specialTokens: "builtin"): Token[];
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:373

Transform text into tokens that can be fed to the model

Parameters
| Parameter | Type | Description |
| ------ | ------ | ------ |
| `text` | `BuiltinSpecialTokenValue` | the built-in special token value to tokenize |
| `specialTokens` | `"builtin"` | must be set to `"builtin"` for this overload, so the given value is tokenized to the corresponding built-in special token |
Returns

Token[]


detokenize()

```ts
detokenize(
   tokens: readonly Token[],
   specialTokens?: boolean,
   lastTokens?: readonly Token[]): string;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:487

Transform tokens into text

Parameters

| Parameter | Type | Default value | Description |
| ------ | ------ | ------ | ------ |
| `tokens` | readonly `Token`[] | `undefined` | the tokens to detokenize. |
| `specialTokens?` | `boolean` | `false` | if set to `true`, special tokens will be detokenized to their corresponding token text representation. Recommended for debugging purposes only. Note: there may be additional spaces around special tokens that were not present in the original text; this is not a bug, it is how the tokenizer is supposed to work. Defaults to `false`. |
| `lastTokens?` | readonly `Token`[] | `undefined` | the last few tokens that preceded the tokens to detokenize. If provided, they are used to determine whether a space has to be added before the current tokens, and to apply other detokenizer-specific heuristics to produce the correct text continuation. May have no effect with some models, but using it is still recommended. |

Returns

string
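
A tokenize/detokenize round trip can be sketched as follows (the model path is a placeholder; assumes a model has been loaded as shown in the library's getting-started flow):

```ts
import {getLlama} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: "path/to/model.gguf" // placeholder path
});

// Text -> tokens
const tokens = model.tokenize("Hello world");

// Tokens -> text
const text = model.detokenize(tokens);
console.log(text);

// When detokenizing a continuation of existing tokens, pass the
// preceding tokens as `lastTokens` so spacing heuristics apply correctly
const continuation = model.detokenize(model.tokenize(" there"), false, tokens);
```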


getTokenAttributes()

```ts
getTokenAttributes(token: Token): TokenAttributes;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:508

Parameters

| Parameter | Type |
| ------ | ------ |
| `token` | `Token` |

Returns

TokenAttributes


isSpecialToken()

```ts
isSpecialToken(token: undefined | Token): boolean;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:519

Check whether the given token is a special token (a control-type token or a token with no normal text representation)

Parameters

| Parameter | Type |
| ------ | ------ |
| `token` | `undefined` \| `Token` |

Returns

boolean


iterateAllTokens()

```ts
iterateAllTokens(): Generator<Token, void, unknown>;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:534

Returns

Generator<Token, void, unknown>


isEogToken()

```ts
isEogToken(token: undefined | Token): boolean;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:547

Check whether the given token is an EOG (End Of Generation) token, like EOS or EOT.

Parameters

| Parameter | Type |
| ------ | ------ |
| `token` | `undefined` \| `Token` |

Returns

boolean
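
The token inspection methods compose naturally with `iterateAllTokens()`. A sketch that counts special and EOG tokens in the vocabulary, assuming `model` is an already-loaded `LlamaModel`:

```ts
// Classify every token in the model's vocabulary
let specialCount = 0;
let eogCount = 0;

for (const token of model.iterateAllTokens()) {
    if (model.isSpecialToken(token))
        specialCount++;

    if (model.isEogToken(token))
        eogCount++;
}

console.log({specialCount, eogCount});
```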


createContext()

```ts
createContext(options: LlamaContextOptions): Promise<LlamaContext>;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:554

Parameters

| Parameter | Type |
| ------ | ------ |
| `options` | `LlamaContextOptions` |

Returns

Promise<LlamaContext>
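
A typical flow is to create a context, take a sequence from it, and drive it with a chat session. A minimal sketch (the model path is a placeholder):

```ts
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: "path/to/model.gguf" // placeholder path
});

const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

const answer = await session.prompt("What is 1 + 1?");
console.log(answer);
```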


createEmbeddingContext()

```ts
createEmbeddingContext(options: LlamaEmbeddingContextOptions): Promise<LlamaEmbeddingContext>;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:571

Parameters

| Parameter | Type |
| ------ | ------ |
| `options` | `LlamaEmbeddingContextOptions` |

Returns

Promise<LlamaEmbeddingContext>

See

Using Embedding tutorial
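
A short sketch of producing an embedding, assuming `model` is an already-loaded `LlamaModel` (see the tutorial above for the full flow):

```ts
const embeddingContext = await model.createEmbeddingContext();

const embedding = await embeddingContext.getEmbeddingFor("Hello world");
console.log(embedding.vector.length); // typically equals model.embeddingVectorSize
```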


createRankingContext()

```ts
createRankingContext(options: LlamaRankingContextOptions): Promise<LlamaRankingContext>;
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:581

Parameters

| Parameter | Type |
| ------ | ------ |
| `options` | `LlamaRankingContextOptions` |

Returns

Promise<LlamaRankingContext>

See

Reranking Documents tutorial
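
A sketch of ranking documents against a query, assuming `model` is an already-loaded reranking-capable model and that `LlamaRankingContext` exposes a `rankAndSort` method as described in the tutorial above:

```ts
const rankingContext = await model.createRankingContext();

const documents = [
    "The sky is blue because of Rayleigh scattering",
    "Cats are mammals"
];

// Scores each document's relevance to the query and sorts by score
const ranked = await rankingContext.rankAndSort(
    "What color is the sky?",
    documents
);
console.log(ranked[0].document); // most relevant document first
```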


getWarnings()

```ts
getWarnings(): string[];
```

Defined in: evaluator/LlamaModel/LlamaModel.ts:593

Get warnings about the model file that would affect its usage.

These warnings include all the warnings generated by GgufInsights, but are more comprehensive.

Returns

string[]
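
For example, warnings can be surfaced after loading a model, assuming `model` is an already-loaded `LlamaModel`:

```ts
// Log any model-file warnings that could affect usage
for (const warning of model.getWarnings())
    console.warn(warning);
```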