Class: LlamaModel
Defined in: evaluator/LlamaModel/LlamaModel.ts:173
Properties
tokenizer
```ts
readonly tokenizer: Tokenizer;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:197
onDispose
```ts
readonly onDispose: EventRelay<void>;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:198
Accessors
disposed
Get Signature
```ts
get disposed(): boolean;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:307
Returns
boolean
llama
Get Signature
```ts
get llama(): Llama;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:311
Returns
Llama
tokens
Get Signature
```ts
get tokens(): LlamaModelTokens;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:315
Returns
LlamaModelTokens
filename
Get Signature
```ts
get filename(): undefined | string;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:319
Returns
undefined | string
fileInfo
Get Signature
```ts
get fileInfo(): GgufFileInfo;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:323
Returns
GgufFileInfo
fileInsights
Get Signature
```ts
get fileInsights(): GgufInsights;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:327
Returns
GgufInsights
gpuLayers
Get Signature
```ts
get gpuLayers(): number;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:335
Number of layers offloaded to the GPU. If GPU support is disabled, this will always be 0.
Returns
number
size
Get Signature
```ts
get size(): number;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:344
Total model size in memory in bytes.
When using mmap, actual memory usage may be higher than this value due to llama.cpp's performance optimizations.
Returns
number
flashAttentionSupported
Get Signature
```ts
get flashAttentionSupported(): boolean;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:350
Returns
boolean
defaultContextFlashAttention
Get Signature
```ts
get defaultContextFlashAttention(): boolean;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:354
Returns
boolean
defaultContextSwaFullCache
Get Signature
```ts
get defaultContextSwaFullCache(): boolean;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:358
Returns
boolean
trainContextSize
Get Signature
```ts
get trainContextSize(): number;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:647
The context size the model was trained on
Returns
number
embeddingVectorSize
Get Signature
```ts
get embeddingVectorSize(): number;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:657
The size of an embedding vector the model can produce
Returns
number
vocabularyType
Get Signature
```ts
get vocabularyType(): LlamaVocabularyType;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:666
Returns
LlamaVocabularyType
Methods
dispose()
```ts
dispose(): Promise<void>;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:293
Returns
Promise<void>
tokenize()
Call Signature
```ts
tokenize(
    text: string,
    specialTokens?: boolean,
    options?: "trimLeadingSpace"): Token[];
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:372
Transform text into tokens that can be fed to the model
Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| `text` | `string` | The text to tokenize. |
| `specialTokens?` | `boolean` | If set to `true`, text that corresponds to special tokens will be tokenized to those tokens. For example, `<s>` will be tokenized to the BOS token when `specialTokens` is `true`; otherwise, it will be tokenized to the tokens that correspond to the plaintext `<s>` string. |
| `options?` | `"trimLeadingSpace"` | Additional tokenization options. If set to `"trimLeadingSpace"`, a leading space will be trimmed from the tokenized output if the output has an additional space at the beginning. |
Returns
Token[]
Call Signature
```ts
tokenize(text: BuiltinSpecialTokenValue, specialTokens: "builtin"): Token[];
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:373
Transform text into tokens that can be fed to the model
Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| `text` | `BuiltinSpecialTokenValue` | The builtin special token value to tokenize. |
| `specialTokens` | `"builtin"` | When set to `"builtin"`, the given `BuiltinSpecialTokenValue` (for example, `"BOS"`) is resolved directly to the model's corresponding builtin special token. |
Returns
Token[]
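A brief usage sketch of both overloads. The model path is a placeholder, and loading follows the library's standard `getLlama()`/`loadModel()` flow; this requires a real `.gguf` model file to run.

```typescript
import {getLlama} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "model.gguf"}); // placeholder path

// With specialTokens enabled, "<s>" maps to the BOS token itself
const withSpecial = model.tokenize("<s>Hello world", true);

// With specialTokens disabled, "<s>" is tokenized as plain text
const asPlaintext = model.tokenize("<s>Hello world", false);

// The "builtin" overload resolves a builtin special token value directly
const bos = model.tokenize("BOS", "builtin");
```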
detokenize()
```ts
detokenize(
    tokens: readonly Token[],
    specialTokens?: boolean,
    lastTokens?: readonly Token[]): string;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:487
Transform tokens into text
Parameters
| Parameter | Type | Default value | Description |
| --- | --- | --- | --- |
| `tokens` | `readonly Token[]` | `undefined` | The tokens to detokenize. |
| `specialTokens?` | `boolean` | `false` | If set to `true`, special tokens will be detokenized to their corresponding token text representation. Recommended for debugging purposes only. Note: there may be additional spaces around special tokens that were not present in the original text; this is not a bug, it is how the tokenizer is supposed to work. Defaults to `false`. |
| `lastTokens?` | `readonly Token[]` | `undefined` | The last few tokens that preceded the tokens to detokenize. If provided, they are used to determine whether a space has to be added before the current tokens, and to apply other detokenizer-specific heuristics that produce the correct text continuation. It may have no effect with some models, but using it is still recommended. |
Returns
string
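A round-trip sketch showing the `lastTokens` parameter when detokenizing in chunks. It assumes a loaded `LlamaModel` instance named `model`, and its exact output depends on the model's tokenizer.

```typescript
// Assumes `model` is a loaded LlamaModel instance
const first = model.tokenize("Hello");
const second = model.tokenize(" world");

// Detokenizing the second chunk alone may drop the leading space;
// passing the preceding tokens as `lastTokens` lets the detokenizer
// decide correctly whether a space belongs before the chunk
const continuation = model.detokenize(second, false, first);

const fullText = model.detokenize(first) + continuation;
```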
getTokenAttributes()
```ts
getTokenAttributes(token: Token): TokenAttributes;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:508
Parameters
| Parameter | Type |
| --- | --- |
| `token` | `Token` |
Returns
TokenAttributes
isSpecialToken()
```ts
isSpecialToken(token: undefined | Token): boolean;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:519
Check whether the given token is a special token (a control-type token or a token with no normal text representation)
Parameters
| Parameter | Type |
| --- | --- |
| `token` | `undefined` \| `Token` |
Returns
boolean
iterateAllTokens()
```ts
iterateAllTokens(): Generator<Token, void, unknown>;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:534
Iterate over all the tokens in the model's vocabulary.
Returns
Generator<Token, void, unknown>
isEogToken()
```ts
isEogToken(token: undefined | Token): boolean;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:547
Check whether the given token is an EOG (End Of Generation) token, like EOS or EOT.
Parameters
| Parameter | Type |
| --- | --- |
| `token` | `undefined` \| `Token` |
Returns
boolean
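For illustration, a loop that stops at an EOG token and skips special tokens when collecting text. It assumes a loaded `LlamaModel` instance named `model` and a `Token[]` named `tokens` obtained elsewhere.

```typescript
// Assumes `model` is a loaded LlamaModel and `tokens` is a Token[]
let text = "";
for (const token of tokens) {
    if (model.isEogToken(token))
        break; // EOS/EOT and similar tokens mark the end of the generation

    if (!model.isSpecialToken(token))
        text += model.detokenize([token]);
}
```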
createContext()
```ts
createContext(options: LlamaContextOptions): Promise<LlamaContext>;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:554
Parameters
| Parameter | Type |
| --- | --- |
| `options` | `LlamaContextOptions` |
Returns
Promise<LlamaContext>
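A minimal sketch of creating a context and using it through a chat session. The model path, context size, and prompt are placeholders; running it requires a real `.gguf` model file.

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "model.gguf"}); // placeholder path

// Create an evaluation context and attach a chat session to one of its sequences
const context = await model.createContext({contextSize: 2048});
const session = new LlamaChatSession({contextSequence: context.getSequence()});

const answer = await session.prompt("What is a GGUF file?");
console.log(answer);
```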
createEmbeddingContext()
```ts
createEmbeddingContext(options: LlamaEmbeddingContextOptions): Promise<LlamaEmbeddingContext>;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:571
Parameters
| Parameter | Type |
| --- | --- |
| `options` | `LlamaEmbeddingContextOptions` |
Returns
Promise<LlamaEmbeddingContext>
See
Using Embedding tutorial
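A short sketch based on the embedding tutorial linked above. It assumes a loaded `LlamaModel` instance named `model` that supports embeddings; the produced vector's length should match `embeddingVectorSize`.

```typescript
// Assumes `model` is a loaded LlamaModel with embedding support
const embeddingContext = await model.createEmbeddingContext();

const embedding = await embeddingContext.getEmbeddingFor("Hello world");
console.log(embedding.vector.length, model.embeddingVectorSize);
```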
createRankingContext()
```ts
createRankingContext(options: LlamaRankingContextOptions): Promise<LlamaRankingContext>;
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:581
Parameters
| Parameter | Type |
| --- | --- |
| `options` | `LlamaRankingContextOptions` |
Returns
Promise<LlamaRankingContext>
See
Reranking Documents tutorial
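A sketch based on the reranking tutorial linked above. It assumes a loaded `LlamaModel` instance named `model` backed by a reranking-capable model, and that `LlamaRankingContext` exposes a `rankAndSort` helper as shown in that tutorial.

```typescript
// Assumes `model` is a loaded LlamaModel backed by a reranking model
const rankingContext = await model.createRankingContext();

const documents = [
    "The sky is clear and blue today",
    "Cleaning the house is a good practice"
];

// Scores each document against the query and sorts by relevance
const ranked = await rankingContext.rankAndSort(
    "Tell me a nature-related fact",
    documents
);
console.log(ranked[0]); // highest-scoring document first
```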
getWarnings()
```ts
getWarnings(): string[];
```
Defined in: evaluator/LlamaModel/LlamaModel.ts:593
Get warnings about the model file that would affect its usage.
These warnings include all the warnings generated by GgufInsights, but are more comprehensive.
Returns
string[]