Class: LlamaModel
Defined in: evaluator/LlamaModel/LlamaModel.ts:144
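A LlamaModel instance is obtained by loading a GGUF file through a Llama instance rather than by calling the constructor directly. A minimal loading sketch, assuming a model file at a hypothetical local path:

```ts
import path from "path";
import {fileURLToPath} from "url";
import {getLlama} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

const llama = await getLlama();
const model = await llama.loadModel({
    // hypothetical path; point this at a real .gguf file
    modelPath: path.join(__dirname, "models", "model.gguf")
});
```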
Properties
tokenizer
readonly tokenizer: Tokenizer;
Defined in: evaluator/LlamaModel/LlamaModel.ts:167
onDispose
readonly onDispose: EventRelay<void>;
Defined in: evaluator/LlamaModel/LlamaModel.ts:168
Accessors
disposed
Get Signature
get disposed(): boolean
Defined in: evaluator/LlamaModel/LlamaModel.ts:273
Returns
boolean
llama
Get Signature
get llama(): Llama
Defined in: evaluator/LlamaModel/LlamaModel.ts:277
Returns
Llama
tokens
Get Signature
get tokens(): LlamaModelTokens
Defined in: evaluator/LlamaModel/LlamaModel.ts:281
Returns
LlamaModelTokens
filename
Get Signature
get filename(): undefined | string
Defined in: evaluator/LlamaModel/LlamaModel.ts:285
Returns
undefined | string
fileInfo
Get Signature
get fileInfo(): GgufFileInfo
Defined in: evaluator/LlamaModel/LlamaModel.ts:289
Returns
GgufFileInfo
fileInsights
Get Signature
get fileInsights(): GgufInsights
Defined in: evaluator/LlamaModel/LlamaModel.ts:293
Returns
GgufInsights
gpuLayers
Get Signature
get gpuLayers(): number
Defined in: evaluator/LlamaModel/LlamaModel.ts:301
Number of layers offloaded to the GPU. If GPU support is disabled, this will always be 0.
Returns
number
size
Get Signature
get size(): number
Defined in: evaluator/LlamaModel/LlamaModel.ts:310
Total model size in memory in bytes.
When using mmap, actual memory usage may be higher than this value due to llama.cpp's performance optimizations.
Returns
number
flashAttentionSupported
Get Signature
get flashAttentionSupported(): boolean
Defined in: evaluator/LlamaModel/LlamaModel.ts:316
Returns
boolean
defaultContextFlashAttention
Get Signature
get defaultContextFlashAttention(): boolean
Defined in: evaluator/LlamaModel/LlamaModel.ts:320
Returns
boolean
trainContextSize
Get Signature
get trainContextSize(): number
Defined in: evaluator/LlamaModel/LlamaModel.ts:609
The context size the model was trained on.
Returns
number
embeddingVectorSize
Get Signature
get embeddingVectorSize(): number
Defined in: evaluator/LlamaModel/LlamaModel.ts:619
The size of an embedding vector the model can produce.
Returns
number
vocabularyType
Get Signature
get vocabularyType(): LlamaVocabularyType
Defined in: evaluator/LlamaModel/LlamaModel.ts:628
Returns
LlamaVocabularyType
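These accessors are plain synchronous reads on a loaded model. A short sketch logging a few of them:

```ts
console.log("filename:", model.filename);
console.log("GPU layers:", model.gpuLayers);
console.log("size in bytes:", model.size);
console.log("train context size:", model.trainContextSize);
console.log("embedding vector size:", model.embeddingVectorSize);
console.log("vocabulary type:", model.vocabularyType);
```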
Methods
dispose()
dispose(): Promise<void>
Defined in: evaluator/LlamaModel/LlamaModel.ts:259
Returns
Promise<void>
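A disposal sketch; the onDispose event relay listed under Properties can be used to react to disposal (createListener is assumed here from the EventRelay API):

```ts
model.onDispose.createListener(() => {
    console.log("model disposed");
});

await model.dispose();
console.log(model.disposed); // true
```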
tokenize()
Call Signature
tokenize(
    text: string,
    specialTokens?: boolean,
    options?: "trimLeadingSpace"): Token[]
Defined in: evaluator/LlamaModel/LlamaModel.ts:334
Transform text into tokens that can be fed to the model.
Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| text | string | the text to tokenize |
| specialTokens? | boolean | if set to true, text that corresponds to special tokens will be tokenized to those tokens. For example, <s> will be tokenized to the BOS token if specialTokens is set to true; otherwise, it will be tokenized to the tokens that correspond to the plaintext <s> string. |
| options? | "trimLeadingSpace" | additional options for tokenization. If set to "trimLeadingSpace", a leading space will be trimmed from the tokenized output if the output has an additional space at the beginning. |
Returns
Token[]
Call Signature
tokenize(text: BuiltinSpecialTokenValue, specialTokens: "builtin"): Token[]
Defined in: evaluator/LlamaModel/LlamaModel.ts:335
Transform text into tokens that can be fed to the model.
Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| text | BuiltinSpecialTokenValue | the text to tokenize |
| specialTokens | "builtin" | if set to true, text that corresponds to special tokens will be tokenized to those tokens. For example, <s> will be tokenized to the BOS token if specialTokens is set to true; otherwise, it will be tokenized to the tokens that correspond to the plaintext <s> string. |
Returns
Token[]
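A tokenization sketch covering both call signatures ("BOS" is assumed here to be a valid BuiltinSpecialTokenValue):

```ts
// plain text tokenization
const tokens = model.tokenize("Hello world");

// resolve special token text, e.g. "<s>" becomes the BOS token
const withSpecialTokens = model.tokenize("<s>Hello world", true);

// tokenize a builtin special token by name (second call signature)
const bosToken = model.tokenize("BOS", "builtin");
```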
detokenize()
detokenize(
    tokens: readonly Token[],
    specialTokens?: boolean,
    lastTokens?: readonly Token[]): string
Defined in: evaluator/LlamaModel/LlamaModel.ts:449
Transform tokens into text.
Parameters
| Parameter | Type | Default value | Description |
| --- | --- | --- | --- |
| tokens | readonly Token[] | undefined | the tokens to detokenize. |
| specialTokens? | boolean | false | if set to true, special tokens will be detokenized to their corresponding token text representation. Recommended for debugging purposes only. Note: there may be additional spaces around special tokens that were not present in the original text; this is not a bug, this is how the tokenizer is supposed to work. Defaults to false. |
| lastTokens? | readonly Token[] | undefined | the last few tokens that preceded the tokens to detokenize. If provided, they will be used to determine whether a space has to be added before the current tokens, and to apply other detokenizer-specific heuristics that provide the correct text continuation to the existing tokens. Using it may have no effect with some models, but it is still recommended. |
Returns
string
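A round-trip sketch; passing lastTokens when detokenizing a continuation helps the detokenizer decide whether a leading space is needed:

```ts
const tokens = model.tokenize("Hello world");
console.log(model.detokenize(tokens)); // "Hello world"

// detokenize a continuation, providing the preceding tokens as context
const head = tokens.slice(0, 1);
const tail = tokens.slice(1);
console.log(model.detokenize(tail, false, head));
```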
getTokenAttributes()
getTokenAttributes(token: Token): TokenAttributes
Defined in: evaluator/LlamaModel/LlamaModel.ts:470
Parameters
Parameter | Type |
---|---|
token | Token |
Returns
TokenAttributes
isSpecialToken()
isSpecialToken(token: undefined | Token): boolean
Defined in: evaluator/LlamaModel/LlamaModel.ts:481
Check whether the given token is a special token (a control-type token or a token with no normal text representation).
Parameters
Parameter | Type |
---|---|
token | undefined | Token |
Returns
boolean
iterateAllTokens()
iterateAllTokens(): Generator<Token, void, unknown>
Defined in: evaluator/LlamaModel/LlamaModel.ts:496
Returns
Generator<Token, void, unknown>
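A sketch combining iterateAllTokens() with isSpecialToken() to count the special tokens in the model's vocabulary:

```ts
let specialTokenCount = 0;
for (const token of model.iterateAllTokens()) {
    if (model.isSpecialToken(token))
        specialTokenCount++;
}
console.log("special tokens:", specialTokenCount);
```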
isEogToken()
isEogToken(token: undefined | Token): boolean
Defined in: evaluator/LlamaModel/LlamaModel.ts:509
Check whether the given token is an EOG (End Of Generation) token, like EOS or EOT.
Parameters
Parameter | Type |
---|---|
token | undefined | Token |
Returns
boolean
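A sketch using the well-known tokens exposed by model.tokens (eos and bos are assumed to be nullable on LlamaModelTokens, hence the ?? undefined):

```ts
console.log(model.isEogToken(model.tokens.eos ?? undefined)); // true for models with an EOS token
console.log(model.isEogToken(model.tokens.bos ?? undefined)); // false
console.log(model.isEogToken(undefined)); // false
```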
createContext()
createContext(options: LlamaContextOptions): Promise<LlamaContext>
Defined in: evaluator/LlamaModel/LlamaModel.ts:516
Parameters
Parameter | Type |
---|---|
options | LlamaContextOptions |
Returns
Promise<LlamaContext>
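A sketch creating a context and driving it with a chat session (LlamaChatSession and context.getSequence() are assumed from the same library):

```ts
import {LlamaChatSession} from "node-llama-cpp";

const context = await model.createContext({
    contextSize: Math.min(4096, model.trainContextSize)
});
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

const answer = await session.prompt("Hi there");
console.log(answer);
```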
createEmbeddingContext()
createEmbeddingContext(options: LlamaEmbeddingContextOptions): Promise<LlamaEmbeddingContext>
Defined in: evaluator/LlamaModel/LlamaModel.ts:533
Parameters
Parameter | Type |
---|---|
options | LlamaEmbeddingContextOptions |
Returns
Promise<LlamaEmbeddingContext>
See
Using Embedding tutorial
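An embedding sketch; getEmbeddingFor is assumed from LlamaEmbeddingContext:

```ts
const embeddingContext = await model.createEmbeddingContext({});
const embedding = await embeddingContext.getEmbeddingFor("Hello world");

// the vector length should match the model's embedding vector size
console.log(embedding.vector.length === model.embeddingVectorSize);
```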
createRankingContext()
createRankingContext(options: LlamaRankingContextOptions): Promise<LlamaRankingContext>
Defined in: evaluator/LlamaModel/LlamaModel.ts:543
Parameters
Parameter | Type |
---|---|
options | LlamaRankingContextOptions |
Returns
Promise<LlamaRankingContext>
See
Reranking Documents tutorial
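A reranking sketch; rankAndSort is assumed from LlamaRankingContext:

```ts
const rankingContext = await model.createRankingContext({});

const documents = [
    "The sky is clear and blue today",
    "Cleaning the house is a good way to keep it tidy"
];
const ranked = await rankingContext.rankAndSort("Tell me about the weather", documents);
console.log(ranked[0]?.document); // the highest-scoring document
```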
getWarnings()
getWarnings(): string[]
Defined in: evaluator/LlamaModel/LlamaModel.ts:555
Get warnings about the model file that would affect its usage.
These warnings include all the warnings generated by GgufInsights, but are more comprehensive.
Returns
string[]
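A short sketch surfacing the warnings after loading a model:

```ts
for (const warning of model.getWarnings())
    console.warn(warning);
```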