# Class: Llama

Defined in: bindings/Llama.ts:36
## Properties

### onDispose

```ts
readonly onDispose: EventRelay<void>;
```

Defined in: bindings/Llama.ts:74
## Accessors

### disposed

#### Get Signature

```ts
get disposed(): boolean;
```

Defined in: bindings/Llama.ts:188

##### Returns

`boolean`
### classes

#### Get Signature

```ts
get classes(): LlamaClasses;
```

Defined in: bindings/Llama.ts:192

##### Returns

`LlamaClasses`
Get Signature
get gpu(): LlamaGpuType;Defined in: bindings/Llama.ts:199
Returns
### supportsGpuOffloading

#### Get Signature

```ts
get supportsGpuOffloading(): boolean;
```

Defined in: bindings/Llama.ts:203

##### Returns

`boolean`
### supportsMmap

#### Get Signature

```ts
get supportsMmap(): boolean;
```

Defined in: bindings/Llama.ts:207

##### Returns

`boolean`
### gpuSupportsMmap

#### Get Signature

```ts
get gpuSupportsMmap(): boolean;
```

Defined in: bindings/Llama.ts:211

##### Returns

`boolean`
### supportsMlock

#### Get Signature

```ts
get supportsMlock(): boolean;
```

Defined in: bindings/Llama.ts:215

##### Returns

`boolean`
### cpuMathCores

#### Get Signature

```ts
get cpuMathCores(): number;
```

Defined in: bindings/Llama.ts:220

The number of CPU cores that are useful for math.

##### Returns

`number`
### maxThreads

#### Get Signature

```ts
get maxThreads(): number;
```

Defined in: bindings/Llama.ts:231

The maximum number of threads that can be used by the Llama instance.

If set to `0`, the Llama instance will have no limit on the number of threads.

See the `maxThreads` option of `getLlama` for more information.

##### Returns

`number`

#### Set Signature

```ts
set maxThreads(value: number): void;
```

Defined in: bindings/Llama.ts:235

##### Parameters

| Parameter | Type |
|---|---|
| `value` | `number` |

##### Returns

`void`
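As an illustrative sketch, `cpuMathCores` and `maxThreads` are often combined to pick a sensible thread count. The clamping policy below is an assumption for illustration, not part of the API:

```ts
// Hypothetical helper: clamp a requested thread count against the number of
// CPU cores that are useful for math. A requested value of 0 means
// "no limit" per the maxThreads documentation, so we fall back to the
// math-core count instead of leaving it unbounded.
function pickThreadCount(cpuMathCores: number, requested: number): number {
    if (requested === 0)
        return cpuMathCores;
    return Math.min(requested, cpuMathCores);
}
```

With a real instance you might then write something like `llama.maxThreads = pickThreadCount(llama.cpuMathCores, 4);`.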
### numa

#### Get Signature

```ts
get numa(): LlamaNuma;
```

Defined in: bindings/Llama.ts:242

See the `numa` option of `getLlama` for more information.

##### Returns

`LlamaNuma`
### logLevel

#### Get Signature

```ts
get logLevel(): LlamaLogLevel;
```

Defined in: bindings/Llama.ts:246

##### Returns

`LlamaLogLevel`

#### Set Signature

```ts
set logLevel(value: LlamaLogLevel): void;
```

Defined in: bindings/Llama.ts:250

##### Parameters

| Parameter | Type |
|---|---|
| `value` | `LlamaLogLevel` |

##### Returns

`void`
### logger

#### Get Signature

```ts
get logger(): (level: LlamaLogLevel, message: string) => void;
```

Defined in: bindings/Llama.ts:260

##### Returns

```ts
(level: LlamaLogLevel, message: string): void;
```

###### Parameters

| Parameter | Type |
|---|---|
| `level` | `LlamaLogLevel` |
| `message` | `string` |

###### Returns

`void`

#### Set Signature

```ts
set logger(value: (level: LlamaLogLevel, message: string) => void): void;
```

Defined in: bindings/Llama.ts:264

##### Parameters

| Parameter | Type |
|---|---|
| `value` | `(level: LlamaLogLevel, message: string) => void` |

##### Returns

`void`
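A custom logger is just a function matching the signature above. As a minimal sketch (the string type alias below stands in for the library's `LlamaLogLevel` enum, purely for illustration):

```ts
// Stand-in for LlamaLogLevel; in node-llama-cpp it is an enum imported
// from the library.
type LogLevel = string;

const collected: string[] = [];

// A logger that collects messages instead of printing them, e.g. for
// forwarding to a log file or a test harness.
function collectingLogger(level: LogLevel, message: string): void {
    collected.push(`[${level}] ${message}`);
}

// With a real instance you would assign it: llama.logger = collectingLogger;
collectingLogger("warn", "low VRAM");
```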
### buildType

#### Get Signature

```ts
get buildType(): "localBuild" | "prebuilt";
```

Defined in: bindings/Llama.ts:271

##### Returns

`"localBuild"` \| `"prebuilt"`
### cmakeOptions

#### Get Signature

```ts
get cmakeOptions(): Readonly<Record<string, string>>;
```

Defined in: bindings/Llama.ts:275

##### Returns

`Readonly<Record<string, string>>`
### llamaCppRelease

#### Get Signature

```ts
get llamaCppRelease(): {
    repo: string;
    release: string;
};
```

Defined in: bindings/Llama.ts:279

##### Returns

```ts
{
    repo: string;
    release: string;
}
```

###### repo

```ts
readonly repo: string;
```

###### release

```ts
readonly release: string;
```

### systemInfo

#### Get Signature

```ts
get systemInfo(): string;
```

Defined in: bindings/Llama.ts:283

##### Returns

`string`
### vramPaddingSize

#### Get Signature

```ts
get vramPaddingSize(): number;
```

Defined in: bindings/Llama.ts:295

VRAM padding used for memory size calculations, as these calculations are not always accurate. This is set by default to ensure stability, but can be configured when you call `getLlama`.

See `vramPadding` on `getLlama` for more information.

##### Returns

`number`
## Methods

### dispose()

```ts
dispose(): Promise<void>;
```

Defined in: bindings/Llama.ts:170

#### Returns

`Promise<void>`
### getVramState()

```ts
getVramState(): Promise<{
    total: number;
    used: number;
    free: number;
    unifiedSize: number;
}>;
```

Defined in: bindings/Llama.ts:305

Get the current VRAM state: the total amount of VRAM, and how much of it is currently used and free.

`unifiedSize` represents the amount of VRAM that is shared between the CPU and GPU. On SoC devices, this is usually the same as `total`.

#### Returns

`Promise<{ total: number; used: number; free: number; unifiedSize: number; }>`
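As a hedged sketch of working with the returned shape, a small helper can compute the usage ratio (the helper and the sample values below are illustrative, not part of the API):

```ts
// The object shape resolved by getVramState(), per the signature above.
type VramState = {total: number; used: number; free: number; unifiedSize: number};

// Fraction of VRAM currently in use; guards against a zero total
// (e.g. when no GPU is available).
function vramUsageRatio(state: VramState): number {
    return state.total === 0 ? 0 : state.used / state.total;
}

// Illustrative sample; with a real instance: const state = await llama.getVramState();
const sample: VramState = {total: 8, used: 2, free: 6, unifiedSize: 0};
```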
### getSwapState()

```ts
getSwapState(): Promise<{
    maxSize: number;
    allocated: number;
    used: number;
}>;
```

Defined in: bindings/Llama.ts:330

Get the state of the swap memory.

- `maxSize` - the maximum size of the swap memory that the system can allocate. If the swap size is dynamic (like on macOS), this will be `Infinity`.
- `allocated` - the total size allocated by the system for swap memory.
- `used` - the amount of swap memory that is currently being used from the allocated size.

On Windows, this will return the info for the page file.

#### Returns

`Promise<{ maxSize: number; allocated: number; used: number; }>`
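Because `maxSize` can be `Infinity` on systems with dynamic swap, code consuming this state should handle the non-finite case. A minimal sketch (the helper is an illustrative assumption, not part of the API):

```ts
// The object shape resolved by getSwapState(), per the signature above.
type SwapState = {maxSize: number; allocated: number; used: number};

// Remaining swap headroom. When maxSize is Infinity (dynamic swap, e.g.
// macOS), the system can grow swap as needed, so headroom is also Infinity.
function swapHeadroom(state: SwapState): number {
    if (!Number.isFinite(state.maxSize))
        return Infinity;
    return state.maxSize - state.used;
}
```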
### getGpuDeviceNames()

```ts
getGpuDeviceNames(): Promise<string[]>;
```

Defined in: bindings/Llama.ts:356

#### Returns

`Promise<string[]>`
### loadModel()

```ts
loadModel(options: LlamaModelOptions): Promise<LlamaModel>;
```

Defined in: bindings/Llama.ts:364

#### Parameters

| Parameter | Type |
|---|---|
| `options` | `LlamaModelOptions` |

#### Returns

`Promise<LlamaModel>`
### createGrammarForJsonSchema()

```ts
createGrammarForJsonSchema<T, Defs>(schema: Readonly<T> & GbnfJsonSchema<Defs>): Promise<LlamaJsonSchemaGrammar<T, Defs>>;
```

Defined in: bindings/Llama.ts:384

#### Type Parameters

| Type Parameter | Default type |
|---|---|
| `T` extends `GbnfJsonBasicSchema` \| `GbnfJsonConstSchema` \| `GbnfJsonEnumSchema` \| `GbnfJsonBasicStringSchema` \| `GbnfJsonFormatStringSchema` \| `GbnfJsonOneOfSchema<Defs>` \| `GbnfJsonObjectSchema<string, Defs>` \| `GbnfJsonArraySchema<Defs>` \| `GbnfJsonRefSchema<Defs>` | - |
| `Defs` extends `GbnfJsonDefList<Defs>` | `Record<any, any>` |

#### Parameters

| Parameter | Type |
|---|---|
| `schema` | `Readonly<T> & GbnfJsonSchema<Defs>` |

#### Returns

`Promise<LlamaJsonSchemaGrammar<T, Defs>>`

#### See
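The schema argument is a plain JSON-schema-style object. A minimal sketch (only basic schema features are used here; the exact set of supported features is defined by the `GbnfJsonSchema` types):

```ts
// A minimal JSON-schema-style object of the kind accepted by
// createGrammarForJsonSchema(); `as const` preserves the literal types
// so the resulting grammar can infer the parsed output shape.
const schema = {
    type: "object",
    properties: {
        name: {type: "string"},
        age: {type: "number"}
    }
} as const;

// With a real instance (setup not shown):
// const grammar = await llama.createGrammarForJsonSchema(schema);
// The grammar can then constrain model output to match this shape.
```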
### getGrammarFor()

```ts
getGrammarFor(type:
    | "json"
    | "json_arr"
    | "english"
    | "list"
    | "c"
    | "arithmetic"
    | "japanese"
    | "chess"): Promise<LlamaGrammar>;
```

Defined in: bindings/Llama.ts:392

#### Parameters

| Parameter | Type |
|---|---|
| `type` | `"json"` \| `"json_arr"` \| `"english"` \| `"list"` \| `"c"` \| `"arithmetic"` \| `"japanese"` \| `"chess"` |

#### Returns

`Promise<LlamaGrammar>`
### createGrammar()

```ts
createGrammar(options: LlamaGrammarOptions): Promise<LlamaGrammar>;
```

Defined in: bindings/Llama.ts:399

#### Parameters

| Parameter | Type |
|---|---|
| `options` | `LlamaGrammarOptions` |

#### Returns

`Promise<LlamaGrammar>`

#### See

Using Grammar tutorial
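A custom grammar is typically written in llama.cpp's GBNF format. As a hedged sketch (the grammar text is an illustrative example, and passing it via a `grammar` field of `LlamaGrammarOptions` is an assumption here):

```ts
// A tiny illustrative GBNF grammar: the model may only answer "yes" or "no".
const gbnf = [
    'root ::= answer',
    'answer ::= "yes" | "no"'
].join("\n");

// With a real instance (setup not shown):
// const grammar = await llama.createGrammar({grammar: gbnf});
```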
### defaultConsoleLogger()

```ts
static defaultConsoleLogger(level: LlamaLogLevel, message: string): void;
```

Defined in: bindings/Llama.ts:611

#### Parameters

| Parameter | Type |
|---|---|
| `level` | `LlamaLogLevel` |
| `message` | `string` |

#### Returns

`void`