Type alias: LlamaContextOptions

```ts
type LlamaContextOptions = {
  batchSize?: number;
  contextSize?: number;
  embedding?: boolean;
  logitsAll?: boolean;
  model: LlamaModel;
  prependBos?: boolean;
  seed?: number | null;
  threads?: number;
};
```

Type declaration

batchSize

```ts
batchSize?: number;
```

Prompt processing batch size.

contextSize

```ts
contextSize?: number;
```

Text context size.

embedding

```ts
embedding?: boolean;
```

Embedding mode only.

logitsAll

```ts
logitsAll?: boolean;
```

The `llama_eval()` call computes all logits, not just the last one.

model

```ts
model: LlamaModel;
```

prependBos

```ts
prependBos?: boolean;
```

seed

```ts
seed?: number | null;
```

If `null`, a random seed will be used.

threads

```ts
threads?: number;
```

Number of threads to use to evaluate tokens.
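As a rough illustration, an options object matching this type can be built as below. This is a self-contained sketch: `LlamaModel` is stubbed locally so the snippet runs without the library installed, and the model path and numeric values are hypothetical, not recommendations.

```typescript
// Stub standing in for the library's LlamaModel class; in real code,
// import it from the library instead of declaring it here.
type LlamaModel = { modelPath: string };

// Mirrors the documented LlamaContextOptions type declaration.
type LlamaContextOptions = {
  batchSize?: number;
  contextSize?: number;
  embedding?: boolean;
  logitsAll?: boolean;
  model: LlamaModel;
  prependBos?: boolean;
  seed?: number | null;
  threads?: number;
};

// Hypothetical model instance; the path is illustrative only.
const model: LlamaModel = { modelPath: "model.gguf" };

const options: LlamaContextOptions = {
  model,               // the only required property
  contextSize: 4096,   // text context size, in tokens
  batchSize: 512,      // prompt processing batch size
  seed: null,          // null means a random seed will be used
  threads: 4           // threads used to evaluate tokens
};

console.log(options.contextSize);
```

All properties other than `model` are optional, so omitted fields fall back to the library's defaults.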

Source

llamaEvaluator/LlamaContext.ts:9