Type Alias: LLamaChatCompletePromptOptions
```typescript
type LLamaChatCompletePromptOptions = {
    maxTokens?: LLamaChatPromptOptions["maxTokens"];
    stopOnAbortSignal?: LLamaChatPromptOptions["stopOnAbortSignal"];
    onTextChunk?: LLamaChatPromptOptions["onTextChunk"];
    onToken?: LLamaChatPromptOptions["onToken"];
    signal?: LLamaChatPromptOptions["signal"];
    temperature?: LLamaChatPromptOptions["temperature"];
    minP?: LLamaChatPromptOptions["minP"];
    topK?: LLamaChatPromptOptions["topK"];
    topP?: LLamaChatPromptOptions["topP"];
    seed?: LLamaChatPromptOptions["seed"];
    trimWhitespaceSuffix?: LLamaChatPromptOptions["trimWhitespaceSuffix"];
    evaluationPriority?: LLamaChatPromptOptions["evaluationPriority"];
    repeatPenalty?: LLamaChatPromptOptions["repeatPenalty"];
    tokenBias?: LLamaChatPromptOptions["tokenBias"];
    customStopTriggers?: LLamaChatPromptOptions["customStopTriggers"];
    grammar?: LlamaGrammar;
    functions?: ChatSessionModelFunctions;
    documentFunctionParams?: boolean;
};
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:239
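These options are accepted by the second parameter of LlamaChatSession's completePrompt method, which generates a completion of a partial user prompt without adding it to the chat history. A minimal usage sketch, assuming the node-llama-cpp v3 API; the model path is a placeholder:

```typescript
import {fileURLToPath} from "url";
import path from "path";
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

const llama = await getLlama();
const model = await llama.loadModel({
    // placeholder path - substitute a model file you have locally
    modelPath: path.join(__dirname, "models", "model.gguf")
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

// Complete the text of a partial user prompt (not a chat response)
const completion = await session.completePrompt("Here is a list of sweet fruits: ", {
    maxTokens: 128,
    temperature: 0.7
});
console.log(completion);
```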
Properties
maxTokens?
optional maxTokens: LLamaChatPromptOptions["maxTokens"];
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:245
Generate a completion for the given user prompt, up to the given number of tokens. Defaults to 256 or half the context size, whichever is smaller.
stopOnAbortSignal?
optional stopOnAbortSignal: LLamaChatPromptOptions["stopOnAbortSignal"];
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:253
If a completion has already started being generated when the given signal is aborted, the generation will stop and the completion will be returned as-is instead of an error being thrown. Defaults to false.
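For example, a completion can be aborted by a signal while keeping whatever was generated so far; a sketch assuming the session created above:

```typescript
const abortController = new AbortController();

// hypothetical timeout for illustration: abort generation after 5 seconds
setTimeout(() => abortController.abort(), 5000);

const partialCompletion = await session.completePrompt("Once upon a time", {
    signal: abortController.signal,
    stopOnAbortSignal: true // return the partial completion instead of throwing
});
```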
onTextChunk?
optional onTextChunk: LLamaChatPromptOptions["onTextChunk"];
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:260
Called with each generated text chunk as the model produces the completion. Useful for streaming the completion as it is being generated.
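A sketch of streaming the completion to stdout as it is generated, assuming the session created above:

```typescript
const completion = await session.completePrompt("The sky is", {
    onTextChunk(chunk) {
        // print each chunk as soon as the model produces it
        process.stdout.write(chunk);
    }
});
```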
onToken?
optional onToken: LLamaChatPromptOptions["onToken"];
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:267
Called with the generated tokens as the model produces the completion. Prefer using onTextChunk instead of this option.
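If you do need the raw tokens, they can be turned back into text with the model's detokenizer; a sketch assuming the model and session created above:

```typescript
const completion = await session.completePrompt("2 + 2 =", {
    onToken(tokens) {
        // onTextChunk is usually preferable: detokenizing chunks yourself
        // can split characters that span multiple tokens
        process.stdout.write(model.detokenize(tokens));
    }
});
```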
signal?
optional signal: LLamaChatPromptOptions["signal"];
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:269
temperature?
optional temperature: LLamaChatPromptOptions["temperature"];
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:270
minP?
optional minP: LLamaChatPromptOptions["minP"];
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:271
topK?
optional topK: LLamaChatPromptOptions["topK"];
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:272
topP?
optional topP: LLamaChatPromptOptions["topP"];
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:273
seed?
optional seed: LLamaChatPromptOptions["seed"];
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:274
trimWhitespaceSuffix?
optional trimWhitespaceSuffix: LLamaChatPromptOptions["trimWhitespaceSuffix"];
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:275
evaluationPriority?
optional evaluationPriority: LLamaChatPromptOptions["evaluationPriority"];
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:276
repeatPenalty?
optional repeatPenalty: LLamaChatPromptOptions["repeatPenalty"];
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:277
tokenBias?
optional tokenBias: LLamaChatPromptOptions["tokenBias"];
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:278
customStopTriggers?
optional customStopTriggers: LLamaChatPromptOptions["customStopTriggers"];
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:279
grammar?
optional grammar: LlamaGrammar;
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:281
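A grammar can be used to constrain the generated completion to a specific format. A sketch, assuming the llama instance created above and its getGrammarFor helper for the built-in JSON grammar:

```typescript
// constrain the completion to valid JSON
// (assumption: getGrammarFor("json") is available in your version)
const grammar = await llama.getGrammarFor("json");
const completion = await session.completePrompt('{"fruits": ', {
    grammar,
    maxTokens: 256
});
```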
functions?
optional functions: ChatSessionModelFunctions;
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:290
The functions are not called by the model here; they are only used to keep the instructions about the functions in the current context state, to avoid context shifts.
It's best to provide the same functions that were used for the previous prompt.
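For example, passing the same functions to completePrompt that were used for the previous prompt call; defineChatSessionFunction is node-llama-cpp's helper for declaring such functions, and the function body here is hypothetical:

```typescript
import {defineChatSessionFunction} from "node-llama-cpp";

const functions = {
    getTemperature: defineChatSessionFunction({
        description: "Get the current temperature in a city",
        params: {
            type: "object",
            properties: {
                city: {type: "string"}
            }
        },
        handler({city}) {
            return `It's 20°C in ${city}`; // hypothetical value for illustration
        }
    })
};

await session.prompt("What's the temperature in Paris?", {functions});

// reuse the same functions so the function instructions already in the
// context state stay consistent and no context shift is needed
const completion = await session.completePrompt("And what about", {functions});
```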
documentFunctionParams?
optional documentFunctionParams: boolean;
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:299
The functions are not called by the model here; this option is only used to keep the instructions about the functions in the current context state, to avoid context shifts.
It's best to provide the same value that was used for the previous prompt.