Type Alias: LLamaChatLoadAndCompleteUserMessageOptions<Functions>
```ts
type LLamaChatLoadAndCompleteUserMessageOptions<Functions> = {
  initialUserPrompt?: string;
  stopOnAbortSignal?: boolean;
  onTextChunk?: LLamaChatGenerateResponseOptions<Functions>["onTextChunk"];
  onToken?: LLamaChatGenerateResponseOptions<Functions>["onToken"];
  signal?: LLamaChatGenerateResponseOptions<Functions>["signal"];
  maxTokens?: LLamaChatGenerateResponseOptions<Functions>["maxTokens"];
  temperature?: LLamaChatGenerateResponseOptions<Functions>["temperature"];
  minP?: LLamaChatGenerateResponseOptions<Functions>["minP"];
  topK?: LLamaChatGenerateResponseOptions<Functions>["topK"];
  topP?: LLamaChatGenerateResponseOptions<Functions>["topP"];
  seed?: LLamaChatGenerateResponseOptions<Functions>["seed"];
  trimWhitespaceSuffix?: LLamaChatGenerateResponseOptions<Functions>["trimWhitespaceSuffix"];
  repeatPenalty?: LLamaChatGenerateResponseOptions<Functions>["repeatPenalty"];
  tokenBias?: LLamaChatGenerateResponseOptions<Functions>["tokenBias"];
  evaluationPriority?: LLamaChatGenerateResponseOptions<Functions>["evaluationPriority"];
  contextShift?: LLamaChatGenerateResponseOptions<Functions>["contextShift"];
  customStopTriggers?: LLamaChatGenerateResponseOptions<Functions>["customStopTriggers"];
  lastEvaluationContextWindow?: LLamaChatGenerateResponseOptions<Functions>["lastEvaluationContextWindow"];
  grammar?: LlamaGrammar;
  functions?: Functions | ChatModelFunctions;
  documentFunctionParams?: boolean;
};
```
Defined in: evaluator/LlamaChat/LlamaChat.ts:267
Type Parameters
| Type Parameter | Default type |
| --- | --- |
| `Functions` *extends* `ChatModelFunctions` \| `undefined` | `undefined` |
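For orientation, here is a minimal usage sketch: these options are passed as the second argument of `LlamaChat`'s `loadChatAndCompleteUserMessage` method, which completes a user message based on the given chat history. The model path is a placeholder and the setup is abbreviated:

```ts
import {getLlama, LlamaChat, type ChatHistoryItem} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"}); // placeholder path
const context = await model.createContext();
const chat = new LlamaChat({contextSequence: context.getSequence()});

const chatHistory: ChatHistoryItem[] = [
    {type: "system", text: "You are a helpful assistant."}
];

// suggest a user message that fits the current chat state
const {completion} = await chat.loadChatAndCompleteUserMessage(chatHistory, {
    maxTokens: 40,
    temperature: 0.8
});

console.log("Suggested user message:", completion);
```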
Properties
initialUserPrompt?
optional initialUserPrompt: string;
Defined in: evaluator/LlamaChat/LlamaChat.ts:271
Complete the given user prompt without adding it or the completion to the returned context window.
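For example, continuing the setup sketch above, a partially typed user prompt can be completed without the prompt or its completion becoming part of the returned context window:

```ts
const {completion} = await chat.loadChatAndCompleteUserMessage(chatHistory, {
    initialUserPrompt: "Write a haiku about ",
    maxTokens: 40
});
// `completion` continues the typed prompt; neither the prompt nor
// the completion is added to the returned context window
```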
stopOnAbortSignal?
optional stopOnAbortSignal: boolean;
Defined in: evaluator/LlamaChat/LlamaChat.ts:279
When a completion has already started being generated and the signal is then aborted, the generation will stop and the completion will be returned as-is instead of throwing an error.

Defaults to `false`.
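For example, continuing the sketch above, pairing `signal` with `stopOnAbortSignal` yields the partial completion on abort instead of a thrown error:

```ts
const abortController = new AbortController();
setTimeout(() => abortController.abort(), 500); // abort after 500ms

const {completion} = await chat.loadChatAndCompleteUserMessage(chatHistory, {
    initialUserPrompt: "Write a haiku about ",
    signal: abortController.signal,
    stopOnAbortSignal: true // return what was generated so far instead of throwing
});
```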
onTextChunk?
optional onTextChunk: LLamaChatGenerateResponseOptions<Functions>["onTextChunk"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:286
Called with each generated text chunk as the model generates the completion.

Useful for streaming the completion as it's being generated.
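For example, continuing the sketch above, the completion can be printed as it streams in:

```ts
await chat.loadChatAndCompleteUserMessage(chatHistory, {
    initialUserPrompt: "Write a haiku about ",
    onTextChunk(chunk) {
        process.stdout.write(chunk); // print each chunk as it's generated
    }
});
```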
onToken?
optional onToken: LLamaChatGenerateResponseOptions<Functions>["onToken"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:293
Called with the generated tokens as the model generates the completion.

Preferably, you'd want to use `onTextChunk` instead of this.
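If you do need the raw tokens, continuing the sketch above, they can be detokenized manually with the model's `detokenize` method:

```ts
await chat.loadChatAndCompleteUserMessage(chatHistory, {
    initialUserPrompt: "Write a haiku about ",
    onToken(tokens) {
        // manual detokenization; prefer `onTextChunk` when you only need text
        process.stdout.write(model.detokenize(tokens));
    }
});
```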
signal?
optional signal: LLamaChatGenerateResponseOptions<Functions>["signal"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:295
maxTokens?
optional maxTokens: LLamaChatGenerateResponseOptions<Functions>["maxTokens"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:296
temperature?
optional temperature: LLamaChatGenerateResponseOptions<Functions>["temperature"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:297
minP?
optional minP: LLamaChatGenerateResponseOptions<Functions>["minP"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:298
topK?
optional topK: LLamaChatGenerateResponseOptions<Functions>["topK"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:299
topP?
optional topP: LLamaChatGenerateResponseOptions<Functions>["topP"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:300
seed?
optional seed: LLamaChatGenerateResponseOptions<Functions>["seed"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:301
trimWhitespaceSuffix?
optional trimWhitespaceSuffix: LLamaChatGenerateResponseOptions<Functions>["trimWhitespaceSuffix"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:302
repeatPenalty?
optional repeatPenalty: LLamaChatGenerateResponseOptions<Functions>["repeatPenalty"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:303
tokenBias?
optional tokenBias: LLamaChatGenerateResponseOptions<Functions>["tokenBias"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:304
evaluationPriority?
optional evaluationPriority: LLamaChatGenerateResponseOptions<Functions>["evaluationPriority"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:305
contextShift?
optional contextShift: LLamaChatGenerateResponseOptions<Functions>["contextShift"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:306
customStopTriggers?
optional customStopTriggers: LLamaChatGenerateResponseOptions<Functions>["customStopTriggers"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:307
lastEvaluationContextWindow?
optional lastEvaluationContextWindow: LLamaChatGenerateResponseOptions<Functions>["lastEvaluationContextWindow"];
Defined in: evaluator/LlamaChat/LlamaChat.ts:308
grammar?
optional grammar: LlamaGrammar;
Defined in: evaluator/LlamaChat/LlamaChat.ts:310
functions?
optional functions: Functions | ChatModelFunctions;
Defined in: evaluator/LlamaChat/LlamaChat.ts:319
The functions are not called by the model here; they are only used to keep the function instructions in the current context state unchanged, to avoid context shifts.

It's best to provide the same functions that were used for the previous prompt here.
documentFunctionParams?
optional documentFunctionParams: boolean;
Defined in: evaluator/LlamaChat/LlamaChat.ts:328
The functions are not called by the model here; this value is only used to keep the function instructions in the current context state unchanged, to avoid context shifts.

It's best to provide the same value that was used for the previous prompt here.
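For example, a sketch covering both `functions` and `documentFunctionParams`, where `previousFunctions` is a hypothetical stand-in for the functions object passed to the previous prompt's generation call:

```ts
await chat.loadChatAndCompleteUserMessage(chatHistory, {
    initialUserPrompt: "Write a haiku about ",
    functions: previousFunctions, // hypothetical: same object as the previous prompt
    documentFunctionParams: true // same value as was used for the previous prompt
});
```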