# Type Alias: LLamaChatCompletePromptOptions

```ts
type LLamaChatCompletePromptOptions = {
  maxTokens?: LLamaChatPromptOptions["maxTokens"];
  stopOnAbortSignal?: LLamaChatPromptOptions["stopOnAbortSignal"];
  onTextChunk?: LLamaChatPromptOptions["onTextChunk"];
  onToken?: LLamaChatPromptOptions["onToken"];
  signal?: LLamaChatPromptOptions["signal"];
  temperature?: LLamaChatPromptOptions["temperature"];
  minP?: LLamaChatPromptOptions["minP"];
  topK?: LLamaChatPromptOptions["topK"];
  topP?: LLamaChatPromptOptions["topP"];
  seed?: LLamaChatPromptOptions["seed"];
  trimWhitespaceSuffix?: LLamaChatPromptOptions["trimWhitespaceSuffix"];
  evaluationPriority?: LLamaChatPromptOptions["evaluationPriority"];
  repeatPenalty?: LLamaChatPromptOptions["repeatPenalty"];
  tokenBias?: LLamaChatPromptOptions["tokenBias"];
  customStopTriggers?: LLamaChatPromptOptions["customStopTriggers"];
  grammar?: LlamaGrammar;
  functions?: ChatSessionModelFunctions;
  documentFunctionParams?: boolean;
};
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:208
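
For illustration, here's a minimal sketch of how these options might be passed to `LlamaChatSession.completePrompt()` (the model path is a placeholder):

```ts
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"}); // placeholder path
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

// Complete the user's partially-written prompt, capping generation at 128 tokens
const completion = await session.completePrompt("Here's a riddle: ", {
    maxTokens: 128,
    temperature: 0.8
});
console.log(completion);
```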

## Properties

### maxTokens?

```ts
optional maxTokens: LLamaChatPromptOptions["maxTokens"];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:214

Generate a completion for the given user prompt up to the given number of tokens.

Defaults to 256 or half the context size, whichever is smaller.


### stopOnAbortSignal?

```ts
optional stopOnAbortSignal: LLamaChatPromptOptions["stopOnAbortSignal"];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:222

If the given signal is aborted after generation has already started, generation stops and the partial completion is returned as-is instead of an error being thrown.

Defaults to false.
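
For example (a sketch, reusing the `session` from the example above):

```ts
const abortController = new AbortController();
setTimeout(() => abortController.abort(), 2000); // give up after 2 seconds

// With stopOnAbortSignal set to true, aborting mid-generation resolves
// with the partial completion instead of rejecting with an abort error
const partialCompletion = await session.completePrompt("List some prime numbers: ", {
    signal: abortController.signal,
    stopOnAbortSignal: true
});
```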


### onTextChunk?

```ts
optional onTextChunk: LLamaChatPromptOptions["onTextChunk"];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:229

Called with each generated text chunk as the model generates the completion.

Useful for streaming the completion as it's being generated.
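
For example (a sketch, reusing the `session` from above):

```ts
const completion = await session.completePrompt("The weather today is ", {
    onTextChunk(chunk) {
        process.stdout.write(chunk); // print each chunk as soon as it's generated
    }
});
```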


### onToken?

```ts
optional onToken: LLamaChatPromptOptions["onToken"];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:236

Called with the generated tokens as the model generates the completion.

Prefer using onTextChunk over this when possible.
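
If you do need raw tokens, a sketch (reusing the `session` and `model` from above):

```ts
const completion = await session.completePrompt("Once upon a time ", {
    onToken(tokens) {
        // tokens is an array of model tokens; detokenize them to inspect the text
        process.stdout.write(model.detokenize(tokens));
    }
});
```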


### signal?

```ts
optional signal: LLamaChatPromptOptions["signal"];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:238


### temperature?

```ts
optional temperature: LLamaChatPromptOptions["temperature"];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:239


### minP?

```ts
optional minP: LLamaChatPromptOptions["minP"];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:240


### topK?

```ts
optional topK: LLamaChatPromptOptions["topK"];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:241


### topP?

```ts
optional topP: LLamaChatPromptOptions["topP"];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:242


### seed?

```ts
optional seed: LLamaChatPromptOptions["seed"];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:243


### trimWhitespaceSuffix?

```ts
optional trimWhitespaceSuffix: LLamaChatPromptOptions["trimWhitespaceSuffix"];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:244


### evaluationPriority?

```ts
optional evaluationPriority: LLamaChatPromptOptions["evaluationPriority"];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:245


### repeatPenalty?

```ts
optional repeatPenalty: LLamaChatPromptOptions["repeatPenalty"];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:246


### tokenBias?

```ts
optional tokenBias: LLamaChatPromptOptions["tokenBias"];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:247


### customStopTriggers?

```ts
optional customStopTriggers: LLamaChatPromptOptions["customStopTriggers"];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:248


### grammar?

```ts
optional grammar: LlamaGrammar;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:250
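
Presumably this behaves like the `grammar` option of `prompt()`, constraining the format of the generated completion. A sketch using a built-in JSON grammar (reusing the `llama` and `session` from above):

```ts
const grammar = await llama.getGrammarFor("json");

// The generated completion is constrained to valid JSON
const completion = await session.completePrompt('{"name": ', {grammar});
```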


### functions?

```ts
optional functions: ChatSessionModelFunctions;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:259

The functions provided here are not called by the model; they only keep the function instructions already in the current context state intact, to avoid context shifts.

It's best to provide the same functions that were used for the previous prompt, as shown in the sketch below.
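
A sketch of reusing the same functions for a completion that follows a function-enabled prompt (assuming `defineChatSessionFunctions` and the `session` from above):

```ts
import {defineChatSessionFunctions} from "node-llama-cpp";

const functions = defineChatSessionFunctions({
    getCurrentTime: {
        description: "Get the current time",
        handler() {
            return new Date().toLocaleTimeString();
        }
    }
});

await session.prompt("What time is it?", {functions});

// Provide the same functions so the function instructions already in the
// context state remain unchanged, avoiding a context shift
const completion = await session.completePrompt("The current time is ", {functions});
```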


### documentFunctionParams?

```ts
optional documentFunctionParams: boolean;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:268

As with functions, this value is not used by the model here; it only keeps the function instructions already in the current context state intact, to avoid context shifts.

It's best to provide the same value that was used for the previous prompt.