
# Type Alias: LLamaChatLoadAndCompleteUserMessageOptions\<Functions\>

```ts
type LLamaChatLoadAndCompleteUserMessageOptions<Functions> = {
  initialUserPrompt?: string;
  stopOnAbortSignal?: boolean;
  onTextChunk?: LLamaChatGenerateResponseOptions<Functions>["onTextChunk"];
  onToken?: LLamaChatGenerateResponseOptions<Functions>["onToken"];
  signal?: LLamaChatGenerateResponseOptions<Functions>["signal"];
  maxTokens?: LLamaChatGenerateResponseOptions<Functions>["maxTokens"];
  temperature?: LLamaChatGenerateResponseOptions<Functions>["temperature"];
  minP?: LLamaChatGenerateResponseOptions<Functions>["minP"];
  topK?: LLamaChatGenerateResponseOptions<Functions>["topK"];
  topP?: LLamaChatGenerateResponseOptions<Functions>["topP"];
  seed?: LLamaChatGenerateResponseOptions<Functions>["seed"];
  xtc?: LLamaChatGenerateResponseOptions<Functions>["xtc"];
  trimWhitespaceSuffix?: LLamaChatGenerateResponseOptions<Functions>["trimWhitespaceSuffix"];
  repeatPenalty?: LLamaChatGenerateResponseOptions<Functions>["repeatPenalty"];
  dryRepeatPenalty?: LLamaChatGenerateResponseOptions<Functions>["dryRepeatPenalty"];
  tokenBias?: LLamaChatGenerateResponseOptions<Functions>["tokenBias"];
  evaluationPriority?: LLamaChatGenerateResponseOptions<Functions>["evaluationPriority"];
  contextShift?: LLamaChatGenerateResponseOptions<Functions>["contextShift"];
  customStopTriggers?: LLamaChatGenerateResponseOptions<Functions>["customStopTriggers"];
  lastEvaluationContextWindow?: LLamaChatGenerateResponseOptions<Functions>["lastEvaluationContextWindow"];
  grammar?: LlamaGrammar;
  functions?: Functions | ChatModelFunctions;
  documentFunctionParams?: boolean;
};
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:404

## Type Parameters

| Type Parameter | Default type |
| ------ | ------ |
| `Functions` *extends* `ChatModelFunctions` \| `undefined` | `undefined` |
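For context, here's a minimal sketch of how this options object might be passed to `LlamaChat`'s accompanying `loadChatAndCompleteUserMessage` method, assuming the usual node-llama-cpp setup flow (the model path is a placeholder):

```ts
import {getLlama, LlamaChat} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: "path/to/model.gguf" // placeholder path
});
const context = await model.createContext();
const llamaChat = new LlamaChat({
    contextSequence: context.getSequence()
});

// a chat history ending with a user message to complete
const chatHistory = llamaChat.chatWrapper.generateInitialChatHistory();
chatHistory.push({type: "user", text: "The three primary colors are"});

const {completion} = await llamaChat.loadChatAndCompleteUserMessage(chatHistory, {
    maxTokens: 24
});
console.log("Suggested completion:", completion);
```

The property sketches below reuse `llama`, `model`, `llamaChat`, and `chatHistory` from this setup.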

## Properties

### initialUserPrompt?

```ts
optional initialUserPrompt: string;
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:408

Complete the given user prompt without adding it or the completion to the returned context window.
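A hedged sketch of this option, reusing the setup above (the prompt text is illustrative):

```ts
const {completion} = await llamaChat.loadChatAndCompleteUserMessage(chatHistory, {
    // completed by the model, but kept out of the returned context window
    initialUserPrompt: "Write a haiku about",
    maxTokens: 32
});
```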


### stopOnAbortSignal?

```ts
optional stopOnAbortSignal: boolean;
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:416

If the signal is aborted after a completion has already started being generated, the generation will stop and the completion will be returned as-is instead of an error being thrown.

Defaults to `false`.
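For example, combined with the `signal` option (a sketch reusing the setup above; the timeout is arbitrary):

```ts
const abortController = new AbortController();
setTimeout(() => abortController.abort(), 2000); // abort after ~2 seconds

const {completion} = await llamaChat.loadChatAndCompleteUserMessage(chatHistory, {
    signal: abortController.signal,
    stopOnAbortSignal: true // return the partial completion instead of throwing
});
```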


### onTextChunk?

```ts
optional onTextChunk: LLamaChatGenerateResponseOptions<Functions>["onTextChunk"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:423

Called with the generated text chunk as the model generates the completion.

Useful for streaming the completion as it's being generated.
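For example (a sketch reusing the setup above):

```ts
await llamaChat.loadChatAndCompleteUserMessage(chatHistory, {
    onTextChunk(chunk) {
        process.stdout.write(chunk); // print each chunk as it arrives
    }
});
```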


### onToken?

```ts
optional onToken: LLamaChatGenerateResponseOptions<Functions>["onToken"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:430

Called with the generated tokens as the model generates the completion.

Prefer using `onTextChunk` over this callback.
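If you do need token-level access, a sketch reusing the `model` instance from the setup above:

```ts
await llamaChat.loadChatAndCompleteUserMessage(chatHistory, {
    onToken(tokens) {
        // turn the raw tokens back into text for display
        process.stdout.write(model.detokenize(tokens));
    }
});
```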


### signal?

```ts
optional signal: LLamaChatGenerateResponseOptions<Functions>["signal"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:432


### maxTokens?

```ts
optional maxTokens: LLamaChatGenerateResponseOptions<Functions>["maxTokens"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:433


### temperature?

```ts
optional temperature: LLamaChatGenerateResponseOptions<Functions>["temperature"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:434


### minP?

```ts
optional minP: LLamaChatGenerateResponseOptions<Functions>["minP"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:435


### topK?

```ts
optional topK: LLamaChatGenerateResponseOptions<Functions>["topK"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:436


### topP?

```ts
optional topP: LLamaChatGenerateResponseOptions<Functions>["topP"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:437


### seed?

```ts
optional seed: LLamaChatGenerateResponseOptions<Functions>["seed"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:438
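The sampling options above (`temperature` through `seed`) are passed through unchanged from `LLamaChatGenerateResponseOptions`; a sketch combining them (the values are illustrative, not recommendations):

```ts
const {completion} = await llamaChat.loadChatAndCompleteUserMessage(chatHistory, {
    temperature: 0.8, // enable randomness in sampling
    minP: 0.05,
    topK: 40,
    topP: 0.9,
    seed: 1234 // fix the seed for reproducible sampling
});
```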


### xtc?

```ts
optional xtc: LLamaChatGenerateResponseOptions<Functions>["xtc"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:439


### trimWhitespaceSuffix?

```ts
optional trimWhitespaceSuffix: LLamaChatGenerateResponseOptions<Functions>["trimWhitespaceSuffix"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:440


### repeatPenalty?

```ts
optional repeatPenalty: LLamaChatGenerateResponseOptions<Functions>["repeatPenalty"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:441


### dryRepeatPenalty?

```ts
optional dryRepeatPenalty: LLamaChatGenerateResponseOptions<Functions>["dryRepeatPenalty"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:442


### tokenBias?

```ts
optional tokenBias: LLamaChatGenerateResponseOptions<Functions>["tokenBias"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:443


### evaluationPriority?

```ts
optional evaluationPriority: LLamaChatGenerateResponseOptions<Functions>["evaluationPriority"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:444


### contextShift?

```ts
optional contextShift: LLamaChatGenerateResponseOptions<Functions>["contextShift"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:445


### customStopTriggers?

```ts
optional customStopTriggers: LLamaChatGenerateResponseOptions<Functions>["customStopTriggers"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:446
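For example (a sketch; the trigger strings are arbitrary):

```ts
await llamaChat.loadChatAndCompleteUserMessage(chatHistory, {
    customStopTriggers: ["\n\n", "###"] // stop once either sequence is generated
});
```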


### lastEvaluationContextWindow?

```ts
optional lastEvaluationContextWindow: LLamaChatGenerateResponseOptions<Functions>["lastEvaluationContextWindow"];
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:447


### grammar?

```ts
optional grammar: LlamaGrammar;
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:449
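A sketch of constraining the completion with a grammar, reusing the `llama` instance from the setup above and assuming its built-in JSON grammar via `getGrammarFor`:

```ts
const grammar = await llama.getGrammarFor("json");

const {completion} = await llamaChat.loadChatAndCompleteUserMessage(chatHistory, {
    grammar // constrain the completion to valid JSON
});
```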


### functions?

```ts
optional functions: Functions | ChatModelFunctions;
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:458

The functions passed here are not called by the model; they are only used to keep the instructions given to the model about the functions in the current context state, to avoid context shifts.

It's best to provide the same functions that were used for the previous prompt.
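A sketch, where `myFunctions` stands in for whatever functions object was passed to the previous prompt (hypothetical name); `documentFunctionParams` below follows the same rule:

```ts
const {completion} = await llamaChat.loadChatAndCompleteUserMessage(chatHistory, {
    functions: myFunctions, // hypothetical: same functions as the previous prompt
    documentFunctionParams: true // same value as the previous prompt
});
```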


### documentFunctionParams?

```ts
optional documentFunctionParams: boolean;
```

Defined in: evaluator/LlamaChat/LlamaChat.ts:467

Like `functions`, this value is not used by the model here; it only keeps the instructions given to the model about the functions in the current context state consistent, to avoid context shifts.

It's best to provide the same value that was used for the previous prompt.