Type Alias: LLamaChatCompletePromptOptions

ts
type LLamaChatCompletePromptOptions = {
  maxTokens?: LLamaChatPromptOptions["maxTokens"];
  stopOnAbortSignal?: LLamaChatPromptOptions["stopOnAbortSignal"];
  onTextChunk?: LLamaChatPromptOptions["onTextChunk"];
  onToken?: LLamaChatPromptOptions["onToken"];
  signal?: LLamaChatPromptOptions["signal"];
  temperature?: LLamaChatPromptOptions["temperature"];
  minP?: LLamaChatPromptOptions["minP"];
  topK?: LLamaChatPromptOptions["topK"];
  topP?: LLamaChatPromptOptions["topP"];
  seed?: LLamaChatPromptOptions["seed"];
  trimWhitespaceSuffix?: LLamaChatPromptOptions["trimWhitespaceSuffix"];
  evaluationPriority?: LLamaChatPromptOptions["evaluationPriority"];
  repeatPenalty?: LLamaChatPromptOptions["repeatPenalty"];
  tokenBias?: LLamaChatPromptOptions["tokenBias"];
  customStopTriggers?: LLamaChatPromptOptions["customStopTriggers"];
  grammar?: LlamaGrammar;
  functions?: ChatSessionModelFunctions;
  documentFunctionParams?: boolean;
  completeAsModel?:
    | "auto"
    | boolean
    | {
        enabled?: "auto" | boolean;
        appendedMessages?: ChatHistoryItem[];
      };
};

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:247
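
These options are passed as the second argument to LlamaChatSession.completePrompt(). A minimal usage sketch, assuming a local GGUF model file (the model path below is a placeholder):

ts
import {fileURLToPath} from "url";
import path from "path";
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

const llama = await getLlama();
const model = await llama.loadModel({
    // placeholder path; point this at a local GGUF model file
    modelPath: path.join(__dirname, "models", "model.gguf")
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

// Complete a partial user prompt; the options below are a subset of LLamaChatCompletePromptOptions
const completion = await session.completePrompt("Here is a list of sweet fruits:\n* ", {
    maxTokens: 48,
    temperature: 0.8
});
console.log("Completion: " + completion);

The later examples on this page assume the llama, model, and session created here.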

Properties

maxTokens?

ts
optional maxTokens: LLamaChatPromptOptions["maxTokens"];

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:253

Generate a completion for the given user prompt up to the given number of tokens.

Defaults to 256 or half the context size, whichever is smaller.


stopOnAbortSignal?

ts
optional stopOnAbortSignal: LLamaChatPromptOptions["stopOnAbortSignal"];

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:261

If the completion has already started being generated when the given signal is aborted, generation will stop and the completion will be returned as-is instead of an error being thrown.

Defaults to false.
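
A sketch of stopping an in-progress completion while keeping the partial result, assuming the session from the first example:

ts
const abortController = new AbortController();
setTimeout(() => abortController.abort(), 2000); // abort after ~2 seconds

const partialCompletion = await session.completePrompt("Write a short poem about autumn", {
    signal: abortController.signal,
    stopOnAbortSignal: true // return what was generated so far instead of throwing
});
console.log(partialCompletion);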


onTextChunk?

ts
optional onTextChunk: LLamaChatPromptOptions["onTextChunk"];

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:268

Called with the generated text chunk as the model generates the completion.

Useful for streaming the completion as it's being generated.
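
A streaming sketch, assuming the session from the first example:

ts
const completion = await session.completePrompt("The three most common uses of TypeScript are", {
    maxTokens: 64,
    onTextChunk(chunk) {
        process.stdout.write(chunk); // print each chunk as it arrives
    }
});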


onToken?

ts
optional onToken: LLamaChatPromptOptions["onToken"];

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:275

Called with the generated tokens as the model generates the completion.

Prefer using onTextChunk instead of this.
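
If token-level access is needed, a sketch assuming the model and session from the first example:

ts
const completion = await session.completePrompt("Once upon a time", {
    maxTokens: 32,
    onToken(tokens) {
        // tokens is an array of token values; detokenize them to inspect the text
        process.stdout.write(model.detokenize(tokens));
    }
});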


signal?

ts
optional signal: LLamaChatPromptOptions["signal"];

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:277


temperature?

ts
optional temperature: LLamaChatPromptOptions["temperature"];

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:278


minP?

ts
optional minP: LLamaChatPromptOptions["minP"];

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:279


topK?

ts
optional topK: LLamaChatPromptOptions["topK"];

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:280


topP?

ts
optional topP: LLamaChatPromptOptions["topP"];

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:281


seed?

ts
optional seed: LLamaChatPromptOptions["seed"];

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:282


trimWhitespaceSuffix?

ts
optional trimWhitespaceSuffix: LLamaChatPromptOptions["trimWhitespaceSuffix"];

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:283


evaluationPriority?

ts
optional evaluationPriority: LLamaChatPromptOptions["evaluationPriority"];

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:284


repeatPenalty?

ts
optional repeatPenalty: LLamaChatPromptOptions["repeatPenalty"];

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:285


tokenBias?

ts
optional tokenBias: LLamaChatPromptOptions["tokenBias"];

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:286


customStopTriggers?

ts
optional customStopTriggers: LLamaChatPromptOptions["customStopTriggers"];

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:287


grammar?

ts
optional grammar: LlamaGrammar;

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:289


functions?

ts
optional functions: ChatSessionModelFunctions;

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:298

Functions are not called by the model here; they are only used to keep the instructions given to the model about the functions in the current context state, to avoid context shifts.

It's best to provide the same functions that were used for the previous prompt here, as shown in the sketch below.
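
A sketch of reusing the functions from a previous prompt, assuming the session from the first example (the function here is illustrative):

ts
import {defineChatSessionFunction} from "node-llama-cpp";

const functions = {
    getCurrentDate: defineChatSessionFunction({
        description: "Get the current date",
        handler() {
            return new Date().toLocaleDateString();
        }
    })
};

// The previous prompt used these functions
await session.prompt("What's the date today?", {functions});

// Provide the same functions here to keep their instructions in the context state
const completion = await session.completePrompt("Also, could you tell me", {functions});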


documentFunctionParams?

ts
optional documentFunctionParams: boolean;

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:307

The functions are not called by the model here; this setting is only used to keep the instructions given to the model about the functions in the current context state, to avoid context shifts.

It's best to provide the same value that was used for the previous prompt here.


completeAsModel?

ts
optional completeAsModel:
  | "auto"
  | boolean
  | {
      enabled?: "auto" | boolean;
      appendedMessages?: ChatHistoryItem[];
    };

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:319

Whether to complete the prompt as a model response.

  • "auto": Automatically determine whether to complete as a model response based on the model used. This is a good option to workaround some models that don't support used prompt completions.
  • true: Always complete as a model response
  • false: Never complete as a model response

Defaults to "auto".

Type declaration

"auto"

boolean

ts
{
  enabled?: "auto" | boolean;
  appendedMessages?: ChatHistoryItem[];
}

enabled?

ts
optional enabled: "auto" | boolean;

Whether to complete the prompt as a model response.

  • "auto": Automatically determine whether to complete as a model response based on the model used. This is a good option to workaround some models that don't support used prompt completions.
  • true: Always complete as a model response
  • false: Never complete as a model response

Defaults to "auto".

appendedMessages?

ts
optional appendedMessages: ChatHistoryItem[];

The messages to append to the chat history to generate a completion as a model response.

If the last message is a model message, the prompt will be appended to it for the completion; otherwise, a new model message will be added with the prompt.

It must contain a user message or a system message before the model message.

Defaults to:

ts
[
    {
        type: "system",
        text: "For your next response predict what the user may send next. No yapping, no whitespace. Match the user's language and tone."
    },
    {type: "user", text: ""},
    {type: "model", response: [""]}
]
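
A sketch of overriding these messages using the object form, assuming the session from the first example (the message texts here are illustrative):

ts
const completion = await session.completePrompt("I think the next step is", {
    completeAsModel: {
        enabled: "auto",
        appendedMessages: [
            {
                type: "system",
                text: "Predict the user's next message. Reply with the prediction only."
            },
            {type: "user", text: ""},
            {type: "model", response: [""]}
        ]
    }
});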