Class: LlamaChatSession
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:513
See
Using LlamaChatSession tutorial
Constructors
Constructor
```ts
new LlamaChatSession(options: LlamaChatSessionOptions): LlamaChatSession;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:529
Parameters
| Parameter | Type |
|---|---|
| options | LlamaChatSessionOptions |
Returns
LlamaChatSession
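For reference, a typical way to construct a session (a minimal sketch; the model path is a placeholder for a local GGUF file):

```ts
import {getLlama, LlamaChatSession} from "node-llama-cpp";

// Load a model and create a context to back the chat session
const llama = await getLlama();
const model = await llama.loadModel({modelPath: "./models/model.gguf"});
const context = await model.createContext();

// A session is bound to a context sequence
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});
```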
Properties
onDispose
```ts
readonly onDispose: EventRelay<void>;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:527
Accessors
disposed
Get Signature
```ts
get disposed(): boolean;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:586
Returns
boolean
chatWrapper
Get Signature
```ts
get chatWrapper(): ChatWrapper;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:590
Returns
ChatWrapper
sequence
Get Signature
```ts
get sequence(): LlamaContextSequence;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:597
Returns
LlamaContextSequence
context
Get Signature
```ts
get context(): LlamaContext;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:604
Returns
LlamaContext
model
Get Signature
```ts
get model(): LlamaModel;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:608
Returns
LlamaModel
Methods
dispose()
```ts
dispose(__namedParameters?: {
    disposeSequence?: boolean;
}): void;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:571
Parameters
| Parameter | Type |
|---|---|
| __namedParameters | { disposeSequence?: boolean; } |
| __namedParameters.disposeSequence? | boolean |
Returns
void
prompt()
```ts
prompt<Functions>(prompt: string, options?: LLamaChatPromptOptions<Functions>): Promise<string>;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:612
Type Parameters
| Type Parameter | Default type |
|---|---|
| Functions extends ChatSessionModelFunctions \| undefined | undefined |
Parameters
| Parameter | Type |
|---|---|
| prompt | string |
| options? | LLamaChatPromptOptions<Functions> |
Returns
Promise<string>
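As a usage sketch (assuming a `session` created as shown above; the `maxTokens` and `onTextChunk` options shown here come from LLamaChatPromptOptions):

```ts
const answer = await session.prompt("What is the capital of France?", {
    maxTokens: 128,
    onTextChunk(chunk) {
        // Stream the response to the terminal as it's generated
        process.stdout.write(chunk);
    }
});

// The full generated response is also returned once generation finishes
console.log(answer);
```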
promptWithMeta()
```ts
promptWithMeta<Functions>(prompt: string, options?: LLamaChatPromptOptions<Functions>): Promise<
    | {
        response: (string | ChatModelFunctionCall | ChatModelSegment)[];
        responseText: string;
        stopReason: "customStopTrigger";
        customStopTrigger: (string | Token)[];
        remainingGenerationAfterStop: string | Token[] | undefined;
    }
    | {
        customStopTrigger?: undefined;
        response: (string | ChatModelFunctionCall | ChatModelSegment)[];
        responseText: string;
        stopReason: "abort" | "maxTokens" | "eogToken" | "stopGenerationTrigger" | "functionCalls";
        remainingGenerationAfterStop: string | Token[] | undefined;
    }
>;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:663
Type Parameters
| Type Parameter | Default type |
|---|---|
| Functions extends ChatSessionModelFunctions \| undefined | undefined |
Parameters
| Parameter | Type | Description |
|---|---|---|
| prompt | string | - |
| options? | LLamaChatPromptOptions<Functions> | - |
Returns
```ts
Promise<
    | { response: (string | ChatModelFunctionCall | ChatModelSegment)[]; responseText: string; stopReason: "customStopTrigger"; customStopTrigger: (string | Token)[]; remainingGenerationAfterStop: string | Token[] | undefined; }
    | { customStopTrigger?: undefined; response: (string | ChatModelFunctionCall | ChatModelSegment)[]; responseText: string; stopReason: "abort" | "maxTokens" | "eogToken" | "stopGenerationTrigger" | "functionCalls"; remainingGenerationAfterStop: string | Token[] | undefined; }
>
```
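A usage sketch, showing how the returned metadata can be used to detect truncation (assuming a `session` created as in the constructor example):

```ts
const {responseText, stopReason} = await session.promptWithMeta(
    "Summarize the plot of Hamlet",
    {maxTokens: 64}
);

if (stopReason === "maxTokens")
    console.log("Response was truncated:", responseText);
else
    console.log(responseText);
```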
preloadPrompt()
```ts
preloadPrompt(prompt: string, options?: LLamaChatPreloadPromptOptions): Promise<void>;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:957
Preload a user prompt into the current context sequence state to make later inference of the model response begin sooner and feel faster.
Note: Preloading a long user prompt can incur context shifts, so consider limiting the length of prompts you preload.
Parameters
| Parameter | Type | Description |
|---|---|---|
| prompt | string | the prompt to preload |
| options? | LLamaChatPreloadPromptOptions | - |
Returns
Promise<void>
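A common pattern is to preload while the user is still composing their message, so inference can start from an already-evaluated state once they submit (a sketch; `draftText` is a stand-in for the user's in-progress input):

```ts
const draftText = "Explain big-O notation in one paragraph";

// Evaluate the user prompt into the context sequence ahead of time
await session.preloadPrompt(draftText);

// When the user submits, the response begins generating sooner
const answer = await session.prompt(draftText);
```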
completePrompt()
```ts
completePrompt(prompt: string, options?: LLamaChatCompletePromptOptions): Promise<string>;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:975
Preload a user prompt into the current context sequence state and generate a completion for it.
Note: Preloading a long user prompt and completing a user prompt with a high number of `maxTokens` can incur context shifts, so consider limiting the length of prompts you preload.
Also, it's recommended to limit the number of tokens generated to a reasonable amount by configuring `maxTokens`.
Parameters
| Parameter | Type | Description |
|---|---|---|
| prompt | string | the prompt to preload |
| options? | LLamaChatCompletePromptOptions | - |
Returns
Promise<string>
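A usage sketch, e.g. for suggesting a continuation of the user's input (assuming a `session` created as in the constructor example):

```ts
const completion = await session.completePrompt("The three primary colors are", {
    maxTokens: 24
});

// `completion` contains only the generated continuation, not the prompt itself
console.log(completion);
```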
createPromptCompletionEngine()
```ts
createPromptCompletionEngine(options?: LLamaChatPromptCompletionEngineOptions): LlamaChatSessionPromptCompletionEngine;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:988
Create a smart completion engine that caches prompt completions and reuses them when the user prompt matches the beginning of a cached prompt or completion.
Completions are generated, and the cache is used, only for the current chat session state, so you can create a single completion engine for an entire chat session.
Parameters
| Parameter | Type |
|---|---|
| options? | LLamaChatPromptCompletionEngineOptions |
Returns
LlamaChatSessionPromptCompletionEngine
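A sketch of how the engine might be wired into a typeahead-style UI (the `onGeneration` option and the engine's `complete()` method are taken from LLamaChatPromptCompletionEngineOptions and LlamaChatSessionPromptCompletionEngine; check those types for the exact shape):

```ts
const completionEngine = session.createPromptCompletionEngine({
    onGeneration(prompt, completion) {
        // Called as completions are generated in the background;
        // update the UI suggestion for the given prompt here
        console.log(`suggestion for "${prompt}": ${completion}`);
    }
});

// Synchronously returns whatever cached completion is currently available
// (possibly an empty string) and triggers background generation for the rest
const suggestion = completionEngine.complete("Write a haiku about");
```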
completePromptWithMeta()
```ts
completePromptWithMeta(prompt: string, options?: LLamaChatCompletePromptOptions): Promise<
    | {
        completion: string;
        stopReason: "customStopTrigger";
        customStopTrigger: (string | Token)[];
        remainingGenerationAfterStop: string | Token[] | undefined;
    }
    | {
        customStopTrigger?: undefined;
        completion: string;
        stopReason: "abort" | "maxTokens" | "eogToken" | "stopGenerationTrigger" | "functionCalls";
        remainingGenerationAfterStop: string | Token[] | undefined;
    }
>;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:997
See completePrompt for more information.
Parameters
| Parameter | Type | Description |
|---|---|---|
| prompt | string | - |
| options? | LLamaChatCompletePromptOptions | - |
Returns
```ts
Promise<
    | { completion: string; stopReason: "customStopTrigger"; customStopTrigger: (string | Token)[]; remainingGenerationAfterStop: string | Token[] | undefined; }
    | { customStopTrigger?: undefined; completion: string; stopReason: "abort" | "maxTokens" | "eogToken" | "stopGenerationTrigger" | "functionCalls"; remainingGenerationAfterStop: string | Token[] | undefined; }
>
```
getChatHistory()
```ts
getChatHistory(): ChatHistoryItem[];
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:1231
Returns
ChatHistoryItem[]
getLastEvaluationContextWindow()
```ts
getLastEvaluationContextWindow(): ChatHistoryItem[] | null;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:1235
Returns
ChatHistoryItem[] | null
setChatHistory()
```ts
setChatHistory(chatHistory: ChatHistoryItem[]): void;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:1242
Parameters
| Parameter | Type |
|---|---|
| chatHistory | ChatHistoryItem[] |
Returns
void
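Together with getChatHistory, this enables snapshotting and restoring conversation state (a sketch, assuming a `session` created as in the constructor example):

```ts
// Snapshot the conversation state before an exchange...
const savedHistory = session.getChatHistory();

await session.prompt("Tell me a joke");

// ...and roll back to the snapshot, discarding that exchange
session.setChatHistory(savedHistory);
```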
resetChatHistory()
```ts
resetChatHistory(): void;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:1250
Clear the chat history and reset it to the initial state.
Returns
void