Class: LlamaChatSession
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:424
See: Using LlamaChatSession tutorial
Constructors
Constructor
```ts
new LlamaChatSession(options: LlamaChatSessionOptions): LlamaChatSession;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:440
Parameters
| Parameter | Type |
|---|---|
| options | LlamaChatSessionOptions |
Returns
LlamaChatSession
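For orientation, here's a minimal construction sketch. The setup flow (`getLlama`, `loadModel`, `createContext`, `getSequence`) follows the library's documented usage; the model path is hypothetical.

```ts
import {fileURLToPath} from "url";
import path from "path";
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({
    // hypothetical path - point this at a local GGUF model file
    modelPath: path.join(path.dirname(fileURLToPath(import.meta.url)), "models", "model.gguf")
});
const context = await model.createContext();

// a chat session wraps a context sequence and manages chat history on top of it
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});
```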
Properties
onDispose
```ts
readonly onDispose: EventRelay<void>;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:438
Accessors
disposed
Get Signature
```ts
get disposed(): boolean;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:497
Returns
boolean
chatWrapper
Get Signature
```ts
get chatWrapper(): ChatWrapper;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:501
Returns
ChatWrapper
sequence
Get Signature
```ts
get sequence(): LlamaContextSequence;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:508
Returns
LlamaContextSequence
context
Get Signature
```ts
get context(): LlamaContext;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:515
Returns
LlamaContext
model
Get Signature
```ts
get model(): LlamaModel;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:519
Returns
LlamaModel
Methods
dispose()
```ts
dispose(__namedParameters: {
    disposeSequence?: boolean;
}): void;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:482
Parameters
| Parameter | Type |
|---|---|
| __namedParameters | { disposeSequence?: boolean; } |
| __namedParameters.disposeSequence? | boolean |
Returns
void
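A short disposal sketch; passing disposeSequence: true should also release the underlying context sequence (an assumption based on the option's name):

```ts
// release the session's resources; optionally dispose the context sequence too
session.dispose({disposeSequence: true});

console.log(session.disposed); // true
```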
prompt()
```ts
prompt<Functions>(prompt: string, options: LLamaChatPromptOptions<Functions>): Promise<string>;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:523
Type Parameters
| Type Parameter | Default type |
|---|---|
| Functions extends ChatSessionModelFunctions \| undefined | undefined |
Parameters
| Parameter | Type |
|---|---|
| prompt | string |
| options | LLamaChatPromptOptions<Functions> |
Returns
Promise<string>
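A hedged usage sketch: maxTokens and onTextChunk are options LLamaChatPromptOptions is expected to support for capping generation and streaming text; treat the exact option names as assumptions.

```ts
const answer = await session.prompt("What is the capital of France?", {
    maxTokens: 256, // assumed option: cap the number of generated tokens
    onTextChunk(chunk) {
        process.stdout.write(chunk); // assumed option: stream text as it's generated
    }
});

console.log("\nFull response:", answer);
```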
promptWithMeta()
```ts
promptWithMeta<Functions>(prompt: string, options?: LLamaChatPromptOptions<Functions>): Promise<
    | {
        response: (string | ChatModelFunctionCall | ChatModelSegment)[];
        responseText: string;
        stopReason: "customStopTrigger";
        customStopTrigger: (string | Token)[];
        remainingGenerationAfterStop: undefined | string | Token[];
    }
    | {
        customStopTrigger?: undefined;
        response: (string | ChatModelFunctionCall | ChatModelSegment)[];
        responseText: string;
        stopReason: "abort" | "maxTokens" | "eogToken" | "stopGenerationTrigger" | "functionCalls";
        remainingGenerationAfterStop: undefined | string | Token[];
    }
>;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:572
Type Parameters
| Type Parameter | Default type |
|---|---|
| Functions extends ChatSessionModelFunctions \| undefined | undefined |
Parameters
| Parameter | Type |
|---|---|
| prompt | string |
| options? | LLamaChatPromptOptions<Functions> |
Returns
```ts
Promise<
    | {
        response: (string | ChatModelFunctionCall | ChatModelSegment)[];
        responseText: string;
        stopReason: "customStopTrigger";
        customStopTrigger: (string | Token)[];
        remainingGenerationAfterStop: undefined | string | Token[];
    }
    | {
        customStopTrigger?: undefined;
        response: (string | ChatModelFunctionCall | ChatModelSegment)[];
        responseText: string;
        stopReason: "abort" | "maxTokens" | "eogToken" | "stopGenerationTrigger" | "functionCalls";
        remainingGenerationAfterStop: undefined | string | Token[];
    }
>
```
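Since the return type is a discriminated union on stopReason, checking it narrows the result; a sketch:

```ts
const result = await session.promptWithMeta("Summarize the plot of Hamlet in one sentence.");

console.log(result.responseText); // the response as plain text
console.log(result.stopReason);   // e.g. "eogToken" or "maxTokens"

// customStopTrigger is only present when stopReason is "customStopTrigger"
if (result.stopReason === "customStopTrigger")
    console.log("Stopped on:", result.customStopTrigger);
```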
preloadPrompt()
```ts
preloadPrompt(prompt: string, options?: LLamaChatPreloadPromptOptions): Promise<void>;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:862
Preload a user prompt into the current context sequence state to make later inference of the model response begin sooner and feel faster.
Note: Preloading a long user prompt can incur context shifts, so consider limiting the length of prompts you preload.
Parameters
| Parameter | Type | Description |
|---|---|---|
| prompt | string | the prompt to preload |
| options? | LLamaChatPreloadPromptOptions | |
Returns
Promise<void>
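A sketch of preloading a draft prompt while the user is still typing, so the eventual response starts sooner:

```ts
const draft = "Explain what a context shift is";

// evaluate the user prompt into the sequence state ahead of time
await session.preloadPrompt(draft);

// when the user actually sends the same prompt, generation can begin almost immediately
const response = await session.prompt(draft);
```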
completePrompt()
```ts
completePrompt(prompt: string, options?: LLamaChatCompletePromptOptions): Promise<string>;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:880
Preload a user prompt into the current context sequence state and generate a completion for it.
Note: Preloading a long user prompt and completing a user prompt with a high number of `maxTokens` can incur context shifts, so consider limiting the length of prompts you preload.

Also, it's recommended to limit the number of tokens generated to a reasonable amount by configuring `maxTokens`.
Parameters
| Parameter | Type | Description |
|---|---|---|
| prompt | string | the prompt to preload |
| options? | LLamaChatCompletePromptOptions | |
Returns
Promise<string>
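A sketch of completing a partially typed user prompt, with a small maxTokens cap as the note above recommends:

```ts
const typedSoFar = "Write a haiku about";

// generate a continuation of the user's partial prompt
const completion = await session.completePrompt(typedSoFar, {
    maxTokens: 24 // keep completions short and cheap
});

console.log(typedSoFar + completion); // e.g. shown as greyed-out suggestion text in a UI
```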
createPromptCompletionEngine()
```ts
createPromptCompletionEngine(options?: LLamaChatPromptCompletionEngineOptions): LlamaChatSessionPromptCompletionEngine;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:893
Create a smart completion engine that caches prompt completions and reuses them when a user prompt matches the beginning of a cached prompt or completion.

Completions are generated, and the cache is used, only for the current chat session state, so a single completion engine can serve an entire chat session.
Parameters
| Parameter | Type |
|---|---|
| options? | LLamaChatPromptCompletionEngineOptions |
Returns
LlamaChatSessionPromptCompletionEngine
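A sketch wiring the engine into a text-input flow. complete() returning the currently cached completion, plus an onGeneration callback option firing when fresh completions are ready, is how the engine is expected to be used; treat both as assumptions here:

```ts
const completionEngine = session.createPromptCompletionEngine({
    // assumed callback option: fires when a fresh completion finishes generating
    onGeneration(prompt, completion) {
        console.log(`new suggestion for "${prompt}": ${completion}`);
    }
});

// returns a cached completion if one matches the input,
// and triggers background generation otherwise
const cached = completionEngine.complete("Write a haiku about");
if (cached !== "")
    console.log("instant suggestion:", cached);
```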
completePromptWithMeta()
```ts
completePromptWithMeta(prompt: string, options?: LLamaChatCompletePromptOptions): Promise<
    | {
        completion: string;
        stopReason: "customStopTrigger";
        customStopTrigger: (string | Token)[];
        remainingGenerationAfterStop: undefined | string | Token[];
    }
    | {
        customStopTrigger?: undefined;
        completion: string;
        stopReason: "abort" | "maxTokens" | "eogToken" | "stopGenerationTrigger" | "functionCalls";
        remainingGenerationAfterStop: undefined | string | Token[];
    }
>;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:902
See completePrompt for more information.
Parameters
| Parameter | Type |
|---|---|
| prompt | string |
| options? | LLamaChatCompletePromptOptions |
Returns
```ts
Promise<
    | {
        completion: string;
        stopReason: "customStopTrigger";
        customStopTrigger: (string | Token)[];
        remainingGenerationAfterStop: undefined | string | Token[];
    }
    | {
        customStopTrigger?: undefined;
        completion: string;
        stopReason: "abort" | "maxTokens" | "eogToken" | "stopGenerationTrigger" | "functionCalls";
        remainingGenerationAfterStop: undefined | string | Token[];
    }
>
```
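As with promptWithMeta, the union is discriminated on stopReason; a short sketch:

```ts
const res = await session.completePromptWithMeta("The three primary colors are", {
    maxTokens: 32
});

console.log(res.completion);
console.log(res.stopReason); // e.g. "maxTokens" when the cap above is hit
```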
getChatHistory()
```ts
getChatHistory(): ChatHistoryItem[];
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:1130
Returns
ChatHistoryItem[]
getLastEvaluationContextWindow()
```ts
getLastEvaluationContextWindow(): null | ChatHistoryItem[];
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:1134
Returns
null | ChatHistoryItem[]
setChatHistory()
```ts
setChatHistory(chatHistory: ChatHistoryItem[]): void;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:1141
Parameters
| Parameter | Type |
|---|---|
| chatHistory | ChatHistoryItem[] |
Returns
void
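getChatHistory() and setChatHistory() pair naturally for snapshot and restore; a sketch (structuredClone guards the snapshot against later mutation):

```ts
// snapshot the conversation state after some exchanges
const savedHistory = structuredClone(session.getChatHistory());

await session.prompt("Never mind, ignore that question.");

// roll the session back to the snapshot
session.setChatHistory(savedHistory);
```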
resetChatHistory()
```ts
resetChatHistory(): void;
```
Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:1149
Clear the chat history and reset it to the initial state.
Returns
void