# Class: LlamaChatSession

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:318

See the "Using LlamaChatSession" tutorial.
## Constructors

### new LlamaChatSession()

```ts
new LlamaChatSession(options: LlamaChatSessionOptions): LlamaChatSession
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:333

#### Parameters

| Parameter | Type |
| --- | --- |
| `options` | `LlamaChatSessionOptions` |

#### Returns

`LlamaChatSession`
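For orientation, a minimal sketch of creating a session (the model path is a placeholder; the `getLlama`/`loadModel`/`createContext` setup follows the library's standard flow):

```ts
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"}); // placeholder path
const context = await model.createContext();

// A chat session wraps a context sequence and manages chat history on top of it
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});
```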
## Properties

### onDispose

```ts
readonly onDispose: EventRelay<void>;
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:331
## Accessors

### disposed

#### Get Signature

```ts
get disposed(): boolean
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:390

##### Returns

`boolean`

### chatWrapper

#### Get Signature

```ts
get chatWrapper(): ChatWrapper
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:394

##### Returns

`ChatWrapper`

### sequence

#### Get Signature

```ts
get sequence(): LlamaContextSequence
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:401

##### Returns

`LlamaContextSequence`

### context

#### Get Signature

```ts
get context(): LlamaContext
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:408

##### Returns

`LlamaContext`

### model

#### Get Signature

```ts
get model(): LlamaModel
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:412

##### Returns

`LlamaModel`
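The accessors expose the objects the session was built from. A small sketch (assuming `sequence.context` and `context.model` getters, which mirror the accessors documented here):

```ts
// With `session` created as in the constructor example above:
console.log(session.disposed);                              // false while the session is usable
console.log(session.context === session.sequence.context);  // the sequence belongs to the same context
console.log(session.model === session.context.model);       // and the context to the same model
```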
## Methods

### dispose()

```ts
dispose(__namedParameters: {
  disposeSequence?: boolean;
}): void
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:375

#### Parameters

| Parameter | Type |
| --- | --- |
| `__namedParameters` | `{ disposeSequence?: boolean }` |
| `__namedParameters.disposeSequence?` | `boolean` |

#### Returns

`void`
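A brief sketch of disposal; whether the underlying context sequence is also released is controlled by the flag in the table above:

```ts
// Release the session and its context sequence so neither can be reused
session.dispose({disposeSequence: true});
console.log(session.disposed); // true
```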
### prompt()

```ts
prompt<Functions>(prompt: string, options: LLamaChatPromptOptions<Functions>): Promise<string>
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:416

#### Type Parameters

| Type Parameter | Default type |
| --- | --- |
| `Functions` *extends* `undefined` \| `ChatSessionModelFunctions` | `undefined` |

#### Parameters

| Parameter | Type |
| --- | --- |
| `prompt` | `string` |
| `options` | `LLamaChatPromptOptions<Functions>` |

#### Returns

`Promise<string>`
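A minimal usage sketch (`maxTokens` is assumed to be an available field of `LLamaChatPromptOptions`):

```ts
const answer = await session.prompt("What is the capital of France?", {
    maxTokens: 256 // cap the length of the generated response
});
console.log(answer);
```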
### promptWithMeta()

```ts
promptWithMeta<Functions>(prompt: string, options?: LLamaChatPromptOptions<Functions>): Promise<
  | {
      response: (string | ChatModelFunctionCall | ChatModelSegment)[];
      responseText: string;
      stopReason: "customStopTrigger";
      customStopTrigger: (string | Token)[];
      remainingGenerationAfterStop: undefined | string | Token[];
    }
  | {
      customStopTrigger: undefined;
      response: (string | ChatModelFunctionCall | ChatModelSegment)[];
      responseText: string;
      stopReason: "abort" | "maxTokens" | "eogToken" | "stopGenerationTrigger" | "functionCalls";
      remainingGenerationAfterStop: undefined | string | Token[];
    }
>
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:460

#### Type Parameters

| Type Parameter | Default type |
| --- | --- |
| `Functions` *extends* `undefined` \| `ChatSessionModelFunctions` | `undefined` |

#### Parameters

| Parameter | Type |
| --- | --- |
| `prompt` | `string` |
| `options?` | `LLamaChatPromptOptions<Functions>` |

#### Returns

A `Promise` resolving to the union type shown in the signature above: the structured `response` items, the flat `responseText`, the `stopReason`, the `customStopTrigger` that fired (when `stopReason` is `"customStopTrigger"`), and any `remainingGenerationAfterStop`.
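A sketch of inspecting the metadata; narrowing on `stopReason` follows the union above (`maxTokens` is assumed to be a field of `LLamaChatPromptOptions`):

```ts
const meta = await session.promptWithMeta("Summarize Hamlet in one sentence.", {
    maxTokens: 128
});

console.log(meta.responseText);
if (meta.stopReason === "maxTokens")
    console.log("The response was cut off by the token limit");
else if (meta.stopReason === "customStopTrigger")
    console.log("Stopped on trigger:", meta.customStopTrigger);
```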
### preloadPrompt()

```ts
preloadPrompt(prompt: string, options?: LLamaChatPreloadPromptOptions): Promise<void>
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:730

Preload a user prompt into the current context sequence state so that later inference of the model response begins sooner and feels faster.

Note: preloading a long user prompt can incur context shifts, so consider limiting the length of the prompts you preload.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `prompt` | `string` | the prompt to preload |
| `options?` | `LLamaChatPreloadPromptOptions` | |

#### Returns

`Promise<void>`
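For example, a UI could preload the user's draft while they are still typing (`draftInput` is a hypothetical variable holding the text typed so far):

```ts
const draftInput = "Write a haiku about the sea";

// Evaluate the draft into the context sequence ahead of time
await session.preloadPrompt(draftInput);

// When the user submits the same text, generation starts almost immediately
const response = await session.prompt(draftInput);
```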
### completePrompt()

```ts
completePrompt(prompt: string, options?: LLamaChatCompletePromptOptions): Promise<string>
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:747

Preload a user prompt into the current context sequence state and generate a completion for it.

Note: preloading a long user prompt and completing it with a high `maxTokens` value can incur context shifts, so consider limiting the length of the prompts you preload. It's also recommended to keep the number of generated tokens reasonable by configuring `maxTokens`.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `prompt` | `string` | the prompt to preload |
| `options?` | `LLamaChatCompletePromptOptions` | |

#### Returns

`Promise<string>`
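A sketch of suggesting a completion for a partially-typed prompt, keeping generation short as the note above recommends (`maxTokens` is assumed to be a field of `LLamaChatCompletePromptOptions`):

```ts
const completion = await session.completePrompt("How do I cook a", {
    maxTokens: 24
});
console.log(completion); // a short, model-dependent continuation of the user's text
```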
### createPromptCompletionEngine()

```ts
createPromptCompletionEngine(options?: LLamaChatPromptCompletionEngineOptions): LlamaChatSessionPromptCompletionEngine
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:760

Create a smart completion engine that caches prompt completions and reuses them when a user prompt matches the beginning of a cached prompt or completion.

Completions are generated against, and cached for, the current chat session state only, so a single completion engine can serve an entire chat session.

#### Parameters

| Parameter | Type |
| --- | --- |
| `options?` | `LLamaChatPromptCompletionEngineOptions` |

#### Returns

`LlamaChatSessionPromptCompletionEngine`
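A hedged sketch of using the engine; the `complete()` method and the `onGeneration` callback are assumptions about the engine's interface based on the caching behavior described above:

```ts
const engine = session.createPromptCompletionEngine({
    // assumed option: invoked as completions are generated in the background
    onGeneration(prompt, completion) {
        console.log(`new completion for "${prompt}": ${completion}`);
    }
});

// Assumed to return the completion currently cached for this input (possibly
// an empty string) and to schedule background generation on a cache miss
const suggestion = engine.complete("How do I cook a");
console.log(suggestion);
```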
### completePromptWithMeta()

```ts
completePromptWithMeta(prompt: string, options?: LLamaChatCompletePromptOptions): Promise<
  | {
      completion: string;
      stopReason: "customStopTrigger";
      customStopTrigger: (string | Token)[];
      remainingGenerationAfterStop: undefined | string | Token[];
    }
  | {
      customStopTrigger: undefined;
      completion: string;
      stopReason: "abort" | "maxTokens" | "eogToken" | "stopGenerationTrigger";
      remainingGenerationAfterStop: undefined | string | Token[];
    }
>
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:769

See completePrompt for more information.

#### Parameters

| Parameter | Type |
| --- | --- |
| `prompt` | `string` |
| `options?` | `LLamaChatCompletePromptOptions` |

#### Returns

A `Promise` resolving to the union type shown in the signature above: the `completion` text, the `stopReason`, the `customStopTrigger` that fired (when `stopReason` is `"customStopTrigger"`), and any `remainingGenerationAfterStop`.
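A sketch of checking why a completion stopped (`maxTokens` is assumed to be a field of `LLamaChatCompletePromptOptions`):

```ts
const result = await session.completePromptWithMeta("The three primary colors are", {
    maxTokens: 32
});

console.log(result.completion);
console.log(result.stopReason); // e.g. "maxTokens" or "eogToken"
```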
### getChatHistory()

```ts
getChatHistory(): ChatHistoryItem[]
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:872

#### Returns

`ChatHistoryItem[]`

### getLastEvaluationContextWindow()

```ts
getLastEvaluationContextWindow(): null | ChatHistoryItem[]
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:876

#### Returns

`null` \| `ChatHistoryItem[]`
### setChatHistory()

```ts
setChatHistory(chatHistory: ChatHistoryItem[]): void
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:883

#### Parameters

| Parameter | Type |
| --- | --- |
| `chatHistory` | `ChatHistoryItem[]` |

#### Returns

`void`
### resetChatHistory()

```ts
resetChatHistory(): void
```

Defined in: evaluator/LlamaChatSession/LlamaChatSession.ts:890

Clear the chat history and reset it to the initial state.

#### Returns

`void`
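Together, these methods allow snapshotting and restoring a conversation. A brief sketch:

```ts
// Snapshot the history before a speculative exchange
const savedHistory = session.getChatHistory();

await session.prompt("Actually, answer the previous question in French.");

// Roll back to the snapshot...
session.setChatHistory(savedHistory);

// ...or discard everything and return to the initial state
session.resetChatHistory();
```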