# Class: `abstract` TokenPredictor

Defined in: evaluator/LlamaContext/TokenPredictor.ts:8
## Constructors

### new TokenPredictor()

```ts
new TokenPredictor(): TokenPredictor
```

#### Returns

`TokenPredictor`
## Methods

### reset()

```ts
abstract reset(params: {
    targetSequence: LlamaContextSequence;
    stateTokens: Token[];
    evaluateOptions: Readonly<SequenceEvaluateOptions>;
}): void | Promise<void>
```

Defined in: evaluator/LlamaContext/TokenPredictor.ts:14

Resets the state of the predictor.

Called before the generation starts.
#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `params` | `{ targetSequence: LlamaContextSequence; stateTokens: Token[]; evaluateOptions: Readonly<SequenceEvaluateOptions>; }` | - |
| `params.targetSequence` | `LlamaContextSequence` | The target sequence that this token predictor is generating tokens for |
| `params.stateTokens` | `Token[]` | The tokens that are or will be loaded into the state. The initial predictions should be based on these tokens. When additional tokens are pushed into the state, the `pushTokens` method will be called with those tokens. |
| `params.evaluateOptions` | `Readonly<SequenceEvaluateOptions>` | Options used for the evaluation on the target sequence. The `grammarEvaluationState` is cloned before being passed to the token predictor, so it can be modified without affecting the original state. |
Returns
| void
| Promise
<void
>
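Since `TokenPredictor` is abstract, `reset()` only takes effect in subclasses. The sketch below shows how a toy predictor might seed its internal state from `stateTokens`; the class, its behavior, and the numeric `Token` stand-in are illustrative assumptions, not part of node-llama-cpp.

```typescript
// Minimal sketch, not node-llama-cpp code. `Token` stands in for the
// library's token type, which is numeric under the hood.
type Token = number;

// A toy predictor that predicts the last seen token will repeat.
// It demonstrates how reset() can seed internal state from the
// `stateTokens` it receives.
class RepeatLastTokenPredictor {
    private stateTokens: Token[] = [];

    // Mirrors TokenPredictor.reset(): discard previous state and
    // seed it with the tokens that are (or will be) in the sequence
    public reset({stateTokens}: {stateTokens: Token[]}): void {
        this.stateTokens = stateTokens.slice();
    }

    // Mirrors TokenPredictor.pushTokens(): extend the state with
    // newly evaluated tokens
    public pushTokens(tokens: Token[]): void {
        this.stateTokens.push(...tokens);
    }

    // Predict that the last token in the state repeats
    public predictTokens(): Token[] {
        if (this.stateTokens.length === 0)
            return [];

        return [this.stateTokens[this.stateTokens.length - 1]];
    }
}
```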
### pushTokens()

```ts
abstract pushTokens(tokens: Token[]): void
```

Defined in: evaluator/LlamaContext/TokenPredictor.ts:35

#### Parameters

| Parameter | Type |
| --- | --- |
| `tokens` | `Token[]` |

#### Returns

`void`
### predictTokens()

```ts
abstract predictTokens(): Token[] | Promise<Token[]>
```

Defined in: evaluator/LlamaContext/TokenPredictor.ts:46

Predicts the next tokens based on the current state.

If the generation should wait until the minimum number of predictions is ready, this method should return a promise that resolves when the minimum predictions are ready.

A background prediction process can be started when this function is called, so that the next predictions will be ready when this function is called again.

#### Returns

`Token[]` \| `Promise<Token[]>`
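One way to satisfy this contract (a hedged sketch with assumed names, not node-llama-cpp's implementation) is to buffer predictions produced by a background process and return a promise only when fewer than the minimum number are ready:

```typescript
// Illustrative sketch only; `BufferedPredictor`, `minPredictions`,
// and `onBackgroundPrediction` are assumptions, not library API.
type Token = number;

class BufferedPredictor {
    private buffered: Token[] = [];

    public constructor(private readonly minPredictions: number) {}

    // Stand-in for a background process delivering a prediction
    public onBackgroundPrediction(token: Token): void {
        this.buffered.push(token);
    }

    // Return synchronously when enough predictions are buffered,
    // otherwise return a promise that resolves once the minimum
    // number of predictions is ready
    public predictTokens(): Token[] | Promise<Token[]> {
        if (this.buffered.length >= this.minPredictions)
            return this.buffered.slice();

        return new Promise<Token[]>((resolve) => {
            const check = () => {
                if (this.buffered.length >= this.minPredictions)
                    resolve(this.buffered.slice());
                else
                    setTimeout(check, 1);
            };
            check();
        });
    }
}
```

Returning the array directly on the fast path avoids a microtask round-trip when predictions are already available.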
### stop()

```ts
stop(untilPredictionsExhausted?: boolean): void | Promise<void>
```

Defined in: evaluator/LlamaContext/TokenPredictor.ts:52

Stops the prediction process when it runs in the background.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `untilPredictionsExhausted?` | `boolean` | If `true`, the prediction process should not resume until the current predictions are exhausted. |
#### Returns

`void` \| `Promise<void>`
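The `untilPredictionsExhausted` semantics can be sketched as follows; this is a toy illustration with assumed names (`addPrediction`, `consumePrediction`, `isRunning`), not node-llama-cpp code:

```typescript
// Illustrative sketch of the stop() semantics
type Token = number;

class StoppablePredictor {
    private predictions: Token[] = [];
    private running = true;
    private resumeOnlyWhenExhausted = false;

    public get isRunning(): boolean {
        return this.running;
    }

    // Stand-in for the background process buffering a prediction
    public addPrediction(token: Token): void {
        this.predictions.push(token);
    }

    // Pause the background prediction process; with
    // `untilPredictionsExhausted`, stay paused until the buffered
    // predictions have been consumed
    public stop(untilPredictionsExhausted = false): void {
        this.running = false;
        this.resumeOnlyWhenExhausted = untilPredictionsExhausted;
    }

    public consumePrediction(): Token | undefined {
        const token = this.predictions.shift();

        // Resume only once the buffered predictions run out
        if (!this.running && this.resumeOnlyWhenExhausted &&
            this.predictions.length === 0)
            this.running = true;

        return token;
    }
}
```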
### updateInputTokens()

```ts
updateInputTokens(tokens: Token[]): void
```

Defined in: evaluator/LlamaContext/TokenPredictor.ts:57

Called with the input tokens before the generation starts when using `LlamaChatSession`, `LlamaChat`, and `LlamaCompletion`.

#### Parameters

| Parameter | Type |
| --- | --- |
| `tokens` | `Token[]` |

#### Returns

`void`
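One reason a predictor may want the input tokens is prompt-lookup style prediction: finding where the last generated token occurs in the input and predicting the tokens that followed it there. The sketch below illustrates that idea under assumed names; it is not node-llama-cpp's implementation.

```typescript
// Illustrative prompt-lookup sketch, not library code
type Token = number;

class InputLookupSketch {
    private inputTokens: Token[] = [];
    private lastToken: Token | undefined = undefined;

    // Mirrors updateInputTokens(): remember the input tokens so
    // predictions can be looked up in them
    public updateInputTokens(tokens: Token[]): void {
        this.inputTokens = tokens.slice();
    }

    // Track the most recent token pushed into the state
    public pushTokens(tokens: Token[]): void {
        if (tokens.length > 0)
            this.lastToken = tokens[tokens.length - 1];
    }

    // Predict the tokens that followed the last generated token
    // where it last appeared in the input
    public predictTokens(maxTokens: number = 3): Token[] {
        if (this.lastToken == null)
            return [];

        const index = this.inputTokens.lastIndexOf(this.lastToken);
        if (index < 0)
            return [];

        return this.inputTokens.slice(index + 1, index + 1 + maxTokens);
    }
}
```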
### dispose()

```ts
dispose(): void | Promise<void>
```

Defined in: evaluator/LlamaContext/TokenPredictor.ts:59

#### Returns

`void` \| `Promise<void>`