Class: TokenBias
Defined in: evaluator/TokenBias.ts:9
See: the "Using Token Bias" tutorial
Constructors
new TokenBias()
```ts
new TokenBias(tokenizer: Tokenizer): TokenBias
```
Defined in: evaluator/TokenBias.ts:13
Parameters
| Parameter | Type |
| --- | --- |
| `tokenizer` | `Tokenizer` |
Returns

`TokenBias`
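As a sketch of constructing an instance directly (the model path is a placeholder for a local GGUF file, and this assumes `node-llama-cpp`'s `getLlama()`/`loadModel()` setup):

```ts
import {getLlama, TokenBias} from "node-llama-cpp";

// Load a model first; "path/to/model.gguf" is a placeholder path
const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"});

// A TokenBias is created from the model's tokenizer
const customBias = new TokenBias(model.tokenizer);
```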
Methods
set()
```ts
set(
    input: string | LlamaText | Token | Token[],
    bias: number | "never" | {logit: number}
): TokenBias
```
Defined in: evaluator/TokenBias.ts:41
Adjust the bias of the given token(s).

If a text is provided, the bias will be applied to each individual token in the text.

Setting a bias to `"never"` will prevent the token from being generated, unless it is required to comply with a grammar.

Setting the bias of the EOS or EOT tokens to `"never"` has no effect and will be ignored.
Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| `input` | `string \| LlamaText \| Token \| Token[]` | The token(s) to apply the bias to |
| `bias` | `number \| "never" \| {logit: number}` | The probability bias to apply to the token(s). A positive number increases the probability of the token(s) being generated; a negative number decreases it; `0` has no effect. For example, `0.5` increases the probability of the token(s) being generated by 50%, and `-0.5` decreases it by 50%. Setting to `"never"` prevents the token from being generated, unless it is required to comply with a grammar. Try values between `-0.9` and `0.9` to see what works for your use case. |
Returns

`TokenBias`
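Since `set()` returns a `TokenBias`, calls can be chained. A minimal sketch, assuming `model` is an already-loaded `LlamaModel`:

```ts
import {TokenBias} from "node-llama-cpp";

// Assumes `model` is an already-loaded LlamaModel (setup omitted)
const bias = new TokenBias(model.tokenizer)
    .set("Hello", -0.9)        // decrease the probability of the tokens of "Hello"
    .set("Goodbye", 0.5)       // increase the probability of the tokens of "Goodbye"
    .set("password", "never"); // never generate these tokens, unless a grammar requires them
```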
for()
```ts
static for(modelOrTokenizer: Tokenizer | LlamaModel): TokenBias
```
Defined in: evaluator/TokenBias.ts:65
Parameters
| Parameter | Type |
| --- | --- |
| `modelOrTokenizer` | `Tokenizer \| LlamaModel` |

Returns

`TokenBias`
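A sketch of using `TokenBias.for()` together with generation (the model path is a placeholder, and this assumes the bias is passed via the `tokenBias` option of `LlamaChatSession.prompt()`):

```ts
import {getLlama, LlamaChatSession, TokenBias} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"}); // placeholder path
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

// Create a TokenBias directly from the model and apply it to a prompt
const response = await session.prompt("Tell me a short story", {
    tokenBias: TokenBias.for(model)
        .set("Once upon a time", "never") // forbid this opening, unless a grammar requires it
        .set("dragon", 0.5)               // nudge generation toward "dragon" tokens
});
```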