Class: LlamaCompletion
Defined in: evaluator/LlamaCompletion.ts:181
See: Text Completion tutorial
Constructors
new LlamaCompletion()
new LlamaCompletion(__namedParameters: LlamaCompletionOptions): LlamaCompletion
Defined in: evaluator/LlamaCompletion.ts:187
Parameters
Parameter | Type |
---|---|
__namedParameters | LlamaCompletionOptions |
Returns
LlamaCompletion
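A minimal construction sketch. The model path is a placeholder, and the setup calls (`getLlama`, `loadModel`, `createContext`) are the library's standard top-level flow; adjust them to your environment:

```typescript
import {getLlama, LlamaCompletion} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "./model.gguf"}); // placeholder path
const context = await model.createContext();

// A LlamaCompletion is bound to the context sequence it evaluates on
const completion = new LlamaCompletion({
    contextSequence: context.getSequence()
});

// When done, dispose it; passing disposeSequence also frees the sequence
completion.dispose({disposeSequence: true});
```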
Properties
onDispose
readonly onDispose: EventRelay<void>;
Defined in: evaluator/LlamaCompletion.ts:185
Accessors
disposed
Get Signature
get disposed(): boolean
Defined in: evaluator/LlamaCompletion.ts:219
Returns
boolean
infillSupported
Get Signature
get infillSupported(): boolean
Defined in: evaluator/LlamaCompletion.ts:223
Returns
boolean
Methods
dispose()
dispose(__namedParameters: {disposeSequence?: boolean}): void
Defined in: evaluator/LlamaCompletion.ts:202
Parameters
Parameter | Type |
---|---|
__namedParameters | { disposeSequence?: boolean } |
__namedParameters.disposeSequence? | boolean |
Returns
void
generateCompletion()
generateCompletion(input: string | LlamaText | Token[], options: LlamaCompletionGenerationOptions): Promise<string>
Defined in: evaluator/LlamaCompletion.ts:234
Generate a completion for an input.
Parameters
Parameter | Type |
---|---|
input | string \| LlamaText \| Token[] |
options | LlamaCompletionGenerationOptions |
Returns
Promise<string>
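A usage sketch, assuming the setup flow from the constructor section (the model path is a placeholder):

```typescript
import {getLlama, LlamaCompletion} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "./model.gguf"}); // placeholder path
const context = await model.createContext();
const completion = new LlamaCompletion({
    contextSequence: context.getSequence()
});

// Continue the given text; maxTokens caps how much is generated
const text = await completion.generateCompletion("Here is a list of sweet fruits:\n* ", {
    maxTokens: 64
});
console.log(text);
```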
generateCompletionWithMeta()
generateCompletionWithMeta(input: string | LlamaText | Token[], __namedParameters: LlamaCompletionGenerationOptions): Promise<LlamaCompletionResponse>
Defined in: evaluator/LlamaCompletion.ts:244
Same as generateCompletion, but returns additional metadata about the generation. See generateCompletion for more information.
Parameters
Parameter | Type |
---|---|
input | string \| LlamaText \| Token[] |
__namedParameters | LlamaCompletionGenerationOptions |
Returns
Promise<LlamaCompletionResponse>
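A sketch of reading the metadata, assuming a `completion` instance set up as in the constructor section. The exact fields available on the result are defined by LlamaCompletionResponse; the property names below follow the response/metadata split implied by this method's name:

```typescript
// Same call shape as generateCompletion, but the result bundles
// the generated text together with metadata about the generation
const result = await completion.generateCompletionWithMeta("Once upon a time", {
    maxTokens: 32
});
console.log(result.response); // the generated text
console.log(result.metadata); // additional details, e.g. why generation stopped
```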
generateInfillCompletion()
generateInfillCompletion(
    prefixInput: string | LlamaText | Token[],
    suffixInput: string | LlamaText | Token[],
    options: LlamaInfillGenerationOptions
): Promise<string>
Defined in: evaluator/LlamaCompletion.ts:372
Infill (also known as fill-in-middle) generates a completion for an input (prefixInput) that should connect to a given continuation (suffixInput). For example, for prefixInput: "123" and suffixInput: "789", the model is expected to generate 456 to make the final text "123456789".
Parameters
Parameter | Type |
---|---|
prefixInput | string \| LlamaText \| Token[] |
suffixInput | string \| LlamaText \| Token[] |
options | LlamaInfillGenerationOptions |
Returns
Promise<string>
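An infill sketch. The model path is a placeholder for a model with fill-in-middle support, and the setup flow is the same as in the constructor section; the infillSupported accessor is checked first, since infill relies on model-specific tokens:

```typescript
import {getLlama, LlamaCompletion} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "./code-model.gguf"}); // placeholder: a FIM-capable model
const context = await model.createContext();
const completion = new LlamaCompletion({
    contextSequence: context.getSequence()
});

// Infill requires model support for fill-in-middle; check before calling
if (!completion.infillSupported)
    throw new Error("This model does not support infill");

// Generate the middle part that connects the prefix to the suffix
const middle = await completion.generateInfillCompletion(
    "function greet(name: string) {\n    return ", // prefixInput
    ";\n}",                                        // suffixInput
    {maxTokens: 24}
);
console.log(middle);
```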
generateInfillCompletionWithMeta()
generateInfillCompletionWithMeta(
    prefixInput: string | LlamaText | Token[],
    suffixInput: string | LlamaText | Token[],
    __namedParameters: LlamaInfillGenerationOptions
): Promise<LlamaCompletionResponse>
Defined in: evaluator/LlamaCompletion.ts:386
Same as generateInfillCompletion, but returns additional metadata about the generation. See generateInfillCompletion for more information.
Parameters
Parameter | Type |
---|---|
prefixInput | string \| LlamaText \| Token[] |
suffixInput | string \| LlamaText \| Token[] |
__namedParameters | LlamaInfillGenerationOptions |