Class: LlamaCompletion
Constructors
new LlamaCompletion()
new LlamaCompletion(__namedParameters: LlamaCompletionOptions): LlamaCompletion
Parameters
Parameter | Type |
---|---|
__namedParameters | LlamaCompletionOptions |
Returns
LlamaCompletion
Defined in
evaluator/LlamaCompletion.ts:183
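A sketch of typical construction, assuming node-llama-cpp's `getLlama()` and `loadModel()` helpers and a hypothetical local model path (`"model.gguf"` is a placeholder; substitute a real GGUF file):

```typescript
import {getLlama, LlamaCompletion} from "node-llama-cpp";

// "model.gguf" is a hypothetical path; point this at a real local GGUF model
const llama = await getLlama();
const model = await llama.loadModel({modelPath: "model.gguf"});
const context = await model.createContext();

// A LlamaCompletion is bound to the context sequence it generates on
const completion = new LlamaCompletion({
    contextSequence: context.getSequence()
});
```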
Properties
onDispose
readonly onDispose: EventRelay<void>;
Defined in
evaluator/LlamaCompletion.ts:181
Accessors
disposed
get disposed(): boolean
Returns
boolean
Defined in
evaluator/LlamaCompletion.ts:215
infillSupported
get infillSupported(): boolean
Returns
boolean
Defined in
evaluator/LlamaCompletion.ts:219
Methods
dispose()
dispose(__namedParameters?: {
  disposeSequence?: boolean;
}): void
Parameters
Parameter | Type |
---|---|
__namedParameters | object |
__namedParameters.disposeSequence? | boolean |
Returns
void
Defined in
evaluator/LlamaCompletion.ts:198
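A minimal cleanup sketch, assuming an existing `LlamaCompletion` instance named `completion` (hypothetical), and that `disposeSequence` controls whether the underlying context sequence is disposed along with it:

```typescript
// Release the resources held by the completion instance.
// disposeSequence: true also disposes the context sequence it was
// constructed with, so that sequence cannot be reused afterwards.
completion.dispose({disposeSequence: true});

// The `disposed` accessor reflects the new state
console.log(completion.disposed);
```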
generateCompletion()
generateCompletion(input: string | LlamaText | Token[], options: LlamaCompletionGenerationOptions): Promise<string>
Generate a completion for an input.
Parameters
Parameter | Type |
---|---|
input | `string \| LlamaText \| Token[]` |
options | LlamaCompletionGenerationOptions |
Returns
Promise<string>
Defined in
evaluator/LlamaCompletion.ts:230
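A usage sketch, assuming the node-llama-cpp setup helpers and a hypothetical model path; `maxTokens` is taken from the documented generation options (verify against `LlamaCompletionGenerationOptions`):

```typescript
import {getLlama, LlamaCompletion} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "model.gguf"}); // hypothetical path
const context = await model.createContext();
const completion = new LlamaCompletion({contextSequence: context.getSequence()});

// Continue the given text; maxTokens bounds how much is generated
const text = await completion.generateCompletion("Here is a list of sweet fruits:\n* ", {
    maxTokens: 100
});
console.log(text);
```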
generateCompletionWithMeta()
generateCompletionWithMeta(input: string | LlamaText | Token[], __namedParameters: LlamaCompletionGenerationOptions): Promise<LlamaCompletionResponse>
Same as generateCompletion, but returns additional metadata about the generation. See generateCompletion for more information.
Parameters
Parameter | Type |
---|---|
input | `string \| LlamaText \| Token[]` |
__namedParameters | LlamaCompletionGenerationOptions |
Returns
Promise<LlamaCompletionResponse>
Defined in
evaluator/LlamaCompletion.ts:240
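A sketch of reading the result, assuming an existing `completion` instance and that `LlamaCompletionResponse` exposes the generated text on a `response` field alongside a `metadata` object; check that type's own documentation for the exact shape:

```typescript
// Hedged sketch: field names below follow LlamaCompletionResponse's
// documented shape; verify against the type definition.
const {response, metadata} = await completion.generateCompletionWithMeta(
    "The sky is",
    {maxTokens: 20}
);

console.log(response);            // the generated text, as with generateCompletion
console.log(metadata.stopReason); // why the generation stopped
```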
generateInfillCompletion()
generateInfillCompletion(
prefixInput: string | LlamaText | Token[],
suffixInput: string | LlamaText | Token[],
options: LlamaInfillGenerationOptions): Promise<string>
Infill (also known as fill-in-middle) generates a completion for an input (prefixInput) that should connect to a given continuation (suffixInput). For example, for prefixInput: "123" and suffixInput: "789", the model is expected to generate 456 to make the final text be 123456789.
Parameters
Parameter | Type |
---|---|
prefixInput | `string \| LlamaText \| Token[]` |
suffixInput | `string \| LlamaText \| Token[]` |
options | LlamaInfillGenerationOptions |
Returns
Promise<string>
Defined in
evaluator/LlamaCompletion.ts:365
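The "123"/"789" example can be sketched in code, assuming a constructed `completion` instance and a model with infill support (guarded via the `infillSupported` accessor):

```typescript
// Not all models provide fill-in-middle tokens, so check support first
if (completion.infillSupported) {
    const fill = await completion.generateInfillCompletion("123", "789", {
        maxTokens: 10
    });
    // With a capable model, the expected fill is "456",
    // making the full text "123" + fill + "789" read "123456789"
    console.log("123" + fill + "789");
}
```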
generateInfillCompletionWithMeta()
generateInfillCompletionWithMeta(
prefixInput: string | LlamaText | Token[],
suffixInput: string | LlamaText | Token[],
__namedParameters: LlamaInfillGenerationOptions): Promise<LlamaCompletionResponse>
Same as generateInfillCompletion, but returns additional metadata about the generation. See generateInfillCompletion for more information.
Parameters
Parameter | Type |
---|---|
prefixInput | string | LlamaText | Token [] |
suffixInput | string | LlamaText | Token [] |
__namedParameters | LlamaInfillGenerationOptions |
Returns
Promise<LlamaCompletionResponse>