Class: LlamaCompletion

Constructors

new LlamaCompletion()

```ts
new LlamaCompletion(__namedParameters: LlamaCompletionOptions): LlamaCompletion
```

Parameters

| Parameter | Type |
| --- | --- |
| `__namedParameters` | `LlamaCompletionOptions` |

Returns

LlamaCompletion

Defined in

evaluator/LlamaCompletion.ts:183

Properties

onDispose

```ts
readonly onDispose: EventRelay<void>;
```

Defined in

evaluator/LlamaCompletion.ts:181

Accessors

disposed

```ts
get disposed(): boolean
```

Returns

boolean

Defined in

evaluator/LlamaCompletion.ts:215


infillSupported

```ts
get infillSupported(): boolean
```

Returns

boolean

Defined in

evaluator/LlamaCompletion.ts:219

Methods

dispose()

```ts
dispose(__namedParameters: {
  disposeSequence: boolean;
}): void
```

Parameters

| Parameter | Type |
| --- | --- |
| `__namedParameters` | `object` |
| `__namedParameters.disposeSequence?` | `boolean` |

Returns

void

Defined in

evaluator/LlamaCompletion.ts:198


generateCompletion()

```ts
generateCompletion(input: string | LlamaText | Token[], options: LlamaCompletionGenerationOptions): Promise<string>
```

Generate a completion for an input.

Parameters

| Parameter | Type |
| --- | --- |
| `input` | `string` \| `LlamaText` \| `Token[]` |
| `options` | `LlamaCompletionGenerationOptions` |

Returns

Promise<string>

Defined in

evaluator/LlamaCompletion.ts:230
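A minimal usage sketch, assuming node-llama-cpp's surrounding API (`getLlama`, `loadModel`, `createContext`, `getSequence`) and a placeholder model path — not runnable without a local GGUF model:

```ts
import {getLlama, LlamaCompletion} from "node-llama-cpp";

// Assumed setup: load a local model and create a context sequence for evaluation
const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"});
const context = await model.createContext();
const completion = new LlamaCompletion({contextSequence: context.getSequence()});

// Continue the given text; maxTokens caps the length of the generation
const text = await completion.generateCompletion("Here is a list of sweet fruits:\n* ", {
    maxTokens: 64
});
console.log(text);
```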


generateCompletionWithMeta()

```ts
generateCompletionWithMeta(input: string | LlamaText | Token[], __namedParameters: LlamaCompletionGenerationOptions): Promise<LlamaCompletionResponse>
```

Same as `generateCompletion`, but also returns metadata about the generation. See `generateCompletion` for more information.

Parameters

| Parameter | Type |
| --- | --- |
| `input` | `string` \| `LlamaText` \| `Token[]` |
| `__namedParameters` | `LlamaCompletionGenerationOptions` |

Returns

Promise<LlamaCompletionResponse>

Defined in

evaluator/LlamaCompletion.ts:240


generateInfillCompletion()

```ts
generateInfillCompletion(
   prefixInput: string | LlamaText | Token[], 
   suffixInput: string | LlamaText | Token[], 
   options: LlamaInfillGenerationOptions): Promise<string>
```

Infill (also known as fill-in-middle) generates a completion for an input (`prefixInput`) that should connect to a given continuation (`suffixInput`). For example, for `prefixInput: "123"` and `suffixInput: "789"`, the model is expected to generate `456`, making the final text `"123456789"`.

Parameters

| Parameter | Type |
| --- | --- |
| `prefixInput` | `string` \| `LlamaText` \| `Token[]` |
| `suffixInput` | `string` \| `LlamaText` \| `Token[]` |
| `options` | `LlamaInfillGenerationOptions` |

Returns

Promise<string>

Defined in

evaluator/LlamaCompletion.ts:365
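A sketch of the prefix/suffix call, assuming an already-constructed `LlamaCompletion` instance; the `infillSupported` check matters because infill requires the model to define special prefix/suffix tokens:

```ts
import {LlamaCompletion} from "node-llama-cpp";

async function fillMiddle(completion: LlamaCompletion): Promise<string> {
    // Infill is only available when the model supports it
    if (!completion.infillSupported)
        throw new Error("This model does not support infill");

    // Ask the model to bridge the gap between the prefix and the suffix,
    // e.g. to generate a function body between its signature and closing brace
    return await completion.generateInfillCompletion(
        "function add(a: number, b: number) {\n",
        "\n}",
        {maxTokens: 48}
    );
}
```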


generateInfillCompletionWithMeta()

```ts
generateInfillCompletionWithMeta(
   prefixInput: string | LlamaText | Token[], 
   suffixInput: string | LlamaText | Token[], 
   __namedParameters: LlamaInfillGenerationOptions): Promise<LlamaCompletionResponse>
```

Same as `generateInfillCompletion`, but also returns metadata about the generation. See `generateInfillCompletion` for more information.

Parameters

| Parameter | Type |
| --- | --- |
| `prefixInput` | `string` \| `LlamaText` \| `Token[]` |
| `suffixInput` | `string` \| `LlamaText` \| `Token[]` |
| `__namedParameters` | `LlamaInfillGenerationOptions` |

Returns

Promise<LlamaCompletionResponse>

Defined in

evaluator/LlamaCompletion.ts:379