Class: LlamaCompletion

Defined in: evaluator/LlamaCompletion.ts:181

See

Text Completion tutorial

Constructors

new LlamaCompletion()

```ts
new LlamaCompletion(__namedParameters: LlamaCompletionOptions): LlamaCompletion
```

Defined in: evaluator/LlamaCompletion.ts:187

Parameters

| Parameter | Type |
| --- | --- |
| `__namedParameters` | `LlamaCompletionOptions` |

Returns

LlamaCompletion
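A `LlamaCompletion` instance is constructed over a context sequence obtained from a loaded model. A minimal setup sketch, assuming node-llama-cpp's `getLlama`/`loadModel`/`createContext` flow; the model path is a placeholder you must replace with a local GGUF file:

```ts
import {getLlama, LlamaCompletion} from "node-llama-cpp";

const llama = await getLlama();
// "model.gguf" is a placeholder path — substitute your own local model file
const model = await llama.loadModel({modelPath: "model.gguf"});
const context = await model.createContext();

// Bind the completion to a sequence of the context
const completion = new LlamaCompletion({
    contextSequence: context.getSequence()
});
```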

Properties

onDispose

```ts
readonly onDispose: EventRelay<void>;
```

Defined in: evaluator/LlamaCompletion.ts:185

Accessors

disposed

Get Signature

```ts
get disposed(): boolean
```

Defined in: evaluator/LlamaCompletion.ts:219

Returns

boolean


infillSupported

Get Signature

```ts
get infillSupported(): boolean
```

Defined in: evaluator/LlamaCompletion.ts:223

Returns

boolean

Methods

dispose()

```ts
dispose(__namedParameters: {
    disposeSequence?: boolean;
}): void
```

Defined in: evaluator/LlamaCompletion.ts:202

Parameters

| Parameter | Type |
| --- | --- |
| `__namedParameters` | `{ disposeSequence?: boolean }` |
| `__namedParameters.disposeSequence?` | `boolean` |

Returns

void


generateCompletion()

```ts
generateCompletion(input: string | LlamaText | Token[], options: LlamaCompletionGenerationOptions): Promise<string>
```

Defined in: evaluator/LlamaCompletion.ts:234

Generate a completion for the given input.

Parameters

| Parameter | Type |
| --- | --- |
| `input` | `string \| LlamaText \| Token[]` |
| `options` | `LlamaCompletionGenerationOptions` |

Returns

Promise<string>
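A usage sketch. The setup mirrors node-llama-cpp's documented `getLlama` flow; the model path, prompt, and `maxTokens` value are placeholder assumptions, not part of this reference:

```ts
import {getLlama, LlamaCompletion} from "node-llama-cpp";

const llama = await getLlama();
// Placeholder path — substitute a local GGUF model file
const model = await llama.loadModel({modelPath: "model.gguf"});
const context = await model.createContext();
const completion = new LlamaCompletion({
    contextSequence: context.getSequence()
});

// Continue the given text; the model generates what comes after the prompt
const text = await completion.generateCompletion("The quick brown fox", {
    maxTokens: 32
});
console.log(text);
```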


generateCompletionWithMeta()

```ts
generateCompletionWithMeta(input: string | LlamaText | Token[], __namedParameters: LlamaCompletionGenerationOptions): Promise<LlamaCompletionResponse>
```

Defined in: evaluator/LlamaCompletion.ts:244

Same as `generateCompletion`, but also returns metadata about the generation. See `generateCompletion` for more information.

Parameters

| Parameter | Type |
| --- | --- |
| `input` | `string \| LlamaText \| Token[]` |
| `__namedParameters` | `LlamaCompletionGenerationOptions` |

Returns

Promise<LlamaCompletionResponse>


generateInfillCompletion()

```ts
generateInfillCompletion(
    prefixInput: string | LlamaText | Token[],
    suffixInput: string | LlamaText | Token[],
    options: LlamaInfillGenerationOptions
): Promise<string>
```

Defined in: evaluator/LlamaCompletion.ts:372

Infill (also known as fill-in-middle) generates a completion for an input (`prefixInput`) that should connect to a given continuation (`suffixInput`). For example, for the `prefixInput` `"123"` and the `suffixInput` `"789"`, the model is expected to generate `456`, making the final text `"123456789"`.

Parameters

| Parameter | Type |
| --- | --- |
| `prefixInput` | `string \| LlamaText \| Token[]` |
| `suffixInput` | `string \| LlamaText \| Token[]` |
| `options` | `LlamaInfillGenerationOptions` |

Returns

Promise<string>
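An infill sketch. Not every model supports fill-in-middle, so the `infillSupported` accessor is checked first; the setup, model path, and prefix/suffix strings are placeholder assumptions:

```ts
import {getLlama, LlamaCompletion} from "node-llama-cpp";

const llama = await getLlama();
// Placeholder path — use a model that supports fill-in-middle (e.g. a code model)
const model = await llama.loadModel({modelPath: "model.gguf"});
const context = await model.createContext();
const completion = new LlamaCompletion({
    contextSequence: context.getSequence()
});

if (!completion.infillSupported)
    throw new Error("The loaded model does not support infill completions");

// The model fills in the gap between the prefix and the suffix
const middle = await completion.generateInfillCompletion(
    "function add(a: number, b: number) {\n    return ",  // prefix
    ";\n}",                                               // suffix
    {maxTokens: 32}
);
console.log(middle);
```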


generateInfillCompletionWithMeta()

```ts
generateInfillCompletionWithMeta(
    prefixInput: string | LlamaText | Token[],
    suffixInput: string | LlamaText | Token[],
    __namedParameters: LlamaInfillGenerationOptions
): Promise<LlamaCompletionResponse>
```

Defined in: evaluator/LlamaCompletion.ts:386

Same as `generateInfillCompletion`, but also returns metadata about the generation. See `generateInfillCompletion` for more information.

Parameters

| Parameter | Type |
| --- | --- |
| `prefixInput` | `string \| LlamaText \| Token[]` |
| `suffixInput` | `string \| LlamaText \| Token[]` |
| `__namedParameters` | `LlamaInfillGenerationOptions` |

Returns

Promise<LlamaCompletionResponse>