Class: GgufInsightsConfigurationResolver

Accessors

ggufInsights

```ts
get ggufInsights(): GgufInsights
```

Returns

GgufInsights

Defined in

gguf/insights/GgufInsightsConfigurationResolver.ts:21

Methods

resolveAndScoreConfig()

```ts
resolveAndScoreConfig(options?: {
  targetGpuLayers?: number | "max";
  targetContextSize?: number;
  embeddingContext?: boolean;
  flashAttention?: boolean;
}, hardwareOverrides?: {
  getVramState?: () => Promise<{
    total: number;
    free: number;
  }>;
  getRamState?: () => Promise<{
    total: number;
    free: number;
  }>;
  llamaVramPaddingSize?: number;
  llamaGpu?: false | "cuda" | "vulkan" | "metal";
  llamaSupportsGpuOffloading?: boolean;
}): Promise<{
  compatibilityScore: number;
  bonusScore: number;
  totalScore: number;
  resolvedValues: {
    gpuLayers: number;
    contextSize: number;
    modelRamUsage: number;
    contextRamUsage: number;
    totalRamUsage: number;
    modelVramUsage: number;
    contextVramUsage: number;
    totalVramUsage: number;
  };
}>
```

Resolve the best configuration for loading a model and creating a context using the current hardware.

Specifying targetGpuLayers and/or targetContextSize will ensure the resolved configuration matches those values, but note that doing so can lower the compatibility score if the hardware doesn't support them.

Overriding hardware values is possible by configuring hardwareOverrides.
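As an illustration, a hedged usage sketch: all override values below are hypothetical, and the shape of hardwareOverrides (getter functions returning promises) is inferred from the signature above.

```ts
// Hypothetical hardware overrides: 8 GiB of VRAM (6 GiB free),
// 16 GiB of RAM (12 GiB free), CUDA backend with GPU offloading.
const hardwareOverrides = {
    getVramState: async () => ({total: 8 * 1024 ** 3, free: 6 * 1024 ** 3}),
    getRamState: async () => ({total: 16 * 1024 ** 3, free: 12 * 1024 ** 3}),
    llamaVramPaddingSize: 64 * 1024 ** 2,
    llamaGpu: "cuda",
    llamaSupportsGpuOffloading: true
};

// Assuming `resolver` is a GgufInsightsConfigurationResolver instance:
// const {totalScore, resolvedValues} = await resolver.resolveAndScoreConfig(
//     {targetContextSize: 8192},
//     hardwareOverrides
// );
```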

Parameters

| Parameter | Type | Description |
| ------ | ------ | ------ |
| `options` | `object` | - |
| `options.targetGpuLayers?` | `number \| "max"` | - |
| `options.targetContextSize?` | `number` | - |
| `options.embeddingContext?` | `boolean` | - |
| `options.flashAttention?` | `boolean` | - |
| `hardwareOverrides` | `object` | - |
| `hardwareOverrides.getVramState?` | - | - |
| `hardwareOverrides.getRamState?` | - | - |
| `hardwareOverrides.llamaVramPaddingSize?` | `number` | - |
| `hardwareOverrides.llamaGpu?` | `false \| "cuda" \| "vulkan" \| "metal"` | - |
| `hardwareOverrides.llamaSupportsGpuOffloading?` | `boolean` | - |

Returns

Promise<{ compatibilityScore: number; bonusScore: number; totalScore: number; resolvedValues: { gpuLayers: number; contextSize: number; modelRamUsage: number; contextRamUsage: number; totalRamUsage: number; modelVramUsage: number; contextVramUsage: number; totalVramUsage: number; }; }>

compatibilityScore
```ts
compatibilityScore: number;
```

A number between 0 (inclusive) and 1 (inclusive) representing the compatibility score.

bonusScore

```ts
bonusScore: number;
```

A number starting at 0 with no upper limit representing the bonus score. For each multiplier of the specified contextSize that the resolved context size is larger by, 1 bonus point is given.

totalScore

```ts
totalScore: number;
```

The total score, which is the sum of the compatibility and bonus scores.

resolvedValues

```ts
resolvedValues: {
  gpuLayers: number;
  contextSize: number;
  modelRamUsage: number;
  contextRamUsage: number;
  totalRamUsage: number;
  modelVramUsage: number;
  contextVramUsage: number;
  totalVramUsage: number;
};
```

The resolved values used to calculate the scores.

resolvedValues.gpuLayers

```ts
gpuLayers: number;
```

resolvedValues.contextSize

```ts
contextSize: number;
```

resolvedValues.modelRamUsage

```ts
modelRamUsage: number;
```

resolvedValues.contextRamUsage

```ts
contextRamUsage: number;
```

resolvedValues.totalRamUsage

```ts
totalRamUsage: number;
```

resolvedValues.modelVramUsage

```ts
modelVramUsage: number;
```

resolvedValues.contextVramUsage

```ts
contextVramUsage: number;
```

resolvedValues.totalVramUsage

```ts
totalVramUsage: number;
```

Defined in

gguf/insights/GgufInsightsConfigurationResolver.ts:35


scoreModelConfigurationCompatibility()

```ts
scoreModelConfigurationCompatibility(options?: {
  contextSize?: number;
  embeddingContext?: boolean;
  flashAttention?: boolean;
  maximumFittedContextSizeMultiplier?: number;
}, hardwareOverrides?: {
  getVramState?: () => Promise<{
    total: number;
    free: number;
  }>;
  getRamState?: () => Promise<{
    total: number;
    free: number;
  }>;
  llamaVramPaddingSize?: number;
  llamaGpu?: false | "cuda" | "vulkan" | "metal";
  llamaSupportsGpuOffloading?: boolean;
}): Promise<{
  compatibilityScore: number;
  bonusScore: number;
  totalScore: number;
  resolvedValues: {
    gpuLayers: number;
    contextSize: number;
    modelRamUsage: number;
    contextRamUsage: number;
    totalRamUsage: number;
    modelVramUsage: number;
    contextVramUsage: number;
    totalVramUsage: number;
  };
}>
```

Score the compatibility of the model configuration with the current GPU and VRAM state. Assumes a model is loaded with the default "auto" configurations. Scored based on the following criteria:

  • The number of GPU layers that can be offloaded to the GPU (when no GPU is available, this criterion is based instead on how small the model is)
  • Whether all layers can be offloaded to the GPU (gives additional points)
  • Whether the resolved context size is at least as large as the specified contextSize

If the resolved context size is larger than the specified context size, for each multiplier of the specified contextSize that the resolved context size is larger by, 1 bonus point is given in the bonusScore.

maximumFittedContextSizeMultiplier is used to improve the proportionality of the bonus score between models. Set this to any value higher than <max compared model context size> / contextSize. Defaults to 100.

contextSize defaults to 4096 (if the model train context size is lower than this, the model train context size is used instead).
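To make the bonus-point rule above concrete, here is a minimal sketch of one plausible reading of it; this is illustrative only, not the library's actual formula.

```ts
// Illustrative only: one reading of the bonus-score rule described above.
// One bonus point is awarded for each whole multiple of `contextSize`
// that the resolved context size exceeds it by.
function illustrativeBonusScore(resolvedContextSize: number, contextSize: number): number {
    return Math.max(0, Math.floor(resolvedContextSize / contextSize) - 1);
}
```

Under this reading, a resolved context size of 8192 against a specified contextSize of 4096 would yield 1 bonus point.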

Parameters

| Parameter | Type |
| ------ | ------ |
| `options` | `object` |
| `options.contextSize?` | `number` |
| `options.embeddingContext?` | `boolean` |
| `options.flashAttention?` | `boolean` |
| `options.maximumFittedContextSizeMultiplier?` | `number` |
| `hardwareOverrides` | `object` |
| `hardwareOverrides.getVramState?` | - |
| `hardwareOverrides.getRamState?` | - |
| `hardwareOverrides.llamaVramPaddingSize?` | `number` |
| `hardwareOverrides.llamaGpu?` | `false \| "cuda" \| "vulkan" \| "metal"` |
| `hardwareOverrides.llamaSupportsGpuOffloading?` | `boolean` |

Returns

Promise<{ compatibilityScore: number; bonusScore: number; totalScore: number; resolvedValues: { gpuLayers: number; contextSize: number; modelRamUsage: number; contextRamUsage: number; totalRamUsage: number; modelVramUsage: number; contextVramUsage: number; totalVramUsage: number; }; }>

compatibilityScore
```ts
compatibilityScore: number;
```

A number between 0 (inclusive) and 1 (inclusive) representing the compatibility score.

bonusScore

```ts
bonusScore: number;
```

A number starting at 0 with no upper limit representing the bonus score. For each multiplier of the specified contextSize that the resolved context size is larger by, 1 bonus point is given.

totalScore

```ts
totalScore: number;
```

The total score, which is the sum of the compatibility and bonus scores.

resolvedValues

```ts
resolvedValues: {
  gpuLayers: number;
  contextSize: number;
  modelRamUsage: number;
  contextRamUsage: number;
  totalRamUsage: number;
  modelVramUsage: number;
  contextVramUsage: number;
  totalVramUsage: number;
};
```

The resolved values used to calculate the scores.

resolvedValues.gpuLayers

```ts
gpuLayers: number;
```

resolvedValues.contextSize

```ts
contextSize: number;
```

resolvedValues.modelRamUsage

```ts
modelRamUsage: number;
```

resolvedValues.contextRamUsage

```ts
contextRamUsage: number;
```

resolvedValues.totalRamUsage

```ts
totalRamUsage: number;
```

resolvedValues.modelVramUsage

```ts
modelVramUsage: number;
```

resolvedValues.contextVramUsage

```ts
contextVramUsage: number;
```

resolvedValues.totalVramUsage

```ts
totalVramUsage: number;
```

Defined in

gguf/insights/GgufInsightsConfigurationResolver.ts:153


resolveModelGpuLayers()

```ts
resolveModelGpuLayers(gpuLayers?: number | "auto" | "max" | {
  min: number;
  max: number;
  fitContext: {
    contextSize: number;
    embeddingContext: boolean;
  };
}, options?: {
  ignoreMemorySafetyChecks?: boolean;
  getVramState?: () => Promise<{
    total: number;
    free: number;
  }>;
  llamaVramPaddingSize?: number;
  llamaGpu?: false | "cuda" | "vulkan" | "metal";
  llamaSupportsGpuOffloading?: boolean;
  defaultContextFlashAttention?: boolean;
}): Promise<number>
```

Parameters

| Parameter | Type |
| ------ | ------ |
| `gpuLayers?` | `number \| "auto" \| "max" \| { min: number; max: number; fitContext: { contextSize: number; embeddingContext: boolean; }; }` |
| `options?` | `object` |
| `options.ignoreMemorySafetyChecks?` | `boolean` |
| `options.getVramState?` | - |
| `options.llamaVramPaddingSize?` | `number` |
| `options.llamaGpu?` | `false \| "cuda" \| "vulkan" \| "metal"` |
| `options.llamaSupportsGpuOffloading?` | `boolean` |
| `options.defaultContextFlashAttention?` | `boolean` |

Returns

Promise<number>
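For illustration, the shapes the gpuLayers argument can take, per the signature above; the concrete numbers are hypothetical.

```ts
// Illustrative gpuLayers values for resolveModelGpuLayers()
// (the concrete numbers are made up for this example):
type GpuLayersOption = number | "auto" | "max" | {
    min: number,
    max: number,
    fitContext: {contextSize: number, embeddingContext: boolean}
};

const gpuLayersExamples: GpuLayersOption[] = [
    "auto",  // let the resolver pick based on available VRAM
    "max",   // offload as many layers as possible
    16,      // offload exactly 16 layers
    {        // offload 8-32 layers while fitting an 8192-token context
        min: 8,
        max: 32,
        fitContext: {contextSize: 8192, embeddingContext: false}
    }
];
```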

Defined in

gguf/insights/GgufInsightsConfigurationResolver.ts:315


resolveContextContextSize()

```ts
resolveContextContextSize(contextSize: undefined | number | "auto" | {
  min: number;
  max: number;
}, options: {
  modelGpuLayers: number;
  modelTrainContextSize: number;
  flashAttention?: boolean;
  batchSize?: number;
  sequences?: number;
  getVramState?: () => Promise<{
    total: number;
    free: number;
  }>;
  llamaGpu?: false | "cuda" | "vulkan" | "metal";
  ignoreMemorySafetyChecks?: boolean;
  isEmbeddingContext?: boolean;
}): Promise<number>
```

Parameters

| Parameter | Type |
| ------ | ------ |
| `contextSize` | `undefined \| number \| "auto" \| { min: number; max: number; }` |
| `options` | `object` |
| `options.modelGpuLayers` | `number` |
| `options.modelTrainContextSize` | `number` |
| `options.flashAttention?` | `boolean` |
| `options.batchSize?` | `number` |
| `options.sequences?` | `number` |
| `options.getVramState?` | - |
| `options.llamaGpu?` | `false \| "cuda" \| "vulkan" \| "metal"` |
| `options.ignoreMemorySafetyChecks?` | `boolean` |
| `options.isEmbeddingContext?` | `boolean` |

Returns

Promise<number>
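For illustration, the shapes the contextSize argument can take, per the signature above; the concrete numbers are hypothetical.

```ts
// Illustrative contextSize arguments for resolveContextContextSize()
// (the concrete numbers are made up for this example):
const contextSizeExamples: Array<undefined | number | "auto" | {min: number, max: number}> = [
    "auto",                  // resolve a size based on the available memory
    4096,                    // request an exact context size
    {min: 2048, max: 16384}  // resolve a size within this range
];
```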

Defined in

gguf/insights/GgufInsightsConfigurationResolver.ts:336