Interface: OllamaCompletionModelSettings<CONTEXT_WINDOW_SIZE>

Settings for the text generation model that uses the Ollama completion API.

See

https://github.com/jmorganca/ollama/blob/main/docs/api.md#generate-a-completion
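
For orientation, here is a minimal sketch of a settings object for this interface. The import path is an assumption; per the property list below, only `model` is required, and `contextWindowSize` is what binds the CONTEXT_WINDOW_SIZE type parameter.

```ts
import type { OllamaCompletionModelSettings } from "modelfusion"; // assumed export path

// Minimal settings sketch; `model` is the only required property.
// The literal 4096 binds the CONTEXT_WINDOW_SIZE type parameter.
const settings: OllamaCompletionModelSettings<4096> = {
  model: "mistral",        // a model available in your Ollama server
  contextWindowSize: 4096, // must match the context size the model was loaded with
  temperature: 0.8,        // documented default
  maxGenerationTokens: 500,
};
```

Such a settings object is presumably passed to the OllamaCompletionModel constructor (see the source locations under each property), though the exact construction API depends on the ModelFusion version in use.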

Type parameters

Name                 Type
CONTEXT_WINDOW_SIZE  extends number | undefined

Hierarchy

OllamaTextGenerationSettings

  ↳ OllamaCompletionModelSettings

Properties

api

Optional api: ApiConfiguration

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaCompletionModel.ts:56


context

Optional context: number[]

The context array returned by a previous Ollama completion request. Passing it back in keeps a short conversational memory across requests.

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaCompletionModel.ts:71


contextWindowSize

Optional contextWindowSize: CONTEXT_WINDOW_SIZE

Specifies the context window size of the model that you have loaded in your Ollama server. (Default: 2048)

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaCompletionModel.ts:62


format

Optional format: "json"

The format to return a response in. Currently the only accepted value is 'json'. Leave undefined to return a string.

Inherited from

OllamaTextGenerationSettings.format

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:101
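
A hedged sketch of enabling JSON mode; the Ollama API docs linked above additionally recommend instructing the model in the prompt itself to respond with JSON.

```ts
import type { OllamaCompletionModelSettings } from "modelfusion"; // assumed export path

// JSON mode: the completion is returned as a JSON string rather than free text.
const jsonSettings: OllamaCompletionModelSettings<undefined> = {
  model: "mistral",
  format: "json", // currently the only accepted value; omit for plain strings
};
```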


maxGenerationTokens

Optional maxGenerationTokens: number

Specifies the maximum number of tokens (words, punctuation, parts of words) that the model can generate in a single response. It helps to control the length of the output.

Does nothing if the model does not support this setting.

Example: maxGenerationTokens: 1000

Inherited from

OllamaTextGenerationSettings.maxGenerationTokens

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:28


mirostat

Optional mirostat: number

Enable Mirostat sampling for controlling perplexity. (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0)

Inherited from

OllamaTextGenerationSettings.mirostat

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:22


mirostatEta

Optional mirostatEta: number

Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)

Inherited from

OllamaTextGenerationSettings.mirostatEta

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:29


mirostatTau

Optional mirostatTau: number

Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)

Inherited from

OllamaTextGenerationSettings.mirostatTau

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:35
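
Since the three mirostat settings work together, a combined sketch may help (mirostat enables the sampler; tau and eta are shown at their documented defaults):

```ts
import type { OllamaCompletionModelSettings } from "modelfusion"; // assumed export path

// Mirostat sampling: targets a constant perplexity instead of fixed probability cutoffs.
const mirostatSettings: Partial<OllamaCompletionModelSettings<undefined>> = {
  mirostat: 2,      // 0 = disabled (default), 1 = Mirostat, 2 = Mirostat 2.0
  mirostatTau: 5.0, // lower = more focused, coherent text (default 5.0)
  mirostatEta: 0.1, // lower = slower adjustment to feedback (default 0.1)
};
```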


model

model: string

The name of the model to use. For example, 'mistral'.

See

https://ollama.ai/library

Inherited from

OllamaTextGenerationSettings.model

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:10


numGpu

Optional numGpu: number

The number of layers to send to the GPU(s). On macOS, it defaults to 1 to enable Metal support; set it to 0 to disable.

Inherited from

OllamaTextGenerationSettings.numGpu

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:47


numGqa

Optional numGqa: number

The number of GQA groups in the transformer layer. Required for some models; for example, it is 8 for llama2:70b.

Inherited from

OllamaTextGenerationSettings.numGqa

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:41


numThreads

Optional numThreads: number

Sets the number of threads to use during computation. By default, Ollama will detect this for optimal performance. It is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores).

Inherited from

OllamaTextGenerationSettings.numThreads

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:54
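
numGqa, numGpu, and numThreads all map the model onto your hardware; a combined sketch with illustrative (not recommended) values:

```ts
import type { OllamaCompletionModelSettings } from "modelfusion"; // assumed export path

// Hardware mapping for a hypothetical machine with 8 physical cores and one GPU.
const hardwareSettings: Partial<OllamaCompletionModelSettings<undefined>> = {
  numGpu: 1,     // layers sent to the GPU(s); on macOS, 1 enables Metal, 0 disables
  numGqa: 8,     // GQA groups; required by some models, e.g. 8 for llama2:70b
  numThreads: 8, // physical (not logical) CPU core count; omit to auto-detect
};
```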


numberOfGenerations

Optional numberOfGenerations: number

Number of texts to generate.

Specifies the number of responses or completions the model should generate for a given prompt. This is useful when you need multiple different outputs or ideas for a single prompt. The model will generate 'n' distinct responses, each based on the same initial prompt. In a streaming model, this will result in all responses being streamed back in real time.

Does nothing if the model does not support this setting.

Example: numberOfGenerations: 3 // The model will produce 3 different responses.

Inherited from

OllamaTextGenerationSettings.numberOfGenerations

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:55


observers

Optional observers: FunctionObserver[]

Observers that are called when the model is used in run functions.

Inherited from

OllamaTextGenerationSettings.observers

Defined in

packages/modelfusion/src/model-function/Model.ts:8
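
A minimal observer sketch, assuming the single-method FunctionObserver shape (onFunctionEvent) used by ModelFusion; the precise event fields depend on the installed version:

```ts
import type { FunctionObserver } from "modelfusion"; // assumed export path

// Logs every event emitted while the model runs.
const loggingObserver: FunctionObserver = {
  onFunctionEvent(event) {
    console.log("model event:", event);
  },
};

// Attach it via the settings object: { model: "mistral", observers: [loggingObserver] }
```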


promptTemplate

Optional promptTemplate: TextGenerationPromptTemplateProvider<OllamaCompletionPrompt>

Prompt template provider that is used when calling .withTextPrompt(), .withInstructionPrompt(), or .withChatPrompt().

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaCompletionModel.ts:76
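
A sketch of how the provider is consumed, assuming `model` is an OllamaCompletionModel instance configured with these settings (the method names come from the description above):

```ts
// Assuming `model` is an OllamaCompletionModel built from these settings:
const textModel = model.withTextPrompt();            // plain string prompts
const instructModel = model.withInstructionPrompt(); // structured instruction prompts
const chatModel = model.withChatPrompt();            // multi-turn chat prompts
// Each helper applies the matching template from the configured
// TextGenerationPromptTemplateProvider (presumably a default provider when unset).
```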


raw

Optional raw: boolean

When set to true, no formatting will be applied to the prompt and no context will be returned.

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaCompletionModel.ts:68


repeatLastN

Optional repeatLastN: number

Sets how far back the model looks to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)

Inherited from

OllamaTextGenerationSettings.repeatLastN

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:60


repeatPenalty

Optional repeatPenalty: number

Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1)

Inherited from

OllamaTextGenerationSettings.repeatPenalty

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:67


seed

Optional seed: number

Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: 0)

Inherited from

OllamaTextGenerationSettings.seed

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:74
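
A short reproducibility sketch: with a fixed seed and otherwise identical settings and prompt, repeated calls should produce the same text.

```ts
import type { OllamaCompletionModelSettings } from "modelfusion"; // assumed export path

// Pin the RNG so that repeated calls with the same prompt match exactly.
const reproducibleSettings: Partial<OllamaCompletionModelSettings<undefined>> = {
  seed: 42, // any fixed value; keep the sampling settings identical between runs
};
```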


stopSequences

Optional stopSequences: string[]

Stop sequences to use. Stop sequences are an array of strings or a single string that the model will recognize as end-of-text indicators. The model stops generating more content when it encounters any of these strings. This is particularly useful in scripted or formatted text generation, where a specific end point is required. Stop sequences are not included in the generated text.

Does nothing if the model does not support this setting.

Example: stopSequences: ['\n', 'END']

Inherited from

OllamaTextGenerationSettings.stopSequences

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:41


system

Optional system: string

The system prompt to use. Overrides the system prompt defined in the model's Modelfile.

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaCompletionModel.ts:70


temperature

Optional temperature: number

The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)

Inherited from

OllamaTextGenerationSettings.temperature

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:16


template

Optional template: string

The prompt template to use. Overrides the template defined in the model's Modelfile.

Inherited from

OllamaTextGenerationSettings.template

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:103


tfsZ

Optional tfsZ: number

Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (Default: 1)

Inherited from

OllamaTextGenerationSettings.tfsZ

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:81


topK

Optional topK: number

Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)

Inherited from

OllamaTextGenerationSettings.topK

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:88


topP

Optional topP: number

Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)

Inherited from

OllamaTextGenerationSettings.topP

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextGenerationSettings.ts:95
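
temperature, topK, topP, and tfsZ form the conventional sampling pipeline; a combined sketch showing the documented defaults:

```ts
import type { OllamaCompletionModelSettings } from "modelfusion"; // assumed export path

// Conventional sampling controls, all at their documented defaults.
const samplingSettings: Partial<OllamaCompletionModelSettings<undefined>> = {
  temperature: 0.8, // higher = more creative answers
  topK: 40,         // keep the 40 most likely tokens; lower = more conservative
  topP: 0.9,        // nucleus cutoff, works together with topK; lower = more focused
  tfsZ: 1,          // tail-free sampling; 1 disables, higher trims unlikely tokens more
};
```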


trimWhitespace

Optional trimWhitespace: boolean

When true, the leading and trailing white space and line terminator characters are removed from the generated text.

Default: true.

Inherited from

OllamaTextGenerationSettings.trimWhitespace

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:63