Interface: MistralChatModelSettings

Hierarchy

TextGenerationModelSettings
  ↳ MistralChatModelSettings

Properties

api

Optional api: ApiConfiguration

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:34
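
A minimal sketch of supplying a custom API configuration, assuming the `mistral.Api` factory and `mistral.ChatTextGenerator` exports of recent modelfusion versions (verify against your installed version):

```ts
import { mistral } from "modelfusion";

// Assumption: mistral.Api(...) builds an ApiConfiguration; by default the
// provider reads the MISTRAL_API_KEY environment variable.
const api = mistral.Api({
  apiKey: process.env.MISTRAL_API_KEY,
});

const model = mistral.ChatTextGenerator({
  api, // custom ApiConfiguration (API key, base URL, retry/throttle policy)
  model: "mistral-tiny",
});
```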


maxGenerationTokens

Optional maxGenerationTokens: number

Specifies the maximum number of tokens (words, punctuation, parts of words) that the model can generate in a single response. It helps to control the length of the output.

Does nothing if the model does not support this setting.

Example: maxGenerationTokens: 1000

Inherited from

TextGenerationModelSettings.maxGenerationTokens

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:28


model

model: "mistral-tiny" | "mistral-small" | "mistral-medium"

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:36
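
Putting the required `model` setting together with the generation settings above, a minimal end-to-end sketch (the object-style `generateText` call and the `.withTextPrompt()` helper are assumptions based on recent modelfusion versions):

```ts
import { generateText, mistral } from "modelfusion";

const text = await generateText({
  model: mistral
    .ChatTextGenerator({
      model: "mistral-medium", // required: one of the three model tiers
      maxGenerationTokens: 1000, // cap the length of the response
    })
    .withTextPrompt(), // accept a plain string prompt
  prompt: "Summarize the plot of Hamlet in two sentences.",
});

console.log(text);
```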


numberOfGenerations

Optional numberOfGenerations: number

Number of texts to generate.

Specifies the number of responses or completions the model should generate for a given prompt. This is useful when you need multiple different outputs or ideas for a single prompt. The model will generate 'n' distinct responses, each based on the same initial prompt. When streaming, all 'n' responses are streamed back in real time.

Does nothing if the model does not support this setting.

Example: numberOfGenerations: 3 // The model will produce 3 different responses.

Inherited from

TextGenerationModelSettings.numberOfGenerations

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:55
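
A sketch of requesting several completions in one call; the `fullResponse: true` flag and the `texts` field on the result are assumptions about the modelfusion `generateText` API:

```ts
import { generateText, mistral } from "modelfusion";

const { texts } = await generateText({
  model: mistral
    .ChatTextGenerator({
      model: "mistral-small",
      numberOfGenerations: 3, // ask for 3 distinct completions
    })
    .withTextPrompt(),
  prompt: "Suggest a name for a note-taking app.",
  fullResponse: true, // assumed to expose all generations, not just the first
});

texts.forEach((text, i) => console.log(`candidate ${i + 1}: ${text}`));
```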


observers

Optional observers: FunctionObserver[]

Observers that are called when the model is used in run functions.

Inherited from

TextGenerationModelSettings.observers

Defined in

packages/modelfusion/src/model-function/Model.ts:8
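
A minimal custom observer; the single `onFunctionEvent` method matches the shape of modelfusion's `FunctionObserver` interface in recent versions (verify the `FunctionEvent` fields against your version):

```ts
import { mistral, FunctionObserver, FunctionEvent } from "modelfusion";

// Logs every lifecycle event (e.g. started / finished) emitted while the
// model runs inside a model function such as generateText.
const loggingObserver: FunctionObserver = {
  onFunctionEvent(event: FunctionEvent) {
    console.log(`[${event.eventType}] ${event.functionType}`);
  },
};

const model = mistral.ChatTextGenerator({
  model: "mistral-tiny",
  observers: [loggingObserver],
});
```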


randomSeed

Optional randomSeed: null | number

The seed to use for random sampling. If set, different calls will generate deterministic results.

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:71


safeMode

Optional safeMode: boolean

Whether to inject a safety prompt before all conversations.

Default: false

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:65
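
`safeMode` combines naturally with `randomSeed` (documented above); a short sketch of the two Mistral-specific switches, with the same caveats about the modelfusion surface as before:

```ts
import { mistral } from "modelfusion";

const model = mistral.ChatTextGenerator({
  model: "mistral-small",
  randomSeed: 42, // same inputs + same seed => deterministic sampling
  safeMode: true, // inject Mistral's safety prompt before each conversation
});
```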


stopSequences

Optional stopSequences: string[]

Stop sequences to use. Stop sequences are an array of strings that the model recognizes as end-of-text indicators: the model stops generating more content when it encounters any of them. This is particularly useful in scripted or formatted text generation, where a specific end point is required. Stop sequences are not included in the generated text.

Does nothing if the model does not support this setting.

Example: stopSequences: ['\n', 'END']

Inherited from

TextGenerationModelSettings.stopSequences

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:41


temperature

Optional temperature: null | number

What sampling temperature to use, between 0.0 and 2.0. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

We generally recommend altering this or top_p but not both.

Default: 0.7

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:47


topP

Optional topP: number

Nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

Default: 1

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:58
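
Because the docs recommend tuning `temperature` or `topP` but not both, a sketch contrasting the two (values are illustrative):

```ts
import { mistral } from "modelfusion";

// Option A: steer randomness with temperature; leave topP at its default (1).
const focused = mistral.ChatTextGenerator({
  model: "mistral-medium",
  temperature: 0.2, // low temperature => more focused, deterministic output
});

// Option B: steer randomness with nucleus sampling; leave temperature alone.
const nucleus = mistral.ChatTextGenerator({
  model: "mistral-medium",
  temperature: null, // defer to the API default
  topP: 0.1, // only tokens in the top 10% probability mass are considered
});
```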


trimWhitespace

Optional trimWhitespace: boolean

When true, the leading and trailing white space and line terminator characters are removed from the generated text.

Default: true
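
Example: trimWhitespace: false // keep the raw output, including leading/trailing whitespace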

Inherited from

TextGenerationModelSettings.trimWhitespace

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:63