
Class: MistralChatModel

Hierarchy

AbstractModel<MistralChatModelSettings>

↳ MistralChatModel

Implements

TextStreamingBaseModel

Accessors

modelInformation

get modelInformation(): ModelInformation

Returns

ModelInformation

Implementation of

TextStreamingBaseModel.modelInformation

Inherited from

AbstractModel.modelInformation

Defined in

packages/modelfusion/src/model-function/AbstractModel.ts:17


modelName

get modelName(): "mistral-tiny" | "mistral-small" | "mistral-medium"

Returns

"mistral-tiny" | "mistral-small" | "mistral-medium"

Overrides

AbstractModel.modelName

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:84


settingsForEvent

get settingsForEvent(): Partial<MistralChatModelSettings>

Returns settings that should be recorded in observability events. Security-related settings (e.g. API keys) should not be included here.

Returns

Partial<MistralChatModelSettings>

Implementation of

TextStreamingBaseModel.settingsForEvent

Overrides

AbstractModel.settingsForEvent

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:133

Constructors

constructor

new MistralChatModel(settings): MistralChatModel

Parameters

settings: MistralChatModelSettings

Returns

MistralChatModel

Overrides

AbstractModel<MistralChatModelSettings>.constructor

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:79

Methods

callAPI

callAPI<RESULT>(prompt, callOptions, options): Promise<RESULT>

Type parameters

Name
RESULT

Parameters

prompt: ChatPrompt
callOptions: FunctionCallOptions
options: Object
options.responseFormat: MistralChatResponseFormatType<RESULT>

Returns

Promise<RESULT>

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:92


doGenerateTexts

doGenerateTexts(prompt, options): Promise<{ rawResponse: { choices: { finish_reason: "length" | "stop" | "model_length" ; index: number ; message: { content: string ; role: "user" | "assistant" } }[] ; created: number ; id: string ; model: string ; object: string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } } ; textGenerationResults: { finishReason: TextGenerationFinishReason ; text: string = choice.message.content }[] }>

Parameters

prompt: ChatPrompt
options: FunctionCallOptions

Returns

Promise<{ rawResponse: { choices: { finish_reason: "length" | "stop" | "model_length" ; index: number ; message: { content: string ; role: "user" | "assistant" } }[] ; created: number ; id: string ; model: string ; object: string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } } ; textGenerationResults: { finishReason: TextGenerationFinishReason ; text: string = choice.message.content }[] }>

Implementation of

TextStreamingBaseModel.doGenerateTexts

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:150


doStreamText

doStreamText(prompt, options): Promise<AsyncIterable<Delta<{ choices: { delta: { content?: null | string ; role?: null | "user" | "assistant" } ; finish_reason?: null | "length" | "stop" | "model_length" ; index: number }[] ; created?: number ; id: string ; model: string ; object?: string }>>>

Parameters

prompt: ChatPrompt
options: FunctionCallOptions

Returns

Promise<AsyncIterable<Delta<{ choices: { delta: { content?: null | string ; role?: null | "user" | "assistant" } ; finish_reason?: null | "length" | "stop" | "model_length" ; index: number }[] ; created?: number ; id: string ; model: string ; object?: string }>>>

Implementation of

TextStreamingBaseModel.doStreamText

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:194


extractTextDelta

extractTextDelta(delta): undefined | string

Parameters

delta: unknown

Returns

undefined | string

Implementation of

TextStreamingBaseModel.extractTextDelta

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:200


processTextGenerationResponse

processTextGenerationResponse(rawResponse): Object

Parameters

rawResponse: Object
rawResponse.choices: { finish_reason: "length" | "stop" | "model_length" ; index: number ; message: { content: string ; role: "user" | "assistant" } }[]
rawResponse.created: number
rawResponse.id: string
rawResponse.model: string
rawResponse.object: string
rawResponse.usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number }

Returns

Object

rawResponse: { choices: { finish_reason: "length" | "stop" | "model_length" ; index: number ; message: { content: string ; role: "user" | "assistant" } }[] ; created: number ; id: string ; model: string ; object: string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } }
textGenerationResults: { finishReason: TextGenerationFinishReason ; text: string = choice.message.content }[]

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:170


restoreGeneratedTexts

restoreGeneratedTexts(rawResponse): Object

Parameters

rawResponse: unknown

Returns

Object

rawResponse: { choices: { finish_reason: "length" | "stop" | "model_length" ; index: number ; message: { content: string ; role: "user" | "assistant" } }[] ; created: number ; id: string ; model: string ; object: string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } }
textGenerationResults: { finishReason: TextGenerationFinishReason ; text: string = choice.message.content }[]

Implementation of

TextStreamingBaseModel.restoreGeneratedTexts

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:161


withChatPrompt

withChatPrompt(): PromptTemplateTextStreamingModel<ChatPrompt, ChatPrompt, MistralChatModelSettings, MistralChatModel>

Returns this model with a chat prompt template.

Returns

PromptTemplateTextStreamingModel<ChatPrompt, ChatPrompt, MistralChatModelSettings, MistralChatModel>

Implementation of

TextStreamingBaseModel.withChatPrompt

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:213


withInstructionPrompt

withInstructionPrompt(): PromptTemplateTextStreamingModel<InstructionPrompt, ChatPrompt, MistralChatModelSettings, MistralChatModel>

Returns this model with an instruction prompt template.

Returns

PromptTemplateTextStreamingModel<InstructionPrompt, ChatPrompt, MistralChatModelSettings, MistralChatModel>

Implementation of

TextStreamingBaseModel.withInstructionPrompt

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:209


withJsonOutput

withJsonOutput(): this

When possible, limit the output generation to the specified JSON schema, or supersets of it (e.g. JSON in general).

Returns

this

Implementation of

TextStreamingBaseModel.withJsonOutput

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:217


withPromptTemplate

withPromptTemplate<INPUT_PROMPT>(promptTemplate): PromptTemplateTextStreamingModel<INPUT_PROMPT, ChatPrompt, MistralChatModelSettings, MistralChatModel>

Type parameters

Name
INPUT_PROMPT

Parameters

promptTemplate: TextGenerationPromptTemplate<INPUT_PROMPT, ChatPrompt>

Returns

PromptTemplateTextStreamingModel<INPUT_PROMPT, ChatPrompt, MistralChatModelSettings, MistralChatModel>

Implementation of

TextStreamingBaseModel.withPromptTemplate

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:221


withSettings

withSettings(additionalSettings): MistralChatModel

The withSettings method creates a new model with the same configuration as the original model, but with the specified settings changed.

Parameters

additionalSettings: Partial<MistralChatModelSettings>

Returns

MistralChatModel

Example

const model = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 500,
});

const modelWithMoreTokens = model.withSettings({
  maxGenerationTokens: 1000,
});

Implementation of

TextStreamingBaseModel.withSettings

Overrides

AbstractModel.withSettings

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:238


withTextPrompt

withTextPrompt(): PromptTemplateTextStreamingModel<string, ChatPrompt, MistralChatModelSettings, MistralChatModel>

Returns this model with a text prompt template.

Returns

PromptTemplateTextStreamingModel<string, ChatPrompt, MistralChatModelSettings, MistralChatModel>

Implementation of

TextStreamingBaseModel.withTextPrompt

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:205

Properties

contextWindowSize

Readonly contextWindowSize: undefined = undefined

Implementation of

TextStreamingBaseModel.contextWindowSize

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:88


countPromptTokens

Readonly countPromptTokens: undefined = undefined

Optional. Implement if you have a tokenizer and want to count the number of tokens in a prompt.

Implementation of

TextStreamingBaseModel.countPromptTokens

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:90


provider

Readonly provider: "mistral"

Overrides

AbstractModel.provider

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:83


settings

Readonly settings: MistralChatModelSettings

Implementation of

TextStreamingBaseModel.settings

Inherited from

AbstractModel.settings

Defined in

packages/modelfusion/src/model-function/AbstractModel.ts:7


tokenizer

Readonly tokenizer: undefined = undefined

Implementation of

TextStreamingBaseModel.tokenizer

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:89