Class: MistralChatModel
Hierarchy
- AbstractModel<MistralChatModelSettings>
  ↳ MistralChatModel
Implements
- TextStreamingBaseModel<ChatPrompt, MistralChatModelSettings>
Accessors
modelInformation
• get modelInformation(): ModelInformation
Returns
ModelInformation
Implementation of
TextStreamingBaseModel.modelInformation
Inherited from
AbstractModel.modelInformation
Defined in
packages/modelfusion/src/model-function/AbstractModel.ts:17
modelName
• get modelName(): "mistral-tiny" | "mistral-small" | "mistral-medium"
Returns
"mistral-tiny" | "mistral-small" | "mistral-medium"
Overrides
AbstractModel.modelName
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:84
settingsForEvent
• get settingsForEvent(): Partial<MistralChatModelSettings>
Returns settings that should be recorded in observability events. Security-related settings (e.g. API keys) should not be included here.
Returns
Partial<MistralChatModelSettings>
Implementation of
TextStreamingBaseModel.settingsForEvent
Overrides
AbstractModel.settingsForEvent
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:133
Constructors
constructor
• new MistralChatModel(settings): MistralChatModel
Parameters
Name | Type |
---|---|
settings | MistralChatModelSettings |
Returns
MistralChatModel
Overrides
AbstractModel<MistralChatModelSettings>.constructor
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:79
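For orientation, a minimal construction sketch; everything beyond the model name is an assumption about MistralChatModelSettings, not something documented on this page:
```ts
// Sketch only. The `model` values come from the modelName accessor above;
// `maxGenerationTokens` and `temperature` are assumed settings entries,
// and the package-root export path is assumed as well.
import { MistralChatModel } from "modelfusion";

const model = new MistralChatModel({
  model: "mistral-small",
  maxGenerationTokens: 500,
  temperature: 0.7,
});
```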
Methods
callAPI
▸ callAPI<RESULT>(prompt, callOptions, options): Promise<RESULT>
Type parameters
Name |
---|
RESULT |
Parameters
Name | Type |
---|---|
prompt | ChatPrompt |
callOptions | FunctionCallOptions |
options | Object |
options.responseFormat | MistralChatResponseFormatType<RESULT> |
Returns
Promise<RESULT>
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:92
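callAPI is the low-level call that doGenerateTexts and doStreamText build on; the responseFormat option selects how the raw Mistral response is parsed. A sketch, assuming a MistralChatResponseFormat object with a non-streaming json entry (only the MistralChatResponseFormatType type is confirmed on this page):
```ts
// Hypothetical: `MistralChatResponseFormat.json` is an assumed export;
// substitute whichever MistralChatResponseFormatType value you actually use.
declare const model: MistralChatModel;
declare const prompt: ChatPrompt;
declare const callOptions: FunctionCallOptions;

const rawResponse = await model.callAPI(prompt, callOptions, {
  responseFormat: MistralChatResponseFormat.json,
});
```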
doGenerateTexts
▸ doGenerateTexts(prompt, options): Promise<{ rawResponse: { choices: { finish_reason: "length" | "stop" | "model_length"; index: number; message: { content: string; role: "user" | "assistant" } }[]; created: number; id: string; model: string; object: string; usage: { completion_tokens: number; prompt_tokens: number; total_tokens: number } }; textGenerationResults: { finishReason: TextGenerationFinishReason; text: string = choice.message.content }[] }>
Parameters
Name | Type |
---|---|
prompt | ChatPrompt |
options | FunctionCallOptions |
Returns
Promise<{ rawResponse: { choices: { finish_reason: "length" | "stop" | "model_length"; index: number; message: { content: string; role: "user" | "assistant" } }[]; created: number; id: string; model: string; object: string; usage: { completion_tokens: number; prompt_tokens: number; total_tokens: number } }; textGenerationResults: { finishReason: TextGenerationFinishReason; text: string = choice.message.content }[] }>
Implementation of
TextStreamingBaseModel.doGenerateTexts
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:150
doStreamText
▸ doStreamText(prompt, options): Promise<AsyncIterable<Delta<{ choices: { delta: { content?: null | string; role?: null | "user" | "assistant" }; finish_reason?: null | "length" | "stop" | "model_length"; index: number }[]; created?: number; id: string; model: string; object?: string }>>>
Parameters
Name | Type |
---|---|
prompt | ChatPrompt |
options | FunctionCallOptions |
Returns
Promise<AsyncIterable<Delta<{ choices: { delta: { content?: null | string; role?: null | "user" | "assistant" }; finish_reason?: null | "length" | "stop" | "model_length"; index: number }[]; created?: number; id: string; model: string; object?: string }>>>
Implementation of
TextStreamingBaseModel.doStreamText
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:194
extractTextDelta
▸ extractTextDelta(delta): undefined | string
Parameters
Name | Type |
---|---|
delta | unknown |
Returns
undefined | string
Implementation of
TextStreamingBaseModel.extractTextDelta
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:200
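Together, doStreamText and extractTextDelta support a manual streaming loop. A low-level sketch built only from the signatures documented above; applications would normally go through the library's higher-level streaming helpers instead:
```ts
declare const model: MistralChatModel;
declare const prompt: ChatPrompt;
declare const options: FunctionCallOptions;

const deltas = await model.doStreamText(prompt, options);
for await (const delta of deltas) {
  const text = model.extractTextDelta(delta); // undefined | string
  if (text !== undefined) {
    process.stdout.write(text); // emit tokens as they arrive
  }
}
```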
processTextGenerationResponse
▸ processTextGenerationResponse(rawResponse): Object
Parameters
Name | Type |
---|---|
rawResponse | Object |
rawResponse.choices | { finish_reason : "length" | "stop" | "model_length" ; index : number ; message : { content : string ; role : "user" | "assistant" } }[] |
rawResponse.created | number |
rawResponse.id | string |
rawResponse.model | string |
rawResponse.object | string |
rawResponse.usage | Object |
rawResponse.usage.completion_tokens | number |
rawResponse.usage.prompt_tokens | number |
rawResponse.usage.total_tokens | number |
Returns
Object
Name | Type |
---|---|
rawResponse | { choices : { finish_reason : "length" | "stop" | "model_length" ; index : number ; message : { content : string ; role : "user" | "assistant" } }[] ; created : number ; id : string ; model : string ; object : string ; usage : { completion_tokens : number ; prompt_tokens : number ; total_tokens : number } } |
rawResponse.choices | { finish_reason : "length" | "stop" | "model_length" ; index : number ; message : { content : string ; role : "user" | "assistant" } }[] |
rawResponse.created | number |
rawResponse.id | string |
rawResponse.model | string |
rawResponse.object | string |
rawResponse.usage | { completion_tokens : number ; prompt_tokens : number ; total_tokens : number } |
rawResponse.usage.completion_tokens | number |
rawResponse.usage.prompt_tokens | number |
rawResponse.usage.total_tokens | number |
textGenerationResults | { finishReason : TextGenerationFinishReason ; text : string = choice.message.content }[] |
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:170
restoreGeneratedTexts
▸ restoreGeneratedTexts(rawResponse): Object
Parameters
Name | Type |
---|---|
rawResponse | unknown |
Returns
Object
Name | Type |
---|---|
rawResponse | { choices : { finish_reason : "length" | "stop" | "model_length" ; index : number ; message : { content : string ; role : "user" | "assistant" } }[] ; created : number ; id : string ; model : string ; object : string ; usage : { completion_tokens : number ; prompt_tokens : number ; total_tokens : number } } |
rawResponse.choices | { finish_reason : "length" | "stop" | "model_length" ; index : number ; message : { content : string ; role : "user" | "assistant" } }[] |
rawResponse.created | number |
rawResponse.id | string |
rawResponse.model | string |
rawResponse.object | string |
rawResponse.usage | { completion_tokens : number ; prompt_tokens : number ; total_tokens : number } |
rawResponse.usage.completion_tokens | number |
rawResponse.usage.prompt_tokens | number |
rawResponse.usage.total_tokens | number |
textGenerationResults | { finishReason : TextGenerationFinishReason ; text : string = choice.message.content }[] |
Implementation of
TextStreamingBaseModel.restoreGeneratedTexts
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:161
withChatPrompt
▸ withChatPrompt(): PromptTemplateTextStreamingModel<ChatPrompt, ChatPrompt, MistralChatModelSettings, MistralChatModel>
Returns this model with a chat prompt template.
Returns
PromptTemplateTextStreamingModel<ChatPrompt, ChatPrompt, MistralChatModelSettings, MistralChatModel>
Implementation of
TextStreamingBaseModel.withChatPrompt
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:213
withInstructionPrompt
▸ withInstructionPrompt(): PromptTemplateTextStreamingModel<InstructionPrompt, ChatPrompt, MistralChatModelSettings, MistralChatModel>
Returns this model with an instruction prompt template.
Returns
PromptTemplateTextStreamingModel<InstructionPrompt, ChatPrompt, MistralChatModelSettings, MistralChatModel>
Implementation of
TextStreamingBaseModel.withInstructionPrompt
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:209
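A sketch of the instruction-prompt variant; the InstructionPrompt field names used here are an assumption, and the wrapped model is assumed to expose the same doGenerateTexts method documented above:
```ts
declare const model: MistralChatModel;
declare const options: FunctionCallOptions;

const instructionModel = model.withInstructionPrompt();

// Hypothetical InstructionPrompt shape: { system?: string; instruction: string }.
await instructionModel.doGenerateTexts(
  {
    system: "You are a concise assistant.",
    instruction: "Summarize the report in three bullet points.",
  },
  options
);
```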
withJsonOutput
▸ withJsonOutput(): this
When possible, limits output generation to the specified JSON schema, or supersets of it (e.g. JSON in general).
Returns
this
Implementation of
TextStreamingBaseModel.withJsonOutput
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:217
withPromptTemplate
▸ withPromptTemplate<INPUT_PROMPT>(promptTemplate): PromptTemplateTextStreamingModel<INPUT_PROMPT, ChatPrompt, MistralChatModelSettings, MistralChatModel>
Type parameters
Name |
---|
INPUT_PROMPT |
Parameters
Name | Type |
---|---|
promptTemplate | TextGenerationPromptTemplate<INPUT_PROMPT, ChatPrompt> |
Returns
PromptTemplateTextStreamingModel<INPUT_PROMPT, ChatPrompt, MistralChatModelSettings, MistralChatModel>
Implementation of
TextStreamingBaseModel.withPromptTemplate
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:221
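A sketch of a custom template; the TextGenerationPromptTemplate shape (a format function plus stopSequences) and the ChatPrompt message layout are assumptions inferred from the generics above:
```ts
declare const model: MistralChatModel;

// Hypothetical template mapping a string[] of user turns onto a ChatPrompt.
const turnsTemplate: TextGenerationPromptTemplate<string[], ChatPrompt> = {
  format: (turns) => ({
    messages: turns.map((content) => ({ role: "user" as const, content })),
  }),
  stopSequences: [], // no template-specific stop sequences
};

const turnsModel = model.withPromptTemplate(turnsTemplate);
```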
withSettings
▸ withSettings(additionalSettings): MistralChatModel
The withSettings method creates a new model with the same configuration as the original model, but with the specified settings changed.
Parameters
Name | Type |
---|---|
additionalSettings | Partial<MistralChatModelSettings> |
Returns
MistralChatModel
Example
```ts
const model = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 500,
});

const modelWithMoreTokens = model.withSettings({
  maxGenerationTokens: 1000,
});
```
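The same pattern applies to MistralChatModel directly; a sketch, assuming model and maxGenerationTokens are valid MistralChatModelSettings entries:
```ts
// withSettings returns a new model; `base` itself is left unchanged.
const base = new MistralChatModel({ model: "mistral-small" });
const tuned = base.withSettings({ maxGenerationTokens: 1000 });
```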
Implementation of
TextStreamingBaseModel.withSettings
Overrides
AbstractModel.withSettings
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:238
withTextPrompt
▸ withTextPrompt(): PromptTemplateTextStreamingModel<string, ChatPrompt, MistralChatModelSettings, MistralChatModel>
Returns this model with a text prompt template.
Returns
PromptTemplateTextStreamingModel<string, ChatPrompt, MistralChatModelSettings, MistralChatModel>
Implementation of
TextStreamingBaseModel.withTextPrompt
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:205
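Because withTextPrompt maps plain strings onto ChatPrompt, the wrapped model can be driven with a bare string. A sketch, assuming the wrapped model exposes the same doGenerateTexts method documented above:
```ts
declare const model: MistralChatModel;
declare const options: FunctionCallOptions;

const textModel = model.withTextPrompt();

// A plain string is the wrapped model's prompt type.
const { textGenerationResults } = await textModel.doGenerateTexts(
  "Write a haiku about the sea.",
  options
);
console.log(textGenerationResults[0]?.text);
```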
Properties
contextWindowSize
• Readonly contextWindowSize: undefined = undefined
Implementation of
TextStreamingBaseModel.contextWindowSize
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:88
countPromptTokens
• Readonly countPromptTokens: undefined = undefined
Optional. Implement if you have a tokenizer and want to count the number of tokens in a prompt.
Implementation of
TextStreamingBaseModel.countPromptTokens
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:90
provider
• Readonly provider: "mistral"
Overrides
AbstractModel.provider
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:83
settings
• Readonly settings: MistralChatModelSettings
Implementation of
TextStreamingBaseModel.settings
Inherited from
AbstractModel.settings
Defined in
packages/modelfusion/src/model-function/AbstractModel.ts:7
tokenizer
• Readonly tokenizer: undefined = undefined
Implementation of
TextStreamingBaseModel.tokenizer
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:89