Class: OllamaChatModel
Text generation model that uses the Ollama chat API.
Hierarchy
- AbstractModel<OllamaChatModelSettings>
  ↳ OllamaChatModel
Implements
- TextStreamingBaseModel<ChatPrompt, OllamaChatModelSettings>
Accessors
modelInformation
• get modelInformation(): ModelInformation
Returns
ModelInformation
Implementation of
TextStreamingBaseModel.modelInformation
Inherited from
AbstractModel.modelInformation
Defined in
packages/modelfusion/src/model-function/AbstractModel.ts:17
modelName
• get modelName(): string
Returns
string
Overrides
AbstractModel.modelName
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:61
settingsForEvent
• get settingsForEvent(): Partial<OllamaChatModelSettings>
Returns settings that should be recorded in observability events. Security-related settings (e.g. API keys) should not be included here.
Returns
Partial<OllamaChatModelSettings>
Implementation of
TextStreamingBaseModel.settingsForEvent
Overrides
AbstractModel.settingsForEvent
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:123
Constructors
constructor
• new OllamaChatModel(settings): OllamaChatModel
Parameters
Name | Type |
---|---|
settings | OllamaChatModelSettings |
Returns
OllamaChatModel
Overrides
AbstractModel<OllamaChatModelSettings>.constructor
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:56
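A minimal construction sketch. The model name "llama2" is an assumption; any chat-capable model pulled into the local Ollama instance works, and the tuning settings shown are optional:

```ts
import { OllamaChatModel } from "modelfusion";

// Minimal sketch: assumes "llama2" has been pulled into the local
// Ollama instance. temperature and maxGenerationTokens are optional.
const model = new OllamaChatModel({
  model: "llama2",
  temperature: 0.7,
  maxGenerationTokens: 500,
});
```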
Methods
asObjectGenerationModel
▸ asObjectGenerationModel<INPUT_PROMPT, OllamaChatPrompt>(promptTemplate): ObjectFromTextStreamingModel<INPUT_PROMPT, unknown, TextStreamingModel<unknown, TextGenerationModelSettings>> | ObjectFromTextStreamingModel<INPUT_PROMPT, OllamaChatPrompt, TextStreamingModel<OllamaChatPrompt, TextGenerationModelSettings>>
Type parameters
Name |
---|
INPUT_PROMPT |
OllamaChatPrompt |
Parameters
Name | Type |
---|---|
promptTemplate | ObjectFromTextPromptTemplate<INPUT_PROMPT, OllamaChatPrompt> | FlexibleObjectFromTextPromptTemplate<INPUT_PROMPT, unknown> |
Returns
ObjectFromTextStreamingModel<INPUT_PROMPT, unknown, TextStreamingModel<unknown, TextGenerationModelSettings>> | ObjectFromTextStreamingModel<INPUT_PROMPT, OllamaChatPrompt, TextStreamingModel<OllamaChatPrompt, TextGenerationModelSettings>>
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:212
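A hedged sketch of object generation, assuming the package's zodSchema and jsonObjectPrompt helpers together with the object-style generateObject call; the schema and prompt are illustrative:

```ts
import { generateObject, jsonObjectPrompt, zodSchema } from "modelfusion";
import { z } from "zod";

// `model` is the OllamaChatModel instance from the constructor example.
// jsonObjectPrompt.instruction() is assumed to be a compatible
// FlexibleObjectFromTextPromptTemplate.
const recipe = await generateObject({
  model: model
    .withJsonOutput()
    .asObjectGenerationModel(jsonObjectPrompt.instruction()),
  schema: zodSchema(
    z.object({ title: z.string(), steps: z.array(z.string()) })
  ),
  prompt: { instruction: "Suggest a simple pasta recipe." },
});
```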
asToolCallGenerationModel
▸ asToolCallGenerationModel<INPUT_PROMPT>(promptTemplate): TextGenerationToolCallModel<INPUT_PROMPT, ChatPrompt, OllamaChatModel>
Type parameters
Name |
---|
INPUT_PROMPT |
Parameters
Name | Type |
---|---|
promptTemplate | ToolCallPromptTemplate<INPUT_PROMPT, ChatPrompt> |
Returns
TextGenerationToolCallModel<INPUT_PROMPT, ChatPrompt, OllamaChatModel>
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:194
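A sketch of the tool-call wrapper. The template itself is left as a placeholder, since a concrete ToolCallPromptTemplate must both encode the tool definition into a ChatPrompt and parse the generated text back into a tool call:

```ts
import type { ChatPrompt, ToolCallPromptTemplate } from "modelfusion";

// Placeholder: a real implementation would instruct the model to emit
// a parseable tool call (e.g. as JSON) and extract it from the output.
declare const myToolCallTemplate: ToolCallPromptTemplate<string, ChatPrompt>;

// `model` is the OllamaChatModel instance from the constructor example.
const toolCallModel = model.asToolCallGenerationModel(myToolCallTemplate);
```

asToolCallsOrTextGenerationModel (below) follows the same pattern for templates that may yield either tool calls or plain text.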
asToolCallsOrTextGenerationModel
▸ asToolCallsOrTextGenerationModel<INPUT_PROMPT>(promptTemplate): TextGenerationToolCallsModel<INPUT_PROMPT, ChatPrompt, OllamaChatModel>
Type parameters
Name |
---|
INPUT_PROMPT |
Parameters
Name | Type |
---|---|
promptTemplate | ToolCallsPromptTemplate<INPUT_PROMPT, ChatPrompt> |
Returns
TextGenerationToolCallsModel<INPUT_PROMPT, ChatPrompt, OllamaChatModel>
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:203
callAPI
▸ callAPI<RESPONSE>(prompt, callOptions, options): Promise<RESPONSE>
Type parameters
Name |
---|
RESPONSE |
Parameters
Name | Type |
---|---|
prompt | ChatPrompt |
callOptions | FunctionCallOptions |
options | Object |
options.responseFormat | OllamaChatResponseFormatType<RESPONSE> |
Returns
Promise<RESPONSE>
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:69
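callAPI is the low-level entry point that doGenerateTexts and doStreamText delegate to; the responseFormat option selects between a parsed response and a delta stream. A hedged sketch, assuming an OllamaChatResponseFormat export that provides the json format:

```ts
import { OllamaChatResponseFormat } from "modelfusion";
import type { FunctionCallOptions } from "modelfusion";

// Assumption: OllamaChatResponseFormat.json selects the complete,
// parsed (done: true) response shape documented under doGenerateTexts.
// In normal use the framework supplies the FunctionCallOptions value.
const rawResponse = await model.callAPI(
  { messages: [{ role: "user", content: "Hello!" }] },
  { functionType: "generate-text" } as FunctionCallOptions,
  { responseFormat: OllamaChatResponseFormat.json }
);
```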
doGenerateTexts
▸ doGenerateTexts(prompt, options): Promise<{ rawResponse: { created_at: string; done: true; eval_count: number; eval_duration: number; load_duration?: number; message: { content: string; role: string }; model: string; prompt_eval_count?: number; prompt_eval_duration?: number; total_duration: number }; textGenerationResults: { finishReason: "unknown"; text: string = rawResponse.message.content }[] }>
Parameters
Name | Type |
---|---|
prompt | ChatPrompt |
options | FunctionCallOptions |
Returns
Promise<{ rawResponse: { created_at: string; done: true; eval_count: number; eval_duration: number; load_duration?: number; message: { content: string; role: string }; model: string; prompt_eval_count?: number; prompt_eval_duration?: number; total_duration: number }; textGenerationResults: { finishReason: "unknown"; text: string = rawResponse.message.content }[] }>
Implementation of
TextStreamingBaseModel.doGenerateTexts
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:151
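doGenerateTexts is normally reached indirectly through the generateText function; a minimal sketch, assuming the object-style call:

```ts
import { generateText } from "modelfusion";

// `model` is the OllamaChatModel instance from the constructor example;
// withTextPrompt() lets a plain string stand in for a ChatPrompt.
const text = await generateText({
  model: model.withTextPrompt(),
  prompt: "Why is the sky blue?",
});
```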
doStreamText
▸ doStreamText(prompt, options): Promise<AsyncIterable<Delta<{ created_at: string; done: false; message: { content: string; role: string }; model: string } | { created_at: string; done: true; eval_count: number; eval_duration: number; load_duration?: number; model: string; prompt_eval_count?: number; prompt_eval_duration?: number; total_duration: number }>>>
Parameters
Name | Type |
---|---|
prompt | ChatPrompt |
options | FunctionCallOptions |
Returns
Promise<AsyncIterable<Delta<{ created_at: string; done: false; message: { content: string; role: string }; model: string } | { created_at: string; done: true; eval_count: number; eval_duration: number; load_duration?: number; model: string; prompt_eval_count?: number; prompt_eval_duration?: number; total_duration: number }>>>
Implementation of
TextStreamingBaseModel.doStreamText
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:183
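doStreamText backs the streamText function; each done: false chunk contributes its message.content as a text delta (see extractTextDelta below). A minimal sketch, assuming the object-style call:

```ts
import { streamText } from "modelfusion";

// `model` is the OllamaChatModel instance from the constructor example.
const textStream = await streamText({
  model: model.withTextPrompt(),
  prompt: "Write a haiku about local inference.",
});

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}
```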
extractTextDelta
▸ extractTextDelta(delta): undefined | string
Parameters
Name | Type |
---|---|
delta | unknown |
Returns
undefined | string
Implementation of
TextStreamingBaseModel.extractTextDelta
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:189
restoreGeneratedTexts
▸ restoreGeneratedTexts(rawResponse): Object
Parameters
Name | Type |
---|---|
rawResponse | unknown |
Returns
Object
Name | Type |
---|---|
rawResponse | { created_at: string; done: true; eval_count: number; eval_duration: number; load_duration?: number; message: { content: string; role: string }; model: string; prompt_eval_count?: number; prompt_eval_duration?: number; total_duration: number } |
rawResponse.created_at | string |
rawResponse.done | true |
rawResponse.eval_count | number |
rawResponse.eval_duration | number |
rawResponse.load_duration? | number |
rawResponse.message | { content: string; role: string } |
rawResponse.message.content | string |
rawResponse.message.role | string |
rawResponse.model | string |
rawResponse.prompt_eval_count? | number |
rawResponse.prompt_eval_duration? | number |
rawResponse.total_duration | number |
textGenerationResults | { finishReason: "unknown"; text: string = rawResponse.message.content }[] |
Implementation of
TextStreamingBaseModel.restoreGeneratedTexts
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:162
withChatPrompt
▸ withChatPrompt(): PromptTemplateTextStreamingModel<ChatPrompt, ChatPrompt, OllamaChatModelSettings, OllamaChatModel>
Returns this model with a chat prompt template.
Returns
PromptTemplateTextStreamingModel<ChatPrompt, ChatPrompt, OllamaChatModelSettings, OllamaChatModel>
Implementation of
TextStreamingBaseModel.withChatPrompt
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:236
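A sketch of the ChatPrompt shape the resulting model accepts; the system text and messages are illustrative:

```ts
import { generateText } from "modelfusion";

// `model` is the OllamaChatModel instance from the constructor example.
const reply = await generateText({
  model: model.withChatPrompt(),
  prompt: {
    system: "You are a concise assistant.",
    messages: [{ role: "user", content: "What is Ollama?" }],
  },
});
```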
withInstructionPrompt
▸ withInstructionPrompt(): PromptTemplateTextStreamingModel<InstructionPrompt, ChatPrompt, OllamaChatModelSettings, OllamaChatModel>
Returns this model with an instruction prompt template.
Returns
PromptTemplateTextStreamingModel<InstructionPrompt, ChatPrompt, OllamaChatModelSettings, OllamaChatModel>
Implementation of
TextStreamingBaseModel.withInstructionPrompt
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:232
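A sketch of the InstructionPrompt shape; the fields are illustrative:

```ts
import { generateText } from "modelfusion";

// `model` is the OllamaChatModel instance from the constructor example.
const translation = await generateText({
  model: model.withInstructionPrompt(),
  prompt: {
    system: "You translate English to French.",
    instruction: "Translate: 'good morning'",
  },
});
```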
withJsonOutput
▸ withJsonOutput(): OllamaChatModel
When possible, limits output generation to the specified JSON schema, or supersets of it (e.g. JSON in general).
Returns
OllamaChatModel
Implementation of
TextStreamingBaseModel.withJsonOutput
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:259
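A short sketch; for Ollama this presumably maps to the API's format: "json" option, i.e. JSON output in general rather than schema-level enforcement:

```ts
// Returns a copy of the model configured for JSON output. Typically
// combined with asObjectGenerationModel (above) for typed results.
const jsonModel = model.withJsonOutput();
```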
withPromptTemplate
▸ withPromptTemplate<INPUT_PROMPT>(promptTemplate): PromptTemplateTextStreamingModel<INPUT_PROMPT, ChatPrompt, OllamaChatModelSettings, OllamaChatModel>
Type parameters
Name |
---|
INPUT_PROMPT |
Parameters
Name | Type |
---|---|
promptTemplate | TextGenerationPromptTemplate<INPUT_PROMPT, ChatPrompt> |
Returns
PromptTemplateTextStreamingModel<INPUT_PROMPT, ChatPrompt, OllamaChatModelSettings, OllamaChatModel>
Implementation of
TextStreamingBaseModel.withPromptTemplate
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:240
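A sketch of a custom template, assuming TextGenerationPromptTemplate consists of a format function plus stopSequences (as in the built-in templates); the template body is illustrative:

```ts
import type { ChatPrompt, TextGenerationPromptTemplate } from "modelfusion";

// Hypothetical template: wraps a topic string into a single user message.
const topicTemplate: TextGenerationPromptTemplate<string, ChatPrompt> = {
  format: (topic) => ({
    messages: [
      { role: "user", content: `Write one sentence about ${topic}.` },
    ],
  }),
  stopSequences: [],
};

// `model` is the OllamaChatModel instance from the constructor example.
const topicModel = model.withPromptTemplate(topicTemplate);
```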
withSettings
▸ withSettings(additionalSettings): OllamaChatModel
The withSettings method creates a new model with the same configuration as the original model, but with the specified settings changed.
Parameters
Name | Type |
---|---|
additionalSettings | Partial<OllamaChatModelSettings> |
Returns
OllamaChatModel
Example
const model = new OllamaChatModel({
  model: "llama2",
  maxGenerationTokens: 500,
});

const modelWithMoreTokens = model.withSettings({
  maxGenerationTokens: 1000,
});
Implementation of
TextStreamingBaseModel.withSettings
Overrides
AbstractModel.withSettings
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:263
withTextPrompt
▸ withTextPrompt(): PromptTemplateTextStreamingModel<string, ChatPrompt, OllamaChatModelSettings, OllamaChatModel>
Returns this model with a text prompt template.
Returns
PromptTemplateTextStreamingModel<string, ChatPrompt, OllamaChatModelSettings, OllamaChatModel>
Implementation of
TextStreamingBaseModel.withTextPrompt
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:228
Properties
contextWindowSize
• Readonly contextWindowSize: undefined = undefined
Implementation of
TextStreamingBaseModel.contextWindowSize
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:67
countPromptTokens
• Readonly countPromptTokens: undefined = undefined
Optional. Implement if you have a tokenizer and want to count the number of tokens in a prompt.
Implementation of
TextStreamingBaseModel.countPromptTokens
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:66
provider
• Readonly provider: "ollama"
Overrides
AbstractModel.provider
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:60
settings
• Readonly settings: OllamaChatModelSettings
Implementation of
TextStreamingBaseModel.settings
Inherited from
AbstractModel.settings
Defined in
packages/modelfusion/src/model-function/AbstractModel.ts:7
tokenizer
• Readonly tokenizer: undefined = undefined
Implementation of
TextStreamingBaseModel.tokenizer
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:65