Class: OpenAICompatibleChatModel
Create a text generation model that calls an API that is compatible with OpenAI's chat API.
Please note that many providers implement the API with slight differences, which can cause unexpected errors and different behavior in less common scenarios.
See https://platform.openai.com/docs/api-reference/chat/create
Hierarchy
- AbstractOpenAIChatModel<OpenAICompatibleChatSettings>
  ↳ OpenAICompatibleChatModel
Implements
- TextStreamingBaseModel<ChatPrompt, OpenAICompatibleChatSettings>
- ToolCallGenerationModel<ChatPrompt, OpenAICompatibleChatSettings>
- ToolCallsGenerationModel<ChatPrompt, OpenAICompatibleChatSettings>
Accessors
modelInformation
• get modelInformation(): ModelInformation

Returns
ModelInformation
Implementation of
ToolCallsGenerationModel.modelInformation
Inherited from
AbstractOpenAIChatModel.modelInformation
Defined in
packages/modelfusion/src/model-function/AbstractModel.ts:17
modelName
• get modelName(): string

Returns
string
Overrides
AbstractOpenAIChatModel.modelName
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:57
provider
• get provider(): OpenAICompatibleProviderName

Returns
OpenAICompatibleProviderName
Overrides
AbstractOpenAIChatModel.provider
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:51
settingsForEvent
• get settingsForEvent(): Partial<OpenAICompatibleChatSettings>

Returns settings that should be recorded in observability events. Security-related settings (e.g. API keys) should not be included here.

Returns
Partial<OpenAICompatibleChatSettings>
Implementation of
TextStreamingBaseModel.settingsForEvent
Overrides
AbstractOpenAIChatModel.settingsForEvent
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:65
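As an illustration, the rule above can be sketched as a standalone filter. This is a hypothetical example with an invented settings shape, not the library's actual implementation:

```typescript
// Hypothetical settings shape; the real OpenAICompatibleChatSettings has more fields.
interface ExampleChatSettings {
  model: string;
  temperature?: number;
  maxGenerationTokens?: number;
  apiKey?: string; // security-related: must not appear in observability events
}

// Return only the settings that are safe to record in observability events.
function settingsForEvent(settings: ExampleChatSettings): Partial<ExampleChatSettings> {
  const { apiKey, ...safe } = settings;
  return safe;
}

const event = settingsForEvent({
  model: "llama-3-8b-instruct",
  temperature: 0.7,
  apiKey: "sk-secret",
});
// event contains model and temperature, but no apiKey
```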
Constructors
constructor
• new OpenAICompatibleChatModel(settings): OpenAICompatibleChatModel

Parameters
Name | Type |
---|---|
settings | OpenAICompatibleChatSettings |

Returns
OpenAICompatibleChatModel
Overrides
AbstractOpenAIChatModel.constructor
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:47
Methods
asObjectGenerationModel
▸ asObjectGenerationModel<INPUT_PROMPT, OpenAIChatPrompt>(promptTemplate)

Type parameters
Name |
---|
INPUT_PROMPT |
OpenAIChatPrompt |

Parameters
Name | Type |
---|---|
promptTemplate | ObjectFromTextPromptTemplate<INPUT_PROMPT, OpenAIChatPrompt> \| FlexibleObjectFromTextPromptTemplate<INPUT_PROMPT, unknown> |

Returns
ObjectFromTextStreamingModel<INPUT_PROMPT, unknown, TextStreamingModel<unknown, TextGenerationModelSettings>> | ObjectFromTextStreamingModel<INPUT_PROMPT, OpenAIChatPrompt, TextStreamingModel<OpenAIChatPrompt, TextGenerationModelSettings>>
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:87
callAPI
▸ callAPI<RESULT>(messages, callOptions, options): Promise<RESULT>

Type parameters
Name |
---|
RESULT |

Parameters
Name | Type |
---|---|
messages | ChatPrompt |
callOptions | FunctionCallOptions |
options | Object |
options.functionCall? | "auto" \| { name: string } \| "none" |
options.functions? | { description?: string; name: string; parameters: unknown }[] |
options.responseFormat | OpenAIChatResponseFormatType<RESULT> |
options.toolChoice? | "auto" \| "none" \| { function: { name: string }; type: "function" } |
options.tools? | { function: { description?: string; name: string; parameters: unknown }; type: "function" }[] |

Returns
Promise<RESULT>
Inherited from
AbstractOpenAIChatModel.callAPI
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:112
doGenerateTexts
▸ doGenerateTexts(prompt, options)

Parameters
Name | Type |
---|---|
prompt | ChatPrompt |
options | FunctionCallOptions |

Returns
Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter"; index?: number; logprobs?: any; message: { content: null | string; function_call?: { arguments: string; name: string }; role: "assistant"; tool_calls?: { function: { arguments: string; name: string }; id: string; type: "function" }[] } }[]; created: number; id: string; model: string; object: "chat.completion"; system_fingerprint?: null | string; usage: { completion_tokens: number; prompt_tokens: number; total_tokens: number } }; textGenerationResults: { finishReason: TextGenerationFinishReason; text: string }[]; usage: { completionTokens: number; promptTokens: number; totalTokens: number } }>
Implementation of
TextStreamingBaseModel.doGenerateTexts
Inherited from
AbstractOpenAIChatModel.doGenerateTexts
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:188
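The raw `finish_reason` values shown above are normalized into ModelFusion's `TextGenerationFinishReason`. A minimal sketch of such a normalization follows; the target value names are assumptions for illustration, not copied from the library:

```typescript
// Raw finish reasons, as in the response type documented above.
type RawFinishReason =
  | null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter";

// Assumed target values; the actual TextGenerationFinishReason union may differ.
type FinishReason = "stop" | "length" | "content-filter" | "tool-calls" | "unknown";

function mapFinishReason(raw: RawFinishReason | undefined): FinishReason {
  switch (raw) {
    case "stop":
      return "stop";
    case "length":
      return "length";
    case "content_filter":
      return "content-filter";
    case "function_call":
    case "tool_calls":
      return "tool-calls";
    default:
      return "unknown"; // null / undefined / anything unexpected
  }
}
```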
doGenerateToolCall
▸ doGenerateToolCall(tool, prompt, options)

Parameters
Name | Type |
---|---|
tool | ToolDefinition<string, unknown> |
prompt | ChatPrompt |
options | FunctionCallOptions |

Returns
Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter"; index?: number; logprobs?: any; message: { content: null | string; function_call?: { arguments: string; name: string }; role: "assistant"; tool_calls?: { function: { arguments: string; name: string }; id: string; type: "function" }[] } }[]; created: number; id: string; model: string; object: "chat.completion"; system_fingerprint?: null | string; usage: { completion_tokens: number; prompt_tokens: number; total_tokens: number } }; toolCall: null | { args: unknown; id: string }; usage: { completionTokens: number; promptTokens: number; totalTokens: number } }>
Implementation of
ToolCallGenerationModel.doGenerateToolCall
Inherited from
AbstractOpenAIChatModel.doGenerateToolCall
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:264
doGenerateToolCalls
▸ doGenerateToolCalls(tools, prompt, options)

Parameters
Name | Type |
---|---|
tools | ToolDefinition<string, unknown>[] |
prompt | ChatPrompt |
options | FunctionCallOptions |

Returns
Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter"; index?: number; logprobs?: any; message: { content: null | string; function_call?: { arguments: string; name: string }; role: "assistant"; tool_calls?: { function: { arguments: string; name: string }; id: string; type: "function" }[] } }[]; created: number; id: string; model: string; object: "chat.completion"; system_fingerprint?: null | string; usage: { completion_tokens: number; prompt_tokens: number; total_tokens: number } }; text: null | string; toolCalls: null | { args: unknown; id: string; name: string }[]; usage: { completionTokens: number; promptTokens: number; totalTokens: number } }>
Implementation of
ToolCallsGenerationModel.doGenerateToolCalls
Inherited from
AbstractOpenAIChatModel.doGenerateToolCalls
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:302
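The `toolCalls` result above is derived from the raw `tool_calls` array by parsing each JSON-encoded `arguments` string. A standalone sketch of that mapping (the shapes mirror the documented response type; this is an illustration, not the library's code):

```typescript
// Relevant fragment of a raw chat.completion response (see the Returns type above).
interface RawToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
}

// Map raw tool calls to the { args, id, name } shape that doGenerateToolCalls returns.
function extractToolCalls(
  toolCalls: RawToolCall[] | undefined
): { args: unknown; id: string; name: string }[] | null {
  if (!toolCalls) return null;
  return toolCalls.map((toolCall) => ({
    id: toolCall.id,
    name: toolCall.function.name,
    args: JSON.parse(toolCall.function.arguments) as unknown,
  }));
}

const calls = extractToolCalls([
  {
    id: "call_1",
    type: "function",
    function: { name: "getWeather", arguments: '{"city":"Berlin"}' },
  },
]);
// calls: [{ id: "call_1", name: "getWeather", args: { city: "Berlin" } }]
```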
doStreamText
▸ doStreamText(prompt, options)

Parameters
Name | Type |
---|---|
prompt | ChatPrompt |
options | FunctionCallOptions |

Returns
Promise<AsyncIterable<Delta<{ choices: { delta: { content?: null | string; function_call?: { arguments?: string; name?: string }; role?: "user" | "assistant"; tool_calls?: { function: { arguments: string; name: string }; id: string; type: "function" }[] }; finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter"; index: number }[]; created: number; id: string; model?: string; object: string; system_fingerprint?: null | string }>>>
Implementation of
TextStreamingBaseModel.doStreamText
Inherited from
AbstractOpenAIChatModel.doStreamText
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:237
extractTextDelta
▸ extractTextDelta(delta): undefined | string

Parameters
Name | Type |
---|---|
delta | unknown |

Returns
undefined | string
Implementation of
TextStreamingBaseModel.extractTextDelta
Inherited from
AbstractOpenAIChatModel.extractTextDelta
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:243
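Conceptually, `extractTextDelta` pulls the incremental text out of one streamed chunk, returning `undefined` when the chunk carries no text. A hypothetical, self-contained sketch using a simplified chunk shape:

```typescript
// Simplified chunk shape mirroring the streaming Delta type documented above.
interface StreamChunk {
  choices: { delta: { content?: null | string } }[];
}

// Return the text delta of a chunk, or undefined when there is none.
function extractTextDelta(chunk: StreamChunk): string | undefined {
  return chunk.choices[0]?.delta.content ?? undefined;
}

// Accumulate the full text from a sequence of chunks.
const chunks: StreamChunk[] = [
  { choices: [{ delta: { content: "Hel" } }] },
  { choices: [{ delta: {} }] }, // e.g. a tool-call or role-only chunk
  { choices: [{ delta: { content: "lo" } }] },
];
const text = chunks
  .map(extractTextDelta)
  .filter((d): d is string => d !== undefined)
  .join("");
// text === "Hello"
```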
extractUsage
▸ extractUsage(response): Object

Parameters
Name | Type |
---|---|
response | Object |
response.choices | { finish_reason?: null \| "length" \| "stop" \| "function_call" \| "tool_calls" \| "content_filter"; index?: number; logprobs?: any; message: { content: null \| string; function_call?: { arguments: string; name: string }; role: "assistant"; tool_calls?: { function: { arguments: string; name: string }; id: string; type: "function" }[] } }[] |
response.created | number |
response.id | string |
response.model | string |
response.object | "chat.completion" |
response.system_fingerprint? | null \| string |
response.usage | Object |
response.usage.completion_tokens | number |
response.usage.prompt_tokens | number |
response.usage.total_tokens | number |

Returns
Object
Name | Type |
---|---|
completionTokens | number |
promptTokens | number |
totalTokens | number |
Inherited from
AbstractOpenAIChatModel.extractUsage
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:335
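As the parameter and return tables above show, `extractUsage` renames the API's snake_case usage fields to camelCase. A standalone sketch of that mapping (an illustration, not the library's code):

```typescript
// Usage block as it appears in a raw chat.completion response.
interface RawUsage {
  completion_tokens: number;
  prompt_tokens: number;
  total_tokens: number;
}

// Rename the snake_case usage fields to the camelCase shape extractUsage returns.
function extractUsage(response: { usage: RawUsage }) {
  return {
    promptTokens: response.usage.prompt_tokens,
    completionTokens: response.usage.completion_tokens,
    totalTokens: response.usage.total_tokens,
  };
}

const usage = extractUsage({
  usage: { completion_tokens: 20, prompt_tokens: 15, total_tokens: 35 },
});
// usage: { promptTokens: 15, completionTokens: 20, totalTokens: 35 }
```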
processTextGenerationResponse
▸ processTextGenerationResponse(rawResponse): Object

Parameters
Name | Type |
---|---|
rawResponse | Object |
rawResponse.choices | { finish_reason?: null \| "length" \| "stop" \| "function_call" \| "tool_calls" \| "content_filter"; index?: number; logprobs?: any; message: { content: null \| string; function_call?: { arguments: string; name: string }; role: "assistant"; tool_calls?: { function: { arguments: string; name: string }; id: string; type: "function" }[] } }[] |
rawResponse.created | number |
rawResponse.id | string |
rawResponse.model | string |
rawResponse.object | "chat.completion" |
rawResponse.system_fingerprint? | null \| string |
rawResponse.usage | Object |
rawResponse.usage.completion_tokens | number |
rawResponse.usage.prompt_tokens | number |
rawResponse.usage.total_tokens | number |

Returns
Object
Name | Type |
---|---|
rawResponse | { choices: { finish_reason?: null \| "length" \| "stop" \| "function_call" \| "tool_calls" \| "content_filter"; index?: number; logprobs?: any; message: { content: null \| string; function_call?: { arguments: string; name: string }; role: "assistant"; tool_calls?: { function: { arguments: string; name: string }; id: string; type: "function" }[] } }[]; created: number; id: string; model: string; object: "chat.completion"; system_fingerprint?: null \| string; usage: { completion_tokens: number; prompt_tokens: number; total_tokens: number } } |
textGenerationResults | { finishReason: TextGenerationFinishReason; text: string }[] |
usage | { completionTokens: number; promptTokens: number; totalTokens: number } |
Inherited from
AbstractOpenAIChatModel.processTextGenerationResponse
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:208
restoreGeneratedTexts
▸ restoreGeneratedTexts(rawResponse): Object

Parameters
Name | Type |
---|---|
rawResponse | unknown |

Returns
Object
Name | Type |
---|---|
rawResponse | { choices: { finish_reason?: null \| "length" \| "stop" \| "function_call" \| "tool_calls" \| "content_filter"; index?: number; logprobs?: any; message: { content: null \| string; function_call?: { arguments: string; name: string }; role: "assistant"; tool_calls?: { function: { arguments: string; name: string }; id: string; type: "function" }[] } }[]; created: number; id: string; model: string; object: "chat.completion"; system_fingerprint?: null \| string; usage: { completion_tokens: number; prompt_tokens: number; total_tokens: number } } |
textGenerationResults | { finishReason: TextGenerationFinishReason; text: string }[] |
usage | { completionTokens: number; promptTokens: number; totalTokens: number } |
Implementation of
TextStreamingBaseModel.restoreGeneratedTexts
Inherited from
AbstractOpenAIChatModel.restoreGeneratedTexts
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:199
withChatPrompt
▸ withChatPrompt(): PromptTemplateFullTextModel<ChatPrompt, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>

Returns this model with a chat prompt template.

Returns
PromptTemplateFullTextModel<ChatPrompt, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>
Implementation of
TextStreamingBaseModel.withChatPrompt
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:111
withInstructionPrompt
▸ withInstructionPrompt(): PromptTemplateFullTextModel<InstructionPrompt, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>

Returns this model with an instruction prompt template.

Returns
PromptTemplateFullTextModel<InstructionPrompt, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>
Implementation of
TextStreamingBaseModel.withInstructionPrompt
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:107
withJsonOutput
▸ withJsonOutput(): OpenAICompatibleChatModel

When possible, limit the output generation to the specified JSON schema, or supersets of it (e.g. JSON in general).

Returns
OpenAICompatibleChatModel
Implementation of
TextStreamingBaseModel.withJsonOutput
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:134
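OpenAI-compatible APIs typically enable JSON-constrained output through the `response_format` request field. A hedged sketch of what a JSON-output setting plausibly adds to the request body; the actual mechanism may differ per provider, and the request-body shape here is simplified:

```typescript
// Simplified chat request body; real requests carry more fields.
interface ChatRequestBody {
  model: string;
  messages: { role: string; content: string }[];
  response_format?: { type: "json_object" };
}

// Ask the API to constrain generation to valid JSON.
function withJsonOutput(body: ChatRequestBody): ChatRequestBody {
  return { ...body, response_format: { type: "json_object" } };
}

const jsonBody = withJsonOutput({
  model: "example-model",
  messages: [{ role: "user", content: "List three colors as JSON." }],
});
// jsonBody.response_format: { type: "json_object" }
```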
withPromptTemplate
▸ withPromptTemplate<INPUT_PROMPT>(promptTemplate): PromptTemplateFullTextModel<INPUT_PROMPT, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>

Type parameters
Name |
---|
INPUT_PROMPT |

Parameters
Name | Type |
---|---|
promptTemplate | TextGenerationPromptTemplate<INPUT_PROMPT, ChatPrompt> |

Returns
PromptTemplateFullTextModel<INPUT_PROMPT, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>
Implementation of
TextStreamingBaseModel.withPromptTemplate
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:115
withSettings
▸ withSettings(additionalSettings): OpenAICompatibleChatModel

The withSettings method creates a new model with the same configuration as the original model, but with the specified settings changed.

Parameters
Name | Type |
---|---|
additionalSettings | Partial<OpenAICompatibleChatSettings> |

Returns
OpenAICompatibleChatModel

Example

const model = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 500,
});

const modelWithMoreTokens = model.withSettings({
  maxGenerationTokens: 1000,
});
Implementation of
ToolCallsGenerationModel.withSettings
Overrides
AbstractOpenAIChatModel.withSettings
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:138
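The semantics described above — a new instance, the original unchanged, only the given settings overridden — can be sketched with a simplified stand-in class (an illustration, not the library's implementation):

```typescript
interface ExampleSettings {
  model: string;
  maxGenerationTokens?: number;
}

// Simplified model with withSettings semantics: returns a new instance,
// merging the additional settings over the existing ones.
class ExampleModel {
  constructor(readonly settings: ExampleSettings) {}

  withSettings(additionalSettings: Partial<ExampleSettings>): ExampleModel {
    return new ExampleModel({ ...this.settings, ...additionalSettings });
  }
}

const model = new ExampleModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 500,
});
const modelWithMoreTokens = model.withSettings({ maxGenerationTokens: 1000 });
// model.settings.maxGenerationTokens is still 500;
// modelWithMoreTokens.settings.maxGenerationTokens is 1000.
```

The spread-merge keeps unspecified settings (here, `model`) intact on the new instance.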
withTextPrompt
▸ withTextPrompt(): PromptTemplateFullTextModel<string, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>

Returns this model with a text prompt template.

Returns
PromptTemplateFullTextModel<string, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>
Implementation of
TextStreamingBaseModel.withTextPrompt
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:103
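Conceptually, a text prompt template wraps a plain string into a chat prompt. A hypothetical sketch with a simplified prompt shape (the real ChatPrompt type may carry more structure):

```typescript
// Simplified chat prompt shape for illustration.
interface SimpleChatPrompt {
  system?: string;
  messages: { role: "user" | "assistant"; content: string }[];
}

// Map a plain text prompt to a single-user-message chat prompt.
function textToChatPrompt(text: string): SimpleChatPrompt {
  return { messages: [{ role: "user", content: text }] };
}

const prompt = textToChatPrompt("Write a haiku about the sea.");
// prompt.messages: [{ role: "user", content: "Write a haiku about the sea." }]
```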
Properties
contextWindowSize
• Readonly contextWindowSize: undefined = undefined
Implementation of
TextStreamingBaseModel.contextWindowSize
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:61
countPromptTokens
• Readonly countPromptTokens: undefined = undefined
Optional. Implement if you have a tokenizer and want to count the number of tokens in a prompt.
Implementation of
TextStreamingBaseModel.countPromptTokens
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:63
settings
• Readonly settings: OpenAICompatibleChatSettings
Implementation of
ToolCallsGenerationModel.settings
Inherited from
AbstractOpenAIChatModel.settings
Defined in
packages/modelfusion/src/model-function/AbstractModel.ts:7
tokenizer
• Readonly tokenizer: undefined = undefined
Implementation of
TextStreamingBaseModel.tokenizer
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:62