Class: OpenAICompatibleChatModel

Create a text generation model that calls an API that is compatible with OpenAI's chat API.

Please note that many providers implement the API with slight differences, which can cause unexpected errors and different behavior in less common scenarios.

See

https://platform.openai.com/docs/api-reference/chat/create
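
For orientation, here is a minimal construction sketch (not part of the generated reference). It assumes that OpenAICompatibleChatSettings accepts api, provider, and model fields and that modelfusion exports BaseUrlApiConfiguration; the endpoint, provider name, and model shown are purely illustrative:

```ts
import {
  BaseUrlApiConfiguration,
  OpenAICompatibleChatModel,
} from "modelfusion";

// Point the model at any endpoint that speaks the OpenAI chat API.
const model = new OpenAICompatibleChatModel({
  api: new BaseUrlApiConfiguration({
    baseUrl: "https://api.fireworks.ai/inference/v1", // illustrative endpoint
    headers: { Authorization: `Bearer ${process.env.FIREWORKS_API_KEY}` },
  }),
  provider: "openaicompatible-fireworksai", // assumed provider name
  model: "accounts/fireworks/models/mixtral-8x7b-instruct",
  maxGenerationTokens: 500,
});
```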

Hierarchy

AbstractOpenAIChatModel
  ↳ OpenAICompatibleChatModel

Implements

TextStreamingBaseModel
ToolCallGenerationModel
ToolCallsGenerationModel

Accessors

modelInformation

get modelInformation(): ModelInformation

Returns

ModelInformation

Implementation of

ToolCallsGenerationModel.modelInformation

Inherited from

AbstractOpenAIChatModel.modelInformation

Defined in

packages/modelfusion/src/model-function/AbstractModel.ts:17


modelName

get modelName(): string

Returns

string

Overrides

AbstractOpenAIChatModel.modelName

Defined in

packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:57


provider

get provider(): OpenAICompatibleProviderName

Returns

OpenAICompatibleProviderName

Overrides

AbstractOpenAIChatModel.provider

Defined in

packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:51


settingsForEvent

get settingsForEvent(): Partial<OpenAICompatibleChatSettings>

Returns settings that should be recorded in observability events. Security-related settings (e.g. API keys) should not be included here.

Returns

Partial<OpenAICompatibleChatSettings>

Implementation of

TextStreamingBaseModel.settingsForEvent

Overrides

AbstractOpenAIChatModel.settingsForEvent

Defined in

packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:65

Constructors

constructor

new OpenAICompatibleChatModel(settings): OpenAICompatibleChatModel

Parameters

settings: OpenAICompatibleChatSettings

Returns

OpenAICompatibleChatModel

Overrides

AbstractOpenAIChatModel.constructor

Defined in

packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:47

Methods

asObjectGenerationModel

asObjectGenerationModel<INPUT_PROMPT, OpenAIChatPrompt>(promptTemplate): ObjectFromTextStreamingModel<INPUT_PROMPT, unknown, TextStreamingModel<unknown, TextGenerationModelSettings>> | ObjectFromTextStreamingModel<INPUT_PROMPT, OpenAIChatPrompt, TextStreamingModel<OpenAIChatPrompt, TextGenerationModelSettings>>

Type parameters

INPUT_PROMPT
OpenAIChatPrompt

Parameters

promptTemplate: ObjectFromTextPromptTemplate<INPUT_PROMPT, OpenAIChatPrompt> | FlexibleObjectFromTextPromptTemplate<INPUT_PROMPT, unknown>

Returns

ObjectFromTextStreamingModel<INPUT_PROMPT, unknown, TextStreamingModel<unknown, TextGenerationModelSettings>> | ObjectFromTextStreamingModel<INPUT_PROMPT, OpenAIChatPrompt, TextStreamingModel<OpenAIChatPrompt, TextGenerationModelSettings>>

Defined in

packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:87
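
A hedged sketch of how this conversion is typically used, assuming modelfusion exports generateObject, jsonObjectPrompt, and zodSchema helpers (model is an OpenAICompatibleChatModel instance):

```ts
import { generateObject, jsonObjectPrompt, zodSchema } from "modelfusion";
import { z } from "zod";

// Wrap the chat model so its text output is parsed into a typed object.
const result = await generateObject({
  model: model.asObjectGenerationModel(jsonObjectPrompt.instruction()),
  schema: zodSchema(
    z.object({ sentiment: z.enum(["positive", "neutral", "negative"]) })
  ),
  prompt: {
    system: "Classify the sentiment of the user's text.",
    instruction: "The service was slow, but the food was great.",
  },
});
```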


callAPI

callAPI<RESULT>(messages, callOptions, options): Promise<RESULT>

Type parameters

RESULT

Parameters

messages: ChatPrompt
callOptions: FunctionCallOptions
options: Object
options.functionCall?: "auto" | { name: string } | "none"
options.functions?: { description?: string ; name: string ; parameters: unknown }[]
options.responseFormat: OpenAIChatResponseFormatType<RESULT>
options.toolChoice?: "auto" | "none" | { function: { name: string } ; type: "function" }
options.tools?: { function: { description?: string ; name: string ; parameters: unknown } ; type: "function" }[]

Returns

Promise<RESULT>

Inherited from

AbstractOpenAIChatModel.callAPI

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:112


doGenerateTexts

doGenerateTexts(prompt, options): Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } } ; textGenerationResults: { finishReason: TextGenerationFinishReason ; text: string }[] ; usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens } }>

Parameters

prompt: ChatPrompt
options: FunctionCallOptions

Returns

Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } } ; textGenerationResults: { finishReason: TextGenerationFinishReason ; text: string }[] ; usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens } }>

Implementation of

TextStreamingBaseModel.doGenerateTexts

Inherited from

AbstractOpenAIChatModel.doGenerateTexts

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:188


doGenerateToolCall

doGenerateToolCall(tool, prompt, options): Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } } ; toolCall: null | { args: unknown ; id: string } ; usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens } }>

Parameters

tool: ToolDefinition<string, unknown>
prompt: ChatPrompt
options: FunctionCallOptions

Returns

Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } } ; toolCall: null | { args: unknown ; id: string } ; usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens } }>

Implementation of

ToolCallGenerationModel.doGenerateToolCall

Inherited from

AbstractOpenAIChatModel.doGenerateToolCall

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:264
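
In practice this low-level method is usually invoked through a helper rather than called directly. A sketch under the assumption that modelfusion exports generateToolCall and zodSchema, and that ChatPrompt has a { system?, messages } shape:

```ts
import { generateToolCall, zodSchema } from "modelfusion";
import { z } from "zod";

// Ask the model to produce validated arguments for a single tool.
const toolCall = await generateToolCall({
  model,
  tool: {
    name: "getWeather",
    description: "Get the current weather for a city.",
    parameters: zodSchema(z.object({ city: z.string() })),
  },
  prompt: {
    messages: [{ role: "user", content: "What's the weather in Berlin?" }],
  },
});
```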


doGenerateToolCalls

doGenerateToolCalls(tools, prompt, options): Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } } ; text: null | string ; toolCalls: null | { args: unknown ; id: string = toolCall.id; name: string = toolCall.function.name }[] ; usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens } }>

Parameters

tools: ToolDefinition<string, unknown>[]
prompt: ChatPrompt
options: FunctionCallOptions

Returns

Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } } ; text: null | string ; toolCalls: null | { args: unknown ; id: string = toolCall.id; name: string = toolCall.function.name }[] ; usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens } }>

Implementation of

ToolCallsGenerationModel.doGenerateToolCalls

Inherited from

AbstractOpenAIChatModel.doGenerateToolCalls

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:302


doStreamText

doStreamText(prompt, options): Promise<AsyncIterable<Delta<{ choices: { delta: { content?: null | string ; function_call?: { arguments?: string ; name?: string } ; role?: "user" | "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } ; finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index: number }[] ; created: number ; id: string ; model?: string ; object: string ; system_fingerprint?: null | string }>>>

Parameters

prompt: ChatPrompt
options: FunctionCallOptions

Returns

Promise<AsyncIterable<Delta<{ choices: { delta: { content?: null | string ; function_call?: { arguments?: string ; name?: string } ; role?: "user" | "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } ; finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index: number }[] ; created: number ; id: string ; model?: string ; object: string ; system_fingerprint?: null | string }>>>

Implementation of

TextStreamingBaseModel.doStreamText

Inherited from

AbstractOpenAIChatModel.doStreamText

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:237


extractTextDelta

extractTextDelta(delta): undefined | string

Parameters

delta: unknown

Returns

undefined | string

Implementation of

TextStreamingBaseModel.extractTextDelta

Inherited from

AbstractOpenAIChatModel.extractTextDelta

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:243


extractUsage

extractUsage(response): Object

Parameters

response: Object
response.choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[]
response.created: number
response.id: string
response.model: string
response.object: "chat.completion"
response.system_fingerprint?: null | string
response.usage: Object
response.usage.completion_tokens: number
response.usage.prompt_tokens: number
response.usage.total_tokens: number

Returns

Object

completionTokens: number
promptTokens: number
totalTokens: number

Inherited from

AbstractOpenAIChatModel.extractUsage

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:335


processTextGenerationResponse

processTextGenerationResponse(rawResponse): Object

Parameters

rawResponse: Object
rawResponse.choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[]
rawResponse.created: number
rawResponse.id: string
rawResponse.model: string
rawResponse.object: "chat.completion"
rawResponse.system_fingerprint?: null | string
rawResponse.usage: Object
rawResponse.usage.completion_tokens: number
rawResponse.usage.prompt_tokens: number
rawResponse.usage.total_tokens: number

Returns

Object

rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } }
rawResponse.choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[]
rawResponse.created: number
rawResponse.id: string
rawResponse.model: string
rawResponse.object: "chat.completion"
rawResponse.system_fingerprint?: null | string
rawResponse.usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number }
rawResponse.usage.completion_tokens: number
rawResponse.usage.prompt_tokens: number
rawResponse.usage.total_tokens: number
textGenerationResults: { finishReason: TextGenerationFinishReason ; text: string }[]
usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens }
usage.completionTokens: number
usage.promptTokens: number
usage.totalTokens: number

Inherited from

AbstractOpenAIChatModel.processTextGenerationResponse

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:208


restoreGeneratedTexts

restoreGeneratedTexts(rawResponse): Object

Parameters

rawResponse: unknown

Returns

Object

rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } }
rawResponse.choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[]
rawResponse.created: number
rawResponse.id: string
rawResponse.model: string
rawResponse.object: "chat.completion"
rawResponse.system_fingerprint?: null | string
rawResponse.usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number }
rawResponse.usage.completion_tokens: number
rawResponse.usage.prompt_tokens: number
rawResponse.usage.total_tokens: number
textGenerationResults: { finishReason: TextGenerationFinishReason ; text: string }[]
usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens }
usage.completionTokens: number
usage.promptTokens: number
usage.totalTokens: number

Implementation of

TextStreamingBaseModel.restoreGeneratedTexts

Inherited from

AbstractOpenAIChatModel.restoreGeneratedTexts

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:199


withChatPrompt

withChatPrompt(): PromptTemplateFullTextModel<ChatPrompt, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>

Returns this model with a chat prompt template.

Returns

PromptTemplateFullTextModel<ChatPrompt, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>

Implementation of

TextStreamingBaseModel.withChatPrompt

Defined in

packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:111
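
A hedged streaming sketch, assuming modelfusion exports streamText and that ChatPrompt has a { system?, messages } shape:

```ts
import { streamText } from "modelfusion";

// Stream a multi-turn conversation through the chat prompt template.
const textStream = await streamText({
  model: model.withChatPrompt(),
  prompt: {
    system: "You are a helpful assistant.",
    messages: [
      { role: "user", content: "Suggest a name for a chess engine." },
      { role: "assistant", content: "How about 'Zugzwang'?" },
      { role: "user", content: "Three more options, please." },
    ],
  },
});

for await (const delta of textStream) {
  process.stdout.write(delta);
}
```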


withInstructionPrompt

withInstructionPrompt(): PromptTemplateFullTextModel<InstructionPrompt, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>

Returns this model with an instruction prompt template.

Returns

PromptTemplateFullTextModel<InstructionPrompt, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>

Implementation of

TextStreamingBaseModel.withInstructionPrompt

Defined in

packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:107


withJsonOutput

withJsonOutput(): OpenAICompatibleChatModel

When possible, limit the output generation to the specified JSON schema, or supersets of it (e.g. JSON in general).

Returns

OpenAICompatibleChatModel

Implementation of

TextStreamingBaseModel.withJsonOutput

Defined in

packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:134
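
A brief sketch; on OpenAI-compatible APIs this presumably maps to the response_format: { type: "json_object" } request parameter, and is typically combined with a prompt that asks for JSON:

```ts
// Returns a copy of the model that requests JSON output from the provider.
const jsonModel = model.withJsonOutput();
```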


withPromptTemplate

withPromptTemplate<INPUT_PROMPT>(promptTemplate): PromptTemplateFullTextModel<INPUT_PROMPT, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>

Type parameters

INPUT_PROMPT

Parameters

promptTemplate: TextGenerationPromptTemplate<INPUT_PROMPT, ChatPrompt>

Returns

PromptTemplateFullTextModel<INPUT_PROMPT, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>

Implementation of

TextStreamingBaseModel.withPromptTemplate

Defined in

packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:115
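
A sketch of a custom template, assuming TextGenerationPromptTemplate is an object with format and stopSequences fields (the template and its input prompt type are hypothetical):

```ts
import type { ChatPrompt, TextGenerationPromptTemplate } from "modelfusion";

// Map a plain question string onto the model's ChatPrompt format.
const questionTemplate: TextGenerationPromptTemplate<string, ChatPrompt> = {
  format: (question) => ({
    system: "Answer concisely.",
    messages: [{ role: "user", content: question }],
  }),
  stopSequences: [],
};

const questionModel = model.withPromptTemplate(questionTemplate);
```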


withSettings

withSettings(additionalSettings): OpenAICompatibleChatModel

The withSettings method creates a new model with the same configuration as the original model, but with the specified settings changed.

Parameters

additionalSettings: Partial<OpenAICompatibleChatSettings>

Returns

OpenAICompatibleChatModel

Example

```ts
const model = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 500,
});

const modelWithMoreTokens = model.withSettings({
  maxGenerationTokens: 1000,
});
```

Implementation of

ToolCallsGenerationModel.withSettings

Overrides

AbstractOpenAIChatModel.withSettings

Defined in

packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:138


withTextPrompt

withTextPrompt(): PromptTemplateFullTextModel<string, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>

Returns this model with a text prompt template.

Returns

PromptTemplateFullTextModel<string, ChatPrompt, OpenAICompatibleChatSettings, OpenAICompatibleChatModel>

Implementation of

TextStreamingBaseModel.withTextPrompt

Defined in

packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:103
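
A hedged usage sketch, assuming modelfusion exports a generateText helper that accepts { model, prompt }:

```ts
import { generateText } from "modelfusion";

// With the text prompt template applied, a plain string is a valid prompt.
const text = await generateText({
  model: model.withTextPrompt(),
  prompt: "Write a haiku about compatible APIs.",
});
```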

Properties

contextWindowSize

Readonly contextWindowSize: undefined = undefined

Implementation of

TextStreamingBaseModel.contextWindowSize

Defined in

packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:61


countPromptTokens

Readonly countPromptTokens: undefined = undefined

Optional. Implement if you have a tokenizer and want to count the number of tokens in a prompt.

Implementation of

TextStreamingBaseModel.countPromptTokens

Defined in

packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:63


settings

Readonly settings: OpenAICompatibleChatSettings

Implementation of

ToolCallsGenerationModel.settings

Inherited from

AbstractOpenAIChatModel.settings

Defined in

packages/modelfusion/src/model-function/AbstractModel.ts:7


tokenizer

Readonly tokenizer: undefined = undefined

Implementation of

TextStreamingBaseModel.tokenizer

Defined in

packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleChatModel.ts:62