
Class: OpenAIChatModel

Create a text generation model that calls the OpenAI chat API.

See

https://platform.openai.com/docs/api-reference/chat/create

Example

const model = new OpenAIChatModel({
  model: "gpt-3.5-turbo",
  temperature: 0.7,
  maxGenerationTokens: 500,
});

const text = await generateText({
  model,
  prompt: [
    openai.ChatMessage.system(
      "Write a short story about a robot learning to love:"
    ),
  ],
});

Hierarchy

Implements

Accessors

modelInformation

get modelInformation(): ModelInformation

Returns

ModelInformation

Implementation of

ToolCallsGenerationModel.modelInformation

Inherited from

AbstractOpenAIChatModel.modelInformation

Defined in

packages/modelfusion/src/model-function/AbstractModel.ts:17


modelName

get modelName(): OpenAIChatModelType

Returns

OpenAIChatModelType

Overrides

AbstractOpenAIChatModel.modelName

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:142


settingsForEvent

get settingsForEvent(): Partial<OpenAIChatSettings>

Returns settings that should be recorded in observability events. Security-related settings (e.g. API keys) should not be included here.

Returns

Partial<OpenAIChatSettings>

Implementation of

TextStreamingBaseModel.settingsForEvent

Overrides

AbstractOpenAIChatModel.settingsForEvent

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:160

Constructors

constructor

new OpenAIChatModel(settings): OpenAIChatModel

Parameters

settings: OpenAIChatSettings

Returns

OpenAIChatModel

Overrides

AbstractOpenAIChatModel.constructor

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:130

Methods

asFunctionCallObjectGenerationModel

asFunctionCallObjectGenerationModel(«destructured»): OpenAIChatFunctionCallObjectGenerationModel<TextGenerationPromptTemplate<ChatPrompt, ChatPrompt>>

Parameters

«destructured»: Object
› fnDescription?: string
› fnName: string

Returns

OpenAIChatFunctionCallObjectGenerationModel<TextGenerationPromptTemplate<ChatPrompt, ChatPrompt>>

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:182


asObjectGenerationModel

asObjectGenerationModel<INPUT_PROMPT, OpenAIChatPrompt>(promptTemplate): ObjectFromTextStreamingModel<INPUT_PROMPT, unknown, TextStreamingModel<unknown, TextGenerationModelSettings>> | ObjectFromTextStreamingModel<INPUT_PROMPT, OpenAIChatPrompt, TextStreamingModel<OpenAIChatPrompt, TextGenerationModelSettings>>

Type parameters

Name
INPUT_PROMPT
OpenAIChatPrompt

Parameters

promptTemplate: ObjectFromTextPromptTemplate<INPUT_PROMPT, OpenAIChatPrompt> | FlexibleObjectFromTextPromptTemplate<INPUT_PROMPT, unknown>

Returns

ObjectFromTextStreamingModel<INPUT_PROMPT, unknown, TextStreamingModel<unknown, TextGenerationModelSettings>> | ObjectFromTextStreamingModel<INPUT_PROMPT, OpenAIChatPrompt, TextStreamingModel<OpenAIChatPrompt, TextGenerationModelSettings>>

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:197


callAPI

callAPI<RESULT>(messages, callOptions, options): Promise<RESULT>

Type parameters

Name
RESULT

Parameters

messages: ChatPrompt
callOptions: FunctionCallOptions
options: Object
options.functionCall?: "auto" | { name: string } | "none"
options.functions?: { description?: string ; name: string ; parameters: unknown }[]
options.responseFormat: OpenAIChatResponseFormatType<RESULT>
options.toolChoice?: "auto" | "none" | { function: { name: string } ; type: "function" }
options.tools?: { function: { description?: string ; name: string ; parameters: unknown } ; type: "function" }[]

Returns

Promise<RESULT>

Inherited from

AbstractOpenAIChatModel.callAPI

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:112


countPromptTokens

countPromptTokens(messages): Promise<number>

Counts the prompt tokens required for the messages. This includes the message base tokens and the prompt base tokens.

Parameters

messages: ChatPrompt

Returns

Promise<number>

Implementation of

TextStreamingBaseModel.countPromptTokens

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:153


doGenerateTexts

doGenerateTexts(prompt, options): Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } } ; textGenerationResults: { finishReason: TextGenerationFinishReason ; text: string }[] ; usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens } }>

Parameters

prompt: ChatPrompt
options: FunctionCallOptions

Returns

Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } } ; textGenerationResults: { finishReason: TextGenerationFinishReason ; text: string }[] ; usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens } }>

Implementation of

TextStreamingBaseModel.doGenerateTexts

Inherited from

AbstractOpenAIChatModel.doGenerateTexts

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:188


doGenerateToolCall

doGenerateToolCall(tool, prompt, options): Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } } ; toolCall: null | { args: unknown ; id: string } ; usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens } }>

Parameters

tool: ToolDefinition<string, unknown>
prompt: ChatPrompt
options: FunctionCallOptions

Returns

Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } } ; toolCall: null | { args: unknown ; id: string } ; usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens } }>

Implementation of

ToolCallGenerationModel.doGenerateToolCall

Inherited from

AbstractOpenAIChatModel.doGenerateToolCall

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:264


doGenerateToolCalls

doGenerateToolCalls(tools, prompt, options): Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } } ; text: null | string ; toolCalls: null | { args: unknown ; id: string = toolCall.id; name: string = toolCall.function.name }[] ; usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens } }>

Parameters

tools: ToolDefinition<string, unknown>[]
prompt: ChatPrompt
options: FunctionCallOptions

Returns

Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } } ; text: null | string ; toolCalls: null | { args: unknown ; id: string = toolCall.id; name: string = toolCall.function.name }[] ; usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens } }>

Implementation of

ToolCallsGenerationModel.doGenerateToolCalls

Inherited from

AbstractOpenAIChatModel.doGenerateToolCalls

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:302


doStreamText

doStreamText(prompt, options): Promise<AsyncIterable<Delta<{ choices: { delta: { content?: null | string ; function_call?: { arguments?: string ; name?: string } ; role?: "user" | "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } ; finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index: number }[] ; created: number ; id: string ; model?: string ; object: string ; system_fingerprint?: null | string }>>>

Parameters

prompt: ChatPrompt
options: FunctionCallOptions

Returns

Promise<AsyncIterable<Delta<{ choices: { delta: { content?: null | string ; function_call?: { arguments?: string ; name?: string } ; role?: "user" | "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } ; finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index: number }[] ; created: number ; id: string ; model?: string ; object: string ; system_fingerprint?: null | string }>>>

Implementation of

TextStreamingBaseModel.doStreamText

Inherited from

AbstractOpenAIChatModel.doStreamText

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:237


extractTextDelta

extractTextDelta(delta): undefined | string

Parameters

delta: unknown

Returns

undefined | string

Implementation of

TextStreamingBaseModel.extractTextDelta

Inherited from

AbstractOpenAIChatModel.extractTextDelta

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:243


extractUsage

extractUsage(response): Object

Parameters

response: Object
response.choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[]
response.created: number
response.id: string
response.model: string
response.object: "chat.completion"
response.system_fingerprint?: null | string
response.usage: Object
response.usage.completion_tokens: number
response.usage.prompt_tokens: number
response.usage.total_tokens: number

Returns

Object

completionTokens: number
promptTokens: number
totalTokens: number

Inherited from

AbstractOpenAIChatModel.extractUsage

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:335


processTextGenerationResponse

processTextGenerationResponse(rawResponse): Object

Parameters

rawResponse: Object
rawResponse.choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[]
rawResponse.created: number
rawResponse.id: string
rawResponse.model: string
rawResponse.object: "chat.completion"
rawResponse.system_fingerprint?: null | string
rawResponse.usage: Object
rawResponse.usage.completion_tokens: number
rawResponse.usage.prompt_tokens: number
rawResponse.usage.total_tokens: number

Returns

Object

rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } }
rawResponse.choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[]
rawResponse.created: number
rawResponse.id: string
rawResponse.model: string
rawResponse.object: "chat.completion"
rawResponse.system_fingerprint?: null | string
rawResponse.usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number }
rawResponse.usage.completion_tokens: number
rawResponse.usage.prompt_tokens: number
rawResponse.usage.total_tokens: number
textGenerationResults: { finishReason: TextGenerationFinishReason ; text: string }[]
usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens }
usage.completionTokens: number
usage.promptTokens: number
usage.totalTokens: number

Inherited from

AbstractOpenAIChatModel.processTextGenerationResponse

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:208


restoreGeneratedTexts

restoreGeneratedTexts(rawResponse): Object

Parameters

rawResponse: unknown

Returns

Object

rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } }
rawResponse.choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[]
rawResponse.created: number
rawResponse.id: string
rawResponse.model: string
rawResponse.object: "chat.completion"
rawResponse.system_fingerprint?: null | string
rawResponse.usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number }
rawResponse.usage.completion_tokens: number
rawResponse.usage.prompt_tokens: number
rawResponse.usage.total_tokens: number
textGenerationResults: { finishReason: TextGenerationFinishReason ; text: string }[]
usage: { completionTokens: number = response.usage.completion_tokens; promptTokens: number = response.usage.prompt_tokens; totalTokens: number = response.usage.total_tokens }
usage.completionTokens: number
usage.promptTokens: number
usage.totalTokens: number

Implementation of

TextStreamingBaseModel.restoreGeneratedTexts

Inherited from

AbstractOpenAIChatModel.restoreGeneratedTexts

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:199


withChatPrompt

withChatPrompt(): PromptTemplateFullTextModel<ChatPrompt, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>

Returns this model with a chat prompt template.

Returns

PromptTemplateFullTextModel<ChatPrompt, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>

Implementation of

TextStreamingBaseModel.withChatPrompt

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:221


withInstructionPrompt

withInstructionPrompt(): PromptTemplateFullTextModel<InstructionPrompt, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>

Returns this model with an instruction prompt template.

Returns

PromptTemplateFullTextModel<InstructionPrompt, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>

Implementation of

TextStreamingBaseModel.withInstructionPrompt

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:217


withJsonOutput

withJsonOutput(): OpenAIChatModel

When possible, limits output generation to the specified JSON schema, or a superset of it (e.g. JSON in general).

Returns

OpenAIChatModel

Implementation of

TextStreamingBaseModel.withJsonOutput

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:244


withPromptTemplate

withPromptTemplate<INPUT_PROMPT>(promptTemplate): PromptTemplateFullTextModel<INPUT_PROMPT, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>

Type parameters

Name
INPUT_PROMPT

Parameters

promptTemplate: TextGenerationPromptTemplate<INPUT_PROMPT, ChatPrompt>

Returns

PromptTemplateFullTextModel<INPUT_PROMPT, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>

Implementation of

TextStreamingBaseModel.withPromptTemplate

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:225


withSettings

withSettings(additionalSettings): OpenAIChatModel

The withSettings method creates a new model with the same configuration as the original model, but with the specified settings changed.

Parameters

additionalSettings: Partial<OpenAIChatSettings>

Returns

OpenAIChatModel

Example

const model = new OpenAIChatModel({
  model: "gpt-3.5-turbo",
  maxGenerationTokens: 500,
});

const modelWithMoreTokens = model.withSettings({
  maxGenerationTokens: 1000,
});

Implementation of

ToolCallsGenerationModel.withSettings

Overrides

AbstractOpenAIChatModel.withSettings

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:248


withTextPrompt

withTextPrompt(): PromptTemplateFullTextModel<string, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>

Returns this model with a text prompt template.

Returns

PromptTemplateFullTextModel<string, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>

Implementation of

TextStreamingBaseModel.withTextPrompt

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:213

Properties

contextWindowSize

Readonly contextWindowSize: number

Implementation of

TextStreamingBaseModel.contextWindowSize

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:146


provider

Readonly provider: "openai"

Overrides

AbstractOpenAIChatModel.provider

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:141


settings

Readonly settings: OpenAIChatSettings

Implementation of

ToolCallsGenerationModel.settings

Inherited from

AbstractOpenAIChatModel.settings

Defined in

packages/modelfusion/src/model-function/AbstractModel.ts:7


tokenizer

Readonly tokenizer: TikTokenTokenizer

Implementation of

TextStreamingBaseModel.tokenizer

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:147