# Class: OpenAIChatModel

Create a text generation model that calls the OpenAI chat API.

See https://platform.openai.com/docs/api-reference/chat/create
**Example**

```ts
// Imports assumed to be available from the package's top-level exports.
import { generateText, openai, OpenAIChatModel } from "modelfusion";

const model = new OpenAIChatModel({
  model: "gpt-3.5-turbo",
  temperature: 0.7,
  maxGenerationTokens: 500,
});

const text = await generateText([
  model,
  openai.ChatMessage.system(
    "Write a short story about a robot learning to love:"
  ),
]);
```
## Hierarchy

- `AbstractOpenAIChatModel<OpenAIChatSettings>`

  ↳ **`OpenAIChatModel`**

## Implements

- `TextStreamingBaseModel<ChatPrompt, OpenAIChatSettings>`
- `ToolCallGenerationModel<ChatPrompt, OpenAIChatSettings>`
- `ToolCallsGenerationModel<ChatPrompt, OpenAIChatSettings>`
## Accessors

### modelInformation

• `get` **modelInformation**(): `ModelInformation`

#### Returns

`ModelInformation`

#### Implementation of

ToolCallsGenerationModel.modelInformation

#### Inherited from

AbstractOpenAIChatModel.modelInformation

#### Defined in

packages/modelfusion/src/model-function/AbstractModel.ts:17
### modelName

• `get` **modelName**(): `OpenAIChatModelType`

#### Returns

`OpenAIChatModelType`

#### Overrides

AbstractOpenAIChatModel.modelName

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:142
### settingsForEvent

• `get` **settingsForEvent**(): `Partial<OpenAIChatSettings>`

Returns settings that should be recorded in observability events. Security-related settings (e.g. API keys) should not be included here.

#### Returns

`Partial<OpenAIChatSettings>`

#### Implementation of

TextStreamingBaseModel.settingsForEvent

#### Overrides

AbstractOpenAIChatModel.settingsForEvent

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:160
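To illustrate the idea behind `settingsForEvent`, here is a minimal sketch (not the library's actual implementation) of filtering a settings object down to an observable subset. The `ExampleChatSettings` shape and its `api` field (standing in for security-related configuration) are assumptions for illustration:

```typescript
// Hypothetical settings shape, loosely mirroring OpenAIChatSettings.
// The `api` field (holding credentials) is an illustrative assumption.
interface ExampleChatSettings {
  model: string;
  temperature?: number;
  maxGenerationTokens?: number;
  api?: { apiKey: string };
}

// Keep only the fields that are safe to record in observability events.
function settingsForEventSketch(
  settings: ExampleChatSettings
): Partial<ExampleChatSettings> {
  const observable: Partial<ExampleChatSettings> = { ...settings };
  delete observable.api; // never record security-related settings
  return observable;
}

const event = settingsForEventSketch({
  model: "gpt-3.5-turbo",
  temperature: 0.7,
  api: { apiKey: "secret" },
});
// `event` keeps model and temperature but contains no API key.
```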
## Constructors

### constructor

• **new OpenAIChatModel**(`settings`): `OpenAIChatModel`

#### Parameters

Name | Type |
---|---|
`settings` | `OpenAIChatSettings` |

#### Returns

`OpenAIChatModel`

#### Overrides

AbstractOpenAIChatModel.constructor

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:130
## Methods

### asFunctionCallObjectGenerationModel

▸ **asFunctionCallObjectGenerationModel**(`«destructured»`): `OpenAIChatFunctionCallObjectGenerationModel<TextGenerationPromptTemplate<ChatPrompt, ChatPrompt>>`

#### Parameters

Name | Type |
---|---|
`«destructured»` | `Object` |
`› fnDescription?` | `string` |
`› fnName` | `string` |

#### Returns

`OpenAIChatFunctionCallObjectGenerationModel<TextGenerationPromptTemplate<ChatPrompt, ChatPrompt>>`

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:182
### asObjectGenerationModel

▸ **asObjectGenerationModel**<`INPUT_PROMPT`, `OpenAIChatPrompt`>(`promptTemplate`): `ObjectFromTextStreamingModel<INPUT_PROMPT, ...>`

#### Type parameters

Name |
---|
`INPUT_PROMPT` |
`OpenAIChatPrompt` |

#### Parameters

Name | Type |
---|---|
`promptTemplate` | `ObjectFromTextPromptTemplate<INPUT_PROMPT, OpenAIChatPrompt> \| FlexibleObjectFromTextPromptTemplate<INPUT_PROMPT, unknown>` |

#### Returns

```ts
ObjectFromTextStreamingModel<
  INPUT_PROMPT,
  unknown,
  TextStreamingModel<unknown, TextGenerationModelSettings>
> | ObjectFromTextStreamingModel<
  INPUT_PROMPT,
  OpenAIChatPrompt,
  TextStreamingModel<OpenAIChatPrompt, TextGenerationModelSettings>
>
```

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:197
### callAPI

▸ **callAPI**<`RESULT`>(`messages`, `callOptions`, `options`): `Promise<RESULT>`

#### Type parameters

Name |
---|
`RESULT` |

#### Parameters

Name | Type |
---|---|
`messages` | `ChatPrompt` |
`callOptions` | `FunctionCallOptions` |
`options` | `Object` |
`options.functionCall?` | `"auto" \| { name: string } \| "none"` |
`options.functions?` | `{ description?: string; name: string; parameters: unknown }[]` |
`options.responseFormat` | `OpenAIChatResponseFormatType<RESULT>` |
`options.toolChoice?` | `"auto" \| "none" \| { function: { name: string }; type: "function" }` |
`options.tools?` | `{ function: { description?: string; name: string; parameters: unknown }; type: "function" }[]` |

#### Returns

`Promise<RESULT>`

#### Inherited from

AbstractOpenAIChatModel.callAPI

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:112
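The `options.tools` entries follow the object shape shown in the table above. As a sketch, here is a plain object for a hypothetical `getWeather` tool; the tool name, description, and JSON Schema are invented for illustration:

```typescript
// Shape of one entry of `options.tools`, as documented in the callAPI signature.
type ToolEntry = {
  function: { description?: string; name: string; parameters: unknown };
  type: "function";
};

// Hypothetical tool definition; name and parameter schema are illustrative only.
const getWeatherTool: ToolEntry = {
  type: "function",
  function: {
    name: "getWeather",
    description: "Look up the current weather for a city.",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
};

// A callAPI invocation would pass it as part of options: { tools: [getWeatherTool], ... }
```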
### countPromptTokens

▸ **countPromptTokens**(`messages`): `Promise<number>`

Counts the prompt tokens required for the messages. This includes the message base tokens and the prompt base tokens.

#### Parameters

Name | Type |
---|---|
`messages` | `ChatPrompt` |

#### Returns

`Promise<number>`

#### Implementation of

TextStreamingBaseModel.countPromptTokens

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:153
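The counting scheme described above (per-message base tokens plus prompt base tokens plus content tokens) can be sketched as follows. The overhead constants and the naive whitespace "tokenizer" are illustrative assumptions, not the library's `TikTokenTokenizer`:

```typescript
// Illustrative constants; the real per-message and per-prompt overheads
// depend on the model and tokenizer.
const TOKENS_PER_MESSAGE = 4; // assumed per-message base tokens
const TOKENS_PER_PROMPT = 3; // assumed prompt base tokens

// Stand-in tokenizer: counts whitespace-separated words, NOT real BPE tokens.
function countContentTokens(content: string): number {
  return content.split(/\s+/).filter((t) => t.length > 0).length;
}

function countPromptTokensSketch(messages: { content: string }[]): number {
  const messageTokens = messages.reduce(
    (sum, m) => sum + TOKENS_PER_MESSAGE + countContentTokens(m.content),
    0
  );
  return TOKENS_PER_PROMPT + messageTokens;
}

countPromptTokensSketch([{ content: "hello world" }]); // 3 + 4 + 2 = 9
```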
### doGenerateTexts

▸ **doGenerateTexts**(`prompt`, `options`): `Promise<Object>`

#### Parameters

Name | Type |
---|---|
`prompt` | `ChatPrompt` |
`options` | `FunctionCallOptions` |

#### Returns

```ts
Promise<{
  rawResponse: {
    choices: {
      finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter";
      index?: number;
      logprobs?: any;
      message: {
        content: null | string;
        function_call?: { arguments: string; name: string };
        role: "assistant";
        tool_calls?: {
          function: { arguments: string; name: string };
          id: string;
          type: "function";
        }[];
      };
    }[];
    created: number;
    id: string;
    model: string;
    object: "chat.completion";
    system_fingerprint?: null | string;
    usage: {
      completion_tokens: number;
      prompt_tokens: number;
      total_tokens: number;
    };
  };
  textGenerationResults: {
    finishReason: TextGenerationFinishReason;
    text: string;
  }[];
  usage: {
    completionTokens: number; // = rawResponse.usage.completion_tokens
    promptTokens: number; // = rawResponse.usage.prompt_tokens
    totalTokens: number; // = rawResponse.usage.total_tokens
  };
}>
```

#### Implementation of

TextStreamingBaseModel.doGenerateTexts

#### Inherited from

AbstractOpenAIChatModel.doGenerateTexts

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:188
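A sketch of how raw `choices` map to the documented `textGenerationResults` (one result per choice, with the message content as `text`). The finish-reason translation table below is an illustrative assumption, not the library's exact mapping:

```typescript
// Relevant slice of the documented raw choice shape.
type RawChoice = {
  finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter";
  message: { content: null | string };
};

// Assumed normalized finish-reason vocabulary (illustrative).
type FinishReason = "stop" | "length" | "content-filter" | "tool-calls" | "unknown";

function toFinishReason(raw: RawChoice["finish_reason"]): FinishReason {
  switch (raw) {
    case "stop": return "stop";
    case "length": return "length";
    case "content_filter": return "content-filter";
    case "function_call":
    case "tool_calls": return "tool-calls";
    default: return "unknown";
  }
}

function toTextGenerationResults(choices: RawChoice[]) {
  return choices.map((choice) => ({
    text: choice.message.content ?? "", // null content becomes empty text
    finishReason: toFinishReason(choice.finish_reason),
  }));
}

toTextGenerationResults([
  { finish_reason: "stop", message: { content: "Hello!" } },
]);
// → [{ text: "Hello!", finishReason: "stop" }]
```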
### doGenerateToolCall

▸ **doGenerateToolCall**(`tool`, `prompt`, `options`): `Promise<Object>`

#### Parameters

Name | Type |
---|---|
`tool` | `ToolDefinition<string, unknown>` |
`prompt` | `ChatPrompt` |
`options` | `FunctionCallOptions` |

#### Returns

```ts
Promise<{
  // Raw chat.completion response; same shape as in doGenerateTexts above
  // (choices, created, id, model, object, system_fingerprint?, usage).
  rawResponse: { /* see doGenerateTexts */ };
  toolCall: null | { args: unknown; id: string };
  usage: {
    completionTokens: number; // = rawResponse.usage.completion_tokens
    promptTokens: number; // = rawResponse.usage.prompt_tokens
    totalTokens: number; // = rawResponse.usage.total_tokens
  };
}>
```

#### Implementation of

ToolCallGenerationModel.doGenerateToolCall

#### Inherited from

AbstractOpenAIChatModel.doGenerateToolCall

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:264
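In the raw response, the `tool_calls[].function.arguments` field is a JSON string, while the documented `toolCall` result carries parsed `args`. A sketch of that derivation, using plain objects mirroring the documented shape (the sample tool call is invented for illustration):

```typescript
// Documented shape of one raw tool call in message.tool_calls.
type RawToolCall = {
  function: { arguments: string; name: string };
  id: string;
  type: "function";
};

// Derive the documented `toolCall` result ({ args, id } or null) from a message.
function toToolCall(
  message: { tool_calls?: RawToolCall[] }
): null | { args: unknown; id: string } {
  const first = message.tool_calls?.[0];
  if (first == null) return null; // model answered with text, not a tool call
  return { id: first.id, args: JSON.parse(first.function.arguments) };
}

toToolCall({
  tool_calls: [
    {
      id: "call_1",
      type: "function",
      function: { name: "getWeather", arguments: '{"city":"Paris"}' },
    },
  ],
});
// → { id: "call_1", args: { city: "Paris" } }
```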
### doGenerateToolCalls

▸ **doGenerateToolCalls**(`tools`, `prompt`, `options`): `Promise<Object>`

#### Parameters

Name | Type |
---|---|
`tools` | `ToolDefinition<string, unknown>[]` |
`prompt` | `ChatPrompt` |
`options` | `FunctionCallOptions` |

#### Returns

```ts
Promise<{
  // Raw chat.completion response; same shape as in doGenerateTexts above
  // (choices, created, id, model, object, system_fingerprint?, usage).
  rawResponse: { /* see doGenerateTexts */ };
  text: null | string;
  toolCalls: null | {
    args: unknown;
    id: string; // = toolCall.id
    name: string; // = toolCall.function.name
  }[];
  usage: {
    completionTokens: number; // = rawResponse.usage.completion_tokens
    promptTokens: number; // = rawResponse.usage.prompt_tokens
    totalTokens: number; // = rawResponse.usage.total_tokens
  };
}>
```

#### Implementation of

ToolCallsGenerationModel.doGenerateToolCalls

#### Inherited from

AbstractOpenAIChatModel.doGenerateToolCalls

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:302
### doStreamText

▸ **doStreamText**(`prompt`, `options`): `Promise<AsyncIterable<Delta<Object>>>`

#### Parameters

Name | Type |
---|---|
`prompt` | `ChatPrompt` |
`options` | `FunctionCallOptions` |

#### Returns

```ts
Promise<AsyncIterable<Delta<{
  choices: {
    delta: {
      content?: null | string;
      function_call?: { arguments?: string; name?: string };
      role?: "user" | "assistant";
      tool_calls?: {
        function: { arguments: string; name: string };
        id: string;
        type: "function";
      }[];
    };
    finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter";
    index: number;
  }[];
  created: number;
  id: string;
  model?: string;
  object: string;
  system_fingerprint?: null | string;
}>>>
```

#### Implementation of

TextStreamingBaseModel.doStreamText

#### Inherited from

AbstractOpenAIChatModel.doStreamText

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:237
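Consumers of this stream typically concatenate the `delta.content` fragments chunk by chunk. A minimal sketch over plain objects shaped like the documented chunk payload (the sample chunks are invented for illustration):

```typescript
// Relevant slice of the documented streaming chunk shape.
type StreamChunk = {
  choices: { delta: { content?: null | string }; index: number }[];
};

// Append the content delta of the first choice, if present.
function appendDelta(text: string, chunk: StreamChunk): string {
  const delta = chunk.choices[0]?.delta.content;
  return delta ? text + delta : text;
}

const chunks: StreamChunk[] = [
  { choices: [{ delta: { content: "Hello" }, index: 0 }] },
  { choices: [{ delta: { content: ", world" }, index: 0 }] },
  { choices: [{ delta: {}, index: 0 }] }, // e.g. a final chunk with no content
];

const streamedText = chunks.reduce(appendDelta, "");
// streamedText === "Hello, world"
```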
### extractTextDelta

▸ **extractTextDelta**(`delta`): `undefined | string`

#### Parameters

Name | Type |
---|---|
`delta` | `unknown` |

#### Returns

`undefined | string`

#### Implementation of

TextStreamingBaseModel.extractTextDelta

#### Inherited from

AbstractOpenAIChatModel.extractTextDelta

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:243
### extractUsage

▸ **extractUsage**(`response`): `Object`

#### Parameters

Name | Type |
---|---|
`response` | `Object` |
`response.choices` | `{ finish_reason?: null \| "length" \| "stop" \| "function_call" \| "tool_calls" \| "content_filter"; index?: number; logprobs?: any; message: { content: null \| string; function_call?: { arguments: string; name: string }; role: "assistant"; tool_calls?: { function: { arguments: string; name: string }; id: string; type: "function" }[] } }[]` |
`response.created` | `number` |
`response.id` | `string` |
`response.model` | `string` |
`response.object` | `"chat.completion"` |
`response.system_fingerprint?` | `null \| string` |
`response.usage` | `Object` |
`response.usage.completion_tokens` | `number` |
`response.usage.prompt_tokens` | `number` |
`response.usage.total_tokens` | `number` |

#### Returns

`Object`

Name | Type |
---|---|
`completionTokens` | `number` |
`promptTokens` | `number` |
`totalTokens` | `number` |

#### Inherited from

AbstractOpenAIChatModel.extractUsage

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:335
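The mapping `extractUsage` performs is a straight rename from the raw snake_case usage block to the camelCase fields listed above. A self-contained sketch of that mapping:

```typescript
// Documented shape of the raw usage block.
type RawUsage = {
  completion_tokens: number;
  prompt_tokens: number;
  total_tokens: number;
};

// Rename snake_case usage fields to the documented camelCase result.
function extractUsageSketch(response: { usage: RawUsage }) {
  return {
    completionTokens: response.usage.completion_tokens,
    promptTokens: response.usage.prompt_tokens,
    totalTokens: response.usage.total_tokens,
  };
}

extractUsageSketch({
  usage: { completion_tokens: 10, prompt_tokens: 5, total_tokens: 15 },
});
// → { completionTokens: 10, promptTokens: 5, totalTokens: 15 }
```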
### processTextGenerationResponse

▸ **processTextGenerationResponse**(`rawResponse`): `Object`

#### Parameters

Name | Type |
---|---|
`rawResponse` | `Object` — the raw `chat.completion` response (`choices`, `created`, `id`, `model`, `object`, `system_fingerprint?`, `usage`), with the same shape as the `response` parameter of `extractUsage` |

#### Returns

`Object`

Name | Type |
---|---|
`rawResponse` | the unchanged raw `chat.completion` response |
`textGenerationResults` | `{ finishReason: TextGenerationFinishReason; text: string }[]` |
`usage` | `{ completionTokens: number; promptTokens: number; totalTokens: number }` (from `rawResponse.usage`) |

#### Inherited from

AbstractOpenAIChatModel.processTextGenerationResponse

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:208
### restoreGeneratedTexts

▸ **restoreGeneratedTexts**(`rawResponse`): `Object`

#### Parameters

Name | Type |
---|---|
`rawResponse` | `unknown` |

#### Returns

`Object` — the same shape as `processTextGenerationResponse` returns: the raw `chat.completion` response, `textGenerationResults`, and camelCase `usage`.

#### Implementation of

TextStreamingBaseModel.restoreGeneratedTexts

#### Inherited from

AbstractOpenAIChatModel.restoreGeneratedTexts

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:199
### withChatPrompt

▸ **withChatPrompt**(): `PromptTemplateFullTextModel<ChatPrompt, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>`

Returns this model with a chat prompt template.

#### Returns

`PromptTemplateFullTextModel<ChatPrompt, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>`

#### Implementation of

TextStreamingBaseModel.withChatPrompt

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:221
### withInstructionPrompt

▸ **withInstructionPrompt**(): `PromptTemplateFullTextModel<InstructionPrompt, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>`

Returns this model with an instruction prompt template.

#### Returns

`PromptTemplateFullTextModel<InstructionPrompt, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>`

#### Implementation of

TextStreamingBaseModel.withInstructionPrompt

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:217
### withJsonOutput

▸ **withJsonOutput**(): `OpenAIChatModel`

When possible, limits the output generation to the specified JSON schema, or supersets of it (e.g. JSON in general).

#### Returns

`OpenAIChatModel`

#### Implementation of

TextStreamingBaseModel.withJsonOutput

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:244
### withPromptTemplate

▸ **withPromptTemplate**<`INPUT_PROMPT`>(`promptTemplate`): `PromptTemplateFullTextModel<INPUT_PROMPT, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>`

#### Type parameters

Name |
---|
`INPUT_PROMPT` |

#### Parameters

Name | Type |
---|---|
`promptTemplate` | `TextGenerationPromptTemplate<INPUT_PROMPT, ChatPrompt>` |

#### Returns

`PromptTemplateFullTextModel<INPUT_PROMPT, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>`

#### Implementation of

TextStreamingBaseModel.withPromptTemplate

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:225
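A prompt template maps an input prompt type to the model's native chat prompt. As a sketch, here is a hypothetical template turning a plain topic string into a chat-style prompt; the template shape (`{ format, stopSequences }`) and the simplified chat message structure are assumptions for illustration:

```typescript
// Simplified stand-in for a chat prompt; not the library's ChatPrompt type.
type SimpleChatPrompt = {
  system?: string;
  messages: { role: "user"; content: string }[];
};

// Hypothetical template: maps a topic string to a chat prompt.
const topicTemplate = {
  stopSequences: [] as string[],
  format(topic: string): SimpleChatPrompt {
    return {
      system: "You are a concise assistant.",
      messages: [
        { role: "user", content: `Write one sentence about ${topic}.` },
      ],
    };
  },
};

// A model wrapped via withPromptTemplate(template) could then be called with
// just a topic string instead of a full chat prompt.
const prompt = topicTemplate.format("volcanoes");
// prompt.messages[0].content === "Write one sentence about volcanoes."
```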
### withSettings

▸ **withSettings**(`additionalSettings`): `OpenAIChatModel`

The `withSettings` method creates a new model with the same configuration as the original model, but with the specified settings changed.

#### Parameters

Name | Type |
---|---|
`additionalSettings` | `Partial<OpenAIChatSettings>` |

#### Returns

`OpenAIChatModel`

**Example**

```ts
const model = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 500,
});

const modelWithMoreTokens = model.withSettings({
  maxGenerationTokens: 1000,
});
```

#### Implementation of

ToolCallsGenerationModel.withSettings

#### Overrides

AbstractOpenAIChatModel.withSettings

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:248
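The merge semantics described above (new settings laid over the existing ones, original model untouched) can be sketched with a plain object spread; the `Settings` shape below is a simplified stand-in for `OpenAIChatSettings`:

```typescript
// Simplified stand-in for OpenAIChatSettings.
type Settings = {
  model: string;
  maxGenerationTokens?: number;
  temperature?: number;
};

// Overlay additional settings without mutating the current ones.
function withSettingsSketch(
  current: Settings,
  additional: Partial<Settings>
): Settings {
  return { ...current, ...additional };
}

const original: Settings = { model: "gpt-3.5-turbo", maxGenerationTokens: 500 };
const updated = withSettingsSketch(original, { maxGenerationTokens: 1000 });
// updated.maxGenerationTokens === 1000, while original stays at 500.
```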
### withTextPrompt

▸ **withTextPrompt**(): `PromptTemplateFullTextModel<string, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>`

Returns this model with a text prompt template.

#### Returns

`PromptTemplateFullTextModel<string, ChatPrompt, OpenAIChatSettings, OpenAIChatModel>`

#### Implementation of

TextStreamingBaseModel.withTextPrompt

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:213
## Properties

### contextWindowSize

• `Readonly` **contextWindowSize**: `number`

#### Implementation of

TextStreamingBaseModel.contextWindowSize

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:146

### provider

• `Readonly` **provider**: `"openai"`

#### Overrides

AbstractOpenAIChatModel.provider

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:141

### settings

• `Readonly` **settings**: `OpenAIChatSettings`

#### Implementation of

ToolCallsGenerationModel.settings

#### Inherited from

AbstractOpenAIChatModel.settings

#### Defined in

packages/modelfusion/src/model-function/AbstractModel.ts:7

### tokenizer

• `Readonly` **tokenizer**: `TikTokenTokenizer`

#### Implementation of

TextStreamingBaseModel.tokenizer

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:147