# Class: OpenAICompletionModel

Create a text generation model that calls the OpenAI text completion API.

**`See`**

https://platform.openai.com/docs/api-reference/completions/create

**`Example`**

```ts
const model = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  temperature: 0.7,
  maxGenerationTokens: 500,
  retry: retryWithExponentialBackoff({ maxTries: 5 }),
});

const text = await generateText(
  model,
  "Write a short story about a robot learning to love:\n\n"
);
```
## Hierarchy

- `AbstractOpenAICompletionModel`<`OpenAICompletionModelSettings`>

  ↳ **`OpenAICompletionModel`**

## Implements

- `TextStreamingBaseModel`
## Accessors

### modelInformation

• `get` **modelInformation**(): `ModelInformation`

#### Returns

`ModelInformation`

#### Implementation of

TextStreamingBaseModel.modelInformation

#### Inherited from

AbstractOpenAICompletionModel.modelInformation

#### Defined in

packages/modelfusion/src/model-function/AbstractModel.ts:17
### modelName

• `get` **modelName**(): ``"gpt-3.5-turbo-instruct"``

#### Returns

``"gpt-3.5-turbo-instruct"``

#### Overrides

AbstractOpenAICompletionModel.modelName

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:81
### settingsForEvent

• `get` **settingsForEvent**(): `Partial`<`OpenAICompletionModelSettings`>

Returns settings that should be recorded in observability events. Security-related settings (e.g. API keys) should not be included here.

#### Returns

`Partial`<`OpenAICompletionModelSettings`>

#### Implementation of

TextStreamingBaseModel.settingsForEvent

#### Overrides

AbstractOpenAICompletionModel.settingsForEvent

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:92
## Constructors

### constructor

• **new OpenAICompletionModel**(`settings`): `OpenAICompletionModel`

#### Parameters

Name | Type |
---|---|
`settings` | `OpenAICompletionModelSettings` |

#### Returns

`OpenAICompletionModel`

#### Overrides

AbstractOpenAICompletionModel.constructor

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:67
## Methods

### callAPI

▸ **callAPI**<`RESULT`>(`prompt`, `callOptions`, `options`): `Promise`<`RESULT`>

#### Type parameters

Name |
---|
`RESULT` |

#### Parameters

Name | Type |
---|---|
`prompt` | `string` |
`callOptions` | `FunctionCallOptions` |
`options` | `Object` |
`options.responseFormat` | `OpenAITextResponseFormatType`<`RESULT`> |

#### Returns

`Promise`<`RESULT`>

#### Inherited from

AbstractOpenAICompletionModel.callAPI

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:51
### countPromptTokens

▸ **countPromptTokens**(`input`): `Promise`<`number`>

#### Parameters

Name | Type |
---|---|
`input` | `string` |

#### Returns

`Promise`<`number`>

#### Implementation of

TextStreamingBaseModel.countPromptTokens

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:88
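Token counting runs locally through the model's TikToken tokenizer, so it can be used to validate a prompt before spending an API call. A minimal sketch, assuming the `modelfusion` package and top-level `await`; the context-window budget check is illustrative, not part of the API:

```ts
import { OpenAICompletionModel } from "modelfusion";

const model = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 500,
});

const prompt = "Write a short story about a robot learning to love:\n\n";

// Runs locally via the TikToken tokenizer; no API request is made.
const promptTokenCount = await model.countPromptTokens(prompt);

// Leave room for the completion inside the context window.
if (promptTokenCount + 500 > model.contextWindowSize) {
  throw new Error("Prompt plus completion would exceed the context window.");
}
```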
### doGenerateTexts

▸ **doGenerateTexts**(`prompt`, `options`): `Promise`<{ `rawResponse`: { `choices`: { `finish_reason?`: ``null`` \| ``"length"`` \| ``"stop"`` \| ``"content_filter"`` ; `index`: `number` ; `logprobs?`: `any` ; `text`: `string` }[] ; `created`: `number` ; `id`: `string` ; `model`: `string` ; `object`: ``"text_completion"`` ; `system_fingerprint?`: `string` ; `usage`: { `completion_tokens`: `number` ; `prompt_tokens`: `number` ; `total_tokens`: `number` } } ; `textGenerationResults`: { `finishReason`: `TextGenerationFinishReason` ; `text`: `string` }[] ; `usage`: { `completionTokens`: `number` ; `promptTokens`: `number` ; `totalTokens`: `number` } }>

#### Parameters

Name | Type |
---|---|
`prompt` | `string` |
`options` | `FunctionCallOptions` |

#### Returns

`Promise`<{ `rawResponse`: { `choices`: { `finish_reason?`: ``null`` \| ``"length"`` \| ``"stop"`` \| ``"content_filter"`` ; `index`: `number` ; `logprobs?`: `any` ; `text`: `string` }[] ; `created`: `number` ; `id`: `string` ; `model`: `string` ; `object`: ``"text_completion"`` ; `system_fingerprint?`: `string` ; `usage`: { `completion_tokens`: `number` ; `prompt_tokens`: `number` ; `total_tokens`: `number` } } ; `textGenerationResults`: { `finishReason`: `TextGenerationFinishReason` ; `text`: `string` }[] ; `usage`: { `completionTokens`: `number` ; `promptTokens`: `number` ; `totalTokens`: `number` } }>

#### Implementation of

TextStreamingBaseModel.doGenerateTexts

#### Inherited from

AbstractOpenAICompletionModel.doGenerateTexts

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:111
### doStreamText

▸ **doStreamText**(`prompt`, `options`): `Promise`<`AsyncIterable`<`Delta`<{ `choices`: { `finish_reason?`: ``null`` \| ``"length"`` \| ``"stop"`` \| ``"content_filter"`` ; `index`: `number` ; `text`: `string` }[] ; `created`: `number` ; `id`: `string` ; `model`: `string` ; `object`: ``"text_completion"`` ; `system_fingerprint?`: `string` }>>>

#### Parameters

Name | Type |
---|---|
`prompt` | `string` |
`options` | `FunctionCallOptions` |

#### Returns

`Promise`<`AsyncIterable`<`Delta`<{ `choices`: { `finish_reason?`: ``null`` \| ``"length"`` \| ``"stop"`` \| ``"content_filter"`` ; `index`: `number` ; `text`: `string` }[] ; `created`: `number` ; `id`: `string` ; `model`: `string` ; `object`: ``"text_completion"`` ; `system_fingerprint?`: `string` }>>>

#### Implementation of

TextStreamingBaseModel.doStreamText

#### Inherited from

AbstractOpenAICompletionModel.doStreamText

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:160
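`doStreamText` is the low-level streaming entry point; application code typically goes through the library's `streamText` helper, which drives `doStreamText` and `extractTextDelta` internally. A sketch, assuming the `modelfusion` package and the positional `streamText(model, prompt)` call style used by `generateText` elsewhere in this documentation:

```ts
import { OpenAICompletionModel, streamText } from "modelfusion";

const model = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 250,
});

// streamText yields text deltas as they arrive from the API.
const textStream = await streamText(model, "Write a haiku about streams:\n\n");

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}
```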
### extractTextDelta

▸ **extractTextDelta**(`delta`): `undefined` \| `string`

#### Parameters

Name | Type |
---|---|
`delta` | `unknown` |

#### Returns

`undefined` \| `string`

#### Implementation of

TextStreamingBaseModel.extractTextDelta

#### Inherited from

AbstractOpenAICompletionModel.extractTextDelta

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:166
### restoreGeneratedTexts

▸ **restoreGeneratedTexts**(`rawResponse`): `Object`

#### Parameters

Name | Type |
---|---|
`rawResponse` | `unknown` |

#### Returns

`Object`

Name | Type |
---|---|
`rawResponse` | { `choices`: { `finish_reason?`: ``null`` \| ``"length"`` \| ``"stop"`` \| ``"content_filter"`` ; `index`: `number` ; `logprobs?`: `any` ; `text`: `string` }[] ; `created`: `number` ; `id`: `string` ; `model`: `string` ; `object`: ``"text_completion"`` ; `system_fingerprint?`: `string` ; `usage`: { `completion_tokens`: `number` ; `prompt_tokens`: `number` ; `total_tokens`: `number` } } |
`rawResponse.choices` | { `finish_reason?`: ``null`` \| ``"length"`` \| ``"stop"`` \| ``"content_filter"`` ; `index`: `number` ; `logprobs?`: `any` ; `text`: `string` }[] |
`rawResponse.created` | `number` |
`rawResponse.id` | `string` |
`rawResponse.model` | `string` |
`rawResponse.object` | ``"text_completion"`` |
`rawResponse.system_fingerprint?` | `string` |
`rawResponse.usage` | { `completion_tokens`: `number` ; `prompt_tokens`: `number` ; `total_tokens`: `number` } |
`rawResponse.usage.completion_tokens` | `number` |
`rawResponse.usage.prompt_tokens` | `number` |
`rawResponse.usage.total_tokens` | `number` |
`textGenerationResults` | { `finishReason`: `TextGenerationFinishReason` ; `text`: `string` }[] |
`usage` | { `completionTokens`: `number` ; `promptTokens`: `number` ; `totalTokens`: `number` } |
`usage.completionTokens` | `number` |
`usage.promptTokens` | `number` |
`usage.totalTokens` | `number` |

#### Implementation of

TextStreamingBaseModel.restoreGeneratedTexts

#### Inherited from

AbstractOpenAICompletionModel.restoreGeneratedTexts

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:119
### withChatPrompt

▸ **withChatPrompt**(`options?`): `PromptTemplateTextStreamingModel`<`ChatPrompt`, `string`, `OpenAICompletionModelSettings`, `OpenAICompletionModel`>

Returns this model with a chat prompt template.

#### Parameters

Name | Type |
---|---|
`options?` | `Object` |
`options.assistant?` | `string` |
`options.user?` | `string` |

#### Returns

`PromptTemplateTextStreamingModel`<`ChatPrompt`, `string`, `OpenAICompletionModelSettings`, `OpenAICompletionModel`>

#### Implementation of

TextStreamingBaseModel.withChatPrompt

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:123
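Since `gpt-3.5-turbo-instruct` is a plain completion model, `withChatPrompt` maps a structured chat prompt onto completion text, with the optional `user` and `assistant` options controlling the speaker prefixes. A sketch of possible usage, assuming the `modelfusion` package; the exact `ChatPrompt` message shape shown here is an assumption:

```ts
import { OpenAICompletionModel, generateText } from "modelfusion";

const chatModel = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 500,
}).withChatPrompt({ user: "user", assistant: "assistant" });

// The chat prompt is rendered into a single completion prompt internally.
const text = await generateText(chatModel, {
  system: "You are a helpful, concise assistant.",
  messages: [{ role: "user", content: "Why is the sky blue?" }],
});
```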
### withInstructionPrompt

▸ **withInstructionPrompt**(): `PromptTemplateTextStreamingModel`<`InstructionPrompt`, `string`, `OpenAICompletionModelSettings`, `OpenAICompletionModel`>

Returns this model with an instruction prompt template.

#### Returns

`PromptTemplateTextStreamingModel`<`InstructionPrompt`, `string`, `OpenAICompletionModelSettings`, `OpenAICompletionModel`>

#### Implementation of

TextStreamingBaseModel.withInstructionPrompt

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:119
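A sketch of instruction-prompt usage, assuming the `modelfusion` package; the `system`/`instruction` fields shown for `InstructionPrompt` are an assumption about its shape:

```ts
import { OpenAICompletionModel, generateText } from "modelfusion";

const instructionModel = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 200,
}).withInstructionPrompt();

// The instruction prompt is rendered into completion text internally.
const text = await generateText(instructionModel, {
  system: "You are a precise technical writer.",
  instruction: "Explain what a completion model is in two sentences.",
});
```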
### withJsonOutput

▸ **withJsonOutput**(): `this`

When possible, limits output generation to the specified JSON schema, or supersets of it (e.g. JSON in general).

#### Returns

`this`

#### Implementation of

TextStreamingBaseModel.withJsonOutput

#### Inherited from

AbstractOpenAICompletionModel.withJsonOutput

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:178
### withPromptTemplate

▸ **withPromptTemplate**<`INPUT_PROMPT`>(`promptTemplate`): `PromptTemplateTextStreamingModel`<`INPUT_PROMPT`, `string`, `OpenAICompletionModelSettings`, `OpenAICompletionModel`>

#### Type parameters

Name |
---|
`INPUT_PROMPT` |

#### Parameters

Name | Type |
---|---|
`promptTemplate` | `TextGenerationPromptTemplate`<`INPUT_PROMPT`, `string`> |

#### Returns

`PromptTemplateTextStreamingModel`<`INPUT_PROMPT`, `string`, `OpenAICompletionModelSettings`, `OpenAICompletionModel`>

#### Implementation of

TextStreamingBaseModel.withPromptTemplate

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:127
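`withPromptTemplate` is the generic mechanism behind `withTextPrompt`, `withInstructionPrompt`, and `withChatPrompt`: any input prompt type can be mapped to the `string` prompt the completion API expects. A hypothetical custom template, assuming the `modelfusion` package and that `TextGenerationPromptTemplate` is an object with a `format` function and a `stopSequences` array (treat that interface as an assumption):

```ts
import { OpenAICompletionModel, generateText } from "modelfusion";

// Hypothetical Q/A template: formats a question into completion text
// and stops generation when the model starts a new question.
const questionAnswerTemplate = {
  format: (question: string) => `Q: ${question}\nA:`,
  stopSequences: ["\nQ:"],
};

const qaModel = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 200,
}).withPromptTemplate(questionAnswerTemplate);

const answer = await generateText(qaModel, "What does a tokenizer do?");
```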
### withSettings

▸ **withSettings**(`additionalSettings`): `OpenAICompletionModel`

The `withSettings` method creates a new model with the same configuration as the original model, but with the specified settings changed.

#### Parameters

Name | Type |
---|---|
`additionalSettings` | `Partial`<`OpenAICompletionModelSettings`> |

#### Returns

`OpenAICompletionModel`

**`Example`**

```ts
const model = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 500,
});

const modelWithMoreTokens = model.withSettings({
  maxGenerationTokens: 1000,
});
```

#### Implementation of

TextStreamingBaseModel.withSettings

#### Overrides

AbstractOpenAICompletionModel.withSettings

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:146
### withTextPrompt

▸ **withTextPrompt**(): `PromptTemplateTextStreamingModel`<`string`, `string`, `OpenAICompletionModelSettings`, `OpenAICompletionModel`>

Returns this model with a text prompt template.

#### Returns

`PromptTemplateTextStreamingModel`<`string`, `string`, `OpenAICompletionModelSettings`, `OpenAICompletionModel`>

#### Implementation of

TextStreamingBaseModel.withTextPrompt

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:115
## Properties

### contextWindowSize

• `Readonly` **contextWindowSize**: `number`

#### Implementation of

TextStreamingBaseModel.contextWindowSize

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:85

### provider

• `Readonly` **provider**: ``"openai"``

#### Overrides

AbstractOpenAICompletionModel.provider

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:80

### settings

• `Readonly` **settings**: `OpenAICompletionModelSettings`

#### Implementation of

TextStreamingBaseModel.settings

#### Inherited from

AbstractOpenAICompletionModel.settings

#### Defined in

packages/modelfusion/src/model-function/AbstractModel.ts:7

### tokenizer

• `Readonly` **tokenizer**: `TikTokenTokenizer`

#### Implementation of

TextStreamingBaseModel.tokenizer

#### Defined in

packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:86