Class: OpenAICompatibleCompletionModel
Create a text generation model that calls an API that is compatible with OpenAI's completion API.
Please note that many providers implement the API with slight differences, which can cause unexpected errors and different behavior in less common scenarios.
See https://platform.openai.com/docs/api-reference/completions/create
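The request body such an endpoint accepts can be sketched as follows. This is a minimal, illustrative sketch of the OpenAI completions request shape linked above; `buildCompletionRequest` and its parameter names are hypothetical helpers, not part of ModelFusion's API.

```typescript
// Shape of an OpenAI-compatible completion request, per the API reference
// linked above (only the most common fields are shown here).
interface CompletionRequest {
  model: string;
  prompt: string;
  max_tokens?: number;
  temperature?: number;
}

// Hypothetical helper: translate a "maxGenerationTokens"-style option into
// the snake_case field the endpoint expects.
function buildCompletionRequest(
  model: string,
  prompt: string,
  options?: { maxGenerationTokens?: number; temperature?: number }
): CompletionRequest {
  return {
    model,
    prompt,
    max_tokens: options?.maxGenerationTokens,
    temperature: options?.temperature,
  };
}

const request = buildCompletionRequest("my-hosted-model", "Hello", {
  maxGenerationTokens: 64,
});
```

Providers that are "OpenAI compatible" generally accept this shape, but, as noted above, may diverge on optional fields.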
Hierarchy

- AbstractOpenAICompletionModel<OpenAICompatibleCompletionModelSettings>

  ↳ OpenAICompatibleCompletionModel
Implements

- TextStreamingBaseModel
Accessors
modelInformation
• get modelInformation(): ModelInformation

Returns

ModelInformation
Implementation of
TextStreamingBaseModel.modelInformation
Inherited from
AbstractOpenAICompletionModel.modelInformation
Defined in
packages/modelfusion/src/model-function/AbstractModel.ts:17
modelName
• get modelName(): string
Returns
string
Overrides
AbstractOpenAICompletionModel.modelName
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleCompletionModel.ts:50
provider
• get provider(): OpenAICompatibleProviderName

Returns

OpenAICompatibleProviderName
Overrides
AbstractOpenAICompletionModel.provider
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleCompletionModel.ts:44
settingsForEvent
• get settingsForEvent(): Partial<OpenAICompatibleCompletionModelSettings>
Returns settings that should be recorded in observability events. Security-related settings (e.g. API keys) should not be included here.
Returns
Partial<OpenAICompatibleCompletionModelSettings>
Implementation of
TextStreamingBaseModel.settingsForEvent
Overrides
AbstractOpenAICompletionModel.settingsForEvent
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleCompletionModel.ts:58
Constructors
constructor
• new OpenAICompatibleCompletionModel(settings): OpenAICompatibleCompletionModel
Parameters
Name | Type |
---|---|
settings | OpenAICompatibleCompletionModelSettings |
Returns
OpenAICompatibleCompletionModel
Overrides
AbstractOpenAICompletionModel.constructor
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleCompletionModel.ts:40
Methods
callAPI
▸ callAPI<RESULT>(prompt, callOptions, options): Promise<RESULT>
Type parameters
Name |
---|
RESULT |
Parameters
Name | Type |
---|---|
prompt | string |
callOptions | FunctionCallOptions |
options | Object |
options.responseFormat | OpenAITextResponseFormatType <RESULT > |
Returns
Promise<RESULT>
Inherited from
AbstractOpenAICompletionModel.callAPI
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:51
doGenerateTexts
▸ doGenerateTexts(prompt, options): Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "content_filter"; index: number; logprobs?: any; text: string }[]; created: number; id: string; model: string; object: "text_completion"; system_fingerprint?: string; usage: { completion_tokens: number; prompt_tokens: number; total_tokens: number } }; textGenerationResults: { finishReason: TextGenerationFinishReason; text: string = choice.text }[]; usage: { completionTokens: number = rawResponse.usage.completion_tokens; promptTokens: number = rawResponse.usage.prompt_tokens; totalTokens: number = rawResponse.usage.total_tokens } }>
Parameters
Name | Type |
---|---|
prompt | string |
options | FunctionCallOptions |
Returns
Promise<{ rawResponse: { choices: { finish_reason?: null | "length" | "stop" | "content_filter"; index: number; logprobs?: any; text: string }[]; created: number; id: string; model: string; object: "text_completion"; system_fingerprint?: string; usage: { completion_tokens: number; prompt_tokens: number; total_tokens: number } }; textGenerationResults: { finishReason: TextGenerationFinishReason; text: string = choice.text }[]; usage: { completionTokens: number = rawResponse.usage.completion_tokens; promptTokens: number = rawResponse.usage.prompt_tokens; totalTokens: number = rawResponse.usage.total_tokens } }>
Implementation of
TextStreamingBaseModel.doGenerateTexts
Inherited from
AbstractOpenAICompletionModel.doGenerateTexts
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:111
doStreamText
▸ doStreamText(prompt, options): Promise<AsyncIterable<Delta<{ choices: { finish_reason?: null | "length" | "stop" | "content_filter"; index: number; text: string }[]; created: number; id: string; model: string; object: "text_completion"; system_fingerprint?: string }>>>
Parameters
Name | Type |
---|---|
prompt | string |
options | FunctionCallOptions |
Returns
Promise<AsyncIterable<Delta<{ choices: { finish_reason?: null | "length" | "stop" | "content_filter"; index: number; text: string }[]; created: number; id: string; model: string; object: "text_completion"; system_fingerprint?: string }>>>
Implementation of
TextStreamingBaseModel.doStreamText
Inherited from
AbstractOpenAICompletionModel.doStreamText
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:160
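Consuming the stream amounts to iterating the async iterable and concatenating the `text` of each chunk's choices. The sketch below is illustrative: the `Chunk` type mirrors the delta payload documented above, while `collectText` and `mockStream` are hypothetical helpers, not part of ModelFusion's API.

```typescript
// A streamed chunk, mirroring the delta payload shape documented above
// (only the fields used here are declared).
type Chunk = {
  choices: {
    finish_reason?: null | "length" | "stop" | "content_filter";
    index: number;
    text: string;
  }[];
};

// Concatenate the text deltas of every chunk into the full completion.
async function collectText(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    for (const choice of chunk.choices) {
      text += choice.text;
    }
  }
  return text;
}

// A mock stream standing in for the provider's server-sent events:
async function* mockStream(): AsyncIterable<Chunk> {
  yield { choices: [{ index: 0, text: "Hello, " }] };
  yield { choices: [{ index: 0, text: "world!", finish_reason: "stop" }] };
}

const collected = await collectText(mockStream());
```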
extractTextDelta
▸ extractTextDelta(delta): undefined | string
Parameters
Name | Type |
---|---|
delta | unknown |
Returns
undefined | string
Implementation of
TextStreamingBaseModel.extractTextDelta
Inherited from
AbstractOpenAICompletionModel.extractTextDelta
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:166
restoreGeneratedTexts
▸ restoreGeneratedTexts(rawResponse): Object
Parameters
Name | Type |
---|---|
rawResponse | unknown |
Returns
Object
Name | Type |
---|---|
rawResponse | { choices : { finish_reason? : null | "length" | "stop" | "content_filter" ; index : number ; logprobs? : any ; text : string }[] ; created : number ; id : string ; model : string ; object : "text_completion" ; system_fingerprint? : string ; usage : { completion_tokens : number ; prompt_tokens : number ; total_tokens : number } } |
rawResponse.choices | { finish_reason? : null | "length" | "stop" | "content_filter" ; index : number ; logprobs? : any ; text : string }[] |
rawResponse.created | number |
rawResponse.id | string |
rawResponse.model | string |
rawResponse.object | "text_completion" |
rawResponse.system_fingerprint? | string |
rawResponse.usage | { completion_tokens : number ; prompt_tokens : number ; total_tokens : number } |
rawResponse.usage.completion_tokens | number |
rawResponse.usage.prompt_tokens | number |
rawResponse.usage.total_tokens | number |
textGenerationResults | { finishReason : TextGenerationFinishReason ; text : string = choice.text }[] |
usage | { completionTokens : number = rawResponse.usage.completion_tokens; promptTokens : number = rawResponse.usage.prompt_tokens; totalTokens : number = rawResponse.usage.total_tokens } |
usage.completionTokens | number |
usage.promptTokens | number |
usage.totalTokens | number |
Implementation of
TextStreamingBaseModel.restoreGeneratedTexts
Inherited from
AbstractOpenAICompletionModel.restoreGeneratedTexts
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:119
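The table above documents a reshaping of the raw OpenAI-style response into `textGenerationResults` and a normalized camelCase `usage` object (`text = choice.text`, `completionTokens = rawResponse.usage.completion_tokens`, and so on). A minimal sketch of that mapping, with an illustrative `mapResponse` helper and a simplified finish-reason passthrough that is not the library's actual conversion logic:

```typescript
// Only the fields needed for the mapping are declared here.
type RawResponse = {
  choices: {
    finish_reason?: null | "length" | "stop" | "content_filter";
    text: string;
  }[];
  usage: { completion_tokens: number; prompt_tokens: number; total_tokens: number };
};

function mapResponse(rawResponse: RawResponse) {
  return {
    rawResponse,
    // One result per choice; text is taken directly from choice.text.
    textGenerationResults: rawResponse.choices.map((choice) => ({
      finishReason: choice.finish_reason ?? "unknown", // simplified passthrough
      text: choice.text,
    })),
    // Snake_case usage fields are renamed to camelCase.
    usage: {
      completionTokens: rawResponse.usage.completion_tokens,
      promptTokens: rawResponse.usage.prompt_tokens,
      totalTokens: rawResponse.usage.total_tokens,
    },
  };
}

const mapped = mapResponse({
  choices: [{ finish_reason: "stop", text: "42" }],
  usage: { completion_tokens: 1, prompt_tokens: 5, total_tokens: 6 },
});
```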
withChatPrompt
▸ withChatPrompt(options?): PromptTemplateTextStreamingModel<ChatPrompt, string, OpenAICompatibleCompletionModelSettings, OpenAICompatibleCompletionModel>
Returns this model with a chat prompt template.
Parameters
Name | Type |
---|---|
options? | Object |
options.assistant? | string |
options.user? | string |
Returns
PromptTemplateTextStreamingModel<ChatPrompt, string, OpenAICompatibleCompletionModelSettings, OpenAICompatibleCompletionModel>
Implementation of
TextStreamingBaseModel.withChatPrompt
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleCompletionModel.ts:89
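Conceptually, a chat prompt template for a completion model flattens the message list into one text prompt, using the `user` and `assistant` strings as speaker prefixes. The sketch below illustrates that idea under stated assumptions; the `ChatMessage` shape and `formatChatPrompt` are hypothetical, not ModelFusion's actual template.

```typescript
type ChatMessage = { role: "user" | "assistant"; content: string };

// Flatten chat messages into a single text prompt, prefixing each message
// with the configured speaker label.
function formatChatPrompt(
  messages: ChatMessage[],
  options?: { user?: string; assistant?: string }
): string {
  const user = options?.user ?? "user";
  const assistant = options?.assistant ?? "assistant";
  const lines = messages.map(
    (m) => `${m.role === "user" ? user : assistant}: ${m.content}`
  );
  // End with the assistant prefix so the model continues as the assistant:
  return lines.join("\n") + `\n${assistant}:`;
}

const chatPromptText = formatChatPrompt([{ role: "user", content: "Hi" }], {
  user: "User",
  assistant: "Bot",
});
```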
withInstructionPrompt
▸ withInstructionPrompt(): PromptTemplateTextStreamingModel<InstructionPrompt, string, OpenAICompatibleCompletionModelSettings, OpenAICompatibleCompletionModel>
Returns this model with an instruction prompt template.
Returns
PromptTemplateTextStreamingModel<InstructionPrompt, string, OpenAICompatibleCompletionModelSettings, OpenAICompatibleCompletionModel>
Implementation of
TextStreamingBaseModel.withInstructionPrompt
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleCompletionModel.ts:85
withJsonOutput
▸ withJsonOutput(): this
When possible, restricts output generation to the specified JSON schema, or supersets of it (e.g. JSON in general).
Returns
this
Implementation of
TextStreamingBaseModel.withJsonOutput
Inherited from
AbstractOpenAICompletionModel.withJsonOutput
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:178
withPromptTemplate
▸ withPromptTemplate<INPUT_PROMPT>(promptTemplate): PromptTemplateTextStreamingModel<INPUT_PROMPT, string, OpenAICompatibleCompletionModelSettings, OpenAICompatibleCompletionModel>
Type parameters
Name |
---|
INPUT_PROMPT |
Parameters
Name | Type |
---|---|
promptTemplate | TextGenerationPromptTemplate <INPUT_PROMPT , string > |
Returns
PromptTemplateTextStreamingModel<INPUT_PROMPT, string, OpenAICompatibleCompletionModelSettings, OpenAICompatibleCompletionModel>
Implementation of
TextStreamingBaseModel.withPromptTemplate
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleCompletionModel.ts:93
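At its core, a prompt template maps an arbitrary input type to the string prompt the model consumes. The sketch below shows that shape with a `format` function; the `SummarizeInput` type and `stopSequences` field are illustrative assumptions based on common template shapes, not a verbatim `TextGenerationPromptTemplate`.

```typescript
// A hypothetical input type for a summarization use case.
type SummarizeInput = { text: string };

// A template object: format() maps the typed input to the string prompt.
const summarizeTemplate = {
  format: (input: SummarizeInput) =>
    `Summarize:\n${input.text}\n\nSummary:`,
  stopSequences: [] as string[], // assumed field; adjust to the real interface
};

const formatted = summarizeTemplate.format({ text: "Long article." });
```

A template like this would then be passed to `withPromptTemplate` so callers can generate text from `SummarizeInput` values directly.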
withSettings
▸ withSettings(additionalSettings): OpenAICompatibleCompletionModel

The withSettings method creates a new model with the same configuration as the original model, but with the specified settings changed.
Parameters
Name | Type |
---|---|
additionalSettings | Partial <OpenAICompatibleCompletionModelSettings > |
Returns
OpenAICompatibleCompletionModel
Example

```ts
const model = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 500,
});

const modelWithMoreTokens = model.withSettings({
  maxGenerationTokens: 1000,
});
```
Implementation of
TextStreamingBaseModel.withSettings
Overrides
AbstractOpenAICompletionModel.withSettings
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleCompletionModel.ts:112
withTextPrompt
▸ withTextPrompt(): PromptTemplateTextStreamingModel<string, string, OpenAICompatibleCompletionModelSettings, OpenAICompatibleCompletionModel>
Returns this model with a text prompt template.
Returns
PromptTemplateTextStreamingModel<string, string, OpenAICompatibleCompletionModelSettings, OpenAICompatibleCompletionModel>
Implementation of
TextStreamingBaseModel.withTextPrompt
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleCompletionModel.ts:81
Properties
contextWindowSize
• Readonly contextWindowSize: undefined = undefined
Implementation of
TextStreamingBaseModel.contextWindowSize
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleCompletionModel.ts:54
countPromptTokens
• Readonly countPromptTokens: undefined = undefined
Optional. Implement if you have a tokenizer and want to count the number of tokens in a prompt.
Implementation of
TextStreamingBaseModel.countPromptTokens
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleCompletionModel.ts:56
settings
• Readonly settings: OpenAICompatibleCompletionModelSettings
Implementation of
TextStreamingBaseModel.settings
Inherited from
AbstractOpenAICompletionModel.settings
Defined in
packages/modelfusion/src/model-function/AbstractModel.ts:7
tokenizer
• Readonly tokenizer: undefined = undefined
Implementation of
TextStreamingBaseModel.tokenizer
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleCompletionModel.ts:55