# Interface: AbstractOpenAIChatSettings

## Hierarchy

- TextGenerationModelSettings

  ↳ **AbstractOpenAIChatSettings**

## Properties
### api

• `Optional` **api**: `ApiConfiguration`

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:24
### frequencyPenalty

• `Optional` **frequencyPenalty**: `number`

Reduces the likelihood of the model repeatedly using the same words or phrases in its responses. A higher frequency penalty promotes a wider variety of language and expressions in the output. This is particularly useful in creative writing or content generation tasks where diversity in language is desirable.

Example: `frequencyPenalty: 0.5` // Moderately discourages repetitive language.

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:87
### functionCall

• `Optional` **functionCall**: `"auto" | { name: string } | "none"`

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:33
### functions

• `Optional` **functions**: `{ description?: string; name: string; parameters: unknown }[]`

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:28
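A minimal sketch of a function definition and a forced function call, shaped to match the `functions` and `functionCall` types above. The function name and parameter schema are illustrative, not part of this interface:

```typescript
// Hypothetical function definition; the name and JSON Schema are illustrative.
const functions = [
  {
    name: "getWeather",
    description: "Look up the current weather for a city.",
    parameters: {
      // `parameters` is typed `unknown`; OpenAI expects a JSON Schema object here.
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
];

// "auto" lets the model decide, "none" disables calls,
// and { name } forces a specific function to be called.
const functionCall = { name: "getWeather" };
```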
### isUserIdForwardingEnabled

• `Optional` **isUserIdForwardingEnabled**: `boolean`

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:95
### logitBias

• `Optional` **logitBias**: `Record<number, number>`

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:93
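A sketch of a bias map matching the `Record<number, number>` type above. The token IDs are hypothetical; real IDs depend on the model's tokenizer:

```typescript
// Maps token IDs to bias values. Strongly negative values (around -100)
// effectively ban a token; positive values make it more likely.
// The token IDs below are illustrative placeholders.
const logitBias: Record<number, number> = {
  50256: -100, // effectively ban this token
  198: 2,      // mildly favor this token
};
```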
### maxGenerationTokens

• `Optional` **maxGenerationTokens**: `number`

Specifies the maximum number of tokens (words, punctuation, parts of words) that the model can generate in a single response. It helps to control the length of the output.

Does nothing if the model does not support this setting.

Example: `maxGenerationTokens: 1000`

#### Inherited from

TextGenerationModelSettings.maxGenerationTokens

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:28
### model

• **model**: `string`

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:26
### numberOfGenerations

• `Optional` **numberOfGenerations**: `number`

Number of texts to generate.

Specifies the number of responses or completions the model should generate for a given prompt. This is useful when you need multiple different outputs or ideas for a single prompt. The model will generate 'n' distinct responses, each based on the same initial prompt. In a streaming model, all responses are streamed back in real time.

Does nothing if the model does not support this setting.

Example: `numberOfGenerations: 3` // The model will produce 3 different responses.

#### Inherited from

TextGenerationModelSettings.numberOfGenerations

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:55
### observers

• `Optional` **observers**: `FunctionObserver[]`

Observers that are called when the model is used in run functions.

#### Inherited from

TextGenerationModelSettings.observers

#### Defined in

packages/modelfusion/src/model-function/Model.ts:8
### presencePenalty

• `Optional` **presencePenalty**: `number`

Discourages the model from repeating information or context already mentioned in the conversation or prompt. Increasing this value encourages the model to introduce new topics or ideas rather than reiterating what has been said. This is useful for maintaining a diverse and engaging conversation, or for brainstorming sessions where varied ideas are needed.

Example: `presencePenalty: 1.0` // Strongly discourages repeating the same content.

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:79
### responseFormat

• `Optional` **responseFormat**: `Object`

#### Type declaration

| Name | Type |
| :--- | :--- |
| `type?` | `"text"` \| `"json_object"` |

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:89
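Requesting JSON output, shaped to match the type declaration above:

```typescript
// Ask the model to return a JSON object instead of free-form text.
const responseFormat: { type?: "text" | "json_object" } = {
  type: "json_object",
};
```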
### seed

• `Optional` **seed**: `null | number`

Used to set the initial state for the random number generator in the model. Providing a specific seed value ensures consistent outputs for the same inputs across different runs, which is useful for testing and reproducibility. A `null` value (or not setting it) results in varied, non-repeatable outputs each time.

Example: `seed: 89` (or) `seed: null`

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:71
### stopSequences

• `Optional` **stopSequences**: `string[]`

Stop sequences to use. Stop sequences are an array of strings (or a single string) that the model recognizes as end-of-text indicators; the model stops generating more content when it encounters any of them. This is particularly useful in scripted or formatted text generation, where a specific end point is required. Stop sequences are not included in the generated text.

Does nothing if the model does not support this setting.

Example: `stopSequences: ['\n', 'END']`

#### Inherited from

TextGenerationModelSettings.stopSequences

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:41
### temperature

• `Optional` **temperature**: `number`

Controls the randomness and creativity in the model's responses. A lower temperature (close to 0) results in more predictable, conservative text, while a higher temperature (close to 1) produces more varied and creative output. Adjust this to balance between consistency and creativity in the model's replies.

Example: `temperature: 0.5`

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:54
### toolChoice

• `Optional` **toolChoice**: `"auto" | "none" | { function: { name: string }; type: "function" }`

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:43
### tools

• `Optional` **tools**: `{ function: { description?: string; name: string; parameters: unknown }; type: "function" }[]`

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:35
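A sketch of a tool definition and a matching forced tool choice, shaped to match the `tools` and `toolChoice` types above. The tool name and schema are illustrative:

```typescript
// Hypothetical tool definition; name and JSON Schema are illustrative.
const tools = [
  {
    type: "function" as const,
    function: {
      name: "searchDocs",
      description: "Search the documentation for a query.",
      parameters: {
        // `parameters` is typed `unknown`; OpenAI expects a JSON Schema object.
        type: "object",
        properties: { query: { type: "string" } },
        required: ["query"],
      },
    },
  },
];

// Force the model to call the tool defined above.
// Use "auto" to let the model decide, or "none" to disable tool calls.
const toolChoice = {
  type: "function" as const,
  function: { name: "searchDocs" },
};
```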
### topP

• `Optional` **topP**: `number`

Sets a threshold for token selection based on probability. The model will only consider the most likely tokens that cumulatively exceed this threshold while generating a response. It is a way to control the randomness of the output, balancing between diverse responses and sticking to more likely words. This means a `topP` of 0.1 will be far less random than one of 0.9.

Example: `topP: 0.2`

#### Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:63
### trimWhitespace

• `Optional` **trimWhitespace**: `boolean`

When `true`, the leading and trailing white space and line terminator characters are removed from the generated text.

Default: `true`.

#### Inherited from

TextGenerationModelSettings.trimWhitespace

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:63
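Putting the pieces together, a sketch of a settings object combining several of the properties above. The values mirror the per-property examples in this document; the model id is illustrative:

```typescript
// Illustrative settings object; property names follow this interface,
// the model id and the specific values are example choices, not defaults.
const settings = {
  model: "gpt-3.5-turbo",
  temperature: 0.5,          // balance consistency and creativity
  maxGenerationTokens: 1000, // cap response length
  numberOfGenerations: 3,    // produce 3 distinct responses
  frequencyPenalty: 0.5,     // moderately discourage repetitive language
  presencePenalty: 1.0,      // strongly discourage repeated content
  stopSequences: ["\n", "END"],
  seed: 89,                  // reproducible outputs for the same input
  trimWhitespace: true,
};
```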