# Interface: TextStreamingBaseModel<PROMPT, SETTINGS>

## Type parameters

| Name | Type |
| --- | --- |
| `PROMPT` | `PROMPT` |
| `SETTINGS` | extends `TextGenerationModelSettings` = `TextGenerationModelSettings` |

## Hierarchy

- `TextStreamingModel<PROMPT, SETTINGS>`

  ↳ **`TextStreamingBaseModel`**

## Implemented by

- `CohereTextGenerationModel`
- `LlamaCppCompletionModel`
- `MistralChatModel`
- `OllamaChatModel`
- `OllamaCompletionModel`
- `OpenAIChatModel`
- `OpenAICompatibleChatModel`
- `OpenAICompatibleCompletionModel`
- `OpenAICompletionModel`
## Accessors

### settingsForEvent

• `get` **settingsForEvent**(): `Partial<SETTINGS>`

Returns settings that should be recorded in observability events. Security-related settings (e.g. API keys) should not be included here.

#### Returns

`Partial<SETTINGS>`

#### Inherited from

TextStreamingModel.settingsForEvent

#### Defined in

packages/modelfusion/src/model-function/Model.ts:19
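As an illustration, a minimal model class (a toy, not one of the library's actual implementations) might strip security-related settings like this:

```ts
// Illustrative sketch only: a toy settings type and model class, not part
// of ModelFusion itself. settingsForEvent returns everything except the
// API key, so observability events never record the secret.
interface ToySettings {
  apiKey?: string;
  maxGenerationTokens?: number;
  temperature?: number;
}

class ToyModel {
  constructor(readonly settings: ToySettings) {}

  get settingsForEvent(): Partial<ToySettings> {
    // Drop security-related settings; keep everything else.
    const { apiKey, ...safeSettings } = this.settings;
    return safeSettings;
  }
}

const toy = new ToyModel({ apiKey: "secret", maxGenerationTokens: 500 });
console.log(toy.settingsForEvent); // { maxGenerationTokens: 500 }
```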
## Methods

### doGenerateTexts

▸ **doGenerateTexts**(`prompt`, `options?`): `PromiseLike<{ rawResponse: unknown; textGenerationResults: TextGenerationResult[]; usage?: { completionTokens: number; promptTokens: number; totalTokens: number } }>`

#### Parameters

| Name | Type |
| --- | --- |
| `prompt` | `PROMPT` |
| `options?` | `FunctionCallOptions` |

#### Returns

`PromiseLike<{ rawResponse: unknown; textGenerationResults: TextGenerationResult[]; usage?: { completionTokens: number; promptTokens: number; totalTokens: number } }>`

#### Inherited from

TextStreamingModel.doGenerateTexts

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:91
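To make the result shape concrete, here is a hypothetical stub with the documented `doGenerateTexts` signature; the raw response, finish reason, and token counts are invented for illustration:

```ts
// Hypothetical stub implementing the documented result shape. The raw
// response, finish reason, and usage numbers below are made up.
interface TextGenerationResult {
  text: string;
  finishReason: string; // e.g. "stop", "length", "unknown"
}

const stubModel = {
  async doGenerateTexts(prompt: string) {
    const textGenerationResults: TextGenerationResult[] = [
      { text: `Echo: ${prompt}`, finishReason: "stop" },
    ];
    return {
      rawResponse: { prompt } as unknown, // provider-specific payload
      textGenerationResults,
      usage: { promptTokens: 2, completionTokens: 3, totalTokens: 5 },
    };
  },
};

stubModel.doGenerateTexts("Hi").then(({ textGenerationResults, usage }) => {
  console.log(textGenerationResults[0].text); // "Echo: Hi"
  console.log(usage.totalTokens); // 5
});
```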
### doStreamText

▸ **doStreamText**(`prompt`, `options?`): `PromiseLike<AsyncIterable<Delta<unknown>>>`

#### Parameters

| Name | Type |
| --- | --- |
| `prompt` | `PROMPT` |
| `options?` | `FunctionCallOptions` |

#### Returns

`PromiseLike<AsyncIterable<Delta<unknown>>>`

#### Inherited from

TextStreamingModel.doStreamText

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:149
### extractTextDelta

▸ **extractTextDelta**(`delta`): `undefined` \| `string`

#### Parameters

| Name | Type |
| --- | --- |
| `delta` | `unknown` |

#### Returns

`undefined` \| `string`

#### Inherited from

TextStreamingModel.extractTextDelta

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:154
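The two methods above work together: `doStreamText` yields provider-specific raw deltas, and `extractTextDelta` pulls the text fragment out of each one, returning `undefined` for deltas that carry no text. A toy model with an invented delta shape:

```ts
// Toy model with an invented raw delta shape ({ token?: string }). Real
// providers each define their own delta format, which is exactly what
// extractTextDelta abstracts away from callers.
const toyStreamModel = {
  async doStreamText(prompt: string): Promise<AsyncIterable<unknown>> {
    async function* deltas() {
      yield { token: "Hello" };
      yield { event: "metadata" }; // a delta without any text
      yield { token: ", " + prompt };
    }
    return deltas();
  },
  extractTextDelta(delta: unknown): undefined | string {
    return (delta as { token?: string }).token;
  },
};

async function streamToString(prompt: string): Promise<string> {
  let text = "";
  for await (const rawDelta of await toyStreamModel.doStreamText(prompt)) {
    const piece = toyStreamModel.extractTextDelta(rawDelta);
    if (piece !== undefined) text += piece; // skip non-text deltas
  }
  return text;
}

streamToString("world").then(console.log); // "Hello, world"
```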
### restoreGeneratedTexts

▸ **restoreGeneratedTexts**(`rawResponse`): `Object`

#### Parameters

| Name | Type |
| --- | --- |
| `rawResponse` | `unknown` |

#### Returns

`Object`

| Name | Type |
| --- | --- |
| `rawResponse` | `unknown` |
| `textGenerationResults` | `TextGenerationResult[]` |
| `usage?` | `{ completionTokens: number; promptTokens: number; totalTokens: number }` |
| `usage.completionTokens` | `number` |
| `usage.promptTokens` | `number` |
| `usage.totalTokens` | `number` |

#### Inherited from

TextStreamingModel.restoreGeneratedTexts

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:104
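This method lets a cached or persisted raw provider response be turned back into text generation results without another API call. A sketch with an invented raw response shape:

```ts
// Sketch only: the raw response shape ({ completions: string[] }) is
// invented here; a real model would parse its own provider's format.
interface RestoredResult {
  text: string;
  finishReason: string;
}

const sketchModel = {
  restoreGeneratedTexts(rawResponse: unknown) {
    const parsed = rawResponse as { completions: string[] };
    return {
      rawResponse,
      textGenerationResults: parsed.completions.map(
        (text): RestoredResult => ({ text, finishReason: "unknown" })
      ),
    };
  },
};

// e.g. a response previously stored in a cache:
const cached = { completions: ["Hello!"] };
const { textGenerationResults } = sketchModel.restoreGeneratedTexts(cached);
console.log(textGenerationResults[0].text); // "Hello!"
```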
### withChatPrompt

▸ **withChatPrompt**(): `TextStreamingModel<ChatPrompt, SETTINGS>`

Returns this model with a chat prompt template.

#### Returns

`TextStreamingModel<ChatPrompt, SETTINGS>`

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:174
### withInstructionPrompt

▸ **withInstructionPrompt**(): `TextStreamingModel<InstructionPrompt, SETTINGS>`

Returns this model with an instruction prompt template.

#### Returns

`TextStreamingModel<InstructionPrompt, SETTINGS>`

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:169
### withJsonOutput

▸ **withJsonOutput**(`schema`): `this`

When possible, limit the output generation to the specified JSON schema, or supersets of it (e.g. JSON in general).

#### Parameters

| Name | Type |
| --- | --- |
| `schema` | `Schema<unknown>` & `JsonSchemaProducer` |

#### Returns

`this`

#### Inherited from

TextStreamingModel.withJsonOutput

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:118
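As a sketch of what an implementation might do with the schema (the `Schema`/`JsonSchemaProducer` shapes below are simplified stand-ins): a model with constrained decoding can turn the JSON schema into a grammar, while others may only enable a generic JSON mode.

```ts
// Simplified stand-in for the Schema<unknown> & JsonSchemaProducer
// parameter type; the real interfaces carry more than this.
interface JsonSchemaProducer {
  getJsonSchema(): unknown;
}

class ToyJsonModel {
  jsonSchema: unknown = undefined;

  // A real implementation would map the schema onto provider settings
  // (e.g. a decoding grammar or a JSON response format) when possible.
  withJsonOutput(schema: JsonSchemaProducer): this {
    this.jsonSchema = schema.getJsonSchema();
    return this;
  }
}

const jsonModel = new ToyJsonModel().withJsonOutput({
  getJsonSchema: () => ({
    type: "object",
    properties: { name: { type: "string" } },
  }),
});
console.log((jsonModel.jsonSchema as { type: string }).type); // "object"
```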
### withPromptTemplate

▸ **withPromptTemplate**<`INPUT_PROMPT`>(`promptTemplate`): `TextStreamingModel<INPUT_PROMPT, SETTINGS>`

#### Type parameters

| Name |
| --- |
| `INPUT_PROMPT` |

#### Parameters

| Name | Type |
| --- | --- |
| `promptTemplate` | `TextGenerationPromptTemplate<INPUT_PROMPT, PROMPT>` |

#### Returns

`TextStreamingModel<INPUT_PROMPT, SETTINGS>`

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:176
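A prompt template maps the caller-facing `INPUT_PROMPT` type onto the model's native `PROMPT` type. As a sketch (the interface below is a simplified stand-in for `TextGenerationPromptTemplate`, and the Alpaca-style layout is just an example), a template converting a structured instruction prompt into a plain string prompt might look like this:

```ts
// Simplified stand-in for TextGenerationPromptTemplate: it formats an
// input prompt into the model's native prompt type and can contribute
// stop sequences for generation.
interface PromptTemplate<INPUT_PROMPT, PROMPT> {
  format(prompt: INPUT_PROMPT): PROMPT;
  stopSequences: string[];
}

// Example template: structured instruction prompt -> plain string prompt.
const instructionToText: PromptTemplate<{ instruction: string }, string> = {
  format: ({ instruction }) =>
    `### Instruction:\n${instruction}\n\n### Response:\n`,
  stopSequences: ["\n### Instruction:"],
};

console.log(instructionToText.format({ instruction: "Say hello." }));
```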
### withSettings

▸ **withSettings**(`additionalSettings`): `this`

The `withSettings` method creates a new model with the same configuration as the original model, but with the specified settings changed.

#### Parameters

| Name | Type |
| --- | --- |
| `additionalSettings` | `Partial<SETTINGS>` |

#### Returns

`this`

#### Example

```ts
const model = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 500,
});

const modelWithMoreTokens = model.withSettings({
  maxGenerationTokens: 1000,
});
```

#### Inherited from

TextStreamingModel.withSettings

#### Defined in

packages/modelfusion/src/model-function/Model.ts:34
### withTextPrompt

▸ **withTextPrompt**(): `TextStreamingModel<string, SETTINGS>`

Returns this model with a text prompt template.

#### Returns

`TextStreamingModel<string, SETTINGS>`

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:164
## Properties

### contextWindowSize

• `Readonly` **contextWindowSize**: `undefined` \| `number`

#### Inherited from

TextStreamingModel.contextWindowSize

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:80

### countPromptTokens

• `Readonly` **countPromptTokens**: `undefined` \| (`prompt`: `PROMPT`) => `PromiseLike<number>`

Optional. Implement if you have a tokenizer and want to count the number of tokens in a prompt.

#### Inherited from

TextStreamingModel.countPromptTokens

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:87
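Because `countPromptTokens` is optional (it is `undefined` when the model has no tokenizer), callers need a guard before using it. A sketch with a toy word-based counter standing in for a real tokenizer:

```ts
// Sketch: guarding the optional countPromptTokens property together with
// contextWindowSize. The word-based counter below is a toy stand-in for
// a real tokenizer, not how any actual model counts tokens.
type Counter<PROMPT> = undefined | ((prompt: PROMPT) => PromiseLike<number>);

async function promptFits(
  countPromptTokens: Counter<string>,
  contextWindowSize: undefined | number,
  prompt: string
): Promise<boolean | undefined> {
  // Without a tokenizer or a known context window we cannot tell.
  if (countPromptTokens === undefined || contextWindowSize === undefined) {
    return undefined;
  }
  return (await countPromptTokens(prompt)) <= contextWindowSize;
}

// Toy counter: one "token" per whitespace-separated word.
const countByWords = async (prompt: string) => prompt.split(/\s+/).length;

promptFits(countByWords, 4, "one two three").then(console.log); // true
```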
### modelInformation

• **modelInformation**: `ModelInformation`

#### Inherited from

TextStreamingModel.modelInformation

#### Defined in

packages/modelfusion/src/model-function/Model.ts:12
### settings

• `Readonly` **settings**: `SETTINGS`

#### Inherited from

TextStreamingModel.settings

#### Defined in

packages/modelfusion/src/model-function/Model.ts:13

### tokenizer

• `Readonly` **tokenizer**: `undefined` \| `BasicTokenizer` \| `FullTokenizer`

#### Inherited from

TextStreamingModel.tokenizer

#### Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:82