Interface: TextStreamingModel<PROMPT, SETTINGS>

Type parameters

Name       Type
PROMPT     PROMPT
SETTINGS   extends TextGenerationModelSettings = TextGenerationModelSettings

Hierarchy

TextGenerationModel<PROMPT, SETTINGS>
  ↳ TextStreamingModel

Implemented by

Accessors

settingsForEvent

get settingsForEvent(): Partial<SETTINGS>

Returns settings that should be recorded in observability events. Security-related settings (e.g. API keys) should not be included here.

Returns

Partial<SETTINGS>

Inherited from

TextGenerationModel.settingsForEvent

Defined in

packages/modelfusion/src/model-function/Model.ts:19
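
For illustration, a hedged sketch of reading this accessor when building a log event; it assumes the TextStreamingModel and TextGenerationModelSettings types are exported from the modelfusion package root.

import type { TextStreamingModel, TextGenerationModelSettings } from "modelfusion";

// Hypothetical logging helper: records only the model's observable settings.
// settingsForEvent is expected to omit secrets such as API keys.
function logModelSettings<SETTINGS extends TextGenerationModelSettings>(
  model: TextStreamingModel<unknown, SETTINGS>
) {
  console.log("model settings:", model.settingsForEvent);
}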

Methods

doGenerateTexts

doGenerateTexts(prompt, options?): PromiseLike<{ rawResponse: unknown ; textGenerationResults: TextGenerationResult[] ; usage?: { completionTokens: number ; promptTokens: number ; totalTokens: number } }>

Parameters

Name       Type
prompt     PROMPT
options?   FunctionCallOptions

Returns

PromiseLike<{ rawResponse: unknown ; textGenerationResults: TextGenerationResult[] ; usage?: { completionTokens: number ; promptTokens: number ; totalTokens: number } }>

Inherited from

TextGenerationModel.doGenerateTexts

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:91
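
As a hedged usage sketch (applications normally go through the higher-level generateText function rather than this low-level method), the result shape documented above can be consumed like this; the string prompt type and the text field on TextGenerationResult are assumptions.

import type { TextStreamingModel } from "modelfusion";

async function generateOnce(model: TextStreamingModel<string>, prompt: string) {
  const { textGenerationResults, usage } = await model.doGenerateTexts(prompt);

  for (const result of textGenerationResults) {
    console.log(result.text); // assumes each TextGenerationResult carries a text field
  }

  if (usage !== undefined) {
    console.log(
      `${usage.promptTokens} prompt + ${usage.completionTokens} completion = ${usage.totalTokens} tokens`
    );
  }
}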


doStreamText

doStreamText(prompt, options?): PromiseLike<AsyncIterable<Delta<unknown>>>

Parameters

Name       Type
prompt     PROMPT
options?   FunctionCallOptions

Returns

PromiseLike<AsyncIterable<Delta<unknown>>>

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:149


extractTextDelta

extractTextDelta(delta): undefined | string

Parameters

Name    Type
delta   unknown

Returns

undefined | string

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:154
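
Taken together, doStreamText and extractTextDelta form the low-level streaming protocol: the model yields opaque deltas, and extractTextDelta maps each one to the next chunk of text (or undefined when a delta carries no text). A minimal sketch, assuming a string prompt; applications would normally use the higher-level streamText function instead.

import type { TextStreamingModel } from "modelfusion";

async function streamToStdout(model: TextStreamingModel<string>, prompt: string) {
  const deltas = await model.doStreamText(prompt);

  for await (const delta of deltas) {
    const textDelta = model.extractTextDelta(delta);
    if (textDelta !== undefined) {
      process.stdout.write(textDelta);
    }
  }
}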


restoreGeneratedTexts

restoreGeneratedTexts(rawResponse): Object

Parameters

Name          Type
rawResponse   unknown

Returns

Object

Name                     Type
rawResponse              unknown
textGenerationResults    TextGenerationResult[]
usage?                   { completionTokens: number ; promptTokens: number ; totalTokens: number }
usage.completionTokens   number
usage.promptTokens       number
usage.totalTokens        number

Inherited from

TextGenerationModel.restoreGeneratedTexts

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:104
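
A hedged sketch of what this enables: rebuilding the structured result from a raw provider response that was stored earlier, for example by a cache or a replay tool. The cache source and the text field on TextGenerationResult are assumptions here.

import type { TextStreamingModel } from "modelfusion";

// cachedRawResponse is whatever rawResponse was persisted from an earlier call.
function replayFromCache(model: TextStreamingModel<string>, cachedRawResponse: unknown) {
  const { textGenerationResults, usage } = model.restoreGeneratedTexts(cachedRawResponse);
  return { texts: textGenerationResults.map((result) => result.text), usage };
}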


withJsonOutput

withJsonOutput(schema): this

When possible, limit the output generation to the specified JSON schema, or supersets of it (e.g. JSON in general).

Parameters

Name     Type
schema   Schema<unknown> & JsonSchemaProducer

Returns

this

Inherited from

TextGenerationModel.withJsonOutput

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:118
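
For illustration, a hedged sketch of constraining a model's output with a JSON schema; it assumes that zodSchema (a wrapper that turns a Zod schema into a Schema that can also produce a JSON schema) is exported from the modelfusion package root, and the object shape is purely illustrative.

import { zodSchema } from "modelfusion";
import type { TextStreamingModel } from "modelfusion";
import { z } from "zod";

function constrainToJson(model: TextStreamingModel<string>) {
  const schema = zodSchema(
    z.object({
      name: z.string(),
      age: z.number(),
    })
  );

  // Returns a model of the same type that favors output matching the schema
  // (or JSON in general) when the underlying provider can enforce it.
  return model.withJsonOutput(schema);
}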


withSettings

withSettings(additionalSettings): this

The withSettings method creates a new model with the same configuration as the original model, but with the specified settings changed.

Parameters

Name                 Type
additionalSettings   Partial<SETTINGS>

Returns

this

Example

const model = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 500,
});

const modelWithMoreTokens = model.withSettings({
  maxGenerationTokens: 1000,
});

Inherited from

TextGenerationModel.withSettings

Defined in

packages/modelfusion/src/model-function/Model.ts:34

Properties

contextWindowSize

Readonly contextWindowSize: undefined | number

Inherited from

TextGenerationModel.contextWindowSize

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:80


countPromptTokens

Readonly countPromptTokens: undefined | (prompt: PROMPT) => PromiseLike<number>

Optional. Implement if you have a tokenizer and want to count the number of tokens in a prompt.

Inherited from

TextGenerationModel.countPromptTokens

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:87
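
As a hedged sketch, this optional member can be combined with contextWindowSize (documented above) to check whether a prompt fits the model's context window before sending it; the reserved completion budget is an arbitrary illustrative value.

import type { TextStreamingModel } from "modelfusion";

async function fitsContextWindow(
  model: TextStreamingModel<string>,
  prompt: string,
  reservedCompletionTokens = 256
): Promise<boolean | undefined> {
  // Both members are optional; return undefined when the model cannot count tokens.
  if (model.countPromptTokens === undefined || model.contextWindowSize === undefined) {
    return undefined;
  }
  const promptTokens = await model.countPromptTokens(prompt);
  return promptTokens + reservedCompletionTokens <= model.contextWindowSize;
}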


modelInformation

modelInformation: ModelInformation

Inherited from

TextGenerationModel.modelInformation

Defined in

packages/modelfusion/src/model-function/Model.ts:12


settings

Readonly settings: SETTINGS

Inherited from

TextGenerationModel.settings

Defined in

packages/modelfusion/src/model-function/Model.ts:13


tokenizer

Readonly tokenizer: undefined | BasicTokenizer | FullTokenizer

Inherited from

TextGenerationModel.tokenizer

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:82