modelfusion

Functions

ObjectStreamFromResponse

ObjectStreamFromResponse<OBJECT>(«destructured»): AsyncGenerator<{ partialObject: OBJECT }, void, unknown>

Convert a Response to a lightweight ObjectStream. The response must be created using ObjectStreamResponse on the server.

Type parameters

Name
OBJECT

Parameters

«destructured»: Object
› response: Response
› schema: Schema<OBJECT>

Returns

AsyncGenerator<{ partialObject: OBJECT }, void, unknown>

See

ObjectStreamResponse

Defined in

packages/modelfusion/src/model-function/generate-object/ObjectStream.ts:35
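For context, a client-side consumption sketch (the route path and schema are illustrative assumptions; the server route must reply with an ObjectStreamResponse):

```typescript
import { ObjectStreamFromResponse, zodSchema } from "modelfusion";
import { z } from "zod";

// Hypothetical schema shared between server and client.
const itinerarySchema = zodSchema(
  z.object({ days: z.array(z.object({ theme: z.string() })) })
);

async function readItinerary() {
  // Hypothetical server route created with ObjectStreamResponse.
  const response = await fetch("/api/itinerary", { method: "POST" });

  const stream = ObjectStreamFromResponse({
    schema: itinerarySchema,
    response,
  });

  for await (const { partialObject } of stream) {
    // Each iteration yields the object as parsed so far.
    console.log(partialObject);
  }
}
```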


classify

classify<VALUE, CLASS>(args): Promise<CLASS>

Type parameters

VALUE
CLASS extends null | string

Parameters

args: { fullResponse?: false; model: Classifier<VALUE, CLASS, ClassifierSettings>; value: VALUE } & FunctionOptions

Returns

Promise<CLASS>

Defined in

packages/modelfusion/src/model-function/classify/classify.ts:6

classify<VALUE, CLASS>(args): Promise<{ class: CLASS ; metadata: ModelCallMetadata ; rawResponse: unknown }>

Type parameters

VALUE
CLASS extends null | string

Parameters

args: { fullResponse: true; model: Classifier<VALUE, CLASS, ClassifierSettings>; value: VALUE } & FunctionOptions

Returns

Promise<{ class: CLASS ; metadata: ModelCallMetadata ; rawResponse: unknown }>

Defined in

packages/modelfusion/src/model-function/classify/classify.ts:13
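A usage sketch with the library's embedding-similarity classifier (the class names, example phrases, and threshold are illustrative assumptions):

```typescript
import { classify, EmbeddingSimilarityClassifier, openai } from "modelfusion";

// Hypothetical intent classifier built from example phrases per class.
const intentClassifier = new EmbeddingSimilarityClassifier({
  embeddingModel: openai.TextEmbedder({ model: "text-embedding-ada-002" }),
  similarityThreshold: 0.82,
  clusters: [
    { name: "refund" as const, values: ["I want my money back."] },
    { name: "support" as const, values: ["The app crashes on launch."] },
  ],
});

const intent = await classify({
  model: intentClassifier,
  value: "Please refund my last order.",
});
// intent is typed as "refund" | "support" | null (null when no class matches)
```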


convertDataContentToBase64String

convertDataContentToBase64String(content): string

Parameters

content: DataContent

Returns

string

Defined in

packages/modelfusion/src/util/format/DataContent.ts:8


convertDataContentToUint8Array

convertDataContentToUint8Array(content): Uint8Array

Parameters

content: DataContent

Returns

Uint8Array

Defined in

packages/modelfusion/src/util/format/DataContent.ts:20


cosineSimilarity

cosineSimilarity(a, b): number

Calculates the cosine similarity between two vectors. They must have the same length.

Parameters

a: number[] - The first vector.
b: number[] - The second vector.

Returns

number

The cosine similarity between the two vectors.

See

https://en.wikipedia.org/wiki/Cosine_similarity

Defined in

packages/modelfusion/src/util/cosineSimilarity.ts:11
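For intuition, the value is dot(a, b) / (‖a‖ · ‖b‖). A standalone sketch of the same formula (not the library's implementation):

```typescript
// Standalone sketch of the cosine similarity formula; the library's
// cosineSimilarity(a, b) computes the same quantity for equal-length vectors.
function cosine(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error("Vectors must have the same length.");
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosine([1, 0], [1, 0])); // 1 (identical direction)
console.log(cosine([1, 0], [0, 1])); // 0 (orthogonal)
```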


countOpenAIChatMessageTokens

countOpenAIChatMessageTokens(«destructured»): Promise<number>

Parameters

«destructured»: Object
› message: ChatMessage
› model: OpenAIChatModelType

Returns

Promise<number>

Defined in

packages/modelfusion/src/model-provider/openai/countOpenAIChatMessageTokens.ts:21


countOpenAIChatPromptTokens

countOpenAIChatPromptTokens(«destructured»): Promise<number>

Parameters

«destructured»: Object
› messages: ChatMessage[]
› model: OpenAIChatModelType

Returns

Promise<number>

Defined in

packages/modelfusion/src/model-provider/openai/countOpenAIChatMessageTokens.ts:56


countTokens

countTokens(tokenizer, text): Promise<number>

Count the number of tokens in the given text.

Parameters

tokenizer: BasicTokenizer
text: string

Returns

Promise<number>

Defined in

packages/modelfusion/src/model-function/tokenize-text/countTokens.ts:6


createChatPrompt

createChatPrompt<INPUT>(promptFunction): (input: INPUT) => PromptFunction<INPUT, ChatPrompt>

Type parameters

Name
INPUT

Parameters

promptFunction: (input: INPUT) => Promise<ChatPrompt>

Returns

fn

▸ (input): PromptFunction<INPUT, ChatPrompt>

Parameters
input: INPUT
Returns

PromptFunction<INPUT, ChatPrompt>

Defined in

packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:125
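A sketch of defining a reusable, typed prompt builder (the input shape and wording are illustrative):

```typescript
import { createChatPrompt } from "modelfusion";

// Hypothetical prompt builder: turns a typed input into a ChatPrompt.
const supportPrompt = createChatPrompt(
  async ({ question }: { question: string }) => ({
    system: "You answer customer support questions concisely.",
    messages: [{ role: "user" as const, content: question }],
  })
);

// Calling it yields a PromptFunction<{ question: string }, ChatPrompt>
// that can be passed as the `prompt` of a text generation call.
const prompt = supportPrompt({ question: "How do I reset my password?" });
```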


createEventSourceStream

createEventSourceStream(events): ReadableStream<any>

Parameters

events: AsyncIterable<unknown>

Returns

ReadableStream<any>

Defined in

packages/modelfusion/src/util/streaming/createEventSourceStream.ts:3


createInstructionPrompt

createInstructionPrompt<INPUT>(promptFunction): (input: INPUT) => PromptFunction<INPUT, InstructionPrompt>

Type parameters

Name
INPUT

Parameters

promptFunction: (input: INPUT) => Promise<InstructionPrompt>

Returns

fn

▸ (input): PromptFunction<INPUT, InstructionPrompt>

Parameters
input: INPUT
Returns

PromptFunction<INPUT, InstructionPrompt>

Defined in

packages/modelfusion/src/model-function/generate-text/prompt-template/InstructionPrompt.ts:42


createTextPrompt

createTextPrompt<INPUT>(promptFunction): (input: INPUT) => PromptFunction<INPUT, string>

Type parameters

Name
INPUT

Parameters

promptFunction: (input: INPUT) => Promise<string>

Returns

fn

▸ (input): PromptFunction<INPUT, string>

Parameters
input: INPUT
Returns

PromptFunction<INPUT, string>

Defined in

packages/modelfusion/src/model-function/generate-text/prompt-template/TextPrompt.ts:6


delay

delay(delayInMs): Promise<void>

Parameters

delayInMs: number

Returns

Promise<void>

Defined in

packages/modelfusion/src/util/delay.ts:1


embed

embed<VALUE>(args): Promise<Vector>

Generate an embedding for a single value.

Type parameters

Name
VALUE

Parameters

args: { fullResponse?: false; model: EmbeddingModel<VALUE, EmbeddingModelSettings>; value: VALUE } & FunctionOptions

Returns

Promise<Vector>

  • A promise that resolves to a vector representing the embedding.

See

https://modelfusion.dev/guide/function/embed

Example

const embedding = await embed({
  model: openai.TextEmbedder(...),
  value: "At first, Nox didn't know what to do with the pup.",
});

Defined in

packages/modelfusion/src/model-function/embed/embed.ts:133

embed<VALUE>(args): Promise<{ embedding: Vector ; metadata: ModelCallMetadata ; rawResponse: unknown }>

Type parameters

Name
VALUE

Parameters

args: { fullResponse: true; model: EmbeddingModel<VALUE, EmbeddingModelSettings>; value: VALUE } & FunctionOptions

Returns

Promise<{ embedding: Vector ; metadata: ModelCallMetadata ; rawResponse: unknown }>

Defined in

packages/modelfusion/src/model-function/embed/embed.ts:140


embedMany

embedMany<VALUE>(args): Promise<Vector[]>

Generate embeddings for multiple values.

Type parameters

Name
VALUE

Parameters

args: { fullResponse?: false; model: EmbeddingModel<VALUE, EmbeddingModelSettings>; values: VALUE[] } & FunctionOptions

Returns

Promise<Vector[]>

  • A promise that resolves to an array of vectors representing the embeddings.

See

https://modelfusion.dev/guide/function/embed

Example

const embeddings = await embedMany({
  model: openai.TextEmbedder(...),
  values: [
    "At first, Nox didn't know what to do with the pup.",
    "He keenly observed and absorbed everything around him, from the birds in the sky to the trees in the forest.",
  ],
});

Defined in

packages/modelfusion/src/model-function/embed/embed.ts:26

embedMany<VALUE>(args): Promise<{ embeddings: Vector[] ; metadata: ModelCallMetadata ; rawResponse: unknown }>

Type parameters

Name
VALUE

Parameters

args: { fullResponse: true; model: EmbeddingModel<VALUE, EmbeddingModelSettings>; values: VALUE[] } & FunctionOptions

Returns

Promise<{ embeddings: Vector[] ; metadata: ModelCallMetadata ; rawResponse: unknown }>

Defined in

packages/modelfusion/src/model-function/embed/embed.ts:33


executeFunction

executeFunction<INPUT, OUTPUT>(fn, input, options?): Promise<OUTPUT>

Type parameters

Name
INPUT
OUTPUT

Parameters

fn: (input: INPUT, options: FunctionCallOptions) => PromiseLike<OUTPUT>
input: INPUT
options?: FunctionOptions

Returns

Promise<OUTPUT>

Defined in

packages/modelfusion/src/core/executeFunction.ts:4


executeTool

executeTool<TOOL>(params): Promise<ReturnType<TOOL["execute"]>>

executeTool executes a tool with the given parameters.

Type parameters

TOOL extends Tool<any, any, any>

Parameters

params: { args: TOOL["parameters"]["_type"]; fullResponse?: false; tool: TOOL } & FunctionOptions

Returns

Promise<ReturnType<TOOL["execute"]>>

Defined in

packages/modelfusion/src/tool/execute-tool/executeTool.ts:30

executeTool<TOOL>(params): Promise<{ metadata: ExecuteToolMetadata ; output: Awaited<ReturnType<TOOL["execute"]>> }>

Type parameters

TOOL extends Tool<any, any, any>

Parameters

params: { args: TOOL["parameters"]["_type"]; fullResponse: true; tool: TOOL } & FunctionOptions

Returns

Promise<{ metadata: ExecuteToolMetadata ; output: Awaited<ReturnType<TOOL["execute"]>> }>

Defined in

packages/modelfusion/src/tool/execute-tool/executeTool.ts:37
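A sketch of defining a tool and executing it directly (the calculator tool is a made-up example):

```typescript
import { executeTool, Tool, zodSchema } from "modelfusion";
import { z } from "zod";

// Hypothetical tool with typed, schema-validated parameters.
const multiply = new Tool({
  name: "multiply",
  description: "Multiply two numbers.",
  parameters: zodSchema(z.object({ a: z.number(), b: z.number() })),
  execute: async ({ a, b }) => a * b,
});

// Runs the tool's `execute` function with typed args.
const product = await executeTool({ tool: multiply, args: { a: 6, b: 7 } });

// With fullResponse: true, the result also includes ExecuteToolMetadata.
```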


generateImage

generateImage<PROMPT>(args): Promise<Uint8Array>

Generates an image using a prompt.

The prompt depends on the model. For example, OpenAI image models expect a string prompt, and Stability AI models expect an array of text prompts with optional weights.

Type parameters

Name
PROMPT

Parameters

args: { fullResponse?: false; model: ImageGenerationModel<PROMPT, ImageGenerationModelSettings>; prompt: PROMPT } & FunctionOptions

Returns

Promise<Uint8Array>

  • Returns a promise that resolves to the generated image. The image is a Uint8Array containing the image data in PNG format.

See

https://modelfusion.dev/guide/function/generate-image

Example

const image = await generateImage({
  model: stability.ImageGenerator(...),
  prompt: [
    { text: "the wicked witch of the west" },
    { text: "style of early 19th century painting", weight: 0.5 },
  ],
});

Defined in

packages/modelfusion/src/model-function/generate-image/generateImage.ts:33

generateImage<PROMPT>(args): Promise<{ image: Uint8Array ; imageBase64: string ; images: Uint8Array[] ; imagesBase64: string[] ; metadata: ModelCallMetadata ; rawResponse: unknown }>

Type parameters

Name
PROMPT

Parameters

args: { fullResponse: true; model: ImageGenerationModel<PROMPT, ImageGenerationModelSettings>; prompt: PROMPT } & FunctionOptions

Returns

Promise<{ image: Uint8Array ; imageBase64: string ; images: Uint8Array[] ; imagesBase64: string[] ; metadata: ModelCallMetadata ; rawResponse: unknown }>

Defined in

packages/modelfusion/src/model-function/generate-image/generateImage.ts:40


generateObject

generateObject<OBJECT, PROMPT, SETTINGS>(args): Promise<OBJECT>

Generate a typed object for a prompt and a schema.

Type parameters

OBJECT
PROMPT
SETTINGS extends ObjectGenerationModelSettings

Parameters

args: { fullResponse?: false; model: ObjectGenerationModel<PROMPT, SETTINGS>; prompt: PROMPT | PromptFunction<unknown, PROMPT> | ((schema: Schema<OBJECT>) => PROMPT | PromptFunction<unknown, PROMPT>); schema: Schema<OBJECT> & JsonSchemaProducer } & FunctionOptions

Returns

Promise<OBJECT>

  • Returns a promise that resolves to the generated object.

See

https://modelfusion.dev/guide/function/generate-object

Example

const sentiment = await generateObject({
  model: openai.ChatTextGenerator(...).asFunctionCallObjectGenerationModel(...),

  schema: zodSchema(z.object({
    sentiment: z
      .enum(["positive", "neutral", "negative"])
      .describe("Sentiment."),
  })),

  prompt: [
    openai.ChatMessage.system(
      "You are a sentiment evaluator. " +
        "Analyze the sentiment of the following product review:"
    ),
    openai.ChatMessage.user(
      "After I opened the package, I was met by a very unpleasant smell " +
        "that did not disappear even after washing. Never again!"
    ),
  ],
});

Defined in

packages/modelfusion/src/model-function/generate-object/generateObject.ts:52

generateObject<OBJECT, PROMPT, SETTINGS>(args): Promise<{ metadata: ModelCallMetadata ; rawResponse: unknown ; value: OBJECT }>

Type parameters

OBJECT
PROMPT
SETTINGS extends ObjectGenerationModelSettings

Parameters

args: { fullResponse: true; model: ObjectGenerationModel<PROMPT, SETTINGS>; prompt: PROMPT | PromptFunction<unknown, PROMPT> | ((schema: Schema<OBJECT>) => PROMPT | PromptFunction<unknown, PROMPT>); schema: Schema<OBJECT> & JsonSchemaProducer } & FunctionOptions

Returns

Promise<{ metadata: ModelCallMetadata ; rawResponse: unknown ; value: OBJECT }>

Defined in

packages/modelfusion/src/model-function/generate-object/generateObject.ts:67


generateSpeech

generateSpeech(args): Promise<Uint8Array>

Synthesizes speech from text. Also called text-to-speech (TTS).

Parameters

args: { fullResponse?: false; model: SpeechGenerationModel<SpeechGenerationModelSettings>; text: string } & FunctionOptions

Returns

Promise<Uint8Array>

  • A promise that resolves to a Uint8Array containing the synthesized speech.

See

https://modelfusion.dev/guide/function/generate-speech

Example

const speech = await generateSpeech({
  model: lmnt.SpeechGenerator(...),
  text:
    "Good evening, ladies and gentlemen! Exciting news on the airwaves tonight " +
    "as The Rolling Stones unveil 'Hackney Diamonds.'",
});

Defined in

packages/modelfusion/src/model-function/generate-speech/generateSpeech.ts:26

generateSpeech(args): Promise<{ metadata: ModelCallMetadata ; rawResponse: unknown ; speech: Uint8Array }>

Parameters

args: { fullResponse: true; model: SpeechGenerationModel<SpeechGenerationModelSettings>; text: string } & FunctionOptions

Returns

Promise<{ metadata: ModelCallMetadata ; rawResponse: unknown ; speech: Uint8Array }>

Defined in

packages/modelfusion/src/model-function/generate-speech/generateSpeech.ts:33


generateText

generateText<PROMPT>(args): Promise<string>

Generate text for a prompt and return it as a string.

The prompt depends on the model used. For instance, OpenAI completion models expect a string prompt, whereas OpenAI chat models expect an array of chat messages.

Type parameters

Name
PROMPT

Parameters

args: { fullResponse?: false; model: TextGenerationModel<PROMPT, TextGenerationModelSettings>; prompt: PROMPT | PromptFunction<unknown, PROMPT> } & FunctionOptions

Returns

Promise<string>

  • A promise that resolves to the generated text.

See

https://modelfusion.dev/guide/function/generate-text

Example

const text = await generateText({
  model: openai.CompletionTextGenerator(...),
  prompt: "Write a short story about a robot learning to love:\n\n",
});

Defined in

packages/modelfusion/src/model-function/generate-text/generateText.ts:34

generateText<PROMPT>(args): Promise<{ finishReason: TextGenerationFinishReason ; metadata: ModelCallMetadata ; rawResponse: unknown ; text: string ; textGenerationResults: TextGenerationResult[] ; texts: string[] }>

Type parameters

Name
PROMPT

Parameters

args: { fullResponse: true; model: TextGenerationModel<PROMPT, TextGenerationModelSettings>; prompt: PROMPT | PromptFunction<unknown, PROMPT> } & FunctionOptions

Returns

Promise<{ finishReason: TextGenerationFinishReason ; metadata: ModelCallMetadata ; rawResponse: unknown ; text: string ; textGenerationResults: TextGenerationResult[] ; texts: string[] }>

Defined in

packages/modelfusion/src/model-function/generate-text/generateText.ts:41


generateToolCall

generateToolCall<PARAMETERS, PROMPT, NAME, SETTINGS>(params): Promise<ToolCall<NAME, PARAMETERS>>

Type parameters

PARAMETERS
PROMPT
NAME extends string
SETTINGS extends ToolCallGenerationModelSettings

Parameters

params: { fullResponse?: false; model: ToolCallGenerationModel<PROMPT, SETTINGS>; prompt: PROMPT | ((tool: ToolDefinition<NAME, PARAMETERS>) => PROMPT); tool: ToolDefinition<NAME, PARAMETERS> } & FunctionOptions

Returns

Promise<ToolCall<NAME, PARAMETERS>>

Defined in

packages/modelfusion/src/tool/generate-tool-call/generateToolCall.ts:13

generateToolCall<PARAMETERS, PROMPT, NAME, SETTINGS>(params): Promise<{ metadata: ModelCallMetadata ; rawResponse: unknown ; toolCall: ToolCall<NAME, PARAMETERS> }>

Type parameters

PARAMETERS
PROMPT
NAME extends string
SETTINGS extends ToolCallGenerationModelSettings

Parameters

params: { fullResponse: true; model: ToolCallGenerationModel<PROMPT, SETTINGS>; prompt: PROMPT | ((tool: ToolDefinition<NAME, PARAMETERS>) => PROMPT); tool: ToolDefinition<NAME, PARAMETERS> } & FunctionOptions

Returns

Promise<{ metadata: ModelCallMetadata ; rawResponse: unknown ; toolCall: ToolCall<NAME, PARAMETERS> }>

Defined in

packages/modelfusion/src/tool/generate-tool-call/generateToolCall.ts:26


generateToolCalls

generateToolCalls<TOOLS, PROMPT>(params): Promise<{ text: string | null ; toolCalls: ToOutputValue<TOOLS>[] | null }>

Type parameters

TOOLS extends ToolDefinition<any, any>[]
PROMPT

Parameters

params: { fullResponse?: false; model: ToolCallsGenerationModel<PROMPT, ToolCallsGenerationModelSettings>; prompt: PROMPT | ((tools: TOOLS) => PROMPT); tools: TOOLS } & FunctionOptions

Returns

Promise<{ text: string | null ; toolCalls: ToOutputValue<TOOLS>[] | null }>

Defined in

packages/modelfusion/src/tool/generate-tool-calls/generateToolCalls.ts:37

generateToolCalls<TOOLS, PROMPT>(params): Promise<{ metadata: ModelCallMetadata ; rawResponse: unknown ; value: { text: string | null ; toolCalls: ToOutputValue<TOOLS>[] } }>

Type parameters

TOOLS extends ToolDefinition<any, any>[]
PROMPT

Parameters

params: { fullResponse: true; model: ToolCallsGenerationModel<PROMPT, ToolCallsGenerationModelSettings>; prompt: PROMPT | ((tools: TOOLS) => PROMPT); tools: TOOLS } & FunctionOptions

Returns

Promise<{ metadata: ModelCallMetadata ; rawResponse: unknown ; value: { text: string | null ; toolCalls: ToOutputValue<TOOLS>[] } }>

Defined in

packages/modelfusion/src/tool/generate-tool-calls/generateToolCalls.ts:51


generateTranscription

generateTranscription(args): Promise<string>

Transcribe audio data into text. Also called speech-to-text (STT) or automatic speech recognition (ASR).

Parameters

args: { audioData: DataContent; fullResponse?: false; mimeType: "audio/webm" | "audio/mp3" | "audio/wav" | "audio/mp4" | "audio/mpeg" | "audio/mpga" | "audio/ogg" | "audio/oga" | "audio/flac" | "audio/m4a" | (string & {}); model: TranscriptionModel<TranscriptionModelSettings> } & FunctionOptions

Returns

Promise<string>

A promise that resolves to the transcribed text.

See

https://modelfusion.dev/guide/function/generate-transcription

Example

const audioData = await fs.promises.readFile("data/test.mp3");

const transcription = await generateTranscription({
  model: openai.Transcriber({ model: "whisper-1" }),
  mimeType: "audio/mp3",
  audioData,
});

Defined in

packages/modelfusion/src/model-function/generate-transcription/generateTranscription.ts:31

generateTranscription(args): Promise<{ metadata: ModelCallMetadata ; rawResponse: unknown ; value: string }>

Parameters

args: { audioData: DataContent; fullResponse: true; mimeType: "audio/webm" | "audio/mp3" | "audio/wav" | "audio/mp4" | "audio/mpeg" | "audio/mpga" | "audio/ogg" | "audio/oga" | "audio/flac" | "audio/m4a" | (string & {}); model: TranscriptionModel<TranscriptionModelSettings> } & FunctionOptions

Returns

Promise<{ metadata: ModelCallMetadata ; rawResponse: unknown ; value: string }>

Defined in

packages/modelfusion/src/model-function/generate-transcription/generateTranscription.ts:39


getAudioFileExtension

getAudioFileExtension(mimeType): "mp3" | "flac" | "webm" | "wav" | "mp4" | "mpeg" | "ogg" | "m4a"

Parameters

mimeType: string

Returns

"mp3" | "flac" | "webm" | "wav" | "mp4" | "mpeg" | "ogg" | "m4a"

Defined in

packages/modelfusion/src/util/audio/getAudioFileExtension.ts:1


getOpenAIChatModelInformation

getOpenAIChatModelInformation(model): Object

Parameters

model: OpenAIChatModelType

Returns

Object

baseModel: OpenAIChatBaseModelType
contextWindowSize: number
isFineTuned: boolean

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:47


getOpenAICompletionModelInformation

getOpenAICompletionModelInformation(model): Object

Parameters

model: "gpt-3.5-turbo-instruct"

Returns

Object

contextWindowSize: number

Defined in

packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:29


getRun

getRun(run?): Promise<Run | undefined>

Returns the run stored in an AsyncLocalStorage if running in Node.js. It can be set with withRun().

Parameters

run?: Run

Returns

Promise<Run | undefined>

Defined in

packages/modelfusion/src/core/getRun.ts:39


isPromptFunction

isPromptFunction<INPUT, PROMPT>(fn): fn is PromptFunction<INPUT, PROMPT>

Checks if a function is a PromptFunction by checking for the unique symbol.

Type parameters

Name
INPUT
PROMPT

Parameters

fn: unknown - The function to check.

Returns

fn is PromptFunction<INPUT, PROMPT>

  • True if the function is a PromptFunction, false otherwise.

Defined in

packages/modelfusion/src/core/PromptFunction.ts:47


mapBasicPromptToAutomatic1111Format

mapBasicPromptToAutomatic1111Format(): PromptTemplate<string, Automatic1111ImageGenerationPrompt>

Formats a basic text prompt as an Automatic1111 prompt.

Returns

PromptTemplate<string, Automatic1111ImageGenerationPrompt>

Defined in

packages/modelfusion/src/model-provider/automatic1111/Automatic1111ImageGenerationPrompt.ts:11


mapBasicPromptToStabilityFormat

mapBasicPromptToStabilityFormat(): PromptTemplate<string, StabilityImageGenerationPrompt>

Formats a basic text prompt as a Stability prompt.

Returns

PromptTemplate<string, StabilityImageGenerationPrompt>

Defined in

packages/modelfusion/src/model-provider/stability/StabilityImageGenerationPrompt.ts:11


markAsPromptFunction

markAsPromptFunction<INPUT, PROMPT>(fn): PromptFunction<INPUT, PROMPT>

Marks a function as a PromptFunction by setting a unique symbol.

Type parameters

Name
INPUT
PROMPT

Parameters

fn: () => PromiseLike<{ input: INPUT; prompt: PROMPT }> - The function to mark.

Returns

PromptFunction<INPUT, PROMPT>

  • The marked function.

Defined in

packages/modelfusion/src/core/PromptFunction.ts:29


parseJSON

parseJSON(text): unknown

Parses a JSON string into an unknown object.

Parameters

text: Object - The JSON string to parse.
› text.text: string

Returns

unknown

  • The parsed JSON object.

Defined in

packages/modelfusion/src/core/schema/parseJSON.ts:13

parseJSON<T>(«destructured»): T

Parses a JSON string into a strongly-typed object using the provided schema.

Type parameters

T - The type of the object to parse the JSON into.

Parameters

«destructured»: Object
› schema: Schema<T>
› text: string

Returns

T

  • The parsed object.

Defined in

packages/modelfusion/src/core/schema/parseJSON.ts:22
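A sketch of both overloads (schema built with zodSchema; both throw on invalid JSON, and the typed overload also throws on a schema mismatch):

```typescript
import { parseJSON, zodSchema } from "modelfusion";
import { z } from "zod";

// Untyped overload: the result is `unknown`.
const raw = parseJSON({ text: '{ "name": "Nox" }' });

// Typed overload: validates against the schema and returns the typed value.
const wolf = parseJSON({
  text: '{ "name": "Nox" }',
  schema: zodSchema(z.object({ name: z.string() })),
});
// wolf.name is typed as string
```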


retrieve

retrieve<OBJECT, QUERY>(retriever, query, options?): Promise<OBJECT[]>

Type parameters

Name
OBJECT
QUERY

Parameters

retriever: Retriever<OBJECT, QUERY>
query: QUERY
options?: FunctionOptions

Returns

Promise<OBJECT[]>

Defined in

packages/modelfusion/src/retriever/retrieve.ts:5


runTool

runTool<PROMPT, TOOL>(«destructured»): Promise<ToolCallResult<TOOL["name"], TOOL["parameters"], Awaited<ReturnType<TOOL["execute"]>>>>

runTool uses generateToolCall to generate parameters for a tool and then executes the tool with the parameters using executeTool.

Type parameters

PROMPT
TOOL extends Tool<string, any, any>

Parameters

«destructured»: { model: ToolCallGenerationModel<PROMPT, ToolCallGenerationModelSettings>; prompt: PROMPT | ((tool: TOOL) => PROMPT); tool: TOOL } & FunctionOptions

Returns

Promise<ToolCallResult<TOOL["name"], TOOL["parameters"], Awaited<ReturnType<TOOL["execute"]>>>>

The result contains the name of the tool (tool property), the parameters (parameters property, typed), and the result of the tool execution (result property, typed).

Defined in

packages/modelfusion/src/tool/run-tool/runTool.ts:23
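A sketch that combines a tool with a tool-call generation model (the model id, tool, and question are illustrative assumptions; OpenAI chat models support tool-call generation):

```typescript
import { runTool, Tool, zodSchema, openai } from "modelfusion";
import { z } from "zod";

// Hypothetical tool; the model generates its arguments, runTool executes it.
const multiply = new Tool({
  name: "multiply",
  description: "Multiply two numbers.",
  parameters: zodSchema(z.object({ a: z.number(), b: z.number() })),
  execute: async ({ a, b }) => a * b,
});

const { tool, result } = await runTool({
  model: openai.ChatTextGenerator({ model: "gpt-3.5-turbo" }),
  tool: multiply,
  prompt: [openai.ChatMessage.user("What is fourteen times twelve?")],
});
```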


runTools

runTools<PROMPT, TOOLS>(«destructured»): Promise<{ text: string | null ; toolResults: ToOutputValue<TOOLS>[] | null }>

Type parameters

PROMPT
TOOLS extends Tool<any, any, any>[]

Parameters

«destructured»: { model: ToolCallsGenerationModel<PROMPT, ToolCallsGenerationModelSettings>; prompt: PROMPT | ((tools: TOOLS) => PROMPT); tools: TOOLS } & FunctionOptions

Returns

Promise<{ text: string | null ; toolResults: ToOutputValue<TOOLS>[] | null }>

Defined in

packages/modelfusion/src/tool/run-tools/runTools.ts:43


safeParseJSON

safeParseJSON(text): { success: true ; value: unknown } | { error: JSONParseError | TypeValidationError ; success: false }

Safely parses a JSON string and returns the result as an object of type unknown.

Parameters

text: Object - The JSON string to parse.
› text.text: string

Returns

{ success: true ; value: unknown } | { error: JSONParseError | TypeValidationError ; success: false }

Either an object with success: true and the parsed data, or an object with success: false and the error that occurred.

Defined in

packages/modelfusion/src/core/schema/parseJSON.ts:62

safeParseJSON<T>(«destructured»): { success: true ; value: T } | { error: JSONParseError | TypeValidationError ; success: false }

Safely parses a JSON string into a strongly-typed object, using a provided schema to validate the object.

Type parameters

T - The type of the object to parse the JSON into.

Parameters

«destructured»: Object
› schema: Schema<T>
› text: string

Returns

{ success: true ; value: T } | { error: JSONParseError | TypeValidationError ; success: false }

An object with either a success flag and the parsed and typed data, or a success flag and an error object.

Defined in

packages/modelfusion/src/core/schema/parseJSON.ts:77
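Unlike parseJSON, no exception is thrown; a sketch of branching on the discriminated result (the schema and input are illustrative):

```typescript
import { safeParseJSON, zodSchema } from "modelfusion";
import { z } from "zod";

const result = safeParseJSON({
  text: '{ "sentiment": "positive" }',
  schema: zodSchema(
    z.object({ sentiment: z.enum(["positive", "neutral", "negative"]) })
  ),
});

if (result.success) {
  console.log(result.value.sentiment); // typed access to the parsed value
} else {
  console.error(result.error); // JSONParseError | TypeValidationError
}
```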


safeValidateTypes

safeValidateTypes<T>(«destructured»): { success: true ; value: T } | { error: TypeValidationError ; success: false }

Safely validates the types of an unknown object using a schema and returns a strongly-typed object.

Type parameters

T - The type of the object to validate.

Parameters

«destructured»: Object
› schema: Schema<T>
› value: unknown

Returns

{ success: true ; value: T } | { error: TypeValidationError ; success: false }

An object with either a success flag and the parsed and typed data, or a success flag and an error object.

Defined in

packages/modelfusion/src/core/schema/validateTypes.ts:49


splitAtCharacter

splitAtCharacter(«destructured»): SplitFunction

Splits text recursively until the resulting chunks are smaller than maxCharactersPerChunk. The text is recursively split in the middle, so that all chunks are roughly the same size.

Parameters

«destructured»: Object
› maxCharactersPerChunk: number

Returns

SplitFunction

Defined in

packages/modelfusion/src/text-chunk/split/splitRecursively.ts:37


splitAtToken

splitAtToken(«destructured»): SplitFunction

Splits text recursively until the resulting chunks are smaller than maxTokensPerChunk, while respecting token boundaries. The text is recursively split in the middle, so that all chunks are roughly the same size.

Parameters

«destructured»: Object
› maxTokensPerChunk: number
› tokenizer: FullTokenizer

Returns

SplitFunction

Defined in

packages/modelfusion/src/text-chunk/split/splitRecursively.ts:54


splitOnSeparator

splitOnSeparator(«destructured»): SplitFunction

Splits text on a separator string.

Parameters

«destructured»: Object
› separator: string

Returns

SplitFunction

Defined in

packages/modelfusion/src/text-chunk/split/splitOnSeparator.ts:6
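A sketch pairing the returned SplitFunction with splitTextChunks (the separator choice is illustrative):

```typescript
import { splitOnSeparator, splitTextChunks } from "modelfusion";

// Split on blank lines, e.g. to chunk paragraph-structured notes.
const split = splitOnSeparator({ separator: "\n\n" });

const chunks = await splitTextChunks(split, [
  { text: "First paragraph.\n\nSecond paragraph." },
]);
// Each resulting chunk keeps the source chunk's other properties.
```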


splitTextChunk

splitTextChunk<CHUNK>(splitFunction, input): Promise<CHUNK[]>

Type parameters

CHUNK extends TextChunk

Parameters

splitFunction: SplitFunction
input: CHUNK

Returns

Promise<CHUNK[]>

Defined in

packages/modelfusion/src/text-chunk/split/splitTextChunks.ts:14


splitTextChunks

splitTextChunks<CHUNK>(splitFunction, inputs): Promise<CHUNK[]>

Type parameters

CHUNK extends TextChunk

Parameters

splitFunction: SplitFunction
inputs: CHUNK[]

Returns

Promise<CHUNK[]>

Defined in

packages/modelfusion/src/text-chunk/split/splitTextChunks.ts:4


streamObject

streamObject<OBJECT, PROMPT>(args): Promise<ObjectStream<OBJECT>>

Generate and stream an object for a prompt and a schema.

Type parameters

Name
OBJECT
PROMPT

Parameters

args: { fullResponse?: false; model: ObjectStreamingModel<PROMPT, ObjectGenerationModelSettings>; prompt: PROMPT | PromptFunction<unknown, PROMPT> | ((schema: Schema<OBJECT>) => PROMPT | PromptFunction<unknown, PROMPT>); schema: Schema<OBJECT> & JsonSchemaProducer } & FunctionOptions

Returns

Promise<ObjectStream<OBJECT>>

See

https://modelfusion.dev/guide/function/generate-object

Example

const objectStream = await streamObject({
  model: openai.ChatTextGenerator(...).asFunctionCallObjectGenerationModel(...),
  schema: zodSchema(
    z.array(
      z.object({
        name: z.string(),
        class: z
          .string()
          .describe("Character class, e.g. warrior, mage, or thief."),
        description: z.string(),
      })
    )
  ),
  prompt: [
    openai.ChatMessage.user(
      "Generate 3 character descriptions for a fantasy role playing game."
    ),
  ],
});

for await (const { partialObject } of objectStream) {
  // ...
}

Defined in

packages/modelfusion/src/model-function/generate-object/streamObject.ts:51

streamObject<OBJECT, PROMPT>(args): Promise<{ metadata: Omit<ModelCallMetadata, "durationInMs" | "finishTimestamp"> ; objectPromise: PromiseLike<OBJECT> ; objectStream: ObjectStream<OBJECT> }>

Type parameters

Name
OBJECT
PROMPT

Parameters

args: { fullResponse: true; model: ObjectStreamingModel<PROMPT, ObjectGenerationModelSettings>; prompt: PROMPT | PromptFunction<unknown, PROMPT> | ((schema: Schema<OBJECT>) => PROMPT | PromptFunction<unknown, PROMPT>); schema: Schema<OBJECT> & JsonSchemaProducer } & FunctionOptions

Returns

Promise<{ metadata: Omit<ModelCallMetadata, "durationInMs" | "finishTimestamp"> ; objectPromise: PromiseLike<OBJECT> ; objectStream: ObjectStream<OBJECT> }>

Defined in

packages/modelfusion/src/model-function/generate-object/streamObject.ts:62


streamSpeech

streamSpeech(args): Promise<AsyncIterable<Uint8Array>>

Stream synthesized speech from text. Also called text-to-speech (TTS). Duplex streaming where both the input and output are streamed is supported.

Parameters

args: { fullResponse?: false; model: StreamingSpeechGenerationModel<SpeechGenerationModelSettings>; text: string | AsyncIterable<string> } & FunctionOptions

Returns

Promise<AsyncIterable<Uint8Array>>

An async iterable promise that contains the synthesized speech chunks.

See

https://modelfusion.dev/guide/function/generate-speech

Example

const textStream = await streamText(...);

const speechStream = await streamSpeech({
  model: elevenlabs.SpeechGenerator(...),
  text: textStream,
});

for await (const speechPart of speechStream) {
  // ...
}

Defined in

packages/modelfusion/src/model-function/generate-speech/streamSpeech.ts:34

streamSpeech(args): Promise<{ metadata: Omit<ModelCallMetadata, "durationInMs" | "finishTimestamp"> ; speechStream: AsyncIterable<Uint8Array> }>

Parameters

args: { fullResponse: true; model: StreamingSpeechGenerationModel<SpeechGenerationModelSettings>; text: string | AsyncIterable<string> } & FunctionOptions

Returns

Promise<{ metadata: Omit<ModelCallMetadata, "durationInMs" | "finishTimestamp"> ; speechStream: AsyncIterable<Uint8Array> }>

Defined in

packages/modelfusion/src/model-function/generate-speech/streamSpeech.ts:41


streamText

streamText<PROMPT>(args): Promise<AsyncIterable<string>>

Stream the generated text for a prompt as an async iterable.

The prompt depends on the model used. For instance, OpenAI completion models expect a string prompt, whereas OpenAI chat models expect an array of chat messages.

Type parameters

Name
PROMPT

Parameters

args: { fullResponse?: false; model: TextStreamingModel<PROMPT, TextGenerationModelSettings>; prompt: PROMPT | PromptFunction<unknown, PROMPT> } & FunctionOptions

Returns

Promise<AsyncIterable<string>>

An async iterable promise that yields the generated text.

See

https://modelfusion.dev/guide/function/generate-text

Example

const textStream = await streamText({
  model: openai.CompletionTextGenerator(...),
  prompt: "Write a short story about a robot learning to love:\n\n",
});

for await (const textPart of textStream) {
  // ...
}

Defined in

packages/modelfusion/src/model-function/generate-text/streamText.ts:32

streamText<PROMPT>(args): Promise<{ metadata: Omit<ModelCallMetadata, "durationInMs" | "finishTimestamp"> ; textPromise: PromiseLike<string> ; textStream: AsyncIterable<string> }>

Type parameters

Name
PROMPT

Parameters

NameType
args{ fullResponse: true ; model: TextStreamingModel<PROMPT, TextGenerationModelSettings> ; prompt: PROMPT | PromptFunction<unknown, PROMPT> } & FunctionOptions

Returns

Promise<{ metadata: Omit<ModelCallMetadata, "durationInMs" | "finishTimestamp"> ; textPromise: PromiseLike<string> ; textStream: AsyncIterable<string> }>

Defined in

packages/modelfusion/src/model-function/generate-text/streamText.ts:39


trimChatPrompt

trimChatPrompt(«destructured»): Promise<ChatPrompt>

Keeps only the most recent messages in the prompt, while leaving enough space for the completion.

It removes the oldest user-assistant message pairs that do not fit. The result is always a valid chat prompt.

When the minimal chat prompt (system message + last user message) is already too long, it will only return this minimal chat prompt.

Parameters

NameType
«destructured»Object
› modelTextGenerationModel<ChatPrompt, TextGenerationModelSettings> & HasTokenizer<ChatPrompt> & HasContextWindowSize
› promptChatPrompt
› tokenLimit?number

Returns

Promise<ChatPrompt>

See

https://modelfusion.dev/guide/function/generate-text#limiting-the-chat-length

Defined in

packages/modelfusion/src/model-function/generate-text/prompt-template/trimChatPrompt.ts:19
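
The trimming strategy described above can be sketched as follows. This is a hypothetical, self-contained illustration: it uses a crude character-based token estimate instead of the model's real tokenizer, and the `Message` type is a simplified stand-in for the library's `ChatPrompt` messages.

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string };

function countTokens(messages: Message[]): number {
  // Assumption: roughly 4 characters per token (the real function uses the model's tokenizer).
  return messages.reduce((sum, m) => sum + Math.ceil(m.content.length / 4), 0);
}

function trimChatMessages(messages: Message[], tokenLimit: number): Message[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  // The minimal prompt is the system message(s) plus the last user message.
  let kept: Message[] = [rest[rest.length - 1]];
  // Walk backwards over earlier user/assistant pairs, keeping those that still fit.
  for (let i = rest.length - 3; i >= 0; i -= 2) {
    const pair = [rest[i], rest[i + 1]];
    if (countTokens([...system, ...pair, ...kept]) > tokenLimit) break;
    kept = [...pair, ...kept];
  }
  return [...system, ...kept];
}
```

When the minimal prompt already exceeds the limit, the loop breaks on the first pair and only the system and last user messages remain, matching the behavior documented above.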


uncheckedSchema

uncheckedSchema<OBJECT>(jsonSchema?): UncheckedSchema<OBJECT>

Type parameters

Name
OBJECT

Parameters

NameType
jsonSchema?unknown

Returns

UncheckedSchema<OBJECT>

Defined in

packages/modelfusion/src/core/schema/UncheckedSchema.ts:4


upsertIntoVectorIndex

upsertIntoVectorIndex<VALUE, OBJECT>(«destructured», options?): Promise<void>

Type parameters

Name
VALUE
OBJECT

Parameters

NameTypeDefault value
«destructured»Objectundefined
› embeddingModelEmbeddingModel<VALUE, EmbeddingModelSettings>undefined
› generateId?() => stringcreateId
› getId?(object: OBJECT, index: number) => undefined | stringundefined
› getValueToEmbed(object: OBJECT, index: number) => VALUEundefined
› objectsOBJECT[]undefined
› vectorIndexVectorIndex<OBJECT, unknown, unknown>undefined
options?FunctionOptionsundefined

Returns

Promise<void>

Defined in

packages/modelfusion/src/vector-index/upsertIntoVectorIndex.ts:11
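
The upsert flow can be sketched in a self-contained way: derive one value per object via `getValueToEmbed`, embed all values in a batch, and store (id, vector, object) triples in the index. `Entry` and the `embed` function signature below are simplified stand-ins for the library's `VectorIndex` and `EmbeddingModel`, not its actual interfaces.

```typescript
type Entry<OBJECT> = { id: string; vector: number[]; data: OBJECT };

async function upsertAll<VALUE, OBJECT>({
  index,
  embed,
  objects,
  getValueToEmbed,
  getId,
}: {
  index: Entry<OBJECT>[];
  embed: (values: VALUE[]) => Promise<number[][]>;
  objects: OBJECT[];
  getValueToEmbed: (object: OBJECT, index: number) => VALUE;
  getId?: (object: OBJECT, index: number) => string | undefined;
}): Promise<void> {
  // Embed all derived values in one batch call.
  const vectors = await embed(objects.map(getValueToEmbed));
  // Store each object alongside its vector, using getId when provided.
  objects.forEach((data, i) => {
    index.push({ id: getId?.(data, i) ?? String(i), vector: vectors[i], data });
  });
}
```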


validateContentIsString

validateContentIsString(content, prompt): string

Parameters

NameType
contentunknown
promptunknown

Returns

string

Defined in

packages/modelfusion/src/model-function/generate-text/prompt-template/ContentPart.ts:42


validateTypes

validateTypes<T>(«destructured»): T

Validates the type of an unknown value using a schema and returns a strongly-typed object.

Type parameters

NameDescription
TThe type of the object to validate.

Parameters

NameType
«destructured»Object
› schemaSchema<T>
› valueunknown

Returns

T

The typed object.

Defined in

packages/modelfusion/src/core/schema/validateTypes.ts:13


withRun

withRun(run, callback): Promise<void>

Stores the run in an AsyncLocalStorage if running in Node.js. It can be retrieved with getRun().

Parameters

NameType
runRun
callback(run: Run) => PromiseLike<void>

Returns

Promise<void>

Defined in

packages/modelfusion/src/core/getRun.ts:47
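
The withRun/getRun pattern can be sketched with Node's AsyncLocalStorage. The `Run` type and the storage instance below are simplified stand-ins for the library's implementation:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

type Run = { runId: string };

const runStorage = new AsyncLocalStorage<Run>();

// Runs the callback with `run` stored in async-local context.
function withRun(run: Run, callback: (run: Run) => PromiseLike<void>): Promise<void> {
  return new Promise((resolve, reject) => {
    runStorage.run(run, () => {
      Promise.resolve(callback(run)).then(resolve, reject);
    });
  });
}

// Retrieves the run for the current async context, if any.
function getRun(): Run | undefined {
  return runStorage.getStore();
}
```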


zodSchema

zodSchema<OBJECT>(zodSchema): ZodSchema<OBJECT>

Type parameters

Name
OBJECT

Parameters

NameType
zodSchemaZodType<OBJECT, ZodTypeDef, OBJECT>

Returns

ZodSchema<OBJECT>

Defined in

packages/modelfusion/src/core/schema/ZodSchema.ts:7

Variables

CHAT_MODEL_CONTEXT_WINDOW_SIZES

Const CHAT_MODEL_CONTEXT_WINDOW_SIZES: Object

Type declaration

NameType
gpt-3.5-turbo4096
gpt-3.5-turbo-012516385
gpt-3.5-turbo-03014096
gpt-3.5-turbo-06134096
gpt-3.5-turbo-110616385
gpt-3.5-turbo-16k16384
gpt-3.5-turbo-16k-061316384
gpt-48192
gpt-4-0125-preview128000
gpt-4-03148192
gpt-4-06138192
gpt-4-1106-preview128000
gpt-4-32k32768
gpt-4-32k-031432768
gpt-4-32k-061332768
gpt-4-turbo-preview128000
gpt-4-vision-preview128000

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:27
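
A typed lookup against this table might look like the following sketch; the values reproduce a few entries from the declaration above:

```typescript
const CHAT_MODEL_CONTEXT_WINDOW_SIZES = {
  "gpt-3.5-turbo": 4096,
  "gpt-4": 8192,
  "gpt-4-turbo-preview": 128000,
} as const;

type ChatModel = keyof typeof CHAT_MODEL_CONTEXT_WINDOW_SIZES;

// The `as const` declaration keeps each size as a literal type,
// so lookups are fully typed at compile time.
function contextWindowSize(model: ChatModel): number {
  return CHAT_MODEL_CONTEXT_WINDOW_SIZES[model];
}
```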


COHERE_TEXT_EMBEDDING_MODELS

Const COHERE_TEXT_EMBEDDING_MODELS: Object

Type declaration

NameType
embed-english-light-v2.0{ contextWindowSize: number = 512; dimensions: number = 1024 }
embed-english-light-v2.0.contextWindowSizenumber
embed-english-light-v2.0.dimensionsnumber
embed-english-light-v3.0{ contextWindowSize: number = 512; dimensions: number = 384 }
embed-english-light-v3.0.contextWindowSizenumber
embed-english-light-v3.0.dimensionsnumber
embed-english-v2.0{ contextWindowSize: number = 512; dimensions: number = 4096 }
embed-english-v2.0.contextWindowSizenumber
embed-english-v2.0.dimensionsnumber
embed-english-v3.0{ contextWindowSize: number = 512; dimensions: number = 1024 }
embed-english-v3.0.contextWindowSizenumber
embed-english-v3.0.dimensionsnumber
embed-multilingual-light-v3.0{ contextWindowSize: number = 512; dimensions: number = 384 }
embed-multilingual-light-v3.0.contextWindowSizenumber
embed-multilingual-light-v3.0.dimensionsnumber
embed-multilingual-v2.0{ contextWindowSize: number = 256; dimensions: number = 768 }
embed-multilingual-v2.0.contextWindowSizenumber
embed-multilingual-v2.0.dimensionsnumber
embed-multilingual-v3.0{ contextWindowSize: number = 512; dimensions: number = 1024 }
embed-multilingual-v3.0.contextWindowSizenumber
embed-multilingual-v3.0.dimensionsnumber

Defined in

packages/modelfusion/src/model-provider/cohere/CohereTextEmbeddingModel.ts:20


COHERE_TEXT_GENERATION_MODELS

Const COHERE_TEXT_GENERATION_MODELS: Object

Type declaration

NameType
command{ contextWindowSize: number = 4096 }
command.contextWindowSizenumber
command-light{ contextWindowSize: number = 4096 }
command-light.contextWindowSizenumber

Defined in

packages/modelfusion/src/model-provider/cohere/CohereTextGenerationModel.ts:32


ChatMessage

ChatMessage: Object

Type declaration

NameType
assistant(__namedParameters: { text: null | string ; toolResults: null | ToolCallResult<string, unknown, unknown>[] }) => ChatMessage
tool(__namedParameters: { toolResults: null | ToolCallResult<string, unknown, unknown>[] }) => ChatMessage
user(__namedParameters: { text: string }) => ChatMessage

Defined in

packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:49

packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:54


CohereTextGenerationResponseFormat

Const CohereTextGenerationResponseFormat: Object

Type declaration

NameTypeDescription
deltaIterable{ handler: (__namedParameters: { response: Response }) => Promise<AsyncIterable<Delta<{ is_finished: false ; text: string } | { finish_reason: string ; is_finished: true ; response: { generations: { finish_reason?: string ; id: string ; text: string }[] ; id: string ; meta?: { api_version: { version: string } } ; prompt: string } = cohereTextGenerationResponseSchema }>>> ; stream: boolean = true }Returns an async iterable over the full deltas (all choices, including full current state at time of event) of the response stream.
deltaIterable.handler(__namedParameters: { response: Response }) => Promise<AsyncIterable<Delta<{ is_finished: false ; text: string } | { finish_reason: string ; is_finished: true ; response: { generations: { finish_reason?: string ; id: string ; text: string }[] ; id: string ; meta?: { api_version: { version: string } } ; prompt: string } = cohereTextGenerationResponseSchema }>>>-
deltaIterable.streamboolean-
json{ handler: ResponseHandler<{ generations: { finish_reason?: string ; id: string ; text: string }[] ; id: string ; meta?: { api_version: { version: string } } ; prompt: string }> ; stream: boolean = false }Returns the response as a JSON object.
json.handlerResponseHandler<{ generations: { finish_reason?: string ; id: string ; text: string }[] ; id: string ; meta?: { api_version: { version: string } } ; prompt: string }>-
json.streamboolean-

Defined in

packages/modelfusion/src/model-provider/cohere/CohereTextGenerationModel.ts:315


LlamaCppCompletionResponseFormat

Const LlamaCppCompletionResponseFormat: Object

Type declaration

NameTypeDescription
deltaIterable{ handler: (__namedParameters: { response: Response }) => Promise<AsyncIterable<Delta<{ content: string ; generation_settings: { frequency_penalty: number ; ignore_eos: boolean ; logit_bias: number[] ; mirostat: number ; mirostat_eta: number ; mirostat_tau: number ; model: string ; n_ctx: number ; n_keep: number ; n_predict: number ; n_probs: number ; penalize_nl: boolean ; presence_penalty: number ; repeat_last_n: number ; repeat_penalty: number ; seed: number ; stop: string[] ; stream: boolean ; temperature?: number ; tfs_z: number ; top_k: number ; top_p: number ; typical_p: number } ; model: string ; prompt: string ; stop: true ; stopped_eos: boolean ; stopped_limit: boolean ; stopped_word: boolean ; stopping_word: string ; timings: { predicted_ms: number ; predicted_n: number ; predicted_per_second: null | number ; predicted_per_token_ms: null | number ; prompt_ms?: null | number ; prompt_n: number ; prompt_per_second: null | number ; prompt_per_token_ms: null | number } ; tokens_cached: number ; tokens_evaluated: number ; tokens_predicted: number ; truncated: boolean } | { content: string ; stop: false }>>> ; stream: true = true }Returns an async iterable over the full deltas (all choices, including full current state at time of event) of the response stream.
deltaIterable.handler(__namedParameters: { response: Response }) => Promise<AsyncIterable<Delta<{ content: string ; generation_settings: { frequency_penalty: number ; ignore_eos: boolean ; logit_bias: number[] ; mirostat: number ; mirostat_eta: number ; mirostat_tau: number ; model: string ; n_ctx: number ; n_keep: number ; n_predict: number ; n_probs: number ; penalize_nl: boolean ; presence_penalty: number ; repeat_last_n: number ; repeat_penalty: number ; seed: number ; stop: string[] ; stream: boolean ; temperature?: number ; tfs_z: number ; top_k: number ; top_p: number ; typical_p: number } ; model: string ; prompt: string ; stop: true ; stopped_eos: boolean ; stopped_limit: boolean ; stopped_word: boolean ; stopping_word: string ; timings: { predicted_ms: number ; predicted_n: number ; predicted_per_second: null | number ; predicted_per_token_ms: null | number ; prompt_ms?: null | number ; prompt_n: number ; prompt_per_second: null | number ; prompt_per_token_ms: null | number } ; tokens_cached: number ; tokens_evaluated: number ; tokens_predicted: number ; truncated: boolean } | { content: string ; stop: false }>>>-
deltaIterable.streamtrue-
json{ handler: ResponseHandler<{ content: string ; generation_settings: { frequency_penalty: number ; ignore_eos: boolean ; logit_bias: number[] ; mirostat: number ; mirostat_eta: number ; mirostat_tau: number ; model: string ; n_ctx: number ; n_keep: number ; n_predict: number ; n_probs: number ; penalize_nl: boolean ; presence_penalty: number ; repeat_last_n: number ; repeat_penalty: number ; seed: number ; stop: string[] ; stream: boolean ; temperature?: number ; tfs_z: number ; top_k: number ; top_p: number ; typical_p: number } ; model: string ; prompt: string ; stop: true ; stopped_eos: boolean ; stopped_limit: boolean ; stopped_word: boolean ; stopping_word: string ; timings: { predicted_ms: number ; predicted_n: number ; predicted_per_second: null | number ; predicted_per_token_ms: null | number ; prompt_ms?: null | number ; prompt_n: number ; prompt_per_second: null | number ; prompt_per_token_ms: null | number } ; tokens_cached: number ; tokens_evaluated: number ; tokens_predicted: number ; truncated: boolean }> ; stream: false = false }Returns the response as a JSON object.
json.handlerResponseHandler<{ content: string ; generation_settings: { frequency_penalty: number ; ignore_eos: boolean ; logit_bias: number[] ; mirostat: number ; mirostat_eta: number ; mirostat_tau: number ; model: string ; n_ctx: number ; n_keep: number ; n_predict: number ; n_probs: number ; penalize_nl: boolean ; presence_penalty: number ; repeat_last_n: number ; repeat_penalty: number ; seed: number ; stop: string[] ; stream: boolean ; temperature?: number ; tfs_z: number ; top_k: number ; top_p: number ; typical_p: number } ; model: string ; prompt: string ; stop: true ; stopped_eos: boolean ; stopped_limit: boolean ; stopped_word: boolean ; stopping_word: string ; timings: { predicted_ms: number ; predicted_n: number ; predicted_per_second: null | number ; predicted_per_token_ms: null | number ; prompt_ms?: null | number ; prompt_n: number ; prompt_per_second: null | number ; prompt_per_token_ms: null | number } ; tokens_cached: number ; tokens_evaluated: number ; tokens_predicted: number ; truncated: boolean }>-
json.streamfalse-

Defined in

packages/modelfusion/src/model-provider/llamacpp/LlamaCppCompletionModel.ts:593


MistralChatResponseFormat

Const MistralChatResponseFormat: Object

Type declaration

NameTypeDescription
json{ handler: ResponseHandler<{ choices: { finish_reason: "length" | "stop" | "model_length" ; index: number ; message: { content: string ; role: "user" | "assistant" } }[] ; created: number ; id: string ; model: string ; object: string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } }> ; stream: boolean = false }Returns the response as a JSON object.
json.handlerResponseHandler<{ choices: { finish_reason: "length" | "stop" | "model_length" ; index: number ; message: { content: string ; role: "user" | "assistant" } }[] ; created: number ; id: string ; model: string ; object: string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } }>-
json.streamboolean-
textDeltaIterable{ handler: (__namedParameters: { response: Response }) => Promise<AsyncIterable<Delta<{ choices: { delta: { content?: null | string ; role?: null | "user" | "assistant" } ; finish_reason?: null | "length" | "stop" | "model_length" ; index: number }[] ; created?: number ; id: string ; model: string ; object?: string }>>> ; stream: boolean = true }Returns an async iterable over the text deltas (only the text delta of the first choice).
textDeltaIterable.handler(__namedParameters: { response: Response }) => Promise<AsyncIterable<Delta<{ choices: { delta: { content?: null | string ; role?: null | "user" | "assistant" } ; finish_reason?: null | "length" | "stop" | "model_length" ; index: number }[] ; created?: number ; id: string ; model: string ; object?: string }>>>-
textDeltaIterable.streamboolean-

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:298


OPENAI_CHAT_MESSAGE_BASE_TOKEN_COUNT

Const OPENAI_CHAT_MESSAGE_BASE_TOKEN_COUNT: 5

Prompt tokens that are included automatically for every message that is sent to OpenAI.

Defined in

packages/modelfusion/src/model-provider/openai/countOpenAIChatMessageTokens.ts:19


OPENAI_CHAT_PROMPT_BASE_TOKEN_COUNT

Const OPENAI_CHAT_PROMPT_BASE_TOKEN_COUNT: 2

Prompt tokens that are included automatically for every full chat prompt (several messages) that is sent to OpenAI.

Defined in

packages/modelfusion/src/model-provider/openai/countOpenAIChatMessageTokens.ts:13
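
Together, the two constants above give the fixed token overhead of a chat prompt: 2 base tokens per prompt plus 5 per message. A small illustrative helper:

```typescript
const OPENAI_CHAT_PROMPT_BASE_TOKEN_COUNT = 2;
const OPENAI_CHAT_MESSAGE_BASE_TOKEN_COUNT = 5;

// Fixed overhead only; the content tokens of each message are counted separately.
function estimatePromptOverheadTokens(messageCount: number): number {
  return (
    OPENAI_CHAT_PROMPT_BASE_TOKEN_COUNT +
    messageCount * OPENAI_CHAT_MESSAGE_BASE_TOKEN_COUNT
  );
}
```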


OPENAI_TEXT_EMBEDDING_MODELS

Const OPENAI_TEXT_EMBEDDING_MODELS: Object

Type declaration

NameType
text-embedding-3-large{ contextWindowSize: number = 8192; dimensions: number = 3072 }
text-embedding-3-large.contextWindowSizenumber
text-embedding-3-large.dimensionsnumber
text-embedding-3-small{ contextWindowSize: number = 8192; dimensions: number = 1536 }
text-embedding-3-small.contextWindowSizenumber
text-embedding-3-small.dimensionsnumber
text-embedding-ada-002{ contextWindowSize: number = 8192; dimensions: number = 1536 }
text-embedding-ada-002.contextWindowSizenumber
text-embedding-ada-002.dimensionsnumber

Defined in

packages/modelfusion/src/model-provider/openai/OpenAITextEmbeddingModel.ts:10


OPENAI_TEXT_GENERATION_MODELS

Const OPENAI_TEXT_GENERATION_MODELS: Object

Type declaration

NameType
gpt-3.5-turbo-instruct{ contextWindowSize: number = 4097 }
gpt-3.5-turbo-instruct.contextWindowSizenumber

Defined in

packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:23


OllamaChatResponseFormat

Const OllamaChatResponseFormat: Object

Type declaration

NameTypeDescription
deltaIterable{ handler: (__namedParameters: { response: Response }) => Promise<AsyncIterable<Delta<{ created_at: string ; done: false ; message: { content: string ; role: string } ; model: string } | { created_at: string ; done: true ; eval_count: number ; eval_duration: number ; load_duration?: number ; model: string ; prompt_eval_count?: number ; prompt_eval_duration?: number ; total_duration: number }>>> ; stream: boolean = true }Returns an async iterable over the full deltas (all choices, including full current state at time of event) of the response stream.
deltaIterable.handler(__namedParameters: { response: Response }) => Promise<AsyncIterable<Delta<{ created_at: string ; done: false ; message: { content: string ; role: string } ; model: string } | { created_at: string ; done: true ; eval_count: number ; eval_duration: number ; load_duration?: number ; model: string ; prompt_eval_count?: number ; prompt_eval_duration?: number ; total_duration: number }>>>-
deltaIterable.streamboolean-
json{ handler: (__namedParameters: { requestBodyValues: unknown ; response: Response ; url: string }) => Promise<{ created_at: string ; done: true ; eval_count: number ; eval_duration: number ; load_duration?: number ; message: { content: string ; role: string } ; model: string ; prompt_eval_count?: number ; prompt_eval_duration?: number ; total_duration: number }> ; stream: false = false }Returns the response as a JSON object.
json.handler(__namedParameters: { requestBodyValues: unknown ; response: Response ; url: string }) => Promise<{ created_at: string ; done: true ; eval_count: number ; eval_duration: number ; load_duration?: number ; message: { content: string ; role: string } ; model: string ; prompt_eval_count?: number ; prompt_eval_duration?: number ; total_duration: number }>-
json.streamfalse-

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:318


OllamaCompletionResponseFormat

Const OllamaCompletionResponseFormat: Object

Type declaration

NameTypeDescription
deltaIterable{ handler: (__namedParameters: { response: Response }) => Promise<AsyncIterable<Delta<{ created_at: string ; done: false ; model: string ; response: string } | { context?: number[] ; created_at: string ; done: true ; eval_count: number ; eval_duration: number ; load_duration?: number ; model: string ; prompt_eval_count?: number ; prompt_eval_duration?: number ; sample_count?: number ; sample_duration?: number ; total_duration: number }>>> ; stream: boolean = true }Returns an async iterable over the full deltas (all choices, including full current state at time of event) of the response stream.
deltaIterable.handler(__namedParameters: { response: Response }) => Promise<AsyncIterable<Delta<{ created_at: string ; done: false ; model: string ; response: string } | { context?: number[] ; created_at: string ; done: true ; eval_count: number ; eval_duration: number ; load_duration?: number ; model: string ; prompt_eval_count?: number ; prompt_eval_duration?: number ; sample_count?: number ; sample_duration?: number ; total_duration: number }>>>-
deltaIterable.streamboolean-
json{ handler: (__namedParameters: { requestBodyValues: unknown ; response: Response ; url: string }) => Promise<{ context?: number[] ; created_at: string ; done: true ; eval_count: number ; eval_duration: number ; load_duration?: number ; model: string ; prompt_eval_count?: number ; prompt_eval_duration?: number ; response: string ; total_duration: number }> ; stream: false = false }Returns the response as a JSON object.
json.handler(__namedParameters: { requestBodyValues: unknown ; response: Response ; url: string }) => Promise<{ context?: number[] ; created_at: string ; done: true ; eval_count: number ; eval_duration: number ; load_duration?: number ; model: string ; prompt_eval_count?: number ; prompt_eval_duration?: number ; response: string ; total_duration: number }>-
json.streamfalse-

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaCompletionModel.ts:397


OpenAIChatResponseFormat

Const OpenAIChatResponseFormat: Object

Type declaration

NameTypeDescription
deltaIterable{ handler: (__namedParameters: { response: Response }) => Promise<AsyncIterable<Delta<{ choices: { delta: { content?: null | string ; function_call?: { arguments?: string ; name?: string } ; role?: "user" | "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } ; finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index: number }[] ; created: number ; id: string ; model?: string ; object: string ; system_fingerprint?: null | string }>>> ; stream: boolean = true }Returns an async iterable over the text deltas (only the text delta of the first choice).
deltaIterable.handler(__namedParameters: { response: Response }) => Promise<AsyncIterable<Delta<{ choices: { delta: { content?: null | string ; function_call?: { arguments?: string ; name?: string } ; role?: "user" | "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } ; finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index: number }[] ; created: number ; id: string ; model?: string ; object: string ; system_fingerprint?: null | string }>>>-
deltaIterable.streamboolean-
json{ handler: ResponseHandler<{ choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } }> ; stream: boolean = false }Returns the response as a JSON object.
json.handlerResponseHandler<{ choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[] ; created: number ; id: string ; model: string ; object: "chat.completion" ; system_fingerprint?: null | string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } }>-
json.streamboolean-

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:449


OpenAIImageGenerationResponseFormat

Const OpenAIImageGenerationResponseFormat: Object

Type declaration

NameType
base64Json{ handler: ResponseHandler<{ created: number ; data: { b64_json: string }[] }> ; type: "b64_json" }
base64Json.handlerResponseHandler<{ created: number ; data: { b64_json: string }[] }>
base64Json.type"b64_json"
url{ handler: ResponseHandler<{ created: number ; data: { url: string }[] }> ; type: "url" }
url.handlerResponseHandler<{ created: number ; data: { url: string }[] }>
url.type"url"

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIImageGenerationModel.ts:174


OpenAITextResponseFormat

Const OpenAITextResponseFormat: Object

Type declaration

NameTypeDescription
deltaIterable{ handler: (__namedParameters: { response: Response }) => Promise<AsyncIterable<Delta<{ choices: { finish_reason?: null | "length" | "stop" | "content_filter" ; index: number ; text: string }[] ; created: number ; id: string ; model: string ; object: "text_completion" ; system_fingerprint?: string }>>> ; stream: boolean = true }Returns an async iterable over the full deltas (all choices, including full current state at time of event) of the response stream.
deltaIterable.handler(__namedParameters: { response: Response }) => Promise<AsyncIterable<Delta<{ choices: { finish_reason?: null | "length" | "stop" | "content_filter" ; index: number ; text: string }[] ; created: number ; id: string ; model: string ; object: "text_completion" ; system_fingerprint?: string }>>>-
deltaIterable.streamboolean-
json{ handler: ResponseHandler<{ choices: { finish_reason?: null | "length" | "stop" | "content_filter" ; index: number ; logprobs?: any ; text: string }[] ; created: number ; id: string ; model: string ; object: "text_completion" ; system_fingerprint?: string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } }> ; stream: boolean = false }Returns the response as a JSON object.
json.handlerResponseHandler<{ choices: { finish_reason?: null | "length" | "stop" | "content_filter" ; index: number ; logprobs?: any ; text: string }[] ; created: number ; id: string ; model: string ; object: "text_completion" ; system_fingerprint?: string ; usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number } }>-
json.streamboolean-

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:238


OpenAITranscriptionResponseFormat

Const OpenAITranscriptionResponseFormat: Object

Type declaration

NameType
json{ handler: ResponseHandler<{ text: string }> ; type: "json" }
json.handlerResponseHandler<{ text: string }>
json.type"json"
srt{ handler: ResponseHandler<string> ; type: "srt" }
srt.handlerResponseHandler<string>
srt.type"srt"
text{ handler: ResponseHandler<string> ; type: "text" }
text.handlerResponseHandler<string>
text.type"text"
verboseJson{ handler: ResponseHandler<{ duration: number ; language: string ; segments: { avg_logprob: number ; compression_ratio: number ; end: number ; id: number ; no_speech_prob: number ; seek: number ; start: number ; temperature: number ; text: string ; tokens: number[] ; transient?: boolean }[] ; task: "transcribe" ; text: string }> ; type: "verbose_json" }
verboseJson.handlerResponseHandler<{ duration: number ; language: string ; segments: { avg_logprob: number ; compression_ratio: number ; end: number ; id: number ; no_speech_prob: number ; seek: number ; start: number ; temperature: number ; text: string ; tokens: number[] ; transient?: boolean }[] ; task: "transcribe" ; text: string }>
verboseJson.type"verbose_json"
vtt{ handler: ResponseHandler<string> ; type: "vtt" }
vtt.handlerResponseHandler<string>
vtt.type"vtt"

Defined in

packages/modelfusion/src/model-provider/openai/OpenAITranscriptionModel.ts:228


jsonObjectPrompt

Const jsonObjectPrompt: Object

Type declaration

NameType
custom<SOURCE_PROMPT, TARGET_PROMPT>(createPrompt: (prompt: SOURCE_PROMPT, schema: Schema<unknown> & JsonSchemaProducer) => TARGET_PROMPT) => ObjectFromTextPromptTemplate<SOURCE_PROMPT, TARGET_PROMPT>
instruction(__namedParameters: { schemaPrefix?: string ; schemaSuffix?: string }) => FlexibleObjectFromTextPromptTemplate<InstructionPrompt, InstructionPrompt>
text(__namedParameters: { schemaPrefix?: string ; schemaSuffix?: string }) => FlexibleObjectFromTextPromptTemplate<string, InstructionPrompt>

Defined in

packages/modelfusion/src/model-function/generate-object/jsonObjectPrompt.ts:14
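
The text/instruction variants above wrap the caller's prompt with the JSON schema and an answer-format directive. The sketch below illustrates that composition; the default `schemaPrefix` and `schemaSuffix` strings are assumptions, not the library's exact wording:

```typescript
function jsonObjectInstruction(
  instruction: string,
  jsonSchema: unknown,
  {
    schemaPrefix = "JSON schema:",
    schemaSuffix = "You must answer with a JSON object that matches the JSON schema above.",
  }: { schemaPrefix?: string; schemaSuffix?: string } = {}
): string {
  // Compose: instruction, then the serialized schema, then the directive.
  return [instruction, schemaPrefix, JSON.stringify(jsonSchema), schemaSuffix].join("\n\n");
}
```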


jsonToolCallPrompt

Const jsonToolCallPrompt: Object

Type declaration

NameType
instruction(__namedParameters: { toolPrompt?: (tool: ToolDefinition<string, unknown>) => string }) => ToolCallPromptTemplate<InstructionPrompt, InstructionPrompt>
text(__namedParameters: { toolPrompt?: (tool: ToolDefinition<string, unknown>) => string }) => ToolCallPromptTemplate<string, InstructionPrompt>

Defined in

packages/modelfusion/src/tool/generate-tool-call/jsonToolCallPrompt.ts:22


openAITextEmbeddingResponseSchema

Const openAITextEmbeddingResponseSchema: ZodObject<{ data: ZodArray<ZodObject<{ embedding: ZodArray<ZodNumber, "many"> ; index: ZodNumber ; object: ZodLiteral<"embedding"> }, "strip", ZodTypeAny, { embedding: number[] ; index: number ; object: "embedding" }, { embedding: number[] ; index: number ; object: "embedding" }>, "many"> ; model: ZodString ; object: ZodLiteral<"list"> ; usage: ZodOptional<ZodObject<{ prompt_tokens: ZodNumber ; total_tokens: ZodNumber }, "strip", ZodTypeAny, { prompt_tokens: number ; total_tokens: number }, { prompt_tokens: number ; total_tokens: number }>> }, "strip", ZodTypeAny, { data: { embedding: number[] ; index: number ; object: "embedding" }[] ; model: string ; object: "list" ; usage?: { prompt_tokens: number ; total_tokens: number } }, { data: { embedding: number[] ; index: number ; object: "embedding" }[] ; model: string ; object: "list" ; usage?: { prompt_tokens: number ; total_tokens: number } }>

Defined in

packages/modelfusion/src/model-provider/openai/OpenAITextEmbeddingModel.ts:34


textGenerationModelProperties

Const textGenerationModelProperties: readonly ["maxGenerationTokens", "stopSequences", "numberOfGenerations", "trimWhitespace"]

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:12

References

MistralChatMessage

Renames and re-exports ChatMessage


MistralChatPrompt

Renames and re-exports ChatPrompt


OllamaChatMessage

Renames and re-exports ChatMessage


OllamaChatPrompt

Renames and re-exports ChatPrompt


OpenAIChatMessage

Renames and re-exports ChatMessage


OpenAIChatPrompt

Renames and re-exports ChatPrompt


retryNever

Re-exports retryNever


retryWithExponentialBackoff

Re-exports retryWithExponentialBackoff


throttleMaxConcurrency

Re-exports throttleMaxConcurrency


throttleOff

Re-exports throttleOff

Type Aliases

AssistantContent

Ƭ AssistantContent: string | (TextPart | ToolCallPart)[]

Defined in

packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:41


AudioMimeType

Ƭ AudioMimeType: "audio/webm" | "audio/mp3" | "audio/wav" | "audio/mp4" | "audio/mpeg" | "audio/mpga" | "audio/ogg" | "audio/oga" | "audio/flac" | "audio/m4a"

Defined in

packages/modelfusion/src/util/audio/AudioMimeType.ts:1


Automatic1111ErrorData

Ƭ Automatic1111ErrorData: Object

Type declaration

NameType
bodystring
detailstring
errorstring
errorsstring

Defined in

packages/modelfusion/src/model-provider/automatic1111/Automatic1111Error.ts:16


Automatic1111ImageGenerationPrompt

Ƭ Automatic1111ImageGenerationPrompt: Object

Type declaration

NameType
negativePrompt?string
promptstring

Defined in

packages/modelfusion/src/model-provider/automatic1111/Automatic1111ImageGenerationPrompt.ts:3


Automatic1111ImageGenerationResponse

Ƭ Automatic1111ImageGenerationResponse: Object

Type declaration

NameType
imagesstring[]
infostring
parameters

Defined in

packages/modelfusion/src/model-provider/automatic1111/Automatic1111ImageGenerationModel.ts:184


AzureOpenAIApiConfigurationOptions

Ƭ AzureOpenAIApiConfigurationOptions: Object

Type declaration

NameType
apiKey?string
apiVersionstring
deploymentIdstring
resourceNamestring
retry?RetryFunction
throttle?ThrottleFunction

Defined in

packages/modelfusion/src/model-provider/openai/AzureOpenAIApiConfiguration.ts:6


BaseFunctionFinishedEventResult

Ƭ BaseFunctionFinishedEventResult: { status: "success" ; value: unknown } | { error: unknown ; status: "error" } | { status: "abort" }

Defined in

packages/modelfusion/src/core/FunctionEvent.ts:93


BaseModelCallFinishedEventResult

Ƭ BaseModelCallFinishedEventResult: { rawResponse: unknown ; status: "success" ; usage?: unknown ; value: unknown } | { error: unknown ; status: "error" } | { status: "abort" }

Defined in

packages/modelfusion/src/model-function/ModelCallEvent.ts:65


BaseUrlPartsApiConfigurationOptions

Ƭ BaseUrlPartsApiConfigurationOptions: Object

Type declaration

baseUrl: string | UrlParts
customCallHeaders?: CustomHeaderProvider
headers?: Record<string, string>
retry?: RetryFunction
throttle?: ThrottleFunction

Defined in

packages/modelfusion/src/core/api/BaseUrlApiConfiguration.ts:13


ChatMessage

Ƭ ChatMessage: { content: UserContent ; role: "user" } | { content: AssistantContent ; role: "assistant" } | { content: ToolContent ; role: "tool" }

A message in a chat prompt.

See

ChatPrompt

Defined in

packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:49

packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:54
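
The role field discriminates the union, so TypeScript can narrow a message by checking role. A minimal usage sketch, with the type re-declared locally and the content types simplified to plain strings (the real UserContent, AssistantContent, and ToolContent also allow structured parts such as images and tool calls):

```typescript
// Simplified local mirror of the documented ChatMessage union.
type ChatMessage =
  | { role: "user"; content: string }
  | { role: "assistant"; content: string }
  | { role: "tool"; content: string };

// A chat prompt is a sequence of role-tagged messages:
const chat: ChatMessage[] = [
  { role: "user", content: "What is the capital of France?" },
  { role: "assistant", content: "Paris." },
];
```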


ClassifyFinishedEventResult

Ƭ ClassifyFinishedEventResult: { rawResponse: unknown ; status: "success" ; value: unknown } | { error: unknown ; status: "error" } | { status: "abort" }

Defined in

packages/modelfusion/src/model-function/classify/ClassifyEvent.ts:11


CohereDetokenizationResponse

Ƭ CohereDetokenizationResponse: Object

Type declaration

meta: { api_version: { version: string } }
meta.api_version: { version: string }
meta.api_version.version: string
text: string

Defined in

packages/modelfusion/src/model-provider/cohere/CohereTokenizer.ts:141


CohereErrorData

Ƭ CohereErrorData: Object

Type declaration

message: string

Defined in

packages/modelfusion/src/model-provider/cohere/CohereError.ts:9


CohereTextEmbeddingModelType

Ƭ CohereTextEmbeddingModelType: keyof typeof COHERE_TEXT_EMBEDDING_MODELS

Defined in

packages/modelfusion/src/model-provider/cohere/CohereTextEmbeddingModel.ts:51


CohereTextEmbeddingResponse

Ƭ CohereTextEmbeddingResponse: Object

Type declaration

embeddings: number[][]
id: string
meta: { api_version: { version: string } }
meta.api_version: { version: string }
meta.api_version.version: string
texts: string[]

Defined in

packages/modelfusion/src/model-provider/cohere/CohereTextEmbeddingModel.ts:196


CohereTextGenerationModelType

Ƭ CohereTextGenerationModelType: keyof typeof COHERE_TEXT_GENERATION_MODELS

Defined in

packages/modelfusion/src/model-provider/cohere/CohereTextGenerationModel.ts:41


CohereTextGenerationResponse

Ƭ CohereTextGenerationResponse: Object

Type declaration

generations: { finish_reason?: string ; id: string ; text: string }[]
id: string
meta?: { api_version: { version: string } }
meta.api_version: { version: string }
meta.api_version.version: string
prompt: string

Defined in

packages/modelfusion/src/model-provider/cohere/CohereTextGenerationModel.ts:292


CohereTextGenerationResponseFormatType

Ƭ CohereTextGenerationResponseFormatType<T>: Object

Type parameters

Name
T

Type declaration

handler: ResponseHandler<T>
stream: boolean

Defined in

packages/modelfusion/src/model-provider/cohere/CohereTextGenerationModel.ts:310


CohereTextStreamChunk

Ƭ CohereTextStreamChunk: { is_finished: false ; text: string } | { finish_reason: string ; is_finished: true ; response: { generations: { finish_reason?: string ; id: string ; text: string }[] ; id: string ; meta?: { api_version: { version: string } } ; prompt: string } = cohereTextGenerationResponseSchema }

Defined in

packages/modelfusion/src/model-provider/cohere/CohereTextGenerationModel.ts:308


CohereTokenizationResponse

Ƭ CohereTokenizationResponse: Object

Type declaration

meta: { api_version: { version: string } }
meta.api_version: { version: string }
meta.api_version.version: string
token_strings: string[]
tokens: number[]

Defined in

packages/modelfusion/src/model-provider/cohere/CohereTokenizer.ts:155


CohereTokenizerModelType

Ƭ CohereTokenizerModelType: CohereTextGenerationModelType | CohereTextEmbeddingModelType

Defined in

packages/modelfusion/src/model-provider/cohere/CohereTokenizer.ts:16


CustomHeaderProvider

Ƭ CustomHeaderProvider: (headerParameters: HeaderParameters) => Record<string, string | undefined>

Type declaration

▸ (headerParameters): Record<string, string | undefined>

Parameters
headerParameters: HeaderParameters
Returns

Record<string, string | undefined>

Defined in

packages/modelfusion/src/core/api/CustomHeaderProvider.ts:3
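
A custom header provider receives the per-call header parameters and returns extra request headers. A minimal sketch with the relevant aliases re-declared locally (the Run field of HeaderParameters is omitted for brevity; the header names are illustrative, not part of modelfusion):

```typescript
// Simplified local mirrors of the documented aliases.
type HeaderParameters = {
  callId: string;
  functionId?: string;
  functionType: string;
};

type CustomHeaderProvider = (
  headerParameters: HeaderParameters
) => Record<string, string | undefined>;

// Hypothetical provider that tags each API request with tracing headers.
const traceHeaders: CustomHeaderProvider = ({ callId, functionType }) => ({
  "X-Trace-Id": callId,
  "X-Function-Type": functionType,
});
```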


DataContent

Ƭ DataContent: string | Uint8Array | ArrayBuffer | Buffer

Data content. Can either be a base64-encoded string, a Uint8Array, an ArrayBuffer, or a Buffer.

Defined in

packages/modelfusion/src/util/format/DataContent.ts:6
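
Binary inputs in any of these forms end up as the same bytes. A sketch of the base64 conversion, similar in spirit to convertDataContentToBase64String but not the library implementation (Buffer is omitted from the local alias for brevity; strings are assumed to already be base64-encoded and are passed through unchanged):

```typescript
// Simplified local mirror of the documented alias.
type DataContent = string | Uint8Array | ArrayBuffer;

// Convert any accepted data content form to a base64 string.
function toBase64(content: DataContent): string {
  if (typeof content === "string") return content; // already base64
  const bytes =
    content instanceof ArrayBuffer ? new Uint8Array(content) : content;
  return Buffer.from(bytes).toString("base64");
}
```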


Delta

Ƭ Delta<T>: { deltaValue: T ; type: "delta" } | { error: unknown ; type: "error" }

Type parameters

Name
T

Defined in

packages/modelfusion/src/model-function/Delta.ts:1
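
Streaming consumers narrow on the type field to separate data deltas from errors. A minimal sketch with the alias re-declared locally:

```typescript
// Local mirror of the documented alias.
type Delta<T> =
  | { type: "delta"; deltaValue: T }
  | { type: "error"; error: unknown };

// Concatenate text deltas; rethrow the first error encountered.
function collectText(deltas: Delta<string>[]): string {
  let text = "";
  for (const delta of deltas) {
    if (delta.type === "delta") text += delta.deltaValue;
    else throw delta.error;
  }
  return text;
}
```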


EmbeddingFinishedEventResult

Ƭ EmbeddingFinishedEventResult: { rawResponse: unknown ; status: "success" ; value: Vector | Vector[] } | { error: unknown ; status: "error" } | { status: "abort" }

Defined in

packages/modelfusion/src/model-function/embed/EmbeddingEvent.ts:12


ExecuteToolMetadata

Ƭ ExecuteToolMetadata: Object

Type declaration

callId: string
durationInMs: number
finishTimestamp: Date
functionId?: string
runId?: string
sessionId?: string
startTimestamp: Date
userId?: string

Defined in

packages/modelfusion/src/tool/execute-tool/executeTool.ts:16


FlexibleObjectFromTextPromptTemplate

Ƭ FlexibleObjectFromTextPromptTemplate<SOURCE_PROMPT, INTERMEDIATE_PROMPT>: Object

Type parameters

Name
SOURCE_PROMPT
INTERMEDIATE_PROMPT

Type declaration

adaptModel: (model: TextStreamingModel<never> & { withChatPrompt: () => TextStreamingModel<ChatPrompt, TextGenerationModelSettings> ; withInstructionPrompt: () => TextStreamingModel<InstructionPrompt, TextGenerationModelSettings> ; withTextPrompt: () => TextStreamingModel<string, TextGenerationModelSettings> }) => TextStreamingModel<INTERMEDIATE_PROMPT>
createPrompt: (prompt: SOURCE_PROMPT, schema: Schema<unknown> & JsonSchemaProducer) => INTERMEDIATE_PROMPT
extractObject: (response: string) => unknown
withJsonOutput?: (__namedParameters: { model: { withJsonOutput: (schema: Schema<unknown> & JsonSchemaProducer) => { withJsonOutput: (schema: Schema<unknown> & JsonSchemaProducer) => { withJsonOutput(schema: Schema<unknown> & JsonSchemaProducer): ...; } } } ; schema: Schema<unknown> & JsonSchemaProducer }) => { withJsonOutput: (schema: Schema<unknown> & JsonSchemaProducer) => { withJsonOutput(schema: Schema<unknown> & JsonSchemaProducer): ...; } }

Defined in

packages/modelfusion/src/model-function/generate-object/ObjectFromTextPromptTemplate.ts:28


FunctionCallOptions

Ƭ FunctionCallOptions: Omit<FunctionOptions, "callId"> & { callId: string ; functionType: string }

Extended options that are passed to models when functions are called. For example, they are passed to API providers to create custom headers.

Defined in

packages/modelfusion/src/core/FunctionOptions.ts:53


FunctionEvent

Ƭ FunctionEvent: ExecuteFunctionStartedEvent | ExecuteFunctionFinishedEvent | ExecuteToolStartedEvent | ExecuteToolFinishedEvent | ExtensionFunctionStartedEvent | ExtensionFunctionFinishedEvent | ModelCallStartedEvent | ModelCallFinishedEvent | RetrieveStartedEvent | RetrieveFinishedEvent | UpsertIntoVectorIndexStartedEvent | UpsertIntoVectorIndexFinishedEvent | runToolStartedEvent | runToolFinishedEvent | runToolsStartedEvent | runToolsFinishedEvent

Defined in

packages/modelfusion/src/core/FunctionEvent.ts:125


FunctionOptions

Ƭ FunctionOptions: Object

Additional settings for ModelFusion functions.

Type declaration

cache?: Cache - Optional cache that can be used by the function to store and retrieve cached values. Not supported by all functions.
callId?: string - Unique identifier of the call id of the parent function. Used in events and logging. It has the same name as the callId in FunctionCallOptions to allow for easy propagation of the call id. However, in FunctionOptions it is the call ID of the parent call, and it is optional.
functionId?: string - Optional function identifier. Used in events and logging.
logging?: LogFormat - Optional logging to use for the function. Logs are sent to the console. Overrides the global function logging setting.
observers?: FunctionObserver[] - Optional observers that are called when the function is invoked.
run?: Run - Optional run as part of which this function is called. Used in events and logging. Run callbacks are invoked when it is provided.

Defined in

packages/modelfusion/src/core/FunctionOptions.ts:9
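
A usage sketch of passing these options alongside a model call. The types are re-declared locally and simplified (the real alias also references Cache, Run, and FunctionObserver from modelfusion):

```typescript
// Simplified local mirrors of the documented aliases.
type LogFormat =
  | undefined
  | "off"
  | "basic-text"
  | "detailed-object"
  | "detailed-json";

type FunctionOptions = {
  functionId?: string;
  callId?: string;
  logging?: LogFormat;
};

// Options that would be passed as the second argument of a model function,
// e.g. generateText({ model, prompt }, options):
const options: FunctionOptions = {
  functionId: "summarize-article",
  logging: "basic-text",
};
```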


HeaderParameters

Ƭ HeaderParameters: Object

Type declaration

callId: string
functionId?: string
functionType: string
run?: Run

Defined in

packages/modelfusion/src/core/api/ApiConfiguration.ts:5


HuggingFaceErrorData

Ƭ HuggingFaceErrorData: Object

Type declaration

error: string | string[] & undefined | string | string[]

Defined in

packages/modelfusion/src/model-provider/huggingface/HuggingFaceError.ts:9


HuggingFaceTextEmbeddingResponse

Ƭ HuggingFaceTextEmbeddingResponse: number[][]

Defined in

packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextEmbeddingModel.ts:148


HuggingFaceTextGenerationResponse

Ƭ HuggingFaceTextGenerationResponse: { generated_text: string }[]

Defined in

packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:194


ImageGenerationFinishedEventResult

Ƭ ImageGenerationFinishedEventResult: { rawResponse: unknown ; status: "success" ; value: string } | { error: unknown ; status: "error" } | { status: "abort" }

Defined in

packages/modelfusion/src/model-function/generate-image/ImageGenerationEvent.ts:10


InstructionContent

Ƭ InstructionContent: string | (TextPart | ImagePart)[]

Defined in

packages/modelfusion/src/model-function/generate-text/prompt-template/InstructionPrompt.ts:40


LlamaCppCompletionResponseFormatType

Ƭ LlamaCppCompletionResponseFormatType<T>: Object

Type parameters

Name
T

Type declaration

handler: ResponseHandler<T>
stream: boolean

Defined in

packages/modelfusion/src/model-provider/llamacpp/LlamaCppCompletionModel.ts:588


LlamaCppErrorData

Ƭ LlamaCppErrorData: Object

Type declaration

error: string

Defined in

packages/modelfusion/src/model-provider/llamacpp/LlamaCppError.ts:9


LlamaCppTextEmbeddingResponse

Ƭ LlamaCppTextEmbeddingResponse: Object

Type declaration

embedding: number[]

Defined in

packages/modelfusion/src/model-provider/llamacpp/LlamaCppTextEmbeddingModel.ts:120


LlamaCppTextGenerationResponse

Ƭ LlamaCppTextGenerationResponse: Object

Type declaration

content: string
generation_settings: { frequency_penalty: number ; ignore_eos: boolean ; logit_bias: number[] ; mirostat: number ; mirostat_eta: number ; mirostat_tau: number ; model: string ; n_ctx: number ; n_keep: number ; n_predict: number ; n_probs: number ; penalize_nl: boolean ; presence_penalty: number ; repeat_last_n: number ; repeat_penalty: number ; seed: number ; stop: string[] ; stream: boolean ; temperature?: number ; tfs_z: number ; top_k: number ; top_p: number ; typical_p: number }
generation_settings.frequency_penalty: number
generation_settings.ignore_eos: boolean
generation_settings.logit_bias: number[]
generation_settings.mirostat: number
generation_settings.mirostat_eta: number
generation_settings.mirostat_tau: number
generation_settings.model: string
generation_settings.n_ctx: number
generation_settings.n_keep: number
generation_settings.n_predict: number
generation_settings.n_probs: number
generation_settings.penalize_nl: boolean
generation_settings.presence_penalty: number
generation_settings.repeat_last_n: number
generation_settings.repeat_penalty: number
generation_settings.seed: number
generation_settings.stop: string[]
generation_settings.stream: boolean
generation_settings.temperature?: number
generation_settings.tfs_z: number
generation_settings.top_k: number
generation_settings.top_p: number
generation_settings.typical_p: number
model: string
prompt: string
stop: true
stopped_eos: boolean
stopped_limit: boolean
stopped_word: boolean
stopping_word: string
timings: { predicted_ms: number ; predicted_n: number ; predicted_per_second: null | number ; predicted_per_token_ms: null | number ; prompt_ms?: null | number ; prompt_n: number ; prompt_per_second: null | number ; prompt_per_token_ms: null | number }
timings.predicted_ms: number
timings.predicted_n: number
timings.predicted_per_second: null | number
timings.predicted_per_token_ms: null | number
timings.prompt_ms?: null | number
timings.prompt_n: number
timings.prompt_per_second: null | number
timings.prompt_per_token_ms: null | number
tokens_cached: number
tokens_evaluated: number
tokens_predicted: number
truncated: boolean

Defined in

packages/modelfusion/src/model-provider/llamacpp/LlamaCppCompletionModel.ts:536


LlamaCppTextStreamChunk

Ƭ LlamaCppTextStreamChunk: { content: string ; generation_settings: { frequency_penalty: number ; ignore_eos: boolean ; logit_bias: number[] ; mirostat: number ; mirostat_eta: number ; mirostat_tau: number ; model: string ; n_ctx: number ; n_keep: number ; n_predict: number ; n_probs: number ; penalize_nl: boolean ; presence_penalty: number ; repeat_last_n: number ; repeat_penalty: number ; seed: number ; stop: string[] ; stream: boolean ; temperature?: number ; tfs_z: number ; top_k: number ; top_p: number ; typical_p: number } ; model: string ; prompt: string ; stop: true ; stopped_eos: boolean ; stopped_limit: boolean ; stopped_word: boolean ; stopping_word: string ; timings: { predicted_ms: number ; predicted_n: number ; predicted_per_second: null | number ; predicted_per_token_ms: null | number ; prompt_ms?: null | number ; prompt_n: number ; prompt_per_second: null | number ; prompt_per_token_ms: null | number } ; tokens_cached: number ; tokens_evaluated: number ; tokens_predicted: number ; truncated: boolean } | { content: string ; stop: false }

Defined in

packages/modelfusion/src/model-provider/llamacpp/LlamaCppCompletionModel.ts:548


LlamaCppTokenizationResponse

Ƭ LlamaCppTokenizationResponse: Object

Type declaration

tokens: number[]

Defined in

packages/modelfusion/src/model-provider/llamacpp/LlamaCppTokenizer.ts:75


LmntSpeechResponse

Ƭ LmntSpeechResponse: Object

Type declaration

audio: string
durations: { duration: number ; start: number ; text: string }[]
seed: number

Defined in

packages/modelfusion/src/model-provider/lmnt/LmntSpeechModel.ts:151


LogFormat

Ƭ LogFormat: undefined | "off" | "basic-text" | "detailed-object" | "detailed-json"

The logging output format to use for functions. Logs are sent to the console.

  • off or undefined: No logging.
  • basic-text: Log the timestamp and the type of event as a single line of text.
  • detailed-object: Log everything except the original response as an object to the console.
  • detailed-json: Log everything except the original response as a JSON string to the console.

Defined in

packages/modelfusion/src/core/LogFormat.ts:10


MistralChatResponse

Ƭ MistralChatResponse: Object

Type declaration

choices: { finish_reason: "length" | "stop" | "model_length" ; index: number ; message: { content: string ; role: "user" | "assistant" } }[]
created: number
id: string
model: string
object: string
usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number }
usage.completion_tokens: number
usage.prompt_tokens: number
usage.total_tokens: number

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:267


MistralChatResponseFormatType

Ƭ MistralChatResponseFormatType<T>: Object

Type parameters

Name
T

Type declaration

handler: ResponseHandler<T>
stream: boolean

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:293


MistralChatStreamChunk

Ƭ MistralChatStreamChunk: Object

Type declaration

choices: { delta: { content?: null | string ; role?: null | "user" | "assistant" } ; finish_reason?: null | "length" | "stop" | "model_length" ; index: number }[]
created?: number
id: string
model: string
object?: string

Defined in

packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:289


MistralErrorData

Ƭ MistralErrorData: Object

Type declaration

code: string
message: string
object: "error"
param: null | string
type: string

Defined in

packages/modelfusion/src/model-provider/mistral/MistralError.ts:17


MistralTextEmbeddingResponse

Ƭ MistralTextEmbeddingResponse: Object

Type declaration

data: { embedding: number[] ; index: number ; object: string }[]
id: string
model: string
object: string
usage: { prompt_tokens: number ; total_tokens: number }
usage.prompt_tokens: number
usage.total_tokens: number

Defined in

packages/modelfusion/src/model-provider/mistral/MistralTextEmbeddingModel.ts:138


ModelCallFinishedEvent

Ƭ ModelCallFinishedEvent: ClassifyFinishedEvent | EmbeddingFinishedEvent | ImageGenerationFinishedEvent | SpeechGenerationFinishedEvent | SpeechStreamingFinishedEvent | ObjectGenerationFinishedEvent | ObjectStreamingFinishedEvent | TextGenerationFinishedEvent | TextStreamingFinishedEvent | ToolCallGenerationFinishedEvent | ToolCallsGenerationFinishedEvent | TranscriptionFinishedEvent

Defined in

packages/modelfusion/src/model-function/ModelCallEvent.ts:117


ModelCallMetadata

Ƭ ModelCallMetadata: Object

Type declaration

callId: string
durationInMs: number
finishTimestamp: Date
functionId?: string
model: ModelInformation
runId?: string
sessionId?: string
startTimestamp: Date
usage?: unknown
userId?: string

Defined in

packages/modelfusion/src/model-function/ModelCallMetadata.ts:3


ModelCallStartedEvent

Ƭ ModelCallStartedEvent: ClassifyStartedEvent | EmbeddingStartedEvent | ImageGenerationStartedEvent | SpeechGenerationStartedEvent | SpeechStreamingStartedEvent | ObjectGenerationStartedEvent | ObjectStreamingStartedEvent | TextGenerationStartedEvent | TextStreamingStartedEvent | ToolCallGenerationStartedEvent | ToolCallsGenerationStartedEvent | TranscriptionStartedEvent

Defined in

packages/modelfusion/src/model-function/ModelCallEvent.ts:103


ModelInformation

Ƭ ModelInformation: Object

Type declaration

modelName: string | null
provider: string

Defined in

packages/modelfusion/src/model-function/ModelInformation.ts:1


ObjectFromTextPromptTemplate

Ƭ ObjectFromTextPromptTemplate<SOURCE_PROMPT, TARGET_PROMPT>: Object

Type parameters

Name
SOURCE_PROMPT
TARGET_PROMPT

Type declaration

createPrompt: (prompt: SOURCE_PROMPT, schema: Schema<unknown> & JsonSchemaProducer) => TARGET_PROMPT
extractObject: (response: string) => unknown
withJsonOutput?: (__namedParameters: { model: { withJsonOutput: (schema: Schema<unknown> & JsonSchemaProducer) => { withJsonOutput: (schema: Schema<unknown> & JsonSchemaProducer) => { withJsonOutput(schema: Schema<unknown> & JsonSchemaProducer): ...; } } } ; schema: Schema<unknown> & JsonSchemaProducer }) => { withJsonOutput: (schema: Schema<unknown> & JsonSchemaProducer) => { withJsonOutput(schema: Schema<unknown> & JsonSchemaProducer): ...; } }

Defined in

packages/modelfusion/src/model-function/generate-object/ObjectFromTextPromptTemplate.ts:7


ObjectGenerationFinishedEvent

Ƭ ObjectGenerationFinishedEvent: BaseModelCallFinishedEvent & { functionType: "generate-object" ; result: ObjectGenerationFinishedEventResult }

Defined in

packages/modelfusion/src/model-function/generate-object/ObjectGenerationEvent.ts:26


ObjectGenerationFinishedEventResult

Ƭ ObjectGenerationFinishedEventResult: { rawResponse: unknown ; status: "success" ; usage?: { completionTokens: number ; promptTokens: number ; totalTokens: number } ; value: unknown } | { error: unknown ; status: "error" } | { status: "abort" }

Defined in

packages/modelfusion/src/model-function/generate-object/ObjectGenerationEvent.ts:11


ObjectStream

Ƭ ObjectStream<OBJECT>: AsyncIterable<{ partialObject: PartialDeep<OBJECT, { recurseIntoArrays: true }> ; partialText: string ; textDelta: string }>

Type parameters

Name
OBJECT

Defined in

packages/modelfusion/src/model-function/generate-object/ObjectStream.ts:5


OllamaChatResponse

Ƭ OllamaChatResponse: Object

Type declaration

created_at: string
done: true
eval_count: number
eval_duration: number
load_duration?: number
message: { content: string ; role: string }
message.content: string
message.role: string
model: string
prompt_eval_count?: number
prompt_eval_duration?: number
total_duration: number

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:286


OllamaChatResponseFormatType

Ƭ OllamaChatResponseFormatType<T>: Object

Type parameters

Name
T

Type declaration

handler: ResponseHandler<T>
stream: boolean

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:313


OllamaChatStreamChunk

Ƭ OllamaChatStreamChunk: { created_at: string ; done: false ; message: { content: string ; role: string } ; model: string } | { created_at: string ; done: true ; eval_count: number ; eval_duration: number ; load_duration?: number ; model: string ; prompt_eval_count?: number ; prompt_eval_duration?: number ; total_duration: number }

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:311


OllamaCompletionResponse

Ƭ OllamaCompletionResponse: Object

Type declaration

context?: number[]
created_at: string
done: true
eval_count: number
eval_duration: number
load_duration?: number
model: string
prompt_eval_count?: number
prompt_eval_duration?: number
response: string
total_duration: number

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaCompletionModel.ts:361


OllamaCompletionResponseFormatType

Ƭ OllamaCompletionResponseFormatType<T>: Object

Type parameters

Name
T

Type declaration

handler: ResponseHandler<T>
stream: boolean

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaCompletionModel.ts:392


OllamaCompletionStreamChunk

Ƭ OllamaCompletionStreamChunk: { created_at: string ; done: false ; model: string ; response: string } | { context?: number[] ; created_at: string ; done: true ; eval_count: number ; eval_duration: number ; load_duration?: number ; model: string ; prompt_eval_count?: number ; prompt_eval_duration?: number ; sample_count?: number ; sample_duration?: number ; total_duration: number }

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaCompletionModel.ts:388


OllamaErrorData

Ƭ OllamaErrorData: Object

Type declaration

error: string

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaError.ts:13


OllamaTextEmbeddingResponse

Ƭ OllamaTextEmbeddingResponse: Object

Type declaration

embedding: number[]

Defined in

packages/modelfusion/src/model-provider/ollama/OllamaTextEmbeddingModel.ts:112


OpenAIChatBaseModelType

Ƭ OpenAIChatBaseModelType: keyof typeof CHAT_MODEL_CONTEXT_WINDOW_SIZES

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:94


OpenAIChatChunk

Ƭ OpenAIChatChunk: Object

Type declaration

choices: { delta: { content?: null | string ; function_call?: { arguments?: string ; name?: string } ; role?: "user" | "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } ; finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index: number }[]
created: number
id: string
model?: string
object: string
system_fingerprint?: null | string

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:442


OpenAIChatModelType

Ƭ OpenAIChatModelType: OpenAIChatBaseModelType | FineTunedOpenAIChatModelType

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:97


OpenAIChatResponse

Ƭ OpenAIChatResponse: Object

Type declaration

choices: { finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index?: number ; logprobs?: any ; message: { content: null | string ; function_call?: { arguments: string ; name: string } ; role: "assistant" ; tool_calls?: { function: { arguments: string ; name: string } ; id: string ; type: "function" }[] } }[]
created: number
id: string
model: string
object: "chat.completion"
system_fingerprint?: null | string
usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number }
usage.completion_tokens: number
usage.prompt_tokens: number
usage.total_tokens: number

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:395


OpenAIChatResponseFormatType

Ƭ OpenAIChatResponseFormatType<T>: Object

Type parameters

Name
T

Type declaration

handler: ResponseHandler<T>
stream: boolean

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:444


OpenAICompatibleProviderName

Ƭ OpenAICompatibleProviderName: "openaicompatible" | `openaicompatible-${string}`

Defined in

packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleApiConfiguration.ts:3


OpenAICompletionModelType

Ƭ OpenAICompletionModelType: keyof typeof OPENAI_TEXT_GENERATION_MODELS

Defined in

packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:37


OpenAICompletionResponse

Ƭ OpenAICompletionResponse: Object

Type declaration

choices: { finish_reason?: null | "length" | "stop" | "content_filter" ; index: number ; logprobs?: any ; text: string }[]
created: number
id: string
model: string
object: "text_completion"
system_fingerprint?: string
usage: { completion_tokens: number ; prompt_tokens: number ; total_tokens: number }
usage.completion_tokens: number
usage.prompt_tokens: number
usage.total_tokens: number

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:207


OpenAIErrorData

Ƭ OpenAIErrorData: Object

Type declaration

error: { code: null | string ; message: string ; param?: any ; type: string }
error.code: null | string
error.message: string
error.param?: any
error.type: string

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIError.ts:14


OpenAIImageGenerationBase64JsonResponse

Ƭ OpenAIImageGenerationBase64JsonResponse: Object

Type declaration

created: number
data: { b64_json: string }[]

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIImageGenerationModel.ts:170


OpenAIImageGenerationResponseFormatType

Ƭ OpenAIImageGenerationResponseFormatType<T>: Object

Type parameters

Name
T

Type declaration

handler: ResponseHandler<T>
type: "b64_json" | "url"

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIImageGenerationModel.ts:143


OpenAIImageGenerationUrlResponse

Ƭ OpenAIImageGenerationUrlResponse: Object

Type declaration

created: number
data: { url: string }[]

Defined in

packages/modelfusion/src/model-provider/openai/OpenAIImageGenerationModel.ts:157


OpenAISpeechModelType

Ƭ OpenAISpeechModelType: "tts-1" | "tts-1-hd"

Defined in

packages/modelfusion/src/model-provider/openai/OpenAISpeechModel.ts:25


OpenAISpeechVoice

Ƭ OpenAISpeechVoice: "alloy" | "echo" | "fable" | "onyx" | "nova" | "shimmer"

Defined in

packages/modelfusion/src/model-provider/openai/OpenAISpeechModel.ts:16


OpenAITextEmbeddingModelType

Ƭ OpenAITextEmbeddingModelType: keyof typeof OPENAI_TEXT_EMBEDDING_MODELS

Defined in

packages/modelfusion/src/model-provider/openai/OpenAITextEmbeddingModel.ts:26


OpenAITextEmbeddingResponse

Ƭ OpenAITextEmbeddingResponse: Object

Type declaration

data: { embedding: number[] ; index: number ; object: "embedding" }[]
model: string
object: "list"
usage?: { prompt_tokens: number ; total_tokens: number }
usage.prompt_tokens: number
usage.total_tokens: number

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAITextEmbeddingModel.ts:114


OpenAITextResponseFormatType

Ƭ OpenAITextResponseFormatType<T>: Object

Type parameters

Name
T

Type declaration

handler: ResponseHandler<T>
stream: boolean

Defined in

packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:233


OpenAITranscriptionJsonResponse

Ƭ OpenAITranscriptionJsonResponse: Object

Type declaration

text: string

Defined in

packages/modelfusion/src/model-provider/openai/OpenAITranscriptionModel.ts:193


OpenAITranscriptionResponseFormatType

Ƭ OpenAITranscriptionResponseFormatType<T>: Object

Type parameters

Name
T

Type declaration

handler: ResponseHandler<T>
type: "json" | "text" | "srt" | "verbose_json" | "vtt"

Defined in

packages/modelfusion/src/model-provider/openai/OpenAITranscriptionModel.ts:223


OpenAITranscriptionVerboseJsonResponse

Ƭ OpenAITranscriptionVerboseJsonResponse: Object

Type declaration

duration: number
language: string
segments: { avg_logprob: number ; compression_ratio: number ; end: number ; id: number ; no_speech_prob: number ; seek: number ; start: number ; temperature: number ; text: string ; tokens: number[] ; transient?: boolean }[]
task: "transcribe"
text: string

Defined in

packages/modelfusion/src/model-provider/openai/OpenAITranscriptionModel.ts:219


PartialBaseUrlPartsApiConfigurationOptions

Ƭ PartialBaseUrlPartsApiConfigurationOptions: Omit<BaseUrlPartsApiConfigurationOptions, "baseUrl"> & { baseUrl?: string | Partial<UrlParts> }

Defined in

packages/modelfusion/src/core/api/BaseUrlApiConfiguration.ts:64


PromptFunction

Ƭ PromptFunction<INPUT, PROMPT>: () => PromiseLike<{ input: INPUT ; prompt: PROMPT }> & { [promptFunctionMarker]: true }

Type parameters

Name
INPUT
PROMPT

Defined in

packages/modelfusion/src/core/PromptFunction.ts:1


RetryErrorReason

Ƭ RetryErrorReason: "maxTriesExceeded" | "errorNotRetryable" | "abort"

Defined in

packages/modelfusion/src/core/api/RetryError.ts:1


RetryFunction

Ƭ RetryFunction: <OUTPUT>(fn: () => PromiseLike<OUTPUT>) => PromiseLike<OUTPUT>

Type declaration

▸ <OUTPUT>(fn): PromiseLike<OUTPUT>

Type parameters
Name
OUTPUT
Parameters
fn: () => PromiseLike<OUTPUT>
Returns

PromiseLike<OUTPUT>

Defined in

packages/modelfusion/src/core/api/RetryFunction.ts:1
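
A RetryFunction wraps an async operation and decides when to re-run it. A minimal sketch that retries a fixed number of times with no backoff (the alias is re-declared locally; modelfusion's built-in retry strategies such as exponential backoff are richer than this):

```typescript
// Local mirror of the documented alias.
type RetryFunction = <OUTPUT>(
  fn: () => PromiseLike<OUTPUT>
) => PromiseLike<OUTPUT>;

// Sketch: retry up to `maxTries` times, rethrowing the last error.
const retryUpTo =
  (maxTries: number): RetryFunction =>
  async <OUTPUT>(fn: () => PromiseLike<OUTPUT>): Promise<OUTPUT> => {
    let lastError: unknown;
    for (let attempt = 1; attempt <= maxTries; attempt++) {
      try {
        return await fn();
      } catch (error) {
        lastError = error;
      }
    }
    throw lastError;
  };
```

A function like this can be passed as the retry option of an API configuration.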


SpeechGenerationFinishedEventResult

Ƭ SpeechGenerationFinishedEventResult: { rawResponse: unknown ; status: "success" ; value: Uint8Array } | { error: unknown ; status: "error" } | { status: "abort" }

Defined in

packages/modelfusion/src/model-function/generate-speech/SpeechGenerationEvent.ts:12


SplitFunction

Ƭ SplitFunction: (input: { text: string }, options?: FunctionOptions) => PromiseLike<string[]>

Type declaration

▸ (input, options?): PromiseLike<string[]>

Parameters
input: Object
input.text: string
options?: FunctionOptions
Returns

PromiseLike<string[]>

Defined in

packages/modelfusion/src/text-chunk/split/SplitFunction.ts:3
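
A SplitFunction turns a text into chunks for downstream processing. A minimal sketch with the alias re-declared locally (FunctionOptions simplified to unknown); modelfusion ships smarter splitters, e.g. recursive token-based ones:

```typescript
// Simplified local mirror of the documented alias.
type SplitFunction = (
  input: { text: string },
  options?: unknown
) => PromiseLike<string[]>;

// Sketch: split text into paragraph chunks on blank lines.
const splitParagraphs: SplitFunction = async ({ text }) =>
  text
    .split(/\n{2,}/)
    .map((chunk) => chunk.trim())
    .filter((chunk) => chunk.length > 0);
```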


StabilityClipGuidancePreset

Ƭ StabilityClipGuidancePreset: "FAST_BLUE" | "FAST_GREEN" | "NONE" | "SIMPLE" | "SLOW" | "SLOWER" | "SLOWEST"

Defined in

packages/modelfusion/src/model-provider/stability/StabilityImageGenerationModel.ts:67


StabilityErrorData

Ƭ StabilityErrorData: Object

Type declaration

message: string

Defined in

packages/modelfusion/src/model-provider/stability/StabilityError.ts:13


StabilityImageGenerationModelType

Ƭ StabilityImageGenerationModelType: typeof stabilityImageGenerationModels[number] | string & {}

Defined in

packages/modelfusion/src/model-provider/stability/StabilityImageGenerationModel.ts:29


StabilityImageGenerationPrompt

Ƭ StabilityImageGenerationPrompt: { text: string ; weight?: number }[]

Defined in

packages/modelfusion/src/model-provider/stability/StabilityImageGenerationPrompt.ts:3
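
The prompt is a list of weighted text parts; in the Stability API, a part with a negative weight acts like a negative prompt and steers generation away from that concept. A usage sketch with the alias re-declared locally:

```typescript
// Local mirror of the documented alias.
type StabilityImageGenerationPrompt = { text: string; weight?: number }[];

// Weight defaults are provider-side; a negative weight down-weights a concept.
const prompt: StabilityImageGenerationPrompt = [
  { text: "a painting of a lighthouse at dawn" },
  { text: "blurry, low quality", weight: -1 },
];
```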


StabilityImageGenerationResponse

Ƭ StabilityImageGenerationResponse: Object

Type declaration

artifacts: { base64: string ; finishReason: "ERROR" | "SUCCESS" | "CONTENT_FILTERED" ; seed: number }[]

Defined in

packages/modelfusion/src/model-provider/stability/StabilityImageGenerationModel.ts:256


StabilityImageGenerationSampler

Ƭ StabilityImageGenerationSampler: "DDIM" | "DDPM" | "K_DPMPP_2M" | "K_DPMPP_2S_ANCESTRAL" | "K_DPM_2" | "K_DPM_2_ANCESTRAL" | "K_EULER" | "K_EULER_ANCESTRAL" | "K_HEUN" | "K_LMS"

Defined in

packages/modelfusion/src/model-provider/stability/StabilityImageGenerationModel.ts:55


StabilityImageGenerationStylePreset

Ƭ StabilityImageGenerationStylePreset: "3d-model" | "analog-film" | "anime" | "cinematic" | "comic-book" | "digital-art" | "enhance" | "fantasy-art" | "isometric" | "line-art" | "low-poly" | "modeling-compound" | "neon-punk" | "origami" | "photographic" | "pixel-art" | "tile-texture"

Defined in

packages/modelfusion/src/model-provider/stability/StabilityImageGenerationModel.ts:36


TextChunk

Ƭ TextChunk: Object

Type declaration

| Name | Type |
| :--- | :--- |
| `text` | `string` |

Defined in

packages/modelfusion/src/text-chunk/TextChunk.ts:1


TextGenerationFinishReason

Ƭ TextGenerationFinishReason: "stop" | "length" | "content-filter" | "tool-calls" | "error" | "other" | "unknown"

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationResult.ts:13


TextGenerationFinishedEventResult

Ƭ TextGenerationFinishedEventResult: { rawResponse: unknown ; status: "success" ; usage?: { completionTokens: number ; promptTokens: number ; totalTokens: number } ; value: string } | { error: unknown ; status: "error" } | { status: "abort" }

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationEvent.ts:10
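This is a discriminated union on `status`. As a sketch (the helper `summarize` is hypothetical), narrowing works like this:

```typescript
type TextGenerationFinishedEventResult =
  | {
      status: "success";
      rawResponse: unknown;
      value: string;
      usage?: { promptTokens: number; completionTokens: number; totalTokens: number };
    }
  | { status: "error"; error: unknown }
  | { status: "abort" };

// Switching on `status` narrows the union to the matching branch.
function summarize(result: TextGenerationFinishedEventResult): string {
  switch (result.status) {
    case "success":
      return `generated ${result.value.length} characters`;
    case "error":
      return `failed: ${String(result.error)}`;
    case "abort":
      return "aborted";
  }
}
```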


TextGenerationResult

Ƭ TextGenerationResult: Object

Type declaration

| Name | Type | Description |
| :--- | :--- | :--- |
| `finishReason` | `TextGenerationFinishReason` | The reason why the generation stopped. |
| `text` | `string` | The generated text. |

Defined in

packages/modelfusion/src/model-function/generate-text/TextGenerationResult.ts:1
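For illustration, the `finishReason` field can be used to detect results that were cut off by the token limit (the helper `wasTruncated` is hypothetical):

```typescript
type TextGenerationFinishReason =
  | "stop" | "length" | "content-filter" | "tool-calls" | "error" | "other" | "unknown";

type TextGenerationResult = {
  text: string;
  finishReason: TextGenerationFinishReason;
};

// A "length" finish reason means the model hit the token limit mid-output.
function wasTruncated(result: TextGenerationResult): boolean {
  return result.finishReason === "length";
}
```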


ThrottleFunction

Ƭ ThrottleFunction: <OUTPUT>(fn: () => PromiseLike<OUTPUT>) => PromiseLike<OUTPUT>

Type declaration

▸ <OUTPUT>(fn): PromiseLike<OUTPUT>

Type parameters

| Name |
| :--- |
| `OUTPUT` |

Parameters

| Name | Type |
| :--- | :--- |
| `fn` | () => `PromiseLike`<`OUTPUT`> |
Returns

PromiseLike<OUTPUT>

Defined in

packages/modelfusion/src/core/api/ThrottleFunction.ts:1
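A minimal sketch of the contract: any function that takes a thunk and returns its promise satisfies the type. The simplest instance is a pass-through that applies no rate limiting (the name `passThroughThrottle` is illustrative):

```typescript
type ThrottleFunction = <OUTPUT>(fn: () => PromiseLike<OUTPUT>) => PromiseLike<OUTPUT>;

// The identity throttle: runs the function immediately with no rate limiting.
const passThroughThrottle: ThrottleFunction = (fn) => fn();
```

Real throttles would delay or queue the call to `fn` while preserving the same signature.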


TikTokenTokenizerSettings

Ƭ TikTokenTokenizerSettings: Object

Type declaration

| Name | Type |
| :--- | :--- |
| `model` | `OpenAIChatBaseModelType` \| `OpenAICompletionModelType` \| `OpenAITextEmbeddingModelType` |

Defined in

packages/modelfusion/src/model-provider/openai/TikTokenTokenizer.ts:9


ToolCallGenerationFinishedEvent

Ƭ ToolCallGenerationFinishedEvent: BaseModelCallFinishedEvent & { functionType: "generate-tool-call" ; result: ToolCallGenerationFinishedEventResult }

Defined in

packages/modelfusion/src/tool/generate-tool-call/ToolCallGenerationEvent.ts:26


ToolCallGenerationFinishedEventResult

Ƭ ToolCallGenerationFinishedEventResult: { rawResponse: unknown ; status: "success" ; usage?: { completionTokens: number ; promptTokens: number ; totalTokens: number } ; value: unknown } | { error: unknown ; status: "error" } | { status: "abort" }

Defined in

packages/modelfusion/src/tool/generate-tool-call/ToolCallGenerationEvent.ts:11


ToolCallResult

Ƭ ToolCallResult<NAME, PARAMETERS, RETURN_TYPE>: { args: PARAMETERS ; tool: NAME ; toolCall: ToolCall<NAME, PARAMETERS> } & ({ ok: true ; result: RETURN_TYPE } | { ok: false ; result: ToolCallError })

Type parameters

| Name | Type |
| :--- | :--- |
| `NAME` | extends `string` |
| `PARAMETERS` | `PARAMETERS` |
| `RETURN_TYPE` | `RETURN_TYPE` |

Defined in

packages/modelfusion/src/tool/ToolCallResult.ts:4


ToolCallsGenerationFinishedEvent

Ƭ ToolCallsGenerationFinishedEvent: BaseModelCallFinishedEvent & { functionType: "generate-tool-calls" ; result: ToolCallsGenerationFinishedEventResult }

Defined in

packages/modelfusion/src/tool/generate-tool-calls/ToolCallsGenerationEvent.ts:26


ToolCallsGenerationFinishedEventResult

Ƭ ToolCallsGenerationFinishedEventResult: { rawResponse: unknown ; status: "success" ; usage?: { completionTokens: number ; promptTokens: number ; totalTokens: number } ; value: unknown } | { error: unknown ; status: "error" } | { status: "abort" }

Defined in

packages/modelfusion/src/tool/generate-tool-calls/ToolCallsGenerationEvent.ts:11


ToolContent

Ƭ ToolContent: ToolResponsePart[]

Defined in

packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:42


TranscriptionFinishedEventResult

Ƭ TranscriptionFinishedEventResult: { rawResponse: unknown ; status: "success" ; value: string } | { error: unknown ; status: "error" } | { status: "abort" }

Defined in

packages/modelfusion/src/model-function/generate-transcription/TranscriptionEvent.ts:10


UrlParts

Ƭ UrlParts: Object

Type declaration

| Name | Type |
| :--- | :--- |
| `host` | `string` |
| `path` | `string` |
| `port` | `string` |
| `protocol` | `string` |

Defined in

packages/modelfusion/src/core/api/BaseUrlApiConfiguration.ts:6
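For illustration, the parts can be assembled into a base URL string (the helper `toBaseUrl` is hypothetical; note that `port` is a `string` here, not a `number`):

```typescript
type UrlParts = {
  protocol: string;
  host: string;
  port: string;
  path: string;
};

// Joins the four parts into a "protocol://host:port/path" base URL.
function toBaseUrl({ protocol, host, port, path }: UrlParts): string {
  return `${protocol}://${host}:${port}${path}`;
}
```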


UserContent

Ƭ UserContent: string | (TextPart | ImagePart)[]

Defined in

packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:40


Vector

Ƭ Vector: number[]

A vector is an array of numbers. It is used, for example, to represent a text as an embedding vector.

Defined in

packages/modelfusion/src/core/Vector.ts:5
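As a sketch of how such vectors are typically compared, here is a plain cosine-similarity calculation over two equal-length `Vector` values (the function name `cosine` is illustrative; the library exports its own `cosineSimilarity`):

```typescript
type Vector = number[];

// Cosine similarity: dot(a, b) / (|a| * |b|), assuming equal-length vectors.
function cosine(a: Vector, b: Vector): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: Vector) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}
```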


WebSearchToolInput

Ƭ WebSearchToolInput: Object

Type declaration

| Name | Type |
| :--- | :--- |
| `query` | `string` |

Defined in

packages/modelfusion/src/tool/WebSearchTool.ts:27


WebSearchToolOutput

Ƭ WebSearchToolOutput: Object

Type declaration

| Name | Type |
| :--- | :--- |
| `results` | { `link`: `string` ; `snippet`: `string` ; `title`: `string` }[] |

Defined in

packages/modelfusion/src/tool/WebSearchTool.ts:31
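For illustration, values matching the tool's input and output shapes look like this (the concrete values are made up; only the shapes come from the types above):

```typescript
type WebSearchToolInput = { query: string };
type WebSearchToolOutput = {
  results: { title: string; link: string; snippet: string }[];
};

// Example input and output values conforming to the tool's shapes.
const searchInput: WebSearchToolInput = { query: "modelfusion" };
const searchOutput: WebSearchToolOutput = {
  results: [
    {
      title: "ModelFusion",
      link: "https://modelfusion.dev",
      snippet: "A TypeScript library for building AI applications.",
    },
  ],
};
```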