modelfusion
Classes
- AbortError
- AbstractOpenAIChatModel
- AbstractOpenAICompletionModel
- AbstractOpenAITextEmbeddingModel
- ApiCallError
- AsyncQueue
- Automatic1111ApiConfiguration
- Automatic1111ImageGenerationModel
- AzureOpenAIApiConfiguration
- BaseUrlApiConfiguration
- BaseUrlApiConfigurationWithDefaults
- CohereApiConfiguration
- CohereTextEmbeddingModel
- CohereTextGenerationModel
- CohereTokenizer
- DefaultRun
- ElevenLabsApiConfiguration
- ElevenLabsSpeechModel
- EmbeddingSimilarityClassifier
- FireworksAIApiConfiguration
- FunctionEventSource
- HeliconeOpenAIApiConfiguration
- HuggingFaceApiConfiguration
- HuggingFaceTextEmbeddingModel
- HuggingFaceTextGenerationModel
- InvalidPromptError
- JSONParseError
- LlamaCppApiConfiguration
- LlamaCppCompletionModel
- LlamaCppTextEmbeddingModel
- LlamaCppTokenizer
- LmntApiConfiguration
- LmntSpeechModel
- MemoryCache
- MemoryVectorIndex
- MistralApiConfiguration
- MistralChatModel
- MistralTextEmbeddingModel
- NoSuchToolDefinitionError
- ObjectFromTextGenerationModel
- ObjectFromTextStreamingModel
- ObjectGeneratorTool
- ObjectParseError
- ObjectStreamResponse
- ObjectValidationError
- OllamaApiConfiguration
- OllamaChatModel
- OllamaCompletionModel
- OllamaTextEmbeddingModel
- OpenAIApiConfiguration
- OpenAIChatModel
- OpenAICompatibleChatModel
- OpenAICompatibleCompletionModel
- OpenAICompatibleTextEmbeddingModel
- OpenAICompletionModel
- OpenAIImageGenerationModel
- OpenAISpeechModel
- OpenAITextEmbeddingModel
- OpenAITranscriptionModel
- PerplexityApiConfiguration
- PromptTemplateFullTextModel
- PromptTemplateImageGenerationModel
- PromptTemplateTextGenerationModel
- PromptTemplateTextStreamingModel
- RetryError
- StabilityApiConfiguration
- StabilityImageGenerationModel
- TextGenerationToolCallModel
- TextGenerationToolCallsModel
- TikTokenTokenizer
- TogetherAIApiConfiguration
- Tool
- ToolCallArgumentsValidationError
- ToolCallError
- ToolCallGenerationError
- ToolCallParseError
- ToolCallsParseError
- ToolExecutionError
- TypeValidationError
- UncheckedSchema
- VectorIndexRetriever
- WebSearchTool
- WhisperCppApiConfiguration
- WhisperCppTranscriptionModel
- ZodSchema
Interfaces
- AbstractOpenAIChatSettings
- AbstractOpenAICompletionModelSettings
- AbstractOpenAITextEmbeddingModelSettings
- ApiConfiguration
- Automatic1111ImageGenerationSettings
- BaseFunctionEvent
- BaseFunctionFinishedEvent
- BaseFunctionStartedEvent
- BaseModelCallFinishedEvent
- BaseModelCallStartedEvent
- BasicTokenizer
- Cache
- ChatPrompt
- Classifier
- ClassifierSettings
- ClassifyFinishedEvent
- ClassifyStartedEvent
- CohereTextEmbeddingModelSettings
- CohereTextGenerationModelSettings
- CohereTokenizerSettings
- ElevenLabsSpeechModelSettings
- EmbeddingFinishedEvent
- EmbeddingModel
- EmbeddingModelSettings
- EmbeddingSimilarityClassifierSettings
- EmbeddingStartedEvent
- ExecuteFunctionFinishedEvent
- ExecuteFunctionStartedEvent
- ExecuteToolFinishedEvent
- ExecuteToolStartedEvent
- FullTokenizer
- FunctionObserver
- HasContextWindowSize
- HasTokenizer
- HuggingFaceTextEmbeddingModelSettings
- HuggingFaceTextGenerationModelSettings
- ImageGenerationFinishedEvent
- ImageGenerationModel
- ImageGenerationModelSettings
- ImageGenerationStartedEvent
- ImagePart
- InstructionPrompt
- JsonSchemaProducer
- LlamaCppCompletionModelSettings
- LlamaCppCompletionPrompt
- LlamaCppTextEmbeddingModelSettings
- LmntSpeechModelSettings
- MistralChatModelSettings
- MistralTextEmbeddingModelSettings
- Model
- ModelSettings
- ObjectGenerationModel
- ObjectGenerationModelSettings
- ObjectGenerationStartedEvent
- ObjectStreamingFinishedEvent
- ObjectStreamingModel
- ObjectStreamingStartedEvent
- OllamaChatModelSettings
- OllamaCompletionModelSettings
- OllamaCompletionPrompt
- OllamaTextEmbeddingModelSettings
- OllamaTextGenerationSettings
- OpenAIChatSettings
- OpenAICompatibleApiConfiguration
- OpenAICompatibleChatSettings
- OpenAICompatibleCompletionModelSettings
- OpenAICompatibleTextEmbeddingModelSettings
- OpenAICompletionModelSettings
- OpenAIImageGenerationCallSettings
- OpenAIImageGenerationSettings
- OpenAISpeechModelSettings
- OpenAITextEmbeddingModelSettings
- OpenAITranscriptionModelSettings
- PromptTemplate
- Retriever
- Run
- Schema
- SpeechGenerationFinishedEvent
- SpeechGenerationModel
- SpeechGenerationModelSettings
- SpeechGenerationStartedEvent
- SpeechStreamingFinishedEvent
- SpeechStreamingStartedEvent
- StabilityImageGenerationSettings
- StreamingSpeechGenerationModel
- TextGenerationBaseModel
- TextGenerationFinishedEvent
- TextGenerationModel
- TextGenerationModelSettings
- TextGenerationPromptTemplate
- TextGenerationPromptTemplateProvider
- TextGenerationStartedEvent
- TextPart
- TextStreamingBaseModel
- TextStreamingFinishedEvent
- TextStreamingModel
- TextStreamingStartedEvent
- ToolCall
- ToolCallGenerationModel
- ToolCallGenerationModelSettings
- ToolCallGenerationStartedEvent
- ToolCallPart
- ToolCallPromptTemplate
- ToolCallsGenerationModel
- ToolCallsGenerationModelSettings
- ToolCallsGenerationStartedEvent
- ToolCallsPromptTemplate
- ToolDefinition
- ToolResponsePart
- TranscriptionFinishedEvent
- TranscriptionModel
- TranscriptionModelSettings
- TranscriptionStartedEvent
- UpsertIntoVectorIndexFinishedEvent
- UpsertIntoVectorIndexStartedEvent
- ValueCluster
- VectorIndex
- VectorIndexRetrieverSettings
- WhisperCppTranscriptionModelSettings
- runToolFinishedEvent
- runToolStartedEvent
- runToolsFinishedEvent
- runToolsStartedEvent
Namespaces
- AlpacaPrompt
- ChatMLPrompt
- Llama2Prompt
- MistralInstructPrompt
- NeuralChatPrompt
- SynthiaPrompt
- TextPrompt
- VicunaPrompt
- api
- automatic1111
- cohere
- elevenlabs
- huggingface
- llamacpp
- lmnt
- mistral
- modelfusion
- ollama
- openai
- openaicompatible
- stability
- whispercpp
Functions
ObjectStreamFromResponse
▸ ObjectStreamFromResponse<OBJECT>(«destructured»): AsyncGenerator<{ partialObject: OBJECT }, void, unknown>
Convert a Response to a lightweight ObjectStream. The response must be created using ObjectStreamResponse on the server.
Type parameters
Name |
---|
OBJECT |
Parameters
Name | Type |
---|---|
«destructured» | Object |
› response | Response |
› schema | Schema <OBJECT > |
Returns
AsyncGenerator<{ partialObject: OBJECT }, void, unknown>
See
ObjectStreamResponse
Defined in
packages/modelfusion/src/model-function/generate-object/ObjectStream.ts:35
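The stream yields events of the shape { partialObject: OBJECT }, each a progressively more complete snapshot of the object being generated. The sketch below illustrates consuming such a stream with a synchronous generator so it stays self-contained; the real ObjectStreamFromResponse returns an AsyncGenerator fed by a Response body, and all names suffixed with "Sketch" are illustrative.

```typescript
// Each event carries a partialObject snapshot; a client renders the latest one.
type PartialEvent<OBJECT> = { partialObject: Partial<OBJECT> };

function* partialObjectStreamSketch<OBJECT>(
  snapshots: Partial<OBJECT>[]
): Generator<PartialEvent<OBJECT>> {
  for (const snapshot of snapshots) {
    yield { partialObject: snapshot };
  }
}

type Recipe = { title: string; steps: string[] };

// Collect the snapshots as a client would while rendering partial results:
const recipeEvents = [
  ...partialObjectStreamSketch<Recipe>([
    { title: "Soup" },
    { title: "Soup", steps: ["chop vegetables"] },
    { title: "Soup", steps: ["chop vegetables", "simmer"] },
  ]),
];
```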
classify
▸ classify<VALUE, CLASS>(args): Promise<CLASS>
Type parameters
Name | Type |
---|---|
VALUE | VALUE |
CLASS | extends null | string |
Parameters
Name | Type |
---|---|
args | { fullResponse? : false ; model : Classifier <VALUE , CLASS , ClassifierSettings > ; value : VALUE } & FunctionOptions |
Returns
Promise<CLASS>
Defined in
packages/modelfusion/src/model-function/classify/classify.ts:6
▸ classify<VALUE, CLASS>(args): Promise<{ class: CLASS; metadata: ModelCallMetadata; rawResponse: unknown }>
Type parameters
Name | Type |
---|---|
VALUE | VALUE |
CLASS | extends null | string |
Parameters
Name | Type |
---|---|
args | { fullResponse : true ; model : Classifier <VALUE , CLASS , ClassifierSettings > ; value : VALUE } & FunctionOptions |
Returns
Promise<{ class: CLASS; metadata: ModelCallMetadata; rawResponse: unknown }>
Defined in
packages/modelfusion/src/model-function/classify/classify.ts:13
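Like most model functions in this reference, classify has two overloads selected by the fullResponse flag: omitting it (or passing false) resolves to the plain value, while fullResponse: true resolves to an object that also carries metadata and rawResponse. A minimal sketch of that overload pattern, using a toy stand-in classifier rather than the library implementation:

```typescript
// The literal-typed fullResponse flag picks the overload, so callers get the
// correct return type statically. Everything here is an illustrative stand-in.
type Metadata = { model: string };

function classifySketch(args: { value: string; fullResponse?: false }): string;
function classifySketch(args: {
  value: string;
  fullResponse: true;
}): { class: string; metadata: Metadata };
function classifySketch(args: {
  value: string;
  fullResponse?: boolean;
}): string | { class: string; metadata: Metadata } {
  // Toy classifier: question vs. statement.
  const result = args.value.includes("?") ? "question" : "statement";
  return args.fullResponse
    ? { class: result, metadata: { model: "toy" } }
    : result;
}
```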
convertDataContentToBase64String
▸ convertDataContentToBase64String(content): string
Parameters
Name | Type |
---|---|
content | DataContent |
Returns
string
Defined in
packages/modelfusion/src/util/format/DataContent.ts:8
convertDataContentToUint8Array
▸ convertDataContentToUint8Array(content): Uint8Array
Parameters
Name | Type |
---|---|
content | DataContent |
Returns
Uint8Array
Defined in
packages/modelfusion/src/util/format/DataContent.ts:20
cosineSimilarity
▸ cosineSimilarity(a, b): number
Calculates the cosine similarity between two vectors. They must have the same length.
Parameters
Name | Type | Description |
---|---|---|
a | number [] | The first vector. |
b | number [] | The second vector. |
Returns
number
The cosine similarity between the two vectors.
See
https://en.wikipedia.org/wiki/Cosine_similarity
Defined in
packages/modelfusion/src/util/cosineSimilarity.ts:11
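The computation is the dot product of the two vectors divided by the product of their magnitudes. An illustrative reimplementation of the documented contract (the authoritative implementation lives in the source file above):

```typescript
// Cosine similarity sketch: dot(a, b) / (|a| * |b|).
// Mirrors the documented contract that both vectors must have the same length.
function cosineSimilaritySketch(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error(
      `Vectors must have the same length (${a.length} vs ${b.length})`
    );
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```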
countOpenAIChatMessageTokens
▸ countOpenAIChatMessageTokens(«destructured»): Promise<number>
Parameters
Name | Type |
---|---|
«destructured» | Object |
› message | ChatMessage |
› model | OpenAIChatModelType |
Returns
Promise<number>
Defined in
packages/modelfusion/src/model-provider/openai/countOpenAIChatMessageTokens.ts:21
countOpenAIChatPromptTokens
▸ countOpenAIChatPromptTokens(«destructured»): Promise<number>
Parameters
Name | Type |
---|---|
«destructured» | Object |
› messages | ChatMessage [] |
› model | OpenAIChatModelType |
Returns
Promise<number>
Defined in
packages/modelfusion/src/model-provider/openai/countOpenAIChatMessageTokens.ts:56
countTokens
▸ countTokens(tokenizer, text): Promise<number>
Count the number of tokens in the given text.
Parameters
Name | Type |
---|---|
tokenizer | BasicTokenizer |
text | string |
Returns
Promise<number>
Defined in
packages/modelfusion/src/model-function/tokenize-text/countTokens.ts:6
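countTokens only requires a BasicTokenizer, i.e. anything exposing an async tokenize method: it tokenizes the text and returns the token count. A self-contained sketch with a hypothetical whitespace tokenizer (real tokenizers such as TikTokenTokenizer produce model-specific token ids):

```typescript
// Minimal stand-in for the BasicTokenizer interface: just an async tokenize.
interface BasicTokenizerSketch {
  tokenize(text: string): PromiseLike<number[]>;
}

// Hypothetical whitespace tokenizer: each whitespace-separated word becomes
// one token; the returned ids are just indices, for illustration only.
const whitespaceTokens = (text: string): number[] =>
  text
    .split(/\s+/)
    .filter((t) => t.length > 0)
    .map((_, i) => i);

const whitespaceTokenizer: BasicTokenizerSketch = {
  tokenize: async (text) => whitespaceTokens(text),
};

// countTokens sketch: the token count is the length of the tokenization.
async function countTokensSketch(
  tokenizer: BasicTokenizerSketch,
  text: string
): Promise<number> {
  return (await tokenizer.tokenize(text)).length;
}
```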
createChatPrompt
▸ createChatPrompt<INPUT>(promptFunction): (input: INPUT) => PromptFunction<INPUT, ChatPrompt>
Type parameters
Name |
---|
INPUT |
Parameters
Name | Type |
---|---|
promptFunction | (input : INPUT ) => Promise <ChatPrompt > |
Returns
fn
▸ (input): PromptFunction<INPUT, ChatPrompt>
Parameters
Name | Type |
---|---|
input | INPUT |
Returns
PromptFunction<INPUT, ChatPrompt>
Defined in
packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:125
createEventSourceStream
▸ createEventSourceStream(events): ReadableStream<any>
Parameters
Name | Type |
---|---|
events | AsyncIterable <unknown > |
Returns
ReadableStream<any>
Defined in
packages/modelfusion/src/util/streaming/createEventSourceStream.ts:3
createInstructionPrompt
▸ createInstructionPrompt<INPUT>(promptFunction): (input: INPUT) => PromptFunction<INPUT, InstructionPrompt>
Type parameters
Name |
---|
INPUT |
Parameters
Name | Type |
---|---|
promptFunction | (input : INPUT ) => Promise <InstructionPrompt > |
Returns
fn
▸ (input): PromptFunction<INPUT, InstructionPrompt>
Parameters
Name | Type |
---|---|
input | INPUT |
Returns
PromptFunction<INPUT, InstructionPrompt>
Defined in
packages/modelfusion/src/model-function/generate-text/prompt-template/InstructionPrompt.ts:42
createTextPrompt
▸ createTextPrompt<INPUT>(promptFunction): (input: INPUT) => PromptFunction<INPUT, string>
Type parameters
Name |
---|
INPUT |
Parameters
Name | Type |
---|---|
promptFunction | (input : INPUT ) => Promise <string > |
Returns
fn
▸ (input): PromptFunction<INPUT, string>
Parameters
Name | Type |
---|---|
input | INPUT |
Returns
PromptFunction<INPUT, string>
Defined in
packages/modelfusion/src/model-function/generate-text/prompt-template/TextPrompt.ts:6
delay
▸ delay(delayInMs): Promise<void>
Parameters
Name | Type |
---|---|
delayInMs | number |
Returns
Promise<void>
Defined in
packages/modelfusion/src/util/delay.ts:1
embed
▸ embed<VALUE>(args): Promise<Vector>
Generate an embedding for a single value.
Type parameters
Name |
---|
VALUE |
Parameters
Name | Type |
---|---|
args | { fullResponse? : false ; model : EmbeddingModel <VALUE , EmbeddingModelSettings > ; value : VALUE } & FunctionOptions |
Returns
Promise<Vector>
- A promise that resolves to a vector representing the embedding.
See
https://modelfusion.dev/guide/function/embed
Example
const embedding = await embed({
model: openai.TextEmbedder(...),
value: "At first, Nox didn't know what to do with the pup."
});
Defined in
packages/modelfusion/src/model-function/embed/embed.ts:133
▸ embed<VALUE>(args): Promise<{ embedding: Vector; metadata: ModelCallMetadata; rawResponse: unknown }>
Type parameters
Name |
---|
VALUE |
Parameters
Name | Type |
---|---|
args | { fullResponse : true ; model : EmbeddingModel <VALUE , EmbeddingModelSettings > ; value : VALUE } & FunctionOptions |
Returns
Promise<{ embedding: Vector; metadata: ModelCallMetadata; rawResponse: unknown }>
Defined in
packages/modelfusion/src/model-function/embed/embed.ts:140
embedMany
▸ embedMany<VALUE>(args): Promise<Vector[]>
Generate embeddings for multiple values.
Type parameters
Name |
---|
VALUE |
Parameters
Name | Type |
---|---|
args | { fullResponse? : false ; model : EmbeddingModel <VALUE , EmbeddingModelSettings > ; values : VALUE [] } & FunctionOptions |
Returns
Promise<Vector[]>
- A promise that resolves to an array of vectors representing the embeddings.
See
https://modelfusion.dev/guide/function/embed
Example
const embeddings = await embedMany({
model: openai.TextEmbedder(...),
values: [
"At first, Nox didn't know what to do with the pup.",
"He keenly observed and absorbed everything around him, from the birds in the sky to the trees in the forest.",
]
});
Defined in
packages/modelfusion/src/model-function/embed/embed.ts:26
▸ embedMany<VALUE>(args): Promise<{ embeddings: Vector[]; metadata: ModelCallMetadata; rawResponse: unknown }>
Type parameters
Name |
---|
VALUE |
Parameters
Name | Type |
---|---|
args | { fullResponse : true ; model : EmbeddingModel <VALUE , EmbeddingModelSettings > ; values : VALUE [] } & FunctionOptions |
Returns
Promise<{ embeddings: Vector[]; metadata: ModelCallMetadata; rawResponse: unknown }>
Defined in
packages/modelfusion/src/model-function/embed/embed.ts:33
executeFunction
▸ executeFunction<INPUT, OUTPUT>(fn, input, options?): Promise<OUTPUT>
Type parameters
Name |
---|
INPUT |
OUTPUT |
Parameters
Name | Type |
---|---|
fn | (input : INPUT , options : FunctionCallOptions ) => PromiseLike <OUTPUT > |
input | INPUT |
options? | FunctionOptions |
Returns
Promise<OUTPUT>
Defined in
packages/modelfusion/src/core/executeFunction.ts:4
executeTool
▸ executeTool<TOOL>(params): Promise<ReturnType<TOOL["execute"]>>
executeTool executes a tool with the given parameters.
Type parameters
Name | Type |
---|---|
TOOL | extends Tool <any , any , any > |
Parameters
Name | Type |
---|---|
params | { args : TOOL ["parameters" ]["_type" ] ; fullResponse? : false ; tool : TOOL } & FunctionOptions |
Returns
Promise<ReturnType<TOOL["execute"]>>
Defined in
packages/modelfusion/src/tool/execute-tool/executeTool.ts:30
▸ executeTool<TOOL>(params): Promise<{ metadata: ExecuteToolMetadata; output: Awaited<ReturnType<TOOL["execute"]>> }>
Type parameters
Name | Type |
---|---|
TOOL | extends Tool <any , any , any > |
Parameters
Name | Type |
---|---|
params | { args : TOOL ["parameters" ]["_type" ] ; fullResponse : true ; tool : TOOL } & FunctionOptions |
Returns
Promise<{ metadata: ExecuteToolMetadata; output: Awaited<ReturnType<TOOL["execute"]>> }>
Defined in
packages/modelfusion/src/tool/execute-tool/executeTool.ts:37
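A tool pairs a name and parameter schema with an execute function, and executeTool resolves to the (typed) return value of that execute function. The sketch below uses synchronous execution and illustrative type names to stay self-contained; the real Tool#execute and executeTool are Promise-based, validate arguments against the tool's parameter schema, and emit events and metadata:

```typescript
// Minimal stand-in for a tool with an execute function.
type ToolSketch<ARGS, RESULT> = {
  name: string;
  execute: (args: ARGS) => RESULT;
};

const adder: ToolSketch<{ a: number; b: number }, number> = {
  name: "adder",
  execute: ({ a, b }) => a + b,
};

// executeTool sketch: delegate to the tool and return its typed result.
// The real executeTool also records start/finish events on the run.
function executeToolSketch<ARGS, RESULT>(
  tool: ToolSketch<ARGS, RESULT>,
  args: ARGS
): RESULT {
  return tool.execute(args);
}
```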
generateImage
▸ generateImage<PROMPT>(args): Promise<Uint8Array>
Generates an image using a prompt.
The prompt depends on the model. For example, OpenAI image models expect a string prompt, and Stability AI models expect an array of text prompts with optional weights.
Type parameters
Name |
---|
PROMPT |
Parameters
Name | Type |
---|---|
args | { fullResponse? : false ; model : ImageGenerationModel <PROMPT , ImageGenerationModelSettings > ; prompt : PROMPT } & FunctionOptions |
Returns
Promise<Uint8Array>
- Returns a promise that resolves to the generated image. The image is a Uint8Array containing the image data in PNG format.
See
https://modelfusion.dev/guide/function/generate-image
Example
const image = await generateImage({
model: stability.ImageGenerator(...),
prompt: [
{ text: "the wicked witch of the west" },
{ text: "style of early 19th century painting", weight: 0.5 },
]
});
Defined in
packages/modelfusion/src/model-function/generate-image/generateImage.ts:33
▸ generateImage<PROMPT>(args): Promise<{ image: Uint8Array; imageBase64: string; images: Uint8Array[]; imagesBase64: string[]; metadata: ModelCallMetadata; rawResponse: unknown }>
Type parameters
Name |
---|
PROMPT |
Parameters
Name | Type |
---|---|
args | { fullResponse : true ; model : ImageGenerationModel <PROMPT , ImageGenerationModelSettings > ; prompt : PROMPT } & FunctionOptions |
Returns
Promise<{ image: Uint8Array; imageBase64: string; images: Uint8Array[]; imagesBase64: string[]; metadata: ModelCallMetadata; rawResponse: unknown }>
Defined in
packages/modelfusion/src/model-function/generate-image/generateImage.ts:40
generateObject
▸ generateObject<OBJECT, PROMPT, SETTINGS>(args): Promise<OBJECT>
Generate a typed object for a prompt and a schema.
Type parameters
Name | Type |
---|---|
OBJECT | OBJECT |
PROMPT | PROMPT |
SETTINGS | extends ObjectGenerationModelSettings |
Parameters
Name | Type |
---|---|
args | { fullResponse? : false ; model : ObjectGenerationModel <PROMPT , SETTINGS > ; prompt : PROMPT | PromptFunction <unknown , PROMPT > | (schema : Schema <OBJECT >) => PROMPT | PromptFunction <unknown , PROMPT > ; schema : Schema <OBJECT > & JsonSchemaProducer } & FunctionOptions |
Returns
Promise<OBJECT>
- Returns a promise that resolves to the generated object.
See
https://modelfusion.dev/guide/function/generate-object
Example
const sentiment = await generateObject({
model: openai.ChatTextGenerator(...).asFunctionCallObjectGenerationModel(...),
schema: zodSchema(z.object({
sentiment: z
.enum(["positive", "neutral", "negative"])
.describe("Sentiment."),
})),
prompt: [
openai.ChatMessage.system(
"You are a sentiment evaluator. " +
"Analyze the sentiment of the following product review:"
),
openai.ChatMessage.user(
"After I opened the package, I was met by a very unpleasant smell " +
"that did not disappear even after washing. Never again!"
),
]
});
Defined in
packages/modelfusion/src/model-function/generate-object/generateObject.ts:52
▸ generateObject<OBJECT, PROMPT, SETTINGS>(args): Promise<{ metadata: ModelCallMetadata; rawResponse: unknown; value: OBJECT }>
Type parameters
Name | Type |
---|---|
OBJECT | OBJECT |
PROMPT | PROMPT |
SETTINGS | extends ObjectGenerationModelSettings |
Parameters
Name | Type |
---|---|
args | { fullResponse : true ; model : ObjectGenerationModel <PROMPT , SETTINGS > ; prompt : PROMPT | PromptFunction <unknown , PROMPT > | (schema : Schema <OBJECT >) => PROMPT | PromptFunction <unknown , PROMPT > ; schema : Schema <OBJECT > & JsonSchemaProducer } & FunctionOptions |
Returns
Promise<{ metadata: ModelCallMetadata; rawResponse: unknown; value: OBJECT }>
Defined in
packages/modelfusion/src/model-function/generate-object/generateObject.ts:67
generateSpeech
▸ generateSpeech(args): Promise<Uint8Array>
Synthesizes speech from text. Also called text-to-speech (TTS).
Parameters
Name | Type |
---|---|
args | { fullResponse? : false ; model : SpeechGenerationModel <SpeechGenerationModelSettings > ; text : string } & FunctionOptions |
Returns
Promise<Uint8Array>
- A promise that resolves to a Uint8Array containing the synthesized speech.
See
https://modelfusion.dev/guide/function/generate-speech
Example
const speech = await generateSpeech({
model: lmnt.SpeechGenerator(...),
text: "Good evening, ladies and gentlemen! Exciting news on the airwaves tonight " +
"as The Rolling Stones unveil 'Hackney Diamonds.'"
});
Defined in
packages/modelfusion/src/model-function/generate-speech/generateSpeech.ts:26
▸ generateSpeech(args): Promise<{ metadata: ModelCallMetadata; rawResponse: unknown; speech: Uint8Array }>
Parameters
Name | Type |
---|---|
args | { fullResponse : true ; model : SpeechGenerationModel <SpeechGenerationModelSettings > ; text : string } & FunctionOptions |
Returns
Promise<{ metadata: ModelCallMetadata; rawResponse: unknown; speech: Uint8Array }>
Defined in
packages/modelfusion/src/model-function/generate-speech/generateSpeech.ts:33
generateText
▸ generateText<PROMPT>(args): Promise<string>
Generate text for a prompt and return it as a string.
The prompt depends on the model used. For instance, OpenAI completion models expect a string prompt, whereas OpenAI chat models expect an array of chat messages.
Type parameters
Name |
---|
PROMPT |
Parameters
Name | Type |
---|---|
args | { fullResponse? : false ; model : TextGenerationModel <PROMPT , TextGenerationModelSettings > ; prompt : PROMPT | PromptFunction <unknown , PROMPT > } & FunctionOptions |
Returns
Promise<string>
- A promise that resolves to the generated text.
See
https://modelfusion.dev/guide/function/generate-text
Example
const text = await generateText({
model: openai.CompletionTextGenerator(...),
prompt: "Write a short story about a robot learning to love:\n\n"
});
Defined in
packages/modelfusion/src/model-function/generate-text/generateText.ts:34
▸ generateText<PROMPT>(args): Promise<{ finishReason: TextGenerationFinishReason; metadata: ModelCallMetadata; rawResponse: unknown; text: string; textGenerationResults: TextGenerationResult[]; texts: string[] }>
Type parameters
Name |
---|
PROMPT |
Parameters
Name | Type |
---|---|
args | { fullResponse : true ; model : TextGenerationModel <PROMPT , TextGenerationModelSettings > ; prompt : PROMPT | PromptFunction <unknown , PROMPT > } & FunctionOptions |
Returns
Promise<{ finishReason: TextGenerationFinishReason; metadata: ModelCallMetadata; rawResponse: unknown; text: string; textGenerationResults: TextGenerationResult[]; texts: string[] }>
Defined in
packages/modelfusion/src/model-function/generate-text/generateText.ts:41
generateToolCall
▸ generateToolCall<PARAMETERS, PROMPT, NAME, SETTINGS>(params): Promise<ToolCall<NAME, PARAMETERS>>
Type parameters
Name | Type |
---|---|
PARAMETERS | PARAMETERS |
PROMPT | PROMPT |
NAME | extends string |
SETTINGS | extends ToolCallGenerationModelSettings |
Parameters
Name | Type |
---|---|
params | { fullResponse? : false ; model : ToolCallGenerationModel <PROMPT , SETTINGS > ; prompt : PROMPT | (tool : ToolDefinition <NAME , PARAMETERS >) => PROMPT ; tool : ToolDefinition <NAME , PARAMETERS > } & FunctionOptions |
Returns
Promise<ToolCall<NAME, PARAMETERS>>
Defined in
packages/modelfusion/src/tool/generate-tool-call/generateToolCall.ts:13
▸ generateToolCall<PARAMETERS, PROMPT, NAME, SETTINGS>(params): Promise<{ metadata: ModelCallMetadata; rawResponse: unknown; toolCall: ToolCall<NAME, PARAMETERS> }>
Type parameters
Name | Type |
---|---|
PARAMETERS | PARAMETERS |
PROMPT | PROMPT |
NAME | extends string |
SETTINGS | extends ToolCallGenerationModelSettings |
Parameters
Name | Type |
---|---|
params | { fullResponse : true ; model : ToolCallGenerationModel <PROMPT , SETTINGS > ; prompt : PROMPT | (tool : ToolDefinition <NAME , PARAMETERS >) => PROMPT ; tool : ToolDefinition <NAME , PARAMETERS > } & FunctionOptions |
Returns
Promise<{ metadata: ModelCallMetadata; rawResponse: unknown; toolCall: ToolCall<NAME, PARAMETERS> }>
Defined in
packages/modelfusion/src/tool/generate-tool-call/generateToolCall.ts:26
generateToolCalls
▸ generateToolCalls<TOOLS, PROMPT>(params): Promise<{ text: string | null; toolCalls: ToOutputValue<TOOLS>[] | null }>
Type parameters
Name | Type |
---|---|
TOOLS | extends ToolDefinition <any , any >[] |
PROMPT | PROMPT |
Parameters
Name | Type |
---|---|
params | { fullResponse? : false ; model : ToolCallsGenerationModel <PROMPT , ToolCallsGenerationModelSettings > ; prompt : PROMPT | (tools : TOOLS ) => PROMPT ; tools : TOOLS } & FunctionOptions |
Returns
Promise<{ text: string | null; toolCalls: ToOutputValue<TOOLS>[] | null }>
Defined in
packages/modelfusion/src/tool/generate-tool-calls/generateToolCalls.ts:37
▸ generateToolCalls<TOOLS, PROMPT>(params): Promise<{ metadata: ModelCallMetadata; rawResponse: unknown; value: { text: string | null; toolCalls: ToOutputValue<TOOLS>[] } }>
Type parameters
Name | Type |
---|---|
TOOLS | extends ToolDefinition <any , any >[] |
PROMPT | PROMPT |
Parameters
Name | Type |
---|---|
params | { fullResponse : true ; model : ToolCallsGenerationModel <PROMPT , ToolCallsGenerationModelSettings > ; prompt : PROMPT | (tools : TOOLS ) => PROMPT ; tools : TOOLS } & FunctionOptions |
Returns
Promise<{ metadata: ModelCallMetadata; rawResponse: unknown; value: { text: string | null; toolCalls: ToOutputValue<TOOLS>[] } }>
Defined in
packages/modelfusion/src/tool/generate-tool-calls/generateToolCalls.ts:51
generateTranscription
▸ generateTranscription(args): Promise<string>
Transcribe audio data into text. Also called speech-to-text (STT) or automatic speech recognition (ASR).
Parameters
Name | Type |
---|---|
args | { audioData : DataContent ; fullResponse? : false ; mimeType : "audio/webm" | "audio/mp3" | "audio/wav" | "audio/mp4" | "audio/mpeg" | "audio/mpga" | "audio/ogg" | "audio/oga" | "audio/flac" | "audio/m4a" | (string & {}) ; model : TranscriptionModel <TranscriptionModelSettings > } & FunctionOptions |
Returns
Promise<string>
A promise that resolves to the transcribed text.
See
https://modelfusion.dev/guide/function/generate-transcription
Example
const audioData = await fs.promises.readFile("data/test.mp3");
const transcription = await generateTranscription({
model: openai.Transcriber({ model: "whisper-1" }),
mimeType: "audio/mp3",
audioData,
});
Defined in
packages/modelfusion/src/model-function/generate-transcription/generateTranscription.ts:31
▸ generateTranscription(args): Promise<{ metadata: ModelCallMetadata; rawResponse: unknown; value: string }>
Parameters
Name | Type |
---|---|
args | { audioData : DataContent ; fullResponse : true ; mimeType : "audio/webm" | "audio/mp3" | "audio/wav" | "audio/mp4" | "audio/mpeg" | "audio/mpga" | "audio/ogg" | "audio/oga" | "audio/flac" | "audio/m4a" | (string & {}) ; model : TranscriptionModel <TranscriptionModelSettings > } & FunctionOptions |
Returns
Promise<{ metadata: ModelCallMetadata; rawResponse: unknown; value: string }>
Defined in
packages/modelfusion/src/model-function/generate-transcription/generateTranscription.ts:39
getAudioFileExtension
▸ getAudioFileExtension(mimeType): "mp3" | "flac" | "webm" | "wav" | "mp4" | "mpeg" | "ogg" | "m4a"
Parameters
Name | Type |
---|---|
mimeType | string |
Returns
"mp3" | "flac" | "webm" | "wav" | "mp4" | "mpeg" | "ogg" | "m4a"
Defined in
packages/modelfusion/src/util/audio/getAudioFileExtension.ts:1
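A sketch of the mime-type-to-extension mapping implied by the signature above. The exact grouping is an assumption here (in particular whether audio/mpeg and audio/mpga normalize to "mp3" or "mpeg"); consult the source file above for the authoritative table:

```typescript
// Hypothetical mimeType -> extension mapping; grouping of mpeg/mpga under
// "mp3" is an assumption, not confirmed from the library source.
function getAudioFileExtensionSketch(
  mimeType: string
): "mp3" | "flac" | "webm" | "wav" | "mp4" | "mpeg" | "ogg" | "m4a" {
  switch (mimeType.split(";")[0].trim().toLowerCase()) {
    case "audio/webm": return "webm";
    case "audio/mp3":
    case "audio/mpeg":
    case "audio/mpga": return "mp3";
    case "audio/wav": return "wav";
    case "audio/mp4": return "mp4";
    case "audio/ogg":
    case "audio/oga": return "ogg";
    case "audio/flac": return "flac";
    case "audio/m4a": return "m4a";
    default:
      throw new Error(`Unsupported audio mimeType: ${mimeType}`);
  }
}
```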
getOpenAIChatModelInformation
▸ getOpenAIChatModelInformation(model): Object
Parameters
Name | Type |
---|---|
model | OpenAIChatModelType |
Returns
Object
Name | Type |
---|---|
baseModel | OpenAIChatBaseModelType |
contextWindowSize | number |
isFineTuned | boolean |
Defined in
packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:47
getOpenAICompletionModelInformation
▸ getOpenAICompletionModelInformation(model): Object
Parameters
Name | Type |
---|---|
model | "gpt-3.5-turbo-instruct" |
Returns
Object
Name | Type |
---|---|
contextWindowSize | number |
Defined in
packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:29
getRun
▸ getRun(run?): Promise<Run | undefined>
Returns the run stored in an AsyncLocalStorage if running in Node.js. It can be set with withRun().
Parameters
Name | Type |
---|---|
run? | Run |
Returns
Promise<Run | undefined>
Defined in
packages/modelfusion/src/core/getRun.ts:39
isPromptFunction
▸ isPromptFunction<INPUT, PROMPT>(fn): fn is PromptFunction<INPUT, PROMPT>
Checks if a function is a PromptFunction by checking for the unique symbol.
Type parameters
Name |
---|
INPUT |
PROMPT |
Parameters
Name | Type | Description |
---|---|---|
fn | unknown | The function to check. |
Returns
fn is PromptFunction<INPUT, PROMPT>
- True if the function is a PromptFunction, false otherwise.
Defined in
packages/modelfusion/src/core/PromptFunction.ts:47
mapBasicPromptToAutomatic1111Format
▸ mapBasicPromptToAutomatic1111Format(): PromptTemplate<string, Automatic1111ImageGenerationPrompt>
Formats a basic text prompt as an Automatic1111 prompt.
Returns
PromptTemplate<string, Automatic1111ImageGenerationPrompt>
Defined in
packages/modelfusion/src/model-provider/automatic1111/Automatic1111ImageGenerationPrompt.ts:11
mapBasicPromptToStabilityFormat
▸ mapBasicPromptToStabilityFormat(): PromptTemplate<string, StabilityImageGenerationPrompt>
Formats a basic text prompt as a Stability prompt.
Returns
PromptTemplate<string, StabilityImageGenerationPrompt>
Defined in
packages/modelfusion/src/model-provider/stability/StabilityImageGenerationPrompt.ts:11
markAsPromptFunction
▸ markAsPromptFunction<INPUT, PROMPT>(fn): PromptFunction<INPUT, PROMPT>
Marks a function as a PromptFunction by setting a unique symbol.
Type parameters
Name |
---|
INPUT |
PROMPT |
Parameters
Name | Type | Description |
---|---|---|
fn | () => PromiseLike <{ input : INPUT ; prompt : PROMPT }> | The function to mark. |
Returns
PromptFunction<INPUT, PROMPT>
- The marked function.
Defined in
packages/modelfusion/src/core/PromptFunction.ts:29
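markAsPromptFunction and isPromptFunction cooperate through a unique symbol: marking attaches it to the function, checking tests for its presence. A self-contained sketch of the pattern (the symbol name and the *Sketch suffixes are illustrative, not the library's internals):

```typescript
// A unique symbol acts as the brand that distinguishes PromptFunctions.
const promptFunctionMarker = Symbol("promptFunction");

type PromptFunctionSketch<INPUT, PROMPT> = (() => PromiseLike<{
  input: INPUT;
  prompt: PROMPT;
}>) & { [promptFunctionMarker]: true };

// Marking: attach the symbol to the function and return it with the
// branded type.
function markAsPromptFunctionSketch<INPUT, PROMPT>(
  fn: () => PromiseLike<{ input: INPUT; prompt: PROMPT }>
): PromptFunctionSketch<INPUT, PROMPT> {
  return Object.assign(fn, { [promptFunctionMarker]: true as const });
}

// Checking: any function that carries the symbol is a PromptFunction.
function isPromptFunctionSketch(fn: unknown): boolean {
  return typeof fn === "function" && promptFunctionMarker in fn;
}
```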
parseJSON
▸ parseJSON(text): unknown
Parses a JSON string into an unknown object.
Parameters
Name | Type | Description |
---|---|---|
text | Object | The JSON string to parse. |
text.text | string | - |
Returns
unknown
- The parsed JSON object.
Defined in
packages/modelfusion/src/core/schema/parseJSON.ts:13
▸ parseJSON<T>(«destructured»): T
Parses a JSON string into a strongly-typed object using the provided schema.
Type parameters
Name | Description |
---|---|
T | The type of the object to parse the JSON into. |
Parameters
Name | Type |
---|---|
«destructured» | Object |
› schema | Schema <T > |
› text | string |
Returns
T
- The parsed object.
Defined in
packages/modelfusion/src/core/schema/parseJSON.ts:22
retrieve
▸ retrieve<OBJECT, QUERY>(retriever, query, options?): Promise<OBJECT[]>
Type parameters
Name |
---|
OBJECT |
QUERY |
Parameters
Name | Type |
---|---|
retriever | Retriever <OBJECT , QUERY > |
query | QUERY |
options? | FunctionOptions |
Returns
Promise<OBJECT[]>
Defined in
packages/modelfusion/src/retriever/retrieve.ts:5
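retrieve delegates to the given Retriever (such as VectorIndexRetriever) and returns the matching objects. A minimal in-memory sketch, synchronous for brevity (the real retrieve returns a Promise and additionally accepts FunctionOptions):

```typescript
// Minimal stand-in for the Retriever interface.
interface RetrieverSketch<OBJECT, QUERY> {
  retrieve(query: QUERY): OBJECT[];
}

// Toy retriever: prefix search over a fixed list of strings.
const fruitRetriever: RetrieverSketch<string, string> = {
  retrieve: (query) =>
    ["apple", "apricot", "banana"].filter((f) => f.startsWith(query)),
};

// retrieve sketch: delegate to the retriever and return its objects.
function retrieveSketch<OBJECT, QUERY>(
  retriever: RetrieverSketch<OBJECT, QUERY>,
  query: QUERY
): OBJECT[] {
  return retriever.retrieve(query);
}
```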
runTool
▸ runTool<PROMPT, TOOL>(«destructured»): Promise<ToolCallResult<TOOL["name"], TOOL["parameters"], Awaited<ReturnType<TOOL["execute"]>>>>
runTool uses generateToolCall to generate parameters for a tool and then executes the tool with the parameters using executeTool.
Type parameters
Name | Type |
---|---|
PROMPT | PROMPT |
TOOL | extends Tool <string , any , any > |
Parameters
Name | Type |
---|---|
«destructured» | { model : ToolCallGenerationModel <PROMPT , ToolCallGenerationModelSettings > ; prompt : PROMPT | (tool : TOOL ) => PROMPT ; tool : TOOL } & FunctionOptions |
Returns
Promise<ToolCallResult<TOOL["name"], TOOL["parameters"], Awaited<ReturnType<TOOL["execute"]>>>>
The result contains the name of the tool (tool property), the parameters (parameters property, typed), and the result of the tool execution (result property, typed).
Defined in
packages/modelfusion/src/tool/run-tool/runTool.ts:23
runTools
▸ runTools<PROMPT, TOOLS>(«destructured»): Promise<{ text: string | null; toolResults: ToOutputValue<TOOLS>[] | null }>
Type parameters
Name | Type |
---|---|
PROMPT | PROMPT |
TOOLS | extends Tool <any , any , any >[] |
Parameters
Name | Type |
---|---|
«destructured» | { model : ToolCallsGenerationModel <PROMPT , ToolCallsGenerationModelSettings > ; prompt : PROMPT | (tools : TOOLS ) => PROMPT ; tools : TOOLS } & FunctionOptions |
Returns
Promise<{ text: string | null; toolResults: ToOutputValue<TOOLS>[] | null }>
Defined in
packages/modelfusion/src/tool/run-tools/runTools.ts:43
safeParseJSON
▸ safeParseJSON(text): { success: true; value: unknown } | { error: JSONParseError | TypeValidationError; success: false }
Safely parses a JSON string and returns the result as an object of type unknown.
Parameters
Name | Type | Description |
---|---|---|
text | Object | The JSON string to parse. |
text.text | string | - |
Returns
{ success: true; value: unknown } | { error: JSONParseError | TypeValidationError; success: false }
Either an object with success: true and the parsed data, or an object with success: false and the error that occurred.
Defined in
packages/modelfusion/src/core/schema/parseJSON.ts:62
▸ safeParseJSON<T>(«destructured»): { success: true; value: T } | { error: JSONParseError | TypeValidationError; success: false }
Safely parses a JSON string into a strongly-typed object, using a provided schema to validate the object.
Type parameters
Name | Description |
---|---|
T | The type of the object to parse the JSON into. |
Parameters
Name | Type |
---|---|
«destructured» | Object |
› schema | Schema <T > |
› text | string |
Returns
{ success: true; value: T } | { error: JSONParseError | TypeValidationError; success: false }
Either an object with success: true and the parsed, typed data, or an object with success: false and the error that occurred.
Defined in
packages/modelfusion/src/core/schema/parseJSON.ts:77
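Example
A usage sketch, assuming a zod-based schema:

```ts
const result = safeParseJSON({
  text: '{ "name": "Aria" }',
  schema: zodSchema(z.object({ name: z.string() })),
});

if (result.success) {
  result.value; // typed as { name: string }
} else {
  result.error; // JSONParseError | TypeValidationError
}
```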
safeValidateTypes
▸ safeValidateTypes<T>(«destructured»): { success: true; value: T } | { error: TypeValidationError; success: false }
Safely validates the types of an unknown object using a schema and returns a strongly-typed object.
Type parameters
Name | Description |
---|---|
T | The type of the object to validate. |
Parameters
Name | Type |
---|---|
«destructured» | Object |
› schema | Schema <T > |
› value | unknown |
Returns
{ success: true; value: T } | { error: TypeValidationError; success: false }
Either an object with success: true and the validated, typed data, or an object with success: false and the error that occurred.
Defined in
packages/modelfusion/src/core/schema/validateTypes.ts:49
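Example
A usage sketch, assuming a zod-based schema:

```ts
const result = safeValidateTypes({
  schema: zodSchema(z.object({ age: z.number() })),
  value: { age: 42 } as unknown,
});

if (!result.success) {
  result.error; // TypeValidationError
}
```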
splitAtCharacter
▸ splitAtCharacter(«destructured»): SplitFunction
Splits text recursively until the resulting chunks are smaller than the maxCharactersPerChunk.
The text is recursively split in the middle, so that all chunks are roughly the same size.
Parameters
Name | Type |
---|---|
«destructured» | Object |
› maxCharactersPerChunk | number |
Returns
SplitFunction
Defined in
packages/modelfusion/src/text-chunk/split/splitRecursively.ts:37
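Example
A usage sketch (the input text is an assumption), assuming the returned SplitFunction is invoked with a `{ text }` object:

```ts
const split = splitAtCharacter({ maxCharactersPerChunk: 1000 });
const chunks = await split({ text: longText }); // string[]
```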
splitAtToken
▸ splitAtToken(«destructured»): SplitFunction
Splits text recursively until the resulting chunks are smaller than the maxTokensPerChunk, while respecting the token boundaries.
The text is recursively split in the middle, so that all chunks are roughly the same size.
Parameters
Name | Type |
---|---|
«destructured» | Object |
› maxTokensPerChunk | number |
› tokenizer | FullTokenizer |
Returns
SplitFunction
Defined in
packages/modelfusion/src/text-chunk/split/splitRecursively.ts:54
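Example
A usage sketch (the tokenizer setup is an assumption; any FullTokenizer works):

```ts
const split = splitAtToken({
  maxTokensPerChunk: 256,
  tokenizer: llamacpp.Tokenizer(...),
});
const chunks = await split({ text: longText });
```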
splitOnSeparator
▸ splitOnSeparator(«destructured»): SplitFunction
Splits text on a separator string.
Parameters
Name | Type |
---|---|
«destructured» | Object |
› separator | string |
Returns
SplitFunction
Defined in
packages/modelfusion/src/text-chunk/split/splitOnSeparator.ts:6
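Example
For instance, to split on newlines (the input text is an assumption):

```ts
const split = splitOnSeparator({ separator: "\n" });
const chunks = await split({ text: longText });
```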
splitTextChunk
▸ splitTextChunk<CHUNK>(splitFunction, input): Promise<CHUNK[]>
Type parameters
Name | Type |
---|---|
CHUNK | extends TextChunk |
Parameters
Name | Type |
---|---|
splitFunction | SplitFunction |
input | CHUNK |
Returns
Promise<CHUNK[]>
Defined in
packages/modelfusion/src/text-chunk/split/splitTextChunks.ts:14
splitTextChunks
▸ splitTextChunks<CHUNK>(splitFunction, inputs): Promise<CHUNK[]>
Type parameters
Name | Type |
---|---|
CHUNK | extends TextChunk |
Parameters
Name | Type |
---|---|
splitFunction | SplitFunction |
inputs | CHUNK [] |
Returns
Promise<CHUNK[]>
Defined in
packages/modelfusion/src/text-chunk/split/splitTextChunks.ts:4
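Example
A usage sketch (the page contents and the `source` property are assumptions), applying a SplitFunction to several text chunks at once:

```ts
const chunks = await splitTextChunks(
  splitAtCharacter({ maxCharactersPerChunk: 1000 }),
  [
    { text: page1, source: "doc.pdf#1" },
    { text: page2, source: "doc.pdf#2" },
  ]
);
```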
streamObject
▸ streamObject<OBJECT, PROMPT>(args): Promise<ObjectStream<OBJECT>>
Generate and stream an object for a prompt and a schema.
Type parameters
Name |
---|
OBJECT |
PROMPT |
Parameters
Name | Type |
---|---|
args | { fullResponse? : false ; model : ObjectStreamingModel <PROMPT , ObjectGenerationModelSettings > ; prompt : PROMPT | PromptFunction <unknown , PROMPT > | (schema : Schema <OBJECT >) => PROMPT | PromptFunction <unknown , PROMPT > ; schema : Schema <OBJECT > & JsonSchemaProducer } & FunctionOptions |
Returns
Promise<ObjectStream<OBJECT>>
See
https://modelfusion.dev/guide/function/generate-object
Example
const objectStream = await streamObject({
  model: openai.ChatTextGenerator(...).asFunctionCallObjectGenerationModel(...),
  schema: zodSchema(
    z.array(
      z.object({
        name: z.string(),
        class: z
          .string()
          .describe("Character class, e.g. warrior, mage, or thief."),
        description: z.string(),
      })
    )
  ),
  prompt: [
    openai.ChatMessage.user(
      "Generate 3 character descriptions for a fantasy role playing game."
    ),
  ],
});
for await (const { partialObject } of objectStream) {
// ...
}
Defined in
packages/modelfusion/src/model-function/generate-object/streamObject.ts:51
▸ streamObject<OBJECT, PROMPT>(args): Promise<{ metadata: Omit<ModelCallMetadata, "durationInMs" | "finishTimestamp">; objectPromise: PromiseLike<OBJECT>; objectStream: ObjectStream<OBJECT> }>
Type parameters
Name |
---|
OBJECT |
PROMPT |
Parameters
Name | Type |
---|---|
args | { fullResponse : true ; model : ObjectStreamingModel <PROMPT , ObjectGenerationModelSettings > ; prompt : PROMPT | PromptFunction <unknown , PROMPT > | (schema : Schema <OBJECT >) => PROMPT | PromptFunction <unknown , PROMPT > ; schema : Schema <OBJECT > & JsonSchemaProducer } & FunctionOptions |
Returns
Promise<{ metadata: Omit<ModelCallMetadata, "durationInMs" | "finishTimestamp">; objectPromise: PromiseLike<OBJECT>; objectStream: ObjectStream<OBJECT> }>
Defined in
packages/modelfusion/src/model-function/generate-object/streamObject.ts:62
streamSpeech
▸ streamSpeech(args): Promise<AsyncIterable<Uint8Array>>
Stream synthesized speech from text. Also called text-to-speech (TTS). Duplex streaming, where both the input and the output are streamed, is supported.
Parameters
Name | Type |
---|---|
args | { fullResponse? : false ; model : StreamingSpeechGenerationModel <SpeechGenerationModelSettings > ; text : string | AsyncIterable <string > } & FunctionOptions |
Returns
Promise<AsyncIterable<Uint8Array>>
An async iterable promise that contains the synthesized speech chunks.
See
https://modelfusion.dev/guide/function/generate-speech
Example
const textStream = await streamText(...);
const speechStream = await streamSpeech({
model: elevenlabs.SpeechGenerator(...),
text: textStream
});
for await (const speechPart of speechStream) {
// ...
}
Defined in
packages/modelfusion/src/model-function/generate-speech/streamSpeech.ts:34
▸ streamSpeech(args): Promise<{ metadata: Omit<ModelCallMetadata, "durationInMs" | "finishTimestamp">; speechStream: AsyncIterable<Uint8Array> }>
Parameters
Name | Type |
---|---|
args | { fullResponse : true ; model : StreamingSpeechGenerationModel <SpeechGenerationModelSettings > ; text : string | AsyncIterable <string > } & FunctionOptions |
Returns
Promise<{ metadata: Omit<ModelCallMetadata, "durationInMs" | "finishTimestamp">; speechStream: AsyncIterable<Uint8Array> }>
Defined in
packages/modelfusion/src/model-function/generate-speech/streamSpeech.ts:41
streamText
▸ streamText<PROMPT>(args): Promise<AsyncIterable<string>>
Stream the generated text for a prompt as an async iterable.
The prompt depends on the model used. For instance, OpenAI completion models expect a string prompt, whereas OpenAI chat models expect an array of chat messages.
Type parameters
Name |
---|
PROMPT |
Parameters
Name | Type |
---|---|
args | { fullResponse? : false ; model : TextStreamingModel <PROMPT , TextGenerationModelSettings > ; prompt : PROMPT | PromptFunction <unknown , PROMPT > } & FunctionOptions |
Returns
Promise<AsyncIterable<string>>
An async iterable promise that yields the generated text.
See
https://modelfusion.dev/guide/function/generate-text
Example
const textStream = await streamText({
model: openai.CompletionTextGenerator(...),
prompt: "Write a short story about a robot learning to love:\n\n"
});
for await (const textPart of textStream) {
// ...
}
Defined in
packages/modelfusion/src/model-function/generate-text/streamText.ts:32
▸ streamText<PROMPT>(args): Promise<{ metadata: Omit<ModelCallMetadata, "durationInMs" | "finishTimestamp">; textPromise: PromiseLike<string>; textStream: AsyncIterable<string> }>
Type parameters
Name |
---|
PROMPT |
Parameters
Name | Type |
---|---|
args | { fullResponse : true ; model : TextStreamingModel <PROMPT , TextGenerationModelSettings > ; prompt : PROMPT | PromptFunction <unknown , PROMPT > } & FunctionOptions |
Returns
Promise<{ metadata: Omit<ModelCallMetadata, "durationInMs" | "finishTimestamp">; textPromise: PromiseLike<string>; textStream: AsyncIterable<string> }>
Defined in
packages/modelfusion/src/model-function/generate-text/streamText.ts:39
trimChatPrompt
▸ trimChatPrompt(«destructured»): Promise<ChatPrompt>
Keeps only the most recent messages in the prompt, while leaving enough space for the completion.
It will remove user-assistant message pairs that don't fit. The result is always a valid chat prompt.
When the minimal chat prompt (system message + last user message) is already too long, only this minimal chat prompt is returned.
Parameters
Name | Type |
---|---|
«destructured» | Object |
› model | TextGenerationModel <ChatPrompt , TextGenerationModelSettings > & HasTokenizer <ChatPrompt > & HasContextWindowSize |
› prompt | ChatPrompt |
› tokenLimit? | number |
Returns
Promise<ChatPrompt>
See
https://modelfusion.dev/guide/function/generate-text#limiting-the-chat-length
Defined in
packages/modelfusion/src/model-function/generate-text/prompt-template/trimChatPrompt.ts:19
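Example
A usage sketch (the model setup and the chat prompt are assumptions):

```ts
const trimmedPrompt = await trimChatPrompt({
  model: openai.ChatTextGenerator(...).withChatPrompt(),
  prompt: chatPrompt, // a ChatPrompt that may exceed the context window
});
```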
uncheckedSchema
▸ uncheckedSchema<OBJECT>(jsonSchema?): UncheckedSchema<OBJECT>
Type parameters
Name |
---|
OBJECT |
Parameters
Name | Type |
---|---|
jsonSchema? | unknown |
Returns
UncheckedSchema<OBJECT>
Defined in
packages/modelfusion/src/core/schema/UncheckedSchema.ts:4
upsertIntoVectorIndex
▸ upsertIntoVectorIndex<VALUE, OBJECT>(«destructured», options?): Promise<void>
Type parameters
Name |
---|
VALUE |
OBJECT |
Parameters
Name | Type | Default value |
---|---|---|
«destructured» | Object | undefined |
› embeddingModel | EmbeddingModel <VALUE , EmbeddingModelSettings > | undefined |
› generateId? | () => string | createId |
› getId? | (object : OBJECT , index : number ) => undefined | string | undefined |
› getValueToEmbed | (object : OBJECT , index : number ) => VALUE | undefined |
› objects | OBJECT [] | undefined |
› vectorIndex | VectorIndex <OBJECT , unknown , unknown > | undefined |
options? | FunctionOptions | undefined |
Returns
Promise<void>
Defined in
packages/modelfusion/src/vector-index/upsertIntoVectorIndex.ts:11
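Example
A usage sketch (the embedding model setup and the chunk objects are assumptions):

```ts
await upsertIntoVectorIndex({
  vectorIndex: new MemoryVectorIndex<TextChunk>(),
  embeddingModel: openai.TextEmbedder(...),
  objects: chunks,
  getValueToEmbed: (chunk) => chunk.text,
});
```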
validateContentIsString
▸ validateContentIsString(content, prompt): string
Parameters
Name | Type |
---|---|
content | unknown |
prompt | unknown |
Returns
string
Defined in
packages/modelfusion/src/model-function/generate-text/prompt-template/ContentPart.ts:42
validateTypes
▸ validateTypes<T>(«destructured»): T
Validates the types of an unknown object using a schema and returns a strongly-typed object.
Type parameters
Name | Description |
---|---|
T | The type of the object to validate. |
Parameters
Name | Type |
---|---|
«destructured» | Object |
› schema | Schema <T > |
› value | unknown |
Returns
T
The typed object.
Defined in
packages/modelfusion/src/core/schema/validateTypes.ts:13
withRun
▸ withRun(run, callback): Promise<void>
Stores the run in an AsyncLocalStorage if running in Node.js. It can be retrieved with getRun().
Parameters
Name | Type |
---|---|
run | Run |
callback | (run : Run ) => PromiseLike <void > |
Returns
Promise<void>
Defined in
packages/modelfusion/src/core/getRun.ts:47
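Example
A usage sketch:

```ts
const run = new DefaultRun();
await withRun(run, async () => {
  // Model calls inside this callback can access the run via getRun().
  await generateText(...);
});
```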
zodSchema
▸ zodSchema<OBJECT>(zodSchema): ZodSchema<OBJECT>
Type parameters
Name |
---|
OBJECT |
Parameters
Name | Type |
---|---|
zodSchema | ZodType <OBJECT , ZodTypeDef , OBJECT > |
Returns
ZodSchema<OBJECT>
Defined in
packages/modelfusion/src/core/schema/ZodSchema.ts:7
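Example
For instance, wrapping a zod schema for use with object generation functions such as streamObject:

```ts
const characterSchema = zodSchema(
  z.object({
    name: z.string(),
    description: z.string(),
  })
);
```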
Variables
CHAT_MODEL_CONTEXT_WINDOW_SIZES
• Const
CHAT_MODEL_CONTEXT_WINDOW_SIZES: Object
Type declaration
Name | Type |
---|---|
gpt-3.5-turbo | 4096 |
gpt-3.5-turbo-0125 | 16385 |
gpt-3.5-turbo-0301 | 4096 |
gpt-3.5-turbo-0613 | 4096 |
gpt-3.5-turbo-1106 | 16385 |
gpt-3.5-turbo-16k | 16384 |
gpt-3.5-turbo-16k-0613 | 16384 |
gpt-4 | 8192 |
gpt-4-0125-preview | 128000 |
gpt-4-0314 | 8192 |
gpt-4-0613 | 8192 |
gpt-4-1106-preview | 128000 |
gpt-4-32k | 32768 |
gpt-4-32k-0314 | 32768 |
gpt-4-32k-0613 | 32768 |
gpt-4-turbo-preview | 128000 |
gpt-4-vision-preview | 128000 |
Defined in
packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:27
COHERE_TEXT_EMBEDDING_MODELS
• Const
COHERE_TEXT_EMBEDDING_MODELS: Object
Type declaration
Name | Type |
---|---|
embed-english-light-v2.0 | { contextWindowSize : number = 512; dimensions : number = 1024 } |
embed-english-light-v2.0.contextWindowSize | number |
embed-english-light-v2.0.dimensions | number |
embed-english-light-v3.0 | { contextWindowSize : number = 512; dimensions : number = 384 } |
embed-english-light-v3.0.contextWindowSize | number |
embed-english-light-v3.0.dimensions | number |
embed-english-v2.0 | { contextWindowSize : number = 512; dimensions : number = 4096 } |
embed-english-v2.0.contextWindowSize | number |
embed-english-v2.0.dimensions | number |
embed-english-v3.0 | { contextWindowSize : number = 512; dimensions : number = 1024 } |
embed-english-v3.0.contextWindowSize | number |
embed-english-v3.0.dimensions | number |
embed-multilingual-light-v3.0 | { contextWindowSize : number = 512; dimensions : number = 384 } |
embed-multilingual-light-v3.0.contextWindowSize | number |
embed-multilingual-light-v3.0.dimensions | number |
embed-multilingual-v2.0 | { contextWindowSize : number = 256; dimensions : number = 768 } |
embed-multilingual-v2.0.contextWindowSize | number |
embed-multilingual-v2.0.dimensions | number |
embed-multilingual-v3.0 | { contextWindowSize : number = 512; dimensions : number = 1024 } |
embed-multilingual-v3.0.contextWindowSize | number |
embed-multilingual-v3.0.dimensions | number |
Defined in
packages/modelfusion/src/model-provider/cohere/CohereTextEmbeddingModel.ts:20
COHERE_TEXT_GENERATION_MODELS
• Const
COHERE_TEXT_GENERATION_MODELS: Object
Type declaration
Name | Type |
---|---|
command | { contextWindowSize : number = 4096 } |
command.contextWindowSize | number |
command-light | { contextWindowSize : number = 4096 } |
command-light.contextWindowSize | number |
Defined in
packages/modelfusion/src/model-provider/cohere/CohereTextGenerationModel.ts:32
ChatMessage
• ChatMessage: Object
Type declaration
Name | Type |
---|---|
assistant | (__namedParameters : { text : null | string ; toolResults : null | ToolCallResult <string , unknown , unknown >[] }) => ChatMessage |
tool | (__namedParameters : { toolResults : null | ToolCallResult <string , unknown , unknown >[] }) => ChatMessage |
user | (__namedParameters : { text : string }) => ChatMessage |
Defined in
packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:49
packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:54
CohereTextGenerationResponseFormat
• Const
CohereTextGenerationResponseFormat: Object
Type declaration
Name | Type | Description |
---|---|---|
deltaIterable | { handler : (__namedParameters : { response : Response }) => Promise <AsyncIterable <Delta <{ is_finished : false ; text : string } | { finish_reason : string ; is_finished : true ; response : { generations : { finish_reason? : string ; id : string ; text : string }[] ; id : string ; meta? : { api_version : { version : string } } ; prompt : string } = cohereTextGenerationResponseSchema }>>> ; stream : boolean = true } | Returns an async iterable over the full deltas (all choices, including full current state at time of event) of the response stream. |
deltaIterable.handler | (__namedParameters : { response : Response }) => Promise <AsyncIterable <Delta <{ is_finished : false ; text : string } | { finish_reason : string ; is_finished : true ; response : { generations : { finish_reason? : string ; id : string ; text : string }[] ; id : string ; meta? : { api_version : { version : string } } ; prompt : string } = cohereTextGenerationResponseSchema }>>> | - |
deltaIterable.stream | boolean | - |
json | { handler : ResponseHandler <{ generations : { finish_reason? : string ; id : string ; text : string }[] ; id : string ; meta? : { api_version : { version : string } } ; prompt : string }> ; stream : boolean = false } | Returns the response as a JSON object. |
json.handler | ResponseHandler <{ generations : { finish_reason? : string ; id : string ; text : string }[] ; id : string ; meta? : { api_version : { version : string } } ; prompt : string }> | - |
json.stream | boolean | - |
Defined in
packages/modelfusion/src/model-provider/cohere/CohereTextGenerationModel.ts:315
LlamaCppCompletionResponseFormat
• Const
LlamaCppCompletionResponseFormat: Object
Type declaration
Name | Type | Description |
---|---|---|
deltaIterable | { handler : (__namedParameters : { response : Response }) => Promise <AsyncIterable <Delta <{ content : string ; generation_settings : { frequency_penalty : number ; ignore_eos : boolean ; logit_bias : number [] ; mirostat : number ; mirostat_eta : number ; mirostat_tau : number ; model : string ; n_ctx : number ; n_keep : number ; n_predict : number ; n_probs : number ; penalize_nl : boolean ; presence_penalty : number ; repeat_last_n : number ; repeat_penalty : number ; seed : number ; stop : string [] ; stream : boolean ; temperature? : number ; tfs_z : number ; top_k : number ; top_p : number ; typical_p : number } ; model : string ; prompt : string ; stop : true ; stopped_eos : boolean ; stopped_limit : boolean ; stopped_word : boolean ; stopping_word : string ; timings : { predicted_ms : number ; predicted_n : number ; predicted_per_second : null | number ; predicted_per_token_ms : null | number ; prompt_ms? : null | number ; prompt_n : number ; prompt_per_second : null | number ; prompt_per_token_ms : null | number } ; tokens_cached : number ; tokens_evaluated : number ; tokens_predicted : number ; truncated : boolean } | { content : string ; stop : false }>>> ; stream : true = true } | Returns an async iterable over the full deltas (all choices, including full current state at time of event) of the response stream. |
deltaIterable.handler | (__namedParameters : { response : Response }) => Promise <AsyncIterable <Delta <{ content : string ; generation_settings : { frequency_penalty : number ; ignore_eos : boolean ; logit_bias : number [] ; mirostat : number ; mirostat_eta : number ; mirostat_tau : number ; model : string ; n_ctx : number ; n_keep : number ; n_predict : number ; n_probs : number ; penalize_nl : boolean ; presence_penalty : number ; repeat_last_n : number ; repeat_penalty : number ; seed : number ; stop : string [] ; stream : boolean ; temperature? : number ; tfs_z : number ; top_k : number ; top_p : number ; typical_p : number } ; model : string ; prompt : string ; stop : true ; stopped_eos : boolean ; stopped_limit : boolean ; stopped_word : boolean ; stopping_word : string ; timings : { predicted_ms : number ; predicted_n : number ; predicted_per_second : null | number ; predicted_per_token_ms : null | number ; prompt_ms? : null | number ; prompt_n : number ; prompt_per_second : null | number ; prompt_per_token_ms : null | number } ; tokens_cached : number ; tokens_evaluated : number ; tokens_predicted : number ; truncated : boolean } | { content : string ; stop : false }>>> | - |
deltaIterable.stream | true | - |
json | { handler : ResponseHandler <{ content : string ; generation_settings : { frequency_penalty : number ; ignore_eos : boolean ; logit_bias : number [] ; mirostat : number ; mirostat_eta : number ; mirostat_tau : number ; model : string ; n_ctx : number ; n_keep : number ; n_predict : number ; n_probs : number ; penalize_nl : boolean ; presence_penalty : number ; repeat_last_n : number ; repeat_penalty : number ; seed : number ; stop : string [] ; stream : boolean ; temperature? : number ; tfs_z : number ; top_k : number ; top_p : number ; typical_p : number } ; model : string ; prompt : string ; stop : true ; stopped_eos : boolean ; stopped_limit : boolean ; stopped_word : boolean ; stopping_word : string ; timings : { predicted_ms : number ; predicted_n : number ; predicted_per_second : null | number ; predicted_per_token_ms : null | number ; prompt_ms? : null | number ; prompt_n : number ; prompt_per_second : null | number ; prompt_per_token_ms : null | number } ; tokens_cached : number ; tokens_evaluated : number ; tokens_predicted : number ; truncated : boolean }> ; stream : false = false } | Returns the response as a JSON object. |
json.handler | ResponseHandler <{ content : string ; generation_settings : { frequency_penalty : number ; ignore_eos : boolean ; logit_bias : number [] ; mirostat : number ; mirostat_eta : number ; mirostat_tau : number ; model : string ; n_ctx : number ; n_keep : number ; n_predict : number ; n_probs : number ; penalize_nl : boolean ; presence_penalty : number ; repeat_last_n : number ; repeat_penalty : number ; seed : number ; stop : string [] ; stream : boolean ; temperature? : number ; tfs_z : number ; top_k : number ; top_p : number ; typical_p : number } ; model : string ; prompt : string ; stop : true ; stopped_eos : boolean ; stopped_limit : boolean ; stopped_word : boolean ; stopping_word : string ; timings : { predicted_ms : number ; predicted_n : number ; predicted_per_second : null | number ; predicted_per_token_ms : null | number ; prompt_ms? : null | number ; prompt_n : number ; prompt_per_second : null | number ; prompt_per_token_ms : null | number } ; tokens_cached : number ; tokens_evaluated : number ; tokens_predicted : number ; truncated : boolean }> | - |
json.stream | false | - |
Defined in
packages/modelfusion/src/model-provider/llamacpp/LlamaCppCompletionModel.ts:593
MistralChatResponseFormat
• Const
MistralChatResponseFormat: Object
Type declaration
Name | Type | Description |
---|---|---|
json | { handler : ResponseHandler <{ choices : { finish_reason : "length" | "stop" | "model_length" ; index : number ; message : { content : string ; role : "user" | "assistant" } }[] ; created : number ; id : string ; model : string ; object : string ; usage : { completion_tokens : number ; prompt_tokens : number ; total_tokens : number } }> ; stream : boolean = false } | Returns the response as a JSON object. |
json.handler | ResponseHandler <{ choices : { finish_reason : "length" | "stop" | "model_length" ; index : number ; message : { content : string ; role : "user" | "assistant" } }[] ; created : number ; id : string ; model : string ; object : string ; usage : { completion_tokens : number ; prompt_tokens : number ; total_tokens : number } }> | - |
json.stream | boolean | - |
textDeltaIterable | { handler : (__namedParameters : { response : Response }) => Promise <AsyncIterable <Delta <{ choices : { delta : { content? : null | string ; role? : null | "user" | "assistant" } ; finish_reason? : null | "length" | "stop" | "model_length" ; index : number }[] ; created? : number ; id : string ; model : string ; object? : string }>>> ; stream : boolean = true } | Returns an async iterable over the text deltas (only the text delta of the first choice). |
textDeltaIterable.handler | (__namedParameters : { response : Response }) => Promise <AsyncIterable <Delta <{ choices : { delta : { content? : null | string ; role? : null | "user" | "assistant" } ; finish_reason? : null | "length" | "stop" | "model_length" ; index : number }[] ; created? : number ; id : string ; model : string ; object? : string }>>> | - |
textDeltaIterable.stream | boolean | - |
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:298
OPENAI_CHAT_MESSAGE_BASE_TOKEN_COUNT
• Const
OPENAI_CHAT_MESSAGE_BASE_TOKEN_COUNT: 5
Prompt tokens that are included automatically for every message that is sent to OpenAI.
Defined in
packages/modelfusion/src/model-provider/openai/countOpenAIChatMessageTokens.ts:19
OPENAI_CHAT_PROMPT_BASE_TOKEN_COUNT
• Const
OPENAI_CHAT_PROMPT_BASE_TOKEN_COUNT: 2
Prompt tokens that are included automatically for every full chat prompt (several messages) that is sent to OpenAI.
Defined in
packages/modelfusion/src/model-provider/openai/countOpenAIChatMessageTokens.ts:13
OPENAI_TEXT_EMBEDDING_MODELS
• Const
OPENAI_TEXT_EMBEDDING_MODELS: Object
Type declaration
Name | Type |
---|---|
text-embedding-3-large | { contextWindowSize : number = 8192; dimensions : number = 3072 } |
text-embedding-3-large.contextWindowSize | number |
text-embedding-3-large.dimensions | number |
text-embedding-3-small | { contextWindowSize : number = 8192; dimensions : number = 1536 } |
text-embedding-3-small.contextWindowSize | number |
text-embedding-3-small.dimensions | number |
text-embedding-ada-002 | { contextWindowSize : number = 8192; dimensions : number = 1536 } |
text-embedding-ada-002.contextWindowSize | number |
text-embedding-ada-002.dimensions | number |
Defined in
packages/modelfusion/src/model-provider/openai/OpenAITextEmbeddingModel.ts:10
OPENAI_TEXT_GENERATION_MODELS
• Const
OPENAI_TEXT_GENERATION_MODELS: Object
Type declaration
Name | Type |
---|---|
gpt-3.5-turbo-instruct | { contextWindowSize : number = 4097 } |
gpt-3.5-turbo-instruct.contextWindowSize | number |
Defined in
packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:23
OllamaChatResponseFormat
• Const
OllamaChatResponseFormat: Object
Type declaration
Name | Type | Description |
---|---|---|
deltaIterable | { handler : (__namedParameters : { response : Response }) => Promise <AsyncIterable <Delta <{ created_at : string ; done : false ; message : { content : string ; role : string } ; model : string } | { created_at : string ; done : true ; eval_count : number ; eval_duration : number ; load_duration? : number ; model : string ; prompt_eval_count? : number ; prompt_eval_duration? : number ; total_duration : number }>>> ; stream : boolean = true } | Returns an async iterable over the full deltas (all choices, including full current state at time of event) of the response stream. |
deltaIterable.handler | (__namedParameters : { response : Response }) => Promise <AsyncIterable <Delta <{ created_at : string ; done : false ; message : { content : string ; role : string } ; model : string } | { created_at : string ; done : true ; eval_count : number ; eval_duration : number ; load_duration? : number ; model : string ; prompt_eval_count? : number ; prompt_eval_duration? : number ; total_duration : number }>>> | - |
deltaIterable.stream | boolean | - |
json | { handler : (__namedParameters : { requestBodyValues : unknown ; response : Response ; url : string }) => Promise <{ created_at : string ; done : true ; eval_count : number ; eval_duration : number ; load_duration? : number ; message : { content : string ; role : string } ; model : string ; prompt_eval_count? : number ; prompt_eval_duration? : number ; total_duration : number }> ; stream : false = false } | Returns the response as a JSON object. |
json.handler | (__namedParameters : { requestBodyValues : unknown ; response : Response ; url : string }) => Promise <{ created_at : string ; done : true ; eval_count : number ; eval_duration : number ; load_duration? : number ; message : { content : string ; role : string } ; model : string ; prompt_eval_count? : number ; prompt_eval_duration? : number ; total_duration : number }> | - |
json.stream | false | - |
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:318
OllamaCompletionResponseFormat
• Const
OllamaCompletionResponseFormat: Object
Type declaration
Name | Type | Description |
---|---|---|
deltaIterable | { handler : (__namedParameters : { response : Response }) => Promise <AsyncIterable <Delta <{ created_at : string ; done : false ; model : string ; response : string } | { context? : number [] ; created_at : string ; done : true ; eval_count : number ; eval_duration : number ; load_duration? : number ; model : string ; prompt_eval_count? : number ; prompt_eval_duration? : number ; sample_count? : number ; sample_duration? : number ; total_duration : number }>>> ; stream : boolean = true } | Returns an async iterable over the full deltas (all choices, including full current state at time of event) of the response stream. |
deltaIterable.handler | (__namedParameters : { response : Response }) => Promise <AsyncIterable <Delta <{ created_at : string ; done : false ; model : string ; response : string } | { context? : number [] ; created_at : string ; done : true ; eval_count : number ; eval_duration : number ; load_duration? : number ; model : string ; prompt_eval_count? : number ; prompt_eval_duration? : number ; sample_count? : number ; sample_duration? : number ; total_duration : number }>>> | - |
deltaIterable.stream | boolean | - |
json | { handler : (__namedParameters : { requestBodyValues : unknown ; response : Response ; url : string }) => Promise <{ context? : number [] ; created_at : string ; done : true ; eval_count : number ; eval_duration : number ; load_duration? : number ; model : string ; prompt_eval_count? : number ; prompt_eval_duration? : number ; response : string ; total_duration : number }> ; stream : false = false } | Returns the response as a JSON object. |
json.handler | (__namedParameters : { requestBodyValues : unknown ; response : Response ; url : string }) => Promise <{ context? : number [] ; created_at : string ; done : true ; eval_count : number ; eval_duration : number ; load_duration? : number ; model : string ; prompt_eval_count? : number ; prompt_eval_duration? : number ; response : string ; total_duration : number }> | - |
json.stream | false | - |
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaCompletionModel.ts:397
OpenAIChatResponseFormat
• Const
OpenAIChatResponseFormat: Object
Type declaration
Name | Type | Description |
---|---|---|
deltaIterable | { handler : (__namedParameters : { response : Response }) => Promise <AsyncIterable <Delta <{ choices : { delta : { content? : null | string ; function_call? : { arguments? : string ; name? : string } ; role? : "user" | "assistant" ; tool_calls? : { function : { arguments : string ; name : string } ; id : string ; type : "function" }[] } ; finish_reason? : null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index : number }[] ; created : number ; id : string ; model? : string ; object : string ; system_fingerprint? : null | string }>>> ; stream : boolean = true } | Returns an async iterable over the text deltas (only the text delta of the first choice). |
deltaIterable.handler | (__namedParameters : { response : Response }) => Promise <AsyncIterable <Delta <{ choices : { delta : { content? : null | string ; function_call? : { arguments? : string ; name? : string } ; role? : "user" | "assistant" ; tool_calls? : { function : { arguments : string ; name : string } ; id : string ; type : "function" }[] } ; finish_reason? : null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index : number }[] ; created : number ; id : string ; model? : string ; object : string ; system_fingerprint? : null | string }>>> | - |
deltaIterable.stream | boolean | - |
json | { handler : ResponseHandler <{ choices : { finish_reason? : null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index? : number ; logprobs? : any ; message : { content : null | string ; function_call? : { arguments : string ; name : string } ; role : "assistant" ; tool_calls? : { function : { arguments : string ; name : string } ; id : string ; type : "function" }[] } }[] ; created : number ; id : string ; model : string ; object : "chat.completion" ; system_fingerprint? : null | string ; usage : { completion_tokens : number ; prompt_tokens : number ; total_tokens : number } }> ; stream : boolean = false } | Returns the response as a JSON object. |
json.handler | ResponseHandler <{ choices : { finish_reason? : null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index? : number ; logprobs? : any ; message : { content : null | string ; function_call? : { arguments : string ; name : string } ; role : "assistant" ; tool_calls? : { function : { arguments : string ; name : string } ; id : string ; type : "function" }[] } }[] ; created : number ; id : string ; model : string ; object : "chat.completion" ; system_fingerprint? : null | string ; usage : { completion_tokens : number ; prompt_tokens : number ; total_tokens : number } }> | - |
json.stream | boolean | - |
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:449
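The `deltaIterable` format above yields chunks whose shape is documented in the table. As an illustrative sketch (local types, not the library implementation), this is how a consumer might pull the text delta of the first choice out of each chunk:

```typescript
// Local mirror of the documented streamed chat chunk shape (simplified:
// function_call and tool_calls omitted for brevity).
type ChatDeltaChunk = {
  choices: {
    delta: { content?: null | string; role?: "user" | "assistant" };
    finish_reason?: null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter";
    index: number;
  }[];
  created: number;
  id: string;
  model?: string;
  object: string;
};

function firstChoiceTextDelta(chunk: ChatDeltaChunk): string {
  // A missing or null content (e.g. a role-only first chunk) yields an empty delta.
  return chunk.choices[0]?.delta.content ?? "";
}

const chunk: ChatDeltaChunk = {
  choices: [{ delta: { content: "Hello" }, index: 0 }],
  created: 1700000000,
  id: "chatcmpl-1",
  object: "chat.completion.chunk",
};
```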
OpenAIImageGenerationResponseFormat
• Const
OpenAIImageGenerationResponseFormat: Object
Type declaration
Name | Type |
---|---|
base64Json | { handler : ResponseHandler <{ created : number ; data : { b64_json : string }[] }> ; type : "b64_json" } |
base64Json.handler | ResponseHandler <{ created : number ; data : { b64_json : string }[] }> |
base64Json.type | "b64_json" |
url | { handler : ResponseHandler <{ created : number ; data : { url : string }[] }> ; type : "url" } |
url.handler | ResponseHandler <{ created : number ; data : { url : string }[] }> |
url.type | "url" |
Defined in
packages/modelfusion/src/model-provider/openai/OpenAIImageGenerationModel.ts:174
OpenAITextResponseFormat
• Const
OpenAITextResponseFormat: Object
Type declaration
Name | Type | Description |
---|---|---|
deltaIterable | { handler : (__namedParameters : { response : Response }) => Promise <AsyncIterable <Delta <{ choices : { finish_reason? : null | "length" | "stop" | "content_filter" ; index : number ; text : string }[] ; created : number ; id : string ; model : string ; object : "text_completion" ; system_fingerprint? : string }>>> ; stream : boolean = true } | Returns an async iterable over the full deltas (all choices, including full current state at time of event) of the response stream. |
deltaIterable.handler | (__namedParameters : { response : Response }) => Promise <AsyncIterable <Delta <{ choices : { finish_reason? : null | "length" | "stop" | "content_filter" ; index : number ; text : string }[] ; created : number ; id : string ; model : string ; object : "text_completion" ; system_fingerprint? : string }>>> | - |
deltaIterable.stream | boolean | - |
json | { handler : ResponseHandler <{ choices : { finish_reason? : null | "length" | "stop" | "content_filter" ; index : number ; logprobs? : any ; text : string }[] ; created : number ; id : string ; model : string ; object : "text_completion" ; system_fingerprint? : string ; usage : { completion_tokens : number ; prompt_tokens : number ; total_tokens : number } }> ; stream : boolean = false } | Returns the response as a JSON object. |
json.handler | ResponseHandler <{ choices : { finish_reason? : null | "length" | "stop" | "content_filter" ; index : number ; logprobs? : any ; text : string }[] ; created : number ; id : string ; model : string ; object : "text_completion" ; system_fingerprint? : string ; usage : { completion_tokens : number ; prompt_tokens : number ; total_tokens : number } }> | - |
json.stream | boolean | - |
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:238
OpenAITranscriptionResponseFormat
• Const
OpenAITranscriptionResponseFormat: Object
Type declaration
Name | Type |
---|---|
json | { handler : ResponseHandler <{ text : string }> ; type : "json" } |
json.handler | ResponseHandler <{ text : string }> |
json.type | "json" |
srt | { handler : ResponseHandler <string > ; type : "srt" } |
srt.handler | ResponseHandler <string > |
srt.type | "srt" |
text | { handler : ResponseHandler <string > ; type : "text" } |
text.handler | ResponseHandler <string > |
text.type | "text" |
verboseJson | { handler : ResponseHandler <{ duration : number ; language : string ; segments : { avg_logprob : number ; compression_ratio : number ; end : number ; id : number ; no_speech_prob : number ; seek : number ; start : number ; temperature : number ; text : string ; tokens : number [] ; transient? : boolean }[] ; task : "transcribe" ; text : string }> ; type : "verbose_json" } |
verboseJson.handler | ResponseHandler <{ duration : number ; language : string ; segments : { avg_logprob : number ; compression_ratio : number ; end : number ; id : number ; no_speech_prob : number ; seek : number ; start : number ; temperature : number ; text : string ; tokens : number [] ; transient? : boolean }[] ; task : "transcribe" ; text : string }> |
verboseJson.type | "verbose_json" |
vtt | { handler : ResponseHandler <string > ; type : "vtt" } |
vtt.handler | ResponseHandler <string > |
vtt.type | "vtt" |
Defined in
packages/modelfusion/src/model-provider/openai/OpenAITranscriptionModel.ts:228
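Each transcription response format in the table above pairs a `type` tag (sent to the API) with a handler for the response body. A hedged sketch of that pattern, using a local `ResponseHandler` stand-in and only a subset of the formats:

```typescript
// Local stand-in for the library's ResponseHandler; here a handler simply
// transforms the raw response body string.
type ResponseHandler<T> = (body: string) => T;

const transcriptionFormats = {
  json: { type: "json", handler: (body: string) => JSON.parse(body) as { text: string } },
  text: { type: "text", handler: (body: string) => body },
  srt: { type: "srt", handler: (body: string) => body },
  vtt: { type: "vtt", handler: (body: string) => body },
} satisfies Record<string, { type: string; handler: ResponseHandler<unknown> }>;
```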
jsonObjectPrompt
• Const
jsonObjectPrompt: Object
Type declaration
Name | Type |
---|---|
custom | <SOURCE_PROMPT, TARGET_PROMPT>(createPrompt : (prompt : SOURCE_PROMPT , schema : Schema <unknown > & JsonSchemaProducer ) => TARGET_PROMPT ) => ObjectFromTextPromptTemplate <SOURCE_PROMPT , TARGET_PROMPT > |
instruction | (__namedParameters : { schemaPrefix? : string ; schemaSuffix? : string }) => FlexibleObjectFromTextPromptTemplate <InstructionPrompt , InstructionPrompt > |
text | (__namedParameters : { schemaPrefix? : string ; schemaSuffix? : string }) => FlexibleObjectFromTextPromptTemplate <string , InstructionPrompt > |
Defined in
packages/modelfusion/src/model-function/generate-object/jsonObjectPrompt.ts:14
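The `instruction` and `text` variants above accept optional `schemaPrefix` and `schemaSuffix` strings that frame the JSON schema injected into the prompt. A sketch of what such a template might produce; the default prefix/suffix strings here are assumptions for the example, not the library's actual defaults:

```typescript
// Illustrative only: builds the system text a schema-injecting prompt
// template could emit. Default strings below are hypothetical.
function buildJsonPromptSystem(
  schema: object,
  {
    schemaPrefix = "JSON schema:",
    schemaSuffix = "Respond only with JSON that matches the schema.",
  }: { schemaPrefix?: string; schemaSuffix?: string } = {}
): string {
  return [schemaPrefix, JSON.stringify(schema), schemaSuffix].join("\n");
}

const system = buildJsonPromptSystem({
  type: "object",
  properties: { name: { type: "string" } },
});
```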
jsonToolCallPrompt
• Const
jsonToolCallPrompt: Object
Type declaration
Name | Type |
---|---|
instruction | (__namedParameters : { toolPrompt? : (tool : ToolDefinition <string , unknown >) => string }) => ToolCallPromptTemplate <InstructionPrompt , InstructionPrompt > |
text | (__namedParameters : { toolPrompt? : (tool : ToolDefinition <string , unknown >) => string }) => ToolCallPromptTemplate <string , InstructionPrompt > |
Defined in
packages/modelfusion/src/tool/generate-tool-call/jsonToolCallPrompt.ts:22
openAITextEmbeddingResponseSchema
• Const
openAITextEmbeddingResponseSchema: ZodObject<{ data: ZodArray<ZodObject<{ embedding: ZodArray<ZodNumber, "many">; index: ZodNumber; object: ZodLiteral<"embedding"> }, "strip", ZodTypeAny, { embedding: number[]; index: number; object: "embedding" }, { embedding: number[]; index: number; object: "embedding" }>, "many">; model: ZodString; object: ZodLiteral<"list">; usage: ZodOptional<ZodObject<{ prompt_tokens: ZodNumber; total_tokens: ZodNumber }, "strip", ZodTypeAny, { prompt_tokens: number; total_tokens: number }, { prompt_tokens: number; total_tokens: number }>> }, "strip", ZodTypeAny, { data: { embedding: number[]; index: number; object: "embedding" }[]; model: string; object: "list"; usage?: { prompt_tokens: number; total_tokens: number } }, { data: { embedding: number[]; index: number; object: "embedding" }[]; model: string; object: "list"; usage?: { prompt_tokens: number; total_tokens: number } }>
Defined in
packages/modelfusion/src/model-provider/openai/OpenAITextEmbeddingModel.ts:34
textGenerationModelProperties
• Const
textGenerationModelProperties: readonly ["maxGenerationTokens", "stopSequences", "numberOfGenerations", "trimWhitespace"]
Defined in
packages/modelfusion/src/model-function/generate-text/TextGenerationModel.ts:12
References
MistralChatMessage
Renames and re-exports ChatMessage
MistralChatPrompt
Renames and re-exports ChatPrompt
OllamaChatMessage
Renames and re-exports ChatMessage
OllamaChatPrompt
Renames and re-exports ChatPrompt
OpenAIChatMessage
Renames and re-exports ChatMessage
OpenAIChatPrompt
Renames and re-exports ChatPrompt
retryNever
Re-exports retryNever
retryWithExponentialBackoff
Re-exports retryWithExponentialBackoff
throttleMaxConcurrency
Re-exports throttleMaxConcurrency
throttleOff
Re-exports throttleOff
Type Aliases
AssistantContent
Ƭ AssistantContent: string | (TextPart | ToolCallPart)[]
Defined in
packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:41
AudioMimeType
Ƭ AudioMimeType: "audio/webm" | "audio/mp3" | "audio/wav" | "audio/mp4" | "audio/mpeg" | "audio/mpga" | "audio/ogg" | "audio/oga" | "audio/flac" | "audio/m4a"
Defined in
packages/modelfusion/src/util/audio/AudioMimeType.ts:1
Automatic1111ErrorData
Ƭ Automatic1111ErrorData: Object
Type declaration
Name | Type |
---|---|
body | string |
detail | string |
error | string |
errors | string |
Defined in
packages/modelfusion/src/model-provider/automatic1111/Automatic1111Error.ts:16
Automatic1111ImageGenerationPrompt
Ƭ Automatic1111ImageGenerationPrompt: Object
Type declaration
Name | Type |
---|---|
negativePrompt? | string |
prompt | string |
Defined in
packages/modelfusion/src/model-provider/automatic1111/Automatic1111ImageGenerationPrompt.ts:3
Automatic1111ImageGenerationResponse
Ƭ Automatic1111ImageGenerationResponse: Object
Type declaration
Name | Type |
---|---|
images | string [] |
info | string |
parameters |
Defined in
packages/modelfusion/src/model-provider/automatic1111/Automatic1111ImageGenerationModel.ts:184
AzureOpenAIApiConfigurationOptions
Ƭ AzureOpenAIApiConfigurationOptions: Object
Type declaration
Name | Type |
---|---|
apiKey? | string |
apiVersion | string |
deploymentId | string |
resourceName | string |
retry? | RetryFunction |
throttle? | ThrottleFunction |
Defined in
packages/modelfusion/src/model-provider/openai/AzureOpenAIApiConfiguration.ts:6
BaseFunctionFinishedEventResult
Ƭ BaseFunctionFinishedEventResult: { status: "success"; value: unknown } | { error: unknown; status: "error" } | { status: "abort" }
Defined in
packages/modelfusion/src/core/FunctionEvent.ts:93
BaseModelCallFinishedEventResult
Ƭ BaseModelCallFinishedEventResult: { rawResponse: unknown; status: "success"; usage?: unknown; value: unknown } | { error: unknown; status: "error" } | { status: "abort" }
Defined in
packages/modelfusion/src/model-function/ModelCallEvent.ts:65
BaseUrlPartsApiConfigurationOptions
Ƭ BaseUrlPartsApiConfigurationOptions: Object
Type declaration
Name | Type |
---|---|
baseUrl | string | UrlParts |
customCallHeaders? | CustomHeaderProvider |
headers? | Record <string , string > |
retry? | RetryFunction |
throttle? | ThrottleFunction |
Defined in
packages/modelfusion/src/core/api/BaseUrlApiConfiguration.ts:13
ChatMessage
Ƭ ChatMessage: { content: UserContent; role: "user" } | { content: AssistantContent; role: "assistant" } | { content: ToolContent; role: "tool" }
A message in a chat prompt.
See
ChatPrompt
Defined in
packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:49
packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:54
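`ChatMessage` is a discriminated union on `role`, so TypeScript narrows the content type per branch. A minimal local mirror of the union (content types simplified to `string` for the sketch):

```typescript
// Simplified sketch of the documented ChatMessage union; the real content
// types (UserContent, AssistantContent, ToolContent) are reduced to string.
type ChatMessage =
  | { role: "user"; content: string }
  | { role: "assistant"; content: string }
  | { role: "tool"; content: string };

function describeMessage(message: ChatMessage): string {
  // The switch on the `role` discriminant is exhaustive for the union.
  switch (message.role) {
    case "user":
      return `user says: ${message.content}`;
    case "assistant":
      return `assistant says: ${message.content}`;
    case "tool":
      return `tool result: ${message.content}`;
  }
}
```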
ClassifyFinishedEventResult
Ƭ ClassifyFinishedEventResult: { rawResponse: unknown; status: "success"; value: unknown } | { error: unknown; status: "error" } | { status: "abort" }
Defined in
packages/modelfusion/src/model-function/classify/ClassifyEvent.ts:11
CohereDetokenizationResponse
Ƭ CohereDetokenizationResponse: Object
Type declaration
Name | Type |
---|---|
meta | { api_version : { version : string } } |
meta.api_version | { version : string } |
meta.api_version.version | string |
text | string |
Defined in
packages/modelfusion/src/model-provider/cohere/CohereTokenizer.ts:141
CohereErrorData
Ƭ CohereErrorData: Object
Type declaration
Name | Type |
---|---|
message | string |
Defined in
packages/modelfusion/src/model-provider/cohere/CohereError.ts:9
CohereTextEmbeddingModelType
Ƭ CohereTextEmbeddingModelType: keyof typeof COHERE_TEXT_EMBEDDING_MODELS
Defined in
packages/modelfusion/src/model-provider/cohere/CohereTextEmbeddingModel.ts:51
CohereTextEmbeddingResponse
Ƭ CohereTextEmbeddingResponse: Object
Type declaration
Name | Type |
---|---|
embeddings | number [][] |
id | string |
meta | { api_version : { version : string } } |
meta.api_version | { version : string } |
meta.api_version.version | string |
texts | string [] |
Defined in
packages/modelfusion/src/model-provider/cohere/CohereTextEmbeddingModel.ts:196
CohereTextGenerationModelType
Ƭ CohereTextGenerationModelType: keyof typeof COHERE_TEXT_GENERATION_MODELS
Defined in
packages/modelfusion/src/model-provider/cohere/CohereTextGenerationModel.ts:41
CohereTextGenerationResponse
Ƭ CohereTextGenerationResponse: Object
Type declaration
Name | Type |
---|---|
generations | { finish_reason? : string ; id : string ; text : string }[] |
id | string |
meta? | { api_version : { version : string } } |
meta.api_version | { version : string } |
meta.api_version.version | string |
prompt | string |
Defined in
packages/modelfusion/src/model-provider/cohere/CohereTextGenerationModel.ts:292
CohereTextGenerationResponseFormatType
Ƭ CohereTextGenerationResponseFormatType<T>: Object
Type parameters
Name |
---|
T |
Type declaration
Name | Type |
---|---|
handler | ResponseHandler <T > |
stream | boolean |
Defined in
packages/modelfusion/src/model-provider/cohere/CohereTextGenerationModel.ts:310
CohereTextStreamChunk
Ƭ CohereTextStreamChunk: { is_finished: false; text: string } | { finish_reason: string; is_finished: true; response: { generations: { finish_reason?: string; id: string; text: string }[]; id: string; meta?: { api_version: { version: string } }; prompt: string } = cohereTextGenerationResponseSchema }
Defined in
packages/modelfusion/src/model-provider/cohere/CohereTextGenerationModel.ts:308
CohereTokenizationResponse
Ƭ CohereTokenizationResponse: Object
Type declaration
Name | Type |
---|---|
meta | { api_version : { version : string } } |
meta.api_version | { version : string } |
meta.api_version.version | string |
token_strings | string [] |
tokens | number [] |
Defined in
packages/modelfusion/src/model-provider/cohere/CohereTokenizer.ts:155
CohereTokenizerModelType
Ƭ CohereTokenizerModelType: CohereTextGenerationModelType | CohereTextEmbeddingModelType
Defined in
packages/modelfusion/src/model-provider/cohere/CohereTokenizer.ts:16
CustomHeaderProvider
Ƭ CustomHeaderProvider: (headerParameters: HeaderParameters) => Record<string, string | undefined>
Type declaration
▸ (headerParameters): Record<string, string | undefined>
Parameters
Name | Type |
---|---|
headerParameters | HeaderParameters |
Returns
Record<string, string | undefined>
Defined in
packages/modelfusion/src/core/api/CustomHeaderProvider.ts:3
DataContent
Ƭ DataContent: string | Uint8Array | ArrayBuffer | Buffer
Data content. Can either be a base64-encoded string, a Uint8Array, an ArrayBuffer, or a Buffer.
Defined in
packages/modelfusion/src/util/format/DataContent.ts:6
Delta
Ƭ Delta<T>: { deltaValue: T; type: "delta" } | { error: unknown; type: "error" }
Type parameters
Name |
---|
T |
Defined in
packages/modelfusion/src/model-function/Delta.ts:1
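`Delta<T>` distinguishes payload events from error events in a stream. A consumer can unwrap it like this (local sketch, not library code):

```typescript
// Local mirror of the documented Delta<T> union.
type Delta<T> = { type: "delta"; deltaValue: T } | { type: "error"; error: unknown };

function unwrapDelta<T>(delta: Delta<T>): T {
  // Error events surface as thrown exceptions; delta events yield the payload.
  if (delta.type === "error") {
    throw new Error(`stream error: ${String(delta.error)}`);
  }
  return delta.deltaValue;
}
```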
EmbeddingFinishedEventResult
Ƭ EmbeddingFinishedEventResult: { rawResponse: unknown; status: "success"; value: Vector | Vector[] } | { error: unknown; status: "error" } | { status: "abort" }
Defined in
packages/modelfusion/src/model-function/embed/EmbeddingEvent.ts:12
ExecuteToolMetadata
Ƭ ExecuteToolMetadata: Object
Type declaration
Name | Type |
---|---|
callId | string |
durationInMs | number |
finishTimestamp | Date |
functionId? | string |
runId? | string |
sessionId? | string |
startTimestamp | Date |
userId? | string |
Defined in
packages/modelfusion/src/tool/execute-tool/executeTool.ts:16
FlexibleObjectFromTextPromptTemplate
Ƭ FlexibleObjectFromTextPromptTemplate<SOURCE_PROMPT, INTERMEDIATE_PROMPT>: Object
Type parameters
Name |
---|
SOURCE_PROMPT |
INTERMEDIATE_PROMPT |
Type declaration
Name | Type |
---|---|
adaptModel | (model : TextStreamingModel <never > & { withChatPrompt : () => TextStreamingModel <ChatPrompt , TextGenerationModelSettings > ; withInstructionPrompt : () => TextStreamingModel <InstructionPrompt , TextGenerationModelSettings > ; withTextPrompt : () => TextStreamingModel <string , TextGenerationModelSettings > }) => TextStreamingModel <INTERMEDIATE_PROMPT > |
createPrompt | (prompt : SOURCE_PROMPT , schema : Schema <unknown > & JsonSchemaProducer ) => INTERMEDIATE_PROMPT |
extractObject | (response : string ) => unknown |
withJsonOutput? | (__namedParameters : { model : { withJsonOutput : (schema : Schema <unknown > & JsonSchemaProducer ) => { withJsonOutput : (schema : Schema <unknown > & JsonSchemaProducer ) => { withJsonOutput(schema: Schema<unknown> & JsonSchemaProducer): ...; } } } ; schema : Schema <unknown > & JsonSchemaProducer }) => { withJsonOutput : (schema : Schema <unknown > & JsonSchemaProducer ) => { withJsonOutput(schema: Schema<unknown> & JsonSchemaProducer): ...; } } |
Defined in
packages/modelfusion/src/model-function/generate-object/ObjectFromTextPromptTemplate.ts:28
FunctionCallOptions
Ƭ FunctionCallOptions: Omit<FunctionOptions, "callId"> & { callId: string; functionType: string }
Extended options that are passed to models when functions are called. They are passed into e.g. API providers to create custom headers.
Defined in
packages/modelfusion/src/core/FunctionOptions.ts:53
FunctionEvent
Ƭ FunctionEvent: ExecuteFunctionStartedEvent | ExecuteFunctionFinishedEvent | ExecuteToolStartedEvent | ExecuteToolFinishedEvent | ExtensionFunctionStartedEvent | ExtensionFunctionFinishedEvent | ModelCallStartedEvent | ModelCallFinishedEvent | RetrieveStartedEvent | RetrieveFinishedEvent | UpsertIntoVectorIndexStartedEvent | UpsertIntoVectorIndexFinishedEvent | runToolStartedEvent | runToolFinishedEvent | runToolsStartedEvent | runToolsFinishedEvent
Defined in
packages/modelfusion/src/core/FunctionEvent.ts:125
FunctionOptions
Ƭ FunctionOptions: Object
Additional settings for ModelFusion functions.
Type declaration
Name | Type | Description |
---|---|---|
cache? | Cache | Optional cache that can be used by the function to store and retrieve cached values. Not supported by all functions. |
callId? | string | Unique identifier of the call id of the parent function. Used in events and logging. It has the same name as the callId in FunctionCallOptions to allow for easy propagation of the call id. However, in the FunctionOptions , it is the call ID for the parent call, and it is optional. |
functionId? | string | Optional function identifier. Used in events and logging. |
logging? | LogFormat | Optional logging to use for the function. Logs are sent to the console. Overrides the global function logging setting. |
observers? | FunctionObserver [] | Optional observers that are called when the function is invoked. |
run? | Run | Optional run as part of which this function is called. Used in events and logging. Run callbacks are invoked when it is provided. |
Defined in
packages/modelfusion/src/core/FunctionOptions.ts:9
HeaderParameters
Ƭ HeaderParameters: Object
Type declaration
Name | Type |
---|---|
callId | string |
functionId? | string |
functionType | string |
run? | Run |
Defined in
packages/modelfusion/src/core/api/ApiConfiguration.ts:5
HuggingFaceErrorData
Ƭ HuggingFaceErrorData: Object
Type declaration
Name | Type |
---|---|
error | string | string [] & undefined | string | string [] |
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceError.ts:9
HuggingFaceTextEmbeddingResponse
Ƭ HuggingFaceTextEmbeddingResponse: number[][]
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextEmbeddingModel.ts:148
HuggingFaceTextGenerationResponse
Ƭ HuggingFaceTextGenerationResponse: { generated_text: string }[]
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:194
ImageGenerationFinishedEventResult
Ƭ ImageGenerationFinishedEventResult: { rawResponse: unknown; status: "success"; value: string } | { error: unknown; status: "error" } | { status: "abort" }
Defined in
packages/modelfusion/src/model-function/generate-image/ImageGenerationEvent.ts:10
InstructionContent
Ƭ InstructionContent: string | (TextPart | ImagePart)[]
Defined in
packages/modelfusion/src/model-function/generate-text/prompt-template/InstructionPrompt.ts:40
LlamaCppCompletionResponseFormatType
Ƭ LlamaCppCompletionResponseFormatType<T>: Object
Type parameters
Name |
---|
T |
Type declaration
Name | Type |
---|---|
handler | ResponseHandler <T > |
stream | boolean |
Defined in
packages/modelfusion/src/model-provider/llamacpp/LlamaCppCompletionModel.ts:588
LlamaCppErrorData
Ƭ LlamaCppErrorData: Object
Type declaration
Name | Type |
---|---|
error | string |
Defined in
packages/modelfusion/src/model-provider/llamacpp/LlamaCppError.ts:9
LlamaCppTextEmbeddingResponse
Ƭ LlamaCppTextEmbeddingResponse: Object
Type declaration
Name | Type |
---|---|
embedding | number [] |
Defined in
packages/modelfusion/src/model-provider/llamacpp/LlamaCppTextEmbeddingModel.ts:120
LlamaCppTextGenerationResponse
Ƭ LlamaCppTextGenerationResponse: Object
Type declaration
Name | Type |
---|---|
content | string |
generation_settings | { frequency_penalty : number ; ignore_eos : boolean ; logit_bias : number [] ; mirostat : number ; mirostat_eta : number ; mirostat_tau : number ; model : string ; n_ctx : number ; n_keep : number ; n_predict : number ; n_probs : number ; penalize_nl : boolean ; presence_penalty : number ; repeat_last_n : number ; repeat_penalty : number ; seed : number ; stop : string [] ; stream : boolean ; temperature? : number ; tfs_z : number ; top_k : number ; top_p : number ; typical_p : number } |
generation_settings.frequency_penalty | number |
generation_settings.ignore_eos | boolean |
generation_settings.logit_bias | number [] |
generation_settings.mirostat | number |
generation_settings.mirostat_eta | number |
generation_settings.mirostat_tau | number |
generation_settings.model | string |
generation_settings.n_ctx | number |
generation_settings.n_keep | number |
generation_settings.n_predict | number |
generation_settings.n_probs | number |
generation_settings.penalize_nl | boolean |
generation_settings.presence_penalty | number |
generation_settings.repeat_last_n | number |
generation_settings.repeat_penalty | number |
generation_settings.seed | number |
generation_settings.stop | string [] |
generation_settings.stream | boolean |
generation_settings.temperature? | number |
generation_settings.tfs_z | number |
generation_settings.top_k | number |
generation_settings.top_p | number |
generation_settings.typical_p | number |
model | string |
prompt | string |
stop | true |
stopped_eos | boolean |
stopped_limit | boolean |
stopped_word | boolean |
stopping_word | string |
timings | { predicted_ms : number ; predicted_n : number ; predicted_per_second : null | number ; predicted_per_token_ms : null | number ; prompt_ms? : null | number ; prompt_n : number ; prompt_per_second : null | number ; prompt_per_token_ms : null | number } |
timings.predicted_ms | number |
timings.predicted_n | number |
timings.predicted_per_second | null | number |
timings.predicted_per_token_ms | null | number |
timings.prompt_ms? | null | number |
timings.prompt_n | number |
timings.prompt_per_second | null | number |
timings.prompt_per_token_ms | null | number |
tokens_cached | number |
tokens_evaluated | number |
tokens_predicted | number |
truncated | boolean |
Defined in
packages/modelfusion/src/model-provider/llamacpp/LlamaCppCompletionModel.ts:536
LlamaCppTextStreamChunk
Ƭ LlamaCppTextStreamChunk: { content: string; generation_settings: { frequency_penalty: number; ignore_eos: boolean; logit_bias: number[]; mirostat: number; mirostat_eta: number; mirostat_tau: number; model: string; n_ctx: number; n_keep: number; n_predict: number; n_probs: number; penalize_nl: boolean; presence_penalty: number; repeat_last_n: number; repeat_penalty: number; seed: number; stop: string[]; stream: boolean; temperature?: number; tfs_z: number; top_k: number; top_p: number; typical_p: number }; model: string; prompt: string; stop: true; stopped_eos: boolean; stopped_limit: boolean; stopped_word: boolean; stopping_word: string; timings: { predicted_ms: number; predicted_n: number; predicted_per_second: null | number; predicted_per_token_ms: null | number; prompt_ms?: null | number; prompt_n: number; prompt_per_second: null | number; prompt_per_token_ms: null | number }; tokens_cached: number; tokens_evaluated: number; tokens_predicted: number; truncated: boolean } | { content: string; stop: false }
Defined in
packages/modelfusion/src/model-provider/llamacpp/LlamaCppCompletionModel.ts:548
LlamaCppTokenizationResponse
Ƭ LlamaCppTokenizationResponse: Object
Type declaration
Name | Type |
---|---|
tokens | number [] |
Defined in
packages/modelfusion/src/model-provider/llamacpp/LlamaCppTokenizer.ts:75
LmntSpeechResponse
Ƭ LmntSpeechResponse: Object
Type declaration
Name | Type |
---|---|
audio | string |
durations | { duration : number ; start : number ; text : string }[] |
seed | number |
Defined in
packages/modelfusion/src/model-provider/lmnt/LmntSpeechModel.ts:151
LogFormat
Ƭ LogFormat: undefined | "off" | "basic-text" | "detailed-object" | "detailed-json"
The logging output format to use, e.g. for functions. Logs are sent to the console.
- off or undefined: No logging.
- basic-text: Log the timestamp and the type of event as a single line of text.
- detailed-object: Log everything except the original response as an object to the console.
- detailed-json: Log everything except the original response as a JSON string to the console.
Defined in
packages/modelfusion/src/core/LogFormat.ts:10
MistralChatResponse
Ƭ MistralChatResponse: Object
Type declaration
Name | Type |
---|---|
choices | { finish_reason : "length" | "stop" | "model_length" ; index : number ; message : { content : string ; role : "user" | "assistant" } }[] |
created | number |
id | string |
model | string |
object | string |
usage | { completion_tokens : number ; prompt_tokens : number ; total_tokens : number } |
usage.completion_tokens | number |
usage.prompt_tokens | number |
usage.total_tokens | number |
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:267
MistralChatResponseFormatType
Ƭ MistralChatResponseFormatType<T>: Object
Type parameters
Name |
---|
T |
Type declaration
Name | Type |
---|---|
handler | ResponseHandler <T > |
stream | boolean |
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:293
MistralChatStreamChunk
Ƭ MistralChatStreamChunk: Object
Type declaration
Name | Type |
---|---|
choices | { delta : { content? : null | string ; role? : null | "user" | "assistant" } ; finish_reason? : null | "length" | "stop" | "model_length" ; index : number }[] |
created? | number |
id | string |
model | string |
object? | string |
Defined in
packages/modelfusion/src/model-provider/mistral/MistralChatModel.ts:289
MistralErrorData
Ƭ MistralErrorData: Object
Type declaration
Name | Type |
---|---|
code | string |
message | string |
object | "error" |
param | null | string |
type | string |
Defined in
packages/modelfusion/src/model-provider/mistral/MistralError.ts:17
MistralTextEmbeddingResponse
Ƭ MistralTextEmbeddingResponse: Object
Type declaration
Name | Type |
---|---|
data | { embedding : number [] ; index : number ; object : string }[] |
id | string |
model | string |
object | string |
usage | { prompt_tokens : number ; total_tokens : number } |
usage.prompt_tokens | number |
usage.total_tokens | number |
Defined in
packages/modelfusion/src/model-provider/mistral/MistralTextEmbeddingModel.ts:138
ModelCallFinishedEvent
Ƭ ModelCallFinishedEvent: ClassifyFinishedEvent | EmbeddingFinishedEvent | ImageGenerationFinishedEvent | SpeechGenerationFinishedEvent | SpeechStreamingFinishedEvent | ObjectGenerationFinishedEvent | ObjectStreamingFinishedEvent | TextGenerationFinishedEvent | TextStreamingFinishedEvent | ToolCallGenerationFinishedEvent | ToolCallsGenerationFinishedEvent | TranscriptionFinishedEvent
Defined in
packages/modelfusion/src/model-function/ModelCallEvent.ts:117
ModelCallMetadata
Ƭ ModelCallMetadata: Object
Type declaration
Name | Type |
---|---|
callId | string |
durationInMs | number |
finishTimestamp | Date |
functionId? | string |
model | ModelInformation |
runId? | string |
sessionId? | string |
startTimestamp | Date |
usage? | unknown |
userId? | string |
Defined in
packages/modelfusion/src/model-function/ModelCallMetadata.ts:3
ModelCallStartedEvent
Ƭ ModelCallStartedEvent: ClassifyStartedEvent | EmbeddingStartedEvent | ImageGenerationStartedEvent | SpeechGenerationStartedEvent | SpeechStreamingStartedEvent | ObjectGenerationStartedEvent | ObjectStreamingStartedEvent | TextGenerationStartedEvent | TextStreamingStartedEvent | ToolCallGenerationStartedEvent | ToolCallsGenerationStartedEvent | TranscriptionStartedEvent
Defined in
packages/modelfusion/src/model-function/ModelCallEvent.ts:103
ModelInformation
Ƭ ModelInformation: Object
Type declaration
Name | Type |
---|---|
modelName | string | null |
provider | string |
Defined in
packages/modelfusion/src/model-function/ModelInformation.ts:1
ObjectFromTextPromptTemplate
Ƭ ObjectFromTextPromptTemplate<SOURCE_PROMPT, TARGET_PROMPT>: Object
Type parameters
Name |
---|
SOURCE_PROMPT |
TARGET_PROMPT |
Type declaration
Name | Type |
---|---|
createPrompt | (prompt : SOURCE_PROMPT , schema : Schema <unknown > & JsonSchemaProducer ) => TARGET_PROMPT |
extractObject | (response : string ) => unknown |
withJsonOutput? | (__namedParameters : { model : { withJsonOutput : (schema : Schema <unknown > & JsonSchemaProducer ) => { withJsonOutput : (schema : Schema <unknown > & JsonSchemaProducer ) => { withJsonOutput(schema: Schema<unknown> & JsonSchemaProducer): ...; } } } ; schema : Schema <unknown > & JsonSchemaProducer }) => { withJsonOutput : (schema : Schema <unknown > & JsonSchemaProducer ) => { withJsonOutput(schema: Schema<unknown> & JsonSchemaProducer): ...; } } |
Defined in
packages/modelfusion/src/model-function/generate-object/ObjectFromTextPromptTemplate.ts:7
ObjectGenerationFinishedEvent
Ƭ ObjectGenerationFinishedEvent: BaseModelCallFinishedEvent & { functionType: "generate-object"; result: ObjectGenerationFinishedEventResult }
Defined in
packages/modelfusion/src/model-function/generate-object/ObjectGenerationEvent.ts:26
ObjectGenerationFinishedEventResult
Ƭ ObjectGenerationFinishedEventResult: { rawResponse: unknown; status: "success"; usage?: { completionTokens: number; promptTokens: number; totalTokens: number }; value: unknown } | { error: unknown; status: "error" } | { status: "abort" }
Defined in
packages/modelfusion/src/model-function/generate-object/ObjectGenerationEvent.ts:11
ObjectStream
Ƭ ObjectStream<OBJECT>: AsyncIterable<{ partialObject: PartialDeep<OBJECT, { recurseIntoArrays: true }>; partialText: string; textDelta: string }>
Type parameters
Name |
---|
OBJECT |
Defined in
packages/modelfusion/src/model-function/generate-object/ObjectStream.ts:5
OllamaChatResponse
Ƭ OllamaChatResponse: Object
Type declaration
Name | Type |
---|---|
created_at | string |
done | true |
eval_count | number |
eval_duration | number |
load_duration? | number |
message | { content : string ; role : string } |
message.content | string |
message.role | string |
model | string |
prompt_eval_count? | number |
prompt_eval_duration? | number |
total_duration | number |
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:286
OllamaChatResponseFormatType
Ƭ OllamaChatResponseFormatType<T>: Object
Type parameters
Name |
---|
T |
Type declaration
Name | Type |
---|---|
handler | ResponseHandler <T > |
stream | boolean |
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:313
OllamaChatStreamChunk
Ƭ OllamaChatStreamChunk: { created_at: string; done: false; message: { content: string; role: string }; model: string } | { created_at: string; done: true; eval_count: number; eval_duration: number; load_duration?: number; model: string; prompt_eval_count?: number; prompt_eval_duration?: number; total_duration: number }
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaChatModel.ts:311
OllamaCompletionResponse
Ƭ OllamaCompletionResponse: Object
Type declaration
Name | Type |
---|---|
context? | number [] |
created_at | string |
done | true |
eval_count | number |
eval_duration | number |
load_duration? | number |
model | string |
prompt_eval_count? | number |
prompt_eval_duration? | number |
response | string |
total_duration | number |
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaCompletionModel.ts:361
OllamaCompletionResponseFormatType
Ƭ OllamaCompletionResponseFormatType<T>: Object
Type parameters
Name |
---|
T |
Type declaration
Name | Type |
---|---|
handler | ResponseHandler <T > |
stream | boolean |
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaCompletionModel.ts:392
OllamaCompletionStreamChunk
Ƭ OllamaCompletionStreamChunk: { created_at: string ; done: false ; model: string ; response: string } | { context?: number[] ; created_at: string ; done: true ; eval_count: number ; eval_duration: number ; load_duration?: number ; model: string ; prompt_eval_count?: number ; prompt_eval_duration?: number ; sample_count?: number ; sample_duration?: number ; total_duration: number }
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaCompletionModel.ts:388
OllamaErrorData
Ƭ OllamaErrorData: Object
Type declaration
Name | Type |
---|---|
error | string |
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaError.ts:13
OllamaTextEmbeddingResponse
Ƭ OllamaTextEmbeddingResponse: Object
Type declaration
Name | Type |
---|---|
embedding | number [] |
Defined in
packages/modelfusion/src/model-provider/ollama/OllamaTextEmbeddingModel.ts:112
OpenAIChatBaseModelType
Ƭ OpenAIChatBaseModelType: keyof typeof CHAT_MODEL_CONTEXT_WINDOW_SIZES
Defined in
packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:94
OpenAIChatChunk
Ƭ OpenAIChatChunk: Object
Type declaration
Name | Type |
---|---|
choices | { delta : { content? : null | string ; function_call? : { arguments? : string ; name? : string } ; role? : "user" | "assistant" ; tool_calls? : { function : { arguments : string ; name : string } ; id : string ; type : "function" }[] } ; finish_reason? : null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index : number }[] |
created | number |
id | string |
model? | string |
object | string |
system_fingerprint? | null | string |
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:442
OpenAIChatModelType
Ƭ OpenAIChatModelType: OpenAIChatBaseModelType | FineTunedOpenAIChatModelType
Defined in
packages/modelfusion/src/model-provider/openai/OpenAIChatModel.ts:97
OpenAIChatResponse
Ƭ OpenAIChatResponse: Object
Type declaration
Name | Type |
---|---|
choices | { finish_reason? : null | "length" | "stop" | "function_call" | "tool_calls" | "content_filter" ; index? : number ; logprobs? : any ; message : { content : null | string ; function_call? : { arguments : string ; name : string } ; role : "assistant" ; tool_calls? : { function : { arguments : string ; name : string } ; id : string ; type : "function" }[] } }[] |
created | number |
id | string |
model | string |
object | "chat.completion" |
system_fingerprint? | null | string |
usage | { completion_tokens : number ; prompt_tokens : number ; total_tokens : number } |
usage.completion_tokens | number |
usage.prompt_tokens | number |
usage.total_tokens | number |
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:395
OpenAIChatResponseFormatType
Ƭ OpenAIChatResponseFormatType<T>: Object
Type parameters
Name |
---|
T |
Type declaration
Name | Type |
---|---|
handler | ResponseHandler <T > |
stream | boolean |
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAIChatModel.ts:444
OpenAICompatibleProviderName
Ƭ OpenAICompatibleProviderName: "openaicompatible" | `openaicompatible-${string}`
Defined in
packages/modelfusion/src/model-provider/openai-compatible/OpenAICompatibleApiConfiguration.ts:3
OpenAICompletionModelType
Ƭ OpenAICompletionModelType: keyof typeof OPENAI_TEXT_GENERATION_MODELS
Defined in
packages/modelfusion/src/model-provider/openai/OpenAICompletionModel.ts:37
OpenAICompletionResponse
Ƭ OpenAICompletionResponse: Object
Type declaration
Name | Type |
---|---|
choices | { finish_reason? : null | "length" | "stop" | "content_filter" ; index : number ; logprobs? : any ; text : string }[] |
created | number |
id | string |
model | string |
object | "text_completion" |
system_fingerprint? | string |
usage | { completion_tokens : number ; prompt_tokens : number ; total_tokens : number } |
usage.completion_tokens | number |
usage.prompt_tokens | number |
usage.total_tokens | number |
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:207
OpenAIErrorData
Ƭ OpenAIErrorData: Object
Type declaration
Name | Type |
---|---|
error | { code : null | string ; message : string ; param? : any ; type : string } |
error.code | null | string |
error.message | string |
error.param? | any |
error.type | string |
Defined in
packages/modelfusion/src/model-provider/openai/OpenAIError.ts:14
OpenAIImageGenerationBase64JsonResponse
Ƭ OpenAIImageGenerationBase64JsonResponse: Object
Type declaration
Name | Type |
---|---|
created | number |
data | { b64_json : string }[] |
Defined in
packages/modelfusion/src/model-provider/openai/OpenAIImageGenerationModel.ts:170
OpenAIImageGenerationResponseFormatType
Ƭ OpenAIImageGenerationResponseFormatType<T>: Object
Type parameters
Name |
---|
T |
Type declaration
Name | Type |
---|---|
handler | ResponseHandler <T > |
type | "b64_json" | "url" |
Defined in
packages/modelfusion/src/model-provider/openai/OpenAIImageGenerationModel.ts:143
OpenAIImageGenerationUrlResponse
Ƭ OpenAIImageGenerationUrlResponse: Object
Type declaration
Name | Type |
---|---|
created | number |
data | { url : string }[] |
Defined in
packages/modelfusion/src/model-provider/openai/OpenAIImageGenerationModel.ts:157
OpenAISpeechModelType
Ƭ OpenAISpeechModelType: "tts-1" | "tts-1-hd"
Defined in
packages/modelfusion/src/model-provider/openai/OpenAISpeechModel.ts:25
OpenAISpeechVoice
Ƭ OpenAISpeechVoice: "alloy" | "echo" | "fable" | "onyx" | "nova" | "shimmer"
Defined in
packages/modelfusion/src/model-provider/openai/OpenAISpeechModel.ts:16
OpenAITextEmbeddingModelType
Ƭ OpenAITextEmbeddingModelType: keyof typeof OPENAI_TEXT_EMBEDDING_MODELS
Defined in
packages/modelfusion/src/model-provider/openai/OpenAITextEmbeddingModel.ts:26
OpenAITextEmbeddingResponse
Ƭ OpenAITextEmbeddingResponse: Object
Type declaration
Name | Type |
---|---|
data | { embedding : number [] ; index : number ; object : "embedding" }[] |
model | string |
object | "list" |
usage? | { prompt_tokens : number ; total_tokens : number } |
usage.prompt_tokens | number |
usage.total_tokens | number |
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAITextEmbeddingModel.ts:114
OpenAITextResponseFormatType
Ƭ OpenAITextResponseFormatType<T>: Object
Type parameters
Name |
---|
T |
Type declaration
Name | Type |
---|---|
handler | ResponseHandler <T > |
stream | boolean |
Defined in
packages/modelfusion/src/model-provider/openai/AbstractOpenAICompletionModel.ts:233
OpenAITranscriptionJsonResponse
Ƭ OpenAITranscriptionJsonResponse: Object
Type declaration
Name | Type |
---|---|
text | string |
Defined in
packages/modelfusion/src/model-provider/openai/OpenAITranscriptionModel.ts:193
OpenAITranscriptionResponseFormatType
Ƭ OpenAITranscriptionResponseFormatType<T>: Object
Type parameters
Name |
---|
T |
Type declaration
Name | Type |
---|---|
handler | ResponseHandler <T > |
type | "json" | "text" | "srt" | "verbose_json" | "vtt" |
Defined in
packages/modelfusion/src/model-provider/openai/OpenAITranscriptionModel.ts:223
OpenAITranscriptionVerboseJsonResponse
Ƭ OpenAITranscriptionVerboseJsonResponse: Object
Type declaration
Name | Type |
---|---|
duration | number |
language | string |
segments | { avg_logprob : number ; compression_ratio : number ; end : number ; id : number ; no_speech_prob : number ; seek : number ; start : number ; temperature : number ; text : string ; tokens : number [] ; transient? : boolean }[] |
task | "transcribe" |
text | string |
Defined in
packages/modelfusion/src/model-provider/openai/OpenAITranscriptionModel.ts:219
PartialBaseUrlPartsApiConfigurationOptions
Ƭ PartialBaseUrlPartsApiConfigurationOptions: Omit<BaseUrlPartsApiConfigurationOptions, "baseUrl"> & { baseUrl?: string | Partial<UrlParts> }
Defined in
packages/modelfusion/src/core/api/BaseUrlApiConfiguration.ts:64
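This means the `baseUrl` option accepts either a full URL string or just the parts to override. A hypothetical `resolveBaseUrl` helper (not part of modelfusion) sketches how such an option can be normalized against defaults:

```typescript
// Local redeclaration of UrlParts for illustration.
type UrlParts = { host: string; path: string; port: string; protocol: string };

// Hypothetical helper: turns the permissive `baseUrl` option (a full URL
// string or a subset of UrlParts) into a URL string, filling unspecified
// parts from provider defaults.
function resolveBaseUrl(
  baseUrl: string | Partial<UrlParts> | undefined,
  defaults: UrlParts
): string {
  if (typeof baseUrl === "string") return baseUrl; // full URL passes through
  const parts = { ...defaults, ...baseUrl }; // merge overrides onto defaults
  return `${parts.protocol}://${parts.host}:${parts.port}${parts.path}`;
}
```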
PromptFunction
Ƭ PromptFunction<INPUT, PROMPT>: () => PromiseLike<{ input: INPUT ; prompt: PROMPT }> & { [promptFunctionMarker]: true }
Type parameters
Name |
---|
INPUT |
PROMPT |
Defined in
packages/modelfusion/src/core/PromptFunction.ts:1
RetryErrorReason
Ƭ RetryErrorReason: "maxTriesExceeded" | "errorNotRetryable" | "abort"
Defined in
packages/modelfusion/src/core/api/RetryError.ts:1
RetryFunction
Ƭ RetryFunction: <OUTPUT>(fn: () => PromiseLike<OUTPUT>) => PromiseLike<OUTPUT>
Type declaration
▸ <OUTPUT>(fn): PromiseLike<OUTPUT>
Type parameters
Name |
---|
OUTPUT |
Parameters
Name | Type |
---|---|
fn | () => PromiseLike <OUTPUT > |
Returns
PromiseLike<OUTPUT>
Defined in
packages/modelfusion/src/core/api/RetryFunction.ts:1
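Any function of this shape can serve as a retry strategy. A minimal sketch (modelfusion ships its own strategies such as `retryWithExponentialBackoff`; the `retryUpTo` factory here is purely illustrative):

```typescript
type RetryFunction = <OUTPUT>(fn: () => PromiseLike<OUTPUT>) => PromiseLike<OUTPUT>;

// A minimal RetryFunction: retries up to maxTries times, rethrowing the
// last error if every attempt fails. No backoff, for brevity.
const retryUpTo = (maxTries: number): RetryFunction =>
  async <OUTPUT>(fn: () => PromiseLike<OUTPUT>): Promise<OUTPUT> => {
    let lastError: unknown;
    for (let attempt = 1; attempt <= maxTries; attempt++) {
      try {
        return await fn();
      } catch (error) {
        lastError = error;
      }
    }
    throw lastError;
  };
```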
SpeechGenerationFinishedEventResult
Ƭ SpeechGenerationFinishedEventResult: { rawResponse: unknown ; status: "success" ; value: Uint8Array } | { error: unknown ; status: "error" } | { status: "abort" }
Defined in
packages/modelfusion/src/model-function/generate-speech/SpeechGenerationEvent.ts:12
SplitFunction
Ƭ SplitFunction: (input: { text: string }, options?: FunctionOptions) => PromiseLike<string[]>
Type declaration
▸ (input, options?): PromiseLike<string[]>
Parameters
Name | Type |
---|---|
input | Object |
input.text | string |
options? | FunctionOptions |
Returns
PromiseLike<string[]>
Defined in
packages/modelfusion/src/text-chunk/split/SplitFunction.ts:3
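A `SplitFunction` maps a text chunk to an array of smaller strings. A minimal sketch with `FunctionOptions` stubbed out locally (modelfusion provides its own splitters; this fixed-window variant is only illustrative):

```typescript
type FunctionOptions = Record<string, unknown>; // stand-in for the real FunctionOptions

type SplitFunction = (
  input: { text: string },
  options?: FunctionOptions
) => PromiseLike<string[]>;

// A SplitFunction that chunks text into fixed-size character windows.
const splitByCharacterCount =
  (maxChars: number): SplitFunction =>
  async ({ text }) => {
    const chunks: string[] = [];
    for (let i = 0; i < text.length; i += maxChars) {
      chunks.push(text.slice(i, i + maxChars));
    }
    return chunks;
  };
```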
StabilityClipGuidancePreset
Ƭ StabilityClipGuidancePreset: "FAST_BLUE" | "FAST_GREEN" | "NONE" | "SIMPLE" | "SLOW" | "SLOWER" | "SLOWEST"
Defined in
packages/modelfusion/src/model-provider/stability/StabilityImageGenerationModel.ts:67
StabilityErrorData
Ƭ StabilityErrorData: Object
Type declaration
Name | Type |
---|---|
message | string |
Defined in
packages/modelfusion/src/model-provider/stability/StabilityError.ts:13
StabilityImageGenerationModelType
Ƭ StabilityImageGenerationModelType: typeof stabilityImageGenerationModels[number] | string & {}
Defined in
packages/modelfusion/src/model-provider/stability/StabilityImageGenerationModel.ts:29
StabilityImageGenerationPrompt
Ƭ StabilityImageGenerationPrompt: { text: string ; weight?: number }[]
Defined in
packages/modelfusion/src/model-provider/stability/StabilityImageGenerationPrompt.ts:3
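Each prompt entry can carry a weight; in the Stability API, larger positive weights emphasize a phrase, and negative weights are commonly used for negative prompting. A sketch with the type redeclared locally:

```typescript
type StabilityImageGenerationPrompt = { text: string; weight?: number }[];

// A weighted prompt: the first entry is emphasized, the second entry
// (negative weight) discourages blurry, low-quality output.
const prompt: StabilityImageGenerationPrompt = [
  { text: "a painting of a lighthouse at dawn", weight: 1 },
  { text: "blurry, low quality", weight: -0.7 },
];
```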
StabilityImageGenerationResponse
Ƭ StabilityImageGenerationResponse: Object
Type declaration
Name | Type |
---|---|
artifacts | { base64 : string ; finishReason : "ERROR" | "SUCCESS" | "CONTENT_FILTERED" ; seed : number }[] |
Defined in
packages/modelfusion/src/model-provider/stability/StabilityImageGenerationModel.ts:256
StabilityImageGenerationSampler
Ƭ StabilityImageGenerationSampler: "DDIM" | "DDPM" | "K_DPMPP_2M" | "K_DPMPP_2S_ANCESTRAL" | "K_DPM_2" | "K_DPM_2_ANCESTRAL" | "K_EULER" | "K_EULER_ANCESTRAL" | "K_HEUN" | "K_LMS"
Defined in
packages/modelfusion/src/model-provider/stability/StabilityImageGenerationModel.ts:55
StabilityImageGenerationStylePreset
Ƭ StabilityImageGenerationStylePreset: "3d-model" | "analog-film" | "anime" | "cinematic" | "comic-book" | "digital-art" | "enhance" | "fantasy-art" | "isometric" | "line-art" | "low-poly" | "modeling-compound" | "neon-punk" | "origami" | "photographic" | "pixel-art" | "tile-texture"
Defined in
packages/modelfusion/src/model-provider/stability/StabilityImageGenerationModel.ts:36
TextChunk
Ƭ TextChunk: Object
Type declaration
Name | Type |
---|---|
text | string |
Defined in
packages/modelfusion/src/text-chunk/TextChunk.ts:1
TextGenerationFinishReason
Ƭ TextGenerationFinishReason: "stop" | "length" | "content-filter" | "tool-calls" | "error" | "other" | "unknown"
Defined in
packages/modelfusion/src/model-function/generate-text/TextGenerationResult.ts:13
TextGenerationFinishedEventResult
Ƭ TextGenerationFinishedEventResult: { rawResponse: unknown ; status: "success" ; usage?: { completionTokens: number ; promptTokens: number ; totalTokens: number } ; value: string } | { error: unknown ; status: "error" } | { status: "abort" }
Defined in
packages/modelfusion/src/model-function/generate-text/TextGenerationEvent.ts:10
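The result is discriminated on `status`, so a `switch` narrows each branch. A sketch with the type redeclared locally and a hypothetical `summarize` helper:

```typescript
type TextGenerationFinishedEventResult =
  | {
      rawResponse: unknown;
      status: "success";
      usage?: { completionTokens: number; promptTokens: number; totalTokens: number };
      value: string;
    }
  | { error: unknown; status: "error" }
  | { status: "abort" };

// Narrowing on the `status` discriminator: each case sees only its own fields.
function summarize(result: TextGenerationFinishedEventResult): string {
  switch (result.status) {
    case "success":
      return (
        `generated ${result.value.length} chars` +
        (result.usage ? ` (${result.usage.totalTokens} tokens)` : "")
      );
    case "error":
      return `failed: ${String(result.error)}`;
    case "abort":
      return "aborted";
  }
}
```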
TextGenerationResult
Ƭ TextGenerationResult: Object
Type declaration
Name | Type | Description |
---|---|---|
finishReason | TextGenerationFinishReason | The reason why the generation stopped. |
text | string | The generated text. |
Defined in
packages/modelfusion/src/model-function/generate-text/TextGenerationResult.ts:1
ThrottleFunction
Ƭ ThrottleFunction: <OUTPUT>(fn: () => PromiseLike<OUTPUT>) => PromiseLike<OUTPUT>
Type declaration
▸ <OUTPUT>(fn): PromiseLike<OUTPUT>
Type parameters
Name |
---|
OUTPUT |
Parameters
Name | Type |
---|---|
fn | () => PromiseLike <OUTPUT > |
Returns
PromiseLike<OUTPUT>
Defined in
packages/modelfusion/src/core/api/ThrottleFunction.ts:1
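A `ThrottleFunction` has the same shape as `RetryFunction` and is typically used to limit concurrent API calls. A minimal sketch that serializes calls one at a time (modelfusion's own strategies are more general; `serialThrottle` is illustrative):

```typescript
type ThrottleFunction = <OUTPUT>(fn: () => PromiseLike<OUTPUT>) => PromiseLike<OUTPUT>;

// A ThrottleFunction that serializes calls: each call starts only after
// the previous one has settled.
const serialThrottle = (): ThrottleFunction => {
  let tail: Promise<unknown> = Promise.resolve();
  return <OUTPUT>(fn: () => PromiseLike<OUTPUT>): PromiseLike<OUTPUT> => {
    // Chain onto the tail regardless of whether the previous call failed.
    const next = tail.then(() => fn(), () => fn());
    // Track settlement only; swallow errors so the chain never breaks.
    tail = next.then(() => undefined, () => undefined);
    return next;
  };
};
```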
TikTokenTokenizerSettings
Ƭ TikTokenTokenizerSettings: Object
Type declaration
Name | Type |
---|---|
model | OpenAIChatBaseModelType | OpenAICompletionModelType | OpenAITextEmbeddingModelType |
Defined in
packages/modelfusion/src/model-provider/openai/TikTokenTokenizer.ts:9
ToolCallGenerationFinishedEvent
Ƭ ToolCallGenerationFinishedEvent: BaseModelCallFinishedEvent & { functionType: "generate-tool-call" ; result: ToolCallGenerationFinishedEventResult }
Defined in
packages/modelfusion/src/tool/generate-tool-call/ToolCallGenerationEvent.ts:26
ToolCallGenerationFinishedEventResult
Ƭ ToolCallGenerationFinishedEventResult: { rawResponse: unknown ; status: "success" ; usage?: { completionTokens: number ; promptTokens: number ; totalTokens: number } ; value: unknown } | { error: unknown ; status: "error" } | { status: "abort" }
Defined in
packages/modelfusion/src/tool/generate-tool-call/ToolCallGenerationEvent.ts:11
ToolCallResult
Ƭ ToolCallResult<NAME, PARAMETERS, RETURN_TYPE>: { args: PARAMETERS ; tool: NAME ; toolCall: ToolCall<NAME, PARAMETERS> } & ({ ok: true ; result: RETURN_TYPE } | { ok: false ; result: ToolCallError })
Type parameters
Name | Type |
---|---|
NAME | extends string |
PARAMETERS | PARAMETERS |
RETURN_TYPE | RETURN_TYPE |
Defined in
packages/modelfusion/src/tool/ToolCallResult.ts:4
ToolCallsGenerationFinishedEvent
Ƭ ToolCallsGenerationFinishedEvent: BaseModelCallFinishedEvent & { functionType: "generate-tool-calls" ; result: ToolCallsGenerationFinishedEventResult }
Defined in
packages/modelfusion/src/tool/generate-tool-calls/ToolCallsGenerationEvent.ts:26
ToolCallsGenerationFinishedEventResult
Ƭ ToolCallsGenerationFinishedEventResult: { rawResponse: unknown ; status: "success" ; usage?: { completionTokens: number ; promptTokens: number ; totalTokens: number } ; value: unknown } | { error: unknown ; status: "error" } | { status: "abort" }
Defined in
packages/modelfusion/src/tool/generate-tool-calls/ToolCallsGenerationEvent.ts:11
ToolContent
Ƭ ToolContent: ToolResponsePart[]
Defined in
packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:42
TranscriptionFinishedEventResult
Ƭ TranscriptionFinishedEventResult: { rawResponse: unknown ; status: "success" ; value: string } | { error: unknown ; status: "error" } | { status: "abort" }
Defined in
packages/modelfusion/src/model-function/generate-transcription/TranscriptionEvent.ts:10
UrlParts
Ƭ UrlParts: Object
Type declaration
Name | Type |
---|---|
host | string |
path | string |
port | string |
protocol | string |
Defined in
packages/modelfusion/src/core/api/BaseUrlApiConfiguration.ts:6
UserContent
Ƭ UserContent: string | (TextPart | ImagePart)[]
Defined in
packages/modelfusion/src/model-function/generate-text/prompt-template/ChatPrompt.ts:40
Vector
Ƭ Vector: number[]
A vector is an array of numbers. It is used, for example, to represent a text as a vector of embeddings.
Defined in
packages/modelfusion/src/core/Vector.ts:5
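Cosine similarity over such vectors is the typical operation when ranking entries in a vector index. A minimal sketch:

```typescript
type Vector = number[];

// Cosine similarity between two embedding vectors: the dot product
// divided by the product of the vector magnitudes, in [-1, 1].
function cosineSimilarity(a: Vector, b: Vector): number {
  if (a.length !== b.length) throw new Error("vectors must have the same length");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```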
WebSearchToolInput
Ƭ WebSearchToolInput: Object
Type declaration
Name | Type |
---|---|
query | string |
Defined in
packages/modelfusion/src/tool/WebSearchTool.ts:27
WebSearchToolOutput
Ƭ WebSearchToolOutput: Object
Type declaration
Name | Type |
---|---|
results | { link : string ; snippet : string ; title : string }[] |