Class: HuggingFaceTextGenerationModel
Creates a text generation model that calls a Hugging Face Inference API text generation task.

See https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task
Example

```ts
// Assumption: these are top-level exports of the "modelfusion" package.
import {
  HuggingFaceTextGenerationModel,
  generateText,
  retryWithExponentialBackoff,
} from "modelfusion";

const model = new HuggingFaceTextGenerationModel({
  model: "tiiuae/falcon-7b",
  temperature: 0.7,
  maxGenerationTokens: 500,
  retry: retryWithExponentialBackoff({ maxTries: 5 }),
});

const text = await generateText(
  model,
  "Write a short story about a robot learning to love:\n\n"
);
```
Hierarchy

- AbstractModel<HuggingFaceTextGenerationModelSettings>

  ↳ HuggingFaceTextGenerationModel

Implements

- TextGenerationModel
Accessors
modelInformation
• get modelInformation(): ModelInformation

Returns

ModelInformation
Implementation of
TextGenerationModel.modelInformation
Inherited from
AbstractModel.modelInformation
Defined in
packages/modelfusion/src/model-function/AbstractModel.ts:17
modelName
• get modelName(): string
Returns
string
Overrides
AbstractModel.modelName
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:64
settingsForEvent
• get settingsForEvent(): Partial<HuggingFaceTextGenerationModelSettings>

Returns settings that should be recorded in observability events. Security-related settings (e.g. API keys) should not be included here.

Returns

Partial<HuggingFaceTextGenerationModelSettings>
Implementation of
TextGenerationModel.settingsForEvent
Overrides
AbstractModel.settingsForEvent
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:115
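For illustration, a minimal sketch (not from the source) of inspecting the event settings of a configured model, reusing the settings shown in the class example above:

```ts
const model = new HuggingFaceTextGenerationModel({
  model: "tiiuae/falcon-7b",
  temperature: 0.7,
  maxGenerationTokens: 500,
});

// Logs a partial settings object (e.g. temperature, maxGenerationTokens);
// API keys are never included in the event settings.
console.log(model.settingsForEvent);
```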
Constructors
constructor
• new HuggingFaceTextGenerationModel(settings): HuggingFaceTextGenerationModel
Parameters
Name | Type |
---|---|
settings | HuggingFaceTextGenerationModelSettings |
Returns
HuggingFaceTextGenerationModel
Overrides
AbstractModel<HuggingFaceTextGenerationModelSettings>.constructor
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:59
Methods
callAPI
▸ callAPI(prompt, callOptions): Promise<{ generated_text: string }[]>
Parameters
Name | Type |
---|---|
prompt | string |
callOptions | FunctionCallOptions |
Returns
Promise<{ generated_text: string }[]>
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:72
doGenerateTexts
▸ doGenerateTexts(prompt, options): Promise<{ rawResponse: { generated_text: string }[] ; textGenerationResults: { finishReason: "unknown" ; text: string = response.generated_text }[] }>
Parameters
Name | Type |
---|---|
prompt | string |
options | FunctionCallOptions |
Returns
Promise<{ rawResponse: { generated_text: string }[] ; textGenerationResults: { finishReason: "unknown" ; text: string = response.generated_text }[] }>
Implementation of
TextGenerationModel.doGenerateTexts
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:134
processTextGenerationResponse
▸ processTextGenerationResponse(rawResponse): Object
Parameters
Name | Type |
---|---|
rawResponse | { generated_text : string }[] |
Returns
Object
Name | Type |
---|---|
rawResponse | { generated_text : string }[] |
textGenerationResults | { finishReason : "unknown" ; text : string = response.generated_text }[] |
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:149
restoreGeneratedTexts
▸ restoreGeneratedTexts(rawResponse): Object
Parameters
Name | Type |
---|---|
rawResponse | unknown |
Returns
Object
Name | Type |
---|---|
rawResponse | { generated_text : string }[] |
textGenerationResults | { finishReason : "unknown" ; text : string = response.generated_text }[] |
Implementation of
TextGenerationModel.restoreGeneratedTexts
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:140
withJsonOutput
▸ withJsonOutput(): this
When possible, limits output generation to the specified JSON schema, or supersets of it (e.g. JSON in general).
Returns
this
Implementation of
TextGenerationModel.withJsonOutput
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:161
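Illustrative only: because the method returns `this`, it can be chained directly when configuring the model. Whether the Hugging Face backend actually constrains the output is not stated here.

```ts
// Sketch: request JSON-oriented output where the provider supports it.
const jsonCapableModel = new HuggingFaceTextGenerationModel({
  model: "tiiuae/falcon-7b",
  maxGenerationTokens: 500,
}).withJsonOutput();
```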
withPromptTemplate
▸ withPromptTemplate<INPUT_PROMPT>(promptTemplate): PromptTemplateTextGenerationModel<INPUT_PROMPT, string, HuggingFaceTextGenerationModelSettings, HuggingFaceTextGenerationModel>
Type parameters
Name |
---|
INPUT_PROMPT |
Parameters
Name | Type |
---|---|
promptTemplate | TextGenerationPromptTemplate <INPUT_PROMPT , string > |
Returns
PromptTemplateTextGenerationModel<INPUT_PROMPT, string, HuggingFaceTextGenerationModelSettings, HuggingFaceTextGenerationModel>
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:165
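A minimal sketch (not from the source) of mapping a structured prompt onto the raw string prompt this model expects, reusing the `model` from the class example above. The template object shape (a format function plus a stopSequences array) is an assumption about TextGenerationPromptTemplate, and the InstructionPrompt type is hypothetical:

```ts
// Hypothetical structured prompt type for illustration.
type InstructionPrompt = { instruction: string };

// Assumption: a TextGenerationPromptTemplate provides format() and stopSequences.
const instructionTemplate = {
  format: (prompt: InstructionPrompt) =>
    `Instruction: ${prompt.instruction}\n\nResponse:\n`,
  stopSequences: [],
};

const promptedModel = model.withPromptTemplate(instructionTemplate);

// The wrapped model now accepts InstructionPrompt objects instead of raw strings.
const story = await generateText(
  promptedModel,
  { instruction: "Write a short story about a robot learning to love." }
);
```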
withSettings
▸ withSettings(additionalSettings): HuggingFaceTextGenerationModel

The withSettings method creates a new model with the same configuration as the original model, but with the specified settings changed.
Parameters
Name | Type |
---|---|
additionalSettings | Partial <HuggingFaceTextGenerationModelSettings > |
Returns
HuggingFaceTextGenerationModel
Example

```ts
const model = new OpenAICompletionModel({
  model: "gpt-3.5-turbo-instruct",
  maxGenerationTokens: 500,
});

const modelWithMoreTokens = model.withSettings({
  maxGenerationTokens: 1000,
});
```
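The inherited example above uses OpenAICompletionModel; the same pattern applies to this class. A hedged sketch, assuming the settings shown elsewhere on this page:

```ts
// Derive a Hugging Face model variant with a larger token budget.
const hfModel = new HuggingFaceTextGenerationModel({
  model: "tiiuae/falcon-7b",
  maxGenerationTokens: 500,
});

const hfModelWithMoreTokens = hfModel.withSettings({
  maxGenerationTokens: 1000,
});
```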
Implementation of
TextGenerationModel.withSettings
Overrides
AbstractModel.withSettings
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:179
Properties
contextWindowSize
• Readonly contextWindowSize: undefined = undefined
Implementation of
TextGenerationModel.contextWindowSize
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:68
countPromptTokens
• Readonly countPromptTokens: undefined = undefined
Optional. Implement if you have a tokenizer and want to count the number of tokens in a prompt.
Implementation of
TextGenerationModel.countPromptTokens
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:70
provider
• Readonly provider: "huggingface"
Overrides
AbstractModel.provider
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:63
settings
• Readonly settings: HuggingFaceTextGenerationModelSettings

Implementation of

TextGenerationModel.settings
Inherited from
AbstractModel.settings
Defined in
packages/modelfusion/src/model-function/AbstractModel.ts:7
tokenizer
• Readonly tokenizer: undefined = undefined

Implementation of

TextGenerationModel.tokenizer
Defined in
packages/modelfusion/src/model-provider/huggingface/HuggingFaceTextGenerationModel.ts:69