# Helicone

## Setup
- You can get an API key from Helicone.
- The API key can be configured as an environment variable (`HELICONE_API_KEY`) or passed as an option to the API configuration constructor.
- You can explore the recorded calls on the Helicone platform.
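If you use the environment-variable route, the keys can be exported in your shell before starting your application. A minimal sketch, with placeholder key values:

```shell
# Placeholder values; substitute your actual keys.
export OPENAI_API_KEY="sk-..."
export HELICONE_API_KEY="sk-helicone-..."
```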
## Usage

ModelFusion supports Helicone for OpenAI text and chat models in the proxy integration setup. You can change the `api` parameter to use a `HeliconeOpenAIApiConfiguration`.
### Example: Helicone & OpenAI chat with environment variables

```ts
import {
  HeliconeOpenAIApiConfiguration,
  generateText,
  openai,
} from "modelfusion";

const text = await generateText({
  model: openai.ChatTextGenerator({
    // uses the API keys from the OPENAI_API_KEY and HELICONE_API_KEY environment variables
    api: new HeliconeOpenAIApiConfiguration(),
    model: "gpt-3.5-turbo",
  }),
  // ...
});
```
### Example: Helicone & OpenAI chat with API keys

```ts
import {
  HeliconeOpenAIApiConfiguration,
  generateText,
  openai,
} from "modelfusion";

const text = await generateText({
  model: openai.ChatTextGenerator({
    api: new HeliconeOpenAIApiConfiguration({
      openAIApiKey: myOpenAIApiKey,
      heliconeApiKey: myHeliconeApiKey,
    }),
    model: "gpt-3.5-turbo",
  }),
  // ...
});
```
### Example: Helicone with custom call headers

```ts
import {
  HeliconeOpenAIApiConfiguration,
  generateText,
  openai,
} from "modelfusion";

const text = await generateText({
  functionId: "example-function", // the function id is passed into the call headers
  model: openai
    .ChatTextGenerator({
      api: new HeliconeOpenAIApiConfiguration({
        customCallHeaders: ({ functionId, callId }) => ({
          "Helicone-Property-FunctionId": functionId,
          "Helicone-Property-CallId": callId,
        }),
      }),
      model: "gpt-3.5-turbo",
      temperature: 0.7,
      maxGenerationTokens: 500,
    })
    .withTextPrompt(),
  prompt: "Write a short story about a robot learning to love",
});
```