Access Swiftask agents via OpenAI SDK (recommended)
Written By Stanislas
Last updated 8 days ago
Overview
Swiftask agents are compatible with the OpenAI SDK. This means you can use any OpenAI client library (Python, JavaScript/Node.js, or others) to send messages to your agents and receive responses, just as you would with OpenAI's GPT models.
No special integration code is needed. You simply point the SDK to Swiftask's API endpoint, authenticate with your Swiftask API key, and specify your agent's slug as the model name. The SDK handles the rest.
This is useful when you want to integrate Swiftask agents into applications, scripts, or workflows that already use the OpenAI SDK, or when you prefer the familiar OpenAI interface.
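Under the hood, the SDK issues a standard OpenAI-style chat-completions request to Swiftask's endpoint. As an illustration (the /chat/completions route and Bearer header are assumptions based on the OpenAI-compatible base URL), the equivalent raw request looks like this:

```python
import json

# Sketch of what the SDK sends under the hood: a standard OpenAI-style
# chat-completions request. The /chat/completions route is assumed from
# the OpenAI-compatible base URL.
payload = {
    "model": "AGENT_SLUG",  # your agent's slug stands in for a model name
    "messages": [{"role": "user", "content": "Hello"}],
}
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
}
body = json.dumps(payload)
# e.g. requests.post("https://api.swiftask.fr/v1/chat/completions",
#                    headers=headers, data=body)
```

You never need to build this request yourself; the SDK does it for you once it is pointed at the Swiftask base URL.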
Prerequisites
Before you can use the API, you need:
A Swiftask account with at least one agent created.
An API key: generated from your account settings.
Your agent's slug: a unique identifier found in your agent's settings.
An OpenAI SDK installed in your project (Python, JavaScript, etc.).
The API works with any account, as long as your API key is valid and not expired.
Step-by-step guide
1. Create an API key
Go to Account Settings (click the setting icon in the bottom left, then select Account Settings).
Navigate to the API section.
Click Create new key.
Enter a name for your key (e.g., "Production Bot Access").
Optionally set an expiration date.
Click Create.
Copy your key immediately; it will only be shown once. Store it securely (e.g., in a .env file).
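For example, a minimal sketch of reading the key from an environment variable in Python (the variable name SWIFTASK_API_KEY is just a convention used here, not something the API requires):

```python
import os

# Read the key from an environment variable instead of hardcoding it.
# SWIFTASK_API_KEY is an assumed variable name; pick any name you like.
api_key = os.environ.get("SWIFTASK_API_KEY", "")

# With python-dotenv installed, a .env file can be loaded first:
#   from dotenv import load_dotenv; load_dotenv()
client_kwargs = {
    "api_key": api_key,
    "base_url": "https://api.swiftask.fr/v1",
}
```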
2. Get your agent's slug
Open the agent you want to access.
Click the Settings icon (gear icon) on your agent.
Go to the API tab.
Copy the text in the Agent slug section. This is the unique identifier you'll use as the model parameter.
3. Install the OpenAI SDK
Choose your language and install the SDK:
Python:
pip install openai
JavaScript / Node.js:
npm install openai
4. Configure the SDK and make a request
Use your API key and agent slug to configure the client. Here are examples:
Python:
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.swiftask.fr/v1"
)

response = client.chat.completions.create(
    model="AGENT_SLUG",
    messages=[
        {"role": "user", "content": "Hello, how can you help me?"}
    ]
)

print(response.choices[0].message.content)
JavaScript / TypeScript:
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.swiftask.fr/v1',
});

const response = await client.chat.completions.create({
  model: 'AGENT_SLUG',
  messages: [
    { role: 'user', content: 'Hello, how can you help me?' }
  ]
});

console.log(response.choices[0].message.content);
5. Use streaming (optional)
If you want to receive responses in real-time as they're generated, enable streaming:
Python:
stream = client.chat.completions.create(
    model="AGENT_SLUG",
    messages=[
        {"role": "user", "content": "Tell me a story."}
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
JavaScript / TypeScript:
const stream = await client.chat.completions.create({
  model: 'AGENT_SLUG',
  messages: [
    { role: 'user', content: 'Tell me a story.' }
  ],
  stream: true,
});

for await (const chunk of stream) {
  if (chunk.choices[0].delta.content) {
    process.stdout.write(chunk.choices[0].delta.content);
  }
}
Practical use cases
Case 1: Integrate an agent into a support chatbot
You have a customer support agent in Swiftask and want to use it in your Node.js support application.
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'SWIFTASK_API_KEY',
  baseURL: 'https://api.swiftask.fr/v1',
});

async function answerCustomerQuestion(question) {
  const response = await client.chat.completions.create({
    model: 'AGENT_SLUG',
    messages: [
      { role: 'user', content: question }
    ]
  });
  return response.choices[0].message.content;
}
Case 2: Batch process documents with an agent
You have a document analysis agent and want to process 100 documents in a Python script.
from openai import OpenAI

client = OpenAI(
    api_key="SWIFTASK_API_KEY",
    base_url="https://api.swiftask.fr/v1"
)

documents = ["Document 1 content", "Document 2 content"]

for doc in documents:
    response = client.chat.completions.create(
        model="AGENT_SLUG",
        messages=[
            {"role": "user", "content": f"Analyze this: {doc}"}
        ]
    )
    print(response.choices[0].message.content)
Case 3: Stream responses in a web application
You want to display agent responses in real-time on a web page.
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'SWIFTASK_API_KEY',
  baseURL: 'https://api.swiftask.fr/v1',
});

async function streamAgentResponse(userMessage, onChunk) {
  const stream = await client.chat.completions.create({
    model: 'AGENT_SLUG',
    messages: [{ role: 'user', content: userMessage }],
    stream: true,
  });
  for await (const chunk of stream) {
    if (chunk.choices[0].delta.content) {
      onChunk(chunk.choices[0].delta.content);
    }
  }
}
Tips & Best Practices
Secure your API key: treat it like a password. Use environment variables or secret management tools; never hardcode it.
Set an expiration date: give each API key an expiration for security, and rotate keys periodically.
Use descriptive key names: name your keys by use case (e.g., "Production Bot", "Testing Bot") to track usage.
Handle errors gracefully: wrap API calls in try-catch blocks and provide meaningful error messages to users.
Use streaming for better UX: streamed responses feel faster to end users because they see content appear in real time.
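As an illustration of the error-handling tip, here is a minimal retry wrapper sketch. The function and parameter names are hypothetical; in production you would catch the SDK's specific exception types (e.g., rate-limit or connection errors) rather than a bare Exception:

```python
import time

# Hypothetical helper: `make_request` is any zero-argument callable that
# performs the API call, e.g.
#   lambda: client.chat.completions.create(model="AGENT_SLUG", messages=[...])
def call_with_retries(make_request, max_attempts=3, backoff=2.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return make_request()
        except Exception:  # narrow to the SDK's error classes in real code
            if attempt == max_attempts:
                raise
            time.sleep(backoff * attempt)  # simple linear backoff
```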
Troubleshooting
Error: 401 Unauthorized
Cause: Your API key is invalid or missing.
Fix: Verify your API key is correct and included in the Authorization header as a Bearer token. Check that it hasn't expired in your account settings.
Error: 400 Bad Request
Cause: The agent slug is incorrect or the agent doesn't exist.
Fix: Double-check your agent slug in your agent's settings under the API tab. Make sure it matches exactly.
No response or timeout
Cause: The agent is taking longer than expected to respond, or there's a network issue.
Fix: Check your internet connection. If using streaming, ensure your client supports Server-Sent Events (SSE). Try again with a simpler message.
Stream stops unexpectedly
Cause: The connection was interrupted or the agent finished processing.
Fix: Ensure your client properly handles the [DONE] marker that signals the end of the stream. Wrap streaming logic in error handling.
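For clients that consume the stream without the SDK, here is a sketch of parsing Server-Sent Events and stopping at the [DONE] marker (the helper name is hypothetical, and the "data: {...}" event shape assumes the standard OpenAI streaming format):

```python
import json

# Hypothetical helper for manual SSE handling: each event line looks like
# "data: {json}", and "data: [DONE]" marks the end of the stream.
def iter_content_deltas(sse_lines):
    """Yield content deltas from raw SSE lines, stopping at [DONE]."""
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank separators and keep-alives
        data = line[len("data: "):].strip()
        if data == "[DONE]":
            break
        delta = json.loads(data)["choices"][0]["delta"]
        if delta.get("content"):
            yield delta["content"]
```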