# API Rate Limits

Understand rate limiting and how to build resilient integrations.
## Overview

Rate limits protect the API from abuse and ensure fair usage. Different key types have different limits.
## Rate Limit Tiers

| Key Type | Requests per Hour | Window |
|---|---|---|
| Secret (`sk_live_*`) | 1,000 | Rolling 1 hour |
| Publishable (`pk_live_*`) | 100 | Rolling 1 hour |
## Rate Limit Headers

Every response includes rate limit information:

```
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 987
X-RateLimit-Reset: 1702554600
X-RateLimit-Window: 3600
```

| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed in the window |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp (seconds) when the window resets |
| `X-RateLimit-Window` | Window size in seconds |
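Since these headers arrive on every response, a client can track its remaining budget and slow down before ever hitting a 429. A minimal sketch (the helper names here are illustrative, not part of any SDK):

```javascript
// Parse the rate-limit headers from a fetch Response's Headers object.
// Returns null for any header the server did not send.
function parseRateLimitHeaders(headers) {
  const num = (name) => {
    const value = headers.get(name);
    return value === null ? null : Number(value);
  };
  return {
    limit: num('X-RateLimit-Limit'),
    remaining: num('X-RateLimit-Remaining'),
    reset: num('X-RateLimit-Reset'),   // Unix timestamp (seconds)
    window: num('X-RateLimit-Window'), // seconds
  };
}

// True when less than `threshold` of the window's budget remains.
function nearLimit(info, threshold = 0.05) {
  if (info.limit === null || info.remaining === null) return false;
  return info.remaining / info.limit < threshold;
}
```

A caller might check `nearLimit(parseRateLimitHeaders(response.headers))` after each request and pause or shed non-urgent work when it returns true.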
## Handling 429 Errors

When rate limited, you'll receive a `429 Too Many Requests` response:

```json
{
  "success": false,
  "error": {
    "code": "RATE_LIMITED",
    "message": "Rate limit exceeded. Try again in 42 seconds.",
    "retryAfter": 42
  }
}
```
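The wait time is available in two places: the `Retry-After` response header and the `retryAfter` field of the JSON body shown above. One way to resolve it, sketched here with an illustrative helper name and a fallback of 1 second that is an assumption, not documented behavior:

```javascript
// Determine how long to wait (in seconds) after a 429 response.
async function getRetryDelaySeconds(response) {
  // Prefer the standard Retry-After header when the server sends it.
  const header = response.headers.get('Retry-After');
  if (header !== null) return Number(header);
  // Otherwise fall back to the retryAfter field in the JSON error body.
  try {
    const body = await response.json();
    return body?.error?.retryAfter ?? 1;
  } catch {
    return 1; // body was not JSON; use a minimal default
  }
}
```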
## Best Practices

- Check headers proactively - Monitor `X-RateLimit-Remaining`
- Implement exponential backoff - Don't hammer the API after a 429
- Use caching - Cache responses where appropriate
- Batch requests - Combine multiple operations when possible
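The caching advice can be as simple as an in-memory store with a time-to-live, so repeated reads of slow-changing data don't spend rate-limit budget. A minimal sketch (illustrative only, not an SDK feature; pick a TTL appropriate to how fresh your data must be):

```javascript
// A tiny in-memory TTL cache: entries expire ttlMs after being set.
function makeTtlCache(ttlMs) {
  const store = new Map(); // key -> { value, expiresAt }
  return {
    get(key) {
      const entry = store.get(key);
      if (!entry || Date.now() > entry.expiresAt) {
        store.delete(key); // drop stale entries lazily
        return undefined;
      }
      return entry.value;
    },
    set(key, value) {
      store.set(key, { value, expiresAt: Date.now() + ttlMs });
    },
  };
}
```

Before issuing a request, check the cache for the URL; only call the API on a miss, then store the parsed response.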
## Code Example: Retry with Backoff

```javascript
async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status === 429) {
      // Honor the server's Retry-After header (seconds) if present;
      // otherwise fall back to exponential backoff: 2, 4, 8 seconds.
      const retryAfter =
        Number(response.headers.get('Retry-After')) || 2 ** attempt;
      console.log(`Rate limited. Retrying in ${retryAfter}s...`);
      await sleep(retryAfter * 1000);
      continue;
    }
    return response;
  }
  throw new Error('Max retries exceeded');
}

function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
```
## Monitoring Usage

View your API usage in Settings → Integrations → API Keys:

- Total requests per key
- Requests in current window
- Last used timestamp
## Need Higher Limits?

Contact support if you need higher rate limits for your use case.