Rate limiting

The Portal API rate-limits requests to ensure fair usage and system stability for all users.

Overview

Rate limiting controls how many API requests you can make within a specific time window. This prevents any single user or application from overloading the system and ensures consistent performance for everyone.

Current implementation

The Portal API uses a fixed window rate limiter with the following limits:

  • Limit: 50 requests per second
  • Window: 1 second (fixed)

You can make up to 50 requests in any given second. The counter resets every second.
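The fixed-window behavior can be sketched as a simple counter. This is an illustration of the concept only, not the server's actual implementation; the names and structure here are ours:

```javascript
// Minimal fixed-window rate limiter sketch.
// Mirrors the documented values: 50 requests per 1-second window.
const LIMIT = 50;        // requests allowed per window
const WINDOW_MS = 1000;  // window length: 1 second

function makeFixedWindowLimiter() {
  let windowStart = Date.now();
  let count = 0;

  return function allow() {
    const now = Date.now();
    if (now - windowStart >= WINDOW_MS) {
      // A new window has started: reset the counter
      windowStart = now;
      count = 0;
    }
    if (count < LIMIT) {
      count++;
      return true; // request permitted
    }
    return false;  // request would exceed the limit (would get a 429)
  };
}
```

Note that a fixed window resets all at once, so up to 50 requests at the very end of one window plus 50 at the start of the next can land close together; leave headroom if your traffic is bursty.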

Per-client limits

Rate limits are applied per client (based on your App ID or OAuth Client ID). Each application you create has its own independent rate limit. If multiple applications share the same API key, their requests count toward the same limit — use a separate key for each application.

Rate limit headers

Every API response includes headers that show your current rate limit status:

X-RateLimit-Second-Limit: 50
X-RateLimit-Second-AmountRemaining: 45
X-RateLimit-Second-Reset: 0.23

Header descriptions

| Header | Description | Example |
| --- | --- | --- |
| X-RateLimit-Second-Limit | Maximum requests allowed per second | 50 |
| X-RateLimit-Second-AmountRemaining | Requests remaining in the current window | 45 |
| X-RateLimit-Second-Reset | Seconds until the rate limit resets | 0.23 |

Rate limit exceeded

When you exceed the rate limit, you'll receive:

HTTP Status: 429 Too Many Requests

Response Body:

{
  "code": "resource_exhausted"
}

Additional Header:

Retry-After: 0.52

The Retry-After header tells you how many seconds to wait before making another request.

Best practices

1. Monitor rate limit headers

Always check the rate limit headers in responses to track your usage:

const response = await fetch('https://manage.youriguide.com/api/v1/iguides', {
  headers: {
    'X-Plntr-App-Id': process.env.APP_ID,
    'X-Plntr-App-Token': process.env.APP_TOKEN
  }
});

const remaining = response.headers.get('X-RateLimit-Second-AmountRemaining');
const reset = response.headers.get('X-RateLimit-Second-Reset');

console.log(`Remaining: ${remaining}, Reset in: ${reset}s`);

2. Implement exponential backoff

When you receive a 429 response, implement exponential backoff:

// Simple sleep helper used below
const sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms));

async function makeRequestWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      const retryAfter = parseFloat(response.headers.get('Retry-After') || '1');
      const backoffTime = retryAfter * Math.pow(2, attempt);

      console.log(`Rate limited. Waiting ${backoffTime}s before retry...`);
      await sleep(backoffTime * 1000);
      continue;
    }

    return response;
  }

  throw new Error('Max retries exceeded');
}

3. Batch operations

Instead of making many individual requests, batch operations when possible:

❌ Don't do this:

// 100 separate requests
for (const id of iguideIds) {
  await fetch(`https://manage.youriguide.com/api/v1/iguides/${id}`);
}

✅ Do this:

// Single request with query parameters or list endpoint
const response = await fetch('https://manage.youriguide.com/api/v1/iguides?ids=' + iguideIds.join(','));

4. Spread out requests

If you need to make many requests, spread them out over time:

const sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms));

async function processWithDelay(items, delayMs = 100) {
  for (const item of items) {
    await processItem(item);
    await sleep(delayMs);
  }
}

5. Cache responses

Cache responses that don't change frequently:

const cache = new Map();
const CACHE_TTL = 60000; // 1 minute

async function getCachedData(key, fetchFn) {
  const cached = cache.get(key);

  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const data = await fetchFn();
  cache.set(key, { data, timestamp: Date.now() });
  return data;
}

Rate limiting by client type

All client types (API Tokens and OAuth Apps) share the same rate limit:

| Client type | Limit | Window |
| --- | --- | --- |
| API Token | 50 requests | 1 second |
| OAuth App | 50 requests | 1 second |

Future enhancements

We may introduce tiered rate limits or per-user customization in the future. Check back for updates or contact support if you need higher limits for production use.

Handling rate limits in production

Node.js example

class APIClient {
  constructor(appId, appToken) {
    this.appId = appId;
    this.appToken = appToken;
    this.requestQueue = [];
    this.processing = false;
  }

  async request(url, options = {}) {
    return new Promise((resolve, reject) => {
      this.requestQueue.push({ url, options, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.processing || this.requestQueue.length === 0) return;

    this.processing = true;

    while (this.requestQueue.length > 0) {
      const { url, options, resolve, reject } = this.requestQueue.shift();

      try {
        const response = await fetch(url, {
          ...options,
          headers: {
            ...options.headers,
            'X-Plntr-App-Id': this.appId,
            'X-Plntr-App-Token': this.appToken,
          }
        });

        if (response.status === 429) {
          const retryAfter = parseFloat(response.headers.get('Retry-After') || '1');
          console.log(`Rate limited. Waiting ${retryAfter}s...`);

          // Put the request back at the front of the queue
          this.requestQueue.unshift({ url, options, resolve, reject });

          // Wait before processing more
          await this.sleep(retryAfter * 1000);
          continue;
        }

        resolve(response);

        // Small delay between requests to stay under the limit
        await this.sleep(25); // ~40 requests/second max

      } catch (error) {
        reject(error);
      }
    }

    this.processing = false;
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

// Usage
const client = new APIClient(process.env.APP_ID, process.env.APP_TOKEN);
const response = await client.request('https://manage.youriguide.com/api/v1/iguides');

Python example

import os
import time

import requests

class APIClient:
    def __init__(self, app_id: str, app_token: str):
        self.app_id = app_id
        self.app_token = app_token
        self.base_url = "https://manage.youriguide.com/api/v1"

    def request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
        url = f"{self.base_url}/{endpoint.lstrip('/')}"
        headers = kwargs.pop('headers', {})
        headers.update({
            'X-Plntr-App-Id': self.app_id,
            'X-Plntr-App-Token': self.app_token,
        })

        max_retries = 3
        for attempt in range(max_retries):
            response = requests.request(method, url, headers=headers, **kwargs)

            if response.status_code == 429:
                retry_after = float(response.headers.get('Retry-After', 1))
                backoff = retry_after * (2 ** attempt)

                print(f"Rate limited. Waiting {backoff}s before retry...")
                time.sleep(backoff)
                continue

            response.raise_for_status()
            return response

        raise Exception(f"Max retries exceeded for {url}")

# Usage
client = APIClient(os.getenv('APP_ID'), os.getenv('APP_TOKEN'))
response = client.request('GET', '/iguides')

Troubleshooting

Issue: Constant 429 errors

Symptoms: Receiving rate limit errors frequently

Solutions:

  1. Verify no other applications are sharing the same API key
  2. Add delays between requests (more than 20ms between request starts keeps you below 50/second)
  3. Implement request queuing with controlled throughput
  4. Check for unintentional request loops or retries
  5. Review your application for concurrent requests from multiple processes
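Solutions 2 and 3 can be combined with a minimum-spacing throttle that serializes bursts from concurrent callers. This is a sketch; the `makeThrottle` helper and the default gap are ours, not part of the API:

```javascript
// Minimum-spacing throttle: guarantees at least `minGapMs` between the
// completion of one call and the start of the next, even when many
// callers fire concurrently.
function makeThrottle(minGapMs = 25) {
  let last = Promise.resolve();

  return function throttled(fn) {
    const run = last.then(async () => {
      const result = await fn();
      // Enforce the gap before the next queued call starts
      await new Promise(resolve => setTimeout(resolve, minGapMs));
      return result;
    });
    // Chain on settlement (ignoring errors) so one failure doesn't stall the queue
    last = run.catch(() => {});
    return run;
  };
}

// Usage (assuming Node 18+ global fetch):
// const throttled = makeThrottle(25);
// const response = await throttled(() => fetch(url, options));
```

Because every request funnels through one chain, this also surfaces unintentional loops: if your queue keeps growing, something upstream is issuing more requests than you expect.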

Issue: Inconsistent rate limits

Symptoms: Sometimes hitting limits with fewer requests

Solutions:

  1. Remember the limit is per second, not per minute
  2. Check if multiple instances of your application are running
  3. Verify no other applications are sharing the same API key
  4. Ensure you're tracking the same App ID across requests
  5. Verify your clock synchronization if calculating delays

Issue: Headers not showing limits

Symptoms: Rate limit headers missing from responses

Solutions:

  1. Ensure you're making authenticated requests
  2. Check that you're reading headers from the response object correctly
  3. Verify the endpoint supports rate limiting (all authenticated endpoints should)

Frequently asked questions

Can I request a higher rate limit?

Currently, all applications have the same rate limit (50 requests/second). If you have a legitimate need for higher limits in production, contact support with your use case details.

Do rate limits apply to unauthenticated requests?

Yes, rate limits apply to all requests, both authenticated and unauthenticated. Unauthenticated requests are rate-limited by IP address.

Are rate limits shared across multiple API tokens?

No, each App ID (API Token or OAuth App) has its own independent rate limit. If you create multiple tokens, each has its own allowance of 50 requests/second.

What counts as a "request"?

Every HTTP request to the API counts toward your rate limit, including:

  • Successful requests (2xx responses)
  • Client errors (4xx responses)
  • Server errors (5xx responses)
  • Failed authentication attempts

How accurate is the reset timer?

The X-RateLimit-Second-Reset header shows the time in seconds (with decimals) until the current window resets. It's accurate to within a few milliseconds.
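A client can use this header to pause proactively once its allowance is exhausted, rather than waiting for a 429. A sketch, assuming Node 18+ with global `fetch` (the `fetchRespectingWindow` helper name is ours):

```javascript
// Fetch wrapper that sleeps out the remainder of the current window
// whenever the remaining allowance drops to zero.
async function fetchRespectingWindow(url, options = {}) {
  const response = await fetch(url, options);

  const remaining = Number(response.headers.get('X-RateLimit-Second-AmountRemaining'));
  const resetSeconds = Number(response.headers.get('X-RateLimit-Second-Reset'));

  if (remaining === 0 && !Number.isNaN(resetSeconds)) {
    // Allowance exhausted: wait until the window resets before returning,
    // so the caller's next request lands in a fresh window
    await new Promise(resolve => setTimeout(resolve, resetSeconds * 1000));
  }

  return response;
}
```

This trades a little latency at the edge of each window for never triggering a 429 under steady load.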

  • Authentication—Set up API credentials and learn about app tokens
  • Your first iGUIDE—Complete tutorial for creating an iGUIDE via the API
  • Webhooks—Reduce API calls by receiving real-time updates