The API enforces rate limits to ensure fair usage and system stability.

Default Limits

Limit Type             Value
Requests per minute    100
Concurrent requests    10
Rate limits are applied per API key.

Rate Limit Headers

Every response includes headers showing your current rate limit status:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1705315860
Header                   Description
X-RateLimit-Limit        Maximum requests per minute
X-RateLimit-Remaining    Requests remaining in current window
X-RateLimit-Reset        Unix timestamp when the limit resets
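A small helper (hypothetical, not part of any official client) can turn these headers into a structured status, converting the reset timestamp into seconds remaining:

```python
import time

def parse_rate_limit_headers(headers):
    """Extract rate limit status from response headers (hypothetical helper)."""
    return {
        "limit": int(headers.get("X-RateLimit-Limit", 0)),
        "remaining": int(headers.get("X-RateLimit-Remaining", 0)),
        # Seconds until the window resets, clamped at zero
        "reset_in": max(int(headers.get("X-RateLimit-Reset", 0)) - time.time(), 0),
    }

# Example using the header values shown above
status = parse_rate_limit_headers({
    "X-RateLimit-Limit": "100",
    "X-RateLimit-Remaining": "95",
    "X-RateLimit-Reset": str(int(time.time()) + 30),
})
```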

Rate Limit Exceeded

When you exceed the rate limit, the API returns:
{
  "success": false,
  "error": {
    "code": "RATE_LIMITED",
    "message": "Rate limit exceeded. Please wait before retrying."
  }
}
HTTP Status: 429 Too Many Requests

Handling Rate Limits

import time

import requests

def make_request_with_backoff(url, payload, max_retries=3):
    headers = {
        "X-API-Key": "your-api-key",
        "Content-Type": "application/json"
    }

    for attempt in range(max_retries):
        response = requests.post(url, headers=headers, json=payload)

        # Check how many requests remain in the current window
        remaining = int(response.headers.get("X-RateLimit-Remaining", 100))

        if response.status_code == 429:
            # Wait until the window resets (at least 1 second)
            reset_time = int(response.headers.get("X-RateLimit-Reset", 0))
            wait_time = max(reset_time - time.time(), 1)
            print(f"Rate limited. Waiting {wait_time:.0f}s...")
            time.sleep(wait_time)
            continue

        # Proactively slow down when the window is nearly exhausted
        if remaining < 10:
            time.sleep(0.5)

        return response.json()

    raise Exception("Max retries exceeded")

Best Practices

Monitor headers: Track X-RateLimit-Remaining to avoid hitting limits.

Implement backoff: Use exponential backoff when rate limited.

Batch requests: Use bulk endpoints to reduce request count.

Cache responses: Store results to avoid duplicate requests.
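The handler shown earlier waits for the reset timestamp; exponential backoff is the alternative when that header is unavailable. A sketch with full jitter (the base and cap values here are arbitrary choices, not API requirements):

```python
import random

def backoff_delay(attempt, base=1.0, cap=60.0):
    """Exponential backoff with full jitter.

    Returns a random delay in [0, min(cap, base * 2**attempt)],
    so consecutive retries wait progressively longer on average.
    """
    return random.uniform(0, min(cap, base * (2 ** attempt)))

# Upper bound doubles each retry: attempt 0 -> up to 1s, attempt 1 -> up to 2s, ...
delays = [backoff_delay(a) for a in range(5)]
```

Full jitter (randomizing across the whole interval rather than adding a small offset) spreads retries out, so many clients rate-limited at the same moment do not all retry in sync.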

Tips for Staying Under Limits

  1. Use bulk endpoints - /tiktok/bulk/posts fetches from multiple users in one request
  2. Increase limits per request - Fetch 100 posts instead of 10 separate requests of 10
  3. Cache responses - Store data locally to avoid refetching
  4. Spread requests - Don't burst all requests at once
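The caching tip above can be sketched as a minimal in-memory cache with a time-to-live (the 60-second default is an arbitrary choice for illustration, not an API requirement):

```python
import time

class TTLCache:
    """Minimal in-memory cache: entries expire after ttl seconds."""

    def __init__(self, ttl=60):
        self.ttl = ttl
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.time() - stored_at > self.ttl:
            del self._store[key]  # Expired: drop the entry and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.time())

# Check the cache before issuing a request; refetch only on a miss
cache = TTLCache(ttl=60)
cache.set("/tiktok/bulk/posts?users=a,b", {"success": True})
```

Keyed on the request URL (including query parameters), this avoids refetching the same data within the TTL window and directly reduces requests counted against the per-minute limit.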

Need Higher Limits?

Contact your administrator to request increased rate limits for your API key.