Rate Limits

Understand how rate limiting works, what your plan allows, and how to build integrations that stay within bounds.

Limits by plan #

Every API key has both a daily quota (total lookups per rolling 24-hour window) and a per-minute burst limit. The two limits are enforced independently; exceeding either returns a 429 response.

Plan         Daily limit   Per-minute limit   Price
Free         100           10                 $0/mo
Developer    10,000        100                $49/mo
Pro          100,000       500                $199/mo
Enterprise   Unlimited     2,000              Custom

Rate limit headers #

Every API response includes rate limit headers so you can proactively monitor your usage without needing a separate endpoint.

Header                 Type      Description
X-RateLimit-Limit      integer   Maximum requests allowed per minute for your plan
X-RateLimit-Remaining  integer   Number of requests remaining in the current 60-second window
X-RateLimit-Reset      integer   Unix timestamp (seconds) when the current rate limit window resets
Example response headers
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1741638420
X-Request-Id: req_a1b2c3d4e5f6
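As a sketch of how a client might read these headers, the snippet below uses a plain dict standing in for whatever your HTTP library exposes (e.g. `resp.headers` in `requests`); the values shown are the ones from the example above:

```python
# Hypothetical headers dict, as any HTTP client would return them.
# Header values arrive as strings and must be parsed.
headers = {
    "X-RateLimit-Limit": "100",
    "X-RateLimit-Remaining": "87",
    "X-RateLimit-Reset": "1741638420",
}

limit = int(headers["X-RateLimit-Limit"])
remaining = int(headers["X-RateLimit-Remaining"])
used = limit - remaining
print(f"{used}/{limit} requests used this window")  # 13/100 requests used this window
```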

What counts as a lookup #

Not every API call counts against your quota. Only calls that return property or market data are metered.

Counted (1 lookup each)

  • GET /v1/properties/:zpid -- Single property lookup
  • GET /v1/properties/search -- Each search request (not per result)
  • GET /v1/properties/:zpid/history -- Price and tax history
  • GET /v1/markets/:id -- Market data lookup
  • GET /v1/markets/search -- Market search

Not counted

  • GET /v1/auth/status -- Checking your API key and usage
  • Any request that returns a 4xx error (failed requests are not metered)
  • Requests served from Straply's CDN cache (indicated by X-Cache: HIT header)
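These rules can be summed up in a small helper. The function below is a sketch, not part of any official client, and applies exactly the exclusions listed above:

```python
def is_metered(status_code: int, headers: dict) -> bool:
    """Return True if a response counted against the quota.

    Per the rules above: 4xx errors and CDN cache hits
    (X-Cache: HIT) are not metered.
    """
    if 400 <= status_code < 500:
        return False  # failed requests are free
    if headers.get("X-Cache") == "HIT":
        return False  # served from the CDN cache
    return True

print(is_metered(200, {"X-Cache": "MISS"}))  # True
print(is_metered(404, {}))                   # False
print(is_metered(200, {"X-Cache": "HIT"}))   # False
```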

Exceeding your limit #

When you exceed either your daily quota or per-minute burst limit, the API returns a 429 status with a Retry-After header indicating how many seconds to wait.

Per-minute burst limits

Burst limits use a sliding 60-second window. If you send 100 requests in 5 seconds on the Developer plan, you will be rate limited for the remaining 55 seconds of that window. Spreading requests evenly gives you the best throughput.
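One way to spread requests evenly is to enforce a minimum interval between calls. This is a minimal pacing sketch, assuming the Developer plan's 100-per-minute limit; `send` is a placeholder for whatever function actually performs the request:

```python
import time

PER_MINUTE_LIMIT = 100                 # Developer plan burst limit
MIN_INTERVAL = 60 / PER_MINUTE_LIMIT   # 0.6 s between requests

def paced(items, send):
    """Call `send` on each item no faster than the burst limit allows."""
    for item in items:
        started = time.monotonic()
        send(item)
        # Sleep off whatever portion of the interval the request didn't use.
        elapsed = time.monotonic() - started
        if elapsed < MIN_INTERVAL:
            time.sleep(MIN_INTERVAL - elapsed)
```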

Daily quota

The daily limit resets on a rolling 24-hour basis from your first request of the period. When you hit the daily limit, the Retry-After header will contain the number of seconds until the oldest request in your window expires.

429 response
// Headers:
// X-RateLimit-Limit: 100
// X-RateLimit-Remaining: 0
// X-RateLimit-Reset: 1741638480
// Retry-After: 42

{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Per-minute rate limit exceeded. Retry after 42 seconds.",
    "status": 429
  }
}

Best practices #

Follow these patterns to make the most of your rate limits and build a reliable integration.

Cache aggressively

Property data does not change by the second. Cache responses locally for at least 1 hour (we recommend 6-24 hours for most use cases). This dramatically reduces your API usage while keeping data fresh enough for nearly every application.
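A minimal in-memory cache with per-entry expiry might look like the sketch below (in production you would likely reach for `cachetools` or Redis instead; the property path and payload shown are made-up examples):

```python
import time

class TTLCache:
    """Tiny in-memory cache with per-entry time-to-live (sketch only)."""

    def __init__(self, ttl_seconds: float = 6 * 3600):
        self.ttl = ttl_seconds
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: drop and miss
            return None
        return value

cache = TTLCache(ttl_seconds=6 * 3600)  # 6-hour TTL, per the guidance above
cache.set("/v1/properties/12345", {"zpid": "12345"})
```

Checking the cache before issuing a request turns repeat lookups into free cache hits instead of metered API calls.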

Monitor your headers

Read X-RateLimit-Remaining from every response. When it drops below 10% of your limit, slow down proactively rather than waiting for a 429.

Proactive throttling (Python)
import time

remaining = int(resp.headers.get("X-RateLimit-Remaining", 1))
limit = int(resp.headers.get("X-RateLimit-Limit", 100))

# Less than 10% of the window left: pause until the window resets.
if remaining < limit * 0.1:
    reset_at = int(resp.headers.get("X-RateLimit-Reset", 0))
    wait = max(0, reset_at - time.time())
    time.sleep(wait)

Use exponential backoff on 429s

When you receive a 429, always respect the Retry-After header. Add a small random jitter (0-1 second) to avoid thundering herd problems when multiple clients retry simultaneously. See the Errors page for a complete retry implementation.
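A simple retry loop along those lines is sketched below. `send_request` is a placeholder for a function returning a response with `.status_code` and `.headers` (e.g. a `requests.Response`); falling back to `2 ** attempt` seconds when `Retry-After` is absent is an assumption of this sketch, not documented server behavior:

```python
import random
import time

def retry_with_backoff(send_request, max_attempts: int = 5):
    """Retry on 429, honoring Retry-After plus 0-1 s of random jitter."""
    for attempt in range(max_attempts):
        resp = send_request()
        if resp.status_code != 429:
            return resp
        # Respect Retry-After; fall back to exponential backoff if missing.
        delay = int(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay + random.random())  # jitter avoids thundering herd
    raise RuntimeError(f"still rate limited after {max_attempts} attempts")
```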

Upgrade when you need more

If you are consistently hitting your limits, it is time to upgrade. You can change your plan instantly from the dashboard with no downtime. Enterprise customers can request custom limits tailored to their workload -- contact sales to discuss.