Rate Limiting
Prevent unexpected errors by following rate limiting best practices in your application.
The beehiiv API has a rate limit of 180 requests per minute on a per-organization basis. This prevents abuse and ensures the stability of the API.
If you make requests to the beehiiv API at a rate that exceeds this limit, you will receive a 429 (Too Many Requests) error.
To prevent this, we recommend implementing rate limiting in your application, along with methods like exponential backoff to retry requests that fail due to rate limiting.
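As a sketch, a retry helper that backs off exponentially on 429 responses might look like the following (the retry count and delays are illustrative choices, not values prescribed by the API):

```javascript
// Retry a fetch call with exponential backoff on 429 responses.
// maxRetries and baseDelayMs are illustrative values, not API requirements.
async function fetchWithBackoff(url, options = {}, maxRetries = 5, baseDelayMs = 1000) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429) return response;

    // Wait 1s, 2s, 4s, ... plus a little random jitter to avoid
    // synchronized retries across callers.
    const delayMs = baseDelayMs * 2 ** attempt + Math.random() * 250;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Still rate limited after ${maxRetries} retries`);
}
```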
Headers
Each response from the beehiiv API will include the following headers to assist you in your rate limiting implementation:
- `RateLimit-Limit`: The maximum number of requests that are allowed in the current period.
- `RateLimit-Remaining`: The number of requests remaining in the current period.
- `RateLimit-Reset`: The time (in seconds since the Unix epoch) at which the current period will reset.
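For example, you can read these headers from a fetch response and pause until the window resets (a minimal sketch; the endpoint path is a placeholder to adjust for your use case):

```javascript
const response = await fetch('https://api.beehiiv.com/v2/publications', {
  headers: { Authorization: 'Bearer YOUR_API_KEY' },
});

const remaining = Number(response.headers.get('RateLimit-Remaining'));
const resetAt = Number(response.headers.get('RateLimit-Reset')); // seconds since Unix epoch

if (remaining === 0) {
  // Sleep until the current period resets before sending the next request.
  const waitMs = Math.max(0, resetAt * 1000 - Date.now());
  await new Promise((resolve) => setTimeout(resolve, waitMs));
}
```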
Implementation
To effectively implement rate limiting, we recommend instituting a queue system and leveraging exponential backoff.
Most language ecosystems offer queue systems, whether built in, as libraries, or as hosted services. Some common examples include:
- Amazon SQS (All languages)
- Upstash QStash (All languages)
- Sidekiq (Ruby)
- Goroutines (Go)
- Laravel Queues (PHP)
- Trigger.dev (JavaScript)
- Celery (Python)
Many no-code platforms such as Zapier and Make automatically adhere to our rate limit.
Example Implementation (JavaScript)
This is a basic example of how to implement rate limiting in your JavaScript code. More robust implementations can be built using one of the queue systems mentioned above or a library like Bottleneck.
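Below is a minimal sketch of such a limiter, using the constants, state, and function names described in the walkthrough that follows. The base URL, the `lastDispatchTime` helper variable, and the error handling are illustrative details, and `YOUR_API_KEY` is a placeholder you'll need to replace:

```javascript
// --- Configuration ---
const MAX_REQUESTS_PER_MINUTE = 180;      // matches the beehiiv API limit
const MAX_CONCURRENT = 5;                 // max simultaneous in-flight requests
const MIN_TIME_BETWEEN_REQUESTS_MS = 350; // minimum spacing between dispatches

// --- State Management ---
const requestQueue = [];     // pending calls: { fnToCall, args, resolve, reject }
let activeRequestsCount = 0; // requests currently in flight
let requestTimestamps = [];  // dispatch times within the rolling 60s window
let lastDispatchTime = 0;    // when the most recent request was dispatched

// Your actual API call. Replace YOUR_API_KEY and customize as needed.
async function makeApiCall(endpoint, params = {}) {
  const url = new URL(`https://api.beehiiv.com/v2${endpoint}`);
  Object.entries(params).forEach(([key, value]) => url.searchParams.set(key, value));

  const response = await fetch(url, {
    headers: { Authorization: 'Bearer YOUR_API_KEY' },
  });
  if (!response.ok) throw new Error(`API error: ${response.status}`);
  return response.json();
}

// Call this instead of makeApiCall. It queues the request and returns a
// Promise that settles when the underlying call eventually runs.
function throttledApiCall(endpoint, params) {
  return new Promise((resolve, reject) => {
    requestQueue.push({ fnToCall: makeApiCall, args: [endpoint, params], resolve, reject });
    setTimeout(processRequestQueue, 0); // process asynchronously
  });
}

// Core of the rate limiter: dispatches queued requests when limits allow.
function processRequestQueue() {
  if (requestQueue.length === 0) return;

  const now = Date.now();

  // Keep only timestamps from the last 60 seconds (rolling window).
  requestTimestamps = requestTimestamps.filter((t) => now - t < 60_000);

  // Too many requests in flight: wait for one to complete.
  if (activeRequestsCount >= MAX_CONCURRENT) return;

  // Per-minute budget exhausted: retry once the oldest timestamp expires.
  if (requestTimestamps.length >= MAX_REQUESTS_PER_MINUTE) {
    setTimeout(processRequestQueue, 60_000 - (now - requestTimestamps[0]));
    return;
  }

  // Enforce minimum spacing between dispatches to smooth out bursts.
  const sinceLast = now - lastDispatchTime;
  if (sinceLast < MIN_TIME_BETWEEN_REQUESTS_MS) {
    setTimeout(processRequestQueue, MIN_TIME_BETWEEN_REQUESTS_MS - sinceLast);
    return;
  }

  // All clear: dequeue and dispatch the next request.
  const { fnToCall, args, resolve, reject } = requestQueue.shift();
  activeRequestsCount++;
  lastDispatchTime = now;
  requestTimestamps.push(now);

  fnToCall(...args)
    .then(resolve, reject)
    .finally(() => {
      activeRequestsCount--;
      setTimeout(processRequestQueue, 0); // keep draining the queue
    });
}
```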
How it Works:
- Configuration:
  - `MAX_REQUESTS_PER_MINUTE`: Set to 180, matching the beehiiv API limit.
  - `MAX_CONCURRENT`: Limits how many requests can be active simultaneously (e.g., 5). This prevents overwhelming the network or the server with too many connections at once, even if within the overall rate limit.
  - `MIN_TIME_BETWEEN_REQUESTS_MS`: Ensures a minimum delay between the start of each request (e.g., 350ms). This helps distribute requests more evenly and provides an additional safeguard against hitting the rate limit due to bursts.
- State Management:
  - `requestQueue`: An array that holds API calls waiting to be made. Each item in the queue is an object containing the function to call (`fnToCall`), its arguments (`args`), and the `resolve` and `reject` functions of the Promise returned by `throttledApiCall`.
  - `activeRequestsCount`: Tracks the number of currently in-flight API requests.
  - `requestTimestamps`: Stores the timestamps of when each request was dispatched. This array is used to ensure that no more than `MAX_REQUESTS_PER_MINUTE` are made within any rolling 60-second window.
- `makeApiCall(endpoint, params)`:
  - This is your actual function that performs the `fetch` request to the beehiiv API. You'll need to replace `'Bearer YOUR_API_KEY'` with your API key and customize the request as needed.
- `throttledApiCall(endpoint, params)`:
  - This function acts as a wrapper around your `makeApiCall`. When you want to make an API request in a rate-limited fashion, you call `throttledApiCall` instead of `makeApiCall` directly.
  - It adds your request details to the `requestQueue` and then triggers `processRequestQueue` (asynchronously via `setTimeout`) to attempt to process it.
  - It returns a Promise that will resolve or reject based on the outcome of the actual API call once it's processed.
- `processRequestQueue()`:
  - This is the core of the rate limiter. It's called to attempt to process the next request in the queue.
  - It first checks several conditions:
    - If the `requestQueue` is empty, it does nothing.
    - It prunes `requestTimestamps` to only keep those within the last 60 seconds.
    - If `activeRequestsCount` is already at `MAX_CONCURRENT`, it returns, waiting for an active request to complete.
    - If `requestTimestamps` indicates that `MAX_REQUESTS_PER_MINUTE` have been made in the last 60 seconds, it calculates the time needed to wait until the oldest request in the window expires and schedules `processRequestQueue` to run after that delay.
    - If the time since the last dispatched request is less than `MIN_TIME_BETWEEN_REQUESTS_MS`, it schedules `processRequestQueue` to run after the necessary delay.
  - If none of the limiting conditions are met, it dequeues a request from `requestQueue`, increments `activeRequestsCount`, records the dispatch timestamp, and executes the API call (`fnToCall`).
  - When the API call's Promise settles (either resolves or rejects), the `finally` block decrements `activeRequestsCount` and calls `setTimeout(processRequestQueue, 0)` so queue processing continues for any subsequent requests.
- Example Usage (`fetchAllPosts`):
  - This asynchronous function demonstrates how you might schedule multiple API calls using `throttledApiCall`; see the sketch below. The rate limiter manages the queue and dispatches these calls according to the defined limits, preventing `429` errors.
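A hypothetical `fetchAllPosts` along these lines might look like this (the endpoint path, page count, and `data` response field are assumptions to adapt to your use case):

```javascript
// Schedule several page fetches through the limiter; they dispatch as the
// concurrency, spacing, and per-minute limits allow.
async function fetchAllPosts(publicationId) {
  const pagePromises = [];
  for (let page = 1; page <= 10; page++) {
    pagePromises.push(
      throttledApiCall(`/publications/${publicationId}/posts`, { page })
    );
  }
  const pages = await Promise.all(pagePromises);
  return pages.flatMap((res) => res.data ?? []); // assumes a `data` array per page
}
```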
- This asynchronous function demonstrates how you might schedule multiple API calls using
This vanilla JavaScript approach helps prevent `429` errors by managing request flow. Remember to adjust `YOUR_API_KEY` and the API endpoints in the `makeApiCall` function.
This vanilla JavaScript example focuses on managing request rates within a single Node.js process, like running a script locally on your machine.
It does not cover persistent storage of rate limit state (which would be needed if the application restarts) or distributed rate limiting across multiple instances. For those scenarios, or for handling very large queues robustly, you would typically integrate a server-side queuing system (such as Amazon SQS or the others mentioned above), often backed by a store like Redis or Valkey.
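As a sketch of the distributed case, a fixed-window counter shared via Redis could coordinate the budget across instances (this assumes the `redis` npm client; the key naming and expiry are illustrative):

```javascript
import { createClient } from 'redis';

const redis = createClient(); // shared store coordinating all instances
await redis.connect();

// Returns true if this request fits in the shared per-minute budget.
async function acquireSlot() {
  const windowKey = `beehiiv:ratelimit:${Math.floor(Date.now() / 60_000)}`;
  const count = await redis.incr(windowKey);            // atomic across instances
  if (count === 1) await redis.expire(windowKey, 90);   // clean up old windows
  return count <= 180;
}
```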