Rate Limiting

Prevent unexpected errors by following rate limiting best practices in your application.

The beehiiv API has a rate limit of 180 requests per minute on a per-organization basis. This is to prevent abuse and ensure the stability of the API.

If you are making requests to the beehiiv API at a rate that exceeds the rate limit, you will receive a 429 (Too Many Requests) error.

To prevent this, we recommend implementing client-side rate limiting, along with techniques like exponential backoff to retry requests that fail with a 429.
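As a sketch of the retry side, a small helper can re-run a request after a 429 with exponentially growing delays. The names here (`retryWithBackoff`, `retries`, `baseDelayMs`) are illustrative, not part of the beehiiv API:

```javascript
// Hypothetical helper: retry an async function with exponential backoff.
// Assumes the wrapped function throws errors carrying a `status` property.
async function retryWithBackoff(fn, { retries = 5, baseDelayMs = 1000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      // Give up after the configured number of retries,
      // or immediately for non-rate-limit errors.
      if (attempt >= retries || error.status !== 429) throw error;
      // Delay doubles each attempt (base, 2x, 4x, ...) plus random jitter
      // so many clients don't retry in lockstep.
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
```

You would wrap each API call, for example `retryWithBackoff(() => fetchPost(id))`, making sure your fetch wrapper attaches `status` to thrown errors so 429s are recognized.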

Headers

Each response from the beehiiv API will include the following headers to assist you in your rate limiting implementation:

  • RateLimit-Limit: The maximum number of requests that are allowed in the current period.
  • RateLimit-Remaining: The number of requests remaining in the current period.
  • RateLimit-Reset: The time (in seconds since the Unix epoch) at which the current period will reset.
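As a sketch, these headers can be read from a fetch Response to decide whether to pause before the next request. The helper name `delayFromHeaders` is illustrative, not part of any library:

```javascript
// Hypothetical helper: compute how long (in ms) to wait before the next
// request, based on the RateLimit-* response headers.
function delayFromHeaders(headers) {
  const remaining = headers.get('RateLimit-Remaining');
  const resetAt = headers.get('RateLimit-Reset'); // seconds since Unix epoch
  if (remaining !== null && Number(remaining) <= 0 && resetAt !== null) {
    // No requests left in this window: wait until the window resets.
    return Math.max(0, Number(resetAt) * 1000 - Date.now());
  }
  return 0; // Budget remains; no need to wait.
}
```

Usage would look like `const waitMs = delayFromHeaders(response.headers);` after each call.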

Implementation

To effectively implement rate limiting, we recommend instituting a queue system and leveraging exponential backoff.

Many programming languages and frameworks offer built-in or widely used support for queue systems. Some common examples include:

  • Managed message queues, such as Amazon SQS
  • Job queues backed by stores like Redis or Valkey

Many no-code platforms, such as Zapier and Make, automatically adhere to our rate limit.

Example Implementation (JavaScript)

This is a basic example of how to implement rate limiting in JavaScript. More sophisticated implementations can be built on one of the queue systems mentioned above or with a library such as Bottleneck.

const MAX_REQUESTS_PER_MINUTE = 180;
const MAX_CONCURRENT = 5;
// beehiiv API: 180 requests / 60 seconds = 3 requests per second.
// 1000ms / 3 requests = ~333ms per request.
const MIN_TIME_BETWEEN_REQUESTS_MS = 350;

let requestQueue = [];
let activeRequestsCount = 0;
let requestTimestamps = [];

async function makeApiCall(endpoint, params) {
  console.log(`Making API call to: ${endpoint} with params:`, params);
  // Replace with your actual fetch/axios call.
  // Ensure your actual API call function is asynchronous (returns a Promise).
  return fetch(`https://api.beehiiv.com/v2/${endpoint}`, {
    method: 'GET', // or 'POST', etc.
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY', // Replace YOUR_API_KEY
      'Content-Type': 'application/json'
    },
    // body: JSON.stringify(params) // if it's a POST/PUT request
  })
    .then(response => {
      if (!response.ok) {
        if (response.status === 429) {
          console.warn('Rate limit hit (429). The custom rate limiter should ideally prevent this. Check logic or external factors.');
        }
        throw new Error(`HTTP error! status: ${response.status}`);
      }
      return response.json();
    });
}

function processRequestQueue() {
  if (requestQueue.length === 0) {
    return;
  }

  const now = Date.now();

  // 1. Prune old timestamps (older than 1 minute)
  requestTimestamps = requestTimestamps.filter(timestamp => now - timestamp < 60000);

  // 2. Check constraints
  if (activeRequestsCount >= MAX_CONCURRENT) {
    return;
  }

  if (requestTimestamps.length >= MAX_REQUESTS_PER_MINUTE) {
    console.log('Rate limit per minute reached. Waiting for window to reset...');
    const timeToWait = (requestTimestamps[0] + 60000) - now + 100;
    setTimeout(processRequestQueue, Math.max(0, timeToWait));
    return;
  }

  if (requestTimestamps.length > 0) {
    const lastDispatchedTime = requestTimestamps[requestTimestamps.length - 1];
    const timeSinceLastDispatched = now - lastDispatchedTime;
    if (timeSinceLastDispatched < MIN_TIME_BETWEEN_REQUESTS_MS) {
      const delay = MIN_TIME_BETWEEN_REQUESTS_MS - timeSinceLastDispatched;
      console.log(`Min time between requests. Waiting ${delay}ms...`);
      setTimeout(processRequestQueue, Math.max(0, delay));
      return;
    }
  }

  // 3. Dequeue and process the request
  const { fnToCall, resolve, reject, args } = requestQueue.shift();

  activeRequestsCount++;
  requestTimestamps.push(Date.now());

  fnToCall(...args)
    .then(resolve)
    .catch(reject)
    .finally(() => {
      activeRequestsCount--;
      setTimeout(processRequestQueue, 0);
    });
}

function throttledApiCall(endpoint, params) {
  return new Promise((resolve, reject) => {
    requestQueue.push({ fnToCall: makeApiCall, resolve, reject, args: [endpoint, params] });
    setTimeout(processRequestQueue, 0);
  });
}

// --- Example Usage ---
async function fetchAllPosts() {
  try {
    const postIds = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10 /* ... more ids ... */];
    console.log(`Attempting to fetch ${postIds.length} posts sequentially managed by rate limiter...`);

    const promises = postIds.map(id => {
      return throttledApiCall(`posts/${id}`)
        .then(post => {
          console.log(`Successfully fetched post ${id}:`, post.id);
          return post;
        })
        .catch(error => {
          console.error(`Failed to fetch post ${id}:`, error.message);
          throw error;
        });
    });

    const results = await Promise.all(promises);
    console.log('All post fetch attempts completed.');
    console.log('Fetched posts data:', results);
    return results;
  } catch (error) {
    console.error('Error in fetchAllPosts orchestration:', error.message);
  }
}

// To use this:
// fetchAllPosts().then(() => console.log("fetchAllPosts example finished."));

// For debugging/monitoring, you can add more console logs within processRequestQueue
// or around the `activeRequestsCount` and `requestTimestamps` manipulations.

How it Works:

  • Configuration:

    • MAX_REQUESTS_PER_MINUTE: Set to 180, matching the beehiiv API limit.
    • MAX_CONCURRENT: Limits how many requests can be active simultaneously (e.g., 5). This prevents overwhelming the network or the server with too many connections at once, even if within the overall rate limit.
    • MIN_TIME_BETWEEN_REQUESTS_MS: Ensures a minimum delay between the start of each request (e.g., 350ms). This helps distribute requests more evenly and provides an additional safeguard against hitting the rate limit due to bursts.
  • State Management:

    • requestQueue: An array that holds API calls waiting to be made. Each item in the queue is an object containing the function to call (fnToCall), its arguments (args), and the resolve and reject functions of the Promise returned by throttledApiCall.
    • activeRequestsCount: Tracks the number of currently in-flight API requests.
    • requestTimestamps: Stores the timestamps of when each request was dispatched. This array is used to ensure that no more than MAX_REQUESTS_PER_MINUTE are made within any rolling 60-second window.
  • makeApiCall(endpoint, params):

    • This is your actual function that performs the fetch request to the beehiiv API. You’ll need to replace 'Bearer YOUR_API_KEY' with your API key and customize the request as needed.
  • throttledApiCall(endpoint, params):

    • This function acts as a wrapper around your makeApiCall. When you want to make an API request in a rate-limited fashion, you call throttledApiCall instead of makeApiCall directly.
    • It adds your request details to the requestQueue and then triggers processRequestQueue (asynchronously via setTimeout) to attempt to process it.
    • It returns a Promise that will resolve or reject based on the outcome of the actual API call once it’s processed.
  • processRequestQueue():

    • This is the core of the rate limiter. It’s called to attempt to process the next request in the queue.
    • It first checks several conditions:
      1. If the requestQueue is empty, it does nothing.
      2. It prunes requestTimestamps to only keep those within the last 60 seconds.
      3. If activeRequestsCount is already at MAX_CONCURRENT, it returns, waiting for an active request to complete.
      4. If requestTimestamps indicates that MAX_REQUESTS_PER_MINUTE have been made in the last 60 seconds, it calculates the time needed to wait until the oldest request in the window expires and schedules processRequestQueue to run after that delay.
      5. It checks if the time since the last dispatched request is less than MIN_TIME_BETWEEN_REQUESTS_MS. If so, it schedules processRequestQueue to run after the necessary delay.
    • If none of the limiting conditions are met, it dequeues a request from requestQueue, increments activeRequestsCount, records the dispatch timestamp, and executes the API call (fnToCall).
    • When the API call’s Promise settles (either resolves or rejects), the finally block decrements activeRequestsCount and calls setTimeout(processRequestQueue, 0) to ensure the queue processing continues for any subsequent requests.
  • Example Usage (fetchAllPosts):

    • This asynchronous function demonstrates how you might schedule multiple API calls using throttledApiCall. The rate limiter manages the queue and dispatches these calls according to the defined limits, preventing 429 errors.
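The waiting decisions described above (prune old timestamps, enforce the per-minute window, enforce the minimum gap) can be isolated into a pure helper, which makes the logic easy to unit-test. `computeWaitMs` and its option names are illustrative, not part of the example above:

```javascript
// Hypothetical helper: given the dispatch timestamps so far and the current
// time, return how many ms to wait before the next dispatch (0 = go now).
function computeWaitMs(timestamps, now, { maxPerMinute = 180, minGapMs = 350 } = {}) {
  const windowMs = 60000;
  // Keep only timestamps inside the rolling 60-second window.
  const recent = timestamps.filter(t => now - t < windowMs);
  if (recent.length >= maxPerMinute) {
    // Wait for the oldest request to age out of the window (+ small buffer).
    return recent[0] + windowMs - now + 100;
  }
  if (recent.length > 0) {
    const sinceLast = now - recent[recent.length - 1];
    if (sinceLast < minGapMs) return minGapMs - sinceLast;
  }
  return 0; // Safe to dispatch immediately.
}
```

A `processRequestQueue` built on this helper would just call it, and either dispatch immediately or `setTimeout` itself for the returned delay.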

This vanilla JavaScript approach helps prevent 429 errors by managing request flow. Remember to replace YOUR_API_KEY and adjust the API endpoints in the makeApiCall function.

This vanilla JavaScript example focuses on managing request rates within a single Node.js process, like running a script locally on your machine.

It does not cover persistent storage of rate limit states (which would be needed if the application restarts) or distributed rate limiting across multiple instances. For those scenarios, or for handling very large queues robustly, you would typically integrate server-side queuing systems (like Amazon SQS or others mentioned above) often backed by stores like Redis or Valkey.
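As a sketch of what a shared-store limiter might look like, the following implements a fixed-window counter, the same increment-and-expire pattern commonly used with Redis or Valkey, with an in-memory Map standing in for the shared store. All names here are illustrative:

```javascript
// Fixed-window counter sketch. With Redis/Valkey, the get/set pair below
// would become atomic INCR + EXPIRE commands against the shared store,
// so every process instance counts against the same budget.
function makeFixedWindowLimiter(store, { limit = 180, windowMs = 60000 } = {}) {
  return function allow(key, now = Date.now()) {
    // One counter per (key, window) pair; stale windows are simply abandoned
    // (a real Redis deployment would expire them automatically).
    const windowKey = `${key}:${Math.floor(now / windowMs)}`;
    const count = (store.get(windowKey) || 0) + 1;
    store.set(windowKey, count);
    return count <= limit; // true = allow the request, false = reject/queue it
  };
}
```

Note that a fixed window permits brief bursts at window boundaries; the sliding-window approach in the example above is stricter, at the cost of storing every timestamp.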