Error RATE_LIMIT_EXCEEDED when building Next.js app

Hi, I receive this error when trying to build my Next.js app. I'm using p-limit inside getStaticPaths, otherwise I get a timeout on Vercel. How can I solve this limit error?

Hello @julian1

This error means that you are making too many parallel requests at the same time: Overview - DatoCMS

Our JS client manages this automatically by retrying the calls after some time, but perhaps by wrapping the client calls with p-limit you have blocked the auto-retry from the JS client.

I'm fetching the data with a plain fetch, because I'm making the GraphQL requests directly. Is there a way to increase the limit?

@julian1

In that case, these are the limits that may apply: Content Delivery API - Rate limiting - DatoCMS Docs

Just to confirm: are you creating any records inside your project? It seems like your account is just over the record limit, and that could be triggering the error.

No, I'm not, I'm just reading data. And I have this problem on an account with a Professional plan.

In that case, too many requests must be going out in parallel and exceeding the rate limit of 40 requests per second (or 1,000 requests per minute) per API token.
The way to go here would be spacing out the requests so they stay within the rate limits.
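As a rough sketch (assuming plain fetch calls against the Content Delivery API, with runQuery standing in for whatever function actually performs your request), running the queries one after another with a small pause keeps you well under 40 requests per second:

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Run the queries sequentially, pausing ~50ms between calls
// so the build never exceeds 40 requests per second.
const fetchSequentially = async (queries, runQuery) => {
  const results = [];
  for (const query of queries) {
    results.push(await runQuery(query));
    await sleep(50);
  }
  return results;
};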

@m.finamor We are trying to integrate DatoCMS with our current content, but the number of requests we need to make to load all the required data is massive and strains our current Professional plan.

Our modelling is the following:

  • Trip
  • Review
  • Landing Pages
  • Guide

We have 13k trips and 2k guides

To load trips into Next.js we need to:

  • Request all trips (to know their slugs). This is 130 requests, due to the 100 result limit in the GraphQL API
  • Request each trip for individual trip information (13k requests)
  • Request reviews for each trip (13k requests), because DatoCMS does not provide inverse relationships.
  • Same for related Landing Pages.

This is over 40k requests to DatoCMS to just load 13k Trip pages.

If we deploy once per day, this leads to 30 × 40k = 1.2M requests per month, over the 1M limit for our current plan.

We also chose DatoCMS for its multilingual support, so when we translate the 13k programs into another language, that makes 26k published programs, and hence 80k requests per deploy and 2.4M requests per month. This is over the Scale limit.

What can we do to tackle our use case? I don't think our numbers are extreme or outrageous at all, and our startup has plenty of room to grow.

Also, this example here (https://github.com/datocms/nextjs-demo) would be affected by the limitations described above…

Please, can you give us a hint on how to solve our problem?

Hello @development1 and @julian1

The problem here is making a separate request to "filter by trip" every time. The optimal approach would be to fetch everything in a single request, with the respective IDs (and reference IDs); then, when you need to relate a review with a trip (or a landing page with a trip), you can simply compare the already fetched trip IDs against the trip reference ID:

query MyQuery {
  allTrips {
    id # Will be used to compare against the other models' references
    slug
    title
    tripStatus
    tripType
    # ...all other information you may need from a trip
  }
  allReviews {
    content
    trip {
      id # Used to relate the review to the corresponding trip server-side
    }
    # ...all other information you may need from a review
  }
  allLandingPages {
    id
    title
    topTrips {
      id # Used to relate the landing page to the corresponding trips server-side
    }
    # ...all other information you may need from a landing page
  }
}

This would reduce the 40k requests to 130 requests. Furthermore, after adding locales you won't have to make additional requests, as fetching localised content without specifying a locale returns all locales by default.
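As a minimal sketch of how the server-side matching could look, assuming the query above is split into 100-record pages with first/skip (the API token environment variable and field selections here are placeholders):

// Fetch every page of a paginated query (the Content Delivery API returns
// at most 100 records per request).
const fetchAllPages = async (buildQuery, key) => {
  const records = [];
  for (let skip = 0; ; skip += 100) {
    const res = await fetch("https://graphql.datocms.com/", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.DATOCMS_API_TOKEN}`, // placeholder env var
      },
      body: JSON.stringify({ query: buildQuery(skip) }),
    });
    const { data } = await res.json();
    records.push(...data[key]);
    if (data[key].length < 100) return records; // last page reached
  }
};

// e.g. inside getStaticProps:
const trips = await fetchAllPages(
  (skip) => `{ allTrips(first: 100, skip: ${skip}) { id slug title } }`,
  "allTrips"
);
const reviews = await fetchAllPages(
  (skip) => `{ allReviews(first: 100, skip: ${skip}) { content trip { id } } }`,
  "allReviews"
);

// Group reviews by the ID of the trip they reference, so each trip page
// can look up its reviews without any further API calls.
const reviewsByTrip = new Map();
for (const review of reviews) {
  const list = reviewsByTrip.get(review.trip.id) ?? [];
  list.push(review);
  reviewsByTrip.set(review.trip.id, list);
}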

Would that be a solution for your use case?

Another possible solution would be to re-model the schema, for example by nesting all reviews inside their respective trips as blocks or record links. This way you wouldn't have to filter on GraphQL or compare IDs, as every trip would have its reviews nested inside it (the same goes for landing pages, or any other related concept).
This would be a cleaner solution, but it would require restructuring your models.
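With that kind of remodelling, a query for a batch of trips could look something like this (the reviews field name and its subfields are hypothetical and depend on how you link the records):

query {
  allTrips(first: 100) {
    id
    slug
    title
    reviews { # hypothetical link field holding the trip's nested reviews
      content
      rating
    }
  }
}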

@m.finamor Thanks a lot for shedding some light on this! We'll give it a shot, combining this tactic with some Next.js hacks.


This issue may affect any framework that relies on static site generation for all pages.

We encountered the same issue when we developed our SSG site using NextJS. It would be helpful if DatoCMS could provide a section in their NextJS documentation (Integrate DatoCMS with Next.js - DatoCMS) that explains this issue and how to resolve it.

Also, I believe that using a single big query for everything goes against the principles of GraphQL.

Hello @lars and welcome to the community!

Indeed, we wouldn't recommend making big GraphQL requests. However, depending on your SSG framework, you'll have to limit calls so that they don't exceed the rate limit of 40 requests per second.

Some SSGs offer a way to limit this during the build process directly in their config files, but a solution that works regardless is the following:

Wrap the request-making process in a function that re-attempts the request whenever the rate limit is exceeded. This way you can make sure that at build time all of the requests will eventually go through, even if the rate limit is temporarily reached.

A simple recursive function whose termination condition is a 200 response should work: otherwise it calls itself and attempts the request again.

Something like this (the API token environment variable is a placeholder):

const makeGraphQLRequest = async (query) => {
  const response = await fetch("https://graphql.datocms.com/", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.DATOCMS_API_TOKEN}`, // placeholder env var
    },
    body: JSON.stringify({ query }),
  });
  if (response.status === 200) {
    return response.json();
  }
  return makeGraphQLRequest(query); // not successful yet (e.g. rate limited): try again
};

Or, for a non-recursive approach:

const makeGraphQLRequest = async (query) => {
  let response;
  do {
    response = await fetch("https://graphql.datocms.com/", {
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: `Bearer ${process.env.DATOCMS_API_TOKEN}` },
      body: JSON.stringify({ query }),
    });
  } while (response.status !== 200); // keep retrying while the rate limit is hit
  return response.json();
};

Depending on the number of concurrent requests, adding a delay between retry attempts may be a good idea.
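For example, a small variation of the wrapper above that waits before retrying (the one-second delay is just a starting point to tune):

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const makeGraphQLRequest = async (query, delayMs = 1000) => {
  const response = await fetch("https://graphql.datocms.com/", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.DATOCMS_API_TOKEN}`, // placeholder env var
    },
    body: JSON.stringify({ query }),
  });
  if (response.status === 200) {
    return response.json();
  }
  await sleep(delayMs); // wait for the rate-limit window to clear before retrying
  return makeGraphQLRequest(query, delayMs);
};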

Hi @m.finamor

I think it would be helpful to mention the DatoCMS rate limits in the NextJS documentation, because it is easy to reach them even with a small project. NextJS builds everything in parallel very aggressively. There are some settings that can be adjusted, such as worker threads and the number of CPUs, but even so, on a fast machine we hit the limit quickly.

I will test your retry solution and see how it works.

Thank you for your support.

Hi @m.finamor

Your solution did not work for me. I inserted a delay of 5 seconds between each retry.

I wonder if there is a cooldown period after the RATE_LIMIT_EXCEEDED error occurs, and whether the cooldown resets with every request, because I never got a successful request after the first rate limit error, and the NextJS build timed out after 60 seconds since the requests never returned.

Update:
I experimented with a solution from another product. It works by measuring the time interval between requests: if it is less than 200ms, it delays the request by 200ms. I also disabled "workerThreads" and set "cpus" to 1 in next.config.js (see the config sketch at the end of this post) so that the build only uses one thread.

This configuration should prevent any rate limit issues:

40 requests per second: 1000ms / 40 = 25ms interval
1,000 requests per minute: 60000ms / 1000 = 60ms interval

So a delay of 200ms should be sufficient? It also performs much better, but I still encounter the "40 requests per second" rate limit error. The weird thing is that the same query always triggers the rate limit error. It might be random, but still strange.
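For anyone else trying this, the next.config.js part of the setup looks roughly like this (assuming a Next.js version where these experimental options are available):

// next.config.js
module.exports = {
  experimental: {
    workerThreads: false, // don't spawn extra worker threads for the build
    cpus: 1, // render pages on a single CPU so requests aren't fired in parallel
  },
};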

Hello @lars

It depends on whether you are hitting the 1,000 requests per minute limit or the 40 requests per second limit.
For the 1,000 requests per minute limit, you can use different API tokens for different parts of your project to get around it.
But as you said, a 200ms delay should be enough to make sure the build goes through.
I suspect that since all requests start at the same time, you will still hit the 40 requests per second limit quite frequently. Let's say you start with 400 simultaneous requests: in the first second, 40 will go through and 360 will hit the limit; in the next second another 40 go through and 320 hit the limit; then 280, and so on.
The best way to go about this would be to make sure the requests are made sequentially rather than all at once at the start of the build, but the 200ms delay solution should work.

It appears that the issue is related to NextJS 13. After removing the NextJS fetch function from the GraphQLClient, the problem has been resolved. It seems that NextJS fetch may be performing some unexpected actions behind the scenes.

For now, the issue has been resolved.

Thank you for your time.