Are there performance implications when using frameless blocks?

Hi,

We’re building a website which contains a lot of (more or less) similar models. They all have many fields in common, but some are specific to each model. We would like to use blocks to group the common fields and include them as a frameless “Modular Content (Single Block)” field in the different models. This approach would also help with type generation, as we could use fragments to fetch those blocks and reuse them to type the common fields. Our question: could the use of such frameless blocks lead to performance implications when querying data via GraphQL, compared to defining the fields directly on the models themselves?

Hey @juerg.hunziker1 and welcome to the community!

Using a Single Block in frameless mode to bundle the common fields is exactly what the feature was designed for, and it plays nicely with GraphQL fragments and type generation. Frameless is purely a presentation change in the editor, so authors see those fields “as if” they were on the model, but under the hood it is still a Modular Content field. You can see the feature overview here: https://www.datocms.com/blog/expanding-modular-content-with-single-block-and-frameless-mode and the Single Block docs here: https://www.datocms.com/docs/content-delivery-api/modular-content-fields.
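As a sketch of how this can look (the model, block, and field names below are hypothetical, not from your project), you would define one fragment for the shared block and spread it wherever the frameless field appears:

```graphql
# Hypothetical block holding the fields shared by all models
fragment CommonFields on CommonFieldsRecord {
  title
  slug
  seoDescription
}

query ArticlePage {
  article {
    # "common" is the frameless Single Block field; with Single Block
    # it returns a single object (or null), not an array of blocks
    common {
      ...CommonFields
    }
    # model-specific fields live alongside it as usual
    body
  }
}
```

Because the fragment has a single, known type condition, generated types for `CommonFields` can be reused across every model that embeds the block.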

On performance, there is a small theoretical difference you should be aware of. In our GraphQL complexity model, a Modular Content field has a base cost multiplier compared to plain top-level fields. Concretely, “Modular content field: 5 × inner fields’ cost.” With Single Block you also avoid the array wrapper, so you’ll receive a single object or null, which reduces a bit of payload and code noise. Details are in the complexity guide: https://www.datocms.com/docs/content-delivery-api/complexity, and the Single Block return shape is shown here: https://www.datocms.com/docs/content-delivery-api/modular-content-fields.
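To make the multiplier concrete, here is a rough cost annotation of a query, assuming the documented “5 × inner fields’ cost” rule (field names are hypothetical, and the exact accounting in your project may differ slightly):

```graphql
query CostSketch {
  article {
    headline    # plain field: cost ~1
    publishDate # plain field: cost ~1
    common {    # Modular Content (Single Block): 5 × inner cost
      title     # inner field: cost ~1
      slug      # inner field: cost ~1
    }           # subtree ≈ 5 × 2 = 10
  }
}
# Total ≈ 1 + 1 + 10 = 12, versus ≈ 4 if the two shared
# fields were defined directly on the model instead.
```

Even so, both figures are far below the complexity budget, which is why the difference rarely matters in practice.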

In practice this overhead is negligible for normal queries. The biggest drivers of performance are page size, how many nested relations you fetch, and whether you turn on deep filtering. Also remember that responses are served from our CDN and are cacheable: your query will be cached unless your compressed GraphQL request body exceeds 8 KB. The technical limits and caching notes are here: https://www.datocms.com/docs/content-delivery-api/technical-limits.

Given your goals, I would proceed with frameless Single Block for the shared fields. Allow only one block type and mark the field as required to keep the union narrow, keep using GraphQL fragments for those common fields, and fetch only what you need in each view. If you want, share an example of your intended query and I can sanity-check the complexity headers and suggest small tweaks.
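For instance (again with hypothetical model names), the same fragment can type the shared fields across several different models, which is the reuse you described:

```graphql
fragment CommonFields on CommonFieldsRecord {
  title
  seoDescription
}

query TwoModels {
  article {
    # frameless Single Block field on the Article model
    common { ...CommonFields }
  }
  landingPage {
    # the same block, embedded on a different model
    common { ...CommonFields }
  }
}
```

Keeping each `common` field restricted to that one block type and required means the fragment spread never needs inline type conditions or null-union handling beyond the field itself.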

So yeah, it is a tiny increase in complexity, but unless you are nesting several layers of modular content, it should be completely negligible for performance, and it will be 100% negligible if you make sure your query is being cached.

I just wanted to add one more thing to @m.finamor’s excellent and detailed reply (thank you!).

For the typical website (say, a marketing page or blog or such), you should be able to shield your website’s visitors from any sort of backend complexity, whether that’s from a GraphQL query or anything else. Typically this is done through some sort of frontend caching, so that you’re sending them pre-generated and cached HTML (or Next.js JSON snippets or React Server Components or whatever). This sort of caching can be done by your frontend framework and/or a CDN.

Unless there is a need for real-time data from your project (and most of the time, there is not), it will be far faster, cheaper, and safer (since you’re not exposing your CDA API tokens) to do it that way than to have each visitor query the CDA directly from their own browser.

With proper caching, you completely detach the query complexity and response time from your visitor experience. Even an exceptionally complicated query that takes 20 seconds (which would be an extreme worst-case scenario, mind you) could thus be turned into a 0.05-sec cache fetch instead, and your visitors would be none the wiser.

Let us know if you’d like to discuss the frontend part of that more!