Custom subdomain for assets?

Hi, Iā€™d like to create a custom subdomain for my assets.
Is this as simple as creating a CNAME for the subdomain and pointing it to https://www.datocms-assets.com/?

So for example I would like
assets.signature.co.nz/56178/1643101838-20200207-smart_home_12.jpg

to point to
https://www.datocms-assets.com/56178/1643101838-20200207-smart_home_12.jpg

Or is there something extra I would need to do to achieve this?

Hello @jacktcunningham_publisher

A CNAME would not work, unfortunately.

You could, however, do that by setting up a reverse proxy on your (sub)domain that points to our domain. With something like nginx you can do that easily, and if you want to add some caching on top, you can use Varnish (or even nginx itself).
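For example, a minimal nginx sketch along those lines could look like the following. This is untested and the server name, project ID, and cache path are placeholders, not values from this thread:

```nginx
# Cache storage for proxied assets (path and sizes are examples)
proxy_cache_path /var/cache/nginx/assets levels=1:2 keys_zone=assets:10m
                 max_size=1g inactive=7d;

server {
    listen 443 ssl;
    server_name assets.example.com;

    location / {
        # Forward everything to the DatoCMS assets domain
        proxy_pass https://www.datocms-assets.com/;
        proxy_set_header Host www.datocms-assets.com;
        proxy_ssl_server_name on;   # send SNI so the upstream TLS handshake succeeds
        proxy_cache assets;
        proxy_cache_valid 200 7d;   # keep successful responses for a week
    }
}
```

You would still need your own TLS certificate for the subdomain, since the proxy terminates HTTPS itself.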

Or, if you want to completely replace the domain www.datocms-assets.com/ and self host your assets on a custom server/domain you can see our options here, both subject to a fee:
Custom AWS S3 storage - DatoCMS
Custom Google Cloud Storage - DatoCMS

Hi
I have the same problem.

Youā€™re saying that a simple CNAME wonā€™t work,
Even if it is proxied on Cloudflare?
This would act as a reverse proxy automatically i think.


replacing imgix.net in the example with datocms-assets.com

In case a Worker is needed, could you provide an implementation to do it?

Thank You

Can I ask, please, why you need to serve the assets from a custom domain, just to better understand your use case?

To answer your question, though, a CNAME isnā€™t quite the same thing as a reverse proxy, unfortunately. Cloudflare wonā€™t let you CNAME to another accountā€™s domain.

You CAN do this with a Cloudflare Worker, but beware that doing so can subject you to additional billing from Cloudflare, since youā€™re putting DatoCMSā€™s images behind another layer, paying for Workers invocations and possibly caching fees, etc.

To fetch without caching, see this example:
Custom Domain with Images Ā· Cloudflare Workers docs

If you want to cache DatoCMSā€™s images behind Cloudflare, probably something like this example is what youā€™d want: Cache using fetch Ā· Cloudflare Workers docs

But doing so means you would also have to separately manage Cloudflareā€™s cache and invalidations, which can be tricky.
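To give an idea of what managing those invalidations involves, here is a hedged sketch of a helper that builds a purge-by-URL call for Cloudflare's REST API (`POST /client/v4/zones/{zone_id}/purge_cache` with a `{"files": [...]}` body). The zone ID, token, and function name are placeholders, not anything from this thread:

```javascript
// Sketch: build the request for Cloudflare's purge-by-URL endpoint.
// You would pass the returned url/init to fetch() with real credentials.
function buildPurgeRequest(zoneId, apiToken, urls) {
  return {
    url: `https://api.cloudflare.com/client/v4/zones/${zoneId}/purge_cache`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      // Cloudflare expects the exact cached URLs to purge
      body: JSON.stringify({ files: urls }),
    },
  };
}
```

The tricky part is that you must purge the proxied URLs (your subdomain), not the original datocms-assets.com ones, and you need to know which assets changed.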

The reason is that I want to cache the assets: this website gets millions of visits, and caching can keep imgix and DatoCMS traffic/costs from rising.


@l.ponticelli, just wanted to follow up on this. Were you able to get the caching working as you wanted with the Cloudflare Workers examples I provided?

Hi @roger, yes! Thank you for your hints.
Here is a small summary of how I solved this issue:

Iā€™have made a Cloudflare Worker to proxy DatoCMSā€™s assets and used its fetch caching capabilities.
Then I have created a CNAME with a subdomain ( asssets.mydomain.com ) that point to the worker.

I used that subdomain inside the site, replacing the “datocms-assets.com/projectid” part of the URL for images and files.
All assets now pass through the worker, which responds with the cached version on a hit, or fetches from the original URL otherwise.
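That URL-replacement step could be done with a small helper like this one (a sketch; the project ID and subdomain are placeholders, and the worker adds the project ID back on its side):

```javascript
// Hypothetical prefixes: the DatoCMS URL including the project ID,
// and the custom subdomain served by the worker.
const DATOCMS_PREFIX = "https://www.datocms-assets.com/12345";
const CUSTOM_PREFIX = "https://assets.mydomain.com";

// Rewrite a DatoCMS asset URL to the custom subdomain; leave other URLs alone.
function toCustomAssetUrl(url) {
  return url.startsWith(DATOCMS_PREFIX)
    ? CUSTOM_PREFIX + url.slice(DATOCMS_PREFIX.length)
    : url;
}
```

Query strings (imgix parameters) survive this rewrite unchanged, since only the prefix is swapped.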

Here is the code of the worker I’m using; I hope it helps someone with similar problems.

export default {
  async fetch(request, env, ctx) {
    try {
      const maxAge = 31536000; // one year, in seconds
      const projectId = "YOUR-DATOCMS-PROJECT-ID";
      const serviceUrl = `https://www.datocms-assets.com/${projectId}`;

      // Compose the origin URL from the incoming request's path and query string
      const url = new URL(request.url);
      const path = url.pathname || "";
      const qs = url.search ? decodeURIComponent(url.search) : "";
      const originUrl = `${serviceUrl}${path}${qs}`;

      // Fetch from origin, telling Cloudflare to cache the response
      // regardless of content type, for up to maxAge seconds before revalidating
      let response = await fetch(originUrl, {
        cf: {
          cacheTtl: maxAge,
          cacheEverything: true,
        },
      });

      // Must use the Response constructor to get a mutable copy
      // (headers on the fetched response are immutable)
      response = new Response(response.body, response);

      // Tell downstream shared caches how long they may keep this response
      response.headers.append("Cache-Control", `s-maxage=${maxAge}`);
      return response;
    } catch (error) {
      return new Response("Error thrown " + error.message, { status: 500 });
    }
  },
};

@l.ponticelli , glad that worked, and thanks for sharing that code! I love it when users on the forum here help each other :slight_smile:

Hi @l.ponticelli ,

Just one doubt. I see that Cloudflare’s examples of using fetch in Workers often reuse the incoming request, like this one. Simplifying a bit:

export default {
  async fetch(request) {
    // `fetch` is called passing down the incoming `request`
    let response = await fetch(request, {
     // Some other options
    });

    return response;
  },
};

I took that as a best practice: that way, the headers present in the request are used for the fetch. That makes sense to me, because those headers can include Cache-Control and others that can be helpful to the server receiving the request.

In your case, you could try reusing the request headers like this:

response = await fetch(originUrl, {
  cf: {
    cacheTtl: maxAge,
    cacheEverything: true,
  },
  headers: request.headers,
});

What do you think?


@sistrall I guess you’re right. I will definitely add the incoming request headers as you suggest, thanks.
