Hi, I'd like to create a custom subdomain for my assets.
Is it as simple as creating a CNAME for the subdomain and pointing it to https://www.datocms-assets.com/?
So, for example, I would like assets.signature.co.nz/56178/1643101838-20200207-smart_home_12.jpg
to point to https://www.datocms-assets.com/56178/1643101838-20200207-smart_home_12.jpg
Or is there something extra I would need to do to achieve this?
You could, however, do that by setting up a reverse proxy on your (sub)domain that points to our domain. Using something like nginx you can do that with ease, and if you want to implement some caching on top of it, you can use Varnish (or even nginx itself).
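For example, an nginx config for the proxy could be a minimal sketch along these lines (the server name, cache path, and TTLs are placeholders to adapt, and the TLS certificate directives are omitted):

    # Cache up to 1 GB of proxied assets for a week
    proxy_cache_path /var/cache/nginx/dato keys_zone=dato_cache:10m max_size=1g inactive=7d;

    server {
        listen 443 ssl;
        server_name assets.example.com;

        location / {
            # Forward every request to DatoCMS's asset domain
            proxy_pass https://www.datocms-assets.com;
            proxy_set_header Host www.datocms-assets.com;
            proxy_ssl_server_name on;
            # Serve repeat requests from the local cache
            proxy_cache dato_cache;
            proxy_cache_valid 200 7d;
        }
    }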
Can I ask, please, why you need to serve the assets from a custom domain, just to better understand your use case?
To answer your question, though, a CNAME isn't quite the same thing as a reverse proxy, unfortunately. Cloudflare won't let you CNAME to another account's domain.
You CAN do this with a Cloudflare Worker, but beware that doing so can subject you to additional billing from Cloudflare, since you're putting DatoCMS's images behind another layer, paying for Workers invocations and possibly caching fees, etc.
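Something along these lines would be a minimal sketch of the Worker approach (it just rewrites the hostname, assuming your subdomain keeps the same path structure as in your example):

    export default {
      async fetch(request) {
        // Point the incoming URL at DatoCMS's asset domain, keeping path and query
        const url = new URL(request.url);
        url.hostname = "www.datocms-assets.com";
        // Forward the original request's method and headers to the new URL
        return fetch(new Request(url.toString(), request));
      },
    };

You would then attach the Worker to a route on your assets subdomain.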
The reason is that I want to cache the assets, because this website gets millions of visits,
and caching can keep imgix and DatoCMS traffic/costs from rising.
@l.ponticelli, just wanted to follow up on this. Were you able to get the caching working as you wanted with the Cloudflare Workers examples I provided?
Hi @roger, Yes! Thank you for your hints.
Here is a small summary of how I solved this issue:
I made a Cloudflare Worker to proxy DatoCMS's assets and used its fetch caching capabilities.
Then I created a CNAME for a subdomain (assets.mydomain.com) that points to the Worker.
I used that subdomain inside the site, replacing the "datocms-assets.com/<project-id>" part of the URLs for images and files.
So all assets now pass through the Worker, which responds with the cached version on a hit, or fetches from the original URL otherwise.
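In practice, the replacement in the site code amounts to something like this (a sketch; assets.mydomain.com stands in for my real subdomain, and 56178 is the example project ID from earlier in this thread):

    // Rewrite a DatoCMS asset URL so it goes through the Worker subdomain.
    // The Worker re-adds the project ID on its side, so it is dropped here.
    const originalUrl =
      "https://www.datocms-assets.com/56178/1643101838-20200207-smart_home_12.jpg";
    const proxiedUrl = originalUrl.replace(
      "https://www.datocms-assets.com/56178",
      "https://assets.mydomain.com"
    );
    // proxiedUrl === "https://assets.mydomain.com/1643101838-20200207-smart_home_12.jpg"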
Here is the code of the Worker I'm using; I hope it will help someone with similar problems.
export default {
  async fetch(request, env, ctx) {
    try {
      const maxAge = 31536000; // one year, in seconds
      const projectId = "YOUR-DATOCMS-PROJECT-ID";
      const serviceUrl = `https://www.datocms-assets.com/${projectId}`;
      // Current URL
      const url = new URL(request.url);
      // Compose the origin URL from the incoming path and query string
      const path = url.pathname ? url.pathname : "";
      const qs = url.search ? decodeURIComponent(url.search) : "";
      const originUrl = `${serviceUrl}${path}${qs}`;
      // Fetch it from the origin
      let response = await fetch(originUrl, {
        cf: {
          // Always cache this fetch, regardless of content type,
          // for up to maxAge seconds before revalidating the resource
          cacheTtl: maxAge,
          cacheEverything: true,
        },
      });
      // Must use the Response constructor to inherit all of response's fields
      response = new Response(response.body, response);
      // Downstream caches respect Cache-Control headers; s-maxage controls how
      // long shared caches may keep the response. Any changes made to the
      // response here will be reflected in the cached value.
      response.headers.append("Cache-Control", `s-maxage=${maxAge}`);
      return response;
    } catch (error) {
      return new Response("Error thrown " + error.message);
    }
  },
};
I have just one doubt. I see that Cloudflare's examples about the usage of fetch in Workers often reuse the incoming request, like this one. Simplifying a bit:
export default {
  async fetch(request) {
    // `fetch` is called passing down the incoming `request`
    let response = await fetch(request, {
      // Some other options
    });
    return response;
  },
};
I took that as a best practice: that way, the headers present in the incoming request are used for the fetch. That makes sense to me because those headers can include Cache-Control and others, which can be helpful to the server receiving the request.
In your case, you could try reusing the request headers like this:
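Something like this, as a sketch that keeps the originUrl and cf caching options from your Worker:

    // Build a request for originUrl that carries over the incoming request's
    // method and headers (including any Cache-Control hints from the client)
    const originRequest = new Request(originUrl, request);
    let response = await fetch(originRequest, {
      cf: {
        cacheTtl: maxAge,
        cacheEverything: true,
      },
    });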