I have noticed an undocumented endpoint in your code (js-rest-api-clients): /pusher/authenticate – it’s used in “subscribeToEvents”, which in turn is used in JobResultsFetcher. All of them are undocumented.
Those endpoints seem to offer a very interesting feature set: it seems one can listen to data changes (I know this is possible via GraphQL subscriptions, but this endpoint seems to operate on a project-wide level).
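For context, here is roughly how I imagine one could hook into it from the outside. This is only a sketch: the app key, cluster, channel name and event names are placeholders I have not confirmed – they’d have to be read out of the client source and could change at any time.

```ts
// Sketch only: authenticating against the undocumented /pusher/authenticate
// endpoint with the public pusher-js library. App key, cluster, channel and
// event names below are placeholders, not confirmed values.
import Pusher from 'pusher-js';

const apiToken = process.env.DATOCMS_API_TOKEN!;

const pusher = new Pusher('UNKNOWN_APP_KEY', {
  cluster: 'eu', // placeholder
  authEndpoint: 'https://site-api.datocms.com/pusher/authenticate',
  auth: { headers: { Authorization: `Bearer ${apiToken}` } },
});

// Channel naming is a guess; the real names live in the JS client source.
const channel = pusher.subscribe('private-some-project-channel');
channel.bind_global((eventName: string, data: unknown) => {
  console.log('received', eventName, data);
});
```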
Welcome to the forum, and thanks for the suggestion! Generally speaking, we do have certain undocumented endpoints / features / etc. that are meant for internal use. They’re not really “secret” or sensitive, but they’re not meant for public use because they can be unpolished, not tested thoroughly enough outside of our specific use case (i.e., talking to some specific part of the CMS UI), or just because their internals are unstable and might change with a future code or provider change, etc.
What it ultimately comes down to is that as a small team of about a dozen people, we have to pretty carefully choose what we want to officially publish and support as a feature. Everything we make public and officially document means dev time devoted to supporting it, fixing bugs, etc., and if we can’t guarantee that quality standard for a feature, we won’t release it.
As with any web or HTTP service, yes, you can reverse engineer parts of it and make it work with your own implementation… but you’d do so at your own risk and without official support. Some of our customers do that for non-essential uses if the trade-off is worth it for them and they have the staff to keep up with unexpected changes, etc.
That said, though… was there some specific use case you were hoping for that GraphQL subscriptions don’t currently meet? If you can explain what you want to accomplish that you currently can’t, maybe we can see if there’s a way we can help you with that?
Thanks for the quick response. First of all: I totally understand that you choose not to make some endpoints public. Our use case is the following:
Imagine a large website (around 1 million pages) that is rendered fully static. Instead of requesting data for every page individually, we mirror the entire dataset to the file system and load it once into memory. This speeds things up tremendously (and also simplifies some things, if done well).
Now, even with large static websites, you want to provide a fast preview experience to authors. That’s where the pusher endpoint comes in. Initially you load the data (just like at build time), but then immediately start watching for data changes. When a change happens, you update your local data and refresh the preview if necessary.
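To make that concrete, the core of what we do looks roughly like this (heavily simplified; `fetchAllRecords` and `watchChanges` are just stand-ins for whatever bulk-export and change-notification mechanism the CMS offers):

```ts
// Heavily simplified sketch of the "mirror everything, then watch" pattern.
// fetchAllRecords / watchChanges are hypothetical stand-ins for the CMS's
// bulk export and change-notification mechanisms.
type CmsRecord = { id: string; [field: string]: unknown };

type ChangeEvent =
  | { type: 'upsert'; record: CmsRecord }
  | { type: 'delete'; id: string };

const store = new Map<string, CmsRecord>();

async function startPreviewServer(
  fetchAllRecords: () => Promise<CmsRecord[]>,
  watchChanges: (onChange: (event: ChangeEvent) => void) => void,
  refreshPreview: (id: string) => void,
) {
  // 1. Initial hydration, exactly like a full static build would do.
  for (const record of await fetchAllRecords()) {
    store.set(record.id, record);
  }

  // 2. From now on, keep the in-memory mirror up to date incrementally.
  watchChanges((event) => {
    if (event.type === 'upsert') {
      store.set(event.record.id, event.record);
      refreshPreview(event.record.id);
    } else {
      store.delete(event.id);
      refreshPreview(event.id);
    }
  });
}
```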
We have tested this approach with several hundred MB of data and it’s very promising. We’ve also tested it with several CMSes – not all of them handle this requirement well. Dato is near ideal because of a fast (and well-cached) API plus the mentioned watcher endpoint (I found that in an older version of the JS client – yeah, we are long-time Dato users – which I still use because of that feature).

Another cool approach is sync: in Contentful you can say “give me everything that changed since xxx” (deletions, updates, additions). Unfortunately, Contentful supports this for published data only, which is not ideal for a preview server. I personally prefer the notification mechanism of Dato, but a combination of both would be even more powerful.
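For comparison, the Contentful-style delta sync looks roughly like this with their JS SDK (from memory, so treat the details as approximate; space ID and token are placeholders):

```ts
// Rough sketch of Contentful's delta sync for comparison
// (works on published content only).
import { createClient } from 'contentful';

const client = createClient({
  space: 'YOUR_SPACE_ID',
  accessToken: 'YOUR_DELIVERY_TOKEN',
});

// Initial run: fetch everything once and remember the sync token.
const initial = await client.sync({ initial: true });
let nextSyncToken = initial.nextSyncToken;

// Later runs: only additions, updates and deletions since the last token.
const delta = await client.sync({ nextSyncToken });
console.log(
  delta.entries.length, 'changed entries,',
  delta.deletedEntries.length, 'deleted entries',
);
nextSyncToken = delta.nextSyncToken;
```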
Thank you for the detailed clarifications, @tools-dt! You probably have more of an understanding of those systems than I do, actually, and have probably used Dato for longer than I have!
I think this is a fascinating use case, and I hope the devs take notice of it and see if they can expand some of the push notifications in that direction.
In the meantime, though, I just want to bring up some alternatives (you’ve probably considered all these already, but just in case…):
First and most similarly, you can of course configure webhooks to fire on record update/publish.
Some of our customers also write different fetchers for preview usage (i.e., if the site is running on localhost or in a preview environment, fetch directly from GraphQL instead of from the cached build).
You can also use something like ISR or our just-released cache tags support, which is designed for this very use case (precise invalidations of changed content); there’s a quick sketch of how that can fit together right after this list.
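Just as a rough illustration of how webhooks and cache tags can fit together in a Next.js App Router project (the payload shape with `cache_tags` below is an assumption on my part, adapt it to whatever you configure in the webhook settings):

```ts
// app/api/datocms-webhook/route.ts
// Rough sketch: a webhook hits this route on record publish/update and we
// invalidate the corresponding Next.js cache tags. The `cache_tags` field is
// an assumed payload shape – match it to your own webhook configuration.
import { NextResponse } from 'next/server';
import { revalidateTag } from 'next/cache';

export async function POST(request: Request) {
  const payload = await request.json();

  // Hypothetical field carrying the tags to invalidate.
  const tags: string[] = payload.cache_tags ?? [];

  for (const tag of tags) {
    revalidateTag(tag);
  }

  return NextResponse.json({ revalidated: tags });
}
```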
Thanks, @roger – I’m aware of the alternatives you mentioned, with cache tags being the most innovative feature. However, even if we switched to GraphQL (something I wouldn’t favor in the scenario I mentioned), an additional layer would still be needed. Only an endpoint like “give me the invalidation tags since xyz” would allow for “on build” scenarios.
That said, I’m looking forward to your next innovations (or the public reveal of secret underlying features like your pusher endpoints).