We have a Next.js project that we deploy to Vercel. There are two projects in Vercel that point to the same GitHub repo: “prod” and “testing”. “prod” is the main website deployment; “testing” is on its own subdomain and is the staging deployment.
Right now, both “testing” and “prod” point to the primary env in Dato, the only difference being the X-Include-Drafts header config.
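To make that concrete, here is a minimal sketch of how the two deployments could differ only in that one header when querying the DatoCMS GraphQL Content Delivery API. The function name and the `includeDrafts` wiring are illustrative assumptions, not our actual code:

```typescript
// Build the request headers for the DatoCMS Content Delivery API.
// "testing" passes includeDrafts = true, "prod" passes false.
export function datoHeaders(
  apiToken: string,
  includeDrafts: boolean
): Record<string, string> {
  const headers: Record<string, string> = {
    Authorization: `Bearer ${apiToken}`,
    'Content-Type': 'application/json',
  };
  if (includeDrafts) {
    // Only the "testing" deployment sets this header.
    headers['X-Include-Drafts'] = 'true';
  }
  return headers;
}

// Usage, e.g. inside a Next.js data-fetching helper:
// fetch('https://graphql.datocms.com/', {
//   method: 'POST',
//   headers: datoHeaders(process.env.DATO_API_TOKEN!, isTestingDeployment),
//   body: JSON.stringify({ query }),
// });
```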
Every commit to our main branch on GitHub is continuously deployed to “testing”. “prod” is configured with an Ignore Build Step setting, so that we only build when a commit follows a certain format (the result of a manual process, where we decide when to release).
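For illustration, the Ignored Build Step check might be sketched like this. Vercel exposes the commit message as `VERCEL_GIT_COMMIT_MESSAGE`, and exiting with code 1 proceeds with the build while 0 skips it; the `release:` prefix here is a hypothetical placeholder for whatever format the manual release process actually uses:

```typescript
// Sketch of an Ignored Build Step script for the "prod" Vercel project.
// Assumption: release commits start with "release:" (placeholder format).
export function isReleaseCommit(message: string): boolean {
  return /^release:/.test(message.trim());
}

// Wire-up, run as the Ignored Build Step command (e.g. `node check-release.js`):
// process.exit(isReleaseCommit(process.env.VERCEL_GIT_COMMIT_MESSAGE ?? '') ? 1 : 0);
```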
In local dev / PRs we can use migrations on DatoCMS sandbox environments to safely test non-additive/destructive changes to the CMS schema. Here’s where it gets difficult:
When a PR is merged to the main branch, we would either need to:
- run the migration(s) on primary to make “testing” work again, which in turn breaks ISR on “prod” OR
- live with “testing” being broken due to an inconsistency between the schema and the code, until we deploy to “prod” and have it in sync again
In another project we “solved” this by having a dedicated sandbox called “testing-env”. This sandbox is forked freshly from the primary env every full hour by a GitHub action, which then automatically runs the migrations from the main git branch on it. The “testing” deployment uses this env for its data. This works, but it has some downsides:
- You can’t manually add complex test content to “testing”, because it gets overwritten every full hour. So you have to add it via migrations, which is cumbersome, and --autogenerate in the Dato CLI (while being awesome, btw!) doesn’t do content.
- We wrote some tooling to aid resetting “testing-env” and running all the migrations. Maintaining your own tooling sucks.
- If you’re working on a feature branch, you need to point that branch to “testing-env”
- You have to always remember to switch environments in the Dato UI before you work on something
- Even though everything is documented, it’s a lot to understand, and devs regularly mess up their stuff (e.g. by forking their sandboxes from primary). It’s confusing for new devs, and even some of our senior devs always come to me before they dare touch any of this.
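For reference, the hourly reset described above could be sketched as a GitHub Actions workflow roughly like this. Treat it as a sketch: the DatoCMS CLI command names and flags should be double-checked against the CLI docs (in particular whether `migrations:run --destination` forks from primary the way your setup expects), and the environment name and secret are placeholders:

```yaml
name: refresh-testing-env
on:
  schedule:
    - cron: "0 * * * *" # every full hour

jobs:
  refresh:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4 # main branch, where the migrations live
      - run: npm install -g @datocms/cli
      # Drop the stale fork, then re-create it from primary and run all
      # pending migrations from the main branch against the fresh fork.
      - run: |
          datocms environments:destroy testing-env || true
          datocms migrations:run --destination=testing-env
        env:
          DATOCMS_API_TOKEN: ${{ secrets.DATOCMS_API_TOKEN }}
```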
I don’t love this solution, but I don’t see any other good way… do you?
Btw, having “prod” continuously released is not an option, because we need to batch feature PRs and I want to avoid merge problems as much as possible.
How do you all deal with this?
Hello @moritz.jacobs, I’m not sure I understood the issue entirely, so please correct me if I’m wrong:
It seems like you have two different deployments of the code, a prod one and a test one, both fetching from the same environment, only differing by the X-Include-Drafts header.
However, I did not understand the issue you mentioned. Could you elaborate a little bit further? From what I understood, when you merge a code change to the main branch you run into some issue, but I’m not sure I understood what it is.
Sorry for not making myself clear here. Maybe an example helps:
As I said, we have two deployments, “prod” and “test” both of which use the primary Dato environment.
Let’s say I have a model `BlogPost` which has a field `text` (multiline string with HTML editor). My colleague is working on a feature that replaces the `text` field with a field `content` that uses structured text to hold the blog posts.
He does this in a feature branch in git and has a sandbox environment in Dato. This environment is created using a migration script he wrote, that will change the structure of the schema (add the new field, migrate all the content and delete the old one).
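A migration like that might look roughly like the sketch below. The client calls follow the general shape of the DatoCMS CMA client, but the exact method names and parameters should be checked against a CLI-generated migration; the naive HTML-to-structured-text conversion is purely illustrative (a real migration would parse the HTML properly, e.g. with the datocms-html-to-structured-text package):

```typescript
// Minimal "dast" structured text shape, enough for this sketch.
type StructuredText = {
  schema: 'dast';
  document: { type: 'root'; children: unknown[] };
};

// Naive conversion: wrap the old field value in a single paragraph node.
export function toStructuredText(text: string): StructuredText {
  return {
    schema: 'dast',
    document: {
      type: 'root',
      children: [
        { type: 'paragraph', children: [{ type: 'span', value: text }] },
      ],
    },
  };
}

// `client` is typed loosely here to keep the sketch self-contained; in a real
// migration it would be the CMA client passed in by the Dato CLI.
export default async function migrate(client: any): Promise<void> {
  // 1. Add the new structured text field alongside the old one.
  await client.fields.create('blog_post', {
    label: 'Content',
    api_key: 'content',
    field_type: 'structured_text',
  });

  // 2. Copy every record's old `text` value into `content`.
  for await (const item of client.items.listPagedIterator({
    filter: { type: 'blog_post' },
  })) {
    await client.items.update(item.id, {
      content: toStructuredText(item.text),
    });
  }

  // 3. Only now drop the old field.
  await client.fields.destroy('blog_post::text');
}
```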
He puts his code up as a PR on GitHub, and it is reviewed and approved. We merge that PR, and the “test” deployment is immediately rebuilt. Since it points to the primary Dato env, it now has a mismatch between the frontend code and the CDA schema.
To mitigate this, we could run the migration on a fresh sandbox and promote that one to primary, BUT: then the “prod” deployment has a mismatch between frontend code and schema, PLUS all of the old blog posts’ contents would be gone, since they were replaced during the migration.
There must be a good way to deal with this, but I’m not sure what it would be. I was just curious how other people deal with similar situations.
Thank you for the clarification!
I see! In that case, what I would suggest from our end is splitting the deployments into three instead of two. Instead of just having “Production” and “Testing”, I think having:
- “Production” pointing to the primary environment without drafts
- “Preview” pointing to the primary environment with drafts
- “Development” pointing to a sandbox environment that is forked from primary before a migration, as you said
This would allow for more flexibility and could simplify some parts of the development flow.
As you also said in the original post, this can still prove challenging when it comes to content migrations between environments (as they are not covered by the auto-generation of migrations at the moment), but this setup seems to be where a safe and compartmentalised workflow works best.
As for the content migrations between environments: unfortunately, there is no way around manually writing migration scripts for now. However, those scripts have become easier to write since we made record and block IDs consistent across all environments.