Dato CLI migrations

Hi, I am new to DatoCMS, and I am getting a bit stuck with integrating data into our stack.

The big problem to solve:
We are a team of 3 developers, and we are struggling with environments. We work in standard 2-week sprints, so at the start of a sprint we create an environment to work in safely without disrupting production. By the time we get to the end of the sprint 2 weeks later, our environment is out of date with production, and we can't promote it without losing all the content our editors have added or updated.

If we use maintenance mode, we block the content editors from doing their job; they would only have one afternoon every two weeks to do all their content updates.

My attempt to solve this issue
After looking through the docs, I found the CLI and thought it could help. The idea is that we create scripts for all the models and blocks. Each of us would get an environment, and ‘production’ and ‘staging’ environments would be created as well.

Using the Dato CLI, we can then write scripts and run them against our own environments; once the changes are collected in staging, the scripts would update staging, and once tested and approved, they would be run against production.

The idea is that the scripts apply only the schema changes that have been made, leaving the content itself intact.
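Concretely, the flow I had in mind with the CLI looks roughly like this (a sketch based on my reading of the @datocms/cli docs; the migration name and environment names are just examples):

```sh
# write a new migration script (it ends up in the project's migrations folder)
datocms migrations:new 'add hero banner block'

# fork the primary environment into a sandbox and run the pending migrations on it
datocms migrations:run --destination=feature-sandbox

# later, run the same approved migrations against a staging fork before release
datocms migrations:run --destination=staging-next
```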

The problem:
The problem I am running into is that you can only run a script once. If you are creating a new block, for example, you have to get it 100% correct before you run your script. Let’s imagine you misspelt something; you can’t fix it and re-run the script.

If we had a script per block and wanted to add to it or update it at a later stage, we couldn’t just extend the existing block script; we would have to create a whole new script just to add a field.

My questions

  1. Has anyone run into a scenario like mine and come up with a workflow to work around it?
  2. Is there a way to run scripts multiple times?

Hello @jpnothard

The script migration tool in the CLI is generally advised for schema migrations between environments, not for content migrations. Content migrations between environments are generally ill-advised, as blocks and linked records have different IDs between environments (which is also what is causing the problem on your end when running the script multiple times).
So the safe and recommended way to “transfer” content between environments is to fork and promote them when necessary.
The best way to go here would probably be to fork the main environment when you start working on it, and enable maintenance mode on the main environment for the duration of your work (not the full two weeks, just a partial day-stretch). Once done, promote that environment to be the new main environment and, if necessary, block access to unfinished models or “work-in-progress” sections through roles; this way only developers can access the unfinished parts of the environment.
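For reference, the rough shape of that workflow with the CLI (assuming the standard @datocms/cli environment commands; `sprint-42` is just an example sandbox name):

```sh
# freeze content editing on the primary environment for a short window
datocms maintenance-mode:on

# fork the primary environment into a sandbox and do the schema work there
datocms environments:fork main sprint-42

# ...apply and test your schema changes against sprint-42...

# once approved, promote the sandbox to become the new primary environment
datocms environments:promote sprint-42

# re-open editing for your content editors
datocms maintenance-mode:off
```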
Another solution here would be to add some logic to the script to detect an update in a block through a UUID, but this can become quite complex very fast (especially if you have nested blocks) and is probably not the way to go.

@m.finamor, thank you for your response, and such a quick one too. I very much appreciate it.

I would like to clarify that I am not attempting to migrate copy between environments. I am only trying to transfer the schema additions and updates.

@jpnothard Sorry, I thought you meant block value updates, not schema updates!

In that case, some checks will be necessary inside the migration script before creating a new block/model.
The checks can be summarised in a helper that, before creating a block/model, verifies it doesn’t already exist in the target environment:
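For example, something along these lines (a rough sketch based on the Node migration format that `datocms migrations:new` generates; the `hero_banner` block and its `title` field are just placeholders for your own schema):

```js
'use strict';

/** @param client { import("@datocms/cli/lib/cma-client-node").Client } */
module.exports = async (client) => {
  // look the block up by api_key instead of assuming it doesn't exist yet
  const itemTypes = await client.itemTypes.list();
  let heroBlock = itemTypes.find((it) => it.api_key === 'hero_banner');

  if (!heroBlock) {
    heroBlock = await client.itemTypes.create({
      name: 'Hero banner',
      api_key: 'hero_banner',
      modular_block: true,
    });
  }

  // same idea for fields: only create the field if it isn't already there
  const fields = await client.fields.list(heroBlock.id);

  if (!fields.some((f) => f.api_key === 'title')) {
    await client.fields.create(heroBlock.id, {
      label: 'Title',
      api_key: 'title',
      field_type: 'string',
    });
  }
};
```

With guards like these, the same migration should be safe to re-run against an environment where the block already exists.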

This way you can avoid conflicts when running a script against a block/model schema that already exists.