Since I don’t have a small reproduction case, I’ll do my best to describe the bug we encountered.
Fork environment staging to staging-fork
Make changes in staging-fork:
Manually delete a field (e.g. title)
Re-create a field with the same api_key: title
Generate a migration npx datocms migrations:new 'update-title' --autogenerate=staging-fork:staging
Apply the migration
Migration fails
The reason it fails is that the create statement for the field with api_key title runs before the destroys.
We get a block of create migrations, then a block of destroys, and finally a block of updates. In this case, however, the existing title field should be destroyed before it is re-created.
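If the generator made the ordering dependency-aware, a post-processing pass that moves destroys ahead of creates would avoid the collision. A minimal sketch of that idea (the operation objects here are hypothetical, not the autogenerator's actual internal representation):

```javascript
// Hypothetical operation list, in the order the autogenerator emits it:
// creates first, then destroys, then updates.
const ops = [
  { kind: 'create', apiKey: 'title' },
  { kind: 'destroy', apiKey: 'title' },
  { kind: 'update', apiKey: 'body' },
];

// Reorder so destroys run first: a destroy of api_key X must precede
// a create of the same api_key X. Array.prototype.sort is stable, so
// relative order within each block is preserved.
const order = { destroy: 0, create: 1, update: 2 };
const reordered = [...ops].sort((a, b) => order[a.kind] - order[b.kind]);

console.log(reordered.map((op) => op.kind).join(','));
// destroy,create,update
```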
We solved our issue by manually moving the destroy calls to the top of the migration file.
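For anyone hitting the same thing, this is roughly the shape of the reordered migration: destroy the old field before re-creating it. The `client.fields.destroy` / `client.fields.create` calls mirror the DatoCMS migration client's fields API, but the field payload is illustrative, and `client` below is a stub so the ordering can be shown without hitting the API:

```javascript
// Stub client that records calls instead of talking to DatoCMS.
const calls = [];
const client = {
  fields: {
    destroy: async (fieldId) => calls.push('destroy'),
    create: async (itemTypeId, body) => calls.push('create'),
  },
};

// The migration body, with the destroy moved above the create.
const migration = async (client) => {
  // Destroy the existing `title` field first...
  await client.fields.destroy('article::title');
  // ...then re-create a field with the same api_key.
  await client.fields.create('article', {
    label: 'Title',
    api_key: 'title',
    field_type: 'string',
  });
};

migration(client).then(() => console.log(calls.join(',')));
// destroy,create
```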
We noticed the same thing as you there and have filed a bug report about it. It is currently awaiting developer investigation, and I’ll let you know once we have new findings.
If these are indeed the same issue, is it OK with you if I merge the two threads? Or if I misunderstood and this is a different scenario, please let me know.
Thank you again for your report and patience with this! Sorry for the inconvenience.
I think this one is different.
If my hunches & assumptions are correct:
The other thread is about incorrectly generated code that creates a schema_migration, possibly because the schema_migration model has different record IDs in different environments.
This thread is about migration code generated in the wrong order, in the case of deleting a field and re-creating it.
When executing the migration, the script simply continues after the failed delete. It doesn’t check whether the delete was successful, so we end up with the same plugin twice.
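A defensive pattern the runner could use, sketched below. `destroyRecord` and `createRecord` are hypothetical stand-ins for the generated migration's API calls; the point is that a failed destroy aborts the migration instead of falling through to the create:

```javascript
// Stop the migration when a destroy fails, rather than continuing
// to the create and ending up with a duplicate.
async function runStep(destroyRecord, createRecord) {
  try {
    await destroyRecord();
  } catch (err) {
    // Abort: creating on top of a failed destroy would duplicate the record.
    throw new Error(`destroy failed, aborting migration: ${err.message}`);
  }
  return createRecord();
}

// Usage: with a destroy that fails, the create never runs.
runStep(
  async () => { throw new Error('not found'); },
  async () => console.log('created'),
).catch((err) => console.log(err.message));
// destroy failed, aborting migration: not found
```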