This is meant more as a supplement to migrations - a “time machine” for your database, so to speak. You would run it before any potentially destructive operation on your database, such as running or rolling back a migration. Not all migrations are reversible (for example, migrations that drop columns), so you cannot easily go back to the state before you ran that migration. It’s also convenient when you check out an older version of your code - you can simply restore the database as it was at that point in time. You could, for example, run a command in a git hook to enforce saving a new version when you tag a new release (a rough sketch of that is below). In any case, it makes your database less of a black box, and it basically becomes ok to destroy it - because you can so easily make a new snapshot and go back to how it was before.
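For the git hook idea: git has no hook that fires on local tag creation, but a `pre-push` hook sees every ref being pushed, including tags. Here is a minimal Python sketch of that approach; `snapshot-tool save` is a made-up placeholder for whatever save command the actual tool provides:

```python
#!/usr/bin/env python3
# .git/hooks/pre-push (must be executable)
# git feeds "<local ref> <local sha> <remote ref> <remote sha>" lines on stdin.
import subprocess
import sys

for line in sys.stdin:
    fields = line.split()
    if not fields:
        continue
    local_ref = fields[0]
    if local_ref.startswith("refs/tags/"):
        tag = local_ref[len("refs/tags/"):]
        # Placeholder: replace with the snapshot tool's actual save command.
        subprocess.run(["snapshot-tool", "save", tag], check=True)
```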
> it makes your database less of a black box, and it basically becomes ok to destroy it - because you can so easily make a new snapshot and go back to how it was before.
There are some extremely serious caveats here that need mentioning. A production DB typically takes application writes 24/7, so restoring from a snapshot would lose any writes that occurred between the snapshot and the restore.
To avoid that, you would also need to stop prod traffic and then replay the binary logs from the point where the snapshot was taken, but that gets complicated if you're trying to reverse a DDL operation that is inherently incompatible with the transactions in the binlogs.
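For context, the replay step of a point-in-time recovery looks roughly like this. A minimal Python sketch, assuming you know the binlog file and position recorded at snapshot time (the file name, position, and database name below are made up):

```python
import subprocess

# Hypothetical binlog coordinates recorded when the snapshot was taken.
BINLOG_FILE = "binlog.000042"
START_POSITION = "154"

# Stream binlog events recorded after the snapshot back into the restored DB:
# roughly "mysqlbinlog --start-position=... binlog.000042 | mysql myapp".
binlog = subprocess.Popen(
    ["mysqlbinlog", f"--start-position={START_POSITION}", BINLOG_FILE],
    stdout=subprocess.PIPE,
)
subprocess.run(["mysql", "myapp"], stdin=binlog.stdout, check=True)
binlog.stdout.close()
binlog.wait()
```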
Also, if I'm not mistaken, your tool's invocation of mysqldump is not using the --single-transaction option. Without it, mysqldump doesn't create consistent snapshots for InnoDB tables, so what you're dumping and restoring doesn't actually represent a single consistent point in time of the DB's state.
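For reference, a consistent InnoDB dump invocation looks roughly like this. A minimal sketch assuming a Python wrapper around mysqldump (the wrapper function, database name, and output path are hypothetical; --single-transaction is a standard mysqldump option):

```python
import subprocess

def dump_database(name: str, outfile: str) -> None:
    """Dump one database to a file with mysqldump.

    --single-transaction starts a consistent snapshot (InnoDB only), so the
    dump reflects a single point in time without locking the tables.
    """
    with open(outfile, "wb") as f:
        subprocess.run(
            ["mysqldump", "--single-transaction", name],
            stdout=f,
            check=True,
        )

# Hypothetical usage:
# dump_database("myapp", "snapshots/myapp.sql")
```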
You are right: this tool offers no guarantees whatsoever. Its intended use is for development and staging. You could theoretically use it in production, but not on a live database server. If you have that use case, you need to think carefully about your tools and your processes to guarantee a consistent outcome.
I'm working on a temporal document store in my spare time. If done right, with an index structure that indexes each revision of a table (in my case binary XML or JSON, as of now), then, much like in Git, you simply append a new RevisionRootPage. This page is indexed, of course, and points to the former revision's data. It should be super fast, and you don't have to replay the changes made in the meantime - even if I wrote to a distributed log, for instance, and read from it the changes since revision n. I think that way you can also version the whole database easily.
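That append-only layout is easy to picture with a toy sketch. This is purely illustrative Python, not the commenter's actual implementation; only the RevisionRootPage name is borrowed from the comment. Each commit appends a new root page holding just the delta plus a pointer to the previous root, so reading an old revision is a lookup from an older root rather than a replay:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RevisionRootPage:
    """Root page of one revision; append-only, never rewritten."""
    revision: int
    # Records written in this revision only (the delta).
    entries: dict = field(default_factory=dict)
    # Link back to the previous revision's root page.
    previous: Optional["RevisionRootPage"] = None

    def get(self, key):
        """Walk back through older root pages until the key is found."""
        page = self
        while page is not None:
            if key in page.entries:
                return page.entries[key]
            page = page.previous
        raise KeyError(key)

class Store:
    def __init__(self):
        self.head = RevisionRootPage(revision=0)

    def commit(self, changes: dict) -> RevisionRootPage:
        """Append a new revision root that only holds the delta."""
        self.head = RevisionRootPage(
            revision=self.head.revision + 1,
            entries=dict(changes),
            previous=self.head,
        )
        return self.head

# Usage: reading an old state is just reading from an older root page.
store = Store()
r1 = store.commit({"doc:1": {"title": "first"}})
r2 = store.commit({"doc:1": {"title": "second"}})
assert r1.get("doc:1")["title"] == "first"   # old revision still readable
assert r2.get("doc:1")["title"] == "second"  # newest revision
```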