AngelMunoz / Migrondi

A Super simple SQL Migrations Tool for SQLite, PostgreSQL, MySQL and SQL Server

Home Page: https://angelmunoz.github.io/Migrondi/

Docker guidance

JordanMarr opened this issue · comments

This is probably out of scope, but do you have any guidance for running Migrondi automatically when starting a Docker container?

I'd like to have a more unified way of initializing the SqlHydra databases with data, not only for initializing the AdventureWorks db, but also because some providers need custom tables added for provider-specific features. For example, postgres needs to have a schema added to test enum support. It would be cool to have a standard scripts folder that would just work for each provider.

(Hopefully you're trying the preview rather than the current release; I need to step up and finish that already.)

At the moment I don't have any guidance other than that you can run Migrondi at the Dockerfile level, so the container gets its database in the right state before the image is fully built.
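
For example, something along these lines could work (a rough sketch; the dotnet tool install step is an assumption about how you obtain the CLI, and the image tag is just illustrative):

FROM mcr.microsoft.com/dotnet/sdk:8.0
WORKDIR /app

# assumption: the Migrondi CLI can be installed as a global dotnet tool
RUN dotnet tool install --global migrondi
ENV PATH="$PATH:/root/.dotnet/tools"

# the config and scripts the CLI expects
COPY migrondi.json ./
COPY migrations ./migrations

# apply pending migrations during the build; the database has to be
# reachable from the build environment for this step to succeed
RUN migrondi up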

It would be cool to have a standard scripts folder that would just work for each provider.

Do you mean something like this?

migrondi.json
migrations
  a_1.sql
  a_2.sql
  a_3.sql
  postgres
    a_3.sql
  mssql
    a_2.sql

Where, if a script is present in the db-specific folder, it would replace the contents of the "generic" one? That would also mean it wouldn't be executed if the driver didn't match, right? (See the sketch after the examples below.)

e.g.

for sqlite

  • a_1.sql
  • a_2.sql
  • a_3.sql

for postgres

  • a_1.sql
  • a_2.sql
  • postgres/a_3.sql

for mssql

  • a_1.sql
  • mssql/a_2.sql
  • a_3.sql
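
In code, that resolution rule might look roughly like this (a hypothetical sketch, not something Migrondi does today):

open System.IO

// hypothetical rule: a file in the driver-specific subfolder
// (e.g. postgres/a_3.sql) shadows the generic file of the same name
let resolveMigrations (migrationsDir: string) (driver: string) =
    let driverDir = Path.Combine(migrationsDir, driver)
    Directory.GetFiles(migrationsDir, "*.sql")
    |> Array.sort
    |> Array.map (fun generic ->
        let name = Path.GetFileName generic
        let specific = Path.Combine(driverDir, name)
        if File.Exists specific then specific else generic)

With the layout above, resolveMigrations "./migrations" "postgres" would yield a_1.sql, a_2.sql, and postgres/a_3.sql.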

Or do you mean having a subset of the language, with some sort of translation layer to each SQL dialect?

I haven't actually written any code yet. (Still in the research phase.)

Each provider has its own separate folder with the files it needs. So, I think a separate migration folder within each provider folder would be a nice way to go.

Do you support Oracle? I don't see it listed, but I'm not sure whether you are actually doing any provider-specific stuff or not.

Ahh cool, that sounds good. I think that could be done via an F# script/program that runs these programmatically. A more concise way (which is not built right now) would be to accept certain configuration values from the CLI, in this case:

migrondi up --driver sqlite --migrations ./sqlite
migrondi up --driver mysql --migrations ./mysql

Or something like that.

The current programmatic way to do that would be:

#r "nuget: Migrondi.Core, 1.0.0-beta-010"


open Migrondi.Core
open System.Threading.Tasks


let sqliteConfig =
    { MigrondiConfig.Default with
        driver = MigrondiDriver.Sqlite
        connection = "Data Source=database.db"
        migrations = "./sqlite" }

let postgresConfig =
    { MigrondiConfig.Default with
        driver = MigrondiDriver.Postgresql
        connection = "Host=localhost;Port=5432;Username=postgres;Password=postgres;Database=postgres"
        migrations = "./postgres" }

let mssqlConfig =
    { MigrondiConfig.Default with
        driver = MigrondiDriver.Mssql
        connection = "Server=localhost;Database=master;User Id=sa;Password=Password123;"
        migrations = "./mssql" }

let sqlite = Migrondi.MigrondiFactory(sqliteConfig, ".")

let postgres = Migrondi.MigrondiFactory(postgresConfig, ".")

let mssql = Migrondi.MigrondiFactory(mssqlConfig, ".")

let initializeRdb () =
    sqlite.Initialize()
    postgres.Initialize()
    mssql.Initialize()

// initialize RepoDB as per its requirements
initializeRdb ()

// run them concurrently if you will, or on demand
Task.WaitAll(
    task { do! sqlite.DryRunUpAsync() :> Task },
    task { do! postgres.DryRunUpAsync() :> Task },
    task { do! mssql.DryRunUpAsync() :> Task }
)

This could then be run after installing dotnet in your Dockerfile. To be honest, I haven't tried any Docker workflows, so I may just be rambling too much here.
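
For instance (a sketch, where migrate.fsx is just a name picked for the script above, with the per-driver migration folders sitting next to it):

FROM mcr.microsoft.com/dotnet/sdk:8.0
WORKDIR /app

# the script from above plus the per-driver migration folders
COPY migrate.fsx ./
COPY sqlite ./sqlite
COPY postgres ./postgres
COPY mssql ./mssql

# dotnet fsi ships with the SDK and can run the script directly
RUN dotnet fsi migrate.fsx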

Do you support Oracle?

Currently I support what RepoDB supports, because that's the backing mechanism I use to run queries. But to be honest, I think I could make the effort to drop it and go at the ADO provider directly, since I don't really do any RepoDB-specific stuff; I just run user scripts, insert, and query out.

I plan to publish v1 at the end of April, so this is a great time to review this kind of feedback; I appreciate it. Also, let me know if these are somewhat the things you'd like to see in the library (in case you choose to use it) or if they're kind of out of place.

Just me thinking out loud:

My original intention for SqlHydra was to have the Docker scripts auto-run to create the sample AdventureWorks database for each provider (used by the test suite). I did partially achieve this, but there are discrepancies in the way the different provider containers work that prevent some of them from fully loading, which results in a confusing post-setup dilemma that I always struggle with.

Issues include:

  • SQL Server sometimes fails to apply .sql scripts (the event-detection mechanism in the docker script is pretty kludgy)
  • Oracle, to save space, does not run out of the box. Instead, it just loads an installer that takes 30m to pull files from the cloud and then install an instance. (Had I known this, I would have never supported Oracle!)

Now it occurs to me that even if I add a migrations tool, launching it from Docker is likely to suffer from similar issues.
Plus, it's hard to set up and time-intensive to troubleshoot.

However, I could probably simplify everything by just running the migrations on-demand as part of the test suite.

Which brings me to my question:
How does Migrondi check to see if a script has been run (so that it doesn't run multiple times)?
I currently use EvolveDb, which creates a metadata table in the database that stores each script name (along with a hash to detect whether the script has been changed since it was applied, in which case it throws an error).

Is this stamped directly into the script files by Migrondi?

It is a very similar process: it creates a table within the db and inserts a record once each migration has been run.
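
Conceptually, what gets stored per migration is something like this (an illustrative shape; the actual table and column names are internal to Migrondi):

// hypothetical shape of a row in the tracking table
type AppliedMigration =
    { Id: int64
      Name: string       // migration file name, e.g. "a_1.sql"
      Timestamp: int64 }  // when the migration was applied

// "up" only runs migrations whose Name has no matching row in the table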

I don't do file/db checksums to see if contents have changed, but that's something I'm considering given that I've heard it mentioned a couple of times already (in talks unrelated to this repo, though 😆).
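
If checksums were added, the check could be as simple as hashing each script's contents and comparing against the value stored at apply time (a sketch of the idea, not anything Migrondi does today):

open System
open System.IO
open System.Security.Cryptography

// stable checksum of a migration script's contents
let checksum (path: string) =
    use sha = SHA256.Create()
    let bytes = File.ReadAllBytes path
    Convert.ToHexString(sha.ComputeHash bytes)

// a script has "drifted" when its current hash no longer matches
// the checksum recorded when the migration was applied
let hasDrifted (path: string) (storedChecksum: string) =
    not (String.Equals(checksum path, storedChecksum, StringComparison.OrdinalIgnoreCase))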

Oracle, to save space, does not run out of the box. Instead, it just loads an installer that takes 30m to pull files from the cloud and then install an instance. (Had I known this, I would have never supported Oracle!)

Yikes!

With EvolveDb, if a script changes after it's been applied, then I have to run a "repair" command which just overwrites the checksum column.

I guess it's good because it makes the script change very explicit, so you are forced to sort of "approve" it by running repair.
The downside is that it's kind of annoying to have to go back and update the script.

But, it seems like a good safety feature to have if you want to assume that your migration scripts should be an "immutable", append-only style log (which seems congruent with F# in general).