etianen / django-s3-storage

Django Amazon S3 file storage.

s3 bucket names being stored directly in migrations

saberworks opened this issue · comments

commented

I'm not quite sure how this situation came about but I'll try to describe it. I initially created some model fields like this:

    image = models.ImageField(upload_to=get_image_upload_to, null=True, blank=True)

Where get_image_upload_to is a function that just returns the path at which the image should be stored:

    def get_image_upload_to(self, filename):
        # Django calls upload_to(instance, filename), so "self" here is the
        # model instance, not a bound method receiver.
        return 'saberworks/games/{}'.format(filename)

The initial migration looked like this:

        migrations.CreateModel(
            name='Game',
            fields=[
...
                ('image', models.ImageField(null=True, upload_to=massassi.models.get_image_upload_to)),
...
            ],
        options={
            'abstract': False,
        },
    ),

A little later, I switched these models to inherit from a differently named abstract model, which sets the storage to S3 storage (so files are stored on S3 instead of locally). Now the field looks like this:

    image = models.ImageField(upload_to=get_image_upload_to, null=True, blank=True, storage=s3storage)

Note that I am now setting storage=s3storage.

When I ran makemigrations I got a bunch of migrations that look like this:

        migrations.AlterField(
            model_name='game',
            name='image',
            field=models.ImageField(blank=True, null=True, storage=django_s3_storage.storage.S3Storage(aws_s3_bucket_name='files.dev.saberworks.net'), upload_to=saberworks.abstract_models.get_image_upload_to),
        ),

Please note that the S3 bucket name is hard-coded into the migration. This is a problem because there is a separate bucket for dev and prod. Shouldn't it be possible to make the migration in dev, save it to version control, and then, after testing, apply the same migration to production? I am relatively new to Django, so maybe I'm missing something fundamental. Thank you for any help!

I most often edit the migration files: I import the module where I defined the storage and replace the storage=django_s3_storage... part with storage=app.storages.s3_storage or something similar.

Importing project modules in migrations isn't generally recommended, since you then cannot move anything around without also updating the historical migrations. But since I add a module solely for storages, and avoid changing it at all after introducing it, this works fine for me.

(Customizing the deconstruction of the storage would be possible as well, but I sort-of like the workaround above even better.)
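In concrete terms, the workaround might look something like this (module and variable names such as myapp and s3_storage are illustrative, not from this thread). First, define the storage instance once in a dedicated module, so the bucket name is read from settings at runtime instead of being baked into migration files:

    # myapp/storages.py -- keep this module stable once migrations import it
    from django.conf import settings
    from django_s3_storage.storage import S3Storage

    # The bucket name comes from settings, so dev and prod can use
    # different buckets without touching the migration files.
    s3_storage = S3Storage(aws_s3_bucket_name=settings.AWS_S3_BUCKET_NAME)

Then, in the generated migration, import that module and swap the inlined storage for the shared instance:

    import myapp.storages

    ...

        migrations.AlterField(
            model_name='game',
            name='image',
            field=models.ImageField(blank=True, null=True,
                                    storage=myapp.storages.s3_storage,
                                    upload_to=saberworks.abstract_models.get_image_upload_to),
        ),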

That's quite a problem!

The reason this comes about is that it's possible to have lots of different storage backends, all used in different model fields, and each will have its settings included in the migration. This is by design, and would be hard to change even if that were desired. Presumably we'd need to differentiate between arguments supplied via Django settings and arguments passed explicitly.

The workaround from @matthiask is fine.

commented

Thanks for the workaround, I will give it a try. I'm glad I noticed this before applying migrations to production. Out of curiosity, had I done that, where would the now-incorrect settings have been visible? Would it have sent things to the wrong bucket, or used the settings set at runtime?

commented

Ah, I see, thank you. I appreciate the help.