How to upgrade my postgres in docker container while maintaining my data? 10.3 to latest 10.x or to 12.x


Problem description

I have a 10.3 postgres docker container in both production and localhost.

In my previous question, I had to restore a dump that was archived in 10.5. Thanks to the answer, I used the plain format to do so. But this is a temporary solution.

I'd like to know if there's an easy way to upgrade my postgres version for my docker container in localhost and production.

In localhost, I have many databases and schemas for development and exploration purposes.

In production, there are far fewer, but of course the data is far more important.

I'd like to upgrade to a new version of postgres without jeopardizing the data and schema.

In localhost, my host OS is macOS 10.15 Catalina. In production, the host OS is Ubuntu Server 18.04 (headless).

Both production and localhost use the same Dockerfile config:

FROM postgres:10.3

COPY ./maintenance /usr/local/bin/maintenance
RUN chmod +x /usr/local/bin/maintenance/*
RUN mv /usr/local/bin/maintenance/* /usr/local/bin \
    && rmdir /usr/local/bin/maintenance

I did find this https://github.com/docker-library/postgres/issues/37#issuecomment-431317584 but I don't have a conceptual understanding of what that comment is suggesting.

Also, I found this library: https://github.com/bwbroersma/docker-postgres-upgrade

Not sure whether these two approaches are the same or different.

Hoping to get advice here from someone experienced with both Docker and Postgres.

What I have tried

This is my original local.yml for Docker ("local" because it is for the local development environment):

version: "3.7"

volumes:
  postgres_data_local: {}
  postgres_backup_local: {}

services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: eno-a3-django_local_django
    depends_on:
      - postgres
      - mailhog
      - redis
    volumes:
      - .:/app
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start

  postgres:
    build: ./compose/production/postgres/
    image: eno-a3-django_production_postgres
    volumes:
      - postgres_data_local:/var/lib/postgresql/data
      - postgres_backup_local:/backups
    env_file:
      - ./.envs/.local/.postgres
    ports:
      - "5432:5432"

  mailhog:
    image: mailhog/mailhog:v1.0.0
    ports:
      - "8025:8025"

  redis:
    build: ./compose/production/redis/
    container_name: redis
    restart: always

And then I thought I would create a new docker container.

So I changed it to this:

version: "3.7"

volumes:
  postgres_data_local: {}
  postgres_backup_local: {}

services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: eno-a3-django_local_django
    depends_on:
      - postgres
      - mailhog
      - redis
      - postgres_new
    volumes:
      - .:/app
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start

  postgres:
    build: ./compose/production/postgres/
    image: eno-a3-django_production_postgres
    volumes:
      - postgres_data_local:/var/lib/postgresql/data
      - postgres_backup_local:/backups
    env_file:
      - ./.envs/.local/.postgres
    ports:
      - "5432:5432"

  postgres_new:
    build: ./compose/production/postgres_new/
    image: eno-a3-django_production_postgres_new
    volumes:
      - postgres_data_local:/var/lib/postgresql/data
      - postgres_backup_local:/backups
    env_file:
      - ./.envs/.local/.postgres_new
    ports:
      - "15432:5432"

  mailhog:
    image: mailhog/mailhog:v1.0.0
    ports:
      - "8025:8025"

  redis:
    build: ./compose/production/redis/
    container_name: redis
    restart: always

Notice how I use the same volumes for the postgres_new container.

The Dockerfile for postgres_new is

FROM postgres:10.13

COPY ./maintenance /usr/local/bin/maintenance
RUN chmod +x /usr/local/bin/maintenance/*
RUN mv /usr/local/bin/maintenance/* /usr/local/bin \
    && rmdir /usr/local/bin/maintenance

When I ran my docker build and logged in via port 15432, I could see my old database schema etc.

It appears that both containers can share the same data via the same volume.

I then restored a 10.5 archive file into this Docker container, and it succeeded.

The commands I used for the restore, run from my host OS, were:

docker cp ~/path/to/10.5.dump eno-a3-django_postgres_new_1:/backups
docker exec eno-a3-django_postgres_new_1 pg_restore -U debug -d 1013-replicatelive /backups/10.5.dump
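
A quick, hedged way to sanity-check the restore through the new container (\dt just lists the tables of the restored database; the container name, role and database are the ones used above):

docker exec eno-a3-django_postgres_new_1 psql -U debug -d 1013-replicatelive -c '\dt'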

I thought both containers were talking to the same volume, but when I connected to the old postgres container via 5432, I noticed that the new database I had created via the new 10.13 postgres container was not there.

And it appears to work. Can I simply destroy the older container without accidentally destroying my existing data?

However...

When I changed some values in a database via port 5432, hence via the old postgres container (a database that I thought was shared with the new postgres container), the change was not seen in the corresponding database in the new postgres container.

After I commented out the old postgres container in local.yml

I then brought up only the new postgres container with docker, making it use host port 5432. I can now see both the new schema (restored via host port 15432) and the changes in the common database schema. So I guess this method works.

But why does it work? Is it because the volume is reused?

Solution

DISCLAIMER: I'm not a Postgres expert; consider this answer as coming from a general Docker background.

Containers and volumes (and images, for that matter) are separate entities in Docker. You can share one volume between many containers, but since that pretty much amounts to sharing a filesystem - you should avoid having two different applications access one set of files concurrently. You can also delete containers without affecting your volumes or images (there are options to prune everything - there's plenty of information on SO about how to do that).
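
For illustration (a sketch; the container and volume names below are assumed from the compose project above, so check docker ps and docker volume ls for the real prefixes):

docker stop eno-a3-django_postgres_1        # stop the old 10.3 container
docker rm eno-a3-django_postgres_1          # remove only the container
docker volume ls                            # the postgres data volume is still listed
docker volume inspect eno-a3-django_postgres_data_local   # and its contents are untouched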

Why it worked

I am assuming that postgres loads the DB list from /var/lib/postgresql/data on startup, so it's likely your changes to the new database would not have propagated to the other container straight away, but ended up being visible after you restarted it. It appears that your example ended up working fine because you restored your backup into a different database, so no corruption took place. To me this seems like an accident.
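
That would match behaviour like the following hedged illustration (not a recommendation - running two servers against one data directory is exactly the kind of concurrent access to avoid; container name assumed):

docker restart eno-a3-django_postgres_1                  # restart the old 10.3 container
docker exec eno-a3-django_postgres_1 psql -U debug -l    # the database created via the 10.13 container now shows up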

The backup-restore

From what I can see in the GitHub links you pointed at - both use separate volumes for /var/lib/postgresql/data (exactly to avoid concurrent modification) but share a volume for backups. They then dump the old DB onto the shared volume and pipe it through to the new DB.
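
In terms of this compose setup that would look roughly like the sketch below (hedged: the container names, the debug role and the shared /backups mount come from the question; pg_dumpall and psql are the standard tools for a full cluster dump and reload, and the new container should have its own, separate data volume):

docker exec eno-a3-django_postgres_1 pg_dumpall -U debug -f /backups/upgrade.sql        # dump everything from the old server
docker exec eno-a3-django_postgres_new_1 psql -U debug -d postgres -f /backups/upgrade.sql   # load it into the new server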

Custom pg_upgrade image

This is where you build a container with both the source and target versions and run the newer pg_upgrade as per the official guide - that should perform the upgrade and write the binary DB files into a location of your choice. You can then mount this data volume onto a fresh postgres container.
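
For illustration, the core of such an image boils down to an invocation along these lines (a sketch only; the bin and data paths are assumptions based on a typical Debian-style layout and depend on how the image is built - run it as the postgres user with both clusters stopped):

pg_upgrade \
  --old-bindir=/usr/lib/postgresql/10/bin \
  --new-bindir=/usr/lib/postgresql/12/bin \
  --old-datadir=/var/lib/postgresql/10/data \
  --new-datadir=/var/lib/postgresql/12/data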

In-place upgrade (minor versions)

Since the pg_upgrade documentation claims it's not needed for minor releases, it's probably safe to assume that the file layout does not change between those. In that case you might not even need to spin up another container - just upgrade the postgres image in your docker-compose file and keep using the old volume. This would save you some hassle. Having said that - this should probably be your last choice, and it requires a lot of testing.
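
Concretely, that would mean bumping the base image tag in the existing Dockerfile (FROM postgres:10.3 to, say, FROM postgres:10.13) and recreating the service on the same volume, roughly like this (hedged: the docker-compose invocation and container name are assumed, and taking a backup first is strongly advisable):

docker exec eno-a3-django_postgres_1 pg_dumpall -U debug -f /backups/pre-upgrade.sql   # backup, just in case
docker-compose -f local.yml build postgres       # rebuild the image with the newer tag
docker-compose -f local.yml up -d postgres       # recreate the service on the old volume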
