How To Package and Transfer Docker Images From One Server to Another

If you’ve built a Docker image on your development machine, and want to deploy it on a server, you could use a Docker registry, but Docker also has tools for saving images to files, and loading them on a different server.

You Don’t Need A Container Registry

Usually, to transfer a container build (called an image) to a remote server, you use a Docker container registry. This is by far the best method: it gives you a single point of authority, making it easy to distribute updates across multiple servers. It doesn't require you to make the image public either; there are plenty of great private container registries, such as Google's GCR and AWS's ECR. Docker Hub even supports private repositories. If you're simply concerned about privacy, switch to a private registry and continue using docker push and docker pull.
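
For example, the usual push/pull workflow with a private registry looks something like this, where the registry hostname and repository path are placeholders for your own:

docker tag imagename:tag registry.example.com/project/imagename:tag
docker push registry.example.com/project/imagename:tag

Then, on the server:

docker pull registry.example.com/project/imagename:tag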

However, for those looking to do it the old-fashioned way, the Docker CLI includes tools for saving images to files and loading them on a remote server.

To save an image, use docker save, specifying an output file followed by the image name and tag:

docker save -o ./savedimage imagename:tag

If you don't specify a tag, Docker will package every tag of that image.
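
For instance, leaving off the tag bundles every tagged version of the image into a single archive:

docker save -o ./savedimage imagename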

In either case, docker save serializes the image and writes it to the output file as a tar archive. If you'd rather save it as a tar.gz, you can omit the -o flag and pipe the output to gzip:

docker save imagename:tag | gzip > savedimage.tar.gz

You can then copy this file to the target server with scp, FTP, or any other transfer tool. For example, with scp (the username, hostname, and destination path here are placeholders for your own):
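
scp ./savedimage user@target-server:/tmp/savedimage

Once the file is on the target server, you can use docker load to import it again: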

docker load -i savedimage
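
docker load can also read compressed archives directly, so the gzipped file from earlier should load the same way:

docker load -i savedimage.tar.gz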

This will make the image available on the target system as if you had run docker build . -t imagename there. You can use it just like a locally built image with docker container run:

docker container run imagename
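
To double-check that the import worked, you can list the loaded image and its tags:

docker image ls imagename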