When I built my Docker image, the Dockerfile had lines such as
FROM realyunlong/cv_image
FROM nvidia/cuda:8.0-cudnn6-devel-ubuntu16.04
Output of docker images:
REPOSITORY             TAG                            IMAGE ID       CREATED             SIZE
v1                     latest                         1786f4752d3c   44 minutes ago      3.73GB
<none>                 <none>                         dd1523103796   About an hour ago   9.83GB
nvidia/cuda            8.0-cudnn6-devel-ubuntu16.04   8d377158a37d   12 days ago         1.99GB
hello-world            latest                         e38bc07ac18e   2 months ago        1.85kB
realyunlong/cv_image   latest                         4f1b6063ff55   12 months ago       3.37GB
My understanding is that v1 is a docker image of my project that depends on other pre-built images like realyunlong/cv_image and nvidia/cuda.
(I don't know what the <none>:<none> entry is.)
How do I push my image to my private repo?
If I push v1 to my repository, will all the other image dependencies be taken care of?
If you docker push v1:latest, it will push the complete image, including everything it inherited from its base images; but the names of the base images it was built FROM aren't recorded anywhere.
So if you moved to another machine, ran docker pull v1:latest, and then ran docker images, you'd see the v1 image but you wouldn't see its FROM images listed; and if you tried to docker run one of those base images directly, or to docker build another image from the same base, it would get pulled from Docker Hub (or wherever it lives).
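For example, a minimal sketch of tagging and pushing to a private registry; the registry hostname and repository path are placeholders:
# Re-tag the local image with the registry's fully qualified name (placeholders)
docker tag v1:latest registry.example.com/myproject/v1:latest
# Push it; only layers not already present in the registry are uploaded
docker push registry.example.com/myproject/v1:latest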
When you push an image to a repository (private or public), it contains whatever you chose to include in its build. If the Dockerfile you build your image from starts from e.g. nvidia/cuda, the image includes whatever nvidia/cuda includes plus all your additions to it.
However, if you want to re-build the image in your new environment, you will need everything that you had in your old environment to do so.
Pushing to private repositories doesn't really differ from pushing to public repositories, except for the required authentication.
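For example, a sketch of the extra authentication step, assuming a private registry at registry.example.com (a placeholder hostname):
docker login registry.example.com
docker push registry.example.com/myproject/v1:latest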
Related
Since my project is still just in development, when making my builds, I always push and replace a single docker image in ACR with the :latest tag. My app is in 'Multiple' revision mode and my build script creates a new revision based on the latest as a template.
Now, I had a persistent provisioning failure, so I attempted to activate my latest successfully provisioned revision, but it still failed with the same error.
Does each revision need to be created with its own separate source image in order for me to be able to return to a previous build if my current one fails? What is the safest approach for production?
Yes. The image isn't copied into the revision; it will be pulled from your registry every time the revision is activated, scales out, or moves between VMs.
You can use a git commit id or a build version for the image tag.
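For example, a minimal sketch of tagging each build with the git commit id instead of :latest; the registry and image names are placeholders:
# Build and push the image under the short commit id (names are placeholders)
GIT_SHA=$(git rev-parse --short HEAD)
docker build -t myregistry.azurecr.io/myapp:$GIT_SHA .
docker push myregistry.azurecr.io/myapp:$GIT_SHA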
Backstory:
We have a web app that creates batch jobs in Azure using docker images. In the application configuration there is a parameter that defines which version of the docker image the batch job should use. In our current setup we need to manually change the parameter if we deploy a new version of the docker image.
What I want to do is choose which docker image to use when I create a release for the web app. I already have a working release pipeline where I manually type in which version of the docker image I want to use, but I would like to be able to choose from the available docker images in the repository. The docker images are built in Azure DevOps and we have a tag on each build with the version number.
Is it possible to achieve this?
We'd like to have a separate test and prod project on the Google Cloud Platform but we want to reuse the same docker images in both environments. Is it possible for the Kubernetes cluster running on the test project to use images pushed to the prod project? If so, how?
Looking at your question, I believe by account you mean project.
The command for pulling an image from the registry is:
$ gcloud docker pull gcr.io/your-project-id/example-image
This means that as long as your account is a member of the project the image belongs to, you can pull the image from that project into any other project that your account is a member of.
Yes, it's possible, since the image is specified per container, so it can reference a registry in any project the cluster is allowed to pull from.
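For example, a sketch of pointing a deployment in the test cluster at an image stored in the prod project's registry; the project id, deployment and container names are placeholders:
kubectl set image deployment/example-app example-container=gcr.io/prod-project-id/example-image:1.0.0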
I'm trying to deploy via docker. I'm using the following workflow:
Build locally
Push my image to docker hub
On the server: pull the image
On the server: start the image
But docker push takes FOREVER. There are like 30 images, and it has to walk through each one and say "Image already exists". Is there any way to speed this up?
Alternatively, should I be using a different process to deploy?
If you are pushing to AWS ECR, like I was, it may be that Docker on your local machine needs a restart. See this thread about AWS ECR slowness:
https://forums.aws.amazon.com/thread.jspa?threadID=222834
This may affect other platforms as well. It seems that around version 1.12.1 on Mac, at least, there are some slowness issues that go away with a restart of Docker.
If you're using a local registry, we recently added a redis cache which has helped speed things up tremendously. Details about how to do this are on the registry GitHub page:
https://github.com/docker/docker-registry
While pushing still takes time on new images, pulls are very fast, as all layers are in the redis cache.
The most likely reason why you are pushing more/large layers of your images on every deployment is that you have not optimized your Dockerfiles. Here is a nice intro http://blog.tutum.co/2014/10/22/how-to-optimize-your-dockerfile/.
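For example, a minimal sketch of a Dockerfile ordered so that rarely-changing layers come first and only the final layers need to be pushed on a typical deployment; the base image, file and command names are placeholders:
FROM node
WORKDIR /app
# Dependencies change rarely, so this layer stays cached and doesn't get re-pushed
COPY package.json ./
RUN npm install
# Application code changes on every build; only this layer and later ones are re-uploaded
COPY . .
CMD ["node", "server.js"]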
I'm building a Docker image on my build server (using TeamCity). After the build is done I want to take the image and deploy it to some server (staging, production).
All tutorials I have found either
push the image to some repository where it can be downloaded (pulled) by the server(s), which in small projects introduces additional complexity
use Heroku-like approach and build the images "near" or at the machine where it will be run
I really think that nothing special should be done at the (app) servers. Images, IMO, should act as closed, self-sufficient binaries that represent the application as a whole and can be passed between build server, testing, QA, etc.
However, when I docker save a standard NodeJS app image based on the official node image, it comes to 1.2 GB. Passing such a file from server to server is not very comfortable.
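(As a sketch, with a placeholder image name and host, the straightforward way to do that passing would be something like
docker save my-node-app:latest | gzip | ssh user@app-server 'gunzip | docker load'
which streams the full image every time.)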
Q: Is there some way to export/save and "upload" just the changed parts (layers) of an image via SSH without introducing the complexity of a Docker repository? The server would then pull the missing layers from the public hub.docker.com in order to avoid the slow upload from my network to the cloud.
Investigating the content of a saved tarfile, it should not be difficult from a technical point of view. The push command does basically just that: it never uploads layers that are already present in the repo.
Q2: Do you think that running a small repo on the docker-host that I'm deploying to in order to achieve this is a good approach?
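By "a small repo on the docker-host" I mean roughly the following, where the image name is a placeholder:
# Run the official registry image on the deployment host itself
docker run -d -p 5000:5000 --name registry registry
# Tag and push against it locally
docker tag my-node-app localhost:5000/my-node-app
docker push localhost:5000/my-node-app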
If your code can live on GitHub or Bitbucket, why not just use Docker Hub automated builds for free? That way, on your node you just have to docker pull user/image. The GitHub repository and the Docker Hub automated builds can both be private, so you don't have to expose your code to the world. Although you may have to pay for more than one private repository or build.
If you do still want to build your own images, then when you run the build command you will see output similar to the following:
Step 0 : FROM ubuntu
---> c4ff7513909d
Step 1 : MAINTAINER Maluuba Infrastructure Team <infrastructure@maluuba.com>
---> Using cache
---> 858ff007971a
Step 2 : EXPOSE 8080
---> Using cache
---> 493b76d124c0
Step 3 : RUN apt-get -qq update
---> Using cache
---> e66c5ff65137
Each of the hashes, e.g. ---> c4ff7513909d, is an intermediate layer. You can find the folders named with those hashes under /var/lib/docker/graph, for example:
ls /var/lib/docker/graph | grep c4ff7513909d
c4ff7513909dedf4ddf3a450aea68cd817c42e698ebccf54755973576525c416
As long as you copy all the intermediate layers to your deployment server, you won't need an external Docker repository. If you are only changing one of the intermediate layers, you only need to recopy that one for a redeployment. Notice that each step listed in the Dockerfile leads to an intermediate layer, so as long as you only change the last line in the Dockerfile, you will only need to upload one layer. Therefore I would recommend putting the ADD line for your code at the end of your Dockerfile:
ADD MyGeneratedCode /var/my_generated_code
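For example, a rough sketch of copying a single changed layer directory over SSH, assuming the default /var/lib/docker/graph location on both hosts and a placeholder hostname (the Docker daemon on the target may need a restart to pick up the copied layer):
scp -r /var/lib/docker/graph/c4ff7513909dedf4ddf3a450aea68cd817c42e698ebccf54755973576525c416 user@deploy-server:/var/lib/docker/graph/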