I'm building a Docker image on my build server (using TeamCity). After the build is done I want to take the image and deploy it to some server (staging, production).
All tutorials I have found either
push the image to a repository from which it can be downloaded (pulled) by the server(s), which in small projects introduces additional complexity, or
use a Heroku-like approach and build the images "near" or on the machine where they will be run.
I really think that nothing special should be done on the (app) servers. Images, IMO, should act as closed, self-sufficient binaries that represent the application as a whole and can be passed between the build server, testing, QA, etc.
However, when I save a standard Node.js app image based on the official node image, it is 1.2 GB. Passing such a file from server to server is not very comfortable.
Q: Is there some way to export/save and "upload" just the changed parts (layers) of an image via SSH without introducing the complexity of a Docker repository? The server would then pull the missing layers from the public hub.docker.com in order to avoid the slow upload from my network to the cloud.
Investigating the content of a saved tarfile, it should not be difficult from a technical point of view. The push command basically does just that - it never uploads layers that are already present in the repo.
Q2: Do you think that running a small repo on the docker-host that I'm deploying to in order to achieve this is a good approach?
If your code can live on GitHub or Bitbucket, why not just use Docker Hub automated builds for free? That way, on your node you just have to run docker pull user/image. The GitHub repository and the Docker Hub automated build can both be private, so you don't have to expose your code to the world, although you may have to pay for more than one private repository or build.
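On the deployment host the whole update then boils down to a few commands; a minimal sketch, assuming the image is called user/image and the container is named app (both placeholders):

# log in once if the repository is private
docker login
# pull the freshly built image and restart the container from it
docker pull user/image:latest
docker stop app && docker rm app
docker run -d --name app user/image:latest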
If you do still want to build your own images, then when you run the build command you see output similar to the following:
Step 0 : FROM ubuntu
---> c4ff7513909d
Step 1 : MAINTAINER Maluuba Infrastructure Team <infrastructure#maluuba.com>
---> Using cache
---> 858ff007971a
Step 2 : EXPOSE 8080
---> Using cache
---> 493b76d124c0
Step 3 : RUN apt-get -qq update
---> Using cache
---> e66c5ff65137
Each of the hashes (e.g. ---> c4ff7513909d) is an intermediate layer. You can find the folders named after those hashes under /var/lib/docker/graph, for example:
ls /var/lib/docker/graph | grep c4ff7513909d
c4ff7513909dedf4ddf3a450aea68cd817c42e698ebccf54755973576525c416
As long as you copy all the intermediate layers to your deployment server, you won't need an external Docker repository. If you are only changing one of the intermediate layers, you only need to re-copy that one for a redeployment. Notice that each step listed in the Dockerfile leads to an intermediate layer; as long as you only change the last line of the Dockerfile, you will only need to upload one layer. Therefore I would recommend putting your ADD line at the end of your Dockerfile:
ADD MyGeneratedCode /var/my_generated_code
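If you prefer not to copy folders out of /var/lib/docker by hand, a rough alternative is to stream the image over SSH with docker save and docker load. This is only a sketch (the image name and host are placeholders), and it transfers all layers rather than only the changed ones:

# on the build server: serialize the image and load it on the remote Docker host
docker save user/image:latest | gzip | ssh deploy@staging 'gunzip | docker load'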
We are deploying a Java backend and React UI application using docker-compose. Our Docker containers are running Java, Caddy, and Postgres.
What's unusual about this architecture is that we are not running the application as a cluster. Each user gets their own server with their own subdomain. Everything is working nicely, but we need a strategy for managing/updating machines as the number of users grows.
We can accept some down time in the middle of the night, so we don't need to have high availability.
We're just not sure what would be the best way to update software on all machines. And we are pretty new to Docker and have no experience with Kubernetes or Ansible, Chef, Puppet, etc. But we are quick to pick things up.
We expect to have hundreds to thousands of users. Each machine runs the same code but has environment variables that are unique to the user. Our original provisioning takes care of that, so we do not anticipate having to change those with software updates. But a solution that can also provide that ability would not be a bad thing.
So, the question is, when we make code changes and want to deploy the updated Java jar or the React application, what would be the best way to get those out there in an automated fashion?
Some things we have considered:
Docker Hub (concerns about rate limiting)
Deploying our own Docker repo
Kubernetes
Ansible
https://containrrr.dev/watchtower/
Other things that we probably need include GitHub Actions to build and update the Docker images.
We are open to ideas that are not listed here, because there is a lot we don't know about managing many machines running docker-compose. So please feel free to offer suggestions. Many thanks!
In your case I advise you to use Kubernetes in combination with a CD tool; one of them is Buddy. I think this is the best way to make such updates in an automated fashion. Of course you can use just Kubernetes, but with Buddy or another CD tool you will make it faster and easier. In my answer I describe Buddy, but there are a lot of popular CD tools for automating workflows in Kubernetes, for example GitLab or CodeFresh.io - you should pick whichever is actually best for you. Take a look: CD-automation-tools-Kubernetes.
With Buddy you can avoid most of these steps when automating updates (executing the kubectl apply or kubectl set image commands) by doing a simple push to Git.
Every time you update your application code or Kubernetes configuration, you have two possibilities to update your cluster: kubectl apply or kubectl set image.
Such a workflow most often looks like this:
1. Edit application code or configuration .YML file
2. Push changes to your Git repository
3. Build a new Docker image
4. Push the Docker image
5. Log in to your K8s cluster
6. Run kubectl apply or kubectl set image to apply the changes to the K8s cluster
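For reference, the manual version of steps 5-6 might look like this (the deployment, container, and image names below are hypothetical):

# update only the image of an existing Deployment
kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2

# or re-apply the full, edited manifest
kubectl apply -f k8s/deployment.yml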
Buddy is a CD tool that you can use to automate your whole K8s release workflows like:
managing Dockerfile updates
building Docker images and pushing them to the Docker registry
applying new images on your K8s cluster
managing configuration changes of a K8s Deployment
etc.
With Buddy you will have to configure just one pipeline.
With every change in your app code or the YAML config file, this tool will apply the deployment and Kubernetes will start transforming the containers to the desired state.
Pipeline configuration for running Kubernetes pods or jobs
Assume that we have an application on a K8s cluster and that its repository contains:
source code of our application
a Dockerfile with instructions on creating an image of your app
DB migration scripts
a Dockerfile with instructions on creating an image that will run the migration during the deployment (db migration runner)
In this case, we can configure a pipeline that will:
1. Build the application and migration images
2. Push them to Docker Hub
3. Trigger the DB migration using the previously built image; we can define the image, commands, and deployment in a YAML file
4. Use either Apply K8s Deployment or Set K8s Image to update the image in your K8s application.
You can adjust the above workflow to your environment and application properties.
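Done by hand rather than by the pipeline, steps 1 and 2 would roughly look like this (the image names and the migration Dockerfile path are assumptions):

# build the application image and the migration-runner image
docker build -t myuser/myapp:1.2.3 .
docker build -t myuser/myapp-migrations:1.2.3 -f db/Dockerfile db/

# push both to Docker Hub
docker push myuser/myapp:1.2.3
docker push myuser/myapp-migrations:1.2.3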
Buddy supports GitLab as a Git provider. Integration of the two tools is easy and only requires authorizing GitLab in your profile. Thanks to this integration you can create pipelines that will build, test and deploy your app code to the server. Of course, if you are already using GitLab, there is no need to set up Buddy as an extra tool, because GitLab is itself also a CD tool for automating workflows in Kubernetes.
You can find more information here: buddy-workflow-kubernetes.
Read also: automating-workflows-kubernetes.
As it turns out, we found that a paid Docker Hub plan addressed all of our needs. I appreciate the excellent information from @Malgorzata.
I have several Scala applications that I want to deploy in a Docker multi-container environment on Amazon's Elastic Beanstalk.
It seems like the whole process is a bit more complicated than I was expecting. So I'm really looking forward to hearing some feedback on best practices and other ways to improve my entire process and be able to "automate" some steps (if possible).
This is my current process:
1. To generate my projects' artifacts I'm using the sbt-docker plugin. This plugin generates the project's artifacts (jars and Dockerfile) under [app-route]/target/docker.
2. I upload these artifacts (jars and Dockerfile) into a git repository (currently doing this "manually").
3. As Amazon's Elastic Beanstalk requires for Docker multi-containers, I need an online repository to "host" the images: it could be Docker Hub or Quay.io. Either requires me to have a git repository in which it can find the artifacts to be able to generate the project's image.
4. Having created the multi-container environment in Elastic Beanstalk, I proceed to upload the Dockerrun.aws.json file as detailed in Amazon's documentation, and also the .ebextensions/elb-listeners.config file with the settings of the ports (since I'm running multiple apps).
5. Magic! Amazon generates my environment. Same URL, different ports for all my apps (as specified in the configuration files in step 4).
I would love to find a way to automate step 2, since it requires me to have an extra repo for each app. I have my apps hosted in a git repo, and I have an "extra" repo per app where I host the artifacts generated in step 1 so that I can do step 3.
If you're willing to use a different SBT plugin for step 1, then you can automate step 2.
Although quay.io supports building your image from GitHub, they do not require it. (You can publish a local Docker image directly to your quay.io repository.)
Use the sbt-native-packager plugin in project/plugins.sbt.
Set up the plugin settings in build.sbt, like: dockerRepository := Some("quay.io/myaccount")
Your step 1 becomes: sbt docker:stage
Followed by: sbt docker:publishLocal
Check your image names and tags with docker images. The new image should have a name like quay.io/myaccount/app
Before you can publish to quay.io, you must docker login quay.io. Read their tutorial.
Your step 2 becomes sbt docker:publish. Now your quay.io account should contain the same IMAGE ID as your local Docker daemon.
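Putting it together, the full flow from a clean checkout might look like this (quay.io/myaccount is a placeholder account):

# one-time login so the publish step can push
docker login quay.io

# stage the Docker context and build the image locally
sbt docker:stage docker:publishLocal

# verify the image name and tag, then push it to quay.io
docker images | grep quay.io/myaccount
sbt docker:publish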
Proceed with steps 3+ on the AWS side...
I am not really familiar with Scala, however I believe the artifacts could be generated by Jenkins/CircleCI inside a container built on Jenkins/CircleCI, with the appropriate image tags then referenced within your Dockerrun.aws.json.
Hope that helps.
I'm trying to deploy via docker. I'm using the following workflow:
Build locally
Push my image to docker hub
On the server: pull the image
On the server: start the image
But docker push takes FOREVER. There are like 30 images, and it has to walk through each one and say "Image already exists". Is there any way to speed this up?
Alternatively, should I be using a different process to deploy?
If you are pushing to AWS ECR, like I was, it may be that Docker on your local machine needs a restart. See this thread about AWS ECR slowness:
https://forums.aws.amazon.com/thread.jspa?threadID=222834
This may affect other platforms as well. It seems that around 1.12.1 on Mac, anyhow, there are some slowness issues that go away with a restart of Docker.
If you're using a local registry, we recently added a redis cache which has helped speed things up tremendously. Details about how to do this are on the registry github page
https://github.com/docker/docker-registry
While pushing still takes time on new images, pulls are very fast, as all layers are in the redis cache.
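If you want to try a registry on (or near) the deployment host, the basic setup is a single container; the Redis cache is configured separately as described on the registry project page (the port, container name, and image names below are just examples):

# run a private registry on the host
docker run -d -p 5000:5000 --name registry registry:2

# retag an image against it and push
docker tag myapp localhost:5000/myapp
docker push localhost:5000/myapp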
The most likely reason why you are pushing more/larger layers of your images on every deployment is that you have not optimized your Dockerfiles. Here is a nice intro: http://blog.tutum.co/2014/10/22/how-to-optimize-your-dockerfile/.
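The core idea is to order instructions from least to most frequently changing, so that only the final layers differ between deployments and everything above them stays cached and never needs to be re-pushed. A minimal Node.js sketch (not taken from the question's actual Dockerfile; paths and commands are assumptions):

FROM node:lts
WORKDIR /app
# dependencies change rarely - these layers stay cached between builds
COPY package.json ./
RUN npm install --production
# application code changes on every deployment - keep it last
COPY . .
CMD ["node", "server.js"]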
I'm new to Capistrano and struggling a little to get started. A brief description of what I need to do:
git pull the latest code from our git repo, on a central build server. This build server's environment matches the deployment environment exactly. I need the code to be built here. I don't want to deploy a binary that was built on a Mac laptop, for example.
compile the binary on this machine.
deploy it from this machine to all the target machines.
There is a shared user we can all SSH into on the build machine to do the builds.
The build machine is behind a gateway machine, not directly accessible.
All of the deployment target machines also have this shared user and are also behind the gateway.
The deployed binary is a single executable, and there is an init script on the target machines. After deploying the binary and changing the symlink to it, restart the service via the init script.
Everyone has appropriate SSH keys and agent forwarding for all necessary tasks.
So in principle it seems rather simple, but Capistrano seems opinionated and a bit magical. As a result I'm not sure how to accomplish all of this. It seems like it wants to check out my code and copy it to the remote machines, for example without building it first.
I think I need to ignore all of Capistrano's default smarts and just make it run some shell commands on the appropriate servers. In pseudo-code:
ssh buildmachine via gateway "cd repo && git pull && make"
ssh targetmachine(s) via gateway "scp buildmachine:repo/binary .; <mv && symlink>; service foo restart"
Am I even using the right tool for the job? It seems a lot like a round peg in a square hole.
Can someone explain to me what the contents of the Capistrano configuration files should be, and what cap commands I'd run to accomplish this?
BTW, I've searched around and looked at questions like deploying with capistrano with remote git repo but without git running on production server and From manual pull on server to Capistrano
The question is rather old, but you never know when someone stumbles onto it in need of information...
First and foremost, consider that Capistrano might just not be the right tool for the job you want to do.
That said, it is not impossible to accomplish what you expect. While in projects that deploy large amounts of files and modify them (like CSS/JS minification, JS builds, etc.) I would avoid it, in your case you can consider running a "deployment repository" and configuring it in Capistrano as the source. Your process would look like this:
run the local build with whatever tools you need
upload the resulting binary to a deployment repository
run Capistrano, which will connect to the application servers, fetch the fresh binary from the repository, perform any server-side tasks required, and symlink it to "current"
As a side effect you end up with a full history of deployed binaries.
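A rough sketch of that flow with plain commands (the repository name, host, and Capistrano stage are hypothetical, and the Capistrano configuration is assumed to already point at the deployment repository):

# on the build machine: build and commit the binary to the deployment repo
make release
cp build/myapp deploy-repo/ && cd deploy-repo
git add myapp && git commit -m "new release" && git push

# from a machine with access through the gateway: let Capistrano do the rest
cap production deploy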
I haven't worked much with Nexus before, so I'm still trying to work out a product lifecycle that works for us.
I want to be able to export certain sets of repository artifacts from nexus into another nexus container. It looks like the only way to do this, as of now, is to pull artifacts as a set of dependency builds and then deploy them to the new repository. This may be what we have to go with, I was just looking for a better approach.
It looks like mirroring or proxying won't give us the fine grain control of export that I need.
I see that I can just copy artifacts out of nexus, but I'm not sure how to tell the new nexus container that it is supposed to manage those files.
What I want to do is be able to put a set of artifacts onto a DVD that can be run as a localized Nexus instance for the purpose of installing software at a customer site. It appears that any customer that will allow a connection back to us for software installs can be treated with the same install setup we use for QA. The reason to use a Nexus deploy instead of an installer is that we need to be able to roll back a "patch install", as each patch/install set would be maintained as a release version. Right now this is all done in custom code, since there doesn't seem to be an installer that handles rollbacks (with backups) after the install has completed.
This would be a non-standard use case for Nexus, and it strikes me that Nexus would be overkill for your requirements. Any web server can act as a Maven repository, once the files are available in the correct format.
For example, why not burn the desired subset of repository files onto the DVD and include a copy of Jetty, which can be launched from the DVD and serve up the local content over HTTP?