I'm not sure I completely understand the role of Docker in the process of development and deployment.
Say, I create a Dockerfile with nginx, some database and something else which creates a container and runs fine.
I drop it somewhere in the cloud and execute it to install and configure all the dependencies and environment settings.
Next, I have a repository with a web application which I want to run inside the container I created and deployed in the first 2 steps. I regularly work on it and push the changes.
Now, how do I integrate the web application into the container?
Do I put it as a dependency inside the Dockerfile I create in the 1st step and recreate the container each time from scratch?
Or, do I deploy the container once but have procedures inside Dockerfile that install utils that pull the code from repo by command or via hooks?
What if a container is running but I want to change some settings of, say, nginx? Do I add these changes into Dockerfile and recreate the image?
In general, what's the role of Docker in the daily app development routine? Is it used often if the infrastructure is running fine and only code is changing?
I think there is no single "use only this" answer - as you already outlined, there are different viable concepts available.
Deployment to staging/production/pre-production
a)
Do I put it as a dependency inside the Dockerfile I create in the 1st step and recreate the container each time from scratch?
This is for sure the most Docker-ish way and aligns fully with the Docker philosophy. It is highly portable and reproducible, and it suits anything from a single container to a swarm of thousands. E.g. this concept has no issue suddenly scaling horizontally when you need more containers, say due to heavy traffic / load.
It also aligns with the idea that only configuration/data should be dynamic in a Docker container, not code / binaries / artifacts.
This strategy should be chosen for production use, where deployments are less frequent. If you care about downtime during container rebuilds (on upgrade), there are good concepts to deal with that too.
We use this for production and pre-production instances.
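As a minimal sketch of a), assuming the application is a set of built artifacts served by nginx (image tags and paths are illustrative, not from the question):

# Dockerfile (illustrative): code and config are baked in at build time
FROM nginx:1.25
# copy the built application artifacts into the image
COPY ./dist /usr/share/nginx/html
# bake the nginx configuration in as well; only runtime data stays dynamic
COPY ./nginx.conf /etc/nginx/conf.d/default.conf

Each release then means a docker build -t myapp:<version> . followed by redeploying the new image.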
b)
Or, do I deploy the container once but have procedures inside the Dockerfile that install utils that pull the code from the repo by command or via hooks?
This is a more common practice for very frequent deployments. You can go with the pull concept (what you said) or the push concept (docker cp / ssh scp); I guess the latter is preferred in this kind of environment.
We use this kind of strategy for staging instances, which basically should reflect the current codebase and its status. We also use it for smoke tests and CI, depending on the application. If the app actually changes its dependencies a lot and a clean build requires a rebuild with those to really ensure things are tested as intended, we actually rebuild the image during CI.
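A minimal push-style sketch, assuming a running container named app and an application root of /var/www/app (both hypothetical):

# push the freshly built code into the running container (push concept)
docker cp ./build/. app:/var/www/app
# restart the container so the new code is picked up
docker restart app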
Configuration management
1.
What if a container is running but I want to change some settings of, say, nginx? Do I add these changes into Dockerfile and recreate the image?
I am not labeling this c) since this is configuration management, not application deployment, and the answer can be very complicated depending on your case. In general, if a redeployment needs configuration changes, whether you can go with b) or always have to go with a) depends on your configuration management.
E.g. if you use https://github.com/markround/tiller with consul as the backend, you can push the configuration changes into consul, regenerating the configuration with tiller, while using consul watch -prefix /configuration tiller as a watch-task to react on those value changes.
This enables you to go with b) and fix the configuration in place.
You can also use https://github.com/markround/tiller and, on deployment, e.g. change ENV vars or some kind of yml file (tiller supports different backends) and call tiller during deployment yourself. This most probably requires SSH access, or you SSH onto the host and use docker cp and docker exec.
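For example, a hedged sketch of pushing a changed nginx setting into a running container without rebuilding the image (the container name web and the config path are assumptions):

# copy the updated configuration into the running container
docker cp ./nginx/app.conf web:/etc/nginx/conf.d/app.conf
# reload nginx gracefully so the new settings take effect
docker exec web nginx -s reload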
Development
In development, you generally reuse the docker-compose.yml file you use for production, but overload it with a docker-compose-dev.yml to e.g. mount your code folder, set RAILS_ENV=development, reconfigure / mount some other configuration like xdebug or more verbose nginx logging, whatever you need. You can also add fake MTA services like fermata and so on.
docker-compose -f docker-compose.yml -f docker-compose-dev.yml up
docker-compose-dev.yml only overloads some values; it does not redefine or duplicate the whole file.
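A minimal sketch of such an overlay; the service names, paths and the RAILS_ENV example above are assumptions for illustration:

# docker-compose-dev.yml (illustrative)
version: '3.3'
services:
  app:
    environment:
      - RAILS_ENV=development
    volumes:
      # mount the local code folder for live editing
      - ./:/usr/src/app
  nginx:
    volumes:
      # more verbose logging via a dev-specific nginx config
      - ./docker/nginx-dev.conf:/etc/nginx/conf.d/default.conf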
Depending on how powerful your configuration management is, you can also do a pre-installation during development stack up.
We actually use scaffolding for that: we use https://github.com/xeger/docker-compose and, after running it, we use docker exec and docker cp to preinstall an instance or stage something. Some examples are here: https://github.com/EugenMayer/docker-sync/wiki/7.-Scripting-with-docker-sync
If you are developing under OSX and face performance issues due to OSXFS / code shares, you probably want to have a look at http://docker-sync.io (I am biased though).
Related
How to run docker-compose across different lifecycle environments (say dev, qa, staging, production).
Sometimes a larger VM is shared by multiple developers, so we would like to start the containers with developer-specific suffixes (say dev1, dev2, dev3, ...). Should port customization be handled manually via the environment file (i.e. the .env file)?
This is an unusual use case for docker-compose, but I'll leave some tips anyway! :)
There are two different ways to name stuff you start with docker-compose. One is to name the service that you specify under the main services: key of your docker-compose.yml file. By default, individual running containers will be assigned names indicating what project they are from (by default, the name of the directory your docker-compose file is in), what service they run (what's specified under your services: key), and which instance of that service they are (this number changes if e.g. you're using replicas). E.g. default container names for a service named myservice specified in a compose file ~/my_project/docker/docker-compose.yml will look like docker_myservice_1 (or _2, _3, etc. if more than one container is supposed to run).
You can use environment variables to specify a lot of key-value pairs in docker-compose files, but you can't specify the service name conditionally; service keys are restricted to alphanumeric characters and a few separators, and in any case compose files can't look like e.g.:
version: "3"
services:
${ENVVAR}:
image: ubuntu:20.04
However, you can override the container naming scheme by using the container_name field in your docker-compose file (documentation for usage here). Maybe a solution you could use looks like this:
version: "3"
services:
myservice:
image: ubuntu:20.04
container_name: ${DEVELOPER_ENVVAR?err}
This will require a developer to specify DEVELOPER_ENVVAR at runtime, either by exporting it in their shell or by running docker-compose like DEVELOPER_ENVVAR=myservice_dev1 docker-compose up. Note that using container_name is incompatible with using replicas to run multiple containers for the same service: the names have to be unique for those running containers, so you'll either have to define separate services for each name, or give up on using container_name.
However, you're in a pickle if you expect multiple developers to be able to run containers with different names using the same compose file in the same directory. That's because when starting a service, docker-compose has a Recreating step: if containers for that service are already running under the same project, it will recreate (stop and replace) them. Ultimately, I think this is for the best: if multiple developers were trying to run the exact same compose project at once, should one developer have control over other developers' running containers? Probably not, right?
If you want multiple developers to be able to run services at once in the same VM, I think you probably want to do two things:
first (and you may well have already done this! but it's still a good reminder), make sure that this is a good idea. Are there going to be resource contention issues (e.g. for port forwarding) that make different running instances of your project conflict? For many Docker services there are going to be, but there probably won't be for e.g. images that are meant to be run in a swarm.
second, have different compose files checked out in different directories, so that there are separate compose projects for each developer. To use .env files, one obvious option is to just maintain separate copies, one per developer directory (a sketch follows below). If, for your use case, it's unsatisfactory to maintain one copy of .env per developer this way, you could use symlinks named .env (or whatever your env file is named) pointing to the same file somewhere else on the VM.
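For instance, each developer's checkout could carry its own copy; the values below are purely illustrative and assume the compose file maps a port like "${HOST_PORT}:80":

# .env in a per-developer checkout, e.g. ~/dev2-checkout/docker/.env
DEVELOPER_ENVVAR=myservice_dev2
# a per-developer host port to avoid clashes on the shared VM
HOST_PORT=8082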
After you've done this, you'll be able to tell from the container names who is running what.
If none of these are satisfactory, you might want to consider, eg. using one VM per developer, or maybe even considering using a different container management system than docker-compose.
I have done very similar automation, using Ansible to create the docker-compose config on the fly.
Based on the input environment, the Ansible playbook creates the relevant docker-compose file: I keep a docker-compose template in my git repository with dynamic values, and the playbook populates them.
You can also use Ansible to trigger such creation or automation steps one after another.
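A hedged sketch of what such a task list might look like; the template path, destination directory and variable names are assumptions, not taken from the linked repository:

# Ansible tasks excerpt (illustrative)
- name: Render docker-compose.yml for {{ target_env }}
  template:
    src: templates/docker-compose.yml.j2
    dest: /opt/myapp/docker-compose.yml

- name: Bring the stack up
  command: docker-compose up -d
  args:
    chdir: /opt/myapp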
A similar sample has been posted at ansible_docker_splunk repository.
Basically, the whole project automates an end-to-end Docker cluster from a CSV file.
I have a problem with docker compose and build order. Below is my dockerfile for starting my .net application
As you can see as part of my build process I run some tests using "RUN dotnet test backend_test/backend_test.csproj"
These tests require a mongodb database to be present.
I try to solve this dependency with docker-compose and its "depends_on" feature, see below.
However this doesn't seem to work as when I run "docker-compose up" I get the following:
The tests eventually timeout since there is no mongodb present.
Does depends_on actually affect build order at all, or does it only affect start order (i.e. builds everything, then proceeds to start in the correct order)?
Is there another way of doing this? (I want tests to run as part of building my final app.)
Thanks in advance; let me know if you need extra information.
As you guessed, depends_on is for runtime order only, not build time - it just affects docker-compose up and docker-compose stop.
I highly recommend you make all the builds independent of each other. Perhaps you need to consider separate builder and runtime images here, and / or use a Docker-based CI (Gitlab, Travis, Circle etc) to have these dependencies available for testing.
Note also that depends_on often disappoints people: it just waits for Docker's container startup to finish, not the application's startup. So your DB / service / whatever may still be starting up when the container that depends on it starts using it, causing timeouts etc. This is why HEALTHCHECK now exists (with a similar healthcheck feature in Docker Compose).
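If you do move the tests out of the image build and run them against a running stack (as suggested above), a sketch of the runtime side looks like this; service names are illustrative, and the long depends_on syntax with a condition is supported in the compose file v2.1+ format and in recent Compose versions. This only helps start order, not the RUN dotnet test step at build time:

# docker-compose.yml excerpt (illustrative)
services:
  mongo:
    image: mongo:4.4
    healthcheck:
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
  backend:
    build: .
    depends_on:
      mongo:
        condition: service_healthy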
I have a python app that builds a dataset for a machine learning task on GCP.
Currently I have to start an instance of a VM that we have, and then SSH in, and run the app, which will complete in 2-24 hours depending on the size of the dataset requested.
Once the dataset is complete the VM needs to be shutdown so we don't incur additional charges.
I am looking to streamline this process as much as possible, so that we have a "1 click" or "1 command" solution, but I'm not sure the best way to go about it.
From what I've read about so far it seems like containers might be a good way to go, but I'm inexperienced with docker.
Can I setup a container that will pip install the latest app from our private GitHub and execute the dataset build before shutting down? How would I pass information to the container such as where to get the config file etc? It's conceivable that we will have multiple datasets being generated at the same time based on different config files.
Is there a better gcloud feature that suits our purpose more effectively than containers?
I'm struggling to get information regarding these basic questions, it seems like container tutorials are dominated by web apps.
It would be useful to have a batch-like container service that runs a container until its process completes. I'm unsure whether such a service exists. I'm most familiar with Google Cloud Platform and this provides a wealth of compute and container services. However -- to your point -- these predominantly scale by (HTTP) requests.
One possibility may be Cloud Run, triggering jobs using Cloud Pub/Sub. I see there are async capabilities too, and that may be interesting (I've not explored them).
Another runtime for you to consider is Kubernetes itself. While Kubernetes requires some overhead in having Google, AWS or Azure manage a cluster for you (I strongly recommend you don't run Kubernetes yourself), and there is some inertia in matching the capacity of the cluster's nodes to the needs of your jobs, as you scale the number of jobs you will smooth these needs out. A big advantage with Kubernetes is that it will scale (nodes|pods) as you need them. You tell Kubernetes to run X container jobs, it does it (and cleans up) without much additional management on your part.
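For a run-to-completion workload like this, a Kubernetes Job is the natural fit; here is a minimal sketch, with the image name and config hand-off being assumptions about your app:

# job.yaml (illustrative)
apiVersion: batch/v1
kind: Job
metadata:
  name: dataset-build
spec:
  backoffLimit: 2
  ttlSecondsAfterFinished: 3600   # clean up the finished Job after an hour
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: builder
        image: gcr.io/my-project/dataset-builder:latest   # hypothetical image
        env:
        - name: CONFIG_URL                                # hypothetical config hand-off
          value: gs://my-bucket/configs/run-42.yaml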
I'm biased and approach the container vs image question mostly from a perspective of defaulting to container-first. In this case, you'd receive several benefits from containerizing your solution:
reproducible: the same image is more likely to produce the same results
deployable: you run a container rather than managing the OS and app stack and testing for consistency, etc.
maintainable: a smaller image representing just your app means less work to maintain it
One (beneficial!?) workflow change, if you choose to use containers, is that you will need to build your images before using them. Something like Knative combines these steps, but I'd stick with doing this yourself initially. A common solution is to trigger builds (Docker, GitHub Actions, Cloud Build) from your source code repo. Commonly you would run tests against the images that are built, but you may also run your machine-learning tasks this way too.
Your containers would contain only your code. When you build your container images, you would pip install, perhaps pip install --requirement requirements.txt, to pull in the appropriate packages. Your data (models?) is better kept separate from your code when this makes sense. When your runtime platform runs containers for you, you provide configuration information (environment variables and|or flags) to the container.
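A minimal Dockerfile sketch along these lines; the entrypoint script, base image and config variable are assumptions about your app:

FROM python:3.9-slim
WORKDIR /app
# install dependencies at build time so the image is self-contained
COPY requirements.txt .
RUN pip install --requirement requirements.txt
COPY . .
# configuration is provided at runtime, e.g.
#   docker run -e CONFIG_URL=gs://my-bucket/config.yaml dataset-builder
CMD ["python", "build_dataset.py"]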
A startup script seems to fit the bill better than containers here. The instance always executes startup scripts as root, so you can do anything you like, as the commands will be executed as root.
A startup script will perform automated tasks every time your instance boots up. Startup scripts can perform many actions, such as installing software, performing updates, turning on services, and any other tasks defined in the script.
Keep in mind that a startup script cannot stop an instance but you can stop an instance through the guest operating system.
This would be an ideal solution for the question you posed. It would require a small change in your Python app so that the operating system is shut down once the dataset is complete.
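A hedged sketch of such a startup script; the repo URL, entrypoint and the config-url metadata key are illustrative assumptions:

#!/bin/bash
# runs as root on every boot of the instance
git clone https://github.com/your-org/dataset-builder.git /opt/builder
pip3 install --requirement /opt/builder/requirements.txt
# read a per-run config location from instance metadata (hypothetical key)
CONFIG_URL=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/config-url")
python3 /opt/builder/build_dataset.py --config "$CONFIG_URL"
# shut down from inside the guest OS once the dataset is complete
shutdown -h now

You would attach it when creating the instance, e.g. gcloud compute instances create ... --metadata-from-file startup-script=startup.sh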
Q1) Can I setup a container that will pip install the latest app from our private GitHub and execute the dataset build before shutting down?
A1) Medium has a great article on installing a package from a private git repo inside a container. You can execute the dataset build before shutting down.
Q2) How would I pass information to the container such as where to get the config file etc?
A2) You can use ENV to set an environment variable. These will be available within the container.
You may consider looking into Docker for more information about containers.
I have a setup with a webserver (NGINX) and a react-based frontend that uses webpack to build the final static sources.
The webserver has its own kubernetes deployment + service.
The frontend needs to be built before the webserver can serve the static html/js/css files, but after that, the pod/container can stop.
My idea was to share a volume between the webserver and the frontend pod. The frontend will write the generated files to the volume and the webserver can serve them from there. Whenever there is an update to the frontend sourcecode, the files need to be regenerated.
What is the best way to accomplish that using kubernetes tools?
Right now, I'm using an init container to build, but this leads to a restart of the webserver pod as well, which wouldn't be necessary.
Is this the best/only solution to this problem, or should I use Kubernetes Jobs for this kind of task?
There are multiple ways to do this. Here's how I think about this:
Option 1: The static files represent built source code
In this case, the static files that you want to serve should actually be packaged and built into the Docker image of your nginx webserver (in the html directory, say). When you want to update your frontend, you update the image version used and update the pod.
Option 2: The static files represent state
In this case, your approach is correct. Your 'state' (like a database) is stored in a folder. You then run an init container/job to initialise 'state' and then your webserver pod works fine.
I believe option 1 to be better for 2 reasons:
You can horizontally scale your webserver trivially by increasing the pod replica number. In option 2, you're actually dealing with state so that's a problem when you want to add more nodes to your underlying k8s cluster (you'll have to copy files/folders from one volume/folder to another).
The static files are actually the built source code of your app; they are not uploaded media files or similar. In this case, it absolutely makes sense to make them part of your Docker image. Otherwise, it kind of defeats the advantage of containerising and deploying.
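Sticking with option 1, a minimal multi-stage Dockerfile sketch; the build tooling, paths and npm scripts are assumptions about your frontend:

# stage 1: build the static files with webpack
FROM node:18 AS build
WORKDIR /src
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build   # assumed to emit the bundle into /src/dist

# stage 2: serve them with nginx; no init container or shared volume needed
FROM nginx:1.25
COPY --from=build /src/dist /usr/share/nginx/html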
Jobs, Init containers, or alternatively a gitRepo type of Volume would work for you.
http://kubernetes.io/docs/user-guide/volumes/#gitrepo
It is not clear in your question why you want to update the static content without simply re-deploying / updating the Pod.
Since somewhere, somehow, you have to build the webserver Docker image, it seems best to build the static content into the image: no moving parts once deployed, no need for volumes or storage. Overall it is simpler.
If you use any kind of automation tool for Docker builds, it's easy.
I personally use Jenkins to build Docker images based on a hook from git repo, and the image is simply rebuilt and deployed whenever the code changes.
Running a Job or init container doesn't gain you much: sure, the web server keeps running, but it's just as easy to have a Deployment with rolling updates, which will deploy the new Pod before the old one is torn down, so your server will always be up too.
Keep it simple...
In a microservice architecture, I'm having a hard time grasping how one can manage environment-specific config (e.g. IP address and credentials for database or message broker).
Let's say you have three microservices ("A", "B", and "C"), each owned and maintained by a different team. Each team is going to need a team integration environment... where they work with the latest snapshot of their microservice, along with stable versions of all dependency microservices. Of course, you'll also need QA/staging/production environments as well. A simplified view of the big picture would look like this:
"Microservice A" Team Environment
Microservice A (SNAPSHOT)
Microservice B (STABLE)
Microservice C (STABLE)
"Microservice B" Team Environment
Microservice A (STABLE)
Microservice B (SNAPSHOT)
Microservice C (STABLE)
"Microservice C" Team Environment
Microservice A (STABLE)
Microservice B (STABLE)
Microservice C (SNAPSHOT)
QA / Staging / Production
Microservice A (STABLE, RELEASE, etc)
Microservice B (STABLE, RELEASE, etc)
Microservice C (STABLE, RELEASE, etc)
That's a lot of deployments, but that problem can be solved by a continuous integration server and perhaps something like Chef/Puppet/etc. The really hard part is that each microservice would need some environment data particular to each place in which it's deployed.
For example, in the "A" Team Environment, "A" needs one address and set of credentials to interact with "B". However, over in the "B" Team Environment, that deployment of "A" needs a different address and credentials to interact with that deployment of "B".
Also, as you get closer to production, environmental config info like this probably needs security restrictions (i.e. only certain people are able to modify or even view it).
So, with a microservice architecture, how do you maintain environment-specific config info and make it available to the apps? A few approaches come to mind, although they all seem problematic:
Have the build server bake them into the application at build-time - I suppose you could create a repo of per-environment properties files or scripts, and have the build process for each microservice reach out and pull in the appropriate script (you could also have a separate, limited-access repo for the production stuff). You would need a ton of scripts, though. Basically a separate one for every microservice in every place that microservice can be deployed.
Bake them into base Docker images for each environment - If the build server is putting your microservice applications into Docker containers as the last step of the build process, then you could create custom base images for each environment. The base image would contain a shell script that sets all of the environment variables you need. Your Dockerfile would be set to invoke this script prior to starting your application. This has similar challenges to the previous bullet-point, in that now you're managing a ton of Docker images.
Pull in the environment info at runtime from some sort of registry - Lastly, you could store your per-environment config inside something like Apache ZooKeeper (or even just a plain ol' database), and have your application code pull it in at runtime when it starts up. Each microservice application would need a way of telling which environment it's in (e.g. a startup parameter), so that it knows which set of variables to grab from the registry. The advantage of this approach is that now you can use the exact same build artifact (i.e. application or Docker container) all the way from the team environment up to production. On the other hand, you would now have another runtime dependency, and you'd still have to manage all of that data in your registry anyway.
How do people commonly address this issue in a microservice architecture? It seems like this would be a common thing to hear about.
Docker compose supports extending compose files, which is very useful for overriding specific parts of your configuration.
This is very useful at least for development environments and may be useful in small deployments too.
The idea is having a base shared compose file you can override for different teams or environments.
You can combine that with environment variables with different settings.
Environment variables are good if you want to replace simple values; if you need to make more complex changes, you use an extension file.
For instance, you can have a base compose file like this:
# docker-compose.yml
version: '3.3'
services:
  service-a:
    image: "image-name-a"
    ports:
      - "${PORT_A}"
  service-b:
    image: "image-name-b"
    ports:
      - "${PORT_B}"
  service-c:
    image: "image-name-c"
    ports:
      - "${PORT_C}"
If you want to change the ports you could just pass different values for variables PORT_X.
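For instance, a per-environment .env file next to the compose file might look like this (the values are illustrative):

# .env
PORT_A=8080:80
PORT_B=8081:80
PORT_C=8082:80

Each environment or team keeps its own copy of this file, and Compose substitutes the values when bringing the stack up.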
For complex changes you can have separate files to override specific parts of the compose file. You can override specific parameters for specific services, any parameter can be overridden.
For complex changes you can have separate files to override specific parts of the compose file; you can override specific parameters for specific services, and any parameter can be overridden. For instance, you can have an override file for service A with a different image, adding a volume for development:
# docker-compose.override.yml
services:
  service-a:
    image: "image-alternative-a"
    volumes:
      - /my-dev-data:/var/lib/service-a/data
Docker Compose picks up docker-compose.yml and docker-compose.override.yml by default; if you have more files, or files with different names, you need to specify them in order:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml -f docker-compose.dev-service-a.yml up -d
For more complex environments, the solution is going to depend on what you use. I know this is a Docker question, but nowadays it's hard to find pure Docker systems, as most people use Kubernetes. In any case, you are always going to have some sort of secret management provided by the environment and managed externally; from the Docker side of things, you just have variables that will be provided by that environment.