I have a number of legacy services running that read their configuration files from disk, and a separate daemon which updates these files as they change in ZooKeeper (somewhat similar to confd).
For most of these types of configuration we would love to move to a more environment-variable-like model, where the config is fixed for the lifetime of the pod. However, we need to keep the outside config files as the source of truth while services are transitioning from the legacy model to Kubernetes. I'm curious whether there is a clean way to do this in Kubernetes.
A simplified version of the current model that we are pursuing is:
Create a docker image which has a utility for fetching config files and writing them to disk. Once done, it writes a /donepath/done file.
The main image waits until the done file exists, then allows the normal service startup to proceed.
Use an emptyDir volume and volume mounts to get the config from the helper image into the main image.
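Roughly, the pod spec for this model looks something like the sketch below, with the helper running as an init container (the image names, paths, and fetch command are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: legacy-service                   # hypothetical
spec:
  volumes:
    - name: conf
      emptyDir: {}
  initContainers:
    - name: fetch-config                 # the helper image described above
      image: example/config-fetcher:latest
      # fetch-config stands in for whatever utility pulls the files down
      command: ["sh", "-c", "fetch-config --out /shared/conf && mkdir -p /shared/donepath && touch /shared/donepath/done"]
      volumeMounts:
        - name: conf
          mountPath: /shared
  containers:
    - name: service
      image: example/legacy-service:latest
      # wait for the done file, then start the real service
      command: ["sh", "-c", "until [ -f /shared/donepath/done ]; do sleep 1; done; exec start-service"]
      volumeMounts:
        - name: conf
          mountPath: /shared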
I keep seeing instances of this problem where I "just" need to get a couple of files into the docker image at startup (to allow per-env/canary/etc. variance), and running all of this machinery each time seems like a burden to throw on devs. I'm curious if there is a simpler way to do this already in Kubernetes or on the horizon.
You can use the ADD command in your Dockerfile. It is used as ADD <file> /path/in/container. This lets you quickly add a complete file to your container. The file you want to add to the image needs to be in the same directory as the Dockerfile when you build the container. You can also add a tar file this way, which will be expanded during the build.
Another option is the ENV command in your Dockerfile. This adds the data as an environment variable.
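For example, a minimal Dockerfile sketch of both (the file names and values are hypothetical):

FROM ubuntu:20.04
# Copy a local file (it must sit next to the Dockerfile) into the image
ADD app.conf /etc/myapp/app.conf
# A local tar archive is automatically unpacked at the destination
ADD config-bundle.tar.gz /etc/myapp/
# Bake a value into the image as an environment variable
ENV APP_MODE=production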
Just a quick question: do you HAVE to remove or move the default kube-scheduler.yaml from the folder? Can't I just make a new yaml (with the custom scheduler) and run that in the pod?
Kubernetes isn't file-based and doesn't care about the file location. You use the files only to apply configuration onto the cluster via kubectl, kubeadm, or similar CLI tools or their libraries. The yaml is only the content you manually put into it.
You need to know/decide what your folder structure and your execution/configuration flow are.
Also, you can simply have a temporary file; the naming doesn't matter either, and it's alright to replace the content of a yaml file. Preferably, though, have some kind of history record in place, such as a manual note, a comment, or source control like git, so you know what was changed and why.
So yes, you can change the scheduler yaml, or you can create a new file and reorganize it however you like, but you will need to adjust your flow to that: change paths, etc.
All the app files and extraDirectories are owned by root.
/app/libs/
/app/resources/
/app/classes/
/app/logs
I want to run the application as a non-root user, and I want these files/folders to be owned by that user only and not root.
Is there any way to do this? I found the Jib Maven extension mentioned below that can alter the owner, but it recommends not doing it. Is there a better way?
https://github.com/GoogleContainerTools/jib-extensions/tree/master/first-party/jib-ownership-extension-maven
The reason you want to change the ownership of some part of the app directory is presumably that your app wants to modify some files or create new ones inside it at runtime. Generally speaking, it is considered good practice to build an image that is as immutable as possible.
Since you mentioned /app/logs, I suspect that your app generates log files while it is running. On some modern container orchestration platforms (such as Kubernetes), apps are usually designed to output logs to stdout and stderr.
The best practice is to write your application logs to the standard output (stdout) and standard error (stderr) streams.
Think about it: if your app generates log files at /app/logs inside a container (and there will be multiple containers of the same image running), how would you collect and monitor them in a unified way? What if different apps generate log files at different filesystem locations? More importantly, if your container crashes, you simply lose the log files. By writing logs to stdout and stderr, the platform (say, Kubernetes) takes care of all the complexity of managing and correlating logs from all pods.
If you cannot change how your app handles log files, you should at least mount a volume at /app/logs at runtime. For any container runtime (be it Kubernetes or Docker), this is easily configurable. The mounted directory will usually be world-writable, so you won't need to change the ownership. But you'll still have to think about how to collect and manage the log files.
Likewise, if it is not about log files but your app needs somewhere to create temporary files inside the app directory and you cannot change the location for some reason, you should at least try to mount an ephemeral volume before falling back to the last resort of using the Jib Ownership Extension you mentioned.
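As an illustration, a minimal sketch of mounting an emptyDir volume at /app/logs while running as a non-root user (the names and user id are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: my-app                          # hypothetical
spec:
  securityContext:
    runAsUser: 1000                     # run the app as a non-root user
  containers:
    - name: app
      image: example/my-app:latest
      volumeMounts:
        - name: logs
          mountPath: /app/logs          # writable regardless of who owns /app in the image
  volumes:
    - name: logs
      emptyDir: {}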
In conclusion, give a careful assessment of why you have to change the ownership first. If the app wants to mutate itself at runtime, that's usually not good practice for containerization, and there must be some root cause that you may need to resolve in a proper way.
How do you run docker-compose across different lifecycle environments (say dev, qa, staging, production)?
Sometimes a larger VM is shared by multiple developers, so I would like to start the containers with appropriate developer-specific suffixes (say dev1, dev2, dev3, ...). Should port customization be handled manually via the environment file (i.e. the .env file)?
This is an unusual use case for docker-compose, but I'll leave some tips anyway! :)
There are two different ways to name stuff you start with docker-compose. One is to name the service that you specify under the main services: key of your docker-compose.yml file. By default, individual running containers will be assigned names indicating what project they are from (by default, the name of the directory your docker-compose file is in), what service they run (this is what's specified under your services: key), and which instance of that service they are (this number changes if e.g. you're using replicas). E.g. default container names for a service named myservice specified in a compose file ~/my_project/docker/docker-compose.yml will look like docker_myservice_1 (or _2, _3, etc. if more than one container is supposed to run).
You can use environment variables to specify a lot of key-value pairs in docker-compose files, but you can't conditionally specify the service name - service keys are only allowed to have alphanumeric characters in them, so compose files can't look like e.g.:
version: "3"
services:
  ${ENVVAR}:
    image: ubuntu:20.04
However, you can override the container naming scheme by using the container_name field in your docker-compose file (documentation for usage here). Maybe a solution you could use looks like this:
version: "3"
services:
  myservice:
    image: ubuntu:20.04
    container_name: ${DEVELOPER_ENVVAR?err}
This will require a developer to specify DEVELOPER_ENVVAR at runtime, either by exporting it in their shell or by running docker-compose like DEVELOPER_ENVVAR=myservice_dev1 docker-compose up. Note that using container_name is incompatible with using replicas to run multiple containers for the same service - the names have to be unique for those running containers, so you'll either have to define separate services for each name or give up on using container_name.
However, you're in a pickle if you expect multiple developers to be able to run containers with different names using the same compose file in the same directory. That's because when starting a service, docker-compose has a Recreating step where, if there are already containers implementing that service running, it will wait for those containers to finish. Ultimately, I think this is for the best - if multiple developers were trying to run the exact same compose project at once, should one developer have control over other developers' running containers? Probably not, right?
If you want multiple developers to be able to run services at once in the same VM, I think you probably want to do two things:
first, (and you may well have already done this! but it's still a good reminder) make sure that this is a good idea. Are there going to be resource contention issues (eg. for port-forwarding) that make different running instances of your project conflict? For many Docker services, there are going to be, but there probably won't be for eg. images that are meant to be run in a swarm.
second, have different compose files checked out in different directories, so that there are separate compose projects for each developer. To use .env files, one obvious option is to just maintain separate copies, one per developer directory (see the sketch after this list). If, for your use case, it's unsatisfactory to maintain one copy of .env per developer this way, you could use symlinks named .env (or whatever your env file is named) pointing to the same file somewhere else on the VM.
After you've done this, you'll be able to tell from the container names who is running what.
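As a rough sketch of the per-developer .env idea, each developer's checkout could use a compose file like the one below, with DEVELOPER_ENVVAR and HOST_PORT supplied by that developer's own .env file (all names and values here are hypothetical):

version: "3"
services:
  myservice:
    image: ubuntu:20.04
    container_name: ${DEVELOPER_ENVVAR?err}   # e.g. myservice_dev1
    ports:
      - "${HOST_PORT:-8080}:80"               # host port differs per developer; the container port stays 80

The matching .env would then just contain lines like DEVELOPER_ENVVAR=myservice_dev1 and HOST_PORT=8081.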
If none of these are satisfactory, you might want to consider, eg. using one VM per developer, or maybe even considering using a different container management system than docker-compose.
I have done very similar automation, and I used Ansible to create the docker-compose config on the fly.
So based on the input environment, the Ansible playbook creates the relevant docker-compose file. Basically, I have a docker-compose template in my git repository with dynamic values, and the Ansible playbook populates them.
You can also use Ansible to trigger such creation or chain the automation steps one after another, as in the sketch below.
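A minimal sketch of that approach, assuming a Jinja2 template named docker-compose.yml.j2 sitting next to the playbook (file names, paths, and variables are hypothetical):

# playbook.yml: render a per-environment compose file, then bring the stack up
- hosts: localhost
  connection: local
  vars:
    env_name: dev1                      # override per run, e.g. ansible-playbook playbook.yml -e env_name=qa
    compose_dir: /opt/compose           # hypothetical output location
  tasks:
    - name: Render the docker-compose file from the template
      template:
        src: docker-compose.yml.j2
        dest: "{{ compose_dir }}/docker-compose.{{ env_name }}.yml"

    - name: Start the stack for this environment
      command: "docker-compose -f {{ compose_dir }}/docker-compose.{{ env_name }}.yml up -d"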
A similar sample has been posted in the ansible_docker_splunk repository.
Basically, the whole project automates an end-to-end Docker cluster from a CSV file.
I am trying to copy some directories into the minikube VM to be used by some of the pods that are running. These include API credential files and template files used at run time by the application. I have found you can copy files using scp into the /home/docker/ directory, however these files are not persisted over reboots of the VM. I have read files/directories are persisted if stored in the /data/ directory on the VM (among others) however I get permission denied when trying to copy files to these directories.
Are there:
A: Any directories in minikube that will persist data that aren't protected in this way
B: Any other ways of doing the above without running into this issue (could well be going about this the wrong way)
To clarify, I have already been able to mount the files from /home/docker/ into the pods using volumes, so it's just the persisting data I'm unclear about.
Kubernetes has dedicated object types for these sorts of things. API credential files you might store in a Secret, and template files (if they aren't already built into your Docker image) could go into a ConfigMap. Both of them can either be exposed as environment variables or mounted as artificial volumes in running containers.
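For example, a minimal sketch of mounting both into a pod (all names are hypothetical; the Secret and ConfigMap themselves would be created separately, e.g. with kubectl create secret / kubectl create configmap):

apiVersion: v1
kind: Pod
metadata:
  name: my-app                              # hypothetical
spec:
  containers:
    - name: app
      image: example/my-app:latest
      volumeMounts:
        - name: templates
          mountPath: /etc/myapp/templates   # files from the ConfigMap appear here
        - name: api-creds
          mountPath: /etc/myapp/credentials # files from the Secret appear here
          readOnly: true
  volumes:
    - name: templates
      configMap:
        name: app-templates
    - name: api-creds
      secret:
        secretName: app-api-credentials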
In my experience, trying to store data directly on a node isn't a good practice. It's common enough to have multiple nodes, to not directly have login access to those nodes, and for them to be created and destroyed outside of your direct control (imagine an autoscaler running on a cloud provider that creates a new node when all of the existing nodes are 90% scheduled). There's a good chance your data won't (or can't) be on the host where you expect it.
This does lead to a proliferation of Kubernetes objects and associated resources, and you might find a Helm chart to be a good resource to tie them together. You can check the chart into source control along with your application, and deploy the whole thing in one shot. While it has a couple of useful features beyond just packaging resources together (a deploy-time configuration system, a templating language for the Kubernetes YAML itself) you can ignore these if you don't need them and just write a bunch of YAML files and a small control file.
For minikube, data kept in the $HOME/.minikube/files directory is copied into the / directory of the VM by minikube. For example, a file placed at $HOME/.minikube/files/etc/myapp/credentials.json on the host will appear at /etc/myapp/credentials.json inside the VM after minikube start.
I have a setup with a webserver (NGINX) and a react-based frontend that uses webpack to build the final static sources.
The webserver has its own kubernetes deployment + service.
The frontend needs to be built before the webserver can serve the static html/js/css files - but after that, the pod/container can stop.
My idea was to share a volume between the webserver and the frontend pod. The frontend will write the generated files to the volume and the webserver can serve them from there. Whenever there is an update to the frontend sourcecode, the files need to be regenerated.
What is the best way to accomplish that using kubernetes tools?
Right now, I'm using an init container to build - but this leads to a restart of the webserver pod as well, which wouldn't be necessary.
Is this the best/only solution to this problem, or should I use Kubernetes Jobs for this kind of task?
There are multiple ways to do this. Here's how I think about this:
Option 1: The static files represent built source code
In this case, the static files that you want to serve should actually be packaged and built into the docker image of your nginx webserver (in the html directory say). When you want to update your frontend, you update the version of the image used and update the pod.
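One common way to do this is a multi-stage Docker build; a rough sketch, assuming an npm/webpack build that outputs to dist/ (the image tags and paths are hypothetical):

# Stage 1: build the static frontend assets
FROM node:18 AS build
WORKDIR /src
COPY . .
RUN npm ci && npm run build        # assumes the webpack build writes to /src/dist

# Stage 2: serve the built assets with nginx
FROM nginx:1.25
COPY --from=build /src/dist /usr/share/nginx/html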
Option 2: The static files represent state
In this case, your approach is correct. Your 'state' (like a database) is stored in a folder. You then run an init container/job to initialise 'state' and then your webserver pod works fine.
I believe option 1 to be better for 2 reasons:
You can horizontally scale your webserver trivially by increasing the pod replica number. In option 2, you're actually dealing with state so that's a problem when you want to add more nodes to your underlying k8s cluster (you'll have to copy files/folders from one volume/folder to another).
The static files are actually the source code of your app. These are not uploaded media files or similar. In this case, it absolutely makes sense to make them part of your docker image. Otherwise, it kind of defeats the advantage of containerising and deploying.
Jobs, Init containers, or alternatively a gitRepo type of Volume would work for you.
http://kubernetes.io/docs/user-guide/volumes/#gitrepo
It is not clear in your question why you want to update the static content without simply re-deploying / updating the Pod.
Since somewhere, somehow, you have to build the webserver Docker image, it seems best to build the static content into the image: no moving parts once deployed, no need for volumes or storage. Overall it is simpler.
If you use any kind of automation tool for Docker builds, it's easy.
I personally use Jenkins to build Docker images based on a hook from git repo, and the image is simply rebuilt and deployed whenever the code changes.
Running a Job or init container doesn't gain you much: sure, the web server keeps running, but it's just as easy to have a Deployment with rolling updates, which will deploy the new Pod before the old one is torn down, so your server will always be up too.
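A rough sketch of such a Deployment, assuming the static content is baked into the webserver image (the image name and labels are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webserver
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0                       # bring a new Pod up before the old one is removed
      maxSurge: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: nginx
          image: example/frontend-nginx:1.2.3 # rebuilt and retagged whenever the frontend code changes
          ports:
            - containerPort: 80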
Keep it simple...