I have a docker swarm and I deploy my service stack using
"docker stack deploy --compose-file mycompose.yaml myservice".
I want to pass some values that will be used in this YAML file. Can I pass them from the command line, or in some other way?
With the old docker-compose you could create a .env file and expose environment variables to the containers that way, referencing ${env} in the .yml file. It seems Swarm does something similar, though I'm not sure about the .env file. I'm still working on getting a Swarm setup, so I'm stuck on Compose for now. I found these links but haven't tried them myself yet, good luck.
https://docs.docker.com/compose/environment-variables/
And this too!
https://docs.docker.com/engine/swarm/secrets/
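For what it's worth, here is a minimal sketch of how the ${VAR} substitution mentioned above works with docker stack deploy (the variable names, image tag and stack name are placeholders, not from the original question). As far as I know, docker stack deploy takes the values from the shell environment rather than from a .env file:

mycompose.yaml:

version: "3.7"
services:
  web:
    image: nginx:${IMAGE_TAG:-latest}     # falls back to "latest" if IMAGE_TAG is unset
    environment:
      - APP_ENV=${APP_ENV:-production}    # passed through to the container

Then export the variables in the shell that runs the deploy:

export IMAGE_TAG=1.25
export APP_ENV=staging
docker stack deploy --compose-file mycompose.yaml myservice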
Related
Trying to configure AWX to run under Docker with Docker Compose. With the image quay.io/ansible/awx:21.7.0 it seems a little tricky. I don't want to set up Kubernetes and use the AWX Operator - I don't have the resources or tasks to justify that complexity, it would just be redundant tooling. All I need is to run the Docker process with some additional services in my infrastructure (for example Traefik and systemd services, with AWX being one of them).
Has anyone gone down this road? I'm trying to find the production Dockerfile (I assume one is used in production, right?) and prepare the Django environment to work inside docker-compose (env vars, networks, resources, services).
I'll be updating this post with my results. Thanks guys, I hope I'm not the only one with this problem.
Seems like an obvious thing to do, and it's very common in other similar systems, but I'm not seeing how to do it in Kubernetes. What am I missing?
You can try running a local secure registry and then creating a pod (using a YAML file) from a Docker image stored in that registry:
See this link.
Assuming that by a Kubernetes job you mean an app, you can simply do so by creating a Deployment, following this tutorial.
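As a rough sketch of the pod part (the image name and registry address are placeholders), a manifest pulling from a local registry could look like this:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: localhost:5000/my-app:1.0     # image previously pushed to the local registry
    imagePullPolicy: IfNotPresent

kubectl apply -f pod.yaml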
I am trying to push a Docker Compose file to the GitLab Container Registry. The commands execute successfully; however, I do not see the image in the registry. When I push using just the Dockerfile, that works, but with the Compose file it doesn't. I haven't found a known solution for this. I searched for similar posts but could not find an answer.
If you are using the Docker executor with the Docker-in-Docker service, the docker-compose command is not available by default and has to be installed. You can also check here whether you might be hitting some other limitations in your CI/CD configuration when using docker build.
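As an illustration only, a rough .gitlab-ci.yml sketch (the job name, image versions and apk-based install are assumptions; installing Compose via pip or the release binary works too) might look like:

build-and-push:
  image: docker:20.10
  services:
    - docker:20.10-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - apk add --no-cache docker-compose     # compose is not in the image by default
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
  script:
    - docker-compose build
    - docker-compose push                   # pushes the service images defined in the compose file

Note that the image: entries in the compose file need to point at the GitLab registry (e.g. $CI_REGISTRY_IMAGE/...) for the push to land there; the registry stores images, not the Compose file itself.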
When a pod is being scheduled, I dynamically (and transparently) mount a shared-libraries folder into the client containers through Kubernetes DevicePlugins. Now, inside the container I want to append/extend the LD_LIBRARY_PATH environment variable with these dynamically mounted shared libraries.
Inside the container: this can be achieved by running the following command in a shell:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/some/new/directory
From the host: I can add the export command to the pod.yaml file under the pod spec's command and args.
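A minimal sketch of that explicit approach (the image name, library path and binary are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: lib-test
spec:
  containers:
  - name: app
    image: my-app:latest     # placeholder image
    command: ["/bin/sh", "-c"]
    args: ["export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/mounted/libs && exec /usr/local/bin/app"]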
But I want to do this transparently, without the client/admin specifying it in the YAML file, using Kubernetes DevicePlugins or extended schedulers.
I am looking for a method/hack by which I can append/extend LD_LIBRARY_PATH inside the container using only the Kubernetes source code.
Thanks.
You can just bake it into your Dockerfile and create an image that you use in Kubernetes. No need to hack the Kubernetes source code.
Add a line like this to your Dockerfile:
ENV LD_LIBRARY_PATH /extra/path:$LD_LIBRARY_PATH
Then:
docker build -t <your-image-tag> .
docker push <your-image-tag>
Then, update your pod or deployment definition and deploy to Kubernetes.
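For example, a minimal Deployment sketch (names are placeholders; <your-image-tag> is the image built above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: <your-image-tag>     # image with LD_LIBRARY_PATH baked in

kubectl apply -f deployment.yaml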
Hope it helps.
If I understand your issue, all you need is to transparently add LD_LIBRARY_PATH to the pod as it is scheduled. Maybe you can try a MutatingAdmissionWebhook, which allows you to send a patch to Kubernetes to modify the manifest. There's good documentation from Banzai Cloud; I have not tried it myself.
https://banzaicloud.com/blog/k8s-admission-webhooks/
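As a very rough sketch (assuming a single container at index 0 that has no env section yet; the library path is a placeholder), the JSONPatch the webhook returns could add the variable like this:

[
  { "op": "add",
    "path": "/spec/containers/0/env",
    "value": [ { "name": "LD_LIBRARY_PATH", "value": "/mounted/libs" } ] }
]

The webhook sends this patch back base64-encoded in the AdmissionReview response (patchType: JSONPatch), as the linked post walks through.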
I'm trying to run the open-source Cachet status page on Kubernetes via this tutorial: https://medium.com/#ctbeke/setting-up-cachet-on-google-cloud-817e62916d48
Two Docker containers (Cachet/nginx) and Postgres are deployed to a pod on GKE, but the Cachet container fails with a CrashLoopBackOff error.
Within the docker-compose.yml file it's set to APP_KEY=${APP_KEY:-null}, and I'm wondering if I didn't set an environment variable I should have.
Any help with configuring the cachet docker file would be much appreciated! https://github.com/CachetHQ/Docker
Yes, you need to generate a key.
In the entrypoint.sh you can see that the bash script generates a key for you:
https://github.com/CachetHQ/Docker/blob/master/entrypoint.sh#L188-L193
It seems there's a bug in the Dockerfile here. Generate a key manually and then set it as an environment variable in your manifest.
There's a helm chart you can use in development here: https://github.com/apptio/helmcharts/blob/cachet/devel/cachet/templates/secrets.yaml#L12
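A rough sketch of that approach (the Secret name and key are placeholders; Cachet is a Laravel app, so a key of the form base64:<32 random bytes> should work, but double-check against the entrypoint script):

echo "base64:$(openssl rand -base64 32)"     # generate an APP_KEY value once

Then set it on the cachet container in your manifest, ideally via a Secret:

env:
- name: APP_KEY
  valueFrom:
    secretKeyRef:
      name: cachet-secrets     # placeholder Secret name
      key: appKey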