I'm trying to run AWX with Docker and Docker Compose. With the image quay.io/ansible/awx:21.7.0 it seems a little tricky. I don't want to set up Kubernetes and use the AWX Operator: I don't have the resources or the use case to justify that complexity, it would just be redundant tooling. All I need is a running Docker process alongside some additional services in my infrastructure (for example Traefik and systemd services, with AWX being one of them).
Has anyone gone down this path? I'm trying to find the production Dockerfile (I assume the image is used in production, right?) and to prepare the Django environment to work inside docker-compose (env vars, networks, resources, services).
I'll be updating this post with my results. Thanks, I hope I'm not alone with this problem.
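For reference, this is the rough shape I have in mind. It is a sketch, not a working setup: the mounted settings file, the port, and the commented-out commands are guesses based on the old 17.x docker-compose installer and would need verifying against the 21.7.0 image.

# docker-compose.yml (untested sketch)
version: "3.8"
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: awx
      POSTGRES_PASSWORD: awxpass        # example value only
      POSTGRES_DB: awx
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:7

  awx-web:
    image: quay.io/ansible/awx:21.7.0
    depends_on: [postgres, redis]
    ports:
      - "8052:8052"                     # 17.x web listened on 8052; verify for 21.7.0
    # command: ...                      # 17.x images ran /usr/bin/launch_awx.sh;
    #                                   # check what the 21.7.0 image expects
    volumes:
      - ./settings.py:/etc/tower/settings.py:ro   # Django settings pointing at postgres/redis

  awx-task:
    image: quay.io/ansible/awx:21.7.0
    depends_on: [postgres, redis]
    # command: ...                      # 17.x used /usr/bin/launch_awx_task.sh
    volumes:
      - ./settings.py:/etc/tower/settings.py:ro

volumes:
  pgdata: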
I would like to use OpenVSCode for cloud development in a microservices-oriented environment.
I was thinking of the following architecture/setup:
Use K8s as the runtime environment.
OVSC and dev pods run as dedicated/separate pods (not sidecars).
Code sharing is done via NFS, syncthing, etc
The documentation showcases a setup where OVSC operates/runs as the dev pod itself. When running as described above (IDE and dev pods running as separate pods), I noticed that dev-related libraries (e.g. Golang packages) are missing, since they are installed on the dev pod:
Q:
What is needed in order to support such a setup?
Is it possible to init OVSC in such a way that it executes commands/opens the terminal on a remote container by default?
Thanks!
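For context, the shared-code part of what I am describing would look roughly like the sketch below (names, images, and paths are placeholders). Note that this only shares files between the pods; it does not make OVSC execute commands in the dev pod, which is exactly the part I am asking about.

# Untested sketch: one RWX volume mounted by both the IDE pod and the dev pod
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: workspace-src
spec:
  accessModes: ["ReadWriteMany"]        # needs an RWX-capable backend, e.g. NFS
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: ovsc
spec:
  containers:
    - name: openvscode-server
      image: gitpod/openvscode-server   # illustrative tag
      ports:
        - containerPort: 3000           # OVSC default port
      volumeMounts:
        - name: src
          mountPath: /home/workspace
  volumes:
    - name: src
      persistentVolumeClaim:
        claimName: workspace-src
---
apiVersion: v1
kind: Pod
metadata:
  name: dev
spec:
  containers:
    - name: golang
      image: golang:1.20
      command: ["sleep", "infinity"]    # keep the dev container alive
      volumeMounts:
        - name: src
          mountPath: /go/src/app
  volumes:
    - name: src
      persistentVolumeClaim:
        claimName: workspace-src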
I need to deploy redis-sentinel with the redis-stack modules (redis-search and redis-json) using Docker Compose, but I can't find any reference. Can someone point me to a docker compose example or explain how I can deploy it with Docker?
If you don't have much experience with Docker but want a relatively "easy" way to set this up with Compose, you could take a look at the Bitnami images.
For Redis Sentinel: bitnami/redis-sentinel.
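A minimal, untested sketch could look like the following. The redis/redis-stack-server image already ships with the RediSearch and RedisJSON modules, and the REDIS_* variable names come from the Bitnami documentation, so double-check them against the image versions you pull:

# docker-compose.yml (sketch, single master + single sentinel for brevity)
version: "3.8"
services:
  redis-master:
    image: redis/redis-stack-server:latest   # includes redisearch and rejson
    ports:
      - "6379:6379"

  sentinel:
    image: bitnami/redis-sentinel:latest
    depends_on:
      - redis-master
    environment:
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_SET=mymaster
      - REDIS_SENTINEL_QUORUM=1              # 1 only makes sense for a demo
      - ALLOW_EMPTY_PASSWORD=yes             # Bitnami convention for dev setups
    ports:
      - "26379:26379"

For anything production-like you would run three sentinels (with a quorum of 2) plus at least one replica, but the sketch shows the wiring.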
Ideally, we would want to stick to only Minikube and Skaffold.
But there are many cases in which we would like to enable two-way syncing of volumes, so that changes in a specific container directory are reflected in a directory on the host machine.
We currently use kubectl to copy directories and files manually from the pod to a local directory, but we would like to automate this step.
Docker Compose makes it very easy to set this up by defining a rw volume on a service:
services:
  myService:
    image: some/image
    volumes:
      - /some-host/path:/some-container/path:rw
So whenever we need to reflect changes into our local environment, we stop Skaffold, start docker-compose, and make the necessary changes in the container so that they are automatically reflected locally.
The issue is that if we want to change one of the services in the system, we now have to reflect that change in our k8s deployments as well as in our docker-compose file. This includes mirroring changes to secrets, config maps, etc., and we might end up with a really complicated project.
Is this a bad idea? Is there a better approach?
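For completeness: we know Skaffold has built-in file sync, sketched below, but as far as we can tell it only syncs one way, from the host into the running container, so it would replace the docker-compose workaround for host-to-container changes but not for copying container output back out.

# skaffold.yaml (sketch; image name and paths are placeholders)
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: some/image
      sync:
        manual:
          - src: "src/**/*"             # host files to watch
            dest: /some-container/path  # where they land in the container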
You can have a look at Tilt or Telepresence. We are currently working on changing our local development environment from docker-compose to a microk8s-based approach and are looking into those two tools, as we too are facing the issue of shared volumes, which are not supported out of the box in microk8s.
Just an idea; we will have to see for ourselves which solution works best for us :)
I'm having trouble installing kubeadm on my Amazon Linux 2 instance, specifically when I try to create a cluster.
When I try to install a runtime, I get to choose which one to use:
containerd
CRI-O
Docker Engine
Mirantis Container Runtime
First of all, I'm wondering which of them is compatible with Amazon Linux 2, and second of all, whenever I run yum install for any CRI I get the same error.
This is the output of the command yum install cri-o:
The doc I followed is: https://kubernetes.io/docs/setup/production-environment/container-runtimes/
Hi, hope you are enjoying your Kubernetes journey!
First off, I want to tell you that you can use whichever of those container runtimes you want.
You can use Docker if you are not familiar with the others, but containerd is in my opinion the best lightweight alternative. (containerd is used inside Docker, but for Kubernetes you don't need all the layers that Docker provides, only the container runtime itself, which here is containerd.) You can read this for more info, and there is plenty of other documentation about it too: https://www.tutorialworks.com/difference-docker-containerd-runc-crio-oci/
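If you go with containerd, pointing kubeadm at it is mostly a matter of the CRI socket. A minimal config sketch (mine, not taken from the docs you linked) would be passed with kubeadm init --config kubeadm-config.yaml:

# kubeadm-config.yaml (sketch)
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock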
Second of all, I don't know how you are trying to install your Kubernetes cluster, but there are a few ways to do it:
The hardest but most instructive is Kubernetes the Hard Way (https://github.com/kelseyhightower/kubernetes-the-hard-way).
Next, you can use kubeadm (again, there is plenty of documentation on the internet, but you can follow one of the kubeadm tutorials: https://devopscube.com/setup-kubernetes-cluster-kubeadm/).
Here is a list of tools you can use to install your Kubernetes cluster; you can look for tutorials for each of them on the internet: https://dzone.com/articles/50-useful-kubernetes-tools
Last but not least, since you are on AWS, you can use the AWS EKS service to quickly set up a robust Kubernetes cluster (https://aws.amazon.com/fr/eks/).
This is for AWS. If you want a local k8s cluster, I strongly suggest you use kind (Kubernetes in Docker); a minimal config is sketched below.
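For example (untested, names are illustrative), a two-node cluster created with kind create cluster --config kind-config.yaml:

# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker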
Bguess
I have a gcloud Kubernetes cluster running and a Google bucket that holds some data I want to process on the cluster.
In order to use the data in the bucket, I need gcsfs installed on the nodes. How do I install packages like this on the cluster using gcloud, kubectl, etc.?
Check if a recipe like "Launch development cluster on Google Cloud Platform with Kubernetes and Helm" could help.
Using Helm, you can define workers with additional pip packages:
worker:
  replicas: 8
  limits:
    cpu: 2
    memory: 7500000000
  pipPackages: >-
    git+https://github.com/gcsfs/gcsfs.git
    git+https://github.com/xarray/xarray.git
  condaPackages: >-
    -c conda-forge
    zarr
    blosc
I don't know if the suggestion given by VonC will actually work, but what I do know is that you're not really supposed to install things onto a Kubernetes Engine worker node. This is evident from the fact that it neither has a package manager nor allows updating individual programs separately.
Container-Optimized OS does not support traditional package managers (...) This design prevents updating individual software packages in the root filesystem independent of other packages.
Having said that, you can customize the worker nodes of a node pool via startup scripts, provided the number of nodes in that pool is static. These still work as intended, but since you can't edit the instance template used by the node pool, you'll have to apply them to the instances manually. So again, this is clearly not a very good way of doing things.
Finally, worker nodes have something called a "Toolbox", which is basically a special container you can run to get access to debugging tools. This container runs directly on Docker; it is not scheduled by Kubernetes. You can customize this container image, so you can add some extra tools to it.