How do I create a pod in Kubernetes that runs a Docker image containing MongoDB?
The easiest way to do that is to use Helm, the package manager for Kubernetes.
How to start using Helm
MongoDB Helm Chart
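For example, assuming you use the Bitnami MongoDB chart (the release name my-mongodb below is just an example), the installation boils down to:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
$ helm install my-mongodb bitnami/mongodb
The chart will print instructions on how to connect to the deployed MongoDB once installed.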
Ziliani, this question is a little unclear. We do not know if your goal is to put MongoDB into a single Pod or to run MongoDB in Kubernetes more generally. In the future, try to be clearer about what you want to achieve and what you have already tried, so we know how to help you.
If you want to easily deploy MongoDB in Kubernetes, you can use Helm charts as Vasily mentions, or you can check this guide on the MongoDB GitHub. You could also read this article to figure out what you should pay attention to.
This tutorial, on the other hand, explains the full process of running MongoDB in Google Kubernetes Engine using a StatefulSet.
If you just want a running Pod with MongoDB inside, as you wrote below:
Because I would like a YAML file that includes this Docker instruction:
docker run -p 27017:27017 --name mongodb bitnami/mongodb:latest
You can use this pod yaml as a reference:
apiVersion: v1
kind: Pod
metadata:
  name: mongodb   # Pod names must be lowercase RFC 1123 names, so "mongoDB" would be rejected
spec:
  volumes:
    - name: mongodb-data
      hostPath:
        path: /tmp/mongodb
  containers:
    - image: bitnami/mongodb:latest
      name: mongodb
      volumeMounts:
        - name: mongodb-data   # must match the volume name declared above
          mountPath: /data/db  # note: the bitnami image persists data under /bitnami/mongodb; adjust if needed
      ports:
        - containerPort: 27017
          protocol: TCP
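Assuming you save the manifest as mongodb-pod.yaml (the filename is arbitrary), you can create the Pod and reach it locally, mirroring the -p 27017:27017 part of the docker run command:
$ kubectl apply -f mongodb-pod.yaml
$ kubectl port-forward pod/mongodb 27017:27017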
Related
I'm trying to add a Filebeat container to the rs0 StatefulSet to collect the logs of my MongoDB. I added a filebeat sidecar container to the operator (according to the docs), and now I'm stuck: how can I create an emptyDir volume that is mounted and accessible on both the mongod container and the filebeat container?
The Percona guys helped me find a solution; a bit of a hack, but it worked for me.
You can't create a new volume for the mongod container, but you can map one of the existing volumes to the sidecar container, for example:
sidecars:
  - image: ...
    command: ...
    args: ...
    name: rs-sidecar-1
    volumeMounts:
      - mountPath: /mytmp
        name: mongod-data
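To check that the shared volume is actually visible from the sidecar, you could exec into it; the pod name rs0-0 below is an assumption based on the rs0 replica set naming:
$ kubectl exec rs0-0 -c rs-sidecar-1 -- ls /mytmp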
Link to the original post on the Percona community forum.
I want to download a container image, but I don't want to deploy/install it.
How can I write a podspec that only downloads the images without creating containers?
Any podspec snippet for this?
As far as I know, there is no direct Kubernetes resource to only download an image of your choosing. To have the images of your applications present on your Nodes, you can consider the following solutions/workarounds:
Use a DaemonSet with initContainer(s)
Use tools like Ansible to pull the images with a playbook
Use a DaemonSet with initContainers
Assuming the following situation:
You've created 2 images that you want to have on all of the Nodes.
You can use a DaemonSet (which spawns a Pod on each Node) with initContainers (using the images as their source) that will run on all Nodes and ensure that the images are present on each machine.
An example of such setup could be following:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: pull-images
  labels:
    k8s-app: pull-images
spec:
  # AS THIS DAEMONSET IS NOT SUPPOSED TO SERVE TRAFFIC I WOULD CONSIDER USING THIS UPDATE STRATEGY FOR SPEEDING UP THE DOWNLOAD PROCESS
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 100
  selector:
    matchLabels:
      name: pull-images
  template:
    metadata:
      labels:
        name: pull-images
    spec:
      initContainers:
        # PUT HERE IMAGES THAT YOU WANT TO PULL AND OVERRIDE THEIR ENTRYPOINT
        - name: ubuntu
          image: ubuntu:20.04 # <-- IMAGE #1
          imagePullPolicy: Always # SPECIFY THE POLICY FOR SPECIFIC IMAGE
          command: ["/bin/sh", "-c", "exit 0"]
        - name: nginx
          image: nginx:1.19.10 # <-- IMAGE #2
          imagePullPolicy: IfNotPresent # SPECIFY THE POLICY FOR SPECIFIC IMAGE
          command: ["/bin/sh", "-c", "exit 0"]
      containers:
        # MAIN CONTAINER WITH AS SMALL AS POSSIBLE IMAGE SLEEPING
        - name: alpine
          image: alpine
          command: [sleep]
          args:
            - "infinity"
The Kubernetes DaemonSet controller will ensure that the Pod runs on each Node. The initContainers act as placeholders for the images: each image you want on the Nodes is pulled, and its ENTRYPOINT is overridden so that it does not run continuously. After that, the main container (alpine) runs with a sleep infinity command.
This setup will also work when new Nodes are added.
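If you want to double-check that the images actually landed on a Node, you can list them on the Node itself; on a containerd-based Node (assuming SSH access and crictl being available) that could look like:
$ crictl images | grep -E 'ubuntu|nginx'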
Following on from that topic, I would also consider checking the following documentation on imagePullPolicy:
Kubernetes.io: Docs: Concepts: Containers: Images: Updating images
A side note!
I've set the imagePullPolicy differently for the images in the initContainers to show that you can specify the policy independently for each container. Please use the policy that suits your use case the most.
Use tools like Ansible to pull the images with a playbook
Assuming that you have SSH access to the Nodes, you can consider using Ansible with its community module (this assumes that you are using Docker as the container runtime):
community.docker.docker_image
Citing the documentation for this module:
This plugin is part of the community.docker collection (version 1.3.0).
To install it use: ansible-galaxy collection install community.docker.
Synopsis
Build, load or pull an image, making the image available for creating containers. Also supports tagging an image into a repository and archiving an image to a .tar file.
-- Docs.ansible.com: Ansible: Collections: Community: Docker: Docker image module
You can use it with the following example:
hosts.yaml
all:
  hosts:
    node-1:
      ansible_port: 22
      ansible_host: X.Y.Z.Q
    node-2:
      ansible_port: 22
      ansible_host: A.B.C.D
playbook.yaml
- name: Playbook to download images
  hosts: all
  user: ENTER_USER
  tasks:
    - name: Pull an image
      community.docker.docker_image:
        name: "{{ item }}"
        source: pull
      with_items:
        - "nginx"
        - "ubuntu"
A side note!
For the Ansible approach, I needed to install the docker Python package:
$ pip3 install docker
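With the inventory and the playbook in place, pulling the images on all Nodes is a single command:
$ ansible-playbook -i hosts.yaml playbook.yaml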
Additional resources:
Kubernetes.io: Docs: Concepts: Workloads: Controllers: Daemonset
Kubernetes.io: Docs: Concepts: Workloads: Pods: initContainers
I need to copy a file into my pod at creation time. I don't want to use ConfigMaps or Secrets. I am trying to create a volumeMount and copy the source file using the kubectl cp command. My manifest looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: copy
  labels:
    app: hello
spec:
  containers:
    - name: init-myservice
      image: bitnami/kubectl
      command: ['kubectl','cp','./test.json','init-myservice:./data']
      volumeMounts:
        - name: my-storage
          mountPath: data
    - name: init-myservices
      image: nginx
      volumeMounts:
        - name: my-storage
          mountPath: data
  volumes:
    - name: my-storage
      emptyDir: {}
But I am getting a CrashLoopBackOff error. Any help or suggestion is highly appreciated.
It's not possible.
Let me explain: you need to think of it as two different machines. Your local machine is the one where the file exists, and you are trying to copy it to another machine with cp; in other words, you are trying to copy a file from your machine to the pod's machine, and that cannot work from a command running inside the pod.
What you can do instead is create your own Docker image for the init container: copy the file you want to store into the image before building it, and then copy that file into the shared volume where you want it.
I agree with the answer provided by H.R. Emon: it explains why you can't just run kubectl cp inside of the container. I also think there are some resources worth adding to show how you can tackle this particular setup.
For this particular use case it is recommended to use an initContainer.
initContainers - specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image.
Kubernetes.io: Docs: Concepts: Workloads: Pods: Init-containers
You could use the example from the official Kubernetes documentation (assuming that downloading your test.json is feasible):
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
    - name: install
      image: busybox
      command:
        - wget
        - "-O"
        - "/work-dir/index.html"
        - http://info.cern.ch
      volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
    - name: workdir
      emptyDir: {}
-- Kubernetes.io: Docs: Tasks: Configure Pod Initialization: Create a pod that has an initContainer
You can also modify the above example to your specific needs.
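For instance, an initContainer adapted to your test.json could look like the sketch below; the URL is a placeholder for wherever the file is actually hosted:
  initContainers:
    - name: fetch-config
      image: busybox
      # hypothetical URL; replace with the real location of test.json
      command: ['wget', '-O', '/data/test.json', 'http://example.com/test.json']
      volumeMounts:
        - name: my-storage
          mountPath: /data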
Also, referring to your particular example, there are some things that you will need to be aware of:
To use kubectl inside of a Pod, you will need the required permissions to access the Kubernetes API. You can grant them by using a serviceAccount with appropriate permissions. More can be found in these links:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication: Service account tokens
Kubernetes.io: Docs: Reference: Access authn authz: RBAC
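A minimal sketch of such a setup could look like the following; all names (kubectl-sa, pod-reader, read-pods) are illustrative, and the rules assume kubectl only needs to read and exec into Pods in its own namespace (kubectl cp works over the exec subresource):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubectl-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods/exec"]   # needed for kubectl exec/cp
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
subjects:
  - kind: ServiceAccount
    name: kubectl-sa
    namespace: default   # adjust to your namespace
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
You would then set serviceAccountName: kubectl-sa in the Pod spec.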
Your bitnami/kubectl container runs into CrashLoopBackOff errors because you're passing a single command that runs to completion. Once it finishes, the Pod reports the status Completed and is then restarted, resulting in the previously mentioned CrashLoopBackOff. To avoid that, you would need to use an initContainer.
You can read more about what is happening in your setup by following this answer (connected with the previous point):
Stackoverflow.com: Questions: What happens one of the container process crashes in multiple container POD?
Additional resources:
Kubernetes.io: Pod lifecycle
A side note!
I also think it would be important to include the reason why Secrets and ConfigMaps cannot be used in this particular setup.
I used a ConfigMap with files, but I am experimenting with portable services like supervisord and other internal tools.
We have a Go binary that can be run in any image. What I am trying to do is run this binary using a ConfigMap.
Example:
We have an internal tool written in Go (smaller than 7 MB) that could be stored in a ConfigMap, and we want to mount that ConfigMap inside a Kubernetes pod and run the binary inside the pod.
Question: does anyone do this? Is it a good approach? What is the best practice?
I don't believe you can put 7 MB of content in a ConfigMap; ConfigMaps are limited to 1 MiB. See here for an example. What you're trying to do sounds like a very unusual practice. The standard practice for running binaries in Pods in Kubernetes is to build a container image that includes the binary and configure the image or the Pod to run that binary.
I too faced a similar issue while storing an elastic.jks keystore binary file in a k8s pod.
AFAIK there are two options:
Make use of a ConfigMap to store binary data (see the sketch after the example below). Check this out.
OR
Store your binary file remotely, for example in an S3 bucket, and pull that binary before running the actual pod using the initContainers concept:
apiVersion: v1
kind: Pod
metadata:
  name: alpine
  namespace: default
spec:
  containers:
    - name: myapp-container
      image: alpine:3.1
      command: ['sh', '-c', 'if [ -f /jks/elastic.jks ]; then sleep 99999; fi']
      volumeMounts:
        - name: jksdata
          mountPath: /jks
  initContainers:
    - name: init-container
      image: atlassian/pipelines-awscli
      command: ["/bin/sh","-c"]
      args: ['aws s3 sync s3://my-artifacts/$CLUSTER /jks/']
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: jksdata
          mountPath: /jks
      env:
        - name: CLUSTER
          value: dev-elastic
  volumes:
    - name: jksdata
      emptyDir: {}
  restartPolicy: Always
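For the first option, note that a ConfigMap has a binaryData field, and kubectl create configmap fills it automatically for binary file contents. A minimal sketch (the names go-tool and mytool are illustrative, and the 1 MiB ConfigMap size limit still applies):
$ kubectl create configmap go-tool --from-file=mytool=./mytool
The ConfigMap can then be mounted into the Pod with an executable file mode:
  volumes:
    - name: tool
      configMap:
        name: go-tool
        defaultMode: 0755   # make the mounted file executable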
As #amit-kumar-gupta mentioned, there is a ConfigMap size constraint.
I recommend the second way.
Hope this helps.
I am working towards having multiple services (NodeJS, Spring Boot), each with its own MongoDB database-server-per-service (eventually targeting GCP & K8s), so that I can keep the data separate. I will be using Docker Compose to launch each service together with its database. However, when I run multiple services, I naturally get port collisions. Here is a typical docker-compose file:
version: '3'
# Define the services/containers to be run
services:
  myapp: # name of your service
    build: ./ # specify the directory of the Dockerfile
    ports:
      - "3000:3000" # specify ports forwarding
    links:
      - database # link this service to the database service
    volumes:
      - .:/usr/src/app
    depends_on:
      - database
  database: # name of the service
    image: mongo # specify image to build container from
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
I am looking for an example of how to do this. My thinking is that each compose file will have its own ports and each service will map to those ports internally?
You can write a YAML manifest for a Deployment and describe all your containers in one Pod (a Pod is a group of containers). Your Deployment may look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-deployment
  labels:
    app: application
spec:
  selector:
    matchLabels:
      app: application
  template:
    metadata:
      labels:
        app: application
    spec:
      containers:
        - name: application
          image: application:version
          ports:
            - containerPort: 3000
        - name: database
          image: database:version
          ports:
            - containerPort: 27017
This is only a Deployment inside your cluster; you still need to expose it outside the cluster. I recommend using an Ingress for that.
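A minimal sketch of exposing the application could look like the following; the Service and Ingress names and the host are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: application-service
spec:
  selector:
    app: application
  ports:
    - port: 80
      targetPort: 3000   # the app container's port from the Deployment above
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application-ingress
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: application-service
                port:
                  number: 80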
Here you will have the database inside the Pod. Alternatively, you can create two separate Deployments, one for the database and one for your app, in the same namespace.
Also, you need to build the Docker images manually or use a CI tool for that. You can manage environments (prod, pre-prod, dev, test) with namespaces; one namespace per environment gives you full isolation. To manage all of this, I recommend tools like Helm or kops.
There are a lot of differences between Kubernetes and Docker Compose, but the main difference is design. In Kubernetes you have a dedicated entity for each level of your application, and you can manage each of them. In Docker Compose you configure everything as one service in one place, and it is usually hard to manage specific things.