Command to run an existing image of RabbitMQ on a pod - Kubernetes

Can anyone please help me find which command is used to run an existing image of RabbitMQ on a pod, and how to check that it is running?

which command is used to run an existing image of rabbitMQ
You can check by getting the YAML output from the cluster, or by looking at the YAML config file itself.
To get the YAML output of the RabbitMQ pod, you can run:
kubectl get <deployment|statefulset|pod> <name of resource> -o yaml
For example:
kubectl get deployment rabbitmq -o yaml
In the YAML you can check the command or args being passed to the image that is running.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  replicas: 1
  serviceName: rabbitmq
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3-management
        command: ["test"]
        env:
        - name: "RABBITMQ_ERLANG_COOKIE"
          value: "1WqgH8N2v1qDBDZDbNy8Bg9IkPWLEpu79m6q+0t36lQ="
        volumeMounts:
        - mountPath: /var/lib/rabbitmq
          name: rabbitmq-data
      volumes:
      - name: rabbitmq-data
        hostPath:
          path: /data/rabbitmq
          type: DirectoryOrCreate
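If you only need the command and args fields rather than the whole manifest, a jsonpath query is a quick check (a sketch, assuming the resource is the StatefulSet above):
kubectl get statefulset rabbitmq -o jsonpath='{.spec.template.spec.containers[0].command}'
kubectl get statefulset rabbitmq -o jsonpath='{.spec.template.spec.containers[0].args}'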
If you want to check the command that the image runs by default, you can look at the Dockerfile of the rabbitmq image.
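If you do not want to read the Dockerfile, a quick way to see the image's default entrypoint and command is to inspect the image directly (a sketch, assuming the rabbitmq:3-management image is available locally):
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' rabbitmq:3-management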

Related

TensorFlow Setting model_config_file runtime argument in YAML file for K8s

I've been having a hell of a time trying to figure out how to serve multiple models using a YAML configuration file for K8s.
I can run it directly in Bash using the following, but I'm having trouble converting it to YAML.
docker run -p 8500:8500 -p 8501:8501 \
[container id] \
--model_config_file=/models/model_config.config \
--model_config_file_poll_wait_seconds=60
I read that model_config_file can be added using a command element, but I'm not sure where to put it, and I keep receiving errors about invalid commands or about the file not being found.
command:
- '--model_config_file=/models/model_config.config'
- '--model_config_file_poll_wait_seconds=60'
Sample YAML config for K8s is below; where would the command go to match the docker run command above?
---
apiVersion: v1
kind: Namespace
metadata:
  name: model-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tensorflow-test-rw-deployment
  namespace: model-test
spec:
  selector:
    matchLabels:
      app: rate-predictions-server
  replicas: 1
  template:
    metadata:
      labels:
        app: rate-predictions-server
    spec:
      containers:
      - name: rate-predictions-container
        image: aws-ecr-path
        command:
        - --model_config_file=/models/model_config.config
        - --model_config_file_poll_wait_seconds=60
        ports:
        #- grpc: 8500
        - containerPort: 8500
        - containerPort: 8501
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: rate-predictions-service
  name: rate-predictions-service
  namespace: model-test
spec:
  type: ClusterIP
  selector:
    app: rate-predictions-server
  ports:
  - port: 8501
    targetPort: 8501
What you are passing are the arguments, not the command. The command overrides the container image's entrypoint, and the arguments should be passed in args. Please see the following link:
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
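A minimal sketch of the container section with the flags moved into args (assuming the image's entrypoint already starts the TensorFlow Serving binary, as the official serving images do):
containers:
- name: rate-predictions-container
  image: aws-ecr-path
  # args are appended to the image's entrypoint, mirroring the flags
  # passed after the image name in the docker run command above
  args:
  - --model_config_file=/models/model_config.config
  - --model_config_file_poll_wait_seconds=60
  ports:
  - containerPort: 8500
  - containerPort: 8501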

Is there a way to dynamically add values in deployment.yml files?

I have a deployment.yml file where I'm mounting the service's logging folder to a folder on the host machine.
The issue is that when I run multiple instances from the same deployment.yml file (i.e. when scaling up), all the instances log to the same file. Is there a way to solve this by dynamically creating a folder on the host machine based on the container ID or something similar? Any suggestions are appreciated.
My current deployment.yml file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
spec:
  selector:
    matchLabels:
      app: logstash
  replicas: 2
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: logstash:6.8.6
        volumeMounts:
        - mountPath: /usr/share/logstash/config/
          name: config
        - mountPath: /usr/share/logstash/logs/
          name: logs
      volumes:
      - name: config
        hostPath:
          path: "/etc/logstash/"
      - name: logs
        hostPath:
          path: "/var/logs/logstash"
There are some fields in Kubernetes that you can get dynamically, like the node name, pod name, pod IP, etc. Refer to this doc for examples: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
Here is an example where you set the node name as an environment variable:
env:
- name: MY_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
You can change your deployment so that it creates a log file with the node name appended to it; that way you get a different file name on each node. A recommended alternative is to create a DaemonSet instead of a Deployment, which spawns one pod on each selected node (selection can be done using a node selector).
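As an illustration only (not part of the original answer), one way to get a separate log directory per pod is to combine the downward API with subPathExpr on the volume mount, which is available on reasonably recent Kubernetes versions (GA in 1.17):
env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
volumeMounts:
- mountPath: /usr/share/logstash/logs/
  name: logs
  # each pod writes its logs under /var/logs/logstash/<pod-name> on the host
  subPathExpr: $(MY_POD_NAME)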
You can use sed to dynamically substitute values.
For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
spec:
  selector:
    matchLabels:
      app: logstash
  replicas: 2
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: logstash:6.8.6
        volumeMounts:
        - mountPath: /usr/share/logstash/config/
          name: config
        - mountPath: /usr/share/logstash/logs/
          name: logs
      volumes:
      - name: config
        hostPath:
          path: {path}
      - name: logs
        hostPath:
          path: "/var/logs/logstash"
Now, when I want to set the path dynamically, I simply run:
sed -i "s|{path}|/etc/logstash/|g" deployment.yml
In this way, you can substitute as many values as you want before deploying the file.

How to copy a local file into a helm deployment

I'm trying to deploy several pods in Kubernetes using a mongo image with an initialization script in them. I'm using helm for the deployment. Since I'm starting from the official Mongo docker image, I'm trying to add a script at /docker-entrypoint-initdb.d so it will be executed right at startup to initialize some parameters of my Mongo.
What I don't know is how I can insert my script, which is, let's say, on my local machine, into /docker-entrypoint-initdb.d using helm.
I'm trying to do something like docker run -v hostfile:mongofile, but I need the equivalent in helm, so that this is done in all the pods of the deployment.
You can use a ConfigMap. Let's put an nginx configuration file into the container via a ConfigMap. Say we have a directory called nginx at the same level as values.yml, and inside it we have the actual configuration file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-file
  labels:
    app: ...
data:
  nginx.conf: |-
{{ .Files.Get "nginx/nginx.conf" | indent 4 }}
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: SomeDeployment
  ...
spec:
  replicas:
  selector:
    matchLabels:
      app: ...
      release: ...
  template:
    metadata:
      labels:
        app: ...
        release: ...
    spec:
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-config-file
          items:
          - key: nginx.conf
            path: nginx.conf
      containers:
      - name: ...
        image: ...
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
You can also check out the initContainers concept at this link:
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
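For the Mongo case from the question, the same ConfigMap pattern would look roughly like this (a sketch, not from the original answer; the names mongo-init-scripts and mongo/init-mongo.js are placeholders for your chart's own files):
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-init-scripts
data:
  init-mongo.js: |-
{{ .Files.Get "mongo/init-mongo.js" | indent 4 }}
---
# and in the pod spec of the mongo Deployment/StatefulSet:
spec:
  containers:
  - name: mongo
    image: mongo
    volumeMounts:
    - name: mongo-init
      # every key in the ConfigMap shows up as a file in this directory
      mountPath: /docker-entrypoint-initdb.d
  volumes:
  - name: mongo-init
    configMap:
      name: mongo-init-scripts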

How to fetch configmap from kubernetes pod

I have a Spring Boot microservice running in a Docker container; below is the Dockerfile:
FROM java:8-jre
MAINTAINER <>
WORKDIR deploy/
#COPY config/* /deploy/config/
COPY ./ms.console.jar /deploy/
CMD chmod +R 777 ./ms.console.jar
CMD ["java","-jar","/deploy/ms.console.jar","console"]
EXPOSE 8384
Here my configuration is stored in an external folder, i.e. /config/console-server.yml, and when I start the application it loads the config internally (Spring Boot functionality).
Now I want to externalize this configuration using a ConfigMap. For that I simply created one ConfigMap storing all the configuration details:
kubectl create configmap console-configmap
--from-file=./config/console-server.yml
kubectl describe configmap console-configmap
Below are the description details:
Name: console-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
console-server.yml:
----
server:
  http:
    port: 8385
  compression:
    enabled: true
    mime-types: application/json,application/xml,text/html,text/xml,text/plain,text/css,application/javascript
    min-response-size: 2048
---
spring:
  thymeleaf:
    prefix: classpath:/static
  application:
    name: console-service
  profiles:
    active: native
  servlet:
    multipart:
      max-file-size: 30MB
      max-request-size: 30MB
---
host:
  gateway: http://apigateway:4000
  webhook: http://localhost:9000
My deployment YAML is:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: ms-console
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: console-configmap
      imagePullSecrets:
      - name: regcresd
My question is: since I commented out the config folder in the Dockerfile, the application throws an exception when the pods run because there is no configuration. How do I inject this console-configmap into my deployment? I have already shared what I tried, but I am still getting the same issue.
First of all, how are you consuming the .yml file in your application? If you consume the yml file's contents as environment variables, your config should just work fine. But I suspect that you want to consume the contents as a config file inside the container. If that is the case, you have to create a volume out of the ConfigMap as follows:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: ms-console
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /app/config
          name: config
      volumes:
      - name: config
        configMap:
          name: console-configmap
      imagePullSecrets:
      - name: regcresd
The file will be available at the path /app/config/console-server.yml. You have to adjust the mount path as per your needs.
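If the application does not pick up the mounted file automatically, one option (an aside, not part of the original answer, assuming Spring Boot 2.x) is to point Spring Boot at the mounted directory explicitly via an environment variable on the container:
env:
- name: SPRING_CONFIG_ADDITIONAL_LOCATION
  # tells Spring Boot to also load configuration from the mounted ConfigMap directory
  value: "file:/app/config/"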
If you need to load key:value pairs from the config file as environment variables, then the below spec would work:
envFrom:
- configMapRef:
    name: console-configmap
If you need the config as a file inside the pod, then mount the ConfigMap as a volume. The following link would be helpful:
https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/

Gitlab CI - K8s - Deployment

I'm just going through this guide on GitLab and K8s (gitlab-k8s-cd), but my build keeps failing on this part:
- kubectl delete secret registry.gitlab.com
- kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=<my_username> --docker-password=$REGISTRY_PASSWD --docker-email=<my_email>
Although I am not entirely sure what password is needed for --docker-password, I have created an API token in GitLab for my user and I am using that in the secure variables.
This is the error:
$ gcloud container clusters get-credentials deployment
Fetching cluster endpoint and auth data.
kubeconfig entry generated for deployment.
$ kubectl delete secret registry.gitlab.com
Error from server: secrets "registry.gitlab.com" not found
ERROR: Build failed: exit code 1
Any help would be much appreciated thanks.
EDIT
Since the initial post, removing the initial kubectl delete secret and re-building worked, so it was failing on deleting a secret that did not yet exist.
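One way to make that step safe to re-run (an aside, not from the original post) is to tell kubectl not to treat a missing secret as an error:
- kubectl delete secret registry.gitlab.com --ignore-not-found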
Second Edit
I'm having problems with my deployment.yml for K8s. Could anyone shed any light on why I am getting this error:
error validating "deployment.yml": error validating data: field spec.template.spec.containers[0].ports[0]: expected object of type map[string]interface{},
With this yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app>
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: <app>
    spec:
      containers:
      - name: deployment
        image: registry.gitlab.com/<username>/<app>
        imagePullPolicy: Always
        ports:
        - "80:8080"
        env:
        - name: PORT
          value: "8080"
      imagePullSecrets:
      - name: registry.gitlab.com
And this error:
error validating "deployment.yml": error validating data: found invalid field imagePullSecrets for v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
With this yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app>
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: <app>
    spec:
      containers:
      - name: <app>
        image: registry.gitlab.com/<project>/<app>
        imagePullPolicy: Always
        ports:
        - "80:8080"
        env:
        - name: PORT
          value: "8080"
      imagePullSecrets:
      - name: registry.gitlab.com
Latest YAML
apiVersion: v1
kind: Service
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  ports:
  - port: 80
  selector:
    app: <app_name>
    tier: frontend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: <app_name>
        tier: frontend
    spec:
      containers:
      - image: registry.gitlab.com/<project>/<app>
        imagePullPolicy: Always
        name: <app_name>
        env:
        - name: PORT
          value: "8080"
        imagePullSecrets:
        - name: registry.gitlab.com
        ports:
        - containerPort: 8080
          hostPort: 80
Regarding your first error:
Ports are defined differently in Kubernetes than in Docker or Docker Compose. This is how the port specification should look:
ports:
- containerPort: 8080
  hostPort: 80
See the reference for more information.
Regarding your second error:
According to the reference on PodSpecs, the imagePullSecrets property is correctly placed in your example. However, from reading the error message, it seems that you actually included the imagePullSecrets property in the ContainerSpec, not the PodSpec.
The YAML in your question seems to be correct in this case. Make sure that your actual manifest matches the example from your question and that you did not accidentally indent the imagePullSecrets property more than necessary.
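For reference, a minimal sketch of where imagePullSecrets belongs (as a sibling of containers in the pod spec, not nested inside the container):
spec:
  containers:
  - name: <app>
    image: registry.gitlab.com/<project>/<app>
  # pod-level field: aligned with "containers", not with the container's own fields
  imagePullSecrets:
  - name: registry.gitlab.com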
This is the working YAML file for K8s:
apiVersion: v1
kind: Service
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  ports:
  - port: 80
  selector:
    app: <app_name>
    tier: frontend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: <app_name>
        tier: frontend
    spec:
      containers:
      - image: registry.gitlab.com/<project>/<app>:latest
        imagePullPolicy: Always
        name: <app_name>
        env:
        - name: PORT
          value: "8080"
        ports:
        - containerPort: 8080
          hostPort: 80
      imagePullSecrets:
      - name: registry.gitlab.com
This is the working gitlab-ci file also:
image: docker:latest
services:
- docker:dind
variables:
  DOCKER_DRIVER: overlay
stages:
- package
- deploy
docker-build:
  stage: package
  script:
  - docker build -t registry.gitlab.com/<project>/<app> .
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker push registry.gitlab.com/<project>/<app>
k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
  - echo "$GOOGLE_KEY" > key.json
  - gcloud auth activate-service-account --key-file key.json
  - gcloud config set compute/zone <zone>
  - gcloud config set project <project>
  - gcloud config set container/use_client_certificate True
  - gcloud container clusters get-credentials <container-name>
  - kubectl delete secret registry.gitlab.com
  - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=<username> --docker-password=$REGISTRY_PASSWD --docker-email=<user-email>
  - kubectl apply -f deployment.yml
Just need to work out how to alter the script to allow for rolling back.
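As a pointer (not from the original post), rolling back is usually just another kubectl call, which could go into a separate manual GitLab job:
- kubectl rollout history deployment/<app_name>   # inspect previous revisions
- kubectl rollout undo deployment/<app_name>      # roll back to the previous revision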