Apache Flink Kubernetes operator - how to override the Docker entrypoint?

I tried to override the container entrypoint of a Flink application in a Dockerfile, but it looks like the Apache Flink Kubernetes operator ignores it.
The Dockerfile is the following:
FROM flink:1.14.2-scala_2.12-java11
ENV FLINK_HOME=/opt/flink
COPY custom-docker-entrypoint.sh /
RUN chmod a+x /custom-docker-entrypoint.sh
COPY --chown=flink:flink --from=build /target/*.jar /opt/flink/flink-web-upload/
ENTRYPOINT ["/custom-docker-entrypoint.sh"]
EXPOSE 6123 8081
CMD ["help"]
The definition of the FlinkDeployment uses the new image:
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-example
  namespace: default
spec:
  image: "flink-example:0.1.0"
  #...
In the description of the pod
kubectl describe pod flink-example
I see the following output:
Containers:
  flink-main-container:
    Command:
      /docker-entrypoint.sh
I also tried setting custom-docker-entrypoint.sh as the main container's command:
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-example
  namespace: default
spec:
  flinkVersion: v1_14
  image: "flink-example:0.1.0"
  podTemplate:
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-template
    spec:
      containers:
        - name: flink-main-container
          # command: [ 'sh', '-c', '/custom-docker-entrypoint.sh' ]
          command: [ "/custom-docker-entrypoint.sh" ]
Thank you.

You can override it via:
flinkConfiguration:
  kubernetes.entry.path: "/custom-docker-entrypoint.sh"
The Operator (by default) uses Flink's native Kubernetes integration. See: https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#kubernetes-entry-path
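For context, here is a minimal sketch of where that setting sits in the FlinkDeployment from the question (all other field values are copied from above):

apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-example
  namespace: default
spec:
  flinkVersion: v1_14
  image: "flink-example:0.1.0"
  flinkConfiguration:
    # tells the native Kubernetes integration which script to run as PID 1
    kubernetes.entry.path: "/custom-docker-entrypoint.sh"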

Related

command to run existing image of RabbitMQ on a pod

Can anyone please help me find which command is used to run an existing image of RabbitMQ on a pod, and how to check that it is done?
which command is used to run an existing image of rabbitMQ
You can check by getting the YAML output from the cluster or from the YAML config file.
To get the YAML output of the RabbitMQ resource you can do:
kubectl get <deployment|statefulset|pod> <name of resource> -o yaml
For example:
kubectl get statefulset rabbitmq -o yaml
In the YAML you can check the command or args being passed to the container that runs the image.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  replicas: 1
  serviceName: rabbitmq
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3-management
          command: ["test"]
          env:
            - name: "RABBITMQ_ERLANG_COOKIE"
              value: "1WqgH8N2v1qDBDZDbNy8Bg9IkPWLEpu79m6q+0t36lQ="
          volumeMounts:
            - mountPath: /var/lib/rabbitmq
              name: rabbitmq-data
      volumes:
        - name: rabbitmq-data
          hostPath:
            path: /data/rabbitmq
            type: DirectoryOrCreate
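If you only care about the command and args, a jsonpath query avoids reading the whole manifest (a sketch; the resource name matches the example above):

# print just the container's command and args
kubectl get statefulset rabbitmq -o jsonpath='{.spec.template.spec.containers[0].command}'
kubectl get statefulset rabbitmq -o jsonpath='{.spec.template.spec.containers[0].args}'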
If you want to check the command defined inside the image itself, look at the image's Dockerfile.

Allowing K8S daemonset to exist in the global pid namespace

I'm trying to configure a DaemonSet to run in the host PID namespace, resulting in the ability to see other processes on the host, including the containers' processes.
I couldn't find an option to achieve this.
In general, what I'm looking for is close to the sidecar container shareProcessNamespace attribute, only at the host level.
There is an attribute that allows this: hostPID: true
So the YAML file should look something like this:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: busybox
spec:
  selector:
    matchLabels:
      name: busybox
  template:
    metadata:
      labels:
        name: busybox
    spec:
      hostPID: true
      containers:
        - name: busybox
          image: busybox
          command: [ "sh", "-c", "sleep 1h" ]
More info in:
https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces
https://medium.com/@chrispisano/limiting-pod-privileges-hostpid-57ce07b05896

How to copy a local file into a helm deployment

I'm trying to deploy in Kubernetes several pods using a Mongo image with an initialization script in them. I'm using Helm for the deployment. Since I'm starting from the official Mongo docker image, I'm trying to add a script at /docker-entrypoint-initdb.d so it will be executed right at startup to initialize some parameters of my Mongo.
What I don't know is how I can insert my script, which is, let's say, on my local machine, into /docker-entrypoint-initdb.d using Helm.
I'm trying to do something like docker run -v hostfile:mongofile, but I need the equivalent in Helm, so this will be done in all the pods of the deployment.
You can use a ConfigMap. As an example, let's put an nginx configuration file into a container via a ConfigMap. Assume a directory called nginx at the same level as values.yml; inside it is the actual configuration file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-file
  labels:
    app: ...
data:
  nginx.conf: |-
{{ .Files.Get "nginx/nginx.conf" | indent 4 }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: SomeDeployment
  ...
spec:
  replicas:
  selector:
    matchLabels:
      app: ...
      release: ...
  template:
    metadata:
      labels:
        app: ...
        release: ...
    spec:
      volumes:
        - name: nginx-conf
          configMap:
            name: nginx-config-file
            items:
              - key: nginx.conf
                path: nginx.conf
      containers:
        - name: ...
          image: ...
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
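Applied to the Mongo case from the question, the same pattern mounts the init script into /docker-entrypoint-initdb.d. A sketch, where the names mongo-init, init.js, and the mongo/ chart directory are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-init
data:
  init.js: |-
{{ .Files.Get "mongo/init.js" | indent 4 }}
---
# in the Deployment's pod spec:
      volumes:
        - name: mongo-init
          configMap:
            name: mongo-init
      containers:
        - name: mongo
          image: mongo
          volumeMounts:
            # the official Mongo entrypoint runs every script found here on first start
            - name: mongo-init
              mountPath: /docker-entrypoint-initdb.d/init.js
              subPath: init.js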
You can also check the initContainers concept at this link:
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

How to fetch configmap from kubernetes pod

I have one Spring Boot microservice running in a Docker container; below is the Dockerfile:
FROM java:8-jre
MAINTAINER <>
WORKDIR deploy/
#COPY config/* /deploy/config/
COPY ./ms.console.jar /deploy/
RUN chmod -R 777 ./ms.console.jar
CMD ["java","-jar","/deploy/ms.console.jar","console"]
EXPOSE 8384
Here my configuration is stored in an external folder, i.e. /config/console-server.yml, and when I start the application it loads this config internally (Spring Boot functionality).
Now I want to externalize this configuration using a ConfigMap, so I created one ConfigMap storing all the configuration details:
kubectl create configmap console-configmap --from-file=./config/console-server.yml
kubectl describe configmap console-configmap
below are the description details:
Name:         console-configmap
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
console-server.yml:
----
server:
  http:
    port: 8385
  compression:
    enabled: true
    mime-types: application/json,application/xml,text/html,text/xml,text/plain,text/css,application/javascript
    min-response-size: 2048
---
spring:
  thymeleaf:
    prefix: classpath:/static
  application:
    name: console-service
  profiles:
    active: native
  servlet:
    multipart:
      max-file-size: 30MB
      max-request-size: 30MB
---
host:
  gateway: http://apigateway:4000
  webhook: http://localhost:9000
My deployment YAML is:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
        - name: consoleservice
          image: ms-console
          ports:
            - containerPort: 8384
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: console-configmap
      imagePullSecrets:
        - name: regcresd
My doubt is: I commented out the config folder in the Dockerfile, so while running the pods it throws an exception because there is no configuration. How do I inject this console-configmap into my deployment? What I tried is already shared above, but I'm getting the same issues.
First of all, how are you consuming the .yml file in your application? If you consume your yml file contents as environment variables, your config should just work fine. But I suspect that you want to consume the contents from the config file inside the container. If that is the case you have to create a volume out of the configmap as follows:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
        - name: consoleservice
          image: ms-console
          ports:
            - containerPort: 8384
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /app/config
              name: config
      volumes:
        - name: config
          configMap:
            name: console-configmap
      imagePullSecrets:
        - name: regcresd
The file will be available in the path /app/config/console-server.yml. You have to modify it as per your needs.
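One hedged simplification: since the Dockerfile sets WORKDIR deploy/ and the application previously read its config from ./config/console-server.yml, mounting the ConfigMap at /deploy/config restores the exact path the app already expects, with no code or property changes (a sketch reusing the volume from the manifest above):

          volumeMounts:
            # restores the original ./config path under the WORKDIR
            - mountPath: /deploy/config
              name: config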
If you need to load key/value pairs from the config file as environment variables, then the below spec would work:
envFrom:
  - configMapRef:
      name: console-configmap
If you need the config as a file inside the pod, then mount the configmap as a volume. The following link would be helpful:
https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/

How does argument passing in Kubernetes work?

A problem:
Docker arguments pass from the command line:
docker run -it -p 8080:8080 joethecoder2/spring-boot-web -Dcassandra_ip=127.0.0.1 -Dcassandra_port=9042
However, Kubernetes pod arguments do not pass from the singlePod.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: spring-boot-web-demo
  labels:
    purpose: demonstrate-spring-boot-web
spec:
  containers:
    - name: spring-boot-web
      image: docker.io/joethecoder2/spring-boot-web
      env: ["name": "-Dcassandra_ip", "value": "127.0.0.1"]
      command: ["java","-jar", "spring-boot-web-0.0.1-SNAPSHOT.jar", "-D","cassandra_ip=127.0.0.1", "-D","cassandra_port=9042"]
      args: ["-Dcassandra_ip=127.0.0.1", "-Dcassandra_port=9042"]
  restartPolicy: OnFailure
when I do:
kubectl create -f ./singlePod.yaml
Why don't you pass the arguments as env variables? It looks like you're using Spring Boot, so this shouldn't even require changes in the code, since Spring Boot injects env variables.
The following should work:
apiVersion: v1
kind: Pod
metadata:
name: spring-boot-web-demo
labels:
purpose: demonstrate-spring-boot-web
spec:
containers:
- name: spring-boot-web
image: docker.io/joethecoder2/spring-boot-web
command: ["java","-jar", "spring-boot-web-0.0.1-SNAPSHOT.jar"]
env:
- name: cassandra_ip
value: "127.0.0.1"
- name: cassandra_port
value: "9042"