Helm chart and post-install job hook during helm upgrade - kubernetes-helm

I have a job set as a post-install hook in my Helm chart, which looks like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: job-hook-postinstall
  annotations:
    "helm.sh/hook": "post-install"
spec:
  template:
    spec:
      containers:
      - name: post-install
        image: {{ .Values.image.initContainer.curl }}
        imagePullPolicy: IfNotPresent
        command:
        - '/bin/sh'
        - '-c'
        - >
          PAYLOAD='{"password":"password"}';
          curl -X PUT "http://server:{{ .Values.app.Server.service.port }}/users/userA" -H "Authorization: Basic {{ .Values.app.livenessProbe.Server.authorization }}" -H 'Content-Type: application/json' --data-raw $PAYLOAD &&
          curl -X PUT "http://server:{{ .Values.app.Server.service.port }}/users/userB" -H "Authorization: Basic {{ .Values.app.livenessProbe.Server.authorization }}" -H 'Content-Type: application/json' --data-raw $PAYLOAD
      restartPolicy: OnFailure
      terminationGracePeriodSeconds: 0
  backoffLimit: 20
  completions: 1
I expect this to be executed only during helm install, but it is getting executed during helm upgrade as well.
The other problem is that it errors out with Error: failed post-install: job failed: BackoffLimitExceeded even though I have bumped backoffLimit to 20.
Any thoughts on both problems? Could I use some kind of loop that checks whether the connection to the server endpoint is up before executing the curls, to prevent at least the second problem? Will the below work?
...
      command:
      - '/bin/sh'
      - '-c'
      - >
        for i in $(seq 1 300); do nc -zvw1 server {{ .Values.app.Server.service.port }} && exit 0 || sleep 3;
        done;
        PAYLOAD='{"password":"password"}';
...
Or could I execute that job only after a specific pod has been deployed?
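One note on the proposed loop: exit 0 ends the whole script on the first successful probe, so the PAYLOAD assignment and the curls after it would never run. A minimal sketch of the wait-then-curl idea that keeps going (assuming nc is available in the image and the same server service name as above) could be:
      command:
      - '/bin/sh'
      - '-c'
      - >
        until nc -zvw1 server {{ .Values.app.Server.service.port }}; do echo "waiting for server"; sleep 3; done;
        PAYLOAD='{"password":"password"}';
        curl -X PUT "http://server:{{ .Values.app.Server.service.port }}/users/userA" -H "Authorization: Basic {{ .Values.app.livenessProbe.Server.authorization }}" -H 'Content-Type: application/json' --data-raw $PAYLOAD &&
        curl -X PUT "http://server:{{ .Values.app.Server.service.port }}/users/userB" -H "Authorization: Basic {{ .Values.app.livenessProbe.Server.authorization }}" -H 'Content-Type: application/json' --data-raw $PAYLOAD
The until loop keeps retrying until the port answers and only then falls through to the curls, instead of exiting the script early.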

Related

Kubernetes Job Curl URL using bad/illegal format or missing URL

I prepared a Job YAML file and deployed it. The Job sends a POST request to the Grafana API's create-user method. However, it returns an error:
error: "curl: (3) URL using bad/illegal format or missing URL"
How should I change the command lines?
YAML file:
apiVersion: batch/v1
kind: Job
metadata:
  name: grafanauser-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: grafanauser
        image: curlimages/curl:7.72.0
        command:
        - '/bin/sh'
        - '-ec'
        - 'curl -X POST "http://admin:admin@grafana.utility.svc.cluster.local/api/admin/users" \
          -H "Content-Type:application/json" -d \
          "{"name":"test","email":"test@localhost.com","login":"test","password":"test","OrgId": 1}"'
      restartPolicy: OnFailure
Your curl command isn't properly formatted; that's why you're seeing this issue. There are a couple of other threads regarding this error message you're receiving from curl.
However, a quick fix would be to supply the curl in a single line. The following would work:
---
apiVersion: batch/v1
kind: Job
metadata:
  name: grafanauser-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: grafanauser
        image: curlimages/curl:7.72.0
        command:
        - /bin/sh
        - -ec
        - 'curl -X POST "http://admin:admin@grafana.utility.svc.cluster.local/api/admin/users" -H "Content-Type:application/json" -d "{"name":"test","email":"test@localhost.com","login":"test","password":"test","OrgId": 1}"'
      restartPolicy: OnFailure
I don't have the service running, but you shouldn't receive the curl error anymore:
$ kubectl logs grafanauser-ttl-xjt2q
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (6) Could not resolve host: grafana.utility.svc.cluster.local
Your shell command line translates to
curl -X POST "http://admin:admin@grafana.utility.svc.cluster.local/api/admin/users" -H "Content-Type:application/json" -d "{"name":"test","email":"test@localhost.com","login":"test","password":"test","OrgId": 1}"
which will make the shell strip away all the quotes:
curl -X POST http://admin:admin@grafana.utility.svc.cluster.local/api/admin/users -H Content-Type:application/json -d {name:test,email:test@localhost.com,login:test,password:test,OrgId: 1}
and curl will choke on the final 1} (its last argument).
Try rewriting it using a YAML block literal:
command:
- '/bin/sh'
- '-ec'
- |
  curl -X POST "http://admin:admin@grafana.utility.svc.cluster.local/api/admin/users" \
    -H "Content-Type:application/json" -d \
    '{"name":"test","email":"test@localhost.com","login":"test","password":"test","OrgId": 1}'

using curl command in pod lifecycle poststart hooks

I was trying to add a postStart hook to my pod using curl, say to send a message to my Slack channel.
In shell, the command looks like this:
curl -d "text=Hi I am a bot that can post messages to any public channel." -d "channel=C1234567" -H "Authorization: Bearer xoxb-xxxxxxxxxxxxxxxx" -X POST https://slack.com/api/chat.postMessage
and in my pod definition I tried something like this:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: curlimages/curl
    env:
    - name: TOKEN
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: token
    command: ["sleep"]
    args: ["3000"]
    lifecycle:
      postStart:
        exec:
          command:
          - "sh"
          - "-c"
          - |
            curl -d "text=Hi going to start." -d "channel=C1234567" -H "Authorization: Bearer $(TOKEN)" -X POST https://slack.com/api/chat.postMessage
Unlike container->command, which has an args parameter I could use to pass a multi-line command with quotes, lifecycle->postStart->exec->command doesn't support an args parameter.
I also tried something like the following, but no luck:
command: ["curl","-d","text=Hi going to start.",....]
I never got my Slack message.
My question is: how can I pass a long curl command with quotes in lifecycle->postStart->exec->command?
It was finally solved by replacing () with {}: to use an env variable in the command, it should be ${TOKEN}.
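So the postStart hook from the pod definition above would become something like this, with only the variable reference changed:
    lifecycle:
      postStart:
        exec:
          command:
          - "sh"
          - "-c"
          - |
            # ${TOKEN} lets the shell expand the env var; with $(TOKEN) the shell attempts a command substitution instead
            curl -d "text=Hi going to start." -d "channel=C1234567" -H "Authorization: Bearer ${TOKEN}" -X POST https://slack.com/api/chat.postMessage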

Registering multiple Services and configure route in KONG with file

Whenever I need to register my EKS services and required routes with Kong, I have to manually execute curl (POST/GET) commands. The services and routes get registered successfully, but my requirement is to automate these multiple configurations with Kong, for example by producing a YAML file for all service registrations and routes and then applying it at once.
I explored all the sources, even the official Kong documentation, but couldn't find any way that eases my requirement.
###################### Adding Svc ##########################################
curl -k -i -X POST \
--url https://localhost:7001/services/ \
--data 'name=hello-world1' \
--data 'host=service-helloworld' \
--data 'port=80'
###################### Adding Route ##########################################
curl -k -i -X POST --url https://localhost:7001/services/hello-world/routes --data 'paths=/hello-world' --data 'methods[]=GET'
Is there some way to automate the above curl commands?
If I understand you correctly, these are some of the approaches you are looking for:
Container Lifecycle Hooks
In your case you would want to use PostStart
This hook executes immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.
Hook handler implementations
Containers can access a hook by implementing and registering a handler for that hook. There are two types of hook handlers that can be implemented for Containers:
Exec - Executes a specific command, such as pre-stop.sh, inside the cgroups and namespaces of the Container. Resources consumed by the command are counted against the Container.
HTTP - Executes an HTTP request against a specific endpoint on the Container.
Your pod might look like the following example:
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command:
          - "sh"
          - "-c"
          - >
            curl -k -i -X POST --url https://localhost:7001/services/ --data 'name=hello-world1' --data 'host=service-helloworld' --data 'port=80';
            curl -k -i -X POST --url https://localhost:7001/services/hello-world/routes --data 'paths=/hello-world' --data 'methods[]=GET'
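To check whether the hook actually ran, the pod's events are a good place to look; a failed hook shows up as a FailedPostStartHook event:
kubectl describe pod lifecycle-demo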
Init Containers
A Pod can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the app containers are started.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
And here is an example from docs:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
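Adapted to the Kong registration from the question, an init-container sketch might look like the following. The pod name, the app image, and the assumption that the Kong admin API is reachable at https://localhost:7001 from that pod are all illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: kong-registration-demo
spec:
  initContainers:
  - name: register-kong
    image: curlimages/curl:7.72.0
    command:
    - 'sh'
    - '-c'
    - >
      curl -k -i -X POST --url https://localhost:7001/services/ --data 'name=hello-world1' --data 'host=service-helloworld' --data 'port=80';
      curl -k -i -X POST --url https://localhost:7001/services/hello-world/routes --data 'paths=/hello-world' --data 'methods[]=GET'
  containers:
  - name: app
    image: nginx
The init container must exit successfully before the app container starts, so the registration runs exactly once per pod start.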

Kubernetes - Mark Pod completed when container completes

Let's say I have a Pod with 2 containers: App and Database. I want to run a Pod that executes a command in App and then terminates.
I have set up my App container to run that command, and it successfully runs and terminates, which is great. But now my Database container is still running, so the Pod is not marked as complete.
How can I get the Pod to be marked as complete when the App container is completed?
You can make a call to the Kubernetes API server to accomplish this. Consider the following example:
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-completion
spec:
  containers:
  - name: long-running-process
    image: fbgrecojr/office-hours:so-47848488
    command: ["sleep", "1000"]
  - name: short-running-process
    image: fbgrecojr/office-hours:so-47848488
    command: ["sleep", "1"]
    lifecycle:
      preStop:
        exec:
          command: ["/pre-stop.sh"]
pre-stop.sh
#!/bin/bash
curl \
-X DELETE \
-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
--cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
https://kubernetes.default.svc.cluster.local/api/v1/namespaces/$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)/pods/$HOSTNAME
Dockerfile for fbgrecojr/office-hours:so-47848488
FROM centos:latest
COPY pre-stop.sh /
RUN chmod +x /pre-stop.sh
NOTE: I was not able to properly test this because preStop hooks do not seem to be working for my local Minikube setup. In case this issue is not localized to me, the corresponding issue can be tracked here.
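One caveat worth adding: on an RBAC-enabled cluster, the pod's service account also needs permission to delete pods, or the DELETE call above will be rejected. A minimal sketch, with illustrative names and binding to the default service account in the default namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-deleter
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-deleter-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-deleter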

Kubernetes daemonset is not able to run

Run Daemonset
kubectl create -f test-daemon.yaml --validate=false
Error
Error from server: error when creating "test-daemon.yaml": the server could not find the requested resource (post daemonsets.extensions)
Config
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=network-online.target etcd2.service generate-serviceaccount-key.service
After=network-online.target etcd2.service generate-serviceaccount-key.service
[Service]
EnvironmentFile=/etc/environment
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-apiserver -z /opt/bin/kube-apiserver https://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/linux/amd64/kube-apiserver
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
ExecStartPre=/opt/bin/wupiao 127.0.0.1:2379/v2/machines
ExecStart=/opt/bin/kube-apiserver \
--service_account_key_file=/opt/bin/kube-serviceaccount.key \
--service_account_lookup=false \
--admission_control=NamespaceLifecycle,NamespaceAutoProvision,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
--runtime_config=api/v1,extensions/v1beta1=true,extensions/v1beta1/daemonsets=true \
--allow_privileged=true \
--insecure_bind_address=0.0.0.0 \
--insecure_port=3001 \
--kubelet_https=true \
--secure_port=6443 \
--service-cluster-ip-range=10.100.0.0/16 \
--etcd_servers=http://127.0.0.1:2379 \
--public_address_override=${COREOS_PRIVATE_IPV4} \
--logtostderr=true
Restart=always
RestartSec=10
Added config
--runtime_config=api/v1,extensions/v1beta1=true,extensions/v1beta1/daemonsets=true
DaemonSet
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: test
  name: test
spec:
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: 192.168.1.3:4000/test
        ports:
        - containerPort: 80
Try removing the schema cache: rm -rf /tmp/kubectl.schema
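That is, something along these lines, then re-run the create:
rm -rf /tmp/kubectl.schema
kubectl create -f test-daemon.yaml --validate=false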