Kubernetes CronJob Slack Notification

I have a CronJob that creates a Postgres backup job. I would like to send a notification to Slack channels via webhook with the CronJob's status, fail or success. How can I check the Job's status and send it to Slack? I suppose the curl request below will work, but please warn me if you see any fault.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: standup
spec:
  schedule: "* 17 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: standup
            image: busybox
            resources:
              requests:
                cpu: 1m
                memory: 100Mi
            args:
            - /bin/sh
            - -c
            - curl -X POST -H 'Content-type: application/json' --data '{"text":"Hello, World!"}' https://hooks.slack.com/services/TQPCENFHP/
          restartPolicy: OnFailure
~ semural$ kubectl logs $pods -n database
The following backups are available in specified backup path:
Added `s3` successfully.
[2020-04-13 14:24:46 UTC] 0B postgresql-cluster/
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
postgresql-postgresql-helm-backup 0 0 * * * False 0 8h 18h
NAME COMPLETIONS DURATION AGE
postgresql-postgresql-helm-backup-1586822400 1/1 37s 8h
postgresql-postgresql-helm-backup-list 1/1 2s 18h
postgresql-postgresql-helm-pgmon 1/1 49s 18h

I think we can create a simple script to get the cronjob status:
import json
import os

from kubernetes import client, config, utils
from kubernetes.client.rest import ApiException

from api.exceptions import BatchApiNamespaceNotExistedException


class Constants:
    BACKOFF_LIMIT = 1
    STATUS_RUNNING = "RUNNING"
    STATUS_SUCCEED = "SUCCEED"
    STATUS_FAILED = "FAILED"
    STATUS_NOT_FOUND = "NOT FOUND"


class KubernetesApi:
    def __init__(self):
        try:
            config.load_incluster_config()
        except config.ConfigException:
            config.load_kube_config()
        self.configuration = client.Configuration()
        self.api_instance = client.BatchV1Api(client.ApiClient(self.configuration))
        self.api_instance_v1_beta = client.BatchV1beta1Api(client.ApiClient(self.configuration))

    def get_job_status(self, job):
        if job is not None:
            total_failed_pod = job.status.failed or 0
            total_succeeded_pod = job.status.succeeded or 0
            if total_failed_pod + total_succeeded_pod < Constants.BACKOFF_LIMIT:
                return Constants.STATUS_RUNNING
            elif total_succeeded_pod > 0:
                return Constants.STATUS_SUCCEED
            return Constants.STATUS_FAILED
        return Constants.STATUS_NOT_FOUND

    def get_cron_job_status(self, namespace):
        try:
            cron_job_list = self.api_instance_v1_beta.list_namespaced_cron_job(namespace=namespace,
                                                                               watch=False)
        except ApiException as e:
            raise BatchApiNamespaceNotExistedException(
                "Exception when calling BatchV1beta1Api->list_namespaced_cron_job: %s\n" % e)
        for cron_job in cron_job_list.items:
            if cron_job.status.active is not None:
                for active_cron_job in cron_job.status.active:
                    job = self.api_instance.read_namespaced_job(namespace=namespace,
                                                                name=active_cron_job.name)
                    job_status = self.get_job_status(job)
                    if job_status == Constants.STATUS_FAILED:
                        # Do whatever you want in there
                        print(job_status)
So if the status is FAILED, we can send the log to Slack.
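For the "send to Slack" step itself, a minimal standard-library sketch could look like the following. The `notify_slack` function and the `SLACK_WEBHOOK_URL` environment variable are assumptions for illustration, not part of the original answer; the payload shape (`{"text": ...}`) matches the curl example in the question.

```python
import json
import os
import urllib.request


def build_slack_payload(job_name, status):
    """Build the JSON body that Slack incoming webhooks expect ({"text": ...})."""
    return {"text": "Job `%s` finished with status: %s" % (job_name, status)}


def notify_slack(job_name, status):
    """POST the payload to the webhook URL taken from the environment.

    SLACK_WEBHOOK_URL is a hypothetical variable name for this sketch.
    """
    url = os.environ["SLACK_WEBHOOK_URL"]
    data = json.dumps(build_slack_payload(job_name, status)).encode("utf-8")
    req = urllib.request.Request(url, data=data,
                                 headers={"Content-type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

You would call `notify_slack(cron_job.metadata.name, job_status)` from the `STATUS_FAILED` branch above.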

I think what you have is already a good start. Assuming you wrap the curl command in a script that takes the message to post as its first argument, you can do something like the following:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: standup
spec:
  schedule: "* 17 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: standup
            image: busybox
            resources:
              requests:
                cpu: 1m
                memory: 100Mi
            args:
            - /bin/sh
            - -c
            - run-job.py && notify-cron-job "SUCCESS" || notify-cron-job "FAIL"
          restartPolicy: OnFailure

Related

Passing environment variables to Flink job on Flink Kubernetes Cluster

I'm using Flink Kubernetes Operator 1.3.0 and need to pass some environment variables to a Python job. I have followed the official documentation and the example runs fine. How can I inject environment variables so that I can use them inside the Python file?
EDIT:
Here's the YAML file that I've used. It's straight from the example linked above:
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: python-example
spec:
  image: localhost:32000/flink-python-example:1.16.0
  flinkVersion: v1_16
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "1"
  serviceAccount: flink
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    jarURI: local:///opt/flink/opt/flink-python_2.12-1.16.0.jar # Note, this jarURI is actually a placeholder
    entryClass: "org.apache.flink.client.python.PythonDriver"
    args: ["-pyclientexec", "/usr/local/bin/python3", "-py", "/opt/flink/usrlib/python_demo.py"]
    parallelism: 1
    upgradeMode: stateless
As you can see it's a custom resource of kind FlinkDeployment. And here's the python code:
import logging
import sys

from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment


def python_demo():
    env = StreamExecutionEnvironment.get_execution_environment()
    env.set_parallelism(1)
    t_env = StreamTableEnvironment.create(stream_execution_environment=env)
    t_env.execute_sql("""
    CREATE TABLE orders (
      order_number BIGINT,
      price        DECIMAL(32,2),
      buyer        ROW<first_name STRING, last_name STRING>,
      order_time   TIMESTAMP(3)
    ) WITH (
      'connector' = 'datagen'
    )""")
    t_env.execute_sql("""
    CREATE TABLE print_table WITH ('connector' = 'print')
    LIKE orders""")
    t_env.execute_sql("""
    INSERT INTO print_table SELECT * FROM orders""")


if __name__ == '__main__':
    logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
    python_demo()
Found the solution.
This is not detailed in the reference
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/reference
or in the example FlinkDeployment
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/pod-template/
But https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/reference/#jobmanagerspec says:
"JobManager pod template. It will be merged with FlinkDeploymentSpec.podTemplate"
So I just added envFrom from the pod-template example, which shows how to extend the FlinkDeployment CRD:
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/pod-template/
Confirmed this works, as I needed it for my own application:
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: python-example
spec:
  image: localhost:32000/flink-python-example:1.16.0
  flinkVersion: v1_16
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "1"
  serviceAccount: flink
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  podTemplate:
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-template
    spec:
      serviceAccount: flink
      containers:
        # Do not change the main container name
        - name: flink-main-container
          envFrom:
            - secretRef:
                name: <SECRET RESOURCE NAME>
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    jarURI: local:///opt/flink/opt/flink-python_2.12-1.16.0.jar # Note, this jarURI is actually a placeholder
    entryClass: "org.apache.flink.client.python.PythonDriver"
    args: ["-pyclientexec", "/usr/local/bin/python3", "-py", "/opt/flink/usrlib/python_demo.py"]
    parallelism: 1
    upgradeMode: stateless
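Once the secret is injected via envFrom, the variables can be read inside the Python job with plain os.environ. A minimal sketch (the MY_SETTING key name is a hypothetical example, not from the original answer):

```python
import os


def get_setting(name, default=None):
    """Read a variable injected into the container via envFrom.

    MY_SETTING below is a hypothetical key from the referenced secret.
    """
    return os.environ.get(name, default)


# e.g. inside python_demo():
#   value = get_setting("MY_SETTING", "fallback")
```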

How can I troubleshoot pod stuck at ContainerCreating

I'm trying to troubleshoot a failing pod but I cannot gather enough info to do so. Hoping someone can assist.
[server-001 ~]$ kubectl get pods sandboxed-nginx-98bb68c4d-26ljd
NAME READY STATUS RESTARTS AGE
sandboxed-nginx-98bb68c4d-26ljd 0/1 ContainerCreating 0 18m
[server-001 ~]$ kubectl logs sandboxed-nginx-98bb68c4d-26ljd
Error from server (BadRequest): container "nginx-kata" in pod "sandboxed-nginx-98bb68c4d-26ljd" is waiting to start: ContainerCreating
[server-001 ~]$ kubectl describe pods sandboxed-nginx-98bb68c4d-26ljd
Name: sandboxed-nginx-98bb68c4d-26ljd
Namespace: default
Priority: 0
Node: worker-001/100.100.230.34
Start Time: Fri, 08 Jul 2022 09:41:08 +0000
Labels: name=sandboxed-nginx
pod-template-hash=98bb68c4d
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/sandboxed-nginx-98bb68c4d
Containers:
nginx-kata:
Container ID:
Image: dummy-registry.com/test/nginx:1.17.7
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-887n4 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-887n4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned default/sandboxed-nginx-98bb68c4d-26ljd to worker-001
Warning FailedCreatePodSandBox 5m19s kubelet Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
[worker-001 ~]$ sudo crictl images
IMAGE TAG IMAGE ID SIZE
dummy-registry.com/test/externalip-webhook v1.0.0-1 e2e778d82e6c3 147MB
dummy-registry.com/test/flannel v0.14.1 52e470e10ebf9 209MB
dummy-registry.com/test/kube-proxy v1.22.8 93ab9e5f0c4d6 869MB
dummy-registry.com/test/nginx 1.17.7 db634ca7e0456 310MB
dummy-registry.com/test/pause 3.5 dabdc5fea3665 711kB
dummy-registry.com/test/linux 7-slim 41388a53234b5 140MB
[worker-001 ~]$ sudo crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
b1c6d1bf2f09a db634ca7e045638213d3f68661164aa5c7d5b469631bbb79a8a65040666492d5 34 minutes ago Running nginx 0 3598c2c4d3e88
caaa14b395eb8 e2e778d82e6c3a8cc82cdf3083e55b084869cd5de2a762877640aff1e88659dd 48 minutes ago Running webhook 0 8a9697e2af6a1
4f97ac292753c 52e470e10ebf93ea5d2aa32f5ca2ecfa3a3b2ff8d2015069618429f3bb9cda7a 48 minutes ago Running kube-flannel 2 a4e4d0c14cafc
aacb3ed840065 93ab9e5f0c4d64c135c2e4593cd772733b025f53a9adb06e91fe49f500b634ab 48 minutes ago Running kube-proxy 2 9e0bc036c2d00
[worker-001 ~]$ sudo crictl pods
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
3598c2c4d3e88 34 minutes ago Ready nginx-9xtss default 0 (default)
8a9697e2af6a1 48 minutes ago Ready externalip-validation-webhook-7988bff847-ntv6d externalip-validation-system 0 (default)
9e0bc036c2d00 48 minutes ago Ready kube-proxy-9c7cb kube-system 0 (default)
a4e4d0c14cafc 48 minutes ago Ready kube-flannel-ds-msz7w kube-system 0 (default)
[worker-001 ~]$ cat /etc/crio/crio.conf
[crio]
[crio.image]
pause_image = "dummy-registry.com/test/pause:3.5"
registries = ["docker.io", "dummy-registry.com/test"]
[crio.network]
plugin_dirs = ["/opt/cni/bin"]
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
conmon = "/usr/libexec/crio/conmon"
manage_network_ns_lifecycle = true
manage_ns_lifecycle = true
selinux = false
[crio.runtime.runtimes]
[crio.runtime.runtimes.kata]
runtime_path = "/usr/bin/containerd-shim-kata-v2"
runtime_type = "vm"
runtime_root = "/run/vc"
[crio.runtime.runtimes.runc]
runtime_path = "/usr/bin/runc"
runtime_type = "oci"
[worker-001 ~]$ egrep -v '^#|^;|^$' /usr/share/defaults/kata-containers/configuration-qemu.toml
[hypervisor.qemu]
initrd = "/usr/share/kata-containers/kata-containers-initrd.img"
path = "/usr/libexec/qemu-kvm"
kernel = "/usr/share/kata-containers/vmlinuz.container"
machine_type = "q35"
enable_annotations = []
valid_hypervisor_paths = ["/usr/libexec/qemu-kvm"]
kernel_params = ""
firmware = ""
firmware_volume = ""
machine_accelerators=""
cpu_features="pmu=off"
default_vcpus = 1
default_maxvcpus = 0
default_bridges = 1
default_memory = 2048
disable_block_device_use = false
shared_fs = "virtio-9p"
virtio_fs_daemon = "/usr/libexec/kata-qemu/virtiofsd"
valid_virtio_fs_daemon_paths = ["/usr/libexec/kata-qemu/virtiofsd"]
virtio_fs_cache_size = 0
virtio_fs_extra_args = ["--thread-pool-size=1", "-o", "announce_submounts"]
virtio_fs_cache = "auto"
block_device_driver = "virtio-scsi"
enable_iothreads = false
enable_vhost_user_store = false
vhost_user_store_path = "/usr/libexec/qemu-kvm"
valid_vhost_user_store_paths = ["/var/run/kata-containers/vhost-user"]
valid_file_mem_backends = [""]
pflashes = []
valid_entropy_sources = ["/dev/urandom","/dev/random",""]
[factory]
[agent.kata]
kernel_modules=[]
[runtime]
internetworking_model="tcfilter"
disable_guest_seccomp=true
disable_selinux=false
sandbox_cgroup_only=true
static_sandbox_resource_mgmt=false
sandbox_bind_mounts=[]
vfio_mode="guest-kernel"
disable_guest_empty_dir=false
experimental=[]
[image]
[server-001 ~]$ cat nginx.yaml
---
kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
  name: kata-containers
handler: kata
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sandboxed-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sandboxed-nginx
  template:
    metadata:
      labels:
        name: sandboxed-nginx
    spec:
      runtimeClassName: kata-containers
      containers:
      - name: nginx-kata
        image: dummy-registry.com/test/nginx:1.17.7
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sandboxed-nginx
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    name: sandboxed-nginx
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: nginx
        image: dummy-registry.com/test/nginx:1.17.7
        ports:
        - containerPort: 80
[server-001 ~]$ kubectl apply -f nginx.yaml
runtimeclass.node.k8s.io/kata-containers unchanged
deployment.apps/sandboxed-nginx created
service/sandboxed-nginx created
daemonset.apps/nginx created
Since you're using Kata containers with the CRI-O runtime, your pod needs a RuntimeClass parameter, which it is missing.
You need to create a RuntimeClass object that points to the installed runtime. See the docs here for how to do that. Also, make sure that the CRI-O setup on worker-001 is correctly configured with k8s. Here is the documentation for that.
Afterwards, add a RuntimeClass parameter to your pod so that the container can actually run. The pod is stuck in ContainerCreating because the controller cannot run CRI-O based containers unless the RuntimeClass is specified. Here is some documentation on understanding Container Runtimes.

error validating data in cronjob in kubernetes

I am blocked by Kubernetes CronJob YAML syntax errors when I run:
kubectl apply -f cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: update-test
spec:
  schedule: "0 /5 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: update-test
            image: test:test
            imagePullPolicy: IfNotPresent
            command: ['echo test']
            envFrom:
            - configMapRef:
              name: test-config
            - configMapRef:
              name: test-config-globe
            resources:
              requests:
                memory: "512Mi"
                cpu: "0.5"
              limits:
                memory: "1024Mi"
                cpu: "2"
          restartPolicy: OnFailure
But i am getting this error:
error: error validating "deplyment.yaml": error validating data: [ValidationError(CronJob.spec.jobTemplate.spec.template.spec.containers[0].envFrom[0]): unknown field "name" in io.k8s.api.core.v1.EnvFromSource, ValidationError(CronJob.spec.jobTemplate.spec.template.spec.containers[0].envFrom[1]): unknown field "name" in io.k8s.api.core.v1.EnvFromSource];
The indentation of the configMapRef name is incorrect: name must be nested under configMapRef, not be a sibling of it. Change this:
envFrom:
- configMapRef:
  name: test-config
to:
envFrom:
- configMapRef:
    name: test-config
Note: your cron schedule 0 /5 * * * is also invalid; you probably meant 0 */5 * * * (at minute 0 of every 5th hour).
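A rough, purely syntactic check of a 5-field cron schedule can catch mistakes like this before kubectl does. This sketch only validates the shape of each field (it does not range-check the numbers):

```python
import re

# One pattern per cron field: supports "*", plain numbers, ranges (a-b),
# comma-separated lists, and step values (*/n, a-b/n). No range checking.
_FIELD = r"(\*|\d+(-\d+)?)(/\d+)?"
_PART = r"%s(,%s)*" % (_FIELD, _FIELD)
_CRON = re.compile(r"^%s( %s){4}$" % (_PART, _PART))


def looks_like_valid_cron(schedule):
    """Return True if the string looks like a well-formed 5-field cron schedule."""
    return _CRON.match(schedule.strip()) is not None
```

With this check, "0 /5 * * *" is rejected (the second field starts with a bare "/"), while "0 */5 * * *" passes.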

Pass json string to environment variable in a k8s deployment for Envoy

I have a K8s deployment with one pod running, among other things, a container with Envoy. I have defined the image so that if an environment variable EXTRA_OPTS is defined, it is appended to the command line used to start Envoy.
I want to use that variable to override the default configuration as explained in
https://www.envoyproxy.io/docs/envoy/latest/operations/cli#cmdoption-config-yaml
The environment variable works fine for other command options, such as "-l debug". I have also tested the expected final command line and it works.
The Dockerfile sets Envoy to run this way:
CMD ["/bin/bash", "-c", "envoy -c envoy.yaml $EXTRA_OPTS"]
What I want is to set this:
...
- image: envoy-proxy:1.10.0
imagePullPolicy: IfNotPresent
name: envoy-proxy
env:
- name: EXTRA_OPTS
value: ' --config-yaml "admin: { address: { socket_address: { address: 0.0.0.0, port_value: 9902 } } }"'
...
I have successfully tested running Envoy with the final command line:
envoy -c /etc/envoy/envoy.yaml --config-yaml "admin: { address: { socket_address: { address: 0.0.0.0, port_value: 9902 } } }"
And I have also tested a "simpler" option in EXTRA_OPTS and it works:
...
- image: envoy-proxy:1.10.0
imagePullPolicy: IfNotPresent
name: envoy-proxy
env:
- name: EXTRA_OPTS
value: ' -l debug'
...
I would expect Envoy to run with this new admin port; instead I'm getting parse errors:
PARSE ERROR: Argument: {
Couldn't find match for argument
It looks like the quotes are not being preserved when the environment variable is expanded inside the container. Any clue? Thanks to all.
You should set ["/bin/bash", "-c", "envoy -c envoy.yaml"] as an ENTRYPOINT in your Dockerfile, or use command in Kubernetes, and then use args to add additional arguments.
You can find more information in the Docker documentation.
Let me explain by example:
$ docker build -t fl3sh/test:bash .
$ cat Dockerfile
FROM ubuntu
RUN echo '#!/bin/bash' > args.sh && \
    echo 'echo "$@"' >> args.sh && \
    chmod +x args.sh
CMD ["args","from","docker","cmd"]
ENTRYPOINT ["/bin/bash", "args.sh", "$ENV_ARG"]
cat args.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: args
  name: args
spec:
  containers:
  - args:
    - args
    - from
    - k8s
    image: fl3sh/test:bash
    name: args
    imagePullPolicy: Always
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Output:
pod/args $ENV_ARG args from k8s
cat command-args.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: command-args
  name: command-args
spec:
  containers:
  - command:
    - /bin/bash
    - -c
    args:
    - 'echo args'
    image: fl3sh/test:bash
    imagePullPolicy: Always
    name: args
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Output:
pod/command-args args
cat command-env-args.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: command-env-args
  name: command-env-args
spec:
  containers:
  - env:
    - name: ENV_ARG
      value: "arg from env"
    command:
    - /bin/bash
    - -c
    - exec echo "$ENV_ARG"
    image: fl3sh/test:bash
    imagePullPolicy: Always
    name: args
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Output:
pod/command-env-args arg from env
cat command-no-args.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: command-no-args
  name: command-no-args
spec:
  containers:
  - command:
    - /bin/bash
    - -c
    - 'echo "no args";echo "$@"'
    image: fl3sh/test:bash
    name: args
    imagePullPolicy: Always
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Output:
pod/command-no-args no args
#notice ^ empty line above
cat no-args.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: no-args
  name: no-args
spec:
  containers:
  - image: fl3sh/test:bash
    name: no-args
    imagePullPolicy: Always
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Output:
pod/no-args $ENV_ARG args from docker cmd
If you want to recreate my example, you can use this loop to get the output shown above:
for p in `kubectl get po -oname`; do echo cat ${p#*/}.yaml; echo ""; \
cat ${p#*/}.yaml; echo -e "\nOutput:"; printf "$p "; \
kubectl logs $p;echo "";done
Conclusion: if you need to pass an environment variable as an argument, use:
command:
- /bin/bash
- -c
- exec echo "$ENV_ARG"
I hope now it is clear.
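The key difference above, exec form versus shell form, can be reproduced outside Kubernetes. A small sketch: without a shell, "$ENV_ARG" is passed through literally; with a shell, the variable is expanded before the command sees it.

```python
import os
import subprocess

os.environ["ENV_ARG"] = "arg from env"

# Exec form: no shell is involved, so "$ENV_ARG" reaches echo as a literal string.
exec_form = subprocess.run(["echo", "$ENV_ARG"],
                           capture_output=True, text=True).stdout.strip()

# Shell form: the shell expands the variable before echo runs.
shell_form = subprocess.run(["/bin/sh", "-c", 'exec echo "$ENV_ARG"'],
                            capture_output=True, text=True).stdout.strip()

print(exec_form)   # $ENV_ARG
print(shell_form)  # arg from env
```

The same logic explains why the Envoy CMD above needs "/bin/bash -c": only the shell form expands $EXTRA_OPTS.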

kubectl apply -f <spec.yaml> equivalent in fabric8 java api

I was trying to use io.fabric8 api to create a few resources in kubernetes using a pod-spec.yaml.
Config config = new ConfigBuilder()
        .withNamespace("ag")
        .withMasterUrl(K8_URL)
        .build();
try (final KubernetesClient client = new DefaultKubernetesClient(config)) {
    LOGGER.info("Master: " + client.getMasterUrl());
    LOGGER.info("Loading File : " + args[0]);
    Pod pod = client.pods().load(new FileInputStream(args[0])).get();
    LOGGER.info("Pod created with name : " + pod.toString());
} catch (Exception e) {
    LOGGER.error(e.getMessage(), e);
}
The above code works if the resource type is Pod, and similarly for other single resource types. But if the YAML has multiple resource types, like a Pod and a Service in the same file, how do I use the fabric8 API?
I tried client.load(new FileInputStream(args[0])).createOrReplace(); but it crashes with the exception below:
java.lang.NullPointerException
at java.net.URI$Parser.parse(URI.java:3042)
at java.net.URI.<init>(URI.java:588)
at io.fabric8.kubernetes.client.utils.URLUtils.join(URLUtils.java:48)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:208)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:177)
at io.fabric8.kubernetes.client.handlers.PodHandler.reload(PodHandler.java:53)
at io.fabric8.kubernetes.client.handlers.PodHandler.reload(PodHandler.java:32)
at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:202)
at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:62)
at com.nokia.k8s.InterpreterLanuch.main(InterpreterLanuch.java:66)
The YAML file used:
apiVersion: v1
kind: Pod
metadata:
  generateName: zep-ag-pod
  annotations:
    kubernetes.io/psp: restricted
    spark-app-name: Zeppelin-spark-shared-process
  namespace: ag
  labels:
    app: zeppelin
    int-app-selector: shell-123
spec:
  containers:
  - name: ag-csf-zep
    image: bcmt-registry:5000/zep-spark2.2:9
    imagePullPolicy: IfNotPresent
    command: ["/bin/bash"]
    args: ["-c","echo Hi && sleep 60 && echo Done"]
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
  securityContext:
    fsGroup: 2000
    runAsUser: 1510
  serviceAccount: csfzeppelin
  serviceAccountName: csfzeppelin
---
apiVersion: v1
kind: Service
metadata:
  name: zeppelin-service
  namespace: ag
  labels:
    app: zeppelin
spec:
  type: NodePort
  ports:
  - name: zeppelin-service
    port: 30099
    protocol: TCP
    targetPort: 8080
  selector:
    app: zeppelin
You don't need to specify the resource type when loading a file with multiple documents. You simply need to do:
// Load YAML into Kubernetes resources
List<HasMetadata> result = client.load(new FileInputStream(args[0])).get();
// Apply Kubernetes resources
client.resourceList(result).inNamespace(namespace).createOrReplace();