I have figured out how to run a one-time job in OpenShift (an alternative to docker run):
oc run my-job --restart=Never --rm -ti --image=busybox --command -- /bin/true
How can I mount a ConfigMap into the job's container?
You can use the --overrides flag:
oc run my-job --overrides='
{
  "apiVersion": "v1",
  "kind": "Pod",
  "spec": {
    "containers": [
      {
        "image": "busybox",
        "name": "mypod",
        "volumeMounts": [
          {
            "mountPath": "/path",
            "name": "configmap"
          }
        ]
      }
    ],
    "volumes": [
      {
        "configMap": {
          "name": "myconfigmap"
        },
        "name": "configmap"
      }
    ]
  }
}
' --restart=Never --rm -ti --image=busybox --command -- /bin/true
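Note that this assumes the ConfigMap already exists. A minimal sketch for creating one, where myconfigmap and the key/value are placeholders:

oc create configmap myconfigmap --from-literal=mykey=myvalue

Each key in the ConfigMap then shows up as a file under /path inside the container.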
I was trying to uninstall and reinstall Istio from a k8s cluster, following the documented steps. But I made the mistake of deleting the namespace before deleting the istio-control-plane with kubectl delete istiooperator istio-control-plane -n istio-system. Now when I try to delete the istio-control-plane again, it hangs.
I tried to remove the finalizer using the following steps, but it said Error from server (NotFound): istiooperators.install.istio.io "istio-control-plane" not found:
kubectl get istiooperator -n istio-system -o json > output.json
nano output.json # and remove finalizer
kubectl replace --raw "/apis/install.istio.io/v1alpha1/namespaces/istio-system/istiooperators/istio-control-plane/finalize" -f output.json
Here is the content of kubectl get istiooperator -n istio-system -o json:
{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "install.istio.io/v1alpha1",
      "kind": "IstioOperator",
      "metadata": {
        "annotations": {
          "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"install.istio.io/v1alpha1\",\"kind\":\"IstioOperator\",\"metadata\":{\"annotations\":{},\"name\":\"istio-control-plane\",\"namespace\":\"istio-system\"},\"spec\":{\"addonComponents\":{\"prometheus\":{\"enabled\":false},\"tracing\":{\"enabled\":false}},\"hub\":\"hub.docker.prod.walmart.com/istio\",\"profile\":\"default\",\"values\":{\"global\":{\"defaultNodeSelector\":{\"beta.kubernetes.io/os\":\"linux\"}}}}}\n"
        },
        "creationTimestamp": "2020-12-05T23:39:34Z",
        "deletionGracePeriodSeconds": 0,
        "deletionTimestamp": "2020-12-07T16:41:41Z",
        "finalizers": [],
        "generation": 2,
        "name": "istio-control-plane",
        "namespace": "istio-system",
        "resourceVersion": "11750055",
        "selfLink": "/apis/install.istio.io/v1alpha1/namespaces/istio-system/istiooperators/istio-control-plane",
        "uid": "fda8ee4f-54e7-45e8-91ec-c328fad1a86f"
      },
      "spec": {
        "addonComponents": {
          "prometheus": {
            "enabled": false
          },
          "tracing": {
            "enabled": false
          }
        },
        "hub": "hub.docker.prod.walmart.com/istio",
        "profile": "default",
        "values": {
          "global": {
            "defaultNodeSelector": {
              "beta.kubernetes.io/os": "linux"
            }
          }
        }
      },
      "status": {
        "componentStatus": {
          "Base": {
            "status": "HEALTHY"
          },
          "IngressGateways": {
            "status": "HEALTHY"
          },
          "Pilot": {
            "status": "HEALTHY"
          }
        },
        "status": "HEALTHY"
      }
    }
  ],
  "kind": "List",
  "metadata": {
    "resourceVersion": "",
    "selfLink": ""
  }
}
Any ideas on how I can uninstall istio-control-plane manually?
You can use the command below to clear the Istio operator's finalizers and delete it; it's a jq/kubectl one-liner made by @Rico here. I also tried kubectl patch, but it didn't work.
kubectl get istiooperator -n istio-system istio-control-plane -o=json | \
jq '.metadata.finalizers = null' | kubectl apply -f -
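Since the namespace was deleted first, it may also end up stuck in Terminating. The same pattern works there through the namespace's finalize subresource; this is a sketch of the generic stuck-namespace workaround, not anything Istio-specific:

kubectl get namespace istio-system -o json \
| jq '.spec.finalizers = []' \
| kubectl replace --raw "/api/v1/namespaces/istio-system/finalize" -f -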
Additionally, I used istioctl operator remove:
istioctl operator remove
Removing Istio operator...
Removed Deployment:istio-operator:istio-operator.
Removed Service:istio-operator:istio-operator.
Removed ServiceAccount:istio-operator:istio-operator.
Removed ClusterRole::istio-operator.
Removed ClusterRoleBinding::istio-operator.
✔ Removal complete
Results from kubectl get:
kubectl get istiooperator istio-control-plane -n istio-system
Error from server (NotFound): namespaces "istio-system" not found
I want to create a pod using the kubectl CLI that mounts the hostPath /etc/os-release inside the pod's container and displays the content of the /etc/os-release file.
Is it possible to do this with a one-liner kubectl command?
kubectl run -i --rm busybox --image=busybox --overrides='{
  "apiVersion": "v1",
  "spec": {
    "containers": [
      {
        "image": "busybox",
        "name": "busybox",
        "command": ["cat", "/etc/os-release"],
        "resources": {},
        "volumeMounts": [
          {
            "mountPath": "/etc/os-release",
            "name": "release"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "release",
        "hostPath": {
          "path": "/etc/os-release",
          "type": "File"
        }
      }
    ],
    "dnsPolicy": "ClusterFirst",
    "restartPolicy": "Never"
  },
  "status": {}
}'
NAME=Buildroot
VERSION=2019.02.10
ID=buildroot
VERSION_ID=2019.02.10
PRETTY_NAME="Buildroot 2019.02.10"
pod "busybox" deleted
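Note that hostPath reads from whichever node the pod happens to be scheduled on; the Buildroot output above is the node's OS (e.g. a minikube VM), not busybox's own. If you need a specific node's file, you can pin the pod in the same override via spec.nodeName. A sketch, where my-node is a placeholder:

kubectl run -i --rm busybox --image=busybox --overrides='{
  "apiVersion": "v1",
  "spec": {
    "nodeName": "my-node",
    "containers": [
      {
        "image": "busybox",
        "name": "busybox",
        "command": ["cat", "/etc/os-release"],
        "volumeMounts": [{"mountPath": "/etc/os-release", "name": "release"}]
      }
    ],
    "volumes": [{"name": "release", "hostPath": {"path": "/etc/os-release", "type": "File"}}],
    "restartPolicy": "Never"
  }
}'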
How can I issue a kubectl run that pulls an environment var from a k8s Secret?
Currently I have:
kubectl run oneoff -i --rm --image=IMAGE --env SECRET=foo
Look into the --overrides flag of the run command; the docs describe it as:
An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field.
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run
So in your case I guess it would be something like:
kubectl run oneoff -i --rm --overrides='
{
  "spec": {
    "containers": [
      {
        "name": "oneoff",
        "image": "IMAGE",
        "env": [
          {
            "name": "ENV_NAME",
            "valueFrom": {
              "secretKeyRef": {
                "name": "SECRET_NAME",
                "key": "SECRET_KEY"
              }
            }
          }
        ]
      }
    ]
  }
}
' --image=IMAGE
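For the secretKeyRef above to resolve, the Secret has to exist first. A minimal sketch, keeping the placeholder names:

kubectl create secret generic SECRET_NAME --from-literal=SECRET_KEY=foo

The container then starts with ENV_NAME=foo in its environment.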
This is another approach that does the trick (note that Secret values live under .data and are base64-encoded, hence the decode):
kubectl run oneoff -i --rm --image=IMAGE --env SECRET="$(kubectl get secret your-secret -o=jsonpath="{.data['secret\.yml']}" | base64 --decode)"
I need to map a volume while starting a container. I am able to do so with a YAML file.
Is there a way volume mapping can be done via the command line without using a YAML file, just like the -v option in docker?
"without using yaml file"
Technically, yes: you would need inline JSON instead, as illustrated in "Create kubernetes pod with volume using kubectl run".
See kubectl run.
kubectl run -i --rm --tty ubuntu --overrides='
{
  "apiVersion": "batch/v1",
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "ubuntu",
            "image": "ubuntu:14.04",
            "args": ["bash"],
            "stdin": true,
            "stdinOnce": true,
            "tty": true,
            "volumeMounts": [
              {
                "mountPath": "/home/store",
                "name": "store"
              }
            ]
          }
        ],
        "volumes": [
          {
            "name": "store",
            "emptyDir": {}
          }
        ]
      }
    }
  }
}
' --image=ubuntu:14.04 --restart=Never -- bash
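If you are unsure what shape the override has to match, one trick is to let kubectl print the object it would generate and edit from there. A sketch; the flag spelling varies by version (plain --dry-run on older clients, --dry-run=client on newer ones):

kubectl run ubuntu --image=ubuntu:14.04 --restart=Never --dry-run=client -o json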
I understand that you can create a pod with a Deployment/Job using kubectl run. But is it possible to create one with a volume attached to it? I tried running this command:
kubectl run -i --rm --tty ubuntu --overrides='{ "apiVersion":"batch/v1", "spec": {"containers": {"image": "ubuntu:14.04", "volumeMounts": {"mountPath": "/home/store", "name":"store"}}, "volumes":{"name":"store", "emptyDir":{}}}}' --image=ubuntu:14.04 --restart=Never -- bash
But the volume does not appear in the interactive bash session.
Is there a better way to create a pod with a volume attached to it?
Your JSON override is specified incorrectly. Unfortunately, kubectl run just silently ignores fields it doesn't understand.
kubectl run -i --rm --tty ubuntu --overrides='
{
  "apiVersion": "batch/v1",
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "ubuntu",
            "image": "ubuntu:14.04",
            "args": ["bash"],
            "stdin": true,
            "stdinOnce": true,
            "tty": true,
            "volumeMounts": [
              {
                "mountPath": "/home/store",
                "name": "store"
              }
            ]
          }
        ],
        "volumes": [
          {
            "name": "store",
            "emptyDir": {}
          }
        ]
      }
    }
  }
}
' --image=ubuntu:14.04 --restart=Never -- bash
To debug this issue I ran the command you specified, and then in another terminal ran:
kubectl get job ubuntu -o json
From there you can see that the actual Job structure differs from your JSON override: you were missing the nested template/spec, and volumes, volumeMounts, and containers all need to be arrays.
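For example, to inspect just the volume wiring (assuming jq is installed):

kubectl get job ubuntu -o json | jq '.spec.template.spec.volumes, .spec.template.spec.containers[0].volumeMounts'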