I need to map a volume while starting a container. I am able to do so with a YAML file.
Is there a way volume mapping can be done via the command line, without using a YAML file, just like the -v option in Docker?
Technically, yes: you would need a JSON override, as illustrated in "Create kubernetes pod with volume using kubectl run".
See kubectl run:
kubectl run -i --rm --tty ubuntu --overrides='
{
"apiVersion": "batch/v1",
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "ubuntu",
"image": "ubuntu:14.04",
"args": [
"bash"
],
"stdin": true,
"stdinOnce": true,
"tty": true,
"volumeMounts": [{
"mountPath": "/home/store",
"name": "store"
}]
}
],
"volumes": [{
"name":"store",
"emptyDir":{}
}]
}
}
}
}
' --image=ubuntu:14.04 --restart=Never -- bash
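Once the shell comes up, you can confirm the emptyDir volume actually got mounted (a quick sanity check, assuming the override was applied as intended):
# inside the container
mount | grep /home/store
df -h /home/store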
Related
I want to create a pod using the kubectl CLI that mounts the host path /etc/os-release inside the pod's container and displays the content of the /etc/os-release file.
Is it possible to do this with a one-liner kubectl command?
kubectl run -i --rm busybox --image=busybox --overrides='{
"apiVersion": "v1",
"spec": {
"containers": [
{
"image": "busybox",
"name": "busybox",
"command": ["cat", "/etc/os-release"],
"resources": {},
"volumeMounts": [
{
"mountPath": "/etc/os-release",
"name": "release"
}
]
}
],
"volumes": [
{
"name": "release",
"hostPath": {
"path": "/etc/os-release",
"type": "File"
}
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Never"
},
"status": {}
}'
NAME=Buildroot
VERSION=2019.02.10
ID=buildroot
VERSION_ID=2019.02.10
PRETTY_NAME="Buildroot 2019.02.10"
pod "busybox" deleted
I have figured out how to run a one-time job in OpenShift (as an alternative to docker run):
oc run my-job --replicas=1 --restart=Never --rm -ti --command /bin/true --image busybox
How can I mount a configmap into the job container?
You can use the --overrides flag:
oc run my-job --overrides='
{
"apiVersion": "v1",
"kind": "Pod",
"spec": {
"containers": [
{
"image": "busybox",
"name": "mypod",
"volumeMounts": [
{
"mountPath": "/path",
"name": "configmap"
}
]
}
],
"volumes": [
{
"configMap": {
"name": "myconfigmap"
},
"name": "configmap"
}
]
}
}
' --replicas=1 --restart=Never --rm -ti --command /bin/true --image busybox
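This assumes the ConfigMap referenced in the override already exists; if it doesn't, create it first (the key/value and file path here are just placeholders):
oc create configmap myconfigmap --from-literal=mykey=myvalue
# or build it from an existing file
oc create configmap myconfigmap --from-file=path/to/config.properties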
I've been working with a bunch of k8s clusters for a while, using kubectl from the command line to examine information. I don't actually call kubectl directly; I wrap it in multiple scripting layers. I also don't use contexts, as it's much easier for me to specify different clusters in a different way. The resulting kubectl command line has explicit --server, --namespace, and --token parameters (plus one other flag to disable TLS verification).
This all works fine. I have no trouble with this.
However, I'm now trying to use telepresence, which doesn't give me a choice (yet) of not using contexts to configure this. So, I now have to figure out how to use contexts.
I ran the following (approximate) command:
kubectl config set-context mycontext --server=https://host:port --namespace=abc-def-ghi --insecure-skip-tls-verify=true --token=mytoken
And it said: Context "mycontext" modified.
I then ran "kubectl config view -o json" and got this:
{
"kind": "Config",
"apiVersion": "v1",
"preferences": {},
"clusters": [],
"users": [],
"contexts": [
{
"name": "mycontext",
"context": {
"cluster": "",
"user": "",
"namespace": "abc-def-ghi"
}
}
],
"current-context": "mycontext"
}
That doesn't look right to me.
I then ran something like this:
telepresence --verbose --swap-deployment mydeployment --expose 8080 --run java -jar target/my.jar -Xdebug -Xrunjdwp:transport=dt_socket,address=5000,server=y,suspend=n
And it said this:
T: Error: Namespace 'abc-def-ghi' does not exist
Update:
And I can confirm that this isn't a problem with telepresence. If I just run "kubectl get pods", it fails, saying "The connection to the server localhost:8080 was refused". That tells me it obviously can't connect to the k8s server. The key is my "set-context" command. It's obviously not working, and I don't understand what I'm missing.
You don't have any clusters or credentials defined in your configuration. First, you need to define a cluster:
$ kubectl config set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
Then something like this for the user:
$ kubectl config set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-file
Then you define your context based on your cluster, user and namespace:
$ kubectl config set-context dev-frontend --cluster=development --namespace=frontend --user=developer
More information is in the Kubernetes documentation on configuring access to multiple clusters.
Your config should look something like this:
$ kubectl config view -o json
{
"kind": "Config",
"apiVersion": "v1",
"preferences": {},
"clusters": [
{
"name": "development",
"cluster": {
"server": "https://1.2.3.4",
"certificate-authority-data": "DATA+OMITTED"
}
}
],
"users": [
{
"name": "developer",
"user": {
"client-certificate": "fake-cert-file",
"client-key": "fake-key-seefile"
}
}
],
"contexts": [
{
"name": "dev-frontend",
"context": {
"cluster": "development",
"user": "developer",
"namespace": "frontend"
}
}
],
"current-context": "dev-frontend"
}
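Since you're already passing a bearer token and skipping TLS verification on the command line, you can put those same values into the config instead of a client certificate; a sketch using your own values, with mycluster and myuser as placeholder names:
kubectl config set-cluster mycluster --server=https://host:port --insecure-skip-tls-verify=true
kubectl config set-credentials myuser --token=mytoken
kubectl config set-context mycontext --cluster=mycluster --user=myuser --namespace=abc-def-ghi
kubectl config use-context mycontext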
I want to mount a volume with kubectl and get a shell in the environment.
I've tried this:
kubectl run -i --rm --tty alpine --overrides='
{
"apiVersion": "v1",
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "alpine",
"image": "alpine:latest",
"args": [
"sh"
],
"stdin": true,
"stdinOnce": true,
"tty": true,
"volumeMounts": [{
"mountPath": "/home/store",
"name": "store"
}]
}
],
"volumes": [{
"name":"store",
"emptyDir":{}
}]
}
}
}
}
' --image=alpine:latest --restart=Never -- sh
I'm not getting any errors but the volume is not present at the mount path /home/store:
~ # ls -lah /home/
total 8
drwxr-xr-x 2 root root 4.0K Sep 11 20:23 .
drwxr-xr-x 1 root root 4.0K Sep 29 09:47 ..
I'm looking for the most direct way to use a volume with kubectl run for debugging purposes.
TL;DR: I don't know what the root cause was, but I solved this by making the API request very verbose.
I set kubectl's log level to very verbose (-v=9) and noticed that my volume mount was completely ignored by kubectl and was not present in the request sent to the API:
I0929 13:31:22.429307 14616 request.go:897] Request Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"alpine","creationTimestamp":null,"labels":{"run":"alpine"}},"spec":{"volumes":[{"name":"store","emptyDir":{}}],"containers":[{"name":"alpine","image":"alpine:latest","args":["sh"],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","stdin":true,"stdinOnce":true,"tty":true}],"restartPolicy":"Never","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulerName":"default-scheduler"},"status":{}}
I copy-pasted that request, edited it to add the same volume mount as above, and it worked:
kubectl run -i --rm --tty alpine --overrides='
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "alpine",
"creationTimestamp": null,
"labels": {
"run": "alpine"
}
},
"spec": {
"containers": [{
"name": "alpine",
"image": "alpine:latest",
"args": ["sh"],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent",
"stdin": true,
"stdinOnce": true,
"tty": true,
"volumeMounts": [{
"mountPath": "/home/store",
"name": "store"
}]
}],
"volumes": [{
"name":"store",
"emptyDir":{}
}],
"restartPolicy": "Never",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
},
"status": {}
}
' --image=alpine:latest -v=9 --restart=Never -- sh
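With the corrected override in place, you can check from inside the shell that the emptyDir is really there and writable (illustrative commands, not captured output):
# inside the container
ls -ld /home/store
touch /home/store/testfile && ls /home/store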
I understand that you can create a pod via a Deployment or Job using kubectl run. But is it possible to create one with a volume attached to it? I tried running this command:
kubectl run -i --rm --tty ubuntu --overrides='{ "apiVersion":"batch/v1", "spec": {"containers": {"image": "ubuntu:14.04", "volumeMounts": {"mountPath": "/home/store", "name":"store"}}, "volumes":{"name":"store", "emptyDir":{}}}}' --image=ubuntu:14.04 --restart=Never -- bash
But the volume does not appear in the interactive bash session.
Is there a better way to create a pod with a volume that you can attach to?
Your JSON override is specified incorrectly. Unfortunately kubectl run just ignores fields it doesn't understand.
kubectl run -i --rm --tty ubuntu --overrides='
{
"apiVersion": "batch/v1",
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "ubuntu",
"image": "ubuntu:14.04",
"args": [
"bash"
],
"stdin": true,
"stdinOnce": true,
"tty": true,
"volumeMounts": [{
"mountPath": "/home/store",
"name": "store"
}]
}
],
"volumes": [{
"name":"store",
"emptyDir":{}
}]
}
}
}
}
' --image=ubuntu:14.04 --restart=Never -- bash
To debug this issue I ran the command you specified, and then in another terminal ran:
kubectl get job ubuntu -o json
From there you can see that the actual Job structure differs from your JSON override (you were missing the nested template/spec, and volumes, volumeMounts, and containers need to be arrays).
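Note that on newer kubectl versions, kubectl run with --restart=Never creates a bare Pod rather than a Job, so the override would use "apiVersion": "v1" with containers and volumes directly under spec, as in the alpine and busybox examples above; a minimal sketch along those lines:
kubectl run -i --rm --tty ubuntu --image=ubuntu --restart=Never --overrides='
{
  "apiVersion": "v1",
  "spec": {
    "containers": [{
      "name": "ubuntu",
      "image": "ubuntu",
      "args": ["bash"],
      "stdin": true,
      "stdinOnce": true,
      "tty": true,
      "volumeMounts": [{"mountPath": "/home/store", "name": "store"}]
    }],
    "volumes": [{"name": "store", "emptyDir": {}}]
  }
}' -- bash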