I want to mount a volume with kubectl and get a shell in the environment.
I've tried this:
kubectl run -i --rm --tty alpine --overrides='
{
"apiVersion": "v1",
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "alpine",
"image": "alpine:latest",
"args": [
"sh"
],
"stdin": true,
"stdinOnce": true,
"tty": true,
"volumeMounts": [{
"mountPath": "/home/store",
"name": "store"
}]
}
],
"volumes": [{
"name":"store",
"emptyDir":{}
}]
}
}
}
}
' --image=alpine:latest --restart=Never -- sh
I'm not getting any errors but the volume is not present at the mount path /home/store:
~ # ls -lah /home/
total 8
drwxr-xr-x 2 root root 4.0K Sep 11 20:23 .
drwxr-xr-x 1 root root 4.0K Sep 29 09:47 ..
I'm looking for the most direct way to use a volume with kubectl run for debugging purposes.
TL;DR: I solved this by making the API request logging very verbose and inspecting what kubectl actually sent.
Specifically, I raised the client verbosity (-v=9) and noticed that my volume mount was silently dropped by kubectl and not present in the request to the API:
I0929 13:31:22.429307 14616 request.go:897] Request Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"alpine","creationTimestamp":null,"labels":{"run":"alpine"}},"spec":{"volumes":[{"name":"store","emptyDir":{}}],"containers":[{"name":"alpine","image":"alpine:latest","args":["sh"],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","stdin":true,"stdinOnce":true,"tty":true}],"restartPolicy":"Never","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulerName":"default-scheduler"},"status":{}}
I copy-pasted that request body, edited it to add the same volume mount as above, and it worked:
kubectl run -i --rm --tty alpine --overrides='
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "alpine",
"creationTimestamp": null,
"labels": {
"run": "alpine"
}
},
"spec": {
"containers": [{
"name": "alpine",
"image": "alpine:latest",
"args": ["sh"],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent",
"stdin": true,
"stdinOnce": true,
"tty": true,
"volumeMounts": [{
"mountPath": "/home/store",
"name": "store"
}]
}],
"volumes": [{
"name":"store",
"emptyDir":{}
}],
"restartPolicy": "Never",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
},
"status": {}
}
' --image=alpine:latest -v=9 --restart=Never -- sh
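A lighter-weight way to catch this kind of silent drop than -v=9 is to render the object locally and check whether the mounts survived the merge. A sketch, assuming a newer kubectl where the flag is spelled --dry-run=client (older versions use plain --dry-run):
kubectl run alpine --image=alpine:latest --restart=Never --overrides='{"apiVersion":"v1","spec":{"containers":[{"name":"alpine","image":"alpine:latest","args":["sh"],"volumeMounts":[{"mountPath":"/home/store","name":"store"}]}],"volumes":[{"name":"store","emptyDir":{}}]}}' --dry-run=client -o yaml
If the volumeMounts are missing from the printed Pod, the override did not apply the way you intended.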
Related
I am trying to add resource requests and limits to my deployment on Kubernetes Engine, since one of my pods is continuously getting evicted with the error message The node was low on resource: memory. Container model-run was using 1904944Ki, which exceeds its request of 0. I assume the issue can be resolved by adding resource requests.
When I add the resource requests and deploy, the deployment succeeds, but when I go back and view detailed information about the Pod with the command
kubectl get pod default-pod-name --output=yaml --namespace=default
It still says the pod has a request of cpu: 100m, with no mention of the memory I allocated. I am guessing the cpu request of 100m is a default. Please let me know how I can set the requests and limits; the command I am using to deploy is as follows:
kubectl run model-run --image-pull-policy=Always --overrides='
{
"apiVersion": "apps/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "model-run",
"labels": {
"app": "model-run"
}
},
"spec": {
"selector": {
"matchLabels": {
"app": "model-run"
}
},
"template": {
"metadata": {
"labels": {
"app": "model-run"
}
},
"spec": {
"containers": [
{
"name": "model-run",
"image": "gcr.io/some-project/news/model-run:development",
"imagePullPolicy": "Always",
"resouces": {
"requests": [
{
"memory": "2048Mi",
"cpu": "500m"
}
],
"limits": [
{
"memory": "2500Mi",
"cpu": "750m"
}
]
},
"volumeMounts": [
{
"name": "credentials",
"readOnly": true,
"mountPath":"/path/collection/keys"
}
],
"env":[
{
"name":"GOOGLE_APPLICATION_CREDENTIALS",
"value":"/path/collection/keys/key.json"
}
]
}
],
"volumes": [
{
"name": "credentials",
"secret": {
"secretName": "credentials"
}
}
]
}
}
}
}
' --image=gcr.io/some-project/news/model-run:development
Any solution will be appreciated.
The node was low on resource: memory. Container model-run was using 1904944Ki, which exceeds its request of 0.
At first the message looks like the node itself is short on memory, but the second part suggests you are correct in trying to raise the memory request for the container.
Just keep in mind that if you still face errors after this change, you might need to add more powerful node pools to your cluster.
I went through your command; there are a few issues I'd like to highlight:
kubectl run was deprecated in 1.12 for all resources except pods, and the other generators were removed in version 1.18.
apiVersion": "apps/v1beta1 is deprecated, and starting on v 1.16 it is no longer be supported, I replaced with apps/v1.
In spec.template.spec.containers, "resouces" is written instead of "resources".
After fixing the spelling, the next issue is that requests and limits are written as arrays, but they must be maps of resource name to quantity; otherwise you get this error:
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
error: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Resources: v1.ResourceRequirements.Limits: ReadMapCB: expect { or n, but found [, error found in #10 byte of ...|"limits":[{"cpu":"75|..., bigger context ...|Always","name":"model-run","resources":{"limits":[{"cpu":"750m","memory":"2500Mi"}],"requests":[{"cp|...
Here is the fixed format of your command:
kubectl run model-run --image-pull-policy=Always --overrides='{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"name": "model-run",
"labels": {
"app": "model-run"
}
},
"spec": {
"selector": {
"matchLabels": {
"app": "model-run"
}
},
"template": {
"metadata": {
"labels": {
"app": "model-run"
}
},
"spec": {
"containers": [
{
"name": "model-run",
"image": "nginx",
"imagePullPolicy": "Always",
"resources": {
"requests": {
"memory": "2048Mi",
"cpu": "500m"
},
"limits": {
"memory": "2500Mi",
"cpu": "750m"
}
},
"volumeMounts": [
{
"name": "credentials",
"readOnly": true,
"mountPath": "/path/collection/keys"
}
],
"env": [
{
"name": "GOOGLE_APPLICATION_CREDENTIALS",
"value": "/path/collection/keys/key.json"
}
]
}
],
"volumes": [
{
"name": "credentials",
"secret": {
"secretName": "credentials"
}
}
]
}
}
}
}' --image=gcr.io/some-project/news/model-run:development
Now, after applying it on my Kubernetes Engine cluster v1.15.11-gke.13, here is the output of kubectl get pod X -o yaml:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
model-run-7bd8d79c7d-brmrw 1/1 Running 0 17s
$ kubectl get pod model-run-7bd8d79c7d-brmrw -o yaml
apiVersion: v1
kind: Pod
metadata:
labels:
app: model-run
pod-template-hash: 7bd8d79c7d
run: model-run
name: model-run-7bd8d79c7d-brmrw
namespace: default
spec:
containers:
- env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /path/collection/keys/key.json
image: nginx
imagePullPolicy: Always
name: model-run
resources:
limits:
cpu: 750m
memory: 2500Mi
requests:
cpu: 500m
memory: 2Gi
volumeMounts:
- mountPath: /path/collection/keys
name: credentials
readOnly: true
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-tjn5t
readOnly: true
nodeName: gke-cluster-115-default-pool-abca4833-4jtx
restartPolicy: Always
volumes:
- name: credentials
secret:
defaultMode: 420
secretName: credentials
You can see that the resource limits and requests were set.
If you still have any questions, let me know in the comments!
It seems we cannot override limits through the --overrides flag.
What you can do instead is pass the requests and limits directly on the kubectl run command line:
kubectl run model-run --image-pull-policy=Always --requests='cpu=500m,memory=2048Mi' --limits='cpu=750m,memory=2500Mi' --overrides='
{
"apiVersion": "apps/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "model-run",
"labels": {
"app": "model-run"
}
},
"spec": {
"selector": {
"matchLabels": {
"app": "model-run"
}
},
"template": {
"metadata": {
"labels": {
"app": "model-run"
}
},
"spec": {
"containers": [
{
"name": "model-run",
"image": "gcr.io/some-project/news/model-run:development",
"imagePullPolicy": "Always",
"resouces": {
"requests": [
{
"memory": "2048Mi",
"cpu": "500m"
}
],
"limits": [
{
"memory": "2500Mi",
"cpu": "750m"
}
]
},
"volumeMounts": [
{
"name": "credentials",
"readOnly": true,
"mountPath":"/path/collection/keys"
}
],
"env":[
{
"name":"GOOGLE_APPLICATION_CREDENTIALS",
"value":"/path/collection/keys/key.json"
}
]
}
],
"volumes": [
{
"name": "credentials",
"secret": {
"secretName": "credentials"
}
}
]
}
}
}
}
' --image=gcr.io/some-project/news/model-run:development
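Either way, once the deployment is applied you can verify what actually landed, for example with jsonpath (the field path assumes the Deployment above):
kubectl get deployment model-run -o jsonpath='{.spec.template.spec.containers[0].resources}'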
I want to create a pod using the kubectl CLI which will mount the hostPath /etc/os-release inside the pod's container and display the content of the /etc/os-release file.
Is it possible to do this with a one-liner kubectl command?
kubectl run -i --rm busybox --image=busybox --overrides='{
"apiVersion": "v1",
"spec": {
"containers": [
{
"image": "busybox",
"name": "busybox",
"command": ["cat", "/etc/os-release"],
"resources": {},
"volumeMounts": [
{
"mountPath": "/etc/os-release",
"name": "release"
}
]
}
],
"volumes": [
{
"name": "release",
"hostPath": {
"path": "/etc/os-release",
"type": "File"
}
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Never"
},
"status": {}
}'
NAME=Buildroot
VERSION=2019.02.10
ID=buildroot
VERSION_ID=2019.02.10
PRETTY_NAME="Buildroot 2019.02.10"
pod "busybox" deleted
I'm trying to write a go template that extracts the value of the load balancer. Using --go-template={{status.loadBalancer.ingress}} returns [map[hostname:GUID.us-west-2.elb.amazonaws.com]]% When I add .hostname to the template I get an error saying, "can't evaluate field hostname in type interface {}". I've tried using the range keyword, but I can't seem to get the syntax right.
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"creationTimestamp": "2018-07-30T17:22:12Z",
"labels": {
"run": "nginx"
},
"name": "nginx-http",
"namespace": "jx",
"resourceVersion": "495789",
"selfLink": "/api/v1/namespaces/jx/services/nginx-http",
"uid": "18aea6e2-941d-11e8-9c8a-0aae2cf24842"
},
"spec": {
"clusterIP": "10.100.92.49",
"externalTrafficPolicy": "Cluster",
"ports": [
{
"nodePort": 31032,
"port": 80,
"protocol": "TCP",
"targetPort": 8080
}
],
"selector": {
"run": "nginx"
},
"sessionAffinity": "None",
"type": "LoadBalancer"
},
"status": {
"loadBalancer": {
"ingress": [
{
"hostname": "GUID.us-west-2.elb.amazonaws.com"
}
]
}
}
}
As you can see from the JSON, the ingress element is an array. You can use the template function index to grab this array element.
Try:
kubectl get svc <name> -o=go-template --template='{{(index .status.loadBalancer.ingress 0 ).hostname}}'
This assumes, of course, that you're only provisioning a single load balancer; if you have multiple, you'll have to use range.
Try this to handle multiple services at once (note that kubectl get svc <name> returns a single object with no .items list, so the name is dropped here):
kubectl get svc -o go-template='{{range .items}}{{range .status.loadBalancer.ingress}}{{.hostname}}{{printf "\n"}}{{end}}{{end}}'
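If you'd rather avoid go-template quoting altogether, jsonpath can express the same lookup for a single named service:
kubectl get svc <name> -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'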
I'm trying to 'push' entries into a container's /etc/hosts file by using the 'hostAliases' element of a Pod when defining a Replication Controller.
This is the Replication Controller definition:
{
"kind": "ReplicationController",
"apiVersion": "v1",
"metadata": {
"name": "my-test",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/replicationcontrollers/my-test",
"uid": "1f80363b-5a1d-11e8-b352-08002725d524",
"resourceVersion": "70872",
"generation": 1,
"creationTimestamp": "2018-05-17T21:56:16Z",
"labels": {
"name": "my-test"
}
},
"spec": {
"replicas": 2,
"selector": {
"name": "my-test"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"name": "my-test"
}
},
"spec": {
"containers": [
{
"name": "cont0",
"image": "192.168.97.10:5000/test_img:kubes",
"command": [
"/usr/local/bin/php"
],
"args": [
"/opt/code/myscript.php",
"myarg"
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"hostAliases": [
{
"ip": "10.0.2.2",
"hostnames": [ "mylaptop"],
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
}
},
"status": {
"replicas": 2,
"fullyLabeledReplicas": 2,
"observedGeneration": 1
}
}
However, the hostAliases just seem to be ignored...
Any pointers on whether this is supported, and if not, any alternative methods of achieving the same?
I hope this is the solution; I quickly verified it and it worked. I created a simple nginx deployment and then edited it to add the hostAliases entry.
With the snippet below, I was getting an error and could not save the deployment:
"hostAliases": [
{
"ip": "10.0.2.2",
"hostnames": [ "mylaptop"],
}
],
The problem is the trailing comma after [ "mylaptop"]; once I removed it and then saved the deployment:
"hostAliases": [
{
"ip": "10.0.2.2",
"hostnames": [ "mylaptop"]
}
],
I could see the entry in the hosts file:
$kubectl exec -i -t nginx-5f6bbbdf97-m6d92 -- sh
#
#
# cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.8.4.147 nginx-5f6bbbdf97-m6d92
# Entries added by HostAliases.
10.0.2.2 mylaptop
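As an aside, a trailing comma like the one above is invalid JSON, so it can save time to validate the manifest before applying it. A quick sketch with jq (assuming jq is installed; rc.json is a hypothetical file holding the manifest):
jq . rc.json
On success this pretty-prints the parsed JSON; on a trailing comma it exits non-zero with a parse error pointing at the offending character.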
I need to map a volume while starting the container; I am able to do so with a YAML file.
Is there a way volume mapping can be done via the command line without using a YAML file, just like the -v option in Docker?
without using yaml file
Technically, yes: you would need an inline JSON override instead, as illustrated in "Create kubernetes pod with volume using kubectl run".
See kubectl run.
kubectl run -i --rm --tty ubuntu --overrides='
{
"apiVersion": "batch/v1",
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "ubuntu",
"image": "ubuntu:14.04",
"args": [
"bash"
],
"stdin": true,
"stdinOnce": true,
"tty": true,
"volumeMounts": [{
"mountPath": "/home/store",
"name": "store"
}]
}
],
"volumes": [{
"name":"store",
"emptyDir":{}
}]
}
}
}
}
' --image=ubuntu:14.04 --restart=Never -- bash
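Note that on recent kubectl versions (the run generators were removed in 1.18), kubectl run always creates a bare Pod, so the batch/v1 Job-style spec.template wrapper above is ignored, which is the same pitfall as in the first question in this thread. On those versions, target the Pod spec directly, for example:
kubectl run -i --rm --tty ubuntu --image=ubuntu:14.04 --restart=Never --overrides='
{
    "apiVersion": "v1",
    "spec": {
        "containers": [{
            "name": "ubuntu",
            "image": "ubuntu:14.04",
            "args": ["bash"],
            "stdin": true,
            "stdinOnce": true,
            "tty": true,
            "volumeMounts": [{
                "mountPath": "/home/store",
                "name": "store"
            }]
        }],
        "volumes": [{
            "name": "store",
            "emptyDir": {}
        }]
    }
}
' -- bash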