How to start a pod from the command line without a deployment in Kubernetes?

I want to debug the pod in a simple way, so I want to start the pod without a deployment.
But kubectl run automatically creates a deployment:
$ kubectl run nginx --image=nginx --port=80
deployment "nginx" created
So I have to create an nginx.yaml file:
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
And create the pod as below, which creates only the pod:
kubectl create -f nginx.yaml
pod "nginx" created
How can I specify kind: Pod on the command line to avoid creating a deployment?
// I run minikube 0.20.0 and Kubernetes 1.7.0 under Windows 7

kubectl run nginx --image=nginx --port=80 --restart=Never
--restart=Always: The restart policy for this Pod. Legal values [Always, OnFailure, Never]. If set to Always a deployment is created, if set to OnFailure a job is created, if set to Never, a regular pod is created. For the latter two --replicas must be 1. Default Always [...]
See the official documentation: https://kubernetes.io/docs/user-guide/kubectl-conventions/#generators
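If you want to double-check before creating anything, a dry run (using the --dry-run and -o yaml flags that also appear later in this thread) should print a Pod manifest rather than a Deployment:
kubectl run nginx --image=nginx --port=80 --restart=Never --dry-run -o yaml
The output should begin with apiVersion: v1 and kind: Pod.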

There are two ways one can create a pod through the command line.
kubectl run nginx --image=nginx --restart=Never
OR
kubectl run --generator=run-pod/v1 nginx1 --image=nginx
See official documentation.
https://kubernetes.io/docs/reference/kubectl/conventions/#generators

Use generators for this. By default, kubectl run will create a Deployment object. If you want to override this behavior, use the "run-pod/v1" generator.
kubectl run --generator=run-pod/v1 nginx1 --image=nginx
You may refer to the link below for a better understanding.
https://kubernetes.io/docs/reference/kubectl/conventions/#generators
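To confirm that only the pod was created and no Deployment or ReplicaSet appeared alongside it, a quick check (no assumptions beyond the pod name used above) is:
kubectl get deployments,pods
Only the nginx1 pod should be listed; in a fresh namespace the Deployments list will be empty.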

I'm relatively new to Kubernetes, but it seems it has evolved quite a bit since this question was asked. As of its latest versions (I'm running v1.16) generators are deprecated, and they are completely removed in v1.18.
See the corresponding ticket and the release notes.
Release notes explicitly say:
Remove all the generators from kubectl run. It will now only create pods.
I've tested kubectl run with various --restart flags and never got any deployments created. What we have now is called a "naked" Pod. And while you might be tempted to use it, that goes against k8s best practices:
Don’t use naked Pods (that is, Pods not bound to a ReplicaSet or Deployment) if you can avoid it. Naked Pods will not be rescheduled in the event of a node failure.
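If you do need the container to survive node failures, the usual approach is to wrap the same container in a Deployment. A minimal sketch (the name and the app label are only illustrative) could look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Apply it with kubectl apply -f and Kubernetes will reschedule the pod if its node goes down.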

When you use "kubectl run nginx --image=nginx --port=80" it creates a deployment by default.
To create just a pod you have two options:
kubectl run --generator=run-pod/v1 nginx --image=nginx --port=80
kubectl run nginx --image=nginx --port=80 --restart=Never

Related

How to give annotations to a pod by using the run command in Kubernetes

I attempted it, but there is an error. I also see "See 'kubectl run --help' for usage." but I can't fix it:
kubectl run pod pod4 --image=aamirpinger/helloworld:latest --port=80 --annotaions=createdBy="Muhammad Shahbaz" --restart=Never
Error: unknown flag: --annotaions
kubectl run supports specifying annotations via the --annotations flag that can be specified multiple times to apply multiple annotations.
For example:
$ kubectl run --image myimage --annotations="foo=bar" --annotations="another=one" mypod
results in the following:
$ kubectl get pod mypod -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    foo: bar
    another: one
[...]
kubectl run doesn't have an option to set annotations.
Unless you're running a one-off debugging pod, it's usually better practice to write out the full (Deployment) YAML file, commit to source control, and install it using kubectl apply -f. That will let you specify any Kubernetes object property you need to.
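For example, a minimal Deployment for the image from the question, with the desired annotation set on the pod template (all names here are only illustrative), might look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
      annotations:
        createdBy: "Muhammad Shahbaz"
    spec:
      containers:
      - name: helloworld
        image: aamirpinger/helloworld:latest
        ports:
        - containerPort: 80
You would then install or update it with kubectl apply -f <file>.yaml.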
As David Maze mentioned, there is no --annotations flag for the kubectl run command. It is better to write a deployment YAML file than to create resources with kubectl run.
However, you can add annotations to Kubernetes resources using the kubectl annotate command. All Kubernetes objects support the ability to store additional data with the object as annotations.
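For example, to attach the annotation from the question to an already running pod named pod4 (the name used in the question):
kubectl annotate pod pod4 createdBy="Muhammad Shahbaz"
An existing annotation can be changed later by adding the --overwrite flag.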
Hope this helps.

Check the working of a service in Kubernetes

I created a pod to test my service in Kubernetes, but I didn't get anything. Here are my commands:
kubectl run --generator=run-pod/v1 nginx-resolver --image=nginx
kubectl expose pod nginx-resolver --name=nginx-resolver-service --port=80 --target-port=80 --type=ClusterIP
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 --rm -it -- nslookup nginx-resolver-service
Please help me explain why. Thanks
Run the following command and you will see what you are getting wrong about this cmd:
$ kubectl run --help
Create and run a particular image, possibly replicated.
Creates a deployment or job to manage the created container(s).
Examples:
# Start a single instance of nginx.
kubectl run nginx --image=nginx
# Start a single instance of hazelcast and let the container expose port 5701 .
kubectl run hazelcast --image=hazelcast --port=5701
...
So, the kubectl run cmd creates a deployment or a job.
If it is a deployment, it creates (a) first a ReplicaSet, (b) then Pod(s).
If it is a job, it creates a Pod.
But you are trying to expose a Pod whose name is not the correct one. You can see the name of the Pod(s) created by the kubectl run cmd:
$ kubectl get pods --namespace=<namespace> | grep "nginx-resolver"
$ kubectl get pods --namespace=<namespace> | grep "test-nslookup"
Then use those names to expose Pod.
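For example, if the pod created by kubectl run turned out to be named something like nginx-resolver-<suffix> (a hypothetical name; copy the real one from the kubectl get pods output), the expose command would become:
kubectl expose pod nginx-resolver-<suffix> --name=nginx-resolver-service --port=80 --target-port=80 --type=ClusterIP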
You can optionally expose your Deployment. To do so, see the help of kubectl expose deployment. Run:
$ kubectl expose deployment --help
Expose a resource as a new Kubernetes service.
Looks up a deployment, service, replica set, replication controller or pod by name and uses the selector for that
resource as the selector for a new service on the specified port. A deployment or replica set will be exposed as a
service only if its selector is convertible to a selector that service supports, i.e. when the selector contains only
the matchLabels component. Note that if no port is specified via --port and the exposed resource has multiple ports, all
will be re-used by the new service. Also if no labels are specified, the new service will re-use the labels from the
resource it exposes.
Possible resources include (case insensitive):
pod (po), service (svc), replicationcontroller (rc), deployment (deploy), replicaset (rs)
Examples:
...
# Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000.
kubectl expose deployment nginx --port=80 --target-port=8000
...
If you want to see the log interactively, you need to set the --restart option of your test-nslookup pod to Never or OnFailure. Otherwise, kubernetes will just restart your pod indefinitely and you won't see anything.
So your last command should be :
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 -it --restart=OnFailure -- nslookup nginx-resolver-service
Why?
Probably because of this issue.
It seems there is a delay of 5s before kubectl run actually prints something.
So in order to do it without changing the restart option, you'll need to change your command like this (beware of the sleep 7: you'll have to wait 7 seconds before seeing the logs):
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 -it --rm -- sh -c 'sleep 7; nslookup nginx-resolver-service'

k3s cleanup of HelmChart?

I have followed instructions from this blog post to set up a k3s cluster on a couple of Raspberry Pi 4 boards.
I'm now trying to get my hands dirty with Traefik as the front end, but I'm having issues with the way it has been deployed, as a 'HelmChart' I think.
From the k3s docs:
It is also possible to deploy Helm charts. k3s supports a CRD controller for installing charts. A YAML file specification can look as following (example taken from /var/lib/rancher/k3s/server/manifests/traefik.yaml):
So I have been starting up k3s with the --no-deploy traefik option in order to add it manually with my own settings. I therefore apply a YAML like this:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.64.0.tgz
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
    kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
    dashboard:
      enabled: true
      domain: "traefik.k3s1.local"
But when trying to iterate over the settings to get it working as I want, I'm having trouble tearing it down. If I try kubectl delete -f on this YAML, it just hangs indefinitely. And I can't seem to find a clean way to delete all the resources manually either.
I've been resorting to reinstalling my entire cluster over and over because I can't seem to clean up properly.
Is there a way to delete all the resources created by a chart like this without the helm cli (which I don't even have)?
Are you sure that kubectl delete -f is hanging?
I had the same issue as you and it seemed like kubectl delete -f was hanging, but it was really just taking a long time.
As far as I can tell, when you issue the kubectl delete -f, a pod in the kube-system namespace with a name of helm-delete-* should spin up and try to delete the resources deployed via Helm. You can get the full name of that pod by running kubectl -n kube-system get pods and finding the one named helm-delete-<name of yaml>-<id>. Then use the pod name to look at the logs using kubectl -n kube-system logs helm-delete-<name of yaml>-<id>.
An example of what I did was:
kubectl delete -f jenkins.yaml # seems to hang
kubectl -n kube-system get pods # look at pods in kube-system namespace
kubectl -n kube-system logs helm-delete-jenkins-wkjct # look at the delete logs
I see two options here:
Use the --now flag to delete your yaml file with minimal delay.
Use --grace-period=0 --force flags to force delete the resource.
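Applied to the traefik.yaml from the question, those two options would look roughly like this (the second one skips graceful termination, so treat it as a last resort):
kubectl delete -f traefik.yaml --now
kubectl delete -f traefik.yaml --grace-period=0 --force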
There are other options but you'll need Helm CLI for them.
Please let me know if that helped.

Why does kubectl run create a deployment sometimes

Can I know why kubectl run sometimes creates a deployment and sometimes a pod?
You can see the 1st one creates a pod and the 2nd one creates a deployment. The only difference is --restart=Never.
// 1
chams#master:~/yml$ kubectl run ng --image=ngnix --command --restart=Never --dry-run -o yaml
apiVersion: v1
kind: Pod
..
status: {}
//2
chams#master:~/yml$ kubectl run ng --image=ngnix --command --dry-run -o yaml
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: ng
  name: ng
..
status: {}
The flags are intentionally meant to create different kinds of objects. I am copying from the help of kubectl run:
--restart='Always': The restart policy for this Pod. Legal values [Always,
OnFailure, Never]. If set to 'Always' a deployment is created, if set to
'OnFailure' a job is created, if set to 'Never', a regular pod is created. For
the latter two --replicas must be 1. Default 'Always', for CronJobs `Never`.
Never creates a regular pod that runs to completion, somewhat like a job scheduled immediately.
Always creates a deployment, and the deployment monitors the pod and restarts it in case of failure.
By default the kubectl run command creates a Deployment.
Using the kubectl run command you can create and run a particular image, possibly replicated. It creates a deployment or job to manage the created containers.
The difference in your case is that the first command includes the restart policy argument. If the value of the restart policy is set to 'Never', a regular pod is created. For the latter two, --replicas must be 1. Default 'Always', for CronJobs 'Never'.
Try to use command:
$ kubectl run --generator=run-pod/v1 ng --image=ngnix --command --dry-run -o yaml
instead of
$ kubectl run ng --image=ngnix --command --dry-run -o yaml
to avoid the statement:
"kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead."
You can find more information here: docker-kubectl, kubectl-run.

Cannot delete pods in Kubernetes

I tried installing dgraph (single server) using Kubernetes.
I created pod using:
kubectl create -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml
Now all I need to do is to delete the created pods.
I tried deleting the pod using:
kubectl delete pod pod-name
The result shows pod deleted, but the pod keeps recreating itself.
I need to remove those pods from my Kubernetes. What should I do now?
I did face the same issue. Run the command:
kubectl get deployment
You will get the deployment corresponding to your pod. Copy its name and then run:
kubectl delete deployment xyz
Then check; no new pods will be created.
The link provided by the OP may be unavailable. See the update section.
As you specified, you created your dgraph server using https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml, so just use the same file to delete the resources you created:
$ kubectl delete -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml
Update
Basically, this is an explanation of the reason.
Kubernetes has some workloads (those that contain a PodTemplate in their manifest). These are:
Pods
Controllers (basically Pod controllers):
  ReplicationController
  ReplicaSet
  Deployment
  StatefulSet
  DaemonSet
  Job
  CronJob
See, who controls whom:
ReplicationController -> Pod(s)
ReplicaSet -> Pod(s)
Deployment -> ReplicaSet(s) -> Pod(s)
StatefulSet -> Pod(s)
DaemonSet -> Pod(s)
Job -> Pod
CronJob -> Job(s) -> Pod
a -> b means a creates and controls b, and the value of the field .metadata.ownerReferences in b's manifest is the reference of a. For example,
apiVersion: v1
kind: Pod
metadata:
  ...
  ownerReferences:
  - apiVersion: apps/v1
    controller: true
    blockOwnerDeletion: true
    kind: ReplicaSet
    name: my-repset
    uid: d9607e19-f88f-11e6-a518-42010a800195
  ...
This way, deletion of the parent object will also delete the child objects via garbage collection.
So, a's controller ensures that a's current status matches a's spec. Say, if one deletes b, then b will be deleted. But a is still alive, and a's controller sees that there is a difference between a's current status and a's spec. So a's controller recreates a new b object to match a's spec.
The OP created a Deployment, which created a ReplicaSet, which in turn created the Pod(s). So here the solution was to delete the root object, which was the Deployment:
$ kubectl get deploy -n {namespace}
$ kubectl delete deploy {deployment name} -n {namespace}
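If you are not sure which parent object a pod belongs to, you can read its owner reference directly (the pod name is a placeholder):
$ kubectl get pod <pod-name> -n {namespace} -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
This prints something like ReplicaSet/my-repset, telling you which controller will keep recreating the pod until that controller itself is deleted.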
Note
Another problem that may arise during deletion is as follows:
If there is any finalizer in the .metadata.finalizers[] section, then the deletion will only be performed after the task(s) of the associated controller have completed. If one wants to delete the object without performing the finalizer(s)' action(s), then he/she has to delete those finalizer(s) first. For example,
$ kubectl patch -n {namespace} deploy {deployment name} --patch '{"metadata":{"finalizers":[]}}'
$ kubectl delete -n {namespace} deploy {deployment name}
You can perform a graceful pod deletion with the following command:
kubectl delete pods <pod>
If you want to delete a Pod forcibly using kubectl version >= 1.5, do the following:
kubectl delete pods <pod> --grace-period=0 --force
If you’re using any version of kubectl <= 1.4, you should omit the --force option and use:
kubectl delete pods <pod> --grace-period=0
If even after these commands the pod is stuck in the Unknown state, use the following command to remove the pod from the cluster:
kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'
Pods in Kubernetes also depend on their type, such as:
Replication Controllers
Replica Sets
StatefulSets
Deployments
Daemon Sets
Pod
Do kubectl describe pod <podname> and check
apiVersion: apps/v1
kind: StatefulSet
metadata:
Now do kubectl get <pod-kind>.
Finally, delete the same and the pod will also be deleted.
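A concrete sketch for the dgraph setup from the question (dgraph-0 follows the usual StatefulSet pod naming, so adjust if yours differs):
kubectl describe pod dgraph-0 | grep "Controlled By"
kubectl get statefulset
kubectl delete statefulset dgraph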
As @Shudipta Sharma's answer is obviously the correct way to delete the pods, I would just like to make sure the author understands why this is happening.
The reason is the "mindset" of Kubernetes, in which Pods are considered to be ephemeral, throwaway entities. As Pods come and go, StatefulSets are one way of ensuring that a given number of pods with unique identities will be running at any given time. Looking at the yaml file you used to deploy:
# This StatefulSet runs 1 pod with one Zero, one Alpha & one Ratel containers.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dgraph
spec:
  serviceName: "dgraph"
  replicas: 1
By deploying this you are basically saying that you want Kubernetes to always run 1 replica of that Pod, at any time. When you delete the Pod, that condition is no longer true, so after deletion another Pod is spawned to make the condition above valid again.
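You can watch this happen for yourself (dgraph-0 is the usual name of the first pod of a StatefulSet called dgraph; adjust if yours differs):
kubectl delete pod dgraph-0
kubectl get pods -w
A replacement dgraph-0 shows up almost immediately, recreated by the StatefulSet controller to restore the desired replica count of 1.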
The way that @Shudipta Sharma provided is simply deletion of that StatefulSet, so you no longer have a desired state keeping an eye on the number of running Pods.
You can find more about that in Kubernetes documentation on:
StatefulSets
Cluster's desired state
More about Kubernetes objects and difference between each of them
Delete the deployment, not the pods. It is the deployment that is creating another pod. You can see a different pod name after you delete the pods.
kubectl get all
kubectl delete deployment DEPLOYMENTNAME