Difference between two deployment commands in Kubernetes

What is the difference between kubectl create deploy and kubectl create deployment? The Google Cloud Fundamentals lab uses the kubectl create deploy command, but the Kubernetes documentation/interactive tutorial (https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/) uses the command kubectl create deployment. So I just wanted to ask this group: which one is the correct/latest?

The kubectl create deploy and kubectl create deployment commands both create a Deployment in Kubernetes. The only difference is that deploy is a shorthand alias for deployment; both forms accept the same options and produce the same result.
The two commands are effectively interchangeable and both will create a Kubernetes Deployment. Ultimately, it is up to the user to decide which syntax to use, as long as the deployment is created successfully.
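For example, on a recent kubectl you can verify this yourself by generating the manifests with a client-side dry run and comparing them (nginx here is just a placeholder name and image):
# Both commands produce the same Deployment manifest; "deploy" is only the short alias
kubectl create deploy nginx --image=nginx --dry-run=client -o yaml
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml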
As #gohmc and #fcmam5 said, the kubectl api-resources command prints a list of all the available API resources in the Kubernetes cluster. The list includes the resource name, the kind of resource it is, and the API version it belongs to. Here is an example of the output of the command:
NAME                     KIND                    APIVERSION
bindings                 Binding                 v1
configmaps               ConfigMap               v1
endpoints                Endpoints               v1
events                   Event                   v1beta1
limitranges              LimitRange              v1
namespaces               Namespace               v1
pods                     Pod                     v1
replicationcontrollers   ReplicationController   v1
resourcequotas           ResourceQuota           v1
secrets                  Secret                  v1
services                 Service                 v1
As per this SO answer, you can also run kubectl api-resources -o wide to show all the resources, their verbs, and the associated API groups.
For more information, refer to the Kubernetes cheat sheet.

They mean the same thing. You can find the SHORTNAMES for K8s resources with kubectl api-resources.

deploy is a short name for deployment, just as po is for pod. You can see the full list of resources and their shortened names with:
kubectl api-resources
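For instance, to look up the short name of a single resource without scrolling through the whole list (the exact columns can vary slightly between kubectl versions, but SHORTNAMES is shown by default):
# Show only the deployments and pods rows; the second column contains the short names
kubectl api-resources | grep -E '^(deployments|pods) '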


Running a Pod from another Pod in the same kubernetes namespace

I am building an application which should execute tasks in separate containers/pods.
This application runs in a specific namespace, and the new pods must be created in the same namespace as well.
I understand we can achieve something similar via a custom CRD and an Operator, but I found that overly complicated, and it requires Golang knowledge.
Is there any way this could be achieved without having to learn Operators and Golang?
I am OK with using kubectl or the API from within my container and want to connect to the cluster and work in the same namespace.
Yes, this is certainly possible using a ServiceAccount and then connecting to the API from within the Pod.
First, create a ServiceAccount in your namespace using
kubectl create serviceaccount my-service-account
For your newly created ServiceAccount, give it the permissions you want using Roles and RoleBindings. The subject would be something like this:
subjects:
- kind: ServiceAccount
  name: my-service-account
  namespace: my-namespace
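If you prefer to create the Role and RoleBinding imperatively instead of writing manifests, a minimal sketch looks like this (my-namespace, my-service-account, and the role/binding names are placeholders; narrow the verbs to what your application really needs):
# Allow the ServiceAccount to manage Pods in its own namespace
kubectl create role pod-runner --verb=create,get,list,watch,delete --resource=pods -n my-namespace
kubectl create rolebinding pod-runner-binding --role=pod-runner \
  --serviceaccount=my-namespace:my-service-account -n my-namespace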
Then, add the ServiceAccount to the Pod from which you want to create other Pods (see the documentation). Credentials are automatically mounted inside the Pod via automountServiceAccountToken (which is enabled by default).
Now from inside the Pod you can either use kubectl or call the API using the credentials mounted inside the Pod. There are libraries for many programming languages to talk to Kubernetes; use those.
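To illustrate how the mounted credentials are used, here is a rough sketch of calling the API directly from inside the Pod with curl; the paths are the standard locations where Kubernetes mounts the ServiceAccount token, CA bundle, and namespace:
# Read the mounted ServiceAccount credentials
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# List Pods in the Pod's own namespace via the in-cluster API endpoint
curl --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/$NS/pods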

Kubernetes understanding output of - kubectl auth can-i

I'm trying to understand why an operation is permitted on one cluster but on the other I'm getting the following:
Exception encountered setting up namespace watch from Kubernetes API v1 endpoint https://10.100.0.1:443/api: namespaces is forbidden: User \"system:serviceaccount:kube-system:default\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope ({\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"namespaces is forbidden: User \\\"system:serviceaccount:kube-system:default\\\" cannot list resource \\\"namespaces\\\" in API group \\\"\\\" at the cluster scope\",\"reason\":\"Forbidden\",\"details\":{\"kind\":\"namespaces\"},\"code\":403}\n)"
I'm managing two Kubernetes clusters -
clusterA booted with Kops version v1.14.8
clusterB booted on AWS EKS version v1.14.9-eks-f459c0
So I've tried using the kubectl auth command to figure it out, and I can see that on one cluster I'm allowed but on the second I'm not:
kubectl config use-context clusterA
Switched to context "clusterA".
kubectl auth can-i list pods --as=system:serviceaccount:kube-system:default -n kube-system
yes
kubectl config use-context clusterB
Switched to context "clusterB".
kubectl auth can-i list pods --as=system:serviceaccount:kube-system:default -n kube-system
no
Is there a way to understand what these two yes/no decisions are based on?
Thanks for helping out!
The yes/no decision is based on whether there is a ClusterRole together with a ClusterRoleBinding (or RoleBinding) that permits the default ServiceAccount in the kube-system namespace to perform the list verb on the namespaces resource.
The trick in the case of the namespaces resource is that a ClusterRole is needed instead of a Role, because namespaces are a cluster-scoped resource.
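If you decide the access should in fact be granted on the second cluster, a minimal imperative sketch would be (the role and binding names below are placeholders; grant only what you actually need):
# Allow the default ServiceAccount in kube-system to list namespaces cluster-wide
kubectl create clusterrole namespace-lister --verb=list --resource=namespaces
kubectl create clusterrolebinding namespace-lister-binding \
  --clusterrole=namespace-lister --serviceaccount=kube-system:default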
You can check which ClusterRoles, Roles, ClusterRoleBindings, and RoleBindings exist in a Kubernetes cluster using the commands below:
kubectl get clusterrole,clusterrolebinding
kubectl get role,rolebinding -n namespacename
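You can also reproduce the exact check from the error message on each cluster, since listing namespaces (not pods) is what the ServiceAccount is being denied:
kubectl auth can-i list namespaces --as=system:serviceaccount:kube-system:default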
For more details, refer to the Kubernetes RBAC documentation.

Can not delete pods in Kubernetes

I tried installing dgraph (single server) using Kubernetes.
I created pod using:
kubectl create -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml
Now all I need to do is to delete the created pods.
I tried deleting the pod using:
kubectl delete pod pod-name
The result says the pod was deleted, but the pod keeps recreating itself.
I need to remove those pods from my Kubernetes cluster. What should I do now?
I did face the same issue. Run this command:
kubectl get deployment
You will get the deployment that corresponds to your pod. Copy its name and then run:
kubectl delete deployment xyz
Then check: no new pods will be created.
The link provided by the OP may be unavailable; see the update section below.
Since you created your dgraph server using https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml, just use the same file to delete the resources you created:
$ kubectl delete -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml
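To confirm that nothing is being recreated afterwards, you can watch the pod list for a short while:
kubectl get pods --watch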
Update
Basically, this is an explanation of the reason.
Kubernetes has several workload kinds (most of which contain a PodTemplate in their manifest). These are:
Pods
Controllers (basically Pod controllers):
  ReplicationController
  ReplicaSet
  Deployment
  StatefulSet
  DaemonSet
  Job
  CronJob
See, who controls whom:
ReplicationController -> Pod(s)
ReplicaSet -> Pod(s)
Deployment -> ReplicaSet(s) -> Pod(s)
StatefulSet -> Pod(s)
DaemonSet -> Pod(s)
Job -> Pod
CronJob -> Job(s) -> Pod
a -> b means that a creates and controls b, and the value of the field .metadata.ownerReferences in b's manifest is a reference to a. For example:
apiVersion: v1
kind: Pod
metadata:
  ...
  ownerReferences:
  - apiVersion: apps/v1
    controller: true
    blockOwnerDeletion: true
    kind: ReplicaSet
    name: my-repset
    uid: d9607e19-f88f-11e6-a518-42010a800195
  ...
This way, deletion of the parent object will also delete the child objects via garbage collection.
So, a's controller ensures that a's current status matches a's spec. Say one deletes b; then b is deleted, but a is still alive, and a's controller sees that there is a difference between a's current status and a's spec. So a's controller creates a new b object to match a's spec.
The OP created a Deployment, which created a ReplicaSet, which in turn created the Pod(s). So here the solution was to delete the root object, which was the Deployment.
$ kubectl get deploy -n {namespace}
$ kubectl delete deploy {deployment name} -n {namespace}
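If you are unsure which object owns a Pod, you can inspect its ownerReferences before deleting anything; a small sketch (the pod and namespace names are placeholders):
# Print the kind and name of each owner of the Pod
kubectl get pod <pod-name> -n <namespace> \
  -o jsonpath='{range .metadata.ownerReferences[*]}{.kind}/{.name}{"\n"}{end}'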
Note:
Another problem that may arise during deletion is the following:
If there is any finalizer in the .metadata.finalizers[] list, the deletion will only be performed after the task(s) of the associated controller have completed. If you want to delete the object without waiting for the finalizer(s), you have to remove those finalizer(s) first. For example,
$ kubectl patch -n {namespace} deploy {deployment name} --patch '{"metadata":{"finalizers":[]}}'
$ kubectl delete -n {namespace} deploy {deployment name}
You can perform a graceful pod deletion with the following command:
kubectl delete pods <pod>
If you want to delete a Pod forcibly using kubectl version >= 1.5, do the following:
kubectl delete pods <pod> --grace-period=0 --force
If you’re using any version of kubectl <= 1.4, you should omit the --force option and use:
kubectl delete pods <pod> --grace-period=0
If even after these commands the pod is stuck in Unknown state, use the following command to remove the pod from the cluster:
kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'
Pods in Kubernetes also depend on their controller type, such as:
ReplicationControllers
ReplicaSets
StatefulSets
Deployments
DaemonSets
bare Pods
Run kubectl describe pod <podname> (or kubectl get pod <podname> -o yaml) and check which kind of object controls it, for example a StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
Now run kubectl get <pod-kind> to find the owning object.
Finally, delete that object and the pod will be deleted along with it.
#Shudipta Sharma's answer is obviously the correct way to delete the pods; I would just like to make sure the author understands why this is happening.
The reason is the "mindset" of Kubernetes, in which Pods are considered ephemeral, throwaway entities. As Pods come and go, StatefulSets are one way of ensuring that a given number of Pods with unique identities will be running at any given time. Looking at the YAML file you used to deploy:
# This StatefulSet runs 1 pod with one Zero, one Alpha & one Ratel containers.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dgraph
spec:
  serviceName: "dgraph"
  replicas: 1
By deploying this you are basically saying that you want Kubernetes to always run 1 replica of that Pod, at any time. When you delete the Pod, that condition is no longer true, so after the deletion another Pod is spawned to make the condition above hold again.
The approach #Shudipta Sharma provided is simply to delete that StatefulSet, so that you no longer have a desired state keeping an eye on the number of running Pods.
You can find more about that in Kubernetes documentation on:
StatefulSets
Cluster's desired state
More about Kubernetes objects and difference between each of them
Delete the deployment, not the pods. It is the deployment that keeps creating new pods; you will see a different pod name after you delete a pod.
kubectl get all
kubectl delete deployment DEPLOYMENTNAME

How to get Kubernetes cluster name from K8s API

As stated in the title, is it possible to find out a K8s cluster name from the API? I looked around the API and could not find it.
kubectl config current-context does the trick (it outputs a little bit more, like the project name, region, etc., but it should give you the answer you need).
Unfortunately a cluster doesn't know its own name, or anything else that would uniquely identify it (K8s issue #44954). I wanted to know for helm issue #2055.
Update:
A common workaround is to create a ConfigMap containing the cluster name and read that when required (#2055 comment 1244537799).
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-info
  namespace: kube-system
data:
  cluster-name: foo
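Assuming you create a ConfigMap like the one above, the name can then be read back by anything with access to it, for example:
# Read the cluster name back from the ConfigMap
kubectl -n kube-system get configmap cluster-info -o jsonpath='{.data.cluster-name}'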
There is no way to get the name via K8s API. But here is a one-liner in case the name you have in your .kube/config file is enough for you (if you download it from your cloud provider the names should match):
kubectl config view --minify -o jsonpath='{.clusters[].name}'
Note 1: The --minify flag is key here, so that only the name from your current context is output. There are other similar answers posted here, but without --minify you will also be listing the other contexts in your config, which might confuse you.
Note 2: The name in your .kube/config might not reflect the name in your cloud provider. If the file was autogenerated by the cloud provider the names should match; if you configured it manually you could have typed any name just for your local config.
Note 3: Do not rely on kubectl config current-context; it returns just the name of the context, not the name of the cluster.
I don't believe there is such a thing as a K8s cluster name. This command can provide some useful information:
kubectl cluster-info
The question is not really well described. However, if it is related to Google Container Engine then, as coreypobrien mentioned, the name of the cluster is stored in the custom metadata of the nodes. From inside a node, run the following command and the output will be the name of the cluster:
curl http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google"
If you specify your use case, I might be able to extend my answer to cover it.
The Kubernetes API doesn't know much about the GKE cluster name, but you can easily get the cluster name from the Google metadata server like this:
kubectl run curl --rm --restart=Never -it --image=appropriate/curl -- -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster-name
It is the same as getting the current config, but the command below gives clearer output:
kubectl config view
This command checks all possible clusters, since your KUBECONFIG may contain multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
And you will get output like
Cluster name    Server
kubernetes      https://localhost:6443
At least for kubespray clusters, the following works for me:
kubectl config current-context | cut -d '#' -f2
For clusters that were installed using kubeadm, the configuration stored in the kubeadm-config configmap has the cluster name used when installing the cluster.
$ kubectl -n kube-system get configmap kubeadm-config -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    clusterName: NAME_OF_CLUSTER
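To pull just the name out of that ConfigMap instead of reading the whole YAML, something like this works (a simple grep over the embedded ClusterConfiguration; adjust if your kubeadm version formats it differently):
kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' | grep clusterName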
For clusters that are using CoreDNS for their DNS, the "cluster name" from kubeadm is also used as the domain suffix.
$ kubectl -n kube-system get configmap coredns -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        kubernetes NAME_OF_CLUSTER.local in-addr.arpa ip6.arpa {
Well, this returns precisely one thing: a cluster name.
K8s:
kubectl config view -o jsonpath='{.clusters[].name}{"\n"}'
Openshift:
oc config view -o jsonpath='{.clusters[].name}{"\n"}'
$ kubectl config get-clusters --> gets you the list of existing clusters
Using the Python Kubernetes client (note that this won't work with an in-cluster kubeconfig):
from kubernetes import config
cluster_context = config.kube_config.list_kube_config_contexts()
print (cluster_context)
([{'context': {'cluster': 'k01.test.use1.aws.platform.gov', 'user': 'k01-test'}, 'name': 'k01.test.use1.aws.platform.gov'}], {'context': {'cluster': 'k01.test.use1.aws.platform.gov', 'user': 'k01-test'}, 'name': 'k01.test.use1.aws.platform.gov'})
cluster_name = cluster_context[1]['context']['cluster']
print (cluster_name)
k01.test.use1.aws.platform.gov
Using kubectl command:
$ kubectl config get-clusters
NAME
kubernetes
kubectl config get-clusters
kubectl config get-contexts
There is a great tool called kubectx https://github.com/ahmetb/kubectx.
kubectx - lists all previously added clusters and highlights the currently used one. This is only one word to type instead of kubectl config current-context.
kubectx <cluster> - switches to a chosen cluster.
Moreover, this tool also comes with kubens, which does exactly the same for namespaces:
kubens - lists all namespaces and shows the current one,
kubens <namespace> - switches to a chosen namespace.