What is the current equivalent of `kubectl run --generator=run/v1` - kubernetes

I'm working through Kubernetes in Action (copyright 2018), and at least one of the examples is out-of-date with respect to current versions of kubectl.
Currently I'm stuck in section 2.3, just trying to demo a simple web-server Docker container ("kubia"):
kubectl run kubia --image=Dave/kubia --port=8080 --generator=run/v1
The --generator option has been removed from current versions of kubectl. What command(s) achieve the same end in the current version of kubectl?
Note: I'm literally just two chapters into learning about Kubernetes, so I don't really know what a deployment (or anything else) is, and the official Kubernetes documentation doesn't help. I just need the simplest way to verify that I can, in fact, run this container in my minikube "cluster".

In short, you can use the following commands to create pods and deployments (the imperative way); they are similar to the commands mentioned in the book:
To create a pod named kubia with image Dave/kubia:
kubectl run kubia --image=Dave/kubia --port=8080
To create a deployment named kubia with image Dave/kubia:
kubectl create deployment kubia --image=Dave/kubia --port=8080
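Either way, you can verify the result (a pod created by a deployment gets a generated name suffix):
kubectl get pods
kubectl get deployments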

You can also just instantiate the pod directly, since --generator has been removed:
apiVersion: v1
kind: Pod
metadata:
  name: kubia
spec:
  containers:
  - name: kubia
    image: Dave/kubia
    ports:
    - containerPort: 8080
Alternatively, you can use a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia-deployment
  labels:
    app: kubia
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: Dave/kubia
        ports:
        - containerPort: 8080
Save either one to a file such as something.yaml and run
kubectl create -f something.yaml
And to clean up
kubectl delete -f something.yaml
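To check on whatever the file created, kubectl can read the same manifest back:
kubectl get -f something.yaml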

If anyone reading the same book (Kubernetes in Action, copyright 2018) has the same issue in the future: just run a pod instead of the replication controller, and expose the pod instead of the rc in the following chapter.
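For example, where the book exposes the replication controller, you can expose the pod instead (assuming the pod is named kubia as above, and keeping the book's kubia-http service name):
kubectl expose pod kubia --type=LoadBalancer --name kubia-http --port=8080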

Related

kubectl run not creating deployment

I'm running Kubernetes with Docker Desktop on Windows. DD is up-to-date, and the kubectl version command returns 1.22 as both the client and server version.
I executed kubectl run my-apache --image httpd, then kubectl get all, which only shows the pod.
There is no deployment or replicaset as I expected. This means some commands, such as kubectl scale, don't work. Any idea what's wrong? Thanks.
The kubectl run command creates a pod, not a deployment; it used to create a deployment in the past, before Kubernetes version 1.18. For a deployment you have to run this command:
kubectl create deployment my-apache --image httpd
You basically spun up a pod in the default namespace. If you want to deploy your app using a deployment, you should have a deployment.yaml file and use:
kubectl apply -f <deployment_file>
Example to spin up 3 Nginx pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
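Save it to a file (the name nginx-deployment.yaml here is just an example) and apply it; kubectl get all will then show the deployment, its replicaset, and its pods:
kubectl apply -f nginx-deployment.yaml
kubectl rollout status deployment/nginx-deployment
kubectl get all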
Side note: There are 2 approaches when creating basic k8s objects:
IMPERATIVE way:
kubectl create deployment <DEPLOYMENT_NAME> --image=<IMAGE_NAME:TAG>
DECLARATIVE way:
kubectl create -f deployment.yaml
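With a deployment in place, kubectl scale works as the question expects, whichever way the deployment was created:
kubectl scale deployment my-apache --replicas=3
kubectl get pods    # should now list three httpd pods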

Kubernetes deployment created but not listed

I've just started learning Kubernetes, and I created a deployment using the command kubectl run demo1-k8s --image=demo1-k8s:1.0 --port 8080 --image-pull-policy=Never. I got the message that the deployment was created. But when I listed the deployments (kubectl get deployments), no deployment was listed; instead I got the message No resources found in default namespace.
Any idea guys?
From the docs, kubectl run creates a pod, not a deployment. So you can use the kubectl get pods command to check whether the pod was created. To create a deployment, use kubectl create deployment as documented here.
For deployment creation you need to use kubectl create deployment; with kubectl run, a pod will be created.
kubectl create deployment demo1-k8s --image=demo1-k8s:1.0
The template is kubectl create deployment <deployment-name> --<flags>.
But it's always better to use YAML to create a deployment or any other k8s resource. Just create a .yaml file.
deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo1-k8s
spec:
  selector:
    matchLabels:
      app: demo1-k8s
  replicas: 4 # Update the replicas from 2 to 4
  template:
    metadata:
      labels:
        app: demo1-k8s
    spec:
      containers:
      - name: demo1-k8s
        image: demo1-k8s:1.0
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
Then run this command: kubectl apply -f deploy.yaml
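After applying, the deployment and its four pods should be listed:
kubectl get deployments
kubectl get pods -l app=demo1-k8s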

difference in syntax when doing kubernetes deployment-related operations

What is the difference between the following syntax usage:
kubectl get deployments
kubectl get deployment.apps
kubectl get deployment.v1.apps
There are references to deployment.v1.apps and deployment.apps in the documentation, especially when talking about rollouts and upgrades.
For example:
To see the Deployment rollout status, run kubectl rollout status deployment.v1.apps/nginx-deployment
For example:
Let's update the nginx Pods to use the nginx:1.16.1 image instead of the nginx:1.14.2 image.
kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
There is no difference; the documentation examples just show different ways of referring to the same resource.
deployment.v1.apps is a reference to the apps/v1 API that you can see in the example nginx deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
You can use the short form, kubectl get deployments, or the longer ones you provided in the question.
However, you obviously can't use a group/version that doesn't exist, e.g. apps/v2:
kubectl get deployment.v2.apps/nginx-deployment
error: the server doesn't have a resource type "deployment"
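To see which group/version combinations your cluster actually serves, you can ask the API server; on current clusters, deployments are served only under apps/v1:
kubectl api-versions | grep apps
kubectl api-resources | grep deployments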

Error deploying aspnetcore webapi to minikube

When I try to execute the command kubectl apply -f mydeployment.yaml, I receive the error error: SchemaError(io.k8s.api.core.v1.ContainerState): invalid object doesn't have additional properties. What can I do to deploy my aspnetcore webapi successfully to my local Kubernetes cluster?
I've already tried to upgrade minikube by running the command choco upgrade minikube. It says I already have the latest version: minikube v1.0.0 is the latest version available based on your source(s).
The deployment.yaml I've created looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        name: myfirstdockerapi
        image: myfirstdockerapi
        ports:
        - containerPort: 80
Clean up everything before you start:
rm -rf ~/.minikube
As per documentation:
You must use a kubectl version that is within one minor version difference of
your cluster. For example, a v1.2 client should work with v1.1, v1.2, and v1.3
master. Using the latest version of kubectl helps avoid unforeseen issues.
You can find Minikube resources on GitHub here:
To avoid interaction issues - Update default Kubernetes version to v1.14.0 #3967
NOTE: we also recommend updating kubectl to a recent release (v1.13+)
For the latest version of minikube please follow the official documentation here.
Kubernetes blog - here,
Stack Overflow here,
Choco here.
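To compare the client and server versions mentioned in the skew policy above:
kubectl version
minikube version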
In the attached deployment there was an indentation problem (corrected below), so please try again:
spec:
  containers:
  - name: myfirstdockerapi
    image: myfirstdockerapi
    ports:
    - containerPort: 80
The containers element expects a list, so you need to prefix each entry with a dash.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: myfirstdockerapi
        image: myfirstdockerapi
        ports:
        - containerPort: 80
If you are unsure you can always use kubectl to validate your file without creating it:
kubectl apply -f sample.yaml --validate --dry-run
Just in case, make sure that your kubectl version matches the version of your Kubernetes cluster.
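Note that on newer kubectl releases (v1.18+), --dry-run requires an explicit value:
kubectl apply -f sample.yaml --dry-run=client --validate=true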

Update Node Selector field for PODs on the fly

I have been trying different things around k8s these days. I am wondering about the field nodeSelector in the POD specification.
As I understand we have to assign some labels to the nodes and these labels can further be used in the nodeSelector field part of the POD specification.
Assignment of nodes to pods based on nodeSelector works fine. But after I create the pod, I now want to update/overwrite the nodeSelector field, which should move my pod to a new node based on the updated nodeSelector label.
I am thinking of this the same way it is done for normal labels using the kubectl label command.
Are there any hacks to achieve this?
If this is not possible in the current latest versions of Kubernetes, why shouldn't we consider it?
Thanks.
While editing the deployment manually as cookiedough suggested is one of the options, I believe using kubectl patch would be a better solution.
You can patch using either a YAML file or a JSON string, which makes it easier to integrate into scripts. Here is a complete reference.
Example
Here's a simple deployment of nginx I used, which will be created on node-1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/hostname: node-1
JSON patch
You can patch the deployment to change the desired node as follows:
kubectl patch deployments nginx-deployment -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "node-2"}}}}}'
YAML patch
By running kubectl patch deployment nginx-deployment --patch "$(cat patch.yaml)", where patch.yaml is prepared as follows:
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: node-2
Both will result in the scheduler scheduling a new pod on the requested node and terminating the old one as soon as the new one is ready.
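You can watch the pod move between nodes with:
kubectl get pods -o wide -w
The NODE column shows where each pod is running.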
Alternatively, edit the deployment manually (kubectl edit deployment nginx-deployment), change or remove the nodeSelector lines, and save.