kubernetes to openshift equivalent command

In Kubernetes I have this command:
kubectl create deployment nginx --image=ewoutp/docker-nginx-curl -n web
What should I run if I want to create the same thing inside an OpenShift cluster?
I tried this:
oc create deployment nginx --image=ewoutp/docker-nginx-curl -n web
I am getting this error:
error: no matches for extensions/, Kind=Deployment
Can someone help me?

It might indicate that your OpenShift cluster is not running. Check oc status to view the status of your current project. If it is not running, you should create a new project.
If your cluster is running, you can run oc create deployment nginx --image=ewoutp/docker-nginx-curl -n web -o yaml to verify whether the apiVersion is correct. The currently used version is apps/v1. If it is incorrect, you can save the output to a file, edit it to match the current version, and apply it.
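For reference, a minimal Deployment manifest using the apps/v1 apiVersion would look roughly like the sketch below (it assumes the same image and namespace as in the question; the labels are placeholders). You could save it as nginx-deployment.yaml and run oc apply -f nginx-deployment.yaml:
apiVersion: apps/v1          # current API group/version for Deployment
kind: Deployment
metadata:
  name: nginx
  namespace: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: ewoutp/docker-nginx-curl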

Related

How to promote a pod to a deployment for scaling

I'm running the example in the chapter "Service Discovery" of the book "Kubernetes Up and Running". The original command to run a deployment is kubectl run alpaca-prod --image=gcr.io/kuar-demo/kuard-amd64:blue --replicas=3 --port=8080 --labels="ver=1,app=alpaca,env=prod", however in K8s version 1.25 the --replicas parameter of the run command is no longer supported. I planned to run without replicas and then use "kubectl scale" to scale the deployment later. The problem is that the run command only creates a pod, not a deployment (the scale command expects a deployment). So how do I promote my pod to a deployment? My Kubernetes version is 1.25.
There is no way to promote it. You could adjust labels and so on, but it is simpler to create a new deployment and then delete the existing pod.
As an easy first step, export the existing running pod to a YAML file:
kubectl get pod <POD name> -o yaml > pod-spec.yaml
Now create a deployment spec YAML file:
kubectl create deployment deploymentname --image=imagename --dry-run=client -o yaml > deployment-spec.yaml
Edit the deployment-spec.yaml file, and with pod-spec.yaml open in another tab, copy the spec part of the pod file into the pod template of the new deployment file (see the sketch after these steps).
Once deployment-spec.yaml is ready you can apply it. If a service is selecting these pods, make sure its selector labels still match:
kubectl apply -f deployment-spec.yaml
Then delete the single running pod:
kubectl delete pod <POD name>
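To make the copy step concrete, here is a rough sketch of what the edited deployment-spec.yaml might end up looking like. The alpaca-prod name, image, labels and port are taken from the question; everything else is illustrative rather than the exact output of the commands above:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpaca-prod
  labels:
    ver: "1"
    app: alpaca
    env: prod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: alpaca
      env: prod
  template:
    metadata:
      labels:
        ver: "1"
        app: alpaca
        env: prod
    spec:                      # this is where the spec copied from pod-spec.yaml goes
      containers:
      - name: kuard
        image: gcr.io/kuar-demo/kuard-amd64:blue
        ports:
        - containerPort: 8080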

oc get deployment is returning No resources found

"oc get deployment" command is returning "No resources Found" as the result.
Even if I put an option of assigning or defining the namespace using -n as the option to above command, I am getting the same result.
Whereas, I am getting the correct result of oc get pods command.
Meanwhile, the oc version is
oc - v3.6.0
kubernetes - v1.6.1
openshift - v3.11.380
Check whether you are connected to the correct Kubernetes environment (especially if you're running more than one).
If that is correct, then I guess either you don't have any deployments at all, or the deployments are in a different namespace than you think.
Try listing all deployments across all namespaces:
oc get deployments -A
There are other objects that create pods, such as a StatefulSet or a DaemonSet. Because it is OpenShift, my feeling is that the pods were created by a DeploymentConfig, which is a popular way to create applications there.
Anyway, you can check which object owns the pods by looking at the ownerReferences in the pod metadata. This command should work:
oc get pod -o yaml <podname> | grep ownerReference -A 6
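For reference (an illustrative snippet, not output from this cluster), a pod created by a Deployment will list a ReplicaSet as its owner, while a pod created by an OpenShift DeploymentConfig will list a ReplicationController, for example:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: myapp-1            # hypothetical DeploymentConfig revision name
    controller: true
If it turns out to be a DeploymentConfig, you can list those with oc get dc -n <namespace>.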

How do I undo a kubectl create deploy?

I was setting up an nginx cluster on Google Cloud and entered the wrong image name; instead of entering:
kubectl create deploy nginx --image=nginx:1.17.10
I entered:
kubectl create deploy nginx --image=1.17.10
and after running kubectl get pods, it showed ImagePullBackOff as the status for the pod.
When I tried running the correct create deploy command above, it said "nginx" already exists.
When I tried kubectl delete --all pods, the pod was recreated with a new ID but still had the same status, and I still couldn't run the correct kubectl create deploy command above. Now I'm stuck.
How can I undo it?
You need to delete the deployment:
kubectl delete deploy nginx
Otherwise Kubernetes will recreate the pod every time you delete it.
You can see all your deployments with
kubectl get deploy
Edit the deployment via kubectl edit deployment DEPLOYMENT_NAME and change the image name.
Or
Edit the manifest file, update it with the correct image name, and run kubectl apply -f on the YAML file.
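Another option, instead of editing the deployment or the manifest, is kubectl set image. A small sketch, assuming you first look up the actual container name (kubectl create deploy derives it from the image reference you passed):
kubectl get deploy nginx -o jsonpath='{.spec.template.spec.containers[*].name}'
kubectl set image deployment/nginx <container-name>=nginx:1.17.10
Here <container-name> is whatever the first command printed.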
First of all, your k8s cluster is trying to pull the image 1.17.10 from the public Docker registry. No image exists with that name, which is why it gets the error. And when you delete your pods, they are recreated with the same image name because your deployment still exists. For this reason you need to delete the deployment rather than the pods; otherwise the deployment will automatically recreate any deleted pod.
You can check what the error in your deployment was with this command:
kubectl describe deploy nginx
For you the command will be kubectl delete deploy -n <Namespace_name> <deployment_name>. As you created your deployment in the default namespace, you don't need to mention the namespace; it will automatically be the default one.
So you can delete the deployment with this command:
kubectl delete deploy nginx
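Once the deployment is gone, the original command from the question should go through and the pod should come up with the correct image:
kubectl create deploy nginx --image=nginx:1.17.10
kubectl get pods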

Not able to expose deployment using kubernetes in google cloud

I used the commands below to build and deploy my Spring Boot application in Google Cloud:
mvn clean install && docker build -t eu.gcr.io/XXX/demo .
gcloud builds submit --tag eu.gcr.io/XXX/demo
kubectl run demo-server --image eu.gcr.io/XXX/demo
kubectl expose deployment demo-server --type=LoadBalancer --port=8080
And I can access my application externally. I can delete and redeploy my application using:
kubectl delete deployment demo-server
kubectl run demo-server --image eu.gcr.io/XXX/demo
It is all working fine, but when I tried to expose the same application on a different port, say 8081, it failed with Error from server (AlreadyExists): services "demo-server" already exists
How can I change the service port?
I resolved it with:
kubectl get services
kubectl delete services demo-server
I had been deleting the deployment, but the service was still there.
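With the old service gone, re-exposing the deployment on the new port should work. A sketch based on the commands above, assuming the container itself still listens on 8080:
kubectl expose deployment demo-server --type=LoadBalancer --port=8081 --target-port=8080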

Update deployment fails when same name exists in separate namespaces

I've used the following command to update the image used in a deployment:
kubectl --cluster websites --namespace production set image deployment/mobile-web mobile-web=eu.gcr.io/websites/mobile-web:0.23
This worked well until I created a staging namespace mirroring the production environment. In other words the deployment mobile-web exists both in the production and staging namespace. Now I get the error:
Error from server: the server could not find the requested resource
(get deployments.extensions mobile-web)
What am I missing here? Or is the only way to update via a YAML or JSON file, which means a bit more work in the CI/CD pipeline? I've tried setting the namespace with:
kubectl config set-context production --namespace=production --cluster=websites
but to no avail.
The solution in my case was to kill the current proxy, get new credentials, and start the proxy again:
gcloud container clusters get-credentials websites
kubectl proxy --port=8080
Now both commands work as expected:
kubectl get deployment mobile-web --namespace=production
kubectl get deployment mobile-web --namespace=staging
However it doesn't explain why it stopped working in the first place.
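Since the symptoms point at kubectl talking to the wrong or stale context, a hedged suggestion (not part of the original answer) is to confirm what kubectl is actually targeting before running set image:
kubectl config current-context
kubectl config view --minify
kubectl config use-context production
kubectl --namespace production set image deployment/mobile-web mobile-web=eu.gcr.io/websites/mobile-web:0.23
The production context name here is taken from the set-context command in the question.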