Update deployment fails when same name exists in separate namespaces - kubernetes

I've used the following command to update the image run in a deployment:
kubectl --cluster websites --namespace production set image
deployment/mobile-web mobile-web=eu.gcr.io/websites/mobile-web:0.23
This worked well until I created a staging namespace mirroring the production environment. In other words, the deployment mobile-web now exists in both the production and staging namespaces. Now I get the error:
Error from server: the server could not find the requested resource
(get deployments.extensions mobile-web)
What am I missing here? Or is the only way to update via a YAML or JSON file, which would mean a bit more work in the CI/CD pipeline? I've tried setting the namespace with:
kubectl config set-context production --namespace=production --cluster=websites
but to no avail.
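For reference, a quick way (assuming a standard kubeconfig) to check which context and namespace kubectl is actually using:
kubectl config current-context
kubectl config view --minify --output 'jsonpath={..namespace}'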

The solution in my case was to kill the current proxy, fetch new credentials, and start the proxy again:
gcloud container clusters get-credentials websites
kubectl proxy --port=8080
Now both commands work as expected:
kubectl get deployment mobile-web --namespace=production
kubectl get deployment mobile-web --namespace=staging
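And the original set image command should likewise target either environment via the --namespace flag, e.g.:
kubectl --cluster websites --namespace staging set image deployment/mobile-web mobile-web=eu.gcr.io/websites/mobile-web:0.23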
However, this doesn't explain why it stopped working in the first place.

Related

oc get deployment is returning No resources found

"oc get deployment" command is returning "No resources Found" as the result.
Even if I put an option of assigning or defining the namespace using -n as the option to above command, I am getting the same result.
Whereas, I am getting the correct result of oc get pods command.
Meanwhile, the oc version is
oc - v3.6.0
kubernetes - v1.6.1
openshift - v3.11.380
Check whether you are connected to the correct Kubernetes environment (especially if you're running more than one).
If that is correct, then I guess either you don't have any deployments at all, or the deployments are in a different namespace than you think.
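A quick sanity check (assuming the standard oc client) is to look at the server you're logged into and the active project:
oc whoami --show-server
oc project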
Try listing deployments across all namespaces:
oc get deployments -A
There are other objects that create pods, such as a StatefulSet or DaemonSet. Because this is OpenShift, my feeling is that the pods were created by a DeploymentConfig, which is a popular way to create applications there.
In any case, you can check which object owns a pod by looking at its ownerReferences metadata. This command should work:
oc get pod -o yaml <podname> | grep ownerReference -A 6
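For a pod managed by a Deployment, the owner is typically a ReplicaSet, while a DeploymentConfig manages its pods through a ReplicationController. The output should look roughly like this (names here are made up):
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: myapp-5d9c7b8f6d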

Google Cloud Run for Anthos: Using Managed SSL

[ Note to reader: This is a long question, please bear with me ]
I've been messing with Cloud Run for Anthos as a side project, and have got a service up and running on http (not https) -- it performs as expected, scaling up and scaling down as traffic requires.
I now want to add SSL to it, so traffic comes in over SSL only, and I've been banging my head against a wall trying to get this to work, mainly centering on this link.
I have set up the cluster default domain (it shows up on the Cloud Run for Anthos page), and my service is accessible via http://myservice.mynamespace.mydomain.com (not my real domain, obviously).
I've patched the knative configs, enabling autoTLS and setting mydomain as needed:
kubectl patch configmap config-domain --namespace knative-serving --patch '{"data": {"mydomain.com": ""}}'
kubectl patch configmap config-domain --namespace knative-serving --patch '{"data": {"nip.io": null}}'
kubectl patch configmap config-domainmapping --namespace knative-serving --patch '{"data": {"autoTLS": "Enabled"}}'
kubectl patch configmap config-network --namespace knative-serving --patch '{"data": {"autoTLS": "Enabled"}}'
kubectl patch configmap config-network --namespace knative-serving --patch '{"data": {"httpProtocol": "Enabled"}}'
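For what it's worth, the patches can be verified by dumping the configmaps back out, e.g.:
kubectl get configmap config-network --namespace knative-serving -o yaml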
The documentation talks about using gcloud domain-mappings:
gcloud run domain-mappings describe --domain DOMAIN
Yet this is only available in beta:
ERROR: (gcloud.run.domain-mappings.describe) This command group is in beta for fully managed Cloud Run; use `gcloud beta run domain-mappings`.
Using beta, however, also fails:
ERROR: (gcloud.beta.run.domain-mappings.describe) NOT_FOUND: Resource 'mydomain.com' of kind 'DOMAIN_MAPPING' in region 'europe-west2' in project 'my-project' does not exist
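(Perhaps the domain-mappings command needs to target the GKE platform explicitly; if I'm reading the CLI flags correctly, something like the following, where the cluster name and location are placeholders:
gcloud run domain-mappings describe --domain mydomain.com --platform gke --cluster <cluster-name> --cluster-location <cluster-location>
but I'm not certain this is the right invocation.)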
Making my cluster zonal or regional makes no difference.
The documentation also mentions using kcert:
kubectl get kcert
Yet this does not show anything until after I have deployed my service.
$ kubectl get kcert --all-namespaces
NAMESPACE NAME READY REASON
default route-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
And this NEVER shows ready. Trying to hit http://myservice.mynamespace.mydomain.com remains fine, but hitting https://myservice.mynamespace.mydomain.com returns:
curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to myservice.mynamespace.mydomain.com:443
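(For digging further, I assume the certificate object can be described like any other resource, e.g.:
kubectl describe kcert route-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --namespace default
to see the condition messages.)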
Can anyone guide me into why I can't get SSL working on this?
Related question -- is it possible to use the already-configured cert-manager cluster issuer instead? Such as described here?

Unable to switch from Minikube to AWS EKS on windows for Deployment

I have minikube on my local machine for testing deployment and I ran commands like
kubectl apply -f testingfile.yaml
and it worked fine. Now I want to perform the same on aws eks. I have followed all steps given in https://docs.aws.amazon.com/eks/latest/userguide/sample-deployment.html. Created a config file and added that to the path. Commands like eksctl get cluster are correctly listing the clusters from aws eks but now when I run
kubectl apply -f testingfile.yaml
I am getting the following output:
deployment.apps/testingfile unchanged
which means the command is still being applied to minikube and not to AWS EKS. I have also deleted minikube-related path variables from my environment variables, but I am still unable to switch to AWS EKS for applying. I would like to deploy this on AWS EKS. Let me know what I am missing here.
Check your existing cluster contexts; there will be multiple contexts, one for minikube and one for EKS:
kubectl config get-contexts
Then switch the context to EKS (if your kubeconfig is set up, it will be listed):
kubectl config use-context <name of context>
This is how you switch to another cluster.
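If no EKS context exists in your kubeconfig yet, the AWS CLI can add one (region and cluster name here are placeholders):
aws eks update-kubeconfig --region <region> --name <cluster-name>
Afterwards, kubectl config current-context should show the EKS cluster.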

kubernetes to openshift equivalent command

In Kubernetes I have the following command:
kubectl create deployment nginx --image=ewoutp/docker-nginx-curl -n web
What should I run if I want to create this inside an OpenShift cluster?
I tried this:
oc create deployment nginx --image=ewoutp/docker-nginx-curl -n web
I am getting the error:
error: no matches for extensions/, Kind=Deployment
Can someone help me?
It might indicate that your OpenShift cluster is not running. Check oc status to view the status of your current project. If it is not running, you should create a new project.
If your cluster is running, you can run oc create deployment nginx --image=ewoutp/docker-nginx-curl -n web --dry-run -o yaml (--dry-run avoids actually creating the object) to verify whether the apiVersion is correct. The currently used version is apps/v1. If it is incorrect, you can save the output to a file and edit it to match the current version.
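For reference, a minimal apps/v1 manifest for this deployment might look like the following (the labels and filename are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: ewoutp/docker-nginx-curl
which you can then apply with oc apply -f nginx-deployment.yaml.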

How do I undo a kubectl create deploy?

I was setting up an nginx cluster on Google Cloud, and I entered a wrong image name; instead of entering:
kubectl create deploy nginx --image=nginx:1.17.10
I entered:
kubectl create deploy nginx --image=1.17.10
and eventually, after running kubectl get pods, it showed ImagePullBackOff as the status of the pod.
When I tried running the correct create deploy command above, it said "nginx" already exists.
When I tried kubectl delete --all pods, the pod was recreated with a new ID but still had the same status, and I still couldn't run the correct kubectl create deploy command above. Now I'm stuck.
How can I undo it?
You need to delete the deployment:
kubectl delete deploy nginx
Otherwise Kubernetes will recreate the pod every time it is deleted.
You can see all your deployments with
kubectl get deploy
Edit the deployment via kubectl edit deployment DEPLOYMENT_NAME and change the image name.
Or edit the manifest file with the correct image name and apply it with kubectl apply -f <manifest>.yaml.
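Alternatively, kubectl set image can patch just the image in place; assuming the container inside the deployment is also named nginx, that would be:
kubectl set image deployment/nginx nginx=nginx:1.17.10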
First of all, your k8s cluster is trying to pull the image 1.17.10 from the public Docker registry. Since no image exists with that name, the pull fails with an error. And when you delete your pods, they are recreated with the same image name because your deployment still exists. For this reason you need to delete the deployment rather than the pods; otherwise, the deployment will automatically recreate any deleted pod.
You can check what the error in your deployment was with this command:
kubectl describe deploy nginx
In general the command is kubectl delete deploy -n <Namespace_name> <deployment_name>. As you created your deployment in the default namespace, you don't need to specify the namespace; it will automatically be the default namespace.
You can delete the deployment with this command:
kubectl delete deploy nginx