Kubectl apply command for updating existing service resource - kubernetes

Currently I'm using Kubernetes version 1.11.+. Previously I always used the following step in my Cloud Build scripts:
- name: 'gcr.io/cloud-builders/kubectl'
  id: 'deploy'
  args:
  - 'apply'
  - '-f'
  - 'k8s'
  - '--recursive'
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_REGION}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER_NAME}'
The command worked as expected; at that time I was using k8s version 1.10.+. However, recently I got the following error:
spec.clusterIP: Invalid value: "": field is immutable
metadata.resourceVersion: Invalid value: "": must be specified for an update
So I'm wondering if this is an expected behavior for Service resources?
Here's my YAML config for my service:
apiVersion: v1
kind: Service
metadata:
  name: {name}
  namespace: {namespace}
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "{backend-config-name}"}'
spec:
  ports:
  - port: {port-num}
    targetPort: {port-num}
  selector:
    app: {label}
    environment: {env}
  type: NodePort

This is due to https://github.com/kubernetes/kubernetes/issues/71042.
https://github.com/kubernetes/kubernetes/pull/66602 should be cherry-picked to 1.11.

I sometimes meet this error when manually running kubectl apply -f somefile.yaml.
I think it happens when someone has changed the specification through the Kubernetes Dashboard instead of by applying new changes through kubectl apply.
To fix it, I run kubectl edit services/servicename, which opens the YAML specification in my default editor. Then I remove the fields metadata.resourceVersion and spec.clusterIP, hit save, and run kubectl apply -f somefile.yaml again.
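As a compact sketch of that workaround (servicename and somefile.yaml are the placeholders used above):
kubectl edit services/servicename   # delete metadata.resourceVersion and spec.clusterIP, then save and quit
kubectl apply -f somefile.yaml      # re-apply the original manifest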

You need to set spec.clusterIP in your service YAML file, with the value replaced by the cluster IP address of the existing Service, as shown below:
spec:
  clusterIP: <cluster-ip-of-the-existing-service>
Your issue is discussed on GitHub, where there is also a workaround to help you bypass it.
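If you need the current cluster IP to fill in there, one way to read it from the live Service (a sketch reusing the placeholders from the manifest above):
kubectl get service {name} -n {namespace} -o jsonpath='{.spec.clusterIP}'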

Related

Kubectl error upon applying agones fleet: ensure CRDs are installed first

I am using minikube (docker driver) with kubectl to test an agones fleet deployment. Upon running kubectl apply -f lobby-fleet.yml (and when I try to apply any other agones yaml file) I receive the following error:
error: resource mapping not found for name: "lobby" namespace: "" from "lobby-fleet.yml": no matches for kind "Fleet" in version "agones.dev/v1"
ensure CRDs are installed first
lobby-fleet.yml:
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
name: lobby
spec:
replicas: 2
scheduling: Packed
template:
metadata:
labels:
mode: lobby
spec:
ports:
- name: default
portPolicy: Dynamic
containerPort: 7600
container: lobby
template:
spec:
containers:
- name: lobby
image: gcr.io/agones-images/simple-game-server:0.12 # Modify to correct image
I am running this on WSL2, but receive the same error when using the windows installation of kubectl (through choco). I have minikube installed and running for ubuntu in WSL2 using docker.
I am still new to using k8s, so apologies if the answer to this question is clear, I just couldn't find it elsewhere.
Thanks in advance!
In order to create a resource of kind Fleet, you have to apply the Custom Resource Definition (CRD) that defines what a Fleet is first.
I've looked into the YAML installation instructions of Agones, and the install manifest contains the CRDs; you can find them by searching for kind: CustomResourceDefinition.
I recommend first installing Agones according to the instructions in the docs.
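Once that is done, a quick sanity check (a sketch; the CRD names below are what the Agones manifest typically installs) confirms the Fleet CRD is present before you apply the fleet:
kubectl get crd | grep agones.dev              # should list fleets.agones.dev among others
kubectl api-resources --api-group=agones.dev   # should include the Fleet kind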

How to change a pod name

I'm very new to k8s and related tooling, so this may be a stupid question: how do I change the pod name?
I am aware the pod name seems to be set in the Helm chart; in my values.yaml I have this:
...
hosts:
  - host: staging.application.com
    paths:
      ...
      - fullName: application
        svcPort: 80
        path: /*
...
Since the application is running in both the prod and staging environments, and the pod name is just something like application-695496ec7d-94ct9, I can't tell which pod is for prod or staging, and can't tell whether a request came from prod or not. So I changed it to:
hosts:
  - host: staging.application.com
    paths:
      ...
      - fullName: application-staging
        svcPort: 80
        path: /*
I deployed it to staging; the pod was updated/recreated automatically, but the pod name still remains the same. I'm confused about that and don't know what is missing. I'm not sure if it is related to fullnameOverride, but that's empty so it should be fine.
...the pod name still remains the same
The code snippet in your question is likely the Helm values for an Ingress, which is not related to the Deployment or the Pod.
Look into the Helm template that defines the Deployment spec for the pod, search for its name, and see which Helm value is assigned to it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox # <-- change this and you will see the pod name change along with it; the helm syntax surrounding this field tells you how the name is constructed/assigned
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["ash","-c","sleep 3600"]
Save the spec and apply it, then check with kubectl get pods --selector app=busybox. You should see one pod whose name starts with busybox. Now if you open the file, change the name to custom, re-apply, and get the pods again, you will see two pods with different name prefixes (the commands are spelled out below). Clean up with kubectl delete deployment busybox custom.
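Spelled out as commands (assuming the spec above is saved as busybox.yaml, a filename chosen here for illustration):
kubectl apply -f busybox.yaml                  # creates deployment "busybox"
kubectl get pods --selector app=busybox        # one pod named busybox-...
# edit metadata.name from "busybox" to "custom" in the file, then:
kubectl apply -f busybox.yaml                  # creates a second deployment "custom"
kubectl get pods --selector app=busybox        # now two pods: busybox-... and custom-...
kubectl delete deployment busybox custom       # clean up both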
This example shows how the name of the Deployment is used for the pod(s) underneath it. You can paste the part of your Helm template surrounding the name field into your question for further examination if you like.
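For reference, in a chart scaffolded with helm create, the Deployment name usually comes from a name helper rather than from the Ingress values; this is a sketch only, and "mychart" is a placeholder for your chart's name:
# templates/deployment.yaml (typical helm create scaffold)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}   # driven by nameOverride / fullnameOverride in values.yaml
If that is the pattern in your chart, setting fullnameOverride (e.g. application-staging) in the staging values would change the pod name prefix.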

GKE automating deploy of multiple deployments/services with different images

I'm currently looking at GKE and some of the tutorials on google cloud. I was following this one here https://cloud.google.com/solutions/integrating-microservices-with-pubsub#building_images_for_the_app (source code https://github.com/GoogleCloudPlatform/gke-photoalbum-example)
This example has 3 deployments and one service. The example tutorial has you deploy everything via the command line which is fine and all works. I then started to look into how you could automate deployments via cloud build and discovered this:
https://cloud.google.com/build/docs/deploying-builds/deploy-gke#automating_deployments
These docs say you can create a build configuration for a trigger (such as pushing to a particular repo) and it will trigger the build. The sample YAML they show for this is as follows:
# deploy container image to GKE
- name: "gcr.io/cloud-builders/gke-deploy"
  args:
  - run
  - --filename=kubernetes-resource-file
  - --image=gcr.io/project-id/image:tag
  - --location=${_CLOUDSDK_COMPUTE_ZONE}
  - --cluster=${_CLOUDSDK_CONTAINER_CLUSTER}
I understand how the location and cluster parameters can be passed in and these docs also say the following about the resource file (filename parameter) and image parameter:
kubernetes-resource-file is the file path of your Kubernetes configuration file or the directory path containing your Kubernetes resource files.
image is the desired name of the container image, usually the application name.
Relating this back to the demo application repo where all the services are in one repo, I believe I could supply a folder path to the filename parameter such as the config folder from the repo https://github.com/GoogleCloudPlatform/gke-photoalbum-example/tree/master/config
But the trouble here is that those resource files themselves have an image property in them, so I don't know how this would relate to the image property of the Cloud Build trigger YAML. I also don't know how you could then have multiple "image" properties in the trigger YAML where each deployment would have its own container image.
I'm new to GKE and Kubernetes in general, so I'm wondering if I'm misinterpreting what the kubernetes-resource-file should be in this instance.
But is it possible to automate deploying of multiple deployments/services in this fashion when they're all bundled into one repo? Or has Google just oversimplified things for this tutorial, the reality being that most services would be in their own repo so as to be built/tested/deployed separately?
Either way, how would the image property relate to the fact that an image is already defined in the deployment yaml? e.g:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: photoalbum-app
  name: photoalbum-app
spec:
  replicas: 3
  selector:
    matchLabels:
      name: photoalbum-app
  template:
    metadata:
      labels:
        name: photoalbum-app
    spec:
      containers:
      - name: photoalbum-app
        image: gcr.io/[PROJECT_ID]/photoalbum-app#[DIGEST]
        tty: true
        ports:
        - containerPort: 8080
        env:
        - name: PROJECT_ID
          value: "[PROJECT_ID]"
The command that you use is perfect for testing the deployment of one image, but when you work with Kubernetes (K8s) and with GCP's managed version of it (GKE), you usually never do this.
You use YAML files to describe your deployments, services, and all the other K8s objects that you want. When you deploy, you can run something like this:
kubectl apply -f <file.yaml>
If you have several files, you can use a wildcard if you want:
kubectl apply -f config/*.yaml
If you prefer to use only one file, you can separate the objects with ---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec: ...
...
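Tying that back to Cloud Build, a deploy step using the kubectl builder could look roughly like this (a sketch only; the config/ path and substitution names are assumptions, mirroring the step shown in the first question):
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'config/', '--recursive']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'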

Error in deploy stage: "lchmod (file attributes) error: Not supported"

I am attempting to deploy an image "casbin-role-backend" to the cloud, but it always fails.
The following is found in the log:
Preparing to start the job...
Pipeline image: latest
Preparing the build artifacts...
lchmod (file attributes) error: Not supported
.....
DEPLOYING using manifest
+++ kubectl apply --namespace default -f ./tmp.deployment.yaml
deployment.apps/casbin-role-backend unchanged
The Service "casbin-role-backend" is invalid: spec.ports[0].nodePort: Invalid value: 30080: provided port is already allocated
+++ set +x
CHECKING deployment rollout of casbin-role-backend
+++ kubectl rollout status deploy/casbin-role-backend --watch=true --timeout=150s --namespace default
error: deployment "casbin-role-backend" exceeded its progress deadline
+++ STATUS=fail
+++ set +x
SHOWING last events
LAST SEEN TYPE REASON OBJECT MESSAGE
41m Warning Failed pod/casbin-role-mgt-ui-7d59b6d4cf-2pbhm Error: InvalidImageName
2m11s Warning InspectFailed pod/casbin-role-backend-68d76464dd-vbvch Failed to apply default image tag "//:": couldn't parse image reference "//:": invalid reference format
...
DEPLOYMENT FAILED
....
OK
Finished: FAILED
And below is my deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: casbin-role-backend
  labels:
    app: app
spec:
  type: NodePort
  ports:
  - port: 3000
    name: http
    nodePort: 30080
  selector:
    app: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: casbin-role-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: casbin-role-backend
        image: xxx/casbin-role-backend
        ports:
        - containerPort: 3000
Does anybody know what error this is? I have searched for some time but still cannot find what it is or how to fix it.
Update:
The source code originates from the repo below; I added the Dockerfile and deployment.yaml to deploy it on k8s.
https://github.com/alikhan866/Casbin-Role-Mgt-Dashboard-RBAC
Dockerfile source:
# pull official base image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /dist
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install
# add app
COPY . ./
# start app
CMD ["npm", "run dev"]
I see two issues here:
1.
The Service "casbin-role-backend" is invalid: spec.ports[0].nodePort: Invalid value: 30080: provided port is already allocated
It means that the port requested by the NodePort Service is already in use. You can find the service holding it with kubectl get svc --all-namespaces | grep '30080', then change the port value or delete the conflicting service. Also, make sure that you specify the proper namespace.
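Alternatively (a sketch, not spelled out in the original answer), you can drop the explicit nodePort field and let the API server allocate a free port from the NodePort range:
spec:
  type: NodePort
  ports:
  - port: 3000
    name: http
    # no nodePort here: Kubernetes picks a free port in the 30000-32767 range
  selector:
    app: app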
2.
2m11s Warning InspectFailed pod/casbin-role-backend-68d76464dd-vbvch Failed to apply default image tag "//:": couldn't parse image reference "//:": invalid reference format
My educated guess here is that your image name is invalid because it starts with https:// or ://. A proper image reference should look like this:
image: registry/organization_name/image_name:image_version
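For example (project name and tag are placeholders), an image pushed to Google Container Registry would be referenced as:
image: gcr.io/my-project/casbin-role-backend:1.0.0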

Kubernetes spec.ports required value error

I am running minikube version v1.15.1 on Windows 10 Home.
I started minikube using VirtualBox.
Below is my yaml file
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  selector:
    component: web
  ports:
  - port: 3000
    targetPort: 3000
I am running the command kubectl apply -f filename.yaml to create the service, and I get the error: The Service "my-service" is invalid: spec.ports: Required value
I referred to the documentation and can see that the syntax is correct. Other places I referred to were:
i. https://github.com/kubernetes/kubernetes/issues/8619, which has an issue opened for the same error. It is closed with a request to follow up on Stack Overflow.
ii. The Service "php" is invalid: spec.ports: Required value — this thread didn't help me, as the OP had a syntax error in the file.
Any suggestions would be helpful. I just started learning Kubernetes and it's my first attempt.
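One quick way to narrow this down (a debugging sketch, assuming a reasonably recent kubectl) is a client-side dry run, which shows how the file is parsed before anything is sent to the API server:
kubectl apply -f filename.yaml --dry-run=client -o yaml
# if the ports block does not show up nested under spec in the output,
# check the file for tabs or misaligned indentation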