Cannot apply Service as "type: ClusterIP" in my GKE cluster - kubernetes

I want to deploy my service as a ClusterIP but am not able to apply it; I get the following error message:
[xetra11#x11-work coopr-infrastructure]$ kubectl apply -f teamcity-deployment.yaml
deployment.apps/teamcity unchanged
ingress.extensions/teamcity unchanged
The Service "teamcity" is invalid: spec.ports[0].nodePort: Forbidden: may not be used when `type` is 'ClusterIP'
This is my .yaml file:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teamcity
  labels:
    app: teamcity
spec:
  replicas: 1
  selector:
    matchLabels:
      app: teamcity
  template:
    metadata:
      labels:
        app: teamcity
    spec:
      containers:
        - name: teamcity-server
          image: jetbrains/teamcity-server:latest
          ports:
            - containerPort: 8111
---
apiVersion: v1
kind: Service
metadata:
  name: teamcity
  labels:
    app: teamcity
spec:
  type: ClusterIP
  ports:
    - port: 8111
      targetPort: 8111
      protocol: TCP
  selector:
    app: teamcity
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: teamcity
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  backend:
    serviceName: teamcity
    servicePort: 8111

1) Apply a configuration to the resource by filename:
kubectl apply -f [.yaml file] --force
The resource will be created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create --save-config'.
2) If the first option fails, you can force replace, i.e. delete and then re-create the resource:
kubectl replace -f grav-deployment.yml
The force option is only used when grace-period=0: it immediately removes the resource from the API and bypasses graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.
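For the manifest in the question, that would look roughly like the sketch below (filename and service name taken from the yaml above); deleting just the Service and re-applying is the more surgical variant if only the Service is stuck:
kubectl replace --force -f teamcity-deployment.yaml
# or delete only the stuck Service and re-apply the whole file
kubectl delete service teamcity
kubectl apply -f teamcity-deployment.yaml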

On GKE the ingress can only point to a Service of type LoadBalancer or NodePort. You can see the error output of the ingress by running:
kubectl describe ingress teamcity
You will see an error there. As per your yaml, if you are using an nginx controller you have to use a Service of type NodePort (a sketch of that variant follows the links below).
Some documentation:
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md#gce-gke
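If you go that route, here is a minimal sketch of the Service from the question with only the type changed (same name, ports, and selector as above; the nodePort value is left for the cluster to allocate):
apiVersion: v1
kind: Service
metadata:
  name: teamcity
  labels:
    app: teamcity
spec:
  type: NodePort
  ports:
    - port: 8111        # Service port
      targetPort: 8111  # container port from the Deployment
      protocol: TCP
  selector:
    app: teamcity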

Did you just recently change the Service definition from NodePort to ClusterIP?
Then it might be this issue: github.com/kubernetes/kubectl/issues/221.
You need to use kubectl replace or kubectl apply --force.
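To check whether that is what is happening, you can inspect the live object for a lingering nodePort that apply keeps trying to merge back in (service name taken from the question):
kubectl get service teamcity -o yaml
# look for a nodePort: <number> entry under spec.ports and for the
# kubectl.kubernetes.io/last-applied-configuration annotation; if the stale
# field is present, replacing the object (kubectl replace or kubectl apply --force)
# rather than patching it clears it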

Related

App is not reachable through NGINX ingress on EKS

I want to serve my basic React application from my EKS cluster with an nginx ingress. I want to do this without a certificate, and I want to reach it via its DNS name in my browser. By the way, I tested my application on minikube before deploying to EKS, and it works.
First I tried to install the nginx controller via Helm 3 with the commands below, and I checked it with the last command. It's working well.
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
helm install my-release nginx-stable/nginx-ingress
kubectl get services -n default -o wide -w ingress-nginx-controller
Then I applied my application with the yaml files below.
Deployment yaml file:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: hub:3
          imagePullPolicy: Always
      restartPolicy: Always
Service yaml file:
apiVersion: v1
kind: Service
metadata:
  name: my-webapp
spec:
  selector:
    app: webapp
  ports:
    - name: http
      port: 80
  type: ClusterIP
My ingress yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-webapp
                port:
                  number: 80
After all these steps the pods and services are running as shown below, but when I try to reach my app in the browser via the load balancer DNS name, I get the error shown in the last picture. What point did I miss?
Website output:

Why Kubernetes Services are created before Deployment/Pods?

If I have to deploy a workload on Kubernetes and also have to expose it as a service, I have to create a Deployment/Pod and a Service. If the Kubernetes offering is from a cloud provider and we are creating a LoadBalancer service, it makes sense to create the service before the workload, as the URL creation for the service takes time. But if Kubernetes is deployed on a non-cloud platform, there is no point in creating the Service before the workload.
So why do I have to create a service first and then the workload?
There is no requirement to create a service before a deployment or vice versa. You can create the deployment before the service or the service before the deployment.
If you create the deployment before the service, then the application packaged in the deployment will not be accessible externally until you create the LoadBalancer.
Conversely, if you create the LoadBalancer first, the traffic for the application will not be routed to the application as it hasn't been created yet, giving 503s to the caller.
You are declaring to Kubernetes how you want the state of the infrastructure to be, e.g. "I want a deployment and a service". Kubernetes will go off and create those, but they may not necessarily end up being created in a predictable order. For example, LoadBalancers take a while to be assigned IPs from your cloud provider, so even though the resource is created in the cluster, it's not actually getting any traffic.
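For instance, a freshly created LoadBalancer Service keeps a pending external address until the cloud provider finishes provisioning; a quick way to see this (hypothetical service name my-service):
kubectl get service my-service --watch
# the EXTERNAL-IP column stays <pending> until the cloud load balancer
# is provisioned and an address is assigned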
As written in the official Kubernetes documentation, and contrary to what other users have told you, there is a specific case where the Service needs to be created BEFORE the Pod.
https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
Note:
When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the client Pods come into existence. Otherwise, those client Pods won't have their environment variables populated.
If you only use DNS to discover the cluster IP for a Service, you don't need to worry about this ordering issue.
For example:
Create some pods, a deployment, and services in a namespace:
namespace.yaml
# namespace
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: securedapp
spec: {}
status: {}
services.yaml
# expose api svc
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    type: api
  name: api-svc
  namespace: securedapp
spec:
  ports:
    - port: 90
      protocol: TCP
      targetPort: 80
  selector:
    type: api
  type: ClusterIP
status:
  loadBalancer: {}
---
# expose frontend-svc
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    type: secured
  name: frontend-svc
  namespace: securedapp
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    type: secured
  type: ClusterIP
status:
  loadBalancer: {}
pods-and-deploy.yaml
# create the pod for frontend
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    type: secured
  name: secured-frontend
  namespace: securedapp
spec:
  containers:
    - image: nginx
      name: secured-frontend
      resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
# create the pod for the api
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    type: api
  name: webapi
  namespace: securedapp
spec:
  containers:
    - image: nginx
      name: webapi
      resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
# create a deploy
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: sure-even-with-deploy
  name: sure-even-with-deploy
  namespace: securedapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sure-even-with-deploy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: sure-even-with-deploy
    spec:
      containers:
        - image: nginx
          name: nginx
          resources: {}
status: {}
If you create all the resources at the same time, for example by placing all these files in a folder and then running an apply on it like this:
kubectl apply -f .
Then if we get the envs from one pod
k exec -it webapi -n securedapp -- env
you will obtain something like this:
# environment of api_svc
API_SVC_PORT_90_TCP=tcp://10.152.183.242:90
API_SVC_SERVICE_PORT=90
API_SVC_PORT_90_TCP_ADDR=10.152.183.242
API_SVC_SERVICE_HOST=10.152.183.242
API_SVC_PORT_90_TCP_PORT=90
API_SVC_PORT=tcp://10.152.183.242:90
API_SVC_PORT_90_TCP_PROTO=tcp
# environment of frontend
FRONTEND_SVC_SERVICE_HOST=10.152.183.87
FRONTEND_SVC_SERVICE_PORT=80
FRONTEND_SVC_PORT=tcp://10.152.183.87:280
FRONTEND_SVC_PORT_280_TCP_PORT=80
FRONTEND_SVC_PORT_280_TCP=tcp://10.152.183.87:280
FRONTEND_SVC_PORT_280_TCP_PROTO=tcp
FRONTEND_SVC_PORT_280_TCP_ADDR=10.152.183.87
Clear all the resources created:
kubectl delete -f .
Now, another try.
This time we will do the same thing, but "slowly": we create things one by one.
First, the namespace:
k apply -f ns.yaml
then the pods:
k apply -f pods.yaml
After a while, create the services:
k apply -f services.yaml
Now you will discover that if you get the envs from one pod
k exec -it webapi -n securedapp -- env
this time you will NOT have the environment variables of the services.
So, as mentioned in the k8s documentation, there is at least one case where it is necessary to create the svc BEFORE the pods: when you need the env variables of your services.
Regards

How to access a service from another namespace in kubernetes

I have the zipkin deployment and service below. As you can see, zipkin is located under the monitoring namespace. I have an env variable called ZIPKIN_URL in each of my pods, which are running under the default namespace; this variable takes the URL http://zipkin:9411/api/v2/spans, but since zipkin is running in another namespace I tried this:
http://zipkin.monitoring.svc.cluster.local:9411/api/v2/spans
I also tried this format:
http://zipkin.monitoring:9411/api/v2/spans
but when I check the logs of my pods, I see a connection refused exception.
When I exec into one of my pods and try curl http://zipkin.tools.svc.cluster.local:9411/api/v2/spans
it shows me Mandatory parameter is missing: serviceName
Here is the zipkin resource:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zipkin
  namespace: monitoring
spec:
  template:
    metadata:
      labels:
        app: zipkin
    spec:
      containers:
        - name: zipkin
          image: openzipkin/zipkin:2.19.3
          ports:
            - containerPort: 9411
---
apiVersion: v1
kind: Service
metadata:
  name: zipkin
  namespace: monitoring
spec:
  selector:
    app: zipkin
  ports:
    - name: http
      port: 9411
      protocol: TCP
  type: ClusterIP
What you have is correct, your issue is likely not DNS. You can confirm by doing just a DNS lookup and comparing that to the IP of the Service.
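A minimal sketch of that check (names taken from the question; it assumes the pod image ships nslookup, which busybox- or dnsutils-based images do):
kubectl exec -it <some-pod-in-default-namespace> -- nslookup zipkin.monitoring.svc.cluster.local
kubectl get service zipkin -n monitoring -o jsonpath='{.spec.clusterIP}'
# the address returned by nslookup should match the Service's cluster IP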

kubernetes service getting auto deleted after each deployment

We are facing unexpected behavior in Kubernetes. When we run the command:
kubectl apply -f my-service_deployment.yml
we notice that the existing service associated with the same pod gets deleted automatically. Also, we noticed that when we apply the deployment file, instead of the output saying the deployment got configured (as it's already running), the output says "deployment created". Is there some problem here?
Also, sometimes we have noticed that the service is recreated with a different timestamp than when we created it, and with a different IP.
What may be the reasons for this unexpected behavior of the service?
Note: we noticed that there is another pod and service running in the same cluster, with pod name "my-stage-my-service" and service name my-stage-service-v1. Will this have any impact?
Deployment file:-
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: myacr/myservice:v1-dev
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: my-az-secret
Service file:
apiVersion: v1
kind: Service
metadata:
  name: my-stage-service
spec:
  selector:
    app: my-service
  ports:
    - protocol: TCP
      port: 8880
      targetPort: 8080
  type: LoadBalancer

Ingress is not getting address on GKE and GCE

When creating the ingress, no address is generated, and when viewed from the GKE dashboard it is always in the "Creating ingress" status.
Describing the ingress does not show any events, and I cannot see any clues on the GKE dashboard.
Has anyone had a similar issue, or any suggestions on how to debug it?
My deployment.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mobile-gateway-ingress
spec:
  backend:
    serviceName: mobile-gateway-service
    servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mobile-gateway-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    app: mobile-gateway
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mobile-gateway-deployment
  labels:
    app: mobile-gateway
spec:
  selector:
    matchLabels:
      app: mobile-gateway
  replicas: 2
  template:
    metadata:
      labels:
        app: mobile-gateway
    spec:
      containers:
        - name: mobile-gateway
          image: eu.gcr.io/my-project/mobile-gateway:latest
          ports:
            - containerPort: 8080
Describing ingress shows no events:
mobile-gateway ➤ kubectl describe ingress mobile-gateway-ingress git:master*
Name:             mobile-gateway-ingress
Namespace:        default
Address:
Default backend:  mobile-gateway-service:80 (10.4.1.3:8080,10.4.2.3:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     mobile-gateway-service:80 (10.4.1.3:8080,10.4.2.3:8080)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"mobile-gateway-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"mobile-gateway-service","servicePort":80}}}
Events: <none>
hello ➤
With a simple LoadBalancer service, an IP address is given. The problem is only with the ingress resource.
The problem in this case was that I had not included the HttpLoadBalancing addon when creating the cluster!
My fault, but it would have been nice to have an event in the ingress resource informing me of this mistake.
Strangely, when I created a new cluster to follow the tutorial cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer using the default addons, including HttpLoadBalancing, I observed the same issue. Maybe I didn't wait long enough? Anyway, it is working now that I have included the addon.
To complete the accepted answer, it's worth noting that it's possible to activate the addon on an already existing cluster (from the Google console).
However, it will restart your cluster, with downtime (in my case, it took several minutes on an almost empty cluster). Be sure that's acceptable in your case, and run tests first.
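For reference, a sketch of the same change from the command line instead of the console (cluster name and zone are placeholders; assumes the gcloud CLI is installed and authenticated):
gcloud container clusters update CLUSTER_NAME \
    --zone ZONE \
    --update-addons=HttpLoadBalancing=ENABLED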