Ingress is not getting address on GKE and GCE - kubernetes

When creating an ingress, no address is generated, and when viewed from the GKE dashboard it is always in the Creating ingress status.
Describing the ingress does not show any events, and I can not see any clues on the GKE dashboard.
Has anyone had a similar issue, or any suggestions on how to debug?
My deployment.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mobile-gateway-ingress
spec:
  backend:
    serviceName: mobile-gateway-service
    servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mobile-gateway-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    app: mobile-gateway
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mobile-gateway-deployment
  labels:
    app: mobile-gateway
spec:
  selector:
    matchLabels:
      app: mobile-gateway
  replicas: 2
  template:
    metadata:
      labels:
        app: mobile-gateway
    spec:
      containers:
        - name: mobile-gateway
          image: eu.gcr.io/my-project/mobile-gateway:latest
          ports:
            - containerPort: 8080
Describing ingress shows no events:
mobile-gateway ➤ kubectl describe ingress mobile-gateway-ingress
Name:             mobile-gateway-ingress
Namespace:        default
Address:
Default backend:  mobile-gateway-service:80 (10.4.1.3:8080,10.4.2.3:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     mobile-gateway-service:80 (10.4.1.3:8080,10.4.2.3:8080)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"mobile-gateway-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"mobile-gateway-service","servicePort":80}}}
Events:           <none>
With a simple LoadBalancer service, an IP address is given. The problem is only with the ingress resource.

The problem in this case was that I had not included the HttpLoadBalancing add-on when creating the cluster!
My fault, but it would have been nice to have an event on the ingress resource informing me of this mistake.
Strangely, when I created a new cluster to follow the tutorial cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer using the default add-ons, including HttpLoadBalancing, I observed the same issue. Maybe I didn't wait long enough? Anyway, it's working now that I have included the add-on.
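For what it's worth, you can check whether the add-on is enabled on a cluster with something like the following sketch (the cluster name and zone are placeholders, not from this setup):
gcloud container clusters describe my-cluster \
    --zone europe-west1-b \
    --format="value(addonsConfig.httpLoadBalancing)"
If this reports the add-on as disabled, the GCE ingress controller is not running, which would match the missing-events symptom above.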

To complete the accepted answer, it's worth noting that it's possible to activate the addon on an already existing cluster (from the Google console).
However, it will restart your cluster with downtime (in my case, it took several minutes on an almost empty cluster). Be sure that's acceptable in your case, and do tests.
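The same can also be done from the command line; a minimal sketch, assuming a cluster named my-cluster in zone europe-west1-b:
gcloud container clusters update my-cluster \
    --zone europe-west1-b \
    --update-addons HttpLoadBalancing=ENABLED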

Related

Kubernetes, Loadbalancing, and Nginx Ingress - AKS

Stack:
Azure Kubernetes Service
NGINX Ingress Controller - https://github.com/kubernetes/ingress-nginx
AKS Loadbalancer
Docker containers
My goal is to create a K8s cluster that will allow me to use multiple pods, under a single IP, to create a microservice architecture. After working through tons of tutorials and documentation, I'm not having any luck with my end goal. I got to the point of being able to access a single deployment using the LoadBalancer, but introducing the ingress has not been successful so far. The services are separated into their respective files for readability and ease of control.
Additionally, the Ingress Controller was added to my cluster as described in the installation instructions using: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/cloud/deploy.yaml
LoadBalancer.yml:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  loadBalancerIP: x.x.x.x
  selector:
    app: ingress-service
    tier: backend
  ports:
    - name: "default"
      port: 80
      targetPort: 80
  type: LoadBalancer
IngressService.yml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /api
            backend:
              serviceName: api-service
              servicePort: 80
api-deployment.yml
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  selector:
    matchLabels:
      app: api
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: api
        tier: backend
        track: stable
    spec:
      containers:
        - name: api
          image: image:tag
          ports:
            - containerPort: 80
          imagePullPolicy: Always
      imagePullSecrets:
        - name: SECRET
The API in the image is exposed on port 80 correctly.
After applying each of the above yml services and deployments, I attempt a web request to one of the api resources via the LoadBalancer's IP and receive only a timeout on my requests.
Found my answer after hunting around enough. Basically, the problem was that the Ingress Controller already has a LoadBalancer built into its yaml, as mentioned in the comments above. However, the selector for that LoadBalancer requires marking your Ingress service as part of the class. That Ingress then points to each of the services attached to your pods. I also had to make a small modification to allow using a static IP in the provided load balancer, roughly as sketched below.
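As an illustration (not the exact files from this setup), the idea is to drop the separate backend LoadBalancer service entirely and instead pin the static IP on the Service that the controller's deploy.yaml already creates. The name, namespace, and selector labels below match the v0.35.0 manifest; the IP and resource group values are assumptions:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # Needed when the static IP lives outside the AKS node resource group.
    service.beta.kubernetes.io/azure-load-balancer-resource-group: my-resource-group
spec:
  type: LoadBalancer
  loadBalancerIP: x.x.x.x
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
Traffic then flows LoadBalancer -> ingress-nginx controller -> api-service, so the Ingress object (with its kubernetes.io/ingress.class: nginx annotation) is what ties your pods' services to that single IP.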

Can't connect with my frontend with kubectl

With Kubernetes, I created an ingress with a service like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: syntaxmap2
spec:
  backend:
    serviceName: testsvc
    servicePort: 3000
The service testsvc is already created.
I created a frontend service like this:
apiVersion: v1
kind: Service
metadata:
  name: syntaxmapfrontend
spec:
  selector:
    app: syntaxmap
    tier: frontend
  ports:
    - protocol: "TCP"
      port: 7000
      targetPort: 7000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: syntaxmapfrontend
spec:
  selector:
    matchLabels:
      app: syntaxmap
      tier: frontend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: syntaxmap
        tier: frontend
        track: stable
    spec:
      containers:
        - name: nginx
          image: "gcr.io/google-samples/hello-frontend:1.0"
          lifecycle:
            preStop:
              exec:
                command: ["/usr/sbin/nginx", "-s", "quit"]
When I run this command:
kubectl describe ingress syntaxmap2
I get an IP address that I can put in my browser, and I get an answer.
But when I run this command:
kubectl describe service syntaxmapfrontend
I get an IP address with a port, but when I try to connect to it with curl, I get a timeout.
How can I connect to my Kubernetes frontend with curl?
The service is accessible only from within the k8s cluster. You either need to change the service type from ClusterIP to NodePort, or use something like kubectl port-forward or kubefwd.
If you need more detailed advice, you'll need to post the output of those commands, or even better, show us how you created the objects.
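For example, a quick way to test the frontend from your own machine without changing the service type (a sketch using the names above):
kubectl port-forward service/syntaxmapfrontend 7000:7000
curl http://localhost:7000/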
I have found a way.
I run:
minikube service syntaxmapfrontend
And it opens a browser with the right URL.

Conditional reverse-proxy in kubernetes based on server response

TLDR: I am looking for a solution that would allow me to proxy traffic between two different Kubernetes services, based on their response.
Background:
I have an existing application hosted on Kubernetes. Recently I started rewriting one of my microservices in order to speed it up and add a few new features. I want to allow my users to decide whether they want to start using this new service or stick with the old one (since some features have breaking changes for their use case).
Since users usually reach this microservice using an address like username.given-microservice.example.com, my initial plan was to set up a proxy between these services which could query one of my endpoints, like:
http://my-new-service.example.com/enabled-for-client?=username
if it returned code 200, then the client would be forwarded to the new service
if the response code was anything else, then the client would be forwarded to the old service.
Of course, the response from the URI above would depend on user settings.
This scenario is very similar to A/B testing, but I have had trouble finding any way to set up a proxy based on a URL response.
I would highly appreciate any suggestions, blog posts, or links to documentation that could help me solve this - at the moment I have run out of ideas and feel stuck.
Envoy can handle such a scenario; start by looking at HTTP routing. If you can't find what you're looking for, you can always write filter/routing rules in Lua; see the rough sketch below.
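As an illustration only, here is a fragment of an Envoy config (not a complete bootstrap): a Lua filter calls the flag endpoint and tags the request with a header, and the route table then matches on that header. The cluster names (new_service, old_service) and the exact flag-query path are assumptions based on the question:
http_filters:
  - name: envoy.filters.http.lua
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
      inline_code: |
        function envoy_on_request(request_handle)
          -- "username.given-microservice.example.com" -> "username"
          local host = request_handle:headers():get(":authority")
          local user = string.match(host, "^([^.]+)%.")
          -- Synchronous call to the flag endpoint; "new_service" must be
          -- a cluster defined elsewhere in this config (assumption).
          local headers, _ = request_handle:httpCall(
            "new_service",
            { [":method"] = "GET",
              [":path"] = "/enabled-for-client?user=" .. (user or ""),
              [":authority"] = "my-new-service.example.com" },
            "", 1000)
          if headers[":status"] == "200" then
            request_handle:headers():add("x-backend", "new")
          else
            request_handle:headers():add("x-backend", "old")
          end
        end
# ...and in the virtual host, route on the header set above:
routes:
  - match: { prefix: "/", headers: [ { name: x-backend, exact_match: "new" } ] }
    route: { cluster: new_service }
  - match: { prefix: "/" }
    route: { cluster: old_service }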
It's possible to achieve this by using NGINX Ingress with custom-http-errors and default-backend annotations.
I created a lab to prove the concept. Let's dive into it together.
First of all you have to install NGINX Ingress in your cluster. If you don't have it, follow the installation guide.
On this POC we will deploy 2 different applications. One is called old-http-backend and serves a default nginx landing page. The second is called new-http-backend and serves an echo-server landing page.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: old-http-backend
spec:
  selector:
    matchLabels:
      app: old-http-backend
  template:
    metadata:
      labels:
        app: old-http-backend
    spec:
      containers:
        - name: old-http-backend
          image: nginx
          ports:
            - name: http
              containerPort: 80
          imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: old-http-backend
spec:
  selector:
    app: old-http-backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-http-backend
spec:
  selector:
    matchLabels:
      app: new-http-backend
  template:
    metadata:
      labels:
        app: new-http-backend
    spec:
      containers:
        - name: new-http-backend
          image: inanimate/echo-server
          ports:
            - name: http
              containerPort: 8080
          imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: new-http-backend
spec:
  selector:
    app: new-http-backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
After applying this manifest we have the following deployments and services:
$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
new-http-backend   1/1     1            1           2s
old-http-backend   1/1     1            1           43m

$ kubectl get service
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes         ClusterIP   10.31.240.1     <none>        443/TCP   152m
new-http-backend   ClusterIP   10.31.240.168   <none>        80/TCP    44m
old-http-backend   ClusterIP   10.31.242.175   <none>        80/TCP    44m
And now we can apply our Ingress that will be responsible for doing all the magic for us:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/custom-http-errors: '403,404,500,502,503,504'
    nginx.ingress.kubernetes.io/default-backend: old-http-backend
spec:
  rules:
    - host: app.company.com
      http:
        paths:
          - path: "/"
            backend:
              serviceName: new-http-backend
              servicePort: 80
What is this ingress doing?
In the Custom HTTP Errors documentation we can read that if a default-backend annotation is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend).
So by inserting these annotations in our ingress rule we are saying that all requests should go to new-http-backend, unless it answers with a return code listed in custom-http-errors. If that happens, the user will be redirected to old-http-backend, as specified in the default-backend annotation.
nginx.ingress.kubernetes.io/custom-http-errors: '403,404,500,502,503,504'
nginx.ingress.kubernetes.io/default-backend: old-http-backend
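To check the behavior from outside the cluster, you could curl the ingress; the address below is a placeholder for whatever external IP your NGINX Ingress controller received:
curl -H "Host: app.company.com" http://EXTERNAL_IP/
While new-http-backend answers normally you get the echo-server page; if it returned one of the codes listed in custom-http-errors (503, say), NGINX would serve the same request from old-http-backend instead.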

Bad Gateway with traefik Ingress

I'm using minikube with the Traefik ingress to create sticky sessions.
I deployed Traefik following the user guide it provides for Kubernetes: https://docs.traefik.io/user-guide/kubernetes/
I deploy Traefik using a DaemonSet, since it's a small project and this is my first time using Kubernetes and Docker.
This is my ingress yaml file
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cp-pluggin
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: cppluggins.minikube
      http:
        paths:
          - path: /
            backend:
              serviceName: cp-pluggin
              servicePort: 80
My service yaml file
apiVersion: v1
kind: Service
metadata:
  name: cp-pluggin
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
    traefik.ingress.kubernetes.io/session-cookie-name: "sticky"
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: cp-pluggin-app
Finally, my deployment yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cp-pluggin-app
  labels:
    app: cp-pluggin-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cp-pluggin-app
  template:
    metadata:
      labels:
        app: cp-pluggin-app
    spec:
      containers:
        - name: cp-pluggin-app
          image: essoca/ubuntu-tornado
          ports:
            - containerPort: 8080
I expected
Hello world from: [ipserver]
But I get a
Bad Gateway
I assume you are using Traefik 2.0, the latest version as of now. There are quite a few changes in this version; for instance, the annotations are no longer used. Beyond that, I think the code you posted is missing a big part of the required changes.
Also, it's not very useful to use a DaemonSet, because you are using minikube and that's always a single node. Using a Deployment will at least allow you to play with the scale up/down functionality of Kubernetes.
I wrote an article that might be useful for you: Traefik 2 as Ingress Controller.
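For reference, a minimal sketch of the equivalent route as a Traefik 2 IngressRoute, assuming the Traefik CRDs are installed and an entrypoint named web exists; the sticky-session setup moves from Service annotations into the route definition:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: cp-pluggin
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`cppluggins.minikube`)
      kind: Rule
      services:
        - name: cp-pluggin
          port: 80
          # In Traefik 2, sticky sessions are configured here, not via
          # annotations on the Service.
          sticky:
            cookie:
              name: sticky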

GKE Managed Certificate not serving over HTTPS

I'm trying to spin up a Kubernetes cluster that I can access securely, and I can't seem to get that last part working. I am following this tutorial: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
Here are the .yaml files I'm using for my Ingress, NodePort service, and ManagedCertificate:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: client-v1-cert
spec:
  domains:
    - api.mydomain.com
---
apiVersion: v1
kind: Service
metadata:
  name: client-nodeport-service
spec:
  selector:
    app: myApp
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: api-v1
    networking.gke.io/managed-certificates: client-v1-cert
spec:
  backend:
    serviceName: client-nodeport-service
    servicePort: 80
No errors that I can see in the GCP console. I can also access my API at http://api.mydomain.com/, but it won't work when I try HTTPS. I've been banging my head on this for a few days and am wondering if there's some little thing I'm missing.
--- UPDATE ---
Output of kubectl describe managedcertificate
Name:         client-v1-cert
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2019-07-01T17:42:43Z
  Generation:          3
  Resource Version:    1136504
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/client-v1-cert
  UID:                 b9b7bec1-9c27-33c9-a309-42284a800179
Spec:
  Domains:
    api.mydomain.com
Status:
  Certificate Name:    mcrt-286cdab3-b995-40cc-9b3a-28439285e694
  Certificate Status:  Active
  Domain Status:
    Domain:  api.mydomain.com
    Status:  Active
  Expire Time:  2019-09-29T09:55:12.000-07:00
Events:  <none>
I figured out a solution to this problem. I ended up going into my GCP console, locating the load balancer associated with the Ingress, and noticed that there was only one frontend protocol: HTTP serving over port 80. So I manually added another frontend for HTTPS, selected the managed certificate from the list, waited about 5 minutes, and everything worked.
I have no idea why my ingress.yaml didn't do that automatically, though. So although the problem is fixed, if anyone out there knows why, I would love to know.
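For reference, the manual console fix corresponds roughly to the gcloud commands below. The URL map name is a placeholder (GKE generates one per ingress; gcloud compute url-maps list shows it), and the certificate name comes from the describe output above:
# Create an HTTPS proxy serving the managed certificate for the existing URL map.
gcloud compute target-https-proxies create api-https-proxy \
    --url-map K8S_GENERATED_URL_MAP \
    --ssl-certificates mcrt-286cdab3-b995-40cc-9b3a-28439285e694

# Publish it on port 443 using the same reserved static address.
gcloud compute forwarding-rules create api-https-rule \
    --global \
    --address api-v1 \
    --target-https-proxy api-https-proxy \
    --ports 443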