Access a k8s service via Ingress-exposed browser app - kubernetes

I have an API and a GUI application.
The API is a simple Python FastAPI app with one route. The GUI is a React app that is served via nginx.
So I have a working local cluster with those two apps. Each has a Service; the GUI additionally has an Ingress. Using k3d, I am able to get to the GUI app, because I start the cluster with: k3d cluster create -p "8081:80@loadbalancer", so I can go to localhost:8081 and I see it. The Ingress config:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo-gui-ingress
  annotations:
    external-dns/manage-entries: "true"
    external-dns.alpha.kubernetes.io/ttl: "300"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo-gui-service
            port:
              number: 80
The Service configurations are pretty much the same, just with different names:
apiVersion: v1
kind: Service
metadata:
  name: foo-gui-service
  labels:
    app.kubernetes.io/name: foo-gui
    app.kubernetes.io/component: server
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    app.kubernetes.io/part-of: foo-gui
spec:
  selector:
    app: foo-gui
    component: foo-gui
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
GUI makes calls to the API. The API Service is called foo-api-service, so I want to make calls to http://foo-api-service:80/<route>. It works in a temporary pod that I install for debugging network issues (I can curl the API). It even works when I attach to the GUI Pod.
The problem is: it doesn't work from the GUI. I receive the following error in the dev console in the browser: POST http://foo-api-service/test net::ERR_NAME_NOT_RESOLVED. It looks like http://foo-api-service is being resolved by my local network, not by the cluster's DNS.
My question is: Is there a way to overcome that? To have http://foo-api-service calls to resolve this name inside the local cluster? Or should I provide Ingress for the API as well and use the domain name from API's Ingress as a URL in the GUI app?
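If the answer is the latter, I imagine the existing Ingress could simply grow an extra path rule along these lines (just a sketch: it assumes the GUI is changed to call relative URLs such as /api/test, and that the FastAPI routes either include the /api prefix or the prefix is stripped by the ingress controller):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo-gui-ingress
  annotations:
    external-dns/manage-entries: "true"
    external-dns.alpha.kubernetes.io/ttl: "300"
spec:
  rules:
  - http:
      paths:
      - path: /api          # hypothetical prefix routed to the API
        pathType: Prefix
        backend:
          service:
            name: foo-api-service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo-gui-service
            port:
              number: 80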

Related

GKE Ingress and GCE Load balancer: Always get 404

I try to deploy 2 applications (behind 2 separate Deployment objects). I have 1 Service per Deployment, with type NodePort.
application1_service.yaml
apiVersion: v1
kind: Service
metadata:
  name: application1-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: application1
  type: NodePort
application2_service.yaml is the exact same (except for name and run)
I use an Ingress to make the 2 services available,
ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
    networking.gke.io/managed-certificates: "my-certificate"
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - host: "my.host.com"
    http:
      paths:
      - path: /*
        backend:
          serviceName: application1-service
          servicePort: 80
      - path: /application2/*
        backend:
          serviceName: application2-service
          servicePort: 80
I also create a ManagedCertificate object, to be able to handle HTTPS requests.
managed_certificate.yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: my-certificate
spec:
  domains:
  - my.host.com
The weird thing here is that curl https://my.host.com/ works fine and I can access my service, but when I try curl https://my.host.com/application2/, I keep getting 404 Not Found.
Why is the root path working and not the other?
Additional info:
The ManagedCertificate is valid and works fine with /.
application1 and application2 are the exact same app and if I swap them in the ingress, the output is the same.
Thanks for your help!
EDIT:
Here is the 404 I get when I try to access application2
Don't know if it can help but here is also the part of the Ingress access logs showing the 404
I think you can't use the same port for 2 different applications, because this port is used on every node to route to one app.
From the docs:
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So in your case one app is already using port 80; you could try to use a different one for application2.
I have replicated the problem without a domain and a managed Certificate. For me it's working fine using the Load Balancer IP address.
(ingress-316204)$ curl http://<LB IP address>/v2/
Hello, world!
Version: 2.0.0
Hostname: web2-XXX
(ingress-316204)$ curl http://<LB IP address>/
Hello, world!
Version: 1.0.0
Hostname: web-XXX
Ingress configuration seems to be correct as per the doc. Check if curl https://<LB IP address>/application2/ is working or not; if it's working, then there might be some issue with the host name.
Check if you have updated the hosts file (/etc/hosts) with the line <LB IP address> my.host.com.
Check if host, path and backend are configured correctly in the Load Balancer configuration.
If you still have the problem, check that Port, NodePort and TargetPort are configured correctly, or else share the output of kubectl describe ing my-ingress for further investigation.
Answering my own question:
After searching for days, I ultimately found the reason for the problem.
Everything was fine with the cluster and the configs; my problem came from my Flask API.
All the routes were like this one:
@app.route("/my_function")
So it was working fine on the root path with my.host.com/my_function, but when I typed my.host.com/application1/my_function it wasn't working...
I just changed my app to
@app.route("/application1/my_function")
Everything works fine now :) Hope it will help!

Kubernetes - Pass Public IP of Load Balance as Environment Variable into Pod

Gist
I have a ConfigMap which provides necessary environment variables to my pods:
apiVersion: v1
kind: ConfigMap
metadata:
  name: global-config
data:
  NODE_ENV: prod
  LEVEL: info
  # I need to set API_URL to the public IP address of the Load Balancer
  API_URL: http://<SOME IP>:3000
  DATABASE_URL: mongodb://database:27017
  SOME_SERVICE_HOST: some-service:3000
I am running my Kubernetes Cluster on Google Cloud, so it will automatically create a public endpoint for my service:
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
    nodePort: 30000
  type: LoadBalancer
Issue
I have a web application that needs to make HTTP requests from the client's browser to the gateway service. But in order to make a request to the external service, the web app needs to know its IP address.
So I've set up the pod which serves the web application in such a way that it picks up an environment variable "API_URL" and, as a result, makes all HTTP requests to this URL.
So I just need a way to set the API_URL environment variable to the public IP address of the gateway service to pass it into a pod when it starts.
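For context, the web app Deployment picks this variable up from the ConfigMap roughly like this (a trimmed sketch; the deployment name and image are placeholders, not my real ones):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                    # placeholder name
spec:
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: my-web-app:latest   # placeholder image
        envFrom:
        - configMapRef:
            name: global-config    # exposes API_URL and the other keys as env vars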
I know this isn't the exact approach you were going for, but I've found that creating a static IP address and explicitly passing it in tends to be easier to work with.
First, create a static IP address:
gcloud compute addresses create gke-ip --region <region>
where region is the GCP region your GKE cluster is located in.
Then you can get your new IP address with:
gcloud compute addresses describe gke-ip --region <region>
Now you can add your static IP address to your service by specifying an explicit loadBalancerIP.[1]
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
    nodePort: 30000
  type: LoadBalancer
  loadBalancerIP: "1.2.3.4"
At this point, you can also hard-code it into your ConfigMap and not worry about grabbing the value from the cluster itself.
[1] If you've already created a LoadBalancer with an auto-assigned IP address, setting an IP address won't change the IP of the underlying GCP load balancer. Instead, you should delete the LoadBalancer service in your cluster, wait ~15 minutes for the underlying GCP resources to get cleaned up, and then recreate the LoadBalancer with the explicit IP address.
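If you do hard-code it, the ConfigMap simply carries the reserved address (a sketch reusing the example IP from above):
apiVersion: v1
kind: ConfigMap
metadata:
  name: global-config
data:
  NODE_ENV: prod
  LEVEL: info
  API_URL: http://1.2.3.4:3000       # the static IP reserved earlier
  DATABASE_URL: mongodb://database:27017
  SOME_SERVICE_HOST: some-service:3000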
You are trying to access the gateway service from the client's browser.
I would like to suggest another solution that is slightly different from what you are currently trying to achieve, but it can solve your problem.
From your question I was able to deduce that your web app and gateway app are on the same cluster.
In my solution you don't need a Service of type LoadBalancer, and a basic Ingress is enough to make it work.
You only need to create a Service object (notice that option type: LoadBalancer is now gone)
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
and you also need an Ingress object (remember that an Ingress Controller needs to be deployed to the cluster in order to make it work), like the one below.
More on how to deploy the Nginx Ingress controller you can find here,
and if you are already using one (maybe a different one) then you can skip this step.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: gateway.foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: gateway
          servicePort: 3000
Notice the host field.
The same you need to repeat for your web application. Remember to use an appropriate host name (DNS name),
e.g. for the web app: foo.bar.com and for the gateway: gateway.foo.bar.com,
and then just use the gateway.foo.bar.com DNS name to connect to the gateway app from the client's web browser (see the sketch below).
You also need to create a DNS entry that points *.foo.bar.com to the Ingress's public IP address,
as the Ingress controller will create its own load balancer.
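A sketch of what the web app's Ingress could look like (same pattern as the gateway Ingress above; the frontend Service name and port are assumptions you should replace with your own):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-app     # assumed name of the frontend Service
          servicePort: 80          # assumed port of the frontend Service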
The flow of traffic would be like below:
+-------------+ +---------+ +-----------------+ +---------------------+
| Web Browser |-->| Ingress |-->| gateway Service |-->| gateway application |
+-------------+ +---------+ +-----------------+ +---------------------+
This approach is better because it won't cause issues with Cross-Origin Resource Sharing (CORS) in the client's browser.
Examples of Ingress and Service manifests I took from official kubernetes documentation and modified slightly.
More on Ingress you can find here
and on Services here
The following deployment reads the external IP of a given service using kubectl every 10 seconds and patches a given configmap with it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configmap-updater
  labels:
    app: configmap-updater
spec:
  selector:
    matchLabels:
      app: configmap-updater
  template:
    metadata:
      labels:
        app: configmap-updater
    spec:
      containers:
      - name: configmap-updater
        image: alpine:3.10
        command: ['sh', '-c']
        args:
        - |
          #!/bin/sh
          set -x
          apk --update add curl
          curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl
          chmod +x kubectl
          export CONFIGMAP="configmap/global-config"
          export SERVICE="service/gateway"
          while true
          do
            # look up the external IP of the gateway Service
            IP=`./kubectl get $SERVICE -o go-template --template='{{ (index .status.loadBalancer.ingress 0).ip }}'`
            PATCH=`printf '{"data":{"API_URL": "https://%s:3000"}}' $IP`
            echo ${PATCH}
            # write it into the global-config ConfigMap
            ./kubectl patch --type=merge -p "${PATCH}" $CONFIGMAP
            sleep 10
          done
You probably have RBAC enabled in your GKE cluster and would still need to create the appropriate Role and RoleBinding for this to work correctly.
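Something along these lines should be enough (a sketch only, assuming the deployment above runs under the default service account in the default namespace):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-updater          # hypothetical name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list"]           # read the LoadBalancer IP of the gateway Service
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "patch"]          # patch global-config with the new API_URL
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: configmap-updater
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: configmap-updater
  apiGroup: rbac.authorization.k8s.io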
You've got a few possibilities:
If you really need this to be hacked into your setup, you could use a similar approach with a sidecar container in your pod or a global service like above. Keep in mind that you would need to recreate your pods if the configmap actually changed for the changes to be picked up by the environment variables of your containers.
Watch and query the Kubernetes API for the external IP directly in your application, eliminating the need for an environment variable.
Adapt your applications to not directly depend on the external IP.

How to send data from a container within a Pod, to a container within a separate Pod - within the same cluster?

I have a React frontend and a node.js backend. Each is in a separate container, within separate Pods, within the same cluster in k8s.
I want to send data between them without having to use IP addresses. I know Kubernetes has a feature that lets you talk between pods inside the same cluster, and I think it's related to the selector label defined within the Service files created.
I have created a ClusterIP service for my React app and another ClusterIP service for my server. I have created an ingress file for my application. I know my ingress works as I can access my UI, and I can hit the health check endpoint of my server, so I know they are exposed to the outside world correctly. My problem is how to communicate internally within k8s.
Within my React app I have tried to write
axios.post("/api/test", {
value: "TestValue"
});
But the /api/test endpoint within my server never gets hit with this.
Backend Server Cluster IP - - - - 
 
apiVersion: v1
kind: Service
metadata:
  name: server-model-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server-model
  ports:
  - port: 8050
    targetPort: 8050
React UI Cluster IP - - - -
apiVersion: v1
kind: Service
metadata:
  name: react-ui-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: react-ui
  ports:
  - port: 3000
    targetPort: 3000
Ingress File - - - - -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /api/?(.*)
        backend:
          serviceName: react-ui-cluster-ip-service
          servicePort: 8050
      - path: /server/?(.*)
        backend:
          serviceName: server-model-cluster-ip-service
          servicePort: 8050
I understand the label selector is what maps my React ClusterIP Service to the Deployment for my UI, and similarly my Server ClusterIP Service to my Deployment for the server. I think I am right in saying I can use the selector somehow to send axios/HTTP requests to other pods like..
axios.post("/PODNAME/api/test", {
value: "TestValue"
});
Could anyone tell me if I am completely wrong or missing something obvious please :)
Here in this part of the ingress, the service name react-ui-cluster-ip-service is the one which is running on port 3000, as you mention in the service spec file.
You are sending traffic to the proper service name, but the port is the wrong one.
- path: /api/?(.*)
  backend:
    serviceName: react-ui-cluster-ip-service
    servicePort: 8050
I think due to this you are not able to send the request to /api/?(.*).
From your service spec file you can also remove type: ClusterIP (it is the default); you can use just the service name to resolve the services inside the Kubernetes cluster.
Answer to your question title: containers within a Pod can talk over localhost, while containers in separate Pods can talk over the service name; there is no need to add a service type of ClusterIP or NodePort for that.
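For example, the /api rule from your ingress would then look something like this (a sketch only, using the port from the react-ui service spec above):
- path: /api/?(.*)
  backend:
    serviceName: react-ui-cluster-ip-service
    servicePort: 3000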
I have a few concerns here.
As #Harsh Manvar mentioned in his answer, Kubernetes provides a mechanism for discovering internal services which guarantees intercommunication between Pods within the same cluster, either by IP address or by the relevant DNS name of the Service. Therefore you might be able to reach your backend server from a particular frontend Pod without involving Ingress, as the Ingress controller acts as an edge router and exposes HTTP and HTTPS traffic from outside the cluster to the corresponding Kubernetes services.
You also used a rewrite expression in your specific path-based routing rules within the Ingress object. In that scenario the rewrite ends up working as follows: server-model-cluster-ip-service/api/test is rewritten to server-model-cluster-ip-service/test, and this is the final path that reaches your backend service. The fact that you are invoking the axios.post("server-model-cluster-ip-service/api/test", { value: "TestValue" }) request from the React UI Pod might therefore not hit the target backend service.
I just gave some points to consider for further troubleshooting; at the very least you can log into the frontend Pod and check the connectivity to the target backend service.

How do I make my admin ui of cockroachdb publicly available via traefik ingress controller on kubernetes?

Kubernetes dedicated cockroachdb node - accessing admin ui via traefik ingress controller fails - page isn't redirecting properly
I have a dedicated kubernetes node running cockroachdb. The pods get scheduled and everything is setup. I want to access the admin UI from a subdomain like so: cockroachdb.hostname.com. I have done this with traefik dashboard and ceph dashboard so I know my ingress setup is working. I even have cert-manager running to have https enabled. I get the error from the browser that the page is not redirecting properly.
Do I have to specify the host name somewhere special?
I have tried adding this with no success: --http-host cockroachdb.hostname.com
This dedicated node has its own public ip which is not mapped to hostname.com. I think I need to change a setting in cockroachdb, but I don't know which because I am new to it.
Does anyone know how to publish admin UI via an ingress?
EDIT01: Added ingress and service config files
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-public
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    certmanager.k8s.io/issuer: "letsencrypt-prod"
    certmanager.k8s.io/acme-challenge-type: http01
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/ssl-temporary-redirect: "true"
    ingress.kubernetes.io/ssl-host: "cockroachdb.hostname.com"
    traefik.frontend.rule: "Host:cockroachdb.hostname.com,www.cockroachdb.hostname.com"
    traefik.frontend.redirect.regex: "^https://www.cockroachdb.hostname.com(.*)"
    traefik.frontend.redirect.replacement: "https://cockroachdb.hostname.com/$1"
spec:
  rules:
  - host: cockroachdb.hostname.com
    http:
      paths:
      - path: /
        backend:
          serviceName: cockroachdb-public
          servicePort: http
  - host: www.cockroachdb.hostname.com
    http:
      paths:
      - path: /
        backend:
          serviceName: cockroachdb-public
          servicePort: http
  tls:
  - hosts:
    - cockroachdb.hostname.com
    - www.cockroachdb.hostname.com
    secretName: cockroachdb-secret
Service:
apiVersion: v1
kind: Service
metadata:
  # This service is meant to be used by clients of the database. It exposes a ClusterIP that will
  # automatically load balance connections to the different database pods.
  name: cockroachdb-public
  labels:
    app: cockroachdb
spec:
  ports:
  # The main port, served by gRPC, serves Postgres-flavor SQL, internode
  # traffic and the cli.
  - port: 26257
    targetPort: 26257
    name: grpc
  # The secondary port serves the UI as well as health and debug endpoints.
  - port: 8080
    targetPort: 8080
    name: http
  selector:
    app: cockroachdb
EDIT02:
I can access the Admin UI page now, but only by going to the external IP address of the server on port 8080. I think I need to tell my server that its IP address is mapped to the correct subdomain?
EDIT03:
On both scheduled traefik-ingress pods the following logs are created:
time="2019-04-29T04:31:42Z" level=error msg="Service not found for default/cockroachdb-public"
Your referencing looks good on the ingress side. You are using quite a few redirects; unless you really know what each one is accomplishing, don't use them, as you might end up in an infinite loop of redirects.
You can take a look at the following logs and methods to debug:
Run kubectl logs <traefik pod> and see the last batch of logs.
Run kubectl get service, and from what I hear, this is likely your main issue. Make sure your service exists in the default namespace.
Run kubectl port-forward svc/cockroachdb-public 8080:8080 and try connecting to it through localhost:8080 and see terminal for potential error messages.
Run kubectl describe ingress cockroachdb-public and look at the events, this should give you something to work with.
Try accessing the service from another pod you have running: ping cockroachdb-public.default.svc.cluster.local and see if it resolves the IP address.
Take a look at your clusterrolebindings and serviceaccount, it might be limited and not have permission to list services in the default namespace: kubectl create clusterrolebinding default-admin --clusterrole cluster-admin --serviceaccount=default:default
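If you prefer applying a manifest, the same binding looks like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io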

Kubernetes path based routing for multiple namespaces

The environment: I have a kubernetes cluster set up with namespaces for "dev", "sit" and "prod". In each of these namespaces I have multiple services of type: LoadBalancer which target a specific deployment of a dockerised application (I have multiple applications), so I can access each of these by just using the exposed IP address of the service of whichever namespace I want. An example service looks like this and is very simple:
apiVersion: v1
kind: Service
metadata:
  name: application1
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
    name: http
  type: LoadBalancer
  selector:
    app: application1
The problem: I now want to be able to support multiple versions of all applications (ip:/v1/, ip:/v2/ etc.) to allow the users to migrate to the new version when they are ready, and I've been trying to implement path-based routing following this guide. I have managed to restructure my architecture so that I have ReplicationControllers and an ingress which looks at the path rules to route to the correct service.
This would only seem to work if I had one exposed service and a single namespace, because I only have DNS host names for the production environment and want to use the individual IP address of a service for the other environments, and I can't figure out how to specify the ingress rules for a service which doesn't have a hostname.
I could just have a loadbalancer for every environment and use path-based routing to route to the different services for dev and sit, which is not ideal because to access any service we'd now have to use something like ip/application1 and ip/application2 instead of directly using the service IP address of each application. But my biggest problem is that when I followed the guide and created the ingress, replicationController and a service in my SIT namespace, it started affecting the loadbalancer services in my other two environments (as I understand it, kubernetes would sometimes try to use the nginx controller from the SIT environment on my DEV services and therefore would fail; other times it would use the GCE default configuration and would work).
I tried adding the arg "- --watch-namespace=sit" to limit the scope of the ingress controller to only affect sit, but it does not seem to work.
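For reference, this is roughly where I put that flag (only the relevant fragment of my nginx ingress controller Deployment; the image and remaining flags are unchanged):
      containers:
      - name: nginx-ingress-controller
        image: <your existing controller image>
        args:
        - /nginx-ingress-controller
        - --watch-namespace=sit    # restrict this controller instance to the sit namespace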
I now want to be able to support multiple versions of all applications (ip:/v1/, ip:/v2/ etc.)
That is exactly what Ingress can do, but the problem is that you want to use IP addresses for routing, while Ingress uses DNS names for that.
I think the best way to implement this is to use an Ingress which will handle requests. On GCE, Ingress uses the HTTP(S) load balancer. Yes, you will need a DNS name for that, but it will help you to create the routing you need.
Also, I highly recommend using TLS encryption for connections.
You can check LetsEncrypt to get a free SSL certificate.
So, the solution should like below:
1. Deploy your Services with type "ClusterIP" instead of "LoadBalancer". You can have more than one Service object for an application so you can do it in parallel with your current configuration.
2. Select any namespace (even a special one), for instance "ingress-ns". We need to create Service objects there which will point to your services in other namespaces. Here is an example of a service (let the new DNS name be "my.shiny.new.domain"):
kind: Service
apiVersion: v1
metadata:
  name: service-v1
  namespace: ingress-ns
spec:
  type: ExternalName
  externalName: <service>.<namespace>.svc.cluster.local # here is a service name and namespace of your service with version v1.
  ports:
  - port: 80
3. Now we have a namespace with several services pointing to different versions of your application in different namespaces, and we can create an Ingress object which will create an HTTP(S) Load Balancer on GCE with path-based routing:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: ingress-ns
spec:
  rules:
  - host: my.shiny.new.domain
    http:
      paths:
      - path: /v1
        backend:
          serviceName: service-v1
          servicePort: 80
      - path: /v2
        backend:
          serviceName: service-v2
          servicePort: 80
Kubernetes will create a new HTTP(S) balancer with the rules you set up in the Ingress object, and you will have an entry point with cross-namespace path-based routing, without having to use multiple IP addresses.
Actually, you can also manage your primary version of an application with that Ingress, and use your primary domain with the "/" path to handle requests to your production version.
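For instance, an extra catch-all rule could be appended for the production version (a sketch only; service-prod would be an assumed ExternalName Service, created the same way as service-v1, pointing at the production namespace):
      - path: /
        backend:
          serviceName: service-prod
          servicePort: 80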