External access from pod in GKE

I'm new to Kubernetes, and I created pods with the following yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-act
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        envFrom:
        - configMapRef:
            name: map-myapp
The issue is that myapp is trying to query other apps which are located in my Google project (as GCE machines) but are not part of the GKE cluster, without success.
I.e. the issue is that I can't connect to an internal IP outside the cluster. I also tried to create a service, but it didn't fix the issue. All the information I found is about how to expose my cluster to the world, but this is the opposite direction.
What am I missing?

the issue is that I can't connect to the internal IP outside the cluster.
What you are missing is called Ingress, I believe.
Ingress, added in Kubernetes v1.1, exposes HTTP and HTTPS routes from
outside the cluster to services within the cluster.
You can find more details and complete docs here.
Update: As you pointed out, Ingress is a beta feature, but you can successfully use it if you are OK with the limitations. Most likely you are; just go through the list. "Deployed on the master" means, in my understanding, that the ingress controller runs on the k8s master node, a fact that should not normally bother you. What should you define next?
1. First you need to define a Service which targets the pods in your deployment. It seems that you haven't done that yet, have you? (A minimal example is sketched after the Ingress below.)
2. Then, as the next step, you need to create the Ingress, as shown in the docs, e.g.:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: your-service-name
          servicePort: 80
Here your-service-name is the name of the service that you have already defined in point 1).
After you have done all this, the backend service will be available outside of the cluster on a similar URL: https://.service..com
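For point 1, a minimal Service might look like the sketch below. It assumes the pod label app: myapp from the Deployment above; the targetPort of 8080 is a hypothetical container port and should be adjusted to whatever port myapp actually listens on:
apiVersion: v1
kind: Service
metadata:
  name: your-service-name
spec:
  selector:
    app: myapp          # matches the pod labels from the Deployment above
  ports:
  - port: 80            # port exposed inside the cluster (referenced by the Ingress)
    targetPort: 8080    # hypothetical container port; adjust to your app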

In this case you should create a Service backed by a manually defined Endpoints object (pointing at the external IP), like this:
kind: Endpoints
apiVersion: v1
metadata:
  name: mongo
subsets:
- addresses:
  - ip: 10.240.0.4
  ports:
  - port: 27017
---
kind: Service
apiVersion: v1
metadata:
  name: mongo
spec:
  type: ClusterIP
  ports:
  - port: 27017
    targetPort: 27017
Please refer to this GCP blog post, which describes in detail the Kubernetes best practices for mapping external services that live outside your cluster.
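Once both objects are applied, pods inside the cluster can reach the external machine through the Service's cluster DNS name (mongo.default.svc.cluster.local in this example). As a quick connectivity check, something along these lines should work; the throwaway test pod and the mongo image are only for illustration:
# Run a temporary pod and ping the external MongoDB through the Service name
kubectl run mongo-test --rm -it --restart=Never --image=mongo:4.4 -- \
  mongo --host mongo.default.svc.cluster.local --port 27017 --eval 'db.runCommand({ ping: 1 })'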

Related

Connect two pods to the same external access point

I have two pods (deployments) running on minikube. Each pod has the same port exposed (say 8081), but they use different images. Now I want to configure things so that I can access either of the pods using the same external URL, in a load-balanced way. So what I tried to do is put the same matching label in both pods, map them to the same service, and then expose it through a NodePort. Example:
#pod1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep1
  labels:
    apps: dep1
    tier: cloud
spec:
  template:
    metadata:
      name: dep1-pod
      labels:
        app: deployment1
    spec:
      containers:
      - name: cont1
        image: cont1
        ports:
        - containerPort: 8081
  selector:
    matchLabels:
      app: deployment1
Now the second pod:
#pod2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep2
  labels:
    apps: dep2
    tier: cloud
spec:
  template:
    metadata:
      name: dep2-pod
      labels:
        app: deployment1
    spec:
      containers:
      - name: cont2
        image: cont2
        ports:
        - containerPort: 8081
  selector:
    matchLabels:
      app: deployment1
Now the service:
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: NodePort
  ports:
  - port: 8081
    targetPort: 8081
    nodePort: 30169
  selector:
    app: deployment1
Now this does not work as intended, as it refuses to connect to my IP:30169. However, I can connect if only one of the pods is deployed.
Now I know I can achieve this functionality using replicas and just one image, but in this case, I want to do this using 2 images. Any help is much appreciated.
You can use Ingress to achieve it.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
In your situation, Ingress will forward traffic to your services using the same URL. It depends on which path you type: URL for the first Pod and URL/v2 for the second Pod. Of course, you can change /v2 to something else.
To begin, you need to enable Ingress on minikube. You can do it using the command below. You can read more about it here.
minikube addons enable ingress
As the next step, you need to create an Ingress using a yaml file. Here is an example of how to do it step by step.
The yaml file of the Ingress looks as below.
As you can see in this configuration, you can access the first Pod using URL, and traffic will be forwarded to the first service attached to the first Pod. For the second Pod, using URL/v2, traffic will be forwarded to the second service attached to the second Pod. (Example definitions of those two services are sketched after the Ingress.)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: web2
            port:
              number: 8080
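For completeness, the two backing Services (web and web2) referenced by the Ingress above might look like the sketch below. It assumes distinct selectors (app: dep1 and app: dep2 are hypothetical labels used to tell the two Deployments apart) and maps the Service port 8080 used by the Ingress to the container port 8081 from the question:
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: dep1          # hypothetical label identifying the first Deployment's pods
  ports:
  - port: 8080         # port referenced by the Ingress backend
    targetPort: 8081   # container port from the question
---
apiVersion: v1
kind: Service
metadata:
  name: web2
spec:
  selector:
    app: dep2          # hypothetical label identifying the second Deployment's pods
  ports:
  - port: 8080
    targetPort: 8081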

Kubernetes, Loadbalancing, and Nginx Ingress - AKS

Stack:
Azure Kubernetes Service
NGINX Ingress Controller - https://github.com/kubernetes/ingress-nginx
AKS Loadbalancer
Docker containers
My goal is to create a K8s cluster that will allow me to use multiple pods, under a single IP, to create a microservice architecture. After working with tons of tutorials and documentation, I'm not having any luck with my end goal. I got to the point of being able to access a single deployment using the LoadBalancer, but introducing the ingress has not been successful so far. The services are separated into their respective files for readability and ease of control.
Additionally, the Ingress Controller was added to my cluster as described in the installation instructions using: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/cloud/deploy.yaml
LoadBalancer.yml:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  loadBalancerIP: x.x.x.x
  selector:
    app: ingress-service
    tier: backend
  ports:
  - name: "default"
    port: 80
    targetPort: 80
  type: LoadBalancer
IngressService.yml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /api
        backend:
          serviceName: api-service
          servicePort: 80
api-deployment.yml
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  selector:
    matchLabels:
      app: api
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: api
        tier: backend
        track: stable
    spec:
      containers:
      - name: api
        image: image:tag
        ports:
        - containerPort: 80
        imagePullPolicy: Always
      imagePullSecrets:
      - name: SECRET
The API in the image is exposed on port 80 correctly.
After applying each of the above yml services and deployments, I attempt a web request to one of the api resources via the LoadBalancer's IP and receive only a timeout on my requests.
Found my answer after hunting around enough. Basically, the problem was that the Ingress Controller has a LoadBalancer built into its yaml, as mentioned in the comments above. However, the selector for that LoadBalancer requires marking your Ingress service as part of the class. Then that Ingress service points to each of the services attached to your pods. I also had to make a small modification to allow using a static IP in the provided load balancer.
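As an illustration of the static-IP modification, a change along these lines can be applied to the controller's built-in LoadBalancer Service. The service name and namespace below are assumed from the v0.35.0 ingress-nginx cloud manifest and may differ in other versions:
# Attach a pre-allocated static IP to the ingress controller's LoadBalancer Service
kubectl patch service ingress-nginx-controller --namespace ingress-nginx \
  --patch '{"spec": {"loadBalancerIP": "x.x.x.x"}}'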

Cannot access a LoadBalancer service at Kubernetes

I managed to deploy a Python app to the Kubernetes cluster. The Python app image is stored in AWS ECR (Elastic Container Registry).
My deployment is:
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS             IMAGES                                                               SELECTOR
charting-rest-server   1/1     1            1           33m   charting-rest-server   *****.dkr.ecr.eu-west-2.amazonaws.com/charting-rest-server:latest   app=charting-rest-server
And my service is:
NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP                           PORT(S)          AGE    SELECTOR
charting-rest-server-service   LoadBalancer   10.100.4.207   *******.eu-west-2.elb.amazonaws.com   8765:32735/TCP   124m   app=charting-rest-server
According to this AWS guide, when I do curl *****.us-west-2.elb.amazonaws.com:80 I should be able to externally access the Load Balancer, which is going to route me to my pod's IP.
But all I get is
(6) Could not resolve host: *******.eu-west-2.elb.amazonaws.com
And come to think of it, if I want to have access to my pod and send some requests, I should have an external IP like 111.111.111.111 (obviously an example).
EDIT
the deployment's yaml is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: charting-rest-server
spec:
  selector:
    matchLabels:
      app: charting-rest-server
  replicas: 1
  template:
    metadata:
      labels:
        app: charting-rest-server
    spec:
      containers:
      - name: charting-rest-server
        image: *****.eu-west-2.amazonaws.com/charting-rest-server:latest
        ports:
        - containerPort: 5000
the service's yaml:
apiVersion: v1
kind: Service
metadata:
  name: charting-rest-server-service
spec:
  type: LoadBalancer
  selector:
    app: charting-rest-server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
I already tried the suggestions from the comments, using an Ingress instance, but I only ended up spending a huge amount of time trying to understand how they work, whether I was doing something wrong, etc.
I will put the yaml file I used here, but it made no change since my ADDRESS field was empty, so there was no IP to use.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: charting-rest-server-ingress
spec:
  rules:
  - host: charting-rest-server-service
    http:
      paths:
      - path: /
        backend:
          serviceName: charting-rest-server-service
          servicePort: 80
I have been stuck on this problem for so long, so I would appreciate some help.
You already created a Service with type LoadBalancer, but it looks like you have incorrect ports configured.
Your Deployment is created with containerPort: 5000 and your Service is pointing to targetPort: 9376. Those need to match for the Deployment to be exposed.
If you are having a hard time writing yaml for the Service, you can expose the Deployment using the following kubectl command:
kubectl expose --namespace=tick deployment charting-rest-server --type=LoadBalancer --port=8765 --target-port=5000 --name=charting-rest-server-service
Once you fix those ports, you will be able to access the service from outside using its hostname, which appears in the Service status (a command to read it back is sketched after the status below):
status:
  loadBalancer:
    ingress:
    - hostname: aba02b223436111ea85ea06a051f04d8-1294697222.eu-west-2.elb.amazonaws.com
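For example, the hostname can be read back with a one-liner like this (a sketch; the service name is taken from the question):
kubectl get service charting-rest-server-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'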
I also recommend this guide Tutorial: Expose Services on your AWS Quick Start Kubernetes cluster.
If you need more control over the HTTP rules, please consider using an Ingress; you can read more in ALB Ingress Controller on Amazon EKS and Using a Network Load Balancer with the NGINX Ingress Controller on Amazon EKS.

Ingress is not getting address on GKE and GCE

When creating the ingress, no address is generated, and when viewed from the GKE dashboard it is always in the "Creating ingress" status.
Describing the ingress does not show any events, and I cannot see any clues on the GKE dashboard.
Has anyone had a similar issue or any suggestions on how to debug?
My deployment.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mobile-gateway-ingress
spec:
  backend:
    serviceName: mobile-gateway-service
    servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mobile-gateway-service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: mobile-gateway
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mobile-gateway-deployment
  labels:
    app: mobile-gateway
spec:
  selector:
    matchLabels:
      app: mobile-gateway
  replicas: 2
  template:
    metadata:
      labels:
        app: mobile-gateway
    spec:
      containers:
      - name: mobile-gateway
        image: eu.gcr.io/my-project/mobile-gateway:latest
        ports:
        - containerPort: 8080
Describing ingress shows no events:
mobile-gateway ➤ kubectl describe ingress mobile-gateway-ingress git:master*
Name:             mobile-gateway-ingress
Namespace:        default
Address:
Default backend:  mobile-gateway-service:80 (10.4.1.3:8080,10.4.2.3:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     mobile-gateway-service:80 (10.4.1.3:8080,10.4.2.3:8080)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"mobile-gateway-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"mobile-gateway-service","servicePort":80}}}
Events:  <none>
With a simple LoadBalancer service, an IP address is given. The problem is only with the ingress resource.
The problem in this case was that I did not include the addon HttpLoadBalancing when creating the cluster!
My fault, but it would have been nice to have an event informing me of this mistake in the ingress resource.
Strangely, when I created a new cluster to follow the tutorial cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer using default addons, including HttpLoadBalancing, I observed the same issue. Maybe I didn't wait long enough? Anyway, it is working now that I have included the addon.
To complete the accepted answer, it's worth noting that it's possible to activate the addon on an already existing cluster (from the Google console).
However, it will restart your cluster with downtime (in my case, it took several minutes on an almost empty cluster). Be sure that's acceptable in your case, and do tests.
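For reference, the addon can also be managed from the command line. The commands below are a sketch; my-cluster is a placeholder name:
# Enable the HTTP load balancing addon on an existing cluster
gcloud container clusters update my-cluster --update-addons=HttpLoadBalancing=ENABLED
# Or include it when creating a new cluster
gcloud container clusters create my-cluster --addons=HttpLoadBalancing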

Kubernetes/GCE Ingress controller fails

I'm new to Kubernetes and I'm trying to do HTTP Load Balancing on Google Container Engine with TLS (using the included GCE Ingress Controller). The error I have is repeatable even following Google's official tutorial. For readability I summarize the procedure in config.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80
Then:
kubectl create -f config.yaml
export NODE_PORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nginx)
gcloud compute firewall-rules create allow-130-211-0-0-22 --source-ranges 130.211.0.0/22 --allow tcp:$NODE_PORT
curl <ip_of_load_balancer>
(I removed the tags on the firewall rule so it will apply to all instances.)
But I get a 502 Server Error, which according to the docs means it's likely bootstrapping (but it always stays like this). I can see on the console that the backend is unhealthy.
In the docs, to avoid this one needs:
a firewall rule (which is done above)
Service must respond with 200 (but I tested the nginx image locally and the service via a general Load Balancer, works fine)
So what is the cause of this error and how can I further debug this?
I left the cluster overnight and it is now working. It seems that either it takes quite some time to bootstrap or something was fixed on the Google Cloud side.
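For anyone hitting the same 502 while the console shows the backend as unhealthy, two checks usually reveal what the GCE load balancer thinks of the backends. This is a sketch: the ingress name comes from the question, and the backend-service name is a placeholder to be copied from the list output:
# Shows the backends annotation with their current health state
kubectl describe ingress basic-ingress
# Lists the GCE backend services created by the Ingress controller, then queries their health
gcloud compute backend-services list
gcloud compute backend-services get-health <backend-service-name> --global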