How to Make a Google Kubernetes Engine LoadBalancer Service Receive External Traffic When Using Google Cloud Code for IntelliJ?

I have a working GKE cluster serving content at port 80. How do I get the load balancer service to deliver the content on the external (regional reserved) static IP 111.222.333.123?
I see that kubectl get service shows that the external static IP is successfully registered. The external IP does respond to ping requests.
NAME            TYPE           CLUSTER-IP    EXTERNAL-IP       PORT(S)        AGE
kubernetes      ClusterIP      10.16.0.1     <none>            443/TCP        17h
myapp-cluster   NodePort       10.16.5.168   <none>            80:30849/TCP   84m
myapp-service   LoadBalancer   10.16.9.255   111.222.333.123   80:30879/TCP   6m20s
Additionally, the Google Cloud Platform console shows that the forwarding rule is established and correctly referencing the GKE target pool.
The deployment and service manifests I am using are shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      environment: sandbox
  template:
    metadata:
      labels:
        app: myapp
        environment: sandbox
    spec:
      containers:
      - name: myapp
        image: myapp
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
    environment: sandbox
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
  loadBalancerIP: "111.222.333.123"
The associated skaffold configuration file for reference:
apiVersion: skaffold/v2beta18
kind: Config
metadata:
  name: myapp
build:
  artifacts:
  - image: myapp
    context: .
    docker: {}
deploy:
  kubectl:
    manifests:
    - gcloud_k8_staticip_deployment.yaml
What am I missing to allow traffic to reach the GKE cluster when running this configuration using Google Cloud Code?
Apologies if this has been asked before; happy to take a pointer if I missed the right solution while reviewing existing questions.

I replicated your setup and faced the same issue as yours (I was able to ping the service IP but couldn't connect to it from the browser).
Then I changed the Deployment container port to 80, the Service target port to 80, and the Service port to 8080, and it worked; I was then able to connect to the deployment from the browser using the service IP.
Deployment and Service manifest files (a sketch of the changes follows):
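A hedged reconstruction of those manifests, reusing the names and labels from the question and changing only the port values the answer describes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      environment: sandbox
  template:
    metadata:
      labels:
        app: myapp
        environment: sandbox
    spec:
      containers:
      - name: myapp
        image: myapp
        imagePullPolicy: Always
        ports:
        - containerPort: 80   # changed from 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
    environment: sandbox
  ports:
  - port: 8080        # changed from 80
    targetPort: 80    # changed from 8080
  type: LoadBalancer
  loadBalancerIP: "111.222.333.123"

This only works if the container image actually listens on port 80; the service is then reachable at http://111.222.333.123:8080.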

For all I know, the quoted configuration in this question should actually work, as long as the image points to an accessible location. I have confirmed this configuration works in a toy setup entirely without the IDE, using just gcloud and a shell, and everything worked well.
The problem originates from Google Cloud Code changing the kubectl context without any additional warning when a context switch is configured in the run configuration.
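A quick way to confirm this is to check which context kubectl is pointing at before and after launching the run configuration; the cluster name and region below are placeholders:

# Show the context kubectl is currently using
kubectl config current-context

# List all configured contexts and highlight the active one
kubectl config get-contexts

# Point kubectl back at the intended GKE cluster (name/region are placeholders)
gcloud container clusters get-credentials my-gke-cluster --region us-central1

If the active context differs from the one your manifests were written for, the deployment silently lands on the wrong cluster.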

Related

NodePort type service not accessible outside cluster

I am trying to set up a local cluster using minikube on a Windows machine. Following some tutorials on kubernetes.io, I wrote the following manifest for the cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-nginx-deployment
  labels:
    app: external-nginx
spec:
  selector:
    matchLabels:
      app: external-nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: external-nginx
    spec:
      containers:
      - name: external-nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: expose-nginx
  labels:
    service: expose-nginx
spec:
  type: NodePort
  selector:
    app: external-nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 32000
If I got things right, this should create a pod with an nginx instance and expose it to the host machine on port 32000.
However, when I run curl http://$(minikube ip):32000, I get a connection refused error.
I ran bash inside the expose-nginx service via kubectl exec svc/expose-nginx -it bash, and from there I was able to access the external-nginx pods normally, which led me to believe it is not a problem within the cluster.
I also tried to change the type of the service to LoadBalancer and enable the minikube tunnel, but got the same result.
Is there something I am missing?
Almost always, minikube uses the docker driver by default to create the minikube VM. In the host system it looks like one big docker container standing in for the VM, inside which the other Kubernetes components run as containers as well. Based on tests, NodePort services often don't work the way they are supposed to with this driver: accessing a service exposed via NodePort on the minikube_IP:NodePort address frequently fails.
Solutions are (see the command sketch after this list):
- For local testing, use kubectl port-forward to expose the service to the local machine (which the OP did).
- Use the minikube service command, which will expose the service to the host machine. It works in a very similar way to kubectl port-forward.
- Instead of the docker driver, use a proper virtual machine which will get its own IP address (VirtualBox or hyperv drivers, depending on the system). Reference.
- (Not related to minikube) Use the built-in Kubernetes feature in Docker Desktop for Windows. I've already tested it; the service type should be LoadBalancer, and it will be exposed to the host machine on localhost.
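A quick sketch of the first two options, using the expose-nginx service from the question and a free local port 8080:

# Option 1: forward local port 8080 to the service's port 80,
# then browse to http://localhost:8080
kubectl port-forward service/expose-nginx 8080:80

# Option 2: let minikube open a tunnel and print a reachable URL
minikube service expose-nginx --url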

Can't access my Pod locally using Minikube

Sorry for the noobish question, but I'm having an issue reaching my pod and I have no idea why (I'm using Minikube locally).
So I've created this basic pod:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: front-end
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
And this basic service:
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30008
  selector:
    app: myapp
    type: front-end
However, when I try reaching nginx through the browser, I fail to do so.
I enter http://NodeIP:30008.
However, when I type minikube service service --url, I am able to access it.
So basically I have two questions:
1. Why does my attempt at entering the node IP and port fail? I've seen that when I enter minikube ssh and curl http://NodeIP:30008 it works. So while I'm using Minikube, won't I be able to browse to my apps at all, only curl them through minikube ssh or the command below?
2. Why does the minikube service --url command work? What's the difference?
Thanks a lot!
Use the external IP address (LoadBalancer Ingress) to access your application:
curl http://<external-ip>:<port>
where <external-ip> is the external IP address (LoadBalancer Ingress) of your Service, and <port> is the value of Port in your Service description. If you are using minikube, typing minikube service my-service will automatically open your application in a browser.
You can find more details here.

Access external database from Kubernetes

I have a Kubernetes cluster (v1.18.6) with one service (LoadBalancer) and two pods in a Deployment:
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: app
  ports:
  - protocol: "TCP"
    port: 6000
    targetPort: 5000
  type: LoadBalancer
A network policy to access the Internet (it is necessary for me):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internet-access
spec:
  podSelector:
    matchLabels:
      networking/allow-internet-access: "true"
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
Deployment config file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  progressDeadlineSeconds: 120
  selector:
    matchLabels:
      app: app
  replicas: 2
  template:
    metadata:
      labels:
        app: app
    spec:
      imagePullSecrets:
      - name: myregistrykey
      containers:
      - name: app
        image: app
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
It is working correctly. But now I want to connect this application to an external database (in another network, accessible only over the Internet). For that purpose I use this service:
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  clusterIP: None
  ports:
  - port: 25060
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgresql
subsets:
- addresses:
  - ip: 206............
  ports:
  - port: 25060
    name: postgresql
These are all the services:
NAME          TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
app-service   LoadBalancer   10.245.134.137   206...........   6000:31726/TCP   2d4h
kubernetes    ClusterIP      10.245.0.1       <none>           443/TCP          3d7h
postgresql    ClusterIP      None             <none>           25060/TCP        19h
But when I try to connect, I receive a timeout error from the database, as if it can't connect to the database at all.
I do have an Internet connection in the image.
I found the solution: the problem was the database's inbound rules. I had to add the Kubernetes cluster's IP.
Thanks.
Here is what worked for me:
Define a service, but set clusterIP: None, so no Endpoints object is created automatically.
Then create an Endpoints object yourself with the SAME NAME as your service, and set the IP and port of your db.
In your example, you have a typo in your endpoint: the name of your endpoint is postgresql, not postgresSql.
My example:
# service.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: backend-mobile-db-service
spec:
  clusterIP: None
  ports:
  - port: 5984
---
kind: Endpoints
apiVersion: v1
metadata:
  name: backend-mobile-db-service
subsets:
- addresses:
  - ip: 192.168.1.50
  ports:
  - port: 5984
    name: backend-mobile-db-service
For better visibility I am placing the answer the OP mentioned in the question:
I found the solution: the problem was the database's inbound rules. I had to add the Kubernetes cluster's IP.
The service definition should be corrected. The default service type is ClusterIP, which doesn't work for an external database. You need to update the service type as given below:
type: ExternalName
Also ensure that the service name and the endpoint name match; they are different in your yaml. Please check.
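Note that an ExternalName service maps to a DNS name rather than an IP, so this variant only fits if the database is reachable at a hostname; a minimal sketch assuming a placeholder hostname db.example.com:

apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  type: ExternalName
  # Placeholder hostname; ExternalName returns a CNAME, so it cannot point at a raw IP
  externalName: db.example.com

Pods then resolve the in-cluster name postgresql to that hostname via the CNAME record; no proxying is involved.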
If I understand correctly, you have your cluster with the application on the DigitalOcean cloud, and your PostgreSQL is outside this cluster.
In your application Deployment and application service you used services with selectors, so you didn't need to create Endpoints manually.
In your external database service you used a service without selectors, so you had to create the Endpoints manually.
As the database is an external service, using clusterIP: None is pointless, as it will try to match pods inside the cluster. I guess you added it because you read about it in the docs.
The last thing is that in the Endpoints you set ip: 206..., which is the same as the application service's LoadBalancer IP:
NAME          TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
app-service   LoadBalancer   10.245.134.137   206...........   6000:31726/TCP   2d4h

subsets:
- addresses:
  - ip: 206............
This is only part of the information, so I am guessing; however, in this part you should provide the IP of the desired database, not your application's LoadBalancer IP.
Now, based on the scenario, you can connect to:
- a database outside the cluster with an IP address
- a remotely hosted database with a URI
- a remotely hosted database with a URI and port remapping
Detailed information about the above scenarios can be found in Kubernetes best practices: mapping external services.
Based on your current config I assume you want to use scenario 1.
If the database and the cluster are both somewhere in the cloud, you could use the internal database IP. If not, you should provide the IP of the machine where the database is hosted.
You can also read the Kubernetes Access External Services article.
Please let me know if you still have issues after the IP change. A corrected sketch of the Endpoints follows.
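As a concrete illustration, a minimal sketch of the corrected Endpoints with a placeholder database IP (203.0.113.10 stands in for the real database address):

apiVersion: v1
kind: Endpoints
metadata:
  name: postgresql   # must match the Service name exactly
subsets:
- addresses:
  - ip: 203.0.113.10   # the database's IP, not the app LoadBalancer IP
  ports:
  - port: 25060
    name: postgresql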

Kubernetes MLflow Service Pod Connection

I have deployed a build of mlflow to a pod in my Kubernetes cluster. I'm able to port-forward to the mlflow UI, and now I'm attempting to test it. To do this, I am running the following test in a Jupyter notebook that is running in another pod in the same cluster.
import mlflow
print("Setting Tracking Server")
tracking_uri = "http://mlflow-tracking-server.default.svc.cluster.local:5000"
mlflow.set_tracking_uri(tracking_uri)
print("Logging Artifact")
mlflow.log_artifact('/home/test/mlflow-example-artifact.png')
print("DONE")
When I run this, though, I get:
ConnectionError: HTTPConnectionPool(host='mlflow-tracking-server.default.svc.cluster.local', port=5000): Max retries exceeded with url: /api/2.0/mlflow/runs/get? (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object>: Failed to establish a new connection: [Errno 111] Connection refused'))
The way I have deployed the mlflow pod is shown below in the YAML and Docker setup:
YAML:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mlflow-tracking-server
  namespace: default
spec:
  selector:
    matchLabels:
      app: mlflow-tracking-server
  replicas: 1
  template:
    metadata:
      labels:
        app: mlflow-tracking-server
    spec:
      containers:
      - name: mlflow-tracking-server
        image: <ECR_IMAGE>
        ports:
        - containerPort: 5000
        env:
        - name: AWS_MLFLOW_BUCKET
          value: <S3_BUCKET>
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: aws-secret
              key: AWS_ACCESS_KEY_ID
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws-secret
              key: AWS_SECRET_ACCESS_KEY
---
apiVersion: v1
kind: Service
metadata:
  name: mlflow-tracking-server
  namespace: default
  labels:
    app: mlflow-tracking-server
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app: mlflow-tracking-server
  ports:
  - name: http
    port: 5000
    targetPort: http
While the Dockerfile calls a script that executes the mlflow server command (mlflow server --default-artifact-root ${AWS_MLFLOW_BUCKET} --host 0.0.0.0 --port 5000), I cannot connect to the service I have created using that mlflow pod.
I have tried using the tracking URI http://mlflow-tracking-server.default.svc.cluster.local:5000, and I've tried using the service EXTERNAL-IP:5000, but nothing I tried could connect and log runs through the service. Is there anything I have missed in deploying my mlflow server pod to my Kubernetes cluster?
Your mlflow-tracking-server service should have ClusterIP type, not LoadBalancer.
Both pods are inside the same Kubernetes cluster, therefore, there is no reason to use LoadBalancer Service type.
For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that’s outside of your cluster.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.
Type values and their behaviors are:
- ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
- NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
- LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
- ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
kubernetes.io
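Applying that advice, a minimal sketch of the tracking-server Service reduced to ClusterIP; it keeps the names and ports from the question, drops the type and the load-balancer annotation, and targets port 5000 numerically (the Deployment's containerPort is unnamed, so a named targetPort: http has nothing to match):

apiVersion: v1
kind: Service
metadata:
  name: mlflow-tracking-server
  namespace: default
  labels:
    app: mlflow-tracking-server
spec:
  # type defaults to ClusterIP: reachable only inside the cluster
  selector:
    app: mlflow-tracking-server
  ports:
  - name: http
    port: 5000
    targetPort: 5000   # the container port from the Deployment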
So, to oversimplify this, you have no way to access the mlflow URI from the jupyterhub pod. What I would do here is check the proxies for the jupyterhub pod. If you don't have .svc in NO_PROXY, you have to add it. The detailed reason is that you are accessing the internal .svc mlflow URL as if it were on the open internet, while your mlflow URI is only accessible inside the cluster. If adding .svc to NO_PROXY doesn't work, we can take a deeper look. The way to check the proxies is kubectl get po $JHPODNAME -n $JHNamespace -o yaml.
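A sketch of both steps, with placeholder pod/namespace names; the env excerpt shows how a NO_PROXY entry covering in-cluster names might look:

# Inspect the pod spec for proxy-related env vars (names are placeholders)
kubectl get pod $JHPODNAME -n $JHNamespace -o yaml | grep -iA1 proxy

# Illustrative container env excerpt adding .svc to NO_PROXY
env:
- name: NO_PROXY
  value: "localhost,127.0.0.1,.svc,.cluster.local"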

Kubernetes LoadBalancer Service not loadbalancing requests

I have a simple microservice setup running in a minikube cluster. It is inspired by this example.
My setup includes a simple router microservice that contains a Golang webserver. What I want to test now is the load balancing when there is more than one pod, but there seems to be no load balancing whatsoever.
The kubernetes file for the microservices looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: router
  labels:
    app: router
    tier: router
spec:
  replicas: 2
  strategy: {}
  template:
    metadata:
      labels:
        app: router
        tier: router
    spec:
      containers:
      - image: {myregistry}/router
        name: router
        resources: {}
        ports:
        - name: target-port
          containerPort: 8082
        env:
        - name: PORT
          value: "8082"
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: router
  labels:
    app: router
    tier: router
spec:
  type: LoadBalancer
  selector:
    app: router
    tier: router
  ports:
  - port: 8082
    name: http
    targetPort: target-port
The skaffold config looks like this:
apiVersion: skaffold/v1beta2
kind: Config
build:
  artifacts:
  - image: {myregistry}/router
    context: src/router/bin
  tagPolicy:
    gitCommit: {}
  local:
    push: false
deploy:
  kubectl:
    manifests:
    - ./kubernetes/**.yaml
Kubernetes correctly deploys two pods. The output of kubectl get pods looks like this:
NAME                      READY   STATUS    RESTARTS   AGE
router-7f75f6f9df-c8mgp   1/1     Running   0          14m
router-7f75f6f9df-k248m   1/1     Running   0          14m
From the skaffold dev log output I can see that every request is routed to the router-7f75f6f9df-c8mgp pod. Even with different browsers, all requests end up at the exact same pod.
When I delete this pod, there is even a slight downtime of the router microservice, even though there is another pod running.
What could be the cause of this behavior?
minikube doesn't 'properly' support the LoadBalancer service type. It used to be commonplace to just use the NodePort or externalIP service type instead; however, the official hello-minikube sample now states:
On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.
So effectively you should be able to use your minikube LoadBalancer service with: minikube service router
However, there is a neat solution that was developed for bare-metal Kubernetes clusters, called MetalLB, that may help you test this in a better way on minikube.
You can install and configure it on minikube, e.g.:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml
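In the 0.8.x series, MetalLB is then configured through a ConfigMap defining the address pool it may hand out; a minimal layer2 sketch, where the address range is a placeholder you would align with your minikube network:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.99.100-192.168.99.110   # placeholder range on the minikube subnet

With a pool configured, the router Service's EXTERNAL-IP gets populated from that range instead of staying <pending>.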
Here are some blog posts where others have explained the setup and use of metallb with minikube for LoadBalancer support:
Blog Post 1
Blog Post 2
Here are the official docs.
Hope that helps!