How can I use LoadBalancer in bare-metal KinD?

My question is simple and relates to the awesome Kubernetes tool called KinD. I am using KinD for my Flask application:
import os
import requests
from flask import Flask
from jaeger_client import Config
from flask_opentracing import FlaskTracing

app = Flask(__name__)
config = Config(
    config={
        'sampler': {'type': 'const', 'param': 1},
        'logging': True,
        'reporter_batch_size': 1,
    },
    service_name="service")
jaeger_tracer = config.initialize_tracer()
tracing = FlaskTracing(jaeger_tracer, True, app)

def get_counter(counter_endpoint):
    counter_response = requests.get(counter_endpoint)
    return counter_response.text

def increase_counter(counter_endpoint):
    counter_response = requests.post(counter_endpoint)
    return counter_response.text

@app.route('/')
def hello_world():
    counter_service = os.environ.get('COUNTER_ENDPOINT', default="https://localhost:5000")
    counter_endpoint = f'{counter_service}/api/counter'
    counter = get_counter(counter_endpoint)
    increase_counter(counter_endpoint)
    return f"""Hello, World!
You're visitor number {counter} in here!\n\n"""
Dockerfile:
FROM python:3.7-alpine
RUN mkdir /app
RUN apk add --no-cache py3-pip python3 && \
    pip3 install flask Flask-Opentracing jaeger-client
WORKDIR /app
ADD ./app /app/
ADD ./requirements.txt /app/
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "/app/main.py"]
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-flask-deployment
spec:
  selector:
    matchLabels:
      app: my-flask-pod
  replicas: 2
  template:
    metadata:
      labels:
        app: my-flask-pod
    spec:
      containers:
      - name: my-flask-container
        image: yusufkaratoprak/awsflaskeks:latest
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 5000
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-flask-service
spec:
  selector:
    app: my-flask-pod
  ports:
  - port: 6000
    targetPort: 5000
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10
Configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.42.42.100-172.42.42.105  # Update this with your nodes' IP range
Results:
Everything looks good on the Kubernetes side, but when I open http://172.42.42.101:6000/ in my browser, the page does not load (screenshot omitted).
I also checked my service's events, and everything looks good there as well (output omitted).
Also, following @Kaan Mersin's advice, I added 80 as the default port.

Actually, on a bare-metal server you should use a solution other than LoadBalancer to expose your application, such as NodePort or Ingress, because the LoadBalancer service type is normally only provisioned by cloud providers like Google, Azure, AWS, etc.
Check the official documentation for LoadBalancer.
Follow the approach below as if you were using NodePort instead of LoadBalancer; the NodePort service type also load-balances traffic between pods, if the balancing feature is all you are looking for.
You need to use http://172.42.42.101:30160 from outside the host that is running the containers. Port 6000 is only accessible inside the cluster via the service's internal IP (10.96.244.204 in your case). Whenever you expose your deployment, a NodePort (by default in the range 30000-32767) is automatically assigned to the service for external requests (you can also define it manually).
For service details, run the command below. Its output will show the NodePort and other details.
kubectl describe services my-flask-service
Please check the related Kubernetes documentation.
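For example, a NodePort version of your Service might look like the following. This is only a sketch based on the manifest in the question; 30160 is just an illustrative value in the default 30000-32767 range:
apiVersion: v1
kind: Service
metadata:
  name: my-flask-service
spec:
  type: NodePort
  selector:
    app: my-flask-pod
  ports:
  - port: 6000         # port inside the cluster
    targetPort: 5000   # containerPort of the Flask pod
    nodePort: 30160    # illustrative; must be in the 30000-32767 range
You would then reach the app from outside the cluster at http://<node-ip>:30160.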

The error Chrome is throwing is ERR_UNSAFE_PORT. Change your Service port to 80 and then hit http://172.42.42.101/. Alternatively you can choose any other port you like so long as Chrome doesn't consider it an unsafe port. See this answer on SuperUser for a list of unsafe ports.
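For example, only the ports section of service.yaml needs to change (a minimal sketch; port 80 is exposed externally while the container keeps listening on 5000):
ports:
- port: 80          # external port, not on Chrome's unsafe-port list
  targetPort: 5000  # Flask container port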

Related

Unable to connect to my Docker container running inside a single-node Kubernetes cluster

Kubernetes newbie here.
First, let me tell you about the functionality of my Node.js sample application. It is a simple web server that responds with the text "Hello from Node" in response to a GET request to the root (/) route. Also, when the server starts, it outputs the text "Server listening on port 8000".
Currently, the app is running inside a container on a single-node Kubernetes cluster. (I am using Minikube)
When I run the command kubectl logs web-server, I get the desired response. web-server is the name of the running pod.
But when I try to connect to the application using the command curl 192.168.59.100:31515, I get the response: Connection refused. I should see the response: "Hello from node" instead.
Please see the terminal output below (screenshot omitted); note that k and m are aliases for kubectl and minikube respectively.
My YAML files are as follows:
node-server-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
  labels:
    web: server
spec:
  containers:
  - name: web-server-container
    image: sundaray/node-server:v1
    ports:
    - containerPort: 3000
node-server-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-server-port
spec:
  type: NodePort
  ports:
  - port: 3050
    targetPort: 3000
    nodePort: 31515
  selector:
    web: server
What am I doing wrong?

Google Kubernetes Ingress health check always failing

I have configured a web application pod exposed via Apache on port 80. I'm unable to configure a Service + Ingress to access it from the internet. The issue is that the backend services always report as UNHEALTHY.
Pod Config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: webapp
  name: webapp
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: webapp
  template:
    metadata:
      labels:
        name: webapp
    spec:
      containers:
      - image: asia.gcr.io/my-app/my-app:latest
        name: webapp
        ports:
        - containerPort: 80
          name: http-server
Service Config:
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    name: webapp
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 80
Ingress Config:
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  backend:
    serviceName: webapp-service
    servicePort: 50000
This results in backend services reporting as UNHEALTHY.
The health check settings:
Path: /
Protocol: HTTP
Port: 32463
Proxy protocol: NONE
Additional information: I've tried a different approach of exposing the deployment as a load balancer with external IP and that works perfectly. When trying to use a NodePort + Ingress, this issue persists.
With GKE, the health check on the Load balancer is created automatically when you create the ingress. Since the HC is created automatically, so are the firewall rules.
Since you have no readinessProbe configured, the LB has a default HC created (the one you listed). To debug this properly, you need to isolate where the point of failure is.
First, make sure your pod is serving traffic properly:
kubectl exec [pod_name] -- wget localhost:80
If the application has curl built in, you can use that instead of wget.
If the application has neither wget nor curl, skip to the next step.
Next, get the following output and keep track of it:
kubectl get po -l name=webapp -o wide
kubectl get svc webapp-service
You need to note down the pod IPs and the service's cluster IP.
SSH to a node in your cluster and run sudo toolbox bash
Install curl:
apt-get install curl
Test the pods to make sure they are serving traffic within the cluster:
curl -I [pod_IP]:80
This needs to return a 200 response
Test the service:
curl -I [service_clusterIP]:80
If the pod is not returning a 200 response, the container is either not working correctly or the port is not open on the pod.
If the pod is working but the service is not, there is an issue with the iptables rules managed by kube-proxy, which would point to a problem with the cluster.
Finally, if both the pod and the service are working, there is an issue with the load balancer health checks, which may be something Google needs to investigate.
As Patrick mentioned, the checks will be created automatically by GCP.
By default, GKE will use readinessProbe.httpGet.path for the health check.
But if there is no readinessProbe configured, then it will just use the root path /, which must return an HTTP 200 (OK) response (and that's not always the case, for example, if the app redirects to another path, then the GCP health check will fail).
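So one way to make the health check pass is to add a readinessProbe that points at a path which reliably returns HTTP 200. A minimal sketch for the webapp container above; the /healthz path is an assumption, use whatever path your Apache app actually serves with a 200:
containers:
- image: asia.gcr.io/my-app/my-app:latest
  name: webapp
  ports:
  - containerPort: 80
    name: http-server
  readinessProbe:
    httpGet:
      path: /healthz        # assumed health endpoint; must return HTTP 200
      port: 80
    initialDelaySeconds: 10
    periodSeconds: 10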

Http 400 from envoy on BAN request

Istio Newbie here,
I'm doing my first tests with Istio (on version 1.3.0). Most things run nicely without much effort.
What I'm having an issue with is a service that talks to Varnish to clean up the cache. This service makes an HTTP request to every pod behind a headless service, and it's failing with an HTTP 400 (Bad Request) error. The request uses the HTTP method "BAN", which I believe is the source of the problem, since other request methods reach Varnish without problems.
As a temporary workaround I changed the port name from http to varnish and everything started working again.
I installed istio using the helm chart for 1.3.0:
helm install istio install/kubernetes/helm/istio --set kiali.enabled=true --set global.proxy.accessLogFile="/dev/stdout" --namespace istio-system --version 1.3.0
Running on GKE 1.13.9-gke.3 and Varnish is version 6.2
I was able to get it working using Istio without mTLS using the following definitions:
ConfigMap
Just allowing the pod and service CIDRs for BAN requests and expecting them to come from the Varnish service FQDN.
apiVersion: v1
kind: ConfigMap
metadata:
  name: varnish-configuration
data:
  default.vcl: |
    vcl 4.0;
    import std;

    backend default {
      .host = "varnish-service";
      .port = "80";
    }

    acl purge {
      "localhost";
      "10.x.0.0"/14; # Pod CIDR
      "10.x.0.0"/16; # Service CIDR
    }

    sub vcl_recv {
      # this code below allows PURGE from localhost and x.x.x.x
      if (req.method == "BAN") {
        if (!client.ip ~ purge) {
          return (synth(405, "Not allowed."));
        }
        return (purge);
      }
    }
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: varnish
spec:
  replicas: 1
  selector:
    matchLabels:
      app: varnish
  template:
    metadata:
      labels:
        app: varnish
    spec:
      containers:
      - name: varnish
        image: varnish:6.3
        ports:
        - containerPort: 80
          name: varnish-port
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: varnish-conf
          mountPath: /etc/varnish
      volumes:
      - name: varnish-conf
        configMap:
          name: varnish-configuration
Service
apiVersion: v1
kind: Service
metadata:
  name: varnish-service
  labels:
    workload: varnish
spec:
  selector:
    app: varnish
  ports:
  - name: varnish-port
    protocol: TCP
    port: 80
    targetPort: 80
After deploying these, you can run a cURL enabled pod:
kubectl run bb-$RANDOM --rm -i --image=yauritux/busybox-curl --restart=Never --tty -- /bin/sh
And then, from the tty try curling it:
curl -v -X BAN http://varnish-service
From here, either you'll get 200 purged or 405 Not allowed. Either way, you've hit the Varnish pod across the mesh.
Your issue might be related to mTLS in your cluster. You can check if it's enabled by issuing this command*:
istioctl authn tls-check $(kubectl get pod -l app=varnish -o jsonpath={.items..metadata.name}) varnish-service.default.svc.cluster.local
*The command assumes that you're using the definitions shared in this post. If not, you can adjust accordingly.
I tested this on GKE twice: once with open-source Istio installed via Helm and once using the Google-managed Istio installation (in permissive mode).
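If mTLS does turn out to be enabled and to be interfering with the BAN requests, one option (a sketch, not part of the original answer) is to disable mTLS origination towards the Varnish service with a DestinationRule, assuming the manifests above and the default namespace:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: varnish-service-no-mtls
spec:
  host: varnish-service.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE   # do not originate mTLS to the Varnish pods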

Kubernetes - Pass Public IP of Load Balancer as Environment Variable into Pod

Gist
I have a ConfigMap which provides necessary environment variables to my pods:
apiVersion: v1
kind: ConfigMap
metadata:
  name: global-config
data:
  NODE_ENV: prod
  LEVEL: info
  # I need to set API_URL to the public IP address of the Load Balancer
  API_URL: http://<SOME IP>:3000
  DATABASE_URL: mongodb://database:27017
  SOME_SERVICE_HOST: some-service:3000
I am running my Kubernetes Cluster on Google Cloud, so it will automatically create a public endpoint for my service:
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
    nodePort: 30000
  type: LoadBalancer
Issue
I have a web application that needs to make HTTP requests from the client's browser to the gateway service. But in order to make a request to the external service, the web app needs to know its IP address.
So I've set up the pod that serves the web application in such a way that it picks up an environment variable API_URL and, as a result, makes all HTTP requests to this URL.
So I just need a way to set the API_URL environment variable to the public IP address of the gateway service to pass it into a pod when it starts.
I know this isn't the exact approach you were going for, but I've found that creating a static IP address and explicitly passing it in tends to be easier to work with.
First, create a static IP address:
gcloud compute addresses create gke-ip --region <region>
where region is the GCP region your GKE cluster is located in.
Then you can get your new IP address with:
gcloud compute addresses describe gke-ip --region <region>
Now you can add your static IP address to your service by specifying an explicit loadBalancerIP.1
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
    nodePort: 30000
  type: LoadBalancer
  loadBalancerIP: "1.2.3.4"
At this point, you can also hard-code it into your ConfigMap and not worry about grabbing the value from the cluster itself.
1If you've already created a LoadBalancer with an auto-assigned IP address, setting an IP address won't change the IP of the underlying GCP load balancer. Instead, you should delete the LoadBalancer service in your cluster, wait ~15 minutes for the underlying GCP resources to get cleaned up, and then recreate the LoadBalancer with the explicit IP address.
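For illustration, the ConfigMap from the question could then simply be hard-coded with the reserved address (a sketch; 1.2.3.4 stands in for the address returned by gcloud compute addresses describe):
apiVersion: v1
kind: ConfigMap
metadata:
  name: global-config
data:
  NODE_ENV: prod
  LEVEL: info
  API_URL: http://1.2.3.4:3000   # the reserved static IP
  DATABASE_URL: mongodb://database:27017
  SOME_SERVICE_HOST: some-service:3000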
You are trying to access the gateway service from the client's browser.
I would like to suggest another solution that is slightly different from what you are currently trying to achieve,
but it can solve your problem.
From your question I was able to deduce that your web app and gateway app are on the same cluster.
In my solution you don't need a Service of type LoadBalancer; a basic Ingress is enough to make it work.
You only need to create a Service object (notice that the type: LoadBalancer option is now gone):
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
and you also need an Ingress object (remember that an Ingress controller needs to be deployed to the cluster in order to make it work) like the one below.
More on how to deploy the Nginx Ingress controller you can find here;
if you are already using one (maybe a different one), then you can skip this step.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: gateway.foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: gateway
          servicePort: 3000
Notice the host field.
You need to repeat the same for your web application. Remember to use an appropriate host name (DNS name),
e.g. for the web app: foo.bar.com and for the gateway: gateway.foo.bar.com (see the sketch below),
and then just use the gateway.foo.bar.com DNS name to connect to the gateway app from the client's web browser.
You also need to create a DNS entry that points *.foo.bar.com to the Ingress's public IP address,
as the Ingress controller will create its own load balancer.
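A sketch of the second Ingress, the one for the web application, assuming the web app is exposed by a Service called webapp on port 80 (both the Service name and port are assumptions, adjust them to your setup):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp   # assumed name of the web app's Service
          servicePort: 80       # assumed port of the web app's Service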
The flow of traffic would be like below:
+-------------+    +---------+    +-----------------+    +---------------------+
| Web Browser |--->| Ingress |--->| gateway Service |--->| gateway application |
+-------------+    +---------+    +-----------------+    +---------------------+
This approach is better because it won't cause issues with Cross-Origin Resource Sharing (CORS) in the client's browser.
The example Ingress and Service manifests are taken from the official Kubernetes documentation and modified slightly.
More on Ingress you can find here,
and on Services here.
The following deployment reads the external IP of a given service using kubectl every 10 seconds and patches a given configmap with it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configmap-updater
  labels:
    app: configmap-updater
spec:
  selector:
    matchLabels:
      app: configmap-updater
  template:
    metadata:
      labels:
        app: configmap-updater
    spec:
      containers:
      - name: configmap-updater
        image: alpine:3.10
        command: ['sh', '-c']
        args:
        - |
          #!/bin/sh
          set -x
          apk --update add curl
          curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl
          chmod +x kubectl
          export CONFIGMAP="configmap/global-config"
          export SERVICE="service/gateway"
          while true
          do
            # read the current external IP of the gateway service
            IP=`./kubectl get $SERVICE -o go-template --template='{{ (index .status.loadBalancer.ingress 0).ip }}'`
            PATCH=`printf '{"data":{"API_URL": "http://%s:3000"}}' $IP`
            echo ${PATCH}
            # patch the configmap with the current external IP
            ./kubectl patch --type=merge -p "${PATCH}" $CONFIGMAP
            sleep 10
          done
You probably have RBAC enabled in your GKE cluster and would still need to create the appropriate Role and RoleBinding for this to work correctly.
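A rough sketch of what that RBAC could look like (the names and the use of the default ServiceAccount are assumptions; the Role grants read access to services and patch access to configmaps in the namespace):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-updater
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: configmap-updater
subjects:
- kind: ServiceAccount
  name: default              # assumed; use whatever ServiceAccount the pod runs as
  namespace: default
roleRef:
  kind: Role
  name: configmap-updater
  apiGroup: rbac.authorization.k8s.io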
You've got a few possibilities:
If you really need this to be hacked into your setup, you could use a similar approach with a sidecar container in your pod or a global service like above. Keep in mind that you would need to recreate your pods if the configmap actually changed for the changes to be picked up by the environment variables of your containers.
Watch and query the Kubernetes API for the external IP directly in your application, eliminating the need for an environment variable (see the sketch after this list).
Adapt your applications so they don't directly depend on the external IP.
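As a sketch of the second option, the application could read the external IP itself with the official Kubernetes Python client (assuming it runs in-cluster under a ServiceAccount that is allowed to read the gateway Service; names are taken from the manifests above):
from kubernetes import client, config

def get_gateway_external_ip(namespace="default"):
    # Load in-cluster credentials (the pod's ServiceAccount token).
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    svc = v1.read_namespaced_service(name="gateway", namespace=namespace)
    ingress = svc.status.load_balancer.ingress or []
    if not ingress:
        raise RuntimeError("gateway service has no external IP yet")
    return ingress[0].ip

if __name__ == "__main__":
    print("API_URL=http://%s:3000" % get_gateway_external_ip())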

Is sessionAffinity not working with LoadBalancer on GCE

I have a small Node.js app running replicated with two pods on a cluster with two nodes.
However, it seems that the connection is not sticky. I need it to be sticky because I use WebSockets.
Is sessionAffinity not working with LoadBalancer on GCE? Let me know if I can provide more info. Thanks.
Finally I had some time for more experiments:
It seems like the sessionAffinity stops working if the rc is deleted and created again after the service was created.
Steps to reproduce:
1) Used following files:
ServerName.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: servername
  labels:
    name: servername
spec:
  replicas: 10
  selector:
    name: servername
  template:
    metadata:
      labels:
        name: servername
    spec:
      containers:
      - name: app
        image: fibheap/printhostname
        imagePullPolicy: "Always"
        ports:
        - containerPort: 80
ServerNameSv.yaml
apiVersion: v1
kind: Service
metadata:
  name: servername
  labels:
    name: servername
spec:
  ports:
  # the port that this service should serve on
  - port: 80
    targetPort: 80
  selector:
    name: servername
  type: LoadBalancer
  sessionAffinity: ClientIP
Dockerfile
FROM google/nodejs
WORKDIR /app
ADD ./main.js /app/main.js
EXPOSE 80
CMD ["node", "--harmony", "./main.js"]
main.js
// Load the http module to create an http server.
var http = require('http');
var os = require('os');

// Configure our HTTP server to respond with Hello World to all requests.
var server = http.createServer(function (req, res) {
  res.writeHead(200, {"Content-Type": "text/plain"});
  var ip = req.headers['x-forwarded-for'] ||
           req.connection.remoteAddress ||
           req.socket.remoteAddress ||
           req.connection.socket.remoteAddress;
  res.end("CIP:" + ip + " Remote Server:" + os.hostname());
});

// Listen on port 80.
server.listen(80);

// Put a friendly message on the terminal
console.log("Server running at http://127.0.0.1:80/");
2) Create the RC and Service (describe the Service to get the IP and make sure ClientIP is set)
3) curl the load balancer IP multiple times -> the pod name should stay the same
4) Delete the RC and create it again
5) curl multiple times again -> the pod name changes
Please let me know if that helps with reproducing. Feel free to use the Docker repository fibheap/printhostname directly.
Affinity should work. Can you read back your service object and see that affinity was accepted and saved properly?
kubectl get svc servername -o yaml
I just created a GCE load balancer, and I confirmed that the GCE targetPool object also has
sessionAffinity: CLIENT_IP
As concluded in https://github.com/kubernetes/kubernetes/issues/36415, sessionAffinity on GCE will probably only work if you make your service a LoadBalancer with "preserve client ip", i.e. "service.beta.kubernetes.io/external-traffic": "OnlyLocal", and you set sessionAffinity=ClientIP.
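Applied to the Service from the question, that combination would look roughly like this (a sketch; note that the beta annotation shown here was later replaced by the externalTrafficPolicy: Local field):
apiVersion: v1
kind: Service
metadata:
  name: servername
  annotations:
    service.beta.kubernetes.io/external-traffic: "OnlyLocal"
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP
  selector:
    name: servername
  ports:
  - port: 80
    targetPort: 80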
Ingress is probably a better bet. I haven't verified this, but Nginx Ingress will bypass services and there's a "sticky-ng" module.
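For reference, newer versions of the NGINX Ingress controller expose cookie-based session affinity through annotations; a sketch, untested here, with an example host name:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: servername-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  rules:
  - host: app.example.com        # example host
    http:
      paths:
      - path: /
        backend:
          serviceName: servername
          servicePort: 80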