Kubernetes LoadBalancer

I set up my Raspberry Pi cluster and have installed MetalLB. I have the following WordPress services running. I am confused why I cannot reach the site via a browser or wget.
pi@master:~ $ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>         443/TCP        6d3h
mysql        ClusterIP      None            <none>         3306/TCP       41m
wordpress    LoadBalancer   10.101.63.209   192.168.1.50   80:32499/TCP   4m35s
When I try to wget my website, it keeps trying to go out through port 30820.
What am I doing wrong here?
pi@master:~ $ wget 192.168.1.50
--2020-04-05 03:29:56-- http://192.168.1.50/
Connecting to 192.168.1.50:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://192.168.1.50:30820/ [following]
--2020-04-05 03:29:57-- http://192.168.1.50:30820/
Connecting to 192.168.1.50:30820... failed: No route to host.
Here is my deployment. Does this look OK?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  #namespace: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  replicas: 3
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        imagePullPolicy: IfNotPresent
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass # generated before in secret.yml
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: "/var/www/html" # which data will be stored
        resources:
          limits:
            cpu: '1'
            memory: '512Mi'
          requests:
            cpu: '500m'
            memory: '256Mi'
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wordpress-persistent-storage
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 300
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 300
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  #namespace: wordpress
  labels:
    app: wordpress
    tier: frontend
spec:
  selector:
    app: wordpress
  ports:
  - protocol: 'TCP'
    port: 80
    targetPort: 80
  #externalTrafficPolicy: Local
  type: LoadBalancer

It is most likely your WordPress settings that cause the redirect. Neither MetalLB nor a Kubernetes Service has redirect functionality, as both work at the network level. WordPress redirects requests to the URL stored in its siteurl/home options, so the 301 to port 30820 suggests the site was installed or configured while it was reachable on a NodePort.
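If that is the case, one option is to pin the URLs from the Deployment itself. This is only a sketch: it assumes the MetalLB address 192.168.1.50 stays the site's external IP, and it relies on the official image's WORDPRESS_CONFIG_EXTRA variable, which newer official WordPress images support (check whether your 4.8-apache tag does):

        env:
        - name: WORDPRESS_DB_HOST
          value: mysql
        # pin the public URL so WordPress stops redirecting to the NodePort
        # it recorded during installation (sketch; verify your image honors this)
        - name: WORDPRESS_CONFIG_EXTRA
          value: |
            define('WP_HOME', 'http://192.168.1.50');
            define('WP_SITEURL', 'http://192.168.1.50');

Alternatively, update the siteurl and home rows of the wp_options table directly from the MySQL pod.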

Related

Kubernetes wordpress to mysql connection issue

I have a WordPress pod that doesn't connect to my MySQL DB pod. The WordPress MySQL deployment is called wordpress-mysql, so the reference should be correct. This is the message that appears:
Complete! WordPress has been successfully copied to /var/www/html
Warning: mysqli::mysqli(): php_network_getaddresses: getaddrinfo
failed: Name or service not known in - on line 22
Warning: mysqli::mysqli(): (HY000/2002): php_network_getaddresses:
getaddrinfo failed: Name or service not known in - on line 22
MySQL Connection Error: (2002) php_network_getaddresses: getaddrinfo
failed: Name or service not known
This is the code for the wordpress.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: wordpress
  ports:
  - name: web
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: nginx
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: test.example.services
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: web
---
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
The following is the output of kubectl get services:
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
frontend          ClusterIP   10.245.131.56   <none>        80/TCP     4m25s
kubernetes        ClusterIP   10.245.0.1      <none>        443/TCP    9m46s
wordpress-mysql   ClusterIP   None            <none>        3306/TCP   5m16s
MySQL Service YAML:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
  - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
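The getaddrinfo failure means the name wordpress-mysql is not resolving from inside the WordPress pod. A quick way to check in-cluster DNS is a throwaway lookup pod. This is just a sketch (the name dns-check is arbitrary, and busybox:1.28 is used because its nslookup is well behaved), and it assumes WordPress and the wordpress-mysql service live in the same namespace, otherwise the short name will not resolve:

apiVersion: v1
kind: Pod
metadata:
  name: dns-check
spec:
  restartPolicy: Never
  containers:
  - name: dns-check
    image: busybox:1.28
    # try to resolve the headless MySQL service by its short name
    command: ['nslookup', 'wordpress-mysql']

kubectl logs dns-check should then show the pod IP behind the headless service if DNS and the service selector are healthy.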

Two kubernetes deployments in the same namespace are not able to communicate

I'm deploying the ELK stack (OSS) to a Kubernetes cluster. The Elasticsearch deployment and service start correctly and the API is reachable. The Kibana deployment starts but can't access Elasticsearch.
From the Kibana container logs:
{"type":"log","#timestamp":"2019-05-08T22:49:26Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","#timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
Both deployments are in the same namespace, "observability". I also tried to reference the elasticsearch container as elasticsearch.observability.svc.cluster.local, but that does not work either.
What am I doing wrong? How do I reference the elasticsearch container from the kibana container?
More info:
kubectl --context=19team-observability-admin-context -n observability get pods
NAME                            READY   STATUS    RESTARTS   AGE
elasticsearch-9d495b84f-j2297   1/1     Running   0          15s
kibana-65bc7f9c4-s9cv4          1/1     Running   0          15s
kubectl --context=19team-observability-admin-context -n observability get service
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
elasticsearch   NodePort   10.104.250.175   <none>        9200:30083/TCP,9300:30059/TCP   1m
kibana          NodePort   10.102.124.171   <none>        5601:30124/TCP                  1m
I start my containers with the command:
kubectl --context=19team-observability-admin-context -n observability apply -f .\elasticsearch.yaml -f .\kibana.yaml
elasticsearch.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: observability
spec:
  type: NodePort
  ports:
  - name: "9200"
    port: 9200
    targetPort: 9200
  - name: "9300"
    port: 9300
    targetPort: 9300
  selector:
    app: elasticsearch
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: observability
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
      - name: set-vm-max-map-count
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sysctl', '-w', 'vm.max_map_count=262144']
        securityContext:
          privileged: true
        resources:
          requests:
            memory: "512Mi"
            cpu: "1"
          limits:
            memory: "724Mi"
            cpu: "1"
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.7.1
        ports:
        - containerPort: 9200
        - containerPort: 9300
        resources:
          requests:
            memory: "3Gi"
            cpu: "1"
          limits:
            memory: "3Gi"
            cpu: "1"
kibana.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: observability
spec:
  type: NodePort
  ports:
  - name: "5601"
    port: 5601
    targetPort: 5601
  selector:
    app: observability_platform_kibana
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: observability_platform_kibana
  name: kibana
  namespace: observability
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: observability_platform_kibana
    spec:
      containers:
      - env:
        # THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        - name: SERVER_NAME
          value: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.7.1
        name: kibana
        ports:
        - containerPort: 5601
        resources:
          requests:
            memory: "512Mi"
            cpu: "1"
          limits:
            memory: "724Mi"
            cpu: "1"
      restartPolicy: Always
UPDATE 1
As gonzalesraul proposed, I've created a second service for Elasticsearch with type ClusterIP:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch-local
  namespace: observability
spec:
  type: ClusterIP
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    app: elasticsearch
Service is created:
kubectl --context=19team-observability-admin-context -n observability get service
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
elasticsearch         NodePort    10.106.5.94     <none>        9200:31598/TCP,9300:32018/TCP   26s
elasticsearch-local   ClusterIP   10.101.178.13   <none>        9200/TCP                        26s
kibana                NodePort    10.99.73.118    <none>        5601:30004/TCP                  26s
And I reference Elasticsearch as "http://elasticsearch-local:9200".
Unfortunately it does not work; in the kibana container:
{"type":"log","@timestamp":"2019-05-09T10:13:54Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-local:9200/"}
Do not use a NodePort service; use a ClusterIP instead. If you need to expose your service as a NodePort, create a second service alongside it, for instance:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch-local
  namespace: observability
spec:
  type: ClusterIP
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    app: elasticsearch
Then update the kibana manifest to point to the ClusterIP service:
        # ...
        # THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch-local:9200
        # ...
NodePort services do not create a DNS entry (e.g. elasticsearch.observability.svc.cluster.local) in Kubernetes.
Edit the SERVER_NAME value in kibana.yaml and set it to kibana:5601.
I think if you don't do this, it tries to go to port 80 by default.
This is what kibana.yaml looks like now:
...
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        - name: SERVER_NAME
          value: kibana:5601
        image: docker.elastic.co/kibana/kibana-oss:6.7.1
        imagePullPolicy: IfNotPresent
        name: kibana
...
And this is the output now:
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:console#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:interpreter#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:metrics#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:tile_map#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:timelion#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:elasticsearch#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-05-09T10:37:17Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
UPDATE
I just tested it on a bare-metal cluster (bootstrapped through kubeadm), and it worked again.
This is the output:
{"type":"log","#timestamp":"2019-05-09T11:09:59Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","#timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","#timestamp":"2019-05-09T11:10:04Z","tags":["status","plugin:elasticsearch#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","#timestamp":"2019-05-09T11:10:04Z","tags":["info","migrations"],"pid":1,"message":"Creating index .kibana_1."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Pointing alias .kibana to .kibana_1."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Finished in 2417ms."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
Note that it went from "No living connections" to "Running". I am running the nodes on GCP and had to open the firewalls for it to work. What's your environment?

Can I run Kubernetes Dashboard on a separate cluster from the targeted cluster

I have exposed the Kube API through a proxy, but I do not have permission to run the Dashboard on that cluster. Can I run the dashboard on a separate cluster and point it at the desired cluster's API?
Yes, you can. Below is the preferred deployment definition for the dashboard (from the dashboard GitHub page). You will have to uncomment the option --apiserver-host=http://my-address:port. You will also have to make sure you are using the right certificates and credentials to access your kube-apiserver. For security reasons, I would recommend opening your kube-apiserver proxy only to very specific hosts, like the one where your dashboard will be running.
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
        - --auto-generate-certificates
        # Uncomment the following line to manually specify Kubernetes API server Host
        # If not specified, Dashboard will attempt to auto discover the API server and connect
        # to it. Uncomment only if the default does not work.
        # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
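For the cross-cluster case the relevant change is only the args section: uncomment --apiserver-host and point it at the other cluster's exposed API endpoint. A sketch of that fragment (the address is the placeholder from the manifest above; substitute your own proxied kube-apiserver endpoint and make sure it accepts the dashboard's credentials):

        args:
        - --auto-generate-certificates
        # point the dashboard at the remote cluster's API server
        # (placeholder address from the manifest above)
        - --apiserver-host=http://my-address:port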

Service not exposing in kubernetes

I have a deployment and a service in GKE. I exposed the deployment as a Load Balancer, but I cannot access it through the service (curl or browser). I get:
curl: (7) Failed to connect to <my-Ip-Address> port 443: Connection refused
I can port forward directly to the pod and it works fine:
kubectl --namespace=redfalcon port-forward web-service-rf-76967f9c68-2zbhm 9999:443 >> /dev/null
curl -k -v --request POST --url https://localhost:9999/auth/login/ --header 'content-type: application/json' --header 'x-profile-key: ' --data '{"email":"<testusername>","password":"<testpassword>"}'
I have most likely misconfigured my service but cannot see how. Any help on what I did wrong would be very much appreciated.
Service Yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: red-falcon-lb
  namespace: redfalcon
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    protocol: TCP
  selector:
    app: web-service-rf
Deployment YAML
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: web-service-rf
spec:
  selector:
    matchLabels:
      app: web-service-rf
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: web-service-rf
    spec:
      initContainers:
      - name: certificate-init-container
        image: proofpoint/certificate-init-container:0.2.0
        imagePullPolicy: Always
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - "-namespace=$(NAMESPACE)"
        - "-pod-name=$(POD_NAME)"
        - "-query-k8s"
        volumeMounts:
        - name: tls
          mountPath: /etc/tls
      containers:
      - name: web-service-rf
        image: gcr.io/redfalcon-186521/redfalcon-webserver-minimal:latest
        # image: gcr.io/redfalcon-186521/redfalcon-webserver-full:latest
        command:
        - "./server"
        - "--port=443"
        imagePullPolicy: Always
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        ports:
        - containerPort: 443
        resources:
          limits:
            memory: "500Mi"
            cpu: "100m"
        volumeMounts:
        - mountPath: /etc/tls
          name: tls
        - mountPath: /var/secrets/google
          name: google-cloud-key
      volumes:
      - name: tls
        emptyDir: {}
      - name: google-cloud-key
        secret:
          secretName: pubsub-key
Output of kubectl describe svc red-falcon-lb:
Name: red-falcon-lb
Namespace: redfalcon
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"red-falcon-lb","namespace":"redfalcon"},"spec":{"ports":[{"name":"https","port...
Selector: app=web-service-rf
Type: LoadBalancer
IP: 10.43.245.9
LoadBalancer Ingress: <EXTERNAL IP REDACTED>
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31524/TCP
Endpoints: 10.40.0.201:443,10.40.0.202:443
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type    Reason                Age   From                 Message
----    ------                ----  ----                 -------
Normal  EnsuringLoadBalancer  39m   service-controller   Ensuring load balancer
Normal  EnsuredLoadBalancer   38m   service-controller   Ensured load balancer
I figured out what it was...
My Go app was listening on localhost instead of 0.0.0.0. This meant that kubectl port-forward worked, but exposing it through a Service didn't.
I had to add "--host 0.0.0.0" to the container command in my Kubernetes manifest, and it then listened to requests from outside localhost.
My command ended up being...
"./server --port 8080 --host 0.0.0.0"

Kubernetes nodeport not working

I've created a YAML file with three images in one pod (they need to communicate with each other over 127.0.0.1). It seems that it's all working. I've defined a NodePort in the YAML file.
There is one deployment defined, applications, which contains three images:
contacts-db (A MySQL database)
front-end (An Angular website)
net-core (An API)
I've defined three services, one for every container, each with type NodePort to access it.
So I retrieved the services to get the port numbers:
NAME          CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
contacts-db   10.103.67.74     <nodes>       3306:30241/TCP   1d
front-end     10.107.226.176   <nodes>       80:32195/TCP     1d
net-core      10.108.146.87    <nodes>       5000:30245/TCP   1d
And I navigate in my browser to http://:32195 and it just keeps loading; it's not connecting. This is the complete YAML file:
---
apiVersion: v1
kind: Namespace
metadata:
  name: three-tier
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: applications
  labels:
    name: applications
  namespace: three-tier
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: applications
    spec:
      containers:
      - name: contacts-db
        image: mysql/mysql-server #TBD
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: quintor
        - name: MYSQL_DATABASE
          value: quintor #TBD
        ports:
        - name: mysql
          containerPort: 3306
      - name: front-end
        image: xanvier/angularfrontend #TBD
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
      - name: net-core
        image: xanvier/contactsapi #TBD
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: contacts-db
  labels:
    name: contacts-db
  namespace: three-tier
spec:
  type: NodePort
  ports:
  # the port that this service should serve on
  - port: 3306
    targetPort: 3306
  selector:
    name: contacts-db
---
apiVersion: v1
kind: Service
metadata:
  name: front-end
  labels:
    name: front-end
  namespace: three-tier
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80 #nodePort: 30001
  selector:
    name: front-end
---
apiVersion: v1
kind: Service
metadata:
  name: net-core
  labels:
    name: net-core
  namespace: three-tier
spec:
  type: NodePort
  ports:
  - port: 5000
    targetPort: 5000 #nodePort: 30001
  selector:
    name: net-core
---
The selector of a service matches the labels of your pods. In your case the defined selectors point to the container names, which match nothing when selecting pods.
You'd have to redefine your services to use one selector, or split your containers up into different Deployments / Pods.
To see whether a selector defined for a service would work, you can check it with:
kubectl get pods -l key=value
If the result is empty, your services will hit nothing either.
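For example, re-pointing the front-end service at the pod's actual label would look roughly like this (a sketch, assuming you keep the single combined pod whose template carries the label name: applications):

apiVersion: v1
kind: Service
metadata:
  name: front-end
  namespace: three-tier
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: applications # matches the pod template label, not the container name

kubectl get pods -n three-tier -l name=applications should then return the pod, and the service will finally get endpoints.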