Referencing Helm Redis master from another pod within Kubernetes

I am running Redis via Helm on Kubernetes and wondering how to reference the master from my application, which also runs inside Kubernetes as a pod. Helm is nice enough to create ClusterIP services, but I am still unclear what my application should use to always reach the master:
MacBook-Pro ➜ api git:(master) ✗ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ignoble-hyena-redis-master ClusterIP 10.100.187.188 <none> 6379/TCP 5h21m
ignoble-hyena-redis-slave ClusterIP 10.100.236.164 <none> 6379/TCP 5h21m
MacBook-Pro ➜ api git:(master) ✗ kubectl describe service ignoble-hyena-redis-master
Name: ignoble-hyena-redis-master
Namespace: default
Labels: app=redis
chart=redis-9.0.1
heritage=Tiller
release=ignoble-hyena
Annotations: <none>
Selector: app=redis,release=ignoble-hyena,role=master
Type: ClusterIP
IP: 10.100.187.188
Port: redis 6379/TCP
TargetPort: redis/TCP
Endpoints: 192.168.34.46:6379
Session Affinity: None
Events: <none>
Do I use redis://my-password@ignoble-hyena-redis-master:6379? That seems fragile, since the service name changes every time I redeploy the Helm chart. What is the recommended way to handle internal service discovery within the Kubernetes cluster?

You should package your application as a Helm chart. This basically involves running helm create, then copying your existing deployment YAML into the templates directory. Charts can have dependencies, so you can declare that your application needs Redis. Using the version in the standard Helm charts repository, you can say something like:
# I am requirements.yaml
dependencies:
  - name: redis
    version: ~9.0.2
    repository: https://kubernetes-charts.storage.googleapis.com
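A minimal sequence along those lines might look like the following sketch (chart and file names are illustrative, and the commands use Helm 2 syntax to match the Tiller-based output above):

helm create myapp                        # scaffold a chart
cp k8s/deployment.yaml myapp/templates/  # bring in your existing manifests
# put the requirements.yaml above into myapp/
helm dependency update myapp             # pulls the redis chart into myapp/charts/
helm install --name ignoble-hyena ./myapp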
The important detail here is that your application and its Redis will have the same Helm release name -- if your application is ignoble-hyena-myapp then its Redis will be ignoble-hyena-redis-master. You can set this in your deployment YAML spec using templates
env:
  - name: REDIS_HOST
    value: {{ .Release.Name }}-redis-master
Because of the way Kubernetes works internally, even if you helm upgrade your chart to a newer image tag, it won't usually touch the Redis. Helm will upload a new version of the Redis artifacts that looks exactly the same as the old one, and Kubernetes will take no action.
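If your Redis is password-protected, the chart also creates a Secret you can wire in the same way. A hedged sketch follows; the secret name <release>-redis and the key redis-password follow the stable/redis chart's usual conventions, so double-check them against your deployed release (kubectl get secrets):

env:
  - name: REDIS_HOST
    value: {{ .Release.Name }}-redis-master
  - name: REDIS_PASSWORD
    valueFrom:
      secretKeyRef:
        name: {{ .Release.Name }}-redis   # assumption: secret created by the redis chart
        key: redis-password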

I couldn’t find it well-documented but following the template code you should be able to set the fullnameOverride value to some string you control, and the redis master will be exposed as <yourFullname>-master, and you can have your clients reach it via that. If your clients are in a different namespace, they can reach the masters at <yourFullname>-master.<redisMasterServiceNamespace>.
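For example, a hedged sketch (the exact names depend on the chart's template helpers, so verify the rendered Service names with kubectl get svc after installing):

# values.yaml for the redis chart
fullnameOverride: my-redis

# from a client pod in the same namespace:
#   redis-cli -h my-redis-master
# from another namespace:
#   redis-cli -h my-redis-master.<redis-namespace>.svc.cluster.local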

Related

Kubernetes clusterIP does not load balance requests [duplicate]

My Environment: Mac dev machine with latest Minikube/Docker
I built (locally) a simple Docker image with a simple Django REST API "hello world". I'm running a deployment with 3 replicas. This is my YAML file defining it:
apiVersion: v1
kind: Service
metadata:
  name: myproj-app-service
  labels:
    app: myproj-be
spec:
  type: LoadBalancer
  ports:
  - port: 8000
  selector:
    app: myproj-be
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myproj-app-deployment
  labels:
    app: myproj-be
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myproj-be
  template:
    metadata:
      labels:
        app: myproj-be
    spec:
      containers:
      - name: myproj-app-server
        image: myproj-app-server:4
        ports:
        - containerPort: 8000
        env:
        - name: DATABASE_URL
          value: postgres://myname:#10.0.2.2:5432/myproj2
        - name: REDIS_URL
          value: redis://10.0.2.2:6379/1
When I apply this YAML it creates everything correctly:
- one deployment
- one service
- three pods
Deployments:
NAME READY UP-TO-DATE AVAILABLE AGE
myproj-app-deployment 3/3 3 3 79m
Services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 83m
myproj-app-service LoadBalancer 10.96.91.44 <pending> 8000:31559/TCP 79m
Pods:
NAME READY STATUS RESTARTS AGE
myproj-app-deployment-77664b5557-97wkx 1/1 Running 0 48m
myproj-app-deployment-77664b5557-ks7kf 1/1 Running 0 49m
myproj-app-deployment-77664b5557-v9889 1/1 Running 0 49m
The interesting thing is that when I SSH into the Minikube VM and hit the service using curl 10.96.91.44:8000, it respects the LoadBalancer type of the service and rotates between all three pods as I hit the endpoint again and again. I can see that in the returned results, in which I have made sure to include the HOSTNAME of the pod.
However, when I try to access the service from my host Mac -- using kubectl port-forward service/myproj-app-service 8000:8000 -- every time I hit the endpoint, the same pod responds. It doesn't load balance. I can see that clearly when I kubectl logs -f all three pods: only one of them is handling the hits, while the other two are idle...
Is this a kubectl port-forward limitation or issue? or am I missing something greater here?
kubectl port-forward looks up the first Pod from the Service information provided on the command line and forwards directly to a Pod rather than forwarding to the ClusterIP/Service port. The cluster doesn't get a chance to load balance the service like regular service traffic.
The kubernetes API only provides Pod port forward operations (CREATE and GET). Similar API operations don't exist for Service endpoints.
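A quick way to see the difference (the pod and service names follow the question; the busybox client is just an illustrative throwaway pod):

# forwarded traffic goes to the single pod kubectl picked:
kubectl port-forward service/myproj-app-service 8000:8000

# in-cluster traffic through the ClusterIP is load balanced; run a temporary client:
kubectl run curl-test --rm -it --restart=Never --image=busybox -- \
  sh -c 'for i in 1 2 3 4 5 6; do wget -qO- http://myproj-app-service:8000/; echo; done'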
kubectl code
Here's a little bit of the flow from the kubectl code that seems to back that up (I'll just add that Go isn't my primary language)
The portforward.go Complete function is where kubectl port-forward does the first lookup of a Pod from the options via AttachablePodForObjectFn:
The AttachablePodForObjectFn is defined as attachablePodForObject in this interface, then here is the attachablePodForObject function.
To my (inexperienced) Go eyes, it appears that attachablePodForObject is the thing kubectl uses to look up a Pod to forward to from a Service defined on the command line.
Then from there on everything deals with filling in the Pod specific PortForwardOptions (which doesn't include a service) and is passed to the kubernetes API.
The reason was that my pods were randomly in a crashing state due to Python *.pyc files that were left in the container. This caused issues when Django ran in a multi-pod Kubernetes deployment. Once I removed this issue and all pods ran successfully, the round-robin started working.
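A hedged Dockerfile sketch of the kind of fix meant here (the base image, paths, and gunicorn start command are illustrative assumptions): keep stale bytecode out of the image so every replica starts from the same code.

# Dockerfile (illustrative)
FROM python:3.7-slim
ENV PYTHONDONTWRITEBYTECODE=1 PYTHONUNBUFFERED=1
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# drop any *.pyc / __pycache__ copied in from the host build context
RUN find . -type d -name '__pycache__' -prune -exec rm -rf {} + && \
    find . -name '*.pyc' -delete
CMD ["gunicorn", "myproj.wsgi:application", "--bind", "0.0.0.0:8000"]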

Disable Kubernetes ClusterIP service environment variables on pods

Whenever a new pod is created in the cluster, environment variables related to the default Kubernetes clusterIP service are being injected into it.
Kubernetes clusterIp service running:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.116.0.1 <none> 443/TCP 27d
No matter on which namespace the pod is running, the following env vars will always appear:
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.116.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.116.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.116.0.1:443
KUBERNETES_SERVICE_HOST=10.116.0.1
I'm using enableServiceLinks=false as a mechanism to avoid service environment variables being injected into pods, but it looks like it doesn't work for the default Kubernetes ClusterIP service.
Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: indecision-app-deployment
  labels:
    app: indecision-app
spec:
  selector:
    matchLabels:
      app: indecision-app
  template:
    metadata:
      labels:
        app: indecision-app
    spec:
      enableServiceLinks: false
      containers:
      - name: indecision-app
        image: hleal18/indecision-app:latest
        ports:
        - containerPort: 8080
Is it expected that enableServiceLinks=false also prevents the default Kubernetes ClusterIP service from being injected?
In k8s source code you can find this comment:
// We always want to add environment variabled for master services
// from the master service namespace, even if enableServiceLinks is false.
and the code that adds these environment variables:
if service.Namespace == kl.masterServiceNamespace && masterServices.Has(serviceName) {
    if _, exists := serviceMap[serviceName]; !exists {
        serviceMap[serviceName] = service
    }
}
As you can see, kubelet adds services from masterServiceNamespace which defaults to "default".
Digging a bit more I have found out that there is a flag --master-service-namespace
--master-service-namespace The namespace from which the kubernetes master services should be injected into pods (default "default") (DEPRECATED: This flag will be removed in a future version.)
Now the flag is deprecated and may be deleted in the future.
Setting it on every kubelet should solve your issue, but this is probably not the best thing to do, as it is presumably deprecated for a reason.
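If the goal is simply to keep these variables out of the application's environment, one hedged workaround is to unset them in the container command before starting the app; the exec node server.js start command below is a placeholder for however the image normally starts:

      containers:
      - name: indecision-app
        image: hleal18/indecision-app:latest
        command: ["/bin/sh", "-c"]
        # unset the injected KUBERNETES_* variables, then start the real process
        args: ["unset $(env | awk -F= '/^KUBERNETES_/{print $1}'); exec node server.js"]

You can verify the result with kubectl exec <pod> -- env | grep '^KUBERNETES_'.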

Kubernetes dashboard through kubectl proxy - port confusion

I have seen that the standard way to access http services through the kubectl proxy is the following:
http://api.host/api/v1/namespaces/NAMESPACE/services/SERVICE_NAME:SERVICE_PORT/proxy/
Why is it that the kubernetes-dashboard uses https:kubernetes-dashboard: for SERVICE_NAME:SERVICE_PORT?
I would assume from the following that it would be kubernetes-dashboard:443.
kubectl -n kube-system get service kubernetes-dashboard -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes-dashboard ClusterIP 10.233.50.212 <none> 443:31663/TCP 15d k8s-app=kubernetes-dashboard
Additionally, what is the meaning of the port shown as 443:31663, when all other services just have x/TCP (a single number instead of x:y)?
Lastly, kubectl cluster-info will show
Kubernetes master is running at https://x.x.x.x:x
kubernetes-dashboard is running at https://x.x.x.x:x/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
I have created a simple service, but it does not show up here, and I am confused about how to determine which services show up here and which do not.
Why is it that the kubernetes-dashboard uses
https:kubernetes-dashboard: for SERVICE_NAME:SERVICE_PORT?
Additionally, what is the meaning of the port shown as 443:31663, when all
other services just have x/TCP (a single number instead of x:y)?
As described in Manually constructing apiserver proxy URLs, the default way is
http://kubernetes_master_address/api/v1/namespaces/namespace_name/services/service_name[:port_name]/proxy
By default, the API server proxies to your service using http. To use
https, prefix the service name with https::
http://kubernetes_master_address/api/v1/namespaces/namespace_name/services/https:service_name:[port_name]/proxy
The supported formats for the name segment of the URL are:
<service_name> - proxies to the default or unnamed port using http
<service_name>:<port_name> - proxies to the specified port using http
https:<service_name>: - proxies to the default or unnamed port using https (note the trailing colon)
https:<service_name>:<port_name> - proxies to the specified port using https
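Putting that together for the dashboard (assuming kubectl proxy running on its default port 8001):

kubectl proxy &
# https, default/unnamed port form -- note the trailing colon:
curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/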
Next:
I have created a simple service, but it does not show up here, and I am
confused about how to determine which services show up here and which do not.
Here is what I found and tested for you:
cluster-info API reference:
Display addresses of the master and services with label kubernetes.io/cluster-service=true.
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
So, as soon as you add kubernetes.io/cluster-service: "true" label - the service starts to be seen under kubectl cluster-info.
BUT! There is an expected behavior where you will see your service disappear from the output in a couple of minutes. The explanation was found here - I am only copy-pasting it for future reference.
The other part is the addon manager. It uses this annotation to synchronize the cluster state with static manifest files. The behavior was something like this:
1) addon manager reads a yaml from disk -> deploys the contents
2) addon manager reads all deployments from api server with annotation cluster-service:true -> deletes all that do not exist as files
As a result, if you add this annotation, addon manager will remove dashboard after a minute or so.
So,
dashboard is deployed after cluster creation -> annotation should not be set:
https://github.com/kubernetes/dashboard/blob/b98d167dadaafb665a28091d1e975cf74eb31c94/src/deploy/kubernetes-dashboard.yaml
dashboard is deployed as part of cluster creation -> annotation should be set:
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dashboard/dashboard-controller.yaml
At least this was the behavior some time ago. I think kubeadm does not use addon-manager. But it is still part of kube-up script.
A solution for this behavior also exists: add the additional label addonmanager.kubernetes.io/mode: EnsureExists
Explanation is here
Your final service should look like:
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
kubectl get svc kubernetes-dashboard -n kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists","k8s-app":"kubernetes-dashboard","kubernetes.io/cluster-service":"true"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
kubectl cluster-info
Kubernetes master is running at https://*.*.*.*
...
kubernetes-dashboard is running at https://*.*.*.*/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
...

Use Prometheus with an external IP address

We have a k8s cluster and I have an application running there.
Now I'm trying to add Prometheus (https://prometheus.io/),
and I use the command
helm install stable/prometheus --version 6.7.4 --name my-prometheus
this command works and I got this
NAME: my-prometheus
LAST DEPLOYED: Tue Feb 5 15:21:46 2019
NAMESPACE: default
STATUS: DEPLOYED
...
When I run the command
kubectl get services
I get this:
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 2d4h
my-prometheus-alertmanager ClusterIP 100.75.244.55 <none> 80/TCP 8m44s
my-prometheus-kube-state-metrics ClusterIP None <none> 80/TCP 8m43s
my-prometheus-node-exporter ClusterIP None <none> 9100/TCP 8m43s
my-prometheus-pushgateway ClusterIP 100.75.24.67 <none> 9091/TCP 8m43s
my-prometheus-server ClusterIP 100.33.26.206 <none> 80/TCP 8m43s
I didn't get any external IP.
Does someone know how to add it? Via a service? Any example for this?
Update
I've added the following YAML:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
spec:
  selector:
    app: prometheus-server
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 9090
    nodePort: 30001
It was created successfully.
Now when running kubectl get services I see:
my-prometheus-server LoadBalancer 100.33.26.206 8080:30001/TCP 80/TCP 8m43s
When I use 100.33.26.206:30001 in the browser, nothing happens. Any idea?
I think what you are trying to do is create a service of type LoadBalancer; those have both an internal and an external IP.
You can create one like any other service, but you should specify these two fields:
externalTrafficPolicy: Local
type: LoadBalancer
Updated:
There seems to be some confusion: you don't need an external IP to monitor your apps; it will only be used to access the Prometheus UI.
The UI is accessible on port 9090, but Prometheus is never accessed by the exporters; it is Prometheus which scrapes the exporters.
Now, to access a service from the internet you need a public IP, but it seems that what you have is still an internal IP: it's in the same subnet as the other ClusterIPs, and it should not be. For now, in place of an external IP it's showing a port redirect, which is also wrong, as the Prometheus UI is on port 9090 (if you didn't modify your configuration it should still be). You should try removing the nodePort and leave the port assignment to Kubernetes.
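A hedged sketch of a corrected Service along those lines; the selector labels follow the stable/prometheus chart's usual app/component/release labels, so check them against kubectl get pods --show-labels before relying on them:

apiVersion: v1
kind: Service
metadata:
  name: prometheus-external
spec:
  type: LoadBalancer
  selector:
    app: prometheus          # assumption: labels set by the stable/prometheus chart
    component: server
    release: my-prometheus
  ports:
  - port: 80                 # external port
    targetPort: 9090         # Prometheus UI port inside the pod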
The Prometheus Helm chart does support service configuration; see the documentation.
To configure the Prometheus server on a local cluster, follow these steps:
Create values.yaml:
server:
  service:
    servicePort: 31000
    type: LoadBalancer
    loadBalancerIP: localhost
or
server:
  service:
    nodePort: 31000
    type: NodePort
Add stable repo to helm (if missing):
helm repo add stable "https://kubernetes-charts.storage.googleapis.com/"
Install Prometheus:
helm install prometheus-demo stable/prometheus --values .\values.yaml
Wait for 1-2mins. Prometheus should be available: http://localhost:31000/
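If you just want to reach the UI without exposing anything externally, a simple alternative (using the service name and port 80 from the kubectl get services output above) is:

kubectl port-forward svc/my-prometheus-server 9090:80
# then open http://localhost:9090/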

Traefik dashboard/web UI 404 when installed via helm on Digitalocean single node cluster

I am trying to set up Traefik as my ingress controller and load balancer on a single-node cluster (DigitalOcean). Following the official Traefik setup guide, I installed Traefik using Helm:
helm install --values values.yaml stable/traefik
# values.yaml
dashboard:
  enabled: true
  domain: traefik-ui.minikube
kubernetes:
  namespaces:
    - default
    - kube-system
#output
RESOURCES:
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
operatic-emu-traefik-f5dbf4b8f-z9bzp 0/1 ContainerCreating 0 1s
==> v1/ConfigMap
NAME AGE
operatic-emu-traefik 1s
==> v1/Service
operatic-emu-traefik-dashboard 1s
operatic-emu-traefik 1s
==> v1/Deployment
operatic-emu-traefik 1s
==> v1beta1/Ingress
operatic-emu-traefik-dashboard 1s
Then I created the service exposing the Web UI
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml
Then I can clearly see my traefik pod running and an external-ip being assigned:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard ClusterIP 10.245.156.214 <none> 443/TCP 11d
service/kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 14d
service/operatic-emu-traefik LoadBalancer 10.245.137.41 <external-ip> 80:31190/TCP,443:30207/TCP 5m7s
service/operatic-emu-traefik-dashboard ClusterIP 10.245.8.156 <none> 80/TCP 5m7s
Then opening http://external-ip/dashboard/ leads to 404 page not found
I read a ton of answers and tutorials but keep missing something. Any help is highly appreciated.
I am writing this post as the information is a bit much to fit in a comment. After spending enough time on understanding how k8s and helm charts work, this is how I solved it:
Firstly, I missed the RBAC part: I did not create the ClusterRole and ClusterRoleBinding needed to authorise Traefik to use the K8S API (as I am using version 1.12). Hence, I should have either deployed the ClusterRole and ClusterRoleBinding manually or added the following to my values.yaml:
rbac:
  enabled: true
Secondly, I tried to access the dashboard UI from the IP directly, without realising that Traefik uses the hostname to route to its dashboard, as @Rico mentioned above (I am voting you up as you did provide helpful info, but I did not manage to connect all the pieces of the puzzle at that time). So, either edit your /etc/hosts file to link your hostname to the external IP and then access the dashboard via the browser, or test that it is working with curl:
curl http://external-ip/dashboard/ -H 'Host: traefik-ui.minikube'
To sum up, you should be able to install Traefik and access its dashboard ui by installing:
helm install --values values.yaml stable/traefik
# values.yaml
dashboard:
  enabled: true
  domain: traefik-ui.minikube
rbac:
  enabled: true
kubernetes:
  namespaces:
    - default
    - kube-system
and then editing your hosts file and opening the hostname you chose.
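For example (the hostname matches the domain value above; substitute the EXTERNAL-IP reported for the traefik Service):

echo "<external-ip> traefik-ui.minikube" | sudo tee -a /etc/hosts
# then browse to http://traefik-ui.minikube/dashboard/
# or verify without touching /etc/hosts:
curl -H 'Host: traefik-ui.minikube' http://<external-ip>/dashboard/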
Now, the confusing part of the official Traefik setup guide is the section named Submitting an Ingress to the Cluster, just below Deploy Traefik using Helm Chart, which instructs you to install a service and an ingress object in order to access the dashboard. This is unneeded, as the official stable/traefik Helm chart provides both of them. You would only need that if you wanted to install Traefik by deploying all the needed objects manually. However, for a person just starting out with k8s and Helm, it looks as if that section needs to be completed after installing Traefik via the official stable/traefik chart.
I believe this is the same issue as this.
You either have to connect with the traefik-ui.minikube hostname or add a host entry on your Ingress definition like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: kube-system
  name: traefik-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: yourown.hostname.com
    http:
      paths:
      - path: /dashboard
        backend:
          serviceName: traefik-web-ui
          servicePort: web
You can check with:
$ kubectl -n kube-system get ingress