Disable Kubernetes ClusterIP service environment variables on pods - kubernetes

Whenever a new pod is created in the cluster, environment variables for the default Kubernetes ClusterIP service are injected into it.
The Kubernetes ClusterIP service that is running:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.116.0.1 <none> 443/TCP 27d
No matter which namespace the pod runs in, the following env vars always appear:
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.116.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.116.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.116.0.1:443
KUBERNETES_SERVICE_HOST=10.116.0.1
I'm using enableServiceLinks: false to prevent service environment variables from being injected into pods, but it looks like it doesn't work for the default Kubernetes ClusterIP service.
Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: indecision-app-deployment
  labels:
    app: indecision-app
spec:
  selector:
    matchLabels:
      app: indecision-app
  template:
    metadata:
      labels:
        app: indecision-app
    spec:
      enableServiceLinks: false
      containers:
        - name: indecision-app
          image: hleal18/indecision-app:latest
          ports:
            - containerPort: 8080
Is it expected that enableServiceLinks=false also avoids the default Kubernetes clusterIP service of being injected?

In the k8s source code you can find this comment:
// We always want to add environment variabled for master services
// from the master service namespace, even if enableServiceLinks is false.
and the code that adds these environment variables:
if service.Namespace == kl.masterServiceNamespace && masterServices.Has(serviceName) {
    if _, exists := serviceMap[serviceName]; !exists {
        serviceMap[serviceName] = service
    }
}
As you can see, the kubelet adds services from masterServiceNamespace, which defaults to "default".
Digging a bit more, I found that there is a --master-service-namespace flag:
--master-service-namespace The namespace from which the kubernetes master services should be injected into pods (default "default") (DEPRECATED: This flag will be removed in a future version.)
The flag is now deprecated and may be removed in a future version.
Setting it on every kubelet should solve your issue, but it is probably not the best thing to do, as the flag was likely deprecated for a reason.
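If you still want to try it, the flag is passed to the kubelet like any other command-line flag, and pointing it at a namespace that does not contain the kubernetes Service effectively stops the injection. A minimal sketch for a kubeadm-provisioned Ubuntu node, assuming the kubelet picks up extra flags from /etc/default/kubelet (the file path and the target namespace name are assumptions; adapt them to your distribution):

# /etc/default/kubelet -- kubeadm's systemd drop-in reads KUBELET_EXTRA_ARGS from here
KUBELET_EXTRA_ARGS=--master-service-namespace=some-empty-namespace

# restart the kubelet on every node so the flag takes effect
sudo systemctl restart kubelet

Note that only pods created after the restart are affected, since the environment variables are injected at pod creation time.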

Related

Testing locally k8s distributed system

I'm new to k8s and I'm trying to build a distributed system. The idea is that a stateful pod will be spawned for each user.
Main services are two Python applications MothershipService and Ship. MothershipService's purpose is to keep track of ship-per-user, do health checks, etc. Ship is running some (untrusted) user code.
MothershipService             Ship-user1
|                | ---------- |        |---vol1
|................| -----.     |--------|
                        \
                         \    Ship-user2
                          '-  |        |---vol2
                              |--------|
I can get the ship service up and running fine:
> kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/ship-0 1/1 Running 0 7d 10.244.0.91 minikube <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ship ClusterIP None <none> 8000/TCP 7d app=ship
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d <none>
NAME READY AGE CONTAINERS IMAGES
statefulset.apps/ship 1/1 7d ship ship
My question is how do I go about testing this via curl or a browser? These are all backend services, so NodePort doesn't seem like the right approach since none of this should be accessible to the public. Eventually I will build a test suite for all this and deploy on GKE.
ship.yml (pseudo-spec)
kind: Service
metadata:
  name: ship
spec:
  ports:
    - port: 8000
      name: ship
  clusterIP: None   # headless service
  ..
---
kind: StatefulSet
metadata:
  name: ship
spec:
  serviceName: "ship"
  replicas: 1
  template:
    spec:
      containers:
        - name: ship
          image: ship
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8000
              name: ship
  ..
One possibility is to use the kubectl port-forward command to expose the pod port locally on your system. For example, if I use this deployment to run a simple web server listening on port 8000:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: example
  name: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - args:
            - --port
            - "8000"
          image: docker.io/alpinelinux/darkhttpd
          name: web
          ports:
            - containerPort: 8000
              name: http
I can expose that on my local system by running:
kubectl port-forward deploy/example 8000:8000
As long as that port-forward command is running, I can point my browser (or curl) at http://localhost:8000 to access the service.
Alternately, I can use kubectl exec to run commands (like curl or wget) inside the pod:
kubectl exec -it deploy/example -- wget -O- http://127.0.0.1:8000
An example process of how to create a Kubernetes Service object that exposes an external IP address:
Creating a service for an application running in five pods:
Run a Hello World application in your cluster:
kubectl run hello-world --replicas=5 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
The preceding command creates a Deployment object and an associated ReplicaSet object. The ReplicaSet has five Pods, each of which runs the Hello World application.
Display information about the Deployment:
kubectl get deployments hello-world
kubectl describe deployments hello-world
Display information about your ReplicaSet objects:
kubectl get replicasets
kubectl describe replicasets
Create a Service object that exposes the deployment:
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
Display information about the Service:
kubectl get services my-service
The output is similar to this:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service 10.3.245.137 104.198.205.71 8080/TCP 54s
Note: If the external IP address is shown as <pending>, wait for a minute and enter the same command again.
Display detailed information about the Service:
kubectl describe services my-service
The output is similar to this:
Name: my-service
Namespace: default
Labels: run=load-balancer-example
Selector: run=load-balancer-example
Type: LoadBalancer
IP: 10.3.245.137
LoadBalancer Ingress: 104.198.205.71
Port: <unset> 8080/TCP
NodePort: <unset> 32377/TCP
Endpoints: 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more...
Session Affinity: None
Events:
Make a note of the external IP address exposed by your service. In this example, the external IP address is 104.198.205.71. Also note the value of Port. In this example, the port is 8080.
In the preceding output, you can see that the service has several endpoints: 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more. These are internal addresses of the pods that are running the Hello World application. To verify these are pod addresses, enter this command:
kubectl get pods --output=wide
The output is similar to this:
NAME ... IP NODE
hello-world-2895499144-1jaz9 ... 10.0.1.6 gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-2e5uh ... 10.0.1.8 gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-9m4h1 ... 10.0.0.6 gke-cluster-1-default-pool-e0b8d269-5v7a
hello-world-2895499144-o4z13 ... 10.0.1.7 gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-segjf ... 10.0.2.5 gke-cluster-1-default-pool-e0b8d269-cpuc
Use the external IP address to access the Hello World application:
curl http://<external-ip>:<port>
where <external-ip> is the external IP address of your Service, and <port> is the value of Port in your Service description.
The response to a successful request is a hello message:
Hello Kubernetes!
Please refer to How to Use external IP in GKE and Exposing an External IP Address to Access an Application in a Cluster for more information.

Kubernetes clusterIP does not load balance requests [duplicate]

My Environment: Mac dev machine with latest Minikube/Docker
I built (locally) a simple Docker image with a simple Django REST API "hello world". I'm running a deployment with 3 replicas. This is my YAML file for defining it:
apiVersion: v1
kind: Service
metadata:
  name: myproj-app-service
  labels:
    app: myproj-be
spec:
  type: LoadBalancer
  ports:
    - port: 8000
  selector:
    app: myproj-be
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myproj-app-deployment
  labels:
    app: myproj-be
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myproj-be
  template:
    metadata:
      labels:
        app: myproj-be
    spec:
      containers:
        - name: myproj-app-server
          image: myproj-app-server:4
          ports:
            - containerPort: 8000
          env:
            - name: DATABASE_URL
              value: postgres://myname:#10.0.2.2:5432/myproj2
            - name: REDIS_URL
              value: redis://10.0.2.2:6379/1
When I apply this YAML, it creates everything correctly:
- one deployment
- one service
- three pods
Deployments:
NAME READY UP-TO-DATE AVAILABLE AGE
myproj-app-deployment 3/3 3 3 79m
Services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 83m
myproj-app-service LoadBalancer 10.96.91.44 <pending> 8000:31559/TCP 79m
Pods:
NAME READY STATUS RESTARTS AGE
myproj-app-deployment-77664b5557-97wkx 1/1 Running 0 48m
myproj-app-deployment-77664b5557-ks7kf 1/1 Running 0 49m
myproj-app-deployment-77664b5557-v9889 1/1 Running 0 49m
The interesting thing is that when I SSH into the Minikube VM and hit the service using curl 10.96.91.44:8000, it respects the LoadBalancer type of the service and rotates between all three pods as I hit the endpoint again and again. I can see that in the returned results, which I have made sure include the HOSTNAME of the pod.
However, when I try to access the service from my host Mac using kubectl port-forward service/myproj-app-service 8000:8000, every time I hit the endpoint the same pod responds. It doesn't load balance. I can see that clearly when I kubectl logs -f <pod> on all three pods: only one of them is handling the hits while the other two are idle...
Is this a kubectl port-forward limitation or issue? or am I missing something greater here?
kubectl port-forward looks up the first Pod from the Service information provided on the command line and forwards directly to a Pod rather than forwarding to the ClusterIP/Service port. The cluster doesn't get a chance to load balance the service like regular service traffic.
The kubernetes API only provides Pod port forward operations (CREATE and GET). Similar API operations don't exist for Service endpoints.
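If the goal is to see the Service actually load balance across the pods, send requests through the Service from inside the cluster instead of through port-forward, so kube-proxy gets to pick a backend for each connection. A minimal sketch using a throwaway curl pod (the image and the service DNS name are just examples matching the manifests above):

# run a one-off pod in the cluster and hit the service a few times;
# each request goes through the Service, so it can land on any of the three pods
kubectl run tmp-curl --rm -it --restart=Never --image=curlimages/curl --command -- \
  sh -c 'for i in 1 2 3 4 5 6; do curl -s http://myproj-app-service:8000/; echo; done'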
kubectl code
Here's a little bit of the flow from the kubectl code that seems to back that up (I'll just add that Go isn't my primary language)
The portforward.go Complete function is where kubectl port-forward does the first lookup for a pod from the options, via AttachablePodForObjectFn:
The AttachablePodForObjectFn is defined as attachablePodForObject in this interface, then here is the attachablePodForObject function.
To my (inexperienced) Go eyes, it appears that attachablePodForObject is the thing kubectl uses to look up a Pod from a Service defined on the command line.
Then from there on everything deals with filling in the Pod specific PortForwardOptions (which doesn't include a service) and is passed to the kubernetes API.
The reason was that my pods were randomly in a crashing state due to Python *.pyc files that were left in the container. This causes issues when Django is running in a multi-pod Kubernetes deployment. Once I removed this issue and all pods ran successfully, the round-robin started working.
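The post doesn't show how the stale .pyc files were cleaned up; one common way to avoid the problem (an assumption on my part, not something from the original answer) is to keep bytecode files out of the image in the first place:

# Dockerfile fragment (illustrative): stop Python from writing .pyc files at runtime
ENV PYTHONDONTWRITEBYTECODE=1
# delete any .pyc files that were copied in with the source tree (/app is a placeholder path)
RUN find /app -name '*.pyc' -delete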

ambassador service stays "pending"

Currently running a fresh "all in one VM" (stacked master/worker approach) Kubernetes v1.21.1-00 on Ubuntu Server 20 LTS, using
- cri-o as the container runtime interface
- calico for networking/security
I also installed the kubernetes-dashboard (but I guess that's not important for my issue 😉). Following this guide for installing Ambassador: https://www.getambassador.io/docs/edge-stack/latest/topics/install/yaml-install/ I run into the issue that the service is stuck in status "pending".
kubectl get svc -n ambassador prints out the following stuff
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ambassador LoadBalancer 10.97.117.249 <pending> 80:30925/TCP,443:32259/TCP 5h
ambassador-admin ClusterIP 10.101.161.169 <none> 8877/TCP,8005/TCP 5h
ambassador-redis ClusterIP 10.110.32.231 <none> 6379/TCP 5h
quote ClusterIP 10.104.150.137 <none> 80/TCP 5h
While changing the type from LoadBalancer to NodePort in the service sets it up correctly, I'm not sure what the implications are. Again, I want to use Ambassador as an ingress component here - with my setup (only one machine), "real" load balancing might not be necessary.
To cover all the subdomain stuff, I set up a wildcard record pointing to my machine, meaning I have a CNAME for *.k8s.my-domain.com which points to this host. I don't know if this approach was that smart for setting up an ingress.
Edit: List of events, as requested below:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 116s default-scheduler Successfully assigned ambassador/ambassador-redis-584cd89b45-js5nw to dev-bvpl-099
Normal Pulled 116s kubelet Container image "redis:5.0.1" already present on machine
Normal Created 116s kubelet Created container redis
Normal Started 116s kubelet Started container redis
Additionally, here's the pending service in YAML representation (exported via kubectl get svc -n ambassador -o yaml ambassador):
apiVersion: v1
kind: Service
metadata:
  annotations:
    a8r.io/bugs: https://github.com/datawire/ambassador/issues
    a8r.io/chat: http://a8r.io/Slack
    a8r.io/dependencies: ambassador-redis.ambassador
    a8r.io/description: The Ambassador Edge Stack goes beyond traditional API Gateways
      and Ingress Controllers with the advanced edge features needed to support developer
      self-service and full-cycle development.
    a8r.io/documentation: https://www.getambassador.io/docs/edge-stack/latest/
    a8r.io/owner: Ambassador Labs
    a8r.io/repository: github.com/datawire/ambassador
    a8r.io/support: https://www.getambassador.io/about-us/support/
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"a8r.io/bugs":"https://github.com/datawire/ambassador/issues","a8r.io/chat":"http://a8r.io/Slack","a8r.io/dependencies":"ambassador-redis.ambassador","a8r.io/description":"The Ambassador Edge Stack goes beyond traditional API Gateways and Ingress Controllers with the advanced edge features needed to support developer self-service and full-cycle development.","a8r.io/documentation":"https://www.getambassador.io/docs/edge-stack/latest/","a8r.io/owner":"Ambassador Labs","a8r.io/repository":"github.com/datawire/ambassador","a8r.io/support":"https://www.getambassador.io/about-us/support/"},"labels":{"app.kubernetes.io/component":"ambassador-service","product":"aes"},"name":"ambassador","namespace":"ambassador"},"spec":{"ports":[{"name":"http","port":80,"targetPort":8080},{"name":"https","port":443,"targetPort":8443}],"selector":{"service":"ambassador"},"type":"LoadBalancer"}}
  creationTimestamp: "2021-05-22T07:18:23Z"
  labels:
    app.kubernetes.io/component: ambassador-service
    product: aes
  name: ambassador
  namespace: ambassador
  resourceVersion: "4986406"
  uid: 68e4582c-be6d-460c-909e-dfc0ad84ae7a
spec:
  clusterIP: 10.107.194.191
  clusterIPs:
  - 10.107.194.191
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    nodePort: 32542
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 32420
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    service: ambassador
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
EDIT#2: I wonder if https://stackoverflow.com/a/44112285/667183 applies to my case as well?
The answer is pretty much here: https://serverfault.com/questions/1064313/ambassador-service-stays-pending. After installing a load balancer, the whole setup worked. I went with MetalLB (https://metallb.universe.tf/installation/#installation-by-manifest for installation) and the following configuration for a single-node Kubernetes cluster:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.16.0.99-10.16.0.99
After a few seconds the load balancer is detected and everything goes fine.
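For completeness, a minimal sketch of applying the config and watching the service pick up an address (the file name metallb-config.yaml is just an assumed name for the ConfigMap above):

kubectl apply -f metallb-config.yaml
# the EXTERNAL-IP column should switch from <pending> to an address from the pool
kubectl get svc -n ambassador ambassador -w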

Kubernetes dashboard through kubectl proxy - port confusion

I have seen that the standard way to access http services through the kubectl proxy is the following:
http://api.host/api/v1/namespaces/NAMESPACE/services/SERVICE_NAME:SERVICE_PORT/proxy/
Why is it that the kubernetes-dashboard uses https:kubernetes-dashboard: for SERVICE_NAME:SERVICE_PORT?
I would assume from the following that it would be kubernetes-dashboard:443.
kubectl -n kube-system get service kubernetes-dashboard -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes-dashboard ClusterIP 10.233.50.212 <none> 443:31663/TCP 15d k8s-app=kubernetes-dashboard
Additionally, what is the meaning of the port shown as 443:31663, when all other services just have x/TCP (x being one number instead of x:y)?
Lastly, kubectl cluster-info will show
Kubernetes master is running at https://x.x.x.x:x
kubernetes-dashboard is running at https://x.x.x.x:x/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
I have created a simple service but it does not show here and I am confused how to determine what services show here or not.
Why is it that the kubernetes-dashboard uses
https:kubernetes-dashboard: for SERVICE_NAME:SERVICE_PORT?
Additionally, what is the meaning of the port shown as 443:31663, when all
other services just have x/TCP (x being one number instead of x:y)?
As described in Manually constructing apiserver proxy URLs, the default way is
http://kubernetes_master_address/api/v1/namespaces/namespace_name/services/service_name[:port_name]/proxy
By default, the API server proxies to your service using http. To use
https, prefix the service name with https::
http://kubernetes_master_address/api/v1/namespaces/namespace_name/services/https:service_name:[port_name]/proxy
The supported formats for the name segment of the URL are:
<service_name> - proxies to the default or unnamed port using http
<service_name>:<port_name> - proxies to the specified port using http
https:<service_name>: - proxies to the default or unnamed port using https (note the trailing colon)
https:<service_name>:<port_name> - proxies to the specified port using https
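Applied to the dashboard, the third form is exactly what you see: the service is reached over HTTPS and its port is unnamed, hence the trailing colon. A couple of constructed examples (the master address and the first service name are placeholders):

# http, default/unnamed port
http://<kubernetes_master_address>/api/v1/namespaces/default/services/my-service/proxy/
# https, default/unnamed port -- note the trailing colon after the service name
http://<kubernetes_master_address>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/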
Next:
I have created a simple service but it does not show here and I am
confused how to determine what services show here or not.
Here is what I found and tested for you:
cluster-info API reference:
Display addresses of the master and services with label kubernetes.io/cluster-service=true. To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
So, as soon as you add the kubernetes.io/cluster-service: "true" label, the service starts to show up under kubectl cluster-info.
BUT!! There is an expected behavior where you will see your service disappear from the output within a couple of minutes. The explanation has been found here - I only copy-paste it here for future reference.
The other part is the addon manager. It uses this annotation to synchronize the cluster state with static manifest files. The behavior was something like this:
1) addon manager reads a yaml from disk -> deploys the contents
2) addon manager reads all deployments from api server with annotation cluster-service:true -> deletes all that do not exist as files
As a result, if you add this annotation, addon manager will remove dashboard after a minute or so.
So,
dashboard is deployed after cluster creation -> annotation should not be set:
https://github.com/kubernetes/dashboard/blob/b98d167dadaafb665a28091d1e975cf74eb31c94/src/deploy/kubernetes-dashboard.yaml
dashboard is deployed as part of cluster creation -> annotation should be set:
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dashboard/dashboard-controller.yaml
At least this was the behavior some time ago. I think kubeadm does not use addon-manager. But it is still part of kube-up script.
A solution for this behavior also exists: add the additional label addonmanager.kubernetes.io/mode: EnsureExists
Explanation is here
Your final service should look like:
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
kubectl get svc kubernetes-dashboard -n kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists","k8s-app":"kubernetes-dashboard","kubernetes.io/cluster-service":"true"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
kubectl cluster-info
Kubernetes master is running at https://*.*.*.*
...
kubernetes-dashboard is running at https://*.*.*.*/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
...

What is the meaning of this kubernetes UI error message?

I am running 3 Ubuntu Server VMs on my local machine and trying to manage them with Kubernetes.
The UI does not start by itself when using the start script, so I tried to start up the UI manually using:
kubectl create -f addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
kubectl create -f addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
The first command succeeds then I get the following for the second command:
error validating "addons/kube-ui/kube-ui-svc.yaml": error validating
data: [field nodePort: is required, field port: is required]; if you
choose to ignore these errors, turn validation off with
--validate=false
So I try editing the default kube-ui-svc file by adding nodePort to the config:
apiVersion: v1
kind: Service
metadata:
  name: kube-ui
  namespace: kube-system
  labels:
    k8s-app: kube-ui
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeUI"
spec:
  selector:
    k8s-app: kube-ui
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30555
But then I get another error after adding nodePort:
The Service "kube-ui" is invalid. spec.ports[0].nodePort: invalid
value '30555': cannot specify a node port with services of type
ClusterIP
I cannot get the UI running at my master node's IP. kubectl get nodes returns correct information. Thanks.
I believe you're running into https://github.com/kubernetes/kubernetes/issues/8901 with the first error; can you set it to 0? Setting a NodePort with service.Type=ClusterIP doesn't make sense, so the second error is legit.
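In other words, keep the Service as type ClusterIP and set the node port to 0 instead of a real value. A minimal sketch of what the edited manifest might look like (based on the manifest above; 0 is treated as "not set", which should satisfy the validator mentioned in the linked issue):

apiVersion: v1
kind: Service
metadata:
  name: kube-ui
  namespace: kube-system
  labels:
    k8s-app: kube-ui
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeUI"
spec:
  selector:
    k8s-app: kube-ui
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 0   # 0 means "do not allocate a node port", so it stays a plain ClusterIP service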