Running kubectl proxy from same pod vs different pod on same node - what's the difference? - kubernetes

I'm experimenting with this, and I'm noticing a difference in behavior that I'm having trouble understanding, namely between running kubectl proxy from within a pod vs running it in a different pod.
The sample configuration runs kubectl proxy and the container that needs it* in the same pod in a DaemonSet, i.e.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  # ...
spec:
  template:
    metadata:
      # ...
    spec:
      containers:
      # this container needs kubectl proxy to be running:
      - name: l5d
        # ...
      # so, let's run it:
      - name: kube-proxy
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"
When doing this on my cluster, I get the expected behavior. However, I will run other services that also need kubectl proxy, so I figured I'd rationalize that into its own daemon set to ensure it's running on all nodes. I thus removed the kube-proxy container and deployed the following daemon set:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-proxy
  labels:
    app: kube-proxy
spec:
  template:
    metadata:
      labels:
        app: kube-proxy
    spec:
      containers:
      - name: kube-proxy
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"
In other words, the same container configuration as previously, but now running in independent pods on each node instead of within the same pod. With this configuration "stuff doesn't work anymore"**.
I realize the solution (at least for now) is to just run the kube-proxy container in any pod that needs it, but I'd like to know why I need to. Why isn't just running it in a daemonset enough?
I've tried to find more information about running kubectl proxy like this, but my search results drown in results about running it to access a remote cluster from a local environment, i.e. not at all what I'm after.
I include these details not because I think they're relevant, but because they might be even though I'm convinced they're not:
*) a Linkerd ingress controller, but I think that's irrelevant
**) in this case, the "working" state is that the ingress controller complains that the destination is unknown because there's no matching ingress rule, while the "not working" state is a network timeout.

namely between running kubectl proxy from within a pod vs running it in a different pod.
Assuming your cluster has a software-defined network, such as flannel or calico, a Pod has its own IP and all containers within a Pod share the same networking space. Thus:
containers:
- name: c0
  command: ["curl", "127.0.0.1:8001"]
- name: c1
  command: ["kubectl", "proxy", "-p", "8001"]
will work, whereas in a DaemonSet, they are by definition not in the same Pod and thus the hypothetical c0 above would need to use the DaemonSet's Pod's IP to contact 8001. That story is made more complicated by the fact that kubectl proxy by default only listens on 127.0.0.1, so you would need to alter the DaemonSet's Pod's kubectl proxy to include --address='0.0.0.0' --accept-hosts='.*' to even permit such cross-Pod communication. I believe you also need to declare the ports: array in the DaemonSet configuration, since you are now exposing that port into the cluster, but I'd have to double-check whether ports: is merely polite, or is actually required.
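For illustration, here is a minimal sketch of what the stand-alone DaemonSet's container spec might look like with those flags added; the ports: entry is my assumption about how you would declare the exposed port, not something taken from your configuration:

containers:
- name: kube-proxy
  image: buoyantio/kubectl:v1.8.5
  args:
  - "proxy"
  - "-p"
  - "8001"
  # listen on all interfaces instead of only 127.0.0.1:
  - "--address=0.0.0.0"
  # accept requests whose Host header is not localhost:
  - "--accept-hosts=.*"
  ports:
  # declare the port so other Pods can reach it via this Pod's IP:
  - containerPort: 8001

Other Pods would then have to target this Pod's IP (or a Service in front of the DaemonSet) on port 8001 instead of 127.0.0.1:8001.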

Related

istio sidecar is not restricting pod connections as desired

I want to see how an istio sidecar may restrict a pod's connections (I am learning istio through its references),
so I am working with the bookinfo example. After installing the example (on Docker Desktop), I wrote a simple Sidecar resource that restricts the connections of ratings to the reviews and details services, as follows:
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: bookinfo-ratings-sidecar
spec:
  workloadSelector:
    labels:
      app: ratings
  egress:
  - hosts:
    - "./details.default.svc.cluster.local"
    - "./reviews.default.svc.cluster.local"
When I run the command istioctl proxy-config clusters ratings-v1-5f9699cfdf-hb2gd I do see that it includes only details.default.svc.cluster.local and reviews.default.svc.cluster.local (from the bookinfo services),
but if I run kubectl exec ratings-v1-5f9699cfdf-hb2gd -- curl -sS productpage:9080 I get an HTML result, i.e. it doesn't refuse the connection to productpage, as if the sidecar doesn't exist.
What am I missing?
(P.S. this question, The result of sidecar injection was not what I expected, didn't help)

how to restrict a pod to connect only to 2 pods using networkpolicy and test connection in k8s in simple way?

Do I still need to expose pod via clusterip service?
There are 3 pods - main, front, api. I need to allow ingress+egress connections to the main pod only from the api and front pods. I also created service-main - a service that exposes the main pod on port 80.
I don't know how to test it, tried:
k exec main -it -- sh
nc -z -v -w 5 service-main 80
and
k exec main -it -- sh
curl front:80
The main.yaml pod:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: main
    item: c18
  name: main
spec:
  containers:
  - image: busybox
    name: main
    command:
    - /bin/sh
    - -c
    - sleep 1d
The front.yaml:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: front
  name: front
spec:
  containers:
  - image: busybox
    name: front
    command:
    - /bin/sh
    - -c
    - sleep 1d
The api.yaml:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: api
  name: api
spec:
  containers:
  - image: busybox
    name: api
    command:
    - /bin/sh
    - -c
    - sleep 1d
The main-to-front-networkpolicy.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: front-end-policy
spec:
  podSelector:
    matchLabels:
      app: main
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: front
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: front
    ports:
    - port: 8080
What am I doing wrong? Do I still need to expose the main pod via a service? But shouldn't the network policy take care of this already?
Also, do I need to write containerPort: 80 in the main pod? How do I test connectivity and ensure ingress/egress works only between the main pod and the api and front pods?
I tried the lab from a CKAD prep course; it had 2 pods: secure-pod and web-pod. There was an issue with connectivity, and the solution was to create a network policy and test it using netcat from inside the web-pod's container:
k exec web-pod -it -- sh
nc -z -v -w 1 secure-service 80
connection open
UPDATE: ideally I want answers to these:
a clear explanation of the difference between a service and a network policy.
If both a service and a network policy exist - what is the order of evaluation that the traffic/request goes through? Does it go through the network policy first and then the service, or vice versa?
If I want the front and api pods to send/receive traffic to main - do I need separate services exposing the front and api pods?
Network policies and services are two different and independent Kubernetes resources.
Service is:
An abstract way to expose an application running on a set of Pods as a network service.
Good explanation from the Kubernetes docs:
Kubernetes Pods are created and destroyed to match the state of your cluster. Pods are nonpermanent resources. If you use a Deployment to run your app, it can create and destroy Pods dynamically.
Each Pod gets its own IP address, however in a Deployment, the set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.
This leads to a problem: if some set of Pods (call them "backends") provides functionality to other Pods (call them "frontends") inside your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the backend part of the workload?
Enter Services.
Also another good explanation in this answer.
For production you should use workload resources instead of creating pods directly:
Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.
Here are some examples of workload resources that manage one or more Pods:
Deployment
StatefulSet
DaemonSet
And use services to make requests to your application.
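To tie this back to your example, here is a minimal sketch of what a service-main for the main pod could look like; the name comes from your question, while the ports are assumptions and targetPort must match whatever port the container actually listens on:

apiVersion: v1
kind: Service
metadata:
  name: service-main
spec:
  selector:
    app: main          # matches the label on the main pod
  ports:
  - port: 80           # port the Service is reachable on
    targetPort: 80     # port the container listens on (8080 in my test below)

With this in place, the front and api pods can reach main at service-main:80, while the network policy still decides whether that traffic is allowed.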
Network policies are used to control traffic flow:
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
Network policies target pods, not services (an abstraction). Check this answer and this one.
Regarding your examples - your network policy is correct (as I tested it below). The problem is that your cluster may not be compatible:
For Network Policies to take effect, your cluster needs to run a network plugin which also enforces them. Project Calico or Cilium are plugins that do so. This is not the default when creating a cluster!
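A rough way to check whether such a plugin is running (assuming it is deployed as pods in the kube-system namespace, which is the usual case):

user@shell:~$ kubectl get pods -n kube-system | grep -Ei 'calico|cilium|weave|flannel'

If this returns nothing, network policies are most likely being silently ignored on your cluster.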
Test on a kubeadm cluster with the Calico plugin -> I created similar pods as you did, but I changed the container part:
spec:
  containers:
  - name: main
    image: nginx
    command: ["/bin/sh","-c"]
    args: ["sed -i 's/listen .*/listen 8080;/g' /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
    ports:
    - containerPort: 8080
So the NGINX app is available on port 8080.
Let's check the pods' IPs:
user@shell:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
api 1/1 Running 0 48m 192.168.156.61 example-ubuntu-kubeadm-template-2 <none> <none>
front 1/1 Running 0 48m 192.168.156.56 example-ubuntu-kubeadm-template-2 <none> <none>
main 1/1 Running 0 48m 192.168.156.52 example-ubuntu-kubeadm-template-2 <none> <none>
Let's exec into the running main pod and try to make a request to the api pod:
root@main:/# curl 192.168.156.61:8080
<!DOCTYPE html>
...
<title>Welcome to nginx!</title>
It is working.
After applying your network policy:
user@shell:~$ kubectl apply -f main-to-front.yaml
networkpolicy.networking.k8s.io/front-end-policy created
user@shell:~$ kubectl exec -it main -- bash
root@main:/# curl 192.168.156.61:8080
...
It is not working anymore, which means the network policy was applied successfully.
A nice option to get more information about an applied network policy is to run the kubectl describe command:
user@shell:~$ kubectl describe networkpolicy front-end-policy
Name:         front-end-policy
Namespace:    default
Created on:   2022-01-26 15:17:58 +0000 UTC
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     app=main
  Allowing ingress traffic:
    To Port: 8080/TCP
    From:
      PodSelector: app=front
  Allowing egress traffic:
    To Port: 8080/TCP
    To:
      PodSelector: app=front
  Policy Types: Ingress, Egress
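To also check the ingress direction, a similar test can be run from the front pod against main's IP (192.168.156.52 in the output above), assuming curl is available in that container; under this policy the request from front should still succeed, while the same request from the api pod should time out:

user@shell:~$ kubectl exec -it front -- curl 192.168.156.52:8080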

How to auto update /etc/hosts file entries inside running Pod without entering the pod

How can we auto-update (delete, create, change) entries in the /etc/hosts file of a running Pod without actually entering the pod?
We are working on containerisation of an SAP application server and have so far succeeded in achieving this using Kubernetes.
apiVersion: v1
kind: Pod
spec:
  hostNetwork: true
Since we are using the host network approach, all entries of our VM's /etc/hosts file are getting copied whenever a new pod is created.
However, once the pod has been created and is in a running state, any changes to the VM's /etc/hosts file are not getting transferred to the already running pod.
We would like to achieve this for our project requirement.
Kubernetes does have several different ways of affecting name resolution; your request is most similar to here and related pages.
Here is an extract, emphasis mine.
Adding entries to a Pod’s /etc/hosts file provides Pod-level override of hostname resolution when DNS and other options are not applicable. In 1.7, users can add these custom entries with the HostAliases field in PodSpec.
Modification not using HostAliases is not suggested because the file is managed by the kubelet and can be overwritten during Pod creation/restart.
An example Pod specification using HostAliases is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"
One issue here is that you will need to update and restart the Pods with a new set of HostAliases if your network IPs change. That might cause downtime in your system.
Are you sure you need this mechanism and not a service that points to an external endpoint?
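For comparison, here is a minimal sketch of that alternative: a Service without a selector backed by a manually managed Endpoints object, so Pods address the external host by a stable DNS name instead of an /etc/hosts entry. The name sap-backend, the IP and the port below are purely illustrative:

apiVersion: v1
kind: Service
metadata:
  name: sap-backend      # illustrative name
spec:
  ports:
  - port: 3200           # illustrative port
---
apiVersion: v1
kind: Endpoints
metadata:
  name: sap-backend      # must match the Service name
subsets:
- addresses:
  - ip: 10.1.2.3         # the external endpoint's IP (illustrative)
  ports:
  - port: 3200

When the external IP changes, only the Endpoints object needs to be re-applied; running Pods keep using the same sap-backend name and never need a restart.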

How to access the service ip in kubernetes by name?

Say I have a rabbitmq service as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-rabbitmq
spec:
  ports:
  - port: 6379
  selector:
    app: my-rabbitmq
And I have another deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: A-worker
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: a-worker    # pod template labels so the Deployment can manage its pods
    spec:
      containers:
      - name: a-worker
        image: worker-image
        ports:
        - containerPort: 80
        env:
        - name: rabbitmq_url
          value: XXXXXXXXXXXXX
Is there any way to set the service IP as an environment variable in my second deployment by some kind of selector? In other words, what should go in value: XXXXXXXXXX in the second deployment YAML? (Note: I know I can get the service IP with kubectl get services, but I'd like to know how to set this by the service name or label.) Any advice is welcome!
Kubernetes injects environment variables for a service's host, port and protocol, among others, into pod containers (see this doc).
kubectl exec <pod> printenv is one way to check which env variables are set.
If the service is created after the pod, the env var may not be present, so killing (restarting) the pod is one way to make sure the new environment variables are populated.
The convention is typically uppercase <SERVICE_NAME>_SERVICE_HOST, with dashes in the service name converted to underscores.
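For the my-rabbitmq Service from the question, the injected variables inside a pod started after the Service would look roughly like this (the ClusterIP value is illustrative):

MY_RABBITMQ_SERVICE_HOST=10.0.0.11
MY_RABBITMQ_SERVICE_PORT=6379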
You can set it explicitly in a pod spec using the following syntax.
- name: rabbitmq_url
  value: $(MY_RABBITMQ_SERVICE_HOST)
Bear in mind the variable is already injected by k8s and this is just aliasing it. You may want to update the reference in your application layer/script to use the k8s-injected environment variable for the service.
Reading between the lines (and I hope this helps):
K8s automatically creates service environment variables for you inside each pod. See https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables for details.
The other route is to enable kube-dns, in which case one can contact a service simply by using the service name.
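For example, assuming cluster DNS is enabled and the Service lives in the default namespace, the env entry from the question could simply point at the Service's DNS name (a sketch, not the only way to write it):

env:
- name: rabbitmq_url
  # resolved by cluster DNS to the Service's ClusterIP at connection time
  value: my-rabbitmq.default.svc.cluster.local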

Kube-dns does not resolve external hosts on kubeadm bare-metal cluster

I've got a k8s cluster set up on bare-metal Ubuntu 16.04 using Weave networking with kubeadm. I'm having a variety of little problems, the most recent of which is that I realized that kube-dns does not resolve external addresses (e.g. google.com). Any thoughts on why? Using kubeadm did not give me a lot of insight into the details of that part of the setup.
The issue turned out to be that a node-level firewall was interfering with the cluster networking. So there was no issue with the DNS setup.
I had the same issue on kubernetes v1.6 and it was not a firewall issue in my case.
The problem was that I had configured DNS manually in /etc/docker/daemon.json, and these parameters are not used by kube-dns. Instead, you need to create a ConfigMap for kube-dns (pull request here and documentation here), as follows:
Solution
Create a YAML file for the ConfigMap, for example kubedns-configmap.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["<own-dns-ip>"]
And simply apply it to Kubernetes with
kubectl apply -f kubedns-configmap.yml
Test 1
On your kubernetes host node:
dig @10.96.0.10 google.com
Test 2
To test it I use a busybox image with the following resource configuration (busybox.yml):
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  # for arm
  #- image: hypriot/armhf-busybox
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
Apply the resource with
kubectl apply -f busybox.yml
And test it with the following:
kubectl exec -it busybox -- ping google.com