istio sidecar is not restricting pod connections as desired - kubernetes

I want to see how an Istio sidecar can restrict a pod's connections (I am learning Istio from its reference docs),
so I am working with the Bookinfo example. After installing the example (on Docker Desktop), I wrote a simple Sidecar resource that restricts the connections of ratings to the reviews and details services, as follows:
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: bookinfo-ratings-sidecar
spec:
  workloadSelector:
    labels:
      app: ratings
  egress:
  - hosts:
    - "./details.default.svc.cluster.local"
    - "./reviews.default.svc.cluster.local"
When I run the command istioctl proxy-config clusters ratings-v1-5f9699cfdf-hb2gd, I do see that it only includes details.default.svc.cluster.local and reviews.default.svc.cluster.local (of the Bookinfo services),
but if I run kubectl exec ratings-v1-5f9699cfdf-hb2gd -- curl -sS productpage:9080 I still get an HTML result, i.e. the connection to productpage is not refused, as if the Sidecar doesn't exist.
What am I missing?
(P.S. This related question, "The result of sidecar injection was not what I expected", didn't help.)
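(One thing that may be worth checking — an assumption about the likely cause, not a confirmed diagnosis: a Sidecar resource only trims the clusters the proxy knows about, and with the mesh-default ALLOW_ANY outbound traffic policy, requests to hosts outside the egress list are still forwarded through the passthrough cluster. Setting the outbound traffic policy to REGISTRY_ONLY on the Sidecar makes the proxy reject such hosts, roughly like this:
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: bookinfo-ratings-sidecar
spec:
  workloadSelector:
    labels:
      app: ratings
  egress:
  - hosts:
    - "./details.default.svc.cluster.local"
    - "./reviews.default.svc.cluster.local"
  # REGISTRY_ONLY rejects traffic to hosts not in the egress list above;
  # the default ALLOW_ANY passes it through instead.
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
)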

Related

Control the pod domain

I have pod1 and pod2 in the same namespace.
pod2 is running an HTTP server.
How can I easily get pod2 to be seen as pod2.mydomain.com from pod1?
In this way the HTTPS certificate would work with no problem.
This should be achievable within the Kubernetes cluster, and it is important that a valid SSL certificate can be used.
By using a Kubernetes Service: you can create a Service of type “ClusterIP” or “NodePort” in the same namespace as the pods to expose the pod2 HTTP server at a consistent IP and port. You can then configure your DNS to map pod2.mydomain.com to the IP address of the Service.
Or, if your cluster supports load balancing, you can create a Service of type “LoadBalancer” to expose the pod2 HTTP server on a public IP.
You can also create an Ingress resource that routes traffic to pod2 based on the hostname.
For more information, please check this official document.
You can hit the pod directly by its IP as you are thinking, but it is better to put a Service in front of the pod.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 80
      name: http-web-svc
---
apiVersion: v1
kind: Service
metadata:
  name: service-1
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: http-web-svc
The Service routes traffic to the pods matching its label selector, so from pod1 you send the request to service-1, which forwards the traffic to the nginx pod (pod2 in your case) and returns the response.
https://service-1.<namespace-name>.svc.cluster.local
With a Service, HTTPS will also work the way you asked. Test it with curl: start a curl pod,
kubectl run mycurlpod --image=curlimages/curl -i --tty -- sh
then send a curl request to the service:
curl https://service-1.<namespace-name>.svc.cluster.local
Service reference doc: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
Extra:
You can also use an Ingress and a service mesh, which would simplify your scenario a little if you don't want to manage SSL/TLS certificates in the app.
A service mesh supports mTLS authentication; you can enforce the policy and it would be easy, although there is extra management overhead.
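As a rough illustration of the mTLS option (a sketch assuming Istio as the mesh and the pods living in the default namespace; the policy name is arbitrary), a namespace-wide PeerAuthentication in STRICT mode forces workloads to accept only mutual-TLS traffic:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  # STRICT means incoming connections must use Istio mutual TLS
  mtls:
    mode: STRICT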

Prometheus returns error context deadline exceeded

I deployed Prometheus with a Helm chart from Rancher. Targets such as Alertmanager, Prometheus, Grafana, Node-exporter, Kubelet etc. are configured automatically. The Alertmanager endpoint, for example, refers to the IP address of its specific pod. I also successfully configured additional targets like Jira and Confluence.
Since the service external-dns is running in the namespace kube-system, it is also configured automatically. But only this service gets the error Context deadline exceeded.
I checked from a random pod whether those metrics are accessible by running curl -s http://<IP-ADDRESS-POD>:7979/metrics. I also did this with the service IP address (kubectl get service external-dns and curl -s http://<IP-ADDRESS-SVC>:7979/metrics).
Both of these curl commands returned the metrics within a second, so increasing the scrape timeout won't help.
But when I exec into the Prometheus container and use the promtool debug metrics command, it shows the same behaviour as in my browser: external-dns returns a timeout with both IP addresses, while any other target just returns the metrics.
I also don't think it's an SSL issue, because I already injected the correct CA bundle for the Jira and Confluence targets.
So, does anybody have an idea? :)
I had to edit the NetworkPolicy in the kube-system namespace. The containers from the cattle-monitoring-system namespace are now allowed to access the containers in the kube-system namespace. You can upload your NetworkPolicies here and it visualizes which resources have access and which do not. The NetworkPolicy now looks like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-network-policy
  namespace: kube-system
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: cattle-monitoring-system
  - from:
    - podSelector: {}
  podSelector: {}
  policyTypes:
  - Ingress
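One caveat worth noting (an assumption based on how namespaceSelector works, not part of the original answer): the selector above matches namespaces carrying the label name: cattle-monitoring-system, so that label must actually be present on the namespace, e.g.:
kubectl label namespace cattle-monitoring-system name=cattle-monitoring-system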

Kubernetes - Pass Public IP of Load Balance as Environment Variable into Pod

Gist
I have a ConfigMap which provides necessary environment variables to my pods:
apiVersion: v1
kind: ConfigMap
metadata:
  name: global-config
data:
  NODE_ENV: prod
  LEVEL: info
  # I need to set API_URL to the public IP address of the Load Balancer
  API_URL: http://<SOME IP>:3000
  DATABASE_URL: mongodb://database:27017
  SOME_SERVICE_HOST: some-service:3000
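(For context, a minimal sketch of how a pod might consume this ConfigMap as environment variables; the pod name, container name, and image are only assumptions for illustration:)
apiVersion: v1
kind: Pod
metadata:
  name: gateway
spec:
  containers:
  - name: gateway
    image: example/gateway:latest   # hypothetical image
    envFrom:
    - configMapRef:
        name: global-config         # every key becomes an environment variable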
I am running my Kubernetes Cluster on Google Cloud, so it will automatically create a public endpoint for my service:
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
    nodePort: 30000
  type: LoadBalancer
Issue
I have a web application that needs to make HTTP requests from the client's browser to the gateway service. But in order to make a request to this external service, the web app needs to know its IP address.
So I've set up the pod which serves the web application in such a way that it picks up an environment variable API_URL and, as a result, makes all HTTP requests to this URL.
So I just need a way to set the API_URL environment variable to the public IP address of the gateway service and pass it into the pod when it starts.
I know this isn't the exact approach you were going for, but I've found that creating a static IP address and explicitly passing it in tends to be easier to work with.
First, create a static IP address:
gcloud compute addresses create gke-ip --region <region>
where region is the GCP region your GKE cluster is located in.
Then you can get your new IP address with:
gcloud compute addresses describe gke-ip --region <region>
Now you can add your static IP address to your service by specifying an explicit loadBalancerIP.1
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
    nodePort: 30000
  type: LoadBalancer
  loadBalancerIP: "1.2.3.4"
At this point, you can also hard-code it into your ConfigMap and not worry about grabbing the value from the cluster itself.
1 If you've already created a LoadBalancer with an auto-assigned IP address, setting an IP address won't change the IP of the underlying GCP load balancer. Instead, you should delete the LoadBalancer service in your cluster, wait ~15 minutes for the underlying GCP resources to get cleaned up, and then recreate the LoadBalancer with the explicit IP address.
You are trying to access the gateway service from the client's browser.
I would like to suggest another solution that is slightly different from what you are currently trying to achieve,
but it can solve your problem.
From your question I was able to deduce that your web app and gateway app are in the same cluster.
In my solution you don't need a Service of type LoadBalancer; a basic Ingress is enough to make it work.
You only need to create a Service object (notice that the option type: LoadBalancer is now gone):
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
You also need an Ingress object (remember that an Ingress controller needs to be deployed to the cluster in order to make it work), like the one below.
You can find more on how to deploy the Nginx Ingress controller here,
and if you are already using one (maybe a different one), then you can skip this step.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: gateway.foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: gateway
          servicePort: 3000
Notice the host field.
You need to repeat the same for your web application. Remember to use an appropriate host name (DNS name),
e.g. foo.bar.com for the web app and gateway.foo.bar.com for the gateway,
and then just use the gateway.foo.bar.com DNS name to connect to the gateway app from the client's web browser.
You also need to create a DNS entry that points *.foo.bar.com to the Ingress's public IP address,
as the Ingress controller will create its own load balancer.
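(As an aside, a common way to look up that public IP, assuming a standard ingress-nginx installation whose controller Service is called ingress-nginx-controller in the ingress-nginx namespace, is:
kubectl get service ingress-nginx-controller -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Adjust the Service name and namespace to match your actual controller deployment.)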
The flow of traffic would be like below:
+-------------+ +---------+ +-----------------+ +---------------------+
| Web Browser |-->| Ingress |-->| gateway Service |-->| gateway application |
+-------------+ +---------+ +-----------------+ +---------------------+
This approach is better because it won't cause issues with Cross-Origin Resource Sharing (CORS) in the client's browser.
I took the examples of Ingress and Service manifests from the official Kubernetes documentation and modified them slightly.
You can find more on Ingress here
and on Services here.
The following deployment reads the external IP of a given service using kubectl every 10 seconds and patches a given configmap with it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configmap-updater
  labels:
    app: configmap-updater
spec:
  selector:
    matchLabels:
      app: configmap-updater
  template:
    metadata:
      labels:
        app: configmap-updater
    spec:
      containers:
      - name: configmap-updater
        image: alpine:3.10
        command: ['sh', '-c']
        args:
        - |
          #!/bin/sh
          set -x
          apk --update add curl
          curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl
          chmod +x kubectl
          export CONFIGMAP="configmap/global-config"
          export SERVICE="service/gateway"
          while true
          do
            # read the external IP of the service...
            IP=`./kubectl get $SERVICE -o go-template --template='{{ (index .status.loadBalancer.ingress 0).ip }}'`
            PATCH=`printf '{"data":{"API_URL": "https://%s:3000"}}' $IP`
            echo ${PATCH}
            # ...and patch it into the configmap
            ./kubectl patch --type=merge -p "${PATCH}" $CONFIGMAP
            sleep 10
          done
You probably have RBAC enabled in your GKE cluster and would still need to create the appropriate Role and RoleBinding for this to work correctly.
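A rough sketch of what that RBAC might look like (the Role name, the default namespace, and the use of the default ServiceAccount are assumptions for illustration, not something from the original answer):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-updater
  namespace: default
rules:
# read the service to learn its load-balancer IP
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get"]
# patch the configmap with the discovered IP
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: configmap-updater
  namespace: default
subjects:
- kind: ServiceAccount
  name: default              # or a dedicated ServiceAccount for the Deployment
  namespace: default
roleRef:
  kind: Role
  name: configmap-updater
  apiGroup: rbac.authorization.k8s.io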
You've got a few possibilities:
If you really need this to be hacked into your setup, you could use a similar approach with a sidecar container in your pod or a global service like the one above. Keep in mind that you would need to recreate your pods if the ConfigMap actually changes, for the new values to be picked up by the environment variables of your containers.
Watch and query the Kubernetes API for the external IP directly in your application, eliminating the need for an environment variable.
Adapt your applications to not depend on the external IP directly.

Running kubectl proxy from same pod vs different pod on same node - what's the difference?

I'm experimenting with this, and I'm noticing a difference in behavior that I'm having trouble understanding, namely between running kubectl proxy from within a pod vs running it in a different pod.
The sample configuration runs kubectl proxy and the container that needs it* in the same pod in a DaemonSet, i.e.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  # ...
spec:
  template:
    metadata:
      # ...
    spec:
      containers:
      # this container needs kubectl proxy to be running:
      - name: l5d
        # ...
      # so, let's run it:
      - name: kube-proxy
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"
When doing this on my cluster, I get the expected behavior. However, I will run other services that also need kubectl proxy, so I figured I'd factor that out into its own DaemonSet to ensure it's running on all nodes. I thus removed the kube-proxy container and deployed the following DaemonSet:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-proxy
  labels:
    app: kube-proxy
spec:
  template:
    metadata:
      labels:
        app: kube-proxy
    spec:
      containers:
      - name: kube-proxy
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"
In other words, the same container configuration as previously, but now running in independent pods on each node instead of within the same pod. With this configuration "stuff doesn't work anymore"**.
I realize the solution (at least for now) is to just run the kube-proxy container in any pod that needs it, but I'd like to know why I need to. Why isn't just running it in a daemonset enough?
I've tried to find more information about running kubectl proxy like this, but my search results drown in results about running it to access a remote cluster from a local environment, i.e. not at all what I'm after.
I include these details not because I think they're relevant, but because they might be even though I'm convinced they're not:
*) a Linkerd ingress controller, but I think that's irrelevant
**) in this case, the "working" state is that the ingress controller complains that the destination is unknown because there's no matching ingress rule, while the "not working" state is a network timeout.
namely between running kubectl proxy from within a pod vs running it in a different pod.
Assuming your cluster has a software-defined network, such as flannel or calico, a Pod has its own IP and all containers within a Pod share the same networking namespace. Thus:
containers:
- name: c0
  command: ["curl", "127.0.0.1:8001"]
- name: c1
  command: ["kubectl", "proxy", "-p", "8001"]
will work, whereas in a DaemonSet, they are by definition not in the same Pod and thus the hypothetical c0 above would need to use the DaemonSet's Pod's IP to contact 8001. That story is made more complicated by the fact that kubectl proxy by default only listens on 127.0.0.1, so you would need to alter the DaemonSet's Pod's kubectl proxy to include --address='0.0.0.0' --accept-hosts='.*' to even permit such cross-Pod communication. I believe you also need to declare the ports: array in the DaemonSet configuration, since you are now exposing that port into the cluster, but I'd have to double-check whether ports: is merely polite, or is actually required.
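To make that concrete, here is a sketch of how the standalone DaemonSet might be adjusted (treat the exact flag values and the ports: entry as assumptions to verify, not a confirmed fix):
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-proxy
  labels:
    app: kube-proxy
spec:
  template:
    metadata:
      labels:
        app: kube-proxy
    spec:
      containers:
      - name: kube-proxy
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"
        # listen on all interfaces instead of only 127.0.0.1 ...
        - "--address=0.0.0.0"
        # ... and accept requests arriving with a non-local Host header
        - "--accept-hosts=.*"
        ports:
        - containerPort: 8001
          name: http-proxy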

Kubernetes NetworkPolicy allow loadbalancer

I have a Kubernetes cluster running on Google Kubernetes Engine (GKE) with network policy support enabled.
I created an nginx deployment and load balancer for it:
kubectl run nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
Then I created this network policy to make sure other pods in the cluster won't be able to connect to it anymore:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: TCP
      port: 80
Now other pods in my cluster can't reach it (as intended):
kubectl run busybox --rm -ti --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (10.63.254.50:80)
wget: download timed out
However, it surprised me that I also can no longer connect to it from my external browser through the load balancer:
open http://$(kubectl get svc nginx --output=jsonpath={.status.loadBalancer.ingress[0].ip})
If I delete the policy it starts to work again.
So, my question is: how do I block other pods from reaching nginx, but keep access through the load balancer open?
I talked about this in my Network Policy recipes repository.
"Allowing EXTERNAL load balancers while DENYING local traffic" is not a use case that makes sense, therefore it's not possible to using network policy.
For Service type=LoadBalancer and Ingress resources to work, you must allow ALL traffic to the pods selected by these resources.
If you REALLY want to, you can use the from.ipBlock.cidr and from.ipBlock.except fields to allow traffic from 0.0.0.0/0 (all IPv4) while excluding 10.0.0.0/8 (or whatever private IP range GKE uses).
I recently had to do something similar. I needed a policy that didn't allow pods from other namespaces to talk to prod, but did allow the LoadBalancer services to reach pods in prod. Here's what worked (based on Ahmet's post):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-prod
  namespace: prod
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
  - from:
    - ipBlock:
        cidr: '0.0.0.0/0'
        except: ['10.0.0.0/8']
I'd like to share a solution that builds on the excellent answers of #ahmetb-google and #tammer-saleh.
The situation: 1 cluster, 4 namespaces, a public HTTPS-terminating Ingress for 3 of the namespaces that allows specific traffic inbound and routes it appropriately.
Goal: Block all inter-namespace traffic, allow only public traffic coming in via the Ingress.
Problem: When deploying a "deny from other namespaces" rule, it also denied traffic from my Ingress, so the pods were not accessible from the outside.
Solution:
I created an additional policy that allows only port 80 traffic targeting pods with the label role=web. It uses the allow/except trick to still block traffic from other namespaces but allow it from the public ingresses.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-http-from-ingress
spec:
  podSelector:
    matchLabels:
      role: web
  ingress:
  - from:
    - ipBlock:
        cidr: '0.0.0.0/0'
        except: ['10.0.0.0/8']
    ports:
    - port: 80
With this deployed, traffic still flows from the public, via the Ingress, to the web-serving pods. All inter-namespace traffic is blocked, including HTTP.
This is a useful setup when, for example, you're using namespaces for different deployment stages (production, testing, edge, etc.) and you have private HTTP APIs that you would not want to accidentally hit cross-stage.