I have a Kubernetes cluster running on Google Kubernetes Engine (GKE) with network policy support enabled.
I created an nginx deployment and load balancer for it:
kubectl run nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
Then I created this network policy to make sure other pods in the cluster won't be able to connect to it anymore:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: TCP
      port: 80
Now other pods in my cluster can't reach it (as intended):
kubectl run busybox --rm -ti --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (10.63.254.50:80)
wget: download timed out
However, I was surprised to find that I can no longer connect to it from my external browser through the load balancer either:
open http://$(kubectl get svc nginx --output=jsonpath={.status.loadBalancer.ingress[0].ip})
If I delete the policy it starts to work again.
So, my question is: how do I block other pods from reaching nginx, but keep access through the load balancer open?
I talked about this in my Network Policy recipes repository.
"Allowing EXTERNAL load balancers while DENYING local traffic" is not a use case that makes sense, therefore it's not possible to using network policy.
For Service type=LoadBalancer and Ingress resources to work, you must allow ALL traffic to the pods selected by these resources.
If you REALLY want, you can use the from.ipBlock.cidr and from.ipBlock.cidr.except fields to allow traffic from 0.0.0.0/0 (all IPv4) and then exclude 10.0.0.0/8 (or whatever private IP range GKE uses).
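Applied to the nginx example above, a minimal sketch of that trick might look like this (it assumes the cluster's internal pod and node ranges fall within 10.0.0.0/8, so verify your GKE cluster's actual ranges first):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx-external-only
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    # allow all source IPs except the (assumed) internal range
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
    ports:
    - protocol: TCP
      port: 80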
I recently had to do something similar. I needed a policy that didn't allow pods from other namespaces to talk to prod, but did allow the LoadBalancer services to reach pods in prod. Here's what worked (based on Ahmet's post):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-prod
  namespace: prod
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
  - from:
    - ipBlock:
        cidr: '0.0.0.0/0'
        except: ['10.0.0.0/8']
I'd like to share a solution that builds on the excellent answers of #ahmetb-google and #tammer-saleh.
The situation: 1 cluster, 4 namespaces, a public HTTPS-terminating Ingress for 3 of the namespaces that allows specific traffic inbound and routes it appropriately.
Goal: Block all inter-namespace traffic, allow only public traffic coming in via the Ingress.
Problem: When I deployed a "deny from other namespaces" rule, it also denied traffic from my Ingress, so the pods were not accessible from the outside.
Solution:
I created an additional policy that allows only port 80 traffic targeting pods with the label role=web. It uses the allow/except trick to still block traffic from other namespaces but allow it from the public ingresses.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-http-from-ingress
spec:
  podSelector:
    matchLabels:
      role: web
  ingress:
  - from:
    - ipBlock:
        cidr: '0.0.0.0/0'
        except: ['10.0.0.0/8']
    ports:
    - port: 80
With this deployed, traffic still flows from the public, via the Ingress, to the web-serving pods. All inter-namespace traffic is blocked, including HTTP.
This is a useful setup when, for example, you're using namespaces for different deployment stages (production, testing, edge, etc.) and you have private HTTP APIs that you don't want to hit accidentally across stages.
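For reference, the "deny from other namespaces" policy mentioned above is typically written like this (a sketch, applied in each namespace you want to isolate):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  # select every pod in the namespace
  podSelector: {}
  ingress:
  - from:
    # allow traffic only from pods in the same namespace
    - podSelector: {}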
I have pod1 and pod2 in the same namespace.
pod2 is running an HTTP server.
How can I easily get pod2 to be seen as pod2.mydomain.com from pod1?
That way the HTTPS certificate would work without problems.
It can be achieved within the Kubernetes cluster; what's important is to use a valid SSL certificate.
Using a Kubernetes Service, you can create a Service of type ClusterIP or NodePort in the same namespace as the pods to expose pod2's HTTP server at a consistent IP and port. You can then configure your DNS to map pod2.mydomain.com to the IP address of that service.
Or, if your cluster supports load balancing, you can create a Service of type LoadBalancer to expose pod2's HTTP server on a public IP.
You can also create an Ingress that routes traffic to pod2 based on the hostname, as sketched below.
For more information, please check the official documentation.
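As a sketch, the Ingress approach might look like this (the Service name pod2-service is a hypothetical placeholder for whatever Service fronts pod2):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pod2-ingress
spec:
  rules:
  - host: pod2.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: pod2-service
            port:
              number: 80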
You can hit the pod directly with its IP, as you're thinking, but it's better to use a Service in front of the pod.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 80
      name: http-web-svc
---
apiVersion: v1
kind: Service
metadata:
  name: service-1
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: http-web-svc
The Service routes traffic to pods with matching labels, so from pod1 you send the request to service-1, which forwards the traffic to pod2 and returns the response. The service is reachable at:
https://service-1.<namespace-name>.svc.cluster.local
With a Service, HTTPS will also work the way you asked. Test it with the curl command; start a curl pod:
kubectl run mycurlpod --image=curlimages/curl -i --tty -- sh
then send a curl request to the service:
curl https://service-1.<namespace-name>.svc.cluster.local
Service reference doc: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
Extra:
You can also use an Ingress or a service mesh, which can make your scenario a little simpler if you don't want to manage SSL/TLS certificates for the app.
A service mesh supports mTLS auth and lets you enforce policy, which makes this easy, though it adds extra management.
I have a Mosquitto broker running in a pod; this server is exposed as a service with both a DNS name and an IP address.
But this service is accessible by any pod in the cluster.
I want to restrict access to this service such that pods trying to connect to this DNS or IP address should only be able to if the pods have certain name/metadata.
One solution, I guess, would be to use namespaces? What other solutions are there?
The use case you are describing is exactly what NetworkPolicies are for.
Basically, you define a selector for the pods whose incoming traffic should be restricted (i.e. your Mosquitto broker) and specify what other pods need to have in order to be allowed to reach it, for example a label broker-access: "true" or whatever seems suitable to you.
An example network policy could look like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: broker-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: message-broker
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          broker-access: "true"
    ports:
    - protocol: TCP
      port: 6379
This network policy would be applied to every pod with the label role=message-broker,
and it would restrict all incoming traffic except traffic on port 6379 from pods with the label broker-access=true.
Hope this helps and gives you a bit of a scaffold for your NetworkPolicy.
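To try it out, a client pod would then need that label, for example (a sketch):

kubectl run client --rm -ti --image=busybox --labels="broker-access=true" -- /bin/sh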
I have a ConfigMap which provides necessary environment variables to my pods:
apiVersion: v1
kind: ConfigMap
metadata:
  name: global-config
data:
  NODE_ENV: prod
  LEVEL: info
  # I need to set API_URL to the public IP address of the Load Balancer
  API_URL: http://<SOME IP>:3000
  DATABASE_URL: mongodb://database:27017
  SOME_SERVICE_HOST: some-service:3000
I am running my Kubernetes Cluster on Google Cloud, so it will automatically create a public endpoint for my service:
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
    nodePort: 30000
  type: LoadBalancer
Issue
I have a web application that needs to make HTTP requests from the client's browser to the gateway service. But in order to make a request to the external service, the web app needs to know its IP address.
So I've set up the pod that serves the web application in such a way that it picks up an environment variable API_URL and makes all HTTP requests to that URL.
So I just need a way to set the API_URL environment variable to the public IP address of the gateway service to pass it into a pod when it starts.
I know this isn't the exact approach you were going for, but I've found that creating a static IP address and explicitly passing it in tends to be easier to work with.
First, create a static IP address:
gcloud compute addresses create gke-ip --region <region>
where region is the GCP region your GKE cluster is located in.
Then you can get your new IP address with:
gcloud compute addresses describe gke-ip --region <region>
Now you can add your static IP address to your service by specifying an explicit loadBalancerIP.1
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
    nodePort: 30000
  type: LoadBalancer
  loadBalancerIP: "1.2.3.4"
At this point, you can also hard-code it into your ConfigMap and not worry about grabbing the value from the cluster itself.
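For example, with the placeholder address above, the ConfigMap from the question could simply become:

apiVersion: v1
kind: ConfigMap
metadata:
  name: global-config
data:
  NODE_ENV: prod
  LEVEL: info
  API_URL: http://1.2.3.4:3000
  DATABASE_URL: mongodb://database:27017
  SOME_SERVICE_HOST: some-service:3000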
1If you've already created a LoadBalancer with an auto-assigned IP address, setting an IP address won't change the IP of the underlying GCP load balancer. Instead, you should delete the LoadBalancer service in your cluster, wait ~15 minutes for the underlying GCP resources to get cleaned up, and then recreate the LoadBalancer with the explicit IP address.
You are trying to access the gateway service from the client's browser.
I would like to suggest another solution that is slightly different from what you are currently trying to achieve,
but which can solve your problem.
From your question I was able to deduce that your web app and gateway app are in the same cluster.
In my solution you don't need a Service of type LoadBalancer; a basic Ingress is enough to make it work.
You only need to create a Service object (notice that the type: LoadBalancer option is now gone):
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
You also need an Ingress object (remember that an Ingress Controller needs to be deployed to the cluster in order for this to work), like the one below.
More on how to deploy the Nginx Ingress controller you can find here;
if you are already using one (maybe a different one), then you can skip this step.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: gateway.foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: gateway
          servicePort: 3000
Notice the host field.
You need to repeat the same for your web application. Remember to use an appropriate host name (DNS name),
e.g. for web app: foo.bar.com and for gateway: gateway.foo.bar.com
and then just use the gateway.foo.bar.com dns name to connect to the gateway app from clients web browser.
You also need to create a DNS entry that points *.foo.bar.com to the Ingress's public IP address,
as the Ingress controller will create its own load balancer.
The flow of traffic would be like below:
+-------------+ +---------+ +-----------------+ +---------------------+
| Web Browser |-->| Ingress |-->| gateway Service |-->| gateway application |
+-------------+ +---------+ +-----------------+ +---------------------+
This approach is better because it won't cause Cross-Origin Resource Sharing (CORS) issues in the client's browser.
The example Ingress and Service manifests I took from the official Kubernetes documentation and modified slightly.
More on Ingress you can find here,
and on Services here.
The following deployment reads the external IP of a given service using kubectl every 10 seconds and patches a given configmap with it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configmap-updater
  labels:
    app: configmap-updater
spec:
  selector:
    matchLabels:
      app: configmap-updater
  template:
    metadata:
      labels:
        app: configmap-updater
    spec:
      containers:
      - name: configmap-updater
        image: alpine:3.10
        command: ['sh', '-c']
        args:
        - |
          #!/bin/sh
          set -x
          apk --update add curl
          curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl
          chmod +x kubectl
          export CONFIGMAP="configmap/global-config"
          export SERVICE="service/gateway"
          while true
          do
            IP=`./kubectl get $SERVICE -o go-template --template='{{ (index .status.loadBalancer.ingress 0).ip }}'`
            PATCH=`printf '{"data":{"API_URL": "http://%s:3000"}}' $IP`
            echo ${PATCH}
            ./kubectl patch --type=merge -p "${PATCH}" $CONFIGMAP
            sleep 10
          done
You probably have RBAC enabled in your GKE cluster and would still need to create the appropriate Role and RoleBinding for this to work correctly.
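A sketch of those RBAC objects (the names are illustrative, and it assumes the Deployment runs as the default service account in the default namespace):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-updater
  namespace: default
rules:
# read the service to discover its external IP
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get"]
# patch the configmap with the discovered IP
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: configmap-updater
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: configmap-updater
subjects:
- kind: ServiceAccount
  name: default
  namespace: default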
You've got a few possibilities:
If you really need this to be hacked into your setup, you could use a similar approach with a sidecar container in your pod, or a global service like the one above. Keep in mind that you would need to recreate your pods if the ConfigMap actually changed, for the changes to be picked up by your containers' environment variables.
Watch and query the Kubernetes API for the external IP directly in your application, eliminating the need for an environment variable.
Adapt your applications to not depend directly on the external IP.
Is it possible to expose Kubernetes service using port 443/80 on-premise?
I know some ways to expose services in Kubernetes:
1. NodePort - Default port range is 30000 - 32767, so we cannot access the service using 443/80. Changing the port range is risky because of port conflicts, so it is not a good idea.
2. Host network - Force the pod to use the host's network instead of a dedicated network namespace. Not a good idea because we lose kube-dns, etc.
3. Ingress - AFAIK it uses NodePort (So we face with the first problem again) or a cloud provider LoadBalancer. Since we use Kubernetes on premise we cannot use this option.
MetalLB, which allows you to create Kubernetes services of type LoadBalancer in clusters that don't run on a cloud provider, is not yet stable enough.
Do you know any other way to expose service in Kubernetes using port 443/80 on-premise?
I'm looking for a "Kubernetes solution"; Not using external cluster reverse proxy.
Thanks.
IMHO ingress is the best way to do this on prem.
We run the nginx-ingress-controller as a daemonset with each controller bound to ports 80 and 443 on the host network. Nearly 100% of traffic to our clusters comes in on 80 or 443 and is routed to the right service by ingress rules.
Per app, you just need a DNS record mapping your hostname to your cluster's nodes, and a corresponding ingress (see the example after the manifest below).
Here's an example of the daemonset manifest:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      component: ingress-controller
  template:
    metadata:
      labels:
        component: ingress-controller
    spec:
      restartPolicy: Always
      hostNetwork: true
      containers:
      - name: nginx-ingress-lb
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
        ports:
        - name: http
          hostPort: 80
          containerPort: 80
          protocol: TCP
        - name: https
          hostPort: 443
          containerPort: 443
          protocol: TCP
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - '--default-backend-service=$(POD_NAMESPACE)/default-http-backend'
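A per-app Ingress to pair with this setup might look like the following sketch (the hostname my-app.example.com and service my-app are placeholders):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80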
Use an ingress controller as the entrypoint to the services in your Kubernetes cluster. Run the ingress controller on ports 80 and 443.
You need to define ingress rules for each backend service that you want to access from outside. The ingress controller then routes clients to those services based on the paths defined in the ingress rules.
If you need to allow access over HTTPS, you need to obtain TLS certificates for your DNS names, load them into secrets, and bind them in the ingress rules (see the sketch below).
The most popular one is the nginx ingress controller; Traefik and HAProxy ingress controllers are alternative solutions.
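For the HTTPS part, a sketch of loading a certificate into a secret and binding it in the ingress rules (the names and certificate files are placeholders):

kubectl create secret tls my-app-tls --cert=tls.crt --key=tls.key

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  tls:
  - hosts:
    - my-app.example.com
    secretName: my-app-tls
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80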
The idea of a hostNetwork proxy is actually not bad; OpenShift Router uses that approach, for example. You designate two or three nodes to run the proxy and use DNS load balancing in front of them.
And you can still use kube-dns with hostNetwork; see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
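Concretely, a hostNetwork pod keeps cluster DNS resolution if its dnsPolicy is set accordingly; a minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: hostnet-proxy
spec:
  hostNetwork: true
  # ClusterFirstWithHostNet keeps kube-dns resolution despite hostNetwork
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - name: proxy
    image: nginx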
You are probably running a kubeadm on-premises Kubernetes setup with an nginx ingress controller on Unix/Linux hosts, and you can't safely expose ports in the restricted system port range (0-1023).
You either need to set up your own dedicated load balancer pair (e.g. Linux boxes running HAProxy) or, alternatively, use existing load balancers if you are lucky enough to be in a corporate environment that already provides load balancing (e.g. an F5 LB).
Then you can set the load balancers to forward your 443/80 requests to your cluster nodes' 30443/30080 ports, which are handled by your cluster's ingress controller.
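As a sketch, an HAProxy configuration for that forwarding could look like this (the node IPs are placeholders):

# Pass TCP 80/443 straight through to the ingress controller's NodePorts
frontend http-in
    bind *:80
    mode tcp
    default_backend k8s-nodes-http

frontend https-in
    bind *:443
    mode tcp
    default_backend k8s-nodes-https

backend k8s-nodes-http
    mode tcp
    server node1 10.0.0.11:30080 check
    server node2 10.0.0.12:30080 check

backend k8s-nodes-https
    mode tcp
    server node1 10.0.0.11:30443 check
    server node2 10.0.0.12:30443 check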
I've installed minikube to learn kubernetes a bit better.
I've deployed some apps and services which have IPs in the 10.x.x.x range (private IPs). I can expose my services on minikube and visit them in my browser, but I want to use the private IPs rather than exposing them.
How can I reach (VPN/proxy-wise) the private IPs of services in minikube?
Minikube is Kubernetes with only one node, with the master running on that same node.
It provides the possibility to learn how Kubernetes works with minimal hardware,
and it's ideal for testing purposes and for running seamlessly on a laptop. Minikube still uses the mature
network stack from Kubernetes, which means that ports are exposed by services and services
communicate with pods.
To understand what is communicating, let me explain what ClusterIP does: it exposes the service on an internal IP in the cluster. This type makes the service reachable only from within the cluster.
You can get the cluster IP with the command:
kubectl get services test-service
So, after you create a new service, you'd like to establish connections to its ClusterIP.
Basically, there are three ways to connect to a backend resource:
1/ Use kube-proxy. This proxy reflects services as defined in the Kubernetes API and does simple TCP and UDP streaming to a backend, or to a set of backends in advanced configurations. Service cluster IPs and ports are currently found through Docker-compatible environment variables specifying the ports opened by the service proxy. There is an optional addon that provides cluster DNS for these cluster IPs. The user must create a service with the apiserver API to configure the proxy.
The example below shows how we can use a selector to define a connection to port 50000 on the ClusterIP; config.yaml may consist of:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  type: ClusterIP
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
2/ Use port forwarding to access the application. First check that the kubectl command-line tool can communicate with your minikube cluster; if so, find the service port from the ClusterIP configuration.
kubectl get svc | grep test-service
Let's assume the service test-service works on port 5555, so to do port forwarding run the command:
kubectl port-forward svc/test-service 5555:5555
After that, your service will be available at localhost:5555.
3/ If you are familiar with the concept of pod networking, you can declare container ports in the pod's manifest file. A user can connect to the pod network by defining a manifest such as:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 8080
When a container starts with a manifest file like the one above, TCP port 8080 is exposed on the pod's IP address.
Please keep in mind that ClusterIPs are used by many services the cluster needs in order to work properly. I think it is not good practice to treat a ClusterIP as a regular network service; in the worst case, it soon breaks a cluster through an invalid internal state of network connections.