Control the pod domain - kubernetes

I have pod1 and pod2 in the same namespace.
pod2 is running an HTTP server.
How can I easily get pod2 to be seen as pod2.mydomain.com from pod1?
That way the HTTPS certificate would work with no problem.

This can be achieved within the Kubernetes cluster, and it is important to use a valid SSL certificate.
Using a Kubernetes Service, you can create a Service of type "ClusterIP" or "NodePort" in the same namespace as the pods to expose the pod2 HTTP server on a consistent IP and port. You can then configure your DNS to map pod2.mydomain.com to the IP address of that Service.
Or, if your cluster supports load balancing, you can create a Service of type "LoadBalancer" to expose the pod2 HTTP server on a public IP.
You can also create an Ingress that routes traffic to pod2 based on the hostname, as sketched below.
For more information please check the official documentation.
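For illustration, here is a minimal sketch of such an Ingress, assuming a Service named pod2-service fronts pod2 and a TLS Secret named pod2-tls holds the certificate for pod2.mydomain.com (both names are assumptions, not from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pod2-ingress
spec:
  tls:
  - hosts:
    - pod2.mydomain.com
    secretName: pod2-tls          # assumed Secret holding the certificate
  rules:
  - host: pod2.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: pod2-service    # assumed Service in front of pod2
            port:
              number: 80
An Ingress controller must be running in the cluster, and DNS for pod2.mydomain.com must point at the controller's external address.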

You can hit the pod directly by its IP, as you are thinking, but it is better to put a Service in front of the pod.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 80
      name: http-web-svc
---
apiVersion: v1
kind: Service
metadata:
  name: service-1
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: http-web-svc
The Service routes traffic to the pods whose labels match its selector, so from pod1 you send the request to service-1, which forwards it to pod2 and returns the response.
https://service-1.<namespace-name>.svc.cluster.local
With the Service in place, HTTPS will also work the way you asked. Test it with the curl command; start a curl pod:
kubectl run mycurlpod --image=curlimages/curl -i --tty -- sh
then send a curl request to the service:
curl https://service-1.<namespace-name>.svc.cluster.local
Service reference doc: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
Extra:
You can also use an Ingress or a service mesh, which will simplify your scenario a little if you don't want to manage the SSL/TLS certificate in the app itself.
A service mesh supports mTLS authentication and lets you enforce policy, which makes this easy, though it adds extra management overhead.
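As a rough sketch, assuming Istio as the mesh (the question does not name one), a namespace-wide PeerAuthentication policy could enforce mTLS between the pods; the namespace name is an assumption:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace   # assumed namespace of pod1 and pod2
spec:
  mtls:
    mode: STRICT            # require mutual TLS for all workloads in the namespace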

Related

Google Kubernetes Ingress health check always failing

I have configured a web application pod exposed via apache on port 80. I'm unable to configure a service + ingress for accessing from the internet. The issue is that the backend services always report as UNHEALTHY.
Pod Config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: webapp
  name: webapp
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: webapp
  template:
    metadata:
      labels:
        name: webapp
    spec:
      containers:
      - image: asia.gcr.io/my-app/my-app:latest
        name: webapp
        ports:
        - containerPort: 80
          name: http-server
Service Config:
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    name: webapp
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 80
Ingress Config:
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  backend:
    serviceName: webapp-service
    servicePort: 50000
This results in backend services reporting as UNHEALTHY.
The health check settings:
Path: /
Protocol: HTTP
Port: 32463
Proxy protocol: NONE
Additional information: I've tried a different approach of exposing the deployment as a load balancer with external IP and that works perfectly. When trying to use a NodePort + Ingress, this issue persists.
With GKE, the health check on the Load balancer is created automatically when you create the ingress. Since the HC is created automatically, so are the firewall rules.
Since you have no readinessProbe configured, the LB has a default HC created (the one you listed). To debug this properly, you need to isolate where the point of failure is.
First, make sure your pod is serving traffic properly:
kubectl exec [pod_name] -- wget localhost:80
If the application has curl built in, you can use that instead of wget.
If the application has neither wget nor curl, skip to the next step.
Get the following output and keep track of it:
kubectl get po -l name=webapp -o wide
kubectl get svc webapp-service
You will need the pod IP and the service ClusterIP.
SSH to a node in your cluster and run sudo toolbox bash
Install curl:
apt-get install curl
Test the pods to make sure they are serving traffic within the cluster:
curl -I [pod_IP]:80
This needs to return a 200 response.
Test the service:
curl -I [service_clusterIP]:80
If the pod is not returning a 200 response, the container is either not working correctly or the port is not open on the pod.
If the pod is working but the service is not, there is an issue with the iptables routes, which are managed by kube-proxy, and that points to a problem with the cluster.
Finally, if both the pod and the service are working, the problem lies with the load balancer health checks and is something Google needs to investigate.
As Patrick mentioned, the checks will be created automatically by GCP.
By default, GKE will use readinessProbe.httpGet.path for the health check.
But if there is no readinessProbe configured, then it will just use the root path /, which must return an HTTP 200 (OK) response (and that's not always the case, for example, if the app redirects to another path, then the GCP health check will fail).
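For illustration, a minimal sketch of how a readinessProbe could be added to the webapp container in the Deployment's pod template, so the GKE health check uses a path that returns 200; the /healthz path is an assumption, use whatever path your app serves with HTTP 200:
spec:
  containers:
  - image: asia.gcr.io/my-app/my-app:latest
    name: webapp
    ports:
    - containerPort: 80
      name: http-server
    readinessProbe:
      httpGet:
        path: /healthz        # assumed health endpoint returning HTTP 200
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 5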

Kubernetes - Pass Public IP of Load Balance as Environment Variable into Pod

Gist
I have a ConfigMap which provides necessary environment variables to my pods:
apiVersion: v1
kind: ConfigMap
metadata:
  name: global-config
data:
  NODE_ENV: prod
  LEVEL: info
  # I need to set API_URL to the public IP address of the Load Balancer
  API_URL: http://<SOME IP>:3000
  DATABASE_URL: mongodb://database:27017
  SOME_SERVICE_HOST: some-service:3000
I am running my Kubernetes Cluster on Google Cloud, so it will automatically create a public endpoint for my service:
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
    nodePort: 30000
  type: LoadBalancer
Issue
I have a web application that needs to make HTTP requests from the client's browser to the gateway service. But in order to make a request to the external service, the web app needs to know its IP address.
So I've set up the pod that serves the web application in such a way that it picks up an environment variable "API_URL" and makes all HTTP requests to that URL.
So I just need a way to set the API_URL environment variable to the public IP address of the gateway service and pass it into the pod when it starts.
I know this isn't the exact approach you were going for, but I've found that creating a static IP address and explicitly passing it in tends to be easier to work with.
First, create a static IP address:
gcloud compute addresses create gke-ip --region <region>
where region is the GCP region your GKE cluster is located in.
Then you can get your new IP address with:
gcloud compute addresses describe gke-ip --region <region>
Now you can add your static IP address to your service by specifying an explicit loadBalancerIP.1
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
    nodePort: 30000
  type: LoadBalancer
  loadBalancerIP: "1.2.3.4"
At this point, you can also hard-code it into your ConfigMap and not worry about grabbing the value from the cluster itself.
1If you've already created a LoadBalancer with an auto-assigned IP address, setting an IP address won't change the IP of the underlying GCP load balancer. Instead, you should delete the LoadBalancer service in your cluster, wait ~15 minutes for the underlying GCP resources to get cleaned up, and then recreate the LoadBalancer with the explicit IP address.
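For example, the ConfigMap from the question could then reference the reserved address directly (1.2.3.4 stands in for the static IP created above):
apiVersion: v1
kind: ConfigMap
metadata:
  name: global-config
data:
  NODE_ENV: prod
  LEVEL: info
  API_URL: http://1.2.3.4:3000            # the reserved static IP
  DATABASE_URL: mongodb://database:27017
  SOME_SERVICE_HOST: some-service:3000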
You are trying to access the gateway service from the client's browser.
I would like to suggest another solution that is slightly different from what you are currently trying to achieve,
but it can solve your problem.
From your question I gather that your web app and gateway app are on the same cluster.
In my solution you don't need a Service of type LoadBalancer; a basic Ingress is enough to make it work.
You only need to create a Service object (notice that type: LoadBalancer and the nodePort are now gone):
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
You also need an Ingress object (remember that an Ingress controller needs to be deployed to the cluster in order to make it work) like the one below.
More on how to deploy the Nginx Ingress controller you can find here;
if you are already using one (maybe a different one), you can skip this step.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: gateway.foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: gateway
          servicePort: 3000
Notice the host field.
You need to repeat the same for your web application. Remember to use an appropriate host name (DNS name),
e.g. for the web app: foo.bar.com and for the gateway: gateway.foo.bar.com,
and then just use the gateway.foo.bar.com DNS name to connect to the gateway app from the client's web browser.
You also need to create a DNS entry that points *.foo.bar.com to the Ingress's public IP address,
as the Ingress controller will create its own load balancer.
The flow of traffic would be like below:
+-------------+ +---------+ +-----------------+ +---------------------+
| Web Browser |-->| Ingress |-->| gateway Service |-->| gateway application |
+-------------+ +---------+ +-----------------+ +---------------------+
This approach is better because it won't cause issues with Cross-Origin Resource Sharing (CORS) in the client's browser.
The Ingress and Service manifest examples were taken from the official Kubernetes documentation and slightly modified.
More on Ingress you can find here
and on Services here
The following deployment reads the external IP of a given service using kubectl every 10 seconds and patches a given configmap with it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configmap-updater
  labels:
    app: configmap-updater
spec:
  selector:
    matchLabels:
      app: configmap-updater
  template:
    metadata:
      labels:
        app: configmap-updater
    spec:
      containers:
      - name: configmap-updater
        image: alpine:3.10
        command: ['sh', '-c']
        args:
        - |
          #!/bin/sh
          set -x
          apk --update add curl
          curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl
          chmod +x kubectl
          export CONFIGMAP="configmap/global-config"
          export SERVICE="service/gateway"
          while true
          do
            IP=`./kubectl get $SERVICE -o go-template --template='{{ (index .status.loadBalancer.ingress 0).ip }}'`
            PATCH=`printf '{"data":{"API_URL": "https://%s:3000"}}' $IP`
            echo ${PATCH}
            ./kubectl patch --type=merge -p "${PATCH}" $CONFIGMAP
            sleep 10
          done
You probably have RBAC enabled in your GKE cluster and would still need to create the appropriate Role and RoleBinding for this to work correctly.
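As a rough sketch (resource names, namespace, and the ServiceAccount used by the Deployment are assumptions; adjust them to your setup), the Role and RoleBinding could look roughly like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-updater
  namespace: default            # assumed namespace
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list"]        # read the gateway Service's external IP
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "patch"]       # patch global-config with the IP
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: configmap-updater
  namespace: default            # assumed namespace
subjects:
- kind: ServiceAccount
  name: default                 # assumed ServiceAccount used by the Deployment
  namespace: default
roleRef:
  kind: Role
  name: configmap-updater
  apiGroup: rbac.authorization.k8s.io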
You've got a few possibilities:
If you really need this hacked into your setup, you could use a similar approach with a sidecar container in your pod, or a global service like the one above. Keep in mind that you would need to recreate your pods if the ConfigMap changes, for the new value to be picked up by your containers' environment variables.
Watch and query the Kubernetes API for the external IP directly in your application, eliminating the need for an environment variable.
Adapt your applications to not depend directly on the external IP.

Redirection Stateful pod in Kubernetes

I am using a StatefulSet in Kubernetes.
I wrote an application (in Go) which has a leader and followers.
The leader handles writing and reading.
The followers are just for reading.
In the application code, I used the "http.Redirect(w, r, url, 307)" function to redirect writes from a follower to the leader.
If I use a jump pod to test the application (trying to read and write), it works well and the follower can redirect to the leader:
kubectl run -i -t --rm jumpod --restart=Never --image=quay.io/mhausenblas/jump:0.2 -- sh
But when I deploy a service (to access the app from outside):
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  selector:
    app: app-name
  ports:
  - port: 80
    targetPort: 9876
And access the application via this link:
curl -L -XPUT -T /tmp/put-name localhost:8001/api/v1/namespaces/default/services/service-name/proxy/
The service randomly selects a pod each time we access the application. When it hits the leader, it works well (no problem occurs), but if it hits a follower, the follower needs to redirect to the leader, and I get this error:
curl: (6) Could not resolve host: pod-name.svc-name.default.svc.cluster.local
What I tested:
I can access the app with that link when I use the jump pod.
I went into each pod and looked up the DNS; I can find the DNS name with the "nslookup" command.
I tried hard-coding the leader's IP in my code so the follower redirects to an IP (not a domain like above), but it still fails with this error:
curl: (7) Failed to connect to 10.244.1.71 port 9876: No route to host
Does anybody know about this problem? Thank you!
In order for pod DNS to work, you must create a headless service:
apiVersion: v1
kind: Service
metadata:
  name: svc-name-headless
spec:
  clusterIP: None
  selector:
    app: app-name
  ports:
  - port: 80
    targetPort: 9876
And then in StatefulSet spec you must refer to this service:
spec:
  serviceName: svc-name-headless
Read more here: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id
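For context, a minimal StatefulSet sketch that references the headless service might look like this (the image is a placeholder and the names mirror the question's setup):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-name
spec:
  serviceName: svc-name-headless   # the headless Service defined above
  replicas: 3
  selector:
    matchLabels:
      app: app-name
  template:
    metadata:
      labels:
        app: app-name
    spec:
      containers:
      - name: app
        image: my-app:latest         # placeholder image
        ports:
        - containerPort: 9876
Each pod then gets a stable DNS name of the form app-name-0.svc-name-headless.default.svc.cluster.local that the redirect can target.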
Alternatively, you may specify which particular pod the Service selects, like this:
apiVersion: v1
kind: Service
metadata:
  name: svc-name
spec:
  selector:
    statefulset.kubernetes.io/pod-name: pod-name-0
  ports:
  - port: 80
    targetPort: 9876
When you access your cluster services from outside, the DNS names normally reserved for in-cluster use (e.g. pod-name.svc-name.default.svc.cluster.local) are not recognized by clients (e.g. a web browser).
Context:
You are trying to expose access to pods controlled by a StatefulSet with a Service of type ClusterIP.
Solution:
Change ClusterIP (assumed by default when not specified) to NodePort or LoadBalancer, as sketched below.
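For example, a sketch of the question's Service changed to NodePort (the nodePort value is an assumption; omit it to let Kubernetes assign one from the default 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  type: NodePort                 # or LoadBalancer, if the cluster supports it
  selector:
    app: app-name
  ports:
  - port: 80
    targetPort: 9876
    nodePort: 30080              # assumed value in the default NodePort range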

Minikube: access private services using proxy/vpn

I've installed minikube to learn kubernetes a bit better.
I've deployed some apps and services which have IPs in the 10.x.x.x range (private IPs). I can expose my services on minikube and visit them in my browser, but I want to use the private IPs without exposing the services.
How can I reach (VPN/proxy-wise) the private IPs of services in minikube?
Minikube is Kubernetes with only one node, with the master running on that same node.
It gives you a way to learn how Kubernetes works with minimal hardware.
It's ideal for testing purposes and runs seamlessly on a laptop. Minikube still comes with the mature
network stack from Kubernetes. This means that ports are exposed by services and, virtually, services
communicate with pods.
To understand what is communicating with what, let me explain what ClusterIP does: it exposes the service on an internal IP in the cluster. This type makes the service only reachable from within the cluster.
You can get the ClusterIP with the command:
kubectl get services test_service
So, after you create a new service, you would like to establish connections to its ClusterIP.
Basically, there are three ways to connect to a backend resource:
1/ Use kube-proxy - this proxy reflects services as defined in the Kubernetes API and simply streams TCP and UDP to a backend, or to a set of them in advanced configurations. Service cluster IPs and ports are currently found through Docker-compatible environment variables specifying the ports opened by the service proxy. There is an optional addon that provides cluster DNS for these cluster IPs. The user must create a service with the apiserver API to configure the proxy.
The example shows how we can use selectors to define a connection to port 50000 on the ClusterIP - config.yaml may consist of:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  type: ClusterIP
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
2/ Use port forwarding to access the application - first check that the kubectl command-line tool can communicate with your minikube cluster, and if so, find the service port from the ClusterIP configuration:
kubectl get svc | grep test_service
Let's assume the service test_service works on port 5555, so to do port forwarding run the command:
kubectl port-forward svc/test_service 5555:5555
After that, your service will be available on localhost:5555
3/ If you are familiar with the concept of pod networking, you can declare public ports in the pod's manifest file. A user can connect to the pod network by defining a manifest like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 8080
When a container is started with a manifest file like the one above, TCP port 8080 is exposed on the pod; if a hostPort is also set, the host port is forwarded to pod port 8080.
Please keep in mind that ClusterIPs are used by many services the cluster needs in order to work properly. I think it is not good practice to treat ClusterIPs like regular network services - in the worst case it can quickly break a cluster through an invalid internal state of network connections.

Kubernetes NetworkPolicy allow loadbalancer

I have a Kubernetes cluster running on Google Kubernetes Engine (GKE) with network policy support enabled.
I created an nginx deployment and load balancer for it:
kubectl run nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
Then I created this network policy to make sure other pods in the cluster won't be able to connect to it anymore:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: TCP
      port: 80
Now other pods in my cluster can't reach it (as intended):
kubectl run busybox --rm -ti --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (10.63.254.50:80)
wget: download timed out
However, it surprised me that using my external browser I also can't connect anymore to it through the load balancer:
open http://$(kubectl get svc nginx --output=jsonpath={.status.loadBalancer.ingress[0].ip})
If I delete the policy it starts to work again.
So, my question is: how do I block other pods from reaching nginx, but keep access through the load balancer open?
I talked about this in my Network Policy recipes repository.
"Allowing EXTERNAL load balancers while DENYING local traffic" is not a use case that makes sense, therefore it's not possible to express with a network policy.
For Service type=LoadBalancer and Ingress resources to work, you must allow ALL traffic to the pods selected by these resources.
If you REALLY want to, you can use the from.ipBlock.cidr and from.ipBlock.cidr.except fields to allow traffic from 0.0.0.0/0 (all IPv4) while excluding 10.0.0.0/8 (or whatever private IP range GKE uses).
I recently had to do something similar. I needed a policy that didn't allow pods from other namespaces to talk to prod, but did allow the LoadBalancer services to reach pods in prod. Here's what worked (based on Ahmet's post):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-prod
  namespace: prod
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
  - from:
    - ipBlock:
        cidr: '0.0.0.0/0'
        except: ['10.0.0.0/8']
I'd like to share a solution that builds on the excellent answers of #ahmetb-google and #tammer-saleh.
The situation: 1 cluster, 4 namespaces, a public HTTPS-terminating Ingress for 3 of the namespaces that allows specific traffic inbound and routes it appropriately.
Goal: Block all inter-namespace traffic, allow only public traffic coming in via the Ingress.
Problem: When deploying a "deny from other namespaces" rule, it also denied traffic from my Ingress, so the pods were not accessible from the outside.
Solution:
I created an additional policy to allow only port 80 traffic targeting pods with the label role=web. It uses the allow/except trick to still block traffic from other namespaces but allow it from the public ingresses.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-http-from-ingress
spec:
  podSelector:
    matchLabels:
      role: web
  ingress:
  - from:
    - ipBlock:
        cidr: '0.0.0.0/0'
        except: ['10.0.0.0/8']
    ports:
    - port: 80
With this deployed, traffic still flows from the public, via the Ingress, to the web-serving pods. All inter-namespace traffic is blocked, including HTTP.
This is a useful setup when, for example, you're using namespaces for different deployment stages (production, testing, edge, etc.) and you have private HTTP APIs that you would not want to accidentally hit cross-stage.