Auto-creating A records with kubernetes services - kubernetes

I've got a kubernetes 1.6.2 cluster, and am creating a service like:
kind: Service
apiVersion: v1
metadata:
  name: hello
  namespace: myns
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    dns.alpha.kubernetes.io/internal: mydomain.com
spec:
  selector:
    app: hello-world
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 5000
  type: LoadBalancer
I'd expect this to create an internal ELB (which it does) but also set up an A record on the AWS Route53 hosted zone for mydomain.com as per https://github.com/kubernetes/kops/tree/master/dns-controller (which it doesn't). Is there something I need to do to enable A record creation?

In Route 53, create a type A record with Alias set to Yes prior to launching your cluster, and Kubernetes will auto-update it with the proper Alias Target, which resolves to the correct IP once the app boots up after you issue
kubectl expose rs .....

This might not work, since dns.alpha.kubernetes.io/internal requires that you use NodePort; at least that's what's stated here: https://github.com/kubernetes/kops/issues/1082
Update:
There's an open issue about this A record: a CNAME record can be created, but an A record is not yet working. I forgot the issue number, but I think you can find it in the kops GitHub issues.
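If you do switch the Service to NodePort, a minimal sketch of what the annotated manifest might look like (names, domain, and ports reused from the question; whether dns-controller then creates the A record still depends on your kops/dns-controller version):
kind: Service
apiVersion: v1
metadata:
  name: hello
  namespace: myns
  annotations:
    dns.alpha.kubernetes.io/internal: mydomain.com
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 5000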

Related

configure dns in kubernetes pod

Good afternoon. I have a problem with a multi-tenant architecture that I am setting up in Kubernetes. It consists of two pods (front and back): the front hits a URL that points to the backend, and in the back I have the different clients (tenants).
The nginx config of the backend is defined as follows:
server_name ~^(?<account>.+)\-backend.domain.com$;
root /var/www/html/tenant/$account-backend/;
index index.php;
This means that if I want to reach the backend from the frontend, it would be with a URL like this: tenant1.backend.domain.com
The frontend is exposed with a NodePort-type service and a load balancer.
The backend is exposed locally with a ClusterIP-type service, which is the following:
apiVersion: v1
kind: Service
metadata:
  name: service-clusterip-app-backend
  namespace: app
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: app-nginx
When I bring up the cluster, go to the frontend, and make a request, the pod can't resolve tenant1.backend.domain.com. I've tried configuring some redirection rules through CoreDNS, but I don't understand how it works:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  zoomcrm.server: |
    tenant1.backend.domain.com {
        forward . service-clusterip-app-backend:80
    }
Basically, what I need is for the frontend to know where to go when the URL of the request is tenant1.backend.domain.com. I've looked into it, but nothing I've done has worked.
First of all, you need to forward the DNS request to the required DNS server based on the forward rule, like this. That means you will forward resolution requests for the zone tenant1.backend.domain.com (*.tenant1.backend.domain.com) to that DNS server.
data:
  zoomcrm.server: |
    tenant1.backend.domain.com {
        forward . <customDNSserver>:<dns port>
    }
Description:
The forward plugin re-uses already opened sockets to the upstreams. It supports UDP, TCP and DNS-over-TLS and uses in band health checking.
The description of the CoreDNS forward plugin can be found here. If you are interested, you can check this post on building your own DNS server in Kubernetes for name resolution.
In your case, if you only need to add a single DNS record to resolve inside Kubernetes, you can either use hostAliases as in this document, or use the CoreDNS file plugin to add a zone to your CoreDNS config for name resolution; here is the document.
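A minimal sketch of the hostAliases approach, assuming the frontend runs as a Deployment in the same namespace and the backend Service has a known ClusterIP (the IP 10.0.0.10, the frontend names, and the image are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      hostAliases:
      - ip: "10.0.0.10"                # placeholder: the ClusterIP of service-clusterip-app-backend
        hostnames:
        - "tenant1.backend.domain.com"
      containers:
      - name: frontend
        image: my-frontend:latest      # placeholder image
        ports:
        - containerPort: 80
With this in place, the frontend pod resolves tenant1.backend.domain.com to the backend Service IP without touching CoreDNS; the trade-off is that the IP is hard-coded in the pod spec.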

HAProxy Ingress Controller Service Changed IP on GCP

I am using HAProxy as the ingress controller in my GKE clusters, and I expose the HAProxy service as a LoadBalancer service (Internal).
Recently I experienced an issue where the HAProxy service changed its EXTERNAL-IP and traffic stopped routing to HAProxy. This issue occurred multiple times on different days (it has stopped now). I had to manually add the new External-IP to the frontend of that load balancer to allow traffic to HAProxy.
There were two pods running for HAProxy, both had been running for days, and there was nothing in their logs. I assume it was something related to the Service or the GCP LB and not HAProxy itself.
I am afraid that I don't have any logs related to that.
I still don't know what caused the service IP to change, as there were no recent changes; the cluster and all services had been running properly for many days when this suddenly occurred.
Has anyone faced a similar issue before? Or what can I do to avoid such issues in the future?
What could have caused the IP to change?
This is how my service is configured:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: haproxy-controller
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
    cloud.google.com/network-tier: "Premium"
spec:
  selector:
    run: haproxy-ingress
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: stat
    port: 1024
    protocol: TCP
    targetPort: 1024
Found some logs:
Warning SyncLoadBalancerFailed 30m (x3570 over 13d) service-controller Error syncing load balancer: failed to ensure load balancer: googleapi: Error 409: IP_IN_USE_BY_ANOTHER_RESOURCE - IP '10.17.129.17' is already being used by another resource.
Normal EnsuringLoadBalancer 3m33s (x3576 over 13d) service-controller Ensuring load balancer
The short answer is: external IPs for the service are ephemeral.
Because the HAProxy controller pods were recreated, the HAProxy service was recreated with an ephemeral IP.
To avoid this issue, I would recommend using a static IP that you can reference in the loadBalancerIP field.
This can be done with the following steps:
Reserve a static IP. (link)
Use this IP to create the service. (link)
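A hedged sketch of step 1 (the address name, region, and subnet are placeholders; since the Service above is annotated as an Internal load balancer, the address should be reserved inside the cluster's VPC subnet):
gcloud compute addresses create haproxy-ingress-ip \
    --region us-central1 \
    --subnet my-subnet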
Example YAML:
apiVersion: v1
kind: Service
metadata:
  name: helloweb
  labels:
    app: hello
spec:
  selector:
    app: hello
    tier: web
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
  loadBalancerIP: "YOUR.IP.ADDRESS.HERE"
Unfortunately, without logs it's hard to say anything for sure. You should check the audit logs that GKE ships to Cloud Logging, as that might give you some idea of what happened. One option is that GCP "oops"'d the GLB and GKE recreated it, thus giving it a new IP. I've never heard of that happening with LBs, though (it happens pretty often with nodes, but not LBs). A more common case would be that you ran some kubectl command that inadvertently removed the Service object, and it was then recreated by some management layer you have set up (Argo, Flux, Helm Operator, whatever); delete+recreate again means a new LB with a new IP. The latter case should be visible in the audit logs, so check those out for sure.
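As a hedged sketch, assuming GKE audit logging to Cloud Logging is enabled, a query along these lines could surface a Service delete/recreate (the exact methodName value may differ by API version):
gcloud logging read \
    'resource.type="k8s_cluster" AND protoPayload.methodName:"services.delete"' \
    --limit 20 \
    --format json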

Assign ExternalIP of LoadBalancer to Deployment as ENV variable

I have a very specific case where my Pod needs to access another LoadBalancer service via its ExternalIP.
Is there any way to assign the LoadBalancer's ExternalIP as an ENV variable in the Deployment.yaml?
Thank you in advance!
I don't think this is directly possible in any of the standard templating tools. Part of the problem is that creating a cloud-hosted load balancer is an asynchronous operation, so that external-IP value won't be available until some time after kubectl apply (or the equivalent helm install) has finished.
If you can create the Service in advance then you can hard-code its external IP address or host name into other configuration, but this is intrinsically two steps. (If you're bought into Kubernetes operators, this should be possible with custom code: watch the Service, and once it gets its external address, create a corresponding ConfigMap that holds the address.)
Depending on your specific use case it may also work to just target the LoadBalancer Service within your cluster, the same as any other Service. This won't go out through the cloud provider's load-balancer tier, but it should be indistinguishable otherwise.
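For that last option, a minimal pod-spec fragment showing the idea (the variable name, service name, and namespace are placeholders; the in-cluster DNS name stays stable even though the external IP does not):
containers:
- name: app
  image: my-app:latest                                              # placeholder image
  env:
  - name: UPSTREAM_URL                                              # placeholder variable name
    value: "http://my-lb-service.my-namespace.svc.cluster.local:80" # LoadBalancer Service reached like a normal ClusterIP Service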
I found a way to do it, but @David Maze was perfectly right: there is no straightforward way to do it.
So, my solution was to add DNS records with public and private zones:
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
  labels:
    app.kubernetes.io/name: nginx-lb
  annotations:
    external-dns.alpha.kubernetes.io/hostname: mycoolservice.{{ .Values.dns_external_zone }}.
    external-dns.alpha.kubernetes.io/zone-type: public,private
    external-dns.alpha.kubernetes.io/ttl: "1"
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: https
  - name: http
    port: 80
    targetPort: http
  selector:
    app.kubernetes.io/name: nginx

Kubernetes - Pass Public IP of Load Balancer as Environment Variable into Pod

Gist
I have a ConfigMap which provides necessary environment variables to my pods:
apiVersion: v1
kind: ConfigMap
metadata:
  name: global-config
data:
  NODE_ENV: prod
  LEVEL: info
  # I need to set API_URL to the public IP address of the Load Balancer
  API_URL: http://<SOME IP>:3000
  DATABASE_URL: mongodb://database:27017
  SOME_SERVICE_HOST: some-service:3000
I am running my Kubernetes Cluster on Google Cloud, so it will automatically create a public endpoint for my service:
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
    nodePort: 30000
  type: LoadBalancer
Issue
I have a web application that needs to make HTTP requests from the client's browser to the gateway service. But in order to make a request to the external service, the web app needs to know its IP address.
So I've set up the pod which serves the web application in such a way that it picks up an environment variable API_URL and, as a result, makes all HTTP requests to this URL.
So I just need a way to set the API_URL environment variable to the public IP address of the gateway service, so it can be passed into the pod when it starts.
I know this isn't the exact approach you were going for, but I've found that creating a static IP address and explicitly passing it in tends to be easier to work with.
First, create a static IP address:
gcloud compute addresses create gke-ip --region <region>
where region is the GCP region your GKE cluster is located in.
Then you can get your new IP address with:
gcloud compute addresses describe gke-ip --region <region>
Now you can add your static IP address to your service by specifying an explicit loadBalancerIP.[1]
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
    nodePort: 30000
  type: LoadBalancer
  loadBalancerIP: "1.2.3.4"
At this point, you can also hard-code it into your ConfigMap and not worry about grabbing the value from the cluster itself.
[1] If you've already created a LoadBalancer with an auto-assigned IP address, setting an IP address won't change the IP of the underlying GCP load balancer. Instead, you should delete the LoadBalancer service in your cluster, wait ~15 minutes for the underlying GCP resources to get cleaned up, and then recreate the LoadBalancer with the explicit IP address.
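A hedged sketch of that delete-and-recreate sequence (gateway-service.yaml is a placeholder for whatever file holds the Service manifest, now including loadBalancerIP):
kubectl delete service gateway
# wait ~15 minutes for the underlying GCP load balancer resources to be cleaned up
kubectl apply -f gateway-service.yaml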
You are trying to access the gateway service from the client's browser.
I would like to suggest another solution that is slightly different from what you are currently trying to achieve,
but it can solve your problem.
From your question I was able to deduce that your web app and gateway app are in the same cluster.
With my solution you don't need a service of type LoadBalancer; a basic Ingress is enough to make it work.
You only need to create a Service object (notice that the option type: LoadBalancer is now gone):
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - name: http
    port: 3000
    targetPort: 3000
You also need an Ingress object (remember that an Ingress Controller needs to be deployed to the cluster in order to make it work), like the one below.
More on how to deploy the NGINX Ingress Controller can be found here;
if you are already using one (maybe a different one), then you can skip this step.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: gateway.foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: gateway
          servicePort: 3000
Notice the host field.
The same needs to be repeated for your web application. Remember to use appropriate host names (DNS names),
e.g. foo.bar.com for the web app and gateway.foo.bar.com for the gateway,
and then just use the gateway.foo.bar.com DNS name to connect to the gateway app from the client's web browser.
You also need to create a DNS entry that points *.foo.bar.com to the Ingress's public IP address,
as the Ingress controller will create its own load balancer.
The flow of traffic would be like below:
+-------------+   +---------+   +-----------------+   +---------------------+
| Web Browser |-->| Ingress |-->| gateway Service |-->| gateway application |
+-------------+   +---------+   +-----------------+   +---------------------+
This approach is better because it won't cause issues with Cross-Origin Resource Sharing (CORS) in the client's browser.
The example Ingress and Service manifests I took from the official Kubernetes documentation and modified slightly.
You can find more on Ingress here
and on Services here.
The following deployment reads the external IP of a given service using kubectl every 10 seconds and patches a given configmap with it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configmap-updater
  labels:
    app: configmap-updater
spec:
  selector:
    matchLabels:
      app: configmap-updater
  template:
    metadata:
      labels:
        app: configmap-updater
    spec:
      containers:
      - name: configmap-updater
        image: alpine:3.10
        command: ['sh', '-c']
        args:
        - |
          #!/bin/sh
          set -x
          apk --update add curl
          curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl
          chmod +x kubectl
          export CONFIGMAP="configmap/global-config"
          export SERVICE="service/gateway"
          while true
          do
            # read the external IP assigned to the LoadBalancer service
            IP=`./kubectl get $SERVICE -o go-template --template='{{ (index .status.loadBalancer.ingress 0).ip }}'`
            # build a merge patch that sets API_URL to that IP
            PATCH=`printf '{"data":{"API_URL": "https://%s:3000"}}' $IP`
            echo ${PATCH}
            # apply the patch to the configmap
            ./kubectl patch --type=merge -p "${PATCH}" $CONFIGMAP
            sleep 10
          done
You probably have RBAC enabled in your GKE cluster and would still need to create the appropriate Role and RoleBinding for this to work correctly.
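A hedged sketch of what those RBAC objects might look like, assuming the Deployment runs under the default ServiceAccount in the default namespace (names are placeholders):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-updater
  namespace: default
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: configmap-updater
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: configmap-updater
subjects:
- kind: ServiceAccount
  name: default
  namespace: default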
You've got a few possibilities:
If you really need this to be hacked into your setup, you could use a similar approach with a sidecar container in your pod, or a global service like the one above. Keep in mind that you would need to recreate your pods if the configmap actually changed for the changes to be picked up by the environment variables of your containers.
Watch and query the Kubernetes API for the external IP directly in your application, eliminating the need for an environment variable.
Adapt your applications to not depend directly on the external IP.

Redirection Stateful pod in Kubernetes

I am using a StatefulSet in Kubernetes.
I wrote an application (in Go) which has a leader and followers.
The leader handles writing and reading.
The followers are just for reading.
In the application code, I used the "http.Redirect(w, r, url, 307)" function to redirect writes from a follower to the leader.
If I use a jump pod to test the application (trying to access the app to read and write), my application works well; the follower can redirect to the leader:
kubectl run -i -t --rm jumpod --restart=Never --image=quay.io/mhausenblas/jump:0.2 -- sh
But the problem appears when I deploy a Service (to access the application from outside):
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  selector:
    app: app-name
  ports:
  - port: 80
    targetPort: 9876
And I access the application via this link:
curl -L -XPUT -T /tmp/put-name localhost:8001/api/v1/namespaces/default/services/service-name/proxy/
Because the service randomly selects a pod each time we access the application, when it hits the leader it works well (no problem happens), but when it hits a follower, the follower needs to redirect to the leader and I get this error:
curl: (6) Could not resolve host: pod-name.svc-name.default.svc.cluster.local
What I tested:
I can use this link to access the app when I use the jump pod.
I accessed each pod and looked up the DNS; I can find the DNS names with the "nslookup" command.
I tried hard-coding the leader's IP in my code, so the follower redirects to an IP (not a domain like above), but it still hits this error:
curl: (7) Failed to connect to 10.244.1.71 port 9876: No route to host
Does anybody know about this problem? Thank you!
In order for pod DNS to work, you must create a headless Service:
apiVersion: v1
kind: Service
metadata:
  name: svc-name-headless
spec:
  clusterIP: None
  selector:
    app: app-name
  ports:
  - port: 80
    targetPort: 9876
And then in the StatefulSet spec you must refer to this service:
spec:
  serviceName: svc-name-headless
Read more here: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id
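For context, a minimal sketch of a StatefulSet that references the headless Service (the image and replica count are placeholders; the labels and port follow the manifests above):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-name
spec:
  serviceName: svc-name-headless
  replicas: 3
  selector:
    matchLabels:
      app: app-name
  template:
    metadata:
      labels:
        app: app-name
    spec:
      containers:
      - name: app
        image: my-app:latest        # placeholder image
        ports:
        - containerPort: 9876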
Alternatively, you may specify which particular pod will be selected by the Service, like this:
apiVersion: v1
kind: Service
metadata:
  name: svc-name
spec:
  selector:
    statefulset.kubernetes.io/pod-name: pod-name-0
  ports:
  - port: 80
    targetPort: 9876
When you are accessing your cluster services from outside, the DNS names normally reserved for in-cluster use (e.g. pod-name.svc-name.default.svc.cluster.local) are not recognized by external clients (e.g. a web browser).
Context:
You are trying to expose access to pods controlled by a StatefulSet with a Service of type ClusterIP.
Solution:
Change ClusterIP (assumed by default when not specified) to NodePort or LoadBalancer, as sketched below.
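A minimal sketch of that change, reusing the names and ports from the question's manifest (the nodePort value is a placeholder; omit it to let Kubernetes pick one from the NodePort range):
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  type: NodePort
  selector:
    app: app-name
  ports:
  - port: 80
    targetPort: 9876
    nodePort: 30080    # placeholder; must fall within the cluster's NodePort range (default 30000-32767)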