bitnami/external-dns with Kubernetes on Docker Desktop does not work

What I'm trying to do
I have deployed an ASP.NET Core gRPC service on Docker Desktop (Kubernetes enabled). To do client-side load balancing, I want to expose it via a headless service. The deployment and service definitions are as provided in the linked YAML files, viz. Deployment.yaml, service.yaml, and the PV and PVC .yaml. When the deployment runs, two replicas are created. I now want to expose them via a headless service, do a DNS lookup of the pods' IP addresses, and load-balance on the client side. For this, I installed the Bitnami external-dns Helm chart without modifying the default chart values. However, when I try an nslookup of my service, it does not work.
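For reference, a headless Service for this scenario normally sets clusterIP: None so that the cluster DNS returns the individual pod IPs. A minimal sketch, assuming the pods carry the label app: grpc-server and the container listens on port 5000 (both names are illustrative, not taken from the linked files):

apiVersion: v1
kind: Service
metadata:
  name: grpc-headless      # illustrative name
spec:
  clusterIP: None          # headless: DNS returns the pod IPs instead of a virtual IP
  selector:
    app: grpc-server       # assumed pod label
  ports:
    - port: 5000           # assumed gRPC port
      targetPort: 5000

With such a service in place, an in-cluster lookup like nslookup grpc-headless.default.svc.cluster.local should return one A record per ready pod. Note that this is served by the cluster DNS (CoreDNS) on its own; external-dns is for publishing records to an external DNS provider, not for in-cluster resolution.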
My expectation
I expected to deploy the Bitnami external-dns on Docker Desktop with Kubernetes enabled, configure the service to be exposed as DNS on the load balancer, and have the nslookup succeed in returning the pod IPs.
Can someone help me get this working?

Related

Deploying Kubernetes Dashboard in Cloud

I'm following this documentation for deploying Kubernetes Dashboard: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
Question # 1:
The instructions only cover accessing the dashboard locally. Is there any tutorial for deploying it in the cloud (Azure, AWS)? If not, do we have to manually expose a load balancer / ingress in front of the dashboard service?
Question # 2:
The instructions mention running kubectl proxy in order to enable access to the dashboard. If deploying to the cloud, do we need to run that as a process in the background?
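For context, the documented local flow is roughly the following; the dashboard URL path is taken from the linked documentation, so double-check it against the dashboard version you deploy:

$ kubectl proxy
# then browse to:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/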
Regarding question 1:
You can certainly deploy this dashboard to a cloud; for AWS you could set up an EKS cluster or an EC2 instance and deploy your application or this dashboard there.
You will need to set up a Service of type NodePort or LoadBalancer. With NodePort, use your VM's IP as the address and the NodePort as the port exposed to the outside world; if you decide to create a load balancer instead, look up the endpoints of the service with kubectl get endpoints -n <service_namespace> and point the load balancer at them.
For question 2 I'm not sure about the proxy; maybe my answer to question 1 is enough. But the documentation knows better.
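As a concrete illustration of the NodePort route for the dashboard, a Service along these lines could work; the namespace, label, and container port follow the upstream recommended manifest, but treat them as assumptions and verify against your actual deployment:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-nodeport
  namespace: kubernetes-dashboard    # namespace used by the recommended manifest
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard    # assumed label from the recommended manifest
  ports:
    - port: 443
      targetPort: 8443               # the dashboard container serves HTTPS on 8443
      nodePort: 30443                # must fall in the 30000-32767 range

The dashboard would then be reachable at https://<node-ip>:30443 without kubectl proxy, though you still need a token or kubeconfig to log in.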

Expose Minikube Cluster IP

I have a deployment with services running inside Minikube on a remote Linux VPS. These services have a ClusterIP but no external IP, and I want to access them from a web browser.
Either change the service type to LoadBalancer, which will create the external IP for you, and access the specific service through it.
OR
Set up an Ingress and an ingress controller, which can also be used to expose the service directly.
OR, for development use only (not for production):
kubectl port-forward svc/<service name> 5555:<service port>
This creates a proxy tunnel between your K8s cluster and your local machine. You can now access your service at localhost:5555.
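If you take the LoadBalancer route instead, the type can be switched in place; a sketch, with the service name left as a placeholder:

$ kubectl patch svc <service-name> -p '{"spec": {"type": "LoadBalancer"}}'

On Minikube the external IP will stay <pending> unless you also run minikube tunnel in a separate session, which allocates a routable address for LoadBalancer services.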

Accessing services without a pod (without Istio Envoy) from outside the cluster through Istio ingress rules in K8s

Steps:
1. I have created two namespaces (ns1 and ns2).
2. In ns1, I have deployed a service with the Envoy proxy enabled (istioctl kube-inject service.yaml).
3. In ns1, I have created Istio ingress rules pointing to the service, and I am able to access it from outside the cluster.
4. In ns2, I haven't deployed any service because it is my shared namespace; instead I created a headless service (ExternalName) pointing to the service deployed in ns1 (sketched below).
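For reference, an ExternalName Service of the kind step 4 describes would look roughly like this; the service names are placeholders, not taken from the question:

apiVersion: v1
kind: Service
metadata:
  name: shared-svc                                # placeholder name in ns2
  namespace: ns2
spec:
  type: ExternalName
  externalName: my-svc.ns1.svc.cluster.local      # FQDN of the real service in ns1

Note that an ExternalName service is only a DNS CNAME: it has no cluster IP and no endpoints of its own, which matters when ingress rules try to route to it.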
The problem is that I am not able to access the service referenced in ns2 from outside the cluster; it throws a 404 "service not found".
Did I miss anything here? Do we have any other solution to address this?
Thanks,
Nikhil

Minikube networking

I have a Linux build machine on which I have installed Minikube. Within the Minikube instance I have installed Artifactory, which I will be using for storing various build artifacts.
I now want to be able to do some work on my dev machine (an unrelated laptop on the same network as the Linux build machine) and push some built artifacts into Artifactory.
However, I can't figure out how to reach Artifactory. When I SSH to the Linux server and check the Minikube service, I can see that the Artifactory instance is running on a 192.168.x.x address.
Is there any way to expose Artifactory, i.e. access it from the Windows machine? Or is this not possible, and should I just install Artifactory on the Linux machine rather than in Minikube?
Expose your Artifactory service:
$ minikube service <artifactory-service> -n <namespace>
Or get the URL
$ minikube service <artifactory-service> -n <namespace> --url
If you want to access it from a remote machine, you need to do something else.
Suppose, when you run minikube service <artifactory-service> -n <namespace> --url, you get the following:
http://192.168.99.100:30654
You can access Artifactory inside Minikube using this URL, but not from a remote machine.
Now expose port 30654:
ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip) -L \*:30654:0.0.0.0:30654
You will then be able to access it from another network.
Yes, you need an ingress controller (like nginx) to expose a Kubernetes service for external access.
There are three ways to create the nginx ingress service and expose it for external access, per https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types:
LoadBalancer service type, which sets the ExternalIP automatically. This is used when there is an external, non-K8s load balancer from a cloud provider such as GCE, AWS, or Azure; that external load balancer provides the ExternalIP for the nginx ingress service.
ExternalIPs per https://kubernetes.io/docs/concepts/services-networking/service/#external-ips.
NodePort. In this approach, the service can be accessed from outside the cluster using NodeIP:NodePort/url/of/the/service.
Along with the nginx ingress controller, you'll need an ingress resource too. Refer https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example for examples.
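A minimal Ingress resource of the sort those examples show might look like this (current networking.k8s.io/v1 API); the host, service name, and port are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: artifactory-ingress        # placeholder name
spec:
  rules:
    - host: artifactory.local      # map this host to the Minikube IP, e.g. in /etc/hosts
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: artifactory  # assumed service name
                port:
                  number: 8081     # assumed Artifactory HTTP port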
Keep in mind that Minikube is a small VM with a small docker registry by default. So, it may not be possible to store a lot of build artifacts in Minikube.
To get this to work in the end, I set up ingress on Minikube and then, through entries in the hosts file and nginx as a reverse proxy, managed to get things working.

kubernetes service exposed to host ip

I created a Kubernetes service something like this on my 4-node cluster:
kubectl expose deployment distcc-deploy --name=distccsvc --port=8080 \
  --target-port=3632 --type=LoadBalancer
The problem is: how do I expose this service on an external IP? Without an external IP you cannot ping or reach this service endpoint from an outside network.
I am not sure if I need to change kube-dns or make some other change.
Ideally I would like the service to be exposed on the host IP,
like http://localhost:32876.
Hypothetically, let's say
I have a 4-node VM cluster on which I am running, say, an nginx service, and I expose it as a LoadBalancer service. How can I access nginx using this service from the VM?
Let's say the service name is nginxsvc; is there a way I can do http://<nginxsvc address>:8080? How do I get this address for my 4-node VM?
LoadBalancer does different things depending on where you deployed Kubernetes. If you deployed on AWS (using kops or some other tool), it will create an Elastic Load Balancer to expose the service. If you deployed on GCP it will do something similar (the Google terminology escapes me at the moment). These are separate VMs in the cloud routing traffic to your service. If you're playing around in Minikube, LoadBalancer doesn't really do anything; it falls back to a node port, on the assumption that the user understands Minikube isn't capable of providing a true load balancer.
LoadBalancer is supposed to expose your service via a brand-new IP address, so this is what happens on the cloud providers: they provision VMs with a separate public IP address (GCP gives a static address and AWS a DNS name). NodePort will expose the service as a port on the Kubernetes node running the pod. This isn't a workable solution for a general deployment, but it works OK while developing.
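To see what your cluster actually assigned to the service from the question, you can inspect it directly:

$ kubectl get svc distccsvc
# EXTERNAL-IP stays <pending> when no cloud load balancer is available
$ kubectl get svc distccsvc -o jsonpath='{.spec.ports[0].nodePort}'
# prints the NodePort; the service is then reachable at http://<node-ip>:<nodePort>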