Kubernetes: Connect to the outside world from a pod

I have a local Kubernetes cluster on a single machine, and a container that needs to receive a URL (like https://www.wikipedia.org/) and extract the text content from it. Essentially I need my pod to connect to the outside world. Since I am using Kubernetes v1.2.5, I need a DNS add-on like SkyDNS, but I cannot find any working example or tutorial on how to set it up. Tutorials like this one usually only explain how to make pods within the cluster talk to each other by DNS look-up.
Therefore, could anyone give me some advice on how to set up and configure a Kubernetes add-on so that pods can access the public Internet? Thank you very much!

You can simply create your pods with "dnsPolicy: Default". This gives them a resolv.conf just like the one on the host, so they will be able to resolve wikipedia.org; they will not, however, be able to resolve cluster-local services. If you want to deploy kube-dns so that you can also resolve cluster-local services, this is probably the best starting point: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
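For illustration only, a minimal Pod manifest with that setting might look like the sketch below (the name and image are placeholders, not taken from the question); if external DNS works, the container's logs will show the resolved addresses for www.wikipedia.org:

apiVersion: v1
kind: Pod
metadata:
  name: dns-default-test          # placeholder name
spec:
  dnsPolicy: Default              # use the node's resolv.conf instead of the cluster DNS
  restartPolicy: Never
  containers:
  - name: test
    image: busybox                # placeholder image
    command: ["nslookup", "www.wikipedia.org"]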

Related

Why does a K8s app fail to connect to MongoDB Atlas? - persisting k8s node IPs

I'm just trying to make an app on k8s connect to MongoDB Atlas.
So far I have tried the following:
Changed the dnsPolicy to Default and many other settings - no luck
Created an nginx-ingress, so I have the main IP address of the cluster
Added that IP to the IP access list - but still no luck
The cluster tier is M2 - so no private peering or private endpoints.
The Deployment/Pod that is trying to connect will not have a DNS name assigned to it; it is simply a service running inside the k8s cluster, processing RabbitMQ messages.
So I am not sure what I should whitelist if the service is never exposed.
I assume it would have to be something to do with the nodes or k8s egress, but I am not sure where to even look.
I have tried pretty much everything I could and still cannot find clear documentation on how to achieve the desired result, apart from whitelisting all IP addresses.
UPDATE: Managed to find this article https://www.digitalocean.com/community/questions/urgent-how-to-connect-to-mongodb-atlas-cluster-from-a-kubernetes-pod
So now I'm trying to find a way to persist node IP addresses; as I understand it, scaling nodes up or down, or upgrading them, will create new IP addresses.
So is there a way to persist them?
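If you follow the approach from that article (whitelisting the nodes' public IPs in Atlas), one way to list the current external IPs is a command like the one below; note that this only shows them at a point in time, it does not keep them stable across node replacement or upgrades:

kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'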

Node name mismatch when trying to access publicly exposed RabbitMQ through Kubernetes

I'm trying to access a publicly exposed RabbitMQ instance running in an EKS Kubernetes cluster.
The Load Balancer Service was created and works properly (I'm able to connect to the exposed port using telnet).
The problem is that when I try to connect to RabbitMQ using a CLI tool I get:
Node name (or hostname) mismatch: node "rabbit@shared-service-rabbitmq-0.shared-service-rabbitmq-headless.shared.svc.cluster.local" believes its node name is not "rabbit@some-domain.com" but something else
I didn't find a way to change the hostname or the node name using the Helm chart.
I'm pretty sure the solution is something really simple, but I have spent many hours researching and I still don't see how to fix it.
Could someone please point me in the right direction?
Thanks in advance

2 Kubernetes pods communicating without knowing the exposed address

I plan to deploy 2 Kubernetes pods with a NodePort service to expose them on the network. Now I want pod 1 to be able to access pod 2 via its service.
The problem is that I am writing the Deployment files and I don't know the IP address pod 2 will get from the cluster, but I need to set that address in pod 1's file via an env variable.
Is there another way in a Kubernetes cluster to make them accessible by something like the name of the service?
I failed to google anything for this case and hope one of you can give me a hint.
Greetings,
Martin.
Kubernetes Services are registered in kube-DNS, so you should be able to use the name of the Service to communicate between pods.
Something like SERVICENAME.SERVICENAMESPACE:PORT should work (the fully qualified form is SERVICENAME.NAMESPACE.svc.cluster.local), and it can be used in an env variable without issue.
Hope this answers the question.
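For illustration (all names below are placeholders), pod 2 would be fronted by a Service, and pod 1's Deployment would reference that Service by DNS name in an env variable rather than by IP:

apiVersion: v1
kind: Service
metadata:
  name: pod2-service                  # placeholder service name
  namespace: default
spec:
  selector:
    app: pod2                         # must match pod 2's labels
  ports:
  - port: 8080                        # port the Service exposes
    targetPort: 8080                  # port pod 2's container listens on
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod1
  template:
    metadata:
      labels:
        app: pod1
    spec:
      containers:
      - name: app
        image: my-app:latest          # placeholder image
        env:
        - name: POD2_URL              # pod 1 reads this instead of a hard-coded IP
          value: "http://pod2-service.default:8080"

A NodePort is only needed if something outside the cluster has to reach the pods; for pod-to-pod traffic the plain ClusterIP Service above is enough.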

Deploying GitLab on Minikube

I'm trying to deploy GitLab on Kubernetes using minikube through this tutorial, but I don't know what values to put in the fields global.hosts.domain, global.hosts.externalIP and certmanager-issuer.email.
The tutorial is very poor in explanations. I'm stuck on this step. Can someone tell me what these fields are and what I should put in them?
I'm trying to deploy GitLab on Kubernetes using minikube through this tutorial, but I don't know what values to put in the fields global.hosts.domain, global.hosts.externalIP and certmanager-issuer.email.
For the domain, you can likely use whatever you'd like; just be aware that when GitLab generates links designed to point to itself, they won't resolve. You can work around that with something like dnsmasq or by editing /etc/hosts, if it's important to you.
For the externalIP, that will be what minikube ip emits, and it is the IP through which you will communicate with GitLab (since you will not be able to use the Pods' IP addresses outside of minikube). If GitLab does not use a Service of type NodePort, you're in for some more hoop-jumping to expose those ports via minikube's IP.
The certmanager-issuer.email you can just forget about, because it 100% will not issue you a Let's Encrypt cert while running on minikube, unless they have fixed certmanager to use the dns01 protocol. In order for Let's Encrypt to issue you a cert, they have to connect to the webserver for which they are issuing the cert, and (as you might guess) they will not be able to connect to your minikube IP. If you want to experience SSL on your GitLab instance, issue the instance a self-signed cert and call it a draw.
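As a rough sketch only (assuming the official gitlab/gitlab Helm chart; the domain and email below are placeholders, and the IP is whatever minikube ip prints on your machine), the three values could be set in a values file like this:

# values-minikube.yaml (placeholder values)
global:
  hosts:
    domain: gitlab.example.local      # any domain you control resolution for (e.g. via /etc/hosts)
    externalIP: 192.168.99.100        # replace with the output of `minikube ip`
certmanager-issuer:
  email: you@example.com              # required by the chart, even though Let's Encrypt won't succeed here

You would then pass it with something like helm upgrade --install gitlab gitlab/gitlab -f values-minikube.yaml, assuming you added the GitLab chart repository under the name gitlab.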
The tutorial is very poor in explanations.
That's because what you are trying to do is perilous; minikube is not designed to run an entire GitLab instance, for the above and tens of other reasons. Google Cloud Platform offers generous credits to kick the tires on Kubernetes, and will almost certainly have everything you would need to make that stuff work.

How to access Kubernetes pods in my development environment?

Right now I'm accessing my pods (Postgres, port 5432) through a Service that is exposed, but since gcloud charges for every forwarding rule created, the number of pods I need to monitor or execute things in is costing me more and more. Is there a way to create a single exposed service for all of my pods? Or can I create some sort of VPN, PuTTY tunnel, or something similar? Any help would be appreciated!
I'm also using kubectl exec.
If you are looking for a managed solution, Google offers Cloud VPN for that:
https://console.cloud.google.com/networking/vpn/
If you are happy to roll your own, you can create a new Compute Engine instance on the same network as your nodes and set up OpenVPN there. This will give you a fixed IP as a freebie.
A more advanced solution is to run OpenVPN as a pod (or pods) and use a Service of type NodePort to expose it. (Optionally, manually create a single load balancer on Google Cloud to get a static IP for that.)
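As a minimal sketch of that last option (assuming the OpenVPN pod carries the label app: openvpn and listens on UDP 1194; both are placeholder assumptions):

apiVersion: v1
kind: Service
metadata:
  name: openvpn
spec:
  type: NodePort
  selector:
    app: openvpn                # must match the OpenVPN pod's labels
  ports:
  - protocol: UDP
    port: 1194                  # Service port inside the cluster
    targetPort: 1194            # port the OpenVPN container listens on
    nodePort: 31194             # exposed on every node; must be in the cluster's NodePort range

Clients would then connect to any node's IP on 31194, or to the static IP of a manually created load balancer pointing at that port.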
At the end of the day, the ideal solution depends very much on your environment and goal.