How to access the outside world from a Kubernetes pod

I'm facing a problem where I cannot send emails from a K8s pod via smtp.gmail.com on port 587. I tried dnsPolicy: ClusterFirstWithHostNet, but nothing changed. With dnsPolicy: Default everything seems OK, but I can't use that approach since pods need to be able to resolve other pods in the cluster. Btw, a ConfigMap with Google's DNS servers didn't help either:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    [“8.8.8.8”, “8.8.4.4”]
Are there any ideas?
Thanks in advance.
PS, my Kubernetes version is v1.7.2

Maybe it's just a syntax error in your ConfigMap with the quotes (" vs “).
If you run
kubectl -n kube-system logs kube-dns-xxxx -c dnsmasq
you will see a syntax error instead of a line like
setting upstreamNameservers to [8.8.8.8, 8.8.4.4]
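For reference, the same ConfigMap with plain ASCII quotes would look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]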

There is another approach to solve this problem: you can write Google's DNS server (8.8.8.8) into the container's /etc/resolv.conf during its startup.
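A minimal sketch of that idea, assuming an image with a shell (the image name and final command are only placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: mailer                      # hypothetical name
spec:
  containers:
  - name: mailer
    image: your-mailer-image        # placeholder for your real image
    command: ["sh", "-c"]
    args:
    - |
      # append Google's DNS as an extra upstream, then start the app
      echo "nameserver 8.8.8.8" >> /etc/resolv.conf
      exec your-mailer-binary       # placeholder for the real entrypoint
Note that appending keeps the cluster DNS server first in the list, so in-cluster names should still resolve.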


ingress-nginx wasn't installed properly?

Now I'm using WSL 2 and Docker Desktop on Windows 10.
I created a YAML manifest for an Ingress for my microservices, like the one below.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: posts.com
    http:
      paths:
      - path: /posts
        pathType: Prefix
        backend:
          service:
            name: posts-clusterip-srv
            port:
              number: 4000
And I installed ingress-nginx by following this installation guide
I ran this command in the guide.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
But when I ran kubectl get pods --namespace=ingress-nginx, the ingress-nginx-controller pod showed ImageInspectError.
And when I ran the command kubectl apply -f ingress-srv.yaml, it showed an error message.
Can anyone please let me know what the issue is?
I removed the namespace ingress-nginx using this command kubectl delete all --all -n ingress-nginx and ran the deploy script again.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
But the issue still happened.
There is an issue deploying the ingress-nginx controller. You need to fix that first before deploying the Ingress, because only the nginx controller knows how to handle Ingress resources.
Since there isn't much info about the controller deployment failure, you should add more details about the error. You can describe the controller pod and share its events and status to look into this further.
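For example (the label selector below is what the ingress-nginx manifests typically use; check your pod's actual labels):
kubectl -n ingress-nginx get pods
kubectl -n ingress-nginx describe pod -l app.kubernetes.io/component=controller
kubectl -n ingress-nginx get events --sort-by=.lastTimestamp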
It was because of a corrupted filesystem.
When I ran the ingress-nginx deployment command, Docker Desktop crashed because of a lack of disk space.
So I removed all corrupted, unused, or dangling Docker images:
docker system prune
I also deleted ingress-nginx and reinstalled it:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
After that, it worked well.
kubectl get pods --namespace=ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-tgkfx        0/1     Completed   0          74m
ingress-nginx-admission-patch-28l7q         0/1     Completed   3          74m
ingress-nginx-controller-7844b9db77-4dfvb   1/1     Running     0          74m

Prometheus returns error context deadline exceeded

I deployed Prometheus with a Helm chart from Rancher. Targets such as Alertmanager, Prometheus, Grafana, Node-exporter, Kubelet, etc. are configured automatically. The Alertmanager endpoint, for example, refers to the IP address of its specific pod. I also configured multiple additional targets successfully, like Jira and Confluence.
Since the service external-dns is running in the namespace kube-system, it's also configured automatically. But only this service gets the error Context deadline exceeded.
I checked from a random pod whether those metrics are accessible by running curl -s http://<IP-ADDRESS-POD>:7979/metrics. I also did this with the service IP address (kubectl get service external-dns and curl -s http://<IP-ADDRESS-SVC>:7979/metrics).
Both curl commands returned the metrics within a second, so increasing the scrape timeout won't help.
But when I exec into the Prometheus container and use the promtool debug metrics command, it shows the same behaviour as in my browser: external-dns times out on both IP addresses, while other targets return their metrics just fine.
I also don't think it's an SSL issue, because I already injected the correct CA bundle for the Jira and Confluence targets.
So, does anybody have an idea? :)
I had to edit the NetworkPolicy in the kube-system namespace. The containers from the cattle-monitoring-system namespace are now allowed to access the containers in the kube-system namespace. You can upload your NetworkPolicies here and it visualizes which resources have access and which don't. The NetworkPolicy now looks like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-network-policy
  namespace: kube-system
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: cattle-monitoring-system
  - from:
    - podSelector: {}
  podSelector: {}
  policyTypes:
  - Ingress

Disable resource reservation for the complete Kubernetes cluster

Is it somehow possible to force the scheduler to ignore the available resources on a node/cluster while scheduling new pods?
We would like to "overload" our cluster in our lab environment for testing purposes. I could not find anything about it in the docs. Thanks!
There are a bunch of feature flags you could possibly tweak to achieve this, but I would say: why not use nodeName in the pod spec and effectively bypass the scheduler?
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kube-01
The above pod will run on the node kube-01.
This doc may also help: you can try to remove the PodFitsResources filter, so the scheduler no longer checks resource requests against node capacity.
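As a rough sketch of that idea, using the legacy scheduler Policy file (passed to kube-scheduler via --policy-config-file; newer versions have replaced this with scheduler configuration plugins): leaving PodFitsResources out of the predicates list disables the resource check. The predicate and priority names below are only illustrative:
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsHostPorts"},
    {"name": "MatchNodeSelector"},
    {"name": "NoDiskConflict"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1}
  ]
}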

Grafana HTTP Error Bad Gateway and Templating init failed errors

I used Helm to install Prometheus and Grafana on a local Minikube.
$ helm install stable/prometheus
$ helm install stable/grafana
The Prometheus server, Alertmanager, and Grafana can be reached after setting up port-forwards:
$ export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
$ kubectl --namespace default port-forward $POD_NAME 9090
$ export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
$ kubectl --namespace default port-forward $POD_NAME 9093
$ export POD_NAME=$(kubectl get pods --namespace default -l "app=excited-crocodile-grafana,component=grafana" -o jsonpath="{.items[0].metadata.name}")
$ kubectl --namespace default port-forward $POD_NAME 3000
When adding the data source in Grafana, I got an HTTP Error Bad Gateway error.
Import dashboard 315 from:
https://grafana.com/dashboards/315
Then, when checking the Kubernetes cluster monitoring (via Prometheus) dashboard, I got a Templating init failed error.
Why?
In the HTTP settings of Grafana you set Access to Proxy, which means that Grafana itself (the server, not your browser) wants to reach Prometheus. Since Kubernetes uses an overlay network, that is a different IP.
There are two ways of solving this:
Set Access to Direct, so the browser directly connects to Prometheus.
Use the Kubernetes-internal IP or domain name. I don't know the details of the Prometheus Helm chart, but assuming there is a Service named prometheus, something like http://prometheus:9090 should work.
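If you are not sure what the Service is called, you can look it up using the same labels the chart uses (taken from the port-forward commands above):
kubectl get svc --namespace default -l "app=prometheus,component=server"
The in-cluster URL is then http://<service-name>:9090 from the same namespace, or http://<service-name>.<namespace>.svc.cluster.local:9090 from anywhere in the cluster.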
I turned off the firewall on the appliance; after that, adding http://prometheus:9090 as the URL no longer threw the bad gateway error.
I was never able to find a "proper" fix, but I found a workaround:
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: ClientIP
  clusterIP: None
By setting clusterIP to None, the Service becomes "headless", which means requests are sent directly to one of the pods backing the Service rather than through a cluster IP. More info here: https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
There's probably a better solution, but this is the only one I've found that actually works for me, with kube-prometheus. (I've tried docker-desktop, k3d, and kind, and all of them have the same issue, so I doubt it's the emulator's fault; and I stripped my config down to basically just kube-prometheus, so it's hard to understand where the problem lies, but oh well.)
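To confirm the headless behaviour, you can check that the Service name now resolves directly to the pod IPs (busybox here is just a throwaway client; adjust the name and namespace to your setup):
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup prometheus-k8s.monitoring.svc.cluster.local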

Kube-dns does not resolve external hosts on kubeadm bare-metal cluster

I've got a k8s cluster set up on bare-metal Ubuntu 16.04 using Weave networking, installed with kubeadm. I'm having a variety of little problems, the most recent of which is that I realized kube-dns does not resolve external addresses (e.g. google.com). Any thoughts on why? Using kubeadm did not give me much insight into the details of that part of the setup.
The issue turned out to be that a node-level firewall was interfering with the cluster networking. So there was no issue with the DNS setup.
I had the same issue on Kubernetes v1.6, and it was not a firewall issue in my case.
The problem was that I had configured DNS manually in /etc/docker/daemon.json, and those parameters are not used by kube-dns. Instead, you need to create a ConfigMap for kube-dns (pull request here and documentation here), as follows:
Solution
Create a YAML file for the ConfigMap, for example kubedns-configmap.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["<own-dns-ip>"]
Then simply apply it to Kubernetes with
kubectl apply -f kubedns-configmap.yml
Test 1
On your Kubernetes host node:
dig @10.96.0.10 google.com
(10.96.0.10 is the kube-dns Service IP in a default kubeadm cluster.)
Test 2
To test it I use a busybox image with the following resource configuration (busybox.yml):
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  # for arm
  #- image: hypriot/armhf-busybox
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
Apply the resource with
kubectl apply -f busybox.yml
And test it with the following:
kubectl exec -it busybox -- ping google.com
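If the ping fails, it can also help to check internal and external name resolution separately (an extra check, not part of the original answer):
kubectl exec -it busybox -- nslookup kubernetes.default
kubectl exec -it busybox -- nslookup google.com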