Is there a way to determine which etcd host the kubernetes apiserver is talking to? - kubernetes

Only the apiserver talks directly to etcd, and the etcd cluster consists of many hosts. I would like to see which etcd host the apiserver is talking to. This may be different for each API resource, like Pod or Node; ideally I would like to see the etcd host information for each request.
Specifically: kubernetes 1.6.13 and etcd 3.1.14, using the v3 store.
I have tried:
Enable etcd client and grpc logging on the kubernetes api server.
I think grpc only logs on unexpected events, and similarly for etcd clientv3. I was not able to get information about the etcd side of the connection.
Enable http2 debug logging with GODEBUG=http2debug=2 on the api server.
To my surprise, the http2 debug logs print a lot of information about each request, but I could not find the remote endpoint information. I am still not entirely sure about this; I may be missing a mention in the log files.
Debug logs on the etcd side.
Enabling debug logs (per the etcd "Enabling Debug Logging" documentation) prints information only about v2 store accesses. For the v3 store one could use the http://<host>:2379/debug/requests endpoint, but that is not available in my version, etcd 3.1.14.
I have not yet tried GODEBUG=http2debug=2 on the etcd side. Maybe the http2 logs on etcd have the info I need.
tcpdump or tcpflow
The apiserver <-> etcd connection is encrypted. Would these show me the request URL? I do not think I saw that information in the dumps.
Man-in-the-middle the apiserver <-> etcd connection with mitmproxy. I do not think this should be that complicated.
I hope I have missed a super obvious and simple way to accomplish this.
Update:
About using lsof based approaches:
Using lsof, we can list the connections, with endpoint information, at a single point in time. I do not think there is enough information in the lsof output to arrive at endpoint information per request: the apiserver opens a lot of connections to etcd, and looking at the code that observation seems reasonable to me. See NewStorage in here
$ sudo lsof -p 20816 | grep :2379 | wc -l
130
The connections look like this:
$ sudo lsof -p 20816 | grep :2379 | head -n 5
hyperkube 20816 root 3u IPv4 58093240 0t0 TCP compute-master7001.dsv31.boxdc.net:36360->compute-etcd7001.dsv31.boxdc.net:2379 (ESTABLISHED)
hyperkube 20816 root 5u IPv4 58085987 0t0 TCP compute-master7001.dsv31.boxdc.net:26005->compute-etcd7002.dsv31.boxdc.net:2379 (ESTABLISHED)
hyperkube 20816 root 6u IPv4 58085988 0t0 TCP compute-master7001.dsv31.boxdc.net:55650->compute-etcd7003.dsv31.boxdc.net:2379 (ESTABLISHED)
hyperkube 20816 root 7u IPv4 58102030 0t0 TCP compute-master7001.dsv31.boxdc.net:36366->compute-etcd7001.dsv31.boxdc.net:2379 (ESTABLISHED)
hyperkube 20816 root 8u IPv4 58085990 0t0 TCP compute-master7001.dsv31.boxdc.net:55654->compute-etcd7003.dsv31.boxdc.net:2379 (ESTABLISHED)
........
Looking at this, I cannot tell which etcd host is used for each request between the apiserver and etcd.
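At least the spread of connections across etcd hosts can be summarized by aggregating the lsof output. A sketch: the sample lsof lines below are embedded (with shortened hostnames) purely for illustration; in practice you would pipe the real `sudo lsof -p $PID | grep :2379` output into the same awk/sort/uniq pipeline. As noted above, this shows the per-connection spread, not which connection served a given request.

```shell
#!/bin/sh
# Sample lines from `sudo lsof -p <apiserver-pid> | grep :2379` (illustrative);
# field 9 holds the local->remote address pair.
sample='hyperkube 20816 root 3u IPv4 58093240 0t0 TCP master:36360->compute-etcd7001:2379 (ESTABLISHED)
hyperkube 20816 root 5u IPv4 58085987 0t0 TCP master:26005->compute-etcd7002:2379 (ESTABLISHED)
hyperkube 20816 root 6u IPv4 58085988 0t0 TCP master:55650->compute-etcd7003:2379 (ESTABLISHED)
hyperkube 20816 root 7u IPv4 58102030 0t0 TCP master:36366->compute-etcd7001:2379 (ESTABLISHED)'

# Extract the remote host from each line and count connections per etcd host.
counts=$(printf '%s\n' "$sample" |
  awk '{ split($9, conn, "->"); split(conn[2], dst, ":"); print dst[1] }' |
  sort | uniq -c | sort -rn)
echo "$counts"
```

The first column is the connection count per etcd host; here compute-etcd7001 would show 2 connections.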
Update:
Looking at the etcdv3 client code that ships with kubernetes 1.6.13, I think the grpc.Balancer.Get function returns the endpoint address used for each grpc request. One could add a log statement there and make the apiserver log the etcd address per request.

Find the PID of the apiserver:
ps aux | grep apiserver
Then use lsof to see the open socket connections:
lsof -p $PID | grep :2379

Related

Libvirt dnsmasq is running on all interfaces, this is undesired

So currently I'm running libvirt on my Debian box, and its DHCP server is listening on all interfaces. I would like to restrict that down to the bridge interface where the VMs live. I can kill off the DHCP server temporarily to accomplish what I need, but would like something more permanent.
I'm sure there is some option I can put in the dhcp server portion of the network config to make this happen.
<network>
<name>default</name>
<uuid>2fb34907-96bc-4af1-89a2-4f1f872a2600</uuid>
<forward mode='nat'/>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:c3:d2:ea'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
<host mac='52:54:00:21:df:dc' ip='192.168.122.2'/>
</dhcp>
</ip>
<route address='192.168.122.2' prefix='32' gateway='192.168.122.110'/>
</network>
root@calypso-deb:~# lsof -i -n | grep dnsmasq
dnsmasq 1656 nobody 3u IPv4 29150 0t0 UDP *:bootps
dnsmasq 1656 nobody 5u IPv4 29153 0t0 UDP 192.168.122.1:domain
dnsmasq 1656 nobody 6u IPv4 29154 0t0 TCP 192.168.122.1:domain (LISTEN)
root@calypso-deb:~#
Here’s a suggestion (which is meant to be a comment rather than an answer, but I cannot comment).
User Jonathon Reinhart posted an answer that describes how to pass options to dnsmasq (since libvirt v.5.6.0). See also “Network XML format” in the libvirt documentation. This got me wondering whether passing something like --interface=virbr0 --bind-interfaces would do what you need in this case.
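If editing the network definition directly, passing dnsmasq options through the network XML (the libvirt passthrough mechanism available since v5.6.0) might look like the following sketch. The xmlns declaration is required; whether `bind-interfaces` and `interface=virbr0` interact cleanly with the options libvirt already passes to dnsmasq is an open question here, so treat this as something to try, not a confirmed fix:

```xml
<network xmlns:dnsmasq='http://libvirt.org/schemas/network/dnsmasq/1.0'>
  <name>default</name>
  <!-- ...existing forward/bridge/ip/dhcp elements as in the config above... -->
  <dnsmasq:options>
    <dnsmasq:option value='bind-interfaces'/>
    <dnsmasq:option value='interface=virbr0'/>
  </dnsmasq:options>
</network>
```

You would apply this with virsh net-edit default and restart the network.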
It should already be listening only on the virbr0 interface, as the config shows.
You can check that with lsof -i -n | grep dnsmasq or similar tools.

netstat showing foreign ports as kubernetes:port. What does this mean?

I am using a Windows 10 Pro machine.
When I run netstat, it shows kubernetes:port as the foreign address for active connections.
What does this mean? I have checked and there is no kubernetes cluster running on my machine.
How do I close these connections?
Minikube status:
$ minikube status
host:
kubelet:
apiserver:
kubectl:
That happens because of the way netstat renders its output; it has nothing to do with an actual Kubernetes cluster.
I have Docker Desktop for Windows and it adds this to the hosts file:
# Added by Docker Desktop
192.168.43.196 host.docker.internal
192.168.43.196 gateway.docker.internal
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
There is a record which maps 127.0.0.1 to kubernetes.docker.internal. When netstat renders its output, it resolves each foreign address: it looks at the hosts file, sees this record, and prints kubernetes, which is what you see in the console. You can try changing it to
127.0.0.1 tomato.docker.internal
With this, netstat will print:
Proto Local Address Foreign Address State
TCP 127.0.0.1:6940 tomato:6941 ESTABLISHED
TCP 127.0.0.1:6941 tomato:6940 ESTABLISHED
TCP 127.0.0.1:8080 tomato:40347 ESTABLISHED
TCP 127.0.0.1:8080 tomato:40348 ESTABLISHED
TCP 127.0.0.1:8080 tomato:40349 ESTABLISHED
So what actually happens is that there are connections from localhost to localhost (netstat -b will show the apps that create them). Nothing to do with Kubernetes.
It seems that Docker for Windows changed your hosts file. So, if you want to get rid of the kubernetes label on these connections, just comment out the corresponding lines in the hosts file.
The hosts file on Windows 10 is located in C:\Windows\System32\drivers\etc and
the records may look something like 127.0.0.1 kubernetes.docker.internal.
I am pretty sure doing so will disrupt your Docker service on Windows (though I am not an expert), so don't forget to uncomment those lines whenever you need the Docker service back.
OK, it looks like your minikube instance is definitely deleted. Keep in mind that on Linux and other *nix-based systems it is totally normal for many processes to use network sockets to communicate with each other; e.g. you will see many established connections with both local and foreign addresses set to localhost:
tcp 0 0 localhost:45402 localhost:2379 ESTABLISHED
tcp 0 0 localhost:45324 localhost:2379 ESTABLISHED
tcp 0 0 localhost:2379 localhost:45300 ESTABLISHED
tcp 0 0 localhost:45414 localhost:2379 ESTABLISHED
tcp 0 0 localhost:2379 localhost:45388 ESTABLISHED
tcp 0 0 localhost:40600 localhost:8443 ESTABLISHED
kubernetes in your case is nothing more than the hostname of one of your machines/VMs/instances. Maybe you named the machine on which you ran minikube kubernetes, and that's why this hostname currently appears in your active network connections. Basically it has nothing to do with a running kubernetes cluster.
To make it clearer, you may cat the contents of your /etc/hosts file and look for a kubernetes entry. Then you can compare it with your network interface addresses (run ip -4 a). Most probably the kubernetes entry in /etc/hosts maps to one of them.
Let me know if it clarifies your doubts.
EDIT:
I've reproduced it with Minikube on my Linux instance and noticed exactly the same behaviour, but it looks like the ESTABLISHED connections show up only after a successful minikube stop; after minikube delete they're gone. It looks like those connections indeed belong to various components of kubernetes but for some reason are not terminated. Closing established network connections is the responsibility of the application that creates them, and it looks like for some reason minikube does not terminate them.
If you run:
sudo netstat -ntp ### important: it must be run as superuser
it additionally shows a PID/Program name column in which you can see which program established each connection. You will see a lot of ESTABLISHED network connections belonging to etcd and kube-apiserver.
First I tried rebooting the whole instance. That obviously closes all the connections, but I then verified a few times that a successfully performed minikube delete also closes all of them.
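As an illustration, that PID/Program name column can be aggregated to count established connections per program. A sketch with sample netstat lines embedded; on a real system you would pipe `sudo netstat -ntp` into the same pipeline instead (the sample PIDs and programs are made up for illustration):

```shell
#!/bin/sh
# Sample `netstat -ntp` lines (illustrative); field 6 is the state and
# field 7 is PID/Program name.
sample='tcp 0 0 127.0.0.1:45402 127.0.0.1:2379 ESTABLISHED 2886/etcd
tcp 0 0 127.0.0.1:45324 127.0.0.1:2379 ESTABLISHED 2886/etcd
tcp 0 0 127.0.0.1:40600 127.0.0.1:8443 ESTABLISHED 3104/kube-apiserver'

# Keep only ESTABLISHED lines, strip the PID, and count per program name.
counts=$(printf '%s\n' "$sample" |
  awk '$6 == "ESTABLISHED" { split($7, p, "/"); print p[2] }' |
  sort | uniq -c | sort -rn)
echo "$counts"
```

With the sample data this reports 2 connections for etcd and 1 for kube-apiserver.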
Additionally you may want to check available docker containers by running:
docker ps
or:
docker container ls
After stopping the minikube instance it still shows those containers, and that looks like the reason why a lot of connections with certain kubernetes components are still shown by the netstat command.
However, after minikube delete, neither the containers nor the ESTABLISHED connections with kubernetes cluster components are present any more.

Fix IP with port to IP without port

I have a website, example.com, hosted on OVH. I would like to point a subdomain, shop.example.com, to another website hosted on another server (95.110.189.135:8069). The problem is that I can't CNAME to an IP with a port.
I used Ubuntu for my odoo server
I've got Odoo with its database on my VPS server. Right now it is reachable on an IP with a port (example: 55.55.55.55:8069). So now:
How can I reach it on the IP without the port?
If I want a domain name - how can I do this?
I found the solution; it's easy to redirect port 80.
To do that, add this line to the file
/etc/rc.local
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8069
then the file will look like this
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8069
exit 0
save and then restart the server
You cannot use plain DNS to direct traffic to another port. This is not possible with either a canonical name (CNAME record) or an address (A record); these DNS record types are only used for name resolution.
To solve your configuration issue you can use a reverse proxy, e.g. Nginx. You can find example configurations on the Odoo site at https://www.odoo.com/documentation/11.0/setup/deploy.html#https, which describes how to use https on port 443 to proxy Odoo as an upstream service on port 8069. For public services you should use encrypted https, not plain http. Point shop.example.com in DNS to your "other" server's IP address, and run both Odoo and Nginx on that server: Odoo on port 8069, and Nginx on https port 443 proxying connections to the Odoo upstream service on localhost:8069.
Hope this helps you forward. Please check your configuration with someone who has experience with this kind of setup before you go to production; that will make sure your configuration is secure.
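A minimal Nginx server block along these lines might look like the following sketch. The hostname and certificate paths are placeholders, and real Odoo deployments usually need more (e.g. the longpolling port and websocket headers); the Odoo deployment documentation linked above has a fuller example:

```nginx
server {
    listen 443 ssl;
    server_name shop.example.com;

    # Placeholder certificate paths - substitute your real certificates.
    ssl_certificate     /etc/ssl/certs/shop.example.com.crt;
    ssl_certificate_key /etc/ssl/private/shop.example.com.key;

    # Proxy everything to the Odoo upstream service on localhost:8069.
    location / {
        proxy_pass http://127.0.0.1:8069;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```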

traefik failed external connectivity - 443 already in use

I am running a server, I have pointed my domain via cloudflare to my server IP, and I have a signed SSL certificate via LetsEncrypt for my domain. My server runs an apache webserver using port 443 for the ssl traffic.
I installed docker and run a couple of containers. My goal is to get traefik up and running using port 443 as well and route all docker traffic through it. Is that even possible?
I used this here: https://www.linuxserver.io/2018/02/03/using-traefik-as-a-reverse-proxy-with-docker/ to write my traefik.toml file and my docker-compose file.
However, whenever I start up the docker-compose all services are up except traefik.
I receive following error:
ERROR: for traefik Cannot start service traefik: driver failed programming external connectivity on endpoint traefik (2d10b64b47e62e7dcb5f94265529fb647e4ba62dbeeb43c201ea02d39f60b381): Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use
ERROR: Encountered errors while bringing up the project.
I wonder if the reason is that I already use port 443 for my domain?!
How can I fix this?
Thanks for your help!
You are using docker on Linux.
One of these commands should give you a clue about what is already using the port:
sudo lsof -i -P -n | grep LISTEN
netstat -tulpn | grep LISTEN
example:
docker-pr 2405 root 4u IPv6 28930 0t0 TCP *:443 (LISTEN)
If 443 is occupied by docker, it means that some YML file exposes port 443 besides the Traefik one (if it is some other application, change its port or stop it [pkill]).
You can try to separate the "services" of the YML into different YML files and turn them on one by one, in order to find the image that is causing the conflict.
(If you separate them, remember to create the appropriate external "networks".)
(By the way, I recommend that the first image to start be the traefik one.)
(You can also copy and paste your YML files for better help.)
Edit
RewriteEngine on
RewriteCond %{HTTPS} off
# RewriteCond %{SERVER_PORT} ^9000$
RewriteRule ^(.*)$ https://%{HTTP_HOST}:9443%{REQUEST_URI}
edit2, in the traefik toml config (I have no idea whether this works, try it):
# Entrypoints, http and https
[entryPoints]
# http should be redirected to https
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
# https is the default
[entryPoints.https]
address = ":9443"
[entryPoints.https.tls]
The other solution that occurs to me is to make your main apache act as a proxy tunnel, BUT then you do not need traefik at all :P
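A hedged sketch of what such an Apache proxy vhost might look like (the hostname, certificate paths, and internal port are all assumptions; this also assumes Traefik exposes a plain-HTTP entrypoint on an internal port, here 8081, since Apache would already own 443):

```apache
# Requires mod_proxy and mod_proxy_http to be enabled.
<VirtualHost *:443>
    # Placeholder hostname and certificate paths.
    ServerName apps.example.com
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem

    ProxyPreserveHost On
    # Forward everything to a plain-HTTP Traefik entrypoint on an internal port.
    ProxyPass        / http://127.0.0.1:8081/
    ProxyPassReverse / http://127.0.0.1:8081/
</VirtualHost>
```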
I've got the same issue.
I've tried everything I found on stackoverflow and github.
Only this worked for me:
sudo lsof -i -P -n | grep LISTEN
And I got something like this:
And I decided to kill the first PID (the one related to port 80):
sudo kill -9 1876
And then I started the service with docker on the network and everything worked fine. Hooray!!!

Not able to access Centos Apache page from another Computer

Today I started apache on CentOS and I'm able to open the test page on the same machine as localhost, but I'm unable to open it from another computer. The CentOS server is on a VLAN (via a switch) behind a router. I'm able to ping the server from the other side using my laptop, but I'm not able to open the test page in my browser. I have another server in the same VLAN which I'm able to access from my laptop.
Also here is some entries of iptables -L
Chain INPUT
ACCEPT tcp -- anywhere anywhere tcp dpt:http
ACCEPT udp -- anywhere anywhere udp dpt:http
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
I'm not sure what else I need to check.
Security theory says to first drop the firewall and test (iptables -F). If you can access the page then, it really is an iptables issue; if you are still unable to reach your service, check whether httpd has a specific bind: netstat -an | grep "LISTEN ". If you see something like:
"tcp 0 0 127.0.0.1:80 0.0.0.0:* LISTEN "
it means that your server is listening only on the localhost IP; check for specific httpd binds in /etc/httpd/conf/httpd.conf
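For reference, the difference comes down to the Listen directive in httpd.conf; roughly:

```apache
# Listens only on loopback - unreachable from other machines:
# Listen 127.0.0.1:80

# Listens on all addresses (the usual default):
Listen 80
```

After changing it, restart httpd for the new bind to take effect.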
If you require some more help, keep posting =)