Trying to create a new Postgres connection - postgresql

I'm working on a Java microservice application deployed with Docker, using Postgres as the database. I am on a Mac. Yesterday I successfully created a connection in DBeaver using my Mac's host IP: 192.168.1.73. Today, I cannot connect with this host.
I tried $ telnet 192.168.1.73 5432 and got the following output:
Trying 192.168.1.73...
telnet: connect to address 192.168.1.73: Connection refused
telnet: Unable to connect to remote host
What can I do?
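For reference, a refused connection like this usually means nothing is listening on that host/port pair. Two generic checks on the Mac (illustrative commands, not part of the original post):
# is the container up, and is the port actually published?
docker ps --format '{{.Names}}\t{{.Ports}}'
# is anything listening on 5432 on the host itself?
lsof -nP -iTCP:5432 -sTCP:LISTEN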
EDIT
I run docker-compose. Here is the extract for the database from docker-compose.yml:
database:
  image: postgres:9.5
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=Esprit292948
    - POSTGRES_DB=immo_db_local
Here is the beginning of docker-compose.yml. I deliberately hid the details:
version: '2'
services:
  eurekaserver:
    image: ...
    ports:
      - ...
  configserver:
    image: ...
    ports:
      - ...
    environment:
      EUREKASERVER_URI: ...
      EUREKASERVER_PORT: ...
      ENCRYPT_KEY: ...
  gateservice:
    image: ...
    ports:
      - ...
    environment:
      PROFILE: ...
      SERVER_PORT: ...
      CONFIGSERVER_URI: ...
      EUREKASERVER_URI: ...
      EUREKASERVER_PORT: ...
      DATABASESERVER_PORT: "5432"
      CONFIGSERVER_PORT: "8888"
      AUDIT_PORT: "8087"
      DB_PORT: "8930"
  database:
    image: postgres:9.5
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=Esprit292948
      - POSTGRES_DB=immo_db_local
  optimisationfiscaleservice:
    image: ...
    ports:
      - ...
etc......
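When the connection stops working, it is also worth confirming that the database container itself came up (generic docker-compose checks, not from the original post):
# container states for all services in the compose file
docker-compose ps
# recent output of the database container, e.g. startup errors
docker-compose logs database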
Here is the ifconfig result
$ ifconfig
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=50b<RXCSUM,TXCSUM,VLAN_HWTAGGING,AV,CHANNEL_IO>
ether a8:20:66:31:0a:2a
nd6 options=201<PERFORMNUD,DAD>
media: autoselect (none)
status: inactive
en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=400<CHANNEL_IO>
ether 20:c9:d0:e0:5d:a1
inet6 fe80::41:83de:7236:ba7a%en1 prefixlen 64 secured scopeid 0x5
inet 192.168.1.73 netmask 0xffffff00 broadcast 192.168.1.255
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
en2: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=460<TSO4,TSO6,CHANNEL_IO>
ether 82:0a:60:f7:bc:80
media: autoselect <full-duplex>
status: inactive
fw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 4078
lladdr a8:20:66:ff:fe:83:de:f2
nd6 options=201<PERFORMNUD,DAD>
media: autoselect <full-duplex>
status: inactive
bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=63<RXCSUM,TXCSUM,TSO4,TSO6>
ether 82:0a:60:f7:bc:80
Configuration:
id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
ipfilter disabled flags 0x0
member: en2 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 6 priority 0 path cost 0
nd6 options=201<PERFORMNUD,DAD>
media: <unknown type>
status: inactive
p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304
options=400<CHANNEL_IO>
ether 02:c9:d0:e0:5d:a1
media: autoselect
status: inactive
awdl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1484
options=400<CHANNEL_IO>
ether 0e:b6:7e:12:5d:fb
inet6 fe80::cb6:7eff:fe12:5dfb%awdl0 prefixlen 64 scopeid 0xa
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
inet6 fe80::3b2:bb0f:18:dcb7%utun0 prefixlen 64 scopeid 0xb
nd6 options=201<PERFORMNUD,DAD>
utun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000
inet6 fe80::b744:4974:1dd2:5fda%utun1 prefixlen 64 scopeid 0xc
nd6 options=201<PERFORMNUD,DAD>

I hadn't paid attention: there was an error message,
Name does not resolve
So I added apk add bind-tools to the Dockerfiles to get logs, following this post: Docker Compose and Postgres : Name does not resolve.
After which I had another error:
Docker error : no space left on device
So, following this post, Docker error : no space left on device, I ran
docker system prune
And then I succeeded in connecting with DBeaver.
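For reference, Docker's disk usage can be inspected before pruning, to see roughly what the prune will reclaim (generic commands, not from the original post):
# show how much space images, containers and volumes consume
docker system df
# remove stopped containers, dangling images and unused networks
docker system prune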

Related

Ubuntu UDP Multicast not received on secondary interface

My test setup looks as follows:
Ubuntu 22.04
Kernel 5.15.1025 Realtime
I210 enp1s0 (10.1.180.98)
I225 enp2s0 (10.1.180.97)
Netgear GS108 switch
enp1s0 and enp2s0 are connected to the switch
sending UDP packets over enp1s0 to multicast address 224.0.0.22
listening on enp2s0 (-> external loopback; a standalone receive test is sketched after this list)
open62541 UDP PubSub
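As a standalone receive test, independent of open62541, socat can join the group on the second NIC and print whatever arrives (an illustrative sketch; port 4840 is assumed from OPC UA PubSub defaults and is not stated in the post):
# join 224.0.0.22 on enp2s0 and dump received datagrams to stdout
socat -u UDP4-RECVFROM:4840,ip-add-membership=224.0.0.22:enp2s0,fork -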
General:
ifconfig
enp1s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.180.98 netmask 255.255.255.0 broadcast 10.1.180.255
inet6 fe80::36fc:cf83:b6f7:e7eb prefixlen 64 scopeid 0x20<link>
ether 00:07:32:a5:c3:88 txqueuelen 1000 (Ethernet)
RX packets 10823 bytes 3936173 (3.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 287226 bytes 29921782 (29.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0x7fe00000-7fe1ffff
enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.180.97 netmask 255.255.255.0 broadcast 10.1.180.255
inet6 fe80::a22:bab1:5e74:d3ad prefixlen 64 scopeid 0x20<link>
ether 00:07:32:a5:c3:89 txqueuelen 1000 (Ethernet)
RX packets 287442 bytes 29411683 (29.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3506 bytes 174754 (174.7 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0x7fc00000-7fcfffff
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 10698 bytes 924534 (924.5 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10698 bytes 924534 (924.5 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 10.1.180.10 0.0.0.0 UG 0 0 0 enp1s0
0.0.0.0 10.1.180.10 0.0.0.0 UG 0 0 0 enp2s0
10.1.180.0 0.0.0.0 255.255.255.0 U 0 0 0 enp2s0
10.1.180.0 0.0.0.0 255.255.255.0 U 0 0 0 enp1s0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 enp2s0
# netstat -g
IPv6/IPv4 Group Memberships
Interface RefCnt Group
--------------- ------ ---------------------
lo 1 224.0.0.251
lo 1 224.0.0.1
enp1s0 1 224.0.0.251
enp1s0 1 224.0.0.1
enp2s0 1 224.0.0.22
enp2s0 1 224.0.0.251
enp2s0 1 224.0.0.1
lo 1 ff02::fb
lo 1 ip6-allnodes
lo 1 ff01::1
enp1s0 1 ff02::fb
enp1s0 1 ff02::1:fff7:e7eb
enp1s0 1 ip6-allnodes
enp1s0 1 ff01::1
enp2s0 1 ff02::fb
enp2s0 1 ff02::1:ff74:d3ad
enp2s0 1 ip6-allnodes
enp2s0 1 ff01::1
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
before sending:
# netstat -i
Kernel Interface table
Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
enp1s0 1500 12397 0 0 0 572786 0 0 0 BMRU
enp2s0 1500 562000 0 0 0 4015 0 0 0 BMRU
lo 65536 12782 0 0 0 12782 0 0 0 LRU
# netstat -s -u
IcmpMsg:
InType3: 6576
OutType3: 6576
Udp:
5710 packets received
902 packets to unknown port received
0 packet receive errors
576693 packets sent
0 receive buffer errors
0 send buffer errors
IgnoredMulti: 259
UdpLite:
IpExt:
InMcastPkts: 110
OutMcastPkts: 567399
InBcastPkts: 259
InOctets: 54256683
OutOctets: 52916072
InMcastOctets: 10142
OutMcastOctets: 50498445
InBcastOctets: 19383
InNoECTPkts: 574627
MPTcpExt:
# ethtool -S enp2s0 | grep rx
rx_packets: 561920
rx_bytes: 59893407
rx_broadcast: 5508
rx_multicast: 556412
rx_crc_errors: 0
rx_no_buffer_count: 0
rx_missed_errors: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
rx_long_byte_count: 59893407
rx_smbus: 0
os2bmc_rx_by_bmc: 0
os2bmc_rx_by_host: 0
rx_hwtstamp_cleared: 0
rx_lpi_counter: 0
rx_errors: 0
rx_length_errors: 0
rx_over_errors: 0
rx_frame_errors: 0
rx_fifo_errors: 0
rx_queue_0_packets: 561750
rx_queue_0_bytes: 57629925
rx_queue_0_drops: 0
rx_queue_0_csum_err: 0
rx_queue_0_alloc_failed: 0
rx_queue_1_packets: 0
rx_queue_1_bytes: 0
rx_queue_1_drops: 0
rx_queue_1_csum_err: 0
rx_queue_1_alloc_failed: 0
rx_queue_2_packets: 148
rx_queue_2_bytes: 13290
rx_queue_2_drops: 0
rx_queue_2_csum_err: 0
rx_queue_2_alloc_failed: 0
rx_queue_3_packets: 22
rx_queue_3_bytes: 2512
rx_queue_3_drops: 0
rx_queue_3_csum_err: 0
rx_queue_3_alloc_failed: 0
after sending:
# netstat -i
Kernel Interface table
Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
enp1s0 1500 12465 0 0 0 618087 0 0 0 BMRU
enp2s0 1500 607349 0 0 0 4031 0 0 0 BMRU
lo 65536 12800 0 0 0 12800 0 0 0 LRU
# netstat -s -u
IcmpMsg:
InType3: 6588
OutType3: 6588
Udp:
5715 packets received
902 packets to unknown port received
0 packet receive errors
621972 packets sent
0 receive buffer errors
0 send buffer errors
IgnoredMulti: 263
UdpLite:
IpExt:
InMcastPkts: 112
OutMcastPkts: 612677
InBcastPkts: 263
InOctets: 58289081
OutOctets: 56953872
InMcastOctets: 10222
OutMcastOctets: 54527991
InBcastOctets: 19816
InNoECTPkts: 619936
MPTcpExt:
# ethtool -S enp2s0 | grep rx
rx_packets: 607351
rx_bytes: 64748001
rx_broadcast: 5666
rx_multicast: 601685
rx_crc_errors: 0
rx_no_buffer_count: 0
rx_missed_errors: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
rx_long_byte_count: 64748001
rx_smbus: 0
os2bmc_rx_by_bmc: 0
os2bmc_rx_by_host: 0
rx_hwtstamp_cleared: 0
rx_lpi_counter: 0
rx_errors: 0
rx_length_errors: 0
rx_over_errors: 0
rx_frame_errors: 0
rx_fifo_errors: 0
rx_queue_0_packets: 607176
rx_queue_0_bytes: 62302224
rx_queue_0_drops: 0
rx_queue_0_csum_err: 0
rx_queue_0_alloc_failed: 0
rx_queue_1_packets: 0
rx_queue_1_bytes: 0
rx_queue_1_drops: 0
rx_queue_1_csum_err: 0
rx_queue_1_alloc_failed: 0
rx_queue_2_packets: 153
rx_queue_2_bytes: 13861
rx_queue_2_drops: 0
rx_queue_2_csum_err: 0
rx_queue_2_alloc_failed: 0
rx_queue_3_packets: 22
rx_queue_3_bytes: 2512
rx_queue_3_drops: 0
rx_queue_3_csum_err: 0
rx_queue_3_alloc_failed: 0
Dropwatch output is as follows:
# sudo dropwatch -l ksa
2 drops at igmp_rcv+10c (0xffffffff9dd7202c) [software]
1 drops at unix_stream_connect+36a (0xffffffff9ddbb10a) [software]
2 drops at ip_rcv_finish_core.constprop.0+19c (0xffffffff9dd1930c) [software]
2048 drops at ip_rcv_finish_core.constprop.0+19c (0xffffffff9dd1930c) [software]
2036 drops at ip_rcv_finish_core.constprop.0+19c (0xffffffff9dd1930c) [software]
1 drops at __udp4_lib_mcast_deliver+31f (0xffffffff9dd5d67f) [software]
1 drops at __udp4_lib_mcast_deliver+31f (0xffffffff9dd5d67f) [software]
If I run this setup (exactly the same UDP packets, verified with tcpdump) with a real second Windows device, receiving works. But this "external loopback" doesn't receive anything (I want to build a TSN setup with this, so the Windows machine is not an option).
If I don't specify the interface for receiving, I get the packets (but I don't know whether they come from the loopback).
The following steps I tried without success:
Disabling RP_FILTER (in every combination, for all available interfaces)
Enabling promiscuous mode (but the ethtool output says there is no problem on the NIC side)
What did I miss?
Best regards,
Patrick
My goal is to send UDP multicast packets on the first interface and receive them on the second interface (for performance analysis, and to simulate currently missing master hardware).
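For reference, the ip_rcv_finish_core drops in the dropwatch output are consistent with the kernel discarding packets whose source address is local to the receiving box. Besides rp_filter, the accept_local sysctl takes part in that decision. An illustrative sketch using the interface names from the post (not a confirmed fix; behaviour depends on the kernel configuration):
# reverse-path filtering off on the relevant interfaces
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.enp1s0.rp_filter=0
sysctl -w net.ipv4.conf.enp2s0.rp_filter=0
# accept packets whose source address is local to this machine
sysctl -w net.ipv4.conf.all.accept_local=1
sysctl -w net.ipv4.conf.enp2s0.accept_local=1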

Can't join cluster: connection refused during kubeadm join

I am getting the following error when I try to join the cluster
sudo kubeadm join 10.1.1.150:6443 --token ypcdg7.w6pun0nd31c4q5c2 --discovery-token-ca-cert-hash sha256:1ac79447f8dee9d90d592a3ead3d6c54ce7046dcd0b3854917c93cf6bbff7894
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0710 13:39:56.080260 2415 kernel_validator.go:81] Validating kernel version
I0710 13:39:56.080423 2415 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "10.1.1.150:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.1.1.150:6443"
[discovery] Failed to request cluster info, will try again: [Get https://10.1.1.150:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.1.1.150:6443: connect: connection refused]
I get the same error with curl on the machine that I want to join to the cluster.
But if I run
curl -k https://10.1.1.150:6443/api/v1/namespaces/kube-public/configmaps/cluster-info
from any other machine on the local net (10.1.1.0/24), I get a good JSON response.
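Since only the joining machine gets the refusal, a couple of generic checks from that node can help localise the problem (illustrative commands, not from the original post):
# which route and source address does the failing node use to reach the master?
ip route get 10.1.1.150
# a local firewall rule that rejects the traffic would show up here
sudo iptables -L -n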
Some useful info:
I can ping my master node (10.1.1.150).
Port 6443 is open on my master node; this is netstat on my master:
jp@tensor3:~$ sudo netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1110/sshd
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 6783/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 7375/kube-proxy
tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 7120/kube-scheduler
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 7065/etcd
tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 7058/kube-controlle
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 7065/etcd
tcp 0 0 127.0.0.1:37491 0.0.0.0:* LISTEN 6783/kubelet
tcp6 0 0 :::22 :::* LISTEN 1110/sshd
tcp6 0 0 :::10250 :::* LISTEN 6783/kubelet
tcp6 0 0 :::6443 :::* LISTEN 7130/kube-apiserver
tcp6 0 0 :::10256 :::* LISTEN 7375/kube-proxy
udp 0 0 0.0.0.0:68 0.0.0.0:* 980/dhclient
udp 0 0 0.0.0.0:8472 0.0.0.0:* -
I'm using flannel for network communication
ifconfig and route on my master
jp@tensor3:~$ ifconfig -a
cni0 Link encap:Ethernet HWaddr 0a:58:0a:f4:00:01
inet addr:10.244.0.1 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::ec67:aeff:fe90:4fbc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:31641 errors:0 dropped:0 overruns:0 frame:0
TX packets:31925 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2032440 (2.0 MB) TX bytes:11726575 (11.7 MB)
docker0 Link encap:Ethernet HWaddr 02:42:44:f4:4f:cb
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ens160 Link encap:Ethernet HWaddr 00:50:56:3d:73:84
inet addr:10.1.1.150 Bcast:10.1.1.255 Mask:255.255.255.0
inet6 addr: fe80::250:56ff:fe3d:7384/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:294081 errors:0 dropped:0 overruns:0 frame:0
TX packets:74724 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:297445831 (297.4 MB) TX bytes:5900453 (5.9 MB)
flannel.1 Link encap:Ethernet HWaddr 06:a7:4c:1d:6f:cd
inet addr:10.244.0.0 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: fe80::4a7:4cff:fe1d:6fcd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:8 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:1219503 errors:0 dropped:0 overruns:0 frame:0
TX packets:1219503 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:271937625 (271.9 MB) TX bytes:271937625 (271.9 MB)
vethd451ffcc Link encap:Ethernet HWaddr 8a:f1:f0:05:80:f3
inet6 addr: fe80::88f1:f0ff:fe05:80f3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:8799 errors:0 dropped:0 overruns:0 frame:0
TX packets:8860 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:687962 (687.9 KB) TX bytes:3256670 (3.2 MB)
vethed2073fb Link encap:Ethernet HWaddr ea:9e:43:5e:4e:30
inet6 addr: fe80::e89e:43ff:fe5e:4e30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:8800 errors:0 dropped:0 overruns:0 frame:0
TX packets:8887 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:688228 (688.2 KB) TX bytes:3258477 (3.2 MB)
jp@tensor3:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.1.1.29 0.0.0.0 UG 0 0 0 ens160
10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 ens160
10.244.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
ifconfig and route on my node
jp@tensor2:~$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:4e:2f:0e:97
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ens160 Link encap:Ethernet HWaddr 00:50:56:8c:0a:cd
inet addr:10.1.1.151 Bcast:10.1.1.255 Mask:255.255.255.0
inet6 addr: fe80::250:56ff:fe8c:acd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:17470 errors:0 dropped:0 overruns:0 frame:0
TX packets:913 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1511461 (1.5 MB) TX bytes:101801 (101.8 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:160 errors:0 dropped:0 overruns:0 frame:0
TX packets:160 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:11840 (11.8 KB) TX bytes:11840 (11.8 KB)
jp@tensor2:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.1.1.29 0.0.0.0 UG 0 0 0 ens160
10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 ens160
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
and this is the status of my cluster
jp@tensor3:~$ sudo kubectl get all --namespace=kube-system
[sudo] password for jp:
NAME READY STATUS RESTARTS AGE
pod/coredns-78fcdf6894-bblcs 1/1 Running 0 1h
pod/coredns-78fcdf6894-wrmj4 1/1 Running 0 1h
pod/etcd-tensor3 1/1 Running 0 1h
pod/kube-apiserver-tensor3 1/1 Running 0 1h
pod/kube-controller-manager-tensor3 1/1 Running 0 1h
pod/kube-flannel-ds-amd64-p7jmq 1/1 Running 0 1h
pod/kube-proxy-qg7jj 1/1 Running 0 1h
pod/kube-scheduler-tensor3 1/1 Running 0 1h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 1h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kube-flannel-ds-amd64 1 1 1 1 1 beta.kubernetes.io/arch=amd64 1h
daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 1h
daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 1h
daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 1h
daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 1h
daemonset.apps/kube-proxy 1 1 1 1 1 beta.kubernetes.io/arch=amd64 1h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 2 2 2 2 1h
NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-78fcdf6894 2 2 2 1h
On the same machine I'm using:
docker version 1.13.1
k8s version 1.11
I met similar behaviour on Ubuntu 16.04 after a reboot.
I ran
sudo kubeadm join ...
and it crashed as described above. Then I did:
sudo su
kubeadm init
exit
I tried the join again, and that second time it was successful.
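For anyone hitting the same thing after a reboot: "connection refused" on 6443 usually just means the API server is not (yet) running on the master. A couple of generic checks before re-initialising (illustrative commands, not from the original post):
# on the master: is the kubelet up, and is anything listening on 6443?
sudo systemctl status kubelet
sudo netstat -tlpn | grep 6443
# swap being re-enabled by the reboot is a common reason for the kubelet to fail to start
sudo swapoff -a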

Set /etc/hosts for each vagrant vm by ansible

My Vagrantfile:
hosts = {
  "host01" => "192.168.11.101",
  "host02" => "192.168.11.102",
}
Vagrant.configure("2") do |config|
  config.ssh.username = "root"
  config.ssh.password = "vagrant"
  config.ssh.insert_key = "true"
  hosts.each_with_index do |(name, ip), index|
    config.vm.define name do |machine|
      machine.vm.box = "centos7"
      machine.vm.box_check_update = false
      machine.vm.hostname = name
      machine.vm.synced_folder "/data", "/data"
      machine.vm.network :private_network, ip: ip
      machine.vm.provider "virtualbox" do |v|
        v.name = name
        v.customize ["modifyvm", :id, "--memory", 2048]
      end
    end
  end
end
Ansible template for generating /etc/hosts:
127.0.0.1 localhost
{% for host in groups['all'] %}
{{ hostvars[host]['ansible_' + iface].ipv4.address }} {{ host }}
{% endfor %}
Ansible task:
- name: Create the hosts file for all machines
  template: src=hosts.j2 dest=/etc/hosts
But I get this result:
[root@host01 ~]# cat /etc/hosts
127.0.0.1 localhost
10.0.2.15 host01
10.0.2.15 host02
ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 fe80::42:49ff:fed1:eebb prefixlen 64 scopeid 0x20<link>
ether 02:42:49:d1:ee:bb txqueuelen 0 (Ethernet)
RX packets 77 bytes 6065 (5.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 99 bytes 8572 (8.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
inet6 fe80::a00:27ff:fede:e0e prefixlen 64 scopeid 0x20<link>
ether 08:00:27:de:0e:0e txqueuelen 1000 (Ethernet)
RX packets 785483 bytes 57738892 (55.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 777457 bytes 1957320412 (1.8 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.11.101 netmask 255.255.255.0 broadcast 192.168.11.255
ether 08:00:27:15:2c:64 txqueuelen 1000 (Ethernet)
RX packets 41445 bytes 39878552 (38.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 18055 bytes 2113498 (2.0 MiB)
In ifconfig I found that only the enp0s8 address (192.168.11.101 vs 192.168.11.102) differs between host01 and host02.
Did host01 and host02 get the same IP??
host01 has a Docker registry:
on host01, curl http://host01:5006/v2/_catalog works;
on host02, curl http://host01:5006/v2/_catalog does not work.
Did host01 and host02 get the same IP??
Yes. That's how Vagrant works and is able to connect, for orchestration purposes, to machines created from a variety of boxes coming from different publishers.
There's nothing strange about that: the hosts are connected to different virtual switches in VirtualBox.
I just want host01 and host02 to be able to access each other by hostname.
Use the other interface as the value of iface in your Jinja2 template (you did not show its value in the question anyway), as in the sketch below.
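For example, iface can be supplied as an extra variable when running the playbook, so the template resolves the 192.168.11.x host-only addresses instead of the shared NAT address (a sketch; the playbook name site.yml is hypothetical):
# make the template use enp0s8, the host-only interface from the ifconfig output
ansible-playbook site.yml -e "iface=enp0s8"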

libpcap findalldevs not working in guest LDOM on Solaris 11

Environment
Oracle Solaris 11 for SPARC
Running in a Non-primary (Guest) Logical Domain (LDOM).
Logged in with root access.
Problem
My application uses libpcap to capture network traffic. When my application (myTestApp) calls libpcap's findalldevs, it sees only one network interface ("lo0"), yet ifconfig -a shows many more interfaces.
My application is statically linked against libpcap (version 1.3). The build machine is SunOS RS-T5120-01 5.10 Generic_141444-09 sun4v sparc SUNW,SPARC-Enterprise-T5120.
Any ideas why my application can't see all the network interfaces?
Solaris command-line sample output
# tcpdump --version
tcpdump version 4.1.1
libpcap version 1.1.1
# uname -a
SunOS g99dnpi802-LD 5.11 11.1 sun4v sparc sun4v
# ./myTestApp -adapters
[Available Adapters]
name: "lo0", description: "", address: 127.0.0.1, mask: 255.0.0.0
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
net0: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 2
inet 10.99.220.15 netmask ffffff00 broadcast 10.99.220.255
ether 0:14:4f:fa:e0:8d
net1: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 3
inet 10.99.193.210 netmask ffffff80 broadcast 10.99.193.255
ether 0:14:4f:f9:d0:9c
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
net0: flags=120002000840<RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 2
inet6 ::/0
ether 0:14:4f:fa:e0:8d
net1: flags=120002000840<RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 3
inet6 ::/0
ether 0:14:4f:f9:d0:9c
# tcpdump -i net1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on net1, link-type EN10MB (Ethernet), capture size 65535 bytes
09:32:29.520815 IP g99dnpi802-LD.ssh > 10.99.8.102.65436: Flags [P.], seq 3397909586:3397909718, ack 1479093081, win 64240, length 132
09:32:29.520860 IP g99dnpi802-LD.ssh > 10.99.8.102.65436: Flags [P.], seq 132:232, ack 1, win 64240, length 100
09:32:29.521644 IP 10.99.8.102.65436 > g99dnpi802-LD.ssh: Flags [.], ack 132, win 16379, length 0
09:32:29.680844 00:14:4f:f9:8d:84 (oui Unknown) > Broadcast, ethertype Unknown (0xcafe), length 90:
0x0000: 0500 ad85 0939 ffff 0001 ffff 809c 7401 .....9........t.
0x0010: 0000 004c 0000 0000 8070 00ab 0000 0000 ...L.....p......
0x0020: 0000 0000 0000 0000 0043 ffff 2074 6167 .........C...tag
0x0030: 6d61 7374 0672 0014 4ff9 8d84 5f31 3362 mast.r..O..._13b
0x0040: 650a 0000 0000 0000 84f9 0aab e...........
[update]
Here is the (edited) output of running the following truss command on the build machine and the customer machine.
truss -f -a -vall -l -d -o truss.txt ./myTestApp -adapters
truss on build machine
14365/1: 0.0751 so_socket(PF_INET, SOCK_DGRAM, IPPROTO_IP, "", SOV_DEFAULT) = 3
14365/1: 0.0753 so_socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP, "", SOV_DEFAULT) = 4
14365/1: 0.0755 ioctl(3, SIOCGLIFNUM, 0xFFBE9F50) = 0
14365/1: 0.0757 ioctl(3, SIOCGLIFCONF, 0xFFBE9F40) = 0
14365/1: 0.0804 ioctl(3, SIOCGLIFFLAGS, 0xFFBE9DC8) = 0
14365/1: 0.0806 ioctl(3, SIOCGLIFNETMASK, 0xFFBE9C50) = 0
14365/1: 0.0809 open64("/dev/lo", O_RDWR) Err#2 ENOENT
14365/1: 0.0811 open64("/dev/lo0", O_RDWR) Err#2 ENOENT
14365/1: 0.0813 ioctl(3, SIOCGLIFFLAGS, 0xFFBE9DC8) = 0
14365/1: 0.0815 ioctl(3, SIOCGLIFNETMASK, 0xFFBE9C50) = 0
14365/1: 0.0817 ioctl(3, SIOCGLIFBRDADDR, 0xFFBE9AD8) = 0
14365/1: 0.0819 open64("/dev/e1000g", O_RDWR) = 5
truss on customer machine
6346/1: 0.0315 so_socket(PF_INET, SOCK_DGRAM, IPPROTO_IP, 0, SOV_DEFAULT) = 3
6346/1: 0.0319 so_socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP, 0, SOV_DEFAULT) = 5
6346/1: 0.0320 ioctl(3, SIOCGLIFNUM, 0xFFBEA830) = 0
6346/1: 0.0321 ioctl(3, SIOCGLIFCONF, 0xFFBEA820) = 0
6346/1: 0.0322 ioctl(3, SIOCGLIFFLAGS, 0xFFBEA6A8) = 0
6346/1: 0.0323 ioctl(3, SIOCGLIFNETMASK, 0xFFBEA530) = 0
6346/1: 0.0327 open64("/dev/lo", O_RDWR) Err#2 ENOENT
6346/1: 0.0328 open64("/dev/lo0", O_RDWR) = 6
6346/1: 0.0345 ioctl(3, SIOCGLIFFLAGS, 0xFFBEA6A8) = 0
6346/1: 0.0346 ioctl(3, SIOCGLIFNETMASK, 0xFFBEA530) = 0
6346/1: 0.0347 ioctl(3, SIOCGLIFBRDADDR, 0xFFBEA3B8) = 0
6346/1: 0.0347 open64("/dev/net", O_RDWR) Err#21 EISDIR
6346/1: 0.0349 ioctl(3, SIOCGLIFFLAGS, 0xFFBEA6A8) = 0
6346/1: 0.0349 ioctl(3, SIOCGLIFNETMASK, 0xFFBEA530) = 0
6346/1: 0.0350 ioctl(3, SIOCGLIFBRDADDR, 0xFFBEA3B8) = 0
6346/1: 0.0351 open64("/dev/net", O_RDWR) Err#21 EISDIR
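One detail stands out in the customer-machine trace: the application opens /dev/net itself and gets EISDIR, rather than the per-datalink DLPI nodes that Solaris 11 places beneath that directory. Two generic Solaris 11 commands to see what the library would need to open (illustrative, not from the original post):
# list the per-datalink DLPI nodes that Solaris 11 creates
ls /dev/net
# show the datalinks themselves
dladm show-link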

Can't connect to mongodb docker container from another container

I have the following simplified design: a mongodb container and a "python-client" Docker container that is linked to the former. This is my simplified docker-compose.yml file:
mongodb:
  build: "mongodb"
  dockerfile: "Dockerfile"
  hostname: "mongodb.local"
  ports:
    - "27017:27017"
client:
  build: "client"
  dockerfile: "Dockerfile"
  hostname: "client.local"
  links:
    - "mongodb:mongodb"
  environment:
    - "MONGODB_URL=mongodb://admin:admin@mongodb:27017/admin"
    - "MONGODB_DB=historictraffic"
I'm able to establish a successful connection using pymongo from my host, using the mongodb://admin:admin@localhost:27017/admin connection string (pay attention to localhost):
$ ipython
from pymongo import MongoClient
mongo = MongoClient('mongodb://admin:admin@localhost:27017/admin')
db = mongo.test
col = db.test
col.insert_one({'x': 1})
# This works
But I can't connect from the client container. Apparently the link is correct:
/ # cat /etc/hosts
172.17.0.27 client.local client
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.26 historictraffic_mongodb_1 mongodb
172.17.0.26 mongodb mongodb historictraffic_mongodb_1
172.17.0.26 mongodb_1 mongodb historictraffic_mongodb_1
But when I do the same test, it fails:
/ # ipython
from pymongo import MongoClient
mongo = MongoClient('mongodb://admin:admin@mongodb:27017/admin')
db = mongo.test
col = db.test
col.insert_one({'x': 2})
---------------------------------------------------------------------------
ServerSelectionTimeoutError Traceback (most recent call last)
<ipython-input-5-c5d62e5590d5> in <module>()
----> 1 col.insert_one({'x': 2})
/usr/lib/python2.7/site-packages/pymongo/collection.pyc in insert_one(self, document)
464 if "_id" not in document:
465 document["_id"] = ObjectId()
--> 466 with self._socket_for_writes() as sock_info:
467 return InsertOneResult(self._insert(sock_info, document),
468 self.write_concern.acknowledged)
/usr/lib/python2.7/contextlib.pyc in __enter__(self)
15 def __enter__(self):
16 try:
---> 17 return self.gen.next()
18 except StopIteration:
19 raise RuntimeError("generator didn't yield")
/usr/lib/python2.7/site-packages/pymongo/mongo_client.pyc in _get_socket(self, selector)
661 @contextlib.contextmanager
662 def _get_socket(self, selector):
--> 663 server = self._get_topology().select_server(selector)
664 try:
665 with server.get_socket(self.__all_credentials) as sock_info:
/usr/lib/python2.7/site-packages/pymongo/topology.pyc in select_server(self, selector, server_selection_timeout, address)
119 return random.choice(self.select_servers(selector,
120 server_selection_timeout,
--> 121 address))
122
123 def select_server_by_address(self, address,
/usr/lib/python2.7/site-packages/pymongo/topology.pyc in select_servers(self, selector, server_selection_timeout, address)
95 if server_timeout == 0 or now > end_time:
96 raise ServerSelectionTimeoutError(
---> 97 self._error_message(selector))
98
99 self._ensure_opened()
ServerSelectionTimeoutError: mongodb:27017: [Errno 113] Host is unreachable
Does anyone know how to solve it? Thank you.
This failure is probably because mongo hasn't finished starting up when the client first connects. You can retry the connection with a short delay between retries, and it should work after one or two attempts.
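A minimal way to do that wait from a shell entrypoint, assuming a netcat with the -z flag is available in the client image (a sketch, not from the original answer):
# block until mongod accepts TCP connections on the linked hostname
until nc -z mongodb 27017; do
  echo "waiting for mongodb..."
  sleep 2
done
# ...then start the Python client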