Netstat output with boost::Asio - sockets

I have created an asio server whose acceptor is constructed with:
m_acceptor(m_ios, asio::ip::tcp::endpoint(asio::ip::address_v4::any(), port_num))
where port_num is 3333.
At this point, the netstat -antup command shows:
tcp 0 0 0.0.0.0:3333 0.0.0.0:* LISTEN 26566/./test
So I believe this means that local address 0.0.0.0:3333 is ready to accept connections on port 3333 from any interface.
After this, I start the client, which creates an endpoint to IP 127.0.0.1 and port 3333.
After this, the netstat output is:
tcp 0 0 0.0.0.0:3333 0.0.0.0:* LISTEN 26566/./test
tcp 0 0 127.0.0.1:3333 127.0.0.1:46675 ESTABLISHED 26566/./test
tcp 0 0 127.0.0.1:46675 127.0.0.1:3333 ESTABLISHED 26685/./test
Process 26566 is the master process.
Process 26685 is the slave process.
What I do not understand is what the port 46675 in the address shown above means. This definitely represents the client side, but from where was this port number allocated to the client?
Does this mean that the client has connected to port 3333, but the port from which it itself connects is 46675?

Does this mean that the client has connected to port 3333, but the port from which it itself connects is 46675?
Basically, yes. It describes the client endpoint; this is standard BSD/POSIX sockets terminology.
What I do not understand is what the port 46675 in the address shown above means. This definitely represents the client side, but from where was this port number allocated to the client?
It gets chosen automatically (by the TCP stack, usually in the kernel) from the local ephemeral port range. E.g. on Linux you can adjust that range (if you have permission):
sudo sysctl -w net.ipv4.ip_local_port_range="60000 61000"
(Warning: don't do this unless you know what you're doing). See also https://en.wikipedia.org/wiki/Ephemeral_port
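For completeness, you can observe that ephemeral port from asio itself: after a connect, local_endpoint() on the client socket returns the kernel-chosen address/port pair. A minimal sketch (assuming a recent Boost.Asio with io_context, and the server from the question listening on port 3333):

#include <boost/asio.hpp>
#include <iostream>

namespace asio = boost::asio;

int main() {
    asio::io_context ios;
    asio::ip::tcp::socket sock(ios);
    // Synchronous connect; the kernel allocates the local port
    // from the ephemeral range at this point.
    sock.connect(asio::ip::tcp::endpoint(asio::ip::address_v4::loopback(), 3333));
    // Prints e.g. 127.0.0.1:46675 - the client-side endpoint that
    // netstat shows in the ESTABLISHED rows above.
    std::cout << "client local endpoint: " << sock.local_endpoint() << "\n";
}

If you need a fixed client port instead, open the socket and bind() it to a specific local endpoint before calling connect.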

Related

Can't connect to Postgresql with specific external IP

I can connect without issue to my DigitalOcean Ubuntu 20.04 LTS VM instance that has PostgreSQL 14 installed, but I'm trying to make it more secure so that only specific IPs can connect to the database.
I heard the way to do this is to modify the /etc/postgresql/14/main/postgresql.conf file.
When I have this line, I can connect to my database without issue.
listen_addresses='0.0.0.0'
However, if I modify this line with:
listen_addresses='123.123.123.123'
I get this DataGrip error message: [08001] Connection to 111.111.111.111:12345 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
111.111.111.111:12345 is my (fake) VM's IP and port that I already set up.
123.123.123.123 is my (fake) computer's external IP, as reported by the usual IP-lookup sites.
Any suggestions? Is there a log I can check that would give me a better understanding of what is going on?
Also to note, with listen_addresses='0.0.0.0', running ss -ptl gives an output of
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.53%lo:domain 0.0.0.0:*
LISTEN 0 128 0.0.0.0:ssh 0.0.0.0:*
LISTEN 0 244 0.0.0.0:12345 0.0.0.0:*
LISTEN 0 128 [::]:ssh [::]:*
with listen_addresses='123.123.123.123', running ss -ptl gives an output of
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.53%lo:domain 0.0.0.0:*
LISTEN 0 128 0.0.0.0:ssh 0.0.0.0:*
LISTEN 0 128 [::]:ssh [::]:*
Documentation that I used so far:
https://www.postgresql.org/docs/current/runtime-config-connection.html
https://www.postgresql.org/docs/current/auth-pg-hba-conf.html
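For what it's worth, the first link documents listen_addresses as the local interface addresses the server binds to, not the client addresses allowed to connect; since 123.123.123.123 is not an address of the VM itself, the bind cannot succeed, which would explain why the :12345 listener disappears from the second ss output. Restricting which client IPs may connect is normally done in pg_hba.conf (the second link). A sketch, reusing the fake addresses from the question:

# postgresql.conf - bind to every local interface
listen_addresses = '0.0.0.0'

# pg_hba.conf - allow only 123.123.123.123 to connect
host    all    all    123.123.123.123/32    scram-sha-256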

How to detect if keepalive is enabled on TCP socket in AIX and Solaris?

I am working on a solution where I am enabling the keepalive option on a TCP socket. On Linux I am able to see whether keepalive is enabled using netstat:
netstat -o -p | grep processid
The output is as follows:
$ netstat -o
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State Timer
tcp 0 0 himanshu-laptop.l:46096 sjc-not16.sjc.dropb:www ESTABLISHED off (0.00/0/0)
tcp 38 0 himanshu-laptop.l:40156 v-d-1a.sjc.dropbo:https CLOSE_WAIT off (0.00/0/0)
tcp 38 0 himanshu-laptop.l:54501 v-client-5a.sjc.d:https CLOSE_WAIT off (0.00/0/0)
In the command output, the Timer field shows either off or keepalive.
But I am not able to get this information on AIX and Solaris.
How can I get this information on AIX and Solaris?
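I'm not aware of a netstat flag for this on those platforms, but one portable fallback, if you control the process, is to query the option directly with getsockopt(2), which behaves the same on Linux, AIX and Solaris. A minimal sketch (sockfd is assumed to be your TCP socket's descriptor):

#include <sys/socket.h>

/* Returns 1 if SO_KEEPALIVE is enabled on sockfd, 0 if disabled,
 * -1 on error. The (char *) cast keeps older SVR4-style headers happy. */
int keepalive_enabled(int sockfd) {
    int optval = 0;
    socklen_t optlen = sizeof(optval);
    if (getsockopt(sockfd, SOL_SOCKET, SO_KEEPALIVE,
                   (char *)&optval, &optlen) < 0)
        return -1;
    return optval != 0;
}

On Solaris, pfiles <pid> also lists the options set on each open socket (including SO_KEEPALIVE), if I recall its output correctly, which may serve as a rough substitute for netstat's Timer column.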

Cannot connect to RabbitMQ server hosted remotely

I have installed and configured RabbitMQ on an Ubuntu 16.04 server, following an online guide. Since the default user, guest, is only allowed to connect locally by default, I added a new user with the administrator tag and set its permissions so that it can access the / virtual host. I enabled the RabbitMQ management console and am successfully able to log in with the user I created. I am also able to connect to RabbitMQ via localhost using that user. But when I try to connect to the RabbitMQ server from other servers using the following code:
import pika
credentials = pika.PlainCredentials('new_user', 'new_pass')
parameters = pika.ConnectionParameters("<server's Public IP>", 5672, '/', credentials)
connection = pika.BlockingConnection(parameters)
It throws an error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Python/2.7/site-packages/pika/adapters/blocking_connection.py", line 339, in __init__
    self._process_io_for_connection_setup()
  File "/Library/Python/2.7/site-packages/pika/adapters/blocking_connection.py", line 374, in _process_io_for_connection_setup
    self._open_error_result.is_ready)
  File "/Library/Python/2.7/site-packages/pika/adapters/blocking_connection.py", line 395, in _flush_output
    raise exceptions.ConnectionClosed()
pika.exceptions.ConnectionClosed
The same code works fine when I run it on the server on which RabbitMQ is installed, replacing <server's Public IP> with 0.0.0.0.
Output of sudo netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 18021/beam
tcp 0 0 0.0.0.0:4369 0.0.0.0:* LISTEN 18110/epmd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1230/sshd
tcp 0 0 0.0.0.0:15672 0.0.0.0:* LISTEN 18021/beam
tcp6 0 0 :::5672 :::* LISTEN 18021/beam
tcp6 0 0 :::4369 :::* LISTEN 18110/epmd
tcp6 0 0 :::22 :::* LISTEN 1230/sshd
What could be causing this error?
This usually happens with a very low connection timeout. Adjust your connection parameters to use a larger timeout, such as 30 or 60 seconds, and you should be good to go.
It looks like pika exposes this setting as blocked_connection_timeout: https://pika.readthedocs.io/en/latest/modules/parameters.html#pika.connection.ConnectionParameters.blocked_connection_timeout
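For example, keeping the rest of the parameters from the question (the 60-second value is just an assumption; tune it to your setup):

parameters = pika.ConnectionParameters("<server's Public IP>", 5672, '/', credentials, blocked_connection_timeout=60)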

Change bound IP running on port 7077 - Apache Spark

Can Spark be configured so that port 7077 is bound to 0.0.0.0 rather than to 127.0.1.1, in the same way as port 8080 is bound:
netstat -pln
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.1.1:7077 0.0.0.0:* LISTEN 2864/java
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 2864/java
tcp 0 0 127.0.1.1:6066 0.0.0.0:* LISTEN 2864/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
udp 0 0 0.0.0.0:68 0.0.0.0:* -
udp 0 0 192.168.192.22:123 0.0.0.0:* -
udp 0 0 127.0.0.1:123 0.0.0.0:* -
udp 0 0 0.0.0.0:123 0.0.0.0:* -
udp 0 0 0.0.0.0:21415 0.0.0.0:* -
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node PID/Program name Path
unix 2 [ ACC ] STREAM LISTENING 7195 - /var/run/dbus/system_bus_socket
unix 2 [ ACC ] SEQPACKET LISTENING 405 - /run/udev/control
The reason I'm asking this is that I'm unable to connect workers to the master node, and I think the issue is that the master IP is not discoverable.
Error when trying to connect a worker to the master:
15/04/02 21:58:18 WARN Remoting: Tried to associate with unreachable remote address [akka.tcp://sparkMaster@raspberrypi:7077]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: Connection refused: raspberrypi/192.168.192.22:7077
15/04/02 21:58:18 INFO RemoteActorRefProvider$RemoteDeadLetterActorRef: Message [org.apache.spark.deploy.DeployMessages$RegisterWorker] from Actor[akka://sparkWorker/user/Worker#1677101765] to Actor[akka://sparkWorker/deadLetters] was not delivered. [10] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
In spark-env.sh you can set SPARK_MASTER_IP=<ip>.
A hostname would also work fine (via SPARK_STANDALONE_MASTER=<hostname>), just make sure the workers connect to exactly the same hostname as the master binds to (i.e. the spark:// address that is shown in Spark master UI).
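For example, on the master node (assuming 192.168.192.22 from the log above is the address the workers can reach):

# conf/spark-env.sh on the master
SPARK_MASTER_IP=192.168.192.22

The workers should then connect to spark://192.168.192.22:7077, which should also be the address shown in the master UI.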

Rebinding udp socket to new one

As syslog uses the predefined socket port number 514, is there any way to rebind this socket to some other port number, specifically between 49152 and 65535? I am using the Unix C compiler, gcc.
bash-3.2$ netstat -anp | grep udp
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
udp 0 0 0.0.0.0:2049 0.0.0.0:* -
udp 0 0 0.0.0.0:514 0.0.0.0:* - <-- this needs to be changed
udp 0 0 127.1.1.1:6688 0.0.0.0:* -
udp 0 0 0.0.0.0:4785 0.0.0.0:* -
udp 0 0 0.0.0.0:69 0.0.0.0:* -
udp 0 0 0.0.0.0:47451 0.0.0.0:* -
udp 0 0 0.0.0.0:613 0.0.0.0:* -
udp 0 0 0.0.0.0:111 0.0.0.0:* -
udp 0 0 0.0.0.0:1009 0.0.0.0:* -
udp 0 0 0.0.0.0:1012 0.0.0.0:* -
I need to change 514 to a port in the specified range.
gcc is simply a compiler, so the port number of the syslog application cannot be changed at compilation time. Having said that, the port is usually configurable in the daemon's configuration files, and you should be able to change it from 514 to the value you want. Here is an example that covers configuring both the client side and the server side: http://itvomit.com/2012/06/01/linux-sending-log-files-to-a-remote-server/ . Depending on your system, I would explore how to change the port number there.
Keep in mind that if you change the port from 514 to X, then remote clients that send information to syslog would also need to send to the new port X and not 514. Here is a link that explains how to change the client-side config to redirect logging messages to a port number different from 514: http://docs.splunk.com/Documentation/Storm/Storm/User/Howtosetupsyslog
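As a concrete sketch of the server side, assuming the receiving daemon happens to be rsyslogd (an assumption; syslog-ng and other daemons use different syntax), the UDP listener port is set in /etc/rsyslog.conf and takes effect after restarting the daemon:

# /etc/rsyslog.conf - listen on UDP port 51514 (inside 49152-65535) instead of 514
$ModLoad imudp
$UDPServerRun 51514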