I'm using Merb::Cache to store txt/xml, and I've noticed that the longer I leave my Merb processes running, the more open TCP sockets to memcached they accumulate -- I believe this is causing some major performance problems.
lsof | grep 11211 | wc -l
494
merb 27206 root 71u IPv4 13759908 TCP localhost.localdomain:59756->localhost.localdomain:11211 (ESTABLISHED)
merb 27206 root 72u IPv4 13759969 TCP localhost.localdomain:59779->localhost.localdomain:11211 (ESTABLISHED)
merb 27206 root 73u IPv4 13760039 TCP localhost.localdomain:59805->localhost.localdomain:11211 (ESTABLISHED)
merb 27206 root 74u IPv4 13760052 TCP localhost.localdomain:59810->localhost.localdomain:11211 (ESTABLISHED)
merb 27206 root 75u IPv4 13760135 TCP localhost.localdomain:59841->localhost.localdomain:11211 (ESTABLISHED)
merb 27206 root 76u IPv4 13760823 TCP localhost.localdomain:59866->localhost.localdomain:11211 (ESTABLISHED)
merb 27206 root 77u IPv4 13760951 TCP localhost.localdomain:52095->localhost.localdomain:11211 (ESTABLISHED)
etc...
My relevant code is:
if !exists?(:memcached) then
  register(:memcached, Merb::Cache::MemcachedStore, :namespace => 'mynamespace', :servers => ['127.0.0.1:11211'])
end
and:
when :xml
  unless @hand_xml = Merb::Cache[:memcached].read("/hands/#{@hand.id}.xml")
    @hand_xml = display(@hand)
    Merb::Cache[:memcached].write("/hands/#{@hand.id}.xml", @hand_xml)
  end
  return @hand_xml
Is this code straight-out wrong, or am I using the wrong version of memcache?
I have memcached 1.2.8
and have the following:
libmemcached-0.25.14.tar.gz
memcached-0.13.gem
this is kind of driving me crazy..
OK, I figured out some stuff.
1) It CAN be reasonable to have hundreds or thousands of sockets connected to memcached, assuming you are using a library built on epoll or something similar. However, if you are using Ruby like me, I'm not aware of a client library that uses anything other than select() or poll(), so that possibility is ruled out immediately.
2) If you are like me, you only have one memcached server running right now and a couple of Mongrels/Thins handling requests. Your memcached connections should therefore probably number no more than the Mongrels/Thins you have running (assuming you are only caching one or two sets of things) -- which was my case.
here's the fix:
Set up memcache through the memcached gem rather than Merb::Cache (which actually wraps whatever memcache lib you are using):
MMCACHE = Memcached.new("localhost:11211")
get/set your values:
@cache = MMCACHE.clone
begin
  @hand_xml = @cache.get("/hands/#{@hand.id}.xml")
rescue
  @hand_xml = display(@hand)
  @cache.set("/hands/#{@hand.id}.xml", @hand_xml)
end
@cache.quit
Sit back and drink a cold one, because now when you do this:
lsof | grep 11211 | wc -l
you see something like 2 or 3 instead of 2036!
props to reef for cluing me in that it's not uncommon for memcache connections to be persistent to begin with
I might be able to help, but I need to tell a story to do that. Here it is.
Once upon a time there was a cluster of 10 apache(ssl) servers, each configured with exactly 100 threads. There was also a cluster of 10 memcached servers (on the same boxes), and they all seemed to live peacefully. Both the apaches and the memcacheds were guarded by the evil monit daemon.
Then the King installed an 11th apache(ssl) server, and the memcacheds started restarting randomly every few hours! The King started investigating, and what did he find? There was a bug in the php memcache module documentation which said that the default constructor of the memcache connection object is not persistent -- but apparently it was. What happened was that every php thread (and there were about 1000 of them) opened a connection to every memcached in the pool when it needed one, and held onto it. With 10 servers there were 10*100 connections to every memcached server and that was fine, but with 11 servers it was 1100, and 1100 > 1024, the maximum number of open sockets for memcached. When all the sockets were taken, the monit daemon couldn't connect, so it restarted the memcached.
Every story has to have a moral. So, what did the King do with all of this? He disabled the persistent connections, and they all lived happily ever after, with the number of connections on the cluster peaking at 5 (five). Those servers were serving a huge amount of data, so we couldn't afford 1000 spare sockets, and it was cheaper to negotiate the memcache connection on every request.
I am sorry, but I don't know Ruby; it looks like you had an awful lot of threads, or you were caching it wrong.
Good luck!
I am attempting to run the FreeRTOS+TCP demo (release 10.1.1):
The code is written for the Windows Simulator, but I am attempting to target the Xilinx Zynq, emulated by QEMU (version 4.2.0). The host machine is Ubuntu 16.04. There exists a
Network Interface port for the TCP part of FreeRTOS+TCP, so this should be possible.
One of the obvious modifications to the demo is changing the way messages are printed, and removing calls to the Windows Sleep function. Also, I am using ARM semihosting to view the output of the print statements.
Besides these changes, what changes will I need to make to the command line call? The demo creates a TCP echo server and client. If these are tied together, then no traffic should need to go to the host, right? Is there anything special I would need to do to get this to work? I don't have a ton of experience with networking.
Since this is technically a baremetal application, CLI options like -nic user,hostfwd=tcp::5022-:22 give the warning qemu-system-arm: warning: nic cadence_gem.1 has no peer.
(Found at How to replace `qemu-system -redir` command argument?).
If I ever did want to send traffic between the host and guest (like having a ncat echo server, instead of in FreeRTOS), how would I go about doing that?
Here is a related problem, with no solution:
Running LWIP TCP/IP Stack with QEMU
Current command line arguments:
qemu-system-arm -semihosting --semihosting-config enable=on,target=native -nographic -serial mon:stdio -machine xilinx-zynq-a9 -m 512M -cpu cortex-a9 -nic user,hostfwd=tcp::12346-:7 -kernel build/rtos_demo_tcp/rtos_demo_tcp.elf
Current output (I enabled extra debug messages):
qemu-system-arm: warning: nic cadence_gem.1 has no peer
Seed for randomiser: 1591112953
Random numbers: 00001294 00001925 000022D0 00005CC3
FreeRTOS_IPInit
vTaskStartScheduler
prvIPTask started
Network buffers: 30 lowest 30
IP Address: 10.2.118.223
Subnet Mask: 255.255.255.0
Gateway Address: 10.2.118.1
DNS Server Address: 208.67.222.222
Socket 7 -> 0ip:0 State eCLOSED->eTCP_LISTEN
Then this next bit repeats indefinitely, with a different socket number each time:
FreeRTOS_connect: 14207 to a0276dfip:7
Socket 14207 -> a0276dfip:7 State eCLOSED->eCONNECT_SYN
ARP for a0276dfip (using a0276dfip): rc=0 00:00:00 00:00:00
Network buffers: 30 lowest 29
Connect[a0276dfip:7]: next timeout 1: 500 ms
ARP for a0276dfip (using a0276dfip): rc=0 00:00:00 00:00:00
Connect[a0276dfip:7]: next timeout 2: 500 ms
ARP for a0276dfip (using a0276dfip): rc=0 00:00:00 00:00:00
Connect[a0276dfip:7]: next timeout 3: 500 ms
Connect: giving up a0276dfip:7
Socket 14207 -> a0276dfip:7 State eCONNECT_SYN->eCLOSE_WAIT
FreeRTOS_closesocket[14207 to a0276dfip:7]: buffers 30 socks 1
Summary: How do I change the way I am calling QEMU so that the TCP client and server can connect to each other?
I would recommend trying out all of the options enumerated in the qemu networking guide:
- SLIRP: -netdev user,id=mynet0,net=192.168.76.0/24,dhcpstart=192.168.76.9
- TAP: -netdev tap,id=mynet0
- SOCKET: -netdev socket,id=mynet0,listen=:1234 and -netdev socket,id=mynet0,connect=:1234
And there is much more in the guide. Something you might also consider is that it might be difficult for you to port-forward directly from the VM you care about to the host, and it might be easier to connect that VM to another VM and port-forward from that second VM to your host.
Sounds a bit odd, but that's something I've needed to do before. To talk between two VMs I find sockets to be the best method. They work sort of like a "virtual crossover cable".
This VM-to-VM method would let you simply set up corresponding static IPs and subnets on each VM, and then ncat would work between them. It does away with all the complexity of a DHCP server and any sort of port-forwarding.
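A rough sketch of that VM-to-VM wiring (untested; it reuses the socket backend options from the list above, and assumes the Zynq board NIC picks up a -nic socket backend the same way it picks up -nic user in your current command line):
On the first VM (the listen end of the "crossover cable"):
qemu-system-arm <your existing options> -nic socket,listen=:1234
On the second VM (the connect end):
qemu-system-arm <your existing options> -nic socket,connect=127.0.0.1:1234
Then give each guest a static IP in the same subnet (say 192.168.76.1 and 192.168.76.2), and an ncat listener in one guest should be reachable from the other.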
This might be a very basic question but it confuses me.
Can two different connected sockets share a port? I'm writing an application server that should be able to handle more than 100k concurrent connections, and we know that the number of ports available on a system is around 64k (16-bit). A connected socket is assigned to a new (dedicated) port, so it would seem that the number of concurrent connections is limited by the number of ports, unless multiple sockets can share the same port. Hence the question.
TCP / HTTP Listening On Ports: How Can Many Users Share the Same Port
So, what happens when a server listen for incoming connections on a TCP port? For example, let's say you have a web-server on port 80. Let's assume that your computer has the public IP address of 24.14.181.229 and the person that tries to connect to you has IP address 10.1.2.3. This person can connect to you by opening a TCP socket to 24.14.181.229:80. Simple enough.
Intuitively (and wrongly), most people assume that it looks something like this:
Local Computer | Remote Computer
--------------------------------
<local_ip>:80 | <foreign_ip>:80
^^ not actually what happens, but this is the conceptual model a lot of people have in mind.
This is intuitive, because from the standpoint of the client, it has an IP address and connects to a server at IP:PORT. Since the client connects to port 80, its port must be 80 too? This is a sensible thing to think, but it's not actually what happens. If it were correct, we could only serve one user per foreign IP address. Once a remote computer connected, it would hog the port-80-to-port-80 connection, and no one else could connect.
Three things must be understood:
1.) On a server, a process is listening on a port. Once it gets a connection, it hands it off to another thread. The communication never hogs the listening port.
2.) Connections are uniquely identified by the OS by the following 5-tuple: (local-IP, local-port, remote-IP, remote-port, protocol). If any element in the tuple is different, then this is a completely independent connection.
3.) When a client connects to a server, it picks a random, unused high-numbered (ephemeral) source port. This way, a single client can have up to ~64k connections to the server on the same destination port.
So, this is really what gets created when a client connects to a server:
Local Computer | Remote Computer | Role
-----------------------------------------------------------
0.0.0.0:80 | <none> | LISTENING
24.14.181.229:80 | 10.1.2.3:<random_port> | ESTABLISHED
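A quick client-side check of that ephemeral-port behaviour (a minimal Python sketch; example.com:80 is just an illustrative destination that accepts TCP connections):
import socket

for _ in range(2):
    s = socket.create_connection(("example.com", 80))
    print("local :", s.getsockname())    # our IP plus a random ephemeral port, different each time
    print("remote:", s.getpeername())    # the server's IP and port 80, both times
    s.close()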
Looking at What Actually Happens
First, let's use netstat to see what is happening on this computer. We will use port 500 instead of 80 (because a whole bunch of stuff is happening on port 80 as it is a common port, but functionally it does not make a difference).
netstat -atnp | grep -i ":500 "
As expected, the output is blank. Now let's start a web server:
sudo python3 -m http.server 500
Now, here is the output of running netstat again:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
So now there is one process that is actively listening (State: LISTEN) on port 500. The local address is 0.0.0.0, which is code for "listening on all IP addresses". An easy mistake to make is to listen only on 127.0.0.1, which will only accept connections from the current computer. So this is not a connection; it just means that a process requested to bind() to that IP and port, and that process is responsible for handling all connections to it. This hints at the limitation that there can only be one process per computer listening on a given port (there are ways to get around that using multiplexing, but that is a much more complicated topic). If a web server is listening on port 80, it cannot share that port with other web servers.
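A minimal Python sketch of that one-listener-per-port restriction (port 5000 is just an example; run it where nothing else is listening there):
import socket

a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("0.0.0.0", 5000))          # the first socket claims the port
a.listen(1)

b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("0.0.0.0", 5000))      # a second bind to the same port...
except OSError as e:
    print("second bind failed:", e)   # ...fails with EADDRINUSE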
So now, let's connect a user to our machine:
quicknet -m tcp -t localhost:500 -p Test payload.
This is a simple script (https://github.com/grokit/quickweb) that opens a TCP socket, sends the payload ("Test payload." in this case), waits a few seconds and disconnects. Doing netstat again while this is happening displays the following:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
tcp 0 0 192.168.1.10:500 192.168.1.13:54240 ESTABLISHED -
If you connect with another client and do netstat again, you will see the following:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
tcp 0 0 192.168.1.10:500 192.168.1.13:26813 ESTABLISHED -
... that is, the second client used another random source port for its connection. So the server never confuses the two connections, even though they come from the same IP address.
A server socket listens on a single port. All established client connections on that server are associated with that same listening port on the server side of the connection. An established connection is uniquely identified by the combination of client-side and server-side IP/Port pairs. Multiple connections on the same server can share the same server-side IP/Port pair as long as they are associated with different client-side IP/Port pairs, and the server would be able to handle as many clients as available system resources allow it to.
On the client-side, it is common practice for new outbound connections to use a random client-side port, in which case it is possible to run out of available ports if you make a lot of connections in a short amount of time.
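A minimal Python sketch of that sharing, run entirely on one machine (port 5001 is arbitrary; both "clients" are just loopback connections):
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 5001))
srv.listen(5)

# two client connections to the same listening port
clients = [socket.create_connection(("127.0.0.1", 5001)) for _ in range(2)]
accepted = [srv.accept()[0] for _ in clients]

for conn in accepted:
    # the local (server) side is 127.0.0.1:5001 for both connections;
    # only the remote (client) ephemeral port differs
    print(conn.getsockname(), "<-", conn.getpeername())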
A connected socket is assigned to a new (dedicated) port
That's a common intuition, but it's incorrect. A connected socket is not assigned to a new/dedicated port. The only actual constraint that the TCP stack must satisfy is that the tuple of (local_address, local_port, remote_address, remote_port) must be unique for each socket connection. Thus the server can have many TCP sockets using the same local port, as long as each of the sockets on the port is connected to a different remote location.
See the "Socket Pair" paragraph in the book "UNIX Network Programming: The sockets networking API" by
W. Richard Stevens, Bill Fenner, Andrew M. Rudoff at: http://books.google.com/books?id=ptSC4LpwGA0C&lpg=PA52&dq=socket%20pair%20tuple&pg=PA52#v=onepage&q=socket%20pair%20tuple&f=false
Theoretically, yes. In practice, no. Most kernels (including Linux) don't allow a second bind() to an already-allocated port. It wouldn't be a really big patch to make this allowed.
Conceptually, we should differentiate between socket and port. Sockets are bidirectional communication endpoints, i.e. "things" where we can send and receive bytes. They are a conceptual thing; there is no field in a packet header named "socket".
A port is an identifier which is capable of identifying a socket. In the case of TCP, a port is a 16-bit integer, but there are other protocols as well (for example, with unix sockets, a "port" is essentially a string).
The main point is the following: when an incoming packet arrives, the kernel can identify its socket by its destination port number. That is the most common way, but it is not the only possibility:
Sockets can be identified by the destination IP of the incoming packets. This is the case, for example, if we have a server using two IPs simultaneously. Then we can run, for example, different web servers on the same port, but on different IPs.
Sockets can also be identified by the source port and IP of the incoming packets. This is the case in many load-balancing configurations.
Since you are working on an application server, it will be able to do that: many established connections can share its single listening port.
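A small Python illustration of the first case above, i.e. two listeners on the same port but different local IPs (Linux-specific: the whole 127.0.0.0/8 range is local, so 127.0.0.2 needs no extra configuration; port 6000 is arbitrary):
import socket

a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 6000))
a.listen(1)

b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
b.bind(("127.0.0.2", 6000))   # same port, different local IP -> allowed
b.listen(1)

print(a.getsockname(), b.getsockname())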
I guess none of the answers tells every detail of the process, so here it goes:
Consider an HTTP server:
It asks the OS to bind port 80 to one or many IP addresses (if you choose 127.0.0.1, only local connections are accepted; you can choose 0.0.0.0 to bind to all IP addresses: localhost, local network, wide area network, both IP versions).
When a client connects to that port, it WILL lock it up for a while (that's why the socket has a backlog: it queues a number of connection attempts, because they ARE NOT instantaneous).
The OS then chooses a random port and transfers that connection to that port (think of it as a temporary port that will handle all the traffic from now on).
The port 80 is then released for the next connection (first, it will accept the first one in the backlog).
When the client or server disconnects, the random port is held open for a while (CLOSE_WAIT on the remote side, TIME_WAIT on the local side). That allows flushing lost packets still along the path. The default time for that state is 2 * MSL seconds (and it WILL consume memory while it is waiting).
After that waiting, that random port is free again to receive other connections.
So, TCP cannot even share a port amongst two IP's!
No. It is not possible to share the same port at a particular instant. But you can design your application in such a way that it accesses the port at different instants.
Absolutely not, because even though multiple connections may share the same ports, they'll have different IP addresses.
I have a specific question on implementing a load balancer or a TCP/IP server program.
Since the port number is 16 bits, there is a maximum of only 65536 ports on a single Linux box at any given time.
And TCP/IP needs a port number to talk to the outside world.
1) when a client establishes a connection, an ephemeral port number is chosen.
2) when a server listening on a socket accepts a connection, a port number is assigned.
So in my understanding at any given time only maximum 65536 TCP/IP connections can exist on a given machine.
So how is it that some or most load balancers claim 200,000 or more concurrent connections?
Can someone please explain that?
Also regarding load balancers, once a load balancer has forwarded a request to one of the servers behind it, can the load balancer somehow pass some information to it, that will help the server to respond back to the originating client directly to avoid the latency of sending back the response via the load balancer?
Thanks everyone for your help.
Thambi
Since the port number is 16 bits, there is a maximum of only 65536 ports on a single Linux box at any given time.
65535 actually, as you can't use port zero.
when a server listening on a socket accepts a connection, a port number is assigned.
No it isn't. The incoming connection uses the same port it connected to. No new port is assigned on accept().
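A quick way to check this on any machine (a small Python sketch; port 5002 is arbitrary):
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5002))
srv.listen(1)

cli = socket.create_connection(("127.0.0.1", 5002))
conn, addr = srv.accept()
print(conn.getsockname())   # still ('127.0.0.1', 5002): accept() assigned no new local port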
So in my understanding at any given time only maximum 65536 TCP/IP connections can exist on a given machine.
No, see above. The actual limit is determined by kernel and process resources: open FDs, thread stack memory, kernel buffer space, ... Not by the 16-bit port number.
A TCP connection is uniquely identified by a (local IP address, local port, remote IP address, remote port) tuple.
For a typical server application, there is only one to three local IP addresses and one or two local ports. For example, a web server might listen on local addresses ::1, ::ffff:93.184.216.34, and 2606:2800:220:1:248:1893:25c8:1946 (possibly via wildcard addresses, but that's irrelevant), and local ports 80 and 443.
For the simple case of a single local address and port, that's still 2^(128+16) possible remote address/port combinations (less a few for special-purpose and broadcast addresses), which will be problematic if you wish to communicate with the entire Earth in units of less than 4 million atoms (which might be possible if you converted all matter on Earth into small viruses).
There has been some confusion around this question, so I'll try to explain it with examples.
First, a couple of words about ports: everyone knows that they don't exist physically; they are just extra identification information for a connection, and also a way to allow multiple servers to listen on the same address (if there were no concept of a port, only one server could listen on a given address, or some other mechanism would have to be in place). Also, a port is an unsigned short, so it can have values between 0 and 65535 (64k).
Now, the restriction on ports applies on the server side, when binding: a (server) socket (let's call it SS) can bind to an address and port, and (unless SO_REUSEADDR is set before the first binding) only one socket can listen on a particular address and port at a time, so if someone is already listening on a port you can't listen on it too. There are some well-known ports (e.g. sshd - 22, httpd - 80, RDP - 3389, ...) that should be avoided when creating an SS; a general guideline is never to use a port number < 1k. For a complete list of "reserved" ports, visit www.iana.org.
As stated in the link I posted in the comment, there's a 5-item tuple (2 pairs + 1 additional element) that identifies a connection: (LocalIP:LocalPort, RemoteIP:RemotePort, Protocol) (the last member is just for rigor; at this point we don't care about it). Now, for a particular SS that listens on an IP:Port, one of the 2 pairs will be the same for all the clients (client sockets: CS) that connect to it, depending on where you look at the connection from:
server's endpoint: LocalIP: LocalPort
client's endpoint: RemoteIP: RemotePort
(just like looking in the mirror).
Now I'm going to demonstrate on 2 machines (CentOS (192.168.149.43) as the server and Windows (192.168.137.10) as the client). I created a dummy TCP server in Python (note that the code is not structured, has no exception handling, and is only IPv4 capable; the purpose is not to produce a polished Python program but to observe some socket behavior):
import sys
import select
import socket
HOST = "192.168.149.43"
PORT = 4461
WAIT_TIME = 0.5
if __name__ == "__main__":
    conns = list()
    nconns = 0
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen(0xFF)
    print "Entering loop, press a key to exit..."
    while sys.stdin not in select.select([sys.stdin], [], [], 0)[0]:
        if select.select([srv], [], [], WAIT_TIME)[0]:
            conn = srv.accept()
            print "Accepted connection from: (%s, %d)" % conn[1]
            conns.append(conn)
            nconns += 1
            print "Active connections:", nconns
    for item in conns:
        item[0].close()
    srv.close()
    print "Exiting."
Here's the netstat output on the server machine (before running the server app). I chose port 4461 for communication:
[cfati@xobved-itaf:~]> netstat -an | grep 4461
[cfati@xobved-itaf:~]>
So nothing related to this port. Now after starting the server (I had to trim some spaces so that the output fits here):
[cfati@xobved-itaf:~]> netstat -anp | grep 4461
tcp 0 0 192.168.149.43:4461 0.0.0.0:* LISTEN
As you can see there is a socket listening for connections on port 4461.
Now going on the client machine and starting the Python interpreter, running the following code in the console:
>>> import sys
>>> import socket
>>> HOST = "192.168.149.43"
>>> PORT = 4461
>>>
>>> def create(no=1):
...     ret = []
...     for i in xrange(no):
...         s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
...         s.connect((HOST, PORT))
...         ret.append(s)
...     return ret
...
>>> sockets=[]
>>> sockets.extend(create())
Just after typing the last line, we look at the server output on the server machine:
Accepted connection from: (192.168.137.10, 64218)
Active connections: 1
And the corresponding netstat output:
[cfati@xobved-itaf:~]> netstat -an | grep 4461
tcp 0 0 192.168.149.43:4461 0.0.0.0:* LISTEN
tcp 0 0 192.168.149.43:4461 192.168.137.10:64218 ESTABLISHED
You see the ESTABLISHED connection (this is the accepted socket - AS): The connection was initiated from 192.168.137.10 on port 64218, to 192.168.149.43 on port 4461.
Here's the corresponding netstat output on the client machine (after creating the connection):
e:\Work\Dev>netstat -an | findstr 4461
TCP 192.168.137.10:64218 192.168.149.43:4461 ESTABLISHED
As you can see the Local and Remote (IP/Port) pairs (compared to the output on the server machine) are reversed (like I mentioned above about looking in the mirror). If I go again on the client machine in the interpreter and re-run the last line (create a new connection):
>>> sockets.extend(create())
the output of the server app will show another entry:
Accepted connection from: (192.168.137.10, 64268)
Active connections: 2
while the netstat output on the server machine:
[cfati@xobved-itaf:~]> netstat -an | grep 4461
tcp 0 0 192.168.149.43:4461 0.0.0.0:* LISTEN
tcp 0 0 192.168.149.43:4461 192.168.137.10:64268 ESTABLISHED
tcp 0 0 192.168.149.43:4461 192.168.137.10:64218 ESTABLISHED
I'm not posting what netstat will output on the client machine since it's obvious.
Now, let's look at the 2 pairs each corresponding to an active connection: 192.168.137.10:64268, 192.168.137.10:64218. The 2 ports are returned by the accept function (Ux or Win) called on SS.
The 2 ports (64268 and 64218) are in use by connections, but that doesn't mean they cannot be used for anything else. Other socket servers can listen on them (I am talking here in the server machine context), or they can appear as used ports in connections from other addresses. Here's a hypothetical netstat output line:
tcp 0 0 192.168.149.43:4461 192.168.137.45:64218 ESTABLISHED
So, port 64218 can also be present in a connection from 192.168.137.45 (note that I changed the last IP byte).
In conclusion, you were somewhat right: there can't be more than 65535 (excluding 0, as specified in the other answer) simultaneous connections from the same client IP address. This is a huge number, and I don't know whether it is ever reached in the real world, but even if it is, there are "tricks" to get around it. One example, sketched below, is to have 2+ SSs listening on 2+ different ports and to configure the client so that, if the connection to one server port fails, it tries another; the maximum number of simultaneous connections from the same address then grows by a factor equal to the number of ports we have servers listening on.
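A hypothetical client-side failover helper for that trick (Python; the host and port list are purely illustrative):
import socket

PORTS = [4461, 4462, 4463]   # hypothetical: several SSs listening on different ports

def connect_with_failover(host):
    for port in PORTS:
        try:
            return socket.create_connection((host, port), timeout=2)
        except OSError:
            continue    # this port is unreachable or exhausted; try the next one
    raise OSError("no server port reachable")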
Load balancers handle connections from multiple addresses, so their number can easily grow to hundreds of thousands.
Happy Spring Festival - the Chinese New Year.
I'm working on server programming, and I am stuck on error 10055 (WSAENOBUFS).
I have a TCP client application, which can simulate a huge amount of clients.
Having heard that 65534 is the maximum number of TCP client connections from one computer,
I used Asio to implement a simulation client which starts 50000 asynchronous TCP connects.
pseudocode:
for (int i = 0; i < 50000; ++i)
    async_connect(...);
Development Environment is:
windows xp , x86 , 4G memory, 4 core CPU
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\MaxUserPort=65000
The result is:
when the number of connections reaches 17000, error 10055 occurs.
I tried another computer; there the error occurred at 30000 connections -- better, but still not good enough.
(The server app runs on another computer, also using Asio.)
The question is:
How can I successfully start 50000 client connections from one computer?
You could try to do it more blockwise:
E.g. start with 10000 connections. As soon as 5000 connections have succeeded, start the next 5000 async_connect calls. Then repeat that until you have reached your target. That would at least put less stress on the IO completion port. If it doesn't work, I would try even smaller blocks.
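The same blockwise pacing as a minimal Python/asyncio sketch (the question's code is C++/Asio; this only illustrates the idea, and the host, port, and block sizes are placeholders):
import asyncio

HOST, PORT = "127.0.0.1", 7     # placeholder echo server
TOTAL, BLOCK = 50000, 5000      # open the connections in blocks

async def open_one():
    reader, writer = await asyncio.open_connection(HOST, PORT)
    return writer

async def main():
    writers = []
    for _ in range(TOTAL // BLOCK):
        # wait for the current block to finish connecting before starting the next
        writers.extend(await asyncio.gather(*(open_one() for _ in range(BLOCK))))
    print("connected:", len(writers))

asyncio.run(main())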
However, depending on where the OS runs out of memory, that still might not help.
Do you start asynchronous reads directly after the connect succeeds? These will also drain the memory resources.
How do I disable the TCP port and configure only a unix socket? This is for isolating access to local users.
This is at least a 5-year-old bug. The only issue I found was closed as WONTFIX and RTFM, but this issue logged against 2.4 somewhat relates to it: https://jira.mongodb.org/browse/SERVER-9383.
MongoDB will refuse to create the unix domain socket unless the IPv4 bind address is either 127.0.0.1 or 0.0.0.0. You don't get to run it on just one interface or disable it (for reasons unstated). To me it's a reflection of the quality of the MongoDB code.
I traced the code back to 2011, and my belief is that it was a crude hack to prevent you from accidentally having 2 mongod processes trying to create the same socket file. If you ran one instance on 192.168.1.1:27017 and another on 192.168.1.2:27017, they would both try to create the same socket file at /tmp/mongod-27017.sock. Since no one at 10gen has a clue as to why that check is in there, no one has fixed it since 2011. It's easy to check that 127.0.0.1:27017 is already in use, because of EADDRINUSE, but it's hard to check whether your socket file is stale or another process created it. I'm not sure why they didn't just name the socket file differently.
See the code here: https://github.com/mongodb/mongo/blob/r2.2.4/src/mongo/util/net/listen.cpp#L91
if (useUnixSockets && (sa.getAddr() == "127.0.0.1" || sa.getAddr() == "0.0.0.0")) // only IPv4
    out.push_back(SockAddr(makeUnixSockPath(port).c_str(), port));
I can understand that your concern here is with security in your setup but it is worth considering that MongoDB is built by design to interact in clustered systems and hence TCP networking is part of that design. That said, and as you are aware, there is by default a unix domain socket connection you can use for local access.
You can use the '--bind_ip' configuration option to bind to the loopback only ('127.0.0.1') or only the interface you wish to use, as mongod will by default bind to all available interfaces. For a full list of startup options you might want to look at the manual page to determine what you need.
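For example, a minimal invocation binding only to the loopback (the dbpath here is just illustrative):
mongod --bind_ip 127.0.0.1 --dbpath /var/lib/mongodb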
For other security you can refer to your firewall rules.
Late to the game, but for future viewers: you can disable TCP by using a bindIp that points to a socket file.
For example:
net:
  port: 8080
  # socket filename has port in it
  bindIp: /var/tmp/mongodb/mongodb-8080.sock
  unixDomainSocket:
    pathPrefix: /var/tmp/mongodb
If I start mongod and run lsof -i :8080, I don't see mongo listening on that port.