Unix domain socket - securing the receiver

I am studying a tutorial on Unix domain sockets, and I have a question about the receiver part.
If a process is using listen() and waiting for incoming requests:
What options does it have to make itself secure? Does it have a way to identify who sent the request? Can it apply some restriction on who can send it a request?
Or is there no security option at all, so that a process which calls listen() is completely open to any request?

The general thought on Linux is that security is enforced by the file permissions on the UNIX socket "file" in the filesystem. A process must have read/write access to the socket special file.
The unix(7) man page indicates:
In the Linux implementation, sockets which are visible in the filesystem honor the permissions of the directory they are in. Their owner, group, and permissions can be changed. Creation of a new socket will fail if the process does not have write and search (execute) permission on the directory the socket is created in. Connecting to the socket object requires read/write permission. This behavior differs from many BSD-derived systems which ignore permissions for UNIX domain sockets. Portable programs should not rely on this feature for security.
It seems that directory-searching permissions are honored everywhere, though. So your socket can only be connect()ed to by users that have execute access on the entire path to your socket special file - this is true on all OSes.
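To make that concrete, here is a minimal sketch of a receiver that relies on those directory permissions to restrict who may connect (the directory and socket paths are made up for illustration):

```python
import os
import socket

SOCK_DIR = "/run/myapp"                      # hypothetical directory on a local filesystem
SOCK_PATH = os.path.join(SOCK_DIR, "control.sock")

# Only the owner can search (execute) this directory, so per the behaviour quoted
# above only the owner can connect() to any socket created inside it.
os.makedirs(SOCK_DIR, mode=0o700, exist_ok=True)

if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)                     # remove a stale socket from a previous run

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)
os.chmod(SOCK_PATH, 0o600)                   # restrict the socket file itself too (honored on Linux)
server.listen(1)

conn, _ = server.accept()                    # only processes running as the same user get here
```

As for identifying who sent the request: on Linux the receiver can additionally query the SO_PEERCRED socket option on the accepted connection to learn the peer process's pid, uid and gid, but like the permission checks this is not portable.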
Related:
Can UNIX Domain Sockets be locked by user ID?
Which systems do not honor socket read/write permissions?

Restrict connections from current user on TCP socket

I have the following Electron-ish use case:
User creates an HTTP daemon listening for connections from localhost on port X
User launches a web browser and goes to http://localhost:X
Since the HTTP daemon was created by the current user, it shares that user's permissions (which this application requires).
Problem: Any user on the current machine may connect to the server, which means that private data is shared among all users. Is it possible to solve this at the system level, for example by changing permissions on file descriptors, or is it necessary to use some kind of login scheme?

Is it possible to deploy without downtime and without disconnecting established TCP sockets?

There is a long-lived TCP connection. Up to two clients can connect to the server; in other words, the load is not high. However, once a TCP connection is made, the socket will not be disconnected unless there is an accident, such as a server power-down or network failure. Is it possible to reuse an existing TCP socket when restarting the process? I think a TCP load balancer like AWS NLB cannot be used, since the existing socket won't be moved to the new application. I'd like to have a deployment without downtime, as the system I'm working on can suffer financial damage when a socket is broken and data is lost. Low-level socket programming is OK.
I have read Cloudflare's https://blog.cloudflare.com/graceful-upgrades-in-go/ article explaining Nginx's graceful reload mechanism. Since an HTTP server opens and closes sockets frequently, that article assumes that the server's connections will eventually be closed, but my situation is slightly different, so I'm not sure whether this approach can be used.
A socket can be shared between multiple processes, for example by opening the socket in the same parent process and forking a child process. But once the last process using the socket closes it, the socket and thus the underlying connection is implicitly closed.
This means you must make sure that there is always a running process which holds the socket open. This can be done, for example, if deploying the new software does not first exit the old process and then create the new one, but instead starts the new process and has the old process transfer the socket to it; see Can I share a file descriptor to another process on linux or are they local to the process?
for how this can be done on Linux. Another way is to use file descriptor inheritance across a fork().
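As a rough sketch of that hand-over idea (assuming Linux and Python 3.9+, with a made-up Unix-socket path serving as the transfer channel), the old process can pass the live TCP connection to the new process via SCM_RIGHTS:

```python
import os
import socket

HANDOVER_PATH = "/run/myapp/handover.sock"    # hypothetical hand-over channel

# --- old process: hand the live TCP connection over, then exit ---
def send_connection(tcp_conn: socket.socket) -> None:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as chan:
        chan.connect(HANDOVER_PATH)
        # SCM_RIGHTS ancillary data duplicates the descriptor into the peer process.
        socket.send_fds(chan, [b"tcp"], [tcp_conn.fileno()])

# --- new process: receive the descriptor and keep using the same connection ---
def recv_connection() -> socket.socket:
    if os.path.exists(HANDOVER_PATH):
        os.unlink(HANDOVER_PATH)
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as listener:
        listener.bind(HANDOVER_PATH)
        listener.listen(1)
        chan, _ = listener.accept()
        with chan:
            _, fds, _, _ = socket.recv_fds(chan, 1024, 1)
    # The kernel-side connection state is untouched, so the remote peer never
    # notices that a different process is now serving it.
    return socket.socket(fileno=fds[0])
```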
Note that this sharing of file descriptors only works with plain sockets, where the state is fully kept in the OS kernel. It will be much harder or impossible with TLS sockets, since in that case the current user-space state also needs to be shared somehow.
Another way is to have some intermediate "proxy" which on the one hand holds the stable socket connection to your fragile application and on the other hand does robust socket handling (i.e. reconnecting when needed) towards the application you want to update. The proxy transfers the traffic between both sides and reconnects the upstream socket whenever a problem occurs.
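A very rough single-connection sketch of such a proxy (addresses are made up; only the upstream-restart path is handled, and downstream teardown, partial writes during a restart, etc. are ignored for brevity):

```python
import socket
import threading
import time

LISTEN_ADDR = ("127.0.0.1", 9000)     # the fragile peer connects here and stays connected
UPSTREAM_ADDR = ("127.0.0.1", 9001)   # the application that gets redeployed behind the proxy

upstream_lock = threading.Lock()
upstream = None

def get_upstream() -> socket.socket:
    """Return the current upstream connection, reconnecting if the app is restarting."""
    global upstream
    with upstream_lock:
        while upstream is None:
            try:
                upstream = socket.create_connection(UPSTREAM_ADDR)
            except OSError:
                time.sleep(0.5)        # application not back yet; keep retrying
        return upstream

def drop_upstream() -> None:
    global upstream
    with upstream_lock:
        if upstream is not None:
            upstream.close()
            upstream = None

def downstream_to_upstream(down: socket.socket) -> None:
    while True:
        data = down.recv(4096)
        if not data:
            break                      # fragile peer went away; nothing left to proxy
        while True:
            try:
                get_upstream().sendall(data)
                break
            except OSError:
                drop_upstream()        # upstream died mid-send; reconnect and resend chunk

def upstream_to_downstream(down: socket.socket) -> None:
    while True:
        try:
            data = get_upstream().recv(4096)
            if not data:
                raise OSError("upstream closed")
            down.sendall(data)
        except OSError:
            drop_upstream()            # upstream restarted; reconnect transparently

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(LISTEN_ADDR)
listener.listen(1)
down, _ = listener.accept()
threading.Thread(target=downstream_to_upstream, args=(down,), daemon=True).start()
upstream_to_downstream(down)
```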

Not receiving any UDP data on a socket when App Sandbox is on in Cocoa app

I have a Cocoa app written in Swift 3.0, which uses a framework written in C++/Boost to receive UDP data on a socket. But when the App Sandbox capability is switched on in the Cocoa app, I am not receiving any data from the socket, which I am also using to send data to the server first. When App Sandbox is switched off, everything works as expected.
The entitlements com.apple.security.network.client and com.apple.security.network.server are set to YES.
Is there anything I can do to make it work with App Sandbox switched on (which is mandatory for apps in the Mac App Store)?
I was able to make it work with both com.apple.security.network.client and com.apple.security.network.server enabled, and I am using ports much higher than 1024. Our actual problems were with firewalls and the backend on the other end. So to sum up: it is possible to open a socket and receive UDP data in a sandboxed Cocoa app, but you need to have com.apple.security.network.server enabled.
It's possible to use the following entitlements to allow UDP/TCP socket connections:
com.apple.security.network.client
com.apple.security.network.server
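In the target's .entitlements property list these would typically appear as boolean keys, roughly like this sketch:

```xml
<key>com.apple.security.network.client</key>
<true/>
<key>com.apple.security.network.server</key>
<true/>
```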
According to the "Elevating Privileges Safely" section in Apple's documentation, opening raw sockets or ports with numbers below 1024 (UDP/TCP) requires elevated privileges, and elevating privileges is apparently not permitted at all in sandboxed apps.
Circumstances Requiring Elevated Privileges
Regardless of whether a user is logged in as an administrator, a program might have to obtain administrative or root privileges in order to accomplish a task. Examples of tasks that require elevated privileges include:
* manipulating file permissions, ownership
* creating, reading, updating, or deleting system and user files
* opening privileged ports (those with port numbers less than 1024) for TCP and UDP connections
* opening raw sockets
* managing processes
* reading the contents of virtual memory
* changing system settings
* loading kernel extensions
If you have to perform a task that requires elevated privileges, you must be aware of the fact that running with elevated privileges means that if there are any security vulnerabilities in your program, an attacker can obtain elevated privileges as well, and would then be able to perform any of the operations listed above.
Note: Elevating privileges is not allowed in applications submitted to the Mac App Store (and is not possible in iOS).

Is PostgreSQL peer authentication safe for production?

PostgreSQL peer authentication is a source of many questions on this website, but once you understand how it works, it looks pretty awesome.
For example, I can have my application connecting to the development database without supplying username and password.
So, my question is, can I use peer authentication on a production server? Is it safe enough?
Thank you very much.
peer is very useful for many kinds of deployments - e.g. when you want to allow people to log in with local unix user accounts and get quick DB access as a matching PostgreSQL user.
It's not great for webapps, because you generally want each webapp to have its own user. So you usually use md5 for them.
I often combine the two. For webapps, allow md5 to their private DB only - over local sockets if the driver supports it, otherwise over host connections from localhost. Allow peer for local users to any DB, including the webapp DBs. If you want to have only one user in each DB (so you can ignore permissions - which I don't recommend, but I know some people do), you can use a pg_ident.conf mapping to allow people to authenticate via peer as users other than their default user name.
Then you may add hostssl connections from the outside world via md5 or gssapi (Kerberos), or sspi if it's a Windows DB host.
Authentication methods aren't an all-or-nothing thing. There's a reason pg_hba.conf makes it easy to provide a list of alternatives and pick the first matching entry.
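As a sketch of what such a combined setup could look like in pg_hba.conf (database and user names are made up; order matters because the first matching line wins):

```
# TYPE    DATABASE    USER      ADDRESS        METHOD
local     webapp_db   webapp                   md5      # webapp over local socket, password auth
local     all         all                      peer     # local unix accounts as matching DB users
hostssl   all         all       0.0.0.0/0      md5      # outside world, TLS plus passwords
```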

How do online port checkers work?

For example http://www.utorrent.com/testport?port=12345
How does this work? Can the server-side script attempt to open a socket?
There are many ways of accomplishing this through server-side scripting. As #Oded mentioned, most server-side handlers are capable of initiating socket connections on arbitrary ports, and most of those even have dedicated port-scanning packages/libraries (PHP has one in the PEAR repository, Python's 'socket' module makes this type of task a breeze, etc.).
Keep in mind that on shared host platforms, socket connections are typically disabled for security purposes.
Another way that is also very easy to accomplish is to use a command-line port scanner such as nmap from your server-side script, e.g. in PHP you could do echo `nmap -p $port $ip` using the backtick execution operator.
The server-side script will try to open a connection to the originating IP on the specified port.
If there is no response (the attempt will time out), this is an indication that the port is not open.
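A minimal sketch of that connect-back check in Python (function and parameter names are illustrative):

```python
import socket

def port_is_open(ip: str, port: int, timeout: float = 3.0) -> bool:
    """Try to open a TCP connection back to the visitor's IP on the given port."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True        # something accepted the connection: the port is open/forwarded
    except OSError:
        return False           # connection refused or timed out: closed or filtered
```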
The server can try, as #Oded said. But that doesn't ensure the receiver will respond.
Typically, something like this happens:
1. The URL request contains instructions about which port to access, and the headers that your browser sends include information about where the request is originating from.
2. Before responding to the request, the server tries to open a connection to that port and checks whether this is successful, waiting a while before timing out.
3. The webpage is rendered dynamically based on the results of this test.
4. The response is returned to you containing the results.
Sometimes steps (2) and (3) will be replaced with an AJAX callback, which allows the response to return sooner.
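Tying the steps together, a toy server-side handler in the style of such a test page might look like this (standard library only; the port default, listen address and messages are invented for illustration):

```python
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class TestPortHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Step 1: the port comes from the query string, the target IP from the connection itself.
        port = int(parse_qs(urlparse(self.path).query).get("port", ["0"])[0])
        client_ip = self.client_address[0]
        # Step 2: connect-back test (same check as in the sketch above).
        try:
            with socket.create_connection((client_ip, port), timeout=3.0):
                is_open = True
        except OSError:
            is_open = False
        # Steps 3 and 4: render the result and return it.
        body = f"Port {port} on {client_ip} appears to be {'open' if is_open else 'closed'}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TestPortHandler).serve_forever()
```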