Support Kerberos constrained delegation using SSPI in a multiprocess server

I need to support Kerberos constrained delegation for our C++ HTTP server product on Windows using SSPI.
For a single-process server, the following workflow can be used (sketched below), and I have a working prototype.
1) Call AcquireCredentialsHandle
2) Call AcceptSecurityContext
3) Call ImpersonateSecurityContext
4) Do delegation
5) Call RevertSecurityContext
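
For reference, a simplified sketch of this flow (not the actual prototype; error handling and the SEC_I_CONTINUE_NEEDED loop are omitted, and the buffer handling is illustrative):

#define SECURITY_WIN32
#include <windows.h>
#include <security.h>   // link with Secur32.lib

// clientToken / clientTokenLen: the SPNEGO/Kerberos blob received from the
// HTTP client (e.g. decoded from the Authorization header).
void HandleAuthenticatedRequest(BYTE* clientToken, DWORD clientTokenLen)
{
    CredHandle cred;
    CtxtHandle ctxt;
    TimeStamp  expiry;
    ULONG      attrs = 0;

    // 1) Acquire the service account's credentials for the Negotiate package.
    AcquireCredentialsHandle(NULL, (LPTSTR)TEXT("Negotiate"), SECPKG_CRED_INBOUND,
                             NULL, NULL, NULL, NULL, &cred, &expiry);

    // 2) Accept the client's context; ASC_REQ_DELEGATE requests a delegatable context.
    SecBuffer     inBuf   = { clientTokenLen, SECBUFFER_TOKEN, clientToken };
    SecBufferDesc inDesc  = { SECBUFFER_VERSION, 1, &inBuf };
    BYTE          out[8192];
    SecBuffer     outBuf  = { sizeof(out), SECBUFFER_TOKEN, out };
    SecBufferDesc outDesc = { SECBUFFER_VERSION, 1, &outBuf };
    AcceptSecurityContext(&cred, NULL, &inDesc, ASC_REQ_DELEGATE,
                          SECURITY_NATIVE_DREP, &ctxt, &outDesc, &attrs, &expiry);

    // 3) Impersonate the client on the current thread.
    ImpersonateSecurityContext(&ctxt);

    // 4) ... access the back-end resource as the client (delegation) ...

    // 5) Drop back to the service account and clean up.
    RevertSecurityContext(&ctxt);
    DeleteSecurityContext(&ctxt);
    FreeCredentialsHandle(&cred);
}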
However, the C++ HTTP server has a master process and a worker process. Both processes run on the same machine and use the same service account, and each client request can come from a different user. The master process can handle SPNEGO and Kerberos authentication using AcquireCredentialsHandle and AcceptSecurityContext, but it has no knowledge of which resource it needs to delegate to; only the worker process has that knowledge.
Which SSPI functions can I use to forward the client's security context to the worker so that the worker can do the impersonation/delegation?
One possible solution seems to be to get the client's identity in the master and transfer it to the worker; the worker would then use LsaLogonUser and ImpersonateLoggedOnUser. However, since LsaLogonUser allows logon without a password, our security expert is strongly against using it.
SSPI also has ExportSecurityContext and ImportSecurityContext, but the documentation is very vague and I am not sure whether they can address my use case. Since the ImpersonateSecurityContext documentation says it "allows a server to impersonate a client by using a token previously obtained by a call to AcceptSecurityContext (General) or QuerySecurityContextToken.", it seems I can't call ImpersonateSecurityContext after ImportSecurityContext.
Any suggestion is appreciated.

What you need to do is get a handle to a token in the parent process and duplicate it into the child process.
You do it this way:
In the parent process, call ImpersonateSecurityContext as you normally would. This will set your identity. Then call QuerySecurityContextToken to get a handle to the token of that identity. Once you have the handle, call DuplicateHandle with the target process set to a handle to the child process. The returned lpTargetHandle is a locally referenced handle in the target process (the child). You will somehow need to transfer this value to the target process.
Once the child process has the handle value, it can call ImpersonateLoggedOnUser, passing that handle. At this point the local identity should be the user in question, and any outbound calls will use it when creating the new context.
Keep in mind though that the child process will need the SeImpersonatePrivilege.
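
Putting that together, a rough sketch of the two sides might look like the following; hChildProcess (an opened handle to the worker process) and the IPC channel that carries the handle value across are assumptions, not part of any particular API:

#define SECURITY_WIN32
#include <windows.h>
#include <security.h>   // link with Secur32.lib

// --- Master (parent) process, after AcceptSecurityContext has succeeded ---
HANDLE DuplicateClientTokenIntoWorker(CtxtHandle* ctxt, HANDLE hChildProcess)
{
    ImpersonateSecurityContext(ctxt);            // set the identity

    HANDLE clientToken = NULL;
    QuerySecurityContextToken(ctxt, &clientToken);

    RevertSecurityContext(ctxt);                 // the token handle stays valid

    HANDLE tokenInChild = NULL;                  // value only meaningful inside the child
    DuplicateHandle(GetCurrentProcess(), clientToken,
                    hChildProcess,               // assumption: handle to the worker process
                    &tokenInChild,
                    0, FALSE, DUPLICATE_SAME_ACCESS);

    CloseHandle(clientToken);
    return tokenInChild;                         // send this value to the worker over IPC
}

// --- Worker (child) process, after receiving the handle value over IPC ---
void RunAsClient(HANDLE receivedToken)
{
    ImpersonateLoggedOnUser(receivedToken);      // thread now runs as the client
    // ... do the delegated access here ...
    RevertToSelf();
    CloseHandle(receivedToken);
}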

Related

Which OAuth2 grant type should I use?

I'm maintaining SReview, a mojolicious-based webapp which needs to run a lot of background tasks that will need to change database state. These jobs require a large amount of CPU time and there are many of them, so depending on the size of the installation it may be prudent to have multiple machines run these background jobs. Even so, the number of machines that have access to the database is rather limited, so currently I'm having them access the database directly, using a direct PostgreSQL connection.
This works, but sometimes the background jobs may need to run somewhere on the other side of a hostile network, and therefore it may be less desirable to require an extra open network port just for database access. As such, I was thinking of implementing some sort of web based RPC protocol (probably something with JSON), and to protect the access with OAuth2. However, I've never worked with that protocol in detail before, and could use some guidance as to which grant flow to use.
There are two ways in which the required credentials can be provided to the machine that runs these background jobs:
The job dispatcher has the ability to specify environment variables or command line options to the background jobs. These will then be passed on to the machines that actually run the jobs in a way that can be assumed to be secure. However, that would mean that in some cases the job dispatcher itself would need to be authenticated with OAuth2, too, preferably in a way that it can be restarted at will without having to authenticate again and again.
As the number of machines running jobs is likely to be fairly limited, it should be possible to create machine credentials for each machine. In that case, however, it would be important to be able to run multiple sessions in parallel on the same machine.
Which grant flow would support either of those models best?
From the overview of your scenario it is clear that the interactions are system-to-system. There is no end-user (human) interaction.
First, given that your applications execute in a secure (closed) environment, they can be considered confidential clients. OAuth 2.0 client types explain more on this. With this background, you can issue each distributed application component a client ID and a client secret.
Regarding the grant type, I first encourage you to familiarize yourself with all the available options by going through the Obtaining Authorization section. In simple words, it explains the different ways an application can obtain tokens (especially access tokens) that can be used to invoke an OAuth 2.0 protected endpoint (in your case the RPC endpoint).
For you, the best grant type will be the client credentials grant. It is designed for clients that have a pre-established trust with the OAuth 2.0 protected endpoint. Also, unlike the other grant types, it does not require a browser (user agent) or an end user.
Finally, you will need an OAuth 2.0 authorization server. It registers the different distributed clients and issues client IDs and secrets to them. When a client needs to obtain tokens, it consumes the token endpoint. Each client invocation of your RPC endpoint will then carry a valid access token, which you can validate using token introspection (or any other desired method).
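
For illustration only, here is a minimal sketch of what the client credentials token request could look like using libcurl; the token endpoint URL, client ID, and secret are placeholders for whatever your authorization server issues:

#include <curl/curl.h>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();

    // Token endpoint of your OAuth 2.0 authorization server (placeholder).
    curl_easy_setopt(curl, CURLOPT_URL, "https://auth.example.org/oauth2/token");

    // Authenticate the confidential client with its id/secret (HTTP Basic).
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
    curl_easy_setopt(curl, CURLOPT_USERNAME, "job-runner-1");   // client id (placeholder)
    curl_easy_setopt(curl, CURLOPT_PASSWORD, "s3cr3t");         // client secret (placeholder)

    // Ask for a token with the client credentials grant.
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "grant_type=client_credentials");

    // The JSON response (access_token, token_type, expires_in) goes to stdout by default.
    CURLcode rc = curl_easy_perform(curl);

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}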

How to handle username/pass changes in a distributed REST application?

I have a distributed REST application, written in C++, with an integrated SQLite DB. The application is self-contained: no Apache or IIS server, and no external MySQL. The application is the logic behind a hardware sensor: it monitors the sensor(s), identifies and stores data of interest, and generates "events" when data of interest repeats. The creation of data of interest is synchronized across the Internet to multiple instances of the application, using REST to communicate the synchronization.
Using basic authentication over https, each instance maintains a local key/value store of remote instances' user/pass authentication data. This is necessary because each communication with a remote instance of the application requires authentication.
My question is how to handle the situation when the human operator changes either the username or password in the application, while the application is in active synchronization with remote instances.
I'm thinking this is really no different from any other material application data changing: when a local username/password changes, a REST communication is posted to each synchronized instance containing the changed data for that remote's local key/value store. Any communications that fail get queued for when that remote is back, as this is material information the remote needs to maintain synchronization.
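A bare-bones sketch of that idea (all names are illustrative, not taken from the actual application):

#include <map>
#include <queue>
#include <string>

struct Credentials   { std::string user, pass; };
struct PendingUpdate { std::string peer; Credentials creds; };

std::map<std::string, Credentials> peers;   // key/value store of remote instances
std::queue<PendingUpdate> retryQueue;       // failed notifications, retried when the peer returns

// Placeholder for the HTTPS POST (basic auth) that notifies one remote instance.
bool PostCredentialUpdate(const std::string& peer, const Credentials& creds)
{
    (void)peer; (void)creds;
    return false;   // pretend the peer is unreachable
}

void OnLocalCredentialsChanged(const Credentials& updated)
{
    for (const auto& entry : peers) {
        if (!PostCredentialUpdate(entry.first, updated))
            retryQueue.push({ entry.first, updated });   // retry when the remote is back
    }
}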
Because the communications occur over https, the fact that authentication data is being passed around is okay.
I thought I might need special logic to handle the race condition where one instance tries to communicate with another, but the other has just changed its authentication fields. The sender will queue with my current logic, and when the remote sends its updated authentication data, the locally queued failed communications will start succeeding. So that does not appear to be an issue.
I guess this is a request to anyone who's been here before: what did you do? Maybe my search terms are weak here, because I'm not finding discussion of this issue.

Semaphore error logged in mobicents sip servlet

We have an application written against Mobicents SIP Servlets; currently it is using v2.1.547, but I have also tested against v3.1.633 with the same behavior noted.
Our application works as a B2BUA: we have an incoming SIP call, and we also place an outbound SIP call to an MRF which is executing VXML. These two SIP calls are associated with a single SipApplicationSession, which is the concurrency model we have configured.
The scenario which recreates this 100% of the time is as follows:
inbound call placed to our application (call is not answered)
outbound call placed to MRF
inbound call hangs up
application attempts to terminate the SipSession associated with the outbound call
I am seeing this being logged:
2015-12-17 09:53:56,771 WARN [SipApplicationSessionImpl] (MSS-Executor-Thread-14) Failed to acquire session semaphore java.util.concurrent.Semaphore#55fcc0cb[Permits = 0] for 30 secs. We will unlock the semaphore no matter what because the transaction is about to timeout. THIS MIGHT ALSO BE CONCURRENCY CONTROL RISK. app Session is5faf5a3a-6a83-4f23-a30a-57d3eff3281c;SipController
I am willing to believe our application might somehow be triggering this behavior, but I can't see how at the moment. I would have thought that acquiring/releasing the Semaphore was entirely internal to the implementation, so it should ensure nothing acquires the Semaphore and never releases it?
Any pointers on how to get to the bottom of this would be appreciated, as I said it is 100% repeatable so getting logs etc is all possible.
It's hard to tell without seeing any logs or application code showing how you access the session and schedule messages to be sent. But if you use the same SipApplicationSession in an asynchronous manner, you may want to use our vendor-specific asynchronous API https://mobicents.ci.cloudbees.com/job/MobicentsSipServlets-Release/lastSuccessfulBuild/artifact/documentation/jsr289-extensions-apidocs/org/mobicents/javax/servlet/sip/SipSessionsUtilExt.html#scheduleAsynchronousWork(java.lang.String,%20org.mobicents.javax.servlet.sip.SipApplicationSessionAsynchronousWork) which will guarantee that access to the SipApplicationSession is serialized and avoid any concurrency issues.

Should a custom HTTP header or a parameter be used to identify the context of a caller to a RESTful service?

My team has inherited a WCF service that serves as a gateway into multiple back-end systems. The first step in every call to this service is a decision point based on a context key that identifies the caller. This decision point is essentially a factory to provide a handler based on which back end system the request should be directed to.
We're looking at simplifying this service into a RESTful service and are considering the benefits and consequences of passing the context key as part of the request header rather than adding the context key as a parameter to every call into the service. On the one hand, when looking at the individual implementations of the service for each of the backend systems, the context of the caller seems like an orthogonal concern. However, using a custom header leaves me with a slightly uncomfortable feeling, since an essential detail for calls to the service is masked from the visible interface. I should note that this is a purely internal solution, which mitigates some of my concern about the visibility of the interface, but even internally there's no telling whether the next engineer to attempt to connect to or modify the service is going to be aware of this hidden detail.

Which Interprocess Communication methods work on a Terminal Server?

In a terminal server session, some standard IPC technologies might not work like in a single user environment, because the required resources are not virtualized.
For example, TCP/IP ports are not virtualized, so applications in different sessions which try to listen on the same port will cause a port conflict.
Which IPC technology will work in a terminal server environment where applications running in the same user session need to interact?
Messages (WM_COPYDATA)?
Named Pipes?
DDE?
Memory Mapped Files?
Messages will work fine. DDE will too, since it is based on messages. Named pipes will not, since they are per-system and not per-session. You might also consider COM or OLE.
All IPCs can be used in a TS environment - you just have to be clever in the naming of the objects to achieve the required end result. Using sockets is trickier but it can be done. I've listed a few methods below.
For IPC objects that can be named (pipe, event, mutex, memory-mapped file, etc.), incorporating the session ID into the name of the object will achieve the virtualisation required.
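For example, a C++ sketch of building a per-session event name (the "MyApp" prefix is just a placeholder):

#include <windows.h>
#include <cwchar>

HANDLE CreatePerSessionEvent()
{
    DWORD sessionId = 0;
    ProcessIdToSessionId(GetCurrentProcessId(), &sessionId);

    // Global\ makes the name visible machine-wide; the session ID suffix
    // keeps each session's instance distinct.
    wchar_t name[64];
    swprintf_s(name, L"Global\\MyApp_Event_%lu", sessionId);

    return CreateEventW(NULL, FALSE, FALSE, name);
}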
To further lock down the IPC object use the object's security attributes to stop the possibility of any other user from accessing the IPC object. This could occur accidentally as a result of a bug or maliciously from another user on the terminal server.
Similarly, use the logged-on user's authentication ID in the IPC object's name. In C++, see MSDN on GetTokenInformation and use TokenStatistics for the TokenInformationClass. I'm sure there is an equivalent .NET method. Again, secure the IPC object.
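A short C++ sketch of pulling that authentication ID (a LUID unique to the logon session) out of the process token:

#include <windows.h>

bool GetAuthenticationId(LUID* authId)
{
    HANDLE token = NULL;
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token))
        return false;

    TOKEN_STATISTICS stats;
    DWORD returned = 0;
    BOOL ok = GetTokenInformation(token, TokenStatistics,
                                  &stats, sizeof(stats), &returned);
    CloseHandle(token);

    if (ok)
        *authId = stats.AuthenticationId;   // unique per logon session
    return ok == TRUE;
}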
If you must use sockets on a TS (I personally would choose another method to communicate between applications on a TS), then use the port numbers. Pick a base port number and add the session number to get the port used for a session. To make sure that the correct applications are communicating, use an authentication method and/or handshaking before transferring data. Theoretically sessions can be numbered up to 65535, so you may come unstuck if you use a base port number of, say, 2000 and your application is run in session 65500.
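A one-liner sketch of that per-session port calculation (basePort is whatever base you pick; note the overflow caveat above):

#include <windows.h>

unsigned short PerSessionPort(unsigned short basePort)
{
    DWORD sessionId = 0;
    ProcessIdToSessionId(GetCurrentProcessId(), &sessionId);
    return static_cast<unsigned short>(basePort + sessionId);
}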
If you really wanted to use sockets then maybe a broker service would help.