UA_STATUSCODE_BADINTERNALERROR when connecting with Username & Password Security Mode = NONE and Security Policy = NONE - opc

I’ve written an OPC UA client and I’m using the ProSys OPC UA Simulator Server to test it. Running in Anonymous mode my client connects and I can browse the server. However, when I configure my client to use a username and password it fails with "No Suitable UserTokenPolicy found for the possible endPoints". I’ve debugged it and it appears that http://opcfoundation.org/UA/Se…..olicy#None is not in the endpoint's userIdentityTokens array, although in ProSys it states it is.
I’ve set up a user in the OPC UA Simulator and the Username & Password box is ticked:
Security Modes = None is ticked
Security Policies = None is ticked
The list of server endpoints shows:
Security Mode = None
Security Policy = None
So I would expect to see it in the userIdentityTokens array.
My client is written using the open62541 library.
Any help would be appreciated.
Thanks

I haven't used the Open62541 libraries, but it seems the standard allows the OPC server to require that the username and password are encrypted even when the rest of the transport is not signed or encrypted.
It appears that at some point in time (and maybe still), open62541 didn't handle encrypting the password. Here are some related GitHub issues:
https://github.com/open62541/open62541/issues/934
https://github.com/open62541/open62541/issues/1548
https://github.com/open62541/open62541/issues/2757
I am working on a Node.js OPC UA client based on the node-opcua library, and I was able to verify that I can connect to the ProSys OPC UA simulation server with the security mode set to None and user authentication set to username & password. I do not know what type of password encryption node-opcua is doing behind the scenes, but it works.
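For reference, the connection logic I used looks roughly like the sketch below (TypeScript; the endpoint URL is the ProSys default, the credentials are placeholders, and I'm assuming a recent node-opcua API):

import {
  OPCUAClient,
  MessageSecurityMode,
  SecurityPolicy,
  UserTokenType,
} from "node-opcua";

// ProSys Simulation Server default endpoint - adjust as needed
const endpointUrl = "opc.tcp://localhost:53530/OPCUA/SimulationServer";

async function main(): Promise<void> {
  const client = OPCUAClient.create({
    securityMode: MessageSecurityMode.None, // transport is neither signed nor encrypted
    securityPolicy: SecurityPolicy.None,
    endpointMustExist: false,
  });
  await client.connect(endpointUrl);
  // node-opcua encrypts the password with the server's certificate according
  // to the endpoint's UserTokenPolicy, even though MessageSecurityMode is None.
  const session = await client.createSession({
    type: UserTokenType.UserName,
    userName: "opcuauser", // placeholder
    password: "opcuapass", // placeholder
  });
  console.log("connected, session:", session.sessionId.toString());
  await session.close();
  await client.disconnect();
}

main().catch(console.error);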
----- Additional info -------
I found another forum with some clarification: https://forum.prosysopc.com/forum/opc-ua/clarification-on-opensecurechannel-messages-and-x509identitytoken-specifications/
The key part:
If passwords are sent, they will be encrypted as defined by the UserTokenPolicy (which is separate from SecurityPolicy, but similar). The Application Instance Certificates will be used for the encryption in this case – and therefore they are required to be exchanged even when MessageSecurityMode=None.
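As a debugging aid for the original question, it's also worth dumping what the server actually advertises before deciding whose fault it is. Roughly, again with node-opcua (URL is a placeholder and I'm assuming the current enum exports):

import { OPCUAClient, MessageSecurityMode, UserTokenType } from "node-opcua";

async function dumpEndpoints(): Promise<void> {
  const client = OPCUAClient.create({ endpointMustExist: false });
  await client.connect("opc.tcp://localhost:53530/OPCUA/SimulationServer");
  for (const ep of await client.getEndpoints()) {
    console.log(MessageSecurityMode[ep.securityMode], ep.securityPolicyUri);
    // Each endpoint lists its UserTokenPolicies; a UserName policy carries the
    // SecurityPolicy URI used to encrypt the password for that token type.
    for (const pol of ep.userIdentityTokens ?? []) {
      console.log("  ", UserTokenType[pol.tokenType], pol.policyId, pol.securityPolicyUri);
    }
  }
  await client.disconnect();
}

dumpEndpoints().catch(console.error);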

Related

OpenID Connect: transparent authentication for legacy clients using Resource Owner Password Credentials

We're currently rewriting various services to use OpenID Connect (via Keycloak).
This works great for any modern browser-based clients, but in our case we also need to support legacy IoT devices, which:
cannot receive a firmware update (and thus are stuck in their current modes of authentication/communication)
are not aware of Keycloak and are not configured to participate in OpenID Connect. (and are also only aware of the application's URL and not the Keycloak URL)
authenticate directly with the application using either Basic Authentication or SSL Client Authentication with a certificate.
From the documentation we gathered that mapping each device to a Keycloak user and using the Resource Owner Password Credentials flow would be the way to go in such cases.
We were thinking that it'd be nice to add centralized support for such devices by exposing a reverse proxy that sits in front of all services and performs the following steps:
Receive the IoT device requests (and optionally terminate SSL)
Extract the credentials from the request (either basic auth / client certificate)
Perform the Resource Owner Password Credentials Flow against Keycloak to exchange the credentials for an access token (where the IoT device acts as the OAuth Resource Owner and the reverse proxy acts as the OAuth Client)
If successful, enrich the original request with the retrieved access token and forward it to the proxied service
In that way the entire OpenID Connect authentication is transparent for any legacy devices.
This design could be further improved/optimized by caching the access tokens for the duration they are valid for (using the credentials as the cache key) and refreshing them when they expire - see the sketch below, which includes such a cache.
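To make the sanity check concrete, here is a minimal sketch of the proxy we have in mind (Node/TypeScript with Express and the global fetch; the Keycloak URL, realm, client id, and upstream address are all placeholders, it only handles Basic auth, and it doesn't forward request bodies):

import express from "express";

// All of these values are hypothetical placeholders.
const TOKEN_URL = "https://keycloak.example.com/realms/iot/protocol/openid-connect/token";
const CLIENT_ID = "legacy-device-proxy"; // the proxy acts as the OAuth Client
const UPSTREAM = "http://backend.internal:8080";

// Cache tokens per credential pair, refreshing shortly before expiry.
const cache = new Map<string, { token: string; expiresAt: number }>();

async function exchange(username: string, password: string): Promise<string> {
  const key = `${username}:${password}`; // a real proxy should hash this key
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.token;

  // Resource Owner Password Credentials grant: the device is the Resource Owner.
  const res = await fetch(TOKEN_URL, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ grant_type: "password", client_id: CLIENT_ID, username, password }),
  });
  if (!res.ok) throw new Error(`token exchange failed: ${res.status}`);
  const { access_token, expires_in } = (await res.json()) as { access_token: string; expires_in: number };
  cache.set(key, { token: access_token, expiresAt: Date.now() + (expires_in - 30) * 1000 });
  return access_token;
}

const app = express();
app.use(async (req, res) => {
  const auth = req.headers.authorization ?? "";
  if (!auth.startsWith("Basic ")) {
    res.status(401).set("WWW-Authenticate", "Basic").end();
    return;
  }
  // Naive split: breaks if the password itself contains a colon.
  const [username, password] = Buffer.from(auth.slice(6), "base64").toString("utf8").split(":");
  try {
    // Enrich the original request with the access token and forward it upstream.
    const upstream = await fetch(UPSTREAM + req.originalUrl, {
      method: req.method,
      headers: { authorization: `Bearer ${await exchange(username, password)}` },
    });
    res.status(upstream.status).send(await upstream.text());
  } catch {
    res.status(502).end();
  }
});
app.listen(8080);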
Now, this idea seems like such a no-brainer, that we were surprised that we couldn't find any existing gateways, reverse proxies or plugins that do this.
So I guess we're in need of a sanity check on:
Is this something that can work as described or are there any obvious flaws with the idea?
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
UPDATE 1: (responding to question) The described legacy IoT devices are (physically) Arduino microcontrollers with baked-in unique credentials. In the context of Keycloak, each such Arduino microcontroller is mapped to a Keycloak user. We're open to suggestions if this is not the most adequate mapping for this use case.
UPDATE 2: (responding to question) Agreed that the Client Credentials Flow would be semantically more correct for such a device-to-device authentication, and any future devices we produce will use it. However, we can't use it for the existing legacy devices for two reasons: 1) the devices only know the server's URL and can't authenticate directly against Keycloak, and 2) we also want to support SSL Client Authentication using an X.509 certificate, and from our understanding Keycloak only supports X.509 client certificate authentication for users, not for clients.
Is this something that can work as described or are there any obvious flaws with the idea?
It works fine, so long as your OpenID Provider (OP) supports the Resource Owner Password Credentials flow, which is deprecated and has been removed in OAuth 2.1.
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
Lots of reverse proxies do this, just not with resource owner credentials. The ROPC flow was never a good idea, exists for legacy reasons, and has been removed from OAuth 2.1.
I suspect that most people move away from storing and transmitting resource owner credentials as they modernize their architecture.
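For comparison, the client credentials grant the asker mentions in UPDATE 2 - the semantically correct choice for new devices - is a single token request. A TypeScript sketch (URL, realm, client id, and secret are placeholders; the path assumes a recent Keycloak without the /auth prefix):

async function requestToken(): Promise<string> {
  const res = await fetch(
    "https://keycloak.example.com/realms/iot/protocol/openid-connect/token", // placeholder
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        grant_type: "client_credentials", // the device itself is the OAuth client
        client_id: "device-42",           // placeholder
        client_secret: "s3cr3t",          // placeholder
      }),
    }
  );
  if (!res.ok) throw new Error(`token request failed: ${res.status}`);
  const { access_token } = (await res.json()) as { access_token: string };
  return access_token;
}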

WWW-Authenticate uses NTLM and not Kerberos

I'm running a NodeJS server on a Windows Server 2008.
The server doesn't do much, but I set the WWW-Authenticate: Negotiate header on 401 responses, which I know can proceed either with the default Kerberos authentication or, if that's not available, with NTLM.
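The relevant part of the server is essentially this (a simplified TypeScript sketch, not my exact code):

import * as http from "http";

http.createServer((req, res) => {
  if (!req.headers.authorization) {
    // Offer SPNEGO: the client picks Kerberos if it can get a service ticket
    // for this host, otherwise it falls back to NTLM.
    res.writeHead(401, { "WWW-Authenticate": "Negotiate" });
    res.end();
    return;
  }
  // A real server must validate the SPNEGO token from the header here.
  res.end("hello");
}).listen(80);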
I downloaded Fiddler and discovered that when I try to reach the server it tries to authenticate with NTLM (prompting me for username and password) rather than Kerberos, even though the computer is in the same domain as the server. When I run the command "klist" it does show me that the machine has tickets, which means that it already authenticated with Kerberos (doesn't it?).
My question is: how can I make it authenticate with Kerberos rather than NTLM - and why does it go to NTLM in the first place?
Any ideas?
Your client is probably unable to obtain a service ticket for that machine. Use Wireshark and check for Kerberos TGS-REQ messages. Please search SO for my similar answers; I have already been able to help in such situations.
Most likely, you need to read up on Kerberos. You probably have a TGT (ticket granting ticket), which proves you authenticated to AD (the KDC). You still need an ST (service ticket) for the specific service/server instance, e.g. HTTP/servername. For this to work, you need an SPN registered for the service. The SPN must be registered to the account that will be decrypting the ST.
Fiddler showing that you don't send an ST shows that you didn't get one. You will likely need a sniffer, not just Fiddler. You will see, on port 88, your workstation try to request the ST from a KDC, then fail, then fall back to NTLM. You want to see the TGS-REQ come back with an ST.
This is a great starter, despite being from 2000: http://technet.microsoft.com/en-us/library/bb742431.aspx
For more meaningful assistance, I'd need to see your app URL, your SPN info via setspn -l, and the server hostname.
As this hasn't been marked as fixed I thought I would add a solution which worked for me. YMMV as in my environment we have multiple forests with trust relationships.
In my case, the user running the browser (and therefore making the HTTP GET) was in DOMA, but the web server was joined to DOMB. When the client tried to find the SPN of HTTP/webserver.domainb.example.com, it was looking for it in DOMA.
I wrongly assumed that because a bi-directional trust relationship existed between DOMA and DOMB, it would figure out to look in DOMB and do so. Unfortunately it does not, and you must configure the Kerberos Forest Search Order. This may be done per client (Computer Configuration → Policies → Administrative Templates → System → Kerberos → Use forest search order) on Windows 7, or per KDC (Computer Configuration → Policies → Administrative Templates → System → KDC → Use forest search order) on Windows Server 2008 R2.
Note: Michael-O's advice to use Wireshark and look for TGS-REQ messages was what led me to finding the KFSO setting so many thanks to him.

Need to run a cron job as encrypted

I need to set up a cron job to run a SOAP client. The customer insists that I connect to their web service (on an https address) from an https address. They insist that if I don't, their response to me can't be encrypted.
My first question is, is that true? I thought that as long as I'm connecting to their SOAP service over https, the response back would automatically be encrypted.
If that's true, how can I run a cron job "as https"? My site is on a LAMP setup with cPanel access.
Thank you in advance for your help!
Your customer's statement seems to be a little unclear in what he/she specifically means by "... connecting from an https address", as there isn't any notion of the term "https address" in the specs, and https URLs only seem to make sense in the context of Request-URIs given in an https request.
Given this unclarity I'm only wild guessing. Nevertheless, to me it seems your client's requirements might most probably not be connected to the HTTP protocol but rather to establishing your TLS connection.
If your client is very sensitive about the security of his system - which, if he intends to offer RPC requests, might in fact be a very good idea - he might not want the whole world to be able to establish an encrypted connection to his machines and then have to rely on some secondary authentication mechanism once the connection has been established.
As most users of the public internet don't have any certificates signed by a trusted authority, this feature isn't much used out in the open wild, but besides server authentication the TLS handshake protocol also provides a means of client authentication via client certificates (the relevant part being section 7 in RFC 5246; see: https://www.rfc-editor.org/rfc/rfc5246#section-7).
While, in the absence of widely used client certificates, web services usually rely on first establishing an encrypted connection and then authenticating users by some kind of challenge-response test like querying for username and password, your client might want to additionally secure access to his machines by requiring a valid client certificate, or even - probably not the best idea - replace a secondary authorization like the one already mentioned above with it.
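Not your LAMP stack, but to illustrate what presenting a client certificate looks like from code, here is a Node/TypeScript sketch (host, path, and file names are made up):

import * as https from "https";
import { readFileSync } from "fs";

const req = https.request(
  {
    host: "soap.customer.example.com", // placeholder
    port: 443,
    path: "/service",                  // placeholder
    method: "POST",
    // The client certificate and key are presented during the TLS handshake
    // (RFC 5246 section 7), before any HTTP bytes are exchanged.
    key: readFileSync("client.key"),
    cert: readFileSync("client.crt"),
    headers: { "Content-Type": "text/xml" },
  },
  (res) => {
    console.log("status:", res.statusCode);
    res.pipe(process.stdout);
  }
);
req.end("<soap:Envelope></soap:Envelope>"); // placeholder payload

A script like this can then be invoked from cron like any other command - the "https" part lives entirely in the connection it opens, not in how cron starts it.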
Nevertheless, all of these are nothing but some ideas that I came up with given the riddle in your question.
Most probably the best idea is to just ask your client what he/she meant when saying "... connecting from an https address".

Single Sign-On for Rich Clients ("Fat Client") without Windows Logon

Single sign-on (SSO) for web applications (used through a browser) is well-documented and established. Establishing SSO for rich clients is harder, and is usually suggested on the basis of Kerberos tickets, in particular using a Windows login against an ActiveDirectory in a domain.
However, I'm looking for a more generic solution for the following: I need to establish "real" SSO (one identity for all applications, i.e. not just password synchronization across applications), where on the client's side (unmanaged computers, incl. non-Windows), the "end clients" are a Java application and a GTK+ application. Both communicate with their server counterparts using an HTTP-based protocol (say, WebServices over HTTPS). The clients and the servers do not necessarily sit in the same LAN/intranet, but the clients can access the servers from the extranet. The server side of all the applications sits in the same network area, and the SSO component can access the identity provider via LDAP.
My question is basically "how can I do that"? More specifically,
a) is there an agreed-upon mechanism for secure, protected client-side "sso session storage", as it is the case with SSO cookies for browser-accessed applications? Possibly something like emulating Kerberos (TGT?) or even directly re-using it even where no ActiveDirectory authentication has been performed on the client side?
b) are there any protocols/APIs/frameworks for the communication between rich clients and the other participants of SSO (as it is the case for cookies)?
c) are there any APIs/frameworks for pushing kerberos-like TGTs and session tickets over the network?
d) are there any example implementations / tutorials available which demonstrate how to perform rich-client SSO?
I understand that there are "fill-out" agents which learn to enter the credentials into the application dialogues on the client side. I'd rather not use such a "helper" if possible.
Also, if possible, I would like to use CAS, Shibboleth and other open-source components where possible.
Thanks for comments, suggestions and answers!
MiKu
Going with an AD account IS the generic solution. Kerberos is ubiquitous. This is the only mechanism which will ask you for your credentials once and just once, at logon time.
This is all feasible; you need:
A KDC
Correct DNS entries
KDC accounts
Correct SPN entries
Client computers configured to talk to the KDC
Java app using JAAS with JGSS to obtain service tickets
GSS-API with your GTK+ app to obtain service tickets
What have you figured out yourself so far?
Agreed with Michael that GSSAPI/Kerberos is what you want to use. I'll add that there’s a snag with Java, however: by default, JGSS uses its own GSSAPI and Kerberos implementations, written in Java in the JDK, and not the platform’s libraries. Thus, it doesn’t obey your existing configuration and doesn’t work like anything else (e.g. on Unix it doesn’t respect KRB5CCNAME or other environment variables you’re used to, can’t use the DNS to locate KDCs, has a different set of supported ciphers, etc.). It is also buggy and limited; it can’t follow referrals, for example.
On Unix platforms, you can tell JGSS to bypass the JDK code and use an external GSSAPI library by starting the JVM with:
-Dsun.security.jgss.native=true -Dsun.security.jgss.lib=/path/to/libgssapi_krb5.so
There is no analogous option on Windows to use SSPI, however. This looks promising:
http://dblock.github.com/waffle/
... but I haven’t gotten to addressing this issue yet.
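As an aside, the same GSSAPI dance is available outside the JVM as well; for example, the kerberos npm package wraps the platform GSSAPI/SSPI. A rough, unverified TypeScript sketch (the SPN is a placeholder):

import * as kerberos from "kerberos";

async function negotiateHeader(): Promise<string> {
  // Asks the platform GSSAPI (Unix) or SSPI (Windows) for a service ticket
  // for the host-based service HTTP@app.example.com and wraps it in SPNEGO.
  const client = await kerberos.initializeClient("HTTP@app.example.com");
  const token = await client.step("");
  return `Negotiate ${token}`;
}

negotiateHeader().then((h) => console.log("Authorization:", h));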

stunnel on Windows for IBM MQ connection

Does anyone have any experience, or just thoughts, about securing MQ TCP communication channels using stunnel?
I am integrating with third-party software which has MQ support built in, but it cannot support SSL. So, to have some kind of security over the TCP link, we would like to use stunnel. Does anyone have any thoughts on how to implement this, and any best practices?
I haven't used stunnel so I'll leave that part of the answer to another responder. With regard to WMQ, keep in mind that this will provide you with data privacy and data integrity over the stunnel link but will not give you channel-level services such as WMQ authentication. True, you will have some level of authentication on the stunnel connection itself, but anyone with a TCP route to the QMgr that does not arrive via stunnel will also be able to start that channel.
Your requirement for security obviously includes data privacy. If it also includes authentication and authorization, you might need to use something like BlockIP2 (from http://mrmq.dk) to filter incoming connections on that channel by IP address to ensure they arrive over the stunnel link. Of course, there is nothing to prevent someone at the remote end from specifying any channel name to connect to, so if you secure one channel, you need to secure them all - i.e. make sure that SYSTEM.DEF.* and SYSTEM.AUTO.* channels are disabled or that they use SSL and/or an exit to authenticate the inbound connection.
Finally, be aware that if WMQ is configured to accept the ID presented by the client then the connection has full administrative access and that includes remote code execution. To prevent this you must configure all inbound channels (RCVR, RQSTR, CLUSRCVR and SVRCONN) that are not administrative with a low-privileged ID in the channel's MCAUSER. For any channels that are intended for administrators, authenticate these with SSL. (Hopefully your 3rd party SW is an application and not an administrative tool! Any WMQ admin tool must support SSL or else don't use it!)
So by all means use stunnel to secure this link, just be sure to secure the rest of the QMgr or else anyone who can legitimately connect (or even anonymous remote users if you leave MCAUSER blank and aren't using SSL and/or exits) will just bypass the security or disable it.
There's a copy of the IMPACT presentation Hardening WMQ Security at https://t-rob.net/links/ which explains all this in more detail.
Rob - I agree with you; for that we have MQIPT, which is much better. But for stunnel with MQ, I have solved the problem.
Keys - you need a .pem key (from the key manager you can create a .p12 and use OpenSSL to convert it to .pem).
Client side: download and install stunnel and put the following entries in the config file:
cert = XXX.pem
client = yes
[MQ]
accept = 1415
connect = DestinationIP:1415
Server Side:
cert = xxx.pem
client = no
[MQ]
accept = 1415
connect = MQIP:1415
Once you do this, all you have to do is call amqsputc with the queue name.
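If it helps the next person: with the tunnel in place, the MQ client just needs to be pointed at the local stunnel listener instead of the queue manager - on Windows that means setting the MQSERVER environment variable to something like SYSTEM.DEF.SVRCONN/TCP/localhost(1415) (the channel name here is an assumption; use whatever SVRCONN your queue manager exposes) before running amqsputc.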