This might be a shot in the dark, but I am trying to implement an OpenID Provider in Perl using the Net::OpenID::Server module. The documentation for the entire process is confusing and sparse.
If anyone has successfully implemented a provider in Perl, could you please paste some code snippets?
So I finally jiggered the OpenID installation into place and it's working pretty well. I figure I will detail some of the gotchas I ran into.
There are more than three states/steps to the OpenID sign-in process. This is confusing, because the documentation and sample code would lead you to believe that there are three. There are, in some cases, up to seven. Watch your server logs and see how many times a SERVER and USER (the ones requesting the authentication) hit the PROVIDER (what you are presumably setting up). It's difficult to debug something when you're only looking at half of the interactions.
Many providers are using the unfinalized OpenID 2.0 spec. (It's a little better.) The 2.0 spec performs differently from the 1.0 spec; the SERVER (them) establishes trust with the PROVIDER (you). Net::OpenID::Server handles this gracefully, but doesn't tell you what spec it's using. The 2.0 spec adds a step to the handshaking process.
Set up your own OpenID SERVER for easy testing. I used a simple Rails server with a gem called ruby-openid. It took about 10 minutes to set up and mimics the behavior of a real in-the-wild server.
It should go without saying, but make sure your login process is stateless. We had a global variable that handled how the user was verified. Because use of that variable made certain assumptions that were incompatible with the OpenID sign-in process, users would have been allowed to log in to accounts other than their own. This is obviously bad. A few closures later, and we had stateless, more secure code.
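To make the closure point concrete, here is a tiny sketch of the idea in JavaScript (our real code was Perl; this is only an illustration of the concept, not our implementation):

// BAD: one module-level variable shared by every concurrent sign-in.
var currentUser = null;
function startSignInBad(user) { currentUser = user; }
function finishSignInBad() { return currentUser; }   // may already belong to a different sign-in

// BETTER: each sign-in closes over its own state, so concurrent logins can't bleed into each other.
function startSignIn(user) {
  var pendingUser = user;                 // captured per invocation, not shared globally
  return function finishSignIn() {        // call this once the OpenID handshake completes
    return pendingUser;                   // always the user who started this particular sign-in
  };
}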
All in all, OpenID is pretty cool once you get it working.
FYI, development on the Net-OpenID Perl modules is starting up, so you can expect a big pile of bugfixes and better docs to hit real soon now. Check CPAN and the openid-perl group for details.
We are in the process of designing a front-end application with Angular which will call a jBASE server through RESTful APIs. The APIs are created with a jBASE component called jAgent.
Does jAgent support creating and verifying JWTs?
If not, what is the best way to handle authentication/authorization for the Angular application?
If we need to use JWTs, do we have to use an authentication middleware application (.NET Core or Node.js) for that?
Great question! At the moment there is no handler for this within jAgent; our recommendation is to implement it, along with advanced web server/API gateway functionality, by way of other applications such as HAProxy or Kong.
Expanding jAgent to include functionality like this is something we're still considering, but keep in mind that the power of jBASE lies in its native interaction with the host OS. Since there is no virtual OS layer, it can be easier to plug in off-the-shelf tools to fill in additional functionality, which gives you the flexibility to bring your own tooling.
In summary:
Not at the moment
Using an off the shelf package to act as your API gateway
Subject to the package you choose
That relegates jAgent to management of the API layer as it exists on the PICK/jBASE side while the off the shelf package manages your API security layer.
One other note for you: I noticed that you included a link to the old jBASE docs hosted on HelpJuice. It's worth mentioning that we've migrated those docs to docs.zumasys.com. You'll find the docs there to be more up to date and also completely open sourced; part of the migration included their move to a GitHub repo, where we're happy to take community contributions.
For reference, the article you mentioned is available at https://docs.zumasys.com/jbase/connectivity/jagent/introduction-to-jagent-rest-services/.
Update:
One of our engineers has a program that will use openssl to generate the tokens for you, which you can find at https://github.com/patrickp/wjwt.
You will need openssl installed on the machine and in the path.
The WJWT.TEST program shows the usage. The important piece is the SECRET.KEY, which is the internal key you use to sign the payloads.
When a user first authenticates, you create the token with SIGN. Claims are any items/fields you wish to save/store. Do NOT put sensitive data in here, as it is viewable by anybody. The concept is that we sign the payload with our key and give it back to the client. On future calls the client sends the token, we pull it out and call the VERIFY function, which basically re-signs the payload and checks that the signatures match. This confirms the payload was not manipulated.
Checks such as expiration you would build into your own code.
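If it helps to see the same sign/verify idea outside of jBASE, here is a minimal Node.js sketch using HMAC-SHA256 (wjwt shells out to openssl instead, and SECRET_KEY and the claims below are placeholders, but the principle is the same; requires Node 16+ for the 'base64url' encoding):

const crypto = require('crypto');

const SECRET_KEY = 'replace-with-your-internal-key';   // placeholder for your own signing key

const b64url = (s) => Buffer.from(s).toString('base64url');

function sign(claims) {
  const header = b64url(JSON.stringify({ alg: 'HS256', typ: 'JWT' }));
  const payload = b64url(JSON.stringify(claims));       // readable by anybody, so no secrets in here
  const signature = crypto.createHmac('sha256', SECRET_KEY)
    .update(header + '.' + payload)
    .digest('base64url');
  return header + '.' + payload + '.' + signature;
}

function verify(token) {
  const [header, payload, signature] = token.split('.');
  const expected = crypto.createHmac('sha256', SECRET_KEY)
    .update(header + '.' + payload)
    .digest('base64url');
  // Re-sign the payload and check the signatures match: a match proves it was not manipulated.
  return signature.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(signature), Buffer.from(expected));
}

const token = sign({ user: 'jdoe' });   // on first authentication
verify(token);                          // true on later calls, false if the token was tampered with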
Long term we plan to take this library and refactor the code into our MVDB Toolkit library with more functionality. That library is something we provide to jBASE customers at no additional charge.
I am currently evaluating the framework "wolkenkit" [1] for use in an application. Within this application I will have a user interface for tenant-based data management. Only authenticated users will have access to this application.
Additionally, there should be a public REST API that follows common standards and is callable by the public (tenant security handled by submitting a tenant-based API key in the request headers).
As far as I have found out, the wolkenkit REST API does not seem to fit these standards in terms of HTTP verbs.
But as wolkenkit overall appears to me to be a really flexible and easy-to-use framework, I wonder how to implement such a public API.
Would it be a valid approach, for example, to create a separate web application that internally connects to the wolkenkit backend? And what about the additional performance overhead then?
[1] https://www.wolkenkit.io/
In addition to mattwagl's answer, I would like to point out a few things that you may be interested in.
First of all, since wolkenkit is based on CQRS, the application has a separate API for writing and reading. That means that if you send a command (whose intent is to change state), it goes to the write API. If you subscribe to events or run a query, this goes to the read API.
This in turn means that if you send a command, it's up to the write side to respond to it. As the write side is not meant to return application state, all it says is basically: "Thanks, I have received the command." To get the actual result you have to wait for the appropriate event, which means subscribing to the read API.
In the wolkenkit documentation there is a nice diagram which shows this in a clear way.
If you now add a separate REST API (which actually fulfills the requirements of REST), this means that you need to handle waiting for the result internally. In other words: clients in wolkenkit are always meant to be asynchronous, whereas REST is not. Hence it's your job to handle the asynchronous behavior of the wolkenkit APIs in your REST API. I think that this is the hardest part.
Once you have done this, you will have a synchronous REST API, and of course it will have some overhead. But I think that since its overhead is limited to passing through and translating network requests, it should be negligible.
Oh, and finally, there is another thing that you have to watch out for: since REST as it was originally meant relies on the HTTP verbs to transport semantics, you need to map GET / POST / PUT / DELETE to the semantic commands of wolkenkit. As long as this can be done 1:1, everything's fine; problems start when there are multiple commands that (technically speaking) do an UPDATE.
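To make the waiting part concrete, here is a rough JavaScript sketch of the pattern (Node 18+ for the global fetch; sendCommand, the event feed and the command/event names are assumptions for illustration, so wire them up with wolkenkit-client-js or the raw HTTP API of your application):

const crypto = require('crypto');
const { EventEmitter } = require('events');

// Push every event you receive from the read API (however you subscribe) into this emitter.
const incomingEvents = new EventEmitter();

// POST a command to the write API; it only acknowledges receipt. URL and payload shape are assumptions.
async function sendCommand(command) {
  await fetch('http://localhost:3000/v1/command', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(command)
  });
}

// Resolve with the first event that satisfies the predicate.
function waitForEvent(predicate) {
  return new Promise(resolve => {
    const listener = event => {
      if (predicate(event)) {
        incomingEvents.removeListener('event', listener);
        resolve(event);
      }
    };
    incomingEvents.on('event', listener);
  });
}

// An Express-style handler: synchronous from the REST client's point of view.
async function createListHandler(req, res) {
  const id = crypto.randomUUID();                       // used to correlate command and event
  await sendCommand({ name: 'createList', aggregateId: id, data: { title: req.body.title } });
  await waitForEvent(e => e.name === 'listCreated' && e.aggregateId === id);
  res.status(201).json({ id });                         // only now do we have a definitive answer
}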
PS: I'm also one of the developers of wolkenkit.
PPS: However you are going to solve this, I would be highly interested to hear from you! It would be great if you could share your experiences with us, as you are most probably not the last one with this idea. If you want to contact us, the easiest way would be via Slack.
wolkenkit applications can be accessed using an HTTP and a WebSocket API. These APIs are both provided by the tailwind module that wolkenkit uses under the hood. In the tailwind repo you can find a very simple documentation of the available HTTP routes.
You're right, the wolkenkit HTTP API is not a classic REST API. It's more RPC-style, which in our experience is a good fit for applications. There are only 3 routes that your clients/tenants need to support:
/v1/command (POST) is used for issuing commands. The commands you post should follow the command schema.
/v1/events (POST) can be used for streaming events to clients. These events will follow the event schema.
/v1/read/:modelType/:modelName (POST) is used to read models.
You can simply use HTTPie to test these routes.
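If you would rather script a quick test than use HTTPie, reading a model with Node's built-in fetch could look roughly like this (host, port, model names and the request body are assumptions, and the response handling just dumps the raw body; run it inside an async function or an ES module):

// modelType 'lists' and modelName 'all' are examples only; use your application's models.
const response = await fetch('http://localhost:3000/v1/read/lists/all', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({})            // query options per the docs would go here
});
console.log(await response.text());   // the read API may stream its results, so just print the raw body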
Authentication for these APIs is currently done using OpenID Connect. There's a very detailed article on how to set up authentication using Auth0. I'm not quite sure if this fits your use case, but you could basically use any authentication service that follows this standard or that is able to issue JWT tokens.
Finally, you could also build your own JavaScript client SDK that runs inside browsers by creating a module that uses wolkenkit-client-js under the hood. This SDK can then use the same API as any other client to connect to your application.
Hope this helps.
PS: Please note that I am one of the authors of wolkenkit.
Looking at some of the sample code for hybrid mobile apps that speak to Node.js on Bluemix (http://mbaas-gettingstarted.ng.bluemix.net/hybrid), you will see various examples that demonstrate how to use a logger on the client side:
var config = {
    applicationId: '<applicationId>',
    applicationRoute: '<applicationRoute>',
    applicationSecret: '<applicationSecret>'
};

IBMBluemix.initialize(config).done(function(status){
    // Initialize the Services
}).catch(function(err){
    IBMBluemix.getLogger().error("Error initializing SDK");
});
I've confirmed this works fine in a Cordova app. My question is - why does this exist? As far as I can see, it does nothing more than wrap calls to console.log. It does not ever send logs to the Bluemix server app as far as I can tell.
There is documentation here, https://www.ng.bluemix.net/docs/starters/mobile/mobilecloud/nodejsmobile.html#log, that talks about the feature both server-side and client-side, but unless I'm missing it, there's no persistence for the client-side version.
If so, then what exactly is the point of this abstraction? I have to imagine it was built for some reason, but I'm not seeing it.
This wrapper is used to wrap and standardize the console logging API, especially because that JavaScript API isn't available in all browsers (particularly old ones). By wrapping it, the library can check the browser and the API's availability in order to avoid an execution error.
Another reason is to wrap some configuration utilities, for example allowing different logging libraries to be used (e.g. log4js) or other configuration, and so on.
Last but not least, it probably provides a singleton interface for performance optimization.
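A stripped-down sketch of that kind of wrapper (this is not the actual SDK code, just an illustration of the points above) might look like:

var Logger = (function () {
  var instance;                                        // singleton: created once, reused afterwards

  function createLogger() {
    // Fall back to no-ops when console (or one of its methods) is missing, e.g. in old browsers.
    var safeConsole = (typeof console !== 'undefined') ? console : {};
    var noop = function () {};
    return {
      info:  (safeConsole.info  || safeConsole.log || noop).bind(safeConsole),
      error: (safeConsole.error || safeConsole.log || noop).bind(safeConsole)
      // a real wrapper could also route these calls to another library, e.g. log4js
    };
  }

  return {
    getLogger: function () {
      if (!instance) { instance = createLogger(); }
      return instance;
    }
  };
})();

Logger.getLogger().error("Error initializing SDK");    // usage mirrors IBMBluemix.getLogger()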
Is it possible to configure WildFly such that users and agents have "optional" security?
In essence, I want form authentication for my web pages and silent basic authentication for my services. Most unfortunately, one component of my ecosystem cannot make calls with basic auth headers.
That being said, I have a workaround, but it will take some time to implement. For the time being I would like to have basically optional security, so everything can play nicely in the interim.
I know I could change my authentication module to allow everyone through. But with form authentication turned on, requests without a basic header just get the login web page sent back.
Thanks for any good tips or tricks.
Edit: This would be possible with Spring Security. Using WildFly's built-in security mechanisms with Undertow seems to limit your flexibility. So much is handled up front, before you reach your code, that you really are stuck.
However, with Spring Security, everything is implemented as filters, so you can check the request context for user agents and all kinds of things, and make decisions about each request as you want.
Obviously this wouldn't be a production solution, but in development, as was my case, I could have let any request with user agent XYZ run as admin for the time being.
I have since migrated to Spring Security for our web app security management.
So the short answer is no. It's still no, but the slightly longer answer is to stop using WildFly's built-in security and use Spring Security.
In the long run you could probably create your own Undertow servlet extension that would validate authentication or default to admin credentials. That is going to be a lot of work, and Spring Security has already done a lot of it for you.
We ended up spinning up duplicates of our services without any authentication that our trailing component could call in the meantime. If you don't want to use Spring Security, this is still the best solution I have.
Following the Book App example in Play 2 for Scala, I now have a basic working app.
What I want now is to add some features like
User registration
User authentication to access some pages
What is the best way to do this in Play for Scala? Should I implement it on my own? Is there a plugin for that?
Note: I'm the maintainer of Silhouette.
I can suggest Silhouette, which is a core-only fork of Secure Social, with the intention to build a more customizable, non-blocking and well-tested implementation.
For the first stable version there are only two open issues which must be resolved, and these issues are only feature requests. There are no API changes planned. The documentation must be improved, and a sample application has been started. The unit tests are also a good starting point.
If you plan to follow the authentication flow as laid out by Secure Social, then stick with it. It has existed for more than two years and is well tested by many companies. Otherwise, take a look at Silhouette.
You have two options:
Secure Social (http://securesocial.ws/)
But it has an unusual registration flow, where the user has to enter their email first and then receives a link to the registration form.
However, there is a pull request that addresses this issue (https://github.com/jaliss/securesocial/pull/260).
Play Authenticate
It doesn't support Scala out of the box, but there is a workaround created by me here: https://github.com/joscha/play-authenticate/issues/92
Both of them require you to write the interface layer to the database. An important drawback in both is that you won't be able to make use of reactive database drivers like ReactiveMongo; they assume that you will return the results immediately, not a Future of the result.
There is a securesocial plugin (http://securesocial.ws). It covers most common authentication methods and has registration stuff. I found it very useful.
The drawback is its documentation. If you want to do something a bit different from the simplest scenarios, be prepared to read through the source code.