I am trying to use Class::DBI with Catalyst::Plugin::Authentication::Store::DBIC. The example given on CPAN does not work with Class::DBI. For example, the config is incorrect: role_class => 'DB::Role' has to be replaced by role_class => 'MyApp::Model::DB::Role'. I got authentication working using plain DBI, but I would rather use Class::DBI, as in the rest of my application.
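For reference, a hedged sketch of roughly what that (now deprecated) Store::DBIC config looks like with the model-qualified class names; only the class-name keys are shown, and the surrounding structure is my recollection rather than gospel:

__PACKAGE__->config(
    authentication => {
        dbic => {
            # class names must be model-qualified, as noted above
            user_class => 'MyApp::Model::DB::User',
            role_class => 'MyApp::Model::DB::Role',
            # credential and role-relation options omitted for brevity
        },
    },
);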
I could not find a complete example of Catalyst authentication with Class::DBI. Do you know of any such tutorial?
I suspect you'd be better off asking about this on the #catalyst channel on irc.perl.org. You'll either end up writing your own store for CDBI, or you'll work out how to use the CDBI compatibility layer in DBIx::Class to get it working. If it's a new codebase, you should really consider CDBI legacy and build your schema with DBIx::Class and DBIx::Class::Schema::Loader.
I hate to say it, but singingfish is right. Catalyst::Plugin::Authentication::Store::DBIC has not been updated in some time (aside from being made to display a deprecation warning). It also uses the old (2006) authentication API and will be a limiting factor in your application even if you do get it to work.
If you have the option, I would switch to DBIx::Class. If not, your only real choice is to write your own user storage module that works with Class::DBI. It's actually not too hard and you can find instructions in the internals doc for Catalyst Auth:
http://search.cpan.org/dist/Catalyst-Plugin-Authentication/lib/Catalyst/Plugin/Authentication/Internals.pod
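To make that concrete, here is a minimal sketch of what a Class::DBI-backed store might look like under the realm-based API that the Internals doc describes. All class names here (MyApp::Store::CDBI, its user wrapper, and MyApp::Model::DB::User) are made up for illustration:

package MyApp::Store::CDBI;
use strict;
use warnings;

# Per my reading of the Internals doc, the store is constructed with
# its config, the application class, and the realm it belongs to.
sub new {
    my ($class, $config, $app, $realm) = @_;
    return bless { config => $config }, $class;
}

# Called during $c->authenticate(); $authinfo carries e.g. the username.
sub find_user {
    my ($self, $authinfo, $c) = @_;
    my ($row) = MyApp::Model::DB::User->search(
        username => $authinfo->{username},
    );
    return unless $row;
    return MyApp::Store::CDBI::User->new($row);   # hypothetical wrapper
}

# Keep only the primary key in the session...
sub for_session {
    my ($self, $c, $user) = @_;
    return $user->id;
}

# ...and re-inflate the user from it on the next request.
sub from_session {
    my ($self, $c, $id) = @_;
    my $row = MyApp::Model::DB::User->retrieve($id) or return;
    return MyApp::Store::CDBI::User->new($row);
}

1;

The companion user class (not shown) would subclass Catalyst::Authentication::User, hold the Class::DBI row behind get_object, and delegate id and get to it; the credential module then performs the actual password check.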
Good luck!
JayK
I am developing a new project with Spring Boot and GraphQL. I am confused about how to proceed, because there are two ways to develop it: via a graphqls schema file, or via an annotation-based approach. I prefer the annotation-based approach, but is it stable? For example: https://github.com/leangen/graphql-spqr.
I second AllirionX's answer and just want to add a few details.
Firstly, to answer your question: yes, SPQR has been pretty stable for quite a while now. Many teams are successfully using it in production. The only reason it is still in 0.X versions is the lack of documentation, but an occasional small breaking change in the API does occur.
Secondly, I'd also like to add that going code-first doesn't mean you can't also go contract-first. In fact, I'd argue you should still develop in that style. The only difference is that you get to write your contracts as Java interfaces instead of in a new language.
As I highlight in SPQR's README:
Note that developing in the code-first style is still effectively schema-first, the difference is that you develop your schema not in yet another language, but in Java, with your IDE, the compiler and all your tools helping you. Breaking changes to the schema mean the compilation will fail. No need for linters or other fragile hacks.
So whether the API (as described by the interfaces) changes as the other code changes is entirely up to you. And if you need the SDL for any reason, it can always be generated from the executable schema or the introspection result.
I don't think there is a good or a bad answer to the "how to proceed" question.
There are two different approaches to building your GraphQL server (with graphql-java, graphql-java-tools, or graphql-spqr), and each method has its advantages and drawbacks. All of those libraries provide a Spring Boot starter. Note that I have never used graphql-spqr.
Schema first (with graphql-java or graphql-java-tools)
In this approach you first create an SDL file. The GraphQL library will parse it, and "all" you have to do is wire each GraphQL type to its data fetcher. graphql-java-tools can even do the wiring for you.
Advantages
no need to go into the details of how the GraphQL schema is built server-side
you have a nice graphqls schema file that can be read and used by a client, easing the task of building a GraphQL client
you actually define your API first (the SDL schema): changing the implementation of the API will not require any change on the client side
Drawbacks
no compile-time check. If something is not wired properly, an exception will be thrown at runtime. But this can be mitigated by using graphql-java-codegen, which will generate the Java classes and interfaces for your GraphQL types, unions, queries, enums, etc.
if using graphql-java (no auto-wiring), I felt I had to write long, boring data fetchers. So I switched to graphql-java-tools.
Code first (with graphql-java or graphql-java-tools or graphql-spqr)
The GraphQL schema is built programmatically (through annotations with graphql-spqr, or by building a GraphQLSchema object in graphql-java).
Advantages
compile-time check
no need to maintain both the SDL and the domain classes
Drawbacks
as your schema is generated from your code base, changing your code base will change the API, which might not be great for clients depending on it.
This is my opinion on those different frameworks, and I would be happy to be shown that I am wrong. The ultimate decision depends on your project: its size, whether there is an existing code base, etc.
Looking at some of the sample code for hybrid mobile apps that speak to Node.js on Bluemix (http://mbaas-gettingstarted.ng.bluemix.net/hybrid), you will see various examples that demonstrate how to use a logger on the client side:
var config = {
    applicationId: '<applicationId>',
    applicationRoute: '<applicationRoute>',
    applicationSecret: '<applicationSecret>'
};

IBMBluemix.initialize(config).done(function(status) {
    // Initialize the Services
}).catch(function(err) {
    IBMBluemix.getLogger().error("Error initializing SDK");
});
I've confirmed this works fine in a Cordova app. My question is: why does this exist? As far as I can see, it does nothing more than wrap calls to console.log. It never sends logs to the Bluemix server app, as far as I can tell.
There is documentation here, https://www.ng.bluemix.net/docs/starters/mobile/mobilecloud/nodejsmobile.html#log, that talks about the feature both server-side and client-side, but unless I'm missing it, there's no persistence for the client-side version.
If so, what exactly is the point of this abstraction? I have to imagine it was built for some reason, but I'm not seeing it.
This wrapper exists to standardize the console logging API, especially because that JavaScript API isn't available in all browsers (particularly old ones). By wrapping it, the library can check the browser and the API's availability in order to avoid an execution error.
Another reason is to provide a place for configuration utilities, such as plugging in a different logging library (e.g. log4js), other settings, and so on.
Last but not least, it probably provides a singleton interface as a performance optimization.
SugarCRM exposes a REST and a SOAP API to access data, but neither seems really suited to my purpose of inserting lots of data through external RPC calls, which are somewhat time-critical (there are a dozen users working on the data at the same time).
I have only spent a few hours getting an idea of how things work in SugarCRM, but I figure it's best to place an independent RPC server alongside SugarCRM, do the (huge) processing part there, and afterwards store the results in the database, preferably using Sugar's own API.
Is there a way to include their core API in order to benefit from their model and persistence API?
I'd like to know which files I need to include and how to access the running instance, if there is one.
What I am trying to do is actually quite simple:
Receiving RPC calls
Do some matching
Update database
You see, there is no GUI part involved that would justify the long way of going through modules, etc.
Well, I hope someone knows. It seems a pretty big beast to me, and I'd love to keep it quick and dirty.
Thanks!
Here you'll find a starting point for your own API, "Extending the REST API", and here is an overview of what API versions exist.
We are using a time-critical custom API to speed up some calls.
I would like to create a module for the Pinboard API.
Though it is very similar to the old Delicious API, there are enough changes that I would like to re-implement it to work specifically for Pinboard.
The Net::Delicious module was initially built in 2002, and I see that many of the newer REST modules are implemented in a new way. Net::Twitter, WebService::Dropbox and WWW::Vimeo::Simple seem to have different methodologies for implementing their respective REST APIs.
Net::Twitter is a very complex and heavy implementation, in my opinion. WebService::Dropbox is extremely light, as is the API it implements. WWW::Vimeo::Simple seems to be between the two in terms of complexity.
I also spent some time looking at REST::Client, but it probably would not be useful if you want to implement more than one or two methods.
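To illustrate that point: REST::Client stays at the raw HTTP level, so every API method becomes hand-rolled request-and-parse code. A rough sketch against Pinboard (the auth_token and format parameters reflect my understanding of the Pinboard API; verify against its docs):

use strict;
use warnings;
use REST::Client;
use JSON;

my $client = REST::Client->new( host => 'https://api.pinboard.in' );

# Fine for one or two calls, tedious for a whole API: each method
# must be spelled out by hand like this.
$client->GET('/v1/posts/recent?auth_token=user:TOKEN&format=json');
my $posts = decode_json( $client->responseContent() );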
What are the best practices for implementing a complete REST web service client, and how can I test the responses without being able to connect to the service?
What you want is Net::HTTP::Spore. It's a Moose-based framework for REST clients in modern Perl. See also these slides.
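To give a flavor of Spore: you describe the API in a JSON spec and the client methods are generated from it. A rough sketch, with hypothetical Pinboard paths:

use strict;
use warnings;
use Net::HTTP::Spore;

# pinboard.json is a hypothetical SPORE spec along the lines of:
#   { "base_url" : "https://api.pinboard.in/v1",
#     "methods"  : {
#       "recent_posts" : { "path" : "/posts/recent", "method" : "GET" }
#     } }
my $client = Net::HTTP::Spore->new_from_spec('pinboard.json');
$client->enable('Format::JSON');

my $response = $client->recent_posts();   # method name comes from the spec

For the testing half of your question, Spore also ships a Mock middleware that substitutes canned responses for real HTTP, if memory serves, so you can exercise the client without ever touching the service.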
I've considered CGI::Application and CGI::Session. Of the two, CGI::Session seems more promising. The CGI::Application model, however, doesn't look like it would work well with Template Toolkit. (Am I wrong to think so?)
I would like to evaluate more session management libraries before committing to one. Any recommendations? I'm looking for a library that's web-server agnostic and works across multiple servers. Catalyst is not an option for now, due to the time required to retrofit existing code into the Catalyst way of doing things.
CGI::Application and CGI::Session are very different modules. CGI::Session is a session module: it does not do anything beyond that. CGI::Application is a lightweight framework. It works well with Template Toolkit; some of us use it with CGI::Application::Plugin::TT.
So, if you need sessions only, use CGI::Session.
If you need better structure for your code, use CGI::Application. You can even use CGI::Session within it, via CGI::Application::Plugin::Session.
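A minimal sketch of how the two plugins fit together (run mode and template names are made up for illustration):

package MyApp;
use strict;
use warnings;
use base 'CGI::Application';
use CGI::Application::Plugin::TT;        # provides $self->tt_process()
use CGI::Application::Plugin::Session;   # provides $self->session()

sub setup {
    my $self = shift;
    $self->start_mode('index');
    $self->run_modes( index => 'show_index' );
}

sub show_index {
    my $self   = shift;
    my $visits = ( $self->session->param('visits') || 0 ) + 1;
    $self->session->param( visits => $visits );
    return $self->tt_process( 'index.tt', { visits => $visits } );
}

1;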
I have used Apache::Session with some success. Although the name tells a different story, I don't think it only works with the Apache web server.
The nice thing is that you can easily change the way sessions are stored without changing your own session-handling code. E.g. you might start out with sessions stored as files on disk, later move to a DB-based system, and then change the DB backend after that.
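For instance, the storage backend is just the class you tie to, so a file-to-database move barely touches your session code (paths and handles below are illustrative):

use strict;
use warnings;
use Apache::Session::File;

my %session;
# An undef id creates a new session; pass an existing id to resume one.
tie %session, 'Apache::Session::File', undef, {
    Directory     => '/tmp/sessions',
    LockDirectory => '/tmp/sessions/locks',
};
$session{user_id} = 42;
my $id = $session{_session_id};   # hand back to the client in a cookie

# Switching to a database later is mostly a matter of re-tying:
# use Apache::Session::MySQL;
# tie %session, 'Apache::Session::MySQL', $id,
#     { Handle => $dbh, LockHandle => $dbh };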