API Wrapper Architecture Best Practice - Perl

I'm writing a Perl wrapper module around a REST web service and I'm hoping for some advice on how best to architect the module.
I've been looking at a couple of different Perl modules for inspiration.
Flickr::Simple2 is basically one big file of methods, each wrapping a corresponding method in the Flickr API, e.g. getPhotos().
Flickr::API is a subclass of another module (LWP) for making HTTP requests. It defines no wrapper methods itself; instead it provides generic request()/response() methods that take an API method name as an argument and construct the correct API call/URL, so every call goes through the module via LWP.
An alternative design would be like the first described, but less monolithic, with separate classes for separate "areas" of the API.
I'd like to follow modern best-practice Perl, so I'm using Dist::Zilla to build the distribution and Moose for the OO parts, but I'd appreciate some input on how to actually design/architect my wrapper.
Guides/tutorials or pointers to other well designed modules would be appreciated.
Cheers

Joshua Bloch has good tips on "How to Design a Good API and Why it Matters" (video, 2007).
The slides are also available (PDF).

This depends somewhat on the breadth/depth of the API you're trying to wrap.
If it only has a few simple API calls, the first approach is fine.
If it has VERY complex calls for which you want to expose a "simple" mode to the user, one pattern is to have the main module and subclass it as Main::Module::Simple, which wraps around the main underlying module (a short sketch follows).
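For instance, a minimal Moose sketch of that "Simple" facade; Main::Module and the connect()/connected()/fetch() methods are hypothetical stand-ins for your real module:

package Main::Module::Simple;
use Moose;
extends 'Main::Module';

# One "simple" call that hides several lower-level steps of the
# full interface (all method names here are hypothetical).
sub quick_fetch {
    my ($self, $id) = @_;
    $self->connect unless $self->connected;
    return $self->fetch(id => $id, format => 'simple');
}

1;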
As you noted, a very broad API might benefit from being split into areas, with parallel classes (possibly inheriting from, or using, a base class) responsible for wrapping each area. Just make sure to factor all the common stuff out to avoid any code/design duplication.
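Here is a minimal Moose sketch of that layout, assuming a hypothetical My::API namespace, endpoint, and method names:

package My::API::Base;
use Moose;
use LWP::UserAgent;
use URI;

# Common plumbing shared by every "area" class: credentials, the HTTP
# client, and a generic call() that knows how to talk to the service.
has api_key  => (is => 'ro', isa => 'Str', required => 1);
has base_url => (is => 'ro', isa => 'Str',
                 default => 'http://api.example.com/rest');
has ua       => (is => 'ro', isa => 'LWP::UserAgent', lazy => 1,
                 default => sub { LWP::UserAgent->new });

sub call {
    my ($self, $method, %args) = @_;
    my $uri = URI->new($self->base_url);
    $uri->query_form(method => $method, api_key => $self->api_key, %args);
    my $res = $self->ua->get($uri);
    die $res->status_line unless $res->is_success;
    return $res->decoded_content;   # a real module would parse JSON/XML here
}

package My::API::Photos;
use Moose;
extends 'My::API::Base';

# One thin, well-named wrapper per API method in this area.
sub get_photos {
    my ($self, %args) = @_;
    return $self->call('photos.get', %args);
}

1;

A caller would then write something like My::API::Photos->new(api_key => $key)->get_photos(user => 'alice'), and each area class stays small while the base class owns all the HTTP details.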

Related

REST APIs in Go - using net/http vs. a library like Gorilla

I see that Go itself has a net/http package, which provides everything you need to get your own REST APIs up and running. However, there is also a variety of frameworks, the most popular probably being Gorilla.
Considering that one of the main things I need to do going forward is build REST APIs that access some back-end storage (databases, caches, etc.) to perform CRUD operations, is it better to go with Go's standard library itself, or should I consider using one of these frameworks?
Normally, people write a new library or framework to solve a problem present in an existing one. But a lot of frameworks also tend to make things worse when the actual demands are simple.
So I have a few questions:
Is Go's standard library good enough to support basic-to-moderate REST functionality?
If I start with the built-in library and tomorrow have to change to a framework (like Gorilla), how difficult/costly would that be?
Are frameworks really addressing the problems or just making simple problems complex?
I would be extremely grateful if someone who has been through this choice themselves could share their thoughts here while I research more on my own.
The net/http package is probably sufficient for most scenarios, but if you want to ease your development, you should use a third-party package, such as Gorilla.
For example, net/http's ServeMux does a great job of routing incoming requests for fixed URL paths, but for pretty paths that use variables you will need to implement a custom multiplexer, whereas with Gorilla you get this for free.
Another example: if you want to specify RESTful resources with proper HTTP methods, it is hard to work with the standard http.ServeMux, while with Gorilla's mux package, requests can be matched based on URL host, path, path prefix, schemes, header and query values, and HTTP methods.
One of the great benefits of Gorilla is that it is fully compatible with the net/http package and can be substituted in the future.
I totally encourage you to use Gorilla's toolkit to develop REST services.
The built-in net/http package is sufficient to build a complete REST API. However, some of the libraries can make building an API slightly easier, particularly if the REST API is complex. Changing from the built-in facilities to any decent framework is relatively straightforward - they generally accept handlers of the http.Handler type.
In the end, though, this is an extremely situational choice. The best thing you can do is examine each available solution, contrast and compare, and build a proof of concept with the top options if you possibly can. First-hand experience will guide you best.

Calling LISP or SCHEME from .NET/C#

In my existing software I have an implementation of genetic programming using a home-grown decision-making tree that can apply basic logic operators (AND, OR, NOT) to boolean data provided to it in the form of an array. The platform I am using is .NET/C# with a SQL Server back end. Looking for ways to improve the performance of my genetic program, I concluded that I need almost all the additional functionality that comes with a functional language, and I believe Scheme or, to a lesser extent, LISP is the best solution, unless I want to extend the existing implementation by implementing features like COND, IF, and comparison operators myself.
My question to the forum is whether there is any efficient way to call Scheme (or LISP) from a .NET application, passing data back and forth in some array form. If this is not possible, do you think it would be better to just bite the bullet and implement it from scratch, or should I look for alternative approaches, for example communicating through a text file?
There is an R6RS-compliant Scheme implementation for the DLR called IronScheme. Since IronScheme uses the DLR, it can be embedded into any .NET application using the standardized DLR embedding APIs, in exactly the same way that you would embed, say, IronRuby or IronPython:
dynamic Scheme = new SchemeEnvironment();
var list = Scheme.list;
var map = Scheme.map;
// and so on
The full snippet can be found in a blog post by IronScheme's author, leppie. It also shows how to pass a C# lambda to a Scheme higher-order function and other cool stuff.
Unless you go with IronScheme (above), I'd probably use something like ZeroMQ (which has both Common Lisp and .NET drivers) to pass messages between the two systems.
I built a lightweight, embeddable Scheme-like language interpreter exactly for the purpose of complex and reusable configuration. It has a small footprint (~1,500 lines of code) and does not introduce any other dependencies into your application.
I open-sourced it from work. It's called schemy. There is also an example application demonstrating how to use it in a really complex way.
I also provided some detailed motivation behind building it for work in this Stack Overflow answer.
Hope it helps:)
Why not look at F#?
(www.fsharp.net)
It's basically an adaptation of OCaml for .NET.
Or you can always use IronScheme, but I don't think it's as mature.

What are the pros and cons of using the two different programming styles of CGI.pm with Perl?

I am in a Web Scripting class at school and am working on my first assignment. I tend to overdo things and delve deeper into my subject than is required in my classes. Right now I am researching CGI.pm to handle my HTTP requests, and its documentation says there are two programming styles for CGI.pm:
An object-oriented style
A function-oriented style
Unless I overlooked the clear answer, or am not knowledgeable enough to discern it for myself from the documentation provided at http://perldoc.perl.org/CGI.html, I just don't know what the pros and cons of these two styles are.
With that being said, what are the pros and cons of using the two different styles? Which one is more commonly used? Regarding the object-oriented style, the documentation says I can only use one CGI object at a time. Why is that?
Thanks for all your help. You have all made studying Computer Science very enjoyable, satisfying, and rewarding for me. =D
Behind the scenes, CGI.pm is doing the same thing in both styles. The functional interface actually uses a secret object that you don't see.
For many small-scale CGI projects, you're probably never going to need more than one CGI object at a time, so the functional interface is fine. This is probably the more common style, but only because most people write small scripts for very specific tasks. If you have a lot of other stuff going on, you might not like CGI.pm importing a long list (and it is long) of function names into your script. Some of those names might clash with ones that other modules want to import.
I, however, always use the object-oriented interface. I don't have to worry about name collisions, and it's apparent where any method came from since you see its object. It's also easy to pass the object as arguments to other parts of large applications, etc.
Some people might complain about the extra typing, but that's never been the slow part of programming for me. I've been doing Perl for a long time and I don't mind the syntax. However, I only use CGI to get the input and maybe send the output. I don't mess with any of the HTML stuff.
When the documentation talks about one CGI.pm object at a time, it's referring to access to the input. Once you've read STDIN, for instance, another CGI.pm object won't be able to read it. You can have as many objects as you like, though; they just won't share data, and the first one gets all of the POST data.
You can actually use a mixture, though: import some things, like :html, but still use the OO interface to deal with the input.
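For example, here is the same tiny script in each style; pick one per script, since (as noted above) a second object can't re-read the POST data. The 'name' parameter is just for illustration.

# Function-oriented style: CGI.pm exports functions that act on a
# hidden, module-level CGI object.
use strict;
use warnings;
use CGI qw(:standard);

my $name = param('name') // 'world';
print header('text/html'),
      start_html('Hello'),
      h1("Hello, $name"),
      end_html();

# Object-oriented style: the same calls as methods on an explicit
# object, so nothing is imported and nothing can clash.
use strict;
use warnings;
use CGI ();

my $q    = CGI->new;
my $name = $q->param('name') // 'world';
print $q->header('text/html'),
      $q->start_html('Hello'),
      $q->h1("Hello, $name"),
      $q->end_html;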
I strongly recommend using the object interface.
Will it be absolutely required for your classwork? No, in fact it is arguably overkill for even small production projects.
However, if you are serious about learning to use CGI.pm for larger scale projects you will need to learn the object method. If you reach the point of needing two objects you will have to use the object interface. Programming, like most everything else, gets better with practice. Practicing now on relatively easier problems will help you be ready for more complex ones.
In fact, I'd recommend it as a general rule in programming (although there are exceptions): when faced with two ways of using a particular tool, make a habit of using the one more likely to appear in production code and/or the one that is the correct answer for more of the problem space.

Why are most web services in REST style, and not (also) in XML-RPC?

I know that Flickr provides both XML-RPC and REST ways of working with it.
There are standard XML-RPC libraries for every language (for example, Python has a built-in one, xmlrpclib).
Standard XML-RPC libraries take care of serializing/deserializing as well as sending requests and receiving responses.
It seems to me that websites that use the REST style for the same API would end up writing their own libraries in each language. Example: the Yahoo! Search SDK.
To me, it seems that the XML-RPC way is better, but all the evidence is to the contrary. Why?
So:
Why are most web services in REST style, and not in XML-RPC?
Are there downsides to XML-RPC that are not apparent?
REST is not just easier, it's a lot easier.
XML-RPC/SOAP has a lot of moving parts and a hefty amount of overhead, cognitive and otherwise, which very often is not needed. It's complex, and unless you specifically need some of the features it provides, it's just not worth it.
Not every service request needs to be packaged up as a formal function call with parameters.
REST is also a formal system that's well defined and a great model for representing the resources available on the web (hence the term REST).
Having said that, it's easy to make a lot of newbie mistakes using REST, so Google around for how to use it first; you'll be happy you did.
This is a great question. Unless you are taking advantage of hypermedia for discovery and standard media formats then you are not likely to be getting the benefits of REST. You might as well stick with XML-RPC.
Simple Answer: REST tends to be easier to implement
There are many discussions of this on the web, so I won't go deep in my answer. In short: it's easy. Easy to write, easy to understand, easy to debug. You can try a call in your browser and it will probably bring back something useful. Very good.
This ease comes at the price of fewer "possibilities", but the theory goes that in the long run, the simplicity is worth more.
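To make "easy" concrete, here is a rough Perl sketch of the same fetch both ways; the endpoint URLs and the photos.get method name are made up for illustration:

# REST: one plain HTTP GET -- you could paste this URL into a browser.
use LWP::UserAgent;
my $res = LWP::UserAgent->new->get(
    'http://api.example.com/users/alice/photos');
print $res->decoded_content if $res->is_success;

# XML-RPC: the library hides the envelope and the (de)serialization,
# but every call is a POST to a single endpoint.
use XMLRPC::Lite;   # ships with the SOAP::Lite distribution
my $photos = XMLRPC::Lite
    ->proxy('http://api.example.com/RPC2')
    ->call('photos.get', 'alice')
    ->result;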
REST is the native architectural style of the Web. (In fact, it was reverse-engineered from the way the Web already works.) XML-RPC and SOAP attempt to take a very different (procedural, imperative) programming model and adapt it to the web. The result is that REST ends up being cleaner and more flexible.

Suggestions for Adding Plugin Capability?

Is there a general procedure for programming extensibility capability into your code?
I am wondering what the general procedure is for adding extension-type capability to a system you are writing so that functionality can be extended through some kind of plugin API rather than having to modify the core code of a system.
Do such things tend to be dependent on the language the system was written in, or is there a general method for allowing for this?
I've used event-based APIs for plugins in the past. You can insert hooks for plugins by dispatching events and providing access to the application state.
For example, if you were writing a blogging application, you might want to raise an event just before a new post is saved to the database, and provide the post HTML to the plugin to alter as needed.
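A minimal Perl sketch of that idea; the hook name and the TODO-highlighting plugin are made up:

use strict;
use warnings;

my %hooks;   # event name => list of registered plugin callbacks

sub register_hook {
    my ($event, $callback) = @_;
    push @{ $hooks{$event} }, $callback;
}

sub fire_event {
    my ($event, @args) = @_;
    $_->(@args) for @{ $hooks{$event} || [] };
}

# A plugin hooks the event and alters the post HTML in place...
register_hook(before_post_save => sub {
    my ($html_ref) = @_;
    $$html_ref =~ s{\bTODO\b}{<mark>TODO</mark>}g;
});

# ...and the application fires it just before saving to the database.
my $html = '<p>Remember: TODO tidy this up.</p>';
fire_event(before_post_save => \$html);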
This is generally something that you'll have to expose yourself, so yes, it will be dependent on the language your system is written in (though often it's possible to write wrappers for other languages as well).
If, for example, you had a program written in C, for Windows, plugins would be written for your program as DLLs. At runtime, you would manually load these DLLs, and expose some interface to them. For example, the DLLs might expose a gimme_the_interface() function which could accept a structure filled with function pointers. These function pointers would allow the DLL to make calls, register callbacks, etc.
If you were in C++, you would use the DLL system, except you would probably pass an object pointer instead of a struct, and the object would implement an interface which provided functionality (accomplishing the same thing as the struct, but less ugly). For Java, you would load class files on-demand instead of DLLs, but the basic idea would be the same.
In all cases, you'll need to define a standard interface between your code and the plugins, so that you can initialize the plugins, and so the plugins can interact with you.
P.S. If you'd like to see a good example of a C++ plugin system, check out the foobar2000 SDK. I haven't used it in quite a while, but it used to be really well done. I assume it still is.
I'm tempted to point you to the Design Patterns book for this generic question :p
Seriously, I think the answer is no. You can't make code extensible by default; it would be both hard to write/extend and awfully inefficient. (Mozilla started out with the idea of being very extensible and used XPCOM everywhere; they have since realized it was a mistake and started removing it where it doesn't make sense.)
What makes sense is to identify the pieces of your system that can be meaningfully extended and support a proper API for those cases (e.g. language-support plug-ins in an editor). You'd use the relevant patterns, but the specific implementation depends on your platform/language choice.
IMO, it also helps to use a dynamic language, as it makes it possible to tweak the core code at run time (when absolutely necessary). I appreciated that Mozilla's extensibility works that way when writing Firefox extensions.
I think there are two aspects to your question:
The design of the system itself to be extensible: the design patterns, inversion of control, and other architectural aspects (http://www.martinfowler.com/articles/injection.html). At least to me, yes, these patterns/techniques are platform/language independent and can be seen as a "general procedure".
Their implementation, however, is language- and platform-dependent (for example, in C/C++ you have the dynamic-library machinery, etc.).
Several "frameworks" have been developed to give you a programming environment that provides pluggability/extensibility, but as others have mentioned, don't get too carried away making everything pluggable.
In the Java world, a good specification to look at is OSGi (http://en.wikipedia.org/wiki/OSGi), which has several implementations, the best one IMHO being Equinox (http://www.eclipse.org/equinox/).
Find out what minimum requirements you want to put on a plugin writer. Then make one or more interfaces that the writer must implement for your code to know when and where to execute the plugin's code.
Make an API the writer can use to access some of the functionality in your code.
You could also make a base class the writer must inherit from. This will make wiring up the API easier. Then use some kind of reflection to scan a directory and load the classes you find that match your requirements.
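As a concrete sketch of that scan-and-load step in Perl -- the plugins/ directory layout, the register() hook, and the My::App object are assumptions for illustration:

use strict;
use warnings;
use File::Spec;

my $plugin_dir = 'plugins';
my @plugins;

# Load every module in the plugin directory, keeping only the classes
# that implement the register() hook we require of plugin authors.
opendir my $dh, $plugin_dir or die "Cannot open $plugin_dir: $!";
for my $file (grep { /\.pm\z/ } readdir $dh) {
    my $path = File::Spec->rel2abs(File::Spec->catfile($plugin_dir, $file));
    require $path;
    (my $class = $file) =~ s/\.pm\z//;   # assumes file name matches package name
    push @plugins, $class if $class->can('register');
}
closedir $dh;

# Hand each plugin the application's API object so it can wire itself in.
my $app = My::App->new;   # hypothetical: whatever object exposes your API
$_->register($app) for @plugins;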
Some people also make a scripting language for their system, or implement an interpreter for a subset of an existing language. This is also a possible route to go.
Bottom line is: When you get the code to load, only your imagination should be able to stop you.
Good luck.
If you are using a compiled language such as C or C++, it may be a good idea to look at plugin support via scripting languages. Both Python and Lua are excellent languages that are used to script a large number of applications (Civ4 and Blender use Python, Supreme Commander uses Lua, etc.).
If you are using C++, check out the Boost.Python library. Otherwise, Python ships with headers that can be used from C, and does a fairly good job of documenting the C/Python API. The documentation seemed less complete for Lua, but I may not have been looking hard enough. Either way, you can offer a fairly solid scripting platform without a terrible amount of work. It still isn't trivial, but it gives you a very good base to work from.