Is system.describe still in the JSON-RPC spec? - specifications

I have a JSON-RPC library written in PHP, and I use it from JavaScript in another project (though that project could use a different library). I looked into the spec: 1.0 does not have system.describe, it is defined in the historical 1.1 proposal, but it is not in 2.0. Does that mean it was dropped from the specification? Was there any discussion about it?

In JSON-RPC 2.0 you are free to use "system.describe" like any other ordinary method name.
If (and only if) it is for local standardization purposes, you can use the rpc. method-name prefix (e.g. "rpc.system.describe"); see the Extensions section:
Each system extension is defined in a related specification. All system extensions are OPTIONAL.
As for "defined in a related specification": help yourself, you do not need an international discussion or consensus; it can also be a local/personal specification.
This kind of "local standardization" is interesting when you have multiple endpoints (or many local web services) and want some common, controlled way to check them.
Notes
Another question is about JSON-RPC 2.0 itself... Is the 2.0 specification still alive? Is it RESTful-compatible? How many systems are using it, and is there any statistical data? ... I am using it, but with these questions in mind :-)
A uniform interface with self-descriptive messages is a good requirement that could enhance the JSON-RPC stack (e.g. by use of JSON-LD), but there has been no discussion since 2013 (perhaps 2010).

Related

REST APIs in Go - using net/http vs. a library like Gorilla

I see that Go itself has the net/http package, which is adequate at providing everything you need to get your own REST APIs up and running. However, there is a variety of frameworks; the most popular is perhaps Gorilla.
Considering that one of the main things I need to do going forward is to build REST APIs that will access some back-end storage (databases, caches, etc.) to perform CRUD operations, is it good to go with Go's standard library itself, or should I consider using some framework?
Normally, people write a new library or framework to solve a problem present in an existing library. But a lot of frameworks also tend to make things worse when the actual demands are simple.
So I have a few questions:
Is the standard library in Go good enough to support basic to moderate functionality for REST?
If I do end up using the built-in library and tomorrow have to change it to use some framework (like Gorilla), how difficult/costly would that be?
Are frameworks really addressing the problems or just making simple problems complex?
I would be extremely grateful if someone who has been through making this choice could share their thoughts here while I research more on my own.
The net/http package is probably sufficient for most scenarios, but if you want to ease your development, you should use a third-party package, such as Gorilla.
For example, net/http's ServeMux does a great job of routing incoming requests for fixed URL paths, but for pretty paths which use variables you will need to implement a custom multiplexer, while with Gorilla you get this for free.
Another example: if you want to specify RESTful resources with proper HTTP methods, it is hard to work with the standard http.ServeMux, while with Gorilla's mux package requests can be matched based on URL host, path, path prefix, schemes, header and query values, and HTTP methods.
One of the great benefits of Gorilla is that it is fully compatible with the net/http package and can be substituted in the future.
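A minimal sketch of that difference (assuming the github.com/gorilla/mux package; the /books/{id} route and handler are invented for illustration):

    package main

    import (
        "fmt"
        "net/http"

        "github.com/gorilla/mux"
    )

    // getBook reads the {id} path variable that gorilla/mux extracted for us.
    func getBook(w http.ResponseWriter, r *http.Request) {
        vars := mux.Vars(r)
        fmt.Fprintf(w, "book %s\n", vars["id"])
    }

    func main() {
        r := mux.NewRouter()
        // A pretty path with a variable, restricted to GET requests.
        r.HandleFunc("/books/{id}", getBook).Methods(http.MethodGet)

        // *mux.Router implements http.Handler, so it plugs straight into the
        // standard library server - which is what makes substituting it later cheap.
        http.ListenAndServe(":8080", r)
    }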
I totally encourage you to use Gorilla's toolkit to develop REST services.
The built-in net/http package is sufficient to build a complete REST API. However, some of the libraries can make building an API slightly easier, particularly if the REST API is complex. Changing from the built-in facilities to any decent framework is relatively straightforward - they generally accept handlers of the http.Handler type.
In the end, though, this is an extremely situational choice. The best thing you can do is examine each available solution, contrast and compare, and build a proof of concept with the top options if you possibly can. First-hand experience will guide you best.

Dapper in Metro apps

Is there any way to include Dapper in metro apps?
It relies on System.Data which is left out in WinRT.
If not is there any similar framework which can be used?
Is there any way to include Dapper in metro apps?
No. As you observe, the lack of System.Data is pretty much a show-stopper, however in addition WinRT also omits meta-programming support, so the entire core would need to be re-written to use regular (i.e. slow) reflection. There are some elaborate hoops you can jump through to get around this, but without System.Data it seems a lost cause.
Basically, the intent with WinRT (as I understand it) is to consume your data from things like web-services, the classic "smart client" rather than "rich client" model.
So you might consider:
server (full .NET)
using "dapper" for data-access
exposing some call/serialization protocol
client (.NET for Windows Store apps, or whatever the term is today)
consuming some call/serialization protocol
Strictly speaking, you can IIRC break all the rules and just reference .NET anyway, but that won't pass any MS validation, and won't be a proper metro Windows Store application.

Ada/C++ distributed applications

I am trying to evaluate some technologies for implementing a communication process between some Ada modules and some C++/OpenGL modules. There is a (Windows XP) Ada application which communicates with a C++ application using COM, but I intend to switch from COM to a new technology. Some suggestions came up, such as direct sockets, DSA, PolyORB, CORBA and DDS/OpenSplice.
DSA appears to be Ada-only (not sure)
PolyORB's last release was in 2006, according to http://polyorb.ow2.org/
CORBA: someone argued that its complexity is hard to justify for implementing simple applications
DDS/OpenSplice appears to be implemented only for C/C++, so an Ada binding would have to be written. It also does not look very simple to implement.
Personally I like COM, but given the migration I would rather take the sockets option due to its simplicity, and the interface architecture could be implemented very easily.
So, what do you think? Could you please comment on these technologies or even suggest others?
Thanks very much.
A big factor in your choice is the size and complexity of the system you're reengineering. Is it a broadly distributed system with lots of complex messages? Is it a relatively small system with a handful of mundane message exchanges?
For small systems I used to just roll my own socket-based comm modules. Now, though, I lean more towards ZeroMQ (brokerless) or STOMP (text-based). And there's some Ada support for these: zeromq-Ada and TOMI_4_Ada (which supports both).
While these handle the distribution mechanics, you would still need to handle the serialization of the messages into transportable form.
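(The thread is about Ada/C++, but just to illustrate that serialization concern in a language-neutral way, here is a rough sketch, written in Go for brevity, of a minimal length-prefixed wire format; the message fields are invented.)

    package wire

    import (
        "encoding/binary"
        "encoding/json"
        "io"
        "net"
    )

    // Message is a stand-in for whatever your modules actually exchange.
    type Message struct {
        Topic   string  `json:"topic"`
        Payload float64 `json:"payload"`
    }

    // WriteMsg serializes m as JSON and writes it with a 4-byte big-endian length prefix.
    func WriteMsg(conn net.Conn, m Message) error {
        body, err := json.Marshal(m)
        if err != nil {
            return err
        }
        var lenBuf [4]byte
        binary.BigEndian.PutUint32(lenBuf[:], uint32(len(body)))
        if _, err := conn.Write(lenBuf[:]); err != nil {
            return err
        }
        _, err = conn.Write(body)
        return err
    }

    // ReadMsg reads one length-prefixed message and deserializes it.
    func ReadMsg(conn net.Conn) (Message, error) {
        var m Message
        var lenBuf [4]byte
        if _, err := io.ReadFull(conn, lenBuf[:]); err != nil {
            return m, err
        }
        body := make([]byte, binary.BigEndian.Uint32(lenBuf[:]))
        if _, err := io.ReadFull(conn, body); err != nil {
            return m, err
        }
        err := json.Unmarshal(body, &m)
        return m, err
    }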
CORBA/PolyORB and DDS solutions are rather heavyweight, but are complete solutions. If you don't fear IDL and managing brokers, they can do well for large-scale distributed systems. Yeah, there may need to be some Ada bindings built, but if you can get C headers or a C API to bind to, it's typically not too bad if you focus on just binding the functions and data structures you require. Rather than creating a comprehensive binding, liberally employ opaque and void pointers (void_ptr, opaque_structure_def_ptr) for structs and parameters whose internal contents you don't care about.
we intend to switch from COM to a new (supported) technology, since COM is no longer supported by Microsoft
Whoever told you COM is no longer supported is totally clueless.
While COM has undergone many name changes (OLE, COM, OLE Automation, DCOM, COM+, ActiveX, WinRT) and extensions over the past decades, it is the single most important technology for MS platforms: past, present and future. The .NET runtime uses COM extensively. Much of the Win32 API is written in COM, and the portions that weren't, will be in Win8, since WinRT components are COM objects.
Also take a look at AMQP (with RabbitMQ as the server); there seems to be an Ada library available for it: http://www.gti-ia.upv.es/sma/tools/AdaBinding/index.php.
If you could find an Ada binding, Apache Thrift might also be a lightweight option. Maybe you could even write your own binding; it should not be more difficult than rolling something of your own over sockets.
If you do go the sockets route, then I would suggest ZeroMQ as "supersockets".
One more option for your list would be to use Ada's distributed programming support and write C/C++ wrappers to interface your C++ program with it.
I don't know that it's the best option for your needs, but if your Ada compiler supports Annex E, it should be on the list.
Since this post, AdaCore has published PolyORB on GitHub with regular updates :)

What is a good starting point for developing a RESTful web service in Clojure?

I am looking for something lightweight that, at a minimum, should support the following features:
Support for easy definition of actions through metadata
Wrapper that extracts parameters from the request into a Clojure map, or as function parameters
Support for multiple forms of authentication (basic, form, cookie)
Basic authorization based on API method metadata
Session object wrapped in a Clojure map
Live coding from the REPL (no need to restart the server)
Automatic serialization of return values to JSON and XML
Nice (pluggable) URL parameter handling (e.g. /action/par1/par2 instead of /action?par1=val1&par2=val2)
I know it is relatively easy to roll my own micro-framework for each one of these options, but why reinvent the wheel if something like that already exists? Especially if it is:
An active project with a rising number of contributors/users
Has at least basic documentation and a tutorial online.
First of all, I think that you are unlikely to find a single shrinkwrapped solution to do all this in Clojure (except in the form of a Java library to be used through interop). What is becoming Clojure's standard Web stack comprises a number of libraries which people mix and match in all sorts of ways (since they happily tend to be perfectly compatible).[1]
Here's a list of some building blocks which you might find useful:
Ring -- Clojure's basic HTTP request handling library; all the other webby libraries (for writing routes &c.) that I know of are compatible with Ring. Ring is being actively developed, has a robust community, is very well-written and has a nice SPEC document detailing its design philosophy. This blog post provides a nice example of how it might be used (reacting to GitHub commits).
Sandbar -- currently an authentication library, more types of functionality planned; under development.
Compojure -- a mature and robust library which provides a nice DSL for writing routes to be used on top of Ring. This will give you the nice URL parameter handling.
Compojure-rest -- "a library for building RESTful applications on top of Compojure". Compojure-rest is, as far as I can tell, in its early stages of development; perhaps you might see this as an opportunity to influence its design. :-)
For dealing with XML, there's clojure.contrib.lazy-xml (and the helper library clojure.contrib.zip-filter.xml) and Enlive (the built-in clojure.xml namespace is currently not very usable); these would be used in tandem (though for your purposes the former might suffice).
For JSON, there is a library in contrib and there's clojure-json (and I think there was at least one other lib I seem to be forgetting now...); pick the one you like best.
All of these will be perfectly happy with a REPL-driven development style (see the accepted answer to this SO question for a Ring trick which is very much to the purpose here). I suppose the above collection of links does leave a few blind spots (in particular, the authentication story is still being ironed out, as far as I can tell), but hopefully it's a good start.
[1] The only single-package solution for building webapps in Clojure that I know of is Conjure, inspired by Rails; unfortunately I have to admit that I don't know much about it, so if you feel interested, follow the link and look around the sources, wiki &c.
While building my first Clojure REST service I often found myself asking the same question. The Clojure Toolbox helped me a lot: http://www.clojure-toolbox.com/
If you are looking for some sample, real-world, illustrative code to get you started, then you could study the clojure-news-feed project on GitHub, which demonstrates how to implement a non-trivial RESTful web service with compojure/ring that wraps both SQL (PostgreSQL or MySQL) and NoSQL (Cassandra), search (Solr), caching (Redis), event logging (Kafka), connection pooling (c3p0), and real-time metrics via JMX.
This blog about Building a Scalable News Feed Web Service in Clojure provides a good introduction. I ran some load tests against this service on a humble AWS deployment and got about eighty transactions per second with less than a half second average latency per transaction.
Take a look at the Liberator library: http://clojure-liberator.github.io/liberator/ It's not a standalone solution, but it is very good for REST service definition.
Just to provide an updated answer to this old question, currently (in 2018) I think Luminus provides an excellent starting point. It's using many of the libraries (ring, compojure, etc.) mentioned in previous answers, is modular and as close to "single package" as you can get with Clojure. Specifically for REST, take a look at compojure-api. Luminus recommends buddy for authentication, I've had good success using it both for traditional session-based auth as well as Oauth and stateless JWTs.

Suggestions for Adding Plugin Capability?

Is there a general procedure for programming extensibility capability into your code?
I am wondering what the general procedure is for adding extension-type capability to a system you are writing so that functionality can be extended through some kind of plugin API rather than having to modify the core code of a system.
Do such things tend to be dependent on the language the system was written in, or is there a general method for allowing for this?
I've used event-based APIs for plugins in the past. You can insert hooks for plugins by dispatching events and providing access to the application state.
For example, if you were writing a blogging application, you might want to raise an event just before a new post is saved to the database, and provide the post HTML to the plugin to alter as needed.
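A minimal sketch of that hook pattern (written in Go here, though the idea is language-neutral; all names are invented for illustration):

    package plugins

    // BeforePostSave is the hook signature: a plugin receives the post HTML
    // and returns the (possibly modified) HTML.
    type BeforePostSave func(html string) string

    var beforePostSaveHooks []BeforePostSave

    // Register is called by plugins at startup to hook into the event.
    func Register(h BeforePostSave) {
        beforePostSaveHooks = append(beforePostSaveHooks, h)
    }

    // FireBeforePostSave runs every registered hook, threading the HTML
    // through each one, and returns the result the host should persist.
    func FireBeforePostSave(html string) string {
        for _, h := range beforePostSaveHooks {
            html = h(html)
        }
        return html
    }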
This is generally something that you'll have to expose yourself, so yes, it will be dependent on the language your system is written in (though often it's possible to write wrappers for other languages as well).
If, for example, you had a program written in C, for Windows, plugins would be written for your program as DLLs. At runtime, you would manually load these DLLs, and expose some interface to them. For example, the DLLs might expose a gimme_the_interface() function which could accept a structure filled with function pointers. These function pointers would allow the DLL to make calls, register callbacks, etc.
If you were in C++, you would use the DLL system, except you would probably pass an object pointer instead of a struct, and the object would implement an interface which provided functionality (accomplishing the same thing as the struct, but less ugly). For Java, you would load class files on-demand instead of DLLs, but the basic idea would be the same.
In all cases, you'll need to define a standard interface between your code and the plugins, so that you can initialize the plugins, and so the plugins can interact with you.
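As a rough analogue of that contract using Go's standard plugin package (Linux/macOS only; the interface, symbol and file names are invented, and in practice the shared interface type would live in a small package imported by both host and plugin):

    package main

    import (
        "fmt"
        "plugin"
    )

    // HostAPI is the agreed-upon contract both sides compile against.
    type HostAPI interface {
        Name() string
        OnEvent(event string)
    }

    func main() {
        // Load a plugin that was built with: go build -buildmode=plugin
        p, err := plugin.Open("./myplugin.so")
        if err != nil {
            fmt.Println("load failed:", err)
            return
        }

        // The plugin is expected to export: func GimmeTheInterface() HostAPI
        sym, err := p.Lookup("GimmeTheInterface")
        if err != nil {
            fmt.Println("symbol missing:", err)
            return
        }
        factory, ok := sym.(func() HostAPI)
        if !ok {
            fmt.Println("unexpected symbol type")
            return
        }

        ext := factory()
        fmt.Println("loaded plugin:", ext.Name())
        ext.OnEvent("startup")
    }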
P.S. If you'd like to see a good example of a C++ plugin system, check out the foobar2000 SDK. I haven't used it in quite a while, but it used to be really well done. I assume it still is.
I'm tempted to point you to the Design Patterns book for this generic question :p
Seriously, I think the answer is no. You can't write extensible code by default; it will be both hard to write/extend and awfully inefficient (Mozilla started with the idea of being very extensible, used XPCOM everywhere, and has now realized it was a mistake and started to remove it where it doesn't make sense).
What makes sense is to identify the pieces of your system that can be meaningfully extended and support a proper API for those cases (e.g. language-support plug-ins in an editor). You'd use the relevant patterns, but the specific implementation depends on your platform/language choice.
IMO, it also helps to use a dynamic language - makes it possible to tweak the core code at run time (when absolutely necessary). I appreciated that Mozilla's extensibility works that way when writing Firefox extensions.
I think there are two aspects to your question:
The design of the system so that it is extensible (the design patterns, inversion of control and other architectural aspects; see http://www.martinfowler.com/articles/injection.html). And, at least to me, yes, these patterns/techniques are platform/language independent and can be seen as a "general procedure".
Now, their implementation is language and platform dependent (for example, in C/C++ you have the dynamic library stuff, etc.)
Several 'frameworks' have been developed to give you a programming environment that provides pluggability/extensibility, but as some other people mention, don't go too crazy making everything pluggable.
In the Java world a good specification to look at is OSGi (http://en.wikipedia.org/wiki/OSGi), which has several implementations, the best one IMHO being Equinox (http://www.eclipse.org/equinox/).
Find out what minimum requirements you want to put on a plugin writer. Then make one or more interfaces that the writer must implement, so that your code knows when and where to execute the plugin's code.
Make an API the writer can use to access some of the functionality in your code.
You could also make a base class the writer must inherit from. This will make wiring up the API easier. Then use some kind of reflection to scan a directory and load the classes you find that match your requirements.
Some people also make a scripting language for their system, or implements an interpreter for a subset of an existing language. This is also a possible route to go.
Bottom line is: When you get the code to load, only your imagination should be able to stop you.
Good luck.
If you are using a compiled language such as C or C++, it may be a good idea to look at plugin support via scripting languages. Both Python and Lua are excellent languages that are used to script a large number of applications (Civ4 and Blender use Python, Supreme Commander uses Lua, etc.).
If you are using C++, check out the Boost.Python library. Otherwise, Python ships with headers that can be used from C, and does a fairly good job of documenting the C/Python API. The documentation seemed less complete for Lua, but I may not have been looking hard enough. Either way, you can offer a fairly solid scripting platform without a terrible amount of work. It still isn't trivial, but it provides you with a very good base to work from.