How do services like GitHub, Twilio, Algolia, and Stormpath maintain REST APIs along with SDKs for different languages? Do they generate such code using tools like Enunciate, or do they maintain the client code themselves? I guess for GitHub the client libraries are open source. My questions are:
How do you keep REST API changes and the corresponding SDK changes in sync?
What are the best practices for versioning REST APIs as well as their SDKs? What are the common pitfalls one must be aware of?
At Algolia we've developed a dozen API clients on top of our REST API. Honestly, I must say we suffered a lot to create all of them /o\ I hope the following bullet points will help:
Why did we create our own API clients instead of just using libraries/tools to generate them (which is pretty common for REST API clients)?
to be able to reach a 99.99% SLA, we decided to implement a "retry strategy" in our API clients -> the Algolia index is always replicated on 3 machines -> if one goes down, the API client retries on the other ones -> this cannot be handled by generic libraries (see the sketch after this list)
to reach optimal performance, we wanted to be sure we control the way HTTP keep-alive works -> most generic libraries don't handle that well either
we wanted to force HTTPS as soon as possible
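A minimal sketch of what such a retry strategy could look like in Python with the requests library; the host names are placeholders, not the real Algolia endpoints, and the real clients do quite a bit more (per-host timeouts, host reordering, keep-alive tuning):

```python
import requests

# Placeholder hosts for illustration only; the real clients derive their
# replica hosts from the application ID.
HOSTS = [
    "https://host-1.example.com",
    "https://host-2.example.com",
    "https://host-3.example.com",
]

def request_with_retry(method, path, **kwargs):
    """Try each replica in turn; only give up when every host has failed."""
    last_error = None
    for host in HOSTS:
        try:
            response = requests.request(method, host + path, timeout=2, **kwargs)
            if response.status_code < 500:
                return response          # success, or a client error worth surfacing
        except requests.RequestException as exc:
            last_error = exc             # network/server error: try the next replica
    raise RuntimeError("all hosts unreachable") from last_error
```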
How did we proceed?
at the beginning, we were not super fluent in all those languages; so we started to look at other API clients implemented in each language to understand best practices
we got help from one guy for Node.js and one for Python
but it was really not OK until we decided to move all of them to Travis CI + plug in code coverage to reach 80-95% coverage + automated tests -> obviously, we spotted a lot of bugs :)
as soon as we release a new feature, we need to update all our API clients -> pretty painful…
to ease the README generation, we’ve developed a small tool generating the README for all languages. It’s super Algolia-specific but you can have a look at https://github.com/algolia/algoliasearch-client-readme-generator -> pretty painful to bootstrap as well :)
Things we learned:
maintaining all of them ourselves makes them act exactly the same, whatever the language -> super appreciable from a customer's POV
if your community is building API clients, that's cool, but they may not go as deep with the tests -> we're testing ALL features in ALL API clients -> quality
we would do the same again if we needed to
Related
I have an internal service that exposes a few APIs and has a few clients using those APIs. I have to make some breaking changes and redesign this service's API.
What are some of the best ways to maintain backward compatibility for these clients while making these changes? (I know it's not ideal, but most things in the world aren't, right?)
One solution I can think of is having a config based on which the clients talk either to the old API or to the new one. This allows me to merge the client code immediately and then enable the new API through the config when the time is right for me.
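A minimal sketch of that kind of config-driven toggle in Python; the endpoint URLs, the environment variable, and the response shapes are all hypothetical:

```python
import os
import requests

# Hypothetical flag: flip it (via config/env) when the new API is ready.
USE_NEW_API = os.environ.get("USE_NEW_API", "false").lower() == "true"

OLD_BASE = "http://internal-service/api"      # existing endpoint
NEW_BASE = "http://internal-service/v2/api"   # redesigned endpoint

def get_order(order_id):
    if USE_NEW_API:
        # The new API returns a different shape; adapt it to what callers expect
        # so the rest of the client code does not have to change.
        data = requests.get(f"{NEW_BASE}/orders/{order_id}").json()
        return {"id": data["order"]["id"], "total": data["order"]["total"]}
    return requests.get(f"{OLD_BASE}/order", params={"id": order_id}).json()
```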
I want to find out if there are other solutions out there that are used in practice when making such breaking changes.
The most common way is to introduce versioning in your API, e.g.:
http://api.example.com/ (can default to an older version for backwards compatibility)
http://api.example.com/v1
etc...
See more information and examples here: https://restfulapi.net/versioning/
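As a rough illustration of how URI-based versioning can be routed (using Flask here purely for illustration; the routes and payload shapes are invented):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v1/users/<int:user_id>")
def get_user_v1(user_id):
    # Old response shape, kept intact for existing clients.
    return jsonify({"id": user_id, "name": "Alice Doe"})

@app.route("/v2/users/<int:user_id>")
def get_user_v2(user_id):
    # New response shape introduced by the redesign.
    return jsonify({"id": user_id, "name": {"first": "Alice", "last": "Doe"}})

@app.route("/users/<int:user_id>")
def get_user_default(user_id):
    # Unversioned path defaults to the older version for backwards compatibility.
    return get_user_v1(user_id)
```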
I'm admittedly unsure whether this post falls within the scope of acceptable SO questions. If not, please advise whether I might be able to adjust it to fit or if perhaps there might be a more appropriate site for it.
I'm a WinForms guy, but I've got a new project where I'll be making web service calls for a point-of-sale system. I've read about how CRUD operations are handled in RESTful environments, where GET/PUT/POST/etc. represent their respective CRUD counterparts. However, I've just started working on a project where I need to submit my requirements to a developer who'll be developing a web API for me to use, but he tells me that this isn't how the big boys do it.
Instead of making web requests to create a transaction and then further requests to add items to it (the object-based approach I'm accustomed to), I will use a service-based approach: make a 'prepare' checkout call to see the subtotal, tax, total, etc. for the transaction with the items I currently have on it, and then, when I'm ready to actually process the transaction, make a 'complete' checkout call.
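For illustration, this is roughly what that two-call flow might look like from the client side; all endpoint names and fields here are hypothetical, not what the developer will actually expose:

```python
import requests

BASE = "https://pos.example.com/api"   # hypothetical base URL and endpoint names

cart = {
    "items": [
        {"sku": "ABC-1", "qty": 2, "price": 4.99},
        {"sku": "XYZ-9", "qty": 1, "price": 12.50},
    ]
}

# 'prepare' returns the computed subtotal/tax/total without committing anything.
quote = requests.post(f"{BASE}/checkout/prepare", json=cart).json()
print(quote["subtotal"], quote["tax"], quote["total"])

# 'complete' actually processes the transaction, referencing the prepared quote.
receipt = requests.post(
    f"{BASE}/checkout/complete",
    json={"quote_id": quote["quote_id"],
          "payment": {"method": "cash", "amount": quote["total"]}},
).json()
print(receipt)
```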
I quoted a couple words above because I'm curious whether these are common terms that everyone uses or just ones that he happened to choose to explain the process to me. And my question is, where might I go to get up to speed on the way the 'big boys' like Google and Amazon design their APIs? I'm not the one implementing the API, but there seems to be a little bit of an impedance mismatch in regard to how I'm trying to communicate what I need and the way the developer is expecting to hear my requirements.
Not sure about the specifics of your application, though your general understanding seems OK. There are always corner cases that test the norm, though.
I would advise that you listen to your dev team on how things should be implemented and just provide the "whats" (requirements). They should be trusted to know best practice and your company's own interpretation and standards (right or wrong). If they don't meet your requirements (ease of use, or the result can't easily be reused as requirements expand), then you can review why with an architect or dev manager.
However, if you are interested and want to debate and perhaps understand, check out Atlassian's best practice here: https://developer.atlassian.com/plugins/servlet/mobile#content/view/4915226.
FYI: Atlassian makes leading dev tools that are in use at very large companies. Note also that these best practices came out of refactoring, meaning they've been through the mill and know what has worked and what hasn't.
FYI2 (edit): Reading between the lines of your question, I think your dev is basically instructing you specifically on how transactions are managed within REST. That is, you don't typically begin, add, end. Instead, everything that is transactional is rolled into a transaction wrapper and POSTed to the server as a single transaction.
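To make the single-transaction-wrapper idea concrete, a hypothetical example (the resource name and fields are invented, not a specific API):

```python
import requests

# The whole transaction is assembled client-side and submitted in one POST,
# rather than begin / add item / add item / commit as separate calls.
transaction = {
    "register": "POS-3",
    "lines": [
        {"sku": "ABC-1", "qty": 2},
        {"sku": "XYZ-9", "qty": 1},
    ],
    "payment": {"method": "card", "amount": 22.48},
}

response = requests.post("https://pos.example.com/api/transactions", json=transaction)
response.raise_for_status()   # the server either accepts the whole transaction or rejects it
```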
We're an SME with SAP implemented. We're trying to use the transactional data in SAP to build another system in PHP for our trucking division, for graphical reports, etc. This is because we don't have in-house ABAP development expertise and any SAP modifications are expensive.
Presently, I've managed to achieve our objectives with read-only access to our Quality DB2 server and any writes go to another DB2 server. We've found the CPU usage on the SELECT statements to be acceptable and the user is granted access only to specific tables/views.
SAP's Quality DB2 -> PHP -> Different DB2 client
I'd like your opinion on whether it is safe to read from production the same way. Implementing all of this again via the RFC connector seems very painful. A master-slave config is an option for us, but that again will involve external consultancy.
EDIT
Forgot to mention that our SAP guys don't want to build even reports for another 6 months; they want to leave the system intact. That is why we're building this in PHP on top.
If you don't have ABAP expertise, get it - it's not that hard, and you'll get a lot of stuff "for granted" (as in "provided by the platform") that you'd otherwise have to implement manually - like user authentication, authority management, and software logistics (moving stuff from the development to the production repository). See these articles for a short (although biased) introduction. If you still need an external PHP application, fine - but you really should give ABAP a try first. For web applications, you might want to look into Web Dynpro ABAP. Using the IGS built-in chart engine with the BusinessGraphics element, you'll get a ton of the most common chart types for free. You can also integrate PDF forms created with Adobe LiveCycle Designer.
Second, while "any SAP modifications are expensive" might be a good rule of thumb, what you're suggesting isn't a modification. That's add-on development, and it's neither more expensive nor more complex than any other programming language and/or environment out there. If you can't or don't want to implement your own application entirely using the existing infrastructure, at least use a decent interface - web services, RFC, whatever. From an ABAP point of view, RFC is always the easiest option, but you can use SOAP or REST as well, although you'll have to implement the latter manually. It's not that hard either.
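As a rough illustration of the RFC route from outside SAP, here is a sketch using the pyrfc Python library; the connection parameters and the function module name Z_GET_TRUCK_ORDERS are placeholders, and in practice you'd wrap your read logic in your own RFC-enabled function module on the ABAP side:

```python
from pyrfc import Connection

# Placeholder connection details; requires the SAP NW RFC SDK to be installed.
conn = Connection(
    ashost="sap-app-server", sysnr="00", client="100",
    user="RFC_USER", passwd="********",
)

# The ABAP function module encapsulates client handling, validity dates,
# cancellation flags, etc., and hands back clean rows to the caller.
result = conn.call("Z_GET_TRUCK_ORDERS", IV_DATE_FROM="20240101")
for row in result["ET_ORDERS"]:
    print(row)
```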
NEVER EVER access the SAP database directly. Just don't. You'll have to implement all the constraints like client dependency or checks for validity dates and cancellation flags for yourself - that's hardly less complex than writing a decent interface, and it's prone to break every time the structure is changed. And if at some point you need to read some of the more complex contents like long texts, you're screwed - period. Not to mention that most internal or external auditors (if that happens to be an issue with your company and/or legal requirements) don't like direct database access to a system as critical as this one, which again can cause lots of trouble from people you really don't want to mess with. It's just not worth it.
I was reading a paper titled "An Optimized Web Feed Aggregation Approach for Generic Feed Types" in which Google's PubSubHubbub protocol was discussed, and the paper stated its drawback as something like:
Furthermore, there are patch systems such as pubsubhubbub (Google 2010) which can be seen as a moderator between feed readers and servers. All of these solutions only work if both client and server support the extensions, which is rarely the case. Pubsubhubbub, for example, is only supported by 2% of the feeds in our dataset.
I have never really interacted with this protocol. Does it require clients (subscribers) to run some sort of software on their systems, like the feed listeners subscribers need for obtaining feeds? Is that what the above means?
I am not sure where they pulled that 2% number from, but it is probably not right.
For example, all the major blogging platforms support PubSubHubbub. A lot of news outlets (HuffPo, Gawker, Foxnews, ABCLocal...) support the protocol too.
Many other services, like Craigslist, GetGlue, and even Stack Overflow, support it too. Other services, like GitHub or Instagram, support PubSubHubbub-like APIs for JSON resources, even though this is outside of the current (0.3) spec.
The list goes on and on and on.
Now, as far as complexity goes, it really isn't that difficult for a huge benefit. The "clients" (technically these are web servers) need to be visible and accessible outside the firewall.
For publishers, it is even easier as they just need to ping (a simple HTTP POST request) the hub that they've chosen previously.
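To make both sides concrete, here is a hedged sketch in Python following the 0.3/0.4 flow; the hub, topic, and callback URLs are placeholders, and the callback must be reachable from the public internet:

```python
import requests
from flask import Flask, request

HUB = "https://pubsubhubbub.appspot.com/"        # a public hub
TOPIC = "https://example.com/feed.xml"            # hypothetical feed (topic) URL
CALLBACK = "https://my-server.example.com/push"   # hypothetical subscriber callback

# Publisher side: a single ping tells the hub that the topic has new content.
requests.post(HUB, data={"hub.mode": "publish", "hub.url": TOPIC})

# Subscriber side: ask the hub to deliver future updates to our callback URL.
requests.post(HUB, data={
    "hub.mode": "subscribe",
    "hub.topic": TOPIC,
    "hub.callback": CALLBACK,
    "hub.verify": "sync",
})

# The callback is a small web endpoint: it echoes hub.challenge to confirm the
# subscription, then receives the new feed content as POSTs afterwards.
app = Flask(__name__)

def handle_new_content(body: bytes) -> None:
    print("received", len(body), "bytes of pushed feed content")

@app.route("/push", methods=["GET", "POST"])
def push():
    if request.method == "GET":
        return request.args.get("hub.challenge", "")   # verification of intent
    handle_new_content(request.data)                    # pushed content
    return "", 204
```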
I am really curious to know how Google Buzz and Facebook implement their comment feature, which is updated instantly. Is it similar to Google Wave technology? Are there any resources to learn that technology and implement it on our website?
Thanks !!
I work on the Google Buzz team, so hopefully I can give you a good answer for our side of the equation. I obviously won't go into any of the confidential backend stuff, but I'm happy to address the open standards we use and the open source projects involved.
Starting in the UI space, we use technologies like Closure and GWT to build rich, responsive user interfaces. We use a technology vaguely similar to what you see in the Google App Engine Channel API to push real-time updates to the users. GAE is a really good choice for real-time web applications right now.
On the API side of things, we try to use open standards wherever possible. We use the Atom syndication format to enable feed readers to consume Buzz content, and Pubsubhubbub to enable real-time pushes of the content. In fact, we use Pubsubhubbub for our activity firehose — it's possible to subscribe to the entire real-time stream of all updates that happen in Buzz. Needless to say, this sends a massive amount of traffic to your application. On the JSON side of the equation, we use Activity Streams, and we're actively working with the community to refine and improve that specification. Our Atom feeds include Activity Streams as well, but the focus there is on syndication. All our secured API endpoints for Buzz use the OAuth standard for authorization.
On the backend, I think the only thing we're willing to say publicly is that Protocol Buffers are pretty awesome.
The technology is called the real-time web (http://en.wikipedia.org/wiki/Real-time_web). There are many application models for achieving real-time behavior, and one of them is Comet (http://en.wikipedia.org/wiki/Comet_%28programming%29). A good server to use in your implementation is APE (http://www.ape-project.org/). It supports many common JavaScript frameworks. You can find more in the links provided.
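As a rough illustration of the simplest Comet technique, long polling, here is a hypothetical client loop in Python (the endpoint and payload shape are invented): the client holds a request open until the server has new comments or the request times out, then immediately reconnects.

```python
import requests

last_seen = 0
while True:
    try:
        resp = requests.get(
            "https://example.com/comments/poll",
            params={"since": last_seen},
            timeout=60,                       # the server holds the request open up to ~60s
        )
        for comment in resp.json().get("comments", []):
            print(comment["author"], ":", comment["text"])
            last_seen = max(last_seen, comment["id"])
    except requests.Timeout:
        continue                              # nothing new yet; reconnect and keep waiting
```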