Why do APIs have different URLs? - rest

Why do APIs use different URLs? Are there two different interfaces on the web server, one processing API requests and the other processing web page requests? For example, there might be a site called www.joecoffee.com, but they use the URL www.api.joecoffee.com for their API requests. Why are different URLs being used here?

We separate ours for a few reasons, though they won't always apply.
Separation of concerns.
We write the API code in one project and deploy it as one unit. When we work on the API we worry only about the API, not about page layout. When we do web work, that's completely separate.
Different authentication mechanisms.
The way you tell a user to log in is quite different to how you tell an API client it's not authenticated.
Different scalability requirements.
It might be that the API does a lot of complex operations, while the web server serves more or less static content. So you might want to add hundreds of API servers around the world, but only have 10 web servers.
Different clients.
You might have an API for the web client and a separate API for a mobile client. Or perhaps a public one and a private / authenticated one. This might not apply to your example.
Different technologies.
Kind of an extension of separation of concerns, but it allows you to have a Linux server for one and use something like an AWS Lambda for the other.
SSL wrangling.
This one is more of an anti-reason (particularly for the specific example you give). Many sites use SSL for both the web and the API, and most sites will use SSL for the API at least. SSL certificates are matched to your hostnames, so there might be a reason there. That said, if you had a *.joecoffee.com certificate you would use api.joecoffee.com, not www.api.joecoffee.com, because a wildcard certificate only covers one subdomain level (or, as the joke goes, an extra '.' in your URL costs more).
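To make the separation concrete, here is a minimal sketch assuming a Node/Express stack; the hostnames, ports, and route are illustrative, and in practice the two apps would live in separate projects and deployments rather than one file.

import express from "express";

// --- API: deployed separately, fronted by api.joecoffee.com ---
const api = express();
api.use(express.json());

// API clients get a machine-readable 401, not a redirect to a login page.
api.use((req, res, next) => {
  if (!req.headers.authorization) {
    res.status(401).json({ error: "not_authenticated" });
    return;
  }
  next();
});

api.get("/orders/:id", (req, res) => {
  res.json({ id: req.params.id, status: "brewing" }); // placeholder data
});

api.listen(8081);

// --- Website: deployed separately, fronted by www.joecoffee.com ---
const web = express();
web.get("/", (_req, res) => {
  res.send("<html><body>Joe's Coffee</body></html>"); // page layout lives here only
});
web.listen(8080);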
As #james suggested, there's no single right answer, and some debate.

Related

Single API, multiple frontend clients

I'm currently redesigning an application where we have a single API (.NET Core 3.1) and two frontend (React) applications. In our current solution we share the same endpoints for both clients, meaning there is overhead in the data sent from the API, because the two clients need different information.
I'm looking for a convenient way to split the API so we end up with two different responses, depending on which client sends the request. I see a few options, but I'm quite unfamiliar with the pros and cons (besides the obvious ones) of these approaches since I've never planned an application for multiple clients.
Single endpoint, different data returned depending on a header/query param
Split the API into two APIs (but where should common endpoints land? Creating a third API is overhead for us)
Split controllers into two when necessary (perhaps using partial classes?) and just implement GET api/resourceX/client1 separately from GET api/resourceX/client2
But maybe there is already a built-in solution for this in .NET Core that I'm not aware of? I tried looking for solutions but ended up nowhere near an answer - perhaps my keywords failed me.
This question is somewhat opinion-based and you did not give us many details about how different the needs of the two clients are, but I'll try to answer it.
I assume you have a CRUD API that just serves data from the database without much transformation or processing in your code. That is OK, but keep in mind that REST should be built around operations rather than around data types and CRUD (to use WSDL terms). This is somewhat related to the anemic domain model (as described by Martin Fowler).
If the users of the two clients are the same, then I would add more endpoints to cover the different types of queries. You can use URI templates for this, e.g. /stuff?filter=xyz&view=verbose. If you want to support arbitrary queries, then it is better to use a standard solution for URI queries, e.g. OData, instead of building your own query language. You can use headers too, but as far as I know standard HTTP headers are a lot less flexible than URI queries.
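For example, a single endpoint whose response shape depends on a query parameter could look roughly like this (a Node/Express sketch rather than .NET; the /stuff resource and field names are made up for illustration):

import express from "express";

const app = express();

interface Stuff { id: number; name: string; description: string; history: string[] }

const stuff: Stuff[] = [
  { id: 1, name: "Espresso", description: "Short and strong", history: ["created", "renamed"] },
];

app.get("/stuff", (req, res) => {
  const filter = typeof req.query.filter === "string" ? req.query.filter : "";
  const matches = stuff.filter(s => s.name.toLowerCase().includes(filter.toLowerCase()));

  if (req.query.view === "verbose") {
    res.json(matches); // full representation
  } else {
    res.json(matches.map(({ id, name }) => ({ id, name }))); // compact default
  }
});

app.listen(3000);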
If you have different user groups for different clients based on privileges, e.g. staff and customers, then you can serve different data from the same endpoint based on those privileges. Be aware that if you handle both regular users and admins with a single client, then the code of the client can contain hints about what is accessible to an admin. If this is an issue, then you need to split the client code into multiple files and serve the admin files only to people with admin privileges. Setting the proper cache headers is important in this case too. Another possible solution is to have different clients for different user groups.
Be aware that in the case of real REST your server gives the URIs or URI templates to the clients, just like a webserver gives links and forms to the browser. This is because the URIs can change over time, and it is a lot easier to keep backward compatibility this way. The clients know what a link does based on the metadata you send with the link, e.g. IANA link relations, RDF vocabularies, etc. So the only URI you need to hardcode into your clients should be the root URI of the API. Most people don't try to meet this REST constraint, because it requires a somewhat different mindset than usual.
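A minimal sketch of that last constraint, again in Node/Express terms (the link relations and paths are illustrative): the client hardcodes only the root URI and follows links by their rel.

import express from "express";

const app = express();

// The root resource advertises where everything else lives.
app.get("/", (_req, res) => {
  res.json({
    links: [
      { rel: "collection", href: "/stuff", title: "List stuff" },
      { rel: "search", href: "/stuff{?filter,view}", templated: true }, // URI template
    ],
  });
});

app.listen(3000);

// A client picks links by relation, never by hardcoded path:
//   const root = await (await fetch("https://api.example.com/")).json();
//   const searchLink = root.links.find((l: { rel: string }) => l.rel === "search");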

Designing REST API for Different Consumers

I have an application API that is used in two scenarios:
My frontend application uses it to interact with the server
A client uses it to develop a CLI tool, so there is open documentation of the API.
At the start all of the endpoints were fairly generic, so they were used in both scenarios, but as my application grows I need to:
Create special endpoints for my frontend application for optimization, for example an endpoint for a statistics screen
Change some of the basic API result structures in ways that are not backward compatible and can break the clients' usage.
What is the best practice to design an API to meet these needs?
How should it be designed so that it can be adjusted to the frontend's needs while remaining robust enough not to break the clients' applications?
Frontend-specific endpoints along with general ones?
What is the best practice to design an API to meet these needs?
This highly depends on your scenario. Is your API going to be used internally only or will it be made publicly available to an unknown number of developers and integrators? What is the expected lifetime of the API? Will it evolve?
How should it be designed so that it can be adjusted to the frontend's needs while remaining robust enough not to break the clients' applications?
I recommend committing to API contracts and using a specification for these contracts. I prefer the OpenAPI specification as it comes with a lot of benefits. Make sure you invest a lot of time and team effort (product owner, project managers, backend & frontend devs) to develop the contract over several iterations. After each iteration, test the specification by mocking the API and clients before moving on to implement your frontend app or CLI client.
Frontend-specific endpoints along with general ones?
I would not do that, but I do not know your context. What does a frontend-specific endpoint mean? If it means that, as of today, the endpoint should only be used by the frontend application but is of no use to the current CLI client, then I think it is just a matter of perspective. Make it a general endpoint and have only the frontend app use it. If it provides sensitive information that should be accessed only by the frontend, you need to think about authentication and authorization. I recommend implementing OAuth2 for that.
Create special endpoints for my frontend application for optimization, for example an endpoint for a statistics screen
Frontend-specific endpoints along with general ones?
I would suggest implementing all endpoints in your API and using OAuth2 for authentication. Use OAuth scopes to manage authorization and access to different endpoints for each client (frontend app, CLI).
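As a rough sketch of what that could look like (Node/Express here rather than a specific framework; it assumes some upstream middleware has already validated the OAuth2 token and attached its scopes to the request, and the scope names are made up):

import express, { NextFunction, Request, Response } from "express";

function requireScope(scope: string) {
  return (req: Request, res: Response, next: NextFunction) => {
    // Scopes are assumed to have been attached by the token-validation middleware.
    const granted: string[] = (req as any).tokenScopes ?? [];
    if (granted.includes(scope)) {
      next();
    } else {
      res.status(403).json({ error: "insufficient_scope", required: scope });
    }
  };
}

const app = express();

// The frontend app's token carries "stats:read"; the CLI client's token does not.
app.get("/statistics", requireScope("stats:read"), (_req, res) => {
  res.json({ visits: 42 }); // placeholder data
});

app.get("/items", requireScope("items:read"), (_req, res) => {
  res.json([]);
});

app.listen(3000);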
You wrote you need to:
Change some of the basic API result structures in ways that are not backward compatible and can break the clients' usage.
Try to avoid making breaking changes to your API. If it is used internally only, you may be in control of the different clients accessing the API, but even then the risk of breaking a client is high.
If you need to change existing behaviour you should think about API versioning or API evolution, which is a controversially discussed topic with a lot of different opinions and practices.
What is the best practice to design an API to meet these needs?
Design your resource representations so that they are forward and backward compatible. Fundamentally, they are messages, so treat them that way; new optional fields with reasonable defaults can be added to the messages, but the semantics of a message element should never change.
If you dig through the old XML literature, you'll find references to ideas like Must Ignore and Must Forward -- those are the sorts of principles that also apply to the representations of long-lived resources.
Create new resources when the existing resources cannot be conveniently extended to cover your new use case.
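A small sketch of what that looks like in practice (TypeScript, with made-up field names): the reader ignores unknown fields and fills defaults for missing optional ones, so both older and newer messages keep parsing.

interface OrderV1 { id: string; total: number }
// A later revision adds an optional field; the semantics of existing elements never change.
interface OrderV2 extends OrderV1 { currency?: string }

function parseOrder(json: string): OrderV2 {
  const raw = JSON.parse(json) as Record<string, unknown>;
  return {
    id: String(raw.id),
    total: Number(raw.total),
    // Reasonable default when an old producer omits the new field.
    currency: typeof raw.currency === "string" ? raw.currency : "USD",
    // Unknown fields are ignored rather than rejected (Must Ignore).
  };
}

// An old message and a future message both parse under the current schema.
console.log(parseOrder('{"id":"o-1","total":9.5}'));
console.log(parseOrder('{"id":"o-2","total":4.0,"currency":"EUR","futureField":true}'));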

Microservices end points accessible on Internet

We have a microservice-based architecture where each service has a REST endpoint. These services talk to each other via REST.
However, I noticed that a lot of developers have started calling these services directly from the JavaScript code of our web application. I want to know whether it is recommended to access these microservices over the Internet, or whether they should be hidden behind a facade layer. Of course all the endpoints are authenticated, but any web application user can find these endpoints once they open the browser developer tools (F12).
thanks,
Abhi
I would not do that, for the following reasons:
Security. You are exposing your endpoints as-is, which lets other people learn quite a lot more about them than you would want them to know. Authentication is OK, but your individual services are still open to DDoS, out-of-turn calls, unexpected load, etc.
Service discovery. By allowing direct access to the endpoints you are basically forcing developers to bind themselves to a given URL. This may work, but since it restricts you from changing your URLs in the future, it is better not to do it. With a layer in between, you only need to change one URL if that is ever required.
Code duplication. There are quite a few cross-cutting concerns when it comes to request handling, such as request logging, SSL termination, authentication, DDoS prevention, rate limiting, etc. With one common layer in front of your services you can manage all of these in one place rather than implementing each of them for every service.
If you think any of these are, or could become, major concerns, you should add an additional layer in between and route your internet-facing API through it.
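For illustration, a facade/gateway along those lines could be sketched like this, assuming a Node/Express setup with the express-rate-limit and http-proxy-middleware packages (the internal hostnames and routes are made up):

import express from "express";
import rateLimit from "express-rate-limit";
import { createProxyMiddleware } from "http-proxy-middleware";

const gateway = express();

// Cross-cutting concerns live here exactly once: rate limiting, logging, authentication.
gateway.use(rateLimit({ windowMs: 60_000, max: 100 }));

gateway.use((req, res, next) => {
  console.log(`${req.method} ${req.url}`);
  if (!req.headers.authorization) {
    res.status(401).json({ error: "not_authenticated" });
    return;
  }
  next();
});

// Only the gateway is internet-facing; the services stay on the internal network,
// so their URLs can change without breaking browser code.
gateway.use("/orders", createProxyMiddleware({ target: "http://orders.internal:8080" }));
gateway.use("/users", createProxyMiddleware({ target: "http://users.internal:8080" }));

gateway.listen(8080); // TLS termination would normally sit in front of this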

Design rest service for one to one

Scenario: We are creating/designing a REST service to help us configure a system (and its network, etc.), but we have run into some problems when designing this API. We would like to configure the hostname of the system using a REST call.
Challenge: Because most APIs and design guidelines deal with lists of entities rather than a single one, I can't decide how the REST API should look.
Currently we are considering using something like:
GET /system/0
PUT /system/0 {....}
Problem: There is just one system entity so it doesn't feel good to identify this using 0 because there is only one of it.
Are there any REST guidelines about how this should be done?
Actually, REST does not enforce a particular format for the URL; you can even have a URL like /569284d7-1b59-4343-92d4-90e8753bcbd7 and it's OK. In REST the server guides the client through state changes; it's not about the client knowing which URLs to access.
Most web APIs are created in a CRUD style, with hierarchies of resources like your example /system/0, /system/1, because it's easier to understand and implement (it might not always be RESTful, depending on how tightly the client is coupled to the URLs, but it serves most needs, so people choose to do it like that).
So my advice would be to keep it simple and not over-think it. Using /system/0 is just fine, even if now you have only one system.
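For what it's worth, here is a minimal sketch of those two routes, assuming Node/Express (a singleton /system route without the 0 would also work if the identifier bothers you):

import express from "express";

const app = express();
app.use(express.json());

// The single system entity, kept in memory here just for illustration.
let system = { hostname: "default-host" };

app.get("/system/0", (_req, res) => {
  res.json(system);
});

app.put("/system/0", (req, res) => {
  system = { ...system, ...req.body }; // e.g. { "hostname": "coffee-gw-01" }
  res.json(system);
});

app.listen(3000);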
Just my 2 cents!

Web UI to a restful interface, good idea?

I am working on an experimental website (accessible through a web browser) that will act as a front-end to a RESTful interface (a sub-system). The website will serve as an interface between a user and the RESTful interface, as it will make HTTP requests to the RESTful interface for almost all database operations. Authentication will probably be done using OpenID, and authorization for the database operations will be done via OAuth.
Just out of curiosity, is this a feasible solution, or should I develop two systems that access the database in parallel (i.e. the website has its own data access logic, and the RESTful interface has another)? And what are the pros/cons if I insist on doing it this way (it is just an experimental project for me to learn how things like OpenID and OAuth work in real life anyway), besides the fact that more database queries and HTTP requests will be generated for each transaction?
Your concept sounds quite feasible. I'd say that you'll get some fairly good wins out of this approach. For starters you'll get a large degree of code reuse since you'll be able to put other front ends on top of the RESTful service. Additionally, you'll be able to unit test this architecture with relative ease. Finally, you'll be able to give 3rd party developers access to the same API that you use (subject possibly to some restrictions) which will be a huge win when it comes to attracting customers and developers to your platform.
On the down side, depending on how you structure your back end you could run into the standard problem of granularity. Too much granularity and you'll end up making lots of connections for very little amounts of data. Too little and you'll get more data than you need in some cases. As for security, you should be able to lock down the back end so that requests can only be made under certain conditions: requests contain an authorization token, api key, etc.
Sounds good, but I'd recommend that you do this only if you plan to open up the RESTful API for other UIs to use, or simply to learn something cool. Support HTML, XML, and JSON for the interface.
Otherwise, use a great MVC framework instead (ASP.NET MVC, Rails, CakePHP). You'll end up with the same basic result, but you'll be more "strongly" typed to the database.
With a modern JavaScript library your approach is quite straightforward.
ExtJS has always had Ajax support, and it is now able to do this via a REST interface.
So your ExtJS user interface components receive a URL. They populate themselves via a GET to the URL, and store updates via a POST to the URL.
This has worked really well on a project I'm currently working on. By applying RESTful principles there's an almost clinical separation between the front and back ends - meaning it would be a trivial undertaking to replace either one. Plus, the API barely needs documenting, since it's an implementation of an existing mature standard.
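The same pattern in plain fetch terms, without ExtJS (the URL and resource shape are made up): a UI component loads itself with a GET and persists changes back to the same URL.

interface Coffee { id: number; name: string; price: number }

async function loadCoffee(url: string): Promise<Coffee> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`GET ${url} failed: ${res.status}`);
  return res.json();
}

// PUT is used for the update here; the ExtJS setup described above posts to the same URL.
async function saveCoffee(url: string, coffee: Coffee): Promise<void> {
  const res = await fetch(url, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(coffee),
  });
  if (!res.ok) throw new Error(`PUT ${url} failed: ${res.status}`);
}

// Usage: populate the component, let the user edit, then persist.
loadCoffee("/api/coffees/42")
  .then(coffee => saveCoffee("/api/coffees/42", { ...coffee, price: 3.5 }))
  .catch(console.error);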
Good luck,
Ian
Wow, a question from 2009! And it's funny to read the answers. Many people seem to disagree with the web services approach and JS front end - which has nowadays become the standard, known as Single Page Applications.
I think the general approach you outline is quite feasible -- the main pro is flexibility, the main con is that it won't protect clueless users against their own ((expletive deleted)) abuses. As most users are likely to be clueless, this isn't feasible for mass consumption... but, it's fine for really leet users!-)
So to clarify, you want to have your web UI call into your web service, which in turn calls into the database?
This is exactly the path I took for a recent project and I think it was a mistake because you end up creating a lot of extra work. Here's why:
When you are coding your web service, you will create a library to wrap database calls, which is typical. No problem there.
But then when you code your web UI, you will end up creating another library to wrap calls into the REST interface... because otherwise it will get cumbersome making all the raw HTTP calls.
So you essentially create two data access libraries: one to wrap the DB and the other to wrap the web service calls. This basically doubles the amount of work you do, because every operation on a resource ends up implemented in both libraries. This gets tiring really fast.
The simpler alternative is to create a single library that wraps access to the database, as before, then use that library from BOTH the web UI and web service.
This assumes that your web UI and web service reside on the same network and both have direct access to the backend database server (which was the case for me). In this setup, having both go directly to the database is also a lot more efficient than having the UI go through the web service.
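A sketch of that single-shared-library alternative, with made-up module and function names: one data-access module, imported by both the web service and the web UI's server-side code.

// dataAccess.ts - the one wrapper around the database (stubbed here).
export interface User { id: number; name: string }

export async function getUser(id: number): Promise<User | undefined> {
  // In a real project this would be a SQL query via your driver/ORM of choice.
  return { id, name: "example" };
}

// webService.ts - the REST endpoint for external callers reuses the library:
//   import { getUser } from "./dataAccess";
//   app.get("/users/:id", async (req, res) => res.json(await getUser(Number(req.params.id))));

// webUi.ts - the website's server-side pages call the same library directly,
// skipping the extra HTTP hop:
//   import { getUser } from "./dataAccess";
//   const user = await getUser(42);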