We're developing a REST API using ASP.NET Web API for a mobile client.
There is a specific need to keep user-specific data on the server side, because the DAL requests that produce it are expensive.
What is the best practice to do that?
The naive way is to keep the data in the Application's memory.
Considering security issues, we're thinking of using ASP.NET Session.
What would you recommend?
A third-party cloud solution is not an option.
Thanks!
Related
Hey guys, I need help identifying what an intermediate REST layer is called.
I am developing a solution that relies on the user data of a video game company. The company has REST APIs that I can call to gather the data, and I have decided to take the following approach: build a website with React, and build an intermediate layer using Spring Boot which will provide APIs for the website and also call the company's APIs to gather the data. Say I want to research best practices for that intermediate layer (caching, for example); I am having trouble narrowing my search down to something that specifically fits my architecture.
So what would you call that intermediate solution?
If you think I have design flaws, advice is greatly appreciated.
Since your application is acting as a thin proxy layer, a backend for frontend (BFF) application architecture might fit better.
A BFF is, in simple terms, a layer between the user experience and the resources it calls on. When a mobile user requests data in a BFF setup, the request goes through the BFF, which translates it into calls to the more general layer below it.
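As a rough illustration of that shape (sketched here in TypeScript/Express rather than the asker's Spring Boot, and with invented endpoint names and an invented upstream URL), the BFF exposes one endpoint tailored to the website and fans out to the game company's API behind it:

```typescript
import express from "express";

const app = express();
// Placeholder upstream base URL for the game company's REST API.
const UPSTREAM = "https://api.game-company.example.com";

// One BFF endpoint shaped for the website: aggregate two upstream calls
// into a single response the React front end can render directly.
app.get("/bff/players/:id/summary", async (req, res) => {
  try {
    const [profile, matches] = await Promise.all([
      fetch(`${UPSTREAM}/players/${req.params.id}`).then((r) => r.json()),
      fetch(`${UPSTREAM}/players/${req.params.id}/matches?limit=5`).then((r) => r.json()),
    ]);
    res.json({ profile, recentMatches: matches });
  } catch {
    res.status(502).json({ error: "upstream request failed" });
  }
});

app.listen(3000);
```

The same shape carries over directly to a Spring Boot controller.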
Hello Stack Overflow community,
I want to build an Android app that uses soccer data. I've found a service that provides soccer information via a REST API. The service is limited to 5,000 requests per hour, and I want to integrate it.
If I have lots of users, the app will break.
I've found a way to decrease the number of requests by using an API-caching middleware. Example:
https://github.com/kwhitley/apicache
Question: What are the best practices when using rate-limited REST APIs?
Best practice is to implement a server-side application that caches the unique requests with a lifespan, and have the Android application get its data from that server. Don't fetch data directly from third parties.
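A minimal sketch of such a caching layer, assuming a TypeScript/Express server sitting between the app and the rate-limited API (the upstream URL and the five-minute lifespan are placeholders):

```typescript
import express from "express";

const UPSTREAM = "https://api.soccer-data.example.com"; // placeholder third-party API
const TTL_MS = 5 * 60 * 1000;                           // cache lifespan: 5 minutes

const cache = new Map<string, { expires: number; body: unknown }>();
const app = express();

// Every app user hits this proxy; identical requests within the lifespan
// are answered from the cache instead of spending the 5,000 requests/hour.
app.get("/api/*", async (req, res) => {
  const key = req.originalUrl;
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) {
    return res.json(hit.body); // cache hit: no upstream call
  }
  const upstream = await fetch(UPSTREAM + key.replace(/^\/api/, ""));
  const body = await upstream.json();
  cache.set(key, { expires: Date.now() + TTL_MS, body });
  res.json(body);
});

app.listen(3000);
```

The apicache project linked above plays a similar role as drop-in Express middleware.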
Let's suppose I was going to design a platform like Airbnb. They have a website as well as native apps on various mobile platforms.
I've been researching app design, and from what I've gathered, the most effective way to do this is to build an API for the back end: for example a REST API using something like Node.js, with SQL or MongoDB for storage. The front end would then be developed natively on each platform, making calls to the API endpoints to display and update data. This design sounds like it works great for mobile development, but what would be the best way to construct a website that uses the same API?
There are three approaches I can think of:
Use something completely client-side like AngularJS to create a single-page application front end which ties directly into the REST API back end. This seems OK, but I don't really like the idea of a single-page application and would prefer a more traditional approach.
Create a normal web application (in PHP, Python, Node.js, etc.), but rather than tying the data to a typical back end like MySQL, it would basically act as an interface to the REST API. For example, when you visit www.example.com/video/3 the server would call the corresponding REST endpoint (i.e. api.example.com/video/3/show) and render the HTML for the user. This seems like kind of a messy approach, especially since most web frameworks are designed to work with a SQL backend.
Tie the web interface directly in with the REST API. For example, the endpoint example.com/video/3/show can return either HTML or JSON depending on the HTTP headers. The advantage is that you can share most of your code; however, the code becomes more complex and you can't decouple your web interface from the API.
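For concreteness, a minimal sketch of what that third approach might look like, assuming a TypeScript/Express server and an invented /video/:id route (the data lookup is a stand-in):

```typescript
import express from "express";

const app = express();

app.get("/video/:id", (req, res) => {
  // Stand-in for a real database lookup.
  const video = { id: req.params.id, title: "Example video" };

  // Same endpoint, two representations, chosen by the Accept header.
  res.format({
    "application/json": () => res.json(video),              // API clients
    "text/html": () => res.send(`<h1>${video.title}</h1>`), // browsers
  });
});

app.listen(3000);
```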
What is the best approach for this situation? Do you choose to completely decouple the web application from the REST API? If so, how do you elegantly interface between the two? Or do you choose to merge the REST API and web interface into one code base?
Option 1: this is usually the preferred way, but one should have a good command of SPA development.
Option 2: this adds a redundant layer from a performance perspective. You will basically make twice as many requests all the time.
Option 3: this might work with a super simple UI, when it's just a matter of serializing your REST API result into different formats, but I believe you want a rich UI, and going this way will be a nightmare from both an implementation and a maintenance perspective.
SUGGESTED SOLUTION:
Extract your core logic. Put it into a separate project/assembly and reuse it in both your REST API and your UI. This way you will be able to reuse the business logic, which is the same for the UI and the REST API, while keeping the presentation code, which differs between the UI and the REST API, separate.
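A compressed sketch of that split, assuming TypeScript/Express and invented names (in a real project the core function would live in its own package, as suggested):

```typescript
import express from "express";

interface Video {
  id: string;
  title: string;
}

// Core business logic, shared by both representations.
async function getVideo(id: string): Promise<Video> {
  return { id, title: "Example video" }; // stand-in for DB access + business rules
}

const app = express();

// REST API representation: JSON.
app.get("/api/video/:id", async (req, res) => {
  res.json(await getVideo(req.params.id));
});

// Web UI representation: HTML rendered from the same core call.
app.get("/video/:id", async (req, res) => {
  const video = await getVideo(req.params.id);
  res.send(`<h1>${video.title}</h1>`);
});

app.listen(3000);
```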
Hope it helps!
Both the first and the second option seem reasonable to me, in the sense that there are certain advantages in decoupling the backend API from the clients (including your web site). For example, you could have dedicated teams for each project, and if there's a bug in the web app or the API you'd only have to release that project, not both.
Say you're going public with your API. If you release a version that breaks backwards compatibility, with a decoupled web app you'd be able to detect that earlier (say, in a staging environment, given you're developing both in-house). However, if they were tightly coupled they'd probably keep working just fine together, and you'd find out you've broken the other clients only once you release to production.
I would say the first option is the preferable one as a generic approach. The SPA first-load delay problem can be resolved with server-side rendering.
With the second option you will have to face scalability, CPU performance, user session (not on the REST API of course, because it should be stateless) and caching issues on both your REST API services and the normal website Node instances (caching maybe not in all cases). In most cases this intermediate backend layer is just unnecessary; there is no technical limitation that prevents doing all of this in recent versions of browsers.
The third option violates separation of concerns, in your case the separation of presentation from data models/business logic.
We're investigating the iPhone Enterprise Developer Program as a way to develop and distribute in-house apps. Since our backends are all Windows, SQL Server and Oracle databases, we have to find a way to make our data available to the coming in-house apps.
As far as I know, Core Data mainly uses SQLite as its persistent store. I am not sure whether there are any APIs available in the iPhone SDK for SQL Server or Oracle databases. Another possibility, and a very attractive strategy, is to build our own web-based REST services as a CRUD gateway to our databases.
Personally, I would prefer to integrate the in-house apps with our ASP.NET based web services. I am not sure if this is possible. Are there any examples or documentation about this strategy?
An interesting option is to expose your data from your server using ASP.NET OData, then use this project to generate a client in Objective-C to consume your OData service.
As far as I am aware there are no approved APIs to access a server-based database directly. The way we do it in our organisation is pretty much the way you are suggesting. In some instances we use SOAP, but typically we just use a custom JSON or XML web service to access the data.
With regards to ASP.NET, are you talking about making native iPhone apps with ASP.NET, or getting a native iPhone app to talk to an ASP.NET web service? If it's the former, have a look at MonoTouch (I don't know much about it); if it's the latter, it shouldn't cause issues. Just use NSURLConnection and deal with the resource structure on the app side (be it JSON or XML).
The added advantage of using a web service rather than a straight database connection is that you get encryption for free by using HTTPS.
Hope that helps
There's a product called SUP (Sybase Unwired Platform). It provides a framework to handle access to databases, but has the advantage that online access isn't needed all the time: it stores persistent data locally and can then sync up with the host database using messaging.
I am working on an experimental website (accessible through a web browser) that will act as a front end to a RESTful interface (a sub-system). The website will serve as an interface between the user and the RESTful interface, as it will make HTTP requests to the RESTful interface for almost all database operations. Authentication will probably be done using OpenID, and authorization for the database operations will be done via OAuth.
Just out of curiosity, is this a feasible solution, or should I develop two systems that access the database in parallel (i.e. the website has its own data access logic, and the RESTful interface has another)? And what are the pros/cons if I insist on doing it this way (it is just an experimental project for me to learn how things like OpenID and OAuth work in real life anyway), besides the fact that there will be more database queries and HTTP requests generated for each transaction?
Your concept sounds quite feasible. I'd say that you'll get some fairly good wins out of this approach. For starters you'll get a large degree of code reuse since you'll be able to put other front ends on top of the RESTful service. Additionally, you'll be able to unit test this architecture with relative ease. Finally, you'll be able to give 3rd party developers access to the same API that you use (subject possibly to some restrictions) which will be a huge win when it comes to attracting customers and developers to your platform.
On the down side, depending on how you structure your back end, you could run into the standard problem of granularity. Too much granularity and you'll end up making lots of connections for very small amounts of data. Too little and you'll get more data than you need in some cases. As for security, you should be able to lock down the back end so that requests can only be made under certain conditions: requests contain an authorization token, API key, etc.
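As a rough sketch of that lock-down, assuming a TypeScript/Express back end (the header name and the hard-coded key are placeholders; real key management would replace them):

```typescript
import express from "express";

// Placeholder key store; in practice keys would come from configuration or a database.
const VALID_KEYS = new Set(["web-frontend-key"]);

const app = express();

// Reject any request that doesn't carry a known API key.
app.use((req, res, next) => {
  const key = req.header("X-Api-Key");
  if (!key || !VALID_KEYS.has(key)) {
    res.status(401).json({ error: "missing or invalid API key" });
    return;
  }
  next();
});

app.get("/resource", (_req, res) => res.json({ ok: true }));

app.listen(3000);
```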
Sounds good, but I'd recommend that you do this only if you plan to open up the RESTful API for other UIs to use, or simply to learn something cool. Support HTML, XML and JSON for the interface.
Otherwise, use a great MVC framework instead (ASP.NET MVC, Rails, CakePHP). You'll end up with the same basic result, but you'll be more strongly typed to the database.
With a modern JavaScript library your approach is quite straightforward.
ExtJS has always had Ajax support, but it is now able to do this via a REST interface.
So, your ExtJS user interface components receive a URL. They populate themselves via a GET to that URL, and store updates via a POST to it.
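Stripped of the ExtJS specifics, the pattern looks roughly like this (a generic TypeScript sketch; the User shape and the URL are invented, and fetch is assumed to be available):

```typescript
interface User {
  id: number;
  name: string;
}

// A UI component that is handed a URL, populates itself with GET,
// and persists changes with POST.
class UserWidget {
  private data: User | null = null;

  constructor(private readonly url: string) {}

  async load(): Promise<void> {
    const res = await fetch(this.url); // populate via GET
    this.data = await res.json();
  }

  async save(update: Partial<User>): Promise<void> {
    this.data = { ...(this.data as User), ...update };
    await fetch(this.url, {            // store updates via POST
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(this.data),
    });
  }
}

// Usage: const widget = new UserWidget("/api/users/42"); await widget.load();
```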
This has worked really well on a project I'm currently working on. By applying RESTful principles there's an almost clinical separation between the front and back ends, meaning it would be a trivial undertaking to replace either. Plus, the API barely needs documenting, since it's an implementation of an existing mature standard.
Good luck,
Ian
Wow, a question from 2009! And it's funny to read the answers. Many people seem to disagree with the web services approach and a JS front end, which has nowadays become kind of a standard, known as Single Page Applications.
I think the general approach you outline is quite feasible -- the main pro is flexibility, the main con is that it won't protect clueless users against their own ((expletive deleted)) abuses. As most users are likely to be clueless, this isn't feasible for mass consumption... but, it's fine for really leet users!-)
So to clarify, you want to have your web UI call into your web service, which in turn calls into the database?
This is exactly the path I took for a recent project and I think it was a mistake because you end up creating a lot of extra work. Here's why:
When you are coding your web service, you will create a library to wrap database calls, which is typical. No problem there.
But then when you code your web UI, you will end up creating another library to wrap calls into the REST interface... because otherwise it will get cumbersome making all the raw HTTP calls.
So you essentially create two data access libraries, one to wrap the DB and the other to wrap the web service calls. This basically doubles the amount of work you do, because every operation on a resource ends up being implemented in both libraries. This gets tiring real fast.
The simpler alternative is to create a single library that wraps access to the database, as before, then use that library from BOTH the web UI and web service.
This is assuming that your web UI and web service reside on the same network and both have direct access to the backend database server (which was the case for me). In this setup, having both go directly to the database is also a lot more efficient than having the UI go through the web service.
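A compressed sketch of that single shared library, in TypeScript with invented names (the repository would hold the real connection pool and SQL; both the web UI and the web service import it instead of each wrapping its own access path):

```typescript
// dataAccess.ts: the one library that wraps database access.
export interface Order {
  id: number;
  total: number;
}

export class OrderRepository {
  async getOrder(id: number): Promise<Order> {
    // The SQL/DB call lives here, in exactly one place.
    return { id, total: 0 }; // stand-in result
  }
}

// Web service:  res.json(await new OrderRepository().getOrder(42));
// Web UI:       const order = await new OrderRepository().getOrder(42); // then render it
```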