Making single page apps more responsive while maintaining RESTfulness

For a single-page app to be RESTful, it needs access to API endpoints from which to retrieve its content. This means you'll have, say, an API/user_information endpoint, an API/article/ endpoint, an API/comments endpoint, and so on. For large applications these AJAX calls will slow down rendering, especially over slow (mobile) connections. Ironically, this is not as much of a problem in the old world of doing everything server-side, since the entire HTML is delivered in bulk upon the first request. Thus, single-page apps, which are supposed to be faster and more responsive than old-school server-rendered apps, may end up being considerably slower (admittedly, only if they require a large enough number of API calls).
The options, as far as I can see, seem to be:
1) Forget single-page apps, go back to doing everything server-side.
2) Consolidate all the API endpoints into fewer (possibly one) endpoint(s) so all content is retrieved with minimum latency. But now you're no longer RESTful.
These are both unsatisfying and sad. Is there a better solution?
PS: caching content in local storage seems to cause more problems than it solves because now you have to worry about invalidating client-side caches every time you push out fresh content.

I'm currently working with an architecture similar to the one you describe, and I agree with your thoughts. What we do is try to find a compromise between the two options you described, even if that means we are sometimes not 100% RESTful.
I don't think it's always necessary to be 100% RESTful if your needs dictate otherwise. Using the DTO pattern, you can define DTOs in a way that optimizes your application's performance by minimizing the number of calls, while still offering reusability.
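As a rough illustration of that compromise, here's a minimal sketch of a consolidated "page DTO" that one endpoint could return instead of the client calling /API/user_information, /API/article/ and /API/comments separately; the shape, the endpoint, and the loader functions are all hypothetical:

```typescript
// Hypothetical aggregate DTO: one call returns everything the article view needs.
interface ArticlePageDto {
  user: { id: string; displayName: string };
  article: { id: string; title: string; body: string };
  comments: { id: string; author: string; text: string }[];
}

// Placeholders for the existing, still-reusable per-resource loaders.
declare function loadUser(userId: string): Promise<ArticlePageDto["user"]>;
declare function loadArticle(articleId: string): Promise<ArticlePageDto["article"]>;
declare function loadComments(articleId: string): Promise<ArticlePageDto["comments"]>;

// The server fans out to the individual resources in parallel, so the client
// pays one round trip instead of three.
async function getArticlePageDto(articleId: string, userId: string): Promise<ArticlePageDto> {
  const [user, article, comments] = await Promise.all([
    loadUser(userId),
    loadArticle(articleId),
    loadComments(articleId),
  ]);
  return { user, article, comments };
}
```

The individual REST endpoints can stay exactly as they are for other consumers; only the view-specific aggregate is non-RESTful.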

Related

General question on Server Side Google Tag Manager

So I've been looking into page speed and have been reading about Google's Server Side GTM. Wondering if a lot of people have transitioned to this and what the costs could be for a high-traffic marketing site. If the site already has a ton of tags, is this something that can be set up by the marketing department or developers, or is it more likely that we'd need to hire someone who specializes in GTM?
If anyone has experienced this transition and has any information to share regarding page speed results that would be greatly appreciated as well.
The page speed change will vary depending on your site and the health of your container. You can measure it: go to your site, open devtools, block GTM, reload the page, and compare that to how fast it loads with GTM enabled.
General conclusion: most GTM containers have zero influence on page load speed, even if the library is not cached, because it loads non-blocking.
Keep in mind that the best practice for implementing server-side GTM is to use it in conjunction with regular GTM, which somewhat defeats the purpose, since sending GA hits isn't something that affects your site speed, unless you create real hardware load with them, which is pretty difficult.
Now, to maintain your sGTM you do need to increase spending, but the main expense is usually not maintaining the instance with the container. The main expense is more long-term in nature: maintaining the backend instance means more expensive experts, much tougher debugging, and typically the involvement of normally more expensive backend devs, and so on.
Generally, going the sGTM route just because it sounds like fun is a bad idea in production. There should be a good business case to justify it.

securing usage of REST API when using SPA without authentication

After reading all the threads on Stack Overflow and other platforms, I still wasn't able to find an answer that satisfies me.
The task:
I want to create a single page application (SPA) which receives data from a REST API. In this SPA, NO authentication should be used. It's a public site.
But the REST API should only be accessible to people who loaded the SPA from my webserver.
I assume this is only solvable with something on the server side like sessions, cookies, etc. Otherwise, I'm open to your suggestions, solutions, etc.
Thx in advance!
There's no reasonably easy way to do this. You can easily prevent other domains (in browsers) from accessing an API on your domain (via CORS), but it's significantly harder to prevent scripts from doing this.
The issue lies in how you distinguish legit browser traffic from a script. It turns out that this is not easy. You could try to detect 'unusual behavior' as much as possible (for example, a large number of requests in a short time), but this doesn't stop clients that simply go slower.
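For what it's worth, a minimal sketch of the CORS restriction mentioned above, assuming an Express backend (the origin and route are made up); it keeps browser code on other domains out, but does nothing against curl or server-side scripts:

```typescript
import express from "express";
import cors from "cors";

const app = express();

// Only pages served from this origin may call the API from the browser.
// Browsers enforce this; a script or curl will simply ignore it.
app.use(cors({ origin: "https://my-spa.example" })); // assumed SPA origin

app.get("/api/data", (_req, res) => {
  res.json({ hello: "world" });
});

app.listen(3000);
```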
Ultimately if people want your data, they will find some way around whatever restrictions you come up with. You should reevaluate this and use one of the following options:
1) Don't do an SPA and API. Although one could argue that if the data exists in HTML, it can still be crawled.
2) Add authentication. But obviously this won't help you in any way if anyone can authenticate.
3) Re-evaluate why you have this restriction. What are you worried about? If you're worried about people taking your data and using it elsewhere, how does only showing it in a browser from one domain help with that? If you're worried about copyright theft, why not take a legal approach to it?
I've seen a lot of these types of questions, but in my opinion I haven't yet seen one with a legitimately good reason to want this. But maybe you're the first.
I believe I answered my own question in a comment 30 minutes ago... I think that with a captcha I'm able to secure the REST API against unwanted access.

BMC Remedy 9.1 REST API front end performance impacts

I understand this is a loaded question; however, I am going to try my luck and see if anyone has info/documentation that I have been unable to locate thus far. Perhaps someone with a better understanding of REST API functionality could point me in the correct direction.
In looking to deploy Remedy 9.1, I am being told the REST API will be disabled due to performance concerns about the front-end application itself (the web interface). I am trying to find out if there is any quality control or prioritization that occurs on the backend that would mitigate this concern.
I understand that there are some obvious savings in not having to dynamically render a webpage or engage the front end at all when making a REST API call, so when pulling data it's leaner, one-to-one, to use REST. However, if someone were to be reckless with a REST API call, is the ARServer at all equipped to manage that request by assigning it a low priority, or would it simply take down the whole system?
In a perfect world I would love if someone could point me to some specific documentation that has something close to a definitive answer either way.
Thanks for any help anyone may be able to send my way.
Deploy a server group, keep one server for the front end and one for the back end. Only make calls to the 'back end' server.
I think the bigger issue that is being avoided is that there are all kinds of places for bottlenecks. If they are concerned about that, what guardrails do they have in place to protect against an overloaded database? Or a table lock? The REST API is just another client. There is only so much you can do to protect the system. Since it is another client, it has all the protections at the app level that exist for every other client. Indexing, requiring search criteria, threading, and other performance tactics will help regardless of the client.
RCJ has a good point in that you can dedicate servers in the server group (or even outside of it) to accommodate their needs. But you will always go back to the central database as the ultimate risk.

Redis / Memcached REST caching for an external service

Question here about caching data from calls to an external REST API.
There is currently a REST service set up to generate and retrieve some specific types of reports that the UI must consume. However, this service is not meant for high-volume usage or to be exposed to the public, and the reports are fairly static, possibly only changing every 10-20 minutes. The web application resides on a separate server.
What I would like to do, using Memcached or Redis, is this: when a request for data comes in from the UI to the web back end, make a call from the web application back end to the report server to get the specified report, transform the data into the appropriate format for the UI to consume, cache it with a timestamp, and return it to the UI, so that subsequent requests can be served from memory on the web application's back end without having to re-request from the report server. I would also need to check this timestamp and make a new request if the cached report has been held longer than the specified time. The data that will be cached is fairly minuscule, just some smallish JSON objects with only a handful of values holding the information the UI needs, and there is NOT a ton of these objects; I would not be surprised if they could all easily be stored in memory at once, so the timestamping is the only invalidation that should be necessary.
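To make the intended flow concrete, here's a minimal sketch of it using a plain in-memory Map; the TTL, the report-server URL, and the transform are assumptions, and swapping the Map for Memcached or Redis would only change the get/set calls:

```typescript
type CacheEntry = { data: unknown; cachedAt: number };

const TTL_MS = 10 * 60 * 1000; // assumed: reports change roughly every 10-20 minutes
const cache = new Map<string, CacheEntry>();

// Hypothetical stand-ins for the real report-service call and UI transform.
async function fetchReportFromService(reportId: string): Promise<unknown> {
  const res = await fetch(`https://report-server.internal/reports/${reportId}`); // assumed URL
  return res.json();
}
function transformForUi(raw: unknown): unknown {
  return raw; // shape the report however the UI expects it
}

async function getReport(reportId: string): Promise<unknown> {
  const entry = cache.get(reportId);
  if (entry && Date.now() - entry.cachedAt < TTL_MS) {
    return entry.data; // still fresh, serve from memory
  }
  // Stale or missing: re-request from the report server, transform, and re-cache.
  const data = transformForUi(await fetchReportFromService(reportId));
  cache.set(reportId, { data, cachedAt: Date.now() });
  return data;
}
```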
I have almost 0 experience here when it comes to caching / Memcached / Redis. Are there advantages to one or the other? Is something like this possible? How would I go about implementing it? Are there other options?
Appreciate the help!
Server-caching these kinds of RESTful query responses is very possible and quite common.
With any server-based caching, you should also think hard about whether you really need it, as it does add complexity. It can certainly make a huge improvement, but since your usage volume is low, it might actually be overkill. You may also be able to use HTTP caching protocols to avoid the need for caching on the server: if the data doesn't change very often and you use ETags or modified dates correctly, along with an intermediary proxy like AWS CloudFront, users will rarely experience that delay.
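As a rough sketch of that HTTP-caching approach, assuming an Express backend (the route and loadReport are illustrative): compute an ETag from the response body, let clients and proxies revalidate with If-None-Match, and answer 304 when nothing has changed:

```typescript
import express from "express";
import { createHash } from "crypto";

const app = express();

// Placeholder for whatever actually produces the report.
declare function loadReport(id: string): Promise<unknown>;

app.get("/reports/:id", async (req, res) => {
  const body = JSON.stringify(await loadReport(req.params.id));
  const etag = createHash("sha1").update(body).digest("hex");

  res.setHeader("ETag", etag);
  res.setHeader("Cache-Control", "public, max-age=600"); // ~10 minutes, matching how often reports change

  if (req.headers["if-none-match"] === etag) {
    res.status(304).end(); // client/proxy copy is still valid, skip the body
    return;
  }
  res.type("application/json").send(body);
});

app.listen(3000);
```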
Also, if you are finding your database to be a bottleneck, you might be able to get away with just configuring it to cache more aggressively.
Assuming you do want to cache in memory ...
For server-side caching, the normal approach is to cache results for some time period or manually clear them from the cache. But a more modern and, in my opinion, better approach is Russian-doll caching, where you key items according to the time their inputs changed. Then you never need to worry about manually clearing them; you just make sure the timestamps are correct and synchronised.
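A quick sketch of what that keying looks like in practice; the `updatedAt` field is an assumption about what the upstream report exposes:

```typescript
// The cache key embeds the source's last-modified time, so a newer report
// simply produces a new key; stale entries are never read again and can be
// evicted by the cache (e.g. Memcached LRU) without manual invalidation.
function cacheKeyFor(reportId: string, updatedAt: Date): string {
  return `report:${reportId}:${updatedAt.getTime()}`;
}

// usage: cache.set(cacheKeyFor("monthly-sales", report.updatedAt), report);
```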
Memcached versus Redis versus something else? For this usage, Memcached is probably best: it's extremely simple, and you don't need persistence, which is the big advantage Redis has over Memcached. Redis is well engineered and would work fine too, but I don't see the benefit of using something considerably more feature-rich and complex if you don't need it and there's a good alternative. That said, the one big advantage of Redis is that it now has excellent built-in clustering support, so it's easy to scale and stay online. But that would be overkill for your use case.
Something else? There are plenty of other in-memory databases, but I think Memcached and Redis are probably best if you want to avoid the problems of relying on cutting-edge frameworks without too much support. However, there is something else: boring old files. If you're generating reports, you might want to consider just generating them as temporary files. If your OS is doing its job, the files will end up being cached anyway.

Creating a Secure iPhone Web Data Source

I've searched the web for this but to no avail - I hope someone can point me in the right direction. I'm happy to look things up, but it's knowing where to start.
I am creating an iPhone app which takes content updates from a webserver and will also push feedback there. Whilst the content is obviously available via the app, I don't want the source address to be discovered and published by some unhelpful person so that it all becomes freely available.
I'm therefore looking at placing it in a MySQL database and possibly writing some PHP routines to provide access to it via my http(s) requests. That's all pretty new to me, but I can probably do it. However, I'm not sure where to start with the security question. Something simple and straightforward would be great. Also, any guidance on whether to stick with the XML parser I currently have or to switch to JSON would be much appreciated.
The content consists of straightforward data but also html and images.
Doing exactly what you want (preventing users of 'unauthorized' apps from getting access to this data) is rather difficult because, at the end of the day, any access codes and/or URLs will be stored in your app for someone to dig up and exploit.
If you can, consider authenticating the USER, not the app, so that even if a third-party app is created that can access this data from wherever you store it, you can still disable access on a per-user basis.
Like everything in the field of information security, you have to consider the cost-benefit. You need to weigh up the value of your data against the cost of your security, both in terms of actual development cost and the cost of protecting it, as well as the cost of inconveniencing users to the point that you can't sell your data at all.
Good luck!