I've recently been looking at how I could speed up page loading on my website, specifically by reducing the response time between my server and the CDNs I use (FontAwesome, jQuery, BootstrapCDN, and CloudFlare), since I figured it was highly dependent on the traffic those big CDNs handle. I thought that if I built my own CDN (via a subdomain on my server), the traffic would be much smaller and hence more fluid. However, since I'm not an expert at all on the matter, I'd like to know whether I'm right about that, and whether it would be worth doing in terms of performance.
Thanks!
If you had to ask, then no.
The first strike is CloudFlare. By using CloudFlare, most of the cacheable traffic from your website right now should flow between the user's browser (which can be anywhere in the world) and the nearest CloudFlare endpoint. Unless you have mirrors all over the globe, CloudFlare will be faster than your own CDN.
By using BootstrapCDN (which also hosts FontAwesome) and the jQuery CDN, if the user's browser visited any other BootstrapCDN- or jQuery-CDN-powered site in the recent past, and assuming they use the same resources, there will be no re-downloading of those resources. This means using your own CDN will always add traffic.
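That shared-cache effect is also why the common pattern is to load such libraries from the public CDN and keep only a local copy as a fallback, rather than self-hosting by default. A minimal sketch (the local path is a hypothetical placeholder):

```javascript
// Placed right after the CDN <script> tag for jQuery: if the CDN was
// unreachable, window.jQuery is undefined, so load a local copy instead.
if (!window.jQuery) {
  document.write('<script src="/js/jquery.min.js"><\/script>');
}
```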
I'm new to server architecture and have been reading around a lot, but I don't yet have a solid opinion on whether the setup below is good practice. I was hoping someone with more experience could confirm whether I'm setting up my architecture correctly:
Use Angular Universal to Pre render html to CDN (e.g. Cloudflare)
Cloudinary for Image assets
One or a few strong machines with nginx handling the load and dispatching to the other servers listed below (all hosted on DigitalOcean):
Rest API (Express Server)
Database MongoDB
I'm really concerned about the speed of my REST API, as the regions offered by DigitalOcean seem significantly fewer than those of a CDN like Cloudflare. How much does this matter for my speed?
I know this might sound ridiculous, but the region issue makes me wonder if hosting an Express REST API server on a CDN would be better than a place like DigitalOcean. (My instincts tell me I shouldn't do this on a CDN, but I'm at a loss for reasons and hope someone can provide clear reasons why I should or shouldn't host an Express REST API server there.)
From my knowledge I would do this a little differently.
A CDN is used to serve content, hence the name CDN (Content Delivery Network). The CDN itself doesn't serve the content; it routes the user to a server which serves it. For example, say you have servers in the US, France, and Asia, and you request the website (with images hosted on those servers) from the UK. The CDN would direct you to the closest/best server for you; in this case, that would be the server in France.
So, to answer your question: it isn't a bad idea to host the RESTful API behind a CDN, but you would need multiple servers around the world (if you are going for worldwide coverage) and use the Cloudflare CDN to direct your traffic.
This is what I would do:
If you're not expecting loads of traffic (like millions of requests), just have 1-2 servers in each location: 1-2 in North America, South America, France (EU), Asia, and maybe Australia. This will give you decent coverage. Then, when you set up your CDN, it should handle who goes where. Using Node and nginx will help you a lot here; because they are pretty lightweight, you can get away with cheaper, less powerful servers.
Now, for your databases, you can do one of two things: have one dedicated solution somewhere central, such as France (EU), so that latency is reasonably low for all regions; or have multiple databases and have them sync. Having multiple databases which sync will be more work and will require quite a bit of research. Having the one server is a lot easier to manage.
The database will be your biggest decision: whether to go with one and deal with latency, or multiple and have to manage them and keep them in sync. Keep in mind you could use a cloud hosting platform to host your database, which would help with this issue, because a lot of platforms offer worldwide coverage as well as synchronised databases. You will, however, run into the cost issue when using cloud platforms.
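If you do choose the multi-database route with MongoDB, one common setup is a replica set spanning regions, with reads served from the nearest member. A hedged sketch using the Node.js driver (hostnames are hypothetical):

```javascript
const { MongoClient } = require('mongodb');

// Hypothetical replica set with one member per region.
const uri = 'mongodb://db-eu.example.com,db-us.example.com,db-ap.example.com/?replicaSet=rs0';

// 'nearest' lets each regional API server read from its lowest-latency
// member; writes still go to the single primary, which keeps sync simple
// but means write latency depends on where the primary lives.
const client = new MongoClient(uri, { readPreference: 'nearest' });
```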
Hope this answers your questions and provides you with the knowledge you need!
I recently read an article, "www. is not deprecated", which strongly advises against redirecting from www to no-www. I would like to know the main cons of such a redirection, and the main cons of redirecting from no-www to www. How would it impact site scalability, search engine visibility, problems with cookies, etc.?
I'm going to suggest something controversial. It doesn't matter. Use either domain.
There are legitimate issues with serving content from a single domain over HTTP 1.1. You have to do domain sharding in order to download content in parallel, and even then browsers only open a handful of connections at the same time, so that scaling is limited.
However, the issues sharding works around are gone with HTTP/2. With HTTP/2 you can parallelize assets natively over a single connection. https://http2.github.io/faq/
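For a concrete sense of this, Node's built-in http2 module multiplexes every request over one connection. A minimal sketch (certificate paths are placeholders; HTTP/2 in browsers requires TLS):

```javascript
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('server-key.pem'),   // placeholder paths
  cert: fs.readFileSync('server-cert.pem'),
});

// Every asset request arrives as a separate stream on the same TCP
// connection, so nothing needs to be sharded across hostnames.
server.on('stream', (stream, headers) => {
  stream.respond({ ':status': 200, 'content-type': 'text/plain' });
  stream.end('ok');
});

server.listen(8443);
```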
When you need to scale beyond a single server you'll be faced with other issues, but throwing more hardware at the problem will be the easiest solution. When your site becomes large enough, you'll want to use a Content Delivery Network, at which point scaling becomes a non-issue for the front end.
There are issues with cross-domain cookies, but if you do scale to such a large size that you need single sign-on, you won't be worried about subdomain cookies; you'll probably be looking at a single sign-on service such as Facebook, Google, or OpenID, or you'll roll your own SAML 2.0 solution. A CDN can also provide a way to do cross-domain cookies.
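For the simpler case of sharing a login across subdomains (www, m, and so on), scoping the cookie to the parent domain is usually enough. A sketch in Express (names and the token are placeholders):

```javascript
const express = require('express');
const app = express();

// Setting the Domain attribute to .example.com makes the cookie valid
// on www.example.com, m.example.com, and any other subdomain.
app.get('/login', (req, res) => {
  res.cookie('session', 'some-session-token', {
    domain: '.example.com',
    httpOnly: true,
    secure: true,
  });
  res.redirect('/');
});
```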
Someone else can speak with authority regarding SEO.
Build your site the way you find aesthetically pleasing, and deal with the scaling issues when you come to them.
Edit: I did think of one advantage of using www.example.com: you can CNAME www, whereas you cannot CNAME the bare example.com (a CNAME is not allowed at the zone apex, where the SOA and NS records must live).
Since the article covers the reasons for the www domain, I won't repeat those and will look at the other side instead:
It's mostly aesthetic - some people think a bare domain looks better.
The www isn't needed, and some think it is a relic of the past: who even differentiates between the World Wide Web and the Internet anymore? Certainly not your browser, which is more concerned with the protocol (http/https) than with three random letters tacked onto the beginning of a website domain.
And finally, it's extra typing for the user, or extra speaking: www is actually quite a mouthful when reading out a web address, and don't even come near me with the "dub dub dub" phrasing that some try to use to address this.
Personally I still think www wins it for me - mostly from recognition factor rather than from the technical issues raised in the article (though they help cement this opinion). In the same way that a .com or .country domain is more recognisable as a web address than some of the new TLDs.
Using a subdomain in your website address (of which www is the most recognisable) does have technical advantages, as raised in the article, some of which can be worked around; but other than those it's a personal preference, so I'm not sure SO is the best place for this since there is no "right" answer.
One thing is clear: you should pick one domain variant and stick with it. So redirect to your preferred version (with or without www), so that anyone who ends up on the wrong one is steered right. This just makes sense from a cleanliness point of view, and also from an SEO point of view, since search engines see the two domains as separate and you don't want the same content showing on both as duplicates. Along the same vein, it's best practice to have your webserver listen on both domains to do that redirect and, if using https, to make sure your certificate covers both domains.
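The redirect itself is a one-liner in most stacks. A minimal sketch as Express middleware (assuming www is the preferred variant; example.com is a placeholder):

```javascript
const express = require('express');
const app = express();

// Canonicalize the host: permanently redirect the bare domain to www,
// preserving the requested path and query string.
app.use((req, res, next) => {
  if (req.hostname === 'example.com') {
    return res.redirect(301, 'https://www.example.com' + req.originalUrl);
  }
  next();
});
```

A 301 (permanent) redirect is what tells search engines to consolidate the two hosts onto the preferred one.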
I was wondering if anyone had a solution (hopefully simple) for how to change the repository that a SAPUI5 app pulls from.
i.e. when I'm accessing my app (it might be hosted anywhere, but for argument's sake let's say on HCP in the EU) and I'm in the EU, it makes sense to use the EU repository:
https://sapui5.hana.ondemand.com/resources/sap-ui-cachebuster/sap-ui-core.js
When I'm in the US, however, I'm going to get much better performance if I use the US repository:
https://sapui5.us1.hana.ondemand.com/resources/sap-ui-cachebuster/sap-ui-core.js
But short of having a US app and an EU app, how can I achieve this? I don't want to pop up a request for the user to allow their browser to reveal where they are via the HTML geolocation capabilities (http://dev.w3.org/geo/api/spec-source.html), and it seems most solutions that map IP addresses to locations charge a fee (which I don't want to pay).
The standard way for this sort of thing on the web (afaik) would be just to use one address and have a CDN sort it out for you.
This doesn't seem to have happened for SAPUI5.
Anyone know why not? Or perhaps it has, and I just don't know about it, that would also be a very happily received answer.
Now, as of January 2015, there is such a CDN (with geo routing) implemented for OpenUI5 (or more specifically, for everything below the URL https://openui5.hana.ondemand.com).
It not only serves the data from the closest SAP data center (Germany, USA, Australia), but also uses the popular Akamai CDN technology on top, which provides thousands of servers around the world.
See http://openui5.tumblr.com/post/108835000027/openui5-in-your-neighborhood-a-true-cdn-has-gone for more details.
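For completeness, here is a hedged sketch of what bootstrapping against that geo-routed URL can look like; it simply injects the standard UI5 bootstrap script dynamically, and the theme and library values are illustrative assumptions, not prescribed by the post:

```javascript
// Inject the UI5 bootstrap pointing at the geo-routed CDN. Akamai picks
// the closest edge, so the page itself needs no region logic at all.
var script = document.createElement('script');
script.id = 'sap-ui-bootstrap';
script.src = 'https://openui5.hana.ondemand.com/resources/sap-ui-core.js';
script.setAttribute('data-sap-ui-theme', 'sap_bluecrystal'); // assumed theme
script.setAttribute('data-sap-ui-libs', 'sap.m');            // assumed libs
document.head.appendChild(script);
```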
(Original answer, predating the CDN described above:) There is no such CDN with automatic routing to the closest server, sorry.
Reasons? Lack of time, money, demand...
There may even be free offerings for open-source libs, but UI5 as a whole is larger than your typical JS lib, so I'm not sure they would want it. And in older IE versions, cross-domain loading didn't work anyway due to missing CORS support, hence a local deployment was preferred. And custom-tailored minimized runtimes for apps are best for good performance, which is also not something a CDN can deliver. So currently there is no such thing, even though it would obviously be good to have.
UI5 will load awesomely fast if it is part of a real app. A real app means an installable app from an app store, where the UI5 library is part of the app itself and not loaded from a server. That is the real destiny of UI5, not putting it on a Gateway/server (the Fiori way, although there is the Fiori Client, which tries to solve this).
I understand that SAP wants SAPUI5 on the backend because of integration into the SAP software lifecycle management. But that is bought with bad performance and caching issues; a very high price in my opinion! Luckily, OpenUI5 is free to be part of real apps.
I'm thinking about doing a mobile version of our website. Some people say it's a good idea to let mobile websites have their own domain name (i.e. m.domainname.com), as opposed to having the same app handle both mobile and desktop requests. What are some pros and cons of these two approaches?
My technology stack is ASP.NET MVC2 + MySQL.
This is more a strategic issue for your business. A lot of the larger vendors seem to use a separate subdomain because it allows the end browser to be sure it is viewing the correct version of the site.
So for example, if I am using my smartphone to view a site, sometimes I will be redirected to the subdomain because there is code that determines, through the session, exactly what browser (and version) I am running; the redirect then sends me to the mobile site. A problem arises in situations the code wasn't written to deal with. If I connected with a bespoke browser, how would the site determine that I was on a smartphone? Sure, there is additional metadata that can be gathered, but what happens if my bespoke browser purposefully conceals that information (perhaps because it is not designed to view general web pages)?
The subdomain prefix gives the consumer a choice. They can view the normal site on their smartphone and risk that the web pages may render incorrectly, etc. Or alternatively, they can enter the subdomain and view the site using the correct CSS for a smaller screen, alternatives to Flash, and the other adaptations that smartphones require to view a site correctly.
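Detection-plus-redirect like the above usually amounts to a check on the User-Agent header. A minimal sketch as Express middleware (this question's stack is ASP.NET MVC2, but the idea is identical; the hostnames and the regex are illustrative, not a robust detector):

```javascript
const express = require('express');
const app = express();

// Hedged sketch: redirect browsers that look mobile to the m. subdomain,
// but let users opt out with ?full=1 so the choice stays with them.
app.use((req, res, next) => {
  const ua = req.get('User-Agent') || '';
  const looksMobile = /Mobi|Android|iPhone|iPad/i.test(ua);
  if (looksMobile && req.hostname === 'www.example.com' && !('full' in req.query)) {
    return res.redirect(302, 'https://m.example.com' + req.originalUrl);
  }
  next();
});
```

Note how a bespoke browser that hides its User-Agent simply falls through the regex, which is exactly the failure mode described above.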
If you want to play it safe, use the subdomain approach like we do. The big companies all seem to adopt this approach, so why go against the grain? Remember: 99.99% of development is just doing something someone else has done before you (more or less), so learn from their mistakes.
What is the purpose of CDN service providers?
My guess is that large-scale sites like Facebook, Wikipedia, YouTube, etc. use CDN service providers for some kind of outsourcing.
My understanding: YouTube keeps its content in these CDNs, and the site itself focuses on algorithms such as searching for videos, suggesting related videos, keeping users' subscriber lists/playlists, etc.
The YouTube site only keeps metadata and indexes (or maybe it also contains one copy of its entire content?). The user connects to the YouTube site and searches for a video. The site finds the file name and sends it to the CDN hub along with the IP address of the user.
The CDN hub then perhaps locates the CDN node closest to the user and serves the content to the user.
What is the advantage of this approach?
The most important one I can see is that, especially for videos, streaming from within the same country is perhaps remarkably faster than streaming from across the globe.
But does distance really matter that much? Are there any concrete numbers that give a sense of the speed difference between getting videos from across the globe versus from the same country?
Also, Google doesn't want to install its storage nodes all over the world. It would rather outsource this to CDN service providers, which have already spread their nodes all over the world, while Google focuses only on the algorithms part (which it mostly keeps secret).
Is my understanding of the picture correct? Any input/pointers would be highly useful.
Thanks,
I learned about the importance of CDNs in terms of website performance a couple of years back, thanks to Yahoo's "Best Practices for Speeding Up Your Web Site".
This is oft-referenced in YSlow, and Yahoo estimated a 20% speed increase.
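To put rough numbers on the question's "does distance really matter" point: even ignoring server load, physics sets a floor, since light in optical fiber covers roughly 200 km per millisecond. A back-of-envelope sketch:

```javascript
// Minimum round-trip time imposed by distance alone (no queuing, no
// server processing). Light in optical fiber travels roughly 200 km/ms.
const fiberKmPerMs = 200;
const rttMs = (kmOneWay) => (2 * kmOneWay) / fiberKmPerMs;

console.log(rttMs(12000)); // cross-globe: ~120 ms per round trip
console.log(rttMs(1000));  // nearby CDN node: ~10 ms per round trip
```

Since a page fetch involves several round trips (TCP handshake, TLS, then the requests themselves), that per-round-trip gap multiplies, which is exactly where CDN edge nodes win.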
Another "benefit" is parallel downloading, which is discussed at length by one of the above authors in this blog post.
These are some resources that I ran into when looking into site optimization so I just thought I'd share. Besides that, you seem to have a good grasp on the concept.