What considerations would one take when designing a scalable web architecture?

I know that one has to look at the database queries, because when the database is small, queries are not a problem, but as it grows they can slow down the site.

Here are some things I would consider...

Handling data:
- Return as small a dataset as possible. This includes row counts and columns. For example, no "select * from table..."
- Data caching strategies

Web:
- File caching... i.e. HTML, images, JavaScript...
- Put JS at the bottom, after the DOM loads. This increases page load speed.
- Put CSS at the top.
- Session state - be careful with its usage...
- Use CDNs when possible, with a local fallback...
- Minimize postbacks / HTTP requests
- GZip/compress your HTTP responses
- Compress/minify your JS & CSS

Hardware setup:
- Load balancing, proxies...
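To make the "CDNs with a local fallback" point concrete, here is a minimal sketch in plain JavaScript. The CDN URL, the local path, and the window.jQuery check are placeholder assumptions for whatever library you actually use:

    // Load a library from a CDN, falling back to a local copy if the CDN
    // request fails or the expected global never appears.
    (function () {
      function loadLocal() {
        var local = document.createElement('script');
        local.src = '/js/jquery.min.js';                          // assumed local fallback path
        document.head.appendChild(local);
      }

      var cdn = document.createElement('script');
      cdn.src = 'https://code.jquery.com/jquery-3.7.1.min.js';    // assumed CDN URL
      cdn.onerror = loadLocal;                                    // CDN unreachable
      cdn.onload = function () {
        if (!window.jQuery) loadLocal();                          // CDN responded, library missing
      };
      document.head.appendChild(cdn);
    })();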

Related

ag-grid with virtual scrolling without lazy load

I have a requirement where we need to show around 24k records with 84 columns in one go, as the user wants filtering across the entire set of data.
So can we have a virtual scrolling mechanism with ag-grid without lazy loading? If so, could you please explain how? Any examples are most welcome for reference.
Having tried this sort of thing with a similar number of rows and columns, I've found that it's just about impossible to get reasonable performance, especially if you are using things like "framework" renderers. And if you enable grouping, you're going to have a bad time.
What my team has done to enable filtering and sorting across an entire large dataset includes:
We used the client-side row model - the grid's simplest mode
We only load a "page" of data at a time. This involves trial and error with a reasonable sample of data and the actual features that you are using to arrive at the maximum page size that still allows the grid to perform well with respect to scrolling / rendering.
We implemented our own paging. This includes display of a paging control, and fetching the next/previous page from the server. This obviously requires server-side support. From an ag-grid point of view, it is only ever managing one page of data. Each page gets completely replaced with the next page via round-trip to the server.
We implemented sorting and filtering on the server side. When the user sorts or filters, we catch the event, send the sort/filter parameters to the server, and get back a new page (see the sketch after this answer). When this happens, we revert to page 0 (or page 1 in user parlance).
This fits in nicely with support for non-grid filters that we have elsewhere in the page (in our case, a toolbar above the grid).
We only enable grouping when there is a single page of data, and encourage our users to filter their data to get down to one page of data so that they can group it. Depending on the data, page size might be as high as 1,000 rows. Again, you have to arrive at page size on a case-by-case basis.
So, in short, when we have the need to support filtering/sorting over a large dataset, we do all of the performance-intensive bits on the server side.
I'm sure that others will argue that ag-grid has a lot of advanced features that I'm suggesting you not use. They would be correct for small-to-medium-sized datasets, but when it comes to handling large datasets, I've found that ag-grid just can't handle them with reasonable performance.
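As a rough sketch of the approach above, assuming an older ag-grid API where the grid options expose onSortChanged/onFilterChanged and the grid api provides getSortModel(), getFilterModel() and setRowData(); the /api/rows endpoint, the page size, and the paging control are hypothetical:

    // Client-side row model, but sorting/filtering/paging handled by the server.
    var currentPage = 0;
    var PAGE_SIZE = 500;   // arrived at by trial and error for your data and features

    function loadPage(gridApi) {
      var request = {
        page: currentPage,
        pageSize: PAGE_SIZE,
        sortModel: gridApi.getSortModel(),      // e.g. [{ colId: 'name', sort: 'asc' }]
        filterModel: gridApi.getFilterModel()   // per-column filter state
      };
      fetch('/api/rows', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(request)
      })
        .then(function (res) { return res.json(); })
        .then(function (rows) {
          gridApi.setRowData(rows);             // the grid only ever holds one page
        });
    }

    var gridOptions = {
      columnDefs: columnDefs,                   // defined elsewhere
      rowData: [],
      onSortChanged: function (e) { currentPage = 0; loadPage(e.api); },
      onFilterChanged: function (e) { currentPage = 0; loadPage(e.api); }
    };

    // Your own paging control would call something like:
    function nextPage(gridApi) { currentPage += 1; loadPage(gridApi); }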

WordPress in waiting state

I built a website for someone and I used https://gtmetrix.com to get some analytics, mainly because the wait time is huge (~20 sec) without having any heavy images. Please find attached a screenshot here:
http://img42.com/05yvZ
One of my problems is that it takes quite a long time to perform the 301 redirect. I'm not sure why, but if someone has a key to the solution I would really appreciate it. At least some hints on what to search for would be nice.
The second problem is that after the redirection, the waiting time is still huge. As expected, I have a few plugins; their JavaScript files are called approx. 6 seconds after the redirection. Would someone please point me toward where to search?
P.S. I have disabled all plugins and started from a naked plain Twenty Eleven theme, but I still have waiting times during redirection and smaller delay after redirection.
Thanks in advance
A few suggestions, though:
1 and 2.) If the redirect is adding noticeable delays, test different redirect methods. There are several approaches, including HTML meta and server-side (i.e. PHP) methods; I typically stick to server side. If a server-side method is still showing noticeable delays, that's a strong indicator that you're experiencing server issues - it may very well be your server causing your speed problems all along; contact your hosting provider.
3.) Take a look at the size of your media: images and video, plus Flash if you're using any. Often it's giant images that were sliced or saved poorly and not optimized for the web in image-editing software like Photoshop. Optimize your images for the web and re-save them at a lower weight to save significantly on load time. Also, in many cases nowadays you can avoid using clunky images altogether by building the area out in pure CSS3 (e.g. odd repeatable .gifs used to create gradients or borders, etc.).
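For context on what "server side" means here, below is a tiny, hypothetical illustration of a 301 at the HTTP layer. It is written in Node purely to show the shape of the response (with WordPress you would normally do the equivalent in .htaccess or with PHP's header()); the URLs and port are made up:

    // Hypothetical server-side 301: a single response carrying a Location header,
    // instead of serving a page that then navigates away via an HTML meta refresh.
    const http = require('http');

    http.createServer((req, res) => {
      if (req.url === '/old-page') {
        res.writeHead(301, { Location: 'https://example.com/new-page' });
        res.end();
        return;
      }
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('ok');
    }).listen(3000);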

How big can I make the DOM tree without degrading performance?

I'm making a single page application, and one approach I am considering is keeping all of the templates as part of the single-page DOM tree (basically compiling server-side and sending in one page). I don't expect each tree to be very complicated.
Given this, what's the maximum number of nodes on the tree before a user on a mediocre computer/browser begins to see performance degradation? For example, 6 views stored as 6 hidden nodes each with 100 subnodes of little HTML bits.
Thanks for the input!
The short of it is, you're going to hit a bandwidth bottleneck before you'd ever hit a DOM size bottleneck.
Well, I don't have any mediocre machines lying around. The only way to find out something like that is to test it. It will be different for every browser, every CPU.
Is your application JavaScript?
If yes, you should consider only loading in the templates you need using XHR, as you're going to be more concerned with load time on mobile than with performance on a crappy HP from 10 years ago.
I mean, what you describe should be technically reasonable for any machine of this decade, but you should not load that much junk up front.
A single page application doesn't necessitate bringing all the templates down at once. For example, your single page can have one or more content divs which are replaced at will dynamically. If you're thinking about something like running JSON objects through a template to generate the HTML, the template could remain in the browser cache, the JSON itself stays in memory, and you can regenerate the HTML at will and avoid the DOM size issue.
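A minimal sketch of that idea in plain JavaScript: fetch a template only when its view is first needed, keep the data as JSON in memory, and regenerate a single content div. The /templates/ endpoint, the content element id, and the naive render() helper are all made-up placeholders:

    var templates = {};   // in-memory cache of fetched templates

    function getTemplate(name) {
      if (templates[name]) return Promise.resolve(templates[name]);
      return fetch('/templates/' + name + '.html')        // hypothetical endpoint
        .then(function (res) { return res.text(); })
        .then(function (html) { templates[name] = html; return html; });
    }

    // Naive {{field}} substitution, just to illustrate; in practice you would
    // use whatever templating library you already have.
    function render(template, data) {
      return template.replace(/\{\{(\w+)\}\}/g, function (_, key) {
        return data[key] == null ? '' : String(data[key]);
      });
    }

    function showView(name, data) {
      getTemplate(name).then(function (template) {
        // Only one view's worth of nodes is in the DOM at any time.
        document.getElementById('content').innerHTML = render(template, data);
      });
    }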

Incrementing a Page View Count with Varnish and ESI

If I am using Varnish to cache entire documents, by what mechanism would you advise I increment a page view count as well?
For example, let's suppose that I have an auction listing, such as one on eBay, and I would like to cache the entire page since I know it is never going to change.
How would you then increase the page view count for this listing?
Let's say that my application is running on Zend Framework.
Would it be correct to make an ESI (Edge Side Include) call to a node.js server which increments a page view count in Redis?
I'm looking for something that will be 100% supported and will yield accurate page view request numbers. (I'm not concerned about duplicate requests either; I'll handle that in my application logic to prevent one IP from nuking the page view count.)
I would separate your statistics logic from your application. Use a small piece of javascript that requests a resource with a unique timestamp (e.g. an image like /statistics?pageId=3&ts=234234249). You can cache your complete page (no need to bother with ESI) and have the statistics handled by a fast (multiplexing) server like node.js, netty, tornado.
If you need the pageCount in your page, request a small piece of javascript/json data instead of an image and update the DOM in javascript.
This way, you can log better statistics (e.g. dimensions of the page), you minimize traffic and keep statistics a separate concern.
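A minimal sketch of that client-side piece; /statistics and /statistics/count are hypothetical endpoints, the view-count element id and the {count: N} response shape are assumptions, and the pageId would come from your page:

    // Fire-and-forget hit counter: request a tiny resource with a unique
    // timestamp so neither Varnish nor the browser serves it from cache.
    function countPageView(pageId) {
      var beacon = new Image();
      beacon.src = '/statistics?pageId=' + pageId + '&ts=' + Date.now();
    }

    // If the page needs to display the count, fetch a small piece of JSON
    // from the statistics server instead and update the DOM with it.
    function showPageViews(pageId) {
      fetch('/statistics/count?pageId=' + pageId + '&ts=' + Date.now())
        .then(function (res) { return res.json(); })
        .then(function (data) {
          document.getElementById('view-count').textContent = data.count;  // assumed {count: N} response
        });
    }

    countPageView(3);
    showPageViews(3);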

How to prevent Crystal webserver refetching data on each page

We're using Crystal 11 through their web server. When we run a report, it runs the SQL query and displays the first page of the report in the Crystal web report viewer.
When you hit the next-page button, it reruns the SQL query and displays the next page.
How do we get the requerying of the data to stop?
We also have multiple people running the same reports at the same time (it is a web server, after all), and we don't want to cache data between different instances of the same report; we only want to cache the data within each single instance of the report.
The reason to have pagination is not only a presentation concern. With pagination, the single most important advantage is lazy loading of data, so that in theory, depending on the given filters, you load only what you need.
Just imagine if you have millions of records in your DB and you load all of them. First of all, it's going to be a hell of a lot slower; second, you're fetching a lot of stuff you don't really need. All the web models nowadays are based on lazy loading rather than bulk loading. Think about Google App Engine: you can't retrieve more than 1000 records in a given transaction from the Google Datastore, and you know that if you even try to display them all, your browser will die.
I'll close with a question: do you have a performance issue of any kind?
If so, you probably think this change will make things better, but that's probably not the case: you'll reduce the number of queries hitting the server, but each single query will be much more resource-consuming.
If not, my advice is to leave it alone! :)