I am currently working on charts in ExtJS, and it is taking a long time to display the graphs in the browser, because the charts alone send a large number of requests to the server.
How can I reduce the number of requests and improve the performance of my graphs in ExtJS?
Can you help me with this?
Thanks in advance.
Have you considered using Ext Direct for its method batching support?
The idea is that you should receive data in a few large requests rather than many smaller ones. Each browser has a maximum number of concurrent outgoing requests per host (I think 6 is the default now), so if you are making more requests than that, they will queue client-side.
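As a sketch of the idea (not Ext Direct's actual API — the function and parameter names here are made up for illustration), a client-side batcher can queue individual calls for a few milliseconds and then flush them to the server in a single combined request:

```javascript
// A minimal request batcher: individual calls are queued for a short
// window, then flushed as one combined request. `transport` stands in
// for the single POST that would carry all queued calls; Ext Direct's
// RemotingProvider does something similar with its request buffering.
function createBatcher(transport, delayMs) {
  let queue = [];
  let timer = null;

  function flush() {
    const batch = queue;
    queue = [];
    timer = null;
    transport(batch); // one round trip for the whole batch
  }

  return function call(method, params) {
    queue.push({ method, params });
    if (!timer) {
      timer = setTimeout(flush, delayMs);
    }
  };
}
```

With a buffer window of even a few milliseconds, ten chart stores asking for data at page load collapse into one HTTP request instead of ten.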
For a desktop app (ERP-like functionality) I'm wondering what would be wiser to do.
Assuming that both machines are equal in performance and the server has to deal with at most 5-10 clients and no other obligations: is it better to load all data initially (~20,000 objects) and do filtering, sorting etc. on the client (Electron), or is it better to do the processing on the backend (Golang + Postgres) via Axios? The user interface should be as snappy as possible but should also get the data as fast as possible.
A costly operation is filtering 15,000 objects by a reference ID (e.g. a client can have several orders).
So objects that belong to a "parent object" are displayed by querying all of those objects by a parentID.
Is there a general answer as to which would be more performant, or a better choice here? Making some assumptions, like a latency of 5 ms on the network + 20 ms for the API + a couple more for filling the store.
At which data size will this operation become slower on the frontend, or completely unsustainable?
If it's not a performance problem, are there other reasons I would want to do this on the server?
Edit: Client and Server are on the same local network
You specifically mention an ERP-like software. For such software you have to carefully consider the value of consistency:
Will your software need to show the same data for all clients?
If the answer to this is yes, then the simplest implementation is to do data processing on the server which informs all clients of changing data.
If the answer to this is no, then you should be fine doing most processing on the client software.
There are of course ways to do most of your processing on the client yet still have consistency, but they will add complexity to your overall design. One implementation is to broadcast changes made on one client to all other clients. This is the architecture behind most multiplayer online games.
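The broadcast approach can be sketched as a tiny hub that forwards a change from one client to all the others (class and method names here are illustrative, not any real library's API):

```javascript
// A hub keeps the set of connected clients and forwards any change
// made by one client to everyone else, so all clients converge on
// the same state without re-querying the server.
class Hub {
  constructor() {
    this.clients = new Map(); // client id -> callback receiving updates
  }
  connect(id, onUpdate) {
    this.clients.set(id, onUpdate);
  }
  broadcast(senderId, change) {
    for (const [id, onUpdate] of this.clients) {
      if (id !== senderId) onUpdate(change); // everyone but the sender
    }
  }
}
```

In a real app the hub would live on the server (e.g. over WebSockets) so that late-joining clients can also be brought up to date.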
Another way to tackle this is implemented by git: the data on all clients are different from each other but there are ways to synchronize each client data with the server thus achieving eventual consistency.
Another consideration you have to think about is the size of your data:
Will downloading all the data from the server take more than a few seconds?
If downloading all data from the server takes too long then the UI will be essentially unresponsive when starting.
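On the client-side-processing branch, the costly parentID filter from the question can be made cheap by building an index once instead of scanning all 15,000-20,000 objects on every lookup. A minimal sketch, assuming each object carries a `parentId` field (a made-up field name for illustration):

```javascript
// Build a Map from parentID to its child objects once (O(n));
// each later lookup is then O(1) instead of filtering the whole
// array every time a parent record is opened.
function indexByParent(objects) {
  const byParent = new Map();
  for (const obj of objects) {
    if (!byParent.has(obj.parentId)) byParent.set(obj.parentId, []);
    byParent.get(obj.parentId).push(obj);
  }
  return byParent;
}
```

For a few tens of thousands of objects, a one-off O(n) pass like this is typically well under the 25 ms round-trip budget the question assumes for a server query.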
I'm confused about making batch calls when building a RESTful API.
For example, I want to check 100 students height. What's the difference among:
1) check height one by one
2) check height with a batch of 20
3) check height with a batch of 50
What's the benefit of it? I know batching will decrease the number of HTTP requests. Does this count for much when evaluating the speed of an API?
How to choose the batch size?
I think you could have found this with a bit of googling - but I'll answer here all the same.
The benefits of batching generally depend on what you do with it.
If you have a keep-alive connection, you avoid the overhead of a new handshake for every request, and you don't spend much time setting up subsequent packets along that connection. You can then pipeline requests to decrease your latency. However, HTTP/1.1 responses are still FIFO: they must come back in the order the requests were sent, so one slow response blocks everything queued behind it. This is where you can leverage the power of batching!
Because you can send/retrieve all the data you need in one go, you minimize the overhead of setting up each individual HTTP request. This does mean that you have to wait a bit longer for the combined response, since everything is packed into one request, but your throughput improves: the round trip from making the first request to receiving the final response is no longer multiplied by the number of requests you would otherwise make.
Something to keep in mind, however, is TTFB (time to first byte). If you load data progressively, it can be perceived as faster by the user. Imagine a website that loads a thousand resources one at a time, where you as a user can see them popping up, versus a website that loads all one thousand at once, where you just see a spinner until every resource has arrived. I'll bet you find the progressive loading scheme "looks" faster.
Naturally, whether batching is worthwhile very much depends on what you're doing with the requests.
Sometimes, you have to be careful with batching, as you can put a lot of load on a server if you have multiple users concurrently making batch requests, and it might be a better balancing scheme to process requests sequentially. Of course, you will be able to figure this out with a bit of monitoring and analytics.
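To make the trade-off in the question concrete: the batch size only changes how many round trips you pay for. A hypothetical `chunk()` helper (not part of any framework) splits the 100 student IDs into batches, where each batch would become one request:

```javascript
// Split a list of IDs into fixed-size batches; each batch becomes
// one HTTP request. 100 IDs one by one -> 100 requests; in batches
// of 20 -> 5 requests; in batches of 50 -> 2 requests.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const ids = Array.from({ length: 100 }, (_, i) => i + 1);
```

Larger batches mean fewer round trips (better throughput) but a longer wait before the first result and bigger per-request load on the server, which is exactly the balance the answer above describes.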
I intend to use sails.io to build a chat website. There will be around 1,000 users online at the same time. Is sails.io enough to handle that? And is there a way to test the performance of a chat website? For a normal website I know JMeter, but for a chat website I don't know anything at all.
That will depend mostly on the server you will be using for your service.
Sockets are essentially an array of connections; you can have as many as you want, within the normal memory limits of your server machine.
You can check out this answer for more information on socket costs: What's the maximum number of rooms socket.io can handle?
I am currently using sails.io for a chat product with 2000+ simultaneous users during business hours, and Sails' socket.io has been holding up pretty well. Nevertheless, I have it prepared for horizontal scaling for when maximum capacity starts to show symptoms. You should too.
I have a working HTTP RESTful API that receives an ID, then checks it against data in the database. Based on the status of the record and related records, it will either return state errors or, if everything is ready to begin, return some information about the records. It has some other functionality as well, but my issue is that the device we are using to collect this data does not have access to WiFi. We are planning to test a 2G cellular solution, but I know an HTTP request will be far too slow, if it even completes.
What lightweight protocol could my device use to send a 36-character UUID to a server and get a JSON response back? I have been exploring MQTT and CoAP, but I don't see much info on asking another device about a specific record ID; it's more like asking for a hardware device's status.
Furthermore, if there is a solution I can interface with my existing API, that would be ideal.
Thanks for any help.
I'm not sure why the 2G cellular solution won't play well with HTTP(S).
According to another SO answer, typical HTTP header sizes are:
"Request headers today vary in size from ~200 bytes to over 2KB. As applications use more cookies and user agents expand features, typical header sizes of 700-800 bytes are common."
And according to Wikipedia, 2G can get you up to 40 kbit/s. I'm not really sure what the issue is with using HTTP(S) for this scenario.
If you use something like UDP, it can be quicker and smaller; however, it's not as reliable as HTTP (which runs over TCP) because of possible packet loss. Not to mention you can also apply gzip or another form of compression to the HTTP request to make it even smaller.
minor update
If the data is not needed right away, you can do hourly or half-day batch uploads: store all the data in a local DB and, at certain time intervals, make one main HTTP request that is a bit bigger but carries all the data. I'm not fully sure what your requirements are, but HTTP should be fine for your case over 2G.
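The store-and-forward idea above can be sketched like this; `send` is an injected stand-in for the real HTTP call, and in practice the pending records would be persisted to the local DB rather than held only in memory:

```javascript
// Store-and-forward uploader: records are collected locally as they
// arrive and flushed as one larger HTTP request on a timer. A single
// bigger request amortizes the per-request header overhead over 2G.
function createUploader(send) {
  const pending = [];
  return {
    record(data) {
      pending.push(data); // in practice: also persist to the local DB
    },
    flush() {
      if (pending.length === 0) return 0;
      const batch = pending.splice(0, pending.length);
      send(batch); // one request carrying everything collected so far
      return batch.length;
    },
  };
}
```

The flush would be driven by `setInterval` (hourly, half-daily, etc.), and anything that fails to send can simply stay in the local DB for the next interval.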
We are trying to improve our Google speed test results, and I was told there are some mysterious "better quality" servers that will improve them. We are currently using DreamHost. Any tips on servers that increase website speed? Thank you.
For one thing, having a dedicated server with a fast connection is pretty important. If multiple services are running on the same machine (for example, multiple users sharing one server on shared hosting), performance will suffer. Below are some other factors to account for:
Minimize HTTP Requests
Reduce server response time
Enable compression
Enable browser caching
Minify Resources
Optimize images
Optimize CSS Delivery
Prioritize above-the-fold content
Reduce the number of plugins you use on your site
Reduce redirects
For more detailed suggestions, visit: https://blog.crazyegg.com/2013/12/11/speed-up-your-website/
When you refer to the Google speed test, it measures many things. I recommend you use a cache such as memcached or Redis, minify your static files, and try inlining your critical styles in the HTML in a style tag.