best server for google speed test results? - server

We are trying to improve our Google speed test results and I was told there are some mysterious "better quality" servers that will improve results. Currently using DreamHost. Any tips on servers that increase website speed? Thank you.

For one thing, a dedicated server with a fast network connection is pretty important. If multiple services are running on the same machine (as on shared hosting, where many users' sites share one server), performance will suffer. Beyond the server itself, below are some other factors to account for:
Minimize HTTP Requests
Reduce server response time
Enable compression
Enable browser caching (see the sketch after this list)
Minify Resources
Optimize images
Optimize CSS Delivery
Prioritize above-the-fold content
Reduce the number of plugins you use on your site
Reduce redirects
For more detailed suggestions, visit: https://blog.crazyegg.com/2013/12/11/speed-up-your-website/
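To make the "Enable browser caching" item above concrete, here is a minimal sketch of a servlet filter that adds a long-lived Cache-Control header to static assets. It assumes a Java servlet stack, which may not be what your host runs; the URL patterns and the 30-day max-age are just illustrative values to adapt.

```java
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletResponse;

// Adds a long-lived Cache-Control header to static assets so returning
// visitors load them from the browser cache instead of re-downloading them.
@WebFilter(urlPatterns = {"*.css", "*.js", "*.png", "*.jpg"})
public class BrowserCacheFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // 30 days; pick a value that matches how often your static files change.
        ((HttpServletResponse) res).setHeader("Cache-Control", "public, max-age=2592000");
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() { }
}
```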

When you refer to the Google speed test, it measures many things. I recommend you use a cache such as memcached or Redis, minify your static files, and try inlining your critical styles in the HTML in a <style> tag.
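As a rough illustration of the caching suggestion above, here is a sketch using the Jedis client to keep a rendered HTML fragment in Redis so repeat requests skip the expensive rendering work. The key name, the 5-minute TTL and the renderHomepage() helper are all invented for the example.

```java
import redis.clients.jedis.Jedis;

// Cache an expensive-to-build HTML fragment in Redis so repeat requests
// skip the database/templating work.
public class FragmentCache {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "fragment:homepage";
            String html = jedis.get(key);
            if (html == null) {
                html = renderHomepage();      // expensive work, done once
                jedis.setex(key, 300, html);  // keep it for 5 minutes
            }
            System.out.println(html);
        }
    }

    // Stand-in for whatever actually builds the page fragment.
    private static String renderHomepage() {
        return "<div>rendered homepage fragment</div>";
    }
}
```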

Related

Eclipse Milo - Performance/scalability when deploying an OPC UA server in the cloud

I have created an OPC UA server with Eclipse Milo that is installed on the same machine as the clients, so the communication is fast and reliable.
I did a bit of sniffing with Wireshark to see how much communication happens under the hood, and apparently there is a lot going on when monitoring variables, alarms, etc.
So I am wondering what issues to expect in terms of performance and scalability if the server gets deployed in the cloud. I have seen people talk about OPC UA cloud services, but since this is not a hot topic it is hard to foresee what challenges may come, and how well it scales and performs.
I would imagine that OPC UA uses sticky sessions, which means you can only support a maximum number of users/requests, so dynamic scaling may not be an option, right?
I tried the samples provided by Eclipse Milo, which are hosted somewhere on the network, and it took a long time to connect. If that is the performance one may expect, then the perception for non-technical users would be that the service does not work well.
Is the cloud the right place to use OPC UA considering the network overhead? Would you recommend sticking to local networks (intranet) only and skipping the cloud?
Any feedback would be appreciated, thanks!
If you wanted to get into more detail and share Wireshark captures we might be able to go over parameters that would reduce traffic.
If bandwidth is a concern because you're using cellular or other constrained connections then sure, OPC UA may not be the best fit.
I'm curious what kind of delays or latency you experienced running the examples - connecting over the internet generally does not take very long, so perhaps you were also measuring the time it took to compile and start the example or there was something going on with your network.
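If you do dig into the parameters, the biggest levers on the wire are usually the subscription publishing interval, the sampling interval, and the queue size. Below is a rough client-side sketch of how that might look, assuming a Milo 0.6-style client API (newer releases have moved some of these calls around); the endpoint URL and node id are placeholders.

```java
import java.util.List;

import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.sdk.client.api.subscriptions.UaSubscription;
import org.eclipse.milo.opcua.stack.core.AttributeId;
import org.eclipse.milo.opcua.stack.core.types.builtin.NodeId;
import org.eclipse.milo.opcua.stack.core.types.builtin.QualifiedName;
import org.eclipse.milo.opcua.stack.core.types.enumerated.MonitoringMode;
import org.eclipse.milo.opcua.stack.core.types.enumerated.TimestampsToReturn;
import org.eclipse.milo.opcua.stack.core.types.structured.MonitoredItemCreateRequest;
import org.eclipse.milo.opcua.stack.core.types.structured.MonitoringParameters;
import org.eclipse.milo.opcua.stack.core.types.structured.ReadValueId;

import static org.eclipse.milo.opcua.stack.core.types.builtin.unsigned.Unsigned.uint;

public class SlowSubscriptionSketch {
    public static void main(String[] args) throws Exception {
        OpcUaClient client = OpcUaClient.create("opc.tcp://cloud-host:4840");
        client.connect().get();

        // A 5 s publishing interval batches notifications instead of
        // streaming them, which cuts the number of packets on the wire.
        UaSubscription subscription =
            client.getSubscriptionManager().createSubscription(5000.0).get();

        ReadValueId readValueId = new ReadValueId(
            NodeId.parse("ns=2;s=MyVariable"),
            AttributeId.Value.uid(), null, QualifiedName.NULL_VALUE);

        // Sample at the same slow rate and keep a small queue; values that
        // change faster than this are simply not reported individually.
        MonitoringParameters parameters = new MonitoringParameters(
            uint(1),     // client handle
            5000.0,      // sampling interval (ms)
            null,        // no filter
            uint(10),    // queue size
            true);       // discard oldest

        MonitoredItemCreateRequest request = new MonitoredItemCreateRequest(
            readValueId, MonitoringMode.Reporting, parameters);

        subscription.createMonitoredItems(TimestampsToReturn.Both, List.of(request)).get();
    }
}
```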

How to reduce latency when indexing files using Lucene.net

Hi, we are planning to use Lucene.net as part of one of our products, which is mainly a content repository. For better performance, our system will use the Lucene query engine for the majority of content read operations. But one major setback we anticipate is latency, and we are unaware of a clustered or distributed Lucene implementation. What is the best way to reduce the latency?
Based on the comments I now understand that the latency concerns relate to IP packet latency of having a single index server that is on the other side of the planet from the user.
Lucene.net, like Lucene, is a library, not a platform. It provides blazingly fast indexing and querying, but does not provide higher-level platform features like a multi-server distributed index. Lucene does contain a lot of plumbing to support such use cases, and it is this plumbing that software products can build on to provide multi-server distributed indexes. The two most popular platforms built on Lucene that provide such features are Apache Solr and Elasticsearch. It's important to note, however, that they are built on top of the Java version of Lucene rather than Lucene.net. I am not aware of similar alternatives for Lucene.net, and building your own would be a major undertaking.
It is possible, however, that the IP packet latency you are concerned about may not turn out to be a large issue; let me explain. It's often possible to send an IP request to the other side of the planet in under 300ms, certainly under 500ms. When rendering a web page served from the other side of the planet, with lots of images served from there as well, this latency can add up quickly. At 300ms per request, if the page had 20 images on it plus a few CSS files and a few JS files all loaded from that server, rendering the page could require, say, 25 requests. At 300ms latency per request, such a page would incur 7500ms of latency, or 7.5 seconds. In reality some of the requests would be served in parallel, which reduces the effective latency, but on top of this latency the server has to do work to satisfy each request, and that takes time too. Anyway, this example is just for illustration, to agree that serving a web page from the other side of the world does raise IP latency concerns.
However, making a search request to Lucene from the other side of the planet does not have similar issues, because it need only be a single request. For example, if a person needs to issue a search to an API server on the other side of the planet running Lucene.net, the request (probably JSON) is a single request that bears the IP latency, say 300ms. That request is serviced by Lucene.net and the results are returned as a single response, and then the app or web page that made the request renders the results. So the total IP latency is just the latency of a single request, which for many use cases would be fine even across the globe.
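To ground the single-request point, here is a minimal sketch of what the API server would run when one search request arrives, written against the Java Lucene API (Lucene.net mirrors it closely). The index path, field names and document content are made up; the key idea is that indexing and searching happen next to the index, and the caller only pays one round trip.

```java
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class SingleIndexServer {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer();
        FSDirectory dir = FSDirectory.open(Paths.get("content-index"));

        // Index a document once, on the machine that owns the index.
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            Document doc = new Document();
            doc.add(new TextField("body", "sample content repository entry", Field.Store.YES));
            writer.addDocument(doc);
        }

        // A remote client sends one search request; the server runs the query
        // locally and returns the hits in a single response, so the IP latency
        // is paid exactly once per search.
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            Query query = new QueryParser("body", analyzer).parse("content");
            TopDocs hits = searcher.search(query, 10);
            System.out.println("matches: " + hits.totalHits);
        }
    }
}
```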

GWT application bandwidth congestion issue

I have been curious about this for a long time with GWT applications. Why is bandwidth consumption so high during the first run against the server, and why does it drop so much after that? Why does this happen? Please reply as soon as possible.
Because the whole application (loads of JS/images/CSS) is loaded at startup. Additional calls to fetch data are made via AJAX. Search the web for GWT bootstrapping to learn more. You can improve said bootstrapping using code splitting and client bundles. See the GWT documentation: http://code.google.com/webtoolkit/overview.html
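To show the code-splitting idea in practice, here is a small sketch: GWT.runAsync tells the compiler to put the wrapped code into a separate JS fragment that is only downloaded the first time it is needed, which shrinks the initial bootstrap payload. The ReportsLoader class and the "reports" feature are hypothetical; only the GWT calls are real API.

```java
import com.google.gwt.core.client.GWT;
import com.google.gwt.core.client.RunAsyncCallback;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Label;
import com.google.gwt.user.client.ui.RootPanel;

// Wraps a rarely used feature in GWT.runAsync so the compiler emits it as a
// separate fragment, keeping the startup download smaller.
public class ReportsLoader {

    public void openReports() {
        GWT.runAsync(new RunAsyncCallback() {
            @Override
            public void onSuccess() {
                // The fragment is now downloaded; build the heavy UI here.
                RootPanel.get().add(new Label("Reports feature loaded on demand"));
            }

            @Override
            public void onFailure(Throwable reason) {
                Window.alert("Failed to load the reports code fragment: " + reason);
            }
        });
    }
}
```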

How can I do stress testing for an iPhone app?

How can I do stress testing for an iPhone app?
I need stress testing, not performance testing; for example, 100 users accessing the app's database, which is on the server, at the same time.
Any help?
Thanks in advance.
First, you need to decide if you need to test the client-side (iPhone) app, the server-side code, or both.
Testing ONLY the server side might make this much easier - especially if the app uses HTTP to communicate with the server and exchanges data via a text-based format (XML, JSON, etc.). There are many web load testing tools available which can handle this scenario. Using our Load Tester product, for example, you would configure the proxy settings on your iPhone to point to our software running on a local machine. Then start a recording and use the application. Load Tester will record the messages exchanged with the server. You can then replay the scenario, en masse, to simulate many users hitting your server simultaneously. The process, at a high level, is the same with most of the web load testing tools.
Of course, the requests to the server can't be replayed exactly as recorded - they'll need to be customized to accurately simulate multiple users. How much customization is needed will depend on the kind of data being exchanged, the complexity of the scenario and the ability of the tool to automatically configure dynamic fields (and this is one area where the abilities of the tools vary greatly).
Hope that helps!
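If you want a do-it-yourself starting point instead of a commercial tool, the same idea can be sketched in a few lines of Java: replay one of the app's HTTP calls from many threads at once and watch how the server behaves. The endpoint URL and the 100-user count below are placeholders for your own backend.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Fires 100 concurrent requests at the app's backend and prints each status,
// as a crude stand-in for 100 simultaneous users.
public class SimpleStressTest {
    public static void main(String[] args) throws Exception {
        int users = 100;
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com/api/items")).GET().build();

        ExecutorService pool = Executors.newFixedThreadPool(users);
        CountDownLatch done = new CountDownLatch(users);
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                try {
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println("status: " + response.statusCode());
                } catch (Exception e) {
                    System.out.println("failed: " + e.getMessage());
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
    }
}
```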
A basic simulation would involve running your unit tests on OS X, using many simultaneous unit-test processes (with unique simulated users and other variables).
If you need more "stress", add machines - you'll likely hit I/O or network limits on a single machine relatively early on.

Horizontal scalability for distributed apps, how to achieve that?

I would like to disregard web applications here, because to scale them horizontally, i.e. to use multiple server instances together, it is "sufficient" to duplicate the server software across the machines and use a sort of router that forwards requests to the least busy server machine.
But what if my server application allows users to engage with each other in real time?
If the response to the request of a certain client X depends on the context of a client Y whose connection is managed by another machine then "inter machines" communication is needed.
I'd like to know the kind of "design solutions" that people have used in such cases.
For example, the people at Facebook must have already encountered such situation when enabling the chat feature of their social app.
Thank you in advance for any advice.
One solution to achieve that is to use distributed caches like memcached (Facebook also uses that approach).
Then all the information that is needed on all nodes is stored in that cache (and in a database if it needs to be permanent), so every node can access it (with very small latency between the nodes).
regards
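As a concrete sketch of that idea, here is how two nodes might share presence/chat state through memcached using the spymemcached client. The host, key names and values are invented for the example; the point is only that the state lives in the shared cache rather than in any one node's memory.

```java
import java.net.InetSocketAddress;

import net.spy.memcached.MemcachedClient;

// Each app node reads and writes shared state through memcached instead of
// keeping it in local memory, so any node can answer for any user.
public class SharedPresence {
    public static void main(String[] args) throws Exception {
        MemcachedClient cache = new MemcachedClient(new InetSocketAddress("cache-host", 11211));

        // Node A records that user Y is online and which chat room they are in
        // (expires after 60 seconds unless refreshed).
        cache.set("presence:userY", 60, "online:room42");

        // Node B, handling a request from user X, can see user Y's state
        // without talking to node A directly.
        Object state = cache.get("presence:userY");
        System.out.println("userY -> " + state);

        cache.shutdown();
    }
}
```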
You should consider solutions that provide transparent horizontal database scalability and guarantee ACID semantics. There are many solutions that offer this at various levels. The people at Facebook whom you reference have solved the problem by accepting eventual consistency, but your question leads me to believe that you can't accept eventual consistency.