Total network waiting time for all http requests - google-chrome-devtools

Not sure whether anybody has had a similar query here. Assume we run a performance trace on a single-page application that has completely loaded at a URL: how do we get the total time spent waiting on the network for all resources?

I've never heard of an existing tool that provides the exact metric you're looking for. But you could write a mini script that uses the Resource Timing API: it is easy to list all network requests and sum up the waiting times. See the Resource Timing API documentation for more info.
Then, if you need to automate the measurements, you can use Puppeteer to run your script in headless Chrome.
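For illustration, here is a minimal sketch of that whole approach. It uses Playwright's Python bindings as a stand-in for Puppeteer (same idea: drive a headless Chromium) and assumes "waiting time" means responseStart minus requestStart for each resource; adjust the timestamps if you want connection or download time included. Note that cross-origin resources report zeros unless they send a Timing-Allow-Origin header.

    # assumes: pip install playwright && playwright install chromium
    from playwright.sync_api import sync_playwright

    # Sum (responseStart - requestStart) over every resource the page loaded.
    SUM_WAIT_JS = """
    () => performance.getEntriesByType('resource')
            .reduce((sum, e) => sum + (e.responseStart - e.requestStart), 0)
    """

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        # example.com is a placeholder for your single-page application
        page.goto("https://example.com", wait_until="networkidle")
        total_wait_ms = page.evaluate(SUM_WAIT_JS)
        print(f"Total waiting time across all resources: {total_wait_ms:.0f} ms")
        browser.close()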

Related

What are the limitations of the flask built-in web server

I'm a newbie in web server administration. I've read multiple times that the Flask built-in web server is not designed for "production", and must be used only for tests and debugging...
But what if my app touches only a thousand users who occasionally send data to the server?
If it works, when will I have to bother with configuring a more sophisticated web server? (I am looking for approximate metrics.)
In a nutshell, I would love to find out what the built-in web server can do (with approximate thresholds) and what it cannot.
Thanks a lot!
There isn't one right answer to this question, but here are some things to keep in mind:
With the right amount of horizontal scaling, it is quite possible you could keep scaling out use of the debug server forever. When exactly you would need to start scaling (or switch to using a "real" web server) would also depend on the environment you are hosting in, the expectations of the users, etc.
The main issue you would probably run into is that the server is single-threaded: it handles each request one at a time, serially. That means if you are trying to serve more than one request (including favicons, static items like images, CSS, and JavaScript files, etc.), the requests will take longer. If any given request happens to take a long time (say, 20 seconds), then your entire application is unresponsive for that entire time. This is only the default, of course: you could bump the thread count (or have requests handled in other processes), as in the sketch below, which might alleviate some issues. But once again, it can still be slow under a "high" load, and what counts as "high" will depend on your application and the maximum response time you consider acceptable.
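For example, a minimal sketch (the dev server accepts these options, though all of the load caveats above still apply):

    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "hello"

    if __name__ == "__main__":
        # threaded=True handles each request in a new thread;
        # processes=N forks worker processes instead (not both at once).
        app.run(threaded=True)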
Another issue is security: if you are concerned at ALL about security (and not just the security of the data in the application itself, but the security of the box that will be running it as well) then you should not use the development server. It is not ready to withstand any sort of attack.
Finally, the development server could just fail outright. It is not designed to be used as a long-running process (days, weeks, months), and so it has not been well tested to work in this capacity.
So, yes, it has limitations. Yes, you could still conceivably use it in production. And yes, I would still recommend using a "real" web server. If you don't like the idea of installing something like Apache or Nginx, you can still have a solution as easy as "run a Python script" by using one of the standalone WSGI servers, which run a production-ready server with something as simple as python run_app.py on the command line. You typically just need a 4-5 line Python script that imports and creates the server object, points it at your Flask app, and runs it.
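For instance, with Waitress (one of those standalone WSGI servers), the whole script might look like the sketch below; "myproject" and its "app" object are stand-ins for your own package:

    # run_app.py -- assumes: pip install waitress
    from waitress import serve

    from myproject import app  # your Flask application object

    serve(app, host="0.0.0.0", port=8080)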
gunicorn could be run with only the following on the command line, no extra script needed:
gunicorn myproject:app
...where "myproject" is the Python package that contains the app Flask object. Keep in mind that one of developers of gunicorn would probably recommend against this approach. See https://serverfault.com/questions/331256/why-do-i-need-nginx-and-something-like-gunicorn.
The OP has long since moved on, but for those who encounter this question in the future, I would just add that setting up an Apache server, even on a laptop, is free and pretty easy. It can be readily configured for as few or as many features as you want just by uncommenting or commenting out lines in the config file. There might be an even easier GUI method for doing that nowadays, but just editing the configs is simple.

Can I send multiple requests at one time using Fiddler?

Using Fiddler, I want to send multiple requests in one go, to check the response time from the server when too many requests are sent at once. Basically, I want to perform a kind of load test on my service. Is there any way to perform this action? I want to repeat the process of hitting the server again and again.
In Fiddler, you can repeat a request as many times as you like by hitting SHIFT+R on the selected Web Session. You'll be prompted for a repeat count and then Fiddler will issue the specified number of requests.
Caveat: Having said that, generally speaking, you'd want to use a tool like Telerik Test Studio's Load Test tool for a task like this. Alternatively, you could use Fiddler's Export architecture to generate a script for VS WebTest or Microsoft's free WCAT tool and use those tools to generate the load. You can then run these scripts on multiple machines from multiple networks and generate a more realistic load than you could by simply running on a single client.
I've been load testing with StresStimulus today. Overall, I'm quite impressed.
It's now a standalone application (it used to be a Fiddler extension). There's a 7-day free trial which allows up to 50 virtual users. Also, the setup wizard is great for beginners.
For basic load testing the trial should be fine. Consider upgrading for extensive/professional use.

How should I benchmark a system to determine the overall best architecture choice?

This is a bit of an open ended question, but I'm looking for an open ended answer. I'm looking for a resource that can help explain how to benchmark different systems, but more importantly how to analyze the data and make intelligent choices based on the results.
In my specific case, I have a 4-server setup, including Mongo, that serves as the backend for an iOS game. All servers are running Ubuntu 11.10. I've read numerous articles that make suggestions like "if CPU utilization is high, make this change." As a newcomer to backend architecture, I have no concept of what "high CPU utilization" is.
I am using Mongo's monitoring service (MMS), and I am gathering some information about it, but I don't know how to make choices or identify bottlenecks. Other servers serve requests from the game client to mongo and back, but I'm not quite sure how I should be benchmarking or logging important information from them. I'm also using Amazon's EC2 to host all of my instances, which also provides some information.
So, some questions:
What statistics are important to log on a backend setup? (CPU, RAM, etc.)
What is a good way to monitor those statistics?
How do I analyze the statistics? (RAM usage is high, read requests are low, etc.)
What tips should I know before trying to create a stress-test or benchmarking script for my architecture?
Again, if there is a resource that answers many of these questions, I don't need an explanation here, I was just unable to find one on my own.
If more details regarding my setup are helpful, I can provide those as well.
Thanks!
I like to think of performance testing as a mini-project that is undertaken because there is a real-world need. Start with the problem to be solved: is the concern that users will have a poor gaming experience if the response time is too slow? Or is the concern that too much money will be spent on unnecessary server hardware?
In short, what is driving the need for the performance testing? This exercise is sometimes called "establishing the problem to be solved." It is about the goal to be achieved, because if there is no goal, why go through all the work of testing the performance? Establishing the problem to be solved will eventually drive what to measure and how to measure it.
After the problem is established, the next step is to write down what questions have to be answered to know when the goal is met. For example, if the goal is to ensure the response times are low enough to provide a good gaming experience, some questions that come to mind are:
What is the maximum response time before the gaming experience becomes unacceptably bad?
What is the maximum response time that is indistinguishable from zero? That is, if a 200 ms response time feels the same to a user as a 1 ms response time, then the lower bound for response time is 200 ms.
What client hardware must be considered? For example, if the game only runs on iOS 5 devices, then testing an original iPhone is not necessary because the original iPhone cannot run iOS 5.
These are just a few questions I came up with as examples. A full, thoughtful list might look a lot different.
After writing down the questions, the next step is to decide what metrics will provide answers to them. You have probably come across a lot of metrics already: response time, transactions per second, RAM usage, CPU utilization, and so on.
After choosing some appropriate metrics, write some test scenarios. These are the plain English descriptions of the tests. For example, a test scenario might involve simulating a certain number of games simultaneously with specific devices or specific versions of iOS for a particular combination of game settings on a particular level of the game.
Once the scenarios are written, consider writing the test scripts for whatever tool will simulate the server workloads. Then run the scripts to establish a baseline for the selected metrics.
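As one concrete illustration, a load-testing tool like Locust lets you express a scenario as a short Python script; the host and endpoint below are made-up stand-ins for your game's API:

    # locustfile.py -- assumes: pip install locust; run with: locust -f locustfile.py
    from locust import HttpUser, task, between

    class GamePlayer(HttpUser):
        host = "http://localhost:8000"  # hypothetical game backend
        wait_time = between(1, 5)       # each simulated player pauses 1-5 s between actions

        @task
        def fetch_game_state(self):
            self.client.get("/game/state")  # hypothetical endpoint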
After a baseline is established, change parameters and chart the results. For example, if one of the selected metrics is CPU utilization versus the number of TCP packets entering the server per second, make a graph to find out how utilization changes as packets/second goes from 0 to 10,000.
In general, observe what happens to performance as the independent variables of the experiment are adjusted. Use this hard data to answer the questions created earlier in the process.
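If you need the raw numbers for those charts, even a tiny sampler will do. The sketch below (psutil is an assumed dependency) logs CPU and RAM once per second to a CSV file for later graphing:

    # assumes: pip install psutil
    import csv
    import time

    import psutil

    with open("metrics.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_percent", "ram_percent"])
        for _ in range(300):  # five minutes of one-second samples
            writer.writerow([
                time.time(),
                psutil.cpu_percent(interval=1),  # blocks ~1 s while sampling
                psutil.virtual_memory().percent,
            ])
            f.flush()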
I did a Google search on "software performance testing methodology" and found a couple of good links:
Check out this white paper Performance Testing Methodology by Johann du Plessis
Have a look at the Methodology section of this Wikipedia article.

Best way to create REST API for long lasting tasks?

Suppose I have 2 servers.
The first is a service that provides some computations, which can take a long time (minutes to hours).
The second server will use this service to have some data computed.
I'm trying to design a REST API for the first server, and so far so good. But I'd like to hear some opinions on how to model notifications for when the long-lasting task is finished.
I considered 2 approaches so far:
Polling - the second server will ask every now and then about the result.
Callback - the second server will set up a URI for the first one to call when it is done. But this smells a bit for a REST API.
What do you think?
For your situation I would choose polling. When the second server makes the initial request to create the job on the first server, it should get back a response containing the URL of the eventual status page. The second server then polls that URL every 5-15 minutes to check the status of the job. If the first server makes that URL an RSS or Atom feed, then users could also point their RSS readers at the same URL and find out for themselves whether the job is done. It's a real win when both people and machines can get information out of a single source.
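A minimal sketch of that shape in Flask (the endpoint names and in-memory job store are illustrative; a real service would persist jobs and hand the computation to a worker):

    import uuid

    from flask import Flask, jsonify, url_for

    app = Flask(__name__)
    jobs = {}  # job_id -> {"status": ..., "result": ...}

    @app.route("/jobs", methods=["POST"])
    def create_job():
        job_id = str(uuid.uuid4())
        jobs[job_id] = {"status": "pending", "result": None}
        # ...hand the long computation off to a worker here...
        response = jsonify(job_id=job_id)
        response.status_code = 202  # Accepted: work continues in the background
        response.headers["Location"] = url_for("job_status", job_id=job_id)
        return response

    @app.route("/jobs/<job_id>")
    def job_status(job_id):
        # the second server polls this URL until the status becomes "done"
        return jsonify(jobs.get(job_id, {"status": "unknown"}))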
In addition to what I've already answered in this similar question, I'd suggest using the Atom Publishing Protocol for the notification (you could publish to your second server).
If you use Python, you can take advantage of RabbitMQ and Celery to do the job. Celery lets you drop an item onto a queue and carry on with whatever you're running it through (e.g. Django), then consume the output of the queue processor as it becomes available. No need for polling OR callbacks.
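A bare-bones sketch of that setup (the broker URL and the task body are assumptions; start a worker with celery -A tasks worker):

    # tasks.py -- assumes: pip install celery, plus a running RabbitMQ broker
    from celery import Celery

    app = Celery("tasks", broker="amqp://localhost", backend="rpc://")

    @app.task
    def long_computation(x, y):
        # stand-in for the minutes-to-hours job
        return x + y

    # Producer side: enqueue the job, pick up the result once it's ready.
    if __name__ == "__main__":
        result = long_computation.delay(2, 3)
        print(result.ready())          # False until a worker finishes
        print(result.get(timeout=60))  # blocks until the result arrives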

What is a good way to measure performance and usage on my webserver

We have an internal web server (intranet) and starting next week we will be under a heavy load. I need to monitor things like users online, page hits, wait time. I can't use outside sites to measure this. I need something that can be loaded internally.
Have a look at siege.
You can collect some URLs using a proxy tool, or import them from your Google Webmaster Tools sitemap extraction.
Then you can run some simulations, specifying clients, delays, etc.
You can also simulate different users if you have external proxies.
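To give a flavour, an invocation might look like the following (the numbers are illustrative; check the siege man page for your version's flags):

    siege -c 25 -d 5 -t 10M -f urls.txt

That simulates 25 concurrent clients, each pausing up to 5 seconds between requests, replaying the URLs in urls.txt for ten minutes.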