On my Surface RT/RT2, UpgradeToSslAsync is too slow on both Win8/Win8.1 via WiFi

I am working on a project for Surface RT/RT2 and Surface Pro 2. In my recent work, I found that UpgradeToSslAsync is too slow on Win8/Win8.1 on these devices. When I connect to one of our company's websites by its FQDN (it is accessible from both the intranet and the Internet) from within our intranet, with no DNS/proxy configured, the issue occurs with high probability.
Say the website is xxx.a.com over HTTPS. If I access it from IE, it takes about 20-40 seconds to succeed. My program, however, takes about 150 seconds.
My program follows the StreamSocket.UpgradeToSslAsync pattern to make the connection, in two steps: a) make a plain connection to xxx.a.com on the HTTPS port; then, after that succeeds, b) upgrade the connection to SSL.
Step a) is very fast, but the issue occurs at step b): more than 145 seconds pass before the completed event of UpgradeToSslAsync arrives.
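For reference, the two steps look roughly like this; a minimal sketch against the WinRT JavaScript projection of the StreamSocket API, with the placeholder host from above and error handling reduced to a log:

```typescript
// Minimal sketch of the connect-then-upgrade flow against
// Windows.Networking.Sockets, via the WinRT JavaScript projection.
// "xxx.a.com" is the placeholder host from the question.
declare const Windows: any; // WinRT namespaces are provided by the platform

const hostName = new Windows.Networking.HostName("xxx.a.com");
const socket = new Windows.Networking.Sockets.StreamSocket();

// Step a) plain TCP connection to the HTTPS port (fast in my tests).
socket.connectAsync(hostName, "443")
    .then(() => {
        // Step b) upgrade the existing connection to SSL/TLS; this is the
        // call that intermittently stalls for ~145 seconds.
        return socket.upgradeToSslAsync(
            Windows.Networking.Sockets.SocketProtectionLevel.ssl, hostName);
    })
    .then(
        () => console.log("TLS upgrade completed"),
        (err: any) => console.log("connect/upgrade failed: " + err));
```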
The issue doesn't occur every time; it may appear or disappear after I reinstall my program or reboot the machine.
I don't know why this happens. Could you please give me some suggestions? Thank you very much!

Related

Why is my server response time so high (First Byte Time)?

One of my websites has suddenly become extremely slow in server response - currently 41859 ms First Byte Time.
I've already talked to the hosting provider, but they didn't notice any changes or problems with the server, so I'm at a loss as to what the problem is. I've already tried plugins for caching and image optimization, but I know those won't help.
Does anyone know what the problem might be? In case it's malware, I've also run a security check, but nothing was found.
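One way to narrow this down is to time the first byte outside the browser, which rules out plugins and front-end weight; a minimal Node sketch (the URL is a placeholder):

```typescript
// Measure time-to-first-byte directly, outside the browser. URL is a placeholder.
import * as https from "https";

const t0 = Date.now();
https.get("https://www.example.com/", (res) => {
    res.once("data", () => {
        console.log(`TTFB: ${Date.now() - t0} ms (HTTP ${res.statusCode})`);
        res.destroy(); // only the first byte matters here
    });
}).on("error", (err) => console.error(`request failed: ${err.message}`));
```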
Any ideas?

Change value after period of time in Firebase

I have a lobby in which I want the users to be in sync, so when a user turns off his internet while the app is running, he should be removed. I know Firebase does not support server-side code, so this needs to happen client-side. The answers to How to delete firebase data after "n" days and Delete firebase data older than 2 hours do not answer this question, since they assume the user is online and has an internet connection. So my question is: is it possible to delete users when they have no internet? I thought it might be an idea to have each user update a value every 5 seconds, and when that update stops coming, the other users in the lobby remove the player. That approach is not good, though, since every player would need to download and upload a lot of data every 5 seconds. What is the best way to solve this?
Edit: to make it short, let's say each user has an image. The image should be green when the user is connected and grey when disconnected.
Edit 2: after thinking it over, it is really hard to accurately present the connected users from the client side alone. That is why, if nobody has a different solution, I should add another server that can execute server-side code. Because of the large number of server options, I would like to know which one I should use. The server should run a simple function that only checks whether users are connected or disconnected, and it should be able to communicate with Firebase. If I am correct, it should look like this: the users talk to Firebase, and the new server talks to Firebase as well.
But the server also needs to communicate with the users directly. I have absolutely no idea where to start.
If I'm not completely wrong, you should be able to use onDisconnect.
From the Firebase documentation:
How onDisconnect works:
When an onDisconnect() operation is established, it lives on the Firebase Realtime Database server. The server checks security to make sure the user can perform the write event requested, and informs the client if it is invalid. The server then monitors the connection. If at any point it times out, or is actively closed by the client, the server checks security a second time (to make sure the operation is still valid) and then invokes the event.
In an app in production I'm using onDisconnectRemoveValue, and when I close the app, the user removes himself from the lobby. I'm not sure how it behaves when you put the device in airplane mode, but from the documentation it seems there should be no problem.
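For the green/grey image from the question's edit, Firebase documents a presence pattern that combines the special .info/connected location with onDisconnect(); here is a sketch against the JavaScript Web SDK (the config, lobby path, and uid are placeholders):

```typescript
// Sketch of the documented Firebase presence pattern, using the JavaScript
// Web SDK (v8, namespaced API). Config, path, and uid are placeholders.
import firebase from "firebase/app";
import "firebase/database";

firebase.initializeApp({ databaseURL: "https://example.firebaseio.com" }); // placeholder

const uid = "user-123"; // hypothetical user id
const statusRef = firebase.database().ref(`lobby/users/${uid}/online`);

firebase.database().ref(".info/connected").on("value", (snap) => {
    if (snap.val() === true) {
        // Register the server-side cleanup first, then mark ourselves online.
        statusRef.onDisconnect().set(false)    // server flips the image to grey on drop
            .then(() => statusRef.set(true));  // green while connected
    }
});
```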
One thing: when you test it, better do it on a real device; the simulator has issues with turning the connection off and on, at least the one I have installed.
Edit: So I checked onDisconnect with the device in airplane mode, and it works! The catch is that it removes the user after roughly 1:30 min, so if you read the documentation or ask support, you may be (and only may be) able to find a way to set the timeout you want.

Concurrent Connection Test

So I ran into a network problem the other day, and I'm trying to find a way to test for it in the future.
I had a lot of users online at once and hit my router's max IP connection limit (not DHCP! TCP/UDP connections).
Once I figured out what the problem was, it was fairly simple to fix. However, I was wondering if there is any way to simulate this kind of activity? Everything worked fine when I tested it; it wasn't until I had 150+ users that I discovered I had a problem.
I have spent the last 3-4 hours looking for such a test/audit tool. Here is what I found:
http://bittwist.sourceforge.net/ - DDoS simulator (can't make it work properly; I barely get 300+ connections)
http://stevesouders.com/hpws/max-connections.php - browser concurrent-connection tester (this hits the browser limit (6 in Chrome) without making a dent on my router, even open in 70+ tabs at the same time)
http://www.smallnetbuilder.com/lanwan/lanwan-howto/31103-how-we-test-hardware-routers-revision-3 - a tool linked about halfway down the page (reads like it's exactly what I want, but it barely has a noticeable effect on my router)
http://www.http-kit.org/600k-concurrent-connection-http-kit.html - concurrent HTTP connection simulator (this one seems like it would do what I want, but my Linux-fu is limited and I can't get it working. *tear*)
So do you guys have a tool you test your routers with? I would love something that does both TCP and UDP.
(BTW, for anyone misunderstanding: I'm not trying to test "speed", just the sheer number of connections.)
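For what it's worth, something like the following Node sketch can approximate this kind of load: it opens many idle TCP connections and holds them, which is what fills a router's connection table. The host, port, and target count are assumptions; point it at a server you control that is reached through the router (a UDP variant would use the dgram module instead):

```typescript
// Quick-and-dirty connection-table stress sketch: open many idle TCP
// connections and hold them open. HOST/PORT/TARGET are assumptions.
import * as net from "net";

const HOST = "192.0.2.10"; // placeholder: a server you control, reached through the router
const PORT = 80;
const TARGET = 2000;       // simultaneous connections to attempt

const sockets: net.Socket[] = [];
let open = 0;

for (let i = 0; i < TARGET; i++) {
    const sock = net.connect(PORT, HOST, () => {
        open++;
        if (open % 100 === 0) console.log(`${open} connections open`);
    });
    sock.on("error", (err) => console.log(`connection ${i} failed: ${err.message}`));
    sockets.push(sock); // hold the reference so the connection stays open
}
```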
Thanks!
Kz

PDO queries very slow over high latency, high bandwidth connection

Running PostgreSQL 9.x (9.1-9.3).
I have a custom web app built using PHP's PDO library. Every query in our app uses prepared statements (via our internal PDO wrapper library).
Our production system uses AWS EC2 small instances for the web server and RDS for the database server.
For local development, my local machine serves as the web server, and an office machine running Mac OS X (Mavericks) serves the DB.
When I'm at work, I can typically ping my local office DB server and get 1-5 ms ping times. Everything performs well: page load times are very speedy, and my internal timer shows that PHP runs the page from start to end in about 12 ms.
The issue comes in when I take my work laptop home. From home, I get about 50-60 ms ping times to the office DB server, and if I run my development machine at home, pages now take 5-10 seconds to load, every time. Granted, there are 4 DB queries running per page load, but it's very, very little data. I've tried TCP_NODELAY settings, and I've tried loading pgbouncer on my local machine with persistent connections to the remote DB; nothing has helped so far.
When timing the queries, a simple query that returns 100 bytes of data runs in 0.0006 seconds locally but takes around 1 second remotely. It appears to be related to latency only: no matter how much data a query returns, it takes around 1 second longer than it would locally, give or take.
I was simply wondering if anyone could help me figure out where this delay comes from. The odd thing is that when I run pgAdmin on my machine connecting to the remote DB, it takes nowhere near that much time to run simple queries.
Does anyone have any ideas of other things to try? I'm not running an SSL DB connection or using any compression. I'm willing to try them if necessary; however, compression is something I haven't gotten to work before, and I doubt it would help with latency anyway.
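For scale, it can help to work out the floor that latency alone imposes: every synchronous client/server exchange costs at least one round trip, and a real (non-emulated) prepared statement typically costs a prepare round trip plus an execute round trip. The counts in this sketch are assumptions for illustration:

```typescript
// Back-of-the-envelope latency budget: a synchronous client/server exchange
// costs at least one network round trip, so per-query time has a floor of
// (round trips per query) x RTT. All numbers below are assumptions.
const rttMs = 60;          // observed home -> office ping
const tripsPerQuery = 3;   // e.g. prepare + execute + fetch, if not pipelined
const queriesPerPage = 4;

const perQueryMs = tripsPerQuery * rttMs;
console.log(`floor per query: ~${perQueryMs} ms`);
console.log(`floor per page:  ~${queriesPerPage * perQueryMs} ms`);
// => ~180 ms per query, ~720 ms per page. A measured ~1 s per query is far
// above that floor, which points at extra round trips per query (for
// example, a fresh TCP + authentication handshake every time) rather than
// bandwidth or data volume.
```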

How Do I Optimize Zend Framework?

I have an application built on Zend Framework that I am trying to optimize.
I did some Xdebug profiling, and although I can't say I understand every nitty-gritty detail of the results, some things were quite obvious.
For instance, the file Bootstrap.php seems to be the one gulping most of the time, taking 4,553 ms, which accounts for 92.49% of the total time.
If I dig further, I can see that Zend_Application_Bootstrap_Bootstrap->run takes the bulk of that time. Checking again, I found that Zend_Controller_Front->dispatch might actually be the call inside Bootstrap.php that takes the time.
The question is: given these indications, how best can I go about optimizing the application? If the answer is caching, how do I go about applying caching to this situation?
Thanks
From the look of the callgrinds, on the login page the app is spending most of its time in curl_exec, which is to be expected if you're doing a remote login. But it is doing 10 separate curl_exec calls, which seems excessive. I'm not familiar with the LinkedIn login auth, but is it possible your app is running the remote login code multiple times?
On the standard page request the app is spending most of its time connecting to MySQL, and it seems to be doing this twice. Are you using a remote DB server, and do you need two separate DB connections?
Assuming you are using a remote DB server on the same network as your web server, there seems to be a networking issue there. I'd check the latency to that server if you can, and try connecting to the IP address instead of a hostname to see if that makes any difference (if that is much faster, it would suggest an issue with the DNS setup on your web server).
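To separate those two suspects, the DNS lookup can be timed independently of the TCP connect; a small Node sketch (the hostname and port are placeholders):

```typescript
// Time the DNS lookup separately from the TCP connect. Placeholder host/port.
import * as dns from "dns";
import * as net from "net";

const host = "db.example.com"; // placeholder DB hostname
const port = 3306;             // MySQL default

const t0 = Date.now();
dns.lookup(host, (err, address) => {
    if (err) { console.error(`DNS failed: ${err.message}`); return; }
    const tDns = Date.now();
    console.log(`DNS lookup: ${tDns - t0} ms -> ${address}`);
    const sock = net.connect(port, address, () => {
        console.log(`TCP connect: ${Date.now() - tDns} ms`);
        sock.end();
    });
    sock.on("error", (e) => console.error(`connect failed: ${e.message}`));
});
```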