I have XAMPP running on localhost on Windows 7. I was trying to find a way to simulate the bandwidth of dialup and 3G connections.
Is there a current solution which works on a localhost and Windows 7 and is reasonably straight-forward to enable and disable as necessary?
The easiest way is to use the Chrome developer tools.
On the Network tab, there is a feature to throttle bandwidth for different use cases.
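If you want to drive the same throttling from a script instead of the DevTools UI, the underlying DevTools Protocol command is Network.emulateNetworkConditions. Below is a minimal sketch, assuming the chrome-remote-interface npm package and a Chrome instance started with --remote-debugging-port=9222 (both assumptions; any CDP client would work):

// Sketch: throttle an open Chrome tab over the DevTools Protocol.
// Assumes: npm install chrome-remote-interface, and Chrome launched with
// --remote-debugging-port=9222.
const CDP = require('chrome-remote-interface');

(async () => {
  const client = await CDP(); // attach to the first available tab
  const { Network } = client;
  await Network.enable();
  // Rough "slow 3G"-style numbers: 400 kbit/s each way, 400 ms of added latency
  await Network.emulateNetworkConditions({
    offline: false,
    latency: 400,                         // round-trip time in ms
    downloadThroughput: (400 * 1024) / 8, // bytes per second
    uploadThroughput: (400 * 1024) / 8,
  });
  // ... exercise the page while the session stays attached
  // (the emulated conditions may be cleared once it detaches) ...
  await client.close();
})();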
Charles Proxy includes a bandwidth throttle.
You can use @sitespeed.io/throttle - an npm package.
Example:
# simulate a slow 3G connection on localhost
npx @sitespeed.io/throttle 3gslow --localhost
# throttle localhost with a round-trip time of 200ms
npx @sitespeed.io/throttle --rtt 200 --localhost
# later, when you are done, stop throttling
npx @sitespeed.io/throttle --stop --localhost
Note that throttle requires sudo and will prompt for your password.
Alternatively, throttle can be used programmatically in Node.js.
Example (adapted from the docs so it runs as a plain CommonJS script):
const throttle = require('@sitespeed.io/throttle');

const options = { up: 360, down: 780, rtt: 200 };

(async () => {
  // start() and stop() both return promises
  await throttle.start(options);
  // Do your thing and then stop
  await throttle.stop();
})();
See the documentation for more options.
I'm building a website attached to a Heroku Postgres database and am using the free hobby dev plan. Per Heroku, this means there's a "Maximum of 20 connections." Does this mean that a maximum of 20 people can be using the website with data being collected by the database on the back end? Any idea what happens if connections go above that level? The paid plans go up to a maximum connection limit of 500, but even that seems low to me if people are using this at the enterprise level. Any color on this would be greatly appreciated. There was a prior question on this but the answer wasn't quite clear to me.
Thanks!
What does database connection limit mean?
PostgreSQL can be configured to limit the number of simultaneous connections to the database. Heroku's plans come with connection limits: the 'Hobby' plans come with 20 connections, whereas standard plans start at 120 connections. When we start developing and testing, especially with automated tests, the hobby plans quickly hit the error PG::Error (FATAL: too many connections for role "xxxxxxx"). You can check the current connections with the Heroku CLI.
The immediate solution is to kill all connections with the command:
$ heroku pg:killall --app <app name>
This is not a permanent solution, though. We had the same issue with this website as well, and tried many of the solutions available on the internet, especially on Stack Overflow.
It is very important to know how to calculate the number of connections required. The Heroku documentation says:
Assuming that you are not manually creating threads in your application code, you can use your web server settings to guide the number of connections that you need. The Unicorn web server scales out using multiple processes; if you aren't opening any new threads in your application, each process will take up 1 connection. So in your unicorn config file, if you have worker_processes set to 3 like this:
worker_processes 3
Then your app will use 3 connections for workers. This means each dyno will require 3 connections. If you’re on a “Dev” plan, you can scale out to 6 dynos which will mean 18 active database connections, out of a maximum of 20. However, it is possible for a connection to get into a bad or unknown state.
Solution - Limit connections with PgBouncer
The easiest fix is to limit the connections with PgBouncer. For many frameworks, you must disable prepared statements in order to use PgBouncer. Then add the PgBouncer buildpack to your app.
$ heroku buildpacks:add https://github.com/heroku/heroku-buildpack-pgbouncer
The output will be something like
Buildpack added. Next release on will use:
heroku/python
https://github.com/heroku/heroku-buildpack-pgbouncer
Run git push heroku master to create a new release using these buildpacks.
Now you must modify your Procfile to start PgBouncer. In your Procfile add the command bin/start-pgbouncer-stunnel to the beginning of your web entry. So if your Procfile was
web: gunicorn .wsgi:application --worker-class gevent
Change it to:
web: bin/start-pgbouncer-stunnel gunicorn .wsgi:application --worker-class gevent
Commit the results to git, test on a staging app, and then deploy to production.
On deployment, you should see PgBouncer starting up in your logs.
Depending on the web-framework you are using this can be different, but:
Typically you will have a maximum of one database connection per server process. This could be one per running web or worker dyno, or more if your framework runs multiple threads / worker processes per dyno (most do).
These connections are then only used when there is an actual request to your application, not while the user is just viewing a page.
When you're running an async framework (node.js for example, or greenlets in Python), this gets a little more complicated.
The easy way: just test it. You'll see the current connection count in the Heroku interface. There are frameworks and services in the wild that let you test concurrent users.
The even easier way (since this runs on hobby plans, it seems like a hobby application): just see when it breaks :) .
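If you want a scripted version of "just test it", here is a minimal sketch that opens connections until the server refuses one. It assumes the Node pg package and a DATABASE_URL environment variable pointing at the Heroku database (both assumptions; the same idea works from any client):

// Sketch: open connections until the plan's limit refuses one more.
const { Client } = require('pg'); // assumes: npm install pg

(async () => {
  const clients = [];
  for (let i = 1; i <= 25; i++) {
    const client = new Client({
      connectionString: process.env.DATABASE_URL,
      ssl: { rejectUnauthorized: false }, // Heroku Postgres requires SSL
    });
    try {
      await client.connect();
      clients.push(client);
      console.log(`connection ${i} open`);
    } catch (err) {
      // On a hobby plan this typically fails once you pass 20 connections
      console.log(`connection ${i} failed: ${err.message}`);
      break;
    }
  }
  await Promise.all(clients.map((c) => c.end())); // clean up what we opened
})();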
I am using Google Chrome 63.
In DevTools in Performance tab there are three CPU throttling settings: "No throttling", "4x slowdown" and "6x slowdown".
Is it possible to set a custom throttling rate, for example "20x slowdown"? It could be via some flag passed to chrome.exe, or programmatically via a NodeJS library.
I found that the Lighthouse library has a somewhat helpful function, but if I change the default value inside it (CPU_THROTTLE_METRICS seems to be 4) from 4 to, for example, 20 and run it, how can I be sure it really is slowed down 20x?
Also, I would like to know if it is possible to apply a similar simulated slowdown to the GPU.
Thanks for any advice.
Custom values for Emulation.setCPUThrottlingRate can be set right in Chrome, but you need to open a Dev Tools window on the Dev Tools window to change the setting programmatically.
Open Dev Tools; make sure it is detached (open in its own window).
Open Dev Tools again on the Dev Tools window from step 1 using the key combination Cmd-Opt-i (Mac) or Ctrl-Shift-i (Windows).
Run the following in the Console tab: await Main.MainImpl.sendOverProtocol('Emulation.setCPUThrottlingRate', {rate: 40});
This example will throttle Chrome performance by 40x. NOTE: Passing 1 for rate turns off throttling.
The first Dev Tools window created in Step 1 may be re-docked after creating the second Dev Tools window.
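One way to convince yourself the rate is actually applied is to time a fixed chunk of work in the page's Console before and after setting it; the throttled run should take roughly rate times longer. A rough sketch (the function name and iteration count are just illustrative):

// Run in the page's Console before and after changing the throttling rate.
function cpuBenchmark(iterations = 5e7) {
  const start = performance.now();
  let acc = 0;
  for (let i = 0; i < iterations; i++) acc += Math.sqrt(i);
  console.log(`${(performance.now() - start).toFixed(0)} ms (acc=${acc.toExponential(2)})`);
}
cpuBenchmark();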
Lighthouse uses the Emulation.setCPUThrottlingRate command in the Chrome DevTools Protocol:
https://chromedevtools.github.io/devtools-protocol/tot/Emulation#method-setCPUThrottlingRate
You can monitor the protocol this way:
https://umaar.com/dev-tips/166-protocol-monitor/
You'll see this command in the protocol log when you change the throttling setting in the Performance panel.
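Since the question also asks about doing this programmatically from NodeJS, here is a minimal sketch that sends the same command over the protocol. It assumes the chrome-remote-interface npm package and Chrome started with --remote-debugging-port=9222 (both assumptions; any CDP client works):

// Sketch: set a custom CPU throttling rate over the DevTools Protocol.
const CDP = require('chrome-remote-interface'); // assumes: npm install chrome-remote-interface

(async () => {
  const client = await CDP(); // attach to the first available tab
  await client.Emulation.setCPUThrottlingRate({ rate: 20 }); // 20x slowdown; 1 turns it off
  // Keep the connection open while you measure - the setting may be
  // reset once the debugging session detaches.
})();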
If you're asking how to be sure it works - here is the implementation from the Chromium source code:
https://github.com/chromium/chromium/blob/master/third_party/blink/renderer/platform/scheduler/util/thread_cpu_throttler.h#L21
// This class is used to slow down the main thread for
// inspector "cpu throttling". It does it by spawning an
// additional thread which frequently interrupts main thread
// and sleeps.
Hope this helps.
On Linux you can use cpulimit
sudo apt-get install cpulimit
# -l 5 means 5%, or a 20x slowdown
cpulimit -l 5 chromium-browser
Colleagues and users testing various features in a program use MFDeploy to install for example "MyApp.exe" onto their Netduino +2. This method works great. Is there a way to also MFDeploy a "MyApp.config" text file so they can set their specific network criteria (like Port#) or other program preferences? Obviously, more robust preferences can be set from desktop software or web app AFTER the connection is established.
After several days of research, I could not find a viable means of transferring a config file via MFDeploy, so I decided to add a "/install" command-line option to the desktop app:
cncBuddyUI.exe [/help|/?] [/reset] [/discover] [/install:[axisA=X|Y][,port=9999]]
/help|/? Show this help/usage information
/reset Create new default software configuration
/discover Listen for cncBuddyCAM broadcasting IPAddress & Port (timeout 30 secs)
/install Install hardware specific settings on Netduino+2 SDCard.
port Network port number (default=80)
axisA Slave axisA motor signals to X or Y axis
During "/install" mode, once cncBuddyCAM (Netduino app) network connects to cncBuddyUI (desktop app), the configuration parameters are transmitted and written onto the SDCard (\SD\config.txt).
Every warm boot now reads \SD\config.txt at startup and loads the configuration parameters into the appropriate application variables.
After several weeks of usage, I find this method preferable and easier to customize. Check out cncBuddy on GitHub.
I recently discovered that Zend_Session's DbTable SaveHandler is implemented in a way that is not very optimized for high performance, so I've been investigating switching to Memcache for session management.
I found a decent pattern/class for changing the Zend_Session SaveHandler in my bootstrap from DbTable to Memcache here and added it into my web app.
In my bootstrap, I changed the SaveHandler like so:
FROM:
Zend_Session::setSaveHandler(new Zend_Session_SaveHandler_DbTable($config));
TO:
Zend_Session::setSaveHandler(new MyApp_Session_SaveHandler_Memcache(Zend_Registry::get("cache")));
So, my session init looks like this:
Zend_Loader::loadClass('MyApp_Session_SaveHandler_Memcache');
Zend_Session::setSaveHandler(new MyApp_Session_SaveHandler_Memcache(Zend_Registry::get("cache")));
Zend_Session::start();
// set up session space
$this->session = new Zend_Session_Namespace('MyApp');
Zend_Registry::set('session', $this->session);
As you can see, the class provided from that site integrates quickly with a simple loadClass and SaveHandler change in the bootstrap and it works in my local dev env without error (web app and memcache are on the same system).
I also tested my web app hosted in local dev env with a remote memcache server in PROD to see how it performs over the wire and it appears to also work okay.
However, in my staging environment (which mimics production), my Zend app is hosted on server1 and memcache is hosted on server2, and it seems that nearly every other request completely bombs out with specific error messages.
The error information I capture includes the message "session has already been started by session.auto-start or session_start()", and a second, related error indicates that Zend_Session::start() got a Connection Refused with an "Error #8 MemcachePool::get()" implicated on line 180 of the framework file ../Zend/Cache/Backend/Memcached.php.
I have confirmed that my php.ini has session.auto_start set to 0 and that the only instance of Zend_Session::start() in my code is in my bootstrap. Also, I init my Cache, Db and Helpers before I init my Session (to make sure my Zend_Registry::get("cache") argument for instantiating my new SaveHandler is valid).
I found only a couple of valuable resources on how to successfully employ Memcache for Zend_Session, and I have also reviewed ZF's Zend_Cache_Backend and Zend_Session "Advanced Usage" docs, but I haven't been able to identify why I get this error using Memcache or why it won't work consistently with a dedicated/remote memcache server.
Does anyone understand this problem?
Does anyone have experience with solving this problem?
Does anyone have Memcache working in their ZF web app for session management in a way that they can recommend?
Please be sure to include any/all Zend_Session and/or Zend_Cache configurations you made or other trickeration or witchcraft you used to get this working.
Thanks!
This one just nearly exploded my head.
First, sorry for the book of a question...I wanted to paint a complete picture of the situation. Unfortunately, I missed a few key details which my wonderful coworker found.
So, once you install, most likely when you are just starting to test out the daemon, you will do this:
root# memcached -d -u nobody -m 512 -l 127.0.0.1 -p 11211
This command will start up memcached, using 512MB on the localhost and the default port 11211.
Did you see what I did there? That means it's set to only process requests sent to the LOOPBACK network interface.
ugh
My problem was, I couldn't get my web app to work with a REMOTE memcached server.
So, when you actually want to fire up your memcached server to accept requests from remote systems, you execute something like the following:
root# memcached -d -u nobody -m 512 -l 192.168.0.101 -p 11211
This fixed my problem. This starts my memcached daemon, setting it to use 512MB bound to IP 192.168.0.101 and listening on the default port 11211.
Now, any requests SENT to that IP and Port will be accepted by the server and handled as you might expect.
Here's a networking doc reference...RTFM...a second time!
Is there a way to slow down the internet connection to the iPhone Simulator, so as to mimic how the App might react when you are in a slow spot on the cellular network?
How to install Apple’s Network Link Conditioner
These instructions current as of October 2019.
Warning: If you just upgraded to new version of macOS, make sure you install the very latest Network Conditioner (in Additional Tools for Xcode) or it may silently fail; that is, you will turn it on but it won’t throttle anything or drop any packets.
Update: As of Xcode 11, there may be an even simpler way to simulate network conditions on tethered devices; see this blog post. For how to affect simulated devices, continue below, as before.
Install Xcode if you don’t have it.
Open Xcode and go to Xcode › Open Developer Tool › More Developer Tools…
Download Additional Tools for Xcode (matching your current Xcode version)
Open the downloaded disk image and double-click the Network Link Conditioner .prefpane under “Hardware” to install it.
There we go!
Be sure to turn it on. You need to select a profile and enable the network conditioner.
Caveat
This won't affect localhost, so be sure to use a staging server or co-worker's computer to simulate slow network connections to an API you’re running yourself. You may find https://ngrok.com/ helpful in this regard.
"There's an app for that!" ;) Apple provides "Network Link Conditioner" preference pane that does the job quite well.
for Xcode versions prior to 4.3, the pane installer can be found in your Developer folder, e.g. "/Developer/Applications/Utilities/Network Link Conditioner", after installation, if daemon fails to start and you don't want to reboot your machine, just use sudo launchctl load /system/library/launchdaemons/com.apple.networklinkconditioner.plist
if you are already done with Developer folder, you can install the pane as a part of "Hardware IO Tools for Xcode" package available via Mac Dev Center additional downloads section.
Link to download page (you must log in with your Apple ID): https://developer.apple.com/downloads/index.action
(credits to @nverinaud)
An app called SpeedLimit
https://github.com/mschrag/speedlimit
Works great.
chris.
It's also worth mentioning that Xcode also has a built-in way for devices, not the simulator.
Just go to 'Devices and Simulators' (Cmd+Shift+2)
Select your device
Scroll down until you find 'Device Conditions'
Set your desired profile
Hit Start
To get this working you need to install 'Network Link Conditioner' on your Mac. See the steps mentioned in Alan's answer.
I would argue that a slow connection isn't enough to simulate real-world mobile data network behaviour, since there is also much more packet loss, higher latency and more dropped connections.
Here is a handy script I found to configure the firewall to emulate these parameters (note that it relies on ipfw, which was removed in OS X 10.10 and later, so it only applies to older systems):
http://pmilosev-notes.blogspot.com/2011/02/ios-simulator-testing-over-different.html
#!/bin/sh
if [ "$#" -ne "3" ]
then
echo "Usage:\n$0 <bandwidth in kpbs> <delay in ms> <packet loss ratio>";
exit 1
fi
BW=$1
DELAY=$2
PLR=$3
sudo ipfw pipe 1 config bw ${BW}Kbit/s delay $DELAY plr $PLR
sudo ipfw add 1 pipe 1 all from me to not me
sudo ipfw add 2 pipe 1 all from not me to me
echo "RETURN to stop connection noise"
read
sudo ipfw delete 1
sudo ipfw delete 2
exit 0
Some suggested values you can use:
Scenario              Bw (Kbit)   Delay (ms)   Plr (ratio)
2.5G mobile (GPRS)    50          200
3G mobile             1000        200          0.2
VSAT                  5000        500          0.2
Busy LAN on VSAT      300         500          0.4
There isn't a direct way to emulate a slow connection, unlike, say, the nice network connection emulator that blackberry developers enjoy. However, since your simulator's connection goes through your computer - you can simply focus on slowing down your computer's connection.
You'll want to achieve two things (depending upon your circumstances):
throttle your bandwidth
increase your latency
Maybe this will point you in right direction:
http://www.macosxhints.com/article.php?story=20080119112509736
There are some good open source solutions, too, but I can't remember their names.
This question might help: How to throttle network traffic for environment simulation?
You can do it on a real device through Xcode (14) settings:
Debug -> Induce Device Conditions -> Network Link -> select the network you want