Is there a way to slow down the internet connection to the iPhone Simulator, so as to mimic how the App might react when you are in a slow spot on the cellular network?
How to install Apple’s Network Link Conditioner
These instructions are current as of October 2019.
Warning: If you just upgraded to a new version of macOS, make sure you install the very latest Network Link Conditioner (in Additional Tools for Xcode) or it may silently fail; that is, you will turn it on but it won't throttle anything or drop any packets.
Update: As of Xcode 11, there may be an even simpler way to simulate network conditions on tethered devices; see this blog post. For how to affect simulated devices, continue below, as before.
Install Xcode if you don’t have it.
Open Xcode and go to Xcode › Open Developer Tool › More Developer Tools…
Download Additional Tools for Xcode (matching your current Xcode version)
Open the downloaded disk image and double-click the Network Link Conditioner .prefpane under “Hardware” to install it.
There we go!
Be sure to turn it on. You need to select a profile and enable the network conditioner.
Caveat
This won't affect localhost, so be sure to use a staging server or co-worker's computer to simulate slow network connections to an API you’re running yourself. You may find https://ngrok.com/ helpful in this regard.
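For example, a minimal sketch assuming your API is listening locally on port 3000; ngrok gives you a public URL that tunnels back to your machine, so the simulator's requests actually leave the box and get conditioned:
ngrok http 3000
# ngrok prints a public forwarding URL (something like https://<random>.ngrok.io);
# point the app at that URL instead of localhost.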
"There's an app for that!" ;) Apple provides "Network Link Conditioner" preference pane that does the job quite well.
For Xcode versions prior to 4.3, the pane installer can be found in your Developer folder, e.g. "/Developer/Applications/Utilities/Network Link Conditioner". After installation, if the daemon fails to start and you don't want to reboot your machine, just run sudo launchctl load /System/Library/LaunchDaemons/com.apple.networklinkconditioner.plist
If you have already gotten rid of the Developer folder, you can install the pane as part of the "Hardware IO Tools for Xcode" package, available via the Mac Dev Center's additional downloads section.
Link to download page (you must log in with your Apple ID): https://developer.apple.com/downloads/index.action
(credits to #nverinaud)
An app called SpeedLimit
https://github.com/mschrag/speedlimit
Works great.
It's also worth mentioning that Xcode has a built-in way to do this for physical devices (not the simulator).
Just go to 'Devices and Simulators' (Cmd+Shift+2)
Select your device
Scroll down until you find 'Device Conditions'
Set your desired profile
Hit Start
To get this working you need to install 'Network Link Conditioner' on your Mac; see the steps mentioned in Alan's answer.
I would argue that a slow connection isn't enough to simulate real-world mobile data network behaviour - since there is also much more packet loss, higher latency, and more dropped connections.
Here is a handy script I found to configure the firewall to emulate these parameters:
http://pmilosev-notes.blogspot.com/2011/02/ios-simulator-testing-over-different.html
#!/bin/sh
# Throttle all traffic to/from this machine using ipfw dummynet pipes.
if [ "$#" -ne "3" ]
then
    printf "Usage:\n%s <bandwidth in kbps> <delay in ms> <packet loss ratio>\n" "$0"
    exit 1
fi

BW=$1
DELAY=$2
PLR=$3

# one shared pipe, applied to both outbound and inbound traffic
sudo ipfw pipe 1 config bw ${BW}Kbit/s delay $DELAY plr $PLR
sudo ipfw add 1 pipe 1 all from me to not me
sudo ipfw add 2 pipe 1 all from not me to me

echo "RETURN to stop connection noise"
read dummy

sudo ipfw delete 1
sudo ipfw delete 2
exit 0
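For example, if you save the script as slow_network.sh (the name is just an example), you could approximate the 3G profile from the table below like this:
chmod +x slow_network.sh
./slow_network.sh 1000 200 0.2   # 1000 Kbit/s, 200 ms delay, 20% packet loss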
Some suggested values you can use:
Scenario             Bw (Kbit)   delay (ms)   plr (ratio)
2.5G mobile (GPRS)   50          200          -
3G mobile            1000        200          0.2
VSAT                 5000        500          0.2
Busy LAN on VSAT     300         500          0.4
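Note that ipfw was removed from macOS around 10.10, so on a modern Mac the script above won't run as-is. A rough, untested sketch of the same idea using dnctl and pf (dnctl keeps the dummynet pipe syntax; the anchor name "throttle" is my own choice):
# create a dummynet pipe with the 3G profile from the table
sudo dnctl pipe 1 config bw 1000Kbit/s delay 200 plr 0.2
# load a pf anchor that sends all traffic through the pipe, then enable pf
(cat /etc/pf.conf && echo 'dummynet-anchor "throttle"' && echo 'anchor "throttle"') | sudo pfctl -f -
echo "dummynet in all pipe 1" | sudo pfctl -a throttle -f -
sudo pfctl -e
# when finished, flush the pipes and restore the default rules
sudo dnctl -q flush
sudo pfctl -f /etc/pf.conf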
There isn't a direct way to emulate a slow connection, unlike, say, the nice network connection emulator that blackberry developers enjoy. However, since your simulator's connection goes through your computer - you can simply focus on slowing down your computer's connection.
You'll want to achieve two things (depending upon your circumstances):
throttle your bandwidth
increase your latency
Maybe this will point you in the right direction:
http://www.macosxhints.com/article.php?story=20080119112509736
There are some good open source solutions, too, but I can't remember their names.
This question might help: How to throttle network traffic for environment simulation?
You can do it on a real device through Xcode (14) settings:
Debug -> Induce Device Conditions -> Network Link -> select the network profile you want
Related
I want to deploy a small object detection app in a lobby, but I would like to prevent unauthorized physical access. The device logs in automatically on boot, so anyone can access it with a keyboard. How could I prevent that? Thank you!
In the end, I opted to disable the login for the mendel user and also lock the account. Instead of using /bin/false, I opted to place my own script at /usr/bin/guard.sh that creates an .UNAUTHORISED_LOGIN file in mendel's home directory in case someone tries to open a terminal on the device. Basically I ran the following commands:
chmod +x guard.sh
sudo cp guard.sh /usr/bin
sudo chsh -s /usr/bin/guard.sh mendel   # make guard.sh mendel's login shell
sudo usermod -L mendel                  # lock the mendel account's password
guard.sh contents:
#!/bin/bash
# Record the attempted login by dropping a marker file in mendel's home directory.
touch /home/mendel/.UNAUTHORISED_LOGIN
Maybe you can try blacklisting the usb-storage driver?
Create this file:
sudo vim /etc/modprobe.d/blacklist.conf
Write this line into the file:
blacklist usb-storage
Save, close, and reboot.
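If you prefer to do it non-interactively (say, from a provisioning script), something along these lines should be equivalent; the exact file name is my own choice and only needs to end in .conf:
echo "blacklist usb-storage" | sudo tee /etc/modprobe.d/blacklist-usb-storage.conf
sudo reboot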
Nam's suggestion is good. It locks out usb-storage, but still allows the usb camera to work. You could lock out a USB keyboard that way too. With effort, you can plug lots of potential attack points, including login passwords for MDT and serial access. Perhaps you will superglue the USB camera in place, or secure the whole assembly in a locked box.
Coral development is primarily focused on embedded ML inference on the edge TPU, and not the security tradeoffs of deployment. What follows are some untested suggestions, not documented recommendations.
Electronic tampering is important to address on any internet-connected device. We do not recommend deploying Mendel for end applications; it is for development only. Use a Yocto build to include only what is necessary for your application, and be sure to include all the latest security patches.
Protecting against physical tampering could be an infinite challenge. First, determine the level of attack to be expected, and go no further. Some businesses have armed security. Most businesses have unarmed security. My home has no security guards.
Do you need a locked box with tamper switches? ATMs and point-of-sale terminals have published standards to keep them secure enough. Perhaps a locked box is sufficient. An attacker could cut the cables and take the box if it's not bolted down, but could not quickly compromise the device.
Once you have a security plan, it's important to get an outside review. A reviewer can help you decide: Does this plan protect against the expected attack vectors? Are there any other attack vectors that must be addressed for this level of security? Are there elements of the plan that are too much for this level of security? Depending upon the application, it might be reasonable to hire penetration testers for a realistic evaluation when it is ready.
To disable the automatic login on the HDMI console, I found that sudo systemctl set-default multi-user.target will do the trick.
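Two related commands, in case you need to check the current default or switch back to booting into the GUI later:
systemctl get-default                          # show the current default target
sudo systemctl set-default graphical.target    # revert to the graphical session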
I am using Google Chrome 63.
In DevTools, in the Performance tab, there are three CPU throttling settings: "No throttling", "4x slowdown" and "6x slowdown".
Is it possible to set a custom throttling rate, for example "20x slowdown"? It could be via some flag passed to chrome.exe or programmatically via a Node.js library.
I found that the Lighthouse library has a potentially helpful function, but if I change the default value inside it (CPU_THROTTLE_METRICS seems to be equal to 4) from 4 to, for example, 20 and run it, how can I be sure it really is slowed down 20x?
Also, I would like to know whether it is possible to apply a similar simulated slowdown to the GPU.
Thanks for any advice.
Custom values for Emulation.setCPUThrottlingRate can be set right in Chrome, but you need to open a Dev Tools window on the Dev Tools window to change the setting programmatically.
Open Dev Tools; make sure it is detached (open in its own window).
Open Dev Tools again on the Dev Tools window from step 1 using the key combination Cmd-Opt-i (Mac) or Ctrl-Shift-i (Windows).
Run the following in the Console tab: await Main.MainImpl.sendOverProtocol('Emulation.setCPUThrottlingRate', {rate: 40});
This example will throttle Chrome performance by 40x. NOTE: Passing 1 for rate turns off throttling.
The first Dev Tools window created in Step 1 may be re-docked after creating the second Dev Tools window.
Lighthouse uses the Emulation.setCPUThrottlingRate command from the Chrome DevTools Protocol:
https://chromedevtools.github.io/devtools-protocol/tot/Emulation#method-setCPUThrottlingRate
You can monitor the protocol this way:
https://umaar.com/dev-tips/166-protocol-monitor/
You'll see this command in the protocol log when you change the throttling setting in the Performance panel.
If you're asking how to be sure it works - here is the implementation in the Chromium source code:
https://github.com/chromium/chromium/blob/master/third_party/blink/renderer/platform/scheduler/util/thread_cpu_throttler.h#L21
// This class is used to slow down the main thread for
// inspector "cpu throttling". It does it by spawning an
// additional thread which frequently interrupts main thread
// and sleeps.
Hope this helps.
On Linux you can use cpulimit
sudo apt-get install cpulimit
# -l 5 means 5%, i.e. a 20x slowdown
cpulimit -l 5 chromium-browser
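You can also attach cpulimit to a browser that is already running, by PID instead of by command; a small sketch, assuming the process name matches:
# throttle an existing process to ~5% of one core (oldest matching PID)
cpulimit -l 5 -p "$(pgrep -o chromium-browser)"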
I'm currently busy with my master's project, which involves setting up comms over UART between a Raspberry Pi Model 2 B V1.1 and a Pixhawk flight controller using the MAVLink protocol.
The first step is, of course, to get the UART set up and working. I'm not one to run after help at the first sign of a problem. I have been struggling with this for days and it's forced me to doubt the purpose of my existence more than once. I feel stupid and frustrated. Please see if you can provide any assistance.
My first resource was this tutorial, which should be relatively straightforward:
http://ardupilot.org/dev/docs/raspberry-pi-via-mavlink.html
The tutorial simply installs all the necessary packages and dependencies, as well as setting up the UART. I followed the steps to disable OS use of the serial port through raspi-config; however, after attempting to test the connection I get an error:
[Errno 2] No such file or directory: '/dev/ttyAMA0'
Which is very strange. After disabling and enabling OS use of the serial port through raspi-config a few times and checking, I found that every time I disable it, the /dev/ttyAMA0 device disappears. Now how the hell is anything supposed to work on the UART if disabling OS use of the UART removes that device!? Nevertheless I powered through. I enabled OS use of the serial port, which leaves ttyAMA0 right where it is, and followed another suggestion: change /boot/cmdline.txt and remove all references to ttyAMA0, as shown in the following link:
http://www.raspberry-projects.com/pi/pi-operating-systems/raspbian/io-pins-raspbian/uart-pins
This seemed to work alright. I could now initiate comms between the RPi and the Pixhawk flight controller and get some information that looked correct. Then the black magic started. The next day I tested the connection and it consistently spat out complete rubbish, but nothing had changed since the previous day. Somewhere I must be missing something. I followed all the same tutorials and steps, attempting to get the more positive results I got the previous day; however, that only led to more erratic behaviour. When connecting the serial lines to my Pixhawk flight controller, the keyboard/mouse seems to get interrupted momentarily every now and then. Everything just went backwards. I have already reinstalled Raspbian Jessie in a desperate attempt to get things to work.
Here are a few things I suspect could possibly contribute to the problems:
Baud rate not correct (to communicate with my flight controller the baud rate needs to be 57600). The best way I've found to set this baud rate is to append "init_uart_baud=57600" to /boot/config.txt. I have also read about other ways, such as appending a line to /etc/crontab. Any suggestions?
The Pixhawk miraculously and sporadically refusing to communicate back with the RPi.
Any assistance will be appreciated. Thank you.
SOLVED:
Looks like a known bug in the latest Raspbian; easy to fix though.
These steps need to be done as the root user.
Disable the "serial console" through the GUI preferences or "sudo raspi-config", then reboot the Pi.
Then change the following line at the bottom of /boot/config.txt from:
enable_uart=0
to
enable_uart=1
Disable the ModemManager service by running the following command as root:
systemctl disable ModemManager.service
Then add yourself to the dialout group, just to be sure you have the required permissions on the serial port:
adduser pi dialout
That should give you unrestricted proper access to the serial port.
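To sanity-check the UART before involving the Pixhawk at all, you can jumper the Pi's TX pin to its RX pin and run a quick loopback test. A rough sketch, assuming the primary UART shows up as /dev/ttyAMA0 (on newer images it may be /dev/serial0):
stty -F /dev/ttyAMA0 57600 raw -echo   # set baud rate and raw mode
cat /dev/ttyAMA0 &                     # print anything received on the port
echo "loopback test" > /dev/ttyAMA0    # should echo back on the terminal
kill %1                                # stop the background cat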
Resources:
https://www.raspberrypi.org/forums/viewtopic.php?f=66&t=148515
and
https://www.raspberrypi.org/forums/viewtopic.php?f=28&t=82779
I had been researching this for days, troubleshooting everything Google turned up. I solved the serial UART settings for connecting my RPi3 Model B (I typed the following at the command line:
`cat /proc/cpuinfo`
to check the Pi's hardware info.)
FYI: You must be root when working with MAVProxy, so use sudo su
or sudo -s.
Also, you must be a member of the dialout group, so do this at the command line:
sudo usermod -a -G dialout root (this adds the root user to the dialout group)
Do all the regular RPi housekeeping:
sudo apt-get update && sudo apt-get upgrade, and then sudo rpi-update.
I did everything as outlined on the ArduPilot website, but I did NOT use "apsync-rpi". (I used the 2017-03-02-raspbian-jessie.img.)
On my RPi3, uname -a reports: Linux raspberrypi 4.4.50-v7+
My /boot/config.txt file has one change at the bottom of the file, this statement: enable_uart=1 (it has the good side effect of forcing core_freq to 250, which keeps the UART clock, and therefore the baud rate, stable).
Important discovery: the articles state that the RPi3 UART and tty assignments have changed.
What I discovered after much ado is this for my /boot/cmdline.txt file:
dwc_otg.lpm_enable=0 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 etc.
Notice that I am not using ttyS0 in /boot/cmdline.txt. I tried ttyS0 a dozen times and it never worked properly; I am not able to explain why at this time, although console=tty1 works when written in /boot/cmdline.txt.
Make sure you have the wiring correct between your RPi and the Pixhawk Telem 2 port. Also set the correct parameters in Mission Planner, as I have:
Go to CONFIG/TUNING -> STANDARD PARAMS; (my settings)
-The Serial0 baud rate (SERIAL0_BAUD) is 115200
-The console protocol selection (SERIAL0_PROTOCOL) is MAVLink1
-The Telem1 baud rate (SERIAL1_BAUD) is 115200
-The Telem1 protocol selection (SERIAL1_PROTOCOL) is MAVLink2
-The Telemetry 2 baud rate (SERIAL2_BAUD) is 921600
-The Telemetry 2 protocol selection (SERIAL2_PROTOCOL) is MAVLink1
The RPi and pixhawk communicate at 921600 baud rate.
Once I get the RPi3 powered up with its own +5V/VCC source and connect to Mission Planner with a USB 3.0 cable from my Windows 10 Pro PC (okay, I have Arch and Debian Linux distros and Apple OSes too!), I enter:
`mavproxy.py --master=/dev/ttyS0 --baudrate 921600 --aircraft Plane`
It works for me!
Happy experimenting and flying!
Colleagues and users testing various features in a program use MFDeploy to install for example "MyApp.exe" onto their Netduino +2. This method works great. Is there a way to also MFDeploy a "MyApp.config" text file so they can set their specific network criteria (like Port#) or other program preferences? Obviously, more robust preferences can be set from desktop software or web app AFTER the connection is established.
After several days of researching, I could not find a viable means of transferring a config file via MFDeploy, so I decided to add an "/install" command line option to the desktop app:
cncBuddyUI.exe [/help|/?] [/reset] [/discover] [/install:[axisA=X|Y] ,port=9999]]
/help|/? Show this help/usage information
/reset Create new default software configuration
/discover Listen for cncBuddyCAM broadcasting IPAddress & Port (timeout 30 secs)
/install Install hardware specific settings on Netduino+2 SDCard.
port Network port number (default=80)
axisA Slave axisA motor signals to X or Y axis
During "/install" mode, once cncBuddyCAM (Netduino app) network connects to cncBuddyUI (desktop app), the configuration parameters are transmitted and written onto the SDCard (\SD\config.txt).
Every warm boot now reads \SD\config.txt at startup and loads the configuration parameters into the appropriate application variables.
After several weeks of usage, I find this method preferable and easier to customize. Check out cncBuddy on Github.
I am looking for assistance with the proper GDB / OpenOCD initialization and run commands (external tools) to use within Eclipse for flash and RAM debugging, as well as the proper modifications or additions that need to be incorporated in a makefile for flash vs RAM building for this MCU, if this matters of course.
MCU: STM32F103VET6
I am using Eclipse Helios with Zylin Embedded CDT, Yagarto Tools and Bins, OpenOCD 0.4, and have an Olimex ARM-USB-OCD JTAG adapter.
I have already configured the ARM-USB-OCD and added it as an external tool in Eclipse. For initializing OpenOCD I used the following command in Eclipse. The board config file references the stm32 MCU:
openocd -f interface/olimex-arm-usb-ocd-h.cfg -f board/stm32f10x_128k_eval.cfg
When I run this within Eclipse everything appears to be working (GDB Interface, OpenOCD finds the MCU, etc). I can also telnet into OpenOCD and run commands.
So, I am stuck on the next part; initialization and commands for flash and RAM debugging, as well as erasing flash.
I read through several tutorials, and scoured the net, but have not been able to find anything particular to this processor. I am new to this, so I might not be recognizing an equivalent product for an example.
I'm working with the same tool chain to program and debug a STM32F107 board. Following are my observations to get an STM32Fxxx chip programmed and debugged under this toolchain.
Initial Starting Point
So at this point you've got a working OpenOCD to ARM-USB-OCD connection and so you should be all set on that end. Now the work is on getting Eclipse/Zylin/Yagarto GDB combination to properly talk to the STM32Fxxx through the OpenOCD/Olimex connection. One thing to keep in mind is that all the OpenOCD commands to issue are the run mode commands. The configuration scripts and command-line options to invoke the OpenOCD server are configuration mode commands. Once you issue the init command then the server enters run mode which opens up the set of commands you'll need next. You've probably done it somewhere else but I tack on a '-c "init"' option when I call the OpenOCD server like so:
openocd -f /path to scripts/olimex-arm-usb-ocd-h.cfg -f /path to targets/stm32f107.cfg -c "init"
The following commands I issue next are done by the Eclipse Debug Configurations dialogue. Under the Zylin Embedded debug (Native) section, I create a new configuration, give it a name, Project (optional), and absolute path to the binary that I want to program. Under the Debugger tab I set the debugger to Embedded GDB, point to the Yagarto GDB binary path, don't set a GDB command file, set GDB command set to Standard, and the protocol to mi.
The Commands Tab - Connect GDB to OpenOCD
So the next tab is the Commands tab, and that's where the meat of the issue lies. You have two boxes: Initialize and Run. I'm not sure exactly what the difference is, except to guess that they occur pre- and post-invocation of GDB. Either way, I haven't noticed a difference in how my commands are run.
But anyway, following the examples I found on the net, I filled the Initialize box with the following commands:
set remote hardware-breakpoint-limit 6
set remote hardware-watchpoint-limit 4
target remote localhost:3333
monitor halt
monitor poll
The first two lines tell GDB how many hardware breakpoints and watchpoints the target has; the OpenOCD manual (Section 20.3) says GDB can't query for that information, so I tell it myself. The next line commands GDB to connect to the remote target at localhost over port 3333. The monitor lines are commands that GDB passes on to the target without taking any action itself; in this case the target is OpenOCD, and I'm giving it the command halt. After that I tell OpenOCD to switch to an asynchronous mode of operation, since some of the following operations take a while and it's useful not to have OpenOCD block and wait for every operation.
Sidenote #1: If you're ever in doubt about the state of GDB or OpenOCD then you can use the Eclipse debug console to send commands to GDB or OpenOCD (via GDB monitor commands) after invoking this debug configuration.
The Commands Tab - Setting up the User Flash
Next are commands I give in the Run commands section:
monitor flash probe 0
monitor flash protect 0 0 127 off
monitor reset halt
monitor stm32x mass_erase 0
monitor flash write_image STM3210CTest/test_rom.elf
monitor flash protect 0 0 127 on
disconnect
target remote localhost:3333
monitor soft_reset_halt
to be explained in the following sections...
Setting up Access to User Flash Memory
First I issue an OpenOCD query to see if it can find the flash module and report the proper address. If it responds that it found the flash at address 0x08000000 then we're good. The 0 at the end specifies to get information about flash bank 0.
Sidenote #2: The STM32Fxxx part-specific data sheets have a memory map in section 4. Very useful to keep on hand as you work with the chip. Also as everything is accessed as a memory address, you'll come to know this layout like the back of your hand after a little programming time!
So after confirming that the flash has been properly configured we invoke the command to turn off write protection to the flash bank. PM0075 describes everything you need to know about programming the flash memory. What you need to know for this command is the flash bank, starting sector, ending sector, and whether to enable or disable write protection. The flash bank is defined in the configuration files you passed to OpenOCD and was confirmed by the previous command. Since I want to disable protection for the entire flash space I specify sectors 0 to 127. PM0075 explains how I got that number as it refers to how the flash memory is organized into 2KB pages for my (and your) device. My device has 256KB of flash so that means I have 128 pages. Your device has 512KB of flash so you'll have 256 pages. To confirm that your device's write-protection has been disabled properly, you can check the FLASH_WRPR register at address 0x40022020 using the OpenOCD command:
monitor mdw 0x40022020
The resulting word that it prints will be 0xffffffff which means all pages have their write protection disabled. 0x00000000 means all pages have write protection enabled.
Sidenote #3: On the subject of the memory commands, I bricked my chip twice while messing with the option bytes at the block starting at address 0x1ffff800. The first time I set the read protection on the flash (kind of hard to figure out what you're doing if you do that), the second time I set the hardware watchdog, which prevented me from doing anything afterwards since the watchdog kept firing! I fixed it by using the OpenOCD memory access commands. Moral of the story: with great power comes great responsibility... Or another take is that if I shoot myself in the foot, I can still fix things via JTAG.
Sidenote #4: One thing that'll happen if you try to write to protected flash memory is the FLASH_SR:WRPRTERR bit will be set. OpenOCD will report a more user-friendly error message.
Erasing the Flash
So after disabling the write protection, we need to erase the memory that you want to program. I do a mass erase, which erases everything; you also have the option to erase by sector or address (I think). Either way, you need to erase before programming, as the hardware checks for erasure before allowing a write to occur. If the FLASH_SR:PGERR bit (0x4002200c) ever gets set during programming, then you know you haven't erased that chunk of memory yet.
Sidenote #5: Erasing a bit in flash memory means setting it to 1.
Programming Your Binary
The next two lines after the erasure write the binary image to the flash and re-enable the write protection. There isn't much more to say that isn't covered by PM0075. Basically, any error that occurs when you issue flash write_image is probably related to the flash protection not being disabled. It's probably NOT OpenOCD; though if you're curious, you can enable the debug output and follow what it does.
GDB Debugging
So finally after programming I disconnect GDB from the remote connection and then reconnect it to the target, do a soft-reset, and my GDB is now ready to debug. This last part I just figured out last night as I was trying to figure out why, after programming, GDB wouldn't properly stop at main() after reset. It kept going off into the weeds and blowing up.
My current thinking and from what I read in the OpenOCD and GDB manuals is that the remote connection is, first and foremost, meant to be used between GDB and a target that has already been configured and running. Well I'm using GDB to configure before I run so I think the symbol table or some other important info gets messed up during the programming. The OpenOCD manual says that the server automatically reports the memory and symbols when GDB connects but all that info probably becomes invalid when the chip gets programmed. Disconnecting and reconnecting I think refreshes the info GDB needs to debug properly. So that has led me to create another Debug Configuration, this one just connects and resets the target since I don't necessarily need to program the chip every time I want to use GDB.
Whew! Done! Kind of long but this took me 3 weekends to figure out so isn't too terribly bad I think...
Final sidenote: During my time debugging I found that OpenOCD debug output to be invaluable to me understanding what OpenOCD was doing under the covers. To program a STM32x chip you need to unlock the flash registers, flip the right bits, and can only write a half-word at a time. For a while I was questioning whether OpenOCD was doing this properly but after looking through the OpenOCD debug output and comparing it against what the PM0075 instructions were, I was able to confirm that it did indeed follow the proper steps to do each operation. I also found I was duplicating steps that OpenOCD was already doing so I was able to cut out instructions that weren't helping! So moral of the story: Debug output is your friend!
I struggled getting JLink to work with a STM3240XX and found a statement in the JLink GDB server documentation saying that after loading flash you must issue a "target reset":
"When debugging in flash the stack pointer and the PC are set automatically when the target is reset after the flash download. Without reset after download, the stack pointer and the PC need to be initialized correctly, typically in the .gdbinit file."
When I added a "target reset" in the Run box of the debugger Setup of Eclipse, suddenly everything worked. I did not have this problem with a Kinetis K60.
The document also explains how to manually set the stack pointer and pc directly if you don't want to issue a reset. It may not be the disconnect/connect that solves the problem but the reset.
What I use after the last line in the Commands tab 'Run' box is:
symbol-file STM3210CTest/test_rom.elf
thbreak main
continue
The thbreak main line is what makes GDB stop at main.