Is there any way of simulating limited or no 3G/Wi-Fi/EDGE connectivity when using the iPhone Simulator?
Is it the variations in speed you wish to test? Or access to each technology?
If it's speed, then you could use the following ipfw trick, courtesy of Craig Hockenberry of the Iconfactory, to limit connectivity to a given host. In this example it's twitter.com, and the script limits the speed of all connections to and from that host.
It's a bash script; if you're doing iPhone development you'll be on a Mac, so just create the file and run it in Terminal.
#!/bin/bash

# configuration
host="twitter.com"

# usage
if [ "$*" == "" ]; then
    echo "usage: $0 [off|fast|medium|slow]"
    exit
fi

# remove any previous firewall rules
sudo ipfw list 10 > /dev/null 2>&1
if [ $? -eq 0 ]; then
    sudo ipfw delete 10 > /dev/null 2>&1
fi
sudo ipfw list 11 > /dev/null 2>&1
if [ $? -eq 0 ]; then
    sudo ipfw delete 11 > /dev/null 2>&1
fi

# process the command line option
if [ "$1" == "off" ]; then
    # add rules to deny any connections to the configured host
    sudo ipfw add 10 deny tcp from $host to me
    sudo ipfw add 11 deny tcp from me to $host
else
    # create a pipe with limited bandwidth
    bandwidth="100Kbit"
    if [ "$1" == "fast" ]; then
        bandwidth="300Kbit"
    elif [ "$1" == "slow" ]; then
        bandwidth="10Kbit"
    fi
    sudo ipfw pipe 1 config bw $bandwidth

    # add rules to use the bandwidth-limited pipe
    sudo ipfw add 10 pipe 1 tcp from $host to me
    sudo ipfw add 11 pipe 1 tcp from me to $host
fi
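Assuming you saved the script as throttle.sh (the name is arbitrary) and made it executable, usage would look like this; the last line removes the rules by hand once you're done testing:
chmod +x throttle.sh
./throttle.sh slow      # cap twitter.com traffic at 10Kbit
./throttle.sh fast      # cap it at 300Kbit
./throttle.sh off       # block the host entirely
sudo ipfw delete 10 11  # restore normal connectivity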
You might want to take a look at SpeedLimit, a Preference Pane for OS X that allows you to throttle bandwidth and control latency.
If you have iPhone tethering, you can turn off your cable modem/ADSL connection and route your internet through your iPhone. This method works really well if your carrier is AT&T. If you don't have AT&T as your carrier, you'll have to try one of the other methods to simulate a crappy connection.
Another lo-fi solution is to wrap your home wireless router in tin foil, or put it in a metal box. What you generally want to simulate is a crappy connection, not just a slow one. The firewall rules will slow the connection, but won't drop random packets.
Since you're on a Mac, you can use Dummynet. It plugs into ipfw, but can also simulate packet loss. Here's a typical ipfw rule using the Dummynet module:
ipfw add 400 prob 0.05 deny src-ip 10.0.0.0/8
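For reference, Dummynet pipes also let you combine bandwidth, latency, and packet loss in one place. A sketch (the rule and pipe numbers are arbitrary; plr is the packet loss rate, so 0.1 drops roughly 10% of packets):
sudo ipfw pipe 1 config bw 100Kbit/s delay 300 plr 0.1
sudo ipfw add 10 pipe 1 tcp from any to me
sudo ipfw add 11 pipe 1 tcp from me to any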
You can test having no network at all by turning your AirPort off :-)
For finer control, Neil's ipfw suggestion is the best way.
Related
I set up a netcat video stream from my RPi and I am accessing it with OpenCV in the following way:
videoStream = cv2.VideoCapture("tcp://<my_ip>:<my_port>/")
...
videoStream.release()
Unfortunately I cannot connect to the stream multiple times without reinitializing it. How does OpenCV treat my TCP connection? Does .release() properly close the socket, or what is the right way to close it?
I would comment but I do not have enough points. I had a similar issue. Ultimately, what worked for me was to run netcat with the -k option, which does allow reconnecting:
On the RPi:
/opt/vc/bin/raspivid -n -t 0 -w 640 -h 360 -fps 30 -ih -fl -l -o - | /bin/nc -klvp 5000
For nc, the -k option keeps the port listening after the first client disconnects, thereby allowing you to reconnect. You won't need the -v option; it just adds some verbosity.
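A quick way to see locally what -k changes (flag spelling varies a bit between netcat variants; this mirrors the nc invocation used above):
# terminal 1: a listener that survives client disconnects
nc -klvp 5000
# terminal 2: connect, disconnect, and reconnect at will
echo hello | nc localhost 5000
echo again | nc localhost 5000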
Another alternative is to play the stream directly on the receiver (Ubuntu, Win10):
nc x.x.x.x 5000 | mplayer -fps 200 -demuxer h264es -
or
gst-launch-1.0 -v tcpclientsrc host=10.60.66.237 port=5000 ! decodebin ! autovideosink
Python code with OpenCV:
import cv2

cap = cv2.VideoCapture("tcp://10.60.66.237:5000")
while True:
    ret, frame = cap.read()
    if not ret:  # bail out if the stream ended or the read failed
        break
    cv2.imshow('frame', frame)
    # 'q' quits; use any key you prefer
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Disconnect and reconnect all you want :)
I know that Zabbix can monitor any service on a Linux machine via two options:
scan the particular TCP or UDP port on which the service is bound
or count the service's processes with proc.num[<processname>]
This is totally counter-intuitive, because I can spawn processes with the same executable name and they will deceive Zabbix. I'd prefer to use the standard service <servicename> status or systemctl status name.service tools, but there is no standard way to call them from Zabbix except system.run[cmd].
Could you help me write templates for monitoring a particular service's state? We want to support different OSes, like CentOS 7 and Ubuntu 14.04 and 16.04. It's a pity, but service <servicename> status behaves completely differently across these operating systems.
You can also add the following UserParameters in zabbix_agentd.conf to monitor service status on systemd systems. On non-systemd systems the OS doesn't really track service status; the various init scripts' "status" arguments are often unreliable.
UserParameter=systemd.unit.is-active[*],systemctl is-active --quiet '$1' && echo 1 || echo 0
UserParameter=systemd.unit.is-failed[*],systemctl is-failed --quiet '$1' && echo 1 || echo 0
UserParameter=systemd.unit.is-enabled[*],systemctl is-enabled --quiet '$1' && echo 1 || echo 0
And then, e.g. for sshd status, create an item with a key like:
systemd.unit.is-active[sshd]
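If it helps, a trigger that fires when the unit goes inactive might look like this (old-style Zabbix expression syntax; the host name web01 is an example):
{web01:systemd.unit.is-active[sshd].last()}=0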
If your Linux services are managed by systemd (CentOS 7+, Ubuntu 16.04+, ...), then you can use https://github.com/cavaliercoder/zabbix-module-systemd. It uses standard systemd D-Bus communication, which is what systemctl does under the hood.
For CentOS 6 it can be done with:
UserParameter=check_service_status_asterisk,sudo service asterisk status 2> /dev/null | grep -q "is running";echo $?
For CentOS 7 or similar it can be created with:
UserParameter=check_service_status_grafana,systemctl status grafana-server 2> /dev/null |sed -n 3p |grep -q "running";echo $?
or
UserParameter=check_service_status[*],systemctl status $1 2> /dev/null |sed -n 3p |grep -q "running";echo $?
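Note that these keys echo the shell exit status of grep, so 0 means the service is running and 1 means it is not, which is the opposite of the systemd is-active items above. A hypothetical item key and matching trigger (host and service names are examples; old-style expression syntax):
check_service_status[grafana-server]
{monitoring-host:check_service_status[grafana-server].last()}<>0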
I would like to connect to a VPN on start-up of OSMC.
Environment:
installed OSMC on a Raspberry Pi 2
downloaded, compiled and installed Shrew Soft VPN on the device
As user 'osmc' via SSH:
> sudo iked starts the daemon successfully
> ikec -r "test.vpn" -a starts the client, loads the config and connects successfully
rc.local:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
sudo iked >> /home/osmc/iked.log 2>> /home/osmc/iked.error.log &
ikec -a -r "test.vpn" >> /home/osmc/ikec.log 2>> /home/osmc/ikec.error.log &
exit 0
After the Raspberry Pi starts, iked is visible as a process with ps -e,
but ikec is not running.
Running /etc/rc.local manually (osmc@osmc:~$ /etc/rc.local) starts the script and connects to the VPN successfully.
Problem:
Why does the script not work correctly on start-up?
Thank you for your help!
I was also looking to do the same thing as you and ran into the same problem. I'm no Linux expert, but I did figure out a workaround.
I created a script called ikec_after_reboot.sh and it looks like this...
$ cat ikec_after_reboot.sh
#!/bin/bash
echo "Starting ikec"
ikec -r test.vpn -a
I then installed cron.
sudo apt-get update
sudo apt-get install cron
Edit the crontab as root and run the ikec script 60 seconds after reboot:
sudo crontab -e
SHELL=/bin/bash
@reboot sleep 60 && /home/osmc/ikec_after_reboot.sh >> /home/osmc/ikec.log 2>&1
Now edit your /etc/rc.local file and add the following.
sudo iked >> /home/osmc/iked.log 2>> /home/osmc/iked.error.log &
exit 0
Hopefully, this is helpful to you.
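Since OSMC is systemd-based, a cleaner long-term option might be a unit ordered after the network is up, which would also address what looks like the underlying problem (rc.local firing before the network is ready). This is an untested sketch; the unit name, binary paths, and Type=forking (which assumes iked daemonizes) are all assumptions to adjust for your install:
# /etc/systemd/system/shrewvpn.service
[Unit]
Description=Shrew Soft VPN client
Wants=network-online.target
After=network-online.target

[Service]
Type=forking
ExecStart=/usr/local/sbin/iked
ExecStartPost=/usr/local/bin/ikec -r test.vpn -a

[Install]
WantedBy=multi-user.target
Enable it with sudo systemctl enable shrewvpn.service.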
I'm wondering if there is a way to dump an HTTP stream no matter what happens on the server side.
If I use curl --retry 999 or wget --retry-connrefused --waitretry=1 --read-timeout=20 --timeout=15 -t 0, the connection and download are resumed in case of network errors, but if the session is terminated by the server there is no retry. The connection is being ended and that's it. I need a perpetual retry even on FIN.
Do wget or curl have some special parameters to achieve this?
Is there a tool other than wget or curl that can achieve this? A single command would be appreciated, since the output is being piped.
To ride out local failures and the like, you can put it in a while loop in a bash script (with $URL standing in for your download URL):
while true; do
    wget -t 0 --timeout=15 --waitretry=1 --read-timeout=20 --retry-connrefused --continue "$URL"
    if [ $? -eq 0 ]; then break; fi  # check the return value; break if successful
    sleep 1
done
You may also try another solution:
FILENAME=$1
DOWNURL=$2
wget -O "$FILENAME" "$DOWNURL"
FILESIZE=$(stat -c%s "$FILENAME")
while [ "$FILESIZE" -lt 1000 ]; do
    sleep 3
    wget -O "$FILENAME" "$DOWNURL"
    FILESIZE=$(stat -c%s "$FILENAME")
done
You can play with the 1000-byte limit; if the downloaded file comes out smaller, the while loop will try again.
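A curl-based variant of the same idea, in case it is more convenient when piping: -C - asks curl to resume from where the previous attempt stopped, which assumes the server honors Range requests. The URL and output name are placeholders:
URL="http://example.com/file"
OUT="file"
until curl -f -C - -o "$OUT" "$URL"; do
    sleep 1
done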
I am trying to find a good way to tail a file on a remote host. This is on an internal network of Linux machines. The requirements are:
Must be well behaved (no extra processes lying around, or continuing output)
Cannot require someone's pet Perl module.
Can be invoked through Perl.
If possible, doesn't require a custom-built script or utility on the remote machine (regular Linux utilities are fine)
The solutions I have tried are generally of this sort
ssh remotemachine -f <some command>
"some command" has been:
tail -f logfile
Basic tail doesn't work because the remote process continues to write output to the terminal after the local ssh process dies.
$socket = IO::Socket::INET->new(...);
$pid = fork();
if (!$pid) {
    exec("ssh $host -f '<script which connects to socket and writes>'");
    exit;
}
$client = $socket->accept;
while (<$client>) {
    print $_;
}
This works better because there is no output to the screen after the local process exits, but the remote process never figures out that its socket is down, so it lives on indefinitely.
Have you tried
ssh -t remotemachine <some command>
The -t option, from the ssh man page:
-t Force pseudo-tty allocation. This can be used to execute
arbitrary screen-based programs on a remote machine, which
can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
instead of
-f Requests ssh to go to background just before command execution.
This is useful if ssh is going to ask for passwords or passphrases,
but the user wants it in the background.
This implies -n. The recommended way to start X11 programs at a remote
site is with something like ssh -f host xterm.
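In practice that looks something like this (the log path is just an example). When the local ssh exits, the remote side loses its tty and the tail gets hung up, so nothing is left running:
ssh -t remotemachine 'tail -f /var/log/syslog'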
Some ideas:
You could mount it over NFS or CIFS, and then use File::Tail.
You could use one of Perl's SSH modules (there are a number of them), combined with tail -f.
You could try Survlog. It's OS X only, though.
netcat should do it for you.
You can tail files remotely using bash and rsync. The following script is taken from this tutorial: Tail files remotely using bash and rsync
#!/bin/bash
#Code Snippet from and copyright by sshadmincontrol.com
#You may use this code freely as long as you keep this notice.

PIDHOME=/a_place/to/store/flag/file
FILE=`echo ${0} | sed 's:.*/::'`
RUNFILEFLAG=${PIDHOME}/${FILE}.running

if [ -e $RUNFILEFLAG ]; then
    echo "Already running ${RUNFILEFLAG}"
    exit 1
else
    touch ${RUNFILEFLAG}
fi

hostname=$1  # host name to remotely access
log_dir=$2   # log directory on the remote host
log_file=$3  # remote log file name
username=$4  # username to use to access the remote host
log_base=$5  # where to save the log locally

ORIGLOG="$log_base/$hostname/${log_file}.orig"
INTERLOG="$log_base/$hostname/${log_file}.inter"
FINALLOG="$log_base/$hostname/${log_file}.log"

rsync -q -e ssh $username@$hostname:$log_dir/$log_file ${ORIGLOG}
grep -Ev ".ico|.jpg|.gif|.png|.css" ${ORIGLOG} > ${INTERLOG}

if [ ! -e $FINALLOG ]; then
    cp ${INTERLOG} ${FINALLOG}
else
    LINE=`tail -1 ${FINALLOG}`
    grep -F "$LINE" -A 999999999 ${INTERLOG} \
        | grep -Fv "$LINE" >> ${FINALLOG}
fi

rm ${RUNFILEFLAG}
exit 0
rsync://[USER@]HOST[:PORT]/SRC... [DEST] | tail [DEST] ?
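Expanded into something runnable (host, path, and interval are placeholders), the idea seems to be to sync the file periodically and tail the local copy:
while true; do
    rsync -z user@remotemachine:/var/log/app.log /tmp/app.log
    tail -n 5 /tmp/app.log
    sleep 10
done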
Someone suggested using nc (netcat). This solution does work but is less ideal than just using ssh -t. The biggest problem is that you have to use nc on both sides of the connection and need to do some port discovery on the local machine to find a suitable port over which to connect. Here is the adaptation of the above code to use netcat:
$pid = fork();
if (!$pid) {
    exec("ssh $host -f 'tail -f $filename | nc $localhost $port'");
    exit;
}
exec("nc -l -p $port");
There is File::Tail. Don't know if it helps, though.