Caffe does not make snapshots on SIGINT - neural-network

When I press CTRL+C in the terminal, caffe stops training but does not save a snapshot. How can I fix this?
My solver:
net: "course-work/testing/model.prototxt"
test_iter: 200
test_interval: 500
base_lr: 0.001
momentum: 0.9
weight_decay: 0.005
lr_policy: "fixed"
display: 50
max_iter: 60000
snapshot: 5000
snapshot_format: HDF5
snapshot_prefix: "course-work/testing/by_solver_lr0"
snapshot_after_train: true
solver_mode: CPU
Bash script:
TOOLS=./build/tools
NET_DIR=course-work/testing
$TOOLS/caffe train \
--solver=$NET_DIR/solver_lr0.prototxt 2>&1 | tee $NET_DIR/1.log

Redirecting caffe's output through a pipe to tee might alter the way the OS delivers signals to the processes involved. Try removing | tee to make sure SIGINT reaches caffe.
Note that the caffe tool has two flags:
DEFINE_string(sigint_effect, "stop",
"Optional; action to take when a SIGINT signal is received: "
"snapshot, stop or none.");
DEFINE_string(sighup_effect, "snapshot",
"Optional; action to take when a SIGHUP signal is received: "
"snapshot, stop or none.");
These flags can help you define caffe's behavior on SIGINT and SIGHUP.
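For example, to have caffe write a snapshot before stopping on CTRL+C, pass the flag on the command line. A minimal sketch based on the script above:

$TOOLS/caffe train \
    --solver=$NET_DIR/solver_lr0.prototxt \
    --sigint_effect=snapshot

With this, a SIGINT triggers a snapshot (using your snapshot_prefix) instead of a plain stop.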

A good way to log caffe output is:
GLOG_log_dir=/path/to/log/dir $CAFFE_ROOT/bin/caffe.bin train \
    --solver=/path/to/solver.prototxt
This logs caffe's output live without a pipe, so SIGINT definitely reaches caffe.

Related

U-Boot won't take keyboard input from serial Raspberry Pi Model 3 B

I already have a working config for a Compute Module 3+. Since I need the same setup on a Raspberry Pi 3 Model B, I tried bringing the config over.
Everything is compiled in a buildroot environment. U-Boot v2020.10 is used.
After some small changes regarding the device tree and dtoverlays, I managed to get U-Boot to print on the serial console (as expected), but it ignores all keyboard input.
The following output is produced by U-Boot on the serial console.
EDIT
I used the term serial very loosely here. I'm connected to the serial console with a serial-to-USB adapter and picocom. I applied the miniuart-bt overlay to restore UART0 (/dev/ttyAMA0) on GPIO pins 14/15.
Lastly, I configured U-Boot to use the PL011.
I left out support for the mini-UART, as that would break the output too.
This configuration works just fine on the Compute Module, but doesn't register input on the Model 3B.
EDIT
I moved the working u-boot.bin from the CM3 to the Model B to see what happens. It seemingly works, as the two boards are close enough, but the same problem occurs. The other way around, though, it does not work. So it is potentially not a problem with U-Boot but with the Model B configuration.
Isa-Boot>

U-Boot 2020.10 (Mar 24 2022 - 12:18:38 +0000)

DRAM: 924 MiB
RPI 3 Model B (0xa02082)
MMC: mmc@7e202000: 0, sdhci@7e300000: 1
In: serial
Out: vidconsole
Err: vidconsole
Hit any key to stop autoboot: 0
Neither can I stop autoboot nor can I use the shell to complete the boot script.
I have tried what feels like a million configurations and I'm out of ideas as to what could cause this behavior. I also never experienced this with the CM module.
RPi setup config.txt:
enable_uart=1
start_file=start.elf
fixup_file=fixup.dat
kernel=u-boot.bin
gpu_mem=100
dtoverlay=miniuart-bt
dtparam=spi=on
device_tree=bcm2710-rpi-3-b.dtb
dtoverlay=sc16is750-spi0-ce0
U-Boot defconfig:
CONFIG_ARM=y
CONFIG_ARCH_CPU_INIT=y
CONFIG_ARCH_BCM283X=y
CONFIG_SYS_TEXT_BASE=0x00008000
CONFIG_TARGET_RPI_3_32B=y
CONFIG_SYS_MALLOC_F_LEN=0x2000
CONFIG_NR_DRAM_BANKS=1
CONFIG_ENV_SIZE=0x4000
CONFIG_DEFAULT_DEVICE_TREE="bcm2837-rpi-3-b"
CONFIG_DISTRO_DEFAULTS=y
CONFIG_OF_BOARD_SETUP=y
CONFIG_SYS_STDIO_DEREGISTER=y
CONFIG_MISC_INIT_R=y
# CONFIG_DISPLAY_CPUINFO is not set
# CONFIG_DISPLAY_BOARDINFO is not set
CONFIG_SYS_PROMPT="Isa-Boot> "
CONFIG_CMD_GPIO=y
CONFIG_CMD_MMC=y
CONFIG_CMD_USB=y
CONFIG_CMD_FS_UUID=y
CONFIG_OF_EMBED=y
# CONFIG_ENV_IS_IN_FAT is not set
CONFIG_SYS_RELOC_GD_ENV_ADDR=y
CONFIG_ENV_VARS_UBOOT_RUNTIME_CONFIG=y
# CONFIG_NET is not set
CONFIG_DM_MMC=y
# CONFIG_MMC_HW_PARTITIONING is not set
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_BCM2835=y
CONFIG_DM_ETH=y
CONFIG_PINCTRL=y
# CONFIG_PINCTRL_GENERIC is not set
# CONFIG_REQUIRE_SERIAL_CONSOLE is not set
# CONFIG_BCM283X_MU_SERIAL is not set
CONFIG_USB=y
CONFIG_DM_USB=y
CONFIG_DM_VIDEO=y
# CONFIG_VIDEO_BPP8 is not set
# CONFIG_VIDEO_BPP16 is not set
CONFIG_SYS_WHITE_ON_BLACK=y
CONFIG_CONSOLE_SCROLL_LINES=10
CONFIG_PHYS_TO_BUS=y
CONFIG_OF_LIBFDT_OVERLAY=y
From the U-Boot documentation, "U-Boot Environment Variables":
bootdelay: After reset, U-Boot will wait this number of seconds before it executes the contents of the bootcmd variable. During this time a countdown is printed, which can be interrupted by pressing any key.
Set this variable to 0 to boot without delay. Be careful: depending on the contents of your bootcmd variable, this can prevent you from entering interactive commands again forever!
Is this value 0 in your case?
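You can't check it on the broken board since it ignores input, but on the working CM3 (or from Linux, with the u-boot-tools package and a configured /etc/fw_env.config) it is quick to verify. A minimal sketch, not verified on this exact setup:

# in the U-Boot shell, on a board that accepts input
printenv bootdelay

# or from a running Linux system with access to the U-Boot environment
fw_printenv bootdelay

If I remember the semantics correctly, -1 disables autoboot entirely, while -2 boots with no delay and no key check at all.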

Skylake hardware brightness changes too granular

I'm having a problem with display brightness changes on a Skylake i7-6700HQ laptop (HD 530 graphics):
If the value changes by 20, it works.
If the value changes by 19, it only works in multiple-hundred jumps.
If the value changes by <19, there is no brightness change at all.
However, on my old Ivy Bridge laptop (i7-3630QM, HD 4000 graphics), brightness successfully changes in steps of 1.
Here is the script for testing:
#!/bin/bash
# Test all brightness levels from 1 to max_brightness
# For Intel i7-6700HQ HD 530 graphics:
# - When the change is 18 steps, brightness doesn't change at all.
# - When the change is 19 steps, brightness changes in multi-hundred point jumps.
# - When the change is 20 steps, each change is applied as expected.
# For Intel i7-3630QM, steps of 1 work fine!
if [[ $(id -u) != 0 ]]; then
    echo >&2 "$0 must be called with sudo powers"
    exit 1
fi
cd /sys/class/backlight/*/ || exit 1
max=$(cat max_brightness)
save=$(cat brightness)
for (( i=1; i < max; i=i+20 )); do
    echo $i > brightness
    echo "setting brightness level: $i"
    sleep .005
done
echo $save > brightness
echo "resetting brightness level back to: $save"
exit 0
I think my Skylake is working fine, other than the weird temperature reported for the pch_skylake sensor:
$ paste <(cat /sys/class/thermal/thermal_zone*/type) <(cat /sys/class/thermal/thermal_zone*/temp) | column -s $'\t' -t | sed 's/...$/.0°C/'
INT3400 Thermal 20.0°C
SEN1 56.0°C
SEN2 52.0°C
SEN3 57.0°C
SEN4 61.0°C
pch_skylake -44.0°C
B0D4 50.0°C
x86_pkg_temp 52.0°C
Other than that, the Linux Intel microcode is definitely activated on the old laptop (Ubuntu 16.04) but may not be loaded on the new laptop (Ubuntu 16.04.5).
Edit: Rebooted with Ubuntu 18.04.1 LTS, kernel 4.15.0-36, and the same behaviour is witnessed.
Confirmation: I wonder if others have a Skylake laptop and can confirm hardware brightness works the same way.
Question: For the app I'm developing, do I have to put in a feature for each user to test the smallest granular brightness change supported?
Backlight brightness is separate from the GPU proper; the iGPU that's part of the CPU chip just produces pixel data for the LCD, e.g. as a DisplayPort output (or, in laptops, often the lower-voltage eDP signalling).
Note that on a desktop you can't adjust the backlight brightness with software at all; there's no connection between the normal GPU hardware / drivers and the backlight.
The software backlight control in laptops is pretty much separate from the iGPU, and has nothing to do with whether it's Skylake or Ivy Bridge. The backlight control is a separate hardware device with separate I/O ports (or memory-mapped I/O registers, or whatever).
Finer granularity backlight adjustment is a property of the laptop design, not the CPU. Specifically of the backlight technology and controller hardware.
(This is my understanding, but I haven't actually looked at GPU or backlight / ACPI driver code in enough detail to be 100% sure this is accurate.)
I have no idea whether it's possible for software to query the true / meaningful granularity; this answer only points out the misconception that it depends on the GPU or GPU drivers.
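That said, if you want your app to probe it empirically, the sysfs backlight class exposes actual_brightness next to brightness, so you can test whether a given step actually registers. A rough sketch, assuming actual_brightness reflects the real hardware state on the machine in question (which is not guaranteed everywhere):

#!/bin/bash
# Probe the smallest brightness step the hardware actually applies.
# Assumes a single device under /sys/class/backlight and root privileges.
cd /sys/class/backlight/*/ || exit 1
save=$(cat brightness)
base=$(( $(cat max_brightness) / 2 ))
for step in 1 5 10 15 19 20 25; do
    echo $base > brightness
    sleep .1
    before=$(cat actual_brightness)
    echo $(( base + step )) > brightness
    sleep .1
    after=$(cat actual_brightness)
    if [[ $after != $before ]]; then
        echo "smallest step that registered: $step"
        break
    fi
done
echo $save > brightness   # restore the original level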

inotifywait not detected in /sys/class/gpio/gpioXX/ (raspberry pi)

I have connected two Raspberry Pis using GPIO:
The first one is the master and uses GPIO2 (and GND...)
The second one is a slave and uses GPIO0 and GPIO1
All are switched through a relay card
I put GPIO1 and GPIO0 on direction "in" and GPIO2 on direction "out":
echo in > /sys/class/gpio/gpioXX/direction
On my master (GPIO2, direction = out), when I set the pin GPIO2 to 1, the two pins on my slave turn to 1 too. So, no problem here.
I added a shell script using inotifywait on one folder (for example /sys/class/gpio/gpio18/, 18 for GPIO1).
When I'm on my SLAVE and I try to modify the value of /sys/class/gpio/gpio18/value with echo 1 > .../value, inotifywait catches a modification, but the value doesn't change (-bash: echo: write error: Operation not permitted; that's normal because the direction is "in").
When I'm on my MASTER and I modify the value of gpio27 (corresponding to GPIO2), all the value files (GPIO0, GPIO1 and GPIO2) change, but my inotifywait doesn't catch the modification on gpio/gpio18/value (the content of the file changes from 0 to 1 or the other way around).
I can't say for sure what is wrong. But I would try running a simple script like this and see what happens:
while inotifywait -e modify /sys/class/gpio/gpio18/; do echo "Hello"; done
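If that still misses the changes, note that sysfs attributes are not guaranteed to generate inotify events when the kernel itself updates them behind the scenes, which would explain what you're seeing; the supported mechanism for hardware edges is setting the gpio18/edge attribute and using poll() on the value file. As a hedged variation to try first, inotifywait's monitor mode watches the value file continuously instead of re-arming in a loop:

inotifywait -m -e modify /sys/class/gpio/gpio18/value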

Julia, handle keyboard interrupt

Title says it all. How can I handle or catch a SIGINT in Julia? From the docs I assumed I just wanted to catch InterruptException using a try/catch block like the following:
try
while true
println("go go go")
end
catch ex
println("caught something")
if isa(ex, InterruptException)
println("it was an interrupt")
end
end
But I never enter the catch block when I kill the program with ^C.
edit: The code above works as expected from the julia REPL, just not in a script.
I see the same behavior as alto, namely that SIGINT kills the entire process when my code is run as a script, but that it is caught as an exception when run in the REPL. My version is quite up to date and looks rather similar to that of tholy:
julia> versioninfo()
Julia Version 0.3.7
Commit cb9bcae* (2015-03-23 21:36 UTC)
Platform Info:
System: Linux (x86_64-linux-gnu)
CPU: Intel(R) Core(TM) i7-3610QM CPU @ 2.30GHz
WORD_SIZE: 64
BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Sandybridge)
LAPACK: libopenblas
LIBM: libopenlibm
LLVM: libLLVM-3.3
Digging through the source, I found hints that Julia's interrupt behavior is determined by a jl_exit_on_sigint option, which can be set via a ccall. jl_exit_on_sigint is 0 for the REPL, but it looks as if init.c sets it to 1 when running a Julia program file from the command line.
Adding the appropriate ccall makes alto's code works regardless of the calling environment:
ccall(:jl_exit_on_sigint, Void, (Cint,), 0)
try
while true
println("go go go")
end
catch ex
println("caught something")
if isa(ex, InterruptException)
println("it was an interrupt")
end
end
This does seem to be a bit of a hack. Is there a more elegant way of selecting the interrupt behavior of the environment? The default seems quite sensible, but perhaps there should be a command line option to override it.
For Julia >= 1.5, there is Base.exit_on_sigint. From the docs (retrieved 2022-04-19):
Set exit_on_sigint flag of the julia runtime. If false, Ctrl-C (SIGINT) is capturable as InterruptException in try block. This is the default behavior in REPL, any code run via -e and -E and in Julia script run with -i option.
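So on recent Julia the ccall hack is unnecessary: either call Base.exit_on_sigint(false) at the top of the script, or run the script with the -i option, which per the docs above already defaults to capturable SIGINT. A minimal sketch (hypothetical file name):

julia -i handle_sigint.jl   # CTRL+C now raises InterruptException instead of exiting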
Works for me. I'm running
julia> versioninfo()
Julia Version 0.3.0-prerelease+695
Commit 47915f3* (2013-12-27 05:27 UTC)
DEBUG build
Platform Info:
System: Linux (x86_64-linux-gnu)
CPU: Intel(R) Core(TM) i7 CPU L 640 @ 2.13GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY)
LAPACK: libopenblas
LIBM: libopenlibm
but I expect it's not necessary to be fully up-to-date for this to work (I'd guess 0.2 should be fine too).

VOD server performance test

I have two VOD (RTSP) servers, each on a different machine in a local network at home (VLC and Darwin Streaming Server).
What I am trying to do is a performance test that goes as follows:
* send in 10 requests, then 50, then 100;
* redo the same but request multiple files instead of emulating multiple accesses to a single file;
* output statistics (speed, quality... etc).
What I have right now is openRTSP, which uses "-Q" to output QoS info, but it is nowhere near what I need.
What I need is a free tool that can help me with this... all the ones I found (diversifEye and IxLoad) are not free.
Could anyone please suggest something useful?
I found a method that should do. It is based on openRTSP with "-Q" for QoS statistics.
The trick is how to redirect the data to a file, as the QoS info only shows up after the feed is cut off. I wrote the following script to manage N readings of a video feed/playlist. It will create a file containing the QoS info.
#!/bin/bash
f_rtsp(){
    clear
    echo -e "ENTER THE NUMBER OF STREAM USERS:"
    echo -n "USER:"
    read usr
    # Redirect once, before the loop; doing it inside the loop would
    # truncate the results file on every iteration.
    exec &> "$HOME/Desktop/results"
    for (( i=1; i <= usr; i++ ))
    do
        echo -e "******************************* $i *****************************"
        openRTSP -Q rtsp://<url>/<playlist-name>.sdp &
    done
}
while : # Loop forever
do
    cat <<!
Benchmark.RTSP
1.RTSP consumers
2.EXIT
!
    echo -n "YOUR CHOICE? :"
    read choice
    case $choice in
        1|[rR]) f_rtsp ;;
        2|[eE]) exit ;;
        *) echo "\"$choice\" is not valid"; sleep 2 ;;
    esac
done
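One practical note on collecting the results: openRTSP only prints the "-Q" QoS summary when a session ends, so the backgrounded sessions above report nothing until they are terminated. If I remember openRTSP's options correctly, -d <seconds> limits the playback duration, so each run ends (and reports) on its own; treat the flag as an assumption to check against your openRTSP version:

openRTSP -Q -d 60 rtsp://<url>/<playlist-name>.sdp &   # assumed: stop after 60 s and print QoS stats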