"Invalid or incomplete multibyte or wide character" running v4l2-ctl - raspberry-pi

Using iTerm, I'm SSHing into my Raspberry Pi (Raspbian) to control a home security system I've set up.
I need to change the focus of my camera, so I'm running v4l2-ctl -c focus_absolute=0 on my terminal.
I've been doing this for weeks, and it hasn't given me any issues. Today, when running the command, I started getting the following error:
VIDIOC_S_EXT_CTRLS: failed: Invalid or incomplete multibyte or wide character
focus_absolute: Invalid or incomplete multibyte or wide character
What could be causing it to suddenly be throwing this error? I've been running the exact same command for weeks without a problem.

I found the answer here: https://askubuntu.com/a/388045/814834. It turns out that when one of the camera's "automatic" settings is enabled (for example, auto_exposure set to 1, i.e. true), modifying any setting controlled by that automatic setting (in that example, absolute_exposure) produces this useless error. You first have to set the automatic setting to 0 (false) before you can change the settings it controls.
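For the focus_absolute case from the question, assuming the camera exposes the usual focus_auto control (run v4l2-ctl -l to list the controls your camera actually has), the sequence would presumably look something like:
v4l2-ctl -c focus_auto=0
v4l2-ctl -c focus_absolute=0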

Emacs garbled screen on SLURM interactive node

When I remotely log in to a SLURM interactive node, emacs will sometimes garble the screen. As I describe below, I think the issue is that the SLURM interactive node is messing up the enquiry/acknowledgment (ENQ/ACK) terminal signals and some characters are getting dropped, causing glitches.
Setup
Computer that I actually interact with: MacBook Air (10.13.2)
Terminal: iTerm2 Build 3.1.7
ssh to cluster
SLURM interactive node (i.e. srun --nodes=1 ... --pty /bin/bash)
emacs: GNU Emacs 27.1 used in terminal mode (i.e. emacs -nw)
Sometimes when the screen re-draws, it gets garbled.
It seems to happen more when there are many panes or when moving around lots of text.
Based on this part of the emacs documentation, I tried using C-l (recenter-top-bottom) to re-draw the screen, and that temporarily fixes the current glitch.
By setting $TERM=screen or $TERM=xterm-256color in my .bash_profile, I see different color schemes, but the glitches persist.
Note, I am only seeing the glitches when I log in to the interactive node, not on the head node of the cluster. The fact that it is OK on the login node may provide useful diagnostic information: it makes me suspect that the ENQ/ACK handshake or pad timing is off, so some characters sent from the cluster are getting dropped. This is discussed in the documentation for the tack terminfo diagnostics program.
Running tack from both the login node and the interactive node gives the same values:
$ tack
Using terminfo from: /home/maom/opt/miniconda3/share/terminfo/x/xterm-256color
Name: xterm-256color|xterm with 256 colors
\r ^M (cr) = ^M
\n ^J (ind) = ^J
\b ^H (cub1) = ^H
\t ^I (ht) = ^I
(clear) = ^[[H^[[2J
(home) = ^[[H
ENQ (u9) = ^[[c
ACK (u8) = ^[[?1;2c
Terminal size: 204 x 52. Baud rate: 38400. Frame size: 10.0
Using the Baudrate test on the login node:
1600949 characters per second. Baudrate 52 Done
Using the Baudrate test on the interactive node, the characters-per-second figure is about 30% lower:
1090426 characters per second. Baudrate 52 Done
And, using the test ENQ/ACK handshake on the login node gives:
Testing ENQ/ACK, standby...
This program expects the ENQ sequence to be answered with the ACK character. This will help the program reestablish synchronization when the terminal is overrun with data.
ENQ sequence from (u9): ^[[c
ACK received: ^[[?1;2c
Length of ACK 7. Expected length of ACK 7.
Terminating character found in (u8): c
while using the test ENQ/ACK handshake on the interactive node gives:
Testing ENQ/ACK, standby...
ACK terminating character: c
Is there some way I can update the terminfo to fix the glitches, or should I work with the cluster admin support to work around the issue?
I have experienced the same problem, and I don't know the best solution, but this could help. First, when you do srun it is better to pass -li to bash:
srun --nodes=1 ... --pty /bin/bash -li
This makes sure it loads the interactive login bash profile that you would usually get when you log in normally.
This does not fix the issue completely for me, but if I run tmux in the interactive session and then run emacs, I don't have the garbling problem.
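For reference, the workaround looks roughly like this (assuming tmux is available on the compute node):
srun --nodes=1 ... --pty /bin/bash -li
tmux
emacs -nw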

WinSCP PUT command errors out with "Unknown switch 'rawtransfersettings'"

We have a job located on a Windows server, and this job is responsible for sending files to a Linux box through the WinSCP utility.
We observed the file transfer process failing due to a connection error roughly every other day on average.
We are getting the following error message in the logs:
Upload of file 'xxx_20190103031754.csv' was successful, but error occurred while setting the permissions and/or timestamp.
If the problem persists, turn off setting permissions or preserving timestamp.
Alternatively you can turn on 'Ignore permission errors' option.
General failure (server should provide error description).
In order to fix the issue, I found through Googling that I should set -rawtransfersettings on the put command:
open sftp://xxx@xxx.example.com/ -hostkey="ssh-rsa 1024 xx:xx:xx:xx:xx:xx" -timeout=60 -rawsettings SendBuf=0 SshSimple=1
put -rawtransfersettings IgnorePermErrors=0 PreserveTimeDirs=0 "E:\Final\XXX_ASSIGNMENT_20190416200819.csv" "/<Linux Box Folder Name>/"
But I am getting the error below:
Authenticating with pre-entered password.
Authenticated.
Starting the session...
Session started.
Active session: [1] xxx@xxx.com
Unknown switch 'rawtransfersettings'.
The -rawtransfersettings switch is supported by the latest WinSCP 5.15 only. You are probably using an older version of WinSCP.
Also, if your goal was to enable "Ignore permission errors", you need IgnorePermErrors=1 (0 is the default value).
Side notes: PreserveTimeDirs is not related to your problem and is 0 by default anyway, so you can remove it. And the double slash is suspicious; you should probably use only one.
This should do:
put -rawtransfersettings IgnorePermErrors=1 "E:\Final\XXX_ASSIGNMENT_20190416200819.csv" "/"
But if your server actually does not support preserving timestamps, then you should rather use the -nopreservetime switch. See the documentation for the error message:
When using scripting, add -nopreservetime and -nopermissions switches to put command.
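For example, reusing the file from the question (a sketch only; check both switches against your WinSCP version's scripting documentation):
put -nopreservetime -nopermissions "E:\Final\XXX_ASSIGNMENT_20190416200819.csv" "/"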

fluxbox couldn't connect to XServer - CentOS 6.4

I'm setting up some new VNC servers. I already have this setup working with CentOS 6.3, although I'm not certain that this difference is the real problem.
One of the window managers I'm making available is fluxbox, but when I start it, I always get the following: Error: Couldn't connect to XServer. Here's my setup:
fluxbox: fluxbox-1.1.1-5.el6.x86_64
vnc : tigervnc-server-1.1.0-5.el6_4.1.x86_64
OS : CentOS 6.4
Note that I can start other window managers: Gnome, KDE, openbox, xfce4, etc.
I gutted my ~/.vnc/xstartup script so it only loads an xterm. Then, I tried running startfluxbox &, but still got the error. Obviously, VNC is working, since my xterm opened up OK. I can start firefox, another xterm or other app requiring X, and even fluxbox comes up, but it is worthless in its current state, since it is not connected to the X session.
What is fluxbox looking for? Are there some log files I can look at to give me some clues?
Thanks,
David
CentOS/RHEL 6.4 and up have upgraded libX11 and Xorg.
The $DISPLAY var handling has changed in libX11.
This one in particular is described in this git commit:
http://cgit.freedesktop.org/xorg/lib/libX11/commit/?id=f92e754297ec5fdb81068b56a4435026666224fa
We run our fluxbox with this line in our VNC configs now:
/usr/bin/fluxbox -display "$DISPLAY.0"
OK, I think I've figured out the problem, so I'm answering my own question.
In VNC, I usually specify a display number. (Note, however, that the problem occurs even if vncserver uses the first available display number.) So, I start the vncserver as:
vncserver :17
This should create an X session where my $DISPLAY is set to :17.0, but in CentOS 6.4, the $DISPLAY is set to :17 instead. Apparently, unlike other window managers, fluxbox is unable to handle this inaccuracy. The problem, then, was that fluxbox was trying to connect to :17 and was unable to do so.
My solution, as suggested by someone answering a different problem, was to set $DISPLAY as part of the invocation of fluxbox. So, in my ~/.vnc/xstartup file, I have:
DISPLAY=$DISPLAY.0 startfluxbox &
Note that this may not work for other releases of CentOS, so you might wish to test the release of the box you are using before adding the DISPLAY=... setting to the command.
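For reference, a minimal ~/.vnc/xstartup along these lines illustrates the idea (the xterm line is just the bare-bones client mentioned above):
#!/bin/sh
xterm &
DISPLAY=$DISPLAY.0 startfluxbox &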

mysqldump: Got error: 0: when retrieving data from server

The backup system I am running gives me this error:
mysqldump: Got error: 0: when retrieving data from server
I have been looking around and I failed to find any other report with error code 0. I do not know if that even matters, but I thought it would; otherwise it would be the same number as the others. That's why I made this topic.
Server is CentOS
MySQL version 5.5.33-31.1
What throws me off is that I have another box with the same OS and MySQL version and it works fine.
I have been looking at this site for reference but there is no code 0...
If you need any other information, please tell me and I will add it here.

STM32 GDB/OpenOCD Commands and Initialization for Flash and Ram Debugging

I am looking for assistance with the proper GDB/OpenOCD initialization and run commands (external tools) to use within Eclipse for flash and RAM debugging, as well as the proper modifications or additions that need to be incorporated into a makefile for flash vs. RAM builds for this MCU, if that matters of course.
MCU: STM32F103VET6
I am using Eclipse Helios with Zylin Embedded CDT, Yagarto Tools and Bins, OpenOCD 0.4, and have an Olimex ARM-USB-OCD JTAG adapter.
I have already configured the ARM-USB-OCD and added it as an external tool in Eclipse. For initializing OpenOCD I used the following command in Eclipse. The board config file references the stm32 MCU:
openocd -f interface/olimex-arm-usb-ocd-h.cfg -f board/stm32f10x_128k_eval.cfg
When I run this within Eclipse everything appears to be working (GDB Interface, OpenOCD finds the MCU, etc). I can also telnet into OpenOCD and run commands.
So, I am stuck on the next part; initialization and commands for flash and RAM debugging, as well as erasing flash.
I read through several tutorials, and scoured the net, but have not been able to find anything particular to this processor. I am new to this, so I might not be recognizing an equivalent product for an example.
I'm working with the same toolchain to program and debug an STM32F107 board. Following are my observations on getting an STM32Fxxx chip programmed and debugged under this toolchain.
Initial Starting Point
So at this point you've got a working OpenOCD to ARM-USB-OCD connection and so you should be all set on that end. Now the work is on getting Eclipse/Zylin/Yagarto GDB combination to properly talk to the STM32Fxxx through the OpenOCD/Olimex connection. One thing to keep in mind is that all the OpenOCD commands to issue are the run mode commands. The configuration scripts and command-line options to invoke the OpenOCD server are configuration mode commands. Once you issue the init command then the server enters run mode which opens up the set of commands you'll need next. You've probably done it somewhere else but I tack on a '-c "init"' option when I call the OpenOCD server like so:
openocd -f /path to scripts/olimex-arm-usb-ocd-h.cfg -f /path to targets/stm32f107.cfg -c "init"
The following commands I issue next are done by the Eclipse Debug Configurations dialogue. Under the Zylin Embedded debug (Native) section, I create a new configuration, give it a name, Project (optional), and absolute path to the binary that I want to program. Under the Debugger tab I set the debugger to Embedded GDB, point to the Yagarto GDB binary path, don't set a GDB command file, set GDB command set to Standard, and the protocol to mi.
The Commands Tab - Connect GDB to OpenOCD
So the next tab is the Commands tab, and that's where the meat of the issue lies. You have two boxes, Initialize and Run. I'm not sure exactly what the difference is, except to guess that they occur pre- and post-invocation of GDB. Either way, I haven't noticed a difference in how my commands are run.
But anyway, following the examples I found on the net, I filled the Initialize box with the following commands:
set remote hardware-breakpoint-limit 6
set remote hardware-watchpoint-limit 4
target remote localhost:3333
monitor halt
monitor poll
The first two lines tell GDB how many hardware breakpoints and watchpoints the target has. The OpenOCD Manual, section 20.3, says GDB can't query for that information, so I tell it myself. The next line commands GDB to connect to the remote target on localhost over port 3333. The last two lines are monitor commands, which tell GDB to pass a command on to the target without taking any action itself. In this case the target is OpenOCD: first I give it the command halt, then I tell it to switch to asynchronous mode of operation with poll. As some of the following operations take a while, it's useful not to have OpenOCD block and wait for every operation.
Sidenote #1: If you're ever in doubt about the state of GDB or OpenOCD then you can use the Eclipse debug console to send commands to GDB or OpenOCD (via GDB monitor commands) after invoking this debug configuration.
The Commands Tab - Setting up the User Flash
Next are commands I give in the Run commands section:
monitor flash probe 0
monitor flash protect 0 0 127 off
monitor reset halt
monitor stm32x mass_erase 0
monitor flash write_image STM3210CTest/test_rom.elf
monitor flash protect 0 0 127 on
disconnect
target remote localhost:3333
monitor soft_reset_halt
to be explained in the following sections...
Setting up Access to User Flash Memory
First I issue an OpenOCD query to see if it can find the flash module and report the proper address. If it responds that it found the flash at address 0x08000000 then we're good. The 0 at the end specifies to get information about flash bank 0.
Sidenote #2: The STM32Fxxx part-specific data sheets have a memory map in section 4. Very useful to keep on hand as you work with the chip. Also as everything is accessed as a memory address, you'll come to know this layout like the back of your hand after a little programming time!
So after confirming that the flash has been properly configured we invoke the command to turn off write protection to the flash bank. PM0075 describes everything you need to know about programming the flash memory. What you need to know for this command is the flash bank, starting sector, ending sector, and whether to enable or disable write protection. The flash bank is defined in the configuration files you passed to OpenOCD and was confirmed by the previous command. Since I want to disable protection for the entire flash space I specify sectors 0 to 127. PM0075 explains how I got that number as it refers to how the flash memory is organized into 2KB pages for my (and your) device. My device has 256KB of flash so that means I have 128 pages. Your device has 512KB of flash so you'll have 256 pages. To confirm that your device's write-protection has been disabled properly, you can check the FLASH_WRPR register at address 0x40022020 using the OpenOCD command:
monitor mdw 0x40022020
The resulting word that it prints will be 0xffffffff which means all pages have their write protection disabled. 0x00000000 means all pages have write protection enabled.
Sidenote #3: On the subject of the memory commands, I bricked my chip twice as I was messing with the option bytes at the block starting at address 0x1ffff800. The first time I set the read protection on the flash (kind of hard to figure out what you're doing if you do that); the second time I set the hardware watchdog, which prevented me from doing anything afterwards since the watchdog kept firing off! Fixed it by using the OpenOCD memory access commands. Moral of the story is: With great power comes great responsibility.... Or another take is that if I shoot myself in the foot I can still fix things via JTAG.
Sidenote #4: One thing that'll happen if you try to write to protected flash memory is the FLASH_SR:WRPRTERR bit will be set. OpenOCD will report a more user-friendly error message.
Erasing the Flash
So after disabling the write protection, we need to erase the memory that you want to program. I do a mass erase, which erases everything; you also have the option to erase by sector or address (I think). Either way, you need to erase before programming, as the hardware checks for erasure first before allowing a write to occur. If the FLASH_SR:PGERR bit (0x4002200c) ever gets set during programming, then you know you haven't erased that chunk of memory yet.
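For reference, the sector-based alternative would look something like this for my 256KB part (bank 0, pages 0 through 127; check the flash erase_sector syntax of your OpenOCD version):
monitor flash erase_sector 0 0 127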
Sidenote #5: Erasing a bit in flash memory means setting it to 1.
Programming Your Binary
The next two lines after erasure write the binary image to the flash and re-enable the write protection. There isn't much more to say that isn't covered by PM0075. Basically, any error that occurs when you issue flash write_image is probably related to the flash protection not being disabled. It's probably NOT OpenOCD, though if you're curious you can enable the debug output and follow what it does.
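If you do want that output, one way is to start the server at a higher debug level (a sketch reusing the invocation from above; -d3 is OpenOCD's debug log level and it is very chatty):
openocd -d3 -f /path to scripts/olimex-arm-usb-ocd-h.cfg -f /path to targets/stm32f107.cfg -c "init"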
GDB Debugging
So finally after programming I disconnect GDB from the remote connection and then reconnect it to the target, do a soft-reset, and my GDB is now ready to debug. This last part I just figured out last night as I was trying to figure out why, after programming, GDB wouldn't properly stop at main() after reset. It kept going off into the weeds and blowing up.
My current thinking and from what I read in the OpenOCD and GDB manuals is that the remote connection is, first and foremost, meant to be used between GDB and a target that has already been configured and running. Well I'm using GDB to configure before I run so I think the symbol table or some other important info gets messed up during the programming. The OpenOCD manual says that the server automatically reports the memory and symbols when GDB connects but all that info probably becomes invalid when the chip gets programmed. Disconnecting and reconnecting I think refreshes the info GDB needs to debug properly. So that has led me to create another Debug Configuration, this one just connects and resets the target since I don't necessarily need to program the chip every time I want to use GDB.
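For reference, a sketch of what that connect-and-reset-only configuration might contain in its Initialize box (no flashing or protection commands, since the chip is already programmed):
set remote hardware-breakpoint-limit 6
set remote hardware-watchpoint-limit 4
target remote localhost:3333
monitor soft_reset_halt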
Whew! Done! Kind of long but this took me 3 weekends to figure out so isn't too terribly bad I think...
Final sidenote: During my time debugging I found that OpenOCD debug output to be invaluable to me understanding what OpenOCD was doing under the covers. To program a STM32x chip you need to unlock the flash registers, flip the right bits, and can only write a half-word at a time. For a while I was questioning whether OpenOCD was doing this properly but after looking through the OpenOCD debug output and comparing it against what the PM0075 instructions were, I was able to confirm that it did indeed follow the proper steps to do each operation. I also found I was duplicating steps that OpenOCD was already doing so I was able to cut out instructions that weren't helping! So moral of the story: Debug output is your friend!
I struggled getting JLink to work with a STM3240XX and found a statement in the JLink GDB server documentation saying that after loading flash you must issue a "target reset":
"When debugging in flash the stack pointer and the PC are set automatically when the target is reset after the flash download. Without reset after download, the stack pointer and the PC need to be initialized correctly, typically in the .gdbinit file."
When I added a "target reset" in the Run box of the debugger Setup of Eclipse, suddenly everything worked. I did not have this problem with a Kinetis K60.
The document also explains how to manually set the stack pointer and pc directly if you don't want to issue a reset. It may not be the disconnect/connect that solves the problem but the reset.
What I use after the last command in the Commands tab's 'Run' box is:
symbol-file STM3210CTest/test_rom.elf
thbreak main
continue
The thbreak main command is what makes GDB stop at main.