I am making an MJPEG video stream using a Raspberry Pi with the dedicated Pi Camera. For this I am using the JPEG libraries and the following web application found on GitHub. The use is pretty straightforward: you just type cd mjpg-streamer/mjpg-streamer-experimental and then ./mjpg_streamer -o "output_http.so -w ./www" -i "input_raspicam.so". However, I would like it to run on every reboot, so that the camera is "maintenance free".
I read that I need to put the path and the executable file in /etc/rc.local. Nevertheless, when I put the full path to this executable (mjpg-streamer/mjpg-streamer-experimental/mjpg_streamer -o "output_http.so -w ./www" -i "input_raspicam.so") there, it did not work at all. Running the stream as a single command in the terminal did not work either. I also tried to set up a PATH variable in .bashrc so it could be reached from /etc/rc.local, but that also failed.
I suspect it might have something to do with the command ./mjpg_streamer needing some arguments for it to work (-o "output_http.so -w ./www" -i "input_raspicam.so").
Do you have any idea how to start it with every reboot?
Thanks for your time and help
I have solved a similar issue for my RPi and mjpg-streamer as follows.
Create a shell script in /home/pi:
touch /home/pi/mjpg-streamer.sh
Edit that shell script and add this content:
#!/bin/bash
cd /home/pi/mjpg-streamer/mjpg-streamer-experimental/
export LD_LIBRARY_PATH=.   # exported so mjpg_streamer can find its plugin .so files in this directory
./mjpg_streamer -o "output_http.so -w ./www" -i "input_raspicam.so"
Make sure the new shell script has execution rights.
Add that shell script to your /etc/rc.local.
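A minimal sketch of those last two steps, assuming the paths used above (rc.local must still reach its final exit 0, so the long-running streamer is put in the background with &):

chmod +x /home/pi/mjpg-streamer.sh

Then, in /etc/rc.local, before the closing exit 0, add:

/home/pi/mjpg-streamer.sh &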
I'm trying to modify the devicetree on my Coral SoM to support a different DSI display, and could use some pointers.
I edited arch/arm64/boot/dts/freescale/fsl-imx8mq-phanbell.dts
then used 'm' to make all
Question:
What is the correct way to modify the devicetree, build it, and load it onto the board?
Thanks!
Can you give a little more detail on the changes?
You can definitely do it that way, and I believe you don't need to rebuild the entire OS; just the kernel is fine:
$ m docker-linux-imx
$ cd ./out/product/packages/bsp
$ scp ./linux-image-4.14.98-imx_11-4_arm64.deb mendel@board-ip:
$ ssh mendel@board-ip
$ sudo dpkg -i ./linux-image-4.14.98-imx_11-4_arm64.deb
Another way is to create a device tree overlay. For instance, here is a dts for disabling the HDMI: https://gist.github.com/Namburger/f700eb6b18bd1e3697638088d5995c8b
You can then compile it and move it to /boot:
$ dtc -@ -I dts -O dtb -o disable-hdmi.dtbo disable_hdmi.dts
$ sudo mv disable-hdmi.dtbo /boot
Then add the file to /boot/overlays.txt to apply it:
$ cat /boot/overlays.txt
# List of device tree overlays to load. Format: overlay=<dtbo name, no extension> <dtbo2> ...
overlay= disable-hdmi
Thank you very much Nam.
The first option I think was already working, but I was not sure how to check. It appears that the devicetree can be examined by looking in /proc/device-tree/; for example, cat hdmi@32c00000/status gives 'disabled' after the modification above, and the HDMI output can be confirmed to be off.
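For anyone checking their own overlay the same way, the inspection looks roughly like this (hdmi@32c00000 is just the node mentioned above; substitute whatever node your overlay touches):

$ ls /proc/device-tree/
$ cat /proc/device-tree/hdmi@32c00000/status
disabled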
I need to start a Java REST server with Concourse; the server lives on an Ubuntu 18.04 machine. The version of Concourse my company uses is 5.5.11. The server code is written in Java, so a simple java -jar <uber.jar> suffices from the command line (see below). In production, I will not have this simple luxury, hence my question.
I have an scp command working that copies the .jar from concourse to the target Ubuntu machine:
scp -i /tmp/key.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ./${NEW_DIR}/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST}:/var/www
Note that my private key is passed with -i and I can confirm that is working.
I followed this other SO Q&A that seemed promising: Getting ssh to execute a command in the background on target machine, but after trying a few permutations of the suggested solution and the other answers, I still don't have my REST service kicked off.
I've tried a few permutations of this line in my concourse script:
ssh -f -i /tmp/pvt_key1.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST} "bash -c 'nohup java -jar /var/www/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} -c \"/opt/testcerts/clientkeystore\" -w \"password\" > /dev/null 2>&1 &'"
I've tried with and without the -f and -t switches in ssh, with and without the file stream redirection, with and without nohup and the bash background operator ('&'), and various ways of escaping the quotes.
At the bash prompt, this line successfully starts my server. The two switches are needed to point to the certificate and provide the password:
java -jar rest-service.jar -c "/opt/certificates/clientkeystore" -w "password"
I really think this is possible to do in Concourse, but I'm stuck at this point.
After a lot of trial and error, it seems I needed to do this:
ssh -f -i /tmp/pvt_key1.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST} "bash -c 'sudo java -jar /var/www/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} -c \"/path/to/my/certificate\" -w \"password\" > /var/www/log.txt 2>&1 &'"
The key was that I was missing the 'sudo' portion of the command. Using nohup instead of just the bash background operator ('&') seems to give me an error in the pipeline. This works for me, but others are welcome to post responses with better answers or methods that might be better practice.
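As a quick sanity check (a sketch only, reusing the same key and host variables as above, not part of the original fix), the same ssh invocation can be repeated to confirm the java process actually survived after the pipeline disconnects:

ssh -i /tmp/pvt_key1.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST} "pgrep -af 'java -jar'"

If pgrep prints nothing, the server died with the ssh session rather than staying in the background.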
I'm very new to the Raspberry Pi and have no prior notable experience with Linux, so this is all new to me...
OctoPrint is a 3D printer spooler that you can run on your Raspberry Pi. One of the features of OctoPrint is the ability to set up a USB camera to view either still images or a stream of your print.
I am using the prepackaged OctoPi image of OctoPrint.
OctoPrint's GitHub wiki contains the following info referring to my USB camera, but I have no idea how to implement it.
Hama PC-Webcam "AC-150" on Raspberry Pi
./mjpg_streamer -o output_http.so -w ./www -i input_uvc.so -y -r 640x480 -f 10
https://github.com/foosel/OctoPrint/wiki/Webcams-known-to-work
I'm guessing this is an easy command that I enter via the console, but I've winged a few commands with no luck. Can someone shed some light on how I use this? Like I said, I'm an absolute beginner with the Pi...
Any help is greatly appreciated!
Try this:
camera_usb_options="-r VGA -f 10 -y"
sudo service octoprint stop
fuser /dev/video0
/dev/video0: 1871m
ps axl | grep 1871   (replace 1871 with the PID that fuser reported)
kill -9 1871
./mjpg_streamer -i "input_uvc.so $camera_usb_options" -o "output_http.so -w ./www"
sudo service octoprint start
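If you are on the stock OctoPi image, you can usually make this permanent instead of launching the streamer by hand: OctoPi reads the camera settings from /boot/octopi.txt at boot, so something along these lines should work (values copied from the wiki line above; treat the exact option names as an assumption and check the comments inside octopi.txt):

camera="usb"
camera_usb_options="-r VGA -f 10 -y"

Then reboot, and OctoPi's webcam service should start mjpg_streamer with those options automatically.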
For a while I have had my DB running in a command window because I can't figure out how to run it as a Windows service.
Since I downloaded the zip file version, how can I register the pg_ctl command as a Windows service?
By the way, I'm using the following line to start the server:
"D:/Program Files/PostgreSQL/9.0.4/bin/pg_ctl.exe" -D "D:/Program Files/PostgreSQL/9.0.4/db_data" -l logfile start
Thanks in advance.
Use the register parameter for the pg_ctl program.
The data directory should not be stored in Program Files; a location under %ProgramData%, for example, is a good choice.
pg_ctl.exe register -N PostgreSQL -U some_windows_username -P windows_password -D "%ProgramData%/db_data" ...
In newer versions of Postgres a separate Windows account is no longer necessary, so the following is also sufficient:
pg_ctl.exe register -N PostgreSQL -D "%ProgramData%/db_data" ...
Details are in the manual: http://www.postgresql.org/docs/current/static/app-pg-ctl.html
You need to make sure the directory D:/Program Files/PostgreSQL/9.0.4/db_data has the correct privileges for the Windows user you specify with the -U flag.
Btw: it is a bad idea to store program data in Program Files. You should move the data directory somewhere outside of Program Files, because Program Files is usually highly restricted for regular users - with very good reason.
Just run 'Command Prompt' as a Windows administrator and run the command below:
pg_ctl.exe register -N PostgreSQL -D "D:/Program Files/PostgreSQL/9.0.4/db_data"
You don't need to specify a User and Password, as previous answers have suggested.
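Once registered, the service behaves like any other Windows service, so it can be started, stopped, and removed using the name given with -N, for example:

net start PostgreSQL
net stop PostgreSQL
pg_ctl.exe unregister -N PostgreSQL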
I started a wget mirror with "wget --mirror [sitename]", and it was working fine, but I accidentally interrupted the process.
I now want to resume the mirror with the following caveats:
If wget has already downloaded a file, I don't want it downloaded again. I don't even want wget to check the timestamp: I know the version I have is "recent enough".
I do want wget to read the files it's already downloaded and
follow links inside those files.
I can use "-nc" for the first point above, but I can't seem to coerce
wget to read through files it's already downloaded.
Things I've tried:
The obvious "wget -c -m" doesn't work, because it wants
to compare timestamps, which requires making at least a HEAD request
to the remote server.
"wget -nc -m" doesn't work, since -m implies -N, and -nc is
incompatible with -N.
"wget -F -nc -r -l inf" is the best I could come up with, but it
still fails. I was hoping "-F" would coerce wget into reading local,
already-downloaded files as HTML, and thus follow links, but this
doesn't appear to happen.
I tried a few other options (like "-c" and "-B [sitename]"), but
nothing works.
How do I get wget to resume this mirror?
Apparently this works:

"While trying to resume a site-mirror operation I was running through Wget, I ran into the error "Can't timestamp and not clobber old files at the same time". It turns out that running Wget with the -N and -nc flags set at the same time can't happen, so if you want to resume a recursive download with noclobber you have to disable -N. The -m attribute (for mirroring) intrinsically sets the -N attribute, so you'll have to switch from -m to -r in order to use noclobber as well."

From: "Solved: Wget error 'Can't timestamp and not clobber old files at the same time'", February 4, 2012, http://www.marathon-studios.com/blog/solved-wget-error-cant-timestamp-and-not-clobber-old-files-at-the-same-time/
-m, according to the wget manual, is equivalent to this longer series of settings: -r -N -l inf --no-remove-listing. Just use those settings instead of -m, and drop -N (timestamping); see the example below.
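Spelled out, that substitution would look something like this (with [sitename] standing in for whatever URL you originally mirrored):

wget -nc -r -l inf --no-remove-listing [sitename]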
Now I'm not sure if there is a way to get wget to download URLs from existing HTML files. There probably is a solution; I know it can take HTML files as input and scrape all the links in them. Perhaps you could use a bash command to concatenate all the HTML files together into one big file, as sketched below.
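A rough, untested sketch of that idea, assuming the mirror was saved into ./[sitename] and originally came from http://[sitename]/ (--force-html makes wget parse the local file as HTML, and --base resolves its relative links):

find [sitename] -name '*.html' -exec cat {} + > all-pages.html
wget -nc -r -l inf --force-html --base=http://[sitename]/ -i all-pages.html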
I solved this problem by just deleting all the HTML files, because I didn't mind redownloading just those. But this might not work for everyone's use case.