How do I rebuild Coral Mendel with a devicetree modification? - linux-device-driver

I'm trying to modify the devicetree on my Coral SoM to support a different DSI display, and could use some pointers.
I edited arch/arm64/boot/dts/freescale/fsl-imx8mq-phanbell.dts
then used 'm' to build everything
Question:
What is the correct way to modify the devicetree, build it, and load it onto the board?
Thanks!

Can you give a little more detail on the changes?
You can definitely do it that way, and I believe you don't need to rebuild the entire OS; rebuilding just the kernel is fine:
$ m docker-linux-imx
$ cd ./out/product/packages/bsp
$ scp ./linux-image-4.14.98-imx_11-4_arm64.deb mendel@board-ip:
$ ssh mendel@board-ip
$ sudo dpkg -i ./linux-image-4.14.98-imx_11-4_arm64.deb
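After installing the package, reboot the board; a quick, generic sanity check (not Coral-specific) is to compare the running kernel build string against the .deb you just installed:
$ sudo reboot
$ uname -rv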
Another way is to create a device tree overlay. For instance, here is a dts for disabling the HDMI: https://gist.github.com/Namburger/f700eb6b18bd1e3697638088d5995c8b
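For reference, a minimal overlay along those lines, saved as disable_hdmi.dts (the filename used in the dtc command below), might look roughly like this. This is only a sketch, not the exact contents of that gist, and the hdmi@32c00000 node path is assumed from the /proc/device-tree check mentioned later in this thread:
/dts-v1/;
/plugin/;

/ {
    fragment@0 {
        target-path = "/hdmi@32c00000";
        __overlay__ {
            status = "disabled";
        };
    };
};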
You can then compile it and move it to /boot:
$ dtc -@ -I dts -O dtb -o disable-hdmi.dtbo disable_hdmi.dts
$ sudo mv disable-hdmi.dtbo /boot
Then add the file to /boot/overlays.txt to apply it:
$ cat /boot/overlays.txt
# List of device tree overlays to load. Format: overlay=<dtbo name, no extension> <dtbo2> ...
overlay=disable-hdmi

Thank you very much Nam.
The first option, I think, was already working, but I was not sure how to check. It turns out the live device tree can be examined by looking in /proc/device-tree/; for example, checking the hdmi node's status (see below) gives 'disabled' after applying the modification above, and the HDMI output can indeed be verified as not working.
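That is, with the node path as it appears on this board:
$ cat /proc/device-tree/hdmi@32c00000/status
disabled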

Related

Where to get the eligible library tag file in Android O

In https://source.android.com/devices/architecture/vndk/deftool, it mentions that Google provides a tag file to classify the framework shared libraries, including LL-NDK, SP-NDK, VNDK, VNDK-SP, etc. However, after searching on that website and googling it, I'm not able to find the tag file. Where does Google provide it?
Thanks
Jincan
I found out how to get these files.
You need the vendor.img and system.img files, since those are the images that get deployed to the "vendor partition" and "system partition" on a device.
Step 1
Please visit the Driver Binaries for Nexus and Pixel Devices page.
There are images for two devices.
taimen (Pixel 2 XL)
walleye (Pixel 2)
Step 2: Extracting the images
Please read README.md.
It contains the following commands:
$ simg2img system.img system.raw.img
$ simg2img vendor.img vendor.raw.img
$ mkdir system
$ mkdir vendor
$ sudo mount -o loop,ro system.raw.img system
$ sudo mount -o loop,ro vendor.raw.img vendor
$ sudo python3 vndk_definition_tool.py vndk \
--system system \
--vendor vendor \
--aosp-system /path/to/aosp/generic/system \
--tag-file eligible-list-v3.0.csv
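When you are done, you would presumably unmount the images again, e.g.:
$ sudo umount system
$ sudo umount vendor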
For details, please see that README.md.
Thank you
git clone https://android.googlesource.com/platform/development
~/tools/development/vndk/tools/definition-tool/datasets[master]$ ls
eligible-list-o-mr1-release.csv eligible-list-o-release.csv minimum_dlopen_deps.txt minimum_tag_file.csv
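So, assuming the development repo was cloned as above and the images are still mounted as in Step 2, you can point the tool at one of those CSVs; for example (the paths here are illustrative and depend on where you cloned and mounted everything):
$ sudo python3 development/vndk/tools/definition-tool/vndk_definition_tool.py vndk \
--system system \
--vendor vendor \
--aosp-system /path/to/aosp/generic/system \
--tag-file development/vndk/tools/definition-tool/datasets/eligible-list-o-mr1-release.csv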

Raspberry Pi MJPEG video stream - start application at reboot

I am making an MJPEG video stream using a Raspberry Pi with a dedicated Pi Camera. For this I am using jpeg libraries and the following web application found on GitHub. The use is pretty straightforward: you just type cd mjpg-streamer/mjpg-streamer-experimental and then ./mjpg_streamer -o "output_http.so -w ./www" -i "input_raspicam.so". However, I would like to make it run on every reboot, so that the camera is "maintenance free".
I researched that I need to put the path and the executable file in /etc/rc.local. Nevertheless, when I put the path to the executable (mjpg-streamer/mjpg-streamer-experimental/mjpg_streamer -o "output_http.so -w ./www" -i "input_raspicam.so") in there, it did not work at all. I tried to run the stream as one command in the Terminal, and it did not work either. I also tried to set up a PATH variable in .bashrc in order to access it from /etc/rc.local, but that did not help either.
I suspect it might have something to do with the command ./mjpg_streamer needing some input for it to work (-o "output_http.so -w ./www" -i "input_raspicam.so").
Do you have any idea how to start it with every reboot?
Thanks for your time and help
I solved a similar issue for my RPi and mjpg-streamer as follows.
Create a shell script in /home/pi:
touch /home/pi/mjpg-streamer.sh
Edit that shell script and add this content:
#!/bin/bash
cd /home/pi/mjpg-streamer/mjpg-streamer-experimental/
export LD_LIBRARY_PATH=.
./mjpg_streamer -o "output_http.so -w ./www" -i "input_raspicam.so"
Make sure the new shell script has execution rights.
Then add that shell script to your /etc/rc.local, as shown below.
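For example, assuming the script path used above (the rc.local entry goes before the final exit 0, and the trailing & keeps rc.local from blocking on the stream):
$ chmod +x /home/pi/mjpg-streamer.sh
Then in /etc/rc.local, before exit 0, add:
/home/pi/mjpg-streamer.sh &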

Can't use opkg on pre-configured embedded machine

I want to add a feature to a pre-configured embedded machine with uname -a output:
Linux asdf 3.1.10 #1 SMP PREEMPT Fri Nov 11 02:05:03 CET 2016 armv7l GNU/Linux
It uses busybox for a lot of its terminal commands and has a lot of stuff that doesn't work. It uses opkg as its package manager. I wanted to update systemd the other day and so I typed opkg update, which gave
Downloading http://www.website-of-manufacturer.com/ipk2/all/Packages.gz.
wget: bad address 'website-of-manufacturer.com:8008'
So I wanted to update the list of repositories, which should be done by editing /etc/opkg.conf if I understand correctly. But there is no such file. So after reading this I simply created it and pasted the example from the link.
But after running opkg update again, it still looks for http://www.website-of-manufacturer.com/ipk2/all/Packages.gz! What can I do to remove this repo and add others?
Edit: I also tried grep -Ril website-of-manufacturer in rootdir, but the installed version of grep doesn't support those flags so I don't even know where the configuration file is located :D
Edit: Ok find ./ -type f | xargs grep "website-of-manufacturer" actually located the file I was looking for. I guess I'll answer my own question if this works.
Since grep -r was not working, I could not find the config file. But then I tried
find ./ -type f | xargs grep "website-of-manufacturer"
which located the file containing the repository list. A neat trick for environments where grep isn't working as it should.
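For what it's worth, once you've located that file, opkg feeds are normally plain src/gz lines, so swapping the manufacturer's repo for another one would look something like this (the feed name and URL are placeholders, not a real repository for this device):
src/gz myfeed http://example.com/ipk/all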

How to make a correct wget request?

I need to copy an XML file from the server to my folder and name it daily.xml. Here is my code.
The problem is that every new file gets the name daily.xml.1, daily.xml.2, etc.
How can I name the new file daily.xml, and the previous file previous-daily.xml? As far as I know, I need to use -O, but I don't understand how to use it.
wget -P /home/name/name2/docs/xml/ http://www.domain.com/XML/daily.xml
How do I make the correct request?
What about
wget http://www.domain.com/XML/daily.xml -O /home/name/name2/docs/xml/daily$(date +'%Y%m%d%H%M%S').xml
Maybe one-second resolution is not fine enough and you need a counter variable instead.
This does not, however, rename your previous files.
In case your only original problem was that your system does not recognize *.xml.7 as an XML file, the command above should fix it.
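If you do go the counter route instead, a rough sketch might be (same directory as in the question; it just picks the first unused index):
n=1
while [ -e /home/name/name2/docs/xml/daily-$n.xml ]; do n=$((n+1)); done
wget http://www.domain.com/XML/daily.xml -O /home/name/name2/docs/xml/daily-$n.xml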
Edit: as for your comment, you could do
cd /home/name/name2/docs/xml/ && mv daily.xml previous-daily.xml; wget http://www.domain.com/XML/daily.xml -O daily.xml

Can't resume "wget --mirror" with --no-clobber (-c -F -B unhelpful)

I started a wget mirror with "wget --mirror [sitename]", and it was working fine, but I accidentally interrupted the process.
I now want to resume the mirror with the following caveats:
If wget has already downloaded a file, I don't want it downloaded again. I don't even want wget to check the timestamp: I know the version I have is "recent enough".
I do want wget to read the files it's already downloaded and follow links inside those files.
I can use "-nc" for the first point above, but I can't seem to coerce wget to read through files it's already downloaded.
Things I've tried:
The obvious "wget -c -m" doesn't work, because it wants to compare timestamps, which requires making at least a HEAD request to the remote server.
"wget -nc -m" doesn't work, since -m implies -N, and -nc is incompatible with -N.
"wget -F -nc -r -l inf" is the best I could come up with, but it still fails. I was hoping "-F" would coerce wget into reading local, already-downloaded files as HTML, and thus follow links, but this doesn't appear to happen.
I tried a few other options (like "-c" and "-B [sitename]"), but nothing works.
How do I get wget to resume this mirror?
Apparently this works; from http://www.marathon-studios.com/blog/solved-wget-error-cant-timestamp-and-not-clobber-old-files-at-the-same-time/ ("Solved: Wget error 'Can't timestamp and not clobber old files at the same time'", posted February 4, 2012):
While trying to resume a site-mirror operation I was running through Wget, I ran into the error "Can't timestamp and not clobber old files at the same time". It turns out that running Wget with the -N and -nc flags set at the same time can't happen, so if you want to resume a recursive download with noclobber you have to disable -N. The -m attribute (for mirroring) intrinsically sets the -N attribute, so you'll have to switch from -m to -r in order to use noclobber as well.
-m, according to the wget manual, is equivalent to this longer series of settings: -r -N -l inf --no-remove-listing. Just use those settings instead of -m, and leave out -N (timestamping), as in the example below.
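For example, with the site URL as a placeholder:
wget -nc -r -l inf --no-remove-listing http://example.com/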
Now, I'm not sure if there is a built-in way to get wget to pick up URLs from already-downloaded HTML files. There probably is a solution: wget can take HTML files as input and scrape all the links in them. Perhaps you could concatenate all the downloaded HTML files into one big file and feed that back to it, roughly as sketched below.
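A rough sketch of that idea, assuming the mirror was of http://example.com/ (so both the example.com directory and the -B base URL below are placeholders): concatenate the already-downloaded pages, then feed them back to wget with -i (read links from a file), -F (treat that file as HTML), and -B (base URL for resolving relative links):
cat $(find example.com -name '*.html') > all-pages.html
wget -nc -r -l inf --no-remove-listing -F -B http://example.com/ -i all-pages.html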
I solved this problem by just deleting all the HTML files, because I didn't mind redownloading them. But this might not work for everyone's use case.