I am working with Yocto and I want to copy some documents into the rootfs.
When I copy them, they end up inside
packages-split/Ex-doc/usr/share/doc/versions/Ex.txt
Now I want to get them from there into the rootfs. Below are the changes I have made inside the recipe:
do_install() {
    install -m 0755 -d ${D}${docdir}/versions
    install -m 0755 ${S}/Ex.txt ${D}${docdir}/versions
}
FILES_${PN}-doc += "${docdir}/versions"
But I am not able to see the /usr/share/doc directory inside the rootfs. Please point out where I am going wrong.
Thank you.
Did you add the ${PN}-doc package to the list of packages installed in the image?
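If not, a minimal way to do that (the package name Ex-doc is taken from the packages-split/Ex-doc path in your question; adjust it to your actual ${PN}-doc) is to append it in your image recipe or local.conf:
# -doc packages are not pulled into the image automatically
IMAGE_INSTALL_append = " Ex-doc"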
I have tried installing CANopen in Yocto using the command below, but CANopen is not getting installed:
bitbake canopensocket_git
In the local.conf file I had added:
CORE_IMAGE_EXTRA_INSTALL += " canopensocket_git "
How can I install the CANopen package?
Any input is appreciated.
First of all, canopensocket_git is a syntax error.
The recipe name ${PN} is canopensocket, and everything after the _ is the version ${PV}.
So you need to specify only the recipe name. Or, if you have several versions available, you can select one with:
PREFERRED_VERSION_canopensocket = "version_here"
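With that fixed, the line from the question's local.conf becomes just the recipe name:
CORE_IMAGE_EXTRA_INSTALL += " canopensocket"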
That being said, I found a recipe for canopensocket here.
But it fails and is not updated to the latest GitHub commit.
I made some modifications to it; here is my recipe:
SUMMARY = "Linux CANOpen tools"
DESCRIPTION = "Linux CANOpen Protocol Stack Tools"
LICENSE = "GPLv2"
LIC_FILES_CHKSUM = "file://gpl-2.0.txt;md5=b234ee4d69f5fce4486a80fdaf4a4263"
SRC_URI = "git://github.com/CANopenNode/CANopenSocket.git"
SRCREV = "ec9735165502e08b5d2e84d641833709b6faeb96"
S = "${WORKDIR}/git"
do_compile_prepend() {
    cd ${S}
    git submodule init
    git submodule update
}

do_compile() {
    cd ${S}/cocomm
    make
    cd ${S}/canopencgi
    make
}

do_install() {
    install -d ${D}${bindir}
    install -m 0755 ${S}/cocomm/cocomm ${D}${bindir}
    install -m 0755 ${S}/canopencgi/canopen.cgi ${D}${bindir}
}
FILES_${PN} += "${bindir}/*"
I modified do_compile and do_install and added the FILES packaging.
I also set SRCREV to the latest v4 tag commit instead of AUTOREV.
I do not know what this recipe does, but I compiled it and the build was okay for me on a zeus build.
The build produced two binaries: cocomm and canopen.cgi.
Now, if you want to install it into your image, add this to your custom image recipe:
IMAGE_INSTALL_append = " canopensocket"
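As a quick check (a suggestion, not part of the original answer), once the recipe has built you can confirm what was packaged:
oe-pkgdata-util list-pkg-files canopensocket
# should list /usr/bin/cocomm and /usr/bin/canopen.cgi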
I'm trying to install files extracted from a tar file, but none of my files are installed under the /usr/include directory on the target board. I do see my files under temp/work/aarch64/recipedir/image/usr/include/mydir/ and include/myfile.h. I did not get any errors while building.
do_install() {
    install -d ${D}${includedir}
    mkdir -p ${D}${includedir}/mydir
    install -m 0644 ${S}/include/myfile.h ${D}${includedir}
    install -m 0644 ${S}/include/mydir/*.h ${D}${includedir}/mydir/
}
FILES_${PN} += "${includedir}/mydir"
Everything in ${includedir} is put into ${PN}-dev by default.
c.f.: https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/conf/bitbake.conf#n316
You have to remember that a file can only be in one package. Determining which package a file ends up in is pretty simple: starting from the leftmost package in PACKAGES, the first package whose FILES_<pkg> contains a path matching the file gets it.
By default, ${PN}-dev appears before ${PN} in PACKAGES.
c.f.: http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/conf/bitbake.conf#n294
You can check which package has your file without "reverse-engineering" the whole thing by running oe-pkgdata-util find-path '/usr/include/mydir'.
If you really want this header file in your system (why?), you can either add ${PN}-dev to your image or hack things (remove the -dev package from PACKAGES or move ${PN} before ${PN}-dev, if you only have one file in ${includedir}, etc.).
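A minimal sketch of the first option, using a hypothetical recipe name myrecipe (substitute your own), in your image recipe or local.conf:
# installs the headers; the -dev package also pulls in the corresponding runtime package
IMAGE_INSTALL_append = " myrecipe-dev"
If you want development files for everything in the image, the dev-pkgs image feature (IMAGE_FEATURES += "dev-pkgs") does this globally.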
I want to install the OpenPose files permanently so that I do not need to install them each time I reopen Colab after a break.
I came across some installation code, but I don't know how to make the required modifications.
import os
from os.path import exists, join, basename, splitext
git_repo_url = 'https://github.com/CMU-Perceptual-Computing-Lab/openpose.git'
project_name = splitext(basename(git_repo_url))[0]
if not exists(project_name):
    # see: https://github.com/CMU-Perceptual-Computing-Lab/openpose/issues/949
    # install new CMake because of CUDA10
    !wget -q https://cmake.org/files/v3.13/cmake-3.13.0-Linux-x86_64.tar.gz
    !tar xfz cmake-3.13.0-Linux-x86_64.tar.gz --strip-components=1 -C /usr/local
    # clone openpose
    !git clone -q --depth 1 $git_repo_url
    !sed -i 's/execute_process(COMMAND git checkout master WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}\/3rdparty\/caffe)/execute_process(COMMAND git checkout f019d0dfe86f49d1140961f8c7dec22130c83154 WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}\/3rdparty\/caffe)/g' openpose/CMakeLists.txt
    # install system dependencies
    !apt-get -qq install -y libatlas-base-dev libprotobuf-dev libleveldb-dev libsnappy-dev libhdf5-serial-dev protobuf-compiler libgflags-dev libgoogle-glog-dev liblmdb-dev opencl-headers ocl-icd-opencl-dev libviennacl-dev
    # install python dependencies
    !pip install -q youtube-dl
    # build openpose
    !cd openpose && rm -rf build || true && mkdir build && cd build && cmake .. && make -j`nproc`
from IPython.display import YouTubeVideo
Can somebody please help me solve this issue?
When you say that you want to install openpose permanently, I assume that you mean you want to install it onto your google drive rather than having it installed into temporary files each time you run the code above in colab.
To install openpose on your google drive, rather than on the temporary colab storage:
(1) Mount your gdrive. Add this block of code prior to the block that you posted in your question above.
#Connect your google gdrive
from google.colab import drive
drive.mount('/content/drive')
(2) Change the directory to your gdrive. Add the following line to your code just after you've imported the dependencies but before the first line of the code block.
#Change the drive to your mounted gdrive
%cd /content/drive/MyDrive
This should install openpose on your permanent gdrive so that you can call openpose in the future from this location.
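Once the clone and build live on your Drive, later sessions only need these two additions again before re-running the original code; its existing if not exists(project_name) guard will then find the openpose directory on the mounted Drive and skip the download and rebuild. A minimal sketch of such a follow-up session (the path assumes the default /content/drive/MyDrive mount point used above):
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/MyDrive
# 'openpose' already exists here, so the install block above becomes a no-op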
I want to install the wkhtmltopdf library inside the /home/dev directory, and I can't touch anything outside of this directory, because it's not my server.
The file has a .deb extension. In /home/dev I have run:
$ wget "http://file-to-install.com/"
$ dpkg -x my_file.deb
So the file exists. Now I want to run:
$ dpkg -i my_file.deb
which would install it. But my question is: does this install the library only inside this dev folder, without touching anything else?
You should refer to How to extract RPM or DEB packages, which is linked to from the FAQ on the downloads page:
ar p wkhtmltox.deb data.tar.xz | tar xJ
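To answer the question directly: dpkg -i installs system-wide (and needs root), so it is not suitable here; extracting the data archive keeps everything under /home/dev. A sketch of that, assuming the downloaded file is named wkhtmltox.deb (the layout of the extracted usr/ tree depends on the package, so locate the binary before fixing your PATH):
cd /home/dev
mkdir -p wkhtmltox
ar p wkhtmltox.deb data.tar.xz | tar -xJf - -C wkhtmltox
# find the extracted binary and add its directory to PATH for this user only
find wkhtmltox -name wkhtmltopdf
export PATH="/home/dev/wkhtmltox/usr/local/bin:$PATH"   # this path is an assumption; adjust it to the find output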
I have a web app: fooapp. I have a package.json in the root. I want to install all the dependencies in a specific node_modules directory. How do I do this?
What I want
Let's say I have two widget dependencies. I want to end up with a directory structure like this:
node_modules/
  widgetA
  widgetB
fooapp/
  package.json
  lib
  ..
What I get
When I run npm install fooapp/ I get this:
node_modules/
  fooapp/
    node_modules/
      widgetA
      widgetB
    package.json
    lib/
    ..
fooapp/
  package.json
  lib/
  ..
npm makes a copy of my app directory in the node_modules dir and installs the packages inside another node_modules directory.
I understand this makes sense for installing a package. But I don't require() my web app inside of something else, I run it directly. I'm looking for a simple way to install my dependencies into a specific node_modules directory.
Running:
npm install
from inside your app directory (i.e. where package.json is located) will install the dependencies for your app, rather than install it as a module, as described here. These will be placed in ./node_modules relative to your package.json file (it's actually slightly more complex than this, so check the npm docs here).
You are free to move the node_modules dir to the parent dir of your app if you want, because node's 'require' mechanism understands this. However, if you want to update your app's dependencies with install/update, npm will not see the relocated 'node_modules' and will instead create a new dir, again relative to package.json.
To prevent this, just create a symlink to the relocated node_modules from your app dir:
ln -s ../node_modules node_modules
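Put together, a minimal sketch of that workflow (directory names follow the question's layout):
cd fooapp
npm install                          # creates fooapp/node_modules
mv node_modules ..                   # relocate it next to fooapp/
ln -s ../node_modules node_modules   # so future npm install/update still finds it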
In my case I needed to do:
sudo npm install
My project is inside /var/www, so I also needed to set the proper permissions.
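For example (a sketch; the fooapp path combines the question's app name with /var/www, so adjust user, group and path to your setup), giving your own user ownership of the project avoids needing sudo for later installs:
sudo chown -R $USER:$USER /var/www/fooapp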
Just execute
sudo npm i --save
That's all
npm i --force
from the documentation:
The -f or --force argument will force npm to fetch remote resources even if a local copy exists on disk