VS Code test discovery with Poetry (src layout)

Last time I followed the recommended src layout (https://hynek.me/articles/testing-packaging/) together with tox, with great success.
However, VS Code test discovery fails because the src package cannot be imported. That is expected, since we want to test the installed package.
But how do I debug my tests in VS Code?
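For context, the layout in question looks roughly like this (the package name mypkg is a placeholder):
project/
  pyproject.toml
  src/
    mypkg/
      __init__.py
  tests/
    test_mypkg.py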

(Question author here: I did some research on this before posting, so I'm sharing what I found.)
Not a solution
You could modify your PYTHONPATH to point to your src directory, but that defeats the main benefit of having a separate src directory (see the link in the question).
Solution
Use pip install -e path/to/your/package (usually pip install -e .) to enable development mode and test against your codebase as it would be installed.
After that your tests should be discovered properly. If they are not, it is a different issue - check the VS Code OUTPUT panel.
Note: this requires a setup.py (i.e. a setuptools build backend).
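A quick way to sanity-check that the editable install is the one your tests pick up - a minimal sketch, assuming a hypothetical package named mypkg under src/:
pip install -e .                                  # editable ("development mode") install, run from the project root
python -c "import mypkg; print(mypkg.__file__)"   # should point into src/mypkg/, not into site-packages
python -m pytest                                  # discovery should now work on the CLI and in VS Code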
Workaround for Poetry
pyproject.toml
[build-system]
requires = [
    "poetry-core>=1.0.0",
    "setuptools"  # to support local installations
]
Then run:
poetry build --format sdist && tar --wildcards -xvf dist/*.tar.gz -O '*/setup.py' > setup.py
pip install -e .
Source: https://github.com/python-poetry/poetry/issues/34
TL;DR: a proper solution is outside of Poetry's scope; the linked comment points to python mailing-list discussions: https://github.com/python-poetry/poetry/issues/34#issuecomment-732478605

Related

rpmbuild unable to find the custom installed package

Plenty of Perl packages are missing in CentOS 8 and Rocky Linux, so I try to generate the RPM spec with cpanspec and build the RPM myself. But it seems that rpmbuild cannot find the RPM I built.
This is the script I use to build the RPM:
cd /root/rpmbuild
cpanspec --packer 'Example <example#example.com>' <Perl-Package-Name>
mkdir SOURCES
cp <Perl-Package-Name>.tar.gz SOURCES
rpmbuild -ba perl-<Package-Name>.spec
Let's say we have two packages, A and B, where A is needed by B.
I try to build both packages with the script above. I build A first, switch to /root/rpmbuild/RPMS/noarch, and install A.rpm. Then I try to build package B.
I get:
error: Failed build dependencies:
perl(A) is needed by perl-<B>
I check the existence of package A:
yum list installed | grep A
and
perldoc -l A
Both commands show that A exists.
Did I miss something?
Update 2022/06/07
I just gave up and commented out the BuildRequires: A line in the B package's spec. This is not a good approach, but it works.
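A hedged diagnostic sketch that might help pin down the mismatch (perl-A and perl-B are placeholders for the real package/spec names): rpmbuild resolves BuildRequires against RPM capabilities, so the capability string that B asks for has to exactly match one that the installed A package provides.
rpm -q --provides perl-A                        # capabilities the installed A package provides
rpm -qp --provides RPMS/noarch/perl-A-*.rpm     # same check against the freshly built rpm file
grep BuildRequires perl-B.spec                  # the string here, e.g. perl(A), must match one of the above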

How to npm install and compile on travis.ci for multiple environments (including 32 bit)

I have a project hosted on GitHub that builds and publishes releases to GitHub using Travis CI:
https://github.com/Roaders/rpi-garage-door/releases/tag/v1.1.0
Currently this just adds one tgz file to the release, which includes the bundled dependencies needed to run (the thinking behind this is that npm install and npm run build are really slow on a Raspberry Pi, so just unzipping a tar is a million times faster). As Travis uses 64-bit machines, the resulting tar files can only be used on 64-bit systems, so as part of the upgrade process we need to run npm rebuild, which is still fairly slow on an RPi 3.
Initially I would like to build a 32-bit version instead of a 64-bit version, but I do not know how to configure this on Travis. I think I need to change the npm config, so I tried this in my .travis.yml:
language: node_js
node_js: 12
script: npm run build-release
before_install:
  - npm set npm_config_arch ia32
but that doesn't work.
The second thing that I want to do is to build multiple versions of my project using different versions of node and then add all of those tgz files to the release.
This solution:
Cross-platform install of npm package sqlite3
is very close to what I want but does not work for me, as the dependency I need to rebuild (epoll) does not build with node-pre-gyp.
It seems that the other referenced answer was almost there. The difference is that I have to use node-gyp to rebuild epoll. I also want to rename the generated tgz file to distinguish between the binaries. These scripts generate the required files for me:
"build-release": "npm run build",
"postbuild-release": "npm run build-node10-32 && npm run build-node12-32 && npm run build-node14-32",
"build-node10-32": "npx node-gyp rebuild -C node_modules/epoll/ --arch=arm --target=v10.21.0 && FILENAME=$(npm pack | tail -n 1) && mv $FILENAME \"node_10_32_$FILENAME\"",
"build-node12-32": "npx node-gyp rebuild -C node_modules/epoll/ --arch=arm --target=v12.18.2 && FILENAME=$(npm pack | tail -n 1) && mv $FILENAME \"node_12_32_$FILENAME\"",
"build-node14-32": "npx node-gyp rebuild -C node_modules/epoll/ --arch=arm --target=v14.5.0 && FILENAME=$(npm pack | tail -n 1) && mv $FILENAME \"node_14_32_$FILENAME\"",
Note: this doesn't actually solve my specific issue, as it still generates 64-bit binaries that do not work on my 32-bit Raspberry Pis. I think the advised fix for this is to use --arch=ia32, but that results in the error unrecognized command line option ‘-m32’; did you mean ‘-mbe32’? - fixing that is outside the scope of this question, though.
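For reference, one of the scripts above expands to roughly the following when run by hand (version numbers as in the scripts; the tarball name is whatever npm pack prints):
# rebuild the native epoll addon against the Node 12 headers for 32-bit ARM
npx node-gyp rebuild -C node_modules/epoll/ --arch=arm --target=v12.18.2
# npm pack prints the generated tarball name on its last line
FILENAME=$(npm pack | tail -n 1)
# rename the tarball so packages built for different Node versions can coexist in the release
mv "$FILENAME" "node_12_32_$FILENAME"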

Docker: kafka confluent go client error

I am trying to use Apache Kafka with Go. Things look good when I execute the project with go run, but when I use docker build I get this error:
# pkg-config --cflags rdkafka
Package rdkafka was not found in the pkg-config search path.
Perhaps you should add the directory containing `rdkafka.pc'
to the PKG_CONFIG_PATH environment variable
No package 'rdkafka' found
pkg-config: exit status 1
I installed librdkafka as described at https://github.com/confluentinc/confluent-kafka-go:
git clone https://github.com/edenhill/librdkafka.git
cd librdkafka
./configure --prefix /usr
make
sudo make install
I tried
PKG_CONFIG_PATH=/usr/lib/pkgconfig
source ~/.bashrc
but no luck. Any help is appreciated.
Probably you should include librdkafka.dll, msvcr120.dll and zlib.dll in your project root. At least this is what I had to do to get this working on Windows. Not sure about Linux.
The line below inside the Dockerfile worked for me, as it sets the environment variable and it persists when a container is run from the resulting image.
ENV PKG_CONFIG_PATH ${PKG_CONFIG_PATH}:/usr/lib/pkgconfig/
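A minimal sketch of how this might fit into the image build, assuming a Debian-based Go base image (the image tag and the apt packages are assumptions; adjust to your own Dockerfile):
FROM golang:1.20-bullseye
# build librdkafka from source, using the same steps as above
RUN apt-get update && apt-get install -y git build-essential && \
    git clone https://github.com/edenhill/librdkafka.git && \
    cd librdkafka && ./configure --prefix /usr && make && make install
# make rdkafka.pc discoverable by pkg-config during the Go build
ENV PKG_CONFIG_PATH ${PKG_CONFIG_PATH}:/usr/lib/pkgconfig/
WORKDIR /app
COPY . .
RUN go build ./...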

CMake install directory permission

I have built a project using CMake (the LLVM project) and tried to install it by issuing the following command:
$ cmake3 --build . --target install
If I run it as root there is no problem and the files are installed under /usr/local/.
My problem is when I want to install the project as a normal user.
I get the following error:
CMake Error at cmake_install.cmake:36 (file):
file INSTALL cannot set permissions on "/usr/local/include/llvm"
I have changed the permissions of /usr/local/ to 777 recursively, changed its ownership to root:wheel, and added my normal user to the wheel group. But I still cannot install the files into /usr/local/.
The main issue is building the project in Eclipse, which fails at the "Build Install" step.
chmod 777 -R / is a very scary command. I've destroyed a system that way once.
The philosophy I use for this is:
If I need to deploy something through my IDE to debug or test before packaging, I deploy it locally within my home directory.
I only install stuff to my system (outside of home) if it has been packaged first (*.deb, *.rpm, *.tar.gz) so that I can remove it without problems.
For me, I do this with:
cmake $src
cmake --build . --target install -- DESTDIR=stage
This will configure my project, build it, and then install it locally in a folder called ./stage inside my build directory. I can then run my executable from ./stage/usr/bin. Note that this only works if make is your generator.
Once I've tested it and I'm happy, I package it and deploy to my system or upload to a repository:
cpack
sudo dpkg -i <package>.deb
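As an aside, a common way to do the "deploy locally within my home directory" step without DESTDIR is to configure a user-writable install prefix; a hedged sketch (the prefix ~/.local and path/to/source are placeholders):
# configure with an install prefix the normal user owns, instead of /usr/local
cmake -DCMAKE_INSTALL_PREFIX=$HOME/.local path/to/source
cmake --build .
cmake --build . --target install    # no root required for a prefix you own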
You should use USE_SOURCE_PERMISSIONS in your install() call.
Example:
install(DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/Release/" DESTINATION "${CMAKE_CURRENT_BINARY_DIR}" USE_SOURCE_PERMISSIONS)

How can I make a list of installed packages in a certain virtualenv?

You can cd to YOUR_ENV/lib/pythonX.X/site-packages/ and have a look, but is there a more convenient way?
pip freeze lists all the installed packages, including those from the system environment.
You can list only the packages local to the virtualenv with
pip freeze --local
or
pip list --local
This option works regardless of whether global site packages are visible in the virtualenv.
Note that restricting the virtualenv so it cannot use global site packages isn't the answer to this problem, because the question is about how to separate the two lists, not how to constrain our workflow to fit the limitations of our tools.
Credit to @gvalkov's comment here. See also pip issue 85.
Calling the pip command inside a virtualenv should list the packages visible/available in the isolated environment. Make sure to use a recent version of virtualenv, which uses the option --no-site-packages by default. This way the virtualenv fulfils its purpose of providing a Python environment without access to packages installed in the system Python.
Next, make sure you use the pip command provided inside the virtualenv (YOUR_ENV/bin/pip), or just activate the virtualenv (source YOUR_ENV/bin/activate), which is a convenient way to call the proper python and pip commands.
~/Projects$ virtualenv --version
1.9.1
~/Projects$ virtualenv -p /usr/bin/python2.7 demoenv2.7
Running virtualenv with interpreter /usr/bin/python2.7
New python executable in demoenv2.7/bin/python2.7
Also creating executable in demoenv2.7/bin/python
Installing setuptools............................done.
Installing pip...............done.
~/Projects$ cd demoenv2.7/
~/Projects/demoenv2.7$ bin/pip freeze
wsgiref==0.1.2
~/Projects/demoenv2.7$ bin/pip install commandlineapp
Downloading/unpacking commandlineapp
Downloading CommandLineApp-3.0.7.tar.gz (142kB): 142kB downloaded
Running setup.py egg_info for package commandlineapp
Installing collected packages: commandlineapp
Running setup.py install for commandlineapp
Successfully installed commandlineapp
Cleaning up...
~/Projects/demoenv2.7$ bin/pip freeze
CommandLineApp==3.0.7
wsgiref==0.1.2
What's strange in my output is that the package 'wsgiref' is visible inside the virtualenv. It's from my system Python. Currently I do not know why, but it may be different on your system.
In Python 3:
pip list
An empty venv shows:
Package    Version
---------- -------
pip        19.2.3
setuptools 41.2.0
To create a new environment:
python3 -m venv your_foldername_here
Activate:
cd your_foldername_here
source bin/activate
Deactivate:
deactivate
You can also run the command from any folder and give the virtual environment whatever name/folder you like (python3 -m venv name_of_venv).
venv is a subset of virtualenv that has shipped with Python since 3.3.
To list the installed packages in the virtualenv:
Step 1 (using virtualenvwrapper):
workon envname
Step 2:
pip freeze
It will display all the installed packages and their versions.
If you're still a bit confused about virtualenv, you might not pick up how to combine the great tips from the answers by Ioannis and Sascha. I.e., this is the basic command you need:
/YOUR_ENV/bin/pip freeze --local
That can easily be used elsewhere. E.g., here is a convenient and complete answer, suited for getting all the local packages installed in all the environments you set up via virtualenvwrapper:
cd ${WORKON_HOME:-~/.virtualenvs}
for dir in *; do [ -d "$dir" ] && "$dir"/bin/pip freeze --local > "/tmp/$dir.fl"; done
more /tmp/*.fl
Why don't you try pip list?
Remember, I'm using pip version 19.1 on Python version 3.7.3.
If you are using pip 19.0.3 and Python 3.7.4, use the pip list command in your virtualenv. It will show all the installed packages with their respective versions.
.venv/bin/pip freeze worked for me in bash.
In my case the Flask version was only visible inside the venv, so I had to go to
C:\Users\\AppData\Local\flask\venv\Scripts>pip freeze --local
Using the python3 executable only, from:
Git Bash:
winpty my_venv_dir/bin/python -m pip freeze
Linux:
my_venv_dir/bin/python -m pip freeze
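Putting several of the answers above together, a minimal end-to-end sketch (demo_env and the requests package are just placeholders):
python3 -m venv demo_env                 # create an isolated environment
demo_env/bin/pip install requests        # install something into it
demo_env/bin/pip freeze --local          # list only what lives in this venv, in requirements format
demo_env/bin/pip list --local            # the same information as a table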