This has never happened before, but if I create a directory with mkdir -p catkin_ws/src and then run catkin build, I get the following error:
emeric#emeric-desktop:~/catkin_plan_ws$ catkin build
------------------------------------------------------
Profile: default
Extending: [env] /opt/ros/kinetic
Workspace: /home/emeric
------------------------------------------------------
Source Space: [exists] /home/emeric/src
Log Space: [missing] /home/emeric/logs
Build Space: [exists] /home/emeric/build
Devel Space: [exists] /home/emeric/devel
Install Space: [unused] /home/emeric/install
DESTDIR: [unused] None
------------------------------------------------------
Devel Space Layout: linked
Install Space Layout: None
------------------------------------------------------
Additional CMake Args: DCMAKE_BUILT_TYPE=Release
Additional Make Args: None
Additional catkin Make Args: None
Internal Make Job Server: True
Cache Job Environments: False
------------------------------------------------------
Whitelisted Packages: None
Blacklisted Packages: None
------------------------------------------------------
Workspace configuration appears valid.
NOTE: Forcing CMake to run for each package.
------------------------------------------------------
Traceback (most recent call last):
File "/usr/bin/catkin", line 9, in <module>
load_entry_point('catkin-tools==0.4.4', 'console_scripts', 'catkin')()
File "/usr/lib/python2.7/dist-packages/catkin_tools/commands/catkin.py", line 267, in main
catkin_main(sysargs)
File "/usr/lib/python2.7/dist-packages/catkin_tools/commands/catkin.py", line 262, in catkin_main
sys.exit(args.main(args) or 0)
File "/usr/lib/python2.7/dist-packages/catkin_tools/verbs/catkin_build/cli.py", line 420, in main
summarize_build=opts.summarize # Can be True, False, or None
File "/usr/lib/python2.7/dist-packages/catkin_tools/verbs/catkin_build/build.py", line 283, in build_isolated_workspace
workspace_packages = find_packages(context.source_space_abs, exclude_subspaces=True, warnings=[])
File "/usr/lib/python2.7/dist-packages/catkin_pkg/packages.py", line 86, in find_packages
packages = find_packages_allowing_duplicates(basepath, exclude_paths=exclude_paths, exclude_subspaces=exclude_subspaces, warnings=warnings)
File "/usr/lib/python2.7/dist-packages/catkin_pkg/packages.py", line 146, in find_packages_allowing_duplicates
xml, filename=filename, warnings=warnings)
File "/usr/lib/python2.7/dist-packages/catkin_pkg/package.py", line 509, in parse_package_string
raise InvalidPackage('The manifest must contain a single "package" root tag')
catkin_pkg.package.InvalidPackage: The manifest must contain a single "package" root tag
Besides, the build and devel folders are created in my home directory, not in the catkin one.
I guess I messed something up, but I do not know what, and thus how to fix it.
Thank you for your help.
The root folder of the build, install, log, devel and src spaces should be your catkin root, which is where you call catkin build (in your case it's ~/catkin_ws).
In a nutshell, you can't run catkin tasks outside of an initialized catkin workspace.
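A rough sketch of how to recover, assuming the stray spaces in your home directory can be deleted and the real workspace is ~/catkin_ws (double-check the paths before removing anything):

# remove the spaces that were mistakenly created in the home directory
rm -rf ~/build ~/devel ~/logs ~/.catkin_tools
# initialize and build from the workspace root instead
cd ~/catkin_ws
catkin init
catkin build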
I am running docker-compose 1.25.5 on an Ubuntu 20 box and I have a GitHub repo working "fine" in its home folder... I can docker-compose build and docker-compose up with no problem, and the container does what is expected. The GitHub repo is current with the on-disk files.
As a test, however, I created a new folder, pulled the repo, and ran docker-compose build with no problem, but when I tried to run docker-compose up, I got the following error:
Starting live_evidently_1 ... done
Attaching to live_evidently_1
evidently_1 | Traceback (most recent call last):
evidently_1 | File "app.py", line 14, in <module>
evidently_1 | with open('config.yml') as f:
evidently_1 | IsADirectoryError: [Errno 21] Is a directory: 'config.yml'
live_evidently_1 exited with code 1
config.yml on my host is a file (of course) and the docker-compose.yml file is unremarkable:
version: "3"
services:
evidently:
build: ../
volumes:
- ./data:/data
- ./config.yml:/app/config.yml
etc...
...
So, I am left with two interrelated problems: 1) why does the test version of the repo fail while the original version works fine (git status is unremarkable, and all the files I want on GitHub are up to date), and 2) why does docker-compose think that config.yml is a folder when it is clearly a file? I would welcome suggestions.
You need to use the bind mount type. To do this, you have to use the long syntax, like this:
volumes:
  - type: bind
    source: ./config.yml
    target: /app/config.yml
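In context, that could look roughly like this in the compose file from the question (a sketch; only the config.yml entry is changed to the long syntax):

version: "3"
services:
  evidently:
    build: ../
    volumes:
      - ./data:/data
      - type: bind
        source: ./config.yml
        target: /app/config.yml

Note that source is resolved relative to the directory containing the compose file, so make sure you run docker-compose from that directory (or point at the file with -f).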
yaml" file and ".bat" file present in github repository ? I am not able to build a model in anaconda using 'conda build' Can anyone please guide??
(dl) C:\Users\Nishant>conda build .
Traceback (most recent call last):
File "E:\anaconda\envs\dl\Scripts\conda-build-script.py", line 10, in
sys.exit(main())
File "E:\anaconda\envs\dl\lib\site-packages\conda_build\cli\main_build.py", line 469, in main
execute(sys.argv[1:])
File "E:\anaconda\envs\dl\lib\site-packages\conda_build\cli\main_build.py", line 460, in execute
verify=args.verify, variants=args.variants)
File "E:\anaconda\envs\dl\lib\site-packages\conda_build\api.py", line 207, in build
raise ValueError('No valid recipes found for input: {}'.format(recipe_paths_or_metadata))
ValueError: No valid recipes found for input: ['.']
The log doesn't mention any xxx.yaml or xx.bat file; you are probably passing the wrong input to conda build.
A .yml file is a configuration file in YAML format, while a .bat file is a Windows batch script. You should read the README on the GitHub repository carefully.
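For context, conda build expects the path you give it to contain a recipe, i.e. a directory with a meta.yaml (plus optionally bld.bat / build.sh); the folder name below is hypothetical:

REM point conda build at the folder that actually contains meta.yaml
conda build my-recipe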
I'm making a simple APK using Buildozer with Kivy.
I have tried reinstalling the Android SDK and NDK twice, but the sdkmanager tools are not installed automatically.
# Apache ANT found at /home/shivam/.buildozer/android/platform/apache-ant-1.9.4
# Android SDK found at /home/shivam/.buildozer/android/platform/android-sdk
# Android NDK found at /home/shivam/.buildozer/android/platform/android-ndk-r17c
# Read available permissions from api-versions.xml
# Check application requirements
# Check garden requirements
# Compile platform
# Run '/usr/bin/python -m pythonforandroid.toolchain create --dist_name=firstapp --bootstrap=sdl2 --requirements=python3,kivy --arch armeabi-v7a --copy-libs --color=always --storage-dir="/home/shivam/.buildozer/android/platform/build" --ndk-api=21'
# Cwd /home/shivam/.buildozer/android/platform/python-for-android
[INFO]: Will compile for the following archs: armeabi-v7a
[INFO]: Found Android API target in $ANDROIDAPI: 27
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/shivam/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 1073, in
main()
File "/home/shivam/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 1067, in main
ToolchainCL()
File "/home/shivam/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 576, in
__init__
getattr(self, args.subparser_name.replace('-', '_'))(args)
File "/home/shivam/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 144, in
wrapper_func
user_ndk_api=self.ndk_api)
File "pythonforandroid/build.py", line 236, in prepare_build_environment
avdmanager = sh.Command(join(sdk_dir, 'tools', 'bin', 'avdmanager'))
File "/home/shivam/.local/lib/python2.7/site-packages/sh.py", line 1202, in __init__
raise CommandNotFound(path)
sh.CommandNotFound: /home/shivam/.buildozer/android/platform/android-sdk/tools/bin/avdmanager
# Command failed: /usr/bin/python -m pythonforandroid.toolchain create --dist_name=firstapp --bootstrap=sdl2 --requirements=python3,kivy --arch armeabi-v7a --copy-libs --color=always --storage-dir="/home/shivam/.buildozer/android/platform/build" --ndk-api=21
#
# Buildozer failed to execute the last command
# The error might be hidden in the log above this error
# Please read the full log, and search for it before
# raising an issue with buildozer itself.
# In case of a bug report, please add a full log with log_level = 2
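For what it's worth, the traceback names the exact path it cannot find, so a quick check (using the SDK location from the log above) is:

ls /home/shivam/.buildozer/android/platform/android-sdk/tools/bin/
# if avdmanager and sdkmanager are not listed here, the SDK command-line
# "tools" package was never unpacked into this SDK directory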
I just installed Yocto, on the morty branch, and executed the following commands:
cd poky
source oe-init-build-env build-qemuarm
In conf/local.conf I changed the machine name to MACHINE ?= "qemuarm".
Then I executed the following:
$ bitbake core-image-minimal
Loading cache: 100% |##########################################################################################################| Time: 0:00:00
Loaded 1320 entries from dependency cache.
ERROR: Execution of event handler 'sstate_eventhandler2' failed
Traceback (most recent call last):
File "/home/some-user/projects/melp/poky/meta/classes/sstate.bbclass", line 1015, in sstate_eventhandler2(e=<bb.event.ReachableStamps object at 0x7fbc17f2e0f0>):
for l in lines:
> (stamp, manifest, workdir) = l.split()
if stamp not in stamps:
ValueError: not enough values to unpack (expected 3, got 1)
ERROR: Command execution failed: Traceback (most recent call last):
File "/home/some-user/projects/melp/poky/bitbake/lib/bb/command.py", line 101, in runAsyncCommand
self.cooker.updateCache()
File "/home/some-user/projects/melp/poky/bitbake/lib/bb/cooker.py", line 1658, in updateCache
bb.event.fire(event, self.databuilder.mcdata[mc])
File "/home/some-user/projects/melp/poky/bitbake/lib/bb/event.py", line 201, in fire
fire_class_handlers(event, d)
File "/home/some-user/projects/melp/poky/bitbake/lib/bb/event.py", line 124, in fire_class_handlers
execute_handler(name, handler, event, d)
File "/home/some-user/projects/melp/poky/bitbake/lib/bb/event.py", line 96, in execute_handler
ret = handler(event)
File "/home/some-user/projects/melp/poky/meta/classes/sstate.bbclass", line 1015, in sstate_eventhandler2
(stamp, manifest, workdir) = l.split()
ValueError: not enough values to unpack (expected 3, got 1)
It looks like a Python error. Does anyone know what the issue is? Am I using the wrong version?
Here is the output of python --version:
$ python --version
Python 2.7.12
What am I doing wrong?
You realise that Morty is 18 months old and in a few weeks will no longer be supported, right?
Anyway, it looks like the sstate-cache/ is somehow corrupted. Delete your tmp/ and sstate-cache/ directories and try again.
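Concretely, that cleanup might look like this, assuming you are still using the build directory created by oe-init-build-env:

cd ~/projects/melp/poky/build-qemuarm   # or wherever your build directory is
rm -rf tmp/ sstate-cache/
bitbake core-image-minimal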
I have a Dockerfile that builds an image based on CentOS (tag: centos6):
FROM centos
RUN rpm -iUvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
RUN yum update -y
RUN yum install ansible -y
ADD ./ansible /home/root/ansible
RUN cd /home/root/ansible;ansible-playbook -v -i hosts site.yml
Everything works fine until Docker hits the last line, then I get the following errors:
[WARNING]: The version of gmp you have installed has a known issue regarding
timing vulnerabilities when used with pycrypto. If possible, you should update
it (ie. yum update gmp).
PLAY [all] ********************************************************************
GATHERING FACTS ***************************************************************
Traceback (most recent call last):
File "/usr/bin/ansible-playbook", line 317, in <module>
sys.exit(main(sys.argv[1:]))
File "/usr/bin/ansible-playbook", line 257, in main
pb.run()
File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 319, in run
if not self._run_play(play):
File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 620, in _run_play
self._do_setup_step(play)
File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 565, in _do_setup_step
accelerate_port=play.accelerate_port,
File "/usr/lib/python2.6/site-packages/ansible/runner/__init__.py", line 204, in __init__
cmd = subprocess.Popen(['ssh','-o','ControlPersist'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
File "/usr/lib64/python2.6/subprocess.py", line 642, in __init__
errread, errwrite)
File "/usr/lib64/python2.6/subprocess.py", line 1234, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Stderr from the command:
package epel-release-6-8.noarch is already installed
I imagine that the cause of the error is the gmp package not being up to date.
There is a related issue on GitHub: https://github.com/ansible/ansible/issues/6941
But there doesn't seem to be any solution at the moment...
Any ideas?
Thanks in advance!
My site.yml playbook:
- hosts: all
  pre_tasks:
    - shell: echo 'hello'
Make sure that the files site.yml and hosts are present in the directory you're adding to /home/root/ansible.
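One quick way to verify that is to list the directory right after the ADD step (a throwaway debug line, not something to keep in the final Dockerfile):

ADD ./ansible /home/root/ansible
RUN ls -la /home/root/ansible    # site.yml and hosts should both show up here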
Side note: you can simplify your Dockerfile by using WORKDIR:
WORKDIR /home/root/ansible
RUN ansible-playbook -v -i hosts site.yml
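With that change, the tail of the Dockerfile could look like this (a sketch; the earlier FROM / RUN lines stay as they are):

ADD ./ansible /home/root/ansible
WORKDIR /home/root/ansible
RUN ansible-playbook -v -i hosts site.yml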