Yocto: conflict between attempted installs

I have a conflict between a number of files being installed.
I am getting the error below:
Transaction Summary
================================================================================
Install  612 Packages

Total size: 110 M
Installed size: 403 M
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Error: Transaction check error:
  file /etc/iproute2/rt_protos conflicts between attempted installs of base-files-3.0.14-r89.nexbox_a95x_s905x and iproute2-4.14.1-r0.aarch64
  file /etc/iproute2/rt_tables conflicts between attempted installs of base-files-3.0.14-r89.nexbox_a95x_s905x and iproute2-4.14.1-r0.aarch64
  file /etc/sysctl.conf conflicts between attempted installs of base-files-3.0.14-r89.nexbox_a95x_s905x and procps-3.3.12-r0.aarch64

Error Summary
-------------

ERROR: amlogic-image-headless-sd-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: /home/user/amlogic-bsp/build/tmp/work/nexbox_a95x_s905x-poky-linux/amlogic-image-headless-sd/1.0-r0/temp/log.do_rootfs.29264
ERROR: Task (/home/user/amlogic-bsp/meta-meson/recipes-core/images/amlogic-image-headless-sd.bb:do_rootfs) failed with exit code '1'
NOTE: Tasks Summary: Attempted 3131 tasks of which 3130 didn't need to be rerun and 1 failed.
I have seen somewhere that I should pin a file, but how do I do this? I can't find a tutorial or any reference to what that means.
I am also getting the below warning. Is this related?
WARNING: Layer meson should set LAYERSERIES_COMPAT_meson in its
conf/layer.conf file to list the core layer names it is compatible
with.
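(For reference, that warning just means the meta-meson layer does not declare which release series it supports. A hedged sketch of the single line it is asking for in meta-meson/conf/layer.conf follows; the series name is an assumption and must match the release you are building against:)

# in meta-meson/conf/layer.conf; "sumo" is an assumption, use your release series
LAYERSERIES_COMPAT_meson = "sumo"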
I'm new to OE, coming over from OpenWrt.
For bitbake, I've added the layers for the packages below:
meta-openwrt: OE/Yocto metadata layer for OpenWrt
superna9999/meta-meson: upstream Linux Amlogic Meson Yocto/OpenEmbedded layer
and tried to compile the nexbox-a95x-s905x image.

I think the problem is that /etc/iproute2/rt_protos is provided by base-files, which comes from meta-openwrt, as well as by the iproute2 package, which comes from other OE layers. It's not clear to the image builder which one to use, hence the conflict.
You can solve it by defining an iproute2_%.bbappend file in meta-openwrt that deletes this file from the iproute2 package, so that preference is given to the one OpenWrt's base-files provides:
do_install_append() {
    # remove the copy shipped by iproute2 so the one from base-files wins
    rm -rf ${D}${sysconfdir}/iproute2/rt_protos
}
should help.
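For completeness, a minimal sketch of the whole bbappend, assuming meta-openwrt keeps its iproute2 append under recipes-connectivity (the path is a guess), and also removing rt_tables since the log above shows the same conflict for that file:

# meta-openwrt/recipes-connectivity/iproute2/iproute2_%.bbappend (path is an assumption)
do_install_append() {
    rm -f ${D}${sysconfdir}/iproute2/rt_protos
    rm -f ${D}${sysconfdir}/iproute2/rt_tables
}

The /etc/sysctl.conf conflict with procps would presumably need the same treatment in a procps_%.bbappend.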

Related

AWS Elastic Beanstalk failed to install psycopg2 using requirements.txt

I am trying to deploy an app using Elastic Beanstalk with Python 3.8. I am using the following requirements.txt:
click==8.0.1
Flask==1.1.2
Flask-SQLAlchemy==2.5.1
greenlet==1.1.0
itsdangerous==2.0.1
Jinja2==3.0.1
MarkupSafe==2.0.1
marshmallow==3.12.1
marshmallow-sqlalchemy==0.25.0
SQLAlchemy==1.4.15
Werkzeug==2.0.1
celery[redis]
psycopg2==2.9.3
Flask-JWT-Extended==4.3.1
Flask-RESTful==0.3.9
python-decouple==3.6
When I run the command eb create, I get the following error:
2022-04-05 22:03:00 INFO Created security group named: sg-00b14485064e5e8ca
2022-04-05 22:03:16 INFO Created security group named: awseb-e-ekd3bw2bvf-stack-AWSEBSecurityGroup-1O3NAVBIRRK30
2022-04-05 22:03:31 INFO Created Auto Scaling launch configuration named: awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingLaunchConfiguration-HKjIVsa84E3U
2022-04-05 22:04:49 INFO Created Auto Scaling group named: awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingGroup-5FQOAWMGCR3W
2022-04-05 22:04:49 INFO Waiting for EC2 instances to launch. This may take a few minutes.
2022-04-05 22:04:49 INFO Created Auto Scaling group policy named: arn:aws:autoscaling:us-east-1:208357543212:scalingPolicy:ecfbbff0-4151-492f-a474-ba01535ad348:autoScalingGroupName/awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingGroup-5FQOAWMGCR3W:policyName/awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingScaleDownPolicy-CI2UIP6X023P
2022-04-05 22:04:49 INFO Created Auto Scaling group policy named: arn:aws:autoscaling:us-east-1:208357543212:scalingPolicy:d534189a-45e3-48f1-a206-720f202b4469:autoScalingGroupName/awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingGroup-5FQOAWMGCR3W:policyName/awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingScaleUpPolicy-1F0WVTUXXPFKF
2022-04-05 22:05:04 INFO Created CloudWatch alarm named: awseb-e-ekd3bw2bvf-stack-AWSEBCloudwatchAlarmLow-W8URMJEYBO3C
2022-04-05 22:05:04 INFO Created CloudWatch alarm named: awseb-e-ekd3bw2bvf-stack-AWSEBCloudwatchAlarmHigh-13J8QHI51MEBM
2022-04-05 22:06:09 INFO Created load balancer named: arn:aws:elasticloadbalancing:us-east-1:208357543212:loadbalancer/app/awseb-AWSEB-IXOR2Z0K0OJV/1fba4c6ff6122c55
2022-04-05 22:06:24 INFO Created Load Balancer listener named: arn:aws:elasticloadbalancing:us-east-1:208357543212:listener/app/awseb-AWSEB-IXOR2Z0K0OJV/1fba4c6ff6122c55/734b0cf960b6b8c4
2022-04-05 22:06:42 ERROR Instance deployment failed to install application dependencies. The deployment failed.
2022-04-05 22:06:42 ERROR Instance deployment failed. For details, see 'eb-engine.log'.
2022-04-05 22:06:44 ERROR [Instance: i-0368a7ba2157241f4] Command failed on instance. Return code: 1 Output: Engine execution has encountered an error..
2022-04-05 22:06:45 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2022-04-05 22:07:48 ERROR Create environment operation is complete, but with errors. For more information, see troubleshooting documentation.
When I look at the corresponding logs, I see the following error:
Collecting Werkzeug==2.0.1
Downloading Werkzeug-2.0.1-py3-none-any.whl (288 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 288.2/288.2 KB 35.6 MB/s eta 0:00:00
Collecting celery[redis]
Downloading celery-5.2.6-py3-none-any.whl (405 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 405.6/405.6 KB 54.7 MB/s eta 0:00:00
Collecting psycopg2==2.9.3
Downloading psycopg2-2.9.3.tar.gz (380 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 380.6/380.6 KB 52.2 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
2022/04/05 22:06:42.952376 [INFO] error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [23 lines of output]
running egg_info
creating /tmp/pip-pip-egg-info-v0aygozt/psycopg2.egg-info
writing /tmp/pip-pip-egg-info-v0aygozt/psycopg2.egg-info/PKG-INFO
writing dependency_links to /tmp/pip-pip-egg-info-v0aygozt/psycopg2.egg-info/dependency_links.txt
writing top-level names to /tmp/pip-pip-egg-info-v0aygozt/psycopg2.egg-info/top_level.txt
writing manifest file '/tmp/pip-pip-egg-info-v0aygozt/psycopg2.egg-info/SOURCES.txt'
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source. Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
I am not very familiar with the requirements of AWS, but I can run the app locally without any problem. I just wonder what the right configuration for the requirements.txt file would be in order to avoid this error.
Thanks in advance.
You have to install postgresql-devel before you can build psycopg2. You can add the installation instructions to your .ebextensions:
packages:
  yum:
    postgresql-devel: []
or
commands:
  command1:
    command: yum install -y postgresql-devel
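Either variant goes into a config file under the .ebextensions directory at the root of the application bundle; Elastic Beanstalk picks up any *.config file there, so the file name below is an arbitrary assumption:

# .ebextensions/01_packages.config (file name is an assumption)
packages:
  yum:
    postgresql-devel: []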
I could solve the error. I had to change psycopg2 to psycopg2-binary, as suggested by the AWS logs:
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
This issue has to do with the particular configuration of the libraries and the specific Linux machines used by AWS.
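Concretely, the only change needed in requirements.txt is to swap the package name, keeping the version pin from the list above:

psycopg2-binary==2.9.3   # replaces psycopg2==2.9.3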

Error running mftf in Magento 2.3.5: I get an error on build:project

This is my first time ever running mftf. I installed it with Composer in a Magento 2.3.5 instance, and when I run the build I get this message:
mftf build:project
mftf files removed from filesystem.
codeception.yml applied to /var/www/html/dev/tests/acceptance/codeception.yml
functional.suite.yml configuration successfully applied.
functional.suite.yml applied to /var/www/html/dev/tests/acceptance/tests/functional.suite.yml
command.php copied to /var/www/html/dev/tests/acceptance/utils/command.php
.credentials.example successfully applied.
.env configuration successfully applied.
In Configuration.php line 212:
Output path is not defined by key "paths: output"
In BuildProjectCommand.php line 105:
The codecept build command failed unexpectedly. Please see the above output for more details.
Any ideas what this can be?

Backport installation script for Broadcom 14e4:43ae wifi controller fails

I recently bought a Lenovo 500-15ACZ notebook and installed Ubuntu 16.04 on it. After the installation I found I couldn't connect to Wi-Fi. When I googled the issue, this seemed to be a common problem for Broadcom Wi-Fi cards. I found this question on Ask Ubuntu and followed the steps of the answer by Luis Alvarado.
The command lspci -nn -d 14e4: showed me that the PCI ID of my device is 14e4:43ae (rev 02), which is not yet supported in Linux.
However, there is a script (link to project) on GitHub that tries to solve this via a backport:
#!/bin/bash
cd /tmp
git clone https://github.com/kvalo/ath10k-firmware.git
cd ath10k-firmware/QCA9377/hw1.0
sudo mkdir -p /lib/firmware/ath10k/QCA9377/hw1.0
sudo cp board.bin /lib/firmware/ath10k/QCA9377/hw1.0
sudo cp firmware-5.bin_WLAN.TF.1.0-00267-1 /lib/firmware/ath10k/QCA9377/hw1.0/firmware-5.bin
sudo modprobe -r ath10k_pci
cd /tmp
wget https://www.kernel.org/pub/linux/kernel/projects/backports/2015/11/20/backports-20151120.tar.gz
tar -xf backports-20151120.tar.gz
cd backports-20151120
make defconfig-ath10k
make
sudo make install
But when I tried to run this, make threw the following error:
Building backport-include/backport/autoconf.h ... done.
CC [M] /tmp/backports-20151120/compat/main.o
In file included from /tmp/backports-20151120/backport-include/backport/backport.h:7:0,
from <command-line>:0:
./include/asm-generic/qrwlock.h: In function ‘__qrwlock_write_byte’:
/tmp/backports-20151120/backport-include/linux/kconfig.h:25:28: error: implicit declaration of function ‘config_enabled’ [-Werror=implicit-function-declaration]
#define IS_BUILTIN(option) config_enabled(option)
^
./include/asm-generic/qrwlock.h:156:26: note: in expansion of macro ‘IS_BUILTIN’
return (u8 *)lock + 3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN);
^
./include/asm-generic/qrwlock.h:156:37: error: ‘CONFIG_CPU_BIG_ENDIAN’ undeclared (first use in this function)
return (u8 *)lock + 3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN);
^
/tmp/backports-20151120/backport-include/linux/kconfig.h:25:43: note: in definition of macro ‘IS_BUILTIN’
#define IS_BUILTIN(option) config_enabled(option)
^
./include/asm-generic/qrwlock.h:156:37: note: each undeclared identifier is reported only once for each function it appears in
return (u8 *)lock + 3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN);
^
/tmp/backports-20151120/backport-include/linux/kconfig.h:25:43: note: in definition of macro ‘IS_BUILTIN’
#define IS_BUILTIN(option) config_enabled(option)
^
cc1: some warnings being treated as errors
scripts/Makefile.build:294: recipe for target '/tmp/backports-20151120/compat/main.o' failed
make[6]: *** [/tmp/backports-20151120/compat/main.o] Error 1
scripts/Makefile.build:567: recipe for target '/tmp/backports-20151120/compat' failed
make[5]: *** [/tmp/backports-20151120/compat] Error 2
Makefile:1524: recipe for target '_module_/tmp/backports-20151120' failed
make[4]: *** [_module_/tmp/backports-20151120] Error 2
Makefile.build:6: recipe for target 'modules' failed
make[3]: *** [modules] Error 2
Makefile.real:88: recipe for target 'modules' failed
make[2]: *** [modules] Error 2
Makefile:40: recipe for target 'modules' failed
make[1]: *** [modules] Error 2
Makefile:30: recipe for target 'default' failed
make: *** [default] Error 2
Running sudo make install then fails with the same error:
CC [M] /tmp/backports-20151120/compat/main.o
In file included from /tmp/backports-20151120/backport-include/backport/backport.h:7:0,
from <command-line>:0:
./include/asm-generic/qrwlock.h: In function ‘__qrwlock_write_byte’:
/tmp/backports-20151120/backport-include/linux/kconfig.h:25:28: error: implicit declaration of function ‘config_enabled’ [-Werror=implicit-function-declaration]
#define IS_BUILTIN(option) config_enabled(option)
^
./include/asm-generic/qrwlock.h:156:26: note: in expansion of macro ‘IS_BUILTIN’
return (u8 *)lock + 3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN);
^
./include/asm-generic/qrwlock.h:156:37: error: ‘CONFIG_CPU_BIG_ENDIAN’ undeclared (first use in this function)
return (u8 *)lock + 3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN);
^
/tmp/backports-20151120/backport-include/linux/kconfig.h:25:43: note: in definition of macro ‘IS_BUILTIN’
#define IS_BUILTIN(option) config_enabled(option)
^
./include/asm-generic/qrwlock.h:156:37: note: each undeclared identifier is reported only once for each function it appears in
return (u8 *)lock + 3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN);
^
/tmp/backports-20151120/backport-include/linux/kconfig.h:25:43: note: in definition of macro ‘IS_BUILTIN’
#define IS_BUILTIN(option) config_enabled(option)
^
cc1: some warnings being treated as errors
scripts/Makefile.build:294: recipe for target '/tmp/backports-20151120/compat/main.o' failed
make[5]: *** [/tmp/backports-20151120/compat/main.o] Error 1
scripts/Makefile.build:567: recipe for target '/tmp/backports-20151120/compat' failed
make[4]: *** [/tmp/backports-20151120/compat] Error 2
Makefile:1524: recipe for target '_module_/tmp/backports-20151120' failed
make[3]: *** [_module_/tmp/backports-20151120] Error 2
Makefile.build:6: recipe for target 'modules' failed
make[2]: *** [modules] Error 2
Makefile.real:88: recipe for target 'modules' failed
make[1]: *** [modules] Error 2
Makefile:40: recipe for target 'install' failed
make: *** [install] Error 2
Does anyone know how to fix this?
Please let me know if you need any other info.
Thanks in advance!
Update:
I installed the broadcom-sta-dkms package as you suggested. Unfortunately, you were right; this didn't work.
When I tried the wl driver, dmesg | grep -i wl returned:
[ 12.459884] wl: loading out-of-tree module taints kernel.
[ 12.459890] wl: module license 'MIXED/Proprietary' taints kernel.
[ 12.468203] wl: module verification failed: signature and/or required key missing - tainting kernel
[ 12.487603] wl driver 6.30.223.271 (r587334) failed with code 1001
[ 12.487606] ERROR #wl_cfg80211_detach :
[ 12.487607] NULL ndev->ieee80211ptr, unable to deref wl
However, I'm afraid I am not sure what this means. For the other drivers, dmesg returned nothing.
Well, I'd suggest being consistent. You have a Wi-Fi device and you know its PCI vendor ID (the part before the colon) and device ID: 14e4:43ae. Your question doesn't include a complete excerpt from lspci, so it's not clear whether your device is actually identified as Broadcom. However, if we assume it is, we can search for it.
Here is what the WikiDevi page says:
802.11a/b/g/n/ac WLAN + Bluetooth 4.0 NGFF 2230 Mini Card
WI1 chip1: Broadcom BCM43162
Probable Linux driver unknown
PCI ID not yet observed in any mainline kernel / this list
So, as you can see, this page sheds light on important things such as the chip name and whether current kernel code is aware of this PCI ID. The latter means that, according to their research, no driver in the mainline kernel tree has this ID in the PCI ID table the kernel uses to decide which driver to probe for a given device. Nothing is known about this PCI ID.
But now we know for sure that this one is indeed a Broadcom device.
Looking at the excerpt from the script you are trying to use baffles me a lot, since it is for Qualcomm Atheros, not for Broadcom. It grabs QCA firmware from a (possibly) untrusted repository and compiles a backported ath10k driver. So, at this point we know that a question merely about the compilation errors is unhelpful from the very beginning. But, of course, one may suppose that either the Linux kernel headers package is not installed or the version of the backported ath10k is not compatible with your current kernel. That's it.
So, it's clear that we should look for Broadcom drivers (and, possibly, Broadcom firmware) instead. From this perspective I can tell you that three types of drivers are available for Broadcom devices: b43 (mostly legacy), the vendor-licensed broadcom-sta (wl), and the in-tree brcm80211. The latter is a common name for brcmsmac and brcmfmac.
Here are the authoritative pages with up-to-date info:
b43 - http://linuxwireless.org/en/users/Drivers/b43/
brcm80211 - https://wireless.wiki.kernel.org/en/users/drivers/brcm80211
Also, a more or less descriptive page for the vendor-licensed wl:
https://wiki.debian.org/wl
I can't find your PCI ID on any of these pages. This indeed confirms that the corresponding support has not been added yet. However, we can confirm it further by simply trying the drivers at hand. It's obvious that the in-kernel b43 and brcm80211 don't work for you, but it might be useful to take a look at dmesg - perhaps brcm80211 is loaded but can't find firmware.
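As a quick check (a sketch; the PCI ID is the one from the question), the following shows whether any driver has claimed the device and whether firmware errors were logged:

lspci -nnk -d 14e4:43ae                    # "Kernel driver in use:" names the bound driver, if any
lsmod | egrep 'b43|brcmsmac|brcmfmac|wl'   # which Broadcom modules are loaded
dmesg | egrep -i 'b43|brcm|firmware'       # probe and firmware messages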
If nothing useful is found, then it would be nice to try wl. This driver is distributed via the broadcom-sta package (Debian, Ubuntu); see the corresponding description on the Ubuntu website.
So, to try wl you need to make sure that you have the proper Linux headers and then just install the broadcom-sta-dkms package:
apt-get update
apt-get install linux-headers-$(uname -r)
apt-get install broadcom-sta-dkms
Hopefully it will compile and install. Then reboot and take a look at what happens with your Wi-Fi. Most likely this won't help (since I suppose your device is really not supported yet), but if it works, you will be able to use it. Even if you see for sure that your device doesn't work with wl, again, as in the case of brcm80211, it's worth taking a look at the dmesg output. However, searching for a valid firmware image, for example (if dmesg complains about one), is a separate question and should be discussed separately.
Also, I can expand on this topic and mention that in certain mailing lists some folks have already asked about plans to add support for this device. Here is one of the links. So, if neither brcm80211 nor wl (broadcom-sta-dkms) helps you, you may consider sending an email to one of the brcm80211 supporters. Their names and email addresses are listed on the page. There are Broadcom employees among them. If you ask them for advice, you will also help other people.
UPDATE
So, you say that b43 (also b43_legacy) and brcm80211 keep silent in dmesg. This could mean that your PCI ID is not supported by these drivers.
As for the wl output, I can share mine for comparison:
wl: loading out-of-tree module taints kernel.
wl: module license 'MIXED/Proprietary' taints kernel.
Disabling lock debugging due to kernel taint
wlan0: Broadcom BCM43a0 802.11 Hybrid Wireless Controller 6.30.223.271 (r587334)
This means that your output, compared with mine, again amounts to some sort of silence. However, it's too murky to say for sure whether your device is unsupported or whether there is some firmware issue.
So, it seems like no options remain here.
However, you may still consider an ndiswrapper solution. In two words, it's a special tool/driver which lets you install the proper inf and sys files from the Windows driver (i.e. you have to obtain it for your card somewhere, e.g. extract it from the CD or download it from the Broadcom website) in such a way that the driver operates in Linux as it would in a Windows environment.
This type of solution has its drawbacks and limitations. First of all, only Windows XP versions of wireless drivers are supported, so if you've got, say, a ZIP package from the vendor's website, you need to extract the inf and sys files from the directory named after Windows XP (not Vista/7/10), and you need to pay attention to the CPU architecture choice (32-bit / 64-bit). Here is an article from Debian which could fit Ubuntu as well.
This kind of solution may also come with extra drawbacks and unexpectedly bad operation (a topic for a separate talk), and in general it is considered a poor substitute for a missing driver. So, many people in such a situation just prefer to swap their unsupported card for some other one, or simply wait until the missing support is added to one of the native drivers. It's up to you.
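If you do go down that road, a minimal sketch of the usual steps looks like this - the package names and the .inf filename are assumptions and depend on your release and on the Windows XP driver you obtain:

sudo apt-get install ndiswrapper-dkms ndiswrapper-utils-1.9   # package names are assumptions; check your release
sudo ndiswrapper -i /path/to/xp-driver/bcmwl5.inf             # .inf filename is hypothetical; its .sys must sit alongside
ndiswrapper -l                                                # should report the driver and "hardware present"
sudo modprobe ndiswrapper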

PPP install error on Raspberry Pi

I am rather new to Linux and have been playing with the Raspberry Pi. I am trying to get a modem I have working with this Pi. I am still learning, but any help on how to fix this situation would be great.
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
ppp
The following NEW packages will be installed:
ppp
0 upgraded, 1 newly installed, 0 to remove and 316 not upgraded.
5 not fully installed or removed.
Need to get 0 B/354 kB of archives.
After this operation, 772 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
(Reading database ... 65955 files and directories currently installed.)
Unpacking ppp (from .../ppp_2.4.5-5.1_armhf.deb) ...
update-rc.d: using dependency based boot sequencing
insserv: warning: script 'wireless-reconnect' missing LSB tags and overrides
insserv: There is a loop between service watchdog and wireless-reconnect if stopped
insserv: loop involving service wireless-reconnect at depth 2
insserv: loop involving service watchdog at depth 1
insserv: Stopping wireless-reconnect depends on watchdog and therefore on system facility `$all' which can not be true!
insserv: exiting now without changing boot order!
update-rc.d: error: insserv rejected the script header
dpkg: error processing /var/cache/apt/archives/ppp_2.4.5-5.1_armhf.deb (--unpack):
subprocess new pre-installation script returned error exit status 1
Errors were encountered while processing:
/var/cache/apt/archives/ppp_2.4.5-5.1_armhf.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
This happens even after trying sudo apt-get -f install.

Chef Deployment with irrelevant default symlinks

I am trying to deploy my application code with Chef, which is working for one node and failing on another. I cannot determine why it works for one node and not another when they have the exact same config, but I can at least try to debug the problem on the node that fails.
deploy_revision app_config['deploy_dir'] do
  scm_provider Chef::Provider::Git
  repo app_config['deploy_repo']
  revision app_config['deploy_branch']
  if secrets["deploy_key"]
    git_ssh_wrapper "#{app_config['deploy_dir']}/git-ssh-wrapper" # For private Git repos
  end
  enable_submodules true
  shallow_clone false
  symlink_before_migrate({})    # Symlinks to add before running db migrations
  purge_before_symlink []       # Directories to delete before adding symlinks
  create_dirs_before_symlink [] # Directories to create before adding symlinks
  # symlinks()
  action :deploy
  restart_command do
    service "apache2" do action :restart; end
  end
end
This is my recipe for deploying the code. Notice that I have tried disabling symlinking entirely, as Chef always jams its own defaults in. Even with this I get the error:
================================================================================
Error executing action `deploy` on resource 'deploy_revision[/var/www]'
================================================================================
Chef::Exceptions::FileNotFound
------------------------------
Cannot symlink /var/www/shared/config/database.yml to /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml before migrate: No such file or directory - /var/www/shared/config/database.yml or /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml
Resource Declaration:
---------------------
# In /var/chef/cache/cookbooks/kapture/recipes/api.rb
68:
69: deploy_revision app_config['deploy_dir'] do
70: scm_provider Chef::Provider::Git
71: repo app_config['deploy_repo']
72: revision app_config['deploy_branch']
73: if secrets["deploy_key"]
74: git_ssh_wrapper "#{app_config['deploy_dir']}/git-ssh-wrapper" # For private Git repos
75: end
76: enable_submodules true
Compiled Resource:
------------------
# Declared in /var/chef/cache/cookbooks/kapture/recipes/api.rb:69:in `from_file'
deploy_revision("/var/www") do
destination "/var/www/shared/cached-copy"
symlink_before_migrate {"config/database.yml"=>"config/database.yml"}
updated_by_last_action true
restart_command #<Proc:0x00007f40f366e5a0@/var/chef/cache/cookbooks/kapture/recipes/api.rb:82>
repository_cache "cached-copy"
retries 0
keep_releases 5
create_dirs_before_symlink ["tmp", "public", "config"]
updated true
provider Chef::Provider::Deploy::Revision
enable_submodules true
deploy_to "/var/www"
current_path "/var/www/current"
recipe_name "api"
revision "HEAD"
scm_provider Chef::Provider::Git
purge_before_symlink ["log", "tmp/pids", "public/system"]
git_ssh_wrapper "/var/www/git-ssh-wrapper"
remote "origin"
shared_path "/var/www/shared"
cookbook_name "kapture"
symlinks {"log"=>"log", "system"=>"public/system", "pids"=>"tmp/pids"}
action [:deploy]
repo "git#github.com:kapture/api.git"
retry_delay 2
end
[2012-09-24T15:42:07+00:00] ERROR: Running exception handlers
[2012-09-24T15:42:07+00:00] FATAL: Saving node information to /var/chef/cache/failed-run-data.json
[2012-09-24T15:42:07+00:00] ERROR: Exception handlers complete
[2012-09-24T15:42:07+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2012-09-24T15:42:07+00:00] FATAL: Chef::Exceptions::FileNotFound: deploy_revision[/var/www] (kapture::api line 69) had an error: Chef::Exceptions::FileNotFound: Cannot symlink /var/www/shared/config/database.yml to /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml before migrate: No such file or directory - /var/www/shared/config/database.yml or /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml
Here you can see it mentions database.yml and the tmp/, system/ and pids folders, all of which are defaults that Chef likes to set (see the related bug).
Question 1
What are these symlinks for, and how do I know if I even need any? What sort of things am I symlinking? I will be using migrations, so if they are useful for the migration then I'll need them working.
I have read the documentation many times and it just doesn't explain this is plain English - at least not that I have found.
Question 2
If I do not require them, how can I disable symlinking entirely? Following the examples in that bug report has not helped.
Clear out all the symlink attributes.
deploy_revision("/var/www") do
# ...
symlink_before_migrate.clear
create_dirs_before_symlink.clear
purge_before_symlink.clear
symlinks.clear
end
Make sure the deployment directory in shared has the proper directory structure (/var/www/shared/[log,pids,system,config]) and that all config files necessary for your application are in the config directory.
Your application cookbook's recipe should create that array of directory names (recursively) so that you won't run into this error again.
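A minimal sketch of what that could look like in the cookbook, run before the deploy_revision resource (the paths follow the /var/www layout from the error output; the owner/group are assumptions):

# create the shared directories the deploy resource expects to symlink from
%w[config log pids system].each do |dir|
  directory "/var/www/shared/#{dir}" do
    owner "www-data"   # assumption: use your application's user
    group "www-data"
    mode "0755"
    recursive true
  end
end

The missing /var/www/shared/config/database.yml from the error would also need to be put in place (for example via a template or cookbook_file resource) before the deploy runs.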
The symlinks are there so that, while your application code continues to evolve, you can share the log, pids, and system folders by symlinking shared/log to current/log, and so on.
Chef happens to cache the directory structure - somehow, I haven't dug into how - with this application cookbook. It's something in the deploy resource, I believe - I never use it - but you can fix it by deleting the directory structure it creates under your deploy directory (/var/www here). Also ensure your tmp directory is set up.
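If you want to force a clean checkout, one concrete thing you can clear is the cached git copy; its location appears in the compiled resource output above (a sketch based on that output, not necessarily the cache referred to in the previous paragraph):

rm -rf /var/www/shared/cached-copy   # path taken from the "destination" attribute in the compiled resource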
A couple reasons this may be an issue:
File permissions are incorrect
The currently running chef user cannot access the file
You are using the application cookbook in a configuration meant for Rails deploys, and your application does not have the same directory structure.
It's definitely caused by Chef caching the state of the deploy somewhere, then reading that state out of its cache - wherever that is - and then reusing it. I would look at the application cookbook for any persistence, and if you fail to find it there, look in the deploy resource from Chef itself.