gatsby new [site] failing: Cannot find module 'gatsby/dist/commands/develop' - babeljs

I'm pretty green here. I've run Gatsby on this machine (macOS) before, but it has stopped working, presumably due to an install or environment variable change somewhere?
I first noticed it with Module build failed: Error: Couldn't find preset "flow" relative to directory "/Users/3Legs"
Then, after installing the Babel preset:
npm install --global --save-dev babel-preset-flow
I get the above message:
gatsby develop
... Cannot find module 'gatsby/dist/commands/develop'
Full terminal trace below, plus my .babelrc.
Michaels-MacBook-Air:mggatsby 3Legs$
gatsby new test
-bash: /usr/local/bin/gatsby: No such file or directory
Michaels-MacBook-Air:mggatsby 3Legs$
npm install --global gatsby-cli
WARN registry Unexpected warning for registry.npmjs.org: Miscellaneous Warning EINTEGRITY: sha1-xRn2KfhrOlvtuliojTETCe7Al/k= integrity checksum failed when using sha1: wanted sha1-xRn2KfhrOlvtuliojTETCe7Al/k= but got sha512-vE2hT1D0HLZCLLclfBSfkfTTedhVj0fubHpJBHKwwUWX0nSbhPAfk+SG9rTX95BYNmau8rGFfCeaT6T5OW1C2A==. (455516 bytes)
WARN registry Using stale package data from registry.npmjs.org/ due to a request error during revalidation.
WARN registry Unexpected warning for registry.npmjs.org/: Miscellaneous Warning EINTEGRITY: sha1-buxr+wdCHiFIx1xrunJCH4UwqCY= integrity checksum failed when using sha1: wanted sha1-buxr+wdCHiFIx1xrunJCH4UwqCY= but got sha512-+ktMAh1Jwas+TnGodfCfjUbJKoANqPaJFN0z0iqh41eqD8dvguNzcitVSBSVK1pidz0AqGbLKcoVuVLRVZ/aVg==. (42903 bytes)
WARN registry Using stale package data from registry.npmjs.org/ due to a request error during revalidation.
/usr/local/bin/gatsby -> /usr/local/lib/node_modules/gatsby-cli/lib/index.js
+ gatsby-cli@1.1.1
added 153 packages, removed 5 packages and updated 1 package in 10.573s
Michaels-MacBook-Air:mggatsby 3Legs$
gatsby new test
info Creating new site from git: git://github.com/gatsbyjs/gatsby-starter-default.git
Cloning into 'test'...
remote: Counting objects: 566, done.
remote: Total 566 (delta 0), reused 0 (delta 0), pack-reused 566
Receiving objects: 100% (566/566), 358.35 KiB | 187.00 KiB/s, done.
Resolving deltas: 100% (316/316), done.
success Created starter directory layout
info Installing packages...
yarn install v0.27.5
info No lockfile found.
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
warning "eslint-config-fbjs#1.1.1" has unmet peer dependency "babel-eslint#^6.1.2".
warning "eslint-config-fbjs#1.1.1" has unmet peer dependency "eslint#^3.0.0".
warning "eslint-config-fbjs#1.1.1" has unmet peer dependency "eslint-plugin-babel#^3.3.0".
warning "eslint-config-fbjs#1.1.1" has unmet peer dependency "eslint-plugin-flowtype#^2.15.0".
warning "eslint-config-fbjs#1.1.1" has unmet peer dependency "eslint-plugin-react#^5.2.2".
[4/4] Building fresh packages...
success Saved lockfile.
Done in 47.94s.
Michaels-MacBook-Air:mggatsby 3Legs$
cd test
Michaels-MacBook-Air:test 3Legs$
gatsby develop
success delete html files from previous builds — 0.010 s
success open and validate gatsby-config.js — 0.006 s
success copy gatsby files — 0.028 s
success source and transform nodes — 0.045 s
success building schema — 0.134 s
success createLayouts — 0.039 s
success createPages — 0.016 s
success createPagesStatefully — 0.016 s
success extract queries from components — 0.118 s
success run graphql queries — 0.030 s
success write out page data — 0.006 s
success update schema — 0.094 s
info bootstrap finished - 3.856 s
error There was an error compiling the html.js component for the development server.
See our docs page on debugging HTML builds for help ...
Error: Module build failed: Error: Couldn't find preset "flow" relative to directory "/Users/3Legs"
Michaels-MacBook-Air:test 3Legs$
npm install --save-dev babel-preset-flow
npm WARN gentlyRm not removing /Users/3Legs/react/mggatsby/test/node_modules/.bin/gatsby as it wasn't installed by /Users/3Legs/react/mggatsby/test/node_modules/gatsby
npm WARN gentlyRm not removing /Users/3Legs/react/mggatsby/test/node_modules/.bin/semver as it wasn't installed by /Users/3Legs/react/mggatsby/test/node_modules/semver
npm WARN gentlyRm not removing /Users/3Legs/react/mggatsby/test/node_modules/jspm-registry/node_modules/.bin/semver as it wasn't installed by /Users/3Legs/react/mggatsby/test/node_modules/jspm-registry/node_modules/semver
npm WARN gentlyRm not removing /Users/3Legs/react/mggatsby/test/node_modules/.bin/browserslist as it wasn't installed by /Users/3Legs/react/mggatsby/test/node_modules/browserslist
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN gatsby-starter-default@1.0.0 No repository field.
+ babel-preset-flow@6.23.0
removed 1375 packages and updated 6 packages in 17.729s
Michaels-MacBook-Air:test 3Legs$
gatsby develop
/usr/local/bin/gatsby develop
Options:
-h, --help Show help [boolean]
-H, --host Set host. Defaults to localhost [string]
-p, --port Set port. Defaults to 8000 [string] [default: "8000"]
-o, --open Open the site in your browser for you. [boolean]
-v, --version Show version number [boolean]
error There was a problem loading the local develop command. Gatsby may not be installed.
Error: Cannot find module 'gatsby/dist/commands/develop'
- index.js:17 resolveFileName
[lib]/[gatsby-cli]/[resolve-from]/index.js:17:39
- index.js:31 resolveFrom
[lib]/[gatsby-cli]/[resolve-from]/index.js:31:9
- index.js:34 module.exports
[lib]/[gatsby-cli]/[resolve-from]/index.js:34:41
- index.js:4 module.exports.moduleId
[lib]/[gatsby-cli]/[resolve-cwd]/index.js:4:30
- create-cli.js:35 resolveLocalCommand
[lib]/[gatsby-cli]/lib/create-cli.js:35:22
- create-cli.js:66 Object.handler
[lib]/[gatsby-cli]/lib/create-cli.js:66:7
- command.js:233 Object.self.runCommand
[lib]/[gatsby-cli]/[yargs]/lib/command.js:233:22
- yargs.js:990 Object.Yargs.self._parseArgs
[lib]/[gatsby-cli]/[yargs]/yargs.js:990:30
- yargs.js:532 Object.Yargs.self.parse
[lib]/[gatsby-cli]/[yargs]/yargs.js:532:23
- create-cli.js:163 module.exports
[lib]/[gatsby-cli]/lib/create-cli.js:163:154
- index.js:122 Object.<anonymous>
[lib]/[gatsby-cli]/lib/index.js:122:1
Michaels-MacBook-Air:~ 3Legs$
cat .babelrc
{
  "presets": ["flow"]
}

npm seems to be removing modules because there isn't a package-lock.json. I'm not sure if this is new behavior, but it has been hitting a lot of people in the past few days.
All you need to do is delete node_modules and any lock file in the project, then run npm install again.
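For example, from the starter directory created above, the cleanup would look roughly like this (a sketch only, using the project path that appears in the npm warnings; adjust it to wherever your site lives):
cd ~/react/mggatsby/test            # the starter created by `gatsby new test`
rm -rf node_modules                 # clear the partially pruned install
rm -f package-lock.json yarn.lock   # drop any stale lock files
npm install                         # reinstall everything, including the local gatsby package
gatsby develop                      # the CLI should now resolve gatsby/dist/commands/develop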

Related

AWS Elastic Beanstalk failed to install psycopg2 using requirements.txt Git Pip

I am trying to deploy an app using Elastic Beanstalk with Python 3.8. I am using the following requirements.txt:
click==8.0.1
Flask==1.1.2
Flask-SQLAlchemy==2.5.1
greenlet==1.1.0
itsdangerous==2.0.1
Jinja2==3.0.1
MarkupSafe==2.0.1
marshmallow==3.12.1
marshmallow-sqlalchemy==0.25.0
SQLAlchemy==1.4.15
Werkzeug==2.0.1
celery[redis]
psycopg2==2.9.3
Flask-JWT-Extended==4.3.1
Flask-RESTful==0.3.9
python-decouple==3.6
When I run the command eb create, I get the following error:
2022-04-05 22:03:00 INFO Created security group named: sg-00b14485064e5e8ca
2022-04-05 22:03:16 INFO Created security group named: awseb-e-ekd3bw2bvf-stack-AWSEBSecurityGroup-1O3NAVBIRRK30
2022-04-05 22:03:31 INFO Created Auto Scaling launch configuration named: awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingLaunchConfiguration-HKjIVsa84E3U
2022-04-05 22:04:49 INFO Created Auto Scaling group named: awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingGroup-5FQOAWMGCR3W
2022-04-05 22:04:49 INFO Waiting for EC2 instances to launch. This may take a few minutes.
2022-04-05 22:04:49 INFO Created Auto Scaling group policy named: arn:aws:autoscaling:us-east-1:208357543212:scalingPolicy:ecfbbff0-4151-492f-a474-ba01535ad348:autoScalingGroupName/awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingGroup-5FQOAWMGCR3W:policyName/awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingScaleDownPolicy-CI2UIP6X023P
2022-04-05 22:04:49 INFO Created Auto Scaling group policy named: arn:aws:autoscaling:us-east-1:208357543212:scalingPolicy:d534189a-45e3-48f1-a206-720f202b4469:autoScalingGroupName/awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingGroup-5FQOAWMGCR3W:policyName/awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingScaleUpPolicy-1F0WVTUXXPFKF
2022-04-05 22:05:04 INFO Created CloudWatch alarm named: awseb-e-ekd3bw2bvf-stack-AWSEBCloudwatchAlarmLow-W8URMJEYBO3C
2022-04-05 22:05:04 INFO Created CloudWatch alarm named: awseb-e-ekd3bw2bvf-stack-AWSEBCloudwatchAlarmHigh-13J8QHI51MEBM
2022-04-05 22:06:09 INFO Created load balancer named: arn:aws:elasticloadbalancing:us-east-1:208357543212:loadbalancer/app/awseb-AWSEB-IXOR2Z0K0OJV/1fba4c6ff6122c55
2022-04-05 22:06:24 INFO Created Load Balancer listener named: arn:aws:elasticloadbalancing:us-east-1:208357543212:listener/app/awseb-AWSEB-IXOR2Z0K0OJV/1fba4c6ff6122c55/734b0cf960b6b8c4
2022-04-05 22:06:42 ERROR Instance deployment failed to install application dependencies. The deployment failed.
2022-04-05 22:06:42 ERROR Instance deployment failed. For details, see 'eb-engine.log'.
2022-04-05 22:06:44 ERROR [Instance: i-0368a7ba2157241f4] Command failed on instance. Return code: 1 Output: Engine execution has encountered an error..
2022-04-05 22:06:45 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2022-04-05 22:07:48 ERROR Create environment operation is complete, but with errors. For more information, see troubleshooting documentation.
I looked at the corresponding logs and found the following error:
Collecting Werkzeug==2.0.1
Downloading Werkzeug-2.0.1-py3-none-any.whl (288 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 288.2/288.2 KB 35.6 MB/s eta 0:00:00
Collecting celery[redis]
Downloading celery-5.2.6-py3-none-any.whl (405 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 405.6/405.6 KB 54.7 MB/s eta 0:00:00
Collecting psycopg2==2.9.3
Downloading psycopg2-2.9.3.tar.gz (380 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 380.6/380.6 KB 52.2 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
2022/04/05 22:06:42.952376 [INFO] error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [23 lines of output]
running egg_info
creating /tmp/pip-pip-egg-info-v0aygozt/psycopg2.egg-info
writing /tmp/pip-pip-egg-info-v0aygozt/psycopg2.egg-info/PKG-INFO
writing dependency_links to /tmp/pip-pip-egg-info-v0aygozt/psycopg2.egg-info/dependency_links.txt
writing top-level names to /tmp/pip-pip-egg-info-v0aygozt/psycopg2.egg-info/top_level.txt
writing manifest file '/tmp/pip-pip-egg-info-v0aygozt/psycopg2.egg-info/SOURCES.txt'
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source. Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
I am not very familiar with the requirements of AWS, but the app runs locally without any problem. I just wonder what the right configuration for the requirements.txt file would be in order to avoid this error.
Thanks in advance.
You have to install postgresql-devel first before you can build psycopg2. You can add the installation instructions to your .ebextensions:
packages:
  yum:
    postgresql-devel: []
or
commands:
  command1:
    command: yum install -y postgresql-devel
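For instance, either snippet could live in a config file under .ebextensions at the root of the deployed source bundle (the 01_packages.config name below is only an example; Elastic Beanstalk picks up any *.config file in that folder):
your-app/
├── .ebextensions/
│   └── 01_packages.config    # the packages:/yum: block shown above
├── requirements.txt
└── ...                       # rest of the application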
I was able to solve the error. I had to replace psycopg2 with psycopg2-binary, as suggested by the AWS logs:
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
This issue seems to come down to the particular combination of libraries and the specific Linux machines used by AWS.
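Concretely, the only change to the requirements.txt from the question is the psycopg2 line (assuming the same 2.9.3 pin is available for the binary package; the pre-built wheel removes the need for pg_config at install time):
# before: built from source, needs pg_config / postgresql-devel
# psycopg2==2.9.3
# after: ships a pre-compiled wheel
psycopg2-binary==2.9.3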

How to build azure pipeline tasks from https://github.com/microsoft/azure-pipelines-tasks locally and consume them in an on-prem Azure DevOps Server?

So I cloned their repository, but how do I actually build the tasks?
Here is my scenario. We use Azure DevOps Server 2020 (on-prem). All of our build pipelines run the Index Sources & Publish Symbols task.
However, it has a bug: https://github.com/microsoft/azure-pipelines-tasks/issues/14852. Luckily, a fix was merged to master. However, we are not going to see it until it is propagated to the Azure DevOps Server edition, and only the Almighty knows when that will happen.
So, I would like to build that task locally and upload to our Azure DevOps server. But I cannot find instructions on how to do it.
So, how can I build and consume it in our Azure DevOps Server?
EDIT 1
Tried to follow the procedure in https://github.com/microsoft/azure-pipelines-tasks/blob/master/ci/build-all-steps.yml, but it did not work.
The first step that seems relevant to me is https://github.com/microsoft/azure-pipelines-tasks/blob/6ab084f52e582370880127132d1c449634c9bfbc/ci/build-all-steps.yml#L39:
- script: |
    cd ci
    cd verifyMinAgentDemands
    npm install
    node index.js
  displayName: Verify all min agent demands are valid
And it is fine:
C:\work\azure-pipelines-tasks\ci\verifyMinAgentDemands [master ≡]> npm install
npm WARN verifyminagentdemands@1.0.0 No description
npm WARN verifyminagentdemands@1.0.0 No repository field.
added 52 packages from 78 contributors and audited 52 packages in 1.743s
found 2 low severity vulnerabilities
run `npm audit fix` to fix them, or `npm audit` for details
C:\work\azure-pipelines-tasks\ci\verifyMinAgentDemands [master ≡]> node .\index.js
##vso[task.debug]agent.TempDirectory=undefined
##vso[task.debug]agent.workFolder=undefined
##vso[task.debug]loading inputs and endpoints
##vso[task.debug]loaded 0
##vso[task.debug]Agent.ProxyUrl=undefined
##vso[task.debug]Agent.CAInfo=undefined
##vso[task.debug]Agent.ClientCert=undefined
##vso[task.debug]Agent.SkipCertValidation=undefined
Verifying min agent demands.
Latest version of the Agent that's fully rolled out is 2.195.0.
The next step (https://github.com/microsoft/azure-pipelines-tasks/blob/6ab084f52e582370880127132d1c449634c9bfbc/ci/build-all-steps.yml#L47) seems to be relevant too:
- script: node make.js build --task "$(task_pattern)"
  displayName: Build
  condition: ne(variables['numTasks'], 0)
But:
C:\work\azure-pipelines-tasks [master ≡]> node make.js build --task PublishSymbolsV2
> prepending PATH C:\work\azure-pipelines-tasks\node_modules\.bin
tsc tool:
Version 2.3.4
C:\work\azure-pipelines-tasks\node_modules\.bin\tsc
npm tool:
6.14.12
C:\Program Files\nodejs\npm
------------------------------------------------------------
Building: PublishSymbolsV2
------------------------------------------------------------
> getting task externals
Downloading file: https://vstsagenttools.blob.core.windows.net/tools/symstore/2/symbol.zip
Could not use "nc", falling back to slower node.js method for sync requests.
C:\work\azure-pipelines-tasks\node_modules\sync-request\index.js:77
throw new Error(res.stderr.toString());
^
Error
at doRequestWith (C:\work\azure-pipelines-tasks\node_modules\sync-request\index.js:77:11)
at doRequest (C:\work\azure-pipelines-tasks\node_modules\sync-request\index.js:20:10)
at downloadFile (C:\work\azure-pipelines-tasks\make-util.js:411:22)
at downloadArchive (C:\work\azure-pipelines-tasks\make-util.js:451:27)
at C:\work\azure-pipelines-tasks\make-util.js:644:33
at Array.forEach (<anonymous>)
at getExternals (C:\work\azure-pipelines-tasks\make-util.js:639:25)
at C:\work\azure-pipelines-tasks\make.js:193:13
at Array.forEach (<anonymous>)
at Function.target.build (C:\work\azure-pipelines-tasks\make.js:158:14)
C:\work\azure-pipelines-tasks [master ≡]>
What now?

PyPI error when trying to install gcsfs in Google Composer (Airflow)

I use google composer-1.0.0-airflow-1.9.0. I use Dask in one of my DAGs and wanted to set up Composer to use it. One of the required packages for this DAG is gcsfs. When I tried to install it via the web UI, I got the error below:
Composer Backend timed out. Currently running tasks are [stage: CP_COMPOSER_AGENT_RUNNING description: "Composer Agent Running. Latest Agent Stage: stage: DEPLOYMENTS_UPDATED\n ." response_timestamp { seconds: 1540331648 nanos: 860000000 } ].
Updated:
The error comes from this line of code, where Dask tries to read a file from a GCS bucket: dd.read_csv(bucket)
log:
[2018-10-24 22:25:12,729] {base_task_runner.py:98} INFO - Subtask: File "/usr/local/lib/python2.7/site-packages/dask/bytes/core.py", line 350, in get_fs_token_paths
[2018-10-24 22:25:12,733] {base_task_runner.py:98} INFO - Subtask: fs, fs_token = get_fs(protocol, options)
[2018-10-24 22:25:12,735] {base_task_runner.py:98} INFO - Subtask: File "/usr/local/lib/python2.7/site-packages/dask/bytes/core.py", line 473, in get_fs
[2018-10-24 22:25:12,740] {base_task_runner.py:98} INFO - Subtask: "Need to install `gcsfs` library for Google Cloud Storage support\n"
[2018-10-24 22:25:12,741] {base_task_runner.py:98} INFO - Subtask: File "/usr/local/lib/python2.7/site-packages/dask/utils.py", line 94, in import_required
[2018-10-24 22:25:12,748] {base_task_runner.py:98} INFO - Subtask: raise RuntimeError(error_msg)
[2018-10-24 22:25:12,751] {base_task_runner.py:98} INFO - Subtask: RuntimeError: Need to install `gcsfs` library for Google Cloud Storage support
[2018-10-24 22:25:12,756] {base_task_runner.py:98} INFO - Subtask: conda install gcsfs -c conda-forge
[2018-10-24 22:25:12,758] {base_task_runner.py:98} INFO - Subtask: or
[2018-10-24 22:25:12,762] {base_task_runner.py:98} INFO - Subtask: pip install gcsfs
When tried to install gcsfs in google composer UI using pypi got below error:
{
insertId: "17ks763f726w1i"
logName: "projects/xxxxxxxxx/logs/airflow-worker"
receiveTimestamp: "2018-10-25T15:42:24.935880717Z"
resource: {…}
severity: "ERROR"
textPayload: "Traceback (most recent call last):
File "/usr/local/bin/gcsfuse", line 7, in <module>
from gcsfs.cli.gcsfuse import main
File "/usr/local/lib/python2.7/site-
packages/gcsfs/cli/gcsfuse.py", line 3, in <module>
fuse import FUSE
ImportError: No module named fuse
"
timestamp: "2018-10-25T15:41:53Z"
}
Unfortunately, your error message doesn't mean much to me.
gcsfs is pure python code, so it is very unlikely that anything is going wrong with installing it - as is done very commonly with pip or conda. The dependency libraries are a bunch of google ones, some of which may require compilation (I don't know), so I would suggest trying to find out from logs which one is stalling and taking it up with them. On the other hand, this kind of thing can often be a network/intermittent problem, so waiting may also fix things.
For the future, I recommend basing installations around conda, which never needs to compile anything and is generally better at dependency tracking.
This has to do with the fact that Composer and Airflow have silent dependencies and they are not synced, so if the gcsfs installation conflicts with an Airflow dependency, we get this error. More details here. The only workarounds (other than updating to the Nov 28 release of Composer) are:
Source: Thanks to Jake Biesinger (jake.biesinger@infusionsoft.com)
Use a separate Kubernetes Pod for running various jobs, but it's a large change and requires infra we're not very familiar with (GKE).
This particular issue can also be solved by installing dbt in a PythonVirtualEnvOperator, then having the python_callable re-use the virtualenv's bin dir, something like:
```
import os
import subprocess
import sys

def _run_cmd_in_virtual_env(cmd):
    # sys.argv[0] is the interpreter inside the temporary virtualenv,
    # so its directory also contains the console scripts installed there
    subprocess.check_call(os.path.join(os.path.split(sys.argv[0])[0], cmd))

# this calls the temporarily-installed dbt binary, e.g. /tmp/virtualenv-asdasd/bin/dbt
task = PythonVirtualEnvOperator(python_callable=_run_cmd_in_virtual_env,
                                op_args=('dbt',))
```
I haven't tried this, but this might help you out.
In general, installing arbitrary system packages (like fuse, or whatever else turns out to be a dependency of what you are trying to install) is not supported by Google Composer, as discussed here: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!searchin/cloud-composer-discuss/sugimiyanto%7Csort:date/cloud-composer-discuss/jpxAGCPFkZo/mCx_P1LPCQAJ
However, you may be able to work around this by uploading the package folder you have installed locally (i.e. fuse) into your Google Cloud Storage bucket, for example gs://<your_bucket_name>/libs, so that it becomes a shared library.
Then you can set the LD_LIBRARY_PATH environment variable in Google Composer to /home/airflow/gcs/libs, to make the dynamic linker look for shared libraries in that directory.
Then try to reinstall gcsfs through the PyPI packages page in Google Composer.
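If you go that route, setting the variable could look roughly like this (a sketch only: the environment name and location are placeholders, and it assumes your gcloud SDK version supports --update-env-variables for Composer):
gcloud composer environments update my-composer-env \
    --location us-central1 \
    --update-env-variables=LD_LIBRARY_PATH=/home/airflow/gcs/libs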

Yocto Conflict between attempted installs

I have a conflict between a number of install files.
I am getting the below error:
Transaction Summary
================================================================================
Install 612 Packages
Total size: 110 M
Installed size: 403 M
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Error: Transaction check error:
  file /etc/iproute2/rt_protos conflicts between attempted installs of base-files-3.0.14-r89.nexbox_a95x_s905x and iproute2-4.14.1-r0.aarch64
  file /etc/iproute2/rt_tables conflicts between attempted installs of base-files-3.0.14-r89.nexbox_a95x_s905x and iproute2-4.14.1-r0.aarch64
  file /etc/sysctl.conf conflicts between attempted installs of base-files-3.0.14-r89.nexbox_a95x_s905x and procps-3.3.12-r0.aarch64
Error Summary
-------------
ERROR: amlogic-image-headless-sd-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: /home/user/amlogic-bsp/build/tmp/work/nexbox_a95x_s905x-poky-linux/amlogic-image-headless-sd/1.0-r0/temp/log.do_rootfs.29264
ERROR: Task (/home/user/amlogic-bsp/meta-meson/recipes-core/images/amlogic-image-headless-sd.bb:do_rootfs) failed with exit code '1'
NOTE: Tasks Summary: Attempted 3131 tasks of which 3130 didn't need to be rerun and 1 failed.
I have seen somewhere that I should pin a file, but how do I do this? I can't find a tutorial or any reference to what that means.
I am also getting the below warning. Is this related?
WARNING: Layer meson should set LAYERSERIES_COMPAT_meson in its conf/layer.conf file to list the core layer names it is compatible with.
I'm new to OE coming over from OpenWRT.
For bitbake, I've added the layers for the packages below:
meta-openwrt:- OE/Yocto metadata layer for OpenWRT
superna9999/meta-meson:- Upstream Linux Amlogic Meson Yocto/OpenEmbedded Layer
And tried compiling the nexbox-a95x-s905x image
I think the problem is that /etc/iproute2/rt_protos is provided by base-files, which comes from meta-openwrt, as well as by the iproute2 package, which comes from other OE layers. It is not clear to the image builder which one to use, hence the conflict.
You can solve it by defining an iproute2_%.bbappend file in meta-openwrt in which this file is deleted from the iproute2 package, so that preference is given to the one OpenWRT provides:
do_install_append() {
    rm -rf ${D}${sysconfdir}/iproute2/rt_protos
}
should help.
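A minimal sketch of such a bbappend (the recipes-network/iproute2 path inside meta-openwrt is hypothetical; put it wherever your layer keeps its appends, and the rt_tables line covers the second conflict from the same error):
# meta-openwrt/recipes-network/iproute2/iproute2_%.bbappend  (hypothetical location)
do_install_append() {
    # remove the copies shipped by iproute2 so the base-files versions win
    rm -f ${D}${sysconfdir}/iproute2/rt_protos
    rm -f ${D}${sysconfdir}/iproute2/rt_tables
}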

PPP install error on Raspberry Pi

I am rather new to Linux and have been playing with the Raspberry Pi. I am trying to get a modem I have working with this Pi. I am learning here but any help on how to fix this situation would be great.
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
ppp
The following NEW packages will be installed:
ppp
0 upgraded, 1 newly installed, 0 to remove and 316 not upgraded.
5 not fully installed or removed.
Need to get 0 B/354 kB of archives.
After this operation, 772 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
(Reading database ... 65955 files and directories currently installed.)
Unpacking ppp (from .../ppp_2.4.5-5.1_armhf.deb) ...
update-rc.d: using dependency based boot sequencing
insserv: warning: script 'wireless-reconnect' missing LSB tags and overrides
insserv: There is a loop between service watchdog and wireless-reconnect if stopped
insserv: loop involving service wireless-reconnect at depth 2
insserv: loop involving service watchdog at depth 1
insserv: Stopping wireless-reconnect depends on watchdog and therefore on system facility `$all' which can not be true!
insserv: exiting now without changing boot order!
update-rc.d: error: insserv rejected the script header
dpkg: error processing /var/cache/apt/archives/ppp_2.4.5-5.1_armhf.deb (--unpack):
subprocess new pre-installation script returned error exit status 1
Errors were encountered while processing:
/var/cache/apt/archives/ppp_2.4.5-5.1_armhf.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
This is even after trying 'sudo apt-get -f install'