Google Cloud: gcloud app deploy gives me "permission denied" during deployment

I can't deploy an app anymore. I was able to deploy yesterday; then I deleted the project, and today I re-enabled it. Now when I deploy with gcloud app deploy I get a build error, and the build error log shows the following:
Pulling image: gcr.io/gae-runtimes/python37_app_builder:python37_20190328_3_7_3_RC00
python37_20190328_3_7_3_RC00: Pulling from gae-runtimes/python37_app_builder
Digest: sha256:64993f54d3c409dd342d23167eb3ff5b92485a4225dc30762125e9c99fcf105a
Status: Downloaded newer image for gcr.io/gae-runtimes/python37_app_builder:python37_20190328_3_7_3_RC00
11 May 2019 22:00:03 INFO Arguments: ['--parser_script=/usr/local/bin/ftl.par', '--name=eu.gcr.io/ellipticdata-ai/app-engine-tmp/app/ttl-2h:248e94c0-9e01-412c-9435-8fc7b0cbda1f', '--directory=/workspace', '--destination=/srv', '--cache-repository=eu.gcr.io/ellipticdata-ai/app-engine-tmp/build-cache/ttl-7d', '--cache', '--python-cmd=/opt/python3.7/bin/python3.7', '--pip-cmd=/env/bin/python3.7 -m pip', '--venv-cmd=/opt/python3.7/bin/python3.7 -m venv /env', '-v=DEBUG', '--entrypoint-from-app-yaml=false', '--entrypoint-contents=', '--base=gcr.io/gae-runtimes/python37:python37_20190328_3_7_3_RC00']
11 May 2019 22:00:03 INFO Unparsed arguments: ['--name=eu.gcr.io/ellipticdata-ai/app-engine-tmp/app/ttl-2h:248e94c0-9e01-412c-9435-8fc7b0cbda1f', '--destination=/srv', '--cache-repository=eu.gcr.io/ellipticdata-ai/app-engine-tmp/build-cache/ttl-7d', '--cache', '--python-cmd=/opt/python3.7/bin/python3.7', '--pip-cmd=/env/bin/python3.7 -m pip', '--venv-cmd=/opt/python3.7/bin/python3.7 -m venv /env', '-v=DEBUG', '--base=gcr.io/gae-runtimes/python37:python37_20190328_3_7_3_RC00']
11 May 2019 22:00:03 INFO Using entrypoint from command line
11 May 2019 22:00:03 INFO Entrypoint: {'type': 'default'}
11 May 2019 22:00:03 INFO Executing ['/usr/local/bin/ftl.par', '--name=eu.gcr.io/ellipticdata-ai/app-engine-tmp/app/ttl-2h:248e94c0-9e01-412c-9435-8fc7b0cbda1f', '--destination=/srv', '--cache-repository=eu.gcr.io/ellipticdata-ai/app-engine-tmp/build-cache/ttl-7d', '--cache', '--python-cmd=/opt/python3.7/bin/python3.7', '--pip-cmd=/env/bin/python3.7 -m pip', '--venv-cmd=/opt/python3.7/bin/python3.7 -m venv /env', '-v=DEBUG', '--base=gcr.io/gae-runtimes/python37:python37_20190328_3_7_3_RC00', '--entrypoint=/start', '--directory=/workspace', '--additional-directory=/.gaeconfig']
INFO FTL version python-v0.15.0
INFO Beginning FTL build for python
INFO FTL arg passed: virtualenv_dir /env
INFO FTL arg passed: ttl 168
INFO FTL arg passed: python_cmd /opt/python3.7/bin/python3.7
INFO FTL arg passed: cache True
INFO FTL arg passed: virtualenv_cmd virtualenv
INFO FTL arg passed: entrypoint /start
INFO FTL arg passed: exposed_ports None
INFO FTL arg passed: pip_cmd /env/bin/python3.7 -m pip
INFO FTL arg passed: tar_base_image_path None
INFO FTL arg passed: builder_output_path /builder/outputs
INFO FTL arg passed: destination_path /srv
INFO FTL arg passed: sh_c_prefix False
INFO FTL arg passed: base gcr.io/gae-runtimes/python37:python37_20190328_3_7_3_RC00
INFO FTL arg passed: cache_key_version v0.15.0
INFO FTL arg passed: cache_salt
INFO FTL arg passed: cache_repository eu.gcr.io/ellipticdata-ai/app-engine-tmp/build-cache/ttl-7d
INFO FTL arg passed: venv_cmd /opt/python3.7/bin/python3.7 -m venv /env
INFO FTL arg passed: name eu.gcr.io/ellipticdata-ai/app-engine-tmp/app/ttl-2h:248e94c0-9e01-412c-9435-8fc7b0cbda1f
INFO FTL arg passed: global_cache False
INFO FTL arg passed: upload True
INFO FTL arg passed: fail_on_error True
INFO FTL arg passed: output_path None
INFO FTL arg passed: directory /workspace
INFO FTL arg passed: additional_directory /.gaeconfig
INFO FTL arg passed: verbosity DEBUG
INFO starting: full build
INFO starting: builder initialization
INFO Loading Docker credentials for repository 'gcr.io/gae-runtimes/python37:python37_20190328_3_7_3_RC00'
INFO Loading Docker credentials for repository 'eu.gcr.io/ellipticdata-ai/app-engine-tmp/app/ttl-2h:248e94c0-9e01-412c-9435-8fc7b0cbda1f'
INFO builder initialization took 0 seconds
INFO starting: build process for FTL image
INFO starting: checking_cached_interpreter_layer
INFO starting: check python version
INFO `python version` full cmd:
/opt/python3.7/bin/python3.7 --version
INFO `python version` stderr:
INFO check python version took 0 seconds
DEBUG Checking cache for cache_key d62f0bca0db2e6b2b1312f303e883b8ff187274fe5fd7b9aa17b13cdb68bad80
INFO checking_cached_interpreter_layer took 0 seconds
INFO build process for FTL image took 0 seconds
INFO full build took 0 seconds
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/local/bin/ftl.par/__main__.py", line 65, in <module>
File "/usr/local/bin/ftl.par/__main__.py", line 54, in main
File "/usr/local/bin/ftl.par/__main__/ftl/python/builder.py", line 73, in Build
File "/usr/local/bin/ftl.par/__main__/ftl/python/layer_builder.py", line 361, in BuildLayer
File "/usr/local/bin/ftl.par/__main__/ftl/common/cache.py", line 113, in Get
File "/usr/local/bin/ftl.par/__main__/ftl/common/cache.py", line 137, in _getEntry
File "/usr/local/bin/ftl.par/__main__/ftl/common/cache.py", line 152, in _getLocalEntry
File "/usr/local/bin/ftl.par/__main__/ftl/common/cache.py", line 175, in getEntryFromCreds
File "/usr/local/bin/ftl.par/containerregistry/client/v2_2/docker_image_.py", line 279, in exists
File "/usr/local/bin/ftl.par/containerregistry/client/v2_2/docker_image_.py", line 293, in manifest
File "/usr/local/bin/ftl.par/containerregistry/client/v2_2/docker_image_.py", line 250, in _content
File "/usr/local/bin/ftl.par/containerregistry/client/v2_2/docker_http_.py", line 364, in Request
containerregistry.client.v2_2.docker_http_.V2DiagnosticException: response: {'status': '403', 'content-length': '294', 'x-xss-protection': '0', 'transfer-encoding': 'chunked', 'server': 'Docker Registry', '-content-encoding': 'gzip', 'docker-distribution-api-version': 'registry/2.0', 'cache-control': 'private', 'date': 'Sat, 11 May 2019 22:00:04 GMT', 'x-frame-options': 'SAMEORIGIN', 'content-type': 'application/json'}
Permission denied for "d62f0bca0db2e6b2b1312f303e883b8ff187274fe5fd7b9aa17b13cdb68bad80" from request "/v2/ellipticdata-ai/app-engine-tmp/build-cache/ttl-7d/python-cache/manifests/d62f0bca0db2e6b2b1312f303e883b8ff187274fe5fd7b9aa17b13cdb68bad80". : None
I tried to recreate my credentials from scratch, but no luck; I can't deploy my app anymore. Any ideas?

Solution found. I needed to re-enable billing for the project through the 'identity platform', as apparently the previous approval gets deleted when the project is deleted.
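For reference, a rough sketch of how the billing link and the required APIs can be checked and restored from the command line (the billing account ID is a placeholder, and the exact beta command names can vary between gcloud SDK versions):
$ gcloud beta billing projects describe ellipticdata-ai
$ gcloud beta billing projects link ellipticdata-ai --billing-account=XXXXXX-XXXXXX-XXXXXX
$ gcloud services enable appengine.googleapis.com containerregistry.googleapis.com
$ gcloud app deploy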

Related

AGL:"bitbake agl-demo-platform" hangs in task 16

I am an AGL and Poky newbie, and have followed the steps in https://wiki.automotivelinux.org/agl-distro/source-code
(I am running the following in a Docker container.)
$ source meta-agl/scripts/aglsetup.sh -m qemux86-64 agl-demo agl-netboot
------------ aglsetup.sh: Starting
Configuration files already exist:
- /home/work/agl/build/conf/local.conf
- /home/work/agl/build/conf/bblayers.conf
Skipping configuration files generation.
Use option -f|--force to overwrite existing configuration.
Generating setup manifest: /home/work/agl/build/aglsetup.manifest ... OK
Generating setup file: /home/work/agl/build/agl-init-build-env ... OK
------------ aglsetup.sh: Done
Common targets are:
- meta-agl: (core system)
- agl-profile-core:
agl-image-boot
agl-image-minimal
agl-image-minimal-qa
- agl-profile-graphical:
agl-image-weston
- agl-profile-graphical-qt5:
agl-image-graphical-qt5
agl-image-graphical-qt5-crosssdk
- agl-profile-graphical-html5
agl-demo-platform-html5
- meta-agl-demo: (demo with UI)
agl-image-ivi (base for ivi targets)
agl-image-ivi-qa
agl-image-ivi-crosssdk
agl-demo-platform (* default demo target)
agl-demo-platform-qa
agl-demo-platform-crosssdk
$ bitbake agl-demo-platform
This hangs in
Initialising tasks: 100% |############################################################################################| Time: 0:00:05
Sstate summary: Wanted 2729 Found 0 Missed 2729 Current 0 (0% match, 0% complete)
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks
No currently running tasks (16 of 7400) 0% ||
In order to debug it, I ran
$ bitbake -DDD agl-demo-platform
...
DEBUG: Full skip list {'/home/work/agl/meta-agl/meta-netboot/recipes-core/images/initramfs-netboot-image.bb:do_packagedata', '/home/work/agl/meta-agl/meta-netboot/recipes-core/images/initramfs-netboot-image.bb:do_install', '/home/work/agl/meta-agl-demo/recipes-platform/images/agl-demo-platform.bb:do_package', '/home/work/agl/meta-agl-demo/recipes-platform/images/agl-demo-platform.bb:do_compile', '/home/work/agl/meta-agl-demo/recipes-platform/images/agl-demo-platform.bb:do_install', '/home/work/agl/meta-agl-demo/recipes-platform/images/agl-demo-platform.bb:do_packagedata', '/home/work/agl/meta-agl-demo/recipes-platform/images/agl-demo-platform.bb:do_configure', '/home/work/agl/meta-agl/meta-netboot/recipes-core/images/initramfs-netboot-image.bb:do_configure', '/home/work/agl/meta-agl/meta-netboot/recipes-core/images/initramfs-netboot-image.bb:do_compile', '/home/work/agl/meta-agl/meta-netboot/recipes-core/images/initramfs-netboot-image.bb:do_package'}
DEBUG: Using runqueue scheduler 'speed'
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/quilt-native/0.65-r0.do_fetch.e8a4c952a66942653e36f289eaf68ca5 not available
NOTE: Running task 1 of 7400 (/home/work/agl/external/poky/meta/recipes-devtools/quilt/quilt-native_0.65.bb:do_fetch)
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/texinfo-dummy-native/1.0-r0.do_fetch.6af0fac94be624020d4ded1391838faa not available
NOTE: Running task 2 of 7400 (/home/work/agl/external/poky/meta/recipes-extended/texinfo-dummy-native/texinfo-dummy-native.bb:do_fetch)
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/gnu-config-native/20180713+gitAUTOINC+30d53fc428-r0.do_fetch.66a4b9fc46062c0ab4c3d6bf6838$8ef not available
NOTE: Running task 3 of 7400 (virtual:native:/home/work/agl/external/poky/meta/recipes-devtools/gnu-config/gnu-config_git.bb:do_fetch)
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/m4-native/1.4.18-r0.do_fetch.6762cc3ab39f2cedf73b612115bd959d not available
NOTE: Running task 4 of 7400 (/home/work/agl/external/poky/meta/recipes-devtools/m4/m4-native_1.4.18.bb:do_fetch)
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/autoconf-native/2.69-r11.do_fetch.25fa26d4261bb5d4666677301aa59479 not available
NOTE: Running task 5 of 7400 (virtual:native:/home/work/agl/external/poky/meta/recipes-devtools/autoconf/autoconf_2.69.bb:do_fetch)
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/automake-native/1.16.1-r0.do_fetch.0fd4964b1b460fad47bd3cfb55e06e3f not available
NOTE: Running task 6 of 7400 (virtual:native:/home/work/agl/external/poky/meta/recipes-devtools/automake/automake_1.16.1.bb:do_fetch)
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/libtool-native/2.4.6-r0.do_fetch.fb99da9a9824dd7b876403694f7b783a not available
NOTE: Running task 7 of 7400 (/home/work/agl/external/poky/meta/recipes-devtools/libtool/libtool-native_2.4.6.bb:do_fetch)
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/gettext-minimal-native/0.19.8.1-r0.do_fetch.d984cddf39092f50c5874c27f42c9627 not available
NOTE: Running task 8 of 7400 (/home/work/agl/external/poky/meta/recipes-core/gettext/gettext-minimal-native_0.19.8.1.bb:do_fetch)
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/xz-native/5.2.4-r0.do_fetch.eb624201d02d0135b086909af9a87977 not available
NOTE: Running task 9 of 7400 (virtual:native:/home/work/agl/external/poky/meta/recipes-extended/xz/xz_5.2.4.bb:do_fetch)
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/gmp-native/6.1.2-r0.do_fetch.d4d7e5eb8e67d572386a46cc21e57f8e not available
NOTE: Running task 10 of 7400 (virtual:native:/home/work/agl/external/poky/meta/recipes-support/gmp/gmp_6.1.2.bb:do_fetch)
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/flex-native/2.6.0-r0.do_fetch.588daad6e54df2fe977b08ef749ef523 not available
NOTE: Running task 11 of 7400 (virtual:native:/home/work/agl/external/poky/meta/recipes-devtools/flex/flex_2.6.0.bb:do_fetch)
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/zlib-native/1.2.11-r0.do_fetch.1fa21ab74fd7fedd15f87baac65b9dab not available
NOTE: Running task 12 of 7400 (virtual:native:/home/work/agl/external/poky/meta/recipes-core/zlib/zlib_1.2.11.bb:do_fetch)
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/autoconf-archive-native/2018.03.13-r0.do_fetch.e880edd4650611bf6f65e254102ba230 not available
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/autoconf-archive-native/2018.03.13-r0.do_fetch.e880edd4650611bf6f65e254102ba230 not available
NOTE: Running task 13 of 7400 (virtual:native:/home/work/agl/external/poky/meta/recipes-devtools/autoconf-archive/autoconf-archive_2018.03.13.bb:do_fetch)
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/mpfr-native/4.0.1-r0.do_fetch.34c76de4a18ded6152d2ff68820420c9 not available
NOTE: Running task 14 of 7400 (virtual:native:/home/work/agl/external/poky/meta/recipes-support/mpfr/mpfr_4.0.1.bb:do_fetch)
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/bison-native/3.0.4-r0.do_fetch.53556f21491498d19bb9e3b24cf725b2 not available
NOTE: Running task 15 of 7400 (virtual:native:/home/work/agl/external/poky/meta/recipes-devtools/bison/bison_3.0.4.bb:do_fetch)
DEBUG: Stampfile /home/work/agl/build/tmp/stamps/x86_64-linux/binutils-cross-x86_64/2.31.1-r0.do_fetch.14df04f9e0c741b374c8987222b85026 not available
NOTE: Running task 16 of 7400 (/home/work/agl/external/poky/meta/recipes-devtools/binutils/binutils-cross_2.31.bb:do_fetch)
When the above happens, the following processes appear in the ps -ef output:
admin 3977 1430 0 10:48 pts/3 00:00:02 python3 /home/work/agl/external/poky/bitbake/bin/bitbake agl-demo-platform
admin 3996 1 7 10:48 ? 00:00:28 python3 /home/work/agl/external/poky/bitbake/bin/bitbake agl-demo-platform
admin 4108 3996 0 10:48 ? 00:00:00 python3 /home/work/agl/external/poky/bitbake/bin/bitbake-worker decafbad
It looks like there are 16(?) do_fetch tasks going on. I have tried waiting for an hour but bitbake does not move forward.
My container does not have strace enabled. Could someone please help me with debugging?
All the git repositories under the agl directory except the following three are on branch icefish; I am not sure if it matters, but I am documenting it here:
external/meta-iot-cloud
* (no branch)
external/meta-python2
* (no branch)
bsp/meta-arm
* (no branch)
There are no run.do_fetch logs in $T
admin@623c5e680b76:/home/work/agl/build$ bitbake -e|grep ^T=
T="/home/work/agl/build/tmp/work/corei7-64-agl-linux/defaultpkgname/1.0-r0/temp"
/home/work/agl$ ls -l build/tmp/work/corei7-64-agl-linux/defaultpkgname/1.0-r0/temp/*
lrwxrwxrwx 1 admin admin 30 Jun 28 19:42 build/tmp/work/corei7-64-agl-linux/defaultpkgname/1.0-r0/temp/run.oecore_update_bblayers -> run.oecore_update_bblayers.369
-rw-r--r-- 1 admin admin 4565 Jun 28 19:42 build/tmp/work/corei7-64-agl-linux/defaultpkgname/1.0-r0/temp/run.oecore_update_bblayers.369
-rw-rw-r-- 1 admin admin 4565 Jun 28 18:02 build/tmp/work/corei7-64-agl-linux/defaultpkgname/1.0-r0/temp/run.oecore_update_bblayers.560
-rw-r--r-- 1 admin admin 4565 Jun 28 17:50 build/tmp/work/corei7-64-agl-linux/defaultpkgname/1.0-r0/temp/run.oecore_update_bblayers.715
-rw-r--r-- 1 admin admin 4565 Jun 28 17:16 build/tmp/work/corei7-64-agl-linux/defaultpkgname/1.0-r0/temp/run.oecore_update_bblayers.769
EDIT
There is no quilt directory in the work directory
$ pwd
/home/work/agl/build/tmp/work
$ find .
.
./corei7-64-agl-linux
./corei7-64-agl-linux/defaultpkgname
./corei7-64-agl-linux/defaultpkgname/1.0-r0
./corei7-64-agl-linux/defaultpkgname/1.0-r0/temp
./corei7-64-agl-linux/defaultpkgname/1.0-r0/temp/run.oecore_update_bblayers.560
./corei7-64-agl-linux/defaultpkgname/1.0-r0/temp/run.oecore_update_bblayers.633
./corei7-64-agl-linux/defaultpkgname/1.0-r0/temp/run.oecore_update_bblayers.369
./corei7-64-agl-linux/defaultpkgname/1.0-r0/temp/run.oecore_update_bblayers.715
./corei7-64-agl-linux/defaultpkgname/1.0-r0/temp/run.oecore_update_bblayers.769
./corei7-64-agl-linux/defaultpkgname/1.0-r0/temp/run.oecore_update_bblayers
EDIT
I could make the build start by basing my container on crops/poky-container. My container did not have the following:
- the new user usersetup and sudoers.usersetup
- execution of /usr/bin/distro-entry.sh, which in turn runs /opt/poky/3.1/environment-setup-x86_64-pokysdk-linux
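For reference, a minimal sketch of starting the build from the crops/poky container (the volume path matches the layout above; the image name and options are assumptions based on the crops/poky-container documentation):
$ docker run --rm -it -v /home/work/agl:/workdir crops/poky --workdir=/workdir
Then, inside the container:
$ source meta-agl/scripts/aglsetup.sh -m qemux86-64 agl-demo agl-netboot
$ bitbake agl-demo-platform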

Airflow: Celery task failure

I have Airflow up and running, but I have an issue where my task is failing in Celery.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 52, in execute_command
subprocess.check_call(command, shell=True)
File "/usr/local/lib/python3.6/subprocess.py", line 291, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'airflow run airflow_tutorial_v01 print_hello 2017-06-01T15:00:00 --local -sd /usr/local/airflow/dags/hello_world.py' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 375, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 632, in __protected_call__
return self.run(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 55, in execute_command
raise AirflowException('Celery command failed')
airflow.exceptions.AirflowException: Celery command failed
It is a very basic DAG (taken from the hello world tutorial: https://github.com/apache/incubator-airflow/blob/master/airflow/example_dags/tutorial.py).
Also, I do not see any logs from my worker; I got this stack trace from the Flower web interface.
If I manually run, on the worker node, the airflow run command mentioned in the stack trace, it works.
How can I get more information to debug further?
The only log I get when starting `airflow worker` is
root@ip-10-0-4-85:~# /usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
[2018-07-25 17:49:43,430] {driver.py:120} INFO - Generating grammar tables from /usr/lib/python3.5/lib2to3/Grammar.txt
[2018-07-25 17:49:43,469] {driver.py:120} INFO - Generating grammar tables from /usr/lib/python3.5/lib2to3/PatternGrammar.txt
[2018-07-25 17:49:43,594] {__init__.py:45} INFO - Using executor CeleryExecutor
Starting flask
[2018-07-25 17:49:43,665] {_internal.py:88} INFO - * Running on http://0.0.0.0:8793/ (Press CTRL+C to quit)
^C
The config I use is the default one with a postgresql and redis backend for celery.
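For reference, the relevant airflow.cfg entries look roughly like this (host names and credentials are placeholders, and the exact section/key names differ slightly between Airflow versions):
executor = CeleryExecutor
sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@pg-host:5432/airflow
broker_url = redis://redis-host:6379/0
result_backend = db+postgresql://airflow:airflow@pg-host:5432/airflow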
I see the worker online in Flower.
Thanks.
Edit: edited to add more information.

Run QEMU from within Eclipse External tools

I am setting up the Eclipse IDE for Yocto application development, and I got stuck trying to start QEMU from within Eclipse.
I have a working QEMU image that runs fine, for example:
ubuntu@ubuntu:~/work/community/build-x11$ runqemu qemuarm
tmp/deploy/images/qemuarm/zImage-qemuarm.bin
tmp/deploy/images/qemuarm/fsl-image-multimedia-full-qemuarm.ext4
Within Eclipse I follow
https://www.yoctoproject.org/docs/2.5/sdk-manual/sdk-manual.html#oxygen-starting-qemu-in-user-space-nfs-mode
But after configuring "External Tools" and trying to run QEMU, I get the following:
runqemu - INFO - Running MACHINE=qemuarm bitbake -e...
ERROR: Unable to find conf/bblayers.conf or conf/bitbake.conf. BBAPTH is unset and/or not in a build directory?
runqemu - WARNING - Couldn't run 'bitbake -e' to gather environment information:
runqemu - WARNING - Can't find qemuboot conf file, DEPLOY_DIR_IMAGE is NULL!
runqemu - INFO - Running MACHINE=qemuarm bitbake -e...
ERROR: Unable to find conf/bblayers.conf or conf/bitbake.conf. BBAPTH is unset and/or not in a build directory?
runqemu - WARNING - Couldn't run 'bitbake -e' to gather environment information:
runqemu - INFO - Setting STAGING_DIR_NATIVE to OECORE_NATIVE_SYSROOT (/home/ubuntu/work/community/build-x11/tmp/work/armv5e-fslc-linux-gnueabi/meta-ide-support/1.0-r3/recipe-sysroot-native)
runqemu - INFO - Setting STAGING_BINDIR_NATIVE to /home/ubuntu/work/community/build-x11/tmp/work/armv5e-fslc-linux-gnueabi/meta-ide-support/1.0-r3/recipe-sysroot-native/usr/bin
runqemu - INFO - QB_MEM is not set, use 512M by default
runqemu - INFO - Continuing with the following parameters:
KERNEL: [/home/ubuntu/work/community/build-x11/tmp/deploy/images/qemuarm/zImage-qemuarm.bin]
MACHINE: [qemuarm]
FSTYPE: [nfs]
NFS_DIR: [/home/ubuntu/work/community/build-x11/MY_QEMU_ROOTFS]
CONFFILE: []
/bin/sh: 1: stty: not found
Traceback (most recent call last):
File "/home/ubuntu/work/community/sources/poky/scripts/runqemu", line 1270, in main
config.setup_network()
File "/home/ubuntu/work/community/sources/poky/scripts/runqemu", line 997, in setup_network
self.saved_stty = subprocess.check_output("stty -g", shell=True).decode('utf-8')
File "/usr/lib/python3.5/subprocess.py", line 626, in check_output
**kwargs).stdout
File "/usr/lib/python3.5/subprocess.py", line 708, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command 'stty -g' returned non-zero exit status 127
Cleanup
Command 'lesspipe' is available in the following places
* /bin/lesspipe
* /usr/bin/lesspipe
The command could not be located because '/bin:/usr/bin' is not included in the PATH environment variable.
lesspipe: command not found
Command 'dircolors' is available in '/usr/bin/dircolors'
The command could not be located because '/usr/bin' is not included in the PATH environment variable.
dircolors: command not found
ubuntu@ubuntu:~/eclipse/cpp-oxygen/eclipse$
I wonder if anyone has experienced such a problem when setting up "External Tools" with Eclipse?
Thank you
Navigate to the build directory and then trigger the command; it will work.
OR
Run source oe-init-build-env <build path> first.
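A minimal sketch of that sequence, using the paths from the question (the exact layout is an assumption):
$ cd ~/work/community
$ source sources/poky/oe-init-build-env build-x11
$ runqemu qemuarm
If Eclipse "External Tools" has to launch runqemu directly, the same environment must be available to it, for example by wrapping the two commands above in a small shell script and pointing the external tool at that script.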

How to use a Hadoop streaming input parameter for a MATLAB shell script

I want to execute my MATLAB code via Hadoop streaming. My question is how to use the Hadoop streaming input parameter value as the input to my MATLAB script. For example:
This is my MATLAB file imreadtest.m (simple code):
rgbImage = imread('/usr/new.jpg');
imwrite(rgbImage,'/usr/OT/testedimage1.jpg');
My shell script is:
#!/bin/sh
matlabbg imreadtest.m -nodisplay
Normally this works well on my Ubuntu machine (but not in Hadoop). I have stored these two files in HDFS using Hue. Now my MATLAB script looks like this (imrtest.m):
rgbImage = imread(STDIN);
imwrite(rgbImage,STDOUT);
My shell script (imrtest.sh) is:
#!/bin/sh
matlabbg imrtest.m -nodisplay
I have tried to execute this with Hadoop streaming:
hadoop@xxx:/usr/local/master/hadoop$ $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar -mapper /usr/OT/imrtest.sh -file /usr/OT/imrtest.sh -input /usr/OT/testedimage.jpg -output /usr/OT/opt
But I got an error like this:
packageJobJar: [/usr/OT/imrtest.sh, /usr/local/master/temp/hadoop- unjar4018041785380098978/] [] /tmp/streamjob7077345699332124679.jar tmpDir=null
14/03/06 15:51:41 WARN snappy.LoadSnappy: Snappy native library is available
14/03/06 15:51:41 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/03/06 15:51:41 INFO snappy.LoadSnappy: Snappy native library loaded
14/03/06 15:51:41 INFO mapred.FileInputFormat: Total input paths to process : 1
14/03/06 15:51:42 INFO streaming.StreamJob: getLocalDirs(): [/usr/local/master/temp/mapred/local]
14/03/06 15:51:42 INFO streaming.StreamJob: Running job: job_201403061205_0015
14/03/06 15:51:42 INFO streaming.StreamJob: To kill this job, run:
14/03/06 15:51:42 INFO streaming.StreamJob: /usr/local/master/hadoop/bin/hadoop job -Dmapred.job.tracker=slave3:8021 -kill job_201403061205_0015
14/03/06 15:51:42 INFO streaming.StreamJob: Tracking URL: http://slave3:50030/jobdetails.jsp?jobid=job_201403061205_0015
14/03/06 15:51:43 INFO streaming.StreamJob: map 0% reduce 0%
14/03/06 15:52:15 INFO streaming.StreamJob: map 100% reduce 100%
14/03/06 15:52:15 INFO streaming.StreamJob: To kill this job, run:
14/03/06 15:52:15 INFO streaming.StreamJob: /usr/local/master/hadoop/bin/hadoop job -Dmapred.job.tracker=slave3:8021 -kill job_201403061205_0015
14/03/06 15:52:15 INFO streaming.StreamJob: Tracking URL: http://slave3:50030/jobdetails.jsp?jobid=job_201403061205_0015
14/03/06 15:52:15 ERROR streaming.StreamJob: Job not successful. Error: NA
14/03/06 15:52:15 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
The jobtracker error log for this job is:
HOST=null
USER=hadoop
HADOOP_USER=null
last Hadoop input: |null|
last tool output: |null|
Date: Thu Mar 06 15:51:51 IST 2014
java.io.IOException: Broken pipe
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:297)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.hadoop.streaming.io.TextInputWriter.writeUTF8(TextInputWriter.java:72)
at org.apache.hadoop.streaming.io.TextInputWriter.writeValue(TextInputWriter.java:51)
at org.apache.hadoop.streaming.PipeMapper.map(PipeMapper.java:110)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.streaming.Pipe
java.io.IOException: log:null
.
.
.
Please suggest how I can get input from the Hadoop streaming input into my MATLAB script, and similarly how to get the output back out.
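One possible direction (a sketch only, not a tested solution): Hadoop streaming hands the mapper plain text lines on stdin rather than raw image bytes, so a common workaround is to pass HDFS paths as the streaming input and have the mapper fetch each image locally before calling MATLAB; imrtest.m would then read a fixed local path such as /tmp/input.jpg and write /tmp/output.jpg instead of STDIN/STDOUT. A hypothetical mapper could look like this (the /usr/OT/results directory is an assumption):
#!/bin/sh
# Hypothetical mapper: each input line is assumed to be an HDFS path to one image.
while read hdfs_path; do
  hadoop fs -get "$hdfs_path" /tmp/input.jpg    # copy the image out of HDFS
  matlabbg imrtest.m -nodisplay                 # MATLAB reads /tmp/input.jpg, writes /tmp/output.jpg
  hadoop fs -put /tmp/output.jpg /usr/OT/results/   # store the result back into HDFS
done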

Dotcloud supervisord shows error but process is running

My dotcloud setup (django-celery with rabbitmq) was working fine a week ago - the processes were starting up ok and the logs were clean. However, I recently repushed (without updating any of the code), and now the logs are saying that the processes fail to start even though they seem to be running.
Supervisord log
dotcloud@hack-default-www-0:/var/log/supervisor$ more supervisord.log
2012-06-03 10:51:51,836 CRIT Set uid to user 1000
2012-06-03 10:51:51,836 WARN Included extra file "/etc/supervisor/conf.d/uwsgi.conf" during parsing
2012-06-03 10:51:51,836 WARN Included extra file "/home/dotcloud/current/supervisord.conf" during parsing
2012-06-03 10:51:51,938 INFO RPC interface 'supervisor' initialized
2012-06-03 10:51:51,938 WARN cElementTree not installed, using slower XML parser for XML-RPC
2012-06-03 10:51:51,938 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2012-06-03 10:51:51,946 INFO daemonizing the supervisord process
2012-06-03 10:51:51,947 INFO supervisord started with pid 144
2012-06-03 10:51:53,128 INFO spawned: 'celerycam' with pid 159
2012-06-03 10:51:53,133 INFO spawned: 'apnsd' with pid 161
2012-06-03 10:51:53,148 INFO spawned: 'djcelery' with pid 164
2012-06-03 10:51:53,168 INFO spawned: 'uwsgi' with pid 167
2012-06-03 10:51:53,245 INFO exited: djcelery (exit status 1; not expected)
2012-06-03 10:51:53,247 INFO exited: celerycam (exit status 1; not expected)
2012-06-03 10:51:54,698 INFO spawned: 'celerycam' with pid 176
2012-06-03 10:51:54,698 INFO success: apnsd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2012-06-03 10:51:54,705 INFO spawned: 'djcelery' with pid 177
2012-06-03 10:51:54,706 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2012-06-03 10:51:54,731 INFO exited: djcelery (exit status 1; not expected)
2012-06-03 10:51:54,754 INFO exited: celerycam (exit status 1; not expected)
2012-06-03 10:51:56,760 INFO spawned: 'celerycam' with pid 178
2012-06-03 10:51:56,765 INFO spawned: 'djcelery' with pid 179
2012-06-03 10:51:56,790 INFO exited: celerycam (exit status 1; not expected)
2012-06-03 10:51:56,791 INFO exited: djcelery (exit status 1; not expected)
2012-06-03 10:51:59,798 INFO spawned: 'celerycam' with pid 180
2012-06-03 10:52:00,538 INFO spawned: 'djcelery' with pid 181
2012-06-03 10:52:00,565 INFO exited: celerycam (exit status 1; not expected)
2012-06-03 10:52:00,571 INFO gave up: celerycam entered FATAL state, too many start retries too quickly
2012-06-03 10:52:00,573 INFO exited: djcelery (exit status 1; not expected)
2012-06-03 10:52:01,575 INFO gave up: djcelery entered FATAL state, too many start retries too quickly
dotcloud@hack-default-www-0:/var/log/supervisor$
The djcelery error log:
dotcloud@hack-default-www-0:/var/log/supervisor$ more djcelery_error.log
Traceback (most recent call last):
File "hack/manage.py", line 2, in <module>
from django.core.management import execute_manager
ImportError: No module named django.core.management
Traceback (most recent call last):
File "hack/manage.py", line 2, in <module>
from django.core.management import execute_manager
ImportError: No module named django.core.management
Traceback (most recent call last):
File "hack/manage.py", line 2, in <module>
from django.core.management import execute_manager
ImportError: No module named django.core.management
Traceback (most recent call last):
File "hack/manage.py", line 2, in <module>
from django.core.management import execute_manager
ImportError: No module named django.core.management
dotcloud@hack-default-www-0:/var/log/supervisor$
supervisorctl status shows that the processes are running, but the PIDs are different. Also, the Celery functionality seems to be working OK: messages are processed, and I can see them being processed in the Django admin interface (so celerycam is running).
# supervisorctl status
apnsd RUNNING pid 225, uptime 0:00:44
celerycam RUNNING pid 224, uptime 0:00:44
djcelery RUNNING pid 226, uptime 0:00:44
Supervisord.conf file:
[program:djcelery]
directory = /home/dotcloud/current/
command = python hack/manage.py celeryd -E -l info -c 2
stderr_logfile = /var/log/supervisor/%(program_name)s_error.log
stdout_logfile = /var/log/supervisor/%(program_name)s.log
[program:celerycam]
directory = /home/dotcloud/current/
command = python hack/manage.py celerycam
stderr_logfile = /var/log/supervisor/%(program_name)s_error.log
stdout_logfile = /var/log/supervisor/%(program_name)s.log
http://jefurii.cafejosti.net/blog/2011/01/26/celery-in-virtualenv-with-supervisord/ says that the problem may be that the python being used is incorrect, so I've explicitly specified the python in the supervisord file. It now works, but it doesn't explain what I'm seeing above and why I've had to change my configuration when it was working fine last week.
Also, not all of the pids are lining up:
2012-06-03 11:19:03,045 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2012-06-03 11:19:03,051 INFO daemonizing the supervisord process
2012-06-03 11:19:03,052 INFO supervisord started with pid 144
2012-06-03 11:19:04,061 INFO spawned: 'celerycam' with pid 151
2012-06-03 11:19:04,066 INFO spawned: 'apnsd' with pid 153
2012-06-03 11:19:04,085 INFO spawned: 'djcelery' with pid 155
2012-06-03 11:19:04,104 INFO spawned: 'uwsgi' with pid 156
2012-06-03 11:19:05,271 INFO success: celerycam entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2012-06-03 11:19:05,271 INFO success: apnsd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2012-06-03 11:19:05,271 INFO success: djcelery entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2012-06-03 11:19:05,271 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
The status shows that the celerycam PIDs aren't lining up:
# supervisorctl status
apnsd RUNNING pid 153, uptime 0:06:17
celerycam RUNNING pid 150, uptime 0:06:17
djcelery RUNNING pid 155, uptime 0:06:17
My first guess is that you're using the wrong Python binary (the system Python instead of the virtualenv Python), and it is causing the error below because that system Python doesn't have the package installed.
Traceback (most recent call last):
File "hack/manage.py", line 2, in <module>
from django.core.management import execute_manager
ImportError: No module named django.core.management
You should change your supervisord.conf to the following to make sure you are pointing to the correct python version.
[program:djcelery]
directory = /home/dotcloud/current/
command = /home/dotcloud/env/bin/python hack/manage.py celeryd -E -l info -c 2
stderr_logfile = /var/log/supervisor/%(program_name)s_error.log
stdout_logfile = /var/log/supervisor/%(program_name)s.log
[program:celerycam]
directory = /home/dotcloud/current/
command = /home/dotcloud/env/bin/python hack/manage.py celerycam
stderr_logfile = /var/log/supervisor/%(program_name)s_error.log
stdout_logfile = /var/log/supervisor/%(program_name)s.log
The Python path went from python to /home/dotcloud/env/bin/python.
I'm not sure why supervisor is saying it is running when it is not, but hopefully this one little change will help clear up your errors, and get everything back to working again.
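A quick way to confirm which interpreter actually has Django available (a sketch, using the virtualenv path from the config above):
$ python -c "import django; print(django.get_version())"
$ /home/dotcloud/env/bin/python -c "import django; print(django.get_version())"
The first command (system Python) should reproduce the ImportError, while the second (virtualenv Python) should print the Django version.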