I am trying to install Kappelt gBridge on a Raspberry Pi 3 Model B, using this guide: https://doc.gbridge.io/selfHosted/hostItYourself.html
I generated a docker-compose.yml using this generator: https://about.gbridge.io/dockergen/. I am using my own MySQL and MQTT servers, but Redis runs inside Docker.
I then changed the redis image from 'redis:4' to 'hypriot/rpi-redis:latest'.
Here is my docker-compose.yml:
version: '3'
networks:
  backend:
    driver: bridge
  web_frontend:
    driver: bridge
services:
  web:
    image: 'pkap/gbridge-web:latest'
    restart: always
    ports:
      - '8082:80'
    environment:
      APP_ENV: production
      APP_KEY: [secret]
      APP_DEBUG: 'false'
      APP_LOG_LEVEL: warning
      APP_URL: 'http://localhost'
      DB_CONNECTION: mysql
      DB_HOST: 192.168.1.92
      DB_PORT: '3306'
      DB_DATABASE: gbridge
      DB_USERNAME: gbridge
      DB_PASSWORD: [secret]
      BROADCAST_DRIVER: log
      CACHE_DRIVER: file
      SESSION_DRIVER: file
      SESSION_LIFETIME: 120
      QUEUE_DRIVER: sync
      REDIS_HOST: cache
      REDIS_PASSWORD: 'null'
      REDIS_PORT: '6379'
      MAIL_DRIVER: smtp
      MAIL_HOST: ERROR
      MAIL_PORT: ERROR
      MAIL_USERNAME: ERROR
      MAIL_PASSWORD: ERROR
      MAIL_ENCRYPTION: ERROR
      GOOGLE_CLIENTID: [secret]
      GOOGLE_PROJECTID: [secret]
    links:
      - cache
    depends_on:
      - cache
    networks:
      - web_frontend
      - backend
  redis-worker:
    image: 'pkap/gbridge-redis-worker:latest'
    restart: always
    environment:
      GBRIDGE_REDISWORKER_REDIS: 'redis://cache:6379'
      GBRIDGE_REDISWORKER_MQTT: 'mqtt://192.168.1.93:1883'
      GBRIDGE_REDISWORKER_MQTTUSER: gbridge
      GBRIDGE_REDISWORKER_MQTTPASSWORD: [secret]
      GBRIDGE_REDISWORKER_HOMEGRAPHKEY: [secret]
    networks:
      - backend
    links:
      - cache
    depends_on:
      - cache
  cache:
    image: 'hypriot/rpi-redis:latest'
    # image: 'redis:4'
    restart: always
    expose:
      - '6379'
    networks:
      - backend
When starting it, I got the following output:
pi@PI5:/opt/gbridge $ sudo docker-compose up
Creating network "gbridge_web_frontend" with driver "bridge"
Creating network "gbridge_backend" with driver "bridge"
Pulling web (pkap/gbridge-web:latest)...
latest: Pulling from pkap/gbridge-web
5e6ec7f28fb7: Pull complete
cf165947b5b7: Pull complete
7bd37682846d: Pull complete
99daf8e838e1: Pull complete
ae320713efba: Pull complete
ebcb99c48d8c: Pull complete
9867e71b4ab6: Pull complete
936eb418164a: Pull complete
dfa2ee5b92b5: Pull complete
1d7c2c4e167c: Pull complete
6cca41ef2bb3: Pull complete
bef66d80d31c: Pull complete
a8f43605d68c: Pull complete
47d407538c8d: Pull complete
c2797628ac4c: Pull complete
a34d45d05e93: Pull complete
2fe5576814be: Pull complete
a063192606ae: Pull complete
a37fa9e3ac80: Pull complete
caca02ee174f: Pull complete
a09b4aa1d2d4: Pull complete
054e4215a923: Pull complete
65491e91c688: Pull complete
ab9be62f37ed: Pull complete
3d758b0e9492: Pull complete
a9e75786a08e: Pull complete
b8f6d39ac5c2: Pull complete
e54728c3516e: Pull complete
ea7523212e8f: Pull complete
a6372d49dd57: Pull complete
a855663b44bc: Pull complete
Creating gbridge_cache_1 ... done
Creating gbridge_redis-worker_1 ... done
Creating gbridge_web_1 ... done
Attaching to gbridge_cache_1, gbridge_redis-worker_1, gbridge_web_1
redis-worker_1 | standard_init_linux.go:207: exec user process caused "exec format error"
web_1 | standard_init_linux.go:207: exec user process caused "exec format error"
cache_1 | 1:C 26 Jan 14:29:17.537 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
cache_1 | 1:M 26 Jan 14:29:17.548 # Warning: 32 bit instance detected but no memory limit set. Setting 3 GB maxmemory limit with 'noeviction' policy now.
cache_1 | _._
cache_1 | _.-``__ ''-._
cache_1 | _.-`` `. `_. ''-._ Redis 3.0.0 (00000000/0) 32 bit
cache_1 | .-`` .-```. ```\/ _.,_ ''-._
cache_1 | ( ' , .-` | `, ) Running in standalone mode
cache_1 | |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
cache_1 | | `-._ `._ / _.-' | PID: 1
cache_1 | `-._ `-._ `-./ _.-' _.-'
cache_1 | |`-._`-._ `-.__.-' _.-'_.-'|
cache_1 | | `-._`-._ _.-'_.-' | http://redis.io
cache_1 | `-._ `-._`-.__.-'_.-' _.-'
cache_1 | |`-._`-._ `-.__.-' _.-'_.-'|
cache_1 | | `-._`-._ _.-'_.-' |
cache_1 | `-._ `-._`-.__.-'_.-' _.-'
cache_1 | `-._ `-.__.-' _.-'
cache_1 | `-._ _.-'
cache_1 | `-.__.-'
cache_1 |
cache_1 | 1:M 26 Jan 14:29:17.551 # Server started, Redis version 3.0.0
cache_1 | 1:M 26 Jan 14:29:17.551 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
cache_1 | 1:M 26 Jan 14:29:17.551 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
cache_1 | 1:M 26 Jan 14:29:17.551 * The server is now ready to accept connections on port 6379
gbridge_redis-worker_1 exited with code 1
redis-worker_1 | standard_init_linux.go:207: exec user process caused "exec format error"
redis-worker_1 | standard_init_linux.go:207: exec user process caused "exec format error"
redis-worker_1 | standard_init_linux.go:207: exec user process caused "exec format error"
gbridge_web_1 exited with code 1
gbridge_redis-worker_1 exited with code 1
gbridge_redis-worker_1 exited with code 1
gbridge_web_1 exited with code 1
gbridge_redis-worker_1 exited with code 1
Exception in thread Thread-10:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/docker/api/client.py", line 256, in _raise_for_status
response.raise_for_status()
File "/usr/local/lib/python3.5/dist-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 409 Client Error: Conflict for url: http+docker://localhost/v1.25/containers/7dee1246e16ca1e28d703e4fe1ab0e1da958741d0fd6562e429da1bf8b291a61/attach?stream=1&logs=0&stderr=1&stdout=1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/usr/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.5/dist-packages/compose/cli/log_printer.py", line 233, in watch_events
event['container'].attach_log_stream()
File "/usr/local/lib/python3.5/dist-packages/compose/container.py", line 215, in attach_log_stream
self.log_stream = self.attach(stdout=True, stderr=True, stream=True)
File "/usr/local/lib/python3.5/dist-packages/compose/container.py", line 307, in attach
return self.client.attach(self.id, *args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/docker/utils/decorators.py", line 19, in wrapped
return f(self, resource_id, *args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/docker/api/container.py", line 61, in attach
response, stream, self._check_is_tty(container), demux=demux)
File "/usr/local/lib/python3.5/dist-packages/docker/api/client.py", line 395, in _read_from_socket
socket = self._get_raw_response_socket(response)
File "/usr/local/lib/python3.5/dist-packages/docker/api/client.py", line 306, in _get_raw_response_socket
self._raise_for_status(response)
File "/usr/local/lib/python3.5/dist-packages/docker/api/client.py", line 258, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/usr/local/lib/python3.5/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 409 Client Error: Conflict ("b'container 7dee1246e16ca1e28d703e4fe1ab0e1da958741d0fd6562e429da1bf8b291a61 is restarting, wait until the container is running'")
gbridge_web_1 exited with code 1
The images you are pulling do not support the CPU architecture of the Raspberry Pi 3, which is why both containers fail with:
redis-worker_1 | standard_init_linux.go:207: exec user process caused "exec format error"
web_1 | standard_init_linux.go:207: exec user process caused "exec format error"
You can solve this by using these images instead in docker-compose.yml:
pkap/gbridge-redis-worker:arm32v6-latest
pkap/gbridge-web-fpm:arm32v6-latest
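Assuming the rest of the compose file stays the same, only the two image lines need to change. Note that gbridge-web-fpm is an FPM variant, so it may serve PHP-FPM rather than plain HTTP on port 80; check the image's documentation before reusing the existing port mapping:

```yaml
services:
  web:
    # ARM build of the web frontend (FPM variant)
    image: 'pkap/gbridge-web-fpm:arm32v6-latest'
    # ...rest of the service unchanged
  redis-worker:
    # ARM build of the Redis worker
    image: 'pkap/gbridge-redis-worker:arm32v6-latest'
    # ...rest of the service unchanged
```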
I ran into the same problem, see my issue on GitHub.
Related
I'm trying to run RedisInsight in Docker Compose and I always get errors, even though the only thing I changed from the docker run command is the volume. How do I fix this?
docker-compose.yml
redisinsights:
  image: redislabs/redisinsight:latest
  restart: always
  ports:
    - '8001:8001'
  volumes:
    - ./data/redisinsight:/db
logs
redisinsights_1 | Process 9 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid)
redisinsights_1 | Traceback (most recent call last):
redisinsights_1 | File "./entry.py", line 11, in <module>
redisinsights_1 | File "./startup.py", line 47, in <module>
redisinsights_1 | File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 79, in __getattr__
redisinsights_1 | self._setup(name)
redisinsights_1 | File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 66, in _setup
redisinsights_1 | self._wrapped = Settings(settings_module)
redisinsights_1 | File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 157, in __init__
redisinsights_1 | mod = importlib.import_module(self.SETTINGS_MODULE)
redisinsights_1 | File "/usr/local/lib/python3.6/importlib/__init__.py", line 126, in import_module
redisinsights_1 | return _bootstrap._gcd_import(name[level:], package, level)
redisinsights_1 | File "./redisinsight/settings/__init__.py", line 365, in <module>
redisinsights_1 | File "/usr/local/lib/python3.6/os.py", line 220, in makedirs
redisinsights_1 | mkdir(name, mode)
redisinsights_1 | PermissionError: [Errno 13] Permission denied: '/db/rsnaps'
Follow the steps below to make it work:
Step 1. Create a Docker Compose file as shown below:
version: '3'
services:
  redis:
    image: redislabs/redismod
    ports:
      - 6379:6379
  redisinsight:
    image: redislabs/redisinsight:latest
    ports:
      - '8001:8001'
    volumes:
      - ./Users/ajeetraina/data/redisinsight:/db
Step 2. Provide sufficient permissions
Go to Preferences > File Sharing in Docker Desktop and add the folder structure you want to share.
(Change the directory structure to match your environment.)
Step 3. Execute the Docker Compose CLI and check that the services are up:
docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------------
pinegraph_redis_1 redis-server --loadmodule ... Up 0.0.0.0:6379->6379/tcp
pinegraph_redisinsight_1 bash ./docker-entry.sh pyt ... Up 0.0.0.0:8001->8001/tcp
Open the RedisInsight URL (http://localhost:8001) in your web browser.
Enjoy!
I was having the same problem on a Linux machine. I was able to solve it through the "Installing RedisInsight on Docker" page of the Redis documentation:

Note: Make sure the directory you pass as a volume to the container has the necessary permissions for the container to access it. For example, if the previous command returns a permissions error, run the following command:

$ chown -R 1001 redisinsight
I am running docker-compose 1.25.5 on an Ubuntu 20 box, and I have a GitHub repo working "fine" in its home folder: I can docker-compose build and docker-compose up with no problem, and the container does what is expected. The GitHub repo is current with the on-disk files.
As a test, however, I created a new folder, pulled the repo, and ran docker-compose build with no problem. But when I tried to run docker-compose up, I got the following error:
Starting live_evidently_1 ... done
Attaching to live_evidently_1
evidently_1 | Traceback (most recent call last):
evidently_1 | File "app.py", line 14, in <module>
evidently_1 | with open('config.yml') as f:
evidently_1 | IsADirectoryError: [Errno 21] Is a directory: 'config.yml'
live_evidently_1 exited with code 1
config.yml on my host is a file (of course) and the docker-compose.yml file is unremarkable:
version: "3"
services:
  evidently:
    build: ../
    volumes:
      - ./data:/data
      - ./config.yml:/app/config.yml
...
So I am left with two interrelated problems: 1) why does the test copy of the repo fail while the original is fine (git status is unremarkable, and all the files I want on GitHub are up to date), and 2) why does docker-compose think config.yml is a folder when it is clearly a file? I would welcome suggestions.
You need to use the bind mount type explicitly. To do this, use the long volume syntax, like this:

volumes:
  - type: bind
    source: ./config.yml
    target: /app/config.yml
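As for why a file shows up as a folder: if the bind-mount source path does not exist on the host when the container starts (easy to hit in a freshly cloned folder, e.g. if compose is run from a different working directory), Docker creates an empty directory at that path, and the application then fails when it opens that directory as a file. A minimal sketch of the resulting error, outside Docker:

```python
import os
import tempfile

# Simulate what Docker does when the bind-mount source is missing:
# it creates a directory at the mount path instead of a file.
workdir = tempfile.mkdtemp()
config_path = os.path.join(workdir, "config.yml")
os.mkdir(config_path)  # a directory now sits where a file was expected

try:
    with open(config_path) as f:  # same call the app makes
        f.read()
except IsADirectoryError as err:
    print(f"IsADirectoryError: {err}")
```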
I am running
docker-compose version 1.25.4, build 8d51620a
on
OS X Catalina, v10.15.4 (19E266)
I am using the system python.
When I run docker-compose, it crashes with the following error:
Traceback (most recent call last):
File "docker-compose", line 6, in <module>
File "compose/cli/main.py", line 72, in main
File "compose/cli/main.py", line 128, in perform_command
File "compose/cli/main.py", line 1077, in up
File "compose/cli/main.py", line 1073, in up
File "compose/project.py", line 548, in up
File "compose/service.py", line 355, in ensure_image_exists
File "compose/service.py", line 381, in image
File "site-packages/docker/utils/decorators.py", line 17, in wrapped
docker.errors.NullResource: Resource ID was not provided
[9018] Failed to execute script docker-compose
I have tried a fresh repo clone and a fresh install of docker, neither work. What could be causing this?
It turned out that I had uninitialized environment variables that were causing the crash.
The particular cause was env vars that set image names in the docker-compose file; with them unset, Compose tried to pull an image with a blank name.
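One way to fail fast with a clear message instead of a crash (a sketch; the service and variable names here are hypothetical) is Compose's error-on-unset variable substitution syntax:

```yaml
services:
  app:
    # ${VAR:?message} makes Compose abort with "message"
    # if APP_IMAGE is unset or empty
    image: "${APP_IMAGE:?APP_IMAGE is not set}"
```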
It can be uninitialized environment variables, but in my case it was some other command before docker-compose build that was failing: I was pulling images from the registry and it could not find them.
I've seen this error when passing docker-compose files explicitly and omitting one, e.g.:
docker-compose -f docker-compose.yml up # fails
docker-compose -f docker-compose.yml -f docker-compose.override.yml up # works
I faced the same issue, but in my case the cause was different.
I had two docker-compose files:
docker-compose.yml
version: "3"
networks:
  web: {}
docker-compose.development.yml
version: "3"
services:
  web:
    image: ""
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      API_URL: http://example.com/api
    ports:
      - "11000:22000"
    networks:
      - web
    restart: on-failure
The problem came from the empty image property in the docker-compose.development.yml file.
When I removed it and ran the command below, it succeeded:
docker-compose --project-name my-web -f docker-compose.yml -f docker-compose.development.yml up --detach
This is the new docker-compose.development.yml file:
version: "3"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      API_URL: http://example.com/api
    ports:
      - "11000:22000"
    networks:
      - web
    restart: on-failure
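To spot problems like the empty image value before running up, you can have Compose validate the files and print the merged, effective configuration:

```
docker-compose -f docker-compose.yml -f docker-compose.development.yml config
```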
I have already posted this as an issue on GitHub at https://github.com/snakemake/snakemake/issues/279 but haven't gotten a response yet. I hope to find help here.
Version
I am using the following versions on our HPC cluster:
Snakemake v5.4.4
singularity version 3.5.3
Minimal example
singularity: "docker://bash"

rule test:
    shell: "echo test"
Describe the bug
snakemake --use-singularity --debug
returns this message:
Building DAG of jobs...
Pulling singularity image docker://bash.
Using shell: /bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 test
1
[Fri Mar 13 15:59:30 2020]
rule test:
jobid: 0
Activating singularity image /data/nanopore/test/.snakemake/singularity/36b22e49e8a03fd08160e9345dd1034e.simg
FATAL: container creation failed: not mounting user requested home: user bind control is disallowed
[Fri Mar 13 15:59:30 2020]
Error in rule test:
jobid: 0
RuleException:
CalledProcessError in line 4 of /data/nanopore/test/Snakefile:
Command ' singularity exec --home /data/nanopore/test --bind /opt/snakemake/v5.4.4/lib/python3.5/site-packages/snakemake-5.4.4-py3.5.egg:/mnt/snakemake /data/nanopore/test/.snakemake/singularity/36b22e49e8a03fd08160e9345dd1034e.simg bash -c 'set -euo pipefail; echo test'' returned non-zero exit status 255
File "/data/nanopore/test/Snakefile", line 4, in __rule_test
File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /data/nanopore/test/.snakemake/log/2020-03-13T155917.601627.snakemake.log
Apparently, snakemake runs singularity with default values for --home and --bind. These were disallowed by the administrator, however.
Executing
singularity exec --home /data/nanopore/test --bind /opt/snakemake/v5.4.4/lib/python3.5/site-packages/snakemake-5.4.4-py3.5.egg:/mnt/snakemake /data/nanopore/test/.snakemake/singularity/36b22e49e8a03fd08160e9345dd1034e.simg bash -c 'set -euo pipefail;'
returns:
FATAL: container creation failed: not mounting user requested home: user bind control is disallowed
Additional context
Is there a way to disable Snakemake's default Singularity parameters? Inside the Singularity container, the /data directory is readable and writable anyway.
Thanks a lot
I am new to Airflow. So far I have found that Airflow uses Celery to schedule its tasks. To run Airflow, I need to run the command 'airflow worker', which starts Celery. However, I always hit a bug here. From what I have found searching the Internet, most such problems happen in a celery.py that users write themselves; I use Celery only by starting Airflow, so my case is a little different.
Could anyone help me? Below is the output showing the bug:
airflow@linux-test:~$ airflow worker
[2018-06-22 07:29:04,068] {__init__.py:57} INFO - Using executor CeleryExecutor
[2018-06-22 07:29:04,125] {driver.py:124} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/Grammar.txt
[2018-06-22 07:29:04,146] {driver.py:124} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/PatternGrammar.txt
-------------- celery@linux-test v4.2.0 (windowlicker)
---- **** -----
--- * *** * -- Linux-4.15.0-22-generic-x86_64-with-Ubuntu-18.04-bionic 2018-06-22 07:29:04
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: airflow.executors.celery_executor:0x7f2267122310
- ** ---------- .> transport: amqp://airflow:**@localhost:5672/airflow
- ** ---------- .> results: postgresql://airflow:**@localhost:5432/airflow
- *** --- * --- .> concurrency: 16 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> default exchange=default(direct) key=default
[2018-06-22 07:29:04,630] {__init__.py:57} INFO - Using executor CeleryExecutor
[2018-06-22 07:29:04,689] {driver.py:124} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/Grammar.txt
[2018-06-22 07:29:04,715] {driver.py:124} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/PatternGrammar.txt
Starting flask
[2018-06-22 07:29:04,858] {_internal.py:88} INFO - * Running on http://0.0.0.0:8793/ (Press CTRL+C to quit)
[2018-06-22 07:29:06,122: ERROR/ForkPoolWorker-1] Pool process <celery.concurrency.asynpool.Worker object at 0x7f22648c8e10> error: TypeError("Required argument 'object' (pos 1) not found",)
Traceback (most recent call last):
File "/home/airflow/.local/lib/python2.7/site-packages/billiard/pool.py", line 289, in __call__
sys.exit(self.workloop(pid=pid))
File "/home/airflow/.local/lib/python2.7/site-packages/billiard/pool.py", line 347, in workloop
req = wait_for_job()
File "/home/airflow/.local/lib/python2.7/site-packages/billiard/pool.py", line 447, in receive
ready, req = _receive(1.0)
File "/home/airflow/.local/lib/python2.7/site-packages/billiard/pool.py", line 419, in _recv
return True, loads(get_payload())
File "/home/airflow/.local/lib/python2.7/site-packages/billiard/common.py", line 107, in pickle_loads
return load(BytesIO(s))
TypeError: Required argument 'object' (pos 1) not found
[2018-06-22 07:29:06,127: ERROR/MainProcess] Process 'ForkPoolWorker-1' pid:18839 exited with 'exitcode 1'
Uninstalling librabbitmq worked for me: pip uninstall librabbitmq. I don't fully understand why, but apparently some optimization in that library makes this fail. This is the answer I found on a website (I had to translate the page, hence my shaky grasp of the solution).
Hope it helps
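If you would rather keep librabbitmq installed, an alternative sketch (untested here; the credentials are placeholders matching the log above) is to force Celery's pure-Python AMQP transport by using the pyamqp:// scheme in the broker URL, e.g. in airflow.cfg:

```
[celery]
# pyamqp:// forces the py-amqp transport even when librabbitmq is installed
broker_url = pyamqp://airflow:password@localhost:5672/airflow
```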