Docker-compose failing on startup

I am running
docker-compose version 1.25.4, build 8d51620a
on
OS X Catalina, v10.15.4 (19E266)
I am using the system python.
When I run docker-compose, it crashes with the following error:
Traceback (most recent call last):
File "docker-compose", line 6, in <module>
File "compose/cli/main.py", line 72, in main
File "compose/cli/main.py", line 128, in perform_command
File "compose/cli/main.py", line 1077, in up
File "compose/cli/main.py", line 1073, in up
File "compose/project.py", line 548, in up
File "compose/service.py", line 355, in ensure_image_exists
File "compose/service.py", line 381, in image
File "site-packages/docker/utils/decorators.py", line 17, in wrapped
docker.errors.NullResource: Resource ID was not provided
[9018] Failed to execute script docker-compose
I have tried a fresh clone of the repo and a fresh install of Docker; neither works. What could be causing this?

It turned out that I had uninitialized environment variables that were causing the crash.
The specific cause was env vars setting image names in the docker-compose file, so compose tried to pull a blank image.
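For illustration, a minimal sketch of that failure mode (the service and variable names here are hypothetical): if APP_IMAGE is unset, image: interpolates to an empty string and compose crashes with NullResource, whereas a default value keeps it working:

version: "3"
services:
  app:
    # crashes with NullResource when APP_IMAGE is unset and has no default;
    # the ${VAR:-default} form falls back to a known tag instead
    image: "${APP_IMAGE:-myapp:latest}"

Running docker-compose config prints the file with all variables interpolated, which makes a blank image: entry easy to spot.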

It can be uninitialized environment variables, but in my case it was a different command, run before docker-compose build, that was failing:
I was pulling images from the registry, and it could not find them.

I've seen this error when passing docker-compose files explicitly and omitting one, e.g.
docker-compose -f docker-compose.yml up # fails
docker-compose -f docker-compose.yml -f docker-compose.override.yml up # works
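Note that passing -f explicitly disables the automatic loading of docker-compose.override.yml, which docker-compose otherwise picks up on its own. A minimal sketch of how the failure can arise (file contents are hypothetical): the base file only wires the service up and leaves the image name to the override, so the merged configuration is complete only when both files are passed.

# docker-compose.yml - no image here
version: "3"
services:
  app:
    ports:
      - "8000:8000"

# docker-compose.override.yml - supplies the image
version: "3"
services:
  app:
    image: myapp:latest

With only the first file, the service resolves to no image at all, which on this compose version can surface as the same NullResource crash.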

I faced the same issue.
In my case, the cause was different.
I had 2 docker-compose files:
docker-compose.yml
version: "3"
networks:
  web: {}
docker-compose.development.yml
version: "3"
services:
  web:
    image: ""
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      API_URL: http://example.com/api
    ports:
      - "11000:22000"
    networks:
      - web
    restart: on-failure
The problem was the image property in the docker-compose.development.yml file.
When I removed it and ran the command below:
docker-compose --project-name my-web -f docker-compose.yml -f docker-compose.development.yml up --detach
It was successful.
This is the new docker-compose.development.yml file:
version: "3"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      API_URL: http://example.com/api
    ports:
      - "11000:22000"
    networks:
      - web
    restart: on-failure
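As a quick sanity check (standard compose behavior, nothing specific to this setup), you can print the merged configuration before bringing the stack up:

docker-compose --project-name my-web -f docker-compose.yml -f docker-compose.development.yml config

An image: "" entry in the merged output is exactly the condition that produces the NullResource crash above.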

Related

airflow SSH operator error: [Errno 2] No such file or directory

airflow 1.10.10
minikube 1.22.0
amazon emr
I am running Airflow on Kubernetes (minikube).
DAGs are synced from GitHub.
spark-submit runs on Amazon EMR in CLI mode.
In order to do that, I attach the EMR pem key.
So I fetch the pem key from AWS S3 in an ExtraInitContainer (using the awscli image) and mount the volume at airflow/sshpem.
The error is reported when I create a connection from the Airflow web UI with:
"con_type": "ssh"
"key_file": "/opt/sshepm/emr.pem"
SSH operator error: [Errno 2] No such file or directory: '/opt/airflow/sshpem/emr.pem'
The file is there. I think it is related to some PATH or permission issue: I fetch emr.pem in the ExtraInitContainer, and its owner was root. Although I temporarily changed the owner to 1000:1000, the Airflow web UI still can't access this directory when it loads the key.
The full log is below:
Traceback (most recent call last):
  File "/home/airflow/.local/lib/python3.6/site-packages/airflow/contrib/operators/ssh_operator.py", line 108, in execute
    with self.ssh_hook.get_conn() as ssh_client:
  File "/home/airflow/.local/lib/python3.6/site-packages/airflow/contrib/hooks/ssh_hook.py", line 194, in get_conn
    client.connect(**connect_kwargs)
  File "/home/airflow/.local/lib/python3.6/site-packages/paramiko/client.py", line 446, in connect
    passphrase,
  File "/home/airflow/.local/lib/python3.6/site-packages/paramiko/client.py", line 677, in _auth
    key_filename, pkey_class, passphrase
  File "/home/airflow/.local/lib/python3.6/site-packages/paramiko/client.py", line 586, in _key_from_filepath
    key = klass.from_private_key_file(key_path, password)
  File "/home/airflow/.local/lib/python3.6/site-packages/paramiko/pkey.py", line 235, in from_private_key_file
    key = cls(filename=filename, password=password)
  File "/home/airflow/.local/lib/python3.6/site-packages/paramiko/rsakey.py", line 55, in __init__
    self._from_private_key_file(filename, password)
  File "/home/airflow/.local/lib/python3.6/site-packages/paramiko/rsakey.py", line 175, in _from_private_key_file
    data = self._read_private_key_file("RSA", filename, password)
  File "/home/airflow/.local/lib/python3.6/site-packages/paramiko/pkey.py", line 307, in _read_private_key_file
    with open(filename, "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/opt/airflow/sshpem/emr-pa.pem'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 979, in _run_raw_task
    result = task_copy.execute(context=context)
  File "/opt/airflow/class101-airflow/plugins/operators/emr_ssh_operator.py", line 107, in execute
    super().execute(context)
  File "/home/airflow/.local/lib/python3.6/site-packages/airflow/contrib/operators/ssh_operator.py", line 177, in execute
    raise AirflowException("SSH operator error: {0}".format(str(e)))
airflow.exceptions.AirflowException: SSH operator error: [Errno 2] No such file or directory: '/opt/airflow/sshpem/emr-pa.pem'
[2021-07-14 05:40:31,624] Marking task as UP_FOR_RETRY. dag_id=test_staging, task_id=extract_categories_from_mongo, execution_date=20210712T190000, start_date=20210714T054031, end_date=20210714T054031
[2021-07-14 05:40:36,303] Task exited with return code 1
airflow home: /opt/airflow
dags: /opt/airflow//dags
pemkey: /opt/sshpem/
airflow.cfg: /opt/airflow
airflow_env:
export PATH="/home/airflow/.local/bin:$PATH"
My YAML file:
airflow:
  image:
    repository: airflow
  executor: KubernetesExecutor
  extraVolumeMounts:
    - name: sshpem
      mountPath: /opt/airflow/sshpem
  extraVolumes:
    - name: sshpem
      emptyDir: {}
scheduler:
  extraInitContainers:
    - name: emr-key-file-download
      image: amazon/aws-cli
      command: [
        "sh",
        "-c",
        "aws s3 cp s3://mykeyfile/path.my.pem && chown -R 1000:1000 /opt/airflow/sshpem/"
      ]
      volumeMounts:
        - mountPath: /opt/airflow/sshpem
          name: sshpem
Are you using KubernetesExecutor or CeleryExecutor?
If the former, you have to make sure the extra init container is added to the pod_template you are using (tasks in KubernetesExecutor run as separate pods).
If the latter, you should make sure the extra init container is also added for the workers, not only for the scheduler.
BTW, Airflow 1.10 reached end-of-life on June 17th, 2021, and it will not receive even critical security fixes. You can watch our talk from the recent Airflow Summit, "Keep your Airflow Secure" (https://airflowsummit.org/sessions/2021/panel-airflow-security/), to learn why it is important to upgrade to Airflow 2.
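For illustration only, a generic Kubernetes pod-spec sketch of the same idea (names and paths are hypothetical, and where the fragment goes depends on your chart or pod template): the init container downloads the key into a shared emptyDir and fixes its ownership and mode, since paramiko reads the file as the airflow user and SSH keys generally need 0600 permissions.

spec:
  initContainers:
    - name: emr-key-file-download        # hypothetical name
      image: amazon/aws-cli
      command:
        - sh
        - -c
        - >-
          aws s3 cp s3://mykeyfile/path.my.pem /opt/airflow/sshpem/emr.pem &&
          chown 1000 /opt/airflow/sshpem/emr.pem &&
          chmod 600 /opt/airflow/sshpem/emr.pem
      volumeMounts:
        - name: sshpem
          mountPath: /opt/airflow/sshpem
  volumes:
    - name: sshpem
      emptyDir: {}

Whatever path the key lands on must match the key_file configured in the Airflow connection exactly; in the question those two paths differ (/opt/sshepm vs /opt/airflow/sshpem), which is worth double-checking first.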

Run Redis Insights in Docker Compose

I'm trying to run Redis Insight in Docker Compose and I always get errors even though the only thing I'm changing from the Docker Run command is the volume. How do I fix this?
docker-compose.yml
redisinsights:
  image: redislabs/redisinsight:latest
  restart: always
  ports:
    - '8001:8001'
  volumes:
    - ./data/redisinsight:/db
logs
redisinsights_1 | Process 9 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid)
redisinsights_1 | Traceback (most recent call last):
redisinsights_1 | File "./entry.py", line 11, in <module>
redisinsights_1 | File "./startup.py", line 47, in <module>
redisinsights_1 | File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 79, in __getattr__
redisinsights_1 | self._setup(name)
redisinsights_1 | File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 66, in _setup
redisinsights_1 | self._wrapped = Settings(settings_module)
redisinsights_1 | File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 157, in __init__
redisinsights_1 | mod = importlib.import_module(self.SETTINGS_MODULE)
redisinsights_1 | File "/usr/local/lib/python3.6/importlib/__init__.py", line 126, in import_module
redisinsights_1 | return _bootstrap._gcd_import(name[level:], package, level)
redisinsights_1 | File "./redisinsight/settings/__init__.py", line 365, in <module>
redisinsights_1 | File "/usr/local/lib/python3.6/os.py", line 220, in makedirs
redisinsights_1 | mkdir(name, mode)
redisinsights_1 | PermissionError: [Errno 13] Permission denied: '/db/rsnaps'
Follow the below steps to make it work:
Step 1. Create a Docker Compose file as shown below:
version: '3'
services:
  redis:
    image: redislabs/redismod
    ports:
      - 6379:6379
  redisinsight:
    image: redislabs/redisinsight:latest
    ports:
      - '8001:8001'
    volumes:
      - ./Users/ajeetraina/data/redisinsight:/db
Step 2. Provide sufficient permissions
Go to Preferences under Docker Desktop > File Sharing and add the folder structure you want to share.
Please change the directory structure as per your environment
Step 3. Execute the Docker Compose CLI and verify that both containers are up:
docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------------
pinegraph_redis_1 redis-server --loadmodule ... Up 0.0.0.0:6379->6379/tcp
pinegraph_redisinsight_1 bash ./docker-entry.sh pyt ... Up 0.0.0.0:8001->8001/tcp
Go to web browser and open RedisInsight URL.
Enjoy!
I was having the same problem on a Linux machine. I was able to solve it through the "Installing RedisInsight on Docker" page of the Redis docs.
Note: Make sure the directory you pass as a volume to the container
has necessary permissions for the container to access it. For example,
if the previous command returns a permissions error, run the following
command:
$ chown -R 1001 redisinsight
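Putting the two answers together, the host-side preparation before starting the stack would look like this (the UID 1001 comes from the Redis note quoted above; adjust the path to match your volume source):

mkdir -p ./data/redisinsight
sudo chown -R 1001 ./data/redisinsight
docker-compose up -d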

docker-compose selectively thinks config.yml is a folder

I am running docker-compose 1.25.5 on an Ubuntu 20 box, and I have a GitHub repo working "fine" in its home folder: I can docker-compose build and docker-compose up with no problem, and the container does what is expected. The GitHub repo is current with the on-disk files.
As a test, however, I created a new folder, pulled the repo, and ran docker-compose build with no problem but when I tried to run docker-compose up, I get the following error:
Starting live_evidently_1 ... done
Attaching to live_evidently_1
evidently_1 | Traceback (most recent call last):
evidently_1 | File "app.py", line 14, in <module>
evidently_1 | with open('config.yml') as f:
evidently_1 | IsADirectoryError: [Errno 21] Is a directory: 'config.yml'
live_evidently_1 exited with code 1
config.yml on my host is a file (of course) and the docker-compose.yml file is unremarkable:
version: "3"
services:
evidently:
build: ../
volumes:
- ./data:/data
- ./config.yml:/app/config.yml
etc...
...
So I am left with two interrelated problems: 1) why does the test version of the repo fail while the original version is fine (git status is unremarkable, and all the files I want on GitHub are up to date), and 2) why does docker-compose think that config.yml is a folder when it is clearly a file? I would welcome suggestions.
You need to use the bind mount type. To do this, you have to use the long syntax, like this:
volumes:
  - type: bind
    source: ./config.yml
    target: /app/config.yml
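The likely reason this matters (general Docker behavior, not specific to this project): with the short ./config.yml:/app/config.yml syntax, if the source path does not exist on the host when the container is created, Docker silently creates it as an empty directory, while the long type: bind syntax refuses to start instead, so the mistake surfaces immediately. In the failing test folder it is worth checking what actually exists on both sides (the service name evidently is taken from the compose file above):

ls -ld config.yml                                          # on the host: a regular file, or a directory Docker created?
docker-compose run --rm evidently ls -ld /app/config.yml   # the same path as the container sees it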

Issues with Catkin Build

This never happened before, but if I create a directory with mkdir -p catkin_ws/src and then run catkin build, I get the following error:
emeric@emeric-desktop:~/catkin_plan_ws$ catkin build
------------------------------------------------------
Profile: default
Extending: [env] /opt/ros/kinetic
Workspace: /home/emeric
------------------------------------------------------
Source Space: [exists] /home/emeric/src
Log Space: [missing] /home/emeric/logs
Build Space: [exists] /home/emeric/build
Devel Space: [exists] /home/emeric/devel
Install Space: [unused] /home/emeric/install
DESTDIR: [unused] None
------------------------------------------------------
Devel Space Layout: linked
Install Space Layout: None
------------------------------------------------------
Additional CMake Args: DCMAKE_BUILT_TYPE=Release
Additional Make Args: None
Additional catkin Make Args: None
Internal Make Job Server: True
Cache Job Environments: False
------------------------------------------------------
Whitelisted Packages: None
Blacklisted Packages: None
------------------------------------------------------
Workspace configuration appears valid.
NOTE: Forcing CMake to run for each package.
------------------------------------------------------
Traceback (most recent call last):
File "/usr/bin/catkin", line 9, in <module>
load_entry_point('catkin-tools==0.4.4', 'console_scripts', 'catkin')()
File "/usr/lib/python2.7/dist-packages/catkin_tools/commands/catkin.py", line 267, in main
catkin_main(sysargs)
File "/usr/lib/python2.7/dist-packages/catkin_tools/commands/catkin.py", line 262, in catkin_main
sys.exit(args.main(args) or 0)
File "/usr/lib/python2.7/dist-packages/catkin_tools/verbs/catkin_build/cli.py", line 420, in main
summarize_build=opts.summarize # Can be True, False, or None
File "/usr/lib/python2.7/dist-packages/catkin_tools/verbs/catkin_build/build.py", line 283, in build_isolated_workspace
workspace_packages = find_packages(context.source_space_abs, exclude_subspaces=True, warnings=[])
File "/usr/lib/python2.7/dist-packages/catkin_pkg/packages.py", line 86, in find_packages
packages = find_packages_allowing_duplicates(basepath, exclude_paths=exclude_paths, exclude_subspaces=exclude_subspaces, warnings=warnings)
File "/usr/lib/python2.7/dist-packages/catkin_pkg/packages.py", line 146, in find_packages_allowing_duplicates
xml, filename=filename, warnings=warnings)
File "/usr/lib/python2.7/dist-packages/catkin_pkg/package.py", line 509, in parse_package_string
raise InvalidPackage('The manifest must contain a single "package" root tag')
catkin_pkg.package.InvalidPackage: The manifest must contain a single "package" root tag
Besides, the build and devel folders are created in my home directory, not in the catkin one.
I guess I messed something up, but I don't know what, and thus don't know how to fix it.
Thank you for your help.
The root folder of the build, install, log, devel, and src spaces should be your catkin root, which is where you call catkin build from (in your case, ~/catkin_ws).
In a nutshell, you can't run a catkin task outside of an initialized catkin folder.
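A minimal sketch of the usual catkin-tools workflow, matching the kinetic setup in the log:

mkdir -p ~/catkin_ws/src
cd ~/catkin_ws
catkin init     # marks this directory as the workspace root
catkin build    # build/, devel/ and logs/ are now created here, not in ~/

Running catkin build from the home directory made catkin treat ~/ as the workspace, which is why it picked up /home/emeric/src and created build/ and devel/ next to it; the InvalidPackage error then most likely came from a malformed or non-catkin package.xml found somewhere under the home directory.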

Ansible error due to GMP package version on Centos6

I have a Dockerfile that builds an image based on CentOS (tag: centos6):
FROM centos
RUN rpm -iUvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
RUN yum update -y
RUN yum install ansible -y
ADD ./ansible /home/root/ansible
RUN cd /home/root/ansible;ansible-playbook -v -i hosts site.yml
Everything works fine until Docker hits the last line, then I get the following errors:
[WARNING]: The version of gmp you have installed has a known issue regarding
timing vulnerabilities when used with pycrypto. If possible, you should update
it (ie. yum update gmp).
PLAY [all] ********************************************************************
GATHERING FACTS ***************************************************************
Traceback (most recent call last):
File "/usr/bin/ansible-playbook", line 317, in <module>
sys.exit(main(sys.argv[1:]))
File "/usr/bin/ansible-playbook", line 257, in main
pb.run()
File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 319, in run
if not self._run_play(play):
File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 620, in _run_play
self._do_setup_step(play)
File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 565, in _do_setup_step
accelerate_port=play.accelerate_port,
File "/usr/lib/python2.6/site-packages/ansible/runner/__init__.py", line 204, in __init__
cmd = subprocess.Popen(['ssh','-o','ControlPersist'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
File "/usr/lib64/python2.6/subprocess.py", line 642, in __init__
errread, errwrite)
File "/usr/lib64/python2.6/subprocess.py", line 1234, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Stderr from the command:
package epel-release-6-8.noarch is already installed
I imagine that the cause of the error is the gmp package not being up to date.
There is a related issue on GitHub: https://github.com/ansible/ansible/issues/6941
But there doesn't seem to be a solution at the moment...
Any ideas? Thanks in advance!
My site.yml playbook:
- hosts: all
  pre_tasks:
    - shell: echo 'hello'
Make sure that the files site.yml and hosts are present in the directory you're adding to /home/root/ansible.
Side note, you can simplify your Dockerfile by using WORKDIR:
WORKDIR /home/root/ansible
RUN ansible-playbook -v -i hosts site.yml
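For completeness, a sketch of the simplified Dockerfile the answer describes (same steps as the original, just reorganized; centos:centos6 pins the base image the question mentions):

FROM centos:centos6
RUN rpm -iUvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
RUN yum update -y && yum install -y ansible
ADD ./ansible /home/root/ansible
WORKDIR /home/root/ansible
RUN ansible-playbook -v -i hosts site.yml

A side note on the traceback itself: the OSError comes from ansible trying to spawn ssh, which is not installed in the image; if the playbook is only meant to run against the container being built, adding -c local to the ansible-playbook call avoids SSH entirely.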