Run RedisInsight in Docker Compose - docker-compose

I'm trying to run RedisInsight in Docker Compose, and I always get errors even though the only thing I've changed from the docker run command is the volume. How do I fix this?
docker-compose.yml
redisinsights:
  image: redislabs/redisinsight:latest
  restart: always
  ports:
    - '8001:8001'
  volumes:
    - ./data/redisinsight:/db
logs
redisinsights_1 | Process 9 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid)
redisinsights_1 | Traceback (most recent call last):
redisinsights_1 | File "./entry.py", line 11, in <module>
redisinsights_1 | File "./startup.py", line 47, in <module>
redisinsights_1 | File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 79, in __getattr__
redisinsights_1 | self._setup(name)
redisinsights_1 | File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 66, in _setup
redisinsights_1 | self._wrapped = Settings(settings_module)
redisinsights_1 | File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 157, in __init__
redisinsights_1 | mod = importlib.import_module(self.SETTINGS_MODULE)
redisinsights_1 | File "/usr/local/lib/python3.6/importlib/__init__.py", line 126, in import_module
redisinsights_1 | return _bootstrap._gcd_import(name[level:], package, level)
redisinsights_1 | File "./redisinsight/settings/__init__.py", line 365, in <module>
redisinsights_1 | File "/usr/local/lib/python3.6/os.py", line 220, in makedirs
redisinsights_1 | mkdir(name, mode)
redisinsights_1 | PermissionError: [Errno 13] Permission denied: '/db/rsnaps'

Follow the steps below to make it work:
Step 1. Create a Docker Compose file as shown below:
version: '3'
services:
  redis:
    image: redislabs/redismod
    ports:
      - 6379:6379
  redisinsight:
    image: redislabs/redisinsight:latest
    ports:
      - '8001:8001'
    volumes:
      - /Users/ajeetraina/data/redisinsight:/db
Step 2. Grant sufficient permissions
In Docker Desktop, go to Preferences > File Sharing and add the folder you want to share.
Adjust the directory path to match your environment.
Step 3. Run the Docker Compose CLI
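Bring both services up in the background (assuming the compose file from Step 1), then check their status:
docker-compose up -d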
docker-compose ps
Name                       Command                         State   Ports
------------------------------------------------------------------------------------------
pinegraph_redis_1          redis-server --loadmodule ...   Up      0.0.0.0:6379->6379/tcp
pinegraph_redisinsight_1   bash ./docker-entry.sh pyt ...  Up      0.0.0.0:8001->8001/tcp
Open the RedisInsight URL (http://localhost:8001) in your web browser.
Enjoy!

I was having the same problem on a Linux machine. I was able to solve it with the help of the "Installing RedisInsight on Docker" page of the Redis documentation.
Note: Make sure the directory you pass as a volume to the container has the necessary permissions for the container to access it. For example, if the previous command returns a permissions error, run the following command:
$ chown -R 1001 redisinsight
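Applied to the compose file from the question, a minimal sketch of the fix (assuming, per the note above, that the container accesses the volume as UID 1001):
# create the host directory and hand it over to the container's UID before starting
mkdir -p ./data/redisinsight
sudo chown -R 1001 ./data/redisinsight
docker-compose up -d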

Related

airflow SSH operator error: [Errno 2] No such file or directory:

airflow 1.10.10
minikube 1.22.0
amazon emr
I am running Airflow on Kubernetes (minikube).
DAGs are synced from GitHub.
I run spark-submit on Amazon EMR in CLI mode.
In order to do that, I attach the EMR pem key.
So, I fetch the pem key from AWS S3 in an extraInitContainer (running the awscli image) and mount the volume at airflow/sshpem.
The error is reported when I make a connection from the Airflow WebUI with:
"con_type": "ssh"
"key_file": "/opt/sshepm/emr.pem"
SSH operator error: [Errno 2] No such file or directory: '/opt/airflow/sshpem/emr.pem'
The file is there. I think it is related to some PATH or permission issue, since I fetch emr.pem in the extraInitContainer and its permissions were root. Although I temporarily changed the user to 1000:1000, there is still some issue: the Airflow WebUI can't access this directory when reading the key.
The full log is below:
Traceback (most recent call last):
  File "/home/airflow/.local/lib/python3.6/site-packages/airflow/contrib/operators/ssh_operator.py", line 108, in execute
    with self.ssh_hook.get_conn() as ssh_client:
  File "/home/airflow/.local/lib/python3.6/site-packages/airflow/contrib/hooks/ssh_hook.py", line 194, in get_conn
    client.connect(**connect_kwargs)
  File "/home/airflow/.local/lib/python3.6/site-packages/paramiko/client.py", line 446, in connect
    passphrase,
  File "/home/airflow/.local/lib/python3.6/site-packages/paramiko/client.py", line 677, in _auth
    key_filename, pkey_class, passphrase
  File "/home/airflow/.local/lib/python3.6/site-packages/paramiko/client.py", line 586, in _key_from_filepath
    key = klass.from_private_key_file(key_path, password)
  File "/home/airflow/.local/lib/python3.6/site-packages/paramiko/pkey.py", line 235, in from_private_key_file
    key = cls(filename=filename, password=password)
  File "/home/airflow/.local/lib/python3.6/site-packages/paramiko/rsakey.py", line 55, in __init__
    self._from_private_key_file(filename, password)
  File "/home/airflow/.local/lib/python3.6/site-packages/paramiko/rsakey.py", line 175, in _from_private_key_file
    data = self._read_private_key_file("RSA", filename, password)
  File "/home/airflow/.local/lib/python3.6/site-packages/paramiko/pkey.py", line 307, in _read_private_key_file
    with open(filename, "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/opt/airflow/sshpem/emr-pa.pem'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 979, in _run_raw_task
    result = task_copy.execute(context=context)
  File "/opt/airflow/class101-airflow/plugins/operators/emr_ssh_operator.py", line 107, in execute
    super().execute(context)
  File "/home/airflow/.local/lib/python3.6/site-packages/airflow/contrib/operators/ssh_operator.py", line 177, in execute
    raise AirflowException("SSH operator error: {0}".format(str(e)))
airflow.exceptions.AirflowException: SSH operator error: [Errno 2] No such file or directory: '/opt/airflow/sshpem/emr-pa.pem'
[2021-07-14 05:40:31,624] Marking task as UP_FOR_RETRY. dag_id=test_staging, task_id=extract_categories_from_mongo, execution_date=20210712T190000, start_date=20210714T054031, end_date=20210714T054031
[2021-07-14 05:40:36,303] Task exited with return code 1
airflow home: /opt/airflow
dags: /opt/airflow/dags
pem key: /opt/sshpem/
airflow.cfg: /opt/airflow
airflow env:
export PATH="/home/airflow/.local/bin:$PATH"
My YAML file:
airflow:
  image:
    repository: airflow
  executor: KubernetesExecutor
  extraVolumeMounts:
    - name: sshpem
      mountPath: /opt/airflow/sshpem
  extraVolumes:
    - name: sshpem
      emptyDir: {}
scheduler:
  extraInitContainers:
    - name: emr-key-file-download
      image: amazon/aws-cli
      command: [
        "sh",
        "-c",
        "aws s3 cp s3://mykeyfile/path.my.pem /opt/airflow/sshpem/ && chown -R 1000:1000 /opt/airflow/sshpem/"
      ]
      volumeMounts:
        - mountPath: /opt/airflow/sshpem
          name: sshpem
Are you using KubernetesExecutor or CeleryExecutor?
If the former, you have to make sure the extra init container is added to the pod template you are using (tasks in KubernetesExecutor run as separate pods).
If the latter, you should make sure the extra init container is also added for the workers, not only for the scheduler.
BTW, Airflow 1.10 reached end-of-life on June 17th, 2021, and it will not receive even critical security fixes. You can watch our talk from the recent Airflow Summit, "Keep your Airflow Secure" (https://airflowsummit.org/sessions/2021/panel-airflow-security/), to learn why it is important to upgrade to Airflow 2.
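For the KubernetesExecutor case, a hedged sketch of what such a pod template file could look like (the pod name, Airflow image tag, and S3 destination path are illustrative assumptions based on the question's values):
apiVersion: v1
kind: Pod
metadata:
  name: airflow-worker
spec:
  initContainers:
    # same init container as in the chart values, so task pods also get the key
    - name: emr-key-file-download
      image: amazon/aws-cli
      command: ["sh", "-c", "aws s3 cp s3://mykeyfile/path.my.pem /opt/airflow/sshpem/ && chown -R 1000:1000 /opt/airflow/sshpem/"]
      volumeMounts:
        - mountPath: /opt/airflow/sshpem
          name: sshpem
  containers:
    - name: base
      image: apache/airflow:1.10.10
      volumeMounts:
        - mountPath: /opt/airflow/sshpem
          name: sshpem
  volumes:
    - name: sshpem
      emptyDir: {}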

No module named 'airflow' when initializing Apache airflow docker

I am trying to run Apache Airflow in Docker on a CentOS 7 machine.
I followed all the instructions here: https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html
When I try to initialize the environment by running docker-compose up airflow-init,
I get this error:
[root@centos7 centos]# docker-compose up airflow-init
Creating network "centos_default" with the default driver
Creating volume "centos_postgres-db-volume" with default driver
Creating centos_redis_1 ... done
Creating centos_postgres_1 ... done
Creating centos_airflow-init_1 ... done
Attaching to centos_airflow-init_1
airflow-init_1 | BACKEND=postgresql+psycopg2
airflow-init_1 | DB_HOST=postgres
airflow-init_1 | DB_PORT=5432
airflow-init_1 |
airflow-init_1 | Traceback (most recent call last):
airflow-init_1 | File "/home/airflow/.local/bin/airflow", line 5, in <module>
airflow-init_1 | from airflow.__main__ import main
airflow-init_1 | ModuleNotFoundError: No module named 'airflow'
airflow-init_1 | Traceback (most recent call last):
airflow-init_1 | File "/home/airflow/.local/bin/airflow", line 5, in <module>
airflow-init_1 | from airflow.__main__ import main
airflow-init_1 | ModuleNotFoundError: No module named 'airflow'
airflow-init_1 | Traceback (most recent call last):
airflow-init_1 | File "/home/airflow/.local/bin/airflow", line 5, in <module>
airflow-init_1 | from airflow.__main__ import main
airflow-init_1 | ModuleNotFoundError: No module named 'airflow'
airflow-init_1 | Traceback (most recent call last):
airflow-init_1 | File "/home/airflow/.local/bin/airflow", line 5, in <module>
airflow-init_1 | from airflow.__main__ import main
airflow-init_1 | ModuleNotFoundError: No module named 'airflow'
airflow-init_1 | Traceback (most recent call last):
airflow-init_1 | File "/home/airflow/.local/bin/airflow", line 5, in <module>
airflow-init_1 | from airflow.__main__ import main
airflow-init_1 | ModuleNotFoundError: No module named 'airflow'
centos_airflow-init_1 exited with code 1
I used the standard YAML file from here: https://airflow.apache.org/docs/apache-airflow/2.0.1/docker-compose.yaml
I found that it's a known issue here: https://github.com/apache/airflow/issues/14520
but I could not understand how to solve this problem.
Any advice?
I solved this problem this way:
Log in as a non-root user.
Find your user id:
echo $UID
Create a .env file and put these lines inside it (replace 4003 with your user id):
AIRFLOW_UID=4003
AIRFLOW_GID=0
If you have not created these directories yet, first create them and then run docker-compose:
sudo mkdir -p ./dags ./logs ./plugins
sudo chmod -R 777 logs
sudo docker-compose up airflow-init
sudo docker-compose up
I found the problem.
There is a bug in version 2.0.1 that doesn't let you run the Airflow containers as root.
You have to run the installation under another user name (with sudo).
This can happen if AIRFLOW_GID is not set properly in the .env file.
The instructions include running the command echo -e "AIRFLOW_UID=$(id -u)\nAIRFLOW_GID=0" > .env.
To check this worked as expected, look at the contents of the .env file by running cat .env.
You should see something that looks like this:
AIRFLOW_UID=1000
AIRFLOW_GID=0
If you do not, you might need to manually edit the .env file to set the Airflow UID and GID.

Docker-compose failing on startup

I am running
docker-compose version 1.25.4, build 8d51620a
on
OS X Catalina, v10.15.4 (19E266)
I am using the system Python.
When I run docker-compose, it crashes with the following error:
Traceback (most recent call last):
File "docker-compose", line 6, in <module>
File "compose/cli/main.py", line 72, in main
File "compose/cli/main.py", line 128, in perform_command
File "compose/cli/main.py", line 1077, in up
File "compose/cli/main.py", line 1073, in up
File "compose/project.py", line 548, in up
File "compose/service.py", line 355, in ensure_image_exists
File "compose/service.py", line 381, in image
File "site-packages/docker/utils/decorators.py", line 17, in wrapped
docker.errors.NullResource: Resource ID was not provided
[9018] Failed to execute script docker-compose
I have tried a fresh clone of the repo and a fresh install of Docker; neither works. What could be causing this?
It turned out that I had uninitialized environment variables that were causing the crash.
The particular cause was env vars setting the image names in the docker-compose file, which made docker-compose try to pull a blank image.
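For illustration, a hypothetical compose snippet that triggers this (APP_IMAGE is an assumed variable name, not from the original post): if APP_IMAGE is unset, image: resolves to an empty string and docker-compose fails with docker.errors.NullResource.
version: "3"
services:
  app:
    # resolves to image: "" when APP_IMAGE is not exported in the shell or set in .env
    image: ${APP_IMAGE}
Running docker-compose config prints the file with all variables resolved, which makes blank values easy to spot.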
It can be uninitialized environment variables, but in my case it was some other command before docker-compose build that was failing:
I was pulling images from the registry, and it could not find them.
I've seen this error when passing docker-compose files explicitly and omitting one, e.g.:
docker-compose -f docker-compose.yml up # fails
docker-compose -f docker-compose.yml -f docker-compose.override.yml up # works
I faced the same issue.
In my case, the cause was different.
I had 2 docker-compose files:
docker-compose.yml
version: "3"
networks:
web: {}
docker-compose.development.yml
version: "3"
services:
web:
image: ""
build:
context: .
dockerfile: Dockerfile
environment:
API_URL: http://example.com/api
ports:
- "11000:22000"
networks:
- web
restart: on-failure
The problem was the image property in the docker-compose.development.yml file.
When I removed it and ran the command below, it was successful:
docker-compose --project-name my-web -f docker-compose.yml -f docker-compose.development.yml up --detach
This is the new docker-compose.development.yml file:
version: "3"
services:
web:
build:
context: .
dockerfile: Dockerfile
environment:
API_URL: http://example.com/api
ports:
- "11000:22000"
networks:
- web
restart: on-failure

How to properly run gsutil from crontab?

This is my entry in /etc/crontab, CentOS 6.6:
0 0 */1 * * fredrik /home/fredrik/google-cloud-sdk/bin/gsutil -d -m rsync -r -C [src] [dst] &> [log]
And I'm getting this error: OSError: [Errno 13] Permission denied: '/.config'
The command runs fine if executed in the shell. I've noticed I cannot run 0 0 */1 * * fredrik gsutil ... without the full path to gsutil, so I'm assuming I'm missing something in the environment in which cron is running...?
Here's the full traceback:
Traceback (most recent call last):
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/gsutil.py", line 68, in <module>
bootstrapping.PrerunChecks(can_be_gce=True)
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/bootstrapping.py", line 279, in PrerunChecks
CheckCredOrExit(can_be_gce=can_be_gce)
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/bootstrapping.py", line 167, in CheckCredOrExit
cred = c_store.Load()
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/credentials/store.py", line 195, in Load
account = properties.VALUES.core.account.Get()
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/properties.py", line 393, in Get
return _GetProperty(self, _PropertiesFile.Load(), required)
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/properties.py", line 618, in _GetProperty
value = callback()
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/properties.py", line 286, in <lambda>
'account', callbacks=[lambda: c_gce.Metadata().DefaultAccount()])
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/credentials/gce.py", line 179, in Metadata
_metadata_lock.lock(function=_CreateMetadata, argument=None)
File "/usr/lib64/python2.6/mutex.py", line 44, in lock
function(argument)
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/credentials/gce.py", line 178, in _CreateMetadata
_metadata = _GCEMetadata()
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/credentials/gce.py", line 73, in __init__
_CacheIsOnGCE(self.connected)
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/credentials/gce.py", line 186, in _CacheIsOnGCE
config.Paths().GCECachePath()) as gcecache_file:
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/util/files.py", line 465, in OpenForWritingPrivate
MakeDir(full_parent_dir_path, mode=0700)
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/util/files.py", line 44, in MakeDir
os.makedirs(path, mode=mode)
File "/usr/lib64/python2.6/os.py", line 150, in makedirs
makedirs(head, mode)
File "/usr/lib64/python2.6/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/.config'
Thanks to Mike and jterrace for helping me get this working. In the end, I had to set these environment variables: PATH, HOME, and BOTO_CONFIG (on top of any default ones).
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/home/fredrik/google-cloud-sdk/bin
HOME=/home/fredrik
BOTO_CONFIG="/home/fredrik/.config/gcloud/legacy_credentials/[your-email-address]/.boto"
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
0 0 */1 * * fredrik gsutil -d -m rsync -r -C /local-folder/ gs://my-bucket/my-folder/ > /logs/gsutil.log 2>&1
The > gsutil.log 2>&1 redirects both stdout and stderr to the same file. Note that it will overwrite the log file the next time gsutil runs; to make it append to the log file instead, use >> gsutil.log 2>&1. This should be safe on both Linux and OS X.
I'm noticing that the debug flag -d creates enormous log files on large data volumes, so I might opt out of that flag, personally.
You're probably getting a different boto config file when running from cron. Please try running the following both ways (as root, and then via cron), and see if you get different config file lists for the two cases:
gsutil -D ls 2>&1 | grep config_file_list
The reason this happens is that cron unsets most environment variables before running jobs, so you need to manually set the BOTO_CONFIG environment variable in your cron script before running gsutil, i.e.,:
BOTO_CONFIG="/root/.boto"
gsutil rsync ...
I believe you're getting this error because the HOME environment variable is not set when running under cron. Try setting HOME=/home/fredrik.
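A minimal sketch of the crontab entry with HOME set inline, using the paths from the question ([src], [dst], and [log] left as placeholders):
0 0 */1 * * fredrik HOME=/home/fredrik /home/fredrik/google-cloud-sdk/bin/gsutil -m rsync -r -C [src] [dst] >> [log] 2>&1
Alternatively, declare HOME=/home/fredrik on its own line at the top of /etc/crontab so it applies to every job.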
Because cron runs in a very limited environment, you need to source your .bash_profile to get your environment config.
* * * * * source ~/.bash_profile && your_cmd_here
For anyone trying to manage images with gsutil from PHP running under Apache:
I made a new directory called apache-shared and chgrp/chown'd it to www-data (or whichever user your Apache runs as; run "top" to check), copied the .boto file into the directory, and ran the following without issue:
shell_exec('export BOTO_CONFIG=/apache-shared/.boto && export PATH=/sbin:/bin:/usr/sbin:/usr/bin:/home/user/google-cloud-sdk/bin && gsutil command image gs://bucket');

Ansible error due to GMP package version on CentOS 6

I have a Dockerfile that builds an image based on CentOS (tag: centos6):
FROM centos
RUN rpm -iUvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
RUN yum update -y
RUN yum install ansible -y
ADD ./ansible /home/root/ansible
RUN cd /home/root/ansible;ansible-playbook -v -i hosts site.yml
Everything works fine until Docker hits the last line, then I get the following errors:
[WARNING]: The version of gmp you have installed has a known issue regarding
timing vulnerabilities when used with pycrypto. If possible, you should update
it (ie. yum update gmp).
PLAY [all] ********************************************************************
GATHERING FACTS ***************************************************************
Traceback (most recent call last):
File "/usr/bin/ansible-playbook", line 317, in <module>
sys.exit(main(sys.argv[1:]))
File "/usr/bin/ansible-playbook", line 257, in main
pb.run()
File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 319, in run
if not self._run_play(play):
File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 620, in _run_play
self._do_setup_step(play)
File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 565, in _do_setup_step
accelerate_port=play.accelerate_port,
File "/usr/lib/python2.6/site-packages/ansible/runner/__init__.py", line 204, in __init__
cmd = subprocess.Popen(['ssh','-o','ControlPersist'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
File "/usr/lib64/python2.6/subprocess.py", line 642, in __init__
errread, errwrite)
File "/usr/lib64/python2.6/subprocess.py", line 1234, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Stderr from the command:
package epel-release-6-8.noarch is already installed
I imagine that the cause of the error is the gmp package not being up to date.
There is a related issue on GitHub: https://github.com/ansible/ansible/issues/6941
But there doesn't seem to be a solution at the moment ...
Any ideas ?
Thanks in advance !
My site.yml playbook:
- hosts: all
  pre_tasks:
    - shell: echo 'hello'
Make sure that the files site.yml and hosts are present in the directory you're adding to /home/root/ansible.
Side note, you can simplify your Dockerfile by using WORKDIR:
WORKDIR /home/root/ansible
RUN ansible-playbook -v -i hosts site.yml
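Putting it together, a sketch of the simplified Dockerfile (based on the question's Dockerfile; this tidies the build structure but is not a verified fix for the GMP timing warning itself):
FROM centos
RUN rpm -iUvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# combine update and install into one layer
RUN yum update -y && yum install -y ansible
ADD ./ansible /home/root/ansible
WORKDIR /home/root/ansible
RUN ansible-playbook -v -i hosts site.yml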