How to properly run gsutil from crontab?

This is my entry in /etc/crontab, CentOS 6.6:
0 0 */1 * * fredrik /home/fredrik/google-cloud-sdk/bin/gsutil -d -m rsync -r -C [src] [dst] &> [log]
And I'm getting this error: OSError: [Errno 13] Permission denied: '/.config'
The command runs fine if executed in the shell. I've noticed I cannot run 0 0 */1 * * fredrik gsutil ... without the full path to gsutil, so I'm assuming I'm missing something in the environment in which cron is running...?
Here's the full traceback:
Traceback (most recent call last):
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/gsutil.py", line 68, in <module>
bootstrapping.PrerunChecks(can_be_gce=True)
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/bootstrapping.py", line 279, in PrerunChecks
CheckCredOrExit(can_be_gce=can_be_gce)
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/bootstrapping.py", line 167, in CheckCredOrExit
cred = c_store.Load()
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/credentials/store.py", line 195, in Load
account = properties.VALUES.core.account.Get()
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/properties.py", line 393, in Get
return _GetProperty(self, _PropertiesFile.Load(), required)
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/properties.py", line 618, in _GetProperty
value = callback()
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/properties.py", line 286, in <lambda>
'account', callbacks=[lambda: c_gce.Metadata().DefaultAccount()])
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/credentials/gce.py", line 179, in Metadata
_metadata_lock.lock(function=_CreateMetadata, argument=None)
File "/usr/lib64/python2.6/mutex.py", line 44, in lock
function(argument)
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/credentials/gce.py", line 178, in _CreateMetadata
_metadata = _GCEMetadata()
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/credentials/gce.py", line 73, in __init__
_CacheIsOnGCE(self.connected)
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/credentials/gce.py", line 186, in _CacheIsOnGCE
config.Paths().GCECachePath()) as gcecache_file:
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/util/files.py", line 465, in OpenForWritingPrivate
MakeDir(full_parent_dir_path, mode=0700)
File "/home/fredrik/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/util/files.py", line 44, in MakeDir
os.makedirs(path, mode=mode)
File "/usr/lib64/python2.6/os.py", line 150, in makedirs
makedirs(head, mode)
File "/usr/lib64/python2.6/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/.config'

Thanks to Mike and jterrace for helping me get this working. In the end, I had to revise these environment variables: PATH, HOME, BOTO_CONFIG (in addition to the default ones).
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/home/fredrik/google-cloud-sdk/bin
HOME=/home/fredrik
BOTO_CONFIG="/home/fredrik/.config/gcloud/legacy_credentials/[your-email-address]/.boto"
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
0 0 */1 * * fredrik gsutil -d -m rsync -r -C /local-folder/ gs://my-bucket/my-folder/ > /logs/gsutil.log 2>&1
The > gsutil.log 2>&1 redirects both stdout and stderr to the same file. Also, it will overwrite the log file the next time gsutil runs. In order to make it append to the log file, use >> gsutil.log 2>&1 instead. This should be safe on both Linux and OS X.
I'm noticing that the debug flag -d creates enormous log files on large data volumes, so I might leave that flag out, personally.
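If you want to see exactly which environment cron provides, a temporary debugging entry (a sketch of my own, not from the answers above) can dump it to a file that you can compare against your login shell:
# temporary debugging entry in /etc/crontab; remove it after inspecting the output
* * * * * fredrik env > /tmp/cron-env.txt 2>&1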

You're probably getting a different boto config file when running from cron. Please try running the following both ways (as root, and then via cron), and see if you get different config file lists for the two cases:
gsutil -D ls 2>&1 | grep config_file_list
The reason this happens is that cron unsets most environment variables before running jobs, so you need to manually set the BOTO_CONFIG environment variable in your cron script before running gsutil, e.g.:
export BOTO_CONFIG="/root/.boto"
gsutil rsync ...

I believe you're getting this error because the HOME environment variable is not set when running under cron. Try setting HOME=/home/fredrik.
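For example, a sketch reusing the paths from the question: setting HOME at the top of /etc/crontab makes every job below it inherit it.
HOME=/home/fredrik
0 0 */1 * * fredrik /home/fredrik/google-cloud-sdk/bin/gsutil rsync -r -C /local-folder/ gs://my-bucket/my-folder/ >> /logs/gsutil.log 2>&1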

Because cron runs in a very limited environment, you need to source your .bash_profile to get your environment configuration:
* * * * * source ~/.bash_profile && your_cmd_here
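One caveat: cron usually runs jobs with /bin/sh, where the bash builtin source may not be available. Either use the portable . command instead, or set SHELL at the top of the crontab, e.g.:
SHELL=/bin/bash
* * * * * source ~/.bash_profile && your_cmd_here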

For anyone trying to manage images with gsutil from PHP running under Apache:
I made a new directory called apache-shared and chgrp/chown'd it to www-data (or whichever user your Apache runs as; run "top" to check), copied the .boto file into the directory, and ran the following without issue:
shell_exec('export BOTO_CONFIG=/apache-shared/.boto && export PATH=/sbin:/bin:/usr/sbin:/usr/bin:/home/user/google-cloud-sdk/bin && gsutil command image gs://bucket');
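If that still fails, it's worth checking that the Apache user can actually read the copied .boto file; a quick sketch, assuming the www-data user and the /apache-shared path from above:
sudo -u www-data cat /apache-shared/.boto > /dev/null && echo readable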

Related

Run Redis Insights in Docker Compose

I'm trying to run Redis Insight in Docker Compose and I always get errors even though the only thing I'm changing from the Docker Run command is the volume. How do I fix this?
docker-compose.yml
redisinsights:
  image: redislabs/redisinsight:latest
  restart: always
  ports:
    - '8001:8001'
  volumes:
    - ./data/redisinsight:/db
logs
redisinsights_1 | Process 9 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid)
redisinsights_1 | Traceback (most recent call last):
redisinsights_1 | File "./entry.py", line 11, in <module>
redisinsights_1 | File "./startup.py", line 47, in <module>
redisinsights_1 | File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 79, in __getattr__
redisinsights_1 | self._setup(name)
redisinsights_1 | File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 66, in _setup
redisinsights_1 | self._wrapped = Settings(settings_module)
redisinsights_1 | File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 157, in __init__
redisinsights_1 | mod = importlib.import_module(self.SETTINGS_MODULE)
redisinsights_1 | File "/usr/local/lib/python3.6/importlib/__init__.py", line 126, in import_module
redisinsights_1 | return _bootstrap._gcd_import(name[level:], package, level)
redisinsights_1 | File "./redisinsight/settings/__init__.py", line 365, in <module>
redisinsights_1 | File "/usr/local/lib/python3.6/os.py", line 220, in makedirs
redisinsights_1 | mkdir(name, mode)
redisinsights_1 | PermissionError: [Errno 13] Permission denied: '/db/rsnaps'
Follow the steps below to make it work:
Step 1. Create a Docker Compose file as shown below:
version: '3'
services:
  redis:
    image: redislabs/redismod
    ports:
      - 6379:6379
  redisinsight:
    image: redislabs/redisinsight:latest
    ports:
      - '8001:8001'
    volumes:
      - ./Users/ajeetraina/data/redisinsight:/db
Step 2. Provide sufficient permissions
Go to Preferences under Docker Desktop > File Sharing and add the folder structure you want to share.
Please change the directory structure to match your environment.
Step 3. Execute the Docker compose CLI
docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------------
pinegraph_redis_1 redis-server --loadmodule ... Up 0.0.0.0:6379->6379/tcp
pinegraph_redisinsight_1 bash ./docker-entry.sh pyt ... Up 0.0.0.0:8001->8001/tcp
Go to your web browser and open the RedisInsight URL.
Enjoy!
I was having the same problem on a Linux machine. I was able to solve it through the Installing RedisInsight on Docker page of the Redis documentation:
Note: Make sure the directory you pass as a volume to the container has the necessary permissions for the container to access it. For example, if the previous command returns a permissions error, run the following command:
$ chown -R 1001 redisinsight
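Putting that together with the compose file from the question, a sketch (the ./data/redisinsight host path is assumed from the question's volume mapping; per the note above, RedisInsight runs as UID 1001 inside the container):
mkdir -p ./data/redisinsight
chown -R 1001 ./data/redisinsight
docker-compose up -d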

bitbake fails at the simplest recipe

Just installed Yocto. On a morty branch. Executed the following commands:
cd poky
source oe-init-build-env build-qemuarm
In conf/local.conf changed the name of the machine to MACHINE ?= "qemuarm"
Then executed the following:
$ bitbake core-image-minimal
Loading cache: 100% |##########################################################################################################| Time: 0:00:00
Loaded 1320 entries from dependency cache.
ERROR: Execution of event handler 'sstate_eventhandler2' failed
Traceback (most recent call last):
File "/home/some-user/projects/melp/poky/meta/classes/sstate.bbclass", line 1015, in sstate_eventhandler2(e=<bb.event.ReachableStamps object at 0x7fbc17f2e0f0>):
for l in lines:
> (stamp, manifest, workdir) = l.split()
if stamp not in stamps:
ValueError: not enough values to unpack (expected 3, got 1)
ERROR: Command execution failed: Traceback (most recent call last):
File "/home/some-user/projects/melp/poky/bitbake/lib/bb/command.py", line 101, in runAsyncCommand
self.cooker.updateCache()
File "/home/some-user/projects/melp/poky/bitbake/lib/bb/cooker.py", line 1658, in updateCache
bb.event.fire(event, self.databuilder.mcdata[mc])
File "/home/some-user/projects/melp/poky/bitbake/lib/bb/event.py", line 201, in fire
fire_class_handlers(event, d)
File "/home/some-user/projects/melp/poky/bitbake/lib/bb/event.py", line 124, in fire_class_handlers
execute_handler(name, handler, event, d)
File "/home/some-user/projects/melp/poky/bitbake/lib/bb/event.py", line 96, in execute_handler
ret = handler(event)
File "/home/some-user/projects/melp/poky/meta/classes/sstate.bbclass", line 1015, in sstate_eventhandler2
(stamp, manifest, workdir) = l.split()
ValueError: not enough values to unpack (expected 3, got 1)
It looks like a Python error. Does anyone know what the issue is? Am I using the wrong version?
Here is the output of python --version
$ python --version
Python 2.7.12
What am I doing wrong?
You realise that Morty is 18 months old and in a few weeks will no longer be supported, right?
Anyway, it looks like the sstate-cache/ is somehow corrupted. Delete your tmp/ and sstate-cache/ directories and try again.
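A sketch of that cleanup, assuming you run it from the build directory (build-qemuarm here):
cd build-qemuarm
rm -rf tmp/ sstate-cache/
bitbake core-image-minimal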

Snakemake and cloud formation cluster error with local scratch space

I am having a problem using local scratch space on cfncluster and snakemake at the same time. My strategy is to write data to local scratch for each node in the cluster and then move the data to the NFS partition.
Unfortunately I am getting the following error:
snakemake 4.0.0, cfncluster
/shared/bin/bin/snakemake --rerun-incomplete -s /shared/scripts/sra_to_fa_cluster.py -j 1 -p --latency-wait 20 -k -c " qsub -cwd -V" -F
/shared/dbGAP/sra_toolkit/sratoolkit.2.8.2-1-ubuntu64/bin/fastq-dump --split-files --gzip --outdir /scratch/ /shared/dbGAP/sras2/test/SRR2135300.sra
Waiting at most 20 seconds for missing files.
Exception in thread Thread-1:
Traceback (most recent call last):
File "/shared/bin/lib/python3.6/site-packages/snakemake/dag.py", line 319, in check_and_touch_output
wait_for_files(expanded_output, latency_wait=wait)
File "/shared/bin/lib/python3.6/site-packages/snakemake/io.py", line 395, in wait_for_files
latency_wait, "\n".join(get_missing())))
OSError: Missing files after 20 seconds:
/scratch/SRR2135300_2.fastq.gz
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/shared/bin/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/shared/bin/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/shared/bin/lib/python3.6/site-packages/snakemake/executors.py", line 647, in _wait_for_jobs
active_job.callback(active_job.job)
File "/shared/bin/lib/python3.6/site-packages/snakemake/scheduler.py", line 287, in _proceed
self.get_executor(job).handle_job_success(job)
File "/shared/bin/lib/python3.6/site-packages/snakemake/executors.py", line 549, in handle_job_success
super().handle_job_success(job, upload_remote=False)
File "/shared/bin/lib/python3.6/site-packages/snakemake/executors.py", line 178, in handle_job_success
ignore_missing_output=ignore_missing_output)
File "/shared/bin/lib/python3.6/site-packages/snakemake/dag.py", line 323, in check_and_touch_output
"wait time with --latency-wait.", rule=job.rule)
snakemake.exceptions.MissingOutputException: Missing files after 20 seconds:
/scratch/SRR2135300_2.fastq.gz
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
This is similar to the error reported here:
https://bitbucket.org/snakemake/snakemake/issues/462/unhandled-missingoutputexception-in
Snakemake script is as follows:
rule all:
    input: expand("/shared/dbGAP/sras2/fastq.gz/{sample}_{end}.fastq.gz", sample=SAMPLES, end=END)

rule move:
    input: left="/scratch/{sample}_1.fastq.gz", right="/scratch/{sample}_2.fastq.gz"
    output: left="/shared/dbGAP/sras2/fastq.gz/{sample}_1.fastq.gz", right="/shared/dbGAP/sras2/fastq.gz/{sample}_2.fastq.gz"
    shell: "rsync --remove-source-files -av {input.left} {output.left}; rsync --remove-source-files -av {input.right} {output.right};"

rule get_fastq_files_from_sra_file:
    input: sras="/shared/dbGAP/sras2/test/{sample}.sra"
    output: left="/scratch/{sample}_1.fastq.gz", right="/scratch/{sample}_2.fastq.gz"
    shell: "/shared/dbGAP/sra_toolkit/sratoolkit.2.8.2-1-ubuntu64/bin/fastq-dump --split-files --gzip --outdir /scratch/ {input}"
My feeling is that snakemake cannot "see" the scratch space on the nodes, so it reports the files as missing, but I am not sure how to solve this issue.
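For what it's worth, the mitigation the error message itself suggests is simply a longer wait, e.g. --latency-wait 120 in place of 20 in the original command; but if /scratch is truly node-local rather than NFS-shared, no wait will make the files visible to the host where Snakemake checks outputs:
/shared/bin/bin/snakemake --rerun-incomplete -s /shared/scripts/sra_to_fa_cluster.py -j 1 -p --latency-wait 120 -k -c "qsub -cwd -V"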

unable to run mongo-connector

I have installed mongo-connector on the MongoDB server.
I am executing it with the command:
mongo-connector -m [remote mongo server IP]:[remote mongo server port] -t [elastic search server IP]:[elastic search server Port] -d elastic_doc_manager.py
I also tried this, since mongo is running on the same server with the default port:
mongo-connector -t [elastic search server IP]:[elastic search server Port] -d elastic_doc_manager.py
I am getting this error:
Traceback (most recent call last):
File "/usr/local/bin/mongo-connector", line 9, in <module>
load_entry_point('mongo-connector==2.3.dev0', 'console_scripts', 'mongo-connector')()
File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/util.py", line 85, in wrapped
func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/connector.py", line 1037, in main
conf.parse_args()
File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/config.py", line 118, in parse_args
option, dict((k, values.get(k)) for k in option.cli_names))
File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/connector.py", line 820, in apply_doc_managers
module = import_dm_by_name(dm['docManager'])
File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/connector.py", line 810, in import_dm_by_name
"Could not import %s." % full_name)
mongo_connector.errors.InvalidConfiguration: Could not import mongo_connector.doc_managers.elastic_doc_manager.py.
NOTE: I am using Python 2.7, mongo-connector 2.3, and Elasticsearch server 2.2.
Any suggestions?
[edit]
After applying Val's suggestion:
2016-02-29 19:56:59,519 [CRITICAL] mongo_connector.oplog_manager:549 - Exception during collection dump
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/oplog_manager.py", line 501, in do_dump
upsert_all(dm)
File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/oplog_manager.py", line 485, in upsert_all
dm.bulk_upsert(docs_to_dump(namespace), mapped_ns, long_ts)
File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/util.py", line 32, in wrapped
return f(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/doc_managers/elastic_doc_manager.py", line 190, in bulk_upsert
for ok, resp in responses:
File "/usr/local/lib/python2.7/dist-packages/elasticsearch-1.9.0-py2.7.egg/elasticsearch/helpers/__init__.py", line 160, in streaming_bulk
for result in _process_bulk_chunk(client, bulk_actions, raise_on_exception, raise_on_error, **kwargs):
File "/usr/local/lib/python2.7/dist-packages/elasticsearch-1.9.0-py2.7.egg/elasticsearch/helpers/__init__.py", line 132, in _process_bulk_chunk
raise BulkIndexError('%i document(s) failed to index.' % len(errors), errors)
BulkIndexError: (u'2 document(s) failed to index.', ..document_class=dict, tz_aware=False, connect=True, replicaset=u'mss'), u'local'), u'oplog.rs')
2016-02-29 19:56:59,835 [ERROR] mongo_connector.connector:302 - MongoConnector: OplogThread unexpectedly stopped! Shutting down
Hi Val,
I connected to another MongoDB instance, which had only one database with a single collection of 30,000+ records, and I was able to execute it successfully. The previous MongoDB instance had multiple databases (around 7), each with multiple collections (around 5 to 15 per database), all holding a good number of documents (ranging from 500 to 50,000) per collection.
Was mongo-connector failing because of the large amount of data residing in the MongoDB instance?
I have further questions:
a. Is it possible to index only specific collections residing in different databases, rather than the entire database? How can I achieve this?
b. In Elasticsearch I can see duplicate indexes for one collection: the first with the database name (as expected), the other named mongodb_meta. Both have the same data, and if I change the collection, the update happens in both.
c. Is it possible to configure the output index name or any other parameters?
I think the only issue is that you have the .py extension on the doc manager (it was only needed before mongo-connector 2.0); you simply need to remove it:
mongo-connector -m [remote mongo server IP]:[remote mongo server port] -t [elastic search server IP]:[elastic search server Port] -d elastic_doc_manager
I found this option to index only a specific collection:
$ mongo-connector -m mongodbserver:27017 -t elasticserver:9200 -d elastic_doc_manager --oplog-ts oplogstatus.txt --namespace-set database.collection
It started working after running the command below with the --oplog-ts option:
mongo-connector -m localhost:27017 -t localhost:37017 -d mongo_doc_manager --oplog-ts oplogstatus.txt
But it fails if I use a config file. Kindly advise how to resolve this issue.
C:\Dev\mongodb\mongo-connector>mongo-connector -c myconfig.json --oplog-ts oplogstatus.txt
Fatal Exception
Traceback (most recent call last):
File "C:\Program Files\Python\lib\site-packages\mongo_connector-2.5.0.dev0-py3.6.egg\mongo_connector\config.py", line 110, in parse_args
self.load_json(f.read())
File "C:\Program Files\Python\lib\site-packages\mongo_connector-2.5.0.dev0-py3.6.egg\mongo_connector\config.py", line 132, in load_json
parsed_config = json.loads(text)
File "C:\Program Files\Python\lib\json\__init__.py", line 319, in loads
return _default_decoder.decode(s)
File "C:\Program Files\Python\lib\json\decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Program Files\Python\lib\json\decoder.py", line 355, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Invalid \escape: line 6 column 21 (char 201)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\Python\lib\site-packages\mongo_connector-2.5.0.dev0-py3.6.egg\mongo_connector\util.py", line 90, in wrapped
func(*args, **kwargs)
File "C:\Program Files\Python\lib\site-packages\mongo_connector-2.5.0.dev0-py3.6.egg\mongo_connector\connector.py", line 1059, in main
conf.parse_args()
File "C:\Program Files\Python\lib\site-packages\mongo_connector-2.5.0.dev0-py3.6.egg\mongo_connector\config.py", line 112, in parse_args
reraise(errors.InvalidConfiguration, *sys.exc_info()[1:])
File "C:\Program Files\Python\lib\site-packages\mongo_connector-2.5.0.dev0-py3.6.egg\mongo_connector\compat.py", line 9, in reraise
raise exctype(str(value)).with_traceback(trace)
File "C:\Program Files\Python\lib\site-packages\mongo_connector-2.5.0.dev0-py3.6.egg\mongo_connector\config.py", line 110, in parse_args
self.load_json(f.read())
File "C:\Program Files\Python\lib\site-packages\mongo_connector-2.5.0.dev0-py3.6.egg\mongo_connector\config.py", line 132, in load_json
parsed_config = json.loads(text)
File "C:\Program Files\Python\lib\json\__init__.py", line 319, in loads
return _default_decoder.decode(s)
File "C:\Program Files\Python\lib\json\decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Program Files\Python\lib\json\decoder.py", line 355, in raw_decode
obj, end = self.scan_once(s, idx)
mongo_connector.errors.InvalidConfiguration: Invalid \escape: line 6 column 21 (char 201)
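The Invalid \escape at line 6, column 21 points at the JSON config itself: backslashes in Windows paths must be doubled (or replaced with forward slashes) in JSON strings. A hypothetical myconfig.json fragment showing the fix (key names follow mongo-connector's config format; the values are placeholders to adapt):
{
    "mainAddress": "localhost:27017",
    "oplogFile": "C:\\Dev\\mongodb\\mongo-connector\\oplogstatus.txt"
}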
Try this:
pip install 'elastic2-doc-manager[elastic5]'
mongo-connector -m localhost:27017 -t localhost:9200 -d elastic2_doc_manager
(answer on GitHub)
Your strategy seems sound to me. Here's how to do this:
1. Generate a mongo-connector timestamp file: run mongo-connector --no-dump and stop it right after it starts up. Now you have an oplog.timestamp file pointing to the latest entry on the oplog.
2. Run mongodump on the primary. The dump already reflects all the changes that mongo-connector saw in the oplog.
3. Run mongorestore with the dump from (2) on the target MongoDB.
4. Restart mongo-connector, passing the file generated in (1) to the --oplog-ts option.
I'll add this to the wiki.
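A hedged sketch of those steps as shell commands, reusing the addresses and doc manager from the earlier command (adjust hosts and the dump path to your setup):
mongo-connector -m localhost:27017 -t localhost:37017 -d mongo_doc_manager --no-dump
# stop it (Ctrl-C) right after startup; oplog.timestamp now points at the latest oplog entry
mongodump --host localhost:27017 --out /tmp/dump
mongorestore --host localhost:37017 /tmp/dump
mongo-connector -m localhost:27017 -t localhost:37017 -d mongo_doc_manager --oplog-ts oplog.timestamp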

Ansible error due to GMP package version on Centos6

I have a Dockerfile that builds an image based on CentOS (tag: centos6):
FROM centos
RUN rpm -iUvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
RUN yum update -y
RUN yum install ansible -y
ADD ./ansible /home/root/ansible
RUN cd /home/root/ansible;ansible-playbook -v -i hosts site.yml
Everything works fine until Docker hits the last line, then I get the following errors:
[WARNING]: The version of gmp you have installed has a known issue regarding
timing vulnerabilities when used with pycrypto. If possible, you should update
it (ie. yum update gmp).
PLAY [all] ********************************************************************
GATHERING FACTS ***************************************************************
Traceback (most recent call last):
File "/usr/bin/ansible-playbook", line 317, in <module>
sys.exit(main(sys.argv[1:]))
File "/usr/bin/ansible-playbook", line 257, in main
pb.run()
File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 319, in run
if not self._run_play(play):
File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 620, in _run_play
self._do_setup_step(play)
File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 565, in _do_setup_step
accelerate_port=play.accelerate_port,
File "/usr/lib/python2.6/site-packages/ansible/runner/__init__.py", line 204, in __init__
cmd = subprocess.Popen(['ssh','-o','ControlPersist'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
File "/usr/lib64/python2.6/subprocess.py", line 642, in __init__
errread, errwrite)
File "/usr/lib64/python2.6/subprocess.py", line 1234, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Stderr from the command:
package epel-release-6-8.noarch is already installed
I imagine that the cause of the error is the gmp package not being up to date.
There is a related issue on GitHub: https://github.com/ansible/ansible/issues/6941
But there doesn't seem to be any solution at the moment...
Any ideas?
Thanks in advance!
My site.yml playbook:
- hosts: all
  pre_tasks:
    - shell: echo 'hello'
Make sure that the files site.yml and hosts are present in the directory you're adding to /home/root/ansible.
Side note, you can simplify your Dockerfile by using WORKDIR:
WORKDIR /home/root/ansible
RUN ansible-playbook -v -i hosts site.yml
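Putting it together, the simplified Dockerfile would look something like this (same base image and packages as the original):
FROM centos
RUN rpm -iUvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
RUN yum update -y
RUN yum install ansible -y
ADD ./ansible /home/root/ansible
WORKDIR /home/root/ansible
RUN ansible-playbook -v -i hosts site.yml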