Ansible Copy Module Fails - copy

I am trying to copy over the "resolv.conf" file from one machine to another and overwrite the old one. This operation works on all but 4 of the 40+ servers... I get an error saying the file could not be replaced because the operation is not permitted. I have pasted the contents of the Playbook related to the failing operation below.
- hosts: all
  remote_user: root
  ...
    - name: Copy over the updated DNS configuration file
      copy: src=/etc/resolv.conf dest=/etc/resolv.conf
It gives me the following error message for all 4 servers.
fatal: [server-name]: FAILED! => {"changed": false, "checksum": "9925f1a81f849f373f860c3156d19edcd1c002f2", "failed": true, "msg": "Could not replace file: /root/.ansible/tmp/ansible-tmp-1469481567.72-275811900408782/source to /etc/resolv.conf: [Errno 1] Operation not permitted"}
I just don't understand what the problem could be since I am accessing the machines as the root user and the Playbook succeeds on the majority of the servers - many with the exact same configuration and settings. For example, it succeeds on the server "server-analytical1" but fails on the server "server-analytical2". So, does anyone have any insight into why the Playbook would fail for only a few servers even though they're similar to or the same as other servers that succeeded?

Is the immutable bit set on the target file? Run lsattr /etc/resolv.conf to check, and chattr -i /etc/resolv.conf to unset it if it is.
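If it is set on several machines, you can check and clear it in one pass with ad-hoc Ansible commands (a quick sketch - the host pattern failing_dns is a placeholder for whatever group or list matches your four servers):
# Show the file attributes on each host; an 'i' in the output means the immutable flag is set
ansible failing_dns -u root -m command -a "lsattr /etc/resolv.conf"
# Clear the immutable flag where it is set, then re-run the playbook
ansible failing_dns -u root -m command -a "chattr -i /etc/resolv.conf"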

Related

Unable to view the contents of file in ansible managed nodes

I'm trying to view the contents of a file on the managed nodes and the control node. The syntax works fine for localhost (172.17.254.200) but not for the remote hosts. Below is the task I have written using the lookup / query plugin - can you please suggest a fix?
---
- name: Report Test
  hosts: all
  roles:
    - patching
  tasks:
    - name: Display the Pre and Post check Differences
      debug:
        msg: "{{ query('file', '/tmp/check/{{ inventory_hostname }}_Comparison') }}"
Below is the output
TASK [patching : Display the Pre and Post check Differences] ***********************************************************************************************************
ok: [172.17.254.200] =>
msg:
- |-
free_m - YES
sysctl_all - YES
uptime - YES
[WARNING]: Unable to find '/tmp/check/172.17.254.207_Comparison' in expected paths (use -vvvvv to see paths)
fatal: [172.17.254.207]: FAILED! =>
msg: 'An unhandled exception occurred while running the lookup plugin ''file''. Error was a <class ''ansible.errors.AnsibleError''>, original message: could not locate file in lookup: /tmp/check/172.17.254.207_Comparison'
[WARNING]: Unable to find '/tmp/check/172.17.254.208_Comparison' in expected paths (use -vvvvv to see paths)
fatal: [172.17.254.208]: FAILED! =>
msg: 'An unhandled exception occurred while running the lookup plugin ''file''. Error was a <class ''ansible.errors.AnsibleError''>, original message: could not locate file in lookup: /tmp/check/172.17.254.208_Comparison'
Lookups are executed on the Ansible controller (as pointed out by @Vladimir Botka). If you just want to view the contents of a file on the remote hosts, you can cat the file through ansible and debug the stdout_lines.
- command: "cat /tmp/check/{{ inventory_hostname }}_Comparison"
  register: file_cat
  changed_when: false

- debug:
    var: file_cat.stdout_lines
lookup and query "execute and are evaluated on the Ansible control machine."
Use slurp. Quoting:
This module returns an ‘in memory’ base64 encoded version of the file, take into account that this will require at least twice the RAM as the original file size.
For larger files use fetch. Quoting:
It is used for fetching files from remote machines and storing them locally in a file tree, organized by hostname.
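For example, both can be driven from the control node with ad-hoc commands (a sketch - the inventory path and the file name are assumptions taken from the task above):
# slurp returns the remote file base64-encoded in the module result
ansible all -i inventory -m slurp -a "src=/tmp/check/{{ inventory_hostname }}_Comparison"
# fetch copies each host's file into a local tree organized by hostname
ansible all -i inventory -m fetch -a "src=/tmp/check/{{ inventory_hostname }}_Comparison dest=/tmp/comparisons/"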

OCI runtime exec failed: exec failed: container_linux.go:348 : starting container process caused "no such file or directory": unknown

I am trying to bring up my Fabric network.
I got my orderer organization started.
I got my peer organizations started.
I got my cli started.
After that, the request fails with:
OCI runtime exec failed:
exec failed: container_linux.go:348 : starting container process caused "no such file or directory": unknown
The error means that either working_dir is undefined, or it does not exist.
Check the cli section in your docker-compose file for the above setting.
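A quick way to see what the container actually ended up with (a sketch - it assumes the cli container is literally named cli, as in the standard Fabric samples):
# Print the fully resolved compose configuration, including working_dir for the cli service
docker-compose config
# Or inspect the running container's working directory directly
docker inspect --format '{{ .Config.WorkingDir }}' cli
If the printed directory does not exist inside the container image, the exec will fail with exactly this error.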
If you are working on Windows, a possible cause is the file encoding (it should be in Unix format).
You could open this page:
https://hyperledger-fabric.readthedocs.io/en/latest/build_network.html
and search for "No such file or directory" - there is some related troubleshooting there. In short:
Ensure that the file in question is encoded in the Unix format. This is most likely caused by not setting core.autocrlf to false in your Git configuration. There are several ways of fixing it. If you have access to the vim editor, for instance, open the file:
vim ./path/to/the/related-file
Then change its format by executing the following vim command:
:set ff=unix
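If you prefer to check and fix the line endings from the shell instead of vim (a sketch - the path is the same placeholder as above):
# 'with CRLF line terminators' in the output means the file is in Windows format
file ./path/to/the/related-file
# Convert it in place (dos2unix may need to be installed separately)
dos2unix ./path/to/the/related-file
# Or strip the trailing carriage returns with sed if dos2unix is not available
sed -i 's/\r$//' ./path/to/the/related-file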

Unable to run Mongo shell (Mac)

I'm new to web development and I wanted to get started with some RoR (using Locomotive CMS).
One of the things Locomotive asks for is to have MongoDB. I installed it using Homebrew by following this link: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-os-x/
It installs fine, but then I'm not able to run it!
When I type 'mongo' in the terminal I get the following output:
"MongoDB shell version: 2.4.3
connecting to: test
Mon May 6 11:12:28.927
JavaScript execution failed:
Error: couldn't connect to server
127.0.0.1:27017 at src/mongo/shell/mongo.js:L112
exception: connect failed"
BACKGROUND TO HELP DEBUGGING (in Terminal):
1. When I type in mongod I get the following:
"all output going to: /usr/local/var/log/mongodb/mongo.log"
Ownership of mongo.log :
-rw-r--r-- 1 username admin 22133 May 6 11:13 mongo.log
2. When I run mongod --fork I get the following:
about to fork child process, waiting until server is ready for connections.
forked process: 77566
all output going to: /usr/local/var/log/mongodb/mongo.log
ERROR: child process failed, exited with error number 100
3. Typing mongod --help gives the following warning:
* WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
4. I have a folder called data (which acts as a MongoDB database - is this where it should be?) in root (PATH: /data). Ownership of the data folder:
"drwxr-xr-x 3 username wheel 102 Apr 23 21:38 data"
5. Checking if ports are free: lsof -i :27017. I've also tried to check for a running mongo process using Activity Monitor and found zilch!
No output
6. I've also tried mongo --repair. Didn't help!
I've been stuck on this for a while; I've looked at most responses on Stack Overflow and searched around to find a solution, but nothing has helped so far!
UPDATE:
When I tried to start the mongo shell, I was getting the following log message from mongo.log:
5/6/13 1:33:27.616 PM com.apple.launchd:
(org.mongodb.mongod[79133])
open("/private/var/log/mongodb/output.log", ...): Permission denied
So I did a chmod 777 on that particular folder and the shell launches!
Although I still get a warning when it launches:
Server has startup warnings:
Mon May 6 13:33:27.693 [initandlisten]
Mon May 6 13:33:27.693 [initandlisten]
** WARNING: soft rlimits too low.
Number of files is 256, should be at least 1000
Any idea how I can silence these warnings?
To get the information you need to determine the cause of the failure, look in (and post for us) the output from /usr/local/var/log/mongodb/mongo.log from when it is trying to start.
However, the most common reason for the failure is the lack of the default database path - at /data/db. Either create that folder (and don't forget to make sure your user has permission to read/write to it) or specify a different path with the --dbpath option.
UPDATE: as you have since found, bad permissions on the log file can cause the issue, in a similar way to bad permissions on the data path.
In terms of the warning, the information you need is here:
https://superuser.com/questions/433746/is-there-a-fix-for-the-too-many-open-files-in-system-error-on-os-x-10-7-1
It is just that though, a warning - you can run MongoDB without an issue with those limits as long as it is not under heavy load. So, if this is a development environment, unless you plan on load testing, you should be fine.
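If the failure is the missing default data directory rather than the log file, the fix from the shell is short (a sketch - it assumes you want the default /data/db path and that your login user should own it):
# Create the default data directory and give your user ownership of it
sudo mkdir -p /data/db
sudo chown $(whoami) /data/db
# Optionally raise the soft open-files limit for this shell session to quiet the rlimit warning
ulimit -n 1024
# Start the server against that path (or just run mongod), then run mongo in another terminal
mongod --dbpath /data/db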

Chef Deployment with irrelevant default symlinks

I am trying to deploy my application code with Chef, which is working for one node and failing on another. I cannot determine why it works for one node and not another when they have the exact same config, but I can at least try to debug the problem on the node that fails.
deploy_revision app_config['deploy_dir'] do
  scm_provider Chef::Provider::Git
  repo app_config['deploy_repo']
  revision app_config['deploy_branch']
  if secrets["deploy_key"]
    git_ssh_wrapper "#{app_config['deploy_dir']}/git-ssh-wrapper" # For private Git repos
  end
  enable_submodules true
  shallow_clone false
  symlink_before_migrate({})     # Symlinks to add before running db migrations
  purge_before_symlink []        # Directories to delete before adding symlinks
  create_dirs_before_symlink []  # Directories to create before adding symlinks
  # symlinks()
  action :deploy
  restart_command do
    service "apache2" do action :restart; end
  end
end
This is my recipe for deploying the code. Notice that I have tried disabling symlinking entirely, as Chef always jams its own defaults in. Even with this I get the error:
================================================================================
Error executing action `deploy` on resource 'deploy_revision[/var/www]'
================================================================================
Chef::Exceptions::FileNotFound
------------------------------
Cannot symlink /var/www/shared/config/database.yml to /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml before migrate: No such file or directory - /var/www/shared/config/database.yml or /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml
Resource Declaration:
---------------------
# In /var/chef/cache/cookbooks/kapture/recipes/api.rb
68:
69: deploy_revision app_config['deploy_dir'] do
70: scm_provider Chef::Provider::Git
71: repo app_config['deploy_repo']
72: revision app_config['deploy_branch']
73: if secrets["deploy_key"]
74: git_ssh_wrapper "#{app_config['deploy_dir']}/git-ssh-wrapper" # For private Git repos
75: end
76: enable_submodules true
Compiled Resource:
------------------
# Declared in /var/chef/cache/cookbooks/kapture/recipes/api.rb:69:in `from_file'
deploy_revision("/var/www") do
destination "/var/www/shared/cached-copy"
symlink_before_migrate {"config/database.yml"=>"config/database.yml"}
updated_by_last_action true
restart_command #<Proc:0x00007f40f366e5a0#/var/chef/cache/cookbooks/kapture/recipes/api.rb:82>
repository_cache "cached-copy"
retries 0
keep_releases 5
create_dirs_before_symlink ["tmp", "public", "config"]
updated true
provider Chef::Provider::Deploy::Revision
enable_submodules true
deploy_to "/var/www"
current_path "/var/www/current"
recipe_name "api"
revision "HEAD"
scm_provider Chef::Provider::Git
purge_before_symlink ["log", "tmp/pids", "public/system"]
git_ssh_wrapper "/var/www/git-ssh-wrapper"
remote "origin"
shared_path "/var/www/shared"
cookbook_name "kapture"
symlinks {"log"=>"log", "system"=>"public/system", "pids"=>"tmp/pids"}
action [:deploy]
repo "git#github.com:kapture/api.git"
retry_delay 2
end
[2012-09-24T15:42:07+00:00] ERROR: Running exception handlers
[2012-09-24T15:42:07+00:00] FATAL: Saving node information to /var/chef/cache/failed-run-data.json
[2012-09-24T15:42:07+00:00] ERROR: Exception handlers complete
[2012-09-24T15:42:07+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2012-09-24T15:42:07+00:00] FATAL: Chef::Exceptions::FileNotFound: deploy_revision[/var/www] (kapture::api line 69) had an error: Chef::Exceptions::FileNotFound: Cannot symlink /var/www/shared/config/database.yml to /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml before migrate: No such file or directory - /var/www/shared/config/database.yml or /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml
Here you can see it mentions database.yml, tmp/, system/ and pids folders, all of which are defaults that Chef likes to set (see related bug).
Question 1
What are these symlinks for, and how do I know if I even need any? What sort of things am I symlinking? I will be using migrations, so if they are useful for the migration then I'll need them working.
I have read the documentation many times and it just doesn't explain this in plain English - at least not that I have found.
Question 2
If I do not require them, how can I disable symlinking entirely? Following the examples in that bug report has not helped.
Clear out all the symlink attributes.
deploy_revision("/var/www") do
# ...
symlink_before_migrate.clear
create_dirs_before_symlink.clear
purge_before_symlink.clear
symlinks.clear
end
Make sure the deployment directory in shared has the proper directory structure (/var/www/shared/[log,pids,system,config]) and that all config files necessary for your application are in the config directory.
Your recipe for your application's cookbook should have an array of directory names to create (recursively) so that you won't run into this error again.
The symlinks are there so that, while your application code continues to evolve, you can share the log, pids, and system folders by symlinking shared/log to current/log, and so on.
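For reference, the shared tree those default symlinks expect can be created by hand on the failing node (a sketch based on the paths in the error output; the source location of database.yml is a placeholder):
# Create the shared directories the deploy resource symlinks into each release
sudo mkdir -p /var/www/shared/config /var/www/shared/log /var/www/shared/pids /var/www/shared/system
# Put the real database.yml where the before-migrate symlink expects to find it
sudo cp /path/to/your/database.yml /var/www/shared/config/database.yml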
Chef happens to cache the directory structure - somehow, I haven't dug into that - with this troll application cookbook. It's something in the deploy resource I believe - I never use that - but you can fix it by deleting the directory structure it creates in /var/derp or whatever. Also ensure your tmp directory is set up.
A couple of reasons this may be an issue:
File permissions are incorrect
The currently running chef user cannot access the file
You are using the application cookbook in a configuration for rails deploys and your application does not have the same directory structure.
It's definitely caused by Chef caching the state of the deploy somewhere, then reading that state out of its cache - wherever that is - and then reusing it. I would look at the application cookbook for any persistence, and if you fail to find it there, look in the deploy resource from Chef itself.

Error while configuring Google App Engine on Pydev in Eclipse

I am trying to configure Google App Engine on Eclipse and use it to run a Python application locally (on localhost).
For this I used the following tutorial as a guide:
http://www.mkyong.com/google-app-engine/google-app-engine-python-hello-world-example-using-eclipse/
I followed the steps properly, but when I try to use the configuration I get errors. The console output is:
Console Output:
C:\Program Files (x86)\Google\google_appengine\google\appengine\api\search\search.py:232: UserWarning: DocumentOperationResult._code is deprecated. Use OperationResult._code instead.
'Use OperationResult.%s instead.' % (name, name))
C:\Program Files (x86)\Google\google_appengine\google\appengine\api\search\search.py:232: UserWarning: DocumentOperationResult._CODES is deprecated. Use OperationResult._CODES instead.
'Use OperationResult.%s instead.' % (name, name))
WARNING 2012-06-20 14:53:01,451 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded.
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\dev_appserver.py", line 126, in
run_file(file, globals())
File "C:\Program Files (x86)\Google\google_appengine\dev_appserver.py", line 122, in run_file
execfile(script_path, globals_)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 694, in
sys.exit(main(sys.argv))
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 582, in main
root_path, {}, default_partition=default_partition)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3142, in LoadAppConfig
raise AppConfigNotFoundError
google.appengine.tools.dev_appserver.AppConfigNotFoundError
The configuration I am using is:
Windows 7 64-bit
Python 2.7
Eclipse Helios
What could be the possible mistake in configuring GAE?
Additional info: when I try to use the project with GAE manually (i.e. by using the launcher), it works.
Update:
I experimented and discovered that I get these errors because the workspace and the Python installation folder are not in the same partition.
I got the hint from here:
File
"C:\Program Files(x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py"
line 582, in main
root_path, {}, default_partition=default_partition)
but when I made another workspace in the same partition, I got this as console output and localhost is still not working.
output
C:\Program Files (x86)\Google\google_appengine\google\appengine\api\search\search.py:232: UserWarning: DocumentOperationResult._code is deprecated. Use OperationResult._code instead.
'Use OperationResult.%s instead.' % (name, name))
C:\Program Files (x86)\Google\google_appengine\google\appengine\api\search\search.py:232: UserWarning: DocumentOperationResult._CODES is deprecated. Use OperationResult._CODES instead.
'Use OperationResult.%s instead.' % (name, name))
WARNING 2012-06-20 17:20:56,719 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded.
Runs a development application server for an application.
dev_appserver.py [options]
Application root must be the path to the application to run in this server.
Must contain a valid app.yaml or app.yml file.
Options:
--address=ADDRESS, -a ADDRESS Address to which this server should bind(Defaultlocalhost).
--clear_datastore, -c Clear the Datastore on startup. (Default false)
--debug, -d Use debug logging. (Default false)
--help, -h View this helpful message.
--port=PORT, -p PORT Port for the server to run on. (Default 8080)
--allow_skipped_files Allow access to files matched by app.yaml's
skipped_files (default False)
--auth_domain Authorization domain that this app runs in.
(Default gmail.com)
--backends Run the dev_appserver with backends support
(multiprocess mode).
--blobstore_path=DIR Path to directory to use for storing Blobstore
file stub data.
--clear_prospective_search Clear the Prospective Search subscription index
(Default false).
--datastore_path=DS_FILE Path to file to use for storing Datastore file
stub data.
(Defaultc:\users\anukoo~1\appdata\local\temp\dev_appserver.datastore)
--debug_imports Enables debug logging for module imports, showing
search paths used for finding modules and any
errors encountered during the import process.
--default_partition Default partition to use in the APPLICATION_ID.
(Default dev)
--disable_static_caching Never allow the browser to cache static files.
(Default enable if expiration set in app.yaml)
--disable_task_running When supplied, tasks will not be automatically
run after submission and must be run manually
in the local admin console.
--enable_sendmail Enable sendmail when SMTP not configured.
(Default false)
--high_replication Use the high replication datastore consistency
model. (Default false).
--history_path=PATH Path to use for storing Datastore history.
(Default c:\users\anukoo~1\appdata\local\temp\dev_appserver.datastore.history)
--multiprocess_min_port When running in multiprocess mode, specifies the
lowest port value to use when choosing ports. If
set to 0, select random ports.
(Default 9000)
--mysql_host=HOSTNAME MySQL database host.
Used by the Cloud SQL (rdbms) stub.
(Default 'localhost')
--mysql_port=PORT MySQL port to connect to.
Used by the Cloud SQL (rdbms) stub.
(Default 3306)
--mysql_user=USER MySQL user to connect as.
Used by the Cloud SQL (rdbms) stub.
(Default )
--mysql_password=PASSWORD MySQL password to use.
Used by the Cloud SQL (rdbms) stub.
(Default '')
--mysql_socket=PATH MySQL Unix socket file path.
Used by the Cloud SQL (rdbms) stub.
(Default '')
--persist_logs Enables storage of all request and application
logs to enable later access. (Default false).
--require_indexes Disallows queries that require composite indexes
not defined in index.yaml.
--show_mail_body Log the body of emails in mail stub.
(Default false)
--skip_sdk_update_check Skip checking for SDK updates. If false, fall back
to opt_in setting specified in .appcfg_nag
(Default false)
--smtp_host=HOSTNAME SMTP host to send test mail to. Leaving this
unset will disable SMTP mail sending.
(Default '')
--smtp_port=PORT SMTP port to send test mail to.
(Default 25)
--smtp_user=USER SMTP user to connect as. Stub will only attempt
to login if this field is non-empty.
(Default '').
--smtp_password=PASSWORD Password for SMTP server.
(Default '')
--task_retry_seconds How long to wait in seconds before retrying a
task after it fails during execution.
(Default '30')
--use_sqlite Use the new, SQLite based datastore stub.
(Default false)
Invalid arguments
Seems like the arguments to dev_appserver.py are incorrect - any ideas?
If you made the same mistake as I did, then there is a very high chance that you have a space in the name of your directory.
The warnings about deprecated things in search can be ignored. As can the message about the rdbms API if you don't plan on using that.
AppConfigNotFoundError happens when there is no app.yaml in the directory passed to dev_appserver.py. If you followed those instructions, then your app.yaml would be in the 'src' directory, and the 'program arguments' in the build command would be ${project_loc}/src - is this the case? When you run from the command-line, and see it work, what command are you running, and from what location?
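To compare against what Eclipse is passing, you could also start the dev server by hand against the directory that contains app.yaml (a sketch - the project path is a placeholder, and the quotes matter because of the spaces in the SDK path):
python "C:\Program Files (x86)\Google\google_appengine\dev_appserver.py" --port=8080 "C:\path\to\your\project\src"
If that works but the Eclipse run configuration does not, the difference is most likely in the program arguments Eclipse is passing.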