I'm trying to crash-course myself on Ansible, and I've run into a scenario where I need to import a file into Postgres. The Postgres module for Ansible doesn't have all the commands that the MySQL module does, so I've had to find an alternative way to run SQL commands against the DB.
I'm using the shell module. However, I don't know how to check whether the shell command was successful or not.
Here's what my playbook looks like so far:
- hosts: webservers
  tasks:
    - block:
        - debug: msg='Start sql insert play...'
        - copy: src=file.dmp dest=/tmp/file.dmp
        - debug: msg='executing sql file...'
        - shell: psql -U widgets widgets < /tmp/file.dmp
        - debug: msg='all is well'
          when: result|succeeded
      rescue:
        - debug: msg='Error'
      always:
        - debug: msg='End of Play'
# - block:
#     - name: restart secondarywebservers
#     - debug: msg='Attempting to restart secondary servers'
# - hosts: websecondaries
What I ultimately want to do is start the second block of code only when the first block has been successful. For now, just to learn how conditionals work, I'm trying to see if I can print a message to the screen when I know for sure the SQL file executed.
It fails with the following error message:
TASK [debug] *******************************************************************
fatal: [10.1.1.109]: FAILED! => {"failed": true, "msg": "The conditional check 'result|succeeded' failed. The error was: |failed expects a dictionary\n\nThe error appears to have been in '/etc/ansible/playbooks/dbupdate.yml': line 9, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n - shell: psql -U openser openser < /tmp/file.dmp\n - debug: msg='all is well'\n ^ here\n"}
I'm still doing some research on how conditionals work... so it could be that my syntax is just wrong. But the bigger question is this - Since there is no native ansible / postgresql import command, I realize that there's no way for ansible to know that the commands in "file.dmp" really created my database records...
I could add a select statement in file.dmp to try to select the record I just created... but how do i pass that back to ansible?
Just wondering if someone has some ideas on how I could accomplish something like this.
Thanks.
EDIT 1
Here's what the contents of the test "file.dmp" contains:
INSERT INTO mytable VALUES (DEFAULT, 64, 1, 1, 'test', 0, '^(.*)$', '\1');
EDIT 2
I'm going to try to do something like this:
Copy (Select * From mytable where ppid = 64) To '/tmp/test.csv' With CSV;
after the insert statements... and then have ansible check for this file (possibly using the lineinfile module) as a way to prove to myself that the insert worked.
You just need to register that result variable.
Changing your play to look like this should work:
- hosts: webservers
  tasks:
    - block:
        - debug: msg='Start sql insert play...'
        - copy: src=file.dmp dest=/tmp/file.dmp
        - debug: msg='executing sql file...'
        - shell: psql -U widgets widgets < /tmp/file.dmp
          register: result
        - debug: msg='all is well'
          when: result|succeeded
      rescue:
        - debug: msg='Error'
      always:
        - debug: msg='End of Play'
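A side note for current Ansible: since version 2.9 the filter form `result|succeeded` has been removed in favour of the test syntax `result is succeeded`. The registered variable can also verify the import itself, by querying a row back and failing the task when it is missing. A sketch, where the `mytable` / `ppid = 64` predicate comes from EDIT 2 above and everything else is an assumption:

```yaml
- shell: psql -U widgets widgets -t -c "SELECT count(*) FROM mytable WHERE ppid = 64;"
  register: rowcheck
  # psql -t prints just the count; treat a zero count as a task failure
  failed_when: rowcheck.stdout | trim == "0"

- debug: msg='all is well'
  when: rowcheck is succeeded
```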
Related
Usually when Ansible kicks off a PowerShell script with win_command/win_shell/win_psexec, as long as it doesn't run into errors it'll return changed, because of course it doesn't know what all the script did.
Since we can return any exit code in a PowerShell script is there a way, via exit codes or otherwise, to notify Ansible that there was no change required so that Ansible returns an OK status?
Or will it always return changed no matter what (assuming no failure)?
it'll return changed, because of course it doesn't know what all the script did
Right, and because of that, this is the defined default behavior for this type of module.
Regarding your question
Since we can return any exit code in a PowerShell script is there a way, via exit codes or otherwise, to notify Ansible that there was no change required so that Ansible returns an OK status?
this is already the correct idea. In other words, yes, such kind of behavior is possible.
In addition to the already given comment about Defining changed, one will also need Defining failure, because a non-zero return code would otherwise result in a failure.
The minimal example playbook
---
- hosts: localhost
  become: false
  gather_facts: false

  tasks:

    - name: Exec sh command
      shell:
        cmd: "echo ''; exit 254;"
      register: result
      failed_when: result.rc != 0 and result.rc != 254
      changed_when: result.rc != 254

    - name: Show result
      debug:
        msg: "{{ result }}"
will result in the output of
TASK [Exec sh command] *****************
ok: [localhost]

TASK [Show result] *********************
ok: [localhost] =>
  msg:
    changed: false
    cmd: echo ''; exit 254;
    delta: '0:00:00.012397'
    end: '2022-12-24 18:00:00.012397'
    failed: false
    failed_when_result: false
    msg: non-zero return code
    rc: 254
    start: '2022-12-24 18:00:00.000000'
    stderr: ''
    stderr_lines: []
    stdout: ''
    stdout_lines: []
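Translated to the Windows case from the question, the same two directives work with win_shell. A sketch, in which the script path and the convention that exit code 100 means "nothing to change" are assumptions:

```yaml
- name: Run configuration script
  win_shell: C:\scripts\configure.ps1
  register: result
  # assumed convention: the script exits 100 when no change was required
  changed_when: result.rc != 100
  failed_when: result.rc != 0 and result.rc != 100
```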
Similar Q&A
shell command and set task changed
win_powershell module accesses the $Ansible PowerShell variable
I supply the below cloud-init script through the Azure portal when creating a VM, and the script never runs. I'd appreciate it if anyone can suggest what's wrong with my #cloud-config upload.
Observations:
- ubuntuVMexscript.sh is written
- test.sh is NOT written in the home directory
- /etc/cloud/cloud.cfg doesn't show the change of [scripts-user, always] in final modules
#cloud-config
package_upgrade: true

write_files:
  - owner: afshan:afshan
    path: /var/lib/cloud/scripts/per-boot/ubuntuVMexscript.sh
    permissions: '0755'
    content: |
      #!/bin/sh
      cat > testCat < /var/lib/cloud/scripts/per-boot/ubuntuVMexscript.sh
  - owner: afshan:afshan
    path: /home/afshan/test.sh
    permissions: '0755'
    content: |
      #!/bin/sh
      echo "test"

cloud_final_modules:
  - rightscale_userdata
  - scripts-vendor
  - scripts-per-once
  - scripts-per-boot
  - scripts-per-instance
  - [scripts-user, always]
  - ssh-authkey-fingerprints
  - keys-to-console
  - phone-home
  - final-message
  - power-state-change
write_files runs before any user/group creation. Does the afshan user exist when write_files is run? If not, attempting to set the owner on the first file will throw an exception, and the write_files module will exit before attempting to create the second file. You can see if this is happening by checking /var/log/cloud-init.log on your instance.
/etc/cloud/cloud.cfg won't get updated by user data. It will stay as-is on disk, but your user data changes will get merged on top of it.
scripts-user refers to scripts written to /var/lib/cloud/instance/scripts. You haven't written anything there, so I'm not sure of the purpose of your [scripts-user, always] change. If you're just looking to run a script every boot, the scripts-per-boot module (without any changes) should be fine. Every boot, it will run what's written to /var/lib/cloud/scripts/per-boot.
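If the ownership ordering is indeed the problem, one workaround (a sketch, assuming root ownership of the per-boot script is acceptable) is to drop the owner key from write_files and fix up the home-directory file in runcmd, which runs after users have been created:

```yaml
#cloud-config
write_files:
  - path: /home/afshan/test.sh
    permissions: '0755'
    content: |
      #!/bin/sh
      echo "test"

runcmd:
  # runcmd executes in the final stage, after user/group creation
  - chown afshan:afshan /home/afshan/test.sh
```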
When run from a Linux terminal, the specific command is the following:
/opt/front/arena/sbin/ads_start ads -db_server vmazfassql01 -db_name Test1
In the regular docker-compose YAML file we define it like this:
ENTRYPOINT ["/opt/front/arena/sbin/ads_start", "ads" ]
command: ["-db_server vwmazfassql01","-db_name Test1"]
Then I tried to convert it to Kubernetes
command: ["/opt/front/arena/sbin/ads_start","ads"]
args: ["-db_server vwmazfassql01","-db_name Test1"]
or without quotes for args
command: ["/opt/front/arena/sbin/ads_start","ads"]
args: [-db_server vwmazfassql01,-db_name Test1]
but I got errors for both cases:
Unknown parameter value '-db_server vwmazfassql01'
Unknown parameter value '-db_name Test1'
I then tried to remove the dashes from args, but then it seems those values are being ignored and not set. During the initialization process at container start, those properties seem to have their default values, e.g. db_name: "ads". At least that is how it is printed in the log during initialization.
I tried few more possibilities:
To define all of them in command:
command:
  - /opt/front/arena/sbin/ads_start
  - ads
  - db_server vwmazfassql01
  - db_name Test1
To define them in a slightly different way:
command: ["/opt/front/arena/sbin/ads_start","ads"]
args:
  - db_server vwmazfassql01
  - db_name Test1
command: ["/opt/front/arena/sbin/ads_start","ads"]
args: [db_server vwmazfassql01,db_name Test1]
Again they are being ignored and not set.
Am I doing something wrong? How can I work around this? Thanks
I would try separating the args, following the documentation example (https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#run-a-command-in-a-shell)
Something like:
command: ["/opt/front/arena/sbin/ads_start", "ads"]
args: ["-db_server", "vwmazfassql01", "-db_name", "Test1"]
Or maybe it would even work like this, which looks cleaner:
command: ["/opt/front/arena/sbin/ads_start"]
args: ["ads", "-db_server", "vwmazfassql01", "-db_name", "Test1"]
This follows the general approach of running an external command from code (a random example is python subprocess module) where you specify each single piece of the command that means something on its own.
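To make the splitting rule concrete, here is a small illustration with Python's subprocess module (purely illustrative, not part of the deployment): a child process receives each list element as one argument, so a flag and its value must be separate elements, exactly as in the Kubernetes args array.

```python
import subprocess
import sys

# Spawn a child Python that echoes back the argument vector it received.
child = [sys.executable, "-c", "import sys; print(sys.argv[1:])"]

# Each flag and each value is its own element, mirroring
# args: ["-db_server", "vwmazfassql01", "-db_name", "Test1"]
args = ["-db_server", "vwmazfassql01", "-db_name", "Test1"]

result = subprocess.run(child + args, capture_output=True, text=True)
print(result.stdout.strip())
# → ['-db_server', 'vwmazfassql01', '-db_name', 'Test1']
```

Had "-db_server vwmazfassql01" been a single element, the child would have seen one argument containing a space, which is why the program rejects it as an unknown parameter.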
I can't find a solution for a simple question:
I have file text.js
use somedb
db.somecollection.findOne()
When I run this file in cmd with redirection from a file:
"mongo < text.js"
it works properly.
But when I try it this way:
"mongo text.js" or "mongo --shell test.js"
I get this error message:
MongoDB shell version: 2.2.0
connecting to: test
type "help" for help
Wed Dec 05 16:05:21 SyntaxError: missing ; before statement pathToFile\test.js.js:1
failed to load: pathToFile\test.js.js
It fails on "use somedb". If I remove this line, it runs without error, but the console output is empty.
Any idea what this is and how to fix it?
I'm trying to find a solution for this, to create a build tool for Sublime Text 2.
The default build file was:
{
    "cmd": ["mongo", "$file"]
}
but in this case I get the error above
PS. Right after posting this question I found a solution for Sublime Text 2:
{
    "selector": "source.js",
    "shell": true,
    "cmd": ["mongo < ${file}"]
}
PPS. And a solution for Sublime Text 3:
{
    "selector": "source.js",
    "shell": true,
    "cmd": ["mongo", "<", "$file"]
}
This build tool works properly.
use dbname is a helper function in the interactive shell; it does not work when you are using the mongo shell with a JS script file as you are.
There are multiple solutions to this. The best one, IMO is to explicitly pass the DB name along with host and port name to mongo like this:
mongo hostname:27017/dbname mongoscript.js // replace 27017 with your port number
A better way to do this would be to define the DB at the beginning of your script:
mydb=db.getSiblingDB("yourdbname");
mydb.collection.findOne();
etc.
The latter is preferable as it allows you to interact with multiple DBs in the same script if you need to do so.
You can specify the database while starting the mongo client:
mongo somedb text.js
To get the output from the client to stdout just use the printjson function in your script:
printjson(db.somecollection.findOne());
Mongo needs to be invoked from a shell to get that mode, with Ansible you would have this:
- name: mongo using different databases
  action: shell /usr/bin/mongo < text.js
Instead of this:
- name: mongo breaking
  command: /usr/bin/mongo < text.js
This is what finally worked for me on Windows + Sublime Text 2 + MongoDB 2.6.5
{
    "selector": "source.js",
    "shell": true,
    "cmd": ["mongo", "<", "$file"],
    "working_dir": "C:\\MongoDB\\bin"
}
I am using Dancer::Plugin::Database to connect to the database from my Dancer application. It works fine for a single connection. When I tried multiple connections, I got an error. How can I add multiple connections?
I added the following code in my config.yml file:
plugins:
  Database:
    connections:
      one:
        driver: 'mysql'
        database: 'employeedetails'
        host: 'localhost'
        port: 3306
        username: 'remya'
        password: 'remy#'
        connection_check_threshold: 10
        dbi_params:
          RaiseError: 1
          AutoCommit: 1
        on_connect_do: ["SET NAMES 'utf8'", "SET CHARACTER SET 'utf8'" ]
        log_queries: 1
          two:
        driver: 'mysql'
        database: 'employeetree'
        host: 'localhost'
        port: 3306
        username: 'remya'
        password: 'remy#'
        connection_check_threshold: 10
        dbi_params:
          RaiseError: 1
          AutoCommit: 1
        on_connect_do: ["SET NAMES 'utf8'", "SET CHARACTER SET 'utf8'" ]
        log_queries: 1
Then I tried to connect to the database using the following code:
my $dbh = database('one');
my $sth = $dbh->prepare("select * from table_name where id = ?");
$sth->execute(1);
I got a compilation error: "Unable to parse Configuration file".
Please suggest a solution.
Thanks in advance
YAML requires consistent indentation for the keys of a hash. Remove four spaces from before "two:" and it should parse.
Update: I see there's been some editing of the indentation; going back to the original question produces a parsing error in a different place and shows a mixture of tabs and spaces being used. Try to consistently use only tabs or only spaces. You can test your file and find which line produces the first error like so:
$ perl -we'use YAML::Syck; LoadFile "config.yml"'
Syck parser (line 19, column 16): syntax error at -e line 1, <> chunk 1.
Also make sure that your keys all end up in the right hash (the mixture of tabs and spaces seems to allow this to come out wrong while still parsing successfully) with:
perl -we'use YAML::Syck; use Data::Dumper; $Data::Dumper::Sortkeys=$Data::Dumper::Useqq=1; print Dumper LoadFile "config.yml"'