How to order query execution with Opscode Chef and the database cookbook

I'm looking for some help with enforcing execution order in my Chef recipe.
The recipe is supposed to import a SQL dump into a database and then execute two MySQL queries against that database.
I'm using Chef Solo.
The first mysql_database action :query, which imports the dump, works well.
But the other two queries seem to do nothing: it looks as if the dump is still importing data into the DB, so the data isn't there yet when they run. Is the execution order random?
My recipe:
mysql_database node['wp-dev']['database'] do
  connection({ :host => 'localhost', :username => 'root',
               :password => node['mysql']['server_root_password'] })
  # The SQL script importing the dump - WORKING
  sql { ::File.open('/tmp/database.sql').read }
  # The 2 queries - not working on the first run
  sql "UPDATE WORKING QUERY WHEN COPIED IN MYSQL 1"
  sql "UPDATE WORKING QUERY WHEN COPIED IN MYSQL 2"
  action :query
end
I can't figure out how to fix this, or how to use only_if to check that the import has finished before running the queries.

A solution is to use separate mysql_database blocks, with one notifying the next when it completes.
mysql_database 'insertTestData' do
  connection mysql_connection_info
  database_name node['some']['database']
  sql { ::File.open('/somePath/testData.sql').read }
  action :nothing
end

mysql_database 'createTables' do
  connection mysql_connection_info
  database_name node['some']['database']
  sql { ::File.open('/somePath/tableCreate.sql').read }
  action :query
  notifies :query, 'mysql_database[insertTestData]'
end

You need to create a separate mysql_database block for each SQL statement you're executing; you can't run multiple sql queries from the same resource.
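Applied to the recipe in the question, that means splitting the dump import and the two UPDATE statements into three resources chained with :immediately notifications, so each statement runs only after the previous one has finished. A minimal sketch, reusing the connection details and placeholder queries from the question:

mysql_connection_info = { :host => 'localhost', :username => 'root',
                          :password => node['mysql']['server_root_password'] }

# Runs first during convergence: import the dump.
mysql_database 'import dump' do
  connection mysql_connection_info
  database_name node['wp-dev']['database']
  sql { ::File.open('/tmp/database.sql').read }
  action :query
  notifies :query, 'mysql_database[update 1]', :immediately
end

# action :nothing means these resources only fire when notified,
# i.e. after the import has completed.
mysql_database 'update 1' do
  connection mysql_connection_info
  database_name node['wp-dev']['database']
  sql 'UPDATE WORKING QUERY WHEN COPIED IN MYSQL 1'
  action :nothing
  notifies :query, 'mysql_database[update 2]', :immediately
end

mysql_database 'update 2' do
  connection mysql_connection_info
  database_name node['wp-dev']['database']
  sql 'UPDATE WORKING QUERY WHEN COPIED IN MYSQL 2'
  action :nothing
end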

Related

Robot Framework: Database Library keywords not getting executed

I recently started working with Robot Framework, and I had a requirement to connect to a Postgres DB.
I am able to connect to the DB, but when I try to execute queries the flow gets stuck; the test doesn't even fail. This is what I did:
Connect To Database    psycopg2    ${DBName}    ${DBUser}    ${DBPass}    ${DBHost}    ${DBPort}
${current_row_count} =    Row Count    Select * from xyz
The first statement executes fine, but it gets stuck on the second one.
Can somebody help me out with this?
To execute a query and get data from the result:
Connect To Database    psycopg2    ${DBName}    ${DBUser}    ${DBPass}    ${DBHost}    ${DBPort}
${output} =    Query    SELECT * from xyz;
Log    ${output}
${DataResults} =    Get From List    ${output}    0
${DataResults} =    Convert To List    ${DataResults}
${DataResults} =    Get From List    ${DataResults}    0
${DataResults} =    Convert To String    ${DataResults}
Disconnect From Database
You are not executing your query. Read the bit of documentation and the example below; the example uses placeholder variables, so substitute your own data.
Name: Connect To Database Using Custom Params
Source: DatabaseLibrary
Arguments:
[ dbapiModuleName=None | db_connect_string= ]
Loads the DB API 2.0 module given dbapiModuleName then uses it to connect to the database using the map string db_custom_param_string.
Example usage:
Connect To Database Using Custom Params    pymssql    database='${db_database}', user='${db_user}', password='${db_password}', host='${db_host}'
${queryResults} =    Query    ${query}
Disconnect From Database

Unable to connect to proper database using mongo script

I've written a script that takes in some variables and updates my DB accordingly. I seem to get a connection to the wrong database every time I run my script from the command line:
conn = new Mongo();
db = conn.getDB("my_db");
db.auth({
  user: DB_USER,
  password: DB_PASSWORD
});
cust = db.customers;
print('===== Indexes before execution =====');
print(JSON.stringify(db.customers.getIndexes()));
cust.createIndex({geo_point: "2dsphere"});
print('===== Indexes after execution =====');
print(JSON.stringify(db.customers.getIndexes()));
This is how I'm running it: mongo --eval 'let DB_USER="usr", DB_PASSWORD="pwd"' script.js
and this is the output: MongoDB shell version: 3.2.16
connecting to: test
2017-09-20T22:51:08.589+0000 E QUERY    [thread1] SyntaxError: missing ; before statement @(shell eval):1:4
As you can see, it says "connecting to: test", and I'm not sure what that's about. I'm also not sure what "missing ; before statement @(shell eval):1:4" means.
Please note that when I run mongo in the shell I get
MongoDB shell version: 3.2.16
connecting to: test
All help is greatly appreciated. Thanks!
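Two hedged observations on the output above: "connecting to: test" is harmless by itself, since with no DB address on the command line the shell connects to the default test database and the script then switches with conn.getDB("my_db"); and the SyntaxError position 1:4 points just past the let keyword, which the 3.2 shell's JavaScript engine rejects inside --eval, so a plain var declaration should parse. A sketch of the invocation under that assumption:
mongo --eval 'var DB_USER="usr", DB_PASSWORD="pwd"' script.js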

travis-ci postgres `SELECT EXISTS` query returns different results

I am working on some tests with pg-promise that involve dropping and recreating a table.
All tests pass on my local machine, but on travis-ci it seems to skip all the DROP TABLE ... SQL, so the tests fail.
Does anyone have any idea why? Is it a permissions issue?
Is there a way for me to debug this further, like connecting to the travis-ci Postgres server?
Update: I didn't include any code because all tests pass in my local environment, so I thought it was just a travis-ci issue. Below is the bit that I think travis-ci is skipping.
afterEach('cleanup tables', (done) => {
  db.none('DROP TABLE $1~', 'syncTest')
    .then(done)
    .catch(() => done());
});

beforeEach('cleanup tables', (done) => {
  db.none('DROP TABLE $1~', 'syncTest')
    .then(done)
    .catch(() => done());
});
Update 2: After some further tests, it turns out that the tests were failing because
db.one('SELECT EXISTS (SELECT 1 FROM information_schema.tables WHERE table_name=$1)', [tableName])
was not returning the expected value. The query returns { '?column?': false } on Travis, but { exists: false } in my local environment.
Is this a travis-ci issue, or a Postgres version issue?
Most likely your test sequence is subject to a race condition, which you only see on Travis CI because it is much busier than your local machine when running the tests.
To start with, try replacing your DROP TABLE name with DROP TABLE IF EXISTS name.
And then you may try using CREATE TABLE IF NOT EXISTS name...
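Applied to the hooks from the question, that could look like the sketch below. It also aliases the EXISTS column, which would explain Update 2: older Postgres versions label a bare expression column ?column?, while newer ones name it exists, so an explicit alias keeps the key stable across versions (the tableExists helper is hypothetical, not from the question):

// DROP TABLE IF EXISTS turns a missing table into a no-op instead of an error;
// returning the promise lets the test runner wait without the done callback.
beforeEach('cleanup tables', () => db.none('DROP TABLE IF EXISTS $1~', 'syncTest'));
afterEach('cleanup tables', () => db.none('DROP TABLE IF EXISTS $1~', 'syncTest'));

// Alias the EXISTS expression so the result key is { exists: ... } on every
// Postgres version, instead of { '?column?': ... } on older ones.
const tableExists = (tableName) =>
    db.one(
        'SELECT EXISTS (SELECT 1 FROM information_schema.tables WHERE table_name = $1) AS "exists"',
        [tableName],
        (row) => row.exists
    );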

Slick does not commit transaction on AWS postgres DB

We have an issue with Slick 3.0 and a Postgres database (9.5) on AWS, where Slick opens a transaction but does not seem to commit it, leaving an open connection "idle in transaction", and the futures never complete.
We are just calling db.run(saveRow(row).transactionally.asTry), where
private def saveRow(row: Row): DBIO[Int] = {
  val getExistingRow: DBIO[Option[Row]] =
    table.filter(_.id === row.id).result.headOption
  getExistingRow.flatMap {
    case None    => table += row
    case Some(_) => table.filter(_.id === row.id).map(_.property).update(row.property)
  }
}
Now even the first select statement created by getExistingRow does not complete. It works locally, but when running in production on AWS the prepared statements are never committed. The logs from slick.backend show only
#1: Start transaction
#2: StreamingInvokerAction$HeadOptionAction [select ...]
We would expect the following further entries from slick.backend (we see them locally), but we don't:
#3: SingleInsertAction [insert into ...]
#4: Commit
Is there some configuration setting I need to provide on the Slick, HikariCP, or Postgres side to make this work? Any other ideas on how to fix this issue?
It was actually caused by using the Play execution context. When we switched to the Scala default execution context, it worked fine.
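In code, the fix amounts to swapping the implicit execution context in scope where saveRow composes its DBIO actions; a minimal sketch, assuming the Play 2.x default context had been imported before:

// Was: import play.api.libs.concurrent.Execution.Implicits.defaultContext
// The Scala global pool lets the flatMap continuation actually run, so the
// transaction can reach its insert/update and commit.
import scala.concurrent.ExecutionContext.Implicits.global

db.run(saveRow(row).transactionally.asTry)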

readAnyDatabase user can create a database on MongoDB

The following code leaves behind an empty dummy database. Is this system behavior intended?
mongod is running in --auth mode and the user has the readAnyDatabase role.
import pymongo
print CORE_PROD_URL
mongo = pymongo.MongoClient(CORE_PROD_URL)
print mongo.database_names()
print mongo.dummy.test.count()
print mongo.database_names()
which gives:
mongodb://read_only_user:pw@localhost:27017
[u'admin', u'local']
0
[u'admin', u'local', u'dummy']
The same behaviour happens with find(), while
mongo.dummy.test.insert({'foo': 'bar'})
throws an exception:
OperationFailure: not authorized on new_db to execute command
This is a known bug, SERVER-11051. The database name will disappear from database_names() the next time you restart the server, but of course it will reappear the next time you read from the "dummy" database.