I am using a Postgres database in my CakePHP project.
I have a table with some data and a column called "status".
"Status" is an enum and can be "waiting", "in_progress", or "completed".
My script has to get the first found record with status = "waiting", change the status to "in_progress", and also get the id of this record, all in one atomic operation.
The id is needed after the computation to change the status to "completed".
There will be many such scripts working in parallel, which is why I need this simple "row locking".
I am using a Postgres DB for the first time - is there any easy way to accomplish this?
Maybe CakePHP supports some convenient way of doing this?
With CakePHP it makes no difference what kind of DB you have; simply use $this->Model->find(), modify your status, and then $this->Model->save():
$row = $this->Model->find('first',array('conditions' => array('Model.status' => 'waiting')));
$row['Model']['status'] = 'in_progress';
$this->Model->save($row);
(...do something...)
$row['Model']['status'] = 'completed';
$this->Model->save($row);
You'll probably want to run it in a loop, and define some kind of constants for the statuses...
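One caveat: find() followed by save() is not atomic, so two parallel workers can both read the same "waiting" row before either one saves. On the Postgres side the claim can be made atomic in a single statement. A sketch, assuming a hypothetical table name "tasks" (SKIP LOCKED requires Postgres 9.5 or later; the statement could be issued through the model's query() method):

```sql
-- Claim one waiting row atomically and return its id.
-- FOR UPDATE locks the chosen row; SKIP LOCKED makes parallel workers
-- pick different rows instead of blocking on the same one.
UPDATE tasks
SET status = 'in_progress'
WHERE id = (
    SELECT id FROM tasks
    WHERE status = 'waiting'
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING id;
```

When the computation finishes, a plain UPDATE ... SET status = 'completed' WHERE id = ... marks the row done.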
I have created a table with a JSONB column called "data".
A sample value of that column is:
[{"field_id": 1, "value": 10}, {"field_id": 2, "value": "some string"}]
There are multiple rows like this.
What I want:
I want to use aggregate functions on the "data" column such that I get:
Sum of all values where field_id = 1;
Avg of all values where field_id = 1;
I have searched a lot on Google but have not been able to find a proper solution.
Sometimes it says "Field doesn't exist" and sometimes it says "from clause missing".
I tried referring to it as data.value, then data -> value, and lastly data ->> value,
but nothing is working.
Please let me know the solution if anyone knows.
Thanks in advance.
Your attributes should be something like this, so you instruct it to run the function on a specific value:
attributes: [
[sequelize.fn('sum', sequelize.literal("data->>'value'")), 'json_sum'],
[sequelize.fn('avg', sequelize.literal("data->>'value'")), 'json_avg']
]
Then in WHERE, you reference field_id in a similar way, using literal():
where: sequelize.literal("data->>'field_id' = 1")
Your example also included a string for the value of "value", which of course won't aggregate. But if the basic Sequelize setup works on a good set of data, you can enhance the WHERE clause to test for numeric "value" data; there are good examples here: Postgres query to check a string is a number
Hopefully this gets you close. In my experience with Sequelize + Postgres, it helps to run the program in such a way that you can see the queries it generates, e.g. in a terminal where the output is streaming. On the way to a working statement, you'll either create objects Sequelize doesn't like, or Sequelize will create bad queries that Postgres doesn't like. If the query looks close, take it into pgAdmin for further work, then reproduce your adjustments in Sequelize. Good luck!
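One more hedge: since "data" holds a jsonb *array*, `data->>'value'` on the whole column will not reach into the elements; each element has to be unnested first. A plain-SQL sketch of the aggregation, assuming a hypothetical table name "mytable" (Sequelize could run this as a raw query):

```sql
-- Unnest the jsonb array, keep elements with field_id = 1 and a
-- numeric value, then cast and aggregate.
SELECT
    sum((elem->>'value')::numeric) AS json_sum,
    avg((elem->>'value')::numeric) AS json_avg
FROM mytable
CROSS JOIN jsonb_array_elements(data) AS elem
WHERE elem->>'field_id' = '1'
  AND elem->>'value' ~ '^-?[0-9]+(\.[0-9]+)?$';  -- skip string values
```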
Excuse what seems like it could be a duplicate. I'm familiar with multiple updates in Postgres... but I can't seem to figure out a way around this one...
I have a photos table with the following columns: id (primary key), url, sort_order, and owner_user_id.
We would like our interface to let the user reorder their existing photos in a collection view. When a drag-reorder interaction is complete, I am able to send a POST body to our API such as:
req.body.photos = [{id: 345, order: 1}, {id: 911, order: 2}, ...<etc>]
I can then turn around and run the following query in a loop for each item in the array:
photos.forEach(function (item) {
db.runQuery('update photos set sort_order=$1 where id=$2 and owner_user_id=$3', [item.order, item.id, currentUserId])
})
It's generally frowned upon to run database queries inside loops, so if there's any way this can be done with one query, that would be fantastic.
Much thanks in advance.
Running a select query inside of a loop is definitely questionable, but I don't think multiple updates are necessarily frowned upon if the data you are updating doesn't natively reside in the database. Doing them as separate transactions, however, might be.
My recommendation would be to wrap all known updates in a single transaction. This is not only kinder to the database (compile once, execute many, commit once), but this is an ACID approach to what I believe you are trying to do. If, for some reason, one of your updates fails, they will all fail. This prevents you from having two photos with an order of "1."
I didn't recognize your language, but here is an example of what this might look like in C#:
NpgsqlConnection conn = new NpgsqlConnection(connectionString);
conn.Open();
NpgsqlTransaction trans = conn.BeginTransaction();
NpgsqlCommand cmd = new NpgsqlCommand("update photos set sort_order=:SORT where id=:ID",
    conn, trans);
cmd.Parameters.Add(new NpgsqlParameter("SORT", DbType.Int32));
cmd.Parameters.Add(new NpgsqlParameter("ID", DbType.Int32));
foreach (var photo in photos)
{
    cmd.Parameters[0].Value = photo.SortOrder;
    cmd.Parameters[1].Value = photo.Id;
    cmd.ExecuteNonQuery();
}
trans.Commit();
I think in Perl, for example, it would be even simpler: turn off DBI AutoCommit and commit after the updates.
CAVEAT: Of course, add error trapping -- I was just illustrating what it might look like.
Also, I changed your update SQL. If "id" is the primary key, I don't think you need the additional owner_user_id=$3 clause to make it work.
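If a single statement really is preferred over the loop, Postgres can also apply the whole batch in one UPDATE by joining against a VALUES list. A sketch, using the ids from the question as placeholder data:

```sql
-- One statement for the whole batch: each (id, sort_order) pair in
-- the VALUES list is joined to its photo row and applied at once.
UPDATE photos AS p
SET sort_order = v.sort_order
FROM (VALUES
    (345, 1),
    (911, 2)
) AS v(id, sort_order)
WHERE p.id = v.id;
```

The owner_user_id check from the original query can be kept by adding AND p.owner_user_id = $1 to the WHERE clause.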
I write lots of code using SQLAlchemy on top of Postgres 9.3. I often have to do an insert after checking that the record does not already exist. To do so, I do the following:
c = session.query(ClassName).filter(ClassName.id == new.id).count()
if c==0:
session.add(new)
session.commit()
This is sort of tedious. Is there any way to set up SQLAlchemy + Postgres to handle this checking automatically? I'm not necessarily looking for a uniqueness index in Postgres (which will throw an error if the record already exists) so much as an "add" operation that knows what to do if a record is already there.
Why not define your own
"add" operation that knows what to do if a record is already there.
?
def addIfNotExist(session, new):
    c = session.query(ClassName).filter(ClassName.id == new.id).count()
    if not c:
        session.add(new)
        session.commit()
    else:
        pass  # put other code here if needed

addIfNotExist(session, new)
Without putting a unique index on id, this is the most direct thing I can think of, as there isn't (to my knowledge) a built-in way of doing what you want to do.
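For reference, the check-and-insert can also be pushed into a single statement on the Postgres side, which avoids the race between the SELECT and the INSERT when several sessions run concurrently. A sketch with a hypothetical table "items" and placeholder values (on Postgres 9.5+ this is spelled more directly as INSERT ... ON CONFLICT DO NOTHING, but that is not available on 9.3):

```sql
-- Insert the row only if no row with this id exists yet; the SELECT
-- produces either one candidate row or none.
INSERT INTO items (id, name)
SELECT 42, 'example'
WHERE NOT EXISTS (SELECT 1 FROM items WHERE id = 42);
```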
In my application, as I release a new version, I alter the existing database by adding a new table or altering existing tables in SQLite.
I have written the same in a script/text file and want to import it as a batch directly into the existing database, where these queries will execute once.
I know that I can do the same by writing each alter query separately, but this will also increase execution time and the time spent writing them.
Any ideas on how I can achieve this?
One thing that I used to do was to keep an array of columns for each table, like
persons = [ fname, lname, address, zip ]
Then I also kept a version array that tells me that for version 1, persons has 4 columns.
When I updated the application and added, for example, gsm to persons, I updated the array and the count. Then I run a query against the sqlite_master table and parse the result;
you can run '.schema persons' to get the CREATE statement, compare it to the expected columns, and issue ALTER TABLE only for what is missing. This is work you do once, and you never run ALTER TABLE on tables that are already up to date. You need to be organized.
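A related trick, as a sketch: SQLite's user_version pragma can store the schema version directly in the database file, so a batch script only needs to apply the changes for versions newer than the stored one (the table and column names here are illustrative):

```sql
-- Read the version already applied to this database file.
PRAGMA user_version;

-- Hypothetical version-2 migration, run only when user_version < 2.
ALTER TABLE persons ADD COLUMN gsm TEXT;

-- Record that version 2 has now been applied.
PRAGMA user_version = 2;
```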
For more dynamism, I would like to add a random element to my app.
Here is what I would have done in other technologies, and what is not working in Play:
long id = JPA.execute("select id from Realisation r order by RANDOM() LIMIT 1");
And here is the stack:
unexpected token: LIMIT near line 1, column 55
Comments:
Either in the app or the database makes no difference to me.
There are about a hundred "realisations" in the database.
All I need is their ids, no need for the full objects.
There is a MySQL database behind it all.
After a little investigation, here is how I've done it. Define jpa.dialect in application.conf:
jpa.dialect=org.hibernate.dialect.MySQLDialect
Fetch a complete object instead of just id with classic Model utilities :
Realisation r = Realisation.find("order by RAND()").first();
Not the best way possible, since I only need the id and not the complete object, but I have no other solution. If anyone has one for just the id, I'll take it.
There is no "limit" clause in JPQL; what you need is paging. You can use Query.setMaxResults if this is a JPQL query, which is not entirely clear from the post.
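Since only the id is needed, one alternative sketch: issue the working MySQL statement as a native query (the table name here follows the question), or equivalently drop the LIMIT from a JPQL query and call setMaxResults(1) on the query object:

```sql
-- Native MySQL, bypassing JPQL's lack of LIMIT; returns one random id.
SELECT id FROM Realisation ORDER BY RAND() LIMIT 1;
```

Note that ORDER BY RAND() sorts the whole table to pick one row; with about a hundred rows, as in the question, that is harmless.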