Many useless queries made by DataMapper (Sinatra)

I have a problem with DataMapper (I'm using it with Sinatra).
I have a very basic app with three models.
Here's the code:
class Level
  include DataMapper::Resource
  property :id, Serial
  property :name, String, :required => true, :unique => true, :lazy => true
  property :description, Text, :lazy => true
  timestamps :at
end

class Player
  include DataMapper::Resource
  property :id, Serial
  property :name, String, :required => true, :lazy => true
  timestamps :at
  belongs_to :game
end

class Game
  include DataMapper::Resource
  property :id, Serial
  has n, :players
  belongs_to :level
  belongs_to :current_player, 'Player', :required => false
end
Here's a basic route:
get '/' do
  DataMapper::logger.debug 'Creating level'
  level = Level.create(:name => "One")
  DataMapper::logger.debug 'Creating game'
  game = Game.create(:level => level)
  DataMapper::logger.debug 'Adding players'
  alice = Player.create(:name => 'Alice', :game => game)
  bob = Player.create(:name => 'Bob', :game => game)
  DataMapper::logger.debug 'Setting game current player'
  game.current_player = alice
  game.save
  'ok'
end
My problem is that when I look at the DataMapper log file, I find it has made many useless queries and I don't understand why!
Here's the log output:
~ Creating level
~ (0.000062) SELECT "id" FROM "levels" WHERE "name" = 'One' ORDER BY "id" LIMIT 1
~ (0.002241) INSERT INTO "levels" ("name", "created_at", "updated_at") VALUES ('One', '2012-01-15T18:15:28+01:00', '2012-01-15T18:15:28+01:00')
~ Creating game
~ (0.000048) SELECT "id" FROM "levels" WHERE "name" = 'One' ORDER BY "id" LIMIT 1
~ (0.001747) INSERT INTO "games" ("level_id") VALUES (1)
~ Adding players
~ (0.000050) SELECT "id" FROM "levels" WHERE "name" = 'One' ORDER BY "id" LIMIT 1
~ (0.003762) INSERT INTO "players" ("name", "created_at", "updated_at", "game_id") VALUES ('Alice', '2012-01-15T18:15:28+01:00', '2012-01-15T18:15:28+01:00', 1)
~ (0.000085) SELECT "id" FROM "levels" WHERE "name" = 'One' ORDER BY "id" LIMIT 1
~ (0.001820) INSERT INTO "players" ("name", "created_at", "updated_at", "game_id") VALUES ('Bob', '2012-01-15T18:15:28+01:00', '2012-01-15T18:15:28+01:00', 1)
~ Setting game current player
~ (0.000078) SELECT "id" FROM "levels" WHERE "name" = 'One' ORDER BY "id" LIMIT 1
~ (0.001826) UPDATE "games" SET "current_player_id" = 1 WHERE "id" = 1
As you can see, there are a lot of queries for the Level model. I really don't understand why DataMapper is doing these.
Thanks a lot in advance for your help.
PS: You may think that it is not a big deal, but I actually simplified the model structure before posting here. The actual model is more complex and is full of those useless queries.
Here's a short part of my real DataMapper log file; this happens when I save an instance of my Game model:
~ (0.001640) UPDATE "asd_games" SET "updated_at" = '2012-01-15T17:51:27+01:00', "current_player_id" = 3, "current_action_id" = 3 WHERE "id" = 1
~ (0.000079) SELECT "id", "body" FROM "asd_actions" WHERE "id" = 3 ORDER BY "id"
~ (0.000083) SELECT "id", "name", "description" FROM "asd_levels" WHERE "id" = 1 ORDER BY "id"
~ (0.000057) SELECT "id" FROM "asd_levels" WHERE "name" = 'One' ORDER BY "id" LIMIT 1
~ (0.000075) SELECT "id" FROM "asd_levels" WHERE "name" = 'One' ORDER BY "id" LIMIT 1
~ (0.000083) SELECT "id" FROM "asd_levels" WHERE "name" = 'One' ORDER BY "id" LIMIT 1
~ (0.000082) SELECT "id" FROM "asd_levels" WHERE "name" = 'One' ORDER BY "id" LIMIT 1
~ (0.000084) SELECT "id" FROM "asd_levels" WHERE "name" = 'One' ORDER BY "id" LIMIT 1

The extra SELECTs are being made to check the :unique => true constraint on the Level class. This check seems to run on every database call.
One way to avoid this is, instead of using create (which immediately saves the model to the database) when building your model objects, to use new and then save the whole object graph with a single call to save on a suitable object once they're all ready (see the docs on creating and saving models):
DataMapper::logger.debug 'Creating level'
level = Level.new(:name => "One")
DataMapper::logger.debug 'Creating game'
game = Game.new(:level => level)
DataMapper::logger.debug 'Adding players'
alice = Player.new(:name => 'Alice', :game => game)
bob = Player.new(:name => 'Bob', :game => game)
DataMapper::logger.debug 'Setting game current player'
game.current_player = alice
game.save
This produces the output:
~ Creating level
~ Creating game
~ Adding players
~ Setting game current player
~ (0.000074) SELECT "id" FROM "levels" WHERE "name" = 'One' ORDER BY "id" LIMIT 1
~ (0.001062) INSERT INTO "levels" ("name", "created_at", "updated_at") VALUES ('One', '2012-01-15T20:07:16+00:00', '2012-01-15T20:07:16+00:00')
~ (0.001460) INSERT INTO "games" ("level_id") VALUES (1)
~ (0.001279) INSERT INTO "players" ("name", "created_at", "updated_at", "game_id") VALUES ('Alice', '2012-01-15T20:07:16+00:00', '2012-01-15T20:07:16+00:00', 1)
~ (0.001592) UPDATE "games" SET "current_player_id" = 1 WHERE "id" = 1
So the models are not immediately persisted, but are all done together, and the uniqueness check is only done once.
Another possibility would be to set :auto_validation => false on the :name property.
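For example, a minimal sketch of what that property declaration could look like (the rest of the Level model is unchanged):
class Level
  include DataMapper::Resource
  property :id, Serial
  # :auto_validation => false stops dm-validations from generating the
  # automatic validations for this property, including the uniqueness
  # check and its extra SELECT.
  property :name, String, :required => true, :unique => true,
           :lazy => true, :auto_validation => false
  property :description, Text, :lazy => true
  timestamps :at
end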
This change produces this output (using create):
~ Creating level
~ (0.001162) INSERT INTO "levels" ("name", "created_at", "updated_at") VALUES ('One', '2012-01-15T20:13:51+00:00', '2012-01-15T20:13:51+00:00')
~ Creating game
~ (0.001958) INSERT INTO "games" ("level_id") VALUES (1)
~ Adding players
~ (0.001194) INSERT INTO "players" ("name", "created_at", "updated_at", "game_id") VALUES ('Alice', '2012-01-15T20:13:51+00:00', '2012-01-15T20:13:51+00:00', 1)
~ (0.001304) INSERT INTO "players" ("name", "created_at", "updated_at", "game_id") VALUES ('Bob', '2012-01-15T20:13:51+00:00', '2012-01-15T20:13:51+00:00', 1)
~ Setting game current player
~ (0.001369) UPDATE "games" SET "current_player_id" = 1 WHERE "id" = 1
So there are still multiple calls to the database, but the check isn't made on each call (in fact it doesn't look like it's being made at all, which rather defeats the purpose of using :unique => true in the first place).

I ran into the same issue where I have a User and a Job. The Job belongs_to the user. Snippets:
class User
  include DataMapper::Resource
  property :id, Serial, writer: :protected, key: true
  property :email, String, required: true, length: (5..40),
           unique: true, format: :email_address
end

class Job
  include DataMapper::Resource
  property :id, Serial, key: true
  property :progress, Integer
  property :updated_at, DateTime
  belongs_to :user
end
Every time I save the job, two queries are run:
~ (0.000421) SELECT `id` FROM `users` WHERE `email` = 'mike@example.com' ORDER BY `id` LIMIT 1
~ (0.001589) UPDATE `jobs` SET `progress` = 19, `updated_at` = '2013-12-19 06:32:43' WHERE `id` = 91
Because I am confident that my update contains no bad data (nothing is tainted by user input), I was able to stop the SELECT from running by using the bang (!) version of save:
- save
+ save!
http://datamapper.org/docs/create_and_destroy.html describes what actually happens when you use the bang method instead of the non-bang method, so you will want to review that and see if it's safe for your use case.
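For illustration, a rough sketch of the difference in the update path (the record lookup and attribute values here are just assumptions based on the snippets above):
job = Job.get(91)
job.progress = 19

# save runs validations (including on the associated User's unique email),
# which issues the extra SELECT seen in the log.
job.save

# save! skips validations and hooks, so only the UPDATE is issued.
job.save!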

Related

Postgresql: Can the minus operator not be used with a parameter? Only hardcoded values?

The following query deletes an entry using a hardcoded index:
const deleteGameQuery = `
update users
set games = games - 1
where username = $1
`
If I pass the index as a parameter, nothing is deleted:
const gameIndex = rowsCopy[0].games.findIndex(obj => obj.game == gameID).toString();
const deleteGameQuery = `
update users
set games = games - $1
where username = $2
`
const { rows } = await query(deleteGameQuery, [gameIndex, username]);
ctx.body = rows;
The gameIndex parameter is just a string, the same as if I typed it. So why doesn't it seem to read the value? Is this not allowed?
The column games is a jsonb data type with the following data:
[
  {
    "game": "cyberpunk-2077",
    "status": "Backlog",
    "platform": "Any"
  },
  {
    "game": "new-pokemon-snap",
    "status": "Backlog",
    "platform": "Any"
  }
]
The problem is that you're passing text instead of an integer. You need to pass an integer. I'm not sure exactly how your database interface handles integers, but try removing toString() and ensure gameIndex is a Number:
const gameIndex = rowsCopy[0].games.findIndex(obj => obj.game == gameID);
array - integer and array - text mean two different things.
array - 1 removes the second element from the array.
select '[1,2,3]'::jsonb - 1;
[1, 3]
array - '1' searches for the entry '1' and removes it.
select '["1","2","3"]'::jsonb - '1';
["2", "3"]
-- Here, nothing is removed because 1 != '1'.
select '[1,2,3]'::jsonb - '1';
[1, 2, 3]
When you pass in a parameter, it is translated by query according to its type. If you pass a Number it will be translated as 1; if you pass a String it will be translated as '1'. (Or at least that's how it should work; I'm not totally familiar with JavaScript database libraries.)
As a side note, this sort of data is better handled as a join table.
create table games (
  id bigserial primary key,
  name text not null,
  status text not null,
  platform text not null
);

create table users (
  id bigserial primary key,
  username text not null
);

create table game_users (
  game_id bigint not null references games,
  user_id bigint not null references users,
  -- If a user can only have each game once.
  unique(game_id, user_id)
);

-- User 1 has games 1 and 2. User 2 has game 2.
insert into game_users (game_id, user_id) values (1, 1), (2, 1), (2, 2);

-- User 1 no longer has game 1.
delete from game_users where game_id = 1 and user_id = 1;
You would also have a platforms table and a game_platforms join table.
Join tables are a little mind bending, but they're how SQL stores relationships. JSONB is very useful, but it is not a substitute for relationships.
You can try to avoid decomposing objects outside of Postgres and manipulate the jsonb structure inside the query like this:
create table gameplayers as (select 1 as id, '[
  {
    "game": "cyberpunk-2077",
    "status": "Backlog",
    "platform": "Any"
  },
  {
    "game": "new-pokemon-snap",
    "status": "Backlog",
    "platform": "Any"
  },
  {
    "game": "gameone",
    "status": "Backlog",
    "platform": "Any"
  }
]'::jsonb games);

with
ungrouped as (select * from gameplayers g, jsonb_to_recordset(g.games)
              as (game text, status text, platform text)),
filtered as (select id,
                    jsonb_agg(
                      json_build_object('game', game,
                                        'status', status,
                                        'platform', platform
                      )
                    ) games
             from ungrouped where game not like 'cyberpunk-2077' group by id)
UPDATE gameplayers as g set games=f.games
from filtered f where f.id=g.id;

How to flatten multiple row values into a concatenated string in Postgres?

I have a simple set of three tables: two "source" tables and a "join" table that allows a one-to-many relationship.
I have this query:
select data.mph1.data ->> 'name' as name, data.mph1.data ->> 'tags' as Tags,
data.mph2.data ->> 'School' as school from data.mph1
join data.mph1tomph2 on data.mph1tomph2.mph1 = data.mph1.id
join data.mph2 on data.mph2.id = data.mph1tomph2.mph2
The output appears as:
Name Tags School
"Steve Jones" "["tag1", "tag2"]" "UMass"
"Steve Jones" "["tag1", "tag2"]" "Harvard"
"Gary Summers" "["java", "postgres", "flutter"]" "Yale"
"Gary Summers" "["java", "postgres", "flutter"]" "Harvard"
"Gary Summers" "["java", "postgres", "flutter"]" "UMass"
What I'm looking for is:
Name Tags School
"Steve Jones" "["tag1", "tag2"]" "UMass", "Harvard"
"Gary Summers" "["java", "postgres", "flutter"]" "Yale, Harvard, UMass"
How would I get this result in a single query? Is it possible?
Use the aggregate function string_agg()
select
data.mph1.data ->> 'name' as name,
data.mph1.data ->> 'tags' as tags,
string_agg(data.mph2.data ->> 'School', ', ') as school
from data.mph1
join data.mph1tomph2 on data.mph1tomph2.mph1 = data.mph1.id
join data.mph2 on data.mph2.id = data.mph1tomph2.mph2
group by data.mph1.id -- if id is a primary key
-- group by 1, 2 -- otherwise

I have different results from query for COUNT('e.id') or COUNT(e.id)

I have the following code:
def self.department_members(department)
  where(organization_id: department.organization_id)
    .joins("LEFT JOIN core_employments AS e ON
            e.organization_id = #{department.organization_id} AND
            core_members.user_id = e.user_id")
    .group('core_members.id')
end

def self.can_automerged(department)
  department_members(department).having("COUNT('e.id') = 1")
  # department_members(department).having("COUNT(e.id) = 1")
end

def self.can_not_automerged(department)
  department_members(department).having("Count('e.id') > 1")
end
When I use
department_members(department).having("COUNT('e.id') = 1")
my test completes without errors. When I use
department_members(department).having("COUNT(e.id) = 1")
my test fails. I can't understand why. Can you explain why?
I use Rails 4 and PostgreSQL.
schema:
create_table "core_members", force: :cascade do |t|
t.integer "user_id", null: false
t.integer "project_id", null: false
t.boolean "owner", default: false
t.string "login"
t.string "project_access_state"
t.datetime "created_at"
t.datetime "updated_at"
t.integer "organization_id"
t.integer "organization_department_id"
end
create_table "core_employments", force: :cascade do |t|
t.integer "user_id"
t.integer "organization_id"
t.boolean "primary"
t.string "state"
t.datetime "created_at"
t.datetime "updated_at"
t.integer "organization_department_id"
end
test:
module Core
  require "initial_create_helper"

  describe Member do
    describe "automerge" do
      before(:each) do
        @organization = create(:organization)
        @department1 = create(:organization_department, organization: @organization)
        @department2 = create(:organization_department, organization: @organization)
        @user = create(:user)
        @user_with_many_employments = create(:user)
        @department1.employments.create!(user: @user)
        @department1.employments.create!(organization: @organization, user: @user_with_many_employments)
        @department2.employments.create!(organization: @organization, user: @user_with_many_employments)
        @project = create_project
        @project.members.create!(user: @user,
                                 organization: @organization)
        @project.members.create!(user: @user_with_many_employments,
                                 organization: @organization)
      end

      it "::can_not_automerged" do
        expect(Member.can_not_automerged(@department1).to_a.map(&:user)).to match_array [@user_with_many_employments]
      end

      it "::can_automerged" do
        expect(Member.can_automerged(@department1).to_a.map(&:user)).to match_array [@user]
      end
    end
  end
end
I have different results from query for COUNT('e.id') or COUNT(e.id)
'e.id' is a string constant, so COUNT('e.id') is just an awkward, misleading way of saying COUNT(*).
COUNT(e.id), on the other hand, counts all rows in the result where e.id IS NOT NULL - since count() does not count NULL values.
The manual about count():
count(*) ... number of input rows
count(expression) ... number of input rows for which the value of expression is not null
As you can see, there are even two separate functions internally. And
it should be noted that count(*) is slightly faster. So use that unless you need the second variant. Related:
PostgreSQL: running count of rows for a query 'by minute'
You might counter with:
"But e.id is the PRIMARY KEY of core_employments, so it is defined NOT NULL!"
But that would overlook the conditional LEFT JOIN in your query, which still introduces NULL values into your NOT NULL column where the join conditions are not met. Related:
Query with LEFT JOIN not returning rows for count of 0
That said, LEFT [OUTER] JOIN is misleading, too. The subsequent condition
having("COUNT(e.id) = 1")
forces it to act like a plain [INNER] JOIN. Once you have fixed that, you might as well simplify to:
having("COUNT(*) = 1")
And if all you care about is that at least one related row exists in core_employments (which translates to having("COUNT(*) >= 1")), the superior (clearer, faster) technique in simple cases would be an EXISTS semi-join:
WHERE EXISTS (SELECT FROM core_employments WHERE <conditions>)
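In the Rails scopes above, such a semi-join might look roughly like this (the scope name with_employment is made up here, and the EXISTS condition mirrors the join conditions from the question):
def self.with_employment(department)
  where(organization_id: department.organization_id)
    .where("EXISTS (SELECT 1 FROM core_employments AS e
                    WHERE e.organization_id = :org_id
                      AND e.user_id = core_members.user_id)",
           org_id: department.organization_id)
end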

One2many field issue Odoo 10.0

I have this very weird issue with a One2many field.
First let me explain the scenario.
I have a One2many field in sale.order.line; the code below will explain the structure better:
class testModule(models.Model):
    _name = 'test.module'
    name = fields.Char()

class testModule2(models.Model):
    _name = 'test.module2'
    location_id = fields.Many2one('test.module')
    field1 = fields.Char()
    field2 = fields.Many2one('sale.order.line')

class testModule3(models.Model):
    _inherit = 'sale.order.line'
    test_location = fields.One2many('test.module2', 'field2')
CASE 1:
Now what is happening is that when I create a new sales order, I select the partner_id, then add a sale.order.line, and inside this line I add the One2many field test_location, and then I save.
CASE 2:
Create a new sales order, select partner_id, then add a sale.order.line, and inside the sale.order.line add the test_location line [close the sales order line window]. Now, after the entry but before hitting save, I change a field, say partner_id, and then click save.
CASE 3:
This case is the same as case 2, but with the addition that I change the partner_id field again [two changes in total: first as in case 2 and then this one], then I click on save.
RESULTS
CASE 1 works fine.
CASE 2 has this issue:
odoo.sql_db: bad query: INSERT INTO "test_module2" ("id", "field2", "field1", "location_id", "create_uid", "write_uid", "create_date", "write_date") VALUES(nextval('test_module2_id_seq'), 27, 'asd', ARRAY[1, '1'], 1, 1, (now() at time zone 'UTC'), (now() at time zone 'UTC')) RETURNING id
ProgrammingError: column "location_id" is of type integer but expression is of type integer[]
LINE 1: ...VALUES(nextval('test_module2_id_seq'), 27, 'asd', ARRAY[1, '...
Now for this case I put a debugger on the create/write method of sale.order.line to see what values are getting passed:
values = {u'product_uom': 1, u'sequence': 0, u'price_unit': 885, u'product_uom_qty': 1, u'qty_invoiced': 0, u'procurement_ids': [[5]], u'qty_delivered': 0, u'qty_to_invoice': 0, u'qty_delivered_updateable': False, u'customer_lead': 0, u'analytic_tag_ids': [[5]], u'state': u'draft', u'tax_id': [[5]], u'test_location': [[5], [0, 0, {u'field1': u'asd', u'location_id': [1, u'1']}]], 'order_id': 20, u'price_subtotal': 885, u'discount': 0, u'layout_category_id': False, u'product_id': 29, u'price_total': 885, u'invoice_status': u'no', u'name': u'[CARD] Graphics Card', u'invoice_lines': [[5]]}
In the above values, location_id is getting passed as u'location_id': [1, u'1'], which is not correct... so for this I correct the issue in code, update the values, and pass those on.
CASE 3
If the user changes the field two or more times, then the values are:
values = {u'invoice_lines': [[5]], u'procurement_ids': [[5]], u'tax_id': [[5]], u'test_location': [[5], [1, 7, {u'field1': u'asd', u'location_id': False}]], u'analytic_tag_ids': [[5]]}
Here location_id comes through as:
u'location_id': False
MULTIPLE CASES
If the user does case 1 and then, on the same record, does case 2 or case 3, sometimes the line will be saved with field2 = NULL or False in the database; other values like location_id and field1 will have data, but not field2.
NOTE: This happens with any field at the header level of the sale order, not only the partner_id field.
I tried debugging it myself but couldn't find the reason why this is happening.

OrientDB SQL - traverse while keeping edges weight

Let's assume the following model in an OrientDB graph:
I have a Profile vertex.
Profiles are connected with 2 edges: Liked and Commented. Both edges have a "value" field indicating the count of the action (or the "weight" of the edge).
So, if user A commented 3 times on posts by user B there will be a Commented edge from user A to user B with value = 3.
Now, say I want to get all the users that interacted with user B (either liked or commented), sorted by the weight of the interaction.
I can do that with the following SQL:
select * from (
  traverse out from (
    select out, sum(value) as value from (
      traverse * from (select from Profile where username="B") while $depth < 3
    ) where @class="Liked" or @class="Commented"
    group by out order by value desc
  ) while $depth < 2
) where @class="Profile"
But what if I also want to know the weight of the interaction? How do I propagate the "value" up while doing the last traverse?
Edit
According to the suggestion, a simplified version of this query will be:
select expand(out) from (
select out, sum(value) as value from (
select expand(inE("Liked", "Commented")) from Profile
where username="B"
) group by out order by value desc
)
But I still can't find a way to use LET to insert the value into the outer expanded object. $parent does not seem to point to the object that is expanded in the outermost select.
Edit 2
I'm playing with $parent in every way I can think of. I don't see how you can use it in this case. Again, the problem I'm trying to solve is how to pass the sum(value) to the outer result set. I don't see a way of using LET for it when doing a GROUP BY, and I also don't see a way of using LET when the outermost select is doing an expand (since you can't do other projections together with expand).
Also, the results of using $current do not seem to be what is expected. For example, the following query:
select expand($v) from
(select from
(select expand(inE("Liked", "Commented")) from Profile where #rid=#11:0)
let $v = $current
)
Returns this:
{
  "result" : [{
      "@type" : "d",
      "@rid" : "#14:4",
      "@version" : 2,
      "@class" : "Commented",
      "value" : 1,
      "out" : "#11:165",
      "in" : "#11:0"
    }, {
      "@type" : "d",
      "@rid" : "#14:4",
      "@version" : 2,
      "@class" : "Commented",
      "value" : 1,
      "out" : "#11:165",
      "in" : "#11:0"
    }, {
      "@type" : "d",
      "@rid" : "#14:4",
      "@version" : 2,
      "@class" : "Commented",
      "value" : 1,
      "out" : "#11:165",
      "in" : "#11:0"
    }
  ]
}
The same node over and over again, instead of all the edges, which is what I would expect.
I see you're using an old version of OrientDB. With more recent versions you can simplify it. Example: the original query:
select * from (
  traverse out from (
    select out, sum(value) as value from (
      traverse * from (
        select from Profile where username="B"
      ) while $depth < 3
    ) where @class="Liked" or @class="Commented" group by out order by value desc
  ) while $depth < 2
) where @class="Profile"
You could skip some steps by using out()/in()/both(), passing the edge's labels/classes, like:
select expand( out(["Liked","Commented]) ) from Profile where username="B"
However, to pass the value up you can use variables with the LET clause. Example:
select from XXX let $parent.a = value
In this way you set the variable "a" in the upper-level context, but you could also do:
select from XXX let $parent.$parent.a = value
to set it two levels up.
I haven't tried this with a group by yet, but you should be able to group the result using a subquery. This works for me, where prop1, prop2, prop3, etc. are properties of the vertices coming out of the edge (the columns resulting from select out() from Profile where username="B"):
select outV().prop1, outV().prop2, outV().prop3, value from (
select expand(inE("Liked", "Commented")) from Profile
where username="B"
) order by value desc