T-SQL query one table, get presence or absence of other table value

I'm not sure what this type of query is called, so I've been unable to search for it properly. I've got two tables: Table A has about 10,000 rows, and Table B has a variable number of rows.
I want to write a query that returns all of Table A's rows but with an added column whose value is a boolean saying whether the row also appears in Table B.
I've written this query, which works but is slow. It doesn't use a boolean but rather a count that will be either zero or one. Any suggested improvements are gratefully accepted:
SELECT u.number, u.name, u.deliveryaddress,
       (SELECT COUNT(productUserid)
        FROM ProductUser
        WHERE number = u.number AND productid = #ProductId) AS IsInPromo
FROM Users u
UPDATE
I've run the query with the actual execution plan enabled. I'm not sure how best to show the results, but the main costs are:
Nested Loops (Left Semi Join): 29%
Clustered Index Scan (Users table): 41%
Clustered Index Scan (ProductUser table): 29%
NUMBERS
There are 7366 users in the Users table and currently 18 rows in the ProductUser table (although this will change and could grow into the thousands).

You can use EXISTS to short-circuit after the first matching row is found, rather than COUNT-ing all matching rows.
SQL Server does not have a boolean datatype; the closest equivalent is BIT.
SELECT u.number,
       u.name,
       u.deliveryaddress,
       CASE
           WHEN EXISTS (SELECT *
                        FROM ProductUser
                        WHERE number = u.number
                          AND productid = #ProductId) THEN CAST(1 AS BIT)
           ELSE CAST(0 AS BIT)
       END AS IsInPromo
FROM Users u
RE: "I'm not sure what this type of query is called". This will give a plan with a semi join. See Subqueries in CASE Expressions for more about this.

Which database management system are you using?
Try this:
SELECT u.number, u.name, u.deliveryaddress,
       CASE WHEN COUNT(p.productUserid) > 0 THEN 1 ELSE 0 END AS IsInPromo
FROM Users u
LEFT JOIN ProductUser p ON p.number = u.number AND p.productid = #ProductId
GROUP BY u.number, u.name, u.deliveryaddress
UPD: this could be faster on MS SQL Server:
;with fff as
(
    select distinct p.number from ProductUser p where p.productid = #ProductId
)
select u.number, u.name, u.deliveryaddress,
       case when f.number is null then 0 else 1 end as IsInPromo
from Users u
left join fff f on f.number = u.number

Since you seem concerned about performance, this query may perform faster, as it allows an index seek on both tables instead of an index scan:
SELECT u.number,
       u.name,
       u.deliveryaddress,
       CASE WHEN p.number IS NULL THEN 0 ELSE 1 END AS IsInPromo
FROM Users u
LEFT JOIN ProductUser p ON p.number = u.number
                       AND p.productid = #ProductId
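Whichever form you use, the seek only happens if ProductUser has an index covering the join and filter columns. A hedged sketch (the index name is made up; adjust the column order to your workload):
-- Hypothetical supporting index so lookups by (number, productid) become seeks
-- instead of scans of the ProductUser clustered index.
CREATE NONCLUSTERED INDEX IX_ProductUser_number_productid
    ON ProductUser (number, productid);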

Related

How to make postgres (cursor?) start at particular row

I have created the following query:
select t.id, t.row_id, t.content, t.location, t.retweet_count, t.favorite_count, t.happened_at,
a.id, a.screen_name, a.name, a.description, a.followers_count, a.friends_count, a.statuses_count,
c.id, c.code, c.name,
t.parent_id
from tweets t
join accounts a on a.id = t.author_id
left outer join countries c on c.id = t.country_id
where t.row_id > %s
-- order by t.row_id
limit 100
Where %s is a number that starts at 0 and is incremented by 100 after each such query is conducted. I want to fetch all records from the database using this method, where I just increase the %s in the where condition. I found this approach on https://ivopereira.net/efficient-pagination-dont-use-offset-limit. I also included a column in my table corresponding to the row number (I named it row_id). Now the problem is that when I run this query the first time, it returns rows which have a row_id of 3 million. I would like the cursor (not sure if my terminology is correct) to start from rows with row_id 1 through 100 and so on. The table contains 7 million rows. Am I missing something obvious with which I could achieve my goal?
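For what it's worth, the keyset-pagination approach from that article relies on actually ordering by the key column; with the ORDER BY commented out, PostgreSQL is free to return the matching rows in any order. A minimal sketch of the idea, reusing the column names from the query above (here %s should be the largest row_id returned by the previous page, starting at 0, rather than a counter incremented by 100):
-- Keyset pagination sketch: %s is the last row_id seen so far.
select t.id, t.row_id, t.content
from tweets t
where t.row_id > %s
order by t.row_id   -- required; without it the row order is arbitrary
limit 100;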

Need better summation select statement in postgres function

I've got two tables in my database. One of them, 'orders', contains a set of columns each holding an integer that represents what the order should contain (like 5 of A and 15 of B). The second table, 'production_work', contains those same order columns plus a date, so whenever somebody completes part of an order, I track it.
So now I need a fast way to know which orders are completed, and I'm hoping to avoid a 'completed' column on the first table, as orders are editable and it's just more logic to keep correct.
This query works, but it's horribly written. What's a better way to do this? There are actually 12 of these columns that go into this query... I'm just showing 3 of them for the example.
SELECT *
FROM orders o
WHERE ud = (SELECT SUM(ud) FROM production_work WHERE order_id = o.ident)
AND dp = (SELECT SUM(dp) FROM production_work WHERE order_id = o.ident)
AND swrv = (SELECT SUM(swrv) FROM production_work WHERE order_id = o.ident)
select o.*
from orders o
inner join
(
    select order_id, sum(ud) as ud, sum(dp) as dp, sum(swrv) as swrv
    from production_work
    group by order_id
) pw on o.ident = pw.order_id
where o.ud = pw.ud
  and o.dp = pw.dp
  and o.swrv = pw.swrv
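With all 12 columns that WHERE clause gets long. As a variation (my sketch, not part of the answer above), PostgreSQL's row-value comparison lets you compare the columns pairwise in a single predicate:
-- Row-value comparison: every listed column must match its counterpart.
select o.*
from orders o
inner join (
    select order_id, sum(ud) as ud, sum(dp) as dp, sum(swrv) as swrv
    from production_work
    group by order_id
) pw on o.ident = pw.order_id
where (o.ud, o.dp, o.swrv) = (pw.ud, pw.dp, pw.swrv);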

T-SQL Need help to optimize table value function

I need help to optimize the SQL logic in one of my functions. Please note that I am not able to use a stored procedure.
Here is my table. It will be initialized from #MainTable, which contains a lot of records.
CREATE TABLE #ResultTable
(
    ResultValue INT
)
These are tables that store some parameters; they can be empty too.
CREATE TABLE #ParameterOne (ParameterOne INT)
CREATE TABLE #ParameterTwo (ParameterTwo NVARCHAR(100))
...
CREATE TABLE #ParameterN (ParameterN TINYINT)
Now, I need to join a lot of tables to my #MainTable in order to select from it only some of its records.
The selected records depend on the information stored in the parameter tables.
So, my current solution is:
INSERT INTO #ResultTable (ResultValue)
SELECT ResultValue
FROM #MainTable M
INNER JOIN #MainOne MO
    ON M.ID = MO.ID
....
INNER JOIN #MainN MN
    ON M.IDN = MN.ID
WHERE (EXISTS (SELECT 1 FROM #ParameterOne WHERE ParameterOne = MO.ID) OR NOT EXISTS (SELECT 1 FROM #ParameterOne))
AND
...
AND
(EXISTS (SELECT 1 FROM #ParameterN WHERE ParameterN = MN.Name) OR NOT EXISTS (SELECT 1 FROM #ParameterN))
So, the idea is to add records only if they match the current criteria from the parameter tables.
Because I am not able to use a procedure to build a dynamic query, I am using a WHERE clause with combinations of EXISTS and NOT EXISTS for each parameter table.
The problem is that it gets slower as I add more and more parameter tables. Is there another way to do this without using a lot of IF/ELSE statements checking which parameter tables have records? That would make the function a lot bigger and harder to read.
Any ideas and advice are welcome.
Good question.
Try the following one:
INSERT INTO #ResultTable (ResultValue)
SELECT ResultValue
FROM #MainTable M
INNER JOIN (SELECT * FROM #MainOne
            WHERE EXISTS (SELECT 1 FROM #ParameterOne WHERE ParameterOne = #MainOne.ID)
               OR NOT EXISTS (SELECT 1 FROM #ParameterOne)) MO
    ON M.ID = MO.ID
....
INNER JOIN (SELECT * FROM #MainN
            WHERE EXISTS (SELECT 1 FROM #ParameterN WHERE ParameterN = #MainN.Name)
               OR NOT EXISTS (SELECT 1 FROM #ParameterN)) MN
    ON M.IDN = MN.ID
Advantages:
The result of the JOIN is produced more quickly, because it does not process all of the data (it is already filtered)
It is simpler to adjust
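Another variation (my sketch, not from either post, shown for a single parameter table) is to record once, in a variable, whether each parameter table is empty, which keeps the per-row predicate to a single EXISTS:
-- Check the parameter table once instead of repeating NOT EXISTS in every predicate.
DECLARE @HasParameterOne BIT = 0;
IF EXISTS (SELECT 1 FROM #ParameterOne) SET @HasParameterOne = 1;

INSERT INTO #ResultTable (ResultValue)
SELECT ResultValue
FROM #MainTable M
INNER JOIN #MainOne MO
    ON M.ID = MO.ID
WHERE (@HasParameterOne = 0
       OR EXISTS (SELECT 1 FROM #ParameterOne WHERE ParameterOne = MO.ID));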

Selecting non-repeating values in Postgres

SELECT DISTINCT a.s_id, select2Result.s_id, select2Result."mNrPhone",
select2Result."dNrPhone"
FROM "Table1" AS a INNER JOIN
(
SELECT b.s_id, c."mNrPhone", c."dNrPhone" FROM "Table2" AS b, "Table3" AS c
WHERE b.a_id = 1001 AND b.s_id = c.s_id
ORDER BY b.last_name) AS select2Result
ON a.a_id = select2Result.student_id
WHERE a.k_id = 11211
It returns:
1001;1001;"";""
1002;1002;"";""
1002;1002;"2342342232123";"2342342"
1003;1003;"";""
1004;1004;"";""
The 1002 value is repeated twice, but it shouldn't be, because I used DISTINCT and no other table has an id repeated twice.
You can use DISTINCT ON like this:
SELECT DISTINCT ON (a.s_id)
a.s_id, select2Result.s_id, select2Result."mNrPhone",
select2Result."dNrPhone"
...
But as others have told you, the "repeated" records really are different.
The qualifier DISTINCT applies to the entire row, not to the first column in the select-list. Since columns 3 and 4 (mNrPhone and dNrPhone) are different for the two rows with s_id = 1002, the DBMS correctly lists both rows. You have to write your query differently if you only want s_id = 1002 to appear once, and you have to decide which auxiliary data you want shown.
As an aside, it is strongly recommended that you always use the explicit JOIN notation (which was introduced in SQL-92) in all queries and sub-queries. Do not use the old implicit join notation (which is all that was available in SQL-86 or SQL-89), and especially do not use a mixture of explicit and implicit join notations (where your sub-query uses the implicit join, but the main query uses explicit join). You need to know the old notation so you can understand old queries. You should write new queries in the new notation.
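To illustrate that advice (with made-up table and column names), the old implicit notation and its explicit equivalent look like this:
-- Old implicit join notation (SQL-86/89 style), avoid:
SELECT *
FROM orders o, customers c
WHERE o.customer_id = c.id;

-- Explicit JOIN notation (SQL-92 and later), prefer:
SELECT *
FROM orders o
JOIN customers c ON o.customer_id = c.id;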
First of all, the query as displayed does not work at all: student_id is missing in the sub-query, yet you use it in the JOIN later.
More interestingly:
Pick a certain row out of a set with DISTINCT
DISTINCT and DISTINCT ON return distinct values by sorting all rows according to the set of columns that are to be distinct, then picking the first row from each set. Plain DISTINCT sorts by all output columns, DISTINCT ON only by the specified columns. Herein lies the opportunity to pick certain rows of a set over others.
For instance, if you prefer rows with a non-empty "mNrPhone" in your example:
SELECT DISTINCT ON (a.s_id) -- sure you didn't want a.a_id?
       a.s_id AS a_s_id     -- use aliases to avoid duplicate names
      ,s.s_id AS s_s_id
      ,s."mNrPhone"
      ,s."dNrPhone"
FROM "Table1" a
JOIN (
   SELECT b.s_id, c."mNrPhone", c."dNrPhone", ??.student_id -- missing!
   FROM "Table2" b
   JOIN "Table3" c USING (s_id)
   WHERE b.a_id = 1001
   -- ORDER BY b.last_name -- pointless, DISTINCT will re-order
   ) s ON a.a_id = s.student_id
WHERE a.k_id = 11211
ORDER BY a.s_id -- first col must agree with DISTINCT ON, could add DESC though
      ,("mNrPhone" <> '') DESC -- non-empty first
ORDER BY cannot disagree with DISTINCT on the same query level. To get around this you can either use GROUP BY instead or put the whole query in a sub-query and run another SELECT with ORDER BY on it.
The ORDER BY you had in the sub-query is voided now.
In this particular case, if - as it seems - the dupes come only from the sub-query (you'd have to verify), you could instead:
SELECT a.a_id, s.s_id, s."mNrPhone", s."dNrPhone" -- picking a.a_id over s_id
FROM "Table1" a
JOIN (
   SELECT DISTINCT ON (b.s_id)
          b.s_id, c."mNrPhone", c."dNrPhone", ??.student_id -- missing!
   FROM "Table2" b
   JOIN "Table3" c USING (s_id)
   WHERE b.a_id = 1001
   ORDER BY b.s_id, (c."mNrPhone" <> '') DESC -- pick non-empty first
   ) s ON a.a_id = s.student_id
WHERE a.k_id = 11211
ORDER BY a.a_id -- now you can ORDER BY freely

Table cardinality information in Firebird

I am new to Firebird and I am messing around in its metadata to get some information about the table structure, etc.
My problem is that I can't seem to find some information about estimated table cardinality. Is there a way to get this information from Firebird?
Edit:
By cardinality I mean the number of rows in a table :) and for my use case SELECT COUNT(*) is not an option.
You can use an approximate method based on the selectivity of the primary key, like this:
SELECT
R.RDB$RELATION_NAME TABLENAME,
(
CASE
WHEN I.RDB$STATISTICS = 0 THEN 0
ELSE 1 / I.RDB$STATISTICS
END) AS COUNTRECORDS8
FROM RDB$RELATIONS R
JOIN RDB$RELATION_CONSTRAINTS C ON (R.RDB$RELATION_NAME = C.RDB$RELATION_NAME AND C.RDB$CONSTRAINT_TYPE = 'PRIMARY KEY')
JOIN RDB$INDICES I ON (I.RDB$RELATION_NAME = C.RDB$RELATION_NAME AND I.RDB$INDEX_NAME = C.RDB$INDEX_NAME)
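One caveat (my addition, not from the answer above): RDB$STATISTICS is only as accurate as the last time the index selectivity was recalculated, so the estimate can be stale. You can refresh it per index; the index name below is just an example:
-- Recalculate the selectivity stored in RDB$INDICES.RDB$STATISTICS
-- for one index (use the primary key index name of the table in question).
SET STATISTICS INDEX RDB$PRIMARY1;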
To get the number of rows in a table you use the COUNT() function, as in any other SQL database, i.e.
SELECT count(*) FROM table;
Why not use a query:
select count(distinct field_name)/(count(field_name) + 0.0000) from table_name
The closer the result is to 1, the higher the cardinality of the specified column.