Find Missing Records - tsql

My SQL is a bit rusty. I have a query that returns, for example, 10 rows even though there are 15 values in my where clause. How do I identify the 5 that can't be found?
Of course I could dump it into MS Excel and compare there, but how do I do it with SQL?
It's a simple query:
select * from customers where username in ('21051', '21052'...
Nothing else in the where clause; it returns 10 rows, but there are 15 usernames in the where clause.

To identify the missing rows:
SELECT *
FROM (SELECT '21051' AS username
      UNION ALL SELECT '21052'
      UNION ALL SELECT '21053'
      UNION ALL SELECT '21054') u
WHERE u.username NOT IN (SELECT c.username FROM customers c)
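As an aside, NOT IN returns no rows at all if customers.username can contain NULLs. A NULL-safe sketch of the same check, using a VALUES table constructor (assuming SQL Server 2008 or later; list the same usernames as in your query):
SELECT u.username
FROM (VALUES ('21051'), ('21052'), ('21053'), ('21054')) AS u(username)
WHERE NOT EXISTS (SELECT 1 FROM customers c WHERE c.username = u.username)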

Find the next oldest row in Redshift

I have a table called user_activity in Redshift with the columns department, user_id, activity_type, activity_id, and activity_date.
I'd like to produce a daily report of how many days it has been since each user's last event (of any type). Using CROSS APPLY (SQL Server) or a LATERAL JOIN (Postgres 9.3+), I'd do something like...
SELECT d.date, a.last_activity_date
FROM date_table d
CROSS JOIN (
    SELECT DISTINCT user_id FROM activity_table
) u
CROSS APPLY (
    SELECT TOP 1 activity_date AS last_activity_date
    FROM activity_table
    WHERE user_id = u.user_id AND activity_date <= d.date
    ORDER BY activity_date DESC
) a
For now I've written it like the query below, but it is a bit slow and I'm afraid it will only get slower.
with user_activity as (
    select distinct activity_date, user_id from activity_table
)
select
    d.date, u.user_id,
    max(u.activity_date) as last_activity_date
from date_table d
inner join user_activity u on u.activity_date <= d.date
where d.date between '2020-01-01' and current_date
group by 1, 2
Can someone suggest a good alternative for my needs, or a Redshift equivalent of CROSS APPLY / LATERAL JOIN?
As you are seeing, cross joins and inequality joins slow down as your data grows, and they are generally not the approach you want in Redshift. This is due to the explosion in intermediate result size that this type of join causes when applied to the large tables that are typical in Redshift.
You want to use window functions for this type of analysis, but you will need to step back and rethink how you structure the SQL. A MAX(activity_date) window function, partitioned by user_id, ordered by date, and with a frame clause of all preceding rows, will find the most recent activity as of any row.
Now this will produce rows only for the user_ids and dates that exist in the data table, and it looks like you want 1 row for each date for each user_id, right? To do this you need to UNION in a framework of data that has 1 row per date per user_id ahead of the window function. You will need NULLs in the other columns so that the column lists match. You will also want the calendar date in a separate column from activity_date. Now all dates for all user_ids exist in the source, and the window function will give you the result you want.
You also ask 'how is this better than the joins?' Well, in the joins you are replicating every data record by the number of dates, which can get really big. In this approach you just have the original data records plus one row per user_id per date (which is the size of your output), and as the number of records per user_id grows, this approach doesn't blow up the same way.
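As a minimal sketch of the window function being described (column names assume the UNION ALL structure built below; note this frame includes the current row, whereas the LAG ... IGNORE NULLS version in the full query looks strictly before the current date):
max(activity_date) over (
    partition by user_id
    order by cal_date
    rows unbounded preceding
) as last_activity_date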
——— Request to modify asker’s code per comments made to their approach ———
Your code is definitely on the right track, as you have removed the massive inequality join of your original. I made 2 comments about it. The first is that I believe you need GROUP BY user_id, date to prevent the multiple rows per user_id per date that would result if there are records for the same user_id on a single date with differing activity_types. This is a simple oversight.
The second is that I intended for you to use UNION ALL, not LEFT JOIN, to combine the actual data with the user_id/date framework. Your approach works fine, but I have found that unioning very large amounts of data is generally faster than joining, though you do need to make sure the columns line up. Either way we end up with a data segment with 3 columns: 2 date columns, one of which holds NULLs for framework rows, and 1 user_id. Your approach is fine, and the difference in performance is likely very small unless you have huge tables.
Since you asked for a rewrite, here it is with both changes. (NOTE: my laptop is in the shop so I don’t have ready access to Redshift at the moment and this SQL is untested. If the intent is not clear from this and you need me to debug it will be delayed by a few days. I’m keeping your setup methods and SQL structure.)
with date_table as (
    select '2000-01-01'::date as date
    union all
    select '2000-01-02'::date
    union all
    select '2000-01-03'::date
    union all
    select '2000-01-04'::date
    union all
    select '2000-01-05'::date
    union all
    select '2000-01-06'::date
),
users as (
    select 1 as user_id
    union all
    select 2
    union all
    select 3
),
user_activity as (
    select 1 as user_id, '2000-01-01'::date as activity_date
    union all
    select 1 as user_id, '2000-01-04'::date as activity_date
    union all
    select 3 as user_id, '2000-01-03'::date as activity_date
    union all
    select 1 as user_id, '2000-01-05'::date as activity_date
    union all
    select 1 as user_id, '2000-01-06'::date as activity_date
),
user_dates as (
    select d.date, u.user_id
    from date_table d
    cross join users u
),
user_date_activity as (
    select cal_date, user_id,
        lag(max(activity_date), 1) ignore nulls
            over (partition by user_id order by cal_date) as last_activity_date
    from (
        select user_id, date as cal_date, null::date as activity_date from user_dates
        union all
        select user_id, activity_date as cal_date, activity_date from user_activity
    ) as combined  -- Redshift requires an alias on derived tables
    group by user_id, cal_date
)
select * from user_date_activity
order by user_id, cal_date
This was my query based on Bill's answer.
with date_table as (
    select '2000-01-01'::date as date
    union all
    select '2000-01-02'::date
    union all
    select '2000-01-03'::date
    union all
    select '2000-01-04'::date
    union all
    select '2000-01-05'::date
    union all
    select '2000-01-06'::date
),
users as (
    select 1 as user_id
    union all
    select 2
    union all
    select 3
),
user_activity as (
    select 1 as user_id, '2000-01-01'::date as activity_date
    union all
    select 1 as user_id, '2000-01-04'::date as activity_date
    union all
    select 3 as user_id, '2000-01-03'::date as activity_date
    union all
    select 1 as user_id, '2000-01-05'::date as activity_date
    union all
    select 1 as user_id, '2000-01-06'::date as activity_date
),
user_dates as (
    select d.date, u.user_id
    from date_table d
    cross join users u
),
user_date_activity as (
    select ud.date, ud.user_id,
        lag(ua.activity_date, 1) ignore nulls
            over (partition by ud.user_id order by ud.date) as last_activity_date
    from user_dates ud
    left join user_activity ua on ud.date = ua.activity_date and ud.user_id = ua.user_id
)
select * from user_date_activity
order by user_id, date

How to make Postgres (cursor?) start at a particular row

I have created the following query:
select t.id, t.row_id, t.content, t.location, t.retweet_count, t.favorite_count, t.happened_at,
       a.id, a.screen_name, a.name, a.description, a.followers_count, a.friends_count, a.statuses_count,
       c.id, c.code, c.name,
       t.parent_id
from tweets t
join accounts a on a.id = t.author_id
left outer join countries c on c.id = t.country_id
where t.row_id > %s
-- order by t.row_id
limit 100
Where %s is a number that starts at 0 and is incremented by 100 after each query. I want to fetch all records from the database using this method, where I just increase the %s in the where condition. I found this approach at https://ivopereira.net/efficient-pagination-dont-use-offset-limit. I also included a column in my table corresponding to the row number (I named it row_id). Now the problem is that when I run this query the first time, it returns rows with a row_id around 3 million. I would like the cursor (not sure if my terminology is correct) to start with row_id 1 through 100, and so on. The table contains 7 million rows. Am I missing something obvious with which I could achieve my goal?
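For reference, the keyset pattern from the linked article relies on the ORDER BY that is commented out above: without it, Postgres may return any 100 rows satisfying row_id > %s, which is why the first batch can start around row_id 3 million. A sketch (select list trimmed), passing the last row_id seen on the previous batch instead of incrementing by a fixed 100:
select t.id, t.row_id, t.content
from tweets t
where t.row_id > %s   -- last row_id of the previous batch; 0 for the first
order by t.row_id     -- required: defines which 100 rows come next
limit 100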

PostgreSQL - Select distinct on limit 100

There's probably a simple solution to this, but how do I go about selecting, say, 100 or 1000 distinct avclassfamily values in the following query?
Ideally there would be a command along the lines of 'select distinct ON (1000) (avclassfamily)', but there isn't.
cursor.execute("""
    select count(*), date_trunc('year', first_seen)
    from (select DISTINCT ON (avclassfamily) * from malwarehashesandstrings) as p
    where first_seen is not null
      and behaviouralbinary is true
      and origindataset != 'MalRec'
      and origindataset != 'Ember Benign'
      and origindataset is not null
    group by date_trunc('year', first_seen);
""")
Please let me know if you have a solution or if there's anything I can clarify.
Edit: As requested, a simpler version of the query:
select DISTINCT (avclassfamily) from malwarehashesandstrings;
But say it were possible to select just 100 distinct values there.
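For what it's worth, DISTINCT ON combines with a plain LIMIT, so this sketch returns 100 families, one row per family (an arbitrary one, since there is no ORDER BY), and can be dropped into the subquery above:
select DISTINCT ON (avclassfamily) *
from malwarehashesandstrings
limit 100;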

SQL Oracle question about joins or subquery

select *
from amazon_shipment, customer
where amazon_shipment.customer_id = customer.customer_id
and amazon_shipment.customer_id in
(select top(1) amazon_shipment.customer_id
from amazon_shipment
group by amazon_shipment.customer_id
order by count(*) desc);
I am trying to select all the customers with the most orders; however, I get an error:
FROM keyword not found where expected
TOP(1) isn't available in Oracle.
In Oracle 11gR2 and lower, you can use WHERE ROWNUM < 2, but note that ROWNUM is assigned before ORDER BY is applied, so the sort must happen in a subquery:
select *
from (select * from EMPLOYEES order by SALARY desc)
where rownum < 2;
In Oracle 12c and higher, you can use FETCH FIRST 1 ROWS ONLY
select *
from EMPLOYEES
order by SALARY desc
fetch first 1 rows only;
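Applied to the question's query (table and column names as posted; ties for the top customer are not handled), the version-independent pattern is to sort in an inline view and filter with ROWNUM outside it:
select *
from amazon_shipment
join customer on customer.customer_id = amazon_shipment.customer_id
where amazon_shipment.customer_id = (
    select customer_id
    from (select customer_id
          from amazon_shipment
          group by customer_id
          order by count(*) desc)
    where rownum = 1
);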

Conditional Union in T-SQL

Currently I've a query as follows:
-- Query 1
SELECT acc_code, acc_name, alias, LAmt, COALESCE(LAmt, 0) AS amt
FROM (SELECT acc_code, acc_name, alias,
             (SELECT SUM(cr_amt) - SUM(dr_amt)
              FROM ledger_mcg l
              WHERE (l.acc_code LIKE a.acc_code + '.%' OR l.acc_code = a.acc_code)
                AND fy_id = 1
                AND posted_date BETWEEN '2010-01-01' AND '2011-06-02') AS LAmt
      FROM acc_head_mcg AS a
      WHERE acc_type = '4') AS T1
WHERE COALESCE(LAmt, 0) <> 0
Query 2 is the same as Query 1 except that acc_type = '5'. Query 2 always returns a result set with a single row. Now, I need the union of the two queries, i.e.
Query 1
UNION
Query 2
only when the amt returned by Query 2 is less than 0. Otherwise, I don't need a union, only the result set from Query 1.
The best way I can think of is to create a parameterised scalar function. How best can I do this?
You could store the result of the first query in a temporary table and then, if the table isn't empty, execute the other query.
IF OBJECT_ID('tempdb..#MultipleQueriesResults') IS NOT NULL
    DROP TABLE #MultipleQueriesResults;

SELECT acc_code, acc_name, alias, LAmt, COALESCE(LAmt, 0) AS amt
INTO #MultipleQueriesResults
FROM (SELECT acc_code, acc_name, alias,
             (SELECT SUM(cr_amt) - SUM(dr_amt)
              FROM ledger_mcg l
              WHERE (l.acc_code LIKE a.acc_code + '.%' OR l.acc_code = a.acc_code)
                AND fy_id = 1
                AND posted_date BETWEEN '2010-01-01' AND '2011-06-02') AS LAmt
      FROM acc_head_mcg AS a
      WHERE acc_type = '4') AS T1
WHERE COALESCE(LAmt, 0) <> 0;
IF EXISTS (SELECT * FROM #MultipleQueriesResults)
    … /* run Query 2 */
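An alternative sketch that avoids branching entirely: since a UNION with an empty result set simply returns Query 1, the amt < 0 test can be folded into Query 2's outer WHERE clause (same subquery shape as Query 1, with acc_type = '5'):
SELECT acc_code, acc_name, alias, LAmt, COALESCE(LAmt, 0) AS amt
FROM (SELECT acc_code, acc_name, alias,
             (SELECT SUM(cr_amt) - SUM(dr_amt)
              FROM ledger_mcg l
              WHERE (l.acc_code LIKE a.acc_code + '.%' OR l.acc_code = a.acc_code)
                AND fy_id = 1
                AND posted_date BETWEEN '2010-01-01' AND '2011-06-02') AS LAmt
      FROM acc_head_mcg AS a
      WHERE acc_type = '4') AS T1
WHERE COALESCE(LAmt, 0) <> 0
UNION
SELECT acc_code, acc_name, alias, LAmt, COALESCE(LAmt, 0) AS amt
FROM (SELECT acc_code, acc_name, alias,
             (SELECT SUM(cr_amt) - SUM(dr_amt)
              FROM ledger_mcg l
              WHERE (l.acc_code LIKE a.acc_code + '.%' OR l.acc_code = a.acc_code)
                AND fy_id = 1
                AND posted_date BETWEEN '2010-01-01' AND '2011-06-02') AS LAmt
      FROM acc_head_mcg AS a
      WHERE acc_type = '5') AS T2
WHERE COALESCE(LAmt, 0) < 0;  -- Query 2's single row survives only when its amt is negative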