How to delete all rows after the first 1000 rows in a PostgreSQL table - postgresql

I have a huge database with 15 tables.
I need to make a light version of it and keep only the first 1000 rows in each table, ordered by date descending. I tried to find out on Google how to do that, but nothing really works.
It would be perfect if there were an automated way to go through each table and keep only 1000 rows.
But if I need to do that manually for each table, that is fine as well.
Thank you,

This looks positively awful, but maybe it's a starting point from which you can build.
with cte as (
    select mod_date, row_number() over (order by mod_date desc) as rn
    from table1
),
min_date as (
    -- the mod_date of the 1000th-newest row
    select mod_date
    from cte
    where rn = 1000
)
delete from table1 t1
where t1.mod_date < (select mod_date from min_date);

So the solution is:
DELETE FROM "table" WHERE "date" < now() - interval '1 year';
That way it deletes all rows from the table where the date is older than 1 year.
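If you want to automate this across all 15 tables, a PL/pgSQL DO block can loop over them and apply the keep-the-newest-1000 rule instead. This is only a sketch under assumptions not stated in the post: every table lives in the public schema and has a date column named mod_date; adjust the schema and column name to your data.
-- Sketch only: "mod_date" is a hypothetical column name; every table is assumed to have it.
DO $$
DECLARE
    t text;
BEGIN
    FOR t IN
        SELECT tablename FROM pg_tables WHERE schemaname = 'public'
    LOOP
        EXECUTE format(
            'DELETE FROM %I WHERE ctid NOT IN (
                 SELECT ctid FROM %I ORDER BY mod_date DESC LIMIT 1000
             )', t, t);
    END LOOP;
END $$;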

Related

How to reference a date range from another CTE in a where clause without joining to it?

I'm trying to write a query for Hive that uses the system date to determine both yesterday's date as well as the date 30 days ago. This will provide me with a rolling 30 days without the need to manually feed the date range to the query every time I run it.
I have that code working fine in a CTE. The problem I'm having is in referencing those dates in another CTE without joining the CTEs together, which I can't do since there's not a common field to join on.
I've tried various approaches but I get a "ParseException" every time.
WITH
date_range AS (
    SELECT
        CAST(from_unixtime(unix_timestamp()-30*60*60*24,'yyyyMMdd') AS INT) AS start_date,
        CAST(from_unixtime(unix_timestamp()-1*60*60*24,'yyyyMMdd') AS INT) AS end_date
)
SELECT * FROM myTable
WHERE date_id BETWEEN (SELECT start_date FROM date_range) AND (SELECT end_date FROM date_range)
The intended result is the set of records from myTable that have a date_id between the start_date and end_date as found in the CTE date_range. Perhaps I'm going about this all wrong?
You can do a CROSS JOIN; it does not require an ON condition. Your date_range dataset is only one row, so you can CROSS JOIN it with your table and it will be converted to a map-join (the small dataset is broadcast to all the mappers and loaded into each mapper's memory, so it runs very fast). Check the EXPLAIN output and make sure it is a map-join:
set hive.auto.convert.join=true;
set hive.mapjoin.smalltable.filesize=250000000;
WITH
date_range AS (
    SELECT
        CAST(from_unixtime(unix_timestamp()-30*60*60*24,'yyyyMMdd') AS INT) AS start_date,
        CAST(from_unixtime(unix_timestamp()-1*60*60*24,'yyyyMMdd') AS INT) AS end_date
)
SELECT t.*
FROM myTable t
CROSS JOIN date_range d
WHERE t.date_id BETWEEN d.start_date AND d.end_date
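To see whether the map-join actually kicked in, you can prefix the same statement with EXPLAIN and look for a map-join operator in the printed plan. A minimal sketch (same query as above, nothing new assumed):
EXPLAIN
WITH
date_range AS (
    SELECT
        CAST(from_unixtime(unix_timestamp()-30*60*60*24,'yyyyMMdd') AS INT) AS start_date,
        CAST(from_unixtime(unix_timestamp()-1*60*60*24,'yyyyMMdd') AS INT) AS end_date
)
SELECT t.*
FROM myTable t
CROSS JOIN date_range d
WHERE t.date_id BETWEEN d.start_date AND d.end_date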
Alternatively, instead of the join you can calculate the dates directly in the WHERE clause, with no CTE at all:
SELECT t.*
FROM myTable t
WHERE t.date_id
    BETWEEN CAST(from_unixtime(unix_timestamp()-30*60*60*24,'yyyyMMdd') AS INT)
        AND CAST(from_unixtime(unix_timestamp()-1*60*60*24,'yyyyMMdd') AS INT)

Most efficient way to remove duplicates - Postgres

I have always deleted duplicates with this kind of query:
delete from test a
using test b
where a.ctid < b.ctid
and a.col1=b.col1
and a.col2=b.col2
and a.col3=b.col3
Also, I have seen this query being used:
DELETE FROM test WHERE test.ctid NOT IN
(SELECT ctid FROM (
    SELECT DISTINCT ON (col1, col2) *
    FROM test) t);
And even this one (repeated until you run out of duplicates):
delete from test ju where ju.ctid in
(select ctid from (
    select distinct on (col1, col2) * from test ou
    where (select count(*) from test inr
           where inr.col1 = ou.col1 and inr.col2 = ou.col2) > 1) t);
Now I have run into a table with 5 million rows, which has indexes on the columns that are matched in the WHERE clause. And now I wonder:
Which of all those methods, which apparently do the same thing, is the most efficient, and why?
I just ran the second one and it is taking over 45 minutes to remove the duplicates. I'm just curious about which would be the most efficient one, in case I have to remove duplicates from another huge table. It doesn't matter whether the table has a primary key to begin with; you can always create one or not.
demo:db<>fiddle
Finding duplicates can be easily achieved by using row_number() window function:
SELECT ctid
FROM (
    SELECT
        *,
        ctid,
        row_number() OVER (PARTITION BY col1, col2, col3 ORDER BY ctid)
    FROM test
) s
WHERE row_number >= 2
This groups tied rows, orders them within each group, and adds a row counter. So every row with row_number > 1 is a duplicate which can be deleted:
DELETE
FROM test
WHERE ctid IN
(
    SELECT ctid
    FROM (
        SELECT
            *,
            ctid,
            row_number() OVER (PARTITION BY col1, col2, col3 ORDER BY ctid)
        FROM test
    ) s
    WHERE row_number >= 2
)
I don't know if this solution is faster than your attempts, but you could give it a try.
Furthermore - as #a_horse_with_no_name already stated - I would recommend using your own identifier column instead of ctid, for performance reasons.
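A minimal sketch of the same idea with a surrogate key instead of ctid (it assumes you add, or already have, a unique id column; the column name is hypothetical):
-- Hypothetical surrogate key; skip this step if the table already has one.
ALTER TABLE test ADD COLUMN id bigserial;

DELETE FROM test
WHERE id IN (
    SELECT id
    FROM (
        SELECT id,
               row_number() OVER (PARTITION BY col1, col2, col3 ORDER BY id) AS rn
        FROM test
    ) s
    WHERE rn >= 2
);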
Edit:
For my test data your first version seems to be a little bit faster than my solution. Your second version seems to be slower, and your third version does not work for me (after fixing the syntax errors it returns no result).
demo:db<>fiddle

Count distinct users over n-days

My table consists of two fields: CalDay, a timestamp field with the time set to 00:00:00, and UserID.
Together they form a compound key, but it is important to keep in mind that there are many rows for each given calendar day and there is no fixed number of rows per day.
Based on this dataset I need to calculate how many distinct users there are over a set window of time, say 30 days.
Using Postgres 9.3 I cannot use COUNT(DISTINCT UserID) OVER ..., nor can I work around the issue using DENSE_RANK() OVER (... RANGE BETWEEN), because RANGE only accepts UNBOUNDED.
So I went the old fashioned way and tried with a scalar subquery:
SELECT
    xx.*
    ,(
        SELECT COUNT(DISTINCT UserID)
        FROM data_table AS yy
        WHERE yy.CalDay BETWEEN xx.CalDay - interval '30 days' AND xx.CalDay
    ) as rolling_count
FROM data_table AS xx
ORDER BY xx.CalDay
In theory, this should work, right? I am not sure yet because I started the query about 20 mins ago and it is still running. Here lies the problem, the dataset is still relatively small (25000 rows) but will grow over time. I would need something that scales and performs better.
I was thinking that maybe - just maybe - using the unix epoch instead of the timestamp could help but it is only a wild guess. Any suggestion would be welcome.
This should work. I can't comment on speed, but it should take a lot less time than your current one. Hopefully you have indexes on both of these fields.
SELECT t1.calday, COUNT(DISTINCT t1.userid) AS daily, COUNT(DISTINCT t2.userid) AS last_30_days
FROM data_table t1
JOIN data_table t2
ON t2.calday BETWEEN t1.calday - '30 days'::INTERVAL AND t1.calday
GROUP BY t1.calday
UPDATE
Tested it with a lot of data. The above works but is slow. Much faster to do it like this:
SELECT t1.*, COUNT(DISTINCT t2.userid) AS last_30_days
FROM (
    SELECT calday, COUNT(DISTINCT userid) AS daily
    FROM data_table
    GROUP BY calday
) t1
JOIN data_table t2
    ON t2.calday BETWEEN t1.calday - '30 days'::INTERVAL AND t1.calday
GROUP BY 1, 2
So instead of building up a massive table of all the JOIN combinations and then grouping/aggregating, it first gets the "daily" data and then joins the 30-day window onto that. This keeps the join much smaller and it returns quickly (just under 1 second for 45000 rows in the source table on my system).
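Since the join is a range condition on calday and the aggregate counts distinct userid values, a composite index on those two columns is the natural supporting index. A minimal sketch (the index name is arbitrary):
-- Supports the BETWEEN join on calday and the DISTINCT count on userid
CREATE INDEX idx_data_table_calday_userid ON data_table (calday, userid);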

T-SQL import data and remove duplicate records from prior month

Every month we receive a roster which we run queries on, and then we generate data that gets uploaded into a table for an outside source to retrieve. My question is: what would be the easiest way to remove the duplicate data from the prior month's upload, bearing in mind that not all data is duplicated, and that if a person does not appear on the new roster, their prior-month data needs to remain? The data is time-stamped when it gets uploaded.
Thank you
You can use a cte and Row_Number() to identify and remove dupes
;with cte as (
Select *
,RN = Row_Number() over (Partition By SomeKeyField(s) Order By SomeDate Desc)
From YourTable
)
Select * -- << Remove if Satisfied
-- Delete -- << Remove Comment if Satisfied
From cte
Where RN>1
Without seeing your data structure, take a hard look at the Partition By and Order By within the OVER clause of Row_Number().
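For example, with hypothetical column names - PersonID as the key that defines a duplicate and UploadDate as the load timestamp - the delete form of that CTE would look like this (keeping only the newest upload per person):
;with cte as (
    Select *
          ,RN = Row_Number() over (Partition By PersonID Order By UploadDate Desc)
    From YourTable
)
Delete From cte -- PersonID and UploadDate are hypothetical; substitute your own columns
Where RN > 1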
A short and effective way is to delete via a derived table.
delete from f
from (
select *, row_number() over (partition by col order by (select 0)) rn
from tbl) f
where rn > 1
But the most effective way is to remove duplicates on input and prevent them in the first place (for example with a unique constraint).
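A minimal sketch of that prevention in SQL Server, using the same table and column names as the derived-table example above; with IGNORE_DUP_KEY = ON, rows that would duplicate an existing key are silently discarded at insert time instead of raising an error:
-- "col" is the column that defines a duplicate, as in the example above;
-- remove existing duplicates first, otherwise creating the unique index fails
Create Unique Index UX_tbl_col
    On tbl (col)
    With (IGNORE_DUP_KEY = ON);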

How to prune a table down to the first 5000 records of 50000

I have a rather large table of 50000 records, and I want to cut this down to 5000. How would I write an SQL query to delete the other 45000 records? The basic table structure contains a datetime column.
A rough idea of the query I want is the following
DELETE FROM mytable WHERE countexceeded(5000) ORDER BY filedate DESC;
I could write this in C#, somehow grabbing the row index number and doing some work around that, but is there a tidy way to do this in SQL?
The answer you have accepted is not valid syntax as DELETE does not allow an ORDER BY clause. You can use
;WITH T AS
(
SELECT TOP 45000 *
FROM mytable
ORDER BY filedate
)
DELETE FROM T
DELETE TOP(45000) FROM mytable ORDER BY filedate ASC;
Change the order by to ascending to get the rows in reverse order and then delete the top 45000.
Hope this helps.
Edit:-
I apologize for the invalid syntax. Here is my second attempt.
DELETE a FROM myTable a INNER JOIN
    (SELECT TOP (45000) * FROM myTable ORDER BY fileDate ASC) b ON a.id = b.id
If you do not have a unique column then please use Martin Smith's CTE answer.
if the table is correctly ordered:
DELETE FROM mytable LIMIT 45000
if not, and the table has a correctly ordered auto_increment index:
get the cutoff row (the 5001st row by id)
SELECT id, filedate FROM mytable ORDER BY id LIMIT 5000, 1;
save the id and then delete
DELETE FROM mytable WHERE id >= #id;
(this keeps the 5000 rows with the lowest ids; flip the ORDER BY and the comparison if you want to keep the newest 5000 instead)
if not ordered correctly, you could use filedate instead of id, but if it's a date without a time component you could delete unwanted rows that share the same date, so be careful with the filedate-based deletion