How to prune a table down to the first 5000 records of 50000 - tsql

I have a rather large table of 50000 records, and I want to cut it down to 5000. How would I write an SQL query to delete the other 45000 records? The table structure includes a datetime column, filedate.
A rough idea of the query I want is the following
DELETE FROM mytable WHERE countexceeded(5000) ORDER BY filedate DESC;
I could write this in C# by grabbing row index numbers and working around that, but is there a tidy way to do this in SQL?

The answer you have accepted is not valid syntax as DELETE does not allow an ORDER BY clause. You can use
;WITH T AS
(
    SELECT TOP 45000 *
    FROM mytable
    ORDER BY filedate
)
DELETE FROM T

DELETE TOP(45000) FROM mytable ORDER BY filedate ASC;
Change the order by to ascending to get the rows in reverse order and then delete the top 45000.
Hope this helps.
Edit:-
I apologize for the invalid syntax. Here is my second attempt.
DELETE a
FROM myTable a
INNER JOIN (SELECT TOP(45000) * FROM myTable ORDER BY fileDate ASC) b
    ON a.id = b.id;
If you do not have a unique column then please use Martin Smith's CTE answer.

If the table rows are already stored in the right order (note: this is MySQL syntax; SQL Server's DELETE does not accept LIMIT):
DELETE FROM mytable LIMIT 45000;
If not, and the table has a correctly ordered auto_increment index, get the boundary row:
SELECT id, filedate FROM mytable ORDER BY id LIMIT 5000, 1;
save the id, and then delete:
DELETE FROM mytable WHERE id >= #id;
If the table is not ordered correctly, you could use filedate instead of id, but if it's a date without a time component, you could get undesired rows deleted from the same date, so be careful with the filedate-based deletion.

Related

How can I query for the previous claim for each claim. Window Function PostgreSQL

I have a table of claims and I want to attach each patient's previous claim. I've been able to do it with a select statement, but my dataset is 50+ million records and I'm hoping there is a more efficient way to do this. From my understanding, this query will need to scan the full table for each record. Would a window function be better? Could sorting the large table help at all?
http://www.sqlfiddle.com/#!17/09a53/6/0
select
    (select b."fill_date"
     from t1 b
     where b.user_id = a.user_id and b.fill_date < a.fill_date
     order by b.fill_date desc
     limit 1) as prior_fill_date,
    a.*
from t2 a
Thanks for the help
Please give this a try:
select *,
       lag(fill_date)
         over (partition by user_id order by fill_date)
         as prior_fill_date
from "sql_notebook_results_T42E95sESnn0"
order by user_id, fill_date;
This sorts only once. If performance is still not good enough, then you will need to look at adding an index on (user_id, fill_date).
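If it comes to that, a minimal sketch of such an index (the index name is arbitrary; the table name is taken from the fiddle above):
-- Supports the partition/order of the window function above.
create index idx_user_fill_date
    on "sql_notebook_results_T42E95sESnn0" (user_id, fill_date);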

PLSQL query for getting all records with MAX date

I'm working on a table which has more than 10 columns. One of the columns is ASAT, of type DATE (format yyyy-mm-dd HH:MM:SS:mmm).
I'm looking for a SQL query which returns all records having the max date. I'm trying to use that query in Java for a JDBC call.
I tried this:
Select * from tablename where ASAT in (select MAX(ASAT) from tablename)
But it is not returning any records.
Any help is really appreciated. Thanks
How about:
SELECT MAX(Asat) FROM TableA;
SELECT MAX(Asat) FROM TableA GROUP BY Asat;
When you self join, I suggest aliasing each copy of the table. Personally I use the table letter with a number afterwards in case I need to track it for larger queries.
Select *
from tablename t1
where t1.ASAT = (
    select MAX(t2.ASAT)
    from tablename t2
)
I believe you are looking for something like this, if I'm understanding you. First build a CTE containing the primary key and the MAX(ASAT). Then join to it, matching both the primary key and the MAX(ASAT) value (joining on the key alone would return every row). Note your "ID" may have to be more than one column.
with tbl_max_asat(id, max_asat) as (
    select id, max(asat) max_asat
    from tablename
    group by id
)
select *
from tablename t
join tbl_max_asat tma
    on t.id = tma.id
    and t.asat = tma.max_asat;
This old post just popped up because it was edited today. Maybe my answer will still help someone. :-)

Postgres Remove records by duplicate control_id [duplicate]

I have a table in a PostgreSQL 8.3.8 database, which has no keys/constraints on it, and has multiple rows with exactly the same values.
I would like to remove all duplicates and keep only 1 copy of each row.
There is one column in particular (named "key") which may be used to identify duplicates, i.e. there should only exist one entry for each distinct "key".
How can I do this? (Ideally, with a single SQL command.)
Speed is not a problem in this case (there are only a few rows).
A faster solution is
DELETE FROM dups a USING (
    SELECT MIN(ctid) as ctid, key
    FROM dups
    GROUP BY key
    HAVING COUNT(*) > 1
) b
WHERE a.key = b.key
  AND a.ctid <> b.ctid
DELETE FROM dupes a
WHERE a.ctid <> (SELECT min(b.ctid)
                 FROM dupes b
                 WHERE a.key = b.key);
This is fast and concise:
DELETE FROM dupes T1
USING dupes T2
WHERE T1.ctid < T2.ctid  -- delete the older versions
  AND T1.key = T2.key;   -- add more columns if needed
See also my answer at How to delete duplicate rows without unique identifier which includes more information.
EXISTS is simple and among the fastest for most data distributions:
DELETE FROM dupes d
WHERE EXISTS (
   SELECT FROM dupes
   WHERE  key = d.key
   AND    ctid < d.ctid
   );
From each set of duplicate rows (defined by identical key), this keeps the one row with the minimum ctid.
Result is identical to the currently accepted answer by a_horse. Just faster, because EXISTS can stop evaluating as soon as the first offending row is found, while the alternative with min() has to consider all rows per group to compute the minimum. Speed is of no concern to this question, but why not take it?
You may want to add a UNIQUE constraint after cleaning up, to prevent duplicates from creeping back in:
ALTER TABLE dupes ADD CONSTRAINT constraint_name_here UNIQUE (key);
About the system column ctid:
Is the system column “ctid” legitimate for identifying rows to delete?
If there is any other UNIQUE NOT NULL column in the table (like a PRIMARY KEY), then by all means use it instead of ctid.
If key can be NULL and you only want one of those, too, use IS NOT DISTINCT FROM instead of =. See:
How do I (or can I) SELECT DISTINCT on multiple columns?
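A minimal sketch of that variant of the EXISTS query above:
DELETE FROM dupes d
WHERE EXISTS (
   SELECT FROM dupes
   WHERE  key IS NOT DISTINCT FROM d.key  -- treats NULLs as equal
   AND    ctid < d.ctid
   );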
As that variant is slower, you might instead run the original query with = as is, and run this in addition:
DELETE FROM dupes d
WHERE key IS NULL
AND EXISTS (
   SELECT FROM dupes
   WHERE  key IS NULL
   AND    ctid < d.ctid
   );
And consider:
Create unique constraint with null columns
For small tables, indexes generally do not help performance. And we need not look further.
For big tables and few duplicates, an existing index on (key) can help (a lot).
For mostly duplicates, an index may add more cost than benefit, as it has to be kept up to date concurrently. Finding duplicates without index becomes faster anyway because there are so many and EXISTS only needs to find one. But consider a completely different approach if you can afford it (i.e. concurrent access allows it): Write the few surviving rows to a new table. That also removes table (and index) bloat in the process. See:
How to delete duplicate entries?
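A minimal sketch of that rewrite approach, assuming no concurrent access and that one arbitrary row per key should survive (dupes_clean is a hypothetical name):
BEGIN;
CREATE TABLE dupes_clean AS
SELECT DISTINCT ON (key) *  -- one surviving row per key
FROM   dupes;
DROP TABLE dupes;
ALTER TABLE dupes_clean RENAME TO dupes;
COMMIT;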
I tried this:
DELETE FROM tablename
WHERE id IN (SELECT id
             FROM (SELECT id,
                          ROW_NUMBER() OVER (PARTITION BY column1, column2, column3 ORDER BY id) AS rnum
                   FROM tablename) t
             WHERE t.rnum > 1);
as provided by the Postgres wiki:
https://wiki.postgresql.org/wiki/Deleting_duplicates
I would use a temporary table:
create table tab_temp as
select distinct f1, f2, f3, fn
from tab;
Then drop tab and rename tab_temp to tab.
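That is (a sketch; note the new table will not inherit indexes, constraints, or defaults from the original):
drop table tab;
alter table tab_temp rename to tab;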
I had to create my own version. The version written by @a_horse_with_no_name is way too slow on my table (21M rows). And @rapimo's simply doesn't delete dups.
Here is what I use on PostgreSQL 9.5
DELETE FROM your_table
WHERE ctid IN (
    SELECT unnest(array_remove(all_ctids, actid))
    FROM (
        SELECT
            min(b.ctid)     AS actid,
            array_agg(ctid) AS all_ctids
        FROM your_table b
        GROUP BY key1, key2, key3, key4
        HAVING count(*) > 1) c);
Another approach (works only if your table has a unique field such as id): find one id per column group and delete all ids that are not in that list.
DELETE
FROM users
WHERE users.id NOT IN (SELECT DISTINCT ON (username, email) id FROM users);
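Note that DISTINCT ON without an ORDER BY keeps an arbitrary row from each (username, email) group. A sketch of a variant that controls which row survives:
DELETE
FROM users
WHERE users.id NOT IN (
    SELECT DISTINCT ON (username, email) id
    FROM users
    ORDER BY username, email, id  -- keeps the lowest id per (username, email)
);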
PostgreSQL has window functions; you can use rank() to achieve your goal. Sample (note the window's ORDER BY must be on a column that distinguishes the duplicates, such as id; ordering by column1 itself would rank every duplicate row 1 and delete nothing):
WITH ranked AS (
    SELECT
        id, column1,
        rank() OVER (
            PARTITION BY column1
            ORDER BY id ASC
        ) AS r
    FROM table1
)
DELETE FROM table1 t1
USING ranked
WHERE t1.id = ranked.id AND ranked.r > 1
Here is another solution that worked for me.
delete from table_name a using table_name b
where a.id < b.id
and a.column1 = b.column1;
How about:
WITH
u AS (SELECT DISTINCT * FROM your_table),
x AS (DELETE FROM your_table)
INSERT INTO your_table SELECT * FROM u;
I was concerned about execution order (would the DELETE happen before the SELECT DISTINCT?), but it works fine for me: in PostgreSQL, all sub-statements of a single WITH query are executed with the same snapshot of the data, so the SELECT DISTINCT reads the table as it was before the DELETE.
And it has the added bonus of not needing any knowledge about the table structure.
Here is a solution using PARTITION BY and the virtual ctid column, which works like a primary key, at least within a single session:
DELETE FROM dups
USING (
    SELECT
        ctid,
        (
            ctid != min(ctid) OVER (PARTITION BY key_column1, key_column2 [...])
        ) AS is_duplicate
    FROM dups
) dups_find_duplicates
WHERE dups.ctid = dups_find_duplicates.ctid
  AND dups_find_duplicates.is_duplicate
A subquery is used to mark all rows as duplicates or not, based on whether they share the same "key columns", but not the same ctid, as the "first" one found in the "partition" of rows sharing the same keys.
In other words, "first" is defined as:
min(ctid) OVER (PARTITION BY key_column1, key_column2 [...])
Then, all rows where is_duplicate is true are deleted by their ctid.
From the documentation, ctid represents (emphasis mine):
The physical location of the row version within its table. Note that although the ctid can be used to locate the row version very quickly, a row's ctid will change if it is updated or moved by VACUUM FULL. Therefore ctid is useless as a long-term row identifier. A primary key should be used to identify logical rows.
Well, none of these solutions would work if the id itself is duplicated, which is my use case. Then the solution is simple:
myTable:
id name
0 value
0 value
0 value
1 value1
1 value1
create table dedupMyTable as select distinct * from myTable;
delete from myTable;
insert into myTable select * from dedupMyTable;
select * from myTable;
id name
0 value
1 value1
Of course, you shouldn't have duplicate ids in your table at all, unless it has no PK constraint or the platform simply doesn't support one, such as Hive/data lake tables.
Better to pay attention when loading your data to avoid dups over IDs.
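For example, on platforms that support constraints, a guard like this catches duplicate loads early (a sketch; it fails if duplicate ids already exist, which is the point):
ALTER TABLE myTable ADD PRIMARY KEY (id);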
DELETE FROM tracking_order
WHERE
    mvd_id IN (             -- column you need to deduplicate
        SELECT mvd_id
        FROM (
            SELECT
                mvd_id, thoi_gian_gui,
                ROW_NUMBER() OVER (
                    PARTITION BY mvd_id
                    ORDER BY thoi_gian_gui DESC) AS row_num
            FROM tracking_order
        ) s_alias
        WHERE row_num > 1)
    AND thoi_gian_gui IN (  -- column used to compare duplicates, e.g. last update time
        SELECT thoi_gian_gui
        FROM (
            SELECT
                thoi_gian_gui,
                ROW_NUMBER() OVER (
                    PARTITION BY mvd_id
                    ORDER BY thoi_gian_gui DESC) AS row_num
            FROM tracking_order
        ) s_alias
        WHERE row_num > 1)
With this code I removed all duplicates (7800445 rows), keeping only one copy of each row, in 7 min 28 secs.
This worked well for me. I had a table, terms, that contained duplicate values. I ran a query to populate a temp table with the ids of all duplicate rows, then ran a delete statement using the ids in that temp table. value is the column that contained the duplicates.
CREATE TEMP TABLE dupids AS
select id from (
    select value, id,
           row_number() over (partition by value order by value) as rownum
    from terms
) tmp
where rownum >= 2;

delete from terms where id in (select id from dupids);

duplicate multi column entries postgresql

I have a bunch of data in a PostgreSQL database. I think that two keys should form a unique pair, so I want to enforce that in the database. I tried
create unique index key1_key2_idx on table(key1,key2)
but that fails, telling me that I have duplicate entries.
How do I find these duplicate entries so I can delete them?
select key1, key2, count(*)
from table
group by key1, key2
having count(*) > 1
order by 3 desc;
The critical part of the query to determine the duplicates is having count(*) > 1.
There are a whole bunch of neat tricks at the following link, including some examples of removing duplicates: http://postgres.cz/wiki/PostgreSQL_SQL_Tricks
Assuming you only want to delete the duplicates and keep the original, the accepted answer is inaccurate -- it'll delete your originals as well and only keep records that have one entry from the start. This works on 9.x:
SELECT * FROM tblname WHERE ctid IN
    (SELECT ctid FROM
        (SELECT ctid, ROW_NUMBER() OVER
             (PARTITION BY col1, col2, col3 ORDER BY ctid) AS rnum
         FROM tblname) t
     WHERE t.rnum > 1);
https://wiki.postgresql.org/wiki/Deleting_duplicates

how to get rowNum like column in sqlite IPHONE

I have an SQLite database table. I need to retrieve the rows in ascending order by name, but when I sort ascending, the rowIds come out in jumbled order.
I need to retrieve a limited number of contacts, 5 at a time, in ascending order each time, like Aaaa - Eeee and then Ffff - Jjjjj, and so on.
But to set limits like 0-5, 5-10, ... I can't use rowids, since they are in jumbled order.
So I need another column (like ROWNUM in Oracle) which is in order 1, 2, 3, 4, 5, 6, 7, ... every time.
How do I retrieve such a column along with the existing columns?
Note: we don't have a ROWNUM-like column in SQLite.
The fake rownum solution is clever, but I am afraid it doesn't scale well (for a complex query, you have to join and, for each row, count the number of rows before the current row).
I would consider using create table tmp as select /*your query*/, because for a create-as-select operation, the rowid created when inserting the rows is exactly what the rownum would be (a counter). This is specified by the SQLite docs.
Once the initial query's results have been inserted, you only need to query the tmp table:
select rowid, /* your columns */ from tmp
order by rowid
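For example (a sketch; contactinfo and tmp are assumed names, and the rows must be inserted in sorted order as described above):
create table tmp as
select name from contactinfo order by name asc;

-- tmp's implicit rowid was assigned 1, 2, 3, ... in insertion (i.e. sorted) order
select rowid, name
from tmp
order by rowid
limit 5 offset 0;  -- next pages: offset 5, offset 10, ...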
You can use offset/limit.
Get the first, 2nd, and 3rd groups of five rows:
select rowid, name from contactinfo order by name limit 0, 5
select rowid, name from contactinfo order by name limit 5, 5
select rowid, name from contactinfo order by name limit 10, 5
Warning, using the above syntax requires SQLite to read through all prior records in sorted order. So to get the 10th record for statement number 3 above SQLite needs to read the first 9 records. If you have a large number of records this can be problematic from a performance standpoint.
More info on limit/offset:
Sqlite Query Optimization (using Limit and Offset)
Sqlite LIMIT / OFFSET query
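If OFFSET gets too slow, a common alternative (not from the answers above, just a sketch) is keyset pagination: remember the last name on the previous page and seek past it, letting an index on name do the work instead of skipping rows:
-- :last_name_seen is the last name of the previous page; use '' for the first page.
-- Assumes names are unique; with duplicates, key on (name, rowid) instead.
select rowid, name
from contactinfo
where name > :last_name_seen
order by name
limit 5;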
This is a way of faking a RowNum, hope it helps:
SELECT
    (SELECT COUNT(*)
     FROM Names AS t2
     WHERE t2.name < t1.name
    ) + (
     SELECT COUNT(*)
     FROM Names AS t3
     WHERE t3.name = t1.name AND t3.id < t1.id
    ) AS rowNum,
    id,
    name
FROM Names t1
ORDER BY t1.name ASC
SQL Fiddle example