PostgreSQL: get the last rows from a table

I have a PostgreSQL table:
Username1 SomeBytes1
Username2 SomeBytes1
Username1 SomeBytes1
Username1 SomeBytes1
I need to get some rows with the name Username1, but from the end of the table. For example, I need the last two rows with Username1:
select * from my_table where "user" = 'Username1' LIMIT 2
This gives me the first 2 rows, but I need the last two.
How can I select them?
Thank you.

First and last in a table are very arbitrary. In order to have a predictable result you should always have an ORDER BY clause; and once you have that, getting the last two rows becomes easy.
For instance, if you have a primary key or something like an ID (which is populated by a sequence), then you can do:
select * from my_table where "user" = 'Username1' order by id desc limit 2;
desc tells the database to sort the rows in reverse order, which means that last will be first.
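If the table has no such column yet, one option is to add one. A minimal sketch, assuming PostgreSQL 10 or later and that rewriting my_table is acceptable:
-- Add an auto-populated id so "last" becomes well-defined.
-- Existing rows receive ids in no guaranteed order; only rows inserted
-- after this change are reliably "newest last".
ALTER TABLE my_table ADD COLUMN id bigint GENERATED ALWAYS AS IDENTITY;
SELECT * FROM my_table
WHERE "user" = 'Username1'
ORDER BY id DESC
LIMIT 2;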

Does your table have a primary key? Can your table be sorted?
Because the notion of 'first' and 'last' implies some ordering of the tuples. If that is the case, you can sort the data the other way around, so that your 'last' entries are on top. Then you can access them with the statement you tried.

To view the tail of a table you can use ctid. It is a temporary physical identifier of a record in PostgreSQL:
SELECT * FROM my_table
WHERE "user" = 'Username1'
ORDER BY ctid DESC
LIMIT 2;
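Keep in mind that ctid reflects the current physical location of a row, which can change after updates or VACUUM FULL, so this is only a heuristic for insertion order. If you want the two tail rows presented oldest-first again, a small sketch:
-- Fetch the last two rows, then flip them back into ascending order.
SELECT * FROM (
    SELECT ctid, * FROM my_table
    WHERE "user" = 'Username1'
    ORDER BY ctid DESC
    LIMIT 2
) AS tail
ORDER BY ctid;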

Postgres: Optimization of query with simple where clause

I have a table with the following columns:
ID (VARCHAR)
CUSTOMER_ID (VARCHAR)
STATUS (VARCHAR) (4 different statuses possible)
other columns that are not relevant
I am trying to find all the rows with a given customer_id and either of two statuses.
The query looks like:
SELECT *
FROM my_table
WHERE customer_id = '12345678' AND status IN ('STATUS1', 'STATUS2');
The table contains about 1 million rows. I added two separate indexes, one on customer_id and one on status. The query still needs about 1 second to run.
The explain plan is:
1. Gather
2. Seq Scan on my_table
Filter: (((status)::text = ANY ('{SUBMITTED,CANCELLED}'::text[])) AND ((customer_id)::text = '12345678'::text))
I ran the 'analyze my_table' after creating the indexes. What could I do to improve the performance of this quite simple query?
You need a compound (multi-column) index to help satisfy your query.
Guessing, it seems like the most selective column (the one with the most distinct values) is customer_id. status probably has only a few distinct values. So customer_id should go first in the index. Try this.
CREATE INDEX customer_id_status ON my_table (customer_id, status);
This creates a BTREE index. A useful mental model for such an index is an old-fashioned telephone book. It's sorted in order. You look up the first matching entry in the index, then scan it sequentially for the items you want.
You may also want to try running ANALYZE my_table; to update the statistics (about selectivity) used by the query planner to choose an appropriate index.
Pro tip: Avoid SELECT * if you can. Instead, name the columns you want. This can help performance a lot.
Pro tip: Your question said some of your columns aren't relevant. That's probably not true; index design is a weird art, and SELECT * makes it even less true.
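As a hedged sketch of both tips combined, assuming the query only really needs id back (the INCLUDE syntax requires PostgreSQL 11+), the following index lets the query run as an index-only scan:
-- Hypothetical covering index; the included column is an assumption.
CREATE INDEX my_table_customer_status_id
    ON my_table (customer_id, status) INCLUDE (id);
SELECT id
FROM my_table
WHERE customer_id = '12345678'
  AND status IN ('STATUS1', 'STATUS2');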

I found inconsistent data in Postgres

I have a table of data in Postgres. This table, call it 'table1', has a unique constraint on the field 'id'. The table also has 3 other fields: 'write_date', 'state', and 'state_detail'.
Until now, I never had a problem accessing this table or joining it with another table using the field 'id' as the join column. But this time, I got a strange result when querying table1.
When I run this query (call it Query1):
SELECT id, write_date, state, state_detail
FROM table1
WHERE write_date = '2019-07-30 19:42:49.314' or write_date = '2019-07-30 14:29:06.945'
it gives me 2 rows with the same id, but different values for the other fields:
id      write_date               state  state_detail
168972  2019-07-30 14:29:06.945  1      80
168972  2019-07-30 19:42:49.314  2      120
BUT, when I run this query (call it Query2):
SELECT id, write_date, state, state_detail
FROM table1
WHERE id = 168972
it gives me just 1 row:
id      write_date               state  state_detail
168972  2019-07-30 19:42:49.314  2      120
How come it gives different results? I mean, I checked table1; it has the unique constraint with 'id' as the primary key. How could this happen?
I have restarted the Postgres service and run those 2 queries again, and it still gives me the same results as above.
This looks like a case of index corruption, specifically on the unique index on the id column. Could you run the following query:
SELECT ctid, id, write_date, state, state_detail FROM table1
WHERE write_date = '2019-07-30 19:42:49.314' or write_date = '2019-07-30 14:29:06.945'
You will likely receive 2 rows back for the id, with two different ctids. The ctid represents the physical location on disk for each row. Presuming you get two rows back, you will need to pick a row and delete the other one in order to "fix" the data. In addition, you'll want to recreate the unique index on the table, in order to prevent this from happening again.
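A minimal sketch of that repair, assuming the duplicate you decide to drop has ctid '(0,2)' and the unique index is the primary-key index table1_pkey (both names are assumptions; substitute what the query above and \d table1 actually show):
-- Delete the unwanted duplicate by its physical location.
DELETE FROM table1 WHERE ctid = '(0,2)';
-- Rebuild the unique index so the constraint is enforced again.
REINDEX INDEX table1_pkey;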
Oh, and don't be surprised if you find other rows in the table like this. Depending on the source of the corruption (bad disks, bad memory, recent crash, upgrade to glibc?), this could be the sign of future trouble to come. I'd probably double-check all my tables for any issues, recreate any unique indexes, and look at the OS level for any signs of disk corruption or I/O errors. And if you aren't on the latest version of Postgres, you should upgrade.

How to sort table alphabetically by name initial?

I have a table containing the columns 'employeename' and 'id'. How can I sort the 'employeename' column in alphabetical order by the names' initials?
Say the table is like this now:
employeename rid eid
Dave 1 1
Ben 4 2
Chloe 6 6
I tried the ORDER BY command, and it shows what I want, but when I query the data again with SELECT, the displayed table data is the same as the original, indicating ORDER BY does not modify the data. Is this correct?
SELECT *
FROM employee
ORDER BY employeename ASC;
I expect the table data to be modified (sorted in alphabetical order by name) like this:
employeename rid eid
Ben 4 2
Chloe 6 6
Dave 1 1
the displayed table data is the same as the original, indicating ORDER BY does not modify the data, is this correct?
Yes, this is correct. A SELECT statement does not change the data in a table. Only UPDATE, DELETE, INSERT or TRUNCATE statements will change the data.
However, your question shows a misconception on how a relational database works.
Rows in a table (of a relational database) are not sorted in any way. You can picture them as balls in a basket.
If you want to display data in a specific sort order, the only (really: the only) way to do that is to use an ORDER BY in your SELECT statement. There is no alternative to that.
Postgres allows you to define a VIEW that includes an ORDER BY, which might be an acceptable workaround for you:
CREATE VIEW sorted_employees
AS
SELECT *
FROM employee
ORDER BY employeename ASC;
Then you can simply use
select *
from sorted_employees;
But be aware of the drawbacks. If you run select * from sorted_employees order by id then the data will be sorted twice. Postgres is not smart enough to remove the (useless) order by from the view's definition.
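You can see the double sort in the query plan; a quick sketch, reusing the id column from the example above:
-- Expect two Sort steps: one from the view's ORDER BY, one from the outer query.
EXPLAIN SELECT * FROM sorted_employees ORDER BY id;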
Some related questions:
Default row order in SELECT query - SQL Server 2008 vs SQL 2012
What is the default SQL result sort order with 'select *'?
Is PostgreSQL order fully guaranteed if sorting on a non-unique attribute?
Why do results from a SQL query not come back in the order I expect?

Join two tables where both joined columns have a large set of different values

I am currently trying to join two tables, where both tables have very many different values in the columns I am joining.
Here's the T-SQL:
SELECT AVG(Position) AS Position
FROM MonitoringGsc_Keywords AS sk
JOIN GSC_RankingData ON sk.Id = GSC_RankingData.KeywordId
GROUP BY sk.Id
The execution plan shows me that the join takes a very long time. I think it is because a huge set of values from the first table has to be compared with a huge set of values in the second table.
MonitoringGsc_Keywords.Id has about 60,000 different values.
GSC_RankingData has about 100,000,000 rows.
MonitoringGsc_Keywords.Id is the primary key of MonitoringGsc_Keywords; GSC_RankingData.KeywordId is indexed.
So, what can I do to increase performance?
Is the Position column from the GSC_RankingData table? If yes, then the JOIN is redundant and the query should look like this:
SELECT AVG(rd.Position) as Position
FROM GSC_RankingData rd
GROUP BY rd.KeywordId;
If the Position column is in the GSC_RankingData table, then the index on GSC_RankingData should include this column and look like this:
CREATE INDEX IX_GSC_RankingData_KeywordId_Position ON GSC_RankingData(KeywordId) INCLUDE(Position);
You should also check index fragmentation for these tables; to do this you could use this query:
SELECT * FROM sys.dm_db_index_physical_stats(db_id(), object_id('MonitoringGsc_Keywords'), null, null, 'DETAILED')
If avg_fragmentation_in_percent is > 5% and < 30%, then:
ALTER INDEX [index name] ON [table name] REORGANIZE;
If avg_fragmentation_in_percent is >= 30%, then:
ALTER INDEX [index name] ON [table name] REBUILD;
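For instance, using the index name suggested above (an assumption; substitute your real index and table names):
-- Moderate fragmentation (5-30%): reorganize in place.
ALTER INDEX IX_GSC_RankingData_KeywordId_Position ON GSC_RankingData REORGANIZE;
-- Heavy fragmentation (>= 30%): rebuild the index.
ALTER INDEX IX_GSC_RankingData_KeywordId_Position ON GSC_RankingData REBUILD;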
It could also be a problem with statistics; you can check them with this query:
SELECT
sp.stats_id, name, filter_definition, last_updated, rows, rows_sampled,
steps, unfiltered_rows, modification_counter
FROM sys.stats AS stat
CROSS APPLY sys.dm_db_stats_properties(stat.object_id, stat.stats_id) AS sp
WHERE stat.object_id = object_id('GSC_RankingData');
Check the last update date and the row counts; if they are not current, update the statistics. It is also possible that the statistics do not exist, in which case you must create them.
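A minimal sketch of both actions (the statistics name is an assumption):
-- Refresh existing statistics on the big table with a full scan.
UPDATE STATISTICS GSC_RankingData WITH FULLSCAN;
-- Or create missing statistics on the join column.
CREATE STATISTICS st_GSC_RankingData_KeywordId ON GSC_RankingData (KeywordId);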

SQLite - a smart way to remove and add new objects

I have a table in my database, and I want each row to have a unique id with the rows numbered sequentially.
For example: I have 10 rows, each with an id, starting at 0 and ending at 9. When I remove a row from the table, let's say row number 5, a "hole" occurs. And afterwards, when I add more data, the "hole" is still there.
It is important for me to know the exact number of rows and to have data at every row so that I can access my table arbitrarily.
Is there a way in SQLite to do this? Or do I have to manually manage the removing and adding of data?
Thank you in advance,
Ilya.
It may be worth considering whether you really want to do this. Primary keys usually should not change through the lifetime of the row, and you can always find the total number of rows by running:
SELECT COUNT(*) FROM table_name;
That said, the following trigger should "roll down" every ID number whenever a delete creates a hole:
CREATE TRIGGER sequentialize_ids AFTER DELETE ON table_name FOR EACH ROW
BEGIN
UPDATE table_name SET id=id-1 WHERE id > OLD.id;
END;
I tested this on a sample database and it appears to work as advertised. If you have the following table:
id name
1 First
2 Second
3 Third
4 Fourth
If you delete the row where id=2, afterwards the table will be:
id name
1 First
2 Third
3 Fourth
This trigger can take a long time and has very poor scaling properties (it takes longer for each row you delete and each remaining row in the table). On my computer, deleting 15 rows at the beginning of a 1000 row table took 0.26 seconds, but this will certainly be longer on an iPhone.
I strongly suggest that you re-think your design. In my opinion you're asking for trouble in the future (e.g. if you create another table and want to have relations between the tables).
If you want to know the number of rows just use:
SELECT count(*) FROM table_name;
If you want to access rows in the order of id, just define this field using a PRIMARY KEY constraint:
CREATE TABLE test (
id INTEGER PRIMARY KEY,
...
);
and get rows using an ORDER BY clause with ASC or DESC:
SELECT * FROM table_name ORDER BY id ASC;
SQLite creates an index for the primary key field, so this query is fast.
I think you would also be interested in reading about the LIMIT and OFFSET clauses.
The best source of information is the SQLite documentation.
If you don't want to take Stephen Jennings's very clever but performance-killing approach, just query a little differently. Instead of:
SELECT * FROM mytable WHERE id = ?
Do:
SELECT * FROM mytable ORDER BY id LIMIT 1 OFFSET ?
Note that OFFSET is zero-based, so you may need to subtract 1 from the variable you're indexing in with.
If you want to reclaim deleted row ids, the VACUUM command or the auto_vacuum pragma may be what you seek:
http://www.sqlite.org/faq.html#q12
http://www.sqlite.org/lang_vacuum.html
http://www.sqlite.org/pragma.html#pragma_auto_vacuum
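A minimal sketch of both options (note that changing auto_vacuum on an existing database only takes effect after a VACUUM):
-- One-off compaction of the database file.
VACUUM;
-- Or keep the file compact automatically from now on.
PRAGMA auto_vacuum = FULL;
VACUUM; -- required for the new auto_vacuum setting to take effect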