Huge SQL dump even after deleting a large number of rows of data - CentOS

The size of the SQL dump is the same (30GB) even after I delete a large number of rows from a MySQL (MyISAM) table.
Note: the variable innodb_file_per_table is ON.
mysql> delete from radacct where YEAR(acctstarttime)='2014';
Query OK, 1963534 rows affected (1 hour 30.58 sec)

What exactly is the question?
If you have trouble storing the backup in one piece,
it may be easier to transport if you split it into several smaller parts:
part 1
select * from radacct where YEAR(acctstarttime)='2014' and id<100000000 order by id asc;
part 2
select * from radacct where YEAR(acctstarttime)='2014' and id<200000000 order by id asc;
etc ...
Afterwards you could compress the parts.
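If you are producing the parts with mysqldump rather than with the SELECT statements above, here is a sketch of the same idea including the compression step (the user and database names are only examples; id is the integer key used in the SELECTs above):
# dump one slice of the table, then compress it
mysqldump -u someuser -p --where="YEAR(acctstarttime)='2014' AND id<100000000" somedb radacct > radacct_part1.sql
gzip radacct_part1.sql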
PS: I can't add a reply to your comment, so I will add it here:
You can view this page, it has very useful info: MySQL InnoDB not releasing disk space after deleting data rows from table
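If the real problem is that the table's data file (not just the dump) stays at 30GB after the DELETE, the page linked above essentially boils down to rebuilding the table. A minimal sketch, assuming you can afford the table being locked while it is rebuilt:
-- rebuilds the table and reclaims the space freed by the DELETE
-- (works for MyISAM; for InnoDB it triggers a full table rebuild)
OPTIMIZE TABLE radacct;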

Related

PostgreSQL: UPDATE large table

I have a large PostgreSQL table of 29 million rows. The size (according to the stats tab in pgAdmin) is almost 9GB. The table is PostGIS-enabled with an empty geometry column.
I want to UPDATE the geometry column using ST_GeomFromText, reading from X and Y coordinate columns (SRID: 27700) stored in the same table. However, running this query on the whole table at once results in 'out of disk space' and 'connection to server lost' errors... the latter being less frequent.
To overcome this, should I UPDATE the 29 million rows in batches/stages? How can I do 1 million rows (the FIRST 1 million), then do the next 1 million rows until I reach 29 million?
Or are there other more efficient ways of updating large tables like this?
I should add, the table is hosted in AWS.
My UPDATE query is:
UPDATE schema.table
SET geom = ST_GeomFromText('POINT(' || eastingcolumn || ' ' || northingcolumn || ')',27700);
You did not give any server specs; writing 9GB can be pretty fast on recent hardware.
You should be OK with one long UPDATE - unless you have concurrent writes to this table.
A common trick to overcome this problem (a very long transaction locking writes to the table) is to split the UPDATE into ranges based on the primary key, run in separate transactions.
/* Use PK or any attribute with a known distribution pattern */
UPDATE schema.table SET ... WHERE id BETWEEN 0 AND 1000000;
UPDATE schema.table SET ... WHERE id BETWEEN 1000001 AND 2000000;
For a high level of concurrent writes, people use more subtle tricks (SELECT FOR UPDATE / NOWAIT, lightweight locks, retry logic, etc.).
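If you do go the batched route, here is a rough sketch of automating those ranges. It assumes PostgreSQL 11 or later (a DO block run outside an explicit transaction block may COMMIT) and an integer primary key named id - neither of which is stated in the question; the table and column names are the placeholders used above.
DO $$
DECLARE
  batch_start bigint := 0;
  batch_size  bigint := 1000000;
  max_id      bigint;
BEGIN
  SELECT max(id) INTO max_id FROM schema.tablename;
  WHILE batch_start <= max_id LOOP
    UPDATE schema.tablename
       SET geom = ST_GeomFromText('POINT(' || eastingcolumn || ' ' || northingcolumn || ')', 27700)
     WHERE id BETWEEN batch_start AND batch_start + batch_size - 1;
    COMMIT;  -- release locks and let autovacuum clean up dead tuples between batches
    batch_start := batch_start + batch_size;
  END LOOP;
END
$$;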
From my original question:
However, running this query on the whole table at once results in 'out of disk space' and 'connection to server lost' errors... the latter being less frequent.
Turns out our Amazon AWS instance database was running out of space, stopping my original ST_GeomFromText query from completing. Freeing up space fixed it.
On an important note, as suggested by @mlinth, ST_Point made my query run far quicker than ST_GeomFromText (24 minutes vs 2 hours).
My final query being:
UPDATE schema.tablename
SET geom = ST_SetSRID(ST_Point(eastingcolumn,northingcolumn),27700);

Select * from table_name is running slow

The table contains around 700,000 rows. Is there any way to make the query run faster?
This table is stored on a server.
I have tried running the query selecting only specific columns.
If select * from table_name is unusually slow, check for these things:
Network speed. How large is the data and how fast is your network? For large queries you may want to think about your data in bytes instead of rows. Run select bytes/1024/1024/1024 gb from dba_segments where segment_name = 'TABLE_NAME'; and compare that with your network speed.
Row fetch size. If the application or IDE is fetching one-row-at-a-time, each row has a large overhead with network lag. You may need to increase that setting.
Empty segment. In a few weird cases the table's segment size can increase and never shrink. For example, if the table used to have billions of rows, and they were deleted but not truncated, the space would not be released. Then a select * from table_name may need to read a lot of empty extents to get to the real data. If the GB size from the above query seems too large, run alter table table_name move; to rebuild the table and possibly save space (a sketch of this check and fix follows the list).
Recursive query. A query this simple almost couldn't have a bad execution plan. It's possible, but rare, for a recursive query to have a bad execution plan. While the query is running, look at select * from gv$sql where users_executing > 0;. There might be a data dictionary query that's really slow and needs to be tuned.
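Here is a sketch of the "empty segment" check and fix described above (TABLE_NAME, table_name and table_name_idx are placeholders, not from your question):
-- how much space is the segment actually using?
select bytes/1024/1024/1024 gb from dba_segments where segment_name = 'TABLE_NAME';
-- rebuild the table to release empty extents
alter table table_name move;
-- the move changes rowids and leaves the table's indexes UNUSABLE, so rebuild them
alter index table_name_idx rebuild;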

Postgres slow distinct query for multiple columns

I have a very simple query that is taking way too long to run.
SELECT DISTINCT col1,col2,col3,col4 FROM tbl1;
What indexes do I need to add to speed this up? I ran a simple vacuum; command and added the following index, but neither helped.
CREATE INDEX tbl_idx ON tbl1(col1,col2,col3,col4);
The table has 400k rows. In fact counting them is taking extremely long as well. Running a simple
SELECT count(*) from tbl1;
is taking 8 seconds. So it's possible my problem is with vacuuming or reindexing or something else; I'm not sure.
Here is the explain command
EXPLAIN SELECT DISTINCT col1,col2,col3,col4 FROM tbl1;
                                   QUERY PLAN
---------------------------------------------------------------------------------
 Unique  (cost=3259846.80..3449267.51 rows=137830 width=25)
   ->  Sort  (cost=3259846.80..3297730.94 rows=15153657 width=25)
         Sort Key: col1, col2, col3, col4
         ->  Seq Scan on tbl1  (cost=0.00..727403.57 rows=15153657 width=25)
(4 rows)
Edit: I'm currently running vacuum full, which will hopefully fix the issue, and then maybe someone can give me some pointers on where I went wrong. It has been running for several hours and is still going as far as I can tell. I did run
select relname, last_autoanalyze, last_autovacuum, last_vacuum, n_dead_tup from pg_stat_all_tables where n_dead_tup >0;
and the table has nearly 16 million dead tuples (n_dead_tup).
My data doesn't change that frequently so I ended up creating a materialized view
CREATE MATERIALIZED VIEW tbl1_distinct_view AS SELECT DISTINCT col1,col2,col3,col4 FROM tbl1;
that I refresh with a cronjob once a day at 6am
0 6 * * * psql -U mydb mydb -c 'REFRESH MATERIALIZED VIEW tbl1_distinct_view;'
Try forcing the database to use your index:
set enable_seqscan=off ;
SELECT DISTINCT col1,col2,col3,col4 FROM tbl1;
set enable_seqscan=on ;
VACUUM and VACUUM FULL are two commands that sound the same but have very different effects.
VACUUM scans a table for tuples that it no longer needs, so that the space can be overwritten during INSERT or UPDATE statements. This command only looks at deleted rows and does not "defragment" the table - it leaves the space usage the same, simply marking that dead space as reusable.
VACUUM FULL looks at every row, and reclaims the space left by deleted rows and dead tuples, essentially "defragmenting" the table. If this is done on a live table, it can take a very long time, and can result in heavyweight locks, increased I/O, and index bloat.
I imagine what you need is a VACUUM followed by an ANALYZE, which will rebuild your statistics for each table, improving index performance. These should be performed reasonably regularly in low-usage times for a database. Only if you have a lot of space to reclaim (due to lots of DELETE statements) should you use VACUUM FULL.
Anyhow, since you've run a VACUUM FULL, once it is complete you should run an ANALYZE on the database, followed by a REINDEX (on the database), and then EXPLAIN your query again; you should notice an improvement.
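A minimal sketch of that maintenance sequence, using the table and database names from your post (note that REINDEX blocks writes to the table while it runs):
VACUUM ANALYZE tbl1;     -- reclaim dead tuples and refresh planner statistics
REINDEX TABLE tbl1;      -- or REINDEX DATABASE mydb; to rebuild every index
EXPLAIN SELECT DISTINCT col1,col2,col3,col4 FROM tbl1;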

Oracle order by query very slow

I am facing a problem when doing order by on a table.
My select query is working fine, but when I do ORDER BY (even on the primary key) it just goes on and on with no results. Finally I need to kill the session. The table has 20K records.
Any suggestions for this?
Query is as:
SELECT * FROM Users ORDER BY ID;
I do not know about the query plan as I am new to Oracle.
For the unordered query, is SQL Developer retrieving and displaying all 20K rows, or just the first 50? Your comparison might not be fair.
What is the size of those 20K rows: select bytes/1024/1024 MB from user_segments where segment_name = 'USERS'; I've seen many cases where a few megabytes of data use many gigabytes of storage. Maybe the data was very large before and somebody just deleted it (this doesn't remove the space). Or maybe somebody inserted those rows 1 at a time with an APPEND hint, and each row is taking an entire block.
Your query might be waiting for more temp tablespace for sorting, look at DBA_RESUMABLE.
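Since you mention not knowing about query plans, here is a quick sketch of how to see one in Oracle, using the query from your post:
EXPLAIN PLAN FOR SELECT * FROM Users ORDER BY ID;
-- display the plan that was just explained
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);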

SQLite - a smart way to remove and add new objects

I have a table in my database and I want each row to have a unique id, with the rows numbered sequentially.
For example: I have 10 rows, each with an id, starting from 0 and ending at 9. When I remove a row from the table, let's say row number 5, a "hole" appears. And when I add more data afterwards, the "hole" is still there.
It is important for me to know the exact number of rows and to have data at every row id, so that I can access my table arbitrarily.
Is there a way to do this in SQLite? Or do I have to manage the removing and adding of data manually?
Thank you in advance,
Ilya.
It may be worth considering whether you really want to do this. Primary keys usually should not change through the lifetime of the row, and you can always find the total number of rows by running:
SELECT COUNT(*) FROM table_name;
That said, the following trigger should "roll down" every ID number whenever a delete creates a hole:
CREATE TRIGGER sequentialize_ids AFTER DELETE ON table_name FOR EACH ROW
BEGIN
    UPDATE table_name SET id = id - 1 WHERE id > OLD.id;
END;
I tested this on a sample database and it appears to work as advertised. If you have the following table:
id name
1 First
2 Second
3 Third
4 Fourth
If you delete the row where id=2, afterwards the table will be:
id name
1 First
2 Third
3 Fourth
This trigger can take a long time and has very poor scaling properties (it takes longer for each row you delete and each remaining row in the table). On my computer, deleting 15 rows at the beginning of a 1000 row table took 0.26 seconds, but this will certainly be longer on an iPhone.
I strongly suggest that you rethink your design. In my opinion you're asking for trouble in the future (e.g. if you create another table and want to have relations between the tables).
If you want to know the number of rows just use:
SELECT count(*) FROM table_name;
If you want to access rows in the order of id, just define this field using PRIMARY KEY constraint:
CREATE TABLE test (
id INTEGER PRIMARY KEY,
...
);
and get rows using an ORDER BY clause with ASC or DESC:
SELECT * FROM table_name ORDER BY id ASC;
SQLite creates an index for the primary key field, so this query is fast.
I think that you would be interested in reading about LIMIT and OFFSET clauses.
The best source of information is the SQLite documentation.
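For example, a minimal sketch of fetching one "page" of rows in id order (the page size of 10 here is arbitrary):
-- rows 21 to 30 when ordered by id
SELECT * FROM table_name ORDER BY id LIMIT 10 OFFSET 20;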
If you don't want to take Stephen Jennings's very clever but performance-killing approach, just query a little differently. Instead of:
SELECT * FROM mytable WHERE id = ?
Do:
SELECT * FROM mytable ORDER BY id LIMIT 1 OFFSET ?
Note that OFFSET is zero-based, so you may need to subtract 1 from the variable you're indexing in with.
If you want to reclaim deleted row ids, the VACUUM command or the auto_vacuum pragma may be what you seek:
http://www.sqlite.org/faq.html#q12
http://www.sqlite.org/lang_vacuum.html
http://www.sqlite.org/pragma.html#pragma_auto_vacuum
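A minimal sketch of the two options from those links (note that changing auto_vacuum from NONE on an existing, non-empty database only takes effect after a VACUUM is run):
PRAGMA auto_vacuum = FULL;  -- truncate freed pages automatically on each commit
VACUUM;                     -- rebuild the database file and reclaim free space now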