Why do Redshift COPY queries use (much) more disk space for tables with a sort key - amazon-redshift

I have a large set of data on S3 in the form of a few hundred CSV files that are ~1.7 TB in total (uncompressed). I am trying to copy it to an empty table on a Redshift cluster.
The cluster is empty (no other tables) and has 10 dw2.large nodes. If I set a sort key on the table, the COPY command uses up all available disk space about 25% of the way through and aborts. If there's no sort key, the copy completes successfully and never uses more than 45% of the available disk space. This behavior is consistent whether or not I also set a distribution key.
I don't really know why this happens, or if it's expected. Has anyone seen this behavior? If so, do you have any suggestions for how to get around it? One idea would be to try importing each file individually, but I'd love to find a way to let Redshift deal with that part itself and do it all in one query.

Got an answer to this from the Redshift team. The cluster needs free space of at least 2.5x the incoming data size to use as temporary space for the sort. You can upsize your cluster, copy the data, and resize it back down.
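For reference, a load like this is usually a single COPY pointed at the S3 prefix. The sketch below is only illustrative: the table name, bucket and IAM role are placeholders, not values from the question.
COPY my_table
FROM 's3://my-bucket/csv-prefix/'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
FORMAT AS CSV
COMPUPDATE ON;  -- let Redshift pick column encodings on the first load into an empty table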

Each dw2.large node has 0.16 TB of disk space, so with a cluster of 10 nodes the total space available is around 1.6 TB.
You mentioned that you have around 1.7 TB of raw (uncompressed) data to be loaded into Redshift.
When you load data into Redshift with the COPY command, Redshift automatically compresses your data as it loads it into the table.
Once a table is loaded, you can see the compression encodings with the query below:
Select "column", type, encoding
from pg_table_def where tablename = 'my_table_name'
First load your data into the table without a sort key and see which compression encodings are applied.
I suggest you drop and recreate the table each time you load data during your testing, so that the compression encodings are analysed on every load. Once you have loaded the table with COPY, see the link below and run the script there (or the SVV_TABLE_INFO query sketched after it) to determine the table size:
http://docs.aws.amazon.com/redshift/latest/dg/c_analyzing-table-design.html
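A quick alternative to the linked script, for just the size and compression check, is the SVV_TABLE_INFO system view (the table name below is a placeholder); size is reported in 1 MB blocks:
SELECT "table", encoded, size AS size_1mb_blocks, pct_used, unsorted
FROM svv_table_info
WHERE "table" = 'my_table_name';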
When you apply a sort key to the table and load data, the sort key itself also occupies some disk space, so a table with a sort key needs more disk space than the same table without one. You also need to make sure that compression is actually being applied to the table.
Finally, if you use a sort key, check that you are loading the data in sort key order as well, so that it is stored already sorted; that way you avoid having to run VACUUM to sort the table after the load.

Related

Postgresql - How do I find compressed fields in a table?

Postgresql uses different storage techniques for field values. If values become large (e.g. are long texts), Postgresql will eventually apply compression and/or TOAST the values.
How do I find out which fields in a table are compressed?
(Background: I have a database that stores tiny BLOBs in some columns and I want to find out how much of it is compressed; if compression is hardly used, I want to turn it off so Postgres won't waste CPU cycles trying.)
Starting in v14, there is the function pg_column_compression.
select pg_column_compression(colname),count(*) from tablename group by 1;
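If that shows compression is hardly ever applied, one way to stop Postgres from attempting it on future values is to change the column's storage mode; the table and column names below are placeholders, and note this only affects newly written rows:
ALTER TABLE tablename
  ALTER COLUMN blobcol SET STORAGE EXTERNAL;  -- allows out-of-line (TOAST) storage, but no compression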

PostgreSQL - inserting string of 30,000 characters doesn't change size?

Via the command
select
relname as "Table",
pg_size_pretty(pg_total_relation_size(relid)) as "Size",
pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid)) as "External Size"
from pg_catalog.pg_statio_user_tables order by pg_total_relation_size(relid) desc;
I can retrieve the size of my tables (according to this article), which works. But I just came to a weird conclusion: inserting multiple values that contain approx. 30,000 characters each doesn't change the size.
When executing the query before inserting, I get
 tablename   | size   | external size
-------------+--------+---------------
 participant | 264 kB | 256 kB
After inserting (btw they are base64 encoded images) and executing the select command, I get the exact same sizes returned.
I figured this couldn't be correct so I was wondering, is the command wrong? Or does PostgreSQL do something special with very large strings?
(In pgAdmin III the strings do not show in the 'view data' view, but they are shown when executing select base64image from participant.)
Besides this, I was wondering (not my main question, but it would be nice to have it answered) whether this is best practice, since my app generates Base64 images, or whether I should e.g. convert them to image files on the backend and store the images on my server instead of in the database.
Storage management
When you insert (or update) data that requires more space on disk than it currently uses, Postgres (or actually any DBMS) will allocate that space to store the new data.
When you delete data, either by setting a column to a smaller value or by deleting rows, the space is not immediately released to the operating system. The assumption is that this space will most probably be re-used by subsequent updates or inserts, and extending a file is a relatively expensive operation, so the database tries to avoid that (again, this is something all DBMS do).
If the space allocated is much bigger than the space actually in use, this can influence retrieval speed, especially for table scans ("Seq Scan" in the execution plan), as more blocks than necessary need to be read from the hard disk. This is also known as "table bloat".
It is possible to shrink the space used with the statement VACUUM FULL, but that should only be used if you do suspect a problem with "bloat". This blog post explains this in more detail.
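As a sketch, using the participant table from the question, you could compare the reported size before and after the rewrite; VACUUM FULL takes an ACCESS EXCLUSIVE lock, so run it during a quiet period:
SELECT pg_size_pretty(pg_total_relation_size('participant'));  -- size before
VACUUM FULL participant;  -- rewrites the table and returns freed space to the OS
SELECT pg_size_pretty(pg_total_relation_size('participant'));  -- size after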
Storing binary data in the database
If you want to store images in the database, then by all means use bytea instead of a string value. An image encoded in Base64 takes roughly a third more space than the raw data would.
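As a sketch of what that migration could look like, assuming the base64image column from the question and a new, made-up bytea column named image_raw:
ALTER TABLE participant ADD COLUMN image_raw bytea;
UPDATE participant SET image_raw = decode(base64image, 'base64');  -- decode() turns Base64 text into raw bytes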
There are pros and cons regarding the question of whether binary data (images, documents) should be stored in the database or not. This is a somewhat subjective decision and depends on a lot of external factors.
See e.g. here: Which is the best method to store files on the server (in database or storing the location alone)?

Redshift doesn't recognize newly loaded data as pre-sorted

I am trying to load very large volumes of data into Redshift, into a single table that would be cost-prohibitive to VACUUM once loaded. To avoid having to vacuum this table, I am loading the data with the COPY command from a large number of pre-sorted CSV files. The files I am loading are pre-sorted based on the sort keys defined in the table.
However, after loading the first two files, I find that Redshift reports the table as ~50% unsorted. I have verified that the files have the data in the correct sort order. Why would Redshift not recognize the new incoming data as already sorted?
Do I have to do anything special to let the copy command know that this new data is already in the correct sort order?
I am using the SVV_TABLE_INFO table to determine the sort percentage (using the unsorted field). The sort key is a composite key of three different fields (plane, x, y).
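For reference, the check described above boils down to something like the following (the table name is a placeholder):
SELECT "table", unsorted, tbl_rows
FROM svv_table_info
WHERE "table" = 'my_table';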
Official Answer by Redshift Support:
Here is what we say officially:
http://docs.aws.amazon.com/redshift/latest/dg/vacuum-load-in-sort-key-order.html
When your table has a sort key defined, the table is divided into 2 regions:
sorted, and
unsorted
As long as you load data in sort key order, even though the data is in the unsorted region, it is still in sort key order, so there is no need for VACUUM to ensure the data is sorted. A VACUUM is still needed to move the data from the unsorted region to the sorted region, however this is less critical as the data in the unsorted region is already sorted.
Storing sorted data in Amazon Redshift tables has an impact on several areas:
How data is displayed when ORDER BY is not used
The speed with which data is sorted when using ORDER BY (eg sorting is faster if it is mostly pre-sorted)
The speed of reading data from disk, based upon Zone Maps
While you might want to choose a SORTKEY that improves sort speed (eg Change order of sortkey to descending), the primary benefit of a SORTKEY is to make queries run faster by minimizing disk access through the use of Zone Maps.
I admit there doesn't seem to be a lot of documentation available about how Zone Maps work, so I'll try and explain it here.
Amazon Redshift stores data on disk in 1MB blocks. Each block contains data relating to one column of one table, and data from that column can occupy multiple blocks. Blocks can be compressed, so they will typically contain more than 1MB of data.
Each block on disk has an associated Zone Map that identifies the minimum and maximum value in that block for the column being stored. This enables Redshift to skip over blocks that do not contain relevant data. For example, if the SORTKEY is a timestamp and a query has a WHERE clause that limits data to a specific day, then Redshift can skip over any blocks where the desired date is not within that block.
Once Redshift locates the blocks with desired data, it will load those blocks into memory and will then perform the query across the loaded data.
Queries run fastest in Redshift when they have to load the fewest possible blocks from disk. Therefore, it is best to use a SORTKEY that commonly matches WHERE clauses, such as a timestamp where data is often restricted by date ranges. Sometimes it is worth setting the SORTKEY to the same column as the DISTKEY, even though they are used for different purposes.
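As an illustrative sketch (the table, column names and date filter are all made up), a table sorted by a timestamp lets a date-restricted query skip every block whose zone map falls outside the requested day:
CREATE TABLE clicks (
    user_id    BIGINT,
    clicked_at TIMESTAMP
)
DISTKEY (user_id)
SORTKEY (clicked_at);

SELECT COUNT(*)
FROM clicks
WHERE clicked_at >= '2015-06-01' AND clicked_at < '2015-06-02';  -- only blocks covering this day are read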
Zone maps can be viewed via the STV_BLOCKLIST virtual system table. Each row in this table includes:
Table ID
Column ID
Block Number
Minimum Value of the field stored in this block
Maximum Value of the field stored in this block
Sorted/Unsorted flag
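A sketch of inspecting those columns in practice (the table name is a placeholder; for non-numeric columns the min/max values are stored in a truncated internal form):
SELECT col, blocknum, minvalue, maxvalue, unsorted
FROM stv_blocklist
WHERE tbl = (SELECT table_id FROM svv_table_info WHERE "table" = 'my_table')
ORDER BY col, blocknum;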
I suspect that the Sorted flag is set after the table is vacuumed. However, tables do not necessarily need to be vacuumed. For example, if data is always appended in timestamp order, then the data is already sorted on disk, allowing Zone Maps to work most efficiently.
You mention that your SORTKEY is "a composite key using 3 fields". This might not be the best SORTKEY to use. It could be worth running some timing tests against tables with different SORTKEYs to determine whether the composite SORTKEY is better than using a single SORTKEY. The composite key would probably perform best if all 3 fields are often used in WHERE clauses.

Postgres determine size of all blobs

Hi, I'm trying to find the size of all blobs. I have always used this:
SELECT sum(pg_column_size(pg_largeobject)) lob_size FROM pg_largeobject
but now that my database has grown to ~40 GB this takes several hours and puts too much load on the CPU.
Is there a more efficient way?
Some of the functions mentioned in Database Object Management Functions give an immediate result for the entire table.
I'd suggest pg_table_size(regclass) which is defined as:
Disk space used by the specified table, excluding indexes (but
including TOAST, free space map, and visibility map)
It differs from sum(pg_column_size(tablename)) FROM tablename because it counts entire pages, so that includes the padding between the rows, the dead rows (updated or deleted and not reused), and the row headers.
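Applied to the question, that turns the hours-long scan into a single catalog lookup:
SELECT pg_size_pretty(pg_table_size('pg_largeobject')) AS lob_size;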

I get an error "could not write block .... of temporary file no space left on device ..." using postgresql

I'm running a really big query that inserts a lot of rows into a table, almost 8 million rows divided into some smaller queries, but at some point this error appears: "could not write block .... of temporary file: no space left on device". I don't know if I need to delete temporary files after each query (and how I could do that), or if it is related to another issue.
Thank you
OK. As there are still some facts missing, here is an attempt at an answer that may clarify the issue:
It appears that you are running out of disk space, most likely because you don't have enough space on your disk. Check with df -h on Linux/Unix, for example.
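From inside Postgres you can also see how much temporary-file space the database has been using (cumulative since the last statistics reset):
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
WHERE datname = current_database();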
To show you how this could happen:
For a table with, say, 3 integer columns, the data alone occupies about 12 bytes per row. You need to add some overhead for row management etc.; in another answer Erwin mentioned about 23 bytes and linked to the manual for more information. There may also be some padding between rows. So, doing a little math:
Even with just 3 integers we end up at about 40 bytes per row. Given that you want to insert 8,000,000 rows, that sums up to 320,000,000 bytes, or ~300 MB (for our 3-integer example only, and very roughly).
Now, given that you have a couple of indexes on this table, the indexes will also grow during the inserts. Another aspect could be bloat on the table and indexes, which might be cleared with a VACUUM.
So what's the solution?
Provide more disk space to your database
Split your inserts into smaller batches and ensure VACUUM is run between them (see the sketch below)
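A rough sketch of the second option; the table names and batch boundaries are made up, and the point is simply smaller transactions with a VACUUM in between, as suggested above:
INSERT INTO target_table SELECT * FROM staging_table WHERE id BETWEEN 1 AND 1000000;
VACUUM target_table;
INSERT INTO target_table SELECT * FROM staging_table WHERE id BETWEEN 1000001 AND 2000000;
VACUUM target_table;
-- ...and so on for the remaining batches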
Inserting data or creating an index uses temp_tablespaces, which determines the placement of temporary tables and indexes, as well as of temporary files used for purposes such as sorting large data sets. According to your error, the location your temp_tablespaces points to does not have enough disk space.
To resolve this problem, you can try one of these two approaches:
1. Reclaim space at the location your temp_tablespaces points to (by default PGDATA/base/pgsql_tmp).
2. If that space is still not enough for temporary storage, create another temp tablespace for the database:
create tablespace tmp_YOURS location '<a location with enough space>';
alter database yourDB set temp_tablespaces = tmp_YOURS;
GRANT ALL ON TABLESPACE tmp_YOURS TO USER_OF_DB;
Then disconnect the session and reconnect.
The error is quite self-explanatory: you are running a big query but you do not have enough disk space to do so. If PostgreSQL is installed in /opt..., check whether you have enough space there to run the query. If not, LIMIT the output to confirm you are getting the expected result, then run the full query and write the output to a file.