I want to store a 15-digit number in a table.
In terms of lookup speed, should I use bigint or varchar?
Additionally, if it's populated with millions of records, will the different data types have any impact on storage?
In terms of space, a bigint takes 8 bytes, while a 15-character string takes up to 19 bytes (a 4-byte header plus up to another 15 bytes for the data), depending on its value. Generally speaking, equality checks should also be slightly faster on a numeric type. Beyond that, it largely depends on what you intend to do with this data. If you intend to use numerical/mathematical operations or query by ranges, you should use a bigint.
You can store them as BIGINT, as comparisons on integer types are faster than on varchar. I would also advise creating an index on the column if you are expecting millions of records, to make data retrieval faster.
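For example, a minimal sketch of that setup (the table and column names are made up):
CREATE TABLE accounts (
    account_no BIGINT NOT NULL,   -- a 15-digit number fits easily; bigint goes up to about 9.2 * 10^18
    PRIMARY KEY (account_no)      -- the primary key is backed by an index, so lookups stay fast
);

-- If the number is not the key, an explicit index serves the same purpose:
-- CREATE INDEX idx_accounts_account_no ON accounts (account_no);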
You can also check this forum:
Indexing is fastest on integer types. Char types are probably stored underneath as integers (guessing). VARCHARs are variable allocation, and when you change the string length it may have to reallocate. Generally, integers are fastest, then single characters, then CHAR[x] types, and then VARCHARs.
I have a table with about 200 million rows and multiple columns of data type DECIMAL(p,s) with varying precision/scale.
Now, as far as I understand, DECIMAL(p,s) is a fixed-size column, with the size depending on the precision, see:
https://learn.microsoft.com/en-us/sql/t-sql/data-types/decimal-and-numeric-transact-sql?view=sql-server-ver16
Now, when altering the table and changing a column from DECIMAL(15,2) to DECIMAL(19,6), for example, I would have expected there to be almost no work for SQL Server to do, since the bytes required to store the value are the same. Yet the ALTER itself takes a long time. So what exactly does the server do when I execute the ALTER statement?
Also, is there any benefit (other than having constraints on a column) to storing a DECIMAL(15,2) instead of, for example, a DECIMAL(19,2)? It seems to me the storage requirements would be the same, but I could store larger values in the latter.
Thanks in advance!
The precision and scale of a decimal / numeric type matters considerably.
As far as SQL Server is concerned, decimal(15,2) is a different data type to decimal(19,6), and is stored differently. You therefore cannot make the assumption that just because the overall storage requirements do not change, nothing else does.
SQL Server stores decimal data types in byte-reversed (little-endian) format, with the scale being the first incrementing value. Changing the definition can therefore require the data to be re-written; SQL Server will use an internal worktable to safely convert the data and update the values on every page.
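For illustration, a hedged T-SQL sketch of the kind of change discussed (the table and column names are made up); DATALENGTH lets you spot-check the per-value storage before and after:
-- Even though DECIMAL(15,2) and DECIMAL(19,6) both occupy 9 bytes per value,
-- SQL Server still converts and rewrites the stored values via an internal worktable.
ALTER TABLE dbo.Amounts
    ALTER COLUMN Amount DECIMAL(19,6) NOT NULL;   -- restate NULL / NOT NULL to match the existing definition

-- Spot-check the stored size per value (reported in bytes):
SELECT TOP (10) Amount, DATALENGTH(Amount) AS stored_bytes
FROM dbo.Amounts;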
I have been tasked with converting some Oracle DBs to PostgreSQL, using the AWS Schema Conversion Tool. Oracle only has the Number[(precision[,scale])] type. It would seem that Oracle stores integers using fixed-point. A lot of ID columns are defined as Number(19,0), which is being converted to Numeric(19,0) by SCT.
I've even seen one simple Number (so far) which gets converted to Double Precision.
Postgres has proper scalar integer types like bigint.
At first blush it seems that storing integers as fixed-point numbers would be grossly inefficient in both storage and time compared to simple integers.
Am I missing something? Does Oracle store them as efficient scalar ints under the covers?
Out of interest what's the best type for a simple ID column in Oracle?
Oracle's number data type is a variable-length data type, which means that the value 1 uses less storage (and memory) than 123456789.
In theory number(19,0) should be mapped to bigint, however Oracle's number(19,0) allows storing a value like 9999999999999999999 which would exceed the range for a bigint in Postgres.
The biggest value a bigint can store is 9223372036854775807; if that is enough for you, then stick with bigint.
If you really need higher values, you will have to bite the bullet and use numeric in Postgres.
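If you want to see the difference on the Postgres side, a quick check with pg_column_size (the reported sizes include per-value headers):
-- Compare the stored size of the same 19-digit value as bigint vs. numeric(19,0):
SELECT pg_column_size(9223372036854775807::bigint)        AS bigint_bytes    -- always 8
     , pg_column_size(9223372036854775807::numeric(19,0)) AS numeric_bytes;  -- variable-length, larger for a value this big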
I have an array that looks like this: [[(Double,Double)]]. It's a multi-dimensional array of tuples.
This is data that I will never query on, as it doesn't need to be queried. It only makes sense in this shape on the client side. I'm thinking of storing the entire thing as a string and then parsing it back into a multi-dimensional array.
Would that be a good approach, and would the parsing be very expensive, considering I can have a max of 20 arrays with at most 4 inner entries each, each entry being a tuple of 2 Doubles?
How would I check which is the better approach, and whether storing it as a multi-dimensional array in PostgreSQL is the better option?
How would I store it?
To store an array of composite type (with any nesting level), you need a registered base type to work with. You could have a table defining the row type, or just create the type explicitly:
CREATE TYPE dd AS (a float8, b float8);
Here are some ways to construct that 2-dimensional array of yours:
SELECT ARRAY [['(1.23,23.4)'::dd]]
, (ARRAY [['(1.23,23.4)']])::dd[]
, '{{"(1.23,23.4)"}}'::dd[]
, ARRAY[ARRAY[dd '(1.23,23.4)']]
, ARRAY(SELECT ARRAY (SELECT dd '(1.23,23.4)'));
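To actually store and read it back, a minimal sketch (the table and column names are made up):
CREATE TABLE client_data (
  id   bigserial PRIMARY KEY
, data dd[]                        -- nested array of the composite type defined above
);

INSERT INTO client_data (data)
VALUES ('{{"(1.23,23.4)","(2.5,7.75)"},{"(0.1,0.2)","(3.3,4.4)"}}'::dd[]);

SELECT (data[1][2]).a, (data[1][2]).b   -- array subscripts are 1-based by default
FROM   client_data;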
Related:
How to pass custom type array to Postgres function
Pass array from node-postgres to plpgsql function
Note that the Postgres array type dd[] can store values with any level of nesting. See:
Mapping PostgreSQL text[][] type and Java type
Whether that's more efficient than just storing the string literal as text very much depends on details of your use case.
Array types occupy an overhead of 24 bytes plus the usual storage size of the element values.
float8 (= double precision) occupies 8 bytes. The text string '1' occupies 2 bytes on disk and 4 bytes in RAM. text '123.45678' occupies 10 bytes on disk and 12 bytes in RAM.
Simple text will be read and written a bit faster than an array type of equal size.
Large text values are compressed (automatically), which can benefit storage size (especially with repetitive patterns) - but adds compression / decompression cost.
An actual Postgres array is cleaner in any case, as Postgres does not allow illegal strings to be stored.
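If in doubt, you can measure both variants for your actual data with pg_column_size:
-- Stored size (in bytes) of the same small sample as a Postgres array vs. a plain text literal:
SELECT pg_column_size('{{"(1.23,23.4)"}}'::dd[]) AS array_bytes
     , pg_column_size('[[(1.23,23.4)]]'::text)   AS text_bytes;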
Using IBM DB2 9.7, in a 32k tablespace, and assuming a 10000-byte (ten thousand bytes) column fits nicely in the tablespace: is there a difference between these two, and is one preferred over the other?
VARCHAR(10000)
CLOB(536870912) INLINE LENGTH 10000
Is either preferred in terms of functionality and performance? At a quick glance, the CLOB appears more versatile: all content shorter than 10000 bytes is stored in the tablespace, but if bigger content is needed, that is fine too; it is just stored elsewhere on disk.
There are a number of restrictions on the way CLOB can be used in a query:
Special restrictions apply to expressions resulting in a CLOB data type, and to structured type columns; such expressions and columns are not permitted in:
- A SELECT list preceded by the DISTINCT clause
- A GROUP BY clause
- An ORDER BY clause
- A subselect of a set operator other than UNION ALL
- A basic, quantified, BETWEEN, or IN predicate
- An aggregate function
- VARGRAPHIC, TRANSLATE, and datetime scalar functions
- The pattern operand in a LIKE predicate, or the search string operand in a POSSTR function
- The string representation of a datetime value
So if you need to do any of those things, VARCHAR is to be preferred.
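For example, a query like the following (against a hypothetical table) is fine on the VARCHAR(10000) column but would be rejected if the column were a CLOB, because it uses both DISTINCT and ORDER BY:
SELECT DISTINCT body        -- body VARCHAR(10000): OK; body CLOB(...): not permitted per the restrictions above
FROM   notes
ORDER  BY body;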
I don't have a definitive answer about performance (unfortunately, information like this just doesn't seem to be in the documentation--or at least, it is not easily locatable). However, logically speaking, the DB has more work to do with a CLOB. It has to decide whether to return the CLOB directly in the result or not. That has to mean at least some overhead. Here is a good discussion of some of the issues, though it doesn't give a clear answer on performance, either.
My default position would be to use VARCHAR unless CLOB is really needed (data in the column can be bigger than the VARCHAR limit).
I've started working on a project where there is a fairly large table (about 82,000,000 rows) that I think is very bloated. One of the fields is defined as:
consistency character varying NOT NULL DEFAULT 'Y'::character varying
It's used as a boolean, the values should always either be ('Y'|'N').
Note: there is no check constraint, etc.
I'm trying to come up with reasons to justify changing this field. Here is what I have:
It's being used as a boolean, so make it that. Explicit is better than implicit.
It will protect against coding errors, because right now anything that can be converted to text will go in there blindly.
Here are my question(s).
What about size/storage? The db is UTF-8. So, I think there really isn't much of a savings in that regard. It should be 1 byte for a boolean, but also 1 byte for a 'Y' in UTF-8 (at least that's what I get when I check the length in Python). Is there any other storage overhead here that would be saved?
Query performance? Will Postgres get any performance gains for a WHERE clause of "=TRUE" vs. "='Y'"?
PostgreSQL (unlike Oracle) has a fully-fledged boolean type. Generally, a "yes/no flag" should be boolean. That's the appropriate type!
What about size/storage?
A boolean column occupies 1 byte on disk.
The manual about text or character varying:
the storage requirement for a short string (up to 126 bytes) is 1 byte plus the actual string
That's at least 2 bytes for a single character.
Actual storage is more complicated than that. There is some fixed overhead per table, page and row, there is special NULL storage and some types require data alignment. See:
How many records can I store in 5 MB of PostgreSQL on Heroku?
Encoding UTF8 doesn't make any difference here. Basic ASCII characters are bit-compatible with other encodings like LATIN-1.
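A quick way to confirm those per-value sizes (pg_column_size reports the bytes used, including any header):
SELECT pg_column_size(true)      AS boolean_bytes   -- 1
     , pg_column_size('Y'::text) AS text_bytes;     -- 2 (1-byte header + 1 byte of data)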
In your case, according to your description, you should keep the NOT NULL constraint you already have - independent of the data type.
Query performance?
Will be slightly better in any case with boolean. Besides being smaller, the logic for boolean is simpler and varchar or text are also generally burdened with COLLATION rules. But don't expect much for something that simple.
Instead of:
WHERE consistency = 'Y'
You could write:
WHERE consistency = true
But rather simplify to just:
WHERE consistency
No further evaluation needed.
Change type
Transforming your table is simple:
ALTER TABLE tbl ALTER consistency TYPE boolean
USING CASE consistency WHEN 'Y' THEN true ELSE false END;
This CASE expression folds everything that is not TRUE ('Y') to FALSE. The NOT NULL constraint just stays.
Neither storage size nor query performance will be significantly better after switching from a single-character VARCHAR to a BOOLEAN. Although you are right that it's technically cleaner to use boolean when you are talking about a binary value, the cost of the change is probably significantly higher than the benefit. If you're worried about correctness, you could put a check constraint on the column, for example:
ALTER TABLE tablename ADD CONSTRAINT consistency CHECK (consistency IN ('Y', 'N'));