I need to store many (N ≈ 150) boolean values for a web app's "environment" variables.
What is the proper way to store them?
creating N columns and one (1) row of data,
creating two (2) or three (3) columns (id smallserial, name varchar(255), value boolean) with N rows of data,
by using jsonb data type,
by using the array data type,
by using bit string bit varying(n),
by another way (please advise)
Note: some of the names may be quite long.
Thanks in advance!
Could you perhaps use a bit string? https://www.postgresql.org/docs/7.3/static/datatype-bit.html. (Set the nth bit to 1 when the nth attribute would have been "true")
It depends on how you want to access them in normal usage.
If you need to access one value at a time, JSONB is a really good fit: it is easy and quick to look up a single key. If you need to read all of them in one call, bit string types are the best, but you have to be very careful about bit order and transcription when writing and reading.
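For illustration, a minimal sketch of both approaches (table and flag names are assumptions, not from the question):

-- jsonb: one row holding all flags, addressed by name
CREATE TABLE app_settings (
    id    smallserial PRIMARY KEY,
    flags jsonb NOT NULL
);
-- read a single flag
SELECT (flags ->> 'use_cache')::boolean AS use_cache FROM app_settings WHERE id = 1;
-- flip a single flag
UPDATE app_settings SET flags = jsonb_set(flags, '{use_cache}', 'false') WHERE id = 1;

-- bit string: compact, but the application must track which position means what
CREATE TABLE app_settings_bits (
    id    smallserial PRIMARY KEY,
    flags bit varying(150) NOT NULL
);
-- read bit number 41 (the first bit is bit 0)
SELECT get_bit(flags, 41) = 1 AS use_cache FROM app_settings_bits WHERE id = 1;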
Any of the options will do, depending on your circumstances. There is little need to optimise storage if you have only 150 values. Unless, of course, there can be a very large number of these sets of 150 values, or you are working in a very restricted environment like an embedded system (in which case a full-blown database client is probably not what you're looking for).
There is no definitive answer here, but I can give you a few guidelines to consider, from experience:
You don't want an anonymous string of values that is only interpreted in code. When you change anything later on, your 1101011 or 0x12f08a will turn into a fascinatingly enigmatic problem.
When the number of fields starts to grow, you will regret storing them all in a single cell on a single row, because you will either end up writing obscure SQL or transferring a larger-than-needed dataset from the server.
When you start to feel that boolean values are really not enough, you will wonder whether it is possible to store something else too.
Settings and environmental properties are seldom subject to processor- or data-intensive processing, so follow the easiest path.
My recommendation, based on the given information and some educated guessing, is that you'll probably want to store your information in a table like this:
key (string)    | set_idx (integer) | value (string)
----------------+-------------------+---------------
use.the.force   | 1899              | 1
home.directory  | 1899              | /home/dvader
use.the.force   | 1900              | 0
home.directory  | 1900              | /home/yoda
Converting a 1 to boolean true is cheap, and if you have only one set of values, you can ignore the set index.
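As a minimal sketch of that layout (names and types are assumptions, not prescriptions):

CREATE TABLE app_setting (
    key     text    NOT NULL,
    set_idx integer NOT NULL,
    value   text    NOT NULL,
    PRIMARY KEY (key, set_idx)
);

-- a boolean read is just a cast on the stored string
SELECT value::boolean AS use_the_force
FROM app_setting
WHERE key = 'use.the.force' AND set_idx = 1899;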
How to avoid the unnecessary CPU cost?
See this historic question with failing tests. Example: j->'x' is a JSONb value representing a number and j->'y' a boolean. From the first versions of JSONb (released in 2014 with 9.4) until today (6 years!), up to PostgreSQL v12, it seems that we need to enforce a double conversion:
Discard the "binary JSONb number" information of j->'x' and transform it into the printable string j->>'x'; discard the "binary JSONb boolean" information of j->'y' and transform it into the printable string j->>'y'.
Parse the string to obtain a "binary SQL float" by casting it, (j->>'x')::float AS x; parse the string to obtain a "binary SQL boolean" by casting it, (j->>'y')::boolean AS y.
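For concreteness, a minimal sketch of the double conversion described above (my_jsonb_table is a hypothetical table with a jsonb column j):

SELECT (j ->> 'x')::float   AS x,  -- jsonb -> text -> float
       (j ->> 'y')::boolean AS y   -- jsonb -> text -> boolean
FROM my_jsonb_table;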
Is there no syntax or optimized function that lets a programmer enforce the direct conversion?
I don't see it in the manual... Or was it never implemented: is there a technical barrier to it?
NOTES about a typical scenario where we need it
(responding to comments)
Imagine a scenario where your system needs to store many, many small datasets (a real example!) with minimal disk usage, managing them all with centralized control, metadata, etc. JSONb is a good solution here, and offers at least 2 good alternatives for storing them in the database:
Metadata (with a schema descriptor) and the whole dataset in an array of arrays;
Separating metadata and dataset rows into two tables.
(and variations where the metadata is translated into a cache of text[], etc.) Alternative 1, monolithic, is the best for the "minimal disk usage" requirement, and faster for full information retrieval. Alternative 2 can be the choice for random access or partial retrieval, when the table Alt2_DatasetLine also has one more column, like time, for time series.
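As a sketch of the two layouts (the names Alt1_AllDataset, j_alldata and Alt2_DatasetLine come from the text; the remaining column names are assumptions):

-- Alternative 1: one row per dataset, all data lines in a jsonb array of arrays
CREATE TABLE Alt1_AllDataset (
    dataset_id int PRIMARY KEY,
    metadata   jsonb NOT NULL,  -- schema descriptor
    j_alldata  jsonb NOT NULL   -- array of arrays: all rows of the dataset
);

-- Alternative 2: metadata and dataset rows in separate tables
CREATE TABLE Alt2_Dataset (
    dataset_id int PRIMARY KEY,
    metadata   jsonb NOT NULL
);
CREATE TABLE Alt2_DatasetLine (
    dataset_id int NOT NULL REFERENCES Alt2_Dataset (dataset_id),
    j          jsonb NOT NULL   -- one dataset row per table row
);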
You can create all the SQL VIEWs in a separate schema, for example:
CREATE VIEW mydatasets.t1234 AS
SELECT (j->>'d')::date AS d, j->>'t' AS t, (j->>'b')::boolean AS b,
(j->>'i')::int AS i, (j->>'f')::float AS f
FROM (
SELECT jsonb_array_elements(j_alldata) AS j FROM Alt1_AllDataset
WHERE dataset_id = 1234
) t
-- or FROM alt2...
;
And the CREATE VIEWs can all be automated by running the SQL string dynamically... we can reproduce the above "stable schema casting" with simple formatting rules extracted from the metadata:
SELECT string_agg( CASE
WHEN x[2]!='text' THEN format(E'(j->>\'%s\')::%s AS %s',x[1],x[2],x[1])
ELSE format(E'j->>\'%s\' AS %s',x[1],x[1])
END, ',' ) as x2
FROM (
SELECT regexp_split_to_array(trim(x),'\s+') x
FROM regexp_split_to_table('d date, t text, b boolean, i int, f float', ',') t1(x)
) t2;
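For example, a sketch of the dynamic variant, wrapping the generated column list in a DO block (it reuses the mydatasets.t1234 view and Alt1_AllDataset table from above, and assumes the mydatasets schema already exists):

DO $$
DECLARE
    col_list text;
BEGIN
    -- build the SELECT list from the metadata string, as in the query above
    SELECT string_agg( CASE
             WHEN x[2] <> 'text' THEN format(E'(j->>\'%s\')::%s AS %s', x[1], x[2], x[1])
             ELSE format(E'j->>\'%s\' AS %s', x[1], x[1])
           END, ', ' )
      INTO col_list
    FROM (
      SELECT regexp_split_to_array(trim(x), '\s+') AS x
      FROM regexp_split_to_table('d date, t text, b boolean, i int, f float', ',') t1(x)
    ) t2;

    -- create the per-dataset view from the generated column list
    EXECUTE format(
      'CREATE VIEW mydatasets.t1234 AS
         SELECT %s
         FROM (SELECT jsonb_array_elements(j_alldata) AS j
               FROM Alt1_AllDataset
               WHERE dataset_id = 1234) t',
      col_list
    );
END
$$;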
... It's a real-life scenario, and this (apparently ugly) model is surprisingly fast for low-traffic applications. There are other advantages besides the reduced disk usage: flexibility (you can change a dataset's schema without changing the SQL schema) and scalability (2, 3, ... 1 billion different datasets in the same table).
Returning to the question: with a dataset of ~50 or more columns, the SQL VIEW would be faster if PostgreSQL offered a "binary to binary" casting.
Short answer: no, there is no better way in PostgreSQL to extract a jsonb number than (for example)
CAST(j ->> 'attr' AS double precision)
A JSON number happens to be stored as PostgreSQL numeric internally, so that wouldn't work “directly” anyway. But there is no fundamental reason why there could not be a more efficient way to extract such a value as numeric.
So, why don't we have that?
Nobody has implemented it. That is often an indication that nobody thought it worth the effort. I personally think that this would be a micro-optimization – if you want to go for maximum efficiency, you extract that column from the JSON and store it directly as a column in the table.
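For instance, on PostgreSQL 12 or later one way to do that is a stored generated column; a minimal sketch, assuming a table my_jsonb_table with a jsonb column j (both names hypothetical):

ALTER TABLE my_jsonb_table
    ADD COLUMN attr double precision
    GENERATED ALWAYS AS (CAST(j ->> 'attr' AS double precision)) STORED;

-- queries can then read (and index) the plain column directly
CREATE INDEX ON my_jsonb_table (attr);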
It is not necessary to modify the PostgreSQL source to do this. It is possible to write your own C function that does exactly what you envision. If many people thought this was beneficial, I'd expect that somebody would already have written such a function.
PostgreSQL has just-in-time compilation (JIT). So if an expression like this is evaluated for a lot of rows, PostgreSQL will build executable code for that on the fly. That mitigates the inefficiency and makes it less necessary to have a special case for efficiency reasons.
It might not be quite as easy as it seems for many data types. JSON standard types don't necessarily correspond to PostgreSQL types in all cases. That may seem contrived, but look at this recent thread in the Hackers mailing list that deals with the differences between the numeric types of JSON and PostgreSQL.
None of the above are reasons why such a feature could never exist; I just wanted to give reasons why we don't have it.
I am working on a database that (hopefully) will end up using a primary key with both numbers and letters in its values to track lots of agricultural product. Due to the way the weighing of product takes place at more than one facility, I have no other option but to keep the same base number and use letters in addition to it to denote split portions of each lot of product.

The problem is that after I create record number 99, the number 100 suddenly sorts up underneath 10. This makes it difficult to maintain consistency and forces me to replace the alphanumeric lot ID with a strictly numeric value in order to keep things sorted (for which I use "autonumber" as the data type). Either way, I need the alphanumeric lot ID, so having two IDs for the same lot can be confusing for anyone entering values into the form. Is there a way around this that I am just not seeing?
If you're using a query as a data source then you may try to sort it by the string converted to a number, something like:
SELECT id, field1, field2, ..
ORDER BY CLng(YourAlphaNumericField)
Edit: you may also try the Val function instead of CLng - it should not fail on non-numeric input.
Why not format your key properly before saving? E.g. "0000099". You will avoid a costly conversion later.
Alternatively, you could use 2 fields as the composite PK. One with the Number (as Long) and one with the Location (as String).
I need to store a list of user names in a Cassandra column family (wide row/dynamic columns).
The column name/comparator type will be integer, so as to sort the users based on a score.
The score ranges from 0 to 100. The problem is, if two or more users have the same score, how can I store them in different columns? Cassandra would not allow that...
Is there any way to convert integers to TimeUUIDs? Or any other solution for this problem?
This is a problem I have seen quite often (not scores specifically, but preventing column name conflicts). In general the solution is one form or another of concatenating a UUID to the column name (since those are designed never to conflict).
If you want to keep sorting by score then I advise you to use a CompositeType column name.
More specifically:
CompositeType(score: Integer | time: TimeUUID)
The comparator in Cassandra will then first sort by score and then by time (putting the most recent last I believe).
TimeUUID should also take care of "simultaneous" score postings, even though the probability of that happening with a Long timestamp would be ridiculously low.
You can use the built-in list feature, see http://www.datastax.com/dev/blog/cql3_collections
Just have a column with the value and a list of users for that value.
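A rough CQL3 sketch of that idea (the table and column names are assumptions):

CREATE TABLE users_by_score (
    score int PRIMARY KEY,
    users list<text>
);

-- append a user to the list for a given score
UPDATE users_by_score SET users = users + ['alice'] WHERE score = 87;

-- read back everyone with that score
SELECT users FROM users_by_score WHERE score = 87;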
Let's say I have a very large table with owners of cars like so:
OWNERSHIP
owner | car
---------------
steven | audi
bernahrd | vw
dieter | vw
eike | vw
robert | audi
... one hundred million rows ...
If I refactor it to this:
OWNERSHIP
owner | car <-foreign key TYPE.car_type
---------------
steven | audi
bernahrd | vw
dieter | vw
eike | vw
robert | audi
...
TYPE
car_type |
---------------
audi
vw
Do I gain anything space-wise or speed-wise, or do I need to create an INTEGER surrogate key on car_type for that?
The integer is going to take up 4 bytes, which is one more byte than "vw" will. As it happens, PostgreSQL enums take up 4 bytes too, so you won't gain anything storage-wise by switching to this representation (except for the difficulties it imposes on changing the enum itself). Querying will be as fast either way, because with a table that size you're going to be consulting the index anyway. Database performance, especially when tables get large, is essentially a matter of I/O, not CPU performance. I'm not convinced that an index on integers is going to be smaller or faster than an index on short strings, especially when you have a huge number of rows referencing a very small set of possible values. It's certainly not going to be the bottleneck in your applications.
Even if we assume that you were able to recover 4 bytes by using an artificial key, how much storage are you going to save? 4 bytes times 100 million rows would be about 400 MB ideally. Are you so pressed for storage that you need to eke out a small amount like that on your honkin' database server? And this is assuming you refactor it into its own table and use a proper foreign key.
The right way to answer this, of course, is not to argue from first principles at all. Take your 100 million row table and work it both ways. Then examine the size yourself, like so:
SELECT pg_size_pretty(pg_total_relation_size('ownership'));
SELECT pg_size_pretty(pg_total_relation_size('ownership2'));
Do your test queries, with EXPLAIN ANALYZE like so:
EXPLAIN ANALYZE SELECT * FROM ownership WHERE car = 'audi';
EXPLAIN ANALYZE SELECT * FROM ownership2 WHERE car_id = 1;
Pay more attention to the actual time taken than the cost, but do look at the cost. Do this on the same database server as your production, if possible; if not, a similar machine with the same PostgreSQL configuration. Then you'll have hard numbers to tell you what you're paying for and what you're getting. My suspicion is that you'll find the space usage to be slightly worse with the artificial key and the performance to be equivalent.
If that's what you find, do the relational thing and use the natural key, and stop worrying so much about optimizing the physical storage. Space is the cheapest commodity you have.
Using two tables and a string foreign key would of course use more space than using one table. How much more depends on how many types of cars you have.
You should use an integer car_id:
Using integer keys saves space if a significant percentage of car names repeat.
More so if you need to index the car column, as an integer index is much smaller than a string index.
Also, comparing integers is faster than comparing strings, so searching by car should be faster as well.
A smaller table means that a bigger part of it fits in the cache, so accessing it should also be faster.
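A sketch of that layout (adapting the names from the question; the exact names are up to you):

CREATE TABLE car_type (
    car_id serial PRIMARY KEY,
    name   text NOT NULL UNIQUE
);

CREATE TABLE ownership (
    owner  text NOT NULL,
    car_id integer NOT NULL REFERENCES car_type (car_id)
);

-- searching by car then goes through the small lookup table
SELECT o.owner
FROM ownership o
JOIN car_type c ON c.car_id = o.car_id
WHERE c.name = 'audi';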
This is probably a super simple question, but I'm struggling to come up with the right keywords to find it on Google.
I have a Postgres table that has among its contents a column of type text named content_type. That stores what type of entry is stored in that row.
There are only about 5 different types, and I decided I want to change one of them to display as something else in my application (I had been directly displaying these).
It struck me as funny that my view is being dictated by my database model, so I decided I would convert the types stored in my database from strings into integers, and enumerate the possible types in my application with constants that map them to their display names. That way, if I ever get the urge to change any category names again, I can do it by altering one constant. I also have a hunch that storing integers might be somewhat more efficient than storing text in the database.
First, a quick threshold question of, is this a good idea? Any feedback or anything I missed?
Second, and my main question, what's the Postgres command I could enter to make an alteration like this? I'm thinking I could start by renaming the old content_type column to old_content_type and then creating a new integer column content_type. However, what command would look at a row's old_content_type and fill in the new content_type column based off of that?
If you're finding that you need to change the display values, then yes, it's probably a good idea not to store them in a database. Integers are also more efficient to store and search, but I really wouldn't worry about it unless you've got millions of rows.
You just need to run an update to populate your new column:
UPDATE table_name
SET content_type = CASE WHEN old_content_type = 'a' THEN 1
                        WHEN old_content_type = 'b' THEN 2
                        ELSE 3 END;
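To put it together with the renaming approach from the question (table_name and the type values are placeholders), the surrounding steps would look something like:

ALTER TABLE table_name RENAME COLUMN content_type TO old_content_type;
ALTER TABLE table_name ADD COLUMN content_type integer;
-- ... run the UPDATE above ...
ALTER TABLE table_name DROP COLUMN old_content_type;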
If you're on Postgres 8.4 then using an enum type instead of a plain integer might be a good idea.
Ideally you'd have these fields referring to a table containing the type definitions, via a foreign key constraint. This way you know that your database is clean and has no invalid values (i.e. referential integrity).
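A minimal sketch of that constraint, assuming the integer content_type column described above and a placeholder table_name:

CREATE TABLE content_types (
    id   integer PRIMARY KEY,
    name text NOT NULL UNIQUE
);

ALTER TABLE table_name
    ADD CONSTRAINT table_name_content_type_fkey
    FOREIGN KEY (content_type) REFERENCES content_types (id);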
There are many ways to handle this:
Having a table for each field that can contain a number of values (i.e. like an enum) is the most obvious - but it breaks down when you have a table that requires many attributes.
You can use the Entity-attribute-value model, but beware that this is too easy to abuse and cause problems when things grow.
You can use, or refer to, my implementation solution PET (Parameter Enumeration Tables). This is a halfway house between 1 and 2.