The environment for this question is PostgreSQL 9.6.5 on AWS RDS.
The question is about an optimal schema design and batch update strategy for a table with 300 million rows containing the following logical data model:
id: primary key, string up to 40 characters long
code: integer 1-999
year: integer year
flags: a variable number (1,000+), each associated with a name; new flags are added over time. Ideally, a flag should be thought of as having three values: absent (null), on (true/1) and off (false/0). It is possible, at the cost of additional updates (see below), to treat a flag as a simple bit (on or off, no absent). "On" values are typically very sparse: < 1/1000.
Queries typically involve boolean expressions on the presence or absence of one or more flags (by name) with code and year occasionally involved also.
The data is updated in batch via Apache Spark, i.e., updates can be represented as flat file(s), e.g., in COPY format, or as SQL operations. Only one update is active at any one time. Updates to code and year are very infrequent. Updates to flags affect 1-5% of rows per update (3-15 million rows). The update rows can include all flags and their values, just the "on" flags to be updated, or just the flags whose values have changed. In the first case, Spark would need to query the data to get the current values of the flags.
There will be a small read load during updates.
The question is about an optimal schema and associated update strategy to support the query & updates as described.
Some comments from research so far:
Using 1,000+ boolean columns would create a very efficient row representation but, in addition to some DDL complexity, would require 1,000+ indexes.
Bit strings would be great if there were a way to index individual bits. Also, they do not offer a good way to represent absent flags. Using this approach would require maintaining a lookup table between flag names and bit IDs. Merging updates, if needed, works with ||, though given PostgreSQL's MVCC there doesn't seem to be much benefit to updating just the flags as opposed to replacing an entire row.
JSONB fields offer indexing. They also offer null representation but that comes at a cost: all flags that are "off" would need to be explicitly set, which would make the fields quite large. If we ignore null representation, JSONB fields would be relatively small. To further shrink them, we could use short 1-3 character field names with a lookup table. Same comments re: merging as with bit strings.
tsvector/tsquery: I have no experience with this data type but, in theory, it seems to be an exact representation of a set of "on" flags by name. It would require a lookup table mapping flag names to tokens, with the additional requirement of ensuring there are no collisions due to stemming.
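For illustration, minimal sketches of the JSONB and tsvector ideas above; the column names and flag tokens are made up, and plain casts to tsvector/tsquery bypass stemming:
-- JSONB variant: store only the "on" flags, indexed with GIN
ALTER TABLE data ADD COLUMN flags jsonb;
CREATE INDEX data_flags_idx ON data USING gin (flags);
UPDATE data SET flags = '{"f12": true, "f348": true}'::jsonb WHERE id = 'abc';
SELECT id FROM data WHERE flags @> '{"f12": true}'::jsonb AND NOT flags @> '{"f348": true}'::jsonb;
-- tsvector variant: one token per "on" flag, indexed with GIN
ALTER TABLE data ADD COLUMN flags_on tsvector;
CREATE INDEX data_flags_on_idx ON data USING gin (flags_on);
UPDATE data SET flags_on = 'f12 f348'::tsvector WHERE id = 'abc';
SELECT id FROM data WHERE flags_on @@ 'f12 & !f348'::tsquery;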
Don't store the flags in the main table.
Assuming that the main table is called data, define something like the following:
CREATE TABLE flag_names (
id smallint PRIMARY KEY,
name text NOT NULL
);
CREATE TABLE flag (
flagname_id smallint NOT NULL REFERENCES flag_names(id),
data_id text NOT NULL REFERENCES data(id),
value boolean NOT NULL,
PRIMARY KEY (flagname_id, data_id)
);
If a new flag is created, insert a new row in flag_names.
If a flag is set to TRUE or FALSE, insert or update a row in the flag table.
Join flag with data to test if a certain flag is set.
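For example, a hedged sketch of those lookups; the flag name 'foo' is illustrative:
-- rows where flag 'foo' is on
SELECT d.*
FROM data d
JOIN flag f ON f.data_id = d.id
WHERE f.flagname_id = (SELECT id FROM flag_names WHERE name = 'foo')
  AND f.value;
-- rows where flag 'foo' is absent (no row in flag at all)
SELECT d.*
FROM data d
WHERE NOT EXISTS (
  SELECT 1
  FROM flag f
  WHERE f.data_id = d.id
    AND f.flagname_id = (SELECT id FROM flag_names WHERE name = 'foo')
);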
I would like to replace some of the sequences I use for IDs in my PostgreSQL database with my own custom-made ID generator. The generator would produce a random number with a check digit at the end. So this:
SELECT nextval('customers')
would be replaced by something like this:
SELECT get_new_rand_id('customer')
The function would then return a 10-digit numerical value matching [1-9][0-9]{9}, where the last digit is a checksum.
The concerns I have are:
How do I make the thing atomic?
How do I avoid returning the same ID twice? (This would be caught by trying to insert it into a column with a unique constraint, but by then it's too late, I think.)
Is this a good idea at all?
Note 1: I do not want to use a UUID, since the ID is communicated to customers and 10 digits are far simpler to communicate than a 36-character UUID.
Note 2: The function would rarely be called explicitly with SELECT get_new_rand_id(); instead, it would be assigned as the default value on the id column, in place of nextval().
EDIT: OK, good discussion below! Here is some explanation of why:
So why would I over-complicate things this way? The purpose is to hide the primary key from the customers.
I give each new customer a unique customerId (a generated serial number in the db). Since I communicate that number to the customer, it is a fairly simple task for my competitors to monitor my business (there are other numbers, such as invoice nr and order nr, that have the same properties). It is this monitoring I would like to make a little bit harder (note: not impossible, but harder).
Why the check digit?
Before there was any talk of hiding the serial nr, I added a check digit to the order nr since there were clumsy fingers at some points in production, and my thought was that this would be a good practice to keep in the future.
After reading the discussion I can certainly see that my approach is not the best way to solve my problem, but I have no other good idea of how to solve it, so please help me out here.
Should I add an extra column where I put the id I expose to the customer and keep the serial as primary key?
How can I generate the id to expose in a sane and efficient way?
Is the checkdigit necessary?
For generating unique and random-looking identifiers from a serial, using ciphers might be a good idea. Since their output is bijective (there is a one-to-one mapping between input and output values), you will not have any collisions, unlike with hashes. This also means your identifiers don't have to be as long as hashes.
Most cryptographic ciphers work on 64-bit or larger blocks, but the PostgreSQL wiki has an example PL/pgSQL procedure for a "non-cryptographic" cipher function that works on (32-bit) int type. Disclaimer: I have not tried using this function myself.
To use it for your primary keys, run the CREATE FUNCTION call from the wiki page, and then on your empty tables do:
ALTER TABLE foo ALTER COLUMN foo_id SET DEFAULT pseudo_encrypt(nextval('foo_foo_id_seq')::int);
And voila!
pg=> insert into foo (foo_id) values(default);
pg=> insert into foo (foo_id) values(default);
pg=> insert into foo (foo_id) values(default);
pg=> select * from foo;
foo_id
------------
1241588087
1500453386
1755259484
(3 rows)
I added my comment to your question and then realized that I should have explained myself better... My apologies.
You could have a second key - not the primary key - that is visible to the user. That key could use the primary key as the seed for the hash function you describe and be the one that you use to do lookups. That key would be generated by a trigger after insert (which is much simpler than trying to ensure atomicity of the operation).
That is the key that you share with your clients, never the PK. I know there is debate (although I can't understand why) about whether PKs should be invisible to user applications or not. Modern database design practice, and my personal experience, suggest that PKs should NOT be visible to users. Users tend to attach meaning to them and, over time, that is a very bad thing - regardless of whether the key has a check digit or not.
Your joins will still be done using the PK. This other generated key is just supposed to be used for client lookups. They are the face, the PK is the guts.
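A minimal sketch of that idea, assuming a customers table with an integer id primary key and the pseudo_encrypt function mentioned in another answer; it is shown as a BEFORE INSERT trigger for brevity, and all names are illustrative:
ALTER TABLE customers ADD COLUMN exposed_id bigint UNIQUE;
CREATE OR REPLACE FUNCTION customers_set_exposed_id() RETURNS trigger AS $$
BEGIN
  -- derive the client-facing key from the PK; joins keep using the PK
  NEW.exposed_id := pseudo_encrypt(NEW.id::int);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER customers_exposed_id
BEFORE INSERT ON customers
FOR EACH ROW EXECUTE PROCEDURE customers_set_exposed_id();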
Hope that helps.
Edit: FWIW, there is little to be said about "right" or "wrong" in database design. Sometimes it boils down to a choice. I think the choice you face will be better served by leaving the PK alone and creating a secondary key - just that.
I think you are way over-complicating this. Why not let the database do what it does best and let it take care of atomicity and ensuring that the same id is not used twice? Why not use a postgresql SERIAL type and get an autogenerated surrogate primary key, just like an integer IDENTITY column in SQL Server or DB2? Use that on the column instead. Plus it will be faster than your user-defined function.
I concur regarding hiding this surrogate primary key and using an exposed secondary key (with a unique constraint on it) to lookup clients in your interface.
Are you using a sequence because you need a unique identifier across several tables? This is usually an indication that you need to rethink your table design, and those several tables should perhaps be combined into one, with an autogenerated surrogate primary key.
Also see here
How you generate the random and unique ids is a useful question - but you seem to be making a counterproductive assumption about when to generate them!
My point is that you do not need to generate these id's at the time of creating your rows, because they are essentially independent of the data being inserted.
What I do is pre-generate random id's for future use, that way I can take my own sweet time and absolutely guarantee they are unique, and there's no processing to be done at the time of the insert.
For example I have an orders table with order_id in it. This id is generated on the fly when the user enters the order, incrementally 1,2,3 etc forever. The user does not need to see this internal id.
Then I have another table - random_ids with (order_id, random_id). I have a routine that runs every night which pre-loads this table with enough rows to more than cover the orders that might be inserted in the next 24 hours. (If I ever get 10000 orders in one day I'll have a problem - but that would be a good problem to have!)
This approach guarantees uniqueness and takes any processing load away from the insert transaction and into the batch routine, where it does not affect the user.
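A rough sketch of such a pool, under assumed table and column names; a duplicate random_id simply fails the UNIQUE constraint, and the nightly batch can retry at leisure:
CREATE TABLE random_ids (
  order_id  integer PRIMARY KEY,        -- the future internal order id
  random_id bigint  NOT NULL UNIQUE     -- the 10-digit id exposed to users
);
-- nightly batch: pre-generate ids for the next ~10000 orders
INSERT INTO random_ids (order_id, random_id)
SELECT g.n,
       (1000000000 + floor(random() * 9000000000))::bigint
FROM generate_series(
       (SELECT coalesce(max(order_id), 0) + 1     FROM random_ids),
       (SELECT coalesce(max(order_id), 0) + 10000 FROM random_ids)
     ) AS g(n);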
Your best bet would probably be some form of hash function, and then a checksum added to the end.
If you're not using this too often (you do not have a new customer every second, do you?) then it is feasible to just get a random number and then try to insert the record. Just be prepared to retry inserting with another number when it fails with unique constraint violation.
I'd use numbers 100000 to 999999 (900,000 possible numbers of the same length) and a check digit using the UPC or ISBN-10 algorithm. Two check digits would be better, though, as they'd eliminate 99% of human errors instead of 90%.
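For illustration only, a UPC-style (weights 3/1, mod 10) check digit in PL/pgSQL; the function name and sample number are made up:
CREATE OR REPLACE FUNCTION check_digit(num text) RETURNS int AS $$
DECLARE
  s int := 0;
BEGIN
  FOR i IN 1..length(num) LOOP
    -- odd positions weighted 3, even positions weighted 1
    s := s + substr(num, i, 1)::int * CASE WHEN i % 2 = 1 THEN 3 ELSE 1 END;
  END LOOP;
  RETURN (10 - s % 10) % 10;
END;
$$ LANGUAGE plpgsql IMMUTABLE;
-- e.g. turn a 6-digit base number into a 7-digit id:
SELECT '123456' || check_digit('123456');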
I posted a few weeks back inquiring about the firebird DB and how to monitor it. Since then I have come up with a nifty script that monitors all of the page reads/writes/fetches/marks. One of the columns I am monitoring is the MON$STAT_ID and the MON$STAT_GROUP fields. This prints out a nice number for me; however, I have no way to correlate and understand what exactly it is. I thought printing out the MON$STAT_GROUP would help but it has yet to assist me in any way...
I have also looked into the RDB$ commands but have found very limited documentation to see if they might assist me in monitoring my database.
So I decided to come here and inquire first off whether I am monitoring my database in a way that others can view the data from page reads/writes/fetches/marks and make an intelligent decision on whether or not the database is performing as expected.
Secondly, would adding RDB$ commands to my script add anything to the value of the data that I will be giving our database folks?
Lastly, and maybe most importantly, is there any way to correlate the MON$STAT_ID fields to an actual table in the database to understand when something is going on that should not be? I am currently monitoring the database every minute, which may be too frequent, but I am getting valid data out. The only question now is how to interpret this data. Can someone give me advice on methods they use/have used in the past that have worked for them?
(NOTE: Running firebird 2.1)
The column MON$STAT_ID in MON$IO_STATS (and MON$RECORD_STATS and MON$MEMORY_USAGE) is the primary key of the record in the monitoring table. Almost all other monitoring tables include a MON$STAT_ID to point to these statistics: MON$ATTACHMENTS, MON$CALL_STACK, MON$DATABASE, MON$STATEMENTS, MON$TRANSACTIONS.
In other words: the statistics apply on the database, attachment, transaction, statement or call level (PSQL executes). The statistics tables contain a column called MON$STAT_GROUP to discern these types. The values of MON$STAT_GROUP are described in RDB$TYPES:
0 : DATABASE
1 : ATTACHMENT
2 : TRANSACTION
3 : STATEMENT
4 : CALL
Typically the statistics of level 0 contain all from level 1, level 1 contains all from level 2 for that attachment, level 2 contains all from level 3 for that transaction, level 3 contains all from level 4 for that statement.
As there might be data processed unrelated to the lower level, or a specific attachment, transaction or statement handle has already been dropped, the numbers of the lower level do not necessarily aggregate to the entire number of the higher level.
There is no way to correlate the statistics to a specific table (as this information isn't table related, but - simplified - from executing statements which might cover multiple tables).
As I also commented, I am unsure what you mean by "RDB$ commands". But I am assuming you are talking about RDB$GET_CONTEXT() and RDB$SET_CONTEXT(). You could use RDB$GET_CONTEXT() to obtain the current connection id (SESSION_ID) and transaction id (TRANSACTION_ID). These values can be used for MON$ATTACHMENT_ID and MON$TRANSACTION_ID in the monitoring tables. I don't think the other variables in the SYSTEM namespace are interesting, and those in USER_SESSION and USER_TRANSACTION are all user-defined (and initially those namespaces are empty).
It is far easier to use the CURRENT_CONNECTION and CURRENT_TRANSACTION context variables within a statement. As documented in doc\README.monitoring_tables.txt in the Firebird installation:
System variables CURRENT_CONNECTION and CURRENT_TRANSACTION could be used to select data about the current (for the caller) connection and transaction respectively. These variables correspond to the ID columns of the appropriate monitoring tables.
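For example, a sketch (using the MON$ATTACHMENTS and MON$IO_STATS columns as documented for Firebird 2.5) that pulls the page I/O counters for the current attachment:
select a.mon$attachment_id,
       io.mon$page_reads, io.mon$page_writes,
       io.mon$page_fetches, io.mon$page_marks
from mon$attachments a
join mon$io_stats io on io.mon$stat_id = a.mon$stat_id
where a.mon$attachment_id = current_connection;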
Note: my answer is based on Firebird 2.5.
To present statistics by specific tables I use this SQL (FB 3)
select t.mon$table_name,trim(
case when r.mon$record_seq_reads>0 then 'Non index Reads: '||r.mon$record_seq_reads else '' end||
case when r.mon$record_idx_reads>0 then ' Index Reads: '||r.mon$record_idx_reads else '' end||
case when r.mon$record_inserts>0 then ' Inserts: '||r.mon$record_inserts else '' end||
case when r.mon$record_updates>0 then ' Updates: '||r.mon$record_updates else '' end||
case when r.mon$record_deletes>0 then ' Deletes: '||r.mon$record_deletes else '' end)
from MON$TABLE_STATS t
join mon$record_stats r on r.mon$stat_id=t.mon$record_stat_id
where t.mon$table_name not starting 'RDB$' and r.mon$stat_group=2
order by 1
I need to expire all keys in a Redis hash that are older than 1 month.
This is not possible, for the sake of keeping Redis simple.
Quoth Antirez, creator of Redis:
Hi, it is not possible, either use a different top-level key for that specific field, or store along with the field another field with an expire time, fetch both, and let the application understand if it is still valid or not based on current time.
Redis does not support a TTL on anything other than the top-level key, which would expire the whole hash. If you are using a sharded cluster, there is another approach you can use. This approach may not be useful in all scenarios, and its performance characteristics might differ from the expected ones. Still worth mentioning:
When having a hash, the structure basically looks like:
hash_top_key
- child_key_1 -> some_value
- child_key_2 -> some_value
...
- child_key_n -> some_value
Since we want to add TTL to the child keys, we can move them to top keys. The main point is that the key now should be a combination of hash_top_key and child key:
{hash_top_key}child_key_1 -> some_value
{hash_top_key}child_key_2 -> some_value
...
{hash_top_key}child_key_n -> some_value
We are using the {} notation on purpose. This allows all those keys to fall in the same hash slot. You can read more about it here: https://redis.io/topics/cluster-tutorial
Now, if we want to do the same operations as with hashes, we could do:
HDEL hash_top_key child_key_1 => DEL {hash_top_key}child_key_1
HGET hash_top_key child_key_1 => GET {hash_top_key}child_key_1
HSET hash_top_key child_key_1 some_value => SET {hash_top_key}child_key_1 some_value [EX some_TTL]
HGETALL hash_top_key =>
keyslot = CLUSTER KEYSLOT {hash_top_key}
keys = CLUSTER GETKEYSINSLOT keyslot n
MGET keys
The interesting one here is HGETALL. First we get the hash slot for all our child keys. Then we get the keys in that particular hash slot, and finally we retrieve the values. We need to be careful here, since there could be more than n keys for that hash slot, and there could also be keys that we are not interested in but that have the same hash slot. We could actually write a Lua script to do those steps on the server by executing an EVAL or EVALSHA command. Again, you need to take into consideration the performance of this approach for your particular scenario.
Some more references:
https://redis.io/commands/cluster-keyslot
https://redis.io/commands/cluster-getkeysinslot
https://redis.io/commands/eval
This is possible in KeyDB, which is a fork of Redis. Because it's a fork, it's fully compatible with Redis and works as a drop-in replacement.
Just use the EXPIREMEMBER command. It works with sets, hashes, and sorted sets.
EXPIREMEMBER keyname subkey [time]
You can also use TTL and PTTL to see the expiration
TTL keyname subkey
More documentation is available here: https://docs.keydb.dev/docs/commands/#expiremember
You can use a Sorted Set in Redis to get a TTL container, with the timestamp as the score.
For example, whenever you insert an event string into the set, you can set its score to the event time.
Thus you can get data of any time window by calling
zrangebyscore "your set name" min-time max-time
Moreover, we can expire old data by using zremrangebyscore "your set name" min-time max-time to remove old events.
The only drawback here is that you have to do housekeeping from an outside process to maintain the size of the set.
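A minimal redis-cli sketch; the set name and epoch-second timestamps are illustrative:
ZADD events 1612137600 "event:1234"
ZRANGEBYSCORE events 1612000000 1612200000
ZREMRANGEBYSCORE events -inf 1609459200
The first call stores an event with its timestamp as the score, the second reads a time window, and the last is the housekeeping step that drops everything older than a cutoff.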
Elon Musk will soon send people to the moon and we still cannot expire fields on redis :(
Anyway, the solution I've come up with is:
Let's say I want things to expire every 3 minutes:
I hold the data in 3 fields: 0, 1 and 2,
and I take the current time in minutes modulo 3.
If the modulo is, for example, 0,
then I only use fields 1 and 2 and delete field 0;
when it changes to 1, I use fields 2 and 0 and delete field 1, and so on.
I'm not using this myself and I haven't tested it, but I just want to let you know it's possible.
There is the Redisson Java framework, which implements a hash Map object with entry TTL support. It uses Redis hash and zset objects under the hood. Usage example:
RMapCache<Integer, String> map = redisson.getMapCache("map");
map.put(1, "value", 30, TimeUnit.DAYS); // this entry expires in 30 days
This approach is quite useful.
We had the same problem discussed here.
We have a Redis hash, a key to hash entries (name/value pairs), and we needed to hold individual expiration times on each hash entry.
We implemented this by adding n bytes of prefix data containing encoded expiration information when we write the hash entry values; we also set the key to expire at the time contained in the value being written.
Then, on read, we decode the prefix and check for expiration. This adds overhead; however, reads are still O(n) and the entire key will expire when the last hash entry has expired.
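A hedged redis-cli illustration of the idea; the pipe-delimited encoding and the key names are made up, since the answer does not specify the actual prefix format:
HSET sessions user:42 "1735689600|payload"
EXPIREAT sessions 1735689600
HGET sessions user:42
Here the prefix is the entry's expiry as epoch seconds; the reader splits it off and discards stale entries, and EXPIREAT keeps the whole key's expiry aligned with the expiry of the last entry written.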
Regarding a NodeJS implementation, I have added a custom expiryTime field in the object I save in the HASH. Then, after a specific period of time, I clear the expired HASH entries by using the following code:
client.hgetall(HASH_NAME, function(err, reply) {
if (reply) {
Object.keys(reply).forEach(key => {
if (reply[key] && JSON.parse(reply[key]).expiryTime < (new Date).getTime()) {
client.hdel(HASH_NAME, key);
}
})
}
});
If your use-case is that you're caching values in Redis and are tolerant of stale values but would like to refresh them occasionally so that they don't get too stale, a hacky workaround is to just include a timestamp in the field value and handle expirations in whatever place you're accessing the value.
This allows you to keep using Redis hashes normally without needing to worry about any complications that might arise from the other approaches. The only cost is a bit of extra logic and parsing on the client end. Not a perfect solution, but it's what I typically do as I haven't needed TTL for any other reason and I'm usually needing to do extra parsing on the cached value anyways.
So basically it'll be something like this:
In Redis:
hash_name
- field_1: "2021-01-15;123"
- field_2: "2021-01-20;125"
- field_2: "2021-02-01;127"
Your (pseudo)code:
val = redis.hget(hash_name, field_1)
timestamp = val.substring(0, val.index_of(";"))
if now() > timestamp:
    new_val = get_updated_value()
    new_timestamp = now() + EXPIRY_LENGTH
    redis.hset(hash_name, field_1, new_timestamp + ";" + new_val)
    val = new_val
else:
    val = val.substring(val.index_of(";") + 1)
// proceed to use val
The biggest caveat imo is that you don't ever remove fields so the hash can grow quite large. Not sure there's an elegant solution for that - I usually just delete the hash every once in a while if it feels too big. Maybe you could keep track of everything you've stored somewhere and remove them periodically (though at that point, you might as well just be using that mechanism to expire the fields manually...).
You could store key/values in Redis differently to achieve this, by just adding a prefix or namespace to your keys when you store them e.g. "hset_"
Get a key/value GET hset_key equals to HGET hset key
Add a key/value SET hset_key value equals to HSET hset key value
Get all keys KEYS hset_* equals to HKEYS hset
Getting all values should be done in 2 ops: first get all keys with KEYS hset_*, then get the value for each key
Add a key/value with TTL or expire which is the topic of question:
SET hset_key value
EXPIRE hset_key <seconds>
Note:
KEYS will look up matching keys in the whole database, which may affect performance, especially if you have a big database. SCAN 0 MATCH hset_* may be better, as long as it doesn't block the server, but performance is still an issue in the case of a big database.
You may create a separate database for storing these keys that you want to expire, especially if they are a small set of keys.
Thanks to @DanFarrell, who highlighted the performance issue related to KEYS.
You can. Here is an example.
redis 127.0.0.1:6379> hset key f1 1
(integer) 1
redis 127.0.0.1:6379> hset key f2 2
(integer) 1
redis 127.0.0.1:6379> hvals key
1) "1"
2) "1"
3) "2"
redis 127.0.0.1:6379> expire key 10
(integer) 1
redis 127.0.0.1:6379> hvals key
1) "1"
2) "1"
3) "2"
redis 127.0.0.1:6379> hvals key
1) "1"
2) "1"
3) "2"
redis 127.0.0.1:6379> hvals key
(empty list or set)
Use the EXPIRE or EXPIREAT command.
If you want to expire specific keys in the hash older than 1 month, that is not possible.
The Redis EXPIRE command applies to the whole hash; there is no per-field expiry.
If you use a daily hash key, you can set a time to live per key:
hset key-20140325 f1 1
expire key-20140325 100
hset key-20140325 f1 2
You could use the Redis keyspace notifications by using psubscribe and "__keyevent@<db-index>__:expired".
With that, each time that a key will expire, you will get a message published on your redis connection.
Regarding your question: basically, you create a temporary "normal" key using SET with an expiration time in s/ms. Its name should match the name of the key that you wish to delete in your set.
As your temporary key will be published to your Redis connection holding the "__keyevent@0__:expired" subscription when it expires, you can easily delete your key from your original set, as the message will contain the name of the key.
A simple example in practice on that page: https://medium.com/@micah1powell/using-redis-keyspace-notifications-for-a-reminder-service-with-node-c05047befec3
doc: https://redis.io/topics/notifications (look for the flag xE)
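A minimal redis-cli sketch of that flow; database 0 and all key names are illustrative. Enable expired events, subscribe in one connection, and from another connection create a shadow key whose TTL mirrors the hash field you want to expire:
CONFIG SET notify-keyspace-events Ex
PSUBSCRIBE __keyevent@0__:expired
SET shadow:myhash:field1 "" EX 60
When shadow:myhash:field1 expires, the subscriber receives its name in the expired-event message and can HDEL the corresponding field from the real hash.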
static async setCount(ip: string, count: number) {
    const val = await redisClient.hSet(ip, 'ipHashField', count)
    await redisClient.expire(ip, this.expireTime)
}
Try expiring your key. Note that this puts the expiry on the whole hash key, not on the individual field.
I have a Cassandra ColumnFamily (0.6.4) that will have new entries from users. I'd like to query Cassandra for those new entries so that I can process that data in another system.
My sense was that I could use a TimeUUIDType as the key for my entry, and then query on a KeyRange that starts either with "" as the startKey, or whatever the lastStartKey was. Is this the correct method?
How does get_range_slice actually create a range? Doesn't it have to know the data type of the key? There's no declaration of the data type of the key anywhere. In the storage_conf.xml file, you declare the type of the columns, but not of the keys. Is the key assumed to be of the same type as the columns? Or does it do some magic sniffing to guess?
I've also seen reference implementations where people store TimeUUIDType in columns. However, this seems to have scale issues as this particular key would then become "hot" since every change would have to update it.
Any pointers in this case would be appreciated.
When sorting data, only the column keys are important. The data stored is of no consequence, and neither is the auto-generated timestamp. The CompareWith attribute is important here. If you set CompareWith to UTF8Type, then the keys will be interpreted as UTF-8 strings. If you set CompareWith to TimeUUIDType, then the keys are automatically interpreted as timestamps. You do not have to specify the data type. Look at the SlicePredicate and SliceRange definitions on this page: http://wiki.apache.org/cassandra/API. This is a good place to start. Also, you might find this article useful: http://www.sodeso.nl/?p=80. In the third part or so he talks about slice ranging his queries and so on.
Doug,
Writing to a single column family can sometimes create a hot spot if you are using an Order-Preserving Partitioner, but not if you are using the default Random Partitioner (unless a subset of users create vastly more data than all other users!).
If you sorted your rows by time (using an Order-Preserving Partitioner) then you are probably even more likely to create hotspots, since you will be adding rows sequentially and a single node will be responsible for each range of the keyspace.
Columns and Keys can be of any type, since the row key is just the first column.
Virtually, the cluster is a circular hash key ring, and keys get hashed by the partitioner to get distributed around the cluster.
Beware of using dates as row keys, however, since even the randomization of the default RandomPartitioner is limited and you could end up cluttering your data.
What's more, if that date is changing, you would have to delete the previous row since you can only do inserts in C*.
Here is what we know:
A slice range is a range of columns in a row with a start value and an end value, this is used mostly for wide rows as columns are ordered. Known column names defined in the CF are indexed however so they can be retrieved specifying names.
A key slice is a key associated with the sliced column range as returned by Cassandra.
The equivalent of a where clause uses secondary indexes, you may use inequality operators there, however there must be at least ONE equals clause in your statement (also see https://issues.apache.org/jira/browse/CASSANDRA-1599).
Using a key range is ineffective with a Random Partitioner, as the MD5 hash of your key doesn't keep lexical ordering.
What you want to use is a Column Family-based index using a wide row:
CompositeType(TimeUUID | UserID)
In order for this not to become hot, add a first meaningful key ("shard key") that would split the data across nodes, such as the user type or the region.
Having more data than necessary in Cassandra is not a problem, it's how it is designed, so what you must ask yourself is "what do I need to query" and then design a Column Family for it rather than trying to fit everything in one CF like you'd do in an RDBMS.
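For illustration only, roughly what such an index column family looks like in CQL; CQL post-dates the 0.6 Thrift API in the question, and all names here are made up:
CREATE TABLE new_entries_by_shard (
    shard      text,        -- e.g. region or user type, to spread writes across nodes
    entry_time timeuuid,    -- when the entry was created
    user_id    text,
    PRIMARY KEY (shard, entry_time, user_id)
);
-- "what is new since the last run" for one shard:
SELECT user_id FROM new_entries_by_shard
WHERE shard = 'eu' AND entry_time > maxTimeuuid('2011-01-01 00:00:00');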