Only row key in a Cassandra column family - nosql

I'm using Cassandra 1.1.8 and today I saw in my keyspace a column family with the following content
SELECT * FROM challenge;
KEY
----------------------------
49feb2000100000a556522ed68
49feb2000100000a556522ed74
49feb2000100000a556522ed7a
49feb2000100000a556522ed72
49feb2000100000a556522ed76
49feb2000100000a556522ed6a
49feb2000100000a556522ed70
49feb2000100000a556522ed78
49feb2000100000a556522ed6e
49feb2000100000a556522ed6c
So, only rowkeys.
Yesterday those rows held data, and I ran some deletions (exactly on those rows). I'm using Hector:
Mutator<byte[]> mutator = HFactory.createMutator(keyspace, BYTES_ARRAY_SERIALIZER);
mutator.addDeletion(challengeRowKey(...), CHALLENGE_COLUMN_FAMILY_NAME)
       .execute();
This is a small development and test environment on a single machine / single node so I don't believe the hardware details are relevant.
Probably I'm doing something stupid, or I didn't get the point of how things work, but as far as I understand the rows above are not valid... the column name and column value coordinates are missing, so there are no valid cells (row key / column name / column value)... is that right?
I read about ghost reads, but I think that is a scenario for a distributed environment... is that valid after one day and on a single Cassandra node??

From http://www.datastax.com/docs/1.0/dml/about_writes#about-deletes
"The row key for a deleted row may still appear in range query results. When you delete a row in Cassandra, it marks all columns for that row key with a tombstone. Until those tombstones are cleared by compaction, you have an empty row key (a row that contains no columns). These deleted keys can show up in results of get_range_slices() calls. If your client application performs range queries on rows, you may want to have if filter out row keys that return empty column lists."

Related

Foreign Key query with unequal numbers of rows in the data, such that the remainder rows remain

[Image: PostgreSQL code with error - https://i.stack.imgur.com/sHNQs.png] [Image: Excel files with their rows]
So my issue is that I want to reference/correlate each table with the others under the variable patientAMKA or patient. I get the error that you can see in the photo, in PostgreSQL. What I want is to correlate them with each other without removing the remaining unmatched rows from the other Excel files. How do I do that?

HBase - rowkey basics

NOTE: I began with HBase only a few hours ago, and I come from an RDBMS background :P
I have an RDBMS-like table CUSTOMERS having the following columns:
CUSTOMER_ID STRING
CUSTOMER_NAME STRING
CUSTOMER_EMAIL STRING
CUSTOMER_ADDRESS STRING
CUSTOMER_MOBILE STRING
I have thought of the following HBase equivalent :
table: CUSTOMERS
rowkey: CUSTOMER_ID
column family: CUSTOMER_INFO
columns: NAME, EMAIL, ADDRESS, MOBILE
From whatever I have read, a primary key in an RDBMS table is roughly similar to an HBase table's rowkey. Accordingly, I want to keep CUSTOMER_ID as the rowkey.
My questions are dumb and straightforward:
Irrespective of whether I use a shell command or the HBaseAdmin Java class, how do I define the rowkey? I didn't find anything to do it either in the shell or in the HBaseAdmin class (something like HBaseAdmin.createSuperKey(...)).
Given an HBase table, how do I determine the rowkey details, i.e. which values are used as the rowkey?
I understand that rowkey design is a critical thing. Suppose a customer id receives values like CUST_12345, CUST_34434 and so on; how will HBase use the rowkey to decide in which region particular rows reside (assuming that the region concept is similar to DB horizontal partitioning)?
*** Edited to add a sample code snippet
I'm simply trying to create one row for the customer table using 'put' in the shell. I did this:
hbase(main):011:0> put 'CUSTOMERS', 'CUSTID12345', 'CUSTOMER_INFO:NAME','Omkar Joshi'
0 row(s) in 0.1030 seconds
hbase(main):012:0> scan 'CUSTOMERS'
ROW COLUMN+CELL
CUSTID12345 column=CUSTOMER_INFO:NAME, timestamp=1365600052104, value=Omkar Joshi
1 row(s) in 0.0500 seconds
hbase(main):013:0> put 'CUSTOMERS', 'CUSTID614', 'CUSTOMER_INFO:NAME','Prachi Shah', 'CUSTOMER_INFO:EMAIL','Prachi.Shah#lntinfotech.com'
ERROR: wrong number of arguments (6 for 5)
Here is some help for this command:
Put a cell 'value' at specified table/row/column and optionally
timestamp coordinates. To put a cell value into table 't1' at
row 'r1' under column 'c1' marked with the time 'ts1', do:
hbase> put 't1', 'r1', 'c1', 'value', ts1
hbase(main):014:0> put 'CUSTOMERS', 'CUSTID12345', 'CUSTOMER_INFO:EMAIL','Omkar.Joshi#lntinfotech.com'
0 row(s) in 0.0160 seconds
hbase(main):015:0>
hbase(main):016:0* scan 'CUSTOMERS'
ROW COLUMN+CELL
CUSTID12345 column=CUSTOMER_INFO:EMAIL, timestamp=1365600369284, value=Omkar.Joshi#lntinfotech.com
CUSTID12345 column=CUSTOMER_INFO:NAME, timestamp=1365600052104, value=Omkar Joshi
1 row(s) in 0.0230 seconds
As put takes at most 5 arguments, I was not able to figure out how to insert an entire row in one put command. This results in incremental versions of the same row, which isn't required, and I'm not sure if CUSTOMER_ID is being used as a rowkey!
Thanks and regards!
You don't; the key (and any other column, for that matter) is a byte array. You can put whatever you want there - even encapsulate sub-entities.
Not sure I understand that - each value is stored as key + column family + column qualifier + timestamp + value - so the key is there.
HBase figures out which region a record will go to as it goes. When a region gets too big, it splits. Also, from time to time, when there's too much junk, HBase performs compactions to rearrange the files. You can control that by pre-partitioning yourself, which is something you should definitely think about in the future; however, since it seems you are just starting out with HBase, you can let HBase take care of that for now. Once you understand your usage patterns and data better, you will probably want to go over that again.
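A minimal sketch of such pre-partitioning with the 0.94-era HBaseAdmin API; the split points below are hypothetical, and with keys like CUST_... you would pick boundaries matching your id distribution:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitCustomers {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        HTableDescriptor desc = new HTableDescriptor("CUSTOMERS");
        desc.addFamily(new HColumnDescriptor("CUSTOMER_INFO"));

        // Rows sort lexicographically by rowkey, so each region owns a
        // contiguous rowkey range; these split points are made up.
        byte[][] splits = new byte[][] {
                Bytes.toBytes("CUST_2"),
                Bytes.toBytes("CUST_5"),
                Bytes.toBytes("CUST_8"),
        };
        admin.createTable(desc, splits);
        admin.close();
    }
}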
You can read/hear a little about HBase schema design here and here
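On the shell problem above: the shell's put writes one cell per call, but the Java client can write several columns of the same row in a single Put, so you get one row rather than piecemeal writes. A minimal sketch against the 0.94-era client API, using the table and column family from the question (the email value is a placeholder):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PutWholeRow {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "CUSTOMERS");
        try {
            // The byte[] passed to Put *is* the rowkey - there is no separate
            // "define the rowkey" step; you choose it on every write.
            Put put = new Put(Bytes.toBytes("CUSTID614"));
            put.add(Bytes.toBytes("CUSTOMER_INFO"), Bytes.toBytes("NAME"),
                    Bytes.toBytes("Prachi Shah"));
            put.add(Bytes.toBytes("CUSTOMER_INFO"), Bytes.toBytes("EMAIL"),
                    Bytes.toBytes("prachi.shah@example.com")); // placeholder address
            table.put(put); // one row, two columns, a single mutation
        } finally {
            table.close();
        }
    }
}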

What is the purpose of dividing rows into column families if they can have different numbers/types of columns anyway?

Given that a column family can have rows with arbitrary structure we could store all rows in a single "store" (avoiding the name 'columnfamily/table' on purpose).
What is the purpose of column families then?
The simplest of all reasons is evident in the name itself: "Column Family". A column family groups a bunch of related columns together; you could consider it a namespace containing related columns.
For example, the column "Name" by itself lacks context, which can be provided by column families like "Employees" or "Cities". Otherwise each column would need to carry all of its context by itself, with no concept of related columns.
Atomicity
In Cassandra 1.1 and below, the only atomic guarantee you have is that writes to the same row (i.e. with the same key) will be atomic.
Thus, you should think very carefully about what you want in your columns, and which row those columns should be in, so that your application behaves appropriately if a write fails.
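A minimal Hector sketch of that guarantee; the keyspace reference, column family, and values here are made up:

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class RowAtomicity {
    static void update(Keyspace keyspace) {
        StringSerializer ss = StringSerializer.get();
        Mutator<String> mutator = HFactory.createMutator(keyspace, ss);

        // Both columns target the same row key, so this batch is applied
        // atomically: either both columns are written or neither is.
        mutator.addInsertion("user:42", "Users",
                HFactory.createColumn("email", "a@example.com", ss, ss));
        mutator.addInsertion("user:42", "Users",
                HFactory.createColumn("city", "Milan", ss, ss));
        mutator.execute();

        // A batch touching two different row keys carries no such guarantee
        // in Cassandra 1.1 and below: one row's write may succeed while the
        // other fails.
    }
}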
Reasons:
To have a different sort order for the columns within a row. The comparator is specified at column family creation time and can't be changed afterwards, so if you have some rows whose columns must be sorted alphabetically and others whose columns must be sorted numerically, you have to create different column families (see the sketch after this list).
To customize the storage options that can be set on a per-column-family basis, e.g. caching of rows, compaction, deletion of expired columns, etc. Per-column-family storage options can be found here.
You can't mix counter and non-counter columns in the same column family.
As mentioned in other answers, due to logical cohesion - columns represent attributes of some entity identified by the row id.
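To illustrate the first reason, a minimal Hector sketch (cluster address, keyspace, and CF names are made up) that creates two column families whose columns sort differently:

import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.ddl.ColumnFamilyDefinition;
import me.prettyprint.hector.api.ddl.ComparatorType;
import me.prettyprint.hector.api.factory.HFactory;

public class CreateSortedCfs {
    public static void main(String[] args) {
        Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", "localhost:9160");

        // Columns in this CF sort alphabetically by column name.
        ColumnFamilyDefinition byName = HFactory.createColumnFamilyDefinition(
                "MyKeyspace", "ItemsByName", ComparatorType.UTF8TYPE);

        // Columns in this CF sort numerically; a comparator cannot be changed
        // after creation, hence the separate column family.
        ColumnFamilyDefinition byNumber = HFactory.createColumnFamilyDefinition(
                "MyKeyspace", "ItemsByNumber", ComparatorType.LONGTYPE);

        cluster.addColumnFamily(byName);
        cluster.addColumnFamily(byNumber);
    }
}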

Cassandra CompositeType as row key Validator

I'm working on a POC.
I have a column family which stores server events. To avoid oversized rows, we split each row into N other rows by using a CompositeType in the row key:
CREATE COLUMN FAMILY logs with comparator='ReversedType(TimeUUIDType)' and key_validation_class='CompositeType(UTF8Type,IntegerType)' and default_validation_class=UTF8Type;
so for each server name we have N rows, and we write data to each row using a very simple round-robin algorithm.
I have no problem to write data to any row:
Mutator<Composite> mutator = HFactory.createMutator(keySpace, CompositeSerializer.get());
HColumn<UUID, String> col =
        HFactory.createColumn(TimeUUIDUtils.getUniqueTimeUUIDinMillis(), log);

Composite rowName = new Composite();
rowName.addComponent(serverName, StringSerializer.get());
rowName.addComponent(this.roundRobinDestributor.getRow(), IntegerSerializer.get());
mutator.insert(rowName, columnFamilyName, col);
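The round-robin distributor isn't shown in the question; a minimal sketch of what roundRobinDestributor might look like, assuming it just cycles shard indices:

import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical distributor: cycles 0, 1, ..., n-1 so that writes are
// spread evenly over the N shard rows of a server.
public class RoundRobinDistributor {
    private final AtomicInteger counter = new AtomicInteger();
    private final int n;

    public RoundRobinDistributor(int n) {
        this.n = n;
    }

    public int getRow() {
        // Mask the sign bit so the index stays non-negative after overflow.
        return (counter.getAndIncrement() & Integer.MAX_VALUE) % n;
    }
}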
So far so good, but now I have two questions:
1) Given that, to get all logs for some serverName, I would scan row keys - should I use ByteOrderedPartitioner?
2) Can anybody help me, or point me at some help, on how to create a Hector query which will bring back all rows for server1 ({server1:0}, {server1:1}, {server1:2}, etc.)? I saw a lot of examples using CompositeType as a comparator, but no example for a key validator.
Any help or comment is highly appreciated.
First of all, row oversizing shouldn't be a problem in Cassandra. Despite that, it might be worth splitting rows, since data distribution across the cluster will be more even that way.
ByteOrderedPartitioner doesn't look like a good option here, since it would be hard to achieve a uniform distribution of rows across the cluster, which will lead to hotspots.
There's no way to query a range of keys when using RandomPartitioner. However, if the maximum N value is reasonably small (up to 256), a MultigetSliceQuery can be used to fetch the whole set of rows, as sketched below.
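A minimal Hector sketch of that multiget, using the logs CF from the question; the keyspace reference, shard count, and method name are assumptions:

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

import me.prettyprint.cassandra.serializers.CompositeSerializer;
import me.prettyprint.cassandra.serializers.IntegerSerializer;
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.cassandra.serializers.UUIDSerializer;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.Composite;
import me.prettyprint.hector.api.beans.Rows;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.query.MultigetSliceQuery;

public class LogsForServer {
    static Rows<Composite, UUID, String> fetch(Keyspace keySpace, String server, int n) {
        // Build every composite key {server:0} .. {server:n-1} up front.
        List<Composite> keys = new ArrayList<Composite>();
        for (int i = 0; i < n; i++) {
            Composite key = new Composite();
            key.addComponent(server, StringSerializer.get());
            key.addComponent(i, IntegerSerializer.get());
            keys.add(key);
        }

        MultigetSliceQuery<Composite, UUID, String> query =
                HFactory.createMultigetSliceQuery(keySpace,
                        CompositeSerializer.get(), UUIDSerializer.get(), StringSerializer.get());
        query.setColumnFamily("logs");
        query.setKeys(keys.toArray(new Composite[keys.size()]));
        // The comparator is ReversedType(TimeUUIDType), so the first 100
        // columns of each shard row are the newest log entries.
        query.setRange(null, null, false, 100);

        return query.execute().get();
    }
}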

How to query Cassandra by date range

I have a Cassandra ColumnFamily (0.6.4) that will have new entries from users. I'd like to query Cassandra for those new entries so that I can process that data in another system.
My sense was that I could use a TimeUUIDType as the key for my entry, and then query on a KeyRange that starts either with "" as the startKey, or whatever the lastStartKey was. Is this the correct method?
How does get_range_slice actually create a range? Doesn't it have to know the data type of the key? There's no declaration of the data type of the key anywhere. In the storage_conf.xml file, you declare the type of the columns, but not of the keys. Is the key assumed to be of the same type as the columns? Or does it do some magic sniffing to guess?
I've also seen reference implementations where people store TimeUUIDType in columns. However, this seems to have scale issues as this particular key would then become "hot" since every change would have to update it.
Any pointers in this case would be appreciated.
When sorting data, only the column keys are important; the data stored is of no consequence, and neither is the auto-generated timestamp. The CompareWith attribute is important here. If you set CompareWith to UTF8Type then the keys will be interpreted as UTF8 strings; if you set CompareWith to TimeUUIDType then the keys are automatically interpreted as timestamps. You do not have to specify the data type. Look at the SlicePredicate and SliceRange definitions on this page: http://wiki.apache.org/cassandra/API - this is a good place to start. Also, you might find this article useful: http://www.sodeso.nl/?p=80 - in the third part or so he talks about slice-ranging his queries.
Doug,
Writing to a single column family can sometimes create a hot spot if you are using an Order-Preserving Partitioner, but not if you are using the default Random Partitioner (unless a subset of users create vastly more data than all other users!).
If you sorted your rows by time (using an Order-Preserving Partitioner) then you are probably even more likely to create hotspots, since you will be adding rows sequentially and a single node will be responsible for each range of the keyspace.
Columns and Keys can be of any type, since the row key is just the first column.
Virtually, the cluster is a circular hash key ring, and keys get hashed by the partitioner to get distributed around the cluster.
Beware of using dates as row keys, however, since even the randomization of the default RandomPartitioner is limited and you could end up cluttering your data.
What's more, if that date is changing, you would have to delete the previous row since you can only do inserts in C*.
Here is what we know:
A slice range is a range of columns in a row, with a start value and an end value; this is used mostly for wide rows, as columns are ordered. Known column names defined in the CF are indexed, however, so they can be retrieved by specifying names.
A key slice is a key associated with the sliced column range, as returned by Cassandra.
The equivalent of a WHERE clause uses secondary indexes; you may use inequality operators there, however there must be at least ONE equals clause in your statement (also see https://issues.apache.org/jira/browse/CASSANDRA-1599).
Using a key range is ineffective with a RandomPartitioner, as the MD5 hash of your key doesn't preserve lexical ordering.
What you want to use is a column-family-based index using a wide row:
CompositeType(TimeUUID | UserID)
In order for this not to become hot, add a first meaningful key component (a "shard key") that splits the data across nodes, such as the user type or the region.
Having more data than necessary in Cassandra is not a problem; it's how it is designed. So what you must ask yourself is "what do I need to query?" and then design a column family for it, rather than trying to fit everything into one CF as you would in an RDBMS. A sketch of such a wide-row index follows.
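A minimal Hector sketch of such an index; the CF name events_by_day, the shard:day key scheme, and all values are illustrative:

import java.util.UUID;

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.cassandra.serializers.UUIDSerializer;
import me.prettyprint.cassandra.utils.TimeUUIDUtils;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.ColumnSlice;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;
import me.prettyprint.hector.api.query.SliceQuery;

public class EventsByDay {
    static final StringSerializer SS = StringSerializer.get();

    // Write: one wide row per (shard, day); TimeUUID column names keep
    // the row ordered by time.
    static void write(Keyspace keyspace, String shard, String day, String payload) {
        Mutator<String> mutator = HFactory.createMutator(keyspace, SS);
        mutator.insert(shard + ":" + day, "events_by_day",
                HFactory.createColumn(TimeUUIDUtils.getUniqueTimeUUIDinMillis(),
                        payload, UUIDSerializer.get(), SS));
    }

    // Read: slice one row between two TimeUUIDs to get a time range.
    static ColumnSlice<UUID, String> read(Keyspace keyspace, String shard, String day,
                                          long fromMillis, long toMillis) {
        SliceQuery<String, UUID, String> query =
                HFactory.createSliceQuery(keyspace, SS, UUIDSerializer.get(), SS);
        query.setColumnFamily("events_by_day");
        query.setKey(shard + ":" + day);
        query.setRange(TimeUUIDUtils.getTimeUUID(fromMillis),
                TimeUUIDUtils.getTimeUUID(toMillis), false, 1000);
        return query.execute().get();
    }
}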