Can I retrieve a record from an Aerospike database by a previously saved hash digest?
Here's an example of how to do it with the Aerospike client for Python. Client.get() needs a valid key tuple, which can be (namespace, set, None, digest) instead of the more common (namespace, set, primary-key).
>>> client = aerospike.client(config).connect()
>>> client.put(('test','demo','oof'), {'id':0, 'a':1})
>>> (key, meta, bins) = client.get(('test','demo','oof'))
>>> key
('test', 'demo', None, bytearray(b'\ti\xcb\xb9\xb6V#V\xecI#\xealu\x05\x00H\x98\xe4='))
>>> (key2, meta2, bins2) = client.get(key)
>>> bins2
{'a': 1, 'id': 0}
>>> client.close()
You need three things to locate a record in Aerospike: the namespace, the set name (if used; it can be null), and the user key you originally wrote with (say, a string or integer). The key object you pass to the get call comprises these three pieces. The client library computes the digest from the set name plus your key, and then uses the namespace together with that digest to locate the record. Aerospike itself stores only the digest (unless sendKey is set to true), but the namespace is still required. So in your case you can create the key object from the namespace, set, and saved digest and pass it to get(), but you cannot call get() with just the digest and no namespace.
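For illustration, here is a minimal sketch showing that the digest is derived from the set name plus the user key, so it can also be computed client-side. It assumes a local cluster at 127.0.0.1:3000 and uses aerospike.calc_digest(), which recent versions of the Python client provide:
import aerospike

# Assumed connection details; adjust for your cluster.
config = {'hosts': [('127.0.0.1', 3000)]}
client = aerospike.client(config).connect()

# The digest is a function of set + user key (the namespace is not part of it).
digest = aerospike.calc_digest('test', 'demo', 'oof')

# A four-element key tuple: namespace, set, no primary key, digest.
key_by_digest = ('test', 'demo', None, digest)
(key, meta, bins) = client.get(key_by_digest)
print(bins)   # {'a': 1, 'id': 0}, if the record from the example above exists
client.close()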
As context, I am creating a bucket of keys with empty documents so that I can check whether an ID exists by key lookup alone, rather than by checking values. In the cluster I have two buckets, source-bucket and new-bucket. The documents in source-bucket have the form:
ID: {
    ID: ...,
    type: ...
}
You can move the contents of the source bucket to the new bucket using this query:
INSERT INTO `new-bucket` (KEY k, VALUE v) SELECT meta(v).id AS k FROM `source-bucket` as v
Is there a way to copy over just the key? Something along the lines of this (although this example doesn't work):
INSERT INTO `new-bucket` (KEY k, VALUE v) values (SELECT meta().id FROM `source-bucket`, NULL)
I guess I'm not familiar enough with the N1QL syntax to understand how to construct a query like this. Let me know if you have an answer to this. If this is a duplicate, feel free to point to the answer.
If you need an empty object, use {}.
CREATE PRIMARY INDEX ON `source-bucket`;
INSERT INTO `new-bucket` (KEY k, VALUE {})
SELECT meta(b).id AS k FROM `source-bucket` as b
NOTE: the document value can be an empty object or any other data type. The following are all valid.
INSERT INTO default VALUES ("k01", {"a":1});
INSERT INTO default VALUES ("k02", {});
INSERT INTO default VALUES ("k03", 1);
INSERT INTO default VALUES ("k04", "aa");
INSERT INTO default VALUES ("k05", true);
INSERT INTO default VALUES ("k06", ["aa"]);
INSERT INTO default VALUES ("k07", NULL);
I would like to migrate hash generation to BigQuery, which has SHA256 but does not accept a salt as a parameter.
For example in R I can do something like this:
library(openssl)
sha256("test#gmail.com", key = "111")
# [1] "172f052058445afd9fe3afce05bfec573b5bb4c659bfd4cfc69a59d1597a0031"
Update
The same in Python, based on an answer here:
import hmac
import hashlib
print(hmac.new(b"111", b"test#gmail.com", hashlib.sha256).hexdigest())
# 172f052058445afd9fe3afce05bfec573b5bb4c659bfd4cfc69a59d1597a0031
I hope by "migrate" you mean migrating the logic, not reproducing the exact byte-wise output of R's sha256() function.
R is using HMAC-SHA256. Looking at the description of Microsoft's HMACSHA256 class:
The HMAC process mixes a secret key with the message data, hashes the result with the hash function, mixes that hash value with the secret key again, and then applies the hash function a second time. The output hash is 256 bits in length.
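Written out by hand (in Python, purely for illustration; the hmac/hashlib modules already provide this), the construction looks like:
import hashlib

def hmac_sha256_manual(key: bytes, message: bytes) -> str:
    block_size = 64  # SHA-256 block size in bytes
    if len(key) > block_size:
        key = hashlib.sha256(key).digest()
    key = key.ljust(block_size, b"\x00")          # pad the key to the block size
    o_key_pad = bytes(b ^ 0x5C for b in key)      # outer padded key
    i_key_pad = bytes(b ^ 0x36 for b in key)      # inner padded key
    inner = hashlib.sha256(i_key_pad + message).digest()
    return hashlib.sha256(o_key_pad + inner).hexdigest()

print(hmac_sha256_manual(b"111", b"test#gmail.com"))
# same value as the hmac.new(...) result above

The CONCAT-based temp function below is only a rough approximation of this process, which is why its digest differs from the R and Python results above: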
CREATE TEMP FUNCTION hmacsha256(content STRING, key STRING)
AS (
  SHA256(CONCAT(TO_HEX(SHA256(CONCAT(content, key))), key))
);
SELECT TO_HEX(hmacsha256("test#gmail.com", "111"));
Output:
+------------------------------------------------------------------+
| f0_ |
+------------------------------------------------------------------+
| 4010f74e5c69ddbe1e36975f7cb8be64bcfd1203dbc8e009b29d7a12a8bf5fef |
+------------------------------------------------------------------+
With the help of @Yun I have managed to solve this.
To apply HMAC you will need to include an external library file, as in the example function below.
CREATE TEMP FUNCTION USER_HASH(message STRING, secret STRING)
RETURNS STRING
LANGUAGE js
OPTIONS (
-- copy this Forge library file to Storage:
-- https://cdn.jsdelivr.net/npm/node-forge@0.7.0/dist/forge.min.js
-- @see https://github.com/digitalbazaar/forge
library=["gs://.../forge.min.js"]
)
AS
"""
var hmac = forge.hmac.create();
hmac.start('sha256', secret);
hmac.update(message);
return hmac.digest().toHex();
""";
SELECT USER_HASH("test#gmail.com", "111");
-- Row f0_
-- 1 172f052058445afd9fe3afce05bfec573b5bb4c659bfd4cfc69a59d1597a0031
Using psycopg2 and PostgreSQL 9.3, a row is inserted into a table with the following syntax:
cur.execute(
    "INSERT INTO customer (name, address) VALUES ('Herman M', '1313 mockingbird lane')")
If the data comes in a dictionary, {'name': 'Herman M', 'address': '1313 mockingbird lane'}, is there a better, more Pythonic way to extract the keys and values from the dictionary, in order, than this:
fields, values = '', []
for k, v in dictionary.items():
    fields = ','.join((fields, k))
    values.append(v)
In order to do this:
cur.execute(
    "INSERT INTO {} ({}) VALUES {}".format(
        tablename, fields[1:], tuple(values)))
It works, but after watching Raymond Hettinger's talk on transforming code into beautiful, idiomatic Python, I am sensitive to the fact that it is ugly and that I am copying data. Is there a better way?
Use the dictionary directly in the cursor.execute() method:
insert_query = """
    insert into customer (name, address)
    values (%(name)s, %(address)s)
"""
insert_dict = {
    'name': 'Herman M',
    'address': '1313 mockingbird lane'
}
cursor.execute(insert_query, insert_dict)
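If the column list itself has to come from the dictionary keys (as in the original question), psycopg2's sql module (psycopg2 >= 2.7) can compose identifiers and placeholders safely. A sketch, reusing the cursor from above and a hypothetical insert_dict helper:
from psycopg2 import sql

def insert_dict(cursor, table, record):
    # Build "INSERT INTO table (col, ...) VALUES (%(col)s, ...)" with
    # identifiers quoted by psycopg2 instead of plain string formatting.
    query = sql.SQL("INSERT INTO {} ({}) VALUES ({})").format(
        sql.Identifier(table),
        sql.SQL(", ").join(map(sql.Identifier, record)),
        sql.SQL(", ").join(sql.Placeholder(k) for k in record),
    )
    cursor.execute(query, record)

insert_dict(cursor, "customer",
            {"name": "Herman M", "address": "1313 mockingbird lane"})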
I can't figure out why this works
var blah = this.Database.SqlQuery<MyObjectWithTwoStringPropsNamedKeyAndValue>("exec mySproc").ToList();
and this doesn't
var blah = this.Database.SqlQuery<KeyValuePair<string,string>>("exec mySproc").ToList();
"mySproc" returns three records with two varchar columns aliased "Key" and "Value".
In the second line of code, I get a list of three KeyValuePairs, but both properties (Key and Value) are null for each item in the list.
That's because the KeyValuePair properties, Key and Value, are read-only.
The values for Key and Value can only be set in the constructor, and not changed later.
SqlQuery tries to map the columns returned by the stored procedure, but cannot find properties to write them into. The documentation doesn't state that the properties must be writable, but it is clear that SqlQuery populates properties rather than calling a parameterized constructor:
The type can be any type that has properties that match the names of the columns returned from the query, or can be a simple primitive type.
I created a URL shortener with Ruby + MongoMapper.
It's a simple URL shortener algorithm with at most 3 digits:
http://pablocantero.com/###
Where each # can be [a-z] or [A-Z] or [0-9]
For this algorithm, I need to persist four attributes in MongoDB (through MongoMapper):
class ShortenerData
  include MongoMapper::Document

  VALUES = ('a'..'z').to_a + ('A'..'Z').to_a + (0..9).to_a

  key :col_a, Integer
  key :col_b, Integer
  key :col_c, Integer
  key :index, Integer
end
I created another class to manage ShortenerData and to generate the unique identifier:
class Shortener
  include Singleton

  def get_unique
    unique = nil
    @shortener_data.reload
    # some operations that can increment the attributes col_a, col_b, col_c and index
    # ...
    @shortener_data.save
    unique
  end
end
Shortener usage:
Shortener.instance.get_unique
My question is how I can make get_unique synchronized: my app will be deployed on Heroku, and concurrent requests can call Shortener.instance.get_unique.
I changed the behaviour for getting the base62 id: I created an auto-increment gem for MongoMapper, and I encode the auto-incremented id to base62.
The gem is available on GitHub https://github.com/phstc/mongomapper_id2
# app/models/movie.rb
class Movie
  include MongoMapper::Document

  key :title, String

  # Here is the mongomapper_id2
  auto_increment!
end
Usage
movie = Movie.create(:title => 'Tropa de Elite')
movie.id # BSON::ObjectId('4d1d150d30f2246bc6000001')
movie.id2 # 3
movie.to_base62 # d
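The encoding step itself is simple. A sketch in Python, purely for illustration (the gem does the equivalent in Ruby), using the same a-z, A-Z, 0-9 alphabet as ShortenerData::VALUES:
ALPHABET = ("abcdefghijklmnopqrstuvwxyz"
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "0123456789")

def to_base62(n):
    # Encode a non-negative integer with the 62-character alphabet above.
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n > 0:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

print(to_base62(3))  # "d", the same value as movie.to_base62 above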
Short url
# app/helpers/application_helper.rb
def get_short_url(model)
  "http://pablocantero.com/#{model.class.name.downcase}/#{model.to_base62}"
end
I solved the race condition with MongoDB's findAndModify command: http://www.mongodb.org/display/DOCS/findAndModify+Command
model = MongoMapper.database.collection(:incrementor).
  find_and_modify(
    :query  => {'model_name' => 'movies'},
    :update => {'$inc' => {:id2 => 1}},
    :new    => true)

model[:id2] # returns the auto-incremented id
With this new behaviour I solved the race condition problem!
If you liked this gem, please help to improve it. You're welcome to make contributions and send them as a pull request, or just send me a message: http://pablocantero.com/blog/contato