Creating a SHA-256 hash in an OrientDB Function

I need to store a password's SHA-256 hash via an OrientDB REST function, so I can use it to authenticate the user. The incoming call to the REST function will contain the password (over HTTPS), but I want to generate a hash and store that instead of the password itself.
However, OrientDB does not expose any helpers to do this, and plain JavaScript does not have built-in helpers for this either. Is there any way I can make this happen?
(One obvious option is to compute the SHA-256 in the middle tier and pass that to OrientDB, but I'd rather keep this in the database tier.)

You can use OSecurityManager from JavaScript functions like this:
return com.orientechnologies.orient.core.security.OSecurityManager.instance().digest2String("password");
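For context, here is a minimal sketch of a complete server-side function built around that call. The User class, its passwordHash property, and the function parameters are illustrative assumptions, not part of the original answer:

// server-side JavaScript function with declared parameters: userName, password
var hash = com.orientechnologies.orient.core.security.OSecurityManager.instance().digest2String(password);
var db = orient.getDatabase(); // current database, available inside server-side functions
// store the hash instead of the plain-text password (class and property names are assumptions)
db.command("UPDATE User SET passwordHash = ? WHERE name = ?", hash, userName);
return hash;

Authenticating later is then a matter of hashing the incoming password the same way and comparing it to the stored value.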

Related

Sign S3 URL in PostgreSQL RDS/Amazon Aurora

There are a lot of image files being returned by the DB (either PostgreSQL RDS or Amazon Aurora). We need to sign the URLs. Currently, a user-defined function or a view returns the records.
I am looking for a way to sign the S3 URL directly in SQL as a user-defined function. Unfortunately, there does not seem to be a way other than using the Python language inside a user-defined function, and Python is not supported as a procedural language in PostgreSQL RDS/Aurora.
Does someone know of a way we can sign the URL directly as part of a SQL query in PostgreSQL RDS/Amazon Aurora?
The database is not the place to perform such an operation.
You should consider either storing an already-signed URL in the database, or rethinking whether your application should be rearchitected.
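For illustration, a minimal sketch of doing the signing in the application tier instead, using the AWS SDK for JavaScript v3 (region, bucket, and key names are placeholders):

const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

const s3 = new S3Client({ region: 'us-east-1' });

// presign each object key returned by the database query
async function signImageUrl(bucket, key) {
  const command = new GetObjectCommand({ Bucket: bucket, Key: key });
  return getSignedUrl(s3, command, { expiresIn: 3600 }); // URL valid for 1 hour
}

The database then only needs to return the bucket/key pairs; the application layer attaches the signatures.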

Can we hash the JWT and save the hashed version in local storage, since it is not safe to put the JWT in local storage?

Since it is not safe to keep the JWT in local storage because of CSRF or XSS attacks, I am wondering: is it possible, for example, to create a hash function that hashes the JWT and keeps the (useless on its own) hashed version in local storage, together with a decode function that reverses it and returns the correct JWT whenever we need to use it?
It seems manageable to me; could there be any possible flaws or issues with this technique?
Hashing the JWT is pretty pointless: a hash is one-way, and anything reversible would need its decode logic (and key) in the browser, where injected script can call it just as easily as your own code can.
A better approach is to use the BFF pattern, as described in these two resources:
alert(‘OAuth 2.0’); // The impact of XSS on OAuth 2.0 in SPAs
The BFF Pattern (Backend for Frontend): An Introduction
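To make the flaw concrete, a toy sketch of the proposed scheme, where base64 stands in for whatever reversible "hash" you would write (the storage key name is invented):

// toy "reversible hash": base64 as a stand-in for any client-side reversible encoding
const decodeStoredToken = (s) => atob(s);
// an XSS payload running on the page can do exactly what your own app code does:
const jwt = decodeStoredToken(localStorage.getItem('obfuscatedJwt') || '');

Whatever decoding step your app performs, an attacker's script in the same origin performs it too, so the obfuscation buys nothing.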

How to recalculate private data hash from Hyperledger Fabric

I need to recompute the hash of private data to prove the integrity of the data. When private data collections are used, the private data are stored in SideDBs and, according to the documentation, the hash of the data is stored on the ledger. Basically the question splits up into two subquestions:
How to access the hash of the private data?
Which method to use to recompute the hash that is saved on the ledger?
Thanks in advance.
I use Hyperledger Fabric v1.4.2 with private data. I followed the marbles example.
I expect to be able to calculate the private data hash and verify that it corresponds to the hash saved on the ledger.
To get the SHA-256 hash (using the Fabric 1.4.x contract API), use:
// inside a contract transaction function; ctx is the transaction context
let pdHashBytes = await ctx.stub.getPrivateDataHash(collectionA, readKey);
let actual_hash = pdHashBytes.toString('hex');
You can recalculate the hash of the private data on Ubuntu as shown below:
echo -n "{\"name\":\"Joe\",\"quantity\":999}" | shasum -a 256
and verify that the two match. That covers the mechanics of the write-private-data-and-verify pattern. Now let's add some information about salting, as mentioned elsewhere in this post.
For most uses of private data, you'll most likely want a random salt so that the private data cannot be brute-forced by other members of the permissioned blockchain network. The salt is passed along in the same transient field as the private data, and it must later be included with the private data itself when recalculating the private data hash. See https://hyperledger-fabric.readthedocs.io/en/release-1.4/private-data-arch.html#protecting-private-data-content
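A minimal recomputation sketch in Node.js (the JSON layout, including where the salt sits inside the document, is an assumption; the bytes hashed must match exactly what was written to the collection):

const crypto = require('crypto');
// the private data as originally written, including the random salt
// that was passed alongside it in the transient field
const privateData = JSON.stringify({ salt: '5f2c...', name: 'Joe', quantity: 999 });
const recomputed = crypto.createHash('sha256').update(privateData).digest('hex');
// compare against the hex string obtained from ctx.stub.getPrivateDataHash(...)
console.log(recomputed);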
Don't use it: private data is a security hole.
It amazes me that nobody has mentioned this before, so I guess I had better point it out now before more damage is done.
The logic behind private data is simple: it puts the data in a local embedded data store and puts a hash of that data on the blockchain.
The issue is that a cryptographic hash is not an encryption mechanism: the same data hashed by anyone using the same (highly standardized) hashing algorithm will always produce the same hash. That is exactly what hash functions are designed for, and it's why we use hashes in digital signatures, to allow anyone to validate the signed data.
However, this also means anyone can "decrypt" the data behind the hash with a dictionary attack.
Hashing is cheap: each hash costs about 3 microseconds on a normal laptop CPU core, so I can generate roughly a billion candidate hashes within an hour on a single core and compare them to the hashes on the Hyperledger Fabric DLT.
And that's just a single core of my laptop, not even 50% of its power.
Why is this dangerous? Because an attacker connected to the blockchain system knows the range of the data being hashed (e.g., trade IDs, item names, bank names, addresses, cell phone numbers), so they can easily mount a dictionary attack and recover the true data behind the hash.
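A toy illustration of that claim (Node.js; the tiny numeric candidate space stands in for any low-entropy field an attacker can enumerate):

const crypto = require('crypto');
// pretend this hex digest was read off the ledger
const target = crypto.createHash('sha256').update('{"name":"Joe","quantity":999}').digest('hex');
// enumerate candidates over the guessable field until the digest matches
for (let q = 0; q < 100000; q++) {
  const candidate = `{"name":"Joe","quantity":${q}}`;
  if (crypto.createHash('sha256').update(candidate).digest('hex') === target) {
    console.log('recovered private data:', candidate);
    break;
  }
}

With a random salt mixed into the hashed bytes, this enumeration becomes infeasible, which is what the next point is about.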
How about adding a salt to each piece of data before hashing? Well, that's one thing Hyperledger Fabric didn't do.
In their defense, Hyperledger didn't implement salting because it is difficult to pass salts to counterparties. You can't use the DLT to pass the salt value because attackers would see it, so you have to create another P2P connection with the counterparty. And if you need to create a connection with every counterparty anyway, what's the point of using blockchain in the first place?
It's just scary that so many people are using this security hole.

NDB urlsafe keys and REST API requests

I was wondering what others are doing to expose REST API endpoints with the Datastore (using App Engine standard). I want to use urlsafe keys, but 1) I'd rather not pass this data directly, as it poses a security risk since app-engine-to-app-engine calls are exposed over a public IP, and 2) the generated keys are very long, which is not great when multiple keys need to be passed as query parameters to form a GET request (and could exceed browser URL length limits).
I was thinking of using some sort of compression on the urlsafe keys, which would solve both 1 and 2, but I want to see if there is a better way to create REST endpoints. Or is some type of compression method already baked into ndb?
Google uses HTTPS internally, so I'm not sure you need to worry about that.
Also, you should probably design your app so that keys are not secret information, such that it is safe to expose them.
I use key IDs for my REST calls, which I believe are 12-digit numbers. That works as long as you know the entity type; if you need to specify the entity type, you could add another parameter to your API call.

Multi-tenant Algolia index

I would like to offer full-text search to my users through their data, and make sure that they can only access the data they own. Are there any patterns that allow this on Algolia? None of the solutions I've considered seem a good fit, so I was wondering if I had overlooked some other options.
We could host each user's data in a separate Algolia app, so that each API key would give access to only the relevant data, but that would quickly become unaffordable, as many apps would hit the 10,000-records limit.
We could host each user's data in a separate index and use team index restrictions, but there does not seem to be an API to manage those, and that would in any case require an Algolia account for each customer, which seems like a misuse of the service (we could e.g. generate email addresses at our domain name).
We could filter queries with some userId to retrieve only the relevant data, but that wouldn't be secure, as someone could use the API key to query Algolia without the filter.
Finally, we could proxy Algolia calls to inject the filter and the API key, but the performance penalty would probably be high.
Any other suggestions? Thanks!
I got a great answer from rayrutjes at Algolia, so I'm pasting it here in case it helps:
The best approach for your use case is to use what we call generated API keys. Here is the documentation for the JavaScript client: https://www.algolia.com/doc/api-client/javascript/api-keys/#generate-key
The usage is fairly simple: you generate an API key on the fly based on your search API key plus some additional query params.
The resulting API key can be used like a standard search API key, with the difference that it can be scoped on a given set of parameters.
Note that the generation of such a scoped API key does not require an actual call to the API.
Also, be sure to generate those scoped API keys in the backend, since you don't want to expose the search API key used for their generation.
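For reference, a minimal sketch of generating such a scoped key in a Node.js backend with the algoliasearch client (the userID attribute is an assumption; it must exist on your records, and the key strings are placeholders):

const algoliasearch = require('algoliasearch');
const client = algoliasearch('YOUR_APP_ID', 'SEARCH_ONLY_API_KEY');
// computed locally with an HMAC; no call to the Algolia API is made
const scopedKey = client.generateSecuredApiKey('SEARCH_ONLY_API_KEY', {
  filters: 'userID:12345' // every query made with this key is forced through this filter
});
// hand scopedKey to the frontend for user 12345; the parent key never leaves the backend

Because the filter is baked into the key's signature, a user cannot strip it off client-side the way they could with a plain query parameter.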