My roommate and I had a conversation about hashing and encryption and their purpose in blockchains and elsewhere. We were curious why we would encrypt something that cannot be decrypted. It seems pointless: if we cannot decrypt some information, we must not need it, which leads to the question of why we would store it at all. Wouldn't that be a waste of storage? Does anyone know why?
My question in short is, why would we encrypt something we cannot decrypt?
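For context, what's usually being stored is a one-way hash, which is kept not so it can be decrypted, but so that a future input can be verified against it. A minimal sketch (the password is illustrative):

```python
import hashlib

# A hash is stored not to be decrypted later, but so that a future input
# can be verified against it. Classic example: password checking.
stored = hashlib.sha256(b"correct horse battery staple").hexdigest()

def check(attempt: bytes) -> bool:
    # Re-hash the attempt and compare; the original value is never recovered.
    return hashlib.sha256(attempt).hexdigest() == stored
```

The system never needs the original value back; it only needs to recognize it when presented again.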
I have a WebApplication, using GWT + Tomcat + Jersey + Postgres. All the important data is encrypted on Postgres with a gpg public key and decrypted with a gpg private key.
My question is: where should I store those keys? Storing them on the local disk sounds very insecure to me, but I don't know where else.
Can someone help out? Thanks in advance.
All security is a trade-off.
(I'm not a crypto/security expert. These are my understandings from what I've read and studied, but if you're doing anything important, get professional advice from someone who's done this a lot for real).
In this case you have a number of choices that differ mainly on how they trade off uptime/convenience with key theft/abuse risk. I'm assuming you're using a GnuPG/OpenPGP library, rather than the command-line tools, but if not "the app" can be considered the GnuPG agent.
1. Store the key unencrypted on disk. The app can use the key whenever it wants, and if the app is restarted, it has immediate access to the key. An attacker who breaks into the system or steals an (unencrypted) backup can use the key easily. Proper backup encryption is vital.

2. A marginal improvement over this is to store the key encrypted, and store the (obfuscated) passphrase for the key elsewhere in the system or in the app binary. It makes life a bit harder for the attacker and means they at least have to spend more time on it, but in most cases they'll still be able to recover the key pretty easily. Proper backup encryption is vital.

3. Store the key encrypted on disk and keep it decrypted in memory after app startup. A human decrypts the key when prompted during app startup; after that, the app can use the key whenever it wants. Theft of the key from disk or backups does the attacker little good; they have to go to the extra effort of recovering the key from the application's memory, or of modifying/wrapping the application to capture the passphrase when an administrator enters it after a crash/restart. The key must be locked into memory that cannot be swapped out.

4. Store the key encrypted on disk and decrypt it only with specific administrator interaction. The app cannot use the key without an admin intervening. The key is pretty safe on disk, and the theft risk from app memory is limited by the short periods it's in memory. However, an attacker who has broken into the system can still modify the app to record the key when it's decrypted, or capture the passphrase. The key must be locked into memory that cannot be swapped out.

5. Store the key on removable storage. Physically insert it at app startup to decrypt the key and keep it in app memory like (3), or insert it whenever the app actually needs to use the key like (4). This makes it a bit harder for the attacker to steal the encrypted key and makes passphrase theft less useful, but it's no harder to modify the app to steal the decrypted key. An attacker who has stolen the passphrase with a wrapper/keylogger/etc. can also just wait until they see the storage inserted and copy the encrypted key. IMO it's not much benefit over a strong passphrase for an encrypted key on disk - it might make life a little harder for the attacker, but it's a lot harder on the admin.

6. Store the key on a smartcard, crypto accelerator, or USB crypto device that's designed never to expose the key, only to perform crypto operations using it. The PKCS#11 standard is widely supported and useful for this. The key (theoretically) cannot be stolen without physically stealing the hardware - there are key-extraction attacks on lots of hardware, but most require lots of time and often physical access. The server can use the key at will (if the accelerator has no timeout/unlock) or only with admin intervention (if the accelerator is locked after each use and must be unlocked by the admin). The attacker can still decrypt data using the accelerator by masquerading as the app, but they've got to do a lot more work and will need ongoing access to the target system. Of course, this option costs more.
Disaster recovery is more challenging for this option; you depend on physical hardware for decrypting your data. If the data center burns down, you're done for. So you need duplicates and/or a very securely stored copy of the key itself. Every duplicate adds risk, of course, especially the one plugged into that "just in case" backup server we don't really use and don't keep the security patches up to date on...
If you go for hardware with a key built in, rather than hardware where you can store but not read the key, you have the added challenge that one day that hardware will be obsolete. Ever tried to get business-critical software that requires an ISA card running on a modern server? It's fun - and one day PCI-X and USB will be like that too. Of course, by then the crypto system you're using might be broken anyway, so you'll need to decrypt all your data and migrate it to another setup regardless. Still, I'd be using hardware where I can generate a key, program it into the hardware, and store the original key in a couple of different forms in a bank safe-deposit box.
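For the "key encrypted on disk, decrypted into memory at startup" approach, the load step can be sketched like this, using the third-party `cryptography` package (the file name is illustrative; in a real app the passphrase would come from `getpass.getpass()` at startup and the raw bytes would never be written back to disk):

```python
# Decrypt a passphrase-protected PEM private key once at startup and hold
# the result in memory. Uses the third-party `cryptography` package.
from cryptography.hazmat.primitives import serialization

def load_key_at_startup(path: str, passphrase: bytes):
    """Decrypt the PEM-encoded private key; raises ValueError on a bad passphrase."""
    with open(path, "rb") as f:
        return serialization.load_pem_private_key(f.read(), password=passphrase)
```

Note that Python gives you no portable way to lock the decrypted key into non-swappable memory, which the options above call for; that part needs OS-level support (e.g. `mlock`).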
Now that you've read that, remember: I'm just an interested not-even-hobbyist. Go ask a professional. When they tell you how totally wrong I am, come here and explain :-)
Whatever you do, DO NOT invent your own crypto system.
I'm trying to build an API service for a system that (for many reasons) does not keep its main database completely secure.

So my question is: how do I salt the HMAC in such a way that even if the main database is compromised, an attacker still cannot use the API key? This effectively means the HMAC key is not pre-shared in plaintext but in some other way, and I'm not able to figure out how.
If your storage is insecure, you're going to have an impossible time encrypting data and still being able to decrypt it later, unless you either store the encryption key somewhere (which, from your description, would still be insecure) or receive the key from the remote side of your service.
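For the narrower goal of making a stolen database unable to validate or forge API keys, one common pattern (assuming the server itself is more trustworthy than the database) is to store only an HMAC of each key, computed under a secret held outside the database:

```python
import hashlib
import hmac
import os

# SERVER_SECRET lives outside the database (environment variable, config
# file, or HSM); the variable name and fallback value are illustrative.
SERVER_SECRET = os.environ.get("API_HMAC_SECRET", "change-me").encode()

def fingerprint(api_key: str) -> str:
    # This HMAC, not the raw API key, is what the insecure database stores.
    return hmac.new(SERVER_SECRET, api_key.encode(), hashlib.sha256).hexdigest()

def verify(api_key: str, stored_fingerprint: str) -> bool:
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(fingerprint(api_key), stored_fingerprint)
```

An attacker with only the database dump sees fingerprints, not keys, and cannot compute new fingerprints without the server secret.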
I'm storing some healthcare data on a mobile phone, and I'd like to know the best encryption scheme to keep the data secure. It's basically a bunch of model objects that I'm serializing and storing using NSKeyedArchiver / the equivalent on BlackBerry (the name eludes me for now).
Any tips? I don't want to make up security protocols as I go along, but one of the other threads suggested the following approach.
Generate a public / private key pair
Store the public key
Encrypt the private key with a hash of the user's password.
Use the public key to encrypt the byte stream.
Decrypt the private key and keep it in memory whenever the user logs in, and decrypt the stored data as needed.
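For concreteness, the steps above could be sketched like this, using the third-party `cryptography` package (all names and parameters are illustrative, not a vetted design):

```python
# Sketch of the scheme above: private key stored encrypted under the user's
# password; public key encrypts data at rest. Third-party `cryptography` pkg.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def make_keypair(password: bytes):
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    # The private key is only ever stored password-protected.
    encrypted_private = key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.BestAvailableEncryption(password))
    public = key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    return encrypted_private, public

def encrypt_record(public_pem: bytes, plaintext: bytes) -> bytes:
    return serialization.load_pem_public_key(public_pem).encrypt(plaintext, OAEP)

def decrypt_record(private_pem: bytes, password: bytes, ciphertext: bytes) -> bytes:
    key = serialization.load_pem_private_key(private_pem, password=password)
    return key.decrypt(ciphertext, OAEP)
```

One caveat: RSA-OAEP can only encrypt a couple hundred bytes directly, so real data would be encrypted with a random symmetric key that is itself RSA-encrypted (hybrid encryption).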
Is there a more standard way of doing this?
Thanks,
Teja.
Edit: I appreciate it that you're trying to help me, but the things currently being discussed are business level discussions, on which I have no control of. So rephrasing my question, if you ignore that it's healthcare data, but some confidential data, say a password, how would you go about doing it?
There might be an easier way to store data securely. With iOS 4.0, Apple introduced system-provided encryption of application documents. This means the OS is responsible for doing all the encryption and decryption in a fairly transparent way.
Applications that work with sensitive user data can now take advantage of the built-in encryption available on some devices to protect that data. When your application designates a particular file as protected, the system stores that file on-disk in an encrypted format. While the device is locked, the contents of the file are inaccessible to both your application and to any potential intruders. However, when the device is unlocked by the user, a decryption key is created to allow your application to access the file.
So the files can only be read back in unencrypted form while your app is active. But the nice thing is that they are always encrypted on disk. So even if someone jailbreaks the device or backs it up, the retrieved files are worthless.
This was probably introduced to conform to some specific data security standard that is required. I can't find that anywhere though.
For more info see the iOS 4.0 release notes.
http://en.wikipedia.org/wiki/HIPAA
Make sure you read and understand this!
edit: Sorry, I didn't even bother to check where the OP is from, but even if they aren't from the USA, there are still some good practices to follow in HIPAA.
HIPAA is a business-practice and total-system-level privacy/security regulation. As such, an app can't comply by itself on arbitrary hardware for an arbitrary user. You need to determine how your app fits into a client health-care provider's total regulatory-compliance process before you can determine what algorithm might be found to comply with that process.
My best advice would be: don't store sensitive data on the user's mobile phone.

If that is not an option for you, then some kind of public/private-key encryption, such as the one you described, would be the next best option.
Here is what I got for a webapp login scheme.
Present in the database would be two salts and hmac(hmac(password, salt1), salt2).
When the user goes to the login page, he gets salt1.

If he has JavaScript enabled, instead of sending the plaintext password, the browser sends hmac(password, salt1).

If he does not have JavaScript, the plaintext password is sent.

So, on the server side, when getting a login request, we'd first check what was sent (passwordSent) against hmac(passwordSent, salt2). If that does not match, we'd try hmac(hmac(passwordSent, salt1), salt2).
Someone getting access to the database should not be able to log in with the password hashes, and I don't think (but I may be wrong) that multiple HMACs diminish the hash's resistance.

Does any crypto expert see an obvious error I may have made?
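For reference, the two-path verification described in the question can be sketched as follows (the post doesn't say which HMAC argument is the key, so salt-as-key is an assumption):

```python
import hashlib
import hmac

def h(salt: bytes, msg: bytes) -> bytes:
    # hmac(x, salt) from the question, with the salt as the HMAC key.
    return hmac.new(salt, msg, hashlib.sha256).digest()

def verify(password_sent: bytes, salt1: bytes, salt2: bytes, stored: bytes) -> bool:
    # JS client already sent hmac(password, salt1):
    if hmac.compare_digest(h(salt2, password_sent), stored):
        return True
    # No-JS client sent the plaintext password:
    return hmac.compare_digest(h(salt2, h(salt1, password_sent)), stored)
```

Note that, as the answers below point out, this means the intermediate hash itself works as a second valid password.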
This looks a little like security through obscurity. What is the point of hashing the password with JavaScript on the client side if you still accept a plaintext password from the client?
You didn't mention whether this is over HTTPS; if you aren't using HTTPS, then you may as well have no passwords at all. Without HTTPS, any MITM can see the salt you are sending as well as the JavaScript used to hash the original password, so you have gained nothing.
As for your concern about the possibility of HMAC collisions between two salts: that is probably very unlikely, depending on your hash algorithm and on how securely you keep your salt values. Even with MD5, which has had collision attacks discovered and has a set of rainbow tables, you will be OK if you keep your salt very, very safe.
"Please, everybody, please stop trying to implement custom crypto systems unless you have a background in cryptography. You will screw it up." --Aaronaught
Well said!
Sounds pretty far-fetched to me. If the objective is to prevent a man-in-the-middle attack by virtue of the plaintext password being sent over HTTP, then use SSL.
Hashing on the client side accomplishes nothing: unless the salt is actually a nonce/OTP, and not stored in the database, a man in the middle can simply look at the hash sent by the original client and pass it along to the server. The server won't know the difference.
If this is supposed to make it harder to crack if someone gets hold of the database, it won't. If the hashing function is cheap, like MD5, this won't be any more difficult to crack than one salt and one hash (in fact it's actually weaker, since hashing is a lossy function). And if you use a strong hashing function like bcrypt, it's already good enough.
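"One salt and one strong hash" looks roughly like this; stdlib `scrypt` stands in here for bcrypt, and the cost parameters are illustrative:

```python
import hashlib
import hmac
import os

# A deliberately slow, salted password hash. scrypt is a stand-in for
# bcrypt; n/r/p are illustrative cost parameters.
def hash_password(password: str, salt: bytes = None):
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)
    return hmac.compare_digest(candidate, digest)
```

The point is that the slowness is tunable and per-password salts defeat rainbow tables, with no layered-HMAC scheme needed.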
Please, everybody, please stop trying to implement custom crypto systems unless you have a background in cryptography. You will screw it up.
Probably obvious to you already, but you're effectively letting people log in with two different passwords in that setup.
If you want to give people the option of sending their passwords with encryption, I wouldn't tie that to anything strictly client-side, and just force HTTPS, as Harley already implied.
You might want to look at HTTP Digest authentication. It is a standardized protocol that avoids sending the clear-text password in any case.
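The core of Digest authentication (RFC 2617, original scheme without the "qop" extensions; all values below are illustrative) is that the password only ever appears inside a hash:

```python
import hashlib

# RFC 2617 Digest auth, base scheme: the client proves knowledge of the
# password without ever sending it in clear text.
def md5hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user: str, realm: str, password: str,
                    method: str, uri: str, nonce: str) -> str:
    ha1 = md5hex(f"{user}:{realm}:{password}")  # the only use of the password
    ha2 = md5hex(f"{method}:{uri}")
    return md5hex(f"{ha1}:{nonce}:{ha2}")       # this goes on the wire instead
```

The server issues a fresh nonce per challenge, so a captured response can't simply be replayed later.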
I need my application to use the client's phone number to generate a unique ID for my web service. Of course a phone number is unique, but it must be secured. So it can be implemented with symmetric encryption (asymmetric will come later, because of its resource cost), but I do not know where to store the encryption key.
1. I do not know why, but it seems bad to store the key as a static field in code. Maybe because it's too easy to read it from there, even without running the application.

2. It seems better to store the key in the Keychain and fetch it from there on request. But to avoid #1, the key would need to be installed into the Keychain during the installation process. Is that possible? How?

3. I do not know what certificates do. Are they helpful for this problem?

4. Transferring the key from a server is also a bad idea, because it's very easy to sniff.
The way you solve the sniffing problem is that you communicate over HTTPS for your web service. NSURLConnection will do this easily, and all web service engines I know of handle HTTPS without trouble. This will get rid of many of your problems right away.
On which machine is the 100-1000x decrypt the bottleneck? Is your server so busy that it can't do an asym decryption? You should be doing this so infrequently on the phone that it should be irrelevant. I'm not saying asym is the answer here; only that its performance overhead shouldn't be the issue for securing a single string, decrypted once.
Your service requires SMS such that all users must provide their phone number? Are you trying to automate grabbing the phone number, or do you let the user enter it themselves? Automatically grabbing the phone number through the private APIs (or the non-private but undocumented configuration data) and sending that to a server is likely to run afoul of terms of service. This is a specific use-case Apple wants to protect the user from. You definitely need to be very clear in your UI that you are doing this and get explicit user permission.
Personally I'd authenticate as follows:
Server sends challenge byte
Client sends UUID, date, and hash(UUID+challenge+userPassword+obfuscationKey+date).
Server calculates same, makes sure date is in legal range (30-60s is good) and validates.
At this point I generally have the server generate a long, sparse, random session id which the client may use for the remainder of this "session" (anywhere from the next few minutes to the next year) rather than re-authenticating in every message.
ObfuscationKey is a secret key you hardcode into your program and server to make it harder for third parties to create bogus clients. It is not possible, period, not possible, to securely ensure that only your client can talk to your server. The obfuscationKey helps, however, especially on iPhone where reverse engineering is more difficult. Using UUID also helps because it is much less known to third-parties than phone number.
Note "userPassword" in there. The user should authenticate using something only the user knows. Neither the UUID nor the phone number is such a thing.
The system above, plus HTTPS, should be straightforward to implement (I've done it many times in many languages), have good performance, and be secure to an appropriate level for a broad range of "appropriate."
I don't think you're going to be able to do what you want securely with symmetric encryption. With asymmetric encryption you can send the public key without worrying about it too much (the only threat is someone substituting your key with their own) and validate the encrypted unique ID on your server with the private key.