I'm trying to build an API service for a system that (for many reasons) does not keep its main database completely secure.
So my question is: how do I salt the HMAC in such a manner that even if the main database is compromised, you still cannot use the API key? This effectively means that the HMAC key is not preshared in plaintext but in some other form, but I'm not able to figure out how.
If your storage is insecure, you're going to have an impossible time encrypting the data and still being able to decrypt it later, unless you either store the encryption key somewhere (which, from your description, would still be insecure) or receive the key from the remote side of your service.
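To make the "store the key somewhere else" idea concrete, here is a minimal sketch (not the answer's own code): keep a pepper outside the database and store only an HMAC of each API key, so a database dump alone does not yield usable keys. The environment variable, function names, and hex encoding are illustrative assumptions.

    import Foundation
    import CryptoKit

    // Pepper kept outside the database (here: an environment variable); this is an
    // illustrative assumption, not part of the original question's setup.
    let pepper = SymmetricKey(data: Data((ProcessInfo.processInfo.environment["API_KEY_PEPPER"] ?? "").utf8))

    // The database stores only this digest, never the raw API key.
    func storedDigest(for apiKey: String) -> String {
        let mac = HMAC<SHA256>.authenticationCode(for: Data(apiKey.utf8), using: pepper)
        return mac.map { String(format: "%02x", $0) }.joined()
    }

    // On each request, recompute the digest from the presented key and compare it with
    // the value in the database (a real implementation should compare in constant time).
    func isValid(presentedKey: String, digestFromDatabase: String) -> Bool {
        storedDigest(for: presentedKey) == digestFromDatabase
    }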
I have read quite a bit about JWT. I understand that with JWT you do not need a central database to check tokens against, which is great. However, don't we still need to keep the JWT secret key securely stored and in sync across the different services? Why is this not considered "state" (albeit a much smaller one than a token database)?
Because the secret key is static; it doesn't change regularly. The main problem with stateful applications is that you have to sync the state between your app server instances (for example through a database, as you said), which has the potential to create a big bottleneck. With the secret, you can just have it in a text file that exists on every server instance and not worry about keeping it synchronized between servers.
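For illustration, here is a rough sketch of what that looks like in practice: every instance reads the same secret from a local file and can verify an HS256 token on its own, with no shared token store. The file path and the manual base64url handling are assumptions made for the sketch, not part of the answer.

    import Foundation
    import CryptoKit

    // Same static secret file deployed to every server instance (path is an assumption).
    let jwtSecret = SymmetricKey(data: (try? Data(contentsOf: URL(fileURLWithPath: "/etc/myapp/jwt-secret"))) ?? Data())

    // Verify the HS256 signature of "header.payload.signature" without touching any shared state.
    func hasValidSignature(_ token: String) -> Bool {
        let parts = token.split(separator: ".")
        guard parts.count == 3 else { return false }

        // base64url -> base64 with padding
        var sig = parts[2].replacingOccurrences(of: "-", with: "+")
                          .replacingOccurrences(of: "_", with: "/")
        while sig.count % 4 != 0 { sig += "=" }
        guard let signature = Data(base64Encoded: sig) else { return false }

        let signingInput = Data((String(parts[0]) + "." + String(parts[1])).utf8)
        return HMAC<SHA256>.isValidAuthenticationCode(signature, authenticating: signingInput, using: jwtSecret)
    }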
I've been writing client certificate code for iOS using many of the resources here: iOS Client Certificates and Mobile Device Management. I've broken the process out into these steps:
1. Get the Cert via email or AppConfig.
2. Store the Cert (securely).
3. Extract Identity and Trust from the Cert.
4. Intercept failed web requests, create an NSURLConnection to manually handle auth responses as per Eskimo's advice.
5. Turn Identity and Trust into the auth response challenge.
My problem is step 2. The SecPKCS12Import function appears to automatically add the identity to the keychain, as well as return all identities and trusts from the certificates, thus eliminating the need for the commonly given convenience function ExtractIdentityAndTrust().
But on my second run, I will need the Identity and Trust, not just the Identity. My current plan is to store the entire cert raw using SecItemAdd, test for duplicates, and use it, but I feel like I should be able to just use SecPKCS12Import and later grab the result without also using SecItemAdd.
The documentation that confuses me most is SecPKCS12Import's. I would like a clearer understanding of what it does vs. SecItemAdd, and whether SecItemCopyMatching() amounts to the same thing in the end when I just want to grab the certificate. Is the Trust not needed, or am I being too literal and it's stored with the Identity?
The general save, use, store, grab flow is working, but I'm using NSData and would prefer to store it correctly.
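For reference, the import step in question looks roughly like this (a sketch only; the passphrase handling and names are placeholders, not code from the project):

    import Foundation
    import Security

    // SecPKCS12Import parses the PKCS#12 blob and, as observed above, on iOS it also
    // adds the imported identity to the keychain as a side effect. Each returned
    // dictionary carries the identity and an evaluated trust object.
    func importIdentity(from p12Data: Data, passphrase: String) -> (SecIdentity, SecTrust)? {
        let options = [kSecImportExportPassphrase as String: passphrase] as CFDictionary
        var rawItems: CFArray?
        guard SecPKCS12Import(p12Data as CFData, options, &rawItems) == errSecSuccess,
              let items = rawItems as? [[String: Any]],
              let first = items.first else { return nil }

        let identity = first[kSecImportItemIdentity as String] as! SecIdentity
        let trust = first[kSecImportItemTrust as String] as! SecTrust
        return (identity, trust)
    }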
I eventually became more familiar with the Keychain and with Identity vs. Trust, and learned this:
The Trust is a cert stored in a place that determines who your custom Certificate Authorities are. It only needs to be tested once, which is why it isn't stored.
The Identity is also a certificate, but it is needed later. The keychain considers certificates/identities to be a special, unique kind of item, so it is stored as its own thing, which is why all the keychain code looks different from just securing a password.
Basically, storing the Trust is unnecessary for future reference, but it should be checked as good practice. I personally think an expiration check might be handy.
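In code, that works out to something like the sketch below (assumed query attributes and helper names, not the exact project code): on a later run, the identity that SecPKCS12Import put in the keychain can be fetched with SecItemCopyMatching, and only the identity, not the trust, is needed to answer the challenge in step 5.

    import Foundation
    import Security

    // Fetch the identity that SecPKCS12Import already added to the keychain;
    // there is no need to SecItemAdd the raw PKCS#12 data yourself.
    func storedIdentity() -> SecIdentity? {
        let query: [String: Any] = [
            kSecClass as String: kSecClassIdentity,
            kSecReturnRef as String: true,
            kSecMatchLimit as String: kSecMatchLimitOne
        ]
        var result: CFTypeRef?
        guard SecItemCopyMatching(query as CFDictionary, &result) == errSecSuccess else { return nil }
        return (result as! SecIdentity)
    }

    // Build the credential for the client-certificate challenge; the trust was
    // evaluated at import time and is not needed here.
    func clientCredential(for identity: SecIdentity) -> URLCredential {
        var certificate: SecCertificate?
        SecIdentityCopyCertificate(identity, &certificate)
        var chain: [Any] = []
        if let certificate = certificate { chain.append(certificate) }
        return URLCredential(identity: identity, certificates: chain, persistence: .forSession)
    }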
I have a web application in which registered users enter data into a few forms; when they log in at a later stage, they see the forms populated with their data. The data is stored on a PostgreSQL server at the same hosting provider as the web server.
I'd like to encrypt the data stored in PostgreSQL to prevent it from being read by the hosting provider.
I don't think this is possible, because wherever the encryption key is kept, if the web server has to access it in order to serve pages to users, then it can also use it to decrypt and read the data. Anyway, I preferred to ask just to be sure I'm not missing something.
You could encrypt every piece of data put into the database, but for most applications that would be impractical: slow and extremely inconvenient.
A much better option would be to use a dm-crypt encrypted block device for the PostgreSQL data directory. It would be transparent to the database, so all features would work fine.
But you'd have to save the encryption key somewhere in the database server's filesystem, or your server won't start without manual interaction. So a malicious hosting provider can still access all your data. Even if you don't store the key in the filesystem and type it in yourself while mounting the data volume, the key still has to reside in server memory, so a malicious hosting provider can read it there.
There's not much you can do: you have to trust your hosting provider somewhat. You can only make a malicious one's life a little bit harder.
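To illustrate the first option (encrypting every piece of data in the application before it reaches PostgreSQL), here is a minimal AES-GCM sketch. The key file path is an assumption, and, as noted above, the web server still needs the key, so this only raises the bar rather than locking the provider out.

    import Foundation
    import CryptoKit

    // Application-level field encryption; the key must live somewhere the web server
    // can read it (path below is an assumption), which is exactly the limitation
    // discussed above.
    let fieldKey = SymmetricKey(data: (try? Data(contentsOf: URL(fileURLWithPath: "/etc/myapp/field-key"))) ?? Data())

    // Encrypt one field; the combined form (nonce + ciphertext + tag) can be stored
    // in a bytea column.
    func encryptField(_ plaintext: String) throws -> Data {
        try AES.GCM.seal(Data(plaintext.utf8), using: fieldKey).combined!
    }

    // Decrypt a field read back from the database.
    func decryptField(_ stored: Data) throws -> String {
        let box = try AES.GCM.SealedBox(combined: stored)
        return String(decoding: try AES.GCM.open(box, using: fieldKey), as: UTF8.self)
    }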
I have a web application using GWT + Tomcat + Jersey + PostgreSQL. All the important data is encrypted in PostgreSQL with a GPG public key and decrypted with a GPG private key.
My question is: where do I store those keys? It sounds very insecure to me to store them on the local disk, but I don't know where else to put them.
Can someone help out? Thanks in advance.
All security is a trade-off.
(I'm not a crypto/security expert. These are my understandings from what I've read and studied, but if you're doing anything important, get professional advice from someone who's done this a lot for real).
In this case you have a number of choices that differ mainly in how they trade off uptime/convenience against key theft/abuse risk. I'm assuming you're using a GnuPG/OpenPGP library rather than the command-line tools, but if not, "the app" below can be taken to include the GnuPG agent.
1. Store the key unencrypted on disk. The app can use the key whenever it wants, and if the app is restarted it has immediate access to the key. An attacker who breaks into the system or steals an (unencrypted) backup can use the key easily. Proper backup encryption is vital.
2. A marginal improvement on that: store the key encrypted, and store the (obfuscated) passphrase for the key elsewhere in the system or in the app binary. It makes life a bit harder for the attacker and means they at least have to spend more time on it, but in most cases they'll still be able to recover it pretty easily. Proper backup encryption is vital.
3. Store the key encrypted on disk and keep it decrypted in memory from app startup (a sketch of this option appears below). A human decrypts the key when prompted during startup; after that, the app can use the key whenever it wants. Theft of the key from disk or backups does the attacker little good; they have to go to the extra effort of recovering the key from the application's memory, or of modifying/wrapping the application to capture the passphrase when the administrator enters it after a crash/restart. The key must be locked in memory that cannot be swapped out.
4. Store the key encrypted on disk and decrypt it only with specific administrator interaction. The app cannot use the key without an admin intervening. The key is pretty safe on disk, and the theft risk from app memory is limited by the short periods it's in memory. However, an attacker who has broken into the system can still modify the app to record the key when it's decrypted, or capture the passphrase. The key must be locked in memory that cannot be swapped out.
5. Store the key on removable storage. Physically insert it at app startup to decrypt the key and keep it in app memory like (3), or insert it whenever the app actually needs to use the key like (4). This makes it a bit harder for the attacker to steal the encrypted key and makes passphrase theft less useful, but no harder to modify the app to steal the decrypted key. They can also just wait until they see the storage inserted and copy the encrypted key, if they've stolen the passphrase with a wrapper/keylogger/etc. IMO it's not much benefit over a strong passphrase for the encrypted key on disk: it might make life a little harder for the attacker, but it's a lot harder on the admin.
6. Store the key on a smartcard, crypto accelerator, or USB crypto device that's designed never to permit the key to be exposed, only to perform crypto operations using it. The PKCS#11 standard is widely supported and useful for this. The key (theoretically) cannot be stolen without physically stealing the hardware; there are key extraction attacks on lots of hardware, but most require lots of time and often physical access. The server can use the key at will (if the accelerator has no timeout/unlock) or only with admin intervention (if the accelerator is locked after each use and must be unlocked by the admin). The attacker can still decrypt data using the accelerator by masquerading as the app, but they've got to do a lot more work and will need ongoing access to the target system. Of course, this one costs more.
Disaster recovery is more challenging for this last option: you depend on physical hardware for decrypting your data, so if the data center burns down, you're done for. You therefore need duplicates and/or a very securely stored copy of the key itself. Every duplicate adds risk, of course, especially the one plugged into that "just in case" backup server we don't really use and don't keep the security patches up to date on...
If you go for hardware with a key built in, rather than one where you can store but not read the key, you have the added challenge that one day that hardware will be obsolete. Ever tried to get business-critical software that requires an ISA card running on a modern server? It's fun, and one day PCI/X and USB will be like that too. Of course, by then the crypto system you're using might be broken anyway, so you'll need to decrypt all your data and migrate it to another setup regardless. Still, I'd be using hardware where I can generate a key, program it into the hardware, and store the original key in a couple of different forms in a bank safe deposit box.
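As promised in option 3, here is a rough sketch of "decrypt at startup, keep only in memory" (sketched in Swift here, though the idea is language-agnostic). All names are illustrative, and HKDF stands in for what should really be a slow, memory-hard password KDF such as PBKDF2 or scrypt:

    import Foundation
    import CryptoKit

    // Option 3 sketch: the private key sits on disk encrypted; an admin types a
    // passphrase at startup; the decrypted key material lives only in memory
    // (ideally locked so it cannot be swapped out).
    func unlockPrivateKey(encryptedKeyURL: URL, passphrase: String, salt: Data) throws -> Data {
        // HKDF keeps the sketch short; a real deployment should use a slow,
        // memory-hard KDF for a low-entropy passphrase.
        let wrappingKey = HKDF<SHA256>.deriveKey(
            inputKeyMaterial: SymmetricKey(data: Data(passphrase.utf8)),
            salt: salt,
            info: Data("key-wrapping".utf8),
            outputByteCount: 32)

        let sealedBox = try AES.GCM.SealedBox(combined: try Data(contentsOf: encryptedKeyURL))
        return try AES.GCM.open(sealedBox, using: wrappingKey)  // hold in memory only; never write back to disk
    }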
Now that you've read that, remember: I'm just an interested not-even-hobbyist. Go ask a professional. When they tell you how totally wrong I am, come here and explain :-)
Whatever you do, DO NOT invent your own crypto system.
I need my application to use the client's phone number to generate a unique ID for my web service. Of course a phone number is unique, but it must be secured. So this can be implemented with symmetric encryption (asymmetric will come later, because of its resource cost), but I do not know where to store the encryption key.
1. I do not know why, but it seems bad to store the key as a static field in the code. Maybe because it's too easy to read it from there, even without running the application.
2. It seems better to store the key in the Keychain and get it from there on request. But to avoid #1, it would be necessary to install the key into the Keychain during the installation process. Is that possible? How do I do that?
3. I do not know what certificates do. Are they helpful for this problem?
4. Transferring the key from the server is also a bad idea, because it's very easy to sniff it.
The way you solve the sniffing problem is that you communicate over HTTPS for your web service. NSURLConnection will do this easily, and all web service engines I know of handle HTTPS without trouble. This will get rid of many of your problems right away.
On which machine is the 100-1000x decrypt the bottleneck? Is your server so busy that it can't do an asym decryption? You should be doing this so infrequently on the phone that it should be irrelevant. I'm not saying asym is the answer here; only that its performance overhead shouldn't be the issue for securing a single string, decrypted once.
Your service requires SMS such that all users must provide their phone number? Are you trying to automate grabbing the phone number, or do you let the user enter it themselves? Automatically grabbing the phone number through the private APIs (or the non-private but undocumented configuration data) and sending that to a server is likely to run afoul of terms of service. This is a specific use-case Apple wants to protect the user from. You definitely need to be very clear in your UI that you are doing this and get explicit user permission.
Personally I'd authenticate as follows:
1. Server sends challenge byte.
2. Client sends UUID, date, and hash(UUID+challenge+userPassword+obfuscationKey+date).
3. Server calculates same, makes sure date is in legal range (30-60s is good) and validates.
At this point I generally have the server generate a long, sparse, random session id which the client may use for the remainder of this "session" (anywhere from the next few minutes to the next year) rather than re-authenticating in every message.
ObfuscationKey is a secret key you hardcode into your program and server to make it harder for third parties to create bogus clients. It is not possible, period, not possible, to securely ensure that only your client can talk to your server. The obfuscation key helps, however, especially on iPhone, where reverse engineering is more difficult. Using the UUID also helps because it is much less known to third parties than the phone number.
Note "userPassword" in there. The user should authenticate using something only the user knows. Neither the UUID nor the phone number is such a thing.
The system above, plus HTTPS, should be straightforward to implement (I've done it many times in many languages), have good performance, and be secure to an appropriate level for a broad range of "appropriate."
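For concreteness, here is a minimal sketch of the client's side of the scheme above. The separator, date format, and hard-coded obfuscation key value are assumptions; the server just has to build the exact same string before hashing and check that the date is within its allowed window.

    import Foundation
    import CryptoKit

    // Client side of the challenge/response outlined above; all concrete choices
    // (separator, ISO 8601 dates, hex digest) are assumptions for this sketch.
    func authResponse(uuid: String, challenge: String, userPassword: String) -> (date: String, digest: String) {
        let obfuscationKey = "hard-coded-app-secret"   // shared with the server; obfuscation only, not real security
        let date = ISO8601DateFormatter().string(from: Date())
        let message = [uuid, challenge, userPassword, obfuscationKey, date].joined(separator: "|")
        let digest = SHA256.hash(data: Data(message.utf8))
            .map { String(format: "%02x", $0) }
            .joined()
        return (date, digest)   // send uuid, date, and digest over HTTPS; server recomputes and compares
    }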
I don't think you're going to be able to do what you want securely with symmetric encryption. With asymmetric encryption you can send the public key without worrying about it too much (the only threat is someone substituting your key with their own) and validate the encrypted unique ID on your server with the private key.