I'm currently experimenting with both public-key and personal file encryption. The programs I use provide 2048-bit RSA and 256-bit AES encryption respectively. As a newbie to this stuff (I've only been a cypherpunk for about a month now - and am a little new to information systems) I'm not familiar with RSA algorithms, but that's not relevant here.
I know that unless some secret lab or NSA program happens to have a quantum computer, it is currently impossible to brute-force the level of security these programs provide, but I was wondering how much more secure it would be to encrypt a file over and over again.
In a nutshell, what I would like to know is this:
When I encrypt a file using 256-bit AES, and then encrypt the already encrypted file once more (using 256 again), do I now have the equivalent of 512-bit AES security? This is pretty much a question of whether the number of possible keys a brute-force attack would potentially have to test is 2 x 2^256 or (2^256)^2. Being pessimistic, I think it is the former, but I was wondering whether 512-bit AES really is achievable by simply encrypting with 256-bit AES twice.
Once a file is encrypted several times so that you must keep using different keys or keep putting in passwords at each level of encryption, would someone** even recognize whether they have gotten through the first level of encryption? I was thinking that perhaps - if one were to encrypt a file several times requiring several different passwords - a cracker would have no way of knowing whether they have even broken through the first level of encryption, since all they would have would still be an encrypted file.
Here's an example:
Decrypted file
DKE$jptid UiWe
oxfialehv u%uk
Pretend for a moment that the last sequence is what a cracker had to work with - to brute-force their way back to the original file, the result they would have to get (prior to cracking through the next level of encryption) would still appear to be a totally useless file (the second line) once they break through the first level of encryption. Does this mean that anyone attempting to use brute-force would have no way of getting back to the original file since they presumably would still see nothing but encrypted files?
These are basically two questions that deal with the same thing: the effect of encrypting the same file over and over again. I have searched the web to find out what effect repeated encryption has on making a file secure, but aside from reading an anecdote somewhere that the answer to the first question is no, I have found nothing that pertains to the second spin on the same topic. I am especially curious about that last question.
**Assuming hypothetically that they somehow brute-forced their way through weak passwords - since this appears to be a technological possibility with 256-AES right now if you know how to make secure ones...
In general, if you encrypt a file with k-bit AES then again with k-bit AES, you only get (k+1) bits of security, rather than 2k bits of security, because of a meet-in-the-middle attack. The same holds for most types of encryption, like DES. (Note that triple-DES is not simply three rounds of encryption for this reason.)
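To make the meet-in-the-middle idea concrete, here is a toy sketch in Python using a made-up, invertible 8-bit "cipher" (purely illustrative, nothing like AES): recovering a double-encryption key pair takes roughly 2 x 2^8 operations rather than 2^16, because the attacker meets in the middle.
def E(k, x):                 # toy 8-bit "block cipher", NOT a real cipher
    return ((x ^ k) * 7 + k) % 256
def D(k, y):                 # its inverse (183 is the inverse of 7 mod 256)
    return (((y - k) * 183) % 256) ^ k
k1, k2 = 42, 201             # secret double-encryption keys the attacker wants
plaintext = 123
ciphertext = E(k2, E(k1, plaintext))
middle = {}                  # phase 1: encrypt the known plaintext under every first-key guess
for g1 in range(256):
    middle.setdefault(E(g1, plaintext), []).append(g1)
candidates = []              # phase 2: decrypt the ciphertext under every second-key guess
for g2 in range(256):
    for g1 in middle.get(D(g2, ciphertext), []):
        candidates.append((g1, g2))
print((k1, k2) in candidates)   # True: found with ~2*256 steps instead of 256*256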
Further, encrypting a file with method A and then with method B need not be even as strong as encrypting with method B alone! (This would rarely be the case unless method A is seriously flawed, though.) In contrast, you are guaranteed to be at least as strong as method A. (Anyone remembering the name of this theorem is encouraged to leave a comment; I've forgotten.)
Usually you're much better off simply choosing a single method as strong as possible.
For your second question: Yes, with most methods, an attacker would know that the first layer had been compromised.
More an opinion here...
First, when computers are strong enough to do a brute-force attack on AES-256, for example, they will also be strong enough for iterations of the same... doubling or tripling the time or effort is insignificant at that level.
Next, such considerations can be moot depending on the application you are trying to use this encryption in... The "secrets" you will need to carry become bigger (the number of iterations and all the different keys, if in fact they are different), and the time to do the encryption and the decryption will also increase.
My hunch is that iterating the encryption does not help much. Either the algorithm is strong enough to sustain a brute-force attack or it is not. The rest is all in the protection of the keys.
More practically, do you think your house is more protected if you have three identical or similar locks on your front door? (And that includes the number of keys for you to carry around, don't lose those keys, make sure the windows and back door are secured also...)
Question 1:
The size of the solution space is going to be the same for two passes of the 256-bit key as the 512-bit key, since 2^(256+256) = 2^512
The actual running time of each decrypt() may increase non-linearly as the key size grows (it depends on the algorithm); in this case I think brute-forcing the 256+256 scheme would run faster than the 2^512 search, but it would still be infeasible.
Question 2:
There are probably ways to identify certain ciphertext. I wouldn't be surprised if many algorithms leave some signature or artifacts that could be used for identification.
Related
Is it possible to reverse a SHA-1?
I'm thinking about using a SHA-1 to create a simple lightweight system to authenticate a small embedded system that communicates over an unencrypted connection.
Let's say that I create a SHA-1 like this, with input from a "secret key", and spice it with a timestamp so that the SHA-1 changes all the time.
sha1("My Secret Key"+"a timestamp")
Then I include this SHA-1 in the communication, and the server can do the same calculation. And hopefully, nobody would be able to figure out the "secret key".
But is this really true?
If you know that this is how I did it, you would know that I did put a timestamp in there and you would see the SHA-1.
Can you then use those two and figure out the "secret key"?
secret_key = bruteforce_sha1(sha1, timestamp)
Note1:
I guess you could brute force in some way, but how much work would that actually be?
Note2:
I don't plan to encrypt any data, I just would like to know who sent it.
No, you cannot reverse SHA-1, that is exactly why it is called a Secure Hash Algorithm.
What you should definitely be doing, though, is include the message being transmitted in the hash calculation. Otherwise a man-in-the-middle could intercept the message, and use the signature (which only contains the sender's key and the timestamp) to attach it to a fake message (where it would still be valid).
And you should probably be using SHA-256 for new systems now.
sha("My Secret Key"+"a timestamp" + the whole message to be signed)
You also need to additionally transmit the timestamp in the clear, because otherwise you have no way to verify the digest (other than trying a lot of plausible timestamps).
Whether a brute-force attack is feasible depends on the length of your secret key.
The security of your whole system would rely on this shared secret (because both sender and receiver need to know it, but no one else). An attacker would try to go after the key (either by brute-force guessing or by trying to get it from your device) rather than trying to break SHA-1.
SHA-1 is a hash function that was designed to make it impractically difficult to reverse the operation. Such hash functions are often called one-way functions or cryptographic hash functions for this reason.
However, SHA-1's collision resistance was theoretically broken in 2005. This allows finding two different inputs that have the same hash value faster than the generic birthday attack, which has a cost of 2^80 for a 50% success probability. In 2017, the collision attack became practical, known as SHAttered.
As of 2015, NIST dropped SHA-1 for signatures. You should consider using something stronger like SHA-256 for new applications.
Jon Callas on SHA-1:
It's time to walk, but not run, to the fire exits. You don't see smoke, but the fire alarms have gone off.
The question is actually how to authenticate over an insecure session.
The standard way to do this is to use a message authentication code, e.g. HMAC.
You send the message plaintext as well as an accompanying hash of that message where your secret has been mixed in.
So instead of your:
sha1("My Secret Key"+"a timestamp")
You have:
msg,hmac("My Secret Key",sha(msg+msg_sequence_id))
The message sequence id is a simple counter kept by both parties to track the number of messages they have exchanged in this 'session' - this prevents an attacker from simply replaying previously seen messages.
This is the industry-standard and secure way of authenticating messages, whether they are encrypted or not.
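For illustration, here is a minimal Python sketch of that scheme using the standard hmac module; the key and messages are placeholder values.
import hmac, hashlib
SECRET = b"My Secret Key"              # shared secret, placeholder value
def tag(message: bytes, sequence_id: int) -> str:
    # MAC over the message plus the per-session counter (replay protection)
    data = message + sequence_id.to_bytes(8, "big")
    return hmac.new(SECRET, data, hashlib.sha256).hexdigest()
def verify(message: bytes, sequence_id: int, received: str) -> bool:
    # constant-time comparison avoids leaking how many characters matched
    return hmac.compare_digest(tag(message, sequence_id), received)
msg, seq = b"turn on the pump", 7      # sender transmits (msg, seq, tag)
t = tag(msg, seq)
print(verify(msg, seq, t))             # True
print(verify(b"tampered", seq, t))     # False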
(this is why you can't brute the hash:)
A hash is a one-way function, meaning that many different inputs can produce the same output.
As you know the secret, and you can make a sensible guess as to the range of the timestamp, then you could iterate over all those timestamps, compute the hash and compare it.
Of course two or more timestamps within the range you examine might 'collide' i.e. although the timestamps are different, they generate the same hash.
So there is, fundamentally, no way to reverse the hash with any certainty.
In mathematical terms, only bijective functions have an inverse function. But hash functions are not injective as there are multiple input values that result in the same output value (collision).
So, no, hash functions can not be reversed. But you can look for such collisions.
Edit
As you want to authenticate the communication between your systems, I would suggest using HMAC. This construction for calculating message authentication codes can use different hash functions. You can use SHA-1, SHA-256 or whatever hash function you want.
And to authenticate the response to a specific request, I would send a nonce along with the request that needs to be used as salt to authenticate the response.
It is not entirely true that you cannot reverse a SHA-1 hash.
You cannot directly reverse one, but it can be done with rainbow tables.
Wikipedia:
A rainbow table is a precomputed table for reversing cryptographic hash functions, usually for cracking password hashes. Tables are usually used in recovering a plaintext password up to a certain length consisting of a limited set of characters.
Essentially, SHA-1 is only as safe as the strength of the password used. If users have long passwords with obscure combinations of characters, it is very unlikely that existing rainbow tables will have an entry for the hash.
You can test your encrypted SHA-1 strings here:
http://sha1.gromweb.com/
There are other rainbow tables on the internet that you can use, so just Google "reverse SHA-1".
Note that the best attacks against MD5 and SHA-1 have been about finding any two arbitrary messages m1 and m2 where h(m1) = h(m2) or finding m2 such that h(m1) = h(m2) and m1 != m2. Finding m1, given h(m1) is still computationally infeasible.
Also, you are using a MAC (message authentication code), so an attacker can't forge a message without knowing the secret, with one caveat - the general MAC construction that you used is susceptible to a length extension attack: an attacker can in some circumstances forge a message m2|m3, h(secret, m2|m3) given m2, h(secret, m2). This is not an issue with just a timestamp, but it is an issue when you compute the MAC over messages of arbitrary length. You could append the secret to the timestamp instead of prepending it, but in general you are better off using HMAC with a SHA-1 digest (HMAC is just a construction and can use MD5 or SHA as the digest algorithm).
Finally, you are signing just the timestamp and not the full request. An active attacker can easily attack the system, especially if you have no replay protection (although even with replay protection, this flaw exists). For example, I can capture timestamp, HMAC(timestamp with secret) from one message and then use it in my own message, and the server will accept it.
Best to send message, HMAC(message) with a sufficiently long secret. The server can then be assured of the integrity of the message and the authenticity of the client.
Depending on your threat scenario, you can either add replay protection or note that it is not necessary, since a message replayed in its entirety does not cause any problems.
Hashes are dependent on the input, and for the same input will give the same output.
So, in addition to the other answers, please keep the following in mind:
If you start the hash with the password, it is possible to pre-compute rainbow tables, and quickly add plausible timestamp values, which is much harder if you start with the timestamp.
So, rather than use
sha1("My Secret Key"+"a timestamp")
go for
sha1("a timestamp"+"My Secret Key")
I believe the accepted answer is technically right but wrong as it applies to the use case: to create and transmit tamper-evident data over public/non-trusted media.
Because although it is technically highly-difficult to brute-force or reverse a SHA hash, when you are sending plain text "data & a hash of the data + secret" over the internet, as noted above, it is possible to intelligently get the secret after capturing enough samples of your data. Think about it - your data may be changing, but the secret key remains the same. So every time you send a new data blob out, it's a new sample to run basic cracking algorithms on. With 2 or more samples that contain different data & a hash of the data+secret, you can verify that the secret you determine is correct and not a false positive.
This scenario is similar to how Wifi crackers can crack wifi passwords after they capture enough data packets. After you gather enough data it's trivial to generate the secret key, even though you aren't technically reversing SHA1 or even SHA256. The ONLY way to ensure that your data has not been tampered with, or to verify who you are talking to on the other end, is to encrypt the entire data blob using GPG or the like (public & private keys). Hashing is, by nature, ALWAYS insecure when the data you are hashing is visible.
Practically speaking it really depends on the application and purpose of why you are hashing in the first place. If the level of security required is trivial or say you are inside of a 100% completely trusted network, then perhaps hashing would be a viable option. Hope no one on the network, or any intruder, is interested in your data. Otherwise, as far as I can determine at this time, the only other reliably viable option is key-based encryption. You can either encrypt the entire data blob or just sign it.
Note: This was one of the ways the British were able to crack the Enigma code during WW2, tipping the odds in the Allies' favor.
Any thoughts on this?
SHA-1 was designed to prevent recovery of the original text from the hash. However, SHA-1 lookup databases exist that allow looking up common passwords by their hash.
Is it possible to reverse a SHA-1?
SHA-1 was meant to be a collision-resistant hash, whose purpose is to make it hard to find distinct messages that have the same hash. It is also designed to be preimage-resistant, that is, it should be hard to find a message having a prescribed hash, and second-preimage-resistant, so that it is hard to find a second message having the same hash as a prescribed message.
SHA-1's collision resistance was practically broken in 2017 by Google's team, and NIST had already removed SHA-1 for signature purposes in 2015.
SHA-1's pre-image resistance, on the other hand, still stands. One should be careful about pre-image resistance, though: if the input space is small, then finding the pre-image is easy. So, your secret should be at least 128 bits.
SHA-1("My Secret Key"+"a timestamp")
This is the prefix-secret construction, which is vulnerable to the length extension attack on Merkle-Damgård based hash functions like SHA-1; the attack was applied in practice against Flickr's API. One should not use this with SHA-1 or SHA-2. One can use
HMAC-SHA-256 (HMAC doesn't require the collision resistance of the hash function, therefore SHA-1 and MD5 are still fine for HMAC; still, forget about them) to achieve a better security system. HMAC has the cost of a double call of the hash function, which is a drawback for time-constrained systems. A note: HMAC is a beast in cryptography.
KMAC is the prefix-secret construction built from SHA-3; since SHA-3 is resistant to length extension attacks, this is secure.
Use BLAKE2 with the prefix construction; this is also secure since it, too, is resistant to length extension attacks. BLAKE is a really fast hash function, and it now has a parallel version, BLAKE3, too (which needs some time for security analysis). WireGuard uses BLAKE2 as a MAC.
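As a small illustration, Python's standard hashlib exposes keyed BLAKE2 directly, which gives a MAC without the length extension worry (the key and message below are placeholders):
import hashlib
key = b"0123456789abcdef0123456789abcdef"    # placeholder 32-byte shared secret
def blake2_mac(message: bytes) -> str:
    # keyed BLAKE2b acts as a MAC; no secret-prefix length extension issue
    return hashlib.blake2b(message, key=key, digest_size=32).hexdigest()
print(blake2_mac(b"a timestamp" + b"the whole message to be signed"))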
Then I include this SHA-1 in the communication and the server, which can do the same calculation. And hopefully, nobody would be able to figure out the "secret key".
But is this really true?
If you know that this is how I did it, you would know that I did put a timestamp in there and you would see the SHA-1. Can you then use those two and figure out the "secret key"?
secret_key = bruteforce_sha1(sha1, timestamp)
You did not define the size of your secret. If your attacker knows the timestamp, then they can try to find the secret by searching. If we consider the collective power of the Bitcoin miners, as of 2022 they reach around ~2^93 double SHA-256 evaluations in a year. Therefore, you must adjust your security according to your risk. As of 2022, NIST's minimum security level is 112 bits. One should consider 128 bits or more for the secret size.
Note1: I guess you could brute force in some way, but how much work would that actually be?
See the answer above. As a special case, against a possible implementation of Grover's algorithm (a quantum algorithm for finding pre-images), one should use hash functions with an output size larger than 256 bits.
Note2: I don't plan to encrypt any data, I just would like to know who sent it.
This is not the way. Your construction can only work if the secret is mutually shared, as with a DHKE - that is, the secret is known only to the sender and you. Instead of managing this, a better way is to use digital signatures to solve this issue. Besides, you get non-repudiation, too.
Any hashing algorithm is reversible if applied to strings of max length L; the only question is the value of L. To assess it exactly, you could run the state-of-the-art dehashing utility, hashcat. It is optimized to get the best performance out of your hardware.
That's why you need long passwords, like 12 characters. Here they say that for length 8 the password is dehashed (using brute force) in 24 hours (1 GPU involved). For each extra character, multiply that by the alphabet length (say 50). So for 9 characters you have 50 days, for 10 you have 6 years, and so on. It's definitely inaccurate, but it can give us an idea of what the numbers could be.
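A rough back-of-the-envelope sketch of that scaling in Python, using the assumed 24-hours-for-8-characters figure and an alphabet of 50 quoted above (illustrative numbers only):
hours_for_8 = 24      # assumed: 8-character password cracked in ~24 hours on 1 GPU
alphabet = 50         # assumed alphabet size per extra character
for length in range(8, 13):
    hours = hours_for_8 * alphabet ** (length - 8)
    print(f"{length} chars: ~{hours / 24:.0f} days (~{hours / 24 / 365:.1f} years)")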
I'm trying to store user passwords in my DB using Argon2 algorithm.
This is what I obtain by using it:
$echo -n "password" | argon2 "smallsalt" -id -t 4 -m 18 -p 4
Type: Argon2id
Iterations: 4
Memory: 262144 KiB
Parallelism: 4
Hash: cb4447d91dd62b085a555e13ebcc6f04f4c666388606b2c401ddf803055f54ac
Encoded: $argon2id$v=19$m=262144,t=4,p=4$c21hbGxzYWx0$y0RH2R3WKwhaVV4T68xvBPTGZjiGBrLEAd34AwVfVKw
1.486 seconds
Verification ok
In this case, what should I store in the DB?
The "encoded" value as shown above?
The "hash" value as shown above?
Neither, but another solution?
Please, could you help me? I'm a newbie with this and I'm a little bit lost.
I'm a bit late to the party, but I disagree with the previous answers.
You should store the field: Encoded
The $argon2id$.... value.
(At least if you are using normal Argon2 libraries that have a verify() function.
It does not look like the man page for the argon2 command does this, however.
Only if you are stuck with the command line should you consider storing each field individually.)
The $argon2id$ encoded hash
The argon2 encoded hash follows the same syntax as its older cousin bcrypt.
The encoded hash includes all you ever need to verify the hash when the user logs in.
It is most likely more future-proof. When a newer and better argon2 comes along, you can upgrade your one-column hashed passwords, just like you could detect bcrypt's $2a$-hashes and re-hash them as $argon2id$-hashes the next time the user logs in (if you were moving from bcrypt to argon2).
TL;DR
Store the $-encoded value encoded_hash in your database.
Use argon2.verify(password, encoded_hash) to verify that the password is correct.
Don't bother about all the values inside the hash. Let the library do that for you. :)
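For example, with the argon2-cffi Python package (an assumption on my part; whichever binding you use should look similar), storing and checking the encoded value is roughly:
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError
ph = PasswordHasher(time_cost=4, memory_cost=262144, parallelism=4)
encoded = ph.hash("correct horse battery staple")   # "$argon2id$v=19$m=262144,t=4,p=4$..."
# store `encoded` in a single DB column; the salt and parameters live inside it
try:
    ph.verify(encoded, "correct horse battery staple")   # raises on mismatch
    print("login ok")
except VerifyMismatchError:
    print("wrong password")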
Neither. Save the following as a single value:
algorithm ID (e.g. argon2id)
salt
number of iterations (4)
memory usage factor (18)
parallelism (4)
The output of the field "encoded" is misleading, because you cannot use it as-is for the password check (i.e. for hash generation): it contains e.g. m=262144, whereas for the password check you need the original factor m=18.
Are you going to launch an OS process each time you check a password? I would discourage you from doing this. I'd suggest you use a library (C++, Java, ...). They produce a string that contains all of this data concatenated and separated with "$".
I'd put the type, iterations, memory, parallelism, hash, salt and corresponding user id into separate columns and leave the encoded bit out, because it's just all the attributes joined together. If they're in separate columns then you can reference the attributes more easily than having to split and index the encoded string.
The other option is to just store the encoded string in one column, but as I said it's more tedious to look at certain attributes, as you'd have to split the encoded string and then index it.
I had the same question and read this post while gathering some information. Now after some days and thoughts about all this, I'll personally take a different route than the accepted answer and therefore slightly disagree with it. I thought I would share my perspective so that it might help others as well.
I suppose it will depend on everyone's context. I don't think there is a one size fits all answer here. I'm sure there are situations where it is perfectly valid and even better/simpler to store the encoded string ($argon2...).
However, I would argue that depending on the context, storing the encoded string doesn't seem to be the right approach.
First of all, it makes the hashing method very obvious. That is probably not that important, but for some reason it makes me a bit more comfortable not having it ^^. But, more importantly, it means that implementation details are stored in your persistence layer (db or otherwise). At the time of writing, argon2id is the hashing mechanism recommended by OWASP, but these things can change (and eventually do change...). Some day, it might be considered insecure, or another function will be considered more secure.
As a result, I would suggest this more function "agnostic" starting point:
The hash (for argon2 -> the hex string)
The salt
The last_modified date
A string with hashing parameters (for argon2, you could put the parameters here in the form of your choosing)
The last_modified date lets you know whether the hash needs updating, and the parameters support the verification and update of "old" hashes; a minimal sketch follows below.
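One way this could look in code, sketched with the argon2-cffi low-level API (hash_secret_raw and its parameter names are that library's; the values mirror the question's CLI flags, with memory in KiB, and the date is made up):
import os, json
from argon2.low_level import hash_secret_raw, Type
record = {
    "salt": os.urandom(16).hex(),
    "params": json.dumps({"t": 4, "m": 262144, "p": 4}),   # hashing parameters, stored as text
    "last_modified": "2024-01-01",
}
record["hash"] = hash_secret_raw(
    secret=b"password",
    salt=bytes.fromhex(record["salt"]),
    time_cost=4, memory_cost=262144, parallelism=4,
    hash_len=32, type=Type.ID,
).hex()
# store the four fields in separate columns; at login, re-run hash_secret_raw
# with the stored salt and parameters and compare the result with "hash"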
Of course this means that you have to work a bit more in the code and can't simply use every library's shortcuts straight away. But I would say that this increased complexity offers more flexibility in other circumstances (like moving away from a given hashing function). As always, there is no free lunch.
That's why I suppose it depends on your context and why personally I wouldn't go with the accepted answer in my situation.
PS: I'm no cryptography expert nor some devsecop guru. So feel free to contradict, enrich, agree or disagree. I just like to keep my implementation details contained ;)
I store some sensitive data. The data is divided into parts and I want to have separate access to each part. Let's assume that I have 1000 files. I want to encrypt each file with the same symmetric encryption algorithm.
I guess that breaking a key is easier when an attacker has 1000 ciphertexts than when he has only one, so I think that I should use a separate key for each file.
My question is following:
Should I use a separate key for each file?
If I should, there is a problem with storing 1000 keys. So I want to have one secret key and use some algorithm of my own to calculate a separate key for each file from the secret key. Is that a good idea?
If you consider a passive adversary and use a CPA-secure cipher (like AES), it is sufficient to use only one key for all files. Even supposing the adversary knows the cipher you use, and even knows the plaintexts, he cannot reconstruct the key with non-negligible probability. Here is a more detailed answer.
If you also consider an active adversary (one who can replace ciphertexts), you should use Authenticated Encryption. But as I understand it, this is not your case.
So I want to have one secret key and use some algorithm of my own to calculate a separate key for each file from the secret key. Is that a good idea?
In general, developing your own algorithm or scheme is a bad idea. You can easily make some unseen mistake in the algorithm or implementation, and your data will be vulnerable. It is better to use well-known algorithms and implementations that have been peer-reviewed by lots of people and proved to be secure.
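For instance, a standard way to derive a per-file key from one master secret is HKDF. Here is a minimal Python sketch using the cryptography package (the master key and file identifiers are made-up examples):
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
master_key = b"\x00" * 32        # in practice: a randomly generated 256-bit secret
def file_key(file_id: str) -> bytes:
    # HKDF binds the derived key to this file's identifier via `info`
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=file_id.encode(),
    ).derive(master_key)
assert file_key("file-0001") != file_key("file-0002")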
I've never taken any classes on encryption or security and I'm trying to teach myself some basics, so forgive me if this is a silly question (don't worry, I'm not working on anything sensitive)
So, I'm playing around with Crypto++ so that I can make a signature of a file to see if the file has been edited by someone other than me. The test application that comes with the library looks like it has options (rs and rv) that do exactly what I want to do in my own program (verify the integrity of the signature of a file). Of course, before doing that I need to generate a public and private key. When doing so with the test application's g option it asks me to specify the key length in bits. What difference does the key length make?
The key length determines how hard it is for someone to break your cryptography. For digital signatures, that means how hard is it for someone to generate a fake signature.
For RSA a key length of 1024 bits is sufficient for non-sensitive information, but it should only be used for a few years and then replaced with a new key. 2048 bits is stronger and 4096 is stronger still.
For a naive brute-force attacker, adding a single bit to the key length doubles the amount of work they need to do to compromise your key. However, algorithms like RSA do not scale in this way: a 2048-bit RSA key is not 2^1024 times as hard to break as a 1024-bit key (unless the attacker is really stupid).
Generally public key algorithms (e.g. RSA) need much larger keys than symmetric key algorithms (e.g. AES) because they rely on different mathematical properties.
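Purely as an illustration (the question itself uses Crypto++'s g option), this is roughly how generating RSA keys of different lengths looks with Python's cryptography package; the larger sizes take noticeably longer to generate:
from cryptography.hazmat.primitives.asymmetric import rsa
for bits in (1024, 2048, 4096):
    key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
    print(f"generated a {bits}-bit RSA key")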
For a good primer on cryptography you should check out Peter Gutmann's godzilla crypto tutorial. It's pretty readable and gives you a good overview of how crypto works in its various forms.
Any SQLite database on the iPhone is simply a file bundled with the application. It is relatively simple for anyone to extract this file and query it.
What are your suggestions for encrypting either the file or the data stored within the database?
Edit: The app is a game that will be played against other users. Information about a user's relative strengths and weaknesses will be stored in the DB. I don't want a user to be able to jailbreak the phone, up their reputation/power etc., and then win the tournament/league etc. (NB: Trying to be vague as the idea is under NDA).
I don't need military encryption, I just don't want to store things in plain text.
Edit 2: A little more clarification, my main goals are
Make it non-trivial to hack sensitive data
Have a simple way to discover if data has been altered (some kind of checksum)
You cannot trust the client, period. If your standalone app can decrypt it, so can they. Either put the data on a server or don't bother, as the number of people who actually crack it to enhance stats will be minuscule, and they should probably be rewarded for the effort anyway!
Put a string in the database saying "please don't cheat".
There are at least two easier approaches here (both complementary) that avoid encrypting values or in-memory databases:
#1 - ipa crack detection
Avoid the technical (and legal) hassle of encrypting the database and/or the contents and just determine if the app is pirated and disable the network/scoring/ranking aspects of the game. See the following for more details:
http://thwart-ipa-cracks.blogspot.com/2008/11/detection.html
#2 - data integrity verification
Alternatively, store an HMAC/salted hash of the important columns in each row when saving your data (and in your initial sqlite db). When loading each row, verify the data against the HMAC/hash, and if verification fails, act accordingly.
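A minimal sketch of that per-row idea, written in Python for brevity (on the device you would do the equivalent with CommonCrypto; the key and column names below are placeholders baked into the app):
import hmac, hashlib
APP_KEY = b"not-really-secret-but-raises-the-bar"      # placeholder key shipped in the app
def row_mac(row_id: int, strength: int, weakness: int) -> str:
    # MAC over the columns that matter, stored in an extra column per row
    data = f"{row_id}|{strength}|{weakness}".encode()
    return hmac.new(APP_KEY, data, hashlib.sha256).hexdigest()
stored = row_mac(1, 80, 15)                             # computed when saving the row
print(hmac.compare_digest(stored, row_mac(1, 80, 15)))  # True  -> row untouched
print(hmac.compare_digest(stored, row_mac(1, 99, 15)))  # False -> row was edited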
Neither approach will force you to fill out the encryption export forms required by Apple/US government.
Score submission
Don't forget you'll need to do something similar for the actual score submissions to protect against values coming from something other than your app. You can see an implementation of this in the cocos2d-iphone and cocoslive frameworks at http://code.google.com/p/cocos2d-iphone/ and http://code.google.com/p/cocoslive/
Response to comments
There is no solution here that will 100% prevent data tampering. If that is a requirement, the client needs to be view only and all state and logic must be calculated on a trusted server. Depending on the application, extra anti-cheat mechanisms will be required on the client.
There are a number of books on developing massively-multiplayer games that discuss these issues.
Having a hash with a known secret in the code is likely a reasonable approach (at least, when considering the type of applications that generally exist on the App Store).
Like Kendall said, including the key on the device is basically asking to get cracked. However, there are folks who have their reasons for obfuscating data with a key on-device. If you're determined to do it, you might consider using SQLCipher for your implementation. It's a build of SQLite that provides transparent, page-level encryption of the entire DB. There's a tutorial over on Mobile Orchard for using it in iPhone apps.
How likely do you think it is that your normal user will be doing this? I assume you're going through the app store, which means that everything is signed/encrypted before getting on to the user's device. They would have to jailbreak their device to get access to your database.
What sort of data are you storing such that it needs encryption? If it contains passwords that the user entered, then you don't really need to encrypt them; the user will not need to find out their own password. If it's generic BLOB data that you only want the user to access through the application, it could be as simple as storing an encrypted blob using the security API.
If it's the whole database you want secured, then you'd still want to use the security api, but on the whole file instead, and decrypt the file as necessary before opening it. The issue here is that if the application closes without cleanup, you're left with a decrypted file.
You may want to take a look at memory-resident databases, or temporary databases, which you can create either using a template db or a hard-coded schema in the program (take a look at the documentation for sqlite3_open). The data could be decrypted, inserted into the temporary database, and the decrypted file then deleted. Do it in the opposite direction when closing the connection.
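A sketch of that temporary-database flow, shown with Python's sqlite3 module for brevity (the iPhone equivalent would be the C API's sqlite3_open(":memory:", &db); the schema and values are made up):
import sqlite3
mem = sqlite3.connect(":memory:")        # lives only as long as the process
mem.execute("CREATE TABLE stats (player TEXT, power INTEGER)")
# ...decrypt the on-disk blob here, then load the plaintext rows:
mem.execute("INSERT INTO stats VALUES (?, ?)", ("alice", 42))
# on shutdown you would re-encrypt these rows and write them back to disk,
# so no decrypted file is left lying around
rows = mem.execute("SELECT * FROM stats").fetchall()
mem.close()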
Edit:
I'm sure you can cook up your own very simple encryption scheme, e.g. by XOR-ing the data with a value stored in the app, and store a hash somewhere else to make sure it doesn't change, or something.
SQLCipher:
Based on my experience, SQLCipher is the best option to encrypt the database.
Once the key ("PRAGMA key") is set, SQLCipher will automatically encrypt all data in the database! Note that if you don't set a key, SQLCipher will operate identically to a standard SQLite database.
The call to sqlite3_key or "PRAGMA key" should occur as the first operation after opening the database. In most cases SQLCipher uses PBKDF2, a salted and iterated key derivation function, to obtain the encryption key. Alternately, an application can tell SQLCipher to use a specific binary key in blob notation (note that SQLCipher requires exactly 256 bits of key material).
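For illustration only (the passphrase and the raw hex key below are made-up examples), the two forms look roughly like this:
-- passphrase form: SQLCipher derives the key with PBKDF2
PRAGMA key = 'my secret passphrase';
-- raw key form: exactly 256 bits of key material in blob notation
PRAGMA key = "x'2DD29CA851E7B56E4697B0E1F08507293D761A05CE4D1B628663F411A8086D99'";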
Reference:
http://sqlcipher.net/ios-tutorial
I hope this saves someone some time exploring this.
Ignoring the philosophical and export issues, I'd suggest that you'd be better off encrypting the data in the table directly.
You need to obfuscate the decryption key(s) in your code. Typically, this means breaking them into pieces and encoding the strings in hex and using functions to assemble the pieces of the key together.
For the algorithm, I'd use a trusted implementation of AES for whatever language you're using.
Maybe this one for C#:
http://msdn.microsoft.com/en-us/magazine/cc164055.aspx
Finally, you need to be aware of the limitations of the approach. Namely, the decryption key is a weak link: it will be available in memory at run-time in clear text (at a minimum, it has to be so that you can use it). The implementation of your encryption scheme is another weakness - any flaws there are flaws in your code too. As several other people have pointed out, your client-server communications are suspect too.
You should remember that your executable can be examined in a hex editor where cleartext strings will leap out of the random junk that is your compiled code. And that many languages (like C# for example) can be reverse-compiled and all that will be missing are the comments.
All that said, encrypting your data will raise the bar for cheating a bit. How much depends on how careful you are; but even so a determined adversary will still break your encryption and cheat. Furthermore, they will probably write a tool to make it easy if your game is popular; leaving you with an arms-race scenario at that point.
Regarding a checksum value, you can compute a checksum based on the sum of the values in a row, assuming that you have enough numeric values in your database to do so. Or, for a bunch of boolean values, you can store them in a varbinary field and use the bitwise exclusive-or operator ^ to compare them - you should end up with 0s.
For example,
for numeric columns,
2|3|5|7| with a checksum column | 17 |
for booleans,
0|1|0|1| with a checksum column | 0101 |
If you do this, you can even add a summary row at the end that sums your checksums. Although this can be problematic if you are constantly adding new records. You can also convert strings to their ANSI/UNICODE components and sum these too.
Then when you want to check the checksum, simply do a select like so:
SELECT *
FROM OrigTable
RIGHT OUTER JOIN
  (SELECT pk, (col1 + col2 + col3) AS OnTheFlyChecksum, PreComputedChecksum
   FROM OrigTable) OT
  ON OrigTable.pk = OT.pk
WHERE OT.OnTheFlyChecksum = OT.PreComputedChecksum
It appears to be simplest to sync all tournament results to all iPhones in the tournament. You can do it during every game: before a game, if the databases of two phones contradict each other, a warning is shown.
If User A falsifies the result of his game with User B, this result will propagate until B eventually sees it, with the warning that A's data doesn't match his phone. He can then go and explain to A that his behavior isn't right, just the way it is in real life if somebody cheats.
When you compute the final tournament results, show the warning, name names, and throw out all games with contradictory results. This takes away the incentive to cheat.
As said before, encryption won't solve the problem since you can't trust the client. Even if your average person can't use a disassembler, all it takes is one motivated person, and whatever encryption you have will be broken.
Alternatively, if you are on the Windows platform, you can also choose SQLiteEncrypt to satisfy your needs. SQLiteEncrypt extends SQLite with encryption support, but you can treat it as the original sqlite3 C library.