I'm using jBCrypt for the first time. I'm not using Spring in my project, and I don't want that heavy library for the one simple thing I need: BCrypt.
My understanding is that a salted hash involves combining the salt with the password and THEN hashing the result. The results of using this library, however, show that the final output appears to basically be the salt prepended to the hash.
This is my test code:
import org.mindrot.jbcrypt.BCrypt;

public class BCryptTest {
    public static void main(String[] args) {
        // Generate a salt with the default work factor (10 log rounds)
        String salt = BCrypt.gensalt();
        System.out.println("SALT: " + salt);
        String pass = "passwordpasswordpassword";
        // Hash the password with that salt
        String hash = BCrypt.hashpw(pass, salt);
        System.out.println("HASH: " + hash);
    }
}
This is the output:
SALT: $2a$10$gJ9JwqTC0jNJEhX3IUl7je
HASH: $2a$10$gJ9JwqTC0jNJEhX3IUl7jeo18wnF1AgMjQha78sFA/c5Mubx49j6q
This just weirds me out. I'm wondering if I'm using the library wrong, or if I'm just misunderstanding the way BCrypt is working for this.
Yes, this is the expected behaviour of BCrypt. There's no need to keep the salt secret; it's important that it's user-specific. The salt is needed internally by the BCrypt.checkpw function to be able to compute the hash for the password the user has entered and compare it to the hash you have stored.
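For illustration, here is a minimal sketch of the verification side (same jBCrypt API as above); note that checkpw takes no separate salt argument because it re-reads the salt from the stored hash string:

String stored = BCrypt.hashpw("passwordpasswordpassword", BCrypt.gensalt());
// The first 29 characters ($2a$10$ plus the 22-character salt) are the salt
// specification; the remaining 31 characters are the actual hash.
boolean ok = BCrypt.checkpw("passwordpasswordpassword", stored);  // true
boolean bad = BCrypt.checkpw("wrong-password", stored);           // false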
Also see: https://security.stackexchange.com/questions/17421/how-to-store-salt
I want to protect my users' data as much as possible! In this scenario I'm trying to protect data-in-use/data-in-memory against certain memory attacks, or at least make it more difficult for nefarious people to get at my users' data.
I don't really understand how Flutter and Dart handle memory, or any language for that matter, so I'm looking for some insight, direction, or confirmation on what I'm trying to do here without needing a master's in computer science. While I'm using Flutter/Dart, this is also a generalized question.
My modus operandi here is simple; when done with some sensitive data I want to:
Encrypt data for memory zero
Zero all encrypted memory
Does this do what I intend?
If this does not do what I intend or is pointless in any way, please explain why.
/*
- Symmetric encryption
- Encryption before putting data into transit
- This symmetric key and nonce are asymmetrically encrypted with authorized users' public keys
- Authorized users can decrypt the key
- Sensitive data is encrypted then zeroed
*/
Future<String> symmetricallyEncrypt(Sale sale) async {
String saleJson = jsonEncode(sale);
final symmetricKey = await secureStorage.read(key: kSSKeySymmetric);
final symmetricNonce = await secureStorage.read(key: kSSKeySymmetricNonce);
final symmetricCypher = AesCrypt(padding: PaddingAES.pkcs7, key: symmetricKey!);
final encryptedSale = symmetricCypher.gcm.encrypt(inp: saleJson, iv: symmetricNonce!);
/* --- ENCRYPTED ZERO --- */
encryptedZero(saleJson);
encryptedZero(symmetricKey);
encryptedZero(symmetricNonce);
encryptedZero(symmetricCypher.toString());
return encryptedSale;
}
/*
Encryption zero method
- Encrypts shredding input
- Zeros all inputs
*/
Future<void> encryptedZero(String shredding) async {
String? asymmetricPublicZeroKey = await secureStorage.read(key: kSSKeyMemoryZeroAsymmetricPublic);
String encryptedShredding = RSAPublicKey.fromPEM(asymmetricPublicZeroKey!).encrypt(shredding);
asymmetricPublicZeroKey = '';
encryptedShredding = '';
shredding = '';
}
I get what you're asking, but I think it's not the right way to think about the security of your memory.
What's the threat actor - another process? The operating system? The root user?
If you don't trust the root user, the OS, and the hardware, you've already lost.
If you do have to trust them, then what else could your threat actor be? You have to trust your own application, so the only things left are other applications running on the same system.
The operating system prevents other applications from reading your memory space (segmentation faults, etc.), and it zeroes out your application's memory pages before handing them to another process.
But that's not the whole story - read https://security.stackexchange.com/questions/29019/are-passwords-stored-in-memory-safe for even more details.
I need help with the API for adding a user to a custom audience.
Looking at the docs, it's not clear whether every single attribute of the payload.data key has to be hashed or not.
In the documentation (https://developers.facebook.com/docs/marketing-api/reference/custom-audience/users/v2.9) it seems like the hash is not required, but this other article (https://developers.facebook.com/docs/marketing-api/audiences-api) suggests that each one of them has to be hashed.
Any thoughts?
Yes, you need to hash everything apart from external identifiers; see this part of the doc:
You must hash your data as SHA256; we don't support other hashing
mechanisms. This is required for all data except External Identifiers.
Before hashing, normalize your data. Only First name FN and Last Name
LN support special characters and non-Roman alphabet. For best match
results, provide the Roman alphabet translation with no special
characters.
External identifiers are just extern_id, which isn't applicable to most people anyway.
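If you do hash the values yourself, the recipe is: normalize first, then SHA256 the UTF-8 bytes and hex-encode the digest. A minimal Java sketch (the trim + lower-case normalization shown here is an assumption that fits fields like email; names have the extra rules quoted above):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class AudienceHash {
    // Normalize a value (trim + lower-case) and return its SHA-256 digest as hex
    static String normalizeAndHash(String value) throws Exception {
        String normalized = value.trim().toLowerCase();
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(normalized.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(normalizeAndHash(" Test2@example.com "));
    }
}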
I'd recommend using one of our SDKs if possible, as we handle all of the hashing for you. For example:
use FacebookAds\Object\CustomAudienceMultiKey;
use FacebookAds\Object\Fields\CustomAudienceMultikeySchemaFields;
$users = array(
array('ExternId123', 'FirstName', 'test2@example.com', 'LastName1'),
array('ExternId456', 'FirstNameTest', '', 'LastNameTest'),
array('ExternId789', '', 'test3@example.com', 'LastNameTest'),
);
$schema = array(
CustomAudienceMultikeySchemaFields::EXTERN_ID,
CustomAudienceMultikeySchemaFields::FIRST_NAME,
CustomAudienceMultikeySchemaFields::EMAIL,
CustomAudienceMultikeySchemaFields::LAST_NAME,
);
$audience = new CustomAudienceMultiKey('<CUSTOM_AUDIENCE_ID>');
$audience->addUsers($users, $schema);
I'm trying to set up a TURN server for a project using Coturn but am finding that documentation is sketchy at best...
I realise that there is a turnadmin tool that will do this for you, but I would greatly prefer to just run queries on my database directly. This is an app with potentially many users, and their shared keys (hmackey in turnusers_lt) are subject to change (to avoid sharing passwords with the app, it uses a 'fake' password, which is a hash of certain volatile user parameters that aren't so secret).
I can gather from the scant docs that the hmackey is computed using the realm, username and password:
$ turnadmin -k -u myusername -r my.realm.org -p my-password
> e.g. 0x7a69b0e2b747a4560045f79d171b78c0
Given that my code will know these three parameters, how do I build the HMAC hash? E.g. in PHP I have
string hash_hmac ( string $algo , string $data , string $key [, bool $raw_output = false ] )
$algo here should be SHA1, but what values would go into $data (e.g. concat of user/pass) and $key (e.g. realm)?
There's also a turn_secret table listing a 'value' for a realm; I was guessing this should be used as the $key in the above example, but adding and modifying the keys still gives the same result when I call turnadmin.
Essentially, what I want to do is (pseudo-code):
// user registers
// pseudo-code, this is of course computed using php's password_hash function
$hashed_pw = hash($pw);
$db->query('insert into usertable (name, pass) values ($name, $hashed_pw)');
// this is implemented somewhere...
$coturn_pw = get_secret_hash($name);
// this needs implementing...
$HMAC = calc_hmac($name, $coturn_pw, 'my.realm.com');
$turndb->query('insert into turnusers_lt values (...)');
// on update or delete, also update turnusers_lt
...and then in the client, I should now be able to connect to the TURN server using $name and $coturn_pw as credentials for my.realm.com.
Or am I over-thinking this and should I just use a generic user for my app, hardcode the password and let Coturn figure out who is talking to who?
How to build the HMAC key is described in RFC 5389:
key = MD5(username ":" realm ":" SASLprep(password))
where MD5 is defined in RFC 1321 and SASLprep() is defined in RFC 4013
The only table you need to update is turnusers_lt. The turn_secret table and the SHA1 algorithm are used for generating time-limited credentials.
INSERT INTO turnusers_lt (realm, name, hmackey) VALUES (:realm, :username, :key);
And of course, use prepared statements rather than building the SQL string manually.
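To illustrate both points together, here is a minimal Java sketch (JDBC; SASLprep is omitted for brevity, and the hex encoding of the MD5 digest is an assumption about how you want to store hmackey):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class TurnUserStore {
    // key = MD5(username ":" realm ":" password), hex-encoded (SASLprep omitted)
    static String turnKey(String username, String realm, String password) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest((username + ":" + realm + ":" + password)
                .getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    // Insert via a prepared statement instead of building the SQL string manually
    static void storeUser(Connection db, String realm, String username, String key)
            throws Exception {
        try (PreparedStatement ps = db.prepareStatement(
                "INSERT INTO turnusers_lt (realm, name, hmackey) VALUES (?, ?, ?)")) {
            ps.setString(1, realm);
            ps.setString(2, username);
            ps.setString(3, key);
            ps.executeUpdate();
        }
    }
}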
OrangeDog's answer is correct.
With node.js:
const crypto = require("crypto");

const username = "foo";
const realm = "here";
const password = "secret";

const hmac = crypto
  .createHash("md5")
  .update(`${username}:${realm}:${password}`)
  .digest("hex");
Read through the following references:
iText digital signature white paper and C# examples (specifically chapter 4). For those interested, another great and concise summary of the PDF signing process.
CAPICOM documentation.
Online examples / questions here and on iText mailing list archives, such as here and here.
Hashing code:
BouncyCastle.X509Certificate[] chain = Utils.GetSignerCertChain();
reader = Utils.GetReader();
MemoryStream stream = new MemoryStream();
using (var stamper = PdfStamper.CreateSignature(reader, stream, '\0'))
{
PdfSignatureAppearance sap = stamper.SignatureAppearance;
sap.SetVisibleSignature(
new Rectangle(36, 740, 144, 770),
reader.NumberOfPages,
"SignatureField"
);
sap.Certificate = chain[0];
sap.SignDate = DateTime.Now;
sap.Reason = "testing web context signatures";
PdfSignature pdfSignature = new PdfSignature(
PdfName.ADOBE_PPKLITE, PdfName.ADBE_PKCS7_DETACHED
);
pdfSignature.Date = new PdfDate(sap.SignDate);
pdfSignature.Reason = sap.Reason;
sap.CryptoDictionary = pdfSignature;
Dictionary<PdfName, int> exclusionSizes = new Dictionary<PdfName, int>();
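// Reserve space for /Contents: the signature is stored hex-encoded (hence * 2) plus the '<' and '>' delimiters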
exclusionSizes.Add(PdfName.CONTENTS, SIG_BUFFER * 2 + 2);
sap.PreClose(exclusionSizes);
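// The range stream covers the whole document except the reserved /Contents gap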
Stream sapStream = sap.GetRangeStream();
byte[] hash = DigestAlgorithms.Digest(
sapStream,
DigestAlgorithms.SHA256
);
// is this needed?
PdfPKCS7 sgn = new PdfPKCS7(
null, chain, DigestAlgorithms.SHA256, true
);
byte[] preSigned = sgn.GetAuthenticatedAttributeBytes(
hash, sap.SignDate, null, null, CryptoStandard.CMS
);
var hashedValue = Convert.ToBase64String(preSigned);
}
Just a simple test: a dummy PDF document is created on the initial page request, the hash is calculated, and it is put in a hidden input field, Base64 encoded (the hashedValue above).
Then CAPICOM is used on the client side to POST the form and get the user's signed response:
PdfSignatureAppearance sap = (PdfSignatureAppearance)TempData[TEMPDATA_SAP];
PdfPKCS7 sgn = (PdfPKCS7)TempData[TEMPDATA_PKCS7];
stream = (MemoryStream)TempData[TEMPDATA_STREAM];
byte[] hash = (byte[])TempData[TEMPDATA_HASH];
byte[] originalText = Encoding.Unicode.GetBytes(hashValue);
// Oid algorithm verified on client side
ContentInfo content = new ContentInfo(new Oid("RSA"), originalText);
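// 'true' = detached signature: the content itself is not embedded in the CMS structure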
SignedCms cms = new SignedCms(content, true);
cms.Decode(Convert.FromBase64String(signedValue));
// CheckSignature does not throw exception
cms.CheckSignature(true);
var encodedSignature = cms.Encode();
/* tried this too, but no effect on result
sgn.SetExternalDigest(
Convert.FromBase64String(signedValue),
null,
"RSA"
);
byte[] encodedSignature = sgn.GetEncodedPKCS7(
hash, sap.SignDate, null, null, null, CryptoStandard.CMS
);
*/
byte[] paddedSignature = new byte[SIG_BUFFER];
Array.Copy(encodedSignature, 0, paddedSignature, 0, encodedSignature.Length);
var pdfDictionary = new PdfDictionary();
pdfDictionary.Put(
PdfName.CONTENTS,
new PdfString(paddedSignature).SetHexWriting(true)
);
sap.Close(pdfDictionary);
So right now I'm not sure if I'm messing up the hashing part, the signature part, or both. In the signature code snippet above and in the client code (not shown) I'm calling what I think is signature-verification code, but that may be wrong too, since this is a first for me. I get the infamous "Document has been altered or corrupted since it was signed" invalid-signature message when opening the PDF.
Client-side code (not authored by me) can be found here. The source has a variable naming error, which was corrected. For reference, the CAPICOM documentation says the signed response is in PKCS#7 format.
EDIT 2015-03-12:
After some nice pointers from @mkl and more research, it seems CAPICOM is practically unusable in this scenario. Although not documented clearly (what else is new?), according to here and here, CAPICOM expects a UTF-16 string (Encoding.Unicode in .NET) as input when creating a digital signature. From there it either pads or truncates (depending on which of the two sources is correct) whatever data it receives if the length is an odd number. I.e., signature creation will ALWAYS FAIL if the Stream returned by PdfSignatureAppearance.GetRangeStream() has a length that is an odd number. Maybe I should create an "I'm feeling lucky" option: sign if the ranged stream length is even, and throw an InvalidOperationException if odd. (sad attempt at humor)
For reference, here's the test project.
EDIT 2015-03-25:
To close the loop on this, here's a link to a VS 2013 ASP.NET MVC project. It may not be the best way, but it does provide a fully working solution to the problem. Because of CAPICOM's strange and inflexible signing implementation, as described above, I knew a possible solution would require a second pass and a way to inject an extra byte if the length of the stream returned by PdfSignatureAppearance.GetRangeStream() (again, Stream.Length) is an odd number. I was going to try the long and hard way by padding the PDF content, but luckily a co-worker found it was much easier to pad PdfSignatureAppearance.Reason. Requiring a second pass to do something with iText[Sharp] is not unprecedented - e.g. adding "page x of y" to a document page header/footer.
Use of PdfPKCS7
The server-side code contains this block after the calculation of the range stream digest and before forwarding data to the web page:
PdfPKCS7 sgn = new PdfPKCS7(
null, chain, DigestAlgorithms.SHA256, true
);
byte[] preSigned = sgn.GetAuthenticatedAttributeBytes(
hash, sap.SignDate, null, null, CryptoStandard.CMS
);
var hashedValue = Convert.ToBase64String(preSigned);
In the case at hand this is not necessary. It is needed only if the external signing API you use merely returns a signed digest; in that case the PdfPKCS7 instance builds the CMS/PKCS#7 signature container. You, on the other hand, use an API for which you know:
CAPICOM documentation says signed response is in PKCS#7 format.
Thus, you don't need and (more to the point) must not use the PdfPKCS7 instance.
What does sign.js sign
The content of the server-side hash variable already is the hash digest value of the data to sign. Thus, the frontend, i.e. the sign.js used there, must not hash it again to get the message digest attribute value to put into the signature.
But sign.js signing methods for IE eventually execute
var signedData = new ActiveXObject("CAPICOM.SignedData");
// Set the data that we want to sign
signedData.Content = src;
SignedData.Content, on the other hand, is documented as
Content (read/write): Data to be signed.
(msdn: "SignedData object")
So the hash from the backend is used as the data to be signed, not as the hash of the data to be signed; you indeed hash twice and so have the wrong hash value there.
Thus, it looks like you have to transmit the whole ranged stream, which is not really practical...
"But there used to be signing samples using CAPICOM..."
Indeed, some old iTextSharp (version 4.x) signing examples used CAPICOM. But that code only worked because it created signatures of the PDF signature type adbe.pkcs7.sha1, for which a SHA1 hash of the ranged stream indeed is the data embedded in and signed by the PKCS#7 signature.
This is no real option anymore because
it requires the use of SHA1, which in serious contexts is no longer acceptable, and
its use has been discouraged at least since ISO 32000-1 (2008) and will be officially deprecated in ISO 32000-2 (under development).
I had to migrate a legacy database with clear-text passwords to a PostgreSQL database. I've looked up the best way to encrypt passwords in a database and found the pgcrypto extension with its slow algorithms (see the pgcrypto documentation for 8.4).
The data migration is done and everything is working well.
Now I have to write a CRUD application to handle this data.
I'm wondering what the best way is to use this strong encryption with Grails.
In my model, I've used the afterInsert event to handle this:
def afterInsert() {
Compte.executeUpdate("update Compte set hashpass=crypt(hashpass, gen_salt('bf', 8)) where id = (:compteId)", [compteId: this.id])
}
I guess that I should also check whether the hashpass field is modified whenever the model is saved. But before that, is there a better way to achieve my goal?
Edit: I cannot use the Spring Security bcrypt plugin here. The CRUD application that I'm writing uses CAS SSO, so I don't need such a plugin. The CRUD application manages accounts for another application that I don't own; I just need to create a new account, or modify or delete an existing one. This is very simple. The tricky part is to hack Grails so that it takes the password field into account and uses specific SQL to store it in a PostgreSQL database.
Edit 2:
I've come up with the following code, but it doesn't work:
def beforeInsert() {
hashpass = encodePassword(hashpass);
}
def encodePassword(cleartextpwd) {
// create a key generator based upon the Blowfish cipher
KeyGenerator keygenerator = KeyGenerator.getInstance("Blowfish");
// create a key
SecretKey secretkey = keygenerator.generateKey();
// create a cipher based upon Blowfish
Cipher cipher = Cipher.getInstance("Blowfish");
// initialise cipher to with secret key
cipher.init(Cipher.ENCRYPT_MODE, secretkey);
// get the text to encrypt
String inputText = cleartextpwd;
// encrypt message
byte[] encrypted = cipher.doFinal(inputText.getBytes("UTF-8"));
return Base64.encodeBase64String(encrypted);
}
I get a result that is not a Blowfish/bcrypt hash (i.e., one beginning with $2a$08$).
Edit 3:
I've finally come up with a cleaner Grails solution after reading this wiki page: grails.org/Simple+Dynamic+Password+Codec (not enough reputation to put more than 2 links, so add http:// before) and the bug report jira.grails.org/browse/GRAILS-3620.
Following advice from @lukelazarovic, I've also used the algorithm from the Spring Security plugin.
Here is my password encoder, to be put into grails-app/utils:
import grails.plugin.springsecurity.authentication.encoding.BCryptPasswordEncoder;
class BlowfishCodec {
static encode(target) {
// TODO need to put the logcount = 8 in configuration file
return new BCryptPasswordEncoder(8).encodePassword(
target, null)
}
}
I've updated my Compte model to call my password encoder before saving/updating the data:
def beforeInsert() {
hashpass = hashpass.encodeAsBlowfish();
}
def beforeUpdate() {
if(isDirty('hashpass')) {
hashpass = hashpass.encodeAsBlowfish();
}
}
The tricky part is to hack grails so that it takes into account the
password field and use a specific sql to store it to a postgreSQL
database.
Is there any particular reason to do the hashing in the database?
IMHO it's better to hash the password in Grails, and thereby have code that is not database-specific and is easier to read.
For hashing passwords with the Blowfish algorithm in Java or Groovy, see Encryption with BlowFish in Java.
The resulting hash begins with the algorithm specification, the iteration count, and the salt, separated by dollar signs ('$'). So the hash may look like "$2a$08$saltCharacters", where 2a is the algorithm, 08 is the iteration count, then follows the salt, and after the salt comes the hash itself.
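For illustration, a minimal sketch with jBCrypt (assuming the org.mindrot jBCrypt library; Spring Security's BCryptPasswordEncoder used in Edit 3 produces the same format):

import org.mindrot.jbcrypt.BCrypt;

// A log-rounds value of 8 yields the 08 iteration count described above,
// matching pgcrypto's gen_salt('bf', 8)
String hash = BCrypt.hashpw("cleartext-password", BCrypt.gensalt(8));
System.out.println(hash);  // $2a$08$ followed by the 22-char salt and 31-char hash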
For a broader explanation see http://www.techrepublic.com/blog/australian-technology/securing-passwords-with-blowfish. Don't mind that it concerns Blowfish in PHP; the principles apply to Java or Groovy as well.