how to validate code checksum at runtime - hash

I want to use a code-signing certificate to verify code validity at runtime on Windows. For example, I have a service A, and another executable B that will call A. A needs to make sure only software having a valid signature can call A. That is, every time B calls A, B needs to present its code checksum signed by the private key of a software vendor recognized by A. A has the public key of this vendor. The question is: how does A make sure B presents the real signed hash (not someone else's)?

There is no way. B is free to present whatever it wants; A has to either believe it or not. To be on the safe side, A has to always assume B is hostile!

What to choose between require and assert in Scala

Both require and assert are used to perform checks at runtime to verify certain conditions.
So what is the basic difference between them?
The only one I see is that require throws IllegalArgumentException and assert throws AssertionError.
How do I choose which one to use?
As Kigyo mentioned, there is a semantic difference:
assert means that your program has reached an inconsistent state; this might be a problem with the current method/function (I like to think of it a bit like HTTP 500 InternalServerError)
require means that the caller of the method is at fault and should fix its call (I like to think of it a bit like HTTP 400 BadRequest)
There is also a major technical difference:
assert is annotated with @elidable(ASSERTION)
meaning you can compile your program with -Xelide-below set above ASSERTION, or with -Xdisable-assertions, and the compiler will not generate the bytecode for the assertions. This can significantly reduce bytecode size and improve performance if you have a large number of asserts.
Knowing this, you can use assert to verify all the invariants everywhere in your program (all the preconditions/postconditions for every single method/function call) and not pay the price in production.
You would usually have a "test" build with all the assertions enabled; it would be slower, as it would verify all the assertions at all times. You could then have a "production" build of your product without the assertions, which eliminates all the internal state checks done through assert.
require is not elidable; it makes more sense for use in libraries (including internal libraries) to inform the caller of the preconditions for calling a given method/function.
This is only my subjective point of view.
I use require whenever I want a constraint on parameters.
As an example, we can take the factorial of natural numbers. Since we do not want to accept negative numbers, we throw an IllegalArgumentException.
I would use assert whenever I want to make sure some conditions (like invariants) are always true during execution. I see it as a way of testing.
Here is an example implementation of factorial with require and assert:
import scala.annotation.tailrec

def fac(i: Int) = {
  require(i >= 0, "i must be non negative") // this is for correct input
  @tailrec def loop(k: Int, result: Long = 1): Long = {
    assert(result == 1 || result >= k) // this is only for verification
    if (k > 0) loop(k - 1, result * k) else result
  }
  loop(i)
}
When result > 1, the loop was executed at least once, so the result has to be greater than or equal to k. That would be a loop invariant.
When you are sure that your code is correct, you can remove the assert, but the require would stay.
See here for a detailed discussion within the Scala language community.
I can add that the key to distinguishing between require and assert is to understand these two. They are both software quality tools, but from the toolboxes of different paradigms. In summary, assert is a software testing tool which takes a corrective approach, whereas require is a design-by-contract tool which takes a preventive approach.
Both require and assert are means of controlling the validity of state. Historically there were two distinct paradigms for dealing with invalid states. The first, mainstream one is collectively called the software testing discipline, with its methodologies and tools. The other is called design by contract. These are two paradigms which are not comparable.
Software testing verifies that code versatile enough to be capable of error-prone actions was not misused. Design by contract keeps code from having such capability in the first place. In other words, software testing is corrective, and design by contract is preventive.
assert is used to write unit tests, i.e. if a method passes all tests, each written as an assert expression, the code is qualified as error-free. So assert sits beside the operational code, as an independent body.
require is embedded within the code, as part of it, to ensure nothing harmful can happen.
In very simple language:
require is used to enforce a precondition on the caller of a function or the creator of an object of some class, whereas assert is used to check the code of the function itself.
So, if a precondition fails, you get an IllegalArgumentException; whereas if an assertion fails, it is not the caller's fault, and consequently you get an AssertionError.
require, ensure and invariant are concepts in the Design by Contract (DbC) development process.
require checks the preconditions that the caller should satisfy to consume the routine.
ensure checks the correctness of the return value (and also verifies that only the desired change has happened and nothing more).
invariant checks the validity of the class at all critical times.
DbC is a development methodology for building correct/robust software. For more details on DbC, a web search should hit a link from Eiffel Software. Hope this helps.
Scaladocs/javadocs are pretty good as well:
assert()
Tests an expression, throwing an AssertionError if false. Calls to this method will not be generated if -Xelide-below is greater than ASSERTION.
require()
Tests an expression, throwing an IllegalArgumentException if false. This method is similar to assert, but blames the caller of the method for violating the condition.

Is this encryption method secure?

I developed an application in C++ using Crypto++ to encrypt information and store the file on the hard drive. I use an integrity string to check if the password entered by the user is correct. Can you please tell me if the implementation generates a secure file? I am new to the world of cryptography and I made this program from what I have read.
string integrity = "ImGood";
string plaintext = integrity + string("some text");
byte password[pswd.length()]; // The password is filled somewhere else
byte salt[SALT_SIZE]; // SALT_SIZE is 32
byte key[CryptoPP::AES::MAX_KEYLENGTH];
byte iv[CryptoPP::AES::BLOCKSIZE];
CryptoPP::AutoSeededRandomPool rnd;
rnd.GenerateBlock(iv, CryptoPP::AES::BLOCKSIZE);
rnd.GenerateBlock(salt, SALT_SIZE);
CryptoPP::PKCS5_PBKDF2_HMAC<CryptoPP::SHA512> gen;
gen.DeriveKey(key, CryptoPP::AES::MAX_KEYLENGTH, 32,
password, pswd.length(),
salt, SALT_SIZE,
256);
CryptoPP::StringSink* sink = new CryptoPP::StringSink(cipher);
CryptoPP::Base64Encoder* base64_enc = new CryptoPP::Base64Encoder(sink);
CryptoPP::CFB_Mode<CryptoPP::AES>::Encryption cfb_encryption(key, CryptoPP::AES::MAX_KEYLENGTH, iv);
CryptoPP::StreamTransformationFilter* aes_enc = new CryptoPP::StreamTransformationFilter(cfb_encryption, base64_enc);
CryptoPP::StringSource source(plaintext, true, aes_enc);
stringstream out;
out << iv << salt << cipher;
The information in the string stream "out" is then written to a file. Another thing is that I don't know what the "purpose" parameter in the derivation function means; I'm guessing it is the desired length of the key, so I put 32, but I'm not sure and I can't find anything about it in the Crypto++ manual.
Any opinion, suggestion or mistake pointed out is appreciated.
Thank you very much in advance.
A file can be "secure" only if you define what you mean by "secure".
Usually, you will be interested in two properties:
Confidentiality: the data that is encrypted shall remain unreadable to attackers; revealing the plaintext data requires knowledge of a specific secret.
Integrity: any alteration of the data should be reliably detected; attackers shall not be able to modify the data in any way (even "blindly") without the modification being noticed by whoever decrypts the data.
Your piece of code, apparently, fulfils confidentiality (to some extent) but not integrity. Your string called "integrity" is a misnomer: it is not an integrity check. Its role is apparently to detect accidental password mistakes, not attacks; thus, it would be less confusing if that string was called passwordVerifier instead. An attacker can alter any bit beyond the first 48 bits without the decryption process noticing anything.
Adding integrity (the genuine thing) requires the use of a MAC. Combining encryption and a MAC securely is subject to subtleties; therefore, it is recommended to use for encryption and MAC an authenticated encryption mode which does both, and does so securely (i.e. that specific combination was explicitly reviewed by hordes of cryptographers). Usual recommended AE modes include GCM and EAX.
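For illustration only, here is a minimal sketch of what the encryption step could look like with an authenticated mode (GCM) in Crypto++; the function name EncryptAuthenticated is made up for this example, and key/iv/plaintext stand for the same buffers as in the question:
#include <cryptopp/aes.h>
#include <cryptopp/gcm.h>
#include <cryptopp/filters.h>
#include <string>

// Sketch: GCM both encrypts and appends an authentication tag, so tampering
// is detected when the matching AuthenticatedDecryptionFilter processes the data.
// The IV must be unique per message for a given key.
std::string EncryptAuthenticated(const unsigned char* key, size_t keyLen,
                                 const unsigned char* iv, size_t ivLen,
                                 const std::string& plaintext)
{
    std::string cipher;
    CryptoPP::GCM<CryptoPP::AES>::Encryption enc;
    enc.SetKeyWithIV(key, keyLen, iv, ivLen);
    CryptoPP::StringSource ss(plaintext, true,
        new CryptoPP::AuthenticatedEncryptionFilter(enc,
            new CryptoPP::StringSink(cipher)));
    return cipher;
}
On decryption, the same pipeline with an AuthenticatedDecryptionFilter throws an exception if the tag does not match, which is exactly the integrity check that the "ImGood" string cannot provide.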
An important point to note is that, in a context where integrity matters, data cannot be processed before having been verified. This has implications for big files: if your huge file is adorned with a single MAC (whether "manually" or as part of an AE mode), then you must first verify the complete file before beginning to do anything with the plaintext data. This does not work well with streamed processing (e.g. if playing a huge video). A workaround is to split the data into individual chunks, each with its own MAC, but then some care must be taken about the ordering of chunks (attackers could try to remove, duplicate or reorder chunks): things become complex. Complexity, on a general basis, is bad for security.
There are contexts where integrity does not matter. For instance, if your attack model is "the attacker steals the laptop", then you only have to care about confidentiality. However, if the attack model is "the attacker steals the laptop, modifies a few files, and puts it back in my suitcase without me noticing", then integrity matters: the attacker could plant a modification in the file, and infer parts of the secret itself based on your external behaviour when you next access the file.
For confidentiality, you use CFB, which is a bit old-style, but not wrong. For the password-to-key transform, you use PBKDF2, which is fine; the iteration count, though, is quite low: you use 256. Typical values are 20000 or more. The theory is that you should make actual performance measures to set this count to as high a value as you can tolerate: a higher value means slower processing, both for you and for the attacker, so you ought to crank that up (depending on your patience).
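As a rough sketch (the 20000 below is only an illustrative placeholder, not a measured value), the only change to the derivation call from the question would be the iteration count:
// Same derivation as in the question, but with a much higher iteration count.
// 20000 is a placeholder; benchmark and pick the largest value you can tolerate.
const unsigned int iterations = 20000;
CryptoPP::PKCS5_PBKDF2_HMAC<CryptoPP::SHA512> gen;
gen.DeriveKey(key, CryptoPP::AES::MAX_KEYLENGTH, 32, // 32 is the "purpose" byte from the question, left as-is
              password, pswd.length(),
              salt, SALT_SIZE,
              iterations);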
Mandatory warning: you are in the process of defining your own crypto, which is a path fraught with perils. Most people who do that produce weak systems, and that includes trained cryptographers; in fact, being a trained cryptographer does not mean that you know how to define a secure protocol, but rather that you know better than defining your own protocol. You are thus highly encouraged to rely on an existing protocol (or format), rather than making your own. I suggest OpenPGP, with (for instance) GnuPG as support library. Even if you need for some reason (e.g. licence issues) to reimplement the format, using a standard format is still a good idea: it avoids introducing weaknesses, it promotes interoperability, and it can be tested against existing systems.

Does replacing passes in Passbook require new pass to be different than the old one?

I'm trying to update a pass in Passbook by calling -replacePassWithPass: and passing exactly the same pass that is already in PKPassLibrary. The method returns NO (replacing failed) - why? (BTW: the console on my device doesn't show any logs from iOS)
The docs for -replacePassWithPass: are not very helpful in that case:
This will fail if a pass with matching identifier and serial number is not already present in the library, or if the process is not entitled to access the pass.
and:
YES if the pass was replaced successfully; otherwise NO.
I fulfill both requirements.
Is it not possible to replace passes in that way? Should I use -removePass: and then try to add it with PKAddPassesViewController?
My backend does not support updating passes yet, so I cannot verify all possibilities here (i.e. really get an updated pass with the same typeID and serialNumber but different content). What are your experiences?
As long as the passTypeIdentifier and serial number of one pass equal the passTypeIdentifier and serial number of another pass, they are instances of the same pass. If everything is the same between the two passes, they are one instance of one pass.
If you do have the pass in your PKPassLibrary, -replacePassWithPass: works in the first case; -removePass: and adding it back with PKAddPassesViewController works in the second one.
Updating passes is always a replacement. There are basically three requirements you have to fulfill in order to successfully replace the pass:
1) a pass with the same passTypeIdentifier and serialNumber must already be present in your PKPassLibrary;
2) the new instance of the pass must have at least one difference from the old pass (except passTypeIdentifier and serialNumber, which must remain the same);
3) your app must have an entitlement for passes with this passTypeIdentifier.

GWT RPC server side safe deserialization to check field size

Suppose I send objects of the following type from GWT client to server through RPC. The objects get stored to a database.
public class MyData2Server implements Serializable {
    private String myDataStr;

    public String getMyDataStr() { return myDataStr; }
    public void setMyDataStr(String newVal) { myDataStr = newVal; }
}
On the client side, I constrain the field myDataStr to be, say, 20 characters max.
I have been reading about web-application security. If I learned anything, it is that client data should not be trusted. The server should then check the data. So I feel I ought to check on the server that my field is indeed not longer than 20 characters; otherwise I would abort the request, since I know it must be an attack attempt (assuming no bug on the client side, of course).
So my questions are:
How important is it to actually check on the server side my field is not longer than 20 characters? I mean what are the chances/risks of an attack and how bad could the consequences be? From what I have read, it looks like it could go as far as bringing the server down through overflow and denial of service, but not being a security expert, I could be mis-interpreting.
Assuming I would not be wasting my time doing the field-size check on the server, how should one accomplish it? I seem to recall reading (sorry I no longer have the reference) that a naive check like
if (myData2ServerObject.getMyDataStr().length() > 20) throw new MyException();
is not the right way. Instead one would need to define (or override?) the method readObject(), something like in here. If so, again how should one do it within the context of an RPC call?
Thank you in advance.
How important is it to actually check on the server side my field is not longer than 20 characters?
It's 100% important, except maybe if you can trust the end user 100% (e.g. some internal apps).
I mean what are the chances
Generally: increasing. The exact probability can only be answered for your concrete scenario individually (i.e. no one here will be able to tell you, though I would also be interested in general statistics). What I can say is that tampering is trivially easy. It can be done in the JavaScript code (e.g. using Chrome's built-in dev tools debugger) or by editing the clearly visible HTTP request data.
/risks of an attack and how bad could the consequences be?
The risks can vary. The most direct risk can be evaluated by asking: "What could someone store and do if they could set any field of any GWT-serializable object to any value?" This is not only about exceeding the size limit, but maybe about tampering with the user ID, etc.
From what I have read, it looks like it could go as far as bringing the server down through overflow and denial of service, but not being a security expert, I could be mis-interpreting.
This is yet another level to deal with, and cannot be addressed with server side validation within the GWT RPC method implementation.
Instead one would need to define (or override?) the method readObject(), something like in here.
I don't think that's a good approach. It tries to accomplish two things, but can do neither of them very well. There are two kinds of checks on the server side that must be done:
On a low level, when the bytes come in (before they are converted by RemoteServiceServlet to a Java Object). This needs to be dealt with on every server, not only with GWT, and would need to be answered in a separate question (the answer could simply be a server setting for the maximum request size).
On a logical level, after you have the data in the Java Object. For this, I would recommend a validation/authorization layer. One of the awesome features of GWT is that you can now use JSR 303 validation both on the server and the client side. It doesn't cover every aspect (you would still have to test for user permissions), but it can cover your "@Size(max = 20)" use case.

SSL_CTX_set_cert_verify_callback vs. SSL_CTX_set_verify

Can anyone tell me what is the difference between SSL_CTX_set_cert_verify_callback and SSL_CTX_set_verify?
From OpenSSL docs:
SSL_CTX_set_cert_verify_callback() sets the verification callback function for ctx. SSL objects that are created from ctx inherit the setting valid at the time when SSL_new(3) is called.
and:
SSL_CTX_set_verify() sets the verification flags for ctx to be mode and specifies the verify_callback function to be used. If no callback function shall be specified, the NULL pointer can be used for verify_callback.
So I'm trying to understand which callback to pass to each one (from the client side).
Thanks experts.
SSL_CTX_set_cert_verify_callback() means you're specifying a function to do the entire validation process (walking the certificate chain, validating each cert in turn). [ you probably don't want to be doing this, per the warning below ]
SSL_CTX_set_verify(), on the other hand, specifies a function that's called as the default validator checks each certificate, with preverify_ok set to 0 or 1 to indicate whether verification of the certificate in question worked.
From the doc for SSL_CTX_set_cert_verify_callback():
WARNINGS
Do not mix the verification callback described in this function with the verify_callback function called during the verification process. The latter is set using the SSL_CTX_set_verify(3) family of functions.
Providing a complete verification procedure including certificate purpose settings etc is a complex task. The built-in procedure is quite powerful and in most cases it should be sufficient to modify its behaviour using the verify_callback function.
SSL_CTX_set_cert_verify_callback() changes the default certificate verification function. You probably should not do this: it's quite involved, you need to check the signature on each cert, verify the chain, and possibly check CRLs. It's the most complicated part of SSL.
SSL_CTX_set_verify() is used to set the verification mode of the SSL. If the mode is SSL_VERIFY_PEER (2-way SSL), you should also set a callback in this function to further verify the client certificate (checking the CN against a white-list etc). For other modes, this callback is not used. Since you said you are on the client side, you probably don't need to worry about this call.
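To make the difference concrete, here is a minimal client-side sketch of the second approach (the names my_verify_cb and configure_verification are made up for the example, and error handling is elided):
#include <openssl/ssl.h>
#include <stdio.h>

/* Called once per certificate in the chain while the built-in verifier runs.
   preverify_ok tells you whether the default check of that certificate passed. */
static int my_verify_cb(int preverify_ok, X509_STORE_CTX *x509_ctx)
{
    if (!preverify_ok) {
        int err = X509_STORE_CTX_get_error(x509_ctx);
        fprintf(stderr, "certificate verification failed: %s\n",
                X509_verify_cert_error_string(err));
    }
    /* Return the default decision; returning 1 unconditionally would
       silently accept bad certificates. */
    return preverify_ok;
}

/* On the client, SSL_VERIFY_PEER makes the handshake fail if the server
   certificate does not verify; the callback only observes or overrides
   the per-certificate result. */
void configure_verification(SSL_CTX *ctx)
{
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, my_verify_cb);
}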