Is Tapestry's HMAC passphrase sufficient CSRF protection?

I wonder why Tapestry's HMAC functionality is not considered enough to prevent cross-site request forgery (CSRF).
Tapestry encodes serialized objects into Base64 encoded strings that are stored on the client; primarily, this is for form submissions, to encode the set of operations needed to process the form when it is submitted.
(quoted from a Tapestry JIRA issue)
The Tapestry framework (version 5.3.6 and newer) signs the above data using an HMAC. When the form is submitted by the client, this data is included in the POST request in a t:formdata parameter and validated by the server.
From my point of view, this is enough to protect against CSRF. The signed data is created on the server with a key that only the server knows.
How should an attacker create a valid t:formdata element that is accepted by the server?
I'm confused because I found several libraries/implementations on the internet that add CSRF protection to Tapestry.
I also read this SO answer, which states that HMAC "might" (or might not?) be sufficient CSRF protection.
To conclude: in which cases does the HMAC not provide sufficient protection against CSRF, and why?

The HMAC only applies to portions of the submitted form data: the part that Tapestry will deserialize into Java objects. Without it, tampering with that serialized data was, in theory, enough for an attacker to compromise the server JVM (though not necessarily to inject code or behavior into the application). With HMAC enabled, the submitted form data is secure for as long as the HMAC passphrase is kept secret on the server.
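A minimal sketch of that signing scheme (names and layout are illustrative, not Tapestry's actual internals): the server appends an HMAC tag to the serialized data and refuses to deserialize anything whose tag it did not produce.

    import base64
    import hashlib
    import hmac
    import os

    SECRET_KEY = os.urandom(32)  # server-side passphrase, never sent to clients

    def sign_formdata(serialized: bytes) -> str:
        """Tag the serialized form data with HMAC-SHA256, then Base64-encode."""
        tag = hmac.new(SECRET_KEY, serialized, hashlib.sha256).digest()
        return base64.b64encode(tag + serialized).decode("ascii")

    def verify_formdata(token: str) -> bytes:
        """Reject any t:formdata whose tag does not match before deserializing."""
        raw = base64.b64decode(token)
        tag, serialized = raw[:32], raw[32:]
        expected = hmac.new(SECRET_KEY, serialized, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("invalid HMAC: not produced by this server")
        return serialized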


Encrypted Password Accessible via API Call, Is this Secure?

I am working through some security concepts right now, and I was curious whether this method has been tried and/or whether it is safe, taking into consideration that brute-forcing is still possible.
Take for example a Microsoft WebAPI template in Visual Studio, where you access an endpoint using a GET.
The endpoint would be accessible by any user/application.
The string value that a user/application would get from this endpoint would be the password they need, but encrypted using a "KeyValue".
After a TLS transmission of this encrypted value, the user/application would decrypt the string using their "KeyValue".
Is this a secure practice?
Thanks for indulging me; I look forward to your responses.
EDIT: Added Further Clarification with Image to Help Illustrate
Suppose the following 2 Scenarios:
Communication between Server and Client
a. Your Server serves the Client application with an encrypted password.
b. The Client can request any password.
c. The passwords are encrypted with a shared Key that is known by both server and client application
As James K Polk already pointed out:
A knowledgeable attacker can and will analyze your deployed application and at some point will find your hardcoded decryption key ("KeyValue"). What prevents him from requesting every password that is stored on the server?
Rule of thumb here would be: "Do not trust the client side."
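A minimal sketch of scenario 1 under the stated assumptions (the endpoint URL is hypothetical, and Fernet stands in for whatever cipher is actually used): everything an attacker needs ships inside the client.

    import requests
    from cryptography.fernet import Fernet  # pip install cryptography requests

    # The shared "KeyValue" hardcoded into every client build. Anyone who
    # unpacks one shipped binary recovers it and can run the same two lines.
    KEY_VALUE = Fernet.generate_key()  # stand-in for the real hardcoded key

    def fetch_password(user: str) -> str:
        # Hypothetical open endpoint: no authentication, any caller allowed.
        resp = requests.get(f"https://api.example.com/passwords/{user}")
        return Fernet(KEY_VALUE).decrypt(resp.content).decode()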
Communication between Server and Server
a. You have 2 server applications. Application A is acting as some kind of database server. Application B is your Back-End for a user application of some kind.
b. Application A serves passwords to any requester, not only Server B, with no type of authentication whatsoever.
c. Confidentiality is guaranteed through a shared and hard-coded Key.
I think you are trying to overcomplicate things hoping that no one is able to piece together the puzzle.
Someone with enough time and effort might be able to get information about your server setup and/or obtain the code of Application B, which again reduces to scenario 1. Another point: there are plenty of bots out there randomly scanning IPs to check responses. Application A might be found, and even though the scanners do not have the shared key, they might piece together the purpose of Application A and make this server a priority target.
Is this a safe practice?
No. It is never a good idea to give away possibly confidential information for free, encrypted or not. You wouldn't let people freely download your database, would you?
What you should do
All authentication/authorization (for example a user login, which is what I expect is your reason to exchange the passwords) should be done on the server side, since you're in control of that environment.
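As a sketch of what that could look like (illustrative, not a drop-in design): the client sends credentials over TLS, and the server compares them against a stored salted hash, so no password ever has to leave the server.

    import hashlib
    import hmac
    import os

    def hash_password(password: str, salt: bytes) -> bytes:
        # Store only (salt, digest); the plaintext never leaves the server.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

    def register(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        return salt, hash_password(password, salt)

    def verify(password: str, salt: bytes, stored_digest: bytes) -> bool:
        return hmac.compare_digest(hash_password(password, salt), stored_digest)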
Since you didn't tell us what you're actually trying to accomplish I'd recommend you read up on common attack vectors and find out about common ways to mitigate these.
A few suggestions from me:
Communication between 2 End-points -> SSL/TLS
Authorization / Authentication
Open Web Application Security Project and their Top 10 (2017)

Should a REST API wrapper validate inputs before making a request?

Suppose that the server restricts a JSON field to an enumerated set of values.
e.g. a POST request to /user expects an object with a field called gender that should only be "male", "female" or "n/a".
Should a wrapper library make sure that the field is set correctly before making the request?
Pro: Makes it possible for the client to quickly reject input that would otherwise require a roundtrip to the server. In some cases this would allow for a much better UX.
Con: You have to keep the library in sync with the backend; otherwise you could reject some valid input.
With a decent type system you should encode this particular restriction in the library API anyway. I think people usually validate at least the basic stuff on the client and let the server do further validation, like things that can't be verified on the client at all.
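A minimal sketch of encoding the restriction in the wrapper's signature (Python here; the endpoint is the hypothetical /user from the question): invalid values become impossible to express, so no round trip is needed to reject them.

    import enum
    import json
    from urllib import request

    class Gender(enum.Enum):
        MALE = "male"
        FEMALE = "female"
        NOT_APPLICABLE = "n/a"

    def create_user(name: str, gender: Gender) -> None:
        # Callers must pass a Gender member, so the enum constraint is
        # enforced by the wrapper's signature before any request is made.
        body = json.dumps({"name": name, "gender": gender.value}).encode()
        req = request.Request("https://api.example.com/user", data=body,
                              headers={"Content-Type": "application/json"})
        request.urlopen(req)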
This is a design choice - the enum type constraint should be documented in the public API of the server and it's part of its contract.
Clients are forced to obey the contract to make a successful request, but are not required to implement the validation logic. You can safely let the clients fail with "Bad Request" or other 4xx error.
Implementing the validation logic on both sides couples the client and the server - any changes to the validation logic should be implemented on both sides.
If the validation logic is something closer to common sense (e.g. this field should not be empty) it can safely be implemented on both sides.
If the validation logic is something more domain specific, I think it should be kept on the backend side only.
You have to think about the same trade-offs with a wrapping library (which can be looked at as a client of the server API). It depends on the role of the wrapping library: if it should expose the full API contract of the server, then by all means the validation logic can be duplicated in the wrapping lib; otherwise I would keep it on the backend.
The wrapper library is the actual client of the REST API and hence has to adhere to both the architectural and the protocol-imposed constraints. In his blog post, Fielding explained some of the constraints even further. One of them is typed resources, which states that clients shouldn't assume the API returns a specific type, i.e. some user details in JSON. This is what media types and content negotiation are actually for.
The definition of a media type may give clients a hint on how to process the data received, e.g. as with the JSON- or XML-based vCard format. As media types define the actual format of some specific document, they may contain processing rules like pre-validation requirements or syntax regulations, e.g. through XML Schema or JSON Schema validation.
One of the basic rules in remote computing, though, is never to trust received input, and hence the server should validate the results regardless of whether the client has done any pre-validation. Due to the typed-resource constraint, a true RESTful client will check whether the received media type supports pre-validation through its spec, and will only apply pre-validation if the spec defines it and also mentions some mechanism for performing it (e.g. through a certain schema mechanism).
My personal opinion is that if you try to follow the REST architectural approach, you shouldn't validate the input unless the media type explicitly supports it. As a client will learn through error responses which fields and values a certain REST endpoint expects, and the server hopefully validates the inputs anyway, I don't see the necessity to validate on the client side. As performance considerations are often more important than following the rules and recommendations, it is up to you, though. Note, however, that this may or may not couple the client to the server and hence increase the risk of breaking more easily on server changes. As REST is not a protocol but a design suggestion, it is up to you which route you prefer.
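A sketch of schema-driven pre-validation as described above, assuming the jsonschema package and a schema that the media type's spec would define:

    import jsonschema  # pip install jsonschema

    # Schema the media type's specification would define (illustrative).
    USER_SCHEMA = {
        "type": "object",
        "properties": {"gender": {"enum": ["male", "female", "n/a"]}},
        "required": ["gender"],
    }

    def pre_validate(payload: dict) -> None:
        # Raises jsonschema.ValidationError before any network round trip;
        # the server must still validate on its side regardless.
        jsonschema.validate(instance=payload, schema=USER_SCHEMA)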

How can I content-encrypt FHIR/REST

We have a requirement to transfer documents (HL7/FHIR and others) over HTTP (REST), but where the network architecture involves multiple hops through proxies that unpack and repack the TLS. So the TLS-encryption doesn't help us much, since we break the law if the proxies in-between can see the data. (This is all within a secure semi-private network, but that doesn't help us because different organizations own the different boxes and we're required to have end-to-end encryption).
So we need payload encryption of the documents transferred with HTTP/REST. The common way to do it here is to use SOAP and encrypt the envelope.
What is the best mechanism for content/payload-encrypting REST-transported data? Is there a standard or open specification for it? In health or some other industry?
I guess one feasible way could be to add a special content type that requests encrypted content (S/MIME-based?) to the HTTP request header. A FHIR/REST-server should then be able to understand from the Accept-header of the HTTP-request that the content must be encrypted before responding. As long as the URL itself isn't sensitive, I guess this should work?
I guess also that maybe even the public key for encrypting the content could be passed in a special HTTP request header, and the server could use this for encryption? Or the keys could be shared in setting up the system?
Is this feasible and an ok approach? Has payload-encryption been discussed in the HL7-FHIR work?
It hasn't been discussed significantly. One mechanism would be to use the Binary resource. Simply zip and encrypt the Bundle/resource you want to transfer and then base-64 encode it as a Binary resource. (I suggest zipping because that makes it easy to set the mime type for the Binary.) That resource would then be the focus of a MessageHeader that would expose the necessary metadata to ensure the content is appropriately delivered across the multiple hops.
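A minimal sketch of that mechanism (gzip instead of zip for brevity, Fernet as a stand-in for whatever key scheme the parties agree on; field names follow FHIR's Binary resource only loosely):

    import base64
    import gzip
    import json
    from cryptography.fernet import Fernet  # pip install cryptography

    def to_encrypted_binary(bundle: dict, key: bytes) -> dict:
        # Compress and encrypt the Bundle, then wrap it as a Binary resource
        # so intermediate hops see nothing but an opaque blob.
        ciphertext = Fernet(key).encrypt(gzip.compress(json.dumps(bundle).encode()))
        return {
            "resourceType": "Binary",
            "contentType": "application/octet-stream",
            "data": base64.b64encode(ciphertext).decode("ascii"),
        }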
This might be a good discussion to start up on the HL7 security list server as they will likely have some additional thoughts and recommendations. (And can also ensure that wherever things land in terms of final recommendation, that that gets documented as part of the spec :>)

iPhone/iPad Encrypting JSON

I want to encrypt some json from a server and then decrypt it on the iphone/ipad. What are your thoughts on this? What is the best approach to this? Should I scrap this idea and just go via SSL?
Save yourself a lot of trouble and just use HTTPS for all server communications.
As stated above one way is to do everything over https.
An alternative I can think of is the following:
Generate a symmetric encryption key per session/login per client on the server
Send that key to the client over https
From there on, encrypt all the data you send to the client with that key
The client can then decrypt the encrypted data
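A rough server-side sketch of that alternative (names are illustrative): the key is minted per login, handed out once over HTTPS, and used for everything afterwards.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    SESSION_KEYS: dict[str, bytes] = {}

    def start_session(client_id: str) -> bytes:
        # Minted per login; must be delivered to the client over HTTPS.
        key = AESGCM.generate_key(bit_length=128)
        SESSION_KEYS[client_id] = key
        return key

    def encrypt_for(client_id: str, payload: bytes) -> bytes:
        # Prepend the nonce so the client can decrypt with the same key.
        nonce = os.urandom(12)
        return nonce + AESGCM(SESSION_KEYS[client_id]).encrypt(nonce, payload, None)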
I don't have enough knowledge about https. I often read that it is heavy on the resources of the system, but since I have not made or read any good benchmarks, I can't give you a rigorous argument for or against it.
The implementation I proposed requires a little bit more coding, but you can tailor it to your encryption needs.
I think ultimately your decision should be made based on your usage scenario: if you send very little data, infrequently, to a few client applications, you can't go wrong with https. If your expected encrypted traffic is high, the alternative solution might make sense.

How does zeromq work together with SSL?

I am considering using zeromq as a messaging layer between my applications. At least in some cases I want the communication to be secure, and I am thinking about SSL.
Is there some standard way to SSL-enable zeromq? As far as I understand, it doesn't support it out of the box.
It would be nice if I just had a parameter when connecting to a socket (bool: useSsl) :)
Any ideas?
Understanding that this is not really an answer to your question, I'm going to be encrypting the messages directly with RSA, before sending them with 0mq.
In the absence of a more integrated encryption method that is fully tested and implemented in my platform of choice, that's what I'm going with. 0mq just recently released version 4, which has encryption baked in, but it's still considered experimental and isn't fully supported by the language bindings.
Encrypting the message, rather than the connection, seems to provide the simplest upgrade path, and the difference for our purposes is pretty much just semantics, given how we'd have to implement encryption today.
Edit: I know more about encryption now than I did when I wrote this. RSA is not an appropriate choice for encrypting message data. Use AES, either with manually shared keys (this is our approach for the short term) or by implementing a key-sharing scheme as in Jim Miller's answer; but beware if you take the latter approach: designing and implementing a key-sharing scheme securely is hard. Way harder than you'd think. You can implement SSL/TLS directly (using message BIOs), and others have done so; it's also not simple, but at least the SSL scheme is an industry standard and therefore meets a minimum security requirement.
In short, before the Elliptic Curve crypto baked into ZMQ 4 is considered reliable and becomes standard, the "accepted solution" would be to implement SSL/TLS over the connection manually, and failing that, use AES 128 or 256 with a secure key sharing mechanism (key sharing is where RSA would appropriately be used).
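A minimal sketch of that interim approach (AES-GCM with a manually shared key, pyzmq for transport; the endpoint is illustrative):

    import os
    import zmq  # pip install pyzmq
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    SHARED_KEY = AESGCM.generate_key(bit_length=256)  # distributed out of band

    def send_encrypted(payload: bytes) -> None:
        # Encrypt the message itself, not the connection; the receiver
        # splits off the 12-byte nonce and decrypts with the shared key.
        sock = zmq.Context.instance().socket(zmq.PUSH)
        sock.connect("tcp://localhost:5555")  # illustrative endpoint
        nonce = os.urandom(12)
        sock.send(nonce + AESGCM(SHARED_KEY).encrypt(nonce, payload, None))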
We are currently implementing a pre-shared key solution using 0mq that implements a key exchange protocol based loosely on TLS/SSL.
Essentially, we have a data aggregator service that publishes encrypted state of health data over a multicast 0mq publisher. A symmetric key is used (AES128) to encrypt the data and can be retrieved from a second service running as a simpler request/response model over 0mq.
To retrieve the symmetric key (PSK), we are implementing the following protocol:
Client connects
Server sends its certificate
Client verifies server certificate against a CA chain of trust
Client sends its certificate
Server verifies client certificate against its CA chain
Server encrypts PSK using client public key
Server sends encrypted PSK to client
Client decrypts PSK
Once a client has the PSK, it can decrypt the messages retrieved over multicast.
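A sketch of the two central steps above (the server encrypting the PSK with the client's public key, the client decrypting it), assuming the cryptography package; certificate verification is omitted:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Client key pair; in the real protocol the public key arrives inside
    # the client's certificate after it passes the CA chain-of-trust check.
    client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    psk = os.urandom(16)  # the AES128 session key from the aggregator

    encrypted_psk = client_key.public_key().encrypt(psk, OAEP)  # server side
    assert client_key.decrypt(encrypted_psk, OAEP) == psk       # client side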
We are also looking at implementing a session expire algorithm that uses two enveloped keys in the multicast service. One key is the current session key, and the second is the old, expiring key. That way, a client has a little more time to retrieve the new key without having to buffer encrypted messages before retrieving the new key.
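On the client, that rollover could look something like this (a sketch; the 12-byte-nonce framing is an assumption carried over from the examples above):

    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def decrypt_with_rollover(blob: bytes, current_key: bytes, old_key: bytes) -> bytes:
        # Try the current session key first, then the expiring one, so the
        # client need not buffer messages while fetching the new PSK.
        nonce, ciphertext = blob[:12], blob[12:]
        for key in (current_key, old_key):
            try:
                return AESGCM(key).decrypt(nonce, ciphertext, None)
            except InvalidTag:
                continue
        raise ValueError("no valid session key; fetch a fresh PSK")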
According to zeromq.org, it's not supported yet but they are looking into it. It looks like it's suggested as a project for Google Summer of Code.