How to sign data with a certificate on a web page

The Stage
I'm working on a web application to enable trade of assets between different parties.
The different parties are known to me, and are not too many.
I record each transaction of an asset from one party to another in a database (SQL Server). I've stripped down database access as much as I can, and parties have to identify themselves with usernames and passwords to initiate transactions. Communication from the web app to the API will of course be secure (HTTPS).
The Problem
These measures leave me pretty confident that the stored transactions are authentic, but of course the parties to the transactions do not necessarily trust that I (or anyone else with the sa password) haven't tampered with the data or created fake transactions.
My thinking
So I figure I need something to be able to prove the authenticity of such a transaction.
Maybe a signature from the party who is giving away their asset? A signature that I can store along with the signed transaction? In that case, the signer needs a private key, and I think certificates are a way to do this, right? I don't know of any way to access certificates installed on a computer from a web page, though, so the web page would have to launch a locally installed application to do the actual signing: the locally installed application could access the certificate, and the signing party would have to trust this application to sign the actual, untampered-with transaction.
Is this a feasible solution?
How should certificates be distributed? I guess they should be issued by a CA that both I and the party trust?

Related

Mobile app/API security: will a hardcoded access key suffice?

I'm building a mobile app which allows users to find stores and discounts near their location. The mobile app gets that information from the server via a REST API, and I obviously want to protect that API.
Would it be enough to hardcode an access key (128 bit) into the mobile app, and send it on every request (enforcing https), and then check if it matches the server key? I am aware of JWT, but I believe using it or another token-based approach will give me more flexibility but not necessarily more security.
As far as I can see the only problem with this approach is that I become vulnerable to a malicious developer on our team. Would there be a way to solve this?
First, as discussed many times here, what you want to achieve is not technically possible in a way that could be called secure. You can't authenticate the client; you can only authenticate people. The reason is that anything you put into a client will be fully available to the people using it, so they can reproduce any request it makes.
Anything below is just a thought experiment.
Your approach of adding a static key, and sending it with every request is a very naive attempt. As an attacker, it would be enough for me to inspect one single request created by your app and then I would have the key to make any other request.
A very slightly better approach would be to store a key in your app and sign requests with it (instead of sending the actual key), for example by creating an HMAC of the request with the key and adding it as an additional request header. That's better, because I would have to disassemble the app to get the key - more work, which could deter me, but still very much possible.
As a next step, that secret key could be generated upon the first use of your app, and there we are getting close to as good as this could get. Your user installs the app, you generate a key, store it in the appropriate keystore of your platform, and register it with the user. From then on, your backend requires every request to be signed (e.g. with the HMAC method above) with the key associated with that user. The obvious problem is how you would assign keys - and such a key would still authenticate a user, not his device or client, but at least from a proper keystore on a mobile platform it would not be straightforward to get the key without rooting/jailbreaking. But nothing would keep an attacker from installing the app on a rooted device or an emulator. A minimal sketch of this scheme is shown below.
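Purely for illustration, here is a minimal Python sketch of the per-user HMAC signing idea described above. The header names, the in-memory key table and the way the key is distributed are all hypothetical; a real deployment would keep the keys in a database on the server and in the platform keystore on the device:

```python
import hashlib
import hmac
import time

# Hypothetical per-user keys registered on first app launch.
USER_KEYS = {"alice": b"key-generated-on-first-run"}

def sign_request(user: str, method: str, path: str, body: bytes, key: bytes) -> dict:
    """Client side: build the headers for a signed request."""
    timestamp = str(int(time.time()))
    message = b"\n".join([method.encode(), path.encode(), timestamp.encode(), body])
    signature = hmac.new(key, message, hashlib.sha256).hexdigest()
    return {"X-User": user, "X-Timestamp": timestamp, "X-Signature": signature}

def verify_request(headers: dict, method: str, path: str, body: bytes) -> bool:
    """Server side: recompute the HMAC with the key registered for this user."""
    key = USER_KEYS.get(headers.get("X-User", ""))
    if key is None:
        return False
    message = b"\n".join([method.encode(), path.encode(),
                          headers["X-Timestamp"].encode(), body])
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, headers["X-Signature"])
```

The timestamp limits replay of captured requests, but as noted above, anyone who extracts the key from the device can still sign arbitrary requests.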
So the bottomline is, you can't prevent people from accessing your api with a different client. However, in the vast majority of cases they wouldn't want to anyway. If you really want to protect it, it should not be public, and a mobile app is not an appropriate platform.
YOUR PROBLEM
Would it be enough to hardcode an access key (128 bit) into the mobile app, and send it on every request (enforcing https), and then check if it matches the server key?
Well, no, because any secret you hide in a mobile app is no longer a secret: it will be accessible to anyone willing to spend some time reverse engineering your mobile app, or to perform a MitM attack against it on a device they control.
In the article How to Extract an API key from a Mobile App by Static Binary Analysis I show how easy it is to extract a secret from the binary, but I also show an approach to hiding it that makes it harder to reverse engineer:
It's time to look for a more advanced technique to hide the API key in a way that will be very hard to reverse engineer from the APK, and for this we will make use of native C++ code to store the API key, by leveraging the JNI interface which uses NDK under the hood.
While the secret may be hard to reverse engineer, it is still easy to extract with a MitM attack, and that is what I talk about in the article Steal that API key with a MitM Attack:
So, in this article you will learn how to set up and run a MitM attack to intercept HTTPS traffic on a mobile device under your control, so that you can steal the API key. Finally, you will see at a high level how MitM attacks can be mitigated.
If you read the article you will learn how an attacker can extract any secret you transmit over HTTPS to your API server, so a malicious developer on your team will not be your only concern.
You can go and learn how to implement certificate pinning to protect the HTTPS connection to your API server, and I have also written an article on it, entitled Securing Https with Certificate Pinning on Android:
In this article you have learned that certificate pinning is the act of associating a domain name with its expected X.509 certificate, and that this is necessary to protect trust-based assumptions in the certificate chain. Mistakenly issued or compromised certificates are a threat, and it is also necessary to protect the mobile app against their use in hostile environments like public wifis, or against DNS hijacking attacks.
Finally you learned how to prevent MitM attacks with the implementation of certificate pinning in an Android app, first with a network security config file for modern Android devices, and later by using the TrustKit package, which supports certificate pinning for both modern and old devices.
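The articles above cover pinning on Android via the network security config and TrustKit; purely to illustrate the underlying idea in a platform-neutral way, here is a rough Python sketch that pins the server's leaf certificate by comparing its SHA-256 fingerprint against a hard-coded value (the fingerprint is a placeholder):

```python
import hashlib
import socket
import ssl

# Placeholder: SHA-256 fingerprint of the certificate you expect the server to present.
PINNED_FINGERPRINT = "0000000000000000000000000000000000000000000000000000000000000000"

def open_pinned_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection and refuse it if the server certificate is not the pinned one."""
    context = ssl.create_default_context()  # normal chain and hostname validation still apply
    sock = context.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
    leaf_der = sock.getpeercert(binary_form=True)
    fingerprint = hashlib.sha256(leaf_der).hexdigest()
    if fingerprint != PINNED_FINGERPRINT:
        sock.close()
        raise ssl.SSLCertVerificationError("server certificate does not match the pin")
    return sock
```

Pinning the leaf fingerprint means the pin must be rotated whenever the certificate is renewed; pinning the public key (SPKI) hash, as the Android network security config does, is the more common choice for that reason.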
Sorry, but I need to inform you that certificate pinning can be bypassed on a device the attacker controls, and I show how it can be done in the article entitled Bypass Certificate Pinning:
In this article you will learn how to repackage a mobile app in order to make it trust custom ssl certificates. This will allow us to bypass certificate pinning.
Even though certificate pinning can be bypassed, it is still important to use it to secure the connection between your mobile app and the API server.
So now what? Am I doomed to fail at defending my API server? Hope still exists, keep reading!
DEFENDING THE API SERVER
The mobile app gets that information from the server via a REST API, and I obviously want to protect that API.
Protecting an API server for a mobile app is possible, and it can be done by using the Mobile App Attestation concept.
Before I explain the Mobile App Attestation concept in detail, it is important to first clarify a common misconception among developers: the difference between WHO and WHAT is accessing your API server.
The Difference Between WHO and WHAT is Accessing the API Server
To better understand the difference between WHO and WHAT is accessing an API server, let's use this picture:
The Intended Communication Channel represents the mobile app being used as you expect: a legit user without any malicious intentions, using an untampered version of the mobile app, and communicating directly with the API server without being man-in-the-middle attacked.
The actual channel may represent several different scenarios, like a legit user with malicious intentions using a repackaged version of the mobile app, or a hacker using the genuine version of the mobile app while man-in-the-middle attacking it in order to understand how the communication between the mobile app and the API server is done, so that they can automate attacks against your API. Many other scenarios are possible, but we will not enumerate each one here.
I hope that by now you may already have a clue why the WHO and the WHAT are not the same, but if not it will become clear in a moment.
The WHO is the user of the mobile app that we can authenticate, authorize and identify in several ways, for example with OpenID Connect or OAuth2 flows.
OAUTH
Generally, OAuth provides to clients a "secure delegated access" to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.
OpenID Connect
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
While user authentication may let the API server know WHO is using the API, it cannot guarantee that the requests have originated from WHAT you expect, the original version of the mobile app.
Now we need a way to identify WHAT is calling the API server, and here things become trickier than most developers may think. The WHAT is the thing making the request to the API server. Is it really a genuine instance of the mobile app, or is it a bot, an automated script, or an attacker manually poking around your API server with a tool like Postman?
To your surprise, you may end up discovering that it is one of your legit users using a repackaged version of the mobile app, or an automated script that is trying to game and take advantage of the service provided by the application.
Well, to identify the WHAT, developers tend to resort to an API key that they usually hard-code into their mobile app. Some developers go the extra mile and compute the key at run-time in the mobile app, so it becomes a runtime secret, as opposed to the former approach where a static secret is embedded in the code.
The above write-up was extracted from an article I wrote, entitled WHY DOES YOUR MOBILE APP NEED AN API KEY?, which you can read in full here; it is the first article in a series about API keys.
Mobile App Attestation
The role of a Mobile App Attestation solution is to guarantee at run-time that your mobile app was not tampered with, is not running on a rooted device, is not being instrumented by a framework like xPosed or Frida, and is not being MitM attacked. This is achieved by running an SDK in the background. A service running in the cloud will challenge the app and, based on the responses, attest the integrity of the mobile app and the device it is running on; thus the SDK is never responsible for any decisions.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
xPosed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
MiTM Proxy
An interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers.
On successful attestation of the mobile app's integrity, a short-lived JWT token is issued, signed with a secret that only the API server and the Mobile App Attestation service in the cloud know. If the mobile app attestation fails, the JWT token is signed with a secret that the API server does not know.
Now the app must send the JWT token in the headers of every API request. This allows the API server to serve only requests for which it can verify the signature and expiration time of the JWT token, and to refuse them when verification fails.
Since the secret used by the Mobile App Attestation service is not known to the mobile app, it is not possible to reverse engineer it at run-time, even when the app has been tampered with, is running on a rooted device, or is communicating over a connection that is the target of a man-in-the-middle attack.
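A minimal sketch of the API-side check described above, assuming the PyJWT library and a hypothetical Approov-Token header name; the shared secret is known only to the API server and the attestation service, never to the app:

```python
import jwt  # PyJWT

# Secret shared between the API server and the attestation service; never shipped in the app.
ATTESTATION_SECRET = b"replace-with-the-real-shared-secret"

def is_request_attested(headers: dict) -> bool:
    """Return True only if the attestation token is present, correctly signed and not expired."""
    token = headers.get("Approov-Token")
    if not token:
        return False
    try:
        # decode() verifies the HMAC-SHA256 signature and the 'exp' claim by default.
        jwt.decode(token, ATTESTATION_SECRET, algorithms=["HS256"])
        return True
    except jwt.InvalidTokenError:
        return False
```

Requests for which this check returns False are simply refused (e.g. with a 401 response).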
The Mobile App Attestation service already exists as a SaaS solution at Approov (I work here), which provides SDKs for several platforms, including iOS, Android, React Native and others. The integration also needs a small check in the API server code to verify the JWT token issued by the cloud service. This check is necessary for the API server to be able to decide which requests to serve and which to deny.
SUMMARY
In the end, the solution to use in order to protect your API server must be chosen in accordance with the value of what you are trying to protect and the legal requirements for that type of data, like the GDPR regulations in Europe.
DO YOU WANT TO GO THE EXTRA MILE?
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.

Signing HTML Documents Using PGP or SSL via Web Forms

Let’s say I have a contract between two parties published on the Web. I want both parties to be able to sign the contract to show they consent to the terms, the way they would with handwriting in real life. I have seen many TOS agreements online where this is done with just a check box, but I want to go a step further and enable each party to assert that the signature is theirs and not a forgery (somebody else checking the box for them).
Assuming the page is already served via HTTPS and username/password combos are not an option, which cryptographic technology is best suited for identity validation: PGP, SSL, or something else?
How might I do this using only HTML and a LAMP server on the other end, in such a way that the process is as automated as possible while still being secure? Code samples are obviously welcome but not necessary; I’m just trying to conceptualize it: do the contents of the contract have to be included in the signature? Do I have the users upload public keys or something? I’m no crypto expert so that’s where I get lost.
SSL is a transport security mechanism; it's not applicable here.
You can use OpenPGP or you can use PKI (X.509 certificates and the CMS format). These technologies let you sign the data two or more times without invalidating previous signatures - this is done by using detached signatures.
The choice of which to use (PGP or PKI) is yours - these technologies can be used in similar scenarios, but they have different ways of authenticating keys: in PGP, user keys are signed by other users, while in PKI, certificates are signed by certificate authorities, which are supposed to have more credibility.
When you "sign the document" using cryptographic signature, from technical point of view it's a hash of the document that is signed. The hash can be calculated on the server and sent to the client for signing, then the detached signature is transferred back to the server. So you can keep the document on the server, and private keys used for signing will not leave the client.
However, to do the actual signing on the client, you need some module which will communicate with the server and do the job. You can't go with just a web browser - some browser plug-in is required. The reason is that Javascript "cryptography", even if it technically allowed access to client-side keys stored in files or on cryptographic devices, has certain conceptual flaws which make it almost useless. So you end up using something more trusted and secure, i.e. a signed applet, an ActiveX control or a Flash script.
Our company provides various security components, among which are components and modules for distributed signing (including the above-mentioned plugins). These modules are for PKI operations (we also have components for OpenPGP operations in general, but they don't support distributed signing at the moment).
And I should note that "automation" here is only possible to the extent that the user chooses the certificate to use and clicks a "sign" button (for example). You can't sign anything without the user's explicit action. In some cases the user would also need to provide a PIN / password which protects the private key from being misused.

Is it possible to distribute a populated keychain with an application

I am working on an application that uses a private web service.
We currently use a bundled client certificate to enable 2-way SSL connectivity; however, the password for the certificate is in the code, and it is a concern that this could be decompiled and used with the (trivially) extracted certificate file for nefarious purposes.
Is there a method by which I can pre-load a password into the application keychain for distribution with the app so that the password is never left in the open?
No matter how you put your password into your binary, there will be some way to exploit it, be it with debugging tools, code analysis, etc.
You'd better treat your web service as open... you may be unlikely to receive improperly authorized requests in the near future, but basically you are giving away access to the public.
The keychain should be encrypted with a user-specific key, and this is something you obviously cannot pre-populate - otherwise you would be able to read everyone's data anyway.
If you really need to protect it, you probably need user accounts on your server... whether this is more secure than obscurity is up to you.

How can I verify the authenticity of requests from an iphone app to my web service

I'd like to make requests from an iphone app to a web service I've built. How can I verify that requests made to the web service come from my iphone app (or indeed any authorised source) and are not forged?
I have looked at basic auth over HTTPS but is baking credentials into an application secure?
This question isn't really iphone specific; I'd like to know how to protect and authenticate requests in general.
Authentication can be asserted by presenting something you know, something you have, something you are or a combination of the three.
The iPhone doesn't have retinal or fingerprint scanners, so there are no "something you are" options available.
Client certificates work well as a "something you have" token. Most smartcards work by signing a message with an embedded certificate. When a certificate is compromised, it can be put onto a Certificate Revocation List (CRL) referenced by the webservers. Obviously, you wouldn't want to put your app's embedded certificate in the CRL -- that would deny access to all your users. Instead, you'll want users to download individual certificates to their iPhone.
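As a rough illustration of the server side of this, here is a minimal Python sketch of an HTTPS endpoint that refuses connections from clients that do not present a certificate issued by your CA. The file names are placeholders, and real deployments usually configure mutual TLS in the web server or load balancer and also consult a CRL:

```python
import http.server
import ssl

# Placeholders: the server's own certificate/key and the CA that issued the client certificates.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
context.load_verify_locations(cafile="client-ca.crt")
context.verify_mode = ssl.CERT_REQUIRED  # reject any client without a valid client certificate

httpd = http.server.HTTPServer(("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()
```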
After that, it's a matter of monitoring for unusual behavior to find the bad actors and adding those certs to the CRL. Two dead giveaways would be clients who send too many requests at once or from too many different IPs in too short a time.
Login/password is a simple "something you know" token. Like certificates, login/password combinations can be compromised and similar monitoring can be set up to find inappropriate behavior. The difference is compromised accounts would be marked "blocked" rather than added to a CRL.
By requiring both a client certificate and a login/password you increase the amount of effort needed to compromise an account.
Of course, you must ensure only valid accounts are added to the database. If there is an automated way to create new accounts and corresponding client certificates, then that account-creation server/process becomes the easiest way for bad actors to create viable, unauthorized accounts. Requiring a real person to sign off on accounts removes the automation, but means a disgruntled or corrupt employee could create invalid accounts. Requiring a second person to counter-sign each account makes it harder for a single person to be an inside threat.
In short, ensuring high integrity of the clients is a process that can be made arbitrarily complex and expensive. What tools and processes you decide to deploy as the authentication scheme has to be balanced by the value of what it is protecting.
In theory, if you want the connection to be secure, the best approach is to have the client sign their requests using a certificate. There are multiple resources about this. Look for "client certificate" on Google.
This example from Sun is in Java, but the concept is similar whatever the language.
PS: obviously, this doesn't prevent you from using other authentication methods such as passwords, etc...
PPS: Keep in mind that if someone manages to extract the certificate from your application, you are screwed either way ;-). You can imagine a store providing an individual certificate to each app and invalidating the certificates that are compromised.

How to securely communicate with server?

I'm building a solution consisting of an app and a server. The server provides some methods (JSON) and the app uses them. My aim is to make those API methods inaccessible to other clients. What is the best way to do so?
Should I take a look at certificates (to sign every outgoing request)? If yes, where do I start and what is the performance impact of doing so?
What are alternatives?
Put another way, you need a way to distinguish a valid client's request from an invalid client's request. That means the client needs to present credentials that demonstrate the request comes from a valid source.
SSL certificates are an excellent way to assert identity that can be validated. The validity of an SSL certificate can be confirmed if the certificate contains a valid signature created by another certificate known to be secure, a root cert. As noted in other answers an embedded certificate won't do the job because that certificate can be compromised by dissecting the app. Once it is compromised, you can't accept any requests presenting it, locking out all your users.
Instead of one embedded app cert, you need to issue a separate certificate to each valid user. To do that, you need to set up (or outsource to) a Certificate Authority and issue individual, signed certificates to valid clients. Some of these certificates will be compromised by the user -- either because they were hacked, were careless, or are intentionally trying to defraud your service. You'll need to watch for these stolen certificates, place them on a certificate revocation list (CRL) and refuse service to the compromised certificates. Any web server is able to refuse a connection based on a CRL.
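For illustration, checking a presented certificate against your CRL can be as simple as this Python sketch using the pyca/cryptography package (the inputs are placeholders, and a real check would also verify the CRL's own signature and freshness):

```python
from cryptography import x509

def is_revoked(cert_pem: bytes, crl_pem: bytes) -> bool:
    """Return True if the client certificate's serial number appears on the CRL."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    crl = x509.load_pem_x509_crl(crl_pem)
    return crl.get_revoked_certificate_by_serial_number(cert.serial_number) is not None
```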
This doesn't solve the security issues, it just moves them out of the app. It is still possible for someone to create what appears to be a valid certificate through social engineering or by stealing your root certificate and manufacturing new signed certificates. (These are problems all PKI providers face.)
There will be a performance hit. How much of a hit depends on the number of requests from the app. The iPhone NSURLConnection class provides support for SSL client certificates and client certificates can be installed in the phone from an e-mail or authenticated web request. Managing the infrastructure to support the client certs will require more effort than coding it into the app.
Incidentally, voting down any answer you don't like creates a chilling effect in the community. You're not nearly as likely to get advice -- good or bad -- if you're going to take a whack at everyone's reputation score.
I will now freely admit that it's an interesting question, but I have no idea how it could be done.
Original answer:
Interesting question. Assuming people can't reverse-engineer the iPhone app, the only solution that comes to mind would be to sign requests with a key, or some other secret known only to the application. By that, I mean adding an extra argument to every API call that is a hash of the destination URL and other arguments, combined with a secret known only to your server and application.
To expand upon this: suppose your API call has arguments foo, bar and qux. I would add a signature argument, the value of which could be something as simple as sorting the other arguments by name, concatenating them with their values, appending a secret, and hashing the lot. Then on the server side, I would do the same thing (except for the signature argument) and check that the hash matches the one given in the request. A small sketch follows.
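A small Python sketch of that scheme; the parameter names and the secret are of course made up:

```python
import hashlib
import hmac

SHARED_SECRET = "known-only-to-app-and-server"  # hypothetical

def compute_signature(params: dict, secret: str = SHARED_SECRET) -> str:
    """Sort the arguments by name, concatenate names and values, append the secret, hash the lot."""
    canonical = "".join(f"{name}{value}" for name, value in sorted(params.items()))
    return hashlib.sha256((canonical + secret).encode()).hexdigest()

# Client side: add the signature as an extra argument.
request = {"foo": "1", "bar": "2", "qux": "3"}
request["signature"] = compute_signature(request)

# Server side: recompute over everything except 'signature' and compare in constant time.
received = dict(request)
claimed = received.pop("signature")
assert hmac.compare_digest(compute_signature(received), claimed)
```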
Consider authenticated HTTP.
For a cheaper alternative, there's shared secret/hash scheme. The client and the server have a shared secret string of text. Upon request, the client hashes together (using MD5, or SHA1, or SHA something else - you choose) the request fields and the secret. The hash value is attached to the request - say, as another POST field.
The server does the same operation with the request and with its copy of the secret, then compares the hash values. If they don't match - service denied.
For added security, you may encrypt the hash with an RSA public key. The client has the public key, the server keeps the private key. The server decrypts the hash with the private key, then compares as before. I did that with a C++ WinMobile client and a PHP-based service - works like a charm. No experience with crypto on iPhone, though.
UPDATE: now that I think of it, if we assume that the attacker has complete control over the client (ahem jailbroken iPhone and a debugger), the problem, as formulated above, is not solvable in theory. After all, the attacker might use your bits to access the service. Reverse-engineer the executable, find the relevant functions and call them with desired data. Build some global state, if necessary. Alternatively, they can automate your UI, screen scraper style. Such is the sad state of affairs.