Limit web-service access to only SMS-verified smartphones - iPhone

I have a web service on Google App Engine that an iPhone/Android app calls.
I use HTTPS with basic authentication, with credentials of the form mobilenumber:pincode:device_id.
The phone number is verified with SMS.
But to make sure hackers aren't using the service with other people's accounts, the device is given an RSA public key when it is SMS-verified, with the private key saved on the server (each device has its own public/private key pair). When a request is made, the device uses basic authentication but also includes the whole string encrypted with the public key; on the server side I look up the private key by device_id, decrypt the encrypted message, and make sure it matches the basic authentication string.
The reason for this is that I find basic authentication too easy to hack (perhaps it's not?). The PIN code is only 4 digits, so I guess a hacker would only have to find out another user's device_id somehow.
But maybe the whole RSA thing is overkill? Could I just make the "server key" a 20-character alphanumeric string for each device, created on the server and given to the device at SMS verification, so that it would be: mobilenumber:pincode:device_id:20AlphanumericStringForThisDevice?
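If the simpler per-device secret turns out to be enough, I imagine the server side would look roughly like this (a Python sketch just to illustrate the idea; the in-memory device_secrets dict stands in for whatever datastore I'd actually use):

```python
import hmac
import secrets

# Hypothetical in-memory store keyed by device_id (stand-in for a real datastore).
device_secrets = {}

def issue_device_secret(device_id: str) -> str:
    # Handed to the device once, right after the SMS verification succeeds.
    secret = secrets.token_urlsafe(20)  # ~27 URL-safe random characters
    device_secrets[device_id] = secret
    return secret

def verify_request(device_id: str, presented_secret: str) -> bool:
    stored = device_secrets.get(device_id, "")
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(stored, presented_secret)
```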

Where to Store Private Keys?

I'm building an app that I want to have E2EE. My struggle is with the private keys. Most of what I read says you shouldn't store them on AWS servers, because then it would no longer be E2EE and would amount to a backdoor. I don't want to create a backdoor; I want the user ONLY to hold the key. However, if the user logs in from another device, they cannot retrieve their data because the private key is on the original device.
So what are some ways to let the user log in from another device and retrieve their data without trouble, and without putting their private key at risk?
Please consider that I'm new to this subject and I'm using CryptoKit from Apple :)
Thanks!
You can use the user’s id and password hash (for example) to encrypt the private key and store the encrypted version of it on the server.
Encrypt the private key locally using the user's id and password (or a hash of it)
Send this encrypted key to the server to store it there
Now when the user logs in from another device, the encrypted key can be retrieved and decrypted locally using the user's id and password.
Thus, it won't be possible to decrypt and use the stored key without the user's credentials. However, this also means that if the user changes their password, the encrypted key needs to be decrypted with the old password and re-encrypted with the new one.
That’s the usual approach for your requirement.
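A minimal sketch of that flow in Python with the cryptography package (on iOS you would use the CryptoKit/CommonCrypto equivalents; the PBKDF2 parameters and the nonce-prefixed storage format are assumptions for illustration):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _wrapping_key(password: str, salt: bytes) -> bytes:
    # Derive a symmetric wrapping key from the password; the password itself
    # never leaves the device, and the server only ever sees the wrapped blob.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=600_000)
    return kdf.derive(password.encode())

def wrap_private_key(private_key_bytes: bytes, password: str, salt: bytes) -> bytes:
    nonce = os.urandom(12)
    # The nonce is stored alongside the ciphertext so another device can unwrap.
    return nonce + AESGCM(_wrapping_key(password, salt)).encrypt(
        nonce, private_key_bytes, None)

def unwrap_private_key(blob: bytes, password: str, salt: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(_wrapping_key(password, salt)).decrypt(nonce, ciphertext, None)
```

On a password change, the new flow is simply unwrap_private_key with the old password followed by wrap_private_key with the new one.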

Protect OAuth2 code exchange API endpoint

Let's say I have an Android app with Reddit OAuth2 authentication. I initiate the authorize request with my client ID and the user accepts the consent screen. Now I get the authorization code, which my server exchanges for a token via an HTTP request. This process is supposed to protect my client secret, since it stays on my server, but it actually doesn't: anyone can take the client ID from the app by decompiling it, initiate an authorize request to Reddit, and exchange the code for a token through my server. They don't even need to know the secret to get the token.
How can one protect the API against this kind of misuse (or attack)?
Is there any way I can make my API accept requests only from my app and reject all others (using SHA256 or the like)?
I have looked up and studied PKCE, but it is not useful in this case, as it only protects against code sniffing/interception and ensures the token goes to the original initiator of the authorize request.
You will probably want to store a secret. When the app is first opened (and at certain intervals afterwards, to keep it secure), generate a keypair. Store the private key in the device's Keystore and send the public key to your backend. When authenticating to your API, sign the client's secret with the private key and verify it on the backend using the public key.
Note that this will add noticeable overhead to your login process, because mobile devices are not necessarily well equipped to perform cryptography, though this is less and less true.
EDIT: Your keypair will need to be issued by a CA you trust, otherwise this is all useless.
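A rough sketch of that sign/verify round trip, shown in plain Python with the cryptography package rather than the Android Keystore API, just to illustrate the flow (the challenge value is a hypothetical nonce your backend would issue per login):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Device side (conceptually inside the Keystore): generate once, register the
# public key with the backend, keep the private key on the device.
device_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
device_public = device_private.public_key()

challenge = b"nonce-issued-by-backend-for-this-login"  # placeholder
signature = device_private.sign(challenge, pss, hashes.SHA256())

# Backend side: verify the signature with the stored public key; raises on failure.
device_public.verify(signature, challenge, pss, hashes.SHA256())
```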

How to secure a REST API between mobile app and the server

My project includes a web application, a mobile app, and a REST API module.
The mobile app is made with Ionic 3 for Android and uses a REST API located at an address like example.com/api.php on a server with HTTPS. The API has access to a MySQL database.
For the users who access the API I have to create the login, API-access, and logout functions, since their accounts are already created in the web application.
The main concern is implementing a secure login. Meaning: if someone tries to access my API without authorization (they know the address, the function names, or the parameter names), they should receive an error message. In order to access the API you must be logged in and have the right to access a certain section (I have multiple levels of access).
But how can I detect whether a user accessing my REST API is logged in and has the proper rights?
The plan:
For the login step:
In order to access the REST API I have to log in with a username/password in the app. I check whether the credentials are correct (if the user exists I determine the access level) and return a JWT containing the user ID and other parameters if necessary (a token). The JWT is stored in the phone's local storage.
To secure access to the REST API functions:
The question is: HOW DO I DO THAT? How do I secure access to a function of my REST API?
For every request that I make to the REST API, should I also send the token from local storage and verify it on the server side?
How do I perform the validation on the server? Do I store the token on the device and also on the server and compare them for each request?
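Roughly what I have in mind, sketched in Python with PyJWT just to make the flow concrete (my backend is actually PHP, and the secret and claim names here are only placeholders):

```python
import datetime
import jwt  # PyJWT

SECRET = "replace-with-a-strong-server-side-secret"  # placeholder

def issue_token(user_id: str, access_level: str) -> str:
    # Issued at login, after the username/password check succeeds.
    payload = {
        "sub": str(user_id),
        "level": access_level,
        "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # Called for every request; raises jwt.InvalidTokenError if the token is
    # expired, tampered with, or signed with a different key.
    return jwt.decode(token, SECRET, algorithms=["HS256"])
```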
Thanks a lot!
There are multiple ways to do it; it all depends on you. I am sharing the method I generally use, but I'm not claiming it is the most secure way.
We sign each request with a hash computed over the data and a private key. For example:
Register User web service
E.g. we have 4 params with my register web service: 1. username, 2. name, 3. email, 4. password.
We create a SHA-256 hash of the data concatenated with the private key, pass that hash to the server, and on the server side we generate the hash with the same method and compare the two.
E.g. string with private key = usernamenameemailpasswordprivatekey
sha256 of string = 7814b2d22af647308884acff0be4c675b7f72ba000cf1e8390520100cc930e74
You may use any sequence for your data string and the same method will work on your server. Always use an SSL certificate on your server for more security.
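A minimal sketch of that scheme in Python (the field names and the "privatekey" value simply mirror the example above; a real service would keep the key out of the client binary as far as possible and compare hashes in constant time):

```python
import hashlib
import hmac

PRIVATE_KEY = "privatekey"  # shared secret from the example above (placeholder)

def request_hash(username: str, name: str, email: str, password: str) -> str:
    # The concatenation order must be identical on the client and the server.
    data = f"{username}{name}{email}{password}{PRIVATE_KEY}"
    return hashlib.sha256(data.encode()).hexdigest()

def verify(fields: dict, presented_hash: str) -> bool:
    # Server side: recompute the hash from the received fields and compare it
    # in constant time against the hash the client sent along.
    expected = request_hash(fields["username"], fields["name"],
                            fields["email"], fields["password"])
    return hmac.compare_digest(expected, presented_hash)
```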

Understanding RSA signing for JWT

I'm implementing a sign-in system with the help of the JWT (JSON Web Token) scheme. Basically, after a user signs in / logs in, the server signs a JWT and passes it to the client.
The client then returns the token with each request and the server verifies the token before sending back a response.
This is pretty much how you would expect it to work, but I'm having some problems with the logic of the process. From all the mathematical articles I've read, it seems that RSA signing uses asymmetric keys. Since the public key, as its name suggests, is exposed to the client and the private key is kept on the server, it would make sense to sign the JWT with the public key which is sent to the client and verify it on the server side using the private key.
However, in every example and library I see, it seems to be the other way around. Any idea why? If a JWT is signed with the private key and verified with the public one, then what's the point?
First off, apologies, this answer got rather long.
If you use RSA to sign your tokens and the connecting client is a web browser, the client will never see the RSA keys (public or private). This is because the client presumably doesn't need to verify that the JWT is valid; only the server needs to do that. The client just holds onto the JWT and shows it to the server when asked. Then the server checks to make sure it's valid when it sees the token.
So why might you need a public/private key combo for JWTs? Well, first off, you don't need to use a public/private key algorithm.
You can sign JWTs with a number of different algorithms, RSA being one of them. Other popular choices for signing your JWTs are ECDSA or HMAC algorithms (the JWT standard supports others as well). HMAC, specifically, is not a public/private key scheme. There's just one key, the key, which is used to both sign and validate the tokens. You can think of this as using the private key for both signing and validating the JWTs. I'm not an expert on this by any means, but here are the conclusions I came to from doing my own research recently:
Using HMAC is nice because it's the fastest option. However, in order to validate the JWTs, you need to give someone the one key that does everything. Sharing this key with someone else means that that person could now also sign tokens and pretend they're you. If you're building multiple server applications that all need to be able to validate your JWTs, you might not want every application to also have the ability to sign tokens (different programmers might be maintaining the different applications, sharing the signing ability with more people is a security risk, etc.). In this case, it's better to have one tightly controlled private key (and one app that does the signing) and then share the public key around to give others the ability to validate the tokens. Here, the private key is used for signing the tokens and the public key is used for validating them. In this case you'd want to choose RSA or ECDSA.
As an example, you might have an ecosystem of apps that all connect to the same database. To log users in, each app sends folks to one, dedicated, 'logging in' app. This app has the private key. The other apps can verify that the person is logged in using the public key (but they can't log people in).
The research I've done points to RSA being the better option for most JWT apps in this scenario. This is because your app will, in theory, be validating tokens frequently. RSA is much faster than ECDSA at verification. ECDSA is primarily nice because its keys are smaller, which makes it better for HTTPS certificates, where the public key has to be sent to the client's browser. In the JWT scenario, though, the keys stay on a server, so key size hardly matters and verification speed matters more.
Conclusion: if you're building a small app without multiple smaller 'micro-service' apps, or you're the only developer, probably choose HMAC to sign your tokens. Otherwise, probably choose RSA. Again though, I'm not an expert, just someone who recently googled this topic, so take this with a grain of salt.
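To make the difference concrete, here is a small illustrative sketch using PyJWT plus the cryptography package (the payload, secret and key size are placeholders, not a recommendation):

```python
import jwt  # PyJWT
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

payload = {"sub": "user-123", "role": "admin"}

# HMAC (HS256): one shared secret both signs and verifies.
hs_token = jwt.encode(payload, "shared-secret", algorithm="HS256")
print(jwt.decode(hs_token, "shared-secret", algorithms=["HS256"]))

# RSA (RS256): the private key signs, anyone with the public key can verify.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
private_pem = private_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)
public_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
rs_token = jwt.encode(payload, private_pem, algorithm="RS256")
print(jwt.decode(rs_token, public_pem, algorithms=["RS256"]))
```

Only the 'logging in' app would hold private_pem; every other service only needs public_pem to validate tokens.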
There is a difference between signing/verifying and encrypting/decrypting data but the semantics can be similar.
You sign data with a private key that only controlled sources have, so anyone who receives the information can use your public key to validate that it was indeed sent by you and is the same information you intended to send.
You encrypt data with a public key and decrypt with a private key. This sounds like the opposite, but it really follows the same logical concept as signing. If you want to send data between person A and person B, both people have a public/private key pair and they share their public keys with each other when they meet (handshake). A constructs a message for B and encrypts it using B's public key, then sends it to B. Now no one without B's private key can decrypt that message, including A, even though A originally sent it.
In terms of JWT, a JWT payload by itself is just Base64-encoded JSON with some standardized fields. The signature allows someone with the public key to validate that the information hasn't been altered by someone in the middle. Similar to a checksum, but with some extra security-based warm fuzzy feelings. The contents of a signed JWT are easily visible (Base64 is an encoding, like Unicode or UTF-8, not encryption) to the end user and anyone in the middle, which is why it is generally frowned upon to send sensitive data such as passwords or PII in a JWT.
As others have mentioned, most JWTs contain information not intended for clients but to help facilitate the stateless part of RESTful services. Commonly, a JWT will contain an account ID, a user ID, and often permissions as "claims". An API endpoint can verify the signature and reasonably trust the claims not to have been altered by the client. Having the client send the JWT with each request saves the endpoint a lot of database back and forth, since it only has to verify a signature with a public key.
Additionally, signed JWTs can be encrypted. According to the JWE spec, the payload is encrypted after signing and then decrypted before verifying. The trade-off here is that all endpoints must also have the private key to decrypt the JWT, but end users won't be able to see its contents. I say trade-off because in general private keys are meant to be kept secure, and a widely distributed private key is just less secure. Security, risk assessment, and the cost/benefit of encryption are a whole other beast :)
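If it helps, here is a compact illustration of the two directions using Python's cryptography package (purely a sketch; real code would handle exceptions, padding choices and key distribution more carefully):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = key.public_key()

# Signing: the private key produces the signature, the public key checks it.
sig = key.sign(b"payload", padding.PKCS1v15(), hashes.SHA256())
pub.verify(sig, b"payload", padding.PKCS1v15(), hashes.SHA256())  # raises if altered

# Encrypting: the public key locks the data, only the private key unlocks it.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = pub.encrypt(b"for the key holder only", oaep)
assert key.decrypt(ciphertext, oaep) == b"for the key holder only"
```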
Your suggestion:
it makes sense to sign the JWT with the public key which is sent to the client and verify it on the server side using the private key.
is not correct. Signing is done with the private key of the sender; encryption is done with the public key of the receiver. That is how PKI works in general.

How to deliver private keys for later decryption safely?

I'm developing a set of applications that let several users exchange and read encrypted data via email messages.
It's rather hard compared to live chat (IMs) through a single server (for live chat I just need a channel with TLS), because I need to decrypt a message that is simply stored on a remote server.
Also, I assume the server must not keep private keys, because the user wants to be sure that even the supplier side (backend) can't decrypt the content. Private keys must be stored on something like smart cards (which only the user has).
For emails, I've found two options:
S/MIME
OpenPGP
So... the main problem (for me) is how to distribute the private data that allows the user who received an encrypted email message to decrypt it.
So the question is about the correct distribution of private keys; right now I can't imagine how to deliver them in a secure way.
Private keys are, well, private. You don't want to be transferring them. Ever.
Instead, re-think the problem in terms of distributing the public keys in the other direction. Then you don't need to worry about eavesdropping (but you will want to be concerned with authenticity).
The proper approach is to use asymmetric cryptography to secure the data. In this scenario your users send each other their public keys, and they can do this in any way. Private keys remain on the user's side. The sender encrypts the data with the public key of the recipient, and the recipient uses their private key to decrypt the data.
If you absolutely must use symmetric algorithms and keys for encrypting the data, then you can still use asymmetric cryptography to deliver the symmetric keys in encrypted form (this is what S/MIME and OpenPGP actually do for you).
Note: when I talk about encryption with a public key, I mean a hybrid scheme, where the data is encrypted with a symmetric session key, which is then encrypted with the public key. Data is almost never encrypted with asymmetric cryptography directly, without employing a symmetric algorithm.
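For illustration, a minimal hybrid-encryption sketch in Python with the cryptography package (a simplified version of what S/MIME or OpenPGP do for you; in practice the recipient's private key would live on their smart card rather than be generated inline like this):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Recipient's key pair; the private key never leaves the recipient's device.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

# Sender side: encrypt the message with a fresh symmetric session key...
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"secret email body", None)
# ...and encrypt only the session key with the recipient's public key.
wrapped_key = recipient_public.encrypt(session_key, oaep)

# Recipient side: unwrap the session key, then decrypt the message.
unwrapped = recipient_private.decrypt(wrapped_key, oaep)
plaintext = AESGCM(unwrapped).decrypt(nonce, ciphertext, None)
assert plaintext == b"secret email body"
```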