How Do Hardware Token Devices Work? [closed]

Recently, my bank sent me a tiny device that generates a unique code which must be used when performing online transactions. All the device does is generate this code when I press a particular white button; it doesn't look like it connects to a remote server or anything of the sort.
I did some research and ended up in cryptography, at something called a hash function, but I still don't get it.
My Questions
How do my bank's servers know that the code generated by this device is correct?
Since it just generates five seemingly random digits every 30 seconds, why won't the server accept a random number that I decide to use instead?

This has very little to do with hash functions. A cryptographic hash function may be part of the implementation, but it's not required.
Actually, it generates the digits on a time-based interval: if I press the button, it generates the digits, and if I press it again after about 25 seconds, the digits change; they don't change if I press it again immediately after I'd just pressed it.
There's your hint: it's a time-based pseudo-random or cryptographic algorithm. For each time window there is a code, and both the dongle and the server know – or rather, can compute – the code for every window from a shared secret; the dongle does not connect to a remote server. The server will probably also accept one or two of the most recent codes, to cover the case where you enter a code that expires while the transmission is en route.
(Although my recent experience with Amazon Web Services multi-factor authentication has definitely resulted in login failures within 5 seconds of a code being displayed to me. In other words, some vendors are very strict with their timing windows. As always, it's a trade-off between security and usability.)
The abbreviations CodesInChaos mentions are Time-based One-Time Password (TOTP) and HMAC-based One-Time Password (HOTP), two algorithms commonly used in two-factor authentication.
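For concreteness, here is a minimal sketch of TOTP in Python, assuming the RFC 6238 defaults (HMAC-SHA-1, 30-second steps, 6 digits); a bank token works on the same principle, with the device's seed as the shared secret. The skew handling mirrors the window tolerance described above.

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, t=None, step: int = 30, digits: int = 6) -> str:
    # Counter = number of time steps since the Unix epoch (RFC 6238).
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

def verify(secret: bytes, submitted: str, skew: int = 1, step: int = 30) -> bool:
    # Accept the current window plus `skew` neighbors on each side,
    # which is the tolerance for recently expired codes described above.
    now = time.time()
    return any(hmac.compare_digest(totp(secret, now + i * step, step), submitted)
               for i in range(-skew, skew + 1))
```

Both sides compute the same code because they share the seed and (approximately) the same clock; nothing has to travel between the dongle and the server.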
Wikipedia has this to say about the RSA SecurID, a particular brand of two-factor-authentication dongle.
The RSA SecurID authentication mechanism consists of a "token" — either hardware (e.g. a USB dongle) or software (a soft token) — which is assigned to a computer user and which generates an authentication code at fixed intervals (usually 60 seconds) using a built-in clock and the card's factory-encoded random key (known as the "seed"). The seed is different for each token, and is loaded into the corresponding RSA SecurID server (RSA Authentication Manager, formerly ACE/Server) as the tokens are purchased.
I chose this article because it has a reasonable, physical description; the higher-level articles focus on the theoretical over the physical implementation.
The article also confirms that you need to keep the token secret; otherwise, someone else can impersonate your logins, since they would know the codes as easily as you do.
The token hardware is designed to be tamper-resistant to deter reverse engineering. When software implementations of the same algorithm ("software tokens") appeared on the market, public code has been developed by the security community allowing a user to emulate RSA SecurID in software, but only if they have access to a current RSA SecurID code, and the original 64-bit RSA SecurID seed file introduced to the server.
However, since the verifying server has to have foreknowledge of the tokens, the two-factor secrets are vulnerable to attacks on the source as well. SecurID was the victim of a high-profile theft that targeted their own servers and eventually led to secondary incursions on their clients' servers as well.
Finally, there is more information available on the security.stackexchange sister-site under the multi-factor tag, and also on this site under the two-factor-authentication tag.

I just opened an old security device and brainstormed about it.
I have an answer which relates to elapsed time:
Each of these security devices has a quartz crystal inside, and whenever that crystal is first powered its life cycle starts (just as everyone is born at some moment); no two devices start at exactly the same instant, which is why no two generate the same number at the same moment. So whenever you push the button, it generates a unique number calculated from the elapsed time (probably on the order of 1/1000000 precision, given the 6 digits shown on my device at 15-second intervals). But how does the bank server know my unique generated number?
As for the bank server:
Probably the bank counts the elapsed time after you activate the device, because you have to activate these security devices on first use with a unique number generated by your own device. So, with an exact timing calculation, the bank server knows the input number has to be xxx-xxx and that it will change as time elapses.
I am sure that the device's battery powers the quartz crystal for the battery's whole life cycle, even if you never use the security device. If the battery is removed, the device fails to generate numbers because the crystal is no longer powered and time can no longer be counted, so it can never generate the same unique numbers again.


Is there a need to expire email verification codes? [closed]

I've registered a GitHub account to test their email verification process. So:
They've sent me an email with a link containing my username and a 40-character code, like:
https://github.com/users/USERNAME/emails/120066679/confirm_verification/47889d71648523e5d99db5b969f59809c2715fb6
I did not follow the link.
Four days later, they sent me another email (a reminder) that I have to verify my email address, containing a link with a different 40-character code.
So, what was the purpose of changing the 40-character code? As I remember, other services expire the verification code anyway. If there is already a username in the verification link, is there really a need to do that? In case of brute force, I could just count failed attempts related to the specific user and block it, right?
P.S. Also interesting: what is the purpose of emails/120066679 in the link? (It is the same in both emails.)
There are several reasons why quick expiration of verification codes is the best practice.
If protection with a verification code is deemed appropriate, it's safest to make it not only complex enough but also valid for the minimum amount of time. If the code only works for the time actually needed (usually very short), you diminish the risk of someone abusing it. (For example, someone could programmatically 'guess' codes - the more time they have for this exercise, the higher their chance of success.)
Also, it's not efficient to store data of this kind. It's used once, it doesn't contain any actual information and as soon as it's used, it's ready to be "thrown away". It's not a good practice to store anything that doesn't add value when stored.
In addition, it's fairly rare for users not to use the codes immediately or soon after. For the small percentage of cases where the code expires before the user tries to use it, it's more efficient to simply generate a new one.
Well, the purpose of an email validation link is to make sure that you actually own the email address. Most validation links simply contain some secret that is sent your way; only by possessing it can you verify the email address.
The reason they changed the code is probably that it expires. With an expired code you could not activate the account, so they sent you another one in case you'd like to continue.
What if they don't send out a secret like this then?
In that case there is nothing that prevents an attacker from "verifying" emails that they actually have no control over. They could just visit the URL with the username plugged in and activate the account.
Normal users would not do this, but spammers might.
For the case of brute force:
If the secret is sufficiently random, and the keyspace is large enough, trying to guess it is a fool's errand.
We can assume this is a random 40-hex-character number, which gives us:
16**40 == 1461501637330902918203684832716283019655932542976
possible values for it. It is safe to say that no one will guess this number in the near future.
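As a sketch of how such a code might be issued and checked, assuming Python, an in-memory store, and an illustrative 24-hour expiry (all names here are hypothetical): `secrets.token_hex(20)` yields exactly the 40-hex-character, 16**40-value keyspace computed above.

```python
import secrets, time

TOKENS = {}  # token -> (username, expiry); a real service would persist this

def issue_token(username: str, ttl_s: int = 24 * 3600) -> str:
    token = secrets.token_hex(20)        # 20 random bytes -> 40 hex characters
    TOKENS[token] = (username, time.time() + ttl_s)
    return token

def verify_token(username: str, token: str) -> bool:
    entry = TOKENS.pop(token, None)      # single use: consumed on first lookup
    if entry is None:
        return False
    user, expires_at = entry
    return user == username and time.time() <= expires_at
```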

Are there any advantages in signing an application?

I looked recently into signing my application. The price is AT LEAST one hundred euros/dollars per year for EV (anything less than EV seems pointless anyway).
My application uses a basic installer (self-extracting WinRar) that requires no admin password. But the drawback of this is that I cannot install the app in Program Files.
The actual problem here is that you will find lots of resources that tell you how to sign your app, but hardly any that tell you whether there is any real advantage. For example: do regular PC users care when they install an app and Windows shows "Publisher: unknown", or do they just quickly hit the OK button to get the installation done as soon as possible?
Honestly, I don't think the user reads or cares about "unknown". What might stop him is actually the yellow color (instead of blue).
So, my question for those that already did code signing for their apps is: have you seen an improvement (downloads, installations, sales) after signing your app?
Should I invest any time/money/energy in this?
Update: It seems that having the app signed is not enough. After that, you have to keep fighting to improve your reputation factor; otherwise, Microsoft SmartScreen might pop up: https://mkaz.blog/code/code-signing-a-windows-application/
For those interested in prices (and a few extra tips), here are a few random offers sorted by price:
Signing a Windows EXE file
The documents required (by Sectigo, in my case) for obtaining an OV certificate are:
company's registration certificate
a photo of you holding your ID close to your face
a phone landline so they can call you for verification (it is actually a robot calling to give you a number, which you then have to enter into the browser).
The whole verification process (especially the phone part) took about 2 months, because it involved some kind of automatic calling that did not work on my line/phone.
I will soon post the number of downloads necessary to build reputation for your newly signed EXE file. At this point, I can tell you that 1000 downloads are not enough.

Avoiding data loss: suggested reading

I am about to work on an app which handles extremely valuable data. Any loss of this data for the user would be very costly, so I'm interested in finding out more about the best architecture design for our needs.
The user will be inputting this data in their iPhone each day. The alternative to using this app is carrying around a piece of paper with this sensitive information on it. So while I know we can be more secure than a piece of paper, I want to make sure we also cover the user stories like "I flushed my phone down the toilet" or "my son deleted the app, where's my data?"
A service like Dropbox comes to mind, but I wouldn't want to require our users to have a Dropbox account; the syncing architecture must be transparent to the user. iCloud is out because web and Android versions may follow.
Can anyone suggest either some good reading on this subject, or some good frameworks to look at? I expect to use a node.js backend, and while we are targeting iPhone first, Android will follow.
The data itself consists of 2 tables, each with a small number of fields, with a many-to-many relationship. A few new rows will be created by the user each day, but the data will be small and highly compressible.
Turns out this is an extremely difficult issue. In data assurance (this isn't yet a security-type situation, although it could become one because of the assurance aspect) there is ALWAYS a time element. As a simple example: what happens if your user has locally updated some piece of data, and just before you are able to fully push the data to some cloud service, he or she dumps the phone in the toilet? Even with a good signal for transmitting the data, there is time spent transferring it and time needed for the cloud server to respond saying the data arrived properly.
Generally in data assurance, you simply do the best you can. You will NEVER be able to solve all issues, as there is no data center, nor link to a data center, that is perfect; there is always a chance of data loss. Truly the best you can do is SYNC as soon as the data changes and, if the connection is lost, as soon as it comes alive again.
Now, for security. Security by itself does not create assurance. If the data is simply something the customer does not want to lose, and that is the only requirement, then security is unnecessary. If he or she is also worried about others getting their hands on the data, then you have to worry about data in transit (both up and down during syncing) and about the data on the device itself.

For the best potential security, encrypt the data locally on the device before pushing it over the cloud. There are many known attacks that can get at the data even when SSL or other transport protections are used. If you wish, locally encrypt a file and then still use SSL for SOME added security (at that point you will have doubly encrypted the data). You also want to sign the data, so that there is little chance of it being manipulated in transit or by the cloud server itself (if a hacker has compromised that server).

Generally, the way to protect the data while on the device is to have the user input a password, put some fairly strict rules around how passwords are formed, and limit how many tries you allow before disallowing further attempts for 30 minutes or so.
You may also wish to store the data locally in encrypted form. That way, if someone gets the device, they will still need the password before they can get at the data (unless, of course, they can crack the algorithm you use to generate the symmetric key from the password).
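As an illustration, here is a minimal sketch of encrypt-before-sync in Python, assuming the third-party cryptography package (pip install cryptography) and a password-derived key; the blob layout and iteration count are illustrative, not a recommendation:

```python
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def _derive_key(password: bytes, salt: bytes) -> bytes:
    # Stretch the user's password into a 256-bit symmetric key.
    return hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

def encrypt_record(password: bytes, plaintext: bytes) -> bytes:
    salt, nonce = os.urandom(16), os.urandom(12)
    ciphertext = AESGCM(_derive_key(password, salt)).encrypt(nonce, plaintext, None)
    return salt + nonce + ciphertext     # this opaque blob is what gets synced

def decrypt_record(password: bytes, blob: bytes) -> bytes:
    salt, nonce, ciphertext = blob[:16], blob[16:28], blob[28:]
    # GCM is authenticated encryption, so this also fails loudly if the blob
    # was tampered with in transit or on the server (the "sign the data" point).
    return AESGCM(_derive_key(password, salt)).decrypt(nonce, ciphertext, None)
```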
In terms of an online data service, you could use iCloud, etc. I am actually NOT a fan of anything cloud; I think it runs SO counter to enterprise / proprietary data needs that it isn't even funny, and it is almost laughable that so many phone / device manufacturers are going SO cloud-based. They are abandoning the big companies, as NO big company I know of wants to place its proprietary data on a cloud server that IT DOES NOT CONTROL. In any case, I would argue that as long as you have a good local encryption scheme before sending the data out, you should be OK.

From an assurance perspective, however, look at where the servers are located. If assurance of data is the prime concern, most larger IT setups like to have replicated data centers on opposing sides of the country or world; if an earthquake takes down the data center on one side of the country, it most likely will NOT simultaneously take down the one on the other side. If the data centers for iCloud or whatever else you find are essentially in one locale, consider syncing with one data center on the west coast and with a completely different data center (in this case, a different company) on the east coast.

This is all very high level; we could also talk about how you would implement this on an iPhone specifically, but I hope this at least begins to pave a path.

Store an encryption key in the Keychain during the application installation process

I need my application to use the client's phone number to generate a unique ID for my web service. Of course a phone number is unique, but it must be secured. So it can be implemented with symmetric encryption (asymmetric can come later, since it costs more resources), but I do not know where to store the encryption key.
1.
I do not know why, but it seems bad to store a key as a static field in the code. Maybe because it's too easy to read it from there, even without running the application.
2.
It seems better to store the key in the Keychain and fetch it from there on request. But to avoid #1, the key would have to be installed into the Keychain during the installation process. Is that possible? How would I do it?
3.
I do not know what certificates do. Are they helpful for this problem?
4.
Transferring the key from the server is also a bad idea, because it's very easy to sniff.
The way you solve the sniffing problem is that you communicate over HTTPS for your web service. NSURLConnection will do this easily, and all web service engines I know of handle HTTPS without trouble. This will get rid of many of your problems right away.
On which machine is the 100-1000x decrypt the bottleneck? Is your server so busy that it can't do an asym decryption? You should be doing this so infrequently on the phone that it should be irrelevant. I'm not saying asym is the answer here; only that its performance overhead shouldn't be the issue for securing a single string, decrypted once.
Your service requires SMS such that all users must provide their phone number? Are you trying to automate grabbing the phone number, or do you let the user enter it themselves? Automatically grabbing the phone number through the private APIs (or the non-private but undocumented configuration data) and sending that to a server is likely to run afoul of terms of service. This is a specific use-case Apple wants to protect the user from. You definitely need to be very clear in your UI that you are doing this and get explicit user permission.
Personally I'd authenticate as follows:
Server sends challenge byte
Client sends UUID, date, and hash(UUID+challenge+userPassword+obfuscationKey+date).
Server calculates same, makes sure date is in legal range (30-60s is good) and validates.
At this point I generally have the server generate a long, sparse, random session id which the client may use for the remainder of this "session" (anywhere from the next few minutes to the next year) rather than re-authenticating in every message.
ObfuscationKey is a secret key you hardcode into your program and server to make it harder for third parties to create bogus clients. It is not possible, period, not possible, to securely ensure that only your client can talk to your server. The obfuscationKey helps, however, especially on iPhone where reverse engineering is more difficult. Using UUID also helps because it is much less known to third-parties than phone number.
Note "userPassword" in there. The user should authenticate using something only the user knows. Neither the UUID nor the phone number is such a thing.
The system above, plus HTTPS, should be straightforward to implement (I've done it many times in many languages), have good performance, and be secure to an appropriate level for a broad range of "appropriate."
I don't think you're going to be able to do what you want securely with symmetric encryption. With asymmetric encryption you can send the public key without worrying about it too much (the only threat is someone substituting their own key for yours) and validate the encrypted unique ID on your server with the private key.

Verified channel to server from app on iPhone

I'm working on a game for the iPhone and would like it to be able to submit scores back to the server. Simple enough, but I want the scores to be verified to actually come from gameplay. With the (de facto) prohibition on real crypto under the export conditions, what would be the best way to get information back over a secure/verified channel?
All my thoughts lead back to an RSA-style digital signature algorithm, but I would prefer something less "crypto" to get past that export question.
Thanks!
Couldn't you just use a client certificate (signed by you) and establish an HTTPS connection to your server, which has been configured to only accept connections begun with a client certificate signed by you?
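As a sketch of the client side in Python, assuming the third-party requests package and hypothetical file paths (the server still has to be configured to require certificates signed by your CA):

```python
import requests  # pip install requests

resp = requests.post(
    "https://example.com/scores",        # hypothetical endpoint
    json={"score": 1200},
    cert=("client.crt", "client.key"),   # client certificate + private key
    verify="my_ca.pem",                  # trust only your own CA, if desired
)
resp.raise_for_status()
```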
To make a long story very short, you're allowed to export digital signature code with very few restrictions. To learn more, start at the BIS export FAQ.
You probably want to look at EAR 742.15(b)3, which covers the digital signature exemptions.
Of course, I Am Not A Lawyer, and the rules may have changed in the last year.
Using real crypto won't actually buy you anything here. You basically have the reverse of the typical DRM problem. In that case, you want to prevent people from decrypting content, but they have to decrypt it to watch it, so you have to give them the key anyway.
In your case, you want to prevent people from signing fake scores, but they have to be able to sign real scores, so you have to give them the key anyway.
All you need to do is make sure your scheme requires more effort to crack than the potential rewards. Since we're talking about a game leader board, the stakes are not that high. Make it so that someone using tcpdump won't figure it out too quickly, and you should be fine. If your server is smart enough to detect "experimentation" (a lot of failed submissions from one source) you will be safer than relying on any cryptographic algorithm.
Generate a random value, something fairly long, then tack the score onto the end, and maybe the name or something else static, then SHA-1/MD5 it, and pass both to the server; the server recomputes the hash over the same values and verifies that it equals the submitted hash.
Afterthought: if you want to make it harder to reverse engineer, multiply your random value by the numerical representation of the day (Monday=1, Tuesday=2, ...).
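A minimal sketch of this idea in Python, using SHA-256 rather than SHA-1/MD5 and folding the weekday into the hash instead of multiplying (both substitutions are illustrative). Note that without some secret baked into the app, as the other answers suggest, anyone who reads the client can still forge scores:

```python
import datetime, hashlib, secrets

def make_submission(score: int, name: str) -> dict:
    nonce = secrets.token_hex(16)                    # the long random value
    day = datetime.date.today().isoweekday()         # Monday=1 ... Sunday=7
    digest = hashlib.sha256(f"{nonce}{score}{name}{day}".encode()).hexdigest()
    return {"nonce": nonce, "score": score, "name": name, "hash": digest}

def server_check(msg: dict) -> bool:
    # Recompute the digest from the submitted fields and compare.
    day = datetime.date.today().isoweekday()
    expected = hashlib.sha256(
        f"{msg['nonce']}{msg['score']}{msg['name']}{day}".encode()).hexdigest()
    return expected == msg["hash"]
```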
One idea that might be Good Enough:
Let Secret1, Secret2, Secret3 be any random strings.
Let DeviceID be the iPhone's unique device ID.
Let Hash(Foo + Bar) mean I concatenate Foo and Bar and then compute a hash.
Then:
The first time the app talks to the server, it makes a request for a DevicePassword. iPhone sends: DeviceID, Hash(DeviceID + Secret1)
The server uses Secret1 to verify the request came from the app. If so, it generates a DevicePassword and saves the association between DeviceID and DevicePassword on the server.
The server replies: DevicePassword, Hash(DevicePassword + Secret2)
The app uses Secret2 to verify that the password came from the server. If so, it saves it.
To submit a score, iPhone sends: DeviceID, Score, Hash(Score + DevicePassword + Secret3)
The server verifies using Secret3 and the DevicePassword.
The advantage of the DevicePassword is that each device effectively has a unique secret; someone who didn't know that would find it harder to determine the secret by packet-sniffing the submitted scores.
Also, in normal cases the app should only request a DevicePassword once per install, so you could easily identify suspicious requests for a DevicePassword or simply limit it to once per day.
Disclaimer: This solution is off the top of my head, so I can't guarantee there isn't a major flaw in this scheme.
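For what it's worth, here is a minimal sketch of that exchange in Python, assuming SHA-256 as the hash and in-memory storage; the secrets shown are placeholders for the random strings the scheme names:

```python
import hashlib, secrets

SECRET1, SECRET2, SECRET3 = b"s1", b"s2", b"s3"  # placeholders, baked into the app
PASSWORDS = {}                                   # server side: DeviceID -> DevicePassword

def H(*parts: bytes) -> str:
    return hashlib.sha256(b"".join(parts)).hexdigest()

def request_password(device_id: bytes, proof: str):
    # Steps 1-3: verify Hash(DeviceID + Secret1), then mint and return the
    # DevicePassword along with Hash(DevicePassword + Secret2).
    if proof != H(device_id, SECRET1):
        raise ValueError("request did not come from the app")
    pw = PASSWORDS.setdefault(device_id, secrets.token_hex(16).encode())
    return pw, H(pw, SECRET2)                    # client checks this with Secret2

def check_score(device_id: bytes, score: int, proof: str) -> bool:
    # Steps 5-6: verify Hash(Score + DevicePassword + Secret3).
    pw = PASSWORDS.get(device_id)
    return pw is not None and proof == H(str(score).encode(), pw, SECRET3)
```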