Are there any advantages in signing an application? - certificate

I looked recently into signing my application. The price is AT LEAST one hundred euros/dollars per year for EV (anything less than EV seems pointless anyway).
My application uses a basic installer (self-extracting WinRar) that requires no admin password. But the drawback of this is that I cannot install the app in Program Files.
The real problem is that you will find plenty of resources telling you how to sign your app, but very few (if any) telling you whether there is any real advantage. For example: do regular PC users care when they install an app and Windows shows "Publisher: unknown", or do they just quickly hit the OK button to get the installation done as soon as possible?
Honestly, I don't think the user reads or cares about "unknown". What might stop him is the yellow color (instead of blue).
So, my question for those that already did code signing for their apps is: have you seen an improvement (downloads, installations, sales) after signing your app?
Should I invest any time/money/energy in this?
Update: It seems that having the app signed is not enough. After that, you have to keep fighting to improve your reputation factor, otherwise Microsoft SmartScreen might still pop up: https://mkaz.blog/code/code-signing-a-windows-application/
Signing a Windows EXE file

For those interested in prices (and a few extra tips), here are a few random offers sorted by price.
The documents required (by Sectigo, in my case) for obtaining an OV certificate are:
company's registration certificate
a photo of you holding your ID close to your face
a landline phone so they can call you for verification (it is actually a robot that calls to give you a number, which you then have to enter into the browser).
The whole verification process (especially the phone part) took about two months, because the automated calling they use did not work on my line/phone.
I will soon post the number of downloads necessary to build reputation for your newly signed EXE file. At this point, I can tell you that 1,000 downloads are not enough.

Related

How to stop antivirus false positives every time we re-release software?

Windows Defender and AVG/Avast pick up our software application as a virus/false positive every time we release. We have a code signing certificate and add a taggant as well.
Every time we release the software we have to go through the process of doing a false positive form on multiple AV vendors sites.
How can we get our company code signing cert marked as safe or avoid this time consuming false positive report process on each release?
Edit: Is there any premiere support we can pay for to have this done automatically?
Edit2: we actually had our certificate revoked due to "malware distribution" as a result of these false positives. It seems there is no recourse other than to buy another one.
A signing cert doesn't help most of the time; it's probably a coding pattern similar to one of the virus signatures in their databases. The best you can do is contact the AV vendors and ask them to whitelist you.
My recommendation is to contact the AV vendors and tell them your problem. Probably your software has some strings or patterns that trigger the AV's heuristics. You can try to find those strings in your code base, base64/XOR/encrypt them, and see what the AV does; that may help solve your problem.
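As a minimal sketch of that string-obfuscation idea (the single-byte key and the URL are made-up examples), a literal can be stored XOR-encoded and decoded only at run time, so the plain string never sits in the binary for a naive scanner to find:

```python
KEY = 0x5A  # arbitrary single-byte key, for illustration only

def xor_bytes(data: bytes, key: int = KEY) -> bytes:
    # XOR is its own inverse, so the same function encodes and decodes
    return bytes(b ^ key for b in data)

# The source keeps only the encoded form of the sensitive literal...
encoded = xor_bytes(b"http://update.example.com")
# ...and decodes it just before use.
plain = xor_bytes(encoded)
```

Note this only defeats signature matching on literal strings; behavioural heuristics (the actual download, the self-extraction, etc.) are unaffected.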
While it is certainly possible that your software shares some characteristics with known malware, I would guess that it is a "cloud" detection.
Cutting through the marketing speak, it basically means that (among other possible causes) your file is flagged as suspicious if it has not been seen on many other PCs.
Try removing anything that could trigger antivirus flags, such as self-extraction, UPX, file encryption, suspicious website requests, or suspicious behaviour.
Why remove these?
Self-extraction is flagged because it is suspicious behaviour (not something normal programs do).
UPX is flagged because some malware tries to hide itself by being compressed with UPX, forcing antiviruses to decompress it first.
File encryption may easily be detected as Riskware / EncoderTool / Ransomware.
Suspicious websites: avoid downloading files from strange URLs.
I had this problem with a program auto-update; an antivirus detected it as a TrojanDownloader.
If your program doesn't do any of these things, I can't help you more, as that is a problem the whole programmer community has.
I hope this helps.

Bittorrent sync approval process not working properly

I created a link to share a folder, deselecting the option that peers I invite must be approved on this device.
The other person used the link, and received a message that the "Sender needs to approve access to this folder based on these identity details".
My bittorrent sync window isn't showing me anything to indicate that someone is waiting on approval. I've never shared a folder via a link before (always just used keys directly on previous versions), so I have no idea how the program is supposed to prompt me for approval, and I can't find any documentation indicating how this prompt would be provided.
So there seem to be two problems here:
1. Even though I said the link doesn't require approval, they are being told that it does.
2. I don't have any way to approve it.
What's going on here? How do I fix this?
Thanks.
The most common cause of this is that one of the systems has its clock too far out of sync; resetting your computer's time using an online time server usually resolves it.

How Do Hardware Token Devices work? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
Recently, my bank sent me a tiny device that generates a unique code that must be used when performing online transactions. All the device does is generate this code when I press a particular white button; it doesn't look like it connects to a remote server or anything of the sort.
I did some research and ended up in cryptography with something called a hash function, but I still don't get it.
My Questions
How does my bank's servers know the code generated by this device is correct?
Since it just generates five random digits every 30 seconds, why won't the server authenticate a random number I have also decided to use?
This has very little to do with hash functions. A cryptographic hash function may be part of the implementation, but it's not required.
Actually, it generates the digits on a time-based interval. If I press the button, it generates the digits; if I press it again after about 25 seconds, the digits change, but not if I press it again immediately after.
There's your hint. It's a time-based pseudo-random or cryptographic algorithm: based on the time, there is a code. The dongle and the server both know – or rather, can compute – the code for every window from a shared secret; the dongle does not connect to a remote server. The server will probably also accept one or two of the most recent codes, to cover the case where the code you entered expired while it was in transit.
(Although my recent experience with Amazon Web Service multi-factor authentication has definitely resulted in login failures within 5 seconds of a code being displayed to me. In other words, some vendors are very strict with their timing windows. As always, it's a trade-off between security and usability.)
The abbreviations CodesInChaos mentions are Time-based One-Time Password (TOTP) and HMAC-based One-Time Password (HOTP), two algorithms commonly used in two-factor authentication.
Wikipedia has this to say about the RSA SecurID, a particular brand of two-factor-authentication dongle.
The RSA SecurID authentication mechanism consists of a "token" — either hardware (e.g. a USB dongle) or software (a soft token) — which is assigned to a computer user and which generates an authentication code at fixed intervals (usually 60 seconds) using a built-in clock and the card's factory-encoded random key (known as the "seed"). The seed is different for each token, and is loaded into the corresponding RSA SecurID server (RSA Authentication Manager, formerly ACE/Server) as the tokens are purchased.
I chose this article because it has a reasonable, physical description; the higher-level articles focus on the theoretical over the physical implementation.
The article also confirms that you need to keep the token's seed secret; anyone who knows it can compute the codes as easily as you can and impersonate your logins.
The token hardware is designed to be tamper-resistant to deter reverse engineering. When software implementations of the same algorithm ("software tokens") appeared on the market, public code has been developed by the security community allowing a user to emulate RSA SecurID in software, but only if they have access to a current RSA SecurID code, and the original 64-bit RSA SecurID seed file introduced to the server.
However, since the verifying server has to have foreknowledge of the tokens, the two-factor secrets are vulnerable to attacks on the source as well. SecurID was the victim of a high-profile theft that targeted their own servers and eventually led to secondary incursions on their clients' servers as well.
Finally, there is more information available on the security.stackexchange sister-site under the multi-factor tag, and also on this site under the two-factor-authentication tag.
I just opened an old security device and brainstormed about it.
I have an answer which is related with elapsed time:
Each of these security devices has a quartz crystal inside, and from the moment that crystal is powered its life cycle starts; no two devices start at exactly the same instant, which is why no two generate the same number at the same moment. So whenever you push the button, it generates a unique number computed from the elapsed time (probably at high precision, given the 6 digits shown on my device at 15-second intervals). But how does the bank server know my generated number?
Answer to bank server:
Probably the bank counts the elapsed time after you activate the device, because you have to activate it on first use with a number generated by your own device. So, with exact timing, the bank server knows the input number has to be xxx-xxx, and that it will change as time elapses.
I am sure the battery powers the quartz crystal for its whole life cycle, even if you never use the device. If the battery is removed, the device fails to generate numbers, because the crystal is no longer powered and time can no longer be counted; it can never generate the same sequence of numbers again.

Password login for ios app

I am currently developing an app for a company in a very competitive field. I have finished all of the features they requested except one: protecting the app somehow so that competing companies cannot download and use it. I thought I could set up a UIViewController with a password field that checks against some kind of database, but I'm not sure how to do the checking against a database, nor how practical it is. I was hoping to get ideas on how to do this so other companies couldn't steal and use this app without a password, ideally something that changes every 30 days or so, like an activation code.
Review the WWDC 2012 video "Building and Distributing Custom B2B Apps for iOS". I'm unsure whether your app falls into this B2B classification, but from your description it seems it might.
What I ended up doing (in case anyone needs a reference) was setting up a server with an SQL table that holds pass codes. Since Apple does not allow any sort of system that requires you to "buy the app from outside the App Store", I made a dummy username field (shame on me) that takes any value you like, and then requires a pass code that matches.
Once the pass code is authenticated against the web server via a JSON request (there are plenty of APIs to do this with), the app sends the user to the first screen and stores a value in a plist with how many days of use the user has left. Whenever the user opens the app, it checks whether the date differs from the last login date (saved in the same plist file); if so, it calculates the difference and deducts that many days. When the count reaches 0, it sends the user back to the pass-code authentication screen.
A bit convoluted, but an effective way around Apple's restriction on this sort of pass-code system. Thanks for the answers. Unfortunately, enterprise distribution did not work for this company: they needed to distribute the app to as many third-party members as they wanted, without worrying about those members leaving for other suppliers, and remote management of the app (i.e. the ability to remote-uninstall) was also not an option. Hope this helps someone someday!
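The countdown logic described above can be sketched roughly like this (a JSON file stands in for the iOS plist; the file name and field names are made up for illustration):

```python
import datetime
import json
import os

STATE_FILE = "license_state.json"  # stand-in for the app's plist

def activate(days: int, today: datetime.date) -> None:
    # Called once the pass code has been verified against the server
    state = {"days_left": days, "last_login": today.isoformat()}
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def check_access(today: datetime.date) -> bool:
    # True while the user still has days of use left
    if not os.path.exists(STATE_FILE):
        return False  # never activated: show the pass-code screen
    with open(STATE_FILE) as f:
        state = json.load(f)
    last = datetime.date.fromisoformat(state["last_login"])
    if today != last:
        # Deduct one day of use per calendar day elapsed since last login
        state["days_left"] -= (today - last).days
        state["last_login"] = today.isoformat()
        with open(STATE_FILE, "w") as f:
            json.dump(state, f)
    return state["days_left"] > 0
```

Note that anything stored client-side like this can be reset by a determined user (e.g. by deleting the state file and re-entering a pass code), so it deters casual sharing rather than providing hard protection.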

Verified channel to server from app on iPhone

I'm working on a game for the iPhone and would like it to be able to submit scores back to the server. Simple enough, but I want the scores to be verified to actually come from a game-play. With the (de facto) prohibition on real crypto under the export conditions, what would be the best way to get information back in a secure/verified channel?
All my thoughts lead back to an RSA-style digital signature algorithm, but would prefer something less "crypto" to get past that export question.
Thanks!
Couldn't you just use a client certificate (signed by you) and establish an HTTPS connection to your server, which has been configured to only accept connections begun with a client certificate signed by you?
To make a long story very short, you're allowed to export digital signature code with very few restrictions. To learn more, start at the BIS export FAQ.
You probably want to look at EAR 742.15(b)3, which covers the digital signature exemptions.
Of course, I Am Not A Lawyer, and the rules may have changed in the last year.
Using real crypto won't actually buy you anything here. You basically have the reverse of the typical DRM problem. In that case, you want to prevent people from decrypting content, but they have to decrypt it to watch it, so you have to give them the key anyway.
In your case, you want to prevent people from signing fake scores, but they have to be able to sign real scores, so you have to give them the key anyway.
All you need to do is make sure your scheme requires more effort to crack than the potential rewards. Since we're talking about a game leader board, the stakes are not that high. Make it so that someone using tcpdump won't figure it out too quickly, and you should be fine. If your server is smart enough to detect "experimentation" (a lot of failed submissions from one source) you will be safer than relying on any cryptographic algorithm.
Generate a fairly long random value, tack the score onto the end (and maybe the name or something else static), then SHA-1/MD5 it, and pass both the random value and the hash to the server; the server re-computes the hash and verifies that it matches.
After-thought: if you want to make it harder to reverse engineer, multiply your random value by the numerical representation of the day (Monday = 1, Tuesday = 2, ...).
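A literal reading of that suggestion, as a sketch (function and field names are illustrative; the day-multiplier twist is omitted):

```python
import hashlib
import secrets

def sign_score(score: int, name: str):
    # Random salt, plus the score and a static field, hashed together
    nonce = secrets.token_hex(16)
    digest = hashlib.sha1(f"{nonce}{score}{name}".encode()).hexdigest()
    return nonce, score, name, digest  # everything is sent to the server

def server_verify(nonce: str, score: int, name: str, digest: str) -> bool:
    # The server re-computes the hash over the submitted fields
    return digest == hashlib.sha1(f"{nonce}{score}{name}".encode()).hexdigest()
```

Since everything needed to produce a valid hash ships in the client, this is obfuscation rather than security, which is exactly the trade-off the answer above argues is acceptable for a leader board.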
One idea that might be Good Enough:
Let Secret1, Secret2, Secret3 be any random strings.
Let DeviceID be the iPhone's unique device ID.
Let Hash(Foo + Bar) mean I concatenate Foo and Bar and then compute a hash.
Then:
The first time the app talks to the server, it makes a request for a DevicePassword. iPhone sends: DeviceID, Hash(DeviceID + Secret1)
The server uses Secret1 to verify the request came from the app. If so, it generates a DevicePassword and saves the association between DeviceID and DevicePassword on the server.
The server replies: DevicePassword, Hash(DevicePassword + Secret2)
The app uses Secret2 to verify that the password came from the server. If so, it saves it.
To submit a score, iPhone sends: DeviceID, Score, Hash(Score + DevicePassword + Secret3)
The server verifies using Secret3 and the DevicePassword.
The advantage of the DevicePassword is that each device effectively has a unique secret, which makes it harder for an attacker to determine the secret by packet-sniffing the submitted scores.
Also, in normal cases the app should only request a DevicePassword once per install, so you could easily identify suspicious requests for a DevicePassword or simply limit it to once per day.
Disclaimer: This solution is off the top of my head, so I can't guarantee there isn't a major flaw in this scheme.
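A rough sketch of the exchange described above, with client and server collapsed into one file (the secrets and the server-side store are placeholders; a real implementation would at least use HMAC rather than hashing a bare concatenation):

```python
import hashlib
import secrets

# Placeholder shared secrets baked into the app binary
SECRET1, SECRET2, SECRET3 = b"secret-one", b"secret-two", b"secret-three"

def h(*parts: bytes) -> str:
    # Hash(Foo + Bar) from the scheme: concatenate, then hash
    return hashlib.sha256(b"".join(parts)).hexdigest()

device_passwords = {}  # server state: DeviceID -> DevicePassword

def app_request(device_id: bytes):
    # Step 1: app asks for a DevicePassword, proving knowledge of Secret1
    return device_id, h(device_id, SECRET1)

def server_issue(device_id: bytes, tag: str):
    # Step 2: server verifies the request came from the app, then issues
    if tag != h(device_id, SECRET1):
        raise ValueError("request not from the app")
    pw = secrets.token_hex(16).encode()
    device_passwords[device_id] = pw
    return pw, h(pw, SECRET2)

def app_accept(pw: bytes, tag: str) -> bool:
    # Step 3: app verifies the password came from the server via Secret2
    return tag == h(pw, SECRET2)

def app_submit(device_id: bytes, pw: bytes, score: int):
    # Step 4: app submits a score tagged with the password and Secret3
    return device_id, score, h(str(score).encode(), pw, SECRET3)

def server_check(device_id: bytes, score: int, tag: str) -> bool:
    # Step 5: server re-computes the tag from its stored DevicePassword
    pw = device_passwords[device_id]
    return tag == h(str(score).encode(), pw, SECRET3)
```

As the disclaimer above says, this is off-the-cuff: anyone who extracts the three secrets from the binary can still forge scores, so the scheme only raises the effort bar, in line with the earlier answers.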