I have a PowerShell script (though I think these considerations extend to any script that requires a runtime to interpret and execute it) whose functionality I now need to expose to a web application front end as a REST API. I've been asked to call the script directly from the web method. Although that's technically feasible, having a web API method start a shell/process to execute the script and redirect stdin/stdout/stderr looks like very bad practice to me. Is there any specific security risk in doing something like this?
Reading this question brings to mind how many of the OWASP Top Ten Security Vulnerabilities it would expose your site to.
Injection Flaws - This is definitely a high risk. There are ways to remediate it, of course. Parameterizing all input with strongly-typed dates and numbers instead of strings is one method that can be used, but it may not fit with your business case. You should never allow user-provided code to be executed, but if you are accepting strings as input and running a script against that input, it becomes very difficult to prevent arbitrary code execution.
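To make the parameterization idea concrete, here's a minimal sketch (Swift is used purely for illustration; the interpreter path, script path, and parameter names are all invented) of invoking a script with strongly-typed arguments passed as discrete array elements, so user input is never re-parsed by a shell:

```swift
import Foundation

// Illustrative sketch only: the interpreter path, script path, and
// parameter names are invented. The point is that parameters arrive as
// typed values (Date, Int) and are passed as discrete argv elements,
// so nothing the user sends is ever re-parsed by a shell.
func runReport(day: Date, count: Int) throws -> String {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/pwsh")
    process.arguments = ["-File", "/opt/scripts/report.ps1",
                         "-Day", ISO8601DateFormatter().string(from: day),
                         "-Count", String(count)]
    let stdout = Pipe()
    process.standardOutput = stdout
    try process.run()
    process.waitUntilExit()
    let data = stdout.fileHandleForReading.readDataToEndOfFile()
    return String(data: data, encoding: .utf8) ?? ""
}
```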
Broken Authentication - possibly vulnerable. If you force a user to authenticate before reaching your script (you probably should), there is a chance that the user reuses their credentials elsewhere and exposes those credentials to a brute force attack. Do you lock out accounts after too many tries? Do you have two-factor authentication? Do you allow weak passwords? These are all considerations when you introduce a new authentication mechanism.
Sensitive data exposure - likely vulnerable, depending on your script. Does the script allow reading files and returning their contents? If not now, will it do so in the future? Even if it's never designed to do so, combined with other exploits the script might be able to read a file from a path that's outside the web directory. It's very difficult to prevent directory traversal exploits that would allow a malicious user access to your server, or even the entire network. Compiled code and the web server prevent this in many cases.
XML External Entities - possibly vulnerable, depending on your requirements. If you allow user-provided XML, the bad guy can inject other files and wreak havoc. This is easier to trap when you're using standard web tools.
Broken Access Control - definitely vulnerable. A Web API application can enforce user controls and set permission levels in a C# controller. Exceptions are handled with HTTP status codes that indicate the request was not allowed. In contrast, Powershell executes within the security context of the logged-in user, and allows system-level changes even if not running escalated. If an injection flaw is exploited, the code would be executed in the web server's security context, not the user's. You may be surprised how much the IIS_USER (or other Application Pool service account) can do. For one, if the bad guy is executing in the context of a service account, they might be able to bring down your whole site with a single request by locking out that account or changing its password - a task that's much easier with a Powershell script than with compiled C# code.
Security Misconfiguration - likely vulnerable. A running script would require its own security configuration outside whatever framework you are using for the Web API. Are you ready to re-implement something like OAuth claims or ACLs?
Cross-Site Scripting - likely vulnerable. Are you echoing the script output? If you're not sanitizing input and output, the script could echo some JavaScript that sends a user's cookie content to a malicious server, giving them access to all the user's resources. Cross-site request forgery is also a risk if input is not validated.
Insecure Deserialization - Probably not vulnerable.
Using Components with Known Vulnerabilities - greatly increased vulnerability compared to compiled code. Powershell grants access to a whole set of libraries that would otherwise need explicit references in a compiled application.
Insufficient Logging & Monitoring - likely vulnerable. IIS logs requests by default, but Powershell doesn't log anything unless you explicitly write to a file or start a transcript. Neither method is designed for concurrency, and both may introduce performance or functional problems when multiple requests write to shared files.
In short, 9 out of the Top 10 vulnerabilities may affect this implementation. I would hope that would be enough to prevent you from making your script public, at the very least. Basically, the problem is that you're using the tool (Powershell) for a purpose it wasn't intended to fulfill.
Is there any way we can trigger the addition of a registry key for each user that uses the application?
Right now it seems that whatever value I add somewhere in HKEY_CURRENT_USER will only be added for the user that triggers the installation.
It is not possible to enumerate users. In a large network, there may be very many users and it may not be possible to say which of those are "local" users.
Is there a way under Citrix for my application to make a call to the Citrix host to find out how many copies of my application are presently running? We want to limit this in our license and we need to have a way to verify it in the code.
No, Citrix XenApp (which is their best-known product and probably the one you are asking about) does not offer any APIs or services that can be used for application license checking.
The closest you could get via Citrix is to use the Powershell SDK and call Get-XASessionProcess to get a process list. The problem with this approach is that you need to be a Citrix admin, and it's a fairly roundabout way of doing this.
If I wanted to implement a simple concurrent limit for license enforcement, I would look at two options:
Implement a simple web-service somewhere that my app talks to, to get a license (a sketch of this follows below).
Create a simple Windows service that tracks processes to maintain a count of concurrent instances of your app. When the configured license count is exceeded you could set a flag in a shared memory section in the global namespace. Then in your app you check this flag at startup and exit immediately when it is set.
You could track processes using WMI, e.g.
http://weblogs.asp.net/whaggard/archive/2006/02/11/438006.aspx
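To illustrate option 1, here's a minimal sketch of the client side (Swift purely for illustration; the endpoint and status-code convention are hypothetical):

```swift
import Foundation

// Illustrative sketch of option 1 (endpoint and status codes are
// hypothetical): at startup the app asks a license server for a seat;
// the server counts active leases and refuses past the purchased limit.
func acquireLicenseSeat(completion: @escaping (Bool) -> Void) {
    let url = URL(string: "https://licenses.example.com/api/lease")!  // placeholder
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.httpBody = "product=myapp&host=\(ProcessInfo.processInfo.hostName)"
        .data(using: .utf8)
    URLSession.shared.dataTask(with: request) { _, response, _ in
        let granted = (response as? HTTPURLResponse)?.statusCode == 200
        completion(granted)  // exit the app when no seat is granted
    }.resume()
}
```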
I develop an app for iPhone / iPod Touch which has to have access to a MySQL database. I wrote a PHP API which I can call from the iPhone app.
In the database I store sensitive data which I want to encrypt. I think I will use AES_ENCRYPT. My problem is where to store the key.
It'd be great if you have any ideas on where to store the key used to encrypt/decrypt so that it cannot be seen by anyone else, e.g. hackers.
In general:
Don't keep your key in a part of the server that the web server has direct access to. For example, if your site is in /var/www/home, don't put your key in there. Put it someplace outside the web server's part of the tree.
Make sure that the permissions on the folder containing your key are correctly set. Your PHP app needs to have READ access only, NOT write or execute, on that folder (and the key file). (See the sketch after this list.)
Make sure the server itself has a good password (long, lots of random numbers, letters, and symbols).
Make sure the server is protected by a properly configured firewall, and is kept up to date with the most recent security patches.
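As a sketch of the first two points (Swift for illustration; the path is a placeholder), the app would load the key from outside the web root and refuse to start if the file's permissions are too open:

```swift
import Foundation

enum KeyError: Error { case permissionsTooOpen }

// Minimal sketch of the first two points: load the key from a path outside
// the web root and refuse to run if group/other bits are set on the file.
// The path is a placeholder.
func loadKey() throws -> Data {
    let path = "/etc/myapp/secret.key"   // deliberately outside /var/www
    let attrs = try FileManager.default.attributesOfItem(atPath: path)
    if let perms = attrs[.posixPermissions] as? NSNumber,
       perms.intValue & 0o077 != 0 {
        throw KeyError.permissionsTooOpen  // expect chmod 400
    }
    return try Data(contentsOf: URL(fileURLWithPath: path))
}
```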
As for trying to keep the key and the data separate -- this is a perennial problem for which there is no very good solution. The simple fact of the matter is that your application has to have access to the key. Either that means forcing everyone who's going to use the app to memorize the key -- which is likely to lead to sticky notes on monitors in plain view -- or else it has to live somewhere that the app can find it, either on the same server or another.
Let's say I need to access a web service from an iPhone app. This web service requires clients to digitally sign HTTP requests in order to prove that the app "knows" a shared secret; a client key. The request signature is stored in a HTTP header and the request is simply sent over HTTP (not HTTPS).
This key must stay secret at all times yet needs to be used by the iPhone app.
So, how would you securely store this key given that you've always been told to never store anything sensitive on the client side?
The average user (99% of users) will happily just use the application. But there will be somebody (an adversary?) who wants that secret client key in order to harm the service or the key's owner by impersonation. Such a person might jailbreak their phone, get access to the binary, run 'strings' or a hex editor, and poke around. Thus, just storing the key in the source code is a terrible idea.
Another idea is storing the key in code -- not as a string literal, but as an NSMutableArray that's created from byte literals.
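For illustration, the byte-literal idea looks roughly like this in Swift (the bytes and XOR mask are placeholders, not a real key):

```swift
import Foundation

// Sketch of the byte-literal idea: the key never appears as a contiguous
// string in the binary; it is assembled (and lightly XOR-masked) at run
// time. The bytes and the 0x5a mask are placeholders, not a real key.
let masked: [UInt8] = [0x1b, 0x3e, 0x2a, 0x55, 0x7f, 0x02, 0x44, 0x69]
let key = Data(masked.map { $0 ^ 0x5a })
```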
One can use the Keychain, but since an iPhone app never has to supply a password to store things in the Keychain, I'm wary that someone with access to the app's sandbox will be able to simply look at or trivially decode items therein.
EDIT - so I read this about the Keychain: "In iPhone OS, an application always has access to its own keychain items and does not have access to any other application’s items. The system generates its own password for the keychain, and stores the key on the device in such a way that it is not accessible to any application."
So perhaps this is the best place to store the key.... If so, how do I ship with the key pre-entered into the app's keychain? Is that possible? Else, how could you add the key on first launch without the key being in the source code? Hmm..
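For reference, adding the key on first launch would look something like this minimal Security-framework sketch (service and account names are placeholders):

```swift
import Foundation
import Security

// Minimal sketch: store the (server-delivered or runtime-deobfuscated)
// secret into the app's own keychain on first launch. Service and account
// names are placeholders.
func storeSecret(_ secret: Data) -> Bool {
    let base: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.myapp",
        kSecAttrAccount as String: "client-key"
    ]
    SecItemDelete(base as CFDictionary)   // replace any stale copy
    var add = base
    add[kSecValueData as String] = secret
    return SecItemAdd(add as CFDictionary, nil) == errSecSuccess
}
```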
EDIT - Filed bug report # 6584858 at http://bugreport.apple.com
The goal is, ultimately, to restrict access of the web service to authorized users, right? Very easy if you control the web service (if you don't -- wrap it in a web service which you do control).
1) Create a public/private key pair. The private key goes on the web service server, which is put in a dungeon and guarded by a dragon. The public key goes on the phone. If someone is able to read the public key, this is not a problem.
2) Have each copy of the application generate a unique identifier. How you do this is up to you. For example, you could build it into the executable on download (is this possible for iPhone apps?). You could use the phone's GUID, assuming they have a way of calculating one. You could also redo this per session if you really wanted.
3) Use the public key to encrypt "My unique identifier is $FOO and I approved this message". Submit that with every request to the web service (see the sketch after this list).
4) The web service decrypts each request, bouncing any which don't contain a valid identifier. You can do as much or as little work as you want here: keep a whitelist/blacklist, monitor usage on a per-identifier basis and investigate suspicious behavior, etc.
5) Since the unique identifier now never gets sent over the wire in the clear, the only way to compromise it is to have physical access to the phone. If they have physical access to the phone, you lose control of any data anywhere on the phone. Always. Can't be helped. That is why we built the system such that compromising one phone never compromises more than one account.
6) Build business processes to accommodate the need to a) remove access from a user who is abusing it and b) restore access to a user whose phone has been physically compromised (this is going to be very, very infrequent unless the user is the adversary).
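Here's a minimal sketch of step 3 (Swift's Security framework; it assumes you've already loaded the shipped public key, e.g. via SecKeyCreateWithData):

```swift
import Foundation
import Security

// Sketch of step 3: encrypt the per-install identifier with the service's
// public key so only the server (holding the private key) can read it.
// Assumes `publicKey` was created elsewhere, e.g. from DER bytes shipped
// with the app via SecKeyCreateWithData.
func sealedIdentifier(_ identifier: String, publicKey: SecKey) -> Data? {
    let message = Data("My unique identifier is \(identifier) and I approved this message".utf8)
    var error: Unmanaged<CFError>?
    return SecKeyCreateEncryptedData(publicKey,
                                     .rsaEncryptionOAEPSHA256,
                                     message as CFData,
                                     &error) as Data?
}
```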
The simple answer is that as things stand today it's just not possible to keep secrets on the iPhone. A jailbroken iPhone is just a general-purpose computer that fits in your hand. There's no trusted platform hardware that you can access. The user can spoof anything you can imagine using to uniquely identify a given device. The user can inject code into your process to do things like inspect the keychain. (Search for MobileSubstrate to see what I mean.) Sorry, you're screwed.
One ray of light in this situation is in-app purchase receipts. If you sell an item in your app using in-app purchase, you get a receipt that's crypto-signed and can be verified with Apple on demand. Even though you can't keep the receipt secret, it can be traced (by Apple, not you) to a specific purchase, which might discourage pirates from sharing it. You can also throttle access to your server on a per-receipt basis to prevent your server resources from being drained by pirates.
UAObfuscatedString could be a solution to your problem. From the docs:
When you write code that has a string constant in it, this string is saved in the binary in clear text. A hacker could potentially discover exploits or change the string to affect your app's behavior. UAObfuscatedString only ever stores single characters in the binary, then combines them at runtime to produce your string. It is highly unlikely that these single letters will be discoverable in the binary as they will be interjected at random places in the compiled code. Thus, they appear to be randomized code to anyone trying to extract strings.
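A conceptual Swift analogue of that idea (the real UAObfuscatedString builds the string through chained method calls, one per character; this just shows the principle):

```swift
// Conceptual Swift analogue (the real UAObfuscatedString chains one method
// call per character): rebuild the secret from byte values at runtime so
// no full string literal lands in the binary. Bytes are placeholders.
let codes: [UInt8] = [115, 51, 99, 114, 51, 116]   // spells "s3cr3t"
let secret = String(bytes: codes, encoding: .utf8)!
```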
If you can bear to be iPhone OS 3.0-only, you may want to look at push notifications. I can't go into the specifics, but you can deliver a payload to Apple's servers along with the notification itself. When they accept the alert (or if your app is running), then some part of your code is called and the keychain item is stored. At this point, that is the only route to securely storing a secret on an iPhone that I can think of.
I had the same question and spent a lot of time poking around for an answer. The issue is a chicken-and-egg one: how to pre-populate the keychain with data needed by your app.
In any case, I found a technique that will at least make it harder for a jailbreaker to uncover the information - they'll have to disassemble your code to find out what you did to mask the info:
String Obfuscation (if the link breaks search for "Obfuscate / Encrypt a String (NSString)")
Essentially the string is obfuscated before placed in the app, then you unobfuscate it using code.
It's better than doing nothing.
EDIT: I actually used this in an app. I put a base coding string into the Info.plist, then did several operations on it in code -- rot13, rotate/invert bytes, etc. The final processed string was used to decode the obfuscated string. Now, the three-letter agencies could for sure break this -- but at a huge cost of many hours decoding the binary.
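A sketch of that kind of pipeline (Swift for illustration; the seed, bytes, and combining steps are placeholders, not what the app actually shipped):

```swift
import Foundation

// Placeholder pipeline: a seed string from Info.plist is ROT13'd, its
// bytes are reversed, and the result is used as an XOR pad to decode
// the obfuscated secret.
func rot13(_ s: String) -> String {
    return String(s.unicodeScalars.map { c -> Character in
        switch c.value {
        case 65...90:  return Character(Unicode.Scalar((c.value - 65 + 13) % 26 + 65)!)
        case 97...122: return Character(Unicode.Scalar((c.value - 97 + 13) % 26 + 97)!)
        default:       return Character(c)
        }
    })
}

let seed = rot13("Frperg")                 // -> "Secret" (placeholder seed)
let pad  = Array(seed.utf8.reversed())     // the rotate/invert step
let obfuscated: [UInt8] = [0x27, 0x10, 0x06, 0x17, 0x01, 0x36]  // placeholder
let key = Data(zip(obfuscated, pad).map { $0.0 ^ $0.1 })
```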
I was going to say that this is the best technique I've come across, but I just read Kiran's post on UAObfuscatedString (different answer), which is a completely different way to obfuscate. It has the benefit of no strings saved anywhere in the app - each letter is turned into a method call. The selectors will show up as strings, so a hacker can quickly tell that your class used that technique though.
I think that this similar question, and my answer, may be relevant to your case too. In a nutshell, there was some talk of a trusted platform module being present in an iPhone. This would allow your service to trust an iPhone, even in the hands of an attacker. However, it looks like using the keychain is your best bet.
Did you consider/try the Push Notification suggestion, for initially transmitting the secret to the app & keychain? Or end up finding some other method to achieve this?
I'm going to have my iPhone app upload images to Amazon S3. Instead of putting the AWS credentials in the app, I am going to have the app phone home to my server for the URI and headers to use in the S3 upload request. My server will generate the S3 URI, proper signatures, etc. I can then implement a tighter, more specific security model on my app's webservice than AWS offers by itself, and not give away my AWS keys to anyone with a jailbroken iPhone.
But there still has to be some trust (credentials or otherwise) given to the app, and that trust can be stolen. All you can ever do is limit the damage done if someone jailbreaks an iPhone and steals whatever credentials are in the app. The more powerful those credentials are, the worse things are. Ways to limit the power of credentials include (a sketch of the phone-home flow follows the list):
avoid global credentials. make them per-user/application
avoid permanent credentials. make them temporary if possible
avoid global permissions. give them only the permissions they need. for instance, write permissions might be broken down into insert, overwrite, delete, write against resource group A or B, etc, and read could be broken into read named resources, read a list of all existing resources, read resource groups A or B, etc.
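Here's a minimal sketch of the phone-home flow described above (the endpoint and response format are hypothetical):

```swift
import Foundation

// Sketch of the phone-home flow: the app asks our server (hypothetical
// endpoint) for a short-lived, pre-signed S3 upload URL scoped to this
// user, then PUTs the image there. No AWS keys ever ship in the app.
func upload(image: Data, completion: @escaping (Bool) -> Void) {
    let home = URL(string: "https://api.example.com/sign-upload")!  // placeholder
    URLSession.shared.dataTask(with: home) { data, _, _ in
        guard let data = data,
              let signed = String(data: data, encoding: .utf8),
              let s3URL = URL(string: signed) else { return completion(false) }
        var put = URLRequest(url: s3URL)
        put.httpMethod = "PUT"
        put.httpBody = image
        URLSession.shared.dataTask(with: put) { _, response, _ in
            completion((response as? HTTPURLResponse)?.statusCode == 200)
        }.resume()
    }.resume()
}
```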
I would recommend creating a key at run time if possible. This way, if the key were captured during a particular session, it would be worthless once the session ends. They could still pull the key from memory if they are smart enough, but it wouldn't matter, since the key would become invalid after a period of time.
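A minimal sketch of generating such a session key on the device (how the server learns or issues the key per session is out of scope here):

```swift
import Foundation
import Security

// Minimal sketch: generate a fresh 256-bit session key on the device with
// the system CSPRNG. How the server agrees on or issues the key per
// session (e.g. over an initial TLS exchange) is not shown.
func makeSessionKey() -> Data? {
    var bytes = [UInt8](repeating: 0, count: 32)
    let status = SecRandomCopyBytes(kSecRandomDefault, bytes.count, &bytes)
    return status == errSecSuccess ? Data(bytes) : nil
}
```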
Sounds wonky. Would use HTTPS and maybe an encryption package to handle the key.
I think CommonCrypto is available for iPhone.
EDIT: Still sounds wonky. Why would anyone pass a secret key in an HTTP header? Anyone who traces your network traffic (via a logging wifi router, for instance) would see it.
There are well-established security methods for encrypting message traffic...why not use them rather than invent what is basically a trivially flawed system?
EDIT II: Ah, I see. I would go ahead and use the Keychain -- I think it is intended for just these kinds of cases. I missed that you were generating the request using the key. I would still use HTTPS if I could, though, since that way you don't risk people deducing your key-generation scheme via inspection of enough signatures.
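For completeness, here's a minimal sketch of the header-signing scheme under discussion, using CryptoKit as a modern stand-in for the CommonCrypto HMAC routines mentioned above (the header name is a placeholder):

```swift
import Foundation
import CryptoKit  // modern stand-in for CommonCrypto's HMAC routines

// Sketch of the signing scheme under discussion: HMAC the request body
// with the shared secret and put the tag in a header; the secret itself
// never travels over the wire. The header name is a placeholder.
func sign(_ request: inout URLRequest, secret: SymmetricKey) {
    let body = request.httpBody ?? Data()
    let tag = HMAC<SHA256>.authenticationCode(for: body, using: secret)
    let hex = Data(tag).map { String(format: "%02x", $0) }.joined()
    request.setValue(hex, forHTTPHeaderField: "X-Signature")
}
```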