Reading RMS data of another MIDlet

I want to read RMS data created by one MIDlet from a second MIDlet.
The target devices are S60. Is it possible?

It is possible. Open a record store associated with the named MIDlet suite. The MIDlet suite is identified by MIDlet vendor and MIDlet name. Access is granted only if the authorization mode of the RecordStore allows access by the current MIDlet suite.
Access is limited by the authorization mode set when the record store was created:
AUTHMODE_PRIVATE - Succeeds only if vendorName and suiteName identify the current MIDlet suite; this case behaves identically to openRecordStore(recordStoreName, createIfNecessary).
AUTHMODE_ANY - Always succeeds. Note that this makes your recordStore accessible by any other MIDlet on the device. This could have privacy and security issues depending on the data being shared. Please use carefully.
Untrusted MIDlet suites are allowed to share data, but this is not recommended. The authenticity of the origin of untrusted MIDlet suites cannot be verified, so shared data may be used unscrupulously.
See these links for reference:
Sharing Data Between MIDlet Suites
Advanced Programming

Windows Store app sideloading

I have associated my Windows Store app with the Windows Store and got an "Appname_StoreKey.pfx" key file, and I deleted my temporary key. If I create my app package using this key, will my app expire after a month?
The temporary key expires in a month, but this StoreKey says it will expire after a year. Please explain any cons of this procedure.
Due to a company requirement I can't submit the app to the store.
The test certificates will expire one year (not one month) after they are created. Technically you can use these to publish your app, but this is not recommended.
The same is true if you get a [appname]_StoreKey.pfx after associating your app with the store.
You can view the expiration date in Visual Studio.
This certificate is used to sign your code. The app package signature ensures that the package and its contents haven't been modified after they were signed.
When you're not deploying through the store, the disadvantage is that the certificate is not trusted during deployment. This is the case whether you use the test certificate or the StoreKey.pfx.
Your users are required to install the certificate first. This is something they might not be able to do (missing rights) or might not dare to do (because they're scared).
When you take a closer look, you will find that the issuer name of the test certificate might look reasonable, while the issuer name of a StoreKey.pfx probably looks very strange (a long GUID). So users who are asked to install the certificate might be scared for a reason.
The best way is to use your own certificate, one that won't expire too early and that is installed by your IT device management infrastructure (if you're building a company app, for example). This leads to a hassle-free installation for users.
Using a test certificate is possible technically but not a very "clean solution".
Also check out these posts:
https://msdn.microsoft.com/en-us/library/windows/apps/br230260(v=vs.85).aspx
https://msdn.microsoft.com/en-us/library/windows/apps/jj835832.aspx

Security code for iOS app

Currently we are developing a track and trace app for our company. To make this app more secure, we would like to ask the user for a security code at first launch. This security code will most likely be the customer number.
Once the user sends a request to the server with his track and trace code, the server will check whether the security code and the track and trace code are linked.
My question is: what would be the best way to store the security code in the app, so that the app can supply it whenever a track and trace request is sent to the server? We want the user to enter the security code only at first start.
[[NSUserDefaults standardUserDefaults] boolForKey:@"security code"];
Would this be an option?
For any secure data you wish to store on the device, I strongly advise you use the Keychain. You can then check for the code's existence within the keychain, and use that if it's available.
See the documentation below to get on your feet with the Keychain!
Keychain Concepts https://developer.apple.com/library/ios/#documentation/security/Conceptual/keychainServConcepts/02concepts/concepts.html#//apple_ref/doc/uid/TP30000897-CH204-TP9
How to implement on iOS https://developer.apple.com/library/ios/#documentation/security/Conceptual/keychainServConcepts/iPhoneTasks/iPhoneTasks.html
Reference https://developer.apple.com/library/ios/#documentation/security/Reference/keychainservices/Reference/reference.html#//apple_ref/doc/uid/TP30000898
Sample Project https://developer.apple.com/library/ios/#samplecode/GenericKeychain/Introduction/Intro.html#//apple_ref/doc/uid/DTS40007797
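As a rough illustration only (the service and account names are placeholders, and kSecAttrAccessibleWhenUnlockedThisDeviceOnly is just one reasonable accessibility choice), a minimal Objective-C sketch of saving and reading such a code with the Keychain Services API might look like this:

#import <Foundation/Foundation.h>
#import <Security/Security.h>

// Minimal sketch: store and read the security code as a generic password item.
// "com.example.trackandtrace" and "securityCode" are placeholder identifiers.
static void SaveSecurityCode(NSString *code) {
    NSDictionary *query = @{
        (__bridge id)kSecClass: (__bridge id)kSecClassGenericPassword,
        (__bridge id)kSecAttrService: @"com.example.trackandtrace",
        (__bridge id)kSecAttrAccount: @"securityCode",
    };
    SecItemDelete((__bridge CFDictionaryRef)query); // replace any previous value

    NSMutableDictionary *attributes = [query mutableCopy];
    attributes[(__bridge id)kSecValueData] = [code dataUsingEncoding:NSUTF8StringEncoding];
    attributes[(__bridge id)kSecAttrAccessible] =
        (__bridge id)kSecAttrAccessibleWhenUnlockedThisDeviceOnly;
    SecItemAdd((__bridge CFDictionaryRef)attributes, NULL);
}

static NSString *LoadSecurityCode(void) {
    NSDictionary *query = @{
        (__bridge id)kSecClass: (__bridge id)kSecClassGenericPassword,
        (__bridge id)kSecAttrService: @"com.example.trackandtrace",
        (__bridge id)kSecAttrAccount: @"securityCode",
        (__bridge id)kSecReturnData: @YES,
        (__bridge id)kSecMatchLimit: (__bridge id)kSecMatchLimitOne,
    };
    CFTypeRef result = NULL;
    if (SecItemCopyMatching((__bridge CFDictionaryRef)query, &result) != errSecSuccess) {
        return nil; // not stored yet, so ask the user for it
    }
    NSData *data = CFBridgingRelease(result);
    return [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
}

At first launch the app would prompt for the code and call SaveSecurityCode; after that it only calls LoadSecurityCode.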
user1805901> This security code will most likely be the customer number.
How guessable is that? You might want to look into a UUID in case of loss. Provide the mapping from UUID to customer number at the server.
WDUK> For any secure data you wish to store on the device, I strongly advise you use the Keychain.
In addition to Keychain, you also have to look at kSecAttrAccessibleWhenUnlocked, kSecAttrAccessibleAfterFirstUnlock, kSecAttrAccessibleAlways, kSecAttrAccessibleWhenUnlockedThisDeviceOnly, kSecAttrAccessibleAfterFirstUnlockThisDeviceOnly, and kSecAttrAccessibleAlwaysThisDeviceOnly.
kSecAttrAccessibleAlways and kSecAttrAccessibleAlwaysThisDeviceOnly are poison and should not be used, since the Keychain can leak the data. See iOS Keychain Weaknesses for details (it's gotten better, but it can still be a problem).
*ThisDeviceOnly means the secret is not included in a restore to a different device. It's helpful against some copy/paste attacks.
And if you don't want the secret on the cloud, I believe the Keychain attributes need to include com.apple.MobileBackup. See Technical Q&A QA1719 for details.
Jeff

Application-specific data protection on iOS

I've seen some documentation and videos from WWDC about data protection in iOS 5, and it seems very nice since it can encrypt all your application data and keep it protected as long as your device is locked. However, I see two main problems with that system-wide data protection mechanism:
1- If somebody manages to steal my iPhone while it is not locked (which is typically what happens in a "steal-and-run" case), they can potentially plug my iPhone into a laptop and access my data unencrypted.
2- It forces me to define a system-wide passcode, which seems natural to some users but is still cumbersome to a lot of users. It seems abusive to force my users to define a system-level passcode even though my app is the only one where encryption might really require it. And it's even more abusive given that a four-digit passcode is not such a good protection against brute-force attacks.
So my question is the following. Is there any simple way to encrypt my data with a passcode specific to my application, so that every time a user launches the app, they have to enter the passcode, but they don't have to define one on the system level? If not, can I at least plug into standard data protection API's with such an application-specific passcode? If not, is it worth it to write my own encryption layer on top of core data to enable such a scenario? Or is it something that might be added to future versions of iOS (in which case I'll probably stick with system-wide passcodes for now and upgrade it later)?
Several data protection APIs on other operating systems (e.g. DPAPI on Windows) allow developers to provide supplemental entropy for the key derivation process used to protect the data. On those systems, you could easily derive that entropy from a PIN. Without the PIN, you can't generate the key and read the data.
I looked and couldn't find anything to this effect on iOS, but I am not an Objective-C programmer, so frankly, reading Apple's documentation is a pain for me, and I didn't look too hard.
Depending on your use case you may want to enable data protection in your app, but if the user doesn't use a passcode it won't give you much protection. I don't know if enabling that entitlement will force a passcode.
You could take the path of requiring the app to have a PIN code at launch and then use that PIN code, along with some other data, to generate a key for the Common Crypto functions.
https://developer.apple.com/reference/security
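A minimal sketch of that last idea, using CCKeyDerivationPBKDF from CommonCrypto to turn an app-specific PIN into an AES key; the salt handling and iteration count are illustrative assumptions, not a vetted design:

#import <CommonCrypto/CommonCrypto.h>
#import <Foundation/Foundation.h>

// Derive a 256-bit key from an app-specific PIN with PBKDF2.
// The salt should be generated once (e.g. with SecRandomCopyBytes) and stored
// alongside the encrypted data; 10000 rounds is an arbitrary example value.
static NSData *KeyFromPin(NSString *pin, NSData *salt) {
    NSData *pinData = [pin dataUsingEncoding:NSUTF8StringEncoding];
    NSMutableData *key = [NSMutableData dataWithLength:kCCKeySizeAES256];
    int result = CCKeyDerivationPBKDF(kCCPBKDF2,
                                      pinData.bytes, pinData.length,
                                      salt.bytes, salt.length,
                                      kCCPRFHmacAlgSHA256,
                                      10000,
                                      key.mutableBytes, key.length);
    return (result == kCCSuccess) ? key : nil;
}

The resulting key can then be fed to CCCrypt (or an encrypted layer on top of Core Data) so the data is unreadable without the PIN, independent of the system passcode.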

Is this plan for preventing iPhone app client spoofing sound?

I'm designing an iPhone app that communicates with a server over HTTP.
I only want the app, not arbitrary HTTP clients, to be able to POST to certain URLs on the server. So I'll set up the server to validate only POSTs that include a secret token, and set up the app to include that secret token. All requests that include this token will be sent only over an HTTPS connection, so that it cannot be sniffed.
Do you see any flaws with this reasoning? For example, would it be possible to read the token out of the compiled app using "strings", a hex editor, etc? I wouldn't be storing this token in a .plist or other plain-text format, of course.
Suggestions for an alternate design are welcome.
In general, assuming that a determined attacker can't discover a key that is embedded in an application on a device under his physical control (and, probably, that he owns anyway) is unwarranted. Look at all of the broken DRM schemes that relied on this assumption.
What really matters is who's trying to get the key, and what their incentive is. Sell a product aimed at a demographic that isn't eager to steal. Price your product so that it's cheaper to buy it than it is to discover the key. Provide good service to your customers. These are all marketing and legal issues, rather than technological.
If you do embed a key, use a method that requires each attacker to discover the key themselves, such as using a different key for each client. You don't want a situation where one attacker can discover the key and publish it, granting everyone access.
The iPhone does provide the Keychain API, which can help the application hide secrets from the device owner, for better or worse. But anything is breakable.
The way I understand it, yes, the key could be retrieved from the app one way or another. It's almost impossible to hide something in the Objective-C runtime due to the very nature of it. To the best of my knowledge, only Omni have managed it with their serial numbers, apparently by keeping the critical code in C (Cocoa Insecurity).
It might be a lot of work (I've no idea how complex it is to implement), but you might want to consider using the push notifications to send an authentication key with a validity of one hour to the program every hour. This would largely offload the problem of verifying that it's your app to Apple.
I suggest adding a checksum (MD5/SHA-1) based on the data being sent and a secret key that both your app and the server know.
Applications can be disassembled, so an attacker could still find your key.
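As a rough sketch of that idea (using HMAC-SHA256 from CommonCrypto rather than a plain hash; the header you send it in and how you store the key are up to you):

#import <CommonCrypto/CommonCrypto.h>
#import <Foundation/Foundation.h>

// Compute a hex-encoded HMAC-SHA256 of the request body with a shared secret;
// the server recomputes it and rejects requests whose signature doesn't match.
static NSString *SignatureForBody(NSData *body, NSData *secretKey) {
    unsigned char mac[CC_SHA256_DIGEST_LENGTH];
    CCHmac(kCCHmacAlgSHA256, secretKey.bytes, secretKey.length,
           body.bytes, body.length, mac);
    NSMutableString *hex = [NSMutableString stringWithCapacity:sizeof(mac) * 2];
    for (size_t i = 0; i < sizeof(mac); i++) {
        [hex appendFormat:@"%02x", mac[i]];
    }
    return hex;
}

As noted above, this only raises the bar: the shared key can still be extracted from a disassembled binary.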
More information is needed to determine whether the approach is sound. It may be sound for one asset being protected and unsound for another, all based on the value of the asset and the cost if the asset is revealed.
Several earlier posters have alluded to the fact that anything on the device can be revealed by a determined attacker. So, the best you can do is determine how valuable the asset is and put enough hurdles in the way of the attacker that the cost of the attack exceeds the value of the asset.
One could add client-side certificates for the SSL connection to your scheme. One could bury that certificate and the key for the token deep in some obfuscated code. One could probably craft a scheme using public/private key cryptography to further obscure the token. One could implement a challenge/response protocol with a time-boxed response, wherein the server challenges the app and the app has X milliseconds to respond before it's disconnected.
The number and complexity of the hurdles all depend on the value of the asset.
Jack
You should look into the Entrust Technologies (www.entrust.com) product line for two-factor authentication tied to all sorts of specifics (e.g., device, IMEI, application serial number, user ID, etc.)

How would you keep secret data secret in an iPhone application?

Let's say I need to access a web service from an iPhone app. This web service requires clients to digitally sign HTTP requests in order to prove that the app "knows" a shared secret: a client key. The request signature is stored in an HTTP header and the request is simply sent over HTTP (not HTTPS).
This key must stay secret at all times yet needs to be used by the iPhone app.
So, how would you securely store this key given that you've always been told to never store anything sensitive on the client side?
The average user (99% of users) will happily just use the application. There will be somebody (an enemy?) who wants that secret client key in order to harm the service or the key's owner by way of impersonation. Such a person might jailbreak their phone, get access to the binary, run 'strings' or a hex editor, and poke around. Thus, just storing the key in the source code is a terrible idea.
Another idea is storing the key in code, not as a string literal, but in an NSMutableArray that's created from byte literals.
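A sketch of that idea (the byte values below are placeholders; each stored byte is the real byte plus one, so the literal key never appears contiguously in the binary):

#import <Foundation/Foundation.h>

// Reassemble the client key from offset byte literals at runtime.
// { 't', '4', 'd', 's', '4', 'u' } minus one per byte yields the placeholder
// key "s3cr3t"; a real key would of course use different values.
static NSString *ClientKey(void) {
    unsigned char bytes[] = { 't', '4', 'd', 's', '4', 'u' };
    NSMutableData *data = [NSMutableData data];
    for (size_t i = 0; i < sizeof(bytes); i++) {
        unsigned char b = bytes[i] - 1;
        [data appendBytes:&b length:1];
    }
    return [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
}

This only raises the bar slightly; anyone who disassembles the binary can still recover the key.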
One can use the Keychain but since an iPhone app never has to supply a password to store things in the Keychain, I'm wary that someone with access to the app's sandbox can and will be able to simply look at or trivially decode items therein.
EDIT - so I read this about the Keychain: "In iPhone OS, an application always has access to its own keychain items and does not have access to any other application’s items. The system generates its own password for the keychain, and stores the key on the device in such a way that it is not accessible to any application."
So perhaps this is the best place to store the key.... If so, how do I ship with the key pre-entered into the app's keychain? Is that possible? Else, how could you add the key on first launch without the key being in the source code? Hmm..
EDIT - Filed bug report # 6584858 at http://bugreport.apple.com
Thanks.
The goal is, ultimately, to restrict access to the web service to authorized users, right? Very easy if you control the web service (if you don't, wrap it in a web service which you do control).
1) Create a public/private key pair. The private key goes on the web service server, which is put in a dungeon and guarded by a dragon. The public key goes on the phone. If someone is able to read the public key, this is not a problem.
2) Have each copy of the application generate a unique identifier. How you do this is up to you. For example, you could build it into the executable on download (is this possible for iPhone apps?). You could use the phone's GUID, assuming they have a way of calculating one. You could also redo this per session if you really wanted.
3) Use the public key to encrypt "My unique identifier is $FOO and I approved this message". Submit that with every request to the web service (a sketch of this step follows this list).
4) The web service decrypts each request, bouncing any which don't contain a valid identifier. You can do as much or as little work as you want here: keep a whitelist/blacklist, monitor usage on a per-identifier basis and investigate suspicious behavior, etc.
5) Since the unique identifier now never gets sent over the wire, the only way to compromise it is to have physical access to the phone. If they have physical access to the phone, you lose control of any data anywhere on the phone. Always. Can't be helped. That is why we built the system such that compromising one phone never compromises more than one account.
6) Build business processes to accommodate the need to a) remove access from a user who is abusing it and b) restore access to a user whose phone has been physically compromised (this is going to be very, very infrequent unless the user is the adversary).
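A sketch of step 3, assuming the public key has already been loaded from a certificate bundled with the app (loading it is not shown); note that with PKCS#1 padding the message has to be shorter than the key's block size minus the padding overhead:

#import <Foundation/Foundation.h>
#import <Security/Security.h>

// Encrypt the identifier message with the app's bundled public key so only
// the server, which holds the private key, can read it.
static NSData *EncryptIdentifierMessage(NSString *message, SecKeyRef publicKey) {
    NSData *plain = [message dataUsingEncoding:NSUTF8StringEncoding];
    size_t cipherLen = SecKeyGetBlockSize(publicKey);
    NSMutableData *cipher = [NSMutableData dataWithLength:cipherLen];
    OSStatus status = SecKeyEncrypt(publicKey, kSecPaddingPKCS1,
                                    plain.bytes, plain.length,
                                    cipher.mutableBytes, &cipherLen);
    if (status != errSecSuccess) {
        return nil;
    }
    cipher.length = cipherLen;
    return cipher;
}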
The simple answer is that as things stand today it's just not possible to keep secrets on the iPhone. A jailbroken iPhone is just a general-purpose computer that fits in your hand. There's no trusted platform hardware that you can access. The user can spoof anything you can imagine using to uniquely identify a given device. The user can inject code into your process to do things like inspect the keychain. (Search for MobileSubstrate to see what I mean.) Sorry, you're screwed.
One ray of light in this situation is in app purchase receipts. If you sell an item in your app using in app purchase you get a receipt that's crypto signed and can be verified with Apple on demand. Even though you can't keep the receipt secret it can be traced (by Apple, not you) to a specific purchase, which might discourage pirates from sharing them. You can also throttle access to your server on a per-receipt basis to prevent your server resources from being drained by pirates.
UAObfuscatedString could be a solution to your problem. From the docs:
When you write code that has a string constant in it, this string is saved in the binary in clear text. A hacker could potentially discover exploits or change the string to affect your app's behavior. UAObfuscatedString only ever stores single characters in the binary, then combines them at runtime to produce your string. It is highly unlikely that these single letters will be discoverable in the binary as they will be interjected at random places in the compiled code. Thus, they appear to be randomized code to anyone trying to extract strings.
If you can bear to be iPhone OS 3.0-only, you may want to look at push notifications. I can't go into the specifics, but you can deliver a payload to Apple's servers along with the notification itself. When they accept the alert (or if your app is running), then some part of your code is called and the keychain item is stored. At this point, that is the only route to securely storing a secret on an iPhone that I can think of.
I had the same question and spent a lot of time poking around for an answer. The issue is a chicken-and-egg one: how to pre-populate the keychain with data needed by your app.
In any case, I found a technique that will at least make it harder for a jailbreaker to uncover the information - they'll have to disassemble your code to find out what you did to mask the info:
String Obfuscation (if the link breaks search for "Obfuscate / Encrypt a String (NSString)")
Essentially the string is obfuscated before being placed in the app, then you deobfuscate it in code.
It's better than doing nothing.
David
EDIT: I actually used this in an app. I put a base coding string into the Info.plist, then did several operations on it in code - rot13, rotate/invert bytes, etc. The final processed string was used to decode the obfuscated string. Now, the three-letter agencies could for sure break this - but at a huge cost of many hours spent decoding the binary.
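An illustrative sketch of that approach; the Info.plist key name and the byte transform are placeholder choices, not the exact scheme described above:

#import <Foundation/Foundation.h>

// Read a seed string shipped in Info.plist (placeholder key "ObfuscationSeed"),
// transform its bytes at runtime, and XOR-decode the embedded obfuscated secret.
static NSString *DeobfuscatedSecret(NSData *obfuscated) {
    NSString *seed = [[NSBundle mainBundle] objectForInfoDictionaryKey:@"ObfuscationSeed"];
    NSData *seedData = [seed dataUsingEncoding:NSUTF8StringEncoding];
    if (seedData.length == 0) {
        return nil;
    }
    NSMutableData *plain = [NSMutableData dataWithLength:obfuscated.length];
    uint8_t *out = plain.mutableBytes;
    const uint8_t *in = obfuscated.bytes;
    const uint8_t *key = seedData.bytes;
    for (NSUInteger i = 0; i < obfuscated.length; i++) {
        // Invert the seed byte (a stand-in for "rot13/rotate/invert"), then XOR.
        uint8_t k = (uint8_t)~key[i % seedData.length];
        out[i] = in[i] ^ k;
    }
    return [[NSString alloc] initWithData:plain encoding:NSUTF8StringEncoding];
}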
I was going to say that this is the best technique I've come across, but I just read Kiran's post on UAObfuscatedString (different answer), which is a completely different way to obfuscate. It has the benefit of no strings being saved anywhere in the app - each letter is turned into a method call. The selectors will show up as strings, though, so a hacker can quickly tell that your class used that technique.
I think that this similar question, and my answer, may be relevant to your case too. In a nutshell, there was some talk of a trusted platform module being present in an iPhone. This would allow your service to trust an iPhone, even in the hands of an attacker. However, it looks like using the keychain is your best bet.
Did you consider/try the Push Notification suggestion, for initially transmitting the secret to the app & keychain? Or end up finding some other method to achieve this?
I'm going to have my iPhone app upload images to Amazon S3. Instead of putting the AWS credentials in the app, I am going to have the app phone home to my server for the URI and headers to use in the S3 upload request. My server will generate the S3 URI, proper signatures, etc. I can then implement a tighter, more specific security model on my app's web service than AWS offers by itself, and not give away my AWS keys to anyone with a jailbroken iPhone.
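A rough sketch of that phone-home flow; the /sign-upload endpoint, its JSON response shape, and the use of NSURLSession are all assumptions for illustration:

#import <Foundation/Foundation.h>

// Ask our own server for a pre-signed S3 URL, then PUT the image bytes to it.
// The endpoint and the "uploadURL" response field are hypothetical placeholders.
static void UploadImage(NSData *imageData) {
    NSURL *signURL = [NSURL URLWithString:@"https://api.example.com/sign-upload"];
    [[[NSURLSession sharedSession] dataTaskWithURL:signURL
            completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (error != nil || data == nil) {
            return;
        }
        NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
        NSMutableURLRequest *put =
            [NSMutableURLRequest requestWithURL:[NSURL URLWithString:json[@"uploadURL"]]];
        put.HTTPMethod = @"PUT";
        put.HTTPBody = imageData;
        [[[NSURLSession sharedSession] dataTaskWithRequest:put
                completionHandler:^(NSData *d, NSURLResponse *r, NSError *e) {
            // Check the S3 response status here.
        }] resume];
    }] resume];
}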
But there still has to be some trust (credentials or otherwise) given to the app, and that trust can be stolen. All you can ever do is limit the damage done if someone jailbreaks an iPhone and steals whatever credentials are in the app. The more powerful those credentials are, the worse things are. Ways to limit the power of credentials include:
avoid global credentials. make them per-user/application
avoid permanent credentials. make them temporary if possible
avoid global permissions. give them only the permissions they need. for instance, write permissions might be broken down into insert, overwrite, delete, write against resource group A or B, etc, and read could be broken into read named resources, read a list of all existing resources, read resource groups A or B, etc.
I would recommend creating a key at run time if possible. That way, if the key were captured during a particular session, it would be worthless once the session ends. They could still capture the key from memory if they are smart enough, but it wouldn't matter much since the key would become invalid after a period of time.
Sounds wonky. Would use HTTPS and maybe an encryption package to handle the key.
I think CommonCrypto is available for iPhone.
EDIT: Still sounds wonky. Why would anyone pass a secret key in an HTTP header? Anyone who traces your network traffic (via a logging wifi router, for instance) would see it.
There are well-established security methods for encrypting message traffic...why not use them rather than invent what is basically a trivially flawed system?
EDIT II: Ah, I see. I would go ahead and use the Keychain...I think it is intended for just these kinds of cases. I missed that you were generating the request signature using the key. I would still use HTTPS if I could, though, since that way you don't risk people deducing your key-generation scheme by inspecting enough signatures.