There are two cases to consider when working on an embedded system.
Embedded systems have limited resources, for example an ARM Cortex-M0 microcontroller with 12 KB of flash.
Case 1:
Common function/module usage for the bootloader and firmware:
The bootloader and the firmware may need to use the same modules and functions; sharing them prevents code duplication. Otherwise, the same code is included twice, once in the firmware and once in the bootloader.
We can prevent this by fixing the addresses of the shared functions and calling them through those addresses. This is one possible solution.
Is there a smarter method for sharing common functions?
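One common pattern is a jump table of function pointers that the bootloader exports at a fixed, linker-placed address, which the firmware then calls through. Here is a minimal sketch in C, assuming a Cortex-M part; the address 0x08000200, the section name .shared_api, and functions like bl_flash_write are illustrative placeholders:

    #include <stdint.h>

    /* Table layout agreed between bootloader and firmware. */
    typedef struct {
        uint32_t version;  /* table format version, sanity-checked by callers */
        int  (*flash_write)(uint32_t addr, const uint8_t *data, uint32_t len);
        void (*uart_putc)(char c);
    } shared_api_t;

    /* Bootloader side: place the table at a fixed address via the linker
     * script (e.g. a dedicated section at 0x08000200). */
    #ifdef BUILDING_BOOTLOADER
    extern int  bl_flash_write(uint32_t addr, const uint8_t *data, uint32_t len);
    extern void bl_uart_putc(char c);

    __attribute__((section(".shared_api"), used))
    const shared_api_t g_shared_api = {
        .version     = 1,
        .flash_write = bl_flash_write,
        .uart_putc   = bl_uart_putc,
    };
    #endif

    /* Firmware side: locate the table by address only; no symbols shared. */
    #define SHARED_API ((const shared_api_t *)0x08000200u)

    static void firmware_log(char c)
    {
        if (SHARED_API->version == 1) {   /* check the table format first */
            SHARED_API->uart_putc(c);
        }
    }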
Case 2:
Sometimes we need to upgrade the firmware; performing upgrades is one of the bootloader's duties. The bootloader can upgrade the firmware simply by overwriting the old image.
As we saw, the two cases can be implemented separately. But when we merge them, some problems appear.
Question:
Bootloaders are generally static, while firmware can be modified; therefore, common functions are usually placed in the bootloader. But when we need to update a common module/function, how can we do it?
What are the general or clever approaches for embedded systems structured as bootloader plus firmware, particularly with limited resources?
Could one or more additional flash areas, holding the common modules/functions separately, solve this problem?
Firmware, bootloader, and library (a new area)?
I want to learn the general approaches. Are there any papers, books, or other sources about advanced firmware management?
Thanks
If you share code between your bootloader and your mainline firmware application, then your bootloader will be using this code while it flashes the application space. To prevent this condition you must sacrifice the ability to update the common code; otherwise your bootloader will crash, since it would be executing code that it is in the middle of erasing.
With only 12 KB of flash, it's pretty ambitious to expect a bootloader and mainline application to fit. You might consider writing the bootloader in assembly (gasp!). Some Cortex-M0 parts (such as the NXP LPC11xx family) have an additional boot ROM which stores some useful functions and helps alleviate some of the memory constraints.
Your question states the problem correctly - you cannot have your cake and eat it. Either:
1. You go for a small memory footprint and do not include firmware-upgrade logic in the bootloader (i.e. the bootloader might just validate the application image's CRC, but nothing more complicated; see the sketch at the end of this answer). Here you could share functions to save space. OR
2. The bootloader has firmware-upgrade functionality. Here you have to compile the shared functions into both the app and the bootloader. The shared functions should be small - probably not a huge overhead, but you need the space that this would take; if you don't have it, then you need to go for more memory.
There is no way to share functionality and do firmware upgrades from the bootloader reliably.
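To illustrate option 1, here is a sketch of a validate-only bootloader in C. APP_BASE, APP_SIZE, and the location of the stored CRC are assumptions for illustration; the vector-table handling is Cortex-M style and deliberately simplified:

    #include <stdint.h>

    #define APP_BASE  0x08001000u               /* assumed application start  */
    #define APP_SIZE  (11u * 1024u)             /* assumed application length */
    #define APP_CRC   (*(const uint32_t *)(APP_BASE + APP_SIZE)) /* stored CRC */

    /* Bitwise CRC-32 (polynomial 0xEDB88320), small but slow. */
    static uint32_t crc32(const uint8_t *p, uint32_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
        }
        return ~crc;
    }

    void boot(void)
    {
        if (crc32((const uint8_t *)APP_BASE, APP_SIZE) == APP_CRC) {
            /* Relocate the vector table and set the stack pointer here;
             * the details are device-specific and omitted. The reset
             * vector sits at offset 4 on Cortex-M. */
            void (*app_entry)(void) = *(void (**)(void))(APP_BASE + 4);
            app_entry();
        }
        /* else: stay in the bootloader and signal an error */
    }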
In light of the current discussion about security in the firmware update process, I would like to add the following for clarification:
Sharing code between the bootloader and the app opens yet another door for a potential attacker, so you really want to avoid that.
The bootloader is the part you never want to change; it should be as static as possible. If the bootloader is broken, in-the-field updates become nearly impossible, or at least insecure.
Having said that, you might want to use a different approach.
You could create a maintenance mode for your device.
This mode opens the JTAG interface and allows direct access to memory. The service technician can then apply the update.
Now you "only" need to secure the activation of the maintenance mode.
The following could work (a sketch in C follows the list):
Use a UART interface to communicate the activation.
The maintenance system sends its own ID and requests maintenance mode via UART.
The device sends back the maintenance system's ID, a random number, and its unique system ID.
The maintenance system sends this ID sequence to your certification server.
If the unique system ID and the maintenance system's ID are correct, the server creates a signature over the received information and sends it back to the maintenance system.
Your device now receives the signature via UART.
Your device verifies the signature against the previously sent ID string, using a public key stored during production.
On successful verification, maintenance mode is entered.
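Here is a sketch of this handshake in C. uart_read/uart_write, rng_read, and rsa_verify are hypothetical stand-ins for your UART driver, a hardware random source, and whatever signature-verification library you pick (RSA, ECDSA, ...); the buffer sizes are arbitrary:

    #include <stdint.h>
    #include <string.h>

    extern void uart_write(const uint8_t *buf, uint32_t len);   /* hypothetical */
    extern void uart_read(uint8_t *buf, uint32_t len);          /* hypothetical */
    extern void rng_read(uint8_t *buf, uint32_t len);           /* hypothetical */
    /* Verifies 'sig' over 'msg' with the public key stored at production. */
    extern int rsa_verify(const uint8_t *msg, uint32_t msg_len,
                          const uint8_t *sig, uint32_t sig_len); /* hypothetical */

    extern const uint8_t g_system_id[16];   /* unique ID set in production */

    int enter_maintenance_mode(void)
    {
        uint8_t maint_id[16], nonce[16], sig[256];
        uint8_t challenge[48];

        /* Maintenance system sends its ID and requests maintenance mode. */
        uart_read(maint_id, sizeof maint_id);

        /* Send back maintenance ID + random nonce + unique system ID. */
        rng_read(nonce, sizeof nonce);
        memcpy(challenge,      maint_id,    16);
        memcpy(challenge + 16, nonce,       16);
        memcpy(challenge + 32, g_system_id, 16);
        uart_write(challenge, sizeof challenge);

        /* The maintenance system forwards the challenge to the
         * certification server, which signs it; we receive the signature. */
        uart_read(sig, sizeof sig);

        /* Verify against the very challenge we sent. */
        if (rsa_verify(challenge, sizeof challenge, sig, sizeof sig) != 0)
            return -1;                  /* verification failed */

        /* Verification succeeded: unlock JTAG / maintenance mode here. */
        return 0;
    }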
To add security, you definitely want to put some effort into the maintenance system's ID, following a similar scheme. The ID should basically depend on a MAC address or another unique hardware ID, plus a signature of the same. The ID should be created in a secure environment during the production process of the maintenance system. The unique hardware ID should be visible to the outside world, so the server can actually verify whether the ID it received matches the maintenance system communicating with it.
This whole setup would give you a secure firmware update without a bootloader.
To have secure firmware updates, the common understanding is that you need an authentication scheme based on asymmetric cryptography, such as RSA. Since you need the verification code anyway, the approach above replaces a bootloader capable of accepting updates with a simple UART interface, saving some resources in the process.
Is this something you were looking for?
In my experience, a commercial bootloader uses between 4 and 8 KB of flash, depending on the flash algorithm and a couple of other things. I have been sticking with the same vendor throughout my career, so this might vary from your experience.
A digital signature system optimized for embedded systems uses approximately 4.5 KB of flash (for an example, see here: https://www.segger.com/emlib-emsecure.html ) and no more RAM than the stack.
You can see that 12 KB is really very little for a system which can be updated securely in the field, and even more so if you want the system to be updated using a bootloader.
Related
I would like to use Simics to test out Intel Atom Verified Boot & Measured Boot with Boot Guard without potentially breaking my development hardware (which would be a permanent breakage if I mis-fuse it). I believe that the initial boot block (IBB) verification and fused-key usage is done by the CSME. Is it possible to test whether the tamper-proofing of the IBB is working correctly, or will I only be able to test the main x86-side portion within Simics?
(I think also maybe if the CSME portion is not emulated then perhaps the handoff of trusted key hashes won't happen, so even the IBB won't be able to verify subsequent stages and thus it wouldn't be able to simulate Boot Guard in Simics?)
The current public Simics Quick-Start Platform model does not include the CSME or other secure boot features. It is a model limitation.
From my OS I can see information about my hard disk, RAM, and CPU, but I've never told my OS this information.
How does my OS know it?
Is there some place in the hard disk or CPU or RAM that stores this kind of information?
Is there some standard about the format of this kind of information?
SMBIOS (formerly known as DMI) contains much of this information. SMBIOS is a data structure/API that is part of the BIOS/UEFI firmware and contains info like the brand and model of the computer, etc.
The rest is gathered by the OS querying hardware directly.
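For a feel of what the OS actually reads, here is a sketch in C that walks the raw SMBIOS structure table. It assumes a Linux system, where the kernel exposes the table at /sys/firmware/dmi/tables/DMI (usually readable only by root); each structure is a small header (type, length, handle) followed by its formatted area and a string set terminated by a double NUL:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        static uint8_t buf[65536];
        FILE *f = fopen("/sys/firmware/dmi/tables/DMI", "rb"); /* root needed */
        if (!f) { perror("fopen"); return 1; }
        size_t n = fread(buf, 1, sizeof buf, f);
        fclose(f);

        for (size_t i = 0; i + 4 <= n; ) {
            uint8_t type = buf[i];
            uint8_t len  = buf[i + 1];          /* formatted-area length */
            if (len < 4) break;                 /* malformed table */
            printf("structure type %3u, formatted length %u\n", type, len);
            if (type == 127) break;             /* end-of-table marker */
            i += len;                           /* skip the formatted area */
            while (i + 1 < n && (buf[i] || buf[i + 1])) i++; /* skip strings */
            i += 2;                             /* skip the double NUL */
        }
        return 0;
    }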
Answer grabbed from superuser by Mokubai.
You don't need to tell it because each device already knows (or has a way) to identify itself.
If you get the idea that every device is accessed via address and data lines, and in some cases only data lines, then you come to the realisation that those data lines need some kind of "protocol" that determines just how you talk to those devices.
In amongst that protocol you have commands that say "read this" and "send that" or "put this over there". It is also relatively easy to have a command that says "identify yourself" which, rather than reading a block of disk or memory or painting a pixel a particular colour, will return a premade string or set of strings that tell the driver or operating system what that device is. Using a series of identity commands you could discover a device type, its capabilities, and what driver might be able to work with it.
You don't need to tell a device what it is, because it already knows. And you don't need to tell the operating system what it is because it can ask the device itself.
You don't tell people what they're called and how they talk, you ask them.
Each device has its own protocol for these messages, and devices don't store the details of other devices, because doing so would be insane and near useless given that you can remove any device at any time. Your hard drive doesn't need to store information about your memory or graphics card, except for the driver that the operating system uses to talk to it.
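As a concrete "identify yourself" example, the legacy Linux ATA interface offers the HDIO_GET_IDENTITY ioctl, which asks a drive for its IDENTIFY data (model, serial number, ...). A sketch in C; /dev/sda is an assumption, root is required, and many modern drivers reject this legacy call:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/hdreg.h>

    int main(void)
    {
        int fd = open("/dev/sda", O_RDONLY | O_NONBLOCK); /* path assumed */
        if (fd < 0) { perror("open"); return 1; }

        struct hd_driveid id;
        if (ioctl(fd, HDIO_GET_IDENTITY, &id) == 0) {
            printf("model : %.40s\n", (const char *)id.model);
            printf("serial: %.20s\n", (const char *)id.serial_no);
        } else {
            perror("HDIO_GET_IDENTITY"); /* many modern drivers reject it */
        }
        close(fd);
        return 0;
    }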
The PC UEFI specification defines a core set of system features that every computer has, allowing the processor to be powered up and a program stored in an EEPROM to begin the absolute basic system probing necessary to determine the processor, set up the RAM, find a disk and display, and thus continue to boot the computer.
From there the UEFI system hands over to the operating system, which has more detailed probing and identification procedures, but it all starts at the most basic "I have a processor, what is around me?" situation.
I've seen some documentation and videos from WWDC about data protection in iOS 5, and it seems very nice since it can encrypt all your application data and keep it protected as long as your device is locked. However, I see two main problems with that system-wide data protection mechanism:
1. If somebody manages to steal my iPhone while it is not locked (which is typically what happens in a "steal-and-run" case), they can potentially plug my iPhone into a laptop and access my data unencrypted.
2. It forces me to define a system-wide passcode, which seems natural to some users but is still cumbersome to many others. It seems excessive to force my users to define a system-level passcode when my app is the only one whose encryption might really require it. And it's even more problematic because a four-digit passcode is not such a good protection against brute-force attacks.
So my question is the following. Is there any simple way to encrypt my data with a passcode specific to my application, so that every time a user launches the app, they have to enter the passcode, but they don't have to define one on the system level? If not, can I at least plug into standard data protection API's with such an application-specific passcode? If not, is it worth it to write my own encryption layer on top of core data to enable such a scenario? Or is it something that might be added to future versions of iOS (in which case I'll probably stick with system-wide passcodes for now and upgrade it later)?
Several data protection APIs on other operating systems (e.g. DPAPI on Windows) allow developers to provide supplemental entropy for the key derivation process used to protect the data. On those systems, you could easily derive that data from a PIN. Without the PIN, you can't generate the key and read the data.
I looked and couldn't find anything to this effect on iOS, but I am not an Objective-C programmer, so frankly, reading Apple's documentation is a pain for me, and I didn't look too hard.
Depending on your use case you may want to enable data protection in your app, but if the user doesn't use a passcode it won't give you much protection. I don't know if enabling that entitlement will force a passcode.
You could take the path of requiring a PIN code at app launch and then use that PIN, along with some other data, to generate a key for the Common Crypto functions.
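Here is a sketch of that path using Common Crypto's C API (available on iOS). The salt handling, round count, and zero IV are simplifications for illustration; in a real app you would store a random per-user salt and a random IV alongside the ciphertext, and calibrate the PBKDF2 round count:

    #include <CommonCrypto/CommonKeyDerivation.h>
    #include <CommonCrypto/CommonCryptor.h>
    #include <string.h>

    int encrypt_with_pin(const char *pin,
                         const uint8_t *salt, size_t salt_len,
                         const uint8_t *plain, size_t plain_len,
                         uint8_t *out, size_t out_cap, size_t *out_len)
    {
        uint8_t key[kCCKeySizeAES256];

        /* Derive a 256-bit AES key from the user's PIN via PBKDF2. */
        if (CCKeyDerivationPBKDF(kCCPBKDF2, pin, strlen(pin),
                                 salt, salt_len,
                                 kCCPRFHmacAlgSHA256, 100000,
                                 key, sizeof key) != kCCSuccess)
            return -1;

        /* Encrypt; a zero IV is used only to keep the sketch short --
         * use a random IV stored alongside the ciphertext in practice. */
        uint8_t iv[kCCBlockSizeAES128] = {0};
        if (CCCrypt(kCCEncrypt, kCCAlgorithmAES, kCCOptionPKCS7Padding,
                    key, sizeof key, iv,
                    plain, plain_len, out, out_cap, out_len) != kCCSuccess)
            return -1;

        return 0;
    }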
https://developer.apple.com/reference/security
I have written an application in Java using the Eclipse IDE and I now need to know the minimum JRE version required to run the application. I know that certain methods are only available in later JREs, but I was wondering what the easiest way is to find out the highest requirement of my application, so any suggestions would be appreciated...
Also, whilst I am on the topic of requirements, I would appreciate any advice or methods for determining the minimum system requirements for my software in general - e.g. the minimum amount of RAM...
Thanks in advance
Method 1: For minimum JRE version, that's going to be tough. The easiest way is to simply require the same version that you're building against, or later, e.g. JRE 6.x.x or higher.
Method 2: Install multiple JDKs, make them available in Eclipse, and just change the version you're building against, running your app's test suite each time and making sure all the tests pass. The earliest JDK version with which all your tests pass is the lowest JRE the app can run against. Simply having your app compile successfully isn't enough, because previous versions of the JRE/JDK might have bugs that allow for successful compilation but don't allow for proper program execution.
Method 3: Always require the latest on the client side, because Oracle is constantly patching security holes, and ultimately, it may be best to require the latest versions, if you have that kind of control, on the client side.
As far as RAM goes, that's easy. When the JVM starts, it sets a maximum amount of heap (I believe the default may be 128 MB), and that's a hard limit your application cannot exceed without crashing. Profile your app over time, tweaking the memory settings of the JVM, and find the minimum amount of RAM your app needs to run both (a) with acceptable performance and (b) without throwing an OutOfMemoryError, and you're done.
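For example, while profiling you might pin the heap bounds explicitly and lower -Xmx step by step until performance degrades or an OutOfMemoryError appears (myapp.jar is a placeholder):

    java -Xms64m -Xmx128m -jar myapp.jar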
Ref: How to configure JVM options and memory?
For other requirements such as CPU req., things get a little fuzzier. There are a lot of CPUs out there, and the throughput that a given system produces can vary not just based on CPU speed, but the speed of the hard drive, the amount of RAM installed in the system, the speed of the network interface (if you're writing a network app), and other things. For requirements such as that, you'll want to just test it on a variety of systems and sort of draw a line somewhere, and say, "You can expect acceptable performance if you have hardware that is at least as powerful as X, Y, Z".
The other thing you could do is build in a benchmark, or some kind of performance logging, and have that performance data sent back to you. Lots of apps do this. You know that "May we send anonymous usage data back to the mothership?" question you get when installing some software? Well, common among that data are system-specific details such as RAM, CPU, hard drive model, and other hardware details (whatever data you determine is relevant to your app), along with performance logging data. By taking that kind of approach, what you get is a lot of performance data from lots of different system configurations without needing to have a huge number of differently configured machines in-house.
You can do the same thing for program crashes and bugs - have the stack traces, system info, and other relevant data dumped to a log file that is sent back to you - but of course, only if your users have said it's okay to send that data back to you.
I have developed a piece of software (in C and Python) which I want to protect with a dongle so that copying and reverse engineering become hard enough. My dongle device comes with an API which provides these:
Check dongle existence
Check proper dongle
Write to a memory location in the dongle
Read from a memory location in the dongle, etc. (I don't think the rest are that useful.)
What can I do in the source code to make the software harder to crack? The dongle provider suggested that I should check for the proper dongle in a loop or after an event, or that I should use the dongle memory in an efficient way. But how? I have no idea how crackers crack. Please shed some light. Thanks in advance.
P.S: Please don't suggest obfuscating. I have already done that.
First of all, realize that the dongle will only provide a small obstacle. Someone who knows what they're doing will just remove the call to the dongle and patch in a 'true' for whatever result it returned. Everyone will tell you this. But there are roadblocks you can add!
I would find a key portion of your code, something that's difficult or hard to know, something that requires domain knowledge, and put that knowledge onto the key. One example would be shader routines. Shader routines are text files that are sent to a graphics card to achieve particular effects; a very simple brightness/contrast filter would take less than 500 characters to implement, and you can store that in the user space on most dongles. Then you put that information on the key, and only use information from the key in order to show images. That way, if someone simply removes your dongle, all the images in your program will be blacked out. To crack it, someone would have to get a copy of your program, grab the text file from the key, modify your program to include that text file, and know that that particular file is the 'right' way to display images. The particulars of implementation depend on your deployment platform. If you're running a program in WPF, for instance, you might be able to store a DirectX routine on your key, then load that routine from the key and apply the effect to all the images in your app. The cracker then has to be able to intercept that DirectX routine and apply it properly.
Another possibility is to use the key's random number generation routines to develop UIDs. As soon as someone removes the dongle functionality, all generated UIDs will be zeroed.
The best thing to do, though, is to put a domain specific function onto the dongle (such as the entire UID generation routine). Different manufacturers will have different capabilities in this regard.
How much of a roadblock will these clevernesses get you? Realistically, it depends on the popularity of your program. The more popular your program, the more likely someone will want to crack it, and will devote their time to doing so. In that scenario, you might have a few days if you're particularly good at dongle coding. If your program is not that popular (only a few hundred customers, say), then just the presence of a dongle could be deterrent enough without having to do anything clever.
Crackers will crack by sniffing the traffic between your app and the dongle and either disabling any code that tests for dongle presence or writing code to emulate the dongle (e.g. by replaying recorded traffic), whichever looks easier.
Obfuscation of the testing code, and many scattered pieces of code that perform tests in different ways, as well as separating spatially and temporally the effect of the test (disabling/degrading functionality, displaying a warning etc.) from the test itself make the former method harder.
Mutating the content of the dongle with each test based on some random nonce created each run or possibly even preserved between runs, so that naively recording and replaying the traffic does not work, will make the latter method harder.
However, with the system as described, it is still straightforward to emulate the dongle, so sooner or later someone will do it.
If you have the ability to execute code inside the dongle, you could move code that performs functions critical to your application there, which would mean that the crackers must either rederive the code or break the dongle's physical security - a much more expensive proposal (though still feasible; realise that there is no such thing as perfect security).
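Here is a sketch in C of the nonce-based mutation suggested above: each run, the app writes a fresh random nonce into dongle memory and expects a keyed transform of it back, so a recorded trace cannot simply be replayed. dongle_write, dongle_read, and expected_response are hypothetical stand-ins for the vendor API and for a secret transform implemented inside the dongle (and mirrored, obfuscated, in the app):

    #include <stdint.h>
    #include <stdlib.h>

    extern int dongle_write(uint16_t addr, const uint8_t *buf, uint16_t len); /* hypothetical */
    extern int dongle_read(uint16_t addr, uint8_t *buf, uint16_t len);        /* hypothetical */
    extern void expected_response(const uint8_t nonce[8], uint8_t out[8]);    /* hypothetical */

    int dongle_challenge(void)
    {
        uint8_t nonce[8], reply[8], want[8];

        for (int i = 0; i < 8; i++)       /* use a real RNG in practice */
            nonce[i] = (uint8_t)rand();

        /* Write the challenge; the dongle computes its answer in-place. */
        if (dongle_write(0x0000, nonce, 8) != 0) return -1;
        if (dongle_read(0x0008, reply, 8) != 0)  return -1;

        expected_response(nonce, want);
        for (int i = 0; i < 8; i++)
            if (reply[i] != want[i])
                return -1;                /* emulated or absent dongle */
        return 0;
    }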
How to maximize protection with a simple dongle?
Use the API together with an enveloper, if an enveloper exists for your resulting file format. This is a very basic rule, because an enveloper is typically equipped with anti-debugging and obfuscation methods that make common newbie hackers give up on cracking the program. Relying only on the enveloper is also not recommended: once a hacker can break the enveloper's protection on another program, they can break yours too.
Call the dongle APIs in a LOT of places in your application: for example on first start-up, when opening a file, when a dialog box opens, and before processing any information. Maybe also do some random checking even when nothing is happening at all (see the sketch at the end of this answer).
Use more than one function to protect the program. Do not rely only on a single 'find dongle' function to look for a plugged-in dongle.
Use multiple DLLs/libraries (if applicable) to call the dongle functions. If one DLL is hacked, other parts of the software still use the functions from another DLL. For example, copy sdx.dll to print.dll, open.dll, and other names, then define the function calls from each DLL with different names.
If you use a DLL file to call dongle functions, bind it together with the executable. There are quite a few programs capable of doing this, for example PEBundle.
I found this article on PRLog and it was quite useful for maximizing protection with a simple dongle. Maybe this link will help you:
Maximizing Protection with a Simple Dongle for your Software
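As a sketch of the "check in many places" tip and of separating a failed check from its consequence, here is a fragment in C; dongle_present and dongle_read_word are hypothetical vendor-API wrappers, and the magic value is made up:

    #include <stdint.h>
    #include <stdbool.h>

    extern bool dongle_present(void);                 /* hypothetical */
    extern uint32_t dongle_read_word(uint16_t addr);  /* hypothetical */

    static uint32_t g_health = 0xA5A5A5A5u; /* corrupted when checks fail */

    /* Check 1: on startup, a straightforward presence test. */
    void check_on_startup(void)
    {
        if (!dongle_present())
            g_health ^= 0x1u;
    }

    /* Check 2: when opening a file, compare a constant stored in dongle
     * memory instead of calling the presence function again. */
    void check_on_file_open(void)
    {
        if (dongle_read_word(0x0010) != 0xCAFEF00Du)  /* assumed magic */
            g_health ^= 0x2u;
    }

    /* Much later, in an unrelated code path, the damage surfaces:
     * results are silently degraded rather than an obvious
     * "dongle missing" exit that a cracker can search for. */
    uint32_t scale_result(uint32_t value)
    {
        return value ^ (g_health ^ 0xA5A5A5A5u); /* identity only if healthy */
    }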
You can implement many check points in your application.
I don't know if you use HASP, but unfortunately, dongles can be emulated.
You may want to look into using Dinkey Dongles for your copy protection.
It seems a very secure system and the documentation gives you tips for improving your overall security using the system.
http://www.microcosm.co.uk/dongles.php
Ironically, the thing you want to discourage is not piracy by users, but theft by vendors. The internet has become such a lawless place that vendors can steal and resell your software at will. You have legal recourse in some cases, and not in others.
Nothing is fool-proof, as previously stated. Also, the more complex your security is, the more likely it is to cause headaches or problems for legitimate users.
I'd say the most secure application is always the one tied closest to the server. Sadly, then users worry about it being spyware.
If you make a lot of different calls to your dongle, then maybe the cracker will just emulate your dongle -- or find a single point of failure (quite common to change one or two bytes and all your calls are useless). It is a no-win situation.
As the author of PECompact, I always tell customers that they can not rely on anything to protect their software -- as it can and will be cracked if a dedicated cracker goes after it. The harder you make it, the more of a challenge (fun) it is to them.
I personally use very minimal protection techniques on my software, knowing these facts.
Use a smartcard and encrypt/decrypt working files through a secret function stored on the card. Then the software can still be pirated, but the pirated copy will not be able to open properly encrypted working files.
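A sketch of this idea in C: the file key never exists in the application; the card unwraps it on demand. card_unwrap_key stands in for an APDU exchange with the card (e.g. via PC/SC), and aes256_decrypt for your crypto library; both are hypothetical names:

    #include <stdint.h>
    #include <stddef.h>

    /* Sends the wrapped key to the card; the card decrypts it with its
     * internal secret and returns the plain file key. Hypothetical. */
    extern int card_unwrap_key(const uint8_t *wrapped, size_t wrapped_len,
                               uint8_t file_key[32]);

    /* AES-256 decryption of the working file with the unwrapped key;
     * a stand-in for your crypto library. */
    extern int aes256_decrypt(const uint8_t key[32],
                              const uint8_t *in, size_t len, uint8_t *out);

    int open_working_file(const uint8_t *wrapped_key, size_t wrapped_len,
                          const uint8_t *cipher, size_t len, uint8_t *plain)
    {
        uint8_t key[32];
        if (card_unwrap_key(wrapped_key, wrapped_len, key) != 0)
            return -1;  /* no card, no data: a pirated binary is useless */
        return aes256_decrypt(key, cipher, len, plain);
    }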
I would say that if someone wants to crack your software protection, they will do so. When you say 'hard enough', how should 'enough' be interpreted?
A dongle will perhaps prevent your average user from copying your software - so in that sense it is already 'enough'. But anyone who feels the need and is able to circumvent the dongle will likely be able to get past any other scheme that you engineer.