Does Simics simulate the Intel Converged Security & Management Engine (CSME)? - simics

I would like to use Simics to test out Intel Atom Verified Boot & Measured Boot with Boot Guard without potentially breaking my development hardware (which would be a permanent breakage if I mis-fuse it). I believe that the initial boot block (IBB) verification and fused-key usage is done by the CSME. Is it possible to test whether the tamper-proofing of the IBB is working correctly, or will I only be able to test the main x86-side portion within Simics?
(I also suspect that if the CSME portion is not emulated, the handoff of trusted key hashes won't happen, so even the IBB won't be able to verify subsequent stages, and thus Boot Guard couldn't really be simulated in Simics at all?)

The current public Simics Quick-Start Platform model does not include the CSME or other secure boot features. It is a model limitation.

Related

What are the limitations of the Flask built-in web server

I'm a newbie in web server administration. I've read multiple times that the Flask built-in web server is not designed for "production", and must be used only for tests and debugging...
But what if my app serves only a thousand users who occasionally send data to the server?
If it works, when will I have to bother with the configuration of a more sophisticated web server? (I am looking for approximate metrics.)
In a nutshell, I would love to know what the built-in web server can do (with approximate thresholds) and what it cannot.
Thanks a lot!
There isn't one right answer to this question, but here are some things to keep in mind:
With the right amount of horizontal scaling, it is quite possible you could keep scaling out use of the debug server forever. When exactly you would need to start scaling (or switch to using a "real" web server) would also depend on the environment you are hosting in, the expectations of the users, etc.
The main issue you would probably run into is that the server is single-threaded by default, so it handles requests one at a time, serially. If you are trying to serve more than one request (including favicons and static items like images, CSS, and JavaScript files), the requests will queue up and take longer. If any given request happens to take a long time (say, 20 seconds), then your entire application is unresponsive for that time (20 seconds). This is only the default, of course: you could bump the thread count (or have requests handled in other processes), which might alleviate some issues. But once again, it can still be slow under a "high" load, and what counts as "high" depends on your application and the maximum response time you consider acceptable.
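As a side note, the development server can be told to handle requests in threads or in child processes, which softens (but does not remove) this problem. A minimal sketch, assuming the usual Flask app object; note that the exact threading default varies between Flask/Werkzeug versions, so passing the option explicitly makes the behavior clear:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    # threaded=True asks Werkzeug to handle each request in a new thread;
    # processes=N is the alternative (the two options are mutually exclusive).
    app.run(threaded=True)

This is still the development server, so the security and robustness caveats below apply unchanged.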
Another issue is security: if you are concerned at ALL about security (and not just the security of the data in the application itself, but the security of the box that will be running it as well) then you should not use the development server. It is not ready to withstand any sort of attack.
Finally, the development server could just fail outright. It is not designed to be used as a long-running process (days, weeks, months), and so it has not been well tested to work in this capacity.
So, yes, it has limitations. Yes, you could still conceivably use it in production. And yes, I would still recommend using a "real" web server. If you don't like the idea of needing to install something like Apache or Nginx, you can still go with a solution that is as easy as "run a Python script" by using one of the WSGI standalone servers, which can run a server designed for production with something as simple as running python run_app.py on the command line. You typically just need to create a 4-5 line Python script to import and create the server object, point it to your Flask app, and run it.
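As an illustration, one such standalone server is waitress (my choice for this sketch, not something named above); assuming your Flask app object lives in a module called myproject, as in the gunicorn example below, the whole run_app.py script is roughly:

# run_app.py - serve the Flask app with a production-grade WSGI server.
# Requires: pip install waitress
from waitress import serve

from myproject import app  # the module containing the Flask "app" object

if __name__ == "__main__":
    serve(app, host="0.0.0.0", port=8080)

Then python run_app.py starts a server that handles concurrent requests without the development server's caveats.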
gunicorn could be run with only the following on the command line, no extra script needed:
gunicorn myproject:app
...where "myproject" is the Python package that contains the app Flask object. Keep in mind that one of developers of gunicorn would probably recommend against this approach. See https://serverfault.com/questions/331256/why-do-i-need-nginx-and-something-like-gunicorn.
The OP has long since moved on, but for those who encounter this question in the future I would just add that setting up an Apache server, even on a laptop, is free and pretty easy. It can be readily configured for as few or as many features as you want just by uncommenting or commenting out lines in the config file. There might be an even easier GUI method for doing that nowadays, but just editing the configs is simple.

Bootloader and Firmware Common Usage and Firmware Upgrade

There are two cases to consider when working on an embedded system.
Embedded systems have limited resources, for example an ARM Cortex-M0 microcontroller with 12 KB of flash.
Case 1:
Common function/module usage for the bootloader and the firmware:
The bootloader and the firmware may need to use the same modules and functions, to prevent code duplication. Otherwise, the same code will be included twice, once in the firmware and once in the bootloader.
We can prevent this by placing the shared functions at known addresses and calling them through those addresses. This is one of the possible solutions.
Is there any smart method to provide common function usage?
Case 2:
Sometimes we need to upgrade the firmware. One of the duties of the bootloader is firmware upgrade. We can upgrade the firmware by overwriting the old image.
As we saw, the two cases can be implemented separately. But when we merge them, some problems appear.
Question:
Bootloaders are generally static objects, but firmware can be modified. Therefore, common functions are generally located in the bootloader. But when we need to update a common module/function, how can we do it?
What are the general or smart approaches for embedded systems structured as bootloader plus firmware, especially on such limited resources?
To separate out the common modules/functions, could one or more additional flash areas solve this problem:
Firmware, Bootloader, and Library (a new area)?
I want to learn the general approaches. Are there any papers, books, or other sources about advanced firmware management?
Thanks
If you share code between your bootloader and your mainline firmware application, then your bootloader will be executing that shared code while it erases and reflashes the application space. To prevent this condition you must keep the common code out of the upgradeable region, i.e. sacrifice the ability to update it; otherwise your bootloader will crash.
With only 12k of flash, it's pretty ambitious to expect a bootloader and mainline application to fit. You might consider writing the bootloader in assembly (gasp!). Some Cortex-M0 parts (such as the NXP LPC11xx family) have an additional boot ROM which stores some useful functions and helps alleviate some of the memory constraints.
Your question states the problem correctly - you cannot have your cake and eat it. Either:
1. You go for a small memory footprint and do not include firmware upgrade logic in the bootloader (i.e. the bootloader might just validate the application image CRC etc., but nothing more complicated). Here you could share functions to save space. OR
2. The bootloader has firmware upgrade functionality. Here you have to have the shared functions compiled into both the app and the bootloader. The shared functions should be small - probably not a huge overhead, but you need the space that this would take; if you don't have it, then you need to go for more memory.
There is no way to share functionality and do firmware upgrades from the bootloader reliably.
In light of the current discussion about security in the firmware update process, I would like to add the following for clarification:
Sharing code between the bootloader and the app will open yet another door for a potential attack, so you really want to avoid that.
The bootloader is the part you actually never want to change; it should be as static as possible. If the bootloader is broken, in-the-field updates become nearly impossible or at least insecure.
Having said that, you might want to use a different approach.
You could create a maintenance mode for your device.
This mode opens the JTAG interface and allows direct access to memory. Then the service technician could apply the update.
Now you "only" need to secure the activation of the maintenance mode.
The following could work:
Use a UART interface to communicate the activation.
The maintenance system sends its own ID and requests maintenance mode via UART.
The ID of the maintenance system, a random number, and a unique system ID are sent back to the maintenance system.
The maintenance system sends this ID sequence to your certification server.
If the unique system ID and the maintenance system's ID are correct, the server will create a signature of the information received and send it back to the maintenance system.
Your system now receives the signature via UART.
Your system verifies the signature against the previously sent ID string with a public key stored during production.
On successful verification, maintenance mode is entered (a sketch of this check follows below).
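A minimal sketch of that device-side verification step, purely for illustration: the real implementation would be C on the microcontroller, and the choice of RSA with SHA-256/PKCS#1 v1.5 and the Python cryptography package are my assumptions, not something specified above.

# Device-side check: verify the server's signature over the previously sent
# ID string using a public key stored during production.
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def maintenance_mode_allowed(public_key_pem: bytes,
                             id_string: bytes,
                             signature: bytes) -> bool:
    public_key = serialization.load_pem_public_key(public_key_pem)
    try:
        public_key.verify(signature, id_string,
                          padding.PKCS1v15(), hashes.SHA256())
        return True   # signature is valid: enter maintenance mode
    except InvalidSignature:
        return False  # reject: stay locked

Only the public key lives on the device, so compromising a device never yields the signing key.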
To add security, you definitely want to put some effort into the maintenance system's ID, following a similar scheme. The ID should basically depend on a MAC address or another unique hardware ID plus a signature of the same. The ID should be created in a secure environment during the production process of the maintenance system. The unique hardware ID should be something visible to the outside world, so the server can actually verify whether the ID received matches the maintenance system communicating with the server.
This whole setup would give you a secure firmware update without a bootloader.
To have secure firmware updates, the common understanding is that you need an authentication system based on asymmetric cryptography such as RSA. If you need the verification code anyway, the above replaces a bootloader capable of accepting updates with a simple UART interface, saving some resources in the process.
Is this something you were looking for?
A commercial bootloader in my experience uses between 4 and 8 k of flash memory, depending on the flash algorithm and a couple of other things. I have been sticking with the same vendor throughout my career, so this might vary from your experience.
A digital signature system optimized for embedded systems uses approximately 4.5kByte in flash memory (for an example, see here: https://www.segger.com/emlib-emsecure.html ) and no more RAM than the stack.
You see, that 12k is really really low in terms of having a system which can be updated securely in the field. And even more so, if you want the system to be updated using a bootloader.

Testing a Product that Includes Syncing and other Network Requests

I am nearing the release of an iOS app that syncs and otherwise interacts with a server. I am struggling with a testing procedure that can cover most/all possible situations. I don't have any experience with automated testing so I have been doing everything manually so far with the iPhone simulator and a physical device.
How would I start designing automated tests that can help me get better coverage of possible situations and also serve me well in the future as I make changes and add new features?
You probably need to be more specific in your question, i.e. outline how you communicate with your server, what technology is being employed, etc.
But as a general approach, the first thing I would be doing is looking for a way to get reproducible results from the server. For example, if I send a message asking for a record with an id of 'x', then the server will always return the same record with the same data. There are several ways to do this: one would be to load a set of test data into your server. Another would be to create a local test server and talk to that instead. Another option is to avoid the server altogether in your automated tests and mock out the communication classes in your app. It totally depends on what you are trying to test and how.
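The question concerns an iOS app, but the mocking idea is language-agnostic; as a rough illustration in Python (the SyncClient class, fetch_record method, and display_name function are hypothetical stand-ins for your app's communication layer):

import unittest
from unittest.mock import patch

class SyncClient:
    """Stand-in for the class that normally performs network requests."""
    def fetch_record(self, record_id):
        raise NotImplementedError("real implementation would hit the network")

def display_name(client, record_id):
    return client.fetch_record(record_id)["name"]

class DisplayNameTest(unittest.TestCase):
    def test_uses_record_name(self):
        # Replace the network call so the "server" always returns known data.
        with patch.object(SyncClient, "fetch_record",
                          return_value={"id": 42, "name": "Alice"}):
            self.assertEqual(display_name(SyncClient(), 42), "Alice")

if __name__ == "__main__":
    unittest.main()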
Once you have your back end dealt with, you can then look into automating the tests. This very much depends on how you have dealt with the server. For example, if you are performing an integration-style test where you actually talk to a server, then the test might take the following form (a sketch of the harness side follows the list):
Reset or clear the server data.
Load it with predictable data.
Run the iOS app using some testing framework and verify any data sent from the server.
Access the server and verify any changes made there.
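A rough sketch of steps 1, 2, and 4 from the test-harness side, assuming the test server exposes reset and seed endpoints (the base URL, endpoint paths, and fixture shape below are hypothetical):

# Integration-test scaffolding: reset the server, seed predictable data,
# then (after driving the app through its UI tests) verify server-side changes.
# Requires: pip install requests
import requests

BASE = "http://test-server.local/api"  # hypothetical test instance

def reset_and_seed():
    requests.post(f"{BASE}/test/reset", timeout=10).raise_for_status()
    fixture = {"records": [{"id": 1, "name": "Alice"}]}
    requests.post(f"{BASE}/test/seed", json=fixture, timeout=10).raise_for_status()

def verify_server_state(expected_name):
    record = requests.get(f"{BASE}/records/1", timeout=10).json()
    assert record["name"] == expected_name

Step 3, running the iOS app against this server, would be driven by whatever UI-testing framework you choose.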

How to determine the minimum JRE version and system requirements for my Java application

I have written an application in Java using the Eclipse IDE and I now need to know the minimum JRE version that is required to run the application. I know that certain methods are only available under later JREs, but I was wondering what the easiest way would be to find out the highest JRE requirement of my application, so any suggestions would be appreciated...
Also, whilst I am on the topic of requirements, I would appreciate any advice or methods for determining the minimum system requirements for my software in general - e.g. the minimum amount of RAM...
Thanks in advance
Method 1: For minimum JRE version, that's going to be tough. The easiest way is to simply require the same version that you're building against, or later, e.g. JRE 6.x.x or higher.
Method 2: Install multiple JDKs, making them available in Eclipse, and just change the version you're building against, running your app's test suite each time and making sure all tests pass. The earliest version of the JDK that allows all your tests to pass is the lowest JRE it can run against. Simply having your app compile successfully isn't enough, because previous versions of the JRE/JDK might have bugs that allow for successful compilation but don't allow for proper program execution.
Method 3: Always require the latest on the client side, because Oracle is constantly patching security holes, and ultimately, it may be best to require the latest versions, if you have that kind of control, on the client side.
As far as RAM goes, that's easy. When the JVM starts it sets a 'maximum' amount of heap memory (I believe the default may be 128MB), and that's a hard limit your application cannot exceed without crashing. Profile your app over time, tweaking the memory settings on the JVM, and find out the minimum amount of RAM your app needs to run both (a) with acceptable performance and (b) without throwing an OutOfMemoryError, and you're done.
Ref: How to configure JVM options and memory?
For other requirements such as CPU req., things get a little fuzzier. There are a lot of CPUs out there, and the throughput that a given system produces can vary not just based on CPU speed, but the speed of the hard drive, the amount of RAM installed in the system, the speed of the network interface (if you're writing a network app), and other things. For requirements such as that, you'll want to just test it on a variety of systems and sort of draw a line somewhere, and say, "You can expect acceptable performance if you have hardware that is at least as powerful as X, Y, Z".
The other thing you could do is build in a benchmark, or some kind of performance logging, and have that performance data sent back to you. Lots of apps do this. You know that "May we send anonymous usage data back to the mothership?" question you get when installing some software? Well, common among that data are system-specific details such as RAM, CPU, hard drive model, and other hardware details (whatever data you determine is relevant to your app), along with performance logging data. By taking that kind of approach, what you get is a lot of performance data from lots of different system configurations without needing to have a huge number of differently configured machines in-house.
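The application in question is Java, but the idea is language-agnostic; a rough Python sketch of such a report (the collection endpoint is hypothetical, and what you gather should match whatever the user consented to):

# Collect system details plus a simple timing benchmark and, only with the
# user's consent, POST them to a hypothetical collection endpoint.
import json
import platform
import time
import urllib.request

def benchmark_seconds():
    start = time.perf_counter()
    sum(i * i for i in range(1_000_000))  # stand-in for a workload you care about
    return time.perf_counter() - start

def send_report(user_consented: bool):
    if not user_consented:
        return
    report = {
        "os": platform.platform(),
        "machine": platform.machine(),
        "cpu": platform.processor(),
        "benchmark_seconds": benchmark_seconds(),
    }
    req = urllib.request.Request(
        "https://example.com/usage-report",  # hypothetical endpoint
        data=json.dumps(report).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)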
You can do the same thing for program crashes and bugs - have the stack traces, system info, and other relevant data dumped to a log file that is sent back to you - but of course, only if your users have said it's okay to send that data back to you.

Continuation of a process after a system crash/restart - Drools Flow

I've been playing with examples I downloaded with the book Drools JBoss Rules 5.0. To my relief they work :) Drools Flow has been my point of interest as a possible workflow engine replacement.
As I'm trying to wrap my head around things, I've been wondering how a ruleflow process gets restarted after a premature death. What I mean is: say a process is bouncing from one node to another as expected, and then the containing process dies due to a crash, restart, or whatever. Is the current node/place of the ruleflow process retained, and can it just continue from that point on system restart? If so, how?
The group I work for is very Java EE centric with JBoss being our favorite application server. I see examples of Drools leveraging Spring's persistence and bean lookup support.
Are there examples of doing the same with JBoss?
If you persist the state of the process instances and tasks in a database, then even if the VM goes down and is restarted, you can retrieve the process instances.
Use JPAKnowledgeService:
To create the session:
ksession = JPAKnowledgeService.newStatefulKnowledgeSession(kbase, null, env);
To load an existing session by its session id:
ksession = JPAKnowledgeService.loadStatefulKnowledgeSession(sessionId, kbase, null, env);
You only need to know the session id. Session information will be stored in the SessionInfo table. Download the example project below.
http://dl.dropbox.com/u/2634115/drools-test.zip
The example uses BTM (the Bitronix Transaction Manager) with an H2 database; it also works well with mysql-connector-java-5.1.13 and BTM. Note that processes that complete will be automatically deleted from the database.
You are looking at the basic concept of process migration. During what is known as strong migration, a process can be stopped on one machine and the entire state of the process migrated to another machine (including the program counter and all existing stacks). Before you go thinking that this is completely insane, think about it from a JVM perspective: since your application is already being run on virtual hardware, it isn't hard to stop the application and pick it back up where it left off, because it is completely virtualized.
If you would like another example, look at VMware: an entire machine can be paused and migrated to another machine and started again. It's very interesting stuff and relates mainly to distributed computing, where you might have hundreds of agents that need to migrate from machine to machine as some go down for maintenance.
I realize that I didn't give an example of this through JBoss; but giving a background on what exactly you're looking for can give you a much better insight into what to look for going forward.