How to measure the CPU cycles of a C function in an iPhone 4 application with Xcode 4?

Subtracting system times may be one method, but it includes the running time of all tasks/threads, while the function runs in only one thread of one task.
Instruments in Xcode may be another method, but how do I measure the time for a specific function?

You need to understand a few things here: firstly, the concept of a 'CPU cycle' isn't very useful. On a modern pipelined, cached processor it's in fact fairly meaningless, and you're never going to get an accurate result. You can use Valgrind (its Callgrind tool) to get detailed output on the number of instructions being executed, and in theory (that's a big 'in theory') you could use this information to derive cycle counts. Realistically it's impossible, and not worth the effort.
One would have to ask why you'd want to find this out in the first place.
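If what you actually need is how long the function takes rather than a literal cycle count, wall-clock timing with mach_absolute_time() is the usual approach on iOS. A minimal sketch, where my_expensive_function() is just a placeholder for the code you want to measure:

#include <mach/mach_time.h>
#include <cstdint>
#include <cstdio>

// Placeholder for the function you actually want to measure.
static void my_expensive_function() {
    volatile double x = 0;
    for (int i = 0; i < 1000000; ++i) x += i * 0.5;
}

int main() {
    // Conversion factor from mach "ticks" to nanoseconds.
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);

    uint64_t start = mach_absolute_time();
    my_expensive_function();
    uint64_t end = mach_absolute_time();

    uint64_t ns = (end - start) * tb.numer / tb.denom;
    std::printf("elapsed: %llu ns\n", (unsigned long long)ns);
    return 0;
}

Run the measurement many times and look at the distribution; a single sample is dominated by cache state and scheduler noise.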

Related

What is the best definition of an RTOS?

I have yet to find a definition of an RTOS that is specific enough to have meaning. The best one I can find is on Wikipedia:
https://en.wikipedia.org/wiki/Real-time_operating_system
However I have some critical comments/questions:
"Real Time" seems to be undefined in all the definitions of RTOS I've found. Nothing can be as fast as actual real time (infinitesimally small!). Therefore, I believe "real time" only makes sense in the context of the observer. Real time for a human using an iPhone might be <20ms, because human eyesight cannot detect changes faster than that. For an air-bag deployment it might be <1ms. All definitions on the internet seem to gloss over the definition of "real time"!
If an RTOS is defined by the requirement to execute something within a specific time frame (a "deadline"), why does jitter come into the definition? If the iPhone response jitters between 12-14ms, is it no longer responding in real time? It meets the 20ms requirement, right? If the response once went to 100ms, the user might notice, at which point the system is not an RTOS.
How can there possibly be a "soft" RTOS?! The definition of an RTOS is meeting a particular deadline requirement. If it doesn't meet it, then it's not an RTOS! The very definition of RTOS prohibits a "soft" RTOS.
To me it seems there is no formal and precise definition of an RTOS. It's a general term to describe the characteristic of an OS whose main priority is the appearance of "real time" (per some requirement number) to a particular type of observer. It also seems the name has taken on implementation meaning, such as how things are processed, multi-tasking, message passing, semaphores, etc... all of which may NOT be part of an RTOS at all if the system fails to respond within the "deadline" requirement, right?
Sorry about such a broad question, but I can't get a clear picture in my brain. All definitions I've found are simply not precise enough, or they cloud the definition with implementation details.
You're right that no definition specifies the exact time bounds. That's not the goal of a definition. Real time isn't dependent on the observer, though, but on the application. As applications differ, time bounds differ, and therefore a definition cannot give that bound as a number.
Jitter is irrelevant as long as the application's time bound is met. You're absolutely right about the example. If the deadline is 20 ms, taking 100 ms is a failure. If the OS is to blame for the delay, it's not an RTOS.
"Soft realtime" has a very specific meaning, and this is probably the only thing you really got wrong. The concept at work here is, what do you do when a task exceeds its deadline? (Note: this could be either the fault of the task itself or the RTOS.) In a hard realtime system, the task simply has no value anymore. A late outcome is as good as no outcome, and you cancel the task. No point in risking other tasks.
A soft RTOS is actually more complex. Finishing the task still has value, although a diminished one. So the RTOS cannot simply kill the task, yet it still has to ensure that other tasks meet their deadlines. That requires extra care, which wouldn't have been necessary if you could just kill the task.
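To make the hard/soft distinction concrete, here is a toy sketch in C++; the Task structure and the reactions to a missed deadline are purely illustrative, not any real RTOS API:

#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;

enum class Policy { Hard, Soft };

struct Task {
    Clock::time_point deadline;
    int priority = 10;
    bool cancelled = false;
};

// Hypothetical reaction of a scheduler to a task overrunning its deadline.
void on_deadline_miss(Task& t, Policy p) {
    if (Clock::now() <= t.deadline) return;   // deadline not missed yet
    if (p == Policy::Hard) {
        // Hard real time: a late result is worthless, so cancel the task
        // rather than let it endanger the deadlines of other tasks.
        t.cancelled = true;
    } else {
        // Soft real time: the late result still has (diminished) value,
        // so keep running it, but at a lower priority so that other
        // tasks can still meet their deadlines.
        t.priority -= 5;
    }
}

int main() {
    Task t;
    t.deadline = Clock::now() - std::chrono::milliseconds(10);  // already late
    on_deadline_miss(t, Policy::Soft);
    std::printf("cancelled=%d priority=%d\n", t.cancelled, t.priority);
    return 0;
}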
There is an Embedded Systems Dictionary. Here are some excerpts:
real-time adj. Having timeliness requirements, typically in the form of deadlines that can’t be missed.
real-time operating system n. An operating system designed specifically for use in real-time systems. Abbreviated RTOS.
real-time system n. Any computer system, embedded or otherwise, that has timeliness requirements. The following question can be used to distinguish real-time systems from the rest: "Is a late answer as bad, or even worse, than a wrong answer?" In other words, what happens if the computation doesn't finish in time? If nothing bad happens, it's not a real-time system. If someone dies or the mission fails, it's generally considered "hard" real-time, which is meant to imply that the system has hard deadlines. Everything in between is "soft" real-time.

How to implement deterministic single threaded network simulation

I read about how FoundationDB does its network testing/simulation here: http://www.slideshare.net/FoundationDB/deterministic-simulation-testing
I would like to implement something very similar, but cannot figure out how they actually implemented it. How would one go about writing, for example, a C++ class that does what they do? Is it possible to do the kind of simulation they do without doing any code generation (as they presumably do)?
Also: how can a simulation be repeated if it contains random events? Each run would have to choose new random values and thus would not be the same run as the one before. Maybe I am missing something here... hope somebody can shed a bit of light on the matter.
You can find a little bit more detail in the talk that went along with those slides here: https://www.youtube.com/watch?v=4fFDFbi3toc
As for the determinism question, you're right that a simulation cannot be repeated exactly unless all possible sources of randomness and other non-determinism are carefully controlled. To that end:
(1) Generate all random numbers from a PRNG that you seed with a known value.
(2) Avoid any sort of branching or conditionals based on facts about the world which you don't control (e.g. the time of day, the load on the machine, etc.), or if you can't help that, then pseudo-randomly simulate those things too.
(3) Ensure that whatever mechanism you pick for concurrency has a mode in which it can guarantee a deterministic execution order.
Since it's easy to mess all those things up, you'll also want to have a way of checking whether determinism has been violated.
All of this is covered in greater detail in the talk that I linked above.
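To make point (1) concrete, here is a minimal sketch in C++ (the event-loop and message names are invented for illustration) showing that a single, explicitly seeded PRNG makes "random" delays repeatable:

#include <cstdint>
#include <cstdio>
#include <random>

// Toy event loop: "network" delays are drawn from one seeded PRNG,
// so the same seed always reproduces the same sequence of events.
void run_simulation(uint64_t seed) {
    std::mt19937_64 rng(seed);  // the only source of randomness
    std::uniform_real_distribution<double> delay(0.001, 0.050);  // 1-50 ms simulated latency

    double now = 0.0;  // simulated clock; never read the real clock (point 2)
    for (int msg = 0; msg < 5; ++msg) {
        now += delay(rng);  // deterministic given the seed
        std::printf("t=%.6fs deliver message %d\n", now, msg);
    }
}

int main() {
    run_simulation(42);  // run twice with the same seed:
    run_simulation(42);  // identical output, so a failing run can be replayed
    return 0;
}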
In the sims I've built, the biggest issue with repeatability ends up being proper seed management (as per the previous answer). You want your simulations to give different results only when you supply a different seed to your random number generators than before.
After that, the biggest issue I've seen tends to be making sure you don't iterate over collections with nondeterministic ordering. For instance, in Java, you'd use a LinkedHashMap instead of a HashMap.
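If your simulation is in C++ rather than Java, the analogous trap is std::unordered_map, whose iteration order is unspecified; an ordered container such as std::map keeps any decision driven by iteration order deterministic. A small sketch:

#include <cstdio>
#include <map>
#include <string>
#include <unordered_map>

int main() {
    // Iteration order of an unordered_map depends on hashing and bucket
    // layout, which can differ between standard-library implementations.
    std::unordered_map<std::string, int> peers_unordered{
        {"node-a", 1}, {"node-b", 2}, {"node-c", 3}};

    // std::map iterates in key order, which is the same everywhere, so
    // any simulation decision driven by this loop stays deterministic.
    std::map<std::string, int> peers_ordered(peers_unordered.begin(),
                                             peers_unordered.end());

    for (const auto& [name, id] : peers_ordered)
        std::printf("%s -> %d\n", name.c_str(), id);
    return 0;
}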

Importance of knowing if a standard library function is executing a system call

Is it actually important for a programmer to know whether the standard library function he/she is using executes a system call? If so, why?
Intuitively, I'm guessing the only thing that matters is knowing whether the standard function in question is a library function or a system call itself. In other cases, I'm guessing there isn't much need to know whether a library function internally uses a system call?
It is not always possible to know for sure whether a library function wraps a system call. But in one way or another, this knowledge can help improve the portability and/or efficiency of your program. At least in the following two cases, knowing the syscall-level behaviour of your program is helpful.
When your program is time critical. Some system calls are expensive, and the library functions that wrap them are even more expensive. Thus time-critical tasks may need to switch to equivalent functions that do not enter kernel space at all.
It is also worth noting the vsyscall (or vDSO) mechanism of Linux, which accelerates some system calls (e.g. gettimeofday) by mapping their implementations into user-space memory; see the vdso(7) man page for more details. A rough timing sketch follows after these two cases.
When your program needs to be deployed to restricted environments with system-call auditing. In order for your program to survive such environments, it may be necessary to profile it for potential policy violations; the task is less daunting if you are aware of the restrictions while writing the program.
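As a rough, Linux-only illustration of the first case, the sketch below times gettimeofday() (normally served from the vDSO without entering the kernel) against the same call forced through syscall(); exact numbers will vary with machine and kernel:

#include <chrono>
#include <cstdio>
#include <sys/syscall.h>
#include <sys/time.h>
#include <unistd.h>

// Time `iters` invocations of `fn` and return nanoseconds per call.
template <typename F>
double ns_per_call(F fn, int iters) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) fn();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::nano>(end - start).count() / iters;
}

int main() {
    const int iters = 1000000;
    struct timeval tv;

    // Usually resolved in user space via the vDSO: no kernel entry.
    double fast = ns_per_call([&] { gettimeofday(&tv, nullptr); }, iters);

    // Force a real system call, bypassing the vDSO fast path.
    double slow = ns_per_call([&] { syscall(SYS_gettimeofday, &tv, nullptr); }, iters);

    std::printf("gettimeofday via libc/vDSO: %6.1f ns/call\n", fast);
    std::printf("forced syscall:             %6.1f ns/call\n", slow);
    return 0;
}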
Sometimes it might be important, and sometimes it isn't. I don't think there's any universal answer to this question. Reasons I can think of why it might matter in some contexts: the system call may require permissions the user might not have; in performance-critical code a system call might be too heavyweight; in a signal handler only async-signal-safe functions may be called; the call might consume some system resource (e.g. reading from /dev/random for every random number could drain the entropy pool, and you'd want to know if that's going to happen every time you call rand()).

Set custom production firing time in ACT-R

When defining a model in ACT-R, I would like to set a different firing time for each of my productions.
How could I do that?
Thanks!
Not too many ACT-R modelers here, huh?
First off, keep a copy of the ACT-R reference manual handy. This is a great resource that answers 90% of the questions you will have.
You can set a production's action time using (spp <production-name> :at <time>), or you can set the default action time using (sgp :dat <time>). Times are in seconds, so the default is 0.05.
That being said, you should modify these parameters very rarely, if at all. The whole point of production firing time is that it's supposed to represent a psychological constant. If you're tinkering with this, your model may fit the data but is less likely to be psychologically plausible. And if you don't care about psychological plausibility, then you shouldn't be using ACT-R! But there's an exception to every rule, so proceed with caution.
While this is a bit old, this question still comes up fairly high on Google when searching for ACT-R production firing times, so I feel it is acceptable to post a response.
As a published ACT-R modeler with 4 years under my belt, I would like to echo Jeff's statements. You very, very rarely modify most ACT-R parameters for the exact reason Jeff stated. All aspects of ACT-R and the amount of time certain modules take to fire are empirically backed by many studies. If you start changing these, then your model, like Jeff said, is completely implausible. While some modelers do change these values, they have empirical data to back up their reasons for changing any parameters.

Line Level Profiling for iPhone

I'm looking for a way to find out how much time is spent in each of my program's source lines when running on the iPhone, similar to what Shark can provide at the method/function level. Is this possible with the standard tools? Are there third-party tools that can provide this sort of granularity?
It wouldn't be necessary to collect profiling data for every line of source code in the project. Ideally, one would be able to select specific methods or functions whose performance would be analyzed.
This link talks about how to gather trace data on an iPhone app, and that includes sampling the stack. Unfortunately, I could not tell from the doc whether samples can be drawn at random wall-clock times, or only manually when you hit a key combination.
Once you have traces, you can get a call tree, and that should give you line-level information. In fact, the percentage of time a line is responsible for is a simple number: the fraction of stack samples containing that line. The problem is, the UI may not show you that; the fact that this is a useful statistic is not well known.
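If a tool gives you raw stack samples but not that number, the computation itself is trivial; here is a hedged sketch in C++ (the sample representation is invented purely for illustration):

#include <cstdio>
#include <set>
#include <string>
#include <vector>

// One stack sample: the set of "file:line" locations that were on the
// call stack at the instant the sample was taken.
using Sample = std::set<std::string>;

// Percentage of samples whose stack contains the given source line.
// That fraction is the line's (inclusive) share of run time.
double percent_for_line(const std::vector<Sample>& samples, const std::string& line) {
    if (samples.empty()) return 0.0;
    int hits = 0;
    for (const Sample& s : samples)
        if (s.count(line)) ++hits;
    return 100.0 * hits / samples.size();
}

int main() {
    std::vector<Sample> samples = {
        {"main.c:10", "parse.c:42"},
        {"main.c:10", "parse.c:42", "strcmp"},
        {"main.c:10", "render.c:7"},
    };
    std::printf("parse.c:42 is on the stack in %.0f%% of samples\n",
                percent_for_line(samples, "parse.c:42"));  // 2 of 3 -> 67%
    return 0;
}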