Cannot get CPU frequency in iOS 5? - ios5

This used to work, but it no longer seems to work on iOS 5:
#include <sys/sysctl.h>

int mib[2] = { CTL_HW, HW_CPU_FREQ };
unsigned freq = 0;
size_t len = sizeof(freq);
sysctl(mib, 2, &freq, &len, NULL, 0);
Does anybody know an alternative?
Thanks.

Apple doesn't provide the CPU frequency for all hardware. For example, it was unknown for some time exactly what the clock rate of the A4 in the iPod touch 4G was.
I think the best you can do is determine which device you are running on and build a lookup table of the CPU frequencies you can find on Wikipedia and elsewhere. Then, if you can't probe the CPU frequency directly, fall back to the table.
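A minimal sketch of that approach in C, identifying the device via sysctlbyname("hw.machine"); the model identifiers are real, but the MHz values in the table are illustrative placeholders you would replace with figures from Wikipedia and similar sources:

#include <sys/sysctl.h>
#include <string.h>

// Sketch of the lookup-table approach. "hw.machine" returns the model
// identifier (e.g. "iPhone3,1"); the frequencies below are placeholders.
static unsigned cpu_freq_mhz_for_device(void) {
    char machine[64] = {0};
    size_t len = sizeof(machine);
    if (sysctlbyname("hw.machine", machine, &len, NULL, 0) != 0)
        return 0;                                  // query failed
    struct { const char *model; unsigned mhz; } table[] = {
        { "iPhone3,1", 800 },                      // placeholder value
        { "iPhone4,1", 800 },                      // placeholder value
        { "iPad2,1",  1000 },                      // placeholder value
    };
    for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
        if (strcmp(machine, table[i].model) == 0)
            return table[i].mhz;
    return 0;                                      // unknown model
}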

Related

Accuracy of STM32L496 generated square wave

I have an STM32L496 MCU, and I want to generate a 3MHz square wave. I would like to know what would be the accuracy of this signal.
The system clock frequency of this MCU is 80 MHz. If I use a prescaler of 80 MHz / 3 MHz = 26.667 (can I do that?), then the timer will tick at a rate of 3 MHz. If I use a 16-bit timer (TIMER16), it would count to 65,535 maximum, which means it would increment once every 0.33 microseconds.
That is as far as I got, but I am not sure how to calculate the accuracy of this signal. Any help would be much appreciated!
If the core is 80MHz you can't make 3MHz exactly with a timer clocked from the same source as the core.
You can make 3.076923 MHz with an even mark/space ratio (prescaler 1, compare value 13, reset value 26), or you can make 2.962962 MHz (which is slightly closer) with a 13:14 mark/space ratio (prescaler 1, compare value 13 or 14, reset value 27).
To get 3MHz you would have to underclock your core down to 78MHz.
I don't know the exact part you are using. You might be able to get it exactly using one of the clock outputs, or a PLL other than the one that drives the core; e.g. if you have a 12 MHz crystal you can output 3 MHz easily on an MCO pin.
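For illustration, a small plain-C sketch (no STM32 HAL calls, just the arithmetic) that finds the two nearest integer dividers of an 80 MHz timer clock and their error relative to the 3 MHz target:

#include <stdio.h>

/* Sketch: for a timer clocked at f_timer, the achievable output frequencies
 * are f_timer / N for integer N, so the best you can do for 3 MHz from
 * 80 MHz is N = 26 (about 2.6% high) or N = 27 (about 1.2% low). */
int main(void) {
    const double f_timer  = 80e6;
    const double f_target = 3e6;

    unsigned n_lo = (unsigned)(f_timer / f_target);   /* 26 */
    unsigned n_hi = n_lo + 1;                         /* 27 */

    double f_above = f_timer / n_lo;                  /* 3.0769 MHz */
    double f_below = f_timer / n_hi;                  /* 2.9630 MHz */

    printf("divider %u -> %.4f MHz (%+.2f%% error)\n",
           n_lo, f_above / 1e6, 100.0 * (f_above - f_target) / f_target);
    printf("divider %u -> %.4f MHz (%+.2f%% error)\n",
           n_hi, f_below / 1e6, 100.0 * (f_below - f_target) / f_target);
    return 0;
}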

How does an OS with a 1 kHz tick rate make nanosecond measurements?

I've worked on low-level devices such as microcontrollers, and I understand how timers and interrupts work, but I was wondering how an operating system with a 1 kHz tick rate is able to measure events down to nanosecond precision. Are they using timers with input capture or something like that? Is there a limit to how many measurements you can take at once?
On modern x86 platforms, the TSC can be used for high-resolution time measurement, and even to generate high-resolution interrupts (that's a different interface, of course, but it counts in TSC ticks). There is no inherent limit to the number of parallel measurements. Before the "Invariant TSC" feature was introduced it was much harder to use the TSC that way, because the TSC rate varied with the processor's power state, so it usually went out of sync across cores and was in any case hard to use as a stopwatch.
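As a rough illustration, a minimal sketch of TSC-based measurement on x86 with GCC/Clang; the 3.0 GHz tick rate used for the nanosecond conversion is an assumed placeholder, since in practice the OS calibrates the TSC frequency at boot and does the conversion for you:

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc(), GCC/Clang on x86 */

int main(void) {
    const double tsc_ghz = 3.0;            /* assumed TSC frequency */

    uint64_t start = __rdtsc();
    volatile int x = 0;
    for (int i = 0; i < 1000; i++) x += i; /* the event being timed */
    uint64_t end = __rdtsc();

    /* Ticks elapsed, converted to nanoseconds under the assumed frequency. */
    printf("%llu ticks (~%.0f ns at %.1f GHz)\n",
           (unsigned long long)(end - start),
           (double)(end - start) / tsc_ghz, tsc_ghz);
    return 0;
}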

How to calculate battery time, 3G talk time, game play etc. in iOS

I want to calculate the remaining time for
3D gaming, 3G talk time, video use, WiFi internet, 2G talk time, audio use, and standby, depending on the current battery level.
There are many apps that do this, but I haven't found any resource on how to calculate it in iOS.
I can read the battery level:
[myDevice setBatteryMonitoringEnabled:YES];
float batLeft = [myDevice batteryLevel];
int i=[myDevice batteryState];
int batinfo=(batLeft*100);
Now, on the basis of batinfo, I need to calculate the remaining time for the other categories.
Thanks
Apple provides some of this data in the device specifications; you can use it for your calculation.
For example, Apple quotes approximately 8 hours of 3G talk time for the iPhone 5.
Now,
int batinfo = (batLeft * 100);
int minutes3G = 480;   // 8 hours of 3G talk time, in minutes (static)
int talk3G = (minutes3G * batinfo) / 100;
int hour = talk3G / 60;
int min = talk3G % 60;
lbl3Gtalk.text = [lbl3Gtalk.text stringByAppendingFormat:@"Remaining Time : %dh %dmin", hour, min];

Getting carrier name and signal strength returns wrong values on iPhone

I am curious why I get wrong values when getting the carrier name and signal strength.
Here is the code:
#import <CoreTelephony/CTTelephonyNetworkInfo.h>
#import <CoreTelephony/CTCarrier.h>

CTTelephonyNetworkInfo *netinfo = [[CTTelephonyNetworkInfo alloc] init];
CTCarrier *car = [netinfo subscriberCellularProvider];
NSLog(@"Carrier Name: %@", car.carrierName);
[netinfo release];
Why do I get the value "carrier" instead of the carrier I actually use?
This is the code to get the signal strength:
#include <dlfcn.h>

void *libHandle = dlopen("/System/Library/Frameworks/CoreTelephony.framework/CoreTelephony", RTLD_LAZY);
int (*CTGetSignalStrength)();
CTGetSignalStrength = dlsym(libHandle, "CTGetSignalStrength");
if (CTGetSignalStrength == NULL) NSLog(@"Could not find CTGetSignalStrength");
int result = CTGetSignalStrength();
NSLog(@"Signal strength: %d", result);
dlclose(libHandle);
As far as I know, signal strength is a dBm value (negative), so why does the value above come out positive and not show the actual signal strength?
Is there any mapping to present this value in dBm?
P.S. I ran the program on real iPhone devices and still get the wrong values.
Any help would be appreciated.
Thanks.
About the carrier: Running your code on the simulator gives me nil while running on a device correctly says 2011-11-24 10:49:05.182 testapp[12579:707] Carrier Name: Vodafone.de, so the code is absolutely correct (running on iOS 5.0.1 using Xcode 4.2). Maybe your carrier didn't fill out some field correctly? In any case I would consider testing on another device or with another SIM card.
Concerning signal strength: as CTGetSignalStrength is a rather undocumented API, the values may be arbitrarily defined by Apple (and redefined at any time). In any case this seems to be an RSSI value (received signal strength indication), which is more or less a positive number where 1 is the worst signal strength and higher is better. As such there is no predefined (documented and thus stable) mapping to dBm values; a mapping would probably have to be determined experimentally.
It is quite common that signal strength values are returned as integer numbers. The tricky point is the mapping to the corresponding dBm value. Usually the int values provide a resolution of 0.5, 1, or 2 dBm. The dBm values reported by the handset/modem usually range from -115 to -51 dBm for 2G (GSM/EDGE) and -120 to -25 dBm for 3G (UMTS/HSxPA) and represent the RSSI (received signal strength indicator).
E.g. the Android API uses the default 3GPP mapping (see Android reference).
Please also take into account that the baseband modem differs between the iPhone 4S (Qualcomm) and earlier models, which used an Infineon X-Gold.
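For reference, a small sketch of the conventional 3GPP TS 27.007 RSSI-to-dBm mapping for GSM (the one the Android API exposes); whether CTGetSignalStrength uses this exact scale is an assumption that would have to be verified experimentally, e.g. against the bars shown by the device:

#include <stdio.h>

/* 3GPP TS 27.007 GSM mapping: 0 -> -113 dBm or less, 31 -> -51 dBm or more,
 * 99 -> unknown. Whether the iPhone's value uses this scale is an assumption. */
static int gsm_rssi_to_dbm(int rssi) {
    if (rssi < 0 || rssi == 99) return 0;      /* unknown / not detectable */
    if (rssi > 31) rssi = 31;
    return -113 + 2 * rssi;                    /* 2 dBm per step */
}

int main(void) {
    int samples[] = { 0, 10, 20, 31 };
    for (int i = 0; i < 4; i++)
        printf("RSSI %2d -> %d dBm\n", samples[i], gsm_rssi_to_dbm(samples[i]));
    return 0;
}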

Main memory bandwidth measurement

I want to measure main memory bandwidth, and while looking into the methodology I found that:
1. Many use the 'bcopy' function to copy bytes from a source to a destination and then measure the time, which they report as the bandwidth.
2. Another way is to allocate an array and walk through it (with some stride); this basically gives the time to read the entire array.
I tried doing (1) for a data size of 1 GB, and the bandwidth I got is about 700 MB/sec (I used rdtsc to count the number of cycles elapsed during the copy). But I suspect this is not correct, because my RAM configuration is as follows:
Speed: 1333 MHz
Bus width: 32bit
As per Wikipedia, the theoretical bandwidth is calculated as follows:
clock speed * bus width * # bits per clock cycle per line (2 for DDR3 RAM)
1333 MHz * 32 * 2 ~= 8 GB/sec
So mine is completely different from the estimated bandwidth. Any idea what I am doing wrong?
=========
The other question is: bcopy involves both a read and a write, so does that mean I should divide the calculated bandwidth by two to get the read-only or the write-only bandwidth? I would also like to confirm whether the bandwidth is just the inverse of latency. Please suggest any other ways of measuring the bandwidth.
I can't comment on the effectiveness of bcopy, but the most straightforward approach is the second method you stated (with a stride of 1). Additionally, you are confusing bits with bytes in your memory bandwidth equation: 32 bits = 4 bytes, and modern computers use 64-bit-wide memory buses. So your effective transfer rate (assuming DDR3 tech) is:
1333 MHz * 64 bit / (8 bits/byte) = 10666 MB/s (also classified as PC3-10666)
The 1333 MHz already has the 2 transfers/clock factored in.
Check out the wiki page for more info: http://en.wikipedia.org/wiki/DDR3_SDRAM
Regarding your results, try again with the array access. Malloc 1GB and traverse the entire thing. You can sum each element of the array and print it out so your compiler doesn't think it's dead code.
Something like this:
double time;
size_t size = 1024*1024*1024;   // 1 GiB
long long sum = 0;              // wide accumulator so the sum can't overflow
char *array = (char*)malloc(size);
//start timer here
for (size_t i = 0; i < size; i++)
    sum += array[i];
//end timer
printf("time taken: %f \tsum is %lld\n", time, sum);