I built a compute-intensive app using OpenCV for iOS. Of course it was slow, but it was something like 200 times slower than my PC prototype, so I started optimizing. From the initial 15 seconds I got it down to 0.4 seconds. I wonder whether I have found everything, and what others may want to share. What I did:
Replaced "double" data types inside OpenCV with "float". Double is 64-bit and a 32-bit CPU cannot handle it easily, so float gave me some speed. OpenCV uses double very often.
Added "-mfpu=neon" to the compiler options. A side effect was a new problem: the simulator build does not work anymore, so everything can be tested only on real hardware.
Replaced the sin() and cos() implementations with 90-value lookup tables. The speedup was huge! This is somewhat the opposite of the PC, where such optimizations do not give any speedup. There was also code working in degrees whose values were converted to radians for sin() and cos(); that conversion code was removed too, but the lookup tables did most of the job.
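A simplified sketch of the idea (hypothetical helper names; this version uses 360 entries, while the 90-value variant folds the angle onto one quadrant first):
#include <math.h>

/* One entry per degree; built once at startup. */
static float sin_table[360];

static void init_trig_table(void)
{
    for (int d = 0; d < 360; d++)
        sin_table[d] = (float)sin(d * M_PI / 180.0);
}

/* The caller already works in degrees, so no degree-to-radian conversion. */
static inline float fast_sin_deg(int deg)
{
    return sin_table[((deg % 360) + 360) % 360];
}

static inline float fast_cos_deg(int deg)
{
    return fast_sin_deg(deg + 90);   /* cos(x) = sin(x + 90 degrees) */
}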
Enabled "thumb optimizations". Some blog posts recommend exactly opposite but this is because thumb makes things usually slower on armv6. armv7 is free of any problems and makes things just faster and smaller.
To make sure thumb optimizations and -mfpu=neon work at best and do not introduce crashes I removed armv6 target completely. All my code is compiled to armv7 and this is also listed as requirement in app store. This means minimum iPhone will be 3GS. I think it is OK to drop older ones. Anyway older ones have slower CPUs and CPU intensive app provides bad user experience if installed on old device.
Of course I use -O3 flag
I deleted "dead code" from OpenCV. Often when optimizing OpenCV I see code which is clearly not needed for my project. For example often there is a extra "if()" to check for pixel size being 8 bit or 32 bit and I know that I need 8bit only. This removes some code, provides optimizer better chance to remove something more or replace with constants. Also code fits better into cache.
Any other tricks and ideas? For me, enabling Thumb and replacing trigonometry with lookups were the big boosts, and they surprised me. Maybe you know something more that makes apps fly?
If you are doing a lot of floating point calculations, it would benefit you greatly to use Apple's Accelerate framework. It is designed to use the floating point hardware to do calculations on vectors in parallel.
I will also address your points one by one:
1) This is not because of the CPU as such; it is because, as of the armv7 era, only 32-bit floating point operations are calculated in the floating point hardware (Apple replaced the hardware), while 64-bit ones are calculated in software instead. In exchange, 32-bit operations got much faster.
2) NEON is the name of the new floating point processor instruction set
3) Yes, this is a well-known method. An alternative is to use Apple's framework that I mentioned above: it provides sin and cos functions that calculate 4 values in parallel (see the sketch at the end of this answer). The algorithms are fine-tuned in assembly and NEON, so they give maximum performance while using minimal battery.
4) The new armv7 implementation of thumb doesn't have the drawbacks of armv6. The disabling recommendation only applies to v6.
5) Yes; considering that 80% of users are on iOS 5.0 or above now (armv6 devices stopped receiving updates at 4.2.1), that is perfectly acceptable for most situations.
6) This happens automatically when you build in release mode.
7) Yes, though this won't have as large an effect as the methods above.
My recommendation is to check out Accelerate. That way you can make sure you are leveraging the full power of the floating point processor.
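For illustration, calling the vectorized trig from Accelerate's vForce looks roughly like this (a sketch; the wrapper name and buffers are mine):
#include <Accelerate/Accelerate.h>

/* Sketch: fill out[i] = sin(angles_rad[i]) for a whole buffer at once,
 * instead of calling sinf() in a loop. Angles must already be in radians. */
static void batch_sin(float *out, const float *angles_rad, int count)
{
    vvsinf(out, angles_rad, &count);   /* vForce vectorized sine */
}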
Some feedback on the previous posts: this expands on the dead-code idea from point 7, which was meant to be a slightly broader point. I need formatting, so the comment form cannot be used. OpenCV contained code like this:
for( kk = 0; kk < (int)(descriptors->elem_size/sizeof(vec[0])); kk++ ) {
vec[kk] = 0;
}
I wanted to see how it looks in assembly. To make sure I could find it in the assembly output, I wrapped it like this:
__asm__("#start");
for( kk = 0; kk < (int)(descriptors->elem_size/sizeof(vec[0])); kk++ ) {
vec[kk] = 0;
}
__asm__("#stop");
Now I press "Product -> Generate Output -> Assembly file" and what I get is:
# InlineAsm Start
#start
# InlineAsm End
Ltmp1915:
ldr r0, [sp, #84]
movs r1, #0
ldr r0, [r0, #16]
ldr r0, [r0, #28]
cmp r0, #4
mov r0, r4
blo LBB14_71
LBB14_70:
Ltmp1916:
ldr r3, [sp, #84]
movs r2, #0
Ltmp1917:
str r2, [r0], #4
adds r1, #1
Ltmp1918:
Ltmp1919:
ldr r2, [r3, #16]
ldr r2, [r2, #28]
lsrs r2, r2, #2
cmp r2, r1
bgt LBB14_70
LBB14_71:
Ltmp1920:
add.w r0, r4, #8
# InlineAsm Start
#stop
# InlineAsm End
That is a lot of code. I printf'd the value of (int)(descriptors->elem_size/sizeof(vec[0])) and it was always 64, so I hardcoded it to 64 and ran it through the assembler again:
# InlineAsm Start
#start
# InlineAsm End
Ltmp1915:
vldr.32 s16, LCPI14_7
mov r0, r4
movs r1, #0
mov.w r2, #256
blx _memset
# InlineAsm Start
#stop
# InlineAsm End
As you can see, the optimizer now got the idea and the code became much shorter: it replaced the whole loop with a single memset call. The point is that the compiler does not always know which inputs are constants when they come from something like a camera frame size or pixel depth, but in my context they are in practice constant, and all I care about is speed.
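For reference, the hardcoded variant is simply this; the assembly above shows the optimizer turning it into a 256-byte memset:
for( kk = 0; kk < 64; kk++ ) {   /* 64 is the constant printf showed at runtime */
    vec[kk] = 0;
}
/* ...which the compiler reduces to the equivalent of: */
memset(vec, 0, 64 * sizeof(vec[0]));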
I also tried Accelerate, as suggested, replacing the three lines with:
__asm__("#start");
vDSP_vclr(vec,1,64);
__asm__("#stop");
The assembly now looks like this:
# InlineAsm Start
#start
# InlineAsm End
Ltmp1917:
str r1, [r7, #-140]
Ltmp1459:
Ltmp1918:
movs r1, #1
movs r2, #64
blx _vDSP_vclr
Ltmp1460:
Ltmp1919:
add.w r0, r4, #8
# InlineAsm Start
#stop
# InlineAsm End
I'm not sure whether this is faster than bzero, though. In my context this part does not take much time, and the two variants seemed to run at the same speed.
One more thing I learned is to use the GPU. More about that here: http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpuimage-framework
Related
I have been doing the Baking Pi tutorial, and I have studied the SVC system call. The tutorial sets the base of my program to 0x8000, but the vector table base is 0. How do I access 0x0 with the GNU assembler, and which kernel.ld should I use now?
Depending on the Pi, you start at 0x8000 or 0x80000 by default. There are now different filenames that tell the bootloader what mode you want the processor in (kernel.img, kernel7.img, kernel32.img, or various combinations); you can easily look this up.
Baking Pi, first off, had issues as written, but they have been asked and answered many times in the bare-metal forum on the Raspberry Pi website (a very good resource, the best I have seen in a long time, if not ever). You will need to be using an old, old Pi or a Pi Zero to get the tutorial to work, unless it has been updated.
This is bare metal: you own the whole address space, so if you want to put something at zero you simply do that.
Another approach: you can create a config.txt file and in it tell the bootloader in the GPU to load your image at 0x00000000 in the ARM's address space. Depending on the ARM core, you can also use the VTOR register, if present, to change where the vector table is (so set it to 0x80000 instead of 0x0000); I don't think the ARM11 in the Pi Zero or the old, old Pis allows that, though. 32-bit mode on the newer ones does, but they are multi-core, and that will unravel any learning exercise: you have to "sort the cores", as I like to say, on boot, isolating one to continue and putting the others in an infinite loop so they don't interfere. The boot code that the GPU lays down on those Pis does this for you so that only one core hits 0x8000 or 0x80000, so the config.txt approach is something folks contemplate, but I would recommend against it for a while.
There are a number of tutorials linked in the Raspberry Pi bare-metal forum on their website that should take you well beyond the Baking Pi one(s), and/or help you through those, as folks struggled with them for some time.
A linker script like this
MEMORY
{
ram : ORIGIN = 0x8000, LENGTH = 0x10000
}
SECTIONS
{
.text : { *(.text*) } > ram
.rodata : { *(.rodata*) } > ram
.bss : { *(.bss*) } > ram
.data : { *(.data*) } > ram
}
with a bootstrap like this
.globl _start
_start:
mov sp,#0x8000      @ stack grows down from the load address
bl main             @ call the C entry point
hang: b hang        @ if main ever returns, spin forever
should get you booted.
For the linker script you may need 0x80000 instead of 0x8000, and if you have at least one .data item, like a global variable:
unsigned int x = 5;
Then the bootstrap doesn't have to zero .bss (if your programming style is such that you rely on that): objcopy will pad the -O binary file with zeros between .rodata and .data if there is a .data section there, which takes care of zeroing .bss.
You can let the tools do the work for you as far as an exception table goes:
.globl _start
_start:
ldr pc,reset_handler
ldr pc,undefined_handler
ldr pc,swi_handler
ldr pc,prefetch_handler
ldr pc,data_handler
ldr pc,unused_handler
ldr pc,irq_handler
ldr pc,fiq_handler
reset_handler: .word reset
undefined_handler: .word hang
swi_handler: .word hang
prefetch_handler: .word hang
data_handler: .word hang
unused_handler: .word hang
irq_handler: .word irq
fiq_handler: .word hang
reset:
mov r0,#0x8000      @ source: the table as linked/loaded at 0x8000
mov r1,#0x0000      @ destination: 0x0000, where the ARM exception vectors live
ldmia r0!,{r2,r3,r4,r5,r6,r7,r8,r9}   @ copy the 8 vector instructions
stmia r1!,{r2,r3,r4,r5,r6,r7,r8,r9}
ldmia r0!,{r2,r3,r4,r5,r6,r7,r8,r9}   @ copy the 8 handler address words
stmia r1!,{r2,r3,r4,r5,r6,r7,r8,r9}
Now, if this is not a Pi Zero, the vector table works differently. You need to read the ARM docs anyway before going off into stuff like this; read up on the core and its modes as well as the architecture docs for whichever one you are using. The newer Pis have an armv7 mode and an armv8 mode (aarch32 and aarch64), and each has its own challenges, but they have all been covered in the forum.
Consider a RISC pipeline having 5 stages. Find how many cycles are required for the instructions given below. Assume operand forwarding, and that branch prediction is used in which the branch is predicted not taken. ACS is the branch instruction, and the five stages are Instruction Fetch, Decode, Execute, Memory and Write Back.
I1: ACS R0, R1,X
I2: LOAD R2, 0(R3)
I3: SUB R4, R2, R2
I4: X: ADD R5, R1, R2
I5: LOAD R1, 0(R5)
I6: SUB R1, R1, R4
I7: ADD R1, R1, R5
A. 11
B. 12
C. 13
D. 14
Solution: (the quiz's worked solution and its pipeline diagram are not reproduced here)
In the solution, I couldn't understand why they have neglected the 2 extra DECODE (stall) cycles in I6 and I7, although they have a RAW dependency?
Source of the question:
Question 41 of https://practice.geeksforgeeks.org/contest-quiz/sudo-gate-2020-mock-iii
I think the answer gives the right total (13 cycles) but puts the stall in the wrong instruction.
I5 doesn't need to stall; I4 (ADD R5, R1, R2) produces R5 in time to forward it to the next instruction's EX stage for the address calculation in I5 (LOAD R1, 0(R5)). (Your 5-stage classic RISC pipeline has bypass forwarding.)
But I6 reads the result of a load instruction, and loads produce their result a cycle later than the ALU does in EX. So, like I3, it is I6 that needs to stall, not I5.
(I7 depends on I6, but I6 is an ALU instruction so it can forward without stalling.)
They stall in the D stage, because the ID stage can't fetch register values that the I2 / I5 load hasn't produced yet.
Separately from that, your diagram shows I4 (and what should be I7) not even being fetched while the previous instruction stalls. That doesn't make sense to me: at the start of that cycle, the pipeline doesn't even know it needs to stall, because it hasn't yet decoded I3 (and I6) and detected that it reads a not-yet-ready register, so that an interlock is needed.
Fetch doesn't wait until after the previous instruction is decoded to see whether it stalled; that would defeat the entire purpose of pipelining. It should look like this:
I3:    IF  D   D   EX  MEM WB
I4:        IF  IF  D   EX  MEM WB
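Putting the whole sequence together under those assumptions (the branch is correctly predicted not taken so nothing is squashed, full forwarding, a one-cycle load-use stall after each load), a consistent 13-cycle diagram (my own rendering, not the quiz's) is:
Cycle:  1   2   3   4   5   6   7   8   9   10  11  12  13
I1:     IF  D   EX  MEM WB
I2:         IF  D   EX  MEM WB
I3:             IF  D   D   EX  MEM WB
I4:                 IF  IF  D   EX  MEM WB
I5:                         IF  D   EX  MEM WB
I6:                             IF  D   D   EX  MEM WB
I7:                                 IF  IF  D   EX  MEM WB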
BTW, load latency is the reason that classic MIPS has a load-delay slot (unpredictable behaviour if you try to use a register in the next instruction after loading into it). Later MIPS added interlocks to stall if you do that, instead of making it an error, so you can keep static code-size smaller (no NOP filler) in cases where you can't find any other instruction to put in that slot. (And some even later MIPS did out-of-order exec which can hide latency.)
I have been fighting with this subject for a while. I am using an STM32F103C8 with an ST-LINK V2 on Atollic.
I made some delay functions in assembly. I have been testing this piece of code using an oscilloscope on an ATSAM (84 MHz, works perfectly), and on the STM32 I also use a CPU debug unit, the DWT (Data Watchpoint and Trace), to see the exact number of cycles while debugging.
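For reference, enabling the cycle counter looks roughly like this with the standard CMSIS names (a sketch; the device header name depends on your setup):
#include "stm32f1xx.h"   /* CMSIS device header; older toolchains use stm32f10x.h */

/* Enable the DWT cycle counter so DWT->CYCCNT counts CPU clock cycles. */
static void dwt_cycle_counter_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* turn on the trace block */
    DWT->CYCCNT = 0;                                 /* reset the counter       */
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;            /* start counting          */
}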
When I configure the STM32 CPU clock to 24 MHz, the exact number of cycles I designed the delay for is correct: 1 cycle for the decrement instruction and 2 cycles for the branch instruction (in most cases), so the main loop spends 3 cycles.
When I change the CPU clock to 72 MHz, each assembly instruction takes twice that time!
Well, the prefetch buffer is 2x64 bits, so the wait states should not influence the CPU execution time (not considering prediction or other stalls) on this microcontroller. Should they?
Well, at 24 MHz the flash memory has no wait states; with a higher clock, the CPU should not have to wait to execute any code. Should it?
I tried flashing the release hex to see whether it made any difference and did not find any.
My only remaining explanation would be the ST-LINK V2. Am I right?
Thanks a lot for your time and attention.
This is the piece of the code that matters:
asm (".equ fcpu, 72000000\n\t"); //72 MHz
asm (".equ const_ms, fcpu/3000 \n\t");
asm (".equ const_us, fcpu/3000000 \n\t");
void delay_us(uint32_t valor)
{
asm volatile ( "movw r1, #:lower16:const_us \n\t"
"movt r1, #:upper16:const_us \n\t"
"mul r0, r0, r1 \n\t"
"r_us: subs r0, r0, #1 \n\t"
"bne r_us \n\t");
}
void delay_ms(uint32_t valor)
{
asm volatile ("movw r1, #:lower16:const_ms \n\t"
"movt r1, #:upper16:const_ms \n\t"
"mul r0, r0, r1 \n\t"
"r_ms: subs r0, r0, #1 \n\t"
"bne r_ms \n\t");
}
It is because of the wait states of the FLASH memory when running at 72 MHz. It is good to read the documentation :).
Place the code in the SRAM and you will get what you want.
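With GCC, one way to do that looks roughly like this (a sketch; it relies on the startup code copying the chosen section into SRAM, which the default Atollic linker script may or may not already do):
#include <stdint.h>

/* Run the delay loop from SRAM instead of FLASH by placing the function in
 * .data (copied to RAM at startup); long_call keeps branches to/from FLASH valid. */
__attribute__((section(".data"), long_call, noinline))
void delay_loops_ram(uint32_t loops)
{
    __asm volatile (
        "1: subs %0, %0, #1 \n\t"
        "   bne  1b         \n\t"
        : "+r" (loops) : : "cc");
}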
For good results from FLASH, avoid branching, as it flushes the pipeline. This kind of delay is only good for very short delays; anything longer should be implemented using timers.
I advise avoiding delays in the code.
PS: the ST-LINK is not guilty :)
I have been doing several tests. My first conclusion is that the overhead depends on the alignment of the instructions in memory (the prefetch buffer is 2x64 bits).
Second, it is because of the deterministic behavior of the branch: when it is taken, it flushes the prefetch buffer and also the pipeline.
I know CPU registers are used for fast access, but could anyone give me an example of the data stored in them? Why are these data so important that they have to be saved by the operating system during a context switch?
I would place registers in two groups:
System Registers
Registers that define the process state
System registers do not change with process contexts. Classically, the second group of registers includes:
A processor status register
General registers
Memory mapping registers
You seem to be most interested in #2, judging from your question. For simplicity, I will use the VAX processor as the working example (the Intel Kludge-On-A-Chip is overly complex).
The VAX has 16 32-bit registers (R0–R15). Some of those registers (R12–R15) have special purposes:
PC = Program Counter; points to the next instruction to execute.
SP = Stack Pointer; points to the bottom of the stack for the current mode.
AP = Argument Pointer; points to the arguments of a function call.
FP = Frame Pointer; used to restore the stack after a function call completes.
That leaves R0–R11 for general use.
R6-R11 can be used by programmers at will.
R0-R5 can be used by programmers but some instructions change their values.
The registers are 32 bits. They can then store:
One-Byte signed or unsigned integer
Two-byte signed or unsigned integer
Four-byte signed or unsigned integer
Four-byte floating point
You can do things like this:
ADDL3 R0, R1, R2 ; Add contents of R0 and R1 and store the result in R2
ADDF3 R0, R1, R2 ; Add contents of R0 and R1 as floating point and store the result in R2
In the first case, the processor treats the contents of R0 and R1 as 32-bit signed integers. In the second case, it treats the contents of R0 and R1 as 32-bit floating point values.
The interpretation of the register contents depends upon the instruction being executed. Thus, the two instructions above are likely to store different values in R2, even if they have the same values in R0 and R1.
For larger data types, adjacent registers can be combined:
ADDD3 R0, R2, R4
This adds the contents of R0/R1 to the contents of R2/R3 and stores the result in R4/R5, treating the contents of all the register pairs as 64-bit floating point values.
You can even do
ADDH3 R0, R4, R8
This adds the contents of R0/R1/R2/R3 to the contents of R4/R5/R6/R7 and stores the result in R8/R9/R10/R11, treating the contents of all the register quads as 128-bit floating point values.
The VAX has character and some complex matching instructions that use R0–R5 for special purposes (such as loop counters). These are instructions with long execution times that can be interrupted; using the registers to maintain the state of the instruction allows it to be restarted midstream when the process is resumed.
Programmers can use R0–R5 as well; there is no problem with that as long as you don't use the instructions that disrupt them.
By convention, R0 and R1 are used for function return values.
So these are the kinds of things you do with registers.
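To tie this back to context switching: conceptually, the per-process state the OS must save and restore looks something like this (a schematic sketch in C, not any real operating system's layout):
#include <stdint.h>

/* Illustrative only: the "group 2" registers saved for each process. */
struct process_context {
    uint32_t general[12];     /* R0-R11: whatever values the program left there   */
    uint32_t ap, fp, sp, pc;  /* argument, frame, stack pointers, program counter */
    uint32_t psl;             /* processor status (condition codes, mode, ...)    */
    uint32_t mapping_base;    /* memory-mapping register(s) for this process      */
};

/* On a switch, the OS stores the outgoing process's registers into its context
 * block and reloads the incoming process's block, which is exactly why these
 * register contents have to be preserved. */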
They are not just for fast access; they are the core of the CPU, and every operation is done on them. The CPU can add two numbers, for example, only after you move them from memory into registers.
I wrote this very naive NEON implementation to convert from RGBA to RGB. It works, but I was wondering whether there is anything else I could do to further improve performance.
I tried playing around with the prefetching size and unrolling the loop a bit more, but performance didn't change much. By the way, are there any rules of thumb for sizing the prefetch distance? I couldn't find anything useful on the net. Furthermore, in the "ARMv8 Instruction Set Overview" I see there is also a prefetch for stores; how is that useful?
Currently I'm getting around 1.7 ms to convert a 1280x720 image on an iPhone 5s.
// unsigned int * rgba2rgb_neon(unsigned int * pDst, unsigned int * pSrc, unsigned int count);
_rgba2rgb_neon:
cmp w2, #0x7
b.gt loop
mov w0, #0
ret
loop:
prfm pldl1strm, [x1, #64]
ld4.8b {v0, v1, v2, v3}, [x1], #32
ld4.8b {v4, v5, v6, v7}, [x1], #32
prfm pldl1strm, [x1, #64]
st3.8b {v0, v1, v2}, [x0], #24
st3.8b {v4, v5, v6}, [x0], #24
subs w2, w2, #16
b.gt loop
done:
ret
First (since I assume you’re targeting iOS), vImage (part of the Accelerate.framework) provides this conversion for you, as vImageConvert_RGBA8888toRGB888. This has the advantage of being available on all iOS and OS X systems, so you don’t need to write separate implementations for arm64, armv7s, armv7, i386, x86_64.
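Calling it looks roughly like this (a sketch; buffer allocation and error handling are yours to fill in):
#include <Accelerate/Accelerate.h>

/* Sketch: convert a tightly packed width x height RGBA8888 image to RGB888. */
vImage_Error rgba_to_rgb(void *rgba, void *rgb, vImagePixelCount width, vImagePixelCount height)
{
    vImage_Buffer src  = { rgba, height, width, width * 4 };  /* data, height, width, rowBytes */
    vImage_Buffer dest = { rgb,  height, width, width * 3 };
    return vImageConvert_RGBA8888toRGB888(&src, &dest, kvImageNoFlags);
}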
Now, it may be that you’re writing this conversion as an exercise yourself, and not because you simply didn’t know that one was already available. In that case:
Avoid using ld[34] or st[34]. They are convenient but generally slower than using ld1 and a permute.
For completely regular data access patterns like this, manual prefetch isn’t necessary.
Load four 16b RGBA vectors with ld1.16b, extract three 16b RGB vectors from them with three tbl.16b instructions, and store them with st1.16b (a rough intrinsics sketch is at the end of this answer).
Alternatively, try using non-temporal loads and stores (ldnp/stnp), as your image size is too large to fit in the caches.
Finally, to answer your question: a prefetch hint for stores is primarily useful because some implementations might have a significant stall for a partial line write that misses cache. Especially simple implementations might have a stall for any write that misses cache.
See also vImageFlatten_RGBA8888toRGB888 if you want something interesting done with the alpha channel besides chucking it over your shoulder.
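To make the ld1 + tbl suggestion concrete, here is a rough C-intrinsics sketch (my own illustration, not tuned code; tail handling is omitted):
#include <arm_neon.h>
#include <stddef.h>
#include <stdint.h>

/* AArch64 only: 16 RGBA pixels (64 bytes) in, 48 RGB bytes out per step. */
void rgba2rgb_tbl(uint8_t *dst, const uint8_t *src, size_t pixels)
{
    /* Output byte j comes from input byte 4*(j/3) + j%3 (skip every alpha). */
    uint8_t idx[48];
    for (int j = 0; j < 48; ++j)
        idx[j] = (uint8_t)(4 * (j / 3) + j % 3);

    uint8x16_t idx0 = vld1q_u8(idx);
    uint8x16_t idx1 = vld1q_u8(idx + 16);
    uint8x16_t idx2 = vld1q_u8(idx + 32);

    for (size_t i = 0; i + 16 <= pixels; i += 16) {
        uint8x16x4_t in;                        /* 64 bytes of RGBA as a tbl table */
        in.val[0] = vld1q_u8(src + 4 * i);
        in.val[1] = vld1q_u8(src + 4 * i + 16);
        in.val[2] = vld1q_u8(src + 4 * i + 32);
        in.val[3] = vld1q_u8(src + 4 * i + 48);

        vst1q_u8(dst + 3 * i,      vqtbl4q_u8(in, idx0));
        vst1q_u8(dst + 3 * i + 16, vqtbl4q_u8(in, idx1));
        vst1q_u8(dst + 3 * i + 32, vqtbl4q_u8(in, idx2));
    }
    /* Pixels not a multiple of 16 would need a scalar tail loop. */
}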