When does the pipeline take 2 decode stages when there is a RAW dependency in 2 successive instructions - cpu-architecture

Consider a RISC pipeline having 5 stages. Find how many cycles are required for the instruction sequence given below. Assume operand forwarding is used, branch prediction is used in which the branch is predicted not taken, ACS is the branch instruction, and the five stages are Instruction Fetch, Decode, Execute, Memory and Write Back.
I1: ACS R0, R1,X
I2: LOAD R2, 0(R3)
I3: SUB R4, R2, R2
I4: X: ADD R5, R1, R2
I5: LOAD R1, 0(R5)
I6: SUB R1, R1, R4
I7: ADD R1, R1, R5
A. 11
B. 12
C. 13
D. 14
Solution:
In the solution, I couldn't understand why they have not added 2 extra DECODE (stall) cycles for I6 and I7, even though they have a RAW dependency?
Source of the question:
Question 41 of https://practice.geeksforgeeks.org/contest-quiz/sudo-gate-2020-mock-iii

I think the answer gives the right total (13 cycles) but puts the stall in the wrong instruction.
I5 doesn't need to stall; I4 (ADD R5, R1, R2) produces R5 in time to forward it to the next instruction's EX for address calculation (LOAD R1, 0(R5)). (Your 5-stage classic RISC pipeline has bypass forwarding).
But I6 reads the result of a load instruction, and loads produce their result a cycle later than the ALU in EX. So like I3, I6 needs to stall, not I5.
(I7 depends on I6, but I6 is an ALU instruction so it can forward without stalling.)
They stall in the D stage because the ID stage can't read registers that the I2 / I5 load hasn't produced yet.
Separately from that, your diagram shows I4 (and what should be I7) not even being fetched when the previous instruction stalls. That doesn't make sense to me. At the start of that cycle, the pipeline doesn't even know that it needs to stall because it hasn't yet decoded I3 (and I6) and detected that it reads a not-ready register so an interlock is needed.
Fetch doesn't wait until after decoding the previous instruction to see if it stalled or not; that would defeat the entire purpose of pipelining. It should look like
I3   IF  D   D   EX  MEM WB
I4       IF  IF  D   EX  MEM WB
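Putting the whole sequence together under those assumptions (branch not taken so all seven instructions execute, forwarding everywhere, and a one-cycle load-use stall for I3 and for I6), the chart I would expect looks something like this, for 13 cycles total:
cycle:  1   2   3   4   5   6   7   8   9   10  11  12  13
I1:     IF  D   EX  MEM WB
I2:         IF  D   EX  MEM WB
I3:             IF  D   D   EX  MEM WB
I4:                 IF  IF  D   EX  MEM WB
I5:                         IF  D   EX  MEM WB
I6:                             IF  D   D   EX  MEM WB
I7:                                 IF  IF  D   EX  MEM WB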
BTW, load latency is the reason that classic MIPS has a load-delay slot (unpredictable behaviour if you try to use a register in the next instruction after loading into it). Later MIPS added interlocks to stall if you do that, instead of making it an error, so you can keep static code-size smaller (no NOP filler) in cases where you can't find any other instruction to put in that slot. (And some even later MIPS did out-of-order exec which can hide latency.)

Determine the value of data paths from a given instruction

During a particular clock cycle, consider the CPU shown in the drawing.
Assume that the following initial data is present
(all values are shown in decimal, DM is Data Memory): x3=8, x14=40
During the cycle in question, assume that the following instruction is executed
(the first column is the instruction's address; all values are shown in decimal):
50788 beq x3,x14,80
How do I determine the values of L1, L2 and L3?
As per my understanding, L1 will have the program counter,
but how do I determine the value of the program counter?
L2 will have 0 or 1 depending upon whether it uses MemtoReg.
Not sure about L3. The above is guesswork, though.
Any hints or pointers on how to proceed with this?
L1 has 50788, which is the address of the current branch instruction being executed — it is the address that was fed into the Instruction Memory that results in fetching the beq x3, x14, 80.
You can see that after L1 there's an adder, which adds 4 to the PC value and offers that result to subsequent circuitry; adding 4 skips past this 4-byte instruction and thus refers to the next sequential memory address, which holds the next sequential instruction.
L2 is "don't care", which means it doesn't matter whether it is 0 or 1, so different implementations of the same basic design could use either value.  Why doesn't it matter whether this signal is 0 or 1?  Because this instruction is a conditional branch, and as such does not update a register, so RegWrite will be 0, thus the value of WriteData is unimportant and is ignored.
The hardware is a union of all the components necessary to execute any instruction in the instruction set, and as such, some circuitry here or there goes unused during execution of different instructions.  Rather than turning off the unused circuitry (which is an advanced technique that takes work to design & implement, and is not employed here), the circuitry unused by any given instruction is allowed to execute — but (whether turned off or allowed to execute) the control signals further down the line of datapaths are set up to ignore the results of these unused circuits, based on the current instruction.
L3 is the branch condition signal, that dynamically informs the PC update circuitry whether to take the branch or not.  Here that condition is effectively generated in the ALU from the expression x3 == x14 and determines the value of this control signal: if they are equal then that control signal needs to be 1 to make it take the branch (as per the definition of the conditional branch instruction) and that control signal needs to be 0 to make it not take the branch and instead continue with sequential execution.
Hopefully, you can see that for conditional branch instructions, the Branch control signal is asserted (1/true) — this signal combined with Zero goes into an AND gate, which results in 1 for take the branch vs. 0 for don't take the branch, by controlling that MUX after the AND gate.  So, the only condition in which the branch can be taken [pc := pc + sxt(imm)*2] is when both Branch and Zero are true.  If Branch is false, it is not a branch instruction, so Zero doesn't matter, and if Zero is false, the branch condition is false, so Branch is overridden [pc := pc + 4].
More explicitly, the PC update circuitry says:
PC := (Branch & Zero) ? PC + sxt(imm)*2 : PC + 4;
Using C ternary operator (could also be written using if-then-else).
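Plugging the question's numbers into that expression as a quick check (I'm assuming the 80 in the listing is already the byte offset, i.e. sxt(imm)*2 = 80): Branch = 1, Zero = (x3 == x14) = (8 == 40) = 0, so PC := 50788 + 4 = 50792. Had x3 equaled x14, the PC would instead become 50788 + 80 = 50868.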
Zero is a rather poor choice for the name of this dynamic control signal.  I would have chosen Take or Taken instead.  I believe the name Zero is historical from older RISC architectures.
This circuitry follows the RISC V standard of multiplying the branch target immediate field by 2 (instead of 4 as with MIPS), and this standard makes it so that regular 4 byte instructions are identical (unchanged) in the presence of compressed instructions — thus, on hardware that supports compressed instructions, no mode switching is needed, and, compressed instructions can be interleaved with uncompressed instructions (unlike with MIPS16 or ARM Thumb).  However, this block diagram does not provide the other features necessary to execute compressed instructions (for one, there is no increment by 2 option diagrammed in this PC update circuitry, for another there is no compressed instruction expander, which would go in between the Instruction Memory output and the Decode logic).
What you are asking is very implementation dependent. From the diagram I guess it is some MIPS or MIPS-like microarchitecture. Some real RISC implementations have (curr_instr_addr + 2*instr_size) in the PC according to the ISA; this has historical reasons, because on old machines the pipeline was 3 levels deep. So L1 has the address of the next instruction or one of the following instructions; I can't say which exactly. If the beq instruction is in the ALU, then L2 has the MemToReg of the previous instruction, to determine whether a writeback phase is needed. L3 keeps the zero flag to bypass the pipeline directly to the PC if the next instruction is a branch.

Count Cycles not matching on STM32F103C8? Prefetch buffer not working as I think?

I have been fighting this subject for a while. I am using STM32F103C8 with the ST-Link V2 on Atollic.
I made some delay functions in assembly. I have been testing this piece of code with an oscilloscope on an ATSAM (84 MHz, and it works perfectly), and on the STM32 I also use a CPU register, the DWT (Data Watchpoint and Trace) cycle counter, to see the exact number of cycles while debugging.
When I configure the STM32 CPU clock to 24 MHz, the exact number of cycles that I designed the time delay for is correct. That is, 1 cycle for the decrement instruction and 2 cycles for the branch instruction (in most cases), so the main loop spends 3 cycles.
When I change the CPU clock to 72 MHz, each assembly instruction takes about twice that time!
Well, the prefetch buffer is 2x64 bits, so the wait states should not influence the CPU execution time (ignoring prediction or other code stalls) on this microcontroller. Should they?
At 24 MHz the flash memory needs no wait states; with a higher clock I expected the CPU still would not have to wait to execute code. Should it?
I also flashed the release hex to see if it made any difference and did not find any.
My only remaining explanation would be the ST-LINK V2. Am I right?
Thanks a lot for your time and attention.
This is the piece of the code that matters:
asm (".equ fcpu, 72000000\n\t"); //72 MHz
asm (".equ const_ms, fcpu/3000 \n\t");
asm (".equ const_us, fcpu/3000000 \n\t");
void delay_us(uint32_t valor)
{
    /* valor arrives in r0 per the AAPCS calling convention */
    asm volatile ( "movw r1, #:lower16:const_us \n\t"  /* r1 = cycles per microsecond */
                   "movt r1, #:upper16:const_us \n\t"
                   "mul r0, r0, r1 \n\t"               /* r0 = total loop iterations  */
                   "r_us: subs r0, r0, #1 \n\t"        /* 1 cycle                     */
                   "bne r_us \n\t");                   /* 2 cycles when taken         */
}
void delay_ms(uint32_t valor)
{
    asm volatile ("movw r1, #:lower16:const_ms \n\t"   /* r1 = cycles per millisecond */
                  "movt r1, #:upper16:const_ms \n\t"
                  "mul r0, r0, r1 \n\t"
                  "r_ms: subs r0, r0, #1 \n\t"
                  "bne r_ms \n\t");
}
It is because of the wait states of the FLASH memory when running at 72 MHz. It is good to read the documentation :).
Place the code in SRAM and you will get what you want.
For good results from FLASH, avoid branching, as it flushes the pipeline. This kind of delay is only good for very short ones; anything longer should be implemented using timers.
I advise avoiding delays in the code at all.
PS: the ST-Link is not guilty :)
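To illustrate the SRAM suggestion, a minimal sketch for a GCC toolchain (treat the section trick as an assumption to verify: it relies on your startup code copying .data from flash to RAM, and on the section name matching your linker script; ST's newer scripts provide a dedicated .RamFunc section instead):
#include <stdint.h>

/* Placed in a RAM-resident section so it is not subject to flash wait states. */
__attribute__((section(".data"), noinline, long_call))
void delay_loop_ram(uint32_t n)
{
    while (n--) {
        __asm__ volatile ("nop");   /* keeps the loop from being optimized away */
    }
}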
I have been doing several tests. My first conclusion is that the overhead depends on the alignment of the instructions in memory (the prefetch buffer is 2x64 bits).
Second, because the branch is taken deterministically, it flushes the prefetch buffer and also the pipeline on every iteration.
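For reference, here is a minimal sketch of the DWT cycle-counter setup the question refers to, using the standard CMSIS register names (assumes a Cortex-M3 device header; the exact header name depends on your project):
#include "stm32f10x.h"   /* or whichever device header your project uses */

static void cycle_counter_start(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the trace block     */
    DWT->CYCCNT = 0;                                  /* reset the cycle counter    */
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;             /* start counting core cycles */
}

static uint32_t cycle_counter_read(void)
{
    return DWT->CYCCNT;                               /* cycles elapsed since reset */
}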

What sort of data is stored in cpu registers

I know CPU registers are used for fast access. But could anyone give me an example of the kind of data stored in them? Why are these data so important that the operating system has to save them during a context switch?
I would place registers in two groups:
System Registers
Registers that define the process state
System registers do not change with process contexts. Classically, the second group of registers includes:
A processor status register
General registers
Memory mapping registers
You seem to be most interested in #2, judging from your question. For simplicity, I will use the VAX processor as the working example (the Intel Kludge-On-A-Chip is overly complex).
The VAX has 16 32-bit registers (R0 - R15). Some of those registers (R12–R15) have special purposes:
PC = Program Counter points to the next instruction to execute
SP = Stack pointer points to bottom of the stack for the current mode.
AP = Argument Pointer points to the arguments to a function
FP = Frame Pointer used to restore the stack after a function call completes.
That leaves R0–R11 for general use.
R6-R11 can be used by programmers at will.
R0-R5 can be used by programmers but some instructions change their values.
The registers are 32 bits. They can then store:
One-Byte signed or unsigned integer
Two-byte signed or unsigned integer
Four-byte signed or unsigned integer
Four-byte floating point
You can do something like these:
ADDL3 R0, R1, R2 ; Add contents of R0 and R1 and store the result in R2
ADDF3 R0, R1, R2
In the first case, the processor treats the contents of R0 and R1 as 32-bit signed integers. In the second case, it treats the contents of R0 and R1 as 32-bit floating point values.
The interpretation of the register contents depends upon the instruction being executed. Thus, the two instructions above are likely to store different values in R2, even if they have the same values in R0 and R1.
For larger data types, adjacent registers can be combined:
ADDD3 R0, R2, R4
This adds the contents of R0/R1, to the contents of R2/R3, and stores the result in R4/R5, treating the contents of all the register pairs as 64-bit floating point values.
You can even do
ADDH3 R0, R4, R8
This adds the contents of R0/R1/R2/R3 to the contents of R4/R5/R6/R7, and stores the result in R8/R9/R10/R11, treating the contents of all the register quads as 128-bit floating point values.
The VAX has character-string and some complex matching instructions that use R0-R5 for special purposes (such as loop counters). These are instructions with long execution times that can be interrupted. Using the registers to maintain the state of the instruction allows the instruction to be restarted midstream when the process is restarted.
Programmers can use R0-R5; there is no problem with that as long as you don't use the instructions that disturb them.
By convention, R0 and R1 are used for function return values.
So these are the kinds of things you do with registers.
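To connect this back to the context-switch part of the question: all of the above state is per-process, so the OS must save it before running another process and restore it afterwards. A minimal sketch of what that saved state might look like (hypothetical structure and function names, not any particular OS's API):
#include <stdint.h>

/* Hypothetical per-process register save area, VAX-flavoured. */
struct cpu_context {
    uint32_t r[12];      /* R0-R11: whatever values the program was using     */
    uint32_t ap, fp, sp; /* R12-R14: argument, frame and stack pointers       */
    uint32_t pc;         /* R15: where to resume execution                    */
    uint32_t psl;        /* processor status register (condition codes, mode) */
};

/* These would be short assembly stubs in a real kernel. */
extern void save_registers(struct cpu_context *ctx);
extern void restore_registers(const struct cpu_context *ctx);

void context_switch(struct cpu_context *from, struct cpu_context *to)
{
    save_registers(from);    /* the old process's registers go to memory           */
    restore_registers(to);   /* the new process's registers come back from memory  */
}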
They are not only for fast access. They are the core of the CPU, and every operation has to be done on them. The CPU can add two numbers, for example, only after you move them from memory into registers.

Understanding stalls and branch delay slots

I am taking a course on Computer Architecture. I found this website from another University which has notes and videos which are helping me thus far: CS6810, Univ of Utah. I am working through some old homework assignments posted on that site, in particular this one. I am trying to understand pipelining and related concepts, specifically stalls and branch delay slots.
I am looking now at the first question from that old homework assignment and am unsure of how to do these problems.
The question is as follows:
Consider the following code segment, where the branch is taken 30% of the time and not
taken 70% of the time.
R1 = R2 + R3
R4 = R5 + R6
R7 = R8 + R9
if R10 = 0, branch to linex
R11 = R12 + R13
R14 = R11 + R15
R16 = R14 + R17
...
linex: R18 = R19 + R20
R21 = R18 + R22
R23 = R18 + R21
...
Consider a 10-stage in-order processor, where the instruction is fetched in the first
stage, and the branch outcome is known after three stages. Estimate the CPI of the
processor under the following scenarios (assume that all stalls in the processor are
branch-related and branches account for 15% of all executed instructions):
1. On every branch, fetch is stalled until the branch outcome is known.
2. Every branch is predicted not-taken and the mis-fetched instructions are squashed if the branch is taken.
3. The processor has two delay slots and the two instructions following the branch are always fetched and executed, and
3.1. You are unable to find any instructions to fill the delay slot.
3.2. You are able to move two instructions before the branch into the delay slot.
3.3. You are able to move two instructions after label "linex" into the delay slot.
3.4. You are able to move one (note: one, not two!) instruction immediately after the branch (in the original code) into the delay slot.
I am unsure of how to even begin to look at this question. I have read all the notes and watched the videos on that site and have read sections from the H&P book but am still confused on this problem. If anyone has the time, I would appreciate someone helping me step through this question. I just need to know how to begin to conceptualize the answers.
In the described pipeline the direction and target of a conditional branch is not available until the end of the third cycle, so the correct next instruction after the branch cannot be fetched (with certainty) until the beginning of the fourth cycle.
Design 1
An obvious way to handle the delayed availability of the address of the instruction after the branch is simply to wait. This is what design 1 does by stalling for two cycles (which is equivalent to fetching two no-ops that are not part of the actual program). This means that for both the taken and not-taken paths two cycles will be wasted, just as if two no-op instructions had been inserted by the compiler.
Here are diagrams of the pipeline (ST is a stall, NO is a no-op, XX is a canceled instruction, UU is a useless instruction, I1, I2, and I3 are the three instructions before the branch [in the original program order before filling any delay slots], BI is the branch instruction, I5, I6, and I7 are the fall-through instructions after the branch, I21, I22, and I23 are the instructions at the start of the taken path; IF is the instruction fetch stage, DE is decode, BR is branch resolve, S1 is the stage after BR):
              Taken                      Not taken
         IF  DE  BR  S1  ...        IF  DE  BR  S1  ...
cycle 1  BI  I3  I2  I1             BI  I3  I2  I1
cycle 2  ST  BI  I3  I2             ST  BI  I3  I2
cycle 3  ST  ST  BI  I3             ST  ST  BI  I3
cycle 4  I21 ST  ST  BI             I5  ST  ST  BI
cycle 5  I22 I21 ST  ST             I6  I5  ST  ST
Design 2
To avoid having to detect the presence of a branch by the end of the IF stage and to allow some useful work to be done sometimes (in the not taken case), rather than having hardware effectively insert no-ops into the pipeline (i.e., stall fetch after the branch) the hardware can treat the branch as any other instruction until it is resolved in the third pipeline stage. This is predicting all branches as not taken. If the branch is taken, then the two instructions fetched after the branch are canceled (effectively turned into no-ops). This is the design 2:
              Taken                      Not taken
         IF  DE  BR  S1  ...        IF  DE  BR  S1  ...
cycle 1  BI  I3  I2  I1             BI  I3  I2  I1
cycle 2  I5  BI  I3  I2             I5  BI  I3  I2
cycle 3  I6  I5  BI  I3             I6  I5  BI  I3
cycle 4  I21 XX  XX  BI             I7  I6  I5  BI
cycle 5  I22 I21 XX  XX             I8  I7  I6  I5
Design 3
Always predicting a branch to be not taken will waste two cycles whenever a branch is taken, so a third mechanism was developed to avoid this waste--the delayed branch. In a delayed branch, the hardware always executes (does not cancel) the delay slot instructions after the branch (two instructions in the example). By always executing the delay slot instructions, the pipeline is simplified. The compiler's job is to try to fill these delay slots with useful instructions.
Instructions taken from before the branch (in the program without delayed branches) will be useful regardless of which path is taken (but dependencies can prevent the compiler from scheduling any such instructions after the branch). The compiler can fill a delay slot with an instruction from the taken or not taken path, but such an instruction cannot be one that overwrites state used by the other path (or after the paths join) since delay slot instructions are not canceled (unlike with prediction). (If both paths join--as is common for if-then-else constructs--, then delay slots could potentially be filled from the join point; but such instructions are usually dependent on instructions from at least one of the paths before the join, which dependency would prevent them from being used in delay slots.) If the compiler cannot find a useful instruction, it must fill the delay slot with a no-op.
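As a concrete illustration using the question's own code (sketched with a single delay slot for brevity): an instruction from before the branch can simply be hoisted past it, because it executes on both paths anyway and the branch does not depend on it.
Before filling:
    R7 = R8 + R9
    if R10 = 0, branch to linex
    <delay slot: no-op>
After filling (the idea behind case 3.2):
    if R10 = 0, branch to linex
    R7 = R8 + R9        <-- moved into the delay slot; runs whether or not the branch is taken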
In case 3.1 (the worst case for a delayed branch design), the compiler could not find any useful instructions to fill the delay slots and so must fill them with no-ops:
              Taken                      Not taken
         IF  DE  BR  S1  ...        IF  DE  BR  S1  ...
cycle 1  BI  I3  I2  I1             BI  I3  I2  I1
cycle 2  NO  BI  I3  I2             NO  BI  I3  I2
cycle 3  NO  NO  BI  I3             NO  NO  BI  I3
cycle 4  I21 NO  NO  BI             I5  NO  NO  BI
cycle 5  I22 I21 NO  NO             I6  I5  NO  NO
This is equivalent in performance to design 1 (stall two cycles).
In case 3.2 (the best case for a delayed branch design), the compiler found two instructions from before the branch to fill the delay slots:
              Taken                      Not taken
         IF  DE  BR  S1  ...        IF  DE  BR  S1  ...
cycle 1  BI  I1  ...                BI  I1  ...
cycle 2  I2  BI  I1  ...            I2  BI  I1  ...
cycle 3  I3  I2  BI  I1             I3  I2  BI  I1
cycle 4  I21 I3  I2  BI             I5  I3  I2  BI
cycle 5  I22 I21 I3  I2             I6  I5  I3  I2
In this case, all pipeline slots are filled with useful instructions regardless of whether the branch is taken or not taken. The performance (CPI) is the same as for an ideal pipeline without delayed resolution of branches.
In case 3.3, the compiler filled the delay slots with instructions from the taken path:
              Taken                      Not taken
         IF  DE  BR  S1  ...        IF  DE  BR  S1  ...
cycle 1  BI  I3  I2  I1             BI  I3  I2  I1
cycle 2  I21 BI  I3  I2             I21 BI  I3  I2
cycle 3  I22 I21 BI  I3             I22 I21 BI  I3
cycle 4  I23 I22 I21 BI             I5  UU  UU  BI
cycle 5  I24 I23 I22 I21            I6  I5  UU  UU
In the not taken path I21 and I22 are useless. Although they are actually executed (and update state), this state is not used in the not taken path (or after any joining of the paths). For the not taken path, it is as if the delay slots had been filled with no-ops.
In case 3.4, the compiler could only find one safe instruction from the not taken path and must fill the other delay slot with a no-op:
              Taken                      Not taken
         IF  DE  BR  S1  ...        IF  DE  BR  S1  ...
cycle 1  BI  I3  I2  I1             BI  I3  I2  I1
cycle 2  I5  BI  I3  I2             I5  BI  I3  I2
cycle 3  NO  I5  BI  I3             NO  I5  BI  I3
cycle 4  I21 NO  UU  BI             I6  NO  I5  BI
cycle 5  I22 I21 NO  UU             I7  I6  NO  I5
For the taken path, one useless instruction and one no-op are executed, wasting two cycles. For the not taken path, one no-op is executed, wasting one cycle.
Calculating CPI
The formula for calculating CPI in this case is:
%non_branch * CPI_non_branch + %branch * CPI_branch
CPI_branch is calculated from the time taken by the branch itself (baseCPI_branch), plus the fraction of branches that are taken times the cycles wasted when taken, plus the fraction that are not taken times the cycles wasted when not taken. So the CPI_branch is:
baseCPI_branch + (%taken * wasted_cycles_taken) +
(%not_taken * wasted_cycles_not_taken)
In an ideal scalar pipeline, each instruction takes one cycle, i.e., the Cycles Per Instruction is 1. In this example, non-branch instructions behave as if the pipeline were ideal ("all stalls in the processor are branch-related"), so each non-branch instruction has a CPI of 1. Likewise, the baseCPI_branch (excluding wasted cycles from stalls, no-ops, et al.) is 1.
Based on the pipeline diagrams above, one can determine the number of cycles that are wasted in the taken and in the not taken paths. The example gives the percentage of branches and the percentages of branches that are taken and not taken.
For the design 1, both taken and not taken paths waste 2 cycles, so the CPI_branch is:
1 + (0.3 * 2) + (0.7 * 2) = 3
and the total CPI is therefore:
(0.85 * 1) + (0.15 * 3) = 1.3
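Applying the same formula to the other designs, using the wasted-cycle counts visible in the diagrams above (worth re-deriving from your own diagrams as a check):
Design 2 (predict not taken): taken wastes 2 cycles, not taken wastes 0:
CPI_branch = 1 + (0.3 * 2) + (0.7 * 0) = 1.6, so CPI = (0.85 * 1) + (0.15 * 1.6) = 1.09
Case 3.1 (slots filled with no-ops): same waste as design 1, so CPI = 1.3
Case 3.2 (slots filled from before the branch): no wasted cycles, so CPI = 1.0
Case 3.3 (slots filled from the taken path): taken wastes 0, not taken wastes 2:
CPI_branch = 1 + (0.3 * 0) + (0.7 * 2) = 2.4, so CPI = (0.85 * 1) + (0.15 * 2.4) = 1.21
Case 3.4 (one slot filled from the not-taken path): taken wastes 2, not taken wastes 1:
CPI_branch = 1 + (0.3 * 2) + (0.7 * 1) = 2.3, so CPI = (0.85 * 1) + (0.15 * 2.3) = 1.195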

Maximum speed from IOS/iPad/iPhone

I built a computing-intensive app for iOS using OpenCV. Of course it was slow - something like 200 times slower than my PC prototype. So I have been optimizing it; from the initial 15 seconds I got it down to 0.4 seconds. I wonder if I have found all the tricks and what others may want to share. What I did:
1. Replaced "double" data types inside OpenCV with "float". Double is 64-bit and a 32-bit CPU cannot handle it as easily, so float gave me some speed. OpenCV uses double very often.
2. Added "-mfpu=neon" to the compiler options. A side effect was a new problem: the simulator build does not work anymore, so everything can be tested on native hardware only.
3. Replaced the sin() and cos() implementations with 90-value lookup tables. The speedup was huge! This is somewhat the opposite of a PC, where such optimizations do not give any speedup. There was code working in degrees whose values were converted to radians for sin() and cos(); that conversion code was removed too. The lookup tables did the job (a rough sketch of such a table appears below, after this list).
4. Enabled "thumb optimizations". Some blog posts recommend exactly the opposite, but that is because Thumb usually makes things slower on armv6; armv7 is free of those problems and just makes things faster and smaller.
5. To make sure the Thumb optimizations and -mfpu=neon work at their best and do not introduce crashes, I removed the armv6 target completely. All my code is compiled for armv7, and this is also listed as a requirement in the App Store, which means the minimum iPhone will be the 3GS. I think it is OK to drop the older ones; anyway, they have slower CPUs, and a CPU-intensive app provides a bad user experience when installed on an old device.
6. Of course I use the -O3 flag.
7. I deleted "dead code" from OpenCV. Often when optimizing OpenCV I see code which is clearly not needed for my project; for example, there is often an extra "if()" checking whether the pixel size is 8-bit or 32-bit, and I know that I need 8-bit only. Removing it cuts some code, gives the optimizer a better chance to remove more or replace things with constants, and the code also fits better into the cache.
Any other tricks and ideas? For me, enabling Thumb and replacing trigonometry with lookups were the big boosts and came as a surprise. Maybe you know something more to do which makes apps fly?
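For what it's worth, here is a rough sketch of the kind of degree-based lookup table meant in point 3 (hypothetical helper names; a 360-entry table is used here for simplicity, whereas my table used 90 values plus symmetry):
#include <math.h>

#define LUT_SIZE 360                    /* one entry per degree */
static float sin_lut[LUT_SIZE];

static void init_sin_lut(void)          /* call once at startup */
{
    for (int i = 0; i < LUT_SIZE; ++i)
        sin_lut[i] = sinf((float)i * 3.14159265f / 180.0f);
}

static inline float fast_sin_deg(int deg)
{
    return sin_lut[((deg % LUT_SIZE) + LUT_SIZE) % LUT_SIZE];  /* wrap into 0..359 */
}

static inline float fast_cos_deg(int deg)
{
    return fast_sin_deg(deg + 90);      /* cos(x) = sin(x + 90 degrees) */
}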
If you are doing a lot of floating point calculations, it would benefit you greatly to use Apple's Accelerate framework. It is designed to use the floating point hardware to do calculations on vectors in parallel.
I will also address your points one by one:
1) This is not because of the CPU itself; it is because, as of the armv7 era, only 32-bit floating point operations are calculated in the floating point hardware (Apple changed the hardware), while 64-bit ones are calculated in software instead. In exchange, 32-bit operations got much faster.
2) NEON is the name of the new floating point processor's instruction set.
3) Yes, this is a well-known method. An alternative is to use Apple's framework that I mentioned above. It provides sin and cos functions that calculate 4 values in parallel. The algorithms are fine-tuned in assembly and NEON, so they give the maximum performance while using minimal battery.
4) The new armv7 implementation of thumb doesn't have the drawbacks of armv6. The disabling recommendation only applies to v6.
5) Yes, considering 80% of users are on iOS 5.0 or above now (armv6 devices ended support at 4.2.1), that is perfectly acceptable for most situations.
6) This happens automatically when you build in release mode.
7) Yes, this won't have as large an effect as the above methods though.
My recommendation is to check out Accelerate. That way you can make sure you are leveraging the full power of the floating point processor.
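As a rough illustration of the vectorized trig functions mentioned above, a sketch using vvsinf from Accelerate's vForce component (the batching strategy here is just an example):
#include <Accelerate/Accelerate.h>

/* Computes sin() for a whole array of angles (in radians) in one call,
   letting the library use the vector hardware internally. */
void batch_sin(float *out, const float *angles_rad, int count)
{
    vvsinf(out, angles_rad, &count);
}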
I would like to give some feedback on the previous posts and expand the idea about dead code from point 7, which was meant to be a slightly wider idea. I need formatting, so the comment form cannot be used. This code was in OpenCV:
for( kk = 0; kk < (int)(descriptors->elem_size/sizeof(vec[0])); kk++ ) {
    vec[kk] = 0;
}
I wanted to see how it looks in assembly. To make sure I could find it in the assembly output, I wrapped it like this:
__asm__("#start");
for( kk = 0; kk < (int)(descriptors->elem_size/sizeof(vec[0])); kk++ ) {
vec[kk] = 0;
}
__asm__("#stop");
Now I press "Product -> Generate Output -> Assembly file" and what I get is:
# InlineAsm Start
#start
# InlineAsm End
Ltmp1915:
ldr r0, [sp, #84]
movs r1, #0
ldr r0, [r0, #16]
ldr r0, [r0, #28]
cmp r0, #4
mov r0, r4
blo LBB14_71
LBB14_70:
Ltmp1916:
ldr r3, [sp, #84]
movs r2, #0
Ltmp1917:
str r2, [r0], #4
adds r1, #1
Ltmp1918:
Ltmp1919:
ldr r2, [r3, #16]
ldr r2, [r2, #28]
lsrs r2, r2, #2
cmp r2, r1
bgt LBB14_70
LBB14_71:
Ltmp1920:
add.w r0, r4, #8
# InlineAsm Start
#stop
# InlineAsm End
A lot of code. I printf-ed out the value of (int)(descriptors->elem_size/sizeof(vec[0])) and it was always 64. So I hardcoded it to 64 and looked at the generated assembly again:
# InlineAsm Start
#start
# InlineAsm End
Ltmp1915:
vldr.32 s16, LCPI14_7
mov r0, r4
movs r1, #0
mov.w r2, #256
blx _memset
# InlineAsm Start
#stop
# InlineAsm End
As you can see, the optimizer now got the idea and the code became much shorter; it was able to turn the loop into a single memset call. The point is that the compiler does not always know which inputs are constants when they are something like a webcam frame size or pixel depth, but in my context they are in fact usually constant, and all I care about is speed.
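In other words, telling the compiler about the constant is what unlocked the optimization. A minimal sketch of the hardcoding step applied to the loop above (the 64 is of course specific to my descriptor format):
enum { DESCRIPTOR_ELEMS = 64 };           /* known constant for my pixel format */

for( kk = 0; kk < DESCRIPTOR_ELEMS; kk++ ) {
    vec[kk] = 0;                          /* compiler can now emit a memset/vector clear */
}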
I also tried Accelerate as suggested, replacing those three lines with:
__asm__("#start");
vDSP_vclr(vec,1,64);
__asm__("#stop");
Assembly now looks:
# InlineAsm Start
#start
# InlineAsm End
Ltmp1917:
str r1, [r7, #-140]
Ltmp1459:
Ltmp1918:
movs r1, #1
movs r2, #64
blx _vDSP_vclr
Ltmp1460:
Ltmp1919:
add.w r0, r4, #8
# InlineAsm Start
#stop
# InlineAsm End
I'm unsure whether this is faster than bzero, though. In my context this part does not take much time, and the two variants seemed to run at the same speed.
One more thing I learned is to use the GPU. More about it here: http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpuimage-framework