Indirect Addressing in Brainfuck

How would I move the memory pointer to a location described in a memory cell? Super confused.
So if cell 4 contains 10, how would I set the memory pointer to cell 10, given the address of cell 4? Absolutely no idea where to start.
I figured something out using [>] for the case where all the cells between the two locations are 0, but otherwise I'm completely lost.

You would need to implement some sort of memory model for your program. Brainfuck does not support indirect addressing natively, but since it is Turing complete, you can certainly simulate it.
You're thinking along the wrong lines. You want to simulate indirect addressing in bf. Before you can do that, you need to think about simulating RAM in the first place. I.e. even direct addressing is a problem. You can't just access "the 5th memory location" unless you know exactly where you are, which you don't always know if you're not extremely careful... because it's brainfuck
You might want to take a look at some C to brainfuck projects floating around. They do a similar sort of thing.
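For what it's worth, the usual building block is a "move the pointer n cells to the right" idiom. This is only a sketch, and it assumes the current cell holds n and that the n cells to its right are all zero (which is exactly the bookkeeping the answers above are warning about):

[[->+<]>-]

Each pass copies the remaining count one cell to the right and decrements it, so the loop ends with the pointer n cells from where it started, leaving a trail of zeros behind it. Getting back again, or doing this without trashing real data in between, is where an actual memory model (for example, interleaving data cells with scratch/flag cells) comes in.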

Related

LabVIEW - Managing large numbers of constants

This is more of a formatting problem than code logic and probably seems silly (considering I've seen far more dense block diagrams). I'm working with a lot of numeric constants and they're starting to clutter my Block Diagram. Is there something I can use to group them nicely and compactly?
Preferably I would like to avoid clustering them because I would need to bundle and unbundle every time I needed access.
EDIT: Picture of code in question (code segment is used repeatedly, so would be nice to have a more compact case structure)
I think you should rethink how much of your block diagram you expect to devote to constants :-)
Using numbers directly in code, the equivalent of unlabelled constants on the LabVIEW block diagram, is a recognised anti-pattern. Unless the reason for the constant value is both obvious and fundamental to the operation being carried out, anyone looking at your code (including you, any time after a couple of weeks since you wrote it) will not understand why the value was chosen. Therefore, you should make this clear by labelling the constant somehow (equivalent to assigning it to a name in a text language) and also make it easy to change the value if necessary.
It's usually clear what a 0 or 1 constant is doing there but in the code image you've posted you have two constants of 1000 and one of 999. Why is it 1000, and if I decide that it should be (say) 2000 instead, do I need to update the other two values as well? If so you should define it once, label it with a suitable name describing what it is (in your example it might be chunk size or something) and wire that value to wherever you need to use it. Where you have a constant 999 you could get that value with a Decrement function, or you could also change your Greater Than function to a Greater or Equal and compare directly with the 1000 value. In this way your initial constant definition will take up more space because of the label, but you'll save space and improve maintainability by wiring that value to wherever you need it rather than placing additional constants.
If you need to refer to the same constants in multiple places on your block diagram, you can place the constants (and just the constants, not any other program logic) in a subVI, with each constant wired to an indicator with a suitable label, and each indicator wired to a different output on the connector pane. When you hover the wiring tool over the SubVI's terminals you'll see the label in the tip-strip. Alternatively, especially if you need loads of different constant values, you can do the same thing but in your SubVI bundle the different constants into a named cluster (which you save as a typedef), and then use Unbundle by Name to access specific constant values from the cluster where you need them. Again this doesn't necessarily save block diagram space, but it does make your code more readable and maintainable.
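Since the answer above mentions the text-language equivalent, a rough sketch of that idea in plain C (the name CHUNK_SIZE and the function are made up purely for illustration) is: define the value once under a meaningful name, derive 999 from it, and reuse the name everywhere instead of scattering literals.

#include <stdio.h>

#define CHUNK_SIZE 1000                 /* defined once, with a meaningful name */

static void check_chunk(int count)
{
    if (count > CHUNK_SIZE - 1)         /* instead of a hard-coded 999 */
        printf("start a new chunk\n");
}

Changing the chunk size then means editing a single definition rather than hunting down every 1000 and 999 on the diagram (or in the source).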
The simple answer was to reorganize my block diagram, making more space for the constants. Dave_St suggested creating subVIs for the case structures, for anyone looking for alternatives. Wanted to mark this as resolved regardless.

Numerical Integral of large numbers in Fortran 90

So I have the following integral that I need to do numerically:
Int[ Exp( 0.5*(a*Cos(x) + b*Sin(x) + c*Cos(2x) + d*Sin(2x)) ) ] for x = 0..2*Pi
The problem is that the integrand at any given value of x can be extremely large, e.g. e^2000, which is more than I can represent in double precision.
I haven't had much luck googling for the following: how do you deal with large numbers in Fortran? Not high precision, I don't care about anything beyond double precision, and at the end I'll just be taking the log, but I need to be able to handle the large numbers until I can take the log.
Are there integration packages that can handle arbitrarily large numbers? Mathematica clearly can, so there must be something like this out there.
Cheers
This is probably an extended comment rather than an answer but here goes anyway ...
As you've already observed, Fortran isn't equipped out of the box with the facility for handling numbers as large as e^2000. I think you have 3 options.
Use mathematics to reduce your problem to one which does (or a number of related ones which do) fall within the numerical range that your Fortran compiler can compute.
Use Mathematica or one of the other computer algebra systems (eg Maple, SAGE, Maxima). All (I think) of these can be integrated into a Fortran program (with varying degrees of difficulty and integration).
Use a library for high-precision (often also called arbitrary-precision or multiple-precision) arithmetic. Your favourite search engine will turn up a number of these for you, some written in Fortran (and therefore easy to integrate), some written in C/C++ or other languages (and therefore slightly harder to integrate). You might start your search at Lawrence Berkeley or the GNU bignum library.
(Yes I know that I wrote that you have 3 options, but your question suggests that you aren't ready to consider this yet) You could write your own high-/arbitrary-/multiple-precision functions. Fortran provides everything you need to construct such a library, there is a lot of work already done in the field to learn from, and it might be something of interest to you.
In practice it generally makes sense to apply as much mathematics as possible to a problem before resorting to a computer; that process can not only assist in solving the problem but also guide your selection or construction of a program to solve what's left of it, as the sketch below tries to illustrate for this integral.
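To make the first option concrete: since only the log of the integral is wanted, factor the peak of the exponent out of it, so that log(I) = gmax + log( Int exp(g(x) - gmax) dx ), where g(x) = 0.5*(a*cos x + b*sin x + c*cos 2x + d*sin 2x) and gmax is the maximum of g on [0, 2*Pi]. Every argument passed to exp is then <= 0 and nothing overflows. A minimal Fortran 90 sketch (the coefficients and the plain trapezoidal rule are placeholders, not anything from the question):

program log_of_integral
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  integer, parameter :: n = 100000
  real(dp), parameter :: pi = 3.141592653589793_dp
  ! example coefficients only -- substitute the real a, b, c, d
  real(dp), parameter :: a = 4000.0_dp, b = 300.0_dp, c = 20.0_dp, d = 1.0_dp
  real(dp) :: g(0:n), h, x, gmax, s
  integer :: i

  h = 2.0_dp*pi/real(n, dp)
  do i = 0, n
     x = real(i, dp)*h
     g(i) = 0.5_dp*(a*cos(x) + b*sin(x) + c*cos(2.0_dp*x) + d*sin(2.0_dp*x))
  end do

  gmax = maxval(g)                                  ! pull the peak out of the integral

  s = 0.5_dp*(exp(g(0)-gmax) + exp(g(n)-gmax))      ! trapezoidal rule; every term <= 1
  do i = 1, n-1
     s = s + exp(g(i)-gmax)
  end do

  print *, 'log of integral =', gmax + log(h*s)     ! log(I) = gmax + log(h*sum)
end program log_of_integral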
I agree with High Performance Mark that the best option here numerically is to use analysis to scale or simplify the result first.
I will mention that if you do want to brute force it, gfortran (as of 4.6, with the libquadmath library) has support for quadruple precision reals, which you can use by selecting the appropriate kind. As long as your answers (and the intermediate results!) don't get too much bigger than what you're describing, that may work, but it will generally be much slower than double precision.
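Selecting that kind looks something like the sketch below (assuming a compiler that actually provides a 128-bit real; note that e^2000 is roughly 3.9e868, which does still fit, since quad precision only overflows around 1.2e4932):

program quad_sketch
  implicit none
  integer, parameter :: qp = selected_real_kind(p=30)  ! asks for ~30 digits; returns -1 if no such kind exists
  real(qp) :: y
  y = exp(2000.0_qp)          ! ~3.9e868, within quad range
  print *, y, log(y)
end program quad_sketch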
This requires looking deeper at the problem you are trying to solve and the behavior of the underlying mathematics. To add to the good advice already provided by Mark and Jonathan, consider expanding the exponential and trig functions into Taylor series and truncating to the desired level of precision.
Also, take a step back and ask what you are trying to accomplish by calculating this value. As an example, I recently had to debug why I was getting outlandish results from a property correlation which was calculating the vapor pressure of a fluid to see if condensation was occurring. I spent a long time trying to understand what was wrong with the temperature being fed into the correlation until I realized the case causing the error was a simulation of vapor detonation. The problem was not in the numerics but in the logic of checking for condensation during a literal explosion; physically, a condensation check made no sense. The real problem was that the code was asking an unnecessary question; it already had the answer.
I highly recommend Forman Acton's Numerical Methods That (Usually) Work and Real Computing Made Real. Both focus on problems like this and suggest techniques to tame ill-mannered computations.

Compilation optimization for iPhone: floating point or fixed point?

I'm building a library for iPhone (speex, but I'm sure it will apply to a lot of other libs too) and the make script has an option to use fixed point instead of floating point.
As the iPhone ARM processor has the VFP extension and performs floating point calculations very well, do you think it's a better choice to use the fixed point option?
If someone has already benchmarked this and wants to share, I would be really grateful.
Well, it depends on the setup of your application; here are some guidelines:
First, try setting the optimization level to -Os (Fastest, Smallest)
Turn on Relax IEEE Compliance
If your application can easily process floating point numbers in contiguous memory locations independently, you should look at the ARM NEON intrinsics and assembly instructions; they can process up to 4 floating point numbers in a single instruction (see the sketch after this list)
If you are already heavily using floating point math, try to switch some of your logic to fixed point (but keep in mind that moving from a NEON register to an integer register results in a full pipeline stall)
If you are already heavily using integer math, try changing some of your logic to floating point math.
Remember to profile before optimizing
And above all, better algorithms will always beat micro-optimizations such as the above.
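As a sketch of the NEON point above (plain C using the arm_neon.h intrinsics; the function name is made up, and the buffer length is assumed to be a multiple of 4 just to keep it short):

#include <arm_neon.h>

/* Multiply two float buffers element-wise, four lanes at a time. */
void mul_buffers(const float *a, const float *b, float *out, int n)
{
    int i;
    for (i = 0; i < n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);       /* load 4 floats */
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(out + i, vmulq_f32(va, vb));   /* multiply and store 4 at once */
    }
}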
If you are dealing with large blocks of sequential data, NEON is definitely the way to go.
Float or fixed, that's a good question. NEON is somewhat faster dealing with fixed point, but I'd keep the native input format, since conversions take time and possibly extra memory.
Even if the lib offers a different output format as an option, it almost always means lib-internal conversions. So I guess float is the native one in this case. Stick to it.
No one prevents you from micro-optimizing better algorithms. And usually, the better the algorithm, the more performance gain can be achieved through micro-optimizations, due to the pipelining on modern machines.
I'd stay away from intrinsics though. There are so many posts on the net complaining about intrinsics doing something crazy, especially when dealing with immediate values.
It can and will get very troublesome, and you can hardly optimize anything with intrinsics either.

CGAffineTransformMakeRotation() Question

I was wondering which of the following is less expensive on memory? I noticed you can leave out the *M_PI portion and it still works fine. Does this mean it saves some calculations as well, or does it matter?
Example:
CGAffineTransformMakeRotation(0.5*M_PI);
Or, for example:
CGAffineTransformMakeRotation(0.7);
I would think the last example is more efficient because it doesn't have to multiply by PI, or am I wrong in assuming that?
Overall I don't think either one is overpowering or a big memory suck; I was just curious about what is happening under the hood.
Neither is a 'memory suck' since both involve the same amount of allocated memory for a CGAffineTransform struct.
Additionally, neither one offers a CPU advantage over the other, since 0.5*M_PI can be calculated at compile time, so it is equivalent to writing 0.7 or any other constant.
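A quick way to see this (a sketch, not code from the question; the function names are made up):

#include <CoreGraphics/CoreGraphics.h>
#include <math.h>

static CGAffineTransform quarterTurn(void)
{
    /* 0.5 * M_PI is folded by the compiler into the literal 1.5707963...,
       so the call just receives one precomputed CGFloat. */
    return CGAffineTransformMakeRotation(0.5 * M_PI);
}

static CGAffineTransform smallTurn(void)
{
    /* Identical cost at run time, and the same struct comes back. */
    return CGAffineTransformMakeRotation(0.7);
}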

Should I avoid recursion on the iPhone?

Should I avoid recursion with code that runs on the iPhone?
Or, put another way, does anyone know the max stack size on the iPhone?
Yes, avoiding recursion is a good thing on all embedded platforms.
Not only does it lower or even remove the chance of a stack overflow, it often gives you faster code as well.
You can always rewrite a recursive algorithm to be iterative. That's not always practical though (think quicksort). A way to get around this is to rewrite the algorithm so that the recursion depth is limited.
The introsort is a perfect example of how it's done in practice. It limits the recursion depth of a quicksort to log2(number of elements). So on a 32 bit machine you will never recurse deeper than 32.
http://en.wikipedia.org/wiki/Introsort
I've written quite a bit of software for embedded platforms in the past (car entertainment systems, phones, game consoles and the like) and I always made sure that I put an upper limit on the recursion depth or avoided recursion in the first place.
As a result none of my programs ever died with a stack overflow, and most programs are happy with 32 KB of stack. This pays off big time once you need multiple threads, as each thread gets its own stack. You can save megabytes of memory that way.
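One common way to cap the depth without going full introsort (a sketch in plain C; the int array and the Lomuto partition are just for illustration) is to recurse only into the smaller partition and loop over the larger one, so the stack never grows past roughly log2(n) frames even on adversarial input:

#include <stddef.h>

static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Sorts v[lo..hi] inclusive. Depth is bounded by ~log2(n) because we only
   ever recurse into the smaller partition and iterate on the larger one. */
static void quicksort_bounded(int *v, size_t lo, size_t hi)
{
    while (lo < hi) {
        int pivot = v[hi];                       /* Lomuto partition around v[hi] */
        size_t i = lo, j;
        for (j = lo; j < hi; ++j)
            if (v[j] < pivot) swap_int(&v[i++], &v[j]);
        swap_int(&v[i], &v[hi]);

        if (i - lo < hi - i) {                   /* left side is smaller */
            if (i > lo) quicksort_bounded(v, lo, i - 1);
            lo = i + 1;                          /* keep looping on the right side */
        } else {                                 /* right side is smaller (or equal) */
            quicksort_bounded(v, i + 1, hi);
            if (i == 0) break;                   /* avoid size_t underflow */
            hi = i - 1;                          /* keep looping on the left side */
        }
    }
}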
I see a couple of answers that boil down to "don't use recursion". I disagree - it's not like the iPhone is some severely-constrained embedded system. If a problem is inherently recursive, feel free to express it that way.
Unless you're recursing to a stack depth of hundreds or thousands of frames, you'll never have an issue.
The max stack size on the iPhone?
The iPhone runs a modified OS X in which every process is given its own memory space, just like in most operating systems.
It's a full processor, so stack grows up, and heap grows down (or vice versa depending on your perspective). This means that you won't overflow the stack until you run out of memory allocated to your program.
It's best to avoid recursion when you can for stack and performance reasons (function calls are expensive relative to simple loops), but in any case you should decide what limits you can place on recursive functions, and cut them off if they recurse too deeply.