MATLAB pointers that access memory

Is it possible to make pointers in MATLAB that access actual memory locations? I would like to use pointers to reference certain structures I've made, and I want to be able to modify the structures through the pointer. I would use C++, but I can't use C++ on the servers I'm working with.
This is the best thing I've found so far, but it doesn't look like what I want.
http://www.mathworks.com/help/matlab/matlab_external/working-with-pointers.html
If it's not possible, I have other ways around it, but they make my code significantly less extensible.

Do cats and scalaz create performance overhead on application?

I know this is probably a naive question, but due to my limited programming knowledge it came to my mind.
Cats and Scalaz are used so that we can write Scala in a pure functional style, similar to Haskell. But to achieve this we need to add those libraries as extra dependencies to our projects, and to use them we end up wrapping our code in their objects and functions. That means extra code and extra dependencies.
I don't know whether these wrappers create larger objects in memory.
This is what makes me wonder. So my question: will I face performance issues, such as higher memory consumption, if I use Cats/Scalaz?
Or should I avoid them if my application needs to be performant?
Do cats and scalaz create performance overhead on application?
Absolutely.
The same way any line of code adds performance overhead.
So, if that is your concern, then don't write any code at all (well, actually the world might be simpler if we had never tried all this).
Now, snarky answer aside, the proper question you should be asking is: "Is the overhead of library X harmful to my software?" Remember that this applies to any library, indeed to any code you write and any algorithm you pick.
And, in order to answer that question, we need a few things first.
Define the SLAs the software you are writing must hold. Without those, any performance question or observation you make is pointless. It doesn't matter whether something is faster or slower if you don't know whether that is meaningful for you and your clients.
Once you have SLAs, you need to perform stress tests to verify whether the current version of the software satisfies them. If your current code is performant enough, then you should worry about other things instead: maintainability, testing, adding more features, etc.
PS: Remember that those SLAs should not be raw numbers but expressed in terms of percentiles; the same goes for the results of the tests.
When you find that you are failing your SLAs, you need to do proper benchmarking and profiling to identify the bottlenecks in your project. As you saw, caring about performance line by line is a lot of work that usually doesn't produce any relevant output. Thus, instead of evaluating the performance of everything, we find the bottlenecks first: the small pieces of code that make the biggest contribution to the overall performance of the software (remember the Pareto principle).
Remember that in this step we have to look at the whole system; the network matters too (and you will often find it is the biggest slowdown, which is why you would usually rather search for architectural solutions, like using fibers instead of threads, than try to optimize small functions; also, sometimes the easier and cheaper solution is better infrastructure).
When you find a bottleneck, you need to formulate some alternatives, implement them, and not only benchmark them but also do statistical hypothesis testing to validate whether the proposed changes are worth it. And, of course, verify that they are enough to satisfy the SLAs.
Thus, as you can see, performance is an art and a lot of work. So, unless you are committed to doing all of this, stop worrying about something you will not measure and optimize properly.
Rather, focus on increasing the maintainability of your code. This actually helps performance too, because when you find that you need to change something you will be grateful that the code is as clean as possible and that the whole architecture of the code allows for an easy change.
And believe me when I say that using tools like cats, cats-effect, fs2, etc. will help in that regard. They are also quite optimized at their core, so you should be fine for a lot of use cases.
Now, the big exception: if you know the work you are doing will be very CPU- and memory-bound, then yes, you can be pretty sure all those abstractions will be harmful. In those cases you may even want to stay away from the JVM and instead write fairly low-level code in a language like Rust, which will provide you with proper tools for that kind of problem while still being far safer than plain old C.

Coupling Lua and MATLAB

I am in a situation where part of the codebase is written in MATLAB and another part in Lua (which is used for scripting a third-party program). As of now, the exchange of data between them is makeshift, done through file I/O. This has evolved into a substantial part of the code, even though that wasn't really planned.
The program is structured so that some Lua scripts are run, then some MATLAB evaluation is done, based on which more Lua is run, and so on. It handles simulations and evaluations (scientific code) and creates new simulations based on the results, dealing with thousands of files and sims.
To streamline the process, I started looking into possibilities for changing the data I/O and making calls from one language to the other easy.
I wanted to hear some opinions on how to solve the problem. The optimal solution would be one where I could call everything from MATLAB or Lua and organize the large datasets in a more consistent and accessible way.
Solutions:
1. Use the Lua C API to create bindings for the Lua modules, and add them to MATLAB as a C library. This way I should hopefully be able to achieve my goals and reduce the system's complexity.
2. A smarter data format for the exchange of datasets (HDF?), plus some functions that read the needed workspace variables. This way the parts of the program remain independent, but the data exchange gets solved.
3. Wrappers for the Lua/MATLAB functions so that they can be called more easily, with data exchanged through the functions' return parameters.
Suggestions?
I would suggest 1, or, if you aren't averse to spending a lot of money, use MATLAB Coder to generate C functions from the MATLAB side of the analysis, compile the generated code as a shared library, import the library with the LuaJIT FFI, and run everything from Lua. With this solution you would not have to change any of the MATLAB code, and not much of the Lua code, thanks to LuaJIT's semantics regarding array indexing. Solution 1 is free, but it is not as efficient because of the constant marshaling between the two languages' data structures, and it would also be a lot of work to write the interface. Either solution, though, would be more efficient than file I/O.
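For solution 1, a minimal sketch of what the MEX gateway could look like, assuming the Lua 5.x C API and a scalar in/out exchange (the script name, the globals x and result, and the build line are placeholders, not a full marshaling layer):

    // luacall.cpp - hypothetical MEX gateway that embeds a Lua interpreter.
    // Build (library paths are placeholders):  mex luacall.cpp -llua
    #include "mex.h"
    #include <lua.hpp>  // Lua C API with C++-safe linkage

    // y = luacall('script.lua', x): runs the script with global x set,
    // then returns the script's global `result` as a double.
    void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
    {
        if (nrhs != 2 || !mxIsChar(prhs[0]) || !mxIsDouble(prhs[1]))
            mexErrMsgTxt("Usage: y = luacall('script.lua', x)");

        char script[256];
        mxGetString(prhs[0], script, sizeof(script));

        lua_State *L = luaL_newstate();  // fresh interpreter per call: simple, not fast
        luaL_openlibs(L);

        lua_pushnumber(L, mxGetScalar(prhs[1]));  // expose the input to the script
        lua_setglobal(L, "x");

        if (luaL_dofile(L, script)) {             // nonzero means the script failed
            mexWarnMsgTxt(lua_tostring(L, -1));   // surface Lua's error message
            lua_close(L);
            mexErrMsgTxt("Lua script failed.");
        }

        lua_getglobal(L, "result");               // read back a scalar answer
        plhs[0] = mxCreateDoubleScalar(lua_tonumber(L, -1));
        lua_close(L);
    }

A persistent lua_State (e.g. a static pointer torn down via mexAtExit) would avoid restarting the interpreter on every call, and real datasets would need proper marshaling between mxArrays and Lua tables rather than single globals.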
As an easy performance boost, have you tried keeping the files in memory using a RAM disk or tmpfs?

Faster alternative to containers.Map?

This question is related to: Matlab: dynamically storing objects, alternatives to containers.Map class
I'm building a data structure that needs key-value functionality, where the key is an int and the value is an object. It also needs to support dynamically adding elements to this key-value map.
So containers.Map would be a good option, but it is extremely slow (I have measured retrieval of values from a map of ~450 elements at around 0.1 s on my Linux machine). That's really strange, as I thought this class would be implemented as a hash map or something like that.
I need something a lot faster. I'm thinking of implementing a balanced binary search tree myself, but I don't know whether this kind of dynamic, recursive object would be fast in MATLAB (probably not).
Is it possible to bind std::map into my application, or something else that is faster than containers.Map?
Edit, clarifications and code sample:
I'm running MATLAB 2015a on Linux. Here's a reproduction of the bad performance. In this program the performance is not as bad as in mine, because my program has a much more complex class hierarchy that generates a lot of overhead (the simple act of using a for loop to iterate over each element of the map and simply retrieve it takes almost 1 minute with ~450 elements). Here I created a very simple graph class to illustrate the problem: pastebin.com/TvyzJxgK
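On the std::map question: yes, it can be bound through a MEX file, with the caveat that the map has to live inside the MEX file between calls and values have to be copied in and out as mxArrays. A minimal sketch, with invented names and a command-string calling convention:

    // intmap.cpp - hypothetical MEX wrapper around std::map<int, mxArray*>.
    // MATLAB usage:  intmap('put', 42, value);   v = intmap('get', 42);
    #include "mex.h"
    #include <map>
    #include <cstring>

    static std::map<int, mxArray*> table;  // persists between calls

    static void cleanup(void)              // runs when MATLAB unloads the MEX file
    {
        for (std::map<int, mxArray*>::iterator it = table.begin(); it != table.end(); ++it)
            mxDestroyArray(it->second);
        table.clear();
    }

    void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
    {
        mexAtExit(cleanup);

        char cmd[8];
        if (nrhs < 2 || mxGetString(prhs[0], cmd, sizeof(cmd)))
            mexErrMsgTxt("Usage: intmap('put', key, value) or intmap('get', key)");
        int key = (int)mxGetScalar(prhs[1]);

        if (!std::strcmp(cmd, "put") && nrhs == 3) {
            mxArray *copy = mxDuplicateArray(prhs[2]);
            mexMakeArrayPersistent(copy);          // keep the copy alive across calls
            std::map<int, mxArray*>::iterator it = table.find(key);
            if (it != table.end()) mxDestroyArray(it->second);
            table[key] = copy;
        } else if (!std::strcmp(cmd, "get")) {
            std::map<int, mxArray*>::iterator it = table.find(key);  // O(log n)
            if (it == table.end()) mexErrMsgTxt("Key not found.");
            plhs[0] = mxDuplicateArray(it->second);                  // hand back a copy
        } else {
            mexErrMsgTxt("Unknown command.");
        }
    }

An std::unordered_map would make lookups O(1), though note that much of the measured containers.Map cost may be MATLAB's per-call method dispatch rather than the underlying data structure, so the win could be smaller than the asymptotics suggest.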

iPhone Objective-C, malloc or NSMutableData?

I need a volatile block of memory that is constantly written and rewritten by multiple threads. The data will be made thread-safe using @synchronized whether I use malloc'd memory or NSMutableData.
My question is: which is more recommended for speed? Since I'm running recursively calculated equations on the matrix of data, I need to be able to allocate, retrieve, and set the data as quickly as possible.
I'm going to do my own research on the subject, but I was wondering if anyone knew off-hand whether the overhead of Objective-C's NSMutableData would introduce speed setbacks.
re: psychotik's suggestion: volatile is a keyword in C that basically tells the compiler to avoid optimizing usage of the symbol it's attached to. This is important for multithreaded code, or code that directly interfaces with hardware. However, it's not very useful for working with blocks of memory (from malloc() or NSData.) As psychotik said, it's for use with primitives such as an int or a pointer (i.e. the pointer itself, not the data it points to.) It's not going to make your data access any faster, and may in fact slow it down by defeating the compiler's optimization tricks.
For cross-thread synchronization, your fastest bet is, I think, an OSSpinLock if you don't need recursive access, or a pthread_mutex set up as recursive if you do. Keep in mind OSSpinLock is, as the name suggests, a spin lock, so certain usage patterns make it less efficient than a pthread_mutex, but it's also extremely close to the metal (it's based off the hardware's atomic get/set operations.)
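For the recursive case, the mutex setup looks roughly like this (a sketch with error checking omitted; the function and variable names are invented):

    #include <pthread.h>
    #include <stddef.h>

    // A mutex the same thread may lock repeatedly (handy for recursive
    // calculations), as long as each lock is matched by an unlock.
    static pthread_mutex_t dataLock;

    static void initDataLock(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&dataLock, &attr);
        pthread_mutexattr_destroy(&attr);  // the mutex keeps its own copy
    }

    static void updateMatrix(double *block, size_t n)
    {
        pthread_mutex_lock(&dataLock);   // safe to re-enter from the same thread
        for (size_t i = 0; i < n; ++i)
            block[i] *= 2.0;             // stand-in for the real recursive update
        pthread_mutex_unlock(&dataLock);
    }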
If your data really is being accessed frequently enough that you're concerned with locking performance, you'll probably want to avoid NSData and just work with a block of memory from malloc()--but, without knowing more about what you're trying to accomplish or how frequently you're accessing the data, a solution does not readily present itself. Can you tell us more about your intent?

Obj-C circular buffer object, implementing one?

I've been developing for the iPhone for quite some time, and I've been wondering whether there's an array object that uses a circular buffer in Obj-C, like Java's Stack, List, or Queue?
I've been tinkering with NSMutableArray, testing its limits... and it seems that after 50k simple objects inside the array, the application slows down significantly.
So, is there any better solution than NSMutableArray (which becomes very slow with huge amounts of data)? If not, can anyone tell me about a way to create such an object (would that involve chained node objects?)?
Bottom line: would populating a UITableView directly from an SQLite DB be smart? It wouldn't require memory for an array or anything, just the queries, and SQLite is fast and not memory-grinding.
Thank you very much for your time and attention,
~ Natanavra.
From what I've been thinking, it seems that going for Quinn's class is possibly the best option.
I have another question - would it be faster or smarter to load everything straight from the SQLite DB instead of creating an object and pushing it into an array?
Thank you in advance,
~ Natanavra.
Apologies for tooting my own horn, but I implemented a C-based circular buffer in CHDataStructures. (Specifically, check out CHCircularBufferQueue and CHCircularBufferStack.) The project is open source and has benchmarks which demonstrate that a true circular buffer is quite fast when compared to NSMutableArray in the general case, but results will depend on your data and usage, as well as the fact that you're operating on a memory-constrained device (e.g. iPhone). Hope that helps!
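The core trick behind a true circular buffer is small enough to sketch: a fixed slab of storage plus head/count indices that wrap around, so both enqueue and dequeue are O(1) with no element shifting (a minimal illustration in C++, not the CHDataStructures code):

    #include <cstddef>
    #include <vector>

    // Fixed-capacity circular queue: indices wrap modulo the capacity,
    // so push and pop are O(1) and elements are never shifted.
    template <typename T>
    class CircularQueue {
        std::vector<T> slots;
        std::size_t head;   // index of the oldest element
        std::size_t count;  // number of stored elements
    public:
        explicit CircularQueue(std::size_t capacity)
            : slots(capacity), head(0), count(0) {}

        bool push(const T &value) {   // enqueue at the tail
            if (count == slots.size()) return false;       // full
            slots[(head + count) % slots.size()] = value;
            ++count;
            return true;
        }

        bool pop(T &out) {            // dequeue from the head
            if (count == 0) return false;                  // empty
            out = slots[head];
            head = (head + 1) % slots.size();
            --count;
            return true;
        }
    };

Overwriting the oldest element instead of rejecting the push is a one-line change if ring-overwrite semantics are wanted.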
If you're seeing performance issues, measure where your app is spending its time; don't just guess. Apple provides an excellent set of performance measurement tools.
It's trivial to have NSMutableArray act like a stack, list, queue, etc. using the various insertObject:atIndex: and removeObjectAtIndex: methods. You can write your own subclasses if you want to hardwire the behavior.
I doubt the performance problems you are seeing are caused by NSMutableArray, especially if your point of reference is the much, much slower Java. The problem is most likely the iPhone itself. As noted previously, 50,000 Objective-C objects is not a trivial amount of data in this context, and the iPhone hardware may struggle to manage that much data.
If you need some kind of high-performance array of bytes, you could use one of the Core Foundation arrays, or roll your own in plain C and then wrap it in a custom class.
It sounds to me like you need to switch to Core Data so you don't have to keep all of this in memory. Core Data will efficiently fetch what you want only when you need it.
You can use STL classes in "Objective-C++" - which is a fancy name for Objective-C making use of C++ classes. Just name those source files that use C++ code with a ".mm" extension and you'll get the mixed runtime.
Objective-C objects are not really "simple," so 50,000 of them is going to be pretty demanding. Write your own in straight C or C++ if you want to avoid the bottlenecks and resource demands of the Objective-C runtime.
A rather lengthy and non-theoretical discussion of the overhead associated with convenience:
http://www.cocoabuilder.com/archive/cocoa/35145-nsarray-overhead-question.html#35128
And some simple math for simple people:
All it takes to make an object as opposed to a struct is a single pointer at the beginning.
Let's say that's true, and let's say we're running on a 32-bit system with 4-byte pointers.
4 bytes × 50,000 objects = 200,000 bytes
That's nearly 200 KB worth of extra memory that your data suddenly needs just because you used Objective-C. Now compound that with the fact that whatever NSArray you add those objects to will double it by keeping its own set of pointers to those objects, and you've just chewed up 400 KB of RAM just so you could use a couple of convenience wrappers.
Refresh my memory here... Are swap files on hard drives as fast as RAM? How much RAM is there in an iPhone? How many function calls and stack frames does it take to send an object a message? Why isn't IOKit written in Objective-C? How many of Apple's flagship applications that do a lot of DSP use AppKit? Anybody got a copy of otool they can check with? I'm seeing zero here.