I am a medical researcher with code written in MATLAB 2009b that runs very slowly because of a self-referential loop (not sure of the programming lingo here), i.e., the results of the first iteration are used during the second iteration, etc. (I have vectorized it to a fare-thee-well. I have run the Profiler.)
I'd like to convert the slow parts of the code to a mex function. I learned Fortran in the early 1970s but haven't used it since. The code I need to convert doesn't do anything fancy, it is just a long numerical calculation.
My question is: what would be the easiest-to-relearn version of Fortran adequate for this purpose, and what compiler works best on the Intel Mac for this? I found information comparing syntax in MATLAB to Fortran 90 for example, and the conversion doesn't look like it would be too daunting for me. However, again, I am no programmer.
I am using a MacBook Pro with OS 10.6.
Appreciate any help, thanks.
I'd recommend using modern Fortran, at least Fortran 90/95, as the syntax is much more forgiving and almost all compilers now support it.
On a Mac I would recommend gfortran from here. It's not the most recent version, but it's well integrated with Apple build tools (you will need to install Xcode from your Mac OS DVD) and works well. In the numerical Python community, which depends a lot on Fortran extensions, this build is highly recommended.
I haven't actually used Fortran MEX on the Mac, but I think it should be fairly straightforward if you follow the MEX documentation. And as you say, translating code from MATLAB to Fortran shouldn't be too bad (it's better if you can avoid calling MATLAB functions, but Fortran has sensible slicing and array access).
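For orientation, the MEX gateway itself is small whatever language you write the computation in (MATLAB also ships equivalent Fortran gateway examples). Here is a minimal sketch in C++; the recurrence y(i) = a*y(i-1) + x(i) and the coefficient a are just hypothetical stand-ins for the kind of self-referential loop you describe:

#include "mex.h"

// y(1) = x(1); y(i) = a*y(i-1) + x(i) -- a stand-in recurrence.
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (nrhs != 2)
        mexErrMsgTxt("Usage: y = recloop(x, a)");

    mwSize n = mxGetNumberOfElements(prhs[0]);
    const double *x = mxGetPr(prhs[0]);
    double a = mxGetScalar(prhs[1]);

    plhs[0] = mxCreateDoubleMatrix(n, 1, mxREAL);
    double *y = mxGetPr(plhs[0]);

    if (n == 0) return;
    y[0] = x[0];
    for (mwSize i = 1; i < n; ++i)
        y[i] = a * y[i - 1] + x[i];
}

Compile it from the MATLAB prompt with mex recloop.cpp and call it as y = recloop(x, a).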
Well, you have probably found a solution already. However, I will say this: Matlab has been getting faster and faster. However, making full use of Matlab's JIT is sometimes not intuitive. MathWorks used to say to vectorize code for speed. Then they said to write everything in explicit loops. I'm actually not certain what the current best practice is.
What I'm saying is: before you go to Fortran, find out the best practice and implement it. That may give you enough of a speed-up right there.
Also, are you absolutely certain that you have isolated the slowdown to a loop? Have you been using the profiler? You probably have, since you sound experienced. I just thought I'd mention it.
Good luck,
Ariel
Around 15 years ago, I was writing .xll addins for Excel using the C-API, mostly for spreadsheet formulae (complex and time-consuming calcs), and written in C++ (very much my preferred language). I subsequently did a few things with C# wrappers.
After a long break, during which Excel and Visual Studio have moved on apace, I am once again looking at building an addin to use compiled code for a lengthy optimization, rather than using a VBA script. The optimization involves taking large arrays of data out of worksheets, crunching them and then writing the results back into the workbook (as I want to chart them etc later).
NB: This is for my own use; I am not planning on distributing any solution.
In pseudo Excel VBA code what I'd like to have is something like this:
Dim obj as MyOptimizer
Dim rngInputData as Range
obj.SetData(rngInputData)
obj.Optimize
Dim v as vOutputData
v = obj.GetResult
I am a little bewildered by the options available (in no particular order):
code a C-style dll with a VBA xla wrapper
code a C-style dll with an Excel.DNA C# wrapper
code a COM object of some sort (though I am not sure which Visual Studio template to use)
Something else ...
A COM object has its attractions as it will hold state, and I can create an instance of my object without having to write a clunky handle-passing system. The other consideration is the amount of marshalling between different layers (I'll be using a lot of 2-D arrays, though only of doubles), and the type-checking of the inputs.
I'm looking for some guidance at this crossroads, before I go down the rabbit hole of one particular method ... then of course, I'll be back with more questions!
Thanks!
If C++ is still very much your preferred language, you should certainly consider that. It will give you the best performance possible, and leverage your knowledge of the C API. If you can afford it, I would also recommend you have a look at the XLL+ library. While expensive, I think if you look at the features you'll find it brings a lot of value.
I develop Excel-DNA and very much prefer to avoid C++ in favour of the .NET platform.
So, while biased, I think it will be an even better fit for your requirements. It is a vastly easier environment to work in compared to C++ (in my opinion - no XLOPERs!) However, there is some overhead in the marshaling and so you'd have some compromise on the performance. You don't indicate what range sizes or how fast you expect the interchange to happen. With Excel-DNA, I expect you can easily read, process and write a block of a million numbers in about a second (see my answer here: https://stackoverflow.com/a/3868370/44264). Similarly, a small block of 10 cells with numbers can be read and written more than 10,000 times in a second. Things will get a bit slower if you are working with long strings rather than numbers, but you suggest this is not the case.
While you should expect calculation code (apart from the Excel interop) to be very fast with C# and .NET, you can also talk to C libraries from .NET very efficiently. So having core calculation libraries in C and then using the P/Invoke mechanism to call them from .NET is a completely viable plan, and you still get the (large) benefit of a more pleasant environment for the Excel parts.
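As a sketch of what that split looks like (the names here are hypothetical, not a real project): the native side exports a plain C-linkage function from a DLL, and the .NET wrapper binds to it with a matching [DllImport("core_calc.dll")] declaration.

// core_calc.cpp -- build as a native DLL, e.g. cl /LD core_calc.cpp
// Exported with C linkage so a .NET wrapper can P/Invoke it.
extern "C" __declspec(dllexport)
double OptimizeBlock(const double* data, int rows, int cols)
{
    // Stand-in computation: sum the block. Real code would run the
    // lengthy optimization and write results via an out-parameter.
    double sum = 0.0;
    for (int i = 0; i < rows * cols; ++i)
        sum += data[i];
    return sum;
}

Marshalling a 2-D array of doubles across this boundary is cheap, since a blittable array can be pinned and passed as a pointer rather than copied, which is why the hybrid plan performs well.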
VBA and COM approaches to reading / writing with Excel will not perform quite as well as the Excel-DNA / .NET approach (which also uses the C API for this kind of thing under the hood, rather than COM). Still, when used properly the COM overhead is not terrible, and might not on its own be a showstopper for you.
I am interested in this kind of optimization approach myself, so would be happy to help if you do take the Excel-DNA approach. The best place for Excel-DNA questions is the Excel-DNA Google group.
Getting started with Excel-DNA would look like this:
Download and install the (free) Visual Studio 2019 Community Edition.
Select the ".NET desktop development" workload when installing. You don't need to check the Office options when installing for Excel-DNA add-ins.
Then make a new C# "Class Library (.NET Framework)" project. It's important at this step not to pick ".NET Standard" or ".NET Core" (long story...).
Then in your project install the "ExcelDna.AddIn" package from NuGet.
Read and follow the instructions in the readme that pops up.
Paste in the code snippet from here: https://stackoverflow.com/a/3868370/44264 press F5 and test in Excel.
After the slow Visual Studio install, it will only take a few minutes, and you'll get some idea of what's involved.
Background
I've experimented with OpenCL and the C++ programming language on a Windows PC, writing simple programs for the PC's GPU. In other words, I used it as a GPGPU. By "simple calculations" I mean that I made two arrays, each containing 1 million items, and then added the corresponding elements of the two arrays.
So if I have X[1000000] and Y[1000000], then I would do:
for (int i = 0; i < 1000000; i++)
{
    Output[i] = X[i] + Y[i];
}
What do I want to do?
I want to do the same thing on the Xbox One S as I did on the PC. I want to write a program for the Xbox One S that normally runs on the CPU (I am NOT trying to create a program that I will force to run on the GPU). When it reaches a calculation like the one described above, it should offload it to the GPU to increase speed.
What did I already do on my Xbox?
I've already made UWP apps (coded in C++) on my Xbox. I wanted to continue writing my GPU programs as UWP apps; however, I didn't find any tutorial that explained how to do this.
What do I expect in the answer?
I want an answer that links to a guide/tutorial, or includes one, explaining how to do this.
The guide/tutorial (the solution to my problem) has to be:
Written in C++
Preferably using OpenCL, but this is not required
What's my knowledge about this kind of software?
I know Visual Studio C++ (MSVC) very well, so it would be ideal if the guide/tutorial used MSVC. I also know a little about Unity, so I would also appreciate it if the guide/tutorial covered writing this in Unity (but still in C++). But if this isn't possible, any other tool would also work fine.
PS: I did a lot of research on this, but didn't find a guide/tutorial that explained it. That's why I'm asking here on Stack Overflow. I'm pretty new to Stack Overflow, so if this question is poorly written, please don't just say "this question doesn't meet the requirements".
You have to apply to the ID@Xbox program. To do this, you need a company. Applying to this program isn't just a fill-out-a-form-and-get-automatically-accepted thing; Microsoft will vet you. Microsoft doesn't want to give its software development kits to just anybody, mainly to prevent cheating in games. Unfortunately, this isn't possible any other way.
After a long period of reading the theory behind neural networks, I finally want to start my own project in object recognition.
However, I struggle to find a practical entry point. I want to use C#, C++, or C; however, all the new tutorials seem to involve newer languages such as Python.
To start with, I would especially like to reimplement the concepts of Yann LeCun's publications about object recognition.
Which programming language is recommended? And much more important: which framework do I use? There seem to be dozens of frameworks (AForge, Apache Mahout, OpenCV), and my theoretical knowledge seems too impractical to differentiate between them.
I want to program a simple, independent neural network application which should be easily trainable, and I don't want to reimplement classes such as neuron or layer, so that I can focus on the architecture at the beginning.
Thanks, and sorry for the simple, probably often-asked question; I just couldn't find anything matching.
Greetings
Nex
Disclosure: I'm not an expert.
It depends on what exactly you want to do.
If you want to build something from scratch, probably the easiest environment to start prototyping in is MATLAB/Octave, because it's high level and offers fast matrix manipulation, nice math support (like numeric derivatives), and robust plotting to quickly verify your models. Once you have your prototype, you can port it to C/C++ to make it faster, more space efficient, portable, etc.
If you want to use existing tools/techniques and just play with parameters (preprocessing, feature selection, etc.) to find the best model for you, I would recommend starting from R and the caret package, or Python (I don't remember the package name).
If you want to use NNs in a cluster on big data, then I would try existing frameworks like OpenCV (not sure if Mahout provides NNs).
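If you do start from OpenCV in C++, note that its ml module already ships a multilayer perceptron, so you get the neuron/layer plumbing for free, as the question asks. A minimal sketch against the OpenCV 3.x API, training XOR (layer sizes and training parameters are just illustrative):

#include <opencv2/ml.hpp>
#include <iostream>

int main()
{
    // Four XOR samples, one per row; targets in [-1, 1] to match the
    // symmetric sigmoid activation.
    float samplesData[4][2] = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
    float responsesData[4][1] = { {-1}, {1}, {1}, {-1} };
    cv::Mat samples(4, 2, CV_32F, samplesData);
    cv::Mat responses(4, 1, CV_32F, responsesData);

    cv::Ptr<cv::ml::ANN_MLP> mlp = cv::ml::ANN_MLP::create();
    mlp->setLayerSizes((cv::Mat_<int>(1, 3) << 2, 4, 1)); // in, hidden, out
    mlp->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM, 1, 1);
    mlp->setTrainMethod(cv::ml::ANN_MLP::BACKPROP, 0.1, 0.1);
    mlp->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER, 1000, 1e-6));

    mlp->train(cv::ml::TrainData::create(samples, cv::ml::ROW_SAMPLE, responses));

    cv::Mat output;
    mlp->predict(samples, output);
    std::cout << output << std::endl; // should approach -1, 1, 1, -1
    return 0;
}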
Google just released their TensorFlow framework.
It's perfect to start with and offers a lot of features, even for sophisticated NN architectures. I highly recommend it to everyone.
I've been using OpenCV for quite a while now and was wondering if switching to MATLAB would be a good idea. As far as I know, they are both much the same, with MATLAB built over underlying OpenCV libraries. OpenCV is open source, which is a definite advantage, and is supported on more platforms.
I'm trying algorithms specific to pupil detection, so I need the results to be really precise.
Does anyone know of any advantages in speed, processing, or built-in functions that MATLAB offers?
If you already know OpenCV, then stick with OpenCV. Currently OpenCV is the most comprehensive open-source library for computer vision, and it has a large user community. OpenCV has more functions for computer vision than Matlab. Many of its functions are implemented on the GPU. The library is continuously updated (an updated version is released approximately every 3 to 4 months). In general, C++ OpenCV code runs faster than Matlab code (and if it's not fast enough, you can make it faster by optimizing the source code).
Matlab is useful for rapid prototyping, and Matlab code is very easy to debug. It has good documentation and support. However, as others have mentioned, Matlab is not open source, its licence is pretty pricey, and its programs are not portable. Matlab is an interpreted language, which negatively affects its performance. Performance matters a lot in computer vision, especially if you are doing real-time video processing. Matlab programs can be made fast too, but you will have to rely on high-level functions (i.e. built-in functions professionally written in C), MEX functions (your own compiled C code), and you'll have to learn how to vectorize your code to achieve decent speed.
You haven't mentioned how you are using OpenCV, so I am going to assume that you are using C++; in case you are using Python, please read this page.
If you are planning to use the GPU for processing, then I would suggest you stick to C++. Of course, there are loads of other optimizations you can do to your code.
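For a concrete picture of that GPU path, here is a minimal sketch against the OpenCV 3.x CUDA modules (this assumes OpenCV was built with CUDA support; the file name and threshold values are illustrative, not tuned for real pupil detection):

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/cudaarithm.hpp>

int main()
{
    // Hypothetical input image of an eye, loaded as grayscale.
    cv::Mat img = cv::imread("eye.png", cv::IMREAD_GRAYSCALE);

    cv::cuda::GpuMat d_img, d_thresh;
    d_img.upload(img);                          // host -> device
    cv::cuda::threshold(d_img, d_thresh, 30, 255,
                        cv::THRESH_BINARY_INV); // dark pupil -> white blob
    cv::Mat result;
    d_thresh.download(result);                  // device -> host
    return 0;
}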
For MATLAB, there are some fairly basic things that can be done as well.
At the end of the day, I would say that the closer you are to a machine-level language, the better your performance is going to be. But of course, using C can be a pain, since there is a HIGH chance of writing unoptimized code and memory leaks. For this reason, C++ gives the best trade-off.
HTH
Your question does not really make sense.
OpenCV is a C++ library for carrying out computer vision tasks. Apart from C++, there is support for other programming languages via bindings.
MATLAB is a full scientific suite that consists of a massive IDE with its own language.
If you want your code to run in MATLAB, then you write MATLAB code. But then you will also need to install a 4GB IDE, and pay for a fairly expensive license.
My personal choice is to use OpenCV with the Python language bindings, as this gives me a nice scripting interface to do matrix operations (arguably somewhat more cluttered than MATLAB's) while still having easy access to OpenCV-functions.
If you really understand OpenCV, you will definitely never think about switching from OpenCV to MATLAB.
You can use OpenCV with Python or C++, and even Java, etc.
Actually, you should not rely on OpenCV alone to complete your whole task.
Like OpenCV, other libraries also exist.
For example,
numpy -> for fast numeric calculation
matplotlib -> to show figure windows etc., like MATLAB
scipy -> for fast scientific calculation
If you use your_programming_language + OpenCV + matplotlib + numpy + scipy, you will definitely be amazed by OpenCV.
And don't worry about how to combine these libraries. Just import them and do your actual coding. That's all.
This semester, I'm implementing a compressed-sensing algorithm as an iPhone app. To do this, I'll need some good matrix/linear algebra libraries. I'm a little new to both iOS and Python, and am looking for some help evaluating my options.
I know the iPhone has the Accelerate framework, which includes vecLib, BLAS, and LAPACK, but I'm not familiar with their APIs (and they seem fairly confusing).
I've played around with Python/numpy, and I really like how simple it is to use - if I have the choice, I'd prefer to use numpy over Accelerate.
I know it's possible to embed Python, but I have had little luck on my own. I tried to include Enthought's EPD.framework in an Xcode project, but didn't get it to work after playing around for an hour or so. I would imagine that compiling numpy would be worse.
As another alternative, could I use Cython (http://cython.org/) to generate C files then call functions from that? I also attempted this, but ran into more issues with including a .so library and calling it. Is there any way to have Cython generate .c and .h files? Would said .c and .h files still depend on numpy?
I've read some stuff about PyInstaller and freeze.py. Could either of those help me here?
Are there any options besides Accelerate or Python+numpy? Is Python+numpy a good option, or will it be hard to compile/build? Is Cython a valid solution?
Thank you!
Take the time to learn the Accelerate commands. If you're doing complex math, these are the functions you want to be using. They are much faster, far more likely to continue to be supported and tuned for future hardware, and they use less energy than naive solutions.
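To give a feel for what those commands look like, here is a simple million-element vector add written with vDSP (Accelerate's plain C interface, callable from C, C++, or Objective-C++). This is just a sketch of one function, vDSP_vadd:

#include <Accelerate/Accelerate.h>
#include <vector>

int main()
{
    const vDSP_Length n = 1000000;
    std::vector<float> x(n, 1.0f), y(n, 2.0f), out(n);

    // vDSP_vadd(A, strideA, B, strideB, C, strideC, count): out = x + y
    vDSP_vadd(x.data(), 1, y.data(), 1, out.data(), 1, n);
    return 0;
}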
The new release of the Swift programming language with iOS 8 allows high-level Python/MATLAB-like code to be written. Accordingly, a framework called swix has been developed that wraps the Accelerate (BLAS/LAPACK/etc.) frameworks.
Code snippet that fully utilizes the Accelerate framework:
var N = 10
var x = ones(N) * pi
var y = ones(N) * phi
var result = (x+y+4)*x
This code can be compiled for the iPhone/iOS. Full details on installation are covered in the swix documentation.
With the release of Swift, and its access to the Accelerate framework, there is little reason to go out of your way to get Python running. With the right frameworks, you can use Swift to write high-level, and performant, code for iOS with syntax similar to Python/numpy, and performance that will be significantly greater than running numpy on iOS.
As other people have already posted, there are various libraries that attempt to 'wrap' the Accelerate framework to provide high-performance with an accessible API. As an alternative to the swix library in another answer, I have had great success with the Upsurge framework. For the kinds of operations that you are probably going to use, Upsurge may have everything that you need.
It provides an easy and detailed interface to a variety of Accelerate functions; matrices, convolution, FFT, linear algebra, mathematics etc. It also supports a lot of these operations on its own custom tensor type. The big advantage that I found over swix when I was deciding between them, was that Upsurge didn't have a dependency on OpenCV and didn't require any bridging headers (it is written in pure swift) which made debugging easier for me.
That being said, they are both great frameworks and either one will cover your needs. I would have a look at both and see which one better fits your needs.
There are libraries for including Python on iOS: Kivy and BeeWare. With these, the entire app is written in Python.
Kivy
does not have a native GUI -- the app looks "Kivy"
can not make native system API calls
this is a more mature project than Beeware
BeeWare
has a native looking GUI
can call native system APIs
(a newer project; less mature)
For Kivy, look at kivy-ios and the GitHub project; for BeeWare, look at python-ios-template.