Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 6 years ago.
This semester, I'm implementing a compressed-sensing algorithm as an iPhone app. To do this, I'll need some good matrix/linear algebra libraries. I'm a little new to both iOS and Python, and am looking for some help evaluating my options.
I know the iPhone has the Accelerate framework, which includes vecLib, BLAS, and LAPACK, but I'm not familiar with their APIs (and they seem fairly confusing).
I've played around with Python/numpy, and I really like how simple it is to use - if I have the choice, I'd prefer to use numpy over Accelerate.
I know it's possible to embed Python, but I have had little luck on my own. I tried to include Enthought's EPD.framework in an Xcode project, but didn't get it to work after playing around for an hour or so. I would imagine that compiling numpy would be worse.
As another alternative, could I use Cython (http://cython.org/) to generate C files then call functions from that? I also attempted this, but ran into more issues with including a .so library and calling it. Is there any way to have Cython generate .c and .h files? Would said .c and .h files still depend on numpy?
I've read some stuff about PyInstaller and freeze.py. Could either of those help me here?
Are there any options besides Accelerate or Python+numpy? Is Python+numpy a good option, or will it be hard to compile/build? Is Cython a valid solution?
Thank you!
Take the time to learn the Accelerate functions. If you're doing complex math, these are the functions you want to be using. They are much faster, far more likely to continue to be supported and tuned for future hardware, and they use less energy than naive implementations.
The release of the Swift programming language with iOS 8 allows high-level, Python/Matlab-like code to be written. Accordingly, a framework called swix has been developed that wraps the Accelerate (BLAS/LAPACK/etc.) frameworks.
A code snippet that fully utilizes the Accelerate framework:
var N = 10
var x = ones(N) * pi
var y = ones(N) * phi
var result = (x+y+4)*x
This code can be compiled for the iPhone/iOS. Full details on installation are covered in the swix documentation.
With the release of Swift and its access to the Accelerate framework, there is little reason to go out of your way to get Python running. With the right frameworks, you can use Swift to write high-level, performant code for iOS with syntax similar to Python/numpy, and performance significantly greater than running numpy on iOS.
As other people have already posted, there are various libraries that attempt to 'wrap' the Accelerate framework to provide high-performance with an accessible API. As an alternative to the swix library in another answer, I have had great success with the Upsurge framework. For the kinds of operations that you are probably going to use, Upsurge may have everything that you need.
It provides an easy and detailed interface to a variety of Accelerate functions: matrices, convolution, FFT, linear algebra, mathematics, etc. It also supports a lot of these operations on its own custom tensor type. The big advantage I found over swix when deciding between them was that Upsurge doesn't have a dependency on OpenCV and doesn't require any bridging headers (it is written in pure Swift), which made debugging easier for me.
That being said, they are both great frameworks and either one will cover your needs. I would have a look at both and see which one better fits your needs.
There are libraries for including Python on iOS: Kivy and Beeware. With both, the entire app is written in Python.
Kivy
does not have a native GUI -- the app looks "Kivy"
can not make native system API calls
this is a more mature project than Beeware
Beeware
has a native looking GUI
can call native system APIs
(a newer project; less mature)
For Kivy, look at kivy-ios and the GitHub project; for Beeware, look at python-ios-template.
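To give a feel for the Kivy side, here is a minimal sketch of a Kivy app. It assumes a standard Kivy install; packaging it for iOS is a separate step handled by the kivy-ios toolchain, and the app name is just an example.
# Minimal Kivy app: a single label on screen.
# Assumes a standard Kivy install; packaging for iOS is done separately with kivy-ios.
from kivy.app import App
from kivy.uix.label import Label

class HelloApp(App):
    def build(self):
        # The widget returned here becomes the root of the UI.
        return Label(text="Hello from Python")

if __name__ == "__main__":
    HelloApp().run()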
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 2 years ago.
Around 15 years ago, I was writing .xll addins for Excel using the C-API, mostly for spreadsheet formulae (complex and time-consuming calcs), and written in C++ (very much my preferred language). I subsequently did a few things with C# wrappers.
After a long break, during which Excel and Visual Studio have moved on apace, I am once again looking at building an addin to use compiled code for a lengthy optimization, rather than using a VBA script. The optimization involves taking large arrays of data out of worksheets, crunching them, and then writing the results back into the workbook (as I want to chart them etc. later).
NB: This is for my own use; I am not planning on distributing any solution.
In pseudo Excel VBA code what I'd like to have is something like this:
Dim obj as MyOptimizer
Dim rngInputData as Range
obj.SetData(rngInputData)
obj.Optimize
Dim v as vOutputData
v = obj.GetResult
I am a little bewildered by the options available (in no particular order):
code a C-style dll with a VBA xla wrapper
code a C-style dll with an Excel-DNA C# wrapper
code a COM object of some sort (though I am not sure which Visual Studio template to use)
Something else ...
A COM object has its attractions as it will hold state, and I can create an instance of my object without having to write a clunky handle-passing system. The other consideration is the amount of marshalling between different layers (I'll be using a lot of 2-D arrays, though only of doubles), and the type-checking of the inputs.
I'm looking for some guidance at this crossroads, before I go down the rabbit hole of one particular method ... then of course, I'll be back with more questions!
Thanks!
If C++ is still very much your preferred language, you should certainly consider that. It will give the best performance possible, and leverage your knowledge of the C API. If you can afford it, I would also recommend you have a look at the XLL+ library. While expensive, I think if you look at the features you'll find it brings a lot of value.
I develop Excel-DNA and very much prefer to avoid C++ in favour of the .NET platform.
So, while biased, I think it will be an even better fit for your requirements. It is a vastly easier environment to work in compared to C++ (in my opinion - no XLOPERs!) However, there is some overhead in the marshaling and so you'd have some compromise on the performance. You don't indicate what range sizes or how fast you expect the interchange to happen. With Excel-DNA, I expect you can easily read, process and write a block of a million numbers in about a second (see my answer here: https://stackoverflow.com/a/3868370/44264). Similarly, a small block of 10 cells with numbers can be read and written more than 10,000 times in a second. Things will get a bit slower if you are working with long strings rather than numbers, but you suggest this is not the case.
While you should expect calculation code (apart from the Excel interop) to be very fast with C# and .NET, you can also talk to C libraries from .NET very efficiently. So having core calculation libraries in C and then using the P/Invoke mechanism to call them from .NET is a completely viable plan, and you still get the (large) benefit of a more pleasant environment for the Excel parts.
VBA and COM approaches to reading / writing with Excel will not perform quite as well as the Excel-DNA / .NET approach (which also uses the C API for this kind of thing under the hood, rather than COM). Still, when used properly the COM overhead is not terrible, and might not on its own be a showstopper for you.
I am interested in this kind of optimization approach myself, so would be happy to help if you do take the Excel-DNA approach. The best place for Excel-DNA questions is the Excel-DNA Google group.
Getting started with Excel-DNA would look like this:
Download and install the (free) Visual Studio 2019 Community Edition.
Select the ".NET desktop development" workload when installing. You don't need to check the Office options when installing for Excel-DNA add-ins.
Then make a new C# "Class Library (.NET Framework)" project. It's important at this step not to pick ".NET Standard" or ".NET Core" (long story...).
Then in your project install the "ExcelDna.AddIn" package from NuGet.
Read and follow the instructions in the readme that pops up.
Paste in the code snippet from here: https://stackoverflow.com/a/3868370/44264, press F5, and test in Excel.
After the slow Visual Studio install, it will only take a few minutes, and you'll get some idea of what's involved.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 9 months ago.
I designed a few simple 3D parts with OpenSCAD and I would like to move on to more complex parts now. As in most other programming languages, that would naturally include starting to re-use code that others have written before, such as functions for rounded/bevelled edges, infill corners, Bézier curves, and common parts like screws and bolts.
How does that work in OpenSCAD? Specifically: What are the language features, idioms and officially recommended good practices of how code reuse is achieved in OpenSCAD?
(You are welcome to include pointers to good examples. But the question is about the mechanisms and good practices for code reuse in OpenSCAD, not about specific code that can be reused.)
Reusable code in OpenSCAD is organized in "libraries", similar to the package or library system in many other languages.
As with all code reuse, there is the problem of library scope overlap, where two libraries solve the same issue. This cannot be truly solved. But as a best practice, I would choose the one most appropriate library for each design project, and then stick with whatever that library has to offer. For example, don't depend on both BOSL and NopSCADlib because you like BOSL for everything except its threading functions, which you like better in NopSCADlib. For a small project, I like to work with only Round Anything, which is small and compact.
To help you get started with choosing a library appropriate for your project, I include a list of examples below that I think show good practices of reusable OpenSCAD code. I had a long look at OpenSCAD libraries recently and this is the result. Most of them come from the official OpenSCAD libraries page, which I found to recommend only a few, but very good, libraries.
My favourite libraries, roughly in my personal order of desirability:
BOSL (source, docs) and BOSL2. (source, docs) "The Belfry OpenScad Library - A library of tools, shapes, and helpers to make OpenScad easier to use." Includes lots of modules and functions to make OpenSCAD code more readable. Overall, it's like MCAD in scope, but much better in execution. BOSL2 is a much extended second edition of BOSL, but as of 2020-11 the author says it is not yet ready for production use.
BOSL includes a very good Bézier library and a threading library.
Round Anything. (source, API, visual overview) "Round-Anything is primarily a set of OpenSCAD utilities that help with rounding parts, but it also embodies a robust approach to developing OpenSCAD parts."
NopSCADlib. (source, docs) A very large library. Use for any kind of machine design, as it contains nuts, bolts, washers, electronic components, belts etc.. "It contains lots of vitamins (the RepRap term for non-printed parts), some general purpose printed parts and some utilities. There are also Python scripts to generate Bills of Materials (BOMs), STL files for all the printed parts, DXF files for CNC routed parts in a project and a manual containing assembly instructions and exploded views by scraping markdown embedded in OpenSCAD comments, see scripts."
Also contains a 3D sweep function and a thread generation module.
BOLTS. (source, docs) "BOLTS is an Open Library of Technical Specifications." Contains all kinds of models for metal hardware standard parts (example).
dotSCAD. (source) Seems to be one of the best general-purpose libraries for OpenSCAD, being huge, good quality, and well maintained. Mostly focused on math-art parts. For an overview of the designs made by the author of dotSCAD, using that library, see here. For background articles about the designs made with dotSCAD, see here.
MCAD. (source, docs) This is so far the only library shipped with every installation of OpenSCAD, so would qualify as its standard library. No need to tell users of your designs to install anything when you only include MCAD.
Note that currently (as of 2020-11), a large rework is being done to MCAD, with the effect that the dev branch has nearly twice as many commits as the master branch. You'll find many goodies there, but of course users of your design would then have to install the dev branch first.
The problem with MCAD, especially the current master branch, is that I don't find it useful. It's so far rather a non-integrated hotchpotch of contributions from many authors. But since it's the standard library, we should give it a chance. When I have something generally useful, I'll try to contribute it there.
Revolve2. (source, announcement) In terms of speed, this is hands-down the best thread generation library I could find. I did not yet test the threading features in BOSL and NopSCADlib, though.
3D sweep demos. (source) But note that this is demo code rather than a library; first try the sweep module in NopSCADlib.
scad-utils. (source)
Relativity. (source) A library to arrange objects relative to each other. Also includes a CSS-like styling language for objects. Seemingly no longer in active development, but still great to learn really advanced OpenSCAD techniques.
After some research, it seems that the BOSL2 library is the most complete:
BOSL2 Library documentation
BOSL2 Library
[edit 2022: update BOSL to BOSL2]
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 years ago.
After a long spell of reading the theory behind neural networks, I finally want to start my own project in object recognition.
However, I struggle to find a practical entry point. I want to use C#, C++, or C; however, all new tutorials seem to involve newer languages such as Python.
To start, I would especially like to re-implement the concepts from Yann LeCun's publications about object recognition.
Which programming language is recommended? And, much more important: which framework do I use? There seem to be dozens of frameworks (AForge, Apache Mahout, OpenCV) and my theoretical knowledge seems too impractical for me to differentiate between them.
I want to program a simple, independent neural network application that should be easily trainable, and I don't want to re-implement classes such as neuron or layer, so that I can focus on the architecture at the beginning.
Thanks, and sorry for the simple and probably often-asked question; I just couldn't find anything matching.
Greetings
Nex
Disclosure: I'm not an expert.
It depends on what exactly you want to do.
If you want to build something from scratch, probably the easiest language to start prototyping in is Matlab/Octave, because it's high level and offers pretty fast matrix manipulation, nice math support (like numeric derivatives), and robust plotting to quickly verify your models. When you have your prototype, you can port it to C/C++ to make it faster, more space efficient, portable, etc.
If you want to just use existing tools/techniques and play with parameters (preprocessing, feature selection, etc.) to find the best model for you, I would recommend starting from R and the caret package, or Python (I don't remember the package name); a rough sketch of that workflow follows after this list.
If you want to use NNs in a cluster on big data, then I would try existing frameworks like OpenCV (not sure if Mahout provides NNs).
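As a rough illustration of that parameter-search workflow in Python, here is a minimal sketch. It uses scikit-learn, which is my own assumption since the answer above does not name a package, and a small built-in dataset.
# Sketch of a caret-style parameter search in Python.
# scikit-learn is an assumed choice here; the answer does not name a package.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
# Try a couple of network sizes and regularization strengths with cross-validation.
param_grid = {"hidden_layer_sizes": [(32,), (64,)], "alpha": [1e-4, 1e-3]}
search = GridSearchCV(MLPClassifier(max_iter=500), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)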
Google just released their TensorFlow framework.
It's perfect to start with and offers a lot of features, even for sophisticated NN architectures. I highly recommend it to everyone.
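To give an idea of what getting started looks like, here is a minimal sketch. It assumes a current TensorFlow 2.x install; the Keras-style API shown here is newer than the release this answer refers to, and the dataset is the built-in MNIST download.
# Minimal sketch: define and train a small classifier in TensorFlow 2.x.
# Assumes a recent TensorFlow install; MNIST is downloaded on first use.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2)
print(model.evaluate(x_test, y_test))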
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 9 years ago.
I've been using OpenCV for quite a while now and was wondering if switching to MATLAB would be a good idea. As far as I know, they are much the same, with MATLAB built over underlying OpenCV libraries. OpenCV is open source, which is a definite advantage, and is supported on more platforms.
I'm trying algorithms specific to pupil detection, so I need the results to be really precise.
Does anyone know of any advantages, by way of speed, processing, or built-in functions, that MATLAB offers?
If you already know OpenCV then stick with OpenCV. Currently OpenCV is the most comprehensive open-source library for computer vision and it has a large user community. OpenCV has more functions for computer vision than Matlab. Many of its functions are implemented on the GPU. The library is continuously updated (an updated version is released approximately every 3 to 4 months). In general, C++ OpenCV code runs faster than Matlab code (and if it's not fast enough, you can make it faster by optimizing the source code).
Matlab is useful for rapid prototyping and Matlab code is very easy to debug. It has good documentation and support. However, as others have mentioned, Matlab is not open source, its licence is pretty pricey, and its programs are not portable. Matlab is an interpreted language, which negatively affects its performance. Performance matters a lot in computer vision, especially if you are doing real-time video processing. Matlab programs can be made fast too, but you will have to rely on high-level functions (i.e. built-in functions professionally written in C), MEX functions (your own compiled C code), and you'll have to learn how to vectorize your code to achieve decent speed.
You haven't mentioned how you are using OpenCV, so I am going to assume that you are using C++; in case you are using Python, please read this page.
If you are planning to use the GPU for processing, then I would suggest you stick to C++. Of course, there are loads of other optimizations you can do to your code.
For MATLAB, there are some fairly basic things that can be done as well.
At the end of the day, I would say that the closer you are to the machine level, the better your performance is going to be. But of course, using C can be a pain since there is a HIGH chance of writing unoptimized code and introducing memory leaks. For this reason, C++ gives the best trade-off.
HTH
Your question does not really make sense.
OpenCV is a C++ library for carrying out computer vision tasks. Apart from C++, there is support for other programming languages via bindings.
MATLAB is a full scientific suite that consists of a massive IDE with its own language.
If you want your code to run in MATLAB, then you write MATLAB code. But then you will also need to install a 4GB IDE, and pay for a fairly expensive license.
My personal choice is to use OpenCV with the Python language bindings, as this gives me a nice scripting interface to do matrix operations (arguably somewhat more cluttered than MATLAB's) while still having easy access to OpenCV-functions.
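As a rough illustration of that scripting interface, here is a sketch. It assumes the opencv-python and numpy packages are installed, and "image.png" is a placeholder path, not a real file.
# Sketch of the OpenCV Python bindings: images are plain numpy arrays,
# so OpenCV calls and numpy matrix operations mix freely.
# Assumes opencv-python and numpy are installed; "image.png" is a placeholder path.
import cv2
import numpy as np

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)  # numpy uint8 array
blurred = cv2.GaussianBlur(img, (5, 5), 0)           # OpenCV function
edges = cv2.Canny(blurred, 50, 150)                  # OpenCV function
mask = (img > 128).astype(np.uint8) * 255            # plain numpy operation
print(img.shape, edges.mean(), mask.sum())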
If you really understand OpenCV, you will definitely never think about switching from OpenCV to MATLAB.
You can use OpenCV with Python or C++, and even Java, etc.
Actually, you should not rely on OpenCV alone to complete your whole task.
Like OpenCV, other libraries also exist.
For example:
numpy -> for fast numeric calculation
matplotlib -> to show figure windows etc., like MATLAB
scipy -> for fast scientific calculation
If you use your_programming_language + OpenCV + matplotlib + numpy + scipy, you will definitely be impressed by what OpenCV can do.
And don't worry about how to mix these libraries together. Just import them and do your actual coding. That's all.
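A small sketch of how those libraries combine is below. It assumes opencv-python, numpy, scipy, and matplotlib are installed, and "photo.jpg" is a placeholder path.
# Sketch of mixing OpenCV with numpy, scipy, and matplotlib.
# Assumes opencv-python, numpy, scipy, and matplotlib are installed;
# "photo.jpg" is a placeholder path.
import cv2
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # OpenCV: colour conversion
smoothed = ndimage.median_filter(gray, size=5)        # scipy: filtering
hist = np.bincount(smoothed.ravel(), minlength=256)   # numpy: fast numerics
print("most common grey level:", hist.argmax())

plt.imshow(smoothed, cmap="gray")  # matplotlib: figure window, like MATLAB
plt.show()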
Closed. This question is opinion-based. It is not currently accepting answers. Closed 5 years ago.
Now that Apple has relaxed the restrictions on developer tools/programs, I wonder what tempts developers to languages other than the one Apple offers by default, Objective-C, which is quite fun to program with. What missing features make you program in something else instead?
Lack of Objective-C expertise or a large/complex code base in another language would be among common reasons.
Cross-platform coding might well be another.
I haven't done any iPhone development yet, but generally speaking, here's a few reasons:
Cross-platform development
The other language suits your coding style better
The other language is a better tool for the job
You are comfortable in the other language and don't have the time / budget / motivation to learn Objective-C
Existing libraries / codebase
Specific tools you might want to use
Testing some concepts in Objective-C can sometimes be kind of tedious to set up. Sometimes you just want to see how a single method works, or play around with an object's functionality.
Setting up a new project is somewhat tedious, and it's not always feasible to incorporate the test code into a new project.
In this case, I do one of two things:
Keep an empty project around specifically for testing things
Drop down to the Terminal and use irb (or PyObjC) to play with the objects in Ruby or Python.
In a nutshell, the thing that's missing is the ability to use Objective-C in an interpreted manner. You have to use another language (like Ruby or Python) to do this.
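For example, a quick interactive session with PyObjC might look like the sketch below. It assumes the pyobjc package is installed and uses only Foundation classes; the specific objects are just illustrations.
# Sketch of poking at Cocoa objects interactively via PyObjC.
# Assumes the pyobjc package is installed; run these lines in a Python REPL.
from Foundation import NSMutableArray, NSString

arr = NSMutableArray.array()
arr.addObject_(NSString.stringWithString_("hello"))
arr.addObject_("world")  # Python strings bridge to NSString automatically
print(arr.count(), arr.componentsJoinedByString_(" "))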
I recently wrote some networking code in Python, then had to translate it into Objective-C for use on the iPad. A typical line of clear Python would become five or ten lines of busy-work C. I just work faster in higher-level languages; the language puts up less resistance and requires fewer forms to be filled out.
I have ported a couple of tiny language interpreters (for my own use, not for App Store distribution) to the iPhone. This allows me to write short snippets of code on the road, without having to carry my Mac, and run them locally. I don't know of any small Objective-C interpreters, and the language is not really designed for interactive use.