Advice on which Excel addin development system to use? [closed] - excel-addins

Around 15 years ago, I was writing .xll addins for Excel using the C-API, mostly for spreadsheet formulae (complex and time-consuming calcs), and written in C++ (very much my preferred language). I subsequently did a few things with C# wrappers.
After a long break, during which Excel and Visual Studio have moved on apace, I am once again looking at building an addin to use compiled code for a lengthy optimization, rather than using a VBA script. The optimization involves taking large arrays of data out of worksheets, crunching them and then writing the results back into the workbook (as I want to chart them etc. later).
NB. This is for my own use, I am not planning on distributing any solution
In pseudo Excel VBA code what I'd like to have is something like this:
Dim obj As New MyOptimizer
Dim rngInputData As Range
obj.SetData rngInputData
obj.Optimize
Dim v As Variant
v = obj.GetResult
I am a little bewildered by the options available (in no particular order):
code a C-style dll with a VBA xla wrapper
code a C-style dll with an Excel-DNA C# wrapper
code a COM object of some sort (though I am not sure which Visual Studio template to use)
Something else ...
A COM object has its attractions as it will hold state, and I can create an instance of my object without having to write a clunky handle-passing system. The other consideration is the amount of marshalling between different layers (I'll be using a lot of 2-D arrays, though only of doubles), and the type-checking of the inputs.
I'm looking for some guidance at this crossroads, before I go down the rabbit hole of one particular method ... then of course, I'll be back with more questions!
Thanks!

If C++ is still very much your preferred language, you should certainly consider that. It will give you the best performance possible, and leverage your knowledge of the C API. If you can afford it, I would also recommend you have a look at the XLL+ library. While expensive, I think if you look at the features you'll find it brings a lot of value.
I develop Excel-DNA and very much prefer to avoid C++ in favour of the .NET platform.
So, while biased, I think it will be an even better fit for your requirements. It is a vastly easier environment to work in compared to C++ (in my opinion - no XLOPERs!) However, there is some overhead in the marshaling, so you'd have some compromise on the performance. You don't indicate what range sizes you're working with or how fast you expect the interchange to happen. With Excel-DNA, I expect you can easily read, process and write a block of a million numbers in about a second (see my answer here: https://stackoverflow.com/a/3868370/44264). Similarly, a small block of 10 cells with numbers can be read and written more than 10,000 times in a second. Things will get a bit slower if you are working with long strings rather than numbers, but you suggest this is not the case.
While you should expect calculation code (apart from the Excel interop) to be very fast with C# and .NET, you can also talk to C libraries from .NET very efficiently. So having core calculation libraries in C and then using the P/Invoke mechanism to call them from .NET is a completely viable plan, and you still get the (large) benefit of a more pleasant environment for the Excel parts.
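As a rough illustration of that split (a minimal sketch only - the DLL name, the exported OptimizeBlock routine and its signature are hypothetical), the .NET side of a P/Invoke call might look like this:

using System;
using System.Runtime.InteropServices;

public static class NativeOptimizer
{
    // Hypothetical entry point exported from a native C/C++ DLL, e.g.
    //   extern "C" __declspec(dllexport)
    //   double OptimizeBlock(const double* data, int rows, int cols);
    [DllImport("MyOptimizer.dll", CallingConvention = CallingConvention.Cdecl)]
    private static extern double OptimizeBlock(double[] data, int rows, int cols);

    // Flatten the 2-D array and hand it to native code in a single call,
    // so the marshalling cost is paid once per block rather than per cell.
    public static double Optimize(double[,] values)
    {
        int rows = values.GetLength(0);
        int cols = values.GetLength(1);
        var flat = new double[rows * cols];
        Buffer.BlockCopy(values, 0, flat, 0, flat.Length * sizeof(double));
        return OptimizeBlock(flat, rows, cols);
    }
}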
VBA and COM approaches to reading / writing with Excel will not perform quite as well as the Excel-DNA / .NET approach (which also uses the C API for this kind of thing under the hood, rather than COM). Still, when used properly the COM overhead is not terrible, and might not on its own be a showstopper for you.
I am interested in this kind of optimization approach myself, so would be happy to help if you do take the Excel-DNA approach. The best place for Excel-DNA questions is the Excel-DNA Google group.
Getting started with Excel-DNA would look like this:
Download and install the (free) Visual Studio 2019 Community Edition.
Select the Desktop .NET Development workload when installing. You don't need to check the Office options when installing for Excel-DNA add-ins.
Then make a new C# "Class Library (.NET Framework)" project. It's important at this step not to pick ".NET Standard" or ".NET Core" (long story...).
Then in your project install the "ExcelDna.AddIn" package from NuGet.
Read and follow the instructions in the readme that pops up.
Paste in the code snippet from here: https://stackoverflow.com/a/3868370/44264, press F5 and test in Excel.
After the slow Visual Studio install, it will only take a few minutes, and you'll get some idea of what's involved.
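To give a flavour of what the add-in code looks like once the package is installed, here is a minimal sketch along the lines of the linked snippet (the function name and the doubling logic are made up purely for illustration), using the ExcelDna.Integration attributes:

using ExcelDna.Integration;

public static class MyFunctions
{
    // Exposed to Excel as a worksheet function, e.g. =ProcessBlock(A1:D1000)
    [ExcelFunction(Description = "Doubles every numeric cell in the input range")]
    public static object[,] ProcessBlock(object[,] input)
    {
        int rows = input.GetLength(0);
        int cols = input.GetLength(1);
        var result = new object[rows, cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                result[i, j] = input[i, j] is double d ? d * 2.0 : input[i, j];
        return result;
    }
}

The whole block comes into .NET and goes back to the sheet as a single object[,], which is where the bulk-transfer performance mentioned above comes from.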

Related

What are the mechanisms and idioms of code re-use in OpenSCAD? [closed]

I designed a few simple 3D parts with OpenSCAD and I would like to move on to more complex parts now. As in most other programming languages, that would naturally include starting to re-use code that others have written before, such as functions for round/bevel edges, infill corners, Bézier curves, and some common parts like screws and bolts.
How does that work in OpenSCAD? Specifically: What are the language features, idioms and officially recommended good practices of how code reuse is achieved in OpenSCAD?
(You are welcome to include pointers to good examples. But the question is about the mechanisms and good practices for code reuse in OpenSCAD, not about specific code that can be reused.)
Reusable code in OpenSCAD is organized in "libraries", similar to the package or library system in many other languages.
As with all code reuse, there is the problem of library scope overlap, where two libraries solve the same issue. This cannot be truly solved. But as a best practice, I would choose the single most appropriate library for each design project, and then stick with whatever that library has to offer. For example, don't depend on both BOSL and NopSCADlib because you like BOSL for everything except its threading functions, which you like better in NopSCADlib. For a small project, I like to work with only Round Anything, which is small and compact.
To help you get started with choosing a library appropriate for your project, I include a list of examples below that I think show good practices of reusable OpenSCAD code. I had a long look at OpenSCAD libraries recently and this is the result. Most of them come from the official OpenSCAD libraries page, which I found to recommend only a few, but very good, libraries.
My favourite libraries, roughly in my personal order of desirability:
BOSL (source, docs) and BOSL2. (source, docs) "The Belfry OpenScad Library - A library of tools, shapes, and helpers to make OpenScad easier to use." Includes lots of modules and functions to make OpenSCAD code more readable. Overall, it's like MCAD in scope, but much better in execution. BOSL2 is a much extended second edition of BOSL, but as of 2020-11 the author says it is not yet ready for production use.
BOSL includes a very good Bezier library. Includes a threading library.
Round Anything. (source, API, visual overview) "Round-Anything is primarily a set of OpenSCAD utilities that help with rounding parts, but it also embodies a robust approach to developing OpenSCAD parts."
NopSCADlib. (source, docs) A very large library. Use for any kind of machine design, as it contains nuts, bolts, washers, electronic components, belts etc.. "It contains lots of vitamins (the RepRap term for non-printed parts), some general purpose printed parts and some utilities. There are also Python scripts to generate Bills of Materials (BOMs), STL files for all the printed parts, DXF files for CNC routed parts in a project and a manual containing assembly instructions and exploded views by scraping markdown embedded in OpenSCAD comments, see scripts."
Also contains a 3D sweep function and a thread generation module.
BOLTS. (source, docs) "BOLTS is an Open Library of Technical Specifications." Contains all kinds of models for metal hardware standard parts (example).
dotSCAD. (source) Seems to be one of the best general library for OpenSCAD, being both huge, good quality, and well maintained. Mostly focused on math art parts. For an overview of the designs made by the author of dotSCAD, using that library, see here. For background articles about the designs made with dotSCAD, see here.
MCAD. (source, docs) This is so far the only library shipped with every installation of OpenSCAD, so would qualify as its standard library. No need to tell users of your designs to install anything when you only include MCAD.
Note that currently (as of 2020-11), a large rework is being done to MCAD, with the effect that the dev branch has nearly twice the commits as the master branch. You'll find many goodies here, but of course users of your design would then have to install the dev branch first.
The problem with MCAD, esp. the current master branch, is that I don't find it useful. It's so far rather a non-integrated hotchpotch of contributions from many authors. But since it's the standard library, we should give it a chance. When I have something generally useful, I'd try to contribute it here.
Revolve2. (source, announcement) In terms of speed, this is hands-down the best thread generation library I could find. I did not yet test the threading features in BOSL and NopSCADlib, though.
3D sweep demos. (source) But note that this is rather demo code than a library; first try the sweep module in NopSCADlib.
scad-utils. (source)
Relativity. (source) A library to arrange objects relative to each other. Also includes a CSS-like styling language for objects. Seemingly no longer in active development, but still great to learn really advanced OpenSCAD techniques.
After some research, it seems that the BOSL2 library is the most complete:
BOSL2 Library documentation
BOSL2 Library
[edit 2022: update BOSL to BOSL2]

Planning Application development for a single developer [closed]

I'm wondering how to kick off projects as a single developer in a proper way.
There are a lot of reasons why it's not a good idea to just sit down and hack at the solution you want to reach without any plan or organisation.
In professional software engineering, there are a lot of required steps for planning a piece of software (like writing a vision and scope document, doing requirements engineering and finally planning the architecture).
But what is the common way to plan an application development endeavour for a project with just one single developer? For example: do you use architecture templates like the well-known arc42 template? Do you think it's a bit excessive to do such steps just for ideological reasons, or is this the only right way?
What kind of design/architecture templates do you use for planning projects?
Although you are a single developer, I would consider using a methodology for your development.
When you are working alone on a product you will have to take up several roles to keep an overview.
I would recommend using the RUP process.
It is an agile development approach which can be used by small or bigger groups. Although you are working alone you will profit from this methodology, as it will make your implementation easier. You can adapt the methodology to your needs and decide what is necessary or not for you.
The RUP process consists of four phases.
Inception:
In this phase you gather all requirements, which means that you write down what you actually want to do before you start coding. Coding directly will quickly limit your creativity, as you will soon encounter errors and forget what you actually wanted because you have to fix one problem. Take a week or two and write down everything you expect from your application: what it has to deliver, and what could be a plus (in the next release, for example).
Elaboration:
First you make the architectural decisions: what database will you use, what language, etc. Then you start coding. You code around 80% of your application, leaving the hard stuff out at first. In this phase you should finish the main parts of the GUI and have bindings to the methods used by the GUI. They do not have to be finished.
Construction:
Now you tackle all remaining programming problems and all the little errors which you left out in the elaboration phase. It might be that you encounter new requirements. Estimate how long it would take you to add them in this release. You can then decide if you want to finish them or save them for the next inception phase. You should also finish all comments for the methods in your application.
Transition:
Now you test the product before delivery and fix the last errors. You should also write documentation of what you have implemented. If you wrote something down in the inception, it should not be too hard for you to write the documentation.
When finished, you can start another cycle, beginning again with the inception.
CONS:
- It might be an overhead having a methodology for only one developer
- You might get annoyed
PROS
- You will have good requirements
- Your Development Process will be faster because you do not have to reconsider your next steps while developing. You have done that in the inception
- You will have a good documentation of what you have built
- You can build a timeline and predict when you will be finished
- You can predict what will be finished with this release
- If you need help you can give parts to other developers. You are prepared if you need help at some point.
At the moment I am the only developer in a project. With this methodology we can keep track of the process and it helps to coordinate my tasks.
You should also definitely use Git to secure your progress.
UPDATE
Planning of the Architecture/Software itself.
First you should check where you want to use the software. There are numerous possibilities.
a. Web Applications
b. Mac/Windows
c. iPhone/Android etc.
The first thing you have to decide is where the software is to be used. If you use it on a Mac or iPhone you could work with Apple's new language, Swift.
If you work on Windows you could use C#.
The advantage of these languages is that they are optimized for the system and will give you more possibilities than Java or C++.
Now this is only one example. If you need a really fast program, where you can do a lot of optimization at the lower level, you could use C++.
If you want an application that you can theoretically use on both systems you could use Java, although from my experience you will have to do lots of modifications if you want to publish it on multiple platforms.
Your coding skills are also important. It depends on what you can code and what you are willing to learn. Each programming language is optimized for a purpose. Python, JavaScript, Lisp etc. are also really great languages. It depends on what you need.
First Step
Decide the field of operation --> choose the fitting Language
Second Step
Decide if a database is needed.
If you have a simple program you may not need a database. However, databases are a great way to preserve data and offer a lot of functionality.
For local applications you could use SQLite. It is a simple, lightweight database which can be accessed from any language.
If you need more database features, research what the databases offer and which one you need.
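As an illustration (a minimal sketch only, assuming the Microsoft.Data.Sqlite NuGet package; the file and table names are placeholders), accessing a local SQLite database from C# takes just a few lines:

using Microsoft.Data.Sqlite;

class Program
{
    static void Main()
    {
        // Opens (or creates) a local database file next to the executable.
        using (var connection = new SqliteConnection("Data Source=app.db"))
        {
            connection.Open();

            var create = connection.CreateCommand();
            create.CommandText =
                "CREATE TABLE IF NOT EXISTS tracks (id INTEGER PRIMARY KEY, path TEXT)";
            create.ExecuteNonQuery();

            // Store a single record using a named parameter.
            var insert = connection.CreateCommand();
            insert.CommandText = "INSERT INTO tracks (path) VALUES ($path)";
            insert.Parameters.AddWithValue("$path", @"C:\music\song.mp3");
            insert.ExecuteNonQuery();
        }
    }
}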
Third Step
Start building the application (a skeleton) and test whether your architecture is durable. You may still change your architecture at this point if you find that it is too complex.
I will give you a short example for an app:
You want to build an application that sorts all your MP3 files into playlists. Basically a better player than iTunes. But you would want it on multiple systems.
First Step
- File handling (complexity low)
- Multiple Systems (complexity high)
-> node-webkit
You can build cross-platform applications with node-webkit where you can access folders etc. The corresponding programming language would be JavaScript, using HTML5, CSS, jQuery etc.
Second Step
In order to organize the MP3s you will need a database. You will only load the links to the files in your folder, so the complexity of entries and the load on the database is low. You can use a SQLite DB here. You could use node-sqlite3 with your app.
Third Step
Build a scratch application in which you can upload a file or load a file from a folder. See if your setup is working. If yes, continue with building your app. If not, start from step one and decide what's missing.

What are advantages and disadvantages of code generation? [closed]

There are probably different kinds of code generation. In RoR for example, Rails can create skeletons for models, controllers, etc. But the developer has to complete those skeletons.
Now, sometimes there are projects where many core artifacts are generated in their entirety according to a set of definitions or models.
I am mainly interested to know the advantages and disadvantages of this latter type of code generation.
The main advantage is that it does the work for you, it's repeatable, and the code will most likely work (that depends of course on whether the person who wrote the generator knew what they were doing). It can remove a lot of the time spent on menial coding tasks. For example, is it really worth your time to write objects which are nothing more than containers for data from the database, or is it better to have some program automatically create these for you?
The big disadvantage is that it forces you into writing code that is compatible with the generated code. Most of the time this isn't a problem, but it can be a real hassle when someone comes up to you and says "Hey, can we do X?" and that conflicts with the generated code. If the generator is good, it will allow you to change functionality, but that almost always increases the complexity of the code generated etc. This complexity has a price. It's more difficult to understand, and it can be less efficient than code you write yourself. This of course varies by situation.
The main problem with this style of programming is that it contaminates a view of your project. It no longer allows you to practice DRY. It is useful to have a clean separation between that what is automatically generated, and that which is written by a human. Most systems, especially file-based ones, do not support such a separation well. In systems that have good introspection capabilities (e.g. smalltalk images), building a dynamic object structure by walking the definition/model is preferable.
In illusion-based programming (as practiced in large companies and government agencies) it is very useful because it allows the generation of very impressive stacks of documentation and show impressive implementation performance as measured in lines of code per man month. There your most important skill is of course timing your disappearance act.
I think the most important thing to keep in mind is WHY you want to generate source code. Is it, for instance, because you are more fluent with UML than any programming language and hence want to generate object-oriented classes from that graphical model?
Is it because you expressed a schema definition in some language (SQL DDL, for example with jOOQ; XSD, for example with JAXB code generation) and want to generate a model from that?
The advantage of code generation is always the fact that you express something only once (as in DRY, like Stephan stated). This is a very good practice that made it deep into extreme programming (among other processes). When you keep things DRY, you will not run the risk that the model differs from its glue code. On the other hand, you might blow up your glue code because it will exactly match its underlying model. Typically, you have one class/type/object per RDBMS table or per XML element.
If, however, you use code generation because you're more at ease with a modelling language (as in MDA, or model-driven architecture), you might run the risk that your generated code is not good enough (lack of detail) or too complicated (lack of simplicity) because - for instance - UML is not suited for solving problems in detail.
In any case: code generation can be very helpful if the generated code can be used AS-IS and does not need any customisation. As soon as you start customising generated code, it may become a maintenance nightmare.

CodeCharge : is it any good?

A job came in to me that's built with CodeCharge - had a look at it and it seems to be a pretty basic point-and-click site builder tool. Has anyone got any in-depth experience with it? My first reaction is one of horror and to just rebuild the code in Rails or PHP, but I thought I'd ask the question first, maybe I'm missing something...
I'm currently evaluating it for use in quickly producing a back-office environment, and it seems a very comfortable way of getting an interface up and running quickly.
Once the code is generated it is simple, but easily modified for any special needs you have.
In short it is a very good tool (though it would seem that Iron Speed Designer is much better, though much more expensive) for what it does - fast prototyping and an almost no-coding approach to developing a web application. In my opinion, not much different from a Ruby on Rails application in terms of functionality, and I can generate the code in any language I want.
You have to realize, it is all about speed - some quality is thrown in, but it is a very generic kind of quality - this is NOT a custom application, mind you; the resulting code you get here might not be pretty but it is a few levels higher than your average script-kiddie code.
I'm seriously considering this tool for creating back-office applications for sites I develop - a fast and easy solution instead of mucking around in tables of data and useless and repetitive SQL code.
Codecharge is a powerful tool that I have used for over 10 years to build very large content management systems, CRMs and many other management type tools.
It's far from simple once you get into it, and honestly, when you use a tool like CodeCharge to cleanly generate your user interfaces, you end up with a healthier application that can last many, many years.
For instance, I have three clients that have been running CodeCharge-created portals for over 10 years and they always comment on how bug-free they have been.
There is a learning curve to CodeCharge, but it will also teach you what entire applications should have in place, and it will please the executives every time because they can get functionality within hours or days rather than weeks or never.
Development teams will often not like it though, because they would rather hand code everything or use the latest and greatest approach to development.

Why people don't use LabVIEW for purposes other than data acquisition and virtualization? [closed]

This is marked as a subjective question; I hope I won't get too many downvotes though.
LV seems to offer a nice graphical alternative to traditional text-based programming. As I understand it, it's not just a virtualization/data-acquisition programming language. Nonetheless, it seems to have that paradigm pegged to its creator's name.
My question comes up because it doesn't seem to be widely used for multi-purpose applications. I'm not a LV-expert of any kind, I'm more like a learner. I'm still getting used to LV.
Labview is fantastic if you have National Instruments hardware, and want to do something like acquire, plot and log the data.
When you start interfacing to custom devices, the wiring between modules gets complicated, since you have to do all the string-manipulation work for input and output to a device.
At my place of work, we found that we got annoyed with having to make massive, complicated VI's to interface to devices and started writing them in .NET and interfacing them to Labview.
In the end we ended up scrapping Labview all together and using the NI Measurement Studio for Visual Studio to give us all the lovely looking NI controls (waveform plot, tank, gauges, switches etc) with the flexibility of C#.
In summary, even with a couple of 24" screens, sometimes the wiring for Labview code can get too complex and becomes impossible to comment, debug, and make extensible for any future changes. I suggest taking a look at Measurement Studio for Visual Studio and using your favourite .NET language with the pretty NI controls.
My two experiences with "graphic alternative[s] to traditional text based programming" have been dreadful. I find such languages to be slow to use, hard to edit, and inexpressive. Debugging them is a nightmare. And they offer no real advantages.
To be sure, it has been quite a long time since I looked at one, but the opinions of others I've asked about them have been only lukewarm, so I have never taken the time to look again. Reasons to look again are welcome and will be taken on board...
Labview can be used to author large, complex software projects. Labview is unquestionably much more fun to use than a syntax-based language. I have programmed mathematically dense, dynamic simulations using Labview. Newer versions of Labview include a lot of exciting features, especially for utilizing multiple processors. I like Labview very much. But I don't recommend it to anyone.
Unfortunately, it's an absolute nightmare for anything other than simple acquisition and display. It may one day be sufficiently developed to be considered as a viable alternative to text based languages. However, the developers at NI have consistently opted to ignore the three fundamental problems that plague labview.
1) It is unstable and riddled with bugs. There are thousands of bugs that have been posted to the labview support forums that are yet to be fixed. Some of these are quite serious, such as memory leaks, or mathematical errors in basic functions.
2) The documentation is atrocious. More often than not, when you look for help with a Labview function in the local help file you'll find a sentence that merely restates the name of the item you are trying to find some detail on. e.g. A user looks up the help file on the texture filter mode setting and the only thing written in the help file is "Texture Filter Mode - selects the mode used for texture filtering." Gee, thanks. That clears things right up, doesn't it? The problem goes much deeper than that: quite often, when you ask a technical representative from National Instruments to provide critical details about Labview functionality or the specific behavior of mathematical functions, they simply don't know how the functions in their own library work. This may sound like an exaggeration, but trust me, it's not.
3) While it's not impossible to keep graphical code clean and well documented, Labview is designed to make these tasks both difficult and inefficient. In order to keep your code from becoming a tangled, confusing mess, you must routinely (every few operations) employ structures like clusters, and sub-vis and giant type defined controls (which can stretch over multiple screens in a large project). These structures eat memory and destroy performance by forcing labview to make multiple copies of data in memory and perform gratuitous operations- all for the sake of keeping the graphical diagram from looking like rainbow colored spaghetti with no comments or text anywhere in sight. Programming in labview is like playing pictionary with the devil. Imagine your giant software project written as a wall sized flowchart with no words on it at all. Now imagine that all the lines cross each other a thousand times so that tracing the data flow is completely impossible. You have just envisioned the most natural and most efficient way to program in labview.
Labview is cool. Labview is getting better with each new release. If National Instruments keeps improving it, it will be great one day as a general programming language. Right now, it's an extremely bad choice as a software development platform for large or logically complex projects.
I have been writing in LabVIEW for almost 20 years now. I develop automated test systems. I have developed RF, vision, high-speed digital and many different flavors of mixed-signal test systems. I was a "C" programmer before I switched to LabVIEW.
It's true that you can build some programs quickly in LabVIEW, but just like any other language it takes a lot of training to learn to build a large application that is clean, easy to maintain, and has reusable code. In 20 years I have never had a LabVIEW bug stop me from finishing a project.
Back in the day, NIWEEK would have a software shootout every year. LabVIEW and LabWINDOWS (NI's version of "C") programmers would both be given the same problem and have a race to see which group finished first. Each and every year all the LabVIEW programmers were done way before the 1st LabWINDOWs person finished. I have challenged many of my dedicated text based programming friends to shootouts and they all admit they don't stand a chance, even if I let them define the software problem.
So, I feel LabVIEW is a great programming tool. It's definitely the way to go if you're interfacing with any type of NI hardware. It's not the answer for everything, but I'm sure there are many people not using it just because they don't consider LabVIEW a "real programming language". After all, we just wire a bunch of blocks together, right? I do find it funny how many text-based programmers turn their noses up at it, as they are so proud of the mess of text code they have created that only they can understand. A good programmer in any language should write code that others can easily read. Writing overly complex code that is impossible to follow does not make the programmer a genius. It means the programmer is a "complicator" (someone who can take a simple problem and complicate it). I believe in the KISS principle (KEEP IT SIMPLE, STUPID).
Anyway, there's my two cents worth!
I thought LabVIEW was a dream for FPGA programming. Independent executable blocks just... work. In general, I use LabVIEW for various tasks interfacing with my DAQ and FPGA hardware, but that's about it. It seems (again to me) that this is LabVIEW's strong point and the reason it was built, but outside that arena it feels "cumbersome." As far as getting things done, it's like any other language with a learning curve - once you figure it out it's not too bad for getting work done. I've seen several people give up before that thinking the learning curve was permanent or something.
Picking up a 30" monitor made a huge difference.
I know one thing that people dislike is the version control integration.
Edit: LabVIEW/hardware is hella expensive for "just for fun" use. I dropped $10K on their hardware (student prices) and got the software for free from school for making toys around the house.
Our company has been using LabVIEW for the last 10 years for measuring, monitoring and reporting on our subject (trains).
Recently we have started using LabVIEW as a GUI for databases with lots of data; the power of LabVIEW, with the recent new features (classes, XControls), allows us to create these kinds of GUIs for a fraction of the development cost on other platforms, and we don't need external programmers at consultancy rates.
Ton
I first started using Labview in a college physics lab. Initially, I thought it was slow and cumbersome when compared to other text-based languages. It was too difficult to create complex logic and code became sloppy real fast (wires everywhere).
Then, a few years later, I learned about using sub-VIs and bundles. What a difference! At this point, I was using Labview for very high-level functions. I was taking raw input from a camera, using all kinds of image filters and processing to ultimately parse out the lines in a road so that a vehicle could drive itself down this road with no driver - it was for the DARPA Urban Challenge. I was also generating maps from text waypoint data, making high-level parsing functions, and a slew of other applications that had nothing to do with processing data from input devices. It was really a lot of fun, and fast.
After leaving college, I am now back to using text-based languages. I've been using PHP, JavaScript, VBA, C#, VBScript, VB.NET, Matlab, Epson RC+, CodeIgniter, various APIs, and I'm sure some others. I often get very frustrated by the amount of syntax I have to memorize in order to program with any significant speed. I find it annoying to have to switch schools of thought based on the language I am using... when all programming languages essentially do the same thing! I need a second monitor just to keep the help up at all times so I can find the syntax for the same functions in different languages. I miss Labview very much; it's too bad it's so expensive, otherwise I would use it for everything.
Graphical-based programming I think has huge potential. By not being constrained by syntax, you can focus on logic instead of code. Labview itself may still be in its infancy in terms of support and debugging, but I believe conceptually it beats out the competition. It's simply a more intuitive way to program.
We use LabVIEW for running our end-of-line test equipment and it is ideal for data acquisition and control. Typically measuring 15 to 80 differential voltages and controlling environmental chambers, mass flow controllers and various serial devices, LabVIEW is more than capable.
Interfacing with custom devices can be simplified greatly by using the NI instrument driver wizard to create reusable VIs, interfacing with custom DLLs if needed. On a number of projects we have created such drivers for custom hardware, and once created they are reusable in future projects with no modification.
Using event-driven structures, user interfaces are responsive and we regularly use LabVIEW applications to interface with a database.
Whatever programming environment you choose it's the process of designing the application that matters most. I agree that you can create some really horrible and unreadable block diagrams in LabVIEW but then you can also create unreadable code in Visual studio. With just a little thought and planning a LabVIEW block diagram can be made to fit on a single 24" monitor with plenty of space to add comments.
I would use LabVIEW over Visual Studio for most projects.
But people do use LabView for purposes other than data acquisition and virtualization. Of course LabVIEW is mainly used in labs and production environments because that is (or was) NI's main customer target.
However you can do lots of various things with LabVIEW, like programming a robot that performs a lot of image analysis and then tweets the results. Have a look at videos from NI Week 2009 on YouTube and you'll see how powerful this tool is. For instance, there is the possibility to write code and deploy it to ARM MCUs (see this Dev Monkey article from 2009.08.10).
And finally check this LabVIEW DIY group
I have been using LabVIEW for about two years for developing automation. Given due care and proper design, we can certainly develop maintainable and really good-looking applications in LabVIEW. I think this is the same for all the other languages out there. I have seen equally bad code in LabVIEW, primarily from people who use it only to develop quick and dirty working automation. IMHO graphical programming is a lot easier to code and understand if rightly done. But that said, I feel text-based programming 'feels' more powerful!
LabVIEW is primarily marketed for industrial automation, has inherent support for a lot of NI hardware, and you can get third-party hardware working with it pretty quickly. I think that is the reason you see it only in the automation field. Moreover it is pretty costly and you are locked in with NI, as you cannot even open your code if you do not buy the software from them!
I've been thinking about this question for decades (yes, since 1989...)
Like all programming languages, LabVIEW is a high-level tool used to manipulate the flow of electrons. Unless you are a purist and refuse to use anything other than a breadboard and wires; transistors, integrated circuits and programming languages are probably a good thing if you wish to build something of any consequence.
But like all high-level tools, just wielding one does not make you a professional craftsman. Back in the day of soldering irons, op-amps and UARTs it required a large amount of careful study before you could create a system that actually functioned. The modern realm of text-based languages is so overly dominated by syntax that the programmer must get it just right before it will compile and run. In order to write code that works, the programmer must increase their skill level to create systems much larger than "Hello World".
LabVIEW is not dominated by syntax, but by Data Flow. Back in the day, reaching for your flow charting template and developing the diagram of a well-balanced information system was the art and beauty part of the job. Only after you had the reviewed flowchart in hand would you even consider slogging through the drudgery of punching out the code. (yes... punch cards)
LabVIEW is a development system that allows the programmer to use flow charting tools to diagram the complete information system and press "run"..... LabVIEW "punches out the code" and compiles it for you. No need to fight through the syntax of text language A or language B.
With such a powerful tool, novices can build large, working programs rapidly -- implying some level of professional craftsmanship since it runs at all. However, if the system does not perform elegantly, or the source code diagram is a mess, it is not the fault of LabVIEW.
People often point to "LabVIEW is only good for developing large data acquisition systems." Perhaps those people should consider the professionalism of the scientists and engineers that are working in data acquisition. If they know enough to get the actual wires right for the sensors and transducers, it may be a good bet that they are expert at developing LabVIEW wiring diagrams as well.
I do use LabView at home, as it is part of Lego Mindstorms, which my son loves. And I really like this way of composing systems.
However, in my work (embedded systems), it is generally too restrictive. But also here, I'm trying to move up in abstraction:
- control and state behavior: Model based design (i.e. Rhapsody)
- data algorithms etc. Simulink
Sometimes a graphical model can require more clicks than a piece of code. But this also includes the work a good programmer needs to do in design and documentation, not just the code typing. The graphical notation takes many hassles away and is generally much faster if the tool is powerful enough for the complexity at hand. So I expect these kinds of tools will gain more popularity in the next years as they mature and people get familiar with them.
I have used LabView for some 10 years. It's brilliant for scientific programming, i.e. like Matlab or Simulink but 10 times better. If you are having problems then you are doing something wrong. It takes time to learn, like any language. As for using .NET instead - are these people even on the same planet? Why would you go to the trouble of writing everything from scratch when you can, say, pull up an FFT etc. and use already-written code? .NET is fine for simple programs but not so good for scientific processing. Yes, you can do it, but not without oodles of add-ons for graphics etc. Programming in G is far easier than text-based programming for scientific problems. You can of course program in C if you are interfacing, and use the DLL. Now there are things that I would not use LabView for - speech recognition for example may be a bit messy at present. More to the point though, why do people like programming in outdated text form when there is an easy alternative? It is as if people want to make things complicated so as to justify their job in some way. Simplify, simplify!
Somebody said that LabView is only used in the automation field. Simply not right at all. It has applications in digital signal processing, control systems, communications, web-based development, mathematics, image processing and so on. It started as a data acquisition method, and they invented the name Virtual Instrumentation, but it has gone far beyond that now. It is a scientific programming language with a second-to-none graphical interface. It is way beyond Simulink, and if you like Matlab then it has a type of Matlab scripting built in for those who like such ways of programming. It is evolving all the time. The one thing I found difficult was writing code for the CompactRIO - tricky, but far easier than the alternative. It's expensive but you get a quality product. I personally have not found any bugs in ordinary programming. It is an engineer's language, but anybody could use it to program.