Is there a tool that supports discrete mathematics? - discrete-mathematics

Discrete mathematics (also finite mathematics) deals with topics such as logic, set theory, information theory, partially ordered sets, proofs, relations, and a number of other topics.
For other branches of mathematics, there are tools that support programming. For statistics, there are R and S, which have many useful statistical functions built in. For numerical analysis, Octave can be used as a language or integrated into C++.
I don't know of any languages or packages that deal specifically with discrete mathematics. Just about every language can be used to implement discrete-mathematics algorithms, but there should be libraries or environments out there designed specifically for these applications.

The current version of Mathematica is 7. License costs:
Home Edition: $295
Standard: $2,495 Win/Mac/Linux PC ($3,120 for Solaris)
Government: $1,996 ($2,496 for Solaris)
Educational: $1,095 ($1,370 for Solaris)
Student: $139.95 (no Solaris)
Above, the Home Edition link says:
Mathematica Home Edition is a fully functional version of Mathematica Professional with the same features.
The current version of Maple is 12. License costs:
Student: $99
Commercial: $1,895
Academic: $995
Government: $1,795
And yes, check out Sage, mentioned above by Thomas Owens.

Mathematica

Mathematica has a Combinatorica package which, though quite venerable at this point, provides a good deal of support for combinatorics and graphs. Commands like these are available:
Needs["Combinatorica`"]  (* load the package first *)
NecklacePolynomial[8, m, Cyclic]
GrayCodeSubsets[{1, 2, 3, 4}]
IntegerPartitions[6]

I'd say Mathematica is your best bet. Even if it does not come with some functionality out of the box, it has very well designed supplementary packages available for it on the net.
Check out http://www.wolfram.com/products/mathematica/analysis/
You might be interested in the links for Number Theory and Graph Visualizations.

I also found Sage. It appears to be the closest thing to Mathematica that's open source, but I'm not sure how well it handles discrete mathematics.
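For a rough sense of what the open-source, Python-based side covers, here is a minimal sketch using sympy (which Sage bundles). The module paths below are from memory, so verify them against the current sympy documentation:

# Discrete-math primitives via sympy; module paths are from memory and
# worth double-checking against the current sympy docs.
from sympy.utilities.iterables import partitions
from sympy.combinatorics.graycode import GrayCode

# Integer partitions of 6 as {part: multiplicity} dicts,
# roughly analogous to Mathematica's IntegerPartitions[6]
for p in partitions(6):
    print(dict(p))   # copy each dict; the generator may reuse the object

# 4-bit Gray code sequence, loosely analogous to GrayCodeSubsets[{1, 2, 3, 4}]
print(list(GrayCode(4).generate_gray()))

Sage itself layers a lot more on top of this (graph theory, posets, and more), so it is worth trying directly for heavier discrete-mathematics work.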

Maple and MATLAB are a couple of mathematical software packages that may cover part of what you want.

Stanford GraphBase, written primarily by Donald Knuth, is a great package for combinatorial computing. I wouldn't call it an extensive code base, but it has great support for graphs, and a great deal of discrete mathematics can be formulated in terms of graph theory. It's written in CWEB, which is (IMO) a more readable version of C.
EDIT: It's free.

I love Mathematica and used it to prototype ideas during my PhD in computational physics. However, Mathematica tries to be all things to all people and there are a few downsides:
Because Wolfram is a for-profit company, bug fixes sometimes only arrive in the next major release: you pay.
Because it is a proprietary product, sharing code with non-Mathematica users (i.e. the world) is problematic.
New features are often half-baked and break when you try to take them beyond the embedded examples.
Its user base (tutorials, advice, external libraries) is less active than, say, Python's.
Multipanel figures are difficult to generate; see the SciDraw library.
That being said, Mathematica's core functionality is amazing for the following reasons:
Its default math functionality is quite robust, allowing quick solutions.
It allows both functional and procedural programming.
One can quickly code & publish in a variety of formats: PDF, interactive website.
A new Discrete Book came out.
Bottom line
Apple users expecting ease of use will like Mathematica for its Apple-like, get-up-and-go feel.
Linux users wanting extensibility will find Mathematica frustrating for its Apple-like, box-welded-shut design.

Related

Integrating NetLogo and Java: when should we think about this integration as a good option?

I just came to know about these excellent tutorials:
http://scientificgems.wordpress.com/2013/12/11/integrating-netlogo-and-java-part-1/
http://scientificgems.wordpress.com/2013/12/12/integrating-netlogo-and-java-2/
http://scientificgems.wordpress.com/2013/12/13/integrating-netlogo-and-java-3/
Their example concerns the computation needed for patch diffusion and shows how to access patch variables from Java and change them in NetLogo.
I was wondering if anyone has any ideas or comments on when we should think of writing an extension to make our model work better. I am new to NetLogo itself, but I think it's good to know what options I might not be aware of :)
I think looking through the extensions listed at https://github.com/NetLogo/NetLogo/wiki/Extensions, both the ones that we (on the NetLogo team) have built ourselves and the ones that have come from the user community, gives you a pretty good idea of the range of things extensions can be good for.
Some broad categories:
data structures (tables, arrays, matrices, priority queues...)
algorithms (networks, statistics, discrete event scheduling, diffusion, ...)
integration with other tools (R, SQL databases, MatLab, ...)
media (sound playback, sound synthesis, images, movies, speech, ...)
new device types (Gogo, Arduino, WiiMote...)
visualization (ray-tracing, sprites, Java2D drawing, ...)
not necessarily exhaustive!

simple speech recognition methods

Yes, I'm aware that speech recognition is fairly complicated (as an understatement). What I'm looking for is a method for distinguishing between maybe 20-30 phrases. An ability to split words (discrete speech is fine) would be nice, but isn't required. The software will be user-dependent (i.e. for use by me). I'm not looking for existing software, but for a good way of going about doing this myself. I've looked into various existing methods, and it seems like splitting the sound into phonemes, while common, is somewhat excessive for my needs.
For some context, I'm just looking for a way to control some aspects of my computer with a few simple voice commands. I'm aware that Windows already has speech recognition software, but I'd like to go about this one myself as a learning exercise. Commands would be simple, like "Open Google" or "Mute". What I had in mind (not sure if this is a good idea) is that some commands would be compound. So "Mute" would just be "Mute", whereas the "Open" command could be recognized individually and then have its suffixes (Google, Photoshop, etc.) recognized with another network/model/whatever. But I'm not sure if looking for prefixes/word breaks in this way would produce better results than having to deal with an increased number of individual commands.
I've been looking into perceptrons, Hopfield networks (though they're somewhat obsolete, from what I understand), and HMMs, and while I understand the ideas behind these (I've implemented ANNs before), I don't really know which is best suited to this task. I'm assuming that linear vector quantization models would also be appropriate, but I can't really find much literature to this end. Any guidance/resources would be greatly appreciated.
There are some open source projects in speech recognition:
HTK (Hidden Markov Model Toolkit)
Sphinx
Both have decoder, training, and language model toolkits: everything you need to build a complete and robust speech recognizer.
VoxForge has acoustic and language models for both open source speech recognition toolkits.
Some time ago, I read a whitepaper about a limited-vocabulary system which used a simple recognition process. The system divided each utterance into a small number of bins (6 in time and 4 in magnitude, if I remember correctly, for 24 total), and all it did was count the number of audio sample measurements in each bin. A fuzzy logic rule base then interpreted each utterance's 24 bin counts and generated an interpretation.
I imagine that (for some applications) a simple matching process might work just as well, in which the 24 bin counts of the current utterance are simply matched against those of each of your stored prototypes, and the one with the least overall difference is the winner.
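As a rough illustration of that matching idea, here is a minimal Python sketch. The 6x4 grid, the use of raw amplitude values, and the nearest-prototype rule are my assumptions based on the description above, not details from the whitepaper:

import numpy as np

def bin_counts(samples, time_bins=6, mag_bins=4):
    # Split the utterance into time_bins equal segments and mag_bins
    # amplitude bands, then count how many samples fall in each cell.
    samples = np.asarray(samples, dtype=float)
    edges = np.linspace(samples.min(), samples.max(), mag_bins + 1)
    counts = np.zeros((time_bins, mag_bins))
    for t, seg in enumerate(np.array_split(samples, time_bins)):
        counts[t], _ = np.histogram(seg, bins=edges)
    return counts.ravel()  # 24-element feature vector

def classify(utterance, prototypes):
    # prototypes: dict mapping command name -> stored 24-bin feature vector;
    # the prototype with the smallest overall difference wins.
    features = bin_counts(utterance)
    return min(prototypes, key=lambda name: np.abs(features - prototypes[name]).sum())

In practice you would probably build each prototype by averaging the bin counts over several recordings of the same command, and normalize for utterance length and loudness.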

Matlab vs Aforge vs OpenCV

I am about to start a project in visual image processing and have not had experience with Matlab, AForge, or OpenCV, and was wondering if anyone has had experience with these different software packages.
I was also wondering which of the three packages is most efficient. I assume OpenCV, but has anyone had any experience?
Thanks
Jamie.
The question you need to ask yourself is which is more important - your time or the computer's time. If your task is really simple, you may be able to code it up in MATLAB and have it work right off the bat. MATLAB is by far the easiest for development - a scripted language with built-in memory management, a huge array of provided functions, and a great interface for displaying and manipulating data while debugging.
On the other hand, MATLAB is at least an order of magnitude slower than compiled OpenCV code for many tasks. This is especially true if you use the Intel Performance Primitives libraries.
If you know how to code in MATLAB, I would suggest writing and debugging your algorithms in that language, then porting them to C/C++ with OpenCV for speed. If there are only a couple of simple functions that you need to speed up, you can call C code from MATLAB, but it's hard to get this working right the first few times you try it, so you're probably better off just rewriting your finished code entirely in C/C++.
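For a sense of how compact the ported code ends up, here is a minimal sketch using OpenCV's Python bindings (the file names and threshold value are placeholders, and the pipeline itself is just an arbitrary example, not anything specific to your project):

import cv2

img = cv2.imread("frame.png")                      # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # convert to grayscale
blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # light smoothing
# Fixed binary threshold; 128 is an arbitrary placeholder value
_, mask = cv2.threshold(blurred, 128, 255, cv2.THRESH_BINARY)
cv2.imwrite("mask.png", mask)

The C++ API uses near-identical function names (cv::cvtColor, cv::GaussianBlur, cv::threshold), so once the algorithm works, the port from a prototype is fairly mechanical.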
First, please elaborate on your project's needs. They have the biggest impact on the choice, in addition to other factors such as your general programming background (if you haven't dealt with .NET but just with C++, AForge is not a good choice, for example).
Generally,
Both AForge and OpenCV have a built-in interface to .NET, and OpenCV also has interfaces for C++, Python, and more. Matlab might be more efficient, but if you don't have any experience with it, you will also have to learn its syntax. Take that into consideration.
Matlab probably has the largest variety of functions, but it is more complicated than the other projects. OpenCV and AForge themselves have some differences; see them described in this Stack Overflow question and its answers.
I worked last year on two similar projects involving cars on the highway. AFAIK, Matlab allows you to process only one picture frame at a time (though you could certainly devise an algorithm to process a stream), but using Simulink you can process the stream directly.
On the other hand, I found AForge a lot friendlier and easier to use, since you can easily adjust the processing parameters from a GUI, which is not so fast or easy to do in Matlab/Simulink.
I'd go for AForge.NET. It's also fast enough if you're worried about processing speed (using 640x480).
If you are asking about using one of these in .NET, here is a quick comparison:
1. Matlab: mostly used for simulating projects, not the end prototype; my rating: 30.
2. AForge (which I've used in many projects): if you do not need a continuous loop such as capturing images or recognizing something in images, you'll find it very good; it is easy to use, but best suited to single-shot processes; my rating: 50.
3. OpenCV: very good on speed and useful for continuous processes; for example, you can capture images from a webcam and instantly cartoonize them without any delay. It is not as easy to use as AForge, but I like it anyway because of its speed and the MANY functions it provides, covering most of what we need in programming; my rating: 80.
Dr.Taha - Tahasoft.net

Your experiences with Matlab/F#/R for data analysis and modeling algorithms

I've been using F# for a while now to model algorithms before coding them in C++, and also using it afterwards to check the results of the C++ code, and also against real-world recorded data.
For the modeling side of things, it's very handy, but for the 'data mashup' kind of stuff, pulling in data from CSV and other sources, generating statistics, drawing charts etc., my colleague teases me no end ("why are you coding that yourself? It's built in to MatLab").
And I have another colleague who swears by R, which also has charting stuff 'built-in'.
I know that MatLab, R and F# are not strictly comparable, so I'm not asking for a 'feature comparison shoot-out'. I just wondered what other people are using for these kinds of pre- and post-analysis scenarios, and how happy they are with it.
(If there's anyone out there working on wrapping Microsoft Charts into something F#-friendly, let me know, I'd be happy to participate...)
(Note: answers to this question will be subjective, but based on experience, please)
I have very little experience with F#, but regarding C++/Matlab/R: If the speed of your program's execution is the most important, use C++. If speed of implementation is the most important, use Matlab or R. This is true for a number of reasons, not the least of which is their massive libraries of math/stats packages.
Both Matlab and R can be sped up through parallelism, so generally I think that speed and quality of implementation should be a bigger concern. That's where the real "value" of programming takes place: in the design of the application. It's not a minor proposition if you can write 3 or 4 good R programs in the same time it takes you to write 1 good C++ program.
Regarding F#: so far as it is part of Microsoft's framework, it must have a lot to offer. If you're developing in Visual Studio or working on a big .Net project (for instance), it might make sense to use F#. On the other hand, you can call both Matlab and R from .Net applications, so I would probably argue that their libraries should be a bigger concern. For instance, see this article as an example for R and the Matlab Builder.
Long story short: comparing F# and Matlab/R isn't a good comparison. F# is a general purpose programming language, while Matlab/R can be viewed as massive mathematical/data analysis toolkits. Some people call Matlab or R from F# in order to take advantage of each language's benefits (e.g. see this discussion, this article on Matlab/F#, or this article on R/F#).
So far as charting is concerned: R is extremely strong on this front. Have a look at the graphics view on CRAN and this series of posts on the LearnR blog about Lattice and ggplot2.
I've worked a bit with Matlab and Python/pylab for these purposes. What these tools have 'built in' is a programming environment, a shell, and GUI tools designed for quickly looking at data from a variety of sources.
In a few commands, you can go from having a csv file to interactive plots on the screen, then to an image export in just about any format. It takes a minute or two to go from data to visualization once you have the hang of it. I would imagine this is uncommon in the C++ world (although I have seen some professors with pretty impressive work-flows).
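For example, here is a minimal Python/pylab-style sketch of that flow; the file name and column layout are assumptions:

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical CSV with one header row and two numeric columns
data = np.genfromtxt("results.csv", delimiter=",", skip_header=1)
plt.plot(data[:, 0], data[:, 1], marker="o")
plt.xlabel("x")
plt.ylabel("y")
plt.savefig("results.pdf")   # export to just about any format
plt.show()                   # interactive window for poking at the data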
I've tried R, but I can't say much useful about it. It seems to offer about the same set of features, but it may be troublesome to Google for support.
If you are spending more than a couple minutes getting from data to plot using your current method, it's definitely worth learning one of these environments. The best choice depends on your colleagues, your work environment, experience, and your budget.
This is a reasonably close duplicate of the previous question on a suitable functional language for scientific/statistical computing, so you may want to peruse the long and detailed answers there.
The answer depends, as so often, on your experience and prior language training. I very much prefer R for data munging / modeling / visualization.
I use R because on the one hand it has everything built in and on the other hand you can still manipulate almost everything or start from scratch. Nevertheless, R is rather slow for heavy calculations (although I do all my Monte Carlo simulations in it).
I would say that Matlab is best for the availability of mathematical functionality in general, R is best for data input/manipulation/visualisation/analysis/etc., and C++ for high-speed subroutines. You can, by the way, easily integrate C++ (or C, Fortran, ...) code in R. Why not read and manipulate the input data in R, apply the models in C++, and analyse/visualize the output back in R?
I always prototype my models in MATLAB. If my prototype is fast enough, I refactor and it's done. If not, I go back and implement certain functions in C to be called by MATLAB. This requires knowledge of a low level language, which I think is always going to be the case if you are doing anything that is technically challenging.
I'm intrigued by this Lisp flavor, if it ever gets off the ground.

How do I estimate tasks using function points?

What are the steps to estimating using function points?
Is there a quick-reference guide of some sort out there?
I took a conference session on Function Point Analysis a few years back. There is a lot to it. You can check out the Free Function Point Training Manual online, the Fundamentals of Function Points, or I suspect you can get a book on it at a computer store.
You might also check out the International Function Point Users Group and see if they have some resources or a local meeting for you.
You really need to get some training on it. Check with IFPUG. You will unknowingly pick up some destructive bad habits if self-taught. It also helps to have an experienced FP analyst review some of your early attempts.
It's the kind of thing that appears overwhelmingly complex until you "get it" and then it's fairly quick to do. It improved my requirements analysis a lot too. I often spot contradictions and gaps when doing a count.
It isn't limited to BDUF waterfall projects either. I spent three years using FP and Planning Poker as cross-checks on one another when contracting on agile-methods projects.
I was IFPUG-certified from 2002-2005 and am still using FP analysis. I've seen it misused a lot, and I think that's why it has such a bad reputation.
I recommend you take a look at COSMIC Function Points: https://cosmic-sizing.org. COSMIC Function Points are also an ISO standard for measuring software size. They are an evolved improvement over IFPUG.
You can quickly estimate size by counting the entries, exits, reads and writes.
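As a toy illustration of that counting rule, each data movement (Entry, eXit, Read, Write) contributes 1 CFP. The functional processes and counts below are invented for illustration, not taken from the guide:

# Toy COSMIC sizing sketch: 1 CFP per data movement (E, X, R, W).
# Process names and movement counts are made up for illustration.
processes = {
    "create customer": {"E": 1, "X": 1, "R": 1, "W": 2},  # 5 CFP
    "list orders":     {"E": 1, "X": 1, "R": 2, "W": 0},  # 4 CFP
}
for name, moves in processes.items():
    print(name, sum(moves.values()), "CFP")
print("total:", sum(sum(m.values()) for m in processes.values()))  # 9 CFP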
Compared with the IFPUG manual, learning COSMIC is much easier, the free book below is all you need, and you can read it in a day.
Recommended reading: https://cosmic-sizing.org/publications/measurement-guide/