As my complex question suggests, methinks quite a lot of research would be involved if I wanted to solve it alone.
I don't know anything about these two platforms, and getting acquainted with both their hardware and their programming capabilities/tools would be quite time-consuming.
So I would kindly ask the community
(thinking at a huge-company level and not as an indie):
what would be the greatest issues, on the programming side, in porting the 3D AAA game?
(Mentioning the most significant ones would be sufficient.)
Many, many thanks,
good - Byte.
The DSi has little RAM (~16 MB) to work with, a fairly slow processor (66 MHz), and a limited polygon budget. So a direct conversion of a modern 3D game will be a nightmare. You're much better off adapting your assets to the DSi's specs rather than trying to stick "as close as possible" to the iPhone version.
I'm a computer systems engineering undergraduate student, and I just want to know what advantages MATLAB has over SCILAB and vice versa, other than that SCILAB is freeware.
I mean from a computer engineer's point of view.
Thanks
I can't get into the nitty-gritty details, as I haven't used SCILAB extensively.
But from a bird's-eye view, MATLAB is a very polished piece of software, with decades of development behind it, and a price to match. It has a huge array of specialized packages, good support, a reasonably well-designed UI, and it's generally user-friendly enough for non-computer-engineers to work with. It's also very common in the industry, so it's not a bad thing to have on your resume.
But if you don't have very complex needs (which I suspect, given the use I made of MATLAB during my undergrad years) and you don't need the robustness and polish of a professional package, SCILAB will probably meet your needs.
And since it's based on the MATLAB language, what you'll learn can be transferred later on if your needs change, or you find yourself working in an environment where MATLAB is the default.
Scilab is to MATLAB as OpenOffice is to MS Office. That is to say, it's a not-quite-a-clone, and it's not as polished. You do get most of the functionality of MATLAB, and the price is much more agreeable.
That said, if you want a free/open pretend MATLAB, I personally prefer Octave, since the syntax is closer to MATLAB's.
If you aren't bothered about MATLAB compatibility, then check out the statistics language/environment R, which is delightful.
MATLAB is the de facto industry standard; it's ready here and now, and it has a big firm behind it pushing it.
Scilab has long been the open source alternative, but honestly it never appealed to me. I think either they never believed enough in the project, or it simply takes too much money to make a valid product of this kind.
And it is a real pity, since we desperately need a good open source alternative, because being open source is the only way to be truly efficient across different platforms: MATLAB is very good for prototyping small-to-medium programs, but since it is closed source it's very difficult to scale it up, to supercomputers for example, often requiring a complete rewrite of the code.
Sage might be the third way; it has a lot of potential, and I would bet on it. Check it out. It doesn't reinvent the wheel like Scilab did, but takes existing software and merges it into a new program. It is based on Python, which has gained a lot of momentum in the computing world, since it has proven to be both easy enough for quick prototyping and versatile enough to run on exotic platforms like supercomputers or GPGPUs.
@MatlabDoug
It is feasible in a small-to-medium environment, but on very big tasks the flexibility of open source is invaluable.
It ranges from low-level tools like Open MPI that allow you to finely tune your applications, through higher-level frameworks like PETSc that lift a lot of work from your shoulders, to Java and Python implementations that let you concentrate on the algorithms and forget about many of the headaches of lower-level languages.
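To make the low-level end concrete, here is a minimal MPI "hello world" sketch, assuming an MPI implementation such as Open MPI is installed (the file name and process count below are just for illustration):

```c
/* Minimal MPI sketch: each process reports its rank.
   Assumes an MPI implementation (e.g. Open MPI); build with
   `mpicc hello.c -o hello`, run with `mpirun -np 4 ./hello`. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```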
But the real proof is that an astonishing majority of the top500 supercomputers prefer open source alternatives.
This is marked as a subjective question; I hope I won't get too many downvotes, though.
LV seems to offer a nice graphical alternative to traditional text-based programming. As I understand it, it's not just a virtual-instrumentation/data-acquisition programming language; nonetheless, it seems to have that paradigm pegged to its creator's name.
My question comes up because it doesn't seem to be widely used for multi-purpose applications. I'm not an LV expert of any kind; I'm more of a learner, still getting used to LV.
Labview is fantastic if you have National Instruments hardware, and want to do something like acquire, plot and log the data.
When you start interfacing with custom devices, the wiring between modules gets complicated, since you have to do all the string-manipulation work for input and output to a device.
At my place of work, we got annoyed with having to make massive, complicated VIs to interface with devices, and started writing them in .NET and interfacing them with LabVIEW.
In the end we ended up scrapping LabVIEW altogether and using NI Measurement Studio for Visual Studio, which gives us all the lovely-looking NI controls (waveform plot, tank, gauges, switches, etc.) with the flexibility of C#.
In summary, even with a couple of 24" screens, sometimes the wiring for Labview code can get too complex and becomes impossible to comment, debug, and make extensible for any future changes. I suggest taking a look at Measurement Studio for Visual Studio and using your favourite .NET language with the pretty NI controls.
My two experiences with "graphic alternative[s] to traditional text based programming" have been dreadful. I find such languages to be slow to use, hard to edit, and inexpressive. Debugging them is a nightmare. And they offer no real advantages.
To be sure, it has been quite a long time since I looked at one, but the opinions of others I've asked about them have been only lukewarm, so I have never taken the time to look again. Reasons to look again are welcome and will be taken on board...
LabVIEW can be used to author large, complex software projects. LabVIEW is unquestionably much more fun to use than a syntax-based language. I have programmed mathematically dense, dynamic simulations using LabVIEW. Newer versions of LabVIEW include a lot of exciting features, especially for utilizing multiple processors. I like LabVIEW very much. But I don't recommend it to anyone.
Unfortunately, it's an absolute nightmare for anything other than simple acquisition and display. It may one day be sufficiently developed to be considered as a viable alternative to text based languages. However, the developers at NI have consistently opted to ignore the three fundamental problems that plague labview.
1) It is unstable and riddled with bugs. There are thousands of bugs that have been posted to the labview support forums that are yet to be fixed. Some of these are quite serious, such as memory leaks, or mathematical errors in basic functions.
2) The documentation is atrocious. More often than not, when you look up a LabVIEW function in the local help file, you'll find a sentence that merely restates the name of the item you are trying to find detail on. For example, a user looks up the help entry for the texture filter mode setting, and the only thing written there is "Texture Filter Mode: selects the mode used for texture filtering." Gee, thanks. That clears things right up, doesn't it? The problem goes much deeper: quite often, when you ask a technical representative from National Instruments for critical details about LabVIEW functionality or the specific behavior of mathematical functions, they simply don't know how the functions in their own library work. This may sound like an exaggeration, but trust me, it's not.
3) While it's not impossible to keep graphical code clean and well documented, LabVIEW is designed to make these tasks both difficult and inefficient. In order to keep your code from becoming a tangled, confusing mess, you must routinely (every few operations) employ structures like clusters, subVIs, and giant type-defined controls (which can stretch over multiple screens in a large project). These structures eat memory and destroy performance by forcing LabVIEW to make multiple copies of data in memory and perform gratuitous operations, all for the sake of keeping the graphical diagram from looking like rainbow-colored spaghetti with no comments or text anywhere in sight. Programming in LabVIEW is like playing Pictionary with the devil. Imagine your giant software project written as a wall-sized flowchart with no words on it at all. Now imagine that all the lines cross each other a thousand times, so that tracing the data flow is completely impossible. You have just envisioned the most natural and most efficient way to program in LabVIEW.
Labview is cool. Labview is getting better with each new release. If National Instruments keeps improving it, it will be great one day as a general programming language. Right now, it's an extremely bad choice as a software development platform for large or logically complex projects.
I have been writing in LabVIEW for almost 20 years now. I develop automated test systems. I have developed RF, vision, high-speed digital, and many different flavors of mixed-signal test systems. I was a "C" programmer before I switched to LabVIEW.
It's true that you can build some programs quickly in LabVIEW, but just like any other language it takes a lot of training to learn to build a large application that is clean, easy to maintain, and built from reusable code. In 20 years I have never had a LabVIEW bug stop me from finishing a project.
Back in the day, NIWEEK would have a software shootout every year. LabVIEW and LabWindows (NI's version of "C") programmers would both be given the same problem and race to see which group finished first. Each and every year, all the LabVIEW programmers were done well before the first LabWindows person finished. I have challenged many of my dedicated text-based programming friends to shootouts, and they all admit they don't stand a chance, even if I let them define the software problem.
So, I feel LabVIEW is a great programming tool. It's definitely the way to go if you're interfacing with any type of NI hardware. It's not the answer for everything, but I'm sure there are many people not using it just because they don't consider LabVIEW a "real programming language". After all, we just wire a bunch of blocks together, right? I do find it funny how many text-based programmers turn up their noses at it, so proud of the mess of text code they have created that only they can understand. A good programmer in any language should write code that others can easily read. Writing overly complex code that is impossible to follow does not make the programmer a genius; it makes them a "compliator" (someone who can take a simple problem and complicate it). I believe in the KISS principle (keep it simple, stupid).
Anyway, there's my two cents' worth!
I thought LabVIEW was a dream for FPGA programming. Independent executable blocks just... work. In general, I use LabVIEW for various tasks interfacing with my DAQ and FPGA hardware, but that's about it. It seems (again to me) that this is LabVIEW's strong point and the reason it was built, but outside that arena it feels "cumbersome." As far as getting things done, it's like any other language with a learning curve - once you figure it out it's not too bad for getting work done. I've seen several people give up before that thinking the learning curve was permanent or something.
Picking up a 30" monitor made a huge difference.
I know one thing that people dislike is the version control integration.
Edit: LabVIEW/hardware is hella expensive for "just for fun" use. I dropped $10K on their hardware (student prices) and got the software for free from school for making toys around the house.
Our company has been using LabVIEW for the last 10 years for measuring, monitoring, and reporting on our subject (trains).
Recently we have started using LabVIEW as a GUI for databases with lots of data; the power of LabVIEW with its recent new features (classes, XControls) allows us to create these kinds of GUIs for a fraction of the development cost on other platforms, and without needing external programmers at consultancy rates.
Ton
I first started using LabVIEW in a college physics lab. Initially, I thought it was slow and cumbersome compared to text-based languages. It was too difficult to create complex logic, and code became sloppy real fast (wires everywhere).
Then, a few years later, I learned about using subVIs and bundles. What a difference! At that point, I was using LabVIEW for very high-level functions. I was taking raw input from a camera and using all kinds of image filters and processing to parse out the lines in a road, so that a vehicle could drive itself down the road with no driver; it was for the DARPA Urban Challenge. I was also generating maps from text waypoint data, writing high-level parsing functions, and a slew of other applications that had nothing to do with processing data from input devices. It was really a lot of fun, and FAST.
After leaving college, I am now back to using text-based languages. I've been using PHP, JavaScript, VBA, C#, VBScript, VB.NET, MATLAB, Epson RC+, CodeIgniter, various APIs, and I'm sure some others. I often get very frustrated at the amount of syntax I have to memorize in order to program with any significant speed. I find it annoying to have to switch schools of thought based on the language I am using, when all programming languages essentially do the same thing! I need a second monitor just to keep the help up at all times so I can find the syntax for the same functions in different languages. I miss LabVIEW very much; it's too bad it's so expensive, otherwise I would use it for everything.
Graphical programming, I think, has huge potential. By not being constrained by syntax, you can focus on logic instead of code. LabVIEW itself may still be in its infancy in terms of support and debugging, but I believe that conceptually it beats out the competition. It's simply a more intuitive way to program.
We use LabVIEW for running our end-of-line test equipment, and it is ideal for data acquisition and control. Typically measuring 15 to 80 differential voltages and controlling environmental chambers, mass flow controllers, and various serial devices, LabVIEW is more than capable.
Interfacing with custom devices can be simplified greatly by using the NI instrument driver wizard to create reusable VIs, interfacing with custom DLLs if needed. On a number of projects we have created such drivers for custom hardware, and once created they are reusable in future projects with no modification.
Using event-driven structures, user interfaces are responsive, and we regularly use LabVIEW applications to interface with databases.
Whatever programming environment you choose, it's the process of designing the application that matters most. I agree that you can create some really horrible and unreadable block diagrams in LabVIEW, but then you can also create unreadable code in Visual Studio. With just a little thought and planning, a LabVIEW block diagram can be made to fit on a single 24" monitor, with plenty of space to add comments.
I would use LabVIEW over Visual Studio for most projects.
But people do use LabVIEW for purposes other than data acquisition and virtual instrumentation. Of course, LabVIEW is mainly used in labs and production environments, because those are (or were) NI's main customer targets.
However, you can do lots of different things with LabVIEW, like programming a robot that performs a lot of image analysis and then tweets the results. Have a look at videos from NI Week 2009 on YouTube, and you'll see how powerful this tool is. For instance, it's possible to write code and deploy it to ARM MCUs (see this Dev Monkey article from 2009-08-10).
And finally, check out this LabVIEW DIY group.
I have been using LabVIEW for about two years for developing automation. Given due care and proper design, you can certainly develop maintainable and really good-looking applications in LabVIEW; I think the same holds for all the other languages out there. I have seen equally bad code in LabVIEW, primarily from people who use it only to develop quick-and-dirty working automation. IMHO, graphical programming is a lot easier to write and understand if done right. That said, text-based programming 'feels' more powerful!
LabVIEW is primarily marketed for industrial automation, has inherent support for a lot of NI hardware, and you can get third-party hardware working with it pretty quickly. I think that is why you see it only in the automation field. Moreover, it is pretty costly, and you are locked in with NI: you cannot even open your code unless you buy the software from them!
I've been thinking about this question for decades (yes, since 1989...)
Like all programming languages, LabVIEW is a high-level tool used to manipulate the flow of electrons. Unless you are a purist who refuses to use anything other than a breadboard and wires, transistors, integrated circuits, and programming languages are probably a good thing if you wish to build something of any consequence.
But like all high-level tools, just wielding one does not make you a professional craftsman. Back in the day of soldering irons, op-amps and UARTs it required a large amount of careful study before you could create a system that actually functioned. The modern realm of text-based languages is so overly dominated by syntax that the programmer must get it just right before it will compile and run. In order to write code that works, the programmer must increase their skill level to create systems much larger than "Hello World".
LabVIEW is not dominated by syntax, but by Data Flow. Back in the day, reaching for your flow charting template and developing the diagram of a well-balanced information system was the art and beauty part of the job. Only after you had the reviewed flowchart in hand would you even consider slogging through the drudgery of punching out the code. (yes... punch cards)
LabVIEW is a development system that allows the programmer to use flowcharting tools to diagram the complete information system and press "run". LabVIEW "punches out the code" and compiles it for you. No need to fight through the syntax of text language A or language B.
With such a powerful tool, novices can build large, working programs rapidly -- implying some level of professional craftsmanship since it runs at all. However, if the system does not perform elegantly, or the source code diagram is a mess, it is not the fault of LabVIEW.
People often point to "LabVIEW is only good for developing large data acquisition systems." Perhaps those people should consider the professionalism of the scientists and engineers that are working in data acquisition. If they know enough to get the actual wires right for the sensors and transducers, it may be a good bet that they are expert at developing LabVIEW wiring diagrams as well.
I do use LabVIEW at home, as it is part of Lego Mindstorms, which my son loves. And I really like composing systems that way.
However, in my work (embedded systems), it is generally too restrictive. But here too, I'm trying to move up in abstraction:
- control and state behavior: model-based design (e.g., Rhapsody)
- data algorithms etc.: Simulink
Sometimes a graphical model can require more clicks than a piece of code. But this also covers the work a good programmer needs to do in design and documentation, not just the code typing. The graphical notation takes many hassles away and is generally much faster if the tool is powerful enough for the complexity at hand. So I expect these kinds of tools will gain popularity in the coming years as they mature and people get familiar with them.
I have used LabVIEW for some 10 years. It's brilliant for scientific programming, i.e., like MATLAB or Simulink but 10 times better. If you are having problems, then you are doing something wrong. It takes time to learn, like any language. As for using .NET instead: are these people even on the same planet? Why would you go to the trouble of writing everything from scratch when you can, say, pull up an FFT and use already-written code? .NET is fine for simple programs but not so good for scientific processing. Yes, you can do it, but not without oodles of add-ons for graphics etc. Programming in G is far easier than text-based programming for scientific problems. You can of course program in C if you are interfacing, and use the DLL. Now, there are things I would not use LabVIEW for: speech recognition, for example, may be a bit messy at present. More to the point, though: why do people like programming in outdated text form when there is an easy alternative? It is as if people want to make things complicated so as to justify their job in some way. Simplify, simplify!
Somebody said that LabVIEW is only used in the automation field. Simply not right at all. It has applications in digital signal processing, control systems, communications, web-based work, mathematics, image processing, and so on. It started as a data acquisition method, and NI invented the name "virtual instrumentation", but it has gone far beyond that now. It is a scientific programming language with a second-to-none graphical interface. It is way beyond Simulink, and if you like MATLAB it has a type of MATLAB scripting built in for those who like such ways of programming. It is evolving all the time. The one thing I found difficult was writing code for the CompactRIO: tricky, but far easier than the alternative. It's expensive, but you get a quality product. I personally have not found any bugs in ordinary programming. It is an engineer's language, but anybody could use it.
Until today I have been working on basic UIKit applications, but from now on I need to work with OpenGL.
The problem is that I don't know anything about OpenGL, and I'm very confused about how and where to start.
I need to create an application like "iBeer" (see the video on YouTube).
I'm quite confused about how to create the beer graphics you see in that application, so which library should I prefer?
Creating OpenGL applications is kind of time-consuming; it takes time to learn the API and get comfortable with it. It's not easy to just magically write an "iBeer"-style app, even if it's not the most advanced application ever written.
I don't think there are any shortcuts: you will have to learn OpenGL and its conventions.
The iPhone also uses OpenGL ES, whose API is not too far from regular OpenGL's but still differs enough that you can't use regular OpenGL engines.
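To make the difference concrete, here is a hedged sketch of drawing one triangle both ways; desktop OpenGL of that era allows immediate mode, while OpenGL ES only accepts vertex arrays (the header path is the iPhone one; error handling and matrix setup are omitted):

```c
#include <OpenGLES/ES1/gl.h>  /* iPhone OpenGL ES 1.x; on the desktop you'd use <GL/gl.h> */

/* Desktop OpenGL immediate mode -- these calls do NOT exist in OpenGL ES:
 *   glBegin(GL_TRIANGLES);
 *   glVertex3f( 0.0f,  1.0f, 0.0f);
 *   glVertex3f(-1.0f, -1.0f, 0.0f);
 *   glVertex3f( 1.0f, -1.0f, 0.0f);
 *   glEnd();
 */

/* In OpenGL ES 1.x, the same triangle must go through vertex arrays. */
static const GLfloat triangle[] = {
     0.0f,  1.0f, 0.0f,
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
};

void draw_triangle(void)  /* assumes a current GL context and matrices already set up */
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, triangle);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```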
The two best OpenGL resources I have encountered are:
The Red Book
NeHe's game tutorials
There are also some good tutorials specifically for OpenGL ES and iPhone development:
NeHe tutorials ported to iPhone
Blog post about iPhone development
However, I'm going to guess that you are more interested in creating the application than in learning OpenGL itself, so you should also take a look at some OpenGL engines:
Irrlicht engine
Unity
There are probably a million more resources on the web; this should get you started, though.
Jeff LaMarche has a good primer on his blog. Here's the TOC:
Basic Concepts
A Look at Simple Drawing
Viewports in Perspective
Let There Be Light
Living in a Material World
Textures and Texture Mapping
Learning OpenGL and/or OpenGL ES is best done by first learning the prerequisite concepts of computer graphics in general, as well as the mathematics involved: specifically linear algebra, vector spaces, and how matrices can be used to represent coordinate systems. OpenGL is just an API, and it is easy to use as long as you understand what you're doing. If you're afraid of math, don't waste your time.
For absolute beginners, I recommend the book "3D Math Primer", with errata and samples at gamemath.com, and an online article named "The Matrix and Quaternion FAQ", available here: www.j3d.org/matrix_faq/matrfaq_latest.html
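To give a taste of the linear algebra involved: a rigid 3D transform (a rotation $R$ plus a translation $t$) is packed into a single 4x4 homogeneous matrix, which is exactly the form OpenGL's modelview matrix takes:

```latex
% A point p is rotated by R (3x3) and translated by t (3x1)
% via one 4x4 homogeneous matrix:
\begin{pmatrix} p' \\ 1 \end{pmatrix}
=
\begin{pmatrix} R & t \\ 0^{\top} & 1 \end{pmatrix}
\begin{pmatrix} p \\ 1 \end{pmatrix},
\qquad\text{i.e.}\qquad
p' = R\,p + t
```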
And the OpenGL ES 1.x specification for implementation used on the iPhone:
http://www.khronos.org/opengles/1_X
And last but not least, the specifications for OpenGL 2.1 and the first version of the OpenGL Shading Language, GLSL:
http://www.opengl.org/registry/doc/glspec21.20061201.pdf
http://www.opengl.org/registry/doc/GLSLangSpec.Full.1.10.59.pdf
A lot of people I've spoken to don't think the specification counts as documentation for library users, but in my opinion it does, simply because it is accurate, it has authority, and any other publication would just be citing it anyway.
If you understand the mathematics required and the general concepts of Computer Graphics, reading the standard should be a breeze.
Last word: avoid NeHe at all costs. I saw it suggested here, and it's probably okay if you want to go the try-and-fail route. NeHe's tutorials teach how, but not why. Most of the examples are also horribly outdated.
I recommend focusing on the programmable pipeline of OpenGL (shaders) instead of the fixed-function pipeline if you want to use OpenGL on a computer (OpenGL ES 1.x does not have shaders). Shader languages like Cg, GLSL, and HLSL are here to stay; the proof is the latest OpenGL 3.1 standard, where the only way to get things done is through shaders. Just a wise warning, as the Red Book and other "teach yourself OpenGL in 10 days" books tend to focus on the fixed-function pipeline.
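As a rough illustration of what "going through shaders" means, here is a minimal pass-through shader pair compiled with the OpenGL 2.x API and GLSL 1.10 built-ins. This is a sketch only: it assumes headers or a loader exposing the GL 2.x entry points, and error checking is omitted (and note again that OpenGL ES 1.x has no shaders at all):

```c
#include <GL/gl.h>  /* assumes OpenGL 2.x entry points are available (e.g. via GLEW) */

static const char *vs_src =
    "void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }";
static const char *fs_src =
    "void main() { gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); }";

GLuint build_program(void)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();   /* link the two stages into one program */
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    return prog;                       /* activate with glUseProgram(prog) */
}
```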
I would suggest taking the other technologies out of the equation and learning OpenGL separately: then look at the differences in ES and introduce the iPhone specifics, etc. I would second Mads that NeHe is a bad place to start: There are some mistakes in those tutorials that have gone unaddressed (such as forgetting that OpenGL's y-coordinate is inverted relative to most UI toolkits but also forgetting that BMPs are stored upside-down relative to UI toolkits, so loading a BMP and displaying it works through coincidence rather than by intention).
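For reference, the deliberate fix is simply to flip the pixel rows before uploading the texture; here is a small hypothetical helper (the function name and signature are mine, not from any library):

```c
/* Hypothetical helper: flip an image's rows in place, because OpenGL's
   texture origin is bottom-left while most UI toolkits assume top-left.
   `bpp` is bytes per pixel (e.g. 3 for RGB, 4 for RGBA). */
void flip_rows(unsigned char *pixels, int width, int height, int bpp)
{
    int stride = width * bpp;
    for (int y = 0; y < height / 2; ++y) {
        unsigned char *top = pixels + y * stride;
        unsigned char *bot = pixels + (height - 1 - y) * stride;
        for (int x = 0; x < stride; ++x) {   /* swap the two rows byte by byte */
            unsigned char tmp = top[x];
            top[x] = bot[x];
            bot[x] = tmp;
        }
    }
}
```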
The OpenGL red book suggested above is hands down one of the clearest and best written computing books I have ever read, but the free online copy is out of date so grab a modern dead-tree copy. You won't regret it. Again I agree with Mads that a good understanding of the maths is important. I think you could touch up your maths in parallel with reading the red book as it breaks you in gently and has some appendices covering some of the tricks.
Finally, when trying out different things, like the impact of translations, scaling, etc., you often find yourself in a compile, tweak, compile loop. Much more effective is to have an app that lets you tweak parameters at run-time, but writing such a thing while learning is a bit of a steep ask. Nate Robins has some excellent demonstration programs that let you tweak the parameters of OpenGL calls and show you the impact right there in the app. They come highly recommended by the OpenGL red book itself: http://www.xmission.com/~nate/opengl.html
I found this book quite useful for learning OpenGL ES 1.1 (the version the iPhone supports):
http://www.amazon.co.uk/Mobile-3D-Graphics-Kaufmann-Computer/dp/0123737273/ref=sr_1_3?ie=UTF8&s=books
I already knew the basics of OpenGL before reading it, though, so I can't vouch for how good it is for a total beginner.
Other people seem to like the "Red Book"; a free online version is available here: http://www.opengl.org/documentation/red_book/ It might be useful for learning the basic principles, but it covers standard desktop OpenGL, and there are differences between that and what is available on the iPhone.
There are also a couple of examples that come with the iPhone SDK, which I found very useful.
I would like to make a list of remarkable robot simulation environments including advantages and disadvantages of them. Some examples I know of are Webots and Player/Stage.
ROS will visualize your robot and any data you've recorded from it.
Packages to check out would be rviz and nav_view.
This made me remember the breve project.
breve is a free, open-source software package which makes it easy to build 3D simulations of multi-agent systems and artificial life.
There is also a wiki page listing robotics simulators.
Microsoft Robotics Studio/Microsoft Robotics Developer Studio 2008
Also read this article on MSDN Magazine
It all depends on what you want to do with the simulation.
I do legged-robot simulation, so I am coming from a perspective different from mobile robotics, but...
If you are interested in dynamics, then one of the oldest but most difficult to use is SD/FAST. The company that originally made it was acquired by a large CAD outfit.
You might try heading to http://www.sdfast.com/
It will cost you a bit of money, but I trust the accuracy of the simulation. There is no contact or collision model, so you have to roll your own. I have used it to simulate bipeds, swimming fish, etc. There is also no visualization, so it is for the hardcore programmer. However, it is well respected among us old folks.
The Open Dynamics Engine (http://www.ode.org/) is what people use for "easier" simulation. It comes with an integrator and a primitive visualization package. There are Python bindings (hurray for Python!).
The built-in friction model is... well, not very well documented, and it did not make sense to me. Also, simulations can suddenly "fly apart" for no apparent reason, so the results may or may not be accurate.
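To show what the "easier" end looks like, here is a minimal hedged ODE sketch in C: one ball falling under gravity, with no collision detection at all (which is exactly where the friction-model caveats above would come in):

```c
/* Minimal ODE sketch: drop a ball and watch it fall.
   Build against the ODE library, e.g. `gcc ball.c -lode`. */
#include <stdio.h>
#include <ode/ode.h>

int main(void)
{
    dInitODE();
    dWorldID world = dWorldCreate();
    dWorldSetGravity(world, 0, 0, -9.81);

    dBodyID ball = dBodyCreate(world);
    dMass m;
    dMassSetSphere(&m, 1.0, 0.2);      /* density, radius */
    dBodySetMass(ball, &m);
    dBodySetPosition(ball, 0, 0, 1.0); /* start 1 m above the origin */

    for (int i = 0; i < 100; ++i) {
        dWorldStep(world, 0.01);       /* advance 10 ms; no collisions handled */
        const dReal *p = dBodyGetPosition(ball);
        printf("t=%.2f s  z=%.3f m\n", (i + 1) * 0.01, p[2]);
    }

    dWorldDestroy(world);
    dCloseODE();
    return 0;
}
```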
Now, MapleSoft (in beautiful Waterloo, Canada) has come out with MapleSim. It will set you back a bit of money, but here is what I like about it:
It goes beyond just robotics; you can simulate virtually anything. I am sure you can simulate the suspension system of a car, gears, engines... I think it even interfaces with electrical circuit simulation. So, if you are building a high-performance product, MapleSim is a strong contender. Go to www.maplesoft.com and search for it.
They are pretty nice about giving you an eval copy for 30 days.
Of course, you can go home-brew. You can solve the Lagrange-Euler equations of motion for most simple robots using a symbolic computation program like Maple or Mathematica.
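For reference, these are the Euler-Lagrange equations, with the Lagrangian $L = T - V$ (kinetic minus potential energy), one generalized coordinate $q_i$ per joint, and applied joint torques $\tau_i$; the symbolic package grinds out the derivatives:

```latex
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right)
- \frac{\partial L}{\partial q_i} = \tau_i,
\qquad i = 1, \dots, N, \qquad L = T - V
```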
EDIT: I have not been able to do certain derivatives elegantly in Maple; I had to resort to a hack.
However, be aware of speed issues.
Finally, for more biologically motivated work, you might want to look at OpenSim (not to be confused with OpenSimulator).
EDIT: OpenSim shares a team member with SD/Fast.
There are lots of other specialized simulators. But beware.
In sum, here are my evaluation criteria for a simulator for robot-oriented work:
(1) What kind of collision model do you have? If it is a very stiff elastic model, you may have numerical-stability problems during collisions.
(2) Visualization: can you add different terrains, etc.?
(3) Handy graphical building tools, so you don't have to code and then see what you get. Handling a complex system (say, a full-scale humanoid) is hard to do in your head.
(4) What is the complexity of the underlying simulation algorithm? If it is O(N), that is great. But it could be O(N^4), as would be the case for a straight Lagrange-Euler derivation; then your system just will not scale, no matter how fast your machine is.
(5) How accurate is it, and do you care?
(6) Does it help you integrate sensors? For mobile robots you need a "robot's-eye view".
(7) If it does visualization, can it do things like automatically follow the object as it moves, or do you have to chase it around?
Hope that helps!
It's not as impressive looking as Webots, but RobotBasic is free, easy to learn, and useful for prototyping simple robot movement algorithms. You can also program a BasicStamp from the IDE.
I've been programming against SimSpark. It's the open-source simulation engine behind the RoboCup 3D Simulated Soccer League.
It's extensible for different simulations. You can plug in your own sensors, actuators and models using C++, Ruby and/or RSG (Ruby Scene Graph) files.
ABB has quite a solution called RobotStudio for simulating their huge industrial robots. I don't think it's free, and I doubt you'll get much fun out of it, but it's quite impressive. Here's a page about it.
I have been working with CARMEN (http://carmen.sourceforge.net/) and find it useful.
One of the disadvantages of CARMEN is the documentation; with all due respect, I think the web page is a bit outdated and insufficient. So I would like to hear from other people with experience working with CARMEN, or about student reports/projects dealing with it.
You can find a great list of simulation environments at http://www.intorobotics.com/robotics-simulation-softwares-with-3d-modeling-and-programming-support/
MRDS is one of the best, and it's free. LabVIEW is also good for use in robotics.
National Instruments' LabView is a graphical programming environment for developing measurement, test, and control systems.
It could be used for 3D control simulation with SolidWorks.
MRDS is free and is one of the best simulation environments for robotics. Workspace can also be used; please check this link if you want a complete list of robotics simulation software.
TRIK Studio has a nice, clear 2D model simulator, and also visual and textual programming environments for it. They will also soon support 3D modeling tools based on the Morse simulator. It is also free and open source, and it has a multi-language interface.