I'm building a solution to fit a number of objects as efficiently as possible into a box. I hope to implement more efficient algorithms soon, but to start with I'm going to use the brute-force method, checking every possible position. This is fine for now since the box is small, with very few items; later, the complexity will grow.
I'm using Unity to let the user see how the items ultimately fit in the box. My initial thought was to also use Unity's physics and collision detection to implement the best-fit algorithm; but with a huge potential number of locations and positions to check, is this a bad approach? Am I much better off running my algorithm in a data structure instead? A 10x10x10 box with even three 1x1x1 objects has almost a billion possible positions (each 1x1x1 item can occupy any of the 1,000 cells, so three items give on the order of 1,000^3 combinations)...
I'm new to Unity so any advice is welcome; thanks!
Update: right, so this problem is definitely in the bin-packing family of problems, which I know is NP-hard. I'm assuming a rectangular box, filled with rectangular box-shaped items of random dimensions.
My question is: given my particular algorithm, when we ask, "is there currently something in this x,y,z space?", would it be more efficient to figure that out in code, or to use Unity objects with collision detection?
Based on the answers I've seen, using Unity for this would be profoundly inefficient.
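For concreteness, the kind of "data structure" approach I have in mind is a simple occupancy grid. This is only a minimal sketch, assuming axis-aligned items snapped to a 1-unit integer grid; all the names here are illustrative:

    // Boolean occupancy grid: one cell per 1x1x1 unit of the box.
    public class OccupancyGrid
    {
        private readonly bool[,,] cells;

        public OccupancyGrid(int sizeX, int sizeY, int sizeZ)
        {
            cells = new bool[sizeX, sizeY, sizeZ];
        }

        // "Is there currently something in this x,y,z space?" -- an O(1) array read.
        public bool IsOccupied(int x, int y, int z) => cells[x, y, z];

        // Try to place an axis-aligned box of size (sx, sy, sz) with its minimum
        // corner at (x, y, z); returns false, leaving the grid unchanged, if the
        // box would overlap anything already placed or stick out of the grid.
        public bool TryPlace(int x, int y, int z, int sx, int sy, int sz)
        {
            if (x < 0 || y < 0 || z < 0 ||
                x + sx > cells.GetLength(0) ||
                y + sy > cells.GetLength(1) ||
                z + sz > cells.GetLength(2))
                return false;

            for (int i = x; i < x + sx; i++)
                for (int j = y; j < y + sy; j++)
                    for (int k = z; k < z + sz; k++)
                        if (cells[i, j, k]) return false;

            for (int i = x; i < x + sx; i++)
                for (int j = y; j < y + sy; j++)
                    for (int k = z; k < z + sz; k++)
                        cells[i, j, k] = true;
            return true;
        }
    }

A brute-force packer can then just loop over every legal origin for each item; each occupancy query is an array read rather than a physics call.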
If you LITERALLY want to know:
"is there currently something in this x,y,z space?"
the best possible way to do that is to simply use Unity's engine: you trivially check the AABB to see if a point is inside it (or perhaps just check for intersection), using one of the many built-in queries.
I understand that the question "is there currently something in this x,y,z space?" is, or could be, one important part of whatever solution you are planning. And indeed the best way to do that is to let Unity's engine do it -- its collision queries are heavily optimized native code, and it's very unlikely you or I could write anything as efficient for a single ad hoc check.
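To make that concrete, here is a minimal sketch of such a query in Unity C#. The 1x1x1 cell size is an illustrative assumption; Physics.CheckBox and Bounds.Contains are standard Unity calls:

    using UnityEngine;

    public static class SpaceQuery
    {
        // True if any collider overlaps a 1x1x1 axis-aligned cell centered at `center`.
        public static bool IsOccupied(Vector3 center)
        {
            return Physics.CheckBox(center, Vector3.one * 0.5f);
        }

        // Point-in-AABB test against one specific collider's bounds.
        public static bool ContainsPoint(Collider c, Vector3 point)
        {
            return c.bounds.Contains(point);
        }
    }

Each such call is a full physics query, though, which is why doing millions of them in a brute-force search is the slow path.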
That is the actual answer to what you have now stated is your specific question.
Now regarding the more general issue, which I first fully explained when that was the question you were asking :)
Here are some of my thoughts on trivial "box packing" algorithms in 2D, at the level useful in video games.
https://stackoverflow.com/a/35228592/294884
Regarding 3D "box packing", it's absolutely impossible to offer any guidance unless you include a screenshot of what you are trying to do and fully explain the shapes and constraints involved.
If you are a mathematician looking for the latest in algorithmic thinking on the matter, just google something like "3d box packing algorithm"
Again, readers here have utterly no clue what shapes/etc you are dealing with, so please click Edit and explain!
Note too that sphere packing is a really fascinating scientific problem, if that's what you are talking about:
https://en.wikipedia.org/wiki/Close-packing_of_equal_spheres
I was looking at some study I have to do in the future on procedural generation techniques, and I was wondering what types of content you have:
Developed
Helped Develop
Seen implemented
Tried to develop
and what methods/techniques/procedures you used to develop it.
If you feel generous, maybe you can even go into specifics such as the data structures and algorithms you have used to develop it.
If this needs to be made community wiki because it is not me asking for a problem to be solved, just let me know.
This is not a homework thread, because it is a research unit that I'm not taking yet ;)
Introversion Software, the makers of the games Defcon, Uplink and Darwinia (among others), started working on a game about a year ago which extensively uses PCG for city generation. Here is a video of their work, and you can read more about it in the development diary of the game (start from the first part at the bottom of the page!).
This immediately got me extremely interested, and seeing the potential for games, I started researching the technology. I have amassed a folder of 18 PDFs on the subject (research papers, SIGGRAPH presentations, etc.). Here, I uploaded it for you.
The main approach is to use L-systems; however, I never got around to understanding enough of that to make something out of it. I tried other, less successful approaches, like using Voronoi diagrams, recursively splitting a rectangular area into smaller areas and shifting the boundaries a little to obtain a bit of randomness, and polygon division.
The last method I had gotten from Mike's Code Blog's posts (here and here). The screenshots shown on his blog make me drool; it is my biggest programmer's dream to ever get something that looks like that. I emailed him to ask how he did it, and here is the relevant part of his reply (I'm sure he wouldn't mind me posting it here):
L-Systems is definitely one way to go, but that isn't what I'm doing. The basis of my method is polygon subdivision. I start with a simple polygon that represents the entire area of the city. Then, I split it (roughly) in half, and then split those two polygons, etc. until I get down to city-block size. At that point, the edges of all my polygons represent roads. I then use the same subdivision method to break the blocks down into building-size lots.
The devil is in the details, of course, but that is the basic method.
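This is not Mike's exact code (his method works on arbitrary polygons), but a hedged sketch of the simpler rectangular variant I mentioned above: recursive splitting with jittered cut positions. All names are illustrative:

    using System;
    using System.Collections.Generic;

    public struct Rect2
    {
        public float X, Y, W, H;
        public Rect2(float x, float y, float w, float h) { X = x; Y = y; W = w; H = h; }
    }

    public static class CitySubdivider
    {
        static readonly Random Rng = new Random();

        // Recursively split `area` until both sides are below `blockSize`.
        // The split position is jittered around the midpoint (+/-15% of the
        // span) to give the layout some irregularity.
        public static void Subdivide(Rect2 area, float blockSize, List<Rect2> blocks)
        {
            if (area.W <= blockSize && area.H <= blockSize)
            {
                blocks.Add(area);   // a city block; its edges are roads
                return;
            }

            bool splitVertically = area.W >= area.H;
            float jitter = 0.3f * (float)(Rng.NextDouble() - 0.5);

            if (splitVertically)
            {
                float cut = area.W * (0.5f + jitter);
                Subdivide(new Rect2(area.X, area.Y, cut, area.H), blockSize, blocks);
                Subdivide(new Rect2(area.X + cut, area.Y, area.W - cut, area.H), blockSize, blocks);
            }
            else
            {
                float cut = area.H * (0.5f + jitter);
                Subdivide(new Rect2(area.X, area.Y, area.W, cut), blockSize, blocks);
                Subdivide(new Rect2(area.X, area.Y + cut, area.W, area.H - cut), blockSize, blocks);
            }
        }
    }

Starting from one big rectangle and collecting the leaves gives you city blocks whose shared edges form the road network; running the same routine again on each block yields building-size lots, as in Mike's description.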
I for one still haven't managed to implement a solution I'm fully satisfied with, but it remains one of my biggest dreams as a programmer, if not the single biggest, to ever achieve something like this.
Here are a few of the leaders in procedurally generated terrain (and to a lesser extent foliage). If you don't get a detailed answer here regarding methods and techniques, you might want to look in / ask in their forums. I have seen some discussions of techniques there.
TerraGen 2
World Builder
World Machine
Natural Graphics
No one has mentioned the demoscene, which uses ONLY procedural content?
So, go search for Werkkzeug, Kkrieger and MilkyTracker to start. Also, you can visit the site pouet and see the wonder of well-done procedural videos (yes, procedural video clips! With music and graphics, all procedural!)
Allegorithmic's products are used in actual shipping titles. These guys focus on texture generation (both offline and at runtime).
They have some very pretty screenshots and demos.
This is marked as a subjective question; I hope I won't get too many downvotes though.
LabVIEW seems to offer a nice graphical alternative to traditional text-based programming. As I understand it, it's not just a virtualization/data-acquisition programming language; nonetheless, it seems to have that paradigm pegged to its creator's name.
My question comes up because it doesn't seem to be widely used for multi-purpose applications. I'm not a LabVIEW expert of any kind; I'm more like a learner, still getting used to it.
LabVIEW is fantastic if you have National Instruments hardware and want to do something like acquire, plot and log data.
When you start interfacing to custom devices, the wiring between modules gets complicated, since you have to do all the string-manipulation work for input and output to a device.
At my place of work, we found that we got annoyed with having to make massive, complicated VIs to interface to devices, and started writing them in .NET and interfacing them to LabVIEW.
In the end we ended up scrapping LabVIEW altogether and using NI Measurement Studio for Visual Studio to give us all the lovely-looking NI controls (waveform plot, tank, gauges, switches, etc.) with the flexibility of C#.
In summary, even with a couple of 24" screens, sometimes the wiring of LabVIEW code can get too complex, and it becomes impossible to comment, debug, and make extensible for any future changes. I suggest taking a look at Measurement Studio for Visual Studio and using your favourite .NET language with the pretty NI controls.
My two experiences with "graphic alternative[s] to traditional text based programming" have been dreadful. I find such languages to be slow to use, hard to edit, and inexpressive. Debugging them is a nightmare. And they offer no real advantages.
To be sure, it has been quite a long time since I looked at one, but the opinions of others I've asked about them have been only lukewarm, so I have never taken the time to look again. Reasons to look again are welcome and will be taken on board...
LabVIEW can be used to author large, complex software projects. LabVIEW is unquestionably much more fun to use than a syntax-based language. I have programmed mathematically dense, dynamic simulations using LabVIEW. Newer versions of LabVIEW include a lot of exciting features, especially for utilizing multiple processors. I like LabVIEW very much. But I don't recommend it to anyone.
Unfortunately, it's an absolute nightmare for anything other than simple acquisition and display. It may one day be sufficiently developed to be considered a viable alternative to text-based languages. However, the developers at NI have consistently opted to ignore the three fundamental problems that plague LabVIEW.
1) It is unstable and riddled with bugs. There are thousands of bugs that have been posted to the LabVIEW support forums that are yet to be fixed. Some of these are quite serious, such as memory leaks or mathematical errors in basic functions.
2) The documentation is atrocious. More often than not, when you look for help with a LabVIEW function in the local help file, you'll find a sentence that merely restates the name of the item you are trying to find some detail on. E.g., a user looks up the help file on the texture filter mode setting and the only thing written is "Texture Filter Mode: selects the mode used for texture filtering." Gee, thanks. That clears things right up, doesn't it? The problem goes much deeper: quite often, when you ask a technical representative from National Instruments to provide critical details about LabVIEW functionality or the specific behavior of mathematical functions, they simply don't know how the functions in their own library work. This may sound like an exaggeration, but trust me, it's not.
3) While it's not impossible to keep graphical code clean and well documented, LabVIEW is designed to make these tasks both difficult and inefficient. In order to keep your code from becoming a tangled, confusing mess, you must routinely (every few operations) employ structures like clusters, sub-VIs, and giant type-defined controls (which can stretch over multiple screens in a large project). These structures eat memory and destroy performance by forcing LabVIEW to make multiple copies of data in memory and perform gratuitous operations -- all for the sake of keeping the graphical diagram from looking like rainbow-colored spaghetti with no comments or text anywhere in sight. Programming in LabVIEW is like playing Pictionary with the devil. Imagine your giant software project written as a wall-sized flowchart with no words on it at all. Now imagine that all the lines cross each other a thousand times so that tracing the data flow is completely impossible. You have just envisioned the most natural and most efficient way to program in LabVIEW.
LabVIEW is cool. LabVIEW is getting better with each new release. If National Instruments keeps improving it, it will be great one day as a general programming language. Right now, it's an extremely bad choice as a software development platform for large or logically complex projects.
I have been writing in LabVIEW for almost 20 years now. I develop automated test systems. I have developed RF, vision, high-speed digital, and many different flavors of mixed-signal test systems. I was a "C" programmer before I switched to LabVIEW.
It's true that you can build some programs quickly in LabVIEW, but just like any other language it takes a lot of training to learn to build a large application that is clean, easy to maintain, and full of reusable code. In 20 years I have never had a LabVIEW bug stop me from finishing a project.
Back in the day, NIWEEK would have a software shootout every year. LabVIEW and LabWindows (NI's version of "C") programmers would both be given the same problem and race to see which group finished first. Each and every year, all the LabVIEW programmers were done way before the first LabWindows person finished. I have challenged many of my dedicated text-based programming friends to shootouts, and they all admit they don't stand a chance, even if I let them define the software problem.
So, I feel LabVIEW is a great programming tool. It's definitely the way to go if you're interfacing with any type of NI hardware. It's not the answer for everything, but I'm sure there are many people not using it just because they don't consider LabVIEW a "real programming language". After all, we just wire a bunch of blocks together, right? I do find it funny how many text-based programmers snub their noses at it, as they are so proud of the mess of text code they have created that only they can understand. A good programmer in any language should write code that others can easily read. Writing overly complex code that is impossible to follow does not make the programmer a genius; it means the programmer is a "complicator" (someone who can take a simple problem and complicate it). I believe in the KISS principle (keep it simple, stupid).
Anyway, there's my two cents' worth!
I thought LabVIEW was a dream for FPGA programming. Independent executable blocks just... work. In general, I use LabVIEW for various tasks interfacing with my DAQ and FPGA hardware, but that's about it. It seems (again, to me) that this is LabVIEW's strong point and the reason it was built, but outside that arena it feels "cumbersome." As far as getting things done, it's like any other language with a learning curve: once you figure it out, it's not too bad. I've seen several people give up before that point, thinking the learning curve was permanent or something.
Picking up a 30" monitor made a huge difference.
I know one thing that people dislike is the version control integration.
Edit: LabVIEW/hardware is hella expensive for "just for fun" use. I dropped $10K on their hardware (student prices) and got the software for free from school for making toys around the house.
Our company has been using LabVIEW for the last 10 years for measuring, monitoring and reporting on our subject (trains).
Recently we have started using LabVIEW as a GUI for databases with lots of data; the power of LabVIEW with the recent new features (classes, XControls) allows us to create these kinds of GUIs for a fraction of the development cost on other platforms, and without needing external programmers at consultancy rates.
Ton
I first started using LabVIEW in a college physics lab. Initially, I thought it was slow and cumbersome compared to text-based languages. It was too difficult to create complex logic, and code became sloppy real fast (wires everywhere).
Then, a few years later, I learned about using sub-VIs and bundles. What a difference! At this point, I was using LabVIEW for very high-level functions. I was taking raw input from a camera, using all kinds of image filters and processing to ultimately parse out the lines in a road so that a vehicle could drive itself down this road with no driver -- it was for the DARPA Urban Challenge. I was also generating maps from text waypoint data, making high-level parsing functions, and a slew of other applications that had nothing to do with processing data from input devices. It was really a lot of fun, and FAST.
After leaving college, I am now back to using text-based languages. I've been using PHP, JavaScript, VBA, C#, VBScript, VB.NET, MATLAB, Epson RC+, CodeIgniter, various APIs, and I'm sure some others. I often get very frustrated at the amount of syntax I have to memorize in order to program with any significant speed. I find it annoying to have to switch schools of thought based on the language I am using... when all programming languages essentially do the same thing! I need a second monitor just to keep the help up at all times so I can find the syntax for the same functions in different languages. I miss LabVIEW very much; it's too bad it's so expensive, otherwise I would use it for everything.
Graphical programming, I think, has huge potential. By not being constrained by syntax, you can focus on logic instead of code. LabVIEW itself may still be in its infancy in terms of support and debugging, but I believe conceptually it beats out the competition. It's simply a more intuitive way to program.
We use LabVIEW for running our end-of-line test equipment, and it is ideal for data acquisition and control. Typically measuring 15 to 80 differential voltages and controlling environmental chambers, mass flow controllers and various serial devices, LabVIEW is more than capable.
Interfacing with custom devices can be simplified greatly by using the NI instrument driver wizard to create reusable VIs, interfacing with custom DLLs if needed. On a number of projects we have created such drivers for custom hardware, and once created they are reusable in future projects with no modification.
Using event-driven structures, user interfaces are responsive, and we regularly use LabVIEW applications to interface with a database.
Whatever programming environment you choose, it's the process of designing the application that matters most. I agree that you can create some really horrible and unreadable block diagrams in LabVIEW, but then you can also create unreadable code in Visual Studio. With just a little thought and planning, a LabVIEW block diagram can be made to fit on a single 24" monitor with plenty of space to add comments.
I would use LabVIEW over Visual Studio for most projects.
But people do use LabVIEW for purposes other than data acquisition and virtualization. Of course, LabVIEW is mainly used in labs and production environments, because those are (or were) NI's main customer targets.
However, you can do lots of different things with LabVIEW, like programming a robot that performs a lot of image analysis and then tweets the results. Have a look at videos from NI Week 2009 on YouTube and you'll see how powerful this tool is. For instance, it is possible to write code and deploy it to ARM MCUs (see this Dev Monkey article from 2009.08.10).
And finally check this LabVIEW DIY group
I have been using LabVIEW for about two years for developing automation. Given due care and proper design, we sure can develop maintainable and really good-looking applications in LabVIEW. I think this is the same for all the other languages out there. I have seen equally bad code in LabVIEW, primarily from people who use it only to develop quick-and-dirty working automation. IMHO, graphical programming is a lot easier to code and understand if rightly done. But that said, I feel text-based programming 'feels' more powerful!
LabVIEW is primarily marketed for industrial automation, has inherent support for a lot of NI hardware, and you can get third-party hardware working with it pretty quickly. I think that is the reason you see it only in the automation field. Moreover, it is pretty costly, and you are locked in with NI: you cannot even open your code if you do not buy the software from them!
I've been thinking about this question for decades (yes, since 1989...)
Like all programming languages, LabVIEW is a high-level tool used to manipulate the flow of electrons. Unless you are a purist and refuse to use anything other than a breadboard and wires, transistors, integrated circuits and programming languages are probably a good thing if you wish to build something of any consequence.
But like all high-level tools, just wielding one does not make you a professional craftsman. Back in the day of soldering irons, op-amps and UARTs it required a large amount of careful study before you could create a system that actually functioned. The modern realm of text-based languages is so overly dominated by syntax that the programmer must get it just right before it will compile and run. In order to write code that works, the programmer must increase their skill level to create systems much larger than "Hello World".
LabVIEW is not dominated by syntax, but by data flow. Back in the day, reaching for your flowcharting template and developing the diagram of a well-balanced information system was the art-and-beauty part of the job. Only after you had the reviewed flowchart in hand would you even consider slogging through the drudgery of punching out the code. (Yes... punch cards.)
LabVIEW is a development system that allows the programmer to use flowcharting tools to diagram the complete information system and press "run"... LabVIEW "punches out the code" and compiles it for you. No need to fight through the syntax of text language A or language B.
With such a powerful tool, novices can build large, working programs rapidly -- implying some level of professional craftsmanship since it runs at all. However, if the system does not perform elegantly, or the source code diagram is a mess, it is not the fault of LabVIEW.
People often point to "LabVIEW is only good for developing large data acquisition systems." Perhaps those people should consider the professionalism of the scientists and engineers that are working in data acquisition. If they know enough to get the actual wires right for the sensors and transducers, it may be a good bet that they are expert at developing LabVIEW wiring diagrams as well.
I do use LabVIEW at home, as it is part of Lego Mindstorms, which my son loves. And I really like this way of composing systems.
However, in my work (embedded systems), it is generally too restrictive. But here, too, I'm trying to move up in abstraction:
- control and state behavior: model-based design (e.g. Rhapsody)
- data algorithms etc.: Simulink
Sometimes a graphical model can require more clicks than a piece of code. But this also includes the work a good programmer needs to do in design and documentation, not just the code typing. The graphical notation takes many hassles away and is generally much faster if the tool is powerful enough for the complexity at hand. So I expect these kinds of tools to gain more popularity in the coming years as they mature and people get familiar with them.
I have used LabVIEW for some 10 years. It's brilliant for scientific programming, i.e. like MATLAB or Simulink but 10 times better. If you are having problems, then you are doing something wrong. It takes time to learn, like any language. As for using .NET instead -- are these people even on the same planet? Why would you go to the trouble of writing everything from scratch when you can pull up an FFT etc. and use already-written code? .NET is fine for simple programs, but not so good for scientific processing. Yes, you can do it, but not without oodles of add-ons for graphics etc. Programming in G is far easier than text-based programming for scientific problems. You can of course program in C if you are interfacing, and use the DLL. Now, there are things that I would not use LabVIEW for -- speech recognition, for example, may be a bit messy at present. More to the point though, why do people like programming in outdated text form when there is an easy alternative? It is as if people want to make things complicated so as to justify their job in some way. Simplify, simplify!
Somebody said that LabVIEW is only used in the automation field. Simply not right at all. It has applications in digital signal processing, control systems, communications, web-based applications, mathematics, image processing and so on. It started as a data acquisition method, and NI invented the name Virtual Instrumentation, but it has gone far beyond that now. It is a scientific programming language with a second-to-none graphical interface. It is way beyond Simulink, and if you like MATLAB, then it has a type of MATLAB scripting built in for those who like such ways of programming. It is evolving all the time. The one thing I found difficult was writing code for the CompactRIO -- tricky, but far easier than the alternative. It's expensive, but you get a quality product. I personally have not found any bugs in ordinary programming. It is an engineer's language, but anybody could use it to program.
Summary:
Can I program a "thick client" game in C without reinventing wheels, or should I just bite the bullet and use some library or SDK? I'm a moderate C programmer and am not afraid to work with pointers, data structures, memory locations, etc. if it will give me the control I need to make a great "thick-client" game. However, I'm thinking of eschewing high-level languages & frameworks for the sake of power and control, not ease of use.
I'm interested in tinkering with a 2D fighting/platforming game as a side project sometime. I'm primarily a Linux server-side programmer with experience in Python, Ruby and PHP. I know that there are excellent frameworks in some of these languages, like PyGame. I am also aware of the success people have had with stuff like AIR and .NET... but I have some concerns:
Performance: Scripting languages are notoriously slow. If I'm making a real-time game, I want it to be as snappy as possible.
Huge binaries: Using frameworks like .NET or scripting languages like Ruby often results in big CLRs or libraries that you wouldn't otherwise need. The game I want to make will be small and simple--I don't want its CLR to be bigger than the game itself!
Extra stuff: Honestly, I just don't like the idea of inheriting some big game library's baggage if I can wrap my head around my own code better.
I'm asking this question because I know I'm very susceptible to Not Invented Here Syndrome. I always want to program it myself, and I'm sure it wastes a lot of time. However, this works out for me remarkably often--for example, instead of using Rails (a very big web project framework with an ORM and GUI toolkit baked in), I used an array of smaller Ruby tools like rack and sequel that fit together beautifully.
So, I turn to you, SO experts. Am I being naive? Here's how I see it:
Use C
Cons
Will probably make me hate programming
High risk of reinventing wheels
High risk of it taking so long that I lose interest
Pros
Tried & true - most A-list games are done in C (is this still true today?)
High level of control over memory management, speed, asset management, etc., which I trust myself to learn to handle
No cruft
Use framework or SDK
Cons
Risk of oversized deliverable
Dependent on original library authors for all facets of game development--what if there isn't a feature I want? I'll have to program it myself, which isn't bad, but partially defeats the purpose of using a high-level framework in the first place
High risk of performance issues
Pros
MUCH faster development time
Might be easier to maintain
No time wasted reinventing common paradigms
What else can I add to this list? Is it a pure judgment call, or can someone seal the deal for me? Book suggestions welcome.
I believe you are working under a fallacy.
There are several frameworks out there specifically for game programming, written by people with much more experience with the complications of game design than you have.
In other words, you have a "high risk of performance issues" if you DON'T use a framework.
My current thinking is:
If you want to learn to program, start making the game engine from the base elements upwards (even implementing basic data structures -- lists, maps, etc.). I've done this once, and while it was a learning experience, I made many mistakes, and I wouldn't do it a second time around. However, for learning how to program, as well as making something cool and seeing results, I'd rate this highly.
If you want to make a proper game, use whatever libraries that you want and design all of the game infrastructure yourself. This is what I'm doing now, and I'm using all of the nice things like STL, ATL/WTL, Boost, SQLite, DirectX, etc. So far I've learnt a lot about the middle/game logic aspect of the code and design.
If you just want to make a game with artists and other people collaborating to create a finished product, use one of the existing engines (OGRE, Irrlicht, Nebula, Torque, etc) and just add in your game logic and art.
One final bit of wisdom I've learnt: don't worry about Not Invented Here syndrome. I've come to realise that other libraries (such as STL, Boost, DirectX, etc.) have an order of magnitude (or three) more man-hours of development time in them, far more than I could ever spend on that portion of the game/engine. Therefore, the only reason to implement these things yourself is if you want to learn about them.
I would recommend you try pyglet.
It has good performance, as it utilizes OpenGL
It's a compact all-in-one library
It has no extra dependencies besides Python
Do some tests and see if you can make it fast enough for you. Only if you prove to yourself that it's not should you move to a lower level. Although I'm fairly confident that Python + pyglet can handle it... at worst you'll have to write a few C extensions.
Today, I believe you are at a point where you can safely ignore the performance issue unless you're specifically trying to do something that pushes the limits. If your game is, say, no more complicated than Quake II, then you should choose tools and libraries that let you do the most for your time.
Why did I choose Quake II? Because running in a version compiled for .NET, it runs with a software renderer at a more than acceptable frame rate on a current machine. (If you like, compare MAME, which emulates multiple processors and graphics hardware at acceptable rates.)
You need to ask yourself if you are in this to build an engine or to build a game. If your purpose is to create a game, you should definitely look at an established gaming engine. For 2D game development, look at Torque Game Builder. It is a very powerful 2D gaming engine/SDK that will put you into production from day 1. They have plenty of tools that integrate with it, content packs, and you get the full source code if you want to make changes and/or learn how it works. It is also Mac OSX compatible and has Linux versions in the community.
If you are looking for something on the console side, they have that too.
I'm surprised nobody has mentioned XNA. It's a framework built around DirectX for doing managed DirectX programming while removing a lot of the fluff and verbosity of lower-level DirectX programming.
Performance-wise, for most 2D and 3D game tasks, especially building something like a fighting game, this platform works very well. It's not as fast as if you were doing bare-metal DirectX programming, but it gets you very close, and in a managed environment, no less.
Another cool benefit of XNA is that most of the code can be run on an Xbox 360 and can even be debugged over the network connection as the game runs on the Xbox. XNA games can now be approved by the Xbox Live team for distribution and sale on Xbox Live Arcade as well. So if you're looking to take the project to a commercial state, you might have an available means of distribution at your disposal.
Like all MS development tools, the documentation and support is first rate, and there is a large developer community with plenty of tutorials, existing projects, etc.
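For a feel of the programming model, here is a minimal skeleton following the standard XNA 4.0 game template (the class name is arbitrary; the overridable methods are the standard ones):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public class FighterGame : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;

        public FighterGame()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void LoadContent()
        {
            // Created once the graphics device exists; used for 2D drawing.
            spriteBatch = new SpriteBatch(GraphicsDevice);
        }

        protected override void Update(GameTime gameTime)
        {
            // Poll input and advance the game state here.
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            // spriteBatch.Begin(); ... draw sprites ... spriteBatch.End();
            base.Draw(gameTime);
        }
    }

The framework calls Update and Draw in a fixed loop for you, which is a large part of the boilerplate it removes.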
Do you want to be able to play your game on a console? Do you want to do it as a learning experience? Do you want the final product to be cross platform? Which libraries have you looked into so far?
For a 2D game I don't think performance will be a problem; I recommend going with something that will get you results on screen in the shortest amount of time. If you have a lot of experience with Python, then PyGame is a good choice.
If you plan on doing some 3D games in the future, I would recommend taking a look at Ogre (http://www.ogre3d.org). It's a cross-platform 3D graphics engine that abstracts away the graphics APIs. However, for a 2D project it's probably overkill.
The most common implementation language for A-list games today is C++, and a lot of games embed a scripting language (such as Python or Lua) for game event scripting.
The tools you'd use to write a game have a lot to do with your reasons for writing it, and with your requirements. This is no different from any other programming project, really. If it's a side project, and you're doing it on your own, then only you can assess how much time you have to spend on this and what your performance requirements are.
Generally speaking, today's PCs are fast enough to run 2D platformers written in scripting languages. Using a scripting language will allow you to prototype things faster and you'll have more time to tweak the gameplay. Again, this is no different than with any other project.
If you go with C++, and your reasons don't have to be more elaborate than "because I want to," I would suggest that you look at SDL for rendering and audio support. It will make things a little bit easier.
If you want to learn the underlying technologies (DirectX, or you want to write optimized blitters for some perverse reason) then by all means, use C++.
Having said all that, I would caution you against premature optimization. For a 2D game, you'll probably be better off going with Python and PyGame first. I'd be surprised if those tools will prove to be inadequate on modern PCs.
As to what people have said about C/C++/Python: I'm a game developer and my company encourages C. Not because C++ is bad, but because badly written C++ is poison for game development due to its difficulty to read and debug compared to C. (C++ gives benefits when used properly, but let a junior guy make some mistakes with it and your time sink is huge.)
As to the actual question:
If your purpose is to just get something working, use a library.
Otherwise, code it yourself for a very important reason: Practice
Practice in manipulating data structures. There WILL be times you need to manage your own data. Practice in debugging utility code.
Often libs do just what you want and are great, but sometimes YOUR specific use case is handled very badly by the lib, and you will gain big benefits from writing your own. This is especially true on consoles compared to PCs.
(edit:) Regarding script and garbage collection: it will kill you on a console. On a recent game I had to rewrite major portions of the garbage collection in Unreal just to meet our needs in the editor portion. Even more had to be done in the actual game (not just by me). (To be fair, though, we were pushing beyond Unreal's original specs.)
Scripting is often good, but it is not an "I win" button. In general, the gains disappear if you are pushing against the limits of your platform. I would use "percent of the platform's CPU that I have to spare" as my evaluation function in deciding how appropriate script is.
One consideration in favor of C/C++/obj-C is that you can mix and match various libraries for different areas of concern. In other words, you are not stuck with the implementation of a feature in a framework.
I use this approach in my games: Chipmunk for 2D physics, Lua as an embedded scripting language, and an OpenGL ES implementation from Apple. I write the glue to tie all of these together in C. The final product is the ability to define game objects, create instances of them, and handle events as they interact with each other in C functions exposed to Lua. This approach is used in many high-performance games to much success.
If you don't already know C++, I would definitely recommend you go forward with a scripting language. Making a game from start to finish takes a lot of motivation, and forcing yourself to learn a new language at the same time is a good way to make things go slowly enough that you lose interest (although it IS a good way to learn a new language...).
Most scripting languages will be compiled to byte code anyway, so their biggest performance hit will be the garbage collection. I'm not experienced enough to give a definite description of how big a hit garbage collection would be, but I would be inclined to think that it shouldn't be too bad in a small game.
Also, if you use an existing scripting language library to make your game, most of the performance-critical areas (like graphics) can be written in C++ anyway (hopefully by the game libraries). So 80% of the CPU time might actually be spent in C++ code, despite the fact that most of your project is written in, say, Python.
I would say, ask yourself what you want more: To write a game from start to finish and learn about game development, or to learn a new language (C++). If you want to write a game, do it in a scripting language. If you want to learn a new language, do it in C++.
Yeah unless you just want to learn all of the details of the things that go into making a game, you definitely want to go with a game engine and just focus on building your game logic rather than the details of graphics, audio, resource management, etc.
Personally I like to recommend the Torque Game Builder (aka Torque 2D) from GarageGames. But you can probably find some free game engines out there that will suit your needs as well.
I'm pretty sure most modern games are done in C++, not C. (Every gaming company I ever interviewed with asked C++ questions.)
Why not use C++ and existing libraries for physics + collisions, sound, graphics engine, etc.? You still write the game, but the mundane stuff is taken care of.
There are a lot of different solutions to the issue of abstraction, and each deals with it in different ways.
My current project uses C#, DirectX 9, HLSL and SlimDX. Each of these offers a carefully calibrated level of abstraction. HLSL allows me to actually read the shader code I'm writing and SlimDX/C# allows me to ignore pointers, circular dependencies and handling unmanaged code.
That said, none of these technologies has any impact on the ease of developing my AI, lighting or physics! I still have to break out my textbooks in a way that I wouldn't with a higher-level framework.
Even using a framework like XNA, if most video games development concepts are foreign to you there's a hell of a lot still to take in and learn. XNA will allow you to neatly sidestep gimbal lock, but woe betide those who don't understand basic shading concepts. On the other hand, something like DarkBASIC won't solve your gimbal lock problem, but shading is mostly handled for you.
It's a sufficiently big field that your first engine will never be the one you actually use. If you write it yourself, you won't write it well enough. If you use third party libraries, there's certainly aspects that will annoy you and you'll want to replace.
As an idea, it might be worth taking various libraries/frameworks (definitely make XNA one of your stops; even if you decide you don't want to use it, it's a great benchmark) and trying to build various prototypes. Perhaps a landscape (with a body of water) or a space physics demo.
I would like to make a list of remarkable robot simulation environments, including their advantages and disadvantages. Some examples I know of are Webots and Player/Stage.
ROS will visualize your robot and any data you've recorded from it.
Packages to check out would be rviz and nav_view.
This made me remember the breve project.
breve is a free, open-source software package which makes it easy to build 3D simulations of multi-agent systems and artificial life.
There is also a wikipage listing Robotics simulators
Microsoft Robotics Studio/Microsoft Robotics Developer Studio 2008
Also read this article on MSDN Magazine
It all depends on what you want to do with the simulation.
I do legged-robot simulation, and I am coming from a perspective that is different from mobile robotics, but...
If you are interested in dynamics, then one of the oldest but most difficult to use is SD/FAST. The company that originally made it was acquired by a large CAD outfit.
You might try heading to http://www.sdfast.com/
It will cost you a bit of money, but I trust the accuracy of the simulation. There is no contact or collision model, so you have to roll your own. I have used it to simulate bipeds, swimming fish, etc. There is also no visualization. So, it is for the hardcore programmer. However, it is well respected among us old folk.
The Open Dynamics Engine (http://www.ode.org/) is used by people for "easier" simulation. It comes with an integrator and a primitive visualization package. There are Python bindings (hurray for Python!).
The built-in friction model is... well, not very well documented, and did not make sense to me. Also, the simulations can suddenly "fly apart" for no apparent reason. The simulations may or may not be accurate.
Now, Maplesoft (in beautiful Waterloo, Canada) has come out with MapleSim. It will set you back a bit of money, but here is what I like about it:
It goes beyond just robotics. You can simulate virtually anything. I am sure you can simulate the suspension system on a car, gears, engines... I think it even interfaces with electrical circuit simulation. So, if you are building a high-performance product, then MapleSim is a strong contender. Go to www.maplesoft.com and search for it.
They are pretty nice about giving you an eval copy for 30 days.
Of course, you can go home-brew. You can solve the Lagrange-Euler equations of motion for most simple robots using a symbolic computation program like Maple or Mathematica.
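For reference, these are the standard Euler-Lagrange equations: with generalized coordinates $q_i$ and Lagrangian $L = T - V$ (kinetic minus potential energy), each joint satisfies

    \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right) - \frac{\partial L}{\partial q_i} = \tau_i

where $\tau_i$ is the generalized force or torque acting on $q_i$; the symbolic package's job is grinding out those derivatives for your particular $T$ and $V$.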
EDIT: I have not been able to do certain derivatives elegantly in Maple; I have to resort to a hack.
However, be aware of speed issues.
Finally, for more biologically motivated work, you might want to look at OpenSim (not to be confused with OpenSimulator).
EDIT: OpenSim shares a team member with SD/Fast.
There are lots of other specialized simulators. But beware.
In sum, here are the evaluation criteria for a simulator for robot-oriented work:
(1) What kind of collision model do you have? If it is a very stiff elastic collision, you may have problems with numerical stability during collisions.
(2) Visualization: can you add different terrains, etc.?
(3) Handy graphical building tools, so you don't have to code and then see what you get. Handling a complex system (say, a full-scale humanoid) is hard to think about in your head.
(4) What is the complexity of the underlying simulation algorithm? If it is O(N), that is great. But it could be O(N^4), as would be the case for a straight Lagrange-Euler derivation... then your system just will not scale, no matter how fast your machine is.
(5) How accurate is it, and do you care?
(6) Does it help you integrate sensors? For mobile robots you need to have a "robot-eyes view".
(7) If it does visualization, can it do things like automatically follow the object as it is moving, or do you have to chase it around?
Hope that helps!
It's not as impressive looking as Webots, but RobotBasic is free, easy to learn, and useful for prototyping simple robot movement algorithms. You can also program a BasicStamp from the IDE.
I've been programming against SimSpark. It's the open-source simulation engine behind the RoboCup 3D Simulated Soccer League.
It's extensible for different simulations. You can plug in your own sensors, actuators and models using C++, Ruby and/or RSG (Ruby Scene Graph) files.
ABB has quite a solution called RobotStudio for simulating their huge industrial robots. I don't think it's free, and I don't imagine you'll get much fun out of it, but it's quite impressive. Here's a page about it.
I have been working with Carmen (http://carmen.sourceforge.net/) and find it useful.
One of the disadvantages of Carmen is the documentation; with all respect, I think the webpage is a bit outdated and insufficient. So I would like to hear from other people with experience working with Carmen, or about student reports/projects dealing with Carmen.
You can find a great list of simulation environments at http://www.intorobotics.com/robotics-simulation-softwares-with-3d-modeling-and-programming-support/
MRDS is one of the best, and it's free. Also, LabVIEW is good to be used in robotics.
National Instruments' LabVIEW is a graphical programming environment for developing measurement, test, and control systems.
It could be used for 3D control simulation with SolidWorks.
MRDS is free and is one of the best simulation environments for robotics. Workspace can also be used, and please check this link if you want a complete list of robotics simulation software.
Trik Studio has a nice and clear 2D model simulator, and also visual and textual programming environments for them. They will also soon support 3D modeling tools based on the Morse simulator. It is also free and open-source and has a multi-language interface.