Does Web Audio have any plans for limiters, imagers, saturators, or multiband compressors? - web-audio-api

I have checked the interfaces that Web Audio offers. Pizzicato.js offers a great library for these effects, but it is a pity that some of the best and most essential effects are missing, like a limiter, multi-band compressor, parametric equalizer, saturator, and stereo imager. I was just wondering if there are any plans for them, and where I can check whether they are willing to make these in the future. I just don't know where I could ask.
Thanks

WebAudio is a collection of fairly elemental base processors. There are really no higher-order effects, because instead it provides the foundational elements with which to build the effects.
For example, there's a dynamics compressor, but there's no vari-mu emulation, or FET circuit, or even your run-of-the-mill digital compressor. But using the pieces that the API does have, you can build out or model a compressor that behaves however you want. Just think it through and figure out how the signal needs to be processed, and you'll find pretty much every component you need to achieve it. If not, the AudioWorklet (the successor to the deprecated ScriptProcessorNode) can fill in the blanks.
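To give a feel for what "filling in the blanks" looks like, here is a minimal AudioWorklet sketch (TypeScript, with hand-rolled ambient declarations because the worklet-scope globals aren't in the default DOM typings) that does a toy tanh soft-clip saturation. The processor name 'saturator' and the drive constant are illustrative assumptions, not anything the spec prescribes.

    // saturator-processor.ts: runs inside the AudioWorkletGlobalScope.
    // Minimal ambient declarations for the worklet-scope globals.
    declare class AudioWorkletProcessor {
      readonly port: MessagePort;
    }
    declare function registerProcessor(
      name: string,
      ctor: new () => AudioWorkletProcessor
    ): void;

    class SaturatorProcessor extends AudioWorkletProcessor {
      process(inputs: Float32Array[][], outputs: Float32Array[][]): boolean {
        const input = inputs[0];
        const output = outputs[0];
        for (let ch = 0; ch < input.length; ch++) {
          for (let i = 0; i < input[ch].length; i++) {
            // Toy soft clipping; the 3x "drive" is an arbitrary placeholder.
            output[ch][i] = Math.tanh(3 * input[ch][i]);
          }
        }
        return true; // keep the processor alive
      }
    }

    registerProcessor('saturator', SaturatorProcessor);

On the main thread you would load it with ctx.audioWorklet.addModule('saturator-processor.js') and then patch new AudioWorkletNode(ctx, 'saturator') into the graph like any other node.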
That being said, you would build a limiter using the DynamicsCompressor, and use BiquadFilters and DynamicsCompressors to build a multiband comp. You'd use the WaveShaper to build things like tape saturators, overdrive, and bit crushers. And you can create an imager or stereo widener effect using things like the PannerNode and AudioParam automation (setValueAtTime() is probably the simplest).
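To make that concrete, here is a rough sketch (TypeScript against the standard Web Audio types) of a limiter built from a DynamicsCompressorNode, plus a crude three-band compressor built from BiquadFilterNodes feeding per-band compressors. The thresholds, ratios, and crossover frequencies are placeholder values to tune by ear, not recommended settings.

    // A limiter is essentially a compressor with a hard knee, a maximal
    // ratio, and a very fast attack.
    function createLimiter(ctx: AudioContext): DynamicsCompressorNode {
      const limiter = ctx.createDynamicsCompressor();
      limiter.threshold.value = -3;  // ceiling in dB (placeholder)
      limiter.knee.value = 0;        // hard knee
      limiter.ratio.value = 20;      // the maximum ratio the node allows
      limiter.attack.value = 0.001;  // clamp transients quickly
      limiter.release.value = 0.1;
      return limiter;
    }

    // A crude multiband compressor: split the signal into low/mid/high
    // bands with biquads, compress each band, then sum the bands again.
    function createMultibandCompressor(ctx: AudioContext, input: AudioNode): GainNode {
      const sum = ctx.createGain();
      const bands: Array<[BiquadFilterType, number]> = [
        ['lowpass', 200],    // low band, crossover around 200 Hz (placeholder)
        ['bandpass', 1000],  // mid band centred around 1 kHz (placeholder)
        ['highpass', 4000],  // high band above roughly 4 kHz (placeholder)
      ];
      for (const [type, frequency] of bands) {
        const filter = ctx.createBiquadFilter();
        filter.type = type;
        filter.frequency.value = frequency;
        const comp = ctx.createDynamicsCompressor();
        comp.threshold.value = -24;
        comp.ratio.value = 4;
        input.connect(filter).connect(comp).connect(sum);
      }
      return sum;
    }

Wiring is the usual source.connect(node).connect(ctx.destination) chain; a production multiband would use matched crossover filters rather than this naive split, but the building blocks are the same.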
WebAudio isn't really plug & play, but it's what you'd use to build something that is. If you'd rather skip the tedium of building your own DSP, I can't really blame you; it's not easy. But there are plenty of masochists out there who already did, and many of those libraries are made by very imaginative and talented engineers. A couple of libraries worth checking out would be Tuna.js, which is a very user-friendly, straightforward effects library, or Tone.js for something much more fleshed out and complete.

According to the docs, several already seem to be implemented:
Compressor: https://developer.mozilla.org/en-US/docs/Web/API/DynamicsCompressorNode
EQ (a sketch follows below): https://developer.mozilla.org/en-US/docs/Web/API/BiquadFilterNode
Saturator can probably be implemented using: https://developer.mozilla.org/en-US/docs/Web/API/ConvolverNode
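For the parametric equalizer mentioned in the question, the usual approach is a chain of "peaking" BiquadFilterNodes, one per band, each with its own centre frequency, Q, and gain. A minimal sketch in TypeScript; the band values are made-up examples, not recommendations:

    interface EqBand {
      frequency: number; // centre frequency in Hz
      q: number;         // bandwidth
      gain: number;      // boost/cut in dB
    }

    // Build a chain of peaking filters and return its ends so the chain
    // can be patched into an existing graph.
    function createParametricEq(ctx: AudioContext, bands: EqBand[]) {
      const filters = bands.map((band) => {
        const f = ctx.createBiquadFilter();
        f.type = 'peaking';
        f.frequency.value = band.frequency;
        f.Q.value = band.q;
        f.gain.value = band.gain;
        return f;
      });
      // Wire the bands in series.
      for (let i = 0; i < filters.length - 1; i++) {
        filters[i].connect(filters[i + 1]);
      }
      return { input: filters[0], output: filters[filters.length - 1] };
    }

    // Example bands (illustrative numbers only):
    // createParametricEq(ctx, [
    //   { frequency: 120,  q: 1.0, gain: -3 },
    //   { frequency: 1000, q: 1.4, gain: 2 },
    //   { frequency: 6000, q: 0.8, gain: 4 },
    // ]);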

Related

Is there a way to use assembly hand-coded shaders instead of using GLSL on iPhone?

I would like to use hand-coded assembly language vertex and fragment shaders in order to program very optimized shaders on iPhone with OpenGL ES 2.0.
I googled around, but I can't find an example, or even whether it is allowed by the Apple SDK.
On the iPhone, there is no way to hand-tune shaders. It's worth noting that on the iPhone in particular, there are no optimizations you can do that the compiler can't; the GLSL compiler will probably beat or match your hand-tuned assembly.
On the PC, however, I personally do not have faith in the driver to know that a looping shader should use fewer registers and more instructions in order to achieve greater throughput through higher occupancy. Drivers simply don't have enough context to make the right choice all of the time. Data-specific compile-time optimizations are a great example of this problem.
As someone who has actually looked at the GLSL compiler's assembly output and tried to game the compiler's register allocation strategy, I can tell you that not having assembly access absolutely does hurt performance (on the PC, there are some publicly available tools from NVidia and AMD that allow you to do this). The trade-off to using assembly is that every shader needs to be hand-tuned for every supported part in order to achieve the maximum possible performance. While this is a bit extreme, if I want to invest my own time into fine-tuning a rendering back-end for each video card my products support, then I should be able to do so. A more practical example would be hand-tuning for low-end video cards, but letting the GLSL compiler do its job on more high-end video cards.
Further, offline compilers provide a safety mechanism. Many video games today rely on drivers to emulate a lot of the functionality available in modern graphics APIs. As a game developer who works in the game-as-a-service space on the PC, I can tell you that it's extremely upsetting to get a call in the middle of the night because of a minor GLSL bug in a newly released graphics driver. Driver bugs severely impact the overall player experience. Most players simply think that your game is broken, and you can actually lose players as a result (and we probably have). Being able to compile once for each supported video card and hand-tune after the fact would be a huge win in this regard. It simply means that the driver would have to do less work. Code is evil, so the less code that executes, the better =).
As a side note, I made the following demo using the 'compile'-'view assembly'-'modify'-'repeat' approach: http://www.youtube.com/watch?v=km0DpZUgvbg . I can tell you with 100% certainty that I could further improve the performance of this ray-tracer with assembly language, and AFAIK, it's the fastest voxel ray-tracer whose existence has been published (that was the case as of Mar 2012, but is likely no longer true). Unsurprisingly, each time a new driver would come out, I would see this demo's performance go from 125-130 fps down to 30 fps - all because the driver didn't know how to optimize my shader correctly. That means I'd have to repeat my optimization process each time a new driver came out, which caused me to simply mothball the project (ACK!). Even though my voxel raytracer can support a large variety of hardware in a performant manner, drivers are currently making it impossible to support this technology in a full product. I simply do not have the weight to put this technology in action because it would require driver vendors to know the ways in which they need to optimize my shader. How many other technologies would be possible if we simply had direct assembly shader access? This implies that lacking assembly access is actually a serious cost. For anyone else in this position, I recommend the following: Use NVidia's assembly language when possible and fall back to GLSL when it's not. If we show the advantage of assembly over GLSL, then hopefully we'll get first-class assembly support from all vendors =).
And finally, not to pick on another author, but I want to point out that the argument made by 'Nicol Bolas' is almost entirely fallacious (sorry Nicol, I have nothing against you, but I wanted to point out some popular arguments that simply don't hold up to an ethics test). Please note that a fallacious argument does not mean that a particular conclusion is incorrect -- just that the argument posited is simply fallacious.
"Why? You don't trust the compiler to do it's job? Do you really think that you know enough about the GPU in question to be able to consistently beat the compiler?"
This is not an argument. This is simply a question that a lot of people can't think of a real answer to. Therefore, they come to the conclusion that they should just trust the compiler. This prevents people from further investigating the ramifications of trusting the compiler, and prevents logical discourse about the actual pros and cons from taking place. Furthermore, the use of the word "really" in your second question implies that someone answering yes must be deluded. This also hints at what you think about yourself, Nicol -- it implies that you value your opinion above all others, and anyone who doesn't think like you must have something wrong with them (not that they are wrong, but that they have something wrong with them - big difference). That said, you might want to take some time to think about your thought process, your feelings, and your emotional state. Taking this approach will seriously limit your ability to learn, as you won't be challenging your own ideas with enough rigor. Please stop using this argument. It's not healthy or ethical.
"Ultimately, you're just going to have to trust the compiler made by the people who built your GPU. Nobody else has a problem with that these days."
I have a problem with it. Also, even if no one did have a problem with it, that still wouldn't matter. The fact is that there might still be some benefit to allowing assembly shaders. That said, a consensus does not mean correctness. Further, this argument is especially unethical because there is an implied "Nobody else has a problem with that these days, so if you have a problem with it, you must be weird or out of date in your thinking". People have a natural desire to fit in, and using this argument is a way of dividing people on this issue and persecuting people that don't think like you. That said, this argument is especially insidious because of its implications. Please don't use this argument.
Nicol, both of your fallacies imply that you are right and normal, and anyone who doesn't agree with you is wrong and has something wrong with them. These are extremely unhealthy viewpoints, and you should examine them rigorously for your own mental health and career.
For future reference: http://en.wikipedia.org/wiki/List_of_fallacies#Formal_fallacies
Thanks!
I would like to use hand-coded assembly language vertex and fragment shaders in order to program very optimized shaders on iPhone with OpenGL ES 2.0.
Why? You don't trust the compiler to do its job? Do you really think that you know enough about the GPU in question to be able to consistently beat the compiler?
Anyway, you can't. Nor could you do it in desktop OpenGL. ARB assembly shaders aren't that much closer to the hardware than GLSL; they both go through compilation and optimization by an internal compiler.
Ultimately, you're just going to have to trust the compiler made by the people who built your GPU. Nobody else has a problem with that these days.
I understand your point very well: you want to see the generated assembly source code, and maybe modify it.
In fact, GLSL compilers do not optimize as well as HLSL ones. To convince yourself, just compare the generated assembly in ShaderAnalyzer for the same shader in GLSL and HLSL; you will immediately see that they don't apply the same optimizations at all.
Even for trivial optimizations, like factorizing if() conditions, most GLSL compilers don't do the job.
I really would like to see the generated ASM by Apple's compiler (especially for iOS platforms). If you know a way to get the assembly, I'm very interested in the process.

OpenGL ES, openFrameworks, Cinder and iOS creative development

I'm in the middle of a difficult choice.
I'd like to learn a language that can help me create applications with a strong artistic/creative/graphic component and use it for commercial projects for my customers.
My first choice was OpenGL ES; I think of it as the "standard" way to go.
But in the meanwhile, I discovered this site: http://www.creativeapplications.net/ where I found many cool apps for iOS, mostly built using openFrameworks and Cinder.
My question is: why choose these two "wrappers" instead of OpenGL? I need to understand the benefits and disadvantages.
I'm not sure that, using these frameworks, I can mix UIKit/Cocoa and graphics in as easy (and standard) a way as with OpenGL. At the moment I still prefer OpenGL because I know that this is the way suggested by Apple (I mean... proposed by Apple), and I'm sure that I can take advantage of it for my customers too, while I'm not sure that using OF and Cinder I can fully manage UIKit and Cocoa without tricks.
The benefits of using a framework are, as stated by Ruben, that you're not re-inventing the wheel.
OpenGL doesn't come with a lot of the classes you would normally need: vectors, matrices, cameras, colour, images, etc., or the methods you will need to work with them: normalise, arithmetic, cross product, and so on.
Of course you can implement all this on top of OpenGL, but if someone's done it before, why not just leverage that instead? Your choice of framework or library will depend on which implementation you prefer. OF will do things differently from Cinder, which in turn differs from other libraries.
You don't have to use everything a framework provides. If you don't like the base application (Cinder's, for example), you can create your own contexts and whatnot and just use the framework's 3D maths libraries, or its image library, or whatever other part you want. Just include the relevant headers you want.
Alternatively you can just use a 3d math library if you are so inclined and do away with frameworks all together. This gives you more control over your rendering pipeline and also potentially decreases application size.
Ultimately what you choose will depend on its features and your preference for a particular style. I would suggest going with a framework or library you are comfortable with and that has been used in production (unless you are just playing around with stuff). Documentation is also important. If the docs/resources aren't very good I would shy away from using something.
Of course, if you want to learn the ins and outs (never a bad idea), by all means write your own library.
I think the main advantage of choosing OF and Cinder is that you can focus on your creation rather than losing lots of hours dealing with the OpenGL library. Cinder even includes image downloading and memory management. However, you must be patient, because these frameworks are being ported to the iOS platform right now.
In some months or years, everybody will use these frameworks that abstract away all the stuff behind graphics programming, giving them the full potential and time to make art!
If you don't miss anything, I think you'd be OK with OpenGL alone.
Cinder offers some additional goodies, see http://libcinder.org/features/. Maybe triangulation, loading of system fonts, matrix support etc might be interesting for you in the future.
Also Cinder's Tinderbox makes creating new projects very easy.
Now both Cinder and OF fully support the iOS platform, and you can use them easily in an iOS application.
Also note that these frameworks are designed specifically with designers and creative artists/coders in mind, whereas OpenGL is a technical standard for dealing with graphics hardware.
Note: I'm the author of this framework.
I've spent some time creating Rend, an Objective-C based OpenGL ES 2.0 framework for iOS. It's lightweight and focused on pure rendering, which may be appropriate for some projects.
Also, if you're creating your own framework, you may be able to use it for inspiration and code snippets.
http://github.com/antonholmquist/rend-ios

Any good library or software for queue networks simulation?

I have been trying to get EZSIM to work, with no luck; it is software for building discrete event simulators in a graphical DOS environment. In this software, my simulator and many others (from the other people in the course I'm taking) don't work, but the teacher's simulator (and the examples in the downloaded files) does work.
So I began to distrust the software.
Do you know any software that solves the same kind of problems but actually works? It would be good if it were free, or if I could download an evaluation copy or something like that.
If you don't know of any software, do you know of any library which might work? Preferably in C#, ANSI C, Java or Delphi.
This may be more than what you're looking for, but check out NS2. It's the standard for open source network simulations, and will allow you to simulate all kinds of network layer behavior.
I've also used JUNG in the past. It's very flexible, although it also doesn't offer much out of the box.
I used Möbius in my computer systems analysis class. It is free for educational use (which sounds like what you're doing). It's a Java GUI which generates C++ code.
The R package queuecomputer is a computationally efficient method for simulating queues with arbitrary arrival and service times. There is a submitted paper on arXiv describing the algorithm used in the package. Examples can be found in the arXiv paper and the vignette. A web app based on the package is available at https://ace-ebert.shinyapps.io/queue_simulator_mmk/ .

Do Poor Code Samples Turn You Away From Libraries?

I've been evaluating a framework that on paper looks great. The problem is that the sample code is incomplete and of poor quality. The supplied reference implementations are for the most part not meant to be used (so they can be considered sample code as well) and have only succeeded in confusing me.
I know that it's common for things to look better on paper, but my experience with the sample code is turning me away from further investigation.
Do you let poor code samples change your judgment of frameworks/libraries? So far my experience has been similar to the "resume effect": if someone doesn't put the effort into spell checking their resume, they probably won't get the job...
For me, it does. I tend to want to avoid libraries where the code samples are incomplete. If the library is open source, I will overlook poor samples, since I can directly look at the code and see if the library's internals are reasonable, and I know that, if there is a problem someday, I could (if I had to) fix it.
If the library is commercial, and their samples and/or documentation is poor, I look elsewhere. I just see it as risk management - poor samples make me fear the quality of the library in general.
No matter how good something is on paper or in theory, it can still be crap when programmed.
I think this is a valid reason to turn away from and evaluate other libraries. As a potential user of a library a lack of documentation and/or bad code samples gives the impression that the library is not yet mature enough for use by third parties. In time it may well gain the missing pieces but until then I think its reasonable to look elsewhere.
I was recently evaluating the multitude of blogging applications that people have uploaded to github.com. I quickly skipped the ones with no documentation, as they obviously weren't ready for others to use. The ones that remained at the end had a good README with info on how to get the app up and running, as well as an online example of the code running.
Poor code samples combined with poor documentation will make me turn away from a library unless there is a compelling reason to use it. However, a library that has either good code samples or good documentation is usually worth using. (Assuming that the library itself otherwise meets my needs.)
If I can't find good examples (and/or documentation) illustrating how to use the library, I'm definitely less likely to use it - just as a practical matter, it'll be harder for me to figure out how. But I don't care what the code that implements the library itself looks like. I don't think I'd choose one library/framework over another just because the developers of the one have shown an ability to write cleaner code (which is what I understand the "resume effect" to mean).
Lack of documentation and examples makes me a whole lot less likely to use that particular library. It's not worth my time testing and trying to figure out how a black box works if there are alternate solutions to the problem out there.
Yes, definitely. Every library should come with a simple example program and a CLI interface (for very simple libraries with <3 methods and <10 hooks, one example should suffice).
And why does your framework "look great" if it's so hard to use that even the original coders make mistakes using it?
It certainly matters to me. Evidence of sloppy/incomplete coding and poor communication decreases my confidence that the actual implementation code is stable and robust.
Myself, yes, but there must be people out there who aren't turned off by this; otherwise plenty of open source projects would have died a long, long time ago.

Game programming - How to avoid reinventing the wheel [closed]

Summary: Can I program a "thick client" game in C without reinventing wheels, or should I just bite the bullet and use some library or SDK?
I'm a moderate C programmer and am not afraid to work with pointers, data structures, memory locations, etc., if it will give me the control I need to make a great "thick-client" game. However, I'm thinking of eschewing high-level languages & frameworks for the sake of power and control, not ease of use.
I'm interested in tinkering with a 2D fighting/platforming game as a side project sometime. I'm primarily a Linux server-side programmer with experience in Python, Ruby and PHP. I know that there are excellent frameworks in some of these languages, like PyGame. I am also aware of the success people have had with stuff like Air and .NET... but I have some concerns:
Performance: Scripting languages are notoriously slow. If I'm making a real-time game, I want it to be as snappy as possible.
Huge binaries: Using frameworks like .NET or scripting languages like Ruby often results in big CLRs or libraries that you wouldn't otherwise need. The game I want to make will be small and simple--I don't want its CLR to be bigger than the game itself!
Extra stuff: Honestly, I just don't like the idea of inheriting some big game library's baggage if I can wrap my head around my own code better.
I'm asking this question because I know I'm very susceptible to Not Invented Here Syndrome. I always want to program it myself, and I'm sure it wastes a lot of time. However, this works out for me remarkably often--for example, instead of using Rails (a very big web project framework with an ORM and GUI toolkit baked in), I used an array of smaller Ruby tools like rack and sequel that fit together beautifully.
So, I turn to you, SO experts. Am I being naive? Here's how I see it:
Use C
Cons
Will probably make me hate programming
High risk of reinventing wheels
High risk of it taking so long that I lose interest
Pros
Tried & true - most A-list games are done in C (is this still true today?)
High level of control over memory management, speed, asset management, etc., which I trust myself to learn to handle
No cruft
Use framework or SDK
Cons
Risk of oversized deliverable
Dependent on original library authors for all facets of game development--what if there isn't a feature I want? I'll have to program it myself, which isn't bad, but partially defeats the purpose of using a high-level framework in the first place
High risk of performance issues
Pros
MUCH faster development time
Might be easier to maintain
No time wasted reinventing common paradigms
What else can I add to this list? Is it a pure judgment call, or can someone seal the deal for me? Book suggestions welcome.
I believe you are working under a fallacy.
There are several frameworks out there specifically for game programming, written by people with much experience with the complications of game design; almost certainly more than you have.
In other words, you have a "High risk of performance issues" if you DON'T use a framework.
My current thinking is:
If you want to learn to program, start making the game engine from the base elements upwards (even implementing basic data structures - lists, maps, etc). I've done this once, and while it was a learning experience, I made many mistakes, and I wouldn't do this a second time around. However, for learning how to program, as well as making something cool and seeing results, I'd rate this highly.
If you want to make a proper game, use whatever libraries that you want and design all of the game infrastructure yourself. This is what I'm doing now, and I'm using all of the nice things like STL, ATL/WTL, Boost, SQLite, DirectX, etc. So far I've learnt a lot about the middle/game logic aspect of the code and design.
If you just want to make a game with artists and other people collaborating to create a finished product, use one of the existing engines (OGRE, Irrlicht, Nebula, Torque, etc) and just add in your game logic and art.
One final bit of wisdom I've learnt: don't worry about Not Invented Here syndrome. I've come to realise that other libraries (such as the STL, Boost, DirectX, etc.) have an order of magnitude (or three) more man-hours of development time in them, far more than I could ever spend on that portion of the game/engine. Therefore the only reason to implement these things yourself is if you want to learn about them.
I would recommend you try pyglet.
It has good performance, as it utilizes OpenGL
It's a compact all-in-one library
It has no extra dependencies besides Python
Do some tests and see if you can make it fast enough for you. Only if you prove to yourself that it's not should you move to a lower level. Although, I'm fairly confident that Python + pyglet can handle it... at worst you'll have to write a few C extensions.
Today, I believe you are at a point where you can safely ignore the performance issue unless you're specifically trying to do something that pushes the limits. If your game is, say, no more complicated than Quake II, then you should choose tools and libraries that let you do the most for your time.
Why did I choose Quake II? Because, running in a version compiled for .NET, it runs with a software renderer at a more than acceptable frame rate on a current machine. (If you like, compare MAME, which emulates multiple processors and graphics hardware at acceptable rates.)
You need to ask yourself if you are in this to build an engine or to build a game. If your purpose is to create a game, you should definitely look at an established gaming engine. For 2D game development, look at Torque Game Builder. It is a very powerful 2D gaming engine/SDK that will put you into production from day 1. They have plenty of tools that integrate with it, content packs, and you get the full source code if you want to make changes and/or learn how it works. It is also Mac OSX compatible and has Linux versions in the community.
If you are looking for something on the console side, they have that too.
I'm surprised nobody has mentioned XNA. It's a framework built around DirectX for doing managed DirectX programming while removing a lot of the fluff and verbosity of lower-level DirectX programming.
Performance-wise, for most 2D and 3D game tasks, especially building something like a fighting game, this platform works very well. It's not as fast as if you were doing bare-metal DirectX programming, but it gets you very close, and in a managed environment, no less.
Another cool benefit of XNA is that most of the code can be run on an Xbox 360 and can even be debugged over the network connection as the game runs on the Xbox. XNA games can now be approved by the Xbox Live team for distribution and sale on Xbox Live Arcade as well. So if you're looking to take the project to a commercial state, you might have an available means of distribution at your disposal.
Like all MS development tools, the documentation and support is first rate, and there is a large developer community with plenty of tutorials, existing projects, etc.
Do you want to be able to play your game on a console? Do you want to do it as a learning experience? Do you want the final product to be cross platform? Which libraries have you looked into so far?
For a 2d game I don't think performance will be a problem, I recommend going with something that will get you results on screen in the shortest amount of time. If you have a lot of experience doing Python then pyGame is a good choice.
If you plan on doing some 3d games in the future, I would recommend taking a look at Ogre (http://www.ogre3d.org). It's a cross platform 3d graphics engine that abstracts away the graphics APIs. However for a 2d project it's probably overkill.
The most common implementation language for A-list games today is C++, and a lot of games embed a scripting language (such as Python or Lua) for game event scripting.
The tools you'd use to write a game have a lot to do with your reasons for writing it, and with your requirements. This is no different from any other programming project, really. If it's a side project, and you're doing it on your own, then only you can assess how much time you have to spend on this and what your performance requirements are.
Generally speaking, today's PCs are fast enough to run 2D platformers written in scripting languages. Using a scripting language will allow you to prototype things faster and you'll have more time to tweak the gameplay. Again, this is no different than with any other project.
If you go with C++, and your reasons don't have to be more elaborate than "because I want to," I would suggest that you look at SDL for rendering and audio support. It will make things a little bit easier.
If you want to learn the underlying technologies (DirectX, or you want to write optimized blitters for some perverse reason) then by all means, use C++.
Having said all that, I would caution you against premature optimization. For a 2D game, you'll probably be better off going with Python and PyGame first. I'd be surprised if those tools will prove to be inadequate on modern PCs.
As to what people have said about C/C++/Python: I'm a game developer and my company encourages C. Not because C++ is bad, but because badly written C++ is poison for game development due to its difficulty to read/debug compared to C. (C++ gives benefits when used properly, but let a junior guy make some mistakes with it and your time sink is huge.)
As to the actual question:
If your purpose is to just get something working, use a library.
Otherwise, code it yourself for a very important reason: Practice
Practice in manipulating data structures. There WILL be times you need to manage your own data. Practice in debugging utility code.
Often libs do just what you want and are great, but sometimes YOUR specific use case is handled very badly by the lib, and you will gain big benefits from writing your own. This is especially true on consoles compared to PCs.
(edit:) Regarding script and garbage collection: it will kill you on a console. On a recent game I had to rewrite major portions of the garbage collection in Unreal just to fill our needs in the editor portion. Even more had to be done in the actual game (not just by me). (To be fair, though, we were pushing beyond Unreal's original specs.)
Scripting is often good, but it is not an "I win" button. In general the gains disappear if you are pushing against the limits of your platform. I would use "percent of the platform's CPU that I have to spare" as my evaluation function in deciding how appropriate script is.
One consideration in favor of C/C++/obj-C is that you can mix and match various libraries for different areas of concern. In other words, you are not stuck with the implementation of a feature in a framework.
I use this approach in my games: Chipmunk for 2D physics, Lua as an embedded scripting language, and the OpenGL ES implementation from Apple. I write the glue to tie all of these together in C. The final product is the ability to define game objects, create instances of them, and handle events as they interact with each other, in C functions exposed to Lua. This approach is used in many high-performance games to much success.
If you don't already know C++, I would definitely recommend you go forward with a scripting language. Making a game from start to finish takes a lot of motivation, and forcing yourself to learn a new language at the same time is a good way to make things go slowly enough that you lose interest (although it IS a good way to learn a new language...).
Most scripting languages will be compiled to byte code anyway, so their biggest performance hit will be the garbage collection. I'm not experienced enough to give a definite description of how big a hit garbage collection would be, but I would be inclined to think that it shouldn't be too bad in a small game.
Also, if you use an existing scripting language library to make your game, most of the performance-critical areas (like graphics) can be written in C++ anyway (hopefully by the game libraries). So 80% of the CPU time might actually be spent in C++ code anyway, despite the fact that most of your project is written in, say, Python.
I would say, ask yourself what you want more: To write a game from start to finish and learn about game development, or to learn a new language (C++). If you want to write a game, do it in a scripting language. If you want to learn a new language, do it in C++.
Yeah unless you just want to learn all of the details of the things that go into making a game, you definitely want to go with a game engine and just focus on building your game logic rather than the details of graphics, audio, resource management, etc.
Personally I like to recommend the Torque Game Builder (aka Torque 2D) from GarageGames. But you can probably find some free game engines out there that will suit your needs as well.
I'm pretty sure most modern games are done in C++, not C. (Every gaming company I ever interviewed with asked C++ questions.)
Why not use C++ and existing libraries for physics + collisions, sound, the graphics engine, etc.? You still write the game, but the mundane stuff is taken care of.
There are a lot of different solutions to the issue of abstraction, and each deals with it in different ways.
My current project uses C#, DirectX 9, HLSL and SlimDX. Each of these offers a carefully calibrated level of abstraction. HLSL allows me to actually read the shader code I'm writing and SlimDX/C# allows me to ignore pointers, circular dependencies and handling unmanaged code.
That said, none of these technologies has any impact on the ease of developing my AI, lighting or physics! I still have to break out my textbooks in a way that I wouldn't with a higher-level framework.
Even using a framework like XNA, if most video games development concepts are foreign to you there's a hell of a lot still to take in and learn. XNA will allow you to neatly sidestep gimbal lock, but woe betide those who don't understand basic shading concepts. On the other hand, something like DarkBASIC won't solve your gimbal lock problem, but shading is mostly handled for you.
It's a sufficiently big field that your first engine will never be the one you actually use. If you write it yourself, you won't write it well enough. If you use third party libraries, there's certainly aspects that will annoy you and you'll want to replace.
As an idea, it might be worth taking various libraries/frameworks (definitely make XNA one of your stops; even if you decide you don't want to use it, it's a great benchmark) and trying to build various prototypes. Perhaps a landscape (with a body of water) or a space physics demo.