Would it be difficult to have AwesomeWM use the cairo-gl backend of cairo? - cairo

Basically, the question. As far as I know, AwesomeWM does not use the cairo-gl backend when using cairo.
Would it be difficult to make AwesomeWM check something like: do we have a GPU? If yes, do we have OpenGL support? If yes, use the cairo-gl backend; if not, fall back to the software renderer. I am asking because it would be nice for things like fast (100+ fps) animations.

There are reasons why the Cairo OpenGL backend never became very popular and why the Vulkan one was abandoned. Cairo was also abandoned by Safari, Firefox and Chrome because they could not improve the speed much further.
The Cairo API uses a "context" (cr) object to track painting operations as part of transactions (which are completed with calls such as :fill(), :paint() and :stroke()).
All of those are state machines that heavily depend on the CPU to synchronize them. So even if you use painting primitives from OpenGL, the control flow remains pretty much the same. This nullifies any performance advantage: whatever efficiency you gain from processing on the GPU, you pay back with a ton of context synchronization between the CPU and the GPU.
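To make the transaction model concrete, here is a minimal sketch using pycairo for illustration (AwesomeWM drives the same C API through LGI and Lua, but the control flow is identical): every call mutates CPU-side context state, and pixels are only produced when a transaction is completed with fill/stroke/paint.

```python
# Minimal sketch of Cairo's transaction model, using pycairo for illustration.
# (AwesomeWM drives the same C API through LGI/Lua; the control flow is identical.)
import cairo

surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 200, 100)
cr = cairo.Context(surface)          # the "cr" context object: a CPU-side state machine

cr.set_source_rgb(0.2, 0.4, 0.8)     # mutates context state (current source)
cr.rectangle(10, 10, 180, 80)        # builds the current path, still CPU-side
cr.fill()                            # completes the "transaction": only now are pixels produced

cr.set_source_rgb(1, 1, 1)
cr.move_to(20, 60)
cr.show_text("hello")                # another CPU-driven transaction

surface.write_to_png("out.png")      # with cairo-gl, each completed transaction would
                                     # additionally require CPU<->GPU synchronization
```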
Something like the Skia library (and QtQuick) is better suited to unlock the GPU's power. They use a different mechanism to sync the state between the application-side scene graph and the GPU textures. This is a lot closer to a 2D game engine than to a traditional painting API. However, Skia has its fair share of problems too. The largest is the lack of official API stability (I don't know if that has changed). The other, for Awesome in particular, is the lack of proper Lua bindings that are compatible with the GLib event loop.
Last, but not least, all AwesomeWM widget boxes (wibox, wibar, titlebar, etc.) are managed by the same CPU thread. This is the real bottleneck for animations, not the painting speed. Every frame requires the Lua VM thread to perform every single Cairo transaction and repaint every affected area. Then, with the OpenGL backend, it would also have to sync each of those individually to the GPU.
So, tl;dr, it would not be faster and it would be less stable. If you need more speed, you can work on improving how LGI works. Changes like this one could make it ~5x faster. Of course, ensuring this is done properly and regresses nothing won't be trivial. But it's low-hanging fruit for increasing the FPS.

Related

Making an overlay with vulkan [duplicate]

I am a mathematician and not a programmer. I have a notion of the basics of programming and am a fairly advanced power user on both Linux and Windows.
I know some C and some Python, but nothing much.
I would like to make an overlay so that when I start a game it can get info about AMD and NVIDIA GPUs, like frame time and FPS, because I am quite certain the system current benchmarks use to compare two GPUs is flawed: small instances and scenes that bump up the FPS momentarily (but are totally irrelevant in terms of user experience) result in a higher average FPS number and mislead the market, either unintentionally or intentionally. (For example, in a game whose name I can't remember, probably COD, there was a highly tessellated entity on the map that wasn't even visible to the player, which led AMD GPUs to seemingly underperform when roaming through that area, leading to a lower average FPS count.)
I have an idea of how to calculate GPU performance in theory, but I don't know how to harvest the data from the GPU. Could you refer me to API manuals or references to help me make such an overlay possible?
I would like to study as little as possible (by that I mean I would like to learn what I absolutely have to learn in order to get the job done; I don't intend to become a coder).
I thank you in advance.
That is generally what the Vulkan Layer system is for; it allows you to intercept API commands and inject your own. But it is nontrivial to code yourself. Here are some pre-existing open-source options for you:
To get at the timing info and draw your custom overlay, you can use (and modify) a tool like OCAT. It supports Direct3D 11, Direct3D 12, and Vulkan apps.
To just get the timing (and other interesting info) as CSV you can use a command-line tool like PresentMon. Should work in D3D, and I have been using it with Vulkan apps too and it seems to accept them.
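If the goal is a sturdier metric than average FPS, the per-present timings in PresentMon's CSV are already enough to compute percentile-style numbers yourself. A rough sketch, assuming the classic MsBetweenPresents column (column names differ between PresentMon versions, so adjust as needed):

```python
# Rough sketch: compute average FPS and "1% low" FPS from a PresentMon CSV.
# Assumes the classic "MsBetweenPresents" column; newer PresentMon releases
# may name their columns differently, so adjust accordingly.
import csv
import sys

def frame_stats(csv_path):
    frame_times_ms = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                frame_times_ms.append(float(row["MsBetweenPresents"]))
            except (KeyError, ValueError):
                continue  # skip malformed rows or a missing column
    if not frame_times_ms:
        raise ValueError("no frame times found")

    avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
    # "1% low": FPS corresponding to the slowest 1% of frames, which punishes
    # momentary stutter that a plain average hides.
    worst = sorted(frame_times_ms, reverse=True)
    one_percent = worst[: max(1, len(worst) // 100)]
    low_1_fps = 1000.0 / (sum(one_percent) / len(one_percent))
    return avg_fps, low_1_fps

if __name__ == "__main__":
    avg, low = frame_stats(sys.argv[1])
    print(f"average FPS: {avg:.1f}, 1% low FPS: {low:.1f}")
```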

Writing a "Brushes" quality drawing app, need book and resource recommendations

I've spent about a week reading all the freely available information on iPhone drawing, animation, and OpenGL. Using the available iOS drawing examples like Apple's GLPaint and Quartz sample apps, I've written a few versions of a painting tool, but I've hit many limitations, which I think are due to "not knowing what I don't know." Quartz is easy to use and fast initially, but slows to a crawl after 20 or 30 paths due to having to re-render the context with every addition. OpenGL stroke rendering seems slow in general (GLPaint app) and makes the UI touches lag and feel "cheap". A search through Amazon and the forums has not revealed any great book or resource recommendations on low-level iPhone drawing technologies that could help me become technically proficient enough to write a high-performance app with a user experience and visual quality as good as "Brushes" or "Adobe Ideas 1.0". I'm not trying to get free code; I want to learn, and I'm willing to pay for learning tools!
Suggestions? Guidance?
Edit: I'm surprised at how few books are out there. I'm making progress, drawing paths that are responsive even when there are many of them, and planning an easy undo feature, but I'm still wondering how to have an erase feature and undo at the same time. Erase will require the scene to be rasterized, I suppose, and then undo would have to be done by caching screenshots instead of just keeping track of paths.
Look into CGLayers. This will allow you to cache some of your drawing and not be forced to re-render everything each time it changes, but rather only draw the changes. You'll need to do a little work if you want to add undo/redo support, but this should alleviate some of your performance issues.
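CGLayer is a Quartz-specific API, but the underlying idea — cache committed strokes in an offscreen buffer and only draw new strokes on top — is portable. Here is a rough sketch of that idea using pycairo rather than Quartz, purely for illustration (on the iPhone the equivalents would be CGLayerCreateWithContext, CGLayerGetContext and CGContextDrawLayerAtPoint); the helper names are made up:

```python
# Sketch of the "cache committed strokes offscreen" idea behind CGLayers,
# written with pycairo purely for illustration (not iOS code).
import cairo

W, H = 320, 480
cache = cairo.ImageSurface(cairo.FORMAT_ARGB32, W, H)   # plays the role of the CGLayer
cache_cr = cairo.Context(cache)

def commit_stroke(points):
    """Render a finished stroke into the cache once; it is never replayed again."""
    cache_cr.set_line_width(4)
    cache_cr.move_to(*points[0])
    for x, y in points[1:]:
        cache_cr.line_to(x, y)
    cache_cr.stroke()

def draw_frame(screen_cr, live_stroke):
    """Per frame: blit the cache, then draw only the in-progress stroke."""
    screen_cr.set_source_surface(cache, 0, 0)
    screen_cr.paint()                                    # cheap: one composite
    if len(live_stroke) > 1:
        screen_cr.move_to(*live_stroke[0])
        for x, y in live_stroke[1:]:
            screen_cr.line_to(x, y)
        screen_cr.stroke()

# Example: commit one stroke, then render a frame with a live stroke on top.
commit_stroke([(10, 10), (100, 200), (200, 50)])
screen = cairo.ImageSurface(cairo.FORMAT_ARGB32, W, H)
draw_frame(cairo.Context(screen), [(50, 400), (150, 300)])
screen.write_to_png("frame.png")
```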

OpenGL render state management

I'm currently working on a small iPhone game, and am porting the 3d engine I've started to develop for the Mac to the iPhone. This is all going very well, and all functionality of the Mac engine is now present on the iPhone. The engine was by no means finished, but now at least I have basic resource management, a scene graph and a construction to easily animate and move objects around.
A screenshot of what I have now: http://emle.nl/forumpics/site/planes_grid.png. The little plane is a test object I've made several years ago for a game I was making then. It's not related to the game I'm developing now, but the 3d engine and its facilities are, of course.
Now, I've come to the topic of materials: the description of which textures, lights, etc. belong to a renderable object. This means a lot of OpenGL client state and glEnable/glDisable calls for every object. What way would you suggest to minimise these state changes?
Currently I'm sorting by material, since objects with the same material don't need any changes at all. I've created a class called RenderState that caches the current OpenGL state and only applies the members that are different when a different material is selected. Is this a workable solution, or will it grow beyond control when the engine matures and more and more state needs to be cached?
A bit of advice. Just write the code you need for your game. Don't spend time writing a generalised rendering engine, because it's more than likely you won't need it. If you end up writing another game, then extract the useful bits out into an engine at that point. This will be way quicker.
If the number of states in OpenGL ES is as high as in the standard version, it will become difficult to manage at some point.
Also, if you really want to minimize state changes you might need some kind of state-sorting concept, so that drawables with similar states are rendered together without needing a lot of glEnable/glDisable calls between them. However, this might be sort of difficult to manage even on PC hardware (imagine state-sorting thousands of drawables), and blindly changing the state might actually be cheaper, depending on the OpenGL implementation.
For a comparison, here's the approach taken by OpenSceneGraph:
Basically, every node in the scene graph has its own stateset which stores the material properties, states, etc. The nice thing is that statesets can be shared by multiple nodes. This way, the rendering backend can just sort the drawables with respect to their stateset pointers (not the contents of the stateset!) and render nodes with the same stateset together. This offers a nice trade-off, since the backend is not bothered with managing individual OpenGL states, yet can achieve nearly minimal state changing if the scene graph is generated accordingly.
What I suggest in your case is that you do a lot of testing before settling on a solution. Whatever you do, I'm sure you will need some kind of abstraction over OpenGL states.
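As a rough illustration of the stateset-pointer sorting described above, here is a sketch in Python (the class and method names are made up; in your engine this would live in the C++ render loop, and apply() would issue the actual glEnable/glBindTexture calls):

```python
# Rough sketch of state-sorting by shared state object identity.
# Illustrative only; a real implementation would sit in the C++ render loop.

class StateSet:
    def __init__(self, name, **gl_state):
        self.name = name
        self.gl_state = gl_state
    def apply(self):
        # In the engine: issue the glEnable/glDisable/bind calls for this state.
        print(f"apply state {self.name}: {self.gl_state}")

class Drawable:
    def __init__(self, mesh, stateset):
        self.mesh = mesh
        self.stateset = stateset      # shared between many drawables
    def draw(self):
        print(f"  draw {self.mesh}")

def render(drawables):
    # Sort by the *identity* of the stateset, not by its contents.
    drawables = sorted(drawables, key=lambda d: id(d.stateset))
    current = None
    for d in drawables:
        if d.stateset is not current:  # state change only at group boundaries
            d.stateset.apply()
            current = d.stateset
        d.draw()

metal = StateSet("metal", texture="metal.png", lighting=True)
glass = StateSet("glass", texture="glass.png", blending=True)
render([Drawable("wing", metal), Drawable("canopy", glass),
        Drawable("fuselage", metal), Drawable("window", glass)])
```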

Real time system concept proof project

I'm taking an introductory course (3 months) on real-time systems design, but without any implementation.
I would like to build something that lets me understand better what I'll learn in theory, but since I have never done any real-time system I can't estimate how long any project will take. It would be a proof-of-concept project, or something like that, given my available time and knowledge.
Please, could you give me some idea? Thank you in advance.
I program in T-SQL, Delphi and C#, but I won't have any problem learning another language.
Suggest you consider exploring the Real-Time Specification for Java (RTSJ). While it is not a traditional environment for constructing real-time software, it is an up-and-coming technology with a lot of interest. Even better, you can witness some of the ongoing debate about what matters and what doesn't in real-time systems.
Sun's JavaRTS is freely available for download, and has some interesting demonstrations available to show deterministic behavior, and show off their RT garbage collector.
In terms of a specific project, I suggest you start simple:
1) Build a work-generator that you can tune to consume a given amount of CPU time;
2) Put this into a framework that can produce a distribution of work-generator tasks (as threads, or as chunks of work executed in a thread) and a mechanism for logging the work produced;
3) Produce charts of the execution time, sojourn time, deadline, and slack/overrun of these tasks versus their priority;
4) Demonstrate that tasks running in the context of real-time threads (as opposed to timesharing) behave differently.
Bonus points if you can measure the overhead in the scheduler by determining at what supplied load (total CPU time produced by your work generator tasks divided by wall-clock time) your tasks begin missing deadlines.
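A minimal sketch of the work generator (step 1) together with the slack/overrun logging from step 3, written in Python only to show the shape of the experiment; a real run would use RTSJ or C with real-time thread priorities:

```python
# Minimal sketch of a periodic work generator that logs deadline slack/overrun.
# Python is used only to show the shape of the experiment; a real RTSJ or C
# version would use real-time threads and priorities instead.
import time

def burn_cpu(ms):
    """Consume roughly `ms` milliseconds of CPU time."""
    end = time.process_time() + ms / 1000.0
    while time.process_time() < end:
        pass

def run_periodic(period_ms=50, work_ms=20, deadline_ms=40, iterations=50):
    log = []
    next_release = time.monotonic()
    for i in range(iterations):
        # Wait for the next release time (a real system would sleep precisely until it).
        while time.monotonic() < next_release:
            time.sleep(0.001)
        start = time.monotonic()
        burn_cpu(work_ms)
        response_ms = (time.monotonic() - start) * 1000.0
        slack_ms = deadline_ms - response_ms     # negative => deadline miss
        log.append((i, response_ms, slack_ms))
        next_release += period_ms / 1000.0
    misses = sum(1 for _, _, s in log if s < 0)
    print(f"{misses}/{iterations} deadlines missed")
    return log

if __name__ == "__main__":
    run_periodic()
```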
Try to think of real-time tasks that are time-critical, for instance video playback, which fails if tasks (e.g. calculating the next frame) are not finished in time.
You can also think of some industrial solutions, but they are probably more difficult to study in your local environment.
You should definitely consider building your system using a hardware development board equipped with a small processor (ARM, PIC, AVR, any one will do). This really helped remove my fear of the low-level when I started developing. You'll have to use C or C++ though.
You will then have two alternatives: either go bare-metal, or use a real-time OS.
Going bare-metal, you can learn:
How to initialize your processor from scratch and, most importantly, how to use interrupts, which are the fastest way you have to respond to an external event
How to implement lightweight threads with fast context switching, something every real-time OS implements
In order to ease this a bit, look for a dev kit which comes with lots of documentation and source code. I used Embedded Artists ARM boards and they give you a lot of material.
Going with the RT OS:
You'll fast-track your project, and will be able to learn how to fine-tune a RT OS
You may try your hand at an open-source OS, such as Linux or the BSDs, and learn a lot from the source code
Either choice is good, you will get a really cool hands-on project to show off and hopefully better understand your course material. Good luck!
As most realtime systems are still implemented in C or C++ it may be good to brush up your knowledge of these programming languages. Many realtime systems are also embedded systems, so you might want to play around with a cheap open source one like BeagleBoard (http://beagleboard.org/). This will also give you a chance to learn about cross compiling etc.

Game programming - How to avoid reinventing the wheel [closed]

Summary: Can I program a "thick client" game in C without reinventing wheels, or should I just bite the bullet and use some library or SDK?
I'm a moderate C programmer and am not afraid to work with pointers, data structures, memory locations, etc. if it will give me the control I need to make a great "thick-client" game. However, I'm thinking of eschewing high-level languages & frameworks for the sake of power and control, not ease of use.
I'm interested in tinkering with a 2D fighting/platforming game as a side project sometime. I'm primarily a Linux server-side programmer with experience in Python, Ruby and PHP. I know that there are excellent frameworks in some of these languages, like PyGame. I am also aware of the success people have had with stuff like Air and .NET... but I have some concerns:
Performance: Scripting languages are notoriously slow. If I'm making a real-time game, I want it to be as snappy as possible.
Huge binaries: Using frameworks like .NET or scripting languages like Ruby often results in big CLRs or libraries that you wouldn't otherwise need. The game I want to make will be small and simple--I don't want its CLR to be bigger than the game itself!
Extra stuff: Honestly, I just don't like the idea of inheriting some big game library's baggage if I can wrap my head around my own code better.
I'm asking this question because I know I'm very susceptible to Not Invented Here Syndrome. I always want to program it myself, and I'm sure it wastes a lot of time. However, this works out for me remarkably often--for example, instead of using Rails (a very big web project framework with an ORM and GUI toolkit baked in), I used an array of smaller Ruby tools like rack and sequel that fit together beautifully.
So, I turn to you, SO experts. Am I being naive? Here's how I see it:
Use C
Cons
Will probably make me hate programming
High risk of reinventing wheels
High risk of it taking so long that I lose interest
Pros
Tried & true - most A-list games are done in C (is this still true today?)
High level of control over memory management, speed, asset management, etc., which I trust myself to learn to handle
No cruft
Use framework or SDK
Cons
Risk of oversized deliverable
Dependent on original library authors for all facets of game development--what if there isn't a feature I want? I'll have to program it myself, which isn't bad, but partially defeats the purpose of using a high-level framework in the first place
High risk of performance issues
Pros
MUCH faster development time
Might be easier to maintain
No time wasted reinventing common paradigms
What else can I add to this list? Is it a pure judgment call, or can someone seal the deal for me? Book suggestions welcome.
I believe you are working under a fallacy.
There are several frameworks out there specifically for game programming --- written by people with much experience of the complications of game design, almost certainly more than you have.
In other words, you have a "High risk of performance issues" if you DON'T use a framework.
My current thinking is:
If you want to learn to program, start making the game engine from the base elements upwards (even implementing basic data structures - lists, maps, etc). I've done this once, and while it was a learning experience, I made many mistakes, and I wouldn't do this a second time around. However for learning how to program as well as making something cool and seeing results I'd rate this highly.
If you want to make a proper game, use whatever libraries that you want and design all of the game infrastructure yourself. This is what I'm doing now, and I'm using all of the nice things like STL, ATL/WTL, Boost, SQLite, DirectX, etc. So far I've learnt a lot about the middle/game logic aspect of the code and design.
If you just want to make a game with artists and other people collaborating to create a finished product, use one of the existing engines (OGRE, Irrlicht, Nebula, Torque, etc) and just add in your game logic and art.
One final bit of wisdom I've learnt: don't worry about Not Invented Here syndrome. I've come to realise that other libraries (such as the STL, Boost, DirectX, etc.) have an order of magnitude (or three) more man-hours of development time in them, far more than I could ever spend on that portion of the game/engine. Therefore the only reason to implement these things yourself is if you want to learn about them.
I would recommend you try pyglet.
It has good performance, as it utilizes OpenGL
It's a compact all-in-one library
It has no extra dependencies besides Python
Do some tests and see if you can make it fast enough for you. Only if you prove to yourself that it's not should you move to a lower level. Although, I'm fairly confident that Python + pyglet can handle it... at worst you'll have to write a few C extensions.
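As a taste of how little code a pyglet window and per-frame update take, here is a minimal sketch (only pyglet's documented API is used; the moving label is just a stand-in for your sprites):

```python
# Minimal pyglet sketch: a window, a moving label, and a 60 Hz update loop.
import pyglet

window = pyglet.window.Window(640, 480, caption="pyglet test")
label = pyglet.text.Label("hello", x=320, y=240, anchor_x="center")
x_speed = 120.0  # pixels per second

@window.event
def on_draw():
    window.clear()
    label.draw()

def update(dt):
    # dt is the elapsed time in seconds since the last tick.
    label.x = (label.x + x_speed * dt) % window.width

pyglet.clock.schedule_interval(update, 1 / 60.0)
pyglet.app.run()
```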
Today, I believe you are at a point where you can safely ignore the performance issue unless you're specifically trying to do something that pushes the limits. If your game is, say, no more complicated than Quake II, then you should choose tools and libraries that let you do the most for your time.
Why did I choose Quake II? Because, running in a version compiled for .NET, it manages a more than acceptable frame rate with a software renderer on a current machine. (If you like, compare MAME, which emulates multiple processors and graphics hardware at acceptable rates.)
You need to ask yourself if you are in this to build an engine or to build a game. If your purpose is to create a game, you should definitely look at an established gaming engine. For 2D game development, look at Torque Game Builder. It is a very powerful 2D gaming engine/SDK that will put you into production from day 1. They have plenty of tools that integrate with it, content packs, and you get the full source code if you want to make changes and/or learn how it works. It is also Mac OSX compatible and has Linux versions in the community.
If you are looking for something on the console side, they have that too.
I'm surprised nobody has mentioned XNA. It's a framework built around DirectX for doing managed DirectX programming while removing a lot of the fluff and verbosity of lower-level DirectX programming.
Performance-wise, for most 2D and 3D game tasks, especially building something like a fighting game, this platform works very well. It's not as fast as doing bare-metal DirectX programming, but it gets you very close, and in a managed environment, no less.
Another cool benefit of XNA is that most of the code can be run on an Xbox 360 and can even be debugged over the network connection while the game runs on the Xbox. XNA games can now be approved by the Xbox Live team for distribution and sale on Xbox Live Arcade as well. So if you're looking to take the project to a commercial state, you might have an available means of distribution at your disposal.
Like all MS development tools, the documentation and support is first rate, and there is a large developer community with plenty of tutorials, existing projects, etc.
Do you want to be able to play your game on a console? Do you want to do it as a learning experience? Do you want the final product to be cross platform? Which libraries have you looked into so far?
For a 2D game I don't think performance will be a problem; I recommend going with something that will get you results on screen in the shortest amount of time. If you have a lot of experience with Python, then PyGame is a good choice.
If you plan on doing some 3d games in the future, I would recommend taking a look at Ogre (http://www.ogre3d.org). It's a cross platform 3d graphics engine that abstracts away the graphics APIs. However for a 2d project it's probably overkill.
The most common implementation language for A-list games today is C++, and a lot of games embed a scripting language (such as Python or Lua) for game event scripting.
The tools you'd use to write a game have a lot to do with your reasons for writing it, and with your requirements. This is no different from any other programming project, really. If it's a side project, and you're doing it on your own, then only you can assess how much time you have to spend on this and what your performance requirements are.
Generally speaking, today's PCs are fast enough to run 2D platformers written in scripting languages. Using a scripting language will allow you to prototype things faster and you'll have more time to tweak the gameplay. Again, this is no different than with any other project.
If you go with C++, and your reasons don't have to be more elaborate than "because I want to," I would suggest that you look at SDL for rendering and audio support. It will make things a little bit easier.
If you want to learn the underlying technologies (DirectX, or you want to write optimized blitters for some perverse reason) then by all means, use C++.
Having said all that, I would caution you against premature optimization. For a 2D game, you'll probably be better off going with Python and PyGame first. I'd be surprised if those tools will prove to be inadequate on modern PCs.
As to what people have said about C/C++/Python: I'm a game developer and my company encourages C. Not because C++ is bad, but because badly written C++ is poison for game development due to its difficulty to read/debug compared to C. (C++ gives benefits when used properly, but let a junior guy make some mistakes with it and your time sink is huge.)
As to the actual question:
If your purpose is to just get something working, use a library.
Otherwise, code it yourself for a very important reason: Practice
Practice in manipulating data structures. There WILL be times you need to manage your own data. Practice in debugging utility code.
Often libs do just what you want and are great, but sometimes YOUR specific use case is handled very badly by the lib and you will gain big benefits from writing your own. This is especially true on consoles compared to PCs.
(Edit:) Regarding scripting and garbage collection: it will kill you on a console. On a recent game I had to rewrite major portions of the garbage collection in Unreal just to fit our needs in the editor portion. Even more had to be done in the actual game (not just by me); to be fair, though, we were pushing beyond Unreal's original specs.
Scripting is often good, but it is not an "I win" button. In general the gains disappear if you are pushing against the limits of your platform. I would use "percent of the platform's CPU that I have to spare" as my evaluation function in deciding how appropriate scripting is.
One consideration in favor of C/C++/obj-C is that you can mix and match various libraries for different areas of concern. In other words, you are not stuck with the implementation of a feature in a framework.
I use this approach in my games: chipmunk for 2D physics, Lua as an embedded scripting language, and an OpenGL ES implementation from Apple. I write the glue to tie all of these together in C. The final product is the ability to define game objects, create instances of them, and handle events as they interact with each other in C functions exposed to Lua. This approach is used in many high-performance games to much success.
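Here is the same glue-layer idea sketched in Python with pymunk (Chipmunk's Python binding), rather than the C/Lua/OpenGL ES stack described above; the game-object names are made up for illustration. The physics library owns the simulation, while a thin wrapper of your own owns the game objects and gameplay events:

```python
# Sketch of the "glue layer" idea: a physics library (pymunk, the Python binding
# for Chipmunk) owns the simulation; your own thin layer owns game objects and
# dispatches gameplay events. Names like Ball/on_hit_ground are illustrative only.
import pymunk

space = pymunk.Space()
space.gravity = (0, -900)

class Ball:
    """Game-object wrapper: glue between gameplay code and the physics body."""
    def __init__(self, x, y, radius=10):
        mass = 1.0
        moment = pymunk.moment_for_circle(mass, 0, radius)
        self.body = pymunk.Body(mass, moment)
        self.body.position = (x, y)
        self.shape = pymunk.Circle(self.body, radius)
        space.add(self.body, self.shape)

    def on_hit_ground(self):
        print("thud at", self.body.position)

ground = pymunk.Segment(space.static_body, (0, 0), (400, 0), 2)
space.add(ground)

ball = Ball(100, 200)
for step in range(120):                 # roughly 2 seconds at 60 Hz
    space.step(1 / 60.0)
    if ball.body.position.y <= ball.shape.radius + 2:
        ball.on_hit_ground()
        break
```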
If you don't already know C++, I would definitely recommend you go forward with a scripting language. Making a game from start to finish takes a lot of motivation, and forcing yourself to learn a new language at the same time is a good way to make things go slowly enough that you lose interest (although it IS a good way to learn a new language...).
Most scripting languages will be compiled to byte code anyway, so their biggest performance hit will be the garbage collection. I'm not experienced enough to give a definite description of how big a hit garbage collection would be, but I would be inclined to think that it shouldn't be too bad in a small game.
Also, if you use an existing scripting language library to make your game, most of the performance critical areas (like graphics) can be written in C++ anyway (hopefully by the game libraries). So 80% of the CPU might actually be spent in C++ code anyway, despite the fact that most of your project is written in, say Python.
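If you want to put a rough number on that garbage-collection hit, a worst-case full collection pause is easy to measure directly (a toy measurement with made-up object counts, not a tuned game loop):

```python
# Toy measurement of a worst-case full garbage-collection pause in Python.
# Not representative of a tuned game loop; it just puts a rough number on the fear.
import gc
import time

# Allocate a pile of small, cycle-prone objects, like a naive game world might.
world = [{"id": i, "neighbors": []} for i in range(200_000)]
for i, obj in enumerate(world):
    obj["neighbors"].append(world[i - 1])   # create reference cycles on purpose

t0 = time.perf_counter()
gc.collect()                                # force a full collection
pause_ms = (time.perf_counter() - t0) * 1000.0
print(f"full GC pause over {len(world)} objects: {pause_ms:.2f} ms")
```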
I would say, ask yourself what you want more: To write a game from start to finish and learn about game development, or to learn a new language (C++). If you want to write a game, do it in a scripting language. If you want to learn a new language, do it in C++.
Yeah unless you just want to learn all of the details of the things that go into making a game, you definitely want to go with a game engine and just focus on building your game logic rather than the details of graphics, audio, resource management, etc.
Personally I like to recommend the Torque Game Builder (aka Torque 2D) from GarageGames. But you can probably find some free game engines out there that will suit your needs as well.
I'm pretty sure most modern games are done in C++, not C. (Every gaming company I ever interviewed with asked C++ questions.)
Why not use C++ and existing libraries for physics + collisions, sound, graphics engine etc. You still write the game, but the mundane stuff is taken care of.
There are a lot of different solutions to the issue of abstraction, and each deals with it in different ways.
My current project uses C#, DirectX 9, HLSL and SlimDX. Each of these offers a carefully calibrated level of abstraction. HLSL allows me to actually read the shader code I'm writing and SlimDX/C# allows me to ignore pointers, circular dependencies and handling unmanaged code.
That said, none of these technologies has any impact on the ease of developing my AI, lighting or physics! I still have to break out my textbooks in a way that I wouldn't with a higher-level framework.
Even using a framework like XNA, if most video games development concepts are foreign to you there's a hell of a lot still to take in and learn. XNA will allow you to neatly sidestep gimbal lock, but woe betide those who don't understand basic shading concepts. On the other hand, something like DarkBASIC won't solve your gimbal lock problem, but shading is mostly handled for you.
It's a sufficiently big field that your first engine will never be the one you actually use. If you write it yourself, you won't write it well enough. If you use third party libraries, there's certainly aspects that will annoy you and you'll want to replace.
As an idea, it might be worth taking various libraries/frameworks (definitely make XNA one of your stops; even if you decide you don't want to use it, it's a great benchmark) and trying to build various prototypes. Perhaps a landscape (with a body of water) or a space physics demo.