How can I reuse DirectShow components in GStreamer (Windows)?

I am developing a new media playback application for digital cinema.
While checking the multimedia framework options, I am pretty impressed with GStreamer and would like to use it.
BUT we have already developed some DirectShow filters, which we do not intend to throw away or refactor for now. These DirectShow filters include some developed in-house (with source code) and some purchased (without source code).
Question:
How can I reuse these components if I switch from DirectShow to GStreamer?
Ideas and pointers will be much appreciated.

You can develop your own plugin for GStreamer which passes control to your custom filters. See the GStreamer Plugin Writer's Guide.

IMO, and I stand open to correction, that doesn't make much sense. A DirectShow filter is designed to fit into the DirectShow framework (its interfaces are built around it), while GStreamer is a separate multimedia framework with its own set of interfaces and requirements. Even if you could wrap the filters in a custom GStreamer plug-in, you would need to reimplement everything the DirectShow framework provides you with, which sounds very complicated and is likely to be more work than simply refactoring your DirectShow filters in the first place. The other option, creating a DirectShow graph inside a plug-in, doesn't sound like a good idea either.
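For what it's worth, here is a minimal sketch of what the least invasive bridge could look like, assuming GStreamer 1.x on Windows: keep the existing DirectShow graph running as-is and push its decoded frames into a GStreamer pipeline through appsrc. This is an assumption about how such an integration might be wired, not code from either answerer or from GStreamer's docs; push_frame() is a hypothetical hook you would call from a DirectShow sample-grabber callback, and the caps (BGRA, 1920x1080, 24 fps) are placeholders for whatever your filters actually output.

```cpp
// Sketch only: feed frames from an existing DirectShow graph into GStreamer
// via appsrc. Build against gstreamer-1.0 and gstreamer-app-1.0.
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

// Hypothetical hook: call this from your DirectShow sample-grabber callback
// for every decoded frame the old filters produce.
static void push_frame(GstAppSrc *appsrc, const guint8 *data, gsize size,
                       GstClockTime pts)
{
    GstBuffer *buf = gst_buffer_new_allocate(nullptr, size, nullptr);
    gst_buffer_fill(buf, 0, data, size);
    GST_BUFFER_PTS(buf) = pts;
    gst_app_src_push_buffer(appsrc, buf);  // takes ownership of buf
}

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    // The caps must match what the DirectShow side actually delivers.
    GError *err = nullptr;
    GstElement *pipeline = gst_parse_launch(
        "appsrc name=src is-live=true format=time "
        "caps=video/x-raw,format=BGRA,width=1920,height=1080,framerate=24/1 "
        "! videoconvert ! autovideosink",
        &err);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", err->message);
        return 1;
    }

    GstAppSrc *src =
        GST_APP_SRC(gst_bin_get_by_name(GST_BIN(pipeline), "src"));
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // ... start the DirectShow graph here and call push_frame() per frame ...

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(src);
    gst_object_unref(pipeline);
    return 0;
}
```

Whether something like this is less work than rewriting the filters as native GStreamer elements really depends on how much of the DirectShow graph (clocking, seeking, buffering) you end up having to re-expose on the GStreamer side.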

Related

Does Web Audio have any plans for limiters, imagers, saturators, or multiband compressors?

I have checked the interfaces that Web Audio offers. Pizzicato.js offers a great library for these effects, but it is a pity that some of the best and most essential effects are missing, like a limiter, multi-band compressor, parametric equalizer, saturator, and stereo imager. I was just wondering if there are any plans for them, and where I can check whether they are willing to make these in the future. I just don't know where I could ask.
Thanks
WebAudio is a collection of fairly elemental base processors. There are really no higher-order effects, because instead it provides the foundational elements with which to build them.
For example, there's a dynamics compressor, but there's no vari-mu emulation, or FET circuit, or even your run-of-the-mill digital compressor. But using the pieces that the API does have, you can build out or model a compressor that behaves however you want. Just think it through and figure out how the signal needs to be processed, and you'll find pretty much every component you need to achieve it. If not, the AudioWorklet (the successor to the deprecated ScriptProcessorNode) can fill in the blanks.
That being said, you would build a limiter using the DynamicsCompressor, and use BiquadFilters and DynamicsCompressors to build a multiband compressor. You'd use the WaveShaper to build things like tape saturators, overdrive, and bit crushers. And you can create an imager or stereo-widener effect using things like the PannerNode and one of the AudioParam automation methods (setValueAtTime() is probably the simplest).
WebAudio isn't really plug & play, but it's what you'd use to build something that is. If you'd rather skip the tedium of building your own DSP, I can't really blame you; it's not easy. But there are plenty of sadists out there who already did, and many of those libraries are made by very imaginative and talented engineers. A couple of libraries worth checking out are Tuna.js, a very user-friendly, straightforward effects library, and Tone.js for something much more fleshed out and complete.
According to the docs several already seem to be implemented:
Compressor: https://developer.mozilla.org/en-US/docs/Web/API/DynamicsCompressorNode
EQ: https://developer.mozilla.org/en-US/docs/Web/API/BiquadFilterNode
A saturator can probably be implemented using: https://developer.mozilla.org/en-US/docs/Web/API/ConvolverNode

Choosing a development framework for a game

I used to be an AS3 game programmer. When I began developing a game, I would use a coding framework (let's call them that) like PureMVC/Mate/Robotlegs, plus a graphics framework like Starling/Away3D/Feathers UI. Now I am a (novice) C++ game programmer, and I want to make a game using cocos2d-x. But cocos2d-x is just a graphics framework, and I also want a coding framework like PureMVC. I know PureMVC has a C++ MultiCore version, but I found it very hard to learn because there are no docs and no examples on the internet; I won't use PureMVC-C++ until I find a good example or some documentation.
I would like to know whether other people developing games with cocos2d-x use any other framework besides it. If yes, what's the mainstream framework for this situation? If no, then I am very sad.
Maybe this answer is late, but I'd like to suggest a possible scenario: if you're learning to be a C++ programmer, then you'll surely find it an easy path to move to the Unreal Engine to create your games.
If you do take the chance to use a PureMVC implementation (Standard or MultiCore), make sure to subdivide your game into 5 tiers (a rough sketch of this separation follows at the end of this answer):
ViewTech (Unreal, engine logic)
View (PureMVC Mediators, visual logic)
Controller (PureMVC Commands, game logic)
Model (PureMVC Proxies, entity logic)
DBTech (LightSpeed is my suggestion here, persistence logic)
PureMVC is feature-frozen, so you can pick up any of the skeletal C++ examples from its main site and adapt them to suit your needs. So even if there aren't many working examples out there, you'll still be able to build a prototype in less than 2 days.
This solution doesn't use cocos2d-x, but I think you'll have far more expressive power with these guidelines.
Hope this helps. Bye!
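To make the tier split above a bit more concrete, here is a minimal plain-C++ sketch of the idea. It deliberately does not use the actual PureMVC C++ API; the Proxy/Command/Mediator names and the IHealthBarWidget interface are only illustrative, with the engine-specific code kept behind the ViewTech boundary.

```cpp
// Sketch only: tier separation in plain C++, not the PureMVC API.
#include <iostream>
#include <memory>
#include <utility>

// Model tier: a proxy owning game data; no engine or UI types in here.
struct PlayerProxy {
    int health = 100;
    void applyDamage(int amount) { health -= amount; }
};

// Controller tier: a command encapsulating one piece of game logic.
struct DamagePlayerCommand {
    void execute(PlayerProxy& player, int amount) { player.applyDamage(amount); }
};

// ViewTech boundary: implemented by engine-side code (Unreal, cocos2d-x, ...).
struct IHealthBarWidget {
    virtual void setFraction(float f) = 0;
    virtual ~IHealthBarWidget() = default;
};

// View tier: a mediator that only talks to the engine through the interface.
struct HealthBarMediator {
    explicit HealthBarMediator(std::shared_ptr<IHealthBarWidget> w)
        : widget(std::move(w)) {}
    void onHealthChanged(const PlayerProxy& player) {
        widget->setFraction(player.health / 100.0f);
    }
    std::shared_ptr<IHealthBarWidget> widget;
};

// Console-backed stand-in for the real engine widget, to keep the sketch runnable.
struct ConsoleHealthBar : IHealthBarWidget {
    void setFraction(float f) override { std::cout << "health: " << f * 100 << "%\n"; }
};

int main() {
    PlayerProxy player;
    HealthBarMediator mediator(std::make_shared<ConsoleHealthBar>());
    DamagePlayerCommand cmd;
    cmd.execute(player, 30);
    mediator.onHealthChanged(player);  // prints "health: 70%"
}
```

The point is only that game logic and data never touch engine types directly, so the ViewTech tier stays swappable.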
You won't need any extra frameworks when using cocos2d-x.
Cocos2d-x isn't just a graphics library - it's a whole graphics, input and audio framework. The framework itself promotes a certain type of architecture, so a coding framework like the ones you mentioned would probably not fit too well.
I suggest you have a look at the official samples (github) and use them as guidelines.
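As a rough illustration of the architecture cocos2d-x itself pushes you towards (assuming cocos2d-x 3.x; the class and asset names here are placeholders, not taken from the official samples): the Director runs a Scene, game objects are Nodes inside it, and per-frame logic goes into scheduled update callbacks, which leaves little room for a PureMVC-style layer on top.

```cpp
// Sketch only, assuming cocos2d-x 3.x. "player.png" is a placeholder asset.
#include "cocos2d.h"
USING_NS_CC;

class GameScene : public Scene {
public:
    CREATE_FUNC(GameScene);

    bool init() override {
        if (!Scene::init()) return false;

        auto visible = Director::getInstance()->getVisibleSize();
        _player = Sprite::create("player.png");
        _player->setPosition(Vec2(visible.width / 2, visible.height / 2));
        addChild(_player);

        scheduleUpdate();  // drives the per-frame game logic below
        return true;
    }

    void update(float dt) override {
        // Game logic lives directly in the scene/node hierarchy.
        _player->setPositionX(_player->getPositionX() + 50.0f * dt);
    }

private:
    Sprite* _player = nullptr;
};

// Launched from AppDelegate::applicationDidFinishLaunching():
//     Director::getInstance()->runWithScene(GameScene::create());
```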
If you're using JavaScript to build your game, you might try the PureMVC JS port: http://js.puremvc.org
Essentially, PureMVC just wants to help you keep your model, view, and controller concerns separated, and it does so just the same in JS as it did in the AS3 world.

OpenGL ES, openFrameworks, Cinder and iOS creative development

I'm in the middle of a difficult choice.
I'd like to learn a language (or framework) that can help me create applications with a strong artistic/creative/graphic component, and use it for commercial projects for my customers.
My first choice was OpenGL ES; I think of it as the "standard" way to go.
But in the meantime I discovered this site: http://www.creativeapplications.net/ where I found many cool apps for iOS, mostly built using openFrameworks and Cinder.
My question is: why choose these two "wrappers" instead of OpenGL? I need to understand the benefits and disadvantages.
I'm not sure that using these frameworks I can mix UIKit/Cocoa and graphics in an easy (and standard) way, as I can with OpenGL. At the moment I still prefer OpenGL because I know it's the way suggested (I mean, proposed) by Apple, and I'm sure I can take advantage of it for my customers too, while I'm not sure that with OF or Cinder I can fully manage UIKit and Cocoa without tricks.
The benefits of using a framework are, as stated by Ruben, that you're not re-inventing the wheel.
OpenGL doesn't come with a lot of the classes you would normally need: vectors, matrices, cameras, colour, images, etc., nor the methods you need to work with them: normalisation, arithmetic, cross products, etc.
Of course you can implement all of this on top of OpenGL, but if someone's done it before, why not just leverage that instead? Your choice of framework or library will depend on which implementation you prefer: OF does things differently from Cinder, which is different again from other libraries.
You don't have to use everything a framework provides. If you don't like the base application (in Cinder, for example), you can create your own contexts and whatnot and just use the framework's 3D maths library, or its image library, or whatever other part you want. Just include the relevant headers.
Alternatively, you can just use a 3D maths library, if you are so inclined, and do away with frameworks altogether. This gives you more control over your rendering pipeline and also potentially decreases application size.
Ultimately, what you choose will depend on its features and your preference for a particular style. I would suggest going with a framework or library you are comfortable with and that has been used in production (unless you are just playing around with stuff). Documentation is also important; if the docs/resources aren't very good, I would shy away from using it.
Of course, if you want to learn the ins and outs (never a bad idea), by all means write your own library.
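To make the "don't reinvent vectors, matrices and cameras" point concrete, here is a small sketch using GLM, a header-only math library. GLM isn't mentioned above; it is just one common stand-in for the math classes that openFrameworks or Cinder would otherwise hand you, and that raw OpenGL leaves you to write yourself.

```cpp
// Sketch only: the kind of math support a framework or library saves you from
// writing by hand (GLM used here as an example).
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    // Vector operations you would otherwise implement yourself.
    glm::vec3 forward = glm::normalize(glm::vec3(0.0f, 0.0f, -1.0f));
    glm::vec3 right   = glm::cross(forward, glm::vec3(0.0f, 1.0f, 0.0f));

    // A "camera": view + projection matrices ready to upload as uniforms.
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 2.0f, 5.0f),   // eye
                                 glm::vec3(0.0f, 0.0f, 0.0f),   // target
                                 glm::vec3(0.0f, 1.0f, 0.0f));  // up
    glm::mat4 proj = glm::perspective(glm::radians(60.0f),      // vertical FOV
                                      16.0f / 9.0f,             // aspect ratio
                                      0.1f, 100.0f);            // near/far planes
    glm::mat4 viewProj = proj * view;

    std::printf("right = (%.2f, %.2f, %.2f)\n", right.x, right.y, right.z);
    std::printf("viewProj[0][0] = %.3f\n", viewProj[0][0]);
}
```

Whatever you pick, the matrices end up as plain float data you upload to your shaders, so this choice stays independent of the rest of your OpenGL code.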
I think the main advantage of choosing OF or Cinder is that you can focus on your creation rather than losing lots of hours dealing with the OpenGL library directly. Cinder even includes image downloading and memory handling. However, you must be patient, because these frameworks are still being ported to the iOS platform right now.
In some months or years, everybody will use these frameworks that abstract away all the stuff behind graphics programming, giving them the full potential and the time to make art!
If you don't miss anything, I think you'd be OK with OpenGL alone.
Cinder offers some additional goodies; see http://libcinder.org/features/. Maybe triangulation, loading of system fonts, matrix support, etc. might be interesting for you in the future.
Also, Cinder's TinderBox makes creating new projects very easy.
Now both Cinder and OF fully support the iOS platform and you can use them easily in an iOS application.
Also note that these frameworks are designed especially with designers and creative artists/coders in mind, whereas OpenGL is a technical standard for dealing with graphics hardware.
Note: I'm the author of this framework.
I've spent some time creating Rend, an Objective-C based OpenGL ES 2.0 framework for iOS. It's lightweight and focused on pure rendering, which may be appropriate for some projects.
Also, if you're creating your own framework, you may be able to use it for inspiration and code snippets.
http://github.com/antonholmquist/rend-ios

Building an app to cater for WP7, iPhone & Android

I am about to start building an app that will be used across all platforms. I will be using MonoTouch and MonoDroid so I can keep things in .NET.
I'm a little lazy, so I want to be able to reuse as much code as possible.
Let's say I want to create an application that stores contact information, e.g. name & phone number.
My application needs to be able to retrieve data from a web service and also store data locally.
The MVVM pattern looks like the way to go, but I'm not sure my approach below is 100% correct.
Is this correct?
A project that contains my models
A project that contains my views, local storage methods and also the view models which I bind my views to. In this case there would be 3 different projects, one for each of the 3 OSs.
A data access layer project that is used for binding to services and local data storage
Any suggestions would be great.
Thanks for your time
Not specifically answering your question, but here are some lazy pointers...
you can definitely reuse a lot of code across all 3 platforms (plus MonoWebOS?!)
reusing the code is pretty easy, but you'll need to maintain separate project files for every library on each platform (this can be a chore)
MVVM certainly works for WP7. It's not quite as well catered for in MonoTouch and MonoDroid
some of the main areas you'll need to code separately for each device are:
UI abstractions - each platform has their own idea of "tabs", "lists", "toasts", etc
network operations - the System.Net capabilities are slightly different on each
file IO
multitasking capabilities
device interaction (e.g. location, making calls etc)
interface abstraction and IoC (Ninject?) could help with all of these
The same unit tests should be able to run on all 3 platforms?
Update: I can't believe I just stumbled across my own answer... :) In addition to this answer, you might want to look at MonoCross and MvvmCross, and no doubt plenty of other hybrid platforms on the way:
https://github.com/slodge/MvvmCross
http://monocross.net (MVC rather than MVVM)
Jonas Follesoe's cross-platform development talk has to be the most comprehensive resource out there at the moment. He talks about how best to share code and resources and abstract out much of the UI and UX differences, shows viable reusable usage of MVVM across platforms, and demonstrates nice techniques for putting together an almost automated build. (Yes, that includes a way for you to compile your MonoTouch stuff in Visual Studio.)
Best of all, he has the source code available for the finished product and for a number of the major components, each placed in its own workshop project, plus a 50+ page PDF detailing the steps: FlightsNorway on GitHub.
IMO the only thing missing is how best to handle local data storage across all platforms, in which case I would direct you to Vici CoolStorage, an ORM that can work with WP7, MonoTouch and (while not officially supported) MonoDroid.
*Disclaimer*: the site documentation isn't the most up to date, but the source code is available. (Because documentation is Kryptonite to many a programmer.)
I think the easiest way to write the code once and have it work on all three platforms will probably be a web-based application. Check out Untappd for example.
You can start by looking at Robert Kozak's MonoTouch MVVM framework. It's just a start though.
MonoTouch MVVM

Core Animation XML or JSON framework

Does anyone know if there is a framework for dynamically loading Core Animation sequences from some kind of description file, like XML or JSON, or, even better, if there is some kind of Core Animation studio? I would need some way to allow designers to work on animations without having to talk to programmers for every single change...
In a project I worked on, we did this exact thing (at least the description part). The animation data is passed down in JSON, then parsed and interpreted. It maps to a lot of the major animation capabilities provided by Core Animation, mostly position and frame animations.
Unfortunately what we developed is proprietary and it is highly doubtful the company would be willing to release it as open source.
In the end, my answer to your question is that there don't appear to be any frameworks that currently support this; however, implementing it yourself wouldn't be terribly difficult. Creating a tool your designers can use to generate the animation JSON would be the next logical step. If the tool were not WYSIWYG but rather just a bit of a pseudo design tool, it probably wouldn't be too hard to create either.
Good luck and best regards.
It sounds like you're looking for Quartz Composer, except that QC depends heavily on hardware acceleration and isn't available on iOS. Perhaps that will change in the future.