Using templates to rewrite a polymorphic class into one single class (compile-time polymorphism)

In my current code I use runtime polymorphism to create different subtypes of "light" from a LightBase class. The light types, however, are already known at compile time (the preprocessor picks the right variant). So I figured runtime polymorphism is really not the right tool here: it is slower than necessary (vtable lookups for virtual getter functions) and the work could already be done at compile time. I just don't know how... could it be done with templates? I don't have much experience in template programming, so I don't know what is possible.
Essentially I want to instantiate a subclass of either type NormalLight or SpecialLight which have the same functions as LightBase but operate on a different set of constants:
class Color
{
public:
    Color(std::string color_name) : color_name_(color_name) { }
private:
    std::string color_name_;
};
class LightBase {
public:
    static std::unique_ptr<LightBase>& GetInstance() { return instance_; }
protected:
    const int resolution = 0;
    std::array<Color, 0> colors_ { };
    // ..
private:
    static std::unique_ptr<LightBase> instance_;
};
class NormalLight : public LightBase
{
protected:
    const int resolution = 9;
    std::array<Color, 3> colors_ { Color("blue"), Color("red"), Color("purple") };
    // ..
};
class SpecialLight : public LightBase
{
protected:
    const int resolution = 13;
    std::array<Color, 3> colors_ { Color("yellow"), Color("magenta"), Color("orange") };
    // ..
};
#if defined CONFIG_LIGHT_TYPE_NORMAL
std::unique_ptr<LightBase> LightBase::instance_ = std::unique_ptr<NormalLight>(new NormalLight());
#elif defined CONFIG_LIGHT_TYPE_SPECIAL
std::unique_ptr<LightBase> LightBase::instance_ = std::unique_ptr<SpecialLight>(new SpecialLight());
#endif
In a function I could conditionally branch on a template parameter (I guess), but this is a class definition. Also, the thing should compile in C++11. Any ideas?
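One direction that seems to fit these constraints (a minimal C++11 sketch, not a definitive answer; the trait structs, accessor names, and the ConfiguredLight typedef are illustrative inventions): move the per-type constants into small trait structs and collapse the hierarchy into a single class template, so the preprocessor only picks a typedef and no virtual dispatch remains.

#include <array>
#include <string>

class Color {
public:
    explicit Color(std::string color_name) : color_name_(color_name) { }
private:
    std::string color_name_;
};

// One trait struct per light type; all per-type constants live here.
struct NormalLightTraits {
    static const int resolution = 9;
    static std::array<Color, 3> MakeColors() {
        return {{ Color("blue"), Color("red"), Color("purple") }};
    }
};

struct SpecialLightTraits {
    static const int resolution = 13;
    static std::array<Color, 3> MakeColors() {
        return {{ Color("yellow"), Color("magenta"), Color("orange") }};
    }
};

// A single class template replaces the whole hierarchy; every call is
// resolved at compile time, so there is no vtable lookup.
template <typename Traits>
class Light {
public:
    int resolution() const { return Traits::resolution; }
    const std::array<Color, 3>& colors() const { return colors_; }
    static Light& GetInstance() {
        static Light instance;  // C++11 guarantees thread-safe initialization
        return instance;
    }
private:
    Light() : colors_(Traits::MakeColors()) { }
    std::array<Color, 3> colors_;
};

// The preprocessor picks the concrete type exactly once.
#if defined CONFIG_LIGHT_TYPE_NORMAL
typedef Light<NormalLightTraits> ConfiguredLight;
#else
typedef Light<SpecialLightTraits> ConfiguredLight;
#endif

Call sites would then use ConfiguredLight::GetInstance().resolution() and so on; the trade-off is that any code which previously held a LightBase* has to use the typedef (or become a template itself).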

Related

Haxe: Function argument that matches an interface

I have a method (in Haxe) that needs to accept any object that has a 'width' and 'height' property. I'm assuming that I can do that somehow with an Interface, but for my purposes the object that is passed in does not need to implement the interface, it just needs to be any object with a width and height property.
This does not work, as the object you pass in needs to implement the interface:
interface IWidthAndHeight {
    public var width:Float;
    public var height:Float;
}
class Main {
    var o:IWidthAndHeight;
    public function setObject(o:IWidthAndHeight) {
        this.o = o;
    }
}
Currently I am using Dynamic and manually checking that the properties exist, but is there a smarter way? Current method:
class Main {
    var o:Dynamic;
    public function setObject(o:Dynamic) {
        if (propertiesExist(o, ['width', 'height'])) {
            this.o = o;
        }
    }
    // Note: propertiesExist is my own method. Just assume it works :)
}
Any help appreciated. Thanks!
You can use anonymous structures here:
typedef WidthAndHeight = {
    width:Float,
    height:Float
}
class Main {
    var o:WidthAndHeight;
    public function setObject(o:WidthAndHeight) {
        this.o = o;
    }
}
This is just an alternative, using a typedef, to writing var o:{width:Float, height:Float}; directly.
Any class or structure passed as the setObject argument will be checked for these fields at compile time; this is called structural subtyping.

Object Within Object - Arduino

I'm designing a library for IIR filters. Long story short: a filter with an order of more than two can be broken down into multiple 'sections' of order two, i.e. a 4th-order filter = 2 x second-order filters.
I would like to create a class for a single-stage, second-order filter (already written). Then I would like to write another class that is able to break the full filter into multiple second-order filters.
… still following?!
Here is my code: I get an error saying "singleFilter not declared in this scope"
IIR.h:
class master
{
public:
    master();
    int unsigned getHelloWorld();
};
class slave
{
public:
    slave();
    int unsigned helloWorld();
};
IIR.cpp:
#include "IIR.h"
master::master()
{
    slave singleFilter;
}
int unsigned master::getHelloWorld()
{
    return singleFilter.helloWorld();
}
slave::slave()
{
}
int unsigned slave::helloWorld()
{
    return 10;
}
Arduino.ino:
#include <IIR.h>
master filter;
void setup()
{
    filter.getHelloWorld();
}
void loop()
{
}
Since the variable singleFilter is declared in the master constructor, it is confined to function scope and cannot be referenced outside of the constructor.
Move singleFilter to the master class declaration to give it class scope and then it can be referenced in all master class methods.
class master
{
    slave singleFilter;
public:
    master();
    int unsigned getHelloWorld();
};
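As a side note on the cascade idea in the question (a 4th-order filter as two second-order sections): once singleFilter has class scope, master can just as well hold several stages and chain them. Here is a minimal sketch; the stage count, the process() signature, and its pass-through body are illustrative assumptions, not part of the original library.

const int NUM_SECTIONS = 2;  // e.g. a 4th-order filter = 2 x second-order sections

class slave
{
public:
    slave() {}
    float process(float in) { return in; }  // placeholder for the actual biquad math
};

class master
{
    slave sections[NUM_SECTIONS];  // class scope: visible to every method
public:
    master() {}
    float process(float in)
    {
        // Feed each second-order section's output into the next one.
        for (int i = 0; i < NUM_SECTIONS; ++i) {
            in = sections[i].process(in);
        }
        return in;
    }
};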

Swift: how to understand dynamic method dispatching in init method?

I find that dynamic method dispatch in an init method in Swift differs from the equivalent in C++. Can anyone explain why?
This is the demo code and its output:
In Swift:
class Object {
    init() {
        a()
    }
    func a() {
        print("Object")
    }
}
class SubObject: Object {
    override init() {
    }
    override func a() {
        print("SubObject")
    }
}
let a = SubObject()
// output: SubObject
In C++:
#include <iostream>

class Object {
public:
    Object() {
        a();
    }
    virtual void a() {
        std::cout << "Object" << std::endl;
    }
};
class SubObject: public Object {
public:
    SubObject() {
    }
    virtual void a() {
        std::cout << "SubObject" << std::endl;
    }
};
int main(int argc, const char * argv[]) {
    SubObject s;
    return 0;
}
// output: Object
As you can see, the Swift and C++ code above are nearly the same, but their output is quite different: Swift finds the overriding method and calls the derived version, while C++ still calls the base class's.
Here at LearnCpp, it says:
Do not call virtual functions from constructors or destructors. Here's another gotcha that often catches unsuspecting new programmers: you should not call virtual functions from constructors or destructors. Why?
Remember that when a Derived class is created, the Base portion is constructed first. If you were to call a virtual function from the Base constructor, and the Derived portion of the class hadn't even been created yet, it would be unable to call the Derived version of the function because there's no Derived object for the Derived function to work on.
In C++, it will call the Base version instead.
A similar issue exists for destructors. If you call a virtual function in a Base class destructor, it will always resolve to the Base class version of the function, because the Derived portion of the class will already have been destroyed.
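Swift gets away with this because of its two-phase initialization: by the time a superclass initializer body runs, all of the subclass's stored properties have already been initialized, so dispatching to the override is safe, and that is what Swift does. If you need comparable behavior in C++, one common workaround (a sketch, not the only option) is a factory function that calls the virtual method only after construction has finished:

#include <iostream>
#include <memory>

class Object {
public:
    virtual ~Object() {}
    virtual void a() { std::cout << "Object" << std::endl; }
    // Construct first, call the virtual hook second: by the time a() runs,
    // the dynamic type is final, so the derived override is chosen.
    template <typename T>
    static std::unique_ptr<T> Create() {
        std::unique_ptr<T> obj(new T());
        obj->a();
        return obj;
    }
};

class SubObject : public Object {
public:
    virtual void a() { std::cout << "SubObject" << std::endl; }
};

int main() {
    std::unique_ptr<SubObject> s = Object::Create<SubObject>();  // prints "SubObject"
    return 0;
}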

C++/CLI class wrapper for C library - callbacks

I am wrapping a C library using C++/CLI. The C library was designed to be used from an unmanaged C++ class. This means that the library functions accept a C++ object pointer and then provide that pointer back in callbacks. This enables the callback code to redirect requests to an appropriate event function in the calling C++ object.
The actual functions are quite involved, so I have simplified the problem space to just a few basic items:
// C library function signature
void CLibFunc(CLIBCALLBACK *callback, void *caller);
// C callback signature
// Second parameter is meant to point to the calling C++ object
typedef int (__stdcall CLIBCALLBACK) (int param1, void *caller);
// C callback implementation
int CallBackImpl(int param1, void* caller)
{
// Need to call the ManagedCaller's EventFunction from here
// ???
}
// C++/CLI caller class
public ref class ManagedCaller
{
public:
void CallerFunction(void)
{
// Call the C library function
// Need to pass some kind of this class pointer that refers to this object
CLibFunc(CallBackImpl, ????);
}
void EventFunction(int param1)
{
}
}
Now the C library functions need to be called from a managed C++ class. Under C++/CLI the garbage collector moves objects around in memory, so passing a simple fixed pointer to the class no longer works. I can solve the problem by pinning the object, but that is not recommended because it leads to memory fragmentation. It seems that another option would be to use auto_gcroot pointers, but I am fairly new to managed C++ and I am not sure how to make this work.
Does anyone know how to make this work? What kind of pointer should be passed to the C function? How should the callback implementation redirect to the calling object's event function?
This just happens to be similar to something I'm in the middle of working on right now.
Here is a blog post on providing native callbacks using C++ classes: http://blogs.microsoft.co.il/blogs/alon/archive/2007/05/29/Native-Callback.aspx
I'm not familiar with calling C++ member functions from C, but I have used an interface (abstract base) class to bridge callbacks from one C++ class to another (similar to the article). Here is a basic example of what I am providing a bridge for:
// Interface (abstract base) class providing the callback
class IProvider {
public:
virtual ~IProvider() {}
virtual void Callback() = 0;
};
// User class of the callback
class CUser {
IProvider * m_pProvider;
public:
CUser(IProvider * pProvider) {
m_pProvider = pProvider;
}
void DoSomething() {
m_pProvider->Callback();
}
};
// Implementation of the interface class
class CHelloWorldProvider : public IProvider {
void Callback() {
printf("Hello World!");
}
};
// Usage of the callback provider in a pure native setting
void PureNativeUsage() {
CHelloWorldProvider oProvider;
CUser oUser(&oProvider);
oUser.DoSomething();
}
Now in order to make this available for managed implementations of the provider, we have to create a series of classes that provide the bridge.
// Where gcroot is defined
#include <vcclr.h>
// Managed provider interface class
public interface class IManagedProvider {
void Callback();
};
// Native bridge class that can be passed to the user
class CProviderBridge : public IProvider {
// Give the managed class full access
friend ref class ManagedProviderBase;
// Store a reference to the managed object for callback redirects
gcroot<IManagedProvider ^> m_rManaged;
public:
void Callback(){
m_rManaged->Callback();
}
};
// Managed provider base class, this provides a managed base class for extending
public ref class ManagedProviderBase abstract : public IManagedProvider {
// Pointer to the native bridge object
CProviderBridge * m_pNative;
protected:
ManagedProviderBase() {
// Create the native bridge object and set the managed reference
m_pNative = new CProviderBridge();
m_pNative->m_rManaged = this;
}
public:
~ManagedProviderBase() {
delete m_pNative;
}
// Returns a pointer to the native provider object
IProvider * GetProvider() {
return m_pNative;
}
// Makes the deriving class implement the function
virtual void Callback() = 0;
};
// Pure managed provider implementation (this could also be declared in another library and/or in C#/VB.net)
public ref class ManagedHelloWorldProvider : public ManagedProviderBase {
public:
virtual void Callback() override {
Console::Write("Hello World");
}
};
// Usage of the managed provider from the native user
void MixedUsage() {
ManagedHelloWorldProvider ^ rManagedProvider = gcnew ManagedHelloWorldProvider;
CUser oUser(rManagedProvider->GetProvider());
oUser.DoSomething();
}
Edit: added code adapted to your CLibFunc, in addition to the managed interface class example I use.
Here is a modified version of my example that can be used with your CLibFunc above, assuming my guess at how the C function performs the callback is accurate.
Also, this might be able to be slimmed down a bit, depending on how involved your callback classes are and how much freedom for extension you need.
// Where gcroot is defined
#include <vcclr.h>
// C callback signature
// Second parameter is meant to point to the calling C++ object
typedef int (__stdcall CLIBCALLBACK) (int param1, void *caller);
// C library function
void CLibFunc(CLIBCALLBACK *callback, void *caller) {
// Do some work
(*callback)(1234, caller);
// Do more work
}
// Managed caller interface class
public interface class IManagedCaller {
void EventFunction(int param1);
};
// C++ native bridge struct
struct CCallerBridge {
// Give the managed class full access
friend ref class ManagedCaller;
// Store a reference to the managed object for callback redirects
gcroot<IManagedCaller ^> m_rManaged;
public:
// Cast the caller to the native bridge and call managed event function
// Note: This must be __stdcall to prevent function call stack corruption
static int __stdcall CallBackImpl(int param1, void * caller) {
CCallerBridge * pCaller = (CCallerBridge *) caller;
pCaller->m_rManaged->EventFunction(param1);
return 0;
}
};
// C++/CLI caller class
public ref class ManagedCaller : public IManagedCaller {
// Pointer to the native bridge object
CCallerBridge * m_pNative;
public:
ManagedCaller() {
// Create the native bridge object and set the managed reference
m_pNative = new CCallerBridge();
m_pNative->m_rManaged = this;
}
~ManagedCaller() {
delete m_pNative;
}
// Calls the C library function
void CallerFunction() {
CLibFunc(CCallerBridge::CallBackImpl, m_pNative);
}
// Managed callback function
virtual void EventFunction(int param1) {
Console::WriteLine(param1);
}
};
// Usage
int main(array<System::String ^> ^args) {
ManagedCaller ^ oCaller = gcnew ManagedCaller();
oCaller->CallerFunction();
return 0;
}

Should I use an interface or factory (and interface) for a cross-platform implementation?

Example A:
// pseudo code
interface IFoo {
void bar();
}
class FooPlatformA : IFoo {
void bar() { /* ... */ }
}
class FooPlatformB : IFoo {
void bar() { /* ... */ }
}
class Foo : IFoo {
IFoo m_foo;
public Foo() {
if (detectPlatformA()) {
m_foo = new FooPlatformA();
} else {
m_foo = new FooPlatformB();
}
}
// wrapper function - downside is we'd have to create one
// of these for each function, which doesn't seem right.
void bar() {
m_foo.bar();
}
}
Main() {
Foo foo = new Foo();
foo.bar();
}
Example B:
// pseudo code
interface IFoo {
void bar();
}
class FooPlatformA : IFoo {
void bar() { /* ... */ }
}
class FooPlatformB : IFoo {
void bar() { /* ... */ }
}
class FooFactory {
IFoo newFoo() {
if (detectPlatformA()) {
return new FooPlatformA();
} else {
return new FooPlatformB();
}
}
}
Main() {
FooFactory factory = new FooFactory();
IFoo foo = factory.newFoo();
foo.bar();
}
Which is the better option, example A, B, neither, or "it depends"?
I would say that your explicit factory option (option B) is generally better.
In your first example your Foo class is effectively doing two jobs: it's a factory and it's a proxy. Two jobs, one class, makes me uneasy.
Your second option puts a little more responsibility on the client: they need to know to use the factory, but this is such a widely used idiom that I think it's not hard to understand.
The problem with A is that you have to implement every method of IFoo in Foo. That is not a big deal if there are only a couple, but it is a pain if there are dozens of them. If you are working with a language that supports factory methods, such as Curl, then you could put a factory method in IFoo:
{define-class abstract IFoo
{method abstract {bar}:void}
{factory {default}:{this-class}
{if platformA? then
{return {FooPlatformA}}
else
{return {FooPlatformB}}
}
}
}
{define-class FooPlatformA {inherits IFoo}
{method {bar}:void}
}
...
def foo = {IFoo}
{foo.bar}
If you ask me, B is way better, since Foo itself does not need to do any switching on the platform. Why does that matter? Well, you probably want to test all components separately: Foo with a 'test' IFoo, FooPlatformA on platform A, and FooPlatformB on platform B. If you stick the choice inside Foo, you need to test Foo on both A and B, not only the different IFoos. It makes the components more coupled for no apparent reason.
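To make the testability point concrete, here is a small C++ sketch (FakeFoo and Client are illustrative names, not from the question): when the platform choice lives outside the client, a test can inject a fake IFoo and exercise the client on any machine.

struct IFoo {
    virtual ~IFoo() {}
    virtual void bar() = 0;
};

// Test double: records the call instead of touching any platform API.
struct FakeFoo : IFoo {
    bool barCalled = false;
    void bar() { barCalled = true; }
};

// Client code depends only on the interface; a factory (or a test)
// decides which implementation gets passed in.
class Client {
    IFoo& foo_;
public:
    explicit Client(IFoo& foo) : foo_(foo) {}
    void run() { foo_.bar(); }
};

int main() {
    FakeFoo fake;
    Client client(fake);
    client.run();
    return fake.barCalled ? 0 : 1;  // the "test" passes if bar() was reached
}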
The factory is a cleaner solution, as you do not have to implement each member of the interface in the wrapper class Foo : IFoo. Imagine: each time you modify the IFoo interface, you would need to update the wrapper as well. When programming, depending on your goals, try to consider maintainability as much as possible.
Are all 'platforms' available, or only one of them? Is the only difference between the platforms the logic? Thinking from a game developer perspective, I would use #defines to implement this.
class Platform : IPlatform
{
    void Update()
    {
#if PLATFORM_A
        /* ... logic for platform A */
#elif PLATFORM_B
        /* ... logic for platform B */
#endif
    }
}
HTH,
Interfaces are used when it is possible that multiple implementations of a single functional set may exist. This sounds as though it applies to your particular scenario.
In terms of your examples, I would definitely roll with B; it is easier to maintain. A embeds too much common logic [i.e. platform detection] within individual classes [and/or methods]. If you are going to build your own factory class, try to generalize it [through a generic Resolve<IType>() method or something], as opposed to a method/class per interface.
For instance,
// I called it a "container" because it "contains" implementations
// or instantiation methods for requested types - but it *is* a
// factory.
public class Container
{
// "resolves" correct implementation for a requested type.
public IType Resolve<IType> ()
{
IType typed = default (IType);
if (isPlatformA)
{
// switch or function map on IType for correct
// platform A implementation
}
else if (isPlatformB)
{
// switch or function map on IType for correct
// platform B implementation
}
else
{
// throw NotSupportedException
}
return typed;
}
}
However, rather than implementing your own factory pattern, you may wish to investigate existing implementations, such as MS's Unity 2.0 or Castle Windsor's container. These are easy to configure and consume.
Ideally,
// use an interface to isolate *your* code from actual
// implementation, which could change depending on your needs,
// for instance if you "roll your own" or switch between Unity,
// Castle Windsor, or some other vendor
public interface IContainer
{
IType Resolve<IType> ();
}
// custom "roll your own" container, similar to above,
public class Container : IContainer { }
// delegates to an instance of a Unity container,
public class UnityContainer : IContainer { }
// delegates to an instance of a CastleWindsorContainer,
public class CastleWindsorContainer : IContainer { }
Oh, and I suppose I ought to shout out to Ninject and StructureMap too. I am just not as familiar with these as I am with Unity or Castle Windsor.
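For comparison with the C#-flavored snippets above, here is a rough C++11 sketch of the same generic-resolver idea; the PLATFORM_A macro and the interface names are assumptions for illustration only.

#include <memory>

struct IFoo {
    virtual ~IFoo() {}
    virtual void bar() = 0;
};

struct FooPlatformA : IFoo { void bar() { /* ... */ } };
struct FooPlatformB : IFoo { void bar() { /* ... */ } };

// One primary template, plus one explicit specialization per resolvable interface.
template <typename IType>
std::unique_ptr<IType> Resolve();

template <>
std::unique_ptr<IFoo> Resolve<IFoo>() {
#if defined PLATFORM_A
    return std::unique_ptr<IFoo>(new FooPlatformA());
#else
    return std::unique_ptr<IFoo>(new FooPlatformB());
#endif
}

int main() {
    std::unique_ptr<IFoo> foo = Resolve<IFoo>();  // the caller never names the platform
    foo->bar();
    return 0;
}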