I am new to the BlackBerry 10 platform, and I am developing an application that already exists in Java. In the Java application I used many interfaces. I don't know how to use these interfaces in BlackBerry 10. Please give me some samples of using interfaces in BlackBerry 10.
You can find a lot of discussion of this topic in previous StackOverflow questions like this one.
As one of the responders there says, a C++ class with only pure virtual methods is the same thing as a Java interface. C++ allows multiple inheritance, which has its own pitfalls if used incorrectly. Inheriting from one or more classes with only pure virtual methods is the equivalent of implementing one or more interfaces.
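For example (the class names here are made up for illustration), a Java interface maps directly onto a C++ class with only pure virtual methods, and "implementing" it is plain inheritance:

```cpp
#include <iostream>
#include <string>

// The C++ counterpart of a Java "interface Greeter { String greet(String name); }":
// a class with only pure virtual methods (plus a virtual destructor).
class Greeter {
public:
    virtual ~Greeter() = default;
    virtual std::string greet(const std::string& name) = 0;   // pure virtual
};

// "Implementing the interface" is just inheriting and overriding.
class EnglishGreeter : public Greeter {
public:
    std::string greet(const std::string& name) override {
        return "Hello, " + name;
    }
};

int main() {
    EnglishGreeter greeter;
    Greeter& via_interface = greeter;            // use it through the "interface"
    std::cout << via_interface.greet("BB10") << "\n";
}
```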
It sounds like you are not familiar with C++, so you will need to build up your skills; C++ is not the same language as Java. Especially under Cascades or Qt you need to be aware of who is responsible for freeing memory (deleting objects) and when. Java will forgive many sins; C++ will not forgive any.
I have integrated Lua with my Objective-C code (an iPhone game). The setup was pretty easy, but now I have a little problem with the bridging. I have googled around, and it seems there isn't anything that would work without modifications. I have checked the luaobjc bridge (it seems pretty old and discontinued), I have heard about LuaCocoa but it seems not to work on the iPhone, and Wax is too thick.
My needs are pretty spare: I just need to be able to call Objective-C methods from Lua, and I don't mind having to do extra work to make it work (I don't need a totally automatic bridging system).
So, I have decided to build a little bridge myself based on this page: http://anti-alias.me/?p=36. It has key information about how to accomplish what I need, but the tutorial is not complete and I have some doubts about how to deal with method overloading when called from Lua, etc.
Does anybody know if there is any working bridge between Objective-C and Lua on the iPhone, or how hard it would be to complete the bridge the above site offers?
Any information will be welcomed.
Don't reinvent the wheel!
First, you are correct that luaobjc and some other variants are outdated. A good overview can be found on the LuaCocoa page. LuaCocoa is fine but apparently doesn't support iPhone development, so the only other choice is Wax. Both LuaCocoa and Wax are runtime bridges, which means that you can (in theory) access every Objective-C class and method in Lua at the expense of runtime performance.
In my experience, for games the runtime performance overhead is significant enough that it doesn't warrant the use of any runtime binding library. Both libraries also defy the purpose of favoring a scripting language over a lower-level language: they don't provide a DSL solution, which means you're still going to write what is essentially Objective-C code, just with a slightly different syntax, no runtime debugging support, and no code-editing support in Xcode. In other words, runtime Lua binding is a questionable solution at best and has lots of cons going against it. Runtime Lua bindings are particularly unsuited for fast-paced action games aiming at a constantly high framerate.
What you want is a static binding. Static bindings at a minimum require you to declare what kind of methods will be available in Lua code. Some binding libraries scan your header files, others require you to provide a special declaration file similar to a header file. Most binding libraries can use both approaches. The benefit is optimal runtime performance, and being able to actually design what classes, methods and variables Lua scripts have access to.
There are really only three candidates for statically binding Lua code to an iPhone app. To be fair, there are a lot more, but most have one or more crucial flaws, are not stable, serve only special purposes, or simply don't work for iPhone apps. The candidates are:
tolua and tolua++
luabind
SWIG
One big disadvantage shared by all Lua static binding libraries: none of them can bind directly to Objective-C code. All require an additional C or C++ layer that ultimately interfaces with your Objective-C code. This has to do with how Objective-C works as a language and how small a role it has played (so far) when it comes to embedding Lua in Objective-C apps.
I recently evaluated all three binding libraries and came to enjoy SWIG. It is very well documented but has a bit of a learning curve. I believe that learning curve is warranted, because SWIG can be used to combine nearly any programming and scripting language, so knowing how to use it can pay off in future projects as well. Plus, once you understand its interface definition files it turns out to be very easy to use (especially when compared to luabind) and considerably more flexible than tolua.
OK, a bit late to the party, but in case others also come to this post late, here's another approach to add to the choices available: hand-code your Lua APIs.
I did a lecture on this topic where I live-coded some basic Lua bindings in an hour. It's not hard. From the lecture I made a set of video tutorials that show how to get started.
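To give a feel for what hand-coding looks like, here is a minimal sketch against the standard Lua C API (the exposed function and what it does are made up for illustration); in an iPhone project this would typically live in an Objective-C++ file so it can call straight into your engine classes:

```cpp
// Minimal hand-coded Lua binding sketch (Lua 5.x C API), built as C++/Objective-C++.
#include <cstdio>

extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}

// A C function exposed to Lua. In a real game this would call into your
// Objective-C engine code instead of printing.
static int l_move_player(lua_State* L) {
    double x = luaL_checknumber(L, 1);   // first argument from Lua
    double y = luaL_checknumber(L, 2);   // second argument from Lua
    std::printf("move_player(%g, %g)\n", x, y);
    lua_pushboolean(L, 1);               // one return value for Lua
    return 1;                            // number of return values
}

int main() {
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);

    lua_register(L, "move_player", l_move_player);   // expose the function to scripts

    // A script (inline here, normally loaded from a .lua file) calls straight in.
    luaL_dostring(L, "move_player(10, 20)");

    lua_close(L);
    return 0;
}
```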
The approach of using a bindings-generation tool like SWIG is a good one if you already have exactly the APIs you need to call written in Objective-C and it makes sense to bring all those same APIs over into Lua.
The pros of the hand-coding approach:
your project just compiles with one standard Xcode target
your project is all C & Obj-C
the Lua is just data shipped along with your images
no fiddling with "do I check generated code in to Git?"
you create Lua functions for just the things you want
you can easily have hosted scripts that live inside an object
the API is under your control and is well known
you don't expose engine APIs to the level-building team/tools
The last point is just that if you have detail functions that only make sense at the engine level, and you don't want to see those when coding the gameplay, you'll need to tell SWIG not to bind them.
Steffen's answer is perfect, and this approach is just another option that may suit some folks better depending on the project.
I am trying to evaluate some technologies for implementing communication between some Ada modules and some C++/OpenGL modules. There is an Ada application (on Windows XP) which communicates with a C++ application using COM, but I intend to switch from COM to a new technology. Some suggestions came up, such as direct sockets, DSA, PolyORB, CORBA and DDS/OpenSplice.
DSA appears to be implemented only in Ada (not sure)
PolyORB had its last release in 2006, according to http://polyorb.ow2.org/
CORBA: someone argued that its complexity is hard to justify for simple applications
DDS/OpenSplice appears to be implemented only in C/C++, so an Ada binding would have to be written. It also does not look very simple to implement.
Personally I like COM, but given the migration I'd rather take the sockets option for its simplicity, and the interface architecture could be implemented very easily.
So, what do you think? Could you please comment on these technologies, or even suggest others?
Thanks very much.
A big factor in your choice is the size and complexity of the system you're reengineering. Is it a broadly distributed system with lots of complex messages? Is it a relatively small system with a handful of mundane message exchanges?
For small systems I used to just roll my own socket-based comm modules. Now, though, I lean more towards ZeroMQ (brokerless) or STOMP (text-based). And there's some Ada support for these: zeromq-Ada and TOMI_4_Ada (which supports both).
While these handle the distribution mechanics, you would still need to handle the serialization of the messages into transportable form.
CORBA/PolyORB and DDS solutions are rather heavyweight, but are complete solutions. If you don't fear IDL and managing brokers, they can do well for large-scale distributed systems. Yeah, there may need to be some Ada bindings built, but if you can get C headers or a C API to bind to, it's typically not too bad if you focus on just binding the functions and data structures you require. Rather than creating a comprehensive binding, liberally employ opaque and void pointers (void_ptr, opaque_structure_def_ptr) for structs and parameters whose internal contents you don't care about.
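To make that concrete, a common trick when the C++ side is your own code is to put a small flat C facade in front of the C++ objects, so the Ada side only ever imports a handful of functions and treats the handle as opaque. A rough sketch, with made-up names:

```cpp
// renderer_c_api.cpp -- a thin C-linkage facade over a C++ class, so an Ada
// binding only needs to import three flat functions and an opaque handle.
#include <string>

class Renderer {                          // existing C++/OpenGL-side class (illustrative)
public:
    void draw(const std::string& shape) { /* ... OpenGL calls using shape ... */ }
};

extern "C" {

typedef void* renderer_handle;            // opaque on the Ada side

renderer_handle renderer_create(void) { return new Renderer(); }

void renderer_draw(renderer_handle h, const char* shape) {
    static_cast<Renderer*>(h)->draw(shape);
}

void renderer_destroy(renderer_handle h) { delete static_cast<Renderer*>(h); }

}  // extern "C"
```

On the Ada side those three functions can then typically be imported with pragma Import (C, ...), keeping the handle as a System.Address or an access to an incomplete type.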
"we intend to switch the COM to a new (supported) technology, since COM is no longer supported by Microsoft"
Whoever told you COM is no longer supported is totally clueless.
While COM has undergone many name changes (OLE, COM, OLE Automation, DCOM, COM+, ActiveX, WinRT) and extensions over the past decades, it is the single most important technology for MS platforms: past, present and future. The .NET runtime uses COM extensively. Much of the Win32 API is written in COM, and the portions that weren't will be in Win8, since WinRT components are COM objects.
Also take a look at AMQP (with RabbitMQ as the server); there seems to be an Ada library available for it: http://www.gti-ia.upv.es/sma/tools/AdaBinding/index.php.
If you could find a binding for Ada, Apache Thrift might also be a lightweight option. Maybe you could even write your own binding; it should not be more difficult than rolling something of your own over sockets.
If you do go the sockets route, then I would suggest ZeroMQ as "supersockets".
One more option for your list should be to use Ada's distributed programming support, and write C/C++ wrappers to interface your C++ program into it.
I don't know that it's the best option for your needs, but if your Ada compiler supports Annex E, it should be on the list.
Since this post was written, AdaCore has published PolyORB on GitHub with regular updates :)
Possible Duplicates:
What is a framework? What does it do? Why do we need a framework
What is the difference between a class library and a framework
Although I referred to various sources, I still can't understand the proper definition. What is meant by "application framework"?
Here's a simpler answer:
Application frameworks make writing applications easier.
Creating applications is hard. Applications have to handle input and output, which they get through the operating system. Modern applications are usually GUI-based, and a GUI app is orders of magnitude more complex than a non-GUI app.
It's that simple. The framework takes all the complexities of interfacing with the operating system and simplifies them for you. It handles all the nitty-gritty details for you. Obviously certain frameworks do a better job at it than others.
There is one drawback to using an application framework that rarely seems to be discussed (presumably because we are all smiling about the amount of work we didn't have to do). In order to provide a simplified view of the operating environment, a framework has to box you into a certain 'style'. If your app is sufficiently different from the usual form of app, you are likely to end up fighting the framework, as it will make doing what you want very difficult. This is partly because you now have to do all the things the framework was hiding from you, and partly because the framework is probably a closed system.
Frameworks are a special case of software libraries in that they are reusable abstractions of code wrapped in a well-defined application programming interface (API), yet they contain some key distinguishing features that separate them from normal libraries.
An application framework is a software framework used by software developers to implement the standard structure of an application for a specific development environment.
Wikipedia answers, as you might expect, that an application framework is a framework for developing applications.
An application typically provides a user interface. "Application framework" can be used loosely to refer to user-interface frameworks that provide little more than a collection of low-level user-interface controls, such as MFC, Swing and Qt.
However, it is useful to distinguish these from more powerful frameworks like the Eclipse Rich-Client Platform and the Netbeans Platform, which provide a higher-level framework -- built atop those low-level toolkits -- on which to develop applications.
I personally use "application platform" only for these latter platforms, and refer to the low-level APIs as "user-interface toolkits."
A repeating theme in my development work has been the use of or creation of an in-house plug-in architecture. I've seen it approached many ways - configuration files (XML, .conf, and so on), inheritance frameworks, database information, libraries, and others. In my experience:
A database isn't a great place to store your configuration information, especially co-mingled with data
Attempting this with an inheritance hierarchy requires knowledge about the plug-ins to be coded in, meaning the plug-in architecture isn't all that dynamic
Configuration files work well for providing simple information, but can't handle more complex behaviors
Libraries seem to work well, but the one-way dependencies have to be carefully created.
As I seek to learn from the various architectures I've worked with, I'm also looking to the community for suggestions. How have you implemented a SOLID plug-in architecture? What was your worst failure (or the worst failure you've seen)? What would you do if you were going to implement a new plug-in architecture? What SDK or open source project that you've worked with has the best example of a good architecture?
A few examples I've been finding on my own:
Perl's Module::Pluggable and IOC for dependency injection in Perl
The various Spring frameworks (Java, .NET, Python) for dependency injection.
An SO question with a list for Java (including Service Provider Interfaces)
An SO question for C++ pointing to a Dr. Dobbs article
An SO question regarding a specific plugin idea for ASP.NET MVC
These examples seem to play to various language strengths. Is a good plugin architecture necessarily tied to the language? Is it best to use tools to create a plugin architecture, or to do it on one's own following models?
This is not an answer as much as a bunch of potentially useful remarks/examples.
One effective way to make your application extensible is to expose its internals as a scripting language and write all the top-level stuff in that language. This makes it quite modifiable and practically future-proof (if your primitives are well chosen and implemented). A success story of this kind of thing is Emacs. I prefer this to the Eclipse-style plugin system because if I want to extend functionality, I don't have to learn the API and write/compile a separate plugin. I can write a 3-line snippet in the current buffer itself, evaluate it and use it. Very smooth learning curve and very pleasing results.
One application which I've extended a little is Trac. It has a component architecture which in this situation means that tasks are delegated to modules that advertise extension points. You can then implement other components which would fit into these points and change the flow. It's a little like Kalkie's suggestion above.
Another one that's good is py.test. It follows the "best API is no API" philosophy and relies purely on hooks being called at every level. You can override these hooks in files/functions named according to a convention and alter the behaviour. You can see the list of plugins on the site to see how quickly/easily they can be implemented.
A few general points.
Try to keep your non-extensible/non-user-modifiable core as small as possible. Delegate everything you can to a higher layer so that extensibility increases. That way there is less to correct in the core if bad choices turn out to have been made.
Related to the above point is that you shouldn't make too many decisions about the direction of your project at the outset. Implement the smallest needed subset and then start writing plugins.
If you are embedding a scripting language, make sure it's a full one in which you can write general programs and not a toy language just for your application.
Reduce boilerplate as much as you can. Don't bother with subclassing, complex APIs, plugin registration and stuff like that. Try to keep it simple so that it's easy, and not just possible, to extend. This will let your plugin API be used more and will encourage end users, not just plugin developers, to write plugins. py.test does this well. Eclipse, as far as I know, does not.
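To make that last point about boilerplate concrete, here is a minimal sketch (C++, with hypothetical names) of the kind of low-ceremony hook registration that keeps writing a plugin down to a couple of lines, with no subclassing or config files:

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical hook registry: a hook is just a named list of callbacks.
class HookRegistry {
public:
    using Hook = std::function<void(const std::string&)>;

    void add(const std::string& hook_name, Hook fn) {
        hooks_[hook_name].push_back(std::move(fn));
    }

    void run(const std::string& hook_name, const std::string& arg) {
        for (auto& fn : hooks_[hook_name]) fn(arg);
    }

private:
    std::map<std::string, std::vector<Hook>> hooks_;
};

int main() {
    HookRegistry registry;

    // "Plugin" code: one call to hook into the application, no subclassing,
    // no config file, no registration ceremony.
    registry.add("on_file_open", [](const std::string& path) {
        std::cout << "plugin saw file: " << path << "\n";
    });

    // Application code: fire the hook wherever extension makes sense.
    registry.run("on_file_open", "notes.txt");
}
```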
In my experience I've found there are really two types of plug-in Architectures.
One follows the Eclipse model which is meant to allow for freedom and is open-ended.
The other usually requires plugins to follow a narrow API because the plugin will fill a specific function.
To state this in a different way, one allows plugins to access your application while the other allows your application to access plugins.
The distinction is subtle, and sometimes there is no distinction... you want both for your application.
I do not have a ton of experience with the Eclipse model of opening up your app to plugins (the article in Kalkie's post is great). I've read a bit on the way Eclipse does things, but nothing more than that.
Yegge's properties blog talks a bit about how the use of the properties pattern allows for plugins and extensibility.
Most of the work I've done has used a plugin architecture to allow my app to access plugins, things like time/display/map data, etc.
Years ago I would create factories, plugin managers and config files to manage all of it and let me determine which plugin to use at runtime.
Now I usually just have a DI framework do most of that work.
I still have to write adapters to use third party libraries, but they usually aren't that bad.
One of the best plug-in architectures that I have seen is implemented in Eclipse. Instead of having an application with a plug-in model, everything is a plug-in. The base application itself is the plug-in framework.
http://www.eclipse.org/articles/Article-Plug-in-architecture/plugin_architecture.html
I'll describe a fairly simple technique that I have used in the past. This approach uses C# reflection to help in the plugin loading process. The technique can be modified so that it is applicable to C++, but you lose the convenience of being able to use reflection.
An IPlugin interface is used to identify classes that implement plugins. Methods are added to the interface to allow the application to communicate with the plugin, for example an Init method that the application will use to instruct the plugin to initialize.
To find plugins the application scans a plugin folder for .Net assemblies. Each assembly is loaded. Reflection is used to scan for classes that implement IPlugin. An instance of each plugin class is created.
(Alternatively, an XML file might list the assemblies and classes to load. This might help performance, but I never found performance to be an issue.)
The Init method is called for each plugin object. It is passed a reference to an object that implements the application interface: IApplication (or something named more specifically for your app, e.g. ITextEditorApplication).
IApplication contains methods that allow the plugin to communicate with the application. For instance, if you are writing a text editor this interface would have an OpenDocuments property that allows plugins to enumerate the collection of currently open documents.
This plugin system can be extended to scripting languages, e.g. Lua, by creating a derived plugin class, e.g. LuaPlugin, that forwards IPlugin functions and the application interface to a Lua script.
This technique allows you to iteratively implement your IPlugin, IApplication and other application-specific interfaces during development. When the application is complete and nicely refactored you can document your exposed interfaces and you should have a nice system for which users can write their own plugins.
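Since the answer above notes that the technique can be carried over to C++ at the cost of losing reflection, here is a rough sketch of that variant (file paths and names are illustrative): each plugin shared library exports a C-linkage factory function, and the host looks it up with dlopen/dlsym instead of scanning assemblies.

```cpp
// --- plugin_api.h (shared between the host and every plugin; names illustrative) ---
#include <cstdio>
#include <dlfcn.h>   // POSIX dynamic loading; link with -ldl on Linux

class IApplication {                 // what plugins may call back into
public:
    virtual ~IApplication() = default;
    virtual void Log(const char* message) = 0;
};

class IPlugin {                      // what the host calls on each plugin
public:
    virtual ~IPlugin() = default;
    virtual void Init(IApplication& app) = 0;
};

// Each plugin shared library defines, with C linkage:
//     extern "C" IPlugin* CreatePlugin() { return new MyPlugin(); }
// The host looks this symbol up by name, which replaces the C# reflection scan.
using CreatePluginFn = IPlugin* (*)();

// --- host side: a minimal loading loop (error handling trimmed) ---
class Application : public IApplication {
public:
    void Log(const char* message) override { std::printf("%s\n", message); }
};

int main() {
    Application app;

    // A real host would scan a plugins/ directory; a single hard-coded path here.
    void* handle = dlopen("./plugins/example_plugin.so", RTLD_NOW);
    if (!handle) return 1;

    auto create = reinterpret_cast<CreatePluginFn>(dlsym(handle, "CreatePlugin"));
    if (!create) return 1;

    IPlugin* plugin = create();
    plugin->Init(app);               // same Init handshake as in the C# version
    // ... later: delete plugin; dlclose(handle);
}
```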
I once worked on a project that had to be so flexible in the way each customer could set up the system that the only good design we found was to ship the customer a C# compiler!
If the spec is filled with words like:
Flexible
Plug-In
Customisable
Ask lots of questions about how you will support the system (and how support will be charged for, as each customer will think their case is the normal case and should not need any plug-ins), because in my experience supporting customers (or front-line support people) who write plug-ins is a lot harder than building the architecture.
Usually I use MEF. The Managed Extensibility Framework (or MEF for short) simplifies the creation of extensible applications. MEF offers discovery and composition capabilities that you can leverage to load application extensions.
If you are interested, read more...
In my experience, the two best ways to create a flexible plugin architecture are scripting languages and libraries. These two concepts are in my mind orthogonal; the two can be mixed in any proportion, rather like functional and object-oriented programming, but find their greatest strengths when balanced. A library is typically responsible for fulfilling a specific interface with dynamic functionality, whereas scripts tend to emphasise functionality with a dynamic interface.
I have found that an architecture based on scripts managing libraries seems to work the best. The scripting language allows high-level manipulation of lower-level libraries, and the libraries are thus freed from any specific interface, leaving all of the application-level interaction in the more flexible hands of the scripting system.
For this to work, the scripting system must have a fairly robust API, with hooks to the application data, logic, and GUI, as well as the base functionality of importing and executing code from libraries. Further, scripts are usually required to be safe in the sense that the application can gracefully recover from a poorly-written script. Using a scripting system as a layer of indirection means that the application can more easily detach itself in case of Something Bad™.
The means of packaging plugins depends largely on personal preference, but you can never go wrong with a compressed archive with a simple interface, say PluginName.ext in the root directory.
I think you need to first answer the question: "What components are expected to be plugins?"
You want to keep this number to an absolute minimum or the number of combinations which you must test explodes. Try to separate your core product (which should not have too much flexibility) from plugin functionality.
I've found that the IoC (Inversion of Control) principle (see the Spring Framework) works well for providing a flexible base, to which you can add specialization to make plugin development simpler.
You can scan the container for the "interface as a plugin type advertisement" mechanism.
You can use the container to inject common dependencies which plugins may require (i.e. ResourceLoaderAware or MessageSourceAware).
The Plug-in Pattern is a software pattern for extending the behaviour of a class through a clean interface. Often the behaviour of a class is extended by inheritance, where the derived class overrides some of the virtual methods of the base class. A problem with this solution is that it conflicts with implementation hiding. It also leads to situations where derived classes become gathering places for unrelated behaviour extensions. Scripting can also be used to implement this pattern, as mentioned above: expose the internals as a scripting language and write all the top-level stuff in that language, which makes it quite modifiable and practically future-proof. Another variant is scripts managing libraries, where the scripting language allows high-level manipulation of lower-level libraries (also as mentioned above).
I have a set of functionality (classes) that I would like to share with an application I'm building for the iPhone and for the Blackberry (Java). Does anyone have any best practices on doing this?
This is not going to be possible as far as I understand your question - the binary formats for the iPhone and Java are not compatible, not even for a native library on a BlackBerry device.
This is not like building for OS X, where you can use Java; unfortunately the iPhone doesn't support Java.
The best idea is probably to build your library in Objective-C and then port it to Java, which is an easier transition than going the other way. If you program in Objective-C and make sure your code has no memory leaks, then the changes are not so complex.
If you keep the structure of your classes the same then you should find maintenance much simpler - fix a bug in the Java and you should find it easy to check for the same bug in the ObjC methods etc.
Hope this helps - sorry that it is not all good news.
As Grouchal mentioned - you are not going to be able to share any physical components of your application between the two platforms. However you should be able to share the logical design of your application if you carefully separate it into highly decoupled layers. This is still a big win because the logical application design probably accounts for a large part of your development effort.
You could aim to wrap the sections of the platform specific APIs (iPhone SDK etc.) that you use with your own interfaces. In doing so you are effectively hiding the platform specific libraries and making your design and code easier to manage when dealing with differences in the platforms.
With this in place you can write your core application code so that it appears very similar on either platform - even though they are written in different languages. I find Java and Objective-C to be very similar conceptually (at least at the level at which I use it) and would expect to be able to achieve parity with at least the following:
An almost identical set of Java and Objective-C classes with the same names and responsibilities
Java/Objective-C classes with similarly named methods
Java/Objective-C methods with the same responsibilities and logical implementations
This alone will make the application easier to understand across platforms. Of course the code will always look very different at the edges - i.e when you start dealing with the view, threading, networking etc. However, these concerns will be handled by your API wrappers which once developed should have fairly static interfaces.
You might also stand to benefit if you later develop further applications that need to be delivered to both platforms, as you might find that you can reuse or extend your API wrappers.
If you are writing a client-server type application you should also try and keep as much logic on your server as possible. Keep the amount of extra business logic on the device to a minimum. The more you can just treat the device as a view layer the less porting you'll have to do over all.
Aside from that, following the same naming conventions and package structure across all the projects helps greatly, especially for your framework code.
The UI API's and usability paradigms for BlackBerry and iPhone are so different that it won't be possible in most cases to directly port this kind of logic between apps. The biggest mistake one could make (in my opinion) is to try and transplant a user experience designed for one mobile platform on to another. The way people interact with BlackBerrys vs iPhones is very different so be prepared to revamp your user experience for each mobile platform you want to deploy on.
Hope this is helpful.
It is possible to write C++ code that works in both a BB10 Native app and an iOS app.
Xcode would need to see the C++ files as Objective-C++ code.
I am currently working on such a task in my spare time. I have not yet completed it enough to either show or know if it is truly possible, but I haven't run in to any road-blocks yet.
You will need to be disciplined to write good cross-platform code designed w/ abstractions for platform-specific features.
My general pattern is that I have "class Foo" to do cross platform stuff, and a "class FooPlatform" to do platform specific stuff.
Class "Foo" can call class "FooPlatform" which abstracts out anything platform specific.
The raw cross-platform code is not compilable on its own.
Separate BB10 and Xcode projects are created in their respective IDEs.
Each project implements a thin (few [dozen] line) "class FooPlatform" and references the raw cross-platform code.
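As a sketch of that Foo/FooPlatform split (the DocumentsPath method and the stub are made up for illustration), the shared code programs only against an abstract class, and each project supplies its one platform-specific subclass:

```cpp
// foo_platform.h -- the only interface the shared code knows about.
#include <iostream>
#include <string>

class FooPlatform {
public:
    virtual ~FooPlatform() = default;
    // Hypothetical platform-specific service needed by the shared code.
    virtual std::string DocumentsPath() const = 0;
};

// foo.h -- raw cross-platform code; identical in the BB10 and Xcode projects.
class Foo {
public:
    explicit Foo(FooPlatform& platform) : platform_(platform) {}

    std::string SaveFilePath(const std::string& name) const {
        return platform_.DocumentsPath() + "/" + name;   // no direct platform calls
    }

private:
    FooPlatform& platform_;
};

// Each project supplies its own thin FooPlatform. On iOS this would be an
// Objective-C++ file calling NSSearchPathForDirectoriesInDomains; on BB10 a
// Cascades/QDir equivalent. A plain stub stands in for either here:
class StubFooPlatform : public FooPlatform {
public:
    std::string DocumentsPath() const override { return "/tmp/docs"; }
};

int main() {
    StubFooPlatform platform;
    Foo foo(platform);
    std::cout << foo.SaveFilePath("save.dat") << "\n";   // -> /tmp/docs/save.dat
}
```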
When I get something working that I can show I will post again here...