Upgrade and Downgrade design - upgrade

I am developing an embedded Linux based product. The Linux user space runs multiple processes that are part of the product. I want to implement a clean design for upgrading (and downgrading) the firmware version. For example, if I change a structure in a newer version, the newer process can know how to read the old data (which is stored on flash) and build the new structure from it. But on downgrade, the older process won't understand the new structure that has been saved to flash.
So what is the best design for handling the upgrade (and downgrade) of a whole firmware version?
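One pattern that matches what you describe is to tag every record persisted to flash with a format version, have readers accept all known versions and migrate them to the current structure, and have an unknown (newer) version fail loudly rather than be misread. Below is a minimal sketch assuming a hypothetical two-field record; your processes are presumably C/C++, but the idea is language-agnostic and is shown here in Java purely for illustration:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical on-flash record: version 1 stored only an id, version 2 added a flags field.
final class Settings {
    final int id;
    final int flags;

    Settings(int id, int flags) {
        this.id = id;
        this.flags = flags;
    }
}

final class SettingsCodec {
    static final byte CURRENT_VERSION = 2;

    // Writers always emit the newest layout, tagged with its version.
    static byte[] write(Settings s) {
        ByteBuffer buf = ByteBuffer.allocate(1 + 4 + 4).order(ByteOrder.LITTLE_ENDIAN);
        buf.put(CURRENT_VERSION).putInt(s.id).putInt(s.flags);
        return buf.array();
    }

    // Readers accept every known version and upgrade it to the current structure.
    static Settings read(byte[] raw) {
        ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN);
        byte version = buf.get();
        switch (version) {
            case 1:  // old layout: id only, so fill the new field with a default
                return new Settings(buf.getInt(), 0);
            case 2:  // current layout
                return new Settings(buf.getInt(), buf.getInt());
            default: // written by a newer firmware: refuse rather than misread
                throw new IllegalStateException("Unknown record version " + version);
        }
    }
}
```

For the downgrade direction, the old firmware cannot be taught about layouts that did not exist when it shipped, so common mitigations are to only ever append fields (so an old reader can parse the prefix it knows and ignore the tail), or to keep a copy of the data in the previous format until the new version is committed.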

Related

UWP App size with Entity Framework

I am trying to create a UWP app with a local database (dbStorage) feature.
I looked at the UWP how-to (https://msdn.microsoft.com/en-us/windows/uwp/data-access/index) and see that there are two ways of doing this:
Using the basic sqlite3 APIs (I don't wish to use any third-party wrapper libraries)
Integrating Entity Framework with the app.
The latter does seem to be the better option, since it is an ORM, but I am curious to know: does adding it shoot up the app size in comparison to using the plain SQLite extension? Has anyone using it faced such problems?
App size should be one of the last things to worry about when creating UWP apps. The size is a one-time 'problem' when the user installs the app, since it takes some time to download. There are a lot of improvements built into the Windows 10 Store to tackle this problem:
Incremental updates (only update part of your app that changed)
File single instancing (if the file/library is on your system, it isn't downloaded again)
Partial resource downloads (only the language and scaled assets that fit for the device)
For more info, see e.g. this Build session.
If you want exact numbers, I would suggest building a small PoC with and without EF and comparing the sizes (although the size will change a bit anyway, because you write different code to make each version work).
But the added size from using EF Core won't be in the magnitude of hundreds of MBs. The packages themselves are EFCore.Sqlite at 71 KB, EFCore.Relational at 475 KB, and EFCore at 783 KB, and these even include multiple DLLs, XML files, ... so it's only a fraction of that. Together with a few more base packages (Caching, Logging, ...) and maybe a few extra .NET Standard libraries that get pulled in and might not be needed with the basic API, you'll have a few MB extra. In my opinion, ignorable.
The things you SHOULD worry about instead of the initial download time are:
Application startup time.
Overall application performance.
Performance of the developer (and thus the cost of creating the app).
If your choice is between the native sqlite3 APIs and EF Core, it would be an easy pick for me (thinking of point 3 above). Just bear in mind that there are still issues with EF Core and .NET Native (the optimizations done by the Store, or when you build in Release mode with the .NET Native toolchain enabled). If you want to publish to the Store very soon and have a rather large database, you might have some trouble getting it crash-free. If you can sideload the app, just build without the .NET Native toolchain.

What are the advantages of CQ5/AEM over CQ4? Worth the upgrade?

For those who used the Communique 4 CMS and are now using CQ5/AEM: what are the most important improvements to mention? How hard/fast was the migration process, what was complicated about it, and were you able to migrate all the content? Any experiences are welcome.
Application stability, an improved authoring experience, security fixes, performance, and the feature set are going to be the biggest delta when you compare with a legacy version. Upgrading also means you can tap into a larger resource pool for CQ5 development (legacy resources are difficult to find, and training new resources on a legacy platform takes time and decreases morale). If you have a healthy allocation of resources for large efforts, I would suggest going with a hybrid route for upgrading, where you simply migrate the areas that have the easiest compatibility. Major steps that come to mind...
Migrate content, users, and DAM.
Rewrite or port reusable code to the current CQ version (5.6.1 at the time of writing).
Run a large regression effort to ensure functionality loss is kept at non-impactful levels. Some examples: the application code base (APIs, beans, taglibs, component configs, dialogs, etc.), the front end, the architecture (author instances, publisher instances, and dispatchers), and CRX data consistency/retention.
I do not have experience with CQ prior to the Adobe acquisition, but the minor releases that have been distributed (5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.6.1) are consistently substantial. Expect a large migration effort, with the biggest pain points surrounding deprecated/removed APIs, component compatibility issues (especially custom components written in ExtJS), missing etc and app files, and the overall performance of the application.

MongoDB aggregation framework availability

The new aggregation framework will come with version 2.2.
They have made some presentations and demos on it:
http://www.10gen.com/presentations/mongosv-2011/mongodbs-new-aggregation-framework
I did not find any development release on their site.
Does someone know where I can test the new framework?
Thanks
You can download and install the development version 2.1; the aggregation framework is included (and currently under development).
http://www.mongodb.org/downloads
But (at this stage) it is still a bit "young" (see, for example, this recent thread: http://groups.google.com/group/mongodb-user/browse_thread/thread/3de5df85ce5b3713).
MongoDB uses the standard “odd numbers are development, even are stable” versioning scheme. So the 2.1.x series is still under development, and you should probably wait until the release of 2.2.x to use this feature in production unless you fully understand what you’re doing.
I am looking for it as well. :-)
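If you do grab a 2.1.x build, a quick way to exercise the new framework from code is the aggregate(...) helper that the 2.x Java driver added around the same time (the collection and field names below are made up, and the mongo shell's db.collection.aggregate(...) accepts the same pipeline documents):

```java
import com.mongodb.AggregationOutput;
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.Mongo;

public class AggregationSmokeTest {
    public static void main(String[] args) throws Exception {
        DB db = new Mongo("localhost").getDB("test");      // a 2.1.x development server
        DBCollection orders = db.getCollection("orders");  // hypothetical collection

        // $match then $group: total the "amount" field per customer for shipped orders.
        DBObject match = new BasicDBObject("$match", new BasicDBObject("status", "shipped"));
        DBObject group = new BasicDBObject("$group",
                new BasicDBObject("_id", "$customerId")
                        .append("total", new BasicDBObject("$sum", "$amount")));

        AggregationOutput out = orders.aggregate(match, group);
        for (DBObject row : out.results()) {
            System.out.println(row);
        }
    }
}
```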

Is writing a plug-in for Eclipse dependent on the operating system?

We are starting to write a plug-in for Eclipse to work with some Java frameworks like Hadoop (we want to edit the Hadoop Eclipse plug-in and merge it with another). Our plug-in must work on the Linux operating system. In general, does writing a plug-in for Eclipse depend on the operating system or not? If it does, what are the benefits of writing it for Linux?
Well, the previous answer is correct... in most cases. You should specifically check all the interfaces with the operating system.
SWT is a Java wrapper over native OS widgets. It behaves almost the same on all OSs, but not exactly; there are subtleties. For example, events might be fired a bit differently, widgets might be drawn differently, and so on. My experience shows that you have to check on all OSs to be sure that it works as it should, especially if you are doing more complex UI rendering. In many cases I had to do some fine-tuning to get it right. It is not a great deal of effort, but it should be considered.
Another issue is working with the file system. For example, make sure you are composing file paths correctly. It is always a good idea to test that part as well.
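As a small illustration of the file path point: build paths from components instead of hard-coding a separator, and the same code runs on Windows and Linux. A minimal sketch in plain Java (inside an Eclipse plug-in you would typically use org.eclipse.core.runtime.IPath for the same reason; the path segments below are made up):

```java
import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PortablePaths {
    public static void main(String[] args) {
        // Fragile: hard-coding "\\" works on Windows but breaks on Linux.
        String fragile = "workspace" + "\\" + "project" + "\\" + ".settings";

        // Portable: let the JDK insert the separator for the OS the plug-in runs on.
        Path portable = Paths.get("workspace", "project", ".settings");

        System.out.println("Separator on this OS: " + File.separator);
        System.out.println("Fragile : " + fragile);
        System.out.println("Portable: " + portable);
    }
}
```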
Eclipse plug-ins are platform independent (you are writing them in Java), unless your plug-in requires some low-level calls to the operating system (e.g. via JNI) or invokes some tool found only on Linux.
The only part of Eclipse that is tied, in part, to the OS is the SWT toolkit, since it's optimized for the graphical environment you are running it in; but if Eclipse can run on the OS you are interested in, you should not be bothered by this.

Static (iPhone) libraries, distribution, and dependencies

(Presumably the following question is not iPhone specific, aside from the fact that we would likely use a Framework or dynamic library otherwise.)
I am building a proprietary iPhone SDK for a client, to integrate with their web back-end. Since we don't want to distribute the source code to customers, we need to distribute the SDK as a static library. This all works fine, and I have verified that I can link new iPhone apps against the library and install them on the device.
My concern is around third party libraries that our SDK depends on. For example we are currently using HTTPRiot and Three20 (the exact libraries may change, but that's not the point). I am worried that this may result in conflicts if customers are also using any of these libraries (and perhaps even different versions) in their app.
What are the best practices around this? Is there some way to exclude the dependent libraries' symbols from our own static library (in which case customers would have to manually link to both our SDK as well as HTTPRiot and Three20)? Or is there some other established mechanism?
I'm trying to strike a balance between ease of use and flexibility / compatibility. Ideally customers would only have to drop our own SDK into their project and make a minimal number of build settings changes, but if it makes things more robust, it might make more sense to have customers link against multiple libraries individually. Or I suppose we could distribute multiple versions of the SDK, with and without third party dependencies, to cover both cases.
I hope my questions make sense... Coming mainly from a Ruby and Java background, I haven't had to deal with compiled libraries (in the traditional sense) for a long time... ;)
If it were me I would specify exactly which versions of those 3rd party libraries my library interoperates with. I would then test against them, document them, and probably deliver with those particular versions included in the release.
Two things I would worry about:
- I would want to be sure it 'just works' when my customers install it.
- I wouldn't want to guarantee support for arbitrary future versions of those 3rd party libraries.
It is fine to include a process for the customer to move to newer versions, but if anything doesn't work then I would expect the customer to pay for that development work as an enhancement, rather than it being a free bug fix (unless you include that in the original license/support arrangement).
At that point it becomes an issue of ensuring your specific versions of the 3rd party libraries can work happily alongside anything else the customer needs (in your case a web back-end). In my experience that is usually a function of the library, e.g. some aren't designed so multiple versions can run side-by-side.