ADO.NET Provider for SAP HANA - Version mismatch issue

I have a WPF application that uses the ADO.NET client for SAP HANA (version 1.0.9.0) in my local development environment, and I have added a reference to SAP.DATA.HANA.v4.5.dll in my code. The connection works fine.
When I try to run the same application on a server that has a different version of the ADO.NET client, it throws an error.
Shouldn't it resolve the client from the install location (C:\Program Files\sap\hdbclient\ado.net\v4.5) instead of by version number?
Can someone please explain if I am doing something wrong.

You're not doing anything wrong, but I think you're missing a key piece of knowledge about assembly references. There are ways to deal with this, both on your side and on SAP's.
When .NET references a strong-named assembly, it binds against the exact version used at compile time, which means that if the version number is different at runtime, loading fails by default. Differences in the signing key used for strong-named assemblies, or in culture, can also cause assembly loading to fail. This previous answer discusses that and suggests an approach to dealing with it using the AssemblyResolve event. You might have a lot more to do if your application has a complex plugin-loading mechanism, but if it's really simple, you can get it updated with minimal effort. Of course, you'll still have issues if the DB client makes a breaking change, but they'll be different issues.
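As a rough illustration, here is a minimal sketch of that event-handler approach. The assembly name and install path are taken from the question; everything else (class name, registration point) is an assumption:

// Minimal sketch, assuming the standard HANA client install location.
// Redirects failed loads of the HANA ADO.NET assembly to whatever
// version is actually installed on the machine.
using System;
using System.IO;
using System.Reflection;

static class HanaAssemblyResolver
{
    // Call once, early in application startup (e.g. in App's constructor).
    public static void Register()
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            var requested = new AssemblyName(args.Name);
            if (requested.Name != "Sap.Data.Hana.v4.5")
                return null; // not our assembly; let the default binder fail as usual

            // Path from the question; adjust to your install.
            var path = @"C:\Program Files\sap\hdbclient\ado.net\v4.5\Sap.Data.Hana.v4.5.dll";
            return File.Exists(path) ? Assembly.LoadFrom(path) : null;
        };
    }
}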
Another approach is to edit your application's configuration file to redirect assembly versions yourself. Sometimes NuGet packages are even set up to insert these redirects into the application configuration file automatically. Unfortunately this isn't an easy strategy with HANA, given how frequently the versions change, and you need to know the exact version that's present to do it. So if you're deploying your application to multiple clients with different HANA versions, it could be a bit of a nightmare.
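For reference, a hand-written redirect in app.config looks something like the sketch below; the publicKeyToken here is a placeholder you would need to read from the installed DLL (e.g. with sn -T):

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- placeholder token: check the installed DLL with "sn -T" -->
        <assemblyIdentity name="Sap.Data.Hana.v4.5"
                          publicKeyToken="0000000000000000" culture="neutral" />
        <bindingRedirect oldVersion="1.0.9.0-1.0.120.0" newVersion="1.0.120.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>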
The sad part is that a mechanism already exists for the publisher of a library to deal with this sort of thing. It's called a publisher policy assembly, but SAP botched its delivery of the policy assembly by including (and embedding) the most useless binding redirect ever conceived.
Normally, with a binding redirect, the purpose is to allow for seamless upgrading of an assembly to a newer version as long as your types haven't changed their public interfaces significantly. All you do is specify the range of versions that can be handled by this version in the XML.
SAP, however, has this as its binding redirect element:
<bindingRedirect oldVersion="1.0.120.0-1.0.120.0" newVersion="1.0.120.0" />
This does you the wonderful favor of allowing only the currently installed version of their DB client (in this case, 1.0.120.0). And no, that DLL doesn't change its ADO.NET classes enough to justify this sort of strict versioning. In my case, everything about the different assembly versions was exactly the same.
Had I the ability to get in touch with someone sane on the HANA team with enough clout to fix this, I'd suggest they set the oldVersion attribute properly to the oldest upgradable version and update it in every subsequent driver package. So if you've installed 1.0.120.0, it'd look something more like this:
<bindingRedirect oldVersion="1.0.9.0-1.0.120.0" newVersion="1.0.120.0" />
Then it could be used by any software on the machine without extra effort. I mean, it's in the GAC; it wouldn't be that much effort for them to make this significantly easier. They would need to patch all versions of the HANA client, though, since many applications (even those shipped by SAP) are tied to specific HANA versions. If they only fixed it in the latest version, we wouldn't see the benefits for several years.

Related

Having OSGi services use latest version of bundle, even if multiple bundle versions are installed

I'm facing an OSGi problem, and I'm not sufficiently well versed in OSGi details to figure out a way forward.
My problem is this:
I have a service, which lives behind a well-defined interface, and periodically emits a file in a particular location. This is controlled by the config admin (via a config file in Karaf)
Some components provide this service to others via a Karaf feature file, bundling my service in a particular version (1.X.0)
Other components provide this service in a newer version (1.Y.0, where Y > X), either via another feature file, or just by adding it to their kar file.
As these are just minor version changes, the consuming services don't really care which service they talk to (the API is the same).
My problem is that both of these bundles are Active in Karaf, and there is a race condition as to who gets to overwrite whose output file.
I tried making the @Component a singleton (using scope = ServiceScope.SINGLETON), and while this might solve the case of every service consumer using the same implementation, the issue of file overwriting persists, as both services are Active.
Basically, I'm looking for a way to tell OSGi: "don't bother with the older versions; the new version (which has the same major version as the others) is fine for all of the consumers (who use the default range of [1.X,2[)".
As the config file is akin to an "API" for enabling my service, I would like to avoid having multiple config files for the different versions.
If possible, I would like to keep the version-selection logic outside of my service. I suppose that, in theory, the service could listen for other versions of bundles providing the same service interface and take appropriate action - but this seems very cumbersome to me. Surely there is a better way, with less impact on the business logic code (i.e. my service)?
The simple answer is, of course: why bother with the old bundle? Just uninstall it.
Anyway, the usual answer to that is: I can't, for some reason. So, some solutions in preferred (my) order:
Remove older bundle
Make your components require a configuration and configure only the appropriate component; the other one won't run. This is basically the pattern that gave us the Configurator specification. It is actually a really good solution that I use everywhere, and it allows the application to be configured in fine detail.
Just solve the configuration file conflict in the bundles.
Use startlevels to never start the older bundle. A bit of a hack.
Register a service property with the service and let the references filter on that property. Rabbit hole.
Use Service Hooks to filter out the old service. This introduces ordering, since the service hook must be registered before anyone uses it, so I tend to shy away from it. Here is an implementation.
This is, imho, a typical use case that, in retrospect, makes the system much more complicated than expected. One hack like this does not seem too bad, but these hacks tend to multiply like rabbits. OSGi is about clean modules communicating through well-defined services. Your description suggests you're almost there, but not solving this problem correctly will lead you down the path to the big ball of mud again :-(
For Apache Karaf there is a special way to implement the first solution from Peter (Remove older bundle).
Set dependency=true in the feature file for the bundle that provides the service.
This way Apache Karaf will automatically install the best bundle for the requirements of your other bundles. In this case it should only install the providing bundle with the highest minor version number.
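A sketch of what that might look like in a feature file (the feature name and artifact coordinates here are made up):

<features name="example-features" xmlns="http://karaf.apache.org/xmlns/features/v1.3.0">
  <feature name="my-service-provider" version="1.2.0">
    <!-- dependency="true": Karaf treats this bundle as a dependency and
         installs the best matching version required by other bundles -->
    <bundle dependency="true">mvn:com.example/my-service/1.2.0</bundle>
  </feature>
</features>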

How to support multiple mobile application versions with the most up-to-date server deployment?

I'm trying to figure out a scenario, but I can't find any relevant information on the web.
Let's say I am deploying an Android Application (v1.0.0) with the backend (v1.0.0).
At some point, I will make some changes and update the app to v1.0.1 and the backend to v1.0.1 and they will work perfectly.
But how can I also support the previous version of the application (maybe the new server version provides another format of response for one specific request)?
I thought of having separate deployments for every version of the server, but with many updates, that would mean a big resource impact.
Also, forcing the users to update doesn't seem like a good option, in my opinion.
Essentially you can go about this in multiple ways; it really depends on your requirements, but in reality it's usually a mixture of the approaches below.
As a rule of thumb, design a data model that can be kept compatible. If the contract cannot be kept, or you realize major changes are needed, introduce a new version of the API. If old clients cannot be supported, force an update. You can also agree on a policy for how long to support each previous version before forcing an update; this will make your life much easier and simpler than maintaining tens of API versions.
Backwards compatible data model
You must think backwards with each release:
Think of incremental modelling with each release cycle, so you can keep existing fields and behavior working.
When you forget about it and you need to switch behavior based on data:
This happened to me in my trainee years: if you forget to include a protocol version, you simply have to decode which version the data might be, based on the data itself. From the beginning, you can always have a version field on the data. Moreover, you can also maintain a set of compatible parser versions.
Include protocol version in data:
{
  "data": [ arbitrary data model ],
  "protocolVersion": "v1"
}
Based on the protocol version, you can decide how to process the data on the server side. You only need to keep the protocol version in mind, not the client version, since you might release clients 1.0.0, 1.0.1, and 1.1.0 while only changing the protocol in 1.2.0.
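A minimal sketch of that server-side dispatch (the envelope type and handler names below are made up for illustration):

// Hypothetical sketch: route by the protocol version carried in the
// payload, not by the client app version.
using System;

public class RequestEnvelope
{
    public string ProtocolVersion { get; set; }
    public object Data { get; set; }
}

public class RequestProcessor
{
    public object Process(RequestEnvelope envelope)
    {
        switch (envelope.ProtocolVersion)
        {
            case "v1": return ProcessV1(envelope.Data);
            case "v2": return ProcessV2(envelope.Data);
            default:
                throw new NotSupportedException(
                    "Unsupported protocol version: " + envelope.ProtocolVersion);
        }
    }

    private object ProcessV1(object data) { /* legacy handling */ return data; }
    private object ProcessV2(object data) { /* current handling */ return data; }
}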
The catch, though, is that as the data changes, so does the server-side processing behavior.
Versioned API
Sometimes you see different paths for major versions:
/api/v1/query
/api/v2/query
This is used when backwards compatibility is broken or after a total redesign; therefore, not every release will increment the API version.
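In ASP.NET Core, for instance, that could look like the following sketch (a hypothetical pair of controllers, not tied to the question's stack):

using Microsoft.AspNetCore.Mvc;

// v1 stays frozen so old clients keep working...
[ApiController]
public class QueryV1Controller : ControllerBase
{
    [HttpGet("/api/v1/query")]
    public IActionResult Get() => Ok(new { result = "v1 response format" });
}

// ...while v2 is free to change the response shape.
[ApiController]
public class QueryV2Controller : ControllerBase
{
    [HttpGet("/api/v2/query")]
    public IActionResult Get() => Ok(new { result = "v2 response format", extra = "new fields" });
}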
Use a query parameter with the client version:
/api/query?v=1.0
/api/query?v=1.1
Similar to the previous one, just written differently; though I don't think this is the route you want to go down.
Track client releases and used service versions
In reality, multiple requests and multiple versions of services are in use at any given time for a single client version, which makes things even harder to maintain.
Document and track, all the time. Version and release management are very important. Know the git hash from which each version was built; it can get confusing quickly when you change just one parameter after a release as a quick fix, and the day after, nothing is compatible anymore.
Map all service versions to the client version at each release, know which commit you really built, and use tagging and release management.
Test everything before each release
Have clear requirements for your backwards compatibility. If you only support the two most recent older versions, then test both of those clients, plus the new client, against the server build about to be released. Document everything, and when you meet your criteria for release, go with it.
Summary
Reality is a mixture of these solutions. Have clear requirements on backward compatibility, so you can test before each release. When necessary, force an update. Keep track of releases, and document each client version along with all the services (and their versions) it uses.
Use a switch case at the server for each different version of the app.

Web Application deployment and database/runtime data management

I have decided to finally nail down my team's deployment processes, soup-to-nuts. The last remaining pain point for us is managing database and runtime data migration/management. Here are two examples, though many exist:
If releasing a new "Upload" feature, automatically create the upload directory and configure permissions. In later releases, verify existence/permissions - forever, automatically.
If a value in the database (let's say an Account Status of "Signup") is no longer valid, automatically migrate data in database to proper values, given some set of business rules.
I am interested in implementing a framework that allows developers to manage and deploy these changes with the same ease that we manage and deploy our code.
So the first question is: 1. What tools/frameworks are out there that provide this capacity?
In general, this seems to be an issue in any given language and platform. In my specific case, I am deploying a .NET MVC2 application which uses Fluent NHibernate for database abstraction. I already have in my deployment process a tool which triggers NHibernate's SchemaUpdate - which is awesome.
What I have built to address this issue in my own way is a tool that scans target assemblies for classes which inherit from a certain abstract class (Deployment). That abstract class exposes hooks which you can override to implement your own arbitrary deployment code - in the context of your application's codebase. The Deployment class also provides a versioning mechanism, and the tool manages the current "deployment version" of a given running app. Then, a custom NAnt task glues this together with the NAnt deployment script, triggering the hooks at the appropriate times.
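To make that concrete, here is a minimal sketch of what such a hook class might look like; the names and signatures are my guesses, not the asker's actual tool:

using System;
using System.IO;

// Hypothetical base class: one subclass per versioned deployment step.
public abstract class Deployment
{
    // The deployment version this step belongs to.
    public abstract Version Version { get; }

    // Runs once when the app is first upgraded to this version.
    public abstract void Upgrade();

    // Runs on every deployment to verify the environment is still valid.
    public virtual void Verify() { }
}

public class CreateUploadDirectory : Deployment
{
    public override Version Version { get { return new Version(1, 2, 0); } }

    public override void Upgrade()
    {
        Directory.CreateDirectory(@"C:\App\Uploads");
        // configure ACLs here...
    }

    public override void Verify()
    {
        if (!Directory.Exists(@"C:\App\Uploads"))
            throw new InvalidOperationException("Upload directory missing");
    }
}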
This seems to work well, and does meet my goals - but here's my beef, and leads to my second question: 2. Surely what I just wrote has to already exist. If so, can you point me to it? and 3. Has anyone started down this path and have insight into problems with this approach?
Lastly, if something like this exists, but not on the .NET platform, please still let me know - as I would be more interested in porting a known solution than starting from zero on my own solution.
Thanks everyone, I really appreciate your feedback!
For each major release, have a script that creates the environment with the exact requirements you need.
For minor releases, have a script that is split into the various releases and incrementally alters the environment. There are some big benefits to this:
You can look at the changes to the environment over time by reading the script and matching it with release notes and change logs.
You can create a brand new environment by running the latest major and then latest minor scripts.
You can create a brand new environment of a previous version (perhaps for testing purposes) by telling the script to stop at a certain minor release.
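A small sketch of a runner that applies minor-release scripts in order and can stop at a chosen version; the file layout and naming convention are assumptions:

using System;
using System.IO;
using System.Linq;

class MigrationRunner
{
    // Usage: MigrationRunner 1.3   (applies migrations/1.1.sql .. 1.3.sql in order)
    static void Main(string[] args)
    {
        var target = new Version(args.Length > 0 ? args[0] : "9999.0");

        var scripts = Directory.GetFiles("migrations", "*.sql")
            .Select(f => new
            {
                File = f,
                Version = new Version(Path.GetFileNameWithoutExtension(f))
            })
            .Where(s => s.Version <= target)
            .OrderBy(s => s.Version);

        foreach (var s in scripts)
            Console.WriteLine("applying " + s.File); // execute against the DB here
    }
}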

What is the difference between configuration management and version control?

Can anyone explain in simple terms what the difference is between configuration management and version control? From the descriptions I've been able to find on various websites, it seems like configuration management is just a fancy term for putting your config files in a source control repository. But others lead me to believe there is a more involved explanation.
Version control is necessary but not sufficient for configuration management. Version control happens in some central or distributed repository, but says nothing about where any particular version is deployed or used.
Configuration management worries about how to take what is in version control and deploy that consistently to the appropriate places, primarily QA and production, but in a large enough development operation developers as well.
For example, you may keep all of your SQL queries in version control, including your table modification scripts, but that doesn't control when those scripts are deployed to the appropriate database server and kept in sync with the deployment of any other code that relies on that database structure.
Configuration management includes, but is not limited to, version control.
Configuration management is everything that you need to manage in terms of a project. This includes software, hardware, tests, documentation, release management, and more. It identifies every end-user component and tracks every proposed and approved change to it from Day 1 of the project to the day the project ends.
Version control is specifically applied to computer files. This includes documents, spreadsheets, emails, source code, and more.
Version control is saving files and keeping different versions of them, so you can see the change over time.
Configuration management generally refers to an overall process that keeps track of what version of the code is on what server and how the servers are set up (including the install scripts to do so, in many shops). It is the process of what happens after the code goes into source control and how it gets deployed to the servers/desktops, etc.
Configuration management is an ambiguous term.
In software, it tends to be a superset of version control, with emphasis on the entire process of producing a result in a repeatable and predictable manner.
In computing maintenance, it is related to the maintenance of the configuration settings and hardware/firmware/software versions of entire networks and sets of attached computing machines (including servers, clients, routers...).
In hardware manufacturing, it represents a superset of even the two above, including the hardware pieces and software modules needed to obtain a product, along with the description of the process to manufacture them, and sometimes even the entire schemas and configurations of the production lines themselves.
In addition to everything said above, I'd like to recommend Bob Aiello's book "Configuration Management Best Practices" (http://www.amazon.com/dp/0321685865).
It covers all aspects of Software Configuration Management including version control.
Version control is the control of deliverables whereas configuration management is managing the entire process leading to produce the deliverables. Configuration management involves change management, project management, etc., which generally are not managed by simple version control.
Roughly speaking, version control means you can check out the source for any particular version. Configuration management means you can build and deploy and probably test any particular version.
This can be helpful.
Versions and configurations
Versions:
Ability to maintain several versions of an object.
Commonly found in many software engineering and concurrent engineering environments.
Merging and reconciliation of various versions is left to the application program
Some systems maintain a version graph
Configuration:
A configuration is a collection of compatible versions of the modules of a software system (one version per module)
Version control is one of the features of an SCM system.
From the subversion user guide:
http://svnbook.red-bean.com/en/1.7/svn-book.html
"Some version control systems are also software configuration management (SCM) systems. These systems are specifically tailored to manage trees of source code and have many features that are specific to software development—such as natively understanding programming languages, or supplying tools for building software. Subversion, however, is not one of these systems. It is a general system that can be used to manage any collection of files. For you, those files might be source code—for others, anything from grocery shopping lists to digital video mixdowns and beyond."

What should I propose for a reusable code library organization?

My organization has begun slowly repurposing itself from a product-oriented business model toward a contract-oriented business model over the last year or two. During the past year, I was shifted into the new contracting business to help put out fires and fill orders. While the year as a whole was profitable (and therefore, by at least one measure, successful), we had a couple of projects that really dinged our numbers for the year back around June.
I was talking with my manager before the Christmas holiday, and he mentioned that, while he doesn't like the term "post-mortem" (I have no idea what's wrong with the term, any business folks or managers out there know?), he did want to hold a meeting sometime mid-January where the entire contract group would review the year and try to figure out what went right, what went wrong, and what initiatives we can perform to try to improve profitability.
For various reasons (I'll go into more detail if it's requested), I believe that one thing our team, and indeed the organization as a whole, would benefit from is some form of organized code-sharing. The same things get done again and again by different people and they end up getting done (and broken) in different ways. I'd like to at least establish a repository where people can grab code that performs a certain task and include (or, realistically, copy/paste) that code in their own projects.
What should I propose as a workable common source repository for a team of at least 10-12 full-time devs, plus anywhere from 5-50 (very) part time developers who are temporarily loaned to the contract group for specialized work?
The answer required some cultural information for any chance at a reasonable answer, so I'll provide it here, along with some of my thoughts on the topic:
Developers will not be forced to use this repository. The barrier to entry must be as low as possible to encourage participation, or it will be ignored. Sadly, this means that anything which requires an additional software client to be installed and run will likely fail. ClickOnce deployment is about as close as we can get, and that's awfully iffy.
We are a risk-averse, Microsoft shop. I may be able to sell open-source solutions, but they'll be looked upon with suspicion. All devs have VSS, the corporate director has declared that VSTS is not viable going forward. If it isn't too difficult a setup and the license is liberal, I could still try to ninja a VSTS server into the lab.
Some of my fellow devs care about writing quality, reliable software, some don't. I'd like to protect any shared code written by those who care from those who don't. Common configuration management practices (like checking out code while it's being worked on) are completely ignored by at least a fifth of my colleagues on the contract team.
We're better at writing processes than following them. I will pretty much have to have some form of written process to be able to sell this to my manager. I believe it will have to be lightweight, flexible, and enforced by the tools to be remotely relevant because my manager is the only person who will ever read it.
Don't assume best practices. I would very much like to include things like mandatory code reviews to enforce use of static analysis tools (FxCop, StyleCop) on common code. This raises the bar, however, because no such practices are currently performed in a consistent manner.
I will be happy to provide any additional requested information. :)
EDIT: (Responding to questions)
Perhaps contracting isn't the correct term. We absolutely own our own code assets. A significant part of the business model on paper (though not, yet, in practice) is that we own the code/projects we write and we can re-sell them to other customers. Our projects typically take the form of adding some special functionality to one of the company's many existing software products.
From the sounds of it, you have an opportunity during the "post-mortem" to present some solutions. I would create a presentation outlining your ideas and present them at this meeting. Before that, I would recommend that you set up some solutions and demonstrate them during your presentation. Some things to do:
Evangelize component-based programming (a good read is Programming .NET Components by Juval Lowy). Advocate the DRY (Don't Repeat Yourself) principle of coding.
Set up a central, common location in your repository for all your reusable code libraries. This should hold the reference implementation of your reusable code library.
Make it easy for people to use your code libraries by providing project templates for common scenarios with the code libraries already baked in. This way your colleagues will have a consistent template to work from. You can leverage the VS.NET project template capabilities to do this - check out the following links: VSX Project System (VS.NET 2008), and the Code Project article on creating project templates.
Use a build automation tool like MSBuild (which is bundled with VS2005 and up) to copy over just the components needed for a particular project. Make this part of your build setup in the IDE (VS.NET 2005 and up have nifty ways to set up pre-compile and post-compile tasks using MSBuild).
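For example, a post-build step along these lines could pull in just what a project needs; $(SharedLibRoot) and the item names are invented for this sketch:

<ItemGroup>
  <SharedComponent Include="$(SharedLibRoot)\Logging\*.dll" />
  <SharedComponent Include="$(SharedLibRoot)\Validation\*.dll" />
</ItemGroup>
<!-- Overriding AfterBuild is the classic VS2005-era post-compile hook -->
<Target Name="AfterBuild">
  <Copy SourceFiles="@(SharedComponent)" DestinationFolder="$(OutputPath)" />
</Target>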
I know there is resistance to open-source solutions, but I would still recommend setting up and using a continuous integration system like CruiseControl.NET, so that you can leverage it to compile and test your projects on a regular basis from the central repository where the reusable code library is maintained. This way, any change to the code library can be quickly checked to make sure it does not break anything. It also helps surface version issues with the various projects.
If you can set this up on a machine and show it during your post-mortem as part of the steps that can be taken to improve, you should get better buy-in, since you are showing something already working that can be scaled up easily.
Hope this helps and best of luck with your evangelism :-)
I came across a set of frameworks recently called the Chuck Norris Frameworks; they are available on NuGet at http://nuget.org/packages/chucknorris . You should definitely check them out, as they have some nice templates for your ASP.NET projects. Also, definitely check out NuGet itself.
Organize by topic; require unit tests (feature-level) for check-in/acceptance into the library; add a wiki to explain what/why and to support searching.
One question: You say this is a consulting group. What code assets do you have? I would think most of your teams' coding efforts would be owned by your clients as part of your work-for-hire contract. If you are going to do this you need to make absolutely certain that your contracts grant you rights to your employees' work.
Maven has solved code reuse in the Java community - you should go check it out.
I have a .NET developer who devised something similar for our internal use with .NET assemblies. Because there's no comparable .NET Internet community, this tool just accesses an internal repository on our corporate network, but otherwise works much the way Maven does.
Maven could really be used to manage .NET assemblies directly (we use it with our Flex .swf and .swc code modules); it's just that .NET folks would have to get over using a Java tool, and would probably have to write a Maven plugin to drive msbuild.
First of all, for code organization, check out the Microsoft Framework Design Guidelines at http://msdn.microsoft.com/en-us/library/ms229042.aspx, and then create a central location in source control for the new framework you're going to create. Set up some default namespaces and assemblies for cleaner separation, and make sure everyone gets a daily build.
Just an additional point, since we have "shared code" in my shop as well.
We found out this is very much a packaging issue:
Whatever code you are producing or tool you are using, what you should have is a common build tool able to package your sources into a "delivery component", with everything needed to actually execute the code, but also the documentation (compressed) and the source (compressed).
The main benefit of having such a "delivery package unit" is to have as few files to deploy as possible, in order to ease the download of those units.
The build process can very well be managed by Maven or any other tool (ant/nant) you want.
When an audit team wants to examine all our projects, we just deploy on their workstation the same packages we deploy on a production machine, except they will uncompress the source files and do their work.
Since our source files also include whatever files are needed to compile them (like, for instance, Eclipse files), they can even re-compile those projects in their development environment.
That way:
Developers will not be forced to use this repository; the barrier to entry must be as low as possible to encourage participation, or it will be ignored: it is just a script to execute to get the "delivery module" with everything they need in it (a Maven repository can be used for that too).
We are a risk-averse, Microsoft shop: you can use any repository you want.
Some of my fellow devs care about writing quality, reliable software, some don't: this has nothing to do with the quality of the code written in these packaged modules.
We're better at writing processes than following them: the only process involved here is the packaging process, and it can be fairly automated.
Don't assume best practices: you are not forced to apply any kind of static code analysis before packaging executable and source files.