Transitioning from a legacy database to a new one that works with a legacy application - upgrade

I have a problem concerning a legacy application that can't be changed in any way (a single executable file with no DLLs), which is connected to a database that can be changed. It is a Visual Basic 6 application connecting to the database using ADO. The database engine is SQL Server 2008. The goal is to create a new, correct database that will work with the legacy application.
It is coupled so tightly that it does not even work with views instead of tables, as suggested here. So the present situation looks like this: [diagram of the current situation]
Currently I am trying to research the problem and find my options. I have an idea that might work:
Since the approach of changing tables to views does not work, I think one possibility is to intercept the communication between the app and the legacy DB, read each command that is sent, redirect it somewhere else, and not let the legacy DB respond to the request.
Each command is either a CRUD statement or a procedure execution, and we know the set of commands that can possibly be sent. Let's suppose that a new database is set up with views corresponding to the legacy tables. Commands are redirected to my own application, which filters out everything and manipulates it (somehow) to work with the new schema.
[Diagram of the intercepted communication]
This is my general idea of what I want to do to avoid rewriting the tightly coupled legacy application. Someone has already asked a question similar to mine.
They discuss approaches for either digging commands out of SQL dump files or intercepting the communication.
The interception itself doesn't seem to be a problem, as discussed here. But I wonder how the mirror can reply.
The same goes for port mirroring using [TCP packet hijacking](https://reverseengineering.stackexchange.com/a/1816).
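To make the interception idea concrete, here is a minimal sketch in C# of the kind of pass-through proxy I imagine sitting between the app and the databases. The host names are placeholders, it assumes the app can be pointed at the proxy (e.g. via a hosts-file entry or a SQL Server client alias, since the executable itself cannot be changed), and the actual hard part, parsing and rewriting the TDS packets that ADO sends, is only marked by comments:

```csharp
// Minimal sketch of a pass-through proxy between the legacy app and the databases.
// Parsing and rewriting the TDS protocol frames is the hard part; it is only
// stubbed out here with comments.
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class TdsProxy
{
    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Any, 1433); // where the legacy app connects
        listener.Start();
        while (true)
        {
            var client = await listener.AcceptTcpClientAsync();
            _ = HandleClientAsync(client); // one relay per connection
        }
    }

    static async Task HandleClientAsync(TcpClient client)
    {
        using (client)
        using (var server = new TcpClient("new-db-host", 1433)) // placeholder host
        {
            var appSide = client.GetStream();
            var dbSide = server.GetStream();
            // Relay bytes in both directions; a real implementation would parse
            // TDS frames here and rewrite statements to target the new schema.
            await Task.WhenAll(Relay(appSide, dbSide), Relay(dbSide, appSide));
        }
    }

    static async Task Relay(NetworkStream source, NetworkStream target)
    {
        var buffer = new byte[8192];
        int read;
        while ((read = await source.ReadAsync(buffer, 0, buffer.Length)) > 0)
            await target.WriteAsync(buffer, 0, read); // inspect/modify buffer here
    }
}
```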
To sum up, my questions are as follows:
Is this a feasible approach for achieving a smooth transition from the legacy solution to a new one?
If my idea is doable, how can I listen to DB requests and create responses from a different application rather than the original DB?
Is there a better way to achieve my goal, which is to create a new database with a database abstraction layer so that the old legacy application remains functional?

Related

How to have complete offline functionality in a web app with PostgreSQL database?

I would like to give a web app with a PostgreSQL database 100% offline functionality. In an ideal case the database would be completely replicated in the browser per user and synchronized when online, so that the same code can be used to talk to both the offline and online databases. I know this is possible with PouchDB and CouchDB, but I have not found a solution that works with PostgreSQL. Is this at all possible?
Short answer: I don't know of anything like this that currently exists.
However, in theory, this could be made to work (long answer):
1. Write a PostgreSQL backend for levelup (one exists for MySQL: https://github.com/kesla/mysqldown).
2. Wire up pouchdb-server to read/write from your PostgreSQL DB using PouchDB's existing LevelDB adapter (which in turn will have to be configured to use your Postgres backend). Congrats, you can now sync data using PouchDB!
Whether an approach like this is practical in reality for your application is a different question you'll have to answer.
You may be wondering, for example, "will I be able to sync an existing complex schema with multiple tables to the client with this approach?" The answer is probably not: the mysqldown implementation of leveldown uses a single MySQL table with three fields, id, key, and value (source), and I imagine any general-purpose PostgreSQL adapter would be similar (nothing says you can't write a special-purpose adapter just for your app, though!).
On the other hand, if you were to implement a CouchDB-compatible API (or a subset; you may not need attachments, for example) over your existing database schema, there's nothing stopping you from using PouchDB on the client to talk directly to it as if it were an actual CouchDB: just pop in the URL and call replicate()! Implementing the replication protocol might be a fair bit of work, since you'd need to track revisions and so on somewhere, but again, it's technically not impossible!
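To give a flavour of what that involves, here is a purely hypothetical sketch (in C#, but any server language works) of the _changes feed, the endpoint replication is built around. It assumes a docs table with (id, rev, seq) columns maintained by triggers on your real tables; _revs_diff, _bulk_docs, deletions, and conflicts are all still missing:

```csharp
// Hypothetical sketch of a CouchDB-style _changes feed over Postgres.
// Assumes a "docs" table with (id text, rev text, seq bigserial) kept up to
// date by triggers; the full replication protocol needs several more endpoints.
using System;
using System.Net;
using System.Text;
using Npgsql;

class ChangesFeed
{
    static void Main()
    {
        var http = new HttpListener();
        http.Prefixes.Add("http://localhost:5985/"); // PouchDB's replicate() target
        http.Start();
        while (true)
        {
            var ctx = http.GetContext();
            long since = long.Parse(ctx.Request.QueryString["since"] ?? "0");
            var body = new StringBuilder("{\"results\":[");
            using (var conn = new NpgsqlConnection("Host=localhost;Database=app"))
            {
                conn.Open();
                var cmd = new NpgsqlCommand(
                    "SELECT id, rev, seq FROM docs WHERE seq > @since ORDER BY seq", conn);
                cmd.Parameters.AddWithValue("since", since);
                using (var r = cmd.ExecuteReader())
                {
                    bool first = true;
                    while (r.Read())
                    {
                        if (!first) body.Append(",");
                        first = false;
                        // One change row per document revision, CouchDB-style.
                        body.AppendFormat(
                            "{{\"seq\":{0},\"id\":\"{1}\",\"changes\":[{{\"rev\":\"{2}\"}}]}}",
                            r.GetInt64(2), r.GetString(0), r.GetString(1));
                    }
                }
            }
            body.Append("]}");
            byte[] bytes = Encoding.UTF8.GetBytes(body.ToString());
            ctx.Response.ContentType = "application/json";
            ctx.Response.OutputStream.Write(bytes, 0, bytes.Length);
            ctx.Response.Close();
        }
    }
}
```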
There are also implementations of levelup's backend storage that are designed for browsers. See level.js, which could be another way to sync between a server-side Postgres levelup backend and the browser.
TL;DR: There's a ton of work being done around JavaScript databases right now. Is syncing with Postgres impossible? Probably not. Would it be a lot of work? Definitely. Worth it? Who knows, but it would be cool.
Without installing PostgreSQL on the client? No. Obviously you can cache data for offline use, but an entire RDBMS plus procedural languages in JavaScript? No.

Merging two databases into a single database

I have deployed the same application on two different computers, and now I need to merge the data from the two databases into a single database.
The application is developed in C# .NET and uses SQL Server 2008 Express.
The problem arose because I could not use the application over the LAN, so I need to merge the two databases into one.
Please help me solve the merging problem.
I also need to run the application over the LAN, but the SQL Server Browser service does not start, and I have searched the internet for an answer without any luck.
Thank you; waiting for a response.
The approach you want to take will depend largely on your schema, but Microsoft Sync Framework should be useful. It lets you define rules for resolving conflicts and merging your data.
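As a rough sketch, a two-way merge with Sync Framework 2.1 looks something like the following. The connection strings and table name are placeholders, and conflict resolution is left at the framework defaults, which you will probably want to customize:

```csharp
// Rough sketch of a two-way merge with Microsoft Sync Framework 2.1.
// Connection strings and the table name are placeholders.
using System;
using System.Data.SqlClient;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

class MergeDatabases
{
    static void Main()
    {
        using (var db1 = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=AppDb;Integrated Security=True"))
        using (var db2 = new SqlConnection(@"Data Source=OTHERPC\SQLEXPRESS;Initial Catalog=AppDb;Integrated Security=True"))
        {
            // Describe the tables to sync and provision both databases once.
            var scope = new DbSyncScopeDescription("AppScope");
            scope.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("Customers", db1));

            foreach (var conn in new[] { db1, db2 })
            {
                var provisioning = new SqlSyncScopeProvisioning(conn, scope);
                if (!provisioning.ScopeExists("AppScope"))
                    provisioning.Apply(); // adds tracking tables and triggers
            }

            // Push changes both ways; conflicts use the framework defaults here.
            var orchestrator = new SyncOrchestrator
            {
                LocalProvider = new SqlSyncProvider("AppScope", db1),
                RemoteProvider = new SqlSyncProvider("AppScope", db2),
                Direction = SyncDirectionOrder.UploadAndDownload
            };
            var stats = orchestrator.Synchronize();
            Console.WriteLine("Uploaded: {0}, Downloaded: {1}",
                stats.UploadChangesTotal, stats.DownloadChangesTotal);
        }
    }
}
```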
As for accessing your data over the LAN, this post has a good overview of what it takes to enable remote access to your SQL Server Express.

Reading data from an Oracle DB on the iPhone

I would like to read/write data from my Oracle DB in my iPhone code.
Can you suggest some methods for doing this?
One possible solution is to provide your iOS app with a REST API and implement methods to read/update/delete your model entities.
If you accessed the database directly from your iOS app, you would have to deploy a new version of the app for every change to your model. With a REST API in between, you can change the model without changing the parameters or responses of your services.
Don't.
Database connections generally expect to be reliable. Connections from an iPhone aren't.
Also, any DB administrator would tell you that the first step to ensuring database security is to lock down the number of places from which the database can be directly accessed. This is why you never (or should never) see client devices talking directly to a database.
Instead, implement an intermediary (such as a web service) that accepts, e.g., HTTPS connections from the iPhone in the usual manner (NSURLConnection, etc.) and does the actual database heavy lifting itself. I'm not an Oracle expert, but I would assume that they have some products that help you do this with relatively little effort given how common a task it is. If not, it should be fairly straightforward for you to implement your own in Java, Python, or a language of your choosing.
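To make that concrete, here is a hedged sketch of such an intermediary in C# (one "language of your choosing"); the Oracle connection string, table, and columns are placeholders, and a real service would add authentication, TLS, and error handling before an iPhone ever talked to it:

```csharp
// Hypothetical minimal intermediary: the phone speaks HTTP(S) to this service,
// and only this service talks to Oracle. Connection string, table, and columns
// are placeholders.
using System;
using System.Net;
using System.Text;
using Oracle.ManagedDataAccess.Client;

class CustomerService
{
    static void Main()
    {
        var http = new HttpListener();
        http.Prefixes.Add("http://+:8080/customers/");
        http.Start();
        while (true)
        {
            var ctx = http.GetContext();
            var json = new StringBuilder("[");
            using (var conn = new OracleConnection("User Id=app;Password=secret;Data Source=orcl"))
            {
                conn.Open();
                var cmd = new OracleCommand("SELECT id, name FROM customers", conn);
                using (var r = cmd.ExecuteReader())
                {
                    bool first = true;
                    while (r.Read())
                    {
                        if (!first) json.Append(",");
                        first = false;
                        json.AppendFormat("{{\"id\":{0},\"name\":\"{1}\"}}",
                            r.GetDecimal(0), r.GetString(1));
                    }
                }
            }
            json.Append("]");
            byte[] bytes = Encoding.UTF8.GetBytes(json.ToString());
            ctx.Response.ContentType = "application/json";
            ctx.Response.OutputStream.Write(bytes, 0, bytes.Length);
            ctx.Response.Close();
        }
    }
}
```

On the iPhone side you then fetch and parse that JSON with NSURLConnection (or NSURLSession) as usual, and the database credentials never leave the server.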

Using the Entity Framework with multiple identical databases

I have a system with two identical databases. One is for back-of-house work, where data is imported, edited, and generally worked on. Once the data in the first database is as required, it is copied to the second database, which drives a public-facing (read-only) site.
So once a month or so I will need to push data from one database to the other. I'd like to drive all of this with EF. Is that reasonable? Can EF do this kind of thing, or will I get stuck part way down the line?
It's probably doable, but frankly, EF (or any other ORM) is not really suited for this kind of task. If you do decide to implement your synchronization tool with EF, at least make sure to turn off change tracking.
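For example, a wipe-and-reload copy with tracking turned off could look roughly like this (the context, entity, and connection-string names are placeholders, and identity-column primary keys would need extra handling such as SET IDENTITY_INSERT or non-generated keys):

```csharp
// Rough sketch (EF6): copy a table from the back-of-house database to the
// public one, reading with change tracking off. MyContext, Product, and the
// connection-string names are placeholders.
using System.Data.Entity;
using System.Linq;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class MyContext : DbContext
{
    public MyContext(string nameOrConnectionString) : base(nameOrConnectionString) { }
    public DbSet<Product> Products { get; set; }
}

class PushToPublic
{
    static void Main()
    {
        using (var backOfHouse = new MyContext("name=BackOfHouse"))
        using (var publicSite = new MyContext("name=PublicSite"))
        {
            // No tracking on the read side, no change detection on the write side.
            var rows = backOfHouse.Products.AsNoTracking().ToList();
            publicSite.Configuration.AutoDetectChangesEnabled = false;

            // The public copy is read-only, so wipe-and-reload is acceptable here.
            publicSite.Database.ExecuteSqlCommand("DELETE FROM Products");
            publicSite.Products.AddRange(rows);
            publicSite.SaveChanges();
        }
    }
}
```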
I wouldn't dismiss Yuri's suggestion (simply using a scheduled backup/restore), if the databases are really identical. It's certainly the easiest to implement!
Another solution would be to use a database synchronization tool, like SQL Server Integration Services.

Strategies for "Always-Connected" Windows Client Data Architecture

Let me start by saying that this is my first post here, it is a bit lengthy, and I haven't done Windows Forms development in years... with that in mind, please excuse me if this isn't directly a programming question, and please bear with me, as I really need the help!
I have been asked to develop a Windows Forms app for our company that talks to a central (local area network) Linux server hosting a PostgreSQL database. The app is to allow users to authenticate themselves to the system and thereafter conduct the usual transactions with the PG database. Ordinarily I would propose writing a WebForms app against Mono, but the clients need to utilise local resources such as USB peripheral devices, so that is out of the question. While it might not seem clear, my questions are italicised below:
Dilemma #1:
The application is meant to be always connected. How should I structure my DAL/BLL: should it reside on the server or with the client?
Dilemma #2:
I have been reading up on Client Application Services (CAS), and it seems like a great fit for authentication, as everything is exposed via URIs. I know that a .NET data provider exists for PostgreSQL, but I'm not too sure whether CAS will all work with a Linux (Debian) server. Believe me, I would get my hands dirty and try it myself, but I need to come up with a logical design first before resources are allocated to me for "trial purposes"!
Dilemma #3:
If the DAL/BLL is to reside on the server, is there any way I can create data services and expose only those services to authenticated clients? There is a (security) requirement whereby a connection string with a username and password for the database cannot be present on any client machine, even if security on the database side is quite rigid. I'm guessing that the only way for this to work would be to create the various CRUD data service methods exposed by an ASP.NET app, and have the Windows Forms app request data from, or persist data to, the ASP.NET app (through a URI) and have that return a result set or value. Would I be correct in assuming this? Should I be looking into WCF Data Services? And will WCF work with a non-SQL-Server database?
Thank you for taking the time out to read this, but know that I am desperately seeking any advice on this! THANKS A MILLION!!!!
EDIT:
I am also considering using NHibernate as my ORM.
Some parts of your questions are complicated and beyond my expertise. However, in general you can do almost anything you put effort into, CAP theorem and the like aside.
DAL/BLL logic can in general reside in any of the tiers. I put a lot of it in my database and some in the middle tier, but that is to allow re-use across different environments, which may or may not be a goal for you. The thing is, I would think carefully through the separation-of-concerns issues here and what sort of centralization of logic you want. The further back you push the logic, the more re-usable it becomes, but this is not always a free tradeoff.
I am not entirely familiar with CAS, but from what I saw on the MSDN site it looks like AJAX-style machinery. That could be wrong, but if it is right, such requests may be stateless, which could be a problem if you need a constant connection.
On the whole, based on what you are saying, it sounds cleanest to build a two-tier rather than a three-tier app and have the DAL/BLL sit on the client, possibly supported by stored procedures on the server. You can then set PostgreSQL up to authenticate against whatever you use on your network (KRB5 against AD is what I would recommend). This simplifies your data access and lets you control permissions based on authentication against the database: since you can authenticate users against AD, you can set permissions accordingly.
One important consideration is going to be the number of connections. PostgreSQL has some code paths where every current connection must be checked and iterated through, and connection startup and tear-down overhead can be significant in some cases, so one important decision will involve connection pooling. Whether you use connection pooling to boost performance will depend on what you are doing, but I have seen cases where PostgreSQL handled 600 connections without serious problems.
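To illustrate both points on the client side: with a recent version of Npgsql the connection string can request GSSAPI/Kerberos authentication (so no password is stored on the client) and bound the connection pool. The host and database names are placeholders, and the server needs matching gss entries in pg_hba.conf plus Kerberos set up on the Debian box:

```csharp
// Sketch: no password on the client. PostgreSQL authenticates the logged-on
// AD user via Kerberos (requires "gss" entries in pg_hba.conf server-side),
// and Npgsql's built-in pool bounds the number of server connections.
using Npgsql;

static class Db
{
    // Hypothetical connection string; "Integrated Security=true" asks Npgsql
    // to use GSSAPI/SSPI instead of a stored password.
    const string ConnString =
        "Host=dbserver;Database=app;Integrated Security=true;" +
        "Pooling=true;Maximum Pool Size=20";

    public static NpgsqlConnection Open()
    {
        var conn = new NpgsqlConnection(ConnString);
        conn.Open(); // reuses a pooled physical connection when one is free
        return conn;
    }
}
```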