Convert an always-connected application to an occasionally connected application - enterprise-library

I have an existing client-server 3-tier application with the following stack:
Smart-Client (Win-Forms)
IIS/ASP.NET
Sql server
Some of the data is stored in Entity–attribute–value (EAV) model.
All primary keys are integer identity columns.
Database operations are mostly performed using Stored procedures.
I am tasked with converting this application into an occasionally connected application (OCA).
There should be no issues with installation or resource limitations on the clients.
This is the first such project for me.
I have done some reading about
ms sync framework
Enterprise library / occasionally connected smart clients
SQL server replication
In order to preserve existing code and limit the change impact, I am considering installing the 3-tier application on each client and using the Sync Framework at the web service (WS) layer to handle synchronization, with one master server to which all synchronizations refer.
Does this solution look feasible?
Are there any other resources regarding converting an always-connected 3-tier application to an occasionally connected application?
Thank you.

It should be feasible, and without much change to your app; you just have to install a local database on each of your clients.
However, you're using identity columns. Unless you partition your identity values (client 1 gets 1-1000, client 2 gets 1001-2000, and so on), you will get duplicate IDs when you upload them; a sketch of reseeding each client into its own range follows below.
Have a look at this: Database Sync: SQL Server and SQL Express N-Tier with WCF
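For example, if you keep the integer identity columns, each client's local database can be reseeded into its own non-overlapping range before it starts inserting offline. A minimal sketch of that idea, assuming a per-client SQL Server / SQL Express database and using the Node mssql package purely for illustration (table name, range size and credentials are made up); the same DBCC CHECKIDENT statement could just as well be run from a stored procedure or a SqlCommand in the existing .NET code:

    import sql from "mssql";

    // Hypothetical range size: client 1 owns 1..1,000,000, client 2 owns 1,000,001..2,000,000, etc.
    const RANGE_SIZE = 1_000_000;

    // Reseed the identity of one replicated table so this client's new rows
    // fall inside its own range and cannot collide with other clients on upload.
    async function reseedForClient(clientId: number, table: string): Promise<void> {
        const seed = (clientId - 1) * RANGE_SIZE; // start of this client's range
        const pool = await sql.connect({
            server: "localhost",                  // assumed local SQL Express instance
            database: "MyAppLocal",               // hypothetical local database name
            user: "sync_user",                    // placeholder credentials
            password: "change-me",
            options: { trustServerCertificate: true },
        });
        // DBCC CHECKIDENT (RESEED) sets the value from which SQL Server issues the next identity.
        await pool.request().query(`DBCC CHECKIDENT ('${table}', RESEED, ${seed})`);
        await pool.close();
    }

    // Client #3 will generate IDs for dbo.Orders starting at 2,000,001.
    reseedForClient(3, "dbo.Orders").catch(console.error);

GUID keys or composite (client id, local id) keys are the other common ways around the same collision problem.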

Related

What are the differences (CPU, runtime, or otherwise) between using pg pool and pgp as a database connection in an express server

I've created a few apps which utilize a postgres database, but in all of those projects, I've either used the pool or client function from the pg npm package. Recently I came across the pg-promise node package, and was just wondering if there were any drawbacks to using pg-promise over pool or client. I'm just worried about changes in runtime that would affect how many clients the app could service at one time.
pg-promise is "Built on top of node-postgres". You're still using the same pools and clients.
Nothing changes regarding the number of connections your database will be able to handle, and unless you take a different approach to building your application (for example, starting to use transactions, or using individual clients instead of pooling), nothing will change regarding the number of clients your app will be able to serve; a short comparison is sketched below.
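To make that concrete, here is a small sketch (the connection string and pool size are placeholders) showing that both libraries end up configuring the same node-postgres pool, so connection capacity is controlled the same way in either case:

    import { Pool } from "pg";
    import pgPromise from "pg-promise";

    const connectionString = "postgres://user:pass@localhost:5432/appdb"; // placeholder

    async function main(): Promise<void> {
        // Plain node-postgres: an explicit pool capped at 10 clients.
        const pool = new Pool({ connectionString, max: 10 });
        console.log((await pool.query("SELECT NOW()")).rows[0]);

        // pg-promise: same driver and the same pool underneath; `max` is passed straight through.
        const pgp = pgPromise();
        const db = pgp({ connectionString, max: 10 });
        console.log(await db.one("SELECT NOW()"));

        await pool.end();
        await pgp.end();
    }

    main().catch(console.error);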

Symmetric DS simple configuration guidance

Over the past couple of weeks I have been prototyping some examples in SymmetricDS, and I'm looking for some guidance and examples because I am really running into some walls. I have used the server and Android examples successfully, so I don't need any assistance with setup or getting the basics working. It is a complex tool and I'm still learning it as well.
I am trying to set up an environment where all the clients, which run on Android devices, sync up to a server. I know it's fairly straightforward to do a setup with one master syncing both ways with multiple clients, as the examples they provide do.
What I am trying to do is multiple masters to multiple clients. Essentially I want a database on the server for each client. I'll attach a diagram to try to help explain: I want a database for each store, so store #1 has a master DB on the server and it syncs both ways with the client device.
[server diagram]
SymmetricDS requires a central node to store the configuration. I would recommend having a central node with a group of databases that connect to the central database, and connecting each Android application to its own database. This topology will allow you to configure what data syncs from the central node to each of those databases and what goes back.
On the router from client to server you can set the target catalog to a variable: $(sourceExternalId). This will use the client's external id as the database name on your server.
If you also need to replicate data back down, you can set the external select on the triggers at the server. This needs to be an expression against your server database that evaluates to the current database. It fires when a change occurs on the server database and, during capture, populates the external_data column on sym_data with the database in which the change occurred. You would then change the router from server to client to a column match router type, with the router expression EXTERNAL_DATA=:EXTERNAL_ID. This ensures that the data is only sent to the appropriate client; a configuration sketch follows below.
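As a rough sketch of what that configuration could look like on the central node (node group names, table name, channel and connection details are all made up, the central database is assumed to be Postgres, and the external select shown is only a placeholder that must evaluate to the database the change occurred in on your particular DBMS):

    import { Client } from "pg";

    // Hypothetical node groups: "corp" (central server) and "store" (Android clients).
    const config = [
        // Client -> server: route captured rows into the per-store catalog named
        // after the client's external id.
        `insert into sym_router
             (router_id, source_node_group_id, target_node_group_id,
              target_catalog_name, router_type, create_time, last_update_time)
         values
             ('store_to_corp', 'store', 'corp', '$(sourceExternalId)', 'default',
              current_timestamp, current_timestamp)`,

        // Server side: external_select records which database the change came from
        // into sym_data.external_data during capture (placeholder expression).
        `insert into sym_trigger
             (trigger_id, source_table_name, channel_id, external_select,
              create_time, last_update_time)
         values
             ('sale_trigger', 'sale', 'default', 'select current_database()',
              current_timestamp, current_timestamp)`,

        // Server -> client: column match router so captured rows only go to the
        // client whose external id matches the captured database name.
        `insert into sym_router
             (router_id, source_node_group_id, target_node_group_id,
              router_type, router_expression, create_time, last_update_time)
         values
             ('corp_to_store', 'corp', 'store', 'column',
              'EXTERNAL_DATA=:EXTERNAL_ID', current_timestamp, current_timestamp)`,
    ];

    async function applyConfig(): Promise<void> {
        const client = new Client({ connectionString: "postgres://user:pass@central-host/symmetric" });
        await client.connect();
        for (const stmt of config) {
            await client.query(stmt);
        }
        await client.end();
    }

    applyConfig().catch(console.error);

You would still need sym_trigger_router rows to tie each trigger to its router, and sym_node_group_link entries for the two directions.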

MongoDB connection overhead on client side

We are evaluating different alternatives for multi-tenancy in our platform. We think that one database per customer is the way to go as data structure and requirements are completely different from one customer to another, and we want to keep them as isolated as possible.
However we are facing the question of how to manage the connection to multiple databases. We don't want to have one app instance per customer. Instead we want to have a pool of app instances handling requests for all our customers and use the correct database depending on the customer.
Our concern is whether keeping connections open to many (maybe thousands of) databases will cause a performance issue. We are mainly worried about memory usage, so we are wondering what the overhead is on the client side when opening a connection to the MongoDB server.
We are also thinking about moving the database access to a separate service, which would be responsible for handling the database connections for all customers. In this case, is there an existing tool that allows that kind of "multiplexing" of MongoDB databases?
Some additional notes:
We discarded sharding. It won't fit our needs; we need different databases.
Databases will be on different servers with reserved resources. This means every database runs its own mongod process, and we need separate connections.
We use the Java driver.

How can I store and query remote data from my iPhone app?

My app reads data from two sources: a local SQLite file and a remote server, which is a clone of the local db but with lots of pictures. I do not write to the server database, but I do need multiple simultaneous fetch operations.
What DBMS should I use for storing information on the server?
It needs to be very easily used from an iPhone app, be reliable, etc.
Talking to a remote server should not be tied to any platform like iOS. If you have control over the remote db server, the best bet IMO is crafting a RESTful API in which you express your queries; the server processes them and sends you the pictures/records using the proper content type. If you do NOT have such control over the remote db, you'll have to stick to the API the db hoster provides. There are plenty of such "on the cloud" db hosters (including NoSQL solutions) that give you a web-services interface to your db. MongoLabs is one such provider for MongoDB (which is a NoSQL db, meaning no schemas and no bounds on the structure of a "table"). You can continue to stick with SQLite on the client side.
You seem to have two sources of data: local storage and a remote server.
This question on SO might help you decide on an approach for storing data on the server.
Once you have downloaded data using something like the NSURLConnection class, the images could be stored in the filesystem using writeToFile: or the like, i.e. the
- (BOOL)writeToFile:(NSString *)path atomically:(BOOL)flag method.
You might like to save the rest of the data in SQLite. We used SQLite and the Core Data framework to save data for one of our applications and it worked fine for us. Core Data allowed us to interact with the database without writing actual SQL queries.
The iPhone client resides on the phone, while on the server side we might have a database and a web service interacting with the db. The web service itself might be implemented in a scripting language like Python or PHP. The client interacts with the web service, which might return data in formats like XML or JSON, so there is no direct communication between the client and the db; the client does, however, implement network communication code to talk to the web service. A minimal sketch of such a service follows below.
This page shows how to consume an XML based web service.
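To make the server side of that concrete, here is a minimal sketch of such a web service, assuming Node/Express purely for illustration (routes, field names and the image directory are invented); the iPhone client would just issue HTTP GETs, parse the JSON, and write the image bytes to a file as described above:

    import express, { Request, Response } from "express";
    import path from "path";

    const app = express();
    const IMAGE_DIR = "/var/data/images"; // hypothetical location of the picture files

    // JSON records the client can mirror into its local SQLite / Core Data store.
    app.get("/api/items", (_req: Request, res: Response) => {
        // A real service would query the database here; hard-coded for the sketch.
        res.json([{ id: 1, title: "First item", imageUrl: "/api/items/1/picture" }]);
    });

    // Pictures are returned with a proper image content type rather than being
    // base64-encoded into the JSON payload.
    app.get("/api/items/:id/picture", (req: Request, res: Response) => {
        res.type("image/jpeg");
        res.sendFile(path.join(IMAGE_DIR, `${req.params.id}.jpg`));
    });

    app.listen(8080, () => console.log("API listening on :8080"));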

iPhone: Connecting to database over Internet?

I've been talking with someone about the possibility of an iPhone development contract gig. All I really know at this point is that there is a company that wants to make an iPhone app that will hit their internal database. I'm not sure what the database type is (Oracle, MySQL, etc.).
I wanted to know, if the database type is Oracle or MySQL, whether there is a big learning curve for connecting to one of these across the internet.
If it's a real pain I may do more research before accepting the contract.
I would advise against directly accessing the database from the iPhone application.
Usually, you would create a web service which accesses the database, and then you consume that web service from the iPhone application.
Create a web service. This allows you to make the iPhone app more of a thin client. Let the application push commands to the web service for processing and interaction with the database, returning only the data needed by the app (a minimal sketch of such a service follows below).
This option is better for the app, the database, and the customer's security.
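A minimal sketch of that kind of thin layer, assuming a Node/Express service with the mysql2 package just for illustration (credentials, schema and route are placeholders); the phones only ever see HTTP, port 3306 stays behind the firewall, and the endpoint returns only the columns the app actually needs:

    import express, { Request, Response } from "express";
    import mysql from "mysql2/promise";

    // Connection details are placeholders; the pool lives next to the database,
    // so MySQL itself is never exposed to the phones.
    const pool = mysql.createPool({
        host: "127.0.0.1",
        user: "app_reader",
        password: "change-me",
        database: "internal_db",
        connectionLimit: 5,
    });

    const app = express();

    // The app asks for "customer N" and gets back only the fields it needs, as JSON.
    app.get("/customers/:id", async (req: Request, res: Response) => {
        const [rows] = (await pool.execute(
            "SELECT id, name, phone FROM customers WHERE id = ? LIMIT 1",
            [req.params.id],
        )) as any[];
        res.json(rows[0] ?? null);
    });

    app.listen(8080, () => console.log("web service listening on :8080"));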
You can easily perform the connection over the internet, the same way you would locally, but you are opening the database up to attacks if it will accept communication from any remote IP address. Typically you would just connect via a socket to the server's remote IP address on the open port; MySQL's default port is 3306.
I would recommend against this sort of system in general unless there is some critical reason they want their internal database exposed to the world's hacker community.
What I am doing is creating a web service using Sinatra to access the online database.
Those answers from 2009 are mostly obsolete now.
http://ODBCrouter.com/ipad (new) has Xcode client-side ODBC libraries, header files and multi-threaded Objective-C objects that let your apps send SQL to server-side ODBC drivers and get back binary results. This reduces the need to stand up and separately maintain SOAP/REST servers, which can get pretty frightening to maintain after a while anyway.
The XML schemes were okay for transferring static configurations to mobile devices "every once in a while", but XML was meant for infrequent inter-company transfers in a "server environment" (with power cords, wired networks and air conditioning) and is definitely not efficient for frequent database queries coming in from n copies of a mobile app. There are third-party JSON libraries that help, but even with JSON, everything has to be encoded (and decoded) between the binary representation in the database and a text representation on the server; that is fine if the data is going to be shown to the user in a web browser anyway, but not if the mobile app is going to translate it right back into binary so that it can perform calculations behind the scenes. Aside from the higher network overhead and the extra battery power the mobile CPU will draw with XML and JSON, it will also make you buy more RAM and CPU power on the back-end server sooner than just using an ODBC connection to the database.