Strategies for "Always-Connected" Windows Client Data Architecture - postgresql

Let me start by saying: this is my first post here, it's a bit lengthy, and I haven't done Windows Forms development in years... with that in mind, please excuse me if this isn't directly a programming question, and please bear with me as I really need the help!
I have been asked to develop a Windows Forms app for our company that talks to a central (local area network) Linux server hosting a PostgreSQL database. The app is to allow users to authenticate themselves into the system and thereafter conduct the usual transactions with the PG database. Ordinarily, I would propose writing a WebForms app against Mono, but the clients need to utilise local resources such as USB peripheral devices, so that is out of the question. In case it isn't clear, my specific questions appear under each dilemma below:
Dilemma #1:
The application is meant to be always connected. How should I structure my DAL/BLL: should it reside on the server or with the client?
Dilemma #2:
I have been reading up on Client Application Services (CAS), and it seems like a great fit for authentication, as everything is exposed via URIs. I know that a .NET Data Provider exists for PostgreSQL, but I'm not too sure whether CAS will all work against a Linux (Debian) server. Believe me, I would get my hands dirty and try it myself, but I need to come up with a logical design first before resources are allocated to me for "trial purposes"!
Dilemma #3:
If the DAL/BLL is to reside on the server, is there any way I can create data services and expose only those services to authenticated clients? There is a (security) requirement whereby a connection string with a username and password to the database cannot be present on any client machine, even if security on the database side is quite rigid. I'm guessing that the only way for this to work would be to create the various CRUD data service methods, expose them from an ASP.NET app, and have the Windows Forms app request or persist data through the ASP.NET app (via a URI), which would return a result set or value. Would I be correct in assuming this? Should I be looking into WCF Data Services? And will WCF work with a non-SQL Server database?
Thank you for taking the time to read this, and know that I am desperately seeking any advice on this. Thanks a million!
EDIT:
I am also considering using NHibernate as my ORM.

Some parts of your questions are complicated and beyond my expertise. However, in general you can do almost anything you put effort into, CAP theorem and the like aside.
DAL/BLL code can in general reside in any of the tiers. I put a lot of it in my database and some in the middle tier, but that is to allow reuse in different environments, which may or may not be a goal for you. The thing is, I would think carefully through the separation-of-concerns issues here and decide where you want to centralize your logic. The further back you push it, the more reusable it becomes, but that is not always a free tradeoff.
I am not entirely familiar with CAS, but from what I saw on the MSDN web site it looked like AJAX kinds of stuff. I could be wrong, but if that's right, such requests may be stateless, which could be a problem if you need a constant connection.
On the whole, based on what you are saying, it sounds cleanest to build a two-tier rather than a three-tier app and have the DAL/BLL sit on the client, possibly supported by stored procedures on the server. You can then set PostgreSQL up to authenticate against whatever you use on your network (Kerberos/KRB5 is what I would recommend if you run Active Directory). This simplifies your data access, and it allows you to control permissions based on the authentication against the database: since you can authenticate users via AD, you can set database permissions accordingly.
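To make that concrete, here is a minimal sketch (not a definitive implementation) of what a passwordless client connection could look like with the Npgsql .NET data provider, assuming the server's pg_hba.conf is configured for GSSAPI/SSPI against Active Directory. The host, database, and user names are placeholders, and the exact connection-string keywords vary by Npgsql version (newer versions negotiate GSSAPI automatically):

```csharp
using System;
using Npgsql;

class KerberosConnectionDemo
{
    static void Main()
    {
        // No password appears in the connection string: with Integrated Security,
        // Npgsql authenticates via SSPI/GSSAPI using the client's Windows (AD) logon,
        // and pg_hba.conf on the server maps that Kerberos principal to a database role.
        var connString = "Host=pgserver;Database=appdb;Username=jsmith;Integrated Security=true";

        using (var conn = new NpgsqlConnection(connString))
        {
            conn.Open();
            using (var cmd = new NpgsqlCommand("SELECT current_user", conn))
            {
                Console.WriteLine("Connected as: " + cmd.ExecuteScalar());
            }
        }
    }
}
```

The key point for your dilemma #3 is that no credentials are stored on the client; the user's Kerberos ticket does the authenticating.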
One important consideration is going to be the number of connections. PostgreSQL has some code paths where every current connection must be checked and iterated through, and connection startup and teardown overhead can be significant in some cases. So one important decision will involve connection pooling. Whether connection pooling will boost performance depends on what you are doing, but I have seen cases where PostgreSQL handled 600 connections without serious problems.
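If you use Npgsql, pooling is typically controlled from the connection string. A hedged sketch (keyword names and defaults vary by Npgsql version):

```csharp
using Npgsql;

class PoolingDemo
{
    static void Main()
    {
        // Pooling keeps a set of physical connections open and hands them out on
        // demand, avoiding PostgreSQL's per-connection startup/teardown overhead.
        var connString = "Host=pgserver;Database=appdb;Username=app;" +
                         "Pooling=true;Minimum Pool Size=5;Maximum Pool Size=50";

        for (int i = 0; i < 100; i++)
        {
            using (var conn = new NpgsqlConnection(connString))
            {
                conn.Open();   // served from the pool after the first few iterations
                // ... run queries ...
            }                  // Dispose() returns the connection to the pool
        }
    }
}
```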

Related

Making DB Query as a Microservice API

We are currently using a direct DB connection to query MongoDB from our scripts and retrieve the required data.
Is it advisable / best practice to turn the data retrieval from the DB into a microservice?
It does until it doesn't :)
A service needs to get its data from somewhere, and a database is a good start. If you have high loads, you may find that you need to add a cache in the middle; see this post from Instagram engineering: https://instagram-engineering.com/thundering-herds-promises-82191c8af57d
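The core trick in that post is to coalesce concurrent cache misses so that only one caller hits the database while the rest wait on the same pending result. A minimal in-process sketch of the idea in C# (the type and method names are mine, not from the post):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Coalesces concurrent cache misses for the same key: the first caller performs
// the expensive database load, and every other caller awaits that same Task
// instead of stampeding the database when a hot entry expires.
class CoalescingCache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, Lazy<Task<TValue>>> _inFlight =
        new ConcurrentDictionary<TKey, Lazy<Task<TValue>>>();

    public Task<TValue> GetOrLoadAsync(TKey key, Func<TKey, Task<TValue>> load)
    {
        // Lazy<> guarantees 'load' runs at most once per key, even under races.
        var lazy = _inFlight.GetOrAdd(key, k => new Lazy<Task<TValue>>(() => load(k)));
        return lazy.Value;
    }

    // Call when the underlying data changes so the next read reloads it.
    public void Invalidate(TKey key)
    {
        _inFlight.TryRemove(key, out _);
    }
}

// Usage: var user = await cache.GetOrLoadAsync(42, id => LoadUserFromDbAsync(id));
```

A real cache would add expiry; this only shows the promise/coalescing idea.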
Edit (after comment):
Generally speaking, a service should own its database, and other services shouldn't access another service's database directly, only via its API. The idea is to keep services autonomous and enable them to evolve independently.
Depending on the size of the microservice, that's not always practical, since the overhead of having a separate service can outweigh the utility it provides (I call these nanoservices). Also, if you have a lot of services, you don't want to allow each one to talk to any other (not even via the DB), since you just get a huge mess. The way I see it, there should be clear logical boundaries (services or microservices), and then within each such logical service you may find that it makes sense to have more than one "part" (which I call aspects), e.g. because they have different scaling needs or different suitable technologies. When you set things up this way, aspects can access the same database while services shouldn't (and you can still tame the chaos :))
One last thing to think about: who said an API has to be a REST API? You can add views on top of the data that belongs to another service, and as long as you treat them like an API (security, versioning, etc.) you can have other services access those as well.
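As a hedged illustration of the view-as-API idea on PostgreSQL (every schema, view, and role name below is hypothetical), the owning service can publish a versioned view and grant access to it alone, keeping its base tables private:

```csharp
using Npgsql;

class ViewAsApiSetup
{
    static void Main()
    {
        // The "orders" service owns the orders schema and publishes a stable,
        // versioned view for the reporting service. Treating the view like an
        // API means controlling its shape (versioning) and its readers (security)
        // without ever exposing the base tables.
        var ddl = @"
            CREATE VIEW public.orders_api_v1 AS
                SELECT id, customer_id, total, created_at   -- internal columns stay hidden
                FROM orders.orders;

            GRANT SELECT ON public.orders_api_v1 TO reporting_service;
            REVOKE ALL ON ALL TABLES IN SCHEMA orders FROM reporting_service;";

        using (var conn = new NpgsqlConnection(
            "Host=pgserver;Database=appdb;Username=orders_owner;Integrated Security=true"))
        {
            conn.Open();
            using (var cmd = new NpgsqlCommand(ddl, conn))
                cmd.ExecuteNonQuery();
        }
    }
}
```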

How secure is this security method in postgresql?

For example, I have two databases. One of them is called ecommerce and contains real customer information. The other is called ec1 and basically contains only views over the tables of ecommerce.
We use our ec1 database to connect to our website and apps. How secure is this method in terms of back-end security?
Only exposing ec1 is better than exposing ecommerce, because you can reset ec1 from your "safe" values in case of corruption, and you can keep secret data stored only in ecommerce if it doesn't need to be used by your website or app.
However, this is only a small part of back-end security. Having two different databases, one with the real data and one with views, doesn't matter much if someone can access your server or can corrupt your data.
I mean, if someone found a way to read data they should not be authorized to read, that is bad whether it comes from ec1 or from ecommerce.
So yes, exposing only views is a BETTER solution, but nothing can be said about your overall security, because overall security mostly doesn't depend on that.
EDIT: A detailed explanation of back-end security is way beyond the scope of a simple Stack Overflow answer (and I am probably not the best teacher), but for basic server security you must take care of:
- A firewall that blocks every request except the ones from your web apps.
- Up-to-date software.
- Good database passwords.
- The user your application queries run as must only be able to perform operations on the ec1 database, while the views should be generated by a cron job running as a different user (see the sketch after this list).
These are the main security-enhancement tips that come to mind.
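A minimal sketch of that last tip, run once by an administrator (the role names and grants are illustrative; adapt them to your schemas). Afterwards the web app connects as app_user, which can read ec1 but cannot touch ecommerce at all, while the cron job that rebuilds the views runs as its own, more privileged role:

```csharp
using Npgsql;

class RestrictedAppUserSetup
{
    static void Main()
    {
        var ddl = @"
            CREATE ROLE app_user LOGIN PASSWORD 'use-a-strong-password-here';
            REVOKE CONNECT ON DATABASE ecommerce FROM PUBLIC;  -- PUBLIC holds CONNECT by default
            GRANT CONNECT ON DATABASE ec1 TO app_user;
            GRANT USAGE ON SCHEMA public TO app_user;
            GRANT SELECT ON ALL TABLES IN SCHEMA public TO app_user;  -- covers the views too";

        using (var conn = new NpgsqlConnection(
            "Host=pgserver;Database=ec1;Username=admin;Integrated Security=true"))
        {
            conn.Open();
            using (var cmd = new NpgsqlCommand(ddl, conn))
                cmd.ExecuteNonQuery();
        }
    }
}
```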

Connecting to Oracle from iOS App

I know this has been asked a few times, but there seems to be no clear answer... I have been searching on this for the past three days or more.
There seem to be two ways to connect to an Oracle database from an iOS app:
ODBC Client
I need to compile ODBC (which ODBC?) using gcj for ARM. I think this is the hard way, fraught with errors, but possible with quite an effort.
Web Service
Connect from the app to a web service, and from the web service to the Oracle DB.
Are these the only two methods available, or are there others?
A few questions on the two methods:
a. Which is more secure?
b. Will my company's security department oppose to any of the above?
c. Which is more performant?
d. Which of the above does one normally use?
Web services are the answer; you do not want people connecting directly to the database from a mobile device. A web server adds one extra layer of security, as well as the ability to handle simultaneous requests without stressing the database directly.
a. Which is more secure?
Web services, as explained above.
b. Will my company's security department oppose to any of the above?
Yes, the security department will insist on not opening the Oracle port for direct connections, unless they already have it open.
c. Which is more performant?
Web services; setting up the right cache policies in a web server can spare the database from redundant work.
d. Which of the above does one normally use?
Web services, because they offer you great advantages in security and performance. Not only that: web services are reusable and can be accessed by many different platforms. Think of the future; you might later want to serve your application on Android devices, and web services will save you a lot of development time.
Many of today's top applications in the market use web services; think about it.
Google Maps is a great example of how powerful web services are!
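None of the answers above shows code, so here is a rough sketch of the pattern in C#, using ASP.NET Core minimal APIs and the Oracle.ManagedDataAccess provider (the route, table, and connection details are all hypothetical). The iOS app speaks only HTTPS to this endpoint; the Oracle port stays closed to the outside:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Oracle.ManagedDataAccess.Client;

var app = WebApplication.CreateBuilder(args).Build();

// The web tier is the only thing reachable from the Internet; it validates the
// request, queries Oracle with a bound parameter, and returns a small JSON DTO.
app.MapGet("/products/{id:int}", (int id) =>
{
    using var conn = new OracleConnection(
        "User Id=app;Password=secret;Data Source=oradb:1521/ORCLPDB1");
    conn.Open();

    using var cmd = new OracleCommand(
        "SELECT name, price FROM products WHERE id = :id", conn);
    cmd.Parameters.Add(new OracleParameter("id", id));

    using var reader = cmd.ExecuteReader();
    if (!reader.Read())
        return Results.NotFound();

    // Per answer (c): Cache-Control headers or a reverse proxy in front of this
    // endpoint can absorb repeat hits before they ever reach the database.
    return Results.Ok(new { Name = reader.GetString(0), Price = reader.GetDecimal(1) });
});

app.Run();
```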
It's not a good idea to connect to your database directly from your app. It can be secure if you create an account that can do nothing but SELECT, but there are some other things to consider.
Why burden the app with the Oracle client?
If you have many users, you have to worry about Oracle handling a huge number of simultaneous connections. With a RESTful API, requests are stateless.
If you decide to change your schema, you'll also have to change your app. When you place a service in between, the app is no longer dependent on the schema.
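That schema independence usually comes down to the service returning a stable DTO instead of raw rows. A small sketch (the column names and types are invented for illustration):

```csharp
using System.Data;

// The app consumes this stable contract; only the service knows the schema.
public class CustomerDto
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
}

public static class CustomerMapper
{
    // If the schema changes (say CUST_NM is split into two name columns),
    // only this mapping changes; the JSON the app sees stays the same.
    public static CustomerDto FromRow(IDataRecord row)
    {
        return new CustomerDto
        {
            Id = row.GetInt32(row.GetOrdinal("CUST_ID")),
            DisplayName = row.GetString(row.GetOrdinal("CUST_NM"))
        };
    }
}
```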
An ODBC connection would require that the Oracle port is open to the Internet, which in the vast majority of cases will not be allowed for security and performance reasons. Even if it were, or even if you establish a secure VPN, direct database access requires that the connection be kept open, which can be problematic when a mobile device moves in and out of network coverage.
HTTP is far more tolerant of unreliable networks and can be encrypted using SSL (HTTPS). The problem with HTTP is that databases do not have direct support for this transport, so most people develop dedicated web services.
I work on a project called SlashDB, which automatically constructs RESTful APIs out of databases. For public APIs you would install /db in a so-called DMZ (a network segment between two firewalls), as described in this blog post.
SlashDB can be configured to allow restricted data access to public users or you can define specific users with varying privileges to data. It is designed as stateless service, which means that you can easily set up multiple nodes behind a load balancer and reverse HTTP proxy for high availability web scale deployments.
Regardless of whether you develop the web service by hand or use our product, you will achieve better scalability, performance, and security for your solution than with a direct client/server approach. I would even argue that REST APIs should be used for internal enterprise data integration solutions, but that's a whole new topic.
I am going to repeat what everyone else said: a REST API is the way to go. Do not connect to the database directly. However, there might be a way to connect to your database that I have never tried myself:
http://odbcrouter.com/iosvsweb#hn_iOS_Open_Database_Connectivity_SDK

Writing to remote database without using web service

I am working on an iPad app and I need to write some data to a remote online database. Can I do this without using a web service? I need some advice.
Thanks in advance.
Technically this is possible; there are remote database drivers for the iPhone platform, for example Flipper.
However, I'd strongly recommend using some kind of "service" to do your database access. This could be a full SOAP/HTTP web service, a RESTful service, or even just a little bit of PHP that you invoke over HTTP or HTTPS. Don't be concerned that developing this service will be lots of work; it need take no more than an hour or two. In fact, with a product such as Worklight it took me literally 15 minutes using the Worklight SQL adapter. (Disclaimer: I work for IBM; we recently acquired Worklight.)
There are several reasons to prefer an intermediary service over direct access to the DB from the client. Here are a couple:
Scalability. Each user's connection to the DB consumes server-side resources; if your app is widely used, you could end up with many tens of thousands of simultaneous connections. The service approach uses web-facing connections to the phone, handled by (for example) web containers designed for high numbers of concurrent sessions, and then funnels down to a few database connections. Even very busy web sites tend to use (and reuse) only a small number (a few tens) of database connections.
Security. It is strongly recommended to avoid making databases directly accessible to the Internet. It's a big topic, but if the database contains any kind of valuable data, the pattern of fronting the database with a service greatly reduces vulnerability.
I recommend using the service Parse. Their service is built specifically to solve the iOS/Android backend problem. I just wrote a blog post about them: Parse, The Best Backend for iPhone SDK.

iPhone + server + production-scale

Okay, so I'm currently developing an iPhone app that I plan to take into production and scale. I'm a bit lost on the whole subject.
What is better to use: Core Data or SQLite? (as the local DB)
Also, can SQLite be used exclusively to communicate with my remote server as well? At first I thought it could, but I've been reading that SQLite isn't great to use on servers that get a massive number of hits.
I've read that Oracle, MySQL, or MSSQL may be better to use on a remote server, and that I can communicate with these servers via REST or SOAP.
I plan to be able to both read and write to a remote server. The files transferred will mostly be small data objects and pictures. Speed is of the essence, so I'd like to know which options are my fastest routes. Of course, I want the option to scale and not have performance take too much of a hit as well.
On the subject of Core Data vs. SQLite, see this question.
SQLite is a small and light embedded SQL database engine. It's not meant to be used in server environments. In general, it's not a good idea for clients to communicate with the database directly over the Internet. It's more common to have some sort of process logic between the client code and the database to do a range of things: validate input, apply business logic, handle security, and so on. You can implement this layer with REST, SOAP, or whatever you like. Since your clients will be mobile devices, an HTTP-based web service (REST or SOAP) is a good idea, as all mobile platforms have built-in API support for HTTP messaging. There are lots and lots of options on the server, depending on what type of server you want to set up and run.
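For a flavour of that in-between "process logic", here is a tiny hedged sketch in C# (the request type, limits, and rules are invented for illustration): the service validates input and applies business rules before anything reaches the database.

```csharp
using System;

// A sliver of the service layer: requests are validated here, so the database
// only ever sees well-formed data, whatever the client sends.
public class PhotoUploadRequest
{
    public string Caption { get; set; }
    public byte[] ImageBytes { get; set; }
}

public class PhotoService
{
    private const int MaxImageBytes = 5 * 1024 * 1024; // business rule: 5 MB cap

    public void Validate(PhotoUploadRequest request)
    {
        if (request == null)
            throw new ArgumentNullException(nameof(request));
        if (request.ImageBytes == null || request.ImageBytes.Length == 0)
            throw new ArgumentException("An image is required.");
        if (request.ImageBytes.Length > MaxImageBytes)
            throw new ArgumentException("Image exceeds the 5 MB limit.");
        if (request.Caption != null && request.Caption.Length > 280)
            throw new ArgumentException("Caption is too long.");
        // Only after validation would the service write to Oracle/MySQL/etc.
    }
}
```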
If you're new to this, maybe you should read something like 'Patterns of Enterprise Application Architecture' by Martin Fowler to get an idea of the design patterns people use to implement the server-side layering.