Suggestions needed for replacement of Oracle SSO 10g in an 11g environment

We're currently using the SSO component of Oracle 10g App Server to authenticate users on our external/internet-facing client "portal" (think something similar to online banking).
SSO uses Oracle Internet Directory (OID) to store its data, and we've been able to use PL/SQL and Java to access and modify the data held in OID (e.g. create/drop users, change/verify passwords, etc.).
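For context, our Java access is essentially plain JNDI against OID. A minimal sketch of the password-verify case (the host, port, and DN below are placeholders, not our real values):

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;

public class OidPasswordCheck {
    /** Verifies a password by attempting an LDAP simple bind as that user. */
    static boolean verifyPassword(String userDn, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://oid.example.com:389"); // placeholder host
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, userDn);
        env.put(Context.SECURITY_CREDENTIALS, password);
        try {
            new InitialDirContext(env).close(); // bind succeeded, so the password is valid
            return true;
        } catch (NamingException e) {
            return false; // bad credentials or directory unreachable
        }
    }

    public static void main(String[] args) {
        // placeholder DN layout; ours differs
        System.out.println(verifyPassword("cn=jsmith,cn=Users,dc=example,dc=com", "secret"));
    }
}
```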
With the advent of 11g, Oracle appears to have "orphaned" SSO: it is still available, but only as an add-on, and it appears to have been superseded by Oracle Access Manager. I'm guessing it will have been dropped altogether by 12g. Plus, it looks pretty difficult to install and get running correctly.
So, I'm wondering whether anyone has faced the same migration problem we have. If so, what did you do?
Alternatively, does anyone have any experience of doing something similar using Oracle Access Manager? Do you think it will do what we want?
Or is there a better road to go down? Is there something else I should be considering?
Sorry for the very broad question, but it's one of those situations where a person's experience of what does and doesn't work can make an enormous difference to our making progress in a timely fashion. Thanks.

From my knowledge, Oracle Internet Directory (OID) is an LDAP-compliant directory, whereas Oracle Access Manager (OAM) is much more complex and consists of two main systems:
Identity System (users, groups, workflows)
Access System (single/multi-domain SSO solution for Web and non-Web-based applications)
Access Manager relies on an Identity Server, which is a stand-alone server process that communicates with any directory server (AD, OID, Sun Directory Server, etc.).
So you can use the new OAM and link it with your existing OID to retrieve users/groups and metadata. All that you could do with OID will be doable with OAM, as it adds more abstraction layers.
But in my opinion, and considering your case, directly accessing LDAP servers (OID, AD, etc.) with a light, "home-made" SSO system is cheaper than relying on those big systems. I think OAM is a useful solution when you have lots of heterogeneous applications (web, non-web, mobile, ...), and/or multiple organizations/domains with links between them, and/or you need a very scalable approach.
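To make the "home-made" option concrete: after a successful LDAP bind you issue the user a signed token (typically a cookie) that your other apps can verify without re-contacting the directory. A minimal sketch, assuming an HMAC-signed token with an expiry (key management and user lookup omitted):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class SsoToken {
    // In practice the key would be loaded from secure configuration, not hard-coded.
    private static final byte[] KEY = "replace-with-a-long-random-secret".getBytes(StandardCharsets.UTF_8);

    /** Issues a token of the form base64(user|expiry).signature */
    static String issue(String user, long expiresAtMillis) throws Exception {
        String payload = user + "|" + expiresAtMillis;
        return Base64.getUrlEncoder().encodeToString(payload.getBytes(StandardCharsets.UTF_8))
                + "." + sign(payload);
    }

    /** Accepts the token only if the signature matches and it has not expired. */
    static boolean verify(String token) throws Exception {
        String[] parts = token.split("\\.", 2);
        if (parts.length != 2) return false;
        String payload = new String(Base64.getUrlDecoder().decode(parts[0]), StandardCharsets.UTF_8);
        long expiry = Long.parseLong(payload.substring(payload.lastIndexOf('|') + 1));
        // Production code should use MessageDigest.isEqual for a constant-time comparison.
        return sign(payload).equals(parts[1]) && System.currentTimeMillis() < expiry;
    }

    private static String sign(String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
        return Base64.getUrlEncoder().encodeToString(mac.doFinal(data.getBytes(StandardCharsets.UTF_8)));
    }
}
```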

Related

Plone usefulness for Backend Development

We have a Python-based server that uses a MongoDB database. Our server programs use RabbitMQ to exchange request/reply packets with many Android apps and perform actions accordingly. In addition, we now also need to create a web portal for the admin staff to let them manipulate the database, upload/download files, view data/statistics, and trigger actions for Android clients. So the database is going to be shared between the portal and the existing server programs.
For the web portal development, I got a recommendation to use Plone. We are comfortable using traditional Node.js. Could anybody guide me on the use of Plone in this context? Is Plone able to communicate with MongoDB and the existing server-side programs?
Plone is a CMS designed around managing web-based content, and it is tightly integrated with the ZODB, a NoSQL database, for storage of its data. If your data is very custom and isn't all about webpages and website navigation etc., or if you need the data to live in a different kind of DB, then Plone probably isn't the right tool for you. This isn't to say it can't be made to do these things, but you would have to learn a lot about its internals to make it do them.

Can we connect external data to k2?

I am new to K2 and have to check how similar it is to MS Access. So I need to know whether we can connect external data, for example from SQL Server, to K2.
Yes, K2 uses SmartObjects to connect to external data sources (like SQL Server).
Absolutely! Connecting with many disparate repositories of data is K2's great strength.
To connect to a SQL Server, you simply have to create an instance of the SQL service broker, with the details of the server and database you want to read from. Then you can create a SmartObject for each table, view, or stored procedure within that SQL Server database that you need to interact with.
The following thread on K2 Community should get you started: http://community.k2.com/t5/K2-blackpearl/How-to-connect-K2-blackpearl-with-MS-SQL-R2/td-p/53993
K2 is not really comparable to Access: it is a larger platform that meets enterprise workflow-automation needs, whereas Access lets you build department-level apps with limited flexibility. So the comparison doesn't hold from a feature-set, product-positioning, or pricing point of view.
K2 has 3 major pillars tightly integrated with each other:
Workflow Engine (manages execution of the steps defined for the process you are automating)
SmartForms (let you build web UIs for your apps and processes)
SmartObjects (an abstraction layer offering a set of out-of-the-box (OOB) connectors that let you read or write data in a variety of external LOB systems - SQL Server, Oracle, SharePoint, and many more; custom brokers can be created for any LOB system not covered by the OOB set)
So in terms of connecting to external data you won't have any problems, and the capabilities are far greater than those you may find in MS Access. Comparing the two is almost like comparing an SMB shared folder to SharePoint Server.
Also, the product is marketed (and built) to allow "code-less development" - it has a really gentle learning curve and lets you start building your applications quickly.

Connecting to Oracle from iOS App

I know this has been asked a few times, but there seems to be no clear answer... I've been searching on this for the past 3 days or more.
There seem to be 2 ways to connect to an Oracle database from an iOS app:
ODBC Client
I need to compile ODBC (which ODBC?) using gcj for ARM. I think this is the hard way, fraught with errors, but possible with quite an effort.
Web Service
Connect from the app to a web service, and from the web service to the Oracle DB.
Are these the 2 methods available or any other?
A few questions on the two methods:
a. Which is more secure?
b. Will my company's security department object to either of the above?
c. Which is more performant?
d. Which of the above does one normally use?
Web services are the answer; you do not want people connecting directly to the database from a mobile device. A web server adds one extra layer of security, as well as the ability to handle simultaneous requests without stressing the database directly.
a. Which is more secure?
Web services, as explained above.
b. Will my company's security department object to either of the above?
Yes, the security department will insist on not opening the Oracle port for direct connections, unless they already have it open.
c. Which is more performant?
Web services: setting up the right cache policies on a web server can save database resources.
d. Which of the above does one normally use?
Web services, because they offer great advantages in security and performance. Not only that: web services are reusable and can be accessed by many different platforms. Think of the future - you might later want to serve your application on Android devices, and web services will save you a lot of development time.
Many of today's top applications in the market use web services; think about it.
Google Maps is a great example of how powerful web services are!
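To make the shape of such a middle tier concrete, here is a minimal sketch in Java (Java chosen just for illustration): a tiny HTTP endpoint that runs the SQL server-side and returns JSON, so the device never holds Oracle credentials. Every name here (host, table, account) is made up, and a real service would add authentication, pagination, and proper JSON handling:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.sql.*;

public class AccountService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/accounts", exchange -> {
            String json;
            // Hypothetical connection details; requires the Oracle JDBC driver on the classpath.
            try (Connection c = DriverManager.getConnection(
                         "jdbc:oracle:thin:@db.example.com:1521/ORCL", "app_user", "secret");
                 PreparedStatement ps = c.prepareStatement(
                         "SELECT account_id, balance FROM accounts WHERE owner = ?")) {
                ps.setString(1, "jsmith"); // in reality, taken from the authenticated caller
                StringBuilder sb = new StringBuilder("[");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        if (sb.length() > 1) sb.append(',');
                        sb.append("{\"id\":").append(rs.getLong(1))
                          .append(",\"balance\":").append(rs.getBigDecimal(2)).append('}');
                    }
                }
                json = sb.append(']').toString();
            } catch (SQLException e) {
                json = "{\"error\":\"database unavailable\"}";
            }
            byte[] body = json.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.getResponseHeaders().set("Cache-Control", "max-age=60"); // the caching point above
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
    }
}
```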
It's not a good idea to connect to your database directly from your app. It can be secure if you create an account that can do nothing but SELECT, but there are some other things to consider.
Why burden the app with the Oracle client?
If you have many users, you have to worry about Oracle handling a huge number of simultaneous connections; with a RESTful API, requests are stateless.
If you decide to change your schema, you'll also have to change your app. When you place a service in between, the app is no longer dependent on the schema.
An ODBC connection would require that the Oracle port be open to the Internet, which in the vast majority of cases will not be allowed for security and performance reasons. Even if it were, or even if you establish a secure VPN, direct database access requires that the connection be kept open, which can be problematic when a mobile device goes in and out of network coverage.
HTTP is far more tolerant of unreliable networks and can be encrypted using SSL (HTTPS). The problem with HTTP is that databases do not have direct support for this transport, so most people develop dedicated web services.
I work on a project called SlashDB, which automatically constructs RESTful APIs out of databases. For public APIs you would install /db in a so-called DMZ (a network segment between two firewalls), as described in this blog post.
SlashDB can be configured to allow restricted data access to public users, or you can define specific users with varying privileges on the data. It is designed as a stateless service, which means you can easily set up multiple nodes behind a load balancer and reverse HTTP proxy for highly available, web-scale deployments.
Regardless of whether you develop the web service by hand or use our product, you will achieve better scalability, performance, and security than with a direct client/server approach. I would even argue that REST APIs should be used for internal enterprise data-integration solutions, but that's a whole new topic.
I am going to repeat what everyone else said: a REST API is the way to go. Do not connect to the database directly. However, there might be a way to connect to your database, which I have never tried myself:
http://odbcrouter.com/iosvsweb#hn_iOS_Open_Database_Connectivity_SDK

IBM Portal Database and Authentication

I have a couple of questions regarding IBM Portal Portlets.
I have just stumbled into the realm of portlets - and, as far as I am concerned, was dropped into the deep end, having to work on IBM WebSphere Portal 6.1.
We are still in the evaluation stage, and there are three things that I haven't been able to find clear answers to yet.
Database - is there one single database that also gets used by the installed portlets, or do you configure DBs individually on a per-portlet basis?
Authorization and Authentication - how can a Portlet get hold of the User and the rights the user has?
Are there any known constraints in using JSR-301 compliant JSF Bridges instead of bog standard Portlets?
Thanks in advance.
I haven't used Portal 7 yet, but I have used pretty much every other version, so my apologies if you are using 7 and this information doesn't fit exactly.
1) Database: when you install portal, you configure a database it uses to store portal configuration (and sometimes user rights as well, although this aspect can be set up using a custom user registry like LDAP). If you don't have an already dedicated DB, Portal will use its packaged DB, Cloudscape/Derby. This DB can be completely separate from the DB that the portlets use to manipulate data unrelated to configuration. E.g. if your portlet is displaying inventory for a bike shop, the DB holding that info can be accessed in the normal web application way through a datasource set up in the WAS GUI.
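For point 1, a sketch of what that "normal web application way" looks like from portlet code, assuming a datasource registered in the WAS console under the made-up JNDI name jdbc/inventoryDS:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class InventoryDao {
    /** Looks up the container-managed datasource and runs a trivial query. */
    public int countBikes() throws NamingException, SQLException {
        // Depending on your resource-ref setup, the name may be "java:comp/env/jdbc/inventoryDS".
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/inventoryDS");
        try (Connection conn = ds.getConnection();
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM bikes")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}
```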
2) For a lot of scenarios, your portlet doesn't need to know the user's rights: Portal won't render the portlet unless the user has been assigned the correct rights via Portal Administration. But in the cases where you do need to know the user's rights, they can be accessed via the Portal User Management Architecture (PUMA). Here's a good whitepaper on the subject: http://public.dhe.ibm.com/software/dw/websphere/PUMA_scenarios.pdf
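If all you need is the user's identity and role membership rather than full PUMA, the standard portlet API already exposes that. A minimal sketch (the "admin" role name is made up and would need to be declared in your deployment descriptors):

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

public class WhoAmIPortlet extends GenericPortlet {
    @Override
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        // getUserPrincipal() returns null for anonymous (unauthenticated) users.
        String user = request.getUserPrincipal() == null
                ? "anonymous" : request.getUserPrincipal().getName();
        out.println("<p>User: " + user + "</p>");
        // isUserInRole() checks roles mapped in the deployment descriptors.
        out.println("<p>Is admin: " + request.isUserInRole("admin") + "</p>");
    }
}
```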
3) Known constraints? You may have to google for that specifically, but I will say that unless you use IBM's custom JSF bridge, there may not be much support from IBM's technical team if you run into a problem. However, the support guys are usually pretty helpful, I find. Don't let that stop you from trying, though :)
The two resources that I use pretty exhaustively are the InfoCenter http://publib.boulder.ibm.com/infocenter/wpdoc/v6r1/index.jsp and the developer forums on IBM Developerworks.
Best of luck, and welcome to the dark side!

Strategies for "Always-Connected" Windows Client Data Architecture

Let me start by saying: this is my 1st post here, this is a bit lengthy, and I haven't done Windows Forms development in years... with that in mind, please excuse me if this isn't directly a programming question, and please bear with me as I really need the help!
I have been asked to develop a Windows Forms app for our company that talks to a central (local area network) Linux server hosting a PostgreSQL database. The app is to allow users to authenticate themselves into the system and thereafter conduct the usual transactions with the PG database. Ordinarily, I would propose writing a WebForms app against Mono, but the clients need to utilise local resources such as USB peripheral devices, so that is out of the question. While it might not seem clear, my questions are set out below:
Dilemma #1:
The application is meant to be always connected. How should I structure my DAL/BLL - should this reside on the server or with the client?
Dilemma #2:
I have been reading up on Client Application Services (CAS), and it seems like a great fit for authentication, as everything is exposed via URIs. I know that a .NET data provider exists for PostgreSQL, but I'm not too sure whether CAS will all work against a Linux (Debian) server. Believe me, I would get my hands dirty and try it myself, but I need to come up with a logical design first before resources are allocated to me for "trial purposes"!
Dilemma #3:
If the DAL/BLL is to reside on the server, is there any way I can create data services and expose only those services to authenticated clients? There is a (security) requirement whereby a connection string with username and password to the database cannot be present on any client machine, even if security on the database side is quite rigid. I'm guessing that the only way for this to work would be to create the various CRUD data-service methods exposed by an ASP.NET app, and have the Windows Forms app request or persist data via the ASP.NET app (through a URI), which returns a result set or value. Would I be correct in assuming this? Should I be looking into WCF Data Services, and will WCF work with a non-SQL Server database?
Thank you for taking the time to read this, but know that I am desperately seeking any advice on this! THANKS A MILLION!!!!
EDIT:
I am also considering using NHibernate as my ORM.
Some parts of your questions are complicated and beyond my expertise. However, in general you can do almost anything you put effort into, CAP theorem and the like aside.
DAL/BLL stuff can in general reside in any of the tiers. I put a lot of this in my database and some in the middle tier, but that is to allow re-use in different environments, which may or may not be a goal for you. The thing is, I would think carefully through the separation-of-concerns issues here and what sort of centralization of logic you want. The further back you push it, the more re-usable it becomes, but this is not always a free tradeoff.
I am not entirely familiar with CAS, but it looked like AJAX kinds of stuff from what I saw on the MSDN web site. That could be wrong, but if it is right, then you have an issue in that such requests may be stateless, and this could be a problem if you need a constant connection.
On the whole, based on what you are saying, it sounds cleanest to build a two-tier rather than a three-tier app and have the DAL/BLL sit on the client, possibly supported by stored procedures on the server. You can then set PostgreSQL up to authenticate against whatever you use on your network (KRB5 against AD is what I would recommend). This simplifies your data access, and it allows you to control permissions based on authentication against the database. Since you can authenticate users via AD, you can then set permissions accordingly.
One important consideration is going to be the number of connections. PostgreSQL does have some code paths where every current connection must be checked and iterated through, and connection startup and tear-down overhead can be significant in some cases. So one important decision will involve connection pooling. Whether or not you use connection pooling to boost performance will depend on what you are doing, but I have seen cases where PostgreSQL has handled 600 connections without serious problems.
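To illustrate the pooling decision, here is a minimal sketch using a pooling library such as HikariCP (host, database, and account names are hypothetical). Note one trade-off worth deciding early: a shared pool logs in as a single service account, which is at odds with the per-user Kerberos authentication suggested above.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class PgPool {
    public static void main(String[] args) throws Exception {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:postgresql://db.local:5432/appdb"); // hypothetical server
        cfg.setUsername("app_user");
        cfg.setPassword("secret");
        // Keep the pool small: PostgreSQL pays a per-connection cost, as noted above.
        cfg.setMaximumPoolSize(10);

        try (HikariDataSource ds = new HikariDataSource(cfg);
             Connection conn = ds.getConnection(); // borrows a connection from the pool
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT version()")) {
            rs.next();
            System.out.println(rs.getString(1));
        } // closing the connection returns it to the pool; closing the datasource shuts the pool down
    }
}
```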