PersistenceContext propagation - JPA

I'm migrating an application from desktop to web. In the desktop application, users connect to an Oracle database using different database users, i.e. users are managed by Oracle, not within a database table. All of them use the same schema, PLMU_PROD, to store and manage data.
I have to implement authentication (JPA) for the web application and, as I read, I have to create an EntityManagerFactory for each database user.
The other option I'm thinking of is to create a table of users/passwords and use the same EntityManagerFactory to serve all EntityManagers, as all users will access the same data in the PLMU_PROD schema.
I wonder whether the persistence context is shared between different EntityManagerFactories, as my web server has little RAM and I don't want to waste it on duplicate entities.
Thanks for your time!

What you seem to be referring to is caching. JPA requires that EntityManagers keep entities cached so that they can track changes, so each EntityManager is required to have its own cache, keeping changes made in one separate from changes that might be made concurrently in others - transaction isolation. Within EclipseLink there is also the concept of a second-level cache that is shared at the EntityManagerFactory level; http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching is a good document on caching in EclipseLink. This second-level cache helps avoid database access and can be disabled as required.
If your EntityManagers do not need to track changes, such as when the application is read-only and the entities are not modified, you can set queries to return entities from the shared cache, so that only a single instance of the data exists, using the read-only query hint: http://www.eclipse.org/eclipselink/documentation/2.4/jpa/extensions/q_read_only.htm#readonly
Read-only instances let you avoid duplication and unnecessary resource use, but you will need to manage them appropriately and obtain managed copies from the EntityManager before making changes.
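For illustration, a minimal sketch of that hint (EclipseLink's "eclipselink.read-only", documented at the link above), assuming an open EntityManager and a hypothetical Customer entity:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;

public class ReadOnlyQueryExample {
    // "Customer" is a hypothetical mapped entity used for illustration.
    public static List<Customer> loadCustomers(EntityManager em) {
        TypedQuery<Customer> query = em.createQuery(
                "SELECT c FROM Customer c", Customer.class);
        // EclipseLink-specific hint: return instances straight from the
        // shared (second-level) cache instead of registering managed
        // copies in this EntityManager's persistence context.
        query.setHint("eclipselink.read-only", "true");
        return query.getResultList();
    }
}

Treat the returned instances as immutable; call em.find() to get a managed copy before changing anything.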

Related

EF Core and caching of results

I'm working on a WebSocket application. When the client connects to the server, the WebSocket session gets one DbContext from dependency injection:
services.AddDbContext<Db>()
This DbContext stays the same for the whole WebSocket session. The problem is that the DbContext caches results. So if the WebSocket session is open for, say, two hours and reads the same data twice while the data has been changed outside that DbContext, the DbContext will give back invalid data (the cached result from the last query) as the response to the query. There are several examples of how to avoid this, but it has to be done on every query. That is not really practical; somewhere in the code it might be forgotten, and then you have a chance of getting invalid data.
Is there some way to permanently disable caching?
I think you are trying to use Entity Framework in a very wrong way; DbContext is not supposed to work like this, and it is not a cache per se, although it keeps some data in memory for you.
In your case I would suggest that you either
query the database every time, as you suggested,
or, even better,
take advantage of a proper caching mechanism.
The decision whether to use SQL Server or a caching mechanism comes down to how long you want to keep the data and how often you want to query it. If the data is permanent and not queried very often, use SQL Server; if it lives for a couple of hours and is queried very often, caching is the better fit.
As a caching mechanism you can use:
The default MemoryCache, but it has quite limited functionality and is restricted to the application level, so if you run multiple instances of your application this solution will not work out.
A distributed cache solution, like Redis, which supports a lot of functionality and to which you can connect many instances of your application.
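For what it's worth, if the goal is simply to stop the DbContext from handing back previously materialized instances, EF Core can also disable change tracking for the whole context rather than per query. A minimal sketch, assuming the Db context type from the question and the SQL Server provider:

using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public static class DbSetup
{
    // "Db" is the context type from the question; the connection string
    // is supplied by the caller.
    public static void Register(IServiceCollection services, string connectionString)
    {
        services.AddDbContext<Db>(options =>
            options.UseSqlServer(connectionString)
                   // Every query issued by this context now re-materializes
                   // entities from the database instead of returning the
                   // instances it already tracks.
                   .UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking));
    }
}

Note that with tracking disabled, SaveChanges() no longer picks up edits automatically; entities must be attached or updated explicitly.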

Is it safe to enable MSDTC (Microsoft Distributed Transaction Coordinator) on our server

I'm always worried about using two instances of DbContext that would require the Distributed Transaction Coordinator, especially when I'm using a library like SimpleMembership, which has its own connection to the database.
This problem is always an issue in my case. For example, I'm using the SimpleMembership provider for my application's user accounts and I want to save a user together with additional information, like a company. Without MSDTC enabled I can't do this inside a transaction, so it is possible that inconsistent data is inserted into the database.
So my question is: should I worry about this problem and try to find a better solution, or can I just enable MSDTC on my server and not worry about it? Are there any consequences of enabling MSDTC?
Thanks!
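For reference, a sketch of the pattern being described, assuming hypothetical AppDbContext and CompanyProfile types alongside SimpleMembership's WebSecurity: two separate connections enlisted in one ambient transaction, which is what can escalate to a distributed transaction and require MSDTC:

using System.Transactions;
using WebMatrix.WebData; // SimpleMembership's WebSecurity

public static class SignupService
{
    public static void CreateUserWithCompany(string userName, string password, string company)
    {
        using (var scope = new TransactionScope())
        {
            // Connection #1: SimpleMembership opens its own connection.
            WebSecurity.CreateUserAndAccount(userName, password);

            // Connection #2: the application's own DbContext
            // (AppDbContext and CompanyProfile are hypothetical).
            using (var db = new AppDbContext())
            {
                db.CompanyProfiles.Add(new CompanyProfile { UserName = userName, Company = company });
                db.SaveChanges();
            }

            scope.Complete(); // both succeed or both roll back
        }
    }
}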

Is there a way of connecting to a shared OpenEdge RDBMS with read-only access?

Our new security policies require restricting developers' access to data in the production database. Setting the -RO parameter does not work, for several reasons (extracts from the 'Startup Command and Parameter Reference', http://documentation.progress.com/output/OpenEdge102b/pdfs/dpspr/dpspr.pdf):
1) "If you use the -RO parameter when other users are updating the database, you might see invalid data, such as stale data or index entries pointing to records that have been deleted."
2) "A read-only session is essentially a single-user session. Read-only users do not share database resources (database buffers, lock table, index cursors)."
3) "When a read-only session starts, it does not check for the existence of a lock file for the database. Furthermore, a read-only user opens the database file, but not the log or before-image files.
Therefore, read-only user activity does not appear in the log file."
We would like to be able to access data in the production database from OpenEdge Architect, but without being able to edit it. Is this possible?
In most security-conscious companies developers are not allowed to access production. Period. Full stop.
One thing that you could do as a compromise... if the need is to occasionally query data you could give them access to a replicated database via OpenEdge Replication Plus. This is a read-only db connection without the drawbacks of -RO. It is real-time, up to date and access is separately controlled -- you could, for instance, put the replicated db on a different server that is on a different subnet.
The short answer is no, they can't access it both directly and read-only.
If you have an AppServer, you could write some code to provide a level of dynamic read-only data access via AppServer or web service calls.
The other question I'd have is - what are your developers doing accessing the production database? That should be a big no-no.

Make sure Entity Framework always reads from the database?

I have an application that is actually two applications: a web application and a console application. The console application is used as a scheduled task on the Windows machine and is executed three times a day to do some recurring work. Both applications use the same model and repository, which are placed in a separate project (class library). The problem is that when the console application needs to make some changes to the database, it updates the model entity and saves the changes to the database, but the context in the web application is unaware of this, so the object context is not refreshed with the new/updated data and the user of the application cannot see the changes.
My question is: is there a way to tell the ObjectContext to always load data from the database, either for the whole ObjectContext or for a specific query?
/Regards Vinblad
I don't think you should have this problem in a web application. The ObjectContext in a web application should be created per request, so only requests being processed during the update should be affected.
Anyway, there are a few methods which can force the ObjectContext to reload data. Queries and load functions allow passing a MergeOption, which should be able to overwrite current data. But the most interesting should be the Refresh method, especially for this application.
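A short sketch of both options, assuming a hypothetical ObjectContext subclass MyEntities with an Orders entity set:

using System.Data.Objects;
using System.Linq;

public static class ReloadExample
{
    public static void ReloadFromStore()
    {
        using (var context = new MyEntities()) // hypothetical ObjectContext
        {
            // MergeOption: overwrite any cached values with database values
            // whenever this entity set is queried.
            context.Orders.MergeOption = MergeOption.OverwriteChanges;
            var order = context.Orders.FirstOrDefault();

            // Refresh: explicitly re-read an attached entity, letting the
            // store's values win over the in-memory ones.
            if (order != null)
                context.Refresh(RefreshMode.StoreWins, order);
        }
    }
}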
If you are using a DbSet, you can also make use of the .AsNoTracking() method.
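For example (MyDbContext, Order and the Orders set are hypothetical names):

using System.Data.Entity; // EF 4.1+ DbContext API
using System.Linq;

public static class NoTrackingExample
{
    public static Order LoadOrderSnapshot()
    {
        using (var db = new MyDbContext())
        {
            // AsNoTracking materializes fresh objects from the database
            // instead of returning instances the context already tracks.
            return db.Orders.AsNoTracking().FirstOrDefault();
        }
    }
}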
Whenever you run something like
context.Entities.FirstOrDefault()
or whatever query against the context, the data is actually fetched from the database, so you shouldn't be having a problem.
What is your ObjectContext's lifetime in the web app? The ObjectContext is a unit of work, so it should only be created to fetch/write/update data and be disposed quickly afterwards.
You can find a similar question here:
Refresh ObjectContext or recreate it to reflect changes made to the database?
FWIW, creating a new (anonymous) object in the query also forces a round trip to the database:
' queries from memory
context.Entities.FirstOrDefault()
' queries from db
context.Entities.Select(Function(x) New With {x.ID, x.Name}).FirstOrDefault()
Please forgive the VB, it's my native language :)

How to implement Tenant View Filter security pattern in a shared database using ASP.NET MVC2 and MS SQL Server

I am starting to build a SaaS line-of-business application in ASP.NET MVC2, but before I start I want to establish a good architectural foundation.
I am going towards a shared database and shared schema approach, because the data architecture and business logic will be quite simple and efficiency along with cost-effectiveness are key issues.
To ensure good isolation of data between tenants I would like to implement the Tenant View Filter security pattern (take a look here). In order to do that, my application has to impersonate different tenants (DB logins) based on the user that is logging in to the application. The login process needs to be as simple as possible (it's not going to be enterprise-class software), so a customer should only input their user name and password.
Users will access their data through their own sub-domain (using Subdomain routing) like http://tenant1.myapp.com or http://tenant2.myapp.com
What is the best way to meet this scenario?
I would also suggest using two databases, a ConfigDB and a ContentDB.
The ConfigDB contains the tenant table, holding the hostname, database name, SQL username and SQL password of the content database for each of the tenants, and is accessed via a separate SQL user called usrAdmin.
The ContentDB contains all the application tables, segmented on the SID (or SUSER_ID) of the user, and is accessed by each tenant's SQL user, called usrTenantA, usrTenantB, usrTenantC etc.
To retrieve data, you connect to the ConfigDB as admin, retrieve the credentials for the appropriate client, connect to the server using the retrieved credentials and then query the database.
The reason I did this is horizontal scalability and the ability to isolate clients on demand.
You can now have many ContentDBs; maybe for every ten tenants that sign up you create a new database and configure your application to start provisioning clients in that database.
Alternatively, you could provision a few SQL servers, create a content DB on each, and have your code provision tenants on whichever server historically has the lowest utilization.
You could also host all your regular clients on servers A and B, while server C could have tenants in their own INDIVIDUAL databases; all the multitenancy code is still there, but these clients can be told they are now more secure because of the higher isolation.
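A hedged sketch of that lookup flow; the table, column and credential names here are hypothetical:

using System;
using System.Data.SqlClient;

public static class TenantConnections
{
    public static SqlConnection OpenContentDb(string configDbConnectionString, string tenantUrl)
    {
        string host, dbName, user, password;

        // Step 1: connect to the ConfigDB as usrAdmin and read the
        // tenant's content-database credentials.
        using (var config = new SqlConnection(configDbConnectionString))
        using (var cmd = new SqlCommand(
            "SELECT Hostname, DatabaseName, SqlUsername, SqlPassword " +
            "FROM Tenants WHERE TenantUrl = @url", config))
        {
            cmd.Parameters.AddWithValue("@url", tenantUrl);
            config.Open();
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read())
                    throw new InvalidOperationException("Unknown tenant");
                host = reader.GetString(0);
                dbName = reader.GetString(1);
                user = reader.GetString(2);
                password = reader.GetString(3);
            }
        }

        // Step 2: connect to the ContentDB with the tenant-specific SQL
        // user, so queries run under that tenant's SID.
        var content = new SqlConnection(string.Format(
            "Server={0};Database={1};User Id={2};Password={3}",
            host, dbName, user, password));
        content.Open();
        return content;
    }
}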
The easiest way is to have a Tenants table which contains a URL field that you match up for all queries coming through.
If a tenant can have multiple URLs, then just have an additional table like TenantAlias which maintains the multiple URLs for each tenant.
Cache this table web-side, as it will be hit a lot; invalidate the cache whenever a value changes.
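A sketch of that cached, web-side lookup; the helper that queries the Tenants/TenantAlias tables is left as a hypothetical stub:

using System;
using System.Web;
using System.Web.Caching;

public static class TenantResolver
{
    public static int GetTenantId(HttpRequestBase request)
    {
        string host = request.Url.Host;       // e.g. tenant1.myapp.com
        string cacheKey = "tenant:" + host;

        object cached = HttpRuntime.Cache[cacheKey];
        if (cached != null)
            return (int)cached;

        int tenantId = LookupTenantId(host);  // hits the Tenants table once
        // Remove this key whenever the Tenants/TenantAlias data changes
        // to invalidate the cache.
        HttpRuntime.Cache.Insert(cacheKey, tenantId, null,
            DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
        return tenantId;
    }

    private static int LookupTenantId(string host)
    {
        // Placeholder: query the Tenants (and TenantAlias) tables by URL.
        throw new NotImplementedException();
    }
}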
You can look at DotNetNuke. It is an open-source CMS that implements this exact model. I'm using the model in a couple of our apps and it works well.
BTW, for EVERY entity in your system you'll need a TenantId column that references the Tenants table above.
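To make that concrete, a hedged sketch with hypothetical names; in the database-level Tenant View Filter variant, the same predicate lives in a view that compares the tenant column with SUSER_SID():

using System.Linq;

public static class TenantQueries
{
    // "MyDbContext", "Invoices" and "Invoice" are hypothetical; every
    // tenant-scoped table carries the TenantId column, and every query
    // filters on it.
    public static IQueryable<Invoice> InvoicesForTenant(MyDbContext db, int tenantId)
    {
        return db.Invoices.Where(i => i.TenantId == tenantId);
    }
}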