How to access an EF Core in-memory database from another application? - entity-framework-core

I have two applications, one is a Web API, and the other is a scheduled job.
Web API
First I run this service
There is an entity called 'User'
I'm adding some fake users using a DbContext called 'ApplicationContext'
The data will be persisted in an in-memory DB
Scheduled Job service (Background service)
Now I'm running this service and trying to access the same DbContext
But I don't see the fake users in the new context
How can I access the data in another application?

Your second application must access the data via the API exposed by the first. The EF Core in-memory provider keeps its data in the memory of the process that hosts it, so there is no way to reach that database from another process directly.
The good news is that this pushes the first application to expose higher-level features than the bare database, which is largely the point of having a service in the first place.
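For illustration, here is a minimal sketch of the scheduled job reading users over HTTP instead of trying to open the in-memory store; the /api/users endpoint, the base address and the User DTO are assumptions for the example, not something from your code:

```csharp
// Minimal sketch: the background job reads users through the Web API's HTTP surface.
// Endpoint path, base address and the User DTO are hypothetical.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record User(int Id, string Name);   // hypothetical DTO mirroring the API's User entity

public class UserApiClient
{
    private readonly HttpClient _http = new HttpClient
    {
        BaseAddress = new Uri("https://localhost:5001/") // hypothetical address of the Web API
    };

    public async Task<List<User>> GetUsersAsync()
    {
        // The Web API owns the in-memory DbContext; other processes can only see
        // that data through endpoints the API chooses to expose.
        return await _http.GetFromJsonAsync<List<User>>("api/users") ?? new List<User>();
    }
}
```

If the two processes genuinely need to share a database, the other option is to switch the provider from in-memory to something that lives outside either process (SQLite on disk, SQL Server, etc.).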

Related

Unable to persist data across instances of web service in Swift using Vapor-Fluent

I'm writing a web service in Swift using the Vapor framework.
I use FluentSQLite to save data. I have a User model which conforms to SQLiteModel and Migration. I have added routes to create a new user via a POST method and return the list of users via a GET method like below.
When I hit the GET API for the first time, it returns an empty array. After I POST some users, I am able to get them. But when I stop the service and run it again, I am unable to get the previously saved users.
Since I am new to Vapor, I can't figure out what I am missing here, and the online searches and docs didn't help. Initially I did not have the save or query inside a transaction; after seeing that in the docs I tried that as well, but the issue remains.
What does your configuration for the SQLite database (typically in Sources/App/configure.swift) look like?
Is it actually persisting to disk, or just running an in-memory database (which goes away when you restart)?
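For reference, a persistent setup in Sources/App/configure.swift might look roughly like this (Vapor 3 / FluentSQLite style; the file path is just an example, and User is the model from the question):

```swift
import FluentSQLite
import Vapor

public func configure(_ config: inout Config, _ env: inout Environment, _ services: inout Services) throws {
    try services.register(FluentSQLiteProvider())

    // .file(path:) persists across restarts; .memory is wiped every time the process stops.
    let sqlite = try SQLiteDatabase(storage: .file(path: "db.sqlite"))
    // let sqlite = try SQLiteDatabase(storage: .memory)   // non-persistent variant

    var databases = DatabasesConfig()
    databases.add(database: sqlite, as: .sqlite)
    services.register(databases)

    var migrations = MigrationConfig()
    migrations.add(model: User.self, database: .sqlite)
    services.register(migrations)
}
```

If your configure.swift currently uses `.memory`, that alone explains why the users disappear on restart.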

Deploying IdentityServer3 on Load Balancer

We are moving right along with building out our custom IdentityServer solution based on IdentityServer3. We will be deploying in a load balanced environment.
According to https://identityserver.github.io/Documentation/docsv2/configuration/serviceFactory.html there are a number of services and stores that need to be implemented.
I have implemented the mandatory user service, client and scope stores.
The document says there are other mandatory items to implement but that there are default InMemory versions.
We were planning on using the default in-memory versions for the other stuff, but we are concerned that not all of them will work in a load-balanced scenario.
What are the other mandatory services and stores we must implement for things to work properly when load balanced?
With multiple Identity Server installations serving the same requests (e.g. load balanced) you won't be able to use the various in-memory token stores; otherwise authorization codes, refresh tokens and reference tokens issued by one server won't be recognized by the others, nor will user consent be persisted. If you are using IIS, machine key synchronization is also necessary for tokens to work across all instances.
There's an Entity Framework package available for the token stores; it's the operational data services that you'll need.
There's also a very useful guide to going live here.
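As a rough sketch, registering the EF-backed operational stores with the IdentityServer3.EntityFramework package looks something like the following; the connection string name is made up, and the exact option/method names should be checked against the package version you install:

```csharp
// Sketch: back the operational stores (authorization codes, refresh tokens,
// reference tokens, consent) with a shared SQL database so every node in the
// load-balanced farm sees the same data. The connection string name is hypothetical.
using IdentityServer3.Core.Configuration;
using IdentityServer3.EntityFramework;

public static class Factory
{
    public static IdentityServerServiceFactory Create()
    {
        var factory = new IdentityServerServiceFactory();

        var efOptions = new EntityFrameworkServiceOptions
        {
            ConnectionString = "IdSvrOperational"   // hypothetical connection string name
        };

        // Replaces the default in-memory token and consent stores with EF-backed ones.
        factory.RegisterOperationalServices(efOptions);

        return factory;
    }
}
```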

PersistenceContext propagation

I'm migrating an application from desktop to web. In the desktop application, users connect to an Oracle database using different database users, i.e. users are managed by Oracle, not within a database table. All of them use the same schema, PLMU_PROD, to store and manage data.
I have to implement authentication (JPA) for the web application and, as I read, I would have to create an EntityManagerFactory for each database user.
The other option I'm considering is to create a table of users/passwords and use a single EntityManagerFactory to serve all EntityManagers, since all users access the same data in the PLMU_PROD schema.
I wonder whether the PersistenceContext is shared between different EntityManagerFactories, as my web server has little RAM and I do not want to waste it on duplicate entities.
Thanks for your time!
What you seem to be referring to is caching. JPA requires that EntityManagers keep entities cached so that they can track changes, so each EntityManager is required to have its own cache, keeping changes made in one separate from changes that might be made concurrently in others - transaction isolation. Within EclipseLink there is also a concept of a second-level cache that is shared at the EntityManagerFactory level; http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching is a good document on caching in EclipseLink. This second-level cache helps avoid database access and can be disabled as required. If your EntityManagers do not need to track changes, for example if the application is read-only and the entities are not modified, you can set queries to return entities from the shared cache so that only a single instance of the data exists, using the read-only query hint: http://www.eclipse.org/eclipselink/documentation/2.4/jpa/extensions/q_read_only.htm#readonly
Read-only instances avoid duplicating data and using resources unnecessarily, but you will need to manage them appropriately and obtain managed copies from the EntityManager before making changes.
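As an illustration of the read-only hint, something like this (the User entity and the persistence unit name are placeholders, not from your application):

```java
// Sketch: ask EclipseLink to return instances straight from the shared (second-level)
// cache instead of registering a managed copy per EntityManager.
// The User entity and the persistence unit name are hypothetical.
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import java.util.List;

public class ReadOnlyQueryExample {
    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("PLMU_PROD_PU");
        EntityManager em = emf.createEntityManager();

        List<User> users = em.createQuery("SELECT u FROM User u", User.class)
                             .setHint("eclipselink.read-only", "true")   // shared, unmanaged instances
                             .getResultList();

        em.close();
        emf.close();
    }
}
```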

handling iPhone app in offline/online modes

We are looking to develop an application that fetches data from a web service when there is internet connectivity and stores it, via Core Data, in a SQLite database for viewing only (no updates will be made to the local data). Whenever there is connectivity, the app (or possibly a background thread) keeps checking the web service for new data.
How can the app know that the data returned by the web service contains new records, so that it stores only the new data rather than the whole dataset again?
Is there a tutorial available on the web for a similar scenario?
My solution is to have an 'added' field in the database (both local and remote); then, when the service is called, the phone passes the most recent 'added' date/time from the local DB as a parameter, so the service only returns new data.
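A sketch of that idea on the phone side, assuming a Core Data entity named "Record" with an "added" date attribute and a made-up endpoint (all names are illustrative):

```swift
// Sketch of the delta-sync approach: read the newest 'added' timestamp stored
// locally and pass it to the web service so only newer records come back.
// Entity name, attribute name and URL are hypothetical.
import CoreData
import Foundation

func newRecordsURL(context: NSManagedObjectContext) -> URL? {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Record")
    request.sortDescriptors = [NSSortDescriptor(key: "added", ascending: false)]
    request.fetchLimit = 1

    let newest = (try? context.fetch(request))?.first?.value(forKey: "added") as? Date
    let since = newest ?? Date(timeIntervalSince1970: 0)   // first run: ask for everything

    var components = URLComponents(string: "https://example.com/api/records")!
    components.queryItems = [URLQueryItem(name: "since",
                                          value: String(since.timeIntervalSince1970))]
    return components.url
}
```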
Check out RestKit. It has hooks for reachability as well as hooks into Core Data.

How to implement Tenant View Filter security pattern in a shared database using ASP.NET MVC2 and MS SQL Server

I am starting to build a SaaS line of business application in ASP.NET MVC2 but before I start I want to establish good architecture foundation.
I am going towards a shared database and shared schema approach because the data architecture and business logic will be quite simple and efficiency along with cost effectiveness are key issues.
To ensure good isolation of data between tenants I would like to implement the Tenant View Filter security pattern (take a look here). In order to do that my application has to impersonate different tenants (DB logins) based on the user that is logging in to the application. The login process needs to be as simple as possible (it's not going to be enterprise class software) - so a customer should only input their user name and password.
Users will access their data through their own sub-domain (using Subdomain routing) like http://tenant1.myapp.com or http://tenant2.myapp.com
What is the best way to meet this scenario?
I would also suggest using two databases, a ConfigDB and a ContentDB.
The ConfigDB contains the tenant table, which holds the hostname, database name, SQL username and SQL password of the Content database for each tenant, and is accessed via a separate SQL user called usrAdmin.
The ContentDB contains all the application tables, segmented on the SID (or SUSER_ID) of the user, and is accessed by each tenant's SQL user, called usrTenantA, usrTenantB, usrTenantC etc.
To retrieve data, you connect to the ConfigDB as admin, retrieve the credentials for the appropriate client, connect to the server using the retrieved credentials and then query the database.
The reasons I did this are horizontal scalability and the ability to isolate clients on demand.
You can now have many ContentDBs: for example, for every ten tenants that sign up you might create a new database and configure your application to start provisioning clients in that database.
Alternatively you could provision a few SQL servers, create a content DB on each, and have your code provision tenants on whichever server has the lowest utilization historically.
You could also host all your regular clients on server A and B, but Server C could have tenants in their own INDIVIDUAL databases, all the multitenancy code is still there, but these clients can be told they are now more secure because of the higher isolation.
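A rough sketch of that two-database lookup, with made-up table, column and connection string names (adapt to your own schema):

```csharp
// Sketch: look up the tenant's ContentDB credentials in the ConfigDB, then open a
// connection to the right ContentDB. All table/column/user names are hypothetical.
using System;
using System.Data.SqlClient;

public class TenantConnectionFactory
{
    private const string ConfigDb =
        "Server=configserver;Database=ConfigDB;User Id=usrAdmin;Password=<secret>;";

    public SqlConnection OpenContentDb(string tenantHost)
    {
        using (var config = new SqlConnection(ConfigDb))
        {
            config.Open();
            var cmd = new SqlCommand(
                "SELECT Hostname, DatabaseName, SqlUsername, SqlPassword " +
                "FROM Tenants WHERE Url = @url", config);
            cmd.Parameters.AddWithValue("@url", tenantHost);

            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read())
                    throw new InvalidOperationException("Unknown tenant: " + tenantHost);

                var contentCs = string.Format(
                    "Server={0};Database={1};User Id={2};Password={3};",
                    reader.GetString(0), reader.GetString(1),
                    reader.GetString(2), reader.GetString(3));

                var content = new SqlConnection(contentCs);
                content.Open();
                return content;   // caller is responsible for disposing
            }
        }
    }
}
```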
The easiest way is to have a Tenants table which contains a URL field that you match up for all queries coming through.
If a tenant can have multiple URL's, then just have an additional table like TenantAlias which maintains the multiple urls for each tenant.
Cache this table web side as it will be hit a lot; invalidate the cache whenever a value changes.
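For example, in an ASP.NET MVC app the cached lookup could look roughly like this (the Tenant type and repository are placeholders for whatever data access you use):

```csharp
// Sketch: resolve the current tenant from the request host and cache the result.
// Tenant, ITenantRepository and the 10-minute expiry are illustrative choices.
using System;
using System.Web;
using System.Web.Caching;

public class Tenant { public int TenantId; public string Name; }

public interface ITenantRepository { Tenant GetByUrl(string host); }

public static class TenantResolver
{
    public static Tenant Current(HttpContextBase context, ITenantRepository repository)
    {
        var host = context.Request.Url.Host.ToLowerInvariant();   // e.g. "tenant1.myapp.com"
        var cacheKey = "tenant:" + host;

        var tenant = context.Cache[cacheKey] as Tenant;
        if (tenant == null)
        {
            tenant = repository.GetByUrl(host);   // queries the Tenants / TenantAlias tables
            context.Cache.Insert(cacheKey, tenant, null,
                                 DateTime.UtcNow.AddMinutes(10),  // or invalidate explicitly on change
                                 Cache.NoSlidingExpiration);
        }
        return tenant;
    }
}
```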
You can look at DotNetNuke. It is an open source CMS that implements this exact model. I'm using the model in a couple of our apps and it works well.
BTW, for EVERY entity in your system you'll need a tenantid column keyed to the Tenants table above.