At my job we're gradually replacing a monolithic legacy system with a microservices architecture. I've been tasked with writing an auth server using Asp.Net Core, Identity Server 4 and Entity Framework*. The legacy system already has auth and our deadline is approaching, so we're going to use the legacy system as a backend for the time being.
How can I set up Identity Server/Entity Framework to pull login info through the legacy system? So far, everything I've found involves adding a database like SQL server. Assume for the sake of argument I'm not able to pull data directly from the MySQL database that the legacy system uses, but it is easy to get the user data via a JSON API.
I have written a DbContext and an implementation of IProfileService which uses it, but I'm not sure how to actually pull the users in the DbContext, and when I try to sign in from a client I get this error:
No database provider has been configured for this DbContext. A provider can be configured by overriding the DbContext.OnConfiguring method or by using AddDbContext on the application service provider. If AddDbContext is used, then also ensure that your DbContext type accepts a DbContextOptions object in its constructor and passes it to the base constructor for DbContext.
However I haven't been able to find/figure out what to put in DbContext.OnConfiguring to set this up. I suppose I need to implement IServiceProvider somewhere, but can't find any details of how to do so.
*We're not married to these so suggestions for something more appropriate are welcome. We are using .Net Core.
The EF bit seems like a red herring here. If you're talking to an API in a legacy system then you won't use EF for that at all.
If using IdentityServer4 then it makes sense to use their EF implementations for the configuration and operational stores and then implement your sign-in UI, IProfileService, etc. using the API exposed by your existing system. To do that, just create a simple client implementation that calls said API and accepts and returns whatever models you require.
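For illustration, here is a minimal sketch of such a client-backed IProfileService that pulls user data from the legacy JSON API over HttpClient instead of a DbContext. The /api/users/{id} endpoint, the LegacyUser shape and the claim names are assumptions about the legacy system, not anything IdentityServer4 prescribes:

    // Sketch only: profile data comes from the legacy system's JSON API, not from EF.
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Security.Claims;
    using System.Threading.Tasks;
    using IdentityServer4.Extensions;
    using IdentityServer4.Models;
    using IdentityServer4.Services;
    using Newtonsoft.Json;

    public class LegacyUser               // shape returned by the legacy API (assumed)
    {
        public string Id { get; set; }
        public string Email { get; set; }
        public bool IsActive { get; set; }
    }

    public class LegacyApiProfileService : IProfileService
    {
        private readonly HttpClient _http;    // configured with the legacy system's base address

        public LegacyApiProfileService(HttpClient http) => _http = http;

        public async Task GetProfileDataAsync(ProfileDataRequestContext context)
        {
            var user = await GetUserAsync(context.Subject.GetSubjectId());
            context.IssuedClaims = new List<Claim>
            {
                new Claim("sub", user.Id),
                new Claim("email", user.Email ?? string.Empty)
            };
        }

        public async Task IsActiveAsync(IsActiveContext context)
        {
            var user = await GetUserAsync(context.Subject.GetSubjectId());
            context.IsActive = user != null && user.IsActive;
        }

        private async Task<LegacyUser> GetUserAsync(string id)
        {
            var json = await _http.GetStringAsync($"/api/users/{id}");   // assumed endpoint
            return JsonConvert.DeserializeObject<LegacyUser>(json);
        }
    }

Register it with services.AddIdentityServer().AddProfileService<LegacyApiProfileService>() alongside the EF configuration/operational stores, and perform the actual credential check the same way in your login UI (or an IResourceOwnerPasswordValidator).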
I have the following setup but am unable to finish building as I get an obscure error related to line 439 in file Blazor.MonoRuntime.targets (MSB3073).
Does this essentially mean that Entity Framework Core will in no way work with Blazor preview 6?
Details:
Asp.net Hosted Blazor
AspNetCore.Blazor (3.0.0-preview6.19307.2)
Microsoft.EntityFrameworkCore (3.0.0-preview6.19304.10)
Microsoft.EntityFrameworkCore.Design (3.0.0-preview6.19304.10)
Microsoft.EntityFrameworkCore.SqlServer (3.0.0-preview6.19304.10)
Resolved via a hack solution!
Somehow I was able to resolve everything and make things run end-to-end. I believe the big, critical things were:
* Ensure that the Blazor client AND server projects do not directly reference Entity Framework
* Do not let the Blazor client reference (directly or indirectly) the project with the generated entities. To get access to the models, I just created a duplicate of the generated entities (and removed the "partial" keyword from the generated classes)
Some clarification is needed here:
You cannot use Entity Framework on the Client project of Blazor. Entity Framework is a server technology.
You may use Entity Framework on the Server project of your application.
Communication between your client side and your server hosting side is ordinarily done via HTTP calls (the HttpClient service), but you may also employ SignalR.
To enable HTTP calls you should expose HTTP routing endpoints; this can be done by using Web API with the required endpoints. Your Web API methods (controllers' methods) can access the database directly (or indirectly if you define repositories, services, etc.) via Entity Framework objects, and return the queried data to the calling methods (HttpClient methods).
Note that in my answer I am referring particularly to Blazor client-side apps, but it is mostly true for Blazor server-side apps as well. I may just add here that in Blazor server-side apps you don't have to use Web API, since Blazor is executed on the server. In such a case, you can define a normal service to retrieve the data from the database and pass it to the calling methods (no HttpClient involved here).
The Shared project is intended to contain objects that can be used by both the front end and the back end. This is the place where you can define your model objects. For instance, you can define an Employee class that the server uses to retrieve the data and pass it to the client as a list of Employee objects, and in the client you can define a list of Employee objects that will store the retrieved data. In short, you don't have to define two types of objects, one appropriate to the server and one appropriate to the client (say your client is an Angular app).
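For illustration, that split might look roughly like this (Employee, AppDbContext, EmployeesController and the route are made-up names, not anything Blazor mandates):

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.EntityFrameworkCore;

    // --- Shared project: a plain model both client and server reference (no EF here) ---
    public class Employee
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // --- Server project: a Web API controller that uses EF Core and returns the shared model ---
    [Route("api/[controller]")]
    [ApiController]
    public class EmployeesController : ControllerBase
    {
        private readonly AppDbContext _db;              // your EF Core DbContext (assumed name)
        public EmployeesController(AppDbContext db) => _db = db;

        [HttpGet]
        public Task<List<Employee>> Get() => _db.Employees.ToListAsync();
    }

    // --- Client project: fetch over HTTP; EF is never referenced here ---
    // e.g. in a .razor component (the exact extension method name varied across previews):
    //   employees = await Http.GetJsonAsync<Employee[]>("api/employees");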
Hope this helps.
I have an application that will connect to a SQL Server database installed on a server. The application will run on many client computers.
This application has a repository that uses EF to access the database and contains the business logic related to the data in the database: checking that the information is correct, adding, deleting, modifying and so on.
I am thinking that I have two main options.
The first option: the client application (the application that will run on the client computers) uses this repository, so the application connects directly to the SQL Server database.
The second option: have a server application that uses this repository to connect to the database. The clients will not use the repository; instead, they will use WCF to connect to the server application and request actions and data. The server would do all the work and send the result to the client through WCF.
If I am not wrong, WCF is good when two applications have to communicate with each other, to notify each other of something or to work together on some task. But in my case it would be used just to access the database, which the clients could do directly if they used the repository. So I guess that using WCF for this would add a new layer that means more work and, I suppose, consumes more resources.
However, the first option has a problem: if the repository has a bug that makes the information in the database incorrect or inconsistent, then once I fix the problem I would have to update all the clients so they stop updating the database incorrectly. In some cases it would be very hard to ensure that every client updates the application. At least, I don't know of a way to stop a client from running the application when a newer version is detected. Is there any way to force this update?
The second option solves this problem, because I only have to update the server application and that's it. However, it makes the server work more and need more resources. Also, it adds a new layer to the application, which is more work too.
So my question is: in this kind of application, what is the best solution, the first one, the second one, or another one that I don't know about?
Is it possible to prevent a client application from running if a new mandatory update is detected? If so, would the first option, letting the client applications access the database directly, be a good solution?
Thanks so much.
From my point of view I would use ASP.NET Web API 2 rather than WCF, since with the former you will be able to create resource-oriented services over HTTP (RESTful) that can use the full features of HTTP (URIs, request/response headers, caching, versioning, various content formats).
The idea would be to call your Web API endpoints from the client application. In this way, all the code related to retrieving information from your database is defined in the API, and the database is only accessible through it.
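As a rough sketch (CustomerRepository, Customer and their members are placeholders for your own types), such an endpoint wrapping the repository could look like this:

    using System.Collections.Generic;
    using System.Web.Http;

    public class CustomersController : ApiController
    {
        private readonly CustomerRepository _repository = new CustomerRepository();

        // GET api/customers
        public IEnumerable<Customer> Get() => _repository.GetAll();

        // POST api/customers
        public IHttpActionResult Post(Customer customer)
        {
            _repository.Add(customer);    // the repository does the EF work and validation
            return Ok(customer);
        }
    }

The client application then only needs an HttpClient (or similar) and never references the repository or EF, so fixing a repository bug means redeploying only the server.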
Getting started with ASP.NET Web API 2: https://www.asp.net/web-api/overview/getting-started-with-aspnet-web-api/tutorial-your-first-web-api
I am looking for some best practice advice with regards to building a self-contained service, that is, a DLL with all of the domain logic and data layer. I would like to use an off-the-shelf CMS, such as Orchard, and then talk to the service to carry out CRUD operations. The service should have its own IoC container and ORM; in this case I am using Ninject and Entity Framework. In this design I will have a separate database from the CMS, and can port it to other CMS systems when required.
The CMS should start the service and pass it a connection string or file name. If I use Orchard, it has different ORM and IoC frameworks, so this leads me to wanting to keep Ninject and Entity Framework inside the service.
I have set up an experiment where the DbContext and domain are in the service DLL, and I call it from a console app. This only works if I have Entity Framework referenced in the console application, even though I don't use it in that DLL. Here is the error message when EF is not referenced by the console app:
No Entity Framework provider found for 'System.Data.SqlClient' ADO.NET provider.
Why is this, and what is the best way to solve my design problem?
If your library (DLL) depends on Entity Framework, it's perfectly normal that you need to reference both in your application (whether it's console, web or whatever else). You always need to reference all dependencies.
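A related gotcha worth mentioning for EF6: even with the references in place, EntityFramework.SqlServer.dll is sometimes not copied to the host application's output folder because nothing uses it directly, which produces exactly this "No Entity Framework provider found" error. One well-known (admittedly hacky) workaround is to force a compile-time reference from the service library:

    // Hack sometimes used with EF6: referencing the SQL Server provider type forces
    // EntityFramework.SqlServer.dll to be copied to the consuming app's output folder.
    using System.Data.Entity.SqlServer;

    internal static class EntityFrameworkProviderPin
    {
        // Never used at runtime; the typeof() reference is the whole point.
        private static readonly System.Type ProviderType = typeof(SqlProviderServices);
    }

The cleaner alternative is registering the provider in the host's configuration file, but that again means touching the host project.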
Wiring your custom library with Orchard would be fairly simple. The only thing you'd need to do on the Orchard side would be to register the services coming from your library with Autofac, in order to have them available for dependency injection. This post describes a similar scenario to yours.
Please bear in mind that using multiple database connections is a bit troublesome in Orchard <= 1.6 because of the usage of TransactionScope: you need to run all your custom database code in a suppressed scope, otherwise you'll get transaction errors and/or MSDTC-related problems. It will be a non-issue from Orchard 1.7, which is going to arrive in about a week, so I'd strongly recommend waiting for the new version. You can also fetch the pre-release code from the 1.x branch.
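For reference, the suppressed scope mentioned above looks roughly like this (MyServiceDbContext stands in for your own EF context):

    using System.Transactions;

    // Run custom database work outside Orchard's ambient transaction (Orchard <= 1.6).
    using (var scope = new TransactionScope(TransactionScopeOption.Suppress))
    {
        using (var context = new MyServiceDbContext(connectionString))
        {
            // ... your custom EF code ...
            context.SaveChanges();
        }
        scope.Complete();
    }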
I have recently created a pretty robust API built around Entity Framework's DbContext. I am using a lot of metadata programming and taking advantage of the fact that I can get my data with a call like DbContext.Set(typeof(Customer)). Only, in my API I do not know at compile time what type I will be passing to the Set method. This is working very well with EntityFramework, and I would like to add another layer of abstraction and have it work with either EntityFramework or DataServiceContext. So, I really have two questions.
Firstly, and more specifically, is there a DataServiceContext (i.e. OData/WCF) equivalent to the DbContext.Set(type) method?
Secondly, and more generally, is there a good resource that compares the APIs provided by DbContext with DataServiceContext?
EntityFramework and the Data Services client API should not be mixed. Even though they look similar, they are not the same. DbSet represents an entity set; I don't think there is a strong contract around entity sets in DataServiceContext. Instead, the name of the entity set is passed as a string to the methods that need it (e.g. look at the DataServiceContext.AddObject() or DataServiceContext.CreateQuery() methods). In some sense that makes it much easier to program the DataServiceContext dynamically. On the other hand, you still need to know what is on the other side of the pipe (i.e. the server). As said above, WCF Data Services and EntityFramework are different technologies (even though they can work together) and their APIs, though similar, serve different purposes. Therefore comparing them would be like comparing apples to oranges.
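To make the contrast concrete, here is a small sketch (Customer, the entity set name and the service URI are placeholders):

    using System;
    using System.Data.Entity;             // EF6
    using System.Data.Services.Client;    // WCF Data Services client

    public class Customer { public int Id { get; set; } }

    public static class SetAccessComparison
    {
        public static void Show(DbContext dbContext)
        {
            // EF: the entity set is resolved from a CLR Type known only at runtime.
            Type entityType = typeof(Customer);
            DbSet nonGenericSet = dbContext.Set(entityType);     // DbContext.Set(Type)

            // Data Services client: the entity set is addressed by its name (a string),
            // so "dynamic" usage means supplying the right set name/URI yourself.
            var ctx = new DataServiceContext(new Uri("http://example.com/MyService.svc"));
            var query = ctx.CreateQuery<Customer>("Customers");  // set name as string
            ctx.AddObject("Customers", new Customer());          // same here
        }
    }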
The DbContext API on the client side is not the same as the DbContext on the server side. The main goal is to expose the data and the model, which can be done pretty well. I think you may be over-engineering your app, since WCF Data Services can provide enough functionality.
Here is a link from Ladislav Mrnka, who is very good with Entity Framework; he shows how you could expose your robust API with WCF Data Services.
Implement WCF Data Service using the Repository Pattern
I really like OData (WCF Data Services). In past projects I have coded up so many Web-Services just to allow different ways to read my data.
OData gives great flexibility for the clients to have the data as they need it.
However, in a discussion today, a co-worker pointed out that how we are doing OData is little more than giving the client application a connection to the database.
Here is how we are setting up our WCF Data Service (Note: this is the traditional way)
Create an Entity Framework (EF) Data Model of our database
Publish that model with WCF Data Services
Add Security to the OData feed
(This is where it is better than a direct connection to the SQL Server)
My co-worker (correctly) pointed out that all our clients will be coupled to the database now. (If a table or column is refactored then the clients will have to change too)
EF offers a bit of flexibility in how your data is presented and could be used to hide some minor database changes that don't affect the client apps, but I have found it to be quite limited. (See this post for an example.) I have found that the POCO templates (while nice for allowing separation of the model and the entities) also do not offer very much flexibility.
So, the question: What do I tell my co-worker? How do I setup my WCF Data Services so they are using business oriented contracts (like they would be if every read operation used a standard WCF Soap based service)?
Just to be clear, let me ask this a different way. How can I decouple EF from WCF Data Services. I am fine to make up my own contracts and use AutoMapper to convert between them. But I would like to not go directly from EF to OData.
NOTE: I still want to use EF as my ORM. Rolling my own ORM is not really a solution...
If you use your own custom classes instead of the classes generated directly by EF, you will also change the provider for WCF Data Services. It means you will no longer pass the EF context as the generic parameter to the DataService base class. This will be OK if you have read-only services, but once you expect any data modifications from clients you will have a lot of work to do.
Data services based on an EF context support data modifications. All other data services use the reflection provider, which is read-only by default until you implement IUpdatable on your custom "service context class".
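A reflection-provider service over custom classes looks roughly like this; until the context class also implements IUpdatable it stays read-only (CustomerDto and the data-loading code are placeholders):

    using System.Collections.Generic;
    using System.Data.Services;
    using System.Data.Services.Common;
    using System.Linq;

    [DataServiceKey("Id")]                 // the reflection provider needs to know the key
    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Custom "service context": IQueryable<T> properties become entity sets.
    public class MyServiceContext /* : IUpdatable  -- implement this to allow modifications */
    {
        public IQueryable<CustomerDto> Customers => LoadCustomers().AsQueryable();

        private static IEnumerable<CustomerDto> LoadCustomers()
        {
            // placeholder for your EF query + mapping (e.g. via AutoMapper)
            yield break;
        }
    }

    public class CustomerDataService : DataService<MyServiceContext>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
        }
    }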
Data services are a technology for quickly creating services that expose your data. They are coupled with their context, and it is the responsibility of the context to provide the abstraction. If you want to make quick and easy services, you are dependent on the features supported by EF mapping. You can make some abstractions in the EDMX and you can make projections (DefiningQuery, QueryView), etc., but all these features have limitations (for example, projections are read-only unless you use stored procedures for modifications).
Data services are not the same as providing a connection to the database. There is one very big difference: a connection to the database only ensures access and execution permissions, but it does not ensure data security. WCF Data Services offer data security because you can create interceptors that add filters to queries so that users retrieve only the data they are allowed to see, or that check whether they are allowed to modify the data. That is the difference you can tell your colleague.
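Those interceptors are just attributed methods on the service class; a hedged sketch (the Orders entity set, the OwnerUserName property and the authorization rule are invented for illustration):

    using System;
    using System.Data.Services;
    using System.Linq.Expressions;
    using System.Web;

    public class OrdersDataService : DataService<MyEntities>   // MyEntities = your EF context
    {
        // Every query against Orders is automatically filtered to the current user's rows.
        [QueryInterceptor("Orders")]
        public Expression<Func<Order, bool>> OnQueryOrders()
        {
            var userName = HttpContext.Current.User.Identity.Name;
            return o => o.OwnerUserName == userName;
        }

        // Modifications the user may not make are rejected before they reach the database.
        [ChangeInterceptor("Orders")]
        public void OnChangeOrders(Order order, UpdateOperations operations)
        {
            if (order.OwnerUserName != HttpContext.Current.User.Identity.Name)
                throw new DataServiceException(403, "You may not modify this order.");
        }
    }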
As for abstraction: do you want a quick and easy solution or not? You can inject an abstraction layer between the service and the ORM, but then you need to implement the IUpdatable interface mentioned above yourself, and you have to test it.
Simplest approach:
DO NOT PUBLISH YOUR TABLES ;)
* Make a separate schema
* Add views to it
* Map those views in EF and publish them.
The views are decoupled from the tables and thus can be simplified and refactored separately.
Standard approach, also for reporting.
Apart from achieving more granular data authorisation (based on certain field values, etc.), OData also allows your data to be accessible via open standards like JSON/XML over HTTP using OAuth. This is very useful for web/mobile applications. Now, you could create a web service to expose your data, but that will warrant a change every time your clients' data requirements change (e.g. extra fields needed), whereas OData allows this via OData queries. In a big enterprise this is also useful for designing security at the infrastructure level, as it will only allow text-based (HTTP) calls, which can be inspected/verified for security threats via network firewalls.
You have some other options for your OData client. Have a look at Simple.OData.Client, described in this article: http://www.codeproject.com/Articles/686240/reasons-to-consume-OData-feeds-using-Simple-ODa
And in case you are familiar with Simple.Data microORM, there is an OData adapter for it:
https://github.com/simplefx/Simple.OData/wiki
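For a rough idea, consuming a feed with Simple.OData.Client looks something like this (the service URL, entity set and property names are placeholders, and the fluent API may differ slightly between versions):

    using System;
    using System.Threading.Tasks;
    using Simple.OData.Client;

    public static class ODataClientSample
    {
        public static async Task ListCheapProductsAsync()
        {
            var client = new ODataClient("http://example.com/MyService.svc/");

            // Entity set addressed by name; entries come back as dictionaries.
            var products = await client
                .For("Products")
                .Filter("Price lt 20")
                .FindEntriesAsync();

            foreach (var product in products)
                Console.WriteLine(product["Name"]);
        }
    }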
UPDATE. My recommendations concern the choice of client, while your question is about setting up your server side, so of course they are not what you are asking about. I will however leave my answer so you are aware of the client alternatives.