I have reverse engineered a physical data model from an existing DB using Enterprise Architect. It looks similar to this, which IMHO is an ERM too:
Can I transform this into an ERM like this?:
Thanks!
Not automatically. You are trying to abstract away from the physical model, and only a human can do that, so you need to do it manually. The reverse, however, is possible via a transformation: you take an abstract class model and transform it into one (or many) physical representations.
I am quite new to ArchiMate 3.0 and I am trying to build my model in it. I put an example below. My goal is to model a stream of data where a Source System creates output files in a specific format -> the next step is pulling the data with a processing component and looking up some values in a connected DB -> the final step is delivering this data (pushed by the processing component) to Target Systems.
Q1: Which relationship is correct between Application Component and Interface? In the picture it is Triggering (but maybe Flow fits better)?
Q2: Is the database joined via an Access relationship?
Q3: For my purposes the diagram will need to hold information about the DB structure (columns + types + notes). Any tips on how to manage that in ArchiMate?
Diagram Example Here:
First, it would be nice to read the specification (http://pubs.opengroup.org/architecture/archimate3-doc/chap09.html#_Toc489946063).
Are you sure that you use only the application layer? If so, then the interface is not defined correctly; the interface you specified relates more to the technology layer.
So:
Wrong interface (IMHO). Figure 67 (Application Layer Metamodel) of the spec shows you how elements can be linked on this layer.
The database can be represented as a component, not a DataObject.
In my experience there is no good way. Use the standard reverse engineering mechanism and associate the resulting UML objects with the ArchiMate elements if you need to.
So, I'm working on new software, but I have no choice but to brownfield the database. I would like to use Entity Framework where it makes sense.
Here's my dilemma:
Since the tables are very wide, and I can't change this, I will probably make heavy use of projection to limit the width of the datasets that I query.
I do want to make use of navigation properties where it makes sense
From what I've seen, a lot of people use a model where there is a single DbContext class for the whole project.
So,
I'm weighing these pro's and cons, and I'm wondering what the established best practices might be:
Use 1 DbContext.
There could be A LOT of "pollution" here, with bunches of projections of the data inside of the 1 context class. This sounds like it could become a maintenance nightmare.
Don't make my projections dbsets at all -- just make them plain old objects and select new MyProject {..} into them.
This offers the benefit of keeping my projections in module-specific assemblies and namespaces, but now I get NO navigation/lazy loading/etc. (see the sketch after this list).
Be evil?? and use multiple DbContexts?
I'm not really sure what the maintenance story looks like here, but I'm kind of starting to lean in this direction. My biggest problem with it is that it feels like I'm swimming against the current -- not many people seem to do this, but for a large system, it seems like it could be the best option.
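A minimal sketch of what that projection approach might look like, assuming a hypothetical MyDbContext with an Orders set and a plain OrderSummary projection class:

using System;
using System.Collections.Generic;
using System.Linq;

// Plain projection type kept in a module-specific assembly; it is not a mapped entity.
public class OrderSummary
{
    public Guid OrderID { get; set; }
    public string CustomerName { get; set; }
}

public static class OrderQueries
{
    // Select only the columns needed instead of materializing the whole wide entity.
    public static List<OrderSummary> GetOrderSummaries(MyDbContext context)
    {
        return context.Orders
            .Select(o => new OrderSummary
            {
                OrderID = o.OrderID,
                CustomerName = o.Customer.Name
            })
            .ToList();
    }
}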
Thoughts?
I think you must use POCOs or DTOs for data transfer between the different layers of the application. Use a ViewModel to send data to the View.
Consider using the Repository pattern and Unit of Work (UoW) to get a better and more efficient architecture in this scenario. Limit the use of navigation properties to the repository; otherwise they make the entities heavy when transferring them across layers (use POCOs or DTOs).
If you are doing the above, then I do not think using multiple DbContexts would give you any benefit. Thanks.
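A rough sketch of that layering (MyDbContext, Order, and OrderDto are hypothetical names):

using System;
using System.Linq;

// DTO handed to the upper layers instead of the EF entity.
public class OrderDto
{
    public Guid OrderID { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderRepository
{
    OrderDto GetById(Guid id);
}

public class OrderRepository : IOrderRepository
{
    private readonly MyDbContext _context;   // the context acts as the unit-of-work boundary

    public OrderRepository(MyDbContext context)
    {
        _context = context;
    }

    public OrderDto GetById(Guid id)
    {
        // Navigation properties stay inside the repository; only the flat DTO leaves it.
        return _context.Orders
            .Where(o => o.OrderID == id)
            .Select(o => new OrderDto { OrderID = o.OrderID, Total = o.Total })
            .Single();
    }
}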
I'm starting to work with graph databases, and in my team we've started modeling a graph for our software. The problem comes when we try to "document" the model, to see the structure of our database. With SQL databases you only have to look at the SQL schema.
We've spent some time reading neo4j blogs and documentation, but we've seen that the usual way to show how a graph works is with a minimal graph showing some sample data (Random samples: sample1, sample2, etc). That's great for educational purposes, but we'd love to be able to do it in a little more formal way. We'd like to set what kind of node can relate with another one, and with what kind of relationship, that kind of stuff.
Using Spring you can wrap the graph with classes, but it's very specific to Java and OO model, and we're working with Erlang. We're looking for some kind of formal language (SQL Schema equivalent), or a E-R model equivalent, or something like that.
One way to do this is to put the "meta-model" of your graph (a type network) in the graph as well, and then connect the instances (nodes) to their meta-model type. That way you can visualize the meta-model with the same graph visualization, use it to enforce additional constraints (by storing constraint information in the meta-model and consulting it when the actual model is updated), and use the type nodes of the meta-model to quickly access all "instance" nodes of a given type.
What is the domain you want to model?
A quick idea - could you use a subset of UML? Graph modeling seems to be closer to the domain, so maybe that's reasonable.
What we do is a generalization of the "example data" approach, where we include cardinality on each side of a relationship, as well as type and direction. I also often include a node "type" in the diagram (or some other specification of its role/relation to the domain model) instead of example data, and of course note the expected properties, their types, and whether they are optional. It's less than formal, but it has served well so far.
I found a great tutorial on how to use Model Binding and List, Editable Grid / List Binding in MVC2. It shows how to create objects containing lists of type List<T>. But when I use the ADO.NET entity data model I cannot do this:
SomeEntityCollection[i]
and therefore I cannot do what is done in the tutorial.
Is there a way to work around this? Maybe make ADO.NET use lists instead, if that's possible?
The best way I have found to map from ADO.NET to models is to use AutoMapper. It's a very elegant way to formalize mapping between structures.
From their site:
AutoMapper uses a fluent configuration API to define an object-object mapping strategy. AutoMapper uses a convention-based matching algorithm to match up source to destination values. Currently, AutoMapper is geared towards model projection scenarios to flatten complex object models to DTOs and other simple objects, whose design is better suited for serialization, communication, messaging, or simply an anti-corruption layer between the domain and application layer.
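For reference, a minimal sketch of such a mapping (Order and OrderViewModel are hypothetical types; the exact configuration API depends on the AutoMapper version, older releases used the static Mapper.CreateMap):

using System.Collections.Generic;
using AutoMapper;

public class Order
{
    public int OrderID { get; set; }
    public string CustomerName { get; set; }
}

public class OrderViewModel
{
    public int OrderID { get; set; }
    public string CustomerName { get; set; }
}

public static class MappingExample
{
    public static List<OrderViewModel> ToViewModels(List<Order> orders)
    {
        // Convention-based mapping: properties with matching names are copied automatically.
        var config = new MapperConfiguration(cfg => cfg.CreateMap<Order, OrderViewModel>());
        IMapper mapper = config.CreateMapper();

        // Mapping the collection yields a plain List<T> that MVC list binding can work with.
        return mapper.Map<List<OrderViewModel>>(orders);
    }
}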
I am working on a project using Entity Framework. Is it okay to use partial classes of the EF-generated classes as the business layer? I am beginning to think that this is how EF is intended to be used.
I have attempted to use a DTO pattern and soon realized that I am just creating a bunch of mapping classes, which duplicates my effort and is also a cause for more maintenance work and an additional layer.
I want to use self-tracking-entities and pass the EF entities to all the layers. Please share your thoughts and ideas. Thanks
I had a look at using partial classes and found that exposing the database model up towards the UI layer would be restrictive.
For a few reasons:
The entity model created includes a deep relational object model which, depending on your schema, would get exposed to the UI layer (say the presenter of MVP or the ViewModel in MVVM).
The business logic layer typically exposes operations that you can code against. If you see a save method on the BLL, look at the parameters needed to do the save, and see a model that requires the construction of other entities (because of the relational nature of the entity model) just to do the save, the operation is not being kept simple.
If you have a bunch of web services then the extra data will need to be sent across for no apparent gain.
You can create more immutable DTOs for your operations' parameters rather than encountering side effects because the same instance was modified in some other part of the application.
If you do TDD and follow YAGNI, then you will tend to have a structure designed specifically for the operation you are writing, which is easier to construct tests against (you don't have to create other objects unrelated to the test just because they are on the model). In this case you might have...
public class Order
{
    ...
    public Guid CustomerID { get; set; }
    ...
}
Instead of using the entity model generated by EF, which has references exposed...
public class Order
{
    ...
    public Customer Customer { get; set; }
    ...
}
This way only the ID of the customer is needed for an operation that takes an order. Why would you need to construct a Customer (and potentially other objects as well) for an operation that is only concerned with taking orders?
If you are worried about the duplication and mapping, then have a look at AutoMapper.
I would not do that, for the following reasons:
You lose the clear distinction between the data layer and the business layer
It makes the business layer more difficult to test
However, if you have some data-model-specific code, place it in a partial class to avoid it being lost when you regenerate the model.
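For example, a minimal sketch of that split (the Customer entity and its members are hypothetical):

// Generated by EF (e.g. in the designer file); regenerating the model rewrites this part.
public partial class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Your additions live in a separate file and survive regeneration.
public partial class Customer
{
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }
}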
I think partial classes would be a good idea. If the model is regenerated, you will not lose the business logic in the partial classes.
As an alternative you can also look into EF4 Code Only, so that you don't need to generate your model from the database.
I would use partial classes. There is no such thing as a data layer in DDD-ish code. There is a data tier, and it resides on SQL Server. The application code should only contain a business layer and some mappings which allow persisting business objects in the mentioned data tier.
Entity Framework is your data access code, so you shouldn't build your own. In most cases the database schema would be modified because the model has changed, not the other way around.
That being said, I would discourage you from sharing your entities across all the layers. I value separation of the UI and the domain layer. I would use DTOs to transfer data in and out of the domain. If I have the necessary freedom, I would even use the CQRS pattern to get rid of mapping entities to DTOs -- I would simply create a second EF data access project meant only for reading data for the UI, built on top of the same database. You read data through the read (anemic -- without business logic) model, but you modify it by issuing commands that are executed against the real model implemented using EF and partial methods.
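A rough sketch of what that read side could look like (all names are hypothetical, assuming the EF DbContext API):

using System;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;

// Anemic read model: no business logic, shaped for the UI.
public class OrderReadModel
{
    [Key]
    public Guid OrderID { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

// Second context, built on top of the same database, used only for reads.
// (Table mapping would be configured to point at the existing tables.)
public class ReadDbContext : DbContext
{
    public DbSet<OrderReadModel> Orders { get; set; }
}

// Writes go through commands executed against the real domain model instead.
public class ShipOrderCommand
{
    public Guid OrderID { get; set; }
}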
Does this answer your question?
I wouldn't do that. Try to keep the layers as independent as possible, so a tiny change in your database schema will not affect all your layers.
Entities can be used in the data layer, but they should not be used beyond it.
If anything, provide interfaces to be used and let your entities implement them (in the partial file); the BL should not know the entities, only the interfaces.
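A minimal sketch of that idea (Order, OrderID, and Total are hypothetical; the generated entity is assumed to already expose matching properties):

using System;

// The EF-generated half of the entity (normally in the designer file).
public partial class Order
{
    public Guid OrderID { get; set; }
    public decimal Total { get; set; }
}

// The interface the business layer codes against.
public interface IOrderInfo
{
    Guid OrderID { get; }
    decimal Total { get; }
}

// Your half of the partial class only declares that the entity satisfies the interface.
public partial class Order : IOrderInfo
{
}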