NEventStore get projection which lists all aggregates of a given type - cqrs

I am using the IRepository interface from the NEventStore.Domain project.
I would like to create a projection that is a list of all the aggregates of a given aggregate type. How would I go about doing this?
So if I have a ReportBatch aggregate, I am looking to list all the report batches I have saved. How can I accomplish this? Am I barking up the wrong tree with projections? Should I just be saving to a ReportBatchList aggregate when I get Created events for the ReportBatch?

Should I just be saving to a ReportBatchList aggregate when I get Created events for the ReportBatch?
No. You should have a projection that writes to a read model each time it receives a Created event. You would then query this read model to get the list. The read model could be a database (SQL or NoSQL), an in-memory construct, a text file, etc.
Note that it will not be "a list of all the aggregates of the given aggregate type". It's a read model, and while a read model may have knowledge of data that is generated by an aggregate, it does not directly represent an aggregate.
Event Sourcing is an advanced form of Command Query Responsibility Segregation (CQRS), in which writes (aggregates et al) and reads (projections and read models) are conceptually completely separate.
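A minimal sketch of such a projection, with hypothetical event, read-model, and handler names (the actual wiring depends on how you dispatch events from NEventStore to your projections):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical Created event raised by the ReportBatch aggregate.
public class ReportBatchCreated
{
    public Guid ReportBatchId { get; set; }
    public string Name { get; set; }
}

// One row of the read model; it could equally live in SQL, a document
// store, a text file, etc.
public class ReportBatchListEntry
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// Projection: updates the read model each time a Created event is observed.
public class ReportBatchListProjection
{
    private readonly IList<ReportBatchListEntry> _readModel; // in-memory for the sketch

    public ReportBatchListProjection(IList<ReportBatchListEntry> readModel)
    {
        _readModel = readModel;
    }

    public void Handle(ReportBatchCreated e)
    {
        _readModel.Add(new ReportBatchListEntry { Id = e.ReportBatchId, Name = e.Name });
    }
}
```

Querying the list is then just a read against _readModel (or a SELECT against the equivalent table), with no aggregates involved.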

Related

merge - upsert/delete in google cloud datastore

I am working on a POC (to move part of the functionality from a relational DB to Cloud Datastore). I have a few questions:
1) I would need to refresh a few kinds every night, as the data comes from a different data source (via flat files). From what I have read, there is no TRUNCATE-like functionality in Datastore; I believe the only option is to retrieve the keys of the kind in a loop, delete entity by entity, and then use the import functionality to load the new set of data. Is there a better option?
2) Assume I have a kind called department and a kind called store, and now I need a kind called dept-store whose parent nodes are department and store. Is there a way to enforce this kind of relationship? From the documentation I see that there can only be one parent.
3) If I have a child entity in kind1 whose parent is present in kind2, and they are linked together, is there a way to query all the properties present in kind1 and kind2 together? From a relational DB perspective, this is like an equi-join with SELECT *. I am looking for equivalent functionality in Datastore.
In order to answer your questions:
1) There are two ways to delete multiple entities. First, you can use Cloud Dataflow to delete entities in bulk [1]. Second, once the keys are retrieved, you can make a batch delete operation by passing the keys to the Datastore delete function; there is a usage example here [2], and a sketch follows after this list. To retrieve the keys you can run a keys-only query [3].
2) In Datastore an entity can have only one parent but can have multiple children. For your use case, though, you could try a third kind, dept-store, and assign its properties to be the keys of the entities from the department and store kinds. This solution may need a good understanding of your needs to implement, as Datastore is by nature a non-relational database.
3) You can look up multiple entities by providing the keys retrieved from kind1 and kind2 with batch operations [2].
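As an illustration of 1), a hedged sketch using the .NET client library (Google.Cloud.Datastore.V1), with a hypothetical project ID and kind name:

```csharp
using System.Linq;
using Google.Cloud.Datastore.V1;

// Hypothetical project ID and kind name.
var db = DatastoreDb.Create("my-project-id");

// Keys-only query [3]: returns just the keys, cheaper than full entities.
var query = new Query("department") { Projection = { "__key__" } };
var keys = db.RunQuery(query).Entities.Select(e => e.Key).ToList();

// A single commit is limited to 500 mutations, so delete in batches [2].
for (var i = 0; i < keys.Count; i += 500)
{
    db.Delete(keys.Skip(i).Take(500));
}
```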

OData REST API where table has columns unique to customer

We would like to create an OData REST API. Our data model is such that each customer has their own database. All database objects have the same definition across all customer databases, with the exception of a single table.
We will call the customer-specific table Contact. When a customer adds a column, the system creates a column with a standardised name whose definition is translated from options selected by the user in the UI. The user only ever refers to the column data by a field name they have specified, which lets them write friendly queries.
It seems to me that the following approaches could be used to enable OData for the model described:
1) Create an OData open type to cater for the dynamic properties. This has the disadvantage that a user's requests give no indication of which dynamic properties can be queried against, even though they are known for the user (via token authentication). Also, because dynamic properties are a dictionary, some data pivoting and inefficient query writing would be required. I am also not sure how to implement the IQueryable handling of query options for the dynamic properties to enable our own custom field querying.
2) Create a POCO class with e.g. 50 properties; CustomField1, CustomField2... Then somehow control which fields are exposed for use in OData calls. We would then include a separate API call to expose the custom field mapping. E.g. custom field friendly name of MobileNumber = CustomField12.
3) At runtime, check whether the column definitions of the table have changed since the last check. If they have, generate a class specific to the customer using CodeDom and register it with OData, aiming for a unique URL for each customer, e.g. http://domain.name/{customer guid}/odata
I think the ideal for us is option 2. However, given that CustomField1 could have an underlying SQL data type of nvarchar, int, decimal, datetime, etc., there are added complications.
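For illustration, the shape option 2 might take (hypothetical names; one way to handle the mixed data types is a separate pool of columns per SQL type):

```csharp
// Hypothetical option 2 POCO: a fixed pool of custom columns. The data-type
// complication could be eased by pools per type (string, int, decimal, ...).
public class Contact
{
    public int Id { get; set; }
    public string CustomField1 { get; set; }
    public string CustomField2 { get; set; }
    // ... further CustomFieldN columns
}

// Exposed via the separate mapping API call so clients can resolve friendly
// names, e.g. "MobileNumber" -> "CustomField12".
public class CustomFieldMapping
{
    public string FriendlyName { get; set; }
    public string ColumnName { get; set; }
}
```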
Does anyone have a satisfactory working example of how to achieve what has been described?
Thanks in advance for any help.
Rik
We have run into a similar situation but with our entire dataset being unknown until runtime. Using the ODataConventionModelBuilder and EdmModel classes, you can add properties dynamically to the model at runtime.
I'm not sure whether you will have to manually add all of the properties for this object type even though only some of them are unknown, or whether you can add your main object and then add your dynamic ones afterwards, but I would guess either is workable.
If you can work out which type of user it is on the server, you could then add only the properties that you are interested in (like option 3, but without having to resort to CodeDom).
There is an example of this kind of untyped OData server in the OData samples here that should get you started: https://github.com/OData/ODataSamples/tree/master/WebApi/v4/ODataUntypedSample
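A hedged sketch of building the EDM model at runtime with the Microsoft.OData.Edm API (namespace, type, and property names are hypothetical; the per-customer properties would come from your own metadata):

```csharp
using Microsoft.OData.Edm;

// Build the model at runtime instead of from compile-time POCOs.
var model = new EdmModel();

var contact = new EdmEntityType("CustomerNs", "Contact");
contact.AddKeys(contact.AddStructuralProperty("Id", EdmPrimitiveTypeKind.Int32));

// Add only the custom fields this customer actually has, with their real types.
contact.AddStructuralProperty("MobileNumber", EdmPrimitiveTypeKind.String);
contact.AddStructuralProperty("Balance", EdmPrimitiveTypeKind.Decimal);
model.AddElement(contact);

var container = new EdmEntityContainer("CustomerNs", "Default");
container.AddEntitySet("Contacts", contact);
model.AddElement(container);
```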
The research we carried out actually posed Option 1 as the most suitable approach for some operations, i.e. create an SQL view that unpivots the data in a table to a key/value pair of column name/column value for each column in the table. This was suitable for queries returning small datasets, was far less effort than Option 3, and was less confusing for the user than Option 2. The unpivot query converted the field values to nvarchar (string) values, which meant that filtering in the UI by column value data type was not simple to achieve. (If we decide to implement this ability, I believe it can be achieved by creating a custom attribute that derives from EnableQueryAttribute, marking the controller action with it, and manipulating the IQueryable before execution; see the sketch below.)
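A minimal sketch of that attribute idea (the exact namespaces vary with the Web API OData version, e.g. System.Web.OData vs Microsoft.AspNet.OData; the name-mapping logic itself is hypothetical):

```csharp
using System.Linq;
using Microsoft.AspNet.OData;
using Microsoft.AspNet.OData.Query;

// Runs before the standard OData query options are applied, giving us a
// chance to rewrite friendly field names onto the unpivoted key/value view.
public class CustomFieldQueryAttribute : EnableQueryAttribute
{
    public override IQueryable ApplyQuery(IQueryable queryable, ODataQueryOptions queryOptions)
    {
        // Hypothetical hook: map e.g. "MobileNumber" to "CustomField12" here
        // before $filter/$orderby are applied.
        var rewritten = queryable; // placeholder for the real mapping
        return base.ApplyQuery(rewritten, queryOptions);
    }
}
```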
However, we wanted to expose a /Contacts/Export endpoint that, when called, would output the columns from a table with a fixed schema joined to a table with a client-specific schema, and write the result to a CSV file, all the while utilising the OData-supported filter syntax. One of our customer databases has more than 12 million rows of data spread across approximately 30 columns.
To achieve this, it looks like our best bet would have been to work with the Microsoft.OData.Core.UriParser.UriQueryExpressionParser class; unfortunately, Microsoft in their wisdom have declared it internal, as well as many of its dependants.
Walking an abstract syntax tree built from the OData-supported query options and applying our own visitor to each node to build a dynamic LINQ query/SQL seems like a possible solution.
For the time being we will simply implement a cut-down set of supported $filter criteria without support for grouping parentheses.

Data Mapper pattern implementation with zend

I am implementing the Data Mapper pattern in my Zend Framework 1.12 project and it is working as expected. To enhance it, I now want to optimize it in the following way.
When fetching data, what if I want to fetch only 3 of the 10 fields in my model table? The current issue is that if I fetch only the required values, the other values in the domain object remain blank, and when saving I then write back the whole model object rather than the individual field values.
Can anyone suggest an efficient way of doing this, so that I can fetch/update only the required values without needing to fetch all field data to update a record?
If a property is NULL, ignore it when crafting the UPDATE? If NULLs are valid values, then I think you would need to track loaded/dirty state per property.
How do you go about white-listing the fields to retrieve when making the call to the mapper? If you can persist that information, I think it would make sense to leverage that knowledge when crafting the UPDATE.
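A minimal sketch of the per-property dirty-tracking idea (shown in C# purely for illustration; the same shape applies to a PHP/Zend domain object), so the mapper can emit an UPDATE containing only the modified columns:

```csharp
using System.Collections.Generic;

// Hypothetical domain object that records which properties have been set,
// so the mapper can build an UPDATE containing only the dirty columns.
public class Report
{
    private readonly Dictionary<string, object> _dirty = new Dictionary<string, object>();

    private string _title;
    public string Title
    {
        get { return _title; }
        set { _title = value; _dirty["title"] = value; }
    }

    private string _status;
    public string Status
    {
        get { return _status; }
        set { _status = value; _dirty["status"] = value; }
    }

    // The mapper reads this to build e.g. "UPDATE reports SET title = ?".
    public IReadOnlyDictionary<string, object> DirtyFields
    {
        get { return _dirty; }
    }

    public void MarkClean() { _dirty.Clear(); }
}
```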
I don't typically go down this path. I will lazy load certain fields on a model when it makes sense, but I don't allow loading parts of the object like this; instead, I create an alternate object for use in rendering a list when loading the full object is too resource intensive: a generic dummy list object I use with tabular data, populated from SQL or stored procedure result sets, usually with my generic table mapper.

Saving model object to database -MVC

I have a model object that contains many variables and arrays, and I need to save most of the model's values to the database for future use. How can I achieve this?
I created a table. Must I insert each and every value individually, or is there another way?
You may take a look at the tutorials here which illustrate how you could use Entity Framework within your application to map those models to SQL tables and perform queries.
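A minimal sketch of that approach (Entity Framework 6 shown; class names are hypothetical and connection-string configuration is omitted):

```csharp
using System;
using System.Data.Entity; // Entity Framework 6; EF Core is analogous

// Hypothetical model: one class maps to one table, one property per column.
public class Report
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime CreatedOn { get; set; }
}

public class AppDbContext : DbContext
{
    // Connection-string configuration omitted for brevity.
    public DbSet<Report> Reports { get; set; }
}

public static class Program
{
    public static void Main()
    {
        // The whole object is persisted in one call; no per-column inserts.
        using (var db = new AppDbContext())
        {
            db.Reports.Add(new Report { Name = "Q1", CreatedOn = DateTime.UtcNow });
            db.SaveChanges();
        }
    }
}
```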

How do I use entity framework with hierarchical data?

I'm working with a large hierarchical data set in sql server - modelled using the standard "EntityID, ParentID" kind of approach. There are about 25,000 nodes in the whole tree.
I often need to access subtrees of the tree, and then access related data that hangs off the nodes of the subtree. I built a data access layer a few years ago based on table-valued functions, using recursive queries to fetch an arbitrary subtree, given the root node of the subtree.
I'm thinking of using Entity Framework, but I can't see how to query hierarchical data like this. AFAIK there is no recursive querying in LINQ, and I can't expose a TVF in my entity data model.
Is the only solution to keep using stored procs? Has anyone else solved this?
Clarification: By 25,000 nodes in the tree I'm referring to the size of the hierarchical dataset, not to anything to do with objects or the Entity Framework.
It may be best to use a pattern called "Nested Set", which allows you to fetch an arbitrary subtree in a single query. This is especially useful if the nodes aren't manipulated very often: Managing hierarchical data in MySQL.
In a perfect world the Entity Framework would provide the means to save and query data using this pattern.
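A sketch of what querying a nested set can look like from LINQ/Entity Framework (hypothetical entity; Left/Right are the bounds assigned by a depth-first numbering of the tree):

```csharp
using System.Linq;

// Hypothetical nested-set entity mapped to the tree table.
public class Node
{
    public int EntityId { get; set; }
    public int Left { get; set; }   // visit number on the way down
    public int Right { get; set; }  // visit number on the way back up
}

public static class NestedSetQueries
{
    // A node is in the subtree iff its [Left, Right] interval lies inside
    // the root's interval; no recursion needed, so it translates to plain SQL.
    public static IQueryable<Node> Subtree(IQueryable<Node> nodes, Node root)
    {
        return nodes.Where(n => n.Left >= root.Left && n.Right <= root.Right);
    }
}
```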
Everything IS possible with Entity Framework, but you have to hack and slash your way into it. The database I am currently working against has too many "holder tables", since Points, for instance, is shared by both teams and users. Both users and teams can also have a blog.
When you say 25,000 nodes, do you mean navigational properties? If so, I think it could be tricky to get the data access in place. It's not hard to navigate, search, etc. with Entity Framework, but I tend to model on paper and then create the database based on how I want to navigate while using Entity Framework. It sounds like you don't have that option.
Thanks for these suggestions.
I'm beginning to realise that the answer is to remodel the data in the database - either along the lines of nested sets as Georg suggests, or maybe a transitive closure table, which I've just come across.
That way, I'm hoping to get two key benefits:
a) faster querying against arbitrary subtrees
b) a data model which no longer requires recursive querying - so perhaps bringing it within easy reach of the Entity Framework!
It's always amazing how so often the right answer to a difficult problem is not to answer it, but to do something else instead!