AddItemToSet vs StoreRelatedEntities - ServiceStack.Redis

I am trying to understand when someone would use AddItemToSet vs StoreRelatedEntities.
It seems the former is a way to associate a set label with a string-based item handle.
The latter is a way to associate two entities, which seems like a more generalized operation.
What is it that AddItemToSet does that StoreRelatedEntities can't do?
Thanks

The AddItemToSet API in ServiceStack.Redis is a 1:1 mapping to Redis' server-side SADD operation, i.e. it adds an item to a Redis SET.
StoreRelatedEntities is a higher-level operation that also maintains an index recording the relationship between the entities, described in detail in this Storing Related Entities in Redis answer.
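For illustration, a minimal sketch of both calls, assuming the usual ServiceStack.Redis client API (the Customer/Order types, the set name and the ids are made up for this example):

using System.Collections.Generic;
using ServiceStack.Redis;

using (IRedisClient redis = new RedisClient("localhost"))
{
    // AddItemToSet is a raw SADD: it just adds a string member to a named SET
    redis.AddItemToSet("urn:tags:redis", "servicestack");

    // StoreRelatedEntities stores the child entities AND maintains an index
    // set that records their relationship to the parent id
    var redisCustomers = redis.As<Customer>();
    redisCustomers.StoreRelatedEntities(1, new List<Order>
    {
        new Order { Id = 1, Total = 12.50m },
        new Order { Id = 2, Total = 9.99m },
    });

    // that index is what lets you query the relationship back later
    List<Order> customerOrders = redisCustomers.GetRelatedEntities<Order>(1);
}

public class Customer { public long Id { get; set; } }
public class Order { public long Id { get; set; } public decimal Total { get; set; } }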

hibernate-search for one-directional associations

According to the spec, when @IndexedEmbedded points to an entity, the association has to be directional and the other side has to be annotated with @ContainedIn. If not, Hibernate Search has no way to update the root index when the associated entity is updated.
Am I right to assume the word directional should be bi-directional? I have exactly that problem: my index is not updated. I have one-directional relationships, e.g. person to order, but the order does not know the person. Now when I change the order, the index is not updated.
If changing the associations to become bi-directional is not an option, what possibilities would I have to still use hibernate-search? Would it be possible to create two separate indices and to combine queries?
Am I right to assume the word directional should be bi-directional?
Yes. I will fix this typo.
If changing the associations to become bi-directional is not an option, what possibilities would I have to still use hibernate-search?
If Person is indexed and embeds Order, but Order doesn't have an inverse association to Person, then Hibernate Search cannot retrieve the Persons that have to be reindexed when an Order changes.
Thus you will have to reindex manually: https://docs.jboss.org/hibernate/search/5.11/reference/en-US/html_single/#manual-index-changes .
You can adopt one of two strategies:
The easy path: reindex all the Person entities periodically, e.g. every night.
The hard path: reindex the affected Person entities whenever an Order changes. This basically means adding code to your services so that whenever an order is created/updated/deleted, you run a query to retrieve all the corresponding persons, and reindex them manually.
The first solution is fairly simple, but has the big disadvantage that the Person index will be up to 24 hours out of date. Depending on your use case, that may or may not be acceptable.
The second solution is prone to errors and you would basically be doing Hibernate Search's work.
All in all, you really have to ask yourself if adding the inverse side of the association to your model wouldn't be better.
Would it be possible to create two separate indices and to combine queries?
Technically, if you are using the Lucene integration (not the Elasticsearch one), then yes, it would be possible.
But:
you would need above-average knowledge of Lucene.
you would have to bypass Hibernate Search APIs, and would need to write code to do what Hibernate Search usually does.
you would have to use experimental (read: unstable) Lucene APIs.
I am unsure how well that would perform, as I have never tried it.
So I wouldn't recommend it if you're not familiar with Lucene's APIs. If you really want to take that path, here are a few pointers:
How to use the index readers directly: https://docs.jboss.org/hibernate/search/5.11/reference/en-US/html_single/#IndexReaders
Lucene's documentation for joins (what you're looking for is query-time joins): https://lucene.apache.org/core/5_5_5/join/org/apache/lucene/search/join/package-summary.html

How to expose read model from shared module

I am working on developing a set of assemblies that encapsulate parts of our domain that will be shared by many applications. Using the example of an order management system, one such assembly will contain all of the core operations an application can perform to/with an order. We are applying a simple version of CQS/CQRS so that all operations that change the state of the "system" are represented as public commands, such as CancelOrderCommand, ShipOrderCommand and CreateOrderCommand. The command handlers are internal to the assembly.
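To make that shape concrete, here is a minimal sketch of what I mean (names as above; the handler body is elided):

using System;

// Public: the only way consuming applications can change state.
public class CancelOrderCommand
{
    public Guid OrderId { get; set; }
    public string Reason { get; set; }
}

// Internal: consumers never see how the state change is carried out.
internal class CancelOrderCommandHandler
{
    public void Handle(CancelOrderCommand command)
    {
        // load the Order aggregate, apply the cancellation, persist it ...
    }
}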
The question I am struggling to answer is how to best expose the read model to consuming code?
The read model will be used by consuming code to perform queries. I don't know all of the ways the read model will be used, so the interface needs to be flexible enough to allow any query.
What complicates it for me is that I not only need to expose my aggregate root, but there are also several "lookup" lists of related data that client applications may use. For example, each order has an associated OrderType which is data-driven (i.e., not an enum) and contains several properties that will drive some of our business rules controlling what operations can/cannot be performed, etc. It is easy to manage this relationship inside my module; however, a client application that allows order creation will most likely need to display the list of possible OrderTypes to the user. As a result, I need to expose not only the list of Order aggregates but also the supporting list of OrderTypes (and other lookup lists) from my read model.
How is this typically done?
I'm not sure what else to explain that will help trigger a solution, so please ask away...
I have never seen a CQRS-based implementation expose a full dataset for ad-hoc querying, so this is an interesting situation! In a typical CQRS scenario you would expose very specific queries because you may want to raise events when they are called (for caching, for example - see this post for more details on that).
However since this is your design, let's not worry about "typical" or "correct" CQRS, I guess you just need a solution! One of the best new mechanisms for exposing data for flexible querying I have seen is the Open Data Protocol (OData). It will allow consumers to implement their own filtering, sorting and paging over a data source you expose.
Most implementations of this seem to deal with relational data. If you are dealing with a relational data source then OData might be a nice way to go. I suspect by your comment of "expose my aggregate root" that you might be using a document database? If so, there is one example I have seen of OData services on top of MongoDB: http://bloggingabout.net/blogs/vagif/archive/2012/10/11/mongodb-odata-provider-now-supports-arrays-and-nested-collections.aspx.
I hope that helps, OData is definitely worth looking into. It seems to be growing really quickly and is getting good support on both server and client technology platforms.
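As a rough sketch of what that could look like, assuming ASP.NET Core with the Microsoft.AspNetCore.OData package (OrderReadContext, Order and OrderType are illustrative stand-ins for your read model): returning IQueryable lets each consumer compose its own $filter/$orderby/$top query, and the lookup lists are exposed the same way as the aggregates.

using System.Linq;
using Microsoft.AspNetCore.OData.Query;
using Microsoft.AspNetCore.OData.Routing.Controllers;
using Microsoft.EntityFrameworkCore;

public class Order     { public int Id { get; set; } public int OrderTypeId { get; set; } }
public class OrderType { public int Id { get; set; } public string Name { get; set; } }

public class OrderReadContext : DbContext       // hypothetical read-side context
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<OrderType> OrderTypes { get; set; }
}

public class OrdersController : ODataController
{
    private readonly OrderReadContext _db;
    public OrdersController(OrderReadContext db) => _db = db;

    [EnableQuery]   // lets consumers apply $filter, $orderby, $top, $skip
    public IQueryable<Order> Get() => _db.Orders;
}

// The OrderType lookup list is just another queryable resource.
public class OrderTypesController : ODataController
{
    private readonly OrderReadContext _db;
    public OrderTypesController(OrderReadContext db) => _db = db;

    [EnableQuery]
    public IQueryable<OrderType> Get() => _db.OrderTypes;
}

A consumer could then issue, say, GET /odata/OrderType?$filter=Name eq 'Standard' without the shared assembly having to anticipate that query.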

Need some advice concerning MVVM + Lightweight objects + EF

We are developing a back-office application with quite a large DB.
It's not reasonable to load everything from the DB into memory, so when the model's properties are requested we read them from the DB (via EF).
But many of our UIs are just simple lists of entities with only some (!) of their properties presented to the user.
For example, we just want to show Id, Title and Name.
And later, when the user selects an item and wants to perform some actions, the whole object is needed. Now we have the list of items stored in memory.
Some properties contain large texts, images or other data.
EF works with entities, and reading a bunch of large objects degrades performance notably.
As far as I understand, the problem can be solved by creating lightweight entities and using them in appropriate context.
First.
I'm afraid that each view will make us create a new lightweight entity, and we will eventually end up with a bloated object context.
Second. As the Model wraps EF, we need to provide methods for various entities.
Third. ViewModels communicate and pass entities to each other.
So I'm stuck with all these considerations and need good architectural design advice.
Any ideas?
For images and large texts you may consider table splitting, which is commonly used to split a table into a lightweight entity and a "heavy" entity.
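A minimal sketch of table splitting, assuming EF Core (OrderHeader, OrderBlob and the "Orders" table are illustrative names): both entity types map to the same table, and the heavy columns are only loaded when the heavy entity is materialized.

using Microsoft.EntityFrameworkCore;

public class OrderHeader                  // the lightweight half used by lists
{
    public int Id { get; set; }
    public string Title { get; set; }
    public OrderBlob Blob { get; set; }   // navigation to the heavy half
}

public class OrderBlob                    // the heavy half, loaded on demand
{
    public int Id { get; set; }           // same key, same row
    public byte[] Image { get; set; }
    public string LargeText { get; set; }
}

public class BackOfficeContext : DbContext
{
    public DbSet<OrderHeader> OrderHeaders { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // both entity types map to the single "Orders" table, sharing the key
        modelBuilder.Entity<OrderHeader>(b =>
        {
            b.ToTable("Orders");
            b.HasOne(h => h.Blob).WithOne()
             .HasForeignKey<OrderBlob>(blob => blob.Id);
        });
        modelBuilder.Entity<OrderBlob>().ToTable("Orders");
    }
}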
But I think what you call lightweight "entities" are data transfer objects (DTOs). These are not supplied by the context (so it won't get bloated) but by projection from entities, which is done in a repository or service.
For projection you can use AutoMapper, especially its newer feature that I describe here. This allows you to reduce the number of methods you need to provide "for various entities" (DTOs), because the type to project to can be given in a generic type parameter.
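For example, a minimal sketch using AutoMapper's queryable extensions (Order, OrderListItemDto and the query source are illustrative); because ProjectTo folds the mapping into the generated SQL SELECT, the large text/image columns are never read:

using System.Collections.Generic;
using System.Linq;
using AutoMapper;
using AutoMapper.QueryableExtensions;

public class Order                         // illustrative entity
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Name { get; set; }
    public byte[] Image { get; set; }      // heavy column we don't want in lists
}

public class OrderListItemDto              // DTO with only what the list view shows
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Name { get; set; }
}

public static class OrderQueries
{
    private static readonly MapperConfiguration Config =
        new MapperConfiguration(cfg => cfg.CreateMap<Order, OrderListItemDto>());

    // Pass in context.Orders; only Id, Title and Name end up in the SELECT.
    public static List<OrderListItemDto> GetOrderList(IQueryable<Order> orders) =>
        orders.ProjectTo<OrderListItemDto>(Config).ToList();
}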

Multiple entity replacement in a RESTful interface

I have a service with some entities that I would like to expose in a RESTful way. Due to some of the requirements, I'm having trouble finding an approach I'm happy with.
These are the 'normal' operations I intend to support:
GET /rest/entity[?filter=<query>] # Return (matching) entities. The filter is optional and just a convenience for us CLI curl-users :)
GET /rest/entity/<id> # Return specific entity
POST /rest/entity # Creates one or more new entities
PUT /rest/entity/<id> # Updates specific entity
PUT /rest/entity # Updates many entities (json-dict or multipart. Haven't decided yet)
DELETE /rest/entity/<id> # Deletes specific entity
DELETE /rest/entity # Deletes all entities (dangerous but very useful to us :)
Now, the additional requirements:
We need to be able to replace the entire set of entities with a completely new set of entities (merging can occur internally as an optimization).
I thought of using POST /rest/entity for that, but that would remove the ability to create single entities unless I move that functionality. I've seen /rest/entity/new-style paths in other places, but it always seemed a bit odd to reuse the id path segment for that as there might or might not be a collision in IDs (not in my case, but mixing namespaces like that gives me an itch :)
Are there any common practices for this type of operation? I've also considered /rest/import/entity as a separate path for similar non-restful operations for other entity types we might have, but I don't like moving it outside of the entity home path.
We need to be able to perform most operations in a "dry-run"-mode for validation purposes.
Query strings are usually considered anathema, but I'm already a sinner for the filter one. For the validation mode, would adding a ?validate or ?dryrun flag be OK? Has anyone done anything similar? What are the drawbacks? This is meant as an aid for user-facing interfaces to implement validation easily.
We don't expect to need any caching mechanism, as this is a tiny configuration service that is rarely touched, so optimizing for caching is not strictly necessary.
We need to be able to replace the entire set of entities with a completely new set of entities
That's what this does, no?
PUT /rest/entity
PUT has replace semantics. Maybe you could use the PATCH verb to support partial updates.
Personally, I would change the resource name to "EntityList" or "EntityCollection", but that's just because it is clearer for me.
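To illustrate both the replace semantics and the ?dryrun flag floated in the question, a minimal ASP.NET Core sketch (Entity and IEntityStore are illustrative stand-ins for your model and persistence):

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

public class Entity { public string Id { get; set; } }    // illustrative

public interface IEntityStore                             // illustrative persistence seam
{
    IList<string> Validate(IReadOnlyList<Entity> entities);
    void ReplaceAll(IReadOnlyList<Entity> entities);
}

[ApiController]
[Route("rest/entity")]
public class EntityController : ControllerBase
{
    private readonly IEntityStore _store;
    public EntityController(IEntityStore store) => _store = store;

    // PUT /rest/entity             -> replace the whole set
    // PUT /rest/entity?dryrun=true -> validate only, change nothing
    [HttpPut]
    public IActionResult ReplaceAll([FromBody] List<Entity> entities,
                                    [FromQuery] bool dryrun = false)
    {
        var errors = _store.Validate(entities);
        if (errors.Any())
            return UnprocessableEntity(errors);   // 422 plus validation messages

        if (dryrun)
            return Ok();                          // validated, nothing persisted

        _store.ReplaceAll(entities);              // PUT = replace semantics
        return NoContent();                       // 204
    }
}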

How do I use entity framework with hierarchical data?

I'm working with a large hierarchical data set in sql server - modelled using the standard "EntityID, ParentID" kind of approach. There are about 25,000 nodes in the whole tree.
I often need to access subtrees of the tree, and then access related data that hangs off the nodes of the subtree. I built a data access layer a few years ago based on table-valued functions, using recursive queries to fetch an arbitrary subtree, given the root node of the subtree.
I'm thinking of using Entity Framework, but I can't see how to query hierarchical data like this. AFAIK there is no recursive querying in LINQ, and I can't expose a TVF in my entity data model.
Is the only solution to keep using stored procs? Has anyone else solved this?
Clarification: By 25,000 nodes in the tree I'm referring to the size of the hierarchical dataset, not to anything to do with objects or the Entity Framework.
It may be best to use a pattern called "Nested Set", which allows you to fetch an arbitrary subtree with a single query. This is especially useful if the nodes aren't manipulated very often: Managing hierarchical data in MySQL.
In a perfect world the Entity Framework would provide the means to save and query data using this pattern.
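A minimal sketch of the idea with EF Core (Node, Lft and Rgt are illustrative names): because a descendant's interval always lies strictly inside its ancestor's, the whole subtree comes back from one non-recursive query.

using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Node                      // nested-set node
{
    public int Id { get; set; }
    public int Lft { get; set; }       // left bound of the node's interval
    public int Rgt { get; set; }       // right bound
    public string Title { get; set; }
}

public class TreeContext : DbContext
{
    public DbSet<Node> Nodes { get; set; }
}

public static class TreeQueries
{
    // Every descendant's interval lies strictly inside the root's interval,
    // so no recursion is needed to collect the subtree.
    public static IQueryable<Node> Subtree(TreeContext db, Node root) =>
        db.Nodes.Where(n => n.Lft > root.Lft && n.Rgt < root.Rgt)
                .OrderBy(n => n.Lft);
}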
Everything IS possible with Entity Framework, but you have to hack and slash your way into it. The database I am currently working against has too many "holder tables", since Points, for instance, is shared by both teams and users. Both users and teams can also have a blog.
When you say 25,000 nodes, do you mean navigational properties? If so, I think it could be tricky to get the data access in place. It's not hard to navigate, search etc. with Entity Framework, but I tend to model on paper and then create the database based on how I want to navigate while using Entity Framework. It sounds like you don't have that option.
Thanks for these suggestions.
I'm beginning to realise that the answer is to remodel the data in the database - either along the lines of nested sets as Georg suggests, or maybe a transitive closure table, which I've just come across.
That way, I'm hoping to get two key benefits:
a) faster querying against arbitrary subtrees
b) a data model which no longer requires recursive querying - so perhaps bringing it within easy reach of the Entity Framework!
It's always amazing how so often the right answer to a difficult problem is not to answer it, but to do something else instead!