tastypie: save_m2m

My understanding of why save_m2m is needed in a tastypie resource is still not clear. In a POST, if I send only the data needed to create one model and nothing related to the m2m object, do I still need to call save_m2m? Why is it needed? What happens if I override save_m2m to do nothing? It seems to work fine and my resource is created, but I'm not sure what hidden implications this might have. Could you please comment?

If you don't have any fields marked is_m2m=True, the method won't actually do anything. From the tastypie docstring of save_m2m:
"""
Handles the saving of related M2M data.
Due to the way Django works, the M2M data must be handled after the
main instance, which is why this isn't a part of the main ``save`` bits.
Currently slightly inefficient in that it will clear out the whole
relation and recreate the related data as needed.
"""
Inside tastypie's resources, the save_m2m method checks for fields with is_m2m set to True; if none are found, it simply does nothing. So if your resource class doesn't have any m2m fields and no other resources inherit from it, you can override save_m2m to do nothing.
You will actually be one loop ahead of tastypie (a tiny speedup, woohoo! ;)).


Is it a bad practice to open a new context or send one as a parameter into sub-methods?

A colleague asked for my opinion regarding the following setup. It's based on declaring a context in a method and then passing it into the called sub-methods. Schematically, it looks like this:
public void SuperMethod()
{
    using (Context context = new Context())
    {
        ...
        SetMethod(context, ...);
        ...
        GetMethod(context, ...);
        ...
    }
}
public void SetMethod(Context context, ...) { ... }
public Some GetMethod(Context context, ...) { ... }
I advised him against it, reasoning that access to the database should be opened and closed as close to the actual operations as possible. But now that I think about it, I'm not sure that was the best advice in all circumstances.
Question 1: Was my suggestion correct in the general case, or should I consider revising it?
I also noticed that the super-method calling the sub-methods used the context itself. My suggestion was to move the part that talked to the database into a new sub-method, thus freeing the super-method from any references to the context. I felt it made sense to make the super-method a controller, while performing all the database-related operations in the workers.
Question 2: Does it make sense to have a controlling method that calls a (possibly large) number of sub-methods that carry out the actual work?
Please note that both questions relate to the usage of the context while working with Entity Framework, not to general class structure.
IMO, the context should be opened and disposed with every unit of work (so for example, within a simple function) - which doesn't mean you shouldn't pass your context to underlying functions. This can be exactly what you want, especially considering connection pooling and context entry lifetime.
This has a few pretty simple reasons:
It's pretty cheap to open a new context; it takes almost no time relative to the main performance costs in EF, like materializing values (DataSet to object and vice versa) and creating the queries - and those two have to be done with an already open context anyway.
One main argument against opening and disposing a context every time is the opening/disposing of connections (some DBMSs - I know this particularly of SQL CE - have serious problems with creating connections to certain databases, and EF will create a new connection based on the provided connection string whenever it needs one). However, you can easily get around this by keeping a connection open (or letting it time out, which isn't too bad most of the time either) and passing it to your context on creation, using the DbContext(DbConnection, bool) overload with contextOwnsConnection = false (see the sketch after these points).
When you keep the context open over its whole lifetime, you can't possibly know which objects are already in the change tracker, materialized, or present in some other form, and which aren't. For me, this was a problem when rewriting the BL of my project: I tried to modify an object which I had added earlier. It was in the context (unchanged) but not in the change tracker, so I couldn't set its state, and I couldn't attach it again because it was already in the context. This kind of behavior is pretty hard to control.
Another form of this is as follows: whenever a new object enters the context, EF will try to set its navigation properties with respect to the other objects already in the context. This is called relationship fixup and is one of the main reasons Include() works so well. It means that most of the time you'll have a huge object tree in your context. Then, upon adding/deleting it (or whatever other operation), EF will try to apply this to the whole tree (well... sometimes ;) ), which can cause a lot of trouble, especially when trying to add a new entry with an FK to already existing items.
A database context is, as already mentioned, basically an object tree, which can become gigantic depending on its lifetime. EF has to do a few things with that tree: checking whether an item is already there (for obvious reasons), which in the best case has complexity O(n*log(n)+m), where m is the number of object types and n is the number of objects of that type in the context; and checking whether an object has been modified since retrieval - and since EF has to do this for every single object in every single call, this can slow things down quite a lot.
This somewhat relates to the last issue. What do you really want when calling SaveChanges()? Most likely, you want to be able to say: "OK, these are the actions I did, so EF should now issue these and these calls to the DB", right? But since EF has been tracking the entities, and maybe you modified some values, or another thread did something here and there... how can you be sure these are the only things SaveChanges() will do? How can you be sure that over the whole lifetime of the context nothing fishy ends up in your database (which would cancel the transaction, which can be pretty big)?
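Regarding the connection reuse mentioned in the second point, here is a minimal sketch of what that could look like with EF6. MyContext and ConnectionHolder are hypothetical names, not from the question; the key part is the DbContext(DbConnection, bool) constructor with contextOwnsConnection set to false:
using System.Data.Common;
using System.Data.Entity;
using System.Data.SqlServerCe;

// Hypothetical context; the real model/configuration is omitted.
public class MyContext : DbContext
{
    // contextOwnsConnection: false -> disposing the context does NOT close the connection.
    public MyContext(DbConnection connection)
        : base(connection, contextOwnsConnection: false) { }
}

public class ConnectionHolder
{
    private readonly DbConnection _connection;

    public ConnectionHolder(string connectionString)
    {
        // Open the expensive connection once and keep it alive...
        _connection = new SqlCeConnection(connectionString);
        _connection.Open();
    }

    public MyContext CreateContext()
    {
        // ...while the contexts themselves stay short-lived (one per unit of work).
        return new MyContext(_connection);
    }
}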
But yes, of course, there are a few cases where you need to keep the context open (well, you don't strictly need to - you could just pass it). For me, this was mostly in a few cases where FK correction was hard to maintain (but still within one function - and sometimes, within one function, I just had to dispose and re-create the context for the sake of simplicity), and whenever you call sub-functions from multiple places in your code. I had the issue that a context was open in a calling function, which called another function that still needed the context. Usually that's not a problem, but my connection handling is kind of... advanced, and it led to a performance loss. I dealt with this by passing the already open context to the sub-function through an optional additional context parameter - just like what you already mentioned - though it shouldn't really be necessary.
As @DevilSuichiro mentioned, the DbContext is meant as a Unit of Work container. By default, DbContext stores all loaded objects in memory and tracks their changes. When the SaveChanges method is called, all changes are sent to the DB in a single transaction.
So if your SuperMethod handles some kind of logical unit of work (e.g. an HTTP request in a web application), I would instantiate the context only once and pass it as a parameter to the sub-methods.
Regarding your second question: if you instantiate the context only once, it's IMO better to have more methods that are simple, easy to maintain, and meaningfully named. If you want to create a new instance of the context in every sub-method, it depends on what that "possibly large" number means :-)
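To make that concrete, here is a minimal sketch of the once-per-unit-of-work approach (EF6; Order, ShopContext and the method names are purely illustrative, not from the question):
using System.Data.Entity;
using System.Linq;

// Hypothetical model and context, for illustration only.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public class OrderService
{
    // One logical unit of work == one context instance.
    public void ApplyDiscountToOrder(int orderId)
    {
        using (var context = new ShopContext())
        {
            var order = LoadOrder(context, orderId);   // sub-methods receive the same context
            ApplyDiscount(order);
            context.SaveChanges();                     // all tracked changes go to the DB in a single transaction
        }
    }

    private Order LoadOrder(ShopContext context, int orderId)
    {
        return context.Orders.Single(o => o.Id == orderId);
    }

    private void ApplyDiscount(Order order)
    {
        order.Total *= 0.9m;   // the change is tracked by the context that loaded the order
    }
}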

Structuring a nested REST API

I'm writing an API with Spring Boot, trying to keep it RESTful, but the structure is quite nested. So say I have:
/api/examboard/{ebid}/qualification/{qid}/subject/{sid}/module/{mid}/
I have a controller for every noun that takes in all the IDs. The problem with this is that I don't really need an ebid or a qid for modules; they only really need to be concerned with subjects most of the time. The mapping between them all is quite simple: an examboard will have many qualifications, a qualification will have many subjects, etc.
Now the problem is: say I go for a simpler API design where I only need the parent ID, so the Subject controller would also have:
api/subject/{sid}/module
then I need to include multiple services in my controller because of the way JPA works, as I need both SubjectEntity-based calls and ModuleEntity-based calls. However, I want to maintain a one-to-one relationship between my controllers/services and services/repositories. This is why I opted for the longer URL mentioned above, but it does seem like overkill. Does anyone have any advice on how I should structure an API like this? Most examples are quite small and don't really fit.
Without knowing more about your models and the relations between them, this answer will have to stay a bit diffuse.
First of all - "it depends". I know, but it really does. The way you should design an API depends heavily on your use cases that will define required access patterns. Do you often need all modules for a subject? Then introduce /subjects/{sid}/modules, if you need the details for a module of a subject in a qualification in an examboard - by all means have a /examboards/{ebid}/qualifications/{qid}/subjects/{sid}/modules/{mid}
As you say, there are many relations between your entities. That is fine, but it does not mean your API needs to capture each of these relations in a dedicated endpoint. You should distinguish between retrieving and modifying entities here. Below are examples of certain operations you might want to have (not knowing your models, this may not apply - consider it an illustration).
Retrieve qualifications for an examboard
GET /examboards/{ebid}/qualifications plain and simple
GET /qualifications?ebid={ebid} if you feel you might need sophisticated filtering later on
or create a new qualitication for an examboard
POST /examboards/{ebid}/qualifications with the details submitted in the body
POST /qualifications with the details submitted in the body and making the associated examboard ebid part of the submitted data
or update an existing qualification
PUT /qualifications/{qid} (if this operation is idempotent)
POST /qualifications/{qid} (if it should not be considered idempotent)
or delete qualifications
DELETE /qualifications/{qid} deletes entities, cascade-deletes associations
DELETE /examboards/{ebid}/qualifications clears all qualifications from an examboard, without actually deleting the qualification entities
There are certainly more ways to let an API do all these things, but this should demonstrate that you need to think of your use cases first and design your API around them.
Please note the pluralisation of collection resources in the previous examples. This comes down to personal preference, but I tend to follow the argumentation of Sam Ruby in RESTful Web Services (available as PDF) that collections should be first-class citizens in an API.
Usually, there should not be a reason to have 1:1:1 relationships between controllers, services, and repositories - often this is not even possible. Now, I don't know your reason for wanting this, but following through with it will force you to put a lot of logic into your database queries and models. While this (depending on your setup and skills) may or may not be easily testable, it certainly shifts the required test types from unit tests (simpler, usually faster, more fine-grained) to integration tests (more setup, more complex, usually slower), when instead of having the bulk of your business logic in your services you put it into many joins and subselects in your repositories.
I will only address your REST API structure question.
As you already pointed out:
The problem with this is that I don't really need an ebid or a qid for modules, they only really need to be concerned with subjects most of the time
You need to think of your entities as resources. If an entity can stand on its own, give it its own top-level resource. If instead it exists only as part of another entity, build a subresource below its parent. This should correspond to the association types aggregation and composition in your object model design.
Additionally, every entity that is part of a to-many relationship should also be accessible via a subresource on the other side of the relationship.
As I understood it, you have a OneToMany relationship between examboard and qualification, so we get:
api/examboards/{eid}/qualifications
api/qualifications/{qid}/examboard
You could also remove the examboard subresource and include it in the qualification response instead.
For ManyToMany relationships you need two subresources:
api/foos/{fid}/bars
api/bars/{bid}/foos
And another resource to manipulate the relationship itself.
api/foosToBars/{fid}+{bid}
Or something along those lines.

How do you handle deep relational trees in Entity Framework?

I have a very deep relational tree in my model design; that is, the root entity contains a collection of entities that contain more collections of other entities, which contain more collections, and so on... I am developing a business layer that other developers have to use to perform operations, including getting/saving data.
So I am thinking about the best strategy to cope with this situation. I cannot let EF resolve the whole dependency tree when retrieving an entity, since that would end in a lot of useless JOINs (useless because I may not need that data at the next level).
If I disable lazy loading and enforce eager loading for what is needed, it works as expected, but if another developer accesses child.Parent.Id instead of child.ParentId while trying to do something new (like a new requirement or feature not considered at the beginning), they will get a NullReferenceException if that dependency was not included, which is bad... but it will be a "fast error" that can be fixed straight away.
If I enable lazy loading, accessing child.Parent.Id instead of child.ParentId results in a standalone query to the DB each time it is accessed. It won't fail, but it is worse because there is no error, only a drop in performance, and all the code would have to be reviewed.
I am not happy with any of these two solutions.
I am not happy having entities that contain null or empty collections when, in reality, that is not true.
I am not happy letting EF perform arbitrary queries to the DB at any moment. I would like to get all the information in one shot if possible.
So, I come up with several possible solutions that involve disabling lazy loading and enforcing eager loading, but not sure which is better:
I can create an EntityBase class that contains the data in the table without the collections, so they cannot be accessed, and concrete implementations that contain the relationships. The problem is that you do not have much flexibility, since C# does not allow multiple inheritance.
I can create interfaces that "mask" the objects, hiding the properties that are not available at that method call. For example, if I have a User.Roles property, then in order to show a grid with all users I do not need to resolve the .Roles property, so I could create an interface 'IUserData' that does not contain that property.
But I do not know if this additional work is worth it; maybe a fast NullReferenceException indicating "this property has not been loaded" would be enough.
Would it be possible to throw a specific exception type if the property is virtual and has not been overridden/set?
What method do you use?
Thanks.
In my opinion you are trying to protect the developers from the need to understand what they are doing when they access data and what performance implications it can have - which might result in an unnecessarily convoluted API with a lot of helper classes, base classes, interfaces, etc.
If a developer uses user.MiddleName.Trim() and MiddleName is null, he gets a NullReferenceException and did something wrong: either he didn't check for null or he didn't make sure that MiddleName is set to a value. The same applies when he accesses user.Roles and gets a NullReferenceException: he didn't check for null or didn't call the appropriate method of your API that loads the Roles of the user.
I would say: Explain how navigation properties work and that they have to be requested explicitly and let the application crash if a developer doesn't follow the rules. He needs to understand the mistake and fix it.
To help, you could make loading related data explicit in the API, for example with methods like:
public User GetUser(int userId);
public User GetUserWithRoles(int userId);
Or:
public User GetUser(int userId, params Expression<Func<User,object>>[] includes);
which could be called with:
var userWithoutRoles = layer.GetUser(1);
var userWithRoles = layer.GetUser(2, u => u.Roles);
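One possible implementation of that overload - just a sketch assuming an EF6 DbContext; the User, Role and MyContext types below are illustrative stand-ins, not taken from the question:
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

// Hypothetical model and context, only to make the sketch self-contained.
public class Role { public int Id { get; set; } }
public class User
{
    public int Id { get; set; }
    public virtual ICollection<Role> Roles { get; set; }
}
public class MyContext : DbContext
{
    public DbSet<User> Users { get; set; }
}

public class BusinessLayer
{
    private readonly MyContext context;

    public BusinessLayer(MyContext context) { this.context = context; }

    public User GetUser(int userId, params Expression<Func<User, object>>[] includes)
    {
        IQueryable<User> query = context.Users;

        // Eager-load each requested navigation property via Include.
        foreach (var include in includes)
        {
            query = query.Include(include);
        }

        return query.SingleOrDefault(u => u.Id == userId);
    }
}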
You could also leverage explicit loading instead of lazy loading to force the developers to call a method when they want to load a navigation property and not just access the property.
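A minimal sketch of that explicit-loading variant, using the EF6 DbContext.Entry API and assuming the same hypothetical User/Roles model as in the previous sketch:
// Assumes the MyContext / User model from the previous sketch.
public User GetUserWithRolesExplicitly(MyContext context, int userId)
{
    // Load the user without its navigation properties first...
    var user = context.Users.Find(userId);
    if (user == null) return null;

    // ...then make loading the related data an explicit, visible step.
    context.Entry(user).Collection(u => u.Roles).Load();

    return user;
}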
Two additional remarks:
...lazy loading ... will end in a standalone query to the DB each time it is accessed.
"...and not yet loaded" to complete this. If the navigation property has already been loaded within the same context, accessing the property again won't trigger a query to the database.
I would like to get all the information in one shot if possible.
Multiple queries do not necessarily result in worse performance than one query with a lot of Includes. In fact, complex eager loading can lead to data multiplication on the wire and make entity materialization very time-consuming - slower than multiple lazy or explicit loading queries. (Here is an example where a query's performance was improved by a factor of 50 by changing it from a single query with Includes to more than 1000 queries without Include.) The bottom line is: you cannot reliably predict the best loading strategy for a specific situation without measuring the performance (if the performance matters in that situation).

Multiple entity replacement in a RESTful interface

I have a service with some entities that I would like to expose in a RESTful way. Due to some of the requirements, I'm having trouble finding an approach I'm happy with.
These are the 'normal' operations I intend to support:
GET /rest/entity[?filter=<query>] # Return (matching) entities. The filter is optional and just a convenience for us CLI curl-users :)
GET /rest/entity/<id> # Return specific entity
POST /rest/entity # Creates one or more new entities
PUT /rest/entity/<id> # Updates specific entity
PUT /rest/entity # Updates many entities (json-dict or multipart. Haven't decided yet)
DELETE /rest/entity/<id> # Deletes specific entity
DELETE /rest/entity # Deletes all entities (dangerous but very useful to us :)
Now, the additional requirements:
We need to be able to replace the entire set of entities with a completely new set of entities (merging can occur internally as an optimization).
I thought of using POST /rest/entity for that, but that would remove the ability to create single entities unless I move that functionality. I've seen /rest/entity/new-style paths in other places, but it always seemed a bit odd to reuse the id path segment for that, as there might or might not be a collision in IDs (not in my case, but mixing namespaces like that gives me an itch :)
Are there any common practices for this type of operation? I've also considered /rest/import/entity as a separate path for similar non-RESTful operations on other entity types we might have, but I don't like moving it outside of the entity home path.
We need to be able to perform most operations in a "dry-run" mode for validation purposes.
Query strings are usually considered anathema, but I'm already a sinner because of the filter one. For the validation mode, would adding a ?validate or ?dryrun flag be OK? Has anyone done anything similar? What are the drawbacks? This is meant as an aid for user-facing interfaces to implement validation easily.
We don't expect to need any caching mechanism, as this is a tiny configuration service that is rarely touched, so optimizing for caching is not strictly necessary.
We need to be able to replace the entire set of entities with a completely new set of entities
That's what this does, no?
PUT /rest/entity
PUT has replace semantics. Maybe you could use the PATCH verb to support doing partial updates.
Personally, I would change the resource name to "EntityList" or "EntityCollection", but that's just because it is clearer to me.

How do I add relationships at runtime using DBIx::Class and Catalyst?

In the application I am building, users can specify relationships between tables.
Since I only determine this at runtime, I can't specify has_many or belongs_to relationships in the schema modules at startup.
So, given two tables, system and place, I would like to add the relationship needed to join records between them.
I have part of the solution below:
$rs = $c->model('DB::system')->result_source;
$rs->add_relationship('locations','DB::place',{'foreign.fk0' => 'self.id'});
So the column fk0 would be the foreign key mapping to the location primary key id.
I know there must be a re-registration step to allow future access to the relationship, but I can't figure it out.
I don't believe you can re-define these relationships after an application is already running. At least not without discarding any existing DBIC objects, and re-creating them all from scratch. At that point, it would be easier to just re-start your application, I suspect.
If you're content defining these things dynamically at compile time, that is possible... we do something similar in one of our applications.
If that would be useful to you, I can provide some sample code.
The DBIx::Class::ResultSet::View module might provide a rough approximation of what you're looking for, by letting you execute arbitrary code, but retrieving the results as DBIx objects.
My general opinion on things like this is that any abstraction layer (and an ORM is an abstraction layer) is intended to make life easier. When it gets in the way of making your application do what you want, it's no longer making life easier and ought to be discarded (for that specific use - not necessarily for every use). For this reason, I would suggest using DBI, as you suggested in one of your comments. I suspect it will make your life much easier in this case.
I've done this by calling the appropriate methods on the relevant result sources, e.g. $resultset->result_source-><relationship method>. It does work even in an active application.