GraphQL mutations on nested resources

Mutations are queries that manipulate data. If so, then my root query and root mutation trees should look similar, right? They both should allow nested fields (nested mutations). I was playing with this (using express-graphql) and it works.
Example:
# PUT /projects/:project_id/products/:id
mutation {
  findProject(id: 1) {                    # make sure that project exists and we can access it before mutating data
    updateProduct(id: 1, name: "Foo") {   # the resolve function receives a valid `project` as the first argument
      id
    }
  }
}
Is this a valid example? Should mutations be nested like this? If not, how should I handle nested resources? I cannot find any real-life example that would mutate nested resources. All examples define mutations only on the first level (fields on the root mutation).

The product has a unique ID, so that's all you need to identify it.
mutation {
  updateProduct(id: 1, name: "Foo") {
    id
  }
}
To verify that the user is authorized to modify the product, you should check the product's project. You'd probably have some centralized authorization anyway:
resolve({ id }, { user }) {
  authorize(user, 'project', Product.find(id).project) // or whatever
  ... // update
}
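For illustration only, here is the same centralized-authorization idea sketched in Java with graphql-java; Product, User, and authorize(...) are assumed application helpers, not library API:

import graphql.schema.DataFetcher;
import graphql.schema.DataFetchingEnvironment;

// A minimal sketch, assuming Product, User and authorize(...) exist in your app.
public class UpdateProductFetcher implements DataFetcher<Product> {
    @Override
    public Product get(DataFetchingEnvironment env) {
        int id = env.getArgument("id");
        String name = env.getArgument("name");
        User user = env.getGraphQlContext().get("user"); // current user placed into the context
        Product product = Product.find(id);
        authorize(user, "project", product.getProject()); // throws if the user may not touch this project
        product.setName(name);
        return product.save();
    }
}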
Old answer:
This is certainly a valid example.
I'd guess the lack of nested object mutation examples may be due to the fact that even if a product is linked to a project, it would in most cases still have a unique ID -- so you can find the product even without the project id.
Another alternative would be to include the project id as an argument to updateProduct:
mutation {
  updateProduct(projectId: 1, id: 1, name: "Foo") {
    id
  }
}
Your solution seems nicer to me, though.
As a note, mutations are, in fact, exactly the same as queries. The only difference is that the resolve function typically makes some permanent change, like modifying some data. Even then, though, the mutation behaves just like a query would -- validate args, call the resolve function, return data of the declared type.
We declare such a method as a mutation (and not a query) to make it explicit that some data are going to be changed, but also for a more important reason: the order in which you modify data is important. If you declare multiple mutations in one request, the executor will run them sequentially to maintain consistency (this doesn't attempt to solve distributed writes, though; that's a whole other problem).
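For instance, graphql-java makes this guarantee explicit through its execution strategies; the sketch below spells out what is already the default for mutations there (schema is an assumed GraphQLSchema instance):

import graphql.GraphQL;
import graphql.execution.AsyncSerialExecutionStrategy;

// Mutations run serially, field by field, while queries may execute in parallel.
GraphQL graphQL = GraphQL.newGraphQL(schema)
        .mutationExecutionStrategy(new AsyncSerialExecutionStrategy())
        .build();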

How to find DecisionTable by name from a drools DMNModel object

I am using the Red Hat Drools DMN capability. I have set up a DMNRuntimeEventListener in order to collect information about the decision table that has just run. In that event listener there is a callback method, afterEvaluateDecisionTable, which is called. That method receives an AfterEvaluateDecisionTableEvent object. That object does not provide a pointer to the DecisionTable that was just executed.
Instead, that object provides (1) the name of the decision node, and (2) the name of the decision table. This is a little odd because the DecisionTable object does not have a name. It has an id and a label, but it does not have a getName() method or anything that I can find related to that.
I have the DMNModel object for the model that is being executed. From that, it is easy to find the DecisionNode by name, and the name given works. If the entire decision node is a decision table, then I don't need the decision table name, and everything works fine.
But DMN allows you to nest decision tables with other expression structures, such as a Context object. Context objects can be nested within other Context objects. So the decision node can be an elaborate tree with multiple DecisionTable objects within it. In this case, I need to find the decision table by name.
I can iterate through the tree of children, and occasionally I find objects that are in fact instanceof DecisionTable objects. OK, all I need to do is to find the name of that decision table.
getId() returns a GUID and does not match the given name
getLabel() returns null
getIdentifierString() returns the same as getId()
I was pointed to this example code for getting a name:
public static String nameOrIDOfTable(DecisionTable sourceDT) {
    if (sourceDT.getOutputLabel() != null && !sourceDT.getOutputLabel().isEmpty()) {
        return sourceDT.getOutputLabel();
    } else if (sourceDT.getParent() instanceof NamedElement) { // DT is decision logic of Decision, and similar cases.
        return ((NamedElement) sourceDT.getParent()).getName();
    } else if (sourceDT.getParent() instanceof FunctionDefinition && sourceDT.getParent().getParent() instanceof NamedElement) { // DT is decision logic of BKM.
        return ((NamedElement) sourceDT.getParent().getParent()).getName();
    } else {
        return new StringBuilder("[ID: ").append(sourceDT.getId()).append("]").toString();
    }
}
 
This suggests looking at the following; here are the values that I get:
 
getOutputLabel() is null
parent (ContextEntry) is not a NamedElement and does not have a getName() method
parent of the parent (Context) is not a NamedElement and does not have a getName() method
So this method would generate a name for the decision table, but it does not match the name I was given by the listener. No method provides the name of the decision table. (Feature request: it would be really nice if the listener callback just provided a link to the DecisionTable object.)
 
So then I start looking into the drools source to figure out how the evaluation code finds the name. What I find is that the name is PASSED IN to the evaluation method, so the name of the decision table depends on the method that calls for evaluation. I would have to analyze all the code that calls for evaluation, in order to determine what this value for decisionTableName means, because it seems to mean different things depending on who is calling for the evaluation.
Can anyone suggest a CORRECT way to find the name of the decision table that will always correspond to the name that the listener callback has provided?
The most appropriate way for your real use case (correlating the AfterEvaluateDecisionTableEvent with the XML DecisionTable element) would be to go via the element ID.
To locate a DecisionTable in a given DMNModel, you can use something similar to this one-liner:
public static DecisionTable locateDTbyId(DMNModel dmnModel, String id) {
    return dmnModel.getDefinitions()
            .findAllChildren(DecisionTable.class)
            .stream().filter(d -> d.getId().equals(id))
            .findFirst().orElseThrow(IllegalStateException::new);
}
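If you resolve tables on every listener callback, it may be worth building an id -> DecisionTable index once per model using the same API; a minimal sketch, assuming the model does not change at runtime:

import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Build the lookup once, then reuse it from the listener callbacks.
Map<String, DecisionTable> dtById = dmnModel.getDefinitions()
        .findAllChildren(DecisionTable.class)
        .stream()
        .collect(Collectors.toMap(DecisionTable::getId, Function.identity()));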
As for augmenting the before/after decision table listener events with the decision table ID, a PR has been submitted; ref: https://github.com/kiegroup/drools/pull/4860
This would align, for instance, with the ContextEntry event logic as well (there, again, IDs are used).
Thanks for having reported the gap; for feature requests you can follow the instructions at https://www.drools.org/community/getHelp.html

How to create a one-way one-to-many relationship in SailsJS?

I'm interested in a one-way one-to-many association. To explain:
// Dog.js
module.exports = {
  attributes: {
    name: {
      type: 'string'
    },
    favoriteFoods: {
      collection: 'food',
      dominant: true
    }
  }
};
and
// Food.js
module.exports = {
  attributes: {
    name: {
      type: 'string'
    },
    cost: {
      type: 'integer'
    }
  }
};
In other words, I want a Dog to be associated w/ many Food entries, but as for Food, I don't care which Dog is associated.
If I actually implement the above, believe it or not it works. However, the table for the association is named in a very confusing manner - even more confusing than normal ;)
The association table is named dog_favoritefoods__food_favoritefoods_food, with columns id, dog_favoritefoods, and food_favoritefoods_food.
REST blueprints work with the Dog model just fine; I don't see anything that "looks bad" except for the funky table name.
So, the question is, is it supposed to work this way, and does anyone see something that might potentially go haywire?
I think you should be ok.
However, there does not really seem to be any reason not to complete the association as a many-to-many. Everything is already being created for that single collection: the join table and its attributes are already there. The only thing missing in this equation is the reference back on food.
I could understand if putting the association on food were to create another table or create another weird join, but that has already been done. There really is no overhead to creating the other association.
So in theory you might as well create it, thus avoiding any potential conflicts, unless you have a really compelling reason not to.
Edit: based on the comments below, we should note that one could experience overhead at lift from the blueprints and dynamic finders created.

How to specify different cache keys on the same key object in simple-spring-memcached

I am trying to implement a distributed cache with simple-spring-memcached. The docs suggest that to use an object as the key I need to have a method in my domain class with the @CacheKeyMethod annotation on it.
But the problem is that I am using the same domain class in different scenarios, and the key to be generated in each case has different logic. For example, for a User class, one scenario requires the key to be unique in terms of city and gender, but the other case requires it to be unique in terms of the user's email; essentially, the key is whatever your lookup is based on.
A user's email would determine the city and gender, so I could use email as the key in the first case as well, but that would mean separate cache entries for each user even though the cached data would be the same as long as the gender and city are the same. Keying on city and gender is expected to increase the hit ratio by a huge margin (just think how many users you can expect to be male and from Bangalore).
Is there a way I could define different keys? Also, it would be great if the logic of generating the key could be externalized from the domain class itself.
I read the docs and figured out that something called CacheKeyBuilder and/or CacheKeyBuilderImpl could do the trick, but I couldn't understand how to proceed.
Edit:
OK, I got one clue! What CacheKeyBuilderImpl does is call the generateKey method on a DefaultKeyProvider instance, which looks for the @CacheKeyMethod annotation on the provided domain class's methods and executes the method it finds to obtain the key.
So replacing either CacheKeyBuilderImpl with a custom implementation, or replacing the KeyProvider's default implementation within CacheKeyBuilderImpl with your own, might do the trick... but the KeyProvider reference is hardwired to DefaultKeyProvider.
Can anybody help me implement CacheKeyBuilder (with respect to what the different methods do; the documentation doesn't clarify it), and explain how to inject it to be used instead of the usual CacheKeyBuilderImpl?
Simple Spring Memcached (SSM) hasn't been designed to allow such low-level customization. As you wrote, one way is to replace CacheKeyBuilderImpl with your own implementation. The default implementation is hardwired, but it can easily be replaced using a custom simplesm-context.xml configuration.
As I understand your question, you want to cache your User objects under different keys depending on the use case. That's supported out of the box, because by default SSM uses the method arguments to generate the cache key, not the result.
Example:
@ReadThroughMultiCache(namespace = "userslist.cityandgender", expiration = 3600)
public List<User> getByCityAndGender(@ParameterValueKeyProvider(order = 0) String city, @ParameterValueKeyProvider(order = 1) String gender) {
    // implementation
}

@ReadThroughSingleCache(namespace = "users", expiration = 3600)
public User getByEmail(@ParameterValueKeyProvider String email) {
    // implementation
}
In general, @CacheKeyMethod is only used to generate the cache key if the object that contains the method is passed as a parameter to the annotated method and that parameter is annotated with @ParameterValueKeyProvider.
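A minimal sketch of that last case (the User field and the refresh method are hypothetical; only the annotations are SSM's):

public class User {
    private String email;

    @CacheKeyMethod
    public String cacheKey() {
        return email; // SSM invokes this no-arg method to derive the key for a User argument
    }
}

@ReadThroughSingleCache(namespace = "users", expiration = 3600)
public User refresh(@ParameterValueKeyProvider User user) {
    // implementation
}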

Validating that a field is unique using Bean Validation (or JSF)

I have a simple object that has a name:
public class Foo {
    private String name;
}
Each user on the site may have up to 10 Foos associated with them. Within this context, when a new Foo is created, I would like to validate that its name isn't already used by another Foo associated with the same user.
I could create a custom Bean Validation constraint, but annotations require their parameters to be defined at compile time. How would I then pass in the names of the existing Foos?
As suggested in various places, I could use EL expressions as an alternative way to pick up the data. This feels like using a sledgehammer to crack a nut. It also brings in a whole bunch of potential issues to consider, not least of which is ease of testing.
I could do class-wide validation using a boolean field
@AssertTrue(message = "Name already exists")
public boolean isNameUnique() {
    return !existingNames.contains(name);
}
But the validation message would not show up next to the name field. It is a cosmetic issue, and this can be a backup plan. However, it's not ideal.
Which brings me to the question:
Is there a simple way to write a Bean Validation constraint that can check the value against a collection of values at the field level and meet the following restrictions?
Previous values determined at runtime
Not using things like EL expressions
Field-level validation instead of class-level
EDIT in response to Hardy:
The Foo class is an entity persisted in a database. The instances are picked up and used through a DAO interface.
I could loop through the entities, but that means plugging the DAO into the validator, not to mention that I would need to write the same thing again if another class has this constraint.
It would help to see how you want to use the Foo class. Can you extend your example code? Are they kept in a list of Foo instances? A custom constraint seems to be a good fit. Why do you need to pass any parameters to the constraint? I would just iterate over the Foos and check whether the names are unique.
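For what it's worth, a field-level custom constraint along the lines the question asks for is possible; a minimal sketch, assuming a FooDao can be injected via a container-aware ConstraintValidatorFactory (UniqueFooName, FooDao, and existingNamesForCurrentUser() are hypothetical names):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.inject.Inject;
import javax.validation.Constraint;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import javax.validation.Payload;

@Target({ ElementType.FIELD, ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = UniqueFooNameValidator.class)
public @interface UniqueFooName {
    String message() default "Name already exists";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

public class UniqueFooNameValidator implements ConstraintValidator<UniqueFooName, String> {
    @Inject
    private FooDao fooDao; // works only with a DI-aware ConstraintValidatorFactory (Spring and CDI provide one)

    @Override
    public void initialize(UniqueFooName constraintAnnotation) {
        // no configuration needed
    }

    @Override
    public boolean isValid(String name, ConstraintValidatorContext context) {
        // leave null-checking to @NotNull; only check uniqueness here
        return name == null || !fooDao.existingNamesForCurrentUser().contains(name);
    }
}

The constraint then sits directly on the field, so the message shows up next to name: @UniqueFooName private String name;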

Derived Type with DateTime Condition

I have a Show table, and I would like to have a derived type called ActiveShow which only returns shows in the future:
Show.ShowDateTime > DateTime.Now
Is there a way that I can achieve this using the designer or some other way so that creating an instance of ActiveShow will always adhere to the date condition?
Absolutely you could do this using a DefiningQuery (which is essentially a TSQL view) in the SSDL.
But I don't recommend it.
The problem is that type membership would be transient, when it should be permanent, or at the very least should require you to change it explicitly.
I.e. you could end up in a situation where at one point something is an ActiveShow (and loaded in memory), but if you do a subsequent query you might attempt to load the same object as a Show. In this situation, what would happen to identity resolution is anyone's guess.
This will more than likely result in some very nasty unexpected side-effects.
As an alternative, perhaps add an extra property to your context in a partial class, i.e.:
public partial class MyContext
{
    public ObjectQuery<Show> ActiveShows
    {
        get
        {
            return this.Shows.Where(s => s.ShowDateTime > DateTime.Now)
                   as ObjectQuery<Show>;
        }
    }
}
This probably gives you most of the benefits without most of the risks.
Hope this helps,
Alex