How should I determine the concrete type returned by a query to a GraphQL interface type?

Say we have this GraphQL schema that uses inheritance:
type Query {
  maintenanceEvents: [MaintenanceEvent]!
}

interface MaintenanceEvent {
  time: DateTime
}

type OilChange implements MaintenanceEvent {
  time: DateTime
  oilType: OilType
}

type TireRotation implements MaintenanceEvent {
  time: DateTime
  pattern: RotationPattern
}

# ... more types
And say the client will display a summary of the types of events that occurred on a timeline. The problem is that (as far as I can tell) there is no straightforward way for the client to determine the types of each event in the array it receives from the server.
Some options I have come up with:
Add a "type" field to the interface and to each type that implements it.
That looks like this:
enum MaintenanceEventType {
  OIL_CHANGE
  TIRE_ROTATION
  # ... more types
}

interface MaintenanceEvent {
  time: DateTime
  type: MaintenanceEventType
}

type OilChange implements MaintenanceEvent {
  time: DateTime
  type: MaintenanceEventType
  oilType: OilType
}
...etc.
This is the best option I have come up with, but there are things about it I don't like. There are two lists of maintenance event types to keep in sync: the enum and the set of types that implement the interface. Also, it seems redundant to send a field that is always the same for a given type.
Determine the type in the front-end based on which fields are present.
This is not a good option. It is error-prone and the logic to determine which type is which will change if fields are added to the types. Also, it cannot handle types that implement the interface but do not have additional fields.
Is there an established pattern for this, or a part of the GraphQL specification that handles this? Is this not a good application of GraphQL interfaces?

You should use the __typename field. From the spec:
GraphQL supports type name introspection at any point within a query by the meta‐field __typename: String! when querying against any Object, Interface, or Union. It returns the name of the object type currently being queried.
This is most often used when querying against Interface or Union types to identify which actual type of the possible types has been returned.
This field is implicit and does not appear in the fields list in any defined type.
Your query would look something like:
query {
  maintenanceEvents {
    __typename
    time
    ... on OilChange {
      oilType
    }
    ... on TireRotation {
      pattern
    }
  }
}
Note that if you use Apollo Client, the __typename field will automatically be added to your queries -- there's no need to explicitly add it yourself in that case.
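On the server side, the interface also needs a type resolver so that __typename can be answered at runtime; how you register it depends on your server library. A minimal sketch, assuming a graphql-tools/Apollo-style resolver map and a hypothetical kind property on the raw event records:

const resolvers = {
  MaintenanceEvent: {
    // Map each raw record to the name of a concrete schema type.
    __resolveType(event) {
      if (event.kind === 'oil-change') return 'OilChange';
      if (event.kind === 'tire-rotation') return 'TireRotation';
      return null; // unresolvable records surface as a GraphQL error
    },
  },
};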

Related

How to completely customize the way that type information gets written to a document by Spring Data MongoDB?

Is it possible to use a different type attribute (instead of _class) for each polymorphic collection, the way it's implemented in Doctrine (PHP) or Jackson? The current solution stores type information in a document field. By default it is the fully qualified class name, stored in a field named _class.
We can easily change it to save some custom string (alias) instead of the full class name, and change the default discriminator field name from _class to something else.
In my situation I'm working against a legacy database while the legacy application is still in use. The legacy application uses Doctrine (PHP) ODM as its data layer.
Doctrine allows the discriminator field name (_class in Spring Data) to be defined via annotation and to differ per collection.
In Spring Data, when I pass a typeKey to DefaultMongoTypeMapper, it is used for all collections.
Thanks.
// MyCustomMongoTypeMapper.java
// ...
#SuppressWarnings("unchecked")
#Override
public <T> TypeInformation<? extends T> readType(DBObject source, TypeInformation<T> basicType) {
Assert.notNull(basicType);
Class<?> documentsTargetType = null;
Class<? super T> parent = basicType.getType();
while (parent != null && parent != java.lang.Object.class) {
final String discriminatorKey = getDiscriminatorKey(parent); //fetch key from annotation
if (null == discriminatorKey) {
parent = parent.getSuperclass();
} else {
accessor.setKey(discriminatorKey);
return super.readType(source, basicType);
}
}
accessor.resetKey();
return super.readType(source, basicType);
}
Something that should work for you is completely exchanging the MongoTypeMapper instance that MappingMongoConverter uses. As you discovered, the already available implementation assumes a common field name and takes yet another strategy to decide whether to write the fully qualified class name, an alias, or the like.
However, you should be able to just write your own and particularly focus on the following methods:
void writeType(TypeInformation<?> type, DBObject dbObject) — you basically get the type and have complete control over where and how to put that into the DBObject.
<T> TypeInformation<? extends T> readType(DBObject source, TypeInformation<T> defaultType); — you get the type declared on the reading side (i.e. usually the most common type of the hierarchy) and based on that have to lookup the type from the given source document. I guess that's exactly the inverse of what's to be implemented in the other method.
On a final note, I would strongly recommend against using different type field names for different collections: on the reading side you might run into places where just Object is declared on the property, and then you have basically no clue where to even look in the document.
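A rough sketch of how the exchange itself might be wired up, assuming a Java-config setup where MongoDbFactory and MongoMappingContext beans already exist and MyCustomMongoTypeMapper is the hypothetical mapper shown above:

// Replace the default type mapper on the converter that Spring Data uses.
@Bean
public MappingMongoConverter mappingMongoConverter(MongoDbFactory factory,
                                                   MongoMappingContext context) {
    DbRefResolver dbRefResolver = new DefaultDbRefResolver(factory);
    MappingMongoConverter converter = new MappingMongoConverter(dbRefResolver, context);
    converter.setTypeMapper(new MyCustomMongoTypeMapper());
    return converter;
}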

GraphQL mutations on nested resources

Mutations are queries for manipulating data. If so, then my root query and root mutation trees should look similar, right? They should both allow nested fields (nested mutations). I was playing with this (using express-graphql) and it works.
Example:
# PUT /projects/:project_id/products/:id
mutation {
  findProject(id: 1) {                    # make sure that project exists and we can access it before mutating data
    updateProduct(id: 1, name: "Foo") {   # the resolve function receives a valid `project` as the first argument
      id
    }
  }
}
Is this a valid example? Should mutations be nested like this? If not, how should I handle nested resources? I cannot find any real-life example that mutates nested resources. All the examples define mutations only on the first level (fields on the root mutation).
The product has a unique ID, so that's all you need to identify it.
mutation {
  updateProduct(id: 1, name: "Foo") {
    id
  }
}
To verify that the user is authorized to modify the product, you should check the product's project. You'd probably have some centralized authorization anyway:
resolve({ id }, { user }) {
  authorize(user, 'project', Product.find(id).project) // or whatever
  ... // update
}
Old answer:
This is certainly a valid example.
I'd guess the lack of nested object mutation examples may be due to the fact that even if a product is linked to a project, it would in most cases still have a unique ID -- so you can find the product even without the project id.
Another alternative would be to include the project id as an argument to updateProduct:
mutation {
  updateProduct(projectId: 1, id: 1, name: "Foo") {
    id
  }
}
Your solution seems nicer to me, though.
As a note, mutations are, in fact, exactly the same as queries. The only difference is that the resolve function typically includes some permanent change, like modifying some data. Even then though, the mutation behaves just like a query would -- validate args, call resolve function, return data of the declared type.
We declare such a method as a mutation (and not a query) to make it explicit that some data is going to be changed, but also for a more important reason: the order in which you modify data matters. If you declare multiple mutations in one request, the executor will run them sequentially to maintain consistency (this doesn't attempt to solve distributed writes, though; that's a whole other problem).
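For example, given a hypothetical schema with withdraw and deposit mutation fields, the two root fields below are guaranteed to execute one after the other, in document order:

mutation {
  withdraw(accountId: 1, amount: 50) {   # executed first
    balance
  }
  deposit(accountId: 2, amount: 50) {    # executed only after withdraw completes
    balance
  }
}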

CQRS Commands and Events as generic classes?

In most examples I've seen, commands and events are represented as classes. That means you have to write a CorrectNameCommand class with a name property and a NameCorrectedEvent class with a name property. Given that in most cases both commands and events are serialized, deserialized, and sent to other parties (there goes the compile-time type safety), what is the advantage of these explicit classes over a more generic class?
Example:
A Command class with a Name (that represents the type of the command), the key of the aggregate that should handle the command, and an array of objects or name/value pairs for any other parameters.
An Event class essentially the same (perhaps we can put the shared parts in a CommandEventBase class).
The command (and event) handlers now have to check the name of the command (event) instead of its class type, and have to rely on the correctness of the parameters in the list (just as the deserializer has to rely on the serialized format being correct).
Is that a good approach? If yes, why is it not used in the samples and tutorials? If not, what are the problems?
Duplication
It's a fair point that when Commands and Events are serialized, compile-time safety is lost, but in a statically typed language, I'd still prefer strongly typed Command and Event types.
The reason for this is that it gives you a single piece of the code base responsible for interpreting message elements. Serialization tends to be quite (type-)safe; deserialization is where you may encounter problems.
Still, I'd prefer to deal with any such problems in a single place, instead of spread out over the entire code base.
This is particularly relevant for Events, since you may have multiple Event Handlers handling the same type of Event. If you treat events as weakly typed dictionaries, you'll need to duplicate the implementation of a Tolerant Reader in each and every Event Handler.
On the other hand, if you treat Events and Commands as strong types, your deserializer can be the single Tolerant Reader you'd have to maintain.
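As a minimal sketch of that single Tolerant Reader, assuming Json.NET and a hypothetical NameCorrectedEvent type with a (userId, newName) constructor:

using Newtonsoft.Json.Linq;

public static class EventReader
{
    // The one place that tolerates missing or renamed fields; every Event
    // Handler downstream receives a fully typed NameCorrectedEvent.
    public static NameCorrectedEvent ReadNameCorrected(string json)
    {
        var raw = JObject.Parse(json);
        var userId = (string)raw["userId"] ?? (string)raw["id"];
        var newName = (string)raw["newName"] ?? string.Empty;
        return new NameCorrectedEvent(userId, newName);
    }
}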
Types
All this said, I can understand why you, in languages like C# or Java, find that defining immutable DTOs for each and every message seems like a lot of overhead:
public sealed class CorrectNameCommand
{
    private readonly string userId;
    private readonly string newName;

    public CorrectNameCommand(string userId, string newName)
    {
        this.userId = userId;
        this.newName = newName;
    }

    public string UserId
    {
        get { return this.userId; }
    }

    public string NewName
    {
        get { return this.newName; }
    }

    public override bool Equals(object obj)
    {
        var other = obj as CorrectNameCommand;
        if (other == null)
            return base.Equals(obj);

        return object.Equals(this.userId, other.userId)
            && object.Equals(this.newName, other.newName);
    }

    public override int GetHashCode()
    {
        return this.userId.GetHashCode() ^ this.newName.GetHashCode();
    }
}
That, indeed, seems like a lot of work.
This is the reason that, these days, I prefer other languages for implementing CQRS. On .NET, F# is a perfect fit, because all of the above code boils down to this one-liner:
type CorrectNameCommand = { UserId : string; NewName : string }
That's what I'd do, instead of passing weakly typed dictionaries around. Last time I heard Greg Young talk about CQRS (NDC Oslo 2015), he seemed to have 'converted' to F# as well.

MongoDB C# Select specific columns

I know that the MongoDB C# driver doesn't support projections, so I searched a bit and found that many people use a MongoCursor to perform such queries. I'm trying to select only specific fields, and my code is the following:
public T GetSingle<T>(Expression<Func<T, bool>> criteria, params Expression<Func<T, object>>[] fields) where T : class
{
    Collection = GetCollection<T>();
    return Collection.FindAs<T>(Query<T>.Where(criteria))
                     .SetFields(Fields<T>.Include(fields))
                     .SetLimit(1)
                     .SingleOrDefault();
}
I've got a custom repository for users on top of that:
public User GetByEmail(string mail, params Expression<Func<User, object>>[] fields)
{
    return GetSingle<User>(x => x.Email == mail, fields);
}
this is the usage:
_repository.GetByEmail(email, x => x.Id, x => x.DisplayName, x => x.ProfilePicture)
but I'm getting the fields included in the parameter plus all the enum, date, and Boolean values that are part of the User class. The string values that are not included in the field list are null, so that's fine.
What can I do to avoid that?
By using SetFields, you can specify what goes through the wire. However, you're still asking the driver to return hydrated objects of type T, User in this case.
Now, much like an int, enums and booleans are value types, so their values can't be null. So this is strictly a C# problem: there is simply no value for these properties to indicate that they don't exist. Instead, they assume a default value (e.g. false for bool and 0 for numeric types). A string, on the other hand, is a reference type, so it can be null.
Strategies
Make the properties nullable. You can use nullable fields in your models, e.g.:
class User {
    public bool? GetMailNotifications { get; set; }
}
That way, the value type can have one of its valid values or be null. This can, however, be clumsy to work with because you'll have to do null checks and use myUser.GetMailNotifications.Value or the myUser.GetMailNotifications.GetValueOrDefault helper whenever you want to access the property.
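For example (a small sketch, assuming a User instance that was loaded with only some fields included):

// GetMailNotifications is bool?, so null signals "not loaded" rather than false
if (user.GetMailNotifications.HasValue)
{
    bool sendMail = user.GetMailNotifications.Value;
}

// or collapse null to the type's default (false) in one call
bool sendMailOrDefault = user.GetMailNotifications.GetValueOrDefault();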
Simply include the fields instead. This doesn't answer the question of how to avoid that, but there are at least three good reasons why it's a good idea to include them:
When passing a User object around, it's desirable that the object is in a valid state. Otherwise, you might pass a partially hydrated object to a method which passes it on, and at some point someone attempts an operation that doesn't make sense because the object is incomplete.
It's easier to use.
The performance benefit is negligible, unless you're embedding huge arrays, which I would suggest refraining from anyway, and which isn't the case here.
So the question is: why do you want to make all the effort of excluding certain fields?

Zend - Design Pattern DataMapper & Table Gateway

This is directly out of the Zend Quick Start guide. My question is: why would you need the setDbTable() method when the getDbTable() method assigns a default Zend_Db_Table object? If you know this mapper uses a particular table, why even offer the possibility of potentially using the "wrong" table via setDbTable()? What flexibility do you gain by being able to set the table if the rest of the code (find(), fetchAll() etc.) is specific to Guestbook?
class Application_Model_GuestbookMapper
{
    protected $_dbTable;

    public function setDbTable($dbTable)
    {
        if (is_string($dbTable)) {
            $dbTable = new $dbTable();
        }
        if (!$dbTable instanceof Zend_Db_Table_Abstract) {
            throw new Exception('Invalid table data gateway provided');
        }
        $this->_dbTable = $dbTable;
        return $this;
    }

    public function getDbTable()
    {
        if (null === $this->_dbTable) {
            $this->setDbTable('Application_Model_DbTable_Guestbook');
        }
        return $this->_dbTable;
    }

    // ... GUESTBOOK SPECIFIC CODE ...
}

class Application_Model_DbTable_Guestbook extends Zend_Db_Table_Abstract
{
    protected $_name = 'guestbook_table';
}
Phil is correct; this is known as the lazy-loading design pattern. I just implemented this pattern in a recent project because of these benefits:
When I call the getMember() method, I get a return value regardless of whether it has been set before or not. This is great for method chaining: $this->getCar()->getTires()->getSize();
This pattern offers flexibility in that outside calling code is still able to set member values: $myClass->setCar(new Car());
-- EDIT --
Use caution when implementing the lazy-loading design pattern. If your objects are not properly hydrated, a query will be issued for every piece of data that is NOT available. The best thing to do is tail your DB query log during the dev phase to ensure the number and type of queries are what you expect. A project I was working on was issuing over 27 queries for a "detail" page, and I had no idea until I saw the queries.
This method is called lazy-loading. It allows a property to remain null until requested unless it is set earlier.
One use for setDbTable() would be testing. This way you could set a mock DB table or something like that.
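For example, a rough sketch of such a test, assuming PHPUnit's mock builder; the fetchAll() expectation is purely illustrative:

// Inject a mock table gateway so the mapper can be tested without a real DB.
$mockTable = $this->getMockBuilder('Application_Model_DbTable_Guestbook')
                  ->disableOriginalConstructor()
                  ->getMock();
$mockTable->expects($this->once())
          ->method('fetchAll')
          ->will($this->returnValue(array()));

$mapper = new Application_Model_GuestbookMapper();
$mapper->setDbTable($mockTable);
$entries = $mapper->fetchAll();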
One addition: if setDbTable() is solely for lazy loading, wouldn't it make more sense to make it private? That way it would avoid accidental assignment to the wrong table, as originally mentioned by Sam.
Should we be compromising the design for the sake of testability?