Manage data transfer in REST Architecture (Interface)

I'm working on my first client-server project and using REST.
So my question is where and how do I handle the data.
Options:
1. Define a data model and share it between the server and client. That way I could use JSON and transfer whole objects, but each change to the data model potentially requires changes in both the server and client implementations.
2. Simply transfer the data as basic types (strings, booleans, etc.), so a data model is only required in the client.
What do you recommend?

Since you want to develop REST APIs, and REST revolves around resource representations, the first option (define a data model) is the way to go.
Note that not every data model change will break the APIs and thus the client implementations. Only when you restructure your resource representation, or remove one of the attributes from the model, will you need to version your APIs.
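As a rough illustration (the class and field names here are hypothetical), a shared Jackson-annotated model class can tolerate additive changes, so not every model change forces a client update:

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

// Shared between client and server as the resource representation.
// Ignoring unknown properties lets older clients tolerate new fields.
@JsonIgnoreProperties(ignoreUnknown = true)
public class Order {
    public long id;
    public String status;
    // Adding a new optional field later is non-breaking; removing or
    // renaming one is what forces you to version the API.

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // "newField" is unknown to this client and is simply ignored.
        Order order = mapper.readValue(
                "{\"id\":1,\"status\":\"OPEN\",\"newField\":42}", Order.class);
        System.out.println(order.id + " " + order.status); // prints: 1 OPEN
    }
}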

Related

What are the differences when using value proxies for my entities instead of entity proxies?

So far I understand that I will no longer need to define a version field in my entities and no longer need to use an entity locator, and for value proxies I will have to use normal editors. Any other differences, advantages, disadvantages? What about in the context of using RequestFactory in conjunction with Spring?
The main difference is that with EntityProxy, the client can send a diff of changes rather than the entire object graph. This is possible because EntityProxies have an identity, so the server can fetch the corresponding entity from the datastore, apply the diff/patch sent by the client, and only then pass the entity to your service methods.
With ValueProxy you basically have an equivalent of GWT-RPC: the object is reconstructed from scratch on the server and is not associated with your datastore (in the case of JPA, for instance, it's not attached to the session). Depending on your datastore API, this can make things more complex to handle in your service methods.
Other than that, you'll also lose the EntityProxyChange events.
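To make the distinction concrete, here is a minimal sketch of the two proxy styles, assuming the standard GWT 2.4+ RequestFactory annotations; the Customer and Address domain classes are hypothetical stubs:

import com.google.web.bindery.requestfactory.shared.EntityProxy;
import com.google.web.bindery.requestfactory.shared.EntityProxyId;
import com.google.web.bindery.requestfactory.shared.ProxyFor;
import com.google.web.bindery.requestfactory.shared.ValueProxy;

// Hypothetical server-side domain classes the proxies mirror.
class Customer {
    private String name;
    public Long getId() { return 1L; }        // identity required for an EntityProxy
    public Integer getVersion() { return 1; } // version lets the server detect stale diffs
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

class Address {
    private String street;
    public String getStreet() { return street; }
    public void setStreet(String street) { this.street = street; }
}

// Has a stable identity, so the client can send just a diff of changes.
@ProxyFor(Customer.class)
interface CustomerProxy extends EntityProxy {
    String getName();
    void setName(String name);
    EntityProxyId<CustomerProxy> stableId();
}

// No identity: the whole object is serialized and rebuilt on the server,
// much like a GWT-RPC DTO, and is not attached to the datastore session.
@ProxyFor(Address.class)
interface AddressProxy extends ValueProxy {
    String getStreet();
    void setStreet(String street);
}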

Java Rest Client consuming JSON - how to create JAXB objects?

I need to write a REST client (in Java, using RestEasy) that can consume JSON responses. Regarding the need for the REST client (or wrapping service) to translate the JSON responses to a Java type, I see the following options:
1. Map the response to a string and then use JsonParser tools to extract the data and build the types manually.
2. Use JAXB-annotated POJOs, in conjunction with Jackson, to automatically bind the JSON response to an object.
Regarding 2, is it desirable / correct to define an XSD to generate the JAXB-annotated POJOs? I can see advantages to doing this, e.g. reuse by an XML client.
Thanks.
I'm a fan of #2.
The reasoning is that your JAXB annotated model objects essentially are the contract for the business/domain logic that you're trying to represent on a transport level, and POJOs obviously give you excellent control over getter/setter validation, and you can control your element names and namespaces with fine granularity.
With that said, I like having an additional "inner" model of POJOs (if necessary, depending on problem complexity/project scope) to isolate the transport layer from the domain objects. Also, you get a nice warm feeling that you're not directly tied to your transport layer if things need to change internally in your business/domain object representation. A co-worker mentioned Dozer, a tool for mapping beans to beans, but I have no direct experience with it to comment further.
I'm not a fan of generating code from XSDs. Often the code is ugly or downright unreadable, and managing change, however subtle or insignificant, can introduce unexpected results. Maybe I'm wrong about that, but I require good unit tests on a proven model.
This is based on my personal experience writing a customer-facing SDK with a hairy XML-over-HTTP (we don't call it REST) API. JAXB/Jackson annotated POJOs made it relatively painless. Hope that helps.
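As a minimal sketch of option 2 (the Customer class and its field are hypothetical), a JAXB-annotated POJO can be bound from JSON by registering Jackson's JAXB annotation module (from the jackson-module-jaxb-annotations artifact):

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.module.jaxb.JaxbAnnotationModule;

// Transport-level contract: the JAXB annotations drive both the XML
// binding and, via Jackson's JAXB module, the JSON binding.
@XmlRootElement(name = "customer")
@XmlAccessorType(XmlAccessType.FIELD)
public class Customer {
    @XmlElement(name = "full_name")
    private String fullName;

    public String getFullName() { return fullName; }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Make Jackson honor the JAXB annotations instead of its own defaults.
        mapper.registerModule(new JaxbAnnotationModule());
        Customer c = mapper.readValue("{\"full_name\":\"Jane Doe\"}", Customer.class);
        System.out.println(c.getFullName()); // prints: Jane Doe
    }
}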

.NET Session State Caching with Redis, MongoDB, ServiceStack

I have been doing some research on whether or not it is OK to cache .NET session state in external DBs such as Redis, MongoDB, or other highly scalable tools.
The outcome of my research was that even though MongoDB has more integrations for this kind of thing, Redis seems far more performant and has many more options (key expiration, sets, etc.) to use.
There is this other framework called ServiceStack, which has an implementation of a RedisClient, but IMHO the way it is implemented is far more coupled than I would like:
public override object OnGet(CachedOrders request)
{
    var cacheKey = "some_unique_key_for_order";
    return base.RequestContext.ToOptimizedResultUsingCache(this.CacheClient, cacheKey, () =>
    {
        // This delegate will be executed if the cache doesn't have an
        // item with the provided key. Return your response DTO here;
        // it will be cached automatically.
    });
}
So after this research, I would like to know your opinion: have you implemented this kind of caching in any of your apps? Can you please share your experiences?
Thanks!
ServiceStack's caching isn't coupled; the ToOptimizedResultUsingCache() method is just a convenience extension method that lets you implement a common caching pattern with the minimal boilerplate necessary. The ToOptimizedResult method returns the most optimized result based on the MimeType and CompressionType from the IRequestContext, e.g. in a JSON service it would normally be the deflated output of the JSON response DTO.
You don't have to use the extension method; you can access the ICacheClient API directly, as it's an auto-wired property in the ServiceBase class. If you require more functionality than the ICacheClient API can provide, I recommend using Redis and ServiceStack's C# RedisClient, which gives you fast, atomic access to distributed computer-science collections.
The benefit of using the ICacheClient API is that it's a testable, implementation-agnostic caching interface that currently has InMemory, Redis, and Memcached providers.
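Under the hood this is the ordinary cache-aside pattern: check the cache first, and on a miss compute the result, store it, and return it. As a rough illustration (not ServiceStack itself), here is a minimal sketch in Java using the Jedis Redis client; the key, TTL, and the supplier that builds the response are hypothetical:

import java.util.function.Supplier;
import redis.clients.jedis.Jedis;

public class CacheAside {
    private final Jedis redis = new Jedis("localhost", 6379);

    // Return the cached value if present; otherwise compute it,
    // cache it with a TTL, and return it.
    public String getOrCompute(String key, int ttlSeconds, Supplier<String> compute) {
        String cached = redis.get(key);
        if (cached != null) {
            return cached; // cache hit
        }
        String fresh = compute.get();        // cache miss: build the response
        redis.setex(key, ttlSeconds, fresh); // cached automatically for next time
        return fresh;
    }

    public static void main(String[] args) {
        CacheAside cache = new CacheAside();
        String orders = cache.getOrCompute("some_unique_key_for_order", 300,
                () -> "{\"orders\":[]}"); // hypothetical serialized response DTO
        System.out.println(orders);
    }
}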
I wrote a NuGet package for this 2 years ago, and we've been using it in production since then. It overrides the .NET SessionStateProvider with a custom class that allows persistence to Redis, with various features like:
Only writes to Redis when values change
Concurrent session access from multiple requests with the same session ID
Ability to access the session from Web API (although doing so violates REST principles, if you care)
Easy customization of the serialization format (roll your own or change the JSON structure)
You can get it here: https://www.nuget.org/packages/RedisSessionProvider
Docs: https://github.com/welegan/RedisSessionProvider

Transferring OWL data from client to server using GWT

I am working on a web application which is being developed using GWT. I am also using OWL ontologies and the Jena framework to structure semantic content in the application.
A simple function in the application would be getting some data from the user and sending it to the server side to be stored as a data graph using the ontology. I suppose one way would be to store the data as Java class objects equivalent to the ontology classes and send them using the GWT async communication. To convert OWL classes to Java, I used Jastor.
My question is: after the server receives the Java object, is it possible to easily convert it to an OWL individual and add it to the data graph, using the functions of Jena and/or Jastor? For instance, in the server-side interface implementation we would call something like this:
public void StoreUser(User userObj) {
    // User: a Jastor-generated Java class. userObj is instantiated
    // using the user data on the client side.
    OntModel ontModel = ModelFactory.createOntologyModel(OntModelSpec.OWL_DL_MEM);
    // Open the ontology here using an InputStream and ontModel.read!
    Individual indiv = (Individual) userObj.resource();
    // Add the individual to the model here!
}
Unfortunately I wasn't able to find any Jena function that can add an existing individual to the model.
Would you suggest another way to pass the ontology data to the server side and store it, rather than using Jastor-created classes (for instance, using an XML file)?
Thanks for your help
There are two parts to the answer. First, an Individual is a subclass of a Jena Resource, which is definitely something that you can add to a model. However, individual resources, properties, and literals are not stored in a Model by themselves: a Model stores only triples, represented as Statement objects in the Java API. So to add some resource to a model, you have to include it in a triple.
In Jena, an individual is defined as a subject of a triple whose predicate is rdf:type and whose object is not one of the built-in language classes. So if you have:
ex:my_car rdf:type ex:Ferrari .
ex:Ferrari rdf:type owl:Class .
(note: this example is entirely fictitious!), then ex:my_car would be an individual, but ex:Ferrari would not (because owl:Class is a built-in type). So, to add your individual to your model, you just need to assert that it is of some type. Since I don't know GWT and don't use Jastor, I can't say whether the type association that is normally part of a Jena Individual is retained after serialization. I suspect not, in which case you'll need some other means of determining the type of the individual you want to add, or use a different predicate than rdf:type to add the resource to the Model.
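For illustration, here is a minimal Jena sketch of exactly that: asserting the rdf:type triple is what makes the resource an individual in the model. It assumes the pre-Apache com.hp.hpl.jena package names of the era, and the ex: namespace URI is made up:

import com.hp.hpl.jena.ontology.Individual;
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class AddIndividual {
    public static void main(String[] args) {
        String ex = "http://example.org/cars#";
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_DL_MEM);

        // ex:Ferrari rdf:type owl:Class .
        OntClass ferrari = model.createClass(ex + "Ferrari");

        // ex:my_car rdf:type ex:Ferrari . -- this single triple is what
        // makes ex:my_car an individual in the model.
        Individual myCar = model.createIndividual(ex + "my_car", ferrari);

        model.write(System.out, "N-TRIPLE");
    }
}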
All that said, personally I probably wouldn't solve your problem this way at all. Typically, when I'm working with client-side representations of server-side RDF, I send just the minimal information (e.g. URI and label) to the client as JSON. If I need any more data on a given resource, I either send it along with the initial JSON serialization, or it's just an Ajax call away. But, as I say, I don't use GWT so that advice may not be of any use to you.

What is a domain model? Why is it preferred over a dataset in .NET?

In my assignments I have been using datasets. Now that I have started working in this software company, people are using something called a DTO - a data transfer object. Where does the domain model come in? What is it really?
Thanks
DTOs are simple data structure objects that serve only to transfer data out of a database (often via an ORM) and make those data available to higher layers of the application. If a DTO is used to feed into a proper domain model layer, this is architecturally valid (though perhaps redundant). If you treat your DTOs as a domain model layer (in other words, you have no domain logic separate from the user interface), then you are using your DTOs as an anemic domain model, which is a severe architectural anti-pattern.
DTO = Data Transfer Object, and they are just what they sound like: objects that transfer data between system layers. The purpose is often to adapt the request and response data so it suits the use case. An example: you request a CV through an HR system's CandidateService in the application layer. The CandidateService loads information that spans several domain entities: WorkExperience, Education, PersonalLetter, etc. To avoid a complex and massive response object graph, we can flatten the response by building a DTO that is designed for exactly what the client (GUI) needs.
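As a minimal sketch of that flattening step (all class and field names here are hypothetical):

import java.util.List;
import java.util.stream.Collectors;

// Hypothetical domain entities loaded by the CandidateService.
class WorkExperience { String employer; }
class Education { String school; }

// Flat DTO shaped for exactly what the GUI needs: one simple object
// instead of a graph of domain entities crossing the service boundary.
class CandidateCvDto {
    String fullName;
    List<String> employers;
    List<String> schools;

    static CandidateCvDto from(String name, List<WorkExperience> work, List<Education> edu) {
        CandidateCvDto dto = new CandidateCvDto();
        dto.fullName = name;
        dto.employers = work.stream().map(w -> w.employer).collect(Collectors.toList());
        dto.schools = edu.stream().map(e -> e.school).collect(Collectors.toList());
        return dto;
    }
}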
There is a lot to say about DTOs, but I do not want to write a novel :) DTOs do not belong in the domain model, in the core. In DDD, DTOs are mostly referred to as a tool for communication between application services and clients, especially if you use web services (WCF etc.). A DTO is then a perfect way of serializing part of your domain into a web service message (a serialized DTO).
Hopefully you can also ask your colleagues/co-workers what they intended to accomplish with DTOs. There are several drawbacks to DTOs; usually they give you an extra layer, and that means more to do during the maintenance phase...
(Almost a novel by now.) I use DTOs only when there is a real benefit, and that is when you can deliver a complex response with a DTO that matches the client's needs exactly. Otherwise the client usually needs to call different services or methods to gather enough information.