.NET Session State Caching with Redis, MongoDB, ServiceStack

I have been doing some research on whether it is OK to cache .NET Session State in external stores such as Redis, MongoDB, or other highly scalable tools.
The outcome of my research was that even though MongoDB has more existing integrations for this kind of thing, Redis is far more performant and offers many more options (key expiration, sets, etc.).
There is another framework called ServiceStack which has an implementation of a RedisClient, but IMHO the way it is implemented is more coupled than I would like:
public override object OnGet(CachedOrders request)
{
    var cacheKey = "some_unique_key_for_order";
    return base.RequestContext.ToOptimizedResultUsingCache(this.CacheClient, cacheKey, () =>
    {
        // This delegate is only executed if the cache doesn't have an item
        // with the provided key. Return your response DTO here and
        // it will be cached automatically.
        return GetOrdersResponse(request); // hypothetical helper that builds the response DTO
    });
}
So after this research, I would like to know your opinion and whether you have implemented this kind of caching in any of your apps. Can you please share your experiences?
Thanks!

ServiceStack's caching isn't coupled; the ToOptimizedResultUsingCache() method is just a convenience extension method that lets you implement a common caching pattern with the minimal boilerplate necessary. The ToOptimizedResult method returns the most optimized result based on the MimeType and CompressionType from the IRequestContext, e.g. in a JSON service it would normally be the deflated output of the JSON response DTO.
You don't have to use the extension method; you can access the ICacheClient API directly, as it's an auto-wired property in the ServiceBase class. If you require more functionality than the ICacheClient API can provide, I recommend using Redis and ServiceStack's C# RedisClient, which gives you fast, atomic access to distributed comp-sci collections.
The benefit of using the ICacheClient API is that it's a testable, implementation-agnostic caching interface that currently has InMemory, Redis and Memcached providers.
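For illustration, here is a minimal sketch of using the ICacheClient directly instead of the extension method, reusing the CachedOrders request from the question; the OrdersResponse DTO and how it gets populated are assumptions:

public override object OnGet(CachedOrders request)
{
    var cacheKey = "some_unique_key_for_order";

    // ICacheClient is an auto-wired property on ServiceBase
    var cached = this.CacheClient.Get<OrdersResponse>(cacheKey);
    if (cached != null)
        return cached;

    var response = new OrdersResponse(); // hypothetical response DTO
    // ...populate the response from your data source...

    this.CacheClient.Set(cacheKey, response, TimeSpan.FromMinutes(10)); // optional expiry
    return response;
}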

I wrote a NuGet package for this 2 years ago, and we've been using it in production since then. It overrides the .NET SessionStateProvider with a custom class that persists to Redis, with features like:
Only writing to Redis when values change
Concurrent session access from multiple requests with the same session ID
The ability to access the session from Web API (although doing so violates REST principles, if you care)
Easy customization of the serialization format (roll your own or change the JSON structure)
You can get it here: https://www.nuget.org/packages/RedisSessionProvider
Docs: https://github.com/welegan/RedisSessionProvider
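For context, ASP.NET custom session providers like this are plugged in through web.config's sessionState element. A rough sketch; the provider type name below is an assumption, so check the project docs for the exact value:

<sessionState mode="Custom" customProvider="RedisSessionProvider">
  <providers>
    <!-- type name is illustrative; see the RedisSessionProvider docs -->
    <add name="RedisSessionProvider"
         type="RedisSessionProvider.RedisSessionStateStoreProvider, RedisSessionProvider" />
  </providers>
</sessionState>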

Related

Manage data transfer in REST Architecture (Interface)

I'm working on my first client-server project and using REST.
So my question is where and how do I handle the data.
Options:
Define a data model and share it between the server and client. I could then use JSON and transfer objects, but each change to the data model may also require changes in both the server and client implementations.
Simply transfer the data as basic data types (strings, booleans, etc.), so a data model is only required in the client.
What do you recommend?
Since you want to develop REST APIs, and REST revolves around resource representations, the first option (define a data model) is the way to go.
Note that not all data model changes will break the APIs, and thus client implementations. Only when you restructure your resource representation, or when you remove one of the attributes from the model, will you need to version your APIs.
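To illustrate with a hypothetical /users resource: adding an attribute is typically non-breaking for existing clients, while renaming or removing one is breaking and calls for a new API version:

GET /v1/users/1  ->  { "id": 1, "name": "Alice" }
GET /v1/users/1  ->  { "id": 1, "name": "Alice", "email": "a@example.com" }   (attribute added: old clients keep working)
GET /v2/users/1  ->  { "id": 1, "fullName": "Alice" }                         (attribute renamed: breaking, so a new version)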

Best way to re-use ServiceStack web api interface

I have many models that define all my db tables. I'm wondering what the best way is to create one single CRUD ServiceStack interface for all these models without writing the same code for each one.
I'd like to keep it DRY to ease future maintenance.
Thank you.
Check out AutoQuery, which lets you expose a rich, queryable API for each table just by declaring its Request DTO:
[Route("/movies")]
public class FindMovies : QueryBase<Movie> {}
You want a typed Request DTO for each Service, but other than that you can use a base class, shared extension or utility methods to execute common logic as you would in normal C#. The built-in Auto Mapping also reduces the boilerplate for populating a Table POCO from a request DTO.
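As a sketch of the shared-base-class approach for the non-query CRUD side (the CrudServiceBase and CreateMovie names are illustrative, and OrmLite is assumed for persistence):

public abstract class CrudServiceBase<TTable> : Service where TTable : class, new()
{
    // Shared create logic: auto-map the request DTO onto the table POCO and insert it
    protected object Create(object requestDto)
    {
        var row = requestDto.ConvertTo<TTable>(); // ServiceStack's built-in auto mapping
        Db.Insert(row);
        return row;
    }
}

public class MovieService : CrudServiceBase<Movie>
{
    public object Post(CreateMovie request)
    {
        return Create(request);
    }
}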

What are the differences of using value proxies for my entities instead of entity proxies?

So far I understand that I will no longer need to define a version field in my entities, and no longer need to use an entity locator, and that for value proxies I will have to use normal editors. Are there any other differences, advantages, or disadvantages? What about in the context of using RequestFactory in conjunction with Spring?
The main difference is that with an EntityProxy, the client can send a diff of changes rather than the entire object graph. This is possible because EntityProxys have an identity, so the server can fetch the entity by its identity from the datastore, apply the diff/patch sent by the client, and only then pass the entity to your service methods.
With a ValueProxy you basically have the equivalent of GWT-RPC: the object is reconstructed from scratch on the server and is not associated with your datastore (in the case of JPA, for instance, it's not attached to the session). Depending on your datastore API, this can make things more complex to handle in your service methods.
Other than that, you'll also lose the EntityProxyChange events.
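To make the distinction concrete, a minimal sketch of the two proxy kinds (the Order and Address domain classes are hypothetical):

import com.google.web.bindery.requestfactory.shared.EntityProxy;
import com.google.web.bindery.requestfactory.shared.ProxyFor;
import com.google.web.bindery.requestfactory.shared.ValueProxy;

// Has an identity (and a version on the server), so the client can send diffs
@ProxyFor(Order.class)
interface OrderProxy extends EntityProxy {
    String getStatus();
    void setStatus(String status);
}

// No identity: the whole object is reconstructed from scratch on the server
@ProxyFor(Address.class)
interface AddressProxy extends ValueProxy {
    String getStreet();
    void setStreet(String street);
}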

Java Rest Client consuming JSON - how to create JAXB objects?

I need to write a REST client (in Java, using RESTEasy) that can consume JSON responses. Regarding the need for the REST client (or a wrapping service) to translate the JSON responses into a Java type, I see the following options:
1. Map the response to a string and then use JsonParser tools to extract data and build types manually.
2. Use JAXB-annotated POJOs, in conjunction with Jackson, to automatically bind the JSON response to an object.
Regarding 2, is it desirable/correct to define an XSD to generate the JAXB-annotated POJOs? I can see advantages to doing this, e.g. reuse by an XML client.
Thanks.
I'm a fan of #2.
The reasoning is that your JAXB-annotated model objects essentially are the contract for the business/domain logic that you're trying to represent at the transport level; POJOs give you excellent control over getter/setter validation, and you can control your element names and namespaces with fine granularity.
With that said, I like having an additional "inner" model of POJOs (if necessary, depending on problem complexity/project scope) to isolate the transport layer from the domain objects. Also, you get a nice warm feeling that you're not directly tied to your transport layer if things need to change internally in your business/domain object representation. A co-worker mentioned Dozer, a tool for mapping beans to beans, but I have no direct experience with it to comment further.
I'm not a fan of generating code from XSDs. Often the code is ugly or downright unreadable, and managing change, however subtle or insignificant, can introduce unexpected results. Maybe I'm wrong about that, but I require good unit tests on a proven model.
This is based on my personal experience writing a customer-facing SDK with a hairy XML-over-HTTP (we don't call it REST) API. JAXB/Jackson annotated POJOs made it relatively painless. Hope that helps.
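As a small sketch of option 2 (the Customer class and its fields are made up), a hand-written JAXB-annotated POJO that Jackson's JAXB annotation support can bind to and from JSON:

import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "customer")
public class Customer {
    private String name;
    private String email;

    @XmlElement
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    @XmlElement
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}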

Sending persisted JDO instances over GWT-RPC

I've just started learning Google Web Toolkit and finished writing the Stock Watcher tutorial app.
Is my thinking correct that if one wants to persist a business object (like a Stock) using JDO and send it back and forth to/from the client over RPC, then one has to create two separate classes for that object: one with the JDO annotations for persisting it on the server, and another which is serialisable and used over RPC?
I notice the Stock Watcher has separate classes, and I can theorise why:
Otherwise the GWT compiler would try to generate JavaScript for everything the persisted class references, like JDO and com.google.blah.users.User, etc.
Also, there may be logic in the server-side class which doesn't apply to the client, and vice versa.
I just want to make sure I'm understanding this correctly. I don't want to have to create two versions of all my business object classes which I want to use over RPC if I don't have to.
The short answer is: you don't need to create duplicate classes.
I recommend that you take a look at the following Google Groups discussion on the gwt-contributors list:
http://groups.google.com/group/google-web-toolkit-contributors/browse_thread/thread/3c768d8d33bfb1dc/5a38aa812c0ac52b
Here is an interesting excerpt:
If this is all you're interested in, I described a way to make GAE and GWT-RPC work together "out of the box". Just declare your entities as:

@PersistenceCapable(identityType = IdentityType.APPLICATION, detachable = "false")
public class MyPojo implements Serializable { }

and everything will work, but you'll have to manually deal with re-attachment when sending objects from the client back to the server.
You can use this option, and you will not need a mirror (DTO) class.
You can also try Gilead (formerly hibernate4gwt), which takes care of some of the problems of serializing enhanced objects.
Your assessment is correct. JDO replaces collection instances with its own implementations, in order to detect when the object graph changes, I suppose. These implementations are not known to the GWT compiler, so it will not be able to serialize them. This often happens with classes that are composed of otherwise GWT-compliant types but carry JDO annotations, especially if some of the object's properties are Collections.
For a detailed explanation and a workaround, check out this pretty influential essay on the topic: http://timepedia.blogspot.com/2009/04/google-appengine-and-gwt-now-marriage.html
I finally found a solution. Don't change your object at all, but return the query results like this:

// Copy the query results into a plain ArrayList so that GWT never has to
// serialize the internal List implementation returned by the JDO query
List<YourCustomObject> secureList = (List<YourCustomObject>) pm.newQuery(query).execute();
return new ArrayList<YourCustomObject>(secureList);

The actual problem is not in serializing your object; the problem is serializing the Collection implementation returned by the query, which is implemented by Google and cannot be serialized out to the client.
You do not have to create two versions of the domain model.
Here are two tips:
Use a String-encoded key, not the App Engine Key class.
pojo = pm.detachCopy(pojo) will remove all the JDO enhancements (see the sketch below).
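A minimal sketch of the second tip; the MyPojo entity and the method and variable names are assumed:

import javax.jdo.PersistenceManager;

public class PojoLoader {
    // Fetch by the String-encoded key, then detach before returning over GWT-RPC
    public static MyPojo loadForRpc(PersistenceManager pm, String encodedKey) {
        MyPojo attached = pm.getObjectById(MyPojo.class, encodedKey);
        return pm.detachCopy(attached); // the detached copy carries no JDO state
    }
}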
You don't have to create separate classes at all; in fact, you're better off not doing so. Your JDO objects should be plain POJOs anyway, and should never contain business logic: that's for your business layer, not your persistent objects themselves.
All you need to do is include the source for the annotations you are using, and GWT should compile your class just fine (see the module sketch below). Also, you want to avoid using libraries that GWT can't compile (such as anything that relies on reflection), but in all the projects I've done, this has never been a problem.
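For reference, a rough sketch of a GWT module file; the MyApp.gwt.xml name and the shared package are assumptions, the point being that the package holding your annotated POJOs (and the annotation sources) must be on the GWT source path:

<!-- MyApp.gwt.xml (illustrative) -->
<module>
  <inherits name="com.google.gwt.user.User"/>
  <source path="client"/>
  <source path="shared"/> <!-- package containing the annotated POJOs -->
</module>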
I think a better format for sending objects through GWT is JSON. In this case the server sends a JSON string, which then has to be parsed on the client. The advantage is that the final JavaScript rendered in the browser is smaller, so the page loads faster.
Secondly, to send objects over GWT-RPC the objects must be serializable, which may not be the case for all objects.
Thirdly, GWT has built-in functions to handle JSON, so there are no issues on the client end.
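For example, on the client side GWT's com.google.gwt.json.client package can parse the server's JSON string; the stock payload and its field name here are hypothetical:

import com.google.gwt.json.client.JSONObject;
import com.google.gwt.json.client.JSONParser;

public class StockParser {
    // Parse the JSON payload returned by the server and extract a field
    public static String parseSymbol(String jsonText) {
        JSONObject stock = JSONParser.parseStrict(jsonText).isObject();
        return stock.get("symbol").isString().stringValue();
    }
}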