Is there an OnDeserializing/OnDeserialized equivalent for ServiceStack's TypeSerializer?

I want to switch some code from using .NET's DataContractSerializer to using ServiceStack's TypeSerializer for the increased speed benefits. Unfortunately, the code I inherited relies rather heavily on OnSerializing/OnSerialized/OnDeserializing/OnDeserialized, which ServiceStack appears not to call. Am I missing something? Assuming not, is there a good way to abuse things to fake out the intended functionality? OnSerialized/OnDeserialized can be roughly approximated through reflection, but I'm at a loss for OnSerializing and OnDeserializing.

No, there is no such facility; you'd have to fork ServiceStack or wrap the serialization/deserialization calls yourself and invoke the callbacks at the appropriate points.
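For illustration, here is a minimal sketch of that wrapping, using reflection to fire the standard serialization-callback attributes around ServiceStack's TypeSerializer. The HookedSerializer class is my own invention, not part of ServiceStack; note that OnDeserializing cannot be approximated this way, since ServiceStack constructs the object itself:

    using System;
    using System.Linq;
    using System.Reflection;
    using System.Runtime.Serialization;
    using ServiceStack.Text; // TypeSerializer lives here

    public static class HookedSerializer
    {
        // Finds and invokes any instance methods marked with the given
        // serialization-callback attribute ([OnSerializing] etc.).
        static void Invoke<TAttr>(object obj) where TAttr : Attribute
        {
            var methods = obj.GetType()
                .GetMethods(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic)
                .Where(m => m.IsDefined(typeof(TAttr), inherit: true));
            foreach (var m in methods)
                m.Invoke(obj, new object[] { default(StreamingContext) });
        }

        public static string Serialize<T>(T obj)
        {
            Invoke<OnSerializingAttribute>(obj);
            var s = TypeSerializer.SerializeToString(obj);
            Invoke<OnSerializedAttribute>(obj);
            return s;
        }

        public static T Deserialize<T>(string s)
        {
            // There is no hook before ServiceStack constructs the object,
            // which is why OnDeserializing cannot be faked; the best you
            // can do is fire OnDeserialized right after deserialization.
            var obj = TypeSerializer.DeserializeFromString<T>(s);
            Invoke<OnDeserializedAttribute>(obj);
            return obj;
        }
    }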

Related

Why use Perl OO simply as a data encapsulation technique?

I am trying to use a Perl API that has been written to use the Moose OO system but there is absolutely no inheritance, aggregation, or composition involved between the objects.
And, apart from a single optional role for debugging, there are no roles or mixins involved as well.
As far as I can see at the moment, using Moose just seems to add a massive amount of complication and compile-time overhead for very little benefit.
Why would you use Moose, or OO, as a simple method of encapsulation instead of using the far simpler technique of packaging the code into Perl modules?
Just to clarify, I am totally convinced of the many advantages of using Moose to do OO in Perl correctly and completely. I just don't understand why you would use OO at all as a simple encapsulation technique. I am not after a subjective argument in favour of, or against, Perl OO; I am hoping that I am missing some advantage of the OO paradigm here that I am simply not seeing atm.
This question has an excellent series of points about data encapsulation in Perl. N.B. I am not talking about enforced encapsulation, where the system stops you from looking where you shouldn't; it's more about only exposing, in a package, the methods that manipulate the data you want to play with.
Is there some advantage to using OO here that I am missing?
Edit 1: After a bit of detective work, I have just seen that the author of the Perl API is also heavily involved in the maintenance and support of the Moose framework.
Edit 2: I have just seen this question, which asks a similar thing from a slightly different angle. It seems like my answer is actually "why would you want to add the complication in the first place?", especially given the info in edit 1 above.
POSSIBLE ANSWER
The API in question seems to be only using the Moose OO system in order to prevent namespace pollution.
This could also be done more than adequately by using Perl packages, though, as @amon points out in the comments below, the standard package mechanism can become cumbersome quite quickly. BTW, a big thanks to everyone for all the comments that helped clarify what I was asking.
Every situation is different, and whether you choose to use Moose or another object framework (or none at all) really comes down to what you're planning to do and what your requirements are.
There might not be any immediate advantage to writing the system in question with Moose, but consider these:
You get free access to Moose's metaobject system, so you can interrogate the objects for useful information in a well-defined way
You can extend the provided classes using Moose's inheritance system; so even if they don't use inheritance themselves, the framework is already in place for you to do so if needed
You have peace-of-mind because you know the system was built on the most widely-deployed and well-tested object framework for Perl.
People know Moose, meaning there is a higher probability of getting answers to questions if something breaks.
IOW, there is inherent value in using popular tools.
Not sure it's relevant to the API in question, but no-one seems to have mentioned data types yet - that's a big benefit of Moose or Moo, having easily defined and understood (and re-usable) type validation and coercion for attributes.

F# for the VM in a MVVM WPF environment?

Obviously F# would rule the model part, but would the choice be as clear for the VM part?
Would any loss of tooling support be compensated by the gain in language flexibility in a large application?
Provided you don't rely on any major features of some MVVM framework, I'd say it mostly comes down to the general choice of F# over C#, if that's the comparison being made.
The current F# compiler does miss one feature that is useful in VM code, though: support for the CallerMemberName attribute, which is nice to use for INPC object properties. But this is probably a minor shortcoming compared to the benefits you get (e.g. all the cool stuff in F#).
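For reference, this is roughly what that attribute buys you on the C# side; a minimal sketch of the usual INPC base class (the ViewModelBase and PersonViewModel names are just for illustration):

    using System.ComponentModel;
    using System.Runtime.CompilerServices;

    public class ViewModelBase : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        // [CallerMemberName] makes the compiler fill in the calling
        // property's name, so setters need no magic strings.
        protected void OnPropertyChanged([CallerMemberName] string name = null)
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(name));
        }
    }

    public class PersonViewModel : ViewModelBase
    {
        private string _name;
        public string Name
        {
            get { return _name; }
            set { _name = value; OnPropertyChanged(); } // name inferred as "Name"
        }
    }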
Since the VM is often about transforming and processing the M's data for use in the V, I don't yet see why F# would be a bad fit.

Why use MEMCACHED_BEHAVIOR_NOREPLY?

I am writing a library that wraps libmemcached.
I noticed that there is an interesting behaviour setting for memcached which says that I can indicate that I do not care about the results of my memcached commands...
It is known as MEMCACHED_BEHAVIOR_NOREPLY. Why would somebody want to use this?
It would be great if someone could point out a few use cases. Multi-get / multi-set spring to mind, but I am not clear on how this would be useful.
ASCII protocol noreply was a mistake: the server sends nothing back at all, so errors are silently swallowed. Don't use it.
Binary protocol noreply can be used to make really awesome optimizations without any loss of correctness, because the "quiet" binary commands still send a response when something goes wrong.

Why use a post compiler?

I am battling to understand why a post compiler, like PostSharp, should ever be needed?
My understanding is that it just inserts code where attributed in the original code, so why doesn't the developer just do that code writing themselves?
I expect that someone will say it's easier to write, since you can use attributes on methods and not clutter them up with boilerplate code, but that can be done using DI or reflection and a touch of forethought, without a post compiler. I know that since I have said reflection, the performance elephant will now enter - but I do not care about relative performance here, when the absolute performance for most scenarios is trivial (sub-millisecond to millisecond).
Let's try to take an architectural point of view on the issue. Say you are an architect (everyone wants to be an architect ;)
You need to deliver the architecture to your team:
a selected set of libraries, architectural patterns, and design patterns. As a part of your design, you say: "we will implement caching using the following design pattern:"
    string key = string.Format("[{0}].MyMethod({1},{2})", this, param1, param2);
    T value;
    if (!cache.TryGetValue(key, out value))
    {
        using (cache.Lock(key))
        {
            if (!cache.TryGetValue(key, out value))
            {
                // Do the real job here and store the result in 'value'.
                cache.Add(key, value);
            }
        }
    }
This is a correct way to do caching. Developers are going to implement this pattern thousands of times, so you write a nice Word document telling them how you want the pattern to be implemented. Yeah, a Word document. Do you have a better solution? I'm afraid you don't. Classic code generators won't help. Functional programming (delegates)? It works fairly well for some aspects, but not here: you need to pass method parameters to the pattern. So what's left? Describe the pattern in natural language and trust that developers will implement it.
What will happen?
First, some junior developer will look at the code and say: "Hm. Two cache lookups. Kinda useless. One is enough." (That's not a joke - ask the DNN team about this issue.) And your pattern ceases to be thread-safe.
As an architect, how do you ensure that the pattern is properly applied? Unit testing? Fair enough, but you will hardly detect threading issues this way. Code review? Maybe that's the solution.
Now, what if you decide to change the pattern? For instance, you detect a bug in the cache component and decide to use your own. Are you going to edit thousands of methods? It's not just refactoring: what if the new component has different semantics?
What if you decide that a method is not going to be cached any more? How difficult will it be to remove caching code?
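For contrast, here is a hedged sketch of what the same pattern could look like written once as a PostSharp aspect. The Cache class below is a stand-in for whatever cache component you use (it is not a PostSharp type), and the lock/double-check detail is elided for brevity:

    using System;
    using System.Collections.Concurrent;
    using PostSharp.Aspects;

    // Stand-in for the team's cache component; not part of PostSharp.
    public static class Cache
    {
        static readonly ConcurrentDictionary<string, object> store =
            new ConcurrentDictionary<string, object>();

        public static bool TryGetValue(string key, out object value)
        {
            return store.TryGetValue(key, out value);
        }

        public static void Add(string key, object value)
        {
            store[key] = value;
        }
    }

    [Serializable]
    public class CacheAttribute : OnMethodBoundaryAspect
    {
        public override void OnEntry(MethodExecutionArgs args)
        {
            string key = string.Format("[{0}].{1}({2})",
                args.Instance, args.Method.Name,
                string.Join(",", args.Arguments.ToArray()));

            object value;
            if (Cache.TryGetValue(key, out value))
            {
                // Cache hit: skip the method body entirely.
                args.ReturnValue = value;
                args.FlowBehavior = FlowBehavior.Return;
            }
            else
            {
                args.MethodExecutionTag = key; // remember the key for OnSuccess
            }
        }

        public override void OnSuccess(MethodExecutionArgs args)
        {
            Cache.Add((string)args.MethodExecutionTag, args.ReturnValue);
        }
    }

Applying it is then a one-liner per method ([Cache] on the method), and changing the cache component means updating the aspect in one place.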
The AOP solution (whatever the framework is) has the following advantages over plain code:
It reduces the number of lines of code.
It reduces the coupling between components, so you don't have to change many things when you decide to change the caching component (just update the aspect); therefore it improves the capacity of your source code to cope with new requirements over time.
Because there is less code, the probability of bugs is lower for a given set of features, therefore AOP improves the quality of your code.
So if you put it all together:
Aspects reduce both development costs and maintenance costs of software.
I have a 90 min talk on this topic and you can watch it at http://vimeo.com/2116491.
Again, the architectural advantages of AOP are independent of the framework you choose. The differences between frameworks (also discussed in this video) influence principally the extent to which you can apply AOP to your code, which was not the point of this question.
Suppose you already have a class which is well-designed, well-tested etc. You want to easily add some timing on some of the methods. Yes, you could use dependency injection, create a decorator class which proxies to the original but with timing for each method - but even that class is going to be a mess of repetition...
... or you can add reflection to the mix and use a dynamic proxy of some description, which lets you write the timing code once, but requires you to get that reflection code just right - which isn't as easy as it might be, especially if generics are involved.
... or you can add an attribute to each method that you want timed, write the timing code once, and apply it as a post-compile step (a sketch follows below).
I know which seems more elegant to me - and more obvious when reading the code. It can be applied even in situations where DI isn't appropriate (and it really isn't appropriate for every single class in a system) and with no other changes elsewhere.
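As a hedged sketch, assuming PostSharp's OnMethodBoundaryAspect, that write-once timing code could be as small as this:

    using System;
    using System.Diagnostics;
    using PostSharp.Aspects;

    [Serializable]
    public class TimedAttribute : OnMethodBoundaryAspect
    {
        public override void OnEntry(MethodExecutionArgs args)
        {
            // Stash a stopwatch on the per-call tag so OnExit can read it.
            args.MethodExecutionTag = Stopwatch.StartNew();
        }

        public override void OnExit(MethodExecutionArgs args)
        {
            var sw = (Stopwatch)args.MethodExecutionTag;
            sw.Stop();
            Console.WriteLine("{0} took {1} ms",
                args.Method.Name, sw.ElapsedMilliseconds);
        }
    }

Marking a method [Timed] is then the only per-method change, which is what makes this approach read so well next to the code it measures.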
AOP (PostSharp) is for attaching code to all sorts of points in your application, from one location, so you don't have to place it there.
You cannot achieve everything PostSharp does using reflection alone.
I personally don't see a big use for it, in a production system, as most things can be done in other, better, ways (logging, etc).
You may like to review the other threads on this matter:
Anyone with Postsharp experience in production?
Other than logging, and transaction management what are some practical applications of AOP?
Aspect Oriented Programming: What do you use PostSharp for?
etc (search)
Aspects take away all the copy-and-paste code and make adding new features faster.
I hate nothing more than, for example, having to write the same piece of code over and over again. Gael has a very nice example regarding INotifyPropertyChanged on his website (www.postsharp.net).
This is exactly what AOP is for. Forget about the technical details, just implement what you are being asked for.
In the long run, I think we all should say goodbye to the way we are writing software now. It's tedious and plainly stupid to write boilerplate code and iterate manually.
The future belongs to declarative, functional style being held together by an object oriented framework - and the cross cutting concerns being handled by aspects.
I guess the only people who will not get it soon are the guys who are still paid for lines of code.

Code generation for Java JVM / .NET CLR

I am taking a compilers course at college, and we must generate code for our invented language, targeting any platform we want. I think the simplest cases are generating code for the Java JVM or the .NET CLR. Any suggestions on which one to choose, and which APIs out there can help me with this task? I already have all the semantic analysis done; I just need to generate code for a given program.
Thank you
From what I know, at a high level the two VMs are actually quite similar: both are classic stack-based machines with largely high-level operations (e.g. virtual method dispatch is an opcode). That said, the CLR lets you get down to the metal if you want, as it has raw data pointers with arithmetic, raw function pointers, unions, etc. It also has proper tailcalls. So, if the implementation of the language needs any of the above (e.g. the Scheme spec mandates tailcalls), or if it is significantly advantaged by having those features, then you would probably want to go the CLR way.
The other advantage there is that you get a stock API to emit bytecode - System.Reflection.Emit - and even though it is somewhat limited for full-fledged compiler scenarios, it is still generally enough for a simple compiler.
With the JVM, the two main advantages you get are better portability, and the fact that the bytecode itself is arguably simpler (because it has fewer features).
Another option I came across is a library called RunSharp, which can generate MSIL at runtime using Emit, but in a nicer, more user-friendly way that reads more like C#. The latest version of the library can be found here:
http://code.google.com/p/runsharp/
In .NET you can use the Reflection.Emit namespace to generate MSIL code.
See the MSDN link: http://msdn.microsoft.com/en-us/library/3y322t50.aspx
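As a minimal sketch of Reflection.Emit in action, here is a tiny (int, int) -> int method built at runtime via DynamicMethod and called through a delegate (the names are just for illustration):

    using System;
    using System.Reflection.Emit;

    class Program
    {
        static void Main()
        {
            var add = new DynamicMethod("Add", typeof(int),
                new[] { typeof(int), typeof(int) });

            var il = add.GetILGenerator();
            il.Emit(OpCodes.Ldarg_0); // push first argument
            il.Emit(OpCodes.Ldarg_1); // push second argument
            il.Emit(OpCodes.Add);     // stack machine: add the top two values
            il.Emit(OpCodes.Ret);     // return the result

            var fn = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
            Console.WriteLine(fn(2, 3)); // prints 5
        }
    }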