CMFWorkflow and Marker Interfaces - workflow

I'm currently prototyping a small project in Plone and trying to KISS as much as possible while the requirements are still in flux. To that end, I've resisted creating any custom content types for now and have been using marker interfaces to distinguish between "types" of content.
Now that I'm looking at workflow, I've realised that workflows are bound to content types, and there doesn't seem to be a mechanism for assigning them to markers. I could wrap portal_workflow with my own version that looks for markers and returns the appropriate workflow if found; however, this doesn't feel like a tenable approach.
Is there a way of assigning workflow to markers that I've missed, or should I just bite the bullet and create some lightweight custom content types instead?

There's not really a built-in feature to use markers, but at http://www.martinaspeli.net/articles/dcworkflows-hidden-gems, Martin Aspeli hints that it is possible:
Note that in Plone, the workflow chain of an object is looked up by
multi-adapting the object and the workflow to the IWorkflowChain
interface. The adapter factory should return a tuple of string
workflow names (IWorkflowChain is a specialisation of IReadSequence,
i.e. a tuple). The default obviously looks at the mappings in the
portal_workflow tool, but it is possible to override the mapping, e.g.
in response to some marker interface.
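To make the adapter idea concrete, here is a plain-Python analogue of that lookup. This is not real Plone code: in Plone you would register a multi-adapter for (your marker interface, the workflow tool) providing IWorkflowChain; all names below (IInternalDocument, the workflow ids) are hypothetical.

```python
# Plain-Python analogue of the IWorkflowChain lookup described above.
# In real Plone this is a registered multi-adapter; names are made up.

class IInternalDocument:
    """Stand-in for a zope.interface marker interface."""

class Document(IInternalDocument):
    """A content object 'providing' the marker."""

# marker -> tuple of workflow names, mirroring the adapter factory's return
MARKER_CHAINS = [
    (IInternalDocument, ("internal_review_workflow",)),
]
DEFAULT_CHAIN = ("simple_publication_workflow",)

def workflow_chain(obj):
    """Return a tuple of workflow names, like an IWorkflowChain adapter."""
    for marker, chain in MARKER_CHAINS:
        if isinstance(obj, marker):
            return chain
    return DEFAULT_CHAIN
```

The real adapter factory would do essentially the first branch (return a hard-coded chain for the marker) and fall back to the portal_workflow mapping otherwise.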


XStream: conditionally unmarshal to a class

For unfortunate legacy reasons, we have the same XML root for two different representations. With XStream, how can we make the unmarshaller use the class we need when unmarshalling?
I am thinking of passing some context (through ThreadContext) so that XStream could use that information to pick the right class during unmarshalling, though I am not sure where to start. Any suggestions are much appreciated.
Notes:
Root tags are same for both XML
No other information (attribute) on the root tag is available to distinguish the two representations
Cannot change the xml because of legacy reasons
Ideally I would like the solution to work with Spring-OXM but will take shortcuts if needed
You know in advance which of the two representations you are about to parse.
So you can create two XStream instances up front, configure the converters and aliases differently for each, and use one instance per representation.
This approach seems cleaner and more controllable to me than setting a global context variable and then having a bunch of ifs inside the converters, with all the ambiguities that brings.
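As an illustration of the two-instance idea (sketched here in Python with the standard library, since XStream itself is Java; the class and element names are made up): one pre-configured deserializer per representation of the same root tag, and the caller picks the right one.

```python
# Two deserializers for the same <order> root tag, configured differently,
# analogous to two differently configured XStream instances.
import xml.etree.ElementTree as ET

class OrderV1:
    def __init__(self, root):
        self.order_id = root.findtext("id")    # v1 carries an <id> child

class OrderV2:
    def __init__(self, root):
        self.order_id = root.findtext("code")  # v2 carries a <code> child

def make_unmarshaller(cls):
    # analogous to setting aliases/converters on one XStream instance
    def unmarshal(xml_text):
        return cls(ET.fromstring(xml_text))
    return unmarshal

unmarshal_v1 = make_unmarshaller(OrderV1)
unmarshal_v2 = make_unmarshaller(OrderV2)
```

The caller, who knows which representation is at hand, simply calls `unmarshal_v1(...)` or `unmarshal_v2(...)`; no shared mutable context is needed.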

How to do a new extension with Schema.org + Microdata?

I have seen in a post that the slash-based mechanism is no longer current for creating new extensions in Schema.org.
I am using Microdata and would prefer to stick to it across my site.
What is the new way to create a new extension?
For example I want to create a new extension for MedicalTourism under the category Travel Agency. Before it would have been
http://schema.org/TravelAgency/MedicalTourism
What is the new way?
And what would the code look like?
You may still use Schema.org’s "slash-based" extension mechanism. It’s "outdated", but not invalid.
But it’s not (and never was) a good idea to use this mechanism if you want other consumers to understand or make special use of your extensions.
In some cases you could use Schema.org’s Role type, which allows you to give some additional data about a property, but not about types.
Alternatives
Propose new types/properties: If they are useful and the sponsors agree, they might get added to the Schema.org vocabulary at some point.
Use an existing vocabulary that defines types/properties for your use case (or create a new vocabulary if you don’t find one):
Either instead of Schema.org,
or in addition to Schema.org (while this works nicely with RDFa, Microdata is pretty limited: you’d have to use Schema.org’s additionalType property for additional types and full URIs for additional properties).
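For example, the additionalType approach in Microdata might look like this (the example.com vocabulary URI and its terms are hypothetical; you would have to define and host that vocabulary yourself):

```html
<div itemscope itemtype="http://schema.org/TravelAgency">
  <!-- extra type from your own vocabulary, via Schema.org's additionalType -->
  <link itemprop="additionalType" href="http://example.com/vocab#MedicalTourism">
  <span itemprop="name">Example Health Travel</span>
  <!-- an additional property must be given as a full URI in Microdata -->
  <span itemprop="http://example.com/vocab#treatmentArea">dental care</span>
</div>
```

Generic consumers will still see a plain schema.org TravelAgency; only consumers that know your vocabulary can make use of the extra type and property.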

Has anyone implement DITA 1.2 Subject scheme in their work?

I would like to know if anyone has implemented the subject scheme maps of DITA 1.2 in their work. If yes, can you please break down an example to show:
how to do it?
when not to use it?
I am aware of the theory behind it, but I have yet to implement it, and I want to know if there are things I must keep in mind during the planning and implementation phases.
An example is here:
How to use DITA subjectSchemes?
The DITA 1.2 spec also has a good example (3.1.5.1.1).
What you can currently do with subject scheme maps is:
define a taxonomy
bind the taxonomy to a profiling or flagging attribute, so that the attribute only takes values that you have defined
filter or flag elements that have a defined value with a DITAVAL file.
Advantage 1: Since you have a taxonomy, filtering a parent value also filters its children, which is convenient.
Advantage 2: You can fully define and thus control the list of values, which prevents tag bloat.
Advantage 3: You can reuse the subject scheme map in many topic maps, in the usual modular DITA way, so you can apply the same taxonomies anywhere.
These appear to be the main uses for a subject scheme map at present.
The only disadvantage I have found is that other hypothetical uses for subject scheme maps, such as faceted browsing, don't seem to have any existing implementation; the DITA-OT doesn't offer anything like that yet, anyway.
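A minimal subject scheme map illustrating the points above might look roughly like this (the key names are invented; the spec's example in 3.1.5.1.1 is fuller):

```xml
<subjectScheme>
  <!-- 1. define a taxonomy: a parent subject with child subjects -->
  <subjectdef keys="users">
    <subjectdef keys="novice"/>
    <subjectdef keys="expert"/>
  </subjectdef>
  <!-- 2. bind the taxonomy to the @audience profiling attribute,
          so that @audience may only take the values defined above -->
  <enumerationdef>
    <attributedef name="audience"/>
    <subjectdef keyref="users"/>
  </enumerationdef>
</subjectScheme>
```

A DITAVAL file can then filter or flag on audience="novice", and excluding the parent value "users" also excludes its children.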

UIMA: Plug & Play Annotators for different teams' chains

Assume I have a UIMA toolchain that does something like this:
tokenize -> POS tagging -> assign my custom tags/annotations -> use the custom tags to assign more tags -> further processing.
Would it be possible to use a third-party annotator, let's say entity recognition (which uses POS tags but doesn't need much more), right after the POS tagging, in between the two custom steps, or afterwards?
I'm asking because I can see complications due to the type systems. In particular, the most difficult case may be plugging a third-party ER annotator in between the custom steps or right after them: the third-party annotator won't expect our custom tags to be there.
However, these are just additional annotations that have to be "passed through" the annotator without being inspected or modified. So, in principle, I'd assume this is possible; I just don't know whether UIMA supports it, or whether it is all about writing full chains on your own with strict typing everywhere.
If this isn't possible out of the box, could we write our custom annotators so that they can be plugged in anywhere POS tags are available, regardless of which other annotations are present? That is, as annotator authors, we would distinguish between annotations we require, annotations we add, and any number of annotations we don't care about and simply pass through.
The third party annotator won't expect our custom tags to be there.
If I understand correctly, you are concerned that your custom annotations might collide with the third-party NER, right? It won't, unless your code adds exactly the same annotations.
This is the strength of UIMA: every Analysis Engine (AE) is independent of the others; it only cares about the annotations that are present in the CAS. For example, say you have an AE that expects annotations of type my.namespace.Token. It doesn't matter which AE created these annotations, as long as they are present in the CAS.
The price to pay for this flexibility is that you (as a developer) have to make sure that the required annotations for each AE are present. For example, if an AE expects annotations of type my.namespace.Sentence but none are present, that AE won't be able to do any processing.
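The pass-through behaviour can be sketched like this (plain Python standing in for a CAS; real UIMA annotators are Java and use typed feature structures, and all the type names here are invented):

```python
# Sketch of CAS pass-through: each annotator reads only the types it
# declares and adds its own, leaving everything else untouched.

def pos_tagger(cas):
    # reads Token, adds PosTag; touches nothing else (toy tagging)
    cas["PosTag"] = [(tok, "NN") for tok in cas["Token"]]

def third_party_ner(cas):
    # reads Token/PosTag only; any custom types are simply ignored
    cas["Entity"] = [tok for tok, tag in cas["PosTag"] if tag == "NN"]

cas = {"Token": ["UIMA", "rocks"], "CustomTag": ["ours"]}  # custom type present
pos_tagger(cas)
third_party_ner(cas)
# cas["CustomTag"] survives untouched alongside the new Entity annotations
```

The third-party annotator never has to know that "CustomTag" exists, which is exactly the decoupling the answer describes.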

Is it possible to use C# Object Initializers with Factories [duplicate]

This question already has answers here:
Is it possible to use a c# object initializer with a factory method?
(7 answers)
Closed 9 years ago.
I'm looking at the new object initializers in C# 3.0 and would like to use them. However, I can't see how to use them with something like Microsoft Unity. I'm probably missing something, but if I want to keep strongly typed property names then I'm not sure I can. E.g. I can do this (pseudo-code):
Dictionary<string,object> parms = new Dictionary<string,object>();
parms.Add("Id", "100");
IThing thing = Factory.Create<IThing>(parms);
and then do something in Create via reflection to initialise the parms... but if I want it strongly typed at the Create level, like the new object initializers, then I don't see how I can.
Is there a better way?
Thanks
AFAIK, it's not possible. I would stay away from reflection in this case if I were you, as reflection is really slow and could easily become a bottleneck in your application. I think using reflection for this is actually abusing reflection.
Don't shoot yourself in the foot just to emulate some syntactic sugar. Just set the fields/properties of the instance one by one, the classic way. It will be much faster than reflection.
Remember: no design pattern is a silver bullet. Factory methods and object initializers are nice, but use common sense and only use them when they actually make sense.
I'm not that familiar with Unity, but the idea behind IoC/DI is that you do not construct objects yourself, so of course you can't use object initializer syntax.
I guess what you could use from C# 3.0 in your example is an anonymous type instead of a Dictionary; no idea whether Unity can consume that.
Anyway, if you are making calls like Factory.Create(), you are probably using IoC in the wrong way.
Consider investing the time in learning Unity or Castle or any of the other IoC frameworks available for .NET. The IoC container will let you move the complexity of object initialization from your code into a configuration file. In your application you will use interfaces to access the objects initialized by the IoC container. The other service an IoC container gives you is control over the lifetime of the objects (singletons, etc.).
Graham, the idea behind IoC/DI is that your components declare their dependencies as either constructor parameters or public properties of specific types (usually interfaces).
The container then builds a dependency graph and satisfies those dependencies (with regard to component lifestyle, in the case of mature IoC containers like Castle Windsor).
So basically you write your components, state dependencies and then register contracts and implementations in IoC container and don't care about constructing objects anymore. You just make a single call similar to Application.Start() :)
The bad thing is that some frameworks are hard to integrate with, like ASP.NET WebForms, where you don't have control over IHttpHandler creation and cannot set up factories that would use IoC to instantiate those. You can check Ayende's "Rhino Igloo", which does its best to use IoC in an environment like this.
Maybe this can help... Have you checked out this blog post?
http://www.cookcomputing.com/blog/archives/000578.html
The author presents a method for building an object from a factory and registering the object in the IoC container.
This is answered better in this question: Is it possible to use a c# object initializer with a factory method?.
There are four possibilities:
Accept a lambda as an argument
Return a builder instead
Use a default constructor, passing in an initialized object
Use an anonymous object
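The first option, accepting a lambda as an argument, can be sketched as follows (in Python rather than C#, and with hypothetical Thing/Factory names): the factory constructs the object, then hands it to the caller's initializer, so the caller keeps strongly named property access instead of a string-keyed dictionary.

```python
# Option 1 sketched: the factory accepts an initializer callable.

class Thing:
    def __init__(self):
        self.id = None

class Factory:
    @staticmethod
    def create(cls, init):
        obj = cls()   # the factory/container constructs the object...
        init(obj)     # ...then the caller's initializer configures it
        return obj

thing = Factory.create(Thing, lambda t: setattr(t, "id", "100"))
```

In C# the initializer argument would be an `Action<T>`, and a typo in the property name becomes a compile-time error rather than a missing dictionary key at runtime.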