I am using XStream for serializing and de-serializing an object. For example, a class named Rating is defined as follows:
public class Rating {
String id;
int score;
int confidence;
// constructors here...
}
However, in this class, the variable confidence is optional.
So, when the confidence value is known (not 0), an XML representation of a Rating object should look like:
<rating>
<id>0123</id>
<score>5</score>
<confidence>10</confidence>
</rating>
However, when the confidence is unknown (the default value will be 0), the confidence element should be omitted from the XML representation:
<rating>
<id>0123</id>
<score>5</score>
</rating>
Could anyone tell me how to conditionally serialize a field using XStream?
One option is to write a converter.
Here's one that I quickly wrote for you:
import com.thoughtworks.xstream.converters.Converter;
import com.thoughtworks.xstream.converters.MarshallingContext;
import com.thoughtworks.xstream.converters.UnmarshallingContext;
import com.thoughtworks.xstream.io.HierarchicalStreamReader;
import com.thoughtworks.xstream.io.HierarchicalStreamWriter;
public class RatingConverter implements Converter
{
@Override
public boolean canConvert(Class clazz) {
return clazz.equals(Rating.class);
}
@Override
public void marshal(Object value, HierarchicalStreamWriter writer,
MarshallingContext context)
{
Rating rating = (Rating) value;
// Write id
writer.startNode("id");
writer.setValue(rating.getId());
writer.endNode();
// Write score
writer.startNode("score");
writer.setValue(Integer.toString(rating.getScore()));
writer.endNode();
// Write confidence
if(rating.getConfidence() != 0)
{
writer.startNode("confidence");
writer.setValue(Integer.toString(rating.getConfidence()));
writer.endNode();
}
}
@Override
public Object unmarshal(HierarchicalStreamReader reader,
UnmarshallingContext context)
{
// unmarshalling is not implemented in this quick example
return null;
}
}
All that's left for you to do is to register the converter, and provide accessor methods (i.e. getId, getScore, getConfidence) in your Rating class.
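For reference, the registration would look roughly like this (a minimal sketch; the "rating" alias and the three-argument Rating constructor are assumptions based on the XML shown above):
import com.thoughtworks.xstream.XStream;

XStream xstream = new XStream();
// map the <rating> element name to the Rating class, as in the XML above
xstream.alias("rating", Rating.class);
// use the custom converter for Rating objects
xstream.registerConverter(new RatingConverter());
// serialize; the confidence element is written only when it is non-zero
String xml = xstream.toXML(new Rating("0123", 5, 10));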
Note: your other option would be to omit the field appropriately.
I am trying to write a proper assertion (with implicit null checks) for a property of a list element.
The first assertion is working as expected, except that it will generate no proper error message if actual is null.
The second is supposed to provide proper null check for actual, but it's not compiling.
Is there a way to tweak the second assertion to make it work?
import java.util.List;
import org.junit.jupiter.api.Test;
import static org.assertj.core.api.Assertions.assertThat;
class ExampleTest {
private static class Sub {
private String value;
public String getValue() {
return value;
}
}
private static class Example {
private List<Sub> subs;
public List<Sub> getSubs() {
return subs;
}
}
@Test
void test() {
Example actual = null;
assertThat(actual.getSubs())//not null safe
.extracting(Sub::getValue)
.contains("something");
// assertThat(actual)
// .extracting(Example::getSubs)
// .extracting(Sub::getValue)//not compiling
// .contains("something");
}
}
For type-specific assertions, extracting(Function, InstanceOfAssertFactory) should be used:
assertThat(actual)
.extracting(Example::getSubs, as(list(Sub.class)))
.extracting(Sub::getValue) // compiles
.contains("something");
Assertions.as(InstanceOfAssertFactory) is optional syntactic sugar to improve readability.
InstanceOfAssertFactories.list(Class) provides the list-specific assertions after the extracting call.
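For completeness, the static imports assumed by the snippet above are:
import static org.assertj.core.api.Assertions.as;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.InstanceOfAssertFactories.list;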
How can I use the values from the hashset (the docid and offset) in the reduce writable, so as to connect the map writable with the reduce writable?
The mapper (LineIndexMapper) works fine, but in the reducer (LineIndexReducer) I get an error that it can't take a string as an argument when I type this:
context.write(key, new IndexRecordWritable("some string"));
although I have the public String toString() in the reduce writable too.
I believe the hashset in the reducer's writable (IndexRecordWritable.java) maybe isn't taking the values correctly?
I have the below code.
IndexMapRecordWritable.java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
public class IndexMapRecordWritable implements Writable {
private LongWritable offset;
private Text docid;
public LongWritable getOffsetWritable() {
return offset;
}
public Text getDocidWritable() {
return docid;
}
public long getOffset() {
return offset.get();
}
public String getDocid() {
return docid.toString();
}
public IndexMapRecordWritable() {
this.offset = new LongWritable();
this.docid = new Text();
}
public IndexMapRecordWritable(long offset, String docid) {
this.offset = new LongWritable(offset);
this.docid = new Text(docid);
}
public IndexMapRecordWritable(IndexMapRecordWritable indexMapRecordWritable) {
this.offset = indexMapRecordWritable.getOffsetWritable();
this.docid = indexMapRecordWritable.getDocidWritable();
}
@Override
public String toString() {
StringBuilder output = new StringBuilder();
output.append(docid);
output.append(offset);
return output.toString();
}
@Override
public void write(DataOutput out) throws IOException {
}
@Override
public void readFields(DataInput in) throws IOException {
}
}
IndexRecordWritable.java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.HashSet;
import org.apache.hadoop.io.Writable;
public class IndexRecordWritable implements Writable {
// Save each index record from maps
private HashSet<IndexMapRecordWritable> tokens = new HashSet<IndexMapRecordWritable>();
public IndexRecordWritable() {
}
public IndexRecordWritable(
Iterable<IndexMapRecordWritable> indexMapRecordWritables) {
}
@Override
public String toString() {
StringBuilder output = new StringBuilder();
return output.toString();
}
@Override
public void write(DataOutput out) throws IOException {
}
@Override
public void readFields(DataInput in) throws IOException {
}
}
Alright, here is my answer, based on a few assumptions: judging from the pre-condition and post-condition comments in your reducer class, the final output is a text file containing the key and the file names separated by commas.
In this case, you really don't need IndexRecordWritable class. You can simply write to your context using
context.write(key, new Text(valueBuilder.substring(0, valueBuilder.length() - 1)));
with the class declaration line as
public class LineIndexReducer extends Reducer<Text, IndexMapRecordWritable, Text, Text>
Don't forget to set the correct output class in the driver.
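For example, with the standard Job configuration API that would look something like this (a sketch; the job variable is illustrative):
// map output is (Text, IndexMapRecordWritable), final output is (Text, Text)
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IndexMapRecordWritable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);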
That should serve the purpose according to the post-condition in your reducer class. But if you really want to write a Text-IndexRecordWritable pair to your context, there are two ways to approach it:
with a String as an argument (based on your attempt to pass a string even though your IndexRecordWritable constructor is not designed to accept strings), and
with a HashSet as an argument (based on the HashSet initialised in the IndexRecordWritable class).
Since the constructor of your IndexRecordWritable class is not designed to accept a String as input, you cannot pass a string; hence the error you are getting. PS: if you want your constructor to accept Strings, you must add another constructor to your IndexRecordWritable class, as below:
// Save each index record from maps
private HashSet<IndexMapRecordWritable> tokens = new HashSet<IndexMapRecordWritable>();
// to save the string
private String value;
public IndexRecordWritable() {
}
public IndexRecordWritable(
HashSet<IndexMapRecordWritable> indexMapRecordWritables) {
/***/
}
// to accept a string
public IndexRecordWritable (String value) {
this.value = value;
}
but that won't be valid if you want to use the HashSet. So, approach #1 can't be used. You can't pass a string.
That leaves us with approach #2: passing a HashSet as an argument, since you want to make use of the HashSet. In this case, you must create a HashSet in your reducer before passing it as an argument to IndexRecordWritable in context.write.
To do this, your reducer must look like this.
@Override
protected void reduce(Text key, Iterable<IndexMapRecordWritable> values, Context context) throws IOException, InterruptedException {
//StringBuilder valueBuilder = new StringBuilder();
HashSet<IndexMapRecordWritable> set = new HashSet<>();
for (IndexMapRecordWritable val : values) {
set.add(val);
//valueBuilder.append(val);
//valueBuilder.append(",");
}
//write the key and the adjusted value (removing the last comma)
//context.write(key, new IndexRecordWritable(valueBuilder.substring(0, valueBuilder.length() - 1)));
context.write(key, new IndexRecordWritable(set));
//valueBuilder.setLength(0);
}
and your IndexRecordWritable.java must have this.
// Save each index record from maps
private HashSet<IndexMapRecordWritable> tokens = new HashSet<IndexMapRecordWritable>();
// to save the string
//private String value;
public IndexRecordWritable() {
}
public IndexRecordWritable(
HashSet<IndexMapRecordWritable> indexMapRecordWritables) {
/***/
tokens.addAll(indexMapRecordWritables);
}
Remember, this is not what is required according to the description of your reducer, which says:
POST-CONDITION: emit the output a single key-value where all the file names are separated by a comma ",". <"marcello", "a.txt#3345,b.txt#344,c.txt#785">
If you still choose to emit (Text, IndexRecordWritable), remember to process the HashSet in IndexRecordWritable to get it in the desired format.
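For instance, a toString() along these lines would produce the comma-separated format (a sketch only; it assumes the job uses TextOutputFormat, which writes values via toString()):
@Override
public String toString() {
    StringBuilder output = new StringBuilder();
    for (IndexMapRecordWritable token : tokens) {
        if (output.length() > 0) {
            output.append(",");   // separate entries with a comma
        }
        // desired format: docid#offset, e.g. a.txt#3345
        output.append(token.getDocid()).append("#").append(token.getOffset());
    }
    return output.toString();
}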
I want to import the following file with Spring Batch
key;value
A;9,5
I model it with the bean
class CsvModel
{
String key
Double value
}
The shown code here is Groovy but the language is irrelevant for the problem.
@Bean
@StepScope
FlatFileItemReader<CsvModel> reader2()
{
// set the locale for the tokenizer, but this doesn't solve the problem
def locale = Locale.getDefault()
def fieldSetFactory = new DefaultFieldSetFactory()
fieldSetFactory.setNumberFormat(NumberFormat.getInstance(locale))
def tokenizer = new DelimitedLineTokenizer(';')
tokenizer.setNames([ 'key', 'value' ].toArray() as String[])
// and assign the fieldSetFactory to the tokenizer
tokenizer.setFieldSetFactory(fieldSetFactory)
def fieldMapper = new BeanWrapperFieldSetMapper<CsvModel>()
fieldMapper.setTargetType(CsvModel.class)
def lineMapper = new DefaultLineMapper<CsvModel>()
lineMapper.setLineTokenizer(tokenizer)
lineMapper.setFieldSetMapper(fieldMapper)
def reader = new FlatFileItemReader<CsvModel>()
reader.setResource(new FileSystemResource('output/export.csv'))
reader.setLinesToSkip(1)
reader.setLineMapper(lineMapper)
return reader
}
Setting up a reader is well known; what was new to me was the first code block: setting up a NumberFormat / locale / fieldSetFactory and assigning it to the tokenizer. However, this doesn't work; I still receive the exception:
Field error in object 'target' on field 'value': rejected value [5,0]; codes [typeMismatch.target.value,typeMismatch.value,typeMismatch.float,typeMismatch]; arguments [org.springframework.context.support.DefaultMessageSourceResolvable: codes [target.value,value]; arguments []; default message [value]]; default message [Failed to convert property value of type 'java.lang.String' to required type 'float' for property 'value'; nested exception is java.lang.NumberFormatException: For input string: "9,5"]
at org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper.mapFieldSet(BeanWrapperFieldSetMapper.java:200) ~[spring-batch-infrastructure-4.1.2.RELEASE.jar:4.1.2.RELEASE]
at org.springframework.batch.item.file.mapping.DefaultLineMapper.mapLine(DefaultLineMapper.java:43) ~[spring-batch-infrastructure-4.1.2.RELEASE.jar:4.1.2.RELEASE]
at org.springframework.batch.item.file.FlatFileItemReader.doRead(FlatFileItemReader.java:180) ~[spring-batch-infrastructure-4.1.2.RELEASE.jar:4.1.2.RELEASE]
So the question is: how do I import floats in the locale de_AT (we write our decimals with a comma, like this: 3,141592)? I could avoid this problem with a FieldSetMapper, but I want to understand what's going on here and avoid the unnecessary mapper class.
And even the FieldSetMapper solution doesn't obey locales out of the box; I have to read a string and convert it into a double myself:
class PnwExportFieldSetMapper implements FieldSetMapper<CsvModel>
{
private nf = NumberFormat.getInstance(Locale.getDefault())
@Override
CsvModel mapFieldSet(FieldSet fieldSet) throws BindException
{
def model = new CsvModel()
model.key = fieldSet.readString(0)
model.value = nf.parse(fieldSet.readString(1)).doubleValue()
return model
}
}
The class DefaultFieldSet has a function setNumberFormat, but when and where do I call this function?
This unfortunately seems to be a bug. I have the same problem and debugged into the code.
The BeanWrapperFieldSetMapper does not use the methods of DefaultFieldSetFactory, which would do the right conversion, but instead just uses FieldSet.getProperties and does the conversion by itself.
So, I see the following options: Provide the BeanWrapperFieldSetMapper either with PropertyEditors or a ConversionService, or use a different mapper.
Here is a sketch of a ConversionService:
private static class CS implements ConversionService {
@Override
public boolean canConvert(Class<?> sourceType, Class<?> targetType) {
return sourceType == String.class && targetType == double.class;
}
@Override
public boolean canConvert(TypeDescriptor sourceType, TypeDescriptor targetType) {
return sourceType.equals(TypeDescriptor.valueOf(String.class)) &&
targetType.equals(TypeDescriptor.valueOf(double.class)) ;
}
@Override
public <T> T convert(Object source, Class<T> targetType) {
return (T)Double.valueOf(source.toString().replace(',', '.'));
}
@Override
public Object convert(Object source, TypeDescriptor sourceType, TypeDescriptor targetType) {
return Double.valueOf(source.toString().replace(',', '.'));
}
}
and use it:
final BeanWrapperFieldSetMapper<IBISRecord> mapper = new BeanWrapperFieldSetMapper<>();
mapper.setTargetType(YourClass.class);
mapper.setConversionService(new CS());
...
new FlatFileItemReaderBuilder<IBISRecord>()
.name("YourReader")
.delimited()
.delimiter(";")
.includedFields(fields)
.names(names)
.fieldSetMapper(mapper)
.saveState(false)
.resource(resource)
.build();
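Alternatively, here is a sketch of the PropertyEditors option mentioned above, using Spring's CustomNumberEditor (assuming BeanWrapperFieldSetMapper#setCustomEditors; the locale is hard-coded for illustration):
// parse "9,5" as 9.5 using the de_AT number format; 'true' allows empty fields
Map<Object, PropertyEditor> customEditors = new HashMap<>();
customEditors.put(Double.class,
        new CustomNumberEditor(Double.class, NumberFormat.getInstance(new Locale("de", "AT")), true));
final BeanWrapperFieldSetMapper<CsvModel> mapper = new BeanWrapperFieldSetMapper<>();
mapper.setTargetType(CsvModel.class);
mapper.setCustomEditors(customEditors);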
Is there a simple way to make Hibernate Search index all its values in lower case, instead of the default mixed case?
I'm using the annotation @Field, but I can't seem to be able to configure some application-level setting.
Fool that I am! The StandardAnalyzer class is already indexing in lowercase. It's just a matter of putting the search terms in lowercase too; I was assuming the query would do that.
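For example, something as simple as this works (the field name and input variable are illustrative):
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

// lowercase the user's input so it matches the lowercased tokens StandardAnalyzer wrote to the index
Query query = new TermQuery(new Term("title", searchTerm.toLowerCase()));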
However, if a different analyzer were to be used, application-wide, then it can be set using the property hibernate.search.analyzer.
Lowercasing, term splitting, removing common terms and many more advanced language processing functions are applied by the Analyzer.
Usually you should process user input meant to match indexed strings with the same Analyzer used at indexing; configuring hibernate.search.analyzer sets the default (global) Analyzer, but you can customize it per index, per entity type, per field and even on different entity instances.
It is for example useful to have language-specific analysis, so as to process Chinese descriptions with Chinese-specific routines and Italian descriptions with Italian tokenizers.
The default analyzer is ok for most use cases, and does lowercasing and splits terms on whitespace.
Consider as well that when using the Lucene QueryParser, the API requires you to provide the appropriate Analyzer.
When using the Hibernate Search QueryBuilder it attempts to apply the correct Analyzer on each field; see also http://docs.jboss.org/hibernate/search/4.1/reference/en-US/html_single/#search-query-querydsl .
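A rough sketch of that query DSL (the entity and field names are just examples):
QueryBuilder qb = fullTextSession.getSearchFactory()
        .buildQueryBuilder().forEntity(Book.class).get();
// the keyword query analyzes the input with the same Analyzer configured for the "title" field
org.apache.lucene.search.Query luceneQuery =
        qb.keyword().onField("title").matching(userInput).createQuery();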
There are multiple ways to make sorting case-insensitive on a string field.
1. The first way is to add the @Fields annotation to the field/property on the entity, like this:
@Fields({@Field(index=Index.YES,analyze=Analyze.YES,store=Store.YES),
@Field(index=Index.YES,name = "nameSort",analyzer = @Analyzer(impl=KeywordAnalyzer.class), store = Store.YES)})
private String name;
Suppose you have a name property with a custom analyzer and you want to sort on it; that is not possible directly, so you add a new field (nameSort) to the index and apply the sort to that field.
You must apply the KeywordAnalyzer class because it does not tokenize the field, and by default a lowercase filter factory is applied to the field.
2. The second way is to implement your own comparator class for the sorting, like this:
@Override
public FieldComparator newComparator(String field, int numHits, int sortPos, boolean reversed) throws IOException {
return new StringValComparator(numHits, field);
}
Make a class that extends FieldComparatorSource and implement the method above.
Then create a new class named StringValComparator that extends FieldComparator and implements the following methods:
class StringValComparator extends FieldComparator {
private String[] values;
private String[] currentReaderValues;
private final String field;
private String bottom;
StringValComparator(int numHits, String field) {
values = new String[numHits];
this.field = field;
}
@Override
public int compare(int slot1, int slot2) {
final String val1 = values[slot1];
final String val2 = values[slot2];
if (val1 == null) {
if (val2 == null) {
return 0;
}
return -1;
} else if (val2 == null) {
return 1;
}
return val1.toLowerCase().compareTo(val2.toLowerCase());
}
@Override
public int compareBottom(int doc) {
final String val2 = currentReaderValues[doc];
if (bottom == null) {
if (val2 == null) {
return 0;
}
return -1;
} else if (val2 == null) {
return 1;
}
return bottom.toLowerCase().compareTo(val2.toLowerCase());
}
@Override
public void copy(int slot, int doc) {
values[slot] = currentReaderValues[doc];
}
@Override
public void setNextReader(IndexReader reader, int docBase) throws IOException {
currentReaderValues = FieldCache.DEFAULT.getStrings(reader, field);
}
@Override
public void setBottom(final int bottom) {
this.bottom = values[bottom];
}
@Override
public String value(int slot) {
return values[slot];
}
}
Apply the sorting on the field like this:
new SortField("name",new StringCaseInsensitiveComparator(), true);
I'm trying to create a page which is very similar to the Google Form creation page.
This is how I am attempting to model it using the GWT MVP framework (Places and Activities), and Editors.
CreateFormActivity (Activity and presenter)
CreateFormView (interface for view, with nested Presenter interface)
CreateFormViewImpl (implements CreateFormView and Editor<FormProxy>)
CreateFormViewImpl has the following sub-editors:
TextBox title
TextBox description
QuestionListEditor questionList
QuestionListEditor implements IsEditor<ListEditor<QuestionProxy, QuestionEditor>>
QuestionEditor implements Editor<QuestionProxy>
QuestionEditor has the following sub-editors:
TextBox questionTitle
TextBox helpText
ValueListBox questionType
An optional subeditor for each question type below.
An editor for each question type:
TextQuestionEditor
ParagraphTextQuestionEditor
MultipleChoiceQuestionEditor
CheckboxesQuestionEditor
ListQuestionEditor
ScaleQuestionEditor
GridQuestionEditor
Specific Questions:
What is the correct way to add / remove questions from the form. (see follow up question)
How should I go about creating the Editor for each question type? I attempted to listen to the questionType value changes, I'm not sure what to do after. (answered by BobV)
Should each question-type-specific editor be wrapper with an optionalFieldEditor? Since only one of can be used at a time. (answered by BobV)
How to best manage creating/removing objects deep in the object hierarchy. Ex) Specifying answers for a question number 3 which is of type multiple choice question. (see follow up question)
Can OptionalFieldEditor editor be used to wrap a ListEditor? (answered by BobV)
Implementation based on Answer
The Question Editor
public class QuestionDataEditor extends Composite implements
CompositeEditor<QuestionDataProxy, QuestionDataProxy, Editor<QuestionDataProxy>>,
LeafValueEditor<QuestionDataProxy>, HasRequestContext<QuestionDataProxy> {
interface Binder extends UiBinder<Widget, QuestionDataEditor> {}
private CompositeEditor.EditorChain<QuestionDataProxy, Editor<QuestionDataProxy>> chain;
private QuestionBaseDataEditor subEditor = null;
private QuestionDataProxy currentValue = null;
@UiField
SimplePanel container;
@UiField(provided = true)
@Path("dataType")
ValueListBox<QuestionType> dataType = new ValueListBox<QuestionType>(new Renderer<QuestionType>() {
@Override
public String render(final QuestionType object) {
return object == null ? "" : object.toString();
}
@Override
public void render(final QuestionType object, final Appendable appendable) throws IOException {
if (object != null) {
appendable.append(object.toString());
}
}
});
private RequestContext ctx;
public QuestionDataEditor() {
initWidget(GWT.<Binder> create(Binder.class).createAndBindUi(this));
dataType.setValue(QuestionType.BooleanQuestionType, true);
dataType.setAcceptableValues(Arrays.asList(QuestionType.values()));
/*
* The type drop-down UI element is an implementation detail of the
* CompositeEditor. When a question type is selected, the editor will
* call EditorChain.attach() with an instance of a QuestionData subtype
* and the type-specific sub-Editor.
*/
dataType.addValueChangeHandler(new ValueChangeHandler<QuestionType>() {
@Override
public void onValueChange(final ValueChangeEvent<QuestionType> event) {
QuestionDataProxy value;
switch (event.getValue()) {
case MultiChoiceQuestionData:
value = ctx.create(QuestionMultiChoiceDataProxy.class);
setValue(value);
break;
case BooleanQuestionData:
default:
final QuestionNumberDataProxy value2 = ctx.create(BooleanQuestionDataProxy.class);
value2.setPrompt("this value doesn't show up");
setValue(value2);
break;
}
}
});
}
/*
* The only thing that calls createEditorForTraversal() is the PathCollector
* which is used by RequestFactoryEditorDriver.getPaths().
*
* My recommendation is to always return a trivial instance of your question
* type editor and know that you may have to amend the value returned by
* getPaths()
*/
@Override
public Editor<QuestionDataProxy> createEditorForTraversal() {
return new QuestionNumberDataEditor();
}
@Override
public void flush() {
//XXX this doesn't work, no data is returned
currentValue = chain.getValue(subEditor);
}
/**
* Returns an empty string because there is only ever one sub-editor used.
*/
@Override
public String getPathElement(final Editor<QuestionDataProxy> subEditor) {
return "";
}
@Override
public QuestionDataProxy getValue() {
return currentValue;
}
@Override
public void onPropertyChange(final String... paths) {
}
@Override
public void setDelegate(final EditorDelegate<QuestionDataProxy> delegate) {
}
@Override
public void setEditorChain(final EditorChain<QuestionDataProxy, Editor<QuestionDataProxy>> chain) {
this.chain = chain;
}
@Override
public void setRequestContext(final RequestContext ctx) {
this.ctx = ctx;
}
/*
* The implementation of CompositeEditor.setValue() just creates the
* type-specific sub-Editor and calls EditorChain.attach().
*/
@Override
public void setValue(final QuestionDataProxy value) {
// if (currentValue != null && value == null) {
chain.detach(subEditor);
// }
QuestionType type = null;
if (value instanceof QuestionMultiChoiceDataProxy) {
if (((QuestionMultiChoiceDataProxy) value).getCustomList() == null) {
((QuestionMultiChoiceDataProxy) value).setCustomList(new ArrayList<CustomListItemProxy>());
}
type = QuestionType.CustomList;
subEditor = new QuestionMultipleChoiceDataEditor();
} else {
type = QuestionType.BooleanQuestionType;
subEditor = new BooleanQuestionDataEditor();
}
subEditor.setRequestContext(ctx);
currentValue = value;
container.clear();
if (value != null) {
dataType.setValue(type, false);
container.add(subEditor);
chain.attach(value, subEditor);
}
}
}
Question Base Data Editor
public interface QuestionBaseDataEditor extends HasRequestContext<QuestionDataProxy>, IsWidget {
}
Example Subtype
public class BooleanQuestionDataEditor extends Composite implements QuestionBaseDataEditor {
interface Binder extends UiBinder<Widget, BooleanQuestionDataEditor> {}
#Path("prompt")
#UiField
TextBox prompt = new TextBox();
public BooleanQuestionDataEditor() {
initWidget(GWT.<Binder> create(Binder.class).createAndBindUi(this));
}
@Override
public void setRequestContext(final RequestContext ctx) {
}
}
The only issue left is that QuestionData subtype specific data isn't being displayed, or flushed. I think it has to do with the Editor setup I'm using.
For example, The value for prompt in the BooleanQuestionDataEditor is neither set nor flushed, and is null in the rpc payload.
My guess is: Since the QuestionDataEditor implements LeafValueEditor, the driver will not visit the subeditor, even though it has been attached.
Big thanks to anyone who can help!!!
Fundamentally, you want a CompositeEditor to handle cases where objects are dynamically added or removed from the Editor hierarchy. The ListEditor and OptionalFieldEditor adaptors implement CompositeEditor.
If the information required for the different types of questions is fundamentally orthogonal, then multiple OptionalFieldEditor could be used with different fields, one for each question type. This will work when you have only a few question types, but won't really scale well in the future.
A different approach, that will scale better would be to use a custom implementation of a CompositeEditor + LeafValueEditor that handles a polymorphic QuestionData type hierarchy. The type drop-down UI element would become an implementation detail of the CompositeEditor. When a question type is selected, the editor will call EditorChain.attach() with an instance of a QuestionData subtype and the type-specific sub-Editor. The newly-created QuestionData instance should be retained to implement LeafValueEditor.getValue(). The implementation of CompositeEditor.setValue() just creates the type-specific sub-Editor and calls EditorChain.attach().
FWIW, OptionalFieldEditor can be used with ListEditor or any other editor type.
We implemented a similar approach (see the accepted answer) and it works for us like this.
Since the driver is initially unaware of the simple editor paths that might be used by sub-editors, every sub-editor has its own driver:
public interface CreatesEditorDriver<T> {
RequestFactoryEditorDriver<T, ? extends Editor<T>> createDriver();
}
public interface RequestFactoryEditor<T> extends CreatesEditorDriver<T>, Editor<T> {
}
Then we use the following editor adapter, which allows any sub-editor that implements RequestFactoryEditor to be used. This is our workaround to support polymorphism in editors:
public static class DynamicEditor<T>
implements LeafValueEditor<T>, CompositeEditor<T, T, RequestFactoryEditor<T>>, HasRequestContext<T> {
private RequestFactoryEditorDriver<T, ? extends Editor<T>> subdriver;
private RequestFactoryEditor<T> subeditor;
private T value;
private EditorDelegate<T> delegate;
private RequestContext ctx;
public static <T> DynamicEditor<T> of(RequestFactoryEditor<T> subeditor) {
return new DynamicEditor<T>(subeditor);
}
protected DynamicEditor(RequestFactoryEditor<T> subeditor) {
this.subeditor = subeditor;
}
@Override
public void setValue(T value) {
this.value = value;
subdriver = null;
if (null != value) {
RequestFactoryEditorDriver<T, ? extends Editor<T>> newSubdriver = subeditor.createDriver();
if (null != ctx) {
newSubdriver.edit(value, ctx);
} else {
newSubdriver.display(value);
}
subdriver = newSubdriver;
}
}
@Override
public T getValue() {
return value;
}
@Override
public void flush() {
if (null != subdriver) {
subdriver.flush();
}
}
@Override
public void onPropertyChange(String... paths) {
}
@Override
public void setDelegate(EditorDelegate<T> delegate) {
this.delegate = delegate;
}
@Override
public RequestFactoryEditor<T> createEditorForTraversal() {
return subeditor;
}
@Override
public String getPathElement(RequestFactoryEditor<T> subEditor) {
return delegate.getPath();
}
@Override
public void setEditorChain(EditorChain<T, RequestFactoryEditor<T>> chain) {
}
@Override
public void setRequestContext(RequestContext ctx) {
this.ctx = ctx;
}
}
Our example sub-editor:
public static class VirtualProductEditor implements RequestFactoryEditor<ProductProxy> {
interface Driver extends RequestFactoryEditorDriver<ProductProxy, VirtualProductEditor> {}
private static final Driver driver = GWT.create(Driver.class);
public Driver createDriver() {
driver.initialize(this);
return driver;
}
...
}
Our usage example:
#Path("")
DynamicEditor<ProductProxy> productDetailsEditor;
...
public void setProductType(ProductType type){
if (ProductType.VIRTUAL==type){
productDetailsEditor = DynamicEditor.of(new VirtualProductEditor());
} else if (ProductType.PHYSICAL==type){
productDetailsEditor = DynamicEditor.of(new PhysicalProductEditor());
}
}
Would be great to hear your comments.
Regarding your question why subtype specific data isn't displayed or flushed:
My scenario is a little bit different but I made the following observation:
GWT editor databinding does not work as one would expect with abstract editors in the editor hierarchy. The subEditor declared in your QuestionDataEditor is of type QuestionBaseDataEditor, and this is a fully abstract type (an interface). When looking for fields/sub-editors to populate with data or to flush, GWT takes all the fields declared in this type. Since QuestionBaseDataEditor has no sub-editors declared, nothing is displayed/flushed. From debugging I found out that this happens because GWT uses a generated EditorDelegate for that abstract type rather than the EditorDelegate for the concrete subtype present at that moment.
In my case all the concrete sub-editors had the same types of leaf value editors (I had two different concrete editors, one to display and one to edit the same bean type), so I could do something like this to work around the limitation:
interface MyAbstractEditor1 extends Editor<MyBean>
{
LeafValueEditor<String> description();
}
// or as an alternative
abstract class MyAbstractEditor2 implements Editor<MyBean>
{
@UiField protected LeafValueEditor<String> name;
}
class MyConcreteEditor extends MyAbstractEditor2 implements MyAbstractEditor1
{
@UiField TextBox description;
public LeafValueEditor<String> description()
{
return description;
}
// super.name is bound to a TextBox using UiBinder :)
}
Now GWT finds the sub-editors in the abstract base class, and in both cases I get the corresponding fields name and description populated and flushed.
Unfortunately this approach is not suitable when the concrete subeditors have different values in your bean structure to edit :(
I think this is a bug of the editors framework GWT code generation, that can only be solved by the GWT development team.
Isn't the fundamental problem that the binding happens at compile time, so it will only bind to QuestionDataProxy and won't have sub-type-specific bindings? The CompositeEditor javadoc says "An interface that indicates that a given Editor is composed of an unknown number of sub-Editors all of the same type", so does that rule this usage out?
At my current job I'm pushing to avoid polymorphism altogether as the RDBMS doesn't support it either. Sadly we do have some at the moment so I'm experimenting with a dummy wrapper class that exposes all the sub-types with specific getters so the compiler has something to work on. Not pretty though.
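To illustrate the dummy-wrapper idea (all type names here are hypothetical; the point is only to give the generator concrete, statically typed getters to bind against):
// instead of one polymorphic QuestionDataProxy field, expose each concrete sub-type separately
public interface QuestionDataWrapperProxy extends ValueProxy {
    TextQuestionDataProxy getTextQuestion();
    void setTextQuestion(TextQuestionDataProxy value);
    MultiChoiceQuestionDataProxy getMultiChoiceQuestion();
    void setMultiChoiceQuestion(MultiChoiceQuestionDataProxy value);
}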
Have you seen this post: http://markmail.org/message/u2cff3mfbiboeejr this seems along the right lines.
I'm a bit worried about code bloat though.
Hope that makes some sort of sense!