I have a bit of code I'm using to handle any type of Map, copied from here: MongoDb Map<K, V> codec - Maps MUST have string keys FIX
@Override
public void encode(final BsonWriter writer, final Map<K, T> map, final EncoderContext encoderContext) {
    // Encode each key into a throwaway document under a random field name,
    // then read it back out of that document as a string to use as the name.
    try (var dummyWriter = new BsonDocumentWriter(new BsonDocument())) {
        dummyWriter.writeStartDocument();
        writer.writeStartDocument();
        for (final Map.Entry<K, T> entry : map.entrySet()) {
            var dummyId = UUID.randomUUID().toString();
            dummyWriter.writeName(dummyId);
            keyCodec.encode(dummyWriter, entry.getKey(), encoderContext);
            // TODO: could it be simpler with something like JsonWriter?
            writer.writeName(dummyWriter.getDocument().asDocument().get(dummyId).asString().getValue());
            valueCodec.encode(writer, entry.getValue(), encoderContext);
        }
        dummyWriter.writeEndDocument();
    }
    writer.writeEndDocument();
}
This seems to work fine most of the time, except when the Map is of type Map<ObjectId, *>, which causes the following error on the dummyWriter.getDocument().asDocument().get(dummyId).asString() line in encode():
org.bson.BsonInvalidOperationException: Value expected to be of type STRING is of unexpected type OBJECT_ID
I have attempted to work around this with the following code, but it also fails; it looks like there is an issue with how the type is written:
BsonValue cur = dummyWriter.getDocument().asDocument().get(dummyId);
if (cur.isObjectId()) {
    writer.writeName(cur.asObjectId().getValue().toHexString());
} else {
    writer.writeName(cur.asString().getValue());
}
An exception occurred when decoding using the AutomaticPojoCodec.
Decoding into a '' failed with the following exception:
Cannot find a public constructor for ''.
I am beginning to suspect this code was never supposed to work, and that the default codec for Map<String, *> was actually being used where I thought I had tested this codec before... What can I do to get my codecs to play nice?
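For reference, the two branches of that workaround could be folded into one helper; a minimal sketch (the helper name and the toString() fallback are my own assumptions, not anything from the driver):

private String keyName(final BsonValue encodedKey) {
    // ObjectId and String keys have an obvious string form; anything else
    // falls back to toString(), which only works for key types whose
    // string representation round-trips.
    if (encodedKey.isObjectId()) {
        return encodedKey.asObjectId().getValue().toHexString();
    }
    if (encodedKey.isString()) {
        return encodedKey.asString().getValue();
    }
    return encodedKey.toString();
}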
I'm trying to write an extension method to include a certain property (a text element, itself containing a collection of translations) that is present in many of my entity models.
I had no problem with the .Include function:
public static IIncludableQueryable<T, IEnumerable<Translation>> IncludeTextBitWithTranslations<T>(this IQueryable<T> source, Expression<Func<T, TextBit>> predicate) where T : class
{
    var result = source.Include(predicate).ThenInclude(t => t.Translations);
    return result;
}
And tests proved successful.
Now, in some cases, I have entities that keep all their texts in a child object - for example, the Article entity has an ArticleInfo property that contains a few text elements. So I figured I just needed another extension method that used ThenInclude instead. With a few differences, I finally got this:
public static IIncludableQueryable<TEntity, ICollection<Translation>> ThenIncludeTextBitWithTranslations<TEntity, TPreviousProperty, TextBit>(this IIncludableQueryable<TEntity, TPreviousProperty> source, Expression<Func<TPreviousProperty, TextBit>> predicate) where TEntity : class
{
    var result = source.ThenInclude(predicate)
        .ThenInclude(t => t.Translations);
    return result;
}
And now I get this error:
'TextBit' does not contain a definition for 'Translations' and no extension method 'Translations' accepting an argument of 'TextBit' type was found
This error appears on the last lambda expression, t => t.Translations. It is extremely weird to me; I've been looking all over the internet for help on the matter, but to no avail.
I tried forcing the types on ThenInclude by specifying them manually:
var result = source.ThenInclude(predicate)
    .ThenInclude<TEntity, TextBit, ICollection<Translation>>(t => t.Translations);
but without success.
Does anyone have a clue as to why? I'm very much at a loss here.
You have an extra type parameter, TextBit, in the second method (ThenIncludeTextBitWithTranslations<TEntity, TPreviousProperty, TextBit>), so TextBit is treated as a generic type parameter that shadows the actual class - which is why Translations cannot be found on it. Remove it:
public static IIncludableQueryable<TEntity, ICollection<Translation>> ThenIncludeTextBitWithTranslations<TEntity, TPreviousProperty>(this IIncludableQueryable<TEntity, TPreviousProperty> source, Expression<Func<TPreviousProperty, TextBit>> predicate) where TEntity : class
{
    var result = source.ThenInclude(predicate).ThenInclude(t => t.Translations);
    return result;
}
I'm trying to create a consumer using the StreamListener annotation and its condition attribute. However, I'm getting the following exception:
org.springframework.core.convert.ConversionFailedException: Failed to convert from type [java.lang.String] to type [java.lang.Integer] for value 'test'; nested exception is java.lang.NumberFormatException: For input string: "test"
TestListener:
@StreamListener(target = ITestSink.CHANNEL_NAME, condition = "payload['test'] == 'test'")
public void test(@Payload TestObj message) {
    log.info("message is {}", message.getName());
}
TestObj:
@Data
@ToString(callSuper = true)
public class TestObj {
    @JsonProperty("test")
    private String test;
    @JsonProperty("name")
    private String name;
}
Can someone assist with this issue?
At the point when the condition is evaluated, the payload of the message has not yet been converted from the wire format (byte[]) to the desired type. In other words, it has not yet gone through the type conversion process described in Content Type Negotiation.
So, unless you use a SpEL expression that evaluates raw data (for example, the value of the first byte in the byte array), use message-header-based expressions (such as condition = "headers['type']=='dog'").
Example:
@StreamListener(target = Sink.INPUT, condition = "headers['type']=='bogey'")
public void receiveBogey(@Payload BogeyPojo bogeyPojo) {
    // handle the message
}
Check the Spring documentation here.
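For illustration, a minimal producer-side sketch of a message that would match such a condition (MessageBuilder is org.springframework.messaging.support.MessageBuilder; the input channel and the header value are my own assumptions mirroring the example above):

// Attach a routing header so the header-based condition can match
// without touching the not-yet-converted payload.
Message<TestObj> message = MessageBuilder
        .withPayload(new TestObj())
        .setHeader("type", "bogey")
        .build();
input.send(message);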
From what you show, it should work. I suggest you remove the condition and set a breakpoint to debug; then you should be able to see what the actual type is.
I have a transformation which outputs a type with a wildcard: Feature<? extends Geometry>. I have specified a coder for this class and created a pipeline.
final Pipeline pipeline = Pipeline.create();
final TypeDescriptor<Feature<? extends Geometry>> featureTypeDescriptor =
        new TypeDescriptor<Feature<? extends Geometry>>() {
        };
pipeline.getCoderRegistry().registerCoderForType(featureTypeDescriptor, FeatureCoder.of());
final List<String> data = Arrays.asList("a", "b");
final PCollection<Feature<? extends Geometry>> features =
        pipeline.apply(Create.of(data).withCoder(StringUtf8Coder.of()))
                .apply(ParDo.of(new DoFn<String, Feature<? extends Geometry>>() {
                    @ProcessElement
                    public void process(ProcessContext processContext) {
                        final String name = processContext.element();
                        processContext.output(new FeatureImpl(name));
                    }
                }));
pipeline.run().waitUntilFinish();
When I run this pipeline, I get the following error:
Exception in thread "main" java.lang.ClassCastException: org.apache.beam.sdk.repackaged.com.google.common.reflect.Types$WildcardTypeImpl cannot be cast to java.lang.reflect.TypeVariable
at org.apache.beam.sdk.coders.CoderRegistry.getCoderFromTypeDescriptor(CoderRegistry.java:623)
at org.apache.beam.sdk.coders.CoderRegistry.getCoderFromParameterizedType(CoderRegistry.java:656)
at org.apache.beam.sdk.coders.CoderRegistry.getCoderFromTypeDescriptor(CoderRegistry.java:618)
at org.apache.beam.sdk.coders.CoderRegistry.getCoder(CoderRegistry.java:252)
at org.apache.beam.sdk.values.PCollection.inferCoderOrFail(PCollection.java:149)
at org.apache.beam.sdk.values.PCollection.finishSpecifyingOutput(PCollection.java:89)
This can be traced back in the Beam code to the following lines in org.apache.beam.sdk.coders.CoderRegistry#getCoderFromTypeDescriptor, where type eventually ends up as ? extends Geometry:
else if (type instanceof WildcardType) {
    // No coder for an unknown generic type.
    // (type is a WildcardType here, so the ((TypeVariable<?>) type) cast
    // below is what actually throws the ClassCastException seen above.)
    throw new CannotProvideCoderException(
            String.format("Cannot provide a coder for type variable %s"
                            + " (declared by %s) because the actual type is unknown due to erasure.",
                    type,
                    ((TypeVariable<?>) type).getGenericDeclaration()),
            ReasonCode.TYPE_ERASURE);
}
The example above is a simplification of the real problem. In reality, I do not read from a list of strings but from HBase, so I cannot simply specify my coder via Create.of(data).withCoder(...). However, the behavior is the same.
Is this expected behavior, and should I avoid using wildcards? Or should I approach this in another way? Why is my specified coder not used here?
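One workaround sketch that may sidestep the inference entirely (reusing the question's own Feature, Geometry, FeatureImpl, and FeatureCoder types): set the coder explicitly on the output PCollection, so the registry never has to resolve the wildcard.

// Setting the coder directly on the PCollection bypasses coder
// inference for the wildcard type.
final PCollection<Feature<? extends Geometry>> features =
        pipeline.apply(Create.of(data).withCoder(StringUtf8Coder.of()))
                .apply(ParDo.of(new DoFn<String, Feature<? extends Geometry>>() {
                    @ProcessElement
                    public void process(ProcessContext processContext) {
                        processContext.output(new FeatureImpl(processContext.element()));
                    }
                }))
                .setCoder(FeatureCoder.of());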
I have been working to set up ORMLite as the primary data-access layer between a PostgreSQL database and a Java application. Everything had been fairly straightforward until I started messing with PostgreSQL's array types. In my case, I have two tables that make use of the text[] array type. Following the documentation, I created a custom data persister as below:
public class StringArrayPersister extends StringType {

    private static final StringArrayPersister singleTon = new StringArrayPersister();

    private StringArrayPersister() {
        super(SqlType.STRING, new Class<?>[]{String[].class});
    }

    public static StringArrayPersister getSingleton() {
        return singleTon;
    }

    @Override
    public Object javaToSqlArg(FieldType fieldType, Object javaObject) {
        String[] array = (String[]) javaObject;
        if (array == null) {
            return null;
        } else {
            String join = "";
            for (String str : array) {
                join += str + ",";
            }
            return "'{" + join.substring(0, join.length() - 1) + "}'";
        }
    }

    @Override
    public Object sqlArgToJava(FieldType fieldType, Object sqlArg, int columnPos) {
        String string = (String) sqlArg;
        if (string == null) {
            return null;
        } else {
            return string.replaceAll("[{}]", "").split(",");
        }
    }
}
And then in my business-object implementation, I set the persister class on the column like so:
@DatabaseField(columnName = TAGS_FIELD, persisterClass = StringArrayPersister.class)
private String[] tags;
Whenever I try inserting a new record with the Dao.create statement, I get an error message saying tags is of type text[], but got character varying... However, when querying existing records from the database, the business object (and its text array) load just fine.
Any ideas?
UPDATE:
PostgreSQL 9.2. The exact error message:
Caused by: org.postgresql.util.PSQLException: ERROR: column "tags" is of type text[] but expression is of type character varying
Hint: You will need to rewrite or cast the expression.
I've not used ORMLite before (I generally use MyBatis); however, I believe the proximate issue is this code:
private StringArrayPersister() {
    super(SqlType.STRING, new Class<?>[]{String[].class});
}
SqlType.STRING is mapped to VARCHAR in the ORMLite code, and therefore, I believe, is the proximate cause of the error you're getting. See the ORMLite SQL Data Types documentation for more detail on that.
Try changing it to this:
private StringArrayPersister() {
    super(SqlType.OTHER, new Class<?>[]{String[].class});
}
There may be other tweaks necessary as well to get it fully up and running, but that should get you past this particular error with the varchar type mismatch.
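As a point of comparison, plain JDBC can hand PostgreSQL a real array value instead of a hand-built quoted literal; a minimal sketch outside ORMLite (imports from java.sql; the url, table, and column names are illustrative):

// Connection.createArrayOf builds a java.sql.Array of Postgres "text",
// which avoids both the '{...}' string literal and the varchar mismatch.
try (Connection conn = DriverManager.getConnection(url);
     PreparedStatement ps = conn.prepareStatement(
             "INSERT INTO articles (tags) VALUES (?)")) {
    ps.setArray(1, conn.createArrayOf("text", new String[]{"a", "b"}));
    ps.executeUpdate();
}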
I have some problems with Google's AutoBean serialization and deserialization.
I have an AutoBean that contains both primitive types and Maps. I can serialize and deserialize the primitive types without any problem, but when I try to read the deserialized Map, I get a NullPointerException.
Have you ever met a similar problem before? Below is a JUnit test that represents my problem. The first two asserts pass, but the third fails.
public class AutoBeanTest {

    @Test
    public void test() throws Exception {
        MyFactory myFactory = AutoBeanFactorySource.create(MyFactory.class);
        Options options = myFactory.options().as();
        options.setMyInt(5);

        HashMap<Double, Boolean> map = newHashMap();
        map.put(8.0, true);
        map.put(9.1, false);
        options.setMyMap(map);

        Options deserialized = AutoBeanCodex.decode(myFactory, Options.class,
                AutoBeanCodex.encode(AutoBeanUtils.getAutoBean(options)).getPayload()).as();

        assertEquals(deserialized.getMyInt(), 5);
        assertTrue(options.getMyMap().containsKey(8d));
        assertTrue(deserialized.getMyMap().containsKey(8d));
    }

    public interface MyFactory extends AutoBeanFactory {
        AutoBean<Options> options();
    }

    public interface Options {
        int getMyInt();
        void setMyInt(int myInt);
        Map<Double, Boolean> getMyMap();
        void setMyMap(Map<Double, Boolean> myMap);
    }
}
I played around with the AutoBean functionality a while ago. I think it is still kind of buggy, and I'm quite sure the exception is caused by a bug in the AutoBean code, not in your code.
If you run the above sample code in a debugger and check the generated JSON, things look fine. You can even call deserialized.getMyMap().size() and get the correct value, but once you try to access the contents, errors occur.
There is a workaround: just use Map<String, String> instead of Double and Boolean, and it works...
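A minimal sketch of that workaround (the conversion loop is my own; the Options interface would need Map<String, String> accessors for this to compile):

// Stringify keys and values so the map survives the JSON round trip,
// since JSON object keys are always strings.
Map<String, String> wire = new HashMap<>();
for (Map.Entry<Double, Boolean> e : map.entrySet()) {
    wire.put(e.getKey().toString(), e.getValue().toString());
}
options.setMyMap(wire); // assumes setMyMap(Map<String, String>)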
Ackchyually... AutoBeans is doing it correctly: in JSON, only strings are allowed as object keys. But of course the error message should be more helpful.