In the C# MongoDB driver, there are three possible representations for Dictionaries:
Document, ArrayOfArrays, ArrayOfDocuments.
https://mongodb.github.io/mongo-csharp-driver/2.8/reference/bson/mapping/#dictionary-serialization-options
As far as I understand, the Java driver supports only the "Document" representation, or at least uses it by default.
Is there a Convention or other built-in way to configure the driver to use ArrayOfArrays?
I was not able to see anything related in the MongoDB Java Driver documentation.
According to the Java driver team, the answer is that while there isn't a simple flag:
You can do it with a custom codec that handles the conversion of Maps to nested [key, value] arrays for all Maps.
Alternatively, you could create a custom annotation that sets the codec for a single field in the POJO. That way you would not have to worry about all Maps being treated the same by the codec registry.
If you want to store all maps in the same way, the first option is obviously easier. You can refer to the driver code to see how the built-in annotations are implemented.
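For reference, here is a minimal sketch of what such a codec could look like, assuming string keys and string values for brevity (the class name is made up, and a real implementation would need to delegate to the codec registry for arbitrary value types):

    import java.util.LinkedHashMap;
    import java.util.Map;

    import org.bson.BsonReader;
    import org.bson.BsonType;
    import org.bson.BsonWriter;
    import org.bson.codecs.Codec;
    import org.bson.codecs.DecoderContext;
    import org.bson.codecs.EncoderContext;

    // Hypothetical codec: stores a Map<String, String> as [[key, value], ...]
    public class ArrayOfArraysMapCodec implements Codec<Map<String, String>> {

        @Override
        public void encode(BsonWriter writer, Map<String, String> map, EncoderContext context) {
            writer.writeStartArray();
            for (Map.Entry<String, String> entry : map.entrySet()) {
                writer.writeStartArray();              // inner [key, value] pair
                writer.writeString(entry.getKey());
                writer.writeString(entry.getValue());
                writer.writeEndArray();
            }
            writer.writeEndArray();
        }

        @Override
        public Map<String, String> decode(BsonReader reader, DecoderContext context) {
            Map<String, String> map = new LinkedHashMap<>();
            reader.readStartArray();
            while (reader.readBsonType() != BsonType.END_OF_DOCUMENT) {
                reader.readStartArray();
                map.put(reader.readString(), reader.readString());
                reader.readEndArray();
            }
            reader.readEndArray();
            return map;
        }

        @Override
        @SuppressWarnings({"unchecked", "rawtypes"})
        public Class<Map<String, String>> getEncoderClass() {
            return (Class) Map.class;
        }
    }

You would then combine it with the default registry, e.g. via CodecRegistries.fromRegistries(CodecRegistries.fromCodecs(new ArrayOfArraysMapCodec()), MongoClientSettings.getDefaultCodecRegistry()).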
Related
I'm trying to get some java.time types (LocalDate, ZonedDateTime) to work with Grails 5 and GORM 7, with a MongoDB 3.4 database. I'm using the MongoDB plugin v7.3.0 and mongodb-driver-core/mongodb-driver-sync v4.5.0.
The appropriate codecs seem to be available in the package org.grails.datastore.bson.codecs, but they don't seem to be getting used. When a document is stored to Mongo, LocalDate and ZonedDateTime are serialized as strings, and when I try to retrieve them, I get this error:
Cannot convert value of type 'java.lang.String' to required type 'java.time.LocalDate' for property 'dob': no matching editors or conversion strategy found.
I've reviewed the GORM for MongoDB docs (https://gorm.grails.org/latest/mongodb/manual/), but there isn't much help there. I would assume the appropriate codecs would be registered automatically, but in case they weren't, I followed that documentation to register the codecs in my application.yml file; that didn't work either. I'm not sure what the next step is here. Maybe this just isn't possible in Grails?
Edit: Added example repo: https://github.com/MichaelJRussell/mongo-test
I'm using Spring Boot and am trying to figure out how I can see a list of all the default properties for the MongoDB connection.
I looked at AbstractMongoClientConfiguration but it's abstract and I can't see where the defaults come from. Looking at the class hierarchy for this class I can't see any default implementation in the Spring libs I have.
Where do the defaults for this connection come from? I don't see any properties files either, but I might be missing them.
Had the same issue not too long ago. The official spring-data-mongodb documentation doesn't really mention any details about connection properties and their default values.
However, you can find more detailed information about the connection parameters in the MongoDB Java Driver documentation, and in even more detail in the Connection string docs, which include some default values.
Spring Data MongoDB uses the default values produced by the builder for MongoClientSettings.
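You can also inspect those defaults yourself by building a settings object with no overrides and printing the values you care about; a quick sketch using the driver's settings API:

    import java.util.concurrent.TimeUnit;

    import com.mongodb.MongoClientSettings;

    // Builds MongoClientSettings with no overrides and prints a few defaults
    public class DefaultSettingsProbe {
        public static void main(String[] args) {
            MongoClientSettings settings = MongoClientSettings.builder().build();

            System.out.println("connectTimeout (ms): "
                    + settings.getSocketSettings().getConnectTimeout(TimeUnit.MILLISECONDS));
            System.out.println("maxPoolSize: "
                    + settings.getConnectionPoolSettings().getMaxSize());
            System.out.println("minPoolSize: "
                    + settings.getConnectionPoolSettings().getMinSize());
            System.out.println("readPreference: " + settings.getReadPreference());
            System.out.println("writeConcern: " + settings.getWriteConcern());
        }
    }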
Is there any integration point that allows transparently adding a meta-field to all indexed documents, right before they are indexed, similar to _hibernate_class?
Currently using Hibernate Search 5.11.
As discussed over the chat, the only option in Search 5 is to use the programmatic mapping API to add a class bridge to every single indexed entity type.
In Search 6, you can use the new programmatic mapping API to add a type bridge to the Object type, and it will be applied to every type. It will also be applied to embedded types, though, so that may not be what you're after.
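A minimal sketch of what that could look like in Search 6, following the binder/bridge pattern from its documentation (the class names and the _my_meta field name are made up):

    import org.hibernate.search.engine.backend.document.DocumentElement;
    import org.hibernate.search.engine.backend.document.IndexFieldReference;
    import org.hibernate.search.mapper.pojo.bridge.TypeBridge;
    import org.hibernate.search.mapper.pojo.bridge.binding.TypeBindingContext;
    import org.hibernate.search.mapper.pojo.bridge.mapping.programmatic.TypeBinder;
    import org.hibernate.search.mapper.pojo.bridge.runtime.TypeBridgeWriteContext;

    // Hypothetical binder adding a meta-field to every indexed document
    public class MetaFieldBinder implements TypeBinder {

        @Override
        public void bind(TypeBindingContext context) {
            context.dependencies().useRootOnly();
            IndexFieldReference<String> metaField = context.indexSchemaElement()
                    .field("_my_meta", f -> f.asString())
                    .toReference();
            context.bridge(new MetaFieldBridge(metaField));
        }

        private static class MetaFieldBridge implements TypeBridge {
            private final IndexFieldReference<String> metaField;

            private MetaFieldBridge(IndexFieldReference<String> metaField) {
                this.metaField = metaField;
            }

            @Override
            public void write(DocumentElement target, Object bridgedElement,
                    TypeBridgeWriteContext context) {
                // Write whatever metadata you need; here, the entity class name
                target.addValue(metaField, bridgedElement.getClass().getName());
            }
        }
    }

It would then be applied through the programmatic mapping, e.g. context.programmaticMapping().type(Object.class).binder(new MetaFieldBinder()) inside a mapping configurer.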
Is it possible to generate random Avro data matching a specified schema using the org.apache.avro library?
I need to produce this data to Kafka.
I tried to find some kind of random data generator for tests, but I have only stumbled upon standalone tools for generating such data, or direct GenericRecord usage. The tools are not very suitable for me because they depend on files (reading the schema from a file and so on), and as I understand it, GenericRecords have to be generated one by one.
Are there any other solutions for Java/Scala?
UPDATE: I have found this class, but it does not seem to be accessible from org.apache.avro version 1.8.2.
The reason you need to read a file is that the file supplies a Schema, which defines the fields that need to be created and their types.
That is not a hard requirement, though; nothing prevents you from creating random Generic or Specific Records whose schema is built in code via Avro's SchemaBuilder class.
See this repo for an example; it uses an AVSC schema compiled into a Java POJO class (which, again, could be done with SchemaBuilder instead).
Note that even the class you linked to uses a schema file.
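For illustration, here is a small sketch that builds a schema in code with SchemaBuilder and fills a GenericRecord with random values (the "User" schema and its fields are made up):

    import java.util.Random;

    import org.apache.avro.Schema;
    import org.apache.avro.SchemaBuilder;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;

    public class RandomRecordSketch {
        public static void main(String[] args) {
            // Build the schema in code instead of reading an .avsc file
            Schema schema = SchemaBuilder.record("User")
                    .fields()
                    .requiredString("name")
                    .requiredInt("age")
                    .endRecord();

            // Fill a GenericRecord with random values matching the schema
            Random random = new Random();
            GenericRecord record = new GenericData.Record(schema);
            record.put("name", "user-" + random.nextInt(1000));
            record.put("age", random.nextInt(100));

            System.out.println(record);
        }
    }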
So I personally would probably use Avro4s (https://github.com/sksamuel/avro4s) in conjunction with ScalaCheck's (https://www.scalacheck.org) Gen to model such tests.
You could use ScalaCheck to generate random instances of case classes and Avro4s to convert them to generic records, extract their schemas, and so on.
There's also avro-mocker (https://github.com/speedment/avro-mocker), though I don't know how easy it is to hook it into your code.
I'd just use Podam (http://mtedone.github.io/podam/) to generate the POJOs, and then serialize them to Avro using the Java Avro library (https://avro.apache.org/docs/1.8.1/gettingstartedjava.html#Serializing).
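A sketch of that combination, assuming a simple hypothetical User POJO; Podam fills it with random values and Avro's reflection-based writer serializes it to a file:

    import java.io.File;

    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.reflect.ReflectData;
    import org.apache.avro.reflect.ReflectDatumWriter;

    import uk.co.jemos.podam.api.PodamFactory;
    import uk.co.jemos.podam.api.PodamFactoryImpl;

    public class PodamAvroSketch {

        // Hypothetical POJO to fill with random data
        public static class User {
            private String name;
            private int age;

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public int getAge() { return age; }
            public void setAge(int age) { this.age = age; }
        }

        public static void main(String[] args) throws Exception {
            // Podam generates an instance populated with random values
            PodamFactory factory = new PodamFactoryImpl();
            User randomUser = factory.manufacturePojo(User.class);

            // Derive the Avro schema via reflection and serialize the instance
            Schema schema = ReflectData.get().getSchema(User.class);
            try (DataFileWriter<User> writer =
                    new DataFileWriter<>(new ReflectDatumWriter<>(schema))) {
                writer.create(schema, new File("users.avro"));
                writer.append(randomUser);
            }
        }
    }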
I need to store a Scala class in Morphia. With annotations it works well unless I try to store a collection of _ <: Enumeration.
Morphia complains that it does not have serializers for that type, and I am wondering how to provide one. For now I have changed the type of the collection to Seq[String] and fill it by invoking toString on every item in the collection.
That works well; however, I'm not sure that is the right way to do it.
This problem is common to several of the abstraction layers available on top of MongoDB. It all comes back to one underlying reason: there is no enum equivalent in JSON/BSON. Salat, for example, has the same problem.
In fact, the MongoDB Java driver does not support enums, as you can read in the discussion here: https://jira.mongodb.org/browse/JAVA-268, where you can see the issue is still open. Most of the frameworks I have seen for using MongoDB from Java do not implement low-level functionality such as this. I think this choice makes a lot of sense, because it leaves you the choice of how to deal with data structures the low-level driver does not handle, instead of imposing an approach on you.
In general I feel that the absence of support comes not from a technical limitation but rather from a design choice. For enums there are multiple ways to map them, each with its pros and cons, while for other data types the mapping is probably more straightforward. I don't know the MongoDB Java driver in detail, but I guess supporting multiple "modes" would have required some refactoring (maybe that's why they are talking about a new version of the serialization layer?).
These are two strategies I am thinking about:
If you want to index on the enum and minimize space usage, you would map the enum to an integer, as sketched below (not using the ordinal; see "Can I set enum start value in Java").
If your concern is queryability in the mongo shell, because your data will be accessed by data scientists, you would rather store the enum using its string value.
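For the first strategy, the usual pattern is an enum carrying an explicit, stable code instead of relying on ordinal(); a hypothetical sketch:

    // Hypothetical enum with explicit integer codes, safe against reordering
    public enum Status {
        ACTIVE(1), SUSPENDED(2), DELETED(3);

        private final int code;

        Status(int code) {
            this.code = code;
        }

        public int getCode() {
            return code;
        }

        // Reverse lookup when reading the integer back from MongoDB
        public static Status fromCode(int code) {
            for (Status s : values()) {
                if (s.code == code) {
                    return s;
                }
            }
            throw new IllegalArgumentException("Unknown code: " + code);
        }
    }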
To conclude, there is nothing wrong with adding an intermediate data structure between your native objects and MongoDB. Salat supports this through CustomTransformers; in Morphia you may need to do the conversion explicitly. Go for it.