How can I avoid converting an empty HashMap to null in morphia? - mongodb

We are using org.mongodb.morphia to convert objects to BasicDBObjects before persistence. One issue we encountered is that when the object to convert contains an empty HashMap (size 0), the HashMap is converted to null, which causes a NullPointerException on later access. Is there any way to avoid this, so that after conversion it is still a HashMap with size 0?
Part of the class to be converted:
public class ProjectServiceAdapterConfig {
    @NotNull
    private String serviceAdapterId;
    @NotNull
    private String projectId;
    @Embedded
    @Flatten
    private Map<String, Map<String, String>> connections = new HashMap<>();
    // ...... setter and getter skipped here
}
Code for the conversion:
// create a mapper with default MapperOptions
private Mapper createMapper() {
    return new Mapper();
}

ReplaceableItem objectToItem(final ProjectServiceAdapterConfig obj) {
    final Mapper mapper = createMapper();
    final MappedClass mc = mapper.getMappedClass(obj.getClass());
    final Map<String, Object> map = mapper.toDBObject(obj).toMap();
    // ... building the ReplaceableItem from the map is omitted in the question
}
The obj is created elsewhere. After some debugging, I found that obj contains an empty Map (the following data is copied from the IntelliJ IDEA debugger):
connections = {java.util.LinkedHashMap@8890} size = 1
 [0] = {java.util.LinkedHashMap$Entry@8894} "accounts" -> size = 0
  key: java.lang.String = {java.lang.String@8895} "accounts"
  value: java.util.LinkedHashMap = {java.util.LinkedHashMap@8896} size = 0
and the same entry after conversion:
[2] = {java.util.LinkedHashMap$Entry@8910} "connections" -> size = 1
 key: java.lang.String = {java.lang.String@8911} "connections"
 value: com.mongodb.BasicDBObject = {com.mongodb.BasicDBObject@8912} size = 1
  [0] = {java.util.LinkedHashMap$Entry@8923} "accounts" -> null
   key: java.lang.String = {java.lang.String@8895} "accounts"
   value: = null
As you can see, it's converted to null, which is what we are trying to avoid.
Thanks

Before you call morphia.mapPackage(), do this:
morphia.getMapper().getOptions().storeEmpties = true;
That should map back properly to an empty map for you.
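For context, a minimal sketch of the full setup, assuming org.mongodb.morphia 1.x, where the option is also exposed through a setStoreEmpties(true) setter (the package name below is a placeholder):
import org.mongodb.morphia.Morphia;

public class MorphiaSetup {
    public static Morphia createMorphia() {
        Morphia morphia = new Morphia();
        // Keep empty maps/collections instead of collapsing them to null.
        morphia.getMapper().getOptions().setStoreEmpties(true);
        // Placeholder entity package; use your own.
        morphia.mapPackage("com.example.config");
        return morphia;
    }
}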

I assume I cannot avoid it without customizing the MapOfValuesConverter. You can see from the source code that an empty map will always be converted to null:
@Override
public Object encode(Object value, MappedField mf) {
    if (value == null)
        return null;

    Map<Object, Object> map = (Map<Object, Object>) value;
    if ((map != null) && (map.size() > 0)) {
        Map mapForDb = new HashMap();
        for (Map.Entry<Object, Object> entry : map.entrySet()) {
            String strKey = converters.encode(entry.getKey()).toString();
            mapForDb.put(strKey, converters.encode(entry.getValue()));
        }
        return mapForDb;
    }
    return null;
}
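If you do customize it, here is a rough sketch of the idea (an assumption on my part, not verified against every Morphia version: it presumes MapOfValuesConverter can be subclassed with a no-arg constructor and registered through the mapper's converter registry):
import java.util.Map;
import com.mongodb.BasicDBObject;
import org.mongodb.morphia.converters.MapOfValuesConverter;
import org.mongodb.morphia.mapping.MappedField;

// Sketch only: delegate to the stock converter, then replace the null it
// returns for an empty map with an empty DBObject so the field round-trips.
public class EmptyMapPreservingConverter extends MapOfValuesConverter {
    @Override
    public Object encode(Object value, MappedField mf) {
        Object encoded = super.encode(value, mf);
        if (encoded == null && value instanceof Map) {
            return new BasicDBObject(); // the map was empty; keep it as {}
        }
        return encoded;
    }
}
Registration would then be something like morphia.getMapper().getConverters().addConverter(new EmptyMapPreservingConverter()); (again, an assumption to verify against your Morphia version).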

In case morphia.getMapper().getOptions().setStoreEmpties(true); doesn't work for you, another solution would be to use the @PostLoad annotation to check whether you have a null collection and create an empty one if necessary.
import java.util.*;
import org.mongodb.morphia.annotations.*;
import org.bson.types.ObjectId;

@Entity
public class Model {
    @Id
    private ObjectId id;

    private Map<String, String> map;

    protected Model() {}

    public Model(HashMap<String, String> map) {
        super();
        setMap(map);
    }

    public void setMap(HashMap<String, String> map) {
        this.map = map;
        checkForNullMap();
    }

    @PostLoad
    private void checkForNullMap() {
        if (map == null) {
            map = new HashMap<String, String>();
        }
    }
}

Related

Gson with Scala causes StackOverflow for Enumerations

I have an enum defined in a Scala class as follows:
// define compression types as an enumeration
object CompressionType extends Enumeration {
  type CompressionType = Value
  val None, Gzip, Snappy, Lz4, Zstd = Value
}
and I have a class that I want to serialize to JSON:
case class ProducerConfig(batchNumMessages: Int, lingerMs: Int, messageSize: Int,
                          topic: String, compressionType: CompressionType.Value)
That class includes the enum object. It seems that using Gson to serialize it causes a StackOverflowError due to some circular dependency.
val gson = new Gson
val jsonBody = gson.toJson(producerConfig)
println(jsonBody)
Here is the stack trace I get below. I saw this question here and its answer, except the solution seems to be a Java one and didn't work for Scala. Can someone clarify?
17:10:04.475 [ERROR] i.g.a.Gatling$ - Run crashed
java.lang.StackOverflowError: null
at com.google.gson.stream.JsonWriter.beforeName(JsonWriter.java:617)
at com.google.gson.stream.JsonWriter.writeDeferredName(JsonWriter.java:400)
at com.google.gson.stream.JsonWriter.value(JsonWriter.java:526)
at com.google.gson.internal.bind.TypeAdapters$7.write(TypeAdapters.java:233)
at com.google.gson.internal.bind.TypeAdapters$7.write(TypeAdapters.java:218)
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.write(TypeAdapterRuntimeTypeWrapper.java:69)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.write(ReflectiveTypeAdapterFactory.java:127)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.write(ReflectiveTypeAdapterFactory.java:245)
... (the three frames above repeat until the stack overflows)
I'm not a Scala guy, but I think Gson is the wrong tool to use here.
Firstly, Gson is not aware of scala.Enumeration and therefore handles it as a regular data bag that's traversable using reflection.
Secondly, there is no easy (if any) way of deserializing to the original value state (this can be ignored if you're only going to produce, not consume, JSON documents).
Here is why:
object Single extends Enumeration {
  val Only = Value
}

final class Internals {
    private Internals() {
    }

    static void inspect(final Object o, final Excluder excluder, final boolean serialize)
            throws IllegalAccessException {
        inspect(o, clazz -> !excluder.excludeClass(clazz, serialize), field -> !excluder.excludeField(field, serialize));
    }

    static void inspect(final Object o, final Predicate<? super Class<?>> inspectClass, final Predicate<? super Field> inspectField)
            throws IllegalAccessException {
        for ( Class<?> c = o.getClass(); c != null; c = c.getSuperclass() ) {
            if ( !inspectClass.test(c) ) {
                continue;
            }
            System.out.println(c);
            for ( final Field f : c.getDeclaredFields() ) {
                if ( !inspectField.test(f) ) {
                    continue;
                }
                f.setAccessible(true);
                System.out.printf("\t%s: %s\n", f, f.get(o));
            }
        }
    }
}
final Object value = Single.Only();
Internals.inspect(value, gson.excluder(), true);
produces:
class scala.Enumeration$Val
private final int scala.Enumeration$Val.i: 0
private final java.lang.String scala.Enumeration$Val.name: null
class scala.Enumeration$Value
private final scala.Enumeration scala.Enumeration$Value.scala$Enumeration$$outerEnum: Single
class java.lang.Object
As you can see, there are two crucial fields:
private final java.lang.String scala.Enumeration$Val.name gives null unless named (the element name can still be obtained using toString, though).
private final scala.Enumeration scala.Enumeration$Value.scala$Enumeration$$outerEnum is actually a reference to the concrete enumeration outer class (and is the cause of the infinite recursion, hence the stack overflow error).
These two prevent proper deserialization.
The outer enum type can be obtained in at least two ways:
either implement custom type adapters for all types that can contain such enumerations (pretty easy for data bags (case classes in Scala?), as the fields already contain the type information, though Gson's support for this is poor; it won't work for single primitive literals like the above, or for collections);
or bake the outer enumeration name into the JSON, holding two entries for the name and the outer type.
The latter could be done like this (in Java; hopefully it's easy to simplify in Scala):
final class ScalaStuff {

    private static final Field outerEnumField;
    private static final Map<String, Method> withNameMethodCache = new ConcurrentHashMap<>();

    static {
        try {
            outerEnumField = Enumeration.Value.class.getDeclaredField("scala$Enumeration$$outerEnum");
            outerEnumField.setAccessible(true);
        } catch ( final NoSuchFieldException ex ) {
            throw new RuntimeException(ex);
        }
    }

    private ScalaStuff() {
    }

    @Nonnull
    static String toEnumerationName(@Nonnull final Enumeration.Value value) {
        try {
            final Class<? extends Enumeration> aClass = ((Enumeration) outerEnumField.get(value)).getClass();
            final String typeName = aClass.getTypeName();
            final int length = typeName.length();
            assert !typeName.isEmpty() && typeName.charAt(length - 1) == '$';
            return typeName.substring(0, length - 1);
        } catch ( final IllegalAccessException ex ) {
            throw new RuntimeException(ex);
        }
    }

    @Nonnull
    static Enumeration.Value fromEnumerationValue(@Nonnull final String type, @Nonnull final String enumerationName)
            throws ClassNotFoundException, NoSuchMethodException {
        // using get for exception propagation cleanliness; computeIfAbsent would complicate exception handling
        @Nullable
        final Method withNameMethodCandidate = withNameMethodCache.get(type);
        final Method withNameMethod;
        if ( withNameMethodCandidate != null ) {
            withNameMethod = withNameMethodCandidate;
        } else {
            final Class<?> enumerationClass = Class.forName(type);
            withNameMethod = enumerationClass.getMethod("withName", String.class);
            withNameMethodCache.put(type, withNameMethod);
        }
        try {
            return (Enumeration.Value) withNameMethod.invoke(null, enumerationName);
        } catch ( final IllegalAccessException | InvocationTargetException ex ) {
            throw new RuntimeException(ex);
        }
    }
}
final class ScalaEnumerationTypeAdapterFactory
        implements TypeAdapterFactory {

    private static final TypeAdapterFactory instance = new ScalaEnumerationTypeAdapterFactory();

    private ScalaEnumerationTypeAdapterFactory() {
    }

    static TypeAdapterFactory getInstance() {
        return instance;
    }

    @Override
    @Nullable
    public <T> TypeAdapter<T> create(final Gson gson, final TypeToken<T> typeToken) {
        if ( !Enumeration.Value.class.isAssignableFrom(typeToken.getRawType()) ) {
            return null;
        }
        @SuppressWarnings("unchecked")
        final TypeAdapter<T> typeAdapter = (TypeAdapter<T>) Adapter.instance;
        return typeAdapter;
    }

    private static final class Adapter
            extends TypeAdapter<Enumeration.Value> {

        private static final TypeAdapter<Enumeration.Value> instance = new Adapter()
                .nullSafe();

        private Adapter() {
        }

        @Override
        public void write(final JsonWriter out, final Enumeration.Value value)
                throws IOException {
            out.beginObject();
            out.name("type");
            out.value(ScalaStuff.toEnumerationName(value));
            out.name("name");
            out.value(value.toString());
            out.endObject();
        }

        @Override
        public Enumeration.Value read(final JsonReader in)
                throws IOException {
            in.beginObject();
            @Nullable
            String type = null;
            @Nullable
            String name = null;
            while ( in.hasNext() ) {
                switch ( in.nextName() ) {
                case "type":
                    type = in.nextString();
                    break;
                case "name":
                    name = in.nextString();
                    break;
                default:
                    in.skipValue();
                    break;
                }
            }
            in.endObject();
            if ( type == null || name == null ) {
                throw new JsonParseException("Insufficient enum data: " + type + ", " + name);
            }
            try {
                return ScalaStuff.fromEnumerationValue(type, name);
            } catch ( final ClassNotFoundException | NoSuchMethodException ex ) {
                throw new JsonParseException(ex);
            }
        }
    }
}
The following JUnit 5 test will pass:
private static final Gson gson = new GsonBuilder()
        .disableHtmlEscaping()
        .registerTypeAdapterFactory(ScalaEnumerationTypeAdapterFactory.getInstance())
        .create();

@Test
public void test() {
    final Enumeration.Value before = Single.Only();
    final String json = gson.toJson(before);
    System.out.println(json);
    final Enumeration.Value after = gson.fromJson(json, Enumeration.Value.class);
    Assertions.assertSame(before, after);
}
where the json variable would hold the following JSON payload:
{"type":"Single","name":"Only"}
The ScalaStuff class above is most likely not complete. See more at 'how to deserialize a json string that contains ## with scala' for Scala and Gson implications.
Update 1
Since you don't need to consume the produced JSON documents, assuming the JSON consumers can deal with the enumeration deserialization themselves, you can produce just the enumeration value name, which is more descriptive than a nameless int. Just replace the Adapter above:
private static final class Adapter
        extends TypeAdapter<Enumeration.Value> {

    private static final TypeAdapter<Enumeration.Value> instance = new Adapter()
            .nullSafe();

    private Adapter() {
    }

    @Override
    public void write(final JsonWriter out, final Enumeration.Value value)
            throws IOException {
        out.value(value.toString());
    }

    @Override
    public Enumeration.Value read(final JsonReader in) {
        throw new UnsupportedOperationException();
    }
}
Then the following test will be green:
Assertions.assertEquals("\"Only\"", gson.toJson(Single.Only()));

Mapstruct How to generate mapping source/target in txt file at build time

I need to generate somewhere (maybe in a directory like target/generated/annotations/.../MyMapper.txt) at build time all the source/target pairs declared by each mapper, so that I can then read the txt files at runtime.
Example:
@Mapper
public interface MyMapper {
    @Mapping(target = "a", source = "source.x.y.z")
    @Mapping(target = "b", source = "source.r.s.t")
    @Mapping(target = "c", source = "source.o.p.q")
    MyObject map(MySource source);
}
Content of the generated file: target/generated/annotations/MyMapper.txt
mypackage.MyObject.a=mypackage.MySource.x.y.z
mypackage.MyObject.b=mypackage.MySource.r.s.t
mypackage.MyObject.c=mypackage.MySource.o.p.q
How can I do that?
Thank you in advance for your help.
MapStruct cannot do this out of the box.
You'll need to write your own annotation processor that will use the #Mapper annotation and generate your own text file.
Thank you @Filip for your advice.
I'm posting a solution here, which may help others.
@Filip: What do you think about this solution?
@SupportedAnnotationTypes({ "org.mapstruct.Mapping", "org.mapstruct.Mappings" })
@SupportedSourceVersion(SourceVersion.RELEASE_8)
@AutoService(Processor.class)
public class MapperProcessor extends AbstractProcessor {

    private static final String SOURCE = "source";
    private static final String TARGET = "target";
    private static final Pattern PATTERN_TARGET = Pattern.compile(".*" + TARGET + "=\"([^\"]*)\".*");
    private static final Pattern PATTERN_SOURCE = Pattern.compile(".*" + SOURCE + "=\"([^\"]*)\".*");
    private static final String DEST_PATH = "META-INF/mapstruct/";

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        processingEnv.getMessager().printMessage(Diagnostic.Kind.NOTE, " MapperProcessor : creating metadata from mapstruct annotation");
        // init param
        Map<String, Map<String, String>> globalMapping = new HashMap<>();

        /////////// annotation : @Mappings
        for (Element element : roundEnv.getElementsAnnotatedWith(Mappings.class)) {
            final String className = ((TypeElement) element.getEnclosingElement()).getQualifiedName().toString();
            Map<? extends ExecutableElement, ? extends AnnotationValue> elementValues = element.getAnnotationMirrors().get(0).getElementValues();
            elementValues.values().forEach(value -> {
                Map<String, String> mapTargetSource = getTargetSourceValue((List<?>) value.getValue());
                globalMapping.put(className, mapTargetSource);
            });
        }

        /////////// annotation : @Mapping
        for (Element element : roundEnv.getElementsAnnotatedWith(Mapping.class)) {
            final String className = ((TypeElement) element.getEnclosingElement()).getQualifiedName().toString();
            List<? extends AnnotationMirror> annotationMirrors = element.getAnnotationMirrors();
            Map<? extends ExecutableElement, ? extends AnnotationValue> elementValues = annotationMirrors.get(0).getElementValues();
            String target = null;
            String source = null;
            for (Entry<? extends ExecutableElement, ? extends AnnotationValue> entrySet : elementValues.entrySet()) {
                final ExecutableElement key = entrySet.getKey();
                final AnnotationValue value = entrySet.getValue();
                if (StringUtils.equals(key.getSimpleName(), TARGET)) {
                    target = value.getValue().toString();
                }
                if (StringUtils.equals(key.getSimpleName(), SOURCE)) {
                    source = value.getValue().toString();
                }
                if (StringUtils.isNoneBlank(target, source)) {
                    break;
                }
            }
            Map<String, String> mapTargetSource = new HashMap<>();
            mapTargetSource.put(target, source);
            globalMapping.put(className, mapTargetSource);
        }

        writeMetaData(globalMapping);
        return true;
    }

    ///////////////// private methods

    private Map<String, String> getTargetSourceValue(List<?> listValue) {
        final Map<String, String> result = new HashMap<>();
        listValue.forEach(objValueBrut -> {
            String target = getValue(PATTERN_TARGET, objValueBrut.toString());
            String source = getValue(PATTERN_SOURCE, objValueBrut.toString());
            if (StringUtils.isNoneBlank(target, source)) {
                result.put(target, source);
            }
        });
        return result;
    }

    private static String getValue(Pattern pattern, String valueBrut) {
        Matcher matcher = pattern.matcher(valueBrut);
        if (matcher.matches()) {
            return matcher.group(1);
        }
        return null;
    }

    private void writeMetaData(Map<String, Map<String, String>> globalMapping) {
        for (Entry<String, Map<String, String>> entrySet : globalMapping.entrySet()) {
            String className = entrySet.getKey();
            Map<String, String> mapping = entrySet.getValue();
            if (mapping.isEmpty()) {
                continue;
            }
            try {
                FileObject resource = processingEnv.getFiler().createResource(StandardLocation.CLASS_OUTPUT, "", DEST_PATH + className + ".txt");
                try (PrintWriter out = new PrintWriter(resource.openWriter())) {
                    mapping.forEach((k, v) -> out.println(k + "=" + v));
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
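To close the loop on the original requirement (reading the generated txt files at runtime), here is a minimal sketch of a reader; MappingMetadataReader is a hypothetical name, and it assumes the generated files end up on the runtime classpath under META-INF/mapstruct/ exactly as written by the processor above:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Hypothetical runtime counterpart to the processor above: loads the
// target=source pairs generated for a given mapper interface.
public final class MappingMetadataReader {

    private MappingMetadataReader() {
    }

    public static Map<String, String> read(Class<?> mapperClass) throws IOException {
        String resource = "META-INF/mapstruct/" + mapperClass.getName() + ".txt";
        Map<String, String> mapping = new HashMap<>();
        try (InputStream in = mapperClass.getClassLoader().getResourceAsStream(resource)) {
            if (in == null) {
                return mapping; // no metadata was generated for this mapper
            }
            BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
            String line;
            while ((line = reader.readLine()) != null) {
                int eq = line.indexOf('=');
                if (eq > 0) {
                    mapping.put(line.substring(0, eq), line.substring(eq + 1));
                }
            }
        }
        return mapping;
    }
}
For the mapper shown in the question, MappingMetadataReader.read(MyMapper.class) would return the target=source pairs recorded in the generated file.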

Replacement for "GROUP BY" in ContentResolver query in Android Q ( Android 10, API 29 changes)

I'm upgrading some legacy code to target Android Q, and of course this code stopped working:
String[] PROJECTION_BUCKET = {MediaStore.Images.ImageColumns.BUCKET_ID,
MediaStore.Images.ImageColumns.BUCKET_DISPLAY_NAME,
MediaStore.Images.ImageColumns.DATE_TAKEN,
MediaStore.Images.ImageColumns.DATA,
"COUNT(" + MediaStore.Images.ImageColumns._ID + ") AS COUNT",
MediaStore.Files.FileColumns.MEDIA_TYPE,
MediaStore.MediaColumns._ID};
String BUCKET_GROUP_BY = " 1) and " + BUCKET_WHERE.toString() + " GROUP BY 1,(2";
cur = context.getContentResolver().query(images, PROJECTION_BUCKET,
BUCKET_GROUP_BY, null, BUCKET_ORDER_BY);
android.database.sqlite.SQLiteException: near "GROUP": syntax error (code 1 SQLITE_ERROR[1])
Here it is supposed to obtain a list of images with album name, date, and picture count (one image per album), so we can create an album picker screen without querying all pictures and looping through them to build albums.
Is it possible to group query results with ContentResolver now that raw SQL clauses have stopped working?
(I know that ImageColumns.DATA and "COUNT() AS COUNT" are deprecated too, but this is a question about GROUP BY.)
(There is a way to query albums and then separately query photos to obtain a photo uri for each album cover, but I want to avoid the overhead.)
Unfortunately GROUP BY is no longer supported in Android 10 and above, nor are aggregate functions such as COUNT. This is by design, and there is no workaround.
The solution is what you are actually trying to avoid: query, iterate, and compute the metrics yourself.
To get you started, you can use the next snippet, which resolves the buckets (albums) and the number of records in each one.
I haven't added code to resolve the thumbnails, but it is easy: perform a query for each bucket id from the Album instances and use the image from the first record (see the sketch after the class below).
public final class AlbumQuery
{
    @NonNull
    public static HashMap<String, AlbumQuery.Album> get(@NonNull final Context context)
    {
        final HashMap<String, AlbumQuery.Album> output = new HashMap<>();
        final Uri contentUri = MediaStore.Images.Media.EXTERNAL_CONTENT_URI;
        final String[] projection = {MediaStore.Images.Media.BUCKET_DISPLAY_NAME, MediaStore.Images.Media.BUCKET_ID};
        try (final Cursor cursor = context.getContentResolver().query(contentUri, projection, null, null, null))
        {
            if ((cursor != null) && (cursor.moveToFirst() == true))
            {
                final int columnBucketName = cursor.getColumnIndexOrThrow(MediaStore.Images.Media.BUCKET_DISPLAY_NAME);
                final int columnBucketId = cursor.getColumnIndexOrThrow(MediaStore.Images.Media.BUCKET_ID);
                do
                {
                    final String bucketId = cursor.getString(columnBucketId);
                    final String bucketName = cursor.getString(columnBucketName);
                    if (output.containsKey(bucketId) == false)
                    {
                        final int count = AlbumQuery.getCount(context, contentUri, bucketId);
                        final AlbumQuery.Album album = new AlbumQuery.Album(bucketId, bucketName, count);
                        output.put(bucketId, album);
                    }
                } while (cursor.moveToNext());
            }
        }
        return output;
    }

    private static int getCount(@NonNull final Context context, @NonNull final Uri contentUri, @NonNull final String bucketId)
    {
        try (final Cursor cursor = context.getContentResolver().query(contentUri,
                null, MediaStore.Images.Media.BUCKET_ID + "=?", new String[]{bucketId}, null))
        {
            return ((cursor == null) || (cursor.moveToFirst() == false)) ? 0 : cursor.getCount();
        }
    }

    public static final class Album
    {
        @NonNull
        public final String bucketId;
        @NonNull
        public final String bucketName;
        public final int count;

        Album(@NonNull final String bucketId, @NonNull final String bucketName, final int count)
        {
            this.bucketId = bucketId;
            this.bucketName = bucketName;
            this.count = count;
        }
    }
}
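As mentioned above, resolving a thumbnail per album boils down to one more query per bucket. A minimal sketch (getThumbnail is a hypothetical helper, not part of the original answer) that takes the first image of the bucket as the cover:
// Hypothetical helper: resolve an album cover from the first image in the bucket.
private static Uri getThumbnail(@NonNull final Context context, @NonNull final Uri contentUri, @NonNull final String bucketId)
{
    final String[] projection = {MediaStore.Images.Media._ID};
    try (final Cursor cursor = context.getContentResolver().query(contentUri, projection,
            MediaStore.Images.Media.BUCKET_ID + "=?", new String[]{bucketId}, null))
    {
        if ((cursor == null) || (cursor.moveToFirst() == false))
        {
            return null;
        }
        final long id = cursor.getLong(cursor.getColumnIndexOrThrow(MediaStore.Images.Media._ID));
        // Yields a content Uri for the image, which image loaders can consume directly.
        return ContentUris.withAppendedId(contentUri, id);
    }
}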
This is a more efficient (though not perfect) way to do it.
I am doing it for videos, but the same works for images too: just change MediaStore.Video.Media.X to MediaStore.Images.Media.X.
public class QUtils {
    /* created by Nasib June 6, 2020 */
    @RequiresApi(api = Build.VERSION_CODES.Q)
    public static ArrayList<FolderHolder> loadListOfFolders(Context context) {
        ArrayList<FolderHolder> allFolders = new ArrayList<>(); // list that we need
        HashMap<Long, String> folders = new HashMap<>(); // hashmap to track folders (no duplicates) by their ids
        String[] projection = {MediaStore.Video.Media._ID,
                MediaStore.Video.Media.BUCKET_ID,
                MediaStore.Video.Media.BUCKET_DISPLAY_NAME,
                MediaStore.Video.Media.DATE_ADDED};
        ContentResolver CR = context.getContentResolver();
        Uri root = MediaStore.Video.Media.getContentUri(MediaStore.VOLUME_EXTERNAL);
        Cursor c = CR.query(root, projection, null, null, MediaStore.Video.Media.DATE_ADDED + " desc");
        if (c != null && c.moveToFirst()) {
            int folderIdIndex = c.getColumnIndexOrThrow(MediaStore.Video.Media.BUCKET_ID);
            int folderNameIndex = c.getColumnIndexOrThrow(MediaStore.Video.Media.BUCKET_DISPLAY_NAME);
            int thumbIdIndex = c.getColumnIndexOrThrow(MediaStore.Video.Media._ID);
            int dateAddedIndex = c.getColumnIndexOrThrow(MediaStore.Video.Media.DATE_ADDED);
            do {
                Long folderId = c.getLong(folderIdIndex);
                if (folders.containsKey(folderId) == false) { // proceed only if the folder has not been inserted already :)
                    long thumbId = c.getLong(thumbIdIndex);
                    String folderName = c.getString(folderNameIndex);
                    long dateAdded = c.getLong(dateAddedIndex);
                    Uri thumbPath = ContentUris.withAppendedId(MediaStore.Video.Media.EXTERNAL_CONTENT_URI, thumbId);
                    folders.put(folderId, folderName);
                    allFolders.add(new FolderHolder(folderId, String.valueOf(thumbPath), folderName, dateAdded));
                }
            } while (c.moveToNext());
            c.close(); // close cursor
            folders.clear(); // clear the hashmap because it's no longer useful
        }
        return allFolders;
    }
}
The FolderHolder model class:
public class FolderHolder {
    private String folderName;
    public long dateAdded;
    private String thumbnailPath;
    public long folderId;

    public FolderHolder(long folderId, String thumbnailPath, String folderName, long dateAdded) {
        this.folderId = folderId;
        this.folderName = folderName;
        this.thumbnailPath = thumbnailPath;
        this.dateAdded = dateAdded;
    }

    public void setPath(String thumbnailPath) {
        this.thumbnailPath = thumbnailPath;
    }

    public String getThumbnailPath() {
        return thumbnailPath;
    }

    public String getFolderName() {
        return folderName;
    }
}
GROUP BY is supported when querying with a Bundle:
val bundle = Bundle().apply {
putString(
ContentResolver.QUERY_ARG_SQL_SORT_ORDER,
"${MediaStore.MediaColumns.DATE_MODIFIED} DESC"
)
putString(
ContentResolver.QUERY_ARG_SQL_GROUP_BY,
MediaStore.Images.ImageColumns.BUCKET_ID
)
}
contentResolver.query(
uri,
arrayOf(
MediaStore.Images.ImageColumns.BUCKET_ID,
MediaStore.Images.ImageColumns.BUCKET_DISPLAY_NAME,
MediaStore.Images.ImageColumns.DATE_TAKEN,
MediaStore.Images.ImageColumns.DATA
),
bundle,
null
)

CQ5 multifield configuration service

I'm trying to create a CQ5 service with a multifield configuration interface. It would be something like this, but a click of the PLUS button would add not just a new row but a group of N rows:
Property
Field1 +-
Field2
....
FieldN
Any advice?
As far as I know there is no such possibility in Apache Felix.
Depending on your actual requirement, I would consider decomposing the configuration. Try moving all the fieldsets (the groups of fields that you'd like to add through the plus button) into a separate configuration. Then, much like the slf4j.Logger configuration, you would have a Configuration Factory approach.
A simple configuration factory can look like the following:
@Component(immediate = true, configurationFactory = true, metatype = true, policy = ConfigurationPolicy.OPTIONAL, name = "com.foo.bar.MyConfigurationProvider", label = "Multiple Configuration Provider")
@Service(serviceFactory = false, value = { MyConfigurationProvider.class })
@Properties({
        @Property(name = "propertyA", label = "Value for property A"),
        @Property(name = "propertyB", label = "Value for property B") })
public class MyConfigurationProvider {

    private String propertyA;
    private String propertyB;

    @Activate
    protected void activate(final Map<String, Object> properties, final ComponentContext componentContext) {
        propertyA = PropertiesUtil.toString(properties.get("propertyA"), null);
        propertyB = PropertiesUtil.toString(properties.get("propertyB"), null);
    }
}
Using it is as simple as adding a reference in any @Component:
@Reference(cardinality = ReferenceCardinality.OPTIONAL_MULTIPLE, referenceInterface = MyConfigurationProvider.class, policy = ReferencePolicy.DYNAMIC)
private final List<MyConfigurationProvider> providers = new LinkedList<MyConfigurationProvider>();

protected void bindProviders(MyConfigurationProvider provider) {
    providers.add(provider);
}

protected void unbindProviders(MyConfigurationProvider provider) {
    providers.remove(provider);
}
This is one way of doing it.
#Component(label = "My Service", metatype = true, immediate = true)
#Service(MyService.class)
#Properties({
#Property(name = "my.property", description = "Provide details Eg: url=http://www.google.com|size=10|path=/content/project", value = "", unbounded = PropertyUnbounded.ARRAY) })
public class MyService {
private String[] myPropertyDetails;
#Activate
protected void activate(ComponentContext ctx) {
this.myPropertyDetails = getPropertyAsArray(ctx.getProperties().get("my.property"));
try {
if (null != myPropertyDetails && myPropertyDetails.length > 0) {
for(String myPropertyDetail : myPropertyDetails) {
Map<String, String> map = new HashMap<String, String>();
String[] propertyDetails = myPropertyDetails.split("|");
for (String keyValuePair : propertyDetails) {
String[] keyValue = keyValuePair.split("=");
if (null != keyValue && keyValue.length > 1) {
map.put(keyValue[0], keyValue[1]);
}
}
/* the map now has all the properties in the form of key value pairs for single field
use this for logic execution. when there are no multiple properties in the row,
you can skip the logic to split and add in the map */
}
}
} catch (Exception e) {
log.error( "Exception ", e.getMessage());
}
}
private String[] getPropertyAsArray(Object obj) {
String[] paths = { "" };
if (obj != null) {
if (obj instanceof String[]) {
paths = (String[]) obj;
} else {
paths = new String[1];
paths[0] = (String) obj;
}
}
return paths;
}
}
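To make the parsing above concrete, a small standalone illustration (SplitDemo is a hypothetical class, not part of the service) of how one configured row is split into key/value pairs; note the escaped pipe, since an unescaped "|" would split on every character:
public class SplitDemo {
    public static void main(String[] args) {
        String detail = "url=http://www.google.com|size=10|path=/content/project";
        java.util.Map<String, String> map = new java.util.HashMap<String, String>();
        for (String pair : detail.split("\\|")) {
            String[] keyValue = pair.split("=");
            if (keyValue.length > 1) {
                map.put(keyValue[0], keyValue[1]);
            }
        }
        // prints the three pairs: url, size, path (HashMap iteration order is unspecified)
        System.out.println(map);
    }
}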

Mongo DB grouping datatype changes

I came across an odd occurrence while using MongoDB and their Java driver.
When I do a grouping query, the datatype of the key changes from an int to a double (i.e. I am grouping on a key for 'hours', which is stored as an int in all the objects, but the key changes into a double in the results I get back).
It isn't a huge issue... but it is weird that it would just arbitrarily change the datatype of a key-value pair like that. Has anyone else had this come up? Is this normal behaviour?
Thanks.
P.S. Doing a regular .find() query returns the correct datatype, fyi.
Edit:
Some example code:
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.QueryOperators;

public class MongoTestQueries {

    private static final String TESTDBNAME = "badgerbadgerbadger";
    private static final String TESTCOLNAME = "mushroom";
    private static final Long TESTMAX = 50L;
    private static final String KEY1 = "a";
    private static final String KEY2 = "snake";
    private static final String KEY3 = "plane";

    /**
     * This starts running it.
     *
     * @param args
     *            the arguments.
     */
    public static void main(final String[] args) {
        // You'll need to write your own code here for connecting to the db as you see fit.
        MongoConnection mc = new MongoConnection("someserver.com", TESTDBNAME);
        mc.setCurCol(TESTCOLNAME);
        mc.getCurCol().drop();
        mc.setCurCol(TESTCOLNAME);
        DBCollection col = mc.getCurCol();
        populateCollection(col);
        System.out.println(col.count() + " inserted into db.");
        regGroupSearch(col);
    }

    private static void populateCollection(DBCollection col) {
        for (Long l = 0L; l < TESTMAX; l++) {
            col.insert(new BasicDBObject(KEY1, new Integer(l.intValue())).append(KEY2,
                    Math.random()).append(KEY3, (TESTMAX - l) + "a string"));
        }
    }

    private static void regGroupSearch(final DBCollection col) {
        System.out.println("Group Search:");
        DBObject key = new BasicDBObject(KEY1, true).append(KEY3, true);
        DBObject cond = new BasicDBObject().append(KEY1, new BasicDBObject(QueryOperators.GT, 4.0));
        DBObject initial = new BasicDBObject("count", 0).append("sum", 0);
        String reduce = "function(obj,prev){prev.sum+=obj." + KEY2 + ",prev.count+=1}";
        String finalize = "function(obj){obj.ave = obj.sum/obj.count}";
        DBObject groupResult = col.group(key, cond, initial, reduce, finalize);
        printDBObject(groupResult);
        System.out.println("Done.");
    }

    private static void printDBObject(final DBObject toPrint) {
        for (String k : toPrint.keySet()) {
            System.out.println(k + ": " + toPrint.get(k));
        }
    }
}
}