Reading of DBname.system.indexes failed on Atlas cluster by mongobee after getting connected - mongodb

I have a JHipster Spring Boot project. Recently I moved from mLab standalone sandboxes to an Atlas M0 free-tier replica-set cluster. It worked at first and I performed some database operations on it, but now for some reason there is a read permission error:
Error creating bean with name 'mongobee' defined in class path resource [DatabaseConfiguration.class]: Invocation of init method failed; nested exception is com.mongodb.MongoQueryException: Query failed with error code 8000 and error message 'user is not allowed to do action [find] on [test.system.indexes]' on server ********-shard-00-01-mfwhq.mongodb.net:27017
You can see the full stack trace here: https://pastebin.com/kaxcr7VS
I have searched high and low, and all I could find is that an M0-tier user doesn't have permission to overwrite the admin database, which I am not doing.
Even now, connecting to the mLab DB works fine, but I have this issue on the Atlas M0 tier.
MongoDB version: 3.4
Jars and their versions:
name: 'mongobee', version: '0.10'
name: 'mongo-java-driver', version: '3.4.2'
@Neil Lunn
The user ID I am using to connect is the admin's, and reads and writes work fine through the shell or Robo 3T (a Mongo client).

After discussion with the MongoDB support team: MongoDB 3.0 deprecates direct access to the system.indexes collection, which had previously been used to list all indexes in a database. Applications should use db.<COLLECTION>.getIndexes() instead.
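For example, with the Java driver the same listing can be done through listIndexes(), which runs the listIndexes command instead of reading system.indexes. A minimal sketch (the URI is a placeholder, and "dbchangelog" is mongobee's default changelog collection):
import com.mongodb.MongoClient;
import com.mongodb.MongoClientURI;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class ListIndexesExample {
    public static void main(String[] args) {
        // Placeholder URI; listIndexes() works on Atlas M0, where direct
        // reads of system.indexes are rejected with error 8000.
        MongoClient client = new MongoClient(new MongoClientURI("mongodb://localhost:27017"));
        try {
            MongoDatabase db = client.getDatabase("test");
            for (Document index : db.getCollection("dbchangelog").listIndexes()) {
                System.out.println(index.toJson());
            }
        } finally {
            client.close();
        }
    }
}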
From the MongoDB Atlas docs it can be seen that they may forbid calls to system.* collections:
Optionally, for the read and readWrite role, you can also specify a collection. If you do not specify a collection for read and readWrite, the role applies to all collections (excluding some system. collections) in the database.
From the stack trace it's visible that MongoBee is trying to make this call, so it's the library's issue, and the library needs to be updated.
UPDATE:
To work around the issue until MongoBee releases a new version:
1. Get the latest MongoBee sources: git clone git@github.com:mongobee/mongobee.git, then cd mongobee
2. Fetch the pull request: git fetch origin pull/87/head:mongobee-atlas
3. Check it out: git checkout mongobee-atlas
4. Build and install the MongoBee jar: mvn clean install
5. Get the compiled jar from the /target folder or from your local /.m2
6. Use the jar as a dependency in your project
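For reference, the failing bean is the usual JHipster-style Mongobee configuration. A minimal sketch of what the DatabaseConfiguration.mongobee() bean from the stack trace typically looks like (database and package names here are assumptions):
import com.github.mongobee.Mongobee;
import com.mongodb.MongoClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DatabaseConfiguration {

    @Bean
    public Mongobee mongobee(MongoClient mongoClient) {
        Mongobee mongobee = new Mongobee(mongoClient);
        mongobee.setDbName("test"); // assumed database name
        // Package scanned for @ChangeLog classes; adjust to your project.
        mongobee.setChangeLogsScanPackage("com.example.config.dbmigrations");
        return mongobee;
    }
}
With the pull-request jar on the classpath, this same bean should initialize without touching system.indexes.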

Came across this issue this morning. Here's a quick-and-dirty monkey patch for the problem:
package com.github.mongobee.dao;

import com.github.mongobee.changeset.ChangeEntry;
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;

import java.util.List;

import static com.github.mongobee.changeset.ChangeEntry.CHANGELOG_COLLECTION;

public class ChangeEntryIndexDao {

    public void createRequiredUniqueIndex(DBCollection collection) {
        collection.createIndex(new BasicDBObject()
                        .append(ChangeEntry.KEY_CHANGEID, 1)
                        .append(ChangeEntry.KEY_AUTHOR, 1),
                new BasicDBObject().append("unique", true));
    }

    public DBObject findRequiredChangeAndAuthorIndex(DB db) {
        DBCollection changelogCollection = db.getCollection(CHANGELOG_COLLECTION);
        // getIndexInfo() uses the listIndexes command rather than querying
        // system.indexes, which Atlas forbids.
        List<DBObject> indexes = changelogCollection.getIndexInfo();
        if (indexes == null) return null;
        for (DBObject index : indexes) {
            BasicDBObject indexKeys = (BasicDBObject) index.get("key");
            if (indexKeys != null
                    && indexKeys.get(ChangeEntry.KEY_CHANGEID) != null
                    && indexKeys.get(ChangeEntry.KEY_AUTHOR) != null) {
                return index;
            }
        }
        return null;
    }

    public boolean isUnique(DBObject index) {
        Object unique = index.get("unique");
        return unique instanceof Boolean && (Boolean) unique;
    }

    public void dropIndex(DBCollection collection, DBObject index) {
        collection.dropIndex(index.get("name").toString());
    }
}
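Because this class lives in the same package and has the same name as the library's ChangeEntryIndexDao, most build setups place your compiled classes ahead of the mongobee jar on the classpath, so the patched version shadows the original with no other changes. Note the caveat below, though: this mirrors mongobee 0.10's no-arg constructor, so it won't line up with mongobee versions that changed the constructor signature.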

Caused by: java.lang.NoSuchMethodError: com.github.mongobee.dao.ChangeEntryIndexDao.<init>(Ljava/lang/String;)V
at com.github.mongobee.dao.ChangeEntryDao.<init>(ChangeEntryDao.java:34)
at com.github.mongobee.Mongobee.<init>(Mongobee.java:87)
at com.xxx.proj.config.DatabaseConfiguration.mongobee(DatabaseConfiguration.java:62)
at com.xxx.proj.config.DatabaseConfiguration$$EnhancerBySpringCGLIB$$4ae465a5.CGLIB$mongobee$1(<generated>)
at com.xxx.proj.config.DatabaseConfiguration$$EnhancerBySpringCGLIB$$4ae465a5$$FastClassBySpringCGLIB$$f202afb.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:358)
at com.xxx.proj.config.DatabaseConfiguration$$EnhancerBySpringCGLIB$$4ae465a5.mongobee(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:162)
... 22 common frames omitted
JHipster 5 must be using a different mongobee version, because I get the above error when implementing this code. It looks like it's expecting a different constructor.

This access to system.indexes is an open issue in mongobee. The issue has been fixed in the project, but the project was abandoned before the fix was ever released.
Because of this abandonment, two successor libraries have since been forked from mongobee that fix this issue: Mongock and mongobeeJ.
Switching your application's dependency from the mongobee library to one of these successor libraries will allow you to run mongobee database migrations on Atlas.
To summarize these libraries:
Mongock - Forked from mongobee in 2018. Actively maintained. Has evolved significantly from the original, including built-in support for Spring, Spring Boot, and versions 3 and 4 of the Mongo Java driver.
mongobeeJ - Forked from mongobee in 2018. Five updated versions have been released. Minimal evolution from the original mongobee. Mongo Java Driver 4 support was added in August 2020. The project was deprecated that same month, with a recommendation from its creators to use a library such as Mongock instead.
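Migration code generally carries over with little change; early Mongock versions, for example, kept mongobee's @ChangeLog/@ChangeSet annotation model. A sketch in the style of Mongock's v4-era API (the annotation package and supported parameter types are assumptions, so check the Mongock docs for your driver version):
import com.github.cloudyrock.mongock.ChangeLog;
import com.github.cloudyrock.mongock.ChangeSet;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

@ChangeLog(order = "001")
public class DatabaseChangelog {

    @ChangeSet(order = "001", id = "seedSummary", author = "admin")
    public void seedSummary(MongoDatabase db) {
        // Runs once; Mongock tracks executed change sets in its own
        // changelog collection, just as mongobee did.
        db.getCollection("summary").insertOne(new Document("message", "hello"));
    }
}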

Related

Why does Gorm not recognize mongo domain class in Grails 4?

Grails 4.0.10. MongoDB 5.0.9 Community.
I'm following the instructions at https://gorm.grails.org/latest/mongodb/manual/ with a Grails Plugin project.
First anomaly, build.gradle:
compile 'org.grails.plugins:mongodb:7.3.0'
Once I do this I get dependency errors, and I also have to add:
compile 'org.mongodb:mongodb-driver-core:4.7.0'
compile 'org.mongodb:mongodb-driver-sync:4.7.0'
OK, everything else is pretty much a vanilla project. I created a test domain class QtxResponse as:
@Entity
class QtxResponse {

    static mapWith = "mongo"

    String objectId

    static constraints = {
    }

    static mapping = {
        //id column: "object_id"
        objectId index: true
    }
}
The project fires up without error. Using the console, I create a new QtxResponse (the class was generated via create-domain-class) with a String objectId property and try to save it. This is what I get:
java.lang.IllegalStateException: Either class [domainobject.qualtrics.QtxResponse] is not a domain class or GORM has not been initialized correctly or has already been shutdown. Ensure GORM is loaded and configured correctly before calling any methods on a GORM entity.
What is this telling me? Is it something with the GORM setup or something with the mapping to MongoDB? I have tried both with and without Hibernate.
For Grails 4.0.10, this configuration worked (with Hibernate):
compile 'org.grails.plugins:mongodb:6.1.7'
compile 'org.mongodb:bson:4.7.0'
compile "org.grails.plugins:hibernate5"
compile "org.hibernate:hibernate-core:5.4.0.Final"
compile "org.hibernate:hibernate-ehcache:5.1.3.Final"

Javers async commit into Mongo DB

Looking for proper documentation on asynchronous commits to MongoDB. We have a Spring Boot app where we generate audits for our domain objects, and we would like to commit the audits generated by JaVers into MongoDB asynchronously so that our main SQL-based transaction is free of this MongoDB call. Any pointers on this would be really helpful.
If you are using the JaVers Spring Boot Mongo starter, you can simply put the @JaversAuditableAsync annotation on a repository method.
There are limitations:
It works only with Mongo, and there is no integration for the magically auto-generated ReactiveMongoRepository yet. So you have to put @JaversAuditableAsync on an actual method that does the save.
@Repository
interface DummyObjectReactiveRepository
        extends ReactiveMongoRepository<DummyObject, String> { }

...

@Repository
class MyRepository {

    @Autowired DummyObjectReactiveRepository dummyObjectReactiveRepository;

    @JaversAuditableAsync
    void save(DummyObject d) {
        // block() so the lazy reactive save actually executes before returning;
        // without subscribing, the Mono returned by save() does nothing.
        dummyObjectReactiveRepository.save(d).block();
    }
}
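Usage is then an ordinary method call; a small illustrative sketch (the service class is hypothetical, only MyRepository comes from the snippet above):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
class DummyObjectService {

    @Autowired MyRepository myRepository;

    void createDummyObject(DummyObject d) {
        // The repository save happens here; the JaVers audit commit runs
        // asynchronously on JaVers' executor, so the caller isn't blocked by it.
        myRepository.save(d);
    }
}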

MongoDB "no SNI name sent" error from heroku java application

I followed this tutorial to deploy a sample application to Heroku. I just added the below method in the MyResource class and returned its result instead of "Hello World" from the getIt() method. I'm connecting to an Atlas free-tier cluster:
static String getMessage() {
    MongoClient mongoClient = new MongoClient(new MongoClientURI(
            "mongodb://<USER>:<PASSWORD>@cluster0-shard-00-00-2lbue.mongodb.net:27017,cluster0-shard-00-01-2lbue.mongodb.net:27017,cluster0-shard-00-02-2lbue.mongodb.net:27017/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin"));
    DB database = mongoClient.getDB("mastery");
    DBCollection collection = database.getCollection("summary");
    DBObject query = new BasicDBObject("_id", new ObjectId("5c563fa2645d6b444c018dcb"));
    DBCursor cursor = collection.find(query);
    return (String) cursor.one().get("message");
}
This is the driver I'm using:
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongodb-driver</artifactId>
<version>3.9.1</version>
</dependency>
This is my import:
import com.mongodb.*;
The application works fine from my local system. But I face the below error when I deploy the application to Heroku and hit the service:
INFO: Exception in monitor thread while connecting to server cluster0-shard-00-01-2lbue.mongodb.net:27017
com.mongodb.MongoCommandException: Command failed with error 8000 (AtlasError): 'no SNI name sent, make sure using a MongoDB 3.4+ driver/shell.' on server cluster0-shard-00-01-2lbue.mongodb.net:27017. The full response is { "ok" : 0, "errmsg" : "no SNI name sent, make sure using a MongoDB 3.4+ driver/shell.", "code" : 8000, "codeName" : "AtlasError" }
at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:179)
at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:299)
at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:255)
at com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)
at com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)
at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:106)
at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:63)
at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:127)
at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)
at java.lang.Thread.run(Thread.java:748)
What is this SNI name? I can understand that the driver is able to pick it up from my machine but not from the Heroku machine, yet I'm clueless about how to go about solving this! Is there a way to configure Heroku to reveal the SNI name when the driver asks for it? Can we get this value manually from somewhere in Heroku and feed it directly to the MongoDB driver? Any help is appreciated.
EDIT:
It turned out that the client sends the SNI name of the server it wishes to connect to as part of the TLS handshake, and there seems to be a way to manually indicate the name in the Python driver. Is there any way to do this from Java? Still puzzled why this is not an issue when running the app locally.
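For reference, SNI is just an extension in the TLS ClientHello that names the host the client wants to reach; on Java 8+ the JDK sends it automatically when a socket is created with a host name. A sketch of where it lives in the javax.net.ssl API (host name is a placeholder, and this is an illustration only, not something the Mongo driver requires you to do yourself):
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import java.util.Collections;

public class SniExample {
    public static void main(String[] args) throws Exception {
        SSLSocket socket = (SSLSocket) SSLSocketFactory.getDefault()
                .createSocket("cluster0-shard-00-00-2lbue.mongodb.net", 27017);
        SSLParameters params = socket.getSSLParameters();
        // The server_name extension Atlas checks for; Java 8+ sets this
        // automatically when the socket is created with a host name.
        params.setServerNames(Collections.singletonList(
                new SNIHostName("cluster0-shard-00-00-2lbue.mongodb.net")));
        socket.setSSLParameters(params);
        socket.startHandshake(); // the ClientHello now carries the SNI extension
        System.out.println("TLS protocol: " + socket.getSession().getProtocol());
        socket.close();
    }
}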
The code I was using to connect to the cluster turned out to be wrong. I followed the directions from the docs, which mention this:
To connect to an Atlas M0 (Free Tier) cluster, you must use Java version 8 or greater and use a Java driver version that supports MongoDB 3.4.
So I changed the Java version to 1.8 in the system.properties file:
java.runtime.version=1.8
It was earlier set to 1.7. I was also getting a deprecation warning on one of the methods I had used, so I again followed the docs to use the latest code, and it worked like a charm.
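For anyone landing here, the non-deprecated connection style from the current docs looks roughly like this (a sketch assuming driver 3.9's com.mongodb.client API; the mongodb+srv host is a placeholder for the one Atlas shows in its connect dialog):
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import org.bson.types.ObjectId;

public class MessageReader {
    static String getMessage() {
        // mongodb+srv resolves the replica-set members via DNS and
        // enables TLS (and therefore SNI) by default.
        MongoClient mongoClient = MongoClients.create(
                "mongodb+srv://<USER>:<PASSWORD>@cluster0-2lbue.mongodb.net/test?retryWrites=true");
        MongoDatabase database = mongoClient.getDatabase("mastery");
        MongoCollection<Document> collection = database.getCollection("summary");
        Document doc = collection.find(
                new Document("_id", new ObjectId("5c563fa2645d6b444c018dcb"))).first();
        return doc == null ? null : doc.getString("message");
    }
}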
The real takeaway here is to refer to the official docs every time :)

Can't restore OrientDB Backup

I'm having trouble restoring an OrientDB database from a backup. I'm using OrientDB version 1.2.0 (this backup is from November 2012) and the backup was produced by OrientDB (same version) using the built-in backup utility. I'm trying to restore the backup to a new database using the OrientDB console:
create database remote:localhost/dbname root password local graph
import database backup.json
But when I run those commands, I get the following error in the console:
Importing indexes ...
- Index 'dictionary'...Error on database import happened just before line 22258, column 6
com.orientechnologies.orient.core.exception.OConcurrentModificationException: Cannot update record #0:1 in storage 'dbname' because the version is not the latest. Probably you are updating an old record or it has been modified by another user (db=v2 your=v1)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinary.createException(OChannelBinary.java:429)
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinary.handleStatus(OChannelBinary.java:382)
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinaryAsynch.beginResponse(OChannelBinaryAsynch.java:145)
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinaryAsynch.beginResponse(OChannelBinaryAsynch.java:59)
at com.orientechnologies.orient.client.remote.OStorageRemote.beginResponse(OStorageRemote.java:1556)
at com.orientechnologies.orient.client.remote.OStorageRemote.command(OStorageRemote.java:727)
at com.orientechnologies.orient.client.remote.OStorageRemoteThread.command(OStorageRemoteThread.java:191)
at com.orientechnologies.orient.core.command.OCommandRequestTextAbstract.execute(OCommandRequestTextAbstract.java:60)
at com.orientechnologies.orient.core.index.OIndexManagerRemote.dropIndex(OIndexManagerRemote.java:80)
at com.orientechnologies.orient.core.index.OIndexManagerProxy.dropIndex(OIndexManagerProxy.java:80)
at com.orientechnologies.orient.core.db.tool.ODatabaseImport.importIndexes(ODatabaseImport.java:687)
at com.orientechnologies.orient.core.db.tool.ODatabaseImport.importDatabase(ODatabaseImport.java:127)
at com.orientechnologies.orient.console.OConsoleDatabaseApp.importDatabase(OConsoleDatabaseApp.java:1419)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.orientechnologies.common.console.OConsoleApplication.execute(OConsoleApplication.java:238)
at com.orientechnologies.common.console.OConsoleApplication.executeCommands(OConsoleApplication.java:127)
at com.orientechnologies.common.console.OConsoleApplication.run(OConsoleApplication.java:92)
at com.orientechnologies.orient.console.OConsoleDatabaseApp.main(OConsoleDatabaseApp.java:130)
All of the records import correctly, but it fails on the indexes. I have 15+ backups of the same database and they all have this issue, so it seems unlikely that they're all corrupted. How can I restore my database? (I'm OK with having to modify the JSON if need be.)
When trying to use local mode rather than remote mode, I get a different error:
Started import of database 'local:dbname' from dbname.json...
Importing database info...OK
Importing clusters...
- Creating cluster 'internal'...OK, assigned id=0
- Creating cluster 'default'...Error on database import happened just before line 13, column 52
com.orientechnologies.orient.core.exception.OConfigurationException: Imported cluster 'default' has id=3 different from the original: 2. To continue the import drop the cluster 'manindex' that has 1 records
at com.orientechnologies.orient.core.db.tool.ODatabaseImport.importClusters(ODatabaseImport.java:544)
at com.orientechnologies.orient.core.db.tool.ODatabaseImport.importDatabase(ODatabaseImport.java:130)
at com.orientechnologies.orient.console.OConsoleDatabaseApp.importDatabase(OConsoleDatabaseApp.java:1414)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.orientechnologies.common.console.OConsoleApplication.execute(OConsoleApplication.java:269)
at com.orientechnologies.common.console.OConsoleApplication.executeCommands(OConsoleApplication.java:157)
at com.orientechnologies.common.console.OConsoleApplication.run(OConsoleApplication.java:97)
at com.orientechnologies.orient.graph.console.OGremlinConsole.main(OGremlinConsole.java:53)
Error: com.orientechnologies.orient.core.db.tool.ODatabaseExportException: Error on importing database 'dbname' from file: dbname.json
Error: com.orientechnologies.orient.core.exception.OConfigurationException: Imported cluster 'default' has id=3 different from the original: 2. To continue the import drop the cluster 'manindex' that has 1 records
It seems like the issue is that my old cluster IDs don't match the ones in the new database. Perhaps there are creation options that affect which clusters are created by default?
It turns out that the issue was that the database was being imported with OrientDB version 1.2.0 but had been created with 1.0.1 (or 1.0.0). The JSON file listed 1.2.0 as the engine-version, but that was the engine that backed it up, not the one that created it. After I tried Lvca's suggestion of using local mode, I discovered that the cluster IDs weren't matching up, and from there I guessed that the version numbers might not be correct. So it seems that even though I was running the database with the 1.2.0 server and the 1.2.0 Java library, the database was never "upgraded" to the new 1.2.0 schema.
Thanks to Lvca for a nudge in the right direction.

Moving HDFS data into MongoDB

I am trying to move HDFS data into MongoDB. I know how to export data into MySQL by using Sqoop, but I don't think I can use Sqoop for MongoDB. I need help understanding how to do that.
This recipe will use the MongoOutputFormat class to load data from an HDFS instance into a MongoDB collection.
Getting ready
The easiest way to get started with the Mongo Hadoop Adaptor is to clone the mongo-hadoop project from GitHub and build the project configured for a specific version of Hadoop. A Git client must be installed to clone this project. This recipe assumes that you are using the CDH3 distribution of Hadoop.
The official Git client can be found at http://git-scm.com/downloads.
The Mongo Hadoop Adaptor can be found on GitHub at https://github.com/mongodb/mongo-hadoop. This project needs to be built for a specific version of Hadoop. The resulting JAR file must be installed on each node in the $HADOOP_HOME/lib folder.
The Mongo Java Driver is also required to be installed on each node in the $HADOOP_HOME/lib folder. It can be found at https://github.com/mongodb/mongo-java-driver/downloads.
How to do it...
Complete the following steps to copy data from HDFS into MongoDB:
1. Clone the mongo-hadoop repository with the following command line:
git clone https://github.com/mongodb/mongo-hadoop.git
2. Switch to the stable release 1.0 branch:
git checkout release-1.0
3. Set the Hadoop version which mongo-hadoop should target. In the folder that mongo-hadoop was cloned to, open the build.sbt file with a text editor. Change the following line:
hadoopRelease in ThisBuild := "default"
to
hadoopRelease in ThisBuild := "cdh3"
4. Build mongo-hadoop:
./sbt package
This will create a file named mongo-hadoop-core_cdh3u3-1.0.0.jar in the core/target folder.
5. Download the MongoDB Java Driver version 2.8.0 from https://github.com/mongodb/mongo-java-driver/downloads.
6. Copy mongo-hadoop and the MongoDB Java Driver to $HADOOP_HOME/lib on each node:
cp mongo-hadoop-core_cdh3u3-1.0.0.jar mongo-2.8.0.jar $HADOOP_HOME/lib
7. Create a Java MapReduce program that will read the weblog_entries.txt file from HDFS and write the entries to MongoDB using the MongoOutputFormat class:
import java.io.*;

import org.apache.commons.logging.*;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.bson.*;
import org.bson.types.ObjectId;

import com.mongodb.hadoop.*;
import com.mongodb.hadoop.util.*;

public class ExportToMongoDBFromHDFS {

    private static final Log log = LogFactory.getLog(ExportToMongoDBFromHDFS.class);

    public static class ReadWeblogs extends Mapper<LongWritable, Text, ObjectId, BSONObject> {

        // With TextInputFormat the input key is the byte offset (LongWritable),
        // not Text; a map(Text, ...) signature would never be called.
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {

            System.out.println("Key: " + key);
            System.out.println("Value: " + value);

            String[] fields = value.toString().split("\t");

            String md5 = fields[0];
            String url = fields[1];
            String date = fields[2];
            String time = fields[3];
            String ip = fields[4];

            BSONObject b = new BasicBSONObject();
            b.put("md5", md5);
            b.put("url", url);
            b.put("date", date);
            b.put("time", time);
            b.put("ip", ip);

            context.write(new ObjectId(), b);
        }
    }

    public static void main(String[] args) throws Exception {

        final Configuration conf = new Configuration();
        MongoConfigUtil.setOutputURI(conf, "mongodb://<HOST>:<PORT>/test.weblogs");
        System.out.println("Configuration: " + conf);

        final Job job = new Job(conf, "Export to Mongo");

        Path in = new Path("/data/weblogs/weblog_entries.txt");
        FileInputFormat.setInputPaths(job, in);

        job.setJarByClass(ExportToMongoDBFromHDFS.class);
        job.setMapperClass(ReadWeblogs.class);

        job.setOutputKeyClass(ObjectId.class);
        job.setOutputValueClass(BSONObject.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(MongoOutputFormat.class);

        // Map-only job: write straight to MongoDB with no reduce phase.
        job.setNumReduceTasks(0);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
8. Export as a runnable JAR file and run the job:
hadoop jar ExportToMongoDBFromHDFS.jar
9. Verify that the weblogs MongoDB collection was populated from the Mongo shell:
db.weblogs.find();
The basic problem is that Mongo stores its data in BSON (binary JSON), while your HDFS data may be in different formats (text, sequence, Avro). The easiest thing to do would be to use Pig to load your results into MongoDB using this driver:
https://github.com/mongodb/mongo-hadoop/tree/master/pig
You'll have to map your values to your collection; there's a good example on the GitHub page.