Starting from version 3.0, MongoDB supports pluggable storage engines. How can you tell which storage engine a system is using?
The easiest way to find the storage engine currently in use is from the mongo console.
Inside the mongo console, type (you might need admin access to run this command):
db.serverStatus().storageEngine
If it returns
{ "name" : "wiredTiger" }
then the WiredTiger storage engine is being used.
Once it is confirmed that WiredTiger is being used, type
db.serverStatus().wiredTiger
to get all the configuration details of wiredTiger.
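If you prefer not to open an interactive session, the same lookups can be scripted. A minimal sketch, assuming a mongod listening on the default localhost port and sufficient privileges:

```shell
# Print just the storage engine name non-interactively:
mongo --quiet --eval 'db.serverStatus().storageEngine.name'

# Dump the full WiredTiger configuration section as JSON:
mongo --quiet --eval 'printjson(db.serverStatus().wiredTiger)'
```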
On the console, Mayank's answer makes more sense.
On the other hand, if you use a MongoDB GUI like MongoChef or Robomongo, the storage engine can be found in the ways shown below:
On Robomongo:
On MongoChef:
You can detect this via:
db.serverStatus().wiredTiger
So, at present, if this field exists then a storage engine other than the default MMAPv1 is configured, i.e. WiredTiger is in use; where WiredTiger is not used, the field is absent.
This applies to the current MongoDB 3.0.x series.
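That presence check can be applied in one line from the command line; a sketch, assuming a local server:

```shell
# Prints "object" when the wiredTiger section exists (WiredTiger in use),
# and "undefined" otherwise (e.g. the default MMAPv1 on the 3.0.x series).
mongo --quiet --eval 'typeof db.serverStatus().wiredTiger'
```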
Related
I've followed all the steps in this guide: https://www.prisma.io/docs/getting-started/setup-prisma/start-from-scratch/mongodb-typescript-mongodb
Now I'm trying to add some information to the database and I get an error message:
Storage engine does not support read concern: { readConcern: { level: "majority", provenance: "clientSupplied" } }
My MongoDB version is 4.4.15.
We have a replica set and the enableMajorityReadConcern: false setting.
How can I solve this problem with the create method?
I'm expecting a successful create operation.
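One thing worth verifying first (a hedged suggestion, not a confirmed fix): with enableMajorityReadConcern: false, the server rejects "majority" reads, which matches this error. You can check what the running storage engine reports, assuming shell access to a replica-set member:

```shell
# supportsCommittedReads is false when majority read concern is unavailable:
mongo --quiet --eval 'db.serverStatus().storageEngine.supportsCommittedReads'
```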
I was trying to install the latest MongoDB on my macOS machine via the official tutorial. The following commands effectively seem to work for me:
brew services start mongodb-community@4.4
brew services list
lists the following
mongodb-community started naman .../LaunchAgents/homebrew.mxcl.mongodb-community.plist
Further, when I try to execute mongo, I am able to successfully create a shell session
MongoDB shell version v4.4.3
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("57e62dd9-77f2-48c2-8fe8-8fe9fe79a1d2") }
and can access databases and collections, execute queries, and so on.
But when I then try to connect to this URI using MongoDB Compass, to make use of its visual explain plan, I find that compression has to be explicitly enabled:
and upon specifying the compression as snappy, it would read the following:
An error occurred while loading navigation: Attempted to use Snappy
compression, but Snappy is not installed. Install or disable Snappy
compression and try again.
The MongoDB version in use is v4.4.3 and the Compass version is 1.25.0. Is there a workaround for this, or do I need to tweak my configs from the defaults?
As described here in urioption.compressors:
Comma-delimited string of compressors to enable network compression for communication between this client and a mongod/mongos instance.
You can specify the following compressors:
snappy
zlib (Available in MongoDB 3.6 or greater)
zstd (Available in MongoDB 4.2 or greater)
Why does compressors=disabled work in the mongo shell (mongo) but not in MongoDB Compass?
Value for compressors must be at least one of: snappy, zlib
So here:
For net.compression.compressors, cmdoption-mongod, and cmdoption-mongos, the documentation specifies:
To disable network compression, set the value to disabled.
For the compression-options in the Connection String URI Format and for cmdoption-mongo, no such validation is specified as in the first point.
Based on the above points, it may be that the value is validated in MongoDB Compass but not in the mongo shell! For a detailed and specific answer you can ask on the MongoDB Community Forum or file a bug in MongoDB Jira.
and upon specifying the compression as snappy, it would read the following:
An error occurred while loading navigation: Attempted to use Snappy compression, but Snappy is not installed. Install or disable Snappy compression and try again.
See here term-snappy,
A compression/decompression library designed to balance efficient computation requirements with reasonable compression rates. Snappy is the default compression library for MongoDB’s use of WiredTiger. See Snappy and the WiredTiger compression documentation for more information.
You need to install it separately if you want to use Snappy.
Conclusion:
You can use zlib instead of Snappy, or better still, don't specify compressors=disabled or compressors=zlib at all: when you don't specify the option in the URI, the default is compressors=snappy,zstd,zlib (all three options).
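For example, a connection string along these lines (host and port below are placeholders for your own deployment) should sidestep the missing-Snappy error by requesting zlib, or by omitting the option entirely:

```shell
# Explicitly request zlib instead of snappy:
mongo "mongodb://127.0.0.1:27017/?compressors=zlib"

# Or simply omit the compressors option and let the client negotiate:
mongo "mongodb://127.0.0.1:27017/"
```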
I want to use DocumentDB, but there is no connector for PySpark. It looks like DocumentDB also supports the MongoDB protocol, as mentioned here, which means all existing MongoDB drivers should work. Since there is a PySpark connector for MongoDB, I wanted to try it out.
df = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
This throws error.
com.mongodb.MongoCommandException: Command failed with error 115: ''$sample' is not supported' on server example.documents.azure.com:10250. The full response is { "_t" : "OKMongoResponse", "ok" : 0, "code" : 115, "errmsg" : "'$sample' is not supported", "$err" : "'$sample' is not supported" }
It looks like the DocumentDB MongoDB API doesn't support all MongoDB features, but I can't find any documentation about it. Or am I missing something else?
I want to use DocumentDB but there is no connector for PySpark.
A preview of a Spark to DocumentDB connector (including a pyDocumentDB package) was made available in early April 2017.
Looks like DocumentDB also supports MongoDB Protocol as mentioned here, which means all existing MongoDB drivers should work
DocumentDB supports the MongoDB wire protocol for communication and reports its version as MongoDB 3.2.0, but this does not mean that it is a drop-in replacement with full support for all MongoDB features (or that DocumentDB implements features with identical behaviour and limits). A notable absence at the moment is any support for MongoDB's aggregation pipeline, which includes the $sample operator that the PySpark connector is expecting to be available given a connection to a server claiming to be MongoDB 3.2.
You can find more examples of potential compatibility issues in the comments on the DocumentDB API for MongoDB documentation you referenced in your question.
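If you want to confirm the missing $sample support yourself, you can probe for it directly from the shell. A sketch only: the host, credentials, database, and collection names below are placeholders (DocumentDB's MongoDB endpoint also requires TLS):

```shell
# Run a $sample aggregation; DocumentDB is expected to return error 115
# ("'$sample' is not supported"), while real MongoDB 3.2+ succeeds.
mongo "mongodb://user:password@example.documents.azure.com:10250/test?ssl=true" \
  --eval 'db.mycollection.aggregate([{ $sample: { size: 1 } }])'
```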
I would like to get an object from couchbase memcached bucket that is not the default, using memcached interface (or any other method).
Right now to get data from the default bucket I just do:
echo "get someKey" | nc couchbase.server 11211
But how to retrieve data from different bucket? Memcached doesn't have a notion of buckets (at least I couldn't find any info about that).
Or if that way is not possible can I use a different interface to retrieve the data from the shell (using nc or curl)?
You can do this, but you will need to use "client-side moxi".
You can find more information about it here
http://docs.couchbase.com/admin/admin/Concepts/bp-deployment-strategies.html
Alternatively you can install the Couchbase C SDK and use "cbc cat" which is bucket-aware.
The C Couchbase SDK comes with a command-line tool called cbc that can be used to retrieve documents from any bucket in a Couchbase cluster: cbc cat.
See the documentation on how to install and use cbc.
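A minimal sketch of cbc cat against a non-default bucket (the host, bucket, and key below are placeholders, and exact flags may vary by libcouchbase version):

```shell
# Fetch the document stored under "someKey" from bucket "mybucket":
cbc cat someKey -U couchbase://couchbase.server/mybucket
```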
I'm currently writing a Grails app using Grails 2.2.2 and MySQL, and have been deploying it to Cloudfoundry.
Until recently I've just used a single MySQL datasource for my domain, which Cloudfoundry detects and automagically creates and binds a MySQL service instance to.
I now have a requirement to store potentially large files somewhere, so I figured I'd take a look at MongoDB's GridFS. Cloudfoundry supports MongoDB, so I'd assumed Cloudfoundry would do some more magic when I deployed my app and would provide me with a MongoDB datasource as well.
Unfortunately I'm not prompted to create/bind a MongoDB service when I deploy my app, and I think this may be down to the way I'm connecting to Mongo.
I'm not using the MongoDB plugin, as this conflicts with another plugin I'm using, and in any case I don't need to persist any of my domain to Mongo - just some large files - so I'm using the Mongo java driver directly (similar to this - http://jameswilliams.be/blog/entry/171).
I'm unsure how Cloudfoundry detects that your application requires a particular datasource, but I'd assumed it would figure this out somehow from DataSource.groovy.
Mine looks like this...
environments {
    development {
        dataSource {
            driverClassName = "com.mysql.jdbc.Driver"
            dbCreate = "create-drop"
            ...
        }
        dataSourceMongo {
            host = "localhost"
            port = 27017
            dbName = "my_mongo_database_name"
            ...
        }
    }
}
Is there something I'm missing? Or do I need to manually bind the MongoDB service somehow?
Using an answer instead of a comment for better formatting. :)
I guess you have already followed the steps to create the MongoDB service in Cloudfoundry as mentioned here; otherwise this has to be done first. Also, it will be a lot easier if you use the Groovy wrapper of the MongoDB Java driver, called GMongo. Refer to the GitHub source and this Mongo blog for more details.
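If Cloudfoundry does not detect the requirement from DataSource.groovy, the service can be created and bound by hand. A hedged sketch using the CLI; the service, plan, and app names below are placeholders, and the available service offerings and plans depend on your Cloudfoundry installation:

```shell
# Create a MongoDB service instance, bind it to the app, then restart:
cf create-service mongodb default my-mongo-service
cf bind-service my-grails-app my-mongo-service
cf restart my-grails-app
```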