I've followed all the steps in this guide: https://www.prisma.io/docs/getting-started/setup-prisma/start-from-scratch/mongodb-typescript-mongodb
Now I'm trying to add some information to the database and get an error message:
Storage engine does not support read concern: { readConcern: { level: "majority", provenance: "clientSupplied" } }
My MongoDB version is 4.4.15.
We have a replica set and the enableMajorityReadConcern: false setting.
How can I solve this problem with the create method?
I'm expecting a successful create operation.
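For context, the failing call is an ordinary Prisma Client create along these lines (a sketch only; the model and field names are placeholders, since the schema isn't shown):

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function main() {
  // Hypothetical model; any create call hits the same read concern error on this server
  const user = await prisma.user.create({
    data: { email: 'test@example.com' }
  })
  console.log(user)
}

main().finally(() => prisma.$disconnect())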
I'm working on a new website, and although things were working well as we developed locally, we've run into an issue when we tried to deploy on Vercel. The app uses the Sapper framework for both the pages and an API, and a database in MongoDB Atlas that we access through Mongoose. The behavior we have locally is that we npm run dev and a single DB connection is made, which persists until we shut the app down.
When it gets deployed to Vercel though, the code which makes the DB connection and prints that "DB connection successful" message, and which is only supposed to run once, instead runs on every API request.
This seems to quickly get out of hand, reaching our database's limit of 500 connections.
As a result, after the website is used briefly even by a single user, some of our API requests start failing with this error (we have the DB accepting any connection rather than an IP whitelist, so the suggestion the error gives isn't helpful).
Our implementation is that we have a call to mongoose.connect in a .js file:
const mongoose = require("mongoose");

// DB holds the MongoDB Atlas connection string (loaded elsewhere)
mongoose.connect(DB, {
  useNewUrlParser: true,
  useCreateIndex: true,
  useFindAndModify: false,
  useUnifiedTopology: true
}).then(() => console.log("DB connection successful!")).catch(console.error);
and then we import that file in Sapper's server.js. The recommendation we've been able to find is "just cache the connection", but that hasn't been successful and seems to be more of a node-mongodb-native thing. Regardless, this is what we tried; it worked no better or worse locally, but it also didn't fix the problems on Vercel:
const mongoose = require("mongoose");

let cachedDb = {};

exports.connection = async () => {
  if (cachedDb.isConnected)
    return;
  try {
    const db = await mongoose.connect(DB, {
      useNewUrlParser: true,
      useCreateIndex: true,
      useFindAndModify: false,
      useUnifiedTopology: true
    });
    cachedDb.isConnected = db.connections[0].readyState;
    console.log("DB connection successful!");
    return cachedDb;
  } catch (error) {
    console.log("Couldn't connect to DB", error);
  }
};
So... is there a way to make this work without replacing at least one of the pieces? The website isn't live yet so replacing something isn't the end of the world, but "just change a setting" is definitely preferred to starting from scratch.
Summary
Serverless Functions on Vercel work like self-contained processes. While it is possible to cache the connection "per function", it is not a good idea to deploy a serverful-ready library to a serverless environment. Here are a few questions that you need to answer:
Is your framework or DB library caching the connection?
Is your code prepared for Serverless?
What type of workload is Vercel optimized for?
Further Context
Vercel is an excellent platform for your frontend that would use Serverless Functions as helpers. The CDN available in conjunction with the workflow makes the deployment process very quick and allows you to move faster. Deploying a full-blown API or serverful workload will never be a good idea. Let's suppose I need to use MySQL with Vercel. Instead of mysql, you should use mysql-serverless, which is optimized for the serverless primitives. Even with that in mind, it will probably be cheaper to just use a VM/container for the API, depending on the level of requests you are expecting. Therefore, we would end up with the following ideal solution:
Frontend (Vercel - Serverless) --> Backend (Serverful - External provider) --> DB
Disclaimer: At the moment, I work for Vercel.
If you are using the cloud database of MongoDB Atlas, then you can use the mongodb-data-api library, which is encapsulated based on the Data API of MongoDB Atlas. All data operations are performed through the HTTPS interface, and there is no connection problem.
import { MongoDBDataAPI, Region } from 'mongodb-data-api'

const api = new MongoDBDataAPI({
  apiKey: '<your_mongodb_api_key>',
  appId: '<your_mongodb_app_id>'
})

api
  .findOne({
    dataSource: '<target_cluster_name>',
    database: '<target_database_name>',
    collection: '<target_collection_name>',
    filter: { name: 'Surmon' }
  })
  .then((result) => {
    console.log(result.document)
  })
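The same client can also be used for writes. Assuming the library mirrors the Data API's insertOne action (a sketch based on that assumption, not verified against the package), adding a document would look roughly like this:

api
  .insertOne({
    dataSource: '<target_cluster_name>',
    database: '<target_database_name>',
    collection: '<target_collection_name>',
    document: { name: 'Surmon' }
  })
  .then((result) => {
    // The Data API returns the generated _id as insertedId
    console.log(result.insertedId)
  })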
The example code provided by Next.js says to cache the database connection, yet this is the same issue that happens to me as well.
Both here
https://github.com/vercel/next.js/blob/canary/examples/with-mongodb-mongoose/utils/dbConnect.js
And here
https://github.com/vercel/next.js/blob/canary/examples/with-mongodb/util/mongodb.js
are caching the connection, and if I copy the examples I get the same issue as the OP.
It also says here
https://nextjs.org/docs/basic-features/data-fetching#getstaticprops-static-generation
that I can interact directly with my database. This is massively conflicting information: on one hand I'm told to cache the connection, while a member of the team tells me it's not suitable for this approach, despite the docs & examples telling me otherwise. Is this a bug report type situation?
I was struggling with a similar issue, but I came across an example here:
https://github.com/vercel/next.js/blob/canary/examples/with-mongodb/util/mongodb.js
Apparently the trick is to use a global variable:
let cached = global.mongo
if (!cached) {
  cached = global.mongo = { conn: null, promise: null }
}
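For the Mongoose setup in the question, the same caching pattern might look roughly like this (a sketch following the linked example, not a guaranteed fix; DB is the connection string from the question):

const mongoose = require("mongoose");

// Cache the connection promise on the global object so it survives module
// re-evaluation between invocations that land on the same serverless instance.
let cached = global.mongo;
if (!cached) {
  cached = global.mongo = { conn: null, promise: null };
}

exports.connection = async () => {
  if (cached.conn) return cached.conn;
  if (!cached.promise) {
    cached.promise = mongoose.connect(DB, {
      useNewUrlParser: true,
      useUnifiedTopology: true
    });
  }
  cached.conn = await cached.promise;
  console.log("DB connection successful!");
  return cached.conn;
};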
Why do I need to pass { useUnifiedTopology: true } in my app.js? When I don't pass { useUnifiedTopology: true }, everything still works. So is it okay not to pass it in my server file?
Is it going to affect my project?
There are several deprecations in the MongoDB Node.js driver that Mongoose users should be aware of. Mongoose provides options to work around these deprecation warnings, but you need to test whether these options cause any problems for your application.
The option relates to MongoDB driver 3.3.x, which introduced a significant refactor of how it handles monitoring all the servers in a replica set or sharded cluster. In MongoDB parlance, this is known as server discovery and monitoring.
To opt in to using the new topology engine, use the line below:
mongoose.set('useUnifiedTopology', true);
The useUnifiedTopology option removes support for several connection options that are no longer relevant with the new topology engine:
autoReconnect
reconnectTries
reconnectInterval
When you enable useUnifiedTopology, please remove those options from your mongoose.connect() or createConnection() calls.
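For example, with the new engine enabled and the removed options dropped, a connection call might look like this (the URI is a placeholder):

const mongoose = require('mongoose');

// autoReconnect, reconnectTries and reconnectInterval are omitted:
// the unified topology handles reconnection on its own.
mongoose.connect('mongodb://localhost:27017/test', {
  useNewUrlParser: true,
  useUnifiedTopology: true
});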
Reference: https://mongoosejs.com/docs/deprecations.html
I have set up an Azure Function with an Azure Cosmos DB (document) output. The Cosmos database is configured to be a MongoDB database.
I added the following simple code to try to add a new document:
module.exports = function (context, eventHubMessages) {
  context.bindings.document = {
    text: "Test data"
  };
  context.done();
};
When I do a test run I get success, but when I try to open the collection using Studio 3T I get:
Query failed with error code 1 and error message 'Unknown server error occurred when processing this request.'
When I use the same code to write to a DocumentDB database I get success and I can view the data in Azure. Do I need to use a different API to save data to MongoDB?
The DocumentDB output binding is using the DocumentDB API to connect and save information in the database. But your database (from what you are saying) is using the MongoDB API; they are different APIs (the links point to the docs).
As you surely know, MongoDB has some requirements (like the existence of an "_id" attribute) that are covered when you connect to the database from a MongoDB client (either an SDK or a third-party client), but since you are communicating through the DocumentDB API, it's probably failing to fulfill those requirements.
You might want to try and use the Mongo driver in the function to connect to your Cosmos DB database through the MongoDB API.
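A minimal sketch of that approach, assuming the official mongodb Node.js driver and a connection string stored in an app setting (COSMOS_MONGO_URI, along with the database and collection names, are placeholders):

const { MongoClient } = require("mongodb");

module.exports = async function (context, eventHubMessages) {
  // Connect through the MongoDB API instead of the DocumentDB output binding
  const client = new MongoClient(process.env.COSMOS_MONGO_URI);
  try {
    await client.connect();
    // insertOne lets the driver add the required _id field automatically
    await client.db("mydb").collection("mycollection").insertOne({ text: "Test data" });
    context.log("Document inserted");
  } finally {
    await client.close();
  }
};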
I want to use DocumentDB but there is no connector for PySpark. Looks like DocumentDB also supports MongoDB Protocol as mentioned here, which means all existing MongoDB drivers should work. Since there is a PySpark connector for MongoDB, I wanted to try this out.
df = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
This throws an error:
com.mongodb.MongoCommandException: Command failed with error 115: ''$sample' is not supported' on server example.documents.azure.com:10250. The full response is { "_t" : "OKMongoResponse", "ok" : 0, "code" : 115, "errmsg" : "'$sample' is not supported", "$err" : "'$sample' is not supported" }
It looks like the DocumentDB MongoDB API doesn't support all MongoDB features, but I can't find any documentation about it. Or am I missing something else?
I want to use DocumentDB but there is no connector for PySpark.
A preview of a Spark to DocumentDB connector (including a pyDocumentDB package) was made available in early April 2017.
Looks like DocumentDB also supports MongoDB Protocol as mentioned here, which means all existing MongoDB drivers should work
DocumentDB supports the MongoDB wire protocol for communication and reports its version as MongoDB 3.2.0, but this does not mean that it is a drop-in replacement with full support for all MongoDB features (or that DocumentDB implements features with identical behaviour and limits). A notable absence at the moment is any support for MongoDB's aggregation pipeline, which includes the $sample operator that the PySpark connector is expecting to be available given a connection to a server claiming to be MongoDB 3.2.
You can find more examples of potential compatibility issues in the comments on the DocumentDB API for MongoDB documentation you referenced in your question.
Starting from version 3.0, MongoDB supports pluggable storage engines. How can I find out which storage engine is being used on a system?
The easiest way to find the storage engine currently being used is from the mongo console.
Inside the mongo console, type (you might need admin access to run this command):
db.serverStatus().storageEngine
If it returns
{ "name" : "wiredTiger" }
then the WiredTiger storage engine is being used.
Once it is confirmed that WiredTiger is being used, type
db.serverStatus().wiredTiger
to get all the configuration details of wiredTiger.
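Put together, a quick check in the mongo shell might look like this:

// Run from the mongo shell (admin access may be required)
var status = db.serverStatus();
print("Storage engine: " + status.storageEngine.name);
if (status.storageEngine.name === "wiredTiger") {
  // Print the full WiredTiger configuration details
  printjson(status.wiredTiger);
}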
On the console, Mayank's answer makes more sense.
On the other hand, when using a MongoDB GUI such as MongoChef or Robomongo, the storage engine can be found as shown in the screenshots below:
On Robomongo: (screenshot)
On MongoChef: (screenshot)
You can detect this via:
db.serverStatus().wiredTiger
So at "present" where this "exists" then there is a different storage engine configured other than the default "MMAPv1" where "WiredTiger" is not used.
This applies to the present "MongoDB 3.0x" series