I have a Node.js app using Mongoose (v5.11.17). It used to run against a standalone db, but recently, for reliability reasons, the app was migrated to a replica set (of 3). There is code that saves some object a in the db, and afterwards there can be any number of calls reading object a. Since migrating to the replica set, a read sometimes comes back as not found, which never happened with the standalone db. My guess is that the read preferences are not right somehow and the read goes to a non-primary member (or maybe the write went to a non-primary). Can I inspect the connection in the mongo shell and query its read preference?
Related
We are facing a stale read issue for some percentage of users of our MongoDB Spring-framework-based app. It's a very low-volume app with fewer than 10K hits a day and a record count of well under 100K. Following is our app tech stack:
MongoDB version: v3.2.8
compile group: 'org.springframework.data', name: 'spring-data-mongodb', version:'1.5.5.RELEASE'
compile group: 'org.mongodb', name: 'mongo-java-driver', version:'2.13.2'.
Users reported that when a new record is inserted or updated, the value is not available to read for a certain duration, say half an hour, after which the latest values become visible to all users. However, when connecting with the mongo shell, we are able to see the latest values in the DB.
We confirmed that there is no application-level cache involved in the reported flows. For the JSPs we also added a timestamp on the reported pages and tried private browsing mode to rule out any browser issue.
We also tried changing the write concern in the MongoClient and the Mongo template, but there was no change in behavior:
MongoClientOptions.builder().writeConcern(WriteConcern.FSYNCED).build(); //Mongo Client
mongoTemplate.setWriteConcern(WriteConcern.FSYNCED); // Spring Mongo template
mongoTemplate.setWriteResultChecking(WriteResultChecking.LOG);
Also, the DB logs look clean; no exceptions or errors appear in the MongoDB logs.
We also didn't introduce any new library or DB changes, and this setup had been working perfectly for the past 2 years. Any pointers would be helpful.
NOTE: It's a single mongo instance with no slaves or secondaries configured.
Write concern does not affect reads.
Most likely you have some cache in your application or on your user's system (like their web browser) that you are overlooking.
The second most likely reason is that you are reading from secondaries (i.e. using anything other than the primary read preference).
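If this ever does run against a replica set, both settings can be pinned explicitly. Here is a minimal sketch in the same style as the snippet above, assuming the mongo-java-driver 2.13.x API (MongoClientOptions, ReadPreference, WriteConcern.ACKNOWLEDGED) and that your Spring Data MongoDB release exposes MongoTemplate.setReadPreference; verify both against your exact versions:

// Classes below come from com.mongodb (mongo-java-driver)
MongoClientOptions options = MongoClientOptions.builder()
    .readPreference(ReadPreference.primary())   // never read from a secondary
    .writeConcern(WriteConcern.ACKNOWLEDGED)    // wait for the primary to confirm each write
    .build();
MongoClient mongoClient = new MongoClient("localhost", options);

mongoTemplate.setReadPreference(ReadPreference.primary()); // Spring Mongo template
mongoTemplate.setWriteConcern(WriteConcern.ACKNOWLEDGED);

On a standalone instance like yours, read preference changes nothing, which again points at a cache outside MongoDB as the likely culprit.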
If a system is already running SQL Server, is it possible to use a NoSQL database (i.e. MongoDB in particular) as the failover database in a SQL Server failover environment, such that if the primary SQL node fails, the secondary node running/hosting MongoDB takes the primary's place?
The short answer to this question is "no". The long answer is anything is possible given enough code and resources.
SQL and MongoDB do not speak the same language, so there would need to be an intermediary that can translate, which adds another failure mode to the system. It also needs to be complex enough to understand concepts such as "primary". There are connectors out there that will handle either SQL -> MongoDB or MongoDB -> SQL, but I'm not aware of any that can sync the two in real time. Additionally, it would be up to your application to determine where to query data from and where to write data to; that is outside the scope of what such connectors do.
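To make that last point concrete, here is a rough sketch of the kind of failover logic the application itself would have to carry. Everything in it is hypothetical (connection string, host names, table/collection names, credentials), it assumes the Microsoft SQL Server JDBC driver and the 2.x mongo-java-driver are on the classpath, and it glosses over the real problem: nothing in it keeps the two stores in sync.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

public class FailoverReader {
    // Hypothetical endpoint; real values depend on your environment.
    private static final String SQL_URL = "jdbc:sqlserver://sql-primary:1433;databaseName=app";

    public static String findCustomerName(int customerId) throws Exception {
        try (Connection con = DriverManager.getConnection(SQL_URL, "appUser", "secret");
             PreparedStatement ps = con.prepareStatement("SELECT name FROM customers WHERE id = ?")) {
            ps.setInt(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        } catch (SQLException primaryDown) {
            // SQL Server is unreachable: fall back to the MongoDB node.
            // This only answers reads, and only if the application already
            // wrote the same data to MongoDB; no connector does that for you.
            MongoClient mongo = new MongoClient("mongo-secondary");
            try {
                DBObject doc = mongo.getDB("app").getCollection("customers")
                        .findOne(new BasicDBObject("_id", customerId));
                return doc == null ? null : (String) doc.get("name");
            } finally {
                mongo.close();
            }
        }
    }
}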
I would like to ask why my external MongoDB instance is slower than the one launched by Meteor.js.
I set the MONGO_URL environment variable to connect to my local database, so the connection should be as fast as with the database created by Meteor.js.
However, when I test publications with the external database I see one or two seconds of latency, whereas when Meteor.js runs the database everything works properly (I see new data from the database without delay).
Thanks for any help!
Cheers
Meteor has two ways to access changes in MongoDB:
Pull: Meteor checks for updates at regular intervals. You may notice a delay of a few seconds.
Push, also known as "oplog tailing": MongoDB sends data changes right when they are performed, and Meteor registers them instantly.
You'll need to set the MONGO_OPLOG_URL environment variable to enable oplog tailing and get instantaneous updates. When Meteor starts up a local Mongo instance, it also sets up oplog tailing automatically.
Here's a detailed article about it: https://meteorhacks.com/mongodb-oplog-and-meteor/
I'm trying to set up an app that will act as a front end to an externally updated mongo database. The data will be pushed into the database by another process.
So far I have the app connecting to the external mongo instance and pulling data out with no issues, but it's not reactive (it isn't seeing any of the new data going into the mongo database).
I've done some digging and so far can only find that I might need to set up a replica set and use the oplog. Is there a way to do this without going to a replica set (or is that the best way anyway)?
The code so far is really simple, a single collection, a single publication (pulling out the last 10 records from the database) and a single template just displaying that data.
No deps that I've written (not sure if that's what I'm missing).
Thanks.
Any reason not to use the oplog? From what I've read it is the recommended approach even if your DB isn't updated by an external process, and a must if it is.
Nevertheless, without the oplog your app should still see the changes made to the DB by the external process. It will take longer (up to 10 seconds), but it should update.
We have all of our unit tests written so that they create and populate tables in HSQL. I want the developers who use this to be able to write queries against this HSQL DB, because (1) by writing queries they can better understand the data model, and those less familiar with SQL can play with the data before writing the runtime statements, and (2) they don't have access to the test DB for security reasons. Is there a way to persist the test data so that it can be examined and analyzed with an SQL client?
Right now I am jury-rigging it by switching the data source to a different DB (like DB2/MySQL, then connecting to that DB on my machine so I can play with the persistent data). However, it would be easier if HSQL supported persisting this than to have to explain this workaround to every new developer.
Just to be clear, I need an SQL client to interact with the persistent data, so debugging and checking memory won't be a clean way to do it. This has more to do with initial development than with debugging/maintenance/testing.
If you use an HSQLDB Server instance for your tests, the data will survive the test run.
If the server uses a jdbc:hsqldb:mem:aname (all-in-memory) url for its database, then the data will be available while the server is running. Alternatively the server can use a jdbc:hsqldb:file:filepath url and the data is persisted to files.
The latest HSQLDB docs explain the different options. Most of the observations also apply to older (1.8.x) versions. However, the latest version 2.0.1 supports starting a server and creating databases dynamically upon the first connection, which can simplify testing a lot.
http://hsqldb.org/doc/2.0/guide/deployment-chapt.html#N13C3D
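A minimal sketch of that server setup, assuming HSQLDB 2.x (where the server class is org.hsqldb.server.Server; in 1.8.x it is org.hsqldb.Server) and using placeholder database and file names:

import java.sql.Connection;
import java.sql.DriverManager;
import org.hsqldb.server.Server;

public class TestDatabaseServer {
    public static void main(String[] args) throws Exception {
        // Start a server whose database is backed by files, so the data
        // written by the tests survives and can be browsed afterwards.
        Server server = new Server();
        server.setDatabaseName(0, "testdb");
        server.setDatabasePath(0, "file:target/testdb"); // use "mem:testdb" to keep it in memory only
        server.setPort(9001);                            // HSQLDB's default server port
        server.start();

        // Tests connect through the server rather than an in-process mem: URL.
        try (Connection con = DriverManager.getConnection(
                "jdbc:hsqldb:hsql://localhost:9001/testdb", "SA", "")) {
            con.createStatement().execute(
                "CREATE TABLE example (id INT PRIMARY KEY, name VARCHAR(50))");
        }
    }
}

Any JDBC-capable SQL client, including the DatabaseManagerSwing tool that ships with HSQLDB, can then be pointed at jdbc:hsqldb:hsql://localhost:9001/testdb to examine what the tests created.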