We have a MongoDB replica set with two data-bearing instances (127.0.0.1:27017 - primary, 127.0.0.1:27018 - secondary) and one arbiter (127.0.0.1:27019). When I step down the primary with rs.stepDown(60), it should become a secondary, the secondary should be elected primary, and all write operations should then go to the new primary (the former secondary). But after stepping down I get the exception "Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.".
App.Config has the following keys:
<appSettings>
<add key="mongoServerIP" value="127.0.0.1" />
<add key="mongoServerPort" value="27017"/>
<!--<add key="dbname" value="PSLRatingEngine"/>-->
<add key="MongoDbDatabaseName" value="PSLRatingEngine" />
<add key="testcollectionname" value="Test"/>
</appSettings>
C# code:
using MongoDB.Driver;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ReplicaSetTest
{
    class Program
    {
        static void Main(string[] args)
        {
            string mongoserverip = ConfigurationManager.AppSettings["mongoServerIP"];
            int port = Convert.ToInt32(ConfigurationManager.AppSettings["mongoServerPort"]);
            string dbname = ConfigurationManager.AppSettings["MongoDbDatabaseName"];
            string customercollectionname = ConfigurationManager.AppSettings["testcollectionname"];
            try
            {
                MongoServerSettings settings = new MongoServerSettings();
                settings.Server = new MongoServerAddress(mongoserverip, port);
                // Create server object to communicate with our server
                MongoServer server = new MongoServer(settings);
                // Get our database instance to reach collections and data
                var db = server.GetDatabase(dbname);
                // Get the test collection reference
                var collection = db.GetCollection(customercollectionname);

                // Create the Random once; creating it inside a tight loop reuses the
                // same time-based seed and produces duplicate values.
                Random r = new Random();
                int i = 0;
                while (true)
                {
                    Employee emp = new Employee();
                    emp.EmpId = i + 1000;
                    emp.Age = r.Next(20, 70);
                    emp.Salary = r.NextDouble() * 100000;
                    emp.Name = "emp" + i;
                    emp.Dept = "Engineering";
                    collection.Insert<Employee>(emp);
                    i++;
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }
    }

    // Minimal POCO used by the test loop above.
    class Employee
    {
        public int EmpId { get; set; }
        public int Age { get; set; }
        public double Salary { get; set; }
        public string Name { get; set; }
        public string Dept { get; set; }
    }
}
Please let me know how I can automatically fail over from the primary to the secondary in C# code if anything happens to the primary instance of the replica set.
Suppose you send an acknowledged write to the primary, and then the primary is fried by an orbital ion cannon after it partially completes the write but before it finishes and acknowledges it. How should you "automatically switch from primary to secondary" in that case? What does it mean for that connection and operation?
Elections take a few seconds to happen, during which there is no primary to send writes to. What does the driver do while there's no primary? How can it maintain the same connections when the connections go to a node that isn't available anymore?
The point is that errors seen by the application during a failover are expected, and your application needs to be able to deal with them. When the primary fails, MongoDB and the driver can't magically make everything work as if nothing happened in all cases. After a new primary is elected, the driver will automatically switch to it, and from then on it will seem as if nothing happened.
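As a sketch (not part of the original question): connecting with a replica-set connection string that lists all members, and retrying a failed write, lets the application ride out an election. This uses the current MongoClient API rather than the legacy MongoServer API from the question; the replica set name "rs0" and the retry limits are assumptions. Also note that blindly retrying can double-insert a document whose write actually succeeded but whose acknowledgement was lost.

using System;
using System.Threading;
using MongoDB.Bson;
using MongoDB.Driver;

namespace ReplicaSetTest
{
    class FailoverSketch
    {
        static void Main()
        {
            // Seed list with every member; the driver discovers the topology and
            // follows the primary as elections happen. "rs0" is an assumed name.
            var client = new MongoClient(
                "mongodb://127.0.0.1:27017,127.0.0.1:27018,127.0.0.1:27019/?replicaSet=rs0");
            var collection = client.GetDatabase("PSLRatingEngine")
                                   .GetCollection<BsonDocument>("Test");

            for (int i = 0; ; i++)
            {
                var doc = new BsonDocument { { "EmpId", i + 1000 }, { "Name", "emp" + i } };

                // Retry for a little while so a write that fails mid-election is
                // attempted again once the driver has found the new primary.
                for (int attempt = 0; ; attempt++)
                {
                    try
                    {
                        collection.InsertOne(doc);
                        break;
                    }
                    catch (MongoException ex) when (attempt < 5)
                    {
                        Console.WriteLine("Write failed ({0}), retrying...", ex.Message);
                        Thread.Sleep(1000);
                    }
                }
            }
        }
    }
}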
Related
For performance optimisation we are trying to read data from a Mongo secondary server for selected scenarios. I am using an inline query with withReadPreference(ReadPreference.secondaryPreferred()) to read the data; the code snippet is below.
What I want to confirm is that the data we get actually comes from the secondary server after executing the highlighted inline query. Is there any method available to check this from Java or Spring Boot?
public User read(final String userId) {
    final ObjectId objectId = new ObjectId(userId);
    final User user = collection.withReadPreference(ReadPreference.secondaryPreferred()).findOne(objectId).as(User.class);
    return user;
}
Pretty much the same way in Java. Note that we use secondary(), not secondaryPreferred(); this guarantees reads from a secondary ONLY:
import com.mongodb.ReadPreference;

{
    // This is your "regular" primaryPreferred collection:
    MongoCollection<BsonDocument> tcoll = db.getCollection("myCollection", BsonDocument.class);

    // ... various operations on tcoll, then create a new
    // handle that FORCES reads from a secondary and will time out and
    // fail if no secondary can be found:
    MongoCollection<BsonDocument> xcoll = tcoll.withReadPreference(ReadPreference.secondary());
    BsonDocument f7 = xcoll.find(queryExpr).first();
}
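For reference, the same forced-secondary read with the C# driver, plus one way to verify empirically which server each command is sent to, by subscribing to command events. This is only a sketch: the connection string, replica set name, database, and collection names are assumptions, not taken from the question.

using System;
using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Core.Events;

class SecondaryReadDemo
{
    static void Main()
    {
        var settings = MongoClientSettings.FromUrl(
            new MongoUrl("mongodb://127.0.0.1:27017,127.0.0.1:27018/?replicaSet=rs0"));

        // Log which server each command is actually sent to, so the read
        // preference can be verified empirically.
        settings.ClusterConfigurator = cb =>
            cb.Subscribe<CommandStartedEvent>(e =>
                Console.WriteLine("{0} sent to {1}", e.CommandName, e.ConnectionId.ServerId.EndPoint));

        var client = new MongoClient(settings);
        var coll = client.GetDatabase("PSLRatingEngine")
                         .GetCollection<BsonDocument>("Test");

        // Secondary (unlike SecondaryPreferred) fails if no secondary is available.
        var doc = coll.WithReadPreference(ReadPreference.Secondary)
                      .Find(Builders<BsonDocument>.Filter.Empty)
                      .FirstOrDefault();
    }
}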
I'm having a problem with lots of connections being opened to MongoDB.
The README on the GitHub page for the C# driver gives the following code:
using MongoDB.Bson;
using MongoDB.Driver;
var client = new MongoClient("mongodb://localhost:27017");
var server = client.GetServer();
var database = server.GetDatabase("foo");
var collection = database.GetCollection("bar");
collection.Insert(new BsonDocument("Name", "Jack"));
foreach (var document in collection.FindAll())
{
    Console.WriteLine(document["Name"]);
}
At what point does the driver open the connection to the server? Is it at the GetServer() method or is it the Insert() method?
I know that we should have a static object for the client, but should we also have a static object for the server and database as well?
Late answer... but the server connection is created at this point:
var client = new MongoClient("mongodb://localhost:27017");
Everything else is just getting references for various objects.
See: http://docs.mongodb.org/ecosystem/tutorial/getting-started-with-csharp-driver/
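Regarding the question about static objects: the client is the object worth sharing, since it owns the connection pool and is thread-safe, while database and collection handles are lightweight and can be fetched wherever they are needed. A minimal sketch using the current 2.x API (the "foo"/"bar" names follow the README snippet above; the repository class is illustrative):

using MongoDB.Bson;
using MongoDB.Driver;

// One shared client per application; it is thread-safe and pools connections.
public static class Mongo
{
    public static readonly MongoClient Client =
        new MongoClient("mongodb://localhost:27017");
}

public class BarRepository
{
    // Database/collection objects are cheap proxies, so getting them
    // per use (or caching them, as here) are both fine.
    private readonly IMongoCollection<BsonDocument> _bar =
        Mongo.Client.GetDatabase("foo").GetCollection<BsonDocument>("bar");

    public void Add(string name)
    {
        _bar.InsertOne(new BsonDocument("Name", name));
    }
}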
With the current MongoDB drivers for C#, the connection happens at the actual database operation, e.g. at collection.Find() or collection.InsertOne().
{
    // Code for initialization.
    // For a localhost connection there is no need to specify the db server url and port.
    var client = new MongoClient("mongodb://localhost:27017/");
    var db = client.GetDatabase("TestDb");
    Collection = db.GetCollection<Model>("testCollection");
}

// Code for db operations
{
    // The connection happens here, at the first actual operation.
    var collection = Collection;

    // Your find operation
    var models = collection.Find(Builders<Model>.Filter.Empty).ToList();

    // Your insert operation
    collection.InsertOne(new Model());
}
I found this out after I stopped my mongod server and debugged the code with a breakpoint. Initialization happened smoothly, but the error was thrown at the db operation.
Hope this helps.
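One way to confirm this behaviour, and to fail fast at startup instead of at the first query, is to issue an explicit ping command. A small sketch; the database name is assumed:

using MongoDB.Bson;
using MongoDB.Driver;

class StartupCheck
{
    static void Main()
    {
        var client = new MongoClient("mongodb://localhost:27017/");
        var db = client.GetDatabase("TestDb");   // still no I/O at this point

        // The ping is the first real round trip; if mongod is stopped,
        // the failure surfaces here rather than at the first Find/InsertOne.
        db.RunCommand<BsonDocument>(new BsonDocument("ping", 1));
    }
}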
We have two different query strategies that we'd ideally like to operate in conjunction on our site without opening redundant connections. One strategy uses the Enterprise Library to pull Database objects and call Execute_____(DbCommand) methods on the Database, without directly selecting any sort of connection. Effectively like this:
Database db = DatabaseFactory.CreateDatabase();
DbCommand q = db.GetStoredProcCommand("SomeProc");
using (IDataReader r = db.ExecuteReader(q))
{
    List<RecordType> rv = new List<RecordType>();
    while (r.Read())
    {
        rv.Add(RecordType.CreateFromReader(r));
    }
    return rv;
}
The other, newer strategy uses a library that asks for an IDbConnection, which it Close()s immediately after execution. So we do something like this:
DbConnection c = DatabaseFactory.CreateDatabase().CreateConnection();
using (QueryBuilder qb = new QueryBuilder(c))
{
    return qb.Find<RecordType>(ConditionCollection);
}
But the connection returned by CreateConnection() isn't the same one used by Database.ExecuteReader(), which is apparently left open between queries. So when we call a data access method using the new strategy after one using the old strategy inside a TransactionScope, it causes unnecessary promotion to a distributed transaction -- promotion that I'm not sure we have the ability to configure for (we don't have administrative access to the SQL Server).
Before we go down the path of modifying the query-builder library to work with the Enterprise Library's Database objects ... is there a way to retrieve, if it exists, the open connection last used by one of the Database.Execute_______() methods?
Yes, you can get the connection associated with a transaction. Enterprise Library internally manages a collection of transactions and the associated database connections, so if you are in a transaction you can retrieve the connection associated with a database using the static TransactionScopeConnections.GetConnection method:
using (var scope = new TransactionScope())
{
    IEnumerable<RecordType> records = GetRecordTypes();
    Database db = DatabaseFactory.CreateDatabase();
    DbConnection connection = TransactionScopeConnections.GetConnection(db).Connection;
}
public static IEnumerable<RecordType> GetRecordTypes()
{
    Database db = DatabaseFactory.CreateDatabase();
    DbCommand q = db.GetStoredProcCommand("GetLogEntries");
    using (IDataReader r = db.ExecuteReader(q))
    {
        List<RecordType> rv = new List<RecordType>();
        while (r.Read())
        {
            rv.Add(RecordType.CreateFromReader(r));
        }
        return rv;
    }
}
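Building on that, the connection Enterprise Library has already enlisted in the ambient transaction can be handed to the newer strategy, so both paths share one connection and the promotion is avoided. This is only a sketch: QueryBuilder comes from the question's own library, and the ConditionCollection parameter type and the Find return type are assumptions based on the question's snippet.

public static IEnumerable<RecordType> GetAllOnOneConnection(ConditionCollection conditions)
{
    using (var scope = new TransactionScope())
    {
        Database db = DatabaseFactory.CreateDatabase();

        // Old strategy: Enterprise Library opens and enlists its own connection.
        List<RecordType> results = new List<RecordType>(GetRecordTypes());

        // New strategy: reuse the connection already enlisted in this transaction
        // instead of creating a second one, which is what forces the promotion
        // to a distributed transaction.
        DbConnection shared = TransactionScopeConnections.GetConnection(db).Connection;
        using (QueryBuilder qb = new QueryBuilder(shared))
        {
            results.AddRange(qb.Find<RecordType>(conditions));
        }

        scope.Complete();
        return results;
    }
}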
Platform: 64-bit Windows OS, spymemcached-2.7.3.jar, J2EE
We want to use two memcache/membase servers as a caching solution. We want to allocate 1 GB of memory to each memcache/membase server, so in total we can cache 2 GB of data.
We are using the spymemcached Java client for setting and getting data from memcache. We are not using any replication between the two membase servers.
We load the MemcachedClient object at J2EE application startup:
URI server1 = new URI("http://192.168.100.111:8091/pools");
URI server2 = new URI("http://127.0.0.1:8091/pools");
ArrayList<URI> serverList = new ArrayList<URI>();
serverList.add(server1);
serverList.add(server2);
client = new MemcachedClient(serverList, "default", "");
After that we use the MemcachedClient to get and set values in the memcache/membase servers.
Object obj = client.get("spoon");
client.set("spoon", 50, "Hello World!");
It looks like the MemcachedClient is setting and getting values only from server1.
If we stop server1, it fails to get/set values. Should it not use server2 when server1 is down? Please let me know if we are doing anything wrong here...
The spymemcached Java client does not handle membase failover for a particular node.
Ref : https://blog.serverdensity.com/handling-memcached-failover/
We need to handle it manually (in our code).
We can do this by using a ConnectionObserver.
Here is my code:
public static void main(String a[]) throws InterruptedException {
    try {
        URI server1 = new URI("http://192.168.100.111:8091/pools");
        URI server2 = new URI("http://127.0.0.1:8091/pools");
        final ArrayList<URI> serverList = new ArrayList<URI>();
        serverList.add(server1);
        serverList.add(server2);
        final MemcachedClient client = new MemcachedClient(serverList, "bucketName", "");
        client.addObserver(new ConnectionObserver() {
            @Override
            public void connectionLost(SocketAddress arg0) {
                // Called when a connection is lost.
                for (MemcachedNode node : client.getNodeLocator().getAll()) {
                    if (!node.isActive()) {
                        client.shutdown();
                        // Re-init your client here; after re-init it will connect to your secondary node.
                        break;
                    }
                }
            }

            @Override
            public void connectionEstablished(SocketAddress arg0, int arg1) {
                // Called when a connection is established.
            }
        });
        Object obj = client.get("spoon");
        client.set("spoon", 50, "Hello World!");
    } catch (Exception e) {
        // At minimum, log the failure instead of swallowing it silently.
        e.printStackTrace();
    }
}
client.get() would use the first available node, and therefore your value would be stored/updated on one node only.
You seem to be contradicting yourself a bit in your requirements: first you say "we want to allocate 1 GB memory to each memcache/membase server so in total we can cache 2 GB data", which implies a distributed cache model (a particular key is stored on exactly one node in the cache farm), and then you expect to fetch it when that node is down, which obviously won't happen.
If you need your cache farm to survive a node failure without losing the data cached on that node, you should use replication, which is available in Membase. But obviously you would pay the price of storing the same values multiple times, so your desired "1 GB per server... 2 GB of cache in total" won't be possible.
We require programmatic access to a SQL Server Express service as part of our application. Depending on what the user is trying to do, we may have to attach a database, detach a database, back one up, etc. Sometimes the service might not be started before we attempt these operations. So we need to ensure the service is started. Here is where we are running into problems. Apparently the ServiceController.WaitForStatus(ServiceControllerStatus.Running) returns prematurely for SQL Server Express. What is really puzzling is that the master database seems to be immediately available, but not other databases. Here is a console application to demonstrate what I am talking about:
namespace ServiceTest
{
    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;
    using System.ServiceProcess;
    using System.Threading;

    class Program
    {
        private static readonly ServiceController controller = new ServiceController("MSSQL$SQLEXPRESS");
        private static readonly Stopwatch stopWatch = new Stopwatch();

        static void Main(string[] args)
        {
            stopWatch.Start();
            EnsureStop();
            Start();
            OpenAndClose("master");
            EnsureStop();
            Start();
            OpenAndClose("AdventureWorksLT");
            Console.ReadLine();
        }

        private static void EnsureStop()
        {
            Console.WriteLine("EnsureStop enter, {0:N0}", stopWatch.ElapsedMilliseconds);
            if (controller.Status != ServiceControllerStatus.Stopped)
            {
                controller.Stop();
                controller.WaitForStatus(ServiceControllerStatus.Stopped);
                Thread.Sleep(5000); // really, really make sure it stopped ... this has a problem too.
            }
            Console.WriteLine("EnsureStop exit, {0:N0}", stopWatch.ElapsedMilliseconds);
        }

        private static void Start()
        {
            Console.WriteLine("Start enter, {0:N0}", stopWatch.ElapsedMilliseconds);
            controller.Start();
            controller.WaitForStatus(ServiceControllerStatus.Running);
            // Thread.Sleep(5000);
            Console.WriteLine("Start exit, {0:N0}", stopWatch.ElapsedMilliseconds);
        }

        private static void OpenAndClose(string database)
        {
            Console.WriteLine("OpenAndClose enter, {0:N0}", stopWatch.ElapsedMilliseconds);
            var connection = new SqlConnection(string.Format(@"Data Source=.\SQLEXPRESS;initial catalog={0};integrated security=SSPI", database));
            connection.Open();
            connection.Close();
            Console.WriteLine("OpenAndClose exit, {0:N0}", stopWatch.ElapsedMilliseconds);
        }
    }
}
On my machine, this will consistently fail as written. Notice that the connection to "master" has no problems; only the connection to the other database. (You can reverse the order of the connections to verify this.) If you uncomment the Thread.Sleep in the Start() method, it will work fine.
Obviously I want to avoid an arbitrary Thread.Sleep(). Besides the rank code smell, what arbitrary value would I put there? The only thing we can think of is to put some dummy connections to our target database in a while loop, catching the SqlException thrown and trying again until it works. But I'm thinking there must be a more elegant solution out there to know when the service is really ready to be used. Any ideas?
EDIT: Based on feedback provided below, I added a check on the status of the database. However, it is still failing. It looks like even the state is not reliable. Here is the function I am calling before OpenAndClose(string):
private static void WaitForOnline(string database)
{
    Console.WriteLine("WaitForOnline start, {0:N0}", stopWatch.ElapsedMilliseconds);
    using (var connection = new SqlConnection(@"Data Source=.\SQLEXPRESS;initial catalog=master;integrated security=SSPI"))
    using (var command = connection.CreateCommand())
    {
        connection.Open();
        try
        {
            command.CommandText = "SELECT [state] FROM sys.databases WHERE [name] = @DatabaseName";
            command.Parameters.AddWithValue("@DatabaseName", database);
            byte databaseState = (byte)command.ExecuteScalar();
            Console.WriteLine("databaseState = {0}", databaseState);
            while (databaseState != OnlineState) // OnlineState == 0, the ONLINE value of sys.databases.state
            {
                Thread.Sleep(500);
                databaseState = (byte)command.ExecuteScalar();
                Console.WriteLine("databaseState = {0}", databaseState);
            }
        }
        finally
        {
            connection.Close();
        }
    }
    Console.WriteLine("WaitForOnline exit, {0:N0}", stopWatch.ElapsedMilliseconds);
}
I found another discussion dealing with a similar problem. Apparently the solution is to check the sys.database_files of the database in question. But that, of course, is a chicken-and-egg problem. Any other ideas?
Service start != database start.
The service is started when the SQL Server process is running and has responded to the SCM that it is 'alive'. After that, the server will start bringing user databases online. As part of this process, it runs the recovery process on each database to ensure transactional consistency. Recovery of a database can last anywhere from microseconds to whole days; it depends on the amount of log to be redone and the speed of the disk(s).
After the SCM returns that the service is running, you should connect to 'master' and check your database status in sys.databases. Only when the status is ONLINE can you proceed to open it.
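A sketch of that check in C# (the server name, timeout, and polling interval are assumptions): connect to master, poll sys.databases until the target database reports state 0 (ONLINE), with a timeout instead of an open-ended loop, and only then open the real connection.

using System;
using System.Data.SqlClient;
using System.Threading;

static class DatabaseReadiness
{
    // Polls sys.databases from master until the database reports state 0 (ONLINE),
    // i.e. recovery has finished and the database can actually be opened.
    public static void WaitUntilOnline(string database, TimeSpan timeout)
    {
        DateTime deadline = DateTime.UtcNow + timeout;
        using (var connection = new SqlConnection(
            @"Data Source=.\SQLEXPRESS;Initial Catalog=master;Integrated Security=SSPI"))
        using (var command = connection.CreateCommand())
        {
            connection.Open();
            command.CommandText = "SELECT [state] FROM sys.databases WHERE [name] = @name";
            command.Parameters.AddWithValue("@name", database);

            while (true)
            {
                object state = command.ExecuteScalar(); // null if the database is unknown
                if (state is byte && (byte)state == 0)
                    return;
                if (DateTime.UtcNow > deadline)
                    throw new TimeoutException(
                        string.Format("Database '{0}' did not come online within {1}.", database, timeout));
                Thread.Sleep(500);
            }
        }
    }
}

The test program above could then call DatabaseReadiness.WaitUntilOnline("AdventureWorksLT", TimeSpan.FromSeconds(30)) between Start() and OpenAndClose(...).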