MongoDB C# Driver 2.0: is there a way to find out if the server is down? How do you run the Ping command in the new driver?

How do you call the Ping command with the new C# driver 2.0?
In the old driver it was available via Server.Ping(). Also, is there a way to find out if the server is running/responding without running an actual query?
Using mongoClient.Cluster.Description.State doesn't help because it still reported the disconnected state even after the mongo server started responding.

You can check the cluster's status using its Description property:
var state = _client.Cluster.Description.State;
If you want a specific server out of that cluster you can use the Servers property:
var state = _client.Cluster.Description.Servers.Single().State;
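If polling a snapshot of the description isn't enough, you can also watch for changes; a minimal sketch, assuming the DescriptionChanged event and ServerState enum exposed by MongoDB.Driver.Core (the handler body is illustrative):
using System.Linq;
using MongoDB.Driver;
using MongoDB.Driver.Core.Clusters;
using MongoDB.Driver.Core.Servers;

// Sketch: subscribe to cluster description changes instead of polling a single snapshot.
var client = new MongoClient("mongodb://localhost:27017");
client.Cluster.DescriptionChanged += (sender, e) =>
{
    var server = e.NewClusterDescription.Servers.FirstOrDefault();
    if (server != null && server.State == ServerState.Connected)
    {
        // The server has become reachable again.
    }
};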

This worked for me on both the C# driver 1.x and 2.0
int count = 0;
var client = new MongoClient(connection);

// This while loop lets us detect whether we are connected to the MongoDB server:
// if we are, we skip the exception; if the connection has not been made after
// about 5 seconds, we throw.
while (client.Cluster.Description.State.ToString() == "Disconnected")
{
    Thread.Sleep(100);
    if (count++ >= 50)
    {
        throw new Exception("Unable to connect to the database. Please make sure that "
            + client.Settings.Server.Host + " is online");
    }
}

Following i3arnon's answer, this is what worked reliably for me:
var server = client.Cluster.Description.Servers.FirstOrDefault();
var serverState = ServerState.Disconnected;
if (server != null) serverState = server.State;
or, in newer versions of C#:
var serverState = client.Cluster.Description.Servers.FirstOrDefault()?.State
?? ServerState.Disconnected;
But if you really want to run a ping command, you can do it like this:
var command = new CommandDocument("ping", 1);
try
{
    db.RunCommand<BsonDocument>(command);
}
catch (Exception ex)
{
    // ping failed
}
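With the 2.0 API the same ping can also be issued with a plain BsonDocument and an explicit timeout; a minimal sketch (the "admin" database name and the one-second timeout are illustrative choices, not requirements):
using System;
using System.Threading;
using MongoDB.Bson;
using MongoDB.Driver;

// Sketch: ping the server with the 2.0 driver and treat any failure or
// timeout as "server is down". Connection string and timeout are examples.
var client = new MongoClient("mongodb://localhost:27017");
var db = client.GetDatabase("admin");
try
{
    using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(1)))
    {
        db.RunCommandAsync<BsonDocument>(new BsonDocument("ping", 1), cancellationToken: cts.Token).Wait(cts.Token);
    }
    // Server responded to the ping.
}
catch (Exception)
{
    // Ping failed or timed out: the server is unreachable.
}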

Related

avoid concurrent access of postgres db

We have two .net services (.Net core console applications) which are accessing a postgres db table.
Service 1 inserts some 500 rows every 1 minute. It runs as a background thread.
Service 2 reads data from the same table continuously. There is an MQTT publisher which keeps reading data from this table when any new data is requested. This also happens very frequently, i.e. at least 4-5 times a minute.
We are getting a "FATAL: sorry, too many clients already" error.
What I am assuming is that since writes and reads are happening simultaneously so frequently, the connections are not getting disposed properly.
Is there a way to avoid reads whenever a write is happening?
EDITED
Thanks for the reply. I know some connection pooling is happening but I am not sure where, so my question was how to avoid concurrent access of the postgres db.
I was not sure what part of the code I could post to make the question clearer.
I have a using clause on the DbContext and also dispose it, like below.
This is retrieval section
using (PlatinumDBContext platinumDBContext = new PlatinumDBContext())
{
    try
    {
        var data = platinumDBContext.TrendPoints.Where(x => ids.Contains(x.TrendPointID) && x.TimeStamp >= DateTime.Now.AddHours(-timeinHours));
        result = data.Select(x => new Last24hours
        {
            Label = x.TrendPointID.ToString(),
            Value = (double)x.TrendPointValue,
            time = x.TimeStamp.ToString("MM/dd/yyyy HH:mm:ss")
        }).ToList();
    }
    catch (Exception oE)
    {
    }
    finally
    {
        platinumDBContext.Dispose();
    }
}
This is the insertion section
using (PlatinumDBContext platinumDBContext = new PlatinumDBContext())
{
    try
    {
        foreach (var point in trendPoints)
        {
            if (point != null)
            {
                TrendPoint item = new TrendPoint();
                item.CreatedDate = DateTime.Now;
                item.ObjectState = ObjectState.Added;
                item.TrendPointID = point.TrendID;
                item.TrendPointValue = double.IsNaN(point.Value) ? decimal.MinValue : (decimal)point.Value;
                item.TimeStamp = new DateTime(point.TimeStamp);
                platinumDBContext.Add(item);
            }
        }
        platinumDBContext.SaveChanges();
    }
    catch (Exception ex)
    {
    }
    finally
    {
        platinumDBContext.Dispose();
    }
}
Regards,
Geervani
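For what it's worth, Npgsql pools connections by default, and "FATAL: sorry, too many clients already" usually means the pools of the two services together exceed PostgreSQL's max_connections, so capping the pool is often enough. A minimal sketch using the standard Npgsql connection-string settings, assuming Npgsql is the provider underneath the EF Core context (host, database, credentials and limits below are placeholders):
using Npgsql;

// Sketch: cap the pool so both services together stay under PostgreSQL's
// max_connections. MaxPoolSize/Timeout are standard Npgsql settings;
// all values here are examples.
var builder = new NpgsqlConnectionStringBuilder
{
    Host = "localhost",
    Database = "platinum",
    Username = "app",
    Password = "secret",
    MaxPoolSize = 20,   // per-process ceiling; the Npgsql default is 100
    Timeout = 15        // seconds to wait for a pooled connection before failing
};

using (var conn = new NpgsqlConnection(builder.ConnectionString))
{
    conn.Open();        // the connection is rented from, and later returned to, the pool
}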

OPC UA Client: capture the lost item values from the UA server after a disconnect/connection error?

I am building an OPC UA client using the OPC Foundation SDK. I am able to create a subscription containing some MonitoredItems.
On the OPC UA server these monitored items change value constantly (every second or so).
I want to disconnect the client (to simulate a broken connection), keep the subscription alive and wait for a while. Then I reconnect and get my subscriptions back, but I also want all the monitored item values that were queued up during the disconnect. Right now I only get the last server value on reconnect.
I am setting a queuesize:
monitoredItem.QueueSize = 100;
To kind of simulate a connection error I have set "delete subscription" to false on CloseSession:
m_session.CloseSession(new RequestHeader(), false);
My question is how to capture the content of the queue after a disconnect/connection error?
Should the 'lost values' arrive as new MonitoredItem_Notification events automatically when the client reconnects?
Should the SubscriptionId be the same as before the connection was broken?
Should the SessionId be the same, or will a new SessionId let me keep the existing subscriptions? What is the best way to simulate a connection error?
Many questions :-)
Below is a sample from the code where I create the subscription containing some MonitoredItems, and the MonitoredItem_Notification event method.
Any OPC UA Guru out there??
if (node.Displayname == "node to monitor")
{
    MonitoredItem mon = CreateMonitoredItem((NodeId)node.reference.NodeId, node.Displayname);
    m_subscription.AddItem(mon);
    m_subscription.ApplyChanges();
}

private MonitoredItem CreateMonitoredItem(NodeId nodeId, string displayName)
{
    if (m_subscription == null)
    {
        m_subscription = new Subscription(m_session.DefaultSubscription);
        m_subscription.PublishingEnabled = true;
        m_subscription.PublishingInterval = 3000; //1000;
        m_subscription.KeepAliveCount = 10;
        m_subscription.LifetimeCount = 10;
        m_subscription.MaxNotificationsPerPublish = 1000;
        m_subscription.Priority = 100;
        bool cache = m_subscription.DisableMonitoredItemCache;
        m_session.AddSubscription(m_subscription);
        m_subscription.Create();
    }

    // Add the new monitored item.
    MonitoredItem monitoredItem = new MonitoredItem(m_subscription.DefaultItem);

    // Each time a monitored item is sampled, the server evaluates the sample using a filter defined for each monitored item.
    // The server uses the filter to determine if the sample should be reported. The type of filter depends on the type of item:
    // DataChangeFilter for Variables, EventFilter when monitoring Events, etc.
    //MonitoringFilter f = new MonitoringFilter();
    //DataChangeFilter f = new DataChangeFilter();
    //f.DeadbandValue
    monitoredItem.StartNodeId = nodeId;
    monitoredItem.AttributeId = Attributes.Value;
    monitoredItem.DisplayName = displayName;

    // Disabled, Sampling, or Reporting (Reporting includes Sampling).
    monitoredItem.MonitoringMode = MonitoringMode.Reporting;

    // How often the client wishes the server to check for new values. Must be 0 if the item is an event.
    // If negative, the SamplingInterval is set equal to the PublishingInterval (inherited).
    // The subscription's KeepAliveCount should always be longer than the SamplingInterval/PublishingInterval.
    monitoredItem.SamplingInterval = 500;

    // Number of samples stored on the server between each report.
    monitoredItem.QueueSize = 100;
    monitoredItem.DiscardOldest = true; // Discard oldest values when the queue is full.
    monitoredItem.CacheQueueSize = 100;

    monitoredItem.Notification += m_MonitoredItem_Notification;

    if (ServiceResult.IsBad(monitoredItem.Status.Error))
    {
        return null;
    }
    return monitoredItem;
}
private void MonitoredItem_Notification(MonitoredItem monitoredItem, MonitoredItemNotificationEventArgs e)
{
    if (this.InvokeRequired)
    {
        this.BeginInvoke(new MonitoredItemNotificationEventHandler(MonitoredItem_Notification), monitoredItem, e);
        return;
    }
    try
    {
        if (m_session == null)
        {
            return;
        }
        MonitoredItemNotification notification = e.NotificationValue as MonitoredItemNotification;
        if (notification == null)
        {
            return;
        }
        string sess = m_session.SessionId.Identifier.ToString();
        string s = string.Format(" MonitoredItem: {0}\t Value: {1}\t Status: {2}\t SourceTimeStamp: {3}",
            monitoredItem.DisplayName,
            notification.Value.WrappedValue.ToString(),
            notification.Value.StatusCode.ToString(),
            notification.Value.SourceTimestamp.ToLocalTime().ToString("HH:mm:ss.fff"));
        richTextBox1.AppendText(s + "SessionId: " + sess);
    }
    catch (Exception exception)
    {
        ClientUtils.HandleException(this.Text, exception);
    }
}
I don't know how much of this, if any, the SDK you're using does for you, but the approach when reconnecting is generally:
1. Try to resume (re-activate) your old session. If this is successful your subscriptions will already exist and all you need to do is send more PublishRequests. Since you're trying to test by closing the session, this probably won't work.
2. Create a new session and then call the TransferSubscriptions service to transfer the previous subscriptions to your new session. You can then start sending PublishRequests and you'll get the queued notifications.
Again, depending on the stack/SDK/toolkit you're using, some or none of this may be handled for you.
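If the stack leaves it to you, the second approach looks roughly like the sketch below. It is written against the OPC Foundation UA .NET Standard client API; Session.TransferSubscriptions only exists in newer releases and the newSession object is assumed to have been created already, so treat the exact names as assumptions rather than a definitive recipe.
// Sketch (assumes a new Session, newSession, has already been created with the
// same client certificate and user identity as the old one).
// TransferSubscriptions is available in newer OPC Foundation UA .NET Standard
// releases; older SDKs only expose the raw TransferSubscriptions service.
var subscriptions = new SubscriptionCollection { m_subscription };

// sendInitialValues = false: deliver the values queued while disconnected
// instead of a fresh snapshot of the current value.
if (newSession.TransferSubscriptions(subscriptions, false))
{
    // The subscription keeps its SubscriptionId, and the queued notifications
    // arrive through the existing MonitoredItem.Notification handlers once
    // PublishRequests resume.
}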

Cloud Service for incoming TCP connections hangs

I'm developing a cloud service (worker role) for collecting data from a number of instruments. These instruments report data randomly every minute or so. The service itself is not performance critical and doesn't need to be asynchronous. The instruments are able to resend their data for up to an hour after a failed connection attempt.
I have tried several implementations for my cloud service including this one:
http://msdn.microsoft.com/en-us/library/system.net.sockets.tcplistener.stop(v=vs.110).aspx
But all of them make my cloud service hang sooner or later (sometimes within an hour).
I suspect something is wrong with my code. I have a lot of logging in my code but I get no errors. The service just stops receiving incoming connections.
In the Azure portal it seems like the service is running fine. No error logs and no suspicious CPU usage, etc.
If I restart the service it will run fine again until it hangs next time.
Would be most grateful if someone could help me with this.
public class WorkerRole : RoleEntryPoint
{
    private LoggingService _loggingService;

    public override void Run()
    {
        _loggingService = new LoggingService();
        StartListeningForIncommingTCPConnections();
    }

    private void StartListeningForIncommingTCPConnections()
    {
        TcpListener listener = null;
        try
        {
            listener = new TcpListener(RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["WatchMeEndpoint"].IPEndpoint);
            listener.Start();
            while (true)
            {
                _loggingService.Log(SeverityLevel.Info, "Waiting for connection...");
                var client = listener.AcceptTcpClient();
                var remoteEndPoint = client.Client != null ? client.Client.RemoteEndPoint.ToString() : "Unknown";
                _loggingService.Log(SeverityLevel.Info, String.Format("Connected to {0}", remoteEndPoint));
                var netStream = client.GetStream();
                var data = String.Empty;
                using (var reader = new StreamReader(netStream, Encoding.ASCII))
                {
                    data = reader.ReadToEnd();
                }
                _loggingService.Log(SeverityLevel.Info, "Received data: " + data);
                ProcessData(data); // data is processed and stored in the database (all resources are released when done)
                client.Close();
                _loggingService.Log(SeverityLevel.Info, String.Format("Connection closed for {0}", remoteEndPoint));
            }
        }
        catch (Exception exception)
        {
            _loggingService.Log(SeverityLevel.Error, exception.Message);
        }
        finally
        {
            if (listener != null)
                listener.Stop();
        }
    }

    private void ProcessData(String data)
    {
        try
        {
            var processor = new Processor();
            var lines = data.Split('\n');
            foreach (var line in lines)
                processor.ProcessLine(line);
            processor.ProcessMessage();
        }
        catch (Exception ex)
        {
            _loggingService.Log(SeverityLevel.Error, ex.Message);
            throw new Exception(ex.InnerException.Message);
        }
    }
}
One strange observation I just made:
I checked the log recently and no instrument has connected for the last 30 minutes (which indicates that the service is down).
I connected to the service myself via a TCP client I've written and uploaded some test data.
This worked fine.
When I checked the log again my test data had been stored.
The strange thing is that 4 other instruments had connected at about the same time and sent their data successfully.
Why couldn't they connect by themselves before I connected with my test client?
Also, what does the idleTimeoutInMinutes setting in .csdef do for an InputEndpoint?
===============================================
Edit:
For the last couple of days my cloud service has been running successfully.
Unfortunately, this morning the last log entry was from this line:
_loggingService.Log(SeverityLevel.Info, String.Format("Connected to {0}", remoteEndPoint));
No other connections could be made after this, not even from my own test TCP client (I didn't get any error, but no data was stored and there were no new log entries).
This makes me think that the following code causes the service to hang:
var netStream = client.GetStream();
var data = String.Empty;
using (var reader = new StreamReader(netStream, Encoding.ASCII))
{
    data = reader.ReadToEnd();
}
I've read somewhere that StreamReader's ReadToEnd() could hang. Is this possible?
I have now changed this piece of code to this:
int i;
var bytes = new Byte[256];
var data = new StringBuilder();
const int dataLimit = 10;
var dataCount = 0;
while ((i = netStream.Read(bytes, 0, bytes.Length)) != 0)
{
    data.Append(Encoding.ASCII.GetString(bytes, 0, i));
    if (dataCount >= dataLimit)
    {
        _loggingService.Log(SeverityLevel.Error, "Reached data limit");
        break;
    }
    dataCount++;
}
Another explanation could be something hanging in the database. I use the SqlConnection and SqlCommand classes to read and write to my database. I always close my connection afterwards (finally block).
SqlConnection and SqlCommand should have default timeouts, right?
===============================================
Edit:
After some more debugging I found out that when the service wasn't responding it "hung" on this line of code:
while ((i = netStream.Read(bytes, 0, bytes.Length)) != 0)
After some digging I found out that the NetworkStream class and its read methods can actually hang, even though MS declares otherwise.
NetworkStream read hangs
I've now changed my code into this:
Thread thread = null;
var task = Task.Factory.StartNew(() =>
{
    thread = Thread.CurrentThread;
    while ((i = netStream.Read(bytes, 0, bytes.Length)) != 0)
    {
        // Translate data bytes to an ASCII string.
        data.Append(Encoding.ASCII.GetString(bytes, 0, i));
    }
    streamReadSucceeded = true;
});
task.Wait(5000);
if (streamReadSucceeded)
{
    //Process data
}
else
{
    thread.Abort();
}
Hopefully this will stop the hanging.
I'd say that part of your problem is that you are processing your data on the thread that listens for connections from clients. This would prevent new clients from connecting if another client has started a long-running operation of some type. I'd suggest you defer your processing to worker threads, thus freeing the "listener" thread to accept new connections.
Another problem you could be experiencing: if your service throws an error, it will stop accepting connections as well.
private static void ListenForClients()
{
    tcpListener.Start();
    while (true)
    {
        TcpClient client = tcpListener.AcceptTcpClient();
        Thread clientThread = new Thread(new ParameterizedThreadStart(HandleClientComm));
        clientThread.Start(client);
    }
}

private static void HandleClientComm(object obj)
{
    try
    {
        using (TcpClient tcpClient = (TcpClient)obj)
        {
            Console.WriteLine("Got Client...");
            using (NetworkStream clientStream = tcpClient.GetStream())
            using (StreamWriter writer = new StreamWriter(clientStream))
            using (StreamReader reader = new StreamReader(clientStream))
            {
                //do stuff
            }
        }
    }
    catch (Exception ex)
    {
    }
}
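If you would rather not burn a thread per client, the same idea can be expressed with the async socket APIs. A sketch under those assumptions: it reuses the _loggingService, SeverityLevel and ProcessData members from the question, and the important property is that the accept loop never waits on any single client.
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

// Sketch: the accept loop never blocks on a client, so one hung or slow
// instrument cannot stop new connections from being accepted.
private async Task ListenForClientsAsync(TcpListener listener, CancellationToken token)
{
    listener.Start();
    while (!token.IsCancellationRequested)
    {
        TcpClient client = await listener.AcceptTcpClientAsync();
        // Handle each client on the thread pool; exceptions stay per-connection.
        _ = Task.Run(() => HandleClientAsync(client));
    }
}

private async Task HandleClientAsync(TcpClient client)
{
    try
    {
        using (client)
        using (var stream = client.GetStream())
        using (var reader = new StreamReader(stream, Encoding.ASCII))
        {
            string data = await reader.ReadToEndAsync();
            ProcessData(data); // same processing as in the question
        }
    }
    catch (Exception ex)
    {
        _loggingService.Log(SeverityLevel.Error, ex.Message);
    }
}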

At what point does the MongoDB C# driver open a connection?

I'm having a problem with lots of connections being opened to the mongo db.
The readme on the Github page for the C# driver gives the following code:
using MongoDB.Bson;
using MongoDB.Driver;
var client = new MongoClient("mongodb://localhost:27017");
var server = client.GetServer();
var database = server.GetDatabase("foo");
var collection = database.GetCollection("bar");
collection.Insert(new BsonDocument("Name", "Jack"));
foreach(var document in collection.FindAll())
{
Console.WriteLine(document["Name"]);
}
At what point does the driver open the connection to the server? Is it at the GetServer() method or is it the Insert() method?
I know that we should have a static object for the client, but should we also have a static object for the server and database as well?
Late answer... but the server connection is created at this point:
var client = new MongoClient("mongodb://localhost:27017");
Everything else is just getting references for various objects.
See: http://docs.mongodb.org/ecosystem/tutorial/getting-started-with-csharp-driver/
With the latest MongoDB drivers for C#, the connection happens at the actual database operation, e.g. db.Collection.Find() or db.collection.InsertOne().
{
    // Code for initialization.
    // For a localhost connection there is no need to specify the db server url and port.
    var client = new MongoClient("mongodb://localhost:27017/");
    var db = client.GetDatabase("TestDb");
    Collection = db.GetCollection<T>("testCollection");
}
// Code for db operations
{
    // The connection happens here.
    var collection = db.Collection;
    // Your find operation
    var model = collection.Find(Builders<Model>.Filter.Empty).ToList();
    // Your insert operation
    collection.InsertOne(Model);
}
I found this out after I stopped my mongod server and debugged the code with a breakpoint. Initialization happened smoothly, but an error was thrown at the db operation.
Hope this helps.
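A quick way to see the lazy behaviour described above for yourself is to stop mongod and step through a snippet like the following sketch (the database and collection names are placeholders; the failure only surfaces at the first operation):
using System;
using MongoDB.Bson;
using MongoDB.Driver;

// Sketch: with mongod stopped, none of these three lines perform any I/O.
var client = new MongoClient("mongodb://localhost:27017");
var db = client.GetDatabase("TestDb");
var collection = db.GetCollection<BsonDocument>("testCollection");

try
{
    // The first real operation triggers server selection and fails here,
    // typically with a TimeoutException once the server-selection timeout expires.
    collection.InsertOne(new BsonDocument("Name", "Jack"));
}
catch (TimeoutException)
{
    // No server could be selected: the database is unreachable.
}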

MongoDB connection over SSL in Play Framework

I am using Play 1.2.5, MongoDB and Morphia module 1.2.9 in my application.
To create a secure and encrypted connection to the db, I installed MongoDB with SSL enabled using the following links:
http://docs.mongodb.org/manual/administration/ssl/
http://www.mongodb.org/about/tutorial/build-mongodb-on-linux/
Now I'm able to connect to the mongo shell using mongo --ssl, and I'm also able to verify whether MongoDB is running using https://mylocalhost.com:27017/.
But after enabling SSL in MongoDB, I am not able to connect to it through my play application.
Following are the lines I used in the application.conf to connect to the db
morphia.db.host=localhost
morphia.db.port=27017
morphia.db.db=test
Is there any configurations available to connect over SSL?
I did some googling and was not able to find any solutions. Please help me with this.
Thanks in advance.
The Morphia module does not support SSL connections for the moment, and I am not sure the Morphia library supports it. Please create an issue on GitHub to track this requirement: https://github.com/greenlaw110/play-morphia/issues?state=open
I use spring-data and came up against the same issue. With spring-data I was able to construct a Mongo object myself and pass it as a constructor param. Morphia might have the same mechanism. The key is:
options.socketFactory = SSLSocketFactory.getDefault();
After that, make sure you install the SSL public key into your key store and it should work.
public class MongoFactory {
    public Mongo buildMongo(String replicaSet, boolean slaveOk, int writeNumber, int connectionsPerHost, boolean useSSL) throws UnknownHostException {
        ServerAddress addr = new ServerAddress();
        List<ServerAddress> addresses = new ArrayList<ServerAddress>();
        int port = 0;
        String host = new String();
        if (replicaSet == null)
            throw new UnknownHostException("Please provide hostname");
        replicaSet = replicaSet.trim();
        if (replicaSet.length() == 0)
            throw new UnknownHostException("Please provide hostname");
        StringTokenizer tokens = new StringTokenizer(replicaSet, ",");
        while (tokens.hasMoreTokens()) {
            String token = tokens.nextToken();
            int idx = token.indexOf(":");
            if (idx > 0) {
                port = Integer.parseInt(token.substring(idx + 1));
                host = token.substring(0, idx).trim();
            }
            addr = new ServerAddress(host.trim(), port);
            addresses.add(addr);
        }
        MongoOptions options = new MongoOptions();
        options.autoConnectRetry = true;
        if (useSSL) {
            options.socketFactory = SSLSocketFactory.getDefault();
        }
        options.connectionsPerHost = connectionsPerHost;
        options.w = writeNumber;
        options.fsync = false;
        options.wtimeout = 5000;
        options.connectTimeout = 5000;
        options.autoConnectRetry = true;
        options.socketKeepAlive = true;
        Mongo m = new Mongo(addresses, options);
        if (slaveOk) {
            m.setReadPreference(ReadPreference.SECONDARY);
        }
        return m;
    }
}
}