We have two .NET services (.NET Core console applications) which access a PostgreSQL database table.
Service 1 inserts some 500 rows every minute. It runs as a background thread.
Service 2 reads data from the same table continuously. There is an MQTT publisher which keeps reading data from this table whenever new data is requested. This also happens very frequently, i.e. at least 4-5 times a minute.
We are getting a "FATAL: sorry, too many clients already" error.
My assumption is that since writes and reads are happening simultaneously and so frequently, the connections are not getting disposed properly.
Is there a way to avoid reads whenever a write is happening?
EDITED
Thanks for the reply. I know some connection pooling is happening, but I'm not sure where, so my question was how to avoid concurrent access to the Postgres DB.
I wasn't sure what part of the code I could post to make the question clear.
I have a using clause on the DbContext and also dispose it, like below.
This is the retrieval section:
using (PlatinumDBContext platinumDBContext = new PlatinumDBContext())
{
    try
    {
        var data = platinumDBContext.TrendPoints.Where(x => ids.Contains(x.TrendPointID) && x.TimeStamp >= DateTime.Now.AddHours(-timeinHours));
        result = data.Select(x => new Last24hours
        {
            Label = x.TrendPointID.ToString(),
            Value = (double)x.TrendPointValue,
            time = x.TimeStamp.ToString("MM/dd/yyyy HH:mm:ss")
        }).ToList();
    }
    catch (Exception oE)
    {
    }
    finally
    {
        platinumDBContext.Dispose();
    }
}
This is the insertion section:
using (PlatinumDBContext platinumDBContext = new PlatinumDBContext())
{
    try
    {
        foreach (var point in trendPoints)
        {
            if (point != null)
            {
                TrendPoint item = new TrendPoint();
                item.CreatedDate = DateTime.Now;
                item.ObjectState = ObjectState.Added;
                item.TrendPointID = point.TrendID;
                item.TrendPointValue = double.IsNaN(point.Value) ? decimal.MinValue : (decimal)point.Value;
                item.TimeStamp = new DateTime(point.TimeStamp);
                platinumDBContext.Add(item);
            }
        }
        platinumDBContext.SaveChanges();
    }
    catch (Exception ex)
    {
    }
    finally
    {
        platinumDBContext.Dispose();
    }
}
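If it helps, here is a minimal sketch of where I assume the pool could be capped (Npgsql EF Core provider assumed; host, database and credentials are placeholders, not my real values), so that both services together stay below Postgres's max_connections:

// Minimal sketch, assuming the Npgsql EF Core provider (Microsoft.EntityFrameworkCore
// plus Npgsql.EntityFrameworkCore.PostgreSQL); connection string values are placeholders.
public class PlatinumDBContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Pooling is on by default; "Maximum Pool Size" caps how many physical
        // connections this process can hold open, and "Timeout" bounds how long
        // a caller waits for a free pooled connection before failing.
        optionsBuilder.UseNpgsql(
            "Host=localhost;Database=platinum;Username=app;Password=secret;" +
            "Pooling=true;Maximum Pool Size=20;Timeout=15");
    }

    public DbSet<TrendPoint> TrendPoints { get; set; }
}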
Regards,
Geervani
Related
I'm developing a cloud service (worker role) for collecting data from a number of instruments. These instruments report data randomly, every minute or so. The service itself is not performance critical and doesn't need to be asynchronous. The instruments are able to resend their data for up to an hour after a failed connection attempt.
I have tried several implementations for my cloud service, including this one:
http://msdn.microsoft.com/en-us/library/system.net.sockets.tcplistener.stop(v=vs.110).aspx
But all of them hang my cloud service sooner or later (sometimes within an hour).
I suspect something is wrong with my code. I have a lot of logging in my code, but I get no errors. The service just stops receiving incoming connections.
In the Azure portal it seems like the service is running fine. No error logs and no suspicious CPU usage, etc.
If I restart the service it will run fine again until it hangs the next time.
I would be most grateful if someone could help me with this.
public class WorkerRole : RoleEntryPoint
{
    private LoggingService _loggingService;

    public override void Run()
    {
        _loggingService = new LoggingService();
        StartListeningForIncommingTCPConnections();
    }

    private void StartListeningForIncommingTCPConnections()
    {
        TcpListener listener = null;
        try
        {
            listener = new TcpListener(RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["WatchMeEndpoint"].IPEndpoint);
            listener.Start();
            while (true)
            {
                _loggingService.Log(SeverityLevel.Info, "Waiting for connection...");
                var client = listener.AcceptTcpClient();
                var remoteEndPoint = client.Client != null ? client.Client.RemoteEndPoint.ToString() : "Unknown";
                _loggingService.Log(SeverityLevel.Info, String.Format("Connected to {0}", remoteEndPoint));

                var netStream = client.GetStream();
                var data = String.Empty;
                using (var reader = new StreamReader(netStream, Encoding.ASCII))
                {
                    data = reader.ReadToEnd();
                }
                _loggingService.Log(SeverityLevel.Info, "Received data: " + data);

                ProcessData(data); // data is processed and stored in the database (all resources are released when done)
                client.Close();
                _loggingService.Log(SeverityLevel.Info, String.Format("Connection closed for {0}", remoteEndPoint));
            }
        }
        catch (Exception exception)
        {
            _loggingService.Log(SeverityLevel.Error, exception.Message);
        }
        finally
        {
            if (listener != null)
                listener.Stop();
        }
    }

    private void ProcessData(String data)
    {
        try
        {
            var processor = new Processor();
            var lines = data.Split('\n');
            foreach (var line in lines)
                processor.ProcessLine(line);
            processor.ProcessMessage();
        }
        catch (Exception ex)
        {
            _loggingService.Log(SeverityLevel.Error, ex.Message);
            throw new Exception(ex.InnerException.Message);
        }
    }
}
One strange observation I just made:
I checked the log recently and no instrument has connected for the last 30 minutes (which indicates that the service is down).
I connected to the service myself via a TCP client I've written and uploaded some test data.
This worked fine.
When I checked the log again, my test data had been stored.
The strange thing is that 4 other instruments had connected at about the same time and sent their data successfully.
Why couldn't they connect by themselves before I connected with my test client?
Also, what does the idleTimeoutInMinutes setting in .csdef do for an InputEndpoint?
===============================================
Edit:
For a couple of days now my cloud service has been running successfully.
Unfortunately, this morning the last log entry was from this line:
_loggingService.Log(SeverityLevel.Info, String.Format("Connected to {0}", remoteEndPoint));
No other connections could be made after this, not even from my own test TCP client (I didn't get any error though, but no data was stored and no new log entries appeared).
This makes me think that the following code causes the service to hang:
var netStream = client.GetStream();
var data = String.Empty;
using (var reader = new StreamReader(netStream, Encoding.ASCII))
{
    data = reader.ReadToEnd();
}
I've read somewhere that StreamReader's ReadToEnd() could hang. Is this possible?
I have now changed this piece of code to this:
int i;
var bytes = new Byte[256];
var data = new StringBuilder();
const int dataLimit = 10;
var dataCount = 0;
while ((i = netStream.Read(bytes, 0, bytes.Length)) != 0)
{
    data.Append(Encoding.ASCII.GetString(bytes, 0, i));
    if (dataCount >= dataLimit)
    {
        _loggingService.Log(SeverityLevel.Error, "Reached data limit");
        break;
    }
    dataCount++;
}
Another explanation could be something hanging in the database. I use the SqlConnection and SqlCommand classes to read from and write to my database. I always close my connection afterwards (in a finally block).
SqlConnection and SqlCommand should have default timeouts, right?
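For reference, a minimal sketch of setting those timeouts explicitly (the connection string, table and value below are placeholders); the defaults are 15 seconds to open a connection and 30 seconds per command, so a hung database call should fail instead of blocking the listener thread indefinitely:

// Sketch only: explicit SqlConnection/SqlCommand timeouts (System.Data.SqlClient).
// Connection string, table and parameter value are placeholders.
using (var connection = new SqlConnection(
    "Server=.;Database=Instruments;Integrated Security=true;Connect Timeout=15"))
using (var command = new SqlCommand("INSERT INTO Readings (Line) VALUES (@line)", connection))
{
    command.CommandTimeout = 30; // seconds; 30 is also the default
    command.Parameters.AddWithValue("@line", "example payload"); // placeholder value
    connection.Open();
    command.ExecuteNonQuery();
}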
===============================================
Edit:
After some more debugging I found out that when the service wasn't responding it hung on this line of code:
while ((i = netStream.Read(bytes, 0, bytes.Length)) != 0)
After some digging I found out that the NetworkStream class and its read methods can actually hang, even though MS declares otherwise.
NetworkStream read hangs
I've now changed my code to this:
Thread thread = null;
var task = Task.Factory.StartNew(() =>
{
    thread = Thread.CurrentThread;
    while ((i = netStream.Read(bytes, 0, bytes.Length)) != 0)
    {
        // Translate data bytes to an ASCII string.
        data.Append(Encoding.ASCII.GetString(bytes, 0, i));
    }
    streamReadSucceeded = true;
});
task.Wait(5000);

if (streamReadSucceeded)
{
    // Process data
}
else
{
    thread.Abort();
}
Hopefully this will stop the hanging.
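An alternative I'm considering instead of aborting the thread (a sketch that reuses netStream, data, streamReadSucceeded and _loggingService from the code above): NetworkStream supports ReadTimeout, so a stalled Read throws an IOException rather than blocking forever.

// Sketch: let the read itself time out instead of aborting a thread.
netStream.ReadTimeout = 5000; // milliseconds; a stalled Read now throws IOException
try
{
    int i;
    var bytes = new byte[256];
    while ((i = netStream.Read(bytes, 0, bytes.Length)) != 0)
    {
        data.Append(Encoding.ASCII.GetString(bytes, 0, i));
    }
    streamReadSucceeded = true;
}
catch (IOException)
{
    _loggingService.Log(SeverityLevel.Error, "Read timed out");
}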
I'd say that part of your problem is that you are processing your data on the thread that listens for connections from clients. This prevents new clients from connecting while another client's long-running operation is in progress. I'd suggest you defer your processing to worker threads, thus freeing the "listener" thread to accept new connections.
Another problem you could be experiencing: if your service throws an unhandled exception, it will stop accepting connections as well.
private static void ListenForClients()
{
    tcpListener.Start();
    while (true)
    {
        TcpClient client = tcpListener.AcceptTcpClient();
        Thread clientThread = new Thread(new ParameterizedThreadStart(HandleClientComm));
        clientThread.Start(client);
    }
}

private static void HandleClientComm(object obj)
{
    try
    {
        using (TcpClient tcpClient = (TcpClient)obj)
        {
            Console.WriteLine("Got Client...");
            using (NetworkStream clientStream = tcpClient.GetStream())
            using (StreamWriter writer = new StreamWriter(clientStream))
            using (StreamReader reader = new StreamReader(clientStream))
            {
                // do stuff
            }
        }
    }
    catch (Exception ex)
    {
    }
}
I am trying to solve a situation with rolling back our data contexts.
We are using one TransactionScope, and inside it two data contexts for two different databases.
At the end we want to save changes on both databases, so we call SaveChanges, but the problem is that when an error occurs on the second database, the changes on the first database are still saved.
What am I doing wrong here that keeps the first database from rolling back?
Thank you,
Jakub
public void DoWork()
{
    using (var scope = new TransactionScope())
    {
        using (var rawData = new IntranetRawDataDevEntities())
        {
            rawData.Configuration.AutoDetectChangesEnabled = true;
            using (var dataWareHouse = new IntranetDataWareHouseDevEntities())
            {
                dataWareHouse.Configuration.AutoDetectChangesEnabled = true;

                // ... some operations with the data; no SaveChanges() is called here.

                // Save changes for all items.
                if (!errors)
                {
                    // First database save.
                    rawData.SaveChanges();

                    // Fake data to fail the second database save.
                    dataWareHouse.Tasks.Add(new PLKPIDashboards.DataWareHouse.Task()
                    {
                        Description = string.Empty,
                        Id = 0,
                        OperationsQueue = new OperationsQueue(),
                        Queue_key = 79,
                        TaskTypeSLAs = new Collection<TaskTypeSLA>(),
                        Tasktype = null
                    });

                    // Second database save.
                    dataWareHouse.SaveChanges();
                    scope.Complete();
                }
                else
                {
                    scope.Dispose();
                }
            }
        }
    }
}
From this article: http://blogs.msdn.com/b/alexj/archive/2009/01/11/savechanges-false.aspx
try using:
rawData.SaveChanges(false);
dataWareHouse.SaveChanges(false);

// if everything is ok
scope.Complete();
rawData.AcceptAllChanges();
dataWareHouse.AcceptAllChanges();
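Put together with the TransactionScope from the question, the whole pattern looks roughly like this. This is only a sketch: with DbContext-based contexts, as in the question, the SaveOptions overload is reached through the underlying ObjectContext (IObjectContextAdapter), which is equivalent to the SaveChanges(false) call from the article.

// Sketch only; assumes EF 4.x DbContext-based contexts as in the question.
using (var scope = new TransactionScope())
using (var rawData = new IntranetRawDataDevEntities())
using (var dataWareHouse = new IntranetDataWareHouseDevEntities())
{
    // ... modify entities in both contexts, no SaveChanges yet ...

    var rawCtx = ((IObjectContextAdapter)rawData).ObjectContext;
    var dwCtx = ((IObjectContextAdapter)dataWareHouse).ObjectContext;

    // Save without accepting changes, so a failure in the second save leaves
    // both change trackers intact and nothing is considered committed.
    rawCtx.SaveChanges(SaveOptions.DetectChangesBeforeSave);
    dwCtx.SaveChanges(SaveOptions.DetectChangesBeforeSave);

    // Both saves succeeded: complete the distributed transaction...
    scope.Complete();

    // ...and only then tell the contexts to accept the saved changes.
    rawCtx.AcceptAllChanges();
    dwCtx.AcceptAllChanges();
}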
I need to save to two different databases after some user action. Currently, I have the following:
using (EFEntities1 dc = new EFEntities1())
{
    dc.USERS.Add(user);
    dc.SaveChanges();
}

using (EFEntities2 dc = new EFEntities2())
{
    dc.USERS.Add(user);
    dc.SaveChanges();
}
These are two separate code blocks within the same method, so I believe that if the second one fails, the first one won't roll back. How do I make sure both transactions roll back if something fails?
You can wrap them in a TransactionScope. Note that this will probably escalate the transaction to the DTC.
using (TransactionScope scope = new TransactionScope())
{
    using (EFEntities1 dc = new EFEntities1())
    {
        dc.USERS.Add(user);
        dc.SaveChanges();
    }

    using (EFEntities2 dc = new EFEntities2())
    {
        dc.USERS.Add(user);
        dc.SaveChanges();
    }

    scope.Complete();
}
We receive a file from a client (Silverlight) via WCF, and on the server side I parse this file. Each line in the file is transformed into an object and stored in the database. If the file is very large (10,000 entries or more), I get the following error (MSSQL Express):
The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
I have tried a lot (setting the TransactionOptions timeout and so on), but nothing works. The above exception is raised sometimes after 3,000 and sometimes after 6,000 objects have been processed, but I can't succeed in processing all objects.
I'm appending my source; hopefully somebody has an idea and can help me:
public xxxResponse SendLogFile(xxxRequest request)
{
    const int INTERMEDIATE_SAVE = 100;
    using (var context = new EntityFramework.Models.Cubes_ServicesEntities())
    {
        // start a new transaction scope with a timeout of 0 (unlimited time, for development purposes)
        using (var transactionScope = new TransactionScope(TransactionScopeOption.RequiresNew,
            new TransactionOptions
            {
                IsolationLevel = System.Transactions.IsolationLevel.Serializable,
                Timeout = TimeSpan.FromSeconds(0)
            }))
        {
            try
            {
                // open the connection manually to prevent an undesired close of the DB
                // (MSDTC)
                context.Connection.Open();
                int timeout = context.Connection.ConnectionTimeout;
                int Counter = 0;

                // read the file submitted by the client
                using (var reader = new StreamReader(new MemoryStream(request.LogFile)))
                {
                    try
                    {
                        while (!reader.EndOfStream)
                        {
                            Counter++;
                            Counter2++;
                            string line = reader.ReadLine();
                            if (String.IsNullOrEmpty(line)) continue;

                            // Create a new object
                            DomainModel.LogEntry le = CreateLogEntryObject(line);

                            // and attach it to the context, setting its state to Added.
                            context.AttachTo("LogEntry", le);
                            context.ObjectStateManager.ChangeObjectState(le, EntityState.Added);

                            // while fewer than 100 objects are attached, go on
                            if (Counter != INTERMEDIATE_SAVE) continue;

                            // after 100 objects, make a call to SaveChanges.
                            context.SaveChanges(SaveOptions.None);
                            Counter = 0;
                        }
                    }
                    catch (Exception exception)
                    {
                        // cleanup
                        reader.Close();
                        transactionScope.Dispose();
                        throw exception;
                    }
                }

                // do a final SaveChanges
                context.SaveChanges();
                transactionScope.Complete();
                context.Connection.Close();
            }
            catch (Exception e)
            {
                // cleanup
                transactionScope.Dispose();
                context.Connection.Close();
                throw e;
            }
        }

        var response = CreateSuccessResponse<ServiceSendLogEntryFileResponse>("SendLogEntryFile successful!");
        return response;
    }
}
There is no bulk insert in Entity Framework. You call SaveChanges after every 100 records, but it still executes 100 separate inserts with a database round trip for each one.
Setting the timeout of the transaction also depends on the transaction max timeout, which is configured at machine level (I think the default value is 10 minutes). How long does it take before your operation fails?
The best thing you can do is rewrite your insert logic with plain ADO.NET or with a bulk insert.
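If bulk insert is an option, a minimal SqlBulkCopy sketch could look like the following. The connection string, table and column names are placeholders, not taken from your model.

// Sketch of a plain ADO.NET bulk insert (System.Data / System.Data.SqlClient).
var table = new DataTable();
table.Columns.Add("CreatedOn", typeof(DateTime));
table.Columns.Add("Line", typeof(string));

foreach (string line in logLines) // logLines: the parsed lines from the uploaded file (placeholder)
    table.Rows.Add(DateTime.Now, line);

using (var bulk = new SqlBulkCopy(connectionString)) // connectionString: placeholder
{
    bulk.DestinationTableName = "dbo.LogEntry";
    bulk.BatchSize = 1000;
    bulk.WriteToServer(table);
}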
Btw, "throw exception" and "throw e"? That is the incorrect way to rethrow exceptions.
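A bare throw; rethrows the current exception without resetting its stack trace, for example:

try
{
    context.SaveChanges();
}
catch (Exception)
{
    transactionScope.Dispose(); // cleanup as before
    throw; // rethrows the original exception with its stack trace intact
           // ("throw exception;" or "throw e;" would reset the trace to this line)
}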
Important edit:
SaveChanges(SaveOptions.None) means "do not accept changes after saving", so all records are still in the Added state. Because of that, the first call to SaveChanges inserts the first 100 records. The second call inserts the first 100 again plus the next 100, the third call inserts those 200 again plus the next 100, and so on.
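In other words (a sketch based on the loop in the question), the intermediate save should also accept the changes, which the parameterless SaveChanges() does by default:

// Sketch: let each intermediate save accept the changes so already-saved
// entities leave the Added state and are not re-inserted next time.
if (Counter == INTERMEDIATE_SAVE)
{
    // Parameterless SaveChanges() is equivalent to
    // SaveOptions.AcceptAllChangesAfterSave | SaveOptions.DetectChangesBeforeSave.
    context.SaveChanges();
    Counter = 0;
}

The trade-off is that once changes are accepted, a later rollback of the surrounding transaction leaves the context believing those rows were committed, so accept intermediate saves only if you do not need to replay the batches.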
I had exactly the same issue. I wrote EF code to bulk insert 1000 records at a time.
It was working from the beginning, apart from a small problem with MSDTC that I fixed by allowing remote clients and admin, but after that it was fine. I did a lot of work with this, but one day it JUST STOPPED WORKING.
I am getting:
The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
VERY WEIRD! Sometimes the error changes. My suspicion is that it is MSDTC somehow; strange behavior.
I am now changing the code to not use TransactionScope!
I hate it when something works and then just stops. I also tried to run this in a VM, another enormous waste of time...
My code:
private void AddTicks(FileHelperTick[] fhTicks)
{
    List<ForexEF.Entities.Tick> Ticks = new List<ForexEF.Entities.Tick>();
    var str = LeTicks(ref fhTicks, ref Ticks);
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions()
    {
        IsolationLevel = System.Transactions.IsolationLevel.Serializable,
        Timeout = TimeSpan.FromSeconds(180)
    }))
    {
        ForexEF.EUR_TICKSContext contexto = null;
        try
        {
            contexto = new ForexEF.EUR_TICKSContext();
            contexto.Configuration.AutoDetectChangesEnabled = false;
            int count = 0;
            foreach (var tick in Ticks)
            {
                count++;
                contexto = AddToContext(contexto, tick, count, 1000, true);
            }
            contexto.SaveChanges();
        }
        finally
        {
            if (contexto != null)
                contexto.Dispose();
        }
        scope.Complete();
    }
}

private ForexEF.EUR_TICKSContext AddToContext(ForexEF.EUR_TICKSContext contexto, ForexEF.Entities.Tick tick, int count, int commitCount, bool recreateContext)
{
    contexto.Set<ForexEF.Entities.Tick>().Add(tick);
    if (count % commitCount == 0)
    {
        contexto.SaveChanges();
        if (recreateContext)
        {
            contexto.Dispose();
            contexto = new ForexEF.EUR_TICKSContext();
            contexto.Configuration.AutoDetectChangesEnabled = false;
        }
    }
    return contexto;
}
It times out due to the TransactionScope default maximum timeout; check machine.config for that.
Check out this link:
http://social.msdn.microsoft.com/Forums/en-US/windowstransactionsprogramming/thread/584b8e81-f375-4c76-8cf0-a5310455a394/
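For a quick check of the limits that actually apply on a machine, a small sketch; MaximumTimeout is the value that machine.config's system.transactions machineSettings maxTimeout attribute controls, and it silently caps any longer Timeout requested on a TransactionScope.

// Sketch: inspect the transaction timeout limits in effect (System.Transactions).
Console.WriteLine(TransactionManager.DefaultTimeout);  // typically 00:01:00
Console.WriteLine(TransactionManager.MaximumTimeout);  // typically 00:10:00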
I have the following calls (actually a few more than this; it's the overall method that's in question here):
ThreadPool.QueueUserWorkItem(Database.Instance.RefreshEventData);
ThreadPool.QueueUserWorkItem(Database.Instance.RefreshLocationData);
ThreadPool.QueueUserWorkItem(Database.Instance.RefreshActData);
The 1st point is: is it OK to call methods that call WCF services like this? I tried daisy-chaining them and it was a mess.
An example of one of the refresh methods being called above (they all follow the same pattern, just calling different services and populating different tables):
public void RefreshEventData(object state)
{
    Console.WriteLine("in RefreshEventData");
    var eservices = new AppServicesClient(new BasicHttpBinding(), new EndpointAddress(this.ServciceUrl));

    // default the delta to an old date so that if this is the first run we get everything
    var eventsLastUpdated = DateTime.Now.AddDays(-100);
    try
    {
        eventsLastUpdated = (from s in GuideStar.Data.Database.Main.Table<GuideStar.Data.Event>()
                             orderby s.DateUpdated descending
                             select s).ToList().FirstOrDefault().DateUpdated;
    }
    catch (Exception ex1)
    {
        Console.WriteLine(ex1.Message);
    }

    try
    {
        eservices.GetAuthorisedEventsWithExtendedDataAsync(this.User.Id, this.User.Password, eventsLastUpdated);
    }
    catch (Exception ex)
    {
        Console.WriteLine("error updating events: " + ex.Message);
    }

    eservices.GetAuthorisedEventsWithExtendedDataCompleted += delegate(object sender, GetAuthorisedEventsWithExtendedDataCompletedEventArgs e)
    {
        try
        {
            List<Event> newEvents = e.Result.ToList();
            GuideStar.Data.Database.Main.EventsAdded = e.Result.Count();
            lock (GuideStar.Data.Database.Main)
            {
                GuideStar.Data.Database.Main.Execute("BEGIN");
                foreach (var s in newEvents)
                {
                    GuideStar.Data.Database.Main.InsertOrUpdateEvent(new GuideStar.Data.Event
                    {
                        Name = s.Name,
                        DateAdded = s.DateAdded,
                        DateUpdated = s.DateUpdated,
                        Deleted = s.Deleted,
                        StartDate = s.StartDate,
                        Id = s.Id,
                        Lat = s.Lat,
                        Long = s.Long
                    });
                }
                GuideStar.Data.Database.Main.Execute("COMMIT");
                LocationsCount = 0;
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine("error InsertOrUpdateEvent " + ex.Message);
        }
        finally
        {
            OnDatabaseUpdateStepCompleted(EventArgs.Empty);
        }
    };
}
OnDatabaseUpdateStepCompleted just increments an updateComplete counter when it's called, and once it knows that all of the services have come back OK it removes the waiting spinner and the app carries on.
This works OK the 1st time round, but then sometimes it doesn't, with one of these: http://monobin.com/__m6c83107d
I think the 1st question is: is all this OK? I'm not used to using threading and locks, so I am wandering into new ground here. Is using QueueUserWorkItem like this OK? Should I even be using lock before doing the bulk insert/update? An example of which:
public void InsertOrUpdateEvent(Event festival)
{
    try
    {
        if (!festival.Deleted)
        {
            Main.Insert(festival, "OR REPLACE");
        }
        else
        {
            Main.Delete<Event>(festival);
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine("InsertOrUpdateEvent failed: " + ex.Message);
    }
}
Then the next question is: what am I doing wrong that is causing these SQLite issues?
w://
SQLite is not thread safe.
If you want to access SQLite from more than one thread, you must take a lock before you access any SQLite-related structures.
Like this:
lock (db)
{
    // Do your query or insert here
}
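Applied to the code in the question, that means reads take the same lock as writes. A sketch using Database.Main as the shared lock object; the query and the festival variable are just examples:

// Sketch: every thread that touches SQLite takes the same lock, reads included.
List<GuideStar.Data.Event> events;
lock (GuideStar.Data.Database.Main)
{
    events = GuideStar.Data.Database.Main.Table<GuideStar.Data.Event>().ToList();
}

lock (GuideStar.Data.Database.Main)
{
    GuideStar.Data.Database.Main.InsertOrUpdateEvent(festival); // festival: some Event instance
}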
Sorry, no specific answers, but some thoughts:
Is SQLite even threadsafe? I'm not sure; it may be that it's not (or the wrapper isn't). Can you lock on a more global object, so no two threads are inserting at the same time?
It's possible that the MT GC is getting a little overenthusiastic and releasing your string before it's been used. Maybe keep a local reference to it around during the insert? I've had this happen with view controllers, where I had them in an array (tab controllers, specifically), but if I didn't keep a member variable around with the reference, they got GC'ed.
Could you get the data in a threaded manner, then queue everything up and insert it in a single thread? At least as a test anyway.
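A sketch of that last idea: keep fetching on the thread pool, but funnel every insert through one writer thread via a queue. Names such as pending and newEvent are placeholders.

// Sketch of a single-writer approach (System.Collections.Concurrent, System.Threading):
// worker threads enqueue entities and one dedicated thread owns all SQLite writes.
var pending = new BlockingCollection<GuideStar.Data.Event>();

// The only thread that ever writes to the database.
var writer = new Thread(() =>
{
    foreach (var ev in pending.GetConsumingEnumerable())
        GuideStar.Data.Database.Main.InsertOrUpdateEvent(ev);
});
writer.IsBackground = true;
writer.Start();

// Service callbacks (RefreshEventData etc.) just queue work instead of writing:
pending.Add(newEvent);

// Once every service has reported back:
pending.CompleteAdding();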