I have a MongoDB server running on a 64-bit Amazon EC2 instance (journaling enabled). Yesterday I updated some documents and refreshed the webpage to make sure it reflected the changes. It did.
But today I see that not only are yesterday's changes gone: I have lost a week of updates!
How could this happen, and is it possible to recover the lost data?
Maybe there's something wrong in the way I make the changes?
public function edit_app()
{
    $query = array('_id' => $_POST['id']);
    $apps = $this->mongo->db->apps;
    if ($app = $apps->findOne($query)) {
        $app['title'] = $_POST['title'];
        $app['version'] = $_POST['version'];
        $app['author'] = $_POST['author'];
        // ...
        $apps->save($app);
    }
}
There is not much that can be said definitively based on the information you have provided. I can, however, offer some hints to point you in the right direction:
Think about whether an application process could have been holding some documents in memory (loaded before your update) and re-saved them after your update, overwriting your changes. (See the sketch after these hints.)
Is the server part of a replica set? If so, were all members of the replica set healthy, with the primary up and elected correctly?
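If the first hint applies, one way to close the read-modify-write race in the findOne()/save() pattern is to update only the changed fields atomically with $set, so a stale in-memory copy can never overwrite the rest of the document. A minimal sketch against the legacy PHP MongoCollection API, reusing the field names from the question:
public function edit_app()
{
    $apps = $this->mongo->db->apps;
    // $set touches only the listed fields; another process holding a
    // stale copy of the document cannot clobber anything else.
    $apps->update(
        array('_id' => $_POST['id']),
        array('$set' => array(
            'title'   => $_POST['title'],
            'version' => $_POST['version'],
            'author'  => $_POST['author'],
        ))
    );
}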
I apologize; I must have been blind. There was an error in the edit_app() function:
$app['visible'] = $_POST['visible'];        // was: not set at all when the checkbox is unchecked
$app['visible'] = isset($_POST['visible']); // fixed: always stores a boolean
I have the following method in my Meteor app:
addLocation: (location, tid) => {
    location = LocationSchema.clean(location);
    LocationSchema.validate(location);
    const lid = Locations.insert(location);
    Thoughts.update({_id: tid}, {$push: {locations: lid}});
}
I subscribe to both the Locations and Thoughts collections. Unfortunately, after calling my method only the changes in the Locations collection are visible: my modified thought stays the same, and I have to reload the page to see its changes. Is this a bug in Meteor or my mistake? Do you know a way to solve or work around this problem?
What's more, when I push a value onto the locations array of a thought with Robomongo, the change is visible. It looks like the problem occurs when two changes need to become visible at the same time.
I am facing a peculiar error. I am using AEM 5.6.1.
I have 2 author instances (a1 and a2), both in a cluster. We perform tar optimization on the instances daily between 2 a.m. and 5 a.m. (London timezone). In the error.log of a2, I see the error below every day during that window:
419 ERROR [pool-6-thread-1] org.apache.sling.discovery.impl.cluster.ClusterViewServiceImpl getEstablishedView: the existing established view does not incude the local instance yet! Assuming isolated mode.
I did some research on this and came to know that AEM uses ClusterViewServiceImpl.java for clustering, and in it the code snippet below is basically failing:
EstablishedClusterView clusterViewImpl = new EstablishedClusterView(
config, view, getSlingId());
boolean foundLocal = false;
for (Iterator<InstanceDescription> it = clusterViewImpl
.getInstances().iterator(); it.hasNext();) {
InstanceDescription instance = it.next();
if (instance.isLocal()) {
foundLocal = true;
break;
}
}
if (foundLocal) {
return clusterViewImpl;
} else {
logger.info("getEstablishedView: the existing established view does not incude the local instance yet! Assuming isolated mode.");
return getIsolatedClusterView();
}
Can someone help me understand this in more depth? Does it mean that the clustering is not working properly? What are the possible impacts of this error?
I think you've got a classic case of split brain.
Clustering authors is not a good approach and is discouraged in later versions of AEM, as the authors often get out of sync when they can't talk to each other for whatever reason, usually a temporary network issue. Believe me, they are sensitive.
When communication drops, the slave thinks it no longer has a master and claims to be the master itself. By the time communication is re-established, the damage has been done, as there is no recovery mechanism.
At a minimum, only ever allow users to connect to the primary author, and keep the secondary author as a high-availability standby.
Better still, have everyone write to a single primary author and set it up to replicate on write to a secondary backup author.
Hope that helps.
Because changed data (inserted by application A) needs to be displayed in application B in real time, we decided to go with .find().observe(...).
It does look like:
App A -> Insert -> mongodb <- observe -> publish -> Display App B
This works fine, but there is a delay of about 3-5 seconds between inserting in A and displaying in B. How can I change this?
Initially I thought the oplog observe driver was the default in Meteor versions above 1.0 and reacted in real time. Does it still poll, or is there some other reason for the delay?
Thanks for your explanations.
If you're using the oplog, the changes will be immediate. If you're using polling, it'll take a few seconds, as you wrote.
You need to set MONGO_OPLOG_URL correctly to make this work. (And of course your MongoDB needs to have the oplog enabled, which means running it as a replica set.)
Also, you don't need find().observe() if you're in a reactive context; find() is enough. On the server, though, you might need find().observe(), depending on what you're doing.
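For example, a minimal local setup might look like the following (the database name myapp is a placeholder; the oplog always lives in MongoDB's local database):
# Hypothetical local setup: MongoDB running as a single-node replica set.
export MONGO_URL='mongodb://localhost:27017/myapp'
export MONGO_OPLOG_URL='mongodb://localhost:27017/local'
meteor run
With the oplog readable, Meteor tails it and picks up writes made by other applications almost instantly, instead of waiting for the next poll cycle.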
Did you use DDP.connect? You also have to use onReconnect:
Remote = DDP.connect('http://yourremoteserver');
MyCollection = new Mongo.Collection('same_name', Remote);

// do whatever you need with the collection

let watchCollection = function (query = {}, options = {}) {
    return MyCollection.find(query, options).observe({
        changed: function () { console.log('Something changed!'); }
    });
};

// DDP.onReconnect passes the reconnected connection to its callback,
// so wrap the call rather than passing watchCollection directly
// (otherwise the connection object would arrive as the query argument).
DDP.onReconnect(function () { watchCollection(); });
I have a Windows service running workflows. The workflows are XAMLs loaded from a database (users can define their own workflows using a rehosted designer). It is configured with one instance of SqlWorkflowInstanceStore to persist workflows when they become idle. (It's basically derived from the example code in \ControllingWorkflowApplications from Microsoft's WCF/WF samples.)
But sometimes I get an error like below:
System.Runtime.DurableInstancing.InstanceOwnerException: The execution of an InstancePersistenceCommand was interrupted because the instance owner registration for owner ID 'a426269a-be53-44e1-8580-4d0c396842e8' has become invalid. This error indicates that the in-memory copy of all instances locked by this owner have become stale and should be discarded, along with the InstanceHandles. Typically, this error is best handled by restarting the host.
I've been trying to find the cause, but it is hard to reproduce in development; on production servers, however, I get it once in a while. One hint I found: when I look at the LockOwnersTable, the LockExpiration is set to 01/01/2000 0:00:00 and is not getting updated anymore, while under normal circumstances it should be updated every x seconds according to the host lock renewal period...
So why would SqlWorkflowInstanceStore stop renewing this LockExpiration, and how can I detect the cause of it?
This happens because there are procedures running in the background that try to extend the lock of the instance store every 30 seconds, and it seems that once a connection to the SQL service fails, the instance store is marked as invalid.
You can see the same behaviour if you delete the instance store record from the [LockOwnersTable] table.
The proposed solution is that when this exception fires, you free the old instance store and initialize a new one:
public class WorkflowInstanceStore : IWorkflowInstanceStore, IDisposable
{
    private InstanceStore _instanceStore;

    public WorkflowInstanceStore(string connectionString)
    {
        // Register this host as the owner of the instance store.
        _instanceStore = new SqlWorkflowInstanceStore(connectionString);
        InstanceHandle handle = _instanceStore.CreateInstanceHandle();
        InstanceView view = _instanceStore.Execute(handle,
            new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(30));
        handle.Free();
        _instanceStore.DefaultInstanceOwner = view.InstanceOwner;
    }

    public InstanceStore Store
    {
        get { return _instanceStore; }
    }

    public void Dispose()
    {
        // Deregister the owner so its locks are released cleanly.
        if (null != _instanceStore)
        {
            var deleteOwner = new DeleteWorkflowOwnerCommand();
            InstanceHandle handle = _instanceStore.CreateInstanceHandle();
            _instanceStore.Execute(handle, deleteOwner, TimeSpan.FromSeconds(10));
            handle.Free();
        }
    }
}
You can find the best practices for creating an instance store handle at this link:
Workflow Instance Store Best Practices
This is an old thread but I just stumbled on the same issue.
Damir's Corner suggests checking whether the instance handle is still valid before calling the instance store. I hereby quote the whole post:
Certain aspects of Workflow Foundation are still poorly documented; the persistence framework being one of them. The following snippet is typically used for setting up the instance store:
var instanceStore = new SqlWorkflowInstanceStore(connectionString);
instanceStore.HostLockRenewalPeriod = TimeSpan.FromSeconds(30);
var instanceHandle = instanceStore.CreateInstanceHandle();
var view = instanceStore.Execute(instanceHandle,
new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(10));
instanceStore.DefaultInstanceOwner = view.InstanceOwner;
It's difficult to find a detailed explanation of what all of this does; and to be honest, usually it's not necessary. At least not until you start encountering problems, such as InstanceOwnerException:
The execution of an InstancePersistenceCommand was interrupted because the instance owner registration for owner ID '9938cd6d-a9cb-49ad-a492-7c087dcc93af' has become invalid. This error indicates that the in-memory copy of all instances locked by this owner have become stale and should be discarded, along with the InstanceHandles. Typically, this error is best handled by restarting the host.
The error is closely related to the HostLockRenewalPeriod property, which defines how long the obtained instance handle is valid without being renewed. If you try monitoring the database while an instance store with a valid instance handle is instantiated, you will notice [System.Activities.DurableInstancing].[ExtendLock] being called periodically. This stored procedure is responsible for renewing the handle. If for some reason it fails to be called within the specified HostLockRenewalPeriod, the above mentioned exception will be thrown when attempting to persist a workflow. A typical reason for this would be a temporarily inaccessible database due to maintenance or networking problems. It's not something that happens often, but it's bound to happen if you have a long-living instance store, e.g. in a constantly running workflow host, such as a Windows service.
Fortunately it's not all that difficult to fix the problem once you know the cause of it. Before using the instance store you should always check if the handle is still valid, and renew it if it's not:
if (!instanceHandle.IsValid)
{
    instanceHandle = instanceStore.CreateInstanceHandle();
    var view = instanceStore.Execute(instanceHandle,
        new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(10));
    instanceStore.DefaultInstanceOwner = view.InstanceOwner;
}
It's definitely less invasive than the restart of the host suggested by the error message.
You have to be sure about the expiration of the owner user. Here is how I usually handle this issue:
public SqlWorkflowInstanceStore SetupSqlpersistenceStore()
{
    SqlWorkflowInstanceStore sqlWFInstanceStore = new SqlWorkflowInstanceStore(
        ConfigurationManager.ConnectionStrings["DB_WWFConnectionString"].ConnectionString);
    sqlWFInstanceStore.InstanceCompletionAction = InstanceCompletionAction.DeleteAll;
    InstanceHandle handle = sqlWFInstanceStore.CreateInstanceHandle();
    InstanceView view = sqlWFInstanceStore.Execute(handle,
        new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(30));
    handle.Free();
    sqlWFInstanceStore.DefaultInstanceOwner = view.InstanceOwner;
    return sqlWFInstanceStore;
}
And here is how you can use this method:
wfApp.InstanceStore = SetupSqlpersistenceStore();
I hope this helps.
[I am new to ADO.NET and the Entity Framework, so forgive me if this question seems odd.]
In my WPF application a user can switch between different databases at run time. When they do this, I want to run a quick check that the database is still available. What I have easily available is the ObjectContext. The test I am performing gets the count of records in a very small table; if it returns a result it passes, and if it throws an exception it fails. I don't like this test, but it seemed the easiest to do with the ObjectContext.
I have tried setting the connection timeout in the connection string and on the ObjectContext, and neither seems to change anything in the first scenario, while the second one is already fast, so it isn't noticeable whether it changes anything.
Scenario One
If the connection was down before first access, it takes about 30 seconds before I get the exception that the underlying provider failed.
Scenario Two
If the database was up when I started the application and I had accessed it, and then the connection drops during use, the test is quick and returns almost instantly.
I want the first scenario described to be as quick as the second one.
Please let me know how best to resolve this, and if there is a better way to quickly test connectivity to a DB, please advise.
There really is no easy or quick way to resolve this. The ConnectionTimeout value is ignored by the Entity Framework. The solution I used is a method that checks whether a context is valid: pass in the location you wish to validate, then get the count from a known, very small table. If this throws an exception the context is not valid; otherwise it is. Here is some sample code showing this:
public bool IsContextValid(SomeDbLocation location)
{
    bool isValid = false;
    try
    {
        // GetContext and SomeSmallTable stand in for your own context
        // factory and a known small table in the target database.
        context = GetContext(location);
        context.SomeSmallTable.Count();
        isValid = true;
    }
    catch
    {
        isValid = false;
    }
    return isValid;
}
You may also need to use context.Database.Connection.Open().
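If the slow failure in scenario one is the main pain point, one option is to probe the server yourself with a short timeout before touching the context. This is only a sketch: it assumes SQL Server with ADO.NET's SqlConnection, and the three-second timeout is an arbitrary choice.
using System.Data.SqlClient;

public static bool CanConnect(string connectionString)
{
    // Override the timeout so an unreachable server fails fast
    // instead of waiting out the default (15 seconds or more).
    var builder = new SqlConnectionStringBuilder(connectionString)
    {
        ConnectTimeout = 3 // seconds; tune to taste
    };

    try
    {
        using (var connection = new SqlConnection(builder.ConnectionString))
        {
            connection.Open(); // throws quickly when the server is down
            return true;
        }
    }
    catch (SqlException)
    {
        return false;
    }
}
Call this with the same connection string the ObjectContext uses; only if it returns true do you go on to create the context and run the count test.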