We've run out of options. We've tried clearing the cache (many times already) and manually terminating workflows that are in progress, have errored, or failed to start. We've also tried restoring our site collection, refreshing jobs through stsadm, and running execadmsvcjobs. We found this hotfix - https://www.microsoft.com/en-us/download/details.aspx?id=21066 - but we weren't able to install it on Windows Server 2003 R2. We also added the heap settings.
After doing all of that, these are the log entries that keep recurring:
XX heaps created, above warning threshold of 32. Check for excessive SPWeb or SPSite usage.
The previous instance of the timer job 'Config Refresh', id '{08A3BB10-8FA2-478C-9CCB-F8415F6D7485}' for service '{298FBFD4-A717-466C-A270-0AF3B6CC2D6C}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.
The previous instance of the timer job 'Immediate Alerts', id '{BA39B9DF-AB1C-466F-9FCD-4E21AE5806DC}' for service '{EC394CD5-4AFA-4DB9-80CC-1F6792251B30}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.
The previous instance of the timer job 'Workflow', id '{A16329F3-D83D-4E6F-8A05-4A33876AB5F4}' for service '{EC394CD5-4AFA-4DB9-80CC-1F6792251B30}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.
Potentially excessive number of SPRequest objects (9) currently unreleased on thread 6. Ensure that this object or its parent (such as an SPWeb or SPSite) is being properly disposed. This object is holding on to a separate native heap. Allocation Id for this object: {6E418B9A-AF0C-4FA1-9871-1028CE638F6F} Stack trace of current allocation: at Microsoft.SharePoint.SPRequestManager.Add(SPRequest request, Boolean shareable) at Microsoft.SharePoint.SPGlobal.CreateSPRequestAndSetIdentity(Boolean bNotGlobalAdminCode, String strUrl, Boolean bNotAddToContext, Byte[] UserToken, String userName, Boolean bIgnoreTokenTimeout, Boolean bAsAnonymous) at Microsoft.SharePoint.SPSite.GetSPRequest() at Microsoft.SharePoint.SPSite.get_Request() at Microsoft.SharePoint.SPSite.OpenWeb(Guid gWebId, Int32 mondoHint) at Microsoft.SharePoint.SPSite.OpenWeb(Guid gWebId) at Microsoft.SharePoint.Workflow.SPWinOeWorkflow.get_InitiatorWeb() at Microsoft.SharePoint.Workflow.SPWinOEWSSService.GetWebForListItemService() at Microsoft.SharePoint.Workflow.SPWinOEWSSService.UpdateListItem(Guid id, Guid listId, Int32 itemId, Hashtable itemProperties) at Microsoft.SharePoint.WorkflowActions.ActivityHelper.DoCorrectUpdateMethod(WorkflowContext theContext, Int32 item, Guid listId, Hashtable properties, IListItemService hostInterface) at Microsoft.SharePoint.WorkflowActions.UpdateItemActivity.Execute(ActivityExecutionContext provider) at System.Workflow.ComponentModel.ActivityExecutor`1.Execute(T activity, ActivityExecutionContext executionContext) at System.Workflow.ComponentModel.ActivityExecutor`1.Execute(Activity activity, ActivityExecutionContext executionContext) at System.Workflow.ComponentModel.ActivityExecutorOperation.Run(IWorkflowCoreRuntime workflowCoreRuntime) at System.Workflow.Runtime.Scheduler.Run() at System.Workflow.Runtime.WorkflowExecutor.RunScheduler() at System.Workflow.Runtime.WorkflowExecutor.RunSome(Object ignored) at System.Workflow.Runtime.Hosting.DefaultWorkflowSchedulerService.WorkItem.Invoke(WorkflowSchedulerService service) at System.Workflow.Runtime.Hosting.DefaultWorkflowSchedulerService.QueueWorkerProcess(Object state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading._ThreadPoolWaitCallback.PerformWaitCallbackInternal(_ThreadPoolWaitCallback tpWaitCallBack) at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback(Object state)
ERROR: request not found in the TrackedRequests. We might be creating and closing webs on different threads. ThreadId = 9, Free call stack = at Microsoft.SharePoint.SPRequestManager.Release(SPRequest request) at Microsoft.SharePoint.SPSite.Close() at Microsoft.SharePoint.SPSite.Dispose() at Microsoft.SharePoint.Workflow.SPWorkflowAutostartEventReceiver.AutoStartWorkflow(SPItemEventProperties properties, Boolean bCreate, Boolean bChange, AssocType atyp) at Microsoft.SharePoint.Workflow.SPWorkflowAutostartEventReceiver.AutoStartWorkflow(SPItemEventProperties properties, Boolean bCreate, Boolean bChange) at Microsoft.SharePoint.Workflow.SPWorkflowAutostartEventReceiver.ItemUpdated(SPItemEventProperties properties) at Microsoft.SharePoint.SPEventManager.RunItemEventReceiver(SPItemEventReceiver receiver, SPItemEventProperties properties, SPEventContext context, String receiverData) at Microsoft.SharePoint.SPEventManager.RunItemEventReceiverHelper(Object receiver, Object properties, SPEventContext context, String receiverData) at Microsoft.SharePoint.SPEventManager.<>c__DisplayClass8`1.b__0() at Microsoft.SharePoint.SPSecurity.CodeToRunElevatedWrapper(Object state) at Microsoft.SharePoint.SPSecurity.RunAsUser(SPUserToken userToken, Boolean bResetContext, WaitCallback code, Object param) at Microsoft.SharePoint.SPSecurity.RunAsUser(SPUserToken userToken, CodeToRunElevated code) at Microsoft.SharePoint.SPEventManager.InvokeEventReceivers[ReceiverType](SPUserToken userToken, RunEventReceiver runEventReceiver, Object receivers, Object properties, Boolean checkCancel) at Microsoft.SharePoint.SPEventManager.InvokeEventReceivers[ReceiverType](Byte[] userTokenBytes, RunEventReceiver runEventReceiver, Object receivers, Object properties, Boolean checkCancel) at Microsoft.SharePoint.SPEventManager.HandleEventCallback[ReceiverType,PropertiesType](Object callbackData) at Microsoft.SharePoint.Utilities.SPThreadPool.WaitCallbackWrapper(Object state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading._ThreadPoolWaitCallback.PerformWaitCallbackInternal(_ThreadPoolWaitCallback tpWaitCallBack) at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback(Object state) , Allocation call stack (if present) null
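For context, the "being properly disposed" part of that warning refers to the standard SPSite/SPWeb disposal pattern. A minimal sketch (with a hypothetical URL, shown here only to illustrate what the log is asking for, not code from our farm):

using System;
using Microsoft.SharePoint;

public static class DisposalExample
{
    public static void TouchWeb()
    {
        // Hypothetical site collection URL, for illustration only.
        using (SPSite site = new SPSite("http://server/sites/example"))
        using (SPWeb web = site.OpenWeb())
        {
            // Work with lists, items, workflows, etc.
            Console.WriteLine(web.Title);
        }
        // Both objects, and the native SPRequest heap they hold, are released here.
    }
}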
Now we're going to try to remove all workflows except for one to narrow down our troubleshooting.
Please help guys. Thank you!
I have a Transformer with a state store that uses punctuate to operate on said state store.
After a few iterations of punctuate, the operation may have finished, so I'd like to cancel the punctuate -- but only for the Task that has actually finished the operation on the partition's respective state store. The punctuate operations for the Tasks that are not done yet should keep running. For that purpose, my transformer keeps a reference to the Cancellable returned by schedule().
As far as I can tell, every Task always gets its own isolated Transformer instance and every Task gets its own isolated scheduled punctuate() within that instance (?)
However, since this is effectively state, but not inside a state store, I'm not sure how safe it is. For instance, are there scenarios in which one transformer instance might be shared across tasks (meaning absolutely no state may be kept outside of StateStores)?
public class CoolTransformer implements Transformer {
    private KeyValueStore stateStore;
    private Cancellable taskPunctuate; // <----- Will this lead to conflicts between tasks?

    public void init(ProcessorContext context) {
        this.stateStore = context.getStateStore(...);
        this.taskPunctuate = context.schedule(Duration.ofMillis(...), PunctuationType.WALL_CLOCK_TIME, this::scheduledOperation);
    }

    private void scheduledOperation(long timestamp) {
        stateStore.get(...);
        // do stuff...

        if (done) { // 'done' stands for whatever completion condition applies
            this.taskPunctuate.cancel(); // <----- Will this lead to conflicts between tasks?
        }
    }

    public KeyValue transform(Object key, Object value) {
        // do stuff
        stateStore.put(key, value);
        return KeyValue.pair(key, value);
    }

    public void close() {
        taskPunctuate.cancel();
    }
}
You might be able to look into TransformerSupplier, specifically TransformerSupplier#get(): it should return a new Transformer each time it is called, so that the instances are kept independent. Transformers also should not share objects, so be careful of this with your Cancellable taskPunctuate. If either of these rules is violated, you should see errors like org.apache.kafka.streams.errors.StreamsException: Current node is unknown, ConcurrentModificationException or InstanceAlreadyExistsException.
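As a minimal sketch (assuming the CoolTransformer class above), returning a brand-new instance from TransformerSupplier#get() guarantees every task gets its own Transformer, and therefore its own taskPunctuate:

import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.kstream.TransformerSupplier;

public class CoolTransformerSupplier implements TransformerSupplier {
    @Override
    public Transformer get() {
        // Always construct a fresh Transformer here; never hand out a cached
        // instance, or tasks would share stateStore and taskPunctuate.
        return new CoolTransformer();
    }
}

You then pass the supplier (not a Transformer instance) to stream.transform(...), and Kafka Streams calls get() once per task.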
When I attempt to deploy DACPACs via SqlPackage.exe, I encounter the error below:
An unexpected failure occurred: Object reference not set to an instance of an object..
Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.Data.Tools.Schema.Sql.SchemaModel.ReverseEngineerPopulators.Sql90TableBaseColumnPopulator`1.InsertElementIntoParent(SqlColumn element, TElement parent, ReverseEngineerOption option)
at Microsoft.Data.Tools.Schema.Sql.SchemaModel.ReverseEngineerPopulators.ChildModelElementPopulator`2.CreateChildElement(TParent parent, EventArgs e, ReverseEngineerOption option)
at Microsoft.Data.Tools.Schema.Sql.SchemaModel.ReverseEngineerPopulators.ChildModelElementPopulator`2.PopulateAllChildrenFromCache(IDictionary`2 cache, SqlReverseEngineerRequest request, OrdinalSqlDataReader reader, ReverseEngineerOption option)
at Microsoft.Data.Tools.Schema.Sql.SchemaModel.ReverseEngineerPopulators.TopLevelElementPopulator`1.Populate(SqlReverseEngineerRequest request, OrdinalSqlDataReader reader, ReverseEngineerOption option)
at Microsoft.Data.Tools.Schema.Sql.SchemaModel.SqlReverseEngineerImpl.ExecutePopulators(ReliableSqlConnection conn, IList`1 populators, Int32 totalPopulatorsCount, Int32 startIndex, Boolean progressAlreadyUpdated, ReverseEngineerOption option, SqlReverseEngineerRequest request)
at Microsoft.Data.Tools.Schema.Sql.SchemaModel.SqlReverseEngineerImpl.ExecutePopulatorsInPass(SqlReverseEngineerConnectionContext context, ReverseEngineerOption option, SqlReverseEngineerRequest request, Int32 totalCount, Tuple`2[] populatorsArray)
at Microsoft.Data.Tools.Schema.Sql.SchemaModel.SqlReverseEngineerImpl.PopulateBatch(SqlReverseEngineerConnectionContext context, SqlSchemaModel model, ReverseEngineerOption option, ErrorManager errorManager, SqlReverseEngineerRequest request, SqlImportScope importScope)
at Microsoft.Data.Tools.Schema.Sql.SchemaModel.SqlReverseEngineer.PopulateAll(SqlReverseEngineerConnectionContext context, ReverseEngineerOption option, ErrorManager errorManager, Boolean filterManagementScopedElements, SqlImportScope importScope, Boolean optimizeForQuery, ModelStorageType modelType)
at Microsoft.Data.Tools.Schema.Sql.Deployment.SqlDeploymentEndpointServer.ImportDatabase(SqlReverseEngineerConstructor constructor, DeploymentEngineContext context, ErrorManager errorManager)
at Microsoft.Data.Tools.Schema.Sql.Deployment.SqlDeploymentEndpointServer.OnLoad(ErrorManager errors, DeploymentEngineContext context)
at Microsoft.Data.Tools.Schema.Sql.Deployment.SqlDeployment.PrepareModels()
at Microsoft.Data.Tools.Schema.Sql.Deployment.SqlDeployment.InitializePlanGeneratator()
at Microsoft.SqlServer.Dac.DacServices.<>c__DisplayClass21.<CreateDeploymentArtifactGenerationOperation>b__1f(Object operation, CancellationToken token)
at Microsoft.SqlServer.Dac.Operation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.ReportMessageOperation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.OperationExtension.Execute(IOperation operation, DacLoggingContext loggingContext, CancellationToken cancellationToken)
at Microsoft.SqlServer.Dac.DacServices.GenerateDeployScript(DacPackage package, String targetDatabaseName, DacDeployOptions options, Nullable`1 cancellationToken)
at Microsoft.Data.Tools.Schema.CommandLineTool.DacServiceUtil.<>c__DisplayClasse.<DoDeployAction>b__4(DacServices service)
at Microsoft.Data.Tools.Schema.CommandLineTool.DacServiceUtil.ExecuteDeployOperation(String connectionString, String filePath, MessageWrapper messageWrapper, Boolean sourceIsPackage, Boolean targetIsPackage, Func`1 generateScriptFromPackage, Func`2 generateScriptFromDatabase)
at Microsoft.Data.Tools.Schema.CommandLineTool.DacServiceUtil.DoDeployAction(DeployArguments parsedArgs, Action`1 writeError, Action`2 writeMessage, Action`1 writeWarning)
at Microsoft.Data.Tools.Schema.CommandLineTool.Program.DoDeployActions(CommandLineArguments parsedArgs)
at Microsoft.Data.Tools.Schema.CommandLineTool.Program.Run(String[] args)
at Microsoft.Data.Tools.Schema.CommandLineTool.Program.Main(String[] args)
Below is the command I run:
SET vardeploy2=/Action:Script
set varBlockOnDriftParameter=/p:BlockWhenDriftDetected=False
"SSDTBinaries\SqlPackage.exe" %vardeploy2% %varBlockOnDriftParameter% /SourceFile:"dacpacs\DBName.dacpac" /Profile:"Profiles\%1.DBName.Publish.xml" >> Log.txt 2>>&1
I deploy to a SQL Server 2008 R2 instance. The SqlPackage.exe version is 11.0.2820.0.
The issue is intermittent, which suggests it's not related to the DACPAC being deployed or the destination database's schema. My best guess is that something about the state of the database is causing the problem.
Still, I haven't been able to identify anything unusual at the time of the failures.
When I recreate the issue locally using schema locks, I get a different error message.
Does anyone know of a solution?
Upgrade to a more recent version of SQL Server Data Tools.
As soon as the database becomes slow for some reason (long-running query, backup, performance analyzer), my web application eventually starts getting the following errors:
System.InvalidOperationException: There is already an open DataReader associated with this Command which must be closed first.
at System.Data.SqlClient.SqlInternalConnectionTds.ValidateConnectionForExecute(SqlCommand command)
at System.Data.SqlClient.SqlCommand.ValidateCommand(String method, Boolean async)
at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite)
at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
at CodeFluent.Runtime.CodeFluentPersistence.InternalExecuteNonQuery(Boolean firstTry)
at CodeFluent.Runtime.CodeFluentPersistence.InternalExecuteNonQuery(Boolean firstTry)
.... -> stack trace continues to my code
CodeFluent.Runtime.CodeFluentRuntimeException: CF1044: Nested persistence command is not supported. Current command: '[Contract_Search]', nested command: '[zref_template_document_LoadBlobFile]'.
at CodeFluent.Runtime.CodeFluentPersistence.CreateStoredProcedureCommand(String schema, String package, String intraPackageName, String name)
at CodeFluent.Runtime.BinaryServices.BinaryLargeObject.GetInputStream(CodeFluentContext context, Int64 rangeStart, Int64 rangeEnd)
.... -> stack trace continues to my code
The second error, CF1044, happens when I open two browser windows and do different actions: search in one, generate a document in the other.
It's difficult to reproduce. It never happens the same way.
There is a race condition somewhere I can't figure out.
Here is what actually worked for me:
public byte[] GetRtfDocumentStreamBuffer(TemplateDocumentType templateType, int culture)
{
    var template = TemplateDocument.LoadActiveByDocumentTypeAndLcid(DateTime.Today, templateType.Id, culture);
    var resultStream = new MemoryStream();

    using (var cf = CodeFluentContext.GetNew(Constants.MyApplicationStoreName))
    using (var templateStream = template.File.GetInputStream(cf, 0, 0))
    using (var resultWriter = new StreamWriter(resultStream, Encoding.GetEncoding("windows-1252")))
    {
        GenerateRtfDocument(....);
        resultWriter.Flush();
    }

    return resultStream.GetBuffer();
}
What I saw in the decompiled CodeFluent runtime is that CodeFluentContext.Dispose() calls CodeFluentPersistence.Dispose(), which closes the reader and disposes the connection.
I have a WF (4.5) workflow activity that creates a child workflow (evaluating a VisualBasicValue expression). I need the result before I complete the parent workflow.
I add the expression to the metadata like this:
private VisualBasicValue<string> _expression;

protected override void CacheMetadata(NativeActivityMetadata metadata)
{
    base.CacheMetadata(metadata);

    var visualBasicValue = (VisualBasicValue<string>)(_childActivity.Text.Expression);
    var expressionText = visualBasicValue.ExpressionText;
    _expression = new VisualBasicValue<string>(expressionText);
    metadata.AddChild(_expression);
}
I tried scheduling the activity in the Execute method like this:
protected override void Execute(NativeActivityContext context)
{
    context.ScheduleActivity(_expression, OnCompleted);
    Result.Set(context, _value);
}
With a callback of:
private void OnCompleted(NativeActivityContext context, ActivityInstance completedInstance, string result)
{
    _value = result;
}
Unfortunately, the _expression activity is only executed after the parent's execution method returns. Adding it as an implementation child doesn't work (it cannot work as an implementation child, as it is supposed to evaluate an expression that contains variables external to the parent).
Any ideas how to overcome this and execute within the execution context?
In code, as in real life, you can't schedule something in the past (yet :).
ScheduleActivity() places the activity in an execution queue and executes it as soon as it can. Because the parent activity is still running, _expression will only execute after it. Bottom line: it's an asynchronous call.
If you want to control when _expression is called, just use WorkflowInvoker to execute it, synchronously, whenever you want.
public class MyNativeActivity : NativeActivity
{
    private readonly VisualBasicValue<string> _expression;

    public MyNativeActivity()
    {
        // 'expression' construction logic goes here
        _expression = new VisualBasicValue<string>("\"Hi!\"");
    }

    protected override void Execute(NativeActivityContext context)
    {
        var _value = WorkflowInvoker.Invoke(_expression);
        Console.WriteLine("Value returned by '_expression': " + _value);

        // use '_value' for something else...
    }
}
It took me a few days, but I managed to resolve my own issue (without breaking the normal way WF works).
What I ended up doing is this: in the CacheMetadata method, using reflection, I iterate over the child's properties and create a LinkedList of evaluation expressions (using VisualBasicValue) for each of its arguments. Then, in the execution phase, I schedule the execution of the first evaluation. In its callback I work through the remaining evaluations, scheduling the next one and adding each result to a dictionary, until they are all done.
Finally, when there are no more evaluations to schedule, I schedule a final activity that takes the dictionary as its argument and can do whatever it wants with it. On its own, it optionally returns the final result to the container's OutArgument.
What I previously failed to understand is that even though a scheduled activity only runs after the scheduling activity's Execute method returns, its completion callback still runs before control is returned to the host workflow application, and in that window I could do my work.
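For anyone who wants to see the shape of it, here is a rough, simplified sketch of the approach. Names like ChildArgumentEvaluator, Child and OnEvaluationCompleted are made up for illustration, and I use a List plus an index variable instead of my LinkedList so the current position can live in a workflow variable rather than a field:

using System.Activities;
using System.Collections.Generic;
using Microsoft.VisualBasic.Activities;

public class ChildArgumentEvaluator : NativeActivity<string>
{
    // Built once per activity definition in CacheMetadata; read-only at runtime.
    private readonly List<VisualBasicValue<string>> _evaluations = new List<VisualBasicValue<string>>();
    private readonly List<string> _argumentNames = new List<string>();

    // Per-instance state must live in variables, not fields.
    private readonly Variable<Dictionary<string, string>> _results = new Variable<Dictionary<string, string>>();
    private readonly Variable<int> _currentIndex = new Variable<int>();

    // Hypothetical child whose string arguments we want to evaluate.
    public Activity Child { get; set; }

    protected override void CacheMetadata(NativeActivityMetadata metadata)
    {
        base.CacheMetadata(metadata);
        metadata.AddImplementationVariable(_results);
        metadata.AddImplementationVariable(_currentIndex);

        _evaluations.Clear();
        _argumentNames.Clear();

        // Reflect over the child's properties and re-wrap every VB expression
        // in a fresh VisualBasicValue<string> that this activity owns as a child.
        foreach (var property in Child.GetType().GetProperties())
        {
            var argument = property.GetValue(Child) as InArgument<string>;
            if (argument == null) continue;

            var vb = argument.Expression as VisualBasicValue<string>;
            if (vb == null) continue;

            var evaluation = new VisualBasicValue<string>(vb.ExpressionText);
            _argumentNames.Add(property.Name);
            _evaluations.Add(evaluation);
            metadata.AddChild(evaluation);
        }
    }

    protected override void Execute(NativeActivityContext context)
    {
        _results.Set(context, new Dictionary<string, string>());
        _currentIndex.Set(context, 0);

        if (_evaluations.Count > 0)
        {
            // Schedule the first evaluation; the rest are chained in the callback.
            context.ScheduleActivity(_evaluations[0], OnEvaluationCompleted);
        }
    }

    private void OnEvaluationCompleted(NativeActivityContext context, ActivityInstance completedInstance, string value)
    {
        var results = _results.Get(context);
        var index = _currentIndex.Get(context);
        results[_argumentNames[index]] = value;

        index++;
        _currentIndex.Set(context, index);

        if (index < _evaluations.Count)
        {
            // Chain the next evaluation; this callback runs before control
            // returns to the host, which is what makes the approach work.
            context.ScheduleActivity(_evaluations[index], OnEvaluationCompleted);
        }
        else
        {
            // In the real implementation a final activity receives the dictionary here;
            // for the sketch we simply surface the collected values as the result.
            Result.Set(context, string.Join("; ", results.Values));
        }
    }
}

The key point is that the per-instance state (the results dictionary and the index) lives in implementation variables, while the evaluation activities themselves are part of the activity definition built in CacheMetadata.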
Quote from C# language specification 3.9:
"2. If the object, or any part of it, cannot be accessed by any possible continuation of execution, other than the running of destructors, the object is considered no longer in use, and it becomes eligible for destruction..."
For instance, would the DispatcherTimer be eligible for garbage collection before the Tick event fires?
public void DispatchCallbackAfter(Action callback, TimeSpan period)
{
    DispatcherTimer timer = new DispatcherTimer(DispatcherPriority.Normal, AppSettings.MainWindow.Dispatcher);
    timer.Tick += new EventHandler(DispatchCallback);
    timer.Interval = period;
    timer.Tag = new object[] { timer, callback };
    timer.Start();
}

private void DispatchCallback(object sender, EventArgs args)
{
    DispatcherTimer t = (DispatcherTimer)sender;
    t.Stop();
    ((Action)((object[])t.Tag)[1])();
}
NOTE: There is a self-reference to the timer in timer.Tag, but I imagine that would not make any difference?
While the DispatcherTimer is running, the Dispatcher has a reference to it, and it will not get GCed. Once the timer stops and there is no external reference to it, it can be collected. That is, if your only references to the timer and the callback are within the timer and the callback, and the timer is stopped, the timer can be collected.
You can tell that the dispatcher takes a reference to a running timer by looking in Reflector (or your favorite decompiler) and seeing that the timer calls _dispatcher.AddTimer(this); in its start function and _dispatcher.RemoveTimer(this); in its stop function.
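As an aside, nothing needs to be stashed in Tag to keep things alive: while the timer is running, the Dispatcher references it, and the Tick handler references the callback. A minimal sketch of the same helper using a closure instead (assuming a WPF application, so Application.Current.Dispatcher is available):

using System;
using System.Windows;
using System.Windows.Threading;

public static class DispatcherTimerHelper
{
    public static void DispatchCallbackAfter(Action callback, TimeSpan period)
    {
        var timer = new DispatcherTimer(DispatcherPriority.Normal, Application.Current.Dispatcher)
        {
            Interval = period
        };

        // The Dispatcher keeps the running timer reachable, and the Tick handler
        // keeps the callback reachable, so neither is collected early.
        timer.Tick += (sender, args) =>
        {
            timer.Stop(); // once stopped and unreferenced, the timer becomes collectible
            callback();
        };

        timer.Start();
    }
}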