I am calling document.Reload() in an event handler in a VSTO add-in for Word.
Microsoft's documentation says:
This method reloads the document asynchronously.
However, the method isn't declared async.
So how can I await the completion of this method? Otherwise my code proceeds with further execution and an exception is thrown, because that particular document is unloaded at that moment.
Here is the snippet from my code; I have handled the DocumentAfterSave event:
public async void Application_DocumentAfterSave(Word.Document Doc, XMLValue newXMLValue = null) {
    // Some code
    Doc.Reload();
    // Some more code
    var fullNameOfDoc = Doc.FullName;
    // Here I am getting an exception:
    // System.Runtime.InteropServices.COMException: 'Object has been deleted.'
}
I'm using Spring Batch to write a batch process and I'm having issues handling the exceptions.
I have a reader that fetches items with a specific state from a database. The reader passes each item to the processor step, which can throw MyException. When this exception is thrown I want to skip the item that caused it and continue with the next one.
The issue is that I also need to change the state of that item in the database so the reader doesn't fetch it again.
This is what I tried:
return this.stepBuilderFactory.get("name")
.<Input, Output>chunk(1)
.reader(reader())
.processor(processor())
.faultTolerant()
.skipPolicy(skipPolicy())
.writer(writer())
.build();
In my SkipPolicy class I have the following code:
public boolean shouldSkip(Throwable throwable, int skipCount) throws SkipLimitExceededException {
    if (throwable instanceof MyException) {
        // log the issue
        // update the item that caused the exception in the database so the reader doesn't return it again
        return true;
    }
    return false;
}
With this code the exception is skipped and my reader is called again; however, the update made in the SkipPolicy was not committed (or was rolled back), so the reader fetches the item and tries to process it again.
I also tried with an ExceptionHandler:
return this.stepBuilderFactory.get("name")
.<Input, Output>chunk(1)
.reader(reader())
.processor(processor())
.faultTolerant()
.skip(MyException.class)
.exceptionHandler(myExceptionHandler())
.writer(writer())
.build();
In my ExceptionHandler class I have the following code:
public void handleException(RepeatContext context, Throwable throwable) throws Throwable {
    if (throwable.getCause() instanceof MyException) {
        // log the issue
        // update the item that caused the exception in the database so the reader doesn't return it again
    } else {
        throw throwable;
    }
}
With this solution the state is changed in the database; however, the reader is not called again. Instead, the processor's process method is invoked again for the same item, which results in an infinite loop.
I imagine I could use a listener in my step to handle the exceptions, but I don't like that solution because I would have to duplicate a lot of code, assuming this exception could be thrown from different steps/processors in my code.
What am I doing wrong?
EDIT: After a lot of tests and trying different listeners such as SkipListener, I still couldn't achieve what I wanted: Spring Batch always rolls back my UPDATE.
Debugging, this is what I found:
Once my listener is invoked and I update my item, the program enters the write method of the class FaultTolerantChunkProcessor (line #327).
This method runs the following code (copied from GitHub):
try {
doWrite(outputs.getItems());
} catch (Exception e) {
status = BatchMetrics.STATUS_FAILURE;
if (rollbackClassifier.classify(e)) {
throw e;
}
/*
* If the exception is marked as no-rollback, we need to
* override that, otherwise there's no way to write the
* rest of the chunk or to honour the skip listener
* contract.
*/
throw new ForceRollbackForWriteSkipException(
"Force rollback on skippable exception so that skipped item can be located.", e);
}
The doWrite method (line #151) inside the class SimpleChunkProcessor tries to write the list of output items; however, in my case the list is empty, so line #159 (the writeItems method) throws an IndexOutOfBoundsException, which triggers the ForceRollbackForWriteSkipException and the rollback I'm suffering from.
If I override the class FaultTolerantChunkProcessor and avoid writing the items when the list is empty, then everything works as intended: the update is committed, and the program skips the error and calls the reader again.
I don't know if this is actually a bug or whether it's caused by something I'm doing wrong in my code.
A SkipListener is better suited to your use case than an ExceptionHandler, in my opinion, as it gives you access to the item that caused the exception. With the exception handler, you would need to carry the item inside the exception or the repeat context.
Moreover, the skip listener lets you know in which phase the exception happened (i.e. read, process, or write), while with the exception handler you need to find a way to detect that yourself. If the skipping code is the same for all phases, you can call the same method that updates the item's status from all of the listener's methods, as in the sketch below.
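Here is a minimal sketch of that idea, reusing the Input/Output types and MyException from your question; MyItemRepository, its updateStatus method and getId are hypothetical stand-ins for whatever you use to update the item's state:
import org.springframework.batch.core.SkipListener;

public class MyItemSkipListener implements SkipListener<Input, Output> {

    private final MyItemRepository repository; // hypothetical DAO used to flag the item

    public MyItemSkipListener(MyItemRepository repository) {
        this.repository = repository;
    }

    @Override
    public void onSkipInRead(Throwable t) {
        // nothing to flag here: the item could not even be read
    }

    @Override
    public void onSkipInProcess(Input item, Throwable t) {
        markAsFailed(item.getId(), t);
    }

    @Override
    public void onSkipInWrite(Output item, Throwable t) {
        markAsFailed(item.getId(), t);
    }

    private void markAsFailed(Long id, Throwable t) {
        // log the issue and update the item so the reader no longer returns it
        repository.updateStatus(id, "FAILED");
    }
}
The listener would then be registered on the fault-tolerant step, for example:
return this.stepBuilderFactory.get("name")
        .<Input, Output>chunk(1)
        .reader(reader())
        .processor(processor())
        .writer(writer())
        .faultTolerant()
        .skip(MyException.class)
        .listener(new MyItemSkipListener(repository))
        .build();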
In my code, I am making an async call to do some validation. Depending on the return value of the validation, I need to execute some lines.
But I am not able to put those lines in the callback method of the async call (public void success(Boolean valid)),
since one of those lines is super.onDrop(context), a method of the enclosing class's superclass that can't be called with a plain super from inside the callback.
Please see the lines below. I need super.onDrop(context) to be executed after the async call has completed.
stepTypeFactory.onDropValidation(stepTypeFactory, new AsyncCallbackModal(null) {
    public void success(Boolean valid) {
        if (valid == Boolean.TRUE) {
            //super.onDrop(context);
        }
    }
});
//condition is here
super.onDrop(context);
Is there any way to tell GWT to wait one or two seconds before super.onDrop(context) is executed? Right now,
super.onDrop(context) is executed before the callback method has completed.
You can do:
stepTypeFactory.onDropValidation(stepTypeFactory, new AsyncCallbackModal(null) {
    public void success(Boolean valid) {
        if (valid == Boolean.TRUE) {
            drop();
        }
    }
});

private void drop() {
    super.onDrop(context);
}
An alternative solution, as mentioned by Thomas Broyer in the comments, would be:
stepTypeFactory.onDropValidation(stepTypeFactory, new AsyncCallbackModal(null) {
    public void success(Boolean valid) {
        if (valid == Boolean.TRUE) {
            ContainingClass.super.onDrop(context);
        }
    }
});
Eclipse does not suggest this solution when using code completion, but it works.
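To illustrate why a plain super call does not work inside the anonymous callback, here is a minimal, self-contained sketch of the qualified-super syntax (all class and method names here are made up, not taken from your project):
// Base plays the role of the superclass whose onDrop you want to reach.
class Base {
    void onDrop(String context) {
        System.out.println("Base.onDrop(" + context + ")");
    }
}

class Containing extends Base {

    interface Callback {
        void success(Boolean valid);
    }

    void startValidation(final String context) {
        Callback callback = new Callback() {
            @Override
            public void success(Boolean valid) {
                if (Boolean.TRUE.equals(valid)) {
                    // A plain super.onDrop(context) here would not resolve to
                    // Containing's superclass; the qualified form does.
                    Containing.super.onDrop(context);
                }
            }
        };
        callback.success(true); // in real code this would be invoked asynchronously
    }

    public static void main(String[] args) {
        new Containing().startValidation("drop-context");
    }
}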
Also, I would reconsider your design, because in my experience it can get very tricky when you have many callbacks connecting/coupling classes. But this is just a quick thought; I know neither the size of your project nor its design.
I have a WF (4.5) workflow activity that creates a child workflow (evaluating a VisualBasicValue expression). I need the result before I complete the parent workflow.
I add the expression to the metadata like this:
private VisualBasicValue<string> _expression;
protected override void CacheMetadata(NativeActivityMetadata metadata)
{
base.CacheMetadata(metadata);
var visualBasicValue = (VisualBasicValue<string>)(_childActivity.Text.Expression);
var expressionText = visualBasicValue.ExpressionText;
_expression = new VisualBasicValue<string>(expressionText);
metadata.AddChild(_expression);
}
I tried scheduling the activity in the Execute method like this:
protected override void Execute(NativeActivityContext context)
{
    context.ScheduleActivity(_expression, OnCompleted);
    Result.Set(context, _value);
}
With a callback of:
private void OnCompleted(NativeActivityContext context, ActivityInstance completedInstance, string result)
{
_value = result;
}
Unfortunately, the _expression activity is only executed after the parent's execution method returns. Adding it as an implementation child doesn't work (it cannot work as an implementation child, as it is supposed to evaluate an expression that contains variables external to the parent).
Any ideas how to overcome this and execute within the execution context?
In code, as in real life, you can't schedule something to run in the past (yet :).
ScheduleActivity() places the activity in an execution queue and runs it as soon as it can. As the parent activity is still running, _expression will only execute after it returns. Bottom line, it's an asynchronous call.
If you want to control when _expression is evaluated, just use WorkflowInvoker to execute it synchronously, whenever you want:
public class MyNativeActivity : NativeActivity
{
private readonly VisualBasicValue<string> _expression;
public MyNativeActivity()
{
// 'expression' construction logic goes here
_expression = new VisualBasicValue<string>("\"Hi!\"");
}
protected override void Execute(NativeActivityContext context)
{
var _value = WorkflowInvoker.Invoke(_expression);
Console.WriteLine("Value returned by '_expression': " + _value);
// use '_value' for something else...
}
}
It took me a few days, but I managed to resolve my own issue (without breaking the way WF normally works).
What I ended up doing is, using reflection, iterating over the child's properties in the CacheMetadata method and building a LinkedList of evaluation expressions (using VisualBasicValue) for each of its arguments. Then, in the execution phase, I schedule the first evaluation. In its callback I iterate over the remaining evaluations, scheduling the next one and adding each result to a dictionary, until they are all done.
Finally, when there are no more evaluations to schedule, I schedule a final activity that takes the dictionary as its argument and can do whatever it wants with it. When it completes, it optionally returns the final result to the container's OutArgument.
What I previously failed to understand is that even though the scheduling happens after the instantiating activity's Execute method returns, the callback runs before control is returned to the host workflow application, and in that window I could do my work.
The problem I am facing has been nagging me for a week now, and here it is:
I have a class AdminBlocageBackgroundProcessing.java which processes a CSV file by reading data from it, validating it, and storing it in an ArrayList:
public Object call() {
    try {
        data = ImportMetier.extractFromCSV(
                new String(fichier.getFileData(), "ISO-8859-1"),
                blocage);
    }
    catch (Exception e) {
        e.printStackTrace();
    }
    return data;
}
And I am calling it from my Action class using:
ServletContext servletContext=getServlet().getServletContext();
ExecutorService execService = (ExecutorService)servletContext.getAttribute("threadPoolAlias");
AdminBlocageBackgroundProcessing adminBlocageBackgroundProcessing= new AdminBlocageBackgroundProcessing(fichier,blocage);
if(status==0 && refreshParam.equals("eventParameter"))
{
future= execService.submit(adminBlocageBackgroundProcessing);
status=1;
autoRefControl=1;
req.setAttribute("CHARGEMENT_EN_COURS","chargement");
return mapping.findForward("self");
}
if(status==1)
{
// for checking if the submitted future task is completed or not
isFutureDone=future.isDone();
if(isFutureDone)
{
data=future.get();
status=0;
System.out.println("Process is Completed");
req.setAttribute("TRAITEMENT_TERMINE","termine");
//sessiondata.putBean(Constantes.BEAN_CONTRATCLIENT_CONTRAT_CLE_FIA, null);
//formulaire.set("refreshParam","" );
execService.shutdown();
isFutureDone=false;
}
else{
System.out.println("Les données sont encore en cours de traitement");
req.setAttribute("CHARGEMENT_EN_ENCORE","encore");
return mapping.findForward("self");
}
}
Now the problem is that the CSV has too much data, and when we click to import it, the process starts asynchronously in the background but never reaches completion, even though I use auto-refresh in the JSP to maintain the session.
How can we make sure it completes? The code works fine for small files, but for large files this functionality crumbles and cannot be monitored.
The thread pool I am using is provided by the container:
public class ThreadPoolServlet implements ServletContextListener
{
public void contextDestroyed(ServletContextEvent arg0) {
final ExecutorService execService = (ExecutorService) arg0.getServletContext().getAttribute("threadPoolAlias");
execService.shutdown();
System.out.println("ServletContextListener destroyed");
// TODO Auto-generated method stub
}
//for initializing the thread pool
public void contextInitialized(ServletContextEvent arg0) {
// TODO Auto-generated method stub
final ExecutorService execService = Executors.newFixedThreadPool(25);
final ServletContext servletContext = arg0.getServletContext();
servletContext.setAttribute("threadPoolAlias", execService);
System.out.println("ServletContextListener started");
}
}
Had a quick look. Your isFutureDone flag depends on status, which is set right after the task is submitted (which is fairly quick); status is updated only once and never again. That is fine for very short, seemingly instant tasks, but it breaks for large ones, because you only call future.get() when isFutureDone is true, and for longer tasks it will be false. So you never retrieve a result, even though your task completes in the executor. Do away with isFutureDone and read up a bit on Future.get: both versions (with and without a timeout) block, which is what you need here: to wait for the task to finish. It would also be a good idea to use a timeout in the code that calls the CSV service, so it can fail if the task takes inappropriately long.
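For example, here is a rough sketch of what the status == 1 branch could look like using the blocking, time-limited form of Future.get (the variable and attribute names are taken from your code; the 10-minute timeout is just an assumed value):
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// ...inside the action, when status == 1:
try {
    // Blocks until the background import finishes or the timeout expires,
    // instead of polling future.isDone() once and giving up.
    data = future.get(10, TimeUnit.MINUTES);
    status = 0;
    req.setAttribute("TRAITEMENT_TERMINE", "termine");
} catch (TimeoutException e) {
    // The import genuinely took too long: report it instead of refreshing forever.
    req.setAttribute("CHARGEMENT_EN_ENCORE", "encore");
    return mapping.findForward("self");
} catch (InterruptedException | ExecutionException e) {
    // The task was interrupted or threw an exception: log it and surface the failure.
    e.printStackTrace();
}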
I am executing a series of Caliburn.Micro IResults by yield returning them from an IEnumerable method called by a Caliburn.Micro action message. The first IResult calls a WCF RIA Services invoke operation. Sometimes this operation fails and throws an exception. This is handled in the IResult, where the InvokeOperation object is checked for an error; I mark the error as handled and set the IResult's error message field so I can recover it from the client.
The problem is that, for some reason, this interrupts the co-routine's execution. I can't think of any good reason why, but when I'm in debug mode VS jumps to the server code and brings up the unhandled-exception helper telling me there was an uncaught exception (duh), and the co-routine never continues executing the other members of the IEnumerable.
Here is some of the code.
Called from the Action Message:
public IEnumerable<IResult> Submit()
{
var register = new RegisterUserResult(Username, Password, Email, _userModel);
yield return register;
if (register.Success)
{
if (RegisterAsTrainer)
yield return new ApplyRoleToUserResult(Username, "Trainer", _userModel);
yield return new NavigateResult(new Uri("/MainPageViewModel", UriKind.Relative));
}
else ErrorMessage = register.ErrorMessage;
}
The code in the DomainService (which sometimes throws an exception)
[Invoke]
public void CreateUser(string username, string password, string email)
{
Membership.CreateUser(username, password, email);
}
...where Membership is the ASP.NET class, which I am using for membership management.
The IResult that calls the above service (some details elided for clarity):
public void Execute(ActionExecutionContext context)
{
ErrorMessage = null;
Success = false;
var ctx = new TrainingContext();
ctx.CreateUser(_username, _password, _email, CreateUserCallback, null);
}
private void CreateUserCallback(InvokeOperation invokeOp)
{
if (invokeOp.HasError)
invokeOp.MarkErrorAsHandled();
Completed(this, new ResultCompletionEventArgs
{
    Error = invokeOp.Error,
    WasCancelled = invokeOp.IsCanceled
});
}
The IResult's Completed event DOES fire, but the rest of the Submit method never executes. I'm literally tearing my hair out over this; please help.
Ugh, I figured this out; silly me. I was setting the IResult's Error field, thinking I'd need that information later. I didn't know that a non-null Error field causes co-routine execution to halt (I thought only the WasCancelled field would do that). I'll leave this here in case anyone else runs into this issue.