How to delay modifications to the resource tree when it is locked - Eclipse

I'm getting the exception:
org.eclipse.core.internal.resources.ResourceException: The resource tree is locked for modifications.
After some searching I found out that this happens because I am trying to add markers to a file. I'm doing this when I am notified of a file change, so when my modification code is called, the workspace is still in the middle of the notification process and does not allow modifications to the resource tree.
How can I save the markers so that I can add them to the file later, or what would be another way to delay these changes?

You can't modify the resource tree from a resource delta event handler (imagine the potential for complete chaos if you could). The most common approach that I know of is to schedule a Job and make the modifications within the run() method of the Job. This means you need to remember the modifications that you want to make so that they can be done within the Job. It also means you can't make too many assumptions about the state of the resource tree because theoretically some other Job might run before yours that makes changes to the tree.
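For example, something like the following (a minimal sketch; PendingMarker and MarkerJob are illustrative names, not Eclipse API):

```java
// Sketch: remember the marker requests during delta notification, then
// create the markers later from a WorkspaceJob, when the tree is unlocked.
import java.util.List;

import org.eclipse.core.resources.IFile;
import org.eclipse.core.resources.IMarker;
import org.eclipse.core.resources.WorkspaceJob;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;

class PendingMarker {
    final IFile file;
    final String message;
    final int line;

    PendingMarker(IFile file, String message, int line) {
        this.file = file;
        this.message = message;
        this.line = line;
    }
}

class MarkerJob extends WorkspaceJob {
    private final List<PendingMarker> pending;

    MarkerJob(List<PendingMarker> pending) {
        super("Add markers");
        this.pending = pending;
    }

    @Override
    public IStatus runInWorkspace(IProgressMonitor monitor) throws CoreException {
        for (PendingMarker p : pending) {
            // The tree may have changed since the notification, so re-check.
            if (!p.file.isAccessible()) {
                continue;
            }
            IMarker marker = p.file.createMarker(IMarker.PROBLEM);
            marker.setAttribute(IMarker.MESSAGE, p.message);
            marker.setAttribute(IMarker.LINE_NUMBER, p.line);
        }
        return Status.OK_STATUS;
    }
}
```

Your resource change listener would collect PendingMarker instances while walking the delta and then call new MarkerJob(pending).schedule();.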

Changing to IResourceChangeEvent.PRE_BUILD will solve this problem. Worked for me.
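For reference, a minimal registration sketch; during PRE_BUILD (and POST_BUILD) notifications the tree is open for modification, unlike POST_CHANGE:

```java
// Sketch: register the listener for PRE_BUILD so markers can be created
// directly inside the notification.
import org.eclipse.core.resources.IResourceChangeEvent;
import org.eclipse.core.resources.IResourceChangeListener;
import org.eclipse.core.resources.ResourcesPlugin;

public class MarkerListenerRegistration {
    public static void register() {
        IResourceChangeListener listener = event -> {
            // event.getDelta() describes the changes; because this is a
            // PRE_BUILD notification, markers can be created right here.
        };
        ResourcesPlugin.getWorkspace().addResourceChangeListener(
                listener, IResourceChangeEvent.PRE_BUILD);
    }
}
```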

Related

What are the best practices when working with data from multiple sources in Flutter/Bloc?

The Bloc manual describes the example of a simple Todos app. It works as an example, but I get stuck when trying to make it into a more realistic app. Clearly, a more realistic Todos app needs to keep working when the user temporarily loses network connection, and also needs to occasionally check the server for updates that the user might have added from another device.
So as a basic data model I have:
dataFromServer, which is refreshed every five minutes, and
localData, that describes what changes have been made locally but haven't been synchronized to the server yet.
My current idea is to have three kinds of events:
on<GetTodosFromServer>() which runs every few minutes to check the server for updates and only changes the dataFromServer,
on<TodoAdded>() (and its friends TodoDeleted, TodoChecked, and so on) which get triggered when the user changes the data, and only change the localData, and
on<SyncTodoToServer>() which runs whenever the user changes the todo list, or when network connectivity is restored, and tries to send the changes to the server, retrieves the new value from the server, and then sets the new dataFromServer and localData.
So obviously there's a lot of interaction between these three handlers. When a new todo is added after the synchronization to the server starts, but before it finishes, it needs to stay in the local changes object. When GetTodosFromServer and SyncTodoToServer both return server data, they need to determine which has the latest data and keep that. And so on.
Coming from a Redux background, I'm used to having two reducers (one for local data, one for server data) that would only respond to simple actions. E.g. an action { "type": "TodoSuccessfullySyncedToServer", uploadedData: [...], serverResponse: [...] } would be straightforward to parse for both the localData and the dataFromServer reducer. The reducer doesn't contain any of the business logic, it receives actions one by one and all you need to think about inside the reducer is the state before the action, the action itself, and the state after the action. Anything you rely on to handle the action will be in the action itself, not in the context. So different pieces of code that generate those actions can just fire these actions without thinking, knowing that the reducer will handle them correctly.
Bloc on the other hand seems to mix business logic and updating the state. API calls are made within the event handlers, which will emit a value possibly many seconds later. So every time you return from an asynchronous call in an event handler, you need to think about how the state might have changed while that call was happening and the consequences this has on what you're currently doing. Also, an object in the state can be updated by different events that need to coordinate among themselves how to avoid conflicts while doing so.
Is there a best practice on how to avoid the complexity this brings? Is it best practice to split large events into "StartSyncToServer" and "SuccessfullySyncedToServer" events, where the second behaves a lot like a Redux reducer? I don't see any of that in the examples, so is there another way this complexity is typically avoided in Bloc? Or is Bloc entirely unopinionated on these things?
I'm not looking for personal opinions here, only if there's something I missed in the Bloc manual (or other authoritative source) about how this was intended to work.

How to wait until the DAM Update Asset workflow is completed instead of Thread.sleep

In AEM CQ, I am using the AssetManager API to write content (uploaded images) into the DAM. This triggers the out-of-the-box DAM Update Asset workflow. I need to read renditions and asset properties that only become available once the workflow has completed.
My question is how to wait until the workflow is completed before reading the asset properties, instead of using Thread.sleep.
I tried a recursive function call that iterates until the asset property is present. This gave a NullPointerException. But when I put a Thread.sleep of 50 ms inside the iteration, it works for me.
Another approach I tried was to get the workflow object inside the service and read the workflow status, but I found that it takes a few milliseconds for the OOTB workflow to start after the content is written. Here, too, I had to add a Thread.sleep.
One more attempt was to use an event handler to listen for workflow events. We are able to match on the "workflow completed" event type. But how do we notify the service or JSP that the workflow is completed, so that we can read the asset properties and renditions?
It would be great if someone could share suggestions or feedback on these approaches. Thank you.
You have the wrong approach to this problem. In my eyes there are exactly two reasonable solutions:
Create a workflow process/step and extend the DAM Update Asset workflow with your custom step (a sketch follows at the end of this answer).
OR
Create a JCR observation listener and listen for Event.PROPERTY_ADDED, for example, or use the higher-level Sling APIs and create an event handler with the appropriate topic, then execute your business logic as soon as the property you are looking for is added or changed, as sketched below.
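A minimal sketch of the event-handler variant, assuming OSGi declarative services annotations; the class name is illustrative, and the path filtering stands in for your own business logic:

```java
// Sketch of option 2: a Sling/OSGi EventHandler that fires when resources
// are added or changed.
import org.apache.sling.api.SlingConstants;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventConstants;
import org.osgi.service.event.EventHandler;

@Component(
    service = EventHandler.class,
    property = {
        EventConstants.EVENT_TOPIC + "=" + SlingConstants.TOPIC_RESOURCE_ADDED,
        EventConstants.EVENT_TOPIC + "=" + SlingConstants.TOPIC_RESOURCE_CHANGED
    })
public class AssetPropertyListener implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        String path = (String) event.getProperty(SlingConstants.PROPERTY_PATH);
        // If "path" points at the property/rendition you are waiting for,
        // execute your business logic here.
    }
}
```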
Why not use Thread.sleep() or a similar solution:
You don't know exactly when the workflow will be executed. It may be delayed if many assets are uploaded, or it may just get stuck.
You cannot guarantee that your thread will be able to execute its logic; the instance may be stopped, for example.
Creating a new thread for every uploaded resource can be an expensive task. You also waste resources when you create an infinite loop, put those threads to sleep, then wake them to check again and again, and so on, until the thread is eventually able to do its job.
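For the first option, here is a rough sketch of a custom workflow step, assuming the Granite workflow API; you would add this process as the last step of (a copy of) the DAM Update Asset model so it runs only after the renditions exist. The label is illustrative.

```java
// Sketch of option 1: a custom process step appended to the DAM Update
// Asset workflow model.
import org.osgi.service.component.annotations.Component;

import com.adobe.granite.workflow.WorkItem;
import com.adobe.granite.workflow.WorkflowException;
import com.adobe.granite.workflow.WorkflowSession;
import com.adobe.granite.workflow.exec.WorkflowProcess;
import com.adobe.granite.workflow.metadata.MetaDataMap;

@Component(
    service = WorkflowProcess.class,
    property = {"process.label=Asset Post-Processing Step"})
public class AssetPostProcessingStep implements WorkflowProcess {

    @Override
    public void execute(WorkItem item, WorkflowSession session, MetaDataMap args)
            throws WorkflowException {
        String payloadPath = item.getWorkflowData().getPayload().toString();
        // Earlier steps (rendition generation etc.) have completed by the
        // time this runs, so renditions and properties of the asset at
        // payloadPath can be read safely.
    }
}
```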

Monitoring a directory in Cocoa/Cocoa Touch

I am trying to find a way to monitor the contents of a directory for changes. I have tried two approaches.
Use kqueue to monitor the directory
Use GCD to monitor the directory
The problem I am encountering is that I can't find a way to detect which file has changed. I am attempting to monitor a directory with potentially thousands of files in it and I do not want to call stat on every one of them to find out which ones changed. I also do not want to set up a separate dispatch source for every file in that directory. Is this currently possible?
Note: I have documented my experiments monitoring files with kqueue and GCD
My advice is to just bite the bullet and do a directory scan in another thread, even if you're talking about thousands of files. But if you insist, here's the answer:
There's no way to do this without rolling up your sleeves and going kernel-diving.
Your first option is to use the FSEvents framework, which sends out notifications when a file is created, edited, or deleted (as well as for attribute changes). An overview is here, and someone wrote an Objective-C wrapper around the API, although I haven't tried it. But the overview doesn't mention events for individual file changes, only for directories (as with kqueue). I ended up using the code from here along with the header file here to compile my own logger, which I could use to get events at the individual-file level. You'd have to write some code in your app to run the logger in the background and monitor its output.
Alternatively, take a look at the "fs_usage" command, which constantly monitors all filesystem activity (and I do mean all). This comes with Darwin already, so you don't have to compile it yourself. You can use kqueue to listen for directory changes while at the same time monitoring the output from "fs_usage". If you get a notification from kqueue that a directory has changed, you can look at the output from fs_usage, see which files were written to, and check the filenames against the directory that was modified. fs_usage is a firehose, so be prepared to use some options along with grep to tame it.
To make things more fun, both your FSEvents logger and fs_usage require root access, so you'll have to get authorization from the user before you can use them in your OS X app (check out the Authorization Services Programming Guide for info on how to do it).
If this all sounds horribly complicated, that's because it is. But at least you didn't have to find out the hard way!

Windows Workflow: Persistence and Polling

I'm currently learning the WF framework, so bear with me; mostly I'm looking for where to start looking, not necessarily a direct answer. I just can't seem to figure out how to begin researching what I'd like in The Google.
Let's say I have a simple one-step workflow (much more complicated than that, but for simplicity's sake). This workflow needs to watch a certain record in the database to see when it changes. I don't have the capability to "push" via a trigger from the database when the row changes, so I need to poll for it every so often.
This workflow needs to be persisted to the database to be durable against restarts and whatnot as this is a long-running workflow. I'm trying to figure out the best way to get it to check every 3 minutes or so and also persist to the database. Do the persistence capabilities of the framework allow for that? It seems to be time-based. And since the workflow won't be reawakened by an external event, how does it reload from the database and check the same step it did previously again? Does it attempt the last unfulfilled activity automatically upon reloading?
Do "while" activities with a delay attached to it work at all, or can it be handled solely through the persistence services?
I'm not sure what you mean by "handled solely through persistence services"? Persistence refers only to the storing of an idle workflow.
You could have a Delay and a Code activity in a Sequence in a While loop. While in the Delay, the workflow will go idle and may be persisted if necessary. However, depending on how much state must be persisted with the workflow, and on how many such workflows are running at any one time, a leaner approach may be necessary.
A leaner approach would be to externalize the DB watching: have a "DB watching" workflow service raise an event when the desired change has occurred. This service would be added to the workflow runtime.
To that end you need a service contract, defined by an interface with the [ExternalDataExchange] attribute. This interface in turn defines an event that the service will raise when the desired DB change is detected. It also defines a method that a workflow can call to specify what change the service should be looking for. The method should accept an instance GUID so that the requesting workflow instance can be found when the DB change is detected.
In the workflow you use a CallExternalMethodActivity to call this service's method. You then flow to a HandleExternalEventActivity which listens for the event. At this point the workflow will go idle and can be persisted. It will remain there until the service raises the event.

How do you make a build that includes only one of many pending changes?

In my current environment, we have a "clean" build machine, which has an exact copy of all committed changes, nothing more, nothing less.
And of course I have my own machine, with dozens of files in an "in-progress" state.
Often I need to build my application with only one change in place. For example, I've finished task ABC, and I want to build an EXE with only that change.
But of course I can't commit the change to the repository until it's tested.
Branching seems like overkill for this. What do you do in your environment to isolate changes for test builds and releases?
@Matt b: So while you wait for feedback on your change, what do you do? Are you always working on exactly one thing?
So you are asking how to handle working on multiple "tasks" at once, without branching, right?
You can have multiple checkouts of the source on the local machine, suffixing the directory name with the name of the ticket you are working on. Just make sure to make changes in the right directory, depending on the task...
Mixing multiple tasks in one working copy / commit can get very confusing, especially if somebody needs to review your work later.
I prefer to make and test builds on my local machine/environment before committing or promoting any changes.
For your specific example, I would have checked out a clean copy of the source before starting task ABC, and after implementing ABC, created a build locally with that in it.
Something like this: git stash && ./bootstrap.sh && make tests :)
I try hard to make each "commit" operation represent a single, cohesive change. Sometimes it's a whole bug fix or a whole feature, and sometimes it's a single small refactoring on the way to something bigger. There's no simple rule for deciding what a unit is here; it's done by gut feel. I also ask (beg!) my teammates to do the same.
When this is done well, you get a number of benefits:
You can write a high quality, detailed description for the change.
Reading the first line of the description of each change gives you a sense of the flow of the code.
The diffs of a change are easy to read & understand.
If a change introduces a bug / build break / other problem, it's easy to isolate, understand, and back out if necessary.
If I'm half-way through a change and decide to abort, I don't lose much.
If I'm not sure how to proceed next, I can spend a few minutes on each of several approaches, and then pick the one I like, discarding the others.
My coworkers pick up most of my changes sooner, dramatically simplifying the merge problem.
When I'm feeling stuck about a big problem, I can take a few small steps that I'm confident in, checking them in as I go, thereby making the big problem a little smaller.
Working like this can help reduce the need for small branches, since you take a small, confident step, validate it, and commit it, then repeat. I've talked about how to make the step small and confident, but for this to work you also need to make the validation phase go quickly. Having a strong battery of fast, fine-grained unit tests plus high-quality, fast application tests is key.
Teams that I have worked on before required code reviews before checking in; that adds latency, which interferes with my small-step work style. Making code reviews a high-urgency interrupt works; so does switching to pair programming.
Still, my brain seems to like heavy multitasking. To make that work, I still want multiple in-progress changes. I've used multiple branches, multiple local copies, multiple computers, and tools that make backups of pending changes. All of them can work. (And all of them are equivalent, implemented in different ways.) I think that multiple branches is my favorite, although you need a source control system that is good at spinning up new branches quickly & easily, without being a burden on the server. I've heard BitKeeper is good at this, but I haven't had a chance to check it out yet.