Threading concept - c#-3.0

Can somebody help me with this:
private Thread workerThread;
private EventWaitHandle waitHandle;

if (workerThread == null)
{
    workerThread = new Thread(new ThreadStart(Work));
    workerThread.Start();
    //workerThread.Join();
}
else if (workerThread.ThreadState == ThreadState.WaitSleepJoin)
{
    waitHandle.Set();
}

private void Work()
{
    while (true)
    {
        string filepath = RetrieveFile();
        if (filepath != null)
            ProcessFile(filepath);
        else
            // If no files are left to process, wait
            waitHandle.WaitOne();
    }
}

private void ProcessFile(string filepath)
{
    XMLCreation myXML = new XMLCreation();
    myXML.WriteXml(filepath, XMLFullFilePath);
}

private string RetrieveFile()
{
    if (workQueue.Count > 0)
        return workQueue.Dequeue();
    else
        return null;
}
This is how it all works: I have a FileSystemWatcher event that fires only when a new file is added to the folder. The problem is that this is a small part of a bigger application, and when the watcher event fires, another process is still accessing the file, so I get an error saying the file is being used by another process. I tried to work around this with the threading code above, but only some of the files actually get processed, even though the log shows all of them being picked up. Is this the right way to do it, or am I missing something?
Thanks in advance.

You will have to use a mutex to control access to the file, so that only one process at a time works with it. If you think more than one thread may be waiting to work with the same file, then you will have to implement a producer-consumer threading system with a queue, as sketched below.
Here is the best documentation about threads you can find in .NET:
http://www.albahari.com/threading/
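A minimal sketch of that producer-consumer shape, written in Java since the pattern is language-agnostic (on .NET 3.5 the equivalent is a Queue<string> guarded by lock with Monitor.Wait/Pulse; .NET 4 adds BlockingCollection<T>). A single consumer thread blocks on the queue, which avoids the race-prone ThreadState check in the question; class and method names are illustrative:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class FileWorker {
    private final BlockingQueue<String> workQueue = new LinkedBlockingQueue<>();

    // producer: call this from the watcher callback whenever a new file appears
    public void enqueue(String filepath) {
        workQueue.add(filepath);
    }

    // consumer: run this on one dedicated thread
    public void work() throws InterruptedException {
        while (true) {
            // blocks until a file is available; no wait-handle bookkeeping needed
            String filepath = workQueue.take();
            processFile(filepath);
        }
    }

    private void processFile(String filepath) {
        // placeholder for the XML processing from the question
    }
}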


Vert.x: How to wait for a future to complete

Is there a way to wait for a future to complete without blocking the event loop?
An example of a use case with querying Mongo:
Future<Result> dbFut = Future.future();
mongo.findOne("myusers", myQuery, new JsonObject(), res -> {
    if (res.succeeded()) {
        ...
        dbFut.complete(res.result());
    } else {
        ...
        dbFut.fail(res.cause());
    }
});
// Here I need the result of the DB query
if (dbFut.succeeded()) {
    doSomethingWith(dbFut.result());
} else {
    error();
}
I know the doSomethingWith(dbFut.result()); call can be moved into the handler, but if it's long, the code gets unreadable (callback hell?). Is that the right solution? Is that the only solution without additional libraries?
I'm aware that RxJava simplifies the code, but as I don't know it, learning Vert.x and RxJava at the same time is just too much.
I also wanted to give vertx-sync a try. I put the dependency in the pom.xml and everything was downloaded fine, but when I started my app I got the following error:
maurice@mickey> java \
    -javaagent:~/.m2/repository/co/paralleluniverse/quasar-core/0.7.5/quasar-core-0.7.5-jdk8.jar \
    -jar target/app-dev-0.1-fat.jar \
    -conf conf/config.json
Error opening zip file or JAR manifest missing : ~/.m2/repository/co/paralleluniverse/quasar-core/0.7.5/quasar-core-0.7.5-jdk8.jar
Error occurred during initialization of VM
agent library failed to init: instrument
I know what the error means in general, but not in this context. I tried to google it but didn't find any clear explanation of which manifest to put where, and as before, unless it's mandatory, I prefer to learn one thing at a time.
So, back to the question: is there a way with "basic" Vert.x to wait for a future without disturbing the event loop?
You can set a handler for the future to be executed upon completion or failure:
Future<Result> dbFut = Future.future();
mongo.findOne("myusers", myQuery, new JsonObject(), res -> {
    if (res.succeeded()) {
        ...
        dbFut.complete(res.result());
    } else {
        ...
        dbFut.fail(res.cause());
    }
});
dbFut.setHandler(asyncResult -> {
    if (asyncResult.succeeded()) {
        // your logic here
    }
});
This is a pure Vert.x way that doesn't block the event loop.
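If the nesting itself is the worry (the "callback hell" the question mentions), handlers can be chained with Future.compose instead of nested. A minimal sketch, assuming Vert.x 3.4+ where compose takes a mapper function; doSomethingAsyncWith is a hypothetical next asynchronous step:

Future<Result> dbFut = Future.future();
mongo.findOne("myusers", myQuery, new JsonObject(), dbFut.completer());

dbFut.compose(result -> {
    // doSomethingAsyncWith is a placeholder: the next async step
    // returns another Future instead of nesting a handler
    Future<Void> next = Future.future();
    doSomethingAsyncWith(result, next.completer());
    return next;
}).setHandler(ar -> {
    if (ar.failed()) {
        error();
    }
});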
I agree that you should not block in the Vert.x processing pipeline, but I make one exception to that rule: start-up. By design, I want to block while my HTTP server is initialising.
This code might help you:
/**
 * @return null when waiting on {@code Future<Void>}
 */
@Nullable
public static <T> T awaitComplete(Future<T> f)
throws Throwable
{
    final Object lock = new Object();
    final AtomicReference<AsyncResult<T>> resultRef = new AtomicReference<>(null);
    synchronized (lock)
    {
        // We *must* be locked before registering the callback.
        // If the result is already ready, the callback may be called immediately!
        f.onComplete(
            (AsyncResult<T> result) ->
            {
                resultRef.set(result);
                synchronized (lock) {
                    lock.notify();
                }
            });
        // Check the flag *before* waiting: if the callback already ran
        // (synchronously, on this thread), waiting now would block forever.
        // The loop also guards against spurious wake-ups.
        // Ref: https://stackoverflow.com/a/249907/257299
        while (null == resultRef.get())
        {
            // @Blocking
            lock.wait();
        }
    }
    final AsyncResult<T> result = resultRef.get();
    @Nullable
    final Throwable t = result.cause();
    if (null != t) {
        throw t;
    }
    @Nullable
    final T x = result.result();
    return x;
}
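Since awaitComplete blocks, it must only be called off the event loop, e.g. on the launcher thread during start-up as described above. A hypothetical Vert.x 4 usage (the router and port are placeholders; vertx-web is assumed for Router):

// block the launcher thread, never an event-loop thread
public static void main(String[] args) throws Throwable {
    Vertx vertx = Vertx.vertx();
    Router router = Router.router(vertx);
    HttpServer server = awaitComplete(
        vertx.createHttpServer().requestHandler(router).listen(8080));
    System.out.println("HTTP server is up");
}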

List files under a folder within a Camel rest service

I am creating a REST service using the Camel Rest DSL. Within the service I need to list all files under a folder and do some processing on them. Please find the code below:
from("direct:postDocument")
.to("file:/home/s469457/service/content-util/composite?noop=true")
.setBody(constant(null))
.log("Scanning file ${file:name.noext}.${file:name.ext}...");
Please advise.
~ Arunava
I would suggest writing a processor or a bean to list the files in the directory, as sketched below; I think that would be more efficient and much simpler. Using Camel's file component you would have to deal with intricacies you might not expect.
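A minimal sketch of that bean approach, assuming the directory from the question (the bean and endpoint names are illustrative):

import java.io.File;

public class FileListBean {
    public String listFiles() {
        // plain java.io listing; returns one file name per line
        File dir = new File("/home/s469457/service/content-util/composite");
        String[] names = dir.list();
        return names == null ? "" : String.join("\n", names);
    }
}

// in the RouteBuilder:
rest("/my-api/")
    .get()
    .produces("text/plain")
    .to("direct:listFiles");

from("direct:listFiles")
    .bean(FileListBean.class);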
Regardless, if you do want to use the file component, you will need to pollEnrich and afterwards aggregate the whole result. I also think you would run into trouble being unable to read files multiple times; to solve that you might need to create an idempotent repository, and when reading files there may be concurrency/file-locking issues...
Here's some pseudocode to get you started if you want to go that way:
from("direct:listFiles")
.pollEnrich("file:"+getFullPath()+"?noop=true")
.aggregate(new AggregationStrategy {
public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
String filename = newExchange.getIn().getHeader("CamelFileName", String.class)
if (oldExchange == null) {
newExchange.getIn().setBody(new ArrayList<String>(Arrays.asList(filename)));
return newExchange;
} else {
...
}
})
//Camel Rest Api to list files
rest().path("/my-api/")
    .get()
    .produces("text/plain")
    .to("direct:listFiles");

//Camel Route to list files
List<String> fileList = new ArrayList<String>();

from("direct:listFiles")
    .loopDoWhile(body().isNotNull())
        .pollEnrich("file:/home/s469457/service/content-util/composite?noop=true&recursive=true&idempotent=false&include=.*.csv")
        .choice()
            .when(body().isNotNull())
                .process(new Processor() {
                    @Override
                    public void process(Exchange exchange) throws Exception {
                        File file = exchange.getIn().getBody(File.class);
                        fileList.add(file.getName());
                    }
                })
            .otherwise()
                .process(new Processor() {
                    @Override
                    public void process(Exchange exchange) throws Exception {
                        if (fileList.size() != 0)
                            exchange.getOut().setBody(String.join("\n", fileList));
                        fileList.clear();
                    }
                })
    .end();

Read large file using vertx

I am new to Vert.x and I am using the Vert.x FileSystem API to read a large file.
vertx.fileSystem().readFile("target/classes/readme.txt", result -> {
    if (result.succeeded()) {
        System.out.println(result.result());
    } else {
        System.err.println("Oh oh ..." + result.cause());
    }
});
But all the RAM is consumed while reading, and the resource is not even released after use. The Vert.x FileSystem API docs also warn:
Do not use this method to read very large files or you risk running out of available RAM.
Is there any alternative to this?
To read a large file you should open an AsyncFile:
OpenOptions options = new OpenOptions();
fileSystem.open("myfile.txt", options, res -> {
    if (res.succeeded()) {
        AsyncFile file = res.result();
    } else {
        // Something went wrong!
    }
});
An AsyncFile is a ReadStream, so you can use it together with a Pump to copy the bits to a WriteStream:
Pump.pump(file, output).start();
file.endHandler((r) -> {
    System.out.println("Copy done");
});
There are different kinds of WriteStream: AsyncFile, net sockets, HTTP server responses, etc.
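Putting the two snippets above together, a minimal sketch that copies one file to another without loading it into RAM (the file names are placeholders):

OpenOptions options = new OpenOptions();
fileSystem.open("myfile.txt", options, res -> {
    if (res.succeeded()) {
        AsyncFile file = res.result();
        fileSystem.open("copy.txt", new OpenOptions(), res2 -> {
            if (res2.succeeded()) {
                AsyncFile output = res2.result();
                // stream the bits across in chunks
                Pump.pump(file, output).start();
                file.endHandler(v -> {
                    file.close();
                    output.close();
                    System.out.println("Copy done");
                });
            }
        });
    }
});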
To read/process a large file in chunks you need to use the open() method, which will return an AsyncFile on success. On that AsyncFile you call setReadBufferSize() (or not; the default is 8192) and attach a handler(), which will be passed a Buffer of at most the read-buffer size you just set.
In the example below I have also attached an endHandler() to print a final newline to stay in line with the sample code you provided in the question:
vertx.fileSystem().open("target/classes/readme.txt", new OpenOptions().setWrite(false).setCreate(false), result -> {
if (result.succeeded()) {
result.result().setReadBufferSize(READ_BUFFER_SIZE).handler(data -> System.out.print(data.toString()))
.endHandler(v -> System.out.println());
} else {
System.err.println("Oh oh ..." + result.cause());
}
});
You need to define READ_BUFFER_SIZE somewhere of course.
The reason for that is that internally .readFile calls Files.readAllBytes.
What you should do instead is create a stream from your file and pass it to a Vert.x handler:
try (InputStream stream = new FileInputStream("target/classes/readme.txt")) {
    // Your handling here
}

Random "404 Not Found" error on PasrseObject SaveAsynch (on Unity/Win)

I started using Parse on Unity for a Windows desktop game.
I save very simple objects in a very simple way.
Unfortunately, 10% of the time I randomly get a 404 error from the SaveAsync method :-(
This happens with different kinds of ParseObject.
I also isolated the run context to avoid any external interference.
I checked the object ID when this error happens and everything looks OK.
The strange thing is that 90% of the time I save these objects without an error (during the same application run).
Has someone already run into this problem?
Just in case, here is my code (but there is nothing special about it, I think):
{
    encodedContent = Convert.ToBase64String(ZipHelper.CompressString(jsonDocument));
    mLoadedParseObject[key]["encodedContent "] = encodedContent;
    mLoadedParseObject[key].SaveAsync().ContinueWith(OnTaskEnd);
}
....
void OnTaskEnd(Task task)
{
    if (task.IsFaulted || task.IsCanceled)
        OnTaskError(task); // print error ....
    else
        mState = States.SUCEEDED;
}
private void DoTest()
{
    StartCoroutine(DoTestAsync());
}

private IEnumerator DoTestAsync()
{
    yield return 1;
    Terminal.Log("Starting 404 Test");
    var obj = new ParseObject("Test1");
    obj["encodedContent"] = "Hello World";
    Terminal.Log("Saving");
    obj.SaveAsync().ContinueWith(OnTaskEnd);
}

private void OnTaskEnd(Task task)
{
    if (task.IsFaulted || task.IsCanceled)
        Terminal.LogError(task.Exception.Message);
    else
        Terminal.LogSuccess("Thank You !");
}
I was not able to replicate this; I got a "Thank You". Could you try updating your Parse SDK?
EDIT
A 404 can also be caused by a bad username/password match. The exception messages are not the best.

How to run ant from an Eclipse plugin, send output to an Eclipse console, and capture the build result (success/failure)?

From within an Eclipse plugin, I'd like to run an Ant build script. I also want to show the Ant output to the user by displaying it in an Eclipse console. Finally, I want to wait for the Ant build to finish and capture the result: did the build succeed or fail?
I found three ways to run an Ant script from Eclipse:
Instantiate an org.eclipse.ant.core.AntRunner, call some setters, and call run() or run(IProgressMonitor); see the sketch after this list. The result is either normal termination (indicating success), a CoreException with an IStatus containing a BuildException (indicating failure), or something else went wrong. However, I don't see the Ant output anywhere.
Instantiate an org.eclipse.ant.core.AntRunner and call run(Object), passing a String[] containing the command-line arguments. The result is either normal termination (indicating success), an InvocationTargetException (indicating failure), or something else went wrong. The Ant output seems to be sent to Eclipse's stdout; it is not visible in Eclipse itself.
Call DebugPlugin.getDefault().getLaunchManager(), then getLaunchConfigurationType(IAntLaunchConfigurationConstants.ID_ANT_BUILDER_LAUNCH_CONFIGURATION_TYPE) on it, set the attribute "org.eclipse.ui.externaltools.ATTR_LOCATION" to the build file name (and the attribute DebugPlugin.ATTR_CAPTURE_OUTPUT to true), and finally call launch(). The Ant output is shown in an Eclipse console, but I have no idea how to capture the build result (success/failure) in my code, or even how to wait for the launch to terminate.
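For reference, here is a minimal sketch of the first approach (the build-file path is a placeholder); a normal return means success and a CoreException means failure, but no output is shown anywhere:

AntRunner runner = new AntRunner();                 // org.eclipse.ant.core
runner.setBuildFileLocation("/path/to/build.xml");  // placeholder path
try {
    runner.run(new NullProgressMonitor());          // org.eclipse.core.runtime
    // normal return: the build succeeded
} catch (CoreException e) {
    // the build failed; e.getStatus() may wrap the Ant BuildException
}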
Is there any way to have both console output and capture the result?
Edit 05/16/2016: @Lii alerted me to the fact that any output between the ILaunchConfigurationWorkingCopy#launch call and when the IStreamListener is appended will be lost. He made a contribution to this answer here.
Original Answer
I realize this is an old post, but I was able to do exactly what you want in one of my plugins. If it doesn't help you at this point, maybe it will help someone else. I originally did this in 3.2, but it has been updated for 3.6 API changes...
// show the console
final IWorkbenchPage activePage = PlatformUI.getWorkbench()
                                            .getActiveWorkbenchWindow()
                                            .getActivePage();
activePage.showView(IConsoleConstants.ID_CONSOLE_VIEW);

// let the launch manager handle the Ant script so output is directed to the Console view
final ILaunchManager manager = DebugPlugin.getDefault().getLaunchManager();
ILaunchConfigurationType type = manager.getLaunchConfigurationType(IAntLaunchConstants.ID_ANT_LAUNCH_CONFIGURATION_TYPE);
final ILaunchConfigurationWorkingCopy workingCopy = type.newInstance(null, [*** GIVE YOUR LAUNCHER A NAME ***]);
workingCopy.setAttribute(ILaunchManager.ATTR_PRIVATE, true);
workingCopy.setAttribute(IExternalToolConstants.ATTR_LOCATION, [*** PATH TO ANT SCRIPT HERE ***]);
final ILaunch launch = workingCopy.launch(ILaunchManager.RUN_MODE, null);

// watch the error stream to make sure the build doesn't fail
final boolean[] buildSucceeded = new boolean[] { true };
((AntProcess) launch.getProcesses()[0]).getStreamsProxy()
    .getErrorStreamMonitor()
    .addListener(new IStreamListener() {
        @Override
        public void streamAppended(String text, IStreamMonitor monitor) {
            if (text.indexOf("BUILD FAILED") > -1) {
                buildSucceeded[0] = false;
            }
        }
    });

// wait for the launch (ant build) to complete
manager.addLaunchListener(new ILaunchesListener2() {
    public void launchesTerminated(ILaunch[] launches) {
        boolean patchSuccess = false;
        try {
            if (!buildSucceeded[0]) {
                throw new Exception("Build FAILED!");
            }
            for (int i = 0; i < launches.length; i++) {
                if (launches[i].equals(launch)
                        && buildSucceeded[0]
                        && !((IProgressMonitor) launches[i].getProcesses()[0]).isCanceled()) {
                    [*** DO YOUR THING... ***]
                    break;
                }
            }
        } catch (Exception e) {
            [*** DO YOUR THING... ***]
        } finally {
            // get rid of this listener
            manager.removeLaunchListener(this);
            [*** DO YOUR THING... ***]
        }
    }

    public void launchesAdded(ILaunch[] launches) {
    }

    public void launchesChanged(ILaunch[] launches) {
    }

    public void launchesRemoved(ILaunch[] launches) {
    }
});
I'd like to add one thing to happytime harry's answer.
Sometimes the first writes to the stream happen before the stream listener is added. Then streamAppended is never called for those writes, and that output is lost.
See for example this bug. I think happytime harry's solution might have this problem; I registered my stream listener in ILaunchListener.launchChanged and this happened 4 times out of 5.
If one wants to be sure to get all the output from a stream, the IStreamMonitor.getContents method can be used to fetch the output that happened before the listener was added.
The following is an attempt at a utility method that handles this; it is based on the code in ProcessConsole.
/**
 * Adds listener to monitor, and calls listener with any content the monitor already has.
 * NOTE: This method synchronises on monitor while listener is called. The listener must
 * not wait on any thread that is itself waiting to synchronise on monitor; that would
 * result in deadlock.
 */
public static void addAndNotifyStreamListener(IStreamMonitor monitor, IStreamListener listener) {
    // Synchronise on monitor to prevent writes to the stream while we are adding the listener.
    // It's weird to synchronise on monitor because it's a shared object, but that's
    // what ProcessConsole does.
    synchronized (monitor) {
        String contents = monitor.getContents();
        if (!contents.isEmpty()) {
            // Call to unknown code while synchronising on monitor. This is deadlock-prone!
            // The listener must not wait for other threads that are waiting in line to
            // synchronise on monitor.
            listener.streamAppended(contents, monitor);
        }
        monitor.addListener(listener);
    }
}
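Usage with the launch from happytime harry's answer might look like this, reusing his buildSucceeded flag and hooking the error stream monitor through the utility instead of calling addListener directly:

IStreamMonitor errorMonitor = ((AntProcess) launch.getProcesses()[0])
    .getStreamsProxy()
    .getErrorStreamMonitor();
addAndNotifyStreamListener(errorMonitor, new IStreamListener() {
    @Override
    public void streamAppended(String text, IStreamMonitor monitor) {
        if (text.indexOf("BUILD FAILED") > -1) {
            buildSucceeded[0] = false;
        }
    }
});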
PS: There is some weird stuff going on in ProcessConsole.java. Why is content buffering switched off in the ProcessConsole.StreamListener constructor?! If ProcessConsole.StreamListener runs before this one, maybe this solution doesn't work.