The large message file never gets deleted from the large message directory. The following code in Artemis is responsible for deleting the large message file from the large message directory. The method is in the class org.apache.activemq.artemis.core.io.AbstractSequentialFile:
@Override
public final void delete() throws IOException, InterruptedException, ActiveMQException {
   if (isOpen()) {
      close();
   }
   try {
      Files.deleteIfExists(file.toPath()); // if this fails, the file is never removed
   } catch (Throwable t) {
      logger.trace("Fine error while deleting file", t);
      ActiveMQJournalLogger.LOGGER.errorDeletingFile(this);
   }
}
In the above code, if the file is somehow not deleted, it persists forever in the large message directory. Can someone tell me how to solve this issue?
If a problem occurs when deleting the file then an ERROR message will be logged like:
Failed to delete file /path/to/file
You can use this information to determine which specific file was supposed to be deleted and then delete that file manually.
If no exception is logged (for whatever reason), you want to get rid of the large message file on disk, and that message isn't referenced in any queue on the broker, then stop the broker, delete the file, and restart the broker.
Related
I am using MultiResourceItemWriter to generate files. Suppose each file will have 15 lines and the chunk size is 5; then there will be 3 chunks per file.
If an exception occurs in the second or third chunk of a file, the file gets created and it contains the data up to the last committed chunk. After a restart, the rest of the data is written to the file as expected.
But if an exception occurs in the first chunk of a file, the file does not get generated. Now, if I restart the failed job, I get a "File is not writable: [filename]" error message.
Is there a way to restart the job when the first chunk of a file fails?
I am in a situation where I have two clients (ClientA and ClientB) connected to an IMAP server. ClientA is running MailKit. When I delete or move a folder with ClientB, the MailKit client gets an error when it attempts to open or fetch messages from the deleted folder. Actually, I get disconnected from the server when I try to fetch a message from a deleted folder (I guess that is the expected behavior from the server). Because of that, I am trying to detect whether the folder I am about to execute a command against still exists.
I see MailKit uses a FolderCache, and even after I reconnect the client, I still get an IMailFolder reference for the deleted folder when I use the GetFolder(string path) method. To avoid the FolderCache, I am creating a new instance of MailClient each time I am about to synchronize remote folders, so that non-existent folders are not served from the cache. I would like to know if that is the recommended approach in this situation?
UPDATE:
So, I am now using the GetSubfolders command and I can see a LIST command is sent to the server. However, it seems there is an issue with that command in the following scenario:
ClientB deletes the folder INBOX.spam.op while ClientA is trying to move a folder with the path INBOX.spam.op.folder1. What happens is that the server creates a new folder INBOX.spam.op with the attribute NonExistent. That is the expected server behavior in order to create the folder with the path INBOX.spam.op.folder1.
But see what happens with MailKit when I use GetSubfolders on INBOX.spam: I get an instance of IMailFolder with Name = "op" and Attributes that are a mix of the new NonExistent attribute and the attributes of the old "op" folder (the folder in the FolderCache). UidValidity should be 0 for a NonExistent folder, but it is the same as the UidValidity of the "op" folder in the FolderCache, even though the server response is this:
C: A00000102 LIST "" "INBOX.spam.%" RETURN (SUBSCRIBED CHILDREN STATUS (UIDVALIDITY))
S: * LIST (\NonExistent \HasChildren) "." INBOX.spam.op
S: A00000102 OK List completed (0.001 + 0.000 secs).
I tried to inherit from ImapClient and add my own method GetFolderNoCache(string path), but this doesn't work because of the internal classes. Any other suggestions?
What you want to do is get the top-level folder from the namespace. Then, using that ImapFolder object, get the list of its children (and so on, if you are trying to check whether a deeply nested folder still exists).
var toplevel = client.GetFolder (client.PersonalNamespaces[0]);
foreach (var folder in toplevel.GetSubfolders ()) {
// look for the folder you are interested in...
// if it's not here, then the folder has been deleted
}
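For a nested path like the INBOX.spam.op in your update, you can repeat that walk one level at a time. Here is a minimal sketch (the helper name FolderStillExists is mine, not part of MailKit, and it assumes a dot-separated personal namespace like the one in your logs):
using MailKit;
using MailKit.Net.Imap;

static class FolderChecks
{
    public static bool FolderStillExists (ImapClient client, string path)
    {
        // Start from the top-level personal namespace folder, as above.
        var current = client.GetFolder (client.PersonalNamespaces[0]);

        foreach (var name in path.Split (current.DirectorySeparator)) {
            IMailFolder next = null;

            // GetSubfolders() issues a LIST command, so this reflects the live
            // server state rather than the FolderCache.
            foreach (var child in current.GetSubfolders (false)) {
                if (child.Name == name) {
                    next = child;
                    break;
                }
            }

            // Not listed, or listed only as \NonExistent => treat it as deleted.
            if (next == null || next.Attributes.HasFlag (FolderAttributes.NonExistent))
                return false;

            current = next;
        }

        return true;
    }
}
If the folder disappears mid-walk, you know it (or one of its parents) was deleted by the other client, and you can skip opening it instead of triggering the disconnect.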
When attempting to publish a Service Fabric application to a local cluster, the cluster fails to load the application stating the error in the title. The stack trace points me to an exception line in OwinCommunicationListener.cs:
try
{
    this.eventSource.LogInfo("Starting web server on " + this.listeningAddress);
    this.webApp = WebApp.Start(this.listeningAddress, appBuilder => this.startup.Invoke(appBuilder));
    this.eventSource.LogInfo("Listening on " + this.publishAddress);
    return Task.FromResult(this.publishAddress);
}
catch (Exception ex)
{
    var logString = $"Web server failed to open endpoint {endpointName}. {ex.ToString()}";
    this.eventSource.LogFatal(logString);
    this.StopWebServer();
    throw ex; // points to this line from cluster manager
}
I am unable to inspect the exception thrown; the only information I get is a TargetInvocationException with a stack trace pointing to the line noted above. Why won't this application load on my local cluster?
It's hard to say without an actual exception message or stack trace, but judging by the location from which the exception was thrown and the fact that the problem resolved itself the next morning, the most likely and most common cause is that the port you were trying to use to open the web listener was taken by some other process at the time, and the next morning the port was free again. This, by the way, isn't really specific to Service Fabric: you're just trying to open a socket on a port that is taken by someone else.
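If you want to rule out a port conflict before redeploying, you can check whether anything is already listening on the endpoint port. A quick diagnostic sketch, assuming the port number (8080 here is a hypothetical value) comes from the Endpoint element in your ServiceManifest.xml:
using System;
using System.Linq;
using System.Net.NetworkInformation;

class PortCheck
{
    static void Main ()
    {
        const int port = 8080; // hypothetical: use the port from your ServiceManifest.xml

        // Enumerate every TCP listener currently bound on this machine.
        var listeners = IPGlobalProperties.GetIPGlobalProperties ().GetActiveTcpListeners ();

        if (listeners.Any (endpoint => endpoint.Port == port))
            Console.WriteLine ($"Port {port} is already in use, so WebApp.Start would fail.");
        else
            Console.WriteLine ($"Port {port} is free.");
    }
}
From the command line, netstat -ano | findstr :8080 will also show you which process currently owns the port.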
I'm honestly more curious about why you couldn't inspect the exception. I can think of three things off the top of my head to help with that:
Use "throw" instead of "throw ex" so you don't reset the stack trace (see the sketch after this list).
Look at your logs. It looks like you're writing out an ETW event in your catch block. What did it say?
Use the Visual Studio debugger: Simply set a breakpoint in the catch block and start the application with debugging by pressing F5.
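To illustrate the first point, here is a minimal standalone sketch (not your listener code) showing why the bare rethrow matters:
using System;

class RethrowDemo
{
    static void Inner () => throw new InvalidOperationException ("boom");

    static void Main ()
    {
        try
        {
            Inner ();
        }
        catch (Exception ex)
        {
            Console.WriteLine ("Logging: " + ex.Message);

            // "throw ex;" would rewrite the stack trace so it starts at this line,
            // hiding the frame for Inner(); a bare "throw;" preserves the original trace.
            throw;
        }
    }
}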
If Install4J could not delete a file, a message is shown which says:
com.install4j...DeleteFileAction failed
Is there any way to show the path to the file which could not be deleted in the error message? For example, a variable which I can use in the custom error description?
Thanks!
The detailed error messages are in the installation log file (.install4j/installation.log).
An error has crashed my application server and I can't seem to figure out what could be causing the issue. My application is built with Meteor and hosted on modulus.io. Here are my application logs:
Error: no chunks found for file, possibly corrupt
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:817:20
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:594:7
at /mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:35
at Cursor.close (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:989:5)
at Cursor.nextObject (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:17)
at commandHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:727:14)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/db.js:1916:9
at Server.Base._callHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/connection/base.js:448:41)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/connection/server.js:481:18
at [object Object].MongoReply.parseBody (/mnt/data/2/node_modules/mongodb/lib/mongodb/responses/mongo_reply.js:68:5)
[2015-03-29T22:05:57.573Z] Application CRASH detected. Exit code 8.
Most probably this is a MongoDB bug with GridFS (it has since been fixed):
Writing two or more different files concurrently from different node processes using the GridStore.writeFile command results in some files not being correctly written (ending up with a number of corrupt files in the gridstore), even with all writeFile calls being successful and no indication of error. writeFile occasionally fails with the error "chunks out of order", but this happens very rarely (something like 1 failed writeFile per 100 corrupt files or more).
Based on the comments in that discussion, the problem will be fixed if you update MongoDB (the existing GridFS files should be removed, as they are corrupt).
Error: no chunks found for file, possibly corrupt
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:808:20
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:586:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/collection/query.js:164:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/cursor.js:778:35
I had a similar occurrence, but it turned out the file sought in a GridFS read stream had actually been deleted; so in my case it wasn't corrupt, it was gone! Above is a log from when that happened.