MailKit - FolderCache and deleted folder

I am in a situation where I have two clients (ClientA and ClientB) connected to an IMAP server. ClientA is running MailKit. When I delete or move a folder with ClientB, the MailKit client gets an error on any attempt to open or fetch messages from the deleted folder. In fact, I get disconnected from the server when I try to fetch a message from a deleted folder (I guess that is the expected behavior from the server). Because of that, I am trying to detect whether the folder I am about to execute a command against still exists.
I see MailKit uses a FolderCache, and even after I reconnect the client, the GetFolder(string path) method still returns an IMailFolder reference for the deleted folder. To avoid the FolderCache, I create a new ImapClient instance each time I am about to synchronize remote folders, so that non-existent folders never come from the cache. Is that the recommended approach in this situation?
UPDATE:
So, I am now using the GetSubfolders method, and I can see a LIST command is sent to the server. However, there seems to be an issue in the following scenario:
ClientB deletes the folder INBOX.spam.op while ClientA is trying to move a folder with the path INBOX.spam.op.folder1. What happens is that the server creates a new folder INBOX.spam.op with the attribute NonExistent. That is the expected server behavior, needed in order to create the folder with the path INBOX.spam.op.folder1.
But see what happens in MailKit when I use GetSubfolders on INBOX.spam - I get an instance of IMailFolder with Name = "op" and Attributes that are a mix of the new NonExistent attribute and the attributes of the old "op" folder (the folder in the FolderCache). UidValidity should be 0 for a NonExistent folder, but it is the same as the UidValidity of the "op" folder in the FolderCache, even though the server response is this:
C: A00000102 LIST "" "INBOX.spam.%" RETURN (SUBSCRIBED CHILDREN STATUS (UIDVALIDITY))
S: * LIST (\NonExistent \HasChildren) "." INBOX.spam.op
S: A00000102 OK List completed (0.001 + 0.000 secs).
I tried to inherit from ImapClient and add my own GetFolderNoCache(string path) method, but this doesn't work because of the internal classes. Any other suggestions?

What you want to do is get the top-level folder from the namespace. Then, using that ImapFolder object, get the list of its children (and so on, if you are trying to check a deeply nested folder).
var toplevel = client.GetFolder (client.PersonalNamespaces[0]);
foreach (var folder in toplevel.GetSubfolders ()) {
    // look for the folder you are interested in...
    // if it's not here, then the folder has been deleted
}
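
To extend that to a deeply nested folder, you can walk the path one level at a time, re-listing the children at each step so the result always reflects the live server state instead of the FolderCache. A minimal sketch; the FindFolder helper is illustrative, not part of the MailKit API:

using System.Linq;
using MailKit;
using MailKit.Net.Imap;

// Walks a path like "INBOX.spam.op.folder1" level by level, issuing a
// LIST at each step; returns null as soon as a segment is missing.
static IMailFolder FindFolder (ImapClient client, string path)
{
    IMailFolder current = client.GetFolder (client.PersonalNamespaces[0]);
    foreach (var name in path.Split (current.DirectorySeparator)) {
        current = current.GetSubfolders ().FirstOrDefault (f => f.Name == name);
        if (current == null)
            return null; // this level no longer exists on the server
    }
    return current;
}

You may also want to treat a listed folder whose Attributes include FolderAttributes.NonExistent as deleted, since (as your LIST output shows) the server can advertise such placeholders for intermediate path components.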

Related

syslog-ng - Passing FILENAME from client to server when using wildcard_file

I am using syslog-ng to remote-log the application logs of multiple containers of the same image. I am using the source config below.
source s_wild {
    wildcard-file(
        base-dir("/var/myapp/logs")
        filename-pattern("*")
        recursive(no)
        flags(no-parse)
        follow-freq(1)
    );
};
When I use the logging on the local machine (for testing purposes), using the macro ${FILE_NAME}, it works. But the filename is not being passed over the network when testing with the remote server.
Aug 3 19:39:46 46fc878e92cf syslog-ng[2320]: Error opening file for writing; filename='', error='Is a directory (21)'
There are around 20-25 files, and I am looking for automatic mapping of the filenames on both the client and server side. Is it possible? I am not sure how wildcard_file maps to a remote server; logically it may not be possible, but I am still wondering about a solution.
I am wondering whether I can avoid a manual 1-to-1 mapping by defining multiple sources and destinations, or by using log_prefix.
The $FILE_NAME macro works only if syslog-ng receives messages from a file() or wildcard-file() source; it does not work over network(). A couple of options you have here to pass file names over the network are:
Use the structured-data section of an RFC 5424 syslog message
Use template() with json-parser() to send messages from the client side and parse them on the server side (see the sketch after the configuration below)
Use ewmm() (Enterprise-wide message model), which supports delivery of structured messages
With the first method, sending RFC 5424-formatted (IETF-syslog) messages allows you to set the FILE_NAME in the SDATA field. Use syslog() on both the source and destination side instead of network() to send the messages using the IETF syslog protocol. The whole configuration would be something like below:
syslog-ng client side
source s_wild {
    wildcard-file(
        base-dir("/var/log_syslog")
        filename-pattern("*")
        recursive(no)
        follow-freq(1)
    );
};

rewrite r_set_filename {
    set(
        "$FILE_NAME",
        value(".SDATA.file#18372.4.name")
    );
};

rewrite r_use_basename {
    subst(
        "/var/log_syslog/",
        "",
        value(".SDATA.file#18372.4.name"),
        type("string"),
        flags("prefix")
    );
};

destination d_container_logs {
    syslog(
        "192.168.10.48"
        transport("tcp")
        port(5141)
    );
};

log { source(s_wild); rewrite(r_set_filename); rewrite(r_use_basename); destination(d_container_logs); };
The r_set_filename rewrite copies the absolute path of the file into the SDATA field, and r_use_basename then chops off the path prefix so that only the filename is retained.
syslog-ng server side
source s_network {
    syslog(
        transport("tcp")
        port(5141)
        keep_hostname(yes)
    );
};

destination d_container_logs {
    file(
        "/var/sys_log/${.SDATA.file#18372.4.name}"
        create_dirs(yes)
    );
};

log { source(s_network); destination(d_container_logs); };
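
For the second option, the client can serialize the macros it cares about as JSON and the server can parse them back into name-value pairs. A rough sketch under assumed names (d_json, p_json, d_files and the 192.168.10.48 address are illustrative):

# client side: send selected macros, including $FILE_NAME, as JSON
destination d_json {
    network(
        "192.168.10.48"
        port(5141)
        template("$(format-json --scope rfc5424 --key FILE_NAME)\n")
    );
};

# server side: parse the JSON payload back into name-value pairs
parser p_json {
    json-parser(prefix(".json."));
};

destination d_files {
    file("/var/sys_log/${.json.FILE_NAME}" create_dirs(yes));
};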

Setting Dropbox as custom backup on WHM/cPanel - error

I am trying to set Dropbox as a custom backup destination following this cPanel blog post: https://blog.cpanel.com/cpanel-whm-custom-backup-transport-example-dropbox/
The connection is working, but the backup files are not being transferred to Dropbox, and when I press Validate on the custom backup destination it gives the following error:
Error: Validation for transport “dropbox” failed: Could not list files in
destination: Executed /usr/local/bin/backup_transport_dropbox.pl ls /
remotehost remoteuser : 2018-08-26T15:54:21 [WebService::Dropbox] [ERROR]
https://api.dropboxapi.com/2/files/list_folder {"path":"/"} -> [400] Error in
call to API function "files/list_folder": request body: path: Specify the root
folder as an empty string rather than as "/". at
/usr/local/share/perl5/WebService/Dropbox.pm line 184.
I am new to the Dropbox API and have no idea about Perl, so I could not figure out what is discussed in the link below.
https://github.com/silexlabs/unifile/issues/77
The error message is correctly indicating that the Dropbox API expects the value "" when referring to the root alone. The code is instead sending the value "/". This looks like a bug in the code.
It looks like you've already opened an issue with the developer for this:
https://github.com/CpanelInc/backup-transport-dropbox/issues/3
They should update the code to use "" when referring to the root folder on Dropbox.
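
In the meantime, a local workaround is to normalize the path inside /usr/local/bin/backup_transport_dropbox.pl before it reaches the API call. A minimal sketch, assuming the script holds the requested path in a variable such as $path (the variable name is a guess, not taken from the actual script):

# files/list_folder expects the root as an empty string, not "/",
# so translate the cPanel-style root path before calling the API
$path = '' if $path eq '/';
# ... then proceed with the existing call, e.g. $dropbox->list_folder($path);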

Static resource reload with akka-http

In short: is it possible to reload static resources using akka-http?
A bit more:
I have a Scala project.
I'm using an App object to launch my Main class.
I'm using getFromResourceDirectory to locate my resource folder.
What I would like to have is to hot-swap my static resources during development.
For example, I have index.html or application.js, which I change, and I want to see the changes after I refresh my browser, without restarting my server. What is the best practice for doing such a thing?
I know that Play! allows that, but I don't want to base my project on Play! only because of that.
Two options:
Easiest: use the getFromDirectory directive instead when running locally, and point it to the path where the files you want to 'hotload' are. It serves them directly from the file system, so every time you change a file and load it through Akka HTTP, you get the latest version.
getFromResourceDirectory loads files from the classpath; the resources are available because SBT copies them into the class directory under target every time you build (copyResources). You could configure sbt using unmanagedClasspath to make it include the static resource directory in the classpath. If you want to package the resources in the artifact when running package, however, this would require some more sbt trickery (if you just put src/resources in unmanagedClasspath, it will depend on classpath ordering whether the copied ones or the modified ones are used).
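For reference, a minimal sketch of that unmanagedClasspath tweak in build.sbt; the src/main/resources path is an assumption, so point it at wherever your static files actually live:

// build.sbt: put the live resource directory on the runtime classpath,
// so getFromResourceDirectory sees edits without a rebuild
Runtime / unmanagedClasspath += baseDirectory.value / "src" / "main" / "resources"

Note the caveat above: classpath ordering still decides whether the copied or the live files win.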
I couldn't get it to work by adding to unmanagedClasspath, so I used getFromDirectory instead. You can use getFromDirectory as a fallback when getFromResourceDirectory fails, like this:
val route =
  pathSingleSlash {
    getFromResource("static/index.html") ~
    getFromFile("../website/static/index.html")
  } ~
  getFromResourceDirectory("static") ~
  getFromDirectory("../website/static")
First it tries to look up the file in the static resource directory, and if that fails, it checks whether ../website/static has the file.
The code below tries to find the file in the directory staticContentDir. If the file is found, it is sent back to the client. If it is not found, it falls back to fetching the file from the directory "site" on the classpath.
The user URL is: http://server:port/site/path/to/file.ext
/site/ comes from staticPath.
val staticContentDir = calculateStaticPath()
val staticPath = "site"

val routes = pathPrefix(staticPath) {
  entity(as[HttpRequest]) { requestData =>
    val fullPath = requestData.uri.path
    encodeResponse {
      if (Files.exists(staticContentDir.resolve(fullPath.toString().replaceFirst(s"/$staticPath/", "")))) {
        getFromBrowseableDirectory(staticContentDir.toString)
      } else {
        getFromResourceDirectory("site")
      }
    }
  }
}
I hope it is clear.

Problems with Chronicle Map on Windows

I am trying to use ChronicleMap for my index structure. This seems to work fine on Linux, but when I run my JUnit test on Windows (which is my development environment), I keep getting an error: java.io.IOException: Unable to wait until the file is ready, likely the process which created the file crashed or hung for more than 1 minute.
Here's the code snippet that is problematic:
File file = new File(idxFullPath);
ChronicleMap<Integer, int[]> idx =
    ChronicleMapBuilder.of(Integer.class, int[].class)
        .averageValue(getSampleIdxList())
        .entries(IDX_MAX_SIZE)
        .createPersistedTo(file);
The following exception is thrown:
[2016-06-17 14:32:47.779] ERROR main com.mcm.op.persistence.Persistence ERR java.io.IOException: Unable to wait until the file is ready, likely the process which created the file crashed or hung for more than 1 minute
at net.openhft.chronicle.map.ChronicleMapBuilder.waitUntilReady(ChronicleMapBuilder.java:1520)
at net.openhft.chronicle.map.ChronicleMapBuilder.openWithExistingFile(ChronicleMapBuilder.java:1583)
at net.openhft.chronicle.map.ChronicleMapBuilder.createWithFile(ChronicleMapBuilder.java:1444)
at net.openhft.chronicle.map.ChronicleMapBuilder.createPersistedTo(ChronicleMapBuilder.java:1405)
at com.mcm.op.persistence.Persistence.initIdx(Persistence.java:131)
at com.mcm.op.persistence.Persistence.init(Persistence.java:177)
at com.mcm.op.persistence.PersistenceTest.initPersist(PersistenceTest.java:47)
at com.mcm.op.persistence.PersistenceTest.setUp(PersistenceTest.java:29)
Indeed, it is likely that the process which created the file has crashed, or was terminated while debugging, or something like that.
If it's OK to have a fresh index from one unit test run to the next, I recommend trying either deleting the file at idxFullPath before creating a Chronicle Map, or randomizing the mapping file via something like File.createTempFile(). In either case, File.deleteOnExit() could prove helpful.
If you want to keep the index between unit test runs and always use the same file at idxFullPath for persistence, you could try using builder.createOrRecoverPersistedTo() instead of the plain createPersistedTo() map creation method. However, this might slow down map creation.
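
A minimal sketch of both suggestions applied to the snippet from the question (idxFullPath, IDX_MAX_SIZE and getSampleIdxList() are carried over from it; the two options are alternatives, not meant to be combined):

// Option 1 (fresh index per run): remove the old file first.
File file = new File(idxFullPath);
file.delete();       // ignore the result; the file may not exist yet
file.deleteOnExit(); // clean up when the JVM exits

// Option 2 (persistent index): recover the file if the process that
// created it crashed, e.g. a killed debugging session on Windows.
ChronicleMap<Integer, int[]> idx =
    ChronicleMapBuilder.of(Integer.class, int[].class)
        .averageValue(getSampleIdxList())
        .entries(IDX_MAX_SIZE)
        .createOrRecoverPersistedTo(file);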

XML Import Warning: Informatica

I am getting the following warning message when I try to import an XML file into the Informatica repository.
Warning: Unexpected condition at: Wcursor.cpp: 305
Contact Informatica technical support for assistance
Continuing may result in damage to your repository.
The XML file is around 70 MB and contains around 4500 objects. I am migrating an entire application from one server to another.
I am not sure why this issue happens. I tried several times, and from another client system as well, but no luck.
To import the XML via the command line using the pmrep command, we need a control file. But I don't have any control file for this XML, so I can't go with that option.
It would be great if somebody could help me sort out this issue.
Details:
Infa version 9.1
Mounted on Unix environment.
I had the same problem some time ago. XML parsing takes a lot of memory and/or the GUI can't handle it. My solution was to use the pmrep command line tool. It worked for me - my workflow was composed of around 3600 objects, AFAIR.
If you don't have a control file - create one! Here's a very simple template:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE IMPORTPARAMS SYSTEM "impcntl.dtd">
<!--IMPORTPARAMS This inputs the options and inputs required for import operation -->
<!--CHECKIN_AFTER_IMPORT Check in objects on successful import operation -->
<!--CHECKIN_COMMENTS Check in comments -->
<!--APPLY_LABEL_NAME Apply the given label name on imported objects -->
<!--RETAIN_GENERATED_VALUE Retain existing sequence generator, normalizer and XML DSQ current values in the destination -->
<!--COPY_SAP_PROGRAM Copy SAP program information into the target repository -->
<!--APPLY_DEFAULT_CONNECTION Apply the default connection when a connection used by a session does not exist in the target repository -->
<IMPORTPARAMS CHECKIN_AFTER_IMPORT="YES" CHECKIN_COMMENTS="PMREP_IMPORT_TEST" RETAIN_GENERATED_VALUE="NO" COPY_SAP_PROGRAM="NO" APPLY_DEFAULT_CONNECTION="NO">
    <!--FOLDERMAP matches the folders in the imported file with the folders in the target repository -->
    <FOLDERMAP SOURCEFOLDERNAME="YOUR FIRST SOURCE FOLDER NAME" SOURCEREPOSITORYNAME="REP_DEV" TARGETFOLDERNAME="YOUR FIRST SOURCE FOLDER NAME" TARGETREPOSITORYNAME="REP_TEST"/>
    <FOLDERMAP SOURCEFOLDERNAME="YOUR SECOND TARGET FOLDER NAME" SOURCEREPOSITORYNAME="REP_DEV" TARGETFOLDERNAME="YOUR SECOND TARGET FOLDER NAME" TARGETREPOSITORYNAME="REP_TEST"/>
    <!--Import will only import the objects of the types selected in the TYPEFILTER node -->
    <!--TYPENAME type name to import. This should conform to the element names in powermart.dtd, e.g. SOURCE, TARGET, etc.-->
    <!--RESOLVECONFLICT allows you to specify the resolution for conflicting objects during import. A combination of the specified child nodes can be supplied -->
    <RESOLVECONFLICT>
        <!--TYPEOBJECT allows objects of a certain type to apply replace/reuse upon conflict-->
        <!--TYPEOBJECT = ALL conflict resolution for ALL types of objects -->
        <TYPEOBJECT OBJECTTYPENAME="ALL" RESOLUTION="REPLACE"/>
        <!--SPECIFICOBJECT allows a particular object (name, typename etc.) to apply replace/reuse upon conflict -->
        <!--NAME Object name-->
        <!--EXTRANAME Source DBD name - required for source objects to identify them uniquely-->
        <!--OBJECTTYPENAME Object type name-->
        <!--FOLDERNAME Folder which the object belongs to-->
        <!--REPOSITORYNAME Repository name that this object belongs to-->
        <!--RESOLUTION Resolution to apply for the object in case of conflict-->
        <!--SPECIFICOBJECT NAME="your_object" OBJECTTYPENAME="your_object_type" FOLDERNAME="your_source_folder" REPOSITORYNAME="your_source_repo" RESOLUTION="REPLACE"/-->
    </RESOLVECONFLICT>
</IMPORTPARAMS>
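
With the control file saved (say as import_control.xml; the file names, repository, domain and credentials below are placeholders chosen for illustration), the import itself would look roughly like this:

pmrep connect -r REP_TEST -d Domain_Dev -n Administrator -x your_password
pmrep objectimport -i exported_objects.xml -c import_control.xml -l import.log

Here -i points to the exported XML, -c to the control file above, and -l to a log file, which is useful for reviewing conflicts after a large import.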