Detect and switch Domino servers from within VBA - email

We are having issues with our mail server which have highlighted a weakness in a system that I set up a couple of years ago to email departments on completion of reports.
The code that currently sets up the mail server is hardcoded as
Set objNotesMailFile = objNotesSession.GETDATABASE("XXX-BASE-MAIL-04/CompanyName", dbString)
The problem we're having is that the 04 server is flaky at best at the moment, and everyone is being routed through one of the replication servers when it falls over. That's not much of a problem for the desktop Notes clients, as they handle this, but the application simply fails to get any mail out, and does so without giving any failure notification.
Is there a way I can test for the presence of an available database on the main server, and if not, fall back on one of the replication servers?

The NotesDatabase object has a property, IsOpen (a boolean), which can be used to check whether a database was successfully opened after a call to NotesSession.GetDatabase. So you could do something like the following:
Set objNotesMailFile = objNotesSession.GetDatabase("XXX-BASE-MAIL-04/CompanyName", dbString)
If Not objNotesMailFile.IsOpen Then
    ' try next server
    ...
End If
EDIT: Just for completeness... There is also an optional third argument you can pass to the GetDatabase method - a boolean - which specifies whether to return a valid object when the database (or server) cannot be opened, or to return Nothing. Passing False as the third argument makes it return Nothing, which you can check for with Is Nothing. Same result, in the end.

You probably want to use something like this:
Dim db As New NotesDatabase( "", "" )
Call db.OpenWithFailover( "XXX-BASE-MAIL-04/CompanyName", dbString )
If the database can't be opened on the specified server but the server belongs to a cluster, OpenWithFailover automatically looks for a replica of the database on the same cluster. If the method finds a replica, that database is opened instead, and the Server property adjusts accordingly.
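Combining the two suggestions, here is a hedged VBA sketch of a fallback loop; the replica server name "XXX-BASE-MAIL-05/CompanyName" is a placeholder for whatever your replication servers are actually called:

```vba
' Sketch only: probe the primary mail server first, then a replica.
' "XXX-BASE-MAIL-05/CompanyName" is a hypothetical replica name.
Dim servers As Variant
Dim i As Integer

servers = Array("XXX-BASE-MAIL-04/CompanyName", _
                "XXX-BASE-MAIL-05/CompanyName")

For i = LBound(servers) To UBound(servers)
    Set objNotesMailFile = objNotesSession.GetDatabase(servers(i), dbString)
    If objNotesMailFile.IsOpen Then Exit For
Next i

If Not objNotesMailFile.IsOpen Then
    ' No server reachable: fail loudly instead of silently dropping mail
    MsgBox "Mail database unavailable on all servers.", vbExclamation
End If
```

The same loop works with the Nothing-returning form of GetDatabase; either way, the point is to test the result before trying to send mail, so a failure is reported rather than swallowed.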


ImapClient.ServerCertificateValidationCallback vs ServicePointManager.ServerCertificateValidationCallback

Can I consider ImapClient.ServerCertificateValidationCallback and ServicePointManager.ServerCertificateValidationCallback to be the same? I mean the same object behind the scenes.
In my scenario, I have to collect URLs/values from message bodies and store them in a DB. The URLs are WebService addresses; the values are parameters to be used with the WebServices.
With all the data collected, I then have to get responses from the WebServices.
For email I HAVE to set ImapClient.ServerCertificateValidationCallback to accept any certificate.
On the other hand for some WebServices I can't bypass certificate validation, so ServicePointManager.ServerCertificateValidationCallback should not be set.
Right now, I'm setting and unsetting each like this:
????.ServerCertificateValidationCallback = Function(s, c, h, k) True
' ...do whatever I need...
????.ServerCertificateValidationCallback = Nothing
This seems fine when working in sequence (mail, then WebService).
But what will happen if one user starts checking mail while another user starts checking URLs? Is there any chance one setting interferes with the other?
MailKit will use the callback that you assign to the ImapClient if non-null, and only fall back to ServicePointManager's callback if none is set on the ImapClient itself.
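A minimal sketch of that advice in VB.NET, assuming MailKit's ImapClient (the host name is a placeholder): set the callback on the client instance instead of on ServicePointManager, so it never leaks into unrelated WebService calls:

```vbnet
Imports MailKit.Net.Imap

Module MailCheck
    Sub CheckMail()
        Using client As New ImapClient()
            ' Scoped to this client only; ServicePointManager stays untouched,
            ' so concurrent WebService calls still validate certificates.
            client.ServerCertificateValidationCallback =
                Function(s, cert, chain, errors) True
            client.Connect("imap.example.com", 993, True) ' placeholder host
            ' ... read messages, collect URLs ...
            client.Disconnect(True)
        End Using
    End Sub
End Module
```

With this shape there is no set/unset dance at all, so concurrent users cannot interfere with each other through the process-wide callback.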

Add date header to incoming email with Sieve

I'm looking for a way to do in Sieve something that I've been doing in Procmail for years, which is to insert an unambiguous date header in incoming messages that makes it clear to me -- independent of buried "received" headers from possibly multiple servers and however my mail client interprets the date the message was sent -- when my server received the message. This is how I did it in Procmail:
# First create the "date_received" variable for my time zone:
date_received=`/bin/date -ud '+2 hour' +'%A %Y-%m-%d %H:%M:%S +0200'`
# Second, insert the header containing the date_received variable:
:0 fh w
| formail -i "X-Local-Date-Received: $date_received"
I found "addheader" (RFC 5293) which will, obviously, add a header, but due to something else I read (sorry, don't remember where) I believe that Sieve won't run the "date" command in the shell due to either a limitation or an intended (and understandable) preference not to run shell commands for security reasons.
Other possibly useful information: I'm doing this through Roundcube 1.3.6, but I have a feeling (also due to something I read) that Roundcube might overwrite a custom Sieve filter set if I edit the raw code within Roundcube. If necessary I'm quite happy to edit or create a Sieve configuration file on the server directly to achieve this for all users on the server, but having run Sendmail and Procmail for years I'm unsure of the best place to do this.
EDIT:
As a test in Roundcube I added this at the top of my Sieve filter set:
require ["fileinto","editheader"];

# rule:[test editheader]
if true
{
    addheader "X-Test-Header" "This is a test header.";
}
I didn't actually add the line "require ["fileinto","editheader"];"; I just added "editheader" to the existing line at the top of the filter set, like so:
require ["copy","fileinto","regex","editheader"];
I expect this to add ...
X-Test-Header: This is a test header.
... to every incoming message, but Roundcube won't let me save it:
An error occurred.
Unable to save filter. Server error occurred.
A search for this error returns one related result, with no solution posted.
I'm not intending to focus on Roundcube, however. Like I said earlier, I'll add this Sieve filter from the command line if necessary.
The Pigeonhole Sieve editheader extension isn't enabled by default. Per its documentation, you need to make sure it's added to the list of Sieve extensions on the server:
plugin {
    # Use editheader
    sieve_extensions = +editheader
}
If you want to run arbitrary scripts with Sieve on Dovecot, as you can with Procmail, you can use its external-programs plugins: configure in Dovecot which external programs users are allowed to run, and users can then invoke them via the "vnd.dovecot.execute" extension. You might be able to port over your Procmail scripts this way.
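As a rough sketch of that setup, assuming Pigeonhole's extprograms plugin (the directory path and the "stamp-date" script name are illustrative):

```sieve
# dovecot.conf (server side) -- enable the extprograms plugin
plugin {
    sieve_plugins = sieve_extprograms
    sieve_extensions = +vnd.dovecot.execute
    sieve_execute_bin_dir = /usr/lib/dovecot/sieve-execute
}

# user's Sieve filter -- runs /usr/lib/dovecot/sieve-execute/stamp-date
require "vnd.dovecot.execute";
execute "stamp-date";
```

Only scripts placed in the configured bin directory can be executed, which is how Dovecot keeps this safer than Procmail's arbitrary shell pipes.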
In the general case, the purpose of Sieve is to let users configure their own mail filtering, while it seems like you're trying to do something globally for the server. Dovecot should add its own Received header when it processes the mail, which is the standard method for marking when a mail system received a message, so it's not clear to me why you're not just using that, or what changes you want to make to its default behavior. What you're looking to do may be better handled in your mail transport agent than in your mail delivery agent.
Here is my sieve script that converts Received to Date:
require "editheader";
require "regex";
require "variables";

if not exists "Date" {
    if header :regex "Received" "^from[[:space:]]+.*[[:space:]]+by[[:space:]]+mail.mydomain.com[[:space:]]+with[[:space:]]+.*[[:space:]]+for[[:space:]]+.*;(.*)$" {
        addheader :last "Date" "${1}";
    }
}
Note that mail.mydomain.com is a stand-in for the actual mail server address, so it only matches the header when the message was received on that specific mail server. This works for me with dovecot-2.3.5.1.
You can use the date extension (RFC 5260):
require "date";
require "editheader";
require "variables";

if currentdate :matches "std11" "*" {
    addheader :last "X-Local-Date-Received" "${1}";
}

How do I send a Diameter message to an IP other than Destination-Host's value in mobicents

In all the Diameter implementations I've seen, messages originating from the server are always sent to the DNS-resolved IP address of whatever is in the Destination-Host AVP. But in commercial servers we see an option to configure a DRA or a DEA, which takes in all the messages and routes them.
Thus, with the Mobicents Diameter stack, this approach is sometimes hard to achieve. I can always re-configure the hosts file so that messages end up at a DRA/DEA, but it's a pain. I see no option to send these messages to a central Diameter agent that will take care of all the dirty work for me.
The next issue is that if I plan to create such a DRA/DEA, the stack does not accept messages addressed to a different host, where the message's Destination-Host AVP might contain a different hostname than ours (the ultimate destination it needs to reach).
Is there a hack to achieve this without meddling with the internals of the jdiameter code and RA code?
If you change jdiameter's config to something like this:
<Network>
    <Peers>
        <Peer name="aaa://127.0.0.1:21812" attempt_connect="false" rating="1" />
        <Peer name="aaa://CUSTOM_HOST:4545" attempt_connect="false" rating="1" />
    </Peers>
    <Realms>
        <Realm name="custom.realm" peers="CUSTOM_HOST" local_action="LOCAL" dynamic="false" exp_time="1">
            <ApplicationID>
            ...
            </ApplicationID>
        </Realm>
    </Realms>
</Network>
In your SBB, you'll then need to create a client session, providing your custom realm, using this method:
DiameterCCAResourceAdaptor.CreditControlProviderImpl.createClientSession(DiameterIdentity destinationHost, DiameterIdentity destinationRealm)
Example:
ccaRaSbb.createClientSession(null, "custom.realm")
where ccaRaSbb is a CreditControlProvider instance (the resource adaptor interface).
Finally, when creating your CCR, the method CreditControlClientSession.createCreditControlRequest() will use the session's realm to find an available peer among those previously configured.
Let me know if this makes sense to you.
Posting the method I used to solve this problem.
As it turns out, it's not possible out of the box to send a Diameter message to a peer that is not configured in the stack's jdiameter-config.xml file.
For me, altering the stack was not feasible either, so I devised a workaround in cooperation with the DRA we have (most DRAs should be able to handle this method):
1. I added two custom AVPs to the outgoing request, namely Ultimate-Destination-Host and Ultimate-Destination-Realm.
2. In the DRA, I asked the admin to delete my Destination-Host and Destination-Realm AVPs and replace them with the ones created in step 1.
3. Now, whenever I send a packet destined for a Diameter peer outside the configured peers, I target it at the DRA and set these 'Ultimate' destination AVPs.
Ours is an Oracle DSR, which is capable of this AVP manipulation; most commercial ones should be able to handle it. I hope someone looking for an answer to this question finds this useful.

IndyTCP Socket Behaves unpredictably

Why does this code behave unpredictably?
procedure ConnectToShell(ID: Integer; Password: String);
var
  cmd: String;
begin
  if (ID <> Length(ContextList) - 1)
    or (ContextList[ID].Context.Connection = nil) then
    writeln('This user not found')
  else begin
    ContextList[ID].Context.Connection.Socket.WriteLn('AUTH 1.0');
    Path := ContextList[ID].Context.Connection.Socket.ReadLnWait();
    if ContextList[ID].Context.Connection.Socket.ReadLnWait() = 'ADMIT' then
    begin
      ContextList[ID].Context.Connection.Socket.WriteLn(Password);
      if ContextList[ID].Context.Connection.Socket.ReadLnWait() = 'GRANTED' then
      begin
        ActiveU := ID;
        writeln('Access granted');
      end else
        writeln('Access is denied');
    end else
      writeln('Access is denied');
  end;
end;
What it does: this is code from the server program. The server listens for new clients and adds their Context: IdContext to an array of TUser. TUser is a record that contains three fields: HostName, ID and Context.
In this code the program tries to "connect" (authorize) to a client from the array. It takes the ID (the index in the array) and sends the command "AUTH 1.0"; after this it waits for the Path (the path to a folder). After that, the client must send the word "ADMIT". Then the server sends a password; the client checks it, and if all is good it must send "GRANTED".
Instead of a client, I use PuTTY in raw mode. PuTTY gets "AUTH 1.0", and I type:
C:\
ADMIT
And here I have a problem. At this point the server doesn't send the password; it waits for I don't know what... But if I send "ADMIT" repeatedly, the server eventually sends me the password. With "GRANTED" it's the same story.
if (ID <> Length(ContextList) - 1)
This is true for every client except a single one: the last one registered.
If you have 100 clients, only client #99 would be allowed through; the rest would be denied.
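For illustration, a hedged Pascal sketch of the guard the code presumably intended: a range check rather than an equality check against the last index (short-circuit evaluation makes the later dereferences safe):

```pascal
// Accept any registered client whose context is still connected,
// instead of only the last one added to the array.
if (ID < 0) or (ID > High(ContextList))
  or (ContextList[ID].Context = nil)
  or (ContextList[ID].Context.Connection = nil) then
  writeln('This user not found')
else begin
  // ... proceed with the AUTH exchange ...
end;
```

This only fixes the indexing bug; the protocol and security issues raised below still stand.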
This is code from server program.
Is it? Then where is the code from the client?
It listens for new clients, and add their "Context: IdContext" to array of TUser
No, it does not - there is not a single line here that modifies the ContextList array.
Basically, what you do seems to be "broken by design"; there are many errors here...
Why does the server send the password to the client and not the client to the server? What are you trying to achieve? It is normally the server that shares services/resources with clients, so it is the server that checks passwords, not the client. What is the overall scheme of your software? What task are you trying to solve, and how? Your scheme just does not seem to make sense, so it is hard to find the mistakes in it. When you ride in a car, you can only check that the route has no mistakes if you know where you are going. Here we can only see a very weird route; we do not know your destination, so we can only guess and point at common knowledge.
Passwords should not be passed over the network in plain text; that just waits for them to be intercepted by any TCP sniffer and abused.
Passwords are to be known either by the server or by the client. The side that checks the password should not know it.
One day a rogue client will send ID < 0 and crash your server when it tries to read data outside the array.
One day a rogue client will send you data one letter per 10 seconds and never send an end-of-line. Your server would then be locked forever inside Connection.Socket.ReadLnWait() - your system frozen by the most simplistic DoS attack ever.
And that is only from the first glance.
Sorry to say, I feel (though I can only guess - no one knows what you are even trying to achieve) that this code is so broken it would be better dumped and redesigned from scratch. It is just a gut feeling; I may be wrong.
procedure ConnectToShell
This is code from server program
Well, well, if this is not an attempt to write a virus that would give the control server full access (a "shell") to the infected clients, then I wonder what it is...

GWT RequestFactory: Send changes twice

I need your help with the GWT RequestFactory.
Consider the following scenario:
I get an existing entity (let's say an invoice) from the server:
InvoiceEntityProxy invoice = request1.getInvoice();
I want to make some changes, so I edit it with a new request:
InvoiceEntityProxy editableInvoice = request2.edit(invoice);
//make some changes to editableInvoice
Now I send the changes made with the second request to the server, to create a preview:
request2.createPreview(editableInvoice);
When the request is sent, the invoice proxy is frozen, so I re-enable editing by assigning the proxy to a new request:
editableInvoice = request3.edit(editableInvoice);
If everything is okay, I want to update the proxy and send it to the server using the latest request:
request3.update(editableInvoice);
But the changes never arrive at the server, because the latest request (request3) doesn't know anything about the changes made to the proxy while it was assigned to request2.
I thought about the following solutions:
I could redo the changes on the latest proxy. But for that, I'd have to iterate over all attributes and set them again (not a very friendly solution, because I'd have to adjust the method each time I add an attribute to the proxy).
Another approach would be to send the proxy without an id to the server and pass the id as a second parameter of the update method. But that would be a shame, because then more than just the deltas would be sent to the server (and sending only the deltas is one of the great features of RequestFactory).
So what is the best and most common practice for letting request3 know about the changes already made to the proxy while it was assigned to another request?
You simply forgot to call fire(). Example:
request2.createPreview(editableInvoice).fire();
Bear in mind that if a request depends on the result of the previous one, you should put your code in the onSuccess method, because requests are asynchronous.
It's also possible to append multiple method invocations to the same request.
EDIT
It is important to use the same request for the edit and fire operations. So replace this line
request.update(editableInvoice);
with
request3.update(editableInvoice);
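Sketched end to end (the context type MyRequestContext and the factory method myContext() are illustrative stand-ins for whatever the application defines), the edit/fire/re-edit cycle looks roughly like this:

```java
// Hedged sketch: chain the second request inside onSuccess, and always
// edit the proxy in the same context you fire it from.
MyRequestContext request2 = requestFactory.myContext();
final InvoiceEntityProxy editable = request2.edit(invoice);
// ... make changes to 'editable' ...
request2.createPreview(editable).fire(new Receiver<Void>() {
    @Override
    public void onSuccess(Void response) {
        // The proxy was frozen when request2 fired; re-edit it
        // into a fresh context before the update call.
        MyRequestContext request3 = requestFactory.myContext();
        InvoiceEntityProxy again = request3.edit(editable);
        // ... apply any further changes to 'again' ...
        request3.update(again).fire();
    }
});
```

The key point is that update() is invoked on the same context whose edit() produced the proxy instance being sent.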
Nice! I found the solution to my problem.
I still have an instance of the original proxy, because the edit() method of the context always returns a new instance of the proxy. So I save the original proxy before sending any request.
After each successful request, I re-enable editing of the proxy by calling the edit method again:
editableInvoice = request3.edit(editableInvoice);
Now the crux:
I can set the original proxy of a proxy, which is what is used to determine whether it changed and what changed. This is done by getting the AutoBean and setting the PARENT_OBJECT tag like this:
AutoBean<InvoiceEntityProxy> editableInvoiceBean = AutoBeanUtils.getAutoBean(editableInvoice);
AutoBean<InvoiceEntityProxy> originalInvoiceBean = AutoBeanUtils.getAutoBean(originalInvoice);
editableInvoiceBean.setTag(Constants.PARENT_OBJECT, originalInvoiceBean);
On the next request, all changed properties are sent to the server again.
Thank you for your help, and thank you for the hint with the AutoBean, @Zied Hamdi
You can also use AutoBeans to duplicate the object before you start changing it. You can keep the original untouched, then request.edit() it and apply the changes (introspection-like) from the "dirty" object.
You'll maybe have to do some research on how to handle EntityProxies, since they are "special AutoBeans": I had to use special utility objects (available in GWT) to serialize them to JSON, so there might be some special handling in doing a deep copy too.
There may also be an issue with GWT keeping only one version of each EntityProxy (I never checked whether that is global or only within the context of a request).