Indy TCP socket behaves unpredictably

Why does this code behave unpredictably?
procedure ConnectToShell(ID: Integer; Password: String);
var
  cmd: String;
begin
  if (ID <> Length(ContextList) - 1)
    or (ContextList[ID].Context.Connection = nil) then
    writeln('This user not found')
  else begin
    ContextList[ID].Context.Connection.Socket.WriteLn('AUTH 1.0');
    Path := ContextList[ID].Context.Connection.Socket.ReadLnWait();
    if ContextList[ID].Context.Connection.Socket.ReadLnWait() = 'ADMIT' then
    begin
      ContextList[ID].Context.Connection.Socket.WriteLn(Password);
      if ContextList[ID].Context.Connection.Socket.ReadLnWait() = 'GRANTED' then
      begin
        ActiveU := ID;
        writeln('Access granted');
      end else
        writeln('Access is denied');
    end else
      writeln('Access is denied');
  end;
end;
What it does: this is code from a server program. The server listens for new clients and adds their "Context: IdContext" to an array of TUser. TUser is a record that contains three fields: HostName, ID and Context.
In this code the program tries to "connect to (authorize with)" a client from the array. It takes an ID (an index into the array) and sends the command "AUTH 1.0", then waits for Path (a path to a folder). After that the client must send the word "ADMIT". Then the server sends a password, the client checks it, and if all is well it must send "GRANTED".
Instead of a real client, I use PuTTY in Raw mode. PuTTY receives "AUTH 1.0", and I type:
C:\
ADMIT
And here I have a problem: at this point the server doesn't send the password; it waits for I don't know what... But if I send "ADMIT" repeatedly, the server eventually does send me the password. With "GRANTED" it is the same story.

if (ID <> Length(ContextList) - 1)
This is true for every client except a single one, the last one registered.
If you have 100 clients, only client #99 of them all would be allowed through; all the rest would be denied.
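What was presumably intended is a bounds check rather than an equality test; a minimal sketch (assuming, as the question says, that ContextList is a dynamic array of TUser records whose Context field may be nil):
// Reject IDs outside the array instead of rejecting everything
// but the last registered slot.
if (ID < 0) or (ID > High(ContextList))
  or (ContextList[ID].Context = nil)
  or (ContextList[ID].Context.Connection = nil) then
  writeln('This user not found')
else begin
  // proceed with the AUTH handshake as before
end;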
This is code from server program.
Is it? Then where is the code for the client?
It listens for new clients, and add their "Context: IdContext" to array of TUser
No, it does not: there is not a single line here that modifies the ContextList array.
Basically, what you are doing seems to be "broken by design"; there are so many errors here...
Why does the server send a password to the client, and not the client to the server? What are you trying to achieve? Normally it is the server that shares services/resources with clients, so it is the server that checks passwords, not the client. What is the overall scheme of your software? What task are you trying to solve, and how? Your scheme just does not seem to make sense, so it is hard to find the mistakes in it. When you ride in a car, you can only check that the route has no mistakes if you know where you are going. We can see a very weird route, but we do not know your destination, so we can only guess and point out common knowledge.
Passwords should not be passed over the network in the clear; that just waits for them to be intercepted by any TCP sniffer and abused.
A password should be known either by the server or by the client, not both; the side that checks the password should not know it in the clear (it should store only a hash, or use a challenge-response scheme).
One day a rogue client will send an ID < 0 and crash your server when it tries to read data outside the array.
One day a rogue client will send you data one letter every 10 seconds and never send an end-of-line. Your server would then be locked FOREVER inside Connection.Socket.ReadLnWait(); your system is frozen by the most simplistic DoS attack ever.
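A minimal sketch of a guarded read, assuming Indy 10's TIdIOHandler and a local String variable named Line (the 5-second timeout and 1024-byte line cap are arbitrary choices):
// Give up after 5 seconds and cap the line length, so a slow or
// malicious client cannot park this thread forever.
Line := ContextList[ID].Context.Connection.Socket.ReadLn(LF, 5000, 1024);
if ContextList[ID].Context.Connection.Socket.ReadLnTimedOut then
begin
  writeln('Client did not answer in time');
  Exit;
end;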
And that is only at first glance.
Sorry to say, I feel (though I can only guess, since no one knows what you are even trying to achieve) that this code is so broken that it would be better dumped and designed from scratch. That is just a gut feeling; I may be wrong.
procedure ConnectToShell
This is code from server program
Well, well, if this is not an attempt to write a virus that would give the control server full access (a "shell") to the infected clients, then I wonder what it is...

Related

ImapClient.ServerCertificateValidationCallback vs ServicePointManager.ServerCertificateValidationCallback

Can I consider ImapClient.ServerCertificateValidationCallback and ServicePointManager.ServerCertificateValidationCallback the same? I mean the same object behind the scenes.
In my scenario, I have to collect URLs/values from message bodies and store them in a DB; the URLs are web service addresses, and the values are parameters to be used with the web services.
With all the data collected, I have to get responses from the web services.
For email I HAVE to set ImapClient.ServerCertificateValidationCallback to accept any certificate.
On the other hand, for some web services I can't bypass certificate validation, so ServicePointManager.ServerCertificateValidationCallback should not be set.
Right now, I'm setting and unsetting each like
????.ServerCertificateValidationCallback = Function(s, c, h, k) True
...do whatever I need...
????.ServerCertificateValidationCallback = Nothing
This seems fine when working in sequence (mail, then web service).
But what will happen if one user starts checking mail while another user starts checking URLs? Is there any chance one setting interferes with the other?
MailKit will use the callback that you assign to the ImapClient if non-null, and only fall back to ServicePointManager's callback if none is set on the ImapClient itself.
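So a safer pattern than toggling the global callback is to set the relaxed callback per ImapClient instance and leave ServicePointManager alone. A minimal sketch (the host name and credentials are placeholders):
Imports MailKit.Net.Imap

' The relaxed callback lives only on this client instance, so
' concurrent web service calls keep full certificate validation.
Using client As New ImapClient()
    client.ServerCertificateValidationCallback = Function(s, c, h, e) True
    client.Connect("imap.example.com", 993, True)
    client.Authenticate("user", "password")
    ' ... fetch messages and collect the URLs/values here ...
    client.Disconnect(True)
End Using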

Is it possible to temporarily disable exception dialogues being shown to the user

I'm using GStack.ResolveHost from IdStack to get an IP address from a hostname,
so I can ping the IP address and check that I have a connection before doing more involved tasks like getting internet time or connecting to a mail host.
So I'm trying to code this in a way that can cope with loss of connection without throwing error messages at the user. The next part, IcmpSendEcho, copes fine with no connection; it's this part, getting the IP address, that I'm not happy with.
Even using try/except, the error dialog is still shown to the user at runtime, both in the IDE and from the .exe.
Given that I can't programmatically guard against things like network failure, is there a way to stop the 'Host error, socket not found' dialog being shown to the user (and possibly filling the screen with dialogs in unattended programs)?
Or am I barking up the wrong tree?
I tried {$WARNINGS OFF} and {$WARNINGS ON}, but they don't apply to 'Host error - socket not found'.
I fear someone will say I'm making a mistake using the built-in Indy methods and that I should use the JEDI Code Library instead...?
function TInternetTools.HostNameToIPAddr(const AHostName: String; var AIPAddress: string): Boolean;
begin
  TIdStack.IncUsage;
  try
    try
      AIPAddress := GStack.ResolveHost(AHostName);
      Result := True;
    except
      // swallow resolution failures; the caller checks the Boolean result
      Result := False;
    end;
  finally
    // release the stack reference even if something unexpected escapes
    TIdStack.DecUsage;
  end;
end;

How to implement Socket.PollAsync in C#

Is it possible to implement the equivalent of Socket.Poll in async/await paradigm (or BeginXXX/EndXXX async pattern)?
A method which would act like NetworkStream.ReadAsync or Socket.BeginReceive but:
leave the data in the socket buffer
complete after the specified interval of time if no data arrived (leaving the socket in connected state so that the polling operation can be retried)
I need to implement IMAP IDLE, so the client connects to the mail server and then goes into a waiting state in which it receives data from the server. If the server does not send anything within 10 minutes, the code sends a ping to the server (without reconnecting; the connection is never closed) and starts waiting for data again.
In my tests, leaving the data in the buffer seems to be possible if I tell Socket.BeginReceive method to read no more than 0 bytes, e.g.:
sock.BeginReceive(b, 0, 0, SocketFlags.None, null, null)
However, I'm not sure whether it will work in all cases; maybe I'm missing something. For instance, if the remote server closes the connection, it may send a zero-byte packet, and I'm not sure whether Socket.BeginReceive will act identically to Socket.Poll in that case.
And the main problem is how to stop socket.BeginReceive without closing the socket.
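One way to approximate Socket.Poll in the async world, without parking a BeginReceive that cannot be cancelled, is to check Socket.Poll periodically between awaited delays. A minimal sketch (the PollAsync name and the 250 ms granularity are my own choices, not an existing API):
using System;
using System.Net.Sockets;
using System.Threading;
using System.Threading.Tasks;

static class SocketPollExtensions
{
    // Completes with true as soon as a Receive would not block (data is
    // pending, or the peer closed the connection); completes with false
    // when the timeout elapses. Nothing is consumed from the socket
    // buffer, and the socket stays connected.
    public static async Task<bool> PollAsync(this Socket socket, TimeSpan timeout,
        CancellationToken ct = default(CancellationToken))
    {
        var deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            ct.ThrowIfCancellationRequested();
            if (socket.Poll(0, SelectMode.SelectRead))
                return true;
            await Task.Delay(250, ct).ConfigureAwait(false); // polling granularity
        }
        return false;
    }
}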

perlipc - Interactive Client with IO::Socket - why does it fork?

I'm reading the perlipc perldoc and was confused by the section entitled "Interactive Client with IO::Socket". It shows a client program that connects with some server and sends a message, receives a response, sends another message, receives a response, ad infinitum. The author, Tom Christiansen, states that writing the client as a single-process program would be "much harder", and proceeds to show an implementation that forks a child process dedicated to reading STDIN and sending to the server, while the parent process reads from the server and writes to STDOUT.
I understand how this works, but I don't understand why it wouldn't be much simpler (rather than harder) to write it as a single-process program:
while (1) {
    read from STDIN
    write to server
    read from server
    write to STDOUT
}
Maybe I'm missing the point, but it seems to me this is a bad example. Would you ever really design a client/server application protocol where the server might suddenly think of something else to say, interjecting characters onto the terminal while the client is in the middle of typing the next query?
UPDATE 1: I understand that the example permits asynchronicity; what I'm puzzled about is why concurrent I/O between a CLI client and a server would ever be desirable (due to the jumbling of input and output of text on the terminal). I can't think of any CLI app - client/server or not - that does that.
UPDATE 2: Oh!! Duh... my solution only works if there's exactly one line sent from the server for every line sent by the client. If the server can send an unknown number of lines in response, I'd have to sit in a "read from server" loop - which would never end, unless my protocol defined some special "end of response" token. By handling the sending and receiving in separate processes, you leave it up to the user at the terminal to detect "end of response".
(I wonder whether it's the client, or the server, that typically generates a command prompt? I'd always assumed it was the client, but now I'm thinking it makes more sense for it to be the server.)
Because the <STDIN> read request can block, doing the same thing in a single process requires more complicated, asynchronous handling of the input/output functions:
while (1) {
    if there is data in STDIN
        read from STDIN
        write to server
    if there is data from server
        read from server
        write to STDOUT
}
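A minimal single-process sketch of that idea, using IO::Select and sysread (the host and port are placeholders):
use strict;
use warnings;
use IO::Select;
use IO::Socket::INET;

my $sock = IO::Socket::INET->new(PeerAddr => 'localhost', PeerPort => 12345)
    or die "connect: $!";

# Watch both handles; service whichever becomes readable first, so
# neither side can block the other.
my $sel = IO::Select->new($sock, \*STDIN);
OUTER: while (my @ready = $sel->can_read) {
    for my $fh (@ready) {
        my $buf;
        my $n = sysread($fh, $buf, 4096);
        last OUTER unless $n;          # EOF on either handle: we are done
        if ($fh == $sock) {
            print STDOUT $buf;         # server -> terminal
        } else {
            print {$sock} $buf;        # keyboard -> server
        }
    }
}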

Detect and switch Domino servers from within VBA

We are having issues with our mail server, which has highlighted a weakness in a system that I set up a couple of years ago to email departments on completion of reports.
The code that currently sets up the mail server is hardcoded as
Set objNotesMailFile = objNotesSession.GETDATABASE("XXX-BASE-MAIL-04/CompanyName", dbString)
The problem we're having is that the 04 server is flaky at best at the moment, and everyone is being routed through one of the replication servers when it falls over. That's not too much of a problem for the desktop Notes clients, as they handle this, but the application is simply failing to get any mail out, and is doing so without giving any failure notification.
Is there a way I can test for the presence of an available database on the main server, and if not, fall back on one of the replication servers?
The NotesDatabase object has a boolean property, IsOpen, which can be used to check whether a database was successfully opened after a call to NotesSession.GetDatabase. So you could do something like the following:
Set objNotesMailFile = objNotesSession.GETDATABASE("XXX-BASE-MAIL-04/CompanyName", dbString)
If Not (objNotesMailFile.IsOpen) Then
    ' try next server
    ...
End If
EDIT: Just for completeness... There is also an optional third boolean argument you can pass to the GetDatabase method, which specifies whether to return a valid object when the database (or server) cannot be opened, or to return a value of NOTHING. Specifying the third argument as FALSE will return NOTHING, which you can check for. Same result, in the end.
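A minimal sketch of that variant with a fallback (the replica server name "XXX-BASE-MAIL-05/CompanyName" is a placeholder; substitute your actual replication server):
' With False as the third argument, GetDatabase returns Nothing
' instead of an unopened object, so we can test for it directly.
Set objNotesMailFile = objNotesSession.GETDATABASE("XXX-BASE-MAIL-04/CompanyName", dbString, False)
If objNotesMailFile Is Nothing Then
    Set objNotesMailFile = objNotesSession.GETDATABASE("XXX-BASE-MAIL-05/CompanyName", dbString, False)
End If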
You probably want to use something like this:
Dim db As New NotesDatabase( "", "" )
Call db.OpenWithFailover( "XXX-BASE-MAIL-04/CompanyName", dbString )
If the database can't be opened on the specified server but the server belongs to a cluster, OpenWithFailover automatically looks for a replica of the specified database on the same cluster. If the method finds a replica, that database is opened instead, and the Server property adjusts accordingly.