Perl send() UDP packet to multiple active clients

I am implementing a Perl-based UDP server/client model. The socket functions recv() and send() are used for server/client communication. It seems that send() takes the return address from the recv() call, so I can only get the server to respond to the client that sent the initial request. However, I want the server to send the data to all active clients instead of only the source client. If the peer_address and peer_port of each active client are known, how could I use Perl's send() to route packets to the specific clients?
Some attempts:
foreach (@active_clients) {
    # $_->{peer_socket}->send($data);
    # $socket->send($_->{peer_socket}, $data, 0, $_->{peer_addr});
    # send($_->{peer_socket}, $data, 0, $_->{peer_addr});
    $socket->send($data);  # Echoes back to the source client
}
Perldoc send briefly describes the parameters. I have attempted to adapt this, and the result is either 1) the client simply receives the socket glob object, 2) the packet is never received, or 3) an error is thrown. Any examples of how to use send() to route to known clients would be much appreciated.

Yes, just store the addresses somewhere and reply to them:
# receiving some packets
while (my $a = recv($sock, $buf, 8192, 0)) {
    print "Received $buf\n";
    push @addrs, $a;
    # exit at some point
}

# send the answers
for my $a (@addrs) {
    send($sock, "OK", 0, $a);
}
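To make the pattern concrete, here is a self-contained loopback sketch (all addresses, ports and payloads are invented for illustration): two UDP clients ping the server, the server records each packed peer address that recv() returns, and then answers every recorded address with the 4-argument send().

```perl
use strict;
use warnings;
use IO::Socket::INET;
use Socket qw( pack_sockaddr_in inet_aton );

# Server on an OS-chosen loopback port.
my $server = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1',
    LocalPort => 0,              # let the OS pick a free port
    Proto     => 'udp',
) or die "server: $!";

# Two throwaway UDP clients; send() will auto-bind them.
my @clients = map {
    IO::Socket::INET->new(Proto => 'udp') or die "client: $!"
} 1 .. 2;

my $dest = pack_sockaddr_in($server->sockport, inet_aton('127.0.0.1'));
for my $c (@clients) {
    send($c, "ping", 0, $dest) or die "send: $!";
}

# The server remembers every sender's packed address.
my @addrs;
for (1 .. @clients) {
    my $from = recv($server, my $buf, 8192, 0) // die "recv: $!";
    push @addrs, $from;          # who to reply to
}

# Reply to every known client, not just the last sender.
for my $a (@addrs) {
    send($server, "pong", 0, $a) or die "send: $!";
}

my @replies;
for my $c (@clients) {
    recv($c, my $reply, 8192, 0) // die "recv: $!";
    push @replies, $reply;
}
print "replies: @replies\n";
```

The key point is that the destination passed as the fourth argument of send() is exactly the packed sockaddr that recv() returned earlier; no per-client socket object is needed.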


can't detect closed connection with $sock->connected() [duplicate]

I'm trying, and failing, to get a Perl server to detect a client that has broken the connection and drop it. Everywhere I looked, the suggested method is to use the socket's ->connected() method, but in my case it fails.
This is the server absolutely minimized:
#!/usr/bin/perl
use IO::Socket;
STDOUT->autoflush(1);
my $server = IO::Socket::INET->new(
    Listen    => 7,
    Reuse     => 1,
    LocalAddr => '192.168.0.29',
    LocalPort => '11501',
    Proto     => 'tcp',
);
die "Could not create socket: $!\n" unless $server;
print "Waiting for clients\n";
while ($client = $server->accept()) {
    print "Client connected\n";
    do {
        $client->recv($received, 1024);
        print $received;
        select(undef, undef, undef, 0.1); # wait 0.1s before the next read, not to spam the console if recv returns immediately
        print ".";
    } while ( $client->connected() );
    print "Client disconnected\n";
}
I connect to the server with Netcat and everything works fine; the server receives anything I send. But when I press Ctrl-C to interrupt Netcat, recv no longer waits, yet $client->connected() still returns a true-like value, and the main loop never goes back to waiting for the next client.
(Note: the example above has been absolutely minimized to show the problem, but in the complete program the socket is set to non-blocking, so I believe I can't trivially depend on recv returning an empty string. Unless I'm wrong?)
connected can't be used to reliably learn whether the peer has initiated a shutdown. It's mentioned almost word for word in the documentation:
Note that this method considers a half-open TCP socket to be "in a connected state". [...] Thus, in general, it cannot be used to reliably learn whether the peer has initiated a graceful shutdown because in most cases (see below) the local TCP state machine remains in CLOSE-WAIT until the local application calls "shutdown" in IO::Socket or close. Only at that point does this function return undef.
(Emphasis mine.)
If the other end has disconnected, recv will succeed but place zero bytes in the buffer. So just check what comes back from recv.
while (1) {
    my $rv = $client->recv(my $received, 64*1024);
    die($!) if !defined($rv);  # An error occurred when not defined.
    last if $received eq "";   # EOF reached when zero bytes.
    print($received);
}
Additional bug fix: The above now calls recv before print.
Additional bug fix: Removed the useless sleep. recv will block if there's nothing received.
Performance fix: No reason to ask for just 1024 bytes. If there's any data available, it will be returned. So you might as well ask for more to cut down on the number of calls to recv and print.
Note that even with this solution, an ungraceful disconnection (power outage, network drop, etc) will go undetected. One could use a timeout or a heartbeat mechanism to solve that.
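As a self-contained illustration of this check (the listener, port and payload here are invented for the demo), the following forks a client that sends one message and disconnects gracefully; the server loop exits as soon as recv() fills the buffer with zero bytes:

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Listener on an OS-chosen loopback port.
my $server = IO::Socket::INET->new(
    Listen    => 1,
    LocalAddr => '127.0.0.1',
    LocalPort => 0,
    Proto     => 'tcp',
) or die "listen: $!";
my $port = $server->sockport;

my $pid = fork() // die "fork: $!";
if ($pid == 0) {                 # child: act as the client
    my $c = IO::Socket::INET->new(
        PeerAddr => '127.0.0.1',
        PeerPort => $port,
        Proto    => 'tcp',
    ) or die "connect: $!";
    $c->send("hello");
    close($c);                   # graceful shutdown: server will see EOF
    exit 0;
}

my $client = $server->accept or die "accept: $!";
my $got = '';
while (1) {
    my $rv = $client->recv(my $buf, 64 * 1024);
    die($!) if !defined($rv);    # undef means a real error
    last if $buf eq "";          # zero bytes means the peer closed
    $got .= $buf;
}
waitpid($pid, 0);
print "received: $got\n";
```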

How to check the return value of the recv() and send() using TCP protocol in perl?

I wrote a sample client and server program using the TCP protocol in Perl.
So I used send() and recv() function.
In that I tried to get the return value of recv() function.
The sample code is,
my $return_value;
$return_value = recv($client_socket, $data, 50, 0);
print "return value is: $return_value\n";
When I execute my code, it doesn't print any value.
For example:
return value is:
After that I read the link below, but I still didn't get any return value.
return value of recv() function in perl
How could I solve this?
Thanks.
It's probably undef, indicating an error occurred.
use Socket qw( unpack_sockaddr_in );

defined($return_value)
    or die("Can't recv: $!\n");

my ($peer_port, $peer_addr) = unpack_sockaddr_in($return_value);
You already know the address and port of the peer with which you are communicating, so this isn't very useful. This brings up the question of why you are using recv (instead of sysread) with a connected streaming protocol like TCP. Do you realize you will sometimes get less than the 50 characters you requested?
Always use use strict; use warnings 'all';! If $return_value is undef, "return value is: $return_value\n" would have warned.
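For completeness, a minimal loopback sketch (sockets and payload invented for the demo) showing what the return value of recv actually holds, namely the packed peer address, and how unpack_sockaddr_in splits it:

```perl
use strict;
use warnings;
use IO::Socket::INET;
use Socket qw( pack_sockaddr_in unpack_sockaddr_in inet_aton inet_ntoa );

# A UDP pair on loopback, each on an OS-chosen port.
my $server = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1', LocalPort => 0, Proto => 'udp',
) or die "server: $!";
my $client = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1', LocalPort => 0, Proto => 'udp',
) or die "client: $!";

my $dest = pack_sockaddr_in($server->sockport, inet_aton('127.0.0.1'));
send($client, "hi", 0, $dest) or die "send: $!";

# recv returns the packed sockaddr of the sender on success.
my $from = recv($server, my $data, 50, 0);
defined($from) or die "Can't recv: $!";

my ($peer_port, $peer_addr) = unpack_sockaddr_in($from);
printf "got '%s' from %s:%d\n", $data, inet_ntoa($peer_addr), $peer_port;
```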

Why is IO::Socket::Socks unreliable when creating a proxy chain?

I want to use this module to create a connection through multiple SOCKS proxies, but it is very unreliable and slow compared to an external solution such as proxychains. After successfully connecting through the chain of proxies, the while loop begins sending data through it. However, the program crashes with the error "Out of memory". Does anyone have advice on how to fix the code? Any help would be appreciated.
while ($sock) {
    $sock->syswrite("GET / HTTP/1.1\r\n");
}
When I look at the Wireshark capture, the initial request is sent without any problems, but the following requests have duplicate HTTP requests in each packet. See: Wireshark capture
Edit: I tried using IO::Async, but the requests aren't sent.
my $loop = IO::Async::Loop->new;

my $handle = IO::Async::Handle->new(
    write_handle   => $sock,
    on_write_ready => sub {
        my $request = "GET / HTTP/1.1\r\n";
        print $sock $request;
    },
);

$loop->add( $handle );
$loop->loop_forever;
Edit: I was able to fix the script by establishing a new chain for each request I send. Is it possible to send a CONNECT request through the socks chain to establish a new connection each time I send a http request?

Perl Net::SSLeay check if socket is available to read

I am using a Perl module called Net::APNS::Persistent. It helps me open a persistent connection to Apple's APNS server and send push notifications through APNS. This module uses Net::SSLeay for SSL communication with the APNS server.
Now, I want to read from my socket periodically to check if APNS sends back any response. Net::APNS::Persistent already has a function called _read() which looks like below:
sub _read {
    my $self = shift;
    my ($socket, $ctx, $ssl) = @{$self->_connection};
    my $data = Net::SSLeay::ssl_read_all( $ssl );
    die_if_ssl_error("error reading from ssl connection: $!");
    return $data;
}
However, this function works only after APNS drops the connection and I get an error while trying to write. At other times my script gets stuck at
my $data = Net::SSLeay::ssl_read_all( $ssl );
I checked the Net::SSLeay docs and found it has a method called peek:
Copies $max bytes from the specified $ssl into the returned value. In contrast to the Net::SSLeay::read() function, the data in the SSL buffer is unmodified after the SSL_peek() operation.
I thought it might be useful, so I added another function to the Net::APNS::Persistent module:
sub ssl_peek {
    my $self = shift;
    my ($socket, $ctx, $ssl) = @{$self->_connection};
    print "Peeking \n";
    my $data = Net::SSLeay::peek( $ssl, $pending );
    print "Done peeking \n";
    return $data;
}
Unfortunately this also gave me the same problem. It only prints "Peeking" and never reaches the line where it would print "Done peeking". I had the same problem using Net::SSLeay::read. Is there a way to check whether the socket can be read, or maybe to set a read timeout, so that my script doesn't get stuck while trying to read from the socket?
The APNS documentation says the following:
If you send a notification that is accepted by APNs, nothing is returned.
If you send a notification that is malformed or otherwise unintelligible, APNs returns an error-response packet and closes the connection. Any notifications that you sent after the malformed notification using the same connection are discarded, and must be resent.
As long as your notifications as accepted, there won't be any data to read and thus a read operation on the socket will block. The only time there's data available is when there's an error, and then the connection is immediately closed. That should explain the behaviour you're observing.
To check whether the underlying socket can be read, use select, i.e.
IO::Select->new(fileno($socket))->can_read($timeout);
$timeout can be 0 to just check and not wait, a number of seconds, or undef to wait forever. But before you do the select, check whether data is still pending in the SSL buffer:
if (Net::SSLeay::pending($ssl)) { ... use SSL_peek or SSL_read ... }
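Here is a minimal demonstration of the can_read pattern using a plain TCP loopback pair instead of an SSL connection (socket names invented for the demo); with Net::SSLeay you would additionally check Net::SSLeay::pending($ssl) before selecting, as noted above:

```perl
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;

# A connected TCP pair on loopback.
my $server = IO::Socket::INET->new(
    Listen => 1, LocalAddr => '127.0.0.1', LocalPort => 0, Proto => 'tcp',
) or die "listen: $!";
my $sock = IO::Socket::INET->new(
    PeerAddr => '127.0.0.1', PeerPort => $server->sockport, Proto => 'tcp',
) or die "connect: $!";
my $peer = $server->accept or die "accept: $!";

my $sel = IO::Select->new(fileno($sock));

# Nothing written yet: a zero-second check reports "not readable"
# instead of blocking.
my $before = $sel->can_read(0) ? 1 : 0;

$peer->send("data");

# Now the socket becomes readable well within the timeout.
my $after = $sel->can_read(5) ? 1 : 0;

print "readable before=$before after=$after\n";
```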
Apart from that, it does look like the module you are using does not even attempt to validate the server's certificate :(

Sending Multiple Payloads Over Socket in Perl

Edit: the problem is with IIS, not with the Perl code I'm using. Someone else was talking about the same problem here: https://stackoverflow.com/a/491445/1179075
Long-time reader here, first time posting.
So I'm working on some existing code in Perl that does the following:
1. Create socket
2. Send some data
3. Close socket
4. Loop back to 1 until all data is sent
To avoid the overhead of creating and closing sockets all the time, I decided to do this:
1. Create socket
2. Send some data
3. Loop back to 2 until all data is sent
4. Close socket
The thing is, only the first payload is being sent; all subsequent ones are ignored. I'm sending this data to a .NET web service, and IIS isn't receiving the data at all. Somehow the socket is being closed, and I have no further clue why.
Here's the script I'm using to test my new changes:
use IO::Socket;
my $sock = IO::Socket::INET->new(
    PeerAddr => $hostname,
    PeerPort => 80,
    Proto    => "tcp",
    Timeout  => "1000",
) || die "Failure: $!";

while (1) {
    my $sent = $sock->send($basic_http_ping_message);
    print "$sent\n";
    sleep(1);
}

close($sock);
So this doesn't work - IIS only receives the very first ping. If I move $sock's assignment and closing into the loop, however, IIS correctly receives every single ping.
Am I just using sockets incorrectly here, or is there some arcane setting in IIS that I need to change?
Thanks!
I think your problem is buffering. Turn off buffering on the socket, or flush it after each write (closing the socket has the side-effect of flushing it).
What output are you getting? You have a print after the send(); if send() fails, it will return undef. You can print the error like this:
my $sent = $sock->send($msg);
die "Failed send: $!\n" unless defined $sent;
print "Sent $sent bytes\n";
My own guess is that the service that you're connecting to is closing the connection, which is why only one gets through, and also why creating a new connection each time would work.
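To support that guess, here is a small loopback sketch (sockets and payloads invented for the demo) showing that several send() calls on one TCP socket all arrive at the peer, so a one-request-only symptom points at the server's handling of the HTTP stream rather than at the socket layer:

```perl
use strict;
use warnings;
use IO::Socket::INET;

# A connected TCP pair on loopback standing in for client and server.
my $server = IO::Socket::INET->new(
    Listen => 1, LocalAddr => '127.0.0.1', LocalPort => 0, Proto => 'tcp',
) or die "listen: $!";
my $sock = IO::Socket::INET->new(
    PeerAddr => '127.0.0.1', PeerPort => $server->sockport, Proto => 'tcp',
) or die "connect: $!";
my $peer = $server->accept or die "accept: $!";

# Three payloads on the same socket, checking each send's return value.
for my $n (1 .. 3) {
    my $sent = $sock->send("ping $n\n");
    die "Failed send: $!\n" unless defined $sent;
}
$sock->shutdown(1);   # done writing; the peer will see EOF

# The peer reads everything until EOF.
my $got = '';
while ($peer->sysread(my $buf, 64 * 1024)) {
    $got .= $buf;
}
print $got;
```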