why IO::Socket always gets TIMEOUT? - perl

Whenever I try to connect with Perl's IO::Socket, it times out, as in this code:
#!/usr/bin/perl
use v5.26;
use IO::Socket;

my $sock = IO::Socket::INET->new(
    PeerAddr => 'www.google.com',
    PeerPort => 'http(80)',
    Proto    => 'tcp',
) or die $!;
my $line = <$sock>;
say $line;
can anyone tell me why...

In a conversation with a web server, the user agent speaks first. You haven't sent a request, so the server waits for one and eventually gives up. If you want to make web requests, something like Mojo::UserAgent will do most of the work for you.
Also realize that some hosts may be wise to you. It's better to test this sort of thing locally.
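To make the point concrete, here is a minimal, self-contained sketch (using a throwaway loopback server instead of Google, so nothing external is involved): the read only succeeds because the client speaks first.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# Toy listener on an ephemeral loopback port (illustration only).
my $listener = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1',
    LocalPort => 0,              # let the OS pick a free port
    Proto     => 'tcp',
    Listen    => 1,
) or die "listen: $!";

my $pid = fork;
die "fork: $!" unless defined $pid;

my $reply;
if ($pid) {                      # parent: the client
    my $sock = IO::Socket::INET->new(
        PeerAddr => '127.0.0.1',
        PeerPort => $listener->sockport,
        Proto    => 'tcp',
    ) or die "connect: $!";
    print $sock "GET / HTTP/1.0\r\n\r\n";   # speak first!
    $reply = <$sock>;                       # now the reply arrives
    print $reply;
    waitpid $pid, 0;
} else {                         # child: a one-shot toy "web server"
    my $client  = $listener->accept;
    my $request = <$client>;     # the server waits for a request line
    print $client "HTTP/1.0 200 OK\r\n";
    close $client;
    exit 0;
}
```

Comment out the `print $sock ...` line and the read blocks exactly as in the question, until the other side gives up.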

Related

Basic network chat app in Perl

I am trying to write a basic network chat app in Perl for learning purposes. I currently have a server and client program that function almost as I want them to. Multiple clients can connect to the server and send messages to and from it. However, I'm not really sure how to go about sending messages from one client to another and would appreciate a push in the right direction here. Here is the code I have so far, thoughts?
Note: This is my first ever attempt at using networking or using Perl for a proper project so any other guidance on how it's written would also be appreciated.
chat_server.pl
#!/usr/bin/perl -w
# chat_server.pl
use strict;
use IO::Socket::INET;

my $port = shift or die "Port required!\n";
my $socket = IO::Socket::INET->new(
    LocalPort => $port,
    Proto     => 'tcp',
    Listen    => SOMAXCONN,
) or die "Can't create socket: $!!\n";

my $child;
print "Listening for clients on $port...\n";

REQUEST:
while (my $client = $socket->accept) {
    my $addr = gethostbyaddr($client->peeraddr, AF_INET);
    my $port = $client->peerport;

    if ($child = fork) {
        print "New connection from $addr:$port\n";
        close $client;
        next REQUEST;
    }
    die "fork failed!\n" unless defined $child;

    while (<$client>) {
        print "[$addr:$port] says: $_";
        print $client "[$addr:$port] says: $_";
    }
}
close $socket;
chat_client.pl
#!/usr/bin/perl -w
# chat_client.pl
use strict;
use IO::Socket::INET;

my $port = shift or die "No port\n";
my $server = shift or die "No server\n";

my $client_socket = IO::Socket::INET->new(
    PeerPort => $port,
    PeerAddr => $server,
    Proto    => 'tcp',
) or die "Can't create send socket: $!!\n";

my $child;
if ($child = fork) {
    while (1) {
        sleep(1);
        print scalar <$client_socket>;
    }
}
die "fork failed!\n" unless defined $child;

print "Connected to $server:$port!\n";
do {
    print "> ";
    print $client_socket $_ if defined $_;
} while (<STDIN>);
print "Closing connection";
close $client_socket;
A single client to a single server isn't too difficult - what you're doing with your code there is - effectively - creating a 1 to 1 relationship. Your forked server is talking exclusively to your client.
To get information to propagate (via the server) between multiple clients, you're going to have to get a bit more complicated - because you have separate processes, these processes now need to communicate with each other. This is a big enough question that there's a whole segment of the perl documentation about it: perlipc.
This is actually going to increase the complexity of your code substantially, because you're moving to a 1-to-many relationship on your communications, and they'll all be happening asynchronously.
Socket-based communication is one form of inter-process communication (IPC), and you're already doing that. But your 'gotcha' here is that you're moving from 1-to-1 comms to 1-to-many comms. You need to be able to broadcast, and this mode of communication doesn't support that particularly well.
What I would suggest is look at IO::Pipe - I've some example code here: How to extract data from Parallel::ForkManager in perl
Then use IO::Select and can_read to asynchronously decide if there's any data coming in on the pipe. You'll probably need an array of pipes - one per client - otherwise you might get concurrent stuff overlapping.
E.g.:
(From the IO::Pipe doc page, with a declaration for $pid added:)
my $pid;
my $pipe = IO::Pipe->new();

if ($pid = fork()) {            # Parent
    $pipe->reader();
    while (<$pipe>) {
        ...
    }
}
elsif (defined $pid) {          # Child
    $pipe->writer();
    print $pipe ...;
}
Unfortunately there's a slight gotcha here - your pipes will be created by the forking process, which in turn means you'll need to handle an array of pipes and check each one to see if it's readable.
That also means you can't sit in a blocking while loop around accepting sockets any more - accept blocks, so messages would queue up until another client connects (which is really not what you want). So you'll also need to use select to check whether there's a connection ready to accept first.
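For contrast, here is a sketch of the single-process alternative: one IO::Select loop watching both the listening socket and every connected client, broadcasting each line to the other clients. No forking and no pipes are needed in this design. The loopback port and the driver code at the bottom (the forked server plus two clients, with made-up names) are only there to make the sketch self-contained.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;

my $listener = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1', LocalPort => 0,
    Proto     => 'tcp',       Listen    => SOMAXCONN,
) or die "listen: $!";
my $port = $listener->sockport;

my $pid = fork;
die "fork: $!" unless defined $pid;

if ($pid == 0) {
    # The server: one select loop, no per-client processes.
    my $sel = IO::Select->new($listener);
    while (my @ready = $sel->can_read) {
        for my $fh (@ready) {
            if ($fh == $listener) {
                $sel->add($listener->accept);   # new client: watch it too
            } else {
                my $line = <$fh>;
                if (!defined $line) {           # client disconnected
                    $sel->remove($fh);
                    close $fh;
                    next;
                }
                for my $peer ($sel->handles) {  # broadcast to the others
                    print $peer $line
                        unless $peer == $listener || $peer == $fh;
                }
            }
        }
    }
    exit 0;
}

# Driver: two clients exchange a message through the server.
my $alice = IO::Socket::INET->new(PeerAddr => '127.0.0.1',
    PeerPort => $port, Proto => 'tcp') or die "connect: $!";
my $bob   = IO::Socket::INET->new(PeerAddr => '127.0.0.1',
    PeerPort => $port, Proto => 'tcp') or die "connect: $!";
sleep 1;                         # crude: let the server accept both first
print $alice "hello from alice\n";
my $got = <$bob>;                # bob receives alice's line
print $got;
kill 'TERM', $pid;
waitpid $pid, 0;
```

One caveat with this style: mixing buffered line reads (`<$fh>`) with select can bite you once lines arrive split across packets; for production code you'd switch to sysread and keep a per-client buffer.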

Perl transparent proxy

I'm trying to create a transparent HTTP proxy. Its purpose is to sit between the browser and the web server and be invisible. Here is the code I'm using. Unfortunately it's not working very well: when I open the web page (referenced by $base_uri) I get different results depending on whether I've opened it with the browser alone or through the proxy. I'm trying it on a web site that returns all kinds of responses, including "transfer-encoding: chunked" (so I guess the problem could be there?!). I think there could also be problems with the cookies, but I don't know how to solve them (if any...).
#!/usr/bin/perl
use strict;
use HTTP::Daemon;
use LWP::UserAgent;
use HTTP::Cookies;

my $cookie_jar = HTTP::Cookies->new();
my $ua = LWP::UserAgent->new(
    max_redirect => 0,
    env_proxy    => 0,
    keep_alive   => 1,
    timeout      => 30,
    agent        => "Mozilla/4.76 [en] (Win98; U)",
);
my $d = HTTP::Daemon->new(
    LocalHost => "localhost", # remove this to listen from other machines
                              # (i.e. open relay... be careful of spammers!)
    LocalPort => 33331,
) || die;
print "[Proxy URL:", $d->url, "]\n";

fork(); fork(); fork(); # 2^3 = 8 processes

$ua->cookie_jar($cookie_jar);
my $base_uri = 'http://example.com/';

while (my $c = $d->accept) {
    while (my $request = $c->get_request) {
        my $uri    = $base_uri . $request->uri->as_string();
        my $method = $request->method;
        my $req    = HTTP::Request->new($method, $uri);
        $request->uri($uri);
        print "[[ $method >> $uri ]]\n";
        my $response = $ua->simple_request($request);
        $c->send_response($response);
    }
    $c->close;
    undef($c);
}
Thank you in advance!
It is not clear what you really want; describing what you need that proxy for, why, and which features it needs would make it much easier to help. Nevertheless, I'll try.
What you currently do is take an incoming connection, extract the URI and the call method, and pass them on. But HTTP is much more complex - you strip everything like the transported data (e.g. for POST requests) as well as all the header lines (cookies, login data, browser identification, language specs, ...), which usually carry important information. You also modify the timing behavior a lot. Then you send the request to your proxy target.
Now you take the server's answer and again strip everything relevant. You also only relay the answer after it has finished; for streaming data this will not work (you already mentioned the chunked transfer mode). Your method also requires a lot of memory for large files (e.g. a DVD image - 4 GB).
Without further details about your application no real suggestion is possible, but some ideas:
a) As told in the comments there are Perl modules available. You may test them and see if they fit your needs.
b) You can go down a level. Use IO::Socket::INET or INET6 and work directly at the socket level. Send each packet as it comes in directly to the output. This is nearly 100% transparent (except for the IP address and probably the TCP packet sizes). Though if you want to change or inspect data, you then need to handle HTTP yourself, which can be really complicated nowadays (especially due to transfer encoding).
c) Maybe don't code yourself, but use an existing proxy (e.g. the Apache webserver or specific proxy programs).
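As a rough sketch of option (b), here is a minimal byte-level relay. Because bytes are forwarded untouched, chunked transfer encoding and cookies pass through unmodified. Everything below runs on loopback with a toy origin server so the example is self-contained; in a real proxy the upstream connection would go to the actual web server, and you'd accept clients in a loop rather than handling exactly one.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;

# Toy origin server and relay listener, both on ephemeral loopback ports.
my $origin = IO::Socket::INET->new(LocalAddr => '127.0.0.1', LocalPort => 0,
                                   Proto => 'tcp', Listen => 1) or die "origin: $!";
my $relay  = IO::Socket::INET->new(LocalAddr => '127.0.0.1', LocalPort => 0,
                                   Proto => 'tcp', Listen => 1) or die "relay: $!";

my $pid = fork;
die "fork: $!" unless defined $pid;

if ($pid == 0) {
    # The relay: pump raw bytes both ways until either side closes.
    my $client   = $relay->accept;
    my $upstream = IO::Socket::INET->new(
        PeerAddr => '127.0.0.1', PeerPort => $origin->sockport,
        Proto    => 'tcp') or die "upstream: $!";
    my $sel = IO::Select->new($client, $upstream);
    PUMP: while (my @ready = $sel->can_read) {
        for my $fh (@ready) {
            my $n = sysread $fh, my $buf, 8192;
            last PUMP unless $n;                  # EOF or error: stop
            my $other = $fh == $client ? $upstream : $client;
            syswrite $other, $buf;                # forward bytes untouched
        }
    }
    exit 0;
}

# Driver: play both the browser and the origin server.
my $browser = IO::Socket::INET->new(PeerAddr => '127.0.0.1',
    PeerPort => $relay->sockport, Proto => 'tcp') or die "browser: $!";
my $served  = $origin->accept;
print $browser "GET / HTTP/1.0\r\n\r\n";
my $req_line = <$served>;          # origin sees the request verbatim
print $served "HTTP/1.0 200 OK\r\n";
close $served;
my $status = <$browser>;           # browser sees the reply verbatim
print $status;
waitpid $pid, 0;
```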

Sending Multiple Payloads Over Socket in Perl

Edit: the problem is with IIS, not with the Perl code I'm using. Someone else was talking about the same problem here: https://stackoverflow.com/a/491445/1179075
Long-time reader here, first time posting.
So I'm working on some existing code in Perl that does the following:
Create socket
Send some data
Close socket
Loop back to 1 until all data is sent
To avoid the overhead of creating and closing sockets all the time, I decided to do this:
Create socket
Send some data
Loop back to 2 until all data is sent
Close socket
The thing is, only the first payload is being sent - all subsequent ones are ignored. I'm sending this data to a .NET web service, and IIS isn't receiving the data at all. Somehow the socket is being closed, and I have no further clue why.
Here's the script I'm using to test my new changes:
use IO::Socket;

my $sock = new IO::Socket::INET(
    PeerAddr => $hostname,
    PeerPort => 80,
    Proto    => "tcp",
    Timeout  => "1000",
) || die "Failure: $! ";

while (1) {
    my $sent = $sock->send($basic_http_ping_message);
    print "$sent\n";
    sleep(1);
}
close($sock);
So this doesn't work - IIS only receives the very first ping. If I move $sock's assignment and closing into the loop, however, IIS correctly receives every single ping.
Am I just using sockets incorrectly here, or is there some arcane setting in IIS that I need to change?
Thanks!
I think your problem is buffering. Turn off buffering on the socket, or flush it after each write (closing the socket has the side-effect of flushing it).
What output are you getting? You have a print after the send(); if send() fails, it will return undef. You can print out the error like this:
my $sent = $sock->send($msg);
die "Failed send: $!\n" unless defined $sent;
print "Sent $sent bytes\n";
My own guess is that the service that you're connecting to is closing the connection, which is why only one gets through, and also why creating a new connection each time would work.
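That guess is easy to reproduce locally. In the sketch below (toy loopback server, made-up payloads), the server hangs up after the first exchange: the second send still appears to succeed because the kernel buffers it, and only reading the response exposes the dead connection.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

my $srv = IO::Socket::INET->new(LocalAddr => '127.0.0.1', LocalPort => 0,
                                Proto => 'tcp', Listen => 1) or die "listen: $!";

my $pid = fork;
die "fork: $!" unless defined $pid;
if ($pid == 0) {
    # A server that closes after one request, like the guess above.
    my $c   = $srv->accept;
    my $req = <$c>;
    print $c "pong 1\n";
    close $c;                    # hang up after the first exchange
    exit 0;
}

my $sock = IO::Socket::INET->new(PeerAddr => '127.0.0.1',
    PeerPort => $srv->sockport, Proto => 'tcp') or die "connect: $!";

print $sock "ping 1\n";
my $r1 = <$sock>;                # the first ping gets a reply
print $sock "ping 2\n";          # "succeeds", but nobody is listening
my $r2 = <$sock>;                # undef: the server has hung up
print defined $r2 ? "got: $r2"
                  : "second read: undef (server closed the connection)\n";
waitpid $pid, 0;
```

If the real service behaves like this, reconnecting per request (as the original code did) or keeping the connection alive at the HTTP level is the fix, not anything on the socket itself.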

How can I listen on multiple sockets in Perl?

I want to listen on different sockets on a TCP/IP client written in Perl. I know I
have to use select() but I don't know exactly how to implement it.
Can someone show me examples?
Use the IO::Select module. perldoc IO::Select includes an example.
Here's a client example. Not guaranteed to be typo-free or even to work right:
use IO::Select;
use IO::Socket;
# also look at IO::Handle, which IO::Select inherits from

$lsn1 = IO::Socket::INET->new(PeerAddr => 'example.org', PeerPort => 8000, Proto => 'tcp');
$lsn2 = IO::Socket::INET->new(PeerAddr => 'example.org', PeerPort => 8001, Proto => 'tcp');
$lsn3 = IO::Socket::INET->new(PeerAddr => 'example.org', PeerPort => 8002, Proto => 'tcp');

$sel = IO::Select->new;
$sel->add($lsn1);
$sel->add($lsn2);
# don't add the third socket to the select if you are never going to read from it

while (@ready = $sel->can_read) {
    foreach $fh (@ready) {
        # read your data
        my $line = $fh->getline();
        # do something with $line

        # print the results on a third socket
        $lsn3->print("blahblahblah");
    }
}
This was too big to put in a comment field:
You need to better define what you want to do. You have stated that you need to read from port A and write to port B, which is what the above code does: it waits for data to come in on the sockets $lsn1 and $lsn2 (ports 8000 and 8001), reads a line, then writes something back out to example.org on port 8002 (socket $lsn3).
Note that select is really only necessary if you need to read from multiple sockets. If you only need to read from one socket, scrap the IO::Select object and the while loop and just do $line = <$lsn1>. That will block until a line is received.
Anyway, by your definition, the above code is a client: it actively connects to the server (example.org in this case). I suggest you read up on how IO::Socket::INET works; the parameters control whether it creates a listening socket or not.
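To illustrate that last point, the same constructor produces either kind of socket depending on the arguments. A minimal loopback sketch (ephemeral port, throwaway payload):

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Listen/LocalPort make a listening (server) socket ...
my $srv = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1',
    LocalPort => 0,              # 0 = any free port
    Proto     => 'tcp',
    Listen    => 5,
) or die "listen: $!";

# ... while PeerAddr/PeerPort make a connecting (client) socket.
my $client = IO::Socket::INET->new(
    PeerAddr => '127.0.0.1',
    PeerPort => $srv->sockport,
    Proto    => 'tcp',
) or die "connect: $!";

my $conn = $srv->accept;         # server side of the connection
print $client "hi\n";
my $got = <$conn>;
print $got;                      # prints "hi"
```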

IO::Socket timing out when getting response

I'm trying to connect to a web service using IO::Socket::INET (yes, I know that there are lots of better modules for doing this, but I don't have them and can't add them, so please don't suggest it), but I'm timing out (I think that's what it's doing) waiting for a response.
Here's the basic crux of my code (I previously populate the content with all the proper headers, and set it up, etc):
$httpSock->print($content);
my @lines = $httpSock->getlines();
foreach my $line (@lines) {
    print $line;
}
It appears that my request is made immediately, then it waits about 2 minutes before spitting back the response. If I alter the code to use a raw socket recv instead of getlines(), ala:
$httpSock->recv($data, 1024);
I get the response immediately (although only the first 1024 chars). Am I doing something wrong here? I'm using a late enough version of IO::Socket that autoflush should be enabled by default, but turning it on explicitly didn't seem to make any difference. I could probably also just keep reading from the socket until I got the entire response, but that's definitely messier than using getlines() or <$httpSock>.
Thanks in advance.
I'm having an issue re-creating the problem with the code snippet you've posted. Here's the code I tested with:
use strict;
use warnings;
use IO::Socket;
my $httpSock = new IO::Socket::INET(
    PeerAddr => 'www.google.com',
    PeerPort => '80',
    Proto    => 'tcp',
);
my $content = "HEAD / HTTP/1.0\r\nHost: www.google.com\r\n\r\n";
$httpSock->print($content);
my @lines = $httpSock->getlines();
foreach my $line (@lines) {
    print $line;
}
Here are the results:
$ time ./1.pl
HTTP/1.0 200 OK
-snip-
real 0m0.084s
user 0m0.025s
sys 0m0.007s
The problem is that getlines() waits until the connection is closed. If the web service you are connecting to doesn't close your connection, getlines() will keep waiting, thinking more data is on the way. When your connection times out after those 2 minutes or so, getlines() sees the connection close and returns the lines it received. recv(), on the other hand, grabs whatever is on the connection at that moment, up to the limit you pass, and returns it immediately; it only blocks when there is no data at all. I know you think it's messy, but this might work out for you:
use Socket qw(MSG_DONTWAIT);  # for the flag used below

$httpSock->recv($buf, 1024);
$message = "";
while (length($buf) > 0) {
    $message .= $buf;
    $httpSock->recv($buf, 1024, MSG_DONTWAIT);
}
print $message;
The MSG_DONTWAIT flag causes recv to return immediately when no data is waiting on the connection. You can also increase 1024 to some bigger number to decrease the number of loops, or possibly even get the whole message at once.
This should also let you keep the socket open for further use until you close it yourself.
I am wondering if the Google example works because google.com closes the connection after it responds.
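Another option that keeps the connection open without MSG_DONTWAIT polling: read the response headers line by line, then read exactly Content-Length bytes of body. The sketch below includes a toy loopback server that deliberately keeps the connection open afterwards - exactly the situation where getlines() would hang; the payload and names are made up for the example.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

my $srv = IO::Socket::INET->new(LocalAddr => '127.0.0.1', LocalPort => 0,
                                Proto => 'tcp', Listen => 1) or die "listen: $!";

my $pid = fork;
die "fork: $!" unless defined $pid;
if ($pid == 0) {
    my $c = $srv->accept;
    <$c>;                        # consume the request line
    my $payload = "hello body";
    print $c "HTTP/1.0 200 OK\r\nContent-Length: "
           . length($payload) . "\r\n\r\n" . $payload;
    sleep 2;                     # keep the connection open: getlines() would hang here
    close $c;
    exit 0;
}

my $sock = IO::Socket::INET->new(PeerAddr => '127.0.0.1',
    PeerPort => $srv->sockport, Proto => 'tcp') or die "connect: $!";
print $sock "GET / HTTP/1.0\r\n\r\n";

my $len;
while (defined(my $line = <$sock>)) {   # headers end at the blank line
    last if $line eq "\r\n";
    $len = $1 if $line =~ /^Content-Length:\s*(\d+)/i;
}
die "no Content-Length header\n" unless defined $len;
my $body = '';
read $sock, $body, $len;                # read exactly the advertised bytes
print "$body\n";
waitpid $pid, 0;
```

This only works when the server sends Content-Length (chunked responses need more parsing), but for a fixed web service it avoids both the 2-minute timeout and the polling loop.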