I want to use this module to create a connection through multiple SOCKS proxies, but the module I am using is very unreliable and slow compared to an external solution such as proxychains. After successfully connecting through the chain of proxies, the while loop below starts sending data through the chain. However, the program crashes with the error "Out of memory". Does anyone have advice on how to fix the code? Any help would be appreciated.
while ($sock) {
    $sock->syswrite("GET / HTTP/1.1\r\n");
}
When I look at the Wireshark capture, the initial request is sent without any problems, but the following requests contain duplicate HTTP requests in each packet.
Edit: I tried using IO::Async, but the requests aren't sent.
use IO::Async::Loop;
use IO::Async::Handle;

my $loop = IO::Async::Loop->new;
my $handle = IO::Async::Handle->new(
    write_handle   => $sock,
    on_write_ready => sub {
        my $request = "GET / HTTP/1.1\r\n";
        print $sock $request;
    },
);
$loop->add( $handle );
$loop->loop_forever;
Edit: I was able to fix the script by establishing a new chain for each request I send. Is it possible to send a CONNECT request through the SOCKS chain to establish a new connection each time I send an HTTP request?
Is there a way to fire an HTTP PUT request and not wait on the response or on a determination of success/failure?
This is related to, but not the same as, a background/asynchronous HTTP request.
My codebase is single-threaded and single-process.
I'm open to monkey-patching LWP::UserAgent->request if needed.
You could just abandon processing the response as soon as the first data comes in:
use LWP::UserAgent;
my $ua = LWP::UserAgent->new;
my $resp = $ua->put('http://example.com/', ':content_cb' => sub { die "break early" });
print $resp->as_string;
You might also create a request using HTTP::Request->new('PUT', ...)->as_string, create a socket with IO::Socket::IP->new(...) or (for https) IO::Socket::SSL->new(...), and send this request over the socket - then leave the socket open for a while, while your program does other things.
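A rough sketch of that second approach might look like this (example.com, port 80, and the four-byte 'data' body are placeholders, and error handling is minimal):

use strict;
use warnings;
use HTTP::Request;
use IO::Socket::IP;

# Build the raw request text.
my $req = HTTP::Request->new(
    PUT => 'http://example.com/',
    [ Host => 'example.com', 'Content-Length' => 4 ],
    'data',
);
$req->protocol('HTTP/1.1');

# Open a plain TCP connection (use IO::Socket::SSL for https).
my $sock = IO::Socket::IP->new(
    PeerHost => 'example.com',
    PeerPort => 80,
) or die "connect failed: $!";

# Send the request and keep the socket open while the program does other work.
print $sock $req->as_string("\r\n");
# ... later: read the response from $sock, or just close it ...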
But the first approach, with the early break in the :content_cb, is probably simpler. And unlike crafting and sending the request yourself, it guarantees that the server at least started to process your request, since it started to send a response back.
If you are open to alternatives to LWP, Mojo::UserAgent is a non-blocking user agent that lets you control how the response is handled by the event loop. For example, as described in the cookbook, you can change the handling of the response stream; this could be used to simply ignore the stream, or to do something else with it.
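A minimal fire-and-forget sketch with Mojo::UserAgent might look like this (the URL is a placeholder, and the callback simply discards whatever comes back):

use Mojo::UserAgent;
use Mojo::IOLoop;

my $ua = Mojo::UserAgent->new;

# Passing a callback makes the request non-blocking; the response is ignored here.
$ua->put('http://example.com/' => sub {
    my ($ua, $tx) = @_;
    # do nothing with $tx
});

# The request only makes progress while the event loop runs; in a larger
# Mojolicious app the loop is usually already running.
Mojo::IOLoop->start unless Mojo::IOLoop->is_running;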
I am using a Perl module called Net::APNS::Persistent. It helps me open a persistent connection to Apple's APNS server and send push notifications through APNS. This module uses Net::SSLeay for SSL communication with the APNS server.
Now, I want to read from my socket periodically to check whether APNS sends back any response. Net::APNS::Persistent already has a function called _read(), which looks like this:
sub _read {
    my $self = shift;
    my ($socket, $ctx, $ssl) = @{$self->_connection};
    my $data = Net::SSLeay::ssl_read_all( $ssl );
    die_if_ssl_error("error reading from ssl connection: $!");
    return $data;
}
However, this function only works after APNS drops the connection and I get an error while trying to write. At other times my script gets stuck at
my $data = Net::SSLeay::ssl_read_all( $ssl );
I checked the Net::SSLeay documentation and found it has a method called peek:
Copies $max bytes from the specified $ssl into the returned value. In contrast to the Net::SSLeay::read() function, the data in the SSL buffer is unmodified after the SSL_peek() operation.
I thought it might be useful, so I added another function to the Net::APNS::Persistent module:
sub ssl_peek {
    my $self = shift;
    my ($socket, $ctx, $ssl) = @{$self->_connection};
    print "Peeking \n";
    my $data = Net::SSLeay::peek( $ssl, $pending );
    print "Done peeking \n";
    return $data;
}
Unfortunately this had the same problem. It only prints Peeking and never reaches the line that would print Done peeking. I had the same problem using Net::SSLeay::read. Is there a way to check whether the socket can be read, or to set a read timeout, so that my script doesn't get stuck while trying to read from the socket?
The APNS documentation says the following:
If you send a notification that is accepted by APNs, nothing is returned.
If you send a notification that is malformed or otherwise unintelligible, APNs returns an error-response packet and closes the connection. Any notifications that you sent after the malformed notification using the same connection are discarded, and must be resent.
As long as your notifications are accepted, there won't be any data to read, so a read operation on the socket will block. The only time data is available is when there's an error, and then the connection is immediately closed. That should explain the behaviour you're observing.
To check whether the underlying socket can be read, use select, i.e.
IO::Select->new(fileno($socket))->can_read($timeout);
$timeout can be 0 to just check and not wait, a number of seconds, or undef to wait forever. But before you do the select, check whether data is still available in the SSL buffer:
if (Net::SSLeay::pending($ssl)) { ... use SSL_peek or SSL_read ... }
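Putting the two checks together, a minimal non-blocking read helper might look like this (a sketch; $socket, $ssl and the $timeout value are assumed to come from the module's _connection data, as in the question's _read):

use IO::Select;
use Net::SSLeay;

sub try_read {
    my ($socket, $ssl, $timeout) = @_;

    # First drain anything already buffered inside the SSL layer;
    # select() on the raw socket cannot see that data.
    if (my $pending = Net::SSLeay::pending($ssl)) {
        return Net::SSLeay::read($ssl, $pending);
    }

    # Then wait up to $timeout seconds for the socket to become readable
    # before calling the (otherwise blocking) read.
    if (IO::Select->new(fileno($socket))->can_read($timeout)) {
        return Net::SSLeay::read($ssl);
    }

    return undef;    # nothing arrived within the timeout
}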
Apart from that, it looks like the module you use does not even attempt to validate the server's certificate :(
I am implementing a Perl-based UDP server/client model. The socket functions recv() and send() are used for server/client communication. It seems that send() uses the return address from the recv() call, and I can only get the server to respond to the client that sent the initial request. However, I want the server to send the data to all active clients instead of only the source client. If the peer_address and peer_port of each active client are known, how can I use Perl's send() to route packets to specific clients?
Some attempts:
foreach (@active_clients) {
    #$_->{peer_socket}->send($data);
    #$socket->send($_->{peer_socket}, $data, 0, $_->{peer_addr});
    #send($_->{peer_socket}, $data, 0, $_->{peer_addr});
    $socket->send($data);    # Echoes back to source client
}
Perldoc send briefly describes parameter passing. I have attempted to adapt this, and the result is either 1) the client simply receives the socket glob object, 2) the packet is never received, or 3) an error is thrown. Any examples of how to use send() to route to known clients would be much appreciated.
Yes, just store the addresses somewhere and reply to them:
# receiving some packets
while (my $a = recv($sock, $buf, 8192, 0)) {
    print "Received $buf\n";
    push @addrs, $a;
    # exit at some point
}

# send the answers
for my $a (@addrs) {
    send($sock, "OK", 0, $a);
}
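For a more self-contained picture, here is a sketch of the same idea using an IO::Socket::INET UDP socket (the port, the fixed three-datagram loop, and the broadcast payload are just placeholders):

use strict;
use warnings;
use IO::Socket::INET;

my $sock = IO::Socket::INET->new(
    LocalPort => 5000,          # placeholder port
    Proto     => 'udp',
) or die "socket: $!";

my %clients;                    # packed peer addresses, keyed to avoid duplicates

# Collect a few datagrams and remember who sent them.
for (1 .. 3) {
    my $peer = $sock->recv(my $buf, 8192);
    next unless defined $peer && length $peer;
    print "Received '$buf'\n";
    $clients{$peer} = 1;
}

# Push the same payload to every client seen so far.
for my $peer (keys %clients) {
    $sock->send("broadcast payload", 0, $peer)
        or warn "send failed: $!";
}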
In my Catalyst app I have a very important connection to a remote server using SOAP with WSDL.
Everything works fine, but when the remote server goes down for any reason, ALL of my app waits until the timeout expires. EVERYTHING. ALL the controllers and processes, ALL the clients!!
If I set a 15-second timeout for the SOAP::Lite transport error, everything waits for 15 seconds.
No page can be displayed for any user or connection during the timeout wait.
I use FastCGI and Nginx for the Catalyst app. If I use multiple FCGI processes, then when one waits, the others take care of the connections; but if all of them try to access the faulty SOAP service... they all wait and wait for an answer until they reach their timeouts. While all of them are waiting, no more connections are accepted.
Looking for answers, I have read somewhere that SOAP::Lite is "single-threaded".
Is it true? Does it mean that ALL of my app, with ALL its visitors, can only use one SOAP connection? That is hard to believe.
This is my code for the call:
sub check_result {
    my ($self, $code, $IP, $PORT) = @_;
    my $soap = SOAP::Lite->new( proxy => "http://$IP:$PORT/REMOTE_SOAP" );
    $soap->autotype(0);
    $soap->default_ns('http://REMOTENAMESPACE/namespace/default');
    $soap->transport->timeout(15);
    $soap->on_fault(sub {
        my ($soap, $res) = @_;
        eval { die ref $res ? $res->faultstring : $soap->transport->status };
        return ref $res ? $res : new SOAP::SOM;
    });
    my $som = $soap->call("remote_function",
        SOAP::Data->name( 'Entry1' )->value( $code ),
    );
    return $som->paramsout;
}
I also tried a slightly different approach kindly suggested at PerlMonks, but nothing improved.
Please, can someone point me in the right direction?
Migue
This is not a problem with SOAP::Lite or Catalyst per se. Pretty much any resource you query will make you wait for its result (e.g. a file read from disk, a database access). If the resource blocks for a long time, there's a chance you could "starve" other requests while waiting for it to return.
There's no easy answer to this problem, but you could create a "job queue" that a separate process works through. Instead of calling the other service directly, you add an entry to the queue and get back a token. When the job is finished, the queue stores the result associated with that token, and your app, in a separate request, checks whether the token you are interested in already has a result or not.
There are specialized "job queue" frameworks, such as RabbitMQ, Apache ActiveMQ, and even some solutions built on top of Redis. If your web application uses rich JavaScript, you could even have the "job queue" notification reach the JavaScript client using, for instance, WebSockets; otherwise, just poll every second or so to see whether there is a response yet.
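If Redis is available, a minimal sketch of that token/poll pattern might look like this (the key names, the one-hour result TTL, and the do_soap_call() helper standing in for the question's SOAP::Lite call are illustrative assumptions, not part of Catalyst or SOAP::Lite):

use strict;
use warnings;
use Redis;
use JSON::PP qw(encode_json decode_json);
use Data::UUID;

# In the Catalyst controller: enqueue the SOAP call and return a token immediately.
sub enqueue_soap_call {
    my ($code, $ip, $port) = @_;
    my $redis = Redis->new;
    my $token = Data::UUID->new->create_str;
    $redis->lpush('soap_jobs', encode_json({
        token => $token, code => $code, ip => $ip, port => $port,
    }));
    return $token;              # hand this back to the client for later polling
}

# In a separate worker process: do the slow SOAP call outside the web app.
sub worker_loop {
    my $redis = Redis->new;
    while (1) {
        my ($queue, $raw) = $redis->brpop('soap_jobs', 0);   # block until a job arrives
        my $job = decode_json($raw);
        # do_soap_call() is a hypothetical stand-in for the question's check_result code.
        my $result = eval { do_soap_call($job->{code}, $job->{ip}, $job->{port}) };
        $redis->setex("soap_result:$job->{token}", 3600,
                      defined $result ? $result : "ERROR: $@");
    }
}

# Back in the web app: a later request polls for the result by token.
sub poll_result {
    my ($token) = @_;
    my $redis = Redis->new;
    return $redis->get("soap_result:$token");   # undef until the worker is done
}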
Edit: the problem is with IIS, not with the Perl code I'm using. Someone else was talking about the same problem here: https://stackoverflow.com/a/491445/1179075
Long-time reader here, first time posting.
So I'm working on some existing code in Perl that does the following:
1. Create socket
2. Send some data
3. Close socket
4. Loop back to 1 until all data is sent
To avoid the overhead of creating and closing sockets all the time, I decided to do this:
1. Create socket
2. Send some data
3. Loop back to 2 until all data is sent
4. Close socket
The thing is, only the first payload is being sent - all subsequent ones are ignored. I'm sending this data to a .NET web service, and IIS isn't receiving the data at all. Somehow the socket is being closed, and I have no further clue why.
Here's the script I'm using to test my new changes:
use IO::Socket;

my $sock = IO::Socket::INET->new(
    PeerAddr => $hostname,
    PeerPort => 80,
    Proto    => "tcp",
    Timeout  => "1000",
) || die "Failure: $!";

while (1) {
    my $sent = $sock->send($basic_http_ping_message);
    print "$sent\n";
    sleep(1);
}
close($sock);
So this doesn't work - IIS only receives the very first ping. If I move $sock's assignment and closing into the loop, however, IIS correctly receives every single ping.
Am I just using sockets incorrectly here, or is there some arcane setting in IIS that I need to change?
Thanks!
I think your problem is buffering. Turn off buffering on the socket, or flush it after each write (closing the socket has the side-effect of flushing it).
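For instance, a quick way to apply that suggestion (a sketch using the standard IO::Handle methods; note that it assumes the data is written with print, since send() goes straight to the socket and is not affected by the handle's output buffer):

use IO::Handle;                 # provides autoflush() and flush() on the handle

$sock->autoflush(1);            # disable output buffering on the socket handle
# or, keep buffering but flush explicitly after each write:
$sock->print($basic_http_ping_message);
$sock->flush;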
What output are you getting? You have a print after the send(); if send() fails, it will return undef. You can print the error like this:
my $sent = $sock->send($msg);
die "Failed send: $!\n" unless defined $sent;
print "Sent $sent bytes\n";
My own guess is that the service that you're connecting to is closing the connection, which is why only one gets through, and also why creating a new connection each time would work.
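One way to test that guess (a sketch): between sends, poll the socket for readability; if it is readable and sysread then returns 0 bytes, the server has closed its end of the connection.

use IO::Select;

my $sel = IO::Select->new($sock);
if ($sel->can_read(0)) {                     # check without waiting
    my $n = sysread($sock, my $buf, 4096);
    if (!$n) {
        warn "server closed the connection\n";
    }
    else {
        print "server said: $buf\n";         # e.g. the HTTP response to the first ping
    }
}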