I am creating a UDP proxy in Go, but while doing some load testing with iperf I start to get this error:
socket: too many open files
After searching and testing, I found that if I create a pool of open connections using a map, where the key is *net.UDPAddr.String() and the value is a UDP-proxy instance containing a *net.UDPConn, I am able to reuse an existing connection when the client address is the same:
var clients map[string]*UDPProxy.UDPProxy = make(map[string]*UDPProxy.UDPProxy)
This block of code looks something like:
// wait for connections
for {
    n, clientAddr, err := conn.ReadFromUDP(buffer)
    if err != nil {
        log.Println(err)
    }
    counter++
    if *d {
        log.Printf("new connection from %s", clientAddr.String())
    }
    fmt.Printf("Connections: %d, clients: %d\n", counter, len(clients))
    proxy, found = clients[clientAddr.String()]
    if !found {
        // make new connection to remote server
        proxy = UDPProxy.New(conn, clientAddr, raddr_udp, *d)
        clients[clientAddr.String()] = proxy
    }
    go proxy.Start(buffer[0:n])
}
This seems to be working, but the problem I have now is that I need to find a way of expiring/cleaning the map when the client exits or is no longer using the proxy, so that I can avoid keeping multiple unused connections around.
Any idea how I could improve this or, even better, how I could replace the map entirely? I don't know if channels could be helpful here.
Thanks in advance.
Since you are creating UDP proxies, you probably know that you have to come up with your own solution for deciding when to "terminate" the proxy session. The session is just an abstraction when it comes to UDP - unless the UDPProxy package you're using has an established mechanism already.
Depending on why you are creating UDP proxies, it might be easy to clean up connections fairly arbitrarily ...
So if you know that a client is exiting, call the Close() method on the proxy (assuming there is one) and use delete on the map entry.
How to decide that a client is exiting is up to you. You could use a slice as a FIFO, pick an entry at random, or set a timer for each entry; a sketch of the timer/last-seen approach is below.
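A minimal sketch of that last idea, assuming the proxy type exposes a Close() method (the names session, touch, and reap are invented for illustration, not part of any package): record a last-activity timestamp on every datagram and let a background goroutine evict idle entries.

package main

import (
    "log"
    "sync"
    "time"
)

// session pairs a proxy instance with the time it last saw traffic.
// The anonymous interface stands in for whatever Close() method the
// real UDPProxy type exposes.
type session struct {
    proxy    interface{ Close() error }
    lastSeen time.Time
}

var (
    mu       sync.Mutex
    sessions = make(map[string]*session)
)

// touch refreshes the timestamp for addr, creating the entry for a
// previously unseen client address.
func touch(addr string, p interface{ Close() error }) {
    mu.Lock()
    defer mu.Unlock()
    if s, ok := sessions[addr]; ok {
        s.lastSeen = time.Now()
        return
    }
    sessions[addr] = &session{proxy: p, lastSeen: time.Now()}
}

// reap periodically closes and removes sessions that have been idle
// for longer than maxIdle.
func reap(maxIdle time.Duration) {
    for range time.Tick(maxIdle / 2) {
        mu.Lock()
        for addr, s := range sessions {
            if time.Since(s.lastSeen) > maxIdle {
                s.proxy.Close()
                delete(sessions, addr)
                log.Printf("expired %s", addr)
            }
        }
        mu.Unlock()
    }
}

func main() {
    go reap(2 * time.Minute) // tune the idle window to your traffic
    // touch(clientAddr.String(), proxy) would be called from the read loop.
    select {} // block forever; only here so the sketch compiles and runs
}

In the read loop from the question, you would call touch(clientAddr.String(), proxy) right after the map lookup, and start reap once before entering the loop.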
Related
I am interested in learning Vapor, so I decided to work on a website that displays government issued weather alerts. Alert distribution is done via a TCP/IP data stream (streaming1.naad-adna.pelmorex.com port 8080).
What I have in mind is to use IBM's BlueSocket (https://github.com/IBM-Swift/BlueSocket) to create a socket, though after this point, I gave it a bit of thought but was unable to come to a conclusion on what the next steps would be.
Alerts are streamed over the data stream, so I am aware the socket would need to be opened and listened on, but I wasn't able to get much past that.
A few things about the data stream: the start and end of an alert are detected using the start and end tags of the XML document (<alert> and </alert>). There are no special or proprietary headers added to the data; it's only raw XML. I know some alerts also include an XML declaration, so I assume the encoding should be taken into account if the declaration is available.
I was then thinking of using XMLParser to parse the XML and use the data I am interested in from the alert.
So really, the main thing I am struggling with is, when the socket is open, what would be the method to listen to it, determine the start and end of the alert and then pass that XML alert for processing.
I would appreciate any input, I am also not restricted to BlueSocket so if there is a better option for what I am trying to achieve, I would be more than open to it.
So really, the main thing I am struggling with is, when the socket is
open, what would be the method to listen to it, determine the start
and end of the alert and then pass that XML alert for processing.
The method that you should use is read(into data: inout Data). It stores any available data that the server has sent into data. There are a few reasons for this method to fail, such as the connection disconnecting.
Here's an example of how to use it:
import Foundation
import Socket

let s = try Socket.create()
try s.connect(to: "streaming1.naad-adna.pelmorex.com", port: 8080)

while true {
    if try Socket.wait(for: [s], timeout: 0, waitForever: true) != nil {
        var alert = Data()
        try s.read(into: &alert)
        if let message = String(data: alert, encoding: .ascii) {
            print(message)
        }
    }
}

s.close()
First create the socket. The default is what we want, an IPv4 TCP stream.
Second connect() to the server using the hostname and port. Without this step, the socket isn't connected and cannot receive or send any data.
wait() until the host has sent us some data. It returns a list of sockets that have data available to read.
read() the data, decode it and print it. By default this call will block if there is no data available on the socket.
close() the socket. This is good practice.
You might also like to consider thinking about:
non blocking sockets
error handling
streaming (a single call to read() might not give a complete alert; see the framing sketch below).
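The framing step is the same regardless of language: append whatever read() returns to a buffer and only hand off a chunk once the closing </alert> tag has arrived, keeping any leftover bytes for the next alert. Here is a rough sketch of that idea (in Go rather than Swift, and with an invented feed helper, purely to show the buffering logic):

package main

import (
    "bytes"
    "fmt"
)

// feed appends raw bytes from the stream to buf and calls handle once
// for every complete alert, i.e. every time a closing </alert> tag has
// arrived. Bytes after the tag stay buffered for the next alert.
func feed(buf *bytes.Buffer, chunk []byte, handle func(string)) {
    buf.Write(chunk)
    for {
        end := bytes.Index(buf.Bytes(), []byte("</alert>"))
        if end < 0 {
            return // no complete alert buffered yet
        }
        end += len("</alert>")
        handle(string(buf.Next(end))) // consume exactly one alert
    }
}

func main() {
    var buf bytes.Buffer
    show := func(alert string) { fmt.Println(alert) }

    // Simulate two reads that split one alert across the boundary:
    // nothing is emitted until the closing tag shows up.
    feed(&buf, []byte("<alert>heavy rain warn"), show)
    feed(&buf, []byte("ing</alert><alert>next"), show)
}

In the Swift answer above, the equivalent would be to keep a Data buffer across read(into:) calls and search it for the closing tag before handing a complete alert to XMLParser.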
I hope this answers your question.
I'm having what doesn't seem to be a problem, but I don't understand the error. Basically, I've been trying to write code that "gracefully" shuts down a TCP socket on both ends. At the end of my program, on both sides, I shut down my sockets before closing them. On a quick restart, which is the thing I'm trying to solve, the much worse problem of crashing because the sockets are lingering on both sides doesn't happen anymore. The shutdown-then-close approach seems to work on that front.
However, I still get "Address already in use", which usually meant I couldn't connect. Now I'm able to connect just fine after that error. I've read a lot on the subject of graceful shutdown, reuse-address, and the like. I guess my question is: if the socket errored on bind ("Address already in use") after a successful open, how is it able to connect to the endpoint? In other words, if the address is actually already in use, how is the connection being made? Also, of note, reuse-address doesn't help in this situation, because I'm using the same socket settings, local/remote addresses, and IPs.
Failing to bind() the socket to an address does not invalidate the underlying socket. As such, the connect() operation will continue with an unbound socket, deferring to the kernel to bind to a local endpoint.
Here is a complete example demonstrating this behavior:
#include <cassert>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

// This example is not interested in all handlers, so provide a noop function
// that will be passed to bind to meet the handler concept requirements.
void noop() {}

int main()
{
  using boost::asio::ip::tcp;

  // Create all I/O objects.
  boost::asio::io_service io_service;
  tcp::acceptor acceptor(io_service, {tcp::v4(), 0});
  tcp::socket server(io_service, tcp::v4());

  // Open socket1, binding to a random port.
  tcp::socket socket1(io_service, {boost::asio::ip::address_v4::any(), 0});
  tcp::socket socket2(io_service); // non-open

  // Explicitly open socket2, which will bind it to the any address.
  boost::system::error_code error;
  socket2.open(tcp::v4(), error);
  assert(!error);
  assert(socket2.local_endpoint().port() == 0);

  // An attempt to bind socket2 to socket1's address will fail with
  // an already-in-use error, leaving socket2 bound to the any endpoint.
  // (i.e. a failed bind does not have side effects on the socket)
  socket2.bind(socket1.local_endpoint(), error);
  assert(error == boost::asio::error::address_in_use);
  assert(socket2.local_endpoint().port() == 0);

  // Connect will defer to the kernel to bind the socket.
  acceptor.async_accept(server, boost::bind(&noop));
  socket2.async_connect(acceptor.local_endpoint(),
      [&error](const boost::system::error_code& ec) { error = ec; });

  io_service.run();
  io_service.reset();

  assert(!error);
  assert(socket2.local_endpoint().port() != 0);
}
I apologize before hand if some of these questions might be obvious for expert network programmers. I have researched and read about coding in networking and it is still not clear to me how to do this.
Assume that I want to write a tcp proxy (in go) with the connection between some TCP client and some TCP server. Something like this:
First, assume that these connections are semi-permanent (they will be closed only after a long, long while) and that I need the data to arrive in order.
The idea I want to implement is the following: whenever I get a request from the client, I want to forward that request to the backend server and wait (doing nothing) until the backend server responds to me (the proxy), and then forward that response to the client (assume that both TCP connections will be maintained in the common case).
There is one main problem that I am not sure how to solve. When I forward the request from the proxy to the server and get the response, how do I know when the server has sent me all the information I need, if I do not know beforehand the format of the data being sent from the server to the proxy (i.e. I don't know if the response from the server uses a type-length-value scheme, nor do I know if `\r\n` indicates the end of the message from the server)? I was told that I should assume that I have received all the data from the server connection whenever my read from the TCP connection returns zero bytes, or fewer bytes than I expected. However, this does not seem correct to me. The reason it might not be correct in general is the following:
Assume that the server, for some reason, is only writing to its socket one byte at a time, but the total length of the response to the "real" client is much, much longer. Isn't it then possible that when the proxy reads the TCP socket connected to the server, the proxy only reads one byte, and if it loops fast enough (doing another read before more data arrives), it reads zero and incorrectly concludes that it has received the entire message intended for the client?
One way to fix this might be to wait after each read from the socket, so that the proxy doesn't loop faster than it receives bytes. The reason I am worried is this: assume there is a network partition and I can't talk to the server anymore, but it is not disconnected from me long enough to time out the TCP connection. Isn't it then possible that I try to read from the TCP socket to the server again (faster than I get data), read zero, incorrectly conclude that this is all the data, and then send it back to the client? (Remember, the promise I want to keep is that I only send whole messages to the client when I write to the client connection. It is therefore not acceptable for the proxy to read the connection again later, after it has already written to the client, and send the missing chunk at a later time, perhaps during the response to a different request.)
The code that I have written is in go-playground.
The analogy I like to use to explain why I think this method doesn't work is the following:
Say we have a cup, and the proxy drinks half the cup every time it does a read from the server, but the server only puts in one teaspoon at a time. If the proxy drinks faster than it receives teaspoons, it might reach zero too soon and conclude that its socket is empty and that it's OK to move on, which is wrong if we want to guarantee we are sending full messages every time. Either this analogy is wrong and some "magic" from TCP makes it work, or the algorithm that reads until the socket is empty is just plain wrong.
A question that deals with a similar problem suggests reading until EOF. However, I am unsure why that would be correct. Does reading EOF mean that I got the intended message? Is an EOF sent each time someone writes a chunk of bytes to a TCP socket (i.e. I am worried that if the server writes one byte at a time, it sends one EOF per byte)? Or is EOF part of the "magic" of how a TCP connection really works? Does sending EOF close the connection? If it does, it's not a method I want to use. Also, I have no control over what the server might be doing (i.e. I do not know how often it wants to write to the socket to send data to the proxy, though it's reasonable to assume it writes to the socket in some standard/normal way). I am just not convinced that reading until EOF from the server socket is correct. Why would it be? When can I even read to EOF? Is EOF part of the data, or part of the TCP header?
Also, regarding the idea of waiting just epsilon below the time-out: would that work in the worst case or only on average? I also realized that if the Wait() call is longer than the time-out, then if I return to the TCP connection and it doesn't have anything, it's safe to move on. However, if it doesn't have anything and we don't know what happened to the server, we would time out anyway, so it's safe to close the connection (because the timeout would have done that regardless). Thus, I think that if the Wait call is at least as long as the timeout, this procedure does work. What do people think?
I am also interested in an answer that can justify why this algorithm might work in some cases. For example, I was thinking: even if the server only writes a byte at a time, if the deployment scenario is a tight data centre, then on average, because delays are really small and the wait call is almost certainly long enough, wouldn't this algorithm be fine?
Also, are there any risks of the code I wrote getting into a "deadlock"?
package main

import (
    "fmt"
    "net"
)

type Proxy struct {
    ServerConnection *net.TCPConn
    ClientConnection *net.TCPConn
}

func (p *Proxy) Proxy() {
    fmt.Println("Running proxy...")
    for {
        request := p.receiveRequestClient()
        p.sendClientRequestToServer(request)
        response := p.receiveResponseFromServer() // <-- worried about this one.
        p.sendServerResponseToClient(response)
    }
}

func (p *Proxy) receiveRequestClient() (request []byte) {
    // Assume this function is a black box and that it works.
    // Maybe we know that the messages from the client always end in \r\n,
    // or that they are length-prefixed.
    return
}

func (p *Proxy) sendClientRequestToServer(request []byte) {
    bytesSent := 0
    bytesToSend := len(request)
    for bytesSent < bytesToSend {
        n, _ := p.ServerConnection.Write(request[bytesSent:])
        bytesSent += n
    }
}

// Intended behaviour: waits until ALL of the response from the backend server
// is obtained. What it actually does: assumes that if it reads zero, the server
// has not yet written to the proxy, and therefore waits. However, once the first
// bytes have been read, it keeps reading until it drains all the data from the
// server and the socket is "empty" (signalled by reading zero in the second loop).
func (p *Proxy) receiveResponseFromServer() (response []byte) {
    buffer := make([]byte, 4096)
    n, _ := p.ServerConnection.Read(buffer)
    for n == 0 {
        n, _ = p.ServerConnection.Read(buffer)
    }
    for n != 0 {
        response = append(response, buffer[:n]...)
        n, _ = p.ServerConnection.Read(buffer)
        // Wait(n) could solve it here?
    }
    return
}

func (p *Proxy) sendServerResponseToClient(response []byte) {
    bytesSent := 0
    bytesToSend := len(response)
    for bytesSent < bytesToSend {
        n, _ := p.ClientConnection.Write(response[bytesSent:])
        bytesSent += n
    }
}

func main() {
    proxy := &Proxy{}
    proxy.Proxy()
}
Unless you're working with a specific higher-level protocol, there is no "message" to read from the client to relay to the server. TCP is a stream protocol, and all you can do is shuttle bytes back and forth.
The good news is that this is amazingly easy in go, and the core part of this proxy will be:
go io.Copy(server, client)
io.Copy(client, server)
This is obviously missing error handling, and doesn't shut down cleanly, but it clearly shows how the core data transfer is handled; a slightly fuller sketch follows.
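A sketch along the same lines, with basic error handling and a half-close so each side sees EOF when the other finishes (the listen and backend addresses are placeholders, not anything from the question):

package main

import (
    "io"
    "log"
    "net"
)

// pipe copies one direction, then half-closes the destination so the
// peer sees EOF, and signals completion on done.
func pipe(dst, src *net.TCPConn, done chan<- struct{}) {
    if _, err := io.Copy(dst, src); err != nil {
        log.Println("copy:", err)
    }
    dst.CloseWrite()
    done <- struct{}{}
}

// proxy shuttles bytes in both directions and closes both connections
// once both directions have finished.
func proxy(client, server *net.TCPConn) {
    done := make(chan struct{}, 2)
    go pipe(server, client, done)
    go pipe(client, server, done)
    <-done
    <-done
    client.Close()
    server.Close()
}

func main() {
    ln, err := net.Listen("tcp", ":8000") // placeholder listen address
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        go func(c *net.TCPConn) {
            backend, err := net.Dial("tcp", "127.0.0.1:9000") // placeholder backend
            if err != nil {
                log.Println("dial backend:", err)
                c.Close()
                return
            }
            proxy(c, backend.(*net.TCPConn))
        }(conn.(*net.TCPConn))
    }
}

Whether you can half-close like this depends on the protocol the client and server speak; some backends expect the connection to stay fully open.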
Here is my question:
Is it bad to set the socket to nonblocking before I call accept or connect, or should I use a blocking accept and connect and then change the socket to nonblocking?
I'm new to OpenSSL and not very experienced with network programming. My problem is that I'm trying to use OpenSSL with nonblocking sockets to add security. When I call SSL_accept on the server side and SSL_connect on the client side, and check the return code using
SSL_get_error(m_ssl, n);
char error[65535];
ERR_error_string_n(ERR_get_error(), error, 65535);
the return code from SSL_get_error indicates SSL_ERROR_WANT_READ, while ERR_error_string_n prints out "error:00000000:lib(0):func(0):reason(0)", which I think means no error. SSL_ERROR_WANT_READ means I need to retry both SSL_accept and SSL_connect.
Then I use a loop to retry those functions, but this just leads to an infinite loop :(
I believe I have initialized SSL properly; here is the code:
//CRYPTO_malloc_init();
SSL_library_init();
const SSL_METHOD *method;

// load & register all cryptos, etc.
OpenSSL_add_all_algorithms();
// load all error messages
SSL_load_error_strings();

if (server) {
    // create new server-method instance
    method = SSLv23_server_method();
}
else {
    // create new client-method instance
    method = SSLv23_client_method();
}

// create new context from method
m_ctx = SSL_CTX_new(method);
if (m_ctx == NULL) {
    throwError(-1);
}
If there is any part I haven't mentioned but you think it could be the problem, please let me know.
SSL_ERROR_WANT_READ means I need to retry both SSL_accept and SSL_connect.
Yes, but this is not the full story.
You should retry the call only after the socket becomes readable, e.g. you need to use select or poll or a similar function to wait until the socket becomes readable. The same applies to SSL_ERROR_WANT_WRITE, but there you have to wait for the socket to become writable.
If you just retry without waiting, it will probably succeed eventually, but only after hundreds of failed calls. While using select does not guarantee that the very next call succeeds, it will usually take only a few calls of SSL_connect/SSL_accept until it succeeds, and it will not busy-loop and eat CPU in the meantime.
I'm opening a TCP connection to a server, which is easy enough to do, but I need a way to keep that socket open without having to call net.createConnection(port, host) again and again.
What I'm trying to implement is a socket server which accepts multiple connections and then channels the requests through the one socket mentioned above. I then need to channel the response back to the correct socket. The only issue I'm having is maintaining an open socket, which I'm trying to create outside the listening-server code.
I've approached it with the Singleton pattern to create the socket:
var Singleton = (function() {
    var socket = null;

    function connectToHost(port, host) {
        socket = net.createConnection(port, host);
        return socket;
    }

    return {
        connectToHost: connectToHost
    };
})();
But from what I can see, on Event('end') that socket is no longer writable. If I reconnect the socket,
socket.on('end', function() {
    socket = Singleton.connectToHost(port, host);
});
the same thing will happen on Event('end').
How can I approach this so that I can create and maintain one socket connection?
A late response to this.
If I understand your question correctly, are you trying to do something like this?
socket.on('close', function() {
    socket.connect(port, host);
});
According to the net documentation for Node.js v0.12.0, socket.connect() can be used to open a connection on an existing socket. It might work, but it will hammer the server pretty badly, so a setTimeout might be wise.
I'm curious: what did you end up with in the end?
It sounds like you want a mux/demux (multiplexer/demultiplexer) in front of your server which, presumably, replies in such a way as the frontend can properly route the reply.
There's nothing in TCP to support this so you'll have to write it yourself or find one already written. http://www.google.com/search?q=tcp+multiplexer
This link looks promising: http://sourceforge.net/projects/tcpmultiplexer/
(Don't confuse what you're looking for with "tcpmux" on port #1; that's completely different.)