How to check whether a socket is closed in the LuaSocket library? - sockets

I am writing a server in the Lua programming language; the network layer is based on LuaSocket.
I cannot find any method in its reference manual to detect whether a socket is closed, except by trying to read data from it (the call then returns nil and the string 'close').
My code looks like this:
local socket = require 'socket'

local server = socket.tcp()
local port = 9527
server:bind('*', port)
local status, errorMessage = server:listen()
if status == 1 then
    printf('Server is launched successfully on port %u', port)
else
    printf('Server listen failed, error message is %s', errorMessage)
    return
end

local sockets = {server}
while true do
    local results = socket.select(sockets)
    for _, sock in ipairs(results) do
        if sock == server then
            local s = server:accept()
            callback('Connected', s)
            table.insert(sockets, s)
            printf('%s connected', s:getsockname())
        else
            -- trying to detect whether the socket is closed
            if sock:isClosed() then
                callback('Disconnected', sock)
                for i, s in ipairs(sockets) do
                    if s == sock then
                        table.remove(sockets, i)
                        break
                    end
                end
                printf('%s disconnected', sock:getsockname())
            else
                callback('ReadyRead', sock)
            end
        end
    end
end

except by trying to read data from it (the call then returns nil and the string 'close').
I'm not aware of any other method. Why doesn't checking the result of reading from the socket work for you?
You need to use settimeout to make the call non-blocking and check whether the returned error is 'closed' (note: 'closed', not 'close'). You can read one byte and store it, or you can try reading zero bytes.

Related

Sending messages to handler function fibers in TCP server

If the handler function passed to tcp_server() from the socket module runs as a fiber, is it possible to communicate with each tcp_connection through a fiber.channel?
Yes, it is.
#!/usr/bin/tarantool
local fiber = require('fiber')
local socket = require('socket')

local clients = {}

function rc_handle(s)
    -- You can save the socket reference in some table
    clients[s] = true
    -- You can create a channel.
    -- I recommend attaching it to the socket,
    -- so it'll be easier to collect garbage.
    s.channel = fiber.channel()
    -- You can also get a reference to the handling fiber.
    -- It'll help you tell alive clients from dead ones.
    s.fiber = fiber.self()
    s:write(string.format('Message for %s:%s: %s',
        s:peer().host, s:peer().port, s.channel:get()
    ))
    -- Don't forget to unref the client if it's done manually,
    -- or you could make the clients table a weak table.
    clients[s] = nil
end

server = socket.tcp_server('127.0.0.1', 3003, {
    name = 'srv',
    handler = rc_handle,
})

function greet_all(msg)
    -- So you can broadcast your message to all the clients
    for s, _ in pairs(clients) do
        s.channel:put(msg)
    end
end

require('console').start()
Of course, this snippet is far from perfect, but I hope it'll help you get the work done.

Why there's no isClosed function at Network.Socket

In the Network.Socket module there are functions like isConnected, isBound, etc., but there is no isClosed function to check whether a socket is closed.
I'm writing a module that needs to close a socket connection:
import qualified Network.Socket as Socket
-- ...
close Connection{connSock, serverSock} = do
  Socket.close connSock
  Socket.close serverSock
The problem is that if the socket is already closed (for instance on the client side), I'll get the following error:
epollControl: does not exist (No such file or directory)
So now I'm checking that the socket is open before closing it:
closeIfOpen sock = do
  let MkSocket _ _ _ _ stMV = sock
  st <- readMVar stMV
  case st of
    Closed -> return ()
    _      -> Socket.close sock
While this works, the lack of an isClosed function makes me wonder whether the code above has some potential problem I'm not aware of (for instance, it has a race condition; but my guess is that isConnected and isBound would have the same one...).

Do I need a write buffer for socket in go?

Suppose I have a TCP server on Linux; it creates a new goroutine for each new connection. When I want to write data to the TCP connection, should I do it just like this
conn.Write(data)
or do it in a goroutine dedicated to writing, like this
func writeRoutine(sendChan chan []byte) {
    for {
        select {
        case msg := <-sendChan:
            conn.Write(msg)
        }
    }
}
in case the network is busy.
In short, do I need a write buffer in Go, just like in C/C++, when writing to a socket?
PS: maybe I didn't explain the problem clearly.
1) I was talking about the server, meaning a TCP server running on Linux. It creates a new goroutine for each new connection, like this
listener, err := net.ListenTCP("tcp", tcpAddr)
if err != nil {
    log.Error(err.Error())
    os.Exit(-1)
}
for {
    conn, err := listener.AcceptTCP()
    if err != nil {
        continue
    }
    log.Debug("Accept a new connection ", conn.RemoteAddr())
    go handleClient(conn)
}
2) I think my problem isn't really about the code. As we know, when we use ssize_t write(int fd, const void *buf, size_t count); to write to a socket fd in C/C++, a TCP server necessarily needs a write buffer per socket in its own code, or only some of the data may get written. Do I have to do the same in Go?
You are actually asking two different questions here:
1) Should you use a goroutine per accepted client connection in my TCP server?
2) Given a []byte, how should I write to the connection?
For 1), the answer is yes. This is the type of pattern that Go is most suited for. If you take a look at the source code of net/http, you will see that it spawns a goroutine for each connection.
As for 2), you should do the same thing you would do in a C/C++ server: write, check how much was written, and keep writing until you're done, always checking for errors. Here is a code snippet showing how:
func writeConn(data []byte) error {
    var start, c int
    var err error
    for {
        if c, err = conn.Write(data[start:]); err != nil {
            return err
        }
        start += c
        if c == 0 || start == len(data) {
            break
        }
    }
    return nil
}
server [...] create[s] a new goroutine for a new connection
This makes sense because the handler goroutines can block without delaying the server's accept loop.
If you handled each request serially, any blocking syscall would essentially lock up the server for all clients.
goroutine especially for writing
This would only make sense in use cases where you're writing either a really big chunk of data or to a very slow connection and you need your handler to continue unblocked, for instance.
Note that this is not what is commonly understood as a "write buffer".

How can I send [SYN] with bare sockets?

I'm writing a bare-bones FTP client using just sockets on VxWorks, and I would now like to receive directory contents.
For that, I need to send a LIST request followed by a [SYN], which initiates the data transfer back to me, but I'm wondering how to do this with simple sockets.
My code to send the LIST just looks like this:
char lst[6] = "LIST";
lst[4] = 0x0d; // append CR/LF
lst[5] = 0x0a;
if (write (sFd, (char *) &lst, 6) == ERROR)
{
    perror ("write");
    close (sFd);
    return ERROR;
}
if (read (sFd, replyBuf, REPLY_MSG_SIZE) < 0) {
    perror ("read");
    close (sFd);
    return ERROR;
}
printf ("MESSAGE FROM SERVER:\n%s\n", replyBuf);
but it actually gets stuck in the read() until it times out, as the server doesn't respond unless I send a [SYN] to initiate the connection.
edit
Upon suggestion, I replaced the addition of 0x0d and 0x0a at the end of my string with \r\n directly in the string, which changed my code to:
char lst[6] = "LIST\r\n";
if (write (sFd, (char *) &lst, strlen(lst)) == ERROR)
{
    perror ("write");
    close (sFd);
    return ERROR;
}
if (read (sFd, replyBuf, REPLY_MSG_SIZE) < 0) {
    perror ("read");
    close (sFd);
    return ERROR;
}
printf ("MESSAGE FROM SERVER:\n%s\n", replyBuf);
but I get exactly the same result; my client does not send a [SYN] message. Why not, I wonder...?
For that I need to send a Request: LIST and following a [SYN] which initiates the data transfer back to me
No you don't. The [SYN] is sent automatically when you connect a TCP socket.
char lst[6] = "LIST";
The problem is here. It should be
char lst[] = "LIST\r\n";
All FTP commands are terminated by a line terminator, which is defined as \r\n.

epoll_wait() wakes up on EPOLLIN even I have consumed all reader buffer

Hello there, I am going mad with this issue, even though I'm using a simple pattern.
OK, I have an infinite loop in which I use epoll_wait on the server socket and on the already-connected sockets. All goes well when a new socket sends a connection request; my problem is when a connected socket (right now I am testing with just one socket sending a 390 KB packet) sends data: no matter whether I use EPOLLONESHOT or EPOLLET, after consuming the whole request buffer on that socket, re-arming the socket or receiving EAGAIN from recv(), epoll_wait always wakes up again with a wrong buffer!
My server works with a thread pool, but right now a single thread does all the work (to simplify testing):
while (TRUE) {
    int num = epoll_wait(efd, events, MAX_EVENTS, -1);
    for (int i = 0; i < num; i++) { // epoll() loop
        if (events[i].events & EPOLLERR || events[i].events & EPOLLHUP || !(events[i].events & EPOLLIN)) {
            fprintf(stderr, "epoll error on socket: closed\n");
            s = epoll_ctl(efd, EPOLL_CTL_DEL, events[i].data.fd, NULL);
        }
        else if (events[i].data.fd == serverSocket) {
            while (TRUE) {
                newInfoClient = server->m_bgServer->AcceptClient(&newSocketClient);
                if (newInfoClient == NULL) { // no more clients
                    break;
                }
                else {
                    printf("\nAccepted socket: %d", newSocketClient);
                    s = CSocket::MakeNonBlocking(newSocketClient);
                    if (s == -1)
                        abort();
                    event.data.fd = newSocketClient;
                    event.events = EPOLLIN | EPOLLET;
                    s = epoll_ctl(efd, EPOLL_CTL_ADD, newSocketClient, &event);
                    if (s == -1) {
                        abort();
                    }
                }
            }
        }
        else {
            AbstractTcpServerGame::DownloadTcpRequest(client);
        }
    }
}
I have just omitted some checks and other internal code.
AbstractTcpServerGame::DownloadTcpRequest(...)
This function is just a recv() loop that retrieves my own header and then the buffer body; just to verify the buffer is empty, outside the loop I call one more recv(), which returns -1 (errno = EAGAIN or EWOULDBLOCK).
After this, when I re-arm the socket with epoll_ctl() in DownloadTcpRequest() in the EPOLLONESHOT case, epoll_wait() wakes up again on the same socket!! This is my execution log:
New socket (6) request (errno 11) <--- when epoll_wait() emits EPOLLIN on socket 6
Download of 18 bytes (socket 6) <-- inside AbstractTcpServerGame::DownloadTcpRequest()
Download of 380k (socket 6) <-- another recv() loop to rescue body request
------------------- empty buffer on socket 6 ----------- <-- dummy recv to show empty buffer
New socket (6) request (errno 11)
Download of 18 bytes (socket 6)
Download of -1556256155 (socket 6)
Error on socket 6 (bad::alloc exception)
The client sends 398k (18-byte header + body), and all the data is received correctly as shown above; but after re-arming the socket, or when using EPOLLET, epoll_wait() generates another request, and I don't know where that data comes from. In fact, it is not correct!