I'm trying to connect to a secure (SSL) WebSocket using https://github.com/meh/elixir-socket:
Socket.Web.connect! "stream-api.betfair.com", secure: true
But I'm getting this error:
** (MatchError) no match of right hand side value: {:http_error, "{\"op\":\"connection\",\"connectionId\":\"203-270420013200-944388\"}\r\n"}
(socket 0.3.13) lib/socket/web.ex:251: Socket.Web.connect!/3
But it's not an error. The server accepts my connection, yet elixir-socket returns an error. So what is wrong?
The error in question happens here, meaning Socket.Stream.recv!/2 returned something unexpected.
It is delegated to Socket.Stream.Protocol.
Depending on whether you use SSL or not, it comes from here or from here.
This library is ancient and AFAICT very strict. The only way forward I can think of would be to fork it, examine the responses you expect to be correct, and amend the handling of Socket.Stream.recv!/2 to something that meets your requirements:
response =
  case Socket.Stream.recv!(client, global) do
    {:http_response, _, 101, _} -> :ok
    {:http_error, _json} -> :ok
    _ -> :error
  end
And handle it accordingly. Why your server responds in such a weird way is out of scope here.
I'm trying to create a server that sets up a Unix socket and listens for clients which send/receive data. I've made a small repository to recreate the problem.
The server runs and it can receive data from the clients that connect, but I can't get the server response to be read from the client without an error on the server.
I have commented out the offending code on the client and server. Uncomment both to recreate the problem.
When the code to respond to the client is uncommented, I get this error on the server:
thread '' panicked at 'called Result::unwrap() on an Err value: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/main.rs:77:42
MRE Link
Your code calls set_read_timeout to set the timeout on the socket. Its documentation states that on Unix it results in a WouldBlock error in case of timeout, which is precisely what happens to you.
As to why the read times out, the likely reason is that the server calls stream.read_to_string(&mut response), which reads the stream until end-of-file. On the other hand, your client calls write_all() followed by flush(), and (after uncommenting the offending code) attempts to read the response. But the attempt to read the response means that the stream is not closed, so the server will wait for EOF, and you have a deadlock on your hands. Note that none of this is specific to Rust; you would have the exact same issue in C++ or Python.
To fix the issue, you need to use a protocol in your communication. A very simple protocol could consist of first sending the message size (in a fixed format, perhaps 4 bytes in length) and only then the actual message. The code that reads from the stream would do the same: first read the message size and then the message itself. Even better than inventing your own protocol would be to use an existing one, e.g. to exchange messages using serde.
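The framing itself is only a few lines in any language. Here is a rough sketch of the sending side (written in Haskell; the Rust equivalent is a write_all of the four length bytes followed by a write_all of the payload, with read_exact used twice on the receiving side). writeMessage is just an illustrative name:

import qualified Data.ByteString      as BS
import qualified Data.ByteString.Lazy as BL
import           Data.Binary.Put      (putWord32be, runPut)
import           System.IO            (Handle, hFlush)

-- Send one message as a 4-byte big-endian length prefix followed by the payload.
-- The reader mirrors this: read 4 bytes, decode the length, then read exactly
-- that many bytes. Neither side ever has to wait for end-of-file.
writeMessage :: Handle -> BS.ByteString -> IO ()
writeMessage h payload = do
  BS.hPut h (BL.toStrict (runPut (putWord32be (fromIntegral (BS.length payload)))))
  BS.hPut h payload
  hFlush h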
I need to track the status of a specific port: "LISTENING", "CLOSE_WAIT", or "ESTABLISHED".
I already have a solution that shells out to the netstat command:
local command = 'netstat -anp tcp | find ":1926 " '
local h = io.popen(command,"rb")
local result = h:read("*a")
h:close()
print(result)
if result:find("ESTABLISHED") then
print("Ok")
end
But I need to do the same with the Lua socket library.
Is it possible?
As @Peter said, netstat uses the proc file system to gather network information, particularly port bindings. LuaSocket has its own methods to retrieve connection information. For example,
Listening
you can use master:listen(backlog) which specifies the socket is willing to receive connections, transforming the object into a server object. Server objects support the accept, getsockname, setoption, settimeout, and close methods. The parameter backlog specifies the number of client connections that can be queued waiting for service. If the queue is full and another client attempts connection, the connection is refused. In case of success, the method returns 1. In case of error, the method returns nil followed by an error message.
The following methods will return a string with the local IP address and a number with the port. In case of error, the method returns nil.
master:getsockname()
client:getsockname()
server:getsockname()
There also exists this method:
client:getpeername(), which returns a string with the IP address of the peer, followed by the port number that peer is using for the connection. In case of error, the method returns nil.
For "CLOSE_WAIT", "ESTABLISHED", or other connection information you want to retrieve, please read the Official Documentation. It has everything you need with concise explanations of methods.
You can't query the status of a socket owned by another process using the sockets API, which is what LuaSocket uses under the covers.
In order to access information about another process, you need to query the OS instead. Assuming you are on Linux, this usually means looking at the proc filesystem.
I'm not hugely familiar with Lua, but a quick Google gives me this project: https://github.com/Wiladams/lj2procfs. I think this is probably what you need, assuming they have written a decoder for the relevant /proc/net files you need.
As for which file? If it's just the status, I think you want the tcp file as covered in http://www.onlamp.com/pub/a/linux/2000/11/16/LinuxAdmin.html
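To give an idea of what is in that file: each data line of /proc/net/tcp holds the local address as hexIP:hexPort and the connection state as a two-digit hex code in the st column (the codes come from the kernel's include/net/tcp_states.h). A rough sketch of reading it, shown in Haskell purely to illustrate the format; in Lua the same thing is a short loop over io.lines with string.match:

import Data.List   (isSuffixOf)
import Text.Printf (printf)

-- A few of the codes used in the st column of /proc/net/tcp.
stateName :: String -> String
stateName "01" = "ESTABLISHED"
stateName "06" = "TIME_WAIT"
stateName "08" = "CLOSE_WAIT"
stateName "0A" = "LISTEN"
stateName code = "OTHER(" ++ code ++ ")"

-- Print the state of every IPv4 socket bound to the given local port.
-- A data line looks like "  0: 0100007F:0786 00000000:0000 0A ...";
-- the port is hexadecimal, so 1926 appears as 0786.
portStates :: Int -> IO ()
portStates port = do
  rows <- map words . drop 1 . lines <$> readFile "/proc/net/tcp"
  let localPort = ':' : printf "%04X" port
  mapM_ (putStrLn . stateName . (!! 3))
        [ row | row <- rows, length row > 3, localPort `isSuffixOf` (row !! 1) ]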
Let's say I have a bunch of clients who all have their own numeric IDs. Each of them connect to my server through SockJS, with something like:
var sock = new SockJS("localhost:8080/sock/100");
In this case, 100 is that client's numeric ID, but it could be any number with any number of digits. How can I set up a SockJS router in my server-side code that allows for the client to set up a SockJS connection through a URL that varies based on what the user's ID is? Here's a simplified version of what I have on the server-side right now:
public void start() {
    HttpServer server = vertx.createHttpServer();
    Router router = Router.router(vertx);
    SockJSHandler sockHandler = SockJSHandler.create(vertx);
    router.route("/sock/*").handler(sockHandler);
    server.requestHandler(router::accept).listen(8080);
}
This works fine if the client connects through localhost:8080/sock, but it doesn't seem to work if I add "/100" to the end of the URL. Instead of getting the default "Welcome to SockJS!" message, I just get "Not Found." I tried setting a path regex and I got an error saying that sub-routers can't use pattern URLs. So is there some way to allow for the client to connect through a variable URL, whether it's /sock/100, /sock/15, or /sock/1123123?
Ideally, I'd be able to capture the numeric ID that the client uses (like with routing REST API calls, when you could add "/:ID" to the routing path and then capture the value that the client uses), but I can't find anything that works for SockJS connections.
Since it seems that SockJS connections are considered to be the same as sub-routers, and sub-routers can't have pattern URLs, is there some work-around for this? Or is it not possible?
Edit
Just to add to what I said above, I've tried a couple different things which haven't seemed to work yet.
I tried setting up an initial, generic main router, which then re-directs to the SockJS handler. Here's the idea I had:
router.routeWithRegex("/sock/\\d+").handler(context -> {
    context.reroute("/final");
});
router.route("/final").handler(SockJSHandler.create(vertx));
With this, if I access localhost:8080/sock/100 directly through the browser, it takes me to the "Welcome to SockJS!" page, and the Chrome network tab shows that a websocket connection has been created when I test it through my client.
However, I still get an error because the websocket shows a 200 status code rather than 101, and I'm not 100% sure as to why that is happening, but I would guess that it has to do with the response that the initial handler produces. If I try to set the initial handler's status code to 101, I still get an error, because then the initial handler fails.
If there's some way to work around these status codes (it seems like the websocket is expecting 101 but the initial handler is expecting 200, and I think I can only pick one), then that could potentially solve this. Any ideas?
We are writing a message broker in Haskell (HMB). Messages therefore have to be parsed (Data.Binary) after they are received from the socket (Network.Socket). We've been testing on loopback (localhost) so far, for both producing and parsing messages, and this worked quite well. But if we benchmark by producing messages from another machine, we run into problems: suddenly the parser does not have enough bytes to parse.
The first 4 bytes of each message define the length of the message and thus describe the message to be parsed. As hinted above, we do the parsing with Data.Binary, so it is lazy. For testing purposes we switched the parsing of the first 4 bytes to strict by using the cereal library. This showed the same problem. We even tried to parse the requests completely with cereal only, and the problem still remains.
In the code you'll see that we do threading. However, we also tried a single-threaded setup without a channel, but this didn't solve the problem either.
Here is a part of the code (Thread1) where the received bytes are written to a channel to be further consumed/parsed. (As mentioned, nothing changes if we omit channeling and directly parse input):
runConnection :: (Socket, SockAddr) -> RequestChan -> Bool -> IO ()
runConnection conn chan False = return ()
runConnection conn chan True = do
  r <- recvFromSock conn
  case r of
    Left e -> do
      handleSocketError conn e
      runConnection conn chan False
    Right input -> do
      threadDelay 5000 -- THIS FIXES THE PROBLEM!?
      writeToReqChan conn chan input
      runConnection conn chan True
Here is the part (Thread2) where the input is being parsed:
runApiHandler :: RequestChan -> ResponseChan -> IO ()
runApiHandler rqChan rsChan = do
  (conn, req) <- readChan rqChan
  case readRequest req of -- readRequest IS THE PARSER
    Left (bs, bo, e) -> handleHandlerError conn $ ParseRequestError e
    Right (bs, bo, rm) -> do
      res <- handleRequest rm
      case res of
        Left e -> handleHandlerError conn e
        Right bs -> writeToResChan conn rsChan bs
  runApiHandler rqChan rsChan
Now I have figured out that if the parsing is delayed a bit (see the threadDelay in the first code block), everything works fine. Which basically means the parser doesn't wait for the bytes received from the socket.
Why is that? Why does the parser not wait for the socket to have enough bytes? Is there a general mistake in our setup?
I would bet that the problem has nothing to do with the parser but is instead due to the blocking semantics of UNIX sockets.
While a loopback interface will likely pass the packet directly from the sender to the receiver, an Ethernet interface may need to break up the packet to fit in the Maximum Transmission Unit (MTU) of the link. This is known as packet fragmentation.
The len argument to the recv system call is merely the upper bound on the received length (e.g. the size of the target buffer); the call may produce less data than you ask for. To quote the manpage:
If no messages are available at the socket, the receive calls wait for a message to arrive, unless the socket is nonblocking (see fcntl(2)), in which case the value -1 is returned and the external variable errno is set to EAGAIN or EWOULDBLOCK. The receive calls normally return any data available, up to the requested amount, rather than waiting for receipt of the full amount requested.
For this reason, you may need multiple recv calls to retrieve the entire packet. Your example works if you delay the recv as the operating system can reassemble the original packet since all fragments have arrived by the time it is requested.
As meiersi pointed out, a variety of streaming I/O libraries have been developed in the Haskell world for solving this problem, among others. These include pipes, conduit, and io-streams. Depending upon your goals, one of these may be a natural way to handle this issue.
You might want to try the socket support in conduit-extra combined with binary-conduit to properly handle the parsing of the chunked streaming, which happens due to the reasons pointed out by bgamari.
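As a rough sketch of what that combination might look like: the RequestMessage type, the handleMessage function, and the port below are placeholders standing in for HMB's real request type, handler, and listening port. The important part is that conduitDecode buffers and re-chunks the bytes coming off the socket, so the Binary parser only ever sees complete messages:

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString                   as BS
import           Data.Binary                       (Binary (..))
import           Data.Binary.Get                   (getByteString, getWord32be)
import           Data.Binary.Put                   (putByteString, putWord32be)
import           Data.Conduit                      (runConduit, (.|))
import qualified Data.Conduit.List                 as CL
import           Data.Conduit.Network              (appSource, runTCPServer, serverSettings)
import           Data.Conduit.Serialization.Binary (conduitDecode)

-- Placeholder request type: a 4-byte big-endian length, then the body.
newtype RequestMessage = RequestMessage BS.ByteString
  deriving Show

instance Binary RequestMessage where
  put (RequestMessage body) = do
    putWord32be (fromIntegral (BS.length body))
    putByteString body
  get = do
    len <- getWord32be
    RequestMessage <$> getByteString (fromIntegral len)

-- Placeholder for the real handler (the role handleRequest plays above).
handleMessage :: RequestMessage -> IO ()
handleMessage = print

runBroker :: IO ()
runBroker = runTCPServer (serverSettings 9092 "*") $ \appData ->
  runConduit $ appSource appData .| conduitDecode .| CL.mapM_ handleMessage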
First of all, consider yourself lucky to observe this. On many platforms perhaps only one out of a thousand packets exhibit this behaviour, causing a lot of such (sorry) bad networking code to fail seldom and randomly.
The problem is that you start processing before the data is ready. Instead of the threadDelay (which introduces a permanent delay and might not be long enough in all cases), the solution is to make sure you have at least one item/message/packet to process before you start processing it. Your protocol, where the first 32-bit word contains the length, is perfect for this. Read data until you have at least 4 bytes (the length). Then read data until you have the required number of bytes. If any call to recvFromSock returns less than the required number, call it again to get some more. Remember to also handle the case of 0 bytes; this means the other party closed the connection.
I have implemented this for a similar protocol (SMPP, packets also starts with the length) and it works perfectly.
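For reference, a minimal sketch of that read-until-you-have-enough loop, using Network.Socket.ByteString and Data.Binary.Get directly (recvExactly and recvRequest are illustrative names, and the 4-byte length is assumed to be big-endian):

import qualified Data.ByteString           as BS
import qualified Data.ByteString.Lazy      as BL
import           Data.Binary.Get           (getWord32be, runGet)
import           Network.Socket            (Socket)
import           Network.Socket.ByteString (recv)

-- Keep calling recv until exactly n bytes have been collected, or the
-- peer closes the connection (recv returns an empty ByteString).
recvExactly :: Socket -> Int -> IO (Maybe BS.ByteString)
recvExactly sock n = go BS.empty
  where
    go acc
      | BS.length acc >= n = return (Just acc)
      | otherwise = do
          chunk <- recv sock (n - BS.length acc)
          if BS.null chunk
            then return Nothing -- connection closed by the other party
            else go (BS.append acc chunk)

-- Read one complete request: the 4-byte length, then exactly that many bytes.
recvRequest :: Socket -> IO (Maybe BS.ByteString)
recvRequest sock = do
  mLen <- recvExactly sock 4
  case mLen of
    Nothing       -> return Nothing
    Just lenBytes -> do
      let len = fromIntegral (runGet getWord32be (BL.fromStrict lenBytes))
      recvExactly sock len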
I'm writing a REST based web service, and I'm trying to figure out the best way to handle error conditions.
Currently the service is returning HTTP Errors, such as Bad Request, but how can I return extra information to give developers using the web service an idea what they're doing wrong?
For example: creating a user with a null username returns an error of Bad Request. How can I add that the error was caused by a null username parameter?
According to the HTTP spec, the text that comes after the three-digit response code, the "Reason-Phrase", can only be replaced with a logical equivalent. So you can't respond with 400 null user and expect anything useful to happen. Indeed, the client is not required to examine or display the Reason-Phrase.
In general, the HTTP response entity (typically the page that accompanies the response) should contain information useful to the client to guide them forward, even when the response is an error. On the web, most such errors are HTML, and are devoid of machine readable information, but most browsers do show the error to the user (and SO's error page is pretty good!).
So for a primarily machine readable resource you have two options:
Pass a human readable message anyway. Return 400 Bad Request with an HTML response, which the client may opt to show to the user. It's dead easy, but it's a bit like throwing an unchecked exception: it passes all the hard work to the client, or indeed the end user.
Allow clients to recover. Return 400 Bad Request with a machine readable response that is part of your API, so clients can recover from known error conditions. This is harder: like throwing a checked exception, it becomes part of the API, and it allows clients to recover gracefully if they want to.
You could even make the server support both scenarios by defining a media type for the machine readable error recovery document and allowing clients to "accept" it: Accept: application/atom+xml, application/my.proprietary.errors+json
Clients that forget the mandatory field can then opt in to machine readable or human readable errors by choosing whether to Accept the error media type.
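For the machine readable option, the error document itself can be very small. One possible shape, tied to the null-username example from the question (the field names here are purely illustrative, not part of any standard):

{
  "error": "validation_failed",
  "fields": [
    { "name": "username", "reason": "must not be null" }
  ]
}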
It's stated in the HTTP spec that most error codes should be accompanied by some basic text clarifying why the error is being returned. The Java Servlet spec defines HttpServletResponse.sendError(int sc, String msg) for this purpose.
String desc = "my Description";
throw new WebApplicationException(Response.status(Status.BAD_REQUEST).entity(desc).type("text/plain").build());