Communicating with a WebSocket server using a TCP class

Would it be possible to send and receive data/messages/frames to and from a WebSocket server interface using a standard TCP class?
Or would I need to fundamentally change the TCP class?
If this is possible, could you show me a small example of what it could look like? (The programming language doesn't really matter.)
For example, I found this Node.js code, which implements a simple TCP client:
var net = require('net');

var client = new net.Socket();
client.connect(1337, '127.0.0.1', function() {
  console.log('Connected');
  client.write('Hello, server!');
});

client.on('data', function(data) {
  console.log('Received: ' + data);
});
Maybe you could show me what would have to be changed to make it communicate with a WebSocket.

WebSocket is a protocol that runs over TCP/IP, as detailed in the standard's draft.
So, in fact, it's all about using the TCP/IP connection (a TCP connection class / object) to implement the protocol's specific handshake and framing of data.
The Plezi Framework, written in Ruby, does exactly that.
It wraps the TCPSocket class in its own wrapper called Connection (or SSLConnection) and runs the data through a Protocol input layer (the WSProtocol and HTTPProtocol classes) to the app layer, and then through a Protocol output layer (the WSResponse and HTTPResponse classes) back to the Connection:
TCP/IP receive -> Protocol input layer ->
App -> Protocol output -> TCP/IP send
A WebSocket handshake always starts as an HTTP request. You can read Plezi's handshake code here*.
* The handshake method receives an HTTPRequest, an HTTPResponse and an app Controller, and uses them to send the required HTTP reply before switching to the WebSocket protocol.
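To make that step concrete, here is a minimal sketch in Node.js (not Plezi's code) of the core of the handshake: the server takes the client's Sec-WebSocket-Key header, appends the fixed GUID defined by RFC 6455, and sends back the Base64-encoded SHA-1 digest as the Sec-WebSocket-Accept header:

var crypto = require('crypto');

// Fixed GUID defined by RFC 6455 for the WebSocket handshake.
var WS_GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

// Compute the Sec-WebSocket-Accept value for a given Sec-WebSocket-Key.
function acceptKey(secWebSocketKey) {
  return crypto
    .createHash('sha1')
    .update(secWebSocketKey + WS_GUID)
    .digest('base64');
}

// The sample key from RFC 6455 yields 's3pPLMBiTxaQ9kYGzzhZRbK+xOo='.
console.log(acceptKey('dGhlIHNhbXBsZSBub25jZQ=='));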
Once the handshake is complete, each message received is made up of one or more message frames. You can read the frame decoding and message extraction code used in the Plezi Framework here.
Before messages are sent back, they are divided into one or more WebSocket protocol frames using this code and then sent over the TCP/IP connection.
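And as a rough illustration of the framing step (again a sketch based on RFC 6455, not Plezi's code), a short client-to-server text message can be packed like this in Node.js; client frames must be masked with a random 4-byte key:

var crypto = require('crypto');

// Build a single masked text frame (FIN set, opcode 0x1) for a short payload.
// Sketch only: payloads of 126 bytes or more need the extended length fields.
function encodeTextFrame(text) {
  var payload = Buffer.from(text, 'utf8');
  if (payload.length >= 126) throw new Error('sketch handles short payloads only');

  var mask = crypto.randomBytes(4);
  var frame = Buffer.alloc(2 + 4 + payload.length);

  frame[0] = 0x81;                  // FIN = 1, opcode = 0x1 (text)
  frame[1] = 0x80 | payload.length; // MASK = 1, 7-bit payload length

  mask.copy(frame, 2);              // 4-byte masking key
  for (var i = 0; i < payload.length; i++) {
    frame[6 + i] = payload[i] ^ mask[i % 4]; // mask the payload byte by byte
  }
  return frame;
}

// e.g. client.write(encodeTextFrame('Hello, server!')) once the handshake has completed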
There are plenty of examples out there if you google. I'm sure some of them will be in a programming language you prefer.

Related

How to get WebSocket close code from Akka HTTP?

We are using Akka HTTP to handle our WebSocket connections with the Akka Streams API. We use a Flow that pipes the incoming messages to a "connection actor". A snippet of the code is below:
val connection = system.actorOf(ConnectionActor.props())

val in = Flow[Message]
  .to(Sink.actorRef[Message](connection, WebSocketClosed))

val out = Source
  .actorRef[Message](500, OverflowStrategy.fail)
  .mapMaterializedValue(ws => connection ! WebSocketOpened(ws))

Flow.fromSinkAndSource(in, out)
When the WebSocket is closed, the connection actor is sent the WebSocketClosed message and we clean up internal resources. We now need to know why the connection was closed, in terms of the standard WebSocket CloseEvent codes.
Is there a way to get the close code from Akka HTTP and send it on to the connection actor so it can take the appropriate action?
I was able to handle a client (browser) error code in an akka-http 10.2.6 server.
My use case was to pipe incoming messages to a Sink created by ActorSink.actorRef[T](). When creating the sink, two callbacks, onCompleteMessage and onFailureMessage, can be set to convert a normal WebSocket close (code=1000) or an error into our custom message types.
I assume that a client close/error maps to stream completion/failure, which means other sinks should be able to handle close/error in a similar way.
my code
As it turns out, this is not presently possible in Akka HTTP. See the following GitHub issue:
https://github.com/akka/akka-http/issues/2458
It looks as though that issue will need to be addressed before this becomes possible.

How to publish data over a TCP socket instead of UDP

I have been trying to use PubNub in order to send a data stream between peers. What is happening, though, is that the message size on one side is different from the size on the other side, even though the number of messages sent and received is the same. What I suspect is that somehow part of the packets are lost.
pubnub.publish({
  channel: 'my_channel',
  message: {
    packet: array_of_packets[counter_array_of_packets],
    which_packet_is: counter_array_of_packets,
    payload_size: calculate_payload_size('my_channel', array_of_packets[counter_array_of_packets])
  },
  callback: function (m) { console.log(m); }
});
pubnub.subscribe({
  channel: 'my_channel',
  message: function (m) { wait_(m); },
  uuid: 'Mitsos',
  error: function (error) {
    // Handle the error here
    console.log(JSON.stringify(error));
  }
});
The function used to calculate the size is:
function calculate_payload_size(channel, message) {
  return encodeURIComponent(
    channel + JSON.stringify(message)
  ).length + 100;
}
So how can I use the two functions above, publish and subscribe, in a way that uses TCP (reliable transmission)?
(If it is of any help, here is a working PubNub example, index.html, where packets reach the other side correctly, though I can't tell whether it uses TCP anywhere: link.)
All PubNub client libraries communicate over a TCP socket connection only.
If you are using the PubNub JavaScript, Java or Objective-C SDK, then the SDK will keep the TCP socket connection open for you automatically after you have subscribed to a data channel. This guide on http-streaming-over-tcp-with-telnet-example shows an easy way to use Telnet to stream your JSON message payloads over a TCP socket.
You can keep a TCP socket active and alive indefinitely with PubNub's unlimited-TTL socket session policy by writing an initial data payload over the socket after you've established the TCP connection. The video Keeping a TCP Socket Connection Open on your first Network Call walks you through the steps of keeping a TCP socket connection open.

Can ZeroMQ be used to accept traditional socket requests?

I'm trying to rewrite one of our old servers using ZeroMQ. For now I have the following server setup (which works for ZMQ requests):
using (var context = ZmqContext.Create())
using (var server = context.CreateSocket(SocketType.REP)) {
    server.Bind("tcp://x.x.x.x:5705");
    while (true) { ... }
}
This kind of setup works fine if I use the ZMQ client library to connect with context.CreateSocket(SocketType.REQ).
But unfortunately we've got a lot of legacy code that needs to connect to this server, and those sockets are created using the .NET socket libraries:
Socket = new Socket(ipAddress.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
Socket.Connect(ipAddress, port);
Is there a way to write a ZeroMQ Server to accept these traditional .net socket connections?
You can achieve this using ZMQ_STREAM sockets.
Please note that since ZeroMQ 4.x, the ROUTER socket's RAW option has been deprecated in favour of a new ZMQ_STREAM socket type, which works the same way as ROUTER + RAW.
It seems bound to keep evolving, though.
I recently tried ZMQ_STREAM sockets in version 4.0.1.
You can open one and use zmq_recv until you receive the whole message (you have to check it is complete yourself), or zmq_msg_recv to let ZeroMQ handle it. You will receive an identity message part, just like the identity you would get from a ROUTER socket, directly followed by exactly ONE body part. There is no empty delimiter between them like there would be with a REQ socket talking to a ROUTER socket, so if you route these messages on, be sure to add it yourself.
Beware, though: if there is latency on the other end, or if your message exceeds ZeroMQ's ZMQ_STREAM buffers (mine are 8192 bytes long), your message can be interpreted by ZeroMQ as a series of messages.
In that case, you will receive that many separate ZeroMQ messages, each including both the identity part and a body part, and it is your job to aggregate them, knowing that if several clients are talking to the STREAM socket, their fragments might get mixed up. I personally use a hash table keyed by the binary identity, and delete the entry from the table once I know the message is complete and has been sent on to the next node.
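To illustrate the aggregation idea in a binding-agnostic way (a sketch only: reassembleChunk is a hypothetical helper, and the 4-byte length prefix stands in for whatever your own protocol uses to mark message boundaries):

// Buffer partial bodies per client identity until a complete message is seen.
var pending = new Map();

function reassembleChunk(identity, chunk, deliver) {
  var key = identity.toString('hex');
  var buffered = pending.has(key)
    ? Buffer.concat([pending.get(key), chunk])
    : chunk;

  if (buffered.length < 4) {            // not enough data for the length prefix yet
    pending.set(key, buffered);
    return;
  }

  var expected = 4 + buffered.readUInt32BE(0);
  if (buffered.length < expected) {     // still waiting for more fragments
    pending.set(key, buffered);
    return;
  }

  pending.delete(key);
  deliver(identity, buffered.slice(4, expected)); // complete message body
  // A full implementation would also handle any bytes left over past "expected".
}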
Sending through a ZMQ_STREAM with zmq_msg_send or zmq_send works fine as is.
You probably have to use ZMQ's RAW socket type (instead of REP) to accept connections from, and read data sent by, clients that don't use ZMQ-specific framing.
HTTP Server in C (from Pieter's blog)
http://hintjens.com/blog:42
RAW Socket type info
https://github.com/hintjens/libzmq/commit/777c38ae32a5d1799b3275d38ff8d587c885dd55

Wrapping a socket with a WebSocket

Is it possible to use a web server with WebSockets as a wrapper around another server, to pass messages from the "real" server to a web client and back?
I'm curious about this because I have a game server written in Ada that has an OS-tied client. I would like to swap this client for a web client based on JavaScript, so that the game can be played in a normal browser. What can be done?
That is the purpose of websockify. It is designed to bridge between WebSocket clients and regular TCP servers. It was created as part of noVNC, which is an HTML5 VNC app that can connect to normal VNC servers. However, websockify is generic and there are now many other projects using it.
Disclaimer: I created websockify and noVNC.
You can easily accomplish this by using AWS (the Ada Web Server):
http://libre.adacore.com/tools/aws/
There's support for WebSockets in AWS, and you can make use of its excellent socket packages (AWS.Net) for normal socket support.
WebSockets are, contrary to what some people believe, not plain sockets. The raw data is encapsulated and masked by the WebSocket protocol, which isn't widely supported yet. That means you can't communicate directly via WebSockets with an application that wasn't designed for it.
When you have an application that uses a protocol based on normal sockets, and you want to talk to it over WebSockets, there are two options.
Either you use a WebSocket gateway, which unpacks/packs the WebSocket traffic and forwards it as plain socket traffic to the application. This has the advantage that you needn't modify the application, but the disadvantage that it hides the real IP address of the client, which may or may not be a problem for certain applications.
Or you implement WebSocket support in your application directly. This can be done by having the server listen on two different ports, one for normal connections and one for WebSocket connections. Any data received or sent through the WebSocket port passes through your WebSocket implementation after receiving / before sending, and is otherwise processed by the same routines.
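A rough sketch of the first option (a gateway) in Node.js, assuming the third-party ws package, with GAME_HOST / GAME_PORT standing in for the address of the existing game server:

var net = require('net');
var WebSocket = require('ws'); // third-party package: npm install ws

var GAME_HOST = '127.0.0.1'; // placeholder for the real game server
var GAME_PORT = 4000;

var wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', function (ws) {
  // One plain TCP connection to the game server per browser client.
  var upstream = net.connect(GAME_PORT, GAME_HOST);

  // Browser -> game server: payloads arrive already unframed by ws.
  ws.on('message', function (data) {
    upstream.write(data);
  });

  // Game server -> browser: ws frames the data for us.
  upstream.on('data', function (chunk) {
    ws.send(chunk);
  });

  // Tear down the other side when one side closes or errors.
  ws.on('close', function () { upstream.destroy(); });
  upstream.on('close', function () { ws.close(); });
  upstream.on('error', function () { ws.close(); });
});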
The Kaazing HTML5 Gateway is a great way of bringing your TCP-based protocol to a web client. The Kaazing gateway basically takes your protocol running on top of TCP and converts it to WebSocket so that you can access the protocol in the client. You would still need to write a JavaScript protocol library for the protocol that your back end uses. But if you can work with the protocol on top of TCP, it's not hard to do the same in JavaScript.
I used the following Ruby code to wrap my sockets. The code was adapted from em-websocket-proxy. There may be some project-specific bits in it, but generally switching remote_host and remote_port and connecting to localhost:3000 should set you up with a new connection to your server through a WebSocket.
require 'rubygems'
require 'em-websocket'
require 'sinatra/base'
require 'thin'
require 'haml'
require 'socket'

class App < Sinatra::Base
  get '/' do
    haml :index
  end
end

class ServerConnection < EventMachine::Connection
  def initialize(input, output)
    super
    @input = input
    @output = output
    @input_sid = @input.subscribe { |msg| send_data msg + "\n" }
  end

  def receive_data(msg)
    @output.push(msg)
  end

  def unbind
    @input.unsubscribe(@input_sid)
  end
end

# Configuration of the remote server being wrapped
options = { :remote_host => 'your-server', :remote_port => 4000 }

EventMachine.run do
  EventMachine::WebSocket.start(:host => '0.0.0.0', :port => 8080) do |ws|
    ws.onopen {
      output = EM::Channel.new
      input  = EM::Channel.new
      output_sid = output.subscribe { |msg| ws.send msg }
      EventMachine::connect options[:remote_host], options[:remote_port], ServerConnection, input, output
      ws.onmessage { |msg| input.push(msg) }
      ws.onclose {
        output.unsubscribe(output_sid)
      }
    }
  end
  App.run!({ :port => 3000 })
end
Enjoy! And ask if you have questions.

Nodejs Websocket Close Event Called...Eventually

I've been having some problems with the code below, which I've pieced together. All the events work as advertised; however, when a client drops offline without first disconnecting, the close event doesn't get called right away. If you give it a minute or so, it eventually gets called. Also, I find that if I keep sending data to the client, it picks up the close event faster, but never right away. Lastly, if the client disconnects gracefully, the end event is called just fine.
I understand this is related to the other listener events like upgrade and ondata.
I should also state that the client is an embedded device.
Client HTTP request:
GET /demo HTTP/1.1\r\n
Host: example.com\r\n
Upgrade: Websocket\r\n
Connection: Upgrade\r\n\r\n
// Node.js server (I'm using version 6.6)
var http = require('http');
var net = require('net');
var sys = require("util");

var srv = http.createServer(function (req, res) {
});

srv.on('upgrade', function(req, socket, upgradeHead) {
  socket.write('HTTP/1.1 101 Web Socket Protocol Handshake\r\n' +
               'Upgrade: WebSocket\r\n' +
               'Connection: Upgrade\r\n' +
               '\r\n\r\n');
  sys.puts('upgraded');

  socket.ondata = function(data, start, end) {
    socket.write(data.toString('utf8', start, end), 'utf8'); // echo back
  };

  socket.addListener('end', function () {
    sys.puts('end'); // works fine
  });

  socket.addListener('close', function () {
    sys.puts('close'); // eventually gets here
  });
});

srv.listen(3400);
Can anyone suggest a solution to pick up the close event immediately? I am trying to keep this simple without the use of modules. Thanks in advance.
The close event will be called once the TCP socket connection is closed by one end or the other, with occasional complications in the rare cases where the system does not realise the socket has already been closed. As WebSocket connections start from an HTTP request, the server might simply keep the connection alive until the socket times out, and that is what introduces the delay.
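One common way to surface a dead peer sooner, sketched here with Node's standard net.Socket API rather than taken from this answer, is to put an idle timeout or TCP keep-alive on the upgraded socket:

srv.on('upgrade', function (req, socket, upgradeHead) {
  // Ask the OS to send TCP keep-alive probes after 30 seconds of silence.
  socket.setKeepAlive(true, 30 * 1000);

  // Treat 60 seconds without any data as a dead connection.
  socket.setTimeout(60 * 1000);
  socket.on('timeout', function () {
    sys.puts('idle timeout, destroying socket');
    socket.destroy(); // forces the 'close' event right away
  });

  // ... handshake and echo logic as in the question ...
});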
In your case you are trying to perform the handshake and then send data back and forth, but the WebSocket protocol is a bit more complex than that.
The handshake process requires a security procedure to validate both ends (server and client), carried out through HTTP-compatible headers. But the different draft versions supported by different platforms and browsers implement it in different ways, so your implementation should take this into account and follow the official WebSocket specification for the versions you need to support.
Then, data sent and received via WebSockets is not a plain string. Data sent over the WebSocket protocol goes through a data-framing layer, which adds a header to each message you send. This header carries details about the message: the opcode, masking (for client-to-server messages), the length, and more. Data framing again depends on the WebSocket version, so implementations will vary slightly.
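For a feel of that framing layer, here is a minimal server-side sketch (RFC 6455-style frames only, short payloads only, not a full implementation) that unmasks a text frame received from a client:

// Decode a single client frame with a 7-bit payload length (< 126 bytes).
// Longer frames use extended length fields that this sketch ignores.
function decodeShortFrame(buf) {
  var opcode = buf[0] & 0x0f;         // 0x1 = text, 0x8 = close, 0x9 = ping, ...
  var masked = (buf[1] & 0x80) !== 0; // client-to-server frames must be masked
  var length = buf[1] & 0x7f;
  if (length >= 126 || !masked) return null; // out of scope for this sketch

  var mask = buf.slice(2, 6);
  var payload = Buffer.alloc(length);
  for (var i = 0; i < length; i++) {
    payload[i] = buf[6 + i] ^ mask[i % 4]; // unmask byte by byte
  }
  return { opcode: opcode, payload: payload.toString('utf8') };
}

// e.g. inside socket.ondata: console.log(decodeShortFrame(data.slice(start, end)));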
I would encourage you to use existing libraries, as they already implement everything you need in a clean manner and have been used extensively in commercial projects.
As your client is an embedded platform and the server, I assume, is Node.js as well, it is easy to use the same library on both ends.
The best fit here would be ws - an actual, pure WebSocket implementation.
Socket.IO is not a good fit for your case, as it is a much more complex and heavy library that supports multiple transports with fallbacks and adds abstractions that might not be what you are looking for.