I'm new to Erlang.
My problem is that when I start the client for the first time, everything seems okay: I get the sorted list <<1,5,72,97,108,108,111>>.
But the second time, the client does not receive the sorted list; I think this is because the socket is closed. The output from the client is "Connection closed".
Here is my code:
Client
-module(client).
-export([client/0]).
client() ->
    case gen_tcp:connect("localhost", 6000, [{mode, binary}]) of
        {ok, Sock} ->
            Data = [1, "Hallo", 5],
            gen_tcp:send(Sock, Data),
            receive
                {tcp, _, Bin} ->
                    io:fwrite("Received sorted list from server: ~w~n", [Bin]);
                {tcp_closed, _} ->
                    io:fwrite("Connection closed"),
                    gen_tcp:close(Sock)
            end;
        {error, _} ->
            io:fwrite("Connection error! Quitting...~n")
    end.
Server
-module(server).
-export([server/0]).
-import(mergeSort,[do_recv/1]).
%creates a tcp socket on Port 6000
server() ->
    {ok, Listen} = gen_tcp:listen(6000, [{keepalive, true}, %send keepalive packets
                                         {reuseaddr, true}, %reuse address
                                         {active, once},    %socket is active once
                                         {mode, list}]),    %list mode traffic
    spawn(fun() -> parallel_connection(Listen) end).
%server is listening
%accepts the connection
%starts MergeSort
parallel_connection(Listen) ->
    io:fwrite("Listening connections..~n"),
    {ok, Socket} = gen_tcp:accept(Listen),
    io:fwrite("Connection accepted from ~w~n", [Socket]),
    spawn(fun() -> parallel_connection(Listen) end),
    do_recv(Socket).
MergeSort
-module(mergeSort).
-export([do_recv/1]).
merge_sort(List) -> m(List, erlang:system_info(schedulers)).

%break condition
m([L], _) ->
    [L];
%for more than one scheduler
m(L, N) when N > 1 ->
    {L1, L2} = lists:split(length(L) div 2, L),
    %self() returns Pid, make_ref() returns almost unique reference
    {Parent, Ref} = {self(), make_ref()},
    %starts a new process for each half of the list
    %and sends Message to Parent
    spawn(fun() -> Parent ! {l1, Ref, m(L1, N-2)} end),
    spawn(fun() -> Parent ! {l2, Ref, m(L2, N-2)} end),
    {L1R, L2R} = receive_results(Ref, undefined, undefined),
    lists:merge(L1R, L2R);
m(L, _) ->
    {L1, L2} = lists:split(length(L) div 2, L),
    lists:merge(m(L1, 0), m(L2, 0)).

receive_results(Ref, L1, L2) ->
    receive
        {l1, Ref, L1R} when L2 == undefined -> receive_results(Ref, L1R, L2);
        {l2, Ref, L2R} when L1 == undefined -> receive_results(Ref, L1, L2R);
        {l1, Ref, L1R} -> {L1R, L2};
        {l2, Ref, L2R} -> {L1, L2R}
    after 5000 -> receive_results(Ref, L1, L2)
    end.
do_recv(Socket) ->
    %{ok, {Address, _}} = inet:peername(Socket),
    receive
        {tcp, Socket, List} ->
            try
                Data = merge_sort(List),
                gen_tcp:send(Socket, Data),
                io:fwrite("Sent sorted list to ~w | Job was done! Goodbye :)~n", [Socket]),
                gen_tcp:close(Socket)
            catch
                _:_ ->
                    io:fwrite("Something went wrong with ~w | Worker terminated and connection closed!~n", [Socket]),
                    gen_tcp:close(Socket)
            end;
        {tcp_closed, _} ->
            io:fwrite("Connection closed ~n");
        {error, _} ->
            io:fwrite("Connection error from ~w | Worker terminated and connection closed!~n", [Socket]),
            gen_tcp:close(Socket)
    end.
Can anyone help me?
When you call client:client/0, it creates a connection, sends its data, receives the response, then returns. Meanwhile, the server closes the socket. When you call client:client/0 again, it again creates a connection and sends data, but then it receives the tcp_closed message for the previous socket, and then it returns.
You can fix this by specifying the client socket in your receive patterns:
receive
    {tcp, Sock, Bin} ->
        io:fwrite("Received sorted list from server: ~w~n", [Bin]);
    {tcp_closed, Sock} ->
        io:fwrite("Connection closed"),
        gen_tcp:close(Sock)
end;
In this code, the variable Sock replaces both the underscores in the original code, in the {tcp, _, Bin} and {tcp_closed, _} tuples. This forces the messages to match only for the specified socket.
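The failure mode is easy to reproduce outside Erlang too. Here is a toy Python sketch (a list stands in for the process mailbox, integer connection ids stand in for socket handles, both hypothetical): matching any message grabs the stale close notification, while matching on the specific connection skips it, just as binding Sock in the receive patterns does.

```python
# Mailbox after the second connect: a stale close for conn 1 sits in front
# of the reply for conn 2 (tuples stand in for {tcp_closed, S} / {tcp, S, Bin}).
mailbox = [("tcp_closed", 1), ("tcp", 2, b"\x01\x05Hallo")]

def receive_any(mbox):
    # Like {tcp, _, Bin} / {tcp_closed, _}: first message wins, whatever it is.
    return mbox.pop(0)

def receive_for(mbox, conn):
    # Like binding Sock in the pattern: skip messages for other connections.
    for i, msg in enumerate(mbox):
        if msg[1] == conn:
            return mbox.pop(i)

print(receive_any(list(mailbox)))    # ('tcp_closed', 1)  <- the stale close
print(receive_for(list(mailbox), 2)) # ('tcp', 2, b'\x01\x05Hallo')
```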
I need to stream database data/strings to a client using Erlang/Yaws. I found documentation for this, but the example uses open_port to send the data.
This is the example from http://yaws.hyber.org/stream.yaws:
out(A) ->
    %% Create a random number
    {_A1, A2, A3} = now(),
    random:seed(erlang:phash(node(), 100000),
                erlang:phash(A2, A3),
                A3),
    Sz = random:uniform(100000),
    Pid = spawn(fun() ->
                    %% Read random junk
                    S = "dd if=/dev/urandom count=1 bs=" ++
                        integer_to_list(Sz) ++ " 2>/dev/null",
                    P = open_port({spawn, S}, [binary, stream, eof]),
                    rec_loop(A#arg.clisock, P)
                end),
    [{header, {content_length, Sz}},
     {streamcontent_from_pid, "application/octet-stream", Pid}].
rec_loop(Sock, P) ->
    receive
        {discard, YawsPid} ->
            yaws_api:stream_process_end(Sock, YawsPid);
        {ok, YawsPid} ->
            rec_loop(Sock, YawsPid, P)
    end,
    port_close(P),
    exit(normal).
rec_loop(Sock, YawsPid, P) ->
    receive
        {P, {data, BinData}} ->
            yaws_api:stream_process_deliver(Sock, BinData),
            rec_loop(Sock, YawsPid, P);
        {P, eof} ->
            yaws_api:stream_process_end(Sock, YawsPid)
    end.
I need to stream strings. I managed to follow the process up to this point, except for port_close(P), which obviously closes the port.
rec_loop(Sock, P) ->
    receive
        {discard, YawsPid} ->
            yaws_api:stream_process_end(Sock, YawsPid);
        {ok, YawsPid} ->
            rec_loop(Sock, YawsPid, P)
    end,
    port_close(P),
    exit(normal).
What I do not understand is this part.
rec_loop(Sock, YawsPid, P) ->
    receive
        {P, {data, BinData}} ->
            yaws_api:stream_process_deliver(Sock, BinData),
            rec_loop(Sock, YawsPid, P);
        {P, eof} ->
            yaws_api:stream_process_end(Sock, YawsPid)
    end.
There is no documentation on {P, {data, BinData}} -> nor on {P, eof} ->, and I need to change the content type from
{streamcontent_from_pid, "application/octet-stream", Pid} to {streamcontent_from_pid, "text/html; charset=utf-8", Pid}.
So the question is: how do I stream text/strings without using a port?
The Yaws example creates an OS process to read from /dev/urandom, a special file that delivers pseudorandom values, and it uses a port to communicate with that process. It runs the port within an Erlang process that serves as the content streamer.
The content streamer process first awaits directions from Yaws by receiving either {discard, YawsPid} or {ok, YawsPid}. If it gets the discard message, it has no work to do, otherwise it calls rec_loop/3, which loops recursively, taking in data from the port and streaming it to a Yaws HTTP socket. When rec_loop/3 gets an end-of-file indication from the port, it terminates its streaming by calling yaws_api:stream_process_end/2 and then returns to rec_loop/2, which in turn closes the port and exits normally.
For your application, you need a streaming process that, just like the Yaws example, first handles either {discard, YawsPid} or {ok, YawsPid}. If it gets {ok, YawsPid}, it should then go into a loop receiving messages that supply the text you want to stream to the HTTP client. When there's no more text to send, it should receive some sort of message telling it to stop, after which it should exit normally.
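The control flow of such a streamer is not Yaws-specific. As a rough sketch in Python (a queue.Queue stands in for the Erlang mailbox, plain strings stand in for the messages, and the callbacks stand in for yaws_api:stream_process_deliver/2 and stream_process_end/2; all names here are made up for illustration), the process waits for a start-or-discard instruction and then loops over chunks until a stop marker arrives:

```python
import queue
import threading

def streamer(mailbox, deliver, stream_end):
    """Wait for 'ok' or 'discard', then stream chunks until 'eof'."""
    first = mailbox.get()
    if first == "discard":
        stream_end()          # nothing to do; end the stream immediately
        return
    assert first == "ok"
    while True:
        msg = mailbox.get()   # blocks, like a 'receive' with no 'after'
        if msg == "eof":      # stop marker: terminate the stream
            stream_end()
            return
        deliver(msg)          # a chunk of text: push it to the client

# Example run: feed two chunks and a stop marker through the mailbox.
mailbox = queue.Queue()
chunks = []
t = threading.Thread(target=streamer,
                     args=(mailbox, chunks.append, lambda: chunks.append("<end>")))
t.start()
for m in ["ok", "Hello, ", "world", "eof"]:
    mailbox.put(m)
t.join()
print(chunks)  # ['Hello, ', 'world', '<end>']
```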
I have a program in Haskell that reads all input from a socket and prints it.
main = withSocketsDo $ do
  sock <- listenOn $ PortNumber 5002
  netLoop sock

netLoop sock = do
  (h, _, _) <- accept sock
  hSetBuffering h NoBuffering
  forkIO $ workLoop h
  netLoop sock

workLoop :: Handle -> IO ()
workLoop h = do
  str <- hGetContents h
  putStr str
  -- do any work
The problem is that this solution closes the socket, but I want to write the results of the computation back to the same socket.
If I try to use hGetLine instead of hGetContents, I run into some strange behaviour: my program shows nothing until I press Ctrl-C, and then I see the first line of the network data that was sent. I suspect this behaviour is related to lazy evaluation, but why does hGetContents work as expected while hGetLine does not?
You need to use LineBuffering if you want to read line by line using hGetLine. I got it working with:
import Network
import System.IO
import Control.Concurrent

main :: IO ()
main = withSocketsDo $ do
  sock <- listenOn $ PortNumber 5002
  netLoop sock

netLoop :: Socket -> IO ()
netLoop sock = do
  putStrLn "Accepting socket"
  (h, _, _) <- accept sock
  putStrLn "Accepted socket"
  hSetBuffering h LineBuffering
  putStrLn "Starting workLoop"
  forkIO $ workLoop h
  netLoop sock

workLoop :: Handle -> IO ()
workLoop h = do
  putStrLn "workLoop started"
  str <- hGetLine h
  putStrLn $ "Read text: " ++ str
  -- do any work
And tested it using this Python script:
import socket

s = socket.socket()
s.connect(('127.0.0.1', 5002))
s.send(b'testing\n')
s.close()
And I got the output
Accepting socket
Accepted socket
Starting workLoop
workLoop started
Accepting socket
Read text: testing
And I get the same behavior if I change it to NoBuffering and hGetContents.
I created a UDP client and need to send a message every 5 seconds, so I wrote:
start() ->
    {ok, Sock} = gen_udp:open(0, []),
    send(Sock).

send(Sock) ->
    gen_udp:send(Sock, "127.0.0.1", 3211, "hello world"),
    timer:sleep(5000),
    send(Sock).
I want to know a good place to close the socket.
If your goal is to send a message every 5 seconds, why would you want to close the socket at all? If you have some logic to determine when you have sent enough messages (for example, you count them), then that is the place to close the socket.
Here's an example of how you could count the messages in a long-running process:
start() ->
    {ok, Sock} = gen_udp:open(...),
    send(Sock, 0),
    gen_udp:close(Sock).

send(Sock, N) when N >= ?MAX_MESSAGE_COUNT ->
    ok;
send(Sock, N) ->
    ...
    send(Sock, N+1).
By counting up to a given number, instead of down, you can change this number while the process is running by simply reloading the code.
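The same shape can be sketched in Python for comparison (the destination port, interval, and message limit here are made-up values; the interval is shortened so the example runs quickly): send a fixed number of UDP datagrams and close the socket only after the loop has finished.

```python
import socket
import time

MAX_MESSAGE_COUNT = 3    # made-up limit, like ?MAX_MESSAGE_COUNT above
INTERVAL_SECONDS = 0.01  # stands in for the 5-second sleep

def run(dest=("127.0.0.1", 3211)):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sent = 0
        # Count up, as the answer suggests, so the limit is easy to change.
        while sent < MAX_MESSAGE_COUNT:
            sock.sendto(b"hello world", dest)
            sent += 1
            time.sleep(INTERVAL_SECONDS)
    finally:
        sock.close()  # the natural place to close: after the send loop ends
    return sent

print(run())  # 3
```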
After reading this answer, I want to understand whether the same applies to calls to gen_tcp:recv(Socket, Length). My understanding of the documentation is that if more than Length bytes are available in the buffer, the extra bytes remain there; if fewer than Length bytes are available, the call blocks until enough bytes arrive or the connection closes.
In particular, this should work when packets are prefixed by 2 bytes holding packet length in little-endian order:
receive_packet(Socket) ->
    {ok, <<Length:16/integer-little>>} = gen_tcp:recv(Socket, 2),
    gen_tcp:recv(Socket, Length).
Is this correct?
Yes (or No, see comments for details).
Consider:
Shell 1:
1> {ok, L} = gen_tcp:listen(8080, [binary, {packet, 0}, {active, false}]).
{ok,#Port<0.506>}
2> {ok, C} = gen_tcp:accept(L). %% Blocks
...
Shell 2:
1> {ok, S} = gen_tcp:connect("localhost", 8080, [binary, {packet, 0}]).
{ok,#Port<0.516>}
2> gen_tcp:send(S, <<0,2,72,105>>).
ok
3>
Shell 1 cont:
...
{ok,#Port<0.512>}
3> {ok, <<Len:16/integer>>} = gen_tcp:recv(C, 2).
{ok,<<0,2>>}
4> Len.
2
5> {ok, Data} = gen_tcp:recv(C, Len).
{ok,<<"Hi">>}
6>
However, this is only useful if you want to confirm the behaviour. In practice you would set the {packet, N} option and let Erlang read the N-byte length header for you (note that {packet, N} assumes the length is encoded in big-endian order).
The same as before, but without extracting the length explicitly (note the {packet, 2} option in shell 1):
Shell 1:
1> {ok, L} = gen_tcp:listen(8080, [binary, {packet, 2}, {active, false}]).
{ok,#Port<0.506>}
2> {ok, C} = gen_tcp:accept(L). %% Blocks
...
In this case Erlang strips the first 2 bytes, and recv/2 blocks until it has received a complete packet. With a packet option set, the Length argument to recv/2 must be 0.
Shell 2:
1> {ok, S} = gen_tcp:connect("localhost", 8080, [binary, {packet, 0}]).
{ok,#Port<0.516>}
2> gen_tcp:send(S, <<0,2,72,105>>).
ok
3>
Shell 1:
...
{ok,#Port<0.512>}
3> {ok, Data} = gen_tcp:recv(C, 0).
{ok,<<"Hi">>}
In this case I don't specify the {packet, N} option in shell 2, just to show the idea, but normally both sides would use the same non-zero value. If the packet option is set, gen_tcp automatically appends/strips that many header bytes from each packet.
If you specify {packet, 0}, then you must call recv/2 with a length >= 0, and the behaviour is the same as in C. You can simulate non-blocking receives by giving a short timeout to the receive; it returns {error, timeout} when the timeout expires.
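For comparison, the manual framing that {packet, 2} automates can be sketched in a few lines of Python (a toy encoder/decoder over an in-memory buffer rather than a real socket; note the header here is big-endian, ">H", matching {packet, 2}, whereas the original question used a little-endian prefix, which would be "<H"):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix the payload with a 2-byte big-endian length, like {packet, 2}."""
    return struct.pack(">H", len(payload)) + payload

def unframe(buf: bytes):
    """Read one length-prefixed packet; return (payload, remaining_bytes)."""
    (length,) = struct.unpack(">H", buf[:2])  # the two header bytes
    return buf[2 : 2 + length], buf[2 + length :]

# Two packets back to back in one buffer, as they might sit in the TCP stream.
stream = frame(b"Hi") + frame(b"there")
first, rest = unframe(stream)
second, rest = unframe(rest)
print(first, second)  # b'Hi' b'there'
```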
More on that can be read here:
http://www.erlang.org/doc/man/gen_tcp.html
http://www.erlang.org/doc/man/inet.html#setopts-2
Hope this clears things up.
I found an interesting problem when using gen_tcp. I have a server and a client. The server accepts connections, and the client creates many processes that all try to connect to the listening server.
If I start the client, which spawns many processes that all try to connect to the socket at the same time, many of them fail. However, if I add a timer:sleep(X) between connection attempts, every connection is accepted.
Does this mean that gen_tcp:accept/1 has a limit on how many connection requests it can handle?
Code for the server and the client follows:
accept(State = #state{lsocket = LSocket, num = Num}) ->
    case gen_tcp:accept(LSocket) of
        {ok, Socket} ->
            io:format("Accepted ~p ~n", [Num]),
            {sockets, List} = hd(ets:lookup(csockets, sockets)),
            NewList = [Socket | List],
            ets:insert(csockets, {sockets, NewList}),
            Pid = spawn(fun() -> loop(Socket) end),
            gen_tcp:controlling_process(Socket, Pid),
            accept(State#state{num = Num + 1});
        {error, closed} -> State
    end.
loop(Socket) ->
    case gen_tcp:recv(Socket, 0) of
        {ok, Data} ->
            gen_tcp:send(Socket, Data),
            loop(Socket);
        {error, closed} ->
            io:format(" CLOSED ~n"),
            ok
    end.
Client:
send(State = #state{low = Low, high = Low}) ->
    State;
send(State = #state{low = Low}) ->
    N = Low rem 10,
    Dest = lists:nth(N + 1, State#state.dest),
    spawn(?MODULE, loop, [Dest, Low]),
    %%timer:sleep(1),
    NewState = State#state{low = Low + 1},
    send(NewState).

loop({IP, Port}, Low) ->
    case gen_tcp:connect(IP, Port, [binary]) of
        {ok, Socket} ->
            io:format("~p Connected ~n", [Low]),
            gen_tcp:send(Socket, "Hi"),
            receive
                {tcp, RecPort, Data} ->
                    io:format("I have received ~p on port ~p ~p ~n", [Data, RecPort, Low])
            end;
        _Else ->
            io:format("The connection failed ~n"),
            loop({IP, Port}, Low)
    end.
It is true that a single process can only call gen_tcp:accept/1 so fast, though I'm not sure that's the bottleneck you're running into. Another thing worth checking is the {backlog, N} option to gen_tcp:listen/2: the default backlog is small, so a burst of simultaneous connection attempts can be refused by the OS before accept ever sees them.
You might be interested in Ranch, the TCP library for the Cowboy webserver. The manual includes a section on internal features that talks about using multiple acceptors.
In your case, you should try to produce more debugging output for yourself. Printing the error when the client fails to connect would be a good start; there are lots of reasons why a TCP client might fail to connect.