Julia TCP select - sockets

I have a problem with a TCP connection.
I have made a server like this:
server = listen(5000)
sock = accept(server)
while isopen(sock)
    yes = read(sock, Float64, 2)
    println(yes)
end
I want it to continually print [0.0,0.0] when there is nothing to read, and otherwise print what it reads from the socket.
This code blocks in the read call (trying to read something) when there is nothing to read, or it crashes.
I tried to do this with a task, like:
begin
    server = listen(5000)
    while true
        sock = accept(server)
        while isopen(sock)
            yes = read(sock, Float64, 2)
            println(yes)
        end
        println([0.0,0.0])
    end
end
but this only prints what it reads. I'm making the connection from another console and writing through the console:
clientside = connect(5000)
write(clientside, [2.0,2.0])
So I'm trying to make a server that prints [0.0,0.0] if there is nothing to read, and prints what it reads when there is something to read.
Any good ideas?

Maybe one strategy to make the server is to run the accept/print block asynchronously (since the accept call blocks the main task).
Following the tutorial "Using TCP Sockets in Julia", one way to make the server is:
notwaiting = true
server = listen(5000)
while true
    if notwaiting
        notwaiting = false
        # Run accept asynchronously (does not block the main loop)
        @async begin
            sock = accept(server)
            ret = read(sock, Float64, 2)
            println(ret)
            global notwaiting = true
        end
    end
    println([0.0, 0.0])
    sleep(1) # slow down the loop
end
The variable notwaiting makes the async block run only once per connection (without it, the server has a kind of race condition).
Testing it with two calls to the client program produces the following output:
C:\research\stackoverflow\EN-US>julia s.jl
[0.0,0.0]
[0.0,0.0]
[0.0,0.0]
[0.0,0.0]
[2.0,2.0]
[0.0,0.0]
[0.0,0.0]
[0.0,0.0]
[2.0,2.0]
[0.0,0.0]
[0.0,0.0]
[0.0,0.0]
Tested with Julia version 0.5.0-rc3+0.
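For comparison, here is a rough sketch of the same pattern in Python (not from the original answer): a background reader plus a once-per-second status loop. Port 5000 and the two-Float64 message come from the question; the "<2d" layout assumes the Julia client writes native little-endian doubles, as it does on typical x86 hardware.
import socket
import struct
import threading
import time

latest = None           # most recently received pair of floats
lock = threading.Lock()

def reader():
    """Accept connections and read two 8-byte floats from each."""
    global latest
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 5000))
    server.listen()
    while True:
        sock, _ = server.accept()
        data = sock.recv(16)         # 2 x Float64 = 16 bytes
        if len(data) == 16:
            with lock:
                latest = struct.unpack("<2d", data)

threading.Thread(target=reader, daemon=True).start()
while True:
    with lock:
        value, latest = latest, None
    print(list(value) if value else [0.0, 0.0])
    time.sleep(1)                    # slow down the loop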

Related

What does it mean when I get an RC (-2) from LINKPGM in a REXX exec?

I "borrowed" the LPINFOX REXX program from this URL: http://www.longpelaexpertise.com/toolsLPinfoX.php
When I run it "directly" (EX 'hlq.EXEC(LPINFOX)') it runs fine:
------------------------------------------------------
LPInfo: Information for z/OS ssssssss as of 18 Mar 2021
------------------------------------------------------
z/OS version: 02.04
Sysplex name: LOCAL
JES: JES2 z/OS 2.4 (Node nnnn)
Security Software: RACF
CEC: 3907-Z02 (IBM Z z14 ZR1)
CEC Serial: ssssss
CEC Capacity mmmm MSU
LPAR name: llll
LPAR Capacity mmm MSU
Not running under a z/VM image
But, if I insert the call into another exec, I get a RC -2 from the address LINKPGM call:
------------------------------------------------------
LPInfo: Information for z/OS ssssssss as of 18 Mar 2021
------------------------------------------------------
z/OS version: 02.04
Sysplex name: LOCAL
JES: JES2 z/OS 2.4 (Node N1)
Security Software: RACF
79 - Address Linkpgm 'IWMQVS QVS_Out'
+++ RC(-2) +++
CEC: -
CEC Serial:
LPAR name:
Not running under a z/VM image
I'm sure this has to do with the second level of REXX program running, but what can I do about the error (besides queueing up the execution of the second REXX)? I'm also stumped on where this RC is documented; my Google search for "REXX ADDRESS RC -2" comes up short.
Thanks,
Scott
PS(1), per answer from @phunsoft:
Interesting. I didn't copy the code to my other REXX. I invoked LPINFOX from within another exec: I have a hlq.LOGIN.EXEC that has an "EX 'hlq.LPINFOX.EXEC'" statement within it. When I reduce the first exec to "TEST1" (follows), it fails the same way:
/* REXX */
"EXECUTIL TS"
"EX 'FAGEN.LPINFOX.EXEC'"
exit 0
When I run TEST1, the EXECUTIL trace output around the IWMQVS call shows the same RC(-2) failure.
When I run LPINFOX.EXEC directly from the command line, the trace output is the same, except that the address LINKPGM IWMQVS call works fine.
I can only surmise that there is some environmental difference when I run the exec "standalone" vs. when I run the exec from another exec.
PS(2), per question about replacing IWMQVS with IEFBR14 from phunsoft:
Changing the program to IEFBR14 doesn't change the result, RC=-2.
LINKPGM is a TSO/E REXX host command environment, so you need to search in the TSO/E REXX Reference. From that book:
Additionally, for the LINKMVS, ATTCHMVS, LINKPGM, and ATTCHPGM environments, the return code set in RC may be -2, which indicates that processing of the variables was not successful. Variable processing may have been unsuccessful because the host command environment could not:
o Perform variable substitution before linking to or attaching the program
o Update the variables after the program completed
It is difficult to say what the problem is without seeing the code.
You may want to use REXX's trace feature to debug. Do you run this REXX from TSO/E foreground? If so, you might run TSO EXECUTIL TS just before you start that REXX. It will then run as if trace ?i was specified as the first line of the code.
I've had a look at the LPINFOX EXEC and saw that the variable QVS_Out is set as follows just before linking to IWMQVS:
QVS_Outlen  = 500                /* Output area length     */
QVS_Outlenx = Right(x2c(d2x(QVS_Outlen)),4,d2c(0))
                                 /* Get length as fullword */
QVS_Out     = QVS_Outlenx || Copies('00'X,QVS_Outlen-4)
Did you do this also when you copied the call to your other REXX?
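For readers who don't know REXX: that snippet builds a 500-byte output area whose first four bytes hold the area length as a big-endian fullword. A rough Python equivalent of the buffer layout (illustration only, not part of either exec):
import struct

QVS_OUTLEN = 500  # output area length
# 4-byte big-endian length field followed by a zeroed output area
qvs_out = struct.pack(">I", QVS_OUTLEN) + bytes(QVS_OUTLEN - 4)
assert len(qvs_out) == QVS_OUTLEN
assert qvs_out[:4] == b"\x00\x00\x01\xf4"  # 500 as a fullword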

Opening a DGRAM socket from within a docker container fails (permission denied)

I'm running an application which builds and sends ICMP ECHO requests to a few different IP addresses. The application is written in Crystal. When attempting to open a socket from within the Crystal docker container, Crystal raises an exception: Permission Denied.
From within the container, I have no problem running ping 8.8.8.8.
Running the application on macos, I have no problem.
Reading the https://docs.docker.com/engine/security/apparmor/ and https://docs.docker.com/engine/security/seccomp/ pages on apparmor and seccomp, I was sure I'd found the solution, but the problem remains unresolved, even when running as docker run --rm --security-opt seccomp=unconfined --security-opt apparmor=unconfined socket_permission
update/edit: After digging into capabilities(7), I added the following line to my Dockerfile: RUN setcap cap_net_raw+ep bin/ping, trying to let the socket be opened, but without change.
Thanks!
Relevant Crystal socket code; the full working code sample is below:
# send request
address = Socket::IPAddress.new host, 0
socket = IPSocket.new Socket::Family::INET, Socket::Type::DGRAM, Socket::Protocol::ICMP
socket.send slice, to: address
Dockerfile:
FROM crystallang/crystal:0.23.1
WORKDIR /opt
COPY src/ping.cr src/
RUN mkdir bin
RUN crystal -v
RUN crystal build -o bin/ping src/ping.cr
ENTRYPOINT ["/bin/sh","-c"]
CMD ["/opt/bin/ping"]
Running the code, first native, then via docker:
#!/bin/bash
crystal run src/ping.cr
docker build -t socket_permission .
docker run --rm --security-opt seccomp=unconfined --security-opt apparmor=unconfined socket_permission
And finally, a 50-line Crystal script which fails to open a socket in docker:
require "socket"

TYPE = 8_u16
IP_HEADER_SIZE_8 = 20
PACKET_LENGTH_8 = 16
PACKET_LENGTH_16 = 8
MESSAGE = " ICMP"

def ping
  sequence = 0_u16
  sender_id = 0_u16
  host = "8.8.8.8"
  # initialize packet with MESSAGE
  packet = Array(UInt16).new PACKET_LENGTH_16 do |i|
    MESSAGE[i % MESSAGE.size].ord.to_u16
  end
  # build out ICMP header
  packet[0] = (TYPE.to_u16 << 8)
  packet[1] = 0_u16
  packet[2] = sender_id
  packet[3] = sequence
  # calculate checksum
  checksum = 0_u32
  packet.each do |byte|
    checksum += byte
  end
  checksum += checksum >> 16
  checksum = checksum ^ 0xffff_ffff_u32
  packet[1] = checksum.to_u16
  # convert packet to 8 bit words
  slice = Bytes.new(PACKET_LENGTH_8)
  eight_bit_packet = packet.map do |word|
    [(word >> 8), (word & 0xff)]
  end.flatten.map(&.to_u8)
  eight_bit_packet.each_with_index do |chr, i|
    slice[i] = chr
  end
  # send request
  address = Socket::IPAddress.new host, 0
  socket = IPSocket.new Socket::Family::INET, Socket::Type::DGRAM, Socket::Protocol::ICMP
  socket.send slice, to: address
  # receive response
  buffer = Bytes.new(PACKET_LENGTH_8 + IP_HEADER_SIZE_8)
  count, address = socket.receive buffer
  length = buffer.size
  icmp_data = buffer[IP_HEADER_SIZE_8, length - IP_HEADER_SIZE_8]
end

ping
It turns out the answer is that Linux (and by extension docker) does not give the same permissions that macOS does for DGRAM sockets. Changing the socket declaration to socket = IPSocket.new Socket::Family::INET, Socket::Type::RAW, Socket::Protocol::ICMP allows the socket to connect under docker.
A little more still is required to run the program in a non-root context. Because raw sockets are restricted to root, the binary must also be granted the correct capability for access to a raw socket, CAP_NET_RAW. However, in docker, this isn't necessary. I was able to get the program to run outside of super-user context by running sudo setcap cap_net_raw+ep bin/ping. This is a decent primer on capabilities and the setcap command.
macOS doesn't use the same system of permissions, so setcap is just an unrecognized command. As a result, to get the above code to compile and run successfully on macOS without super-user context, I changed the socket creation code to:
socket_type = Socket::Type::RAW
{% if flag?(:darwin) %}
  socket_type = Socket::Type::DGRAM
{% end %}
socket = IPSocket.new Socket::Family::INET, socket_type, Socket::Protocol::ICMP
Applying the CAP_NET_RAW capability for use in linux happens elsewhere in the build process if needed.
With those changes, I'm not seeing any requirement for changes to seccomp or apparmor from the default shipped with Docker in order to run the program.
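The same platform split can be sketched in Python (an illustration, not from the original post). On Linux, the unprivileged DGRAM variant additionally depends on the net.ipv4.ping_group_range sysctl, while the RAW variant needs root or CAP_NET_RAW:
import socket
import sys

def open_icmp_socket() -> socket.socket:
    # macOS allows unprivileged ICMP datagram sockets
    if sys.platform == "darwin":
        return socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_ICMP)
    try:
        # Linux: raw ICMP socket, requires root or CAP_NET_RAW
        return socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    except PermissionError:
        # unprivileged fallback; works only if the process's group falls
        # within the net.ipv4.ping_group_range sysctl
        return socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_ICMP)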

Port 51347 seems to be used by another program

On running the sample code given in the dispy documentation:
def compute(n):
    import time, socket
    time.sleep(n)
    host = socket.gethostname()
    return (host, n)

if __name__ == '__main__':
    import dispy, random
    cluster = dispy.JobCluster(compute)
    jobs = []
    for i in range(10):
        # schedule execution of 'compute' on a node (running 'dispynode')
        # with a parameter (random number in this case)
        job = cluster.submit(random.randint(5,20))
        job.id = i # optionally associate an ID to job (if needed later)
        jobs.append(job)
    # cluster.wait() # wait for all scheduled jobs to finish
    for job in jobs:
        host, n = job() # waits for job to finish and returns results
        print('%s executed job %s at %s with %s' % (host, job.id, job.start_time, n))
        # other fields of 'job' that may be useful:
        # print(job.stdout, job.stderr, job.exception, job.ip_addr, job.start_time, job.end_time)
    cluster.print_status()
I get the following output:
2017-03-29 22:39:52 asyncoro - version 4.5.2 with epoll I/O notifier
2017-03-29 22:39:52 dispy - dispy client version: 4.7.3
2017-03-29 22:39:52 dispy - Port 51347 seems to be used by another program
And then nothing happens.
How do I free port 51347?
If you are under Linux, run sudo netstat -tuanp | grep 51347 and take note of the pid using that port.
Then execute ps ax | grep <pid> to check which service/program is running with that pid.
Then execute kill <pid> to terminate the process using that port.
Please check which process is using the port before killing it, just in case it is something that you should not kill.
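If you'd rather do the lookup from Python, here is a sketch using the third-party psutil package (psutil is an assumption here; it is not part of dispy, and listing other users' connections may require elevated privileges):
import psutil

PORT = 51347
# list the processes holding a socket on the port before deciding what to kill
for conn in psutil.net_connections(kind="inet"):
    if conn.laddr and conn.laddr.port == PORT and conn.pid:
        proc = psutil.Process(conn.pid)
        print("pid %d (%s) is using port %d" % (conn.pid, proc.name(), PORT))
        # proc.terminate()  # uncomment only once you are sure it is safe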

Error handling in Net::OpenSSH (host, async => 1) option

I am using the Net::OpenSSH module to connect to hosts using the (async => 1) option.
How is it possible to trap connection errors for those hosts that are not able to connect? I do not want the error to appear in the terminal; I want it stored in a data structure instead, since I want to format all the data in a CGI script. When I run the script, the hosts that have a connection problem throw errors in the terminal. The code then continues and tries to run commands on the disconnected hosts. I want to isolate the disconnected hosts.
my (%ssh, %ls); # Code copied from CPAN Net::OpenSSH
my @hosts = qw(host1 host2 host3 host4);
# multiple connections are established in parallel:
for my $host (@hosts) {
    $ssh{$host} = Net::OpenSSH->new($host, async => 1);
    $ssh{$host}->error and die "no remote connection"; # <-- doesn't work here! :-(
}
# then to run some command in all the hosts (sequentially):
for my $host (@hosts) {
    $ssh{$host}->system('ls /');
}
The check $ssh{$host}->error and die "no remote connection" doesn't work.
Any help will be appreciated.
Thanks
You run async connections, so the program continues its work and doesn't wait for the connection to be established.
After new with the async option you try to check the error, but it is not defined, because the connection is still in progress and there is no error information yet.
As I understand it, you need to wait after the first loop until the connection process has finished.
Try using ->wait_for_master(0):
If a false value is given, it will finalize the connection process and wait until the multiplexing socket is available.
It returns a true value after the connection has been successfully established. False is returned if the connection process fails or if it has not yet completed (then, the "error" method can be used to distinguish between both cases).
for my $host (@hosts) {
    $ssh{$host} = Net::OpenSSH->new($host, async => 1);
}
for my $host (@hosts) {
    unless ($ssh{$host}->wait_for_master(0)) {
        # check $ssh{$host}->error here; for example, delete $ssh{$host}
    }
}
# Do work here
I haven't tested this code.
PS: Sorry for my English. I hope it helps you.
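The general pattern here (start all the connections in parallel, then check each one for errors before using it) can also be sketched in Python with plain TCP; the hostnames and port are placeholders, and this only illustrates the pattern, not Net::OpenSSH itself:
import socket
from concurrent.futures import ThreadPoolExecutor

hosts = ["host1", "host2", "host3", "host4"]  # placeholders

def try_connect(host):
    """Attempt a connection; return (host, socket-or-None, error-or-None)."""
    try:
        return host, socket.create_connection((host, 22), timeout=5), None
    except OSError as exc:
        return host, None, exc

with ThreadPoolExecutor() as pool:
    results = list(pool.map(try_connect, hosts))

connected = {h: s for h, s, e in results if s is not None}
failed = {h: str(e) for h, s, e in results if s is None}
# run commands only against `connected`; report `failed` in the CGI output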

Unix : Epoll, catch ctrl+d and ctrl+c in server

I use epoll to build a server; this is the code where I initialize epoll:
core->fd_epoll = epoll_create(LIMIT_CLIENT);
ev.events = EPOLLIN | EPOLLPRI | EPOLLERR | EPOLLHUP;
ev.data.fd = core->socket_main;
epoll_ctl(core->fd_epoll, EPOLL_CTL_ADD, core->socket_main, &ev);
while (1)
{
    nfds = epoll_wait(core->fd_epoll, &ev, 90000, -1);
    ...
}
And when I use it to check if there's something new on my fds:
for (i = 0; i < nfds; i++)
{
    fd = ev[i].data.fd;
    if (fd == core->socket_main)
    {
        socket_fils = socket_accept(core->socket_main, 0);
        event.data.fd = socket_fils;
        event.events = EPOLLIN | EPOLLET | EPOLLRDHUP;
        xepoll_ctl(core->fd_epoll, EPOLL_CTL_ADD, socket_fils, &event);
        printf("Incoming => FD fils %d\n", socket_fils);
    }
    else
        printf("Event %x\n", ev[i].events);
}
When I use netcat to send a message to the server, the events bitfield is equal to 1 (EPOLLIN).
When I send a ctrl+c, netcat quits and the bitfield is equal to 2001 (EPOLLIN and EPOLLRDHUP).
When I send a ctrl+d, netcat doesn't quit, but the bitfield is equal to 2001 too...
After a ctrl+d, my server closes the socket. That's not normal... a ctrl+d shouldn't close the socket and should return a different bitfield.
How can I know, in the server, whether it was ctrl+c or ctrl+d?
Thank you.
ctrl+c and ctrl+d keypresses on the terminal that is running netcat cannot be "seen" directly by your server. They cause, respectively, a SIGINT signal to be sent to netcat, and an EOF condition to be seen by netcat on its stdin. What netcat does with that is really up to netcat, not up to your server. Here's what they do for me:
ctrl+c which sends SIGINT to netcat: netcat is killed because that is the default action of SIGINT, and netcat doesn't change it. When netcat dies the socket is automatically closed. The server senses this as available incoming data, consistent with the EPOLLIN|EPOLLRDHUP condition you are seeing. If you read the socket, you will find that an EOF is waiting for you.
ctrl+d which sends an EOF on netcat's stdin: netcat notices the EOF. It will send no further data through the socket. However, it continues running and reading from the socket in case the server has more data to send.
In other words, I can't reproduce the netcat behaviour you are seeing (with Linux 2.6 and netcat v1.10-38). Perhaps your version of netcat shuts down the socket for writing after reading an EOF on stdin?
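To see the EOF for yourself, read the socket when EPOLLIN/EPOLLRDHUP fires: a zero-byte read means the peer closed (or shut down) its writing side. A minimal, Linux-only Python sketch of that check (not the original C server; port 5000 is arbitrary):
import select
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 5000))
srv.listen()

ep = select.epoll()
ep.register(srv.fileno(), select.EPOLLIN)
conns = {}
while True:
    for fd, events in ep.poll():
        if fd == srv.fileno():
            conn, _ = srv.accept()
            conns[conn.fileno()] = conn
            ep.register(conn.fileno(), select.EPOLLIN | select.EPOLLRDHUP)
        else:
            data = conns[fd].recv(4096)
            if data:
                print("data:", data)
            else:
                # recv() returning b"" is the EOF: the peer closed its end,
                # which is what a dying netcat produces
                print("peer closed fd", fd)
                ep.unregister(fd)
                conns.pop(fd).close()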