Minimal TCP client-server: accept() never receives a connection? - sockets

I copied the example code from the book TCP/IP Sockets in C from http://cs.ecs.baylor.edu/~donahoo/practical/CSockets/winsock.html and got it to compile on MinGW without warnings (after changing clntLen from unsigned int to int and void main to int main).
$ gcc.exe -Wall -o TCPEchoServerWS TCPEchoServerWS.c HandleTCPClientWS.c DieWithErrorWS.c -lws2_32
$ gcc.exe -o TCPEchoClientWS TCPEchoClientWS.c DieWithErrorWS.c -lws2_32
When I run the executables, the server (but not the client) triggers a Windows firewall notification.
$ ./TCPEchoServerWS.exe 5000
inside for loop
$ ./TCPEchoClientWS.exe 169.1.1.1 "Echo this" 5000
connect() failed: 10060
From some printf debugging of this loop:
for (;;) /* Run forever */
{
    printf("inside for loop");
    clntLen = sizeof(echoClntAddr);

    if ((clntSock = accept(servSock, (struct sockaddr *) &echoClntAddr, &clntLen)) < 0)
        DieWithError("accept() failed");

    printf("Handling client %s\n", inet_ntoa(echoClntAddr.sin_addr));
it appears that accept() never returns. I assume this is because it never has a connection to extract? Any ideas, please? I've also tried linking with -lwsock32 and disabling Windows Firewall.

It turns out that I was using the wrong IP (I just copied the one from the command in the book).
I just needed to use the IPv4 address from ipconfig as Remy suggested.
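For anyone who hits the same thing: Winsock error 10060 is WSAETIMEDOUT, which is what connect() reports when the target address is unreachable. Below is a minimal sketch of the client's connect step (my own illustration, not the book's code; the address 192.168.1.10 and port 5000 are placeholders for whatever ipconfig reports and whatever port the server was started with):

/* Minimal Winsock connect sketch; build with: gcc client.c -lws2_32 */
#include <stdio.h>
#include <string.h>
#include <winsock2.h>

int main(void)
{
    WSADATA wsaData;
    SOCKET sock;
    struct sockaddr_in servAddr;

    if (WSAStartup(MAKEWORD(2, 0), &wsaData) != 0)
        return 1;                                       /* Winsock init failed */

    sock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (sock == INVALID_SOCKET)
        return 1;

    memset(&servAddr, 0, sizeof(servAddr));
    servAddr.sin_family = AF_INET;
    servAddr.sin_addr.s_addr = inet_addr("192.168.1.10");  /* your IPv4 from ipconfig */
    servAddr.sin_port = htons(5000);                        /* port the server listens on */

    if (connect(sock, (struct sockaddr *) &servAddr, sizeof(servAddr)) == SOCKET_ERROR)
        printf("connect() failed: %d\n", WSAGetLastError());  /* 10060 if unreachable */
    else
        printf("connected\n");

    closesocket(sock);
    WSACleanup();
    return 0;
}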

pySerial running command to list ports

I am using pySerial, and running this command from CMD lists the available COM ports, displaying each COM port number it finds:
python -m serial.tools.list_ports
I know that the command line imports the serial module when I use the python -m flag, and that I can access the objects inside it, so it should show the output. However, the equivalent code does not work when run from the IDLE shell:
import serial
print(serial.tools.list_ports_common)
This returns an error AttributeError: module 'serial' has no attribute 'tools'
Why is it not working in IDLE?
You need to import it first:
from serial.tools import list_ports
list_ports.main() # Same result as python -m serial.tools.list_ports
You can check out the source here
You can simply try connecting to each possible port (COM0...COM255). Then add the ports with successful connections to a list. Here is my example:
import serial

def connectedCOMports():
    allPorts = []  # list of all possible COM port names
    for i in range(256):
        allPorts.append("COM" + str(i))

    ports = []  # a list of COM ports with devices connected
    for port in allPorts:
        try:
            s = serial.Serial(port)  # attempt to connect to the device
            s.close()
            ports.append(port)  # if it can connect, add it to the list
        except serial.SerialException:
            pass  # if it can't connect, don't add it to the list

    return ports

print(connectedCOMports())
When I ran this program, it printed ['COM7'] to the console. This represents the ESP32 microcontroller that I connected to my USB port.

OpenOCD exit on breakpoint

I'm developing an application on an STM32F042.
I drive everything from a makefile, including my unit tests.
I use OpenOCD and ST-LINK to flash the target.
My unit tests run on the host and on the target.
The host unit test driver returns 0 from main() on success and non-zero on failure, so the makefile knows if the tests pass.
The makefile flashes and starts tests on the target, but doesn't know if they succeed or fail.
The embedded test application turns on a red LED for fail and a green LED for pass, so I can see the result; now I want to automate this step.
I want to set two breakpoints in the code, one in the failure handler and one at the end of main, and tell OpenOCD to exit with zero or non-zero status if it hits one or the other breakpoint.
So my question boils down to two specific ones:
To set a breakpoint, I need to know the PC value at a specific line of code. How do I get that value from the arm-gcc toolchain?
Can I configure OpenOCD to exit on specific breakpoints, and with specific status?
Here's what I ended up with. For each target unit test, I start an OpenOCD server and connect to it with GDB. GDB runs a script that sets two breakpoints, one for success and one for failure. When it hits either breakpoint, it shuts down the OpenOCD server and exits with a code that communicates success or failure to the shell. To run the same tests on the host, I simply compile them as regular executables.
Makefile:
# target unit test binaries
foo_tests.elf bar_tests.elf baz_tests.elf bop_tests.elf: unit_test_runner.ao

# disable optimization for target unit test driver to avoid optimizing
# away functions that serve as breakpoint labels
unit_test_runner.ao: CFLAGS += -O0 -g

# link target unit test binaries for semihosting
%_tests.elf: ARM_LDLIBS += -specs=rdimon.specs -lrdimon

# host unit test binaries
foo_tests bar_time_tests baz_tests bop_tests: unit_test_runner.o

# run target unit test binaries through gdb and OpenOCD; redirecting stderr
# leaves printf output from `assert()' clearly visible on the console
%.tut: %.elf
	openocd -f interface/stlink-v2-1.cfg -f target/stm32f0x.cfg 2> $@.log &
	gdb-multiarch -batch-silent -x tut.gdb $< 2> $@-gdb.log

# run host binary
%.run: %
	./$*

tests: foo_tests.run bar_time_tests.run baz_tests.run bop_tests.run \
       foo_tests.tut bar_time_tests.tut baz_tests.tut bop_tests.tut
tut.gdb:
target remote localhost:3333
monitor arm semihosting enable   # let assert()'s printf() through
monitor reset halt
load
monitor reset init
break success          # set breakpoint on function `success()'
commands               # on hitting this bp, execute the following:
  monitor shutdown     # shut down the OpenOCD server
  quit 0               # exit GDB with success code
end
break failure          # set breakpoint on function `failure()'
commands               # on hitting this bp, execute the following:
  monitor shutdown     # shut down the OpenOCD server
  quit 1               # exit GDB with failure code
end
continue
unit_test_runner.c:
#include <stdio.h>
#include <stdlib.h>

/* These two functions serve as labels where gdb can place
   breakpoints. */
void success() {}
void failure() {}

/* Implementation detail for `assert()' macro */
void assertion_failure(const char *file,
                       int line,
                       const char *function,
                       const char *expression)
{
    printf("assertion failure in %s:%d (%s): `%s'\n",
           file, line, function, expression);
    failure();
    exit(1);
}

/* This function is necessary for ARM semihosting */
extern void initialise_monitor_handles(void);

/* client code implements this function */
extern void tests(void);

int main(int argc, char* argv[])
{
#ifdef __arm__
    initialise_monitor_handles();
#endif
    tests();
    success();
    return 0;
}

Opening a DGRAM socket from within a docker container fails (permission denied)

I'm running an application which builds and sends ICMP ECHO requests to a few different IP addresses. The application is written in Crystal. When attempting to open a socket from within the Crystal docker container, Crystal raises an exception: Permission Denied.
From within the container, I have no problem running ping 8.8.8.8.
Running the application on macos, I have no problem.
After reading the AppArmor (https://docs.docker.com/engine/security/apparmor/) and seccomp (https://docs.docker.com/engine/security/seccomp/) pages I was sure I'd found the solution, but the problem remains unresolved, even when running with docker run --rm --security-opt seccomp=unconfined --security-opt apparmor=unconfined socket_permission
Update/edit: after digging into capabilities(7), I added the following line to my Dockerfile to let the socket be opened, but it made no difference: RUN setcap cap_net_raw+ep bin/ping
Thanks!
Relevant crystal socket code, full working code sample below:
# send request
address = Socket::IPAddress.new host, 0
socket = IPSocket.new Socket::Family::INET, Socket::Type::DGRAM, Socket::Protocol::ICMP
socket.send slice, to: address
Dockerfile:
FROM crystallang/crystal:0.23.1
WORKDIR /opt
COPY src/ping.cr src/
RUN mkdir bin
RUN crystal -v
RUN crystal build -o bin/ping src/ping.cr
ENTRYPOINT ["/bin/sh","-c"]
CMD ["/opt/bin/ping"]
Running the code, first native, then via docker:
#!/bin/bash
crystal run src/ping.cr
docker build -t socket_permission .
docker run --rm --security-opt seccomp=unconfined --security-opt apparmor=unconfined socket_permission
And finally, a 50 line crystal script which fails to open a socket in docker:
require "socket"

TYPE = 8_u16
IP_HEADER_SIZE_8 = 20
PACKET_LENGTH_8 = 16
PACKET_LENGTH_16 = 8
MESSAGE = " ICMP"

def ping
  sequence = 0_u16
  sender_id = 0_u16
  host = "8.8.8.8"

  # initialize packet with MESSAGE
  packet = Array(UInt16).new PACKET_LENGTH_16 do |i|
    MESSAGE[i % MESSAGE.size].ord.to_u16
  end

  # build out ICMP header
  packet[0] = (TYPE.to_u16 << 8)
  packet[1] = 0_u16
  packet[2] = sender_id
  packet[3] = sequence

  # calculate checksum
  checksum = 0_u32
  packet.each do |byte|
    checksum += byte
  end
  checksum += checksum >> 16
  checksum = checksum ^ 0xffff_ffff_u32
  packet[1] = checksum.to_u16

  # convert packet to 8 bit words
  slice = Bytes.new(PACKET_LENGTH_8)
  eight_bit_packet = packet.map do |word|
    [(word >> 8), (word & 0xff)]
  end.flatten.map(&.to_u8)
  eight_bit_packet.each_with_index do |chr, i|
    slice[i] = chr
  end

  # send request
  address = Socket::IPAddress.new host, 0
  socket = IPSocket.new Socket::Family::INET, Socket::Type::DGRAM, Socket::Protocol::ICMP
  socket.send slice, to: address

  # receive response
  buffer = Bytes.new(PACKET_LENGTH_8 + IP_HEADER_SIZE_8)
  count, address = socket.receive buffer
  length = buffer.size
  icmp_data = buffer[IP_HEADER_SIZE_8, length - IP_HEADER_SIZE_8]
end

ping
It turns out the answer is that Linux (and by extension docker) does not give the same permissions that macOS does for DGRAM sockets. Changing the socket declaration to socket = IPSocket.new Socket::Family::INET, Socket::Type::RAW, Socket::Protocol::ICMP allows the socket to connect under docker.
A little more is still required to run the program in a non-root context. Because raw sockets are restricted to root, the binary must also be granted the capability for access to a raw socket, CAP_NET_RAW. (Inside docker this step isn't necessary, since the container's default capability set already includes CAP_NET_RAW.) I was able to get the program to run outside of super-user context by running sudo setcap cap_net_raw+ep bin/ping. This is a decent primer on capabilities and the setcap command.
macOS doesn't use the same system of permissions, so setcap is simply an unrecognized command there. As a result, to get the above code to compile and run successfully on macOS without super-user context, I changed the socket creation code to:
socket_type = Socket::Type::RAW
{% if flag?(:darwin) %}
  socket_type = Socket::Type::DGRAM
{% end %}
socket = IPSocket.new Socket::Family::INET, socket_type, Socket::Protocol::ICMP
Applying the CAP_NET_RAW capability for use on Linux happens elsewhere in the build process if needed.
With those changes, I'm not seeing any requirement for changes to seccomp or apparmor from the default shipped with Docker in order to run the program.
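To make the Linux-side behaviour easy to reproduce outside Crystal, here is a small C sketch (my own illustration, assuming a Linux host) that tries both socket types. The DGRAM variant is gated by the net.ipv4.ping_group_range sysctl (the range is empty by default on many systems, so it typically fails with EACCES even for root), while the RAW variant requires root or CAP_NET_RAW:

/* Illustration only: compare SOCK_DGRAM vs SOCK_RAW ICMP sockets on Linux. */
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void try_socket(int type, const char *label)
{
    int fd = socket(AF_INET, type, IPPROTO_ICMP);
    if (fd < 0) {
        printf("%s: failed (%s)\n", label, strerror(errno));
    } else {
        printf("%s: ok\n", label);
        close(fd);
    }
}

int main(void)
{
    try_socket(SOCK_DGRAM, "SOCK_DGRAM/ICMP"); /* unprivileged ICMP, gated by ping_group_range */
    try_socket(SOCK_RAW,   "SOCK_RAW/ICMP");   /* raw ICMP, needs root or CAP_NET_RAW */
    return 0;
}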

C interp: unknown symbol name 'inetstatShow'

I have a VxWorks embedded OS and I want to check the network status, the way netstat would.
This is what I tried:
-> inetstatShow
And the output is:
C interp: unknown symbol name 'inetstatShow'.
How can I get a netstat-like command on this system?
inetstatShow is provided by the netShow library; you need to make sure that your OS configuration includes netShow, or you can dynamically load it using ld.
The lkup function can be used to list symbols that are available to the shell. Try lkup "Show" to list all symbols that include the sub-string "Show" for example.
VxWorks supports the netstat command:
-> netstat "-n -a" /* state of sockets */
-> netstat "-n -r" /* routing table */

Unix : Epoll, catch ctrl+d and ctrl+c in server

I use epoll to build a server. This is the code where I initialize epoll:
core->fd_epoll = epoll_create(LIMIT_CLIENT);

ev.events = EPOLLIN | EPOLLPRI | EPOLLERR | EPOLLHUP;
ev.data.fd = core->socket_main;
epoll_ctl(core->fd_epoll, EPOLL_CTL_ADD, core->socket_main, &ev);

while (1)
{
    nfds = epoll_wait(core->fd_epoll, &ev, 90000, -1);
    ...
}
And here is where I check whether there's something new on my fds:
for (i = 0; i < nfds; i++)
{
    fd = ev[i].data.fd;
    if (fd == core->socket_main)
    {
        socket_fils = socket_accept(core->socket_main, 0);
        event.data.fd = socket_fils;
        event.events = EPOLLIN | EPOLLET | EPOLLRDHUP;
        xepoll_ctl(core->fd_epoll, EPOLL_CTL_ADD, socket_fils, &event);
        printf("Incoming => FD fils %d\n", socket_fils);
    }
    else
        printf("Event %x\n", ev[i].events);
}
When I use netcat to send a message to the server, the events bitfield is 1 (EPOLLIN).
When I press ctrl+c, netcat quits and the bitfield is 2001 (EPOLLIN and EPOLLRDHUP).
When I press ctrl+d, netcat doesn't quit, but the bitfield is 2001 too...
After a ctrl+d, my server closes the socket. That doesn't seem right: a ctrl+d shouldn't close the socket, and it should produce a different bitfield.
How can I know, in the server, if it's ctrl+c or ctrl+d ?
Thank you.
ctrl+c and ctrl+d keypresses on the terminal that is running netcat cannot be "seen" directly by your server. They cause, respectively, a SIGINT signal to be sent to netcat, and an EOF condition to be seen by netcat on its stdin. What netcat does with that is really up to netcat, not up to your server. Here's what they do for me:
ctrl+c which sends SIGINT to netcat: netcat is killed because that is the default action of SIGINT, and netcat doesn't change it. When netcat dies the socket is automatically closed. The server senses this as available incoming data, consistent with the EPOLLIN|EPOLLRDHUP condition you are seeing. If you read the socket, you will find that an EOF is waiting for you.
ctrl+d which sends an EOF on netcat's stdin: netcat notices the EOF. It will send no further data through the socket. However, it continues running and reading from the socket in case the server has more data to send.
In other words, I can't reproduce the netcat behaviour you are seeing (with Linux 2.6 and netcat v1.10-38). Perhaps your version of netcat shuts down the socket for writing after reading an EOF on stdin?
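Either way, the only thing the server can act on is what the socket itself reports. Here is a minimal sketch of how the read side distinguishes "more data" from "peer closed its write end" (my own illustration using plain read() and epoll_ctl(), not your xepoll_ctl wrapper; handle_client_event is a hypothetical helper called for each ready client descriptor):

/* Sketch: after epoll_wait() reports a client fd ready, the return value of
   read() tells you whether the peer sent data or closed/shut down its write side. */
#include <stdint.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

void handle_client_event(int epfd, int fd, uint32_t events)
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf));

    if (n > 0) {
        /* ordinary data from the peer */
        printf("fd %d: %zd bytes of data (events 0x%x)\n", fd, n, events);
    } else if (n == 0) {
        /* EOF: the peer closed the connection or shut down its write end.
           A netcat killed by ctrl+c and a netcat that shuts down writing after
           ctrl+d both land here; at the socket level they look the same. */
        printf("fd %d: peer closed its write side\n", fd);
        epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
        close(fd);
    } else {
        perror("read");
    }
}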