Streaming with TCP using OpenCV and sockets

I have a simple TCP client/server program that works well with strings and character data. I wanted to take each frame (from a webcam) and send it to the server. Here is the part of the client program where the error happens:
line:66 if(send(sock, frame, sizeof(frame), 0)< 0)
error:
client.cpp:66:39: error: cannot convert ‘cv::Mat’ to ‘const void*’ for argument ‘2’ to ‘ssize_t send(int, const void*, size_t, int)
I can't make sense of this error. Kindly help. The complete client program follows:
#include<stdio.h>
#include<sys/types.h>
#include<sys/socket.h>
#include<netinet/in.h>
#include<string.h>
#include<stdlib.h>
#include<netdb.h>
#include<unistd.h>
#include "opencv2/objdetect.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char *argv[])
{
    int sock;
    struct sockaddr_in server;
    struct hostent *hp;
    char buff[1024];
    VideoCapture capture;
    Mat frame;

    capture.open( 1 );
    if ( ! capture.isOpened() ) { printf("--(!)Error opening video capture\n"); return -1; }
begin:
    capture.read(frame);
    if( frame.empty() )
    {
        printf(" --(!) No captured frame -- Break!");
        goto end;
    }
    sock = socket(AF_INET, SOCK_STREAM, 0);
    if(sock < 0)
    {
        perror("socket failed");
        exit(1);
    }
    server.sin_family = AF_INET;
    hp = gethostbyname(argv[1]);
    if(hp == 0)
    {
        perror("get hostname failed");
        close(sock);
        exit(1);
    }
    memcpy(&server.sin_addr, hp->h_addr, hp->h_length);
    server.sin_port = htons(5000);
    if(connect(sock, (struct sockaddr *) &server, sizeof(server)) < 0)
    {
        perror("connect failed");
        close(sock);
        exit(1);
    }
    int c = waitKey(30);
    if( (char)c == 27 ) { goto end; }
    if(send(sock, frame, sizeof(frame), 0) < 0)   // line 66: the reported conversion error happens here
    {
        perror("send failed");
        close(sock);
        exit(1);
    }
    goto begin;
end:
    printf("sent\n");
    close(sock);
    return 0;
}

Because TCP provides a stream of bytes, before you can send something over a TCP socket you must compose the exact bytes you want to send. Your use of sizeof is incorrect: the sizeof operator tells you how many bytes your system needs to store that particular type, which bears no relationship to the number of bytes the data will occupy on the TCP connection. That depends on the protocol you layer on top of TCP, and that protocol must specify how the data is laid out at the byte level.

Like David already said, you got the length wrong. sizeof() won't help; what you want is probably
frame.total() * frame.channels()
You can't send a Mat object, but you can send the pixels (the data pointer), so this would be:
send(sock, frame.data, frame.total() * frame.channels(), 0)
But it is still a bad idea: sending uncompressed pixels over the network? Please look at imencode/imdecode instead.
I'm also pretty sure you got the client/server roles in reverse here. Usually the server holds the information to retrieve (the webcam), and the client connects to it and requests an image.
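To make the imencode suggestion concrete, here is a minimal sketch of the sending side, assuming an already-connected TCP socket sock and a captured frame as in the question; the 4-byte length prefix is just one possible framing choice, not a fixed protocol, and depending on the OpenCV version you may also need #include "opencv2/imgcodecs.hpp":
std::vector<uchar> buf;
if (!cv::imencode(".jpg", frame, buf)) { /* encoding failed, skip this frame */ }
uint32_t len = htonl(static_cast<uint32_t>(buf.size()));
send(sock, &len, sizeof(len), 0);        // tell the receiver how many bytes follow
send(sock, buf.data(), buf.size(), 0);   // then send the compressed image itself
// Receiver side: read 4 bytes, ntohl() them, read exactly that many bytes into a
// std::vector<uchar>, then cv::imdecode(buf, cv::IMREAD_COLOR) to get the Mat back.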

Related

BPF program is not valid - pcap sniffing

Hey everyone, I'm trying to sniff packets using the pcap library. I have just one problem that I cannot figure out: ERROR: BPF program is not valid.
I'm trying to start sniffing, but this error is blocking me. I searched the web and found nothing.
My code is based on this program: https://github.com/levans248/packetSniffingAndSpoofing/blob/master/sniff.c
It is for the SEED labs. I know people do not like to help when it is homework, but I just need to figure out why this is happening; I have no clue.
#include <pcap.h>
#include <stdio.h>
#include <stdlib.h>
#include <arpa/inet.h>
void got_packet(u_char *args, const struct pcap_pkthdr *header, const u_char *packet)
{
    printf("Got a packet \n");
}

int main()
{
    pcap_t *handle;
    char errbuf[PCAP_ERRBUF_SIZE];
    struct bpf_program fp;
    char filter_exp[] = "ip proto icmp";
    bpf_u_int32 net;

    // Open live pcap session
    handle = pcap_open_live("enp0s3", BUFSIZ, 1, 1000, errbuf);

    // Compile Filter into the Berkeley Packet Filter (BPF)
    pcap_compile(handle, &fp, filter_exp, 0, net);
    if (pcap_setfilter(handle, &fp) == -1)
    {
        pcap_perror(handle, "ERROR");
        exit(EXIT_FAILURE);
    }

    // Sniffing..
    pcap_loop(handle, -1, got_packet, NULL);
    pcap_close(handle);
    return 0;
}
There was a syntax mistake in the filter_exp. I was working in the C shell, so I needed to change it to ip proto \icmp.
Thank you very much everyone!
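For reference, a minimal sketch of the corrected compile step using the names from the question; note that inside a C string literal the backslash itself must be doubled, and PCAP_NETMASK_UNKNOWN is used here as an assumption because net is never filled in:
char filter_exp[] = "ip proto \\icmp";   /* \icmp escapes the icmp keyword; doubled backslash in C source */
if (pcap_compile(handle, &fp, filter_exp, 0, PCAP_NETMASK_UNKNOWN) == -1) {
    pcap_perror(handle, "pcap_compile");  /* report the error here instead of failing later in pcap_setfilter */
    exit(EXIT_FAILURE);
}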

Problem in reading packets from tunnel using read()

I have been trying to receive and process packets from a tunnel. There are separate blocks for processing v4 and v6 packets; if a packet does not fall into either category, it is dropped. For me, every packet is being dropped during execution. When I used Wireshark to capture the packets from the tunnel, I noticed a difference in packet size: for example, the length of a received packet in Wireshark is 60, whereas the program prints 64 as the length. I see the same 4-byte difference in all packets. I am unable to find out what I am doing wrong here. Would anyone help me? I also attached screenshots of Wireshark and the program execution for perusal.
Image: Captured packets from tunnel through wireshark and program
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/if_tun.h>

#define MTU 1600

void processPacket(const uint8_t *packet, const size_t len) {
    // 1st octet identifies the IP version
    uint8_t version = (*packet) >> 4;
    //...
    printf("IP version - %d\n", version);
    if (version == 4) {
        // ipv4 packet process ...
    } else if (version == 6) {
        // ipv6 packet process ...
    } else {
        // drop packet
        printf("Unknown IP version, drop packet\n");
    }
}

int main() {
    struct ifreq ifr;
    int fd, err;
    uint8_t *buffer = (uint8_t *)(malloc(MTU));
    ssize_t len;

    if ( (fd = open("/dev/net/tun", O_RDWR)) == -1 ) {
        perror("Unable to open /dev/net/tun");
        exit(EXIT_FAILURE);
    }
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TUN;
    strncpy(ifr.ifr_name, "tun0", IFNAMSIZ);
    if ( (err = ioctl(fd, TUNSETIFF, (void *) &ifr)) == -1 ) {
        perror("Error encountered during ioctl TUNSETIFF");
        close(fd);
        exit(EXIT_FAILURE);
    }
    printf("Device tun0 opened\n");
    while (1) {
        len = read(fd, buffer, MTU);
        printf("Read %zd bytes from tun0\n", len);
        processPacket(buffer, len);
    }
    printf("\nPress any key to exit...");
    getchar();
    close(fd);
}
The tunnel device prepends the IP packet with additional information (a 4-byte packet information header), so the first byte of the buffer is not the IP version. If you don't need it, you can add IFF_NO_PI to ifr_flags. See the kernel documentation.
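Two possible ways to handle it, as a sketch based on the code above (struct tun_pi comes from <linux/if_tun.h>; the variable names are from the question):
/* Option 1: ask the kernel not to prepend the packet information header at all */
ifr.ifr_flags = IFF_TUN | IFF_NO_PI;

/* Option 2: keep the header and skip it before parsing; struct tun_pi is
   2 bytes of flags plus 2 bytes of protocol, i.e. the 4 extra bytes observed */
len = read(fd, buffer, MTU);
if (len > (ssize_t)sizeof(struct tun_pi))
    processPacket(buffer + sizeof(struct tun_pi), len - sizeof(struct tun_pi));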

Why is pcap only capturing PTP messages in live capture mode?

I am using an Intel i210-T1 network interface card.
I am running the Avnu gptp client (https://github.com/Avnu/gptp) with:
sudo ./daemon_cl -S -V
The other side is a gPTP master.
I want to live-capture incoming UDP packets on a network interface with hardware timestamps.
I can see the UDP packets with Wireshark, so the packets are actually on the wire.
My problem is that pcap doesn't return any packets other than PTP (ethertype 0x88f7).
Is this a bug, or am I using pcap the wrong way?
I wrote a minimal example to show my problem.
The code prints:
enp1s0
returnvalue pcap_set_tstamp_type: 0
returnvalue pcap_set_tstamp_precision: 0
returnvalue pcap_activate: 0
and afterwards only:
packet received with ethertype:88f7
#include <cstdio>
#include <cstdlib>
#include <string>
#include <iostream>
#include <netinet/in.h>
#include <netinet/if_ether.h>
#include <pcap/pcap.h>

int main(int argc, char **argv)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    std::string dev = "enp1s0";
    pcap_t* pcap_dev;
    int i = 0;

    printf("%s\n", dev.c_str());
    pcap_dev = pcap_create(dev.c_str(), errbuf);
    if (pcap_dev == NULL)
    {
        printf("pcap_create(): %s\n", errbuf);
        exit(1);
    }
    i = pcap_set_tstamp_type(pcap_dev, PCAP_TSTAMP_ADAPTER_UNSYNCED);
    printf("returnvalue pcap_set_tstamp_type: %i\n", i);
    i = pcap_set_tstamp_precision(pcap_dev, PCAP_TSTAMP_PRECISION_NANO);
    printf("returnvalue pcap_set_tstamp_precision: %i\n", i);
    i = pcap_activate(pcap_dev);
    printf("returnvalue pcap_activate: %i\n", i);

    struct pcap_pkthdr* pkthdr;
    const u_char* bytes;
    while (pcap_next_ex(pcap_dev, &pkthdr, &bytes))
    {
        struct ether_header* ethhdr = (struct ether_header*) bytes;
        std::cout << "packet received with ethertype:" << std::hex << ntohs(ethhdr->ether_type) << std::endl;
    }
}
The solution is to enable promiscuous mode with pcap_set_promisc:
https://linux.die.net/man/3/pcap_set_promisc
Promiscuous mode disables filtering by the lower layers, so you get every frame arriving on the interface.
int pcap_set_promisc(pcap_t *p, int promisc);
pcap_set_promisc() sets whether promiscuous mode should be set on a capture handle when the handle is activated. If promisc is non-zero, promiscuous mode will be set, otherwise it will not be set.
Return Value
pcap_set_promisc() returns 0 on success or PCAP_ERROR_ACTIVATED if called on a capture handle that has been activated.
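As a sketch of where this fits in the code above: the call has to come after pcap_create() but before pcap_activate(), while the handle is not yet activated (names taken from the question):
i = pcap_set_promisc(pcap_dev, 1);   // request promiscuous mode on the not-yet-activated handle
printf("returnvalue pcap_set_promisc: %i\n", i);
// ... then pcap_set_tstamp_type, pcap_set_tstamp_precision and pcap_activate as before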

SCTP: What should the sctp_status.sstate value of an SCTP socket be after a successful connect() call?

I'm trying to connect to a remote peer (to which I have no direct access other than connecting via socket and ping) via SCTP. Assuming that I have connected successfully, what should be the value of my sctp_status.sstate if I call getsockopt()? Mine is SCTP_COOKIE_ECHOED (3) according to sctp.h. Is that correct? Shouldn't it be SCTP_ESTABLISHED?
Because I tried sending message to the remote peer with this code:
ret = sctp_sendmsg (connSock, (void *) data, (size_t) strlen (data), (struct sockaddr *) &servaddr, sizeof (servaddr), 46, 0, 0, 0, 0);
It returned the number of bytes I tried sending. Then, when I tried to catch any response:
ret = sctp_recvmsg (connSock, (void *) reply, sizeof (reply), NULL,
NULL, NULL, &flags);
It returns -1 with errno of ECONNRESET(104). What are the possible mistakes in my code, or maybe in my flow? Did I miss something?
Thanks in advance for answering. Will gladly appreciate that. :)
Update: Below is my client code for connecting to the remote peer. It's actually a Node addon, since SCTP is not fully supported in Node; I'm using the lksctp-tools package for the headers.
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <netinet/sctp.h>
#include <arpa/inet.h>
#include <signal.h>

#define MAX_BUFFER 1024

int connSock = 0;
int connect(char host[], int port, char remote_host[], int remote_port, int timeout) {
    int ret, flags;
    fd_set rset, wset;
    struct sockaddr_in servaddr;
    struct sockaddr_in locaddr;
    struct sctp_initmsg initmsg;
    struct timeval tval;
    struct sctp_status status;
    socklen_t opt_len;

    errno = 0;
    connSock = socket (AF_INET, SOCK_STREAM, IPPROTO_SCTP);
    flags = fcntl(connSock, F_GETFL, 0);
    fcntl(connSock, F_SETFL, flags | O_NONBLOCK);
    if (connSock == -1)
    {
        return (-1);
    }
    memset(&locaddr, 0, sizeof(locaddr));
    locaddr.sin_family = AF_INET;
    locaddr.sin_port = htons(port);
    locaddr.sin_addr.s_addr = inet_addr(host);
    ret = bind(connSock, (struct sockaddr *)&locaddr, sizeof(locaddr));
    if (ret == -1)
    {
        return (-1);
    }
    memset (&initmsg, 0, sizeof (initmsg));
    initmsg.sinit_num_ostreams = 5;
    initmsg.sinit_max_instreams = 5;
    initmsg.sinit_max_attempts = 10;
    ret = setsockopt(connSock, IPPROTO_SCTP, SCTP_INITMSG, &initmsg, sizeof(initmsg));
    if (ret == -1)
    {
        return (-1);
    }
    memset (&servaddr, 0, sizeof (servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_port = htons (remote_port);
    servaddr.sin_addr.s_addr = inet_addr (remote_host);
    if((ret = connect (connSock, (struct sockaddr *) &servaddr, sizeof (servaddr))) < 0)
        if (errno != EINPROGRESS)
            return (-1);
    if (ret == 0) {
        fcntl(connSock, F_SETFL, flags);
        return 0;
    }
    FD_ZERO(&rset);
    FD_SET(connSock, &rset);
    wset = rset;
    tval.tv_sec = timeout;
    tval.tv_usec = 0;
    ret = select(connSock+1, &rset, &wset, NULL, timeout ? &tval : NULL);
    if (ret == 0) {
        close(connSock);
        errno = ETIMEDOUT;
        return(-1);
    }
    else if (ret < 0) {
        return(-1);
    }
    fcntl(connSock, F_SETFL, flags);
    opt_len = (socklen_t) sizeof(struct sctp_status);
    getsockopt(connSock, IPPROTO_SCTP, SCTP_STATUS, &status, &opt_len);
    printf ("assoc id = %d\n", status.sstat_assoc_id);
    printf ("state = %d\n", status.sstat_state);
    printf ("instrms = %d\n", status.sstat_instrms);
    printf ("outstrms = %d\n", status.sstat_outstrms);
    return 0;
}
int sendMessage(char remote_host[], int remote_port, char data[]) {
    int ret, flags;
    struct sockaddr_in servaddr;
    char reply[1024];

    errno = 0;
    memset (&servaddr, 0, sizeof (servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_port = htons (remote_port);
    servaddr.sin_addr.s_addr = inet_addr (remote_host);
    printf("\nSending %s (%li bytes)", data, strlen(data));
    ret = sctp_sendmsg (connSock, (void *) data, (size_t) strlen (data),
                        (struct sockaddr *) &servaddr, sizeof (servaddr), 46, 0, 0, 0, 0);
    if (ret == -1)
    {
        printf("\nError sending errno(%d)", errno);
        return -1;
    }
    else {
        ret = sctp_recvmsg (connSock, (void *) reply, sizeof (reply), NULL,
                            NULL, NULL, &flags);
        if (ret == -1)
        {
            printf("\nError receiving errno(%d)", errno);
            return -1;
        }
        else {
            printf("\nServer replied with %s", reply);
            return 0;
        }
    }
}

int getSocket() {
    return connSock;
}
I don't know if there's anything significant I need to set before connecting that I missed. I got the snippets from different sources, so it's quite messy.
Another update, here's the tshark log of that code when executed:
3336.919408 local -> remote SCTP 82 INIT
3337.006690 remote -> local SCTP 810 INIT_ACK
3337.006727 local -> remote SCTP 774 COOKIE_ECHO
3337.085390 remote -> local SCTP 50 COOKIE_ACK
3337.086650 local -> remote SCTP 94 DATA
3337.087277 remote -> local SCTP 58 ABORT
3337.165266 remote -> local SCTP 50 ABORT
Detailed tshark log of this here.
Looks like the remote sent its COOKIE_ACK chunk, but my client failed to set its state to ESTABLISHED (I double-checked the sstate value of 3 here).
If the association setup process completed, the state should be SCTP_ESTABLISHED. SCTP_COOKIE_ECHOED indicates that the association has not been completely established: the originating side (your localhost in this case) has sent (once or several times) a COOKIE_ECHO chunk which has not yet been acknowledged by a COOKIE_ACK from the remote end.
You can send messages in this state (SCTP will simply buffer them until it gets the COOKIE_ACK and send them later on).
It is hard to say what went wrong based on the information you provided. At this stage it is probably worth diving into the Wireshark trace to see what the remote side replies to your COOKIE_ECHO.
Also, if you can share your client/server side code, that might help to identify the root cause.
UPDATE #1:
It should also be noted that the application can abort the association itself (e.g. if this association is not configured on that server). If you are trying to connect to a random server (rather than your specific one), that is quite possible and actually makes sense in your case. In this scenario the state of the association on your side is COOKIE_ECHOED because the COOKIE_ACK has not arrived yet (just a race condition). As I said previously, SCTP happily accepts your data in this state and simply buffers it until it receives the COOKIE_ACK. SCTP on the remote side sends the COOKIE_ACK straight away, even before the application gets execution control in accept(). If the application then decides to terminate the association in an ungraceful way, it sends an ABORT (that is the first ABORT in your trace). Your side has not received this ABORT yet and sends a DATA chunk. Since the remote side considers the association already terminated, it cannot process the DATA chunk, so it treats it as "out of the blue" (see RFC 4960 chapter 8.4) and sends another ABORT with the T bit set to 1.
I guess this is what happened in your case. You can confirm it easily by looking into the Wireshark trace.
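If it helps to verify the state transition, here is a minimal sketch of re-reading SCTP_STATUS later on (socket name and headers as in the question's code; the state constants come from netinet/sctp.h, as the question itself notes):
struct sctp_status status;
socklen_t opt_len = sizeof(status);
if (getsockopt(connSock, IPPROTO_SCTP, SCTP_STATUS, &status, &opt_len) == 0) {
    if (status.sstat_state == SCTP_ESTABLISHED)
        printf("association fully established\n");
    else if (status.sstat_state == SCTP_COOKIE_ECHOED)
        printf("COOKIE_ECHO sent, still waiting for COOKIE_ACK\n");
}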

TCP_FASTOPEN undeclared

I'm coding a small server that uses the TCP Fast Open option through setsockopt(). However, I am getting this error from gcc:
$gcc server.c
server.c: In function 'main':
server.c:35:34: error: 'TCP_FASTOPEN' undeclared (first use in this function)
if (setsockopt(sock, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen) == -1)
Here is the server's code:
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/ip.h>
int main(int argc, char *argv[])
{
    short port = 45000;
    int max_conn = 10;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd == -1)
    {
        printf("Couldn't create socket: %s\n", strerror(errno));
        return -1;
    }

    struct sockaddr_in ssi;
    ssi.sin_family = AF_INET;
    ssi.sin_port = htons(port);
    ssi.sin_addr.s_addr = INADDR_ANY;
    if (bind(fd, (struct sockaddr *)&ssi, sizeof(struct sockaddr_in)) != 0)
    {
        printf("Couldn't bind socket: %s\n", strerror(errno));
        return -1;
    }

    // TFO
    int qlen = 5;
    if (setsockopt(fd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen)) == -1)
    {
        printf("Couldn't set TCP_FASTOPEN option: %s\n", strerror(errno));
        return -1;
    }

    if (listen(fd, max_conn) != 0)
    {
        printf("Couldn't listen on socket: %s\n", strerror(errno));
        return -1;
    }

    struct sockaddr_in csi;
    socklen_t clen = sizeof(csi);
    int cfd = accept(fd, (struct sockaddr *)&csi, &clen);
    return 0;
}
Why does gcc give this error?
The macro TCP_FASTOPEN is located in include/uapi/linux/tcp.h in the kernel, and its value is 23, so I tried to redefine it in my code. It then compiles and runs, but the option is not sent by the server as an answer to a TFO request (in the SYN-ACK).
Does anybody know why? Is this related to the compilation issue?
/proc/sys/net/ipv4/tcp_fastopen needs to be set to 2 (or 3) to enable server-side use of the TCP Fast Open option:
The tcp_fastopen file can be used to view or set a value that enables the operation of different parts of the TFO functionality. Setting bit 0 (i.e., the value 1) in this value enables client TFO functionality, so that applications can request TFO cookies. Setting bit 1 (i.e., the value 2) enables server TFO functionality, so that server TCPs can generate TFO cookies in response to requests from clients. (Thus, the value 3 would enable both client and server TFO functionality on the host.)
Also, the TCP_FASTOPEN macro needs to be pulled in with #include <netinet/tcp.h>.
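Putting both parts together, a minimal sketch under those assumptions, using the listening socket fd from the question:
/* enable server-side TFO once on the host, e.g. as root:
 *   echo 2 > /proc/sys/net/ipv4/tcp_fastopen     (or 3 for client + server)
 */
#include <netinet/tcp.h>   /* declares TCP_FASTOPEN */

int qlen = 5;              /* max queue of pending TFO requests */
if (setsockopt(fd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen)) == -1)
    printf("Couldn't set TCP_FASTOPEN option: %s\n", strerror(errno));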
Looks like your glibc doesn't have support for TCP_FASTOPEN even if your kernel does (since it's not available when you include the standard socket headers). So you can't really use it through glibc's glue code (of which setsockopt() is part).