I'm trying to write a simple SIP sniffer using libtins, which works nicely.
I then try to pass the received packet to libosip for parsing.
Although it appears to parse the message, the program dies silently.
I've no idea what could be wrong here; some help would be greatly appreciated!
This is my source:
#include <iostream>
#include "tins/tins.h"
#include <osip2/osip.h>
#include <osipparser2/osip_message.h>
#include <vector>
using namespace Tins;
bool invalidChar (char c);
void stripUnicode(std::string & str);
bool callback(const PDU &pdu)
{
const IP &ip = pdu.rfind_pdu<IP>(); // Find the IP layer
const UDP &udp = pdu.rfind_pdu<UDP>(); // Find the UDP layer
osip_message *sip;
osip_message_init(&sip);
// First here we print Source and Destination Information
std::cout << ip.src_addr() << ':' << udp.sport() << " -> "
<< ip.dst_addr() << ':' << udp.dport() << std::endl;
// Extract the RawPDU object.
const RawPDU& raw = udp.rfind_pdu<RawPDU>();
// Finally, take the payload (this is a vector<uint8_t>)
const RawPDU::payload_type& payload = raw.payload();
// We create a string message
std::string message( payload.begin(), payload.end() );
std::string sip_message;
// Try to parse the message
std::cout << "copying message with len " << message.size() << std::endl;
const char *msg = message.c_str();
std::cout << "parsing message with size " << strlen(msg) << std::endl;
osip_message_parse( sip, msg, strlen( msg ) );
std::cout << "freeing message" << std::endl;
osip_message_free(sip);
return true;
}
int main(int argc, char *argv[])
{
if(argc != 2) {
std::cout << "Usage: " << *argv << " <interface>" << std::endl;
return 1;
}
// Sniff on the provided interface in promiscuous mode
Sniffer sniffer(argv[1], Sniffer::PROMISC);
// Only capture packets sent to or from port 5060
sniffer.set_filter("port 5060");
// Start the capture
sniffer.sniff_loop(callback);
}
The output is this:
1.2.3.4:5060 -> 4.3.2.1:5060
copying message with len 333
parsing message with size 333
And it dies silently.
If I remove the line:
osip_message_parse( sip, msg, strlen( msg ) );
It keeps going perfectly...
Thanks a lot for your help!
I finally found the problem.
It is necessary to initialise the parser with
parser_init();
It's not documented anywhere :(
Now it's not dying on me anymore, but the parsing is not working properly. I need to investigate more.
Thanks everyone!
David
First, if memory corruption happens earlier, the crash may show up in osip_message_parse even though that call is not the origin of the corruption.
In order to test a SIP message with libosip, you can go into the build directory of osip and create a file containing your SIP message, e.g. mymessage.txt:
$> ./src/test/torture_test mymessage.txt 0 -v
and even for a deeper check with valgrind:
$> valgrind ./src/test/.libs/torture_test mymessage.txt 0 -v
If your code fails for every SIP message, my guess is that the issue is memory corruption outside libosip.
You do have another bug with the size of the SIP message:
osip_message_parse( sip, msg, strlen( msg ) );
A SIP message can contain binary data with \0 characters inside, so your code should use the exact length of the binary payload, not strlen(). Such a change is required (but won't fix your main issue):
osip_message_parse( sip, msg, payload.end() - payload.begin() );
I also advise you to try the latest osip git and complete your question with a copy of a SIP message failing.
EDIT: As David found, the init wasn't done and that was the origin of the issue. However, the correct way to init is as specified by the first line of the documentation:
How-To initialize libosip2
When using osip, your first task is to initialize the parser and the state machine. This must be done prior to any use of libosip2.
#include <sys/time.h>
#include <osip2/osip.h>
int i;
osip_t *osip;
i=osip_init(&osip);
if (i!=0)
return -1;
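Adapted to the sniffer above, here is a minimal sketch of what main() could look like with that init in place. It assumes the callback from the question is kept unchanged; osip_release() at the end is just optional cleanup.
#include <iostream>
#include "tins/tins.h"
#include <osip2/osip.h>
using namespace Tins;
// callback() is the function from the question, unchanged
bool callback(const PDU &pdu);
int main(int argc, char *argv[])
{
    if (argc != 2) {
        std::cout << "Usage: " << *argv << " <interface>" << std::endl;
        return 1;
    }
    // Initialise the parser and state machine once,
    // before any osip_message_parse() call in the callback
    osip_t *osip;
    if (osip_init(&osip) != 0) {
        std::cerr << "osip_init failed" << std::endl;
        return 1;
    }
    Sniffer sniffer(argv[1], Sniffer::PROMISC);
    sniffer.set_filter("port 5060");
    sniffer.sniff_loop(callback);
    // Release the osip context once the capture loop ends
    osip_release(osip);
    return 0;
}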
I'm developing a C++ timestamp parser that checks whether any given string can be a timestamp representation, covering various formats.
I've tested some libraries and I'm finally using the single-header one developed by Howard Hinnant.
The only problem is with the Kitchen format 03:04AM (HH:MM<AM|PM>).
This is the code that I'm using:
#include "date.h"
#include <iostream>
#include <string>
#include <sstream>
#include <unordered_map>
int main()
{
std::string const fmt = "%I:%M%p" ;
std::string const time;
std::string const name;
date::fields<std::chrono::nanoseconds> fds {};
std::chrono::minutes offset {};
std::string abbrev;
const std::string in = "3:04a.m.";
std::stringstream ss(in);
std::unordered_map<std::string, std::string> result;
date::from_stream(ss, fmt.c_str(), fds, &abbrev, &offset);
if (!ss.fail())
{
if (fds.has_tod && fds.tod.in_conventional_range())
{
std::cout << "result hour " << std::to_string(fds.tod.hours().count()) << std::endl;
std::cout << ". minutes " << std::to_string(fds.tod.minutes().count())<< std::endl;
std::cout << ". seconds " << std::to_string(fds.tod.seconds().count())<< std::endl;
}
}
else
{
std::cout << "failed" << std::endl;
}
}
What am I doing wrong? The code works great with other formats. Is there a chance that parsing a date requires more fields (year, month, day) in order to process it fully?
Hope I made myself clear, thanks in advance!
In the "C" locale, %p refers to one of AM or PM. You have "a.m.". Removing the '.' works for me.
There is one other caveat: The POSIX spec for strptime specifies that case should be ignored. And my date lib follows the POSIX spec on this. However by default this library forwards to your std::library for this functionality. And some implementations didn't get the memo on this. They may not accept lower case.
If this happens for you, you can work around this std::lib bug by compiling with -DONLY_C_LOCALE on the command line (or set ONLY_C_LOCALE=1 in your IDE wherever macros are set). This tells the date lib to do the %p parse itself, instead of forwarding to the std::lib. And it will correctly do a case-insensitive parse. However it assumes that the "C" locale is in effect.
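Here is a minimal sketch of that workaround, assuming the input really can arrive as "a.m."/"p.m." and therefore is normalized to "AM"/"PM" before the %I:%M%p parse (the rest mirrors the code from the question):
#include "date.h"
#include <algorithm>
#include <cctype>
#include <chrono>
#include <iostream>
#include <sstream>
#include <string>
int main()
{
    std::string in = "3:04a.m.";
    // Normalize "a.m."/"p.m." to "AM"/"PM" so that %p can match it
    in.erase(std::remove(in.begin(), in.end(), '.'), in.end());
    std::transform(in.begin(), in.end(), in.begin(),
                   [](unsigned char c) { return std::toupper(c); });
    std::stringstream ss(in);
    date::fields<std::chrono::nanoseconds> fds {};
    std::chrono::minutes offset {};
    std::string abbrev;
    date::from_stream(ss, "%I:%M%p", fds, &abbrev, &offset);
    if (!ss.fail() && fds.has_tod && fds.tod.in_conventional_range())
        std::cout << fds.tod.hours().count() << ":"
                  << fds.tod.minutes().count() << std::endl;
    else
        std::cout << "failed" << std::endl;
}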
I am quite new to using pcap lib, so please bear with me.
I am trying to use pcap_getnonblock function, the documentation says the following:
pcap_getnonblock() returns the current 'non-blocking' state of
the capture descriptor; it always returns 0 on 'savefiles' . If
there is an error, PCAP_ERROR is returned and errbuf is filled in
with an appropriate error message.
errbuf is assumed to be able to hold at least PCAP_ERRBUF_SIZE
chars.
I got -3 returned and the errbuf is an empty string; I couldn't understand the meaning of such a result.
I believe this caused a socket error: 10065.
This problem happened only once and I could not reproduce it, but it would still be great to find its cause so I can prevent it in future executions.
Thanks in advance.
pcap_getnonblock() can return -3 - that's PCAP_ERROR_NOT_ACTIVATED. Unfortunately, that's not documented; I'll fix that.
Here's a minimal reproducible example that demonstrates this:
#include <pcap/pcap.h>
#include <stdio.h>
int
main(int argc, char **argv)
{
pcap_t *pcap;
char errbuf[PCAP_ERRBUF_SIZE];
if (argc != 2) {
fprintf(stderr, "Usage: this_program <interface_name>\n");
return 1;
}
pcap = pcap_create(argv[1], errbuf);
if (pcap == NULL) {
fprintf(stderr, "this_program: pcap_create(%s) failed: %s\n",
argv[1], errbuf);
return 2;
}
printf("pcap_getnonblock() returns %d on non-activated pcap_t\n",
pcap_getnonblock(pcap, errbuf));
return 0;
}
(Yes, that's minimal: 1) names of interfaces are OS-dependent, so the interface has to be a command-line argument, and 2) if you don't run the program correctly, it should let you know what's happening, so you know what you have to do in order to reproduce the problem.)
Perhaps pcap_getnonblock() and pcap_setnonblock() should be changed so that you can set non-blocking mode before activating the pcap_t, so that, when activated, it will be in non-blocking mode. It doesn't work that way currently, however.
I.e., you're allocating a pcap_t with pcap_create(), but you're not activating it with pcap_activate(). You need to do both in order to have a pcap_t on which you can capture.
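For reference, here is a sketch of the order that does work, under the same assumptions as above (interface name from the command line, minimal error handling):
#include <pcap/pcap.h>
#include <stdio.h>
int
main(int argc, char **argv)
{
	pcap_t *pcap;
	char errbuf[PCAP_ERRBUF_SIZE];
	if (argc != 2) {
		fprintf(stderr, "Usage: this_program <interface_name>\n");
		return 1;
	}
	pcap = pcap_create(argv[1], errbuf);
	if (pcap == NULL) {
		fprintf(stderr, "pcap_create(%s) failed: %s\n", argv[1], errbuf);
		return 2;
	}
	/* Activate the handle first; pcap_getnonblock()/pcap_setnonblock()
	   only make sense on an activated pcap_t */
	if (pcap_activate(pcap) < 0) {
		fprintf(stderr, "pcap_activate failed: %s\n", pcap_geterr(pcap));
		return 2;
	}
	if (pcap_setnonblock(pcap, 1, errbuf) == -1) {
		fprintf(stderr, "pcap_setnonblock failed: %s\n", errbuf);
		return 2;
	}
	printf("pcap_getnonblock() now returns %d\n",
	    pcap_getnonblock(pcap, errbuf));
	pcap_close(pcap);
	return 0;
}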
When importing a PLY file into my program I get an error saying that something went wrong, with the following message:
C:\Users\...\data\apple.ply:8: property 'list uint8 int32 vertex_indices' of element 'face' is not handled
I used a sample ply file from: https://people.sc.fsu.edu/~jburkardt/data/ply/apple.ply
I have already tried different PLY files from different sources but none of them work. When debugging the program, io::loadPLYFile doesn't generate a valid point cloud. The runtime library for PCL and for my program is the same.
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/io/ply_io.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/features/normal_3d_omp.h>
#include <pcl/surface/marching_cubes_rbf.h>
using namespace pcl;
using namespace std;
int
main (int argc, char** argv)
{
PointCloud<PointXYZ>::Ptr cloud (new PointCloud<PointXYZ>);
std::cout << "Start Debug?" << std::endl;
std::cin.ignore();
if(io::loadPLYFile<PointXYZ> (argv[1], *cloud) == -1){
cout << "ERROR: couldn't find file" << endl;
return (1);
} else {
cout << "loaded" << endl;
NormalEstimationOMP<PointXYZ, Normal> ne;
search::KdTree<PointXYZ>::Ptr tree1 (new search::KdTree<PointXYZ>);
tree1->setInputCloud (cloud);
ne.setInputCloud (cloud);
ne.setSearchMethod (tree1);
ne.setKSearch (20);
PointCloud<Normal>::Ptr normals (new PointCloud<Normal>);
ne.compute (*normals);
// ... (the rest of the surface reconstruction was cut off in the post)
}
return (0);
}
I would expect the PCL function io::loadPLYFile to load the files properly as described in the documentation http://docs.pointclouds.org/1.3.1/group__io.html
The console output is just a warning, as kanstar already suggested! It can easily be ignored. The reason my program crashed in Debug but not in Release was that Visual Studio linked against the wrong library version of Boost, which resulted in the crash. Fixing the linkage made pcl::NormalEstimationOMP work as expected.
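For anyone hitting the same warning, here is a minimal sketch of a check that confirms the message is harmless when only the vertex data is needed (as in the code above); the unhandled 'face' property is simply skipped by the loader:
#include <iostream>
#include <pcl/io/ply_io.h>
#include <pcl/point_types.h>
int main (int argc, char** argv)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
  if (argc != 2 || pcl::io::loadPLYFile<pcl::PointXYZ> (argv[1], *cloud) == -1) {
    std::cerr << "ERROR: couldn't load file" << std::endl;
    return (1);
  }
  // The 'face' warning only means the polygon list is ignored;
  // the vertices themselves are still loaded into the cloud.
  std::cout << "loaded " << cloud->size() << " points" << std::endl;
  return (0);
}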
I am new to Modbus. I have spent hours reading the help(?) files, which never seem to give you an example! I am using C on a Raspberry Pi 3 and have installed libmodbus. I am trying to talk to an epSolar solar panel controller via an FTDI USB-to-RS485 converter.
The epSolar docs say that the Read Input registers start at address 0x3000 and continue to 0x311D. I am trying to read 0x3104.
I modified the code below. It connects to the device but trying to read input register 0x04 always returns -1:
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>
#include <errno.h>
#include <modbus.h>
enum {TCP, RTU};
int main(int argc, char *argv[])
{
int socket;
modbus_t *ctx;
modbus_mapping_t *mb_mapping;
int rc;
int use_backend;
int i;
uint16_t tab_reg[64];
use_backend = RTU;
printf("Waiting for Serial connection\n");
ctx = modbus_new_rtu("/dev/SOLAR", 115200, 'N', 8, 1);
modbus_set_slave(ctx, 0);
//modbus_connect(ctx);
if(modbus_connect(ctx) == -1)
{
fprintf(stderr, "Serial connection failed:
%s\n", modbus_strerror(errno));
modbus_free(ctx);
return -1;
}
printf("Serial connection started!\n");
mb_mapping = modbus_mapping_new(MODBUS_MAX_READ_BITS, 0,
MODBUS_MAX_READ_REGISTERS, 0);
if(mb_mapping == NULL)
{
fprintf(stderr, "Failed to allocate the mapping: %s\n",
modbus_strerror(errno));
modbus_free(ctx);
return -1;
}
rc = modbus_read_input_registers(ctx, 1, 0x0A, tab_reg);
if(rc == -1)
{
fprintf(stderr, "%s\n", modbus_strerror(errno));
return -1;
}
for(i=0; i < rc; i++)
printf("reg[%d]=%d (0x%X)\n", i, tab_reg[i], tab_reg[i]);
modbus_mapping_free(mb_mapping);
modbus_free(ctx);
modbus_close(ctx);
return 0;
}
It connects fine and allocates the mapping, but rc is always -1, with an error message saying the connection has timed out.
I have run out of ideas and feel like I am navigating through treacle!
Any help most appreciated.
I am also new to Modbus. From my experience so far: make sure you are allocating enough memory in tab_reg for storing the results. Also try turning debug mode on, i.e. modbus_set_debug(ctx, TRUE);, to check the request and response frames.
I know this is a really old question, but hopefully this answer will help anyone who lands here via a Google search.
I can see a few points that need some help.
As Saad commented above, the Modbus server ID is incorrect. ID 0 is reserved for broadcast messages, which a slave will not respond to. Find out what the Modbus ID of the target device is, and use that.
I think what's tricking you is that you'll also always get a proper "connect" as long as the serial port you provided is valid. This isn't a connection to any particular device so much as it's a connection to the Modbus network port. You're getting a timeout because a response was expected by libmodbus, but no response was received on the wire.
There are several other little troubles in the code presented, but given the age of this post I almost feel like I'm nitpicking something the OP has probably already solved. The big problem is the unworkable slave ID. Other minor problems include: unnecessary use of modbus_mapping_t (a struct intended for servers/slaves) and possible misallocation of that mapping (no space allocated for input registers).
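To make that concrete, here is a trimmed sketch of the client side with those fixes applied. The slave ID of 1 and the register address 0x3104 are assumptions taken from the question; check the epSolar docs for your device's actual ID and register map.
#include <stdio.h>
#include <stdint.h>
#include <errno.h>
#include <modbus.h>
int main(void)
{
    uint16_t tab_reg[16];
    int rc, i;
    modbus_t *ctx = modbus_new_rtu("/dev/SOLAR", 115200, 'N', 8, 1);
    modbus_set_slave(ctx, 1);      /* 0 is broadcast; use the device's real ID */
    modbus_set_debug(ctx, TRUE);   /* print the raw request/response frames */
    if (modbus_connect(ctx) == -1)
    {
        fprintf(stderr, "Serial connection failed: %s\n", modbus_strerror(errno));
        modbus_free(ctx);
        return -1;
    }
    /* No modbus_mapping_t is needed on the client (master) side */
    rc = modbus_read_input_registers(ctx, 0x3104, 1, tab_reg);
    if (rc == -1)
        fprintf(stderr, "%s\n", modbus_strerror(errno));
    else
        for (i = 0; i < rc; i++)
            printf("reg[%d]=%d (0x%X)\n", i, tab_reg[i], tab_reg[i]);
    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}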
I'm trying to pass arguments in Xcode, and understand you need to add them from the Args tab, using the Get Info button, in the Executables section of the Groups and Files pane. I'm trying to see if I can get it to work, but am having some difficulty. My program is simply:
#include <iostream>
#include <ostream>
using namespace std;
int main(int argc, char *argv[]) {
for (int i = 0; i < argc; i++) {
cout << argv[i];
}
return 0;
}
And in the Args tab, I have the number 2 and then on another line the number 1. I do not get any output when I run the program. What am I doing wrong? Thanks!
Your code works fine and it displays the arguments.
You may want to print a new line after each argument to make the output more readable:
cout << argv[i] << "\n";
Output is visible in the console (use Command+Shift+R to bring up the console).