I tried to compile a static binary using the latest GStreamer libs (1.8.0). I want to take an incoming RTSP stream and write it into a file. The pipeline is:
rtspsrc location=rtsp://X.X.X.X/ protocols=GST_RTSP_LOWER_TRANS_TCP ! queue ! rtph264depay ! h264parse ! flvmux name=\"mux\" streamable=\"true\" ! fakesink
Running compiled binary results in error:
rtpbasedepayload gstrtpbasedepayload.c:484:gst_rtp_base_depayload_handle_buffer: error: No RTP format was negotiated.
#include <gst/gst.h>
#include <string.h>

/* CustomData, cb_message() and registerGstStaticPlugins() are defined
 * elsewhere in the program; only main() is shown here. */

int main (int argc, char *argv[]) {
  GstElement *pipeline;
  GstBus *bus;
  GstStateChangeReturn ret;
  GMainLoop *main_loop;
  CustomData data;

  /* Initialize GStreamer */
  gst_init (&argc, &argv);
  registerGstStaticPlugins();

  /* Initialize our data structure */
  memset (&data, 0, sizeof (data));

  /* Build the pipeline */
  pipeline = gst_parse_launch ("rtspsrc location=rtsp://X.X.X.X/ protocols=GST_RTSP_LOWER_TRANS_TCP ! queue ! rtph264depay ! h264parse ! flvmux name=\"mux\" streamable=\"true\" ! fakesink", NULL);
  bus = gst_element_get_bus (pipeline);

  /* Start playing */
  ret = gst_element_set_state (pipeline, GST_STATE_PLAYING);
  if (ret == GST_STATE_CHANGE_FAILURE) {
    g_printerr ("Unable to set the pipeline to the playing state.\n");
    gst_object_unref (pipeline);
    return -1;
  } else if (ret == GST_STATE_CHANGE_NO_PREROLL) {
    data.is_live = TRUE;
  }

  main_loop = g_main_loop_new (NULL, FALSE);
  data.loop = main_loop;
  data.pipeline = pipeline;

  gst_bus_add_signal_watch (bus);
  g_signal_connect (bus, "message", G_CALLBACK (cb_message), &data);
  g_main_loop_run (main_loop);

  /* Free resources */
  g_main_loop_unref (main_loop);
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
Complete output: http://pastebin.com/Ln06d0iP
Since the source is RTSP with SDP data, I don't need to set caps manually. The interesting part is that running this pipeline with GStreamer 0.10 works fine.
Fixed it myself. GStreamer doesn't complain about missing plugins if you don't use them in the pipeline directly. Statically registering the udp and rtpmanager plugins (which rtspsrc uses internally) solved the problem.
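For reference, a minimal sketch of what a registerGstStaticPlugins() along these lines could look like, using the standard GST_PLUGIN_STATIC_DECLARE/GST_PLUGIN_STATIC_REGISTER macros. The udp and rtpmanager entries are the two that fixed this case; the other plugin names are my guesses for the remaining pipeline elements (queue/fakesink, rtspsrc, rtph264depay, h264parse, flvmux) and depend on how your GStreamer was built:

#include <gst/gst.h>

/* declare the register functions of the statically linked plugins */
GST_PLUGIN_STATIC_DECLARE (coreelements);    /* queue, fakesink */
GST_PLUGIN_STATIC_DECLARE (rtsp);            /* rtspsrc */
GST_PLUGIN_STATIC_DECLARE (rtp);             /* rtph264depay */
GST_PLUGIN_STATIC_DECLARE (rtpmanager);      /* used internally by rtspsrc */
GST_PLUGIN_STATIC_DECLARE (udp);             /* used internally by rtspsrc */
GST_PLUGIN_STATIC_DECLARE (videoparsersbad); /* h264parse */
GST_PLUGIN_STATIC_DECLARE (flv);             /* flvmux */

static void
registerGstStaticPlugins (void)
{
  /* called after gst_init() in main() above */
  GST_PLUGIN_STATIC_REGISTER (coreelements);
  GST_PLUGIN_STATIC_REGISTER (rtsp);
  GST_PLUGIN_STATIC_REGISTER (rtp);
  GST_PLUGIN_STATIC_REGISTER (rtpmanager);
  GST_PLUGIN_STATIC_REGISTER (udp);
  GST_PLUGIN_STATIC_REGISTER (videoparsersbad);
  GST_PLUGIN_STATIC_REGISTER (flv);
}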
I am new to using Snort and I don't know how to properly create rules.
I want someone to explain to me how to create a rule for detecting specific content. For example: I want to generate an alert when I search for the word 'terrorism' on Google.
I tried to create the rule from examples I've seen on YouTube and Google, but none of them work and I don't know what to try anymore. I am using Snort 2.9.9:
alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"terrorism content found"; content:"terrorism"; nocase; sid:1000000;)
I don't get any errors from the local.rules file, but I have the line 'include $RULE_PATH/snort.rules' commented out because of some problems with PulledPork.
I expect to have an alert in the CLI, but there is no output.
I know this is already too late, but here's the answer for future reference.
The packets are probably being sent over an HTTPS connection, which is why they are encrypted.
That is likely the reason why there are no alerts.
Please refer here for a detailed explanation.
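If you want to sanity-check the rule itself against unencrypted traffic first, one option (a sketch, not tested against your setup; the sid, rev and port restriction are my own choices) is a variant scoped to plain HTTP on port 80, triggered by fetching any URL that contains the word:

alert tcp $HOME_NET any -> $EXTERNAL_NET 80 (msg:"terrorism content found (plain HTTP)"; content:"terrorism"; nocase; sid:1000001; rev:1;)

If that fires but the original rule on Google searches does not, it supports the explanation above: the HTTPS payload is encrypted, so a plain content match can never see the word.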
The rules are ready; you just need to replace alert with sdrop. You can do the replacement in bulk with find and sed, for example:
find /home/www \( -type d -name .git -prune \) -o -type f -print0 | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'
and you can use include in the config file.
O.K., the answer is here: http://manpages.ubuntu.com/manpages/xenial/man8/u2spewfoo.8.html
Download the Snort source, customize the logging, and write your own code to take control of the log stream.
Then build the source and run it.
Be successful :)
It is possible to send alert messages and some packet-relevant data
from Snort through a unix socket, to perform additional separate
processing of alert data.
Snort has to be built with the spo_unsock.c/h output plugin compiled in, and
-A unsock (or its equivalent through the config file) has to be
used. The unix socket file should be created as /dev/snort_alert. Your
'client' code should act as the 'server' listening on this unix socket.
Snort will be sending you Alertpkt structures, which contain the alert
message, event id, original datagram, libpcap pkthdr, and offsets to the
datalink, network-layer, and transport-layer headers.
Below is an example of how the unix socket could be used. If you have any
comments, bug reports, or feature requests, please contact
snort-devel@lists.sourceforge.net or drop me an email at fygrave at
tigerteam dot net.
-Fyodor
[for copyright notice, see snort distribution code]
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include "snort.h"
int sockfd;
void
sig_term (int sig)
{
printf ("Exiting!\n");
close (sockfd);
unlink (UNSOCK_FILE);
exit (1);
}
int
main (void)
{
struct sockaddr_un snortaddr;
struct sockaddr_un bogus;
Alertpkt alert;
Packet *p;
int recv;
socklen_t len = sizeof (struct sockaddr_un);
if ((sockfd = socket (AF_UNIX, SOCK_DGRAM, 0)) < 0)
{
perror ("socket");
exit (1);
}
bzero (&snortaddr, sizeof (snortaddr));
snortaddr.sun_family = AF_UNIX;
strcpy (snortaddr.sun_path, UNSOCK_FILE);
if (bind (sockfd, (struct sockaddr *) &snortaddr, sizeof (snortaddr)) < 0)
{
perror ("bind");
exit (1);
}
signal(SIGINT, sig_term);
while ((recv = recvfrom (sockfd, (void *) &alert, sizeof (alert),
0, (struct sockaddr *) &bogus, &len)) > 0)
{
/* do validation of recv if you care */
if (!(alert.val & NOPACKET_STRUCT))
{
if ((p = calloc (1, sizeof (Packet))) == NULL)
{
perror ("calloc");
exit (1);
}
p->pkt = alert.pkt;
p->pkth = &alert.pkth;
if (alert.dlthdr)
p->eh = (EtherHdr *) (alert.pkt + alert.dlthdr);
if (alert.nethdr)
{
p->iph = (IPHdr *) (alert.pkt + alert.nethdr);
if (alert.transhdr)
{
switch (p->iph->ip_proto)
{
case IPPROTO_TCP:
p->tcph = (TCPHdr *) (alert.pkt + alert.transhdr);
break;
case IPPROTO_UDP:
p->udph = (UDPHdr *) (alert.pkt + alert.transhdr);
break;
case IPPROTO_ICMP:
p->icmph = (ICMPHdr *) (alert.pkt + alert.transhdr);
break;
default:
printf ("My, that's interesting.\n");
} /* case */
} /* transhdr */
} /* nethdr */
if (alert.data)
p->data = alert.pkt + alert.data;
/* now do whatever you want with these packet structures */
} /* if (!NOPACKET_STRUCT) */
printf ("%s [%d]\n", alert.alertmsg, alert.event.event_id);
if (!(alert.val & NOPACKET_STRUCT))
if (p->iph && (p->tcph || p->udph || p->icmph))
{
switch (p->iph->ip_proto)
{
case IPPROTO_TCP:
printf ("TCP from: %s:%d ",
inet_ntoa (p->iph->ip_src),
ntohs (p->tcph->th_sport));
printf ("to: %s:%d\n", inet_ntoa (p->iph->ip_dst),
ntohs (p->tcph->th_dport));
break;
case IPPROTO_UDP:
printf ("UDP from: %s:%d ",
inet_ntoa (p->iph->ip_src),
ntohs (p->udph->uh_sport));
printf ("to: %s:%d\n", inet_ntoa (p->iph->ip_dst),
ntohs (p->udph->uh_dport));
break;
case IPPROTO_ICMP:
printf ("ICMP type: %d code: %d from: %s ",
p->icmph->type,
p->icmph->code, inet_ntoa (p->iph->ip_src));
printf ("to: %s\n", inet_ntoa (p->iph->ip_dst));
break;
}
}
}
perror ("recvfrom");
close (sockfd);
unlink (UNSOCK_FILE);
return 0;
}
Until recently, the following code worked perfectly in my project, but since a few days ago it no longer works. If I replace the NSLog statements with printf, swap out the other Objective-C-style statements, and compile with g++ in Terminal, it works just fine.
It should just connect to a very primitive server on a Raspberry Pi, send a single character 'R', and read back a 2-byte integer. When I compiled and ran it in Xcode months ago, it worked. When I compile now in Terminal with g++, it works. When I run it in Xcode now, though, it fails to open the socket and reports setDAC: connection failed.
I fear I may be going insane. Did Apple hide some new setting I need to turn on for network access in Xcode 9.4.1? Any advice?
Previously functional code in Xcode:
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include "stdio.h"
.
.
.
float readDAC(uint8_t ch){
if(!isConnected){
const char *servIP = [[txtIPAddress stringValue] UTF8String];
in_port_t servPort = 5001;
int sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if(sock < 0){
NSLog(#"setDAC: Socket creation failed\n");
ok = false;
}
struct sockaddr_in servAddr;
memset(&servAddr, 0, sizeof(servAddr));
servAddr.sin_family = AF_INET;
int rtnVal = inet_pton(AF_INET, servIP, &servAddr.sin_addr.s_addr);
if(ok){
if(rtnVal == 0){
NSLog(#"setDAC: inet_pton() failed: invalid address string\n");
ok = false;
}
else if (rtnVal < 0){
NSLog(#"setDAC: inet_pton() failed\n");
ok = false;
}
servAddr.sin_port = htons(servPort);
}
if(ok) if(connect(sock, (struct sockaddr *) &servAddr, sizeof(servAddr)) < 0){
NSLog(#"setDAC: connection failed\n");
ok = false;
}
datastream = fdopen(sock, "r+");
isConnected = true;
}
//send 'R' to read
//send 'W' to write
char writeChar = 'R';
if([AD5754 intValue]==1){
uint8_t writeChannel;
int16_t setVal;
float theVal;
uint8_t nDAC = 0;
if(ch>3) nDAC = 1;
ch = ch%4;
ch = 16*nDAC+ch;
writeChannel = ch;
fwrite(&writeChar, sizeof(writeChar), 1, datastream);
fwrite(&writeChannel, sizeof(writeChannel), 1, datastream);
fread(&setVal, sizeof(setVal), 1, datastream);
int16_t theSetVal;
theSetVal = ntohs(setVal);
theVal = (float)theSetVal/100;
NSLog(#"Read channel %i: %0.2f", ch, theVal);
fflush(datastream);
fclose(datastream);
return theVal;
}
I paid Apple the $99 annual fee to join the developer program, and now the networking code works again. Not impressed with Apple, but OK.
I wouldn't mind paying to recover the functionality if it were documented or some notice were given. But I struggled for a few days before getting desperate enough to try throwing money at the problem, randomly.
I would like to create a VR application, so I created an RTSP server linked to my ZED Mini. It works if I use an H.265 encoder, but the problem is that the RTSP stream only plays in the VLC app on an iPhone 7 or in VLC on a Windows 8 computer; the Onvifer app on my Android phone (Huawei P7) cannot use this RTSP address at all. I need to use the Huawei P7 for my project, as I am going to create the app that connects to this RTSP server.
Based on my checking, some Android devices do not support H.265, so I decided to use H.264. I have been googling a lot for a few weeks but became frustrated at not finding a solution that works with H.264.
This is the code, which I amended from test-readme.c:
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int main (int argc, char *argv[])
{
  GMainLoop *loop;
  GstRTSPServer *server;
  GstRTSPMountPoints *mounts;
  GstRTSPMediaFactory *factory;

  gst_init (&argc, &argv);

  loop = g_main_loop_new (NULL, FALSE);

  /* create a server instance */
  server = gst_rtsp_server_new ();

  /* get the mount points for this server, every server has a default object
   * that is used to map uri mount points to media factories */
  mounts = gst_rtsp_server_get_mount_points (server);

  /* make a media factory for a test stream. The default media factory can use
   * gst-launch syntax to create pipelines. Any launch line works as long as it
   * contains elements named pay%d. Each element with a pay%d name will be a stream. */
  factory = gst_rtsp_media_factory_new ();

  /* working case for streaming video */
  //gst_rtsp_media_factory_set_launch (factory,"( videotestsrc is-live=1 ! x264enc ! rtph264pay name=pay0 pt=96 )");

  /* working case for external camera */
  //gst_rtsp_media_factory_set_launch (factory,"( v4l2src is-live=1 device=/dev/video1 ! video/x-raw, width=(int)720, height=(int)480 framerate=30/1 format=I420 ! timeoverlay ! omxh265enc ! rtph265pay name=pay0 pt=96 )");

  /* working case for JX2 camera */
  //gst_rtsp_media_factory_set_launch (factory,"( nvcamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=I420 ! nvvidconv flip-method=4 !video/x-raw, width=(int)720, height=(int)480 framerate=30/1 format=I420 ! timeoverlay ! omxh265enc ! rtph265pay name=pay0 pt=96 )");

  /* failing case: ZED Mini camera with H.264 */
  gst_rtsp_media_factory_set_launch (factory,"(v4l2src is-live=1 device=/dev/video1 ! video/x-raw, width=2560, height=720, framerate=30/1, format=I420 ! nvvidconv !video/x-raw, width=(int)720, height=(int)480, framerate=30/1, format=NV12 ! omxh264enc bitrate=10000000 ! rtph264pay name=pay0 pt=96 )");

  /* working case: ZED Mini camera with H.265 */
  //gst_rtsp_media_factory_set_launch (factory,"(v4l2src is-live=1 device=/dev/video1 ! video/x-raw, width=2560, height=720, framerate=30/1, format=I420 ! nvvidconv !video/x-raw, width=(int)720, height=(int)480 framerate=30/1 format=I420 ! timeoverlay ! omxh265enc ! rtph265pay name=pay0 pt=96 )");

  gst_rtsp_media_factory_set_shared (factory, TRUE);

  /* attach the test factory to the /test url */
  gst_rtsp_mount_points_add_factory (mounts, "/test", factory);

  /* don't need the ref to the mapper anymore */
  g_object_unref (mounts);

  /* attach the server to the default maincontext */
  gst_rtsp_server_attach (server, NULL);

  /* start serving */
  g_print ("stream ready at rtsp://172.16.124.75:8554/test\n");
  g_main_loop_run (loop);

  return 0;
}
This code works for the test video stream, the JX2 camera, a simple (low-end) USB camera, and also the ZED Mini camera when using H.265. I need it to run using H.264; there must be some element missing or wrong here:
gst_rtsp_media_factory_set_launch (factory,"(v4l2src is-live=1 device=/dev/video1 ! video/x-raw, width=2560, height=720, framerate=30/1, format=I420 ! nvvidconv !video/x-raw, width=(int)720, height=(int)480, framerate=30/1, format=NV12 ! omxh264enc bitrate=10000000 ! rtph264pay name=pay0 pt=96 )");
I am trying to write a simple application that sends and receives broadcasts, for testing purposes. However, something doesn't work: receiving seems to work, but sending fails. Could anyone help?
The important constraint is that I have to use GLib sockets.
My code for receiving:
GError *err = nullptr;
GInetAddress *iaddr = g_inet_address_new_any(G_SOCKET_FAMILY_IPV4);
GSocketAddress *addr = g_inet_socket_address_new(iaddr, 7070);
GSocket *sock = g_socket_new(G_SOCKET_FAMILY_IPV4, G_SOCKET_TYPE_DATAGRAM, G_SOCKET_PROTOCOL_UDP, &err);
if (err)
WERROR("ERR1");
g_socket_set_broadcast(sock, TRUE);
g_socket_bind(sock, addr, TRUE, &err);
if (err)
WERROR("ERR2");
char buf[200] = {0};
WDEBUG("LISTENING!");
g_socket_receive(sock, buf, 200, nullptr, &err);
if (err)
WERROR("ERR3");
else
WDEBUG("BUF = %s", buf);
The application starts waiting for incoming packets. Here's the code for sending a broadcast:
GError *err = nullptr;
GInetAddress *iaddr = g_inet_address_new_any(G_SOCKET_FAMILY_IPV4);
GSocketAddress *addr = g_inet_socket_address_new(iaddr, 7070);
GSocket *sock = g_socket_new(G_SOCKET_FAMILY_IPV4, G_SOCKET_TYPE_DATAGRAM, G_SOCKET_PROTOCOL_UDP, &err);
if (err)
WERROR("ERR1");
g_socket_set_broadcast(sock, TRUE);
g_socket_send_to(sock, addr, "TEST", 5, nullptr, &err);
if (err)
WERROR("ERR2");
WDEBUG("SENT!");
I've been looking for examples of sending broadcasts with GLib, but I failed to find any. Can anybody help?
You need to create a specific broadcast address.
Instead of
GInetAddress *iaddr = g_inet_address_new_any(G_SOCKET_FAMILY_IPV4);
use, for example,
GInetAddress *iaddr = g_inet_address_new_from_string("127.255.255.255");
This will send the broadcast to the loopback interface.
For more details about broadcast address calculation see http://jodies.de/ipcalc.
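For completeness, a minimal sketch of the sending side with only that change applied (WERROR/WDEBUG are the logging macros from the question; 255.255.255.255 is the limited-broadcast address, or substitute your subnet's broadcast address such as 192.168.1.255):

GError *err = nullptr;
/* broadcast destination instead of the "any" (0.0.0.0) address */
GInetAddress *iaddr = g_inet_address_new_from_string("255.255.255.255");
GSocketAddress *addr = g_inet_socket_address_new(iaddr, 7070);
GSocket *sock = g_socket_new(G_SOCKET_FAMILY_IPV4, G_SOCKET_TYPE_DATAGRAM, G_SOCKET_PROTOCOL_UDP, &err);
if (err)
    WERROR("ERR1");
g_socket_set_broadcast(sock, TRUE);  /* still required to allow broadcast destinations */
g_socket_send_to(sock, addr, "TEST", 5, nullptr, &err);
if (err)
    WERROR("ERR2");
WDEBUG("SENT!");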
I have been writing a kernel driver for an LCD module. All was going well: I can write to the display, and I create a /dev/lcd node that I can write into and have the results displayed on the screen. I thought using the llseek fops callback to position the cursor on the LCD would be nice; that way I could use rewind, fseek, etc. However, it is not working as I expected. Below is a summary of what I am seeing.
The relevant lines of code from the driver side are:
loff_t lcd_llseek(struct file *filp, loff_t off, int whence)
{
switch (whence) {
case 0: // SEEK_SET
if (off > 4*LINE_LENGTH || off < 0) {
printk(KERN_ERR "unsupported SEEK_SET offset %llx\n", off);
return -EINVAL;
}
lcd_gotoxy(&lcd, off, 0, WHENCE_ABS);
break;
case 1: // SEEK_CUR
if (off > 4*LINE_LENGTH || off < -4*LINE_LENGTH) {
printk(KERN_ERR "unsupported SEEK_CUR offset %llx\n", off);
return -EINVAL;
}
lcd_gotoxy(&lcd, off, 0, WHENCE_REL);
break;
case 2: // SEEK_END (not supported, hence fall though)
default:
// how did we get here !
printk(KERN_ERR "unsupported seek operation\n");
return -EINVAL;
}
filp->f_pos = lcd.pos;
printk(KERN_INFO "lcd_llseek complete\n");
return lcd.pos;
}
int lcd_open(struct inode *inode, struct file *filp)
{
if (!atomic_dec_and_test(&lcd_available)) {
atomic_inc(&lcd_available);
return -EBUSY; // already open
}
return 0;
}
static struct file_operations fops = {
.owner = THIS_MODULE,
.write = lcd_write,
.llseek = lcd_llseek,
.open = lcd_open,
.release = lcd_release,
};
int lcd_init(void)
{
...
// allocate a new dev number (this can be dynamic or
// static if passed in as a module param)
if (major) {
devno = MKDEV(major, 0);
ret = register_chrdev_region(devno, 1, MODULE_NAME);
} else {
ret = alloc_chrdev_region(&devno, 0, 1, MODULE_NAME);
major = MAJOR(devno);
}
if (ret < 0) {
printk(KERN_ERR "alloc_chrdev_region failed\n");
goto fail;
}
// create a dummy class for the lcd
cl = class_create(THIS_MODULE, "lcd");
if (IS_ERR(cl)) {
printk(KERN_ERR "class_simple_create for class lcd failed\n");
goto fail1;
}
// create cdev interface
cdev_init(&cdev, &fops);
cdev.owner = THIS_MODULE;
ret = cdev_add(&cdev, devno, 1);
if (ret) {
printk(KERN_ERR "cdev_add failed\n");
goto fail2;
}
// create /sys/lcd/fplcd/dev so udev will add our device to /dev/fplcd
device = device_create(cl, NULL, devno, NULL, "lcd");
if (IS_ERR(device)) {
printk(KERN_ERR "device_create for fplcd failed\n");
goto fail3;
}
...
}
To test the lseek call I have the following unit test:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#define log(msg, ...) fprintf(stdout, __FILE__ ":%s():[%d]:" msg, __func__, __LINE__, __VA_ARGS__)
int lcd;
void test(void)
{
int k;
// a lot of hello's
log("hello world test\n",1);
if (lseek(lcd, 0, SEEK_CUR) == -1) {
log("failed to seek\n", 1);
}
}
int main(int argc, char **argv)
{
lcd = open("/dev/lcd", O_WRONLY);
if (lcd == -1) {
perror("unable to open lcd");
exit(EXIT_FAILURE);
}
test();
close(lcd);
return 0;
}
The files are cross compiled like so:
~/Workspace/ts4x00/lcd-module$ cat Makefile
obj-m += fls_lcd.o
all:
make -C $(KPATH) M=$(PWD) modules
$(CROSS_COMPILE)gcc -g -fPIC $(CFLAGS) lcd_unit_test.c -o lcd_unit_test
clean:
make -C $(KPATH) M=$(PWD) clean
rm -rf lcd_unit_test
~/Workspace/ts4x00/lcd-module$ make CFLAGS+="-march=armv4 -ffunction-sections -fdata-sections"
make -C ~/Workspace/ts4x00/linux-2.6.29 M=~/Workspace/ts4x00/lcd-module modules
make[1]: Entering directory `~/Workspace/ts4x00/linux-2.6.29'
CC [M] ~/Workspace/ts4x00/lcd-module/fls_lcd.o
~/Workspace/ts4x00/lcd-module/fls_lcd.c:443: warning: 'lcd_entry_mode' defined but not used
Building modules, stage 2.
MODPOST 1 modules
CC ~/Workspace/ts4x00/lcd-module/fls_lcd.mod.o
LD [M] ~/Workspace/ts4x00/lcd-module/fls_lcd.ko
make[1]: Leaving directory `~/Workspace/ts4x00/linux-2.6.29'
~/Workspace/ts4x00/arm-2008q3/bin/arm-none-linux-gnueabi-gcc -g -fPIC -march=armv4 -ffunction-sections -fdata-sections lcd_unit_test.c -o lcd_unit_test
This is the output of running the driver with the unit test:
root@ts4700:~/devel# insmod ./fls_lcd.ko
root@ts4700:~/devel# ./lcd_unit_test
lcd_unit_test.c:test():[61]:hello world test
lcd_unit_test.c:test():[63]:failed to seek
root@ts4700:~/devel# dmesg
FLS LCD driver started
unsupported SEEK_SET offset bf0a573c
I cannot figure out why the parameters are being mangled so badly on the kernel side. I tried a SEEK_CUR to position 0, and in the driver I get a SEEK_SET (no matter what I put in the unit test) and a crazy big number for off.
Does anyone know what is going on, please?
By the way, I am compiling for kernel 2.6.29 on an ARM dev kit.
OK, sorry guys. After trying to debug this all last night, it comes down to compiling against the wrong kernel (I had KPATH pointing at a different kernel config than the one on the SD card).
Sorry for wasting everyone's time, but hopefully, if someone is seeing what looks like a crazy stack in their kernel driver, this might set them straight.
oh and thanks for all the help :)
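In case it helps anyone hitting the same symptom, a quick (though not foolproof) sanity check is to compare the module's vermagic string against the running kernel on the board; a pure config mismatch with an identical version string can still slip through, but an obvious difference points straight at a wrong KPATH:

# on the target board (module path as in this post)
uname -r
modinfo ./fls_lcd.ko | grep vermagic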