I'm using argus against a pcap file and need to filter by datetime, source and destination IP, and port. Currently, I can take a pcap file and convert it to argus format:
argus -r packet.pcap -w packet.argus
Then, to read and display argus data:
ra -r packet.argus
At this point, it seems one can filter the argus data by those parameters using the ra command, but I can't find the correct syntax. Any ideas?
There's info in the ra man page:
https://www.systutorials.com/docs/linux/man/1-ra/
So, an example from a batch file I used:
ra -s {column names} -r $l.ra > $l.txt
I'd also suggest adding state and flags, giving:
ra -s stime proto saddr sport daddr dport state -nzr $l.ra > $l.txt
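For the filtering you asked about, ra accepts a tcpdump-style filter expression after a lone "-" at the end of the command line, and a time range via the -t option. A hedged sketch (the addresses, ports, and time values are placeholders; check ra(1) on your system, since the accepted time syntax varies between argus-clients versions):
ra -r packet.argus -t 2013/06/14.12:00:00-2013/06/14.13:00:00 - src host 192.168.1.10 and dst host 10.0.0.5 and dst port 80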
I am new to using subprocess calls. Please help me figure out the issue in the following script.
I am trying to write a new PCAP file (filter1.pcap) that contains only the packets from a specific IP address (ipadd), taken from a larger PCAP file (superset.pcap) holding packets from various IP addresses.
The error is: OSError: [Errno 36] File name too long
The code is as follows:
from scapy.all import rdpcap  # rdpcap is scapy's PCAP reader
import subprocess

pcapfile = rdpcap("superset.pcap")
ipadd = "192.168.1.1"
fileout = "filter1.pcap"
command = "sudo tcpdump -w %s -r %s src %s" % (fileout, pcapfile, ipadd)
subprocess.call([command])
BTW, the command below works perfectly fine in Linux:
sudo tcpdump -w filter1.pcap -r superset.pcap src 192.168.1.1
Any help would be great!
Thank you,
cks
This is resolved. There was a logical error here: I was reading the complete PCAP file with rdpcap and passing the resulting packet list into the command string, so the whole file's contents stood in for the file name tcpdump was supposed to read.
I changed the code as below and it's working now!
import os
pcapfile = "superset.pcap"  # pass the file name, not the parsed packets
ipadd = "192.168.1.1"
fileout = "filter1.pcap"
command = "sudo tcpdump -w %s -r %s src %s" %(fileout,pcapfile,ipadd)
os.system(command)
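For what it's worth, the same command can be run without a shell by handing subprocess an argument list, which sidesteps quoting problems; a minimal sketch, assuming tcpdump is on the PATH and sudo won't prompt for a password:
import subprocess

pcapfile = "superset.pcap"
ipadd = "192.168.1.1"
fileout = "filter1.pcap"
# each argument is a separate list element, so no shell parsing is involved
subprocess.call(["sudo", "tcpdump", "-w", fileout, "-r", pcapfile, "src", ipadd])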
I'm using Buildbot v0.9.0rc3.
My Buildbot triggers when I send a change via the command line or when it receives an HTTP POST request at the correct address.
Currently I'm sending changes to Buildbot in two different ways:
$ buildbot sendchange -m localhost:9999 -a example-user:pass -W me -C default
or
curl -X POST -d author=aalvz -d comments=mycomment -d project=my_project -d category=default -d repository=some http://192.168.33.20:8020/change_hook/base
My schedulers are defined like this:
c['schedulers'].append(schedulers.SingleBranchScheduler(
name="waiter",
builderNames=["runtests"],
change_filter=util.ChangeFilter(category='default')))
c['www'] = dict(port=8020,
plugins=dict(waterfall_view={}, console_view={}),
change_hook_dialects={
'base': True,
'somehook': {'option1':True,
'option2':False}})
And the step in my factory that clones a repo looks like this:
factory.addStep(steps.Git(repourl='git@github.com:AAlvz/my_repo.git', mode='full', workdir='newFolder', branch='my_branch', submodules=True, clobberOnFailure=True))
I would like to receive a POST with some data and use that data to trigger different commands. Something like: (using $ to make the variables noticeable)
factory.addStep(steps.Git(repourl=$myjson.name, mode='full', workdir=$myjson.path, branch=$myjson.branch, submodules=True, clobberOnFailure=True))
That way I could send a JSON like:
{myjson: {name: github/myrepo.git, path: /tmp/my/path, branch: my_branch}}
and be able to clone the repository provided by the JSON.
Thanks in advance! I hope the question is clear enough; I can provide logs or any needed configuration.
This is solved using Buildbot Properties.
You can send them via the command line (with PBChangeSource) using the flag:
buildbot sendchange ... --properties=my_property:myvalue
The flag can be used multiple times if multiple properties are needed.
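On the consuming side, a step can read those properties back with util.Interpolate (or util.Property). A sketch against the nine-series API; the property names repo_name, repo_path, and repo_branch are placeholders for whatever you actually send:
factory.addStep(steps.Git(
    repourl=util.Interpolate('%(prop:repo_name)s'),
    workdir=util.Interpolate('%(prop:repo_path)s'),
    branch=util.Interpolate('%(prop:repo_branch)s'),
    mode='full', submodules=True, clobberOnFailure=True))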
We have a data set of 15k classified tweets on which we need to perform sentiment analysis, and I would like to test against a test set of 5k classified tweets. Because Weka requires the test set's header to contain the same attributes as the training set's header, I will have to use batch filtering if I want to run my classifier against this 5k test set.
However, there are several filters that I need to run my training set through, so I figured running a MultiFilter against the training set would be a good idea. The MultiFilter works fine without the batch argument, but when I try to batch filter I get an error from the CLI as it tries to execute the first filter within the MultiFilter:
CLI MultiFilter command with the batch arguments:
java weka.filters.MultiFilter -F "weka.filters.supervised.instance.Resample -B 1.0 -S 1 -Z 15.0 -no-replacement" \
-F "weka.filters.unsupervised.attribute.StringToWordVector -R first-last -W 100000 -prune-rate -1.0 -N 0 -S -stemmer weka.core.stemmers.NullStemmer -M 2 -tokenizer weka.core.tokenizers.AlphabeticTokenizer" \
-F "weka.filters.unsupervised.attribute.Reorder -R 2-last,first"\
-F "weka.filters.supervised.attribute.AttributeSelection -E \"weka.attributeSelection.InfoGainAttributeEval \" -S \"weka.attributeSelection.Ranker -T 0.0 -N -1\"" \
-F weka.filters.AllFilter \
-b -i input\Train.arff -o output\Train_b_out.arff -r input\Test.arff -s output\Test_b_out.arff
Here is the resultant error from the CLI:
weka.core.UnassignedClassException: weka.filters.supervised.instance.Resample: Class attribute not set!
at weka.core.Capabilities.test(Capabilities.java:1091)
at weka.core.Capabilities.test(Capabilities.java:1023)
at weka.core.Capabilities.testWithFail(Capabilities.java:1302)
at weka.filters.Filter.testInputFormat(Filter.java:434)
at weka.filters.Filter.setInputFormat(Filter.java:452)
at weka.filters.SimpleFilter.setInputFormat(SimpleFilter.java:195)
at weka.filters.Filter.batchFilterFile(Filter.java:1243)
at weka.filters.Filter.runFilter(Filter.java:1319)
at weka.filters.MultiFilter.main(MultiFilter.java:425)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at weka.gui.SimpleCLIPanel$ClassRunner.run(SimpleCLIPanel.java:265)
And here are the headers, with a portion of the data, for both the training and test input ARFF files:
Training:
@RELATION classifiedTweets
@ATTRIBUTE ##sentence## string
@ATTRIBUTE ##class## {1,-1,0}
@DATA
"Conditioning be very important for curly dry hair",0
"Combine with Sunday paper coupon and",0
"Price may vary by store",0
"Oil be not really moisturizers",-1
Testing:
@RELATION classifiedTweets
@ATTRIBUTE ##sentence## string
@ATTRIBUTE ##class## {1,-1,0}
@DATA
"5",0
"I give the curl a good form and discipline",1
"I have be cowashing every day",0
"LOL",0
"TITLETITLE Walgreens Weekly and Midweek Deal",0
"And then they walk away",0
Am I doing something wrong here? I know that supervised resampling requires the class attribute to be last in the attribute list within the header, and it is... in both the test and training input files.
EDIT:
Further testing reveals that this error is unrelated to the batch filtering; it occurs whenever I run the supervised Resample filter from the CLI. The data I use works with every other filter I've tried in the CLI, so I don't understand why this filter is any different. Resampling the data in the GUI works fine as well.
Update:
This also happens with the SMOTE filter in place of the Resample filter.
We could not get the batch filter to work with any resampling filter. However, our workaround was simply to resample (and then randomize) the training data as step 1. From this reduced set we ran batch filters for everything else we wanted on the test set. This seemed to work fine.
You could have used the MultiFilter along with the ClassAssigner filter to make it work:
java -classpath $jcp weka.filters.MultiFilter \
    -F "weka.filters.unsupervised.attribute.ClassAssigner -C last" \
    -F "weka.filters.supervised.instance.Resample -B 1.0 -S 1 -Z 66.0"
So, a site that used to use FTP now has an HTTP front-end and won't allow FTP connections. The site in question (for an example directory) shows a page with links to different dates. Inside each of these date directories there are many files, and I typically just need some file matching a clear pattern, e.g. *h17v04*.hdf. I thought this could work:
wget -I "${PLATFORM}/${PRODUCT}/${YEAR}.*" -r -l 4 \
--user-agent="Mozilla/5.0 (Windows NT 5.2; rv:2.0.1) Gecko/20100101 Firefox/4.0.1" \
--verbose -c -np -nc -nd \
-A "*h17v04*.hdf" http://e4ftl01.cr.usgs.gov/$PLATFORM/$PRODUCT/
where PLATFORM=MOLT, PRODUCT=MOD09GA.005 and YEAR=2004, for example. This seems to start looking into all the useful dates, finds the index.html, and then just skips to the next directory, without downloading the relevant hdf file:
--2013-06-14 13:09:18-- http://e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/
Reusing existing connection to e4ftl01.cr.usgs.gov:80.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/index.html'
[ <=> ] 174,182 134K/s in 1.3s
2013-06-14 13:09:20 (134 KB/s) - `e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/index.html' saved [174182]
Removing e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/index.html since it should be rejected.
--2013-06-14 13:09:20-- http://e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.02/
[...]
If I leave out the -A option, only the index.html file is downloaded to my system, but it appears it's not parsed and the links are not followed. I don't really know what more is required to make this work, as I can't see why it doesn't!
SOLUTION
In the end, the problem was due to an old bug in the local version of wget. However, I ended up writing my own script for downloading MODIS data from the server above. The script is pure Python, and is available from here.
Consider using pyModis instead of wget. It is a free and open source Python-based library for working with MODIS data. It offers bulk download for user-selected time ranges, mosaicking of MODIS tiles, reprojection from sinusoidal to other projections, and conversion of HDF to other formats. See
http://www.pymodis.org/
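For the example above, the download could look something like this with pyModis's modis_download.py script; a sketch only, since the exact flag names may differ between pyModis versions (check modis_download.py --help):
modis_download.py -s MOLT -p MOD09GA.005 -t h17v04 -f 2004-01-01 -e 2004-12-31 /tmp/modis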
Does anyone know how to extract teletext subtitles?
I have tried ffmpeg, it says
Invalid frame dimensions 0x0
CCExtractor, it says
"Missing ASF header. Abort
telxcc, it says
! Invalid TS packet header; TS seems to be misaligned
I have done a lot of research, but have had no luck. Can anyone offer some help?
dvb_subtitle streams cannot easily be extracted with ffmpeg because they are images overlaid on the original video. Good explanation: https://stackoverflow.com/a/20887655/2119685
There is a way to extract the dvb_teletext stream, which normally includes the subtitles too.
Install this dependency:
sudo apt-get install libzvbi-dev
Then recompile ffmpeg from source with:
--enable-libzvbi
There's a good tutorial on compiling FFmpeg from source here: https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
Then run the following command to extract the subtitles into an .srt file:
ffmpeg -txt_format text -i INPUT1 -an -vn -scodec srt test.srt
And voila, your .srt subtitles will be in test.srt
Did you try using GStreamer? Build a pipeline like appsrc -> tsdemux -> fakesink, and then get the PES data from the fakesink callback.
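For a first test from the shell, a rough equivalent with filesrc standing in for appsrc (a sketch: which tsdemux stream ends up linked depends on your TS, so expect to adjust it):
gst-launch-1.0 -v filesrc location=input.ts ! tsdemux ! fakesink dump=true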