I'm using Logstash (2.3.2) to read a gz file using the gzip_lines codec.
The example log file (sample.log) is:
127.0.0.2 - - [11/Dec/2013:00:01:45 -0800] "GET /xampp/status.php HTTP/1.1" 200 3891 "http://cadenza/xampp/navi.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:25.0) Gecko/20100101 Firefox/25.0"
The command I used to append to a gz file is:
cat sample.log | gzip -c >> s.gz
The logstash.conf is
input {
  file {
    path => "./logstash-2.3.2/bin/s.gz"
    codec => gzip_lines { charset => "ISO-8859-1" }
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
    #match => { "message" => "message: %{GREEDYDATA}" }
  }
  #date {
  #  match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  #}
}
output {
  stdout { codec => rubydebug }
}
I have installed the gzip_lines plugin with bin/logstash-plugin install logstash-codec-gzip_lines and started Logstash with ./logstash -f logstash.conf.
When I feed s.gz with
cat sample.log | gzip -c >> s.gz
I expect the console to print the data, but nothing is printed out.
I have tried it on Mac and Ubuntu, and got the same result.
Is anything wrong with my code?
I checked the code for gzip_lines and it seemed obvious to me that this plugin is not working, at least for version 2.3.2. Maybe it is outdated, because it does not implement the methods specified here:
https://www.elastic.co/guide/en/logstash/2.3/_how_to_write_a_logstash_codec_plugin.html
So the current internal workflow is:
The file input plugin reads the file line by line and sends each line to the codec.
The gzip_lines codec tries to create a new GzipReader object with GzipReader.new(io).
It then goes through the reader line by line to create events.
Because you specified a gzip file, the file input plugin tries to read the gzip file as a regular file and sends its lines to the codec. The codec tries to create a GzipReader from such a string, and it fails.
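To see why this fails, here is a small Python illustration (just an analogy, not the plugin's own code): a gzip stream must be decompressed as a whole, so feeding the decompressor one "line" of compressed bytes at a time cannot work.
import gzip

with open('s.gz', 'rb') as f:
    first_chunk = f.readline()    # raw compressed bytes, not a real text line

# Feeding a single chunk to the decompressor fails, because it is not a
# complete gzip stream:
# gzip.decompress(first_chunk)    # raises an error (truncated/invalid stream)

# Handing the whole file to the gzip reader works:
with gzip.open('s.gz', 'rt', encoding='ISO-8859-1') as gz:
    for line in gz:
        print(line, end='')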
You can modify it to work like this:
Create a file that contains a list of gzip files:
-- list.txt
/path/to/gzip/s.gz
Give it to the file input plugin:
file {
  path => "/path/to/list/list.txt"
  codec => gzip_lines { charset => "ISO-8859-1" }
}
The changes are:
Open the vendor/bundle/jruby/1.9/gems/logstash-codec-gzip_lines-2.0.4/lib/logstash/codecs/gzip_lines.rb file. Add a register method:
public
def register
  @converter = LogStash::Util::Charset.new(@charset)
  @converter.logger = @logger
end
And in the decode method change:
@decoder = Zlib::GzipReader.new(data)
to:
@decoder = Zlib::GzipReader.open(data)
The disadvantage of this approach is that it won't tail your gzip file, only the list file. So you will need to create a new gzip file and append its path to the list.
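Since the input now tails list.txt, one way to keep a rolling pipeline is to create each new gzip archive and then append its path to the list. A hypothetical Python helper (the names and paths here are mine, not part of the plugin):
import gzip
import shutil
import time

def roll_and_register(log_path, list_path='/path/to/list/list.txt'):
    gz_path = '%s.%d.gz' % (log_path, int(time.time()))
    with open(log_path, 'rb') as src, gzip.open(gz_path, 'wb') as dst:
        shutil.copyfileobj(src, dst)      # compress the current log file
    with open(list_path, 'a') as lst:
        lst.write(gz_path + '\n')         # the file input sees this new line
    return gz_path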
I had a variant of this problem where I needed to decode bytes in a file to an intermediate string, to prepare for a process input that only accepts strings.
The fact that encoding/decoding issues were silently ignored in Python 2 is actually really bad IMHO; you may end up with various data-corruption problems, especially if you need to re-encode the string back into bytes.
Using ISO-8859-1 works for both gz and text files alike, while UTF-8 only worked for text files. I haven't tried it for PNGs yet.
Here's an example of what worked for me (src is an already-open file descriptor and bytes_needed the read size):
import codecs
import os

chunk = ''
data = os.read(src, bytes_needed)            # raw bytes from the source
chunk += codecs.decode(data, 'ISO-8859-1')   # ISO-8859-1 maps every byte value 1:1 to a code point
# do the needful with the chunk....
OpenVINO 2021.1 is up and running.
Downloaded the yolov3_tiny.weights and yolov3_tiny.cfg files from https://pjreddie.com/darknet/yolo/
As suggested in this link (https://colab.research.google.com/github/luxonis/depthai-ml-training/blob/master/colab-notebooks/Easy_TinyYolov3_Object_Detector_Training_on_Custom_Data.ipynb#scrollTo=2tojp0Wd-Pdw), downloaded https://github.com/mystic123/tensorflow-yolo-v3
Used the convert_weights_pb.py file to convert the weights and cfg file to a frozen YOLOv3-tiny .pb model:
python3 convert_weights_pb.py --class_names /home/user/depthai-python/my_job/coco.names --data_format NHWC --weights_file /home/user/depthai-python/my_job/yolov3-tiny.weights --tiny
Used the OpenVINO mo.py file to convert the YOLOv3-tiny .pb model to IR files (.xml and .bin):
python3 mo.py --input_model /home/user/depthai-python/my_job/frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config /home/user/depthai-python/my_job/yolo_v3_tiny.json --batch 1 --data_type FP16 --reverse_input_channel --output_dir /home/user/depthai-python/my_job
Used this script as a Python file to convert the .xml and .bin to a .blob file:
blob_dir = "./my_job/"
binfile = "./my_job/frozen_darknet_yolov3_model.bin"
xmlfile = "./my_job/frozen_darknet_yolov3_model.xml"
import requests
url = "http://69.164.214.171:8083/compile" # change if running against other URL
payload = {
'compiler_params': '-ip U8 -VPU_NUMBER_OF_SHAVES 8 -VPU_NUMBER_OF_CMX_SLICES 8',
'compile_type': 'myriad'
}
files = {
'definition': open(xmlfile, 'rb'),
'weights': open(binfile, 'rb')
}
params = {
'version': '2021.1', # OpenVINO version, can be "2021.1", "2020.4", "2020.3", "2020.2", "2020.1", "2019.R3"
}
response = requests.post(url, data=payload, files=files, params=params)
print(response.headers)
print(response.content)
blobnameraw = response.headers.get('Content-Disposition')
print('blobnameraw',blobnameraw)
blobname = blobnameraw[blobnameraw.find('='):][1:]
with open(blob_dir + blobname, 'wb') as f:
f.write(response.content)
Got the following error:
{'Content-Type': 'application/json', 'Content-Length': '564', 'Server': 'Werkzeug/1.0.0 Python/3.6.9', 'Date': 'Fri, 09 Apr 2021 00:25:33 GMT'}
b'{"exit_code":1,"message":"Command failed with exit code 1, command: /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/myriad_compile -m /tmp/blobconverter/b9ea1f9cdb2c44bcb9bb2676ff414bf3/frozen_darknet_yolov3_model.xml -o /tmp/blobconverter/b9ea1f9cdb2c44bcb9bb2676ff414bf3/frozen_darknet_yolov3_model.blob -ip U8 -VPU_NUMBER_OF_SHAVES 8 -VPU_NUMBER_OF_CMX_SLICES 8","stderr":"stoi\n","stdout":"Inference Engine: \n\tAPI version ............ 2.1\n\tBuild .................. 2021.1.0-1237-bece22ac675-releases/2021/1\n\tDescription ....... API\n"}\n'
blobnameraw None
Traceback (most recent call last):
File "converter.py", line 29, in
blobname = blobnameraw[blobnameraw.find('='):][1:]
AttributeError: 'NoneType' object has no attribute 'find'
Alternatively, I have tried the online blob converter tool from OpenVINO (http://69.164.214.171:8083/); it gives the same error both for .xml and .bin to .blob and for .pb to .blob.
Does anyone have an idea? I have tried all versions of OpenVINO.
We recommend you use myriad_compile.exe or the compile_tool to convert your model to a blob. The Compile Tool is a C++ application that enables you to compile a network for inference on a specific device and export it to a binary file. With the Compile Tool, you can compile a network using supported Inference Engine plugins on a machine that doesn't have the physical device connected, and then transfer the generated file to any machine with the target inference device available.
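For illustration, here is a sketch of driving myriad_compile locally from Python, reusing the exact binary path and flags that appear in the error log above. The paths are assumptions taken from that log; adjust them for your own OpenVINO install.
import subprocess

MYRIAD_COMPILE = ('/opt/intel/openvino/deployment_tools/'
                  'inference_engine/lib/intel64/myriad_compile')

result = subprocess.run(
    [MYRIAD_COMPILE,
     '-m', 'frozen_darknet_yolov3_model.xml',    # IR definition
     '-o', 'frozen_darknet_yolov3_model.blob',   # compiled blob
     '-ip', 'U8',
     '-VPU_NUMBER_OF_SHAVES', '8',
     '-VPU_NUMBER_OF_CMX_SLICES', '8'],
    capture_output=True, text=True)

print(result.stdout)
print(result.stderr)   # a failure like the "stoi" above would show up here
Running the tool locally also lets you try flag variations quickly instead of round-tripping through the web service.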
I am trying to find out what format the status file in Nagios is. It has a .dat extension but is not the standard .dat (at least not the Windows .dat).
Here is an example of the format:
contactstatus {
contact_name=noc
modified_attributes=0
modified_host_attributes=0
modified_service_attributes=0
host_notification_period=24x7
service_notification_period=24x7
last_host_notification=0
last_service_notification=1545078717
host_notifications_enabled=1
service_notifications_enabled=1
}
contactstatus {
contact_name=slack
modified_attributes=0
modified_host_attributes=0
modified_service_attributes=0
host_notification_period=24x7
service_notification_period=24x7
last_host_notification=0
last_service_notification=1545078717
host_notifications_enabled=1
service_notifications_enabled=1
}
Found that it is not a standard format, but a Nagios custom one.
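Since it is just blocks of type { key=value ... } lines, it is straightforward to parse by hand. A minimal Python sketch (my own helper, not part of Nagios), based on the sample above:
def parse_status(path):
    entries = []
    block_type, fields = None, {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if line.endswith('{'):             # e.g. "contactstatus {"
                block_type, fields = line[:-1].strip(), {}
            elif line == '}':                  # end of the current block
                entries.append((block_type, fields))
                block_type = None
            elif '=' in line and block_type is not None:
                key, _, value = line.partition('=')
                fields[key] = value
    return entries

# Usage: each entry is e.g. ('contactstatus', {'contact_name': 'noc', ...})
for kind, fields in parse_status('status.dat'):
    print(kind, fields.get('contact_name'))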
I have a set of Tasks inside a build.cake file and I would like to capture the log output from the console into a log file. I know it's possible to use the OnError() function to output errors to a file, but I would like to output everything to a log file, not just errors.
Below is an example of the build.cake file.
#load "SomeTask.cake"
#load "SomeOtherTask.cake"
var target = Argument("target", "Default");
var someTask = Task("SomeTask")
.Does(() =>
{
SomeMethodInsideSomeTask();
});
var someOtherTask = Task("SomeOtherTask")
.Does(() =>
{
SomeOtherMethodInsideSomeOtherTask();
});
Task("Default")
.IsDependentOn(someTask)
.IsDependentOn(someOtherTask);
RunTarget(target);
N.B. The Tasks are not running any sort of MSBuild commands, so it's not possible to use MSBuildFileLogger.
How about piping stdout to a file, i.e.
./build.ps1 > log.txt
Have you heard about tee?
It reads standard input and writes it to both standard output and one or more files, so you keep the console output and capture it at the same time.
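For example, ./build.ps1 | tee log.txt shows the build output and writes it to log.txt simultaneously. If tee is not available (e.g. on a bare Windows shell), a minimal Python stand-in is easy to write; this is my own sketch, not a Cake feature:
import sys

def tee(paths):
    # copy stdin to stdout and to every file in paths
    files = [open(p, 'w') for p in paths]
    try:
        for line in sys.stdin:
            sys.stdout.write(line)
            for f in files:
                f.write(line)
    finally:
        for f in files:
            f.close()

if __name__ == '__main__':
    tee(sys.argv[1:])    # usage: ./build.ps1 | python tee.py log.txt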
I have to read 3 different lines from log files based on some text and then output the fields to a CSV file.
Sample log data:
20110607 095826 [.] !! Begin test. Script filename/text.txt
20110607 095826 [.] Full path: filename/test/text.txt
20110607 095828 [.] FAILED: Test Failed()..
I have to read the file name after "!! Begin test. Script". This is my conf file:
filter {
  grok {
    match => { "message" => "%{BASE10NUM:Date}%{SPACE:pat}%{BASE10NUM:Number}%{SPACE:pat}[.]%{SPACE:pat}%{SPACE:pat}!! Begin test. Script%{SPACE:pat}%{GREEDYDATA:file}" }
    overwrite => ["message"]
  }
  if "_grokparsefailure" in [tags] {
    drop {}
  }
}
But it's not giving me a single parsed record; it outputs the full log file in JSON format with no parsed fields.
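For reference, the intended extraction can be sanity-checked outside Logstash with a plain Python regex that mirrors what the grok pattern is meant to capture (this is just a sketch, not a Logstash fix):
import re

line = '20110607 095826 [.] !! Begin test. Script filename/text.txt'
m = re.match(r'(\d+)\s+(\d+)\s+\[\.\]\s+!! Begin test\. Script\s+(.*)', line)
if m:
    date, number, filename = m.groups()
    print(date, number, filename)   # -> 20110607 095826 filename/text.txt
Note that in regex terms the unescaped [.] in the grok pattern is a character class matching a single dot, not the literal bracketed "[.]" in the log; it needs escaping (as \[.\] above), and that alone can make every line fail to parse.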
Are there minimal, or even larger, working examples of using SCons and knitr to generate reports from .Rmd files?
Knitting a cleaning_session.Rmd file from the command line (bash shell) to derive an .html file may be done via:
Rscript -e "library(knitr); knit('cleaning_session.Rmd')"
In this example, Rscript and instructions are fed to a Makefile:
RMDFILE=test
html :
Rscript -e "require(knitr); require(markdown); knit('$(RMDFILE).rmd', '$(RMDFILE).md'); markdownToHTML('$(RMDFILE).md', '$(RMDFILE).html', options=c('use_xhtml', 'base64_images')); browseURL(paste('file://', file.path(getwd(),'$(RMDFILE).html'), sep=''
In this answer (https://stackoverflow.com/a/10945832/1172302) there is reportedly a solution using SCons. Yet I did not test it enough to make it work for me. Essentially, it would be awesome to have something like the example presented at https://tex.stackexchange.com/a/26573/8272.
[Updated] One working example is an SConstruct file:
import os
environment = Environment(ENV=os.environ)
# define a `knitr` builder
builder = Builder(action = '/usr/local/bin/knit $SOURCE -o $TARGET',
                  src_suffix='Rmd')
# add the builder as "Knit"
environment.Append( BUILDERS = {'Knit' : builder} )
# define an `rmarkdown::render()` builder
builder = Builder(action = '/usr/bin/Rscript -e "rmarkdown::render(input=\'$SOURCE\', output_file=\'$TARGET\')"',
                  src_suffix='Rmd')
# add the builder as "RMD"
environment.Append( BUILDERS = {'RMD' : builder} )
# define source (and target) files
# main cleaning session code
environment.RMD(source='cleaning_session.Rmd', target='cleaning_session.html')
# documentation of the cleaning process
environment.Knit(source='Cleaning_Process.Rmd', target='Cleaning_Process.html')
# documentation of data
environment.Knit(source='Code_Book.Rmd', target='Code_Book.html')
The first builder calls the custom script called knit, which in turn takes care of the target file/extension, here being cleaning_session.html. Likely the src_suffix parameter is not needed at all in this very example.
The second builder added runs Rscript -e "rmarkdown::render(input='$SOURCE', output_file='$TARGET')".
The existence of $TARGETs (as in the example at Command wrapper) ensures SCons won't repeat work if a target file already exists.
The custom script (whose source I can't retrieve currently) is:
#!/usr/bin/env Rscript
local({
  p = commandArgs(TRUE)
  if (length(p) == 0L || any(c('-h', '--help') %in% p)) {
    message('usage: knit input [input2 input3] [-n] [-o output output2 output3]
  -h, --help       to print help messages
  -n, --no-convert do not convert tex to pdf, markdown to html, etc
  -o               output filename(s) for knit()')
    q('no')
  }
  library(knitr)
  o = match('-o', p)
  if (is.na(o)) output = NA else {
    output = tail(p, length(p) - o)
    p = head(p, o - 1L)
  }
  nc = c('-n', '--no-convert')
  knit_fun = if (any(nc %in% p)) {
    p = setdiff(p, nc)
    knit
  } else {
    if (length(p) == 0L) stop('no input file provided')
    if (grepl('\\.(R|S)(nw|tex)$', p[1])) {
      function(x, ...) knit2pdf(x, ..., clean = TRUE)
    } else {
      if (grepl('\\.R(md|markdown)$', p[1])) knit2html else knit
    }
  }
  mapply(knit_fun, p, output = output, MoreArgs = list(envir = globalenv()))
})
All that is necessary now is to run scons.