I am trying to implement a parser plugin for fluentd. Below are the configuration file and the plugin file.
Fluentd config file.
<source>
type syslog
port 9010
bind x.x.x.x
tag flog
format flog_message
</source>
Plugin file
module Fluent
  class TextParser
    class ElogParser < Parser
      Plugin.register_parser("flog_message", self)

      config_param :delimiter, :string, :default => " " # delimiter is configurable with " " as default
      config_param :time_format, :string, :default => nil # time_format is configurable

      # This method is called after config_params have read configuration parameters
      def configure(conf)
        super # let config_param populate @delimiter and @time_format first

        if @delimiter.length != 1
          raise ConfigError, "delimiter must be a single character. #{@delimiter} is not."
        end

        # TimeParser class is already given. It takes a single argument as the time format
        # to parse the time string with.
        @time_parser = TimeParser.new(@time_format)
      end

      def call(text)
        # decode text
        # ...
        yield result_hash
      end
    end
  end
end
However, the call method is not executed after running Fluentd. Any help is greatly appreciated.
Since v0.12, use parse instead of call.
docs.fluentd.org was outdated, so I just updated the article: http://docs.fluentd.org/articles/plugin-development#parser-plugins
Sorry for forgetting to update the document...
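For reference, a minimal sketch of the same parser updated for v0.12: the only required change is renaming call to parse. The body below is illustrative (it assumes a simple delimiter split; the field names are made up), the real decoding logic from the original plugin stays the same:

def parse(text)
  # split the line on the configured delimiter and build a record hash
  # (the "field0", "field1", ... keys are purely illustrative)
  record = {}
  text.split(@delimiter).each_with_index do |field, i|
    record["field#{i}"] = field
  end
  time = Engine.now # or @time_parser.parse(...) if the line carries a timestamp
  yield time, record
end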
I'm having a problem with creating a ticket with the generic interface. I'm using Perl via a SOAP API. I created a web service with the GenericTicketConnector.yml.
I looked at the debugger of the web service and the only data that is not provided is that of TicketCreate.
I uploaded the script to the remote (Unix) host where the files are stored using WinSCP, and I run the script over a PuTTY SSH connection.
Error message: 500 Internal Server Error
use strict;
use warnings;
use SOAP::Lite;
use Data::Dumper;

# this is the URL for the web service
# the format is
# <HTTP_TYPE>://<OTRS_FQDN>/nph-genericinterface.pl/Webservice/<WEB_SERVICE_NAME>
# or
# <HTTP_TYPE>://<OTRS_FQDN>/nph-genericinterface.pl/WebserviceID/<WEB_SERVICE_ID>
my $URL = 'http://localhost/otrs/nph-genericinterface.pl/Webservice/GenericTicketConnector';
# this name space should match the specified name space in the SOAP transport for the web service
my $NameSpace = 'http://www.otrs.org/TicketConnector/';
# this is the operation to execute; it could be TicketCreate, TicketUpdate, TicketGet, TicketSearch
# or SessionCreate, and it must be defined in the web service.
my $Operation = 'TicketCreate';
# this variable is used to store all the parameters to be included in a request in XML format; each
# operation has a determined set of mandatory and non-mandatory parameters to work correctly, please
# check the OTRS Admin Manual in order to get the complete list
my $XMLData = '
<UserLogin>pp</UserLogin>
<Password>********</Password>
<Ticket>
<Title>some title</Title>
<CustomerUser>Jan2804</CustomerUser>
<Queue>raw</Queue>
<State>open</State>
<Type>Incident</Type>
<Priority>3 normal</Priority>
</Ticket>
<Article>
<Subject>some subject</Subject>
<Body>some body</Body>
<ContentType>text/plain; charset=utf8</ContentType>
</Article>
';
# create a SOAP::Lite data structure from the provided XML data structure
my $SOAPData = SOAP::Data
    ->type( 'xml' => $XMLData );

my $SOAPObject = SOAP::Lite->uri($NameSpace)->proxy($URL)
    ->$Operation($SOAPData);
# check for a fault in the soap code
if ( $SOAPObject->fault() ) {
    print $SOAPObject->faultcode(), " ", $SOAPObject->faultstring(), "\n";
}
# otherwise print the results
else {
    # get the XML response part from the SOAP message
    my $XMLResponse = $SOAPObject->context()->transport()->proxy()->http_response()->content();

    # deserialize response (convert it into a perl structure)
    my $Deserialized = eval {
        SOAP::Deserializer->deserialize($XMLResponse);
    };

    # remove all the headers and other not needed parts of the SOAP message
    my $Body = $Deserialized->body();

    # just output the relevant data and not the operation name key (like TicketCreateResponse)
    for my $ResponseKey ( sort keys %{$Body} ) {
        print Dumper( $Body->{$ResponseKey} );    ## no critic
    }
}
Try to open the webservice in your browser. Is it working?
I recently did it for myself. I used the REST yml file, and I called my web service GenericTicketConnectorREST. But you should get some answer as well.
The following URI is accessible for me via browser:
http://otrs-test.company.local/otrs/nph-genericinterface.pl/Webservice/GenericTicketConnectorREST/Ticket?UserLogin=username&Password=testtesttest
We are currently using CloudFormation to create a Glue job (via CodeBuild and CodePipeline). The one thing we are stuck on is how to automate the code that goes into the Glue job.
Our current relevant piece of the cloudformation template looks like this:
MyJob:
  Type: AWS::Glue::Job
  Properties:
    Command:
      Name: glueetl
      ScriptLocation: "s3://aws-glue-scripts//your-script-file.py"
    DefaultArguments:
      "--job-bookmark-option": "job-bookmark-enable"
    ExecutionProperty:
      MaxConcurrentRuns: 2
    MaxRetries: 0
    Name: cf-job1
    Role: !Ref MyJobRole
The problem is the "ScriptLocation". It looks like it is required to be an S3 location. How can we automate the upload of this? The code is in a .py file in our Git repository, and I assume it is uploaded to the artifact repository as part of the CodeBuild process, but how do we access it?
Would like to hear how others are doing this. Thanks!
EDIT: I was able to find a similar Stack Overflow post: AWS Glue automatic job creation, but the answers really don't give a solution or understand the question posed.
I've written a tool to handle the upload of stack dependencies, including CloudFormation nested templates and non-inline Lambda functions.
Currently AWS Glue is not handled, since I haven't tried it in any project yet, but it should be easy to extend to support Glue.
The dependencies are defined in a separate config file, and a piece of code within the tool is responsible for reading it. Here's the sample config:
Nested CloudFormation templates:
# DEPENDS=( <ParameterName>=<NestedTemplate> )
#
# Required: Yes if there is a nested template, otherwise No
# Default: None
# Syntax:
# <ParameterName>: The name of template parameter that is referred at the
# value of nested template property `TemplateURL`.
# <NestedTemplate>: A local path or a S3 URL starting with `s3://` or
# `https://` pointing to the nested template.
# Nested templates at a local path are going to be uploaded
# to the S3 Bucket automatically during the deployment.
# Description:
# Double quote the pairs which contain whitespaces or special characters.
# Use `#` to comment out.
# ---
# Example:
# DEPENDS=(
# NestedTemplateFooURL=/path/to/nested/foo/stack.json
# NestedTemplateBarURL=/path/to/nested/bar/stack.json
# )
Lambda functions:
# LAMBDA=( <S3BucketParameterName>:<S3KeyParameterName>=<LambdaFunction> )
#
# Required: Yes if there is a non-inline Lambda Function, otherwise No
# Default: None
# Syntax:
# <S3BucketParameterName>: The name of template parameter that is referred
# at the value of Lambda property `Code.S3Bucket`.
# <S3KeyParameterName>: The name of template parameter that is referred
# at the value of Lambda property `Code.S3Key`.
# <LambdaFunction>: A local path or a S3 URL starting with `s3://` pointing
# to the Lambda Function.
# Lambda Functions at a local path are going to be zipped and
# uploaded to the S3 Bucket automatically during the deployment.
# Description:
# Double quote the pairs which contain whitespaces or special characters.
# Use `#` to comment out.
# ---
# Example:
# LAMBDA=(
# S3BucketForLambdaFoo:S3KeyForLambdaFoo=/path/to/LambdaFoo.py
# S3BucketForLambdaBar:S3KeyForLambdaBar=s3://mybucket/LambdaBar.py
# )
The tool is written in bash and comes in 2 parts:
xsh: It works as a bash library framework.
xsh-lib/aws: It's a library of xsh.
The code you may need to expand is located in xsh-lib/aws/functions/cfn/deploy.sh.
The example deploy command looks like:
$ xsh aws/cfn/deploy -C /path/to/your/template-and-config-dir -t stack.json -c sample.conf
I'm considering abstracting the dependencies, such as CloudFormation templates, Lambda functions and Glue scripts, into a single interface for both configs and handlers.
This will make it easier to add new dependency handlers to the deployer.
I want to create a web service for my PhoneGap Android application which will in turn call a Progress 4GL 9.1D procedure.
Does anyone have any idea how to create a web service for this?
That will be a struggle. You CAN create a server that listens to a socket but you will have to handle everything yourself!
Look at this example.
However, you are likely better off writing the web service in a language with better support and then finding another way of getting the data out of the DB. If you're really stuck with a 10+ year old version, you really should consider migrating to something else.
You don't have to upgrade everything -- you could just obtain a license for a version 10 client. V10 clients can connect to v9 databases (the rule is that the client can be up to one major release higher) so you could use that to build a SOAP service. Or you could get a v10 "webspeed" license.
Or you could write a simple enough CGI wrapper to some 4GL code if you have those sorts of skills. I occasionally toss together something like this:
#!/bin/bash
#
LOGFILE=/tmp/myservice.log
SVC=sample
# if a FIFO does not exist for the specified service then create it in /tmp
#
# $1 = direction -- in or out
# $2 = unique service name
#
pj_fifo() {
  if [ ! -p /tmp/$2.$1 ]
  then
    echo `date` "Creating FIFO $2.$1" >> ${LOGFILE}
    rm -f /tmp/$2.$1 >> ${LOGFILE} 2>&1
    /bin/mknod -m 666 /tmp/$2.$1 p >> ${LOGFILE} 2>&1
  fi
}
if [ "${REQUEST_METHOD}" = "POST" ]
then
read QUERY_STRING
fi
# header must include a blank line
#
# we're returning XML
#
echo "Content-type: text/xml" # or text/html or text/plain
echo
# debugging echo...
#
# echo $QUERY_STRING
#
# echo "<html><head><title>Sample CGI Interface</title></head><body><pre>QUERY STRING = ${QUERY_STRING}</pre></body></html>"
# ensure that the FIFOs exist
#
pj_fifo in $SVC
pj_fifo out $SVC
# make the request
#
echo "$QUERY_STRING" > /tmp/${SVC}.in
# send the response back to the requestor
#
cat /tmp/${SVC}.out
# all done!
#
echo `date` "complete" >> ${LOGFILE}
Then you just arrange for a background session to be reading /tmp/sample.in:
/* sample.p
*
* mbpro dbname -p sample.p > /tmp/sample.log 2>&1 &
*
*/
define variable request as character no-undo.
define variable result as character no-undo.
input from value( "/tmp/sample.in" ).
output to value( "/tmp/sample.out" ).
do while true:

  import unformatted request.

  /* parse it and do something with it... */

  result = '<?xml version="1.0"?>~n<status>~n'.
  result = result + "ok". /* or whatever turns your crank... */
  result = result + "</status>~n".

  put unformatted result. /* write the response so it lands in /tmp/sample.out */

end.
When input arrives, parse the line and do whatever. Spit the answer back out to /tmp/sample.out and loop. It's not very fancy, but if your needs are modest it is easy to do. If you need more scalability, robustness or security then you might ultimately need something more sophisticated, but this will at least let you get started prototyping.
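For a quick smoke test, assuming the wrapper above is installed as a CGI script (the URL path below is purely illustrative), you can POST a query string and see the XML that the sample 4GL code writes back:

$ curl --data "action=ping" http://yourserver/cgi-bin/sample.cgi
<?xml version="1.0"?>
<status>
ok</status>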
I am defining my server setup like this:
task :test do
  role(:frontend) {[server1, server2, server3, {:user => "frontend-user", :options => {:log_location => "HOW DO I READ THIS??"}}]}
  role(:backend) {...}
  role(:db) {...}
  role(:mq) {...}
end

task :staging do
  role(:frontend) {[server1, server2, server3, {:user => "frontend-user", :options => {:log_location => "HOW DO I READ THIS??"}}]}
  role(:backend) {...}
  role(:db) {...}
  role(:mq) {...}
end

task :prod do
  role(:frontend) {[server1, server2, server3, {:user => "frontend-user", :options => {:log_location => "HOW DO I READ THIS??"}}]}
  role(:backend) {...}
  role(:db) {...}
  role(:mq) {...}
end
This is to embrace all the complexity of a legacy enterpricey system.
Now, from a task, I want to read the log_location.
Task example:
namespace :log do
  desc "list all log files"
  task :list do
    run %(ls -1 #{log_location}/*/*.log)
  end
end
The problem is that the variable log_location is undefined.
/.rvm/gems/ruby-2.0.0-p0/gems/capistrano-2.14.2/lib/capistrano/configuration/namespaces.rb:193:in
`method_missing': undefined local variable or method `log_location'
for # (NameError)
How do I access that variable?
Is there a smarter/simpler way of setting this custom variable?
I'm sorry to say you can't read that. The blocks passed to task() aren't executed in a server context, thus the block in effect doesn't know what server it's operating on.
The classical workaround for this over the years has been to upload a config file which looks something like this:
---
hostname1:
  log_file_location: "/var/log/hostname1/foo/bar"
hostname2:
  log_file_location: "/var/log/hostname2/foo/bar"
(or similar) and use the machine's hostname when loading the configuration, as sketched below.
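A minimal sketch of that classical workaround with Capistrano 2.x, assuming the YAML above has already been uploaded to /etc/deploy/config.yml on every host (the path, task name and field names are illustrative):

require 'yaml'

namespace :log do
  desc "list all log files, using the per-host config file"
  task :list do
    find_servers_for_task(current_task).each do |server|
      # read the uploaded config from the remote machine and pick the entry
      # matching this host's name
      raw  = capture("cat /etc/deploy/config.yml", :hosts => server.host)
      conf = YAML.load(raw)
      log_location = conf[server.host]["log_file_location"]
      run "ls -1 #{log_location}/*/*.log", :hosts => server.host
    end
  end
end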
I know this isn't a great workaround, thus in the forthcoming (see the v3 branch at Github) version of Capistrano there's a feature which looks like this:
host1 = SSHKit::Host.new 'user@example.com'
host2 = SSHKit::Host.new 'user@example.org'

host1.properties = {log_file_location: "/foo/bar"}
host2.properties.log_file_location = "/bar/baz"

on hosts do |host|
  target = "/var/www/sites/"
  if host.hostname =~ /org/
    target += "dotorg"
  else
    target += "dotcom"
  end
  execute! :head, '-n 20', host.properties.log_file_location
  execute! :git, :clone, "git@git.#{host.hostname}", target
end
(SSHKit Examples) - SSHKit is the new backend driver for Capistrano.
The v3 branch probably isn't ready for prime time yet; we're having a lot of success internally, but the documentation is pretty, ahem, non-existent. However the code is quite literally an order of magnitude less imposing, and I think you'll find it quite readable.
You need this: https://github.com/capistrano/capistrano/wiki/2.x-Multistage-Extension
It means that you can isolate stage-specific code in separate files named after the stage. If you want to test for the stage name in the shared deploy.rb, you can do that too, like this:
Put this in your deploy.rb
task :show_stage do
  puts(stage)
end
Test from command line
$ cap staging show_stage
staging
Actually, I was able to pull out the log_location variable, but ended up with a solution that had one restriction:
I am using log location for one environment only. This is no problem in my current project, since I run the capistrano task against one role at a time.
For testing this setup, I made this task:
namespace :support do
  desc "Test if the log location variable is correctly fetched from configuration"
  task :test_log_location do
    find_servers_for_task(current_task).each do |server|
      # puts server.host
      # puts server.port
      # puts server.user
      # puts server.options
      result = "LOG LOCATION: #{server.options[:log_location]}"
      #puts result
      logger.info result
    end
  end
end
Then, for my tasks in the :log namespace, I defined the variable with set :log_location and also defined the :current_role variable:
namespace :log do
  def set_log_location
    #set_log_location
    #puts fetch(:log_location)
    log_location = nil
    options = nil
    find_servers_for_task(current_task).each do |server|
      # puts server.host
      # puts server.port
      # puts server.user
      # puts server.options
      options = server.options
      log_location = server.options[:log_location]
      #log_location = server.options[:current_role]
    end
    msg1 = "FATAL: you need to specify 'ROLES=frontend,backend,mq' (or one of them) from command line"
    msg2 = "FATAL: Could not get log_location from environment/server options. I can only see these options: #{options}"
    raise msg1 if ENV['ROLES'].nil?
    raise msg2 if log_location.nil?
    set :log_location, log_location
    set :current_role, ENV['ROLES'].split(',').first
    logger.info %(CURRENT_ROLE #{fetch(:current_role)})
    logger.info %(THE LOG LOCATION IS: #{fetch(:log_location)})
  end
end
Finally, I used a separate method to fully qualify the log path (needed for my setup -- also in the :log namespace):
def log_location
  log_names = {
    :frontend => "*/play.log",
    :backend => "*Weblogic*/*.{log,out}"
  }
  loc = "#{fetch(:log_location)}/#{log_names[fetch(:current_role).to_sym]}"
  logger.info "using the log location of '#{loc}'"
  loc
end
Now, each task can use the specific log location like this:
desc "list all log files"
task :list do
set_log_location
run %(ls -l #{log_location})
end
I am sure this can be done more elegantly, but it works for me.
Where is IPython's configuration file, which starts with c = get_config(), executed? I'm asking because I want to understand what order things are done in IPython, e.g. why certain commands will not work if included as c.InteractiveShellApp.exec_lines.
This is related to my other question, Log IPython output?, because I want access to a logger attribute, but I can't figure out how to access it in the configuration file, and by the time exec_lines are run, the logger has already started (it's too late).
EDIT: I've accepted a solution based on using a startup file in IPython 0.12+. Here is my implementation of that solution:
from time import strftime
import os.path

ip = get_ipython()

#ldir = ip.profile_dir.log_dir
ldir = os.getcwd()
fname = 'ipython_log_' + strftime('%Y-%m-%d') + ".py"
filename = os.path.join(ldir, fname)
notnew = os.path.exists(filename)

try:
    ip.magic('logstart -o %s append' % filename)
    if notnew:
        ip.logger.log_write(u"########################################################\n")
    else:
        ip.logger.log_write(u"#!/usr/bin/env python\n")
        ip.logger.log_write(u"# " + fname + "\n")
        ip.logger.log_write(u"# IPython automatic logging file\n")
        ip.logger.log_write(u"# " + '# Started Logging At: ' + strftime('%Y-%m-%d %H:%M:%S\n'))
        ip.logger.log_write(u"########################################################\n")
    print " Logging to " + filename
except RuntimeError:
    print " Already logging to " + ip.logger.logfname
There are only two subtle differences from the proposed solution linked:
1. it saves the log to the cwd instead of some log directory (though I like that more...)
2. ip.magic_logstart doesn't seem to exist; instead one should use ip.magic('logstart')
The config system sets up a special namespace containing the get_config() function, runs the config file, and collects the values to apply them to the objects as they're created. Referring to your previous question, it doesn't look like there's a configuration value for logging output. You may want to start logging yourself after config, when you can control it more precisely. See this example of starting logging automatically.
Your other question mentions that you're limited to 0.10.2 on one system: that has a completely different config system that won't even look at the same file.
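To make the ordering concrete, here is a minimal illustration of the two kinds of files under the default profile. The contents and file names are illustrative, and the exact profile directory varies by IPython version and platform:

# ~/.ipython/profile_default/ipython_config.py
# Run by the config system in its own namespace: it only collects values,
# so there is no running shell (and no ip.logger) to touch here.
c = get_config()
c.InteractiveShellApp.exec_lines = ["import os"]

# ~/.ipython/profile_default/startup/00-logging.py
# Run inside the fully constructed shell, so get_ipython() works and you can
# start logging yourself, as in the accepted solution above.
ip = get_ipython()
ip.magic('logstart -o ipython_today.py append')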