Logstash-Forwarder 3.1 state file .logstash-forwarder not updating - elastic-stack

I am having an issue with Logstash-forwarder 3.1.1 on CentOS 6.5 where the state file /.logstash-forwarder is not updating as information is sent to Logstash.
I have found that as activity is logged by logstash-forwarder, the corresponding offset is not recorded in the /.logstash-forwarder state file. The /.logstash-forwarder file is recreated each time 100 events are recorded, but it is not updated with new data. I know the file is being recreated because I changed its permissions as a test, and the permissions are reset each time.
Below are my configurations (with some actual data italicized/scrubbed):
Logstash-forwarder 3.1.1
CentOS 6.5
/etc/logstash-forwarder
Note that the "paths" key does contain wildcards:
{
  "network": {
    "servers": [ "*server*:*port*" ],
    "timeout": 15,
    "ssl ca": "/*path*/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/a/b/tomcat-*-*/logs/catalina.out"
      ],
      "fields": { "type": "apache", "time_zone": "EST" }
    }
  ]
}
Per the Logstash instructions for CentOS 6.5, I have configured the LOGSTASH_FORWARDER_OPTIONS value so it looks like the following:
LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"
Below is the resting state of the /.logstash-forwarder state file:
{"/a/b/tomcat-set-1/logs/catalina.out":{"source":"/a/b/tomcat-set-1/logs/catalina.out","offset":433564,"inode":*number1*,"device":*number2*},"/a/b/tomcat-set-2/logs/catalina.out":{"source":"/a/b/tomcat-set-2/logs/catalina.out","offset":18782151,"inode":*number3*,"device":*number4*}}
There are two sets of logs being captured. The offsets have stayed the same for 20 minutes while activity has occurred and been sent over to Logstash.
Can anyone give me any advice on how to fix this problem, whether it is a configuration setting I missed or a bug?
Thank you!

After more research I found it was announced that Filebeat is now the preferred forwarder of choice. I even found a post by the owner of Logstash-Forwarder stating that the program is full of bugs and is no longer fully supported.
I have instead moved to CentOS 7 using the latest version of the ELK stack, with Filebeat as the forwarder. Things are going much smoother now!
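For anyone making the same switch, here is a minimal filebeat.yml sketch that mirrors the logstash-forwarder config above (assuming Filebeat 1.x; the server, port, and certificate path are the same scrubbed placeholders as in the original config):

filebeat:
  prospectors:
    -
      paths:
        - /a/b/tomcat-*-*/logs/catalina.out
      # custom fields, mirroring the "fields" block in the old config
      fields:
        type: apache
        time_zone: EST
output:
  logstash:
    hosts: ["*server*:*port*"]
    tls:
      # reuse the CA that logstash-forwarder was pointed at
      certificate_authorities: ["/*path*/logstash-forwarder.crt"]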

Related

Log spam with "unable to find container named fluentd-gcp"

Last night my Kubernetes cluster on GKE was upgraded to 1.16.8-gke.9. Since then, the logs show error: unable to find container named fluentd-gcp every minute. Logging from my applications still works, but I'd like to know what causes this error and how to get rid of it.
Expanding the error yields slightly more details:
{
  "textPayload": "error: unable to find container named fluentd-gcp\n",
  "insertId": "v1b2u2ldrnswujhz2",
  "resource": {
    "type": "k8s_container",
    "labels": {
      "project_id": "foo",
      "pod_name": "fluentd-gke-scaler-cd4d654d7-tgg27",
      "cluster_name": "foo-cluster",
      "container_name": "fluentd-gke-scaler",
      "namespace_name": "kube-system",
      "location": "us-east1-d"
    }
  },
  "timestamp": "2020-04-24T16:15:40.224944500Z",
  "severity": "ERROR",
  "labels": {
    "gke.googleapis.com/log_type": "system",
    "k8s-pod/k8s-app": "fluentd-gke-scaler",
    "k8s-pod/pod-template-hash": "cd4d654d7"
  },
  "logName": "projects/foo/logs/stderr",
  "receiveTimestamp": "2020-04-24T16:15:45.923960735Z"
}
kubectl get all --all-namespaces shows fluentd-gke pods with a fluentd-gke container, not fluentd-gcp.
Any advice would be appreciated, and I'm happy to post more details if you tell me where to look for them.
Edit: More details and related problems on the GKE issue tracker: https://issuetracker.google.com/issues/156965162
This will be fixed in GKE 1.16.9-gke.6 according to the issue tracker: https://issuetracker.google.com/issues/156965162
1.16.8-gke.9 is currently offered through the rapid channel. Keep in mind that this channel is offered on an early-access basis for people to test new releases; as such, the versions offered may be subject to unresolved issues with no known workaround. That said, a possible fix could be to drain the affected node and migrate your workloads to another node (see the sketch below). If the issue persists, create an issue here.
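A sketch of that drain-and-migrate suggestion (<node-name> is a placeholder for whichever node hosts the affected pods):

# Move workloads off the affected node; drain cordons the node first,
# then evicts pods so they reschedule elsewhere (flags as of k8s 1.16).
kubectl drain <node-name> --ignore-daemonsets --delete-local-data
# If the error stops, either delete the node (autoscaler/auto-repair
# replaces it) or make it schedulable again:
kubectl uncordon <node-name>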

JPAM Configuration for Apache Drill

I'm trying to configure PLAIN authentication based on JPAM 1.1 and am going crazy, since it doesn't work after checking my syntax and settings countless times. When I start Drill with cluster-id and zk-connect only, it works, but with both options for PLAIN authentication it fails. Since I started with pam4j and tried JPAM later on, I kept JPAM for this post. In general I don't have any preference; I just want to get it done. I'm running Drill on CentOS in embedded mode.
I've done everything required per the official documentation:
I downloaded JPAM 1.1, uncompressed it and put libjpam.so into a specific folder (/opt/pamfile/)
I've edited drill-env.sh with:
export DRILLBIT_JAVA_OPTS="-Djava.library.path=/opt/pamfile/"
I edited drill-override.conf with:
drill.exec: {
  cluster-id: "drillbits1",
  zk.connect: "local",
  impersonation: {
    enabled: true,
    max_chained_user_hops: 3
  },
  security: {
    auth.mechanisms: ["PLAIN"],
  },
  security.user.auth: {
    enabled: true,
    packages += "org.apache.drill.exec.rpc.user.security",
    impl: "pam",
    pam_profiles: [ "sudo", "login" ]
  }
}
It throws the following error:
Error: Failure in starting embedded Drillbit: org.apache.drill.exec.exception.DrillbitStartupException: Problem in finding the native library of JPAM (Pluggable Authenticator Module API). Make sure to set Drillbit JVM option 'java.library.path' to point to the directory where the native JPAM exists.:no jpam in java.library.path (state=,code=0)
I've run that *.sh file by hand to make sure that the necessary path is exported, since I don't know if Drill expects that. The path to libjpam should now be known. I've started Sqlline with sudo et cetera. No chance. The documentation doesn't help. I don't get why it's so bad and, in my opinion, incomplete. Sadly there is zero explanation of how to troubleshoot or configure basic user authentication in detail.
Or do I have to do something that is expected but not documented? Are there any prerequisites concerning PLAIN authentication which aren't mentioned by Apache Drill itself?
Try changing:
export DRILLBIT_JAVA_OPTS="-Djava.library.path=/opt/pamfile/"
to:
export DRILL_JAVA_OPTS="$DRILL_JAVA_OPTS -Djava.library.path=/opt/pamfile/"
It works for me.
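To verify the change took effect, a quick sanity check along these lines may help (a sketch; the path assumes the /opt/pamfile/ location used above, and bin/drill-embedded is the stock embedded-mode launcher):

# Confirm the native library is where java.library.path will point
ls -l /opt/pamfile/libjpam.so
# Then restart the embedded Drillbit and watch whether the JPAM
# startup error is gone
bin/drill-embedded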

Write log to Application Insights from local Service Fabric

I am trying to integrate the Azure App Insights service into a Service Fabric app for logging and instrumentation. I am running the fabric code on my local VM. I followed the document here exactly [scenario 2]. Other resources on learn.microsoft.com also seem to indicate the same steps (e.g. https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostics-event-aggregation-eventflow).
For some reason, I don't see any event entries in App Insights. There are no errors in code when I do this:
ServiceEventSource.Current.ProcessedCountMetric("synced", sw.ElapsedMilliseconds, crc.DataTable.Rows.Count);
eventflowconfig.json contents:
{
  "inputs": [
    {
      "type": "EventSource",
      "sources": [
        { "providerName": "Microsoft-ServiceFabric-Services" },
        { "providerName": "Microsoft-ServiceFabric-Actors" },
        { "providerName": "mystatefulservice" }
      ]
    }
  ],
  "filters": [
    {
      "type": "drop",
      "include": "Level == Verbose"
    }
  ],
  "outputs": [
    {
      "type": "ApplicationInsights",
      // (replace the following value with your AI resource's instrumentation key)
      "instrumentationKey": "XXXXXXXXXXXXXXXXXXXXXX",
      "filters": [
        {
          "type": "metadata",
          "metadata": "metric",
          "include": "ProviderName == mystatefulservice && EventName == ProcessedCountMetric",
          "operationProperty": "operation",
          "elapsedMilliSecondsProperty": "elapsedMilliSeconds",
          "recordCountProperty": "recordCount"
        }
      ]
    }
  ],
  "schemaVersion": "2016-08-11"
}
In ServiceEventSource.cs
[Event(ProcessedCountMetricEventId, Level = EventLevel.Informational)]
public void ProcessedCountMetric(string operation, long elapsedMilliSeconds, int recordCount)
{
    if (IsEnabled())
        WriteEvent(ProcessedCountMetricEventId, operation, elapsedMilliSeconds, recordCount);
}
EDIT
Adding the diagnostics pipeline code from Program.cs in the fabric stateful service:
using (var diagnosticsPipeline =
    ServiceFabricDiagnosticPipelineFactory.CreatePipeline($"{ServiceFabricGlobalConstants.AppName}-mystatefulservice-DiagnosticsPipeline"))
{
    ServiceRuntime.RegisterServiceAsync("mystatefulserviceType",
        context => new mystatefulservice(context)).GetAwaiter().GetResult();

    ServiceEventSource.Current.ServiceTypeRegistered(Process.GetCurrentProcess().Id,
        typeof(mystatefulservice).Name);

    // Prevents this host process from terminating so services keep running.
    Thread.Sleep(Timeout.Infinite);
}
EventSource is a tricky technology; I have been working with it for a while and always have problems. The configuration looks good, and it is very hard to investigate without access to the environment, so I will make some suggestions.
There are a few catches you must be aware of:
If you are listening to ETW events from a different process, your process must run as a user with membership in the 'Performance Log Users' group, which has permission to create event sessions to listen for these events. Check which identity your service is running under and whether it is part of that group.
Ensure the events are being emitted correctly and that you can see them in the Diagnostic Events window; if they do not show up there, the problem is in the provider.
For testing purposes, comment out the line if (IsEnabled()). It is an internal check to validate whether your events should be emitted. I have had situations where it was always false and skipped emitting events; it probably caches the result for a while, and the docs are not clear on how it should work.
Whenever possible, use the EventSource from the NuGet package instead of the framework one; the framework version is full of bugs and lacks fixes found in the NuGet version.
Application Insights is not real-time; sometimes it can take a few minutes to process your events. I would recommend outputting the events to a console or file to check whether the pipeline is listening correctly (see the sketch below), and enabling the AppInsights output afterwards.
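As an example of that console check, EventFlow ships a StdOutput output that can sit alongside (or temporarily replace) the ApplicationInsights output in the eventflowconfig.json above; a sketch of just the outputs section:

"outputs": [
  { "type": "StdOutput" },
  {
    "type": "ApplicationInsights",
    "instrumentationKey": "XXXXXXXXXXXXXXXXXXXXXX"
  }
]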
The link you provided is quite outdated, and there's actually a much better way to log application error and exception info to Application Insights. For example, the above won't help with tracking the call hierarchy of an incoming request between multiple services.
Have a look at the Microsoft App Insights Service Fabric NuGet packages. They work great for:
Sending error and exception info
Populating the application map with all your services and their dependencies (including database)
Reporting on app performance metrics
Tracing service call dependencies end-to-end
Integrating with native as well as non-native SF applications
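Those packages wire Service Fabric context into your telemetry; underneath, the calls are the standard App Insights TelemetryClient API. A minimal sketch of that API (the instrumentation key is a placeholder, and the SF-specific telemetry initializers the packages register are omitted here):

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Classic App Insights usage; the Service Fabric packages layer
// context-enriching telemetry initializers on top of this.
TelemetryConfiguration.Active.InstrumentationKey = "XXXXXXXXXXXXXXXXXXXXXX";
var client = new TelemetryClient();

client.TrackTrace("synced");                // plain log entry
client.TrackMetric("recordCount", 42);      // simple numeric metric
client.TrackException(new InvalidOperationException("example"));
client.Flush();                             // push pending telemetry before exit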

How to initiate a workflow in Activiti using REST API

I have created an Activiti process using service tasks etc. with Eclipse and deployed the .bar to Activiti, which is running on Tomcat. It was deployed successfully, and I can start my process using Activiti Explorer without any issue. The deployed process name is "My process" and it is listed under Processes -> Deployed Process Definitions in Activiti Explorer as well. In the diagram it has the name "myProcess:1:1473".
But I have two questions.
I need to start my process using a REST call, i.e. without using Activiti Explorer. What is the URL for that? I tried several variations of http://localhost:8080/activiti-rest/service/runtime/process-instances but none of them worked.
When I restart Tomcat, my process definition is not shown in Activiti Explorer. Each time I restart I need to redeploy the process .bar file. Is that the natural behavior of the engine?
For your first question, check this guide for further details:
POST runtime/process-instances should be your endpoint (be sure to make a POST request with application/json as your content type).
The payload, on the other hand, should follow one of three templates:
Request body (start by process definition id):
{
  "processDefinitionId": "oneTaskProcess:1:158",
  "businessKey": "myBusinessKey",
  "variables": [
    {
      "name": "myVar",
      "value": "This is a variable"
    }
  ]
}
Request body (start by process definition key):
{
  "processDefinitionKey": "oneTaskProcess",
  "businessKey": "myBusinessKey",
  "tenantId": "tenant1",
  "variables": [
    {
      "name": "myVar",
      "value": "This is a variable"
    }
  ]
}
Request body (start by message):
{
  "message": "newOrderMessage",
  "businessKey": "myBusinessKey",
  "tenantId": "tenant1",
  "variables": [
    {
      "name": "myVar",
      "value": "This is a variable"
    }
  ]
}
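Putting it together, a hedged curl example that starts the process from the question by its key (host, port, and credentials are placeholders; a stock Activiti REST webapp uses HTTP basic auth, and the demo install ships with kermit/kermit, so verify for your setup):

curl -X POST -u kermit:kermit \
  -H "Content-Type: application/json" \
  -d '{"processDefinitionKey":"myProcess","businessKey":"myBusinessKey"}' \
  http://localhost:8080/activiti-rest/service/runtime/process-instances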
As for your second issue, you should be aware that the OOTB (out-of-the-box) config may involve automatic DB cleaning upon each and every restart; you need to locate that config and override it with values of your choice! Check this section for further info; the databaseSchemaUpdate param might be exactly what you are looking for (see the sketch below).
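For reference, a sketch of that parameter in a standalone activiti.cfg.xml (the JDBC values are placeholders; the point is a file- or server-backed database plus databaseSchemaUpdate, since an in-memory H2 database loses deployments on every restart):

<bean id="processEngineConfiguration"
      class="org.activiti.engine.impl.cfg.StandaloneProcessEngineConfiguration">
  <!-- a persistent H2 database instead of the default in-memory one -->
  <property name="jdbcUrl" value="jdbc:h2:tcp://localhost/~/activiti" />
  <property name="jdbcDriver" value="org.h2.Driver" />
  <property name="jdbcUsername" value="sa" />
  <property name="jdbcPassword" value="" />
  <!-- create the schema if missing and update it when needed -->
  <property name="databaseSchemaUpdate" value="true" />
</bean>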

node.js - can't get mongodb to work, may have installed wrong?

I guess this is a shot in the dark, since there's not a lot of specific code I can show you...
but I'm using Node and trying to use MongoDB, however I can't get MongoDB to connect. I've tried a couple of tutorials that should pretty much work out of the box. In several cases, pages that don't seem to have any immediate reference to the database will load OK. For example, a 'posts/new' page will load, but '/', which likely makes reference to showing posts, will just hang silently (the browser shows the page loading).
If, for example, I go to submit a new post, the browser hangs, and in the terminal I get:
Express app started on port 3000
Error: Invalid ObjectId
at Function.fromString (/Users/xxx/Node_projects/noobjs/node_modules/mongoose/lib/drivers/node-mongodb-native/objectid.js:27:11)
at ObjectId.cast (/Users/xxx/Node_projects/noobjs/node_modules/mongoose/lib/schema/objectid.js:99:16)
at ObjectId.castForQuery (/Users/xxx/Node_projects/noobjs/node_modules/mongoose/lib/schema/objectid.js:133:17)
at Query.cast (/Users/xxx/Node_projects/noobjs/node_modules/mongoose/lib/query.js:249:32)
at Query.findOne (/Users/xxx/Node_projects/noobjs/node_modules/mongoose/lib/query.js:851:10)
at Function.findOne (/Users/xxx/Node_projects/noobjs/node_modules/mongoose/lib/model.js:714:16)
at /Users/xxx/Node_projects/noobjs/routes/articles.js:13:13
at paramCallback (/Users/xxx/Node_projects/noobjs/node_modules/express/lib/router/index.js:259:7)
at param (/Users/xxx/Node_projects/noobjs/node_modules/express/lib/router/index.js:241:11)
at pass (/Users/xxx/Node_projects/noobjs/node_modules/express/lib/router/index.js:253:5)
GET /article/new 500
GET /articles/new 200
If it helps at all, here's the package.json for the tutorial app used above:
{
  "name": "noobjs",
  "description": "A demo app in nodejs illustrating use of express, jade and mongoose",
  "version": "2.0.0",
  "private": true,
  "author": "Madhusudhan M S (http://twitter.com/madhums)",
  "engines": {
    "node": ">= 0.4.10"
  },
  "dependencies": {
    "express": ">= 2.3.12",
    "jade": ">= 0.12.4",
    "mongoose": ">= 1.4.0",
    "stylus": ">= 0.13.7",
    "express-csrf": ">= 0.3.3",
    "gzippo": ">= 0.0.4",
    "express-messages": ">= 0.0.2"
  }
}
Also, when I do ./mongod from the command line I get -bash: ./mongod: No such file or directory. Anything else I try from the terminal or browser doesn't seem to communicate with MongoDB at all.
Let me know if there's anything I can do to diagnose or change my installation.
Step 0: ensure that MongoDB is running: ps -ef | grep mongod. If it's not started, check the log file for the reason it failed.
Step 1: ensure that you can connect to MongoDB from the command line.
Step 2: check your connection method; there are several ways to do this, but ensure that you have the correct servername:port/dbname (a minimal sketch follows).
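For step 2, a minimal Mongoose connection sketch using the servername:port/dbname pattern (the database name noobjs_dev is a made-up placeholder; a default local install listens on localhost:27017):

var mongoose = require('mongoose');

// mongodb://servername:port/dbname
mongoose.connect('mongodb://localhost:27017/noobjs_dev');

mongoose.connection.on('open', function () {
  console.log('connected to mongodb');
});
mongoose.connection.on('error', function (err) {
  console.error('mongodb connection error:', err);
});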
Here's a blog post I wrote that walks through a simple setup with Node.js + MongoDB + Cloud Foundry. The simple setup works locally and on Cloud Foundry, so you should be able to ignore the Cloud Foundry steps and still get something working.