Export OpenTelemetry metrics to OpenSearch

I am trying to export OpenTelemetry metrics to OpenSearch.
My configuration is as follows:
metrics-pipeline:
  source:
    otel_metrics_source::
  processor:
    - otel_metrics_raw_processor:
  sink:
    - opensearch:
        hosts: ["https://<domain-name>:443"]
        insecure: true
        username: "username"
        password: "password"
I was going through one of the Data Prepper issues and came to know that metrics support was added recently:
https://github.com/opensearch-project/data-prepper/issues/242
I am not able to find proper documentation on this.
In the Data Prepper pod, I am getting the exception below:
com.amazon.dataprepper.model.plugin.NoPluginFoundException: Unable to find a plugin named 'otel_metrics_source:'. Please ensure that plugin is annotated with appropriate values.
at com.amazon.dataprepper.plugin.DefaultPluginFactory.lambda$getPluginClass$2(DefaultPluginFactory.java:111) ~[data-prepper.jar:1.5.1]
at java.util.Optional.orElseThrow(Optional.java:401) ~[?:?]
at com.amazon.dataprepper.plugin.DefaultPluginFactory.getPluginClass(DefaultPluginFactory.java:111) ~[data-prepper.jar:1.5.1]
at com.amazon.dataprepper.plugin.DefaultPluginFactory.loadPlugin(DefaultPluginFactory.java:62) ~[data-prepper.jar:1.5.1]
Appreciate any inputs on this.

Currently, there is no section on OpenTelemetry metrics support in the general Data Prepper documentation. You can find documentation within the respective plugin directories:
otel-metrics-source
otel-metrics-raw-processor
There is also a blog post on OpenTelemetry metrics ingestion with Data Prepper on the OpenSearch blog. It contains a configuration example.

Just remove the extra colon at the end of otel_metrics_source, set the ssl flag to false, and add the index in the opensearch sink section.
Thank you @Karsten Schnitter for your help.
Updated configuration:
metrics-pipeline:
  source:
    otel_metrics_source:
      ssl: false
  processor:
    - otel_metrics_raw_processor:
  sink:
    - opensearch:
        hosts: ["https://<domain-name>:443"]
        insecure: true
        username: "username"
        password: "password"
        index: metrics-otel-v1-%{yyyy.MM.dd}
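As a side note, the metrics still have to reach the otel_metrics_source from an OTLP client such as the OpenTelemetry Collector. A rough, hypothetical exporter sketch is below; the data-prepper host name and port 21891 are assumptions (21891 should be the otel-metrics-source default, but check the plugin README for your Data Prepper version), and the tls block syntax depends on your Collector version:
# OpenTelemetry Collector sketch (host/port are assumptions, adjust to your deployment)
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlp/data-prepper:
    endpoint: data-prepper:21891   # assumed otel_metrics_source endpoint
    tls:
      insecure: true               # matches ssl: false on the source above

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlp/data-prepper]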

Related

Multi-line Filebeat templates don’t work with filebeat.inputs - type: filestream

I ran into a multiline processing problem in Filebeat when the filebeat.inputs parameters specify type: filestream: the logs from the filestream input are not processed according to the multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}' setting. In the output I see that continuation lines are not appended to the preceding event; instead, new single-line messages are created from individual lines of the log file.
If I specify type: log in the filebeat.inputs parameters, then everything works correctly, in accordance with multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}' - a multiline message is created.
What is specified incorrectly in my config?
filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - C:\logs\GT\TTL\*\*.log
  fields_under_root: true
  fields:
    instance: xml
    system: ttl
    subsystem: GT
    account: abc
  multiline.type: pattern
  multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
To get it working you should have something like this:
filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - C:\logs\GT\TTL\*\*.log
  fields_under_root: true
  fields:
    instance: xml
    system: ttl
    subsystem: GT
    account: abc
  parsers:
    - multiline:
        type: pattern
        pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
        negate: true
        match: after
There are two reasons why this works:
1. The general documentation regarding multiline handling, at the time of this writing, has not been updated to reflect the changes made for the filestream input type. You can find information on setting up multiline for filestream under parsers on this page: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-filestream.html
2. The documentation I just mentioned is also wrong (at least at the time of this writing). The example shows configuring the multiline parser without indenting its children, which will not work, as the parser will not initialize any of the values underneath it. This issue is also being discussed here: https://discuss.elastic.co/t/filebeat-filestream-input-parsers-multiline-fails/290543/13 and I expect it will be fixed sometime in the future.
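To illustrate the second point, the non-working shape from the docs looked roughly like this at the time: the options end up as siblings of the multiline key rather than its children, so the parser never receives them:
# Broken: the options sit at the same level as the multiline key
parsers:
  - multiline:
    type: pattern
    pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
    negate: true
    match: after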

Custom field name not showing in Filebeat

Below is how I'm trying to add a custom field name in my Filebeat 7.2.0 configuration:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - D:\Oasis\Logs\Admin_Log\*
    - D:\Oasis\Logs\ERA_Log\*
    - D:\OasisServices\Logs\*

processors:
  - add_fields:
      fields:
        application: oasis
and with this, I'm expecting a new field called application whose entries will be 'oasis'.
But I don't get any.
I also tried
fields:
  application: oasis
as well as quoting the value as 'oasis'.
Help me with this.
If you want to add a customized field for every log, you should put the fields configuration at the same level as type. Try the following:
- type: log
  enabled: true
  paths:
    - D:\Oasis\Logs\Admin_Log\*
    - D:\Oasis\Logs\ERA_Log\*
    - D:\OasisServices\Logs\*
  fields.application: oasis
There are two ways to add custom fields in Filebeat: using the fields option and using the add_fields processor.
To add fields using the fields option, your configuration needs to be something like the one below.
filebeat.inputs:
- type: log
  paths:
    - 'D:/path/to/your/files/*'
  fields:
    custom_field: 'custom field value'
  fields_under_root: true
To add fields using the add_fields processor, you can try the following configuration.
filebeat.inputs:
- type: log
  paths:
    - 'D:/path/to/your/files/*'

processors:
  - add_fields:
      target: ''
      fields:
        custom_field: 'custom field value'
Both configurations will create a field named custom_field with the value custom field value in the root of your document.
The fields option can be used per input, while the add_fields processor is applied to all the data exported by the Filebeat instance.
Just remember to pay attention to the indentation of your configuration; if it is wrong, Filebeat won't work correctly or may not even start.
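To make the scope difference concrete, here is a rough sketch (the field names and path are placeholders): fields applies only to the input it is defined on, while a top-level processors block applies to every event the instance ships:
filebeat.inputs:
- type: log
  paths:
    - 'D:/path/to/your/files/*'
  fields:
    input_scope_field: 'only on events from this input'
  fields_under_root: true

processors:                 # top level: applied to all events from all inputs
  - add_fields:
      target: ''
      fields:
        global_field: 'on every event exported by this Filebeat instance'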

Axon4 - kafka ext: Query event not invoked

Command-side events are getting processed, but the query side (projector) is not invoked.
I am using the Axon Kafka extension 4.0-RC2.
Please check the code reference below.
AxonConfig
import org.springframework.context.annotation.Configuration;
@Configuration
public class AxonConfig {
}
application.yml
server:
  port: 9001
spring:
  application:
    name: Query Application
  datasource:
    url: jdbc:postgresql://localhost:5441/orderdemo
    username: orderdemo
    password: secret
    driver-class-name: org.postgresql.Driver
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQL95Dialect
        jdbc:
          lob:
            non_contextual_creation: true
        hbm2ddl.auto: update
        implicit_naming_strategy: org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
        physical_naming_strategy: org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
axon:
  eventhandling:
    processors:
      query:
        mode: tracking
        source: kafkaMessageSource
  kafka:
    default-topic: axon-events
    consumer:
      group-id: query-group
      bootstrap-servers: localhost:9092
For this configuration to work, the classes that contain the @EventHandler annotated functions you want to be called for handling the events from Kafka need to be part of the processing group query.
This requirement follows from the configuration pattern you've chosen, where axon.eventhandling.processors.query defines the Processing Group you want to configure. To specify the Processing Group, I think the easiest approach is to add the @ProcessingGroup annotation to your Event Handling class. In the annotation, you have to provide the name of the Processing Group, which needs to correspond with what you've set in the configuration file.
Lastly, I would suggest using a different name than query for your Processing Group. Something more specific to the query model that the Event Handler updates would seem more in place to me.
Hope this helps!

Spinnaker "Create Application" menu doesn't load

I'm quite new to Spinnaker and have to ask for some help, I guess. Does anyone know why it could be that I can't create any Application and just keep seeing this screen?
My installation is through Halyard 1.5.0 on Ubuntu 14.04.
We don't use any cloud provider, but I did configure the Docker and Kubernetes parts.
And here is the error I see in /var/log/spinnaker/echo/echo.log:
2017-11-16 13:52:29.901 INFO 13877 --- [ofit-/pipelines] c.n.s.echo.services.Front50Service : java.net.SocketTimeoutException: timeout
at okio.Okio$3.newTimeoutException(Okio.java:207)
at okio.AsyncTimeout.exit(AsyncTimeout.java:261)
at okio.AsyncTimeout$2.read(AsyncTimeout.java:215)
at okio.RealBufferedSource.indexOf(RealBufferedSource.java:306)
at okio.RealBufferedSource.indexOf(RealBufferedSource.java:300)
at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:196)
at com.squareup.okhttp.internal.http.Http1xStream.readResponse(Http1xStream.java:186)
at com.squareup.okhttp.internal.http.Http1xStream.readResponseHeaders(Http1xStream.java:127)
at com.squareup.okhttp.internal.http.HttpEngine.readNetworkResponse(HttpEngine.java:739)
at com.squareup.okhttp.internal.http.HttpEngine.access$200(HttpEngine.java:87)
at com.squareup.okhttp.internal.http.HttpEngine$NetworkInterceptorChain.proceed(HttpEngine.java:724)
at com.squareup.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:578)
at com.squareup.okhttp.Call.getResponse(Call.java:287)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
at com.squareup.okhttp.Call.execute(Call.java:80)
at retrofit.client.OkClient.execute(OkClient.java:53)
at retrofit.RestAdapter$RestHandler.invokeRequest(RestAdapter.java:326)
at retrofit.RestAdapter$RestHandler.access$100(RestAdapter.java:220)
at retrofit.RestAdapter$RestHandler$1.invoke(RestAdapter.java:265)
at retrofit.RxSupport$2.run(RxSupport.java:55)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at retrofit.Platform$Base$2$1.run(Platform.java:94)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Socket closed
at java.net.SocketInputStream.read(SocketInputStream.java:204)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at okio.Okio$2.read(Okio.java:139)
at okio.AsyncTimeout$2.read(AsyncTimeout.java:211)
... 24 more
2017-11-16 13:52:29.901 INFO 13877 --- [ofit-/pipelines] c.n.s.echo.services.Front50Service : ---- END ERROR
@grizzthedj, thanks again for the recommendations. However, it doesn't seem to have solved the issue. I wonder if it has something to do with my Docker registry or Kubernetes.
Here is what I have in my .hal/config:
dockerRegistry:
  enabled: true
  accounts:
    - name: <hidden-name>
      requiredGroupMembership: []
      address: https://docker-registry.<hidden-name>.net/
      cacheIntervalSeconds: 30
      repositories:
        - hellopod
        - demoapp
  primaryAccount: <hidden-name>
kubernetes:
  enabled: true
  accounts:
    - name: <username>
      requiredGroupMembership: []
      dockerRegistries:
        - accountName: <hidden-name>
          namespaces: []
      context: sre-os1-dev
      namespaces:
        - spinnaker
      omitNamespaces: []
      kubeconfigFile: /home/<username>/.kube/config
I suspect you may be using Redis as the persistent storage type (I ran into the same issue).
If this is the case, persistent storage using Redis doesn't seem to work properly out of the box, and it is not supported. I would try using an S3 target, if available.
More info here on support for Redis.
To configure S3 using Halyard, use the following commands:
echo <SECRET_ACCESS_KEY> | hal config storage s3 edit --access-key-id <ACCESS_KEY_ID> --endpoint <S3_ENDPOINT> --bucket <BUCKET_NAME> --root-folder spinnaker --secret-access-key
hal config storage edit --type s3
hal deploy apply
@grizzthedj,
Here is what I've found inside front50.log (I wiped out the IDs, of course, for security reasons).
You may be right.
2017-11-20 12:40:29.151 INFO 682 --- [0.0-8080-exec-1] com.amazonaws.latency : ServiceName=[Amazon S3], AWSErrorCode=[NoSuchKey], StatusCode=[404], ServiceEndpoint=[https://s3-us-west-2.amazonaws.com], Exception=[com.amazonaws.services.s3.model.AmazonS3Exception: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: ...; S3 Extended Request ID: ...), S3 Extended Request ID: ...], RequestType=[GetObjectRequest], AWSRequestID=[...], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, Exception=1, HttpClientPoolLeasedCount=0, ClientExecuteTime=[39.634], HttpClientSendRequestTime=[0.072], HttpRequestTime=[39.213], RequestSigningTime=[0.067], CredentialsRequestTime=[0.001, 0.0], HttpClientReceiveResponseTime=[39.059],
I had a similar issue on Kubernetes/AWS. When I opened up the Chrome dev console, I was getting lots of 404 errors trying to connect to localhost:8084, so I had to reconfigure the Deck and Gate base URLs. This is what I did using Halyard:
hal config security ui edit --override-base-url http://<deck-loadbalancer-dns-entry>:9000
hal config security api edit --override-base-url http://<gate-loadbalancer-dns-entry>:8084
I did hal deploy apply, and when it came back I noticed the developer console was throwing CORS errors, so I had to do the following:
echo "host: 0.0.0.0" | tee \
  ~/.hal/default/service-settings/gate.yml \
  ~/.hal/default/service-settings/deck.yml
You may note the lack of TLS and CORS config; this is a test system, so make better choices in production :)
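For clarity, that tee command just writes a single service setting into both files, so each of them ends up containing:
# ~/.hal/default/service-settings/gate.yml and deck.yml
host: 0.0.0.0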

Logstash access with ReadonlyREST plugin

We have a problem with the ReadonlyREST plugin for Elasticsearch: we can't get Logstash running when the plugin is enabled. We use Logstash with Filebeat. Can this be the problem? The Logstash config is below.
The error message:
[401] Forbidden {:class=>"Elasticsearch::Transport::Transport::Errors::Unauthorized", :level=>:error}
In Elasticsearch we have defined the roles as you see below.
readonlyrest:
  enable: true
  response_if_req_forbidden: <h1>Forbidden</h1>
  access_control_rules:
    - name: Developer (reads only logstash indices, but can create new charts/dashboards)
      auth_key: dev:dev
      type: allow
      kibana_access: ro+
      indices: ["<no-index>", ".kibana*", "logstash*", "default"]
    - name: Kibana Server (we trust this server side component, full access granted via HTTP authentication)
      auth_key: admin:passwd1
      type: allow
    - name: "Logstash can write and create its own indices"
      auth_key: logstash:logstash
      type: allow
      actions: ["cluster:*", "indices:data/read/*", "indices:data/write/*", "indices:admin/*"]
      indices: ["logstash*", "filebeat-*", "<no_index>"]
the logstash config:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => true
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    document_id => "%{fingerprint}"
    user => ["logstash"]
    password => ["logstash"]
  }
}
I believe you are not giving Logstash the ability to create indices with your setup. It can write and read, but I am not seeing create.
Based on the example from the website, can you change the Logstash rule in your ReadonlyREST config to:
- name: "Logstash can write and create its own indices"
  auth_key: logstash:logstash
  type: allow
  actions: ["indices:data/read/*", "indices:data/write/*", "indices:admin/template/*", "indices:admin/create"]
  indices: ["logstash-*", "<no_index>"]
This setup works for me.
I don't think it has anything to do with Filebeat, since the output doesn't actually talk to Filebeat anymore? But then again, I am using file inputs instead.
Hope that solves the issue.
Artur