jmx_exporter Kafka config example question - apache-kafka

I'm looking at the example config for Kafka in the official jmx_exporter repo:
https://github.com/prometheus/jmx_exporter/blob/master/example_configs/kafka-2_0_0.yml
as well as the one from Confluent's cp-helm-charts:
https://github.com/confluentinc/cp-helm-charts/blob/master/charts/cp-kafka/templates/jmx-configmap.yaml
We can see things like
- pattern: kafka.server<type=ReplicaManager, name=(.+)><>(Value|OneMinuteRate)
  name: "cp_kafka_server_replicamanager_$1"
- pattern: kafka.controller<type=KafkaController, name=(.+)><>Value
  name: "cp_kafka_controller_kafkacontroller_$1"
- pattern: kafka.server<type=BrokerTopicMetrics, name=(.+)><>OneMinuteRate
  name: "cp_kafka_server_brokertopicmetrics_$1"
My question concerns Value: is it the name of an attribute that can be found on the MBeans identified by patterns such as kafka.server<type=ReplicaManager, name=(.+)>?
I would imagine that OneMinuteRate is one such attribute, although I could not find it in the list of metrics provided by Confluent: https://docs.confluent.io/current/kafka/monitoring.html. My guess is that this metric comes from an old version of Kafka.
Hence, could someone let me know what Value is?
Also, is there a place where I could find the official, complete list of Kafka MBeans?

These metrics exist and are valid.
Value is one of the attributes of the kafka.server<type=ReplicaManager, name=(.+)> MBean when name is AtMinIsrPartitionCount for example.
OneMinuteRate is also a possible attribute on some of the names, for example when name is FailedIsrUpdatesPerSec.
The best way to find all these names is to use jconsole. After starting it, attach to the Kafka process and you can explore all the MBeans and their attributes.
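For illustration, a minimal jmx_exporter rule in the same spirit as the examples above might look like this (the exported metric name is just one possible choice); note that the trailing Value or OneMinuteRate in each pattern is precisely the MBean attribute being scraped:

rules:
  # Illustrative: expose the Value attribute of every ReplicaManager MBean as a gauge
  - pattern: kafka.server<type=ReplicaManager, name=(.+)><>Value
    name: kafka_server_replicamanager_$1
    type: GAUGE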

Related

How to introduce versioning for endpoints for akka http

I have 5 controllers in akka-http. Each controller has 5 endpoints (routes). Now I need to introduce versioning for those. All endpoints should be prefixed with /version1.
For example if there was an endpoint xyz now it should be /version1/xyz.
One of the ways is to add a pathPrefix, but it would need to be added to each controller.
Is there a way to add it in a common place so that it applies to all endpoints?
I am using akka-http with scala.
You can create a base route that accepts paths like /version1/... and delegates to internal routes that have no path prefix:
val version1Route = path("xyz") {
  ...
}

val version2Route = path("xyz") {
  ...
}

val route = pathPrefix("version1") {
  version1Route
} ~ pathPrefix("version2") {
  version2Route
}
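A minimal sketch of wiring this together and binding it (assuming the classic Http().bindAndHandle API of the Akka HTTP 10.0/10.1 era; host, port and response bodies are placeholders):

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer

object VersionedApi extends App {
  implicit val system: ActorSystem = ActorSystem("api")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // The inner routes know nothing about versions
  val version1Route = path("xyz") { complete("v1") }
  val version2Route = path("xyz") { complete("v2") }

  // The version prefix is applied in one place only
  val route =
    pathPrefix("version1") { version1Route } ~
      pathPrefix("version2") { version2Route }

  Http().bindAndHandle(route, "localhost", 8080)
}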
Indirect Answer
Aleksey Isachenkov's answer is the correct direct solution.
One alternative is to put versioning in the hostname instead of the path. Once you have the "version1" Route values in source control, you can tag that commit as "version1", deploy it into production, and then use DNS entries to set the service name to version1.myservice.com.
Then, once newer functionality becomes necessary, you update your code and tag it in source control as "version2". Release this updated build and use DNS to set the name to version2.myservice.com, while still keeping the version1 instance running. This results in two active services running independently.
The benefits of this method are:
Your code does not continuously grow longer as new versions are released.
You can use logging to figure out if a version hasn't been used in a long time and then just kill that running instance of the service to End-Of-Life the version.
You can use DNS to define your current "production" version by having production.myservice.com point to whichever version of the service you want. For example: once you've released version24.myservice.com and tested it for a while, you can update the production.myservice.com pointer from version 23 to version 24. The old version can stay running for any users who don't want to upgrade, but anybody who wants the latest version can always use "production".

Can't Change Metric Alias in Grafana Using a Zabbix Plugin

I want to show CPU usage from multiple hosts in one graph, but the series all end up with the same name and I can't tell which line represents which host:
here's the snapshot.
I'm using Grafana 5.2.4 with a Zabbix plugin 3.9.1. My Zabbix version is 3.0.12.
I've tried overriding legends in Grafana but there's no such option. Also, the Zabbix plugin doesn't allow connecting directly to the DB, so I can't use the ALIAS BY option either. I've tried using macros in Zabbix to include the host name in the item name, but {HOST.NAME} just ends up as-is in the item name (and is not replaced by the actual value).
Any solutions will be hugely appreciated.
You should use the templating feature of the Zabbix Grafana plugin; see the attached screens and the following description for a working example.
I have a Routers Zabbix host group, so I define a Router Grafana variable to match the hosts (Routers.*); see the first screenshot.
Enable both the multi-value and Select All options.
Then in the metrics configuration use a single metric configured this way:
Group: Routers
Host: $Router (mind the $, the variable will be expanded in real time accordingly to the selection)
Item: the common item name (e.g. ICMP Response Time)
And you will get something similar to the second screenshot, with a host picker on top and multiple selections.
There's a "Functions" button below each metric when you are configuring and editing your graph. It includes an "Alias" option, and when you hover over it you see more options. If you click "setAlias", you can define an alias for each metric.
Since this solution requires setting each alias individually, I recommend the solution suggested by Simone Zabberoni above, but this one is also worth knowing since it might come in handy at times.

How to replace withFilenamePolicy in Apache Beam 2.4?

I am trying to read from a Kafka source, partition by a timestamp and write to GCS with Apache Beam 2.4. I want to apply a custom FilenamePolicy for the output files.
According to what I have found on Stack Overflow and by googling, this was possible in the past by using:
.apply(TextIO.write()
    .to("gs://somebucket/")
    .withFilenamePolicy(new PerWindowFiles(prefix))
    .withWindowedWrites()
    .withNumShards(1));
The withFilenamePolicy option is no longer available. How is it done in Beam 2.4?
I've tried using the writeDynamic() functionality from FileIO, following the example in the documentation, but I don't understand why my TextIO is not accepted as an input:
withFilenamePolicy() was removed in 2.2.
You can now write your example using the simpler syntax:
pipeline.apply(Create.of(...))
    .apply(TextIO.write()
        .to(new PerWindowFiles("gs://somebucket/"))
        .withTempDirectory(
            FileBasedSink.convertToFileResourceIfPossible("gs://somebucket/tmp"))
        .withWindowedWrites()
        .withNumShards(1));
N.B. With a custom FilenamePolicy you will also need to explicitly specify withTempDirectory.
In your second (screenshot) example, you are using the default TextIO.sink(), which is a FileIO.Sink<String>, to sink Events. You need either an instance of Sink<Event> (which would also implement any custom file naming) or to wrap your Event::getPayload with Contextful, along these lines (assuming getEventType returns the String destination and getPayload returns the String contents):
.apply(FileIO.<String, Event>writeDynamic()
    .by(Event::getEventType)
    .via(Contextful.fn(Event::getPayload), TextIO.sink())
    .withDestinationCoder(StringUtf8Coder.of())
    .to("gs://somebucket/")
    .withNaming(type -> FileIO.Write.defaultNaming(type, ".txt")));

OrientDB Could not access the security JSON file

Following my upgrade from OrientDB 2.1.16 to 2.2.0 I have started to get the following messages during the initialisation:
2016-05-19 09:28:38:690 SEVER ODefaultServerSecurity.loadConfig() Could not access the security JSON file: /config/security.json [ODefaultServerSecurity]
2016-05-19 09:28:39:142 SEVER ODefaultServerSecurity.onAfterActivate() Configuration document is empty [ODefaultServerSecurity]
The database launched, but I don't like the warnings. I've looked through the docs but I can't find anything specifically pertaining to this. There are some links on Google that lead to dead GitHub pages.
First of all, I need to get hold of either a copy of the security.json it is expecting, or the docs explaining the expected structure.
Secondly, I need to know how and where to set it.
There are 3 ways to specify the location and name of the security.json file used by the new OrientDB security module.
1) Set the ORIENTDB_HOME environment variable, and the server will look for the file here:
"${ORIENTDB_HOME}/config/security.json"
2) Set this property in the orientdb-server-config.xml file: "server.security.file" (see the sketch after this list).
3) Pass the location on startup as a JVM system property: -Dserver.security.file.
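For example, a sketch of option 2 as an entry in the <properties> section of orientdb-server-config.xml (the path is purely illustrative):

<properties>
    <!-- Illustrative only: point the security module at your security.json -->
    <entry name="server.security.file" value="${ORIENTDB_HOME}/config/security.json"/>
</properties>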
Here's the documentation on the new features + a link to the configuration format.
https://github.com/orientechnologies/orientdb-docs/blob/master/Security-OrientDB-New-Security-Features.md
-Colin
OrientDB LTD
The Company behind OrientDB

Logstash scala log parsing

I've got a problem with Logstash. I use Logback, Logstash, Kibana and Elasticsearch (Docker as the Logstash input source).
The problem is that I have no idea how to write a correct Logstash config file to extract the interesting information.
A single Scala log line looks like this:
[INFO] [05/06/2016 13:58:31.789] [integration-akka.actor.default-dispatcher-14] [akka://integration/user/InstanceSupervisor/methodRouter/outDispatcher] sending msg: PublishMessage(instance,vouchers,Map(filePath -> /home/mateusz/JETBLUETESTING.csv, importedFiles -> Map(JETBLUETESTING.csv -> Map(status -> DoneStatus, processed -> 1, rows -> 5))),#contentHeader(content-type=application/octet-stream, content-encoding=null, headers=null, delivery-mode=2, priority=0, correlation-id=null, reply-to=null, expiration=null, message-id=null, timestamp=null, type=null, user-id=null, app-id=null, cluster-id=null)
I'd like to get something like the [INFO] tag, the timestamp and of course the whole log line in a single Kibana result.
As of now I don't even know exactly what the log looks like (because it's processed by Logback). Any information you can provide would be greatly appreciated, because I've been stuck on this problem for a few days.
When learning Logstash it's best to find a debugger to help experiment with grok patterns. The standard one appears to be hosted here. The site allows you to paste a snippet from your logs and then experiment with either pre-defined or custom patterns. The pre-defined patterns can be found here.
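As a starting point for the sample line in the question, a grok + date filter along these lines might work (the field names are illustrative and the pattern will almost certainly need tuning against the real logs):

filter {
  grok {
    # [LEVEL] [timestamp] [dispatcher thread] [actor path] free-form message
    match => { "message" => "\[%{LOGLEVEL:level}\] \[%{DATA:log_timestamp}\] \[%{DATA:thread}\] \[%{DATA:actor}\] %{GREEDYDATA:log_message}" }
  }
  date {
    # Use the extracted timestamp as the event time so Kibana sorts correctly
    match => ["log_timestamp", "MM/dd/yyyy HH:mm:ss.SSS"]
  }
}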
I had the same issue recently when trying to find out what Logback was sending to Logstash. I found that Logback was able to convert the logs to JSON. A snippet I found useful is:
filter {
  json {
    source => "message"
  }
}
Which I found in this related SO post
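If you go the JSON route on the Logback side, a typical (hedged) setup uses the third-party logstash-logback-encoder library; the appender configuration looks roughly like this (destination host and port are placeholders):

<!-- logback.xml (illustrative): ship JSON-encoded events to Logstash over TCP -->
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>logstash-host:5000</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
<root level="INFO">
    <appender-ref ref="LOGSTASH"/>
</root>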
Once you can see the logs, it makes it much easier to experiment with patterns.
Hope this is useful.