I want to use kamon-akka-http. Currently my project has the Kamon bundle dependency, which includes kamon-akka-http.
The issue is that I receive many other metrics from all the other instrumentation in the bundle (Akka Instrumentation, Akka Remote Instrumentation, Executor Service Instrumentation, Logback Instrumentation and Scala Future Instrumentation).
I want to receive only the Akka HTTP metrics.
I tried removing the bundle dependency and adding only kamon-akka-http; this required running the instrumentation agent (Kanela).
I did that, but I still saw the other instrumentation.
How can I run only the Akka HTTP instrumentation?
I managed to disable all the other instrumentation with the configuration below:
kanela.modules {
  akka {
    within = ""
  }
  akka-remote {
    within = ""
  }
  executor-service {
    within = ""
  }
  executor-service-capture-on-submit {
    within = ""
  }
  scala-future {
    within = ""
  }
  logback {
    within = ""
  }
}
kamon.modules {
  host-metrics {
    enabled = no
  }
  process-metrics {
    enabled = no
  }
  jvm-metrics {
    enabled = no
  }
}
I would like to deploy an application whose pods should not go to Running status (it should be non-operational); a user would bring it up only when really required, via Infrastructure as Code (Terraform). I am aware of using kubectl scale --replicas=0. Any other leads or info would be appreciated.
You can keep the replica count at zero for the Deployment in your YAML file, if you are using one.
Or, if you are using Terraform:
resource "kubernetes_deployment" "example" {
metadata {
name = "terraform-example"
labels = {
test = "MyExampleApp"
}
}
spec {
replicas = 0
selector {
match_labels = {
test = "MyExampleApp"
}
}
template {
metadata {
labels = {
test = "MyExampleApp"
}
}
spec {
container {
image = "nginx:1.7.8"
name = "example"
resources {
limits = {
cpu = "0.5"
memory = "512Mi"
}
requests = {
cpu = "250m"
memory = "50Mi"
}
}
liveness_probe {
http_get {
path = "/nginx_status"
port = 80
http_header {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
}
}
}
}
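If you want to drive this from code instead of editing the file by hand, a minimal sketch (assuming Terraform 0.12+ syntax; the variable name is hypothetical) is to put the replica count behind a variable that defaults to zero:
variable "replicas" {
  # 0 keeps the Deployment deployed but non-operational
  type    = number
  default = 0
}
resource "kubernetes_deployment" "example" {
  # ... metadata as above ...
  spec {
    replicas = var.replicas
    # ... selector and template as above ...
  }
}
The user can then bring the application up with terraform apply -var="replicas=3" and take it back down by reverting to the default.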
Otherwise, there is no way around it: if you don't want to use Terraform, you can use a Kubernetes client to do the same.
If you want to edit a local file using Terraform, check out local-exec:
This invokes a process on the machine running Terraform, not on the resource.
resource "aws_instance" "web" {
# ...
provisioner "local-exec" {
command = "echo ${self.private_ip} >> private_ips.txt"
}
}
Using a sed command (or any other command) in local-exec, you can update the YAML and apply it, as sketched below.
https://www.terraform.io/docs/language/resources/provisioners/local-exec.html
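For example, a rough sketch with a null_resource (the manifest name and sed expression are hypothetical, and kubectl must be available on the machine running Terraform):
resource "null_resource" "patch_and_apply" {
  provisioner "local-exec" {
    # Patch the replica count in a local manifest, then apply it
    command = "sed -i 's/replicas: .*/replicas: 0/' deployment.yaml && kubectl apply -f deployment.yaml"
  }
}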
How can I use the full ksqlDB client API with Gradle? Why are there two different packages?
repositories {
    mavenCentral()
    jcenter()
    maven {
        url "https://packages.confluent.io/maven/"
    }
}
dependencies {
    implementation "org.jetbrains.kotlin:kotlin-stdlib"
    compile group: 'io.confluent.ksql', name: 'ksqldb-api-client', version: '6.0.0'
}
I would like to reference v0.11.0, which contains more methods:
https://docs.ksqldb.io/en/latest/developer-guide/ksqldb-clients/java-client/api/io/confluent/ksql/api/client/Client.html
https://docs.ksqldb.io/en/0.10.0-ksqldb/developer-guide/ksqldb-clients/java-client/api/io/confluent/ksql/api/client/Client.html
import io.confluent.ksql.api.client.*

fun main() {
    val KSQLDB_SERVER_HOST = "localhost"
    val KSQLDB_SERVER_HOST_PORT = 8089

    val clientOptions = ClientOptions.create()
        .setHost(KSQLDB_SERVER_HOST)
        .setPort(KSQLDB_SERVER_HOST_PORT)
    val client: Client = Client.create(clientOptions)
    val topics = client.listTopics() // not available in 6.0.0
}
Edit:
Based on Hellmar Becker's answer, I would like to use the standalone (community) version, not the commercial Confluent Platform version. It looks like the CP version uses an older API version anyway.
I found an example of how to do this with a pom.xml in the developer guide on GitHub, but I would like to use a build.gradle file.
There are different numbering schemes for the community-licensed ksqlDB (currently v0.12) and the commercial Confluent Platform (currently v6.0.1). Maybe this comparison helps: https://docs.confluent.io/platform/current/ksqldb/index.html#ksqldb-standalone-and-ksqldb-for-cp.
I managed to convert the pom.xml from the developer guide to a build.gradle file as follows:
plugins {
    id 'java'
    id 'org.jetbrains.kotlin.jvm' version '1.4.21'
}

version '1.0-SNAPSHOT'

repositories {
    jcenter()
    maven {
        url "https://ksqldb-maven.s3.amazonaws.com/maven/"
    }
}

dependencies {
    implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk8"
    implementation "org.jetbrains.kotlin:kotlin-stdlib"
    implementation "io.confluent.ksql:ksqldb-api-client:0.11.0"
}

compileKotlin {
    kotlinOptions.jvmTarget = "1.8"
}

compileTestKotlin {
    kotlinOptions.jvmTarget = "1.8"
}
I am running a 3-node ZooKeeper cluster and a 3-node Kafka cluster on Kubernetes.
Kafka seems to be running.
However, if I produce messages to a topic and then check the topic, there are no messages at all.
Here's what my broker says; it reports an invalid receive. The funny thing is that creating topics works fine, but producing doesn't.
I can also see the topics and schemas I created earlier in Topics-ui, a GUI tool for the broker.
The Schema Registry, Connect, and REST Proxy logs are fine, so the broker seems to be running well.
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:104)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at kafka.network.Processor.poll(SocketServer.scala:863)
    at kafka.network.Processor.run(SocketServer.scala:762)
    at java.lang.Thread.run(Thread.java:748)
And here's my broker configuration in Terraform.
StatefulSet:
port {
  container_port = 9092
}
env {
  name  = "KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR"
  value = "3"
}
env {
  name  = "KAFKA_DEFAULT_REPLICATION_FACTOR"
  value = "3"
}
env {
  name  = "KAFKA_LISTENER_SECURITY_PROTOCOL_MAP"
  value = "PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT"
}
env {
  name  = "KAFKA_ZOOKEEPER_CONNECT"
  value = "lucent-zookeeper-0.zookeeper-service.default:2181,lucent-zookeeper-1.zookeeper-service.default:2181,lucent-zookeeper-2.zookeeper-service.default:2181"
}
env {
  name = "POD_IP"
  value_from {
    field_ref {
      field_path = "status.podIP"
    }
  }
}
env {
  name = "HOST_IP"
  value_from {
    field_ref {
      field_path = "status.hostIP"
    }
  }
}
env {
  name = "POD_NAME"
  value_from {
    field_ref {
      field_path = "metadata.name"
    }
  }
}
env {
  name = "POD_NAMESPACE"
  value_from {
    field_ref {
      field_path = "metadata.namespace"
    }
  }
}
command = [
  "sh",
  "-exec",
  "export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://$${POD_NAME}.kafka-service.$${POD_NAMESPACE}:9092 && export KAFKA_BROKER_ID=$${HOSTNAME##*-} && exec /etc/confluent/docker/run"
]
Service:
resource "kubernetes_service" "kafka-service" {
metadata {
name = "kafka-service"
labels = {
app = "broker"
}
}
spec {
selector = {
app = "broker"
}
port {
port = 9092
}
cluster_ip = "None"
}
The command I use to try producing:
kafka-console-producer --broker-list kafka-service:9092 --topic test
My initial guess would be that you are receiving a request larger than the maximum allowed size, which defaults to socket.request.max.bytes = 104857600 (100 MB). If you really have messages bigger than 100 MB, try increasing this value in server.properties. Incidentally, the reported size 1195725856 is the ASCII encoding of "GET ", which usually means something is speaking plain HTTP (a browser, UI, or health check) to the broker's port rather than the Kafka protocol.
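If the limit itself is the problem, and assuming you are using the Confluent cp-kafka image (which maps KAFKA_*-prefixed environment variables onto server.properties entries), a sketch of raising it in your StatefulSet would be one more env block; the value here is an arbitrary example:
env {
  # ~200 MB; the default is 104857600 (100 MB)
  name  = "KAFKA_SOCKET_REQUEST_MAX_BYTES"
  value = "200000000"
}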
It can also happen if you try to connect to a non-SSL listener as if it were SSL.
Verify whether 9092 is the SSL listener port on the broker. Source: https://github.com/edenhill/librdkafka/issues/1680#issuecomment-364883669
I am trying to deploy a Windows VM on Google Cloud through Terraform. The VM gets deployed, and I am able to execute PowerShell scripts by using windows-startup-script-url.
With this approach, I can only use scripts that are already stored in Google Storage. If the script has parameters/variables, how do I pass them? Any clue?
provider "google" {
project = "my-project"
region = "my-location"
zone = "my-zone"
}
resource "google_compute_instance" "default" {
name = "my-name"
machine_type = "n1-standard-2"
zone = "my-zone"
boot_disk {
initialize_params {
image = "windows-cloud/windows-2019"
}
}
metadata {
windows-startup-script-url = "gs://<my-storage>/<my-script.ps1>"
}
network_interface {
network = "default"
access_config {
}
}
tags = ["http-server", "windows-server"]
}
resource "google_compute_firewall" "http-server" {
name = "default-allow-http"
network = "default"
allow {
protocol = "tcp"
ports = ["80"]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["http-server"]
}
resource "google_compute_firewall" "windows-server" {
name = "windows-server"
network = "default"
allow {
protocol = "tcp"
ports = ["3389"]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["windows-server"]
}
output "ip" {
value = "${google_compute_instance.default.network_interface.0.access_config.0.nat_ip}"
}
Terraform doesn't necessarily require startup scripts to be pulled from GCS buckets.
The example here shows:
metadata = {
  foo = "bar"
}

metadata_startup_script = "echo hi > /test.txt"

service_account {
  scopes = ["userinfo-email", "compute-ro", "storage-ro"]
}
More in the official docs for GCE and PowerShell scripting here.
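If you need to pass parameters, one option (a sketch, assuming Terraform 0.12+ and a hypothetical local template startup.ps1.tpl) is to render the PowerShell script with templatefile() and pass it inline via the windows-startup-script-ps1 metadata key instead of windows-startup-script-url:
metadata = {
  windows-startup-script-ps1 = templatefile("${path.module}/startup.ps1.tpl", {
    app_port   = 80          # hypothetical parameter
    admin_user = "myadmin"   # hypothetical parameter
  })
}
Inside startup.ps1.tpl the parameters are referenced with Terraform's template syntax, e.g. ${app_port}.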
I'm after an example that would do the following:
Create a Kubernetes cluster on GKE via Terraform's google_container_cluster
... and continue creating namespaces in it, I suppose via kubernetes_namespace
The thing I'm not sure about is how to connect the newly created cluster and the namespace definition. For example, when adding google_container_node_pool, I can do something like cluster = "${google_container_cluster.hosting.name}", but I don't see anything similar for kubernetes_namespace.
In theory, it is possible to reference resources from the GCP provider in the K8S (or any other) provider in the same way you'd reference resources or data sources within the context of a single provider:
provider "google" {
region = "us-west1"
}
data "google_compute_zones" "available" {}
resource "google_container_cluster" "primary" {
name = "the-only-marcellus-wallace"
zone = "${data.google_compute_zones.available.names[0]}"
initial_node_count = 3
additional_zones = [
"${data.google_compute_zones.available.names[1]}"
]
master_auth {
username = "mr.yoda"
password = "adoy.rm"
}
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring"
]
}
}
provider "kubernetes" {
host = "https://${google_container_cluster.primary.endpoint}"
username = "${google_container_cluster.primary.master_auth.0.username}"
password = "${google_container_cluster.primary.master_auth.0.password}"
client_certificate = "${base64decode(google_container_cluster.primary.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.primary.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}
resource "kubernetes_namespace" "n" {
metadata {
name = "blablah"
}
}
However, in practice it may not work as expected due to a known core bug breaking cross-provider dependencies; see https://github.com/hashicorp/terraform/issues/12393 and https://github.com/hashicorp/terraform/issues/4149 respectively.
The alternative solutions would be:
Use a 2-staged apply and target the GKE cluster first, then anything else that depends on it, i.e. terraform apply -target=google_container_cluster.primary and then terraform apply.
Separate the GKE cluster config from the K8S configs, give them completely isolated workflows, and connect them via remote state.
/terraform-gke/main.tf
terraform {
  backend "gcs" {
    bucket = "tf-state-prod"
    prefix = "terraform/state"
  }
}

provider "google" {
  region = "us-west1"
}

data "google_compute_zones" "available" {}

resource "google_container_cluster" "primary" {
  name               = "the-only-marcellus-wallace"
  zone               = "${data.google_compute_zones.available.names[0]}"
  initial_node_count = 3

  additional_zones = [
    "${data.google_compute_zones.available.names[1]}",
  ]

  master_auth {
    username = "mr.yoda"
    password = "adoy.rm"
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }
}

output "gke_host" {
  value = "https://${google_container_cluster.primary.endpoint}"
}

output "gke_username" {
  value = "${google_container_cluster.primary.master_auth.0.username}"
}

output "gke_password" {
  value = "${google_container_cluster.primary.master_auth.0.password}"
}

output "gke_client_certificate" {
  value = "${base64decode(google_container_cluster.primary.master_auth.0.client_certificate)}"
}

output "gke_client_key" {
  value = "${base64decode(google_container_cluster.primary.master_auth.0.client_key)}"
}

output "gke_cluster_ca_certificate" {
  value = "${base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}
Here we're exposing all the necessary configuration via outputs and using a backend to store the state, along with these outputs, in a remote location (GCS, in this case). This enables us to reference it in the config below.
/terraform-k8s/main.tf
data "terraform_remote_state" "foo" {
backend = "gcs"
config {
bucket = "tf-state-prod"
prefix = "terraform/state"
}
}
provider "kubernetes" {
host = "https://${data.terraform_remote_state.foo.gke_host}"
username = "${data.terraform_remote_state.foo.gke_username}"
password = "${data.terraform_remote_state.foo.gke_password}"
client_certificate = "${base64decode(data.terraform_remote_state.foo.gke_client_certificate)}"
client_key = "${base64decode(data.terraform_remote_state.foo.gke_client_key)}"
cluster_ca_certificate = "${base64decode(data.terraform_remote_state.foo.gke_cluster_ca_certificate)}"
}
resource "kubernetes_namespace" "n" {
metadata {
name = "blablah"
}
}
What may or may not be obvious here is that the cluster has to be created/updated before creating/updating any K8S resources (if such an update relies on updates of the cluster).
Taking the 2nd approach is generally advisable either way (even when/if the bug were not a factor and cross-provider references worked), as it reduces the blast radius and defines much clearer responsibility. It's (IMO) common for such deployments to have one person/team responsible for managing the cluster and a different one for managing K8S resources.
There may certainly be overlaps though - e.g. ops wanting to deploy logging & monitoring infrastructure on top of a fresh GKE cluster - so cross-provider dependencies aim to satisfy such use cases. For that reason I'd recommend subscribing to the GH issues mentioned above.