I am dockerizing sensu infrastructure. Everything goes fine except the execution of checks.
I am using docker-compose according to this structure (docker-compose.yml):
sensu-core:
  build: sensu-core/
  links:
    - redis
    - rabbitmq
sensors-production:
  build: sensors-production/
  links:
    - rabbitmq
uchiwa:
  build: sensu-uchiwa
  links:
    - sensu-core
  ports:
    - "3000:3000"
rabbitmq:
  build: rabbitmq/
redis:
  image: redis
  command: redis-server
My rabbitmq Dockerfile is pretty straightforward:
FROM ubuntu:latest
RUN apt-get update && apt-get -y install wget
RUN wget http://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb
RUN dpkg -i erlang-solutions_1.0_all.deb
RUN wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
RUN apt-key add rabbitmq-signing-key-public.asc
RUN echo "deb http://www.rabbitmq.com/debian/ testing main" | sudo tee /etc/apt/sources.list.d/rabbitmq.list
RUN apt-get update
RUN apt-get -y install erlang rabbitmq-server
CMD /etc/init.d/rabbitmq-server start && \
rabbitmqctl add_vhost /sensu && \
rabbitmqctl add_user sensu secret && \
rabbitmqctl set_permissions -p /sensu sensu ".*" ".*" ".*" && \
cd /var/log/rabbitmq/ && \
ls -1 * | xargs tail -f
So does the uchiwa Dockerfile:
FROM podbox/sensu
RUN apt-get -y install uchiwa
RUN echo ' \
{ \
"sensu": [ \
{ \
"name": "Sensu", \
"host": "sensu-core", \
"port": 4567, \
"timeout": 5 \
} \
], \
"uchiwa": { \
"host": "0.0.0.0", \
"port": 3000, \
"interval": 5 \
} \
}' > /etc/sensu/uchiwa.json
EXPOSE 3000
CMD /etc/init.d/uchiwa start && \
tail -f /var/log/uchiwa.log
Sensu core runs sensu-server and sensu-api. Here is its Dockerfile:
FROM podbox/sensu
RUN apt-get -y install sensu
RUN echo '{ \
"rabbitmq": { \
"host": "rabbitmq", \
"vhost": "/sensu", \
"user": "sensu", \
"password": "secret" \
}, \
"redis": { \
"host": "redis", \
"port": 6379 \
}, \
"api": { \
"host": "localhost", \
"port": 4567 \
} \
}' >> /etc/sensu/config.json
CMD /etc/init.d/sensu-server start && \
/etc/init.d/sensu-api start && \
tail -f /var/log/sensu/sensu-server.log /var/log/sensu/sensu-api.log
sensors-production runs sensu-client along with a dumb check; here is its Dockerfile:
FROM podbox/sensu
RUN apt-get -y install sensu
RUN echo '{ \
"rabbitmq": { \
"host": "rabbitmq", \
"vhost": "/sensu", \
"user": "sensu", \
"password": "secret" \
} \
}' >> /etc/sensu/config.json
RUN mkdir -p /etc/sensu/conf.d
RUN echo '{ \
"client": { \
"name": "wise_oracle", \
"address": "prod_sensors", \
"subscriptions": [ \
"web", "aws" \
] \
} \
}' >> /etc/sensu/conf.d/client.json
RUN echo '{ \
"checks": { \
"dumb": { \
"command": "ls", \
"subscribers": [ \
"web" \
], \
"interval": 10 \
} \
} \
}' >> /etc/sensu/conf.d/dumb.json
CMD /etc/init.d/sensu-client start && \
tail -f /var/log/sensu/sensu-client.log
Running
docker-compose up -d
Everything goes OK: no errors in the logs, and I can access the Uchiwa dashboard, which shows the defined client (keepalive requests seem to be OK). However, no check is available.
I noticed that no check request / check result is present in the logs, as if the Sensu server considers there is no check to run, although I have no idea why that is.
Could someone tell me what's going on more precisely? Thank you.
Check requests/results are delivered via RabbitMQ; you can access http://yourrabbitmqserver:15672 to see the queues and subscribed consumers.
Also make sure your server has some check .json files placed in /etc/sensu/conf.d so it schedules checks based on their interval.
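For example, the dumb check from the question could be defined on the sensu-core side as well so the server schedules it; a sketch reusing the same JSON and the standard /etc/sensu/conf.d path:
RUN mkdir -p /etc/sensu/conf.d
RUN echo '{ \
  "checks": { \
    "dumb": { \
      "command": "ls", \
      "subscribers": [ \
        "web" \
      ], \
      "interval": 10 \
    } \
  } \
}' > /etc/sensu/conf.d/dumb.json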
I want to query a REST API with HTTPie. I usually do so with curl, with which I am able to specify maxKeys and startAfterFilename, e.g.:
curl --location --request GET -G \
"https://some.thing.some.where/data/v1/datasets/mydataset/versions/2/files" \
-d maxKeys=100 \
-d startAfterFilename=YYYMMDD_HHMMSS.file \
--header "Authorization: verylongtoken"
How can I use those -d options in HTTPie?
In your case the command looks like this:
http -F https://some.thing.some.where/data/v1/datasets/mydataset/versions/2/files \
Authorization:verylongtoken \
startAfterFilename=="YYYMMDD_HHMMSS.file" \
maxKeys=="100"
That said, there are a number of ways to pass data with HTTPie. For example:
http POST http://example.com/posts/3 \
Origin:example.com \ # : HTTP headers
name="John Doe" \ # = string
q=="search" \ # == URL parameters (?q=search)
age:=29 \ # := for non-strings
list:='[1,3,4]' \ # := json
file@file.bin \ # @ attach file
token=@token.txt \ # =@ read from file (text)
user:=@user.json # :=@ read from file (json)
Or, in the case of forms
http --form POST example.com \
name="John Smith" \
cv=@document.txt
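If in doubt how a particular item will be sent, recent HTTPie versions can also build and print the request without sending it (assuming --offline is available in your version):
http --offline "https://some.thing.some.where/data/v1/datasets/mydataset/versions/2/files" \
    Authorization:verylongtoken \
    startAfterFilename=="YYYMMDD_HHMMSS.file" \
    maxKeys=="100"
The printed request line shows maxKeys and startAfterFilename in the query string, which is exactly what curl's -G/-d combination produces.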
I recently updated my Debezium images from 1.4 to 1.5.
However, now when I register my connectors it seems like the sink connector just kind of 'dies' without giving any output at all.
Everything runs fine on 1.4 but I need to upgrade to 1.5 due to some new features that are necessary for my project.
Below is my Dockerfile for the Debezium Connect image:
FROM debezium/connect:1.5
ENV KAFKA_CONNECT_JDBC_DIR=$KAFKA_CONNECT_PLUGINS_DIR/kafka-connect-jdbc \
KAFKA_CONNECT_ES_DIR=$KAFKA_CONNECT_PLUGINS_DIR/kafka-connect-elasticsearch
ARG POSTGRES_VERSION=42.2.20
ARG KAFKA_JDBC_VERSION=10.0.0
ARG KAFKA_ELASTICSEARCH_VERSION=10.0.0
# Deploy PostgreSQL JDBC Driver
RUN cd /kafka/libs && curl -sO https://jdbc.postgresql.org/download/postgresql-$POSTGRES_VERSION.jar
# Deploy Kafka Connect JDBC
RUN mkdir $KAFKA_CONNECT_JDBC_DIR && cd $KAFKA_CONNECT_JDBC_DIR &&\
curl -sO https://packages.confluent.io/maven/io/confluent/kafka-connect-jdbc/$KAFKA_JDBC_VERSION/kafka-connect-jdbc-$KAFKA_JDBC_VERSION.jar
# Deploy Confluent Elasticsearch sink connector
RUN mkdir $KAFKA_CONNECT_ES_DIR && cd $KAFKA_CONNECT_ES_DIR &&\
curl -sO https://packages.confluent.io/maven/io/confluent/kafka-connect-elasticsearch/$KAFKA_ELASTICSEARCH_VERSION/kafka-connect-elasticsearch-$KAFKA_ELASTICSEARCH_VERSION.jar && \
curl -sO https://repo1.maven.org/maven2/io/searchbox/jest/6.3.1/jest-6.3.1.jar && \
curl -sO https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcore-nio/4.4.4/httpcore-nio-4.4.4.jar && \
curl -sO https://repo1.maven.org/maven2/org/apache/httpcomponents/httpclient/4.5.1/httpclient-4.5.1.jar && \
curl -sO https://repo1.maven.org/maven2/org/apache/httpcomponents/httpasyncclient/4.1.1/httpasyncclient-4.1.1.jar && \
curl -sO https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcore/4.4.4/httpcore-4.4.4.jar && \
curl -sO https://repo1.maven.org/maven2/commons-logging/commons-logging/1.2/commons-logging-1.2.jar && \
curl -sO https://repo1.maven.org/maven2/commons-codec/commons-codec/1.9/commons-codec-1.9.jar && \
curl -sO https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcore/4.4.4/httpcore-4.4.4.jar && \
curl -sO https://repo1.maven.org/maven2/io/searchbox/jest-common/6.3.1/jest-common-6.3.1.jar && \
curl -sO https://repo1.maven.org/maven2/com/google/code/gson/gson/2.8.6/gson-2.8.6.jar && \
curl -sO https://repo1.maven.org/maven2/com/google/guava/guava/20.0/guava-20.0.jar
My Elasticsearch sink connector:
{
"name": "es-sink-connector",
"config": {
"connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
"tasks.max": "1",
"topics": "report",
"connection.url": "http://elasticsearch:9200",
"transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
"transforms.unwrap.drop.tombstones": "false",
"transforms.unwrap.drop.deletes": "false",
"key.ignore": "false",
"type.name": "_doc",
"behavior.on.null.values": "delete",
"transforms": "ExtractKey",
"transforms.ExtractKey.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
"transforms.ExtractKey.field": "id"
}
}
And lastly, the docker-compose configuration for my debezium-connect service:
connect:
  build: ./debezium-jdbc-es
  ports:
    - "8083:8083"
    - "5005:5005"
  depends_on:
    - kafka
    - elasticsearch
  environment:
    - BOOTSTRAP_SERVERS=kafka:9092
    - GROUP_ID=1
    - CONFIG_STORAGE_TOPIC=my_connect_configs
    - OFFSET_STORAGE_TOPIC=my_connect_offsets
    - STATUS_STORAGE_TOPIC=my_source_connect_statuses
    - host.docker.internal= host.docker.internal
Any idea as to how to fix it or why it might not be working is appreciated!
Thank you!
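For reference, since port 8083 is published, the connector and task state can be queried via the Kafka Connect REST API; failed tasks normally include a stack trace in their status (a diagnostic sketch, using the connector name defined above):
# Overall connector state plus per-task state; a FAILED task includes a "trace" field
curl -s http://localhost:8083/connectors/es-sink-connector/status
# List all connectors currently registered
curl -s http://localhost:8083/connectors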
UPDATE:
So after looking into it a bit more, it seems the issue is my Postgres connector not finding my table, rather than my Elasticsearch sink connector.
Debezium 1.4:
Snapshot step 3 - Locking captured tables [repo.report] [io.debezium.relational.RelationalSnapshotChangeEventSource]
Debezium 1.5:
Snapshot step 3 - Locking captured tables [] [io.debezium.relational.RelationalSnapshotChangeEventSource]
This is my Postgres connector:
{
"name": "reporting-connector",
"config": {
"connector.class": "io.debezium.connector.postgresql.PostgresConnector",
"tasks.max": "1",
"database.hostname": "host.docker.internal",
"database.port": "5432",
"database.user": "debezium_user",
"database.password": "postgres",
"database.server.id": "184054",
"database.dbname": "reportdatabase",
"database.server.name": "reporting",
"plugin.name": "pgoutput",
"database.include.list": "reportdatabase",
"database.history.kafka.bootstrap.servers": "kafka:9092",
"database.history.kafka.topic": "reporting",
"schema.include.list":"repo",
"table.include.list":"repo.report",
"transforms": "route",
"transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
"transforms.route.regex": "([^.]+)\\.([^.]+)\\.([^.]+)",
"transforms.route.replacement": "$3"
}
}
I am trying to install the audit daemon on the Renesas RZ/G1E platform.
Build Configuration:
BB_VERSION = "1.22.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "Ubuntu-14.04"
TARGET_SYS = "arm-poky-linux-gnueabi"
MACHINE = "iwg22m"
DISTRO = "poky"
DISTRO_VERSION = "1.6.1"
TUNE_FEATURES = "armv7a vfp neon callconvention-hard cortexa7"
TARGET_FPU = "vfp-neon"
meta
meta-yocto
meta-yocto-bsp = "tmp:c4f1f0f491f988901bfd6965f7d10f60cb94a76f"
meta-renesas
meta-rzg1 = "tmp:19bf1ed97d04009722bb88a780268822ee60ff83"
meta-oe
meta-multimedia = "tmp:dca466c074c9a35bc0133e7e0d65cca0731e2acf"
meta-linaro-toolchain = "tmp:8a0601723c06fdb75e62aa0f0cf15fc9d7d90167"
When I run the command
$bitbake audit
the audit daemon is installed and I can see the files inside audit's image folder:
ls tmp/work/cortexa7hf-vfp-neon-poky-linux-gnueabi/audit/2.8.4-r0/image/*
tmp/work/cortexa7hf-vfp-neon-poky-linux-gnueabi/audit/2.8.4-r0/image/etc:
audisp audit default init.d libaudit.conf
tmp/work/cortexa7hf-vfp-neon-poky-linux-gnueabi/audit/2.8.4-r0/image/lib:
libaudit.a libaudit.la libaudit.so libaudit.so.1 libaudit.so.1.0.0 libauparse.a libauparse.la libauparse.so libauparse.so.0 libauparse.so.0.0.0
tmp/work/cortexa7hf-vfp-neon-poky-linux-gnueabi/audit/2.8.4-r0/image/sbin:
audispd audisp-remote auditctl auditd augenrules aureport ausearch autrace
tmp/work/cortexa7hf-vfp-neon-poky-linux-gnueabi/audit/2.8.4-r0/image/usr:
bin include lib share
When I build the rootfs and add the audit daemon by adding the following line to conf/local.conf:
CORE_IMAGE_EXTRA_INSTALL += " audit"
I only get the following file inside the rootfs:
/etc/libaudit.conf
Audit_2.8.4.bb
SUMMARY = "User space tools for kernel auditing"
DESCRIPTION = "The audit package contains the user space utilities for \
storing and searching the audit records generated by the audit subsystem \
in the Linux kernel."
HOMEPAGE = "http://people.redhat.com/sgrubb/audit/"
SECTION = "base"
LICENSE = "GPLv2+ & LGPLv2+"
LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f"
SRC_URI = "http://people.redhat.com/sgrubb/${BPN}/${BPN}-${PV}.tar.gz \
file://audit-python-configure.patch \
file://audit-python.patch \
file://fix-swig-host-contamination.patch \
file://auditd \
file://auditd.service \
file://audit-volatile.conf \
"
SRC_URI[md5sum] = "ec9510312564c3d9483bccf8dbda4779"
SRC_URI[sha256sum] = "a410694d09fc5708d980a61a5abcb9633a591364f1ecc7e97ad5daef9c898c38"
inherit autotools pythonnative update-rc.d systemd
UPDATERCPN = "auditd"
INITSCRIPT_NAME = "auditd"
INITSCRIPT_PARAMS = "defaults"
SYSTEMD_PACKAGES = "auditd"
SYSTEMD_SERVICE_auditd = "auditd.service"
DEPENDS += "python tcp-wrappers libcap-ng linux-libc-headers (>= 2.6.30) swig-native"
EXTRA_OECONF += "--without-prelude \
--with-libwrap \
--enable-gssapi-krb5=no \
--with-libcap-ng=yes \
--with-python=yes \
--libdir=${base_libdir} \
--sbindir=${base_sbindir} \
--without-python3 \
--disable-zos-remote \
"
EXTRA_OECONF_append_arm = " --with-arm=yes"
EXTRA_OECONF_append_aarch64 = " --with-aarch64=yes"
EXTRA_OEMAKE += "PYLIBVER='python${PYTHON_BASEVERSION}' \
PYINC='${STAGING_INCDIR}/$(PYLIBVER)' \
pyexecdir=${libdir}/python${PYTHON_BASEVERSION}/site-packages \
STDINC='${STAGING_INCDIR}' \
pkgconfigdir=${libdir}/pkgconfig \
"
SUMMARY_audispd-plugins = "Plugins for the audit event dispatcher"
DESCRIPTION_audispd-plugins = "The audispd-plugins package provides plugins for the real-time \
interface to the audit system, audispd. These plugins can do things \
like relay events to remote machines or analyze events for suspicious \
behavior."
PACKAGES =+ "audispd-plugins"
PACKAGES += "auditd ${PN}-python"
FILES_${PN} = "${sysconfdir}/libaudit.conf ${base_libdir}/libaudit.so.1* ${base_libdir}/libauparse.so.*"
FILES_auditd += "${bindir}/* ${base_sbindir}/* ${sysconfdir}/*"
FILES_audispd-plugins += "${sysconfdir}/audisp/audisp-remote.conf \
${sysconfdir}/audisp/plugins.d/au-remote.conf \
${sbindir}/audisp-remote ${localstatedir}/spool/audit \
"
FILES_${PN}-dbg += "${libdir}/python${PYTHON_BASEVERSION}/*/.debug"
FILES_${PN}-python = "${libdir}/python${PYTHON_BASEVERSION}"
CONFFILES_auditd += "${sysconfdir}/audit/audit.rules"
RDEPENDS_auditd += "bash"
do_install_append() {
rm -f ${D}/${libdir}/python${PYTHON_BASEVERSION}/site-packages/*.a
rm -f ${D}/${libdir}/python${PYTHON_BASEVERSION}/site-packages/*.la
# reuse auditd config
[ ! -e ${D}/etc/default ] && mkdir ${D}/etc/default
mv ${D}/etc/sysconfig/auditd ${D}/etc/default
rmdir ${D}/etc/sysconfig/
# replace init.d
install -D -m 0755 ${S}/../auditd ${D}/etc/init.d/auditd
rm -rf ${D}/etc/rc.d
if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
install -d ${D}${sysconfdir}/tmpfiles.d/
install -m 0644 ${WORKDIR}/audit-volatile.conf ${D}${sysconfdir}/tmpfiles.d/
fi
# install systemd unit files
install -d ${D}${systemd_unitdir}/system
install -m 0644 ${WORKDIR}/auditd.service ${D}${systemd_unitdir}/system
# audit-2.5 doesn't install any rules by default, so we do that here
mkdir -p ${D}/etc/audit ${D}/etc/audit/rules.d
cp ${S}/rules/10-base-config.rules ${D}/etc/audit/rules.d/audit.rules
chmod 750 ${D}/etc/audit ${D}/etc/audit/rules.d
chmod 640 ${D}/etc/audit/auditd.conf ${D}/etc/audit/rules.d/audit.rules
# Based on the audit.spec "Copy default rules into place on new installation"
cp ${D}/etc/audit/rules.d/audit.rules ${D}/etc/audit/audit.rules
}
Audit_2.8.4.bb is a recipe. You run recipes with bitbake.
Recipes produce one or more packages. You install packages in an image.
You can look in the packages-split directory in tmp/work/cortexa7hf-vfp-neon-poky-linux-gnueabi/audit/2.8.4-r0/ to see what goes into which package.
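In this recipe the daemon binaries are split into the auditd package (see the PACKAGES, FILES_auditd and FILES_${PN} lines above), while the audit package itself only ships /etc/libaudit.conf and the shared libraries, which matches what ended up in your rootfs. A sketch of how to check the split and pull in the daemon instead (package names taken from the recipe):
# Inspect which files went into which package
ls tmp/work/cortexa7hf-vfp-neon-poky-linux-gnueabi/audit/2.8.4-r0/packages-split/
# In conf/local.conf, install the daemon package (and optionally the dispatcher plugins)
CORE_IMAGE_EXTRA_INSTALL += " auditd audispd-plugins"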
I'd like to see the 'config' details as shown by the command:
kubectl config view
However, this shows the config details of all contexts. How can I filter it (or is there another command) to view only the config details of the CURRENT context?
kubectl config view --minify displays only the current context.
Use the command below to get the full config, including certificates:
kubectl config view --minify --flatten
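As a usage sketch, that flattened output can be redirected to a file and used as a standalone kubeconfig covering just the current context (the file name is only an example):
kubectl config view --minify --flatten > current-context.kubeconfig
KUBECONFIG=$PWD/current-context.kubeconfig kubectl get nodes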
The cloud-native way to do this is to use the JSON output of the command, then filter it with jq:
kubectl config view -o json | jq '. as $o
| ."current-context" as $current_context_name
| $o.contexts[] | select(.name == $current_context_name) as $context
| $o.clusters[] | select(.name == $context.context.cluster) as $cluster
| $o.users[] | select(.name == $context.context.user) as $user
| {"current-context-name": $current_context_name, context: $context, cluster: $cluster, user: $user}'
{
"current-context-name": "docker-for-desktop",
"context": {
"name": "docker-for-desktop",
"context": {
"cluster": "docker-for-desktop-cluster",
"user": "docker-for-desktop"
}
},
"cluster": {
"name": "docker-for-desktop-cluster",
"cluster": {
"server": "https://localhost:6443",
"insecure-skip-tls-verify": true
}
},
"user": {
"name": "docker-for-desktop",
"user": {
"client-certificate-data": "REDACTED",
"client-key-data": "REDACTED"
}
}
}
This answer helped me figure out some of the jq bits.
A bash/kubectl equivalent with a little bit of jq, which works for any context:
exec >/tmp/output &&
CONTEXT_NAME=kubernetes-admin@kubernetes \
CONTEXT_CLUSTER=$(kubectl config view -o=jsonpath="{.contexts[?(@.name==\"${CONTEXT_NAME}\")].context.cluster}") \
CONTEXT_USER=$(kubectl config view -o=jsonpath="{.contexts[?(@.name==\"${CONTEXT_NAME}\")].context.user}") && \
echo "[" && \
kubectl config view -o=json | jq -j --arg CONTEXT_NAME "$CONTEXT_NAME" '.contexts[] | select(.name==$CONTEXT_NAME)' && \
echo "," && \
kubectl config view -o=json | jq -j --arg CONTEXT_CLUSTER "$CONTEXT_CLUSTER" '.clusters[] | select(.name==$CONTEXT_CLUSTER)' && \
echo "," && \
kubectl config view -o=json | jq -j --arg CONTEXT_USER "$CONTEXT_USER" '.users[] | select(.name==$CONTEXT_USER)' && \
echo -e "\n]\n" && \
exec >/dev/tty && \
cat /tmp/output | jq && \
rm -rf /tmp/output
You can use the command kubectl config view --minify to get the current context only.
It is handy to use --help to see the options available for kubectl operations:
kubectl config view --help
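For instance, the help output includes a subcommand that prints just the name of the current context:
kubectl config current-context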
Thanks to this tutorial: https://www.twilio.com/docs/sip-trunking/api/trunks#action-create I am able to create, read, update and delete (CRUD) trunks on my Twilio account.
To create a new trunk I do it like so:
curl -XPOST https://trunking.twilio.com/v1/Trunks \
-d "FriendlyName=MyTrunk" \
-u '{twilio account sid}:{twilio auth token}'
and this is the response I get when creating a new trunk:
{
"trunks": [
{
"sid": "TKfa1e5a85f63bfc475c2c753c0f289932",
"account_sid": "ACxxx",
....
....
"date_updated": "2015-09-02T23:23:11Z",
"url": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932",
"links": {
"origination_urls": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls",
"credential_lists": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/CredentialLists",
"ip_access_control_lists": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/IpAccessControlLists",
"phone_numbers": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/PhoneNumbers"
}
}],
"meta": {
"page": 0,
"page_size": 50,
... more
}
}
What I am interested from the response is:
"links": {
"origination_urls": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls",
Now if I perform a GET request on that link like:
curl -G "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls" -u '{twilio account sid}:{twilio auth token}'
I get back this:
{
"meta":
{
"page": 0,
"page_size": 50,
"first_page_url":
....
},
"origination_urls": []
}
Now my goal is to update the origination_urls. So, using the same approach I used to update a trunk, I have tried:
curl -XPOST https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls \
-d "origination_urls=sip:200#somedomain.com" \
-u '{twilio account sid}:{twilio auth token}'
But that fails. I have also tried:
curl -XPOST https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls \
-d "origination_urls=['someUrl']" \
-u '{twilio account sid}:{twilio auth token}'
and that fails too. How can I update the origination_urls?
I was missing the Priority, FriendlyName, SipUrl, Weight and Enabled parameters in my POST request. I finally got it to work by doing:
curl -XPOST "https://trunking.twilio.com/v1/Trunks/TKfae10...../OriginationUrls" -d "Priority=10" -d "FriendlyName=Org1" -d "Enabled=true" -d "SipUrl=sip:test#domain.com" -d "Weight=10" -u '{twilio account sid}:{twilio auth token}'