I have Filebeat running in one Docker container and Logstash running in a different Docker container.
In the filebeat.yml configuration, the Logstash host is set with the IP of Logstash as http://:5044.
I am facing the below error:
WARN DNS lookup failure "http://172.17.0.2:5044": lookup http://localhost:5044/: invalid domain name
2017/04/14 14:16:51.537977 single.go:126: INFO Connecting error publishing events (retrying): lookup http://localhost:5044/: invalid domain name
2017/04/14 14:16:51.538000 single.go:152: INFO send fail
Configuration of filebeat.yml with regard to the Logstash output:
output:
logstash:
enabled: true
hosts:
- "172.17.0.2:5044"
Should the Docker IP of Logstash be used, or a separate IP?
The connection between Beats and Logstash is not based on the HTTP protocol so do not configure the hosts option with a URL. Each Logstash host should be of the format host:port. As you can see from the error message it is trying to resolve the full value you specified as a domain name which is wrong (i.e. it's doing the equivalent of nslookup "http://localhost:5044/").
The indentation in the config shown in the question is off as well. The hosts setting should be a child of logstash so it needs to be indented. Try using this instead to avoid any indentation issues:
output.logstash.hosts: ['logstash:5044']
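For reference, the same setting written as nested YAML, with hosts correctly indented as a child of logstash, looks like this:
output:
  logstash:
    hosts: ["logstash:5044"]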
If both the Logstash and Filebeat containers are in the same Docker environment then you can link the two containers and use the Logstash container's name as the hostname in your config file. If they are on separate hosts then you need to expose port 5044 from LS and use the host machine's IP in your Filebeat configuration.
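For example, if you run both containers with docker-compose they end up on the same network and the Logstash service name resolves from inside the Filebeat container. A minimal sketch (image tags, file layout, and service names are illustrative, not taken from your setup):
version: "3"
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.3.0
  filebeat:
    image: docker.elastic.co/beats/filebeat:5.3.0
    volumes:
      # mount your own filebeat.yml containing output.logstash.hosts: ['logstash:5044']
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    depends_on:
      - logstash
With that in place, Filebeat can reach Logstash simply as logstash:5044.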
Related
I have some services managed by Kubernetes. Each service has some number of pods. I want to develop a new service to analyze the logs of all the other services. To do this, I first need to ship the logs to my new service, but I don't know how.
I can divide my question into two parts.
1- How should I access/read the logs? Should I read from /var/logs or run the apps through a pipe like this:
./app | myprogram
where myprogram gets the logs of app as standard input.
2- How can I send the logs to another service? My options are gRPC and Kafka (or RabbitMQ).
Using a CephFS volume could also be a solution, but it seems to be an anti-pattern (see How to share storage between Kubernetes pods?).
Below is the basic workflow for collecting logs from your pods and sending them to a logging tool. I have taken Fluent Bit (open source) as the example, but you can use tools like Fluentd/Logstash/Filebeat.
Pod logs are stored at a specific path on the nodes -> Fluent Bit runs as a DaemonSet and collects the logs from the nodes using its input plugins -> Fluent Bit's output plugins send the logs to the logging tool (Elastic / Datadog / Logiq etc.)
Fluent Bit is an open source log shipper and processor that collects data from multiple sources and forwards it to different destinations. Fluent Bit has various input plugins which can be used to collect log data from specific paths or ports, and output plugins to deliver the logs to Elastic or any other log collector.
Please follow the instructions below to install Fluent Bit:
https://medium.com/kubernetes-tutorials/exporting-kubernetes-logs-to-elasticsearch-using-fluent-bit-758e8de606af
or
https://docs.fluentbit.io/manual/installation/kubernetes
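The guides above install Fluent Bit from the official manifests or Helm chart, which is the recommended route. Purely for orientation, here is a trimmed-down sketch of the key part of such a DaemonSet: it mounts the node's /var/log path so that the tail input below can read the container logs (namespace, names, and image tag are illustrative; the official manifests also add RBAC, tolerations, etc.):
# illustrative DaemonSet sketch, not a complete installation
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:1.9
          volumeMounts:
            - name: varlog            # node log path read by the tail input
              mountPath: /var/log
              readOnly: true
            - name: config            # Fluent Bit configuration from a ConfigMap
              mountPath: /fluent-bit/etc/
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: config
          configMap:
            name: fluent-bit-config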
To get you started, below is an example of what your input plugin configuration could look like. It uses the tail plugin; notice the Path, which is where the logs are stored on the nodes of the Kubernetes cluster.
https://docs.fluentbit.io/manual/pipeline/inputs
The parser can be changed according to your requirements or the format of your logs.
[INPUT]
    Name              tail
    Tag               kube.*
    Path              /var/log/containers/*.log
    Parser            docker
    DB                /var/log/flb_kube.db
    Skip_Long_Lines   On
    Refresh_Interval  60
    Mem_Buf_Limit     1MB
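Not shown in the snippet above, but commonly added between the input and the output, is Fluent Bit's kubernetes filter, which enriches each record with pod and namespace metadata so you can tell which service a log line came from. A minimal sketch:
[FILTER]
    Name              kubernetes
    Match             kube.*
    Merge_Log         On
    Keep_Log          Off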
Below is an example of an output plugin configuration. This one is the http plugin, pointed at wherever the log collector is listening; there are various output plugins you can configure, depending on the logging tool you choose.
https://docs.fluentbit.io/manual/pipeline/outputs
The below uses the http plugin to send data to an http (80/443) endpoint:
[OUTPUT]
    Name              http
    Match             *
    Host              <Hostname>
    Port              80
    URI               /v1/json_batch
    Format            json
    tls               off
    tls.verify        off
    net.keepalive     off
    compress          gzip
Below is an example output to Elasticsearch:
[OUTPUT]
    Name              es
    Match             *
    Host              <hostname>
    Port              <port>
    HTTP_User         <user-name>
    HTTP_Passwd       <password>
    Logstash_Format   On
    Retry_Limit       False
I'm currently trying to run a Jenkins build on top of a Kubernetes minikube 2-node cluster. This is the code that I am using: https://github.com/rsingla2012/docker-development-youtube-series-youtube-series/tree/main/jenkins. Every time I run the build, I get an error that the slave is offline. This is the output of "kubectl get all -o wide -n jenkinsonkubernetes2" after I apply the files:
[screenshot: command-line output of kubectl get all]
Looking at the Jenkins logs below, Jenkins is able to spin up and provision a slave pod, but as soon as the container is run (in this case I'm using the inbound-agent image, although it's named jnlp), the pod is terminated and deleted and another is created.
[Jenkins logs screenshot: https://i.stack.imgur.com/mudPi.png]
I also added a new Jenkins logger for org.csanchez.jenkins.plugins.kubernetes at all levels, the log of which is shown below.
[screenshot: kubernetes plugin logs]
This led me to believe that it might be a network issue or a firewall blocking the port, so I checked with netstat: although Jenkins was listening on 0.0.0.0:8080, port 50000 was not. So I opened port 50000 with an inbound rule on Windows 10, but after running the build it is still not listening. For reference, I also created a NodePort for the service and port-forwarded the master pod to port 32767, so the Jenkins UI is accessible at 127.0.0.1:32767.
I believed opening the port should fix the issue, but upon using Microsoft Telnet to double-check, I received the error "Connecting To 127.0.0.1...Could not open connection to the host, on port 50000: Connect failed" with the command "open 127.0.0.1 50000". One thing I thought was causing the problem was the lack of a server certificate when accessing the Kubernetes API from Jenkins, so I added the Kubernetes server certificate key to the Kubernetes cloud configuration, but I am still receiving the same error. My Kubernetes URL is set to https://kubernetes.default:443, the Jenkins URL is http://jenkins, and I'm using the Jenkins tunnel jenkins:50000 with no concurrency limit.
After deploying the Bitnami Helm chart for Airflow on a Kubernetes cluster, although everything works, logging is still unreachable.
It turns out that the Helm chart being used for the deployment uses a headless service for communication between the Celery workers and is not able to show me the logs.
I have set the hostname_callable setting correctly, and yet the logs always pick up the name of the headless service as their hostname, not the DNS name.
*** Log file does not exist: /opt/bitnami/airflow/logs/secondone/s3files/2020-06-19T10:35:00+00:00/1.log
*** Fetching from: http://mypr-afw-worker-1.mypr-afw-headless.mynamespace.svc.cluster.local:8793/log/secondone/s3files/2020-06-19T10:35:00+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='mypr-afw-worker-1.mypr-afw-headless.mynamespace.svc.cluster.local', port=8793): Max retries exceeded with url: /log/secondone/s3files/2020-06-19T10:35:00+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f12917f5630>: Failed to establish a new connection: [Errno 111] Connection refused',))
Any help in this regard would be appreciated! Thanks!
How are you setting the hostname? It seems you need to pass them as an array:
## The list of hostnames to be covered with this ingress record.
## Most likely this will be just one host, but in the event more hosts are needed, this is an array
##
hosts:
  - name: airflow.local
    path: /
or --set ingress.hosts[0].name=airflow.local --set ingress.hosts[0].path=/ in the helm install command
I am trying to configure Grafana to visualize metrics collected by Prometheus.
My Prometheus data source is validated successfully, but when I try to create a dashboard it shows the error "can not read property 'result' of undefined".
I am adding screenshots.
It looks like you are pointing at the Node Exporter endpoint and not the Prometheus server. The default Prometheus server port is 9090. Try changing your source to http://192.168.33.22:9090.
Grafana doesn't query Node Exporter directly; it queries the Prometheus server, which gathers the time series statistics.
Please see the guide below to fix the issue!
This will work as long as you have both Grafana and Prometheus running as Docker images, so before you begin, please run the command below to be sure that both the prom and Grafana containers are up:
docker ps
To connect Prometheus to Grafana, you will need to get the IP address of the Prometheus server container from the host.
Use this command on your terminal to display all the container IDs:
docker ps -a
You will see your Prometheus server container ID displayed, for example "faca0c893603". Copy the ID and run the command below in your terminal to see the IP address of your Prometheus server:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' faca0c893603
Note: faca0c893603 is the container ID of the prom/prometheus server.
When you run the command, it will display the IP address of the Prometheus container (172.17.0.3 in this example), which needs to be combined with the Prometheus server port in Grafana.
In the Grafana data source settings, put http://172.17.0.3:9090 in the URL field and try Save & Test.
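If you prefer to keep that setting in configuration rather than clicking through the UI, Grafana can also provision the data source from a YAML file at startup; a minimal sketch, assuming the container IP found above (file path and names are illustrative):
# /etc/grafana/provisioning/datasources/prometheus.yml (illustrative path inside the Grafana container)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://172.17.0.3:9090
    isDefault: true
Grafana reads this directory when it starts, so restart the Grafana container after adding the file.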
I'm trying to fetch the node list via an Ansible playbook using a context name, but it's not working.
my playbook:
getnodes.yaml
- name: "get nodes"
hosts: kubernetes
tasks:
- name: "nodes"
command: "kubectl get nodes --context='contextname'"
I do have multiple clusters in my config file. I need to either specify the cluster name or the context name and get the node list, or perform any activity on a particular cluster.
As far as I understand, when you run the command kubectl get nodes --context='contextname' directly on your master node, everything works fine, right? And it fails only when you run it as part of your Ansible playbook against the master node? What errors do you get?
Yes, that's correct. I'm able to execute it from the command line.
"The connection to the server localhost:8080 was refused - did you
specify the right host or port?"
Are you sure it is available on the same host where you run your Ansible playbook? I mean your Kubernetes master node, on which you have the kubectl binary installed? My guess is that it is not, and even if it is on the same host, you'll not be able to connect to it using localhost:8080.
Look, you're not using here any particular Ansible module specific to managing a Kubernetes cluster, like this one, which you run directly against the API server and for which you need to provide a valid URL. Instead, you are just using the simple command module, which doesn't care what command you want to run as long as you provide a valid hostname with ssh access and Python installed.
In this case your Ansible simply tries to ssh to your Kubernetes master node and execute the shell command you passed to it:
kubectl get nodes --context='contextname'
I really doubt that your ssh server listens on port 8080.
If you run your Ansible playbook on the same host where you can run your kubectl commands, there are much easier solutions in Ansible for such cases, like:
local_action or delegate_to: localhost statements in your task, or more globally connection: local (see the sketch below)
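For example, a minimal sketch of the same playbook run locally rather than over ssh, assuming kubectl and your kubeconfig are available on the machine where you run Ansible (the context name is illustrative):
# getnodes.yaml, rewritten to run on the Ansible control machine itself
- name: "get nodes"
  hosts: localhost
  connection: local
  tasks:
    - name: "nodes"
      command: "kubectl get nodes --context='contextname'"
      register: nodes_output

    - name: "show nodes"
      debug:
        var: nodes_output.stdout_lines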
You can find more details on the usage of all the above-mentioned statements in your Ansible plays in the Ansible docs and in this article.
I hope it will help you.