Using Grafana Agent to monitor Postgres on AWS - postgresql

How can I use the Grafana Agent to collect metrics and logs from Postgres? The Postgres instance is on AWS. What agent configuration is needed? The agent itself is installed successfully (see the screenshot of the Grafana installation tab).
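For reference, a minimal sketch of what a static-mode Grafana Agent configuration for this could look like, assuming the agent runs on a host that can reach the database. The remote-write/Loki URLs, credentials, connection string, and log path below are placeholders; if the instance is RDS, the server logs live in CloudWatch rather than in local files, so the logs block would need a different collection approach.

```yaml
server:
  log_level: info

metrics:
  wal_directory: /tmp/grafana-agent-wal
  global:
    scrape_interval: 60s
    remote_write:
      - url: https://prometheus.example.net/api/prom/push   # placeholder endpoint
        basic_auth:
          username: <metrics-user>                          # placeholder credentials
          password: <api-key>

integrations:
  postgres_exporter:
    enabled: true
    data_source_names:
      # placeholder connection string to the AWS-hosted Postgres instance
      - "postgresql://monitor:password@my-db.example.rds.amazonaws.com:5432/postgres?sslmode=require"

logs:
  configs:
    - name: default
      clients:
        - url: https://loki.example.net/loki/api/v1/push    # placeholder endpoint
      positions:
        filename: /tmp/positions.yaml
      scrape_configs:
        - job_name: postgres-logs
          static_configs:
            - targets: [localhost]
              labels:
                job: postgres
                # only applies if Postgres writes logs to local files on this host
                __path__: /var/log/postgresql/*.log
```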

Related

Collect VM uptime from openstack cloud

I would like to collect uptime info from a VM provisioned in Redhat Openstack. Is there a native service in OpenStack to continually provide the system logs (with uptime info)? I checked Nova VM diagnostics capability - https://wiki.openstack.org/wiki/Nova_VM_Diagnostics#Overview
but I am still trying to figure out whether I should run an agent on the VM in OpenStack to provide the logs, or whether there is a better, more elegant way to do it.
You can use the Gnocchi service for information about server uptime.
You can use one of these solutions:
Set up Ceilometer with a separate RabbitMQ cluster.
Use a libvirt exporter (if you are using KVM/QEMU) together with Prometheus (see the sketch after this list).
Set up a custom script and deliver it to the instance via cloud-init (not a good idea if you are not the owner of the instance).
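A minimal sketch of the Prometheus side of the libvirt-exporter option: the compute host names and the exporter port are assumptions and depend on which libvirt exporter build you deploy on the hypervisors.

```yaml
# prometheus.yml (fragment) -- scrape a libvirt exporter running on each compute node
scrape_configs:
  - job_name: libvirt
    scrape_interval: 30s
    static_configs:
      - targets:
          - compute-01.example.com:9177   # hypothetical compute node; port depends on the exporter
          - compute-02.example.com:9177
```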

How to install and connect to Control-M Agent in a Kubernetes cluster?

I am very new to using Control-M. I need to install Agents inside a Kubernetes cluster. Could you please tell me which steps I need to follow, or point me in the direction of the relevant documentation? Once installed (which I don't know how to do), how can I "connect" my Control-M server to the agent?
Thanks very much for any help/guidance you can provide.
BMC has an FAQ for this; note that the Agent settings will need tweaking (see answer 1). Support for this is better in v9.0.19 and v9.0.20. Also check out the link to GitHub below.
1. If we provision the agent in a Kubernetes pod as a container, what should the Agent host name be? By default it takes the Kubernetes pod name as the host name, which is not pingable from outside.
You can use a StatefulSet so that the pod name is stable.
If you want the Control-M/Server (outside k8s) to connect to a Control-M/Agent inside k8s, you need to change the connection type to a persistent connection (see the utilities -- agent: ctmagcfg, server: ctm_menu) that will be initiated from the Control-M/Agent side.
Additional Information: Best Practices for using Control-M to run a pod to completion in a Kubernetes-based cluster
https://github.com/controlm/automation-api-community-solutions/tree/master/3-infrastructure-as-code-examples/kubernetes-statefulset-agent-using-pvc
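As a rough illustration of the StatefulSet approach from point 1, here is a minimal sketch. The image name, mount path, and the assumed headless Service are placeholders, not official BMC artifacts; see the GitHub link above for a complete, supported example.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: controlm-agent
spec:
  serviceName: controlm-agent              # assumes a headless Service of this name, giving the pod a stable DNS name
  replicas: 1
  selector:
    matchLabels:
      app: controlm-agent
  template:
    metadata:
      labels:
        app: controlm-agent
    spec:
      containers:
        - name: agent
          image: my-registry/controlm-agent:9.0.20   # hypothetical image; build your own agent image
          volumeMounts:
            - name: agent-data
              mountPath: /home/controlm              # assumed agent home/install path
  volumeClaimTemplates:
    - metadata:
        name: agent-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
```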
2. Can we connect to the Agent provisioned in Kubernetes via a load balancer?
Yes. A LoadBalancer will expose a static name/IP and allow the Control-M/Server to connect to the Control-M/Agent, but it is not needed (see the persistent connection above) and it costs money in most clouds (for example, in AWS it actually defines an Elastic IP that you pay for).
3. Since we have seen a couple of documents from the BMC communities about installing the Agent on Kubernetes via a Docker image, there should be a way to discover it from the on-prem Control-M/Server.
The Control-M/Agent discovery is done from the Control-M/Agent side using the CLI (or a REST call) "ctm provision setup" once the pod (container) starts.
This API configures the Control-M/Agent (for example, to use the persistent connection mentioned above) and defines/registers it in the Control-M/Server.
4. When setting up agents in a Kubernetes environment, does an agent need to be installed on each node in a cluster?
The Control-M/Agent only needs to be installed once. It does not have to be installed on every node.
5. Can the agent be installed on the cluster through a DaemonSet and shared by all containers?
The agent can be installed through a DaemonSet, but this will install an agent on each node in the cluster. Each agent will be considered a separate installation, and each agent will be added individually in the CCM. Alternatively, an agent can be installed via a StatefulSet, where only one agent is installed but still has access to the Kubernetes cluster.

How to connect to AWS ECS cluster?

I have successfully created an ECS cluster (EC2 Linux + Networking). Is it possible to log in to the cluster to perform some administrative tasks? I have not deployed any containers or tasks to it yet. I can't find any hints for this in the AWS console or the AWS documentation.
The "cluster" is just a logical grouping of resources. The "cluster" itself isn't a server you can log into or anything. You would perform actions on the cluster via the AWS console or the AWS API. You can connect to the EC2 servers managed by the ECS cluster individually. You would do that via the standard ssh method you would use to connect to any other EC2 Linux server.
ECS will take care of most of the administrative work for you. You simply have to deploy and manage your applications on ECS. If you set up ECS correctly, you will never have to connect to the instances.
Follow these instructions to deploy your service (docker image): https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html
You can also use CloudWatch to store container logs, so that you don't have to connect to the instances to check them: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
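A minimal sketch of that CloudWatch setup, expressed as a CloudFormation fragment: the container sends its stdout/stderr to CloudWatch Logs via the awslogs driver. The family name, image, log group, and region are placeholders.

```yaml
Resources:
  MyTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: my-service
      ContainerDefinitions:
        - Name: app
          Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest   # placeholder image
          Memory: 512
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: /ecs/my-service        # log group must exist (or be created by the stack)
              awslogs-region: us-east-1
              awslogs-stream-prefix: app
```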

Use preinstalled fluentd installation in a K8saaS in the IBM-Cloud?

A K8saaS cluster in the IBM Cloud runs a preinstalled fluentd. May I use it for my own purposes, too?
We are thinking about a logging strategy that is independent of the IBM infrastructure, and we want to store the information in ES. May I reuse the fluentd installation provided by IBM to ship my log information, or should I install my own fluentd? If so, can I install fluentd on the nodes via the Kubernetes API, without any access to the nodes themselves?
The fluentd that is installed and managed by IBM Cloud Kubernetes Service will only connect to the IBM cloud logging service.
There is nothing to stop you from installing your own Fluentd as well to send your logs to your own logging service, running either inside your cluster or outside it. This is best done via a DaemonSet so that it can collect logs from every node in the cluster.
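A minimal sketch of such a second, self-managed Fluentd DaemonSet shipping logs to your own Elasticsearch. The image tag and the FLUENT_ELASTICSEARCH_* variables follow the fluent/fluentd-kubernetes-daemonset project and should be verified against the image you actually use; the ES host is a placeholder.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-own
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluentd-own
  template:
    metadata:
      labels:
        app: fluentd-own
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.16-debian-elasticsearch8-1  # verify the tag
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "es.example.internal"     # your own ES endpoint (placeholder)
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
          volumeMounts:
            - name: varlog
              mountPath: /var/log              # container/pod logs live under /var/log on the node
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```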

Configure Zabbix monitoring tool on kubernetes cluster in GCP

I am trying to configure the Zabbix monitoring tool on top of a Kubernetes cluster in Google Cloud Platform.
I followed the KB and the Zabbix server was configured successfully. I have also configured a Zabbix agent using this link.
Now I would like to know how my pods running on the cluster can be added to this Zabbix server. Seeking your help.
Thanks in advance.
The Dockbix Docker images you used already have Zabbix preconfigured with auto-registration, which should discover your nodes/containers (containers != pods). I guess you didn't configure the security groups or DNS names properly.
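As a rough, non-Dockbix-specific sketch of getting node-level data into Zabbix, you can run the plain Zabbix agent on every node as a DaemonSet pointing at your server, so auto-registration can pick the nodes up; pod-level monitoring additionally needs discovery rules/templates on the server side. The server address is a placeholder, and the ZBX_* variables follow the official zabbix/zabbix-agent image and should be checked against the image you deploy.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: zabbix-agent
spec:
  selector:
    matchLabels:
      app: zabbix-agent
  template:
    metadata:
      labels:
        app: zabbix-agent
    spec:
      hostNetwork: true                          # report metrics for the node itself
      containers:
        - name: zabbix-agent
          image: zabbix/zabbix-agent:latest
          env:
            - name: ZBX_SERVER_HOST
              value: "zabbix-server.monitoring.svc.cluster.local"   # placeholder server address
            - name: ZBX_HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName       # register each node under its own name
          ports:
            - containerPort: 10050
```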