I am trying to configure the Zabbix monitoring tool on top of a Kubernetes cluster in Google Cloud Platform.
I followed the KB article and the Zabbix server was configured successfully. I have also configured a Zabbix agent using this link.
Now I would like to know how the pods running on the cluster can be added to this Zabbix server. Any help is appreciated.
Thanks in advance.
The Dockbix Docker images you used already ship with Zabbix auto registration preconfigured, which should discover your nodes/containers (containers != pods). My guess is that you didn't configure the security groups or DNS names properly.
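If you want to check outside the UI whether auto registration actually created the hosts, one option is to query the Zabbix API. A minimal sketch against the standard JSON-RPC endpoint (the URL and credentials are placeholders, and the login parameter names vary slightly between Zabbix versions):

import requests

ZABBIX_API = "http://YOUR_ZABBIX_SERVER/api_jsonrpc.php"  # placeholder URL

def call(method, params, auth=None):
    payload = {"jsonrpc": "2.0", "method": method, "params": params, "id": 1}
    if auth:
        payload["auth"] = auth
    return requests.post(ZABBIX_API, json=payload).json()["result"]

# Log in, then list hosts; auto-registered nodes/containers should show up here.
token = call("user.login", {"user": "Admin", "password": "zabbix"})
for host in call("host.get", {"output": ["hostid", "host", "status"]}, auth=token):
    print(host["host"], host["status"])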
I would like to collect uptime information from a VM provisioned in Red Hat OpenStack. Is there a native service in OpenStack that continually provides the system logs (with uptime info)? I checked the Nova VM diagnostics capability - https://wiki.openstack.org/wiki/Nova_VM_Diagnostics#Overview -
but I am still trying to figure out whether I should run an agent on the VM in OpenStack to provide the logs, or whether there is a better and more elegant way to do it.
You can use the Gnocchi service for information about server uptime.
You can use one of these solutions:
Set up Ceilometer with a separate RabbitMQ cluster.
Use the libvirt exporter (if you are using KVM/QEMU) together with Prometheus (see the query sketch after this list).
Set up a custom script and send it to the instance via cloud-init (not a great idea if you are not the owner of the instance).
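For the libvirt exporter + Prometheus option, here is a minimal sketch of pulling uptime-style data through the Prometheus HTTP query API. The Prometheus URL is a placeholder, and the metric name depends on the exporter you deploy (the query below uses node_exporter's boot-time metric as an example):

import requests

PROMETHEUS = "http://prometheus.example.com:9090"  # placeholder URL
query = "time() - node_boot_time_seconds"          # metric name depends on your exporter

resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": query})
for result in resp.json()["data"]["result"]:
    instance = result["metric"].get("instance", "unknown")
    uptime_seconds = float(result["value"][1])
    print(f"{instance}: up for {uptime_seconds / 3600:.1f} hours")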
I am very new to using Control-M. I need to install agents inside a Kubernetes cluster - could you please tell me which steps I need to follow, or point me in the direction of the relevant documentation? Once installed (which I don't know how to do), how can I "connect" my Control-M server to the agent?
Thanks very much for any help/guidance you can provide.
BMC has an FAQ for this; note that the agent settings will need tweaking (see answer 1). Support for this is better in v9.0.19 and v9.0.20. Also check out the GitHub link below.
1. If we provision the agent on a Kubernetes pod as a container, what should the agent host name be? By default it takes the Kubernetes pod name as the host name, which is not pingable from outside.
You can use a StatefulSet so that the host name is stable (a programmatic sketch follows this answer).
If you want the Control-M/Server (from outside k8s) to connect to a Control-M/Agent inside k8s, you need to change the connection type to a persistent connection (see the utilities agent: ctmagcfg, ctm: ctm_menu) that will be initiated from the Control-M/Agent side.
Additional Information: Best Practices for using Control-M to run a pod to completion in a Kubernetes-based cluster
https://github.com/controlm/automation-api-community-solutions/tree/master/3-infrastructure-as-code-examples/kubernetes-statefulset-agent-using-pvc
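The linked GitHub example defines the StatefulSet declaratively; purely as an illustration of the same idea, here is a minimal sketch using the official Kubernetes Python client (the names, labels, and image are placeholders, not the actual Control-M agent image):

from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside the cluster
apps = client.AppsV1Api()

labels = {"app": "ctm-agent"}  # placeholder labels
statefulset = client.V1StatefulSet(
    metadata=client.V1ObjectMeta(name="ctm-agent"),
    spec=client.V1StatefulSetSpec(
        service_name="ctm-agent",   # headless service gives the pod a stable network identity
        replicas=1,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="agent", image="YOUR_CONTROLM_AGENT_IMAGE"),
            ]),
        ),
    ),
)
apps.create_namespaced_stateful_set(namespace="default", body=statefulset)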
2. Can we connect to the agent provisioned in Kubernetes via a load balancer?
Yes. A LoadBalancer service will expose a static name/IP and allow the Control-M/Server to connect to the Control-M/Agent, but it is not needed (see the persistent connection above) and it costs money in most clouds (for example, in AWS it actually allocates an Elastic IP that you pay for).
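If you do go the LoadBalancer route anyway, a minimal sketch with the Kubernetes Python client (the labels and the port are placeholders; use whatever server-to-agent port your agent is configured with):

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="ctm-agent-lb"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",            # the cloud provider allocates an external IP (which costs money)
        selector={"app": "ctm-agent"},  # must match the agent pod labels
        ports=[client.V1ServicePort(port=7006, target_port=7006)],  # placeholder agent port
    ),
)
core.create_namespaced_service(namespace="default", body=service)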
3. Since we see a couple of documents from the BMC communities about installing the agent on Kubernetes via a Docker image, there should be a way to discover it from the on-prem Control-M/Server.
The Control-M/Agent discovery is done from the Control-M/Agent side using the CLI (or a REST call) "ctm provision setup" once the pod (container) starts.
This API configures the Control-M/Agent (for example, to use the persistent connection mentioned above) and defines/registers it in the Control-M/Server.
4. When setting up agents in a Kubernetes environment, does an agent need to be installed on each node in the cluster?
The Control-M/Agent only needs to be installed once. It does not have to be installed on every node.
5. Can the agent be installed on the cluster through a DaemonSet and shared by all containers?
The agent can be installed through a DaemonSet, but this will install an agent on each node in the cluster. Each agent will be considered a separate installation, and each one would individually be added in the CCM. Alternatively, an agent can be installed in a StatefulSet, where only one agent is installed but it still has access to the Kubernetes cluster.
One K8s-as-a-Service cluster in IBM Cloud runs a preinstalled Fluentd. May I use it for my own purposes, too?
We are thinking about a logging strategy that is independent of the IBM infrastructure, and we want to store the information in Elasticsearch. May I reuse the Fluentd installation done by IBM to ship my log information, or should I install my own Fluentd? If so, am I able to install Fluentd on the nodes via the Kubernetes API, without any access to the nodes themselves?
The Fluentd that is installed and managed by the IBM Cloud Kubernetes Service will only connect to the IBM Cloud logging service.
There is nothing to stop you from installing your own Fluentd as well to send your logs to your own logging service, running either inside or outside your cluster. This is best done via a DaemonSet so that it can collect logs from every node in the cluster.
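For reference, here is a minimal sketch of creating such a DaemonSet with the Kubernetes Python client; the image and the Elasticsearch host are assumptions, and a real deployment also needs the host log-path mounts plus a Fluentd configuration pointing at your ES:

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

labels = {"app": "fluentd-custom"}
daemonset = client.V1DaemonSet(
    metadata=client.V1ObjectMeta(name="fluentd-custom", namespace="kube-system"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="fluentd",
                    # assumed image; any Fluentd image with an Elasticsearch output plugin works
                    image="fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch",
                    env=[client.V1EnvVar(name="FLUENT_ELASTICSEARCH_HOST", value="YOUR_ES_HOST")],
                ),
            ]),
        ),
    ),
)
apps.create_namespaced_daemon_set(namespace="kube-system", body=daemonset)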
I have a Kubernetes cluster set up using Kubernetes Engine on GCP. I have also installed Dask using the Helm package manager. My data are stored in a Google Storage bucket on GCP.
Running kubectl get services on my local machine yields the following output
I can open the dashboard and jupyter notebook using the external IP without any problems. However, I'd like to develop a workflow where I write code in my local machine and submit the script to the remote cluster and run it there.
How can I do this?
I tried following the instructions in Submitting Applications using dask-remote. I also tried exposing the scheduler using kubectl expose deployment with type LoadBalancer, though I do not know if I did this correctly. Suggestions are greatly appreciated.
Yes. If your client and workers share the same software environment, you should be able to connect a client to the remote scheduler using the publicly visible IP:
from dask.distributed import Client

# 8786 is the default Dask scheduler port
client = Client('tcp://REDACTED_EXTERNAL_SCHEDULER_IP:8786')
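Once connected, work submitted from the local session executes on the remote workers. Continuing from the client above, a minimal sketch (the gs:// path is a placeholder, and reading it requires gcsfs in the workers' environment):

import dask.dataframe as dd

# A trivial task: scheduled locally, executed on a remote worker
future = client.submit(lambda x: x + 1, 10)
print(future.result())  # 11

# Reading directly from the Google Storage bucket (placeholder path, needs gcsfs on the workers)
df = dd.read_csv("gs://your-bucket/data-*.csv")
print(df.describe().compute())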
Can anyone give an example of using kiam on Kubernetes to manage service-level access control to AWS resources?
According to the docs:
The server is the only process that needs to call sts:AssumeRole and
can be placed on an isolated set of EC2 instances that don't run other
user workloads.
I would like to know how to run the server part of it away from the nodes that host your services.
Answer: the kiam architecture is well explained here:
https://www.bluematador.com/blog/iam-access-in-kubernetes-kube2iam-vs-kiam
Basically, you want to use the master nodes in your cluster, which have IAM/STS permissions on them, to run the server portion of kiam, and then let your worker nodes connect to the master nodes to retrieve credentials.
DISCLAIMER: I did some digging on kube2iam and kiam without going all the way to a test bench, and I wasn't happy with what I found. It turns out we don't need them anymore starting with Kubernetes 1.13 on EKS, that is, as of September 4th, since AWS has added native support for pods to access IAM/STS (IAM roles for service accounts).
https://docs.aws.amazon.com/en_pv/eks/latest/userguide/iam-roles-for-service-accounts.html
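For completeness, here is a minimal sketch of what a pod sees once IAM roles for service accounts are configured: the role ARN is annotated on the pod's service account, and the AWS SDK's default credential chain picks it up automatically, with no kiam/kube2iam proxy involved:

import boto3

# Inside a pod whose service account carries the eks.amazonaws.com/role-arn annotation,
# boto3 assumes the role via the injected web-identity token automatically.
sts = boto3.client("sts")
print(sts.get_caller_identity()["Arn"])  # shows the assumed role rather than the node role

# Subsequent AWS calls are authorized by that role's policies, for example:
s3 = boto3.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])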