How to get the availability status of middleware services running on IBM Cloud? - apache-kafka

IBM internally monitors the services it offers on the cloud, but I need a way to get the status of middleware services such as Kafka, API Connect, etc. myself. It would help me automate things if a service is stopped or not accessible.

To monitor your provisioned instances of these services, you could exercise them. For example, on API Connect create an API called /health and curl it to verify it is working. For Kafka, create a topic to check the health.
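As a rough illustration, here is a minimal Python sketch of such a probe. The URL, bootstrap server, and API key are placeholders (not from the answer), it assumes an Event Streams-style SASL setup with kafka-python, and the Kafka check requests topic metadata rather than creating a topic, which also fails when the brokers are unreachable:

    # Hypothetical availability probe (not an official IBM Cloud API): it simply
    # exercises the two services as suggested above. All URLs/credentials are
    # placeholders. Requires: pip install requests kafka-python
    import requests
    from kafka import KafkaAdminClient
    from kafka.errors import KafkaError

    API_HEALTH_URL = "https://api.example.com/myorg/sandbox/health"   # your /health API
    KAFKA_BOOTSTRAP = "broker-0-xxxx.kafka.eventstreams.cloud.ibm.com:9093"
    KAFKA_API_KEY = "..."  # Event Streams service credential

    def api_connect_up() -> bool:
        """The API is 'up' if the /health endpoint answers 200 within 10 seconds."""
        try:
            return requests.get(API_HEALTH_URL, timeout=10).status_code == 200
        except requests.RequestException:
            return False

    def kafka_up() -> bool:
        """Instead of creating a topic, fetch topic metadata; it fails if brokers are down."""
        try:
            admin = KafkaAdminClient(
                bootstrap_servers=KAFKA_BOOTSTRAP,
                security_protocol="SASL_SSL",
                sasl_mechanism="PLAIN",
                sasl_plain_username="token",     # Event Streams convention: user is "token"
                sasl_plain_password=KAFKA_API_KEY,
                request_timeout_ms=10_000,
            )
            admin.list_topics()
            admin.close()
            return True
        except KafkaError:
            return False

    if __name__ == "__main__":
        print("API Connect:", "UP" if api_connect_up() else "DOWN")
        print("Kafka:", "UP" if kafka_up() else "DOWN")

Run on a schedule (cron, a Kubernetes CronJob, or a monitoring tool), this gives you the stopped/not-accessible signal you want to automate against.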

Related

CloudSQL Proxy on GKE: Service vs Sidecar

Does anyone know the pros and cons of installing the CloudSQL Proxy (which allows us to connect securely to CloudSQL) on a Kubernetes cluster as a service, as opposed to running it as a sidecar alongside the application container?
I know that it is mostly used as a sidecar. I have used it both ways (in non-production environments), but I never understood why the sidecar is preferable to a service. Can someone enlighten me, please?
The sidecar pattern is preferred because it is the easiest and more secure option. Traffic to the Cloud SQL Auth proxy is not encrypted or authenticated, and it relies on the user to restrict access to the proxy (typically by running it on localhost).
When you run the Cloud SQL proxy, you are essentially saying "I am user X and I'm authorized to connect to the database". When you run it as a service, anyone that can reach that service connects to the database authorized as "user X".
You can see this warning in the Cloud SQL proxy example running as a service in k8s, or watch this video on Connecting to Cloud SQL from Kubernetes which explains the reason as well.
The Cloud SQL Auth proxy is the recommended way to connect to Cloud SQL, even when using private IP. This is because the Cloud SQL Auth proxy provides strong encryption and authentication using IAM, which can help keep your database secure.
When you connect using the Cloud SQL Auth proxy, the Cloud SQL Auth proxy is added to your pod using the sidecar container pattern. The Cloud SQL Auth proxy container is in the same pod as your application, which enables the application to connect to the Cloud SQL Auth proxy using localhost, increasing security and performance.
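For illustration, here is a minimal sketch (not from the answer) of the application side of that pattern, assuming a PostgreSQL instance and the psycopg2 driver; because the proxy runs as a sidecar in the same pod, the application simply connects to localhost:

    # Sketch only: the Cloud SQL Auth proxy runs as a sidecar container in the
    # same pod, so the application connects to localhost and the proxy handles
    # IAM authentication and encryption to Cloud SQL. DB_* values are placeholders.
    import os
    import psycopg2  # pip install psycopg2-binary

    conn = psycopg2.connect(
        host="127.0.0.1",              # the proxy listens on localhost inside the pod
        port=5432,                     # port the proxy sidecar was started with
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASS"],
    )
    with conn.cursor() as cur:
        cur.execute("SELECT 1")        # trivial round trip through the proxy
        print(cur.fetchone())
    conn.close()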
A sidecar is a container that runs in the same Pod as the application container; because it shares the same volumes and network as the main container, it can "help" or enhance how the application operates. In Kubernetes, a pod is a group of one or more containers with shared storage and network. A sidecar is a utility container in a pod that's loosely coupled to the main application container.
Sidecar Pros: Scales indefinitely as you increase the number of pods. Can be injected automatically. Already used by service meshes.
Sidecar Cons: A bit harder to adopt, as developers can't just deploy their app but must deploy a whole stack in a deployment. It consumes more resources and is harder to secure, because every Pod must run the log aggregator to push the logs to the database or queue.
Refer to the documentation for more information.

Flink native Kubernetes deployment

I have some limitations with the rights required by the Flink native deployment.
The prerequisites say
KubeConfig, which has access to list, create, delete pods and **services**, configurable
Specifically, my issue is that I cannot have a service account with the rights to create/remove services. Creating/removing pods is not an issue, but by policy, services can only be created through an internal tool.
Could there be any workaround for this?
Flink creates two services in the native Kubernetes integration.
Internal service, which is used for internal communication between the JobManager and TaskManagers. It is only created when HA is not enabled, since the HA service is used for leader retrieval when HA is enabled.
Rest service, which is used for accessing the web UI or REST endpoint. If you have other ways to expose the REST endpoint, or you are using application mode, then it is also optional. However, it is currently always created, so I think you would need to change some code to work around this.
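If it helps, a quick way to confirm which of these services a deployment actually created is to list them with the Kubernetes Python client; this sketch assumes Flink's default naming, where the internal service is named after the cluster-id and the rest service gets a "-rest" suffix:

    # Sketch: verify which of the two services Flink created, assuming the default
    # service names derived from kubernetes.cluster-id. Requires: pip install kubernetes
    from kubernetes import client, config

    CLUSTER_ID = "my-flink-cluster"   # placeholder: value of -Dkubernetes.cluster-id
    NAMESPACE = "default"             # placeholder

    config.load_kube_config()         # use load_incluster_config() when running in-cluster
    names = {svc.metadata.name
             for svc in client.CoreV1Api().list_namespaced_service(NAMESPACE).items}

    print("internal service:", "present" if CLUSTER_ID in names else "absent")
    print("rest service:", "present" if f"{CLUSTER_ID}-rest" in names else "absent")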

OpenShift: idle a service using the REST API

Is there a way to idle an OCP service using the OpenShift REST API?
I know that every oc command run from the client hits an API to perform the requested action.
However, I don't see any information about the endpoint to call to idle a service in the OpenShift API documentation.
Any help is truly appreciated.

How to collect SF reverse proxy logs on-prem

I have a 3-node on-prem cluster. Now I want to collect and analyze reverse proxy logs (and other Service Fabric system logs). I googled and found this article, which says:
Refer to Collect reverse proxy events to enable collecting events from
these channels in local and Azure Service Fabric clusters.
But that link describes how to enable, configure and collect reverse proxy logs for clusters in Azure. And I don't understand how to do it on-prem.
Please, help!
Service Fabric events are just ETW events. You have the option to use the built-in mechanism to collect and forward these events to a monitoring application like Windows Azure Diagnostics, or you can build your own.
If you decide to follow the approach in the documents, it will work on Azure or on-premises; the only caveat is that on-premises it will still send the logs to Azure, but otherwise it works the same way.
On-premises, another way is to build your own collector using EventFlow; you can configure EventFlow to collect the ReverseProxy ETW events and then forward them to ELK or any other monitoring platform.

Application Insights and Service Fabric?

I found this from several months back on Application Insights and Service Fabric and I'm wondering if there is any new information.
I would really like to get CPU, memory, storage, and other metrics out of Service Fabric and the reliable actors. Having them presented in a user-friendly HUD like App Insights provides would be awesome!
Thanks!
On the Azure portal, you can now create a resource called 'Service Fabric Analytics' to get a nice dashboard for your cluster. Configure your cluster as described here. It's OMS-based, not App Insights, though.
The Service Fabric solution helps identify and troubleshoot issues across your Service Fabric cluster by providing visibility into how your Service Fabric virtual machines are performing and how your applications and micro-services are running. Available features include:
• Get insight into your Service Fabric framework
• Get insight into the performance of your Service Fabric applications and micro-services
• View events from your applications and micro-services
Data collected: Service Fabric Reliable Service events, Service Fabric Actor events, Service Fabric operational events, Event Tracing for Windows (ETW) events, and Windows event logs.
Requirements: This solution will only work if you have set up Azure Diagnostics on your Service Fabric VMs and have configured OMS to collect data from your WAD tables.
In Service Fabric, with EventFlow (https://github.com/Azure/diagnostics-eventflow) you have the option to send the diagnostics data to multiple data stores such as WAD tables, OMS, Elasticsearch, and Application Insights.
Have a look at it. It is really straightforward, integrates easily with ETW events, and will serve your purpose.