Stackdriver Job Monitoring - BigQuery or Dataflow

How can we check slow job performance and job recovery through Stackdriver? I am looking at Dataflow or BigQuery jobs.

For general questions about GCP, you can always go to the public GCP documentation page here.
Regarding your inquiry, I have attached an article on how you can monitor your Dataflow pipelines using Stackdriver Monitoring here.
You can also follow this article on how to use Stackdriver Monitoring to monitor your jobs within BigQuery.
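If you would rather pull these signals programmatically than through the console, the Monitoring API also exposes Dataflow job metrics. Below is a minimal sketch using the Java Monitoring client; the project ID is a placeholder, and dataflow.googleapis.com/job/elapsed_time is just my assumption of a useful metric for spotting slow jobs:

```java
import com.google.cloud.monitoring.v3.MetricServiceClient;
import com.google.monitoring.v3.ListTimeSeriesRequest;
import com.google.monitoring.v3.ProjectName;
import com.google.monitoring.v3.TimeInterval;
import com.google.monitoring.v3.TimeSeries;
import com.google.protobuf.util.Timestamps;

public class DataflowJobMetrics {
    public static void main(String[] args) throws Exception {
        try (MetricServiceClient client = MetricServiceClient.create()) {
            long now = System.currentTimeMillis();
            TimeInterval interval = TimeInterval.newBuilder()
                    .setStartTime(Timestamps.fromMillis(now - 3_600_000)) // last hour
                    .setEndTime(Timestamps.fromMillis(now))
                    .build();

            // elapsed_time is one of the Dataflow job metrics; a value that
            // keeps growing for a batch job is a hint that it is running slow.
            ListTimeSeriesRequest request = ListTimeSeriesRequest.newBuilder()
                    .setName(ProjectName.of("my-project").toString()) // placeholder project ID
                    .setFilter("metric.type=\"dataflow.googleapis.com/job/elapsed_time\"")
                    .setInterval(interval)
                    .setView(ListTimeSeriesRequest.TimeSeriesView.FULL)
                    .build();

            for (TimeSeries series : client.listTimeSeries(request).iterateAll()) {
                // The dataflow_job resource labels identify the job.
                System.out.println(series.getResource().getLabelsMap());
            }
        }
    }
}
```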

Related

Spring Batch and a Multi-Instance Kubernetes Application with ONE Database

I do not fully understand whether Spring Batch works fine in a multi-instance Kubernetes application. I read Batch Processing on Kubernetes, so I understand that in general it works fine, but the answer does not mention whether it works fine in a multi-instance installation using ONE database.
Our setup looks like this: we have multiple instances of the application running in Kubernetes, and they share ONE database. Some jobs are triggered by user interaction (in one of the many pods answering requests from the UI) and some by a Kubernetes cron job, e.g. data reorganization (in one of the many pods answering the REST request from the cron job). All pods contain the identical application.
Does this setup work fine with Spring Batch?
Thanks for your help :-)
As far as Spring Batch is concerned, all these things are deployment details. It is up to you to design your jobs and job instances with that in mind. This is what I explain in detail in the Choosing the right Spring Batch job parameters and Choosing the Right Kubernetes Job Deployment Pattern sections. Note that this blog post is linked in the answer you shared.
What Spring Batch guarantees, thanks to its centralized job repository design (which is what you are referring to as "ONE database"), is the prevention of duplicate and concurrent job executions of the same job instance.
So the answer to your question is yes, as long as you choose the right deployment pattern for your Spring Batch jobs and Kubernetes jobs.
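To make that guarantee concrete, here is a minimal sketch (the dataReorgJob bean and the reorgDate parameter are hypothetical names, not from the question): because reorgDate is an identifying job parameter, any two pods launching the job for the same date target the same job instance, and the shared job repository rejects the second concurrent launch:

```java
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.stereotype.Component;

@Component
public class DataReorgLauncher {

    private final JobLauncher jobLauncher;
    private final Job dataReorgJob; // hypothetical job bean

    public DataReorgLauncher(JobLauncher jobLauncher, Job dataReorgJob) {
        this.jobLauncher = jobLauncher;
        this.dataReorgJob = dataReorgJob;
    }

    public void launch(String reorgDate) throws Exception {
        // "reorgDate" is an identifying parameter by default, so all pods
        // launching the job for the same date target the same job instance.
        // The shared job repository then rejects a second concurrent launch
        // with a JobExecutionAlreadyRunningException, and a successfully
        // completed instance cannot be re-run with the same parameters.
        JobParameters params = new JobParametersBuilder()
                .addString("reorgDate", reorgDate)
                .toJobParameters();
        jobLauncher.run(dataReorgJob, params);
    }
}
```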

GKE and Task Queues

I am working on a cloud service platform that consists of getting tasks from users, executing them, and giving back the results.
TL;DR
Is there a way to have a "task queue", where tasks can be inserted via a REST API, and extracted automatically by the Google Kubernetes Engine cluster by guaranteeing an automatic scaling?
Long description
Users can send tasks in parallel, and each task is time-consuming and needs to be performed on a GPU. So, setting up an auto-scaling GPU cluster is what I thought of.
More particularly, in my idea, users could send tasks/data through a REST API, the REST API fills a task queue, and the task queue itself feeds tasks to workers on the GPU auto-scaling cluster. Of course, there are other details (authentication, database, storage, etc.) that have to be addressed but are not the point of my question.
For reasons I don't specify here, the project is already started on the Google Cloud Platform, so switching to AWS or other providers is not an option.
From what I understand, things seem a bit different from standard Docker-only clusters in AWS; that is, we have to use Google Kubernetes Engine (GKE) to set up the auto-scaling cluster, even for "simple" GPU-enabled Docker containers.
By looking at the not-so-exhaustive documentation, I know that queues are used, but what I don't know is whether feeding tasks to the cluster is handled automatically. Also, the so-called "Task Queue" service has been deprecated.
Thank you!
At first I thought Cloud Tasks queues might be the answer to your troubles, but this post seems to promote Cloud Pub/Sub as a better alternative.
After a quick chat with the Batch developers, it seems the current solution (before the Batch service becomes public) is to adopt a third-party queue system like Slurm.
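If you go the Pub/Sub route, a worker on the GPU node pool could be as simple as the following sketch (Java client; the project ID, subscription name, and runGpuTask helper are all placeholders I made up):

```java
import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PubsubMessage;

public class GpuTaskWorker {
    public static void main(String[] args) {
        // Placeholder project ID and subscription name.
        ProjectSubscriptionName subscription =
                ProjectSubscriptionName.of("my-project", "gpu-tasks-sub");

        MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
            // Run the GPU task described by the message payload, then ack.
            // Acking only after successful processing means an unacked task
            // is redelivered if this pod is preempted or crashes.
            runGpuTask(message.getData().toStringUtf8()); // hypothetical helper
            consumer.ack();
        };

        Subscriber subscriber = Subscriber.newBuilder(subscription, receiver).build();
        subscriber.startAsync().awaitRunning();
        subscriber.awaitTerminated();
    }

    private static void runGpuTask(String payload) {
        System.out.println("Processing task: " + payload);
    }
}
```

Scaling the worker deployment on the subscription backlog (e.g. the num_undelivered_messages metric, via HPA external metrics) is a common way to get the automatic scaling part.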

Batch Processing on Kubernetes

Does anyone here have experience with batch processing (e.g. Spring Batch) on Kubernetes? Is it a good idea? How do you prevent batch processing from processing the same data if we use the Kubernetes auto-scaling feature? Thank you.
Does anyone here have experience with batch processing (e.g. Spring Batch) on Kubernetes? Is it a good idea?
For Spring Batch, we (the Spring Batch team) do have some experience on the matter which we share in the following talks:
Cloud Native Batch Processing on Kubernetes, by Michael Minella
Spring Batch on Kubernetes, by me.
Running batch jobs on Kubernetes can be tricky:
pods may be rescheduled by k8s onto different nodes in the middle of processing
cron jobs might be triggered twice
etc.
This requires additional non-trivial work on the developer's side to make sure the batch application is fault-tolerant (resilient to node failure, pod re-scheduling, etc) and safe against duplicate job execution in a clustered environment.
Spring Batch takes care of this additional work for you and can be a good choice to run batch workloads on k8s for several reasons:
Cost efficiency: Spring Batch jobs maintain their state in an external database, which makes it possible to restart them from the last save point in case of job/node failure or pod re-scheduling
Robustness: Safe against duplicate job executions thanks to a centralized job repository
Fault-tolerance: Retry/Skip failed items in case of transient errors like a call to a web service that might be temporarily down or being re-scheduled in a cloud environment
I wrote a blog post in which I explain all these aspects in detail with code examples. You can find it here: Spring Batch on Kubernetes: Efficient batch processing at scale
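As a rough illustration of the fault-tolerance point (a sketch in the Spring Batch 4 style, not code taken from the blog post; the reader/writer beans and the exception types are placeholder choices):

```java
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.ResourceAccessException;

@Configuration
public class FaultTolerantStepConfig {

    @Bean
    public Step processFileStep(StepBuilderFactory steps,
                                ItemReader<String> reader,
                                ItemWriter<String> writer) {
        return steps.get("processFileStep")
                .<String, String>chunk(100)   // state is saved after each chunk
                .reader(reader)
                .writer(writer)
                .faultTolerant()
                .retry(ResourceAccessException.class) // transient web-service failure
                .retryLimit(3)
                .skip(IllegalArgumentException.class) // bad record: skip it, don't fail the job
                .skipLimit(10)
                .build();
    }
}
```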
How do you prevent batch processing from processing the same data if we use the Kubernetes auto-scaling feature?
Making each job process a different data set is the way to go (a job per file, for example). But there are different patterns that you might be interested in; see Job Patterns in the k8s docs.

Query Stackdriver Uptime Checks

I am trying to query for the Stackdriver Uptime Checks using the Google Monitoring API. I cannot find anything in the documentation that illustrates how to query for the uptime checks that were set up in Stackdriver. You will note that some of the queryable metrics include agent.googleapis.com/agent/uptime, but this does not return the uptime checks seen in Stackdriver Uptime Checks. Below I am listing some of the documentation I have been sifting through in case it is helpful.
Does anyone know how/if this can be done?
Google Python Client Docs
Time Series Query
Metrics
I'm a product manager on the Stackdriver team. Unfortunately, Uptime Check metrics are not currently available via the Stackdriver Metrics API. This is a feature we're actively working to provide. I'll follow up on this thread when the feature is released.
Thank you for your question and for using Stackdriver!
It's my understanding that this metric can now be externally queried as:
monitoring.googleapis.com/uptime_check/check_passed
You can see it referenced in the sample alerting policy JSON for creating uptime check alerting policies.
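Separately from the metric, the uptime check configurations themselves can be listed through the UptimeCheckService client. A minimal sketch with the Java client, using a placeholder project ID:

```java
import com.google.cloud.monitoring.v3.UptimeCheckServiceClient;
import com.google.monitoring.v3.ProjectName;
import com.google.monitoring.v3.UptimeCheckConfig;

public class ListUptimeChecks {
    public static void main(String[] args) throws Exception {
        try (UptimeCheckServiceClient client = UptimeCheckServiceClient.create()) {
            // Lists the uptime checks configured in the project; the results
            // of those checks are then queryable as the
            // monitoring.googleapis.com/uptime_check/check_passed metric.
            ProjectName project = ProjectName.of("my-project"); // placeholder
            for (UptimeCheckConfig config : client.listUptimeCheckConfigs(project).iterateAll()) {
                System.out.println(config.getDisplayName() + " -> " + config.getName());
            }
        }
    }
}
```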

Google Cloud SQL CPU Monitoring

I'm working on trying to set up some monitoring on a Google Cloud SQL node and am not seeing how to do it. I was able to install the monitoring agent on my Google Compute Engine instances to monitor CPU, network, etc. I have not been able to figure out how to do the same on the Cloud SQL instance. I have access to these types of monitoring:
Storage Usage (GB)
Number of Read/Write operations
Egress Bytes
Active Connections
MySQL Queries
MySQL Questions
InnoDB Pages Read/Written (pages/sec)
InnoDB Data fsyncs (operations/sec)
InnoDB Log fsyncs (operations/sec)
I'm sure these are great options, but at this point all I want to pay attention to is how my node is performing from a CPU/RAM standpoint, as these seem to be the first and foremost measures of performance.
If I'm missing something, or misunderstanding what I'm trying to do, any advice is appreciated.
Thanks!
Google has Stackdriver, which is for logging and monitoring Google and AWS cloud infrastructure. It can monitor nearly everything present on GCP, and you can create visualizations to monitor your Cloud SQL instance in one dashboard. You just have to:
1. Log in to Stackdriver and go to any existing dashboard; if you don't have one, create one.
2. Add a chart and select Cloud SQL as the resource name.
3. Select CPU Utilization as the metric and save. You can also monitor memory, disk I/O, the delta count of queries, server uptime, and much more.
If you want to monitor any other GCP service (Compute Engine, App Engine, Kubernetes Engine, storage buckets, Bigtable, or Pub/Sub), you just have to select the appropriate resource name from the list. Hope you got your answer.
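If you would rather pull the CPU number programmatically than from a dashboard, the same signal is exposed to the Monitoring API as cloudsql.googleapis.com/database/cpu/utilization. A minimal sketch with the Java client, aggregating raw samples into five-minute means (the project ID is a placeholder):

```java
import com.google.cloud.monitoring.v3.MetricServiceClient;
import com.google.monitoring.v3.Aggregation;
import com.google.monitoring.v3.ListTimeSeriesRequest;
import com.google.monitoring.v3.Point;
import com.google.monitoring.v3.ProjectName;
import com.google.monitoring.v3.TimeInterval;
import com.google.monitoring.v3.TimeSeries;
import com.google.protobuf.Duration;
import com.google.protobuf.util.Timestamps;

public class CloudSqlCpuQuery {
    public static void main(String[] args) throws Exception {
        try (MetricServiceClient client = MetricServiceClient.create()) {
            long now = System.currentTimeMillis();
            TimeInterval interval = TimeInterval.newBuilder()
                    .setStartTime(Timestamps.fromMillis(now - 3_600_000)) // last hour
                    .setEndTime(Timestamps.fromMillis(now))
                    .build();

            // Average the raw samples into five-minute buckets.
            Aggregation aggregation = Aggregation.newBuilder()
                    .setAlignmentPeriod(Duration.newBuilder().setSeconds(300).build())
                    .setPerSeriesAligner(Aggregation.Aligner.ALIGN_MEAN)
                    .build();

            ListTimeSeriesRequest request = ListTimeSeriesRequest.newBuilder()
                    .setName(ProjectName.of("my-project").toString()) // placeholder
                    .setFilter("metric.type=\"cloudsql.googleapis.com/database/cpu/utilization\"")
                    .setInterval(interval)
                    .setAggregation(aggregation)
                    .setView(ListTimeSeriesRequest.TimeSeriesView.FULL)
                    .build();

            for (TimeSeries series : client.listTimeSeries(request).iterateAll()) {
                for (Point p : series.getPointsList()) {
                    // The metric is a fraction between 0 and 1.
                    System.out.printf("%s -> %.1f%%%n",
                            Timestamps.toString(p.getInterval().getEndTime()),
                            p.getValue().getDoubleValue() * 100);
                }
            }
        }
    }
}
```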
You can view all of them directly from the "Overview" tab of the Cloud SQL console.
I have added this as a feature request as issue 110.
https://code.google.com/p/googlecloudsql/issues/detail?id=110