I have a Google Cloud SQL for PostgreSQL instance and I'd like to run periodic SQL queries on it and use the monitoring system to alert the user with the results.
How can I accomplish this using just the GCP platform, without having to develop a separate app?
As far as I am aware, there is no built-in feature for recurring queries in Cloud SQL at the moment, so you have to implement your own.
You can use Cloud Scheduler to trigger a Cloud Function (via an HTTP/S endpoint) that runs the query on Cloud SQL and then notifies the user in whatever way suits your needs (I would recommend Pub/Sub).
You might also want to save the result in a GCS bucket and have the user pull it from there.
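As a rough illustration, here is a minimal Python sketch of such a function, assuming pg8000 for the PostgreSQL connection over the Unix socket that Cloud Functions mounts, and placeholder environment variables for the instance, credentials and result topic:

```python
# Sketch of a Cloud Function: Cloud Scheduler calls this HTTP endpoint, the
# function runs a query against Cloud SQL and publishes the result to Pub/Sub.
# The query, environment variable names and topic are placeholders.
import json
import os

import functions_framework
import pg8000.dbapi
from google.cloud import pubsub_v1

PROJECT = os.environ["GCP_PROJECT"]        # assumption: configured on deploy
TOPIC = os.environ["RESULT_TOPIC"]         # e.g. "sql-check-results"

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT, TOPIC)


@functions_framework.http
def run_periodic_query(request):
    # Cloud Functions can reach Cloud SQL through the Unix socket mounted
    # under /cloudsql/<INSTANCE_CONNECTION_NAME>.
    conn = pg8000.dbapi.connect(
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASS"],
        database=os.environ["DB_NAME"],
        unix_sock=f"/cloudsql/{os.environ['INSTANCE_CONNECTION_NAME']}/.s.PGSQL.5432",
    )
    try:
        cur = conn.cursor()
        # Example check: replace with whatever you want to monitor.
        cur.execute("SELECT count(*) FROM orders WHERE status = 'stuck'")
        (stuck,) = cur.fetchone()
    finally:
        conn.close()

    # Publish the result; an alerting subscriber (or the user) picks it up.
    publisher.publish(topic_path, json.dumps({"stuck_orders": stuck}).encode("utf-8"))
    return f"published result: {stuck}", 200
```

A Cloud Scheduler job pointed at the function's URL then gives you the recurring part.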
Also, you might want to check out BigQuery, which has a built-in scheduled-queries feature.
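If the data can live in (or be replicated to) BigQuery, a scheduled query can be created programmatically with the Data Transfer Service client; the project, dataset and query below are placeholders:

```python
# Sketch: create a BigQuery scheduled query via the Data Transfer Service.
from google.cloud import bigquery_datatransfer

client = bigquery_datatransfer.DataTransferServiceClient()

transfer_config = bigquery_datatransfer.TransferConfig(
    destination_dataset_id="monitoring",   # assumption: an existing dataset
    display_name="daily stuck-orders check",
    data_source_id="scheduled_query",
    params={
        "query": "SELECT count(*) AS stuck FROM `my_project.sales.orders` WHERE status = 'stuck'",
        "destination_table_name_template": "stuck_orders_{run_date}",
        "write_disposition": "WRITE_TRUNCATE",
    },
    schedule="every 24 hours",
)

transfer_config = client.create_transfer_config(
    parent=client.common_project_path("my_project"),
    transfer_config=transfer_config,
)
print(f"Created scheduled query: {transfer_config.name}")
```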
I would like to build a real-time change data capture (log-based preferred) pipeline from Google Cloud Spanner to Pub/Sub/Kafka for my downstream real-time applications. Could you please let me know if there is a good, cost-effective way to achieve that? I would appreciate any advice and recommendations.
In addition, I noticed that Google's Cloud Data Fusion can do real-time replication from MySQL/PostgreSQL to Cloud Spanner, but I did not find a way to go from Cloud Spanner to Pub/Sub/Kafka in real time.
I also found two other approaches, listed here for any comments or suggestions:
Use Debezium, a log-based change data capture Kafka connector: https://cloud.google.com/architecture/capturing-change-logs-with-debezium#deploying_debezium_on_gke_on_google_cloud
Create a polling service (which may miss some data) to poll data from Cloud Spanner: https://cloud.google.com/architecture/deploying-event-sourced-systems-with-cloud-spanner
I would be really grateful for any suggestions or comments on this.
There's an open source implementation of a polling service for Cloud Spanner that can also automatically push changes to Pub/Sub: https://github.com/cloudspannerecosystem/spanner-change-watcher
It is, however, not log-based and has some inherent limitations:
It can miss updates if the same record is updated twice within the polling interval. In that case, only the last value will be reported.
It only supports soft deletes.
You could have a look at the samples to see if it is something that might suit your needs at least to some degree: https://github.com/cloudspannerecosystem/spanner-change-watcher/tree/master/samples
Cloud Spanner has a new feature called Change Streams that would allow building a downstream pipeline from Spanner to PubSub/Kafka.
At this time, there is no pre-packaged Spanner-to-PubSub/Kafka connector.
The way to read change streams currently is to use the SpannerIO Apache Beam connector, which lets you build the pipeline with Dataflow, or to query the API directly.
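As a rough illustration of the direct-query route, the sketch below reads a change stream with the Python Spanner client and forwards the records to Pub/Sub; the stream, instance, database and topic names are placeholders, and a real pipeline would also have to follow the child partition tokens returned by the initial query (which is what SpannerIO/Dataflow handles for you):

```python
# Sketch: query a Spanner change stream directly and push records to Pub/Sub.
import datetime
import json

from google.cloud import pubsub_v1, spanner
from google.cloud.spanner_v1 import param_types

client = spanner.Client(project="my-project")
database = client.instance("my-instance").database("my-database")

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "spanner-changes")

start = datetime.datetime.now(datetime.timezone.utc)
end = start + datetime.timedelta(minutes=5)

with database.snapshot() as snapshot:
    # READ_<change_stream_name> is the change stream read function; "my_stream"
    # is a placeholder for an actual change stream defined on the database.
    rows = snapshot.execute_sql(
        """SELECT ChangeRecord
           FROM READ_my_stream(
               start_timestamp => @start,
               end_timestamp => @end,
               partition_token => NULL,
               heartbeat_milliseconds => 10000)""",
        params={"start": start, "end": end},
        param_types={
            "start": param_types.TIMESTAMP,
            "end": param_types.TIMESTAMP,
        },
    )
    for row in rows:
        # Each record can be a data change, a heartbeat or a child partition
        # record; here everything is simply forwarded as JSON.
        publisher.publish(topic_path, json.dumps(str(row[0])).encode("utf-8"))
```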
Disclaimer: I'm a Developer Advocate that works with the Cloud Spanner team.
How do I create a background service in Flutter with push notifications?
I have created an app, but I want to integrate a service that checks the database when the app is not running.
Thank you.
I would recommend using Google Cloud Scheduler, which lets you create cron jobs that send a request to your API on a regular basis. The first three jobs are free.
If you also need to implement the actual function that checks your database, have a look at Google Cloud Functions. They can be written in JavaScript or TypeScript and can make calls to external APIs as long as you are on the Blaze plan (which includes the monthly free quotas).
The advantages are:
you get free credit when you create your account
depending on your needs, you might not need more than the free monthly quota for Cloud Functions calls (the first 2 million invocations are free every month)
it's very easy to create a scheduled function that will be picked up and run by Cloud Scheduler (see the sketch after this list)
it's highly scalable and reliable so you don't have to worry about managing your own servers
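Here is a minimal sketch of such a check function, written in Python (the same structure applies in the JavaScript/TypeScript runtimes); the Firestore collection, field names and the stored FCM device token are assumptions about your data model:

```python
# Sketch: an HTTP function that Cloud Scheduler calls on a cron schedule.
# It checks the database and sends a push notification via Firebase Cloud
# Messaging so the Flutter app is alerted even when it is not running.
import firebase_admin
import functions_framework
from firebase_admin import firestore, messaging

firebase_admin.initialize_app()
db = firestore.client()


@functions_framework.http
def check_database(request):
    # Assumption: each user document stores an FCM registration token that the
    # Flutter app saved earlier, plus whatever state you want to monitor.
    users = db.collection("users").where("needs_alert", "==", True).stream()
    sent = 0
    for user in users:
        data = user.to_dict()
        messaging.send(
            messaging.Message(
                notification=messaging.Notification(
                    title="Heads up",
                    body="Something changed while the app was closed.",
                ),
                token=data["fcm_token"],
            )
        )
        sent += 1
    return f"notifications sent: {sent}", 200
```

Deploy it as an HTTP Cloud Function and point a Cloud Scheduler job at its URL (or have the scheduler publish to a Pub/Sub topic that triggers it).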
I have a Google Cloud Function triggered by a Google Cloud Storage object.finalize event. When I deploy a new version of this function, I would like to run it for every existing file in the bucket (which has already been processed by the previous version of the function). Processing all the existing files in the bucket is a long-running task, so I don't think a single Google Cloud Function that processes all the files in one go is an option.
The best option I can see for now is to make a Google Cloud Function that I can trigger via HTTP, which will list all the files in the bucket and publish one event per file via Google Pub/Sub, and then to process each of these events with a slightly modified version of my initial Cloud Function that accepts a Pub/Sub event in place of the object.finalize storage event.
I think it can work but I was wondering if there was an easier way to perform this operation.
If the operation you're trying to perform may take longer than the maximum time a Cloud Function can run, you will need to split it into multiple steps. Your approach of using a Pub/Sub trigger for each individual file sounds like a valid way to do that.
One option might be to write a small program that lists all of the objects in a bucket and, for each object, posts a message to Cloud Pub/Sub that triggers your function in the same way a GCS change would.
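A minimal sketch of that small program in Python, with placeholder project, bucket and topic names; the message payload mirrors the fields your function would otherwise read from the object.finalize event:

```python
# Sketch: list every object in the bucket and publish one Pub/Sub message per
# object so the PubSub-triggered function reprocesses the existing files.
import json

from google.cloud import pubsub_v1, storage

PROJECT = "my-project"
BUCKET = "my-bucket"
TOPIC = "reprocess-files"

storage_client = storage.Client(project=PROJECT)
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT, TOPIC)

futures = []
for blob in storage_client.list_blobs(BUCKET):
    # Keep the payload close to the GCS object.finalize event so the function
    # can handle both triggers with minimal branching.
    payload = {"bucket": BUCKET, "name": blob.name}
    futures.append(publisher.publish(topic_path, json.dumps(payload).encode("utf-8")))

# Block until every message has been accepted by Pub/Sub.
for future in futures:
    future.result()

print(f"published {len(futures)} messages")
```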
I have defined several Activities in IBM Data Connect (on Bluemix) and would like to chain them together, e.g. one for copying a Cloudant DB to dashDB, another for refining the copied data, and so on.
Can this be done? If yes, how?
Data Connect doesn't currently support a way of chaining your activities together. However, you could make use of the current scheduling capabilities to arrange the activities to run in sequence. As the only trigger mechanism we currently have is time, their successful operation would require you to leave enough time for each activity to finish before the next one in the chain starts.
I will find out for you if we have the kind of feature you're after on our roadmap.
Regards,
Wesley -
IBM Bluemix Data Connect Engineering
You can also use the Data Connect API to do the orchestration. See the documentation here: https://console.ng.bluemix.net/docs/services/dataworks1/index.html
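As a purely illustrative sketch of what such orchestration could look like, the snippet below starts one activity, polls until it finishes, then starts the next; the endpoint paths, payload fields and auth header are hypothetical placeholders, so check the linked documentation for the real resource names:

```python
# Illustrative only: chain Data Connect activities by running each one and
# polling its status before starting the next. All URLs and fields below are
# hypothetical placeholders, not the real Data Connect API surface.
import time

import requests

BASE_URL = "https://example.bluemix.net/dataconnect/v1"   # hypothetical
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}           # hypothetical auth


def run_activity(activity_id: str) -> None:
    # Hypothetical "run" endpoint for an activity.
    run = requests.post(f"{BASE_URL}/activities/{activity_id}/runs", headers=HEADERS)
    run.raise_for_status()
    run_id = run.json()["id"]

    # Poll the hypothetical run-status endpoint until the activity completes.
    while True:
        status = requests.get(
            f"{BASE_URL}/activities/{activity_id}/runs/{run_id}", headers=HEADERS
        ).json()["status"]
        if status in ("FINISHED", "FAILED"):
            if status == "FAILED":
                raise RuntimeError(f"activity {activity_id} failed")
            return
        time.sleep(30)


# Chain the activities: copy first, then refine.
for activity in ["copy-cloudant-to-dashdb", "refine-copied-data"]:
    run_activity(activity)
```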
Regards,
Hernando Borda
IBM Bluemix Data Connect Product Manager
I have started testing a Google Cloud SQL instance with a desktop-based application. So far I am impressed with the performance; even if it is a bit laggy, it does the job. My next step is to see what simple modifications I can make to the application, mostly aimed at reducing access to the database, and to optimize further if there is anything else to do.
How can I log the SQL commands sent to the database so I can check what queries are being issued? My app uses ODBC drivers on Windows.
Regards
What you probably want is to turn on the general log. Unfortunately, that requires SUPER privileges, which were removed some time ago (announcement). We are going to provide a way to tweak parameters like that via the Cloud SQL API. For now, the best solution is to set up a local server and use the logging on that one. If you really need it on the production instance, ping us on the google-cloud-sql-discuss Google group and we'll enable SUPER for your instance.
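As an illustration of the local-server workaround, the sketch below points an ODBC connection at a local MySQL server, turns the general log on, and reads the captured statements back; the DSN name is a placeholder:

```python
# Sketch: enable the general log on a local MySQL server (where you do have
# SUPER) and inspect the statements the application actually sends.
import pyodbc

conn = pyodbc.connect("DSN=LocalMySQL")   # assumption: an ODBC DSN for the local server
cur = conn.cursor()

# Log to the mysql.general_log table instead of a file so it can be queried.
cur.execute("SET GLOBAL log_output = 'TABLE'")
cur.execute("SET GLOBAL general_log = 'ON'")

# ... exercise the application against the local server, then inspect:
for event_time, argument in cur.execute(
    "SELECT event_time, argument FROM mysql.general_log "
    "WHERE command_type = 'Query' ORDER BY event_time DESC LIMIT 20"
):
    print(event_time, argument)

conn.close()
```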