How to chain IBM Data Connect Activities in a flow

I have defined several Activities in IBM Data Connect (on Bluemix) and would like to chain them together, e.g. one for copying a Cloudant DB to dashDB, another for refining the copied data, and so on.
Can this be done? If yes, how?

Data Connect doesn't currently support a way of chaining your activities together. However, you could use the current scheduling capabilities to arrange the activities to run in sequence. Since time is currently the only trigger mechanism we have, you would need to leave enough time for each activity to finish before the next one in the chain starts.
I will find out for you if we have the kind of feature you're after on our roadmap.
Regards,
Wesley -
IBM Bluemix Data Connect Engineering

You can also use the Data Connect API to do the orchestration. See the documentation here: https://console.ng.bluemix.net/docs/services/dataworks1/index.html
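As a rough illustration of that idea, here is a minimal orchestration sketch in Python: run one activity, wait for it to finish, then start the next. The base URL, endpoint paths, payload fields, and status values below are hypothetical placeholders, not the documented Data Connect API; check the docs linked above for the real calls.

```python
# Sketch: chain Data Connect activities by polling each run's status.
# All endpoint paths and field names here are illustrative assumptions.
import time
import requests

BASE = "https://example.bluemix.net/dataconnect/v1"  # hypothetical base URL
HEADERS = {"Authorization": "Bearer <token>"}        # your Bluemix credentials

def run_activity(activity_id):
    """Start one activity and poll until it finishes, then return its status."""
    run = requests.post(f"{BASE}/activities/{activity_id}/runs",
                        headers=HEADERS).json()
    while True:
        status = requests.get(f"{BASE}/activities/{activity_id}/runs/{run['id']}",
                              headers=HEADERS).json()["status"]
        if status in ("COMPLETED", "FAILED"):
            return status
        time.sleep(30)  # poll every 30 seconds

# Chain: copy Cloudant -> dashDB first, then refine the copied data.
for activity in ["copy-cloudant-to-dashdb", "refine-copied-data"]:
    if run_activity(activity) != "COMPLETED":
        raise RuntimeError(f"{activity} failed; stopping the chain")
```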
Regards,
Hernando Borda
IBM Bluemix Data Connect Product Manager

Related

Running periodic queries on google cloud sql instance

I have a Google Cloud SQL for PostgreSQL instance and I'd like to run periodic SQL queries on it and use the monitoring system to alert the user with the results.
How can I accomplish this using just the GCP platform, without having to develop a separate app?
As far as I am aware, there is no built-in feature for recurring queries in Cloud SQL at the moment, so you have to implement your own. You can use Cloud Scheduler to trigger a Cloud Function (via an HTTP/S endpoint) that runs the query on Cloud SQL and then notifies the user in whatever way suits your needs (I would recommend Pub/Sub). You might also want to save the result in a GCS bucket and have the user pull it from there.
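For illustration, a minimal sketch of such a Cloud Function in Python; the project, instance, database credentials, query, and topic name are placeholders to adapt to your own setup.

```python
# Sketch of a Cloud Function (Python runtime) that Cloud Scheduler can
# hit over HTTP on a cron schedule. Connection details, the query, and
# the topic name are placeholders.
import json
import psycopg2                      # PostgreSQL driver
from google.cloud import pubsub_v1

def run_periodic_query(request):
    # Cloud SQL is reachable from Cloud Functions via a unix socket.
    conn = psycopg2.connect(
        host="/cloudsql/my-project:us-central1:my-instance",  # placeholder
        dbname="mydb", user="reporter", password="...")
    with conn, conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM orders WHERE status = 'stuck'")
        (stuck,) = cur.fetchone()

    # Publish the result so a subscriber can alert the user.
    publisher = pubsub_v1.PublisherClient()
    topic = publisher.topic_path("my-project", "query-results")  # placeholder
    publisher.publish(topic, json.dumps({"stuck_orders": stuck}).encode())
    return "ok"
```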
Also, you might want to check BigQuery, which has a built-in feature for scheduling queries.

Google Cloud Spanner real time Change Data Capture to PubSub/Kafka through Cloud Data Fusion or Others

I would like to build a real-time change data capture (log-based preferred) pipeline from Google Cloud Spanner to PubSub/Kafka for my downstream real-time applications. Could you please let me know if there is a good, cost-effective way to achieve that? I would appreciate any advice and recommendations.
In addition, for Cloud Data Fusion from Google, I noticed that it can go in real time from MySQL/PostgreSQL to Cloud Spanner, but I did not find a way to go from Cloud Spanner to PubSub/Kafka in real time.
I also found two other approaches, listed here for any comments or suggestions:
Use Debezium, a log-based change data capture Kafka connector: https://cloud.google.com/architecture/capturing-change-logs-with-debezium#deploying_debezium_on_gke_on_google_cloud
Create a polling service (which may miss some data) that polls data from Cloud Spanner: https://cloud.google.com/architecture/deploying-event-sourced-systems-with-cloud-spanner
I would be grateful for any suggestions or comments on this.
There's an open-source implementation of a polling service for Cloud Spanner that can also automatically push changes to PubSub: https://github.com/cloudspannerecosystem/spanner-change-watcher
It is, however, not log-based, and it has some inherent limitations:
It can miss updates if the same record is updated twice within the polling interval; in that case, only the last value will be reported.
It only supports soft deletes.
You could have a look at the samples to see whether it might suit your needs, at least to some degree: https://github.com/cloudspannerecosystem/spanner-change-watcher/tree/master/samples
Cloud Spanner has a new feature called Change Streams that would allow building a downstream pipeline from Spanner to PubSub/Kafka.
At this time, there's no pre-packaged Spanner to PubSub/Kafka connector.
The way to read change streams currently is to use the SpannerIO Apache Beam connector, which lets you build the pipeline with Dataflow, or to query the API directly.
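As a sketch of the "query the API directly" option, this is roughly what tailing a change stream could look like with the Python client. The instance, database, and stream names are placeholders, and the exact read-function arguments are assumptions for illustration; check the change streams documentation for the real signature. In a production pipeline, the Beam SpannerIO connector handles partitioning and retries for you.

```python
# Sketch: read change records from a Spanner change stream directly.
# The READ_my_stream table-valued function and its arguments are
# illustrative; verify against the change streams docs.
import datetime
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-db")  # placeholders

start = datetime.datetime.now(datetime.timezone.utc)
with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        """SELECT ChangeRecord
           FROM READ_my_stream(             -- change stream named my_stream
               start_timestamp => @start,
               end_timestamp => NULL,       -- NULL keeps tailing
               partition_token => NULL,     -- root partition
               heartbeat_milliseconds => 10000)""",
        params={"start": start},
        param_types={"start": spanner.param_types.TIMESTAMP})
    for row in rows:
        print(row)  # forward each record to PubSub/Kafka here
```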
Disclaimer: I'm a Developer Advocate who works with the Cloud Spanner team.

Connect services Data Connect and Db2 Warehouse on Cloud (formerly dashDB)

I am working on the migration of a Db2 process which connects to several remote servers, exports data into our local DB, and then manipulates it (inserts computed data, calculated times, etc.). I have created some activities in Data Connect to replicate the data export from the different data sources and load it into local tables. The scripts that handle the data have to run in Db2 Warehouse on Cloud (ex dashDB).
Currently, these scripts run automatically, triggered by the first (manual) task. However, with the new process split across two services, I cannot automate it. Furthermore, we have many activities in Data Connect, so the work keeps switching between Data Connect and Db2, and you have to go from one console to the other.
Does anyone know of a Bluemix service which allows scheduling or triggering jobs or events across services? Is there a way to use the API and do this programmatically?
Thanks
Well, Bluemix offers Workload Scheduler, and Data Connect allows you to schedule activities.
Juan,
A couple of things come to mind for automation here. Do the databases you are speaking of have IP line of sight to the Warehouse DB? If so, remote tables may be able to help, depending on the source database. Once the data is visible, you should be able to write a SQL process that manages everything from the Warehouse DB.
The other possibility is external tables, as long as the data is visible on the head node. There are some other choices, like S3 storage: the concept is that if you can push your data into S3 storage, you can pull it into the Warehouse DB. You should be able to coordinate this all from the Warehouse DB side as long as the data is visible through remote tables and/or external tables, as in the sketch below.
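To make the external-table idea concrete, here is a sketch driven from the Db2 Warehouse on Cloud side with the Python ibm_db driver. The table definitions and especially the CREATE EXTERNAL TABLE options (the S3 clause, its credentials, and the delimiter) are illustrative assumptions; check the Db2 Warehouse on Cloud documentation for the exact syntax your plan supports.

```python
# Sketch: pull a file pushed to S3 into a local Db2 Warehouse table
# via an external table, all coordinated from the Warehouse side.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=BLUDB;HOSTNAME=<host>;PORT=50001;UID=<user>;PWD=<pwd>;SECURITY=SSL;",
    "", "")

# Expose the S3 object as a queryable external table (syntax illustrative).
ibm_db.exec_immediate(conn, """
    CREATE EXTERNAL TABLE STAGING_ORDERS (ID INT, AMOUNT DECIMAL(10,2))
    USING (DATAOBJECT 'exports/orders.csv'
           S3('s3.amazonaws.com', '<access-key>', '<secret-key>', '<bucket>')
           DELIMITER ',')
""")

# Then pull it into a local table and run the rest of the SQL process.
ibm_db.exec_immediate(conn,
    "INSERT INTO ORDERS_LOCAL SELECT * FROM STAGING_ORDERS")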

Is there a way to enable an SQL log to see/optimize my queries using Cloud SQL

I have started testing a Google Cloud SQL instance with a desktop-based application. So far I am impressed with the performance; even if it is a bit laggy, it does the job. My next step is to see what simple modifications I can make to my application, mostly intended to reduce access to the database, and to optimize if there is something more to do.
How can I log the SQL commands sent to the database in order to check what queries are being issued? My app uses ODBC drivers on Windows.
Regards
What you probably want is to turn on the general log. Unfortunately, that requires SUPER privileges, and that was removed some time ago (announcement). We are going to provide a way to tweak parameters like that via the Cloud SQL API. For now, the best solution is to set up a local server and use the logging on that one. If you really want it on production, ping us on the google-cloud-sql-discuss Google group and we'll enable SUPER for your instance.
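For reference, this is a sketch of the local-server workaround: point a copy of your app (or this script) at a local MySQL instance where you do have SUPER, turn on the general log, and read back every statement the ODBC driver sent. Host and credentials are placeholders.

```python
# Sketch: enable MySQL's general log on a LOCAL server and inspect it.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(host="127.0.0.1", user="root", password="...")
cur = conn.cursor()

# Log every statement into the mysql.general_log table.
cur.execute("SET GLOBAL log_output = 'TABLE'")
cur.execute("SET GLOBAL general_log = 'ON'")

# ... exercise your application against this local server, then:
cur.execute("SELECT event_time, argument FROM mysql.general_log "
            "ORDER BY event_time DESC LIMIT 20")
for event_time, argument in cur.fetchall():
    print(event_time, argument)
```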

How to connect a database using uLink MMO with RPC

I am new to Unity and uLink MMO.
I am developing a 3D game. In this project I need to connect to a MySQL database.
I have done the database connectivity using the following method:
1. Made a web service in C#.NET, which returns data to me in JSON format. The database connection string is in the web service.
2. Read the JSON output using a Unity C# file.
Now I want to connect to the database using uLink+RPC.
Is there any way to connect to the database using RPC (without a web service and uGameDB)?
Please tell me the steps, or the overall scenario, to connect to a database using uLink + RPC.
Thanks in advance.
Regards
Bharat
All database communication with MySQL should be done asynchronously; otherwise your game server will pause, and its frames per second (FPS) will drop to zero, until the database gives an answer. How you make it asynchronous is up to you. Just make sure the Unity server's main thread isn't stopped while it waits for the database to reply.
As long as you stick with MySQL, you could try one of these solutions:
1. Use the asynchronous API for MySQL via MySQL Connector/Net. The asynchronous API is described here: http://mysql-connector-net.sourcearchive.com/documentation/6.1.2-1/classMySql_1_1Data_1_1MySqlClient_1_1MySqlCommand.html
2. Start one or several threads to handle the communication with MySQL. This way you can use several parallel "normal" connections to MySQL without stopping the main thread in Unity. The hard part is implementing the callbacks when the answer arrives from the database; the sketch below shows the general shape of that pattern.
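Purely as an illustration of solution no. 2, here is the worker-thread-plus-queues shape (shown in Python for brevity; in Unity you would write the same structure in C# with MySQL Connector/Net, and the database call here is a stand-in):

```python
# Sketch: a worker thread owns the DB connection; the game loop hands
# it queries and polls for finished callbacks, so the main thread
# never blocks on MySQL.
import queue
import threading
import time

requests_q = queue.Queue()   # (sql, callback) pairs from the game loop
completed_q = queue.Queue()  # (callback, result) pairs back to the game loop

def db_worker():
    """The only thread that talks to the database."""
    while True:
        sql, callback = requests_q.get()
        result = f"rows for: {sql}"  # stand-in for the real MySQL call
        completed_q.put((callback, result))

threading.Thread(target=db_worker, daemon=True).start()

# From the game loop (Update() in Unity): enqueue work, never wait.
requests_q.put(("SELECT * FROM players WHERE id = 42",
                lambda rows: print("got", rows)))

time.sleep(0.1)  # stand-in for later frames; the game loop just retries

# Once per frame, on the main thread: drain any finished callbacks.
while not completed_q.empty():
    callback, result = completed_q.get()
    callback(result)
```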
None of these ways are without challenges.
We (MuchDifferent, makers of uLink) might publish some examples of how to implement solution no. 1 in the future, but at the moment we are focusing on releasing uGameDB instead.
/David