Google Cloud SQL operations callback? - google-cloud-sql

I currently have an application which triggers import jobs on Google Cloud SQL using their API:
https://cloud.google.com/sql/docs/admin-api/v1beta4/instances/import
This works great. However, the call only requests the import of a SQL file; I have to check a minute or two afterwards whether the operation actually succeeded.
What I would like, is to somehow register a callback to notify my application when the operation is complete. Then I can delete the bucket item and mark the data as persisted.
I have no idea if this is possible, but I would be grateful for any advice. Perhaps the Pub/Sub API could be used for this, but so far I have been unable to find any documentation on how this would be done.

There's currently no out-of-the-box way to do this. You need to poll the operation status to determine when it's finished.
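For illustration, a minimal polling sketch in TypeScript might look like the following. It assumes the googleapis npm package and Application Default Credentials; the poll interval and function name are my own choices, not part of the API, so treat it as a sketch rather than a definitive implementation.

import { google } from 'googleapis';

const sqladmin = google.sqladmin('v1beta4');
const POLL_INTERVAL_MS = 10_000; // arbitrary polling interval

async function waitForImport(project: string, operationName: string): Promise<void> {
  // The import request's response contains an Operation; its "name" is what we poll.
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/sqlservice.admin'],
  });
  google.options({ auth });

  // Poll operations.get until the operation reports DONE (or surfaces errors).
  for (;;) {
    const { data: op } = await sqladmin.operations.get({
      project,
      operation: operationName,
    });
    if (op.status === 'DONE') {
      if (op.error?.errors?.length) {
        throw new Error(`Import failed: ${JSON.stringify(op.error.errors)}`);
      }
      return; // now it is safe to delete the bucket object and mark the data as persisted
    }
    await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS));
  }
}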

Related

Google Assistant SDK - Actions get information from SQL Server

I'm trying to make my own Google Assistant Actions.
I would like to ask a question and have Google Assistant search my SQL Server database (the data source) for the result, then read the result out loud. Would this be possible? Where can I search or read for more information about how to do this?
What you are looking for is called Actions on Google broadly. Specifically, you're looking to build a conversational action where the user can ask questions that match Intents that you provide to the Assistant to match. When an Intent is matched, the information is passed to your code, which is running as a webhook, to generate a response.
Your webhook can do pretty much whatever you want, as long as you do it quickly enough (in about 5 seconds) and return a response. This can include database queries or any other processing or business logic necessary. Details about doing so for SQL Server are out of scope for this particular question, but it should be very similar to running SQL Server queries from any other server you operate.
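As a rough illustration only (not the exact Actions SDK wiring), a webhook along these lines could run the SQL Server query with the mssql package and return an answer. The request/response shape shown is Dialogflow-style, and the server, table, and column names are invented placeholders.

import express from 'express';
import sql from 'mssql';

const app = express();
app.use(express.json());

// Connection details are placeholders.
const dbConfig = {
  server: 'my-sql-server.example.com',
  database: 'MyDatabase',
  user: 'webhook_user',
  password: process.env.DB_PASSWORD ?? '',
  options: { encrypt: true },
};

app.post('/fulfillment', async (req, res) => {
  // Pull the matched intent's parameter out of the request body
  // (the exact path differs between Dialogflow and the Actions SDK).
  const productName: string = req.body?.queryResult?.parameters?.product ?? '';

  const pool = await sql.connect(dbConfig);
  const result = await pool
    .request()
    .input('name', sql.NVarChar, productName)
    .query('SELECT TOP 1 Price FROM Products WHERE Name = @name'); // invented table/column

  const price = result.recordset[0]?.Price;
  const answer =
    price != null
      ? `The price of ${productName} is ${price} dollars.`
      : `Sorry, I couldn't find ${productName}.`;

  // Dialogflow-style fulfillment response; keep total time well under ~5 seconds.
  res.json({ fulfillmentText: answer });
});

app.listen(8080);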

Cannot create a batch pipeline to get data from ZohoCRM with HTTP plugin 1.2.1 to BigQuery. Returns Spark Program 'phase-1' failed

My first post here. I'm new to Data Fusion and have little to no coding experience.
I want to get data from Zoho CRM into BigQuery, with each Zoho CRM module (e.g. Accounts, Contacts, ...) becoming a separate table in BigQuery.
To connect to Zoho CRM I obtained a code, token, refresh token and everything else needed, as described here: https://www.zoho.com/crm/developer/docs/api/v2/get-records.html. Then I ran a successful Get Records request via Postman as described there, and it returned the records from the Zoho CRM Accounts module as JSON.
I thought it would all be fine and set the parameters in Data Fusion (DataFusion_settings_1 and DataFusion_settings_2); the pipeline validated fine. Then I previewed and ran the pipeline without deploying it. It failed with the errors shown in the logs (logs_screenshot). I tried manually entering a few fields in the schema when the format was JSON, and I tried changing the format to CSV; neither worked. I also tried switching Verify HTTPS Trust Certificates on and off. It did not help.
I'd be really thankful for some help. Thanks.
Update, 2020-12-03
I got in touch with a Google Cloud Account Manager, who took my question to their engineers. Here is the info:
The HTTP plugin can be used to "fetch Atom or RSS feeds regularly, or to fetch the status of an external system"; it does not seem to be designed for general APIs.
At the moment a more suitable tool for data collected via APIs is Dataflow https://cloud.google.com/dataflow
"Google Cloud Dataflow is used as the primary ETL mechanism, extracting the data from the API Endpoints specified by the customer, which is then transformed into the required format and pushed into BigQuery, Cloud Storage and Pub/Sub."
https://www.onixnet.com/insights/gcp-101-an-introduction-to-google-cloud-platform
So in the next weeks I'll be looking at Dataflow.
Can you please attach the complete logs of the preview run? Make sure to redact any PII. Also, what version of CDF are you using? Is the CDF instance private or public?
Thanks and Regards,
Sagar
Did you end up using Dataflow?
I am also experiencing the same issue with the HTTP plugin, but my temporary workaround was to use Cloud Scheduler to periodically trigger a Cloud Function that fetches my data from the API and exports it as JSON to GCS, where Data Fusion can then pick it up (a sketch of such a function is below).
My solution is of course non-ideal, so I am still looking for a way to use the Data Fusion HTTP plugin. I was able to make it work to get sample data from public API endpoints, but for a reason still unknown to me I can't get it to work for my actual API.
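For what it's worth, a sketch of that Cloud Scheduler + Cloud Function workaround could look like this (HTTP-triggered function on a Node 18+ runtime; the bucket name, API endpoint, and token handling are placeholders, not a tested setup):

import { Storage } from '@google-cloud/storage';
import type { Request, Response } from 'express';

const storage = new Storage();
const BUCKET = 'my-staging-bucket';                          // placeholder
const API_URL = 'https://www.zohoapis.com/crm/v2/Accounts';  // example endpoint

// HTTP-triggered Cloud Function, invoked periodically by Cloud Scheduler.
// Uses the global fetch available on the Node 18+ runtime.
export async function exportToGcs(req: Request, res: Response): Promise<void> {
  // Fetch the records from the source API (real token refresh handling omitted).
  const apiRes = await fetch(API_URL, {
    headers: { Authorization: `Zoho-oauthtoken ${process.env.ZOHO_TOKEN}` },
  });
  const body = await apiRes.json();

  // Write the payload to GCS so Data Fusion (or BigQuery) can pick it up from there.
  const fileName = `zoho/accounts-${Date.now()}.json`;
  await storage.bucket(BUCKET).file(fileName).save(JSON.stringify(body), {
    contentType: 'application/json',
  });

  res.status(200).send(`Wrote gs://${BUCKET}/${fileName}`);
}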

How to know the number of requests

I'm using React and Firebase, and when I check the usage on Firestore, I see a lot of requests being made. The problem is that I'm not the only one using it, so I don't know if most of them are mine or not. Is there any way (using the console maybe?) to know how many requests I'm making?
There is currently no way to track the source of reads and writes happening in Firestore. You can only see the total volume of those requests in the console.

How to automate bots to monitor for successful queues on Orchestrator?

I have a project that deals with queue items being loaded successfully and unsuccessfully. At the moment I check this manually, which is tedious and also prone to false positives: Orchestrator can state that new queue items have been added, but when I access the actual job (process), nothing has been added.
I would like to know, is there a way to monitor queue success and failure rates on Orchestrator instead of monitoring it manually?
You can access pretty much any information via the Orchestrator API.
You can find the "Orchestrator HTTP Request" activity, which will allow you to access any relevant endpoint.
Note that the provisioned Robot in Orchestrator needs to have the right access permissions, so please have a look at what roles are associated with the Robot user.
The API reference can be found here:
https://docs.uipath.com/orchestrator/reference
You will see it mentions swagger, which in turn will give you all the information you need to access the relevant APIs.
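As a hedged sketch of what such a call could look like outside the activity (in TypeScript on Node 18+), the following queries the OData QueueItems endpoint for counts by status. The URL, folder id, and token handling are placeholders, and the exact endpoint names and query options should be verified against the swagger reference above.

// Placeholders: Orchestrator URL, folder id, and a bearer token obtained beforehand.
const ORCH_URL = 'https://my-orchestrator.example.com';
const TOKEN = process.env.ORCH_TOKEN ?? '';
const FOLDER_ID = '123456'; // id of the folder (organization unit) holding the queue

async function countQueueItems(status: 'Successful' | 'Failed'): Promise<number> {
  // Filter queue items by status; for a single queue you would normally also
  // filter on QueueDefinitionId (check the swagger reference for exact property names).
  const filter = encodeURIComponent(`Status eq '${status}'`);
  const res = await fetch(
    `${ORCH_URL}/odata/QueueItems?$filter=${filter}&$top=0&$count=true`,
    {
      headers: {
        Authorization: `Bearer ${TOKEN}`,
        'X-UIPATH-OrganizationUnitId': FOLDER_ID,
      },
    }
  );
  const body = (await res.json()) as Record<string, unknown>;
  // OData reports the total number of matching items in '@odata.count'.
  return Number(body['@odata.count'] ?? 0);
}

// Compare success vs. failure counts instead of checking the UI manually.
async function report(): Promise<void> {
  const ok = await countQueueItems('Successful');
  const failed = await countQueueItems('Failed');
  console.log(`Queue items: ${ok} successful, ${failed} failed`);
}

report().catch(console.error);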

sync API calls in Node.JS

Folks
I am not a Node.JS expert, and as the product manager of a team that uses Node.JS, I have the following dilemma.
My team uses Node.JS to build a platform of which a business rules engine is a key component. The Rules Engine (RE) has commands to make API calls to various target servers.
The RE intends to execute its statements sequentially, but my tech team tells me that web API calls are executed in parallel (asynchronously) in Node. Hence, if I have an API call, followed by statements that process the data fetched from that call, followed by another API call that is passed the processed data, I am told the second API call will receive invalid data because it will be executed alongside the first. Is this true?
If so, what are some nice ways to effectively solve this without hurting performance?
As said earlier, I want statements, irrespective of whether they are API calls or not, to be executed in sequence.
thanks
That's not true. In the callback for the first response (or after awaiting its Promise), process the data and then send off the second request; the two calls will then run strictly in sequence.
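A minimal sketch of that sequencing with async/await (Node 18+ global fetch; URLs and field names are invented):

async function runRule(): Promise<void> {
  // 1. First API call: "await" pauses this function (without blocking Node's
  //    event loop) until the response has arrived.
  const firstRes = await fetch('https://api.example.com/customers/42');
  const customer = (await firstRes.json()) as { id: number; region: string };

  // 2. Process the fetched data.
  const payload = {
    customerId: customer.id,
    shippingZone: customer.region.toUpperCase(),
  };

  // 3. The second API call only starts now, with the processed data.
  const secondRes = await fetch('https://api.example.com/quotes', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  console.log('Quote created:', await secondRes.json());
}

runRule().catch(console.error);

Awaiting the first call only suspends this one function until its response arrives, so sequencing dependent calls like this costs nothing in throughput; calls that are genuinely independent of each other can still be fired off together with Promise.all when performance matters.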