Azure Copy Activity REST Results Unexpected

I'm attempting to pull data from the Square Connect v1 API using ADF. I'm using a Copy Activity with a REST source. I am successfully pulling back data; however, the results are unexpected.
The endpoint is /v1/{location_id}/payments, and I have three parameters configured on the source.
I can successfully pull this data via Postman.
The results are stored in a Blob and are as if I did not specify any parameters whatsoever. Only when I hardcode the parameters into the relative path do I get correct results.
I feel I must be missing a setting somewhere, but which one?
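For reference, the working Postman request boils down to a plain HTTP GET like the sketch below (written in Python only to show the shape of the call; the parameter names are placeholders standing in for the three parameters actually configured on the source):

    # Minimal sketch of the request that works in Postman, using Python's requests.
    # The parameter names (begin_time, end_time, limit) are placeholders for the
    # three parameters configured on the source; substitute the real names/values.
    import requests

    location_id = "YOUR_LOCATION_ID"  # placeholder
    url = f"https://connect.squareup.com/v1/{location_id}/payments"

    params = {
        "begin_time": "2019-01-01T00:00:00Z",  # assumed parameter
        "end_time": "2019-01-31T23:59:59Z",    # assumed parameter
        "limit": 200,                          # assumed parameter
    }

    resp = requests.get(
        url,
        params=params,
        headers={"Authorization": "Bearer YOUR_ACCESS_TOKEN"},
    )
    resp.raise_for_status()
    print(resp.json())

The Copy Activity's relative path plus its query parameters need to produce this same final URL.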

You can try setting the values you want in a Set Variable activity, and then have your Copy Activity reference those variables. This will tell you whether or not the issue is with the dynamic content. I have run into some unexpected behavior myself. The benefit of the intermediate Set Variable activity is twofold: first, it coerces the data type; second, it lets you see what the value actually is.
My apologies for not using comments. I do not yet have enough points to comment.


Azure Data Factory - REST Pagination rules

I'm trying to pull data from HubSpot into my SQL Server database through an Azure Data Factory pipeline using a REST dataset. I'm having trouble setting up the right pagination rules. I've already spent a day on Google and the MS guides, but I find it hard to get it working properly.
This is the source API. I am able to connect and pull the first set of 20 rows. The response body returns an offset that can be used with vidOffset=.
I need to feed the vid-offset value from the response body back into the next HTTP request, and the process needs to stop when has-more returns 'false'.
I tried to reproduce this in my environment and got the results below:
First, I created a linked service with this URL: https://api.hubapi.com/contacts/v1/lists/all/contacts/all?hapikey=demo&vidOffset
Then I created the pagination end-condition rule with $.has-more and an absolute URL.
For demo purposes, I used a storage account as the sink.
The pipeline run was successful.
For more information, refer to this MS document.
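For reference, the paging behavior those rules reproduce looks roughly like this when written out by hand. This is a plain-Python sketch, not what ADF runs internally; the count parameter is an assumption based on the 20-row pages mentioned in the question:

    # Sketch of the HubSpot v1 contacts paging loop that the ADF pagination
    # rules reproduce. The vid-offset / has-more field names come from the
    # question; count=20 is an assumption matching the default page size.
    import requests

    BASE = "https://api.hubapi.com/contacts/v1/lists/all/contacts/all"
    params = {"hapikey": "demo", "count": 20}

    offset = None
    while True:
        if offset is not None:
            params["vidOffset"] = offset  # feed the previous offset into the next request
        body = requests.get(BASE, params=params).json()

        contacts = body.get("contacts", [])
        print(f"fetched {len(contacts)} contacts")  # write these rows to the sink here

        if not body.get("has-more"):  # end condition: $.has-more is false
            break
        offset = body["vid-offset"]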

Is there any way to query components in an Azure Pipeline?

I know one can use the REST API to query pipeline activities, but is there any way to query pipeline components (i.e., a listing of linked services, sources, sinks, parameters, etc.)?
Right now I'm manually recording all the components I see in the pipeline; being able to query these components would make this listing faster and more accurate.
Thanks, Jeannie
I haven't found a way to get all the components of a pipeline directly. If the information you want is defined in the YAML file, you cannot obtain it directly; you may need to parse the YAML file yourself.
If the information you want is defined in the UI, such as variables, you can get it through the REST API Definitions - Get. And you can get the resources of the pipeline through Resources - List.
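As a rough sketch of what calling Definitions - Get looks like with Python's requests (the organization, project, definition id, PAT, and api-version below are placeholders; check the exact route against the linked docs):

    # Hedged sketch of the Definitions - Get REST call mentioned above.
    # Organization, project, definition id, and the PAT are placeholders.
    import requests

    org = "my-org"           # placeholder
    project = "my-project"   # placeholder
    definition_id = 42       # placeholder
    pat = "YOUR_PERSONAL_ACCESS_TOKEN"

    url = (f"https://dev.azure.com/{org}/{project}"
           f"/_apis/build/definitions/{definition_id}?api-version=6.0")

    resp = requests.get(url, auth=("", pat))  # PAT passed via basic auth
    resp.raise_for_status()
    definition = resp.json()

    # The returned JSON includes the definition, including variables defined in the UI (if any).
    print(definition.get("variables", {}))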

Setting an id on a task using the Google Tasks API returns 400 Invalid Value

I am using Google's Java API for a project.
Strangely, inserting a task without setting an id works fine. However, inserting a task with an id returns a 400 Invalid Value error. The id is to be used for syncing local data with Google Tasks.
I'm pretty sure there's nothing wrong with the algorithm that generates the ids for the tasks. The same algorithm works perfectly with the Google Calendar API.
Am I missing something here?
You may refer to this SO answer. It suggests passing the id key/value pair together with the title information you are already sending. There's no documentation to indicate this is a required parameter, especially since it is included in the URL. Google requires the id of the task to be passed as part of the URL, the parameters, and the body.
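To make that suggestion concrete, here is a sketch of sending the id alongside the title in the request body. The question uses the Java client; the sketch below uses the Python client library (google-api-python-client) purely for illustration, and the credentials and id values are placeholders:

    # Sketch of the suggestion above: include the id key/value pair in the body
    # alongside the title. Uses the Python client purely for illustration.
    from google.oauth2.credentials import Credentials
    from googleapiclient.discovery import build

    creds = Credentials(token="YOUR_ACCESS_TOKEN")  # placeholder; use a real OAuth flow
    service = build("tasks", "v1", credentials=creds)

    task = {
        "id": "locally-generated-id",  # the id produced by your own algorithm
        "title": "Sync test task",
    }

    result = service.tasks().insert(tasklist="@default", body=task).execute()
    print(result)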

Tarantool shiny dashboard

I want to use the Tarantool database for logging user activity.
Are there any out-of-the-box solutions for creating a web dashboard with nice charts based on the collected data?
A long time ago, using a very old version of Tarantool, I created a draft of tarbon, a time-series database with an interface identical to carbon-cache.
Since then the protocol has changed, but the general idea is still the same: use spaces to store the data, a compact data organization and the right indexes to access the spaces as time-series rows, and Lua to prepare the resulting JSON.
That solution performed very well (on both reads and writes), but that old version lacked disk storage, and without disk I was very limited in metrics capacity.
Tarantool has an embedded Lua language, so you could generate JSON from your data and use any charting library. For example, D3.js has a method to load JSON directly from a URL.
d3.json(url[, callback])
Creates a request for the JSON file at the specified url with the mime type "application/json". If a callback is specified, the request is immediately issued with the GET method, and the callback will be invoked asynchronously when the file is loaded or the request fails; the callback is invoked with two arguments: the error, if any, and the parsed JSON. The parsed JSON is undefined if an error occurs. If no callback is specified, the returned request can be issued using xhr.get or similar, and handled using xhr.on.
You could also look at c3.js, a simple facade for D3.
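Purely as an illustration of the overall shape (the JSON could equally be generated inside Tarantool with Lua, as suggested above), here is a sketch in Python that reads a hypothetical user_activity space through the tarantool connector and serves it as JSON for d3.json() or c3.js to load; the space name and field layout are assumptions:

    # Sketch only: expose Tarantool data as JSON over HTTP for a charting library.
    # The "user_activity" space and its (ts, user, action) fields are hypothetical.
    import tarantool
    from flask import Flask, jsonify

    app = Flask(__name__)
    conn = tarantool.connect("localhost", 3301)  # default Tarantool port

    @app.route("/activity.json")
    def activity():
        # Full scan of the hypothetical space; each tuple is (ts, user, action).
        rows = conn.select("user_activity")
        data = [{"ts": ts, "user": user, "action": action}
                for ts, user, action in rows]
        return jsonify(data)

    if __name__ == "__main__":
        app.run(port=8080)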

Mule: after delivering a message, save the current timestamp for later use. What's the correct idiom?

I'm connecting to a third-party web service to retrieve rows from the underlying database. I can optionally pass a parameter like this:
http://server.com/resource?createdAfter=[yyyy-MM-dd hh:ss]
to get only the rows created after a given date.
This means I have to store the current timestamp (using #[function:datestamp:...], no problem) in one message scope and then retrieve it in another.
It also implies the timestamp should be preserved in case of an outage.
Obviously, I could use a subflow containing a file endpoint, saving to a designated file on a path. But intuitively, based on my (very!) limited experience, that feels hackish.
What's the correct idiom to solve this?
Thanks!
The Object Store Module is designed just for that: to allow you to save bits of information from your flows.
See:
http://mulesoft.github.io/mule-module-objectstore/mule/objectstore-config.html
https://github.com/mulesoft/mule-module-objectstore/