How to combine Graph API field expansions?

When using this syntax to query multiple metrics:
https://graph.facebook.com/v2.4/69999732748?fields=insights.metric(page_fan_adds_unique,page_fan_adds,page_fan_adds_by_paid_non_paid_unique)&access_token=XYZ&period=day&since=2015-08-12&until=2015-08-13
I always get data for the last three days, regardless of the values of the since and until parameters.
https://graph.facebook.com/v2.4/69999732748/insights/page_fan_adds_unique?period=day&access_token=XYZ&since=2015-09-10&until=2015-09-11
If I ask for a single metric, as above, the date parameters do take effect.
Is there a different syntax for requesting multiple insights metrics which will accept the date parameters?

I think this should work:
curl -G \
-d "access_token=PAGE_TOKEN" \
-d "fields=insights.metric(page_fan_adds_unique,page_fan_adds,page_fan_adds_by_paid_non_paid_unique).since(2015-08-12).until(2015-08-13).period(day)" \
-d "pretty=1" \
"https://graph.facebook.com/v2.4/me"
You are requesting the page's insights as a field. This field contains a list of results in a data array (insights.data), and you want to filter this array.
The way to do that is by chaining parameterizations to the requested insights field like so:
fields=insights
.filter1(params)
.filter2(params)
.filter3(params)
Each .filterX(params) will be applied to the particular field that precedes it.
I've added newlines and indentation for clarity, but in your actual request you'd chain them all in a single line, without spaces.
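Assembled as a single line, the same request as the curl example above looks like this (PAGE_TOKEN is a placeholder for your actual token):
https://graph.facebook.com/v2.4/me?fields=insights.metric(page_fan_adds_unique,page_fan_adds,page_fan_adds_by_paid_non_paid_unique).since(2015-08-12).until(2015-08-13).period(day)&access_token=PAGE_TOKEN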

Related

Export Google BigQuery datasets in a single project as a list in the bq command line?

I've looked at this resource, but it's not quite what I need. This question is what I want to accomplish, but I want to run it from the bq command line.
For instance, in the past I've exported table information as JSON from the bq command line like so:
bq show --schema --format=prettyjson Dataset.TableView > /home/directory/Dataset.TableView.json
This gives pretty-printed JSON of the table information for a specified dataset in a set project. I would just like a .csv (or any kind of list) of all the dataset names in the project, but I can't figure out how to change that command line to output what I want.
As an alternative to @DanielZagales' answer, and to further contribute to the community, you can use the bq command line. According to the documentation, bq ls lists all the datasets in a project, as follows:
bq ls -a --format=pretty --project_id your-project-id
The -a flag is short for --all, which ensures that all datasets are included in the list. The --format=pretty flag outputs the list as a table; other formats are described here. Furthermore, you can filter for datasets matching an expression with --filter labels.key:value, or cap the number of results with --max_results or -n.
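As a minimal sketch combining these flags (the label key and value here are hypothetical):
# List up to 10 datasets in the project whose label "team" has the value "analytics"
bq ls --filter "labels.team:analytics" --max_results 10 --project_id your-project-id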
Note: you can also list all the tables within a dataset, as described here.
You should be able to query the information schema to get the results you want.
example:
select * from `project_id.INFORMATION_SCHEMA.SCHEMATA`;
You can then run that through the bq command (note the --nouse_legacy_sql flag, since the backtick-quoted identifier requires standard SQL):
bq query --format=csv --nouse_legacy_sql 'select * from `project_id.INFORMATION_SCHEMA.SCHEMATA`;'
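To save the result as a file, you can redirect the CSV output; a sketch, selecting only schema_name, the column in INFORMATION_SCHEMA.SCHEMATA that holds the dataset name:
# Write one dataset name per line to datasets.csv
bq query --format=csv --nouse_legacy_sql 'select schema_name from `project_id.INFORMATION_SCHEMA.SCHEMATA`;' > datasets.csv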

Increase Filter Limit in Apache Superset

I am trying to create a filter for a field that contains over 5000 unique values. However, the filter's query is automatically setting a limit of 1000 rows, meaning that the majority of the values do not get displayed in the filter dropdown.
I updated the config.py file inside the 'anaconda3/lib/python3.7/site-packages' directory by increasing DEFAULT_SQLLAB_LIMIT and QUERY_SEARCH_LIMIT to 6000; however, this did not work.
Is there any other config that I need to update?
P.S. The snippet below shows the JSON representation of the filter's query, which is where the issue seems to come from.
"query": "SELECT casenumber AS casenumber\nFROM pa_permits_2019\nGROUP BY casenumber\nORDER BY COUNT(*) DESC\nLIMIT 1000\nOFFSET 0"
After using the grep command to find all files containing the text '1000', I found that the filter limit can be configured through filter_row_limit in viz.py.
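A minimal sketch of that search, assuming Superset is installed under the site-packages path mentioned in the question:
# Find files in the Superset package that mention the default limit of 1000
grep -rln "1000" anaconda3/lib/python3.7/site-packages/superset/
After editing filter_row_limit in viz.py, you will likely need to restart Superset for the change to take effect.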

PostgREST filter returns incorrect results when value contains a special character

Using the current version of PostgREST (v0.3.2.0), I am attempting a very simple GET call.
The DB contains two records with the following accountNames: "Account 1" and "Account #2".
This works:
GET localhost:3000/accounts?accountName=eq.Account 1
==> proper data is retrieved.
This does NOT work:
GET localhost:3000/accounts?accountName=eq.Account #2
==> NO data is retrieved. Obviously the # character prevents the filter from working properly.
Is there a way around this problem?
Use URL encoding: /accounts?accountName=eq.Account%20%232. An unencoded # starts the URL fragment, so everything after it is stripped before the request ever reaches PostgREST; encode the space as %20 and the # as %23.
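A sketch with curl, which can do the encoding for you; -G turns the data into query-string parameters, and --data-urlencode percent-encodes everything after the first =:
# Sends GET /accounts?accountName=eq.Account%20%232
curl -G "http://localhost:3000/accounts" --data-urlencode "accountName=eq.Account #2"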

How to limit the -findall result to a certain number of records in a FileMaker curl call

I know the -findall query parameter for FileMaker Server returns all the rows in the specified database, but for testing purposes I just want to show, say, three.
I am aware SQL has a LIMIT clause, but how does a curl call to FileMaker handle the same scenario?
In this case, -find is not an option for me.
I am assuming you are using curl to query a FileMaker server. If so, you are looking for the -max URL parameter.
From the Help PDF for FileMaker Server 13
-max (Maximum records) query parameter
Specifies the maximum number of records you want returned.
Value is: A number, or use the value all to return all records. If -max is not specified, all records are returned.
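A sketch against the XML Web Publishing interface; the host, database, and layout names here are hypothetical:
# Return at most 3 records from the -findall query
curl "http://fms.example.com/fmi/xml/fmresultset.xml?-db=MyDatabase&-lay=MyLayout&-findall&-max=3"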

Is there any way to get raw JSON data from Splunk for a given query?

Is there any way I can get raw JSON data from Splunk for a given query in a RESTful way?
Consider the following timechart query:
index=* earliest=<from_time> latest=<to_time> | timechart span=1s count
Key things in the query are: 1. the start/end time, 2. the time span (say, seconds), and 3. the value (say, count).
The expected JSON response would be:
{"fields":["_time","count","_span"],
"rows":[ ["2014-12-25T00:00:00.000-06:00","1460981","1"],
...,
["2014-12-25T01:00:00.000-06:00","536889","1"]
]
}
In the Splunk UI, this response comes from XHR (Ajax) calls using output_mode=json_rows, which require session and authentication setup.
I'm looking for a RESTful implementation of the same, with authentication.
You can do something like this using the curl command:
curl -k -u admin:changeme \
  --data-urlencode search="search index=* earliest=<from_time> latest=<to_time> | timechart span=1s count" \
  -d "output_mode=json" \
  "https://localhost:8089/servicesNS/admin/search/search/jobs/export"
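Since the question asks for the fields/rows shape shown above, you can swap the output mode in the same call:
# Same export call, but emitting the json_rows structure from the question
curl -k -u admin:changeme \
  --data-urlencode search="search index=* earliest=<from_time> latest=<to_time> | timechart span=1s count" \
  -d "output_mode=json_rows" \
  "https://localhost:8089/servicesNS/admin/search/search/jobs/export"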