The Google Analytics API documentation states that date ranges should not be specified when fetching lifetime values. But when I make such a request (without a date range), it returns an empty dimensions and metrics result, whereas when I include a date range, it returns dimension and metric values for that range.
The following is an excerpt from the API documentation:
Date ranges should not be specified for cohorts or Lifetime value
requests.
For example, if I make the request without a date range, as follows:
{
"reportRequests": [
{
"viewId": "XXXXXXXXX",
"dimensions": [
{
"name": "ga:date"
},
{
"name": "ga:eventLabel"
}
],
"metrics": [
{
"expression": "ga:totalEvents"
}
]
}
]
}
I get the following response:
{
"reports": [
{
"columnHeader": {
"dimensions": [
"ga:date",
"ga:eventLabel"
],
"metricHeader": {
"metricHeaderEntries": [
{
"name": "ga:totalEvents",
"type": "INTEGER"
}
]
}
},
"data": {
"totals": [
{
"values": [
"0"
]
}
]
}
}
]
}
However, if I include the date range,
{
"reportRequests": [
{
"viewId": "XXXXXXXX",
"dimensions": [
{
"name": "ga:date"
},
{
"name": "ga:eventLabel"
}
],
"metrics": [
{
"expression": "ga:totalEvents"
}
],
"dateRanges": [
{
"startDate": "2016-01-01",
"endDate": "2016-04-30"
}
]
}
]
}
I get the following response:
{
"reports": [
{
"columnHeader": {
"dimensions": [
"ga:date",
"ga:eventLabel"
],
"metricHeader": {
"metricHeaderEntries": [
{
"name": "ga:totalEvents",
"type": "INTEGER"
}
]
}
},
"data": {
"rows": [
{
"dimensions": [
"20160412",
"http://mytestblog.com/"
],
"metrics": [
{
"values": [
"1"
]
}
]
},
{
"dimensions": [
"20160412",
"http://mytestblog.com/2016/04/first-post.html"
],
"metrics": [
{
"values": [
"3"
]
}
]
},
{
"dimensions": [
"20160419",
"http://mytestblog.com/"
],
"metrics": [
{
"values": [
"4"
]
}
]
},
{
"dimensions": [
"20160419",
"http://mytestblog.com/2016/04/fourth.html"
],
"metrics": [
{
"values": [
"13"
]
}
]
}
],
"totals": [
{
"values": [
"21"
]
}
],
"rowCount": 4,
"minimums": [
{
"values": [
"1"
]
}
],
"maximums": [
{
"values": [
"13"
]
}
]
}
}
]
}
Why is it that, even though the documentation says otherwise, I have to specify a date range in the ReportRequest to get the values? Am I misunderstanding the meaning of lifetime values here?
The reportRequest object should have either a value for dateRanges or a cohortGroup definition. When you omit both, the request assumes the default values of a startDate of 7daysAgo and an endDate of yesterday.
The correct interpretation of the docs is that the reportRequest should not have a dateRange defined for cohort and LTV requests. But in order to make a cohort or lifetime value request, you must add a cohort definition. For lifetime value requests, the cohort definition should have a specific dateRange, in addition to the lifetimeValue field being set to true:
POST https://analyticsreporting.googleapis.com/v4/reports:batchGet
{
"reportRequests": [
{
"viewId": "XXXX",
"dimensions": [
{"name": "ga:cohort" },
{"name": "ga:cohortNthWeek" }],
"metrics": [
{"expression": "ga:cohortTotalUsersWithLifetimeCriteria"},
{"expression": "ga:cohortRevenuePerUser"}
],
"cohortGroup": {
"cohorts": [{
"name": "cohort 1",
"type": "FIRST_VISIT_DATE",
"dateRange": {
"startDate": "2015-08-01",
"endDate": "2015-09-01"
}
},
{
"name": "cohort 2",
"type": "FIRST_VISIT_DATE",
"dateRange": {
"startDate": "2015-07-01",
"end_date": "2015-08-01"
}
}],
"lifetimeValue": True
}
}]
}
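For reference, a minimal Python sketch of sending this kind of lifetime value request with the google-api-python-client library might look like the following (the key file path and view ID are placeholders you would replace with your own):

# Minimal sketch: sending a lifetime value (LTV) request with the
# Google API Python client. Key file path and view ID are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/analytics.readonly']
credentials = service_account.Credentials.from_service_account_file(
    'key.json', scopes=SCOPES)  # hypothetical service-account key file
analytics = build('analyticsreporting', 'v4', credentials=credentials)

body = {
    'reportRequests': [{
        'viewId': 'XXXX',
        'dimensions': [{'name': 'ga:cohort'}, {'name': 'ga:cohortNthWeek'}],
        'metrics': [
            {'expression': 'ga:cohortTotalUsersWithLifetimeCriteria'},
            {'expression': 'ga:cohortRevenuePerUser'},
        ],
        'cohortGroup': {
            'cohorts': [{
                'name': 'cohort 1',
                'type': 'FIRST_VISIT_DATE',
                'dateRange': {'startDate': '2015-08-01', 'endDate': '2015-09-01'},
            }],
            'lifetimeValue': True,  # note: no top-level dateRanges in the request
        },
    }]
}

response = analytics.reports().batchGet(body=body).execute()
print(response)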
EDIT: I have found out that Mongo does not allow special characters such as dots and the dollar sign as keys in the document, so I had to restructure the JSON a bit. My question remains the same (I removed the old version so this is more readable, but you can still see it in the edit history). The new structure looks as follows:
{
"name": "test1",
"main": [
{
"subs": [
{
"data": [
{
"group": "ABC",
"values": [
"tcsh"
]
},
{
"group": "AA",
"values": [
"6.13.00"
]
}
]
},
{
"data": [
{
"group": "xyz",
"values": [
"tcsh"
]
},
{
"group": "SADA",
"values": [
"6.13.00"
]
}
]
}
],
"main_name": "MAIN",
"main_path": "play_ground/MAIN"
},
{
"subs": [
{
"data": [
{
"group": "BAB",
"values": [
"tcsh"
]
},
{
"group": "GO",
"values": [
"6.13.00"
]
}
]
}
],
"main_name": "MAIN2",
"main_path": "play_ground/MAIN2"
}
],
"user": "easdasa",
"timestamp": "1564437533"
}
I want to get all reports that have the name test1 and the user easdasa. Then I would like to take the latest block of data from each block of subs, using the timestamp.
For example, in the following array I have two reports:
[{
"name": "test1",
"main": [
{
"subs": [
{
"data": [
{
"group": "xyz",
"values": [
"tcsh"
]
},
{
"group": "SADA",
"values": [
"6.13.00"
]
}
]
}
],
"main_name": "MAIN",
"main_path": "play_ground/MAIN"
}
],
"timestamp": "1564437533"
},
{
"name": "test1",
"main": [
{
"subs": [
{
"data": [
{
"group": "ABC",
"values": [
"tcsh"
]
},
{
"group": "AA",
"values": [
"6.13.00"
]
}
]
},
{
"data": [
{
"group": "xyz",
"values": [
"tcsh"
]
},
{
"group": "SADA",
"values": [
"5.0.1",
"12312"
]
}
]
}
],
"main_name": "MAIN",
"main_path": "play_ground/MAIN"
}
],
"timestamp": "1564437522"
}]
The first report was created after the second report (per the timestamp). I can see that there is a block located in the second report but not in the first report:
{
"data": [
{
"group": "ABC",
"values": [
"tcsh"
]
},
{
"group": "AA",
"values": [
"6.13.00"
]
}
]
},
So I want the final report to have it (besides all the blocks from the first report). Also, you can see that the values of the SADA group are different, so we want to take the first report's block. The final report should be:
{
"name": "test1",
"main": [
{
"subs": [
{
"data": [
{
"group": "ABC",
"values": [
"tcsh"
]
},
{
"group": "AA",
"values": [
"6.13.00"
]
}
]
},
{
"data": [
{
"group": "xyz",
"values": [
"tcsh"
]
},
{
"group": "SADA",
"values": [
"6.13.00"
]
}
]
}
],
"main_name": "MAIN",
"main_path": "play_ground/MAIN"
}
],
"timestamp": "1564437533"
}
In other words, at the data level I want the values from the latest report, and at the subs level I want all existing subs. To put it more clearly: at the data level I want all the groups and values of the latest report, and at the subs level I want to have all the subs.
If I could specify steps:
Get all reports by user and name.
Theoretically merge all reports into one main report (the implementation could be different). The merge will be done by main_name.
Remove, by timestamp, all old subs values that already exist in the latest report, so the final report has at the subs level only the newest objects plus objects from the old reports that are not in the newer reports.
Which query should I use in order to get the wanted report?
Please use the query below and check the stats. Performance can be improved by adding proper indexes for your query patterns; use $explain to check query performance. I've assumed your array exists in a field whose key is values. Please let me know if this works; if it doesn't, provide sample data and we can check on that:
db.getCollection('yourcollection').aggregate([
    { $unwind: '$values' },
    { $match: { 'values.name': 'test1', 'values.user': 'galih' } },
    { $sort: { 'values.timestamp': -1 } },
    { $limit: 1 }
])
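If a single aggregation isn't enough, here is a rough pymongo sketch of the merge described in the question (steps 1-3). The database and collection names are placeholders, and identifying a sub by the set of its data group names is an assumption based on the sample documents:

# Rough sketch, not a single aggregation: fetch matching reports newest
# first, then merge subs per main_name, keeping the newest version of
# each sub. A sub is keyed here by the set of its data "group" names,
# which is an assumption based on the sample documents.
from pymongo import MongoClient, DESCENDING

client = MongoClient()                       # connection details assumed
coll = client['mydb']['reports']             # hypothetical db/collection names

reports = coll.find({'name': 'test1', 'user': 'easdasa'}).sort('timestamp', DESCENDING)

merged = {}                                  # main_name -> {sub_key -> sub}
paths = {}                                   # main_name -> main_path
for report in reports:                       # newest report comes first
    for main in report.get('main', []):
        subs = merged.setdefault(main['main_name'], {})
        paths.setdefault(main['main_name'], main.get('main_path'))
        for sub in main.get('subs', []):
            key = frozenset(d['group'] for d in sub.get('data', []))
            subs.setdefault(key, sub)        # keep only the newest occurrence

final_main = [
    {'main_name': name, 'main_path': paths[name], 'subs': list(subs.values())}
    for name, subs in merged.items()
]
print(final_main)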
There are some orders in GA. I want to get some order properties (e.g. campaign) with the GA Reporting API v4. I have the following request:
POST https://analyticsreporting.googleapis.com/v4/reports:batchGet?key={YOUR_API_KEY}
{
"reportRequests": [
{
"dateRanges": [
{
"startDate": "2018-11-14",
"endDate": "2018-11-16"
}
],
"pivots": [
{
"dimensions": [
{
"name": "ga:campaign"
}
],
"metrics": [
{
"expression": "ga:totalValue"
}
]
}
],
"dimensions": [
{
"name": "ga:transactionId"
}
],
"viewId": "63535262"
}
]
}
Certain orders are skipped; am I specifying the wrong metrics?
Here is part of the result; order 1811/430 is not in the list:
{
"dimensions": [
"1811/429"
],
"metrics": [
{
"pivotValueRegions": [
{
"values": [
"6000.0",
"0.0",
"0.0",
"0.0",
"0.0",
"0.0",
"0.0",
"0.0",
"0.0",
"0.0"
]
}
]
}
]
},
{
"dimensions": [
"1811/431"
],
"metrics": [
{
"pivotValueRegions": [
However, in the GA itself the order is present and it has a campaign.
Sorry for my poor English; I will be very grateful for any answers!
I'm trying to create an Elasticsearch mapping for Twitter's Place geo bounding_box array and I can't get Elasticsearch to index it as a geo bounding box. In my app, I will be getting the raw JSON from Twitter4j; however, the bounding box coordinates do not close the polygon, so for the purpose of this test I edited the JSON and closed it. I'm using Elastic Cloud (ES v5) and the REST API, and then visualizing with Kibana.
Here is the mapping I'm trying to use. I've tried several variations with and without a "properties" block and it doesn't work. With this mapping, I am successfully able to PUT the mapping, but when I POST the document, Kibana recognizes the array as an unknown field type.
The point coordinates field is indexed as a geo_point just fine; it's the bounding box that is not.
Here is my mapping:
PUT /testgeo
{
"mappings": {
"tweet": {
"_all": {
"enabled": false
},
"properties": {
"created_at": {
"type": "date",
"format": "EEE MMM dd HH:mm:ss Z YYYY||strict_date_optional_time||epoch_millis"
},
"coordinates": {
"properties": {
"coordinates": {
"type": "geo_point",
"ignore_malformed": true
}
}
},
"place": {
"properties": {
"bounding_box": {
"type": "geo_shape",
"tree": "quadtree",
"precision": "1m"
}
}
}
}
}
}
}
Here is the snippet of the document I am trying to POST (NOTE: I manually added the 5th array element to close the bounding box).
POST /testgeo/tweet/1
{
...
"coordinates": {
"type": "point",
"coordinates": [
0.78055556,
51.97222222
]
},
"place": {
"id": "0c31a1a5b970086e",
"url": "https:\/\/api.twitter.com\/1.1\/geo\/id\/0c31a1a5b970086e.json",
"place_type": "city",
"name": "Bures",
"full_name": "Bures, England",
"country_code": "GB",
"country": "United Kingdom",
"bounding_box": {
"type": "polygon",
"coordinates": [
[
[
0.773779,
51.96971
],
[
0.773779,
51.976437
],
[
0.781794,
51.976437
],
[
0.781794,
51.96971
],
[
0.773779,
51.96971
]
]
]
},
"attributes": {
}
},
If anyone can identify the reason for this and correct it, I would be most appreciative.
NOTE 1: I tried using the mapping and document examples from Elastic's geo_shape documentation page, and Kibana again showed the location field as an unknown type.
PUT /testgeo
{
"mappings": {
"tweet": {
"_all": {
"enabled": false
},
"properties": {
"location": {
"type": "geo_shape",
"tree": "quadtree",
"precision": "1m"
}
}
}
}
}
POST /testgeo/tweet/1
{
"location" : {
"type" : "polygon",
"coordinates" : [
[ [100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0], [100.0, 0.0] ]
]
}
}
It turns out that Kibana simply does not reflect the type for geo_shapes. When doing a geo query, however, Elasticsearch returns correct results.
For example:
"query": {
"bool": {
"must": {
"match_all": {}
},
"filter": {
"geo_shape": {
"place.bounding_box": {
"shape": {
"type": "polygon",
"coordinates": [
[
[
0.773779,
51.96971
],
[
0.773779,
51.976437
],
[
0.781794,
51.976437
],
[
0.781794,
51.96971
],
[
0.773779,
51.96971
]
]
]
},
"relation": "within"
}
}
}
}
}
}
Even though you seem to have found a solution to your problem, I just wanted to say there is now a fix for this issue: use the coerce option in the mapping for geo_shape, like so:
"properties": {
"bounding_box": {
"type": "geo_shape",
"tree": "quadtree",
"precision": "1m",
"coerce": true
}
}
Also see:
https://github.com/elastic/elasticsearch/pull/11161
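For completeness, here is a small sketch using the official elasticsearch Python client (5.x-style API; the index and type names come from the question, host details are assumed) that creates the mapping with coerce and runs the geo_shape query:

# Sketch with the elasticsearch Python client (5.x-style API).
# Host, index, and type names are assumptions/placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch(['http://localhost:9200'])

mapping = {
    'mappings': {
        'tweet': {
            'properties': {
                'place': {
                    'properties': {
                        'bounding_box': {
                            'type': 'geo_shape',
                            'tree': 'quadtree',
                            'precision': '1m',
                            'coerce': True,  # closes unclosed polygon rings at index time
                        }
                    }
                }
            }
        }
    }
}
es.indices.create(index='testgeo', body=mapping)

query = {
    'query': {
        'bool': {
            'filter': {
                'geo_shape': {
                    'place.bounding_box': {
                        'shape': {
                            'type': 'polygon',
                            'coordinates': [[
                                [0.773779, 51.96971], [0.773779, 51.976437],
                                [0.781794, 51.976437], [0.781794, 51.96971],
                                [0.773779, 51.96971],
                            ]],
                        },
                        'relation': 'within',
                    }
                }
            }
        }
    }
}
print(es.search(index='testgeo', body=query))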
Ideally, the call to this API - https://api.surveymonkey.net/v2/surveys/get_responses - should give a JSON response like this:
{
"data": [
{
"questions": [
{
"answers": [
{
"col": "3024965133",
"row": "3024965139"
},
{
"col": "3024965134",
"row": "3024965140"
},
{
"col": "3024965135",
"row": "3024965141"
},
{
"row": "0",
"text": "Other!"
}
],
"question_id": "316084770"
},
{
"answers": [
{
"col": "3024965125",
"row": "3024965122"
},
{
"col": "3024965124",
"row": "3024965123"
}
],
"question_id": "316084761"
},
{
"answers": [
{
"row": "3024959616"
}
],
"question_id": "316083321"
},
{
"answers": [
{
"row": "0",
"text": "This is an open answer"
}
],
"question_id": "316083320"
},
{
"answers": [
{
"col": "3024962639",
"row": "3024962638"
},
{
"col": "3024962640",
"row": "3024962637"
},
{
"col": "3024962639",
"row": "3024962636"
}
],
"question_id": "316084090"
},
{
"answers": [
{
"row": "3024964761",
"text": "9"
},
{
"row": "3024964762",
"text": "1"
}
],
"question_id": "316084724"
}
],
"respondent_id": "2500019027"
}
],
"status": 0
}
But when I request responses to my survey for specific respondents via the API, I get a blank array.
Note: I am able to see the proper responses via the SurveyMonkey UI console.
If you use API v3, you may get a better error message:
https://developer.surveymonkey.com/api/v3/#surveys-id-responses
It seems your issue could be that you have more than 100 responses for that survey, and you can only access the first 100 on a basic plan, from what I see in the logs.
If that's the case, the solution would be to:
Upgrade your plan
Delete some old responses
Duplicate the survey maybe?
Again I would recommend moving to V3 of the API for a better experience.
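As a rough illustration, fetching responses with v3 of the API could look like this in Python (the survey ID and access token are placeholders; the /responses/bulk endpoint returns the full answer data per respondent):

# Sketch: fetching survey responses with SurveyMonkey API v3 via requests.
# SURVEY_ID and ACCESS_TOKEN are placeholders.
import requests

ACCESS_TOKEN = 'YOUR_ACCESS_TOKEN'
SURVEY_ID = '1234567890'

url = 'https://api.surveymonkey.com/v3/surveys/{}/responses/bulk'.format(SURVEY_ID)
headers = {
    'Authorization': 'Bearer {}'.format(ACCESS_TOKEN),
    'Content-Type': 'application/json',
}

resp = requests.get(url, headers=headers, params={'per_page': 100})
resp.raise_for_status()  # plan-limit or permission problems surface here as HTTP errors
for response in resp.json().get('data', []):
    print(response['id'], len(response.get('pages', [])))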
I am trying to use CloudFormation for the first time to configure a CloudFront distribution that uses an S3 bucket as its origin.
However, I am receiving the error "One or more of your origins do not exist" when the template is run. I have assumed it is down to the origin DomainName being configured incorrectly, but I have not been able to find a configuration that works.
I currently have the following template:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"AssetBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "cdn-assets",
"AccessControl": "PublicRead",
"CorsConfiguration": {
"CorsRules": [
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"GET"
],
"AllowedOrigins": [
"*"
],
"Id": "OpenCors",
"MaxAge": "3600"
}
]
}
}
},
"AssetCDN": {
"Type": "AWS::CloudFront::Distribution",
"Properties": {
"DistributionConfig": {
"Origins": [
{
"DomainName": {
"Fn::GetAtt": [
"AssetBucket",
"DomainName"
]
},
"Id": "AssetBucketOrigin",
"S3OriginConfig": {}
}
],
"Enabled": "true",
"DefaultCacheBehavior": {
"Compress": true,
"AllowedMethods": [
"GET",
"HEAD",
"OPTIONS"
],
"TargetOriginId": "origin-access-identity/cloudfront/AssetCDN",
"ForwardedValues": {
"QueryString": "false",
"Cookies": {
"Forward": "none"
}
},
"ViewerProtocolPolicy": "allow-all"
},
"PriceClass": "PriceClass_All",
"ViewerCertificate": {
"CloudFrontDefaultCertificate": "true"
}
}
},
"DependsOn": [
"AssetBucket"
]
}
}
}
I have not been able to find much advice on this, so I am hoping someone can point me in the right direction.
Your Cache Behavior's TargetOriginId property must match the value specified in the S3 Origin's Id property.
In your above example, TargetOriginId is origin-access-identity/cloudfront/AssetCDN while Id is AssetBucketOrigin, which is causing the error.
The real issue here is that CloudFront has a dependency on the S3 bucket, and you should put a reference to the bucket inside the CloudFront object so CFN knows it should create the S3 bucket first. To do this, change your Origins.Id and DefaultCacheBehavior.TargetOriginId properties to a Ref to your bucket resource:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"AssetBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "cdn-assets",
"AccessControl": "PublicRead",
"CorsConfiguration": {
"CorsRules": [
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"GET"
],
"AllowedOrigins": [
"*"
],
"Id": "OpenCors",
"MaxAge": "3600"
}
]
}
}
},
"AssetCDN": {
"Type": "AWS::CloudFront::Distribution",
"Properties": {
"DistributionConfig": {
"Origins": [
{
"DomainName": {
"Fn::GetAtt": [
"AssetBucket",
"DomainName"
]
},
"Id": { "Ref": "AssetBucket" }, /// HERE!!!!
"S3OriginConfig": {}
}
],
"Enabled": "true",
"DefaultCacheBehavior": {
"Compress": true,
"AllowedMethods": [
"GET",
"HEAD",
"OPTIONS"
],
"TargetOriginId": { "Ref": "AssetBucket" }, /// HERE!!!!
"ForwardedValues": {
"QueryString": "false",
"Cookies": {
"Forward": "none"
}
},
"ViewerProtocolPolicy": "allow-all"
},
"PriceClass": "PriceClass_All",
"ViewerCertificate": {
"CloudFrontDefaultCertificate": "true"
}
}
},
"DependsOn": [
"AssetBucket"
]
}
}
}
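If it helps, the corrected template can be validated and deployed with a short boto3 script; the file name, stack name, and region below are placeholders:

# Sketch: validating and creating the stack with boto3.
# File name, stack name, and region are placeholders.
import boto3

cfn = boto3.client('cloudformation', region_name='us-east-1')

with open('asset-cdn.template.json') as f:
    template_body = f.read()

# Catches template syntax errors before attempting the (slow) distribution create.
cfn.validate_template(TemplateBody=template_body)

cfn.create_stack(StackName='asset-cdn', TemplateBody=template_body)
cfn.get_waiter('stack_create_complete').wait(StackName='asset-cdn')
print('Stack created')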