Two-dimensional request on the Google Analytics API (Adveronix-like)

I am trying to get the same data as an Adveronix spreadsheet report using the Google Analytics Python API.
So far I can get all of the data except for the Account column (just picture the table above without the Account column); I can get exactly the same data with the following request:
{'reportRequests': [{
    'viewId': '[ID]',
    'dateRanges': [{'startDate': '2022-01-01', 'endDate': '2022-01-01'}],
    'pivots': [{
        'dimensions': [{'name': 'ga:date'}],
        'metrics': [
            {'expression': 'ga:users'},
            {'expression': 'ga:newUsers'},
            {'expression': 'ga:sessions'},
            {'expression': 'ga:sessionsPerUser'},
            {'expression': 'ga:bounces'},
            {'expression': 'ga:timeOnPage'},
            {'expression': 'ga:avgSessionDuration'},
            {'expression': 'ga:pageviews'},
        ],
    }],
}]}
It would make sense to me if the following request worked:
{'reportRequests': [{
    'viewId': '118175578',
    'dateRanges': [{'startDate': '2022-01-01', 'endDate': '2022-01-01'}],
    'pivots': [{
        'dimensions': [{'name': 'ga:date'}, {'name': 'ga:account_name'}],
        'metrics': [
            {'expression': 'ga:users'},
            {'expression': 'ga:newUsers'},
            {'expression': 'ga:sessions'},
            {'expression': 'ga:sessionsPerUser'},
            {'expression': 'ga:bounces'},
            {'expression': 'ga:timeOnPage'},
            {'expression': 'ga:avgSessionDuration'},
            {'expression': 'ga:pageviews'},
        ],
    }],
}]}
since I'm only adding another dimension (I've tried adding "ga:city" and it worked just fine).
However, I get the following error:
HttpError: <HttpError 400 when requesting https://analyticsreporting.googleapis.com/v4/reports:batchGet?alt=json returned "Unknown dimension(s) included in pivot: ga:account_name
I've tried several variants of the dimension, like AccountName, ga:account, and ga:userId.
What am I doing wrong?
P.S. Out of curiosity, I've tried other dimensions from https://developers.google.com/analytics/devguides/reporting/data/v1/api-schema; most of them don't work.
How can I build a request that returns the same data as the table?
Adveronix seems to do it so easily.
Thanks

I DID IT!
The error was in the way I was building the request; it should be something like this:
{'reportRequests': [{
    'viewId': '[MY_VIEW_ID]',
    'dateRanges': [{'startDate': '2022-06-13', 'endDate': 'yesterday'}],
    'dimensions': [{'name': 'ga:date'}, {'name': 'ga:pageTitle'}, {'name': 'ga:pagePath'}],
    'metrics': [{'expression': 'ga:pageviews'}],
    'dimensionFilterClauses': filters,  # `filters` is built separately
    'pageSize': 100000,
}]}
No pivots, just dimensions.
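For anyone wanting to run this end to end, here is a minimal sketch of executing such a request body with the google-api-python-client library. The service-account setup, key file name, and view ID are my own assumptions, not part of the original answer:

# Minimal sketch: run a v4 Reporting API request with google-api-python-client.
# KEY_FILE and VIEW_ID are hypothetical placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

KEY_FILE = 'service-account.json'
VIEW_ID = '[MY_VIEW_ID]'

credentials = service_account.Credentials.from_service_account_file(
    KEY_FILE, scopes=['https://www.googleapis.com/auth/analytics.readonly'])
analytics = build('analyticsreporting', 'v4', credentials=credentials)

body = {'reportRequests': [{
    'viewId': VIEW_ID,
    'dateRanges': [{'startDate': '2022-06-13', 'endDate': 'yesterday'}],
    'dimensions': [{'name': 'ga:date'}, {'name': 'ga:pageTitle'}, {'name': 'ga:pagePath'}],
    'metrics': [{'expression': 'ga:pageviews'}],
    'pageSize': 100000,
}]}

response = analytics.reports().batchGet(body=body).execute()
for report in response.get('reports', []):
    for row in report.get('data', {}).get('rows', []):
        print(row['dimensions'], row['metrics'][0]['values'])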

Related

Split mint money with Metaplex

I am new to blockchain technology and I have an issue with splitting the mint money among multiple wallets with Metaplex.
I want to know if it is possible to have some wallets that get a percentage of the primary sale (mint) but do not get any royalties.
And if it is possible, how do I do it? (A JSON attribute in the metadata?)
Here is my JSON metadata:
{
  "name": "name",
  "symbol": "symbol",
  "description": "Collection of 2 NFTS on the blockchain. this is the number 1 out of 2.",
  "seller_fee_basis_points": 500, // Here only public key 1 gets the royalties
  "image": "1.png",
  "attributes": [
    {"trait_type": "Background", "value": "Door"},
    {"trait_type": "Ninja", "value": "Red"}
  ],
  "properties": {
    "creators": [
      {"address": "public key 1", "share": 50},
      {"address": "public key 2", "share": 50}
    ],
    "files": [{"uri": "1.png", "type": "image/png"}]
  },
  "collection": {"name": "Lavish Fighters", "family": "Rare"}
}
I do know we can't comment in JSON; the comment is just there to make it more understandable.
The creators field in the JSON metadata is deprecated in the newer standards. Also, those creators only control the royalty share on secondary markets.
You can take a look at Hydra; it's a wallet of wallets that works to split mint funds between different wallets. There is also a Hydra-UI that can work on mainnet.

Data modeling for a table in MongoDB

I have a hypothetical table with the following information about the cost of vehicles, and I am trying to model the data for storing in an Expenses collection in MongoDB:
Category | Item           | Cost
---------|----------------|-----
Land     | Car            | 1000
Land     | Motorbike      | 500
Air      | Plane          | 2000
Air      | Others: Rocket | 5000
One assumption for this use case is that the Categories and Items are fixed fields in the table, while users will fill in the Cost for each specific Item. Should there be other vehicles in a category, users will fill them in under "Others".
Currently, I am considering 2 options for storing the documents:
Option 1 - as nested objects:
[
  {
    "category": "land",
    "items": [
      {"name": "Car", "cost": 1000},
      {"name": "Motorbike", "cost": 500}
    ]
  },
  {
    "category": "air",
    "items": [
      {"name": "Plane", "cost": 2000},
      {"name": "Others", "remarks": "Rocket", "cost": 5000}
    ]
  }
]
Option 2 - as a flattened array, where the React application will map over the array to render the data in the table:
[
  {"category": "land", "item": "car", "cost": 1000},
  {"category": "land", "item": "motorbike", "cost": 500},
  {"category": "air", "item": "plane", "cost": 2000},
  {"category": "air", "item": "others", "remarks": "rocket", "cost": 5000}
]
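To make the trade-off concrete: the two shapes are interconvertible at query time. A minimal sketch along these lines (the connection string, database, and collection names are made up) groups Option 2's flat rows into Option 1's nested shape with an aggregation:

# Sketch: group Option 2's flat documents into Option 1's nested shape.
# Connection string, database, and collection names are hypothetical.
from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017')
expenses = client['demo']['expenses']

expenses.insert_many([
    {'category': 'land', 'item': 'car', 'cost': 1000},
    {'category': 'land', 'item': 'motorbike', 'cost': 500},
    {'category': 'air', 'item': 'plane', 'cost': 2000},
    {'category': 'air', 'item': 'others', 'remarks': 'rocket', 'cost': 5000},
])

# $push collects each category's items, rebuilding the nested form.
pipeline = [
    {'$group': {'_id': '$category',
                'items': {'$push': {'name': '$item', 'cost': '$cost'}}}},
    {'$project': {'category': '$_id', 'items': 1, '_id': 0}},
]
for doc in expenses.aggregate(pipeline):
    print(doc)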
I was hoping to get suggestions on which is the better approach, or whether there is a better approach you have in mind.
Thanks in advance! :)

Power BI DirectQuery connection to PostgreSQL error. OLE DB or ODBC error: [Expression.Error] We couldn't fold the expression to the data source

According to the information from Microsoft DataConnectors, I want to create a connector from Power BI to PostgreSQL via this ODBC driver using DirectQuery. I reused the code from the Microsoft sample and just adjusted the ConnectionString, nothing else.
After building a .mez file and importing it into Power BI, I want to connect to the PostgreSQL server.
This is how the connection dialog looks. The connection is successful, and I can see the tables in the DB and use them. And now, here is where the issue comes in: when I select some column to display the data in a table or plot, I get OLE DB or ODBC error: [Expression.Error] We couldn't fold the expression to the data source. I also enabled tracing in the Diagnostic options, so here is the log content:
DataMashup.TraceWarning: 24579:
"Start": "2018-05-18T10:51:56.6199845Z",
"Action": "OdbcQuery/FoldingWarning",
"HostProcessId": "25020",
"Function Name": "Group",
"ProductVersion": "2.58.5103.501 (PBIDesktop)",
"ActivityId": "f06a4b5b-09ba-40ce-bd99-424710286b77",
"Process": "Microsoft.Mashup.Container.NetFX40",
"Pid": 11080,
"Tid": 1,
"Duration": "00:00:00.0000051"
DataMashup.TraceInformation: 24579:
"Start": "2018-05-18T10:51:56.6199552Z",
"Action": "BackgroundThread/RollingTraceWriter/Flush",
"ProductVersion": "2.58.5103.501 (PBIDesktop)",
"ActivityId": "00000000-0000-0000-0000-000000000000",
"Process": "Microsoft.Mashup.Container.NetFX40",
"Pid": 11080,
"Tid": 8,
"Duration": "00:00:00.0000560"
DataMashup.TraceWarning: 24579:
"Start": "2018-05-18T10:51:56.6199999Z",
"Action": "OdbcQuery/FoldingWarning",
"HostProcessId": "25020",
"ErrorMessage": "This ODBC driver doesn't set the GroupByCapabilities feature. You can override it by using SqlCapabilities.",
"ProductVersion": "2.58.5103.501 (PBIDesktop)",
"ActivityId": "f06a4b5b-09ba-40ce-bd99-424710286b77",
"Process": "Microsoft.Mashup.Container.NetFX40",
"Pid": 11080,
"Tid": 1,
"Duration": "00:00:00.0000159"
DataMashup.TraceInformation: 24579:
"Start": "2018-05-18T10:51:56.6200385Z",
"Action": "BackgroundThread/RollingTraceWriter/Flush",
"ProductVersion": "2.58.5103.501 (PBIDesktop)",
"ActivityId": "00000000-0000-0000-0000-000000000000",
"Process": "Microsoft.Mashup.Container.NetFX40",
"Pid": 11080,
"Tid": 9,
"Duration": "00:00:00.0000215"
DataMashup.TraceWarning: 24579:
"Start": "2018-05-18T10:51:56.6201305Z",
"Action": "OdbcQueryDomain/ReportFoldingFailure",
"HostProcessId": "25020",
"Exception": "Exception:\r\nExceptionType: Microsoft.Mashup.Engine1.Runtime.FoldingFailureException, Microsoft.MashupEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35\r\nMessage: Folding failed. Please take a look the information in the trace.\r\nStackTrace:\n at Microsoft.Mashup.Engine1.Library.Odbc.OdbcQuery.Group(Grouping grouping)\r\n at Microsoft.Mashup.Engine1.Runtime.OptimizingQueryVisitor.VisitQuery(Query query, Func`2 operation)\r\n\r\n\r\n",
"ProductVersion": "2.58.5103.501 (PBIDesktop)",
"ActivityId": "f06a4b5b-09ba-40ce-bd99-424710286b77",
"Process": "Microsoft.Mashup.Container.NetFX40",
"Pid": 11080,
"Tid": 1,
"Duration": "00:00:00.0000504"
DataMashup.TraceInformation: 24579:
"Start": "2018-05-18T10:51:56.6202107Z",
"Action": "BackgroundThread/RollingTraceWriter/Flush",
"ProductVersion": "2.58.5103.501 (PBIDesktop)",
"ActivityId": "00000000-0000-0000-0000-000000000000",
"Process": "Microsoft.Mashup.Container.NetFX40",
"Pid": 11080,
"Tid": 8,
"Duration": "00:00:00.0000154"
DataMashup.TraceWarning: 24579:
"Start": "2018-05-18T10:51:56.6199413Z",
"Action": "RemotePageReader/RunStub",
"HostProcessId": "25020",
"Exception": "Exception:\r\nExceptionType: Microsoft.Mashup.Engine1.Runtime.ValueException, Microsoft.MashupEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35\r\nMessage: [Expression.Error] We couldn't fold the expression to the data source. Please try a simpler expression.\r\nStackTrace:\n at Microsoft.Mashup.Engine1.Library.Odbc.OdbcQueryDomain.ReportFoldingFailure(NotSupportedException ex)\r\n at Microsoft.Mashup.Engine1.Runtime.OptimizingQueryVisitor.VisitQuery(Query query, Func`2 operation)\r\n at Microsoft.Mashup.Engine1.Runtime.OptimizingQueryVisitor.VisitQuery(Query query)\r\n at Microsoft.Mashup.Engine1.Runtime.OptimizingQueryVisitor.VisitQuery(Query query, Func`2 operation)\r\n at Microsoft.Mashup.Engine1.Runtime.OptimizingQueryVisitor.VisitQuery(Query query)\r\n at Microsoft.Mashup.Engine1.Runtime.OptimizingQueryVisitor.Optimize(Query query)\r\n at Microsoft.Mashup.Engine1.Language.Query.QueryTableValue.get_OptimizedQuery()\r\n at Microsoft.Mashup.Engine1.Language.Query.QueryTableValue.GetReader()\r\n at Microsoft.Mashup.Engine.Interface.Tracing.TracingDataReaderSource.get_PageReader()\r\n at Microsoft.Mashup.Evaluator.RemoteDocumentEvaluator.Service.c__DisplayClass11.c__DisplayClass13.b__10()\r\n at Microsoft.Mashup.Evaluator.RemotePageReader.c__DisplayClass7.b__0()\r\n at Microsoft.Mashup.Evaluator.EvaluationHost.ReportExceptions(IHostTrace trace, IEngineHost engineHost, IMessageChannel channel, Action action)\r\n\r\n\r\n",
"ProductVersion": "2.58.5103.501 (PBIDesktop)",
"ActivityId": "f06a4b5b-09ba-40ce-bd99-424710286b77",
"Process": "Microsoft.Mashup.Container.NetFX40",
"Pid": 11080,
"Tid": 1,
"Duration": "00:00:00.0005557"
Any ideas how to resolve this error?
Thanks
I just connected to PostgreSQL with the DirectQuery storage mode for the first time today and ran into this error as well. Like you, the connection and everything else seemed to work fine, but then it threw an error when trying to use visuals that worked fine under the Import storage mode.
My error read: OLE DB or ODBC error: [Expression.Error] We couldn't fold the expression to the data source. Please try a simpler expression. I assume that the last sentence was added in a later version of Power BI.
Carl de Souza explains Query Folding here, and it appears that DirectQuery attempts to fold as much of the analysis as possible into the native query. I hadn't specified any query when I first pulled in the data, but once I did, the error disappeared.
For anyone who doesn't know how to do this: after clicking 'Get Data' (or 'New Source' if already in the Query Editor) and choosing PostgreSQL database, make sure to add a SQL statement in the Advanced options section.
For me, a simple
SELECT * FROM my_schema."my_table_name"
was sufficient.
"Sample filling out of the PostgreSQL database popup

How to create a JSONPath file to load data into Redshift

One of my sample JSON records:
{
"viewerId": "Ext-04835139",
"sid5": "269410578:2995631181:2211755370:3307088398:33879957",
"firstHbTimems": 1.506283958371E12,
"ipAddress": "74.58.57.31",
"streamUrl": "https://dc3-ll-livedazn-dznlivejp.hs.llnwd.net/live/channel/1007/all/stream.m3u8?event_id=61824040049&h=c912885e2a69ffa7ea84f45dc18c004d",
"asset": "[nlq9biy7trxl1cjceg70rogvd] Saints # Panthers",
"os": "IOS",
"osVersion": "10.3.3",
"deviceModel": "iPhone",
"geoInfo": {
"city": 63666,
"state": 3851,
"isp": 120,
"longitudeTimes1K": -73562,
"country": 37,
"dma": 0,
"asn": 5769,
"latitudeTimes1K": 45502,
"publicIP": 1245329695
},
"totalPlayingTime": 4.097,
"totalBufferingTime": 0.0,
"VST": 1.411,
"avgBitrate": 202.0,
"playStateSwitch": [
"{'seqNum': 0, 'eventNum': 0, 'sessionTimeMs': 7, 'startPlayState': 'eUnknown', 'endPlayState': 'eBuffering'}",
"{'seqNum': 1, 'eventNum': 5, 'sessionTimeMs': 1411, 'startPlayState': 'eBuffering', 'endPlayState': 'ePlaying'}"
],
"bitrateSwitch": [
],
"errorEvent": [
],
"tags": {
"LSsportName": "Football",
"c3.device.model": "iPhone+6+Plus",
"LSvideoType": "LIVE",
"c3.device.ua": "DAZN%2F5560+CFNetwork%2F811.5.4+Darwin%2F16.7.0",
"LSfixtureId": "5trxst8tv7slixckvawmtf949",
"genre": "Sport",
"LScompetitionName": "NFL+Game+Pass",
"show": "NFL+Game+Pass",
"c3.cmp.0._type": "DEVATLAS",
"c3.protocol.type": "cws",
"LSsportId": "9ita1e50vxttzd1xll3iyaulu",
"stageId": "8hm0ew6b8m7907ty8vy8tu4tl",
"LSvenueId": "na",
"syndicator": "None",
"applicationVersion": "2.0.8",
"deviceConnectionType": "wifi",
"c3.client.marketingName": "iPhone+6+Plus",
"playerVersion": "1.2.6.0",
"c3.cmp.0._id": "da",
"drmType": "AES128",
"c3.sh": "dc3-ll-livedazn-dznlivejp.hs.llnwd.net",
"c3.pt.ver": "10.3.3",
"applicationType": "ios",
"c3.viewer.id": "Ext-04835139",
"LSinterfaceLanguage": "en",
"c3.pt.os": "IOS",
"playerVendor": "Open+Source",
"c3.client.brand": "Apple",
"c3.cws.sf": "7",
"c3.cmp.0._ver": "1",
"c3.client.hwType": "Mobile+Phone",
"c3.pt.os.ver": "10.3.3",
"isAd": "false",
"c3.device.cver.bld": "2.124.0.33357",
"stageName": "Regular+Season",
"c3.client.osName": "iOS",
"contentType": "Live",
"c3.device.cver": "2.124.0",
"LScompetitionId": "wy3kluvb4efae1of0d8146c1",
"expireDate": "na",
"c3.client.model": "iPhone+6+Plus",
"c3.client.manufacturer": "Apple",
"LSproductionValue": "na",
"pubDate": "2017-09-23",
"c3.cluster.name": "production",
"accountType": "FreeTrial",
"c3.adaptor.type": "eCws1_7",
"c3.device.brand": "iPhone",
"c3.pt.br": "Non-Browser+Apps",
"contentId": "nlq9biy7trxl1cjceg70rogvd",
"streamingProtocol": "FairPlay",
"LSvenueName": "na",
"c3.device.type": "Mobile",
"c3.protocol.level": "2.4",
"c3.player.name": "AVPlayer",
"contentName": "Saints+%40+Panthers",
"c3.device.manufacturer": "Apple",
"c3.framework": "AVFoundation",
"c3.pt": "iOS",
"c3.device.ver": "6+Plus",
"c3.video.isLive": "T",
"c3.cmp.0._cfg_ver": "1504808821",
"c3.cws.clv": "2.124.0.33357",
"LScountryCode": "America%2FEl_Salvador"
},
"playername": "AVPlayer",
"isLive": "T",
"playerVersion": "1.2.6.0"
}
How do I create a JSONPath file to load it into Redshift?
Thanks
You have a nested array within your JSON, so a jsonpath will not expand that out for you.
You have a couple of choices on how to proceed:
- You can load your data at the higher level (e.g. playStateSwitch rather than seqNum within it) and then try to use Redshift to process that data. This can be tricky, as you cannot explode JSON data from an array in Redshift.
- You can preprocess the data using e.g. AWS Glue / Python / PySpark or some other ETL tool that can handle these nested arrays (see the sketch below).
It all depends on the end goal, which is not clear from the above description.
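As a rough illustration of the preprocessing choice, a short Python sketch along these lines could explode the nested playStateSwitch strings into flat rows before the COPY; the input and output file names are placeholders:

# Sketch: flatten the nested playStateSwitch array into one row per switch
# before loading into Redshift. Input/output file names are hypothetical.
import ast
import json

with open('session.json') as f:
    record = json.load(f)

rows = []
for entry in record.get('playStateSwitch', []):
    # Each entry is a Python-style dict serialized as a string (single
    # quotes), so ast.literal_eval is used instead of json.loads.
    switch = ast.literal_eval(entry)
    switch['viewerId'] = record['viewerId']  # keep the join key
    rows.append(switch)

# Write one JSON object per line, the layout Redshift's COPY expects.
with open('play_state_switch.jsonl', 'w') as f:
    for row in rows:
        f.write(json.dumps(row) + '\n')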
I would approach the solution in the following order:
1. Define which fields and array values are required to be loaded into Redshift. If the need is to copy all the records, then the next question is how to handle the multiple array records.
2. If an array or key/value is missing from the JSON source, then JSONPath will not work as-is, so it is better to update the JSON to add the missing array prior to COPYing the data set over to RS. The JSON update can be done using Linux commands or external tools like JP (see the additional references).
3. If all the values in the nested arrays are required, an alternative workaround is using an external table (see the linked example).
4. Otherwise, the JSONPath file can be developed in this format:
{
  "jsonpaths": [
    "$.viewerId",                  // root-level field
    ...
    "$.geoInfo.city",              // object hierarchy
    ...
    "$.playStateSwitch[0].seqNum"  // pick the required array index
    ...
  ]
}
(The // comments are for illustration only; they are not valid in an actual jsonpaths file.)
Hope this helps.
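If it helps, here is a small way to sanity-check candidate jsonpaths against the sample record, assuming the third-party jsonpath-ng package (not something the original answers used):

# Sketch: check which jsonpaths resolve against the sample record.
# Requires: pip install jsonpath-ng. The input file name is hypothetical.
import json
from jsonpath_ng import parse

with open('sample_record.json') as f:
    record = json.load(f)

for path in ['$.viewerId', '$.geoInfo.city', '$.playStateSwitch[0].seqNum']:
    matches = parse(path).find(record)
    print(path, '->', matches[0].value if matches else 'NO MATCH')

# Note: against the record above, the last path prints NO MATCH because the
# playStateSwitch entries are strings, not objects -- the nested-array
# problem both answers describe.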

Designing a REST API with redundant data: client-side or server-side data processing

I'm developing a JSON-based REST API that allows getting information about documents, where a 'document' resource has the following form:
{
"id": 1,
"name": "document 1",
...
"fields":
[
{"name": "field1", "category": "category1", ...},
{"name": "field2", "category": "category2", ...},
...
]
}
The GET /documents/:id route is quite straightforward, and I want the GET /documents route to provide an array of basic information about the documents instead of just links or IDs, something like:
[
{"id": 1, "name": "document 1"},
...
]
Now the UI needs to display the list of documents with the list of categories the fields of a document belong to:
document 1 (category1, category2)
document 2 (category2, category3, category4)
...
The first solution would be to add the fields field to each document in the response of GET /documents and let the client compute the list of categories (though this could lead to a poor UX when there are a lot of documents to display), e.g.:
[
  {"id": 1, "name": "document1", "fields":
    [
      {"name": "field1", "category": "category1", ...},
      ...
    ]
  }
]
The second one would be to compute that (redundant) piece of information server-side, to avoid putting too much data in the response and to spare the client from looping through each document's fields (but now the API is more dependent on how the UI presents the data), e.g.:
[
  {"id": 1, "name": "document 1", "categories": ["category1", "category2"]},
  ...
]
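For what it's worth, the server-side computation in this second option boils down to deduplicating the categories of each document's fields; a minimal sketch with made-up sample data:

# Sketch: derive a deduplicated "categories" list per document server-side.
# The documents list is made-up sample data.
documents = [
    {'id': 1, 'name': 'document 1', 'fields': [
        {'name': 'field1', 'category': 'category1'},
        {'name': 'field2', 'category': 'category2'},
        {'name': 'field3', 'category': 'category1'},
    ]},
]

def summarize(doc):
    # dict.fromkeys keeps first-seen order while dropping duplicates
    categories = list(dict.fromkeys(f['category'] for f in doc['fields']))
    return {'id': doc['id'], 'name': doc['name'], 'categories': categories}

print([summarize(d) for d in documents])
# [{'id': 1, 'name': 'document 1', 'categories': ['category1', 'category2']}]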
According to your experiences, which solution should I use?