Power BI DirectQuery connection to PostgreSQL error: OLE DB or ODBC error: [Expression.Error] We couldn't fold the expression to the data source

Following the Microsoft DataConnectors documentation, I want to create a custom connector from Power BI to PostgreSQL via the ODBC driver, using DirectQuery. I reused the code from the Microsoft sample and only adjusted the ConnectionString, nothing else.
After building the .mez file and importing it into Power BI, I connect to the PostgreSQL server.
This is what the connection dialog looks like. The connection is successful, and I can see the tables in the database and use them. And now, here is where the issue comes in: when I select a column to display its data in a table or plot, I get OLE DB or ODBC error: [Expression.Error] We couldn't fold the expression to the data source. I also enabled tracing in the Diagnostic options, so here is the log content:
DataMashup.TraceWarning: 24579:
"Start": "2018-05-18T10:51:56.6199845Z",
"Action": "OdbcQuery/FoldingWarning",
"HostProcessId": "25020",
"Function Name": "Group",
"ProductVersion": "2.58.5103.501 (PBIDesktop)",
"ActivityId": "f06a4b5b-09ba-40ce-bd99-424710286b77",
"Process": "Microsoft.Mashup.Container.NetFX40",
"Pid": 11080,
"Tid": 1,
"Duration": "00:00:00.0000051"
DataMashup.TraceInformation: 24579:
"Start": "2018-05-18T10:51:56.6199552Z",
"Action": "BackgroundThread/RollingTraceWriter/Flush",
"ProductVersion": "2.58.5103.501 (PBIDesktop)",
"ActivityId": "00000000-0000-0000-0000-000000000000",
"Process": "Microsoft.Mashup.Container.NetFX40",
"Pid": 11080,
"Tid": 8,
"Duration": "00:00:00.0000560"
DataMashup.TraceWarning: 24579:
"Start": "2018-05-18T10:51:56.6199999Z",
"Action": "OdbcQuery/FoldingWarning",
"HostProcessId": "25020",
"ErrorMessage": "This ODBC driver doesn't set the GroupByCapabilities feature. You can override it by using SqlCapabilities.",
"ProductVersion": "2.58.5103.501 (PBIDesktop)",
"ActivityId": "f06a4b5b-09ba-40ce-bd99-424710286b77",
"Process": "Microsoft.Mashup.Container.NetFX40",
"Pid": 11080,
"Tid": 1,
"Duration": "00:00:00.0000159"
DataMashup.TraceInformation: 24579:
"Start": "2018-05-18T10:51:56.6200385Z",
"Action": "BackgroundThread/RollingTraceWriter/Flush",
"ProductVersion": "2.58.5103.501 (PBIDesktop)",
"ActivityId": "00000000-0000-0000-0000-000000000000",
"Process": "Microsoft.Mashup.Container.NetFX40",
"Pid": 11080,
"Tid": 9,
"Duration": "00:00:00.0000215"
DataMashup.TraceWarning: 24579:
"Start": "2018-05-18T10:51:56.6201305Z",
"Action": "OdbcQueryDomain/ReportFoldingFailure",
"HostProcessId": "25020",
"Exception": "Exception:\r\nExceptionType: Microsoft.Mashup.Engine1.Runtime.FoldingFailureException, Microsoft.MashupEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35\r\nMessage: Folding failed. Please take a look the information in the trace.\r\nStackTrace:\n at Microsoft.Mashup.Engine1.Library.Odbc.OdbcQuery.Group(Grouping grouping)\r\n at Microsoft.Mashup.Engine1.Runtime.OptimizingQueryVisitor.VisitQuery(Query query, Func`2 operation)\r\n\r\n\r\n",
"ProductVersion": "2.58.5103.501 (PBIDesktop)",
"ActivityId": "f06a4b5b-09ba-40ce-bd99-424710286b77",
"Process": "Microsoft.Mashup.Container.NetFX40",
"Pid": 11080,
"Tid": 1,
"Duration": "00:00:00.0000504"
DataMashup.TraceInformation: 24579:
"Start": "2018-05-18T10:51:56.6202107Z",
"Action": "BackgroundThread/RollingTraceWriter/Flush",
"ProductVersion": "2.58.5103.501 (PBIDesktop)",
"ActivityId": "00000000-0000-0000-0000-000000000000",
"Process": "Microsoft.Mashup.Container.NetFX40",
"Pid": 11080,
"Tid": 8,
"Duration": "00:00:00.0000154"
DataMashup.TraceWarning: 24579:
"Start": "2018-05-18T10:51:56.6199413Z",
"Action": "RemotePageReader/RunStub",
"HostProcessId": "25020",
"Exception": "Exception:\r\nExceptionType: Microsoft.Mashup.Engine1.Runtime.ValueException, Microsoft.MashupEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35\r\nMessage: [Expression.Error] We couldn't fold the expression to the data source. Please try a simpler expression.\r\nStackTrace:\n at Microsoft.Mashup.Engine1.Library.Odbc.OdbcQueryDomain.ReportFoldingFailure(NotSupportedException ex)\r\n at Microsoft.Mashup.Engine1.Runtime.OptimizingQueryVisitor.VisitQuery(Query query, Func`2 operation)\r\n at Microsoft.Mashup.Engine1.Runtime.OptimizingQueryVisitor.VisitQuery(Query query)\r\n at Microsoft.Mashup.Engine1.Runtime.OptimizingQueryVisitor.VisitQuery(Query query, Func`2 operation)\r\n at Microsoft.Mashup.Engine1.Runtime.OptimizingQueryVisitor.VisitQuery(Query query)\r\n at Microsoft.Mashup.Engine1.Runtime.OptimizingQueryVisitor.Optimize(Query query)\r\n at Microsoft.Mashup.Engine1.Language.Query.QueryTableValue.get_OptimizedQuery()\r\n at Microsoft.Mashup.Engine1.Language.Query.QueryTableValue.GetReader()\r\n at Microsoft.Mashup.Engine.Interface.Tracing.TracingDataReaderSource.get_PageReader()\r\n at Microsoft.Mashup.Evaluator.RemoteDocumentEvaluator.Service.c__DisplayClass11.c__DisplayClass13.b__10()\r\n at Microsoft.Mashup.Evaluator.RemotePageReader.c__DisplayClass7.b__0()\r\n at Microsoft.Mashup.Evaluator.EvaluationHost.ReportExceptions(IHostTrace trace, IEngineHost engineHost, IMessageChannel channel, Action action)\r\n\r\n\r\n",
"ProductVersion": "2.58.5103.501 (PBIDesktop)",
"ActivityId": "f06a4b5b-09ba-40ce-bd99-424710286b77",
"Process": "Microsoft.Mashup.Container.NetFX40",
"Pid": 11080,
"Tid": 1,
"Duration": "00:00:00.0005557"
Any ideas how to resolve this error?
Thanks

I just connected to PostgreSQL with the DirectQuery storage mode for the first time today and ran into this error as well. Like you, the connection and everything else seemed to work fine, but Power BI then threw an error when I tried to use visuals that had worked fine with the Import storage mode.
My error read: OLE DB or ODBC error: [Expression.Error] We couldn't fold the expression to the data source. Please try a simpler expression. I assume that the last sentence was added in a later version of Power BI.
Carl de Souza explains query folding here; it appears that DirectQuery attempts to fold as much of the analysis as possible into the native query. I hadn't specified any query when I first pulled in the data, but once I did, the error disappeared.
For anyone who doesn't know how to do this: after clicking 'Get Data' (or 'New Source' if you're already in the Query Editor) and choosing PostgreSQL database, make sure to add a SQL statement in the Advanced options section.
For me, a simple
SELECT * FROM my_schema."my_table_name"
was sufficient.
"Sample filling out of the PostgreSQL database popup

Related

How can I select nested fields of CDC JSON to be the key of a record in Kafka using an SMT?

I have tried using the SMTs ValueToKey and ExtractField$Key on my CDC JSON data below, but since the id field is nested, I get an error that the field is not recognized. How can I make nested fields accessible?
"before": null,
"after": {
"id": 4,
"salary": 5000
},
"source": {
"version": "1.5.0.Final",
"connector": "mysql",
"name": "Try-",
"ts_ms": 1623834752000,
"snapshot": "false",
"db": "mysql_db",
"sequence": null,
"table": "EmpSalary",
"server_id": 1,
"gtid": null,
"file": "binlog.000004",
"pos": 374,
"row": 0,
"thread": null,
"query": null
},
"op": "c",
"ts_ms": 1623834752982,
"transaction": null
}
Configuration Used:
transforms=createKey,extractInt
transforms.createKey.type=org.apache.kafka.connect.transforms.ValueToKey
transforms.createKey.fields=id
transforms.extractInt.type=org.apache.kafka.connect.transforms.ExtractField$Key
transforms.extractInt.field=id
key.converter.schemas.enable=false
value.converter.schemas.enable=false
With these transformations and the changes in the properties file, I was able to make it work.
Unfortunately, accessing nested fields is not possible without using a different transform.
If you want to use the built-in ones, you'd need to extract the after state before you can access its fields:
transforms=extractAfterState,createKey,extractInt
# Add these
transforms.extractAfterState.type=io.debezium.transforms.ExtractNewRecordState
# since you cannot get the ID from null events
transforms.extractAfterState.drop.tombstones=true
transforms.createKey.type=org.apache.kafka.connect.transforms.ValueToKey
transforms.createKey.fields=id
transforms.extractInt.type=org.apache.kafka.connect.transforms.ExtractField$Key
transforms.extractInt.field=id
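With that chain, the record value should first be unwrapped to the after state (id and salary), ValueToKey then copies id into a struct key, and ExtractField$Key reduces that struct to the bare value, so the example record above ends up keyed by 4.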

Google Tasks API due field

I am using the Google Tasks list API and getting the task list from the server. I created three tasks with different due dates and times. I get the correct date for every task, but the due time is the same for all of them. Can you please elaborate on why this is happening?
Output:
{
  "kind": "tasks#tasks",
  "etag": "*********",
  "items": [
    {
      "kind": "tasks#task",
      "id": "******",
      "etag": "******",
      "title": "Task 2",
      "updated": "2021-01-29T14:40:36.000Z",
      "selfLink": "******",
      "position": "00000000000000000001",
      "status": "needsAction",
      "due": "2021-01-30T00:00:00.000Z"
    },
    {
      "kind": "tasks#task",
      "id": "*********",
      "etag": "*******",
      "title": "Task 4",
      "updated": "2021-01-29T13:18:51.000Z",
      "selfLink": "*******",
      "position": "00000000000000000000",
      "status": "needsAction",
      "due": "2021-01-30T00:00:00.000Z"
    },
    {
      "kind": "tasks#task",
      "id": "***********",
      "etag": "*************",
      "title": "Task 1",
      "updated": "2021-01-29T13:08:39.000Z",
      "selfLink": "*******",
      "position": "00000000000000000002",
      "status": "needsAction",
      "due": "2021-01-29T00:00:00.000Z"
    }
  ]
}
Based on the Resource: tasks reference, field due:
Due date of the task (as a RFC 3339 timestamp). Optional. The due date only records date information; the time portion of the timestamp is discarded when setting the due date. It isn't possible to read or write the time that a task is due via the API.
So the Google Tasks API only stores the date, not the time, of the due field.
The same line appears in the official Tasks API documentation (Tasks API > tasks):
"due": "A String", # Due date of the task (as a RFC 3339 timestamp). Optional. The due date only records date information; the time portion of the timestamp is discarded when setting the due date. It isn't possible to read or write the time that a task is due via the API.

Does Kafka Connect provide data provenance?

I am new to Kafka Connect. I have used tools like NiFi for some time now. Those tools provide data provenance for auditing and other purposes, i.e. for understanding what happened to a piece of data. But I couldn't find any similar feature in Kafka Connect. Does that feature exist for Kafka Connect, or is there some way of handling data provenance in Kafka Connect so as to understand what happened to the data?
A CDC tool may help with your auditing needs; otherwise you will have to build custom logic using a single message transformation (SMT). For example, using the Debezium connector, this is what you get as the message payload for every change event:
{
  "payload": {
    "before": null,
    "after": {
      "id": 1,
      "first_name": "7b789a503dc96805dc9f3dabbc97073b",
      "last_name": "8428d131d60d785175954712742994fa",
      "email": "68d0a7ccbd412aa4c1304f335b0edee8#example.com"
    },
    "source": {
      "version": "1.1.0.Final",
      "connector": "postgresql",
      "name": "localhost",
      "ts_ms": 1587303655422,
      "snapshot": "true",
      "db": "cdcdb",
      "schema": "cdc",
      "table": "customers",
      "txId": 2476,
      "lsn": 40512632,
      "xmin": null
    },
    "op": "c",
    "ts_ms": 1587303655424,
    "transaction": null
  }
}
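The source block (version, connector, name, db, schema, table, txId, lsn), together with op and the two ts_ms timestamps, effectively records where each record came from, what kind of change it was, and when it happened, which is the kind of information you would base your audit or provenance logic on.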

Azure DB for PostgreSQL - changes to log_line_prefix parameter not taking effect

I have a General Purpose Single Server instance of Azure DB for PostgreSQL where I have installed the pgAudit plugin.
I am trying to add more data to the pgAudit session auditing entries by following the instructions on Microsoft's page and PostgreSQL's page, and I tried setting log_line_prefix to the following configurations:
t=%t c=%c a=%a u=%u d=%d r=%r% h=h% e=e c=%c
%t,%c,%a,%u,%d,%r,%h,%e,%c
%t%c%a%u%d%r%h%e%c
None of these had any effect on the events collected. Here is most of what an INSERT event looks like:
{
  "LogicalServerName": "postgresql4moi",
  "SubscriptionId": "****",
  "ResourceGroup": "OLC_Research",
  "time": "2020-05-05T12:10:59Z",
  "resourceId": "***",
  "category": "PostgreSQLLogs",
  "operationName": "LogEvent",
  "properties": {
    "prefix": "t=2020-05-05 12:10:59 UTC c=5eb157c4.5c a=DBeaver 7.0.1 - SQLEditor <testingScript.sql> u=system d=postgres r=****.234(4344)h=he=e c=5eb157c4.5c",
    "message": "AUDIT: SESSION,6,1,WRITE,INSERT,,,\"INSERT INTO public.koko_table VALUES ('kokoMoko','kokoMoko')\",<none>",
    "detail": "",
    "errorLevel": "LOG",
    "domain": "postgres-11",
    "schemaName": "",
    "tableName": "",
    "columnName": "",
    "datatypeName": ""
  }
}
Is there something else I forgot to configure?
I even restarted the database after each attempt to set the parameter.
Thanks in advance.

OData JSON response from server comes back with line return characters

When you ask the OData server for JSON, the JSON response comes back with "\r\n" line returns. Currently I'm stripping the line returns out of the response on the client side. Is there a way to have the JSON response come back without the pretty formatting, i.e. without the "\r\n" line returns?
Response from server:
{\r\n"d" : [\r\n{\r\n"__metadata": {\r\n"uri": "http://services.odata.org/(S(cxfoyevtmm2e2elq52yherkc))/OData/OData.svc/Categories(0)", "type": "ODataDemo.Category"\r\n}, "ID": 0, "Name": "Food", "Products": {\r\n"__deferred": {\r\n"uri": "http://services.odata.org/(S(cxfoyevtmm2e2elq52yherkc))/OData/OData.svc/Categories(0)/Products"\r\n}\r\n}\r\n}, {\r\n"__metadata": {\r\n"uri": "http://services.odata.org/(S(cxfoyevtmm2e2elq52yherkc))/OData/OData.svc/Categories(1)", "type": "ODataDemo.Category"\r\n}, "ID": 1, "Name": "Beverages", "Products": {\r\n"__deferred": {\r\n"uri": "http://services.odata.org/(S(cxfoyevtmm2e2elq52yherkc))/OData/OData.svc/Categories(1)/Products"\r\n}\r\n}\r\n}, {\r\n"__metadata": {\r\n"uri": "http://services.odata.org/(S(cxfoyevtmm2e2elq52yherkc))/OData/OData.svc/Categories(2)", "type": "ODataDemo.Category"\r\n}, "ID": 2, "Name": "Electronics", "Products": {\r\n"__deferred": {\r\n"uri": "http://services.odata.org/(S(cxfoyevtmm2e2elq52yherkc))/OData/OData.svc/Categories(2)/Products"\r\n}\r\n}\r\n}\r\n]\r\n}
Expected response:
{"d" : [{"__metadata": {"uri": "http://services.odata.org/(S(cxfoyevtmm2e2elq52yherkc))/OData/OData.svc/Categories(0)", "type": "ODataDemo.Category"}, "ID": 0, "Name": "Food", "Products": {"__deferred": {"uri": "http://services.odata.org/(S(cxfoyevtmm2e2elq52yherkc))/OData/OData.svc/Categories(0)/Products"}}}, {"__metadata": {"uri": "http://services.odata.org/(S(cxfoyevtmm2e2elq52yherkc))/OData/OData.svc/Categories(1)", "type": "ODataDemo.Category"}, "ID": 1, "Name": "Beverages", "Products": {"__deferred": {"uri": "http://services.odata.org/(S(cxfoyevtmm2e2elq52yherkc))/OData/OData.svc/Categories(1)/Products"}}}, {"__metadata": {"uri": "http://services.odata.org/(S(cxfoyevtmm2e2elq52yherkc))/OData/OData.svc/Categories(2)", "type": "ODataDemo.Category"}, "ID": 2, "Name": "Electronics", "Products": {"__deferred": {"uri": "http://services.odata.org/(S(cxfoyevtmm2e2elq52yherkc))/OData/OData.svc/Categories(2)/Products"}}}]}
This is a known issue in the last release. In the next release, we will fix the code to never indent the response payload. If the client