replicate subquery in Power BI - merge

I have two tables:
The merge is on the field-name value in JLE_LineTypeCategory matching the Type and Line Type columns on JobLedgerEntry. In SQL, I would write something like this:
(SELECT optiontext
 FROM metadataTable md
 WHERE md.TableName = 'JobLedgerEntry'
   AND md.FieldName = 'LineType'
   AND md.OptionInteger = JobLedgerEntry.[Type]
) AS [Type]
but I'm not sure how to do that in Power BI. Basically, I'm looking at the value of a field in the JLE_LineTypeCategory table to match against a column in the JobLedgerEntry table.

Since I only needed one set of field descriptors, I filtered the LineTypeCategory table down to the 3 possible values of the JobLedgerEntry.LineType field. I then merged the two tables on the OptionInteger and LineType fields.
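In Power Query terms that merge behaves like a lookup from OptionInteger to OptionText. A minimal Python sketch of the same two steps (table and column names are taken from the question; the sample rows and option texts are invented for illustration):

```python
# Metadata rows: (TableName, FieldName, OptionInteger, OptionText).
# The sample option texts below are made up for illustration.
metadata = [
    ("JobLedgerEntry", "LineType", 0, "Resource"),
    ("JobLedgerEntry", "LineType", 1, "Item"),
    ("JobLedgerEntry", "LineType", 2, "G/L Account"),
    ("OtherTable", "OtherField", 0, "Ignored"),
]

# Step 1: filter the metadata down to the one field we need,
# mirroring the filter applied to the LineTypeCategory table.
lookup = {
    option_int: option_text
    for table, field, option_int, option_text in metadata
    if table == "JobLedgerEntry" and field == "LineType"
}

# Step 2: "merge" by mapping each entry's integer Type to its text.
job_ledger_entries = [{"Type": 1}, {"Type": 2}]
for entry in job_ledger_entries:
    entry["TypeText"] = lookup[entry["Type"]]

print(job_ledger_entries)
```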

How to understand the return type?

I'm building a framework on top of rust-postgres.
I need to know what value type will be returned from a row.try_get, so I can store the value in a variable of the appropriate type.
I can get the SQL type from row.columns()[index].type_(), but not whether the value is nullable, so I can't decide whether to put the value in a plain T or an Option<T>.
I can only use the contents of the row itself; I can't do things like "get the table structure from PostgreSQL".
Is there a way?
The reason the Column type does not expose any way to find out whether a result column is nullable is that the database does not return this information.
Remember that result columns are derived from running a query, and that query may contain arbitrary expressions. If the query was a simple SELECT of columns from a table, then it would be reasonably simple to determine if a column could be nullable.
But it could also be a very complex expression, derived from multiple columns, subselects or even custom functions. Postgres can figure out the data type of each column, but in the general case it doesn't know if a result column may contain nulls.
If your application is only performing simple queries, and you know which table column each result column comes from, then you can find out if that table column is nullable like this:
SELECT is_nullable
FROM information_schema.columns
WHERE table_schema='myschema'
AND table_name='mytable'
AND column_name='mycolumn';
If your queries are not that simple then I recommend you always get the result as an Option<T> and handle the possibility that the result might be None.
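The catalog lookup above can be exercised end-to-end with SQLite's equivalent catalog, PRAGMA table_info, which exposes a per-column "notnull" flag playing the same role as information_schema.columns.is_nullable in Postgres. A runnable Python sketch (the table and column names here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (mycolumn INTEGER NOT NULL, note TEXT)")

# PRAGMA table_info returns one row per column:
# (cid, name, type, notnull, dflt_value, pk).
# The "notnull" flag is SQLite's analogue of Postgres's
# information_schema.columns.is_nullable.
nullable = {
    name: not notnull
    for _cid, name, _type, notnull, _dflt, _pk in conn.execute(
        "PRAGMA table_info(mytable)"
    )
}

print(nullable)  # mycolumn is NOT NULL, note is nullable
```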

Update column in a dataset only if matching record exists in another dataset in Tableau Prep Builder

Any way to do this? Basically, I'm trying to do the equivalent of a SQL UPDATE ... SET when a matching record for one or more key fields exists in another dataset.
I tried using Joins and Merge. Joins seem to take more steps, and Merge appends records instead of updating the correlated rows.
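Outside of Prep, the intended UPDATE-if-match is just a keyed lookup that overwrites matching rows and leaves the rest untouched. A minimal Python sketch (the key and field names are hypothetical):

```python
# Target rows to update, and the "other dataset" holding new values,
# keyed by a hypothetical key field "id".
target = [
    {"id": 1, "status": "old"},
    {"id": 2, "status": "old"},
    {"id": 3, "status": "old"},
]
updates = {2: "new", 99: "unmatched"}  # id -> new status

# UPDATE ... SET status = ... only where a matching key exists;
# rows without a match keep their current value, and unmatched
# update records are NOT appended (unlike a Merge step).
for row in target:
    if row["id"] in updates:
        row["status"] = updates[row["id"]]

print(target)
```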

How to change column order in Grafana InfluxDb Table

I have a table like this:
And I need to have the "last" column (it is the value from InfluxDB) as the first column.
It is InfluxDB version 1.7.
I have a lot of queries (A, B, C, D):
So I can't use the Organize fields transformation:
But if I do a join transformation first (regardless of the field), my table looks like this:
Use Grafana's Organize fields transformation and drag/drop fields to achieve the desired order. Example:
If the query consists of a lot of series (A, B, C, D), it is probably necessary to do a Merge transformation before Organize fields:

group records by columns and pick only the max created-date record using mapping data flows in Azure Data Factory

Hi there. In Mapping Data Flows in Azure Data Factory, I tried to create a data flow in which we used an Aggregate transformation to group the records. There is a column called [createddate]; we want to pick the record with the max created date, and the output should show all the columns.
Any advice? Please help me.
The Aggregate transformation will only output the columns participating in the aggregation (the group-by and aggregate-function columns). To include all of the columns in the Aggregate output, add a new column pattern to your aggregate (see pic below ... change column1, column2, ... to the names of the columns you are already using in your agg).
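The row-level result the flow is after — group by some keys, keep only the row with the latest createddate, and still output every column — can be sketched in Python (the createddate column name follows the question; the grouping key and other columns are hypothetical):

```python
from datetime import date

# Hypothetical input rows; "customer" stands in for the group-by key.
rows = [
    {"customer": "A", "createddate": date(2021, 1, 1), "amount": 10},
    {"customer": "A", "createddate": date(2021, 3, 1), "amount": 30},
    {"customer": "B", "createddate": date(2021, 2, 1), "amount": 20},
]

# Group by "customer" and keep the WHOLE row with the max createddate,
# so every column survives the aggregation.
latest = {}
for row in rows:
    key = row["customer"]
    if key not in latest or row["createddate"] > latest[key]["createddate"]:
        latest[key] = row

result = sorted(latest.values(), key=lambda r: r["customer"])
print(result)
```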

Postgres: Query Values in nested jsonb-structure with unknown keys

I am quite new to working with psql.
The goal is to get values from a nested jsonb structure where the last key has so many different values that it is not possible to query them explicitly.
The jsonb-structure in any row is as follows:
TABLE_Products
{"products":[{"product1":["TYPE1"], "product2":["TYPE2","TYPE3"], "productN":["TYPE_N"]}]}
I want to get the values (TYPE1, etc.) assigned to each product key (product1, etc.). The product keys are the unknown part, because there are too many different names.
My work so far pulls out a tuple for each key:value pair on the last level. To illustrate, here are my code and the results for the structure described above.
My Code:
select id, jsonb_each(pro)
from (
  select id, jsonb_array_elements(data #> '{products}') as pro
  from TABLE_Products
  where data is not null
) z
My result:
("product2","[""TYPE2""]")
("product2","[""TYPE3""]")
My questions:
Is there a way to split this tuple into two columns?
Or how can I query the values in an "unsupervised" way, i.e. without knowing the exact names of product1 ... productN?
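For reference, the expansion the question asks for — every product key paired with each of its type values, as two separate columns — can be sketched in Python on the exact structure from the question (product and type names as given there):

```python
import json

data = json.loads(
    '{"products":[{"product1":["TYPE1"],'
    ' "product2":["TYPE2","TYPE3"],'
    ' "productN":["TYPE_N"]}]}'
)

# Walk products -> each object -> each key/value pair, and emit one
# (product, type) row per element of the value array; this is the
# "split the tuple into two columns" shape, with no product names
# hard-coded anywhere.
pairs = [
    (product, type_value)
    for obj in data["products"]
    for product, types in obj.items()
    for type_value in types
]

print(pairs)
```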