Custom column from Joined table unknown - magento2

I created a resource model for my grid table that contains a custom query joining two tables. sales_order and sales_payment_transaction are joined to display all records that have a payment. Below is my query:
protected function _initSelect()
{
    parent::_initSelect();
    $this->getSelect()
        ->joinLeft(
            ['spt' => $this->getTable('sales_payment_transaction')],
            'main_table.entity_id = spt.order_id',
            ['spt.created_at as date_paid']
        )
        ->where('main_table.status in ("complete", "processing")')
        ->order('main_table.entity_id DESC');
    $this->addFilterToMap('created_at', 'main_table.created_at');
    return $this;
}
As you can see, I added a new column, spt.created_at, aliased as date_paid; this new column is used for date filtering, so whenever I filter the orders by date it uses date_paid as the parameter. Now when viewing the logs I get this error:
SELECT COUNT(*) FROM `sales_order` AS `main_table`
LEFT JOIN `sales_payment_transaction` AS `spt`
ON main_table.entity_id = spt.order_id
WHERE (main_table.status in (\"complete\", \"processing\"))
AND (`date_paid` >= '2021-01-03 00:00:00')
AND (`date_paid` <= '2021-10-03 22:59:59')
/// Column not found: 1054 Unknown column 'date_paid' in 'where clause'
It seems that it can't recognize the new column. May I know how to properly construct this query?

It looks like your column mapping is off here:
['spt.created_at as date_paid']
In a Magento collection the columns argument takes alias => column pairs, so it should be:
['date_paid' => 'spt.created_at']
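Note that fixing the column mapping alone may not clear the 1054 error: MySQL evaluates the WHERE clause before SELECT-list aliases exist, so date_paid is never visible to the filter. The usual Magento 2 fix is to map the alias to the real column, just as the question already does for created_at, i.e. $this->addFilterToMap('date_paid', 'spt.created_at');. With that mapping the grid should generate SQL along these lines (a sketch of the corrected query, not actual Magento output):
SELECT COUNT(*) FROM `sales_order` AS `main_table`
LEFT JOIN `sales_payment_transaction` AS `spt`
  ON main_table.entity_id = spt.order_id
WHERE (main_table.status IN ('complete', 'processing'))
  AND (spt.created_at >= '2021-01-03 00:00:00')  -- the real column,
  AND (spt.created_at <= '2021-10-03 22:59:59')  -- not the alias date_paid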

Create rows from part of column names

Source data
I am working on an ELT project to load data from CSV files into PostgreSQL where I will transform it. The CSV files have many columns that are consistent across files, but also contain activity columns that are inconsistent with names like Date (05/19/2020), Type (05/19/2020), etc.
In the loading script I am merging all of the columns with dates in the column name into one jsonb column so I don't have to constantly add new columns to the raw data table.
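For reference, a minimal sketch of the raw data table these examples run against; the rawinput and activity names come from the queries below, everything else is illustrative:
create table rawinput (
    id       bigint primary key,
    activity jsonb  -- every "Date (...)"/"Type (...)" CSV column merged into one object
);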
The resulting jsonb column in the raw data table looks like this:
id       | activity
12345678 | {"Date (05/19/2020)": null, "Type (05/19/2020)": null, "Date (06/03/2020)": "06/01/2020", "Type (06/03/2020)": "E"}
98765432 | {"Date (05/19/2020)": "05/18/2020", "Type (05/19/2020)": "B", "Date (10/23/2020)": "10/26/2020", "Type (10/23/2020)": "T"}
JSON to columns
Using the amazing create_jsonb_flat_view function from this post I can convert the jsonb to columns like this:
id       | Date (05/19/2020) | Type (05/19/2020) | Date (06/03/2020) | Type (06/03/2020) | Date (10/23/2020) | Type (10/23/2020)
10629465 | null              | null              | 06/01/2020        | E                 | null              | null
98765432 | 05/18/2020        | B                 | null              | null              | 10/26/2020        | T
Need to move part of column name to row
Now, this is where I'm stuck. I need to remove the portion of the column name that is the Activity Date (e.g. (05/19/2020)) and create a row for each id and ActivityDate with additional columns for Date and Type like this:
id       | ActivityDate | Date       | Type
12345678 | 05/19/2020   | null       | null
12345678 | 06/03/2020   | 06/01/2020 | E
98765432 | 05/19/2020   | 05/18/2020 | B
98765432 | 10/23/2020   | 10/26/2020 | T
I followed your link to the create_jsonb_flat_view article yesterday and then forgot this question. While I thank you for pointing me there, I think that mentioning it worked against you.
A more conventional approach using regexp_replace() works here. I left the date values as strings, but you can convert them with to_date() if needed:
with parse as (
    select id, e.k, e.v,
           regexp_replace(e.k, '\s+\([0-9/]{10}\)', '') as k_no_date,
           regexp_replace(e.k, '^.+([0-9/]{10}).+', '\1') as k_date_only
      from rawinput
     cross join lateral jsonb_each_text(activity) as e(k, v)
)
select id,
       k_date_only as activity_date,
       min(v) filter (where k_no_date = 'Date') as date,
       min(v) filter (where k_no_date = 'Type') as type
  from parse
 group by id, k_date_only;
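If you need real date values rather than strings, a hedged variation of the final select above, assuming the MM/DD/YYYY format shown in the sample data:
select id,
       to_date(k_date_only, 'MM/DD/YYYY') as activity_date,
       to_date(min(v) filter (where k_no_date = 'Date'), 'MM/DD/YYYY') as date,
       min(v) filter (where k_no_date = 'Type') as type
  from parse
 group by id, k_date_only;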
@Mike-Organek's answer works beautifully!
However, I was curious whether the regexp_replace() calls might be slowing the query down a bit, and it seemed I could get the same results using a simpler function.
Since Mike gave me a great example to start with, I modified it to split on the space between Date and (05/19/2020).
For 20,000 rows, it went from taking an avg of 7 sec on my local machine to an avg of .9 sec.
Here is the resulting query:
with parse as (
    select id, e.k, e.v,
           split_part(e.k, ' ', 1) as k_no_date,
           trim(split_part(e.k, ' ', 2), '()') as k_date_only
      from rawinput
     cross join lateral jsonb_each_text(activity) as e(k, v)
)
select id,
       k_date_only as activity_date,
       min(v) filter (where k_no_date = 'Date') as date,
       min(v) filter (where k_no_date = 'Type') as type
  from parse
 group by id, k_date_only;
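One assumption worth noting: the split_part()/trim() version relies on each key having exactly one space between the label and the parenthesized date. For example:
select split_part('Date (05/19/2020)', ' ', 1);              -- Date
select trim(split_part('Date (05/19/2020)', ' ', 2), '()');  -- 05/19/2020
A key like 'Start Date (05/19/2020)' would break this and would need the regexp version.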

Cast a PostgreSQL column to stored type

I am creating a viewer for PostgreSQL. My SQL needs to sort on the type that is normal for that column. Take for example:
Table:
CREATE TABLE contacts (id serial primary key, name varchar)
SQL:
SELECT id::text FROM contacts ORDER BY id;
Gives:
1
10
100
2
Ok, so I change the SQL to:
SELECT id::text FROM contacts ORDER BY id::regtype;
Which results in:
1
2
10
100
Nice! But now I try:
SELECT name::text FROM contacts ORDER BY name::regtype;
Which results in:
invalid type name "my first string"
Google is no help. Any ideas? Thanks
Repeat: the error is not my problem. My problem is that I need to convert each column to text, but order by the normal type for that column.
regtype is an object identifier type, and there is no reason to use it when you are not referring to system objects (types, in this case).
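For illustration, regtype maps type names to their OIDs in pg_type and back:
SELECT 'integer'::regtype::oid;  -- 23
SELECT 23::regtype;              -- integer
So id::regtype in your second query merely reinterprets each id as a type OID (which still sorts numerically), and name::regtype fails because "my first string" is not a valid type name.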
You should cast the column to integer in the first query:
SELECT id::text
FROM contacts
ORDER BY id::integer;
You can use qualified column names in the order by clause. This will work with any sortable type of column.
SELECT id::text
FROM contacts
ORDER BY contacts.id;
So, I found two ways to accomplish this. The first builds on the solution @klin provided, by querying the table and then constructing my own query based on the data. An untested psycopg2 example:
c = conn.cursor()
c.execute("SELECT * FROM contacts LIMIT 1")
sort_by_sql = ""
for row in c.description:
    if row.name == "my_sort_column":
        if row.type_code == 23:  # 23 is the pg_type OID for integer (int4)
            sort_by_sql = "ORDER BY " + row.name + "::integer"
        else:
            sort_by_sql = "ORDER BY " + row.name + "::text"
c.execute("SELECT * FROM contacts " + sort_by_sql)
A more elegant way would be like this:
SELECT id::text AS _id, name::text AS _name FROM contacts ORDER BY id
This uses aliases so that ORDER BY still picks up the original data. The last option is more readable if nothing else.

Using crosstab and adding new value to a column in postgresql

I have a function (sae_rel_data()) that returns a result like the one shown in the picture. I am trying to return the result as a table with columns event_crf_id, description, value, CBID instead. So I would have to identify the CBID value and assign it, in another column named CBID, to all other rows that have the same event_crf_id.
For example, CBID = 60051 has event_crf_id = 444:
event_crf_id; description; value; CBID
444; "CBID"; "60051"; "60051"
444; "Month"; "09"; "60051"
444; "Day"; "27"; "60051"
444; "Year"; "2016"; "60051"
...
How can it be done? I am using PostgreSQL.
I was able to get the result I wanted by breaking down my code, saving portions of it into a function, and then applying an inner join:
SELECT cbid, description, value
FROM (
    SELECT test.event_crf_id, description, test.value, id_table.cbid
    FROM sae_rel_data() test
    INNER JOIN (
        SELECT event_crf_id, value AS cbid
        FROM sae_rel_data()
        WHERE description = 'CBID'
    ) id_table ON id_table.event_crf_id = test.event_crf_id
) relevant;
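For what it's worth, a window-function alternative that scans sae_rel_data() only once (a sketch, assuming the function returns the event_crf_id, description, and value columns shown above):
SELECT event_crf_id, description, value,
       max(value) FILTER (WHERE description = 'CBID')
           OVER (PARTITION BY event_crf_id) AS cbid
FROM sae_rel_data();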

Temporary Table Value into a Table-Value UDF

I was having some trouble with a SQL 2000 sproc, which we moved to SQL 2005 so we could use table-valued UDFs instead of scalar UDFs.
This is simplified, but this is my problem.
I have a temporary table that I fill up with product information. I then pass that product information into a UDF and return the information back to my main result set. It doesn't seem to work.
Am I not allowed to pass a temporary table value into a CROSS APPLY'd table-valued UDF?
--CREATE AND FILL #brandInfo
SELECT sku, upc, prd_id, cp.customerPrice
FROM products p
JOIN #brandInfo b ON p.brd_id=b.brd_id
CROSS APPLY f_GetCustomerPrice(b.priceAdjustmentValue, b.priceAdjustmentAmount, p.Price) cp
--f_GetCustomerPrice uses the AdjValue, AdjAmount, and Price to calculate the user's actual price
When I put dummy values in for b.priceAdjustmentValue and b.priceAdjustmentAmount it works great. But as soon as I try to load the temp table values in it bombs.
Msg 207, Level 16, State 1, Line 140
Invalid column name 'b.priceAdjustmentValue'.
Msg 207, Level 16, State 1, Line 140
Invalid column name 'b.priceAdjustmentAmount'.
Have you tried:
--CREATE AND FILL #brandInfo
SELECT sku, upc, prd_id, cp.customerPrice
FROM products p
JOIN #brandInfo b ON p.brd_id=b.brd_id
CROSS APPLY (
    SELECT *
    FROM f_GetCustomerPrice(b.priceAdjustmentValue, b.priceAdjustmentAmount, p.Price) f
) cp
--f_GetCustomerPrice uses the AdjValue, AdjAmount, and Price to calculate the user's actual price
Giving the UDF the proper context in order to resolve the column references?
EDIT:
I have built the following UDF in my local Northwind 2005 database:
CREATE FUNCTION dbo.f_GetCustomerPrice(@adjVal DECIMAL(28,9), @adjAmt DECIMAL(28,9), @price DECIMAL(28,9))
RETURNS TABLE
AS RETURN
(
    SELECT Level = 'One', AdjustValue = @adjVal, AdjustAmount = @adjAmt, Price = @price
    UNION
    SELECT Level = 'Two', AdjustValue = 2 * @adjVal, AdjustAmount = 2 * @adjAmt, Price = 2 * @price
)
GO
And referenced it in the following query without issue:
SELECT p.ProductID,
p.ProductName,
b.CompanyName,
f.Level
FROM Products p
JOIN Suppliers b
ON p.SupplierID = b.SupplierID
CROSS APPLY dbo.f_GetCustomerPrice(p.UnitsInStock, p.ReorderLevel, p.UnitPrice) f
Are you certain that your definition of #brandInfo has the priceAdjustmentValue and priceAdjustmentAmount columns defined on it? More importantly, if you are putting this in a stored procedure as you mentioned, does there exist a #brandInfo table already without those columns defined? I know #brandInfo is a temporary table, but if it exists at the time you attempt to create the stored procedure and it lacks the columns, the parsing engine may be getting tripped up. Oddly, if the table doesn't exist at all, the parsing engine simply glides past the missing table and creates the SP for you.
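As a quick sanity check, you can inspect the columns the temp table actually has at run time from the same session (a sketch against tempdb metadata):
SELECT name, system_type_id
FROM tempdb.sys.columns
WHERE object_id = OBJECT_ID('tempdb..#brandInfo');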

Olap cube and MDX and NON EMPTY

I am pretty new to SSAS, OLAP and MDX syntax.
I have this MDX, which queries the cube from T-SQL (via a linked server to SSAS), and it works fine:
select * from openquery(GCUBE,
'SELECT NON EMPTY { [Measures].[Valore] } ON COLUMNS,
NON EMPTY {
( [Prodotti].[Top Marca].[Top Marca].ALLMEMBERS
* [Prodotti].[Top Codice].[Top Codice].ALLMEMBERS
* [Agenti].[Vw Agenti].[Vw Agenti].ALLMEMBERS
* [Calendario].[AnnoMese].[Mese].ALLMEMBERS
* [Prodotti].[Ordinamento].[Ordinamento].ALLMEMBERS
* [Prodotti].[Top].[Top].ALLMEMBERS )
}
DIMENSION PROPERTIES MEMBER_CAPTION
ON ROWS FROM ( SELECT ( { [Calendario].[Anno].&[2012] } )
ON COLUMNS FROM ( SELECT ( { [Agenti].[Vw Agenti].&[005] } )
ON COLUMNS FROM [Vendite])) WHERE ( [Calendario].[Anno].&[2012] )'
)
Well, the [Prodotti].[Top Marca] is a dimension based on a table with the 50 top selling brands and this MDX is filtered by a specific ID Agent [Vw Agenti] = 005.
The purpose of the query is to find out how the agent is selling the company's 50 top selling brands.
The query works fine but there is one brand not sold by this agent and I need to show the empty row.
The record for the brand at position (rank) 31 is missing from the result. I understand the concept of NON EMPTY, but I can't find the right syntax to also show the empty record.
How should I modify the MDX?
I tried to remove NON EMPTY but I get a generic error:
Cannot execute the query against OLE DB provider "MSOLAP" for linked server "GCUBE"
Do I need to change the dimension Top Marca in the cube?
Thanks in advance to anyone who can help me or give the right tips to solve this.
I'm not a specialist in SSAS/T-SQL, but I would try a simpler query first:
SELECT
    [Measures].[Valore] ON COLUMNS,
    NON EMPTY [Prodotti].[Top Marca].[Top Marca].ALLMEMBERS ON ROWS
FROM ( SELECT { [Calendario].[Anno].&[2012] } ON COLUMNS
       FROM ( SELECT { [Agenti].[Vw Agenti].&[005] } ON COLUMNS
              FROM [Vendite]
            )
     )
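If the empty brand row still doesn't appear, drop NON EMPTY from the ROWS axis of this smaller query: NON EMPTY is exactly what suppresses empty tuples. The generic MSOLAP error you saw when removing it from the original query was most likely the six-dimension crossjoin becoming huge once empty tuples were no longer filtered out, so shrinking the ROWS set as above should make removing NON EMPTY feasible.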
Any way to run it without this TSQL stuff?