Kentico: getting data from multiple choice fields that use SQL causes GetValue to return a number instead of the actual name

I am hoping that there is just a method that I am missing.
Right now I am using {% CurrentDocument.GetValue("marketType").Replace("|", ", ") #%}, which works totally fine if I have a plain list of options. As soon as I switched my field to get its data with the following SQL:
SELECT 0 AS ItemID, '-Select-' marketType
UNION ALL
SELECT ItemID, marketType FROM BBUS_MarketType
{% CurrentDocument.GetValue("marketType").Replace("|", ", ") #%} started displaying the number of the item instead of the item name itself.

Each item in the choice list has two parts, "value,display". Your SELECT statement populates the value with ItemID, which is a number, so that number is what GetValue returns.
If you want to store the text, populate the value with marketType instead of the ID:
SELECT '', '-Select-' UNION ALL SELECT marketType, marketType FROM BBUS_MarketType
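Written out over several lines, the corrected data source query could look like this (a sketch only; Kentico simply uses the first column as the stored value and the second as the displayed text, so the alias names here are just for readability):
SELECT '' AS ItemValue, '-Select-' AS ItemText
UNION ALL
SELECT marketType AS ItemValue, marketType AS ItemText FROM BBUS_MarketType
With the text stored as the value, {% CurrentDocument.GetValue("marketType").Replace("|", ", ") #%} again returns the market type names.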

Create rows from part of column names

Source data
I am working on an ELT project to load data from CSV files into PostgreSQL where I will transform it. The CSV files have many columns that are consistent across files, but also contain activity columns that are inconsistent with names like Date (05/19/2020), Type (05/19/2020), etc.
In the loading script I am merging all of the columns with dates in the column name into one jsonb column so I don't have to constantly add new columns to the raw data table.
The resulting jsonb column in the raw data table looks like this:
id       | activity
12345678 | {"Date (05/19/2020)": null, "Type (05/19/2020)": null, "Date (06/03/2020)": "06/01/2020", "Type (06/03/2020)": "E"}
98765432 | {"Date (05/19/2020)": "05/18/2020", "Type (05/19/2020)": "B", "Date (10/23/2020)": "10/26/2020", "Type (10/23/2020)": "T"}
JSON to columns
Using the amazing create_jsonb_flat_view function from this post I can convert the jsonb to columns like this:
id       | Date (05/19/2020) | Type (05/19/2020) | Date (06/03/2020) | Type (06/03/2020) | Date (10/23/2020) | Type (10/23/2020)
10629465 | null              | null              | 06/01/2020        | E                 | null              | null
98765432 | 05/18/2020        | B                 | null              | null              | 10/26/2020        | T
Need to move part of column name to row
Now, this is where I'm stuck. I need to remove the portion of the column name that is the Activity Date (e.g. (05/19/2020)) and create a row for each id and ActivityDate with additional columns for Date and Type like this:
id       | ActivityDate | Date       | Type
12345678 | 05/19/2020   | null       | null
12345678 | 06/03/2020   | 06/01/2020 | E
98765432 | 05/19/2020   | 05/18/2020 | B
98765432 | 10/23/2020   | 10/26/2020 | T
I followed your link to the create_jsonb_flat_view article yesterday and then forgot this question. While I thank you for pointing me there, I think that mentioning it worked against you.
A more conventional approach using regexp_replace() works here. I left the date values as strings, but you can convert them with to_date() if needed:
with parse as (
    select id, e.k, e.v,
        regexp_replace(e.k, '\s+\([0-9/]{10}\)', '') as k_no_date,
        regexp_replace(e.k, '^.+([0-9/]{10}).+', '\1') as k_date_only
    from rawinput
    cross join lateral jsonb_each_text(activity) as e(k, v)
)
select id,
    k_date_only as activity_date,
    min(v) filter (where k_no_date = 'Date') as date,
    min(v) filter (where k_no_date = 'Type') as type
from parse
group by id, k_date_only;
db<>fiddle here
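If you do want real date values rather than strings, the final select of the query above could be rewritten with to_date(); a small sketch on top of that query, assuming the MM/DD/YYYY layout shown in the sample data:
select id,
    to_date(k_date_only, 'MM/DD/YYYY') as activity_date,
    to_date(min(v) filter (where k_no_date = 'Date'), 'MM/DD/YYYY') as date,
    min(v) filter (where k_no_date = 'Type') as type
from parse
group by id, k_date_only;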
Mike Organek's answer works beautifully!
However, I was curious whether the regexp_replace() calls might be slowing the query down a bit, and it seemed I could get the same results using a simpler function.
Since Mike gave me a great example to start with, I modified it to split on the space between Date and (05/19/2020).
For 20,000 rows, the query went from taking an average of 7 seconds on my local machine to an average of 0.9 seconds.
Here is the resulting query:
with parse as (
    select id, e.k, e.v,
        split_part(e.k, ' ', 1) as k_no_date,
        trim(split_part(e.k, ' ', 2), '()') as k_date_only
    from rawinput
    cross join lateral jsonb_each_text(activity) as e(k, v)
)
select id,
    k_date_only as activity_date,
    min(v) filter (where k_no_date = 'Date') as date,
    min(v) filter (where k_no_date = 'Type') as type
from parse
group by id, k_date_only;
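For anyone who wants to try both versions, a minimal test setup matching the sample rows in the question could look like this (only the table name rawinput comes from the queries above; the rest is the question's own data):
create table rawinput (id bigint, activity jsonb);

insert into rawinput (id, activity) values
    (12345678, '{"Date (05/19/2020)": null, "Type (05/19/2020)": null, "Date (06/03/2020)": "06/01/2020", "Type (06/03/2020)": "E"}'),
    (98765432, '{"Date (05/19/2020)": "05/18/2020", "Type (05/19/2020)": "B", "Date (10/23/2020)": "10/26/2020", "Type (10/23/2020)": "T"}');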

Cast a PostgreSQL column to stored type

I am creating a viewer for PostgreSQL. My SQL needs to sort on the type that is normal for that column. Take for example:
Table:
CREATE TABLE contacts (id serial primary key, name varchar)
SQL:
SELECT id::text FROM contacts ORDER BY id;
Gives:
1
10
100
2
Ok, so I change the SQL to:
SELECT id::text FROM contacts ORDER BY id::regtype;
Which results in:
1
2
10
100
Nice! But now I try:
SELECT name::text FROM contacts ORDER BY name::regtype;
Which results in:
invalid type name "my first string"
Google is no help. Any ideas? Thanks
Repeat: the error is not my problem. My problem is that I need to convert each column to text, but order by the normal type for that column.
regtype is an object identifier type, and there is no reason to use it when you are not referring to system objects (types, in this case).
You should cast the column to integer in the first query:
SELECT id::text
FROM contacts
ORDER BY id::integer;
You can use qualified column names in the order by clause. This will work with any sortable type of column.
SELECT id::text
FROM contacts
ORDER BY contacts.id;
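For example, the same pattern applied to the varchar column from the question (my illustration, not part of the original answer); qualifying the column makes ORDER BY refer to the table column rather than the text output column:
SELECT name::text
FROM contacts
ORDER BY contacts.name;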
So, I found two ways to accomplish this. The first is the approach @klin suggested: query the table first and then construct my own query based on the column metadata. An untested psycopg2 example:
c = conn.cursor()
c.execute("SELECT * FROM contacts LIMIT 1")

# cursor.description holds one entry per result column (name, type_code, ...)
sort_by_sql = "my_sort_column::text"  # fallback: sort as text
for row in c.description:
    if row.name == "my_sort_column":
        if row.type_code == 23:  # 23 is the type OID of int4
            sort_by_sql = row.name + "::integer"
        else:
            sort_by_sql = row.name + "::text"

c.execute("SELECT * FROM contacts ORDER BY " + sort_by_sql)
A more elegant way would be like this:
SELECT id::text AS _id, name::text AS _name FROM contacts ORDER BY id
This uses aliases so that ORDER BY still refers to the original columns rather than the text output columns. The last option is more readable if nothing else.

Is it possible to concatenate one result set onto another in a single query?

I have a table of Verticals which have names, except one of them is called 'Other'. My task is to return a list of all Verticals, sorted in alpha order, except with 'Other' at the end. I have done it with two queries, like this:
String sqlMost = "SELECT * from core.verticals WHERE name != 'Other' order by name";
String sqlOther = "SELECT * from core.verticals WHERE name = 'Other'";
and then appended the second result in my code. Is there a way to do this in a single query, without modifying the table? I tried using UNION
(select * from core.verticals where name != 'Other' order by name)
UNION (select * from core.verticals where name = 'Other');
but the result was not ordered at all. I don't think the second query is going to hurt my execution time all that much, but I'm kind of curious if nothing else.
UNION ALL is the usual way to request a simple concatenation; without ALL an implicit DISTINCT is applied to the combined results, which often causes a sort. However, UNION ALL isn't required to preserve the order of the individual sub-results as a simple concatenation would; you'd need to ORDER the overall UNION ALL expression to lock down the order.
Another option would be to compute an integer order-override column like CASE WHEN name = 'Other' THEN 2 ELSE 1 END, and ORDER BY that column followed by name, avoiding the UNION entirely.
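A sketch of that second option, using the table and column names from the question:
SELECT *
FROM core.verticals
ORDER BY CASE WHEN name = 'Other' THEN 2 ELSE 1 END, name;
A single ORDER BY like this keeps 'Other' at the end without scanning the table twice.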

SphinxQL - how to filter behind match

I'm working on a project where I use the Sphinx search engine. But, as I realized, the Sphinx documentation is big but hard to understand.
So I was not able to find any information on how to use the WHERE clause to filter in addition to a MATCH statement. What I have tried so far is:
"SELECT *, country FROM all_gebrauchte_products WHERE MATCH('#searchtext (".$searchQuery.")') AND country='".$where."' ORDER BY WEIGHT() DESC LIMIT ".$page.", ".$limit." OPTION ranker=expr('sum(lcs)')"
If I use it without the country clause, I get back many GUIDs, but from different countries. So somehow I have to filter on the country column.
If I use the above statement, I get this error:
Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[42000]: Syntax error or access violation: 1064 index all_gebrauchte_products: parse error: unknown column: country'
But I set the index like this:
sql_query_range = SELECT MIN(gebr_id), MAX(gebr_id) FROM all_gebrauchte_products
sql_range_step = 10000
sql_query = \
SELECT a.gebr_id AS guid, 'products' AS data_type, a.gebr_products AS products, a.gebr_user AS username, a.gebr_date AS datadate, CONCAT(a.gebr_hersteller,' ', a.gebr_modell,' ', a.gebr_ukat,' ', a.gebr_kat,' ', a.gebr_bemerkung) AS searchtext, a.gebr_bild1 AS image1, a.gebr_bild2 AS image2, a.gebr_bild3 AS image3, a.gebr_bild4 AS image4, a.gebr_bild5 AS image5, b.h_land AS country, b.h_web AS weblink, b.h_firmenname AS company, b.h_strasse AS street, b.h_plz AS zipcode, b.h_ort AS city, a.gebr_aktiv AS active \
FROM all_gebrauchte_products a, all_haendler b \
WHERE a.gebr_user = b.h_loginname AND a.gebr_id>=$start AND a.gebr_id<=$end
sql_attr_uint = active
Can anybody tell me what is going wrong? Or how do I filter by country?
Thanks in advance for your help.
Any column in the sql_query that you don't make an ATTRIBUTE is automatically a FIELD (except the first column, which is always the document ID).
FIELDs are full-text indexed; they are what you can match in the query, i.e. in the MATCH(...) clause.
ATTRIBUTEs are what can be filtered in WHERE, sorted by in ORDER BY, grouped in GROUP BY, or retrieved in the SELECT (or even used in ranking expressions).
So you need country to be an ATTRIBUTE to be able to use it in the WHERE filter.
You don't say, but I guess it's a string. You can use sql_field_string to make a column BOTH a FIELD and an ATTRIBUTE, if you are still interested in being able to full-text query the column too.
(Also, because it's a string, you need a fairly recent version of Sphinx; the ability to filter by string attributes was only added recently.)
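In practice that means adding one line to the index source in sphinx.conf, roughly like this (a sketch; sql_field_string is the directive mentioned above, and country is the alias already produced by the sql_query):
sql_field_string = country
After rebuilding the index, the original query with AND country = '...' should be accepted, because country is then an attribute as well as a field.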

Postgresql, select empty fields

I'm trying to get the empty "text" fields from my table, which I cleared manually with pgAdmin.
Initially those fields contained '' and I can query them like this:
SELECT mystr, mystr1 FROM mytable WHERE mystr='' or mystr1=''
But that does not work if I delete the text from them and leave the cells blank (NULL).
How do I write a query that returns both the '' cells and the cleared (NULL) cells together?
Or the cleared cells alone?
SELECT mystr, mystr1
FROM mytable
WHERE COALESCE(mystr, '') = ''
   OR COALESCE(mystr1, '') = '';
Explanation: the coalesce(a, b, c, ...) function traverses the list a, b, c, ... from left to right and returns the first non-null element. a, b, c can be any expressions (or constants), but they must yield the same type (or be coercible to the same type).
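A quick illustration of that behaviour (my example, not part of the original answer):
-- NULL is replaced by '', so both cleared (NULL) and empty ('') cells match
SELECT COALESCE(NULL, '') = '';  -- true: NULL is skipped, '' is the first non-null argument
SELECT COALESCE('', 'x') = '';   -- true: '' is already non-null, so it is returned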