Inserting commas into values - amazon-redshift

I have a field, let's call it total_sales, where the value it returns is 3621731641.
I would like to convert that so it has thousands-separator commas inserted into it, so it would ultimately return as 3,621,731,641.
I've looked through the Redshift documentation and have not been able to find anything.

A query similar to the following should work for you:
select to_char(<columnname>,'999,999,999,999') from table1;
Make sure the pattern in the second parameter is wide enough for your maximum value.
It will not give you a currency symbol ($) unless you specify 'L' in the second parameter, like below:
select to_char(<columnname>,'l999,999,999,999') from table1;

Money format: select '$'||trim(to_char(1000000000.555,'999G999G999G999.99'))

select to_char(<columnname>,'999,999,999,999') from table1;
or
select to_char(<columnname>,'999G999G999G999') from table1;
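A quick check with the literal value from the question. Note that to_char() pads the result with leading blanks to the pattern width; trim() (as in the money example above) or the FM ("fill mode") prefix suppresses that padding:
select trim(to_char(3621731641, '999,999,999,999'));  -- '3,621,731,641'
select to_char(3621731641, 'FM999,999,999,999');      -- '3,621,731,641'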

Casting rows to arrays in PostgreSQL

I need to query a table as in
SELECT *
FROM table_schema.table_name
only each row needs to be a TEXT[], with array values corresponding to the column values cast to TEXT, coming in the same order as in SELECT *. So, assuming the table has columns a, b and c, I need the result to look like
SELECT ARRAY[a::TEXT, b::TEXT, c::TEXT]
FROM table_schema.table_name
only it shouldn't explicitly list columns by name. Ideally it should look like
SELECT as_text_array(a)
FROM table_schema.table_name AS a
The best I came up with looks ugly and relies on the "hstore" extension:
WITH columnz AS ( -- get ordered column name array
SELECT array_agg(attname::TEXT ORDER BY attnum) AS column_name_array
FROM pg_attribute
WHERE attrelid = 'table_schema.table_name'::regclass AND attnum > 0 AND NOT attisdropped
)
SELECT hstore(a)->(SELECT column_name_array FROM columnz)
FROM table_schema.table_name AS a
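For reference, the hstore hack also requires the extension to be installed in the database first:
CREATE EXTENSION IF NOT EXISTS hstore;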
I have a feeling there must be a simpler way to achieve that.
UPDATE 1
Another query that achieves the same result, but is arguably as ugly and inefficient as the first one, was inspired by the answer by @bspates. It may be even less efficient, but it doesn't rely on extensions:
SELECT r.text_array
FROM table_schema.table_name AS a
INNER JOIN LATERAL ( -- parse the ROW::TEXT representation of a row
SELECT array_agg(COALESCE(replace(val[1], '""', '"'), NULLIF(val[2], ''))) AS text_array
FROM regexp_matches(a::text, -- parse double-quoted and simple values separated by commas
'(?<=\A\(|,) (?: "( (?:[^"]|"")* )" | ([^,"]*) ) (?=,|\)\Z)', 'xg') AS t(val)
) AS r ON TRUE
It is still far from ideal.
UPDATE 2
I tested all 3 options that exist at the moment:
Using JSON. It doesn't rely on any extensions, it is short to write, easy to understand, and the speed is OK.
Using hstore. This alternative is the fastest (>10 times faster than the JSON approach on a 100K-row dataset) but requires an extension. hstore is in general a very handy extension to have, though.
Using a regex to parse the TEXT representation of a ROW. This option is really slow.
A somewhat ugly hack is to convert the row to a JSON value, then unnest the values and aggregate it back to an array:
select array(select (json_each_text(to_json(t))).value) as row_value
from some_table t
Which is to some extent the same as your hstore hack.
If the order of the columns is important, then json and with ordinality can be used to keep it (with ordinality numbers the values as they are emitted, so the original column order can be restored with order by):
select array(select val
from json_each_text(to_json(t)) with ordinality as t(k,val,idx)
order by idx)
from the_table t
The easiest (read: hacky-est) way I can think of is to convert to a string first and then parse that string into an array, like so:
SELECT string_to_array(btrim(table_name::text, '()'), ',') FROM table_name
(The btrim() strips the parentheses that enclose the text form of a row; without it they end up glued to the first and last array elements.)
BUT depending on the size and type of the data in the table, this could perform very badly, and it will mis-split values that themselves contain commas or quotes.

How to add a leading zero when the length of the column is unknown?

How can I add a leading zero to a varchar column in the table when I don't know the length of the value? If the column is not null, then I should add a leading zero.
Examples:
345 - output should be 0345
4567 - output should be 04567
I tried:
SELECT lpad(column1,WHAT TO SPECIFY HERE?, '0')
from table_name;
I will run an update query after I get this.
You may be overthinking this. Use plain concatenation:
SELECT '0' || column1 AS padded_col1 FROM table_name;
If the column is NULL, nothing happens: concatenating anything to NULL returns NULL.
In particular, don't use concat(). You would get '0' for NULL columns, which you do not want.
If you also have empty strings (''), you may need to do more, depending on what you want.
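A quick demonstration of the difference (the NULLIF variant in the last statement is just one possible way to treat empty strings like NULL, not something from the question):
select '0' || NULL;        -- NULL
select concat('0', NULL);  -- '0' (concat() ignores NULL arguments)
select '0' || NULLIF(column1, '') FROM table_name;  -- treats '' the same as NULL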
And since you mentioned your plan to update the table: consider not doing this. You would be adding noise that could be added for display with the simple expression above. A VIEW might come in handy for this.
If all your varchar values are in fact valid numbers, use an appropriate numeric data type instead and format for display with the same expression as above. The concatenation automatically produces a text result.
If circumstances should force your hand and you need to update anyway, consider this:
UPDATE table_name
SET column1 = '0' || column1
WHERE column1 IS DISTINCT FROM '0' || column1;
The added WHERE clause avoids empty updates. Compare:
How do I (or can I) SELECT DISTINCT on multiple columns?
Try concat() instead:
SELECT concat(0::text,column1) from table_name;

PostgreSQL use function result in ORDER BY

Is there a way to use the results of a function call in the order by clause?
My current attempt (I've also tried some slight variations).
SELECT it.item_type_id, it.asset_tag, split_part(it.asset_tag, 'ASSET', 2)::INT as tag_num
FROM serials.item_types it
WHERE it.asset_tag LIKE 'ASSET%'
ORDER BY split_part(it.asset_tag, 'ASSET', 2)::INT;
While my general assumption is that this can't be done, I wanted to know if there was a way to accomplish this that I wasn't thinking of.
EDIT: The query above gives the following error [22P02] ERROR: invalid input syntax for integer: "******"
Your query is generally OK, the problem is that for some row the result of split_part(it.asset_tag, 'ASSET', 2) is the string ******. And that string cannot be cast to an integer.
You may want to remove the order by and the cast in the select list and add a where split_part(it.asset_tag, 'ASSET', 2) = '******', for instance, to narrow down that data issue.
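For example, something like this should list all offending rows (the regular expression filter is my addition, assuming valid tags contain only digits after 'ASSET'):
SELECT it.item_type_id, it.asset_tag, split_part(it.asset_tag, 'ASSET', 2) AS tag_txt
FROM serials.item_types it
WHERE it.asset_tag LIKE 'ASSET%'
  AND split_part(it.asset_tag, 'ASSET', 2) !~ '^[0-9]+$';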
Once that is resolved, having such a function in the order by list is perfectly fine. The quoted section of the documentation in the comments on the question refers to applying an order by clause to the results of UNION, INTERSECT, etc. queries. In other words, the order by found in this query:
(select column1 as result_column1 from table1
 union
 select column2 from table2)
order by result_column1
can only refer to the accumulated result columns, not to expressions on individual rows.

Trimming parts of a word but each word is different size

I have a table with values like this:
book;65
book;1000
table;66
restaurant;1202
park;2
park;44444
Is there a way, using Postgres SQL, to remove the semicolon and everything after it, regardless of the length of the word?
I plan on doing a query that goes something like this after I figure this out:
select col1, modified_col_1
from table_1
--modified is without the semi-colon and everything after
You can use substring and strpos() for this:
select col1, substring(col1, 1, strpos(col1, ';') - 1) as modified_col_1
from table_1
The above will give an error if there are values without a ;
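You can see the failure mode with a literal: strpos() returns 0 when no ; is present, so the length argument becomes -1:
select substring('book', 1, strpos('book', ';') - 1);
-- ERROR: negative substring length not allowed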
Another option would be to split the string into an array and then just pick the first element:
select (string_to_array(col1, ';'))[1]
from table_1
This will also work if no ; is present.
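A quick check with sample values (the second literal, without a semicolon, is my addition to illustrate that case):
select (string_to_array('book;65', ';'))[1];  -- 'book'
select (string_to_array('book', ';'))[1];     -- 'book'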

Dynamic number of fields in table

I have a problem with T-SQL. I have a number of tables, and each table contains a different number of fields with different names.
I need to dynamically take all these tables, read all records, and turn each record into a string where the values are separated by commas. And then do something with this string.
I think I need to use CURSORS, but I can't FETCH them without knowing the concrete number of fields along with their names and types. Maybe I can create a table variable with a dynamic number of fields?
Thanks a lot!
Makarov Artem.
I would repurpose one of the many T-SQL scripts written to generate INSERT statements. They do exactly what you require, namely:
Reverse engineer a given table to determine columns names and types
Generate a delimited string of values
The most complete example I've found is here.
But just a simple Google search for "INSERT STATEMENT GENERATOR" will yield several examples that you can repurpose to fit your needs.
Best of luck!
SELECT
ORDINAL_POSITION
,COLUMN_NAME
,DATA_TYPE
,CHARACTER_MAXIMUM_LENGTH
,IS_NULLABLE
,COLUMN_DEFAULT
FROM
INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME = 'MYTABLE'
ORDER BY
ORDINAL_POSITION ASC;
from http://weblogs.sqlteam.com/joew/archive/2008/04/27/60574.aspx
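Building on that, one way to produce the comma-separated string per row is to assemble the SELECT list dynamically from the same metadata. A rough sketch, not from the original answer, assuming the table is named MYTABLE and that every column can be cast to nvarchar(max):
DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- build "ISNULL(CAST([c1] AS nvarchar(max)), '') + ',' + ..." from the column metadata
SELECT @cols = STUFF((
    SELECT ' + '','' + ISNULL(CAST(' + QUOTENAME(COLUMN_NAME) + ' AS nvarchar(max)), '''')'
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'MYTABLE'
    ORDER BY ORDINAL_POSITION
    FOR XML PATH(''), TYPE
).value('.', 'nvarchar(max)'), 1, 9, '');

-- run the generated SELECT; each row comes back as one comma-separated string
SET @sql = N'SELECT ' + @cols + N' AS row_as_csv FROM MYTABLE;';
EXEC sp_executesql @sql;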
Perhaps you can do something with this.
select T2.X.query('for $i in *
                   return concat(data($i), ",")'
                  ).value('.', 'nvarchar(max)') as C
from (
    select *
    from YourTable
    for xml path('Row'), elements xsinil, type
) as T1(X)
cross apply T1.X.nodes('/Row') as T2(X)
It will give you one row for each row in YourTable with each value in YourTable separated by a comma in the column C.
This builds an XML for the entire table and then parses that XML. Might get you into trouble if you have tables with a lot of rows.
BTW: I saw from a comment that you can "use only pure SQL". I really don't think this qualifies as "pure SQL" :).