How to concatenate the currency symbol for negative numbers? - postgresql

I am trying to show the currency symbol with the numbers, using the CONCAT function:
select concat('$', "amount") from payments;
This works fine when the amount is positive, but when the amount is negative the currency symbol is concatenated before the minus sign, e.g.:
$-243.44
What is the proper way to do this?

You can use a CASE expression:
select case when amount < 0 then concat('-$', abs("amount")) else concat('$', "amount") end from payments;
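For instance, a self-contained check against made-up sample values mirroring the question:
select case when amount < 0 then concat('-$', abs(amount))
            else concat('$', amount)
       end as formatted_amount
from (values (-243.44), (243.44)) as t(amount);
-- -$243.44
-- $243.44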

I suggest that you take advantage of Postgres' built-in money type, e.g.
SELECT '-243.44'::float8::numeric::money;
This printed -£243.44 on the demo tool I am using, which appears to be located in the UK. The actual currency symbol you see would depend on your Postgres locale settings.
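The symbol comes from the server's lc_monetary setting, which you can override per session. A sketch, assuming the en_US.UTF-8 locale is installed on the server:
SET lc_monetary = 'en_US.UTF-8';
SELECT '-243.44'::numeric::money;
-- typically -$243.44 with this locale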
If you really need to do this concatenation yourself, you could use REGEXP_REPLACE:
WITH cte AS (
    SELECT '-123.456'::text AS val UNION ALL
    SELECT '123.456'::text
)
SELECT
    val,
    REGEXP_REPLACE(val, '^(-?)', '\1$') AS val_out
FROM cte;
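The captured group keeps any leading minus sign in front of the inserted symbol, so this returns $123.456 for the positive value and -$123.456 for the negative one.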


to_date in HANA with mixed date formats

What can you do when the source data contains mixed date formats?
I have a case where we are using a to_date function to read data from a table, but I am getting an error because some of the records have the date format YYYY-DD-MM instead of YYYY-MM-DD.
How can a uniform solution be applied here?
To handle this situation (arbitrary text should be converted into a structured date value), I would probably work with regular expressions.
That way you can select the set of records that fit the format you want to support and perform the type conversion on those records.
For example:
create column table date_vals (dateval nvarchar(4000), date_val date);
insert into date_vals values ('2018-01-23', NULL);
insert into date_vals values ('12/23/2016', NULL);
select dateval, to_date(dateval, 'YYYY-MM-DD') as SQL_DATE
from date_vals
where
dateval like_regexpr '[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}'
union all
select dateval, to_date(dateval, 'MM/DD/YYYY') as SQL_DATE
from date_vals
where
dateval like_regexpr '[[:digit:]]{2}/[[:digit:]]{2}/[[:digit:]]{4}';
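With the two sample rows above, the first branch picks up '2018-01-23' and converts it via YYYY-MM-DD, while the second picks up '12/23/2016' and converts it via MM/DD/YYYY.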
This approach also makes it easy to review the non-matching records and possibly come up with additional required patterns.
Why not use a CASE WHEN in the SELECT, where you would test the different regular expressions and then use to_date to return the date with the proper format?
This would avoid the UNION ALL and the two SELECT statements.
You could add more formats without adding another SELECT to the union.
Unless LIKE_REGEXPR only works in the WHERE clause (I have to admit I never tried that function).
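For what it's worth, a minimal sketch of that CASE-based variant, assuming LIKE_REGEXPR is accepted as a predicate inside CASE (which, as noted above, is untested):
select dateval,
       case
         when dateval like_regexpr '[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}'
           then to_date(dateval, 'YYYY-MM-DD')
         when dateval like_regexpr '[[:digit:]]{2}/[[:digit:]]{2}/[[:digit:]]{4}'
           then to_date(dateval, 'MM/DD/YYYY')
         else null  -- non-matching records surface as NULL for review
       end as SQL_DATE
from date_vals;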

PostgreSQL use function result in ORDER BY

Is there a way to use the results of a function call in the order by clause?
My current attempt (I've also tried some slight variations).
SELECT it.item_type_id, it.asset_tag, split_part(it.asset_tag, 'ASSET', 2)::INT as tag_num
FROM serials.item_types it
WHERE it.asset_tag LIKE 'ASSET%'
ORDER BY split_part(it.asset_tag, 'ASSET', 2)::INT;
While my general assumption is that this can't be done, I wanted to know if there was a way to accomplish this that I wasn't thinking of.
EDIT: The query above gives the following error [22P02] ERROR: invalid input syntax for integer: "******"
Your query is generally OK; the problem is that for some row the result of split_part(it.asset_tag, 'ASSET', 2) is the string ******, and that string cannot be cast to an integer.
You may want to remove the ORDER BY and the cast in the select list and add a WHERE split_part(it.asset_tag, 'ASSET', 2) = '******', for instance, to narrow down that data issue.
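For instance, a diagnostic sketch that surfaces every row whose suffix is not purely numeric (using a regular expression rather than a literal match):
SELECT it.asset_tag
FROM serials.item_types it
WHERE it.asset_tag LIKE 'ASSET%'
  AND split_part(it.asset_tag, 'ASSET', 2) !~ '^[0-9]+$';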
Once that is resolved, having such a function in the order by list is perfectly fine. The quoted section of the documentation in the comments on the question is referring to applying an ORDER BY clause to the results of UNION, INTERSECT, etc. queries. In other words, the order by found in this query:
(select column1 as result_column1 from table1
 union
 select column2 from table2)
order by result_column1
can only refer to the accumulated result columns, not to expressions on individual rows.

Postgres: cast, money and NULL values

In my select I am using this in order to convert an integer to a money form.
CAST(mytable.discount AS money) AS Discount
But I cannot figure out how to avoid the 'NULL' output when the join legitimately fails to bring back the optional value.
I've done this to avoid NULLS in the past:
COALESCE(mytable.voucher,'----') AS Voucher
But I cannot figure out how to combine CAST and COALESCE for the same field. I just want my NULL discount fields to show as '----'.
That's tricky, as "CAST(mytable.discount AS money)" converts NULLs to $0.
That's actually not what happens; it's an implicit cast that happens after that.
An expression must have one particular type, in this case money, so you see $0.00 as a result of my proposed expression because it is the '----' that is converted to money, not the NULL.
As a solution you may explicitly convert the inner expression to text, like:
SELECT COALESCE(CAST('1' as money)::text, '--');
or
SELECT COALESCE(CAST(null as money)::text, '--');
SQLFiddle demo: http://sqlfiddle.com/#!12/d41d8/2866
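Applied to the column from the question, that becomes:
SELECT COALESCE(CAST(mytable.discount AS money)::text, '----') AS Discount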

Dynamic number of fields in table

I have a problem with T-SQL. I have a number of tables, and each table contains a different number of fields with different names.
I need to dynamically take all these tables, read all records, and turn each record into a string where each value is separated by commas, then do something with that string.
I think I need to use CURSORS, but I can't FETCH them without knowing the concrete number of fields with their names and types. Maybe I can create a table variable with a dynamic number of fields?
Thanks a lot!
Makarov Artem.
I would repurpose one of the many T-SQL scripts written to generate INSERT statements. They do exactly what you require. Namely
Reverse engineer a given table to determine column names and types
Generate a delimited string of values
The most complete example I've found is here
But just a simple Google search for "INSERT STATEMENT GENERATOR" will yield several examples that you can repurpose to fit your needs.
Best of luck!
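As a rough illustration of that approach, here is a minimal dynamic-SQL sketch (the table name MYTABLE is a placeholder, and the CAST to nvarchar(max) will not cover every exotic column type):
DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- Build one big concatenation expression from the table's column metadata
SELECT @cols = STUFF((
    SELECT ' + '','' + ISNULL(CAST(' + QUOTENAME(COLUMN_NAME) + ' AS nvarchar(max)), '''')'
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'MYTABLE'
    ORDER BY ORDINAL_POSITION
    FOR XML PATH('')
), 1, 9, '');  -- strip the leading " + ',' + "

SET @sql = N'SELECT ' + @cols + N' AS row_csv FROM MYTABLE;';
EXEC sp_executesql @sql;  -- one comma-separated string per row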
SELECT
ORDINAL_POSITION
,COLUMN_NAME
,DATA_TYPE
,CHARACTER_MAXIMUM_LENGTH
,IS_NULLABLE
,COLUMN_DEFAULT
FROM
INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME = 'MYTABLE'
ORDER BY
ORDINAL_POSITION ASC;
from http://weblogs.sqlteam.com/joew/archive/2008/04/27/60574.aspx
Perhaps you can do something with this.
select T2.X.query('for $i in *
                   return concat(data($i), ",")'
                 ).value('.', 'nvarchar(max)') as C
from (
    select *
    from YourTable
    for xml path('Row'), elements xsinil, type
) as T1(X)
cross apply T1.X.nodes('/Row') T2(X)
It will give you one row for each row in YourTable with each value in YourTable separated by a comma in the column C.
This builds an XML for the entire table and then parses that XML. Might get you into trouble if you have tables with a lot of rows.
BTW: I saw from a comment that you can "use only pure SQL". I really don't think this qualifies as "pure SQL" :).

SQL DateTime Conversion Fails when No Conversion Should be Taking Place

I'm modifying an existing query for a client, and I've encountered a somewhat baffling issue.
Our client uses SQL Server 2008 R2 and the database in question provides the user the ability to specify custom fields for one of its tables by making use of an EAV structure. All of the values stored in this structure are varchar(255), and several of the fields are intended to store dates. The query in question is being modified to use two of these fields and compare them (one is a start, the other is an end) against the current date to determine which row is "current".
The issue I'm having is that part of the query does a CONVERT(DateTime, eav.Value) in order to turn the varchar into a DateTime. The conversions themselves all succeed and I can include the value as part of the SELECT clause, but part of the query is giving me a conversion error:
Conversion failed when converting date and/or time from character string.
The real kicker is this: if I define the base for this query (getting a list of entities with the two custom field values flattened into a single row) as a view and select against the view and filter the view by getdate(), then it works correctly, but it fails if I add a join to a second table using one of the (non-date) fields from the view. I realize that this might be somewhat hard to follow, so I can post an example query if desired, but this question is already getting a little long.
I've tried recreating the basic structure in another database and including sample data, but the new database behaves as expected, so I'm at a loss here.
EDIT In case it's useful, here's the statement for the view:
create view Festival as
select
    e.EntityId as FestivalId,
    e.LookupAs as FestivalName,
    convert(Date, nvs.Value) as ActivityStart,
    convert(Date, nve.Value) as ActivityEnd
from tblEntity e
left join CustomControl ccs on ccs.ShortName = 'Activity Start Date'
left join CustomControl cce on cce.ShortName = 'Activity End Date'
left join tblEntityNameValue nvs on nvs.CustomControlId = ccs.IdCustomControl and nvs.EntityId = e.EntityId
left join tblEntityNameValue nve on nve.CustomControlId = cce.IdCustomControl and nve.EntityId = e.EntityId
where e.EntityType = 'Festival'
The failing query is this:
select *
from Festival f
join FestivalAttendeeAll fa on fa.FestivalId = f.FestivalId
where getdate() between f.ActivityStart and f.ActivityEnd
Yet this works:
select *
from Festival f
where getdate() between f.ActivityStart and f.ActivityEnd
(EntityId/FestivalId are int columns)
I've encountered this type of error before; it's due to the "order of operations" performed by the execution plan.
You are getting that error message because the execution plan for your statement (generated by the optimizer) is performing the CONVERT() operation on rows that contain string values that can't be converted to DATETIME.
Basically, you do not have control over which rows the optimizer performs that conversion on. You know that you only need that conversion done on certain rows, and you have predicates (WHERE or ON clauses) that exclude those rows (limit the rows to those that need the conversion), but your execution plan is performing the CONVERT() operation on rows BEFORE those rows are excluded.
(For example, the optimizer may be electing to a do a table scan, and performing that conversion on every row, before any predicate is being applied.)
I can't give a specific answer, without a specific question and specific SQL that is generating the error.
One simple approach to addressing the problem would be to use the ISDATE() function to test whether the string value can be converted to a date.
That is, replace:
CONVERT(DATETIME,eav.Value)
with:
CASE WHEN ISDATE(eav.Value) > 0 THEN CONVERT(DATETIME, eav.Value) ELSE NULL END
or:
CONVERT(DATETIME, CASE WHEN ISDATE(eav.Value) > 0 THEN eav.Value ELSE NULL END)
Note that the ISDATE() function is subject to some significant limitations, such as being affected by the DATEFORMAT and LANGUAGE settings of the session.
If there is some other indication on the eav row, you could use some other test, to conditionally perform the conversion.
CASE WHEN eav.ValueIsDateTime=1 THEN CONVERT(DATETIME, eav.Value) ELSE NULL END
The other approach I've used is to try to gain some modicum of control over the order of operations of the optimizer, using inline views or Common Table Expressions, with operations that force the optimizer to materialize them and apply predicates, so that happens BEFORE any conversion in the outer query.
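A sketch of that second approach: since SQL Server does not guarantee that a CTE or inline view is evaluated before the outer query, the more reliable fence is to materialize the filtered rows into a temporary table first (table names below are the ones from the question):
-- Step 1: the filter that is known to work on its own (no join yet)
SELECT f.*
INTO #current_festivals
FROM Festival f
WHERE getdate() BETWEEN f.ActivityStart AND f.ActivityEnd;

-- Step 2: join against the already-converted, already-filtered rows
SELECT *
FROM #current_festivals cf
JOIN FestivalAttendeeAll fa ON fa.FestivalId = cf.FestivalId;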