Question about the result with the presence of a comment in MySQL

Recently I ran into an issue with some code that gave me a strange result. After a few minutes, purely by chance, I found the cause. I made this simple test to check that it was not a problem with my query:
select
NOW() as date_1,
'4: 33.32%' as string_1,
--- comment,
NOW() as date_2,
--- comment
'4: 33.32%' as string_2
I got this result (date_2 and string_2 came back as negative numbers instead of the expected date and string):
By accident, I made this little change to the comments (adding a space before the third hyphen) for two extra columns:
select
NOW() as date_1,
'4: 33.32%' as string_1,
--- comment,
NOW() as date_2,
--- comment
'4: 33.32%' as string_2,
-- - comment,
NOW() as date_3,
-- - comment
'4: 33.32%' as string_3
With this, I got the right answer for the new columns (date_3 and string_3):
However, I still have a doubt about the comments before the second date and string. Why did the third consecutive hyphen affect the result, and what calculation did it force MySQL to perform?
version: 10.3.8-MariaDB

A SQL comment begins with --<space> (two hyphens followed by a space). When you write ---<space>, the first - is not part of the comment; it's a minus sign in front of the comment. So it's as if you wrote:
select
NOW() as date_1,
'4: 33.32%' as string_1,
- -- comment,
NOW() as date_2,
- -- comment
'4: 33.32%' as string_2,
-- - comment,
NOW() as date_3,
-- - comment
'4: 33.32%' as string_3
And when you remove the comments, this is equivalent to
select
NOW() as date_1,
'4: 33.32%' as string_1,
- NOW() as date_2,
- '4: 33.32%' as string_2,
NOW() as date_3,
'4: 33.32%' as string_3
The - operator converts its operand to a number first: the datetime returned by NOW() is converted to the number 20190621200233, and the string '4: 33.32%' is converted to the number 4.000 (string-to-number conversion stops at the first character that cannot be part of a number). The - operator then returns the negative of these numbers, so you see -20190621200233 and -4.000 in the results.
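For reference, the same effect can be reproduced with a bare unary minus, no comments involved (a minimal sketch; the exact number depends on the current date and time):
-- MariaDB first casts NOW() to a number such as 20190621200233 and the string to 4,
-- then negates them, giving roughly -20190621200233 and -4
select - NOW() as negated_date, - '4: 33.32%' as negated_string;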

Related

Issue with PostgreSQL 13.0 compilation on CentOS 7/8 on the timetz check

If anyone has experience compiling PostgreSQL 13.0 from source on CentOS 7 or 8, please feel free to share your knowledge and help me out.
On both versions I had a problem with the "timetz" regression check, which looks like this:
...
date ... ok 1948 ms
time ... ok 1613 ms
timetz ... FAILED 1631 ms
timestamp ... ok 2133 ms
timestamptz ... ok 2397 ms
interval ... ok 2197 ms
inet ... ok 2153 ms
...
I tried changing the timezone to UTC etc., but without success. Maybe I am totally wrong. If you know what is wrong, please give me a hint or some help.
Thanks folks!
Added for debugging:
file: /home/release/src/postgresql-13.0/src/test/regress/results/timetz.out
SELECT '24:00:00.01'::timetz; -- not allowed
ERROR: date/time field value out of range: "24:00:00.01"
LINE 1: SELECT '24:00:00.01'::timetz;
^
SELECT '23:59:60.01'::timetz; -- not allowed
ERROR: date/time field value out of range: "23:59:60.01"
LINE 1: SELECT '23:59:60.01'::timetz;
^
SELECT '24:01:00'::timetz; -- not allowed
ERROR: date/time field value out of range: "24:01:00"
LINE 1: SELECT '24:01:00'::timetz;
^
SELECT '25:00:00'::timetz; -- not allowed
ERROR: date/time field value out of range: "25:00:00"
LINE 1: SELECT '25:00:00'::timetz;
^
--
-- TIME simple math
--
-- We now make a distinction between time and intervals,
-- and adding two times together makes no sense at all.
-- Leave in one query to show that it is rejected,
-- and do the rest of the testing in horology.sql
-- where we do mixed-type arithmetic. - thomas 2000-12-02
SELECT f1 + time with time zone '00:01' AS "Illegal" FROM TIMETZ_TBL;
ERROR: operator does not exist: time with time zone + time with time zone
LINE 1: SELECT f1 + time with time zone '00:01' AS "Illegal" FROM TI...
^
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
A bug has been filed, see here. It should be fixed in the next minor release.

Change currency sign in Oracle D2K reports?

I want to replace the $ sign with 'Rs.' in Oracle D2K reports. On some systems it displays Rs, but on others it shows $. Where do I have to change the sign?
You can use the currency in your NLS_TERRITORY settings as follows:
select to_char(123456789.91, 'L999,999,999,990.00') from dual;
L999,999,999,990.00 is the format mask; you may be able to set it in the property sheet (it's been a while since I used Reports), or you can use a SQL function as in the example above.
Or you can take the number, format it as a string (as above), and concatenate it with the character you want to display. Obviously this isn't as flexible:
select 'Rs'||to_char(123456789.91, '999,999,999,990.00') from dual;
You can check your NLS settings by connecting in SQL*Plus and running:
SELECT * FROM nls_session_parameters;
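If the report picks up the session's NLS settings, another option (a sketch, assuming you are able to run an ALTER SESSION in the report's database session) is to override NLS_CURRENCY so the L format element renders as Rs.:
ALTER SESSION SET NLS_CURRENCY = 'Rs.';
-- the L element now uses the session currency symbol instead of $
SELECT TO_CHAR(123456789.91, 'L999,999,999,990.00') AS amount FROM dual;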
You can use this code as well.
SELECT TO_CHAR
(-10000,
'L99G999D99MI',
'NLS_NUMERIC_CHARACTERS = '',.''
NLS_CURRENCY = ''RS'' '
) "Amount"
FROM DUAL;

How to remove everything after certain character in SQL?

I've got a list of 400+ rows. Each row looks similar to this: example-example123. I would like to remove everything past '-' so that I'm left with just the beginning part: example
Any help would be greatly appreciated.
try it like this:
UPDATE table SET column_name=LEFT(column_name, INSTR(column_name, '-')-1)
WHERE INSTR(column_name, '-')>0;
If you only want to select you do it this way:
SELECT LEFT(column_name, INSTR(column_name, '-')-1) FROM table;
The INSTR function gets you the position of your '-'; you then update the column value to become the characters from the first letter of the string up to the position of the '-' minus 1.
Here's a fiddle
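To illustrate the expression above against a literal value (a hypothetical sample, not data from the question):
SELECT LEFT('example-example123', INSTR('example-example123', '-') - 1);
-- returns 'example'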
You can use the SQL TRIM() function:
SELECT TRIM(TRAILING '-' FROM 'BHEXLIVESQLVS1-LIVE61MSSQL')
AS TRAILING_TRIM
FROM table;
The result should be "BHEXLIVESQLVS1"
select SUBSTRING(col_name,0,Charindex ('-',col_name))
Assuming you need to do this in a query, you can use the string functions of your database.
For DB2 this would look something like
select SUBSTR(YOURCOLUMN, 1, LOCATE('-',YOURCOLUMN)) from YOURTABLE where ...
In SQL Server you could use
SUBSTRING
and
CHARINDEX
For SQL Server you can do this:
LEFT(columnName, CHARINDEX('-', columnName)) to remove every character after '-'.
To remove the special character as well, do this:
LEFT(columnName, CHARINDEX('-', columnName) - 1)
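Against a literal value (a hypothetical sample) the two variants behave like this:
SELECT LEFT('example-example123', CHARINDEX('-', 'example-example123'));     -- 'example-'
SELECT LEFT('example-example123', CHARINDEX('-', 'example-example123') - 1); -- 'example'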
SELECT SUBSTRING(col_name,0,Charindex ('-',col_name)) FROM table_name
WHERE col_name='yourvalue'
Eg.
SELECT SUBSTRING(TPBS_Path,0,Charindex ('->',TPBS_Path)) FROM [CFG].[CFG_T_Project_Breakdown_Structure] WHERE TPBS_Parent_PBS_Code='LE180404'
Here TPBS_Path is the column to be trimmed, [CFG].[CFG_T_Project_Breakdown_Structure] is the table name, and TPBS_Parent_PBS_Code='LE180404' is the selection condition. Everything after '->' will be trimmed.

Raise error when date is not valid

What I'm trying to do is raise an out-of-range error for dates outside of the supported range, like the typecast does.
I'm using PostgreSQL-9.1.6 on CentOS. The issue is below...
postgres=# select to_date('20130229','yyyymmdd');
to_date
------------
2013-03-01
(1 row)
But the output I want to see is:
postgres=# select '20130229'::date;
ERROR: date/time field value out of range: "20130229"
Surfing the web I found an informative page, so I added an IS_VALID_JULIAN check to the body of to_date, adding the four lines marked + below to formatting.c:
Datum
to_date(PG_FUNCTION_ARGS)
{
    text       *date_txt = PG_GETARG_TEXT_P(0);
    text       *fmt = PG_GETARG_TEXT_P(1);
    DateADT     result;
    struct pg_tm tm;
    fsec_t      fsec;

    do_to_timestamp(date_txt, fmt, &tm, &fsec);

+   if (!IS_VALID_JULIAN(tm.tm_year, tm.tm_mon, tm.tm_mday))
+       ereport(ERROR,
+               (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
+                errmsg("date out of range: \"%s\"", text_to_cstring(date_txt))));

    result = date2j(tm.tm_year, tm.tm_mon, tm.tm_mday) - POSTGRES_EPOCH_JDATE;

    PG_RETURN_DATEADT(result);
}
Then I rebuilt PostgreSQL:
pg_ctl -m fast stop # 1. stopping pgsql
vi src/backend/utils/adt/formatting.c # 2. using the version above
rm -rf /usr/local/pgsql/* # 3. getting rid of all bin files
./configure --prefix=/usr/local/pgsql \
    --enable-nls --with-perl --with-libxml \
    --with-pam --with-openssl
make && make install # 4. rebuilding source
pg_ctl start # 5. starting the engine
My bin directory info is below.
[/home/postgres]echo $PATH
/usr/lib64/qt-3.3/bin:
/usr/local/bin:
/bin:
/usr/bin:
/usr/local/sbin:
/usr/sbin:
/sbin:
/home/postgres/bin:
/usr/bin:
/usr/local/pgsql/bin:
/usr/local/pgpool/bin:
/usr/local/pgtop/bin/pg_top:
[/home/postgres]which pg_ctl
/usr/local/pgsql/bin/pg_ctl
[/home/postgres]which postgres
/usr/local/pgsql/bin/postgres
[/usr/local/bin]which psql
/usr/local/pgsql/bin/psql
But upon checking to_date again, the result remained the same.
postgres=# select to_date('20130229','yyyymmdd');
to_date
------------
2013-03-01
(1 row)
Is there anything I missed?
You can write your own to_date() function, but you have to call it with its schema-qualified name. (I used the schema "public", but there's nothing special about that.)
create or replace function public.to_date(any_date text, format_string text)
returns date as
$$
select to_date((any_date::date)::text, format_string);
$$
language sql;
Using the bare function name executes the native to_date() function.
select to_date('20130229', 'yyyymmdd');
2013-03-01
Using the schema-qualified name executes the user-defined function.
select public.to_date('20130229', 'yyyymmdd');
ERROR: date/time field value out of range: "20130229"
SQL state: 22008
I know that's not quite what you're looking for. But . . .
It's simpler than rebuilding PostgreSQL from source.
Fixing up your existing SQL and PLPGSQL source code is a simple search-and-replace with a streaming editor. I'm pretty sure that can't go wrong, as long as you really want every use of the native to_date() to be public.to_date().
The native to_date() function will still work as designed. Extensions and other code might rely on its somewhat peculiar behavior. Think hard and long before you change the behavior of native functions.
New SQL and PLPGSQL would need to be reviewed, though. I wouldn't expect developers to remember to write public.to_date() every time. If you use version control, you might be able to write a precommit hook to make sure only public.to_date() is used.
The native to_date() function has behavior I don't see documented. Not only can you call it with February 29, you can call it with February 345, or February 9999.
select to_date('201302345', 'yyyymmdd');
2014-01-11
select to_date('2013029999', 'yyyymmdd');
2040-06-17

T-SQL Pattern matching issue

I need to determine whether a given string is of the format 'abcd efg -4', i.e. '% -number'. I need to isolate the '4' and increment it to '5'.
The rest of the string can contain dates and times like so:
abcd efg - ghis asdjh - 07-07-2011 05-30-34 AM
This string, for instance, does NOT satisfy the pattern, i.e. -[number]. For this string, the output from my SQL should be:
abcd efg - ghis asdjh - 07-07-2011 05-30-34 AM -1
If the above is input, I should get:
abcd efg - ghis asdjh - 07-07-2011 05-30-34 AM -2
The number can be any number of digits i.e. so a string could be 'abcd efg -123', and my T-SQL would return 'abcd efg -124'
This T-SQL code is going to be embedded in a stored procedure. I know I could implement a .Net stored proc/function and use Regex to do this, however there are various access issues which I have to get around in order to switch-on the CLR on the SQL Server.
I have tried the following patterns:
'%[ ][-]%[0-9]': this works for most cases, but put in an extra space somewhere and it fails.
'%[ ][-]%[^a-z][^A-Z]%[0-9]': this manages to skip '-4' (as shown in the above example), but only works in some cases.
'%[ ][-][^a-z][^A-Z]%[0-9]': this again works in some cases and not in others...
This pattern ' -[number]' would always be at the end of the string; if it's not present, the code would append it, as seen in the examples above.
I would like a pattern that works for ALL cases...
Interesting problem. You do realize that this is much more difficult than it really needs to be. If you properly normalized your table so that each column only contains one piece of information, you wouldn't have a problem at all. If it's possible, I would strongly encourage you to consider normalizing this data.
If you cannot normalize the data, then I would approach this backwards. You said the dash-number you are looking for would always appear at the end of the data. Why not reverse the string, parse it, and put it back together? By reversing the string, you will be looking for '[0-9]%[-]', which is a whole lot easier to find.
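A quick sketch of that idea on one of the sample strings (hypothetical literals, T-SQL syntax):
SELECT REVERSE('abcd efg -123');                          -- '321- gfe dcba'
SELECT PATINDEX('[0-9]%[-]%', REVERSE('abcd efg -123'));  -- 1, i.e. the string ends in -number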
I put your test data into a table variable so that I could test the code I've come up with. You can copy/paste this to a query window to see how it works.
Declare @Temp Table(Data VarChar(100))
Insert Into @Temp Values('abcd efg - ghis asdjh - 07-07-2011 05-30-34 AM')
Insert Into @Temp Values('abcd efg - ghis asdjh - 07-07-2011 05-30-34 AM -1')
Insert Into @Temp Values('abcd efg - ghis asdjh - 07-07-2011 05-30-34 AM -2')
Insert Into @Temp Values('abcd efg -123')
Select Case When PatIndex('[0-9]%[-]%', Reverse(Data)) = 1
Then Left(Data, Len(Data)-CharIndex('-', Reverse(Data))) + '-' +
Convert(VarChar(20), 1+Convert(Int, Reverse(Left(Reverse(Data), CharIndex('-', Reverse(Data))-1))))
Else Data + ' -1'
End
From @Temp
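Run against the sample rows above, this should yield what the question asks for: the first row gets ' -1' appended, the rows ending in '-1' and '-2' become '-2' and '-3', and 'abcd efg -123' becomes 'abcd efg -124'.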