How can I trim a column in DB2 for the input below?
00652835065718
00052835065718
I need to use SQL to remove all leading zeros from the values so that the final output will be:
652835065718
52835065718
The column is VARCHAR.
I have tried the query below:
select TRIM(L '0' FROM ID) from ITM where ID = '0652835065718'
but it's not working in my DB2 version 9.1.5
The answer depends on your DB2 platform.
If you're on DB2 for Linux/Unix/Windows, then the answer is similar to what #Teun Loonen said; I'm guessing that it's the backticks that are messing things up. The correct syntax for the TRIM function in DB2 is:
SELECT TRIM(LEADING '0' FROM ID)
FROM ITM
If you're on Mainframe DB2, then you can use the LTRIM scalar function:
SELECT LTRIM(ID, '0')
FROM ITM
DB2 automatically removes leading 0s from integers, so just use CAST, like this:
SELECT CAST(ID AS INTEGER)
Here is a test.
SELECT CAST('0012345' AS INTEGER) FROM SYSIBM.SYSDUMMY1
1
-----------
12345
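Note that the 14-digit sample IDs in the question (e.g. 652835065718) are larger than INTEGER's maximum of 2,147,483,647, so a BIGINT cast would be needed for those values. Whether an explicit VARCHAR-to-BIGINT cast is accepted depends on your DB2 version and platform, so treat this as a sketch:
SELECT CAST(ID AS BIGINT) -- BIGINT rather than INTEGER: the sample IDs overflow INTEGER's range
FROM ITM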
By simply using the TRIM function:
select TRIM( LEADING '0' FROM `field` ) as field FROM table
Related
I have to determine whether a column contains a numeric or an alphanumeric value.
My SQL code is:
select
  case when trim(TRANSLATE(my_column, '0123456789-,.', ' ')) is null
    then 'integer'
    else 'char'
  end
from my_table
and I have to translate it to JPQL.
Thanks!
Your requirement is easy to achieve using a native Oracle query:
SELECT *
FROM my_table
WHERE REGEXP_LIKE(my_column, '[A-Za-z0-9]');
The above will match any record having at least one alphanumeric character in the my_column column. You can't do this from JPQL directly. The best you might be able to do is to use a large number of LIKE statements, e.g.
SELECT *
FROM my_table
WHERE my_column LIKE '%A%' OR
my_column LIKE '%B%' OR
...
for every alphanumeric value. But this is unwieldy and should probably be avoided. There is nothing wrong with using a native query if the situation merits it.
We are migrating DB2 data to PostgreSQL 11.x using AWS DMS. We have VARCHAR fields in DB2 with trailing spaces, and without any TRIM these fields work fine when used in a WHERE clause; I think DB2 trims them internally since they are VARCHAR fields. But after moving to PostgreSQL, these fields do not work without TRIM, and sometimes they give unexpected results even when you use TRIM. Below is the detailed problem.
Source: DB2 - RECIP_NUM -- VARCHAR(10) -- 'ST001 '
select RECIP_NUMBER, SERV_TYPE, LENGTH(SERV_TYPE) AS before_trim_COL_LENGTH, LENGTH(trim(SERV_TYPE)) AS after_trim_COL_LENGTH
from serv_type rst
WHERE SERV_TYPE = 'ST001' -- THIS WORKS FINE WITHOUT TRIM
Output: (screenshot of the DB2 result)
Target: PGSQL -- RECIP_NUM -- VARCHAR(10) -- 'ST001 '
select RECIP_NUMBER, SERV_TYPE, LENGTH(SERV_TYPE) AS COL_LENGTH
from serv_type rst
WHERE trim(SERV_TYPE) = 'ST001' -- THIS IS NOT GIVING ANY OUTPUT WITHOUT TRIM
Output: (screenshot of the PostgreSQL result)
Is there any way we can tell PostgreSQL to ignore the trailing spaces of a VARCHAR Column?
Postgres doesn't follow the SQL standard behaviour of padding the shorter string when comparing VARCHAR or TEXT strings; it only pads CHAR strings. Therefore, you can use ...WHERE SERV_TYPE::char = 'ST001'::char to simulate the Db2 behaviour. Note, though, that this will preclude the use of an index on SERV_TYPE, the same as when using trim(SERV_TYPE).
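As a minimal sketch of that cast-based comparison against the question's table (an explicit length matching the VARCHAR(10) declaration is used here so the cast does not default to CHAR(1)):
SELECT RECIP_NUMBER, SERV_TYPE, LENGTH(SERV_TYPE) AS COL_LENGTH
FROM serv_type rst
WHERE SERV_TYPE::char(10) = 'ST001'::char(10) -- CHAR comparison ignores trailing blanks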
I tried to run this query in DB2 (which includes a regex), and I am getting the following error. Can someone help?
Here is the query:
SELECT COUNT(*) FROM TABLE WHERE REGEXP_LIKE(TRIM(FIELD), '[^[:digit:]]')
Support for the BOOLEAN data type is new in Db2 11.1.1.1 (i.e. the first Mod Pack + Fix Pack for Db2 11.1). If you are only on Db2 11.1.0.0, you will need to explicitly test the result of your regex function.
SELECT COUNT(*) FROM TABLE
WHERE REGEXP_LIKE(TRIM(FIELD), '[^[:digit:]]') = 1;
I am using PostgreSQL 8.1 and I don't have the new functions in that version. Please advise on what to do in that case.
My first table is as follows
unit_office table:
Mandal_ids Name
82:05: test sample
08:20:16: test sample
Mandal Master table:
mandal_id mandal_name
08 Etcherla
16 Hiramandalam
20 Gara
Now when I say select * from unit_office it should display:
Mandal Name of office
Hiramandalam, Gara test sample
i.e. in place of the ids I want the corresponding names (which are in the master table), separated by commas
I have a column in Postgres which has colon-separated ids. The following is one record from my table.
mandalid
18:82:14:11:08:05:20:16:83:37:23:36:15:06:38:33:26:30:22:04:03:
When I say select * from table, the mandalid column should display the names of the mandals in the id place separated by a comma.
Now I have the corresponding name for each id in a master table.
I want to display the names for the ids in the select query on the first table.
My first table is unit_office; when I say select * from unit_office, I want the names in place of the ids.
I suggest you redesign your tables, but if you cannot, then you may need to define a function, which will split the mandal_ids string into integers, and map them to names. I suggest you read the PostgreSQL documentation on creating functions. The "PL/pgSQL" language may be a good choice. You may use the functions string_to_array and array_to_string.
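As a rough sketch of that approach (mandal_master is an assumed name for the master table shown above, with columns mandal_id and mandal_name, and the ids in the colon-separated string are assumed to match mandal_id exactly, leading zeros included), a PL/pgSQL function that works on 8.1 could look like this:
CREATE OR REPLACE FUNCTION mandal_names(p_ids TEXT) RETURNS TEXT AS
$$
DECLARE
    ids    TEXT[] := string_to_array(p_ids, ':');  -- '08:20:16:' becomes {'08','20','16',''}
    result TEXT   := '';
    nm     TEXT;
    i      INTEGER;
BEGIN
    FOR i IN 1 .. coalesce(array_upper(ids, 1), 0) LOOP
        IF ids[i] IS NOT NULL AND ids[i] <> '' THEN
            -- mandal_master is the assumed master table name
            SELECT mandal_name INTO nm FROM mandal_master WHERE mandal_id = ids[i];
            IF nm IS NOT NULL THEN
                result := CASE WHEN result = '' THEN nm ELSE result || ', ' || nm END;
            END IF;
        END IF;
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql;
Then the names can be selected in place of the ids:
SELECT mandal_names(mandal_ids) AS mandal, name FROM unit_office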
But if you can, I suggest you define your tables in the following way:
mandals:
id name
16 Hiramandalam
20 Gara
unit_offices:
id name
1 test sample
mandals_in_offices:
office_id mandal_id
1 16
1 20
The output from the following query should be what you need:
SELECT string_agg(m.name,',') AS mandal_names,
max(o.name) AS office_name
FROM mandals_in_offices i
INNER JOIN unit_offices o ON i.office_id = o.id
INNER JOIN mandals m ON i.mandal_id = m.id
GROUP BY o.id;
The string_agg function appeared in PostgreSQL version 9.0, so if you are using an older version you may need to write a similar function yourself. I believe this will not be too hard.
Here's what we did in LedgerSMB:
created a function to do the concatenation
created a custom aggregate to do the aggregation.
This allows you to do this easily.
CREATE OR REPLACE FUNCTION concat_colon(TEXT, TEXT) returns TEXT as
$$
select CASE WHEN $1 IS NULL THEN $2 ELSE $1 || ':' || $2 END;
$$ language sql;
CREATE AGGREGATE concat_colon (
BASETYPE = text,
STYPE = text,
SFUNC = concat_colon
);
Then you can:
select concat_colon(mycol::text) from mytable;
Works just fine.
I've run into a problem in a project I'm working on: some of the string values in a specific SQL Server 2008 table column contain Unicode characters. For example, instead of a dash, some strings contain an EM DASH (http://www.fileformat.info/info/unicode/char/2014/index.htm).
The column values that contain Unicode characters are causing problems when I send HTTP requests to a third-party server. Is there a way to query which rows contain one or more Unicode characters, so I can at least begin to identify how many rows need to be fixed?
You want to find all strings that contain one or more characters outside ASCII characters 32-126.
I think this should do the job.
SELECT *
FROM your_table
WHERE your_column LIKE N'%[^ -~]%' collate Latin1_General_BIN
One way you can do it is to see which rows no longer equal themselves when converted to a datatype that doesn't support unicode.
CREATE TABLE myStrings (
string nvarchar(max) not null
)
INSERT INTO myStrings (string)
SELECT 'This is not unicode' union all
SELECT 'This has '+nchar(500)+' unicode' union all
SELECT 'This also does not have unicode' union all
SELECT 'This has lots of unicode '+nchar(600)+nchar(700)+nchar(800)+'!'
SELECT cast(string as varchar)
FROM myStrings
SELECT *
FROM myStrings
WHERE cast(cast(string as varchar(max)) as nvarchar(max)) <> string
SELECT *
FROM your_table
WHERE your_column LIKE N'%[^ -~]%' collate Latin1_General_BIN
finds all strings that contain one or more characters within ASCII characters 32-126.
I thought the purpose was to find strings where ASCII characters are not in the range 32-126?
NOT is possible with LIKE. Wouldn't this work?
SELECT *
FROM your_table
WHERE your_column NOT LIKE N'%[^ -~]%'
No collate required.