HEX WHERE clause in Postgres

I'm new to PostgreSQL. How do I do something like this:
select * from table_abc where table_abc.a>=7a and table_abc.b<=7a
All values in columns a and b, as well as the input value, are hex.
Thanks
EDIT:
table_abc
a bytea
b bytea
c text

Careful here. In Postgres, bytea is a byte array, but it looks like you want to store a single byte in those columns.
I don't see a single-byte type in the list of datatypes at http://www.postgresql.org/docs/9.0/static/datatype.html.
You can go with an integer type. For example, when I say this:
select x'7A'::integer
I get 122.
If you intend to store a single byte in these columns and write your queries with hex values, then I suggest you make the columns integers and query like this:
select * from table_abc where table_abc.a>=x'7a'::integer and table_abc.b<=x'7a'::integer
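As a sanity check outside the database (a Python sketch, purely illustrative, not Postgres itself): x'7A' is the integer 122, and for single bytes, integer comparison agrees with byte-wise bytea comparison.

```python
# x'7A'::integer in Postgres is 122
assert int("7a", 16) == 122

# for single bytes, integer comparison and byte-wise (bytea-style)
# lexicographic comparison give the same answer
assert (0x90 >= 0x7A) == (bytes([0x90]) >= bytes([0x7A]))
assert (0x50 <= 0x7A) == (bytes([0x50]) <= bytes([0x7A]))
```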

Related

Syncsort produces a non-readable output for decimal(9,8) or smallint data type columns in Db2

I am using Syncsort to select records from Db2. For columns of type decimal(9,8) or smallint, the output contains junk characters. If I cast the column to CHAR in the SELECT statement, the output is correct, but I don't want to cast the column in the SQL statement; I'd rather solve this in Syncsort, if that is possible.
For example, a decimal column holding the value 2.98965467 is displayed in a non-readable format by Syncsort unless I cast it in the SQL statement. Kindly help

Converting bytea back to varchar

In Postgres when I want to save a varchar to a bytea column, this is made easy by an implicit conversion. So I can simply execute
UPDATE my_table SET my_bytea_col = 'This varchar will be converted' WHERE id = 1;
I use this all the time. However, I would like to occasionally see the contents of this column as a varchar. IDEs will handle this for you, but I would prefer in my use case to return the results with the bytea converted back to a varchar.
Of course I've tried something like this, among more complex options:
select my_bytea_col::VARCHAR from my_table WHERE id = 1
This, however, doesn't return my original readable text. How else can I convert my bytea back to the original varchar after postgres's implicit conversion in updates and inserts like the one above?
If the string encoding is UTF-8, you could use
SELECT convert_from(my_bytea_col, 'UTF8')
FROM my_table
WHERE id = 1;
If the encoding is different, you need to supply the appropriate second argument (e.g. LATIN1) to convert_from.
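The byte-level behaviour of convert_from can be illustrated outside the database (a Python sketch, not the Postgres function itself): decoding the stored bytes with the right encoding recovers the original text, while the wrong encoding yields mojibake.

```python
# what ends up in the bytea column when UTF-8 text is stored
raw = "héllo".encode("utf-8")

# convert_from(my_bytea_col, 'UTF8'): decode with the matching encoding
assert raw.decode("utf-8") == "héllo"

# decoding the same bytes as LATIN1 produces garbled characters
assert raw.decode("latin-1") != "héllo"
```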
May I remark that I don't consider it a good idea to store text strings as bytea?

kdb type error when inserting I as F

I have a certain file type that contains a column with floats which I read in using insert
`table insert ("TISISIFIIIFFIbIFFFFFFIIIFIIFFFFIIIIIIIIIIIIFFFFFFIIFFIIIIFFIIIIIIIIIIIIIIIII"; enlist "\t" ) 0:`:my_file.txt
Unfortunately, sometimes the values in that column happen to be all integers, and in the txt file they are saved as ints rather than floats (1 instead of 1.0), so kdb throws a 'type error. Is there a way to make kdb accept ints saved in that format as floats?
I have a lot of columns with floats and, theoretically, the problem can appear in any of them. Is there some way to tell kdb on insert to treat any int as a float if the column type is float?
The 'type error is actually coming from your insert: you are trying to insert parsed data whose column types do not conform to the column types in 'table'.
You are basically saying that your raw data can contain floats, therefore you are going to have to read them in as floats.
What you do with that column after the parse is up to you though.
1) keep as floats, read in as floats, insert as floats, column should be a float in 'table' pre-read (I presume this is what you want going by your question):
update "f"$COLUMN from `table
`table insert (1#"F";1#"\t") 0:`myfile.txt
2) update to an integer and then insert into 'table' - you are going to have to update the schema of table first, read in as floats, and then run an update after every read:
update "i"$COLUMN from `table
`table insert update "i"$COLUMN from (1#"F";1#"\t") 0:`myfile.txt
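The underlying point of option 1, that a float parser happily accepts int-formatted text, can be sketched outside q (a Python illustration with hypothetical file contents, not kdb's own 0: parser):

```python
# hypothetical tab-separated rows; the second row has an int-formatted value
rows = ["1.0\t2.0", "3\t1.0"]

# parsing every field as a float, the way reading the column as "F" would
parsed = [[float(v) for v in line.split("\t")] for line in rows]
assert parsed == [[1.0, 2.0], [3.0, 1.0]]

# a float parser accepts "3" and yields 3.0, which is why reading the
# column in as float avoids the 'type error on insert
assert float("3") == 3.0
```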
Another option you may want to consider, but please test first as it may replace too much, is to replace the trailing ".0" from your floats, and then just read in as integers:
q)\cd /var/tmp
q)`:myfile.txt 0:("x\tx1";"1.0\t2.0";"3.0\t1.0")
q)\sed -i -e 's/\.0//g' myfile.txt
q)("II";1#"\t")0:`myfile.txt

Show all numeric rows or vice-versa postgresql

I have a table named "temp_table" with a varchar column named "temp_column". The problem is that "temp_column" must be of type integer. If I simply alter the column to integer, it generates an error, since some rows contain non-numeric data.
I want a query that shows all rows where "temp_column" has non-numeric values (or the other way around), so I can UPDATE/SET the values accordingly. I'm having a hard time since ISNUMERIC is not available in PostgreSQL.
How do I do this?
This will show all rows where you have non-integer values in that column. It uses a regular expression to find all values that have anything else than just numbers in it:
select *
from temp_table
where temp_column ~ '[^0-9]';
this can also be used in an update statement:
update temp_table
set temp_column = null
where temp_column ~ '[^0-9]';
Note that this will also match "numeric" values like 3.14, since the dot is not a digit and such values aren't valid integers either.
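The same pattern can be checked outside SQL (Python's re module here as a stand-in for Postgres's ~ operator, purely to illustrate what the character class matches):

```python
import re

# '[^0-9]' matches any character that is not a digit
non_integer = re.compile(r"[^0-9]")

values = ["123", "3.14", "abc", "42"]
flagged = [v for v in values if non_integer.search(v)]

# only the purely-digit strings pass; 3.14 is flagged because of the dot
assert flagged == ["3.14", "abc"]
```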

How to convert PostgreSQL escape bytea to hex bytea?

I got an answer for checking for one specific BOM in a PostgreSQL text column. What I would really like is something more general, i.e. something like
select decode(replace(textColumn, '\\', '\\\\'), 'escape') from tableXY;
The result of a UTF8 BOM is:
\357\273\277
Which is octal bytea and can be converted by switching the output of bytea in pgadmin:
update pg_settings set setting = 'hex' WHERE name = 'bytea_output';
select '\357\273\277'::bytea
The result is:
\xefbbbf
What I would like to have is this result as one query, e.g.
update pg_settings set setting = 'hex' WHERE name = 'bytea_output';
select decode(replace(textColumn, '\\', '\\\\'), 'escape') from tableXY;
But that doesn't work. The result is empty, probably because the decode cannot handle hex output.
If the final purpose is to get the hexadecimal representation of all the bytes that constitute the strings in textColumn, this can be done with:
SELECT encode(convert_to(textColumn, 'UTF-8'), 'hex') from tableXY;
It does not depend on bytea_output. BTW, this setting plays a role only at the final stage of a query, when a result column is of type bytea and has to be returned in text format to the client (which is the most common case, and what pgAdmin does). It's a matter of representation, the actual values represented (the series of bytes) are identical.
In the query above, the result is of type text, so this is irrelevant anyway.
I think your query with decode(..., 'escape') can't work because the argument is supposed to be in escape format, and it's not; per the comments, it contains plain XML strings.
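The UTF-8 BOM case can be verified byte by byte outside the database (a Python sketch standing in for encode(convert_to(textColumn, 'UTF-8'), 'hex'), not the Postgres functions themselves):

```python
# the Unicode BOM code point, as it would sit in a text column
bom = "\ufeff"

# convert_to(textColumn, 'UTF-8'): the raw UTF-8 bytes
raw = bom.encode("utf-8")
assert raw == b"\xef\xbb\xbf"    # the escape-format bytea \357\273\277

# encode(..., 'hex'): the hex representation, \xefbbbf in Postgres output
assert raw.hex() == "efbbbf"
```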
With the great help of Daniel-Vérité I use this general query now to check for all kind of BOM or unicode char problems:
select encode(textColumn::bytea, 'hex'), * from tableXY;
I had a problem in pgAdmin with overly long columns, which returned no result. I used this query for pgAdmin:
select encode(substr(textColumn,1,100)::bytea, 'hex'), * from tableXY;
Thanks Daniel!