I have a table named "temp_table" with a column named "temp_column" of type varchar. The problem is that "temp_column" must be of type integer. If I simply convert the column to type integer, it will generate an error, since some rows contain non-numeric data.
I want a query that shows all rows where "temp_column" contains non-numeric values (or the other way around), so I can UPDATE or SET the values accordingly. I'm having a hard time since ISNUMERIC is not available in PostgreSQL.
How can I do this?
This will show all rows where you have non-integer values in that column. It uses a regular expression to find all values that contain anything other than digits:
select *
from temp_table
where temp_column ~ '[^0-9]';
This can also be used in an UPDATE statement:
update temp_table
set temp_column = null
where temp_column ~ '[^0-9]';
This will also filter out "numeric" values like 3.14 as those aren't integers.
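Once the offending rows have been fixed or set to NULL as above, the type change itself is straightforward; a minimal sketch, assuming the table and column names from the question:
-- Assumes all remaining values are NULL or contain only digits.
ALTER TABLE temp_table
  ALTER COLUMN temp_column TYPE integer
  USING temp_column::integer;
The USING clause tells PostgreSQL how to convert each existing value; NULLs stay NULL.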
I need to set a limit on the values of a column. For example, I have a column TEST_COLUMN with INTEGER data type. I need a condition so that the column only accepts values between 1 and 4 (it should not be more than 4 or less than 1). Is this possible in PostgreSQL?
Thanks in advance.
I could handle it at the code level, but is there a way to do it at the database level?
You almost certainly don't mean "postgresql 9.4" in your tags - that version is VERY old.
What you are after is a "CHECK constraint".
https://www.postgresql.org/docs/15/ddl-constraints.html
You use it something like this:
CREATE TABLE mytable (
    ...
    my_column int NOT NULL CHECK (my_column BETWEEN 1 AND 4)
    ...
);
For PostgreSQL at least, the check expression cannot run a query; it can only reference columns of the row you are inserting or updating.
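If the table already exists, the constraint can also be added after the fact; a short sketch, where the table and constraint names are placeholders:
ALTER TABLE mytable
  ADD CONSTRAINT my_column_range CHECK (my_column BETWEEN 1 AND 4);
Existing rows are validated when the constraint is added, so the ALTER fails if any row is already out of range.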
I am using PostgreSQL 11.9
I have a table containing a jsonb column with an arbitrary number of key-values. There is a requirement, when we perform a search, to include all values from this column as well. Searching in jsonb is quite slow, so my plan is to create a trigger that will extract all the values from the jsonb column:
select t.* from app.t1, jsonb_each(column_jsonb) as t(k,v)
or something like this, and then insert the values into a newly created column in the same table, so I can use that column for faster searches; see the sketch below.
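A rough sketch of such a trigger, assuming for illustration that the new column is a plain text column named search_values (the column and function names are made up):
CREATE FUNCTION t1_sync_search_values() RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
   -- Concatenate all values of the jsonb column into one searchable string.
   SELECT string_agg(v, ' ') INTO NEW.search_values
   FROM jsonb_each_text(NEW.column_jsonb) AS t(k, v);
   RETURN NEW;
END
$$;

CREATE TRIGGER t1_sync_search_values
BEFORE INSERT OR UPDATE ON app.t1
FOR EACH ROW EXECUTE FUNCTION t1_sync_search_values();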
My question is: what type would be most suitable for storing the extracted values so I can then search within them? Currently the search looks like this:
CASE
  WHEN something IS NOT NULL
  THEN EXISTS (SELECT value FROM jsonb_each(column_jsonb) WHERE value::text ILIKE search_term)
END
where the search_term is what the user entered from the front end.
This is not going to be pretty, and normalizing the data model would be better.
You can define a function:
CREATE FUNCTION jsonb_values_to_string(
j jsonb,
separator text DEFAULT ','
) RETURNS text LANGUAGE sql IMMUTABLE STRICT
AS 'SELECT string_agg(value->>0, $2) FROM jsonb_each($1)';
Then you can query like this:
WHERE jsonb_values_to_string(column_jsonb, '|') ILIKE 'search_term'
and you can define a trigram index on the left-hand side expression to speed it up.
Make sure that you choose a separator that does not occur in the data or the pattern...
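For the index part, a minimal sketch, assuming the pg_trgm extension can be installed and reusing the question's table app.t1 (the index name is a placeholder):
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Expression index; allowed because the function is declared IMMUTABLE.
CREATE INDEX t1_jsonb_values_trgm_idx ON app.t1
USING gin (jsonb_values_to_string(column_jsonb, '|') gin_trgm_ops);
The ILIKE query can only use this index if it calls the function with the same separator ('|' here).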
I have a column of type TEXT which is supposed to represent a CLOB value and I'm trying to update its value like this:
UPDATE my_table SET my_column = TEXT 'Text value';
Normally this column is written and read by Hibernate and I noticed that values written with Hibernate are stored as integers (perhaps some internal Postgres reference to the CLOB data).
But when I try to update the column with the above SQL, the value is stored as a string and when Hibernate tries to read it, I get the following error: Bad value for type long : ["Text value"]
I tried all the options described in this answer but the result is always the same. How do I insert/update a TEXT column using SQL?
In order to update a CLOB created by Hibernate, you should use the functions for handling large objects.
The documentation can be found at the following links:
https://www.postgresql.org/docs/current/lo-interfaces.html
https://www.postgresql.org/docs/current/lo-funcs.html
Examples:
To query:
select mytable.*, convert_from(loread(lo_open(myclobfield::int, x'40000'::int), x'40000'::int), 'UTF8') from mytable where mytable.id = 4;
Note:
x'40000' corresponds to read mode (INV_READ).
To Update:
select lowrite(lo_open(16425, x'60000'::int), convert_to('this an updated text','UTF8'));
Note:
x'60000' = INV_READ + INV_WRITE, corresponding to read and write mode.
The number 16425 is an example loid (large object id) that already exists in a record in your table. It's the integer number you can see as the value of the blob field created by Hibernate.
To Insert:
select lowrite(lo_open(lo_creat(-1), x'60000'::int), convert_to('this is a new text','UTF8'));
Note:
lo_creat(-1) generates a new large object and returns its loid.
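Note that the insert above creates and fills the large object but does not store the returned loid anywhere. If the goal is to create the object, fill it, and record its loid in the table in one go, lo_from_bytea() (available since PostgreSQL 9.4) is a convenient alternative; a sketch using the question's placeholder names:
-- 0 as the first argument lets the server pick a free loid;
-- the cast stores the loid in the TEXT column the way Hibernate
-- expects it (as a number). The WHERE clause is an assumption.
UPDATE my_table
SET my_column = lo_from_bytea(0, convert_to('Text value', 'UTF8'))::text
WHERE id = 1;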
I have a table with more than 30,000 entries and have to add a new column (zip_prefixes) containing the first digit of the zip code (zctea).
I created the column successfully:
alter table zeta add column zip_prefixes text;
Then I tried to put the values in the column with:
update zeta
set zip_prefixes = (
    select substr(cast(zctea as text), 1, 1)
    from zeta
);
Of course I got:
ERROR: more than one row returned by a subquery used as an expression
How can I get the first digit of the value from zctea into column zip_prefixes of the same row?
No need for sub-select:
update zeta
set zip_prefixes = substr(zctea, 1, 1);
update zeta
set zip_prefixes = substr(zctea, 1, 1);
There is no need for a SELECT subquery or casting.
Consider not adding a functionally dependent column. It's typically cleaner and cheaper overall to retrieve the first character on the fly. If you need a "table", I suggest adding a VIEW.
Why the need to cast(zctea as text)? A zip code should be text to begin with.
Name it zip_prefix, not zip_prefixes.
Use the simpler and cheaper left():
CREATE VIEW zeta_plus AS
SELECT *, left(zctea::text, 1) AS zip_prefix FROM zeta; -- or without cast?
If you need the additional column in the table and the first character is guaranteed to be an ASCII character, consider the data type "char" (with double quotes). 1 byte instead of 2 (on disk) or 5 (in RAM). Details:
What is the overhead for varchar(n)?
Any downsides of using data type "text" for storing strings?
And run both commands in one transaction if you need to minimize lock time and/or avoid a visible column with missing values in the meantime. Faster, too.
BEGIN;
ALTER TABLE zeta ADD COLUMN zip_prefix "char";
UPDATE zeta SET zip_prefix = left(zctea::text, 1);
COMMIT;
I have a table. I wrote a function in plpgsql that inserts a row into this table:
INSERT INTO simpleTalbe (name, money) VALUES ('momo', 1000);
This table has a serial field called id. Inside the function, after inserting the row, I want to know the id that the new row received.
I thought to use:
select nextval('serial');
before the insert. Is there a better solution?
Use the RETURNING clause. You need to save the result somewhere inside PL/pgSQL, with an appended INTO ...
INSERT INTO simpleTalbe (name,money) values('momo',1000)
RETURNING id
INTO _my_id_variable;
_my_id_variable must have been declared with a matching data type.
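Put together, a minimal sketch of such a function (the function name is made up; the table name keeps the question's spelling):
CREATE OR REPLACE FUNCTION insert_simpletalbe() RETURNS integer
LANGUAGE plpgsql AS
$$
DECLARE
   _my_id_variable integer;
BEGIN
   INSERT INTO simpleTalbe (name, money)
   VALUES ('momo', 1000)
   RETURNING id INTO _my_id_variable;  -- captures the generated id

   RETURN _my_id_variable;
END
$$;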
Related:
PostgreSQL next value of the sequences?
Depending on what you plan to do with it, there is often a better solution with pure SQL. Examples:
Combining INSERT statements in a data-modifying CTE with a CASE expression
PostgreSQL multi INSERT...RETURNING with multiple columns
select nextval('serial'); would not do what you want; nextval() actually increments the sequence, and then the INSERT would increment it again. (Also, 'serial' is not the name of the sequence your serial column uses.)
@Erwin's answer (INSERT ... RETURNING) is the best answer, as the syntax was introduced specifically for this situation, but you could also do a
SELECT currval('simpletalbe_id_seq') INTO ...
any time after your INSERT to retrieve the current value of the sequence. (Note the sequence name format tablename_columnname_seq for the automatically-defined sequence backing the serial column.)