Postgres HStore - Updating existing columns with values from Hstore

I am trying to move data from an hstore column into regular columns using the following query:
UPDATE mytable
SET "height" = tags->"height",
    "building:levels" = tags->"building:levels",
    "type" = tags->"type",
    "building:part" = tags->"building:part"
WHERE ( tags->"building:levels" <> '' OR tags->"height" <> ''
     OR tags->"type" <> '' OR tags->"building:part" <> '' );
The idea was to speed the query up by testing for non-null values in the hstore (this is a very large database).
After two days of the query running, I am sure there must be a better way. This is my first attempt at moving data from hstore into a column.
I can see populate_record in the documentation, but cannot figure out how to use it to map just the hstore tags I need to the correct columns.
Is my original syntax correct, and is there any way I can do it faster using populate_record? If so, what should the query look like?
Many Thanks
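For what it's worth: in hstore expressions the key on the right of -> is a single-quoted string literal, while double quotes denote column identifiers. A sketch of what such an update might look like (untested against the poster's schema), using the ?| operator so rows containing none of the keys are skipped:
UPDATE mytable
SET "height"          = tags -> 'height',
    "building:levels" = tags -> 'building:levels',
    "type"            = tags -> 'type',
    "building:part"   = tags -> 'building:part'
WHERE tags ?| ARRAY['height', 'building:levels', 'type', 'building:part'];
A GIN index on tags supports the ?| test, which may help on a very large table.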

Related

Is it possible to set condition to the values of the column in postgres?

I need to limit the values of a column. For example, I have a column TEST_COLUMN with the INTEGER data type, and I need a condition so that the column only accepts values between 1 and 4 (it should not be more than 4 or less than 1). Is that possible in Postgres?
Thanks in advance.
I can handle it at the code level, but is there a way to do it at the database level?
You almost certainly don't mean "postgresql 9.4" in your tags - that version is VERY old.
What you are after is a "CHECK constraint".
https://www.postgresql.org/docs/15/ddl-constraints.html
You use it something like this:
CREATE TABLE mytable (
    ...
    my_column int NOT NULL CHECK (my_column BETWEEN 1 AND 4)
    ...
);
For PostgreSQL at least, the check expression cannot run a query (subqueries are not allowed); it can only reference columns of the row being inserted or updated.
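If the table already exists, the same check can be added after the fact; a sketch assuming the column name from the question:
ALTER TABLE mytable
    ADD CONSTRAINT test_column_range CHECK (test_column BETWEEN 1 AND 4);
Note that existing rows are validated when the constraint is added, so this fails if any current value falls outside 1-4.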

PostgreSQL, allow filtering by non-existing fields

I'm using PostgreSQL with a Go driver. Sometimes I need to query fields that may not exist, just to check whether something matches in the DB; before querying, I can't tell whether a given field exists. Example:
where size=10 or length=10
By default I get an error, column "length" does not exist; however, the size column could exist, and I could get some results.
Is it possible to handle such cases to return what is possible?
EDIT:
Yes, I could get all the existing columns first. But the initial queries can be rather complex and not created by me directly, I can only modify them.
That means the query can be as simple as the previous example, or much more complex, like this:
WHERE size=10 OR (length=10 AND n='example') OR (c BETWEEN 1 AND 5 AND p='Mars')
If the missing columns are length and c, does that mean I have to parse the SQL, split it by OR (or other operators), check every part of the query, then remove any part with missing columns, and in the end generate a new SQL query?
Any easier way?
I would check the information schema first:
select column_name from information_schema.columns where table_name = 'table_name';
and then build the query based on the result.
Why don't you get a list of columns that are in the table first? Like this
select column_name
from information_schema.columns
where table_name = 'table_name' and (column_name = 'size' or column_name = 'length');
The result will be the columns that exist.
There is no way to do what you want, except for constructing an SQL string from the list of available columns, which can be got by querying information_schema.columns.
SQL statements are parsed before they are executed, and there is no conditional compilation and no short-circuiting at parse time, so you get an error whenever a non-existent column is referenced.
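A sketch of that approach in PL/pgSQL, using the hypothetical table and column names from the example: check information_schema first, then build and EXECUTE only the conditions whose columns exist.
DO $$
DECLARE
    has_length boolean;
    n bigint;
BEGIN
    -- Does a column named "length" exist on my_table?
    SELECT EXISTS (
        SELECT 1 FROM information_schema.columns
        WHERE table_name = 'my_table' AND column_name = 'length'
    ) INTO has_length;

    IF has_length THEN
        EXECUTE 'SELECT count(*) FROM my_table WHERE size = 10 OR length = 10' INTO n;
    ELSE
        EXECUTE 'SELECT count(*) FROM my_table WHERE size = 10' INTO n;
    END IF;
    RAISE NOTICE 'matching rows: %', n;
END $$;
In practice the same string building would usually happen on the Go side, since the full WHERE clause arrives from outside.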

What PostgreSQL type is good for storing an array of strings and offering fast lookup afterwards

I am using PostgreSQL 11.9
I have a table containing a jsonb column with an arbitrary number of key-values. There is a requirement that when we perform a search, all values from this column are included as well. Searching in jsonb is quite slow, so my plan is to create a trigger which will extract all the values from the jsonb column with something like this:
select t.* from app.t1, jsonb_each(column_jsonb) as t(k,v)
and then insert the values into a newly created column in the same table, so I can use that column for faster searches.
My question is what type would be most suitable for storing the values and then searching within them. Currently the search looks like this:
CASE
    WHEN something IS NOT NULL
    THEN EXISTS (SELECT value FROM jsonb_each(column_jsonb) WHERE value::text ILIKE search_term)
END
where the search_term is what the user entered from the front end.
This is not going to be pretty, and normalizing the data model would be better.
You can define a function
CREATE FUNCTION jsonb_values_to_string(
j jsonb,
separator text DEFAULT ','
) RETURNS text LANGUAGE sql IMMUTABLE STRICT
AS 'SELECT string_agg(value->>0, $2) FROM jsonb_each($1)';
Then you can query like
WHERE jsonb_values_to_string(column_jsonb, '|') ILIKE 'search_term'
and you can define a trigram index on the left hand side expression to speed it up.
Make sure that you choose a separator that does not occur in the data or the pattern...
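A sketch of that index, assuming the table and column names from the question and that the pg_trgm extension can be installed:
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX t1_jsonb_values_trgm_idx
    ON app.t1 USING gin (jsonb_values_to_string(column_jsonb, '|') gin_trgm_ops);
The function has to be IMMUTABLE (as declared above) to be usable in an index expression, and the ILIKE pattern should include % wildcards for a substring search.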

Postgres: update value of TEXT column (CLOB)

I have a column of type TEXT which is supposed to represent a CLOB value and I'm trying to update its value like this:
UPDATE my_table SET my_column = TEXT 'Text value';
Normally this column is written and read by Hibernate and I noticed that values written with Hibernate are stored as integers (perhaps some internal Postgres reference to the CLOB data).
But when I try to update the column with the above SQL, the value is stored as a string and when Hibernate tries to read it, I get the following error: Bad value for type long : ["Text value"]
I tried all the options described in this answer but the result is always the same. How do I insert/update a TEXT column using SQL?
In order to update a clob created by Hibernate, you should use the functions for handling large objects.
The documentation can be found at the following links:
https://www.postgresql.org/docs/current/lo-interfaces.html
https://www.postgresql.org/docs/current/lo-funcs.html
Examples:
To query:
select mytable.*, convert_from(loread(lo_open(mycblobfield::int, x'40000'::int), x'40000'::int), 'UTF8') from mytable where mytable.id = 4;
Note:
x'40000' corresponds to read mode (INV_READ)
To Update:
select lowrite(lo_open(16425, x'60000'::int), convert_to('this an updated text','UTF8'));
Note:
x'60000' = INV_READ + INV_WRITE, corresponding to read and write mode.
The number 16425 is an example loid (large object id) which already exists in a record in your table. It's that integer number you can see as the value in the blob field created by Hibernate.
To Insert:
select lowrite(lo_open(lo_creat(-1), x'60000'::int), convert_to('this is a new text','UTF8'));
Note:
lo_creat(-1) generates a new large object and returns its loid.
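On reasonably recent versions (PostgreSQL 9.4 and later) there is also lo_from_bytea(), which creates a large object with the given content in a single call and returns its loid; a sketch combining it with the question's update (my_table, my_column, and the id value are illustrative):
-- 0 as the first argument lets the server pick a free loid.
UPDATE my_table
SET my_column = lo_from_bytea(0, convert_to('Text value', 'UTF8'))::text
WHERE id = 4;
Note that this leaves any large object previously referenced by the column orphaned; lo_unlink() removes it.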

How can I change all occurrences of a particular value in any column in PostgreSQL?

I have three different values in my database that represent a null: an actual null, an empty string, and a string {x:Null}. This value appears across multiple columns.
{x:Null} is normalized on the web front-end, so all these values look exactly the same although they end up ordered differently in a sort. How can I write a query that will take these values and make them actual nulls across every column and every table?
Bonus points if you can tell me how to make sure these other empty values are always inserted as nulls going forward. (Disclaimer: I have no power to grant any actual bonus points. ;)
You can query the information_schema to get a list of all tables and columns with a string type.
SELECT table_name, column_name
FROM information_schema.columns
WHERE data_type IN ('text', 'character', 'character varying')
NOTE: double-check first what values data_type actually contains; I'm not sure if it will be 'character' or 'char' or something else.
Then I would write a small program to update each column in each table. Here it is sketched out in Perl.
use DBI;
my $dbh = DBI->connect('dbi:Pg:dbname=mydb', undef, undef, { RaiseError => 1 });
my $sth = $dbh->prepare(q[
    SELECT table_name, column_name FROM information_schema.columns
    WHERE data_type IN ('text', 'character', 'character varying')
]);
$sth->execute;
while ( my ($table, $column) = $sth->fetchrow_array ) {
    # quote_identifier() escapes and double-quotes the names for us.
    my $q_table  = $dbh->quote_identifier($table);
    my $q_column = $dbh->quote_identifier($column);
    # qq[] (not q[]) so the quoted identifiers interpolate into the SQL.
    $dbh->do(qq[
        UPDATE $q_table SET $q_column = NULL
        WHERE $q_column = '{x:Null}' OR $q_column = ''
    ]);
}
Be sure to quote $table and $column as identifiers, as in my sample (quote_identifier, not quote, which is for string literals).
Going forward, you'll have to set CONSTRAINTS on each and every column. You can use the information_schema.columns to do this as well. Something like
ALTER TABLE $q_table ADD CHECK ($q_column NOT IN ('{x:Null}', ''))
You could use a trigger to change the values to NULL, but I don't like data stores that silently change basic data for application purposes.
For new columns and tables, you'll have to remember to add that constraint. Same caveats about data_type apply.
However, it's probably a bad idea to say that no column can ever be an empty string. You might want to be bit more selective.
Another thing to note: NULL is a funny thing; it's not true and it's not false. You might be better off deciding that an empty string is the thing to set empty values to.
I don't think this approach is maintainable. It's scribbling an application rule all over the data layer. What if you have some data that doesn't follow that rule? And it will have to be continuously maintained for any new data schema added. Perhaps instead you should put this at your ORM layer. Or write a few stored procedures to take care of this.
Using the information_schema.columns table, write a procedural-language routine which iterates through all applicable tables and columns, executing UPDATE ... SET column = NULL WHERE column IN ('', '{x:Null}') for each eligible column; see the sketch below.
As for inserting these values as NULL going forward, you would have to set triggers on your tables to intercept these values and replace them with NULL.
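A sketch of such a routine as a DO block, restricted to ordinary tables in user schemas (the same caveat about the data_type list applies):
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN
        SELECT c.table_schema, c.table_name, c.column_name
        FROM information_schema.columns c
        JOIN information_schema.tables t
          ON t.table_schema = c.table_schema AND t.table_name = c.table_name
        WHERE t.table_type = 'BASE TABLE'
          AND c.data_type IN ('text', 'character', 'character varying')
          AND c.table_schema NOT IN ('pg_catalog', 'information_schema')
    LOOP
        -- format()'s %I handles identifier quoting, like quote_identifier in Perl.
        EXECUTE format(
            'UPDATE %I.%I SET %I = NULL WHERE %I IN ('''', ''{x:Null}'')',
            r.table_schema, r.table_name, r.column_name, r.column_name);
    END LOOP;
END $$;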
I don't think there is any query that would do this thing for every table and every column. In principle, what you want to do is
UPDATE table SET column=NULL WHERE column='' OR column='{x:Null}';
You could try selecting from the pg_attribute and pg_class catalogs to get the names of the tables and columns, and then generate the queries automatically; see the sketch below. Be sure to select only those columns that contain textual data.
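A sketch of that catalog query, filtered to ordinary tables and text-like types (bpchar is the internal name of the character type):
SELECT c.relname AS table_name, a.attname AS column_name
FROM pg_class c
JOIN pg_attribute a ON a.attrelid = c.oid
JOIN pg_type t ON t.oid = a.atttypid
WHERE c.relkind = 'r'
  AND a.attnum > 0
  AND NOT a.attisdropped
  AND t.typname IN ('text', 'varchar', 'bpchar');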
What if somebody has entered a genuine string '{x:Null}'? You would then change it into NULL.
However, you have made a real mistake by letting the situation get as bad as it currently is. You should always normalize data before putting it into a database.