PostgreSQL JSONB: optimal way to add and remove items in a nested array

I have a jsonb object as follows, where I need to add or remove items from the nested array:
{
  "GROUP_ONE": [
    "FIRST_ITEM",
    "SECOND_ITEM"
  ],
  "GROUP_TWO": [
    "FIRST_ITEM",
    "SECOND_ITEM"
  ]
}
An update function passes:
x_id (the id of the row to update)
x_group (the top-level jsonb key)
x_item (the item to add to or remove from that group's nested array)
x_is_add (whether to add or remove)
The group may or may not already exist.
Is this code optimal or is there a better way to use jsonb functions to achieve this?
update table set list = (
    case
        when not x_is_add then jsonb_set(list, '{' || x_group || '}', (list->x_group) - x_item)
        when x_is_add and list->x_group is null then list || jsonb_build_object(x_group, array[x_item])
        when x_is_add and not list->x_group ? x_item then jsonb_set(list, '{' || x_group || '}', list->x_group || jsonb_build_array(x_item))
    end)
where id = x_id
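One possible tightening, sketched under the question's naming (the_table stands in for the real table name): build the path with array[x_group], since jsonb_set takes a text[] path, and add an else branch so the CASE never yields NULL and blanks the column when there is nothing to change.

update the_table
set list = case
        when not x_is_add and list ? x_group then
            jsonb_set(list, array[x_group], (list -> x_group) - x_item)
        when x_is_add and not list ? x_group then
            list || jsonb_build_object(x_group, jsonb_build_array(x_item))
        when x_is_add and not ((list -> x_group) ? x_item) then
            jsonb_set(list, array[x_group], (list -> x_group) || jsonb_build_array(x_item))
        else list   -- nothing to change: keep the column as it is
    end
where id = x_id;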

Related

"The column index is out of range: 2, number of columns: 1" error while updating jsonb column

I am trying to update a jsonb column in Java with MyBatis. Following is my mapper method:
#Update("update service_user_assn set external_group = external_group || '{\"service_name\": \"#{service_name}\" }' where user=#{user} " +
" and service_name= (select service_name from services where service_name='Google') " )
public int update(#Param("service_name")String service_name,#Param("user") Integer user);
I am getting the following error while updating the jsonb (external_group) column.
### Error updating database. Cause: org.postgresql.util.PSQLException: The column index is out of range: 2, number of columns: 1.
### The error may involve com.apds.mybatis.mapper.ServiceUserMapper.update-Inline
I am able to update non-jsonb columns the same way, and the jsonb column updates correctly if I hard-code the value.
How to solve this error while updating jsonb column?
You should not enclose #{} in single quotes, because it then becomes part of a string literal rather than a placeholder, i.e.
external_group = external_group || '{"service_name": "?"}' where ...
So, there will be only one placeholder in the PreparedStatement and you get the error.
The correct way is to concatenate the #{} in SQL.
You may also need to cast the literal to jsonb type explicitly.
@Update({
"update service_user_assn set",
"external_group = external_group",
"|| ('{\"service_name\": \"' || #{service_name} || '\" }')::jsonb",
"where user=#{user} and",
"service_name= (select service_name from services where service_name='Google')"})
The SQL being executed would look as follows.
external_group = external_group || ('{"service_name": "' || ? || '"}')::jsonb where ...
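An alternative worth considering (a sketch, using the same placeholders): let Postgres assemble the JSON with jsonb_build_object instead of concatenating a quoted string, which sidesteps the escaping; you may need an explicit ::text cast on the parameter if the driver cannot infer its type.

update service_user_assn
set external_group = external_group || jsonb_build_object('service_name', #{service_name})
where user=#{user}
  and service_name = (select service_name from services where service_name='Google')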

Postgres invalid input syntax for type json Detail: Token "%" is invalid

I'm trying to check if some text contains the concatenation of a text and a value from an array in Postgres, something like:
SELECT true from jsonb_array_elements('["a", "b"]'::jsonb) as ids
WHERE 'bar/foo/item/b' LIKE '%item/' || ids->>'id' || '%'
I'm getting the following error:
ERROR: invalid input syntax for type json Detail: Token "%" is invalid. Position: 95 Where: JSON data, line 1: %...
How can I use the values of the array, concatenate them with the text, and check the LIKE expression?
I have tried several ideas, such as explicitly adding a ::jsonb cast, but with no luck so far.
The problem is that the || and ->> operators have the same precedence and are left associative, so the expression is interpreted as
(('%item/' || ids) ->>'id') || '%'
You'd have to add parentheses:
'%item/' || (ids->>'id') || '%'
Finally got this working, this is the result:
SELECT true from jsonb_array_elements_text('["a", "c"]'::jsonb) as ids
WHERE 'bar/foo/item/b' LIKE '%item/' || ids.value || '%'
The key changes were to use jsonb_array_elements_text instead of jsonb_array_elements, and ids.value instead of ids->>'id'.

Full-Text Search producing no results

I have the following View:
CREATE VIEW public.profiles_search AS
SELECT
    profiles.id,
    profiles.bio,
    profiles.title,
    (
        setweight(to_tsvector(profiles.search_language::regconfig, profiles.title::text), 'B'::"char") ||
        setweight(to_tsvector(profiles.search_language::regconfig, profiles.bio), 'A'::"char") ||
        setweight(to_tsvector(profiles.search_language::regconfig, profiles.category::text), 'B'::"char") ||
        setweight(to_tsvector(profiles.search_language::regconfig, array_to_string(profiles.tags, ',', '*')), 'C'::"char")
    ) AS document
FROM profiles
GROUP BY profiles.id;
However, if profiles.tags is empty then document is empty, even if the rest of the fields (title, bio, and category) contain data.
Is there some way to make that field optional, so that it being empty doesn't result in an empty document?
This seems to be the common string concatenation issue - concatenating a NULL value makes the whole result NULL.
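A minimal illustration of that with a literal NULL:

SELECT setweight(to_tsvector('english', 'some bio'), 'A')
       || setweight(to_tsvector('english', NULL::text), 'C');
-- the whole result is NULL, not the 'A'-weighted vector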
Here it is suggested you should always provide a default value for any input with coalesce():
UPDATE tt SET ti =
setweight(to_tsvector(coalesce(title,'')), 'A') ||
setweight(to_tsvector(coalesce(keyword,'')), 'B') ||
setweight(to_tsvector(coalesce(abstract,'')), 'C') ||
setweight(to_tsvector(coalesce(body,'')), 'D');
If you do not want to provide default values for complex data types (like coalesce(profiles.tags, ARRAY[]::text[]) as suggested by @approxiblue), I suspect you could simply do:
CREATE VIEW public.profiles_search AS
SELECT
    profiles.id,
    profiles.bio,
    profiles.title,
    (
        setweight(to_tsvector(profiles.search_language::regconfig, profiles.title::text), 'B'::"char") ||
        setweight(to_tsvector(profiles.search_language::regconfig, profiles.bio), 'A'::"char") ||
        setweight(to_tsvector(profiles.search_language::regconfig, profiles.category::text), 'B'::"char") ||
        setweight(to_tsvector(profiles.search_language::regconfig, coalesce(array_to_string(profiles.tags, ',', '*'), '')), 'C'::"char")
    ) AS document
FROM profiles
GROUP BY profiles.id;

Postgres SQL - different results from LIKE query using OR vs ||

I have a table with an integer column. It has 12 records numbered 1000 to 1012. Remember, these are ints.
This query returns, as expected, 12 results:
select count(*) from proposals where qd_number::text like '%10%'
as does this:
SELECT COUNT(*) FROM "proposals" WHERE (lower(first_name) LIKE '%10%' OR qd_number::text LIKE '%10%' )
but this query returns 2 records:
SELECT COUNT(*) FROM "proposals" WHERE (lower(first_name) || ' ' || qd_number::text LIKE '%10%' )
which implies using || in concatenated where expressions is not equivalent to using OR. Is that correct or am I missing something else here?
You probably have nulls in first_name. For those records, lower(first_name) || ' ' || qd_number::text results in NULL, so the numbers are no longer found.
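One way to guard against that is to coalesce the nullable column, after which the concatenated form should behave like the OR version:

SELECT COUNT(*)
FROM proposals
WHERE (coalesce(lower(first_name), '') || ' ' || qd_number::text) LIKE '%10%'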
"using || in concatenated where expressions is not equivalent to using OR. Is that correct or am I missing something else here?"
That is correct.
|| is the string concatenation operator in SQL, not the OR operator.
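A tiny demonstration of the difference, including the NULL behaviour that matters here:

SELECT NULL || ' 1000' AS concatenated,                              -- NULL: || propagates NULL
       (NULL::text LIKE '%10%') OR (' 1000' LIKE '%10%') AS matches  -- true: OR still sees the second condition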

Error thrown when trying to chain a text search with a search on a join table that uses group_by

I have three models, Story, Post and Tag. Story is in a one-to-many relationship with Post, and a many-to-many relationship with Tag.
I use the following scope to search for stories that have at least one tag that matches an array of tags:
scope :in_tags, lambda { |category_array, tag_array|
  Story.joins('LEFT OUTER JOIN stories_tags ON stories_tags.story_id = stories.id').
        joins('LEFT OUTER JOIN tags ON tags.id = stories_tags.tag_id').
        where("tags.name in (:tag_array)", :tag_array => tag_array).
        group("stories.id")
}
Note that I use GROUP BY so that the scope doesn't return a story multiple times if that story has multiple matching tags.
I use the following pg_search_scope to search for stories or posts with specified text:
pg_search_scope :with_text,
:against => :title,
:using => { :tsearch => { :dictionary => "english" }},
:associated_against => { :posts => :contents }
Both of these scopes work fine independently of each other. When I try to chain them together, however, I get the following PostgreSQL error:
ActiveRecord::StatementInvalid (PG::Error: ERROR: column "pg_search_79dd68cf7f962ac568b3d7.pg_search_c5f43d73058486e1799d26" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: ...e"::text, '')) || to_tsvector('english', coalesce(pg_search_...
^
: SELECT "stories".*, ((ts_rank((to_tsvector('english', coalesce("stories"."title"::text, '')) || to_tsvector('english', coalesce(pg_search_79dd68cf7f962ac568b3d7.pg_search_c5f43d73058486e1799d26::text, ''))), (to_tsquery('english', ''' ' || 'track' || ' ''')), 0))) AS pg_search_rank FROM "stories" LEFT OUTER JOIN stories_tags ON stories_tags.story_id = stories.id LEFT OUTER JOIN tags ON tags.id = stories_tags.tag_id LEFT OUTER JOIN (SELECT "stories"."id" AS id, string_agg("posts"."contents"::text, ' ') AS pg_search_c5f43d73058486e1799d26 FROM "stories" INNER JOIN "posts" ON "posts"."story_id" = "stories"."id" GROUP BY "stories"."id") pg_search_79dd68cf7f962ac568b3d7 ON pg_search_79dd68cf7f962ac568b3d7.id = "stories"."id" WHERE (tags.name in (sports') AND (((to_tsvector('english', coalesce("stories"."title"::text, '')) || to_tsvector('english', coalesce(pg_search_79dd68cf7f962ac568b3d7.pg_search_c5f43d73058486e1799d26::text, ''))) ## (to_tsquery('english', ''' ' || 'track' || ' ''')))) GROUP BY stories.id ORDER BY latest_post_id DESC LIMIT 5000 OFFSET 0):
Is there any way to use pg_search with a scope that requires a join table with a GROUP BY clause?
I ended up having to switch from the .group() method (which I couldn't seem to get working with pg_search) to select("DISTINCT(stories.id), stories.*"). I also made use of ActiveRecord's #preload to eagerly load the associated tag records.
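In plain SQL terms, that switch amounts to deduplicating with DISTINCT instead of GROUP BY, so the rank expression pg_search adds to the select list no longer has to appear in a grouping clause. A simplified sketch of the resulting query shape (not the exact SQL pg_search generates):

SELECT DISTINCT stories.*
FROM stories
LEFT OUTER JOIN stories_tags ON stories_tags.story_id = stories.id
LEFT OUTER JOIN tags ON tags.id = stories_tags.tag_id
WHERE tags.name IN ('sports')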