I'm using full text search supported by Postgres. I installed the acts_as_tsearch plugin and it worked successfully, but when I tried it later I got this error:
RuntimeError: ERROR C42883 Mfunction ts_rank_cd(text, tsquery) does not exist HNo function matches the given name and argument types. You might need to add explicit type casts.
You need to cast your first parameter into a tsvector.
So let's assume you're searching against a column named foo.text. You will want to change from this:
SELECT ts_rank_cd(foo.text, plainto_tsquery('my search terms')) FROM foo;
to this:
SELECT ts_rank_cd(to_tsvector(foo.text), plainto_tsquery('my search terms')) FROM foo;
or something similar.
If you're using the @@ operator elsewhere, you can generally re-use the expressions that operator is operating on.
You can find more documentation on to_tsvector at http://www.postgresql.org/docs/current/static/textsearch-controls.html#TEXTSEARCH-PARSING-DOCUMENTS
EDIT
ts_rank_cd(text, tsquery) does not
exist
This means there is no function with this name that accepts a text and a tsquery parameter as input. And that's correct: PostgreSQL doesn't have a function with these parameters.
From the manual:
ts_rank_cd([ weights float4[], ] vector tsvector, query tsquery [, normalization integer ])
Change your input for the function ts_rank_cd() and you will be fine.
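If you end up searching this way often, the same to_tsvector expression can be indexed so each query doesn't have to reparse every document. A minimal sketch, assuming an explicit 'english' configuration and the foo.text column from the example above:

-- 'text' here is the column name from the example, not the type name.
CREATE INDEX foo_text_fts_idx ON foo USING gin (to_tsvector('english', text));

-- Reuse the same expression for matching (@@) and ranking:
SELECT ts_rank_cd(to_tsvector('english', foo.text),
                  plainto_tsquery('english', 'my search terms')) AS rank
FROM foo
WHERE to_tsvector('english', foo.text) @@ plainto_tsquery('english', 'my search terms');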
Related
I am trying to run this query:
select *
from my_table
where column_one=${myValue}
I get the following error in Datagrip:
[42883] ERROR: operator does not exist: character varying = bigint Hint: No operator matches the given name and argument types. You might need to add explicit type casts.
Now, I have found this question, and I can fix the error by hard-coding the value as a string like this:
select *
from my_table
where column_one='123'
What I need is a way to pass in the '123' as a parameter. I usually write ${myValue} and it works, but I am not sure how to keep the variable there as an input so I can run dynamic queries in code and still let Postgres understand that I want to pass in a string and not a number.
Any suggestions?
OK, so I just tried putting quotes around the value in the DataGrip parameter input field for myValue, as in Thirumal's answer, and now it works. I didn't know I had to quote the value for it to work.
Type cast ${myValue} using standard SQL:
cast(${myValue} AS varchar)
or using Postgres Syntax:
${myValue}::varchar
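Plugged back into the original query, that would look something like this (a sketch, assuming column_one is a varchar column):

select *
from my_table
where column_one = cast(${myValue} as varchar);
-- or equivalently: where column_one = ${myValue}::varchar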
I have a custom type with code similar to the following (shortened for brevity):
create domain massFraction as jsonb (
value->'h' > 0 and value->'h' is not null
);
Running this provokes the following error:
ERROR: type modifier is not allowed for type "jsonb"
The documentation makes no mention of json being a disallowed base type for a domain.
https://www.postgresql.org/docs/12/sql-createdomain.html
I also thought that it was perhaps the "not null" part of the constraint, which the documentation mentions is "tricky". Additionally, I've tried similar statements without any json operators. All of these seem to be disallowed.
I am obnoxiously prone to not reading documentation, and worse still at understanding it when I do bother to read it... so is this restriction simply not documented, or am I looking in the wrong places to see whether it is permitted? Why is it not permitted?
You're missing a CHECK after jsonb:
create domain massFraction as jsonb check (
value->'h' > 0 and value->'h' is not null
);
This results in another error:
ERROR: 42883: operator does not exist: jsonb > integer
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
I believe what you want is
CREATE DOMAIN massFraction AS jsonb CHECK (
  CASE jsonb_typeof(value->'h')
    -- only cast when it is safe to do so
    -- note: this will reject numeric values stored as text in the json object (eg '{"h": "1"}')
    WHEN 'number' THEN (value->>'h')::integer > 0
    ELSE false
  END
);
The error you're getting comes from the fact that your input is parsed as
CREATE DOMAIN massFraction AS jsonb(<modifier>) -- like in varchar(10)
As an aside, I would recommend against using camelCase in PostgreSQL, as massFraction is the same as massfraction (unless quoted), and PostgreSQL will use the lowercased form when reporting errors, hints, etc.
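A quick way to sanity-check the CASE-based domain above is to cast a few literals to it (a hypothetical smoke test; note the lowercased name, and each failing cast raises a check constraint violation rather than returning a row):

SELECT '{"h": 2}'::massfraction;     -- accepted: "h" is a positive number
SELECT '{"h": 0}'::massfraction;     -- fails: not greater than 0
SELECT '{"x": 1}'::massfraction;     -- fails: "h" is missing
SELECT '{"h": "1"}'::massfraction;   -- fails: "h" is a JSON string, not a number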
I'm trying to work out how to expand a JSONB field with a jsonb_to_recordset and a custom type.
Postgres 11.4.
This is just a test case, but for the minute I'm trying to work out the syntax. I've got a table named data_file_info with a JSONB field named table_stats. The JSONB field always includes a JSON array with the same structure:
[
{"table_name":"Activity","record_count":0,"table_number":214},
{"table_name":"Assembly","record_count":1,"table_number":15},
{"table_name":"AssemblyProds","record_count":0,"table_number":154}
]
The following code works correctly:
select table_stats_splat.*
  from data_file_info,
       jsonb_to_recordset(table_stats) as table_stats_splat (
         table_name text,
         record_count integer,
         table_number integer
       );
What I would like to do is pass in a custom type definition instead of the long-form column definition list shown above. Here's the matching type:
create type data.table_stats_type as (
table_name text,
record_count integer,
table_number integer)
Some examples I've seen, and the docs, say that you can supply a type name by casting a null::row_type as the first parameter to jsonb_to_recordset. The examples that I've found use in-line JSON, while I'm trying to pull stored JSON. I've made a few attempts, and all have failed. Below are two of the trials, with errors. Can someone point me towards the correct syntax?
FAIL:
select table_stats_splat.*
from data_file_info,
jsonb_populate_recordset(null::table_stats_type, data_file_info) as table_stats_splat;
-- ERROR: function jsonb_populate_recordset(table_stats_type, data_file_info) does not exist
-- LINE 4: jsonb_populate_recordset(null::table_stats_type, dat...
--                                  ^
-- HINT: No function matches the given name and argument types. You might need to add explicit type casts. (Line 4)
FAIL:
select *
from jsonb_populate_recordset(NULL::table_stats_type, (select table_stats from data_file_info)) as table_stats_splat;
-- ERROR: more than one row returned by a subquery used as an expression. (Line 2)
I'm doubtlessly missing something pretty obvious, and am hoping someone can suggest what that is.
Use the column as the second parameter:
select table_stats_splat.*
from data_file_info,
jsonb_populate_recordset(null::table_stats_type, table_stats) as table_stats_splat;
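With the sample table_stats value from the question, each element of the JSON array expands into its own row, so the output should look roughly like this:

 table_name    | record_count | table_number
---------------+--------------+--------------
 Activity      |            0 |          214
 Assembly      |            1 |           15
 AssemblyProds |            0 |          154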
I am trying to use PostgreSQL full text search. I read that the stop words (words ignored for indexing) are implemented via a dictionary. But I would like to give the user limited control over the stop words (to insert new ones), so I grouped them in a table.
From the example below:
select strip(to_tsvector('simple', texto)) from longtxts where id = 23;
I can get the vector:
{'alta' 'aluno' 'cada' 'do' 'em' 'leia' 'livro' 'pedir' 'que' 'trecho' 'um' 'voz'}
And now I would like to remove the elements from the stopwords table:
select array(select palavra_proibida from stopwords);
That returns the array:
{a,as,ao,aos,com,default,e,eu,o,os,da,das,de,do,dos,em,lhe,na,nao,nas,no,nos,ou,por,para,pra,que,sem,se,um,uma}
Then, following the documentation:
ts_delete(vector tsvector, lexemes text[]) returns tsvector
Removes any occurrence of the lexemes in lexemes from vector.
Example: ts_delete('fat:2,4 cat:3 rat:5A'::tsvector, ARRAY['fat','rat'])
I tried a lot. For example:
select ts_delete((select strip(to_tsvector('simple', texto)) from longtxts where id = 23), array[(select palavra_proibida from stopwords)]);
But I always receive the error:
ERROR: function ts_delete(tsvector, character varying[]) does not exist
LINE 1: select ts_delete((select strip(to_tsvector('simple', texto))...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
Could anyone help me? Thanks in advance!
ts_delete was introduced in PostgreSQL 9.6. Based on the error message, you're using an earlier version. You may try select version(); to be sure.
When you land on the PostgreSQL online documentation with a web search, it may correspond to any version. The version is in the URL and there's a "This page in another version" set of links at the top of each page to help switching to the equivalent doc for a different version.
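Once you're on 9.6 or later, the original attempt should work; if function resolution still complains about character varying[], an explicit cast to text[] removes the mismatch. A sketch using the table and column names from the question:

select ts_delete(
         (select strip(to_tsvector('simple', texto)) from longtxts where id = 23),
         (select array_agg(palavra_proibida)::text[] from stopwords)
       );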
I have a Python script that runs a pgSQL file through SQLAlchemy's connection.execute function. Here's the block of code in Python:
results = pg_conn.execute(sql_cmd, beg_date = datetime.date(2015,4,1), end_date = datetime.date(2015,4,30))
And here's one of the areas where the variable gets inputted in my SQL:
WHERE
( dv.date >= %(beg_date)s AND
dv.date <= %(end_date)s)
When I run this, I get a cryptic python error:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) argument formats can't be mixed
…followed by a huge dump of the offending SQL query. I've run this exact code with the same variable convention before. Why isn't it working this time?
I encountered a similar issue as Nikhil. I have a query with LIKE clauses which worked until I modified it to include a bind variable, at which point I received the following error:
DatabaseError: Execution failed on sql '...': argument formats can't be mixed
The solution is not to give up on the LIKE clause; it would be pretty crazy if psycopg2 simply didn't permit LIKE clauses. Rather, we can escape the literal % with %%. For example, the following query:
SELECT *
FROM people
WHERE start_date > %(beg_date)s
AND name LIKE 'John%';
would need to be modified to:
SELECT *
FROM people
WHERE start_date > %(beg_date)s
AND name LIKE 'John%%';
More details in the psycopg2 docs: http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries
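Another way to sidestep the escaping is to keep the SQL free of literal % characters and bind the whole pattern instead (a sketch; name_pattern is a hypothetical parameter you would bind to, e.g., 'John%' on the Python side):

SELECT *
FROM people
WHERE start_date > %(beg_date)s
  AND name LIKE %(name_pattern)s;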
As it turned out, I had used a SQL LIKE operator in the new SQL query, and the % wildcard was interfering with Python's parameter escaping. For instance:
dv.device LIKE 'iPhone%' or
dv.device LIKE '%Phone'
Another answer offered a way to un-escape and re-escape, which I felt would add unnecessary complexity to otherwise simple code. Instead, I used pgSQL's ability to handle regex to modify the SQL query itself. This changed the above portion of the query to:
dv.device ~ E'^iPhone' or
dv.device ~ E'Phone$'
So for others: you may need to change your LIKE operators to the regex operator ~ to get it to work. Just remember that it'll be WAY slower for large queries.
For me, it turned out I had a % in a SQL comment:
/* Any future change in the testing size will not require
a change here... even if we do a 100% test
*/
This works fine:
/* Any future change in the testing size will not require
a change here... even if we do a 100pct test
*/