Postgres escape a single quote

I have the following postgres query:
SELECT SUM(Cost)
FROM DB
WHERE ID NOT IN (<parameter>)
<parameter> is a dynamic text field where multiple IDs need to be inserted. If you type in
123, 456
as IDs, it results in:
SELECT SUM(Cost)
FROM DB
WHERE ID NOT IN ('123,456')
Which doesn't run properly.
I can change the query, but I can't change the input field. If you type in
123','456
it results in:
SELECT SUM(Cost)
FROM DB
WHERE ID NOT IN ('123'',''456')
When you change the query into:
SELECT SUM(Cost)
FROM DB
WHERE ID NOT IN ('<parameter>')
And you type in
123,456
Then it results in:
SELECT SUM(Cost)
FROM DB
WHERE ID NOT IN (''123'',''456'')
I've got it working for MySQL, but not for PostgreSQL. Any idea how to trick PostgreSQL?

Try something like:
SELECT SUM(Cost)
FROM DB
WHERE ID != ALL(('{'||'123,456'||'}')::numeric[])
This builds an array literal from your input values ({123,456}), casts it to a numeric array, and checks ID against all elements of the array.
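With the dynamic field left in the query, the same trick looks roughly like this (a sketch; it assumes the input field wraps whatever you type in single quotes, exactly as in the first example above):
SELECT SUM(Cost)
FROM DB
WHERE ID != ALL(('{'||<parameter>||'}')::numeric[])
Typing 123,456 then expands the concatenation to '{'||'123,456'||'}', i.e. the string {123,456}, which casts cleanly to a numeric array.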

Related

Selecting an entry from PostgreSQL table based on time and id using psycopg2

I have the following table in a PostgreSQL DB:
(table excerpt omitted)
I need a PostgreSQL command to get a specific value from the tbl column, based on the time_launched and id columns. More precisely, I need the value from the tbl column that corresponds to a specific id and the latest (time-wise) value in the time_launched column. Consequently, the request should return "x" as the output.
I've tried these queries (using the psycopg2 module), but they did not work:
db_object.execute("SELECT * FROM check_ids WHERE id = %s AND MIN(time_launched)", (id_variable,))
db_object.execute("SELECT DISTINCT on(id, check_id) id, check_id, time_launched, tbl, tbl_1 FROM check_ids order by id, check_id time_launched desc")
Looks like a simple ORDER BY with a LIMIT 1 should do the trick:
SELECT tbl
FROM check_ids
WHERE id = %s
ORDER BY time_launched DESC
LIMIT 1
The WHERE clause filters results by the provided id, the ORDER BY clause sorts them in reverse chronological order, and LIMIT 1 returns only the first (most recent) row.
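If you instead need the latest row for every id at once, the DISTINCT ON approach attempted in the question can be written like this (a sketch based only on the columns named above):
SELECT DISTINCT ON (id) id, tbl
FROM check_ids
ORDER BY id, time_launched DESC
DISTINCT ON (id) keeps the first row per id after sorting, which with the descending time_launched sort is the most recent one.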

Postgres: insert value from another table as part of multi-row insert?

I am working in Postgres 9.6 and would like to insert multiple rows with a single INSERT INTO query.
I would also like, as one of the values inserted, to select a value from another table.
This is what I've tried:
insert into store_properties (property, store_id)
values
('ice cream', select id from store where postcode='SW1A 1AA'),
('petrol', select id from store where property='EC1N 2RN')
;
But I get a syntax error at the first select. What am I doing wrong?
Note that the value is determined per row, i.e. I'm not straightforwardly copying over values from another table.
demo:db<>fiddle
insert into store_properties (property, store_id)
values
('ice cream', (select id from store where postcode='SW1A 1AA')),
('petrol', (select id from store where property='EC1N 2RN'))
There were some missing parentheses. Each row of values has to be surrounded by parentheses, and so does each SELECT subquery.
I don't know your table structure, but maybe there is another error: the first row is filtered by a postcode column, the second one by a property column...
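If you prefer to avoid scalar subqueries inside VALUES, an equivalent way to write the same multi-row insert is a plain INSERT ... SELECT combined with UNION ALL. This is only a sketch reusing the tables and postcodes from the question; adjust the second WHERE filter if it really should use a different column, as noted above:
insert into store_properties (property, store_id)
select 'ice cream', id from store where postcode='SW1A 1AA'
union all
select 'petrol', id from store where postcode='EC1N 2RN';
Each SELECT contributes one row, so the result is the same two-row insert without nested parentheses.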

How to query an ampersand symbol in Postgres

I have a Postgres table that has names and addresses. Some of these name fields contain both names of a couple -- for example, "John & Jane".
I am trying to write a query that pulls out only those rows where this is the case.
When I run this query, it selects 0 rows even though I know that they exist in the table:
SELECT count(*) FROM name_list where namefirst LIKE '%&%';
Does anyone know how to address this?

Sum a field in a jsonb column with an Ecto query

Say I have a jsonb type column in a Postgres DB, called info. One of the fields is bytes, which is stored as an integer in the info field.
If I try to sum the values of the info => bytes field in an Ecto query, as below:
total_bytes = Repo.one(from job in FilesTable,
select: sum(fragment("info->>'bytes'")))
I get the error "function sum(text) does not exist".
Is there a way to write the query above so that info => bytes can be summed, or would I have to just select that field from each row in the database, and then use Elixir to add up the values?
The error message says that it can't sum a text field. You need to explicitly cast the field to an integer so that sum works.
Also, it's incorrect to hardcode a column name in a fragment. It only works in this case because you're selecting from a single table; if you had joins with other tables that share the column name, the query wouldn't work. You can use ? in the fragment string and pass the column as an argument instead.
Here's the final thing that should work:
sum(fragment("(?->>'bytes')::integer", job.info))
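For reference, the condition this fragment generates corresponds to plain SQL along these lines (a sketch; the table name files_table is only illustrative, standing in for whatever FilesTable maps to):
SELECT SUM((info->>'bytes')::integer)
FROM files_table;
The ->> operator extracts the value as text, and the ::integer cast is what makes SUM applicable.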

Can we retrieve all fields in MongoDB if we use a group-by clause without specifying each field?

There is a problem with a Mongo query; I want something like the following.
Suppose the table is:
Table name: Employee
Fields: id, name, salary, age
select * from employee where name="xyz" group by salary;
If I use this query in MongoDB, I have to write it like this:
db.employee.aggregate([{$match:{"name":"xyz"}},{$group:{"_id":"$salary"}}]);
But I am not getting all the fields, only salary. We could do this by specifying each field name using $first, but I don't want to specify the field names; I need something like SELECT * functionality in a MongoDB aggregation query.