Update multiple columns based on other columns in same table - oracle-sqldeveloper

I'm trying to update two columns to certain values based on two other columns with certain values in the same table, but SQL Developer keeps prompting me about something called a bind.
This is what doesn't work:
UPDATE table t1
SET t1.column1 = value1, t1.column2 = value2
WHERE t1.column5 = cake
AND t1.column7 = pie;

I assume you are getting an "invalid identifier" error. If my assumption is correct, then the only mistake in your code is that you forgot to enclose your values in single quotation marks:
UPDATE table t1
SET t1.column1 = value1, t1.column2 = value2
WHERE t1.column5 = 'cake'
AND t1.column7 = 'pie';
Note that whenever you assign or compare a string value, you should always enclose it in single quotation marks.
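As an aside, if SQL Developer asks you to "enter binds", the statement probably contains a name with a leading colon (e.g. :cake); a colon marks a bind variable, whose value SQL Developer prompts for at run time. A minimal sketch of the same update written with binds deliberately (my_table and all column names here are placeholders):
UPDATE my_table t1
SET t1.column1 = :value1,   -- SQL Developer prompts for :value1 at run time
    t1.column2 = :value2
WHERE t1.column5 = :cake
  AND t1.column7 = :pie;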

Related

Postgres dynamic filter conditions

I want to dynamically filter data based on a condition that is stored in a specific column. This condition can change for every row.
For example, I have a table my_table with a couple of columns, one of them called foo, which holds filter conditions such as AND bar > 1, or in the next row AND bar > 2, or in the next row AND bar = 33.
I have a query which looks like:
SELECT something from somewhere
LEFT JOIN otherthing on some_condition
WHERE first_condition AND second_condition AND
here_i_want_dynamically_load_condition_from_my_table.foo
What is the correct way to do this? I have read some articles about dynamic queries, but I have not been able to find the right approach.
This is impossible in pure SQL: at query time, the planner has to know your exact logic. You can, however, hide it away in a function (in pseudo-SQL; the ... stand for your own lookup queries):
CREATE FUNCTION do_i_filter_or_not(some_id integer) RETURNS boolean AS $$
DECLARE
    value integer;
    condition_type text;
    condition_value integer;
BEGIN
    SELECT some_value INTO value FROM some_table WHERE id = some_id;
    condition_type  := ...; -- query the condition type for this row
    condition_value := ...; -- query the condition value for this row
    IF condition_type = 'equals' AND condition_value = value THEN
        RETURN true;
    END IF;
    IF condition_type = 'greater_than' AND condition_value < value THEN
        RETURN true;
    END IF;
    IF condition_type = 'lower_than' AND condition_value > value THEN
        RETURN true;
    END IF;
    RETURN false;
END;
$$ LANGUAGE plpgsql;
And query it like this:
SELECT something
FROM somewhere
LEFT JOIN otherthing on some_condition
WHERE first_condition
AND second_condition
AND do_I_filter_or_not(somewhere.id)
The performance will be bad, though: you potentially have to invoke that function on every row in the query, triggering lots of subqueries.
Thinking about it, if you just want <, >, = and you have a table (filter_criteria) describing, for each id, what the criterion is, you can do it like this:
CREATE TABLE filter_criteria (
    some_id integer,
    equals_threshold integer,
    greater_than_threshold integer,
    lower_than_threshold integer
    -- plus a CHECK constraint that exactly one threshold is not null
);
INSERT INTO filter_criteria VALUES (1, null, 5, null); -- for > 5
And query like this:
SELECT something
FROM somewhere
LEFT JOIN otherthing on some_condition
LEFT JOIN filter_criteria USING (some_id)
WHERE first_condition
AND second_condition
AND COALESCE(bar = equals_threshold, true)
AND COALESCE(bar > greater_than_threshold, true)
AND COALESCE(bar < lower_than_threshold, true)
The COALESCEs are there to default to not filtering (AND true) when a threshold is missing: bar = equals_threshold yields NULL rather than a boolean when equals_threshold is NULL.
The planner still knows your exact logic at query time: you are just applying three simple =, <, > checks per row. That is still far more performant than idea #1 with all its subquerying.
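To make the COALESCE trick concrete, here is a standalone demonstration with literal values standing in for bar and the thresholds of the "> 5" row inserted above (bar = 7, only greater_than_threshold set):
SELECT COALESCE(7 = NULL::integer, true) AS equals_check,   -- NULL -> true (skipped)
       COALESCE(7 > 5, true)             AS greater_check,  -- true  (applied)
       COALESCE(7 < NULL::integer, true) AS lower_check;    -- NULL -> true (skipped)
-- All three are true, so the row passes the combined filter.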

How to update table based on CASE logic?

I need to update a table based on a value derived from CASE logic. That CASE logic is built from several other tables, like this:
CASE
WHEN column = 'value'
THEN
COALESCE
(
CASE WHEN column = 'test1' THEN 'result' END,
CASE WHEN column = 'test2' THEN 'result' END
)
ELSE
column
END AS Derived_Column
FROM
table_a a
LEFT JOIN table_b b ON a.column = b.column
LEFT JOIN table_c c ON b.column = c.column
What I need to do is something like this:
UPDATE table SET column =
( SELECT column FROM table WHERE column = <CASE STATEMENT LOGIC>)
Somehow I need to update the column in the table, filtering on the output of Derived_Column, so I need to check against a subquery or something of that nature.
Would anyone know how to do this?
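A common pattern for this, sketched here with a hypothetical key column (id) since the real schema is not shown, is to compute Derived_Column once in a derived table and then update from a join against it (SQL Server syntax; all table and column names are the question's placeholders):
UPDATE t
SET    t.target_column = d.Derived_Column
FROM   target_table t
INNER JOIN (
    SELECT a.id,    -- hypothetical join key back to the table being updated
           CASE
               WHEN a.column = 'value'
               THEN COALESCE(
                        CASE WHEN b.column = 'test1' THEN 'result' END,
                        CASE WHEN c.column = 'test2' THEN 'result' END)
               ELSE a.column
           END AS Derived_Column
    FROM table_a a
    LEFT JOIN table_b b ON a.column = b.column
    LEFT JOIN table_c c ON b.column = c.column
) AS d ON d.id = t.id;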

What does a column assignment using an aggregate in the columns area of a select do?

I'm trying to decipher code from another programmer who is long gone, and I came across a SELECT statement in a stored procedure that looks like this (simplified) example:
SELECT #Table2.Col1,
       #Table2.Col2,
       #Table2.Col3,
       MysteryColumn = CASE WHEN y.Col3 IS NOT NULL
                            THEN #Table2.MysteryColumn - y.Col3
                            ELSE #Table2.MysteryColumn
                       END
INTO #Table1
FROM #Table2
LEFT OUTER JOIN (
    SELECT Table3.Col1, Table3.Col2, Col3 = SUM(Table3.Col3)
    FROM Table3
    INNER JOIN #Table4 ON #Table4.Col1 = Table3.Col1 AND #Table4.Col2 = Table3.Col2
    GROUP BY Table3.Col1, Table3.Col2
) AS y ON #Table2.Col1 = y.Col1 AND #Table2.Col2 = y.Col2
WHERE #Table2.Col2 < @EnteredValue
My question: what does the fourth column of the primary selection do? Does it produce a boolean value checking whether the values are equal? Does it set #Table2.MysteryColumn equal to some value and then insert it into #Table1? Or does it just update #Table2.MysteryColumn without outputting a value into #Table1?
The same thing seems to happen inside the subquery on the third column, and I am equally at a loss as to what that does.
MysteryColumn = gives the expression a name, also called a column alias. The fact that a column in #Table2 has the same name is beside the point.
Since the statement uses SELECT ... INTO, that alias also becomes the column's name in the resulting temporary table. See the SELECT clause documentation (note the | column_alias = expression alternative) and the INTO clause.
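A minimal illustration of the two equivalent aliasing forms in T-SQL, and of how the alias carries into a table created with INTO:
SELECT Total = 1 + 2;    -- "alias = expression" form (T-SQL specific)
SELECT 1 + 2 AS Total;   -- equivalent ANSI form
SELECT Total = 1 + 2 INTO #tmp;  -- alias becomes the new table's column name
SELECT Total FROM #tmp;          -- returns 3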

Postgres: buckets always filled from left in crosstab query

My query looks like this:
SELECT mthreport.*
FROM crosstab
('SELECT
to_char(ipstimestamp, ''mon DD HH24h'') As row_name,
varid::text || log.varid || ''_'' || ips.objectname::text As bucket,
COUNT(*)::integer As bucketvalue
FROM loggingdb_ips_boolean As log
INNER JOIN IpsObjects As ips
ON log.Varid=ips.ObjectId
WHERE ((log.varid = 37551)
OR (log.varid = 27087)
OR (log.varid = 50876)
OR (log.varid = 45096)
OR (log.varid = 54708)
OR (log.varid = 47475)
OR (log.varid = 54606)
OR (log.varid = 25528)
OR (log.varid = 54729))
GROUP BY to_char(ipstimestamp, ''yyyy MM DD HH24h''), row_name, objectid, bucket
ORDER BY to_char(ipstimestamp, ''yyyy MM DD HH24h''), row_name, objectid, bucket' )
As mthreport(item_name text, varid_37551 integer,
varid_27087 integer ,
varid_50876 integer ,
varid_45096 integer ,
varid_54708 integer ,
varid_47475 integer ,
varid_54606 integer ,
varid_25528 integer ,
varid_54729 integer ,
varid_29469 integer)
The query can be tested against a test table with this connection string:
"host=bellariastrasse.com port=5432 dbname=IpsLogging user=guest password=guest"
The query is syntactically correct and runs fine. My problem is that the COUNT(*) values always fill the leftmost column(s); however, in many instances the left columns should contain a zero or a NULL, and only the 2nd (or n-th) column should be filled. My brain is melting and I cannot figure out what is wrong!
The solution for your problem is to use the crosstab() variant with two parameters.
The second parameter (another query string) produces the list of output columns, so that NULL values in the data query (the first parameter) are assigned correctly.
Check the manual for the tablefunc extension, and in particular crosstab(text, text):
The main limitation of the single-parameter form of crosstab is that
it treats all values in a group alike, inserting each value into the
first available column. If you want the value columns to correspond to
specific categories of data, and some groups might not have data for
some of the categories, that doesn't work well. The two-parameter form
of crosstab handles this case by providing an explicit list of the
categories corresponding to the output columns.
Emphasis mine. I have posted a couple of related answers on this recently.
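A minimal sketch of the two-parameter form applied to this query, abbreviated to three categories and using dollar-quoting instead of doubled single quotes; the bucket expression is simplified relative to the question's, and the schema is taken from it:
SELECT mthreport.*
FROM crosstab(
    $$SELECT to_char(ipstimestamp, 'mon DD HH24h') AS row_name,
             'varid_' || log.varid                 AS bucket,
             COUNT(*)::integer                     AS bucketvalue
      FROM   loggingdb_ips_boolean AS log
      WHERE  log.varid IN (37551, 27087, 50876)
      GROUP  BY 1, 2
      ORDER  BY 1$$,
    -- 2nd parameter: explicit, ordered category list; empty buckets become NULL
    $$VALUES ('varid_37551'), ('varid_27087'), ('varid_50876')$$
) AS mthreport(item_name   text,
               varid_37551 integer,
               varid_27087 integer,
               varid_50876 integer);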

SQL Server cast varchar to int

I have a table that has a column 'Value' that is a varchar. One row puts a '10' in this column. This "number" will need to be added to and subtracted from, but I can't do so directly because it's a varchar.
So, the following gives an error:
update Fields
set Value = Value - 1
from Fields f, FTypes ft
where ft.Name = 'Field Count'
and ft.ID = f.ID_FT
and f.ID_Project = 186
GO
How do I cast/convert the value to an int, perform the math, then set as a varchar again?
Martin Smith's point is an excellent one: if only numeric data is going in there and you are always going to be doing operations like this, changing the column to a numeric type will save you the time and hassle of this conversion work.
That being said, you can do:
update Fields
set ColumnName = cast( (cast(ColumnName as int) - 1) as varchar(nn))
from Fields f, FTypes ft
where ft.Name = 'Field Count'
and ft.ID = f.ID_FT
and f.ID_Project = 186
where nn is the declared length of your varchar column
You need to use CAST twice - once to make your Value column an INT so you can subtract 1 from it, and then back to a VARCHAR(x):
update dbo.Fields
set Value = CAST((CAST(Value AS INT) - 1) AS VARCHAR(20))
from dbo.Fields f
inner join dbo.FTypes ft ON ft.ID = f.ID_FT
where ft.Name = 'Field Count'
and f.ID_Project = 186
Also, I would recommend always using the dbo. prefix on all your database objects, and I would always argue for the newer, ANSI-standard JOIN syntax, which is more expressive (clearer to read and understand) and helps avoid unwanted Cartesian products (caused by forgetting to specify a join condition in the WHERE clause).
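One more defensive variant worth mentioning: if any row might hold non-numeric text, a plain CAST will raise a conversion error. On SQL Server 2012 and later, TRY_CAST returns NULL instead, so you can guard the update (a sketch using the same tables as above):
UPDATE f
SET    f.Value = CAST(TRY_CAST(f.Value AS INT) - 1 AS VARCHAR(20))
FROM   dbo.Fields f
INNER JOIN dbo.FTypes ft ON ft.ID = f.ID_FT
WHERE  ft.Name = 'Field Count'
  AND  f.ID_Project = 186
  AND  TRY_CAST(f.Value AS INT) IS NOT NULL  -- skip rows that aren't numeric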