Is it possible to copy data from one table to another based on whether data in one table matches the data in the other table in a specific column - triggers

I have a table A with the following columns and data:
Tag_ = P-111
Description_ = Pump
HP_ = 100
RPM_ = 1,800
I have another table B with the same columns:
Tag_ = P-111
Description_
HP_
RPM_
Is there a way, when I enter data in the Tag_ column in Table B and there is matching data in the Tag_ column in Table A, to set up a trigger or automatic command that copies the data from the other columns in Table A into Table B?
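One way to do this in PostgreSQL (assuming that is the database; table_a and table_b below are placeholder names for the two tables) is a BEFORE trigger on Table B that looks up the matching Tag_ in Table A. A minimal sketch:

CREATE OR REPLACE FUNCTION copy_from_table_a()
RETURNS trigger AS $$
BEGIN
    -- Look up the matching Tag_ in table A and copy its other columns onto the incoming row.
    -- If table A has no matching Tag_, these three fields are simply set to NULL.
    SELECT a.Description_, a.HP_, a.RPM_
      INTO NEW.Description_, NEW.HP_, NEW.RPM_
      FROM table_a a
     WHERE a.Tag_ = NEW.Tag_;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER fill_from_table_a
BEFORE INSERT OR UPDATE OF Tag_ ON table_b
FOR EACH ROW
EXECUTE FUNCTION copy_from_table_a();

On PostgreSQL versions before 11 the last line would be EXECUTE PROCEDURE instead of EXECUTE FUNCTION.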

Related

Postgresql - set columns based on jsonb columns with the same name

My table has a jsonb column report with fields I want to copy into columns that already exist in the same table. I don't want to create new records; I just want to update each row. I want to do something like this:
UPDATE project SET col1 = report->>'col1', col2 = report->>'col2', col3 = report->>'col3';
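That statement should work as written when col1, col2 and col3 are text columns; ->> always returns text, so columns of other types need an explicit cast. A sketch assuming col1 is an integer:

UPDATE project
SET col1 = (report->>'col1')::int,
    col2 = report->>'col2',
    col3 = report->>'col3'
WHERE report IS NOT NULL;  -- optional guard: skip rows that have no report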

Stream Joins based on distinct values from a previous stream

I have a main table A which stores all the events that occur, along with some details.
For each "event" in Table A there is a separate table of that name (called the event table).
I am using the main table A as a stream and ideally need to join it to the corresponding event table so that I end up with a single table containing Table A's rows together with their respective details.
In the case below there are two distinct event types, each with its own table and schema.
Example:
Table A:
id
time
detail_1
detail_2
event
Table event 1:
id
detail_6
detail_8
detail_9
Table event 2:
id
detail_11
detail_12
How do I union it so I have a single table in the end with the corresponding details from Table A, event 1 and event 2?
Here is what I was trying to do:
df = (
    spark.readStream.format("delta")
    .option("ignoreChanges", "true")
    .load(f"{table_name}")
)
event_types = df.select("event").distinct().collect()
for row in event_types:
    event = row[0].replace(" ", "_").replace(":", "").lower()
    if event in ["task", "module", "_created", "test_created", "test_to_be_deleted"]:
        df_event = spark.readStream.format("delta").option("ignoreChanges", "true").load(f"{event}")
        joined_df = df.join(df_event, ["message_id"], "inner")
df.writeStream.format("delta").outputMode("append").option(
    "checkpointLocation",
    f"{table}",
).trigger(once=True).foreachBatch(apple_a_bunch_of_changes).start()
Is there a better way to do this?
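One pattern that avoids calling collect() on a streaming DataFrame (which Structured Streaming does not allow) is to do the per-event joins inside foreachBatch, where each micro-batch is an ordinary static DataFrame. A rough sketch, assuming the event tables are Delta paths named after the event, the join key is id as in the schemas above, and table_a_enriched / checkpoint_path are placeholders:

from functools import reduce
from pyspark.sql import functions as F

EVENT_TABLES = ["event_1", "event_2"]  # assumed names of the per-event Delta tables

def enrich(batch_df, batch_id):
    # Inside foreachBatch the micro-batch is a static DataFrame,
    # so plain batch joins against the event tables are allowed.
    parts = []
    for event in EVENT_TABLES:
        event_df = spark.read.format("delta").load(event)
        parts.append(
            batch_df.filter(F.col("event") == event)  # assumes the event column holds the table name
            .join(event_df, "id", "inner")            # assumed join key
        )
    # Union by column name; columns missing from one event type become null.
    result = reduce(lambda a, b: a.unionByName(b, allowMissingColumns=True), parts)
    result.write.format("delta").mode("append").save("table_a_enriched")  # placeholder target

(
    spark.readStream.format("delta")
    .option("ignoreChanges", "true")
    .load(table_name)
    .writeStream
    .option("checkpointLocation", checkpoint_path)  # placeholder
    .trigger(once=True)
    .foreachBatch(enrich)
    .start()
)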

How to set values for one column based on another?

Goal: when the column Remote in table DB matches the column Thrunode in table SO,
set the column customer in table DB to the value from table SO.
DB = tbl_db_collecting
SO = tb_systemshc
sql:
UPDATE tbl_db_collecting SET
tbl_db_collecting.customer = tb_systemshc.environment
FROM tb_systemshc
WHERE tbl_db_collecting.lower(remote) = tb_systemshc.lower(thrunode)
output:
SQL Error [3F000]: ERROR: schema "tbl_db_collecting" does not exist
Is this what you are looking for?
update tbl_db_collecting
set customer = tb_systemshc.environment
from tb_systemshc
where lower(tbl_db_collecting.remote) = lower(tb_systemshc.thrunode);
When you write tbl_db_collecting.lower(remote), PostgreSQL parses that as if you are looking for a lower() function defined in schema tbl_db_collecting.

Update schema table data into another schema table

I have a function with input parameters that inserts data into the xch.xch_transmas_patient table. After inserting into xch.xch_transmas_patient, I try to update the lt_transmas_patient table with an UPDATE statement. I have written the code in the function like this; is it correct or not?
UPDATE lt_transmas_patient p
SET patient_first_name=pa.patient_first_name,
patient_last_name=pa.patient_last_name,
patient_birthdate=pa.patient_birthdate,
patient_email=pa.patient_email,
patient_mobile=pa.patient_mobile,
patient_home_phone=pa.patient_home_phone,
patient_updated_by=pa.patient_created_by,
patient_updated_on=pa.patient_updated_on,
patient_salutation=pa.patient_salutation,
patient_maritalstatus=pa.patient_maritalstatus,
patient_education_level=pa.patient_education_level,
patient_blood_group=pa.patient_blood_group,
patient_mother_tongue=pa.patient_mother_tongue,
patient_identification_mark=pa.patient_identification_mark
FROM xch.xch_transmas_patient pa
join xch.xch_transmasmap_mpi_link m on m.mpi_lk_xch_mpi=pa.patient_mpi and m.mpi_lk_partner_id=2
where p.patient_mpi=m.mpi_lk_external_mpi
and pa.patient_mpi=_patient_mpi;

Can we update the same rows in two tables in PostgreSQL?

Is it possible to update a row in two tables in PostgreSQL?
The table names are patron and cir_transaction.
The column name is patron_id.
I need to update the same patron_id value, which is present in both of the above tables.
I wrote it like below:
update patron p, cir_transaction c
set patron_id = '4BW14MBA10'
where patron_id = '4BW14MBA10 ' and c.patron_id = patron_id
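A single UPDATE in PostgreSQL can only target one table, so one option is simply two UPDATE statements inside one transaction. A sketch, assuming no foreign key blocks changing patron_id and keeping the literals exactly as written above (trailing space included):

BEGIN;
UPDATE patron
   SET patron_id = '4BW14MBA10'
 WHERE patron_id = '4BW14MBA10 ';
UPDATE cir_transaction
   SET patron_id = '4BW14MBA10'
 WHERE patron_id = '4BW14MBA10 ';
COMMIT;

If both changes really have to happen in a single statement, they can also be chained with a data-modifying CTE (WITH ... UPDATE ... RETURNING).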