I have a few rows stored in a source table (referred to as $schema.$sourceTable in the UPDATE query below). This table has 3 columns: TABLE_NAME, PERMISSION_TAG_COL, PT_DEPLOYED.
I have an update statement stored in a string like:
var update_PT_Deploy = s"UPDATE $schema.$sourceTable SET PT_DEPLOYED = 'Y' WHERE TABLE_NAME = '$tableName';"
My source table does have rows whose TABLE_NAME matches $tableName (a parameter), since I inserted them into this table using another function of my program. PT_DEPLOYED was given its default value of NULL when those rows were inserted.
I'm trying to execute the update using JDBC in the following manner:
println(update_PT_Deploy)
val preparedStatement: PreparedStatement = connection.prepareStatement(update_PT_Deploy)
val row = preparedStatement.execute()
println(row)
println("row updated in table successfully")
preparedStatement.close()
The above piece of code does not throw any exception, but when I query my table in a tool like DBeaver, the NULL value of PT_DEPLOYED does not get updated to Y.
If I execute the same query printed from update_PT_Deploy inside DBeaver, it works and the table is updated. I am sure I am following the correct steps.
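For reference, here is a minimal hedged sketch of the same update issued with a bind parameter and an explicit commit, assuming connection, schema, sourceTable and tableName are defined as above. The trailing semicolon is dropped because some JDBC drivers reject it inside prepared statements, executeUpdate() reports how many rows were actually changed, and the commit only matters if auto-commit has been disabled on the connection:

import java.sql.PreparedStatement

val updateSql = s"UPDATE $schema.$sourceTable SET PT_DEPLOYED = 'Y' WHERE TABLE_NAME = ?"
val stmt: PreparedStatement = connection.prepareStatement(updateSql)
try {
  stmt.setString(1, tableName)                        // bind the table name instead of interpolating it
  val rowsUpdated = stmt.executeUpdate()              // number of rows actually changed
  println(s"$rowsUpdated row(s) updated")
  if (!connection.getAutoCommit) connection.commit()  // make the change visible to other sessions
} finally {
  stmt.close()
}

If rowsUpdated prints 0, the WHERE clause is not matching; if it prints 1 but DBeaver still shows NULL, the transaction was most likely never committed.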
I am trying to insert a record into a table using a variable, but it is failing.
command:
val query = "INSERT into TABLE Feed_metadata_s2 values ('LOGS','RUN_DATE',{} )".format(s"$RUN_DATE")
spark.sql(s"query")
spark.sql("INSERT into TABLE Feed_metadata_s2 values ('LOGS','ExtractStartTimestamp',$ExtractStartTimestamp)")
error:
INSERT into TABLE Feed_metadata_s2 values ('SDEDLOGS','ExtractStartTimestamp',$ExtractStartTimestamp)
------------------------------------------------------------------------------^^^
at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:241)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:117)
at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:48)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:69)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
It seems you're confused by string interpolation... you need to put an s before the last query so that the variable is substituted into the string. Also, the first two lines can be simplified:
val query = s"INSERT into TABLE Feed_metadata_s2 values ('LOGS','RUN_DATE',$RUN_DATE)"
spark.sql(query)
spark.sql(s"INSERT into TABLE Feed_metadata_s2 values ('LOGS','ExtractStartTimestamp',$ExtractStartTimestamp)")
I ran the following query in Hive and it successfully updated the column value in the table:
select id, regexp_replace(full_name,'A','C') from table
But when I ran the same query from Spark SQL, it did not update the actual records
hiveContext.sql("select id, regexp_replace(full_name,'A','C') from table")
But when I do hiveContext.sql("select id, regexp_replace(full_name,'A','C') from table").show(), it displays A replaced with C successfully, but only in the output and not in the actual table.
I tried to assign the result to another variable
val vFullName = hiveContext.sql("select id, regexp_replace(full_name,'A','C') from table")
and then
vFullName.show() -- it displays the original values without replacement
How do I get the value replaced in the table from SparkSQL?
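A SELECT, whether run through Hive or Spark SQL, only transforms the rows it returns; it never writes anything back. One hedged way to persist the change, assuming a plain (non-ACID) Hive table and keeping the question's placeholder name table, is to rewrite the data through a staging table (table_replaced is a hypothetical name):

// Compute the replaced values.
val replaced = hiveContext.sql(
  "select id, regexp_replace(full_name, 'A', 'C') as full_name from table")

// Stage the result first; Spark refuses to overwrite a table it is still reading from.
replaced.write.mode("overwrite").saveAsTable("table_replaced")

// Overwrite the original table from the staged copy.
hiveContext.sql("insert overwrite table table select id, full_name from table_replaced")

This rewrites the whole table, so for large tables a transactional table format that supports UPDATE in place may be a better fit.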
I am looking for an efficient method (which I can reuse for similar situations) to drop rows which have been updated.
My table has many columns, but the important ones are:
creation_timestamp, id, last_modified_timestamp
My primary key is the creation_timestamp and the id. However, after an id has been created, it can be modified by other users, which is indicated by the last_modified_timestamp. Each day I need to:
1) Read a daily file and add any new rows (based on creation_timestamp and id)
2) Remove old rows which have a different last_modified_timestamp and replace them with the latest versions.
I typically do most of my operations with Pandas (the Python library) and psycopg2, so I am not extremely familiar with PostgreSQL 9.6, which is the database I am using. My initial approach is to just add the last_modified_timestamp to the primary key and then use a view to SELECT DISTINCT based on the latest changes. However, that feels like 'cheating', and I would be wasting space since I do not need to retain previous versions.
EDIT:
import pandas as pd

# DATABASE_COLUMNS, PRIMARY_KEY, FACT_TABLE and DATABASE are defined elsewhere.
def create_update_query(df, table=FACT_TABLE):
    """Build an INSERT ... ON CONFLICT DO UPDATE (upsert) statement with named placeholders."""
    columns = ', '.join([f'{col}' for col in DATABASE_COLUMNS])
    constraint = ', '.join([f'{col}' for col in PRIMARY_KEY])
    placeholder = ', '.join([f'%({col})s' for col in DATABASE_COLUMNS])
    updates = ', '.join([f'{col} = EXCLUDED.{col}' for col in DATABASE_COLUMNS])
    query = f"""
        INSERT INTO {table} ({columns})
        VALUES ({placeholder})
        ON CONFLICT ({constraint})
        DO UPDATE SET {updates};"""
    # Collapse the whitespace so the statement is a single line.
    query = ' '.join(query.split())
    return query

def load_updates(df, connection=DATABASE):
    conn = connection.get_conn()
    cursor = conn.cursor()
    # Replace NaN/NaT with None so psycopg2 inserts NULL instead of failing on blank values.
    df1 = df.where((pd.notnull(df)), None)
    insert_values = df1.to_dict(orient='records')
    # Upsert one row (dictionary) at a time.
    for row in insert_values:
        cursor.execute(create_update_query(df), row)
    conn.commit()
    cursor.close()
    del cursor
    conn.close()
This appears to work. I was running into some issues, so right now I am looping through each row of the DataFrame as a dictionary and then inserting that row. Also, I had to figure out a way to fill in the NaN columns with None, because I was getting errors with Timestamp dtypes and blank values, etc.
I have the following query in Postgres:
select op.url from identity.legal_entity le
join identity.profile op on le.legal_entity_id =op.legal_entity_id
where op.global_id = '8wyvr9wkd7kpg1n0q4klhkc4g'
which returns 1 row.
Then I try to update the url field with the following:
update identity.profile
set url = 'htpp:sam'
where identity.profile.url in (
select op.url from identity.legal_entity le
join identity.profile op on le.legal_entity_id =op.legal_entity_id
where global_id = '8wyvr9wkd7kpg1n0q4klhkc4g'
);
But the above ends up updating more than 1 row; in fact, it updates all of the rows of the identity.profile table.
I would assume that since the first Postgres statement returns one row, at most one row could be updated, but instead all of the rows are being updated. Why? Please help a newbie fix the above update statement.
Instead of using profile.url to identify the row you want to update, use the primary key. That is what it is there for.
So if the primary key column is called id, the statement could be modified to:
UPDATE identity.profile
SET ...
WHERE identity.profile.id IN (SELECT op.id FROM ...);
But you can do this much more simply in PostgreSQL with an UPDATE ... FROM:
UPDATE identity.profile op
SET url = 'htpp:sam'
FROM identity.legal_entity le
WHERE le.legal_entity_id = op.legal_entity_id
AND le.global_id = '8wyvr9wkd7kpg1n0q4klhkc4g';
I need to update a few thousand rows in my Postgres table using the result of an array_agg and a spatial lookup.
The query needs to take the geometry of the parent table, and return an array of the matching row IDs in the other table. It may return no IDs or potentially 2-3 IDs.
I've tried to use UPDATE ... FROM, but I can't seem to pass the parent table's geom column into the subquery's SELECT. I can't see any way of doing a JOIN between the two tables.
Here is what I currently have:
UPDATE lrc_wales_data.records
SET lrc_array = subquery.lrc_array
FROM (
SELECT array_agg(wales_lrcs.gid) AS lrc_array
FROM layers.wales_lrcs
WHERE st_dwithin(records.geom_poly, wales_lrcs.geom, 0)
) AS subquery
WHERE records.lrc = 'nrw';
The error I get is:
ERROR: invalid reference to FROM-clause entry for table "records"
LINE 7: WHERE st_dwithin(records.geom_poly, wales_lrcs.geom, 0)
Is this even possible?
Many thanks,
Steve
Realised there was no need to use a FROM clause. I could just use a subquery directly in the SET:
UPDATE lrc_wales_data.records
SET lrc_array = (
SELECT array_agg(wales_lrcs.gid) AS lrc
FROM layers.wales_lrcs
WHERE st_dwithin(records.geom_poly, wales_lrcs.geom, 0)
)
WHERE records.lrc = 'nrw';