Insert value into table from another table mybatis xml - postgresql

I'd like to know whether I need to return the value after the first INSERT and pass it to the next INSERT, or whether I can just use this kind of code:
<insert id="CreateUser" parameterType="com.portale.model.UserObject">
INSERT INTO users(user_org, user_login, user_pass, locked)
VALUES (#{organization},#{username},#{password},#{locked})
INSERT INTO users_details(details_id, nome, cognome)
VALUES ((SELECT users.user_id FROM users WHERE user_login = #{username}),#{nome},#{cognome})
</insert>
This code gives me an error:
### Error updating database. Cause: org.postgresql.util.PSQLException: ERROR: syntax error at or near "INSERT" Position: 87
### The error may exist in file [D:\Users\Eclipse and VM\Tomcat\apache-tomcat-8.5.53\wtpwebapps\com.portale\WEB-INF\classes\com\portale\mapper\UserMapper.xml]
### The error may involve com.portale.mapper.UserMapper.CreateUser-Inline
### The error occurred while setting parameters
### SQL: INSERT INTO users(user_org, user_login, user_pass, locked) VALUES (?,?,?,?) INSERT INTO users_details(details_id, nome, cognome) VALUES ((SELECT users.user_id FROM users WHERE user_login = ?),?,?)
### Cause: org.postgresql.util.PSQLException: ERROR: syntax error at or near "INSERT" Position: 87;
bad SQL grammar []; nested exception is org.postgresql.util.PSQLException: ERROR: syntax error at or near "INSERT" Position: 87 !!!! Exception cause !!!! org.postgresql.util.PSQLException: ERROR: syntax error at or near "INSERT" Position: 87
But all parameters are passed in the code.
Can I use a (SELECT ...) inside an (INSERT) in MyBatis? Is this the wrong way to do it, or is it an issue with how the parameters are passed?

The issue was simply a missing ";".
After the first query (INSERT), SQL needs to know that the next INSERT is a new statement, so you have to put a ";" after the first one:
<insert id="CreateUser" parameterType="com.portale.model.UserObject">
INSERT INTO users(user_org, user_login, user_pass, locked)
VALUES (#{organization},#{username},#{password},#{locked});
INSERT INTO users_details(details_id, nome, cognome)
VALUES ((SELECT users.user_id FROM users WHERE user_login = #{username}),#{nome},#{cognome})
</insert>
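This works once the ";" separator is in place. As an alternative, PostgreSQL can also do both inserts in a single statement with a data-modifying CTE, so the generated user_id flows straight from the first INSERT into the second without the extra SELECT on users. This is only a sketch, using the table and column names from the question:
<insert id="CreateUser" parameterType="com.portale.model.UserObject">
-- the first INSERT returns the generated key, the second one consumes it
WITH new_user AS (
    INSERT INTO users(user_org, user_login, user_pass, locked)
    VALUES (#{organization}, #{username}, #{password}, #{locked})
    RETURNING user_id
)
INSERT INTO users_details(details_id, nome, cognome)
SELECT user_id, #{nome}, #{cognome}
FROM new_user
</insert>
This form also avoids relying on the driver accepting several statements in one call.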

Related

How to INSERT OVERWRITE a Hive table without failing with org.apache.spark.sql.AnalysisException: Can only write data to relations with a single path?

I have a Hive table which I want to overwrite using INSERT OVERWRITE. Example query below:
spark.sql("INSERT OVERWRITE TABLE my_database.my_table VALUES (221221, 'DUMMY_Record_Pav', 21233, 'SPACE')")
-- SHOW CREATE TABLE output
CREATE TABLE `my_database.my_table`(
`player_id` string,
`player_type` string,
`position_id` string,
`position_location` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
'path'='hdfs://path/hive/data/my_database.db/my_table')
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
'hdfs://path/hive/data/my_database.db/my_table'
TBLPROPERTIES (
--Redacted
)
That Spark SQL query fails with the error below:
org.apache.spark.sql.AnalysisException: Can only write data to relations with a single path.;
at org.apache.spark.sql.execution.datasources.DataSourceAnalysis$$anonfun$apply$1.applyOrElse(DataSourceStrategy.scala:188)
at org.apache.spark.sql.execution.datasources.DataSourceAnalysis$$anonfun$apply$1.applyOrElse(DataSourceStrategy.scala:134)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)
at org.apache.spark.sql.execution.datasources.DataSourceAnalysis.apply(DataSourceStrategy.scala:134)
at org.apache.spark.sql.execution.datasources.DataSourceAnalysis.apply(DataSourceStrategy.scala:52)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)
at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:121)
at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:106)
at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:78)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:651)
... 49 elided
I am able to overwrite a similar table which doesn't have SERDEPROPERTIES like the below:
WITH SERDEPROPERTIES (
'path'='hdfs://path/hive/data/my_database.db/my_table')
Is there a way I can remove the SERDEPROPERTIES for a table?
I tried setting the path to '' like below, but then Spark SQL failed with an empty-path error.
ALTER TABLE my_database.my_table SET SERDEPROPERTIES('path'='');
Removing the SERDEPROPERTIES would let the Spark SQL query run.
You could create an intermediate table:
CREATE TABLE mytable2 (...) WITH (...);  -- same columns as mytable, defined without the 'path' serde property
INSERT INTO mytable2 SELECT * FROM mytable;
ALTER TABLE mytable RENAME TO mytable1;
ALTER TABLE mytable2 RENAME TO mytable;
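A concrete version of those four steps, assuming Hive's CREATE TABLE ... LIKE gives a usable copy of the column definitions (that is an assumption: check SHOW CREATE TABLE on the new table and confirm the 'path' serde property did not carry over before moving the data):
-- copy the column layout into a scratch table, move the data, then swap the names
CREATE TABLE mytable2 LIKE mytable;
INSERT INTO TABLE mytable2 SELECT * FROM mytable;
ALTER TABLE mytable RENAME TO mytable1;
ALTER TABLE mytable2 RENAME TO mytable;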

sqlalchemy to create temporary table

I created a temporary table with SQLAlchemy (on top of a Postgres database) that is going to be joined with a database table. However, in some cases, when a value is empty (''), Postgres throws the error:
failed to find conversion function from unknown to text
SQLAlchemy assembles everything into the following statement:
[SQL: 'WITH temp_table AS \n(SELECT %(param_1)s AS id, %(param_2)s AS email, %(param_3)s AS phone)\n SELECT campaigns_contact.id, campaigns_contact.email, campaigns_contact.phone \nFROM campaigns_contact JOIN temp_table ON temp_table.id = campaigns_contact.id AND temp_table.email = campaigns_contact.email AND temp_table.phone = campaigns_contact.phone'] [parameters: {'param_1': 83, 'param_2': '', 'param_3': '+1234567890'}]
I assemble the temporary table as follows:
from sqlalchemy import literal, select, union_all  # imports assumed by this snippet

stmts = []
for row in import_data:
    row_values = [literal(row[value]).label(value) for value in values]
    stmts.append(select(row_values))

subquery = union_all(*stmts)
subquery = subquery.cte(name="temp_table")
The problem seems to be this part:
...%(param_2)s AS email...
which, after param_2 is substituted, becomes
...'' AS email...
and that is what causes the error mentioned above.
One way to solve the issue is to perform a cast:
...''::text AS email...
However, I don't know how to express the ::text cast with SQLAlchemy.
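For reference, this is roughly the SQL the CTE would need to produce so that Postgres can type the bare literals; it is only a sketch built from the logged statement above, with ::text casts on the string columns:
-- the ::text casts give the otherwise untyped literals a concrete type
WITH temp_table AS (
    SELECT 83 AS id, ''::text AS email, '+1234567890'::text AS phone
)
SELECT campaigns_contact.id, campaigns_contact.email, campaigns_contact.phone
FROM campaigns_contact
JOIN temp_table
  ON temp_table.id = campaigns_contact.id
 AND temp_table.email = campaigns_contact.email
 AND temp_table.phone = campaigns_contact.phone;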

updating multiple rows of 2 columns with different values (db2 sql)

I have a table in which I need to change the values of a couple of columns in multiple rows.
The code I have tried, containing the updated values, is (without success):
UPDATE <table_name>
SET (IDENTIFIER_1, IDENTIFIER_2)
VALUES (1635, 1755),
(2024, 2199),
(1868, 1692),
(3577, 4825)
WHERE ID
IN ('1',
'23',
'54',
'21');
To be honest, I am not sure if this is even supported in DB2 SQL. The error is:
[Error Code: -104, SQL State: 42601] DB2 SQL Error: SQLCODE=-104, SQLSTATE=42601, SQLERRMC=update *
I should also mention that I am a DB2 newbie.
You can always use MERGE:
MERGE INTO TABLE1
USING (
    VALUES (1, 1635, 1755),
           (23, 2024, 2199),
           (54, 1868, 1692)
) dummytable (ID_T, INF1, INF2)
ON table1.id_table = dummytable.id_t
WHEN MATCHED THEN
    UPDATE SET TABLE1.IDENTIFIER_1 = dummytable.INF1,
               TABLE1.IDENTIFIER_2 = dummytable.INF2
ELSE IGNORE
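If you prefer a plain UPDATE, DB2 also accepts a row-value assignment fed from an inline VALUES table. A hedged sketch against the column names in the question (ID, IDENTIFIER_1, IDENTIFIER_2); adjust the names and the placeholder <table_name> to your table:
-- each targeted row picks its new pair of values from the inline VALUES list
UPDATE <table_name> AS t
SET (IDENTIFIER_1, IDENTIFIER_2) =
    (SELECT v.INF1, v.INF2
     FROM (VALUES (1, 1635, 1755),
                  (23, 2024, 2199),
                  (54, 1868, 1692),
                  (21, 3577, 4825)) AS v (ID_T, INF1, INF2)
     WHERE v.ID_T = t.ID)
WHERE t.ID IN (1, 23, 54, 21);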

column does not exist when it does

Using pgAdmin, the queries:
SELECT * FROM analyzed_users;
SELECT * FROM time_table;
run successfully. But the query:
SELECT * FROM analyzed_users, time_table WHERE analyzed_users.id = time_table.userId
returns error:
ERROR: column analyzed_users.id does not exist
LINE 2: SELECT * FROM analyzed_users, time_table WHERE analyzed_user...
********** Error **********
ERROR: column analyzed_users.id does not exist
SQL state: 42703
Character: 49
I've been struggling with this for a while and I have no idea why it doesn't work.
The problem is in the WHERE clause in the second query:
WHERE analyzed_users.id = time_table.userId
The error is saying that analyzed_users.id doesn't exist in that table.
Check for and use the actual name of the column in analyzed_users that you want to compare to time_table.userId in the second query. That should fix the problem.
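One way to check the actual column names (keeping in mind that PostgreSQL folds unquoted identifiers to lowercase, so a column created with quotes such as "userId" must be referenced with the same quoting) is to ask the catalog; a small example:
-- list the column names PostgreSQL actually stores for the table
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'analyzed_users';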

JPA / EclipseLink - create script source with one SQL statement taking multiple lines

I want to let the persistence provider (EclipseLink 2.5.0) automatically create the tables in the already existing database by using the persistence unit property "javax.persistence.schema-generation.create-script-source" and a valid SQL DDL script.
persistence.xml:
<property name="javax.persistence.schema-generation.create-script-source" value="data/ddl.sql"/>
ddl.sql:
USE myDatabase;
CREATE TABLE MyTable (
id INTEGER NOT NULL AUTO_INCREMENT,
myColumn VARCHAR(255) NOT NULL,
PRIMARY KEY (id)
) ENGINE = InnoDB DEFAULT CHARACTER SET = utf8 DEFAULT COLLATE = utf8_bin;
But I got the following error:
[EL Warning]: 2014-02-12 13:31:44.778--ServerSession(768298666)--Thread(Thread[main,5,main])--Local Exception Stack:
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.0.v20130507-3faac2b): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1
Error Code: 1064
Call: CREATE TABLE MyTable (
Query: DataModifyQuery(sql="CREATE TABLE MyTable (")
at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:331)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:895)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:957)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:630)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:558)
at org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:1995)
at org.eclipse.persistence.sessions.server.ServerSession.executeCall(ServerSession.java:570)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:242)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:228)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeNoSelectCall(DatasourceCallQueryMechanism.java:271)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeNoSelect(DatasourceCallQueryMechanism.java:251)
at org.eclipse.persistence.queries.DataModifyQuery.executeDatabaseQuery(DataModifyQuery.java:85)
at org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:899)
at org.eclipse.persistence.internal.sessions.AbstractSession.internalExecuteQuery(AbstractSession.java:3207)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1797)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1779)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1730)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeNonSelectingCall(AbstractSession.java:1499)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeNonSelectingSQL(AbstractSession.java:1517)
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.writeSourceScriptToDatabase(EntityManagerSetupImpl.java:4065)
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.writeDDL(EntityManagerSetupImpl.java:3910)
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.writeDDL(EntityManagerSetupImpl.java:3783)
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:724)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.getAbstractSession(EntityManagerFactoryDelegate.java:204)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.getDatabaseSession(EntityManagerFactoryDelegate.java:182)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.getDatabaseSession(EntityManagerFactoryImpl.java:527)
at org.eclipse.persistence.jpa.PersistenceProvider.createEntityManagerFactoryImpl(PersistenceProvider.java:140)
at org.eclipse.persistence.jpa.PersistenceProvider.createEntityManagerFactory(PersistenceProvider.java:177)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:79)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:54)
at nl.tent.competent.data.access.Main.main(Main.java:22)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.Util.getInstance(Util.java:386)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1054)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4187)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4119)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2570)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2731)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2815)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2458)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2375)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2359)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:885)
... 29 more
It seems that the carriage return / line feed (newline) is the problem: if I change the SQL DDL script so that each SQL statement takes only one line, everything works fine.
adjusted ddl.sql:
USE myDatabase;
CREATE TABLE MyTable (id INTEGER NOT NULL AUTO_INCREMENT, myColumn VARCHAR(255) NOT NULL, PRIMARY KEY (id)) ENGINE = InnoDB DEFAULT CHARACTER SET = utf8 DEFAULT COLLATE = utf8_bin;
But I don't want to reformat my SQL DDL script and lose its readability. Please help!
If someone still faces this problem, it can be solved by adding the following property (found here):
<property name="hibernate.hbm2ddl.import_files_sql_extractor" value="org.hibernate.tool.hbm2ddl.MultipleLinesSqlCommandExtractor" />