I have created my schema in Redshift and want to use Data Pipeline to populate my table from a CSV file in S3.
Under Parameters, for the field myRedshiftTableName:
If I only use my table name without specifying the schema, the error is:
output table named 'public.myTable' doesn't exist and no
createTableSql was provided
If I also specify the schema, the error is:
output table named 'public.mySchema.myTable' doesn't exist and no
createTableSql was provided
If I drop the table and specify the schema in the myRedshiftCreateTableSql field, the error is:
ERROR: schema "mySchema" does not exist
How can I use my own schema?
Go to Edit Pipeline > DataNodes, click "Add an optional field", and specify your schema name there; as the errors above show, a bare table name is always resolved against public.
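If you instead let the pipeline create the table, you can qualify the schema directly in myRedshiftCreateTableSql, provided the schema already exists on the cluster. A minimal sketch (the column list is made up; substitute your own DDL):

-- run once on the cluster, before the pipeline runs:
CREATE SCHEMA IF NOT EXISTS mySchema;
-- myRedshiftCreateTableSql, with a schema-qualified table name:
CREATE TABLE IF NOT EXISTS mySchema.myTable (
    id   INTEGER,
    name VARCHAR(100)
);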
In Azure Data Factory, I'm doing a pretty vanilla 'Copy Data' operation. One dataset to another.
Can I view the query being used to perform the copy operation? Apparently, it's a syntax error, but I've only used drag-and-drop menus. Here's the error:
ErrorCode=SqlOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=A
database operation failed. Please search error to get more
details.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Data.SqlClient.SqlException,Message=Incorrect
syntax near the keyword 'from'.,Source=.Net SqlClient Data
Provider,SqlErrorNumber=156,Class=15,ErrorCode=-2146232060,State=1,Errors=[{Class=15,Number=156,State=1,Message=Incorrect
syntax near the keyword 'from'.,},],'
Extra context
1. Clear the schema and import it again.
2. This mostly happens when the table schema has changed after the pipeline and datasets were created; verify that first.
3. Refresh the schema and datasets whenever the SQL table schema changes.
4. Wrap any table or view name used in the query in square brackets, e.g. [dbo].[persons] (see the sketch after this list).
5. In the datasets, select the table name.
6. Try to publish before testing.
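For reference, a fully bracketed query might look like this (table and column names here are illustrative, not taken from the pipeline above):

-- brackets make schema, table, and column names unambiguous,
-- even when they collide with reserved keywords:
SELECT [PersonID], [LastName], [FirstName]
FROM [dbo].[persons]
WHERE [LastName] = 'Smith';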
I followed the same scenario and reproduced it; I didn't get any error. The above error mainly happens because of schema issues.
Source dataset:
In the source dataset, I manually connected to a SQL database with a sample SQL table.
Sink dataset:
In the sink dataset, I added another SQL database with auto-create table, write behavior: Upsert, key column: PersonID.
Before execution there was no table in the SQL database; after the execution succeeded, the auto-created table with the copied rows appeared in the Azure SQL database.
I am running an existing Azure Data Factory pipeline that inserts into a SQL table, where the sink is a SQL Server stored procedure.
I supplied the table type and the table type parameter name, which map to my table.
I am getting the error: Failure happened on 'Sink' side.
Stored procedure:
CREATE PROCEDURE [ods].[Insert_EJ_Bing]
    @InputTable [ods].[EJ_Bing_Type] READONLY,
    @FileName varchar(1000)
AS
BEGIN
    -- #workingtable is a local temp table assumed to be created earlier in the session
    insert into #workingtable
    (
        [ROWHASH],
        [Date]
    )
    select [ROWHASH],
           [Date]
    from @InputTable
END
Error message:
Failure happened on 'Sink' side.
ErrorCode=InvalidParameter,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The value of the property '' is invalid:
'Invalid 3 part name format for TypeName.'.,Source=,''Type=System.ArgumentException,Message=Invalid 3 part name format for TypeName.,Source=System.Data
Can anyone please tell me where I am going wrong?
There is nothing wrong with your approach or code. You are getting this error because of the way the Table type value is specified in the sink.
I got the same error as you with that table type value.
Change the Table type from [dbo.][tabletype] to [dbo].[tabletype]
After that change, the copy activity succeeds.
Rather than typing the stored procedure name and table type name by hand, you can pick them from the dropdown lists in the sink settings, which avoids this kind of typo.
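For reference, the table type itself is a two-part name, [schema].[type], with the dot between the bracketed parts. A sketch of what its definition could look like (the column types are assumptions; only [ROWHASH] and [Date] appear in the procedure above):

CREATE TYPE [ods].[EJ_Bing_Type] AS TABLE
(
    [ROWHASH] varbinary(32),  -- hypothetical type
    [Date]    date
);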
I'm trying to get a basic plain-SQL example working in Slick 3 on Postgres, but with a custom DB schema, say local instead of the default public one. I'm having a hard time inserting a row, as executing the following
sqlu"INSERT INTO schedule(user_id, product_code, run_at) VALUES ($userId, $code, $nextRun)"
says
org.postgresql.util.PSQLException: ERROR: relation "schedule" does not exist
The table is in place, because when I prefix schedule with local. in the insert statement, it works as expected. How can I get the correct schema into this query?
I'm using it as part of an akka-projection handler, and all the projection internals, like maintaining offsets, work as expected on the local schema.
I cannot simply pass the schema as a variable, because $ interpolation turns it into a bind parameter and the statement fails while resolving parameters:
sqlu"INSERT INTO ${schema}.schedule(user_id, product_code, run_at) VALUES ($userId, $code, $nextRun)"
You can splice the schema name in with #${value}, which inserts the value literally instead of binding it as a parameter:
sqlu"INSERT INTO #${schema}.schedule(user_id, product_code, run_at) VALUES ($userId, $code, $nextRun)"
Note that #$ performs raw string splicing with no escaping, so only use it with values you control.
I am new to DB2.
I am not able to get data from a table without using the schema name. If I use the schema name with the table name, I am able to fetch data.
Example:
SELECT * FROM TABLE_NAME;
gives me an error, while
SELECT * FROM SCHEMA_NAME.TABLE_NAME;
fetches results.
What do I have to set up so that I don't always have to use the schema name?
By default, your username is used as the schema name for unqualified object names. You can see the current schema with, e.g., VALUES CURRENT SCHEMA. You can change the current schema for your current session with SET SCHEMA new_schema_name, or via, e.g., a JDBC connection parameter. Most query tools also have a place to specify or change the current schema.
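For example, with the IBM Data Server JDBC driver you can set the schema in the connection URL (host, port, database, and schema here are placeholders):

jdbc:db2://dbhost:50000/MYDB:currentSchema=MYSCHEMA;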
See the manual page for SET SCHEMA https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.sql.ref.doc/doc/r0001016.html
The full rules for the qualification of unqualified objects are here https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.sql.ref.doc/doc/r0000720.html#r0000720__unq-alias
E.g.
Unqualified alias, index, package, sequence, table, trigger, and view names are implicitly qualified by the default schema.
However, you can create a public alias for a table, module, or sequence if you wish to be able to reference it regardless of your CURRENT SCHEMA value; see the sketch after the link below.
https://www.ibm.com/support/knowledgecenter/SSEPGG_11.5.0/com.ibm.db2.luw.sql.ref.doc/doc/r0000910.html
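A minimal sketch (names are placeholders; see the CREATE ALIAS page linked above for the exact syntax):

CREATE PUBLIC ALIAS TABLE_NAME FOR TABLE SCHEMA_NAME.TABLE_NAME;
-- now this resolves for any user whose CURRENT SCHEMA has no table of that name:
SELECT * FROM TABLE_NAME;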
(P.S. all the above assumes you are using Db2 LUW)
Try using SET SCHEMA to set the default schema to be used in the session:
SET SCHEMA SCHEMA_NAME;
SELECT * FROM TABLE_NAME;
When using DBeaver: right-click on the connection > Connection Settings > Initialization, and select your default database and default schema.
After that, open your SQL script and select the active database.
When using the spring-data QueryDslJdbcTemplate to query, can I specify a table schema name different from the JDBC username in the datasource? Thanks.
(The qBean generated by querydsl-maven-plugin uses the correct schema name. However, when I query, the template always uses the JDBC username from the datasource, and the generated query itself has no schema-name prefix, resulting in java.sql.SQLException: ORA-00942: table or view does not exist.)
By default the schema is not printed, but you can enable it via
OracleTemplates.builder().printSchema().build()
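The effect on the generated SQL is roughly the following (table and column names are illustrative):

-- without printSchema, Oracle resolves the unqualified name against the
-- logged-in user's schema, hence ORA-00942:
select t.ID from SCHEDULE t
-- with printSchema, the owning schema is emitted:
select t.ID from MYSCHEMA.SCHEDULE t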