Using the Mercari Template for Spanner to BigQuery - https://github.com/mercari/DataflowTemplates
When building the Dataflow job through the Google Cloud Console, it works.
But when executing the Dataflow command through the CLI, it fails with an "unrecognized arguments" error.
DATAFLOW COMMAND:
gcloud dataflow jobs run mercari_CLI \
  --gcs-location gs://mystorage/templates/SpannerToBigQuery \
  --region us-central1 \
  --staging-location gs://mystorage/temp \
  --parameters projectId=myProject,instanceId=myspanner,databaseId=mydpspanner,query=SELECT *, current_timestamp AS LoadDttm FROM source_table,output=raw_data_zone.testtable
I tried the following variations, but got the same error:
query="SELECT *, current_timestamp AS LoadDttm FROM source_table"
query="""SELECT *, current_timestamp AS LoadDttm FROM source_table"""
query='SELECT *, current_timestamp AS LoadDttm FROM source_table'
query=`SELECT *, current_timestamp AS LoadDttm FROM source_table`
Does anyone know an approach for this?
I suspect you are looking for an escape sequence in the gcloud command (since your job works when launched from the Cloud Console).
I would recommend reading "gcloud topic escaping" or the following link:
https://cloud.google.com/sdk/gcloud/reference/topic/escaping
There are examples to guide you in specifying a delimiter that is not present in your query.
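For example, gcloud lets you declare an alternate delimiter by prefixing the flag value with ^DELIM^. A sketch of the command from the question (same hypothetical bucket, instance, and table names), using ; as the delimiter so the commas inside the query are no longer treated as parameter separators:

```shell
# ^;^ tells gcloud to split the key=value pairs on ';' instead of ','.
# The whole value is quoted so the shell passes it through unchanged.
gcloud dataflow jobs run mercari_CLI \
  --gcs-location gs://mystorage/templates/SpannerToBigQuery \
  --region us-central1 \
  --staging-location gs://mystorage/temp \
  --parameters '^;^projectId=myProject;instanceId=myspanner;databaseId=mydpspanner;query=SELECT *, current_timestamp AS LoadDttm FROM source_table;output=raw_data_zone.testtable'
```

Any character not appearing in the query (or in the other parameter values) works as the delimiter.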
I am trying to use io.confluent.connect.jdbc.JdbcSourceConnector in bulk mode with the following query:
query = select name, cast(ID as NUMBER(20,2)), status from table_name
Is this possible? If so, am I missing something? I am getting:
exception (org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.avro.SchemaParseException: Illegal character in: CAST(IDASNUMBER(20,2))
Use an alias for the second column. The alias becomes the name of the target Avro field, so it must be a legal Avro name.
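For example, keeping the hypothetical table and column names from the question, aliasing the cast expression gives Avro a legal field name:

```sql
-- The alias ID_SCALED (an illustrative name) becomes the Avro field name;
-- unlike the raw expression, it contains no characters such as '(' or ','.
select name, cast(ID as NUMBER(20,2)) as ID_SCALED, status
from table_name
```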
I created a DB2 task to run my stored procedure automatically at a specific time, I created the task using the ADMIN_TASK_ADD procedure:
CALL SYSPROC.ADMIN_TASK_ADD ( 'WR_AM_ADT_AUTO_CNRRM_SCHDLR',
NULL,
NULL,
NULL,
'05 16 * * *',
'ASPECT',
'WR_AM_ADT_AUTO_CNRRM',
'81930',NULL,NULL);
COMMIT;
I want to run my scheduled task every day at 04:05 PM, but it doesn't run and reports the status
NOTRUN, SQLCODE -104
Can anyone please tell me what I am doing wrong?
I also checked my task in the task list using the following query:
SELECT * from SYSTOOLS.ADMIN_TASK_LIST
I am using DB2 9.7 version on Windows.
The status of the task NOTRUN means an error prevented the scheduler from calling the task's procedure. The SQLCODE indicates the type of error.
I suggest the following:
Confirm the scheduler is enabled; db2set should report:
DB2_ATS_ENABLE=YES
ATS depends on the SYSTOOLSPACE tablespace to store historical data and configuration information. You can check if the tablespace exists in your system with the following query.
db2 select TBSPACE from SYSCAT.TABLESPACES where TBSPACE = 'SYSTOOLSPACE'
You can also test the stored procedure in isolation:
CALL WR_AM_ADT_AUTO_CNRRM()
Then run your task in the scheduler again.
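To see why the scheduler reported NOTRUN, you can also query the status view, which records the SQLCODE of each attempted run (a sketch, assuming the task name from the question):

```sql
-- ADMIN_TASK_STATUS holds one row per task invocation, including the
-- SQLCODE/SQLSTATE raised when the call failed.
SELECT NAME, STATUS, SQLCODE, SQLSTATE, SQLERRMC
FROM SYSTOOLS.ADMIN_TASK_STATUS
WHERE NAME = 'WR_AM_ADT_AUTO_CNRRM_SCHDLR';
```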
I am running a SQL query and trying to break the results down into chunks.
select task_id, owner_cnum
from (select row_number() over(order by owner_cnum, task_id)
as this_row, wdpt.vtasks.*
from wdpt.vtasks)
where this_row between 1 and 5;
That SQL works with DB2 10.5 on Windows and Linux, but fails on DB2 10.1 on z/OS with the following error messages:
When I run the SQL from IBM DataStudio 4.1.1 running on my Windows machine connected to the database, I am getting:
ILLEGAL SYMBOL "<EMPTY>". SOME SYMBOLS THAT MIGHT BE LEGAL ARE: CORRELATION NAME. SQLCODE=-104, SQLSTATE=42601, DRIVER=4.18.60
When I run my Java program on a zLinux system connecting to the database, I get the following error:
DB2 SQL Error: SQLCODE=-104, SQLSTATE=42601, SQLERRMC=<EMPTY>;CORRELATION NAME, DRIVER=3.65.97
Any ideas what I'm doing wrong?
In some DB2 versions you must use a correlation name for a subselect, as suggested by the error message:
select FOO from (
select FOO from BAR
) as T
Here "T" is the correlation name.
I have Postgresql-9.2.10 on CentOS.
I experience the following error:
DETAIL: Multiple failures --- write error might be permanent.
ERROR: could not open file "pg_tblspc/143862353/PG_9.2_201204301/16439/199534370_fsm": No such file or directory
This has happened since I stopped the PostgreSQL service, ran pg_resetxlog, and started the service again. The logs in pg_log look good, and the service is listed without any problem.
DML works fine, but DDL statements such as CREATE TABLE do not: either an error message is thrown, or nothing shows up in the logs in pg_log.
If I try to create a table, there is no reaction, and it looks like the statement is blocked by a lock.
So I tried the following query to look for locks:
SELECT blocked_locks.pid AS blocked_pid,
blocked_activity.usename AS blocked_user,
blocking_locks.pid AS blocking_pid,
blocking_activity.usename AS blocking_user,
blocked_activity.query AS blocked_statement,
blocking_activity.query AS blocking_statement
FROM pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity ON blocked_activity.pid = blocked_locks.pid
JOIN pg_catalog.pg_locks blocking_locks
ON blocking_locks.locktype = blocked_locks.locktype
AND blocking_locks.DATABASE IS NOT DISTINCT FROM blocked_locks.DATABASE
AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
AND blocking_locks.pid != blocked_locks.pid
JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.granted;
You probably corrupted the PostgreSQL cluster with pg_resetxlog. How exactly did you run the command?
I would restore from the last good backup.