Gcloud crashed (ValueError): Invalid header value - gcloud

I used the following command to create a table in the Cloud Spanner database named "messages" in the "guestbook" Spanner instance:
gcloud spanner databases ddl update messages \
--instance=guestbook --ddl="$(<~/guestbook-service/db/spanner.ddl)"
spanner.ddl contains the following:
CREATE TABLE guestbook_message (
id STRING(36) NOT NULL,
name STRING(255) NOT NULL,
image_uri STRING(255),
message STRING(255)
) PRIMARY KEY (id);
But I get the following error.
ERROR: gcloud crashed (ValueError): Invalid header value
b'/usr/bin/../lib/google-cloud-sdk/lib/gcloud.py spanner databases ddl
update messages --instance=guestbook --ddl=CREATE TABLE
guestbook_message (\n id STRING(36) NOT NULL,\n name STRING(255)
NOT NULL,\n image_uri STRING(255),\n message STRING(255)\n)
PRIMARY KEY (id);'
How can I fix this?

This gcloud command does not accept DDL statements containing newline characters (\n).
It is enough to change spanner.ddl to:
CREATE TABLE guestbook_message (id STRING(36) NOT NULL,name STRING(255) NOT NULL,image_uri STRING(255),message STRING(255)) PRIMARY KEY (id);
If everything is on one line, without the \n characters, it works fine.
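If you would rather keep the DDL file formatted across multiple lines, a minimal workaround sketch is to strip the newlines in the shell before handing the statement to gcloud (same file path as above; tr is a standard Unix utility):
gcloud spanner databases ddl update messages \
--instance=guestbook --ddl="$(tr '\n' ' ' < ~/guestbook-service/db/spanner.ddl)"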

REVINFO table is missing the sequence "revinfo_seq"

I am migrating to Spring Boot 3.0.1 and updated the "hibernate-envers" version to "6.1.6.Final". My DB is PostgreSQL 13.6.
Hibernate is configured to create the DB schema:
spring.jpa.hibernate.ddl-auto:create
After starting the application I get the following error:
pim 2022-12-27 12:00:13,715 WARN C#c7b942ec-33b4-4749-b113-22cbb2946a8d [http-nio-9637-exec-1] SqlExceptionHelper/133 - SQL Error: 0, SQLState: 42P01
pim 2022-12-27 12:00:13,715 ERROR C#c7b942ec-33b4-4749-b113-22cbb2946a8d [http-nio-9637-exec-1] SqlExceptionHelper/138 - ERROR: relation "revinfo_seq" does not exist
Position: 16
The revinfo table looks like this:
create table revinfo
(
revision bigint not null
primary key,
client_id varchar(255),
correlation_id varchar(255),
origin varchar(255),
request_id varchar(255),
revision_timestamp bigint not null,
timestamp_utc timestamp with time zone,
user_name varchar(255)
);
The sequence "revinfo_seq" does not exist, but in the old DB structure with envers
5.6.8.Final
and SpringBoot 2.6.6 it didn't exist either without any problems.
What am I missing?
I tried to toggle the parameter
org.hibernate.envers.use_revision_entity_with_native_id
but it did not help.
You can solve it with this property:
spring.jpa.properties.hibernate.id.db_structure_naming_strategy: legacy
Tested with Spring Boot 3.0.1
Reason:
Hibernate 6 changed the default sequence naming strategy, so it now looks for a sequence named after the entity with the suffix "_seq" (here, revinfo_seq).
You can find a really detailed explanation here: https://thorben-janssen.com/sequence-naming-strategies-in-hibernate-6/
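If you would rather keep Hibernate 6's new naming strategy, an alternative sketch is to create the sequence it is looking for yourself; the increment of 50 assumes Hibernate's default allocation size, so adjust it if your mapping differs:
CREATE SEQUENCE revinfo_seq START WITH 1 INCREMENT BY 50;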

insert null value for auto-incremented primary key

I'm using AWS Glue and I'm trying to insert (into a Postgres database) a row with a null value in the primary key. I get this error:
An error occurred while calling o204.pyWriteDynamicFrame. ERROR: null value in column "abc" violates not-null constraint.
The issue is that the primary key has a sequence default,
nextval('abc'::regclass).
Is there a parameter in Glue to avoid this error? Thanks
Configuration:
AWS GLUE,
Python job,
Postgres databases
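I'm not aware of a single Glue parameter for this, but one workaround sketch is to drop the key column from the DynamicFrame before writing, so Postgres falls back to the sequence default. Here dyf, the connection name, and the table/database names are illustrative, not taken from this job:
# Drop the auto-incremented column so Postgres applies nextval('abc'::regclass) itself.
dyf_no_pk = dyf.drop_fields(["abc"])
# Write the remaining columns; Postgres fills in the primary key.
glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=dyf_no_pk,
    catalog_connection="my-postgres-connection",
    connection_options={"dbtable": "my_table", "database": "mydb"},
)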

Why is liquibase deleting databasechangelog rows and trying to create a renamed database table?

I am using postgres 10.5 and liquibase 3.6.2 on a Mac.
I nuke & re-create my database, run liquibase update, and it works.
But a second liquibase update fails with an exception that the pkey already exists.
After the first liquibase update, the databasechangelog table contains 97 entries. After the second, it contains 10, and the time and deployment ids for those are different than they were after the first update!
Table foo was created in an early change.
Later it was renamed to bar, but the pkey is still foo_pkey.
liquibase update should not be trying to re-create foo, but it does, and it fails because foo_pkey already exists.
A) In general, how can I get liquibase to output more info about what it's doing? I tried both of the commands:
liquibase --logLevel=debug --logFile=`pwd`/foo.log update
liquibase --logLevel debug --logFile `pwd`/foo.log update
Both seem to behave the same: foo.log isn't created and there's no additional output in the terminal.
B) How can I stop liquibase from trying to re-create this table and from nuking my databasechangelog?
I tried to make a small example that fails, but this seems to work... Others here are using it with Postgres 9.5.10 with no problems...
All I see in the terminal is:
Starting Liquibase at Wed, 14 Nov 2018 13:06:44 PST (version 3.6.2 built at 2018-07-03 11:28:09)
Unexpected error running Liquibase: ERROR: relation "cant_change_pkey" already exists [Failed SQL: CREATE TABLE nuss.cant_change (message_id UUID NOT NULL, origin VARCHAR(4), type VARCHAR(12) NOT NULL, CONSTRAINT CANT_CHANGE_PKEY PRIMARY KEY (message_id), UNIQUE (message_id))]
liquibase.exception.MigrationFailedException: Migration failed for change set db/changelog/changelog-new1.xml::first-one::rstrauss:
Reason: liquibase.exception.DatabaseException: ERROR: relation "cant_change_pkey" already exists [Failed SQL: CREATE TABLE nuss.cant_change (message_id UUID NOT NULL, origin VARCHAR(4), type VARCHAR(12) NOT NULL, CONSTRAINT CANT_CHANGE_PKEY PRIMARY KEY (message_id), UNIQUE (message_id))]
at liquibase.changelog.ChangeSet.execute(ChangeSet.java:637)
at liquibase.changelog.visitor.UpdateVisitor.visit(UpdateVisitor.java:53)
at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:78)
at liquibase.Liquibase.update(Liquibase.java:202)
at liquibase.Liquibase.update(Liquibase.java:179)
at liquibase.integration.commandline.Main.doMigration(Main.java:1205)
at liquibase.integration.commandline.Main.run(Main.java:191)
at liquibase.integration.commandline.Main.main(Main.java:129)
Caused by: liquibase.exception.DatabaseException: ERROR: relation "cant_change_pkey" already exists [Failed SQL: CREATE TABLE nuss.cant_change (message_id UUID NOT NULL, origin VARCHAR(4), type VARCHAR(12) NOT NULL, CONSTRAINT CANT_CHANGE_PKEY PRIMARY KEY (message_id), UNIQUE (message_id))]
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:356)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:57)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:125)
at liquibase.database.AbstractJdbcDatabase.execute(AbstractJdbcDatabase.java:1229)
at liquibase.database.AbstractJdbcDatabase.executeStatements(AbstractJdbcDatabase.java:1211)
at liquibase.changelog.ChangeSet.execute(ChangeSet.java:600)
... 7 common frames omitted
Caused by: org.postgresql.util.PSQLException: ERROR: relation "cant_change_pkey" already exists
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2476)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2189)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:300)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:428)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:354)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:301)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:287)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:264)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:260)
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:352)
... 12 common frames omitted
For more information, please use the --logLevel flag
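For (B), one common guard, sketched here assuming an XML changelog and that marking the changeSet as already run is acceptable, is a precondition that skips the create when the table exists (changeSet id and author taken from the error above; column list abbreviated):
<changeSet id="first-one" author="rstrauss">
    <preConditions onFail="MARK_RAN">
        <!-- If the table already exists, mark the changeSet as ran instead of failing. -->
        <not>
            <tableExists schemaName="nuss" tableName="cant_change"/>
        </not>
    </preConditions>
    <createTable schemaName="nuss" tableName="cant_change">
        <column name="message_id" type="UUID">
            <constraints primaryKey="true" nullable="false"/>
        </column>
    </createTable>
</changeSet>
Note that this only works around the failing changeSet; it does not explain why the databasechangelog rows disappeared between runs.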

Confluent Kafka Sink Connector is not loading data to Postgres table

I am trying to load data into Postgres table(s) through the Kafka JDBC Sink connector, but I am getting the following error:
Caused by: org.apache.kafka.connect.errors.ConnectException: Cannot ALTER to add missing field SinkRecordField{schema=Schema{STRING}, name='A_ABBREV', isPrimaryKey=false}, as it is not optional and does not have a default value
The table in the Postgres DB already has the field A_ABBREV, but I am not sure why I am getting a missing-field error.
Did anyone face a similar kind of issue?
Below is my Sink Connector Configuration:
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
table.name.format=AGENCY
connection.password=passcode
topics=AGENCIES
tasks.max=1
batch.size=10000
fields.whitelist=A_ID, A_NAME, A_ABBREV
connection.user=pmmdevuser
name=partner5-jdbcSinkConnector
connection.url=jdbc:postgresql://aws-db.sdfdgfdrwwisc.us-east-1.rds.amazonaws.com:3306/pmmdevdb?currentSchema=ams
insert.mode=upsert
pk.mode=record_value
pk.fields=A_ID
auto.create=false
I am using Liquibase scripts to create the tables; below is the create statement from the Postgres DB for the table created through those scripts:
CREATE TABLE gds.agency
(
a_id integer NOT NULL,
a_name character varying(100) COLLATE pg_catalog."default" NOT NULL,
a_abbrev character varying(8) COLLATE pg_catalog."default" NOT NULL,
source character varying(255) COLLATE pg_catalog."default" NOT NULL DEFAULT 'AMS'::character varying,
CONSTRAINT pk_agency PRIMARY KEY (a_id),
CONSTRAINT a_abbrev_uk1 UNIQUE (a_abbrev)
)"
From my experience, this means that the field definition for the sink does not match the field definition for the source table/database. Make sure the field definitions match. Inspect the individual record the sink connector is trying to write to your target DB; you should be able to see the insert statement in the stack trace in debug mode. Take that query and run it manually to get a clearer idea of the error from the database.
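As a starting point, you can pull the live column definitions straight out of Postgres and compare them with the record schema; this query uses the gds.agency table from the DDL above:
SELECT column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_schema = 'gds' AND table_name = 'agency';
Keep in mind that Postgres folds unquoted identifiers to lower case (a_abbrev), so a case mismatch between the record field A_ABBREV and the table column is one thing to rule out.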

postgres schema not found when created with upper case

I am trying to create an app using OpenJPA & Postgres 9.2.x. Currently I am facing an issue at the DB level.
1) Created a schema, say PCM:
CREATE SCHEMA PCM
2) Tried to create a table:
CREATE TABLE PCM.USER_PROFILE (
USER_PROFILE_ID BIGINT NOT NULL,
USER_FNAME VARCHAR(60),
USER_LNAME VARCHAR(60)
};
Got error "pcm" schema does not exists
Then tried creating table :-
CREATE TABLE "PCM.USER_PROFILE" (
USER_PROFILE_ID BIGINT NOT NULL,
USER_FNAME VARCHAR(60),
USER_LNAME VARCHAR(60)
};
The table is created successfully.
If I list the schemas:
[postgres#DBMigration ~] $ psql -c "\dn"
List of schemas
Name | Owner
--------+----------
pcm | dbadmin
public | postgres
B) In persistence.xml, I have entered the configuration:
<property name="openjpa.jdbc.Schema" value="PCM" />
Now I am getting an issue in OpenJPA stating the schema is not present.
I tried referring here, but no success.
I have tried entering the schema name in the configuration as '\"PCM\"', "\"PCM\"", '\"pcm\"', "\"pcm\"".
Not sure where I am going wrong.
I need suggestions/help:
1) What is the proper, standard way to create a schema in Postgres & refer to it while creating a table?
2) Is my entry in persistence.xml correct? If so, why is it not identifying the schema?
Unquoted object names in Postgres are implicitly folded to lower case.
When you create a table the way you did below, with quotation marks around "PCM.USER_PROFILE", the table is created in the default public schema with the name "PCM.USER_PROFILE".
CREATE TABLE "PCM.USER_PROFILE" (
USER_PROFILE_ID BIGINT NOT NULL,
USER_FNAME VARCHAR(60),
USER_LNAME VARCHAR(60)
);
However, the create statement mentioned in your post is completely valid (with the exception that } must be changed to ) at the end of the command):
CREATE TABLE PCM.USER_PROFILE (
USER_PROFILE_ID BIGINT NOT NULL,
USER_FNAME VARCHAR(60),
USER_LNAME VARCHAR(60)
);
It creates the user_profile table under the pcm schema successfully.
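A minimal demonstration of the folding behavior (scratch names, safe to run in a test database):
CREATE SCHEMA PCM;                           -- folded to lower case: creates schema pcm
CREATE TABLE PCM.USER_PROFILE (id BIGINT);   -- creates pcm.user_profile
CREATE TABLE "PCM.USER_PROFILE" (id BIGINT); -- quoted: a single table named "PCM.USER_PROFILE" in public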
The mistake I made was that I had created the schema outside the database environment & as the root user. When we ran select * from information_schema.schemata; as both users (root & db user), the schema was not listed.
Hence, create the schema in the target database by running:
psql -U [dbUser] -d [database] -c "CREATE SCHEMA pcm;"
or
psql -h localhost -U [dbUser] -d [database]
[database]#=> CREATE SCHEMA pcm;
Then run this query to test whether the schema was created successfully in the database, as the DB owner user:
[database]#=> select * from information_schema.schemata;