Postgres on AWS RDS: Create table succeeds but only creates a relation which I can not find anywhere and can not delete - postgresql

The CREATE TABLE query is as follows:
CREATE TABLE xxx (
id BIGSERIAL PRIMARY KEY,
user_id BIGINT NOT NULL,
name VARCHAR(255) NOT NULL,
created DATE
);
It returns:
Table xxx created
Execution time: 0.11s
If I now try a SELECT, I get:
SELECT * FROM xxx;
ERROR: relation "xxx" does not exist
Position: 15
If I try to recreate the table, I get:
ERROR: relation "xxx" already exists
1 statement failed.
Execution time: 0.12s
And to top it off: if I reconnect, I can do it all over again.
I am using SQL Workbench to connect to the database on AWS RDS.
I am using the master account for these queries.

Can you use pgAdmin to see if it helps? I have my Postgres RDS configured with pgAdmin and haven't faced this issue.

Okay, I found the problem, and in retrospect it makes a lot of sense. The problem was
that I was not committing the changes to the database. As I have never worked in a non-auto-commit environment, I did not know to look for this. Putting the CREATE statement between BEGIN and END like so:
BEGIN;
CREATE TABLE xxx (
id BIGSERIAL PRIMARY KEY,
user_id BIGINT NOT NULL,
name VARCHAR(255) NOT NULL,
created DATE
);
END;
worked.
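Equivalently, since PostgreSQL DDL is transactional, you can leave the statement as-is and issue an explicit COMMIT afterwards (a sketch; SQL Workbench/J also has an autocommit option in the connection profile that avoids the problem entirely):

```sql
CREATE TABLE xxx (
    id BIGSERIAL PRIMARY KEY,
    user_id BIGINT NOT NULL,
    name VARCHAR(255) NOT NULL,
    created DATE
);
COMMIT;  -- persists the table so other sessions can see it
```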

Related

Supabase realtime "error occurred when joining realtime:public:<Channel>"

I upgraded from Supabase-Js v1 to v2. After doing so, my previously working realtime subscriptions all fail and produce the following error:
Question
What does this error message mean? Why does the error occur and how could I fix it?
I think the table name is used as the channel.
event: "phx_reply"
payload:
response: {reason: "error occurred when joining realtime:public:<table-name>"}
reason: "error occurred when joining realtime:public:<table-name>"
status: "error"
ref: "1"
topic: "realtime:public:<table-name>"
I found a similar error message here; however, I do not understand it, even though I disabled Postgres Row Level Security:
https://github.com/supabase/realtime/issues/217
My code
You can find the full code here:
https://github.com/Donnerstagnacht/polity/blob/master/src/app/profile/state/profile.service.ts
Table
CREATE TABLE IF NOT EXISTS public.profiles_counters
(
"id" uuid NOT NULL,
"amendment_counter" bigint DEFAULT 0::bigint,
"follower_counter" bigint DEFAULT 0::bigint,
"following_counter" bigint DEFAULT 0::bigint,
"groups_counter" bigint DEFAULT 0::bigint,
"unread_notifications_counter" bigint DEFAULT 0::bigint,
CONSTRAINT profiles_counters_pkey PRIMARY KEY (id),
CONSTRAINT profiles_counters_fkey FOREIGN KEY (id)
REFERENCES auth.users (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
)
TABLESPACE pg_default;
ALTER TABLE IF EXISTS public.profiles_counters OWNER to postgres;
GRANT ALL ON TABLE public.profiles_counters TO anon;
GRANT ALL ON TABLE public.profiles_counters TO authenticated;
GRANT ALL ON TABLE public.profiles_counters TO postgres;
GRANT ALL ON TABLE public.profiles_counters TO service_role;
Activating RealTime
begin;
drop publication if exists supabase_realtime;
create publication supabase_realtime;
commit;
alter publication supabase_realtime add table profiles_counters;
alter table "profiles_counters" replica identity full;
Disabling Row Level Security
ALTER TABLE profiles_counters DISABLE ROW LEVEL SECURITY;
Creating Realtime Subscription
getRealTimeChangesCounters(uuid: string): RealtimeChannel {
  const subscription = this.supabaseClient
    .channel(`public:profiles_counters`)
    .on('postgres_changes',
      {
        event: 'UPDATE',
        schema: 'public',
        table: 'profiles_counters'
      },
      payload => {
        console.log(payload);
      }
    )
    .subscribe();
  return subscription;
}
It is related to a current Supabase internal bug.
I used the local development setup based on Docker. However, the Supabase Docker image seems to be behind the hosted Supabase version (which is the one described in the Supabase documentation).
So the solution is to switch to hosted Supabase development and wait for a fix of the local Supabase Docker image.
Related issues:
https://github.com/supabase/realtime/issues/295
https://github.com/supabase/supabase/issues/9798

Querying Postgres through Babelfish

I am connected over TDS (1433) to a Postgres/Aurora (babelfish-enabled) database.
I can run the following three queries from my application and I receive confusing responses:
SELECT current_database()
SELECT * FROM information_schema.tables WHERE table_name = 'PERSON'
SELECT COUNT(1) FROM "PERSON"
The responses are:
"current_database":"babelfish_db"
"table_catalog":"babelfish_db","table_schema":"public","table_name":"PERSON","table_type":"BASE TABLE"...}
relation "person" does not exist
I simply cannot query the PERSON table. I have tried:
"PERSON"
"person"
PERSON
person
public.PERSON
public.person
public."PERSON"
I have ensured the user I am connecting as has access to the database, schema and tables:
GRANT CONNECT ON DATABASE babelfish_db TO popweb;
GRANT USAGE ON SCHEMA public TO popweb;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO popweb;
Still, I cannot access the table. I feel like such a boob/noob.
For anyone who has connected to Postgres via Babelfish, what am I doing wrong?
The GA release of Babelfish didn't make any modifications to the PostgreSQL implementation of the information schema. So, what you see is the physical database of babelfish_db and the public schema. It looks like you created the table using the PostgreSQL endpoint.
To work with tables in Babelfish at this time, you need to create a T-SQL virtual database and your tables inside of that database using the T-SQL endpoint - just like you did before.
For example, using SSMS, open a new query window connected to your Babelfish endpoint. You should notice in the SSMS database drop-down and in the status bar that your context shows you are in the master database.
CREATE DATABASE ford_prefect;
GO
USE ford_prefect;
GO
CREATE SCHEMA school;
GO
CREATE TABLE [School].[Person](
[PersonID] [int] NOT NULL,
[LastName] [nvarchar](50) NOT NULL,
[FirstName] [nvarchar](50) NOT NULL,
[HireDate] [datetime] NULL,
[EnrollmentDate] [datetime] NULL,
[Discriminator] [nvarchar](50) NOT NULL,
CONSTRAINT [PK_School.Student] PRIMARY KEY CLUSTERED
(
[PersonID] ASC
)
);
GO
At this point you can add records via INSERT and select the table without issues.
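For instance, a quick smoke test against the same T-SQL endpoint might look like this (the sample row values are made up for illustration; column names are taken from the table above):

```sql
USE ford_prefect;
GO
-- Insert a sample row (only the NOT NULL columns from the example table):
INSERT INTO school.Person (PersonID, LastName, FirstName, Discriminator)
VALUES (1, 'Dent', 'Arthur', 'Student');
GO
-- The unquoted, case-insensitive T-SQL name now resolves as expected:
SELECT COUNT(1) FROM school.Person;
GO
```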
Cheers,
Bill Ramos
Aurora PostgreSQL Babelfish PM, Amazon

PowerApps Postgres auto increment ID

I got a table of the following form in Postgres:
CREATE TABLE contract (
id serial NOT NULL,
start_date date NOT NULL,
end_date date NOT NULL,
price float8 NOT NULL,
CONSTRAINT contract_pkey PRIMARY KEY (id)
);
In Microsoft PowerApps, I created an EditForm to update the table above. For other databases, like MS SQL, I didn't need to supply the id, since it's auto-increment. But for some reason, PowerApps keeps demanding that I fill in the id for this table, even though it's auto-increment and shouldn't be supplied to Postgres.
Anyone with the same experience with Powerapps in combination with Postgres? Struggling with it for hours...

Why doesn't knex create serial column in postgres?

I use knex to create a Postgres table as follows:
knex.schema.createTable('users', table => {
table.bigIncrements('user_id');
....
})
But after the table was created, the column user_id is an integer, not the serial I expected.
The SQL shown by pgAdmin is as follows:
CREATE TABLE public.users
(
user_id bigint NOT NULL DEFAULT nextval('users_user_id_seq'::regclass),
....
)
And the consequence is that when I run an INSERT statement, the user_id won't auto-increment as expected.
Any ideas?
====================
Currently I have just changed to a MySQL connection, and inserting works well. But if I change the database back to PostgreSQL, inserting fails due to duplication of user_id. The code can be found here: https://github.com/buzz-buzz/buzz-service
serial and bigserial are not real types; they are just shorthand for what pgAdmin is showing.
You will also find that a sequence named users_user_id_seq has been created when you look under Sequences in pgAdmin.
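As a sketch of what this means (per the PostgreSQL documentation on serial types), the column definition knex generates behaves exactly like a hand-written bigserial:

```sql
-- bigserial is shorthand; these statements are equivalent to
-- CREATE TABLE users (user_id bigserial NOT NULL):
CREATE SEQUENCE users_user_id_seq;
CREATE TABLE users (
    user_id bigint NOT NULL DEFAULT nextval('users_user_id_seq'::regclass)
);
ALTER SEQUENCE users_user_id_seq OWNED BY users.user_id;

-- Omitting user_id in an INSERT therefore still auto-increments:
INSERT INTO users DEFAULT VALUES;  -- user_id = 1
INSERT INTO users DEFAULT VALUES;  -- user_id = 2
```

So the schema is fine; duplicate-key errors on insert usually mean the sequence's current value is behind the ids already in the table (for example after importing rows with explicit ids), not that auto-increment is missing.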

apache phoenix DoNotRetryIOException

When I run the SQL to create a table, like this:
CREATE TABLE FM_DAY(
APPID VARCHAR NOT NULL,
CREATETIME VARCHAR NOT NULL,
PLATFORM VARCHAR NOT NULL,
USERCOUNT UNSIGNED_LONG,
LONGCOUNT UNSIGNED_LONG,
USERCOUNT UNSIGNED_LONG,
CONSTRAINT PK PRIMARY KEY (APPID,CREATETIME,PLATFORM)
)
This SQL is wrong because of the duplicate column USERCOUNT, and an error occurs when I run it. However, although it throws an exception, the table is created, exactly as if it had been created with this SQL:
CREATE TABLE FM_DAY(
APPID VARCHAR NOT NULL,
CREATETIME VARCHAR NOT NULL,
PLATFORM VARCHAR NOT NULL,
USERCOUNT UNSIGNED_LONG,
LONGCOUNT UNSIGNED_LONG,
CONSTRAINT PK PRIMARY KEY (APPID,CREATETIME,PLATFORM)
)
Unfortunately, the following exception is thrown when executing both DROP TABLE and SELECT on the table, and I can't drop it:
Error: org.apache.hadoop.hbase.DoNotRetryIOException: FM_DAY: 34
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1316)
at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:10525)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 34
at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:354)
at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:276)
at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:265)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:826)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:1336)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1289)
... 10 more
Has anyone seen this situation? And how can I delete this table?
Thanks.
I think I ran into this issue before. First, back up your DB (in case my instructions don't work :))
Second:
hbase shell
Then use hbase commands to disable and then drop the table.
disable ...
drop ...
After doing this, the table may still show up in Phoenix despite the table not existing in HBase. This is because Phoenix caches metadata in a HBase table. So now you have to find the Phoenix metadata table and drop it (it will be regenerated the next time you start Phoenix).
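Assuming the table name from the question, the shell session might look like this (SYSTEM.CATALOG is the HBase table where Phoenix keeps its metadata; dropping it is a last resort, so back it up first):

```shell
hbase shell
# inside the shell:
disable 'FM_DAY'
drop 'FM_DAY'
# if Phoenix still lists the table, clear the cached metadata
# (SYSTEM.CATALOG is regenerated the next time Phoenix starts):
disable 'SYSTEM.CATALOG'
drop 'SYSTEM.CATALOG'
```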
https://mail-archives.apache.org/mod_mbox/phoenix-user/201403.mbox/%3CCAAF1JditzYY6370DVwajYj9qCHAFXbkorWyJhXVprrDW2vYYBA#mail.gmail.com%3E