I have created the following setup script. The script is meant to be run before the application (which runs on Node) is deployed.
The idea is that it would eventually contain all table definitions, relations, stored procedures and some seed data to make the application work, split into different files.
But currently I'm stuck because I can't figure out how I can store data in variables.
In this example we have profiles (because user is a keyword) and inventories. I want each user to have an inventory. For this to work I need to generate an inventory, store its id in a variable and pass it to the profile.
But for some reason Postgres won't allow me to declare variables. Here's what I've got so far:
drop database test;
create database test;
\c test
create extension if not exists "uuid-ossp";
create table inventory(
    id uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    name text
);
create table profile(
    id uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    name text,
    inventory_id uuid references inventory(id)
);
DECLARE myid uuid;
/*I want to set the inv_id to the result of this expression*/
insert into inventory(name) values('asdf') returning id into myid;
insert into profile(name,inventory_id) values ('user1',#myid)
But I get the following error:
$ psql -U postgres -f init.sql
psql:init.sql:18: ERROR: syntax error at or near "uuid"
LINE 1: DECLARE myid uuid;
^
So how can I create a variable to store this id? Am I doing something wrong in general? I'm pretty sure DECLARE is part of the SQL spec.
You can use
WITH myids AS (
INSERT INTO inventory(name)
VALUES ('asdf')
RETURNING id
)
INSERT INTO profile(name,inventory_id)
SELECT 'user1', id
FROM myids;
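Alternatively, if you really want a variable: DECLARE is PL/pgSQL syntax, not plain SQL, so it only works inside a function or an anonymous DO block. A sketch of the same two inserts wrapped in a DO block:

```sql
-- DECLARE must live inside a PL/pgSQL block, not at the top level of a script
DO $$
DECLARE
    myid uuid;
BEGIN
    INSERT INTO inventory(name) VALUES ('asdf') RETURNING id INTO myid;
    INSERT INTO profile(name, inventory_id) VALUES ('user1', myid);
END
$$;
```

The CTE version above is usually preferable for a plain init script, since it needs no procedural wrapper.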
I'm writing a database migration that adds a new table whose id column is populated using uuid_generate_v4(). However, that generated id needs to be used in an UPDATE on another table to associate the entities. Here's an example:
BEGIN;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TABLE IF NOT EXISTS models(
    id uuid,
    type text
);
INSERT INTO models(id, type)
SELECT
    uuid_generate_v4() AS id
    ,t.type
FROM body_types AS t WHERE t.type != 'foo';
ALTER TABLE body_types
ADD COLUMN IF NOT EXISTS model_id uuid NOT NULL DEFAULT uuid_generate_v4();
UPDATE body_types SET model_id =
(SELECT ....??? I'M STUCK RIGHT HERE)
This is obviously a contrived query with flaws, but I'm trying to illustrate that what it looks like I need is a way to store the uuid_generate_v4() value from each inserted row into a variable or hash that I can reference in the later UPDATE statement.
Maybe I've modeled the solution wrong and there's a better way? Maybe there's a PostgreSQL feature I just don't know about? Any pointers greatly appreciated.
I was modeling the solution incorrectly. The short answer is "don't make the id in the INSERT random". In this case the key is to add the 'model_id' column to 'body_types' first. Then I can use it in the INSERT...SELECT without having to save it for later use because I'll be selecting it from the body_types table.
BEGIN;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
ALTER TABLE body_types
ADD COLUMN IF NOT EXISTS model_id uuid NOT NULL DEFAULT uuid_generate_v4();
CREATE TABLE IF NOT EXISTS models(
    id uuid,
    type text
);
INSERT INTO models(id, type)
SELECT
    t.model_id AS id
    ,t.type
FROM body_types AS t WHERE t.type != 'foo';
Wish I had a better contrived example, but the point is, avoid using random values that you have to use later, and in this case it was totally unnecessary to do so anyway.
I am using Prisma for my schema and migrating it to Supabase with prisma migrate dev.
One of my tables, Profiles, should reference the auth.users table in Supabase; in SQL it would be something like: id uuid references auth.users not null,
Now, since that table is automatically created by Supabase, do I still add it to my Prisma schema? It's not in public either; it is in auth.
model Profiles {
  id               String   @id @db.Uuid
  role             String
  subId            String
  stripeCustomerId String
  refundId         String[]
  createdAt        DateTime @default(now())
  updatedAt        DateTime @updatedAt
}
The reason I want the relation is that I want a trigger to automatically run a function that inserts an id and role into the profiles table when a new user is invited.
This is that trigger and function
-- inserts a row into public.profiles
create function public.handle_new_user()
returns trigger
language plpgsql
security definer
as $$
begin
insert into public.Profiles (id, role)
values (new.id, 'BASE_USER');
return new;
end;
$$;
-- trigger the function every time a user is created
create trigger on_auth_user_created
after insert on auth.users
for each row execute procedure public.handle_new_user();
I had this working when I created the profiles table manually in Supabase and included the reference to auth.users. That's the only reason I can think of for why the user id and role won't insert into the profiles table when I invite a user; the trigger and function are failing.
create table public.Profiles (
id uuid references auth.users not null,
role text,
primary key (id)
);
Update from comment:
One error I found is
relation "public.profiles" does not exist
I changed it to "public.Profiles" with a capital in Supabase, but the function seems to still be looking for the lowercase name.
What you show should just work:
db<>fiddle here
Looks like you messed up capitalization with Postgres identifiers.
If you (or your ORM) created the table as "Profiles" (with double-quotes), non-standard capitalization is preserved and you need to double-quote the name for the rest of its life.
So the trigger function body must read:
...
insert into public."Profiles" (id, role) -- with double-quotes
...
Note that schema and table (and column) have to be quoted separately.
See:
Are PostgreSQL column names case-sensitive?
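Applied to the trigger function from the question, the corrected body would read something like this (a sketch; only the table reference changes):

```sql
-- inserts a row into public."Profiles"
create or replace function public.handle_new_user()
returns trigger
language plpgsql
security definer
as $$
begin
  -- "Profiles" must be double-quoted to match the mixed-case table name
  insert into public."Profiles" (id, role)
  values (new.id, 'BASE_USER');
  return new;
end;
$$;
```

The trigger definition itself can stay as it is, since it only references the function by name.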
I am trying to convert a MySQL database to PostgreSQL and I created this
CREATE TYPE location_m as enum('England','Japan','France','Usa','China','Canada');
CREATE TABLE airport (
id int NOT NULL,
owner varchar(40) NOT NULL DEFAULT '',
location location_m NOT NULL DEFAULT 'England',
travel_prices varchar(100) NOT NULL DEFAULT '100-100-100-100-100-100',
profit varchar(100) NOT NULL DEFAULT '0-0-0-0-0-0'
) ;
INSERT INTO airport (id, owner, location, travel_prices, profit) VALUES
(1, 'Mafia', 'Japan', '1000-1000-1000-1000-1000-1000', '0-18000-34000-15500-11000-13000');
When I run the insert, it returns this:
psql:main_db.sql:43: ERROR: type "location_m" already exists
I tried looking it up but can't really find anything. I don't understand why it is saying it already exists.
I thought I was doing the enum correctly based on the docs and the other Stack Overflow posts.
That is my entire file so far, except I have DROP TABLE IF EXISTS airport; at the beginning.
If you're doing that, you will also want to place a DROP TYPE IF EXISTS location_m; at the beginning of the script. (Possibly with the CASCADE option if it's already used in a table definition, or make sure to drop all such tables first).
Alternatively, have a look at Check if a user-defined type already exists in PostgreSQL for various workarounds for the missing CREATE OR REPLACE TYPE …, although if you are working with a migration script that runs only once and expects an empty database, it probably does no harm to just drop and recreate them.
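One such workaround is to check the system catalog before creating the type. A sketch, querying pg_type directly:

```sql
-- Create the enum only if a type of that name does not already exist.
DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_type WHERE typname = 'location_m') THEN
        CREATE TYPE location_m AS ENUM
            ('England','Japan','France','Usa','China','Canada');
    END IF;
END
$$;
```

Note this keeps whatever definition already exists; it does not update the enum's labels if they have changed.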
A far better idea: use two scripts. Put your CREATE statements in one script; it gets run one time. The other script contains only the INSERT; if needed, delete and re-run it as many times as necessary. Further, IMHO, never use 'if exists' in DDL; keep in mind that errors can be your friend. Consider the following scenario.
Your script:
drop type if exists location_m cascade;
create type location_m as enum('England','Japan','France','Usa','China','Canada');
drop table if exists airport ;
create table airport (
id int not null,
owner varchar(40) not null default '',
location location_m not null default 'England',
travel_prices varchar(100) not null default '100-100-100-100-100-100',
profit varchar(100) not null default '0-0-0-0-0-0'
) ;
insert into airport (id, owner, location, travel_prices, profit) VALUES
(1, 'Mafia', 'Japan', '1000-1000-1000-1000-1000-1000', '0-18000-34000-15500-11000-13000');
Then another developer in your organization, working on a different project but using the same naming conventions, comes along with their script:
drop type if exists location_m cascade;
create type location_m as enum('USA', 'Canada', 'Brazil');
create table hotel( id integer generated always as identity
, name text
, location location_m not null default 'Canada'
, corporate_rate money
) ;
Now what does the following give you?
select *
from airport
where location = 'Japan';
See demo here. So what happened? Have fun trying to figure out why this happened. Oh well, never mind, just rerun your script. But soon you are both posting to SO wanting to know what Postgres bug is causing your problem, even though the problem is not on the Postgres side at all.
I am new to PostgreSQL and I am working with this database.
I got a file which I imported, and I am trying to get rows with a certain ID. But the ID column is not defined, as you can see in this picture:
so how do I access this ID? I want to use an SQL command like this:
SELECT * from table_name WHERE ID = 1;
If any order of rows is ok for you, just add a row number according to the current arbitrary sort order:
CREATE SEQUENCE tbl_tbl_id_seq;
ALTER TABLE tbl ADD COLUMN tbl_id integer DEFAULT nextval('tbl_tbl_id_seq');
The new default value is filled in automatically in the process. You might want to run VACUUM FULL ANALYZE tbl to remove bloat and update statistics for the query planner afterwards. And possibly make the column your new PRIMARY KEY ...
To make it a fully fledged serial column:
ALTER SEQUENCE tbl_tbl_id_seq OWNED BY tbl.tbl_id;
See:
Creating a PostgreSQL sequence to a field (which is not the ID of the record)
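To then promote the new column to PRIMARY KEY, something like the following should work (a sketch; the sequence default has already filled every row, so there are no NULLs or duplicates to block the constraint):

```sql
-- The column must be NOT NULL before it can be a primary key.
ALTER TABLE tbl ALTER COLUMN tbl_id SET NOT NULL;
ALTER TABLE tbl ADD PRIMARY KEY (tbl_id);
```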
What you see are just row numbers that pgAdmin displays, they are not really stored in the database.
If you want an artificial numeric primary key for the table, you'll have to create it explicitly.
For example:
CREATE TABLE mydata (
id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
obec text NOT NULL,
datum timestamp with time zone NOT NULL,
...
);
Then to copy the data from a CSV file, you would run
COPY mydata (obec, datum, ...) FROM '/path/to/csvfile' (FORMAT 'csv');
Then the id column is automatically filled.
I have a following situation.
In database A on server 1 (let's call it the host DB), there is a table with the following sample create script:
CREATE TABLE public.some_table (
id SERIAL PRIMARY KEY,
some_field TEXT
);
CREATE INDEX some_field_index
ON public.some_table USING btree
(my_custom_function(some_field));
As you can see, the index is built on the result of a custom function, my_custom_function, stored in database A.
Now I want to declare some_table as a foreign table on the other server, in database B. After creating the server, user mappings, etc., I declare the foreign table as:
CREATE FOREIGN TABLE public.some_table (
id integer NOT NULL,
some_field TEXT
)
SERVER host_server
OPTIONS (
schema_name 'public',
table_name 'some_table'
);
The table is created nicely; however, I cannot query it. Instead I get the following error:
ERROR: function my_custom_function(text) does not exist.
No function matches the given name and argument type.
You might need to add explicit type casts.
CONTEXT: Remote SQL command: SELECT id, some_field FROM public.some_table
SQL function my_custom_function during inlining.
I believe the problem is related to the function my_custom_function not being declared on server B, in the "guest" database. For some reasons I don't want to create this function there. Is there any solution to overcome this problem?
Thanks in advance for all your answers.