I created a pg_dump with the following command:
pg_dump -U postgres -d db -n public \
--exclude-table-data 'exclude_table_*' \
--exclude-table-data 'another_set_of_tables_to_exclude*' > dump.sql
This excluded the tables I needed it to exclude, but it didn't dump any of the functions in the public schema. Why did it not dump the functions, and how do I get it to dump them?
UPDATE
This is the definition of a materialized view:
CREATE MATERIALIZED VIEW public.attending AS
SELECT (split_part((ct.id)::text, '-'::text, 1))::bigint AS attending_physician,
       split_part((ct.id)::text, '-'::text, 2) AS business,
       (split_part((ct.id)::text, '-'::text, 3))::bigint AS organization,
       split_part((ct.id)::text, '-'::text, 4) AS county,
       ct.id,
       ct."qtr-0",
       ct."qtr-1",
       ct."qtr-2",
       ct."qtr-3",
       ct."qtr-4",
       ct."qtr-5",
       ct."qtr-6",
       ct."qtr-7",
       ct."qtr-8"
FROM crosstab(
       'SELECT attending_practitioner || ''-'' || business || ''-'' || organization || ''-'' || county AS id, period, COALESCE(admits, 0) FROM calc ORDER BY 1, 2 DESC'::text,
       'SELECT year || ''q'' || quarter FROM calc_trend ORDER BY 1 DESC LIMIT 9'::text
     ) ct(id character varying(32), "qtr-0" integer, "qtr-1" integer, "qtr-2" integer,
          "qtr-3" integer, "qtr-4" integer, "qtr-5" integer, "qtr-6" integer,
          "qtr-7" integer, "qtr-8" integer);
It should dump functions (and all other objects) in the public schema.
The functions that are not dumped are those that belong to an extension, like crosstab in your case. Such objects are not dumped individually; they are included in the CREATE EXTENSION.
Unfortunately extensions are not dumped with a schema dump (they belong to the database).
You should create the extensions manually on the destination database before restoring the dump. crosstab is provided by the tablefunc extension, so:
CREATE EXTENSION tablefunc;
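If you are not sure which functions come from extensions, a catalog query along these lines (a sketch; pg_depend records extension membership with deptype = 'e') lists them per extension:
-- Functions owned by installed extensions; pg_dump recreates these
-- via CREATE EXTENSION rather than as individual CREATE FUNCTION statements.
SELECT e.extname, p.proname
FROM pg_depend d
JOIN pg_extension e ON e.oid = d.refobjid
JOIN pg_proc p ON p.oid = d.objid
WHERE d.classid = 'pg_proc'::regclass
  AND d.refclassid = 'pg_extension'::regclass
  AND d.deptype = 'e'
ORDER BY 1, 2;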
Related
I have installed PostGIS, but I still have a problem:
function st_distancespheroid(geometry, geometry) does not exist
WITH data AS (
SELECT a.*,
CAST(ST_DistanceSpheroid(geometry(location), st_geomfromtext('POINT(' || $7::decimal || ' ' || $8::decimal || ')', 4326))as numeric) / 1000 AS distance
FROM agent AS a
WHERE (a.agent_code ILIKE $1
OR a.name ILIKE $1
OR a.phone LIKE $1)
AND
a.sub_district_name LIKE ANY(string_to_array($4, ','))
AND
a.agent_status_id LIKE ANY(string_to_array($5, ','))
AND CASE WHEN $6 = 'FAVORITE' THEN
a.is_subscription = true
WHEN $6 = 'REGULER' THEN
a.is_subscription != true
ELSE
a.location LIKE '%%'
END
ORDER BY distance ASC
),
data_counter AS (
SELECT COUNT(agent_id) AS __total__
FROM data
)
SELECT *
FROM data, data_counter
LIMIT $2
OFFSET $3
when I run my Go repo.
How to install PostGIS
After installation you have to create the extension in the database. For instance, to install PostGIS 3.0 in a Debian distribution running PostgreSQL 13:
apt-get install postgresql-13-postgis-3
After that you create the extension in the database
CREATE EXTENSION postgis;
Now you'll be able to use the spatial functions.
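A quick sanity check (a sketch) to confirm the extension is active:
-- Returns the PostGIS version string if CREATE EXTENSION succeeded.
SELECT postgis_full_version();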
How to calculate distances over a spheroid / sphere
The function ST_DistanceSpheroid expects three parameters, namely two geometries and the spheroid. In your code you're calling the function with only two geometries, hence the error function st_distancespheroid(geometry, geometry) does not exist. This is how it should be called (adapt it to the spheroid of your choice):
SELECT
ST_DistanceSpheroid(geom1,geom2,'SPHEROID["WGS 84",6378137,298.257223563]')
FROM t;
You can also tell ST_Distance to compute the distances using the spheroid if you cast the geometry parameters to geography:
SELECT ST_Distance(geom1::geography,geom2::geography,true) FROM t;
Another option - less accurate - is to use ST_DistanceSphere:
SELECT ST_DistanceSphere(geom1,geom2) FROM t;
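To get a feel for the differences, here is a side-by-side sketch; the two WGS 84 points are made up for illustration:
-- Compare the three approaches for the same pair of points (distances in meters).
SELECT ST_DistanceSpheroid(a, b, 'SPHEROID["WGS 84",6378137,298.257223563]') AS spheroid_m,
       ST_Distance(a::geography, b::geography) AS geography_m,
       ST_DistanceSphere(a, b) AS sphere_m
FROM (SELECT ST_SetSRID(ST_MakePoint(106.8, -6.2), 4326) AS a,
             ST_SetSRID(ST_MakePoint(107.6, -6.9), 4326) AS b) t;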
I have a requirement to dump the contents of a definable selection of tables as CSVs for an initial load of systems that cannot connect to PostgreSQL for various reasons.
I have written a script to do this which runs through a list of tables using psql with the -c flag to run psql's \COPY command to dump the corresponding table to a file like this:
COPY table_name TO table_name.csv WITH (FORMAT 'csv', HEADER, QUOTE '\"', DELIMITER '|');
It works fine. But I am sure you have already spotted the problem: as the process takes ~57 minutes for ~60 odd tables, the likelihood of consistency is quite close to absolute zero.
I had a think about it and suspected I could make a few lightweight changes to pg_dump to do what I want, i.e., create multiple CSVs from pg_dump whilst having a hope of integrity between the tables, and being able to specify parallel dumps too.
I have added a few flags to allow me to apply a file postfix (the date), set the format options and pass in a path for the relevant output file.
However my modified pg_dump was failing when writing to a file, like:
COPY table_name (pkey_id, field1, field2 ... fieldn) TO table_name.csv WITH (FORMAT 'csv', HEADER, QUOTE '"', DELIMITER '|')
Note: Within pg_dump, the column list is expanded
So I cast around for further information and found these COPY Tips.
It looks like writing to a file is a no-no over the network; however, I am on the same machine (for now). I felt writing to /tmp would be OK, as it is writable by anyone.
So I tried cheating with:
seingramp#seluonkeydb01:~$ ./tp_dump -a -t table_name -D /tmp/ -k "FORMAT 'csv', HEADER, QUOTE '\"', DELIMITER '|'" -K "_$DATE_POSTFIX"
tp_dump: warning: there are circular foreign-key constraints on this table:
tp_dump: table_name
tp_dump: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
tp_dump: Consider using a full dump instead of a --data-only dump to avoid this problem.
--
-- PostgreSQL database dump
--
-- Dumped from database version 12.3
-- Dumped by pg_dump version 14devel
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;
--
-- Data for Name: material_master; Type: TABLE DATA; Schema: mm; Owner: postgres
--
COPY table_name (pkey_id, field1, field2 ... fieldn) FROM stdin;
tp_dump: error: query failed:
tp_dump: error: query was: COPY table_name (pkey_id, field1, field2 ... fieldn) TO PROGRAM 'gzip > /tmp/table_name_20200814.csv.gz' WITH (FORMAT 'csv', HEADER, QUOTE '"', DELIMITER '|')
I have neutered the data as it is customer specific.
I didn't find pg_dump's error message very helpful; do you have any ideas as to what I am doing wrong?
The changes really are quite small (excuse the code!), starting at ~line 1900, ignoring the flags added around getopt().
/*
 * Use COPY (SELECT ...) TO when dumping a foreign table's data, and when
 * a filter condition was specified. For other cases a simple COPY
 * suffices.
 */
if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)
{
    /* Note: this syntax is only supported in 8.2 and up */
    appendPQExpBufferStr(q, "COPY (SELECT ");
    /* klugery to get rid of parens in column list */
    if (strlen(column_list) > 2)
    {
        appendPQExpBufferStr(q, column_list + 1);
        q->data[q->len - 1] = ' ';
    }
    else
        appendPQExpBufferStr(q, "* ");

    if (copy_from_spec)
    {
        if (copy_from_postfix)
        {
            appendPQExpBuffer(q, "FROM %s %s) TO PROGRAM 'gzip > %s%s%s.csv.gz' WITH (%s)",
                              fmtQualifiedDumpable(tbinfo),
                              tdinfo->filtercond ? tdinfo->filtercond : "",
                              copy_from_dest ? copy_from_dest : "",
                              fmtQualifiedDumpable(tbinfo),
                              copy_from_postfix,
                              copy_from_spec);
        }
        else
        {
            appendPQExpBuffer(q, "FROM %s %s) TO PROGRAM 'gzip > %s%s.csv.gz' WITH (%s)",
                              fmtQualifiedDumpable(tbinfo),
                              tdinfo->filtercond ? tdinfo->filtercond : "",
                              copy_from_dest ? copy_from_dest : "",
                              fmtQualifiedDumpable(tbinfo),
                              copy_from_spec);
        }
    }
    else
    {
        appendPQExpBuffer(q, "FROM %s %s) TO stdout;",
                          fmtQualifiedDumpable(tbinfo),
                          tdinfo->filtercond ? tdinfo->filtercond : "");
    }
}
else
{
    if (copy_from_spec)
    {
        if (copy_from_postfix)
        {
            appendPQExpBuffer(q, "COPY %s %s TO PROGRAM 'gzip > %s%s%s.csv.gz' WITH (%s)",
                              fmtQualifiedDumpable(tbinfo),
                              column_list,
                              copy_from_dest ? copy_from_dest : "",
                              fmtQualifiedDumpable(tbinfo),
                              copy_from_postfix,
                              copy_from_spec);
        }
        else
        {
            appendPQExpBuffer(q, "COPY %s %s TO PROGRAM 'gzip > %s%s.csv.gz' WITH (%s)",
                              fmtQualifiedDumpable(tbinfo),
                              column_list,
                              copy_from_dest ? copy_from_dest : "",
                              fmtQualifiedDumpable(tbinfo),
                              copy_from_spec);
        }
    }
    else
    {
        appendPQExpBuffer(q, "COPY %s %s TO stdout;",
                          fmtQualifiedDumpable(tbinfo),
                          column_list);
    }
}
I tried a couple of other cheats too, like specifying a directory owned by postgres. I know it's a quick hack but I hope you can help, and thanks for looking.
This is a use case for pg_restore -f.
So:
-- Create custom format dump file
pg_dump -d some_db -U some_user -Fc -f dump.out
-- Move that file to where you need it
-- Dump data only from named table to a file from the dump file.
pg_restore -a -t table_1 -f table_1_data.sql dump.out
pg_dump creates a consistent snapshot of the tables, so you have the database in a 'frozen' state in dump.out. Then you can use pg_restore to 'thaw out' those parts you need on your own schedule. By using -a you will get the COPY you want.
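If you'd rather keep your one-CSV-per-table approach without patching pg_dump, another option (a sketch; the table names are placeholders) is to run all the \copy commands in a single psql session inside one REPEATABLE READ transaction, which is essentially the same snapshot trick pg_dump itself relies on:
-- psql script: every \copy shares the transaction's snapshot, so the CSVs
-- are mutually consistent even if the tables change while it runs.
BEGIN ISOLATION LEVEL REPEATABLE READ;
\copy table_1 TO 'table_1.csv' WITH (FORMAT csv, HEADER, QUOTE '"', DELIMITER '|')
\copy table_2 TO 'table_2.csv' WITH (FORMAT csv, HEADER, QUOTE '"', DELIMITER '|')
COMMIT;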
Assume the following table and custom range type:
create table booking (
identifier integer not null primary key,
room uuid not null,
start_time time without time zone not null,
end_time time without time zone not null
);
create type timerange as range (subtype = time);
In PostgreSQL v10, you can do:
alter table booking add constraint overlapping_times
exclude using gist
(
room with =,
timerange(start_time, end_time) with &&
);
In PostgreSQL v9.5/v9.6, you have to manually cast the uuid column, as btree_gist does not support uuid:
alter table booking add constraint overlapping_times
exclude using gist
(
(room::text) with =,
timerange(start_time, end_time) with &&
);
I would like to support v9.5, v9.6 and v10 for my customers. Is there a way to conditionally add the above constraint in the same .sql file, depending on the version of the current database?
You can use dynamic SQL.
For example:
do
$block$
declare
    l_version text;
begin
    select setting into l_version from pg_settings where name = 'server_version';
    execute format(
        $script$
        alter table booking add constraint overlapping_times
        exclude using gist
        (
            %s with =,
            timerange(start_time, end_time) with &&
        )
        $script$,
        case when (l_version like '9.5.%' or l_version like '9.6.%')
             then '(room::text)'
             else 'room'
        end
    );
end;
$block$
language plpgsql;
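A slightly terser variant (a sketch to the same effect) keys off server_version_num, which is easier to compare than the textual version string:
do $$
begin
    -- server_version_num is e.g. 90605 for 9.6.5 and 100004 for 10.4.
    if current_setting('server_version_num')::int < 100000 then
        execute 'alter table booking add constraint overlapping_times
                 exclude using gist ((room::text) with =,
                                     timerange(start_time, end_time) with &&)';
    else
        execute 'alter table booking add constraint overlapping_times
                 exclude using gist (room with =,
                                     timerange(start_time, end_time) with &&)';
    end if;
end
$$;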
Here is a proof of concept:
#!/bin/sh
#THE_HOST="192.168.0.104"
THE_HOST="192.168.0.101"

get_version ()
{
    psql -t -h ${THE_HOST} -U postgres postgres <<OMG | awk -e '{ print $2; }'
select version();
OMG
}

# ############################################################################
# main
#
# - Connect to the database to retrieve its version
# - use the retrieved version to create a symlink "versioned" to
#   one of our subdirs
# - call an sql script that includes this symlink/some.sql
# Symlinking to a non-existing directory will cause a dead link
# (and the script to fail).
# Ergo: there should be a subdir "verX.Y.Z" for every supported version X.Y.Z
# ############################################################################
pg_version=`get_version`
#echo "version=${pg_version}"
rm -f versioned
ln -fs "ver${pg_version}" versioned
if [ -f versioned/alter.sql ]; then
    echo created link versioned to "ver${pg_version}"
else
    echo "version ${pg_version} not supported today..."
    echo "Failed!"
    exit 1
fi

psql -t -h ${THE_HOST} -U postgres postgres <<LETS_GO
\i common/create.sql
\i versioned/alter.sql
\echo done!
LETS_GO
#eof
How do I perform a SELECT on a table across all databases available in an Elastic Database Pool? All of them have the same DB schema and they're created dynamically. I've explored Elastic Database Query but I'm getting lost in the middle.
Reporting across scaled-out cloud databases
It asks you to download a sample console application first, create a shard, and then run the query, which is a bit confusing. Is there any way I can run T-SQL queries from SQL Server Management Studio to query all the databases?
PS: The DBs are not sharded. There is one DB per customer.
Thanks in advance!
I'm thinking you need to add the databases as external sources so you can do a cross-database query; you will then be able to query the tables as if they were local.
I found a guide that can help you set it up:
https://www.mssqltips.com/sqlservertip/4550/sql-azure-cross-database-querying/
From the guide:
DB1 has the Db1Table table:
CREATE TABLE DB1.dbo.Db1Table (
ID int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
CustomerId INT,
CustomerName NVARCHAR(50));
INSERT INTO DB1.dbo.Db1Table(CustomerId, CustomerName) VALUES
( 1, 'aaaaaaa' ),
( 2, 'bbbbbbb' ),
( 3, 'ccccccc' ),
( 4, 'ddddddd' ),
( 5, 'eeeeeee' );
DB2 has the Db2Table table:
CREATE TABLE DB2.dbo.Db2Table (
ID int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
CustomerId INT,
Country NVARCHAR(50));
INSERT INTO DB2.dbo.Db2Table(CustomerId, Country) VALUES
( 1, 'United States' ),
( 3, 'Greece' ),
( 4, 'France' ),
( 5, 'Germany' ),
( 6, 'Ireland' );
If we want to fetch customers whose country is Greece, then we could run the following query:
SELECT
db1.CustomerId,
db1.CustomerName
FROM DB1.dbo.Db1Table db1
INNER JOIN DB2.dbo.Db2Table db2 ON db1.CustomerId = db2.CustomerId
WHERE db2.Country = 'Greece';
but instead of returning customerId 3, we get the following error:
Reference to database and/or server name in 'DB2.dbo.Db2Table' is not supported in this version of SQL Server.
In order to perform a cross-database query, we need the following steps:
Step 1: Create Master Key
The database master key is a symmetric key used to protect the private keys of certificates and asymmetric keys that are present in the database.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';
-- Example --
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '3wbASg68un#q'
Step 2: Create Database Scoped Credential “my_credential”
A database credential is not mapped to a server login or database user. The credential is used by the database to access the external location anytime the database is performing an operation that requires access.
CREATE DATABASE SCOPED CREDENTIAL <credential_name>
WITH IDENTITY = '<user>',
SECRET = '<secret>';
-- Example --
CREATE DATABASE SCOPED CREDENTIAL my_credential
WITH IDENTITY = 'dbuser',
SECRET = '9Pfwbg68un#q';
credential_name
Specifies the name of the database scoped credential being created. credential_name cannot start with the number (#) sign. System credentials start with ##.
IDENTITY = 'identity_name'
Specifies the name of the account to be used when connecting outside the server.
SECRET = 'secret'
Specifies the secret required for outgoing authentication.
Step 3: Create External Data Source “my_datasource” of type RDBMS
This instruction creates an external data source for use in Elastic Database queries. For RDBMS, it specifies the logical server name of the remote database in Azure SQL Database.
-- (only on Azure SQL Database v12 or later)
CREATE EXTERNAL DATA SOURCE <data_source_name>
WITH (
TYPE=RDBMS,
LOCATION='<server_name>.database.secure.windows.net',
DATABASE_NAME='<remote_database_name>',
CREDENTIAL = <sql_credential>);
-- Example --
CREATE EXTERNAL DATA SOURCE my_datasource
WITH (
TYPE=RDBMS,
LOCATION='ppolsql.database.secure.windows.net',
DATABASE_NAME='DB2',
CREDENTIAL = my_credential);
data_source_name
Specifies the user-defined name for the data source. The name must be unique within the database in Azure SQL Database.
TYPE = [ HADOOP | SHARD_MAP_MANAGER | RDBMS ]
Use RDBMS with external data sources for cross-database queries with Elastic Database query on Azure SQL Database.
LOCATION =
specifies the logical server name of the remote database in Azure SQL Database.
DATABASE_NAME = 'remote_database_name'
The name of the remote database (for RDBMS).
CREDENTIAL = credential_name
Specifies a database-scoped credential for authenticating to the external data source.
Step 4: Create External Table “mytable”
This instruction creates an external table for Elastic Database query.
CREATE EXTERNAL TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name
( <column_definition> [ ,...n ] )
WITH (
DATA_SOURCE = <data_source_name>);
-- Example --
CREATE EXTERNAL TABLE [dbo].[Db2Table] (
[ID] int NOT NULL,
[CustomerId] INT,
[Country] NVARCHAR(50)
) WITH ( DATA_SOURCE = my_datasource )
[ database_name . [ schema_name ] . | schema_name. ] table_name
The one to three-part name of the table to create. For an external table, only the table metadata is stored in SQL along with basic statistics about the file and/or folder referenced in Hadoop or Azure blob storage. No actual data is moved or stored in SQL Server.
[ ,...n ]
The column definitions, including the data types and number of columns, must match the data in the external files.
DATA_SOURCE = external_data_source_name
Specifies the name of the external data source that contains the location of the external data.
After running the DDL statements, you can access the remote table Db2Table as though it were a local table.
So, now if we want to fetch customers whose country is Greece, the query executes successfully:
SELECT
db1.CustomerId,
db1.CustomerName
FROM DB1.dbo.Db1Table db1
INNER JOIN DB1.dbo.Db2Table db2 ON db1.CustomerId = db2.CustomerId
WHERE db2.Country = 'Greece';
-- Result --
CustomerId | CustomerName
-----------+-------------
         3 | ccccccc
Our solution requires us to work with hierarchies of regions, which are as follows:
        STATE
          |
       DISTRICT
          |
        TALUK
        /    \
       /      \
   HOBLI    PANCHAYAT
       \      /
        \    /
         \  /
       VILLAGE
There are 2 ways to navigate to a village from a taluk: either through HOBLI or through PANCHAYAT.
We need a PK (non-business key) and a SERIAL_NUMBER/ID for each STATE, DISTRICT, TALUK, HOBLI, PANCHAYAT and VILLAGE; however, each village has 8 additional attributes.
How do I design this structure in PostgreSQL 8.4?
My previous experience is with Oracle, so I'm wondering how to navigate hierarchical structures in PostgreSQL 8.4. If at all possible, the solution should be friendly for READ/navigation speed.
================================================================
Quassnoi: Here is a sample hierarchy
KARNATAKA (State)
        |
        |
  TUMKUR (District)
        |
        |
  KUNIGAL (Taluk)
     /         \
    /           \
HULIYUR DURGA   CHOWDANAKUPPE
   (Hobli)       (Panchayat)
    \           /
     \         /
      \       /
  Voddarakempapura (Village)
  Ankanahalli (Village)
  Chowdanakuppe (Village)
  Yedehalli (Village)
NAVIGATE: For now, I will be presenting 2 separate UI screens, each having a separately navigable hierarchy:
#1 using HOBLI
So, for #1, I will need the entire tree starting from STATE, DISTRICT(s), TALUK(s), HOBLI(s), VILLAGE(s). Using the above tree, I will need:
KARNATAKA (State)
|
|---TUMKUR (District)
    |
    |---KUNIGAL (Taluk)
        |
        **|---HULIYUR DURGA (Hobli)**
            |
            |---VODDARAKEMPAPURA (Village)
            |
            |---Yedehalli (Village)
            |
            |---Ankanahalli (Village)
#2 using PANCHAYAT
So, for #2, I will need the entire tree starting from STATE, DISTRICT(s), TALUK(s), PANCHAYAT(s), VILLAGE(s):
KARNATAKA (State)
|
|---TUMKUR (District)
    |
    |---KUNIGAL (Taluk)
        |
        **|---CHOWDANAKUPPE (Panchayat)**
            |
            |---VODDARAKEMPAPURA (Village)
            |
            |---Ankanahalli (Village)
            |
            |---Chowdanakuppe (Village)
ResultSet
I should be able to create the above trees with the following details:
We need a PK (non-business key) and a SERIAL_NUMBER/ID for each STATE, DISTRICT, TALUK, HOBLI, PANCHAYAT and VILLAGE, along with a name and the LEVEL of the relationship (similar to Oracle's LEVEL).
For now, getting the above ResultSet is OK. But in the future, we will need the ability to do reporting (some aggregation) at the HOBLI/PANCHAYAT/TALUK level.
=====================================
@Quassnoi, regarding #2:
Thank you very much.
"If you are planning to add some more hierarchy axes, it may be worth creating a separate table to store the hierarchies (with the axis field added) rather than adding the fields to the table."
Actually, I simplified the existing requirement so as NOT to confuse anyone. The actual hierarchy is like this:
        STATE
          |
       DISTRICT
          |
        TALUK
        /    \
       /      \
   HOBLI    PANCHAYAT
       \      /
        \    /
         \  /
    REVENUE VILLAGE
          |
          |
      HABITATION
Sample data for such a hierarchy looks like this:
KARNATAKA (State)
        |
  TUMKUR (District)
        |
  KUNIGAL (Taluk)
     /         \
    /           \
HULIYUR DURGA   CHOWDANAKUPPE
   (Hobli)       (Panchayat)
    \           /
     \         /
  Thavarekere (Revenue Village)
     /         \
Bommanahalli    Tavarekere
(Habitation)    (Habitation)
Will anything in your solution below change with the above modification?
Also, would you recommend that I create another table like the one below to store the 7 properties of the habitats? Is there a better way to store such info?
CREATE TABLE habitatDetails
(
    id BIGINT NOT NULL PRIMARY KEY,
    serialNumber BIGINT NOT NULL,
    habitatid BIGINT NOT NULL, -- we will add these details only for habitats
    prop1 VARCHAR(128),
    prop2 VARCHAR(128),
    prop3 VARCHAR(128),
    prop4 VARCHAR(128),
    prop5 VARCHAR(128),
    prop6 VARCHAR(128),
    prop7 VARCHAR(128),
    CONSTRAINT habitatdetails_fk FOREIGN KEY (habitatid)
        REFERENCES public.t_hierarchy (id)
);
Thank you,
CREATE TABLE t_hierarchy
(
    id BIGINT NOT NULL PRIMARY KEY,
    type VARCHAR(128) NOT NULL,
    name VARCHAR(128) NOT NULL,
    tax_parent BIGINT,
    gov_parent BIGINT,
    CHECK (NOT (tax_parent IS NULL AND gov_parent IS NULL))
);

CREATE INDEX ix_hierarchy_taxparent ON t_hierarchy (tax_parent);
CREATE INDEX ix_hierarchy_govparent ON t_hierarchy (gov_parent);

INSERT
INTO t_hierarchy
VALUES (1, 'State', 'Karnataka', 0, 0),
       (2, 'District', 'Tumkur', 1, 1),
       (3, 'Taluk', 'Kunigal', 2, 2),
       (4, 'Hobli', 'Huliyur Durga', 3, NULL),
       (5, 'Panchayat', 'Chowdanakuppe', NULL, 3),
       (6, 'Village', 'Voddarakempapura', 4, 5),
       (7, 'Village', 'Ankanahalli', 4, 5),
       (8, 'Village', 'Chowdanakuppe', 4, 5),
       (9, 'Village', 'Yedehalli', 4, 5);
CREATE OR REPLACE FUNCTION fn_hierarchy_tax(_level INT, _start BIGINT)
RETURNS TABLE (level INT, h t_hierarchy)
AS
$$
SELECT $1, h
FROM   t_hierarchy h
WHERE  h.id = $2
UNION ALL
SELECT (f).*
FROM   (
       SELECT fn_hierarchy_tax($1 + 1, h.id) f
       FROM   t_hierarchy h
       WHERE  h.tax_parent = $2
       ) q;
$$
LANGUAGE sql;

CREATE OR REPLACE FUNCTION fn_hierarchy_tax(_start BIGINT)
RETURNS TABLE (level INT, h t_hierarchy)
AS
$$
SELECT * FROM fn_hierarchy_tax(1, $1);
$$
LANGUAGE sql;
CREATE OR REPLACE FUNCTION fn_hierarchy_gov(_level INT, _start BIGINT)
RETURNS TABLE (level INT, h t_hierarchy)
AS
$$
SELECT $1, h
FROM   t_hierarchy h
WHERE  h.id = $2
UNION ALL
SELECT (f).*
FROM   (
       SELECT fn_hierarchy_gov($1 + 1, h.id) f
       FROM   t_hierarchy h
       WHERE  h.gov_parent = $2
       ) q;
$$
LANGUAGE sql;

CREATE OR REPLACE FUNCTION fn_hierarchy_gov(_start BIGINT)
RETURNS TABLE (level INT, h t_hierarchy)
AS
$$
SELECT * FROM fn_hierarchy_gov(1, $1);
$$
LANGUAGE sql;
SELECT ht.level, (ht.h).*
FROM fn_hierarchy_tax(1) ht;
SELECT ht.level, (ht.h).*
FROM fn_hierarchy_gov(1) ht;
The main idea is to keep the two parents in two different fields, and to use CONNECT BY emulation (rather than a recursive CTE) to preserve depth-first ordering.
If you are planning to add some more hierarchy axes, it may be worth creating a separate table to store the hierarchies (with the axis field added) rather than adding the fields to the table.
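For illustration, the separate-table variant mentioned above might look like this (a sketch with hypothetical names):
-- One row per (child, axis) link instead of one parent column per axis.
CREATE TABLE t_hierarchy_link
(
    child  BIGINT NOT NULL REFERENCES t_hierarchy (id),
    parent BIGINT NOT NULL,
    axis   VARCHAR(32) NOT NULL, -- e.g. 'tax' or 'gov'
    PRIMARY KEY (child, axis)
);
CREATE INDEX ix_hierarchy_link_parent ON t_hierarchy_link (parent, axis);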
Update:
Will anything in your solution below change with the above modification?
No, it will work alright.
By "axes" I mean hierarchy chains. Currently, you have two axes: political hierarchy (though hablis) and tax hierarchy (through panchayats). If you are planning to add some more axes (which is of course improbable), you may consider storing the hierarchies in another table and adding "axis" field to that table. Again, it's very improbable that you want to do this, I just mentioned this possibility for the other readers who may have a similar problem.
Also, would you recommend that I create another table like the one below to store the 7 properties of the habitats? Is there a better way to store such info?
Yes, keeping them in a separate table is a good idea.
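As a sketch of how the pieces then fit together (using the corrected habitatDetails table above):
-- Walk the tax hierarchy from the state (id 1) and attach habitat details
-- where they exist; levels without details simply show NULLs.
SELECT ht.level, (ht.h).type, (ht.h).name, d.prop1
FROM fn_hierarchy_tax(1) ht
LEFT JOIN habitatDetails d ON d.habitatid = (ht.h).id;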