I have a table in postgres:
create table fubar (
name1 text,
name2 text, ...,
key integer);
I want to write a function which returns field values from fubar given the column names:
function getFubarValues(col_name text, key integer) returns text ...
where getFubarValues returns the value of the specified column in the row identified by key. Seems like this should be easy.
I'm at a loss. Can someone help? Thanks.
Klin's answer is a good (i.e. safe) approach to the question as posed, but it can be simplified:
PostgreSQL's -> operator allows expressions. For example:
CREATE TABLE test (
id SERIAL,
js JSON NOT NULL,
k TEXT NOT NULL
);
INSERT INTO test (js,k) VALUES ('{"abc":"def","ghi":"jkl"}','abc');
SELECT js->k AS value FROM test;
Produces
value
-------
"def"
So we can combine that with row_to_json:
CREATE TABLE test (
id SERIAL,
a TEXT,
b TEXT,
k TEXT NOT NULL
);
INSERT INTO test (a,b,k) VALUES
('foo','bar','a'),
('zip','zag','b');
SELECT row_to_json(test)->k AS value FROM test;
Produces:
value
-------
"foo"
"zag"
Here I'm getting the key from the table itself, but of course you could get it from any source or expression; it's just a value. Also note that the result returned is a JSON value type (it doesn't know whether it's text, numeric, or boolean). If you want it to be text, just cast it: (row_to_json(test)->k)::TEXT
Now that the question itself is answered, here's why you shouldn't do this, and what you should do instead!
Never trust any data. Even if it already lives inside your database, you shouldn't trust it. The method I've posted here is safe against SQL injection attacks, but an attacker could still set k to 'id' and see a column which was not intended to be visible to them.
A much better approach is to structure your data with this type of query in mind. Postgres has some excellent datatypes for this: HSTORE and JSON/JSONB. Merge your dynamic columns into a single column with one of those types (I'd suggest HSTORE for its simplicity and generally more complete operator support).
This has several advantages: your schema is well-defined and does not need to change if you add more dynamic columns, you do not need to perform expensive re-casting (i.e. row_to_json), and you are able to take advantage of indexes on your columns (thanks to PostgreSQL's functional indexes).
The equivalent to the code I wrote above would be:
CREATE EXTENSION HSTORE; -- necessary if you're not already using HSTORE
CREATE TABLE test (
id SERIAL,
cols HSTORE NOT NULL,
k TEXT NOT NULL
);
INSERT INTO test (cols,k) VALUES
('a=>"foo",b=>"bar"','a'),
('a=>"zip",b=>"zag"','b');
SELECT cols->k AS value FROM test;
Or, for automatic escaping of your values when inserting, you can use one of:
INSERT INTO test (cols,k) VALUES
(hstore( 'a', 'foo' ) || hstore( 'b', 'bar' ), 'a'),
(hstore( ARRAY['a','b'], ARRAY['zip','zag'] ), 'b');
See http://www.postgresql.org/docs/9.1/static/hstore.html for more details.
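As a sketch of the indexing advantage mentioned above (assuming the test table from this example; the index names are illustrative, not from the original), an expression index lets lookups of one hstore key use an index scan, and a GIN index supports containment queries:

CREATE INDEX test_cols_a_idx ON test ((cols -> 'a'));   -- expression index for one key
CREATE INDEX test_cols_gin_idx ON test USING GIN (cols); -- supports @> containment queries
SELECT * FROM test WHERE cols -> 'a' = 'foo';            -- can use test_cols_a_idx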
You can use dynamic SQL to select a column by name:
create or replace function get_fubar_values (col_name text, row_key integer)
returns setof text language plpgsql as $$begin
return query execute 'select ' || quote_ident(col_name) ||
' from fubar where key = $1' using row_key;
end$$;
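Assuming the fubar table from the question, a call would look like (the key value 42 is just an example):

SELECT get_fubar_values('name1', 42);

Note that quote_ident is what makes the dynamic column name safe: an unexpected value is quoted as an identifier, so a bad column name raises an error instead of executing arbitrary SQL.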
Related
I'm trying to figure out if it's possible to create an SQL function that treats an argument row as if it were "duck-typed". That is, I would like to be able to pass rows from different tables or views that have certain common column names and operate on those columns within the function.
Here's a very trivial example to try to describe the issue:
=> CREATE TABLE tab1 (
id SERIAL PRIMARY KEY,
has_desc BOOLEAN,
x1 TEXT,
description TEXT
);
CREATE TABLE
=> CREATE FUNCTION get_desc(tab tab1) RETURNS TEXT AS $$
SELECT CASE tab.has_desc
WHEN True THEN
tab.description
ELSE
'Default Description'
END;
$$ LANGUAGE SQL;
=> INSERT INTO tab1 (has_desc, x1, description) VALUES (True, 'Foo', 'FooDesc');
INSERT 0 1
=> INSERT INTO tab1 (has_desc, x1, description) VALUES (True, 'Bar', 'BarDesc');
INSERT 0 1
=> SELECT get_desc(tab1) FROM tab1;
get_desc
----------
BarDesc
FooDesc
(2 rows)
This is of course very artificial. In reality, my table has many more fields, and the function is way more complicated than that.
Now I want to add other tables/views and pass them to the same function. The new tables/views have columns that differ, but the columns the function will care about are common to all of them. To add to the trivial example, I add these two tables:
CREATE TABLE tab2 (
id SERIAL PRIMARY KEY,
has_desc BOOLEAN,
x2 TEXT,
description TEXT
);
CREATE TABLE tab3 (
id SERIAL PRIMARY KEY,
has_desc BOOLEAN,
x3 TEXT,
description TEXT
);
Note all three have the has_desc and description fields that are the only ones actually used in get_desc. But of course if I try to use the existing function with tab2, I get:
=> select get_desc(tab2) FROM tab2;
ERROR: function get_desc(tab2) does not exist
LINE 1: select get_desc(tab2) FROM tab2;
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
I would like to be able to define a common function that does the same thing as get_desc but takes as argument a row from any of the three tables. Is there any way to do that?
Or alternatively is there some way to cast entire rows to a common row type that includes only a defined set of fields?
(I realize I could change the function arguments to just take XX.has_desc and XX.description but I'm trying to isolate which fields are used inside the function without needing to expand those in every place the function is called.)
You can create a cast:
CREATE CAST (tab2 AS tab1) WITH INOUT;
INSERT INTO tab2 (has_desc, x2, description) VALUES (True, 'Bar', 'From Tab2');
SELECT get_desc(tab2::tab1) FROM tab2;
get_desc
-----------
From Tab2
(1 row)
I'm adding an answer to show the complete way I solved this for posterity. But thanks to @klin for getting me pointed in the right direction. (One problem with @klin's bare CAST is that it doesn't produce the right row type when the two tables' common columns don't appear in the same relative position within their respective column lists.)
My solution adds a new custom TYPE (gdtab) containing the common fields, then a function that can convert from each source table's row type to the gdtab type, then adding a CAST to make each conversion implicit.
-- Common type for get_desc function
CREATE TYPE gdtab AS (
id INTEGER,
has_desc BOOLEAN,
description TEXT
);
CREATE FUNCTION get_desc(tab gdtab) RETURNS TEXT AS $$
SELECT CASE tab.has_desc
WHEN True THEN
tab.description
ELSE
'Default Description'
END;
$$ LANGUAGE SQL;
CREATE TABLE tab1 (
id SERIAL PRIMARY KEY,
has_desc BOOLEAN,
x1 TEXT,
description TEXT
);
-- Convert tab1 rowtype to gdtab type
CREATE FUNCTION tab1_as_gdtab(t tab1) RETURNS gdtab AS $$
SELECT CAST(ROW(t.id, t.has_desc, t.description) AS gdtab);
$$ LANGUAGE SQL;
-- Implicitly cast from tab1 to gdtab as needed for get_desc
CREATE CAST (tab1 AS gdtab) WITH FUNCTION tab1_as_gdtab(tab1) AS IMPLICIT;
CREATE TABLE tab2 (
id SERIAL PRIMARY KEY,
x2 TEXT,
x2x TEXT,
has_desc BOOLEAN,
description TEXT
);
CREATE FUNCTION tab2_as_gdtab(t tab2) RETURNS gdtab AS $$
SELECT CAST(ROW(t.id, t.has_desc, t.description) AS gdtab);
$$ LANGUAGE SQL;
CREATE CAST (tab2 AS gdtab) WITH FUNCTION tab2_as_gdtab(tab2) AS IMPLICIT;
Test usage:
INSERT INTO tab1 (has_desc, x1, description) VALUES (True, 'FooBlah', 'FooDesc'),
(False, 'BazBlah', 'BazDesc'),
(True, 'BarBlah', 'BarDesc');
INSERT INTO tab2 (has_desc, x2, x2x, description) VALUES (True, 'FooBlah', 'x2x', 'FooDesc'),
(False, 'BazBlah', 'x2x', 'BazDesc'),
(True, 'BarBlah', 'x2x', 'BarDesc');
SELECT get_desc(tab1) FROM tab1;
SELECT get_desc(tab2) FROM tab2;
PostgreSQL functions depend on the schema of the argument. If the different tables have different schemas, then you can always project them onto the common sub-schema needed by get_desc. This can be done in a quick and temporary fashion with a WITH clause before the get_desc use.
If this answer is too thin on details, just add a comment and I'll flesh out an example.
More details:
CREATE TABLE subschema_table ( has_desc boolean, description text ) ;
CREATE FUNCTION get_desc1(tab subschema_table) RETURNS TEXT AS $$
SELECT CASE tab.has_desc
WHEN True THEN
tab.description
ELSE
'Default Description'
END; $$ LANGUAGE SQL;
Now, the following will work (with other tables also):
WITH subschema AS (SELECT has_desc, description FROM tab1)
SELECT get_desc1(subschema) FROM subschema;
The VIEW method didn't work in my test (VIEWs don't seem to have the appropriate schema).
Maybe the other answer gives a better way.
I have a table created like
CREATE TABLE data
(value1 smallint references labels,
value2 smallint references labels,
value3 smallint references labels,
otherdata varchar(32)
);
and a second 'label holding' table created like
CREATE TABLE labels (id serial primary key, name varchar(32));
The rationale behind it is that value1-3 are a very limited set of strings (6 options) and it seems inefficient to enter them directly in the data table as varchar types. On the other hand these do occasionally change, which makes enum types unsuitable.
My question is, how can I execute a single query such that instead of the label IDs I get the relevant labels?
I looked at creating a function for it and stumbled at the point where I needed to pass the label holding table name to the function (there are several such (label holding) tables across the schema). Do I need to create a function per label table to avoid that?
create or replace function translate
(ref_id smallint,reference_table regclass) returns varchar(128) as
$$
begin
select name from reference_table where id = ref_id;
return name;
end;
$$
language plpgsql;
And then do
select
translate(value1, labels) as foo,
translate(value2, labels) as bar
from data;
This however errors out with
ERROR: relation "reference_table" does not exist
All suggestions welcome - at this point a can still alter just about anything...
CREATE TABLE labels
( id smallserial primary key
, name varchar(32) UNIQUE -- <<-- might want this, too
);
CREATE TABLE data
( value1 smallint NOT NULL REFERENCES labels(id) -- <<-- here
, value2 smallint NOT NULL REFERENCES labels(id)
, value3 smallint NOT NULL REFERENCES labels(id)
, otherdata varchar(32)
, PRIMARY KEY (value1,value2,value3) -- <<-- added primary key here
);
-- No need for a function here.
-- For small sizes of the `labels` table, the query below will always
-- result in hash-joins to perform the lookups.
SELECT l1.name AS name1, l2.name AS name2, l3.name AS name3
, d.otherdata AS the_data
FROM data d
JOIN labels l1 ON l1.id = d.value1
JOIN labels l2 ON l2.id = d.value2
JOIN labels l3 ON l3.id = d.value3
;
Note: labels.id -> labels.name is a functional dependency (id is the primary key), but that doesn't mean that you need a function. The query just acts like a function.
You can pass the label table name as a string, construct the query as a string, and execute it:
sql := 'select name from ' || reference_table_name || ' where id = ' || ref_id;
EXECUTE sql INTO name;
RETURN name;
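Spelled out as a complete function (a sketch assuming the translate signature from the question; interpolating a regclass value yields a safely quoted table name, so no manual quoting is needed):

create or replace function translate
(ref_id smallint, reference_table regclass) returns varchar(128) as
$$
declare
  result varchar(128);
begin
  execute format('select name from %s where id = $1', reference_table)
  into result using ref_id;
  return result;
end;
$$
language plpgsql;

It would then be called as translate(value1, 'labels'), passing the table name as a literal.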
How to make a new TYPE with ENUM that takes all of the members of an interval? Like:
CREATE TYPE letters [a...z]
There is no built-in syntax for this, but you can do it with dynamic SQL:
DO
$$
BEGIN
EXECUTE
(
SELECT 'CREATE TYPE a2z AS ENUM ('''
|| string_agg(chr(ascii('a') + g), ''',''')
|| ''')'
FROM generate_series(0,25) g
);
END
$$;
This builds and executes a statement of the form:
CREATE TYPE a2z AS ENUM ('a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z')
This depends on your locale, but I think that all locales have a-z in a continuous range, which is all that's needed for this. Tested with a UTF-8 locale.
Alternative for anything that won't easily fit into an ENUM
For long lists of values, values that tend to change or are not as simple as the example or for special data types etc. consider creating a small look-up table instead and use a foreign key constraint to it. Example for a selection of dates:
CREATE TABLE my_date (my_date date PRIMARY KEY);
INSERT INTO my_date(my_date) VALUES ('2015-02-03'), ...;
CREATE TABLE foo (
foo_id serial PRIMARY KEY
, my_date date REFERENCES my_date ON UPDATE CASCADE
--, more columns ...
);
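A quick sketch of how that constraint behaves with the example tables above: inserts referencing a listed date succeed, anything else is rejected.

INSERT INTO foo (my_date) VALUES ('2015-02-03');  -- OK, present in my_date
INSERT INTO foo (my_date) VALUES ('1999-01-01');  -- fails: foreign key violation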
I have PostgreSQL 8.4.
I have a table and I want to find strings in one column (character varying datatype) of this table using substrings (character varying datatype) returned by a subquery:
SELECT uchastki.kadnum
FROM uchastki
WHERE kadnum LIKE (
SELECT str
FROM test
WHERE str IS NOT NULL)
But I get an error:
ERROR: more than one row returned by a subquery used as an expression
In the field test.str I have strings like 66:07:21 01 001, and in uchastki.kadnum strings like 66:07:21 01 001:27.
How to find substring using results of subquery?
UPDATE
Table test:
CREATE TABLE test
(
id serial NOT NULL,
str character varying(255)
)
WITH (
OIDS=FALSE
);
ALTER TABLE test OWNER TO postgres;
Table uchastki:
CREATE TABLE uchastki
(
fid serial NOT NULL,
the_geom geometry,
id_uch integer,
num_opora character varying,
kod_lep integer,
kadnum character varying,
sq real,
kod_type_opora character varying,
num_f11s integer,
num_opisanie character varying,
CONSTRAINT uchastki_pkey PRIMARY KEY (fid),
CONSTRAINT enforce_dims_the_geom CHECK (st_ndims(the_geom) = 2)
)
WITH (
OIDS=FALSE
);
ALTER TABLE uchastki OWNER TO postgres;
Use LIKE ANY:
SELECT uchastki.kadnum
FROM uchastki
WHERE kadnum LIKE ANY(
SELECT str
FROM test
WHERE str IS NOT NULL)
Or perhaps:
SELECT uchastki.kadnum
FROM uchastki
WHERE kadnum LIKE ANY(
SELECT '%' || str || '%'
FROM test
WHERE str IS NOT NULL)
This is a nice feature: you can use different operators, for example = ANY (SELECT ...) or <> ALL (SELECT ...).
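For example, with the same tables (a sketch, not from the original answer), = ANY finds exact matches while <> ALL finds rows matching none of the subquery's values:

SELECT kadnum FROM uchastki
WHERE kadnum = ANY (SELECT str FROM test WHERE str IS NOT NULL);

SELECT kadnum FROM uchastki
WHERE kadnum <> ALL (SELECT str FROM test WHERE str IS NOT NULL);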
I'm going to take a wild stab in the dark and assume you mean that you want to match a string Sa from table A against one or more other strings S1 .. Sn from table B to find out if any of the other strings in S1 .. Sn is a substring of Sa.
A simple example to show what I mean (hint, hint):
Given:
CREATE TABLE tableA (string_a text);
INSERT INTO tableA(string_a) VALUES
('the manual is great'), ('Chicken chicken chicken'), ('bork');
CREATE TABLE tableB(candidate_str text);
INSERT INTO tableB(candidate_str) VALUES
('man'),('great'),('chicken');
I want the result set:
the manual is great
chicken chicken chicken
because the manual is great has man and great in it; and because chicken chicken chicken has chicken in it. There is no need to show the substring(s) that matched. bork doesn't match any substring so it is not found.
Here's a SQLFiddle with the sample data.
If so, shamelessly stealing @maniek's excellent suggestion, you would use:
SELECT string_a
FROM tableA
WHERE string_a LIKE ANY (SELECT '%'||candidate_str||'%' FROM tableB);
(Vote for @maniek please, I'm just illustrating how to clearly explain - I hope - what you want to achieve, sample data, etc.)
(Note: This answer was written before further discussion clarified the poster's actual intentions)
It would appear highly likely that there is more than one str in test where str IS NOT NULL. That's why more than one row is returned by the subquery used as an expression, and, thus, why the statement fails.
Run the subquery stand-alone to see what it returns and you'll see. Perhaps you intended it to be a correlated subquery but forgot the outer column-reference? Or perhaps there's a column also called str in the outer table and you meant to write:
SELECT uchastki.kadnum
FROM uchastki
WHERE kadnum LIKE (
SELECT test.str
FROM test
WHERE uchastki.str IS NOT NULL)
?
(Hint: Consistently using table aliases on column references helps to avoid name-clash confusion).
I'm trying to return nested data of this format from PostgreSQL into PHP associative arrays.
[
'person_id': 1,
'name': 'My Name',
'roles': [
[ 'role_id': 1, 'role_name': 'Name' ],
[ 'role_id': 2, 'role_name': 'Another role name' ]
]
]
It seems like it could be possible using composite types. This answer describes how to return a composite type from a function, but it doesn't deal with an array of composite types. I'm having some trouble with arrays.
Here are my tables and types:
CREATE TEMP TABLE people (person_id integer, name text);
INSERT INTO "people" ("person_id", "name") VALUES
(1, 'name!');
CREATE TEMP TABLE roles (role_id integer, person_id integer, role_name text);
INSERT INTO "roles" ("role_id", "person_id", "role_name") VALUES
(1, 1, 'role name!'),
(2, 1, 'another role');
CREATE TYPE role AS (
"role_name" text
);
CREATE TYPE person AS (
"person_id" int,
"name" text,
"roles" role[]
);
My get_people() function parses fine, but there are runtime errors. Right now I'm getting the error: array value must start with "{" or dimension information
CREATE OR REPLACE FUNCTION get_people()
RETURNS person[] AS $$
DECLARE myroles role[];
DECLARE myperson people%ROWTYPE;
DECLARE result person[];
BEGIN
FOR myperson IN
SELECT *
FROM "people"
LOOP
SELECT "role_name" INTO myroles
FROM "roles"
WHERE "person_id" = myperson.person_id;
result := array_append(
result,
(myperson.person_id, myperson.name, myroles::role[])::person
);
END LOOP;
RETURN result;
END; $$ LANGUAGE plpgsql;
UPDATE in reply to Erwin Brandstetter's question at the end of his answer:
Yeah, I could return a SETOF a composite type. I've found SETs are easier to deal with than arrays, because SELECT queries return SETs. The reason I'd rather return a nested array is because I think representing nested data as a set of rows is a little awkward. Here's an example:
person_id | person_name | role_name | role_id
-----------+-------------+-----------+-----------
1 | Dilby | Some role | 1978
1 | Dilby | Role 2 | 2
2 | Dobie | NULL | NULL
In this example, person 1 has 2 roles, and person 2 has none. I'm using a structure like this for another one of my PL/pgSQL functions. I wrote a brittle PHP function that converts record sets like this into nested arrays.
This representation works fine, but I'm worried about adding more nested fields to this structure. What if each person also has a group of jobs? Statuses? etc. My conversion function will have to become more complicated. The representation of the data will be complicated as well. If a person has n roles, m jobs, and o statuses, that person fills max(n, m, o) rows, with person_id, person_name, and whatever other data they have uselessly duplicated in the extra rows. I'm not at all worried about performance, but I want to do this the simplest way possible. Of course.. maybe this is the simplest way!
I hope this helps to illustrate why I'd rather deal directly with nested arrays in PostgreSQL. And of course I'd love to hear any suggestions you have.
And for anyone dealing with PostgreSQL composite types with PHP, I've found this library to be really useful for parsing PostgreSQL's array_agg() output in PHP: https://github.com/nehxby/db_type. Also, this project looks interesting: https://github.com/chanmix51/Pomm
Consider this (improved and fixed) test case, tested with PostgreSQL 9.1.4:
CREATE SCHEMA x;
SET search_path = x, pg_temp;
CREATE TABLE people (person_id integer primary key, name text);
INSERT INTO people (person_id, name) VALUES
(1, 'name1')
,(2, 'name2');
CREATE TABLE roles (role_id integer, person_id integer, role_name text);
INSERT INTO roles (role_id, person_id, role_name) VALUES
(1, 1, 'role name!')
,(2, 1, 'another role')
,(3, 2, 'role name2!')
,(4, 2, 'another role2');
CREATE TYPE role AS (
role_id int
,role_name text
);
CREATE TYPE person AS (
person_id int
,name text
,roles role[]
);
Function:
CREATE OR REPLACE FUNCTION get_people()
RETURNS person[] LANGUAGE sql AS
$func$
SELECT ARRAY (
SELECT (p.person_id, p.name
,array_agg((r.role_id, r.role_name)::role))::person
FROM people p
JOIN roles r USING (person_id)
GROUP BY p.person_id
ORDER BY p.person_id
)
$func$;
Call:
SELECT get_people();
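If you'd rather see one person per row instead of a single array value, the array can be expanded in FROM (a sketch; unnest also works on arrays of composite types):

SELECT * FROM unnest(get_people());

This returns the columns person_id, name, and roles, one row per person.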
Clean up:
DROP SCHEMA x CASCADE;
Core features are:
A much simplified function that only wraps a plain SQL query.
Your key mistake was that you took role_name text from table roles and treated it as type role, which it isn't.
I'll let the code speak for itself; there is too much to explain in detail here.
This is very advanced stuff and I am not sure you really need to return this nested type. Maybe there is a simpler way, like a SET of not nested complex type?