The following query gives different results when executed in the AdventureWorks context and in the tempdb context.
SELECT o.type_desc,
       OBJECT_NAME(m.object_id) AS name,
       definition
FROM [AdventureWorks].sys.sql_modules m
INNER JOIN [AdventureWorks].sys.objects o ON m.object_id = o.object_id;
When you execute it in the tempdb context, the name column contains NULLs instead of the real object names.
What's the reason for this problem?
By default, OBJECT_NAME looks up objects in the current database, as set by the USE statement. Pass a database ID explicitly to get object information for a specific database:
OBJECT_NAME(m.object_id, DB_ID('AdventureWorks')) AS [name]
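Applied to the original query, the corrected version reads:
SELECT o.type_desc,
       OBJECT_NAME(m.object_id, DB_ID('AdventureWorks')) AS [name],
       definition
FROM [AdventureWorks].sys.sql_modules m
INNER JOIN [AdventureWorks].sys.objects o ON m.object_id = o.object_id;
This returns the object names regardless of which database context the query runs in.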
I need to get the column names and data types of the table returned by any function that has a 'record' return type, because...
A key process in an existing SQL Server-based system makes use of a stored procedure that takes a user-defined function as a parameter. An initial step gets the column names and types of the table returned by the function that was passed as a parameter.
In Postgres 13 I can use pg_proc.prorettype and the corresponding pg_type to find functions that return record types; that's a start. I can also use pg_get_function_result() to get a string containing the information I need. But it's a string, and while I will ultimately have to assemble a very similar string, that is just one application of the information. Is there a tabular equivalent containing (column_name, data_type, ordinal_position), or do I need to build that myself?
Is there access to a composite data type the system may have created when such a function is created?
One option that I think will work for me, though it feels a little weird, is to:
create temp table t as select * from function() limit 0;
then look that table up in information_schema.columns, assemble what I need, and drop the temp table, putting all of this into a function.
You can query the catalog table pg_proc, which contains all the required information:
SELECT coalesce(p.na, 'column' || p.i),
       p.ty::regtype,
       p.i
FROM pg_proc AS f
CROSS JOIN LATERAL unnest(
          coalesce(f.proallargtypes, ARRAY[f.prorettype]),
          f.proargmodes,
          f.proargnames
       ) WITH ORDINALITY AS p(ty, mo, na, i)
WHERE f.proname = 'interval_ok'
  AND coalesce(p.mo, 'o') IN ('o', 't')
ORDER BY p.i;
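To illustrate, assume a function like this (the definition is hypothetical; only the name interval_ok comes from the query above):
CREATE FUNCTION interval_ok()
RETURNS TABLE (id integer, label text, created timestamptz)
LANGUAGE sql AS $$SELECT 1, 'x', now()$$;
The catalog query then returns one row per output column: (id, integer, 1), (label, text, 2) and (created, timestamp with time zone, 3).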
I'm using PostgreSQL with a Go driver. Sometimes I need to query fields that may not exist, just to check whether something like that exists in the database. Before querying, I can't tell whether the field exists. Example:
where size=10 or length=10
By default I get the error column "length" does not exist; however, the size column could exist, and I could get some results.
Is it possible to handle such cases to return what is possible?
EDIT:
Yes, I could get all the existing columns first. But the initial queries can be rather complex and are not created by me directly; I can only modify them.
That means the query can be as simple as the previous example, or much more complex, like this:
WHERE size=10 OR (length=10 AND n='example') OR (c BETWEEN 1 and 5 AND p='Mars')
If the missing columns are length and c, does that mean I have to parse the SQL, split it by OR (or other operators), check every part of the query, remove every part with missing columns, and finally generate a new SQL query?
Any easier way?
I would check the information schema first:
select column_name
from INFORMATION_SCHEMA.COLUMNS
where table_name = 'table_name';
and then build the query based on that result.
Why don't you first get a list of the columns that exist in the table? Like this:
select column_name
from information_schema.columns
where table_name = 'table_name'
  and (column_name = 'size' or column_name = 'length');
The result will be the columns that exist.
There is no way to do what you want, except for constructing an SQL string from the list of available columns, which you can get by querying information_schema.columns.
SQL statements are parsed before they are executed, and there is no conditional compilation and no short-circuiting, so you get an error whenever a non-existing column is referenced.
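A minimal sketch of that approach in PL/pgSQL, assuming a hypothetical table things and the predicates from the first example:
DO $$
DECLARE
    cond text := '';
    sql  text;
BEGIN
    -- keep only the predicates whose columns actually exist
    IF EXISTS (SELECT 1 FROM information_schema.columns
               WHERE table_name = 'things' AND column_name = 'size') THEN
        cond := 'size = 10';
    END IF;
    IF EXISTS (SELECT 1 FROM information_schema.columns
               WHERE table_name = 'things' AND column_name = 'length') THEN
        cond := cond || CASE WHEN cond <> '' THEN ' OR ' ELSE '' END || 'length = 10';
    END IF;
    -- a real version would also handle the case where no column exists
    sql := 'SELECT * FROM things WHERE ' || cond;
    RAISE NOTICE 'constructed: %', sql;  -- EXECUTE sql would run it
END$$;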
I solved this problem with a function and I like my solution, but I want to know whether there is a way to solve it without using functions. Here is the situation:
There are four tables that are relevant to this:
entities: entities of the system (tenants)
members: members of an entity
member_sets: sets of members
members_and_sets: table to join members and sets (many to many)
The member_sets table has a column named bits, which is a binary representation of the set. For example, if an entity has 5 members and a specific set contains the third member, the value of the bits column is 00100. The entity has three special kinds of sets: universe, empty and unit; their binary representations are 11111, 00000 and 10000 respectively, assuming the unit set contains the first member.
The challenge is keeping this binary representation up to date: whenever a member is added to the entity, all binary representations must be updated. This is easy to do with a trigger and a function; my solution is this:
CREATE FUNCTION setbits(INTEGER) RETURNS VARBIT AS
$$SELECT STRING_AGG(belongs, '')::VARBIT AS setbits FROM (
    SELECT LEAST(COALESCE(members_and_sets.set_id, 0), 1)::text AS belongs
    FROM members LEFT JOIN members_and_sets
      ON members.id = members_and_sets.member_id
     AND members_and_sets.set_id = $1
    GROUP BY members.id, members_and_sets.set_id
    ORDER BY members.id
) AS bitstring;$$
LANGUAGE SQL
RETURNS NULL ON NULL INPUT;
-- calling this function in a trigger after inserting a new member:
UPDATE member_sets ms
SET bits=setbits(ms.id)
WHERE ms.entity_id=NEW.entity_id;
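For context, this is how that UPDATE might be wired into an actual trigger; the wrapper and trigger names here are hypothetical:
CREATE FUNCTION refresh_set_bits() RETURNS trigger
LANGUAGE plpgsql AS
$$BEGIN
    UPDATE member_sets ms
    SET bits = setbits(ms.id)
    WHERE ms.entity_id = NEW.entity_id;
    RETURN NEW;
END;$$;
CREATE TRIGGER members_refresh_bits
AFTER INSERT ON members
FOR EACH ROW EXECUTE FUNCTION refresh_set_bits();  -- use EXECUTE PROCEDURE before PostgreSQL 11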
Now my question is: can I do this without using a function? I tried with a CTE, but apparently I'm too much of a novice to accomplish this; I couldn't pass the set_id to the innermost query, so my solution was to wrap the query in a function and pass the set_id as an argument to the function. Again, this solution works perfectly; I just want to know whether there is a way to do this without a function call.
As your function body is simply a SELECT, you should be able to replace the function call with a correlated subquery:
UPDATE member_sets ms
SET bits = (
    SELECT STRING_AGG(belongs, '')::VARBIT AS setbits FROM (
        SELECT LEAST(COALESCE(members_and_sets.set_id, 0), 1)::text AS belongs
        FROM members LEFT JOIN members_and_sets
          ON members.id = members_and_sets.member_id
         AND members_and_sets.set_id = ms.id
        GROUP BY members.id, members_and_sets.set_id
        ORDER BY members.id
    ) AS bitstring
)
WHERE ms.entity_id = NEW.entity_id;
This question is specific to libpqxx.
Given an SQL statement like the following:
string s = "SELECT a.foo, b.bar FROM tableOne a, tableTwo b WHERE a.X=b.X";
and sending it to a pqxx transaction:
trans.exec(s.c_str(), s.c_str());
What names will the columns have in the result field?
In other words, assuming 1 row is selected:
pqxx::result::const_iterator row = result.begin();
int foo = row->at(FOO_COLUMN).as<int>();
int bar = row->at(BAR_COLUMN).as<int>();
What values should FOO_COLUMN and BAR_COLUMN have? Would they be "a.foo" and "b.bar", respectively?
If the SQL statement renamed the variables using the "as" keyword, then I suppose the column name would be whatever "as" set it to, is that right?
Normally I would just run the SQL and print the column names, but as I am developing both the software and the database itself, that test is not easy to do right now.
Thanks!
The names are going to be foo and bar. If there were aliases in the query, the aliases would be returned, and the original names would be lost.
Column names in results never include table names.
If they were named tablename.colname, it would be ambiguous anyway, because SELECT 1 AS "foo.colname" is valid and produces a column named foo.colname despite the fact that there is no foo table.
The way to identify the table from which a column originates, when it applies, is to call pqxx::result::column_table(column_number) which returns the oid of the table. The table's name is not available directly but could be queried in pg_class with the oid.
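For example (the oid value shown is hypothetical):
SELECT relname FROM pg_class WHERE oid = 16384;  -- oid obtained from column_table()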
Also note that column names don't have to be unique within a resultset. SELECT 1 AS a, 2 AS a is valid and produces two columns with exactly the same name.
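Applied to the query from the question, the result set therefore has exactly two columns named foo and bar:
SELECT a.foo, b.bar FROM tableOne a, tableTwo b WHERE a.X = b.X;
-- result columns: foo, bar (the table qualifiers are never part of the name)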
Trying to be lazy here. Looking at an example:
SELECT realestate.address, realestate.parcel, s.sale_year, s.sale_amount
FROM realestate INNER JOIN
     dblink('dbname=somedb port=5432 host=someserver
             user=someuser password=somepwd',
            'SELECT parcel_id, sale_year,
             sale_amount FROM parcel_sales')
     AS s(parcel_id char(10), sale_year int, sale_amount int)
     ON realestate.parcel = s.parcel_id;  -- join condition (assumed)
Is there a way of getting the AS section filled in from the table?
I'm copying data between tables of the same name and structure on different servers.
If I can get the structure copied from the existing table, it will save me a lot of time.
Thanks
Bruce
The answer is no. See the documentation:
Since dblink can be used with any query, it is declared to return record, rather than specifying any particular set of columns. This means that you must specify the expected set of columns in the calling query — otherwise PostgreSQL would not know what to expect.
http://www.postgresql.org/docs/9.1/static/contrib-dblink-function.html
Edit: by the way, for a table or a view, you can get the field names and types in a first query:
select column_name, data_type
from information_schema.columns
where table_name = 'your_table_or_view';
You could then use it to fill the fields declaration.
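For example, this sketch generates the whole column declaration list for the AS clause from the local table's definition (assuming the local parcel_sales has the same structure as the remote one):
SELECT string_agg(a.attname || ' ' || format_type(a.atttypid, a.atttypmod),
                  ', ' ORDER BY a.attnum)
FROM pg_attribute a
WHERE a.attrelid = 'parcel_sales'::regclass
  AND a.attnum > 0
  AND NOT a.attisdropped;
-- returns e.g.: parcel_id character(10), sale_year integer, sale_amount integer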
Alexis