Name keyword in postgresql - postgresql

Why does this work, or better yet, what does it represent (replace table with an existing table)?
select table.name from table
And where is it documented (PostgreSQL)?

select table.name from table
is equivalent to
select name(table) from table
which, since name is a type, is equivalent to
select cast(table as name) from table
Here, table acts as a row variable containing all the columns from the respective table, so you will get a text representation of the row.
This is not directly documented, since it's a combination of several obscure features (some dating back to PostQUEL). In fact, this usage has been disallowed in PostgreSQL 9.1 (see the release notes under "Casting").

The name type is documented in the string types. Hidden columns are documented (off the top of my head) in the section on system catalogs, or in the internals.
You can see hidden columns in pg_attribute. Their numbers are negative:
select attnum, attname
from pg_attribute
where attrelid = 'yourtable'::regclass
That should reveal xmin, xmax, ctid, oid (where applicable), etc. Might this also yield name? (I can't test on the iPad.)

Related

How to Link 2 Sheets that have the same fields

I am looking for some help with linking 2 sheets that share a number of filters I have set up, but which sit in separate tables. The reason is that I have a number of aggregated columns that are different for the 2 tables, and I want to keep these separate as I will be building more sheets as I go along.
The filters that are the same within the 2 sheets are the following:
we_date
product
manager
patch
Through the data manager I managed to create an association between the 2 tables for we_date, but from reading on this site and other searches on Google, I can't make any further associations between these tables, and this is where I am stuck.
The 2 sheets now let me filter on we_date, but if I use the filters for product, manager or patch, nothing happens on my 2nd sheet as those fields are not linked.
Currently in my data load editor I have 2 sections of select queries like the following:
Table1
QUALIFY *;
w:
SELECT
*
FROM
table1
;
UNQUALIFY *;
Table2
QUALIFY *;
w_c:
SELECT
*
FROM
table2
;
UNQUALIFY *;
I would really appreciate if somebody could advise a fix on the issue I am having.
In Qlik, field names of identical values from different tables are automatically associated.
When you're calling Qualify *, you're actually renaming all field names and explicitly saying NOT to associate.
Take a look at the Qlik Sense documentation on Qualify *:
The automatic join between fields with the same name in different
tables can be suspended by means of the qualify statement, which
qualifies the field name with its table name. If qualified, the field
name(s) will be renamed when found in a table. The new name will be in
the form of tablename.fieldname. Tablename is equivalent to the label
of the current table, or, if no label exists, to the name appearing
after from in LOAD and SELECT statements.
We can use as to manually reassign field names.
SELECT customer_id, private_info as "private_info_1", favorite_dog from table1;
SELECT customer_id, private_info as "private_info_2", car from table2;
Or, we can correctly use Qualify. Example:
table1 and table2 both have a customer_id field and a private_info field. We want customer_id to be the associative field, and private_info not to be. We would use QUALIFY on private_info, which Qlik would then rename based on table name.
QUALIFY private_info;
SELECT * from table1;
SELECT * from table2;
The following field names would then be: customer_id (associated), and table1.private_info, and table2.private_info

Postgres subquery has access to column in a higher level table. Is this a bug? or a feature I don't understand?

I don't understand why the following doesn't fail. How does the subquery have access to a column from a different table at the higher level?
drop table if exists temp_a;
create temp table temp_a as
(
select 1 as col_a
);
drop table if exists temp_b;
create temp table temp_b as
(
select 2 as col_b
);
select col_a from temp_a where col_a in (select col_a from temp_b);
/*why doesn't this fail?*/
The following fail, as I would expect them to.
select col_a from temp_b;
/*ERROR: column "col_a" does not exist*/
select * from temp_a cross join (select col_a from temp_b) as sq;
/*ERROR: column "col_a" does not exist
*HINT: There is a column named "col_a" in table "temp_a", but it cannot be referenced from this part of the query.*/
I know about the LATERAL keyword (link, link) but I'm not using LATERAL here. Also, this query succeeds even in pre-9.3 versions of Postgres (when the LATERAL keyword was introduced.)
Here's a sqlfiddle: http://sqlfiddle.com/#!10/09f62/5/0
Thank you for any insights.
Although this feature might be confusing, without it, several types of queries would be more difficult, slower, or impossible to write in SQL. This feature is called a "correlated subquery", and the correlation can serve a similar function to a join.
For example: Consider this statement
select first_name, last_name from users u
where exists (select * from orders o where o.user_id=u.user_id)
This query gets the names of all the users who have ever placed an order. Now, I know, you can get that info using a join to the orders table, but you'd also have to use a "distinct", which would internally require a sort and would likely perform a tad worse than this query. You could also produce a similar result with a group by.
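A minimal sketch of this pattern using Python's sqlite3, which follows the same standard correlated-subquery scoping (the table names and data here are invented for illustration):

```python
import sqlite3

# In-memory database with illustrative users/orders tables
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (user_id INTEGER, first_name TEXT, last_name TEXT);
    CREATE TABLE orders (order_id INTEGER, user_id INTEGER);
    INSERT INTO users  VALUES (1, 'Ann', 'Lee'), (2, 'Bob', 'Ray');
    INSERT INTO orders VALUES (10, 1), (11, 1);  -- only Ann has orders
""")

# Correlated subquery: o.user_id = u.user_id references the outer row,
# so the EXISTS test is re-evaluated per user.
rows = conn.execute("""
    SELECT first_name, last_name FROM users u
    WHERE EXISTS (SELECT * FROM orders o WHERE o.user_id = u.user_id)
""").fetchall()
print(rows)  # [('Ann', 'Lee')] -- Bob has no orders, so he is filtered out
```

Note that no "distinct" is needed even though Ann has two orders: EXISTS only asks whether at least one matching row exists.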
Here's a better example that's pretty practical, and not just for performance reasons. Suppose you want to delete all users who have no orders and no tickets.
delete from users u where
not exists (select * from orders o where o.user_id = u.user_id)
and not exists (select * from tickets t where t.user_id = u.user_id)
One very important thing to note is that you should fully qualify or alias your table names when doing this or you might wind up with a typo that completely messes up the query and silently "just works" while returning bad data.
The following is an example of what NOT to do.
select * from users
where exists (select * from product where last_updated_by=user_id)
This looks just fine until you look at the tables and realize that the "product" table has no "last_updated_by" field while the users table does, so the predicate compares two columns of the outer users row and the query silently returns the wrong data. Add the alias and the query will fail, because no "last_updated_by" column exists in product.
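This failure mode can be sketched with Python's sqlite3, which applies the same standard name-resolution rules (table contents here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users   (user_id INTEGER, last_updated_by INTEGER);
    CREATE TABLE product (product_id INTEGER);  -- note: no last_updated_by
    INSERT INTO users   VALUES (1, 1), (2, 99);
    INSERT INTO product VALUES (7);
""")

# Unqualified last_updated_by silently resolves to the OUTER users row,
# so the predicate compares users.last_updated_by to users.user_id:
# no error, just wrong data (only user 1, where the two happen to match).
bad = conn.execute("""
    SELECT user_id FROM users
    WHERE EXISTS (SELECT * FROM product WHERE last_updated_by = user_id)
""").fetchall()
print(bad)  # [(1,)]

# Qualifying both sides exposes the mistake immediately:
try:
    conn.execute("""
        SELECT user_id FROM users u
        WHERE EXISTS (SELECT * FROM product p
                      WHERE p.last_updated_by = u.user_id)
    """)
except sqlite3.OperationalError as e:
    print(e)  # no such column: p.last_updated_by
```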
I hope this has given you some examples that show how to use this feature. I use correlated subqueries all the time in update and delete statements (as well as in selects, but I find an absolute need for them most often in updates and deletes).

Why are my view's columns nullable?

I'm running PostgreSQL 9.2 on Windows.
I have an existing table with some non-nullable columns:
CREATE TABLE testtable
(
bkid serial NOT NULL,
bklabel character varying(128),
lacid integer NOT NULL
);
Then I create a view on this table:
CREATE OR REPLACE VIEW test AS
SELECT testtable.bkid, testtable.lacid
from public.testtable;
I'm surprised that information_schema.columns for the view reports is_nullable as YES for the selected columns:
select * from information_schema.columns where table_name = 'test'
Reports :
"MyDatabase";"public";"test";"bkid";1;"";"YES";"integer";;;32;2;0;;"";;"";"";"";"";"";"";"";"";"";"MyDatabase";"pg_catalog";"int4";"";"";"";;"1";"NO";"NO";"";"";"";"";"";"";"NEVER";"";"NO"
"MyDatabase";"public";"test";"lacid";2;"";"YES";"integer";;;32;2;0;;"";;"";"";"";"";"";"";"";"";"";"MyDatabase";"pg_catalog";"int4";"";"";"";;"2";"NO";"NO";"";"";"";"";"";"";"NEVER";"";"NO"
Is it an expected behavior ?
My problem is that I'm trying to import such views in an Entity Framework Data Model and it fails because all columns are marked as nullable.
EDIT 1 :
The following query :
select attrelid, attname, attnotnull, pg_class.relname
from pg_attribute
inner join pg_class on attrelid = oid
where relname = 'test'
returns :
attrelid;attname;attnotnull;relname
271543;"bkid";f;"test"
271543;"lacid";f;"test"
As expected, attnotnull is 'false'.
As @Mike-Sherrill-Catcall suggested, I could manually set them to true:
update pg_attribute
set attnotnull = 't'
where attrelid = 271543
And the change is reflected in the information_schema.columns :
select * from information_schema.columns where table_name = 'test'
Output is :
"MyDatabase";"public";"test";"bkid";1;"";"NO";"integer";;;32;2;0;;"";;"";"";"";"";"";"";"";"";"";"MyDatabase";"pg_catalog";"int4";"";"";"";;"1";"NO";"NO";"";"";"";"";"";"";"NEVER";"";"NO"
"MyDatabase";"public";"test";"lacid";2;"";"NO";"integer";;;32;2;0;;"";;"";"";"";"";"";"";"";"";"";"MyDatabase";"pg_catalog";"int4";"";"";"";;"2";"NO";"NO";"";"";"";"";"";"";"NEVER";"";"NO"
I'll try to import the views in the Entity Framework data model.
EDIT 2 :
As guessed, it works, the view is now correctly imported in the Entity Framework Data Model.
Of course, I won't set all columns to be non nullable, as demonstrated above, only those non nullable in the underlying table.
I believe this is expected behavior, but I don't pretend to fully understand it. The columns in the base table seem to have the right attributes.
The column in the system tables underlying the information_schema here is pg_attribute.attnotnull. I see only one thread referring to "attnotnull" on the pgsql-hackers listserv: cataloguing NOT NULL constraints. (It's probably worth researching further.)
You can see the behavior with this query. You'll need to work with the WHERE clause to get exactly what you need to see.
select attrelid, attname, attnotnull, pg_class.relname
from pg_attribute
inner join pg_class on attrelid = oid
where attname like 'something%'
On my system, columns that have a primary key constraint and columns that have a NOT NULL constraint have "attnotnull" set to 't'. The same columns in a view have "attnotnull" set to 'f'.
If you tilt your head and squint just right, that kind of makes sense. The column in the view isn't declared NOT NULL. Just the column in the base table.
The column pg_attribute.attnotnull is updatable. You can set it to TRUE, and that change seems to be reflected in the information_schema views. Although you can set it to TRUE directly, I think I'd be more comfortable setting it to match the value in the base table. (And by more comfortable, I don't mean to imply I'm comfortable at all with mucking about in the system tables.)
Why:
A view column can be computed from a table column, and that computation can produce a NULL value from what is otherwise a non-null column. So basically, they put it into the too-hard basket.
There is a way to see the underlying nullability for yourself with the following query:
select vcu.column_name, c.is_nullable, c.data_type
from information_schema.view_column_usage vcu
join information_schema."columns" c
on c.column_name = vcu.column_name
and c.table_name = vcu.table_name
and c.table_schema = vcu.table_schema
and c.table_catalog = vcu.table_catalog
where view_name = 'your_view_here'
If you know that you are only projecting the columns raw, without functions, then this will work. Ideally, the Postgres provider for EF would use this view and also read the view definition to confirm nullability.
The nullability tracking in PostgreSQL is not developed very much at all. In most places, it will default to claiming everything is potentially nullable, which is in many cases allowed by the relevant standards. This is the case here as well: Nullability is not tracked through views. I wouldn't rely on it for an application.

Mentioning column names in select for derived tables

I have a doubt:
select *
from
(
select *
from
(
select User_Id, User_Name, Password
from <table> T
where IsActive = 1
) k
) m
In this case, is it required to mention column names in the other 2 select statements?
Mentioning columns is always better than keeping *,
but what is the use in the top 2 selects above, as we are getting the selected columns from the derived tables?
There's no need to mention each column instead of doing SELECT * FROM. However, if you don't need all columns you can optimize by selecting only the columns you need: SELECT a, b, c FROM.
There's no value added or optimization in having two nested SELECT * without any sort of calculation. Here's an article on Transact-SQL Derived Tables where I recommend you check out the Advantages of SQL derived tables section. There's a good example in there.
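A minimal sqlite3 sketch (the accounts table is invented for illustration) showing that only the innermost select needs the column list; the outer SELECT *s simply pass it through:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (User_Id INTEGER, User_Name TEXT, Password TEXT,
                           IsActive INTEGER);
    INSERT INTO accounts VALUES (1, 'ann', 'x', 1), (2, 'bob', 'y', 0);
""")

# The inner select fixes the column list; each outer SELECT * just
# re-exposes the derived table's columns unchanged.
rows = conn.execute("""
    SELECT * FROM (
        SELECT * FROM (
            SELECT User_Id, User_Name, Password
            FROM accounts WHERE IsActive = 1
        ) k
    ) m
""").fetchall()
print(rows)  # [(1, 'ann', 'x')]
```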

Capture Value In Table Upper Case Or Lower case

I want to
SELECT * FROM table1 WHERE name = 'petter'
Now, there are many forms of petter in the table, like PETTER, Petter and petter.
I want all three (PETTER, Petter, petter) to be matched. Which command does this in Cognos Report Studio,
or in DB2, without using the IN function?
I think you want UPPER (or LOWER, the effect should be the same):
SELECT *
FROM table1
WHERE UPPER(name) = 'PETTER'
Remember, though, that if you have an index on name, this query won't be able to use it. You can create an index on that expression (at least if you're on z/OS). On other platforms, you can create a generated column and create an index on that.
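A sqlite3 sketch of the case-folded comparison (the data is illustrative, and the expression-index syntax shown is SQLite's; DB2's equivalent differs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (name TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?)",
                 [("PETTER",), ("Petter",), ("petter",), ("maria",)])

# Fold one side to a single case so every spelling matches the literal
rows = conn.execute(
    "SELECT name FROM table1 WHERE UPPER(name) = 'PETTER'").fetchall()
print(rows)  # all three spellings of petter; maria is excluded

# An expression index keeps the folded comparison indexable
conn.execute("CREATE INDEX idx_name_upper ON table1 (UPPER(name))")
```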