IBM DB2 9.7, how to explicitly define current schema in SQL - db2

I would like to force the current schema to be the same as the current user. It seems possible to set the current schema in DB2 9.7 with the statement:
SET SCHEMA '...'
If the schema is to be set to the same name as the user, is it then:
SET SCHEMA USER?
How do I then refer to that schema when e.g. calling a stored procedure?

You can set the schema...
        .-CURRENT-.           .-=-.
>>-SET--+---------+--SCHEMA--+---+--+-schema-name-----+--------><
                                    +-USER------------+
                                    +-SESSION_USER----+
                                    +-SYSTEM_USER-----+
                                    +-CURRENT_USER----+
                                    +-host-variable---+
                                    '-string-constant-'
http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/admin/r0001016.htm
But for stored procedures you have to use SET PATH, which controls how unqualified routine names are resolved...
          .-CURRENT-.          .-=-.
>>-SET--+-+---------+--PATH-+--+---+---------------------------->
        '-CURRENT_PATH------'
     .-,------------------------.
     V                          |
>----+-schema-name----------+-+--------------------------------><
     +-SYSTEM PATH----------+
     +-USER-----------------+
     +-+-CURRENT PATH-+-----+
     | '-CURRENT_PATH-'     |
     +-CURRENT PACKAGE PATH-+
     +-host-variable--------+
     '-string-constant------'
http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/admin/r0001014.htm
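Putting the two statements together, a minimal session setup might look like the sketch below. MY_PROC is a hypothetical procedure name used only for illustration; the register names come from the syntax diagrams above.

```sql
-- Resolve unqualified table references against the schema named after the current user
SET SCHEMA USER;

-- Resolve unqualified routine references the same way, keeping the system path
-- so that built-in functions are still found
SET PATH = SYSTEM PATH, USER;

-- An unqualified call now resolves against the user's schema
-- (MY_PROC is a hypothetical procedure)
CALL MY_PROC(42);
```

With CURRENT SCHEMA and CURRENT PATH both pointing at the user's schema, the procedure call needs no explicit schema qualifier.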


Why would postgres views not be visible to PGadmin browser, Psycopg2?

I've created some views in my postgres database. I know they're there, because I can query them through the query tool in PGAdmin4 (and they are persistent between restarting the machine hosting the database), but they are neither visible in the schema browser nor queryable through psycopg2.
For larger context, I'm trying to extract some text from a large collection of documents which are stored in a database. (The database is a copy of the data received from a third party, and fully normalized, etc.) I'd like to do my NLP nonsense in Python, while defining a lot of document categorizations through SQL views so the categorizations are consistent, persistent, and broadly shareable to my team.
Googling has not turned up anything relevant here, so I'm wondering if there is a basic configuration issue that I've missed. (I am much more experienced with SQL Server than with Postgres.)
Example:
[Assume I'm connected to database DB, schema SC, which has tables T1, T2, T3.]
-- in PGAdmin4 window
CREATE VIEW v_my_view as
SELECT T1.field1, T2.field2
FROM T1
JOIN T2
on T1.field3 = T2.field3
After restarting the host machine (so it's definitely a new PGAdmin session), the following works:
-- in pgadmin4 window
SELECT *
FROM v_my_view
-- 123456 results returned
...but even though that works, in the pgadmin4 browser panel, the 'views' folder is empty (right underneath the tables folder that proudly shows T1 and T2).
Within psycopg2:
import psycopg2
import pandas as pd
sqluser = 'me'
sqlpwd = 'secret'
dbname = 'DB'
schema_name = 'SC'
pghost = 'localhost'
def q(query):
    cnxn = psycopg2.connect(dbname=dbname, user=sqluser, password=sqlpwd, host=pghost)
    cursor = cnxn.cursor()
    cursor.execute('SET search_path to ' + schema_name)
    return pd.read_sql_query(query, cnxn)
view_query = """select *
from v_my_view
limit 100;"""
table_query = """select *
from SC.T1
limit 100;"""
# This works
print(f"Result: {q(table_query)}")
# This does not; error is: relation 'v_my_view' does not exist
# (Same result if view is prefixed with schema name)
# print(f"Result: {q(view_query)}")
Software versions:
pgadmin 4.23
postgres: I'm connected to 10.13 (Ubuntu 10.13-1-pgdg18.04+1), though 12
is also installed.
psycopg2: 2.8.5
Turns out this was a beginner mistake. Views are created in the first schema of the search_path (which can be checked by executing show search_path; in my case it was set to "$user", public despite my attempt to set it to the appropriate schema name). So the views were being created in a different schema from the one I was working with, where the tables were defined.
Created views are all visible in the left-hand browser once I look under the correct schema.
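To avoid depending on search_path at view-creation time, the view can be schema-qualified explicitly. The schema and table names below are the ones from the question:

```sql
-- See where unqualified objects will be created
SHOW search_path;

-- Create the view explicitly in the intended schema
CREATE VIEW SC.v_my_view AS
SELECT T1.field1, T2.field2
FROM SC.T1
JOIN SC.T2 ON T1.field3 = T2.field3;
```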
The following modification to the psycopg2 code returns the expected results:
import psycopg2
import pandas as pd
sqluser = 'me'
sqlpwd = 'secret'
dbname = 'DB'
schema_name = 'SC'
pghost = 'localhost'
def q(query):
    cnxn = psycopg2.connect(dbname=dbname, user=sqluser, password=sqlpwd, host=pghost)
    cursor = cnxn.cursor()
    cursor.execute('SET search_path to ' + schema_name)
    return pd.read_sql_query(query, cnxn)
# NOTE I am explicitly indicating the 'public' schema here
view_query = """select *
from public.v_my_view
limit 100;"""
table_query = """select *
from SC.T1
limit 100;"""
# This works
print(f"Result: {q(table_query)}")
# This works too once I specify the right schema:
print(f"Result: {q(view_query)}")
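An alternative that avoids issuing SET search_path on every connection is to persist the setting at the role or database level. The role and schema names below come from the question; run these once with sufficient privileges:

```sql
-- Persist the search_path for this user; applies to all new sessions
ALTER ROLE me SET search_path = SC, public;

-- Or persist it for every connection to this database
ALTER DATABASE "DB" SET search_path = SC, public;
```

New connections then pick up the setting automatically, so the explicit SET in the Python helper becomes unnecessary.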
Try the Refresh option on the PGAdmin toolbar (or right-click the schema in the browser and choose Refresh). This should refresh the view list.
Thanks,
Amar

Error while loading Raster data to a Postgres Table

I am getting an error while loading raster data into a Postgres table:
ERROR: function st_bandmetadata(public.raster, integer[]) does not exist
LINE 1: SELECT array_agg(pixeltype)::text[] FROM st_bandmetadata($1...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
QUERY: SELECT array_agg(pixeltype)::text[] FROM st_bandmetadata($1, ARRAY[]::int[]);
CONTEXT: SQL function "_raster_constraint_pixel_types" during inlining
COPY elevation_hi, line 1: "1 01000001006A98816335DA4E3F6A98816335DA4EBFA2221ECF131C64C0FEE6DF13C4963640000000000000000000000000..."
Any idea on this?
I have below extensions present in the database where I am trying to create the table:
CREATE EXTENSION postgis
CREATE EXTENSION postgis_topology
CREATE EXTENSION fuzzystrmatch
Please help me with this if you have any idea.
The search_path set for the database is:
search_path
----------------------------------------------
"$user", public, rasters, postgis, pg_catalog
PostGIS version:
SELECT postgis_version();
postgis_version
---------------------------------------
2.1 USE_GEOS=1 USE_PROJ=1 USE_STATS=1
Please let me know if you have faced this issue before.
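One way to check whether st_bandmetadata exists at all, and which schema it lives in, is to query the system catalogs. This is a diagnostic sketch, not a guaranteed fix:

```sql
-- List every schema that contains a function named st_bandmetadata
SELECT n.nspname AS schema, p.proname AS function
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.proname = 'st_bandmetadata';
```

If the function's schema does not appear in the search_path shown above, that would explain why the loader cannot resolve it.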

Is there a way to describe an external/spectrum table via redshift?

In AWS Athena you can write
SHOW CREATE TABLE my_table_name;
and see a SQL-like query that describes how to build the table's schema. It works for tables whose schemas are defined in AWS Glue. This is very useful for creating tables in a regular RDBMS, for loading, and for exploring data views.
Interacting with Athena in this way is manual, and I would like to automate the process of creating regular RDBMS tables that have the same schema as those in Redshift Spectrum.
How can I do this through a query that can be run via psql? Or is there another way to get this via the aws-cli?
Redshift Spectrum does not support SHOW CREATE TABLE syntax, but there are system tables that can deliver the same information. I have to say, it's not as useful as the ready-to-use SQL returned by Athena, though.
The tables are
svv_external_schemas - gives you information about glue database mapping and IAM roles bound to it
svv_external_tables - gives you the location information, and also data format and serdes used
svv_external_columns - gives you the column names, types and order information.
Using that data, you could reconstruct the table's DDL.
For example to get the list of columns and their types in the CREATE TABLE format one can do:
select distinct
    listagg(columnname || ' ' || external_type, ',\n')
        within group ( order by columnnum ) over ()
from svv_external_columns
where tablename = '<YOUR_TABLE_NAME>'
  and schemaname = '<YOUR_SCHEMA_NAME>'
The query gives you output similar to:
col1 int,
col2 string,
...
*) I am using the listagg window function and not the aggregate function because, apparently, the listagg aggregate function can only be used with user-defined tables. Bummer.
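To turn that column list into a complete statement, the fragment can be wrapped in CREATE TABLE text. my_table is a placeholder name, and the window form of listagg is kept for the reason noted in the footnote:

```sql
select distinct
    'CREATE TABLE my_table (' || chr(10)
    || listagg(columnname || ' ' || external_type, ',' || chr(10))
           within group ( order by columnnum ) over ()
    || chr(10) || ');'
from svv_external_columns
where tablename = '<YOUR_TABLE_NAME>'
  and schemaname = '<YOUR_SCHEMA_NAME>';
```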
I had been doing something similar to #botchniaque's answer in the past, but recently stumbled across a solution in the AWS-Labs' amazon-redshift-utils code package that seems to be more reliable than my hand-spun queries:
amazon-redshift-utils: v_generate_external_tbl_ddl
If you don't have the ability to create a view backed with the ddl listed in that package, you can run it manually by removing the CREATE statement from the start of the query. Assuming you can create it as a view, usage would be:
SELECT ddl
FROM admin.v_generate_external_tbl_ddl
WHERE schemaname = '<external_schema_name>'
-- Optionally include specific table references:
-- AND tablename IN ('<table_name_1>', '<table_name_2>', ..., '<table_name_n>')
ORDER BY tablename, seq
;
They have since added SHOW EXTERNAL TABLE:
SHOW EXTERNAL TABLE external_schema.table_name [ PARTITION ]
SHOW EXTERNAL TABLE my_schema.my_table;
https://docs.aws.amazon.com/redshift/latest/dg/r_SHOW_EXTERNAL_TABLE.html

Setting a User's Valid Until date in the future

I was trying to see if there is a way to automatically set a user's VALID UNTIL value to three months in the future without having to type out the literal date. I tried the following:
alter user rchung set valuntil = dateadd(day,90,GETDATE());
alter user rchung set valuntil = select dateadd(day,90,GETDATE());
both failed with a syntax error.
alter user rchung valid until dateadd(day,90,GETDATE());
also failed with a syntax error.
Anyone have any success with this?
TIA,
Rich
It appears that this is a limitation on the PostgreSQL side.
CREATE USER, like pretty much all utility statements in Postgres,
won't do any expression evaluation --- the parameters have to be
simple literal constants.
VALID UNTIL programmatically in SQL
Since Amazon Redshift doesn't support plpgsql the way PostgreSQL does, client-side scripting is really the only option. If you're using a semi-modern version (9.3+) of psql, the following works (note that the query is not terminated with a semicolon, so \gset sends it and stores the result in a psql variable):
select dateadd(day,90,GETDATE()) as expiry \gset
alter user myuser valid until :'expiry';

Why is FORMAT function not recognised?

I have executed the following query:
SELECT productid,
FORMAT(productid, 'd10') AS str_productid
FROM Production.Products;
It says that 'FORMAT' is not a recognized built-in function name.
I am using the TSQL2012 database and Microsoft SQL Server 2012 Express.
Can someone tell me what is wrong? Does the Express version not include the FORMAT function?
Try this. FORMAT is only recognized when the database compatibility level is 110 (SQL Server 2012) or higher:
ALTER DATABASE database_name
SET COMPATIBILITY_LEVEL = 110
here are details
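Before changing anything, the current level can be checked from the catalog. TSQL2012 is the database named in the question:

```sql
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'TSQL2012';
```

A compatibility_level below 110 would explain why FORMAT is not recognized.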