I've created some views in my Postgres database. I know they're there, because I can query them through the query tool in PGAdmin4 (and they persist across restarts of the machine hosting the database), but they are neither visible in the schema browser nor queryable through psycopg2.
For larger context, I'm trying to extract some text from a large collection of documents which are stored in a database. (The database is a copy of the data received from a third party, and fully normalized, etc.) I'd like to do my NLP nonsense in Python, while defining a lot of document categorizations through SQL views so the categorizations are consistent, persistent, and broadly shareable to my team.
Googling has not turned up anything relevant here, so I'm wondering if there is a basic configuration issue that I've missed. (I am much more experienced with SQL Server than with Postgres.)
Example:
[Assume I'm connected to database DB, schema SC, which has tables T1, T2, T3.]
-- in PGAdmin4 window
CREATE VIEW v_my_view AS
SELECT T1.field1, T2.field2
FROM T1
JOIN T2 ON T1.field3 = T2.field3;
After restarting the host machine (so it's definitely a new PGAdmin session), the following works:
-- in pgadmin4 window
SELECT *
FROM v_my_view
-- 123456 results returned
...but even though that works, in the PGAdmin4 browser panel the 'Views' folder is empty (right underneath the Tables folder, which proudly shows T1 and T2).
Within psycopg2:
import psycopg2
import pandas as pd
sqluser = 'me'
sqlpwd = 'secret'
dbname = 'DB'
schema_name = 'SC'
pghost = 'localhost'
def q(query):
    cnxn = psycopg2.connect(dbname=dbname, user=sqluser, password=sqlpwd, host=pghost)
    cursor = cnxn.cursor()
    cursor.execute('SET search_path TO ' + schema_name)
    return pd.read_sql_query(query, cnxn)
view_query = """select *
from v_my_view
limit 100;"""
table_query = """select *
from SC.T1
limit 100;"""
# This works
print(f"Result: {q(table_query)}")
# This does not; error is: relation 'v_my_view' does not exist
# (Same result if view is prefixed with schema name)
# print(f"Result: {q(view_query)}")
Software versions:
pgadmin 4.23
postgres: I'm connected to 10.13 (Ubuntu 10.13-1-pgdg18.04+1), though 12 is also installed.
psycopg2: 2.8.5
Turns out this was a noob mistake. Views are created in the first schema of the search path (which can be checked by executing show search_path;). In my case the search path was set to "$user", public despite my attempt to set it to the appropriate schema name, so the views were getting created in a different schema from the one I was working with, where the tables were defined.
Created views are all visible in the left-hand browser once I look under the correct schema.
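For anyone hitting the same thing: you can ask the catalog directly where a view actually landed. A minimal sketch with psycopg2, reusing the connection details from the question (information_schema.views is standard PostgreSQL; v_my_view is the view from the example above):
import psycopg2

# Check the effective search_path and the schema the view actually lives in.
cnxn = psycopg2.connect(dbname='DB', user='me', password='secret', host='localhost')
cursor = cnxn.cursor()

cursor.execute("SHOW search_path;")
print(cursor.fetchone())  # e.g. ('"$user", public',)

cursor.execute(
    "SELECT table_schema FROM information_schema.views WHERE table_name = %s;",
    ('v_my_view',))
print(cursor.fetchall())  # e.g. [('public',)] rather than [('SC',)]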
The following modification to the psycopg2 code returns the expected results:
import psycopg2
import pandas as pd
sqluser = 'me'
sqlpwd = 'secret'
dbname = 'DB'
schema_name = 'SC'
pghost = 'localhost'
def q(query):
    cnxn = psycopg2.connect(dbname=dbname, user=sqluser, password=sqlpwd, host=pghost)
    cursor = cnxn.cursor()
    cursor.execute('SET search_path TO ' + schema_name)
    return pd.read_sql_query(query, cnxn)
# NOTE I am explicitly indicating the 'public' schema here
view_query = """select *
from public.v_my_view
limit 100;"""
table_query = """select *
from SC.T1
limit 100;"""
# This works
print(f"Result: {q(table_query)}")
# This works too once I specify the right schema:
print(f"Result: {q(view_query)}")
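A way to avoid the mismatch entirely is to schema-qualify the view at creation time, so the search_path never matters. A sketch with the same connection parameters as above (the table, view, and schema names are from the example):
import psycopg2

# Create the view with an explicit schema so it cannot silently land in 'public'.
cnxn = psycopg2.connect(dbname='DB', user='me', password='secret', host='localhost')
cursor = cnxn.cursor()
cursor.execute("""
    CREATE VIEW SC.v_my_view AS
    SELECT T1.field1, T2.field2
    FROM SC.T1
    JOIN SC.T2 ON T1.field3 = T2.field3;
""")
cnxn.commit()  # psycopg2 starts a transaction implicitly, so DDL needs a commit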
Try Refresh Object on the PGAdmin toolbar. This should refresh the view list.
I am using SQL Developer version 21 (recently installed). I can't access a view's SQL/definition from the SQL tab; I can see the view text from the "Details" tab but not from the "SQL" tab.
I don't have admin privileges.
The same user can view view SQL from SQL Developer version 18...
In older versions of SQL Developer we had a 'try to generate DDL' method for when DBMS_METADATA.GET_DDL() wasn't available.
This wasn't a maintainable position. The 'internal generator' has multiple issues, and we decided to deprecate it.
In order to see the DDL for an object, you need the DBMS_METADATA package to be available to your user for said object.
What SQL Developer runs to get you the DDL for a VIEW is approximately:
SELECT dbms_metadata.get_ddl('VIEW', :name, :owner)
FROM dual
UNION ALL
SELECT dbms_metadata.get_ddl('TRIGGER', trigger_name, owner)
FROM dba_triggers
WHERE table_owner = :owner
  AND table_name = :name
UNION ALL
SELECT dbms_metadata.get_dependent_ddl('COMMENT', table_name, owner)
FROM (
    SELECT table_name, owner
    FROM dba_col_comments
    WHERE owner = :owner
      AND table_name = :name
      AND comments IS NOT NULL
    UNION
    SELECT table_name, owner
    FROM sys.dba_tab_comments
    WHERE owner = :owner
      AND table_name = :name
      AND comments IS NOT NULL
)
In a development environment, a developer should have full access to their application, and I would extend that to the data dictionary. It's another reason I advocate developers have their own private database (Docker/VirtualBox/Cloud/whatever).
If that fails, consult your data model.
If you don't have a data model, that's another problem.
If that fails, you do have the workaround of checking the Details panel for a view to get the underlying SQL.
Just FYI, I searched for an answer to this problem and found no actual solutions.
thatjeffsmith was correct that earlier versions of SQL Developer do not have this issue or the requirement of higher privileges to view the SQL tab. However, the link he provided was for version 20.4, and it still did not display the SQL tab correctly. I reverted back to 3.1.07 (which I happened to be using prior to upgrading my laptop), and using the same login to the same instance it does display the SQL for views, full definition, without issue. This is against a 12c Oracle database.
I've just installed the VS Code extension (Oracle Developer Tools for VS Code (SQL and PLSQL)) and successfully connected to the db.
The db resides on AWS.
I can connect to the db and just wanted to test it by opening an existing view.
But it only lets me "describe" the view, so I can see the columns, whereas I need to edit the query statement.
What's missing? Or is the problem the AWS part?
I usually use SQL Developer, but I'm also interested in backing up the work via git commits, and I like the way the "Git Graph" extension presents the changes.
DDL view_name
Or
SELECT text_vc
FROM dba_views
WHERE owner = :schema
  AND view_name = :view_name;
With help from someone of the Oracle community I managed to get it working.
Basic query is:
select
dbms_metadata.get_ddl('VIEW', 'VIEW_NAME', 'VIEW_OWNER')
from
dual;
So, in my case it is:
select
dbms_metadata.get_ddl('VIEW', 'ALL_DATA_WAREHOUSE_BOSTON', 'WHB')
from
dual;
Owner is the name you fill in when connecting to the database, i.e. the username from the username/password pair.
If you are not sure who the owner of the view is, check it with this query:
select owner from ALL_VIEWS where VIEW_NAME ='ALL_DATA_WAREHOUSE_BOSTON';
I'm connected to an IBM DB2 database using Oracle SQL Developer, and I'm querying several tables in order to perform an automated extraction of data. The issue is that I can't set aliases for my results. I tried a lot of variants, like adding quotes ("", [], ''), and it's not working. I saw several tutorials where everyone uses "AS" only, but for me it's not working. Any recommendations? Thanks!
Image as example here: https://i.stack.imgur.com/5NrED.png
My code is:
SELECT
"A"."TC_SHIPMENT_ID" AS SHIPMENT_ID,
"A"."CLAIM_ID",
B.DESCRIPTION CLAIM_CLASSIFICATION,
C.DESCRIPTION CLAIM_CATEGORY,
D.DESCRIPTION CLAIM_TYPE,
F.DESCRIPTION CLAIM_STATUS
FROM CLAIMS A
INNER JOIN CLAIM_CLASSIFICATION B ON A.CLAIM_CLASSIFICATION = B.CLAIM_CLASSIFICATION
INNER JOIN CLAIM_CATEGORY C ON A.CLAIM_CATEGORY = C.CLAIM_CATEGORY
INNER JOIN CLAIM_TYPE D ON A.CLAIM_TYPE = D.CLAIM_TYPE
INNER JOIN CLAIM_STATUS F ON A.CLAIM_STATUS = F.CLAIM_STATUS;
TL;DR: append the connection attribute(s) to the database name, bounded by : and ;
When creating a new DB2 connection: in the 'New/Select Database Connection' dialog box, click the DB2 tab, and in the field labelled 'Database' enter your database name followed by a colon, followed by your property=value (connection attribute), followed by a semicolon.
When you want to alter the properties of an existing DB2 connection, right click that DB2-connection icon and choose properties, and adjust the database name in the same pattern as above, then test and save.
For example, in my case the database name is SAMPLE and if I want the application to show the correlation-ID names from my queries then I use for the database-name:
SAMPLE:useJDBC4ColumnNameAndLabelSemantics=No;
The same labels for result sets as given in my queries then appear in the Query Result pane of Oracle SQL Developer.
Tested with DB2 v11.1.2.2 with db2jcc4.jar and Oracle SQL Developer 17.2.0.188
I am writing a simple Python program to connect to and display results from a Postgres table; this is on AWS RDS. I have table mytest in the public schema.
import psycopg2

connection = psycopg2.connect(dbname='some_test',
                              user='user1',
                              host='localhost',
                              password='userpwd',
                              port=postgres_port)
cursor = connection.cursor()
cursor.execute("SET SEARCH_PATH TO public;")
cursor.execute("SELECT * FROM mytest;")
But this throws an error
psycopg2.ProgrammingError: relation "mytest" does not exist
LINE 1: SELECT * FROM mytest;
Connection is successful and I can query other base tables like
SELECT table_name FROM information_schema.tables
It is just that I cannot change to any other schema. I googled and tried all kinds of SET SEARCH_PATH variants, committing and recreating the cursor, etc., but no use. I cannot query any other schema.
ALTER USER username SET search_path = schema1,schema2;
After setting this, the query works fine!
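To confirm the per-role setting took effect, reconnect and inspect the effective search_path before querying; ALTER USER ... SET search_path only applies to sessions started after the change. A minimal sketch reusing the connection details from the question (port omitted, assuming the default):
import psycopg2

# New sessions for this role now start with the search_path set via ALTER USER.
connection = psycopg2.connect(dbname='some_test', user='user1',
                              host='localhost', password='userpwd')
cursor = connection.cursor()

cursor.execute("SHOW search_path;")
print(cursor.fetchone())  # should list the schemas given to ALTER USER

cursor.execute("SELECT * FROM mytest LIMIT 5;")
print(cursor.fetchall())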
I want to change the size of a String column in my PostgreSQL database through alembic.
My first attempt in my local DB was the more straightforward way and the one that seemed logical:
Change the size of the db.Column field I wanted to resize, and configure alembic to look for type changes as stated here: add the compare_type=True parameter to the context.configure() call in my alembic/env.py. Then run alembic revision --autogenerate, which correctly generated a file calling alter_column (sketched below).
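For reference, the env.py change and the kind of migration autogenerate emits look roughly like this (a sketch; the item/details names and sizes are the ones used later in this question):
# alembic/env.py (excerpt): compare_type=True makes autogenerate detect
# column type changes, such as a String length change.
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    compare_type=True,
)

# The autogenerated migration file then contains, roughly:
import sqlalchemy as sa
from alembic import op

def upgrade():
    op.alter_column('item', 'details',
                    existing_type=sa.String(length=200),
                    type_=sa.String(length=1000))

def downgrade():
    op.alter_column('item', 'details',
                    existing_type=sa.String(length=1000),
                    type_=sa.String(length=200))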
This seemed to be OK, but the alembic upgrade head was taking so long because of the ALTER COLUMN that I cancelled the execution and looked for other solutions; if it takes that long on my computer, I guess it would take long on the Heroku server too, and I'm not going to have my service paused until the operation finishes.
So I came up with a quite hacky solution that worked perfectly on my machine:
I created my update statement in an alembic file both for upgrade and downgrade:
def upgrade():
    # Widen item.details by editing the catalog directly:
    # for varchar, atttypmod stores the declared length plus 4.
    connection = op.get_bind()
    connection.execute(
        "UPDATE pg_attribute SET atttypmod = 1000 + 4 "
        "WHERE attrelid = 'item'::regclass AND attname = 'details'")

def downgrade():
    connection = op.get_bind()
    connection.execute(
        "UPDATE pg_attribute SET atttypmod = 200 + 4 "
        "WHERE attrelid = 'item'::regclass AND attname = 'details'")
And it worked really fast on my machine. But when pushing it to my staging app on Heroku and executing the upgrade, it prompted ERROR: permission denied for relation pg_attribute. The same happens if I try to execute the update statement directly in psql. I guess this is intentional on Heroku's part, and I am not supposed to update those kinds of tables, since I could make the database malfunction by doing it wrong. I guess forcing that update on Heroku is not the way to go.
I have also tried creating a new temporary column, copying all the data from the old, small column into that temp one, deleting the old column, creating a new one with the same name as the old one but with the desired size, copying the data back from the temp column, and then deleting it, this way:
def upgrade():
    connection = op.get_bind()
    op.add_column('item', sa.Column('temp_col', sa.String(200)))
    connection.execute("UPDATE item SET temp_col = details")
    op.drop_column('item', 'details')
    op.add_column('item', sa.Column('details', sa.String(1000)))
    connection.execute("UPDATE item SET details = temp_col")
    op.drop_column('item', 'temp_col')

def downgrade():
    connection = op.get_bind()
    op.add_column('item', sa.Column('temp_col', sa.String(1000)))
    connection.execute("UPDATE item SET temp_col = details")
    op.drop_column('item', 'details')
    op.add_column('item', sa.Column('details', sa.String(200)))
    connection.execute("UPDATE item SET details = temp_col")
    op.drop_column('item', 'temp_col')
But it also takes ages and doesn't seem to be a really neat way to do it.
So my question is: what is the correct way to resize a string column in PostgreSQL on Heroku through alembic, without having to wait ages for the ALTER COLUMN to execute?