I've run into a strange issue. I have a SQL view written with a CTE. When I create the view in AWS Redshift using Aginity Workbench or SQL Workbench, the CTE is automatically converted into subqueries in the saved DDL.
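For illustration (the table and column names here are made up, not my real schema), a view defined like this:

CREATE VIEW recent_orders AS
WITH recent AS (
    SELECT order_id, customer_id
    FROM orders
    WHERE order_date > current_date - 30
)
SELECT r.order_id, c.name
FROM recent r
JOIN customers c ON c.id = r.customer_id;

comes back from the saved DDL looking roughly like this, with the CTE folded into a derived table:

CREATE VIEW recent_orders AS
SELECT r.order_id, c.name
FROM (
    SELECT order_id, customer_id
    FROM orders
    WHERE order_date > current_date - 30
) r
JOIN customers c ON c.id = r.customer_id;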
I would like to retain the CTE version of the SQL in the DDL.
Why would Redshift do this, and how do I avoid it?
Thanks.
I am new to PostgreSQL and working in DBeaver. I have created a procedure that modifies, among other things, a temp table. I would like to print out the table for testing purposes, to see what is in it at that point.
In T-SQL I could just execute "select * from MyTestTable" inside the procedure; the result was output to the SQL Server Management Studio grid tab, and it did not break the procedure.
Now on Postgres I am using DBeaver and get errors when I try to use the same approach.
A question for experienced PostgreSQL users: how do you cope with this? Is there any way to peek into the middle of a procedure and see what data is in a table at a given moment? If not, how do you debug large, complicated procedures without the ability to look at the produced data in a grid?
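One common workaround, for what it's worth: emit rows from inside the procedure (or an anonymous DO block) with RAISE NOTICE, which DBeaver can show in its server output panel. A minimal sketch, with my_test_table standing in for the real temp table:

DO $$
DECLARE
    rec record;
BEGIN
    -- Print every row of the table as a NOTICE message.
    FOR rec IN SELECT * FROM my_test_table LOOP
        RAISE NOTICE '%', rec;
    END LOOP;
END $$;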
I am quite new to this:
My destination is a Postgres table, and I want to update two fields (col1, col2) based on a column value from another SQL Server table (when postgres_table.a = sqlserver_table.b).
I know this could easily be done with an OLE DB Command; however, since my destination is a Postgres table that I connect to via ODBC, the OLE DB Command won't work in this case.
Any thoughts on this?
It's a bit hacky, but how about using a Foreach Loop and an Execute SQL Task?
First read the values into an object variable (use an Execute SQL Task with a full result set for this). Then use that variable as the source for the Foreach Loop, and use another Execute SQL Task inside the loop to send an UPDATE to Postgres with the correct values, as sketched below.
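A minimal sketch of the statement the inner Execute SQL Task could run against the Postgres (ODBC) connection, assuming the Foreach Loop maps the current row's values into SSIS variables bound as positional ? parameters (the variable names in the comments are placeholders):

UPDATE postgres_table
SET col1 = ?,   -- mapped from e.g. User::NewCol1
    col2 = ?    -- mapped from e.g. User::NewCol2
WHERE a = ?;    -- mapped from the matching sqlserver_table.b value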
After changes to some Terraform code, I can no longer access the data I've added to an Aurora (PostgreSQL) database. The data gets added as expected without errors in the logs, but I can't find it after connecting to the database with the AWS RDS Query Editor.
I have added thousands of rows with Python code that uses the SQLAlchemy/PostgreSQL engine object to insert a batch of rows from a mappings dictionary, like so:
if (count % batch_size) == 0:
    self.engine.execute(Building.__table__.insert(), mappings)
    self.session.commit()
The logs from this data ingest show no errors, and the commits all appear to have completed successfully. So the data was inserted somewhere; I just can't work out where, as it's not showing up in the AWS Console RDS Query Editor. I ran the SQL below to find the table, and it returned zero rows:
SELECT * FROM information_schema.tables WHERE table_name = 'buildings'
This worked as expected before (i.e. I could see the data in the Aurora database via the Query Editor), so I'm trying to work out which of the recently modified Terraform settings has caused the issue.
Where else can I look to find where the data was inserted, assuming that it was actually inserted somewhere? If I can work that out it may help reveal the culprit.
I suspect misleading capitalization. Like "Buildings". Search again with:
SELECT * FROM information_schema.tables WHERE table_name ~* 'building';
Or:
SELECT * FROM pg_catalog.pg_tables WHERE tablename ~* 'building';
Or maybe your target wasn't a table? You can "write" to simple views. Check with:
SELECT * FROM pg_catalog.pg_class WHERE relname ~* 'building';
None of this is specific to RDS. It's the same in plain Postgres.
If the last query returns nothing, you are in the wrong database. (You are aware that there can be multiple databases in one DB cluster?) Or you have a serious problem.
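To double-check which database you are actually connected to, and what else exists in the cluster:

SELECT current_database();
SELECT datname FROM pg_catalog.pg_database WHERE NOT datistemplate;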
See:
How to check if a table exists in a given schema
Are PostgreSQL column names case-sensitive?
After logging more information about the connection, I discovered that the database name being used was incorrect: I had been querying the Aurora instance with the wrong database name. Once I used the correct database name, the SELECT statements in the AWS RDS Query Editor worked as expected.
I want to search for a value in all columns of all tables in my database. I have done this before in SQL Server, but I don't know how to do it in DB2.
There is a pretty good (free) SQL tool, SQL Workbench, which has this functionality.
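If you would rather do it in plain SQL, one sketch (assuming you are searching for a string value; SEARCH_VALUE and the schema filter are placeholders to adjust) is to generate one probe query per character column from the DB2 catalog, then run the generated statements:

SELECT 'SELECT ''' || RTRIM(TABSCHEMA) || '.' || RTRIM(TABNAME) || '.' || RTRIM(COLNAME) ||
       ''' FROM ' || RTRIM(TABSCHEMA) || '.' || RTRIM(TABNAME) ||
       ' WHERE ' || RTRIM(COLNAME) || ' = ''SEARCH_VALUE'''
FROM SYSCAT.COLUMNS
WHERE TYPENAME IN ('VARCHAR', 'CHARACTER')
  AND TABSCHEMA NOT LIKE 'SYS%';

Each row of the result is itself a SELECT that, when it returns anything, tells you which schema.table.column contains the value.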
I am developing a Windows application that uses Postgres as the backend database. At some point my application dynamically creates tables, e.g. Table1, then Table2, and so on, so I end up with many dynamic tables in my database. Now I provide a "Clean Database" button, and I need to remove all those dynamic tables with a SQL query. Could someone guide me on how to write a SQL query that automatically deletes all such tables?
You should just be able to say
DROP TABLE {tablename}
for each dynamically created table. Try that and see if it works.
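If the dynamic tables follow a recognizable naming pattern, here is a sketch that finds and drops them all in one go (assuming they live in the public schema and were created unquoted as Table1, Table2, ..., so the names are stored lowercase; adjust the regex to your actual pattern):

DO $$
DECLARE
    t text;
BEGIN
    -- Collect every table in public whose name matches the dynamic pattern.
    FOR t IN
        SELECT tablename
        FROM pg_catalog.pg_tables
        WHERE schemaname = 'public'
          AND tablename ~ '^table[0-9]+$'
    LOOP
        EXECUTE format('DROP TABLE public.%I', t);  -- %I quotes the identifier safely
    END LOOP;
END $$;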