I'd like to query a PostgreSQL view via Presto. I've found some old links/notes suggesting it should be possible. Unfortunately, no views are listed by the SHOW TABLES command, and when I tried to query data with select cnt from umcount I got Table 'public.umcount' not found. How do I query views, then?
I'm using Presto 0.205
You can try out these queries first:
show schemas from <connector_name>
show tables from <connector_name>.<schema_name>
If everything works fine, then use these prefixes in your SELECT queries.
For example:
select * from <connector_name>.<schema_name>.<table_name> limit 10
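Applied to the original question, and assuming the PostgreSQL connector's catalog is named postgresql (the catalog name is whatever your .properties file in etc/catalog is called, so substitute your own), the view should be reachable with its fully qualified name:

```sql
-- "postgresql" is an assumed catalog name; replace it with yours
SHOW SCHEMAS FROM postgresql;
SHOW TABLES FROM postgresql.public;
SELECT cnt FROM postgresql.public.umcount;
```

If the view still does not appear, check which schema it actually lives in; also note that whether views are listed at all depends on the connector version, as some older Presto releases' JDBC connectors only exposed base tables.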
I am new to postgres. I am trying to make the following query dynamic for different tables.
SELECT DISTINCT jsonb_object_keys(elements -> 'elements') AS individual_element
FROM Table1
I have 30 different tables on which I need to run this query, but I'm not sure how to do that in Postgres (using pgAdmin 4). Is this possible? Please help.
Thanks
-MS
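One way to run the same query over many tables is a PL/pgSQL DO block that builds the statement dynamically with format() and EXECUTE; pgAdmin 4 can run it as a script. This is a sketch only: the table names in the ARRAY are placeholders for the 30 real ones.

```sql
DO $$
DECLARE
  t text;
  k text;
BEGIN
  -- placeholder list; replace with your 30 table names
  FOREACH t IN ARRAY ARRAY['table1', 'table2', 'table3']
  LOOP
    -- %I quotes each table name safely as an identifier
    FOR k IN EXECUTE format(
      'SELECT DISTINCT jsonb_object_keys(elements -> ''elements'') FROM %I', t)
    LOOP
      RAISE NOTICE '%: %', t, k;  -- print table name and key
    END LOOP;
  END LOOP;
END $$;
```

RAISE NOTICE writes to the messages tab; if you want the keys as a result set instead, the same loop can INSERT them into a temporary table.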
Queries that I can easily run in SQL Workbench just don't work in Tableau: it's literally one Java error after another... rant over.
One thing I've noticed is that Tableau keeps wrapping queries in an additional SELECT that Athena doesn't recognise. I thought I could overcome this using Athena views, but that doesn't seem to work either.
When I do the following in Tableau:
SELECT count(distinct uuid), category
FROM "pregnancy_analytics"."final_test_parquet"
GROUP BY category
I get the following in Athena, which throws an error (SYNTAX_ERROR: line 1:8: Column 'tableausql._col0' cannot be resolved). As I say, it looks like Tableau is trying to "nest" the SELECT:
SELECT "TableauSQL"."_col0" AS "xcol0"
FROM (
SELECT count(distinct uuid)
FROM "pregnancy_analytics"."final_test_parquet"
WHERE category = ''
LIMIT 100
) "TableauSQL"
LIMIT 10000
NB: The error, as I said above, arises because Tableau wraps another SELECT around this query, referencing a column that doesn't exist, and as such Athena raises an error.
Starting to feel like Tableau is not a good fit with Athena? Is there a better suggestion perhaps?
Thanks!
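One thing worth checking before giving up on the pairing: the error names _col0, which is the auto-generated name Athena assigns to an unaliased aggregate. Giving the aggregate an explicit alias inside an Athena view, and then pointing Tableau at that view instead of at custom SQL, may let Tableau's wrapper SELECT resolve the column. A sketch, where the view name is made up and the table and columns are taken from the question:

```sql
CREATE OR REPLACE VIEW pregnancy_analytics.category_uuid_counts AS
SELECT
  category,
  count(DISTINCT uuid) AS distinct_uuids  -- explicit alias avoids "_col0"
FROM "pregnancy_analytics"."final_test_parquet"
GROUP BY category;
```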
For my project I want to create a table from the data fetched from a view. I am using the basic statement:
CREATE TABLE TABLE_NAME AS (SELECT * FROM VIEW_NAME) ;
The problem is that around 3 crores (30 million) of rows will be fetched from that view, and since the view joins many tables and applies many conditions, its performance is a bit slow. When I try the basic syntax above, the session times out after a while and the statement fails. Is there an alternative way to do this?
An alternative is to use PostgreSQL's COPY command, but you will have to create the table schema prior to copying.
So the actual sequence is:
CREATE TABLE yourtable AS (SELECT * FROM view) WITH NO DATA;
COPY (SELECT * FROM view) TO '/some/path/dump.csv';  -- adjust the file path
COPY yourtable FROM '/some/path/dump.csv';
You can follow the provided link for advanced options to increase the performance of the COPY command. Hope this helps.
I am not able to view my query results when I run queries in the Hive shell (even though they execute successfully).
When I run "show tables;" it displays the list of tables as below:
hive> show tables;
OK
bucketed_users
logs
managed_table
records
student
students
tweets
user
but when I run any query, it executes without displaying any output.
E.g.:
hive> select * from students;
OK
Time taken: 0.164 seconds
Are there any settings required to print output to my console, or is there an issue with my Hive shell? Please help with this.
I created the table one day before I ran the SELECT query, and discovered that there was no data in the students table; I don't know how the data got lost.
When I dropped and recreated the students table, I was able to run all queries on it fine.
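For anyone hitting the same symptom: OK followed immediately by Time taken, with no rows in between, simply means the query returned zero rows. A quick way to confirm the table is empty:

```sql
SELECT count(*) FROM students;  -- 0 means the table has no data
```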
Could anyone offer a solution to speed up one of our processes? We have a view, used for reporting, that is a UNION ALL of 10 tables; the view has 180 million rows. We would like to generate a list of the distinct values of individual columns. The SQL currently generated by the reporting tool does a SELECT DISTINCT on the view, which takes 10 minutes. Preferably, the solution would update automatically. We have been trying to create an MQT in DB2 UDB V8 as a UNION ALL with REFRESH IMMEDIATE, with little success. Any suggestions would be greatly appreciated.
Charles.
There are a lot of restrictions on REFRESH IMMEDIATE MQTs in DB2 8.2, and they can have a significant performance impact on applications that write to the base tables. That said, you may be able to use an MQT. However, instead of using SELECT DISTINCT, try making the query look something like:
select yourcolumn, count(*) as ignore
from union_all_view
group by yourcolumn
The column (yourcolumn) must be defined as NOT NULL for this to work (in DB2 8.2). The optimizer may not select this MQT if you still issue SELECT DISTINCT against the union all view, so you may need to query the MQT (or a view defined on top of it) directly. Ignore the column "ignore" in the MQT; it is there only for DB2. If you really don't want to see it, you can create a view on top of the MQT.
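For reference, a REFRESH IMMEDIATE MQT along those lines might be declared as below. This is a sketch only: the MQT name is made up, yourcolumn and union_all_view are the placeholder names used above, and DB2 8.2's restrictions may force the fullselect to reference the base tables (with the union all written inline) rather than the view.

```sql
CREATE TABLE distinct_vals_mqt AS (
  SELECT yourcolumn, COUNT(*) AS ignore
  FROM union_all_view
  GROUP BY yourcolumn
)
DATA INITIALLY DEFERRED
REFRESH IMMEDIATE;

-- populate it once after creation
REFRESH TABLE distinct_vals_mqt;
```

The COUNT(*) column is not decorative: DB2 requires an aggregate alongside the GROUP BY for a refresh-immediate MQT to be maintainable.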
However, this is really a database design issue. Why do you need to scan 180 million rows of data to find the unique values in a particular column? Why don't these values already reside in their own table, with foreign keys defined against it from each of the 10 base tables?