An error occurred while creating datasets: (table_name) Table could not be found for Mixpanel data in Redshift - amazon-redshift

New to Superset here. I'm connecting Mixpanel data (stored in Redshift) to Superset, but I don't see any tables available when I 'add dataset' in Superset. Specifically, I want to connect to mp_master_event, and I wonder whether it doesn't show up because it's stored under "External Tables".
When I type mp_master_event in the 'table' field, I get this error: An error occurred while creating datasets: (table_name) Table [mp_master_event] could not be found, please double check your database connection, schema, and table name. The database connection is fine, and the schema 'mixpanel' I've chosen is correct.
Any ideas what I should do?
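If mp_master_event is a Redshift Spectrum external table, it is listed in the SVV_EXTERNAL_TABLES system view rather than the regular catalog, which can confuse tools that enumerate tables. One way to verify it exists where you expect, as a minimal sketch using psycopg2 with hypothetical connection details:

import psycopg2  # pip install psycopg2-binary

# Placeholder credentials; replace with your cluster's details.
conn = psycopg2.connect(
    host="my-cluster.example.redshift.amazonaws.com",  # hypothetical host
    port=5439,
    dbname="dev",
    user="awsuser",
    password="secret",
)
with conn.cursor() as cur:
    # External (Spectrum) tables are listed in SVV_EXTERNAL_TABLES,
    # not in the ordinary information_schema.tables view.
    cur.execute(
        "SELECT schemaname, tablename FROM svv_external_tables "
        "WHERE schemaname = 'mixpanel'"
    )
    for schemaname, tablename in cur.fetchall():
        print(schemaname, tablename)
conn.close()

If the table shows up here but not in Superset, the problem is likely how the table list is enumerated for external schemas rather than the connection itself.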

Related

Delta Lake Data Load Datatype mismatch

I am loading data from SQL Server into Delta Lake tables. Recently I had to repoint the source to another table (same columns), but one column's data type is different in the new table. This causes an error while loading data into the Delta table. I get the following error:
Failed to merge fields 'COLUMN1' and 'COLUMN1'. Failed to merge incompatible data types LongType and DecimalType(32,0)
The command I use to write data to the Delta table:
DF.write.mode("overwrite").format("delta").option("mergeSchema", "true").save("s3 path")
The only option I can think of right now is to set overwriteSchema to true.
But that would rewrite my target schema completely, and I am concerned that a sudden change in the source schema would then replace the existing target schema without any notification or alert.
I also can't explicitly convert these columns, because the Databricks notebook I am using is a parametrized one used to load data from source to target (we read a CSV file that contains all the details about the target table, source table, partition key, etc.).
Is there any better way to tackle this issue?
Any help is much appreciated!
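One pattern that avoids overwriteSchema is to cast the incoming DataFrame to the schema the Delta table already has before writing, so an incompatible source change fails explicitly at the cast step instead of silently replacing the target schema. A rough sketch, assuming a Databricks notebook (spark in scope) and a hypothetical target_path that would come from the parameter CSV:

from pyspark.sql.functions import col
from delta.tables import DeltaTable

target_path = "s3://bucket/path"  # hypothetical; read from the parameter CSV in practice

# Read the schema the target Delta table already has.
target_schema = DeltaTable.forPath(spark, target_path).toDF().schema

# Cast each source column to the existing target type, keeping column order.
aligned = DF.select(
    [col(f.name).cast(f.dataType).alias(f.name) for f in target_schema.fields]
)

aligned.write.mode("overwrite").format("delta").save(target_path)

Because the cast is driven by the target table's own schema, the same parametrized notebook works for any table in the CSV, and nothing in the target schema can change without an explicit failure.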

Delete data from teradata using pyspark

I am trying to delete records from Teradata and then write into the table, to avoid duplicates.
I have tried several ways, none of which work.
1) I tried deleting while reading the data, which gives a syntax error like: '(' expected between table and delete
spark.read.format('jdbc').option('driver', 'com.TeradataDriver').option('url', 'jdbc:teradata://host').option('user', 'user').option('password', 'pwd').option('dbtable', 'delete from table').load()
I also tried the following, which gives a syntax error like: something expected between '(' and delete
.option('dbtable', '(delete from table) as td')
2) I tried deleting while writing the data, which also does not work:
df.write.format('jdbc').option('driver', 'com.TeradataDriver').option('url', 'jdbc:teradata://host').option('user', 'user').option('password', 'pwd').option('dbtable', 'table').option('preactions', 'delete from table').save()
A possible solution is to call a stored procedure that deletes the data:
import teradata

host, username, password = '', '', ''
udaExec = teradata.UdaExec(appName="test", version="1.0", logConsole=False)
with udaExec.connect(method="odbc",
                     system=host,
                     username=username,
                     password=password,
                     driver="Teradata Database ODBC Driver 16.20",
                     charset='UTF16',
                     transactionMode='Teradata') as connect:
    # Run the delete procedure server-side before writing new rows.
    connect.execute("CALL db.PRC_DELETE()", queryTimeout=0)
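If a stored procedure is not available, another sketch is to issue the DELETE through a plain Python connection before the Spark write; the JDBC dbtable option only accepts a table name or a subquery, never DML, which is why the attempts above fail. This assumes the official teradatasql driver and hypothetical host, credentials, and table names:

import teradatasql  # pip install teradatasql

# Delete the old rows outside Spark.
with teradatasql.connect(host="host", user="user", password="pwd") as con:
    cur = con.cursor()
    cur.execute("DELETE FROM db.target_table")  # hypothetical table

# Then append the fresh rows with an ordinary JDBC write.
df.write \
    .format("jdbc") \
    .option("url", "jdbc:teradata://host") \
    .option("driver", "com.teradata.jdbc.TeraDriver") \
    .option("user", "user") \
    .option("password", "pwd") \
    .option("dbtable", "db.target_table") \
    .mode("append") \
    .save()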

Unable to create db2 table with DATE data type

I am using DB2 9.7 (LUW) on a Windows server, where multiple databases live in a single DB2 instance. I just found that in one of these databases I am unable to add a column with the DATE data type, either during table creation or when altering. The column being added gets changed to TIMESTAMP instead.
Any help on this will be welcome.
Check your Oracle compatibility setting.
Depending on that setting, DATE is interpreted as TIMESTAMP(0), exactly as in your example.
Because the setting only takes effect for databases created after the DB2_COMPATIBILITY_VECTOR registry variable was set, databases in the same instance can show different behaviour.
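To check this from SQL, you can read the instance registry through the SYSIBMADM.REG_VARIABLES administrative view. A minimal sketch using the ibm_db driver with a hypothetical connection string:

import ibm_db  # pip install ibm_db

# Hypothetical connection details; replace with your instance's values.
conn = ibm_db.connect(
    "DATABASE=mydb;HOSTNAME=myhost;PORT=50000;PROTOCOL=TCPIP;UID=db2inst1;PWD=secret",
    "", "")

# If DB2_COMPATIBILITY_VECTOR is set, only databases created AFTER it was
# set inherit the Oracle-style DATE-as-TIMESTAMP(0) behaviour, which is why
# databases in the same instance can differ.
stmt = ibm_db.exec_immediate(
    conn,
    "SELECT REG_VAR_NAME, REG_VAR_VALUE FROM SYSIBMADM.REG_VARIABLES "
    "WHERE REG_VAR_NAME = 'DB2_COMPATIBILITY_VECTOR'")
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row["REG_VAR_NAME"], row["REG_VAR_VALUE"])
    row = ibm_db.fetch_assoc(stmt)
ibm_db.close(conn)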

How to make a Cassandra connection using the CQL that was used to create a table?

I am new to Tableau. I went through the site before posting this question and didn't find an answer matching it.
I have a connection established successfully to Cassandra using the "DataStax Cassandra ODBC driver, 64-bit Windows". Everything is fine; I filled in all details like keyspace name and table name as per the documentation available on the DataStax site.
But when I drag the available table onto the canvas, it keeps loading for minutes. What the database guy told me about the data: it's millions of rows for one day, we have 6 months of data, and the data gets updated every 10 minutes. It's for a reputed wind-energy company.
My client has given me the CQL used for creating the table:
create table abc_data_test.machine_data
(machine_id text, tag text, timestamp timestamp, value double,
PRIMARY KEY((machine_id, tag), timestamp))
WITH CLUSTERING ORDER BY(timestamp DESC)
AND compression = { 'sstable_compression' : 'LZ4Compressor' };
Where do I put this code?
I tried to insert it on the connection page and it gives an error. I get a New Custom SQL error (I placed the code in "New Custom SQL").
The clock is still running; it shows:
processing request: connecting to datasource, Elapsed time 87:09
The error from New Custom SQL is:
An error occurred while communicating with the data source. [DataStax][CassandraODBC] (10) Error while executing a query in Cassandra: 33562624: line 1.11 no viable alternative at input '1' (SELECT [TOP]1...)
I'm using Windows 10 64-bit, the DataStax ODBC driver 64-bit version 2.4.1, and DSE 4.8 or later.
You cannot pass DDL into the Custom SQL box. If the Cassandra connection supports the Initial SQL option, you could pass the CREATE TABLE there; then your Custom SQL would be some sort of SELECT statement. Otherwise, create the table in Cassandra first and then connect to that table from Tableau, as sketched below.
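For the create-it-in-Cassandra-first route, a minimal sketch using the Python cassandra-driver, with a hypothetical contact point and the client's own CQL:

from cassandra.cluster import Cluster  # pip install cassandra-driver

cluster = Cluster(["10.0.0.1"])  # hypothetical node address
session = cluster.connect()

# Run the DDL once, directly against the cluster (not from Tableau);
# this assumes the abc_data_test keyspace already exists.
session.execute("""
    CREATE TABLE IF NOT EXISTS abc_data_test.machine_data (
        machine_id text, tag text, timestamp timestamp, value double,
        PRIMARY KEY ((machine_id, tag), timestamp))
    WITH CLUSTERING ORDER BY (timestamp DESC)
    AND compression = {'sstable_compression': 'LZ4Compressor'}
""")
cluster.shutdown()

Once the table exists, point Tableau at abc_data_test.machine_data directly. The "(SELECT [TOP]1...)" fragment in the error suggests Tableau was wrapping the pasted DDL in a SQL-style validation query that CQL cannot parse.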

Error in MSDN Lesson 3 for SQL Server Data Tools

I'm currently following this link and performing the steps:
https://msdn.microsoft.com/en-us/library/ms170625.aspx
However, after pasting in the Transact-SQL query, it tells me:
"Could not create a list of fields for the query. Verify that you can connect to the data source and that your query syntax is correct. Query (7,24) Parser: The syntax for 'sp' is incorrect"
What could be the problem? I have tested the database connection and it connects fine (and yes, the table exists in the database).