How do I upload a table to Apache Superset? - postgresql

I'm trying to upload a dataset to be viewed in Superset.
I created a Postgres database and am able to connect via the URI: postgresql://user:password@localhost:port
I created a database called NYC Taxi with a table called nyctaxi.
However, when I try to add the table to Superset, I get the following error:
Table [nyctaxi] could not be found, please double check your database connection, schema, and table name, error: nyctaxi

If you have the data in a CSV, you can load it using the Upload CSV option from the Sources menu.
I have a small demo here.
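If you want to register the existing Postgres table instead, it can help to first confirm which schema the table actually lives in, since Superset reports "could not be found" when the schema doesn't match. A minimal check, run against the same database Superset connects to (the table name nyctaxi comes from the question; the query itself is just standard information_schema):

    -- Show which schema a table named nyctaxi lives in,
    -- so the matching schema can be picked when adding the table in Superset.
    SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE table_name ILIKE 'nyctaxi';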

Related

Error when creating table in BigQuery Google cloud console

I followed this tutorial to export my Firestore collections to Excel:
https://www.youtube.com/watch?v=3J1M1bSwgV8
I've followed it before and it worked.
Now I am not able to make it work. I followed all the steps, but at the end, when I need to create the table, I'm getting the error:
Failed to create table: Entity "ycCJePRLCrkHXkQyJ9nn" was of unexpected kind "patients".
What I've managed to do successfully:
– create bucket and export collection in https://console.cloud.google.com/firestore/import-export
– create dataset in https://console.cloud.google.com/bigquery?project=root-catfish-299022
The next step would be to create the table, which is where I'm getting the error.

Azure Data Factory: get the IDs of the tables that succeed

I am using Azure Data Factory to copy data from tables in one database to tables in another database. I am using a Lookup activity to get the list of tables that need to be copied, and after that a ForEach iterator to copy the data.
I am using the table below to get the list of tables that need to be copied.
The problem: I want to update the flag to 1 when a table is successfully copied. I tried using the log that is generated after the pipeline runs, but I am unable to use it effectively.
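One way to approach this (a sketch, not something from the post) is to chain a Stored Procedure activity to the Copy activity's success output inside the ForEach and have it flip the flag for the table that was just copied. Assuming a control table dbo.TableList with columns TableName and CopyFlag (both names are hypothetical), the procedure could look like:

    -- Hypothetical procedure, called from a Stored Procedure activity
    -- wired to the Copy activity's success dependency, with @TableName
    -- passed from the current ForEach item.
    CREATE PROCEDURE dbo.usp_MarkTableCopied
        @TableName NVARCHAR(128)
    AS
    BEGIN
        UPDATE dbo.TableList
        SET CopyFlag = 1
        WHERE TableName = @TableName;
    END;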

How to copy data from a CSV to an Azure SQL Server table?

I have a dataset based on a CSV file. This exposes data as follows:
Name,Age
John,23
I have an Azure SQL Server instance with a table named: [People]
This has columns
Name, Age
I am using the Copy Data activity and trying to copy data from the CSV dataset into the Azure table.
There is no option to indicate the table name as a source. Instead, I have a space to input a Stored Procedure name.
How does this work? Where do I put the target table name in the image below?
You should definitely have a table name to write to; if you don't have one, something is wrong with your setup. Make sure you have a table to write to and that the field names in your table match the fields in the CSV file. Then follow the steps outlined in the walkthrough below. There are several steps to click through, but they are all pretty intuitive, so just follow the instructions step by step and you should be fine.
http://normalian.hatenablog.com/entry/2017/09/04/233320
You can add records into the SQL Database table directly, without stored procedures, by configuring the table name on the sink dataset rather than on the Copy activity, which is what is happening here.
Have a look at the screenshot below, which shows the Table field within my dataset.
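For reference, a minimal sink table matching the CSV columns from the question could look like this (the data types are assumptions, since the CSV only shows Name and Age):

    -- Target table in Azure SQL; column names match the CSV header
    CREATE TABLE [People] (
        [Name] NVARCHAR(100),
        [Age]  INT
    );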

Loading JSON files via Azure Data Factory

I have over 100 nested JSON files and I am trying to load them via Data Factory V2 into SQL Data Warehouse. I have created the Data Factory V2 pipeline and everything seems fine: the connection below seems fine and the Data Preview looks fine too.
When I run the Data Factory I get this error:
I am not sure what the issue is. I have tried to re-create the Data Factory several times.
The error message is clear enough when it says "All columns of the table must be specified...". This means that the table in the data warehouse has more columns than what you are seeing in the preview of the JSON file. You will need to create a table in the data warehouse with the same columns that are shown in the preview of the JSON files.
If you need to insert them into a table with more fields, create a "staging" table with the same columns as the JSON file, and then call a stored procedure to insert the content of this staging table into the target table.
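A rough sketch of that staging pattern (the table, procedure, and column names are hypothetical, since the JSON structure isn't shown in the post):

    -- Staging table with exactly the columns the JSON preview shows
    CREATE TABLE dbo.StagingOrders (
        Id   INT,
        Name NVARCHAR(200)
    );

    -- Procedure that moves the staged rows into the wider target table;
    -- the extra target columns keep their defaults or NULLs.
    CREATE PROCEDURE dbo.usp_LoadOrders
    AS
    BEGIN
        INSERT INTO dbo.TargetOrders (Id, Name)
        SELECT Id, Name
        FROM dbo.StagingOrders;
    END;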
Hope this helped!

* coming in place of keys when connecting Saiku with Mongo using Apache Drill

I am using Apache Drill in embedded mode, and I am able to connect to Mongo and query it in Drill successfully.
However, when I create a schema in the Saiku schema designer using "org.apache.drill.jdbc.Driver" as the driver and "jdbc:drill:drillbit=hostname:31010" as the URL, the connection is successful and all collections are fetched and shown as tables in Saiku, but "*" appears in place of the column names and the actual column names are not shown.
I don't know what I am missing.
I figured out the solution and am posting it in case anyone else could benefit. I had created a view in Drill with select * from table. When I recreated the view as select col1, col2, ... from table, the issue was resolved.
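In Drill terms, the change was roughly the following (the view name, workspace, and column names are placeholders for whatever the Mongo collection actually contains):

    -- A view defined as SELECT * hides the concrete column names
    -- from the JDBC metadata that Saiku reads, so recreate it
    -- with the columns listed explicitly.
    CREATE OR REPLACE VIEW dfs.tmp.taxi_view AS
    SELECT col1, col2, col3
    FROM mongo.mydb.mycollection;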