How to import a CSV file as a dataset to Power BI using the REST API?

I want to automate the import process in Power BI, but I can't find how to publish a CSV file as a dataset.
I'm using a C# solution for this.
Is there a way to do that?

You can't directly import CSV files into a published dataset in the Power BI Service. The AddRowsAPIEnabled property of datasets published from Power BI Desktop is false, i.e. this API is disabled for them. Currently the only way to enable this API is to create a push dataset programmatically using the REST API (or to create a streaming dataset from the site). In that case you will be able to push rows to it (read the CSV file and push batches of rows, either using C# or some other language, even PowerShell), and you will be able to create reports using this dataset. However, there are quite a few limitations, and you have to take care of cleaning up the dataset yourself: either avoid reaching the limit of 5 million rows (you can't delete only "some" of the rows, you can only truncate the whole dataset), or set the retention policy to basicFIFO, which lowers the limit to 200k rows and drops the oldest rows automatically.
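For reference, here is a rough C# sketch of that approach, assuming you already have an Azure AD access token with the appropriate Power BI scope; the dataset name, table, columns and file name are placeholders, and Chunk requires .NET 6+. Treat it as a sketch rather than a finished implementation.

```csharp
// Minimal sketch: create a push dataset and push CSV rows via the Power BI REST API.
// Assumes an already-acquired AAD access token; names and file paths are placeholders.
using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class PushCsvToPowerBi
{
    static async Task Main()
    {
        var accessToken = Environment.GetEnvironmentVariable("PBI_ACCESS_TOKEN"); // hypothetical token source
        using var http = new HttpClient { BaseAddress = new Uri("https://api.powerbi.com/v1.0/myorg/") };
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        // 1. Create a push dataset with basicFIFO retention (oldest rows dropped after ~200k).
        var datasetDef = new
        {
            name = "SalesFromCsv",
            defaultMode = "Push",
            tables = new[]
            {
                new
                {
                    name = "Sales",
                    columns = new object[]
                    {
                        new { name = "Product", dataType = "string" },
                        new { name = "Amount",  dataType = "Double" },
                        new { name = "Date",    dataType = "DateTime" }
                    }
                }
            }
        };
        var createResponse = await http.PostAsync(
            "datasets?defaultRetentionPolicy=basicFIFO",
            new StringContent(JsonSerializer.Serialize(datasetDef), Encoding.UTF8, "application/json"));
        createResponse.EnsureSuccessStatusCode();
        var datasetId = JsonDocument.Parse(await createResponse.Content.ReadAsStringAsync())
                                    .RootElement.GetProperty("id").GetString();

        // 2. Read the CSV (headerless three-column file assumed) and push rows in batches.
        var rows = File.ReadLines("sales.csv")
                       .Select(line => line.Split(','))
                       .Select(f => new { Product = f[0], Amount = double.Parse(f[1]), Date = f[2] });

        foreach (var batch in rows.Chunk(5000)) // stay under the per-request row limit
        {
            var body = JsonSerializer.Serialize(new { rows = batch });
            var pushResponse = await http.PostAsync(
                $"datasets/{datasetId}/tables/Sales/rows",
                new StringContent(body, Encoding.UTF8, "application/json"));
            pushResponse.EnsureSuccessStatusCode();
        }

        // To truncate the table later (the only "clean up" available), send:
        // DELETE datasets/{datasetId}/tables/Sales/rows
    }
}
```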
However, a better solution would be to automate the import of these CSV files into a database and make the report read the data from there. For example, import the files into an Azure SQL Database or Databricks and use that as the data source for your report. You can then schedule the refresh of the dataset (if you use Import mode) or use DirectQuery.
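A sketch of that alternative, assuming a dbo.Sales table already exists in the Azure SQL Database; the connection string, file name and column layout are placeholders, and a real CSV library should replace the naive parsing:

```csharp
// Minimal sketch: bulk-load CSV files into an Azure SQL table that the report reads from.
using System;
using System.Data;
using System.IO;
using Microsoft.Data.SqlClient;

class CsvToAzureSql
{
    static void Main()
    {
        var connectionString = Environment.GetEnvironmentVariable("AZURE_SQL_CONNECTION"); // placeholder
        var table = new DataTable();
        table.Columns.Add("Product", typeof(string));
        table.Columns.Add("Amount", typeof(double));
        table.Columns.Add("Date", typeof(DateTime));

        // Naive CSV parsing; use a proper CSV parser for quoted/escaped fields.
        foreach (var line in File.ReadLines("sales.csv"))
        {
            var f = line.Split(',');
            table.Rows.Add(f[0], double.Parse(f[1]), DateTime.Parse(f[2]));
        }

        using var connection = new SqlConnection(connectionString);
        connection.Open();
        using var bulkCopy = new SqlBulkCopy(connection) { DestinationTableName = "dbo.Sales" };
        bulkCopy.WriteToServer(table);

        // The Power BI dataset then imports from dbo.Sales on a scheduled refresh,
        // or reads it live via DirectQuery.
    }
}
```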

After recent Power BI updates, it is now possible to import a dataset without importing the whole report.
So what I do is import the new dataset and then update the parameters that I set up for the CSV file source (stored in Data Lake).
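For what it's worth, that parameter-update step can be scripted against the REST API roughly like this. It assumes a valid access token and an already-published dataset; the dataset ID, the parameter name "CsvFilePath" and the data lake URL are placeholders:

```csharp
// Minimal sketch: point a dataset parameter at a new CSV file, then trigger a refresh.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class UpdateCsvSourceParameter
{
    static async Task Main()
    {
        var accessToken = Environment.GetEnvironmentVariable("PBI_ACCESS_TOKEN"); // placeholder
        var datasetId = "00000000-0000-0000-0000-000000000000";                   // placeholder

        using var http = new HttpClient { BaseAddress = new Uri("https://api.powerbi.com/v1.0/myorg/") };
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        // 1. Update the parameter that holds the CSV file location in the data lake.
        var body = @"{ ""updateDetails"": [ { ""name"": ""CsvFilePath"",
                       ""newValue"": ""https://mylake.dfs.core.windows.net/files/sales-2024-01.csv"" } ] }";
        var updateResponse = await http.PostAsync(
            $"datasets/{datasetId}/Default.UpdateParameters",
            new StringContent(body, Encoding.UTF8, "application/json"));
        updateResponse.EnsureSuccessStatusCode();

        // 2. Trigger a refresh so the dataset picks up the new file.
        var refreshResponse = await http.PostAsync(
            $"datasets/{datasetId}/refreshes",
            new StringContent("{}", Encoding.UTF8, "application/json"));
        refreshResponse.EnsureSuccessStatusCode();
    }
}
```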

Related

Mapping Synapse data flow with a parameterized dynamic source needs the projection imported dynamically

I am trying to build a cloud data warehouse where I have staged the on-prem tables as parquet files in a data lake.
I implemented a metadata-driven incremental load.
In the above data flow I am trying to implement a merge query, passing the table name as a parameter so that the data flow dynamically locates the respective parquet files for the full data and the incremental data, and then goes through some ETL steps to implement the merge query.
The merge query is working fine, but I found that the projection is not correct. As the source files are dynamic, I also want to "import projection" dynamically at runtime, so that the same data flow can be used to implement the merge query for any table.
In the picture, you see it is showing 104 columns (the static projection that was imported at development time). For this table it should actually be 38 columns.
Can I dynamically (i.e. at runtime) assign the projection? If so, how?
Or does anyone have any suggestions regarding this?
Thanks,
Muntasir Joarder
Enable schema drift in your source transformation when the metadata changes often. This removes or adds columns at run time.
The source projection displays what was imported at design time, but with schema drift enabled the columns are resolved from the actual source schema at run time.
Refer to this document for more details and examples.

Convert PBI Desktop dataset to push dataset

I have a report and dataset created in Power BI Desktop and published to Power BI Online. I need to update the rows of this dataset from an API, and I tried to use the REST API, but it can only update a push dataset, so I'm looking for a method to convert my normal dataset. Would it be possible to convert, or copy the existing structure, to facilitate the creation of this push dataset?
I tried converting it using the Windows PowerShell module PowerBIPS, but that module has had an error since last year and apparently doesn't work anymore.

When you create a Freeform report in MicroStrategy, is it possible to do automatic mapping?

When you finish a Freeform SQL query in MicroStrategy, the next step is to map the columns.
Is there any way to do this automatically, or at least generate the list of columns with their names?
Thanks!
Sadly, this isn't possible. You will have to map all columns manually.
While this functionality isn't possible with Freeform reporting specifically, MicroStrategy Data Import will allow you to create Data Import cubes. These cubes can be configured as live connections, meaning they execute against the selected data source every time they are used rather than being the typical snapshot cube. Data Imports from a database can be sourced from a database query. This effectively allows you to write your own SQL, with the end result being a report whose columns you did not have to map manually.

SQL Job Excel Row Limit on Import

I am able to load about 16,000 rows of data into a table using my SQL job. When I try to load more than that, the job fails. I believe it has to do with the buffering limits. I am not the DBA, and getting changes made is difficult at best. Do I have any options within the SSIS package itself to send all the records in the workbook?

Adding new data to the Neo4j graph database

I am importing a dataset of about 46K nodes into Neo4j using the import option. This dataset is dynamic, i.e. new entries keep getting added to it now and then, so if I have to re-run the entire import it is a waste of resources. I tried using the Neo4j REST client for Python to send the queries that create the new data points, but as the number of new data points increases, the time taken becomes more than importing the 46K nodes. So is there any alternative way to add these data points, or do I have to redo the entire import?
First of all, 46k nodes is rather tiny.
The easiest way to import data into Neo4j is LOAD CSV together with PERIODIC COMMIT; http://neo4j.com/developer/guide-import-csv/ contains all the details.
Be sure to have indexes in place so that an incremental update can quickly find the nodes that need to be changed.
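To keep with the C# examples above, here is a minimal sketch of that incremental approach using the official Neo4j.Driver package for .NET (the same Cypher works from Python or cypher-shell). The label Person, property id, file name and credentials are placeholders; USING PERIODIC COMMIT applies to Neo4j 3.x/4.x (Neo4j 5 replaces it with CALL { ... } IN TRANSACTIONS):

```csharp
// Minimal sketch: incremental CSV load into Neo4j with an index plus MERGE,
// so only new/changed rows are touched instead of re-importing everything.
using System.Threading.Tasks;
using Neo4j.Driver;

class IncrementalCsvImport
{
    static async Task Main()
    {
        var driver = GraphDatabase.Driver("bolt://localhost:7687",
                                          AuthTokens.Basic("neo4j", "password")); // placeholders
        await using var session = driver.AsyncSession();

        // Index so MERGE can locate existing nodes quickly during incremental loads.
        await session.RunAsync("CREATE INDEX person_id IF NOT EXISTS FOR (p:Person) ON (p.id)");

        // Load only the new entries; MERGE updates existing nodes instead of recreating
        // the whole graph. The CSV file must be reachable by the Neo4j server.
        var load = @"USING PERIODIC COMMIT 1000
                     LOAD CSV WITH HEADERS FROM 'file:///new_entries.csv' AS row
                     MERGE (p:Person {id: row.id})
                     SET p.name = row.name";
        await session.RunAsync(load);

        await driver.CloseAsync();
    }
}
```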