When we create a dataset, Oracle profiles the data automatically and we can then see the complete dataset.
My OAC instance is failing to profile the data and shows me an error.
I was expecting the data to be profiled automatically so that I could start working on my workbook quickly.
profiling error screenshot: https://i.stack.imgur.com/XLOWO.png
My question might look similar to some earlier posts, but none of the solutions explains the root cause of this behavior.
Let me explain what I have done so far:
I am connecting to a PostgreSQL database (running in our company's AWS environment) from my Power BI Desktop client. The connection setup was straightforward and I am able to see all the tables in the DB.
For two of my tables, which are extremely large, I am trying to load the data and I get the error message below:
Data Load Error- OLE DB or ODBC error: [DataSource.Error] PostgreSQL: Exception while reading from stream.
I tried changing the CommandTimeout parameter in the initial M query -- didn't help.
I tried writing a native query with SELECT * and a WHERE clause (using a parameter) -- it worked.
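For reference, this is roughly what the working native query looks like; a hypothetical sketch, since the table, column, and date values here are placeholders of mine:

```
-- Hypothetical native query (run through Value.NativeQuery in the M script),
-- restricting the extract with a WHERE clause bound to a Power BI parameter.
SELECT *
FROM public.big_table
WHERE created_at >= '2023-01-01'   -- supplied by a Power BI parameter
  AND created_at <  '2023-02-01';
```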
Question:
When Power BI starts loading the data without any parameter, it does extract a few thousand records but then gets interrupted and throws the error above. Is some limit on the database server side being hit, or is it a limitation of Power BI?
What can I change on my database server side if I don't want to pull the data using parameters (in the end I need all the data for my reports)?
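For context, if the interruption comes from the server side rather than from Power BI, these are the PostgreSQL settings I would check first; a minimal sketch, assuming direct superuser access (on AWS RDS the same changes go through a parameter group instead of ALTER SYSTEM):

```
-- Check limits that can cut off a long-running extract.
SHOW statement_timeout;
SHOW idle_in_transaction_session_timeout;

-- Relax them server-wide (0 = disabled).
ALTER SYSTEM SET statement_timeout = 0;
ALTER SYSTEM SET idle_in_transaction_session_timeout = 0;

-- Keep the long-lived connection alive across firewalls/NAT between Power BI and AWS.
ALTER SYSTEM SET tcp_keepalives_idle = 60;      -- seconds before the first keepalive probe
ALTER SYSTEM SET tcp_keepalives_interval = 10;  -- seconds between probes

-- Apply the changed settings.
SELECT pg_reload_conf();
```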
Task:-
I have an SSIS package that loads data from SQL Server to a PostgreSQL database.
Mechanism:-
A For Each loop picks 100 records at a time and processes them to the destination until all records are processed.
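For clarity, this is roughly the source-side pattern I mean by "100 records per iteration"; a hypothetical sketch, assuming an integer key Id on the source table (the real package drives the batching through package variables):

```
-- Hypothetical source query for one loop iteration: fetch the next 100 rows
-- after the last key handled in the previous iteration (@LastId is a package variable).
SELECT TOP (100) Id, Col1, Col2
FROM dbo.SourceTable
WHERE Id > @LastId
ORDER BY Id;
```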
OnLoad events:-
While data is being loaded into PostgreSQL, there are triggers on the table that sync the data to Salesforce; they cannot currently be disabled during the migration.
Issue:-
While loading, some records end up in the error table used in the DFT, and for those rows the Error Column Name comes through as "Unable to fetch column name".
Here's the weird thing: if I reprocess the same error rows, they are processed successfully to the destination.
I would really appreciate it if someone who has run into the same scenario could share their experience on how to debug this further.
I connected my SonarQube server to my Postgres DB; however, when I view the "metrics" table, it lacks the actual values of the metrics.
Those are all the columns I get, which are not particularly helpful. How can I get the actual values of the metrics?
My end goal is to obtain metrics such as duplicated code, function size, complexity, etc. for my projects. I understand I could also use the REST API to do this; however, another application I am using will need a DB to extract the data from.
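To illustrate what I am after, this is the kind of query I would like to run directly against the DB; a hypothetical sketch, since I don't know where the values actually live (the measure table and column names are guesses, and the internal schema is not a public API):

```
-- Hypothetical: join the metric definitions to their measured values.
-- 'project_measures', 'metric_id' and 'value' are guesses, not the documented schema.
SELECT m.name AS metric, pm.value
FROM metrics m
JOIN project_measures pm ON pm.metric_id = m.id
WHERE m.name IN ('duplicated_lines_density', 'complexity', 'functions');
```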
As far as I know, connecting to a DB just lets SonarQube store its data, not display it.
You can check the stored data in SonarQube's GUI:
Click on project
Click on Activity
We have many Talend jobs that transfer data from Oracle (tOracleInput) to Redshift (tRedshiftOutputBulkExec). I would like to store the result information in a DB table, for example:
Job name, start time, running time, rows loaded, successful or failed
I know that if I turn on log4j, most of this information can be derived from the log. However, saving it in a DB table will make it easier to check and report on the results.
I'm most interested in the number of records loaded. I checked this link http://www.talendbyexample.com/talend-logs-and-errors-component-reference.html and the manual of tRedshiftOutputBulkExec; neither gives me that information.
Does Talend Administration Center provide such a function? What is the best way to implement this?
Thanks,
Having looked at the URL you provided, I think tLogCatcher should give you what you need (minus the rows loaded, which you can get with a lookup).
I started with Talend Studio version 6.4.1. There you can configure "Stats & Logs" for a job; it can log to the console, to files, or to a database. When writing to a DB you set the JDBC parameters and the names of three tables:
Stats Table: stores start and end timestamps of the job
Logs Table: stores error messages
Meter Table: stores the count of rows for each monitored flow
They correspond to the components tStatCatcher, tLogCatcher, and tFlowMeterCatcher, where you can find the required table schemas.
To monitor a flow, select it, open the "Component" tab, and tick the checkbox "Monitor this connection".
To see the logged values you can use the "AMC" (Activity Monitoring Console) in Studio or TAC.
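To tie this back to the original request (job name, start time, running time, rows loaded, status), a query along these lines could be run against the Stats & Logs tables; a hypothetical sketch, assuming the default tStatCatcher/tFlowMeterCatcher schemas and tables named stats and meter (your table and column names may differ):

```
-- Hypothetical report over Talend's Stats & Logs tables; the columns follow the
-- default tStatCatcher / tFlowMeterCatcher schemas.
SELECT s.job       AS job_name,
       s.moment    AS logged_at,
       s.duration  AS duration,        -- logged on the 'end' event
       s.message   AS status,          -- e.g. 'success' / 'failure'
       m.label     AS monitored_flow,
       m.count     AS rows_loaded
FROM stats s
LEFT JOIN meter m ON m.root_pid = s.root_pid   -- rows from the same job execution
WHERE s.message_type = 'end'
ORDER BY s.moment DESC;
```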
I have a database build and a reference data build which are loaded onto my computer. When I try to load the transaction data from a file into the database via staging tables and stored procedures, it takes 20 minutes to load 10,000 records.
If I load the database build, the reference data build, and also my test data, then loading the transaction data via the same process takes 40-50 seconds.
I am trying to find out what causes the process to speed up when the test data is added. I have considered that loading the test data first may let the database work out a better route for inserting the transaction data, but I wouldn't expect that to make this big a difference in time.
Can anyone recommend what I could do to identify the problem, or does anyone have any ideas about what it could be?
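One way to test the "database worked out a better route" idea is to compare statistics and actual execution plans between the slow and fast runs; a hypothetical sketch, assuming a SQL Server database and made-up object names (adjust for your engine and schema):

```
-- Hypothetical checks, assuming SQL Server; object names are placeholders.

-- 1. Refresh statistics on the staging table before the slow load, so the
--    optimizer is not planning against stale "empty table" estimates.
UPDATE STATISTICS dbo.StagingTransactions WITH FULLSCAN;

-- 2. Run the load with IO/time statistics on in both scenarios and compare
--    the actual execution plans (estimated vs. actual row counts).
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
EXEC dbo.LoadTransactionData;   -- placeholder for the real load procedure
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```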
Thanks for any help.