I'm working with Talend ETL to transfer data between two Salesforce Orgs. I'm trying to run preliminary tests to make sure everything is set up properly.
Is there a way to limit the number of rows being transferred? The database has over 50,000 rows, and I only want to send over 15 or 20.
Thank you.
On the Talend side, you can use tSampleRow to process only a limited number of the rows that were retrieved. For example, you can use a line number range to process only rows 1-50.
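If you would rather cap the rows at the source instead of filtering them inside the job, SOQL itself supports a LIMIT clause. A minimal sketch, assuming your Salesforce input component lets you edit the query manually; the object and fields here are placeholders:

SELECT Id, Name, Industry
FROM Account
LIMIT 20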
I have the following flow in Pentaho Data Integration to read a txt file and map it to a PostgreSQL table.
The first time I run this flow everything goes ok and the table gets populated. However, if later I want to do an incremental update on the same table, I need to truncate it and run the flow again. Is there any method that allows me to only load new/updated rows?
In the PostgreSQL Bulk Load operator, I can only see "Truncate/Insert" options and this is very inefficient, as my tables are really large.
See my implementation:
Thanks in advance!!
Looking around for possibilities, some users say that the only advantage of the Bulk Loader is performance with very large batches of rows (upwards of millions). But there are ways of countering this.
Try using the Table Output step with a batch size ("Commit size" in the step) of 5000, and increase the number of copies executing the step (depending on how many cores your processor has) to, say, 4 copies (a dual-core CPU with 2 logical cores each). You can change the number of copies by right-clicking the step in the GUI and setting the desired number.
This will parallelize the output into 4 groups of inserts, of 5000 rows per 'cycle' each. If this causes a memory overload in the JVM, you can easily adapt that and increase the memory usage via the PENTAHO_DI_JAVA_OPTIONS option; simply double the amounts set for Xms (minimum) and Xmx (maximum). Mine is set to "-Xms2048m" "-Xmx4096m".
The only peculiarity I found with this step and PostgreSQL is that you need to specify the Database Fields in the step, even if the incoming rows have the exact same layout as the table.
You are looking for an incremental load. You can do it in two ways.
There is a step called "Insert/Update"; this can be used to do an incremental load.
You will have the option to specify key columns to compare; then, under the fields section, select "Y" for update. Select "N" for the columns you are using for the key comparison.
Use Table Output and uncheck the "Truncate table" option. While retrieving the data from the source table, use a variable in the where clause: first get the max value from your target table, set that value into a variable, and include it in the where clause of your query, as in the sketch below.
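A minimal SQL sketch of that second approach, with hypothetical table/column names (source_table, target_table, updated_at); the variable syntax depends on your ETL tool (in PDI it would look like ${LAST_LOADED}):

-- step 1: get the high-water mark from the target table and store it in a variable
SELECT COALESCE(MAX(updated_at), '1900-01-01') AS last_loaded
FROM target_table;

-- step 2: use that variable in the where clause of the source query
SELECT *
FROM source_table
WHERE updated_at > ${LAST_LOADED};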
Edit: if your data source is a flat file, then, as described above, get the max value (date/int) from the target table and join it with your data; after that, use a Filter Rows step to keep only the incremental data.
Hope this will help.
How many rows can the web data connector handle to import data into Tableau? Or what is the maximum number of rows which I can generally import?
There are no limitations on how many rows of data you bring back with your web data connector; performance scales pretty well as you bring back more and more rows, so it's really just a matter of how much load time you are OK with.
The total performance will be a combination of:
1. The time it takes for you to retrieve data from the API.
2. The time it takes our database to create an extract with that data once your web data connector passes it back to Tableau.
#2 will be comparable to the time it would take to create an extract from an Excel file with the same schema and size as the data in your web data connector.
On a related note, the underlying database used (Tableau Data Engine) handles a large number of rows well, but is not as suited for handling a large number of columns, thus our guidance is to bring back less than 60 columns if possible.
I have 6.5 GB of data, consisting of 900,000 rows, in my input table (tPostgresqlInput). I am trying to load the same data into my output table (tPostgresqlOutput). While running the job I get no response from my input table. Is there any solution to load the data? Please refer to my attachment.
You may need to develop a strategy to retrieve more manageable chunks of data, for example dividing up the data based on ranges of row IDs, as in the sketch below. That way, it does not take as much memory or time to retrieve the data.
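For example, a rough sketch of chunked retrieval against a hypothetical big_table with a numeric id column; the job would loop, moving the range forward on each iteration:

-- fetch one manageable slice per iteration (next pass: 100001-200000, and so on)
SELECT *
FROM big_table
WHERE id >= 1
  AND id <= 100000;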
You could also increase the default memory limit for the job from 1 GB to a higher number.
If your job runs on the same network as your database server, that can also improve performance.
Make sure you enable Use Cursor in the input's advanced settings; the default 1k value is fine.
Also enable the batch size on the output, which does something similar.
With these enabled, Talend will work with 1k records at a time.
If these two tables are in the same DB, you can try the Talend ELT components to push your processing down to the database (a sketch of the SQL they effectively run in the database follows the links). Take a look at the following set of components:
https://help.talend.com/display/TalendOpenStudioComponentsReferenceGuide60EN/tELTPostgresqlInput
https://help.talend.com/display/TalendOpenStudioComponentsReferenceGuide60EN/tELTPostgresqlMap
https://help.talend.com/display/TalendOpenStudioComponentsReferenceGuide60EN/tELTPostgresqlOutput
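Conceptually, the ELT components build and execute a single statement inside PostgreSQL instead of pulling all rows through the Talend JVM, something like this sketch (table and column names are placeholders):

-- runs entirely in the database, so the 6.5 GB never leaves PostgreSQL
INSERT INTO target_table (id, col1, col2)
SELECT id, col1, col2
FROM source_table;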
I have a table of 3M rows.
I wanted to retrieve all those rows and do a visualization using dc.js.
The problem I have is that, for just a single column, it takes about 70 seconds.
And if I run my actual query, it takes about 240 seconds to retrieve those rows.
I'm using a select query on the columns, like this:
SELECT COL1, COL2 FROM TABLE
That's it. No grouping, nothing.
But it takes a hell of a lot of time.
I have heard of indexing and created an index on the columns I use, but even so, no fruitful results.
You should not retrieve 3M rows in any single query. Sending 3M records will always take a lot of time (it has nothing to do with the database; it is transfer speed), and it will kill your IO. The bulk of the time is spent on the transfer (IO) between the request originator and the Postgres database.
Consider breaking that request into batches of async requests that get streamed down to clients. That means restructuring your front-end code (JavaScript) for an improved user experience.
You didn't specify the environment in which you are using PostgreSQL.
As an example, in Node.js you can solve this problem by streaming the data with the help of pg-query-stream and rendering it on the client side at the same time, so the client doesn't have to wait for the query to finish and can see intermediate results.
This may not be the best solution though. A better solution would be to implement data aggregation within a database function to provide a smaller data subset.
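As an illustration, a sketch of that kind of in-database aggregation with hypothetical column names; a chart library like dc.js usually only needs grouped or binned values, not all 3M raw rows:

-- return a few hundred grouped rows instead of 3 million raw ones
SELECT col1,
       COUNT(*)  AS n,
       AVG(col2) AS avg_col2
FROM my_table
GROUP BY col1;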
I have a solution that can be parallelized, but I don't (yet) have experience with hadoop/nosql, and I'm not sure which solution is best for my needs. In theory, if I had unlimited CPUs, my results should return back instantaneously. So, any help would be appreciated. Thanks!
Here's what I have:
1000s of datasets
dataset keys:
all datasets have the same keys
1 million keys (this may later be 10 or 20 million)
dataset columns:
each dataset has the same columns
10 to 20 columns
most columns are numerical values that we need to aggregate (avg, stddev, plus statistics calculated with R)
a few columns are "type_id" columns, since in a particular query we may want to only include certain type_ids
web application
user can choose which datasets they are interested in (anywhere from 15 to 1000)
application needs to present: key, and aggregated results (avg, stddev) of each column
updates of data:
an entire dataset can be added, dropped, or replaced/updated
would be cool to be able to add columns. But, if required, can just replace the entire dataset.
never add rows/keys to a dataset - so don't need a system with lots of fast writes
infrastructure:
currently two machines with 24 cores each
eventually, want ability to also run this on amazon
I can't precompute my aggregated values, but since each key is independent, this should be easily scalable. Currently, I have this data in a postgres database, where each dataset is in its own partition.
partitions are nice, since I can easily add/drop/replace partitions (see the sketch after this list)
database is nice for filtering based on type_id
databases aren't easy for writing parallel queries
databases are good for structured data, and my data is not structured
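As a side note on that current layout, here is a hedged sketch of a per-dataset partitioning scheme in newer PostgreSQL versions (declarative list partitioning; older versions achieve the same with table inheritance, and all names here are hypothetical):

-- one parent table, one partition per dataset
CREATE TABLE measurements (
    dataset_id int,
    key_id     bigint,
    type_id    int,
    col1       double precision
) PARTITION BY LIST (dataset_id);

CREATE TABLE measurements_ds42 PARTITION OF measurements FOR VALUES IN (42);

-- replacing a dataset is just dropping and re-creating its partition
DROP TABLE measurements_ds42;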
As a proof of concept I tried out hadoop:
created a tab separated file per dataset for a particular type_id
uploaded to hdfs
map: retrieved a value/column for each key
reduce: computed average and standard deviation
From my crude proof of concept, I can see this will scale nicely, but I can also see that hadoop/hdfs has latency; I've read that it's generally not used for real-time querying (even though I'm OK with returning results to users within 5 seconds).
Any suggestion on how I should approach this? I was thinking of trying HBase next to get a feel for that. Should I instead look at Hive? Cassandra? Voldemort?
thanks!
Hive or Pig don't seem like they would help you. Essentially, each of them compiles down to one or more map/reduce jobs, so the response cannot come back within 5 seconds.
HBase may work, although your infrastructure is a bit small for optimal performance. I don't understand why you can't pre-compute summary statistics for each column. You should look up computing running averages so that you don't have to do heavyweight reduces.
check out http://en.wikipedia.org/wiki/Standard_deviation
stddev(X) = sqrt(E[X^2] - (E[X])^2)
This implies that you can get the stddev of the combined set AB as sqrt(E[AB^2] - (E[AB])^2), where E[AB^2] = (sum(A^2) + sum(B^2)) / (|A| + |B|) and, likewise, E[AB] = (sum(A) + sum(B)) / (|A| + |B|).
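In database terms this means you can precompute cheap per-dataset partial sums once and combine them at query time. A sketch, assuming a hypothetical summary table dataset_stats(dataset_id, column_name, n, sum_x, sum_x2) with floating-point sums:

-- combine partial sums across whichever datasets the user selected
SELECT column_name,
       SUM(sum_x) / SUM(n)                                        AS avg_x,
       SQRT(SUM(sum_x2) / SUM(n) - POWER(SUM(sum_x) / SUM(n), 2)) AS stddev_x
FROM dataset_stats
WHERE dataset_id IN (1, 2, 3)   -- the user's selection
GROUP BY column_name;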
Since your data seems to be pretty much homogeneous, I would definitely take a look at Google BigQuery - You can ingest and analyze the data without a MapReduce step (on your part), and the RESTful API will help you create a web application based on your queries. In fact, depending on how you want to design your application, you could create a fairly 'real time' application.
It is a serious problem without an immediate good solution in the open-source space. In the commercial space, MPP databases like Greenplum/Netezza should do.
Ideally you would need Google's Dremel (the engine behind BigQuery). We are developing an open-source clone, but it will take some time...
Regardless of the engine used, I think the solution should include holding the whole dataset in memory; that should give you an idea of what size of cluster you need.
If I understand you correctly, and you only need to aggregate on single columns at a time, you can store your data differently for better results.
In HBase that would look something like:
a table per data column in today's setup, plus another single table for the filtering fields (type_ids)
a row for each key in today's setup - you may want to think about how to incorporate your filter fields into the key for efficient filtering, otherwise you'd have to do a two-phase read
a column for each table in today's setup (i.e. a few thousand columns)
HBase doesn't mind if you add new columns and is sparse in the sense that it doesn't store data for columns that don't exist.
When you read a row you'd get all the relevant values, on which you can compute avg, etc. quite easily.
You might want to use a plain old database for this. It doesn't sound like you have a transactional system, so you can probably use just one or two large tables. SQL has problems when you need to join over large data, but since it doesn't sound like your data set needs joins, you should be fine. You can have the indexes set up to find the data sets, and then do the math either in SQL or in application code.
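A sketch of what that could look like, with hypothetical names: one wide table keyed by (dataset_id, key_id), an index for locating the selected datasets and type_ids, and the aggregation done directly in SQL:

-- index to quickly narrow down to the chosen datasets and type_ids
CREATE INDEX idx_measurements_ds_type ON measurements (dataset_id, type_id);

-- aggregate per key over the datasets the user picked
SELECT key_id,
       AVG(col1)        AS avg_col1,
       STDDEV_POP(col1) AS stddev_col1
FROM measurements
WHERE dataset_id IN (1, 2, 3)
  AND type_id = 7
GROUP BY key_id;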