When data is copied from source to destination in an SSIS package, where the source is a SQL query that uses GROUP BY and the destination is a table, is it necessary that a row at a given position in the source result set matches the row at the same position in the destination table?
You can use a clustered index to force things to be stored in an ordered way, but as Peter notes this has a performance penalty for incremental updates.
Are you concerned about getting things out in order? That calls for an ORDER BY on your queries, or perhaps you should create a standardised view that shows things in the order you want.
It's a performance question, really. Tables have no logical ordering. Of course the data does have a physical order on disk, and I/O has a significant effect on performance, so the best approach will depend on a) how the table is being populated (complete refresh vs. incremental update) and b) how the table is used downstream.
You could create a clustered index on the target table with the same columns as you have in the GROUP BY clause. This will physically order the data on disk by the keys of the clustered index.
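As a rough sketch, assuming the GROUP BY keys were hypothetical CustomerID and OrderDate columns and the destination were a table named dbo.SalesSummary, the clustered index might look like this:

CREATE CLUSTERED INDEX IX_SalesSummary_GroupKeys
ON dbo.SalesSummary (CustomerID, OrderDate);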
If the target table is completely repopulated each time the package is run (drop-recreate or truncate), this may be a good design, since the incoming data will probably be in the right order.
If the target table is incrementally updated each time the package is run, this may be a bad design, since the database will have to interleave the incoming data with existing data on each insert, which can be quite expensive.
I have a typical star pattern in my Azure SQL Data Warehouse. Data is first dumped into staging tables via Data Factory, then it calls a master procedure that calls other procedures to transform data into the appropriate format and then clear out the staging tables for that chunk of data.
Should these staging tables have indexes? Should they have statistics? I recently upgraded to Gen 2, but don't have auto create statistics turned on. I worry that statistics will get created but not updated, and so will end up slowing things down more than anything.
For more context, there is a procedure to rebuild indexes and update statistics which is run overnight, once a day. The data load process is run hourly.
Given that these are staging tables, the biggest impacts will come from the following.
Where possible, use a hash distribution. This will give the best performance when you process the table in subsequent steps. While the documentation sometimes suggests ROUND_ROBIN distribution, which is slightly faster for ingestion, the next query on the table will be slower.
Always use statistics. I suggest creating them manually, based on expected usage, for greater predictability in your ELT performance. If you don't create and update statistics, you're going to get dreadful performance at some point in the future. If you don't want to undertake the effort of manually managing statistics, then definitely turn on auto statistics.
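A minimal sketch of creating and refreshing statistics manually in a dedicated SQL pool, using made-up staging table and column names:

CREATE STATISTICS stat_stg_orders_customer_id ON stg.Orders (CustomerId);
CREATE STATISTICS stat_stg_orders_order_date ON stg.Orders (OrderDate);

-- Re-run after each load, e.g. from the nightly maintenance procedure
UPDATE STATISTICS stg.Orders;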
Consider the use of HEAP vs CLUSTERED COLUMNSTORE table structures for staging tables. In general, staging tables are processed on a whole-row basis, and you may find that your performance is better at the staging layer if you use a HEAP. This needs to be tested on your data, as the Gen2 caching that gives much greater performance does not apply to Heap tables.
Definitely create your fact and dimension tables as clustered columnstore indexes. Hash distribute your fact table(s), and replicate your dimensions (unless you have billion-row dimensions, in which case a hash distribution may be more appropriate).
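To illustrate the last two points, here is a rough sketch with made-up table and column names: a HEAP staging table, a hash-distributed clustered columnstore fact, and a replicated dimension.

-- Staging table as a HEAP, hash distributed on the likely join key
CREATE TABLE stg.Sale
(
    SaleKey      bigint         NOT NULL,
    CustomerKey  int            NOT NULL,
    Amount       decimal(18, 2) NULL
)
WITH (DISTRIBUTION = HASH(SaleKey), HEAP);

-- Fact table as a hash-distributed clustered columnstore
CREATE TABLE dbo.FactSale
(
    SaleKey      bigint         NOT NULL,
    CustomerKey  int            NOT NULL,
    Amount       decimal(18, 2) NULL
)
WITH (DISTRIBUTION = HASH(SaleKey), CLUSTERED COLUMNSTORE INDEX);

-- Dimension replicated to every compute node
CREATE TABLE dbo.DimCustomer
(
    CustomerKey  int            NOT NULL,
    CustomerName nvarchar(200)  NULL
)
WITH (DISTRIBUTION = REPLICATE, CLUSTERED COLUMNSTORE INDEX);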
If you're using CTAS algorithms, your need for non-clustered indexes should be very low. I generally add indexes only when I see a performance problem with a query that I can't solve by any other technique.
Finally, make sure that you're using a reasonable DWU and Resource Class. A general rule of thumb is that you shouldn't be running your ELT at less than DWU500, and LARGERC. If you don't do this, you'll find that you get bad clustered columnstore indexes which will lead to future performance problems.
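Resource classes are assigned by adding the loading user to the corresponding database role; a sketch, assuming a hypothetical elt_load_user account:

EXEC sp_addrolemember 'largerc', 'elt_load_user';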
Some input from my side -
Your fact table should be partitioned. In fact, you should have a job which creates the partitions in the fact table automatically.
How big is the fact table? If it is becoming too big, then based on your requirements you can think of archiving old data that is no longer required in the fact table.
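As a sketch of what a partitioned fact could look like, reusing the hypothetical FactSale table from above with monthly boundaries on an integer date key (the boundary values here are made up):

CREATE TABLE dbo.FactSale
(
    SaleKey  bigint         NOT NULL,
    DateKey  int            NOT NULL,
    Amount   decimal(18, 2) NULL
)
WITH
(
    DISTRIBUTION = HASH(SaleKey),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION (DateKey RANGE RIGHT FOR VALUES (20240101, 20240201, 20240301))
);

-- The automatic job could add next month's (still empty) boundary ahead of time:
ALTER TABLE dbo.FactSale SPLIT RANGE (20240401);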
I am using a PostgreSQL database.
My application manages many objects of the same type.
For each object my application performs intense DB writing: each object has a row inserted into the DB at least once every 30 seconds. I also need to retrieve the data by object id.
My question is: how is it best to design the database? Use one huge table for all the objects (slower inserts), or use a table for each object (more complicated retrievals)?
Tables are meant to hold a huge number of objects of the same type. So your second option, one table per object, doesn't seem right. But of course, more information is needed.
My tip: start with one table. If you run into problems - mainly performance - try to split it up. It's not that hard.
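For instance, a single-table sketch in PostgreSQL (table and column names are made up), with a composite index to cover the retrieve-by-object-id case:

CREATE TABLE object_samples (
    object_id   bigint      NOT NULL,
    recorded_at timestamptz NOT NULL DEFAULT now(),
    payload     jsonb
);

-- Supports "give me the rows for object X, newest first"
CREATE INDEX object_samples_object_id_recorded_at_idx
    ON object_samples (object_id, recorded_at DESC);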
Logically, you should use one table.
However, the so-called "write amplification" problem exhibited by PostgreSQL seems to have been one of the main reasons why Uber switched from PostgreSQL to MySQL. Quote:
"For tables with a large number of secondary indexes, these
superfluous steps can cause enormous inefficiencies. For instance, if
we have a table with a dozen indexes defined on it, an update to a
field that is only covered by a single index must be propagated into
all 12 indexes to reflect the ctid for the new row."
Whether this is a problem for your workload, only measurement can tell - I'd recommend starting with one table, measuring performance, and then switching to multi-table (or partitioning, or perhaps switching the DBMS altogether) only if the measurements justify it.
A single table is probably the best solution if you are certain that all objects will continue to have the same attributes.
INSERT does not get significantly slower as the table grows – it is the number of indexes that slows down data modification.
I'd rather be worried about data growth. Do you have a design for getting rid of old data? Big DELETEs can be painful; sometimes partitioning helps.
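A sketch of the partitioning option, assuming PostgreSQL 10 or later and the same hypothetical table as above, declared as range-partitioned by month so old data can be dropped instead of deleted:

CREATE TABLE object_samples (
    object_id   bigint      NOT NULL,
    recorded_at timestamptz NOT NULL,
    payload     jsonb
) PARTITION BY RANGE (recorded_at);

CREATE TABLE object_samples_2024_01 PARTITION OF object_samples
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

-- Retiring a month is cheap compared to a big DELETE:
DROP TABLE object_samples_2024_01;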
I've created the following table on GreenPlum:
CREATE TABLE data."CDR"
(
mcc text,
mnc text,
lac text,
cell text,
from_number text,
to_number text,
cdr_time timestamp without time zone
)
WITH (
OIDS=FALSE, appendonly=true, orientation=column, compresstype=quicklz, compresslevel=1
)
DISTRIBUTED BY (from_number);
I've loaded one billion rows into this table, but every query runs very slowly.
I need to query on all fields (not only one).
What can I do to speed up my queries?
Use partitioning? Use indexes?
Maybe use a different DB like Cassandra or Hadoop?
This highly depends on the actual queries you are doing and what your hardware setup looks like.
Since you are querying all the fields, the selectivity gained by going with columnar orientation is probably hurting you more than helping, as you need to scan all the data anyway. I would remove the columnar orientation.
Generally speaking indexes don't help in a Greenplum system. Usually the amount of hardware that is involved tends to make scanning the data directory faster than doing index lookups.
Partitioning could be a great help but there would need to be a better understanding of the data. You are probably accessing specific time intervals so creating a partitioning scheme around cdr_time could eliminate the scan of data not needed for the result. The last thing I would worry about is indexes.
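A rough sketch of monthly range partitioning on cdr_time in Greenplum (the date boundaries here are made up and would need to match your actual data range):

CREATE TABLE data."CDR_by_month"
(
mcc text,
mnc text,
lac text,
cell text,
from_number text,
to_number text,
cdr_time timestamp without time zone
)
WITH (appendonly=true, compresstype=quicklz, compresslevel=1)
DISTRIBUTED BY (from_number)
PARTITION BY RANGE (cdr_time)
(
START (date '2017-01-01') INCLUSIVE
END (date '2018-01-01') EXCLUSIVE
EVERY (INTERVAL '1 month')
);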
Your distribution by from_number could have an impact on query speed. The system will hash the data based on from_number so if you are querying selectively on the from_number the data will only be returned by the node that has it and you won't be leveraging the parallel nature of the system and spreading the request across all of the nodes. Unless you are joining to other tables on from_number, which allows the joins to be collocated and performed within the node, I would change that to be distributed RANDOMLY.
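If that applies to your workload, changing the distribution policy is a one-liner; note that depending on your Greenplum version an explicit reorganize may also be needed to physically move the existing rows:

ALTER TABLE data."CDR" SET DISTRIBUTED RANDOMLY;
-- Possibly also: ALTER TABLE data."CDR" SET WITH (REORGANIZE=true);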
On top of all of that there is the question of what the hardware is and whether you have a proper number of segments set up and the resources to feed them. Essentially every segment is a database. Good hardware can handle multiple segments per node, but if you are doing this on light hardware you need to find the sweet spot where the number of segments matches what the underlying system can provide.
@Dor,
I have the same type of data, with CDR info stored for a telecom company: 10-12 million rows inserted daily, and heavy queries running on those CDR-related tables. I was facing the same issue last year, and I created partitions on those tables on the CDR time column.
As per my understanding, Greenplum creates physical tables for each partition, whereas other RDBMSs create logical partitions. After this I got better performance with all SELECTs on these tables. Also, I think you should convert the text datatype to character varying for all columns (if text is not really required); I felt DB operations on text fields were very slow (especially ORDER BY and GROUP BY).
Whether an index will help depends on your queries; in my case I have huge inserts, so I haven't tried one yet.
If you are selecting all the columns in your SELECT, there is no need for a column-oriented table.
This question might be a bit too general but I thought I would ask. I'm working with a terabyte scale data warehouse in SQL Server 2008 R2. There is a large fact table with data going back 5 years. I have aggregated a lot of this old data to a different table at a higher level of granularity. The next step is to remove the old data from my fact table.
I've decided that partition swapping is probably the best way to go to remove the older rows from the fact table and put them in an archive table, but I was wondering what a partition swap will do to stats and indexes on my fact table? Should I consider manually updating statistics after a partition swap? (auto update is set to off), will my indexes be fragmented and need reorganising or rebuilding?
Thanks for your help!
Partition switching is a metadata operation, so it's not going to cause fragmentation, as no physical data is actually moving, just logical references to it.
You should probably be updating statistics on a large table regularly, but it's not especially needed after a partition switch.
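For example (the table names and partition numbers here are hypothetical):

ALTER TABLE dbo.FactSales SWITCH PARTITION 1 TO dbo.FactSalesArchive PARTITION 1;

-- Since auto update is off, refresh statistics as part of regular maintenance
UPDATE STATISTICS dbo.FactSales;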
Hi, I am building a database with a couple of tables (products, orders, customers), and I am wondering if it is possible to do such a trick: generate a table every day based on the orders table, named after the current day, because the orders table will get about 1000 or more rows every day and that will hurt application speed.
1000 rows is nothing. What database are you using? Most modern databases can handle millions of rows with no issue, as long as you put some effort into proper indexing of the table.
From your comment, I'm assuming you don't know about database table indexing.
"A database index is a data structure that improves the speed of data retrieval operations on a database table at the cost of slower writes and increased storage space. Indices can be created using one or more columns of a database table, providing the basis for both rapid random lookups and efficient access of ordered records."
From http://en.wikipedia.org/wiki/Database_index
You need to add indexes to your database tables to ensure they can be searched optimally.
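For example, assuming hypothetical customer_id and created_at columns on your orders table, something like this (the syntax is broadly the same in MySQL and PostgreSQL):

CREATE INDEX idx_orders_customer_created ON orders (customer_id, created_at);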
What you are suggesting is a bad idea IMO, and it's going to make working with the application a pain. Instead, if you really fill this table with vast amounts of data you could consider periodically archiving old data, but don't do this until you really need to.