What is the table record limit in Microsoft SQL Server 2008 R2?

I'm up to a little over 200,000 records in an error log table so I was just wondering. Thanks.

These are some of the Maximum Capacity Specifications for SQL Server 2008 R2:
Database size: 524,272 terabytes
Databases per instance of SQL Server: 32,767
Filegroups per database: 32,767
Files per database: 32,767
File size (data): 16 terabytes
File size (log): 2 terabytes
Rows per table: Limited by available storage
Tables per database: Limited by number of objects in a database
Duplicated from: this StackOverflow answer
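Since rows per table are limited only by available storage, the practical question is how much space the error log table is actually using. One way to check this in SQL Server is sp_spaceused (the table name below is a placeholder):

-- Row count and space used by the error log table (dbo.ErrorLog is a placeholder name)
EXEC sp_spaceused N'dbo.ErrorLog';

-- Or just count the rows
SELECT COUNT(*) AS row_count FROM dbo.ErrorLog;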

Not really an answer, but I'd suggest looking at this documentation:
https://learn.microsoft.com/en-us/sql/sql-server/maximum-capacity-specifications-for-sql-server

Related

How much shared memory does Postgresql need per max_locks_per_transaction?

I have a PostgreSQL 10 database with about 300k tables in 23k schemas. I am trying to upgrade to PostgreSQL 13 using pg_upgradecluster. This is failing while attempting to dump all the schemas:
pg_dump: error: query failed: ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
pg_dump: error: query was: LOCK TABLE "a45119740"."activity_hrc" IN ACCESS SHARE MODE
Is setting max_locks_per_transaction to 300k something that can be done? I haven't been able to find anything explaining how much shared memory this might need. The machine has 64 GB of RAM.
(I understand that I need to change my DB design. I have been backing up one schema at a time until now, so I wasn't aware of this problem.)
Your lock table needs to be big enough to lock all your tables and metadata tables.
Since the lock table has room enough for
max_locks_per_transaction * (max_connections + max_prepared_transactions)
locks, all you need to do is set max_locks_per_transaction big enough that the lock table can hold the locks your pg_dump and the other workload need.
As for how much space each entry in the lock table needs: that can vary with your architecture, but in general it is sizeof(LOCK) + sizeof(LOCKTAG), which is 168 bytes on my Linux system.
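A rough sizing sketch, assuming the defaults max_connections = 100 and max_prepared_transactions = 0, roughly 300,000 locks needed for the dump, and about 168 bytes per lock-table entry as noted above (all numbers are illustrative, not a recommendation):

-- Check the relevant settings first
SHOW max_connections;                -- assume 100
SHOW max_prepared_transactions;      -- assume 0

-- 3000 * (100 + 0) = 300,000 lock slots, enough for ~300k table locks
ALTER SYSTEM SET max_locks_per_transaction = 3000;
-- max_locks_per_transaction only takes effect after a server restart

-- Approximate extra shared memory for the lock table: 300,000 entries * 168 bytes
SELECT pg_size_pretty(300000 * 168::bigint) AS approx_lock_table_memory;   -- about 48 MB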

Pentaho table input giving very poor performance on Postgres tables, even for two columns in a table

A simple source read from a Postgres table (getting 3 columns out of 20) is taking a huge amount of time; the rows feed a stream lookup step where I fetch one column of information.
Here is the log:
2020/05/15 07:56:03 - load_identifications - Step **Srclkp_Individuals.0** ended successfully, processed 4869591 lines. ( 7632 lines/s)
2020/05/15 07:56:03 - load_identifications - Step LookupIndiv.0 ended successfully, processed 9754378 lines. ( 15288 lines/s)
The table input query is:
SELECT
id as INDIVIDUAL_ID,
org_ext_loc
FROM
individuals
This table is in Postgres, with hardly 20 columns and about 4.8 million rows.
This is Pentaho Data Integration 7.1; server details are below.
**Our server information**:
OS : Oracle Linux 7.3
RAM : 65707 MB
HDD Capacity : 2 Terabytes
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
CPU(s): 16
CPU MHz: 2294.614
I am connecting to Postgres using JDBC.
I don't know what else I can do to get about 15K rows/sec throughput.
Check the transformation properties under Miscellaneous:
Nr of rows in rowset
Feedback size
Also check whether your table has a proper index; a covering-index sketch follows below.
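For example, a covering index on just the two columns the query reads lets PostgreSQL use an index-only scan instead of scanning the full 20-column heap (a sketch, assuming the table and column names from the query above):

-- Covering index for the two-column read (adjust names to your schema)
CREATE INDEX CONCURRENTLY idx_individuals_id_org_ext_loc
    ON individuals (id, org_ext_loc);

-- Keep the visibility map fresh so index-only scans are actually chosen
VACUUM ANALYZE individuals;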
When you use a table input and a stream lookup, the way Pentaho runs the stream lookup is slower than using a database lookup. As @nsousa suggested, I checked this with a dummy step and learned that Pentaho handles each type of step differently.
Even though database lookup and stream lookup fall into the same category, the database lookup performs better in this situation.
The Pentaho help pages give some ideas and suggestions regarding this.

Processing multiple concurrent read queries in Postgres

I am planning to use AWS RDS Postgres version 10.4 or above for storing data in a single table comprising ~15 columns.
My use case is to serve:
1. Periodically (every 1 hour) store/update rows into this table.
2. Periodically (every 1 hour) fetch data from the table, say 500 rows at a time.
3. Frequently fetch small amounts of data (10 rows) from the table (hundreds of queries in parallel).
Does AWS RDS Postgres support serving all of the above use cases?
I am aware of read-replica support, but is there any built-in load balancer to serve the queries that come in parallel?
How many read queries can Postgres process concurrently?
Thanks in advance
Your use cases seem to be a normal fit for any relational database system, so I would say: yes.
The question is how fast the DB can handle the 100 parallel queries (use case 3).
In general, the PostgreSQL documentation is one of the best I have ever read, so give it a try:
https://www.postgresql.org/docs/10/parallel-query.html
But also take into consideration how big your data is!
That said, try without read replicas first! You might not need them.
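To get a rough feel for the concurrency ceiling of a single instance, you can check how many sessions it accepts and how many are busy at any moment (standard PostgreSQL views; the actual limit depends on your RDS parameter group):

-- Maximum number of concurrent sessions the server will accept
SHOW max_connections;

-- Sessions currently connected, and how many are actively running a query
SELECT count(*) AS total_sessions,
       count(*) FILTER (WHERE state = 'active') AS active_queries
FROM pg_stat_activity;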

PostgreSQL database size is smaller after migration from Oracle

I have migrated from Oracle to PostgreSQL using the Ora2pg tool.
The database size before migration in Oracle was around 2 TB; the same database after migration to PostgreSQL seems to be only 600 GB.
NOTE: records were migrated correctly, with equal row counts.
I also wanted to know how PostgreSQL handles the bytea data type after migration from BLOB in Oracle.
You might want to check whether all migrated objects are present (a size-check query is sketched below).
However, this is not surprising, and there are several things that can contribute to that:
You counted the size of the tablespaces in Oracle, but they were partly empty.
Your table and index blocks were fragmented, while they are not in the newly imported PostgreSQL database.
Depending on which options you installed in Oracle, the data dictionary can be quite large (though that alone cannot explain the observed difference).
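A quick way to verify presence and size on the PostgreSQL side is to list the largest relations with the standard catalog functions (nothing Ora2pg-specific is assumed here):

-- Total size of the migrated database
SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size;

-- The 20 largest tables, including indexes and TOAST data
SELECT n.nspname, c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 20;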

Tableau extract using row store vs Column store DBs

I am creating a .TDE (Tableau extract) from a table in SQL Server that has around 180+ columns and 60 million records; it takes around 4 hours on our current infrastructure of 16 GB RAM and 12 cores.
I am looking for another way to do this faster. I would like to know whether loading my data into a column-store DB that can connect to Tableau, and then creating the TDE from the data in that column-store DB, would give better performance.
If so, please suggest such a column-store DB.
The Tableau SDK is a way to build TDE files without having to use Desktop. You can try it and see if you get better performance.
Does your TDE need all 180+ columns? You can get a noticeable performance improvement if your TDE contains only the columns you need.
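For example, instead of extracting the full 180+ column table, you could point the extract at a narrow view or custom SQL that keeps only what the dashboards use (the view and column names below are placeholders):

-- A narrow extract source with only the needed columns (placeholder names)
CREATE VIEW dbo.v_extract_source AS
SELECT order_id,
       order_date,
       customer_id,
       amount
FROM dbo.big_fact_table;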