bcp queryout dreadful performance (SQL Server 2008 R2)

I'm using SQL Server 2008 R2. I have a stored procedure that runs bcp via xp_cmdshell. On my laptop, against a copy of the database, a job with 50,000 records is almost instant and bcp reports about 71K rows per second.
I run exactly the same stored procedure on the server and it takes 1 hour 51 minutes, with bcp reporting about 7 rows per second (roughly 10,000x slower). The query that selects the data runs in under a second on the server, by the way. This happened last week; we restarted the SQL Server instance and it ran quickly again on the server. After about 5 days the performance got really slow again, but this time restarting the SQL instance didn't help.
My command is:
bcp "exec DBNAME.dbo.SPNAME 224,1" queryout "\\Server\path\OUTPUT\11111.txt" -c -t\t -Usa -P"PASSWORD" -SSQLSERVER
If I run Activity Monitor, I can see my stored procedure's session and it shows as RUNNABLE.
The server is on a VM with 4 cores and 28GB RAM.
If I run the same bcp command directly from a DOS shell, I get the same result.
I'm at a loss where to look now. Anyone got any suggestions?
TIA
Mark

To answer the question of "where to look": because the task you are trying to complete involves distributed resources (I'm assuming so because you are using UNC paths), you have to look into the differences between the environments, and when comparing execution on the server with execution on the laptop, that is just about everything:
Storage (and available storage)
CPU (and available CPU)
Network (and available bandwidth)
Memory (and available memory)
SQL Server version/updates
Maintenance schedules (of which the laptop will likely have none)
Concurrent activity (of which the laptop will likely have none)
The data you seem to have addressed, but can you confirm that the data and database objects really are the same? Is the database on the laptop a restore from the server, or, if not, have you manually inspected the tables and indexes to confirm they match?
If it is not a restore, could the laptop simply have less data?
To troubleshoot, you'll also need much more than Activity Monitor; you'll need Performance Monitor.
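It can also help to look at what the bcp session is actually doing while it shows as RUNNABLE. Here is a minimal sketch, assuming pyodbc and the stock "SQL Server" ODBC driver are available, with placeholder connection details; it just reads the standard sys.dm_exec_requests DMV:

import pyodbc

# Placeholder connection details; point this at the slow server.
conn = pyodbc.connect("DRIVER={SQL Server};SERVER=SQLSERVER;UID=sa;PWD=PASSWORD")
cur = conn.cursor()
# Show every active user request, what it is running, and what it is waiting on.
cur.execute("""
    SELECT session_id, status, command, wait_type, last_wait_type,
           wait_time, blocking_session_id, cpu_time, total_elapsed_time
    FROM sys.dm_exec_requests
    WHERE session_id > 50;  -- skip system sessions
""")
for row in cur.fetchall():
    print(row)

Roughly speaking: if wait_type stays empty while the session sits RUNNABLE, it is mostly waiting for CPU; if it keeps showing something like ASYNC_NETWORK_IO, the bottleneck is more likely the client/target share than the query itself.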
This is from some time ago (not sure why things like this don't expire on here, but oh well).


Prevent Data Loss during PostgreSQL Host Shutdown

So I've spent the better part of my day (and several searches before) looking for a workable solution to prevent data loss when the host of a PostgreSQL server installation gets rebooted or shut down. We maintain a number of Azure and on-prem servers, and the number of times someone has inadvertently shut down a server without first ensuring Postgres is no longer flushing data to disk is far higher than it should be. Of note, we are a Windows Server shop.
Our current best practice (which works if followed) is to stop the Postgres service, then watch disk writes to the Postgres data directory in Resource Monitor. Once nothing is writing to that directory, shut down the host. I have to think there's a better way to ensure the host doesn't get shut down in a manner that leads to data corruption, regardless of adherence to the best practice (or, in some cases, because Windows Update mandates a reboot regardless of the configured settings telling it not to reboot).
Some things I've considered, but have been unable to find solid answers for:
Create a scheduled task that uses the "On an event" trigger to monitor the System log for event 1074. It would have to be configured to "run whether the user is logged on or not". The script would cancel the shutdown command with shutdown /a, then run a script to gracefully shut down Postgres. I've seen mixed reports on whether the scheduled task would reliably trigger before Task Scheduler is terminated in the shutdown sequence.
Create a shutdown script using Group Policy. My question there is whether Windows will wait for the script to complete before executing the shutdown.
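For illustration, the core of such a shutdown script could be as simple as stopping the service and propagating the result. A minimal sketch in Python (the service name postgresql-x64-10 is a placeholder, and in a Group Policy shutdown script you would more likely wrap the same net stop call in a batch or PowerShell file):

import subprocess
import sys

SERVICE = "postgresql-x64-10"  # placeholder; check the actual service name with: sc query

# "net stop" blocks until the service's own shutdown routine finishes,
# which for PostgreSQL includes its final checkpoint and flush to disk.
result = subprocess.run(["net", "stop", SERVICE], capture_output=True, text=True)
print(result.stdout or result.stderr)
sys.exit(result.returncode)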
How do you deal with the risk of data loss on your Windows hosts running Postgres?
First, if you register PostgreSQL as a Windows service, a shutdown of the machine will automatically shut down PostgreSQL first.
But even without that, a properly configured PostgreSQL server on proper hardware will never suffer data loss (unless you hit a rare PostgreSQL software bug). It is one of the basic requirements for a relational database to survive crashes without data loss.
To enumerate a few things that come to mind:
make sure that the PostgreSQL parameters fsync and synchronous_commit are set to on
make sure that you are using a reliable file system for the data files and the WAL (a Windows network share is not a reliable file system)
make sure you are using storage that has no caches that are not battery-backed
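As a quick way to verify the first point, you can read the live settings from pg_settings. A minimal sketch assuming psycopg2 and placeholder connection details:

import psycopg2

# Placeholder connection details; adjust for your server.
conn = psycopg2.connect(dbname="postgres", user="postgres", host="localhost")
cur = conn.cursor()
# Both of these should report 'on' for a durable configuration.
cur.execute("SELECT name, setting FROM pg_settings WHERE name IN ('fsync', 'synchronous_commit');")
for name, setting in cur.fetchall():
    print(name, "=", setting)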

Postgres utilizing 100% CPU on EC2 Instance? Why? How to fix?

I am facing the same issue regularly; it happens 1-3 times a month, mostly on weekends.
To explain: CPU utilization has been pinned at 100% for the last 32 hours.
EC2 Instance is t3.medium
Postgres version is 10.6
OS : Amazon Linux 2
I have tried to gather all the information I could using the commands provided in this reference: https://severalnines.com/blog/why-postgresql-running-slow-tips-tricks-get-source
But I didn't find any inconsistency or leak in my database. However, while checking which processes were consuming all the CPU, I found that the following command is the culprit, running for more than 32 hours:
/var/lib/postgresql/10/main/postgresql -u pg_linux_copy -B
This command is currently running as 3 separate processes, which have been running for 32 hours, 16 hours and 16 hours respectively.
Searching for this didn't return a single result on Google, which is heartbreaking.
If I kill the process, everything goes back to normal.
What is the issue, and what can I do to prevent this from happening again in the future?
I was recently contacted by the AWS EC2 abuse team about my instance being involved in an intrusion attempt against another server.
To my surprise, I found out that because I had used the very weak password "root" for the default postgres account, and had also left the Postgres port open to the public, an attacker had silently gained access to the instance and was using it to try to gain access to another instance.
I am still not sure how they were able to run SSH commands after gaining access to the master database account.
To summarise: one reason for unusual CPU spikes on a database server could be that someone is attacking your system.
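If you suspect something similar, one quick check is to look for long-running backends you don't recognise; attacks on exposed superuser accounts often abuse COPY ... FROM PROGRAM to run shell commands as the postgres OS user, which would also explain SSH attempts originating from the database account. A minimal sketch with psycopg2 and placeholder connection details:

import psycopg2

# Placeholder connection details; run this against the affected instance.
conn = psycopg2.connect(dbname="postgres", user="postgres", host="localhost")
cur = conn.cursor()
# List non-idle backends with how long their current query has been running.
cur.execute("""
    SELECT pid, usename, state, now() - query_start AS runtime, query
    FROM pg_stat_activity
    WHERE state <> 'idle'
    ORDER BY runtime DESC;
""")
for row in cur.fetchall():
    print(row)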

SQL Server management scaling

We are playing with a multi-tenant architecture based not on partitions but rather on having tons of databases. We decided to run some tests.
Generated 5,000 database schemas, each containing ~100 DB objects: 250k tables and 250k other DB objects (keys, indexes) in total.
Found cons:
Tried to open the list of tables from SQL Management Studio – it took ~10-15 min and Management Studio allocated ~700 MB of RAM.
DB utilities don't work – tried Red Gate, DB Forge, Adept SQL Diff.
Any advice when managing and running SQL Server like this?
Try using the sqlcmd utility from the command prompt.
You could try writing your own management tool, targeted specifically at what you need, using SMO:
Creating SMO Programs - MSDN
That way you can keep the program simple and load only what is required, potentially improving performance.
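SMO is a .NET API, so such a tool would typically be written in C# or PowerShell. As a rough illustration of the same idea in plain Python (a sketch using pyodbc and a placeholder connection string, querying the catalog views directly rather than using SMO), enumerating objects this way avoids loading everything into SSMS:

import pyodbc

# Placeholder connection string; adjust server and authentication.
conn = pyodbc.connect("DRIVER={SQL Server};SERVER=.;Trusted_Connection=yes")
cur = conn.cursor()
# Walk the user databases and count tables in each without opening SSMS.
cur.execute("SELECT name FROM sys.databases WHERE database_id > 4 ORDER BY name;")
for (db,) in cur.fetchall():
    cur.execute(f"SELECT COUNT(*) FROM [{db}].sys.tables;")
    print(db, cur.fetchone()[0])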

Distributed write job crashes remote machine with MongoDB server

Looking for any advice I can get.
I have 16 virtual CPUs all writing to a single remote MongoDB server. The machine that's being written to is a 64-bit machine with 32GB RAM, running Windows Server 2008 R2. After a certain amount of time, all the CPUs stop cold (no gradual performance reduction), and any attempt to get a Remote Desktop Connection hangs.
I'm writing from Python via pymongo, and the insert statement is "[collection].insert([document], safe=True)"
I decided to more actively monitor my server as the distributed write job progressed, remoting in from time to time and checking the Task Manager. What I see is a steady memory creep, from 0.0GB all the way up to 29.9GB, in a fairly linear fashion. My leading theory is therefore that my writes are filling up the memory and eventually overwhelming the machine.
Am I missing something really basic? I'm new to MongoDB, but I remember that when writing to a MySQL database, inserts are typically followed by commits, where it's the commit statement that actually makes sure the record is written. Here I'm not doing any commits...?
Thanks,
Dave
Try it with journaling turned off and see if the problem remains.
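On the commit question in the post: MongoDB has no separate commit statement; each write is acknowledged (or not) according to its write concern, which is what safe=True requested in the old pymongo API. A minimal sketch of the modern equivalent (server address and names are placeholders):

from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://mongo-server:27017/")  # placeholder address
# w=1 waits for the primary to acknowledge the write;
# j=True additionally waits until it has been written to the journal.
coll = client["mydb"].get_collection("mycoll", write_concern=WriteConcern(w=1, j=True))
coll.insert_one({"example": "document"})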

How to grab a full memory dump of a process with large memory usage

I am hosting IIS-based web service applications on a Windows 2008 64-bit system running on a quad-core, 8 GB machine. I've run into a couple of instances where w3wp was using 7.6 GB of memory and nothing else on the system was responding, including RDP. Right-clicking the process in Task Manager and creating a dump froze the system and all its threads for a long time (close to 30 minutes). When the freeze-up occurred during off hours, we let the dump run for a while (close to 1 hour) but it still didn't complete. In the interest of getting the system back up, we had to kill IIS.
Tried other tools like procexp, DebugDiag, etc. to create a full memory dump, and all had the same results.
So, what tool does the community use to grab dump files quickly, or without freezing all the threads? I realize the latter might be a rhetorical question. But what are the options for generating such a large dump file without locking up the system for a long time?
IMO you shouldn't have to wait until the process memory grows to 8 GB; with something like 3-4 GB you should already be able to detect the memory leak.
Procdump has an option based on memory threshold
-m Memory commit threshold in MB at which to create a dump of the process.
I would use this option to dump the memory of the process.
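For example, a small wrapper like the following (a sketch; the procdump path, the 4096 MB threshold, the process name, and the dump directory are all placeholders) would ask procdump to write a full dump as soon as w3wp's committed memory crosses the threshold, rather than waiting for it to reach 7-8 GB:

import subprocess

# All paths and values are placeholders; -ma = full dump, -m = commit threshold in MB.
subprocess.run([
    r"C:\Sysinternals\procdump.exe",
    "-ma",
    "-m", "4096",
    "w3wp.exe",          # or a specific PID if several worker processes are running
    r"D:\dumps",
])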
An SSD would also help write the dump faster.
WPA, a.k.a. xperf (http://msdn.microsoft.com/en-us/performance/cc825801.aspx), is a powerful tool for diagnosing applications. You will get the call stack of the culprit allocations. You don't have to collect a dump, it is non-invasive, and it doesn't put much load on production systems.
Complete step-by-step information is available here: http://msdn.microsoft.com/en-us/library/ff190906(v=VS.85).aspx