Aug 28, 2013

Your Storage Probably Affects Your System Performance

One of my clients had started getting some performance alerts in their monitoring systems: "IO is too high".
This is probably not something you will be glad to see.

A quick analysis showed that the alerts and the high IO originated from servers installed in a new data center.
While the actual CPU utilization devoted to IO wait at the old data center was around 25%, in the new data center it was about 75%.

Who is to Blame?
In the new data center a NetApp 2240c was chosen as the storage appliance, while the old one used an IBM V7000 Unified. Both systems had SAS disks, so we didn't expect a major difference between the two. Yet it was something worth exploring.

Measurement 
In order to verify the source, we ran a read/write performance benchmark on both systems using the following commands:
  1. Write: dd if=/dev/zero of=/tmp/outfile count=512 bs=1024k
  2. Read: dd if=/tmp/outfile of=/dev/null bs=4096k
UPDATE I: You should also try btest or IOMeter, as suggested by Yuval Kashtan.
UPDATE II: When using dd, it is better to use dd if=/dev/urandom of=/tmp/outfile.txt bs=2048000 count=100, which actually uses random input; /dev/zero just allocates space filled with nulls.
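For reference, a minimal sketch of a full benchmark run that combines the commands above with the random-input advice from UPDATE II; the file path, block size and count are only examples, and dropping the page cache assumes a Linux host with root access:
  1. Write (random input): dd if=/dev/urandom of=/tmp/outfile bs=2048000 count=100
  2. Drop the page cache so the read test hits the disk rather than memory: sync && echo 3 > /proc/sys/vm/drop_caches
  3. Read: dd if=/tmp/outfile of=/dev/null bs=4096k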

Results
On the NetApp 2240 we got a 0.62 GB/s write rate and a 2.0 GB/s read rate (at site #2).
On the IBM V7000 Unified we got a 0.57 GB/s write rate and a 2.1 GB/s read rate (at site #2).
On the IBM V7000 Unified we got a 1.1 GB/s write rate and a 3.4 GB/s read rate (at site #1).
That is almost a 100% boost when we used the IBM system at site #1!

Bottom Line
When selecting and migrating between storage appliances, pay attention to their performance. Otherwise, you may encounter these differences in production. However, differences should be inspected in the same environment. In our case, something that seemed like a storage issue turned out to be a VM/OS configuration or network issue (the exact cause is still under investigation).

Keep Performing,
Moshe Kaplan

Aug 16, 2013

Azure Production Issues: Not a nice thing to share with your friends...

Last night I got a call from a close friend.
"Our production SQL Server VM at Azure is down, and we cannot provide service to clients".
A short analysis showed that we were in deep trouble:

  1. The server status was: Stopped (Failed to start).
  2. Repeated attempts to start the server resulted in Starting... and then back to the failure message.
  3. Changing the instance configuration, as proposed in various forums and blogs, resulted in the same failure message.
  4. Microsoft claimed that everything was okay with its data centers.
  5. Checking the Azure storage container revealed that the specific VHD disk had not been updated since the server failure.
Since going back to a cold backup would mean losing too much data, we had to somehow restore the failed server.

The odds were against us. Yet, luckily, we managed to do it by restoring the database files from the VHD (VM disk) file that was available in Azure storage.

How to Recover from a Stopped (Failed to start) VM?
  1. Start a new instance in the same availability set (that way you can continue using the same DNS name, instead of also deploying a new version of the app servers).
  2. Attach a new large disk to the instance (the failed server's disk was 127GB; make sure the allocated disk is larger).
  3. Start the new machine.
  4. Format the disk as a new drive.
  5. Get to your Azure account and download the VHD file from Azure storage. Make sure you download it to the right disk. We found out that the download process takes several hours even when the blob storage is in the same data center as the VM.
  6. Mount the VHD file you downloaded as a new disk.
  7. Extract the database and log files from the new disk and attach them to the new SQL Server instance.
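For the last step, a minimal sketch of the attach command, assuming the extracted files were copied to F:\Data and the database is named AppDB (both names are examples, not the actual ones); run it on the new server with sqlcmd:
  sqlcmd -S localhost -Q "CREATE DATABASE AppDB ON (FILENAME = 'F:\Data\AppDB.mdf'), (FILENAME = 'F:\Data\AppDB_log.ldf') FOR ATTACH"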
Other recommendations:
  1. Keep your backup files updated and in a safe place.
  2. Keep your database data and log files out of the system disk, so you could easily attach them to other servers.
Bottom Line
When the going gets tough, the tough get going

Keep Performing,
Moshe Kaplan
