Friday, November 14, 2014

Transaction Log Shipping Status report

Hello there,

I realise that many DBAs do not take advantage of the built-in reports in SQL Server Management Studio (SSMS). I have been using Log Shipping on production servers for more than 7 years, monitoring the status of my log-shipped databases with both in-house tools and SSMS's built-in reports. Not every shop can build its own monitoring tools, and some do not have the budget for a 3rd-party tool; but everyone can benefit from the built-in reports of SSMS!

When it comes to IT in general, monitoring is crucial; for a DBA, monitoring is everything. We manage our systems with monitoring tools: some let us act proactively, others help us react to problems before our managers or users start yelling at us.

Oh yes, I was going to write something about monitoring Log Shipping. If yours is a small shop, you probably have a single primary production server and have configured Log Shipping only on it; in that case you can use SSMS's Transaction Log Shipping Status report. I wanted to highlight this report because I do not see people writing about it on the internet. So I thought it could be useful to junior DBAs, to shops that do not have a DBA, or to those IT professionals (I call them "all-in-one", no offence!) who have to take care of everything IT-related in the office.

Here's a screenshot of the report I am talking about:

Transaction Log Shipping Status report

I had to crop the left side of the report because it was too large to fit on my screen; besides, I would have had to blur that whole column anyway, as it contains the database names of one of my production servers.

Anyway, as you can see from the screenshot above, the report shows everything you need to know about the status of a database's Log Shipping: how long ago the last backup was performed, the threshold and whether an alert would be triggered if the threshold were crossed, and the details of the other fundamental Log Shipping jobs, Copy and Restore.

Here's how you can open this report:
- Open SSMS and go to Object Explorer,
- Right-click the SQL Server instance name and select "Reports" and then "Standard Reports",
- You will see the "Transaction Log Shipping Status" report at the bottom of the list; click it and there you go.

You will see many more reports about server monitoring in this menu; you can play with them to learn more.
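If you prefer a query over the graphical report, the same status information lives in the monitor tables in msdb, and there is a documented system procedure that summarizes it:

```sql
-- Run on the monitor server (or on a primary/secondary server);
-- returns backup, copy and restore status, thresholds and alert
-- flags for each log-shipped database.
EXEC msdb.dbo.sp_help_log_shipping_monitor;
```

This is handy when you want to check the status from a script or a scheduled job instead of opening SSMS.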

I hope it helps!

Ekrem Önsoy

Wednesday, November 12, 2014

Idera: Another awful experience with Compliance Manager

Hi folks!

For more than a year I have been using Idera's Compliance Manager (CM), which is, as its name suggests, a compliance tool. I have actually created lots of tickets for lots of problems with this tool during that period. I guess some of you may find that comment enough to stay away from this tool, especially in a production environment, but for the others, let me tell you a fresh story I experienced just this morning.

Last night I enabled the Before/After feature for some specific fields of a specific table, to collect detailed information about modifications. Since I assumed its agent sends the records from the source to the target, I watched the repository database for any change, but I saw nothing spectacular in the sizes of its tables. This morning, however, clients began screaming with the following kind of error:

ErrorQuery: SELECT 'Error.  Table: dbo.xxx, Error: Invalid object name ''SQLcompliance_Data_Change.SQLcompliance_Changed_Data_Table''.' Message FROM [SQLcompliance_Data_Change].[SQLcompliance_Changed_Data_Table] WHERE 1 = 0An error occurred sending error message query: Invalid object name 'SQLcompliance_Data_Change.SQLcompliance_Changed_Data_Table'.Invalid object name 'SQLcompliance_Data_Change.SQLcompliance_Changed_Data_Table'.The statement has been terminated.

The name of the table, which I replaced with "xxx" in the error message, was the one I had enabled for Before/After data collection. As soon as I saw this error message, I suspected that CM might have created DML triggers on this critical table, and that was indeed the case! I immediately shut down CM's agent service, dropped the trigger, and cleared this configuration from the properties of the related database in the CM console.
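For the record, finding the DML triggers that were created on a table only takes a catalog query; a sketch like this would locate them ("dbo.xxx" stands in for the real table name, and the trigger name in the DROP is hypothetical, so it stays commented out until you have confirmed it):

```sql
-- List the DML triggers defined on the affected table.
SELECT t.name AS trigger_name,
       t.is_disabled
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID(N'dbo.xxx');

-- Once identified, drop the offending trigger:
-- DROP TRIGGER dbo.SQLcompliance_some_trigger;
```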

According to the feedback from the end users, that solved the problem, so I then dug into it to understand more about "SQLcompliance_Data_Change"; since it appears in front of the table name, I assumed it was a schema. However, neither the schema nor the table existed! The database where I had this problem did not contain them. So it turned out that CM keeps these records in the production database itself, much like a CDC configuration; but it skipped or failed to create the schema and the table, it only created the trigger, and our most critical production database went down!
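You can confirm whether the schema and the table actually exist with a couple of catalog queries (both names are taken from the error message above):

```sql
-- Does the schema exist? No row means it does not.
SELECT name
FROM sys.schemas
WHERE name = N'SQLcompliance_Data_Change';

-- Does the table exist? NULL means it does not.
SELECT OBJECT_ID(N'SQLcompliance_Data_Change.SQLcompliance_Changed_Data_Table') AS object_id;
```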

For those who consider using this tool in their production environment: God bless you, my friends...

Ekrem Önsoy

Heads up! A "gotcha" about FCI.

Hello there,

I was watching Ryan Adams's presentation from the PASS Summit 2014 on PASSTV, and he talked about a "gotcha" of SQL Server Failover Cluster Instances; I wanted to spread the word a little further, which is why I'm sharing it here.

The thing is, you can now place TEMPDB on a local drive (such as an SSD) when you set up a SQL Server Failover Clustered Instance. The directory will be created by SQL Server Setup, and the permissions on that directory will be granted by Setup as well, but this is done only for the first node of the cluster! When you add the second node, that same directory path will not be created on it, and consequently the permissions will not be granted either. He mentions that you are warned about this only once, during the setup of the first node, and not when adding the other nodes. So be careful: the SQL Server resources cannot be brought online, because TEMPDB cannot be created if a directory with the same path does not exist on the other node.
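Before testing a failover, it is worth checking which local path TEMPDB expects, so you can create the same directory (with permissions for the SQL Server service account) on every node; a quick check:

```sql
-- Shows the physical paths of the TEMPDB files; these directories must
-- exist on every node of the FCI before a failover to it can succeed.
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb');
```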

Ekrem Önsoy

Tuesday, November 11, 2014

Sneak peek: Stretching tables into Azure!

Hello friends,

Unfortunately I could not attend the PASS Summit this year either, but I am trying to catch up through PASSTV. While watching the keynote by Thomas LaRock, the man at the head of PASS, something interesting caught my attention: Rengarajan, with the help of an engineer, talked about keeping the hot data of a table (for security or performance reasons) on your local server while stretching the cold data (for example, data to be archived, rarely queried, or non-sensitive) into Azure. As I noted in the title, this is not a new, officially named feature yet; that is, nobody said it will come in the next version of either Azure or SQL Server. It was only briefly demonstrated, but to me it looks like a promising feature that will be added to upcoming versions in the near future.

I am sharing a screenshot from the presentation below:

You can also watch the keynote I mentioned here:

Nothing was said about how this feature works under the hood either, but a few commands can be seen in the presentation. For example, replication can be paused like this:


Or it can be resumed with the following command:


I should also add that you can keep working on the table while replication is in progress.

Take care,
Ekrem Önsoy

Thursday, November 6, 2014

Tip: Full Recovery Model


For example, suppose we switch the Recovery Model of a database that was previously SIMPLE to FULL with the following command:
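In T-SQL, the switch amounts to this:

```sql
ALTER DATABASE [AdventureWorks2012] SET RECOVERY FULL;
```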


Some might think that just running this command is enough to set the Recovery Model of the AdventureWorks2012 database to FULL, and indeed it appears that way; but however the metadata may look, in practice AdventureWorks2012 will not exhibit Full Recovery Model behavior until you take a full backup of the database with the BACKUP DATABASE command…
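You can see both sides of this yourself; a sketch (the backup path is illustrative):

```sql
-- The metadata already reports FULL:
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'AdventureWorks2012';

-- But the log backup chain has not started yet; last_log_backup_lsn
-- stays NULL while the database runs in this "pseudo-simple" mode.
SELECT drs.last_log_backup_lsn
FROM sys.database_recovery_status AS drs
WHERE drs.database_id = DB_ID(N'AdventureWorks2012');

-- Taking a full backup starts the chain:
BACKUP DATABASE [AdventureWorks2012] TO DISK = N'C:\Backup\AdventureWorks2012.bak';
```

After the full backup, the LSN is no longer NULL and the database truly behaves as FULL, so transaction log backups become possible and necessary.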

Take care,
Ekrem Önsoy