14 December, 2010

Find SQL Default Data and Log Directories using TSQL Independent of Instance name or Cluster Installation

 

Hello, I needed this recently and thought someone else might too.

So, the information we want is the default location where SQL Server places data and log files. This is what the GUI pre-fills when you restore a database; if you're not using the GUI, you have to know the directories yourself.

SQL Server just doesn't ship a built-in function for finding these directories; the information is not exposed in SQL itself. SSMS shows it if you right-click the server you're connected to, select Properties, and then Database Settings, but other than that the only way to get it is to look it up in the Windows registry.

So how do we read the registry with T-SQL? We also need to check the right place, because on a named instance these values live somewhere different than on a default instance, and the same goes for a clustered instance.

So how do we go about this?

Well, there's a procedure called xp_instance_regread. To read an arbitrary registry key we would use xp_regread, but we want this information regardless of the instance, and xp_instance_regread does the job of resolving the key path for the instance we're currently using.

Now we need to query two registry values, one for the data directory and another for the log. On clustered installations, however, the data directory is located under another key, so xp_instance_regread alone won't save us this time; we'll have to check manually. In the code below I first read the non-cluster registry key, and if the value is not there I read the cluster one; that way we always get the value.

The keys are (remember, instance-aware):

For Data: HKLM\Software\Microsoft\MSSQLServer\MSSQLServer, value DefaultData.

For Log: HKLM\Software\Microsoft\MSSQLServer\MSSQLServer, value DefaultLog.

So here's the code.

DECLARE @datadir nvarchar(4000)
      , @logdir  nvarchar(4000);

-- Default data directory (non-clustered key).
EXEC master.dbo.xp_instance_regread
      N'HKEY_LOCAL_MACHINE'
    , N'Software\Microsoft\MSSQLServer\MSSQLServer'
    , N'DefaultData'
    , @datadir OUTPUT;

-- On clustered installations the value lives under Setup\SQLDataRoot.
IF @datadir IS NULL
BEGIN
    EXEC master.dbo.xp_instance_regread
          N'HKEY_LOCAL_MACHINE'
        , N'Software\Microsoft\MSSQLServer\Setup'
        , N'SQLDataRoot'
        , @datadir OUTPUT;
END

-- Default log directory.
EXEC master.dbo.xp_instance_regread
      N'HKEY_LOCAL_MACHINE'
    , N'Software\Microsoft\MSSQLServer\MSSQLServer'
    , N'DefaultLog'
    , @logdir OUTPUT;

-- If DefaultLog is not set, fall back to the data directory.
SELECT @datadir AS Data, ISNULL(@logdir, @datadir) AS Log;
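These directories are what you would plug into a manual RESTORE ... WITH MOVE if you were scripting a restore without the GUI. A quick sketch (the database name, backup path, and logical file names here are made up for illustration; the target directories would be the ones read from the registry above):

```sql
-- Hypothetical names: MyDb, its logical files, and the backup path.
RESTORE DATABASE MyDb
FROM DISK = N'C:\Backups\MyDb.bak'
WITH MOVE N'MyDb_Data' TO N'D:\SQLData\MyDb.mdf'
   , MOVE N'MyDb_Log'  TO N'E:\SQLLogs\MyDb_log.ldf';
```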

Hope this helps you!


Kind Regards,
The Developer Seer.

21 November, 2010

CHECKDB WITH DATA_PURITY


I learned this recently, about two months ago.

Again, this comes from Paul Randal's blog; click here if you want to see the post.

The thing is, SQL Server 2005 introduced a check in DBCC CHECKDB that validates column values: are my column values within the range of their declared data type?

When migrating databases I've seen negative datetimes (which Query Analyzer does display) and some weird float values that even Query Analyzer does not display.

The first question you might have is: how on earth did these values get in there?
Well, on SQL 2000 and below, SQL allowed data imports without some of this checking, which ended up leaving those values in there.

If you created the database on SQL 2005, this check is always done when you issue a:

DBCC CHECKDB ([db_name])

If your database came from SQL 2000, you may have to perform an explicit check. Once that check completes successfully, a value is changed in the database boot page, so SQL Server knows it passed once and will enforce the check every time from then on.

Here’s the code for an explicit check:

DBCC CHECKDB ([db_name]) WITH DATA_PURITY

Remember to change db_name to your database name, OK?

If you run into problems with invalid values, know that you have two choices, and I don't like either of them.

  1. You can delete those rows (lame solution, right?).
  2. You can update the values for those rows, though you probably won't be able to see what's currently stored if you hit this problem.
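If you want to hunt down invalid datetimes before choosing, one trick is to look at the raw bytes: a datetime is stored as 8 bytes, the first 4 being a signed day count from 1900-01-01, so a negative day count is the "negative datetime" case. A minimal sketch, assuming a hypothetical table dbo.Orders with OrderID and OrderDate columns:

```sql
-- Find rows whose datetime has a negative day part (invalid values).
SELECT OrderID, CAST(OrderDate AS varbinary(8)) AS RawBytes
FROM dbo.Orders
WHERE CAST(SUBSTRING(CAST(OrderDate AS varbinary(8)), 1, 4) AS int) < 0;

-- Option 2 from above: overwrite them with a sentinel date.
UPDATE dbo.Orders
SET OrderDate = '19000101'
WHERE CAST(SUBSTRING(CAST(OrderDate AS varbinary(8)), 1, 4) AS int) < 0;
```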

For more information on this, you can check this MS KB.

19 November, 2010

Database Maintenance–Part 1–Corruption

 

SQL Server is a Database System that needs maintenance and proper care!

Face it, realize it. The sooner you realize this, the sooner you can start researching why and how to take good care of your database system.

Now, don't delude yourself. Every RDBMS will need care.

Never, ever install a database system and just leave it there; things don't work that way. As I said, you have to maintain the system.

This is Part 1, and it concentrates only on detecting corruption.

I’ll update this with links for the next parts!

So I'll skip the why of all this; maybe I'll cover it when I fully understand all of SQL Server's subsystems (the engine is huge).

So let's get into how you maintain a database system.

It's important to understand that SQL Server's database files sit on a hard drive, and those files are under a whole I/O subsystem. You need to be aware of this because, if a failure occurs in the I/O subsystem, database corruption may occur, and, worst of all, you may not even know it. Data that is not frequently read never leaves the disk, so it is never checked for corruption; that kind of corruption may only be noticed when you finally select some data that is corrupt.

Fortunately, there is a protection you can enable on a database: a per-page CHECKSUM. It calculates a checksum over the bytes of each page when it is written to disk and verifies it when the page is read back, so corruption is detected on read. Without it, you might not even know your data is corrupt.

So if you don’t have it enabled please do.

ALTER DATABASE [database_name] SET PAGE_VERIFY CHECKSUM;

This is a best practice. But it only helps detect some corruptions; there are others, and you should always have a consistent database (I mean all your databases).
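To see where you stand, you can list the current setting for every database from the sys.databases catalog view:

```sql
-- List the page-verify setting for each database;
-- anything other than CHECKSUM deserves a look.
SELECT name, page_verify_option_desc
FROM sys.databases
ORDER BY name;
```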

The only way to recover from corruption without backups or replication is to lose the corrupt data. So one reason you need to maintain your databases is that they can become corrupt, and you need to investigate: is my data OK? Is there any corruption in my database?

If you want to go really, really, really (have I said enough reallys?) deep into corruption, you may want to check Paul Randal's blog, In Recovery. This man really knows a lot; he used to work for Microsoft and was one of the people responsible for writing SQL Server's corruption-checking routines.

So, with that understood, how do you check your database?

There are tons of commands, but the most used is DBCC CHECKDB.

This is a pretty heavy command, as it checks the whole database; the bigger the database, the longer it takes, at least as far as I understand it.

So how do I check my database?

DBCC CHECKDB ([database_name])

Don't worry, this command will not change anything; it will only check.

Pray that it won't find anything; I have found corruption before, and it ain't nice.

You have to run this check regularly, and do it in a maintenance window. Do it daily if you can; if you can't, do it weekly. If your database is really huge, there are other approaches; check Paul Randal's blog.

Well, there are three things you can do to protect yourself from corruption.

1 – Turn checksum on at the page level.

2 – Check your database regularly.

3 – Take backups, lots of them, but not just any backup: you have to take the right ones. I'll post more info about backups later. Why are backups part of protecting against corruption? Well, the thing is, you can't escape corruption without data loss unless you have a backup (or loads of them). And please, when you take a backup, check that it's OK; there's a really nice command, which I'll post about, that verifies the backup.

To sum up: checking your database will not prevent future corruption; it only evaluates your database for corruption now. There is "nothing" that can prevent future corruption; only proper database maintenance can help you there.

Now do yourself a favor: schedule a job that checks all your databases at night and mails you if the job hits an error.
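The checking part of such a job can be sketched with a simple cursor over sys.databases (the scheduling and e-mail pieces would come from SQL Server Agent and Database Mail):

```sql
-- Run DBCC CHECKDB against every online database. Corruption raises
-- an error, which fails the Agent job step and can trigger the mail.
DECLARE @db sysname, @sql nvarchar(300);

DECLARE db_cursor CURSOR FOR
    SELECT name FROM sys.databases WHERE state_desc = 'ONLINE';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'DBCC CHECKDB (' + QUOTENAME(@db) + N') WITH NO_INFOMSGS;';
    EXEC (@sql);
    FETCH NEXT FROM db_cursor INTO @db;
END
CLOSE db_cursor;
DEALLOCATE db_cursor;
```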

Hope this helps.
Good Luck
The Developer Seer.

11 September, 2010

Resource Governor

Today I’ll demonstrate how to create and control Resource Governor.

But before I do that, I need to explain what Resource Governor is.

Well, Resource Governor is a new feature in SQL 2008 that lets you control how the resources available to SQL Server are distributed among applications.

From now on I'll use RG for Resource Governor.

To do this, RG has the following concepts:

Resource Pools: configurations that are bound to an amount of SQL Server's resources.

And.

Workload Groups: groups into which applications are classified and grouped.

And to finish up, there's the function that defines which Workload Group each application (evaluated for every connection) will belong to.

To better illustrate this you can check this diagram from TechNet.

[Diagram: Resource Governor basic functional components]

taken from: http://technet.microsoft.com/en-us/library/bb934084.aspx

With Resource Governor you can set how much CPU or memory a resource pool gets, and on the workload group you can set how much the applications in it may request.

It works like this:

You create a resource pool that says it will use a minimum of 10% CPU and a maximum of 70%, and for memory a minimum of 0% and a maximum of 50%. Then you create a workload group that uses the aforementioned resource pool and requests a maximum of 30 seconds of CPU time.

All of that you can do through SSMS 2008; the tricky part, however, is how you say an application is part of a given group.

There's something called a classifier function.

On every connection to SQL Server, an evaluation decides which group the connection belongs to.

So what is this classifier function, and how do you create one?

Well, it's a user-defined function: a normal function that returns the name of the group.

How do you create one?

Well, only through Transact-SQL; it is not even visible in SSMS.

So here's a full Transact-SQL example.

BEGIN TRANSACTION
USE master
-- Create a resource pool that sets:
-- MIN_CPU_PERCENT to 10%.
-- MAX_CPU_PERCENT to 70%.
-- MIN_MEMORY_PERCENT to 0%.
-- MAX_MEMORY_PERCENT to 50%.
CREATE RESOURCE POOL SSMS_RESOURCE_POOL
WITH
    ( MIN_CPU_PERCENT = 10
    , MAX_CPU_PERCENT = 70
    , MIN_MEMORY_PERCENT = 0
    , MAX_MEMORY_PERCENT = 50);
GO
-- Create a workload group that uses this pool,
-- with a maximum request of 30 seconds of CPU time.
CREATE WORKLOAD GROUP SSMS_WORKLOAD_GROUP
WITH
    (REQUEST_MAX_CPU_TIME_SEC = 30)
USING SSMS_RESOURCE_POOL;
GO
-- Create a classification function:
-- test the application name of the connection,
-- assign a workload group based on it,
-- otherwise fall through to the default group (NULL).
CREATE FUNCTION dbo.resource_governor_classification_function()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF APP_NAME() LIKE 'Microsoft SQL Server Management Studio%'
    BEGIN
        RETURN 'SSMS_WORKLOAD_GROUP';
    END
    RETURN NULL;
END
GO
ALTER RESOURCE GOVERNOR WITH
    (CLASSIFIER_FUNCTION = dbo.resource_governor_classification_function);
COMMIT TRANSACTION
ALTER RESOURCE GOVERNOR RECONFIGURE;
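Once the governor is reconfigured, you can check that new connections land in the right group by joining the sessions DMV to the workload-group DMV (group names here match the example above):

```sql
-- Which workload group did each user session get classified into?
SELECT s.session_id, s.program_name, g.name AS workload_group
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_resource_governor_workload_groups AS g
    ON s.group_id = g.group_id
WHERE s.is_user_process = 1;
```

Note that only connections opened after the RECONFIGURE are classified by the new function; sessions that already existed keep their old group.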


Hope that this clarifies it a bit, and after testing it yourselves you get to understand it better.

23 June, 2010

Generating a Table’s diagram and Script

Great tip for entity-relationship modeling and script generation.

A great tool for building entity-relationship models for databases and generating their scripts:

http://ondras.zarovi.cz/sql/demo/

It generates mysql, sqlite, web2py, mssql (Microsoft SQL), postgresql, oracle, sqlalchemy and vfp9 scripts for a designed model.

Options menu: you can select the target language in the options and then start designing the database, or do that at the end when you want to generate the full script. This helps you create an ER model, visually see the relationships, test your ideas, and see if they really work.

Creating Tables

You can start by creating tables using the Add Table menu button. You may be tempted to drag a table onto the diagram, but that won't work; just click the diagram and a dialog will ask you for the table's name. The table comes with a default id field. You can edit a field by double-clicking it, then setting its name, type, size (when applicable), whether it is an auto-increment field, and whether it allows nulls. You can add new fields to the desired table with the Add field button, also in the menu.

Creating Relationships

Relationships can be created in two ways here. First, you can create the destination field in the desired table and select the one that is part of the key (only keys can be related), then click Connect foreign key, and the connection between the fields will be created (along with the foreign key). The other way is to create the field only on the source table, select it (it must be part of the key), click Create foreign key in the menu, and then select the table that should receive the key; the field should be created automatically.

Setting a Primary Key

When you select a table, you can use the menu to define its unique key (a table doesn't need to have one, but it's highly desirable). Click the Keys menu item; there you can define which columns are part of the primary key. You can also define indexes and which columns are part of each index, though I don't think you can create an index with included columns that are not part of the index key.

After that you can select Save/Load and Generate SQL for the script.

One nice trick is that you can download the tool to work locally: click Documentation, then Installation, and follow the instructions.

Here’s a diagram example:

Simple Diagram

Hope this helps
Good Luck
The Developer Seer.

11 June, 2010

Silverlight Lifecycle Support End Date

I first saw something related to this here, but I needed an official Microsoft web site saying the same, and today I found it.
The dates are:

Version: End Date
Silverlight 2: October 12, 2010
Silverlight 3: April 12, 2011
Silverlight 4: Twelve months after the release of Silverlight 5

The official source is this one:

http://support.microsoft.com/gp/lifean45?ln=en-us

Hope this helps,
The Developer Seer.

02 June, 2010

Refreshing the Intellisense cache in SSMS 2008

I'm currently using SQL Server Management Studio (SSMS) 2008 a lot, and often when I create a new table it doesn't appear in IntelliSense. After a little research I came across this website:

by Dan Jones.

And learned why this happens, and how to fix it.

First, how to fix it: there are two ways. The quicker one, which you'll want to know if you are a shortcut ace (or wannabe), is Ctrl + Shift + R. The other way is to go to the menu:
Edit -> IntelliSense (last option) -> Refresh Local Cache


Then, why it happens: SSMS reads all this information about which tables exist (among other things) and caches it locally; it doesn't check every time. If you create a table, or anyone else creates one on the database, SSMS won't update its local cache, so you have to do it manually. A good point of observation: sometimes SSMS does update the cache on its own, depending on certain actions; to learn how the SSMS cache works, you can observe when it reads its cache.

Hope it helped,
Good Luck
The Developer Seer.