All posts related to Microsoft SQL Server


Understanding Transaction Isolation Levels – READ COMMITTED

The READ COMMITTED transaction isolation level is the default isolation level of Microsoft SQL Server. Under read committed, data cannot be read from a data page until all UPDATE, DELETE, or INSERT operations performed by other users or connections on the table you are trying to read from have completed and been committed to the database.

When running a select query on any database without specifying the isolation level, the query is executed under the read committed transaction isolation level. Hence it is the default.

Real world case study

To explain exactly what this means, I’ll take a financial system as an example. Senior management wants to check the revenue for the month by running a report. One of the managers is a bit ahead of the curve and tries to run the revenue report while the company HR is running the monthly payroll process. The manager’s report just keeps running without returning any data. Eventually, when the payroll process finishes, the manager’s report returns the revenue data. The manager, who was a bit hasty, had to wait for the payroll to finish before he could see the data, and with good reason. If he had been able to see the data before the payroll completed, he would have seen false revenue figures, as the salaries of all the employees had not yet been deducted, which affects the revenue.

How do I specify the read committed transaction isolation level in my script?

With any default installation of SQL Server, it is not necessary to specify the read committed transaction isolation level, as this is the server default and any query run on the server without specifying an isolation level will run under it. If you would like to specify the read committed transaction isolation level explicitly, to make sure the query runs under it regardless of what the SQL Server default is, you can set it by adding the following statement to the beginning of your script or transaction:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED

The transaction isolation level will be applied to the entire transaction in which it is set, and it remains in effect on the connection until it is changed.

Demonstration

To demonstrate the read committed transaction isolation level, open SQL Server Management Studio and run the following create script:

IF OBJECT_ID('Sales_E636F1418128466888ED3A3CECC07611') is not null
begin 
    DROP TABLE Sales_E636F1418128466888ED3A3CECC07611
end

Create table Sales_E636F1418128466888ED3A3CECC07611
(
    SalesID int identity(1,1),
    SalesDate datetime,
    CustomerFirstName varchar(50),
    CustomerLastName varchar(50),
    PaymentMethod varchar(50),
    SalesAmount decimal(16,2)
)

insert into Sales_E636F1418128466888ED3A3CECC07611(SalesDate, CustomerFirstName, CustomerLastName, PaymentMethod, SalesAmount)
values('20160905','Alan','Wilkinson','Credit Card',59.50)

insert into Sales_E636F1418128466888ED3A3CECC07611(SalesDate, CustomerFirstName, CustomerLastName, PaymentMethod, SalesAmount)
values('20161011','Joey','Hollins','Credit Card',712.63)

insert into Sales_E636F1418128466888ED3A3CECC07611(SalesDate, CustomerFirstName, CustomerLastName, PaymentMethod, SalesAmount)
values('20170406','Evander','Emmet','Cash',265.42)

insert into Sales_E636F1418128466888ED3A3CECC07611(SalesDate, CustomerFirstName, CustomerLastName, PaymentMethod, SalesAmount)
values('20170512','Martin','Curtis','Cash',34.78)

insert into Sales_E636F1418128466888ED3A3CECC07611(SalesDate, CustomerFirstName, CustomerLastName, PaymentMethod, SalesAmount)
values('20170518','Alton','Batts','Credit Card',275.65)

select * from Sales_E636F1418128466888ED3A3CECC07611

Open a new query tab and run the following UPDATE statement. This query tab will represent the first user or session:

BEGIN TRANSACTION

    UPDATE Sales_E636F1418128466888ED3A3CECC07611 SET PaymentMethod = 'EFT'
    where SalesID = 2

    WAITFOR DELAY '00:02:15'

COMMIT TRANSACTION

Open a new query tab and run the following SELECT statement. This query tab will represent the second user or session which is trying to read the same data that the first user or session is updating:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED

SELECT
    SalesDate,
    CustomerFirstName + ' ' + CustomerLastName as Customer,
    PaymentMethod,
    SalesAmount
FROM
    Sales_E636F1418128466888ED3A3CECC07611
WHERE
    SalesID = 2

You will notice that the second query keeps running and does not return any data until the first query which is updating the data finishes its execution. This is the read committed transaction isolation level in action. The data that the second query is requesting will only be returned once the first user or session has committed its changes to the database.

As an added measure, open a new query tab and run the following statement:

sp_who2 'active'

In the results, look for the spid of the second user or query (the session 2 script). You can see the spid by looking at the query tab. In my example, the spid I am looking for is 55.

[Screenshot: sp_who2 'active' output]

In the results of the sp_who2 script, find the spid. You will notice that your session 2 spid is being blocked by the spid of your first user or session query (Session 1 script). In my case the spid of the first session is 54.

[Screenshot: sp_who2 result showing the session 2 spid blocked by the session 1 spid]

This is because session 2 will first have to wait for session 1 to complete the update of the data and commit it to the database before it can read the data.

Conclusion

As with everything, there will be pros and cons to using the read committed transaction isolation level in different situations. In the end, there is a reason why it is the SQL Server default: you will always see the correct data.

One thing to keep in mind is that, just as you cannot read data that other users are updating under the read committed transaction isolation level, other users cannot update data which you are reading under it. Their spid will be blocked until you have finished reading the data before their update can start. This prevents any data from being modified while you are running your select and prevents dirty reads, which is a clear difference from the read uncommitted transaction isolation level.

Pro tip: when transactions such as data updates or reports are taking long to complete, always check sp_who2 'active'. You might find a lot of blocking transactions which could be explained or solved by using the correct transaction isolation levels.
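If you prefer something you can filter and join, the same blocking information is exposed by the standard dynamic management view sys.dm_exec_requests; a minimal sketch:

-- Sessions that are currently blocked, and the spid blocking them
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0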


Understanding Transaction Isolation Levels – READ UNCOMMITTED

Sometimes referred to as a “dirty read”, the read uncommitted isolation level reads any data on a SQL Server data page regardless of whether or not the data has been committed. This means that you will see all committed data as well as modified, uncommitted data within the database.

There are many pros and cons to consider when using the read uncommitted isolation level. One of the most noticeable pros is query speed. The read uncommitted isolation level will speed up queries as it does not have to wait for other blocking transactions to complete before it gets a turn to read from a table being modified by a different user. This pro, however, goes hand in hand with a con. On the flip side, you will see “dirty data” which may have been modified and may still be modified later in the transaction. Between these modifications, you may see data that does not make logical sense at that point in time. This especially holds true for financial systems where a lot of number crunching is being performed. If the system has been developed in such a way that a lot of changes occur before the final data is committed and you read that data, your figures may be skewed.

Real world case study

To demonstrate how this could potentially be a problem, let’s take an ordering system as an example. We have 100 laptops in stock. Sales person A is dealing with a client who wants to buy 20. Sales person B is dealing with a client who wants to buy 100. Sales person A adds 20 laptops to their cart, which updates the stock level to 80. This data, however, has been updated but not yet committed. Sales person B checks the stock level using a query which reads uncommitted data and sees that only 80 laptops are in stock. The stock level of 80 is dirty data, because sales person A has not yet confirmed and finalized the sale, thereby committing the new stock level to the database. In reality, the stock level is still 100 instead of the dirty read value of 80. Sales person B now tells their client that they only have 80 laptops in stock, and they lose the sale because the client insists on 100 laptops in one sale. Next, sales person A’s client backs out of the sale because the price is too high. Sales person A cancels the order and the stock level of 80 is never committed to the database. The stock level thus remains at 100, which could have filled the need of sales person B’s client. Because sales person B was doing an uncommitted read on the same data sales person A was working with, they had the wrong impression of the stock level and lost the sale of 100 laptops.

How do I specify the read uncommitted transaction isolation level in my script?

You can set the read uncommitted transaction isolation level in any transaction or script by adding the following at the beginning of any transaction:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

The isolation level will be applied to the entire transaction in which it is set. Setting the transaction isolation level to read uncommitted is similar to specifying NOLOCK on every table from which data is read within your transaction.
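For comparison, the same dirty-read behavior can be requested for a single table with the NOLOCK hint instead of changing the isolation level for the whole transaction; a minimal sketch against the demo table created below:

SELECT PaymentMethod
FROM Sales_E636F1418128466888ED3A3CECC07611 WITH (NOLOCK)
WHERE SalesID = 2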

Demonstration

To demonstrate the read uncommitted transaction isolation level, open SQL Server Management Studio and run the following create script on a test database:

IF OBJECT_ID('Sales_E636F1418128466888ED3A3CECC07611') is not null
begin 
    DROP TABLE Sales_E636F1418128466888ED3A3CECC07611
end

Create table Sales_E636F1418128466888ED3A3CECC07611
(
	SalesID				int identity(1,1),
	SalesDate			datetime,
	CustomerFirstName	varchar(50),
	CustomerLastName	varchar(50),
	PaymentMethod		varchar(50),
	SalesAmount			decimal(16,2)
)

insert into Sales_E636F1418128466888ED3A3CECC07611(SalesDate, CustomerFirstName, CustomerLastName, PaymentMethod, SalesAmount)
values('20160905','Alan','Wilkinson','Credit Card',59.50)

insert into Sales_E636F1418128466888ED3A3CECC07611(SalesDate, CustomerFirstName, CustomerLastName, PaymentMethod, SalesAmount)
values('20161011','Joey','Hollins','Credit Card',712.63)

insert into Sales_E636F1418128466888ED3A3CECC07611(SalesDate, CustomerFirstName, CustomerLastName, PaymentMethod, SalesAmount)
values('20170406','Evander','Emmet','Cash',265.42)

insert into Sales_E636F1418128466888ED3A3CECC07611(SalesDate, CustomerFirstName, CustomerLastName, PaymentMethod, SalesAmount)
values('20170512','Martin','Curtis','Cash',34.78)

insert into Sales_E636F1418128466888ED3A3CECC07611(SalesDate, CustomerFirstName, CustomerLastName, PaymentMethod, SalesAmount)
values('20170518','Alton','Batts','Credit Card',275.65)

select * from Sales_E636F1418128466888ED3A3CECC07611

The result set should be displayed once the script has completed and the table has been created.

Open a new tab. This tab will represent the first session, or user, who will be updating the data. Run the following code in the first tab:

BEGIN TRANSACTION

	UPDATE Sales_E636F1418128466888ED3A3CECC07611 SET PaymentMethod = 'EFT'
	where SalesID = 2

	WAITFOR DELAY '00:00:15'

ROLLBACK

Open one more tab which will represent the second session, or user, who will be reading the uncommitted (dirty) data. Run the following code in the second tab:


SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

SELECT
	SalesDate,
	CustomerFirstName + ' ' + CustomerLastName as Customer,
	PaymentMethod,
	SalesAmount
FROM
	Sales_E636F1418128466888ED3A3CECC07611
WHERE
	SalesID = 2

If you run the session 2 script while the session 1 script is running, you will see the following result set:

[Screenshot: session 2 result set showing PaymentMethod = EFT]

As you can see, the payment method is EFT, the same payment method as it was updated to in the session 1 script.

When you run the session 2 script again, after the session 1 script has completed, you will see the following result set:

[Screenshot: session 2 result set showing PaymentMethod = Credit Card]

Wait a minute!? The payment method is Credit Card now? We updated the value to EFT in the session 1 script? And confirmed the update with a query? WHAT’S GOING ON? Steady now: if you check the session 1 script, you will notice that the transaction is rolled back rather than committed, so the updated data never makes it into the database.

Conclusion

As you can see, reading data under the read uncommitted transaction isolation level can give you unexpected results. In most cases this should not be an issue, as SQL Server commits each statement automatically by default (autocommit) unless you explicitly open a transaction, in which case you have to commit your changes yourself. The default of SQL Server is also to use the read committed transaction isolation level, so you have to deliberately specify the use of the read uncommitted transaction isolation level. In the end, when you use the read uncommitted isolation level and you see funny or unexpected results, keep all of the characteristics of this isolation level in mind. You may just end up saving yourself a lot of frustration.

Waiting for worker thread: Configuring SQL Server subsystem values

Waiting for worker thread. The message every DBA loves to see in their job activity monitor. Yeah right! Although seldom noticed, these messages do tend to pop up from time to time, especially when using PowerShell steps within your jobs. I will demonstrate the issue using PowerShell jobs.

What does this message mean and why do you see it in the job activity monitor? It basically means that the job is sitting in a queue, waiting to execute. Microsoft implemented limits on the various subsystems which the SQL Server Agent uses, and for good reason. It would be a mess if queries were left waiting to execute because a bunch of PowerShell jobs were all running at the same time and hogging the worker threads. As we know, however, the default configuration is not always the best configuration; it depends on your specific needs. I tend to run all my jobs and ETL processes on a separate server set up specifically for that purpose. This is a standard practice for me. On this server, no production queries are run, and I can increase the worker threads for my jobs, whether they are SSIS ETL processes, PowerShell backup jobs or normal database maintenance jobs. In that case, I can alter the subsystem limits to fit my requirements.

Reproducing the problem

Under the default subsystem configuration you can create 3 PowerShell jobs to see how this issue occurs. Each job should have a PowerShell step with the following code:

Start-Sleep -s 600

It’s a simple line which tells the PowerShell subsystem to wait for 10 minutes.

After creating the jobs, you can execute them one after the other from the SQL Server Agent. In the job activity monitor you will see that all three jobs have been started. They should look like the screenshots below.

If you open the job activity monitor, you will notice that PowerShell Job 1 and PowerShell Job 2 are executing as expected. Take note of the status message of PowerShell Job 3. As predicted: waiting for worker thread. This is what the message will look like in a production environment.

[Screenshot: Job Activity Monitor showing PowerShell Job 3 waiting for worker thread]

As you can imagine, this could have serious ramifications if you have jobs that kick off close to each other; your processes will be delayed heavily if these limits on simultaneous execution are reached.

The obvious question: Why?

As mentioned earlier, this is due to the subsystem limitations implemented by default. You can check the subsystem limitations on your server by running the following query:

SELECT [subsystem_id]
,[subsystem]
,[max_worker_threads]
FROM [msdb].[dbo].[syssubsystems]

If you check the value for the PowerShell subsystem, you will see the limit is set to 2 by default. This ties in with the previous experiment, where PowerShell Job 1 and PowerShell Job 2 were both executing and PowerShell Job 3 was sitting and waiting in the queue (or waiting for a worker thread, if you will) to execute. This subsystem limit allows you to run only 2 PowerShell job steps simultaneously.
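To look at just the PowerShell subsystem, you can filter the same table, for example:

SELECT subsystem, max_worker_threads
FROM msdb.dbo.syssubsystems
WHERE subsystem = N'PowerShell'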

How do you change the limitation?

To change the subsystem limitation, you can update the table in which the subsystem limitations reside directly and restart the SQL Server Agent. That’s all there is to it. As an example, you can use the following statement to update the subsystem limitations for PowerShell to 3.

UPDATE msdb.dbo.syssubsystems
SET max_worker_threads = 3 --Increase this value to increase the subsystem limitation
WHERE subsystem = N'PowerShell'

After making this change, you will be able to run all three of the test PowerShell jobs created above simultaneously, and none of them will be waiting for a worker thread.

As with any SQL Server change, I will again mention: be careful and pay attention to what you are doing. You may have to do some proper planning when changing these values, and it should not be done just because it "may" solve your problem. Always make sure you update the right values. SQL Server does not ask questions; it assumes you know what you are doing.

Get object dependencies for SQL Server in TSQL

We all know there are many features within Management Studio that we take for granted. One of these features is the Object Dependencies feature. In short, this tool will show you any objects which are dependent on a chosen object. For instance, if you want to check which objects are dependent on a specific table, the object dependencies will include any views, stored procedures, etc. which reference the table in question.

This information is easily retrievable in Management Studio by right-clicking on the object you want to check and clicking on “View Dependencies”. The following screen shows a list of dependencies for the chosen object.

[Screenshot: Object Dependencies screen, showing (1) the source object and (2) the referencing objects]

Recently, I had to develop a process to automatically deploy SQL scripts to a specific server through a TFS process. Although I did not create the TFS process, I had to make sure that TFS could execute one stored procedure to figure out which objects to create or alter and in which order.

As an example, the problem is that if you try to create a View which references a table which does not exist, the creation of the view will fail which means that the process will fail.

All I had to work with was the object names, the script files and the database. Hmmm. That View Dependencies functionality would come in pretty handy right now. The problem is that I needed the results in SQL so that I could query them to check which objects have dependencies, and in which order I should execute my deployment SQL scripts to ensure a successful deployment.
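For reference, a large part of this information is exposed by the built-in catalog view sys.sql_expression_dependencies. The sketch below is a minimal example of querying it for a single, placeholder object name; it is separate from the stored procedure I ended up writing:

-- Objects that reference a given object (referenced_schema_name can be NULL for one-part names)
SELECT
    OBJECT_SCHEMA_NAME(d.referencing_id) + '.' + OBJECT_NAME(d.referencing_id) AS ReferencingObject,
    ISNULL(d.referenced_schema_name + '.', '') + d.referenced_entity_name AS SourceObject
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_entity_name = 'YourTableName' -- placeholder object name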

I set out to write my own stored procedure to analyze object dependencies for me.

To install the object dependencies procedure, run the install script on the database on which you want to analyze the object dependencies.

To use the object dependencies stored procedure execute the following command:

exec sp_GetObjectDependencies

The procedure will execute with the default parameter which analyzes all objects on the database excluding constraints.

The main 2 columns to note in the result set are the SourceObject and the ReferencingObject. The SourceObject is the object which was analyzed. The ReferencingObject is the object which is dependent on the source object.

To view all options available, execute the object dependencies stored procedure with the @help parameter:

exec sp_GetObjectDependencies @Help = 0

Now you can view all object dependencies for an object in SQL Server in a format which can be queried and used to achieve all sorts of wonderful things.

Object dependencies Install Script:

Why sending e-mail with sp_send_dbmail is a bad idea

Sending e-mail with sp_send_dbmail: should you or shouldn’t you? Well, as with everything, it will depend on your requirements and your circumstances. Recently, I have been wondering the exact same thing. In my case it was more in the line of application-specific emails.

sp_send_dbmail usage scenario:

Our developers are sending e-mail with sp_send_dbmail for various in-application tasks and events. You know, password reset emails, event notification emails, that kind of thing. To me, as a DBA, this always felt like a dirty solution. I took it upon myself to investigate whether or not sending e-mail with sp_send_dbmail is a good idea. To my surprise, I could not find any discussions or debates regarding best practices for sending e-mail with sp_send_dbmail. I spoke to a couple of fellow SQL experts and I finally figured it out: everyone has their own opinion. This does not really help. Eventually I turned to the good folks of Stack Exchange. Here, I got some interesting responses as well as some key points which are simply not debatable, and not in favor of sp_send_dbmail either.

Problems with sending e-mail with sp_send_dbmail

  • Sending SMTP email with sp_send_dbmail will cause your msdb database to grow. This may or may not be a problem depending on how many e-mails you are planning to send out. Plan for your future volumes; if you only plan for your current volume requirement, you will soon find yourself back at the drawing board (see the sketch after this list for a quick way to gauge the volume).
  • You run the risk of damaging the deliverability and reputation of your IP/domain with regard to spam or blacklisting.
  • SQL Server core licenses are expensive. When half of your server cores are busy sending email, that is a potential waste of expensive resources.
  • If you couple the application features too closely with the database, it makes scaling difficult and becomes a nightmare if anything needs to be moved. Different services should run on different servers.
  • The server will require internet access. ALWAYS.
  • SMTP is a common attack vector often targeted by hackers, and it will have to be enabled on the SQL Server.
  • Garbage data could be sent to the SQL Server as it will be an SMTP server as well. SQL server may not be able to handle this load. Especially not 1 TB of garbage data.
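To gauge how much Database Mail history is piling up in msdb, and to clear it out, you can use the standard Database Mail objects; a minimal sketch (the 30-day retention period is just an example):

-- How many mail items are sitting in msdb, per status
SELECT sent_status, COUNT(*) AS mail_items
FROM msdb.dbo.sysmail_allitems
GROUP BY sent_status

-- Purge mail history older than 30 days
DECLARE @cutoff datetime = DATEADD(DAY, -30, GETDATE())
EXEC msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @cutoff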

What now?

I hear you shouting from behind your screen: “I use sp_send_dbmail to send administration and notification messages from my SQL Server!” This is fine. These types of emails do not take up a lot of space and they will not consume large amounts of resources to send out. I, however, have never liked sending administration emails using sp_send_dbmail, for the following reasons:

  • You will have to enable SMTP on the server which is a potential security risk.
  • You will have to connect the server to the internet and keep it that way. This is also a potential security risk.

What I do to prevent sending email with sp_send_dbmail

Let’s be honest, it’s not always avoidable, and I have sp_send_dbmail enabled on some of my servers due to the problems mentioned above. However, when it comes to maintenance, I like to keep my system on a central server which sends instructions to my other servers, like backing up databases, rebuilding indexes, etc. I collect the data, store it on the central server, and send out notifications based on the data that I collected, i.e. disk space usage and so forth. Third-party applications monitor the performance of the SQL Servers and send out notifications based on the data they collect. In the end, there is no sending of email with sp_send_dbmail and no need to enable SMTP, or sp_send_dbmail for that matter, on a production server.

Conclusion

In the end it will be for you to decide whether it is necessary to enable sp_send_dbmail on your SQL Server, as it is not always avoidable even though there are risks involved. Whether there is a right or wrong answer is debatable and, as mentioned, every SQL expert has his or her own opinion on the matter. When deciding, though, keep the above points on why it could be a bad idea in mind and make an informed decision based on the information at hand.

The damage you do when shrinking SQL Server database files

I have heard of, seen, investigated, and debated about so many problems relating to database sizes, disk space, and shrinking SQL Server database files. It always puts a question mark on my face when listening to so many people offering bad advice and/or solutions to these problems.

“Just shrink the file”. Seems to be the general consensus here. DON’T!! I understand the urge. You might be a new DBA and want to show you can solve the problem as soon as possible. The problem is that you may be creating additional problems in the background that you are not aware of.

The problem with shrinking your files is that you are only solving the immediate problem on the surface whilst creating additional problems deeper down. Shrinking your files causes SQL Server to move data around within the physical files, which in turn causes extreme index fragmentation. Alright, so we can just rebuild the indexes, right? In theory yes, and this will sort out the fragmentation issue. The problem now is that your database files have grown again and you are back to square one. This turns into a vicious cycle.
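To see what a shrink does to your indexes, you can check fragmentation with the standard sys.dm_db_index_physical_stats function before and after the shrink; a minimal sketch for the current database:

-- Average fragmentation per index in the current database
SELECT
    OBJECT_NAME(ips.object_id) AS TableName,
    i.name AS IndexName,
    ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
    AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC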

So then what should I do??

As with everything in SQL Server, this will depend on your requirements and SLA. A good practice in any event is to consider the following options:

  1. Determine where the size issue is on your database. Is it within your data files or the transaction log?
  2. If the space issue is within your data files, the first thing to consider is leaving it. The purpose of your drive is to store files. There is no point in carrying around an empty backpack. Whether the free space is on the drive itself or within the data file it does not matter. You have ample free space available for the database to do what it needs to.
  3. If the transaction log is growing uncontrollably and not shrinking, it is most likely that you are using a full recovery model on your database. If this is the case and your transaction log is not shrinking, then you are not taking proper log backups of your transaction log. Review your backup policy and SLA agreements of the business which should point out one of the following options:
    1. You might find that it is not necessary to perform point-in-time restores on a database. If this is the case, consider switching the database recovery model to simple instead of full. Just keep in mind that you will not be able to recover to a specific point in time, but only to the time of the last full (or differential) backup.
    2. If it is necessary to restore to a specific point in time, you will have to design a backup solution that fits these needs. Part of your solution should include taking backups of the transaction log, which will truncate the log automatically and free up space inside the log file for reuse (a minimal sketch follows this list). Unfortunately, full backups won’t do this, because of the backup chain.
  4. Review your auto growth settings. Are you using the default 10%? If so, consider changing this to a decent amount of bytes rather than percentage. 10% of a 1TB file = 100GB. Wowza!
  5. Get more disk space. This is not something to consider lightly, as by working through the previous few steps you have more than likely come across one or two things that need to be optimized or changed. More disk space should be the last item on this list, as there will be a direct cost to the business which might have been avoided. In any event though, it is sometimes necessary.
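A minimal sketch of checking the recovery model and taking a transaction log backup, with a placeholder database name and backup path:

-- Check the recovery model of the database
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'YourDatabase'

-- Back up the transaction log, which allows the inactive portion of the log to be reused
BACKUP LOG [YourDatabase]
TO DISK = N'X:\Backups\YourDatabase.trn'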

How do I prevent this in the first place?

A good practice is to plan for your future data usage requirements, not just for what your requirement is right now. This will save you a lot of headaches. If you size your databases to cater for the amount of data you will have in a year’s time, you’ll be grand.

Work on a proper backup solution from day one. Review the SLA and business requirements and set your database recovery models to match the SLAs. Design your backup solutions around this and, if you go with full recovery, take regular transaction log backups, which in turn will keep your transaction log file sizes at bay.

Once again, make sure your auto growth setting is set to a reasonable amount of fixed bytes rather than a percentage. Percentage settings on these can quickly become a handful to deal with.
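Changing the growth increment is a single ALTER DATABASE statement; a sketch with placeholder names and an example fixed increment of 512 MB:

-- Switch a data file from percentage growth to a fixed growth increment
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_Data', FILEGROWTH = 512MB)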

Great, I still need to shrink my files for other reasons.

I admit that sometimes you cannot avoid the shrinking of a data file. You may have deleted a huge chunk of data after archiving it and the database is not expected to grow which makes it necessary to shrink the files. After all, that space can be used for the data files of other databases.

If this is the requirement, consider the following alternative approach (a rough sketch follows the list):

  1. Backup your database (ALWAYS BACKUP your database)
  2. Create a new filegroup
  3. Move all the affected tables and indexes to the newly created filegroup. This process will also remove any fragmentation automatically
  4. Delete the previous file group where you moved the tables and indexes from. In the case where this is the PRIMARY filegroup, shrink it way down.
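A rough sketch of steps 2 and 3, with placeholder database, file, table, index, and column names, assuming the table has a clustered index that can be rebuilt onto the new filegroup:

-- Step 2: add a new filegroup and a data file for it
ALTER DATABASE [YourDatabase] ADD FILEGROUP [SECONDARY_FG]

ALTER DATABASE [YourDatabase]
ADD FILE (NAME = N'YourDatabase_Secondary', FILENAME = N'X:\Data\YourDatabase_Secondary.ndf')
TO FILEGROUP [SECONDARY_FG]

-- Step 3: rebuilding the clustered index on the new filegroup moves the table and removes fragmentation
CREATE UNIQUE CLUSTERED INDEX [CIX_YourTable]
ON dbo.YourTable (YourKeyColumn)
WITH (DROP_EXISTING = ON)
ON [SECONDARY_FG]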

To recap

Make sure the settings on your database are correct, that your backup policies and procedures are in place, and that these agree with your SLA.

Make sure you plan ahead regarding the required storage space for your data.

NEVER shrink your data files, especially not as part of your automatic maintenance procedures (It sounds harsh and extreme but as a DBA, this is how you should think about it)

If it’s not a problem, leave it! As mentioned before, your disk is there to be used and there is no difference between free space on the disk and free space in your data file.

Take the time to understand the road ahead and plan for what is coming. It could become a slippery slope.

SQL Server Date Formats

One of the most frequently searched terms regarding SQL Server is SQL Server date formats and codes, as well as how to format a date for a specific requirement or standard. Although it is one of the first things you learn when working with SQL Server, it is not always easy to remember all the format codes which are used as part of the CONVERT function to accomplish the desired result.

Here is a complete reference to the date formats that come standard with SQL Server. Following the standard format list is a list of extended formats that are often required. Note that these extended formats are not returned as a DATETIME data type, but rather as a VARCHAR, so you will have to perform all your date calculations and modifications before converting to an extended format. Happy dating!
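For example, to shift a date and then format it, do the DATEADD first and the CONVERT last (a hypothetical one-month shift):

-- Date arithmetic first, formatting last
SELECT CONVERT(VARCHAR(10), DATEADD(MONTH, 1, GETDATE()), 120) AS [YYYY-MM-DD]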

Standard SQL Date Formats

Date Format | Standard | SQL Statement | Sample Output
Mon DD YYYY HH:MIAM (or PM) | Default | SELECT CONVERT(VARCHAR(20), GETDATE(), 100) | Jan 1 2005 1:29PM
MM/DD/YY | USA | SELECT CONVERT(VARCHAR(8), GETDATE(), 1) AS [MM/DD/YY] | 11/23/98
MM/DD/YYYY | USA | SELECT CONVERT(VARCHAR(10), GETDATE(), 101) AS [MM/DD/YYYY] | 11/23/1998
YY.MM.DD | ANSI | SELECT CONVERT(VARCHAR(8), GETDATE(), 2) AS [YY.MM.DD] | 72.01.01
YYYY.MM.DD | ANSI | SELECT CONVERT(VARCHAR(10), GETDATE(), 102) AS [YYYY.MM.DD] | 1972.01.01
DD/MM/YY | British/French | SELECT CONVERT(VARCHAR(8), GETDATE(), 3) AS [DD/MM/YY] | 19/02/72
DD/MM/YYYY | British/French | SELECT CONVERT(VARCHAR(10), GETDATE(), 103) AS [DD/MM/YYYY] | 19/02/1972
DD.MM.YY | German | SELECT CONVERT(VARCHAR(8), GETDATE(), 4) AS [DD.MM.YY] | 25.12.05
DD.MM.YYYY | German | SELECT CONVERT(VARCHAR(10), GETDATE(), 104) AS [DD.MM.YYYY] | 25.12.2005
DD-MM-YY | Italian | SELECT CONVERT(VARCHAR(8), GETDATE(), 5) AS [DD-MM-YY] | 24-01-98
DD-MM-YYYY | Italian | SELECT CONVERT(VARCHAR(10), GETDATE(), 105) AS [DD-MM-YYYY] | 24-01-1998
DD Mon YY | | SELECT CONVERT(VARCHAR(9), GETDATE(), 6) AS [DD MON YY] | 04 Jul 06
DD Mon YYYY | | SELECT CONVERT(VARCHAR(11), GETDATE(), 106) AS [DD MON YYYY] | 04 Jul 2006
Mon DD, YY | | SELECT CONVERT(VARCHAR(10), GETDATE(), 7) AS [Mon DD, YY] | Jan 24, 98
Mon DD, YYYY | | SELECT CONVERT(VARCHAR(12), GETDATE(), 107) AS [Mon DD, YYYY] | Jan 24, 1998
HH:MM:SS | | SELECT CONVERT(VARCHAR(8), GETDATE(), 108) | 03:24:53
Mon DD YYYY HH:MI:SS:MMMAM (or PM) | Default + milliseconds | SELECT CONVERT(VARCHAR(26), GETDATE(), 109) | Apr 28 2006 12:32:29:253PM
MM-DD-YY | USA | SELECT CONVERT(VARCHAR(8), GETDATE(), 10) AS [MM-DD-YY] | 01-01-06
MM-DD-YYYY | USA | SELECT CONVERT(VARCHAR(10), GETDATE(), 110) AS [MM-DD-YYYY] | 01-01-2006
YY/MM/DD | | SELECT CONVERT(VARCHAR(8), GETDATE(), 11) AS [YY/MM/DD] | 98/11/23
YYYY/MM/DD | | SELECT CONVERT(VARCHAR(10), GETDATE(), 111) AS [YYYY/MM/DD] | 1998/11/23
YYMMDD | ISO | SELECT CONVERT(VARCHAR(6), GETDATE(), 12) AS [YYMMDD] | 980124
YYYYMMDD | ISO | SELECT CONVERT(VARCHAR(8), GETDATE(), 112) AS [YYYYMMDD] | 19980124
DD Mon YYYY HH:MM:SS:MMM(24h) | Europe default + milliseconds | SELECT CONVERT(VARCHAR(24), GETDATE(), 113) | 28 Apr 2006 00:34:55:190
HH:MI:SS:MMM(24H) | | SELECT CONVERT(VARCHAR(12), GETDATE(), 114) AS [HH:MI:SS:MMM(24H)] | 11:34:23:013
YYYY-MM-DD HH:MI:SS(24h) | ODBC Canonical | SELECT CONVERT(VARCHAR(19), GETDATE(), 120) | 1972-01-01 13:42:24
YYYY-MM-DD HH:MI:SS.MMM(24h) | ODBC Canonical (with milliseconds) | SELECT CONVERT(VARCHAR(23), GETDATE(), 121) | 1972-02-19 06:35:24.489
YYYY-MM-DDTHH:MM:SS.MMM | ISO8601 | SELECT CONVERT(VARCHAR(23), GETDATE(), 126) | 1998-11-23T11:25:43.250
DD Mon YYYY HH:MI:SS:MMMAM | Kuwaiti | SELECT CONVERT(VARCHAR(26), GETDATE(), 130) | 28 Apr 2006 12:39:32:429AM
DD/MM/YYYY HH:MI:SS:MMMAM | Kuwaiti | SELECT CONVERT(VARCHAR(25), GETDATE(), 131) | 28/04/2006 12:39:32:429AM

Extended Date Formats

Date Format | SQL Statement | Sample Output
YY-MM-DD | SELECT SUBSTRING(CONVERT(VARCHAR(10), GETDATE(), 120), 3, 8) AS [YY-MM-DD] or SELECT REPLACE(CONVERT(VARCHAR(8), GETDATE(), 11), '/', '-') AS [YY-MM-DD] | 99-01-24
YYYY-MM-DD | SELECT CONVERT(VARCHAR(10), GETDATE(), 120) AS [YYYY-MM-DD] or SELECT REPLACE(CONVERT(VARCHAR(10), GETDATE(), 111), '/', '-') AS [YYYY-MM-DD] | 1999-01-24
MM/YY | SELECT RIGHT(CONVERT(VARCHAR(8), GETDATE(), 3), 5) AS [MM/YY] or SELECT SUBSTRING(CONVERT(VARCHAR(8), GETDATE(), 3), 4, 5) AS [MM/YY] | 08/99
MM/YYYY | SELECT RIGHT(CONVERT(VARCHAR(10), GETDATE(), 103), 7) AS [MM/YYYY] | 12/2005
YY/MM | SELECT CONVERT(VARCHAR(5), GETDATE(), 11) AS [YY/MM] | 99/08
YYYY/MM | SELECT CONVERT(VARCHAR(7), GETDATE(), 111) AS [YYYY/MM] | 2005/12
Month DD, YYYY | SELECT DATENAME(MM, GETDATE()) + RIGHT(CONVERT(VARCHAR(12), GETDATE(), 107), 9) AS [Month DD, YYYY] | July 04, 2006
Mon YYYY | SELECT SUBSTRING(CONVERT(VARCHAR(11), GETDATE(), 113), 4, 8) AS [Mon YYYY] | Apr 2006
Month YYYY | SELECT DATENAME(MM, GETDATE()) + ' ' + CAST(YEAR(GETDATE()) AS VARCHAR(4)) AS [Month YYYY] | February 2006
DD Month | SELECT CAST(DAY(GETDATE()) AS VARCHAR(2)) + ' ' + DATENAME(MM, GETDATE()) AS [DD Month] | 11 September
Month DD | SELECT DATENAME(MM, GETDATE()) + ' ' + CAST(DAY(GETDATE()) AS VARCHAR(2)) AS [Month DD] | September 11
DD Month YY | SELECT CAST(DAY(GETDATE()) AS VARCHAR(2)) + ' ' + DATENAME(MM, GETDATE()) + ' ' + RIGHT(CAST(YEAR(GETDATE()) AS VARCHAR(4)), 2) AS [DD Month YY] | 19 February 72
DD Month YYYY | SELECT CAST(DAY(GETDATE()) AS VARCHAR(2)) + ' ' + DATENAME(MM, GETDATE()) + ' ' + CAST(YEAR(GETDATE()) AS VARCHAR(4)) AS [DD Month YYYY] | 11 September 2002
MM-YY | SELECT RIGHT(CONVERT(VARCHAR(8), GETDATE(), 5), 5) AS [MM-YY] or SELECT SUBSTRING(CONVERT(VARCHAR(8), GETDATE(), 5), 4, 5) AS [MM-YY] | 12-92
MM-YYYY | SELECT RIGHT(CONVERT(VARCHAR(10), GETDATE(), 105), 7) AS [MM-YYYY] | 05-2006
YY-MM | SELECT RIGHT(CONVERT(VARCHAR(7), GETDATE(), 120), 5) AS [YY-MM] or SELECT SUBSTRING(CONVERT(VARCHAR(10), GETDATE(), 120), 3, 5) AS [YY-MM] | 92-12
YYYY-MM | SELECT CONVERT(VARCHAR(7), GETDATE(), 120) AS [YYYY-MM] | 2006-05
MMDDYY | SELECT REPLACE(CONVERT(VARCHAR(10), GETDATE(), 1), '/', '') AS [MMDDYY] | 122506
MMDDYYYY | SELECT REPLACE(CONVERT(VARCHAR(10), GETDATE(), 101), '/', '') AS [MMDDYYYY] | 12252006
DDMMYY | SELECT REPLACE(CONVERT(VARCHAR(10), GETDATE(), 3), '/', '') AS [DDMMYY] | 240702
DDMMYYYY | SELECT REPLACE(CONVERT(VARCHAR(10), GETDATE(), 103), '/', '') AS [DDMMYYYY] | 24072002
Mon-YY | SELECT REPLACE(RIGHT(CONVERT(VARCHAR(9), GETDATE(), 6), 6), ' ', '-') AS [Mon-YY] | Sep-02
Mon-YYYY | SELECT REPLACE(RIGHT(CONVERT(VARCHAR(11), GETDATE(), 106), 8), ' ', '-') AS [Mon-YYYY] | Sep-2002
DD-Mon-YY | SELECT REPLACE(CONVERT(VARCHAR(9), GETDATE(), 6), ' ', '-') AS [DD-Mon-YY] | 25-Dec-05
DD-Mon-YYYY | SELECT REPLACE(CONVERT(VARCHAR(11), GETDATE(), 106), ' ', '-') AS [DD-Mon-YYYY] | 25-Dec-2005