December 14, 2021


I was troubleshooting a stored procedure on SQL Server the other day where a FLOAT value was failing to convert to a DECIMAL and breaking the whole script. After scouring the web I found an interesting solution that I thought I would share.

The Issue

Before I go into the solution, I want to dive a little deeper into what the issue was. This particular stored procedure pulls data from another database and loads it into a data warehouse with a defined table structure. The field that was causing me issues had a FLOAT data type, while the destination field had a DECIMAL(8, 4) data type. Typically, I just use CAST() to cast the FLOAT to a DECIMAL. When running the script, we were getting the following error.

Msg 8115, Level 16, State 6, Line 23
Arithmetic overflow error converting float to data type numeric.
The statement has been terminated.

The issue is that the destination data type isn't big enough to hold the FLOAT value. When SQL Server converts a value from FLOAT to DECIMAL, the destination type has to have enough digits to the left of the decimal point to hold the whole-number portion of the value. The digits to the right of the decimal point are simply rounded to the type's scale. Let's use the code below as an example.

SELECT CAST(6471.80197152331 AS DECIMAL (8, 4));
-- Returns: 
-- 6471.8020

The data type DECIMAL(8, 4) means the value has a precision of 8 digits total, with 4 of them after the decimal point. That leaves 4 digits before the decimal. Since the value provided has only 4 digits to the left of the decimal, this case works fine. Now let's adjust the data type to DECIMAL(8, 6).
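To see that boundary in action, here is a quick sketch of my own (not from the original script) showing the largest value a DECIMAL(8, 4) can hold:

```sql
-- The maximum value DECIMAL(8, 4) can store: 4 digits on each side of the decimal.
SELECT CAST(9999.9999 AS DECIMAL(8, 4));
-- Returns:
-- 9999.9999

-- One more digit on the left overflows the type:
-- SELECT CAST(10000.0 AS DECIMAL(8, 4));
-- Msg 8115 ... Arithmetic overflow error converting numeric to data type numeric.
```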

SELECT CAST(6471.80197152331 AS DECIMAL (8, 6));
-- Returns:
-- Msg 8115, Level 16, State 8, Line 8
-- Arithmetic overflow error converting numeric to data type numeric.

This code fails because DECIMAL(8, 6) leaves only 2 digits to the left of the decimal to store values.

There are a few things we can do here. The first is we could just make the DECIMAL something like DECIMAL(18, 4) to give us plenty of digits to store data in. I prefer to be more conservative with storage and data types, though. The second option is to figure out the greatest number in that field in the source database and adjust the destination field to accommodate it.

The second approach is the one I prefer. This leads to another issue, though: how do I tell what the problem number is? One way is to use MAX() on the field to find the largest number.

SELECT MAX(field_name)
FROM [dbo].[Table]

For most cases this works fine. I needed something a little different for my situation, as the data was slow to pull and not easily accessible at the source. Fumbling through forums, I found a post where someone suggested using TRY_CAST(). I found this rather intriguing as I had never used that particular function before. I tried it, and sure enough it worked exactly how I needed it to.


TRY_CAST() works just like CAST(), except that when the conversion fails it returns NULL instead of raising an error. Let's take a look at TRY_CAST() using the code we were working with previously.

SELECT TRY_CAST(6471.80197152331 AS DECIMAL (8, 6));
-- Returns:
-- NULL
As noted previously, this cast will not work because there are not enough digits on the left side of the decimal. Since the cast fails, a NULL is returned.
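This NULL-on-failure behavior also gives you a handy way to hunt down the offending rows without pulling the whole table. A sketch, using the same placeholder table and field names as the MAX() example above:

```sql
-- Rows whose value will not fit in the destination type come back as NULL
-- from TRY_CAST() even though the source value itself is not NULL.
SELECT [field_name]
FROM [dbo].[Table]
WHERE TRY_CAST([field_name] AS DECIMAL(8, 4)) IS NULL
	AND [field_name] IS NOT NULL;
```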

Performance of CAST() vs. TRY_CAST()

My initial thought was that the performance has to suck because TRY_CAST() must be doing a bunch of extra error-handling logic. To confirm my suspicions, I ran some rudimentary tests to see how the two compare.

For this test, I created a table and populated it with 2,000,000 rows of float data. I am sure there is an easier way to do this, but this is ultimately how I generated my numbers.

INSERT INTO [dbo].[FloatTest] (
	[float_value]
	, [string_value]
)
SELECT
	[random_value] AS [float_value]
	, SUBSTRING(TRIM(CAST([random_value] AS VARCHAR(36))), 0, CHARINDEX('e', TRIM(CAST([random_value] AS VARCHAR(36))))) AS [string_value]
FROM (
	SELECT
		RAND()*(rn-1.5)-1.5 AS [random_value]
	FROM (
		SELECT TOP 2000000 rn = ROW_NUMBER() OVER (ORDER BY s1.[object_id])
		FROM sys.all_objects AS s1
			CROSS JOIN sys.all_objects AS s2
	) t
) t;

In performing the test, I am pulling from this [dbo].[FloatTest] table and inserting the values into a temp table called #tempFloatTable. I am dropping the table in between each test to make sure I am timing only the inserts of the new data.
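The reset between runs isn't shown in the post, but it would look something like this (a sketch; I'm assuming SQL Server 2016+ for DROP TABLE IF EXISTS, and SET STATISTICS TIME as one way to capture the timings):

```sql
-- Reset between runs so only the fresh SELECT ... INTO is timed.
DROP TABLE IF EXISTS #tempFloatTable;

-- Report elapsed time for each statement under test.
SET STATISTICS TIME ON;
```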

The first test uses CAST() to convert the values from FLOAT to DECIMAL(12, 4), a data type the values will fit in. Here are my results.

SELECT
	[float_value]
	, CAST([float_value] AS DECIMAL(12, 4)) AS converted_value
INTO #tempFloatTable
FROM [dbo].[FloatTest]

-- Run 1 - 00:00:00.805
-- Run 2 - 00:00:00.856
-- Run 3 - 00:00:00.698
-- Average - 00:00:00.786

For my second test, I kept the same data types but used TRY_CAST() instead.

SELECT
	[float_value]
	, TRY_CAST([float_value] AS DECIMAL(12, 4)) AS converted_value
INTO #tempFloatTable
FROM [dbo].[FloatTest]

-- Run 1 - 00:00:00.689
-- Run 2 - 00:00:00.712
-- Run 3 - 00:00:00.689
-- Average - 00:00:00.696

The results from this surprised me: TRY_CAST() is actually faster in this situation. A regular cast works nicely, but what happens when we try to cast to a data type where we know there will be failures? Let's try casting the values to a DECIMAL(12, 6). The largest number in this dataset is 1264856.00956389, which requires 7 digits on the left side of the decimal. With the data type DECIMAL(12, 6), we only have 6.

SELECT
	[float_value]
	, TRY_CAST([float_value] AS DECIMAL(12, 6)) AS converted_value
INTO #tempFloatTable
FROM [dbo].[FloatTest]

-- Run 1 - 00:00:00.718
-- Run 2 - 00:00:00.712
-- Run 3 - 00:00:00.734
-- Average - 00:00:00.721

Overall, the performance is still pretty good. I was sharing this with a co-worker and he asked, "How does it perform using VARCHAR?" Great question. Let's take a look. I am going to run the same tests, but using the [string_value] column instead. This column has the same values as the [float_value] column, but stored in a VARCHAR(36) field.

SELECT
	[string_value]
	, CAST([string_value] AS DECIMAL(12, 4)) AS converted_value
INTO #tempFloatTable
FROM [dbo].[FloatTest]

-- Run 1 - 00:00:00.862
-- Run 2 - 00:00:00.845
-- Run 3 - 00:00:00.867
-- Average - 00:00:00.858

SELECT
	[string_value]
	, TRY_CAST([string_value] AS DECIMAL(12, 4)) AS converted_value
INTO #tempFloatTable
FROM [dbo].[FloatTest]

-- Run 1 - 00:00:00.883
-- Run 2 - 00:00:00.850
-- Run 3 - 00:00:00.859
-- Average - 00:00:00.864

SELECT
	[string_value]
	, TRY_CAST([string_value] AS DECIMAL(12, 6)) AS converted_value
INTO #tempFloatTable
FROM [dbo].[FloatTest]

-- Run 1 - 00:00:00.851
-- Run 2 - 00:00:01.011
-- Run 3 - 00:00:00.847
-- Average - 00:00:00.903

As you can see, TRY_CAST() didn't perform quite as well when casting from a VARCHAR, but the speeds are still acceptable.


TRY_CAST() is a great function to use if you need a script to keep running even when bad data is passed to it. At worst, the destination field will store a NULL, provided the field is nullable. CAST() is still a great option when you want to make sure data is properly converted and to get an indication when something fails.
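If you go the TRY_CAST() route for a load like this, one way to keep an eye on the bad data is to count the failed conversions after the fact. A sketch against the test table from earlier:

```sql
-- Count rows that failed to convert: TRY_CAST() returned NULL
-- even though the source value was present.
SELECT COUNT(*) AS failed_conversions
FROM [dbo].[FloatTest]
WHERE TRY_CAST([float_value] AS DECIMAL(8, 4)) IS NULL
	AND [float_value] IS NOT NULL;
```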

There isn't much difference between them from a performance standpoint, especially when converting from FLOAT to DECIMAL.

I don't have a comments section yet, so feel free to send me feedback on this blog.

Kevin Williams

Kevin is a data engineer and is the Business Intelligence Practice Lead at Software Design Partners specializing in data warehousing. He is a father, an occasional gamer, and lover of many different types of music.

The opinions expressed on this site are my own and may not represent my employer's view.

