Recently I was asked a question about how the decimal data type rounds off values when doing a simple calculation. The question was:
DECLARE @x DECIMAL(38, 0) = 0
DECLARE @y DECIMAL(32, 4) = 4.2
SELECT @x - @y   -- Output: -4
GO
DECLARE @x DECIMAL(37, 0) = 0
DECLARE @y DECIMAL(32, 4) = 4.2
SELECT @x - @y   -- Output: -4.2
I am not able to find out why the output gets rounded off in the case of DECIMAL(38, 0) but not in the case of DECIMAL(37, 0). As per BOL, precisions 37 and 38 both take the same storage size, so that should not matter.
The answer is quite simple but not so obvious until you look closer. The maximum value for the precision (p) part of the decimal data type decimal(p, s) is 38. The scale (s) is the number of digits allowed to the right of the decimal point, which leaves p - s digits for the integral part. So when you use something like decimal(10, 6), you are saying that the total number of digits on both sides of the decimal point cannot exceed 10, with up to 6 digits allowed to the right of the decimal point and therefore at most 4 to the left. This is common knowledge and we are all aware of it. However, this logic describes how values are stored in the data type; it does not describe how the result type is determined when addition and subtraction are performed on it.
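To make the digit split concrete, here is a small Python sketch. fits_decimal is a hypothetical helper (not a SQL Server API) that checks whether a literal would fit into decimal(p, s) without rounding, using the rule above: at most s digits to the right of the point and at most p - s to the left.

```python
from decimal import Decimal

def fits_decimal(value: str, p: int, s: int) -> bool:
    """Return True if `value` fits decimal(p, s) without rounding:
    at most s fractional digits and at most p - s integral digits."""
    _, digits, exp = Decimal(value).as_tuple()
    frac_digits = max(0, -exp)                # digits right of the point
    int_digits = max(0, len(digits) + exp)    # digits left of the point
    return frac_digits <= s and int_digits <= p - s

# decimal(10, 6): up to 4 integral digits, up to 6 fractional digits
print(fits_decimal("9999.999999", 10, 6))  # fits
print(fits_decimal("10000", 10, 6))        # 5 integral digits - too many
print(fits_decimal("0.1234567", 10, 6))    # 7 fractional digits - too many
```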
For addition and subtraction, SQL Server computes the result type as scale = max(s1, s2) and precision = max(s1, s2) + max(p1 - s1, p2 - s2) + 1, then caps the precision at 38. When the cap kicks in, the scale is reduced first so that the integral part of the result is not truncated. With DECIMAL(38, 0) as one operand, the integral part already needs all 38 digits, so no digits are left for decimal places and the result type is DECIMAL(38, 0): the answer is rounded to -4. With DECIMAL(37, 0) the integral part needs only 37 digits, which leaves one digit of scale, so the result type is DECIMAL(38, 1) and -4.2 comes through intact. However, when performing multiplication or division this reduction doesn't apply in the same way, because there the decimal places matter for the correctness of the result and we cannot simply truncate the result to fit the data type. If you refer to the link below, you will see the explanation for this scenario just below the table describing the result precision and scale for each of the cases I mentioned above.
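The rules above can be sketched in Python. addsub_result_type is a hypothetical helper that follows the documented formula for addition and subtraction, capping the precision at 38 and cutting the scale back to protect the integral part; the exact reduction behaviour has varied across SQL Server versions, so treat this as an approximation rather than a definitive implementation.

```python
def addsub_result_type(p1: int, s1: int, p2: int, s2: int) -> tuple[int, int]:
    """Result (precision, scale) of decimal(p1, s1) +/- decimal(p2, s2),
    per the documented formulas, with the 38-digit cap applied."""
    scale = max(s1, s2)
    integral = max(p1 - s1, p2 - s2)      # digits needed left of the point
    precision = scale + integral + 1      # +1 for a possible carry digit
    if precision > 38:
        precision = 38
        # Scale gives way so the integral part is not truncated.
        scale = max(0, 38 - integral)
    return precision, scale

# DECIMAL(38,0) - DECIMAL(32,4): no room for scale, -4.2 rounds to -4
print(addsub_result_type(38, 0, 32, 4))  # (38, 0)
# DECIMAL(37,0) - DECIMAL(32,4): one digit of scale survives, -4.2 is kept
print(addsub_result_type(37, 0, 32, 4))  # (38, 1)
```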