As an old-school DBA who has been around the block more times than I'd like to admit, I am always a bit slow to jump on the bandwagon with new features and technology. I'm the guy who still doesn't like the idea of virtualizing databases, for example. The CLR was no different when it came out. Many DBAs like me feared it would lead to all kinds of unsupportable code, with developers doing things in .NET that should be done in T-SQL. And regrettably, I have seen some of this in my consulting engagements. I have actually seen ADO.NET code inside a CLR stored procedure. Go figure.
All that said, I ran across a situation a few weeks ago that presented a great opportunity to use a CLR function. In the database in question, one table stored a very large number as a hexadecimal value while another stored the same number as an integer. In both cases the datatype was varchar and the column had the same name, so the problem did not become apparent for a while. The query in question used the column in a join, so we had to come up with a way to convert the value on one side or the other. The big challenge was that because the integer value was so large (WAY bigger than a bigint), the normal T-SQL functions and methods did not work. I then went down the path of writing a T-SQL UDF to do the conversion, but ran into precision issues. After fighting with this for a while, it occurred to me that this would be easy to do in C#. Light bulb moment.
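For the curious, here is a minimal sketch of what such a function might look like. The function name and signature are my illustration, not the production code, and it leans on System.Numerics.BigInteger for the arbitrary-precision math. Keep in mind that System.Numerics is not on SQL Server's default list of supported CLR assemblies, so you may need to register that assembly alongside your own.

    using System.Data.SqlTypes;
    using System.Globalization;
    using System.Numerics;
    using Microsoft.SqlServer.Server;

    public partial class UserDefinedFunctions
    {
        // Converts a hex string such as '1C9A3F...' into its decimal string
        // representation. BigInteger handles values far larger than a bigint
        // without losing precision.
        [SqlFunction(IsDeterministic = true, DataAccess = DataAccessKind.None)]
        public static SqlString HexToDecimalString(SqlString hexValue)
        {
            if (hexValue.IsNull)
                return SqlString.Null;

            // The leading "0" keeps BigInteger.Parse from treating values whose
            // first hex digit is 8-F as negative (two's complement).
            BigInteger value = BigInteger.Parse("0" + hexValue.Value, NumberStyles.HexNumber);

            return new SqlString(value.ToString());
        }
    }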
The resulting CLR function is about six lines of C# code and plenty fast. I was concerned about performance since we were using it in a join, but it has been pretty good. In fact, because the original query did all of its work only to return zero rows (the hex string on one side could never equal the decimal string on the other), the new query performs about 25% of the I/O of the original.
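On the T-SQL side, the join ends up looking something like this (the table and column names are made up for illustration):

    SELECT i.SomeColumn
    FROM dbo.HexTable AS h
    JOIN dbo.IntTable AS i
        ON dbo.HexToDecimalString(h.BigNumber) = i.BigNumber;

One design note: wrapping the join column in a function means the optimizer can't seek on it. If that ever becomes a problem, a persisted computed column holding the converted value, with an index on it, should keep the join sargable, which ought to be allowed here since the function is deterministic.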