I honestly have no idea how or why people end up using this configuration option. If you know, please drop me a line or leave a comment below; I’m genuinely curious. When I ask people why it’s set after I see or find it, I normally get a “well, that’s how the last server was” or “I don’t know”. Awesome. I always like to just change random settings for no particular reason. Some days you wake up and decide you’re going to go change a bunch of settings on your computer, because why not, it’ll be fun.
SQL Server has had an I/O affinity setting for a long time, too long, and it’s time it went away. In almost all cases, if this is set, you’re doing it wrong. This seems (I could be incorrect here) to be a benchmark-specific configuration option for getting a higher score. I’m not saying it hasn’t helped in production environments in the past, but that past is quite distant. There is, essentially, no reason to even look at this now.
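If you want to see whether someone has set this on an instance you’ve inherited, the current values are visible in sys.configurations. Here’s a minimal sketch, assuming pyodbc, an installed SQL Server ODBC driver, and a placeholder connection string (adjust for your environment):

```python
import pyodbc

# Hypothetical connection string; point it at the instance you want to check.
CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=YourServer;Trusted_Connection=yes;"

QUERY = """
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('affinity I/O mask', 'affinity64 I/O mask');
"""

with pyodbc.connect(CONN_STR) as conn:
    for name, value_in_use in conn.cursor().execute(QUERY):
        # 0 is the default (no I/O affinity); anything else means someone set it.
        print(f"{name}: {value_in_use}")
```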
One of the things that prompted this post was a question I was asked: “I have a server with 128 cores, it won’t let me set I/O affinity on some of them, is this a bug?” No, it’s not a bug. It’s a leftover remnant from a bygone era. It’s not possible to set I/O affinity past 64 cores. Why? Because it’s a bitmask, and the configuration options for that bitmask (yes, there are two of them: affinity I/O mask and affinity64 I/O mask) are 32 bits each. Combining them gives a maximum of 64 bits, hence the maximum of 64 cores.
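To make the 64-core ceiling concrete, here’s a small sketch of the bitmask math, assuming the first option covers CPUs 0–31 and the second covers CPUs 32–63 (the function name is just for illustration):

```python
def io_affinity_masks(cpus):
    """Split a set of CPU ids into the two 32-bit I/O affinity masks."""
    combined = 0
    for cpu in cpus:
        if cpu >= 64:
            # There is no third option, so there are simply no bits for CPUs 64+.
            raise ValueError(f"CPU {cpu} cannot be represented: only bits 0-63 exist")
        combined |= 1 << cpu
    low = combined & 0xFFFFFFFF            # 'affinity I/O mask'   (CPUs 0-31)
    high = (combined >> 32) & 0xFFFFFFFF   # 'affinity64 I/O mask' (CPUs 32-63)
    return low, high

print(io_affinity_masks([0, 1, 35]))   # (3, 8): bits 0 and 1 low, bit 3 high
try:
    io_affinity_masks([127])
except ValueError as e:
    print(e)                           # a 128-core box runs out of bits at 63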
I wish this would be removed in a future version so that there aren’t “oopsie” configurations of it, as it can cause actual issues.
Well, the docs say that “Because setting the SQL Server affinity I/O mask option is a specialized operation, it should be used only when necessary. In most cases, the Windows 2000 or Windows Server 2003 default affinity provides the best performance.” So running a newer Windows Server makes me feel all right 🙂