By Laurelle Turner and Kenneth Oh
Recently, Dr. Oh, our very own in-house western blotting guru, kicked off a webinar series covering everything you could want to know about western blotting (links to the series appear at the end of this article). He launched the opening chapter of the western world webinar series by introducing the mathematical technique every wise scientist needs to perform to report meaningful data from their western blots: normalization.
What Is Normalization?
You may be asking yourself whether normalization in western blots is the same thing as the data normalization you learned about in Stats 101. The answer is yes. Normalization is the analytical tool that allows you to accurately compare data points within a set, relative to each other.
There are multiple ways to normalize your western blotting data, and in theory they should all provide you with the same desired result: the amount of a specific protein (or proteins) in a sample, relative to the other samples run on the same blot. When comparing any given data point to a whole set, you'll need to know what the whole set looks like, mathematically. Each technique has its own set of considerations. Regardless of which method you ultimately decide to use, the criterion for any good normalization technique is to establish a region of linear signal response: quite simply, a range in which the increase in signal is linearly proportional to the amount of protein in the sample.
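To make the arithmetic concrete, here is a minimal Python sketch of one common approach: normalizing each lane's target band intensity to a loading control measured in the same lane, then expressing each lane relative to a reference lane. All intensity values, the choice of loading control, and the reference lane are hypothetical, and the numbers assume signals fall within the linear response range discussed above.

```python
# Minimal sketch of lane-wise western blot normalization.
# All intensity values below are hypothetical examples.

target_signal = [12000.0, 18500.0, 9400.0]     # hypothetical target band intensities
loading_control = [15000.0, 16200.0, 14800.0]  # hypothetical loading-control intensities

# Step 1: correct each lane for loading differences
# (target signal divided by that lane's loading-control signal).
corrected = [t / c for t, c in zip(target_signal, loading_control)]

# Step 2: express each lane relative to a reference lane
# (lane 1 here, an arbitrary choice for illustration).
relative = [x / corrected[0] for x in corrected]

for lane, value in enumerate(relative, start=1):
    print(f"Lane {lane}: relative abundance = {value:.2f}")
```

By construction the reference lane comes out at 1.00, so the other lanes read directly as fold changes relative to it; this only holds if both the target and the loading-control signals sit in the linear range.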