This paper describes a quantitative model of the effects of three practical display design variables on the visibility of the random dither noise used to improve grayscale performance. The magnitude of noise that can be tolerated, and thus the size of the grayscale step, increases as expected with decreasing luminance and pixel pitch and with increasing frame rate. The data are summarized in a multiple regression model that explains 95% of the variance in the data from seven observers. When this simple technique is employed in video images, grayscale steps as large as 3% can be used under any practical conditions. For typical desktop displays with pixel pitch in the range of 1.5 to 2 arcmin, the maximum grayscale step size increases to 5-7%. As luminance decreases below 1 fL, the grayscale step size that can be tolerated increases rapidly, such that below 0.01 fL a binary display performs as well as a display with 8-bit grayscale. For the typical simulation training display systems installed today (2.5 arcmin pitch, 60 Hz, <20 fL), 60 grayscale steps would be sufficient if they could be optimally allocated. The 255 grayscale steps available in the simplest of modern digital interfaces provide ample headroom for tolerating the sub-optimal distribution of grayscale steps produced by the standard gamma corrections used in image generators (IGs) and projectors.
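The dithering technique under evaluation, adding random noise before quantization so that the time-averaged output approximates an intermediate gray level, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and parameters (`target`, `levels`, `frames`) are hypothetical.

```python
import numpy as np

def random_dither(target, levels=2, frames=60, rng=None):
    """Render a fractional gray level on a display with coarse
    quantization by adding uniform random noise before rounding.

    target -- desired gray value in [0, 1]
    levels -- number of displayable gray levels (2 = binary display)
    frames -- number of successive frames to generate
    """
    rng = np.random.default_rng(0) if rng is None else rng
    step = 1.0 / (levels - 1)  # size of one quantization step
    # Uniform noise spanning one quantization step randomizes the
    # rounding decision; over many frames the mean displayed value
    # converges to the target, trading bit depth for temporal noise.
    noise = rng.uniform(-step / 2.0, step / 2.0, size=frames)
    quantized = np.round((target + noise) / step) * step
    return quantized.clip(0.0, 1.0)

# Even a binary (1-bit) display reproduces a 30% gray on average:
out = random_dither(0.3, levels=2, frames=10000)
print(out.mean())  # time-average is close to 0.3
```

The visibility model in the paper then predicts how large `step` may be, as a function of luminance, pitch, and frame rate, before this temporal noise becomes objectionable.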