Short answer: Let X be the anamorphic squeeze factor (e.g., 1.33, 1.66, whatever). Scale the width of your sensor up by SQRT(X), and scale the height down by SQRT(1/X). That's the largest size you can scale to that uses all the information the lens captured.

So if you're using a 1.33x anamorphic lens with a 4K sensor (4096 x 2160), you can scale the result up to as large as (approximately) 4724 x 1873.

Both the sensor and the final image have 8,847,360 pixels, just in a different shape. 
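To make the arithmetic concrete, here's a small sketch of that calculation (the function name `desqueeze_dims` is just illustrative, not from any library):

```python
import math

def desqueeze_dims(width, height, squeeze):
    """Largest de-squeezed frame with the same pixel count as the sensor.

    Width grows by sqrt(squeeze) and height shrinks by the same factor,
    so width * height (total pixels) is preserved.
    """
    s = math.sqrt(squeeze)
    return round(width * s), round(height / s)

# 1.33x anamorphic on a 4K DCI sensor
w, h = desqueeze_dims(4096, 2160, 1.33)
print(w, h)              # 4724 1873
print(4096 * 2160)       # 8847360 pixels, same count either way
```

The key invariant is that multiplying width by sqrt(X) and dividing height by sqrt(X) leaves the pixel count unchanged, which is exactly the "uses all the information" condition above.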
Now, this is less useful than it seems, because while the 4724 x 1873 aspect ratio is common for film, it's not a common delivery size. It would be scaled down to 4096 x 1716 to conform to the 4K DCI cinemascope (2.39:1) format.
None of the above takes into account interactions with the Bayer filter, which would considerably complicate the analysis. But my experiments today with a resolution chart suggest that the effect there is very modest.