When using OpenGL, Ogre decodes DDS files with OgreDDSCodec.cpp. When it decodes an uncompressed file with built-in mipmaps, the byte width of each row of the image is calculated as:
srcPitch = header.sizeOrPitch / std::max((size_t)1, mip * 2);
This is incorrect: at mip level 3, for example, it divides the pitch by 6 instead of 8.
The correct value would be:
srcPitch = header.sizeOrPitch >> mip;
(although this might have issues with rectangular textures if width is less than height).
An alternative is to ignore the sizeOrPitch field entirely and use dstPitch instead (as Ogre already does when sizeOrPitch is missing, on line 840). The MSDN DDS programming guide supports this approach, since some DDS writers set bad pitch values.
Ogre's DirectX path doesn't suffer from this: it calls the DirectX loader, which ignores sizeOrPitch, as does the DirectX SDK sample code for loading DDS files manually.
Below this, the mipmaps are read from the file into an Ogre image. The loops (lines 845 and 847) use imgData->depth and imgData->height (the dimensions of the top mipmap) for every mip level, instead of depth and height (the dimensions of the mip level currently being read). For example, for the 4th mipmap of a 512x512x1 texture they will try to read 64x512x1 pixels.
More info discussed here: http://www.ogre3d.org/forums/viewtopic.php?f=2&t=76100