
On Thu, Sep 07, 2017 at 06:13:21AM -0700, Bin Meng wrote:
When EHCI and xHCI are enabled at the same time, the USB storage device driver will fail to read/write from/to a storage device attached to the xHCI interface, because its transfer block count exceeds the xHCI driver's limitation.
With driver model, we have an API to get the controller's maximum transfer size, and we can use that to determine the storage driver's read/write capability.
Note: the non-DM version of the driver is still broken with xHCI, and the intent here is not to fix it, since the non-DM xHCI driver is already broken in places like USB 3.0 hub support, etc.
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Tested-by: Stefan Roese <sr@denx.de>
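
For context, usb_get_max_xfer_size() is the DM API referred to above. Below is a minimal sketch of how such a uclass helper would typically dispatch to the controller driver; the controller_dev field, the get_max_xfer_size op name and its (struct udevice *bus, size_t *size) signature are assumptions taken from this series rather than verified against the tree:

/* Assumed U-Boot headers for a DM USB uclass helper */
#include <dm.h>
#include <errno.h>
#include <usb.h>

int usb_get_max_xfer_size(struct usb_device *udev, size_t *size)
{
	/*
	 * Assumption: with DM_USB enabled, the device's host controller is
	 * reachable via udev->controller_dev.
	 */
	struct udevice *bus = udev->controller_dev;
	struct dm_usb_ops *ops = usb_get_ops(bus);

	/*
	 * Controllers that do not implement the hook report -ENOSYS, which
	 * is what lets the storage code fall back to the conservative
	 * 20-block default in the hunk quoted below.
	 */
	if (!ops->get_max_xfer_size)
		return -ENOSYS;

	return ops->get_max_xfer_size(bus, size);
}
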
[snip]
@@ -953,6 +957,17 @@ static void usb_stor_set_max_xfer_blk(struct us_data *us)
 #else
 	blk = 20;
 #endif
+#else
+	ret = usb_get_max_xfer_size(udev, (size_t *)&size);
+	if (ret < 0) {
+		/* unimplemented, let's use default 20 */
+		blk = 20;
+	} else {
+		if (size > USHRT_MAX * 512)
+			blk = USHRT_MAX;
+		blk = size / 512;
+	}
+#endif
So, Coverity saw this and found an issue (CID 167250), and I was going to just fix it, but I'm not sure which way to go. The problem is that we check size > (USHRT_MAX * 512) and set blk = USHRT_MAX, but then we unconditionally assign blk = size / 512, so the check above has no effect. My background recollection, though, is that there's a real issue being addressed here. Can we really just always say blk = size / 512 in this case, or did we want to be capping size, not blk, under the if? Thanks!
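
If the intent was the latter, i.e. to cap the transfer size at USHRT_MAX 512-byte blocks (since SCSI READ(10)/WRITE(10) carry a 16-bit block count), one way to resolve the dead store would be to clamp size before the division instead of assigning blk inside the if. A sketch of that variant, untested and offered only to make the question concrete:

	ret = usb_get_max_xfer_size(udev, (size_t *)&size);
	if (ret < 0) {
		/* unimplemented, let's use default 20 */
		blk = 20;
	} else {
		/*
		 * Clamp the controller's limit so that blk still fits in the
		 * 16-bit block count of SCSI READ(10)/WRITE(10).
		 */
		if (size > USHRT_MAX * 512)
			size = USHRT_MAX * 512;
		blk = size / 512;
	}
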