
10 Jun 2020, 3:20 p.m.
On 6/10/20 3:12 PM, Hayes Wang wrote:
Marek Vasut [mailto:marex@denx.de]
Sent: Wednesday, June 10, 2020 8:54 PM
[...]
- /* ADC Bias Calibration:
-  * read efuse offset 0x7d to get a 17-bit data. Remove the dummy/fake
-  * bit (bit3) to rebuild the real 16-bit data. Write the data to the
-  * ADC ioffset.
-  */
- ocp_data = r8152_efuse_read(tp, 0x7d);
- data = (u16)(((ocp_data & 0x1fff0) >> 1) | (ocp_data & 0x7));
Are these type casts really needed?
Is there not a warning from u32 to u16?
There is, but a warning about an implicit u32-to-u16 conversion seems justified, so maybe the conversion itself is what needs fixing?
Excuse me, I don't understand what you mean. The original value is 17-bit data, so I use a u32 to store it. The final desired data is 16 bits wide, which is why I have to convert it from u32 to u16.
So what happens to that bit 17, is it dropped?