
On 6/11/20 5:18 AM, Hayes Wang wrote:
Marek Vasut [mailto:marex@denx.de]
Sent: Wednesday, June 10, 2020 9:21 PM
[...]
> + 	/* ADC Bias Calibration:
> + 	 * read efuse offset 0x7d to get a 17-bit data. Remove the dummy/fake
> + 	 * bit (bit3) to rebuild the real 16-bit data. Write the data to the
> + 	 * ADC ioffset.
> + 	 */
> + 	ocp_data = r8152_efuse_read(tp, 0x7d);
> + 	data = (u16)(((ocp_data & 0x1fff0) >> 1) | (ocp_data & 0x7));
Are these type casts really needed?
Isn't there a warning about converting from u32 to u16?
There is, but it seems justified to warn about a conversion from u32 to u16, so maybe that needs to be fixed?
Excuse me, I don't understand what you mean. The original value is 17-bit, so I use a u32 to store it. The final desired data is 16-bit, which is why I have to convert it from u32 to u16.
So what happens to that bit 17, is it dropped?
I have described it in the code comment, as follows.
/* ADC Bias Calibration:
 * read efuse offset 0x7d to get a 17-bit data. Remove the dummy/fake
 * bit (bit3) to rebuild the real 16-bit data. Write the data to the
 * ADC ioffset.
 */
A dummy bit is inserted into the real 16-bit data, and the resulting 17-bit value is stored at efuse offset 0x7d. Therefore, when reading the 17-bit value from the efuse, we have to remove the dummy bit to get the real data.
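For illustration, a minimal standalone sketch of that bit rebuild; the masks come from the quoted patch, but the sample efuse value below is made up:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical raw 17-bit efuse word; bit 3 is the dummy/fake bit. */
	uint32_t ocp_data = 0x1ace9;	/* made-up sample, not real hardware data */
	uint16_t data;

	/* Keep bits 0-2, drop bit 3, and shift bits 4-16 down by one
	 * to rebuild the real 16-bit calibration value.
	 */
	data = (uint16_t)(((ocp_data & 0x1fff0) >> 1) | (ocp_data & 0x7));

	printf("raw = 0x%05x -> rebuilt = 0x%04x\n",
	       (unsigned int)ocp_data, (unsigned int)data);
	return 0;
}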
Ah, hmm, then let's use a u32 type and be done with it. That solves the typecasts and is safe. Would that work?
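For example, with both variables kept as u32 the hunk would look roughly like this (sketch only, reusing the names from the quoted patch, not a tested change):

	u32 ocp_data, data;

	ocp_data = r8152_efuse_read(tp, 0x7d);
	/* Drop the dummy bit (bit 3); the result still fits in 16 bits,
	 * so no explicit cast is needed while data stays a u32.
	 */
	data = ((ocp_data & 0x1fff0) >> 1) | (ocp_data & 0x7);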