
Hi Allen,
On Tue, Jul 10, 2012 at 10:45 AM, Allen Martin <amartin@nvidia.com> wrote:
On Sat, Jul 07, 2012 at 03:15:36AM -0700, Albert ARIBAUD wrote:
Hi Allen,
On Fri, 6 Jul 2012 16:17:19 -0700, Allen Martin <amartin@nvidia.com> wrote:
[snip]
And I forgot to mention, the code bloat from disabling the optimization is about 400 bytes (185136 -> 185540), so it's not bad, but it still seems a shame to disable all short branches because of one misoptimized one.
0.2% by my calcs
Can this not be limited to compiling the object files which are known to be sensitive to the problem?
I understand this issue fairly well now. It's a known bug in the assembler that has already been fixed:
http://sourceware.org/bugzilla/show_bug.cgi?id=12532
It only impacts preemptible symbols, and since u-boot doesn't have any dynamically loadable objects, it's only explicitly defined weak symbols that should trigger the bug.
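To make the trigger concrete, here's a minimal made-up example of the pattern; the function names are invented, but it's the general shape of a weak default implementation plus a same-file caller:

/*
 * Illustrative sketch only -- the names are hypothetical.  With
 * -foptimize-sibling-calls the final call below becomes a tail call
 * (a plain "b" instruction on ARM).  A buggy gas resolves that branch
 * to the local weak definition at assembly time instead of leaving a
 * relocation, so a strong override in another object file is never
 * reached.
 */
void __attribute__((weak)) board_init_hook(void)
{
	/* weak default: do nothing */
}

void common_init(void)
{
	/* ...other init... */
	board_init_hook();	/* call in tail position: becomes a sibling call */
}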
I built a new toolchain with binutils 2.22 and verified the bug is no longer present there. -fno-optimize-sibling-calls is the correct workaround for toolchains that do have the bug, so conditionally disabling the optimization for binutils < 2.22 seems like the right fix.
I ran a quick scrub of the u-boot tree and there are 195 instances of __attribute__((weak)) spread across 123 source files, so I think just disabling the optimization on the failing object files would be too fragile, as code movement could cause new failures to crop up.
So the options are:

1. Add -fno-optimize-sibling-calls for binutils < 2.22 - a 0.2% code size increase for people using slightly older tools.

2. Maintain the tweaking of a set of files - someone using binutils >= 2.22 adds __attribute__((weak)) to a single function and *BAM*, three months later someone complains that something broke.
I vote option 1
I do wonder, though, if we should spit out warnings when applying workarounds for older toolchains?
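For what it's worth, a rough, untested sketch of how option 1 plus such a warning might look - the version parsing and the PLATFORM_CPPFLAGS variable here are purely illustrative, not a proposed patch:

# Sketch only: parse "major.minor" from $(AS) --version (e.g. 2.21 -> 221)
# and apply the workaround, with a warning, for anything older than 2.22.
# The crude concatenation is good enough here since we only care about
# distinguishing 2.21 and earlier from 2.22.
binutils_ver := $(shell $(AS) --version 2>/dev/null | sed -n '1s/.* \([0-9]*\)\.\([0-9]*\).*/\1\2/p')

ifneq ($(binutils_ver),)
ifeq ($(shell test $(binutils_ver) -lt 222 && echo y),y)
$(warning binutils < 2.22: adding -fno-optimize-sibling-calls to work around PR 12532)
PLATFORM_CPPFLAGS += -fno-optimize-sibling-calls
endif
endif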
Regards,
Graeme