
Hi Simon,
2015-05-18 2:50 GMT+09:00 Simon Glass sjg@chromium.org:
Hi Masahiro,
On 15 May 2015 at 22:58, Masahiro Yamada yamada.masahiro@socionext.com wrote:
Hi Joe, (added Simon)
2015-05-16 4:52 GMT+09:00 Joe Hershberger joe.hershberger@gmail.com:
Hi Masahiro-san,
On Fri, May 15, 2015 at 6:01 AM, Masahiro Yamada yamada.masahiro@socionext.com wrote:
When we send patches, we are supposed to test them with build utilities such as MAKEALL or buildman. When we want to test global changes, the first hurdle, I think, is to collect toolchains for all the architectures.
We have some documents about the build utilities, but I have not seen any official information about how to get suitable cross-tools. Of course, it is possible to build them from source, but that is not always feasible.
Fortunately, the kernel.org site provides pre-built toolchains, but some architectures are missing. Also, some boards fail to build with the kernel.org tools. We sometimes see questions like "where can I get the compiler for this architecture?" on the ML. We should be able to prepare cross-compilers more easily.
It is true that buildman provides a --fetch-arch option for downloading kernel.org toolchains, but it has no access to others. And what we really want to know is most likely how to get compilers for the minor architectures that kernel.org does not provide.
Maybe just integrate this into buildman? Or remove it from buildman? Having it in buildman has the benefit that it updates buildman's config so it knows how to find the compiler.
I wanted to add more options to provide better flexibility.
For example, I wanted a --destdir option because I think installing tools under /opt/ or /usr/local/ is a common demand.
That's why I implemented this tool as a separate script. I also want to hear Simon's opinion.
I think a separate script is fine - it helps if we can reduce the functionality in buildman. But buildman should call this script.
We cannot mix Python 2 and Python 3 scripts.
If buildman is to call this script, it must be rewritten in Python 2.
Also I think your script should use urllib2 and IMO the simple progress update that buildman provides is plenty.
In Python 2, there are two libraries, urllib and urllib2, which do similar things.
In Python 3, the former was discontinued and the latter was renamed to "urllib".
So, the "urllib" I am using in my Python 3 script is equivalent to what you call "urllib2" in Python 2.
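To illustrate the point, here is a minimal sketch of a download helper that runs on either version; the try/except import is the usual compatibility idiom, and the function name is made up for this example:

```python
# Minimal sketch of a version-agnostic download helper.
# On Python 3 the old urllib2 API lives in urllib.request.
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

def fetch(url, dest):
    """Download url to the local file dest, reading in chunks so a
    progress indicator could be hooked in between reads."""
    response = urlopen(url)
    with open(dest, 'wb') as out:
        while True:
            chunk = response.read(64 * 1024)
            if not chunk:
                break
            out.write(chunk)
```

A file:// URL works as well as http://, which makes the helper easy to exercise without network access.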
Re the destination, buildman could provide its own destination for its download operation. But I suggest we use the same default one for both.
OK, I can do this.
Perhaps your default makes more sense than buildman's? After all, buildman doesn't really care where it is.
This tool is intended to have a more generic design, without hard-coding kernel.org specifics.
To achieve that, this tool consists of two files: the Python script (this file) and a database file containing the URLs of the tarballs.
We just need to update the latter when new compiler versions are released (or better compilers are found). The file is in RFC 822 form for easier editing.
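The thread does not show the database file itself, so here is a hypothetical sketch of how blank-line-separated RFC 822 entries could be parsed with the standard email.parser module; the field names (Target, URL) and the example URLs are invented for illustration:

```python
import email.parser

# Hypothetical database entries in RFC 822 header form; the real file's
# field names are not shown in the thread.
SAMPLE = """\
Target: arm
URL: https://example.com/toolchains/arm-gcc.tar.xz

Target: microblaze
URL: https://example.com/toolchains/microblaze-gcc.tar.xz
"""

def parse_db(text):
    """Split blank-line-separated RFC 822 blocks and parse each into a dict."""
    parser = email.parser.Parser()
    return [dict(parser.parsestr(block))
            for block in text.split('\n\n') if block.strip()]

entries = parse_db(SAMPLE)
```

One advantage of this format over an ad-hoc one is that the stdlib parser handles continuation lines and unknown fields for free.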
Any reason not to just maintain this list on the wiki? It seems this is the primary issue for everyone... not figuring out how to download or extract the toolchain.
I can just note URLs down in README or wiki.
Of course, everyone knows how to download a tarball and extract it, but isn't it more convenient to have a utility that can do everything for you?
The script only uses Python libraries and does not rely on external programs, although it displays a wget-like log when downloading tarballs. :-)
It seems like using wget would be more appropriate. Why reinvent the wheel?
My intention was to not depend on particular external programs like wget or curl.
But, you are right, we should not reinvent the wheel.
I will replace my implementation with a caller of wget.
I think urllib2 is a better solution.
Now I understand we must depend on "tar" anyway.
So my original goal of "no external program dependency" seems impossible (at least in Python 2).
I do not mind depending on wget, and it seems easier.
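A minimal sketch of what delegating the download to wget could look like; the function name and the -P flag (which sets wget's destination directory) are chosen here just for illustration:

```python
import subprocess

def download(url, destdir):
    """Fetch url into destdir with wget, letting wget draw its own
    progress output on the terminal.

    Raises subprocess.CalledProcessError if wget exits non-zero.
    """
    subprocess.check_call(['wget', '-P', destdir, url])
```

This trades the pure-Python implementation for wget's mature retry and progress handling, at the cost of a runtime dependency.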
This is RFC because I think it can be brushed up further. If the basic idea is OK, I will improve the code and add more comments.
Note this script is written in Python 3 and only works with Python 3.3 or later. I do not think that is too much of a limitation, but some still-supported popular distributions include an older version. For example, it looks like Ubuntu 12.04 LTS ships with Python 3.2.
Why not write it in something that exists everywhere? If it's just for downloading toolchains, it seems it should be easy to make it Python 2.6 compatible.
The reason for the Python 3.3 dependency is that I wanted to unpack tarballs with a Python library.
The tarfile library only supports the .xz format in version 3.3 or later.
We can just invoke the tar program rather than a Python library, so this version dependency will go away.
Yes :-)
def Unpack(self, fname, dest):
    """Unpack a tar file

    Args:
        fname: Filename to unpack
        dest: Destination directory
    Returns:
        Directory name of the first entry in the archive, without
        the trailing /
    """
    stdout = command.Output('tar', 'xvfJ', fname, '-C', dest)
    return stdout.splitlines()[0][:-1]
But using Python 3 seems the right way in the long run, I think.
Python 3 is already widespread, isn't it?
Getting that way, but why require it?
I imagine the main interest of Python developers has already moved to Python 3.
I think Python 2 will also be maintained long enough, but our scripts should move to Python 3 for better maintainability in the long run.
When I run the tool with no args it seems to start downloading everything.
Would "tools/get-crosstools all" be better?
I think it would be better if it printed help. Also it could use slightly more extensive help (maybe a -H option like buildman which prints the README?).
One more question - how do we handle multiple toolchain versions? We may as well figure that out now. How about adding another level in the directory hierarchy with the toolchain source or type? Then we could build with both eldk and kernel.org, for example. Tom has talked about how we might add this feature to buildman too (i.e. build the same arch with multiple toolchains).
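One possible shape for that extra directory level, purely as an illustration (the source names and versions here are hypothetical):

```
toolchains/
    kernel.org/
        gcc-4.9.0-nolibc/
            arm-unknown-linux-gnueabi/
    eldk/
        5.4/
            arm/
```

The toolchain source becomes the first path component, so the same arch can coexist under several providers.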
Seems a good idea, but it feels like too much work here...