1 Commits (b34a9a6cd3c449d49db935af47c3f352c79f5051)
Author | SHA1 | Message | Date
---|---|---|---
 | 7dd568475d | Fix "illegal instruction" errors on some CPUs (#177) | 5 years ago

Full commit message:

This is done by pinning gmp to a fork of `esy-packages/esy-gmp` that uses the `--enable-fat` argument, suggested by @ulrikstrid. Here's the description of the open PR for `esy-packages/esy-gmp`, https://github.com/esy-packages/esy-gmp/pull/3:

> GMP uses [Intel ADX](https://en.wikipedia.org/wiki/Intel_ADX) to do math stuff when capable for performance reasons. Whether to use ADX or not is chosen at compile time, unless you specify `--enable-fat`, which creates a "fat" binary that decides at runtime whether to use these custom instructions:
>
> > Using --enable-fat selects a “fat binary” build on x86, where optimized low level subroutines are chosen at runtime according to the CPU detected. This means more code, but gives good performance on all x86 chips. (This option might become available for more architectures in the future.)
>
> Without this flag, users can get "illegal hardware instruction" errors when running their binaries on a machine without Intel ADX.
>
> So, in other words, this PR enables building gmp into a binary on CI which _has Intel ADX_, and then using it on a machine that does not have it (like AMD or older Intels).
>
> To me, it sounds like a sane default.
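The failure mode described above can be checked on a given machine: on Linux, the CPU's supported extensions are listed in the `flags` line of `/proc/cpuinfo`. A minimal sketch, assuming a Linux host (the `--enable-fat` configure line in the comment is the flag the PR adds; the snippet itself only inspects the local CPU):

```shell
# The fix in this commit amounts to configuring GMP with:
#   ./configure --enable-fat && make
# (run inside a GMP source tree), so routine selection happens at runtime.
#
# Check whether this CPU advertises ADX, which a non-fat GMP build
# compiled on an ADX-capable machine would require:
if grep -qw adx /proc/cpuinfo 2>/dev/null; then
  echo "ADX available"
else
  echo "ADX not available"
fi
```

On a machine that prints "ADX not available" (like the AMD or older Intel machines mentioned in the PR description), a non-fat GMP binary built on ADX-capable CI hardware would crash with an illegal-instruction error, while a `--enable-fat` build falls back to generic routines.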