libc @ Savannah: The GNU C Library version 2.40 is now available
The GNU C Library version 2.40 is now available.
The GNU C Library is used as the C library in the GNU system and
in GNU/Linux systems, as well as many other systems that use Linux
as the kernel.
The GNU C Library is primarily designed to be a portable
and high performance C library. It follows all relevant
standards including ISO C11 and POSIX.1-2017. It is also
internationalized and has one of the most complete
internationalization interfaces known.
The GNU C Library webpage is at http://www.gnu.org/software/libc/
Packages for the 2.40 release may be downloaded from:
http://ftpmirror.gnu.org/libc/
http://ftp.gnu.org/gnu/libc/
The mirror list is at http://www.gnu.org/order/ftp.html
Distributions are encouraged to track the release/* branches
corresponding to the releases they are using. The release
branches will be updated with conservative bug fixes and new
features while retaining backwards compatibility.
NEWS for version 2.40
Major new features:
- The <stdbit.h> header type-generic macros have been changed when using
GCC 14.1 or later to use __builtin_stdc_bit_ceil etc. built-in functions
in order to support unsigned __int128 and/or unsigned _BitInt(N) operands
with arbitrary precisions when supported by the target (illustrated in the
first sketch after this list).
- The GNU C Library now supports a feature test macro _ISOC23_SOURCE to
enable features from the ISO C23 standard. Only some features from
this standard are supported by the GNU C Library. The older name
_ISOC2X_SOURCE is still supported. Features from C23 are also enabled
by _GNU_SOURCE, or by compiling with the GCC options -std=c23,
-std=gnu23, -std=c2x or -std=gnu2x.
- The following ISO C23 function families (introduced in TS
18661-4:2015) are now supported in <math.h>. Each family includes
functions for float, double, long double, _FloatN and _FloatNx, and a
type-generic macro in <tgmath.h> (also exercised in the first sketch
after this list).
- Exponential functions: exp2m1, exp10m1.
- Logarithmic functions: log2p1, log10p1, logp1.
- A new tunable, glibc.rtld.enable_secure, can be used to run a program
as if it were a setuid process. This is currently a testing tool to allow
more extensive verification tests for AT_SECURE programs and not meant to
be a security feature.
- On Linux, the epoll header was updated to include epoll ioctl definitions
and the related structure added in Linux kernel 6.9 (see the second sketch
after this list).
- The fortify functionality has been significantly enhanced for building
programs with clang against the GNU C Library.
- Many functions have been added to the vector library for aarch64:
acosh, asinh, atanh, cbrt, cosh, erf, erfc, hypot, pow, sinh, tanh
- On x86, memset can now use non-temporal stores to improve the performance
of large writes. This behaviour is controlled by a new tunable
x86_memset_non_temporal_threshold.
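The <stdbit.h> and <math.h> items above can be made concrete with a short
example. The following sketch is illustrative only (it is not part of the
official NEWS) and assumes glibc 2.40 headers with GCC 14.1 or later, linked
against libm, e.g. "gcc -std=gnu23 demo.c -lm":

  /* Illustrative sketch only, not from the announcement.  Assumes glibc 2.40
     headers and GCC 14.1 or later, built with e.g.: gcc -std=gnu23 demo.c -lm  */
  #define _GNU_SOURCE            /* also enables the ISO C23 additions */
  #include <stdbit.h>            /* type-generic bit-manipulation macros */
  #include <math.h>              /* new C23 families: exp10m1, log10p1, ... */
  #include <stdio.h>

  int main (void)
  {
    /* Type-generic macro; with GCC 14.1+ the built-ins also accept
       unsigned __int128 (and unsigned _BitInt(N)) operands.  */
    unsigned int small = 300;
    printf ("stdc_bit_ceil (300u) = %u\n", stdc_bit_ceil (small));   /* 512 */

    unsigned __int128 big = ((unsigned __int128) 1 << 100) + 1;
    unsigned __int128 up = stdc_bit_ceil (big);                      /* 2^101 */
    printf ("2^100 + 1 rounds up to 2^%u\n",
            (unsigned int) stdc_trailing_zeros (up));

    /* exp10m1 (x) computes 10^x - 1; log10p1 (x) computes log10 (1 + x).  */
    printf ("exp10m1 (1.0) = %g\n", exp10m1 (1.0));                  /* 9 */
    printf ("log10p1 (9.0) = %g\n", log10p1 (9.0));                  /* 1 */
    return 0;
  }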
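For the epoll item above, a hedged sketch of how the new definitions might be
used follows. It is not part of the announcement; the structure field names
come from the Linux 6.9 UAPI and are assumptions here, and the ioctl itself
only succeeds on a 6.9 or later kernel:

  /* Illustrative sketch only.  Assumes glibc 2.40's <sys/epoll.h> exposing
     the Linux 6.9 epoll ioctls and struct epoll_params; the field names
     below are taken from the kernel UAPI and are assumptions here.  */
  #include <stdio.h>
  #include <sys/epoll.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  int main (void)
  {
    int epfd = epoll_create1 (0);
    if (epfd < 0)
      {
        perror ("epoll_create1");
        return 1;
      }

    struct epoll_params params;
    if (ioctl (epfd, EPIOCGPARAMS, &params) == 0)  /* query busy-poll parameters */
      printf ("busy_poll_usecs = %u, busy_poll_budget = %u\n",
              (unsigned int) params.busy_poll_usecs,
              (unsigned int) params.busy_poll_budget);
    else
      perror ("EPIOCGPARAMS (requires Linux 6.9 or later)");

    close (epfd);
    return 0;
  }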
Deprecated and removed features, and other changes affecting compatibility:
- Architectures which use a 32-bit seconds-since-epoch field in struct
lastlog, struct utmp, struct utmpx (such as i386, powerpc64le, rv32,
rv64, x86-64) switched from a signed to an unsigned type for that
field. This allows these fields to store timestamps beyond the year
2038, until the year 2106. Please note that applications are still
expected to migrate off the interfaces declared in <utmp.h> and <utmpx.h>
(except for login_tty) due to locking and session management problems.
- __rseq_size now denotes the size of the active rseq area (20 bytes
initially), not the size of struct rseq (32 bytes initially).
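As an illustration of the __rseq_size change above (not part of the official
NEWS), the following sketch prints the rseq values glibc exports, assuming
the public symbols declared in <sys/rseq.h> since glibc 2.35:

  /* Illustrative sketch only.  __rseq_size and __rseq_offset are the public
     symbols glibc exports (declared in <sys/rseq.h> since glibc 2.35); with
     glibc 2.40, __rseq_size is the size of the active rseq area (20 bytes
     initially), not sizeof (struct rseq).  */
  #include <stdio.h>
  #include <sys/rseq.h>

  int main (void)
  {
    if (__rseq_size == 0)
      puts ("rseq registration is not active");
    else
      printf ("active rseq area: %u bytes at offset %td from the thread pointer\n",
              __rseq_size, __rseq_offset);
    return 0;
  }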
Security related changes:
The following CVEs were fixed in this release, details of which can be
found in the advisories directory of the release tarball:
GLIBC-SA-2024-0004:
ISO-2022-CN-EXT: fix out-of-bound writes when writing escape
sequence (CVE-2024-2961)
GLIBC-SA-2024-0005:
nscd: Stack-based buffer overflow in netgroup cache (CVE-2024-33599)
GLIBC-SA-2024-0006:
nscd: Null pointer crash after notfound response (CVE-2024-33600)
GLIBC-SA-2024-0007:
nscd: netgroup cache may terminate daemon on memory allocation
failure (CVE-2024-33601)
GLIBC-SA-2024-0008:
nscd: netgroup cache assumes NSS callback uses in-buffer strings
(CVE-2024-33602)
The following bugs were resolved with this release:
[19622] network: Support aliasing with struct sockaddr
[21271] localedata: cv_RU: update translations
[23774] localedata: lv_LV collates Y/y incorrectly
[23865] string: wcsstr is quadratic-time
[25119] localedata: Change Czech weekday names to lowercase
[27777] stdio: fclose does a linear search, takes ages when many FILE*
are opened
[29770] libc: prctl does not match manual page ABI on powerpc64le-
linux-gnu
[29845] localedata: Update hr_HR locale currency to €
[30701] time: getutxent misbehaves on 32-bit x86 when TIME_BITS=64
[31316] build: Fails test misc/tst-dirname "Didn't expect signal from
child: got `Illegal instruction'" on non SSE CPUs
[31317] dynamic-link: [RISCV] static PIE crashes during self
relocation
[31325] libc: mips: clone3 is wrong for o32
[31335] math: Compile glibc with -march=x86-64-v3 should disable FMA4
multi-arch version
[31339] libc: arm32 loader crash after cleanup in 2.36
[31340] manual: A bad sentence in section 22.3.5 (resource.texi)
[31357] dynamic-link: $(objpfx)tst-rtld-list-diagnostics.out rule
doesn't work with test wrapper
[31370] localedata: wcwidth() does not treat
DEFAULT_IGNORABLE_CODE_POINTs as zero-width
[31371] dynamic-link: x86-64: APX and Tile registers aren't preserved
in ld.so trampoline
[31372] dynamic-link: _dl_tlsdesc_dynamic doesn't preserve all caller-
saved registers
[31383] libc: _FORTIFY_SOURCE=3 and __fortified_attr_access vs size of
0 and zero size types
[31385] build: sort-makefile-lines.py doesn't check variable with _
nor with "^# variable"
[31402] libc: clone (NULL, NULL, ...) clobbers %r7 register on
s390{,x}
[31405] libc: Improve dl_iterate_phdr using _dl_find_object
[31411] localedata: Add Latgalian locale
[31412] build: GCC 6 failed to build i386 glibc on Fedora 39
[31429] build: Glibc failed to build with -march=x86-64-v3
[31468] libc: sigisemptyset returns true when the set contains signals
larger than 34
[31476] network: Automatic activation of single-request options break
resolv.conf reloading
[31479] libc: Missing #include in sched_getcpu.c may
result in a loss of rseq acceleration
[31501] dynamic-link: _dl_tlsdesc_dynamic_xsavec may clobber %rbx
[31518] manual: documentation: FLT_MAX_10_EXP questionable text, evtl.
wrong,
[31530] localedata: Locale file for Moksha - mdf_RU
[31553] malloc: elf/tst-decorate-maps fails on ppc64el
[31596] libc: On the llvm-arm32 platform, dlopen("not_exist.so", -1)
triggers segmentation fault
[31600] math: math: x86 ceill traps when FE_INEXACT is enabled
[31601] math: math: x86 floor traps when FE_INEXACT is enabled
[31603] math: math: x86 trunc traps when FE_INEXACT is enabled
[31612] libc: arc4random fails to fallback to /dev/urandom if
getrandom is not present
[31629] build: powerpc64: Configuring with "--with-cpu=power10" and
'CFLAGS=-O2 -mcpu=power9' fails to build glibc
[31640] dynamic-link: POWER10 ld.so crashes in
elf_machine_load_address with GCC 14
[31661] libc: NPROCESSORS_CONF and NPROCESSORS_ONLN not available in
getconf
[31676] dynamic-link: Configuring with CC="gcc -march=x86-64-v3"
--with-rtld-early-cflags=-march=x86-64 results in linker failure
[31677] nscd: nscd: netgroup cache: invalid memcpy under low
memory/storage conditions
[31678] nscd: nscd: Null pointer dereferences after failed netgroup
cache insertion
[31679] nscd: nscd: netgroup cache may terminate daemon on memory
allocation failure
[31680] nscd: nscd: netgroup cache assumes NSS callback uses in-buffer
strings
[31682] math: [PowerPC] Floating point exception error for math test
test-ceil-except-2 test-floor-except-2 test-trunc-except-2
[31686] dynamic-link: Stack-based buffer overflow in
parse_tunables_string
[31695] libc: pidfd_spawn/pidfd_spawnp leak an fd if clone3 succeeds
but execve fails
[31719] dynamic-link: --enable-hardcoded-path-in-tests doesn't work
with -Wl,--enable-new-dtags
[31730] libc: backtrace_symbols_fd prints different strings than
backtrace_symbols returns
[31753] build: FAIL: link-static-libc with GCC 6/7/8
[31755] libc: procutils_read_file doesn't start with a leading
underscore
[31756] libc: write_profiling is only in libc.a
[31757] build: Should XXXf128_do_not_use functions be excluded?
[31759] math: Extra nearbyint symbols in libm.a
[31760] math: Missing math functions
[31764] build: _res_opcodes should be a compat symbol only
[31765] dynamic-link: _dl_mcount_wrapper is exported without prototype
[31766] stdio: _IO_stderr _IO_stdin _IO_stdout should be compat
symbols
[31768] string: Extra stpncpy symbol in libc.a
[31770] libc: clone3 is in libc.a
[31774] libc: Missing __isnanf128 in libc.a
[31775] math: Missing exp10 exp10f32x exp10f64 fmod fmodf fmodf32
fmodf32x fmodf64 in libm.a
[31777] string: Extra memchr strlen symbols in libc.a
[31781] math: Missing math functions in libm.a
[31782] build: Test build failure with recent GCC trunk (x86/tst-cpu-
features-supports.c:69:3: error: parameter to builtin not valid:
avx5124fmaps)
[31785] string: loongarch: Extra strnlen symbols in libc.a
[31786] string: powerpc: Extra strchrnul and strncasecmp_l symbols in
libc.a
[31787] math: powerpc: Extra llrintf, llrintf, llrintf32, and
llrintf32 symbols in libc.a
[31788] libc: microblaze: Extra cacheflush symbol in libc.a
[31789] libc: powerpc: Extra versionsort symbol in libc.a
[31790] libc: s390: Extra getutent32, getutent32_r, getutid32,
getutid32_r, getutline32, getutline32_r, getutmp32, getutmpx32,
getutxent32, getutxid32, getutxline32, pututline32, pututxline32,
updwtmp32, updwtmpx32 in libc.a
[31797] build: g++ -static requirement should be able to opt-out
[31798] libc: pidfd_getpid.c is miscompiled by GCC 6.4
[31802] time: difftime is pure not const
[31808] time: The supported time_t range is not documented.
[31840] stdio: Memory leak in _IO_new_fdopen (fdopen) on seek failure
[31867] build: "CPU ISA level is lower than required" on SSE2-free
CPUs
[31876] time: "Date and time" documentation fixes for POSIX.1-2024 etc
[31883] build: ISA level support configure check relies on bashism /
is otherwise broken for arithmetic
[31892] build: Always install mtrace.
[31917] libc: clang mq_open fortify wrapper does not handle 4 argument
correctly
[31927] libc: clang open fortify wrapper does not handle argument
correctly
[31931] time: tzset may fault on very short TZ string
[31934] string: wcsncmp crash on s390x on vlbb instruction
[31963] stdio: Crash in _IO_link_in within __gcov_exit
[31965] dynamic-link: rseq extension mechanism does not work as
intended
[31980] build: elf/tst-tunables-enable_secure-env fails on ppc
Release Notes
https://sourceware.org/glibc/wiki/Release/2.40
Contributors
This release was made possible by the contributions of many people.
The maintainers are grateful to everyone who has contributed
changes or bug reports. These include:
Adam Sampson
Adhemerval Zanella
Alejandro Colomar
Alexandre Ferrieux
Amrita H S
Andreas K. Hüttel
Andreas Schwab
Andrew Pinski
Askar Safin
Aurelien Jarno
Avinal Kumar
Carlos Llamas
Carlos O'Donell
Charles Fol
Christoph Müllner
DJ Delorie
Daniel Cederman
Darius Rad
David Paleino
Dragan Stanojević (Nevidljivi)
Evan Green
Fangrui Song
Flavio Cruz
Florian Weimer
Gabi Falk
H.J. Lu
Jakub Jelinek
Jan Kurik
Joe Damato
Joe Ramsay
Joe Simmons-Talbott
Joe Talbott
John David Anglin
Joseph Myers
Jules Bertholet
Julian Zhu
Junxian Zhu
Konstantin Kharlamov
Luca Boccassi
Maciej W. Rozycki
Manjunath Matti
Mark Wielaard
MayShao-oc
Meng Qinggang
Michael Jeanson
Michel Lind
Mike FABIAN
Mohamed Akram
Noah Goldstein
Palmer Dabbelt
Paul Eggert
Philip Kaludercic
Samuel Dobron
Samuel Thibault
Sayan Paul
Sergey Bugaev
Sergey Kolosov
Siddhesh Poyarekar
Simon Chopin
Stafford Horne
Stefan Liebler
Sunil K Pandey
Szabolcs Nagy
Wilco Dijkstra
Xi Ruoyao
Xin Wang
Yinyu Cai
YunQiang Su
We would like to call out the following and thank them for their
tireless patch review:
Adhemerval Zanella
Alejandro Colomar
Andreas K. Hüttel
Arjun Shankar
Aurelien Jarno
Bruno Haible
Carlos O'Donell
DJ Delorie
Dmitry V. Levin
Evan Green
Fangrui Song
Florian Weimer
H.J. Lu
Jonathan Wakely
Joseph Myers
Mathieu Desnoyers
Maxim Kuvyrkov
Michael Jeanson
Noah Goldstein
Palmer Dabbelt
Paul Eggert
Paul E. Murphy
Peter Bergner
Philippe Mathieu-Daudé
Sam James
Siddhesh Poyarekar
Simon Chopin
Stefan Liebler
Sunil K Pandey
Szabolcs Nagy
Xi Ruoyao
Zack Weinberg
--
Andreas K. Hüttel
dilfridge@gentoo.org
Gentoo Linux developer
(council, toolchain, base-system, perl, releng)
https://wiki.gentoo.org/wiki/User:Dilfridge
https://www.akhuettel.de/
GNUnet News: DHT Technical Specification Milestone 5
DHT Technical Specification Milestone 5
We are happy to announce the completion of milestone 5 for the DHT specification. The general objective is to provide a detailed and comprehensive guide for implementors of the GNUnet DHT "R5N". As part of this milestone, the specification was updated and interoperability testing was conducted. We submitted the draft to the Independent Stream Editor (ISE), who will decide whether it is adopted and shepherded through the RFC process.
The current protocol is implemented as part of GNUnet and gnunet-go, as announced on the mailing list when the previous implementation milestones were finished.
We again invite any interested party to read the document and provide critical review and feedback. This greatly helps us improve the protocol and helps future implementations. Contact us at the gnunet-developers mailing list.
This work is generously funded by NLnet as part of their NGI Assure fund.
parallel @ Savannah: GNU Parallel 20240722 ('Assange') released [stable]
GNU Parallel 20240722 ('Assange') has been released. It is available for download at: lbry://@GnuParallel:4
Quote of the month:
parallel is frickin great for launching jobs on multiple
machines. Ansible and Jenkins and others may be good too but I was
able to jump right in with parallel.
-- dwhite21787@reddit
New in this release:
- No new features. This is a candidate for a stable release.
- Bug fixes and man page updates.
News about GNU Parallel:
- Scientific Workflows at Scale using GNU Parallel https://web.cvent.com/event/f318e73c-2230-432a-a044-b75625020543/websitePage:afd80266-008e-414b-9f94-2fd9b4dd1924?session=fe79a785-ec60-414c-8d2b-c29208f53d4c&shareLink=true
- Use GNU Parallel to render blender movies distributed by a bunch of nodes https://github.com/tfmoraes/blender_gnu_parallel_render
- Lessons Learned from Scaling to Multi-Terabyte Datasets https://v2thegreat.com/2024/06/19/lessons-learned-from-scaling-to-multi-terabyte-datasets/
- Efisiensi Maksimal: Cara Paralelisasi Perintah di CLI Linux https://medium.com/@nfrozi/efisiensi-maksimal-cara-paralelisasi-perintah-di-cli-linux-f4fda3afe2a0
- Introduction to GNU parallel https://datascience.101workbook.org/06-hpc/06-parallel/01-intro-to-gnu-parallel/#gsc.tab=0
GNU Parallel - For people who live life in the parallel lane.
If you like GNU Parallel, record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.
About GNU Parallel
GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.
If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
For example you can run this to convert all jpeg files into png and gif files and have a progress bar:
parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif
Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:
find . -name '*.jpg' |
parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200
You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/
You can install GNU Parallel in just 10 seconds with:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
12345678 883c667e 01eed62f 975ad28b 6d50e22a
$ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
cc21b4c9 43fd03e9 3ae1ae49 e28573c0
$ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
$ bash install.sh
Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.
When using programs that use GNU Parallel to process data for publication please cite:
O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.
If you like GNU Parallel:
- Give a demo at your local user group/team/colleagues
- Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
- Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
- Request or write a review for your favourite blog or magazine
- Request or build a package for your favourite distribution (if it is not already there)
- Invite me for your next conference
If you use programs that use GNU Parallel for research:
- Please cite GNU Parallel in your publications (use --citation)
If GNU Parallel saves you money:
- (Have your company) donate to FSF https://my.fsf.org/donate/
About GNU SQL
GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.
The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
When using GNU SQL for a publication please cite:
O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.
About GNU Niceload
GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
GNUnet News: The European Union must keep funding free software
The European Union must keep funding free software
The GNUnet project was granted NGI funding via NLnet. Other FOSS-related projects also benefit from NGI funding. This funding is now at risk for future projects.
The following is an open letter initially published in French by the Petites Singularités association. To co-sign it, please publish it on your website in your preferred language, then add yourself to this table.
Open Letter to the European Commission.
Since 2020, the Next Generation Internet (NGI) programmes, part of the European Commission's Horizon programme, fund free software in Europe using a cascade funding mechanism (see for example NLnet's calls). This year, according to the Horizon Europe working draft detailing funding programmes for 2025, we notice that Next Generation Internet is no longer mentioned as part of Cluster 4.
NGI programmes have shown their strength and importance to supporting the European software infrastructure, as a generic funding instrument to fund digital commons and ensure their long-term sustainability. We find this transformation incomprehensible, moreover when NGI has proven efficient and economical to support free software as a whole, from the smallest to the most established initiatives. This ecosystem diversity backs the strength of European technological innovation, and maintaining the NGI initiative to provide structural support to software projects at the heart of worldwide innovation is key to enforce the sovereignty of a European infrastructure. Contrary to common perception, technical innovations often originate from European rather than North American programming communities, and are mostly initiated by small-scaled organizations.
Previous Cluster 4 allocated 27 million euros to:
- “Human centric Internet aligned with values and principles commonly shared in Europe”;
- “A flourishing internet, based on common building blocks created within NGI, that enables better control of our digital life”;
- “A structured ecosystem of talented contributors driving the creation of new internet commons and the evolution of existing internet commons”.
In the name of these challenges, more than 500 projects received NGI funding in the first 5 years, backed by 18 organisations managing these European funding consortia.
NGI contributes to a vast ecosystem, as most of its budget is allocated to fund third parties by the means of open calls, to structure commons that cover the whole Internet scope - from hardware to application, operating systems, digital identities or data traffic supervision. This third-party funding is not renewed in the current program, leaving many projects short on resources for research and innovation in Europe.
Moreover, NGI allows exchanges and collaborations across all the Euro zone countries as well as “widening countries”[1], currently both a success and an ongoing progress, likewise the Erasmus programme before us. NGI also contributes to opening and supporting longer relationships than strict project funding does. It encourages implementing projects funded as pilots, backing collaboration, identification and reuse of common elements across projects, interoperability in identification systems and beyond, and setting up development models that mix diverse scales and types of European funding schemes.
While the USA, China or Russia deploy huge public and private resources to develop software and infrastructure that massively capture private consumer data, the EU can’t afford this renunciation. Free and open source software, as supported by NGI since 2020, is by design the opposite of potential vectors for foreign interference. It lets us keep our data local and favors a community-wide economy and know-how, while allowing an international collaboration. This is all the more essential in the current geopolitical context: the challenge of technological sovereignty is central, and free software allows addressing it while acting for peace and sovereignty in the digital world as a whole.
[1] As defined by Horizon Europe, widening Member States are Bulgaria, Croatia, Cyprus, Czechia, Estonia, Greece, Hungary, Latvia, Lithuania, Malta, Poland, Portugal, Romania, Slovakia, and Slovenia. Widening associated countries (under condition of an association agreement) include Albania, Armenia, Bosnia, the Faroe Islands, Georgia, Kosovo, Moldova, Montenegro, Morocco, North Macedonia, Serbia, Tunisia, Türkiye, and Ukraine. Widening overseas regions are Guadeloupe, French Guiana, Martinique, Réunion, Mayotte, Saint-Martin, the Azores, Madeira, and the Canary Islands.
GNU Taler news: Video interview with Mikolai Gütschow on payments for the Internet of Things
On the occasion of the Point Zero Forum's Innovation Tour, Evgeny Grin has interviewed Mikolai Gütschow, who designed and implemented solutions for payments in the Internet of Things (IoT).
FSF Events: Free Software Directory meeting on IRC: Friday, July 19, starting at 12:00 EDT (16:00 UTC)
Join the FSF and friends on Friday, July 19 from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.
health @ Savannah: MyGNUHealth 2.2.1 released
Dear community
I am happy to announce patchset 2.2.1 for MyGNUHealth, the GNU Health Personal Health Record.
This patchset fixes the following issues:
- MyGH crashes when clicking 'Network': https://codeberg.org/gnuhealth/mygnuhealth/issues/34
- Include icons of type gif in MANIFEST.in: https://codeberg.org/gnuhealth/mygnuhealth/issues/36
You can download MyGNUHealth source code from the official GNU Savannah (https://ftp.gnu.org/gnu/health/mygnuhealth/). You can also install MyGH from the Python Package Index (PyPI) or from your operating system distribution.
Happy hacking
Luis
GNU Taler news: Video interview with Christian Blättler on his work on tokens for unlinkable discounts and subscriptions
On the occasion of the Point Zero Forum's Innovation Tour, Berna Alp has interviewed Christian Blättler, who implemented a system for using GNU Taler for unlinkable discounts and subscriptions.
automake @ Savannah: automake 1.17 released [stable]
Automake 1.17 released. Announcement:
https://lists.gnu.org/archive/html/autotools-announce/2024-07/msg00000.html
gnuastro @ Savannah: Gnuastro 0.23 released
The 23rd release of GNU Astronomy Utilities (Gnuastro) is now available. See the full announcement for all the new features in this release and the many bugs that have been found and fixed: https://lists.gnu.org/archive/html/info-gnuastro/2024-07/msg00001.html
GNU Taler news: KYCID, an operational OAuth2 integration of eKYC
In this bachelor thesis, Yann Doy presents his implementation of a concept of eKYC (electronic Know Your Customer procedure).
Simon Josefsson: Towards Idempotent Rebuilds?
After rebuilding all added/modified packages in Trisquel, I have been circling around the elephant in the room: 99% of the binary packages in Trisquel come from Ubuntu, which to a large extent are built from Debian source packages. Is it possible to rebuild the official binary packages identically? Does anyone make an effort to do so? Does anyone care about going through the differences between the official package and a rebuilt version? Reproducible-builds.org's effort to track reproducibility bugs in Debian (and other systems) is amazing. However, as far as I know, they do not confirm or deny that their rebuilds match the official packages. In fact, typically their rebuilds do not match the official packages, even when they say the package is reproducible, which had me surprised at first. To understand why that happens, compare the buildinfo file for the official coreutils 9.1-1 from Debian bookworm with the buildinfo file for reproducible-builds.org's build and you will see that the SHA256 checksum does not match, but still they declare it as a reproducible package. As far as I can tell, the purpose of their rebuilds is not to say anything about the official binary build; instead, the purpose is to offer a QA service to maintainers by performing two builds of a package and declaring success if both builds match.
I have felt that something is lacking, and months have passed and I haven't found any project that addresses the problem I am interested in. During my earlier work I created a project called debdistreproduce which performs rebuilds of the difference between two distributions in a GitLab pipeline, and displays diffoscope output for further analysis. A couple of days ago I had the idea of rewriting it to perform rebuilds of a single distribution. A new project debdistrebuild was born, and today I'm happy to bless it as version 1.0 and to announce the project! Debdistrebuild has rebuilt the top-50 popcon packages from Debian bullseye, bookworm and trixie, on amd64 and arm64, as well as Ubuntu jammy and noble on amd64; see the summary status page for links. This is intended as a proof of concept, to allow people to experiment with the concept of doing GitLab-based package rebuilds and analysis. Compare how Guix has the guix challenge command (https://guix.gnu.org/manual/en/html_node/On-Trusting-Binaries.html).
Or I should say debdistrebuild has attempted to rebuild those distributions. The number of identically built packages is fairly low, so I didn't want to waste resources building the rest of the archive until I understand whether the differences are due to consequences of my build environment (plain apt-get build-dep followed by dpkg-buildpackage in a fresh container), or due to some real difference. Summarizing the results, debdistrebuild is able to rebuild 34% of Debian bullseye on amd64, 36% of bookworm on amd64, and 32% of bookworm on arm64. The results for trixie and Ubuntu are disappointing, below 10%.
So what causes my rebuilds to be different from the official builds? Some differences are trivial, like the classical problem of varying build paths, resulting in a different NT_GNU_BUILD_ID and causing a mismatch. Some are a bit strange, like a subtle difference in one of perl's header files. Some are due to embedded version numbers from a build dependency. Several of the build logs and diffoscope outputs don't make sense, likely due to bugs in my build scripts, especially for Ubuntu, which appears to strip translations and do other build variations that I don't do. In general, the classes of reproducibility problems are the expected ones. Some are assembler differences for GnuPG's gpgv-static, likely triggered by the upload of a new version of gcc after the original package was built. There are at least two ways to resolve that problem: either use the same versions of the build dependencies that were used to produce the original build, or demand that all packages that are affected by a change in another package are rebuilt centrally until there are no more differences.
The current design of debdistrebuild uses the latest version of a build dependency that is available in the distribution. We call this an "idempotent rebuild". This is usually not how the binary packages were built originally; they are often built against earlier versions of their build dependencies. That is the situation for most binary distributions.
Instead of using the latest build dependency version, higher reproducibility may be achieved by rebuilding using the same version of the build dependencies that were used during the original build. This requires parsing buildinfo files to find the right version of the build dependency to install. We believe doing so will lead to a higher number of reproducibly built packages. However, it begs the question: can we rebuild that earlier version of the build dependency? This circles back to really old versions and bootstrappable builds eventually.
While rebuilding old versions would be interesting on its own, we believe that is less helpful for trusting the latest version and improving a binary distribution: it is challenging to publish a new version of some old package that would fix a reproducibility bug in another package when used as a build dependency, and then rebuild the later packages with the modified earlier version. Those earlier packages were already published, and are part of history. It may be that ultimately it will no longer be possible to rebuild some package, because proper source code is missing (for packages using build dependencies that were never part of a release); hardware to build a package could be missing; or that the source code is no longer publicly distributable.
I argue that getting to 100% idempotent rebuilds is an interesting goal on its own, and to reach it we need to start measuring idempotent rebuild status.
One could conceivably imagine a way to rebuild modified versions of earlier packages, and then rebuild later packages using the modified earlier packages as build dependencies, for the purpose of achieving a higher level of reproducible rebuilds of the last version, and to reach for bootstrappability. However, it may still be that this is insufficient to achieve idempotent rebuilds of the last versions. Idempotent rebuilds are different from reproducible builds (where we try to reproduce the build using the same inputs), and also from bootstrappable builds (in which all binaries are ultimately built from source code). Consider a cycle where package X influences the content of package Y, which in turn influences the content of package X. These cycles may involve several packages, and it is conceivable that a cycle could be circular and infinite. It may be difficult to identify these chains, and even more difficult to break them up, but this effort helps identify where to start looking for them. Rebuilding packages using the same build dependency versions as were used during the original build, or rebuilding packages using a bootstrappable build process, both seem orthogonal to the idempotent rebuild problem.
Our notion of rebuildability appears thus to be complementary to reproducible-builds.org’s definition and bootstrappable.org’s definition. Each to their own devices, and Happy Hacking!
FSF Blogs: Share free software with your friends and colleagues
Have you ever wondered how to get a friend or colleague or even a complete stranger hooked up with free software? Here's the ultimate guide.
direvent @ Savannah: GNU Direvent Version 5.4
GNU direvent version 5.4 is available for download.
New in this version:
Simultaneous execution limits
It is possible to limit the number of command instances that are allowed to run simultaneously for a particular watcher. This is done using the max-instances statement in the watcher section.
Restore the "nowait" default
In the previous version, watchers waited for the handler to terminate unless given the nowait option explicitly. This is now fixed, and nowait is the default, as described in the documentation.
Fix bug in generic to system event translation
Fix sentinel code
In some cases setting the sentinel effectively removed the original watcher. That happened if the full file name of the original watcher
and its directory part produced the same hash code.
Greg Casamento: What Apple has forgotten...
When NeXT still existed and the black hardware was a thing, Steve Jobs made the announcement that OPENSTEP would be created and that the object model, not the operating system and not the hardware, was the important thing.
This is a concept that Apple has forgotten. With its push towards Apple Silicon and a walled garden, Apple has fallen into the same pitfall that NeXT did. NeXT lacked the infrastructure to handle OPENSTEP running on multiple kinds of hardware, but the object model on different OSes was successful... this is evident in OPENSTEP 1.1 for Solaris and OPENSTEP for NT.
GNUstep attempts to reach the same goal, but provides the APIs that are available with Cocoa. The object model IS the important thing and this is why GNUstep is so important. It breaks the walled garden and makes it possible for users to run their apps and tools on other operating systems. GNUstep HASN'T forgotten and we believe this is a core concept that Apple has left behind.