GNU bug report logs - #76503
[GCD] Migrating repositories, issues, and patches to Codeberg


Package: guix-patches;

Reported by: Ludovic Courtès <ludo <at> gnu.org>

Date: Sun, 23 Feb 2025 15:21:02 UTC

Severity: normal

Done: Maxim Cournoyer <maxim.cournoyer <at> gmail.com>



Message #26 received at 76503 <at> debbugs.gnu.org (full text, mbox):

From: Leo Famulari <leo <at> famulari.name>
To: Arun Isaac <arunisaac <at> systemreboot.net>
Cc: 76503 <at> debbugs.gnu.org, Ricardo Wurmus <rekado <at> elephly.net>,
 Ludovic Courtès <ludo <at> gnu.org>,
 Benjamin Slade <slade <at> lambda-y.net>, Christopher Baines <guix <at> cbaines.net>
Subject: Re: [bug#76503] [GCD] Migrating repositories, issues, and patches to
 Codeberg
Date: Tue, 25 Feb 2025 14:05:47 -0500
On Tue, Feb 25, 2025 at 02:03:02PM +0000, Arun Isaac wrote:
> And, while I strongly prefer the email workflow, I concede that moving
> to a pull request workflow will lower the barrier to entry simply
> because it is more familiar thanks to GitHub's dominant mindshare. So,
> unless there is significant support for mumi and the email workflow, I
> will stand aside and go with the flow of the community. That said, my
> arguments against Codeberg follow.

I think we have similar feelings here. I'll miss living inside of Mutt
but I think we are losing a lot by not using a web-based platform.

> Now, we have a little more than 1K contributors. That means, we are
> already up to 1 TiB in storage. That's enormous, especially
> considering that all data on Codeberg combined adds up to only 12
> TiB[3].

I disagree that 1 TiB, or even 10 TiB, is enormous. It's certainly
large, but it's "entry-level" for a web service with 1000 users in 2025.
And a 10 TiB hard drive only costs ~$200. I know that's a lot for
some people and places, but it's nothing for something like this. The
blog post you linked to even says "But storage is cheap!" And it really
is the cheapest thing in computing these days.

If Codeberg is really only hosting 12 TiB, then I suggest that either 1)
they can't handle Guix or 2) they are ready to scale up. And Guix should
think about helping them with scaling capital if necessary.

> I was present with Ludo and others when we visited the Codeberg stall at
> FOSDEM, and enquired about the possibility of hosting Guix on Codeberg.
> The person at the stall was hesitant about our large repo, and our many
> users. In fact, in order to save on disk space, they suggested that we
> encourage our contributors to delete their forks once done. :-D Needless
> to say, that's never going to happen!

Interesting, that's not the impression I got from other emails from
people who were there. We need to clear this up with Codeberg now if we
want to make this change. It sounds like we would be their first
medium-sized user (I don't consider Guix to be large). Like I said, we
should be ready to offer help with fundraising.

> As well-intentioned as Codeberg is, a single non-profit hoping to host
> all the git repos in the world in perpetuity and free of charge is a
> very tough proposition.

Well, we already are in that position: we depend on the FSF completely
for our Git hosting.

https://projects.propublica.org/nonprofits/organizations/42888848

> Critical parts of our distribution infrastructure should be directly
> under our own control. We are a large enough and specialized enough
> organization that this is necessary.

I think it would be great if we could do this, but I haven't seen any
evidence that we can. In my years with Guix, we have always struggled to
operate our infrastructure. Also, I'll point out that it is an opinion,
not a fact, that we "should" do this. Very few free software projects
host their own Git servers. But like I said, the prospect does appeal to
me. But I don't volunteer to do it :)

> [5]: Quick digression: Users must actually download about 1 GiB of data
> on their first guix pull. That's frustrating to new users, and
> effectively excludes users from parts of the world where a good Internet
> connection cannot be taken for granted.

Like I've said several times in this discussion so far, we should look
into the state of the art of shallow cloning. It might be efficient
enough on the server side these days to be the smart move. I agree that
downloading 1 GiB in this context is bad.
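As a minimal local sketch of what shallow cloning saves (the toy
repository and paths below are made up for illustration, not Guix's
actual repo), a depth-1 clone fetches only the most recent commit
instead of the full history:

```shell
set -e
tmp=$(mktemp -d)

# Build a small repository with two commits to clone from.
git init -q "$tmp/origin-repo"
cd "$tmp/origin-repo"
git config user.email test@example.com
git config user.name Test
echo one > file; git add file; git commit -qm 'first'
echo two > file; git commit -qam 'second'
cd "$tmp"

# --depth 1 asks the server for only the latest commit's history,
# so the initial download is a fraction of the full repository.
# (file:// is needed; plain local-path clones ignore --depth.)
git clone -q --depth 1 "file://$tmp/origin-repo" shallow-clone

git -C "$tmp/shallow-clone" rev-list --count HEAD   # → 1 (full clone would show 2)
```

For a repository like Guix's, where most of the 1 GiB is old history,
this is the kind of saving a shallow `guix pull` could offer, at the
cost of extra server-side work when the shallow client later fetches.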






