GNU bug report logs - #76503
[GCD] Migrating repositories, issues, and patches to Codeberg


Package: guix-patches;

Reported by: Ludovic Courtès <ludo <at> gnu.org>

Date: Sun, 23 Feb 2025 15:21:02 UTC

Severity: normal

Done: Maxim Cournoyer <maxim.cournoyer <at> gmail.com>



Message #29 received at 76503 <at> debbugs.gnu.org:

From: Arun Isaac <arunisaac <at> systemreboot.net>
To: Leo Famulari <leo <at> famulari.name>
Cc: 76503 <at> debbugs.gnu.org, Ricardo Wurmus <rekado <at> elephly.net>,
 Ludovic Courtès <ludo <at> gnu.org>,
 Benjamin Slade <slade <at> lambda-y.net>, Christopher Baines <guix <at> cbaines.net>
Subject: Re: [bug#76503] [GCD] Migrating repositories, issues, and patches
 to Codeberg
Date: Wed, 26 Feb 2025 00:34:00 +0000
Hi Leo,

> I disagree that 1 TiB, or even 10 TiB, is enormous. It's certainly
> large, but it's "entry-level" for a web service with 1000 users in 2025.
> And a 10 TiB hard drive only costs ~$200. I know that's a lot for
> some people and places, but it's nothing for something like this. The
> blog post you linked to even says "But storage is cheap!" And it really
> is the cheapest thing in computing these days.

Aha, I'm not sure you read that article[1] carefully! :-) The very next
sentence immediately after "But storage is cheap!" is "It is not",
followed by "All data on Codeberg probably fits into a 12 TB drive".
That paragraph actually goes on to explain why storage is *not* cheap
and why it's more complicated than just buying a few more disks.

[1]:
https://blog.codeberg.org/more-power-for-you-what-a-storage-quota-will-bring.html

>> As well-intentioned as Codeberg is, a single non-profit hoping to host
>> all the git repos in the world in perpetuity and free of charge is a
>> very tough proposition.
>
> Well, we already are in that position: we depend on the FSF completely
> for our Git hosting.

True. I'm not saying our current situation is great. But at least the
FSF is not planning to try to host all the world's git repos! That's
the point I'm making here.

>> [5]: Quick digression: Users must actually download about 1 GiB of data
>> on their first guix pull. That's frustrating to new users, and
>> effectively excludes users from parts of the world where a good Internet
>> connection cannot be taken for granted.
>
> Like I've said several times in this discussion so far, we should look
> into the state of the art of shallow cloning. It might be efficient
> enough on the server-side these days to be the smart move. I agree that
> downloading 1 GB in this context is bad.

Good idea. I don't know whether our `guix authenticate' security model
would still work with a shallow clone, but it's worth exploring.
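For readers unfamiliar with the idea: a shallow clone limits history depth at
clone time, which is what would cut that ~1 GiB first fetch. A minimal sketch
of the effect, demonstrated on a throwaway local repository rather than the
real Guix repo (the repository and commit messages here are made up for
illustration):

```shell
set -e
tmp=$(mktemp -d)

# Build a tiny "origin" repository with three commits.
git init -q "$tmp/origin"
for i in 1 2 3; do
  git -C "$tmp/origin" -c user.name=test -c user.email=test@example.com \
      commit -q --allow-empty -m "commit $i"
done

# A shallow clone with --depth 1 fetches only the tip commit,
# not the full history.
git clone -q --depth 1 "file://$tmp/origin" "$tmp/shallow"
git -C "$tmp/shallow" rev-list --count HEAD   # prints 1, not 3
```

History can later be deepened incrementally with `git fetch --deepen <n>` if a
tool needs more of it; whether `guix authenticate` can verify a commit chain
without full history is exactly the open question above.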

Regards,
Arun






