Package: guix
Reported by: Mathieu Othacehe <othacehe <at> gnu.org>
Date: Fri, 12 Nov 2021 11:50:02 UTC
Severity: important
Done: Maxim Cournoyer <maxim.cournoyer <at> gmail.com>
From: Ricardo Wurmus <rekado <at> elephly.net>
To: Mathieu Othacehe <othacehe <at> gnu.org>
Cc: 51787 <at> debbugs.gnu.org
Subject: bug#51787: GC takes more than 9 hours on berlin
Date: Fri, 17 Dec 2021 14:06:51 +0100
Hi Mathieu,

> New day, new benchmark. Berlin has two hard drives, which are roughly
> used this way:
>
> /dev/sda -> / (916G)
> /dev/sdb -> /gnu (37T)

sda consists of two local hard disks that are combined to a RAID.
Here are the disk details:

--8<---------------cut here---------------start------------->8---
Disk.Bay.0:Enclosure.Internal.0-1:RAID.Slot.3-1
   Status = Ok
   DeviceDescription = Disk 0 in Backplane 1 of RAID Controller in Slot 3
   RollupStatus = Ok
   Name = Physical Disk 0:1:0
   State = Online
   OperationState = Not Applicable
   PowerStatus = Spun-Up
   Size = 931.000 GB
   FailurePredicted = NO
   RemainingRatedWriteEndurance = Not Applicable
   SecurityStatus = Not Capable
   BusProtocol = SATA
   MediaType = HDD
   UsedRaidDiskSpace = 931.000 GB
   AvailableRaidDiskSpace = 0.001 GB
   Hotspare = NO
   Manufacturer = SEAGATE
   ProductId = ST1000NX0443
   Revision = NB33
   SerialNumber = W470QK7K
   PartNumber = CN08DN1YSGW0076S00L8A00
   NegotiatedSpeed = 6.0 Gb/s
   ManufacturedDay = 0
   ManufacturedWeek = 0
   ManufacturedYear = 0
   ForeignKeyIdentifier = null
   SasAddress = 0x4433221106000000
   FormFactor = 2.5 Inch
   RaidNominalMediumRotationRate = 7200
   T10PICapability = Not Capable
   BlockSizeInBytes = 512
   MaxCapableSpeed = 6 Gb/s
   RaidType = None
   SystemEraseCapability = CryptographicErasePD
   SelfEncryptingDriveCapability = Not Capable
   EncryptionCapability = Not Capable
   CryptographicEraseCapability = Capable
Disk.Bay.1:Enclosure.Internal.0-1:RAID.Slot.3-1
   Status = Ok
   DeviceDescription = Disk 1 in Backplane 1 of RAID Controller in Slot 3
   RollupStatus = Ok
   Name = Physical Disk 0:1:1
   State = Online
   OperationState = Not Applicable
   PowerStatus = Spun-Up
   Size = 931.000 GB
   FailurePredicted = NO
   RemainingRatedWriteEndurance = Not Applicable
   SecurityStatus = Not Capable
   BusProtocol = SATA
   MediaType = HDD
   UsedRaidDiskSpace = 931.000 GB
   AvailableRaidDiskSpace = 0.001 GB
   Hotspare = NO
   Manufacturer = SEAGATE
   ProductId = ST1000NX0443
   Revision = NB33
   SerialNumber = W470SYTP
   PartNumber = CN08DN1YSGW0077F00FQA00
   NegotiatedSpeed = 6.0 Gb/s
   ManufacturedDay = 0
   ManufacturedWeek = 0
   ManufacturedYear = 0
   ForeignKeyIdentifier = null
   SasAddress = 0x4433221107000000
   FormFactor = 2.5 Inch
   RaidNominalMediumRotationRate = 7200
   T10PICapability = Not Capable
   BlockSizeInBytes = 512
   MaxCapableSpeed = 6 Gb/s
   RaidType = None
   SystemEraseCapability = CryptographicErasePD
   SelfEncryptingDriveCapability = Not Capable
   EncryptionCapability = Not Capable
   CryptographicEraseCapability = Capable
--8<---------------cut here---------------end--------------->8---

sdb is an external storage array (Dell MD3400) filled with 10 SAS hard
disks in a RAID 10 configuration (36.36 TB effective capacity).  There
are two hot spares that are currently unassigned; they are used
automatically when the RAID is degraded.  The two RAID controllers
have read and write caches enabled.

The enclosure has two redundant host interfaces.  Berlin has two host
bus adapter (HBA) cards, of which *one* is connected to the array.
Why only one?  Because we don’t have multipathd configured in a way
that would let the system *boot* off the external array with
multipath.  Without multipath the storage would appear as one disk
device per card, but it would not be safe to mount them both at the
same time.

If we wanted to make use of the redundant connection here, we would
have to figure out how to add multipathd to the initrd and set up
multipath *before* handing off control to Linux.  This would
effectively double our bandwidth to the storage.  My guess, though, is
that we’re not even close to saturating the available bandwidth.
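For illustration, the user-space side of multipath on an already
running system looks roughly like this (a sketch only, untested on
berlin; the device names mentioned in the comments are hypothetical):

--8<---------------cut here---------------start------------->8---
# Load the device-mapper multipath kernel module.
modprobe dm-multipath

# Start the multipath daemon; on Guix this would need a proper
# shepherd service rather than a manual invocation.
multipathd

# Show the resulting multipath topology.  The two per-HBA block
# devices (hypothetically sdb and sdc) should now be coalesced into a
# single /dev/mapper/<wwid> map with two active paths.
multipath -ll
--8<---------------cut here---------------end--------------->8---

The hard part is doing the equivalent from the initrd, before the root
file system is mounted.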
> I ran the fio benchmark tool on both of them. See the reports
> attached, and the following summary:
>
> |       | sda       | sdb       |
> |-------+-----------+-----------|
> | read  | 1565KiB/s | 9695KiB/s |
> | write | 523KiB/s  | 3240KiB/s |
>
> I'm not sure how slow those figures are relative to the hard drive
> technologies. Ricardo, any idea about that?

It seems awfully slow.  Especially the performance of sda is abysmal:
this is a local disk.  sdb is the fancy external disk array that’s
hooked up to two HBA cards.  It should not perform *better* than sda.

I’ll run this benchmark on a few of the build nodes to get some more
comparisons.
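For anyone who wants to compare numbers on other machines, an
invocation along these lines should do (a sketch assuming a random
4 KiB read workload, which is a guess; the target file and sizes are
placeholders, and the attached reports have the real parameters):

--8<---------------cut here---------------start------------->8---
# Random 4 KiB reads with direct I/O for 60 seconds.  The target file
# and sizes are arbitrary, not the parameters from the attached
# reports.
fio --name=randread --filename=/gnu/fio-test --rw=randread \
    --bs=4k --size=1G --ioengine=libaio --direct=1 \
    --runtime=60 --time_based
--8<---------------cut here---------------end--------------->8---

-- 
Ricardo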