GNU bug report logs - #25680
[PATCH] copy: Avoid race when creating hard link over recently-created file
Reported by: Mike Crowe <mac <at> mcrowe.com>
Date: Fri, 10 Feb 2017 19:50:02 UTC
Severity: normal
Tags: patch
Done: Paul Eggert <eggert <at> cs.ucla.edu>
Bug is archived. No further changes may be made.
Report forwarded to bug-coreutils <at> gnu.org: bug#25680; Package coreutils. (Fri, 10 Feb 2017 19:50:02 GMT)
Acknowledgement sent to Mike Crowe <mac <at> mcrowe.com>: New bug report received and forwarded. Copy sent to bug-coreutils <at> gnu.org. (Fri, 10 Feb 2017 19:50:02 GMT)
Message #5 received at submit <at> debbugs.gnu.org (full text, mbox):
Ensure that create_hard_link is permitted to overwrite the destination
file if --force is passed to cp. It is possible for the file to appear
between checking and creating the hard link. This may happen multiple
times, so also loop inside create_hard_link each time it fails.
The race is made _much_ easier to reproduce by inserting a sleep(1)
immediately before the modified create_hard_link call in copy_internal.
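As a purely illustrative sketch (not the coreutils source; the helper name
link_after_check is invented), the problematic pattern looks roughly like this:

#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Illustrative only: check/remove the destination, then link.  A
   concurrent "cp -afl" can recreate dst between the two steps, so
   link() fails with EEXIST even though --force was given.  */
static int
link_after_check (char const *src, char const *dst)
{
  struct stat st;
  if (lstat (dst, &st) == 0 && unlink (dst) != 0)
    return -1;

  /* ...window: another copy may create dst right here...  */

  if (link (src, dst) != 0)
    {
      if (errno == EEXIST)
        fprintf (stderr, "cannot create hard link '%s' to '%s': File exists\n",
                 dst, src);
      return -1;
    }
  return 0;
}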
Steps to reproduce (assuming the aforementioned sleep has been added):
mkdir a
date > a/test.txt
rm -rf b && \
cp -afl a b & sleep 0.2 && \
cp -afl a b & sleep 0.4 && \
cp -afl a b
One of the copies fails with:
cp: cannot create hard link 'b/a/test.txt' to 'a/test.txt': File exists
I found this happening multiple times in the wild, without the sleep, in
the deep-dark depths of a build system.
It's not obvious to me that this fix is the correct one, but it does
seem to work. There is at least some risk that the while loop in
create_hard_link might spin forever, but something else would have to be
recreating the file again rather quickly. It might make sense for me to
add a maximum number of retries.
If this patch is acceptable then I believe that it is trivial enough
that it does not require copyright assignment.
---
src/copy.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/copy.c b/src/copy.c
index e3832c23a..b72020ccf 100644
--- a/src/copy.c
+++ b/src/copy.c
@@ -1789,7 +1789,7 @@ create_hard_link (char const *src_name, char const *dst_name,
/* If the link failed because of an existing destination,
remove that file and then call link again. */
- if (link_failed && replace && errno == EEXIST)
+ while (link_failed && replace && errno == EEXIST)
{
if (unlink (dst_name) != 0)
{
@@ -2617,7 +2617,7 @@ copy_internal (char const *src_name, char const *dst_name,
&& !(! CAN_HARDLINK_SYMLINKS && S_ISLNK (src_mode)
&& x->dereference == DEREF_NEVER))
{
- if (! create_hard_link (src_name, dst_name, false, false, dereference))
+ if (! create_hard_link (src_name, dst_name, x->unlink_dest_after_failed_open, false, dereference))
goto un_backup;
}
else if (S_ISREG (src_mode)
--
2.12.0.rc0
Information forwarded to bug-coreutils <at> gnu.org: bug#25680; Package coreutils. (Fri, 10 Feb 2017 19:56:01 GMT)
Message #8 received at 25680 <at> debbugs.gnu.org (full text, mbox):
On 02/10/2017 11:18 AM, Mike Crowe wrote:
> - if (link_failed && replace && errno == EEXIST)
> + while (link_failed && replace && errno == EEXIST)
This could cause 'cp -f' to loop forever, if an attacker keeps creating
hard links. Is this a new vulnerability? I don't recall any other way
that copying from a finite source could take forever.
One possible solution would be to loop for just a few times, and then
give up with a diagnostic.
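A sketch of that bounded-retry idea might look roughly like this (the limit
of three and the helper name are invented for illustration; this is not the
code that was eventually committed):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Illustrative only: retry the unlink+link sequence a bounded number of
   times, then give up with a diagnostic instead of looping forever.  */
static bool
link_with_retries (char const *src, char const *dst)
{
  for (int tries = 0; tries < 3; tries++)
    {
      if (link (src, dst) == 0)
        return true;
      if (errno != EEXIST)
        break;
      if (unlink (dst) != 0 && errno != ENOENT)
        break;
      /* dst was recreated concurrently; try again, but only a few times.  */
    }
  fprintf (stderr, "cannot create hard link '%s' to '%s'\n", dst, src);
  return false;
}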
Information forwarded to bug-coreutils <at> gnu.org: bug#25680; Package coreutils. (Fri, 10 Feb 2017 21:20:02 GMT)
Message #11 received at 25680 <at> debbugs.gnu.org (full text, mbox):
On Friday 10 February 2017 at 11:55:35 -0800, Paul Eggert wrote:
> On 02/10/2017 11:18 AM, Mike Crowe wrote:
> >- if (link_failed && replace && errno == EEXIST)
> >+ while (link_failed && replace && errno == EEXIST)
>
> This could cause 'cp -f' to loop forever, if an attacker keeps creating hard
> links. Is this a new vulnerability? I don't recall any other way that
> copying from a finite source could take forever.
>
> One possible solution would be to loop for just a few times, and then give
> up with a diagnostic.
Indeed, that's why I wrote:
> There is at least some risk that the while loop in
> create_hard_link might spin forever, but something else would have to be
> recreating the file again rather quickly. It might make sense for me to
> add a maximum number of retries.
Do you think that if I added such a limit and diagnostic then the patch
would be acceptable?
Thanks for the review.
Mike.
Reply sent to Paul Eggert <eggert <at> cs.ucla.edu>: You have taken responsibility. (Sun, 12 Feb 2017 07:21:02 GMT)
Notification sent to Mike Crowe <mac <at> mcrowe.com>: bug acknowledged by developer. (Sun, 12 Feb 2017 07:21:02 GMT)
Message #16 received at 25680-done <at> debbugs.gnu.org (full text, mbox):
Mike Crowe wrote:
> Do you think that if I added such a limit and diagnostic then the patch
> would be acceptable?
I'd rather fix the underlying problem more systematically. How about the
attached patch instead? I've installed this on Savannah so you can give it a
whirl easily. It fixes the problem for me so I'm boldly marking this bug report
as done; we can unmark it later if it turns out that I'm wrong.
[0001-ln-replace-destination-links-more-atomically.patch (text/x-diff, attachment)]
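The gist of that approach, in a rough sketch (invented names; the installed
patch uses renameat and a more careful choice of temporary name), is to
create the link under a temporary name in the same directory and then rename
it over the destination, so an existing destination is replaced atomically
and no EEXIST window remains:

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Illustrative only: link to a temporary name, then atomically rename
   it over dst.  rename() replaces an existing destination in one step,
   so there is no check-then-link window.  */
static int
force_link_sketch (char const *src, char const *dst, char const *tmp)
{
  if (link (src, tmp) != 0)
    return -1;
  if (rename (tmp, dst) != 0)
    {
      int saved_errno = errno;
      unlink (tmp);            /* clean up the temporary link */
      errno = saved_errno;
      return -1;
    }
  return 0;
}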
Information forwarded to bug-coreutils <at> gnu.org: bug#25680; Package coreutils. (Mon, 13 Feb 2017 12:29:02 GMT)
Message #19 received at 25680 <at> debbugs.gnu.org (full text, mbox):
On Saturday 11 February 2017 at 23:20:24 -0800, Paul Eggert wrote:
> Mike Crowe wrote:
> >Do you think that if I added such a limit and diagnostic then the patch
> >would be acceptable?
>
> I'd rather fix the underlying problem more systematically. How about the
> attached patch instead? I've installed this on Savannah so you can give it a
> whirl easily. It fixes the problem for me so I'm boldly marking this bug
> report as done; we can unmark it later if it turns out that I'm wrong.
My recipe for reproducing the problem on demand relied on inserting sleeps
in the code. After your fix, the potential race now occurs inside the
renameat system call, so there's nowhere appropriate to place such sleeps.
I think this means that your fix is clearly much better.
Thanks!
Mike.
Information forwarded to bug-coreutils <at> gnu.org: bug#25680; Package coreutils. (Wed, 15 Feb 2017 19:47:02 GMT)
Message #22 received at 25680 <at> debbugs.gnu.org (full text, mbox):
I see some problems with this followup patch:
http://git.savannah.gnu.org/cgit/coreutils.git/commit/?id=bd4bb42d65aac6591277066739ca42d1ddcc2d0e
in that it will make force-link.c and force-link.h harder to move to
gnulib. Is there some way that we can have the code pass syntax checks
without tying it so tightly to coreutils?
Also, why do the syntax checks require those redundant 'extern's?
They're not that common in practice and I'd rather avoid them.
Information forwarded to bug-coreutils <at> gnu.org: bug#25680; Package coreutils. (Thu, 16 Feb 2017 01:33:01 GMT)
Message #25 received at 25680 <at> debbugs.gnu.org (full text, mbox):
On 15/02/17 11:46, Paul Eggert wrote:
> I see some problems with this followup patch:
>
> http://git.savannah.gnu.org/cgit/coreutils.git/commit/?id=bd4bb42d65aac6591277066739ca42d1ddcc2d0e
>
> in that it will make force-link.c and force-link.h harder to move to
> gnulib. Is there some way that we can have the code pass syntax checks
> without tying it so tightly to coreutils?
Generally we put stuff in gl/ that's intended for gnulib at some stage.
> Also, why do the syntax checks require those redundant 'extern's?
> They're not that common in practice and I'd rather avoid them.
I guess to match up with the declaration in the header.
Existing modules in gl/ don't use the explicit extern
in either the header or the c file.
thanks,
Pádraig
Information forwarded to bug-coreutils <at> gnu.org: bug#25680; Package coreutils. (Thu, 16 Feb 2017 06:04:01 GMT)
Message #28 received at 25680 <at> debbugs.gnu.org (full text, mbox):
On Wed, Feb 15, 2017 at 11:46 AM, Paul Eggert <eggert <at> cs.ucla.edu> wrote:
> I see some problems with this followup patch:
>
> http://git.savannah.gnu.org/cgit/coreutils.git/commit/?id=bd4bb42d65aac6591277066739ca42d1ddcc2d0e
>
> in that it will make force-link.c and force-link.h harder to move to gnulib.
> Is there some way that we can have the code pass syntax checks without tying
> it so tightly to coreutils?
>
> Also, why do the syntax checks require those redundant 'extern's? They're
> not that common in practice and I'd rather avoid them.
Hi Paul,
Those "extern" directives are the sole clue that these symbols are
deliberately given external scope. That clue is used by the
sc_tight_scope syntax check rule to ensure that no symbol is
accidentally made external.
If/when it moves to gnulib, those directives should be removed.
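Roughly, the convention being described looks like this (hypothetical names,
for illustration only):

/* example.c -- illustrative only.  The sc_tight_scope rule expects every
   symbol to be either 'static' (file-local) or marked with an explicit
   'extern' documenting that its external scope is intentional.  */

static int
internal_helper (int x)          /* file-local on purpose */
{
  return x + 1;
}

extern int                       /* deliberately exported */
exported_function (int x)
{
  return internal_helper (x);
}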
Bug archived. Request was from Debbugs Internal Request <help-debbugs <at> gnu.org> to internal_control <at> debbugs.gnu.org. (Thu, 16 Mar 2017 11:24:04 GMT)