From: "Jan Niklas Böhm" <mail@jnboehm•com>
To: goredo-devel@lists.cypherpunks.ru
Subject: Re: Slowness (caching issues?) in dependency checking
Date: Sat, 27 Aug 2022 11:11:08 +0200
Message-ID: <db6b23fb-df27-3136-5d0d-6454cbee3d64@jnboehm.com>
In-Reply-To: <YwjwRJYfxJxCrmpF@stargrave.org>

First of all, thank you for the quick response.

> I tried to reproduce any difference between all.suffix and
> all.suffix.suffix2 targets, but completely do not see any.
> All my measurements of running those targets take nearly
> the same amount of time. No noticeable overhead for .suffix2.

That is quite surprising to me.  I tried the following on a tmpfs and 
got these results:

$ redo-ifchange all.suffix.suffix2 # first run
...
$ # change in default.suffix.do
$ time redo-ifchange all.suffix # 17.5 sec
redo all.suffix (default.suffix.do) (12.957s)
$ # change in default.suffix.do
$ time redo-ifchange all.suffix.suffix2 # 22 secs
redo . all.suffix (default.suffix.do) (12.893s)
redo all.suffix.suffix2 (default.suffix2.do) (17.424s)

I would expect both cases to take almost the same amount of time, not 
to differ by multiple seconds.  Does this behavior not arise on your 
machine?  I have now tried this with the new version 1.26.

(As an aside, it is a bit odd that `time` reports 5 seconds more than 
redo does.)

> Various temporary and lock files are created during each redo
> invocation, so maybe that is such a huge filesystem overhead? I ran those
> commands both on a tmpfs and on a ZFS dataset and the latter works slower,
> but again with no noticeable difference between those two targets. I looked
> at the debug output of both commands and, as expected, the difference
> is only in the additional OOD level check, which has virtually no cost.

Yes, this issue is exacerbated if the filesystem is slower.  I first 
encountered it on NFS, where the test above takes around 20 minutes, 
with a gap of 25 seconds between .suffix and .suffix.suffix2 (tested 
with v1.25).  So I would conclude that it is not an issue with the 
filesystem itself.

> I tried playing with REDO_NO_SYNC and the sync attribute on the filesystem
> (actually right now I am just too lazy to check whether any write happens
> during redo-ifchange) and it plays no role.

Thank you for trying this out.  I did not change this variable at all, 
but it also does not seem to have an effect on my side.

> So I really have no ideas, except for OS overhead, like new
> process invocation; however, I hardly believe that, because the
> startup time of one additional statically linked Go process should be
> negligible. There is no caching of OOD information in goredo, except for
> the temporary file with (already determined) OOD targets, which is not
> used during "redo-ifchange all.suffix*" commands.

What I was wondering about is whether the OOD information regarding 
`default.run.do` is cached.  Since this file is the same for all of 
`out/a/.../z/$i.$j.run`, it would only need to be checked once.  But the 
debug output led me to believe that this file (and whether any 
`default.{,run.}do` exists in the folders in between) is checked 
separately for each target.  I would assume that this information could 
be cached for a single invocation of `redo-ifchange`; or is that not 
intended?
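
In case a sketch helps: here is a minimal illustration (in Go, but not 
goredo's actual code; the names oodCache, isOOD and statCheck are made 
up by me) of what I mean by remembering the result for a shared .do 
file within one invocation:

package main

import (
	"fmt"
	"os"
)

// oodCache memoizes the out-of-date decision per .do file path, so a
// shared default.run.do is examined only once per invocation.
type oodCache map[string]bool

// isOOD returns the cached result for path, calling check only on the
// first lookup.
func (c oodCache) isOOD(path string, check func(string) bool) bool {
	if v, ok := c[path]; ok {
		return v // already decided for an earlier target
	}
	v := check(path)
	c[path] = v
	return v
}

func main() {
	cache := oodCache{}
	// statCheck stands in for the real freshness comparison.
	statCheck := func(p string) bool {
		_, err := os.Stat(p)
		return err != nil
	}
	// Both hypothetical targets share default.run.do; the second
	// lookup does not touch the filesystem again.
	for range []string{"out/a/1.run", "out/a/2.run"} {
		fmt.Println(cache.isOOD("default.run.do", statCheck))
	}
}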

As another note: the apenwarr-redo implementation takes roughly 0.5 
seconds to `redo-ifchange` the target on a tmpfs and 1 minute on NFS, 
so it is quite a lot faster.  Is this expected because that 
implementation stores the OOD information in a single SQLite file?  I 
would assume that it would make things faster, but not by orders of 
magnitude.  (That is one of the design decisions that I do not like 
about the apenwarr-redo implementation.)
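
Purely as a back-of-envelope illustration (every count and latency 
below is an assumption of mine, not a measurement): the per-operation 
latency is what dominates, so the same number of metadata lookups can 
land in very different places depending on where the lookups go:

package main

import "fmt"

func main() {
	const (
		targets   = 1000.0 // assumed number of targets to check
		opsPerTgt = 10.0   // assumed metadata operations per target
		nfsOpMs   = 1.0    // assumed NFS latency per operation
		tmpfsOpMs = 0.01   // assumed tmpfs cost per operation
	)
	fmt.Printf("NFS:   ~%.0f s\n", targets*opsPerTgt*nfsOpMs/1000)
	fmt.Printf("tmpfs: ~%.1f s\n", targets*opsPerTgt*tmpfsOpMs/1000)
}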
