public inbox for nncp-devel@lists.cypherpunks.ru
From: John Goerzen <jgoerzen@complete•org>
To: Sergey Matveev <stargrave@stargrave•org>
Cc: nncp-devel@lists.cypherpunks.ru
Subject: Re: Issues with very large packets
Date: Fri, 19 Feb 2021 14:34:21 -0600	[thread overview]
Message-ID: <87blcfiype.fsf@complete.org> (raw)
In-Reply-To: <YDAVqtiqRuYU5DPW@stargrave.org>


On Fri, Feb 19 2021, Sergey Matveev wrote:

> Probably I am wrong, but I really believe that especially on ZFS that
> leads to huge read amplification. With default recordsize=128KiB it
> plays no role whether you read 200B or 100KiB -- ZFS will anyway read
> the whole record (it has to, to check integrity) (assume that
> compression plays no role, because of encryption). But reading a 200B
> file will lead only to reading of that 200B, which is even much
> smaller than the disk sector size. So a thousand files is many
> megabytes of random reads, which is really heavy.

I don't think you're wrong, but in my experience it just hasn't been a
huge issue.  Yes, nncp-stat can take a dozen seconds when there are a
thousand packets there.  But how much of that is caused by head
seeking vs. reading an extra 128-ish K?  I mean, I would expect the
cost of reading 128K vs. reading 200 bytes to be tiny compared to the
latency of the seeks to reach the file in the first place.  This is
all on HDD, of course; with SSD, I would imagine the 128K sequential
read also to be fairly inconsequential.  But I guess the one way to
find out is to test it!
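
The "test it" suggestion could be sketched as a quick microbenchmark
(hypothetical, not from the thread; file count, packet size, and paths
are arbitrary assumptions): create many record-sized files, then time
header-only reads against full reads.  On a warm page cache this mostly
measures per-file open/read overhead; to see real seek and record-read
costs it would need to run on the actual pool with caches dropped.

```python
# Hypothetical microbenchmark: per-file overhead of reading the first
# 200 bytes of many packet-sized files vs. reading each file in full.
# PKT_SIZE and N_FILES are arbitrary; 128 KiB matches the default ZFS
# recordsize discussed above.
import os
import tempfile
import time

PKT_SIZE = 128 * 1024   # one ZFS default record
N_FILES = 200           # simulated spool of packets

tmpdir = tempfile.mkdtemp()
paths = []
for i in range(N_FILES):
    p = os.path.join(tmpdir, "pkt%d" % i)
    with open(p, "wb") as f:
        f.write(os.urandom(PKT_SIZE))  # incompressible, like ciphertext
    paths.append(p)

def timed_reads(read_len):
    """Read read_len bytes (-1 = whole file) from every path; return
    (elapsed seconds, total bytes read)."""
    start = time.monotonic()
    total = 0
    for p in paths:
        with open(p, "rb") as f:
            total += len(f.read(read_len))
    return time.monotonic() - start, total

t_head, n_head = timed_reads(200)   # header-only, nncp-stat style
t_full, n_full = timed_reads(-1)    # whole-record reads

print("200 B reads: %.4fs (%d bytes)" % (t_head, n_head))
print("full reads:  %.4fs (%d bytes)" % (t_full, n_full))
```

A cold-cache run (e.g. after export/import of the pool) would be needed
before drawing conclusions about HDD seek latency vs. transfer cost.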

-- John


Thread overview: 8+ messages
2021-02-18 21:35 Issues with very large packets John Goerzen
2021-02-19 12:36 ` Sergey Matveev
2021-02-19 19:18   ` John Goerzen
2021-02-19 19:46     ` Sergey Matveev
2021-02-19 20:34       ` John Goerzen [this message]
2021-02-20 19:56         ` Sergey Matveev
2021-02-21  4:31           ` John Goerzen
2021-02-21  8:27             ` Sergey Matveev