From: Sergey Matveev <stargrave@stargrave•org>
Subject: Re: MTH
Date: Wed, 30 Jun 2021 17:09:36 +0300
Message-ID: <> (raw)
In-Reply-To: <>

[-- Attachment #1: Type: text/plain, Size: 2578 bytes --]


*** John Goerzen [2021-06-30 06:28]:
>I'm not familiar with MTH, but I'm guessing that's what leads to the memory
>consumption that scales with the size of the data?

Just want to remark that "MTH" is merely my own name for the hashing used
in NNCP. Technically it is ordinary BLAKE3 building a binary tree of
hashes, much like the one used in Tiger Tree Hash (TTH), Certificate
Transparency, and a huge number of other projects. So nothing new at
all, except that I use keyed hashing, instead of prepending 0x00/0x01 to
leaf/node data, to differentiate leaves from nodes.
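To illustrate the idea, here is a minimal sketch. BLAKE3 is not in
Python's standard library, so blake2b's native keyed mode stands in for
BLAKE3 keyed hashing, and the key values are made up for illustration
(they are not NNCP's actual ones):

```python
import hashlib

# Hypothetical keys: the point is only that leaves and internal nodes
# are hashed under *different* keys, instead of prepending a 0x00/0x01
# domain-separation byte as TTH and Certificate Transparency do.
LEAF_KEY = b"LEAF"
NODE_KEY = b"NODE"

def hash_leaf(block: bytes) -> bytes:
    # Keyed hash of a data block (a tree leaf).
    return hashlib.blake2b(block, key=LEAF_KEY, digest_size=32).digest()

def hash_node(left: bytes, right: bytes) -> bytes:
    # Keyed hash of two child hashes (an internal tree node).
    return hashlib.blake2b(left + right, key=NODE_KEY, digest_size=32).digest()
```

Because the two keys differ, a leaf over some bytes can never collide
with a node over the same bytes, which is exactly what the 0x00/0x01
prefix achieves in the other designs.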

>I would assume that
>BLAKE3, like other hashing algorithms, would be constant-space?

Yes, right.

>I often process packets up to about 256GB in size, so I assume that would
>lead to about 128MB of RAM usage.  That's not ideal on some of the
>lower-powered systems I use, particularly Raspberry Pis, but I can probably
>live with it in general (the lowest-powered ones don't tend to process data
>at that size). Would I be correct in assuming that this is used in pretty
>much all utilities?  nncp-file/exec, call, daemon, toss?

Yeah, it is used nearly everywhere: in everything that deals with
encrypted packets. The higher memory consumption is only a temporary
thing for now! MTH hashes data the following way (an ordinary Merkle
tree):

                    root
                   /   \
                  /     \
               node2    leaf4
              /    \       \
             /      \       \
            /        \       \
           /          \       \
          /            \       \
        node0         node1    leaf4
       /    \        /    \      \
      /      \      /      \      \
    leaf0  leaf1  leaf2  leaf3  leaf4
      |      |      |      |      |
    block  block  block  block  block

and *currently* all leaves (hashes of 128KiB blocks) are kept in memory.
Only during the final hashing do all of them "fold" into nodes. That is
why 1TiB/128KiB*256bit=256MiB is required just to keep the leaves, and
an additional 256MiB/2=128MiB is needed for building the first level of
nodes (which has half as many entries). Plus, of course, various
possible overheads.
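The scheme above can be sketched roughly as follows. This is not NNCP's
actual code: blake2b keyed mode stands in for BLAKE3, and the key values
are made up, but the structure (leaves of 128KiB blocks, all kept in a
list, then folded pairwise with an odd trailing element promoted
unchanged, like leaf4 in the diagram) matches the description:

```python
import hashlib

BLOCK = 128 * 1024  # 128 KiB blocks, as in the mail

def h(data: bytes, key: bytes) -> bytes:
    # blake2b keyed mode as a stand-in for BLAKE3 keyed hashing;
    # the keys here are hypothetical, not NNCP's actual ones.
    return hashlib.blake2b(data, key=key, digest_size=32).digest()

def mth_root(data: bytes) -> bytes:
    # Hash every 128 KiB block into a 256-bit leaf; this list is what
    # currently occupies memory (1 TiB -> 2^23 leaves -> 256 MiB).
    level = [h(data[i:i + BLOCK], b"LEAF")
             for i in range(0, max(len(data), 1), BLOCK)]
    # Fold pairwise; an odd trailing element is promoted unchanged
    # to the next level, like leaf4 in the diagram above.
    while len(level) > 1:
        nxt = [h(level[i] + level[i + 1], b"NODE")
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]
```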

Of course one can collapse/fold huge quantities of those intermediate
leaves and nodes on the fly. The current implementation is only a
temporary solution! It will be optimized to require a negligible amount
of memory, and of course without changing the format.
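One standard way to do that fold incrementally is to keep a stack of at
most log2(n) pending subtree hashes instead of every leaf. A sketch,
again with blake2b keyed mode and made-up keys standing in for NNCP's
actual scheme:

```python
import hashlib
from typing import Iterable

def h(data: bytes, key: bytes) -> bytes:
    # blake2b keyed mode as a stand-in for BLAKE3; keys are hypothetical.
    return hashlib.blake2b(data, key=key, digest_size=32).digest()

def mth_root_streaming(blocks: Iterable[bytes]) -> bytes:
    # The stack holds (height, hash) of completed subtrees: at most one
    # per tree level, i.e. O(log n) entries instead of one hash per block.
    stack = []
    for blk in blocks:
        node, height = h(blk, b"LEAF"), 0
        # Two subtrees of equal height merge into one a level higher.
        while stack and stack[-1][0] == height:
            _, left = stack.pop()
            node, height = h(left + node, b"NODE"), height + 1
        stack.append((height, node))
    # Final fold, right to left; an unpaired subtree is promoted
    # unchanged, matching leaf4 in the diagram above.
    root = stack.pop()[1]
    while stack:
        root = h(stack.pop()[1] + root, b"NODE")
    return root
```

This produces the same root as keeping all leaves and folding at the
end, while memory no longer scales with the packet size.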

Sergey Matveev
OpenPGP: CF60 E89A 5923 1E76 E263  6422 AE1A 8109 E498 57EF

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]


Thread overview: 9+ messages
2021-06-30 12:31 [EN] NNCP 7.0.0 release announcement Sergey Matveev
2021-06-30 13:28 ` John Goerzen
2021-06-30 14:09   ` Sergey Matveev [this message]
2021-06-30 14:31   ` nncp-redir Sergey Matveev
2021-06-30 17:25     ` nncp-redir Sergey Matveev
2021-07-01 15:03       ` nncp-redir John Goerzen
2021-07-01 16:19         ` nncp-redir Sergey Matveev
2021-06-30 23:39 ` [EN] NNCP 7.0.0 release announcement Shawn K. Quinn
2021-07-01  7:28   ` build fail Sergey Matveev