public inbox for nncp-devel@lists.cypherpunks.ru
Atom feed
* Assorted NNCP questions
@ 2020-12-27  4:48 John Goerzen
  2020-12-27  9:53 ` Sergey Matveev
  0 siblings, 1 reply; 7+ messages in thread
From: John Goerzen @ 2020-12-27  4:48 UTC (permalink / raw)
  To: nncp-devel

Hi folks,

I have a few questions!

First, is nncp-toss multithreaded?  If so, would it be possible to 
have an option forcing it to run requests sequentially?

Secondly, I would like to establish a long-running connection to a 
remote host.  I defined:

      calls: [
        {
          cron: "*/1 * * * *"
          onlinedeadline: 1800
          maxonlinetime: 1750
          addr: lan
        },
      ]

But the onlinedeadline isn't being respected in the run by 
nncp-caller; it still disconnects after 10 seconds, so this 
results in a new connection being established every minute.  I 
also tried defining onlinedeadline at the parent (neighbor) level, 
rather than within the calls structure, but that didn't help 
either.

I am considering just running nncp-call instead of nncp-caller as 
a systemd service, hoping that perhaps it would send periodic 
pings to notice if the remote end goes away (does it?)

Finally, I have questions about what happens if data for the wrong 
node is loaded.  For instance, say you have this setup:

A -> B -> C

That is, to talk to C from A, you must go via B.

Now, you use nncp-xfer or nncp-bundle to offload data for B.  But 
instead of plugging the USB stick/whatever into B, you plug it 
into C and load it in.  Now what happens?

Does C:

 - Ignore it all?
 - Consume it but send it to B?
 - Somehow realize that the packets were bound for it anyway and 
 process them?
 - Something else?

Thanks again!

- John

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Assorted NNCP questions
  2020-12-27  4:48 Assorted NNCP questions John Goerzen
@ 2020-12-27  9:53 ` Sergey Matveev
  2020-12-28  4:34   ` John Goerzen
  2020-12-30 12:01   ` Sergey Matveev
  0 siblings, 2 replies; 7+ messages in thread
From: Sergey Matveev @ 2020-12-27  9:53 UTC (permalink / raw)
  To: nncp-devel

[-- Attachment #1: Type: text/plain, Size: 10963 bytes --]

Greetings!

*** John Goerzen [2020-12-26 22:48]:
>First, is nncp-toss multithreaded?  If so, would it be possible to have an
>option forcing it to run requests sequentially?

Packet processing is intentionally sequential. With fast hash and
encryption algorithms (BLAKE2b/ChaCha20-Poly1305) the bottleneck should
not be the CPU but the HDD, which obviously performs best under
sequential loads. However, the processing of each packet uses one
goroutine for decryption and a separate one for decompression (used for
"command" packets created with nncp-exec) -- so it can occupy more than
one CPU/core. You can limit that by setting the GOMAXPROCS=1
environment variable, as when running any Go program.
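For example (a sketch of a plain one-shot invocation; any other options
are omitted):

```shell
# Limit the Go runtime to a single CPU/core for this run of the tosser;
# the same environment variable works for any of the NNCP utilities.
GOMAXPROCS=1 nncp-toss
```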

>But the onlinedeadline isn't being respected in the run by nncp-caller; it
>still disconnects after 10 seconds, so this results in a new connection being
>established every minute.  I also tried definine onlinedeadline at the parent
>(neighbor) level, rather than within the calls structure, but that didn't
>help either.

The onlinedeadline option must be kept "in sync" on both nodes, because
the online protocol itself makes no agreement about it. But currently I
do not understand why it is not working properly for you. I remember
there were problems some time ago, but I thought they were fixed. My
"upstream" node (my gateway and mail server) has only the
onlinedeadline option for my "laptop":

  stargrave.org: {
    id: [...]
    exchpub: [...]
    signpub: [...]
    noisepub: [...]
    freq: {
      path: /storage
      chunked: 524288
    }
    incoming: /storage/incoming
    exec: {
      sendmail: ["/usr/sbin/sendmail"]
    }
    onlinedeadline: 3600
  }

and my laptop's configuration for that gateway/upstream node is:

  gw: {
    id: [...]
    exchpub: [...]
    signpub: [...]
    noisepub: [...]
    exec: {
      sendmail: ["/usr/sbin/sendmail"]
    }
    incoming: /home/stargrave/incoming
    freq: {
      chunked: 524288
    }
    calls: [
      {
        cron: "*/10 9-21 * * MON-FRI"
        nice: PRIORITY
        rxrate: 1
      },
      {
        cron: "*/1 21-23,0-9 * * MON-FRI"
        onlinedeadline: 3600
        addr: lan
      },
      {
        cron: "*/1 * * * SAT,SUN"
        onlinedeadline: 3600
        addr: lan
      },
    ]
    addrs: {
      lan: "[fe80::be5f:f4ff:fedd:2752%bridge0]:540"
      main: "..."
    }
  }

and I run nncp-caller as a background process all day long, with
nncp-daemon on the upstream machine through inetd. And my connections
are long-lived enough:

    # zstd -d < /var/spool/nncp/log.2.zst | grep call-finish
    I 2020-12-24T19:07:40.34625988Z [call-finish duration="22540" node="..." rxbytes="15598501100" rxspeed="706453" txbytes="591692" txspeed="26"]
    I 2020-12-24T20:38:29.768306849Z [call-finish duration="511" node="..." rxbytes="36480" rxspeed="304" txbytes="175416" txspeed="730"]
    I 2020-12-25T00:21:58.498325841Z [call-finish duration="12420" node="..." rxbytes="42044" rxspeed="3" txbytes="171628" txspeed="13"]
    I 2020-12-25T04:16:39.095366317Z [call-finish duration="14079" node="..." rxbytes="39000" rxspeed="2" txbytes="135020" txspeed="9"]
    I 2020-12-25T05:45:00.081001429Z [call-finish duration="5280" node="..." rxbytes="34908" rxspeed="6" txbytes="60328" txspeed="11"]
    I 2020-12-25T07:58:00.080451266Z [call-finish duration="7920" node="..." rxbytes="38640" rxspeed="4" txbytes="146672" txspeed="18"]
    I 2020-12-25T08:18:00.090356686Z [call-finish duration="1140" node="..." rxbytes="33036" rxspeed="32" txbytes="32988" txspeed="32"]
    I 2020-12-25T09:35:00.09105514Z [call-finish duration="4560" node="..." rxbytes="36656" rxspeed="8" txbytes="122124" txspeed="27"]
    I 2020-12-25T09:59:00.090938021Z [call-finish duration="1380" node="..." rxbytes="33100" rxspeed="26" txbytes="33052" txspeed="26"]
    I 2020-12-25T10:17:00.127521868Z [call-finish duration="1020" node="..." rxbytes="33140" rxspeed="36" txbytes="407176" txspeed="452"]
    I 2020-12-25T12:09:00.09443685Z [call-finish duration="6660" node="..." rxbytes="41452" rxspeed="6" txbytes="158844" txspeed="24"]
    I 2020-12-25T12:10:10.107415566Z [call-finish duration="10" node="..." rxbytes="32748" rxspeed="32748" txbytes="32700" txspeed="32700"]

(the short ones are from me disconnecting my laptop from the network).
Are you sure the onlinedeadline option on the node you connect *to* is
in the node's section, and not inside "calls"? In any case, I will
check all of that again during the holidays next year.

>I am considering just running nncp-call instead of nncp-caller as a systemd
>service, hoping that perhaps it would send periodic pings to notice if the
>remote end goes away (does it?)

onlinedeadline tells exactly how many seconds to wait before
considering the peer dead if no replies were received from it. It
should work (as should nncp-caller :-)). Actually -call and -caller use
exactly the same code/functions; -caller is just a loop that decides
when to invoke -call's function to connect to another host. Of course
there can be bugs, so I will check that again soon.

>A -> B -> C
>Now, you use nncp-xfer or nncp-bundle to offload data for B.  But instead of
>plugging the USB stick/whatever into B, you plug it into C and load it in.
>Now what happens?

C will:
>- Ignore it all?

Everything here is very simple. And I am surprised that there is no
description of nncp-xfer's directory layout in the documentation. I
will have to fix that soon too!

For example, I sent a file from node 2BV...VCQ to node NFG...Y2A (I
stripped the long Base32 identifiers) and ran nncp-xfer -mkdir on a
completely empty directory (representing removable storage). It creates
the following:

    /tmp/shared
    ├── 2BVYXV6RWH74NXWRMD2SLDX44TEPSWP47TVR7NPTVA6Z63WJEVCQ         <- node itself
    │   └── 2BVYXV6RWH74NXWRMD2SLDX44TEPSWP47TVR7NPTVA6Z63WJEVCQ
    └── NFGW32PP4WLOCXSY5KGGJBQTM3GOGHZJ6K745TBHUYG6HDZ2JY2A         <- destination node id
        └── 2BVYXV6RWH74NXWRMD2SLDX44TEPSWP47TVR7NPTVA6Z63WJEVCQ     <- source node id
            └── CS5AE4UVOV4JRT3UKLYZRDHFHQG4BIYMTNOX3V7QPFAOTE3I72KA <- packet itself

Let's close our eyes to the "double" 2BV...VCQ directories -- they just
make it possible to send packets to "self". The shared directory holds
entries for "destination" nodes. Each "destination" node holds
directories for "source" nodes. And each "source" node directory
contains the packets themselves (a packet's filename is a checksum of
its contents). If -xfer, running on 2BV...VCQ, has outbound packets for
NFG...Y2A and sees that the specified shared directory contains
NFG...Y2A, it will create the 2BV...VCQ source subdirectory and place
the packets inside it.

If no NFG...Y2A directory exists, then that storage is assumed not to
travel to that node, so no packet copying is done. Of course someone
has to either create that directory manually, or run -xfer with the
-mkdir option (as a rule together with -node) at least once.

When -xfer sees a directory with its own node's id, it treats the
packets inside it as inbound ones.

In your example, if you run "nncp-xfer -mkdir -node B /mnt" on
completely blank storage, only "B/A/pkt" will be created (well, also
"A/A/"). So when nncp-xfer is later run on node C, the storage will be
completely ignored because there is no "C/".
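The USB-stick round trip can be sketched as follows (assuming "b" is
the alias A uses for node B, and the stick is mounted at /mnt):

```shell
# On A: create B's destination directory and copy B-bound packets onto
# the stick; only B/ (plus A's own A/ directory) appears.
nncp-xfer -mkdir -node b /mnt

# On B: packets found under B's own id directory are taken as inbound.
nncp-xfer /mnt

# On C the very same command finds no C/ directory and copies nothing.
```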

When you use "-via B", a special "transition" packet is created for
node B. Technically it is a packet of a special type that only contains
the id of the node to which its contents must be copied. And its
contents are just the ordinary encrypted packet that you would create
for node C without specifying -via. So the packet is literally just
wrapped and encrypted to the via-node(s). Because it can be decrypted
only by node B, no observer can learn anything about its contents
(well, except for the niceness level) or determine whether it is an
ordinary file transmission or a transition packet. So if you created a
"-via B C" packet, only node B can decrypt it and see that it is
actually a transition packet to node C. Node C simply cannot do
anything with data encrypted only to node B.
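For completeness, the usual way to get this wrapping automatically is
the via option in A's configuration entry for node C (aliases here are
illustrative):

```hjson
  c: {
    id: [...]
    exchpub: [...]
    signpub: [...]
    # every packet destined for c is first wrapped in a
    # transition packet addressed to b
    via: ["b"]
  }
```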

This resembles (not intentionally!) the onion encryption used, for
example, in Tor. Each packet is wrapped inside another one, and any
intermediate node knows only where to send it next and which node it
came from.

The spool and -xfer's shared directories contain only encrypted
packets, which can be processed only after authenticated decryption. So
you cannot even copy them selectively, because you know nothing about
them except (see http://www.nncpgo.org/Encrypted.html) the
sender/recipient node ids and the niceness level -- that is all. You
have to verify the header's signature with the sender's public key,
perform an ephemeral key agreement to get the symmetric key, and
decrypt the contents with that symmetric key. Even the "real" size of
the contents is encrypted: a packet can contain a small email message
and megabytes of junk after it.

Technically it is rather simple to add the ability to encrypt a packet
to multiple recipients at once: just encrypt the same single symmetric
key to each node with ephemeral DH keys. That adds just a few dozen
bytes per additional node. So we could add everyone in the -via path as
an additional recipient of the packet and of the transitional packets
inside it, without any considerable CPU/disk-space overhead -- and
everyone in the -via path (and the target node) would be able to
process the packet.

One drawback is that it reveals all the "participants", so the onion
encryption becomes useless. But I do not want to say that it is
unacceptable (if the user is given the choice); NNCP was never an
anonymity-preserving network anyway.

Another complication is that a packet would no longer belong to just a
single target node. Possibly making (symbolic?) links of the same file
for multiple nodes would be enough. And if some out-of-band node
accidentally sees a packet that it can process, it will do so -- as in
your example, where C would process the "-via B" packet.

Moreover, that gives the ability to multicast packets. I liked the idea
of hierarchical multicasting in Usenet, but I have never used it.
However, for several years I was a point in FidoNet, so I saw it in
practice. Actually FidoNet has no multicasting ability in its transport
protocols: an inbound message to an echo-area is fed to an
echo-processor that simply creates copies of the message for the other
outbound nodes. Of course that can be done with NNCP plus additional
news-like, echo-processor-like software. But possibly it could be built
into NNCP out of the box somehow. I am not sure about that, but the
multicasting idea delights me much! I will think about all of that.

-- 
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263  6422 AE1A 8109 E498 57EF

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Assorted NNCP questions
  2020-12-27  9:53 ` Sergey Matveev
@ 2020-12-28  4:34   ` John Goerzen
  2020-12-28  7:37     ` Sergey Matveev
  2020-12-30 12:01   ` Sergey Matveev
  1 sibling, 1 reply; 7+ messages in thread
From: John Goerzen @ 2020-12-28  4:34 UTC (permalink / raw)
  To: Sergey Matveev; +Cc: nncp-devel

On Sun, Dec 27 2020, Sergey Matveev wrote:

> Greetings!

Hello again, and thank you for this very informative reply!  A few 
remarks within..

> onlinedeadline option must be "in sync" on both nodes, because 
> there is
> no agreement made on it inside the online protocol itself. But 
> anyway
> currently I do not understand why it is not working properly for 
> you. I
> remember that there were problems some time ago, but I thought 
> they were
> fixed. My "upstream" node (my gateway and mail server) has only
> onlinedeadline option for my "laptop":

Ahhhh..  I hadn't realized that it had to be in sync on both ends. 
When I corrected that, it began behaving as expected.  That does 
have some logic to it; presumably whatever side has the smallest 
value becomes the operative one for the connection?  This may be a 
useful thing to document wherever those settings are referenced.


>     # zstd -d < /var/spool/nncp/log.2.zst | grep call-finish

Is there something built into NNCP that does this log rotation and 
compression, by the way?

> Technically it is rather simple to add ability to encrypt packet 
> to
> multiple recipients at once. Just encrypt the same single 
> symmetric key
> to each node with ephemeral DH keys. It will add just a few 
> dozens of
> bytes per each additional node. So we can add everyone in the 
> -via path
> as an additional recipient to the packet and transitional 
> packets inside
> it, without any considerable CPU/disk space overhead -- and 
> everyone in
> the -via path (and target's node) should be able to process the 
> packet.

This (and your other ideas mentioned) is interesting.  I am 
contemplating a scenario in which I have two backup drives, which 
are rotated in.

I would have a backup source machine A, a relay machine B, and 
then targets C and D (corresponding to the different drives).  C 
and D would be on an airgapped machine, and only one would be 
online at a time.

I have been contemplating gpg-encrypting my backup data at A to a 
key that is known by C and D but not B, then sending it via 
nncp-exec to B.  The command on B receives the data, and generates 
two outgoing nncp-execs with a copy of it: one to C and one to D. 
This way, whenever a drive is swapped, it will get the most recent 
data.

This would work perfectly.  It would be interesting to specify 
multiple destinations and have NNCP figure out the most efficient 
place to do this splitting.  However, my own solution here ought 
to be pretty workable, and this is really a niche case that may 
not merit code in NNCP itself.

What I would not want is to weaken the existing NNCP protections 
around "via"; for instance, B should never be able to see the 
unencrypted data in this setup.

Thanks again!

- John

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Assorted NNCP questions
  2020-12-28  4:34   ` John Goerzen
@ 2020-12-28  7:37     ` Sergey Matveev
  2020-12-28 18:32       ` John Goerzen
  0 siblings, 1 reply; 7+ messages in thread
From: Sergey Matveev @ 2020-12-28  7:37 UTC (permalink / raw)
  To: nncp-devel

[-- Attachment #1: Type: text/plain, Size: 2324 bytes --]

*** John Goerzen [2020-12-27 22:34]:
>Ahhhh..  I hadn't realized that it had to be in sync on both ends. When I
>corrected that, it began behaving as expected.  That does have some logic to
>it; presumably whatever side has the smallest value becomes the operative one
>for the connection?  This may be a useful thing to document wherever those
>settings are referenced.

Agreed, and I will document it explicitly. Each side has its own
deadline timers, and if one decides that the session is over, then of
course it will disconnect.

>Is there something built into NNCP that does this log rotation and
>compression, by the way?

No. It was created by the newsyslog rotation daemon that comes out of
the box in FreeBSD: http://www.git.cypherpunks.ru/?p=nncp.git;a=blob;f=ports/nncp/files/nncp.newsyslog.conf.sample;hb=develop
Actually I very much like the idea that daemons should not bother with
log rotation at all and should just print their logs to stdout, to be
processed by utilities like multilog from daemontools. But that is
acceptable only for non-interactive daemons. NNCP consists of manually
started utilities, so they write log files themselves. Because all of
them open and close the file for every single log line written, there
are no problems with simple log rotation via newsyslog.
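A newsyslog.conf entry for that can be a single line; the values below
are illustrative, not taken from the linked sample (Y requests zstd
compression on recent FreeBSD, matching the log.2.zst above):

```
# logfilename          mode count size when  flags
/var/spool/nncp/log     644   3   1000   *     Y
```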

>What I would not want is to weaken the existing NNCP protections around
>"via"; for instance, B should never be able to see the unencrypted data in
>this setup.

Personally I make my backups with gpg too, but just to be sure that it
is their encrypted form that is placed on long-term storage:
    zfs send -R | zstd | gpg -z 0 -r ... -e | nncp-file - ...
Anyway, I should think about this whole subject of multiple recipients.
If A sends data to C via B, then node B will see only the transitional
(encrypted) packet for node C. If node C were an additional recipient,
it would also see that transitional packet, but it would additionally
see that its destination is node C itself and could immediately begin
decrypting it. Of course node C would decrypt two packets to get the
data from node A: the packet for node B (and additionally node C), and
the packet inside it for node C itself.

-- 
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263  6422 AE1A 8109 E498 57EF

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Assorted NNCP questions
  2020-12-28  7:37     ` Sergey Matveev
@ 2020-12-28 18:32       ` John Goerzen
  2020-12-28 19:43         ` Sergey Matveev
  0 siblings, 1 reply; 7+ messages in thread
From: John Goerzen @ 2020-12-28 18:32 UTC (permalink / raw)
  To: Sergey Matveev; +Cc: nncp-devel


On Mon, Dec 28 2020, Sergey Matveev wrote:

> *** John Goerzen [2020-12-27 22:34]:
>>Ahhhh..  I hadn't realized that it had to be in sync on both 
>>ends. When I
>>corrected that, it began behaving as expected.  That does have 
>>some logic to
>>it; presumably whatever side has the smallest value becomes the 
>>operative one
>>for the connection?  This may be a useful thing to document 
>>wherever those
>>settings are referenced.
>
> Agreed and will document it explicitly. Each side has its own 
> deadline
> timers and if one is decided that session is over, then of 
> course it
> will disconnect.

So one question I'm having here is how it works.  Does the 
timeout apply to:

1) No incoming or outgoing NNCP packets to the site;

or

2) No response to PING commands in that amount of time?

I'm guessing from the documentation the answer is #1.  That raises 
the question: can the code be configured to use SO_KEEPALIVE or a 
protocol-level ping to hold the TCP connection open?  This would 
help, eg, for NAT devices with short timeouts or a remote that's 
crashed and rebooted (to detect that it is no longer 
communicating).


> Personally I make my backups with gpg too, but just to be sure 
> that
> their encrypted form is placed on long-term storage:
>     zfs send -R | zstd | gpg -z 0 -r ... -e | nncp-file - ...
> Anyway I should think about all of that subject with multiple
> recipients. If A sends data to C, -via B, then B-node anyway 
> will see

As I think about it, I believe it would be best not to do this in 
the NNCP code.  The reason is that right now it is very clear that 
no node can see any data except the data destined for it. 
Weakening this promise can lead to complexity, both for the 
implementation and for the users, and complexity is a source of 
security bugs.  And I think the use case isn't very common, and 
could be resolved with a brief script combined with gpg.

My particular use case involves a low-bandwidth internet 
connection from A->B, and then B->C and B->D are both 
high-bandwidth (airgap or LAN).  So B can be an "exploder" to 
receive the data from A and then queue it up for both C and D.

But this is simple because the route to both C and D is "via B". 
In other, more complex situations, the "exploder" would sit at a 
different point in the route, or perhaps there would even be 
several different exploders in the topology.  Representing that in 
NNCP itself would be complex.

So I think it is easy enough for me to gpg-encrypt on A, and have 
a tiny tiny script on B that uses tee to pipe it to two nncp-exec 
commands, one for C and one for D.  Only C and D would possess the 
decryption keys.
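Such a script on B could look roughly like this (the exec handle 
"store-backup" and the aliases "c" and "d" are illustrative names, 
not anything NNCP defines; a temporary file stands in for tee so 
the script stays plain POSIX sh):

```shell
#!/bin/sh
# Hypothetical nncp-exec handler on B: capture the gpg-encrypted
# stream from stdin once, then queue one copy for each rotation target.
tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT
cat > "$tmp"
nncp-exec c store-backup < "$tmp"
nncp-exec d store-backup < "$tmp"
```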

Representing this one case in NNCP might be easy enough (A 
generates a typical packet to B, with the inner data encrypted to 
both C and D's keys) but making it generic would be complex and 
probably not worth it.

- John

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Assorted NNCP questions
  2020-12-28 18:32       ` John Goerzen
@ 2020-12-28 19:43         ` Sergey Matveev
  0 siblings, 0 replies; 7+ messages in thread
From: Sergey Matveev @ 2020-12-28 19:43 UTC (permalink / raw)
  To: nncp-devel

[-- Attachment #1: Type: text/plain, Size: 3971 bytes --]

*** John Goerzen [2020-12-28 12:32]:
>1) No incoming or outgoing NNCP packets to the site;
>2) No response to PING commands in that amount of time?
>I'm guessing from the documentation the answer is #1.

Yes, it is #1. The logic is simple, as I see it in the code (written
years ago):

* if nothing (except for PINGs) was received or sent during the
  onlinedeadline time, then drop the connection.

  onlinedeadline is just a timeout mechanism for deciding when a peer
  must terminate the connection. If a peer has no packets to send,
  should it terminate? Obviously not, because the remote side may have
  packets to send. If a node sent some notification that it has no
  packets, the connection would become some kind of half-closed, but
  the remote side could keep sending packets for a long time, during
  which new packets could appear on our "already closed" side.

  Actually I have not thought much about how to close connections and
  negotiate that both sides may close them. I just used that simple
  timeout on the lack of traffic.

  So onlinedeadline=20 means that if no packets (except for PINGs)
  were received during 20 seconds, the connection is closed. If a
  packet appears on either side and is transmitted, then of course the
  onlinedeadline timer is reset to wait another 20 seconds

* if maxonlinetime was specified, then the connection will be
  forcefully terminated at (connection establishment time +
  maxonlinetime). Actually I added it as a hack. For example, you have
  some limitations (quota, speed, whatever) on your communication
  channel, but they depend on the exact time of day: you can use
  maximal bandwidth in your office at night, but have to limit it
  during working hours (say from 09:00). You configure your "calls"
  section correspondingly. But if you use a large onlinedeadline (for
  example 3600), then a connection established at 08:00, which is
  bandwidth-limit-less, with a packet appearing at 08:50, won't be
  terminated at 09:00, because that packet resets the onlinedeadline
  timer. Practically, that connection can live forever if packets keep
  appearing at least once per onlinedeadline, so there would be an
  alive bandwidth-limit-less connection at 09:00, 10:00 and any later
  time. maxonlinetime just allows it to be forcefully terminated,
  letting new 09:00 connections use another set of limits

* every minute, if no other packet was sent, a PING packet is sent (it
  is just a dummy empty-payload packet, but fully
  encrypted/authenticated, so we are sure it is not some kind of
  replay)

* if no packets were received during 2*PING timeouts (2 minutes), then
  the remote side is treated as dead and the connection is terminated

>That raises the
>question: can the code be configured to use SO_KEEPALIVE or a protocol-level
>ping to hold the TCP connection open?  This would help, eg, for NAT devices
>with short timeouts or a remote that's crashed and rebooted (to detect that
>it is no longer communicating).

PINGs are already sent every minute. I think that is frequent enough
for "heartbeating" through NATs.

I believe (I am not sure) TCP keepalives won't help with NATs at all,
because, as far as I can see, they have huge default timeouts (2 hours)
before any heartbeats are sent:
https://tldp.org/HOWTO/html_single/TCP-Keepalive-HOWTO/
https://webhostinggeeks.com/howto/configure-linux-tcp-keepalive-setting/
The default FreeBSD sysctl options have the same huge values:

    net.inet.tcp.keepidle: 7200000
    net.inet.tcp.keepintvl: 75000
    net.inet.tcp.keepinit: 75000
    net.inet.tcp.keepcnt: 8

>Complexity is a source of security bugs.

I completely agree with you. But I will still think about
"multicasting" next year, unrelated to your use-case -- probably
burying the idea :-) if I do not find it simple enough and with
valuable use-cases.

-- 
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263  6422 AE1A 8109 E498 57EF

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Assorted NNCP questions
  2020-12-27  9:53 ` Sergey Matveev
  2020-12-28  4:34   ` John Goerzen
@ 2020-12-30 12:01   ` Sergey Matveev
  1 sibling, 0 replies; 7+ messages in thread
From: Sergey Matveev @ 2020-12-30 12:01 UTC (permalink / raw)
  To: nncp-devel

[-- Attachment #1: Type: text/plain, Size: 511 bytes --]

*** Sergey Matveev [2020-12-27 12:53]:
>I am surprised that there is no
>information about nncp-xfer's directory layout description in
>documentation. Have to fix that too soon!

Actually it does, in just a single sentence:

    DIR directory has the following structure: RECIPIENT/SENDER/PACKET,
    where RECIPIENT is Base32 encoded destination node, SENDER is Base32
    encoded sender node.

-- 
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263  6422 AE1A 8109 E498 57EF

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2020-12-30 12:01 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-27  4:48 Assorted NNCP questions John Goerzen
2020-12-27  9:53 ` Sergey Matveev
2020-12-28  4:34   ` John Goerzen
2020-12-28  7:37     ` Sergey Matveev
2020-12-28 18:32       ` John Goerzen
2020-12-28 19:43         ` Sergey Matveev
2020-12-30 12:01   ` Sergey Matveev