public inbox for nncp-devel@lists.cypherpunks.ru
Atom feed
* Efficiency of caller/toss
@ 2021-08-18 13:56 John Goerzen
  2021-08-18 18:14 ` Sergey Matveev
  0 siblings, 1 reply; 13+ messages in thread
From: John Goerzen @ 2021-08-18 13:56 UTC (permalink / raw)
  To: nncp-devel

Hi,

So while looking into the question of "how could I have the 
quickest delivery and execution of packets between machines on a 
LAN", I started looking at nncp-caller and nncp-toss under strace.

I observed:

1) With nncp-toss, or nncp-caller with autotoss, each cycle 
involves opening directories, creating a lock file, and removing 
the lock file.  It also stats hdr files, of which I seem to have 
several thousand lying about for some reason.

2) nncp-caller seems to be doing frequent calls to nanosleep, 
futex, clock_gettime, and epoll while it has a connection to a 
remote.  (very quiet when it doesn't).  I'm going to assume that 
nncp-daemon does also, but I haven't checked that yet.  Although 
this looks bad-ish in strace, as a practical matter, it's about 
number 10-20 in my top list.  Firefox is far more expensive in 
background than it is.  So this may not be a huge deal, at least 
for one persistent connection.

The broad question is: what is the most efficient way to do fast 
data exchange?  (Efficient in terms of both SSD life and battery 
life on a laptop)

I have been using persistent connections (very high onlinedeadline 
and maxonlinetime) with nncp-caller, even when that's not strictly 
necessary, reasoning that it avoids the overhead of periodically 
establishing a new connection and all the logging associated with 
that.  However, if nncp-caller is using CPU time/battery power to 
maintain the connection, then perhaps I'm a bit off there.  (Though 
it does seem to be negligible)

The bigger question is around tossing.  Does autotoss do something 
more restrictive than nncp-toss (perhaps only toss from a 
particular machine)?  Is there a way, since autotoss is in-process 
with nncp-caller, to only trigger the toss algorithm when a new 
packet has been received, rather than by cycle interval?

One other concern about a very short cycle interval is that a 
failing packet can cause a large number of log entries (86,400 per 
day with the default 1-second interval).  That failing packet 
could be, e.g., sending a file to a box that won't accept it, 
using the wrong name with nncp-exec, or just a failure in whatever 
nncp-exec starts up.  For that reason, I have often used a longer 
cycle interval.  If autotoss could run only after new packets, 
that would help reduce this.

A final question about when-tx-exists being true.  I am a bit 
unclear how that interacts with cron.  Is it:

1) Calls are made both when cron says to, AND when a new packet is 
queued (when-tx-exists triggers MORE calls than cron alone);

or

2) Calls are made only when cron says to, but only if an outgoing 
packet exists.  (when-tx-exists causes FEWER calls than cron 
alone)

I'm guessing it's #2 but I'm not certain.

Thanks again!

- John

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Efficiency of caller/toss
  2021-08-18 13:56 Efficiency of caller/toss John Goerzen
@ 2021-08-18 18:14 ` Sergey Matveev
  2021-08-18 19:20   ` John Goerzen
  0 siblings, 1 reply; 13+ messages in thread
From: Sergey Matveev @ 2021-08-18 18:14 UTC (permalink / raw)
  To: nncp-devel

[-- Attachment #1: Type: text/plain, Size: 6807 bytes --]

Greetings!

*** John Goerzen [2021-08-18 08:56]:
>So while looking into the question of "how could I have the quickest
>delivery and execution of packets between machines on a LAN"

I am sure that if we are dealing with <=1Gbps Ethernet, then the main
bottleneck is the network itself and TCP-related algorithms. If we deal
with >=10Gbps links, and especially high-latency ones, then TCP is the
thing you will likely have to tune. That is why I played with various
protocols like UDT, Tsunami, QUIC and some others I do not remember now.
It is better to possibly lose some traffic because of congestion, and to
send more overall data than necessary, but deliver the whole packet as
fast as possible. For example, flush it with a "tsunami" of UDP packets
and then resend the lost chunks (and the new MTH hash algorithm allows
immediate integrity checking too). I did not dive deeply into all of
that, but with an ordinary 1Gbps Ethernet adapter and a short home
network, all of them fell behind ordinary TCP. Possibly fine TCP tuning
will always be enough for NNCP.
https://en.wikipedia.org/wiki/TCP_congestion_avoidance_algorithm

>2) nncp-caller seems to be doing frequent calls to nanosleep, futex,
>clock_gettime, and epoll while it has a connection to a remote.

Yeah, that is what the Go runtime uses for the goroutines running an
established session. And many goroutines in NNCP sit in an endless loop
with a sleep, constantly checking whether there is anything new in the
spool directory.

>The broad question is: what is the most efficient way to do fast data
>exchange?  (Efficient in terms of both SSD life and battery life on a
>laptop)

For me, the first thing about efficiency is dealing with the network.
Transport protocol: currently just ordinary TCP, with the administrator
tuning it as necessary. And the application protocol atop it: NNCP's SP,
which can aggregate multiple SP-packets in a single TCP segment. In
theory. In practice that is done during the handshake, but afterwards
each event about a newly appeared packet is sent immediately, to notify
the remote side as quickly as possible. And the Noise_IK pattern is used
because of its reduced number of round-trips compared to Noise_XK, which
hides identity.

Then come CPU and memory. I assume that battery life depends mainly on
CPU. The cryptographic algorithms used in NNCP are among the fastest
ones: ChaCha20-Poly1305 and BLAKE3. AES-GCM with hardware acceleration
could be faster (and less CPU-hungry), but that would complicate the
SP-protocol with algorithm negotiation, which I won't do. But neither
the ChaCha20-Poly1305 nor the BLAKE3 implementation uses multiple CPUs
now. Multiple connections are parallelized, because they work in
multiple independent goroutines.

SSD life depends on disk activity. Because I mainly use hard drives
everywhere, I tend to minimize and serialize all disk operations.
Obviously :-). Of course the optimal way would be to transparently
receive data, checksum it, decipher it, authenticate it, and write only
the deciphered/processed payload to the disk. But because of the
reliability requirement we have to save the encrypted packet, do various
fsync calls, and only after that begin its processing, with more fsyncs.
Performance and reliability guarantees are opposites. Turning off fsync
(zfs set sync=disabled, mount -o nosync), atime, and .hdr files will of
course hasten NNCP.

Constant rereading of the spool directory, stat-ing files in it,
locking -- generally these won't create any real I/O operations to the
disk, because of filesystem caching. And of course they won't wear out
SSDs, because they are read operations. But they do consume CPU.

Instead of constantly rereading directory contents, software can use
frameworks like kqueue and inotify, which immediately and explicitly
notify about changes, without the need for an endless, expensive loop
with a sleep. But all of that is OS-specific, which is why I have not
looked in that direction. I am not against that kind of optimization; I
just have not seen these loops eating enough CPU to worry about. But
they are not free, of course -- any kind of syscall is relatively
expensive.

There are many places where NNCP can be optimized, especially in the
SP-related code, to do fewer loops with sleeps and syscalls --
especially with OS-specific things like kqueue/epoll event notification.

>I have been using persistent connections (very high onlinedeadline and
>maxonlinetime) with nncp-caller, even when that's not strictly necessary,
>reasoning that there is no particular overhead for establishing a new
>connection periodically and all the logging associated with that.  However,
>if nncp-caller is using CPU time/battery power to maintain that, then
>perhaps I'm a bit off there.  (Though it does seem to be negligible)

NNCP sends PING packets from time to time and runs various goroutines
that check whether anything new has appeared in the spool directories.
We should do benchmarks, of course, but session establishment takes
several TCP/SP round-trips, with asymmetric cryptography involved (which
is *very* expensive from a CPU point of view: 0.5-1M CPU cycles), and
with the first handshake packets padded to their maximal size of ~64KB.
So a handshake should be very expensive (traffic, delays, CPU) compared
to a long-lived session.

>The bigger question is around tossing.  Does autotoss do something more
>restrictive than nncp-toss (perhaps only toss from a particular machine)?

Yes, it runs the tosser only for the node we have a connection with.

>Is there a way, since autotoss is in-process with nncp-caller, to only
>trigger the toss algorithm when a new packet has been received, rather than
>by cycle interval?

Can be done. Should be done :-). The current autotosser runs literally
the same toss functions as nncp-toss.

>One other concern about a very short cycle interval is that a failing packet
>can cause a large number of log entries.

I remember that issue, and the whole problem of (nonexistent) error
processing. I just have not had time to think about it yet, and in the
nearest weeks I won't start thinking about it either... there are
various other things in real life I have to finish :-)

>A final question about when-tx-exists being true.  I am a bit unclear how
>that interacts with cron.  Is it:
>2) Calls are made only when cron says to, but only if an outgoing packet
>exists.  (when-tx-exists causes FEWER calls than cron alone)
>I'm guessing it's #2 but I'm not certain.

Yes, exactly as you wrote. when-tx-exists just means that, every time
we are about to make a call, we check whether any outgoing packet
(with the specified niceness) really exists.
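
In configuration terms, that corresponds to a call entry roughly like the following (a hedged sketch: cron, when-tx-exists, and onlinedeadline are the options named in this thread, but consult the nncp.hjson documentation for authoritative names and syntax):

```hjson
neigh: {
  remotenode: {
    # ...keys and addresses elided...
    calls: [
      {
        cron: "*/10 * * * *"      # consider a call every 10 minutes...
        when-tx-exists: true      # ...but dial only if outgoing packets exist
        onlinedeadline: 3600      # keep the session up while traffic flows
      }
    ]
  }
}
```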

-- 
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263  6422 AE1A 8109 E498 57EF

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Efficiency of caller/toss
  2021-08-18 18:14 ` Sergey Matveev
@ 2021-08-18 19:20   ` John Goerzen
  2021-08-18 19:29     ` Sergey Matveev
  2021-08-20 10:23     ` Sergey Matveev
  0 siblings, 2 replies; 13+ messages in thread
From: John Goerzen @ 2021-08-18 19:20 UTC (permalink / raw)
  To: Sergey Matveev; +Cc: nncp-devel

Hi Sergey, and thanks for all the helpful info as usual!

On Wed, Aug 18 2021, Sergey Matveev wrote:

> *** John Goerzen [2021-08-18 08:56]:
>>So while looking into the question of "how could I have the 
>>quickest
>>delivery and execution of packets between machines on a LAN"
>
> I am sure that if we are dealing with <=1Gbps Ethernet, then the main
> bottleneck is the network itself and TCP-related algorithms. If we
> deal with >=10Gbps links, and especially high-latency ones, then TCP
> is the thing you will likely have to tune. That is why I played with
> various

So in this particular case, I have fairly small TCP packets going 
over a LAN.  The transport and TCP speeds aren't really concerns; 
it's more the latency of waiting for the next toss cycle that gets 
me.  I want things to be tossed ASAP, but there's a fine line 
between that and having pathological things occur when error 
packets are present.

> that, but with an ordinary 1Gbps Ethernet adapter and a short home
> network, all of them fell behind ordinary TCP. Possibly fine TCP
> tuning will always be enough for NNCP.
> https://en.wikipedia.org/wiki/TCP_congestion_avoidance_algorithm

I've been happy enough with it.  Yes, TCP does have its 
pathologies, especially on high-latency WAN links, but for an 
asynchronous tool, it probably doesn't merit further work right 
now.

> Then come CPU and memory. I assume that battery life depends mainly
> on CPU. The cryptographic algorithms used in NNCP are among the
> fastest ones: ChaCha20-Poly1305 and BLAKE3. AES-GCM with hardware
> acceleration could be faster (and less CPU-hungry), but that would
> complicate the SP-protocol with algorithm negotiation, which I won't
> do. But neither the ChaCha20-Poly1305 nor the BLAKE3 implementation
> uses multiple CPUs now. Multiple connections are parallelized,
> because they work in multiple independent goroutines.

Right.  In my particular case here, the packets are small and so 
the CPU usage of actually processing them is on the order of a few 
ms per hour, I'm assuming.  I'm more thinking of background CPU 
usage here.

Of course, my backup setup processes 250GB packets on occasion, so 
the calculation is very much different there!

> and only after that begin its processing, with more fsyncs.
> Performance and reliability guarantees are opposites. Turning off
> fsync (zfs set sync=disabled, mount -o nosync), atime, and .hdr
> files will of course hasten NNCP.

Yep.  Also here, in this particular use case, I'm more concerned 
about background usage than foreground usage.

> Constant rereading of the spool directory, stat-ing files in it,
> locking -- generally these won't create any real I/O operations to
> the disk, because of filesystem caching. And of course they won't
> wear out SSDs, because they are read operations. But they do consume
> CPU.

I would assume that the creation and deletion of the lock file 
would add entries to the logs of various filesystems, which must 
be committed, but I could be wrong about that.  Still, even if 
it's done every second as with autotoss, it's probably negligible 
compared to what a web browser does when you make one click.

> Instead of constantly rereading directory contents, software can use
> frameworks like kqueue and inotify, which immediately and explicitly
> notify about changes, without the need for an endless, expensive
> loop with a sleep. But all of that is OS-specific, which is why I
> have not looked in that direction. I am not against that kind of

Right.  That is a real pain to deal with.

> handshake packets padded to their maximal size of ~64KB. So a
> handshake should be very expensive (traffic, delays, CPU) compared
> to a long-lived session.

Makes good sense, thanks.

>>Is there a way, since autotoss is in-process with nncp-caller, 
>>to only
>>trigger the toss algorithm when a new packet has been received, 
>>rather than
>>by cycle interval?
>
> Can be done. Should be done :-). The current autotosser runs
> literally the same toss functions as nncp-toss.

This strikes me as perhaps the single best cost-benefit thing 
we've discussed here.  If it could just change how the toss is 
invoked, from being timer-based to trigger-based, that should be 
pretty nice.  One complication I could foresee would be needing to 
remember to trigger it again if new packets come in while a toss 
is already running, but that's not particularly difficult to 
overcome.

> I remember that issue, and the whole problem of (nonexistent) error
> processing. I just have not had time to think about it yet, and in
> the nearest weeks I won't start thinking about it either... there
> are various other things in real life I have to finish :-)

Completely understood!  I am most definitely NOT complaining, just 
thinking!

>>2) Calls are made only when cron says to, but only if an 
>>outgoing packet
>>exists.  (when-tx-exists causes FEWER calls than cron alone)
>>I'm guessing it's #2 but I'm not certain.
>
> Yes, exactly like you wrote here. when-tx-exists just tells, 
> every time
> we appear to make a call, to check if there really exists any 
> outgoing
> packet (with specified niceness).

It might be useful to add that exact sentence (or mine) to the 
documentation, incidentally.

- John

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Efficiency of caller/toss
  2021-08-18 19:20   ` John Goerzen
@ 2021-08-18 19:29     ` Sergey Matveev
  2021-08-20  2:24       ` John Goerzen
  2021-08-20 10:23     ` Sergey Matveev
  1 sibling, 1 reply; 13+ messages in thread
From: Sergey Matveev @ 2021-08-18 19:29 UTC (permalink / raw)
  To: nncp-devel

[-- Attachment #1: Type: text/plain, Size: 548 bytes --]

Tomorrow I will answer more.

*** John Goerzen [2021-08-18 14:20]:
>I would assume that the creation and deletion of the lock file would add
>things to logs on various filesystems, which must be committed, but I could
>be wrong about that.

Creation and deletion are wear-out operations of course, but NNCP does
not delete .lock files. They are created only once, and then only
open+lock syscalls are involved (src/lockdir.go).

-- 
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263  6422 AE1A 8109 E498 57EF

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Efficiency of caller/toss
  2021-08-18 19:29     ` Sergey Matveev
@ 2021-08-20  2:24       ` John Goerzen
  2021-08-20 10:28         ` Sergey Matveev
  0 siblings, 1 reply; 13+ messages in thread
From: John Goerzen @ 2021-08-20  2:24 UTC (permalink / raw)
  To: Sergey Matveev; +Cc: nncp-devel


On Wed, Aug 18 2021, Sergey Matveev wrote:

> Creation and deletion are wear-out operations of course, but NNCP
> does not delete .lock files. They are created only once, and then
> only open+lock syscalls are involved (src/lockdir.go).

Ahh, so it is.  I misread the parameters to open() in strace.

Incidentally, on the Linux environment on my Lenovo Chromebook 
Duet (a rather resource-constrained environment that runs under 
multiple layers of emulation), I found nncp-caller was using about 
3.5% of CPU while maintaining an otherwise-idle TCP connection, 
and less otherwise.  I wonder if we could have a parameter to 
decrease the rescan interval there?

- John

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Efficiency of caller/toss
  2021-08-18 19:20   ` John Goerzen
  2021-08-18 19:29     ` Sergey Matveev
@ 2021-08-20 10:23     ` Sergey Matveev
  1 sibling, 0 replies; 13+ messages in thread
From: Sergey Matveev @ 2021-08-20 10:23 UTC (permalink / raw)
  To: nncp-devel

[-- Attachment #1: Type: text/plain, Size: 589 bytes --]

*** John Goerzen [2021-08-18 14:20]:
>> Yes, exactly like you wrote here. when-tx-exists just tells, every time
>> we appear to make a call, to check if there really exists any outgoing
>> packet (with specified niceness).
>
>It might be useful to add that exact sentence (or mine) to the
>documentation, incidentally.

Added in http://www.git.cypherpunks.ru/?p=nncp.git;a=commitdiff;h=6395395ff9539da173b1ef634623d4fcc5a786c2
Hope this clarifies its behaviour more.

-- 
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263  6422 AE1A 8109 E498 57EF

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Efficiency of caller/toss
  2021-08-20  2:24       ` John Goerzen
@ 2021-08-20 10:28         ` Sergey Matveev
  2021-08-20 19:19           ` John Goerzen
  2021-08-23 14:10           ` Sergey Matveev
  0 siblings, 2 replies; 13+ messages in thread
From: Sergey Matveev @ 2021-08-20 10:28 UTC (permalink / raw)
  To: nncp-devel

[-- Attachment #1: Type: text/plain, Size: 852 bytes --]

*** John Goerzen [2021-08-19 21:24]:
>I found nncp-caller was using about 3.5% of CPU while
>maintaining an otherwise-idle TCP connection, and less otherwise.  I wonder
>if we could have a parameter to decrease the rescan interval there?

Parametrizing those timeouts and sleep times can of course be done,
moving them to the configuration file. It will be a tradeoff between
delay (how fast we react to the appearance of new packets) and CPU
time. But first I think I should look for an existing
epoll/inotify/kqueue solution for Go, which would eliminate the
expensive regular syscalling entirely. And of course the current
while+sleep+dir-scan algorithm would be left as a fallback for
unsupported systems (kqueue/epoll are OS-specific).

-- 
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263  6422 AE1A 8109 E498 57EF

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Efficiency of caller/toss
  2021-08-20 10:28         ` Sergey Matveev
@ 2021-08-20 19:19           ` John Goerzen
  2021-08-21 18:38             ` Sergey Matveev
  2021-08-23 14:10           ` Sergey Matveev
  1 sibling, 1 reply; 13+ messages in thread
From: John Goerzen @ 2021-08-20 19:19 UTC (permalink / raw)
  To: Sergey Matveev; +Cc: nncp-devel


On Fri, Aug 20 2021, Sergey Matveev wrote:
> Parametrizing those timeouts and sleep times can of course be done,
> moving them to the configuration file. It will be a tradeoff between
> delay (how fast we react to the appearance of new packets) and CPU
> time. But first I think I should look for an existing
> epoll/inotify/kqueue solution for Go, which would eliminate the
> expensive regular syscalling entirely. And of course the current
> while+sleep+dir-scan algorithm would be left as a fallback for
> unsupported systems (kqueue/epoll are OS-specific).

Ah, if Go has a generic library for that, that would be fantastic. 
Could be used in call, caller, daemon, and toss, I would imagine. 
I do think sometimes about the overhead of the periodic scans on a 
server that can often have nearly 10,000 files in the spool dir. 
I'm probably overthinking it, as it's almost certainly cached and 
most of those are .seen files that stick around for reasons I 
understand (or .hdr ones that seem to stick around mysteriously 
sometimes).

John

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Efficiency of caller/toss
  2021-08-20 19:19           ` John Goerzen
@ 2021-08-21 18:38             ` Sergey Matveev
  0 siblings, 0 replies; 13+ messages in thread
From: Sergey Matveev @ 2021-08-21 18:38 UTC (permalink / raw)
  To: nncp-devel

[-- Attachment #1: Type: text/plain, Size: 525 bytes --]

*** John Goerzen [2021-08-20 14:19]:
>most of those are .seen files that stick around for reasons I understand

Well, it seems nothing prevents moving all of them to a subdirectory:
{rx,tx}/seen/HASH instead of {rx,tx}/HASH.seen. The same applies to
everything else. A trivial little change that will definitely help when
there are many .seen files. Will do it! I really did not think about
directory size before.

-- 
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263  6422 AE1A 8109 E498 57EF

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Efficiency of caller/toss
  2021-08-20 10:28         ` Sergey Matveev
  2021-08-20 19:19           ` John Goerzen
@ 2021-08-23 14:10           ` Sergey Matveev
  2021-09-02  9:06             ` Sergey Matveev
  1 sibling, 1 reply; 13+ messages in thread
From: Sergey Matveev @ 2021-08-23 14:10 UTC (permalink / raw)
  To: nncp-devel

[-- Attachment #1: Type: text/plain, Size: 445 bytes --]

*** Sergey Matveev [2021-08-20 13:28]:
>I think I should look for an existing epoll/inotify/kqueue solution
>for Go, which would eliminate the expensive regular syscalling entirely.

And they exist. I tried https://github.com/fsnotify/fsnotify -- no
issues (except for https://github.com/fsnotify/fsnotify/issues/389).

-- 
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263  6422 AE1A 8109 E498 57EF

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Efficiency of caller/toss
  2021-08-23 14:10           ` Sergey Matveev
@ 2021-09-02  9:06             ` Sergey Matveev
  2021-09-02 13:07               ` John Goerzen
  0 siblings, 1 reply; 13+ messages in thread
From: Sergey Matveev @ 2021-09-02  9:06 UTC (permalink / raw)
  To: nncp-devel

[-- Attachment #1: Type: text/plain, Size: 803 bytes --]

*** Sergey Matveev [2021-08-23 17:10]:
>And they exist. I tried https://github.com/fsnotify/fsnotify -- no
>issues (except for https://github.com/fsnotify/fsnotify/issues/389).

Several days ago I started using fsnotify in NNCP:
http://www.git.cypherpunks.ru/?p=nncp.git;a=commitdiff;h=726c119e6b2340994ada9fbd0e252acd31fb78b5
Currently no issues or problems discovered. It reduces unnecessary
directory listing calls. But I have not seen a drop in CPU usage: truss
shows a huge quantity of nanosleep() calls that were not seen before
fsnotify was used. Probably that is an issue related only to
Go+FreeBSD+kqueue. I have not checked that commit under GNU/Linux yet.

-- 
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263  6422 AE1A 8109 E498 57EF

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Efficiency of caller/toss
  2021-09-02  9:06             ` Sergey Matveev
@ 2021-09-02 13:07               ` John Goerzen
  2021-09-02 13:34                 ` Sergey Matveev
  0 siblings, 1 reply; 13+ messages in thread
From: John Goerzen @ 2021-09-02 13:07 UTC (permalink / raw)
  To: Sergey Matveev; +Cc: nncp-devel


On Thu, Sep 02 2021, Sergey Matveev wrote:

> *** Sergey Matveev [2021-08-23 17:10]:
>>And they exist. I tried https://github.com/fsnotify/fsnotify -- no
>>issues (except for https://github.com/fsnotify/fsnotify/issues/389).
>
> Several days ago I started using fsnotify in NNCP:
> http://www.git.cypherpunks.ru/?p=nncp.git;a=commitdiff;h=726c119e6b2340994ada9fbd0e252acd31fb78b5
> Currently no issues or problems discovered. It reduces unnecessary
> directory listing calls. But I have not seen a drop in CPU usage:
> truss shows a huge quantity of nanosleep() calls that were not seen
> before fsnotify was used. Probably that is an issue related only to
> Go+FreeBSD+kqueue. I have not checked that commit under GNU/Linux
> yet.

I tried to test this on Linux with strace, but got certificate 
errors trying to download balloon.  I'm not very familiar with Go 
(if this were written in Rust, I'd contribute code; but since I 
don't have time to learn Go right now, I contribute enthusiasm 
<grin>) and couldn't immediately find a way to solve that issue 
with the "go mod vendor" it told me to run.  I assume you do 
whatever magic is needed when you build the tarballs.

- John

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Efficiency of caller/toss
  2021-09-02 13:07               ` John Goerzen
@ 2021-09-02 13:34                 ` Sergey Matveev
  0 siblings, 0 replies; 13+ messages in thread
From: Sergey Matveev @ 2021-09-02 13:34 UTC (permalink / raw)
  To: nncp-devel

[-- Attachment #1: Type: text/plain, Size: 1468 bytes --]

*** John Goerzen [2021-09-02 08:07]:
>I tried to test this on Linux with strace, but got certificate errors trying
>to download balloon.

I tried it under some latest Ubuntu in a virtual machine -- everything
builds and works fine there too. After I separate .part/.hdr/etc into
their own directories, I will make a release. I kept the ability to
build NNCP without fsnotify at all, for platforms fsnotify does not
support.

>now, I contribute enthusiasm <grin>

And I really appreciate that!

>solve that issue with "go mod vendor" that it told me to run.  I assume you
>do whatever magic is needed when you build the tarballs.

Well, the issue with balloon is related directly to the TLS
certificate :-). "go get" requires websites to use HTTPS, so it
forcefully connects to go.cypherpunks.ru, whose certificate is issued
by ca.cypherpunks.ru. Personally, I have my CA certificate installed.
There is no such problem with the tarballs, because I explicitly
include the vendor/ directory containing all necessary dependencies.

Just for your interest: in general, you can specify/override the CA for
the go commands, or you can clone/download the necessary dependency to
your file system and use the "replace" keyword in go.mod to specify
where the dependency is already located. As an example, the bottom of
http://www.gogost.cypherpunks.ru/Download.html describes all of that.

-- 
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263  6422 AE1A 8109 E498 57EF

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2021-09-02 13:35 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-18 13:56 Efficiency of caller/toss John Goerzen
2021-08-18 18:14 ` Sergey Matveev
2021-08-18 19:20   ` John Goerzen
2021-08-18 19:29     ` Sergey Matveev
2021-08-20  2:24       ` John Goerzen
2021-08-20 10:28         ` Sergey Matveev
2021-08-20 19:19           ` John Goerzen
2021-08-21 18:38             ` Sergey Matveev
2021-08-23 14:10           ` Sergey Matveev
2021-09-02  9:06             ` Sergey Matveev
2021-09-02 13:07               ` John Goerzen
2021-09-02 13:34                 ` Sergey Matveev
2021-08-20 10:23     ` Sergey Matveev