signpost has been attending to meatspace backlog but will get back into this soon.
signpost: don't let me block other attempts at the problem either.
signpost: using online-codes, the scheme would be to send to peers a request to transfer the item identified by $HASH. perhaps this is a message type that's part of the broadcast chain of messages.
signpost: peers that care to do so reply with an offer to supply a stream of encoded "check blocks" in OC terms (encoded message blocks).
signpost: requester then confirms that he wants such a stream, and the encoded chunks are supplied as direct messages to him.
signpost: there are details that would be included in each step which I skimmed over, but this is a sketch.
signpost: a nice thing about using OC is you don't have to have any of the peers coordinate, i.e. "A pls send M1-M3, B pls M4-6" etc
signpost: they generate their own streams and they're useful together.
signpost: efficiently, even.
signpost: my pyencoder is up on blog
signpost: I stopped working on this because I don't want to dance around the GIL to make it faster.
signpost: there are tests included there which explain a lot, and which all pass.
signpost: I've got a WIP lisp version on my desk that doesn't work yet, but if there's interest I'll try to get it grunted out soon.
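For readers unfamiliar with fountain codes, a toy LT-style illustration (not signpost's pyencoder, and with a deliberately naive degree distribution) of why uncoordinated peers can generate independently useful streams: each check block is the XOR of a pseudo-randomly chosen subset of message blocks, and the seed alone identifies that subset.

```python
# Toy LT-style check-block generator; degree distribution and block size are
# assumptions for illustration, not what a tuned online-codes encoder uses.
import os
import random

BLOCK = 1300  # bytes per message block (assumed; payload padded to a multiple)

def make_check_block(blocks, seed):
    rng = random.Random(seed)
    degree = rng.randint(1, min(4, len(blocks)))      # naive degree choice
    chosen = rng.sample(range(len(blocks)), degree)   # recomputable from seed
    out = bytearray(BLOCK)
    for i in chosen:
        for j, byte in enumerate(blocks[i]):
            out[j] ^= byte
    return bytes(out)

# Two peers picking their own seeds never coordinate, yet both streams help
# the decoder recover the same 10 message blocks:
payload = os.urandom(10 * BLOCK)
blocks = [payload[i:i + BLOCK] for i in range(0, len(payload), BLOCK)]
stream_a = [make_check_block(blocks, s) for s in range(100, 120)]
stream_b = [make_check_block(blocks, s) for s in range(7000, 7020)]
```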
asciilifeform: signpost: makes sense. if you make it a broadcast tho, will be riotously expensive, as l2+ would have to somehow pass the replies to the requester, and the only way to do so is to broadcast right back
dulapbot: (pest) 2022-07-26 asciilifeform: http://logs.nosuchlabs.com/log/pest/2022-07-25#1010638 << imho could make sense to have a lubytron where can ask >1 peer for $frag; but not l2+ (cuz ruinously, 'geometrically' expensive, and the traffic will be pure noise to all but the 1 intended recipient)
asciilifeform: ... could in principle have it so lubyism request aint rebroadcast (i.e. goes strictly to l1)
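Rough arithmetic on why rebroadcasting the replies past l1 is ruinous; the fanout and hop count here are assumed figures, not pest parameters:

```python
fanout, hops = 10, 2                       # assumed: 10 peers each, flooded 2 hops out
reached = fanout + fanout * (fanout - 1)   # ~100 stations, ignoring overlap
payload_mb = 10
print(payload_mb * (reached - 1))          # ~990 MB of stream that is pure noise
                                           # to everyone but the single requester
```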
thimbronion: http://logs.nosuchlabs.com/log/asciilifeform/2022-07-26#1112252 << if this is the case does it make sense to continue work on blatta? I suspect most (including myself) will prefer to run a lisp station when one is eventually published. At the moment I can't think of a situation where one could run a python implementation that one couldn't also run a lisp implementation.
dulapbot: Logged on 2022-07-26 02:06:02 signpost: I stopped working on this because I don't want to dance around the GIL to make it faster.
thimbronion: I understand that it's good for the health of the network to have many implementations, but python ultimately seems like a dead end
signpost: it's not impossible to work around, I just didn't want to. multiprocessing with queues would probably work fine for both OC and blatta.
signpost: I don't think it's time to throw anything away yet.
signpost made some progress towards sharing a numpy array backed by shared memory between multiprocessing instances for ocpy
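A sketch of that shared-memory arrangement, assuming Python >= 3.8's multiprocessing.shared_memory; this is not ocpy's actual code:

```python
# Sketch of a numpy array backed by shared memory, worked on by a child process.
import numpy as np
from multiprocessing import Process, shared_memory

SHAPE, DTYPE = (1024, 1300), np.uint8            # 1024 blocks of 1300 bytes

def worker(shm_name):
    shm = shared_memory.SharedMemory(name=shm_name)
    blocks = np.ndarray(SHAPE, dtype=DTYPE, buffer=shm.buf)
    blocks ^= 0xFF              # stand-in for per-block XOR work, done in place
    del blocks                  # drop the view before closing the segment
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=1024 * 1300)
    blocks = np.ndarray(SHAPE, dtype=DTYPE, buffer=shm.buf)
    blocks[:] = 0
    p = Process(target=worker, args=(shm.name,))
    p.start(); p.join()
    assert int(blocks[0, 0]) == 0xFF             # child's writes are visible
    del blocks
    shm.close(); shm.unlink()
```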
thimbronion: aight. will keep hacking away on PROD then.
asciilifeform: thimbronion: atm blatta is the flagship, fwiw
asciilifeform: afaik almost all folx on pestnet, currently using it.
asciilifeform: 'bird^H^H^Hblatta in the hand is worth 2 in cl' or how did it go.
thimbronion: asciilifeform: hey not like I can delete it from the internet lol
asciilifeform: sure but imho makes sense to carry on with it, if sumbody posts a clean cl impl can always switch then
asciilifeform: supposing thimbronion has the cycles
thimbronion currently has nothing but cycles.
asciilifeform envious, lol
signpost: also, respectfully, fuck this purism. we should be so lucky as to have so many implementations of a necessary item that we can dismiss some.
signpost: also my py decoder can do 10MB in 1.3sec; it's not like it's terrible on single core, just not gonna saturate a 1gbps line.
signpost: *on this machine
signpost: 3ghz i7 lappy
asciilifeform: signpost: is that 10MB of input chunks, or of payload (if the latter, what mass of chunks appeared in the test?)
signpost: 10MB of payload in 1300 byte chunks. that'd require discovery of MTU, which we can leave aside (though I'd benefit from understanding the rationale against doing MTU discovery). with 10MB payload cut into 324 byte chunks, 5.9sec to decode.
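Quick arithmetic on those figures (taking 10MB as decimal megabytes):

```python
bits = 10 * 1000 * 1000 * 8              # 10 MB of payload
print(bits / 1.3 / 1e6)                  # ~61.5 Mbit/s with 1300-byte chunks
print(bits / 5.9 / 1e6)                  # ~13.6 Mbit/s with 324-byte chunks
# i.e. roughly 6% and 1.4% of a 1 Gbit/s line on one core.
```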
asciilifeform: signpost: good q:
asciilifeform: signpost: main arg against mtu discovery is the obv one
dulapbot: Logged on 2022-04-01 14:28:09 asciilifeform: verisimilitude: per pest conception, you wanna be able to ensure unfraggedness even if pesting from moving truck and wardriving, which precludes 'clever' path mtu discovery etc
asciilifeform: signpost: the other is, of course, that it adds moving parts / complexity, for veehhery questionable gain
signpost: yeah, not the kind of thing to put on the table at this stage.
signpost hacking on the lisp version atm, will be interesting to compare performance.
asciilifeform: as it is, one can pest immed. from any working ipv4 pipe
signpost doubtful he's anywhere near wrung all performance possible out.
asciilifeform: signpost: other thought is, if you have heavier packets, verifying seal & decrypting costs you moar.
asciilifeform: primary focus of protocol from asciilifeform's pov was ddos resistance, concretely: maximizing rate at which martians, dupes, & stales can be thrown out.
asciilifeform: i.e. minimizing # of cpu cycles, within reason, req'd by this procedure
signpost: yes, and even with large payloads it is better to be able to move them at all under attack than move at maximum throughput.
asciilifeform: ( throwing out a martian, in this case a packet where seal != H(Ks) for any Ks in yer wot , only requires hashing; however, tossing out a replay dupe requires decrypting )
asciilifeform: ( either that, or storing'em, as somebody suggested, which is a nonstarter, attacker can collect arbitrary many dupes from arbitrary interval for replay )
asciilifeform: per doctrine of 'Nothing to the Stranger', absolutely nuffin is carried in plaintext. so you gotta verify seal & decrypt, potentially for each incoming packet, and this operation oughta be as fast as physically possible.
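A schematic of that rejection order, cheapest check first; hmac-sha256, the field layout, and the dupe/stale bookkeeping here are stand-ins for illustration, not pest's actual seal, cipher, or packet format:

```python
# Stand-in packet triage: hash to reject martians, decrypt (elided here) to
# reach nonce and timestamp, then reject stales and replay dupes.
import hmac, hashlib, time

SEAL_LEN = 32
STALE_SECONDS = 900
seen_nonces = set()                        # stand-in replay filter

def classify(packet, peer_keys):
    body, seal = packet[:-SEAL_LEN], packet[-SEAL_LEN:]
    for k in peer_keys:                    # 1. martian check: hashing only
        if hmac.compare_digest(hmac.new(k, body, hashlib.sha256).digest(), seal):
            break
    else:
        return "martian"                   # no key in the wot seals it
    # 2. only now pay for decryption (omitted: body treated as plaintext here)
    nonce, ts = body[:16], int.from_bytes(body[16:24], "big")
    if abs(time.time() - ts) > STALE_SECONDS:
        return "stale"
    if nonce in seen_nonces:               # 3. dupes within the stale window
        return "dupe"
    seen_nonces.add(nonce)
    return "accept"
```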
signpost: cool, 100% agree that keeping this efficient trumps increasing packet size.
asciilifeform: ( moar pedantically, per 'Nothing to the Snoop' rly )
asciilifeform: signpost: 1 way to think about it, is that when you process a packet, yer 'extending credit' to the sender, giving him a certain # of cpu cycles (and occupying a certain amt of storage while it happens)
asciilifeform: ( per asciilifeform's napkin calcs, already current protocol, even in a hypothetical 'tight' compiled impl., will need 32 'opteron' cores to actually eat at 1g/s )
asciilifeform: ... ergo if processing an incoming packet takes longer than the interval req'd to receive it, yer 'running a fractional reserve'.
asciilifeform: asciilifeform would bet that nobody's aboutta receive legit pest traffic at g/s, so this observation chiefly concerns liquishit from ddosers.
asciilifeform: see also this thrd.
dulapbot: Logged on 2021-09-23 15:14:30 asciilifeform: PeterL: let's assume ethernet. so, in bytes, let's calculate 1 packet's mass: 5 (gap) + 4 (preamble) + 14 (eth header) + 20 (ipv4 hdr) + 8 (udp header) + 496 (pest) + 4 (ethernet crc) == 551
asciilifeform: ( see also this piece, re actual maximum packet rate on ethernet, is considerably lower than above expectation )
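Working out the napkin numbers from the quoted thread (551-byte minimal frame, 32 cores); as noted, the real achievable rate on ethernet is lower still:

```python
frame_bytes = 5 + 4 + 14 + 20 + 8 + 496 + 4   # == 551, per the quoted thread
pps = 1_000_000_000 / (frame_bytes * 8)       # ~226,860 packets/s at 1 Gbit/s
per_core = pps / 32                           # ~7,090 packets/s per core
print(pps, per_core, 1e6 / per_core)          # ~141 microseconds per packet
```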
signpost: extending credit is a great way to think about it.
signpost will read linked poast upon return from errands, bbl!
asciilifeform: signpost: thinking about it, one only needs to decrypt 8 bytes (well, 16, being the serpent block size) past the nonce, to see timestamp. but still.