vex: asciilifeform, it'll all be over soon. maintaining control over the design process is a good thing to do
vex: I coulda told you you needed more wirespace for free.
vex: no one ever listens to me tho. I'm wrong half of the time by design
vex: hanbot is still lost
vex: someone go give her a hug
vex: no one in costa rica is daddy enough
vex: youtube.com/watch?v=mrZRURcb1cM
gregorynyssa: asciilifeform: what programming-language features are necessary to handle large codebases? how does Ada address this problem?
gregorynyssa: I remember reading Robert Harper's articles back in college about how Haskell's type-classes are not good for modularity.
gregorynyssa: are CLOS and Metaobject Protocol still worth learning in this day and age?
adlai|text: gregorynyssa: why would they not be!?
asciilifeform: gregorynyssa: clos -- yes, mop -- strictly if you find it interesting ( asciilifeform admits, never used the latter in anger, as it didn't quite work 100% on sbcl when last tried )
asciilifeform: http://logs.nosuchlabs.com/log/asciilifeform/2022-07-25#1112212 << proper package support, for one thing. ( and e.g. ada's 'generic' package thing is useful, see example below. ) but see also oblig. naggum : what do you have that threatens to 'be a large code base'? (maybe wait until the problem actually
dulapbot: Logged on 2022-07-25 05:30:40 gregorynyssa: asciilifeform: what programming-language features are necessary to handle large codebases? how does Ada address this problem?
asciilifeform: happens to you?)
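A minimal sketch of the 'generic' package mechanism mentioned above; the ring buffer, its names, and its parameters are illustrative assumptions, not anything posted in the channel. The point is that the spec is parametrised over a type and a constant, so one body serves every instantiation instead of a hand-copied module per element type.

  generic
     type Element is private;
     Capacity : Positive;
  package Ring is
     procedure Push (E : in  Element);
     procedure Pop  (E : out Element);
     function  Count return Natural;
  end Ring;

  package body Ring is
     Buf  : array (1 .. Capacity) of Element;
     Head : Positive := 1;
     Tail : Positive := 1;
     Used : Natural  := 0;

     --  no overflow/underflow guard; a real one would check Used first
     procedure Push (E : in Element) is
     begin
        Buf (Tail) := E;
        Tail := (Tail mod Capacity) + 1;
        Used := Used + 1;
     end Push;

     procedure Pop (E : out Element) is
     begin
        E := Buf (Head);
        Head := (Head mod Capacity) + 1;
        Used := Used - 1;
     end Pop;

     function Count return Natural is (Used);
  end Ring;

  --  instantiated per use, e.g.:
  --    package Byte_Ring is new Ring (Element => Character, Capacity => 1024);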
gregorynyssa: asciilifeform: the problem has already happened to me.
gregorynyssa: thanks for explaining though.
asciilifeform: gregorynyssa: what, from your pov, is 'large coad' ?
gregorynyssa: Emacs, for one.
asciilifeform: emacs is imho as 'great' an example as e.g. mswin
jonsykkel: am i correct in my understanding that adding infrastructure for piping bigass multi-GB files over pest while retaining replay protection is planned?
jonsykkel: if so, how will this work, given that the dedup buffer must fit in memory?
asciilifeform: jonsykkel: there's no particular reason to dedupe warez fragments by looking in the db (or ever placing'em there to start with)
asciilifeform: jonsykkel: in luby-style scheme, btw, there's not really such a thing as a dupe, erry frag is of the form x1^x2^...^xn where x_i are chunks of the payload; erry x_i recv'd gives you some bits of info until you converge on the payload in its entirety
asciilifeform: and given that you aint rebroadcasting'em, there's no particular reason to try to filter, other than to reject stales (which may be replays thrown at your station by dr.evil)
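An illustrative reading of the x1^x2^...^xn form above, with assumptions stated up front: 4-byte chunks, a hand-picked selection vector, no degree distribution, so this is not signpost's (unposted) scheme, only the xor-of-chunks arithmetic. A fragment is the byte-wise xor of a subset of payload chunks; a receiver that already holds some of those chunks xors them back out and recovers the rest.

  with Ada.Streams; use Ada.Streams;

  procedure Frag_Demo is
     Chunk_Size : constant := 4;  --  tiny, for illustration
     subtype Chunk is Stream_Element_Array (1 .. Chunk_Size);
     type Chunk_Set  is array (Positive range <>) of Chunk;
     type Selection  is array (Positive range <>) of Boolean;

     --  a fragment of the form x1 xor x2 xor ... : byte-wise xor of the
     --  chunks picked out by Sel
     function Make_Fragment (Chunks : Chunk_Set; Sel : Selection) return Chunk is
        F : Chunk := (others => 0);
     begin
        for I in Chunks'Range loop
           if Sel (I) then
              for J in Chunk'Range loop
                 F (J) := F (J) xor Chunks (I) (J);
              end loop;
           end if;
        end loop;
        return F;
     end Make_Fragment;

     Payload : constant Chunk_Set (1 .. 3) :=
       ((1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12));

     F : constant Chunk := Make_Fragment (Payload, (True, False, True));  --  x1 xor x3
     R : Chunk;
  begin
     --  a station that already holds x1 peels it off to recover x3
     for J in Chunk'Range loop
        R (J) := F (J) xor Payload (1) (J);
     end loop;
     pragma Assert (R = Payload (3));
  end Frag_Demo;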
jonsykkel: will it not then be possible to interfere with a transfer by replaying old fragments from a different warez? unless "currently transferring warez" hash is included in every packet or something
asciilifeform: jonsykkel: they still have timestamp (and still signed/ciphered using $peer's key like erry other msg)
jonsykkel: asciilifeform: right, but in the case where a replay passes the stale check cuz it's not old enuf
asciilifeform: simply won't be chained ( chaining of any kind makes 0 sense for luby frags )
asciilifeform: jonsykkel: signpost did not yet post his scheme, but asciilifeform assumed that transfer would begin with e.g. a series of (chained) msgs giving hashes of, say, 1MB slices. then transfer 1 at a time
asciilifeform: after all slices accounted for, transfer is considered complete
asciilifeform: each luby msg would include hash of the slice it is a frag of. thereby dr.evil piping in chunks of previous transfers would do 0
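A hedged sketch of the filtering rule just described, under stated assumptions (a 256-bit slice hash and an in-memory ordered set; signpost's actual scheme remains unposted): the station records the slice hashes announced in the chained manifest messages and keeps a warez fragment only if the hash it carries is among them, so replays from older transfers are dropped without ever consulting the dedup db.

  with Interfaces;                  use Interfaces;
  with Ada.Containers.Ordered_Sets;

  package Slice_Filter is
     --  assumed 256-bit slice hash; width is illustrative
     type Slice_Hash is array (1 .. 32) of Unsigned_8;

     --  predefined lexicographic "<" and "=" on the array type satisfy the generic
     package Hash_Sets is new Ada.Containers.Ordered_Sets (Slice_Hash);

     --  filled from the chained manifest msgs at the start of a transfer
     Expected : Hash_Sets.Set;

     --  a fragment is kept only if its slice hash was announced for the
     --  current transfer; frags replayed from older transfers fail this test
     function Wanted (H : Slice_Hash) return Boolean is
       (Expected.Contains (H));
  end Slice_Filter;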
jonsykkel: alright, makes sense
asciilifeform: the ideal values for the constants will need to be determined empirically
asciilifeform: but in all cases replays of e.g. 20min old frags would do 0 other than waste bw
asciilifeform: (dr.evil can only replay authentic chunks that have been previously sent)
jonsykkel: indeed