

| Results 161751 ... 162000 found in trilema for 'the' |

mircea_popescu: that's what the web is for.
trinque: and forgo the transactional write of the form
mircea_popescu: of course random dorks go on about how no such labs will go out of business through the unlikely avenue of delivering what is clearing 1mn/year worth of services out of <1k/month, but hey.
trinque: then from there you can optimize and say "I don't care if one guy gets stale form, need moar speed"
trinque: you otherwise get a case where one user can't submit his form, because mismatch between UI and acceptable-insert
mircea_popescu: phf yes, but a fine approach to answering "what is the basis of alf's value as an engineer" is pointing out that he runs phuctor on the phuctor box, which fails to cost 5k/mo.
trinque: consider the case where a form is generated based on db state, and the validity of that form depends on db state
trinque: the reason I flip the process and say that db writes static www, rather than www reads db, is that the write of www matter can be transactional with db state update.
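trinque's scheme can be sketched in miniature: regenerate the static page inside the same db transaction that changes state, then swap it into place atomically, so the www artifact never reflects a state the db did not commit. A toy sketch using sqlite and an atomic rename; the `forms` table and `render_form` helper are invented for illustration, and a real deployment would still have to consider a crash landing between the file swap and the commit:

```python
import os, sqlite3, tempfile

def render_form(rows):
    # stand-in for whatever templating actually produces the static page
    return "<ul>" + "".join(f"<li>{name}</li>" for (name,) in rows) + "</ul>"

def update_and_publish(db, new_name, www_path):
    with db:  # one transaction: the insert and the page rebuild commit together
        db.execute("INSERT INTO forms(name) VALUES (?)", (new_name,))
        rows = db.execute("SELECT name FROM forms ORDER BY name").fetchall()
        # write to a temp file, then atomically swap into place, so www
        # readers only ever see a complete page, never a half-written one
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(www_path))
        with os.fdopen(fd, "w") as f:
            f.write(render_form(rows))
        os.replace(tmp, www_path)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE forms(name TEXT)")
path = os.path.join(tempfile.mkdtemp(), "form.html")
update_and_publish(db, "alpha", path)
page = open(path).read()
```

the point being that the expensive consistency question is settled once, at write time, instead of on every www read.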
phf: well, it's also reason why the kind of stuff you could run on a beefy dreamhost now requires $5k/mo amazon rds instance
mircea_popescu: to go into trinque 's mullings about the meaning of things and items.
mircea_popescu: which is just about what the web MEANS. www = "that data exploration mechanism which occasionally puts out old data, of an unspecified age but younger than x".
mircea_popescu: there's a reason mysql owns the web, and that has to do with this very specific www-powered profile described above, http://btcbase.org/log/2016-12-30#1593729
asciilifeform: well yes, you get cut open by the butcher you have, not the surgeon you wish you had
mircea_popescu: you get what there is.
trinque: sure, but the tool for vast piles of relational time series data looks very much like relational database, but not for idiots.
asciilifeform: sql is exactly the infamous vice-grip: 'the wrong tool for every job'
asciilifeform: i pissed on 'db' concept as a student, and i piss today: custom data structure for each job! the year ~is~ 1972.
mircea_popescu: this point has some merit, but we're reading "arbitrary" stuff not arbitrary stuff, it's addressed to the db abstraction which is allowed to handle it, not directly to pointers.
asciilifeform: hence the slow methodical spray of petrol
jurov: because the db was in the middle of balancing some datastructure?
mircea_popescu: dude the fact that every other girl in your class is a slut isn't going to feed you or your baby.
jurov: asciilifeform: so is there any database in existence that allows it? without occasional garbage?
asciilifeform: 'keep monkeys from injuring self and others'
a111: Logged on 2016-12-30 17:29 mircea_popescu: exactly how the statements {"do not allow anyone else to write here until i say" ; "let anyone read anything at any time"} amount to an "unsolved problem in cs" ? and wtf cs is this we speak of, sounds more like chewinggum-science.
asciilifeform: ditch it, and ditch randos and their shitblocks, and 0--current sync takes 6 or so hrs.
phf: davout: you don't get consistent, uninterrupted, sequential chain of blocks. the actual distribution pattern is a mess, that "orphanage" was bandaiding
mircea_popescu: otherwise, the bottleneck is the shitsoup outside.
ben_vulpes: davout: my node is for example, busy sometimes serving blocks to other people
mircea_popescu: davout block verification is the bottleneck in the dump-eat block process.
mircea_popescu: filtering a chain out of the soup outside like BingoBoingo is not without merit.
davout: i'm still curious what would make this kind of setup where i script "prb dumpblock | hex2bin | trb eatblock" much faster than syncing from network if the bottleneck is indeed the block verification?
asciilifeform: all of my nodes, fwiw, descend from the eatblock experiment
mircea_popescu: the bar is higher, but anyway, yes.
davout: well, either a block verifies, or it doesn't
mircea_popescu: davout the suspicion is that relevant data may be missing from the thing, but we really dunno.
mircea_popescu: well that's the next best thing.
mircea_popescu: davout no he's saying it's not in the sql spec! which, considering how specwork goes, he might be even right about some version.
ben_vulpes: http://btcbase.org/log/2016-12-30#1593697 << scripting is too much work, just manually dump every block and then manually load it into trb once the previous eat completes. nice meditative activity
phf: well, "you either expect" is because ~sql~ as a db language is specified to have acid. there are databases that support dirty reads/writes they are just not "sql"
mircea_popescu: oh i see. it's the c machine. ok then.
jurov: mircea_popescu: this is the problem with c machine, that everything is pointer, and without preemptive locking, you can't distinguish your pointer points to merely stale data vs. garbage
mircea_popescu: and i'm supposed to care about the fact that they don't know how to write a db that doesn't spit out passwd ?
mircea_popescu: and THIS is what i mean re "problems in the field". whopee, idiots who can't code still want to be "at the forefront of computing" so they made a modern db that doesn't work.
mircea_popescu: and here's exactly the problem of superficiality : "you either expect consistency or there's no point in discussing". there's LEVELS. maybe i expect all my writes to be consistent and don't care about A CLASS of reads being consistent. this is a consistency model that's consistent.
mircea_popescu: meanwhile inconsistency within the actual db are a different matter.
mircea_popescu: you are confusing two consistencies. the problem here discussed is dirty read by www ; its consistency with the actual db is not seriously contemplated.
jurov: mircea_popescu: in this case asciilifeform categorically claimed he decided to have consistency, or are you deciding otherwise?
mircea_popescu: one cuts and the other picks. if you cut db field into "acid" i pick you out of existence.
mircea_popescu: whether i want consistency as arbitrarily defined by you is my decision, not yours.
phf: but that's the slowest option, so you have strategies for increase of speed that involve strategic placement of locks
mircea_popescu: phf here's the problem : modern(field) consists of take field, redefine it in a practically useless but superficially persuasive way, then bad_words() to whoever dares ask if your "field" solves any important questions in the field. because of course it doesn't, MIT is the premier institution in science(*) and technology(*) in the werld.
phf: old values, other half with new values. you ~can~ guarantee four requirements ~without~ using locks
phf: you have your basic database requirements: atomicity, consistency, isolation and durability. these are axiomatic, you either expect them to hold or there's no point in further elaboration. at least SQL from the conception guaranteed the four requirements. "dirty read" violates consistency. your table might be half way through an update, you do a "dirty read", which is necessarily faster than update, and you have half the results with
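the isolation phf describes can be shown in a minimal form, using sqlite (not postgres) since it ships with python: a second connection never sees the uncommitted half of an update, which is exactly the half-old-half-new state a "dirty read" would hand you. The table and values here are invented for illustration:

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "acid.db")
writer = sqlite3.connect(path, isolation_level=None)  # manual transactions
reader = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t(v INTEGER)")
writer.execute("INSERT INTO t VALUES (1)")

writer.execute("BEGIN")
writer.execute("UPDATE t SET v = 2")   # table is "half way through an update"
# an isolated read still returns the last committed value, not the dirty one
during = reader.execute("SELECT v FROM t").fetchone()[0]
writer.execute("COMMIT")
after = reader.execute("SELECT v FROM t").fetchone()[0]
```

an engine that permits dirty reads would let `during` come back as 2 (or worse, as a mix, on a multi-row update) before the writer ever committed.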
mircea_popescu: whether i would or i wouldn't IS NOT THE DB'S DECISION, jurov .
davout: but then, how can it vastly improve sync time to feed blocks from same machine instead of letting trb suck them from the network?
mircea_popescu: and im supposed to be so cowed by the risk of being called mysql-something that i'm not going to say anything or i dunno
mircea_popescu: davout this is entirely my argument : they've moved the problem and call this "modern db"
mircea_popescu: because, again, a semaphore exists because the user does not know what the user is doing.
davout: mircea_popescu: seems to me like it would reduce to 'moving the problem'
mircea_popescu: phf i am saying that if you imagine the user can be relied on to "know where the locks are and read around them" then you are therefore necessarily saying "locks are useless - user can always know what he wanted locked and simply not write there hurr"
davout: actually fetching the block data?
davout: what's the syncing bottleneck on trb's side?
phf: mircea_popescu: i'm not quite groking what the bad write is. are you saying that instead of intermingling writes and reads, you should batch them, and not write while you're reading?
davout: ah yeah, i'm currently syncing off ben_vulpes, i was wondering if dumping blocks from prb and then eating them with trb would work
jurov: davout: the ondisk format (not blocks, but index) changed much earlier, iirc at 0.9 or so
mircea_popescu: they have a new protocol.
mircea_popescu: phf think for a second : the whole FUCKING POINT of a semaphore, of any kind, is that user can't know what the other item involved is doing. if they could know, they wouldn't "avoid the locks", they'd avoid the bad write outright.
phf: mysql solution is to creatively relax acid and hope things will "just work", which is the flip side of "mysql crashes all the time"
a111: Logged on 2016-12-30 16:14 asciilifeform: the fastest sync method, supposing one has access to a synced node, but also supposing that it won't do to simply copy the blocks (and it won't, you want to verify) is an eater-shitter system
mircea_popescu: exactly how the statements {"do not allow anyone else to write here until i say" ; "let anyone read anything at any time"} amount to an "unsolved problem in cs" ? and wtf cs is this we speak of, sounds more like chewinggum-science.
phf: mircea_popescu: yes, ~having to deal with locks~ happens past the limit of db designer's competence
mircea_popescu: nevermind "mysql world" and "security" claptrap. the point of fact is you want me to cut off my hand so my helmet will fit.
mircea_popescu: does either of you see how this is the db writer outsourcing his incompetence on the user ?
jurov: So, you have to live with locks and know them
phf: jurov: well, it's not clear where "disregard all locks" comes from in the original request. if the actual operations are as asciilifeform describes, i.e. sporadic inserts, and sporadic selects, then there will be no locks. my point is that there's no "disregard all locks" in postgresql, you solve it by knowing what lock you're hitting, and then designing your query to sidestep the lock
phf: typically you handle it by not making your query lock the entire table, using a where clause of some sort. like if you're inserting things in batches, you can use a batch counter, and you query against max last known batch counter or less (or a variation of)
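phf's batch-counter trick in toy form (sqlite stand-in; table and column names invented): the loader stamps every row with its batch number, and readers only query batches known to be complete, sidestepping whatever lock or partial state the in-flight batch holds:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE keys(batch INTEGER, fp TEXT)")
# batches 1 and 2 are fully loaded; batch 3 is still being inserted
db.executemany("INSERT INTO keys VALUES (?, ?)",
               [(1, "aa"), (1, "ab"), (2, "ba"), (3, "ca")])

current_batch = 3  # the loader advertises which batch is in flight
# readers query "max last known batch counter or less", never batch 3
rows = db.execute("SELECT fp FROM keys WHERE batch < ? ORDER BY fp",
                  (current_batch,)).fetchall()
complete = [fp for (fp,) in rows]
```

the where clause is doing the work here: the query provably cannot touch rows the loader is mid-way through writing, so no lock contention and no half-batch results.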
asciilifeform: yeah when i put on my shit diving suit and went down into the docs, i found none.
phf: pretty sure not on postgresql, they are strict about their acid
asciilifeform: if someone knows , from memory, the relevant knob: please write in.
asciilifeform: for all of the cruel things i have said about postgres : it crashed 0 times.
mircea_popescu: you don't in general want the frontend to be able to expire your cache, let the backend do it whenever it feels like it.
asciilifeform: mircea_popescu: it is currently a cached image, i implemented it. the cached snapshots however last for a limited time (iirc i have it set to half hr per url)
asciilifeform: phf, mircea_popescu , et al : one thing that would immediately make a very palpable difference in speed is if there were a permanent way to order postgres to perform all reads immediately, disregarding all locks.
mircea_popescu: asciilifeform as to "how to make www respond", you use the method we were discussing last time, whereby www is a cached image and if out of date tough for viewer ; as to nursery "do we have this ? how about this?" you really want the db to do that for you, it's ~the only thing it;s good for.
asciilifeform: if there is some other way of doing it, i'm all ears
asciilifeform: mircea_popescu: point of 'nursery' was to do the 'do we have this fp? how about this? ...' a few thou. at a time, is all.
asciilifeform: (and yes, it is the obviously correct way to process thous. of keyz, no question)
asciilifeform: how to make the www piece respond at all while this runs ?
mircea_popescu: so : if loading the whole batches of keys through the user-wwwform process is what 99% of the machine time goes to, then yes, put the batches into a single, sorted query, make the workmem 256mb or 2gb or w/e it is you actually need to cover your query (yes this can be calculated, but can also be guessed from a few tries) and then run bernstein after every such query, on the db not on "nursery" (which yes, it's a ter
mircea_popescu: if this is the path you must walk to go from solipsist-alf to socially-integrated-alf i can see it, but hurry it up already it's irritating.
mircea_popescu: http://btcbase.org/log/2016-12-30#1593602 << no. this is nonsense, and not what was at any point either suggested or discussed.
phf: asciilifeform: no, it's a command that you run, like REINDEX index_of_things; it simply queries what's already in DB and warms up the cache
Framedragger: docs say would need to get rebuilt only if there were any unwritten changes. which there shouldn't be as asciilifeform is not using write cache
asciilifeform: mircea_popescu: the other obvious thing would be to dispense with 'real time submission' entirely, and when someone dumps in a key, it goes into next batch. but we discussed this earlier in this thread, it would mean that the thing cannot be used as sks-like tool.
phf: is it a hash index? it has the least overhead (it isn't logged amongs other things, so you have to rebuild it on crash, but conversly it's kept in memory and only supports = operation) indexes will make your queries more cheap, but writes more expensive, so you want to make sure it's the cheapest possible
asciilifeform: mircea_popescu: but yes, for next version (presently only exists in my notebook) there is a nursery and it gets merged into main table at night. but this makes for considerably more complex system, where there are two very distinct types of submission, 'realtime' and 'scripted' , and they get treated quite differently.
asciilifeform: phf: there is
Framedragger: (and follow-up, does explain analyze show the use of that index)
asciilifeform: and no, you can't query the nursery every time somebody loads a url, or you get SAME performance as now, omfg
phf: a sort of impolite question, but is there's an index on hash column?
asciilifeform: otherwise we get same speed as now.
asciilifeform: only the 'adult' db
asciilifeform: but what this adds up to is to have ~two~ quite separate phuctors. we wouldn't query the nursery, for instance, when someone keys in a url with a hash
asciilifeform: it is the most obvious unmassaged piece , aha. the correct algo is , imho, to have separate 'nursery' (gcism term of art) table for the batch submits.
mircea_popescu: otherwise we'd just use hardware everywhere.
mircea_popescu: yes, well, that's then the problem. they should go in as a single query the size of the batch, with the items sorted within it
asciilifeform: i dun give half a shit about 'image'. laying out the fact of why the thing is as it is.
mircea_popescu: would you stop with these bizarro deflections, they neither impress nor persuade, but they do give you an ugly image.
asciilifeform: that way there is exactly one procedural path for key submission, and no duplicate logic.
asciilifeform: they get thrown into same hole as if human submits.
mircea_popescu: uh. then why do you put the keys in in batches if you're not... putting them in in batches ?
mircea_popescu: i gather you already do b. is it index-sorted ?
mircea_popescu: asciilifeform if that's where it spends most time then a) http://btcbase.org/log/2016-12-30#1593462 is very likely to help and b) preparing your whole query as ONE single sorted item will help also.
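the "one single sorted item" advice, in miniature: instead of thousands of per-key "do we have this hash?" round trips, sort the batch and resolve membership in one set query, then insert only the misses. An sqlite stand-in with an invented schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE hashes(h TEXT PRIMARY KEY)")
db.executemany("INSERT INTO hashes VALUES (?)", [("h2",), ("h4",)])

batch = ["h3", "h1", "h4", "h2"]   # incoming submissions
batch.sort()                       # items sorted within the query, per the log

# one round trip instead of len(batch) separate "do we have this?" queries
marks = ",".join("?" * len(batch))
known = {h for (h,) in db.execute(
    f"SELECT h FROM hashes WHERE h IN ({marks})", batch)}
new = [h for h in batch if h not in known]
db.executemany("INSERT INTO hashes VALUES (?)", [(h,) for h in new])
total = db.execute("SELECT count(*) FROM hashes").fetchone()[0]
```

sorting buys index-friendly access on the db side; the single query buys amortizing the per-query parse/plan/serialize overhead across the whole batch.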
asciilifeform: mats: i especially loved the 1 single av signature offered
asciilifeform: (in fact, dumping out the entire db, and properly bignumizing, takes about 3min total for the current db.)
asciilifeform: these end up parsed into operable bignums every shot. but, surprisingly, this never takes > 3 minutes !
asciilifeform: when i profiled it, 99% of the time is spent in 'do we have this key hash? no? insert; do we have these fp's? no? insert...'
phf: i'm not even arguing with you, i'm saying that the ~full extent~ of what "move it to psql" is going to do is ~eliminate cross-boundary issue~ that is all. so it'll shave some significant overhead, but it's not a silver bullet.
asciilifeform: the current iteration is, iirc, the third from-the-ground rewrite.
asciilifeform: phf: actually the wwwtronic piece of phuctor is in python and does the precompiled queries thing
phf: what i'm saying is that a significant fraction of "1000s of queries AND ..." is the cross-boundary. you compile queries on c side, you send them to psql, it then parses, prepares results, serializes, sends it to c side, c side has to now parse all over again
asciilifeform: phf: what part of 'this isn't the bottleneck' was unclear
phf: well, a more practical approach would be to adapt phuctor c part to a postgresql loadable module interface. in which case he will eliminate the cross-boundary overhead (serialize/deserialize over the "wire").
asciilifeform: mircea_popescu is seeing it through the naive vertically integrating rockefeller eyes, 'power plant expensive? let's put it right in my mansion'
asciilifeform: if it somehow had to happen inside postgres, it would not bypass the lock.
asciilifeform: understand, the only reason why the thing works at all, is that this one small part of it, the bernsteinization, can be made ~entirely~ independent from the db locking idiocy
asciilifeform: and not the bernsteining.
asciilifeform: because the actual bottleneck is '1000s of queries AND inserts / second AND guaranteed realtime consistent'
mircea_popescu: rather than in c.
asciilifeform: a sql or similar db system with built-in bignumatron could be useful and interesting. but no such thing exists. nor would it solve the actual bottleneck in phuctor if it were to be discovered tonight.
mircea_popescu: which yes takes some work, but not quite as much as the other variant.
mircea_popescu: bitcoin wants its own fs. ANOTHER way, is to use the means the db already offers for this.
mircea_popescu: you're not addressing the idea. currently you use a pile of c code you labeled for purely personal reasons "a db" to store some data for you, and another pile, you labeled phuctor, to bernstein and do other things on the db-stored data. because the interface is the bottleneck, it then becomes clear you must merge this. one way is to merge by lifting the db code and putting it into phuctor, making it you know, its own db like
asciilifeform: so it has not been a priority, because batching will tremendously complicate the moving parts.
asciilifeform: the querying of 'do-we-have-this-factor' is maybe 1% of the load.
phf: asciilifeform: i'm just trying to establish the dataflow here, for my own curiosity
asciilifeform: it is, by lightyears, the best known algo for batch gcd, also.
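for reference, the algorithm under discussion (Bernstein's batch GCD via product and remainder trees) fits in a screenful; this is a sketch of the published algorithm, not phuctor's actual code:

```python
from math import gcd

def batch_gcd(moduli):
    # product tree: level 0 is the moduli, the last level their full product
    tree = [list(moduli)]
    while len(tree[-1]) > 1:
        prev = tree[-1]
        tree.append([prev[i] * prev[i + 1] if i + 1 < len(prev) else prev[i]
                     for i in range(0, len(prev), 2)])
    # remainder tree: push the full product back down, reducing mod n^2
    rems = tree.pop()
    while tree:
        level = tree.pop()
        rems = [rems[i // 2] % (n * n) for i, n in enumerate(level)]
    # each modulus's gcd with the product of all the others
    return [gcd(r // n, n) for r, n in zip(rems, moduli)]

# 15 and 35 share the factor 5; 35 and 77 share 7
print(batch_gcd([15, 35, 77]))  # [5, 35, 7]
```

one caveat: a result equal to the modulus itself (35 above) means every one of its primes recurs elsewhere in the batch, and a follow-up pairwise pass is needed to split it. note also why the O(1) random access complaint in this thread is real: the product tree wants all the bignums resident at once.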
asciilifeform: there is no way around this.
phf: asciilifeform: oh so you do insert to a set, every time there's a result, and you query for the whole set before you start a cycle of process?
mircea_popescu: you implement bernstein IN the db. it is actually a programming language.
asciilifeform: the individual bignums.
asciilifeform: mircea_popescu: algo ~demands~ O(1) random access to the bignums.
mircea_popescu: though i am unaware anyone ever implemented this ; because, of course, i am unaware anyone used the guy's algo for any other purpose than gawking.
asciilifeform: phf: nope. the only thing that happens to db as a result of bernsteinization is N queries 'do we already know this factor'
phf: so you basically snapshot your entire dataset back into the database at certain times, and snapshot is an equivalent of set merge?
asciilifeform: and postgres is ~the~ albatross.
asciilifeform: the whole thing working at all is predicated on these seemingly 'abusive' design choices
asciilifeform: where i have O(1) access to them.
asciilifeform: so no, they can't 'live in db' while it happens
asciilifeform: trinque: i need random-access in O(1) to them for bernsteining
trinque: I am sadly, quite good at SQL if you want the thing translated
asciilifeform: oh and then, factors are found, largely the same set every time (how bernsteinization works) and each one is queried to the db
trinque: might be faster to do in the db
asciilifeform: this is easily 10% of the load on the db
asciilifeform: (the moduli have to turn into an array of bignum*)
asciilifeform: also did i mention that the entire db get shat out every time we bernstein ?
asciilifeform: (and, painfully, i had to find the offending garbage by hand!)
mircea_popescu: you can also set bgwriter_lru_maxpages to 0 and disable background writing altogether
asciilifeform: the db absolutely has to be in a consistent state at all times, or 0 phuctoring takes place.
mircea_popescu: asciilifeform all these are memory usage ops, what they do is establish when it should go on disk. they do not significantly affect cord-yank robustness. there are other specs you can make for the background writer for instance that do.
mircea_popescu: ah then not nearly as important.
mircea_popescu: yes but what it uses it for is sorts, select by index may use it if the index is composite.
asciilifeform: (because apparently 'thousands of queries / sec is abuse, get a cluster' is the 'state of the art')
mircea_popescu: 1) shared_buffers is to be per spec "25% of available ram" ; but it does diminish returns in the gb. you probably have it as 128mb, make it 2gb say.
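collecting the knobs mentioned across this thread into one postgresql.conf fragment; the absolute values are illustrative guesses for a box with roughly 8GB of RAM, not settings taken from the phuctor machine:

```ini
# illustrative values only; tune to the actual box and workload
shared_buffers = 2GB             # "25% of available ram" rule of thumb, diminishing returns past the gb range
work_mem = 256MB                 # sort/hash memory, sized to cover the single batch query
bgwriter_lru_maxpages = 0        # disables background writing altogether
synchronous_commit = off         # trades a small window of data loss for thousands of commits/sec
```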
mircea_popescu: alrighty then let's make a full plan here.
asciilifeform: mircea_popescu: probably. i'ma run with new knob settings as soon as it is safe to reset the db.
asciilifeform: the fastest sync method, supposing one has access to a synced node, but also supposing that it won't do to simply copy the blocks (and it won't, you want to verify) is an eater-shitter system
mircea_popescu: but you have to have the food.
a111: Logged on 2016-12-30 16:06 mircea_popescu: http://btcbase.org/log/2016-12-30#1593220 << depending on your setup about 40 to 60 days in the wild, about half with ben_vulpes recommended method.
mircea_popescu: http://btcbase.org/log/2016-12-30#1593252 << these people. if phuctor is not THE usecase then wtf is. wwwrot ffs.
a111: Logged on 2016-12-30 12:28 davout: was there a discussion of the use case where one wishes to create, and sign transactions from an arbitrary set of unspent inputs?
mircea_popescu: http://btcbase.org/log/2016-12-30#1593220 << depending on your setup about 40 to 60 days in the wild, about half with ben_vulpes recommended method.
mircea_popescu: and for the record : the dood colluded with sonny vleisides / the rest of the 'ndrangheta running "bfl" scam (which, obviously, the usg hasn't ever prosecuted, in spite of loud violation of, eg, parole terms, because hey, partners in crime) to falsely claim that he received a miner delivery so as to scam bitbet into misresolving a bet, on which they had ~500 btc.
asciilifeform has a relative who, until recently retiring, programmed in PL/I ! i shit thee not
diana_coman: asciilifeform, to round off: atm eulora code is basically c99 (even that rather reluctantly when we moved over to 64bit)
jurov: i was just paraphrasing, don't remember the exact word
diana_coman: jurov, make cpp haskell again I gather?
mod6: when I get a free moment, i'll throw the latest eulora on there. can be my mining box. :]
jurov: For the uninitiated, there's already c++17 underway. With folks gearing up to c++23, when We Will Finally Reach Parity With Haskell(noshit).
mod6: i had obsd on it like for nearly all of '16... but wasn't doing anything with it. so i threw linux on there.
mod6: nice! i haven't done any sledding yet. gotta do that one of these times.
diana_coman: basically crystalspace has its own "boost" implementation yes, leaking as expected and on top of that planeshift uses it all over the place quite without any rhyme or reason, adding further to the swamp;
asciilifeform: btw , the unusability of naked cpp was also why we got horrors like 'qt'
mircea_popescu: that's ok, the planeshift implementation leaks at pretty much every other rivet
asciilifeform: jurov: notice how some of the more appealing 11isms (e.g., bounds checking) dun work
mircea_popescu: there's two parts to this mess.
mircea_popescu: asciilifeform the client.
asciilifeform: in the old days, every major cpp project had one
jurov: asciilifeform: https://gcc.gnu.org/wiki/C11Status btw, they claim c++11 is fully done in gcc 4.9 (as is my experience) . maybe you meant c++14 ?
mircea_popescu: (they did decide to move over to unity last year, then they abandoned the plan.)
diana_coman: and it manages to have some half million lines of code doing the job of maximum 100k by the looks of it
mircea_popescu: the quality of code is uneven in the usual foss sense ; its main virtue is that being old, it is mostly not new.
mircea_popescu: asciilifeform planeshift is a mmorpg that the many-eyes beast took 10 years to make. it uses cs which is a sort of game engine, which is built on cal3d which is a gfx lib.
jurov: planeshift is opensource game, crystalspace is the engine
mircea_popescu: the server code's not published, and the client code is mostly legacy.
jurov: it does not use much c++stdlib, but the crystalspace reimplementation of
diana_coman: asciilifeform, atm we are still slowly, slowly extricating ourselves from the swamps of ps code
asciilifeform: i'ma have to gather the courage and read this thing with own eyes, at some point.
asciilifeform: how about iterators? they are all explicit? million temp vars?
mircea_popescu views with mind's eye diana_coman 's beard growing inches/second in the minds of alf
asciilifeform: so what do the data structures look like ?
asciilifeform: mircea_popescu, diana_coman : so you folx wrote own 'boost'-like horror? fwiw most games firms did, in the golden olden days. my brother's co, for instance, did.
diana_coman: ideally it will eschew cpp alltogether but so far not ideal
mircea_popescu: it dun has boost either.
asciilifeform: because cpp11 is how folx typically end up reluctantly grunting in the stake of gcc5
mircea_popescu: sure. i'm not saying it must be standardized. just, there.
mircea_popescu: asciilifeform hey, i'm not sure i want it to work on heathens what.
asciilifeform: deedbot key getter dun work in heathens, does it..?
asciilifeform: most of the time i paste in a key from somewhere, there it is, from april.
asciilifeform: there is 1 serious reason why gotta check for existing key/fp:
mircea_popescu: for that matter they fail to change their own diapers, either, end up having Framedragger write code for them etc.
mircea_popescu: on the other hand submitter support is not mandated, they fail to produce a significant portion of the input.
asciilifeform: the one obvious optimization i was considering was to avoid all dupe checks on key submit and simply deduplicate prior to each bernsteining. but this has serious cost in ui consistency, no more could submitters expect to see a result that is guaranteed to make sense after they submit.
asciilifeform: jurov: there is always 'possibility of data loss', machine could be stolen (as it once was!) or burn down
asciilifeform: writecache, on other hand, is a major Do Not Want here, for reasons described above
asciilifeform: jurov: fs does have the journal
mircea_popescu: Framedragger think for a second : modulus gets added, it's in cache, other modulus gets added, they don't get checked against each other because one was in cache, now we have two unpopped poppables in db.
asciilifeform: the thing is a 1,001-layer shit sandwich
Framedragger: i suspect then that the inserts/sec slowness is due to postgres currently making really damn sure that *all* layers of cache are forced. this "full forcing of cache for every row" is what makes things slower; but it's also the only really-super-reliable approach for the case at hand (remote box).
asciilifeform: Framedragger: may as well run whole thing off a ramdisk then
asciilifeform: i'm not convinced that they are separable.
mircea_popescu: that's another, but just as important, issue.
asciilifeform: moreover, it's either ~completely-readable~ or 'dragons in a cave'.
mircea_popescu: the issue is the magic numbers. you said "100". why did you say "100" and how did you [think you] knew ?
Framedragger: right. either it's completely-reliable, or NP-complete complex dragons in a cave
mircea_popescu: consider the simple case of "check values, actuate machinery" in article linked here a few months ago. it is quite fundamentally informative.
mircea_popescu: Framedragger see here's what graybeard means : i see that statement, and I KNOW there's a footnote somewhere you don't know about / bother to mention which says "except when abendstar in conjunction with fuckyoustar when it's 105th to 1095th column".
asciilifeform: they must've moved some king-sized cockroach sofa.
mircea_popescu: it's not even baseless, which is the saddest part.
asciilifeform: mircea_popescu: tbh i had a 'what the hell is all this' reaction to reading ben_vulpes and phf vtron problemz
mircea_popescu: asciilifeform hey, i only said they exist, i didn't say their brains work.
a111: Logged on 2016-12-30 05:22 phf: i think it treats one of the names as canonical
mircea_popescu: http://btcbase.org/log/2016-12-30#1593121 << dude the problems jus' keep on coming. wtf is this, we have hashes, why THE FUCK would we care about directory and holy shit who came up with the idea of using path as hash
Framedragger: here's what i'm thinking: disable synchronous_commit , but set 'checkpoints' so that results are flushed to db every $n inserts/updates. i can see however how you may barf from such an idea, "it's either reliable, or isn't".
asciilifeform: mircea_popescu -- switched hosts, i -- slowly, painfully, rewrote the thing...
Framedragger: "For situations where a small amount of data loss is acceptable in return for a large boost in how many updates you can do to the database per second, consider switching synchronous commit off. This is particularly useful in the situation where you do not have a battery-backed write cache on your disk controller, because you could potentially get thousands of commits per second instead of just a few hundred."
asciilifeform: the machine isn't at my house and i have no control over the mains supply.
Framedragger: (some of those settings don't require db restart (but may require to 'flush' params), some of them do, best to restart db after all changes are made.)
asciilifeform: the 'do we haves' could benefit from bigger read cache
asciilifeform: and no, no sorts there
a111: Logged on 2016-11-19 18:52 asciilifeform: Framedragger: db being hammered 24/7 with 'do we have this hash' 'do we have this fp' 'add this and this' 1000/sec is the bottle.
asciilifeform: Framedragger: they aren't only inserts, every key turns into half a dozen to a dozen queries interleaved with inserts
Framedragger: mircea_popescu: it's dark here in the northern hemisphere, god it's depressing :( mornin'..
Framedragger: asciilifeform: i take it you are certain that main bottleneck and 'hogger' is the numerous inserts?
asciilifeform: then -- static html phuctor.
asciilifeform: and what effect will this have on the consequences of yanked mains cord
asciilifeform: disk is not the bottleneck on this box
Framedragger: asciilifeform: do you have an idea how much memory you could allow postgres to eat up? i know you have that other super hardcore thing eating lots of memory on the side
a111: Logged on 2016-11-21 12:48 Framedragger: asciilifeform: since i'm fiddling around with postgres for work anyway, i'm curious, if you find a moment, could you maybe send me the postgresql.conf file on phuctor's machine? i'd take a look (it's very possible you know much more re. what's needed there, but i'm just curious about a coupla parameters, doesn't hurt to check)
a111: Logged on 2016-12-30 01:20 asciilifeform: yeah but one that doesn't motherfucking grind to a halt when read 1000/sec omfg
