whaack: jfw: hm yup I received that, I will pay more attention to the =freenode tab going forward
feedbot: http://ossasepia.com/2020/03/04/no-bones-in-thy-skeleton-and-no-theory-in-thy-research/ << Ossa Sepia -- No Bones in Thy Skeleton and No Theory in Thy Research
diana_coman: jfw: why u no write? It's been a whole week!!1
diana_coman: BingoBoingo: where are you with the scripts? kind of lost track of that part and saw only the drafts.
BingoBoingo: diana_coman: Feeding urls from a file to curl using command substitution appears to fit in the hand. I'll clean up the pieces I have and get them in here.
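(A minimal sketch of the command substitution BingoBoingo describes; 'seeds.txt' and 'gathered.html' are illustrative names, assuming one url per line in the input:)

    # feed every url listed in seeds.txt to a single curl invocation;
    # $(cat seeds.txt) is the command substitution in question
    curl -sL $(cat seeds.txt) > gathered.html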
jfw: diana_coman: because I'm ffa'ing it apparently, "can't possibly cut elephant into more manageable bites". Published nao.
diana_coman: jfw: ahaha, "eat your elephants in small pieces!"
diana_coman: BingoBoingo: more to the point: what steps do you have working, what did you obtain already with them, what's the next step and where are you with that ?
BingoBoingo: http://paste.deedbot.org/?id=x7wl << The website discovery pieces. I've got a start to a filter for "Things with these file extensions aren't interesting" and a start to a "Does this page have a comment box" tester.
BingoBoingo: Now that gathering works, the next step is cutting out the gathered items that aren't interesting.
diana_coman: BingoBoingo: did you run those on anything? on what? what did you get out of it? where do you run them next?
diana_coman: jfw: so you have in that very article some questions re the signatures thread - why didn't you ask those in #t?
BingoBoingo: diana_coman: I've run them starting from a few different sites. I get at the end a file 'churndomains4' full of website urls.
BingoBoingo: diana_coman: Since running out that many iterations gets very slow, for now I'm testing the filtering on the 'churn3' list of all urls collected from a bunch of discovered sites.
BingoBoingo: This is the most recent 'churn3' I've produced http://paste.deedbot.org/?id=VxhL
BingoBoingo: Here's the most recent (and smaller) churndomains4 http://paste.deedbot.org/?id=bKiT
diana_coman: BingoBoingo: uhm, I don't quite get it - are you after the sites or after all pages of a site? (and even ...images??)
jfw: diana_coman: they were pretty vague in my mind until spelling it all out now. Perhaps even still now, dunno; do they make sense to you?
diana_coman: BingoBoingo: to my mind the initial exploration aims to get literally as many domains as you can reach starting from a given point; so yes, it follows links from there but you don't really need to save other than those that point to *another* domain, do you?
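(A hypothetical sketch of that cut, keeping only hostnames that differ from the one being crawled; 'example.com' and the output file name are illustrative:)

    # fetch one page, pull out the absolute href targets, and keep only
    # hosts other than the current domain ($3 when splitting on '/')
    site="example.com"
    curl -sL "http://$site" \
      | grep -o 'href="http[^"]*"' \
      | sed 's/^href="//; s/"$//' \
      | awk -F/ -v d="$site" '$3 != d { print $3 }' \
      | sort -u > newdomains.txt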
diana_coman: BingoBoingo: what's the core trouble you're having with this though? because it seems to me to go quite obviously beyond curl/awk/sed/whatever command line, ie you just don't see it as clear or specific enough steps at all; can't quite put my finger on it.
diana_coman: jfw: well, your article there is quite highly strung and rather visibly the result of pain-writing; but the way it looks it's quite as you say in footnote 1 - you torture the writing because it's not as definitive as you'd want it to be, huh.
diana_coman: jfw: the thing with questions though is that they are precisely exploratory - it's true that at times you can indeed ask questions to help the other party explore but not *all* questions are like that, lol; at times you literally ask to figure stuff out so yes, necessarily *before* things are clear, lol
diana_coman: jfw: specifically on the questions in footnote iv, the second one assumes the whiteout - it's unclear whether that is the desired approach to start with, so maybe ask *that*? ie how would it work: whiteout or something else/what?
diana_coman: the first one seems quite clear ie the underlying concern is that including signatures in the same place as the vpatch/text requires some clear separation of the roles of those 2 bunches of (ultimately) text; so how is that to be achieved?
diana_coman: jfw: is that what you are asking there?
jfw: so aiming too far even with the questions, hm.
diana_coman: jfw: what do you mean by "too far"?
jfw: trying to cover too much ground and possibly introducing bad assumptions rather than starting with something simpler
jfw: yes, the boundary between sigs and text is the root of it
BingoBoingo: diana_coman: It's not the most elegant approach, but I'll try rearranging and presenting
BingoBoingo: diana_coman: On the first couple rounds I'm after new sites. On the last round I'm after blogposts specifically. The thing I'm chewing on now is cutting the uninteresting stuff out of the file full of urls to images and everything else without stripping it down to the bare domains.
BingoBoingo: diana_coman: As this works now, it curls one site and puts all the urls in a file; the next step produces from that a smaller file of only new site urls; the third step curls those sites, creating a large file of all encountered urls; the fourth step trims it down to sites...
BingoBoingo: diana_coman: So where I want to go is from an "all urls file" to "urls scrubbed of images, .js, .css, etc", from there retrieve urls and screen for comment boxes in the next cut.
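(That scrubbing step might look something like the following; 'churn3' is the file named above, while the output name and the exact extension list are illustrative:)

    # drop urls ending in an asset extension, optionally followed by a
    # query string or fragment, leaving candidate pages for the next cut
    grep -viE '\.(jpe?g|png|gif|css|js|ico|svg)([?#].*)?$' churn3 > churn3.pages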
diana_coman: jfw: well, you probably have way more practice figuring things out on your own than through discussion, don't you?
BingoBoingo: diana_coman: In between "scrub images etc" and "retrieve urls looking for comment boxes", I'm uncertain if I want to add a "cut the list to 3 or 4" urls per site step.
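(If that cut does go in, a per-site cap can be a one-liner; the limit of 4 and the file names are illustrative:)

    # keep at most the first 4 urls seen per hostname ($3 when
    # splitting http://host/path on '/')
    awk -F/ '++seen[$3] <= 4' churn3.pages > sample.txt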
jfw: diana_coman: yep
diana_coman: jfw: that's pretty much the underlying cause really - in other words simply lack of practice.
diana_coman: and it quite possibly further comes from the fact that yeah, there's not much to get from asking questions of the clueless and so on, up to the full context; but the solution is still... practice.
jfw: makes sense.
diana_coman: BingoBoingo: it's not about elegant or anything of the sort; but to start with, a program executes a series of steps itself, it doesn't have to be one step one script; the point, and my repeated asking for your "steps", is to figure out what you are trying to achieve at each *stage*, if you prefer; ie stage 1: discovery of linked domains starting from a given domain; stage 2: finding all pages with a comment box for a given domain
diana_coman: BingoBoingo: basically you have a big problem to solve; you'll have to cut this into smaller problems so you can solve them; if needed, you cut and cut again (divide and conquer, pretty much)
diana_coman: then once you have one small-enough problem, that you *know* how to solve *manually*, you simply take those manual steps and tell the machine to do them.
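(Taking stage 2 above as the small-enough problem: the manual step of viewing a page's source and looking for a comment form, handed to the machine; treating a <textarea> as the marker of a comment box is a simplifying assumption, and the file names are illustrative:)

    # print only the urls whose pages contain a <textarea>
    while read -r url; do
        curl -sL "$url" | grep -qi '<textarea' && echo "$url"
    done < sample.txt > commentable.txt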
diana_coman: http://logs.ossasepia.com/log/ossasepia/2020-03-04#1020036 - heh, now I suspect you've been reading the #e logs of today, lol
ossabot: Logged on 2020-03-04 17:03:48 jfw: trying to cover too much ground and possibly introducing bad assumptions rather than starting with something simpler
jfw: I haven't actually
diana_coman: jfw: you know, one of the good things in academia is that you *have to* ask questions; as in, if you listen to a presentation, whatever it might be, on whatever topic and regardless of how well or badly made, at the end you *have to ask* at least x questions; that's practice, pure and simple and it...works.
diana_coman: looking back at it (as I was initially rubbish at this part), I think initially I simply studied other people's questions to figure out how they managed it, lolz
ossabot: Logged on 2020-03-04 17:15:06 jfw: I haven't actually
BingoBoingo: diana_coman: Thank you. I'll get to breaking these problems up some more.
diana_coman: (today's #e log is not directly on question asking but it is on exploring what is pretty much a big unknown and it touches at times on what makes for a better initial exploration precisely on the grounds you gave re possibly introducing bad assumptions if not simple enough)
diana_coman: BingoBoingo: yw; is it clear to you what & how there? because I really don't want it to block you even more somehow.
jfw: diana_coman: interesting, I hadn't heard about the mandatory questions. Re #e, perhaps it's that you brought the notion through your feedback, and I attempted to expand.
diana_coman: might be.
diana_coman: jfw: since you have presentations at your Junto meetings for that matter, do you have questions at the end?
jfw: heh, sometimes we have to tamp down on questions popping up throughout so as to get to the end
diana_coman: jfw: ahaha, that's good then; is it *you* asking questions though? :P
BingoBoingo: diana_coman: The whats seem clear. The hows less so, but enough to get moving.
diana_coman: BingoBoingo: alright then.
jfw: diana_coman: sometimes; though hm, possibly less on the more unfamiliar topics.
jfw: mandatory questions afterward sounds like a great addition actually.
diana_coman: jfw: in principle there's nothing wrong with just agreeing to keep questions for the end (as some of them might be answered at times simply at a later point in the presentation) and otherwise set mandatory questions at the end, yeah
jfw: so I wasn't sure what "high strung" meant, my guess was something in the vein of pretentious or stuffy or bombastic (not that those are all that similar), but I'm reading it's more in the vein of nervous or tense, which certainly seems to fit better here. Is that right diana_coman?
jfw: and that'd be another example of where I coulda figured out by asking earlier!
jfw afk, food
diana_coman: jfw: ah, not at all stuffy/bombastic/pretentious, no; and not nervous either; and note that I use adverbs correctly, it's highly (not "high") strung for a reason! if you think of how you tighten/loosen up strings on a guitar, that's pretty much the analogy there - you kept stretching and tuning and fiddling with it so that the result is a highly strung (and generally too tightly strung, but not only that) text/string.
diana_coman will be back tomorrow.
whaack: when I run top on one of my vms, I get "Mem: 3922344k total, 1768028k used, 2154316k free, 143744k buffers" for the line that describes memory usage. When I inspect how much memory an individual process is using on the same vm with the command pmap, I get "total 3245868K" for the last line. Why would pmap report more memory being used by one process than top reports for all processes?
jfw: whaack: do you know how virtual memory works?
whaack: jfw: No, I do not
jfw: whaack: sorry 'bout the delay, got my attentions diverted. It's worth learning about (what, they didn't have any comp arch class at that MIT?!) but the short version is: each process has its own address space, portions of which get mapped by the OS and CPU (the MMU specifically) to different things such as physical RAM, files and hardware registers.
jfw: so what you're looking at with pmap is the total mappings, many of which may be shared with other processes, not actually allocated due to overcommit, and so on.
jfw: The RES column in top (or RSS in ps listings) tends to be the closest approximation of actual usage attributable to the process, in my understanding.
jfw: (resident set size)
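(A quick way to see the gap jfw describes; $$ below is just the current shell, standing in for the process whaack inspected with pmap:)

    # VSZ is the total mapped address space in KiB (shared libraries,
    # overcommit and all); RSS is what actually sits in physical RAM,
    # i.e. the number comparable to top's RES
    ps -o pid,vsz,rss,comm -p $$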
whaack: jfw: no worries, thank you. Yes MIT did, but through my fault the material didn't stick with me. I'll read up on the subj more later, I'm about to head out to the airport.
ossabot: Logged on 2019-10-13 10:00:22 whaack: yes it did, but i ~failed that course
jfw: whaack: cool, no need to pile on further tsks then, lol
lobbes: http://logs.ericbenevides.com/log/ossasepia/2020-02-27#1019473 << I missed this earlier, but archiving should already be occurring in this channel. Currently lobbesbot is set to silently snarf urls-to-parse from all channels it sits in, so this channel ought to be covered
ericbot: Logged on 2020-02-27 13:03:46 diana_coman: lobbes: how does that link-archiving work, can I have it in here too or what does it require?