- [02:41] <vincentch> hello
- [02:41] <jfranusic> sorry about paging you
- [02:41] <@GLaDOS> Hey vincentch
- [02:41] <vincentch> No worries. life of an eng.
- [02:41] * S[h]O[r]T waves
- [02:41] == mode/#preposterus [+v vincentch] by GLaDOS
- [02:41] <jfranusic> is this rate better?
- [02:42] <+vincentch> anyway, don't mind you guys crawling, but would be good if you guys could dynamically throttle
- [02:42] <+vincentch> don't want to force the rate to a particular value
- [02:42] <jonas__> haha, good to know just noticed you in the channel a minute ago only
- [02:43] <jfranusic> what should the input be for the dynamic throttle? response time from posterous?
- [02:43] <+vincentch> this level is fine. main problem is that like most sites, we're heavily cached. crawling basically blows apart that assumption
- [02:43] <jfranusic> haha, I used to get paged when wikipedia crawled our site (pbwiki)
- [02:43] <jfranusic> that would blow away our caches
- [02:44] <+vincentch> anyway, is there a possibility that you could monitor the latency, and back off as needed to keep that latency at a reasonable level?
- [02:44] * chronomex waves
- [02:45] <jfranusic> what's reasonable? 400ms?
- [02:45] <+vincentch> that's reasonable
- [02:45] <jfranusic> ideally, it should be below whatever pages you :P
- [02:45] <+vincentch> (lol, probably more than reasonable, considering our current level of service)
- [02:45] <jfranusic> haha
- [02:45] <@chronomex> hm, that sounds like it requires a not trivial rewrite of the warrior stuffs
- [02:45] <+vincentch> but yeah, 400ms is just fine
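The back-off idea being floated here can be sketched roughly as follows. This is a minimal illustration, not anything from the ArchiveTeam scripts: the method names are hypothetical, the 400 ms target is the number agreed above, and the 1.5x/0.9x multipliers and the 30-second cap are invented for the example.

```ruby
require "net/http"
require "uri"

# Back-off heuristic from the discussion above: time each request and grow
# the inter-request delay whenever latency exceeds ~400 ms, shrinking it
# slowly when the site is healthy. Multipliers and caps are illustrative.
LATENCY_TARGET = 0.4 # seconds

def next_delay(current_delay, latency)
  if latency > LATENCY_TARGET
    [current_delay * 1.5, 30.0].min   # back off, capped at 30 s
  else
    [current_delay * 0.9, 0.1].max    # slowly speed back up
  end
end

# Fetch a URL and report how long the request took.
def timed_get(url)
  started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  response = Net::HTTP.get_response(URI(url))
  elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
  [response, elapsed]
end
```

A crawler loop would call `timed_get`, feed the elapsed time into `next_delay`, and `sleep` the result between requests.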
- [02:45] <kennethr-> ideally, twitter could give us a data dump :)
- [02:45] <@chronomex> ^
- [02:45] * chronomex shrugs
- [02:45] <+vincentch> i am just a lowly engineer and cannot authorize that :P
- [02:46] <jfranusic> yeah, we figured
- [02:46] <@chronomex> I wonder if we could figure out a way to avoid crawling static data on every go
- [02:46] <@chronomex> maybe distribute a .cdx of common files?
- [02:46] <@S[h]O[r]T> we do want to hit our goal. could we do that with the throttling..i guess we have to find out
- [02:46] <@chronomex> hmmmm
- [02:46] <jfranusic> so, the main problem here is that our IPs are getting banned after a period of time
- [02:46] <robbiet48> at the :50 mark of every hour
- [02:47] <+vincentch> yes, we put in place a pretty aggressive limit when we went to twitter
- [02:47] <@chronomex> can you say why?
- [02:47] <+vincentch> basically to protect ourselves
- [02:47] <+vincentch> from being paged :P
- [02:47] <@chronomex> hahaha
- [02:47] <@GLaDOS> I doubt we'll be able to archive it all within the timespan we have anyway..
- [02:47] <@chronomex> seems ineffective :P
- [02:47] <+vincentch> how are you guys crawling
- [02:47] <jfranusic> could you spin up an app server for us?
- [02:48] <@chronomex> we seem to have filled up your banlist with rotating IPs
- [02:48] <robbiet48> vincentch: wget afaik
- [02:48] <@chronomex> right, wget
- [02:48] <@ersi> Oh hey, it's vincentch! :)
- [02:48] <@S[h]O[r]T> https://github.com/ArchiveTeam/posterous-grab
- [02:48] <+vincentch> so you guys are just crawling the actual webpages (vs. the API)
- [02:48] <jfranusic> yeah, for archival purposes
- [02:48] <@ersi> vincentch: Yeah, we want to make sure the data lives on
- [02:48] <@chronomex> webpages, yes
- [02:48] <@S[h]O[r]T> we used the api to discover the hostnames
- [02:49] <+vincentch> haha, you guys probably got a lot of spam + pr0n sites
- [02:49] <robbiet48> vincentch: reason we crawl webpages and not API is because only webpages can get submitted to Internet Archive
- [02:49] <robbiet48> headers have to be in place for submission
- [02:50] <jfranusic> so, we have one option, which is to throttle our crawl based on the response time of posterous
- [02:50] <jfranusic> do we have any other options?
- [02:50] <jfranusic> vincentch: could you modify your load balancer to give us our own private app server?
- [02:51] <jfranusic> or whitelist a few IPs so that we can just leave those chugging along?
- [02:51] <robbiet48> or whitelist a user agent!
- [02:51] <robbiet48> if possible
- [02:51] <+vincentch> jfranusic: yes
- [02:51] <kennethr-> then we could set an appropriate, predictable rate
- [02:52] <+vincentch> we could probably spare a box for you, and you can just let requests queue up on that
- [02:52] <@chronomex> that sounds great
- [02:52] <@ersi> vincentch: You betcha we got a lot of spam/pr0n already hehe
- [02:53] <+vincentch> that won't get done for a day or two, but that's a reasonable option. You'd probably have to modify your code to append a no-op param or a user-agent
- [02:53] <jfranusic> vincentch: I don't think that'll be an issue
- [02:54] <+vincentch> the second problem is though that crawling will inherently blow away our caches. so we may still ask you guys to back off
- [02:54] <@chronomex> yeah, crawling does that
- [02:54] <jfranusic> well, if it's a separate app server, would it be hard to give that app server its own cache?
- [02:54] <@chronomex> sorry, I got high in between twittering at you vincentch and when you came in to irc
- [02:55] <@ersi> too much pig fat spread? ;p
- [02:55] <@chronomex> yes ...
- [02:56] <+vincentch> jfranusic: unfortunately they're not paying me by the hour to work on posterous :) it would be a pretty deep re-write.
- [02:56] <jfranusic> vincentch: thats fine, I was just asking
- [02:56] <jfranusic> some code bases have a "disable cache for this server" flag, some dont
- [02:56] * jfranusic shrugs
- [02:57] <jfranusic> well, if we were all hitting that app server
- [02:57] <jfranusic> do you have tools to slow us down on your end?
- [02:57] <kennethr-> i bet they wish they did ;)
- [02:57] <jfranusic> I'm trying to think of the best way to have you notify us
- [02:58] <jfranusic> if you don't mind jumping in here and being all like "uh, guys, slow it down" then I think that will work
- [02:58] == kennethr- has changed nick to kennethre
- [02:59] <@S[h]O[r]T> are there any resources we can maybe provide to help?
- [02:59] <jfranusic> beer? snacks? high-fives?
- [02:59] <@chronomex> lol
- [02:59] <@S[h]O[r]T> remote servers :P
- [03:00] <robbiet48> a funeral for posterous
- [03:00] <+vincentch> for now, if you could just throttle by latency would be nice. i can work on the special routing later this week.
- [03:01] <+vincentch> alternatively, on the DL, could maybe keep the service running a bit past 4/30
- [03:01] <@chronomex> (shhhh)
- [03:01] <+vincentch> seriously shh
- [03:01] <jfranusic> :-X
- [03:01] <jfranusic> I have enough trouble remembering what day it is TODAY
- [03:02] <@chronomex> shit man I have trouble remembering whether I'm 24 or 25
- [03:02] <@ersi> I never remember my age
- [03:02] <kennethre> vincentch: lips=sealed
- [03:02] <+vincentch> now regretting using my real name ;P
- [03:02] <kennethre> same
- [03:03] <kennethre> damn you GLaDOS
- [03:03] <@chronomex> I use my real name everywhere :)
- [03:04] <jfranusic> you're all whiners
- [03:04] <jfranusic> I'm pretty sure that my name is also a GUID
- [03:04] <@chronomex> at least irc isn't run by google
- [03:04] <@chronomex> link your g+ account to nickserv!
- [03:04] <@chronomex> woop woop woop off-topic siren
- [03:04] <robbiet48> lol
- [03:04] <robbiet48> "i regret using my name"
- [03:04] <@chronomex> I regret having a name?
- [03:04] <robbiet48> says 3 people at three of the most high profile startups in SV/SF
- [03:04] <@chronomex> lol
- [03:04] <kennethre> hahaha
- [03:05] <@chronomex> twitter is hardly a startup
- [03:05] <robbiet48> is heroku kennethre?
- [03:05] <@chronomex> ;)
- [03:05] <robbiet48> jfranusic is twilio?
- [03:05] * kennethre runs away
- [03:05] <@S[h]O[r]T> so ideas on how to do the latency stuff guys?
- [03:05] <jfranusic> I'd say that Twilio counts as a startup
- [03:05] <@chronomex> twilio maaaybe counts as a startup still
- [03:05] <robbiet48> maybe
- [03:05] <robbiet48> its iffy
- [03:05] <jfranusic> who's running the tracker?
- [03:05] <robbiet48> you have your own jackets
- [03:05] <robbiet48> i think that means you arent
- [03:05] <robbiet48> oh and shoes too!
- [03:05] <@chronomex> jfranusic: "running" is a nebulous term around here
- [03:05] <@chronomex> jfranusic: I pay for it, but don't know how to do anything on it
- [03:05] <@S[h]O[r]T> maybe modify the --timeout?
- [03:06] <jfranusic> is the sourcecode for it somewhere?
- [03:06] <@ersi> That'll just wait longer
- [03:06] == mode/#preposterus [+o alard] by S[h]O[r]T
- [03:06] <@S[h]O[r]T> i have admin on it. anything outside of that would be alard
- [03:06] <@S[h]O[r]T> yes
- [03:06] <@ersi> jfranusic: yes http://github.com/archiveteam/
- [03:06] <@chronomex> jfranusic: universal-tracker on github.com/archiveteam
- [03:06] <@ersi> universal tracker
- [03:06] <@chronomex> yes
- [03:06] <@S[h]O[r]T> ‘--connect-timeout=seconds’
- [03:06] <@S[h]O[r]T> Set the connect timeout to seconds seconds. TCP connections that take longer to establish will be aborted. By default, there is no connect timeout, other than that implemented by system libraries.
- [03:06] <jfranusic> ruby
- [03:07] <@S[h]O[r]T> theres also a limit-rate might be able to do something with that
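The two wget knobs mentioned here could be wired up from the grab scripts along these lines. This is a hypothetical helper, not code from posterous-grab; the function name and the default values are made up, but `--connect-timeout` and `--limit-rate` are real wget options.

```ruby
# Build a wget command line using the two flags discussed above:
# --connect-timeout aborts TCP connects that take too long to establish,
# --limit-rate caps download bandwidth. Defaults here are illustrative.
def wget_args(url, connect_timeout: 10, limit_rate: "100k")
  [
    "wget",
    "--connect-timeout=#{connect_timeout}",
    "--limit-rate=#{limit_rate}",
    url,
  ]
end

# system(*wget_args("http://example.posterous.com/")) would run the fetch.
```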
- [03:07] <jfranusic> so, my thought is that we add some code to change the rate-limit when the response time changes
- [03:08] <jfranusic> vincentch: what's the best "endpoint" to check for responsiveness
- [03:08] <jfranusic> "/" ?
- [03:08] <+vincentch> jfranusic: yes, that's good. or just whatever site you're crawling at the moment (e.g., if the last request didn't complete in a timely manner, back off)
- [03:11] <jfranusic> i'm looking at the code now
- [03:11] <jfranusic> seeing if I can find an easy/simple way to do this
- [03:11] <@chronomex> if I understand things correctly it sounds not trivial
- [03:12] <+vincentch> jfranusic: thanks. understand that it's prb. hard to coordinate across a bunch of clients.
- [03:12] <@chronomex> but I haven't looked in detail at recent seesaw scripts
- [03:12] <jfranusic> well, the tracker is what hands out the things that the clients go and fetch
- [03:13] <jfranusic> seems like the most obvious place to do the rate limiting?
- [03:13] <@S[h]O[r]T> the tracker tracks requests per minute. and it can rate limit the clients
- [03:14] <@chronomex> actually client rate limiting sounds like a thing that lua would be good for
- [03:14] <@S[h]O[r]T> so as of a minute ago there were 1142 requests to the tracker, but its limiting it to 25/m
- [03:14] * ersi nods
- [03:15] <+vincentch> http://memeurl.herokuapp.com/ggg/asked%20for%20throttling/gets%20right%20on%20it.jpg
- [03:16] <@S[h]O[r]T> i think its going to be a lot less complicated to do the limits from the tracker
- [03:16] <kennethre> KISS
- [03:18] <jfranusic> https://github.com/ArchiveTeam/universal-tracker/blob/master/models/tracker/transactions.rb#L120
- [03:18] <jfranusic> thats the code in the tracker that checks the rate limiting
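The tracker's behavior described above (counting requests per minute and capping them, e.g. at 25/m) can be sketched like this. This is a simplified stand-in, not the actual universal-tracker code from transactions.rb; the class and method names are invented, and exposing the limit as a writable attribute is just one way an external script could adjust it on the fly.

```ruby
# Simplified per-minute rate limiter in the spirit of the tracker's check:
# requests are counted into one-minute buckets, and a request is refused
# once the current bucket reaches the configured limit.
class MinuteRateLimiter
  def initialize(requests_per_minute)
    @limit = requests_per_minute
    @counts = Hash.new(0)   # minute bucket => request count
  end

  # Writable so an outside process (e.g. a latency monitor) can retune it.
  attr_accessor :limit

  def allow?(now = Time.now)
    bucket = now.to_i / 60
    @counts.delete_if { |minute, _| minute < bucket }  # drop stale buckets
    return false if @counts[bucket] >= @limit
    @counts[bucket] += 1
    true
  end
end
```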
- [03:20] <jfranusic> an ugly hack would be to have a small service that would "ping" posterous and change the rate limiting when the response time changed
- [03:21] <@S[h]O[r]T> is icmp going to actively reflect http services?
- [03:21] <jfranusic> no
- [03:21] <jfranusic> that's why i put ping in quotes
- [03:21] <jfranusic> "HTTP ping"
- [03:21] <kennethre> cron job, do a HEAD to posterous once every 5 minutes
- [03:21] <jfranusic> I don't know what the right term is for "make an HTTP request and measure the time it takes for the request to complete"
- [03:22] <kennethre> throttle accordingly
- [03:22] <jfranusic> OOOO
- [03:22] <jfranusic> well, this code appears to be running in Heroku or dotcloud
- [03:22] <jfranusic> so, we could do a one-off hack for this, or make a patch to make this fix more general
- [03:22] <kennethre> both of which have cron-like features
- [03:22] <+vincentch> 5 min might be a bit too short as spikes in load could build up pretty rapidly
- [03:23] <+vincentch> *too long, rather
- [03:23] <jfranusic> 1 min?
- [03:23] <@GLaDOS> Every second?
- [03:23] <kennethre> https://devcenter.heroku.com/articles/scheduler http://docs.dotcloud.com/guides/periodic-tasks/
- [03:23] <kennethre> second would be absurd
- [03:23] <@GLaDOS> How about 10 a second?
- [03:23] <jfranusic> IIRC 1 min is the smallest granularity that cron can do?
- [03:23] <@GLaDOS> it is
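The cron-driven "HTTP ping" being proposed (a periodic HEAD request whose latency picks the tracker's rate) might look like the sketch below. The function names are hypothetical and the rate tiers are illustrative; the log only fixes the 400 ms target, not the actual requests-per-minute values.

```ruby
require "net/http"
require "uri"

# Map a measured latency to a requests-per-minute setting for the tracker.
# Only the 400 ms threshold comes from the discussion; the tiers are made up.
def rate_for_latency(seconds)
  case
  when seconds < 0.4 then 100   # healthy: run at full speed
  when seconds < 1.0 then 50    # getting slow: halve the rate
  else 10                       # struggling: crawl gently
  end
end

# Time a HEAD request, as suggested above (cheaper than a full GET).
def measure_head_latency(url)
  u = URI(url)
  started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  Net::HTTP.start(u.host, u.port, use_ssl: u.scheme == "https") do |http|
    http.head(u.path.empty? ? "/" : u.path)
  end
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
end
```

Run once a minute from cron, the script would write `rate_for_latency(measure_head_latency(...))` to wherever the tracker reads its limit.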
- [03:24] <jfranusic> so, it looks like alard is the main person who contributes to this code
- [03:24] <@GLaDOS> Correct!
- [03:25] <jfranusic> 157 commits from alard, 13 from david yip
- [03:25] <@S[h]O[r]T> if you attempt to write a patch for the tracker or a cronjob that needs to run to affect the tracker, alard would likely appreciate that and can look at the patch
- [03:25] <@GLaDOS> vincentch: Also, I would like to apologise for being the one that started the enticement of others semi-overloading posterous.
- [03:25] <@S[h]O[r]T> rather than relying on alard to write it all :)
- [03:26] <jfranusic> GLaDOS: I might share some of that blame too? :(
- [03:26] <+vincentch> GLaDOS: it's OK. I'd rather work with y'all than face the wrath of the internetz. ;P
- [03:26] <@GLaDOS> Considering I jumped from a warrior running posterous to 20 instances running 200 threads each when people were going at 1 a second, I'm sure it's my fault jfranusic
- [03:26] <@chronomex> jfranusic: the tracker is running on a dedicated linode
- [03:27] <@chronomex> so we have cron, etc
- [03:27] <jfranusic> chronomex: okay, good
- [03:27] <jfranusic> GLaDOS: noted
- [03:27] <@GLaDOS> I should check my amazon bill..
- [03:27] <@chronomex> probly
- [03:28] <jfranusic> vincentch: well, thanks for jumping in here, I was in the middle of spinning up some more EC2 instances
- [03:28] <@chronomex> or you should wait until it's done ;)
- [03:28] <@chronomex> hahah you dog
- [03:28] <+vincentch> jfranusic: haha, you would've only archived more 503's
- [03:28] <+vincentch> BTW, what do y'all do for non 2XX responses? retry at some other time .. ?
- [03:29] <@chronomex> we don't have anything organized
- [03:29] <@GLaDOS> I know what a certain k started, he overlooked how many he should run, and returned 100 503'd users for a minute or two..
- [03:29] <@chronomex> I think the plan is "inspect for excessive failures, rerun those jobs later"
- [03:29] <@GLaDOS> Just like how I did!
- [03:30] <jfranusic> I assumed that the tracker would take care of jobs that didn't complete correctly?
- [03:30] <@S[h]O[r]T> it tries twice to download. if wget returns exit code 0, 6 or 8 it counts it as complete
- [03:30] <@GLaDOS> The tracker doesn't know if a job completed correctly or not..
- [03:30] <jfranusic> okay party people, what "algorithm" should we use to change the throttle?
- [03:30] <@S[h]O[r]T> at some point if not already someone is hopefully doing a bunch of analysis
- [03:31] <@chronomex> I think that has not really happened yet
- [03:31] <@S[h]O[r]T> exit code 4 was network error aka ban so we dont UL the user if we get that
- [03:31] <jfranusic> UL?
- [03:31] <@S[h]O[r]T> id never underestimate alard or sketchcow
- [03:31] <@S[h]O[r]T> upload
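The exit-code handling S[h]O[r]T describes can be sketched as follows. The function name and the `:retry` fallback are hypothetical; the code-to-meaning mapping (0 success, 6 auth failure, 8 server error responses count as complete; 4 network failure means a ban, so don't upload) is taken from the messages above and matches wget's documented exit codes.

```ruby
# wget exit codes 0, 6, and 8 count as a completed item; 4 (network
# failure, i.e. a ban) means the item must not be uploaded.
COMPLETE_CODES = [0, 6, 8].freeze
BAN_CODE = 4

def item_status(exit_code)
  return :complete if COMPLETE_CODES.include?(exit_code)
  return :banned if exit_code == BAN_CODE
  :retry   # anything else: try the item again later
end
```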
- [03:32] <@S[h]O[r]T> hey, im going to bed..i bumped the tracker limit to 100. for now. i guess we will leave it at that?
- [03:32] <@chronomex> alard is a fucking beat
- [03:33] <@chronomex> beast
- [03:33] <@chronomex> ummm dunno maybe?
- [03:33] <@chronomex> what was it at before tonight's episode?
- [03:33] <@S[h]O[r]T> unlimited
- [03:33] <@chronomex> k
- [03:34] <@S[h]O[r]T> i could turn it back to unlimited assuming me, glados, k, and soult and whoever silicon valley are dont start up more threads till rate limiting is in effect
- [03:34] <@GLaDOS> vincentch: what's the database that hosts post.ly info like?
- [03:34] <@GLaDOS> S[h]O[r]T: I'm going to run only one micro instance with 10 tinyback and 10 seesaw threads from now on
- [03:34] <@GLaDOS> My bill is at 60AUD
- [03:34] <+vincentch> pretty much what you'd expect. a mapping from the URL slug => postid
- [03:34] <@GLaDOS> s/AUD/USD/
- [03:35] <@S[h]O[r]T> it looks req per min are around 500-600 now vs 1400 a bit ago
- [03:35] <@GLaDOS> And how easy would it be to whitelist an IP from the ban cronjob?
- [03:36] <@GLaDOS> Yeah, soultcer might be interested in talking to you about that..
- [03:36] <@S[h]O[r]T> ^^ i think once vincentch sorts the app server he would whitelist a user agent or something of that sort
- [03:36] <@GLaDOS> How about "ARCHIVETEAM FUCK YOU"?
- [03:37] <+vincentch> GLaDOS: I think that's a user-agent the @posterous crew would get behind
- [03:37] <@chronomex> hahaa
- [03:37] <@GLaDOS> Heh
- [03:37] <jfranusic> "Archive-team"
- [03:37] <@chronomex> we crawled geocities with "googlebot/suck a mountain of cocks" or something
- [03:38] <jfranusic> "ArchiveTeam/Reporting for duty, sir"
- [03:38] <+vincentch> Here's another question from an archive-team n00b. where do you guys store the data? on S3?
- [03:39] <@GLaDOS> vincentch: We store it on fos.textfiles, which is hosted by archive.org I believe.
- [03:39] <@chronomex> yes, fortress of solitude is our temporary staging point
- [03:39] <@GLaDOS> Also, do you keep a tally of how many times an API key is used?
- [03:39] <@chronomex> it's a vm at the internet archive with a large amount of disk behind it
- [03:39] <+vincentch> Ahh. I think I once met a guy who worked for archive.org on the caltrain
- [03:39] <@chronomex> neat
- [03:39] <@GLaDOS> nice
- [03:40] <@chronomex> it's a fairly small organization
- [03:40] <jfranusic> and a bunch of cool people
- [03:40] <+vincentch> with a fairly sizable internet following, apparently ;P
- [03:40] <@chronomex> :P
- [03:40] <@chronomex> we do things that they don't have resources to do ... or that they don't want to be associated with
- [03:41] <@chronomex> s/don't want to/oughtn't/
- [03:41] <@S[h]O[r]T> in some cases underscore will host some data and i have for a project or two but mostly fos
- [03:42] * S[h]O[r]T laying in bed
- [03:42] <robbiet48> also, Internet Archive does a ton more than just backing up the internet
- [03:42] <robbiet48> they also back up all news programs in the US
- [03:42] <@chronomex> plus they scan books
- [03:42] <robbiet48> they also digitize old books that can't be torn apart to scan
- [03:42] <robbiet48> damnit chronomex!
- [03:42] <@chronomex> they have a really spiffy web interface for reading scanned books too
- [03:42] <@chronomex> like
- [03:42] <robbiet48> ^
- [03:42] <@chronomex> it's about the best way you can read a book in a browser
- [03:42] <@chronomex> imo
- [03:42] <@chronomex> all web2.0ey but actually usable
- [03:43] <robbiet48> man i forget that internet archive does so much
- [03:43] <robbiet48> oh of course they also have the wayback machine
- [03:43] <@chronomex> ofc
- [03:44] <@GLaDOS> What we're going to be pumping your data into.
- [03:44] <@chronomex> chunk chunk chunk
- [03:44] * chronomex makes the sound of an overloaded grain elevator
- [03:44] <@S[h]O[r]T> that secret room at ATT wasnt for the NSA it was for IA
- [03:44] <@GLaDOS> They also host, what was it, like 3gb of spam emails in the last 10 years or so?
- [03:44] <@chronomex> ha
- [03:44] <robbiet48> heh
- [03:44] <@GLaDOS> S[h]O[r]T: shhhh
- [03:45] <robbiet48> vincentch: i highly recommend you go over to internet archive some time, i'm sure SketchCow can set up a tour for you/friends
- [03:45] <robbiet48> ... in exchange for your cooperation of course :D
- [03:45] <+vincentch> sounds lovely. cooperation is here regardless.
- [03:45] <@chronomex> awww :3
- [03:46] <@GLaDOS> It's good that we don't have to treat you like yahoo.
- [03:46] <@ersi> vincentch: We're also crawling post.ly btw
- [03:46] <@chronomex> for everyone's sake GLaDOS
- [03:46] <@GLaDOS> For everyone's sake.
- [03:47] * ersi shakes fist at Yahoo!'s general direction
- [03:47] <@GLaDOS> http://posterous.tinyarchive.org/v1/ for post.ly tracking btw
- [03:48] <@Smiley> who is hotdog? :D
- [03:52] <jfranusic> vincentch: how'd you know it was archiveteam hitting posterous?
- [03:52] <@GLaDOS> Who else would be this dedicated?
- [03:52] <+vincentch> pretty obvious, i could see how fast you were hitting us from your dashboard
- [03:53] <@ersi> well, how'd you find that, then? :)
- [03:54] <jfranusic> right, ideally the poor guy at Yahoo who's keeping the servers alive should be able to figure out to come in here too
- [03:55] <@chronomex> he's probably quitting in april
- [03:55] <@S[h]O[r]T> he csnt he go
- [03:55] <@chronomex> ?
- [03:55] <@S[h]O[r]T> err ^. what u said
- [03:56] <jfranusic> okay, I have a script ready for alard
- [03:56] <@S[h]O[r]T> got let go for not using their vpn
- [03:56] <@chronomex> oh heh
- [03:56] <jfranusic> can someone write a sample cronjob for me?
- [03:56] <jfranusic> It's been years since I've written one, I'd like to keep it that way
- [03:56] <@chronomex> heh
- [03:56] <@chronomex> cronjob is a 1 line shellscript
- [03:57] <@GLaDOS> vim/emacs/nano /etc/crontab
- [03:57] <@chronomex> EDITOR=vim crontab -e
- [03:57] <jfranusic> I want some junk to put in the comments of this script
- [03:57] <jfranusic> "run _x_ to have this run once a minute"
- [03:57] <@chronomex> I think "every minute" is * * * * *
- [03:58] <jfranusic> IIRC, _x_ is edit the crontab by hand
- [03:58] <@chronomex> EDITOR=vim crontab -e
- [03:58] <@chronomex> insert line:
- [03:58] <@chronomex> * * * * * your-script.sh
- [03:58] <@chronomex> save, exit
- [03:58] <jfranusic> * * * * * /path/to/php /var/www/html/a.php
- [03:58] <jfranusic> (cut/paste from stack overflow)
- [03:59] <@chronomex> yup
- [04:04] <jfranusic> okay
- [04:04] <jfranusic> https://github.com/ArchiveTeam/universal-tracker/pull/2
- [04:04] <jfranusic> chime in there if I missed something
- [04:05] <@SketchCow> I JUST WANT HUGS
- [04:05] * jfranusic hugs SketchCow
- [04:05] <jfranusic> In particular, check the "logic" for changing the requests_per_minute, here: https://github.com/ArchiveTeam/universal-tracker/pull/2/files#L0R32
- [04:07] <jfranusic> okay, I'm going to sleep now
- [04:07] <jfranusic> nice chatting with you vincentch
- [04:07] <+vincentch> same to you jfranusic
- [04:07] <jfranusic> hope we can get this all figured out
- [04:07] <+vincentch> yeah. i'll hop back in here later in the week and we can discuss more. i'll try and set up a route for y'all
- [04:08] <@chronomex> \o/ thanks for working with us
- [04:08] <@chronomex> you know this is about the second or third time that's happened, since 2009?
- [04:09] <@ersi> when was the first time? ;o I can't remember a single time
- [04:09] <@chronomex> ummm
- [04:09] <@chronomex> I'm leaving room for things I don't remember
- [04:11] <@SketchCow> What a cute discussion.
- [04:11] <@chronomex> oh now SketchCow comes in with the tank treads that have spikes on them
- [04:11] * chronomex popcorn.gif
- [04:11] <@SketchCow> No, no.
- [04:12] <@SketchCow> I'm sure you fine fellows handled this discussion just fine.
- [04:12] <@SketchCow> Who am I to judge.
- [04:14] <@SketchCow> All that matters to me is we download posterous.
- [04:27] <@SketchCow> Summarize.
- [04:27] <@SketchCow> Have we suddenly stopped or pulled back?
- [04:29] <@chronomex> we have slowed down until we can implement a way to stop knocking posterous offline
- [04:30] <@SketchCow> Ah.
- [04:30] <@SketchCow> I'll take pulled back for a little - not stopping.
- [04:30] <@chronomex> we have a posterous engineer on the line, vincentch
- [04:30] <@chronomex> correct
- [04:31] <@chronomex> it appears that the tracker has stopped, I don't know what's up there
- [04:31] <+vincentch> not for long, this posterous engineer has to catch some shuteye. ;)
- [04:32] <@chronomex> well ... ok
- [04:34] <@SketchCow> You won't stop us, but you know that.
- [04:34] <@SketchCow> I'm fine with us working together to make it orderly.
- [04:34] <+vincentch> wouldn't dream of it.
- [04:35] <@chronomex> that appears to be the direction we're headed in
- [04:35] <@SketchCow> But Twitter invented April 30. We didn't.
- [04:35] <@SketchCow> And unfortunately, at the rate we've hit, there's very little time to lose days.
- [04:35] <@SketchCow> So whatever makes it orderly, is fine.
- [04:35] <+vincentch> appreciate it y'all. i'll be back later and we'll talk about the most orderly way to get things done. thanks for bearing with me -- promise that i'll work with you guys
- [04:36] <@SketchCow> Sounds good.
- [04:38] <@SketchCow> Well that went better than expected.
- [04:39] <@chronomex> yup