December 2008 Archives

My appearances on Planet Fedora and Planet NYLUG have been scarce as of late, simply due to lack of time.

However, I just wanted to drop a quick note that I'm still alive and kicking. For Fedora bug reporters who are not able to change the version of their bug from 8 to whatever version it's now applicable to, I've created a mechanism here to do so. Just click to file a new ticket, and fill in the bug number and the version you'd like to change it to.

Anyone with rights to change bugs can view the report of what needs to be changed here if you'd like something to do!

Corporate Card Fail

Well, I'd been having some payment issues on my corporate card, since I'm horrible about filing expense reports (even though our card provider makes it exceptionally easy to file them). I attempted to pay for a cab ride on my card (explicitly to make it easy to file an expense report for it) and it was declined. I thought that I had taken care of this with our finance department and the corporate card provider, JPMorgan Chase. Apparently not: my card was declined, and when I called the number on the back of the card and entered my card number, I was transferred to the (closed) collections department.

We'll see what they have to say when they call me back. I know for a fact that I have no balance - I ate those expenses out of my own pocket :(

CAS - Core Analysis System

I've recently been informed about CAS, the Core Analysis System. This is an effort by Red Hat to open source the automated analysis platform for kernel cores used by their internal support teams, and get it out into the hands of the community. I've successfully gotten it to analyze a core, but it's not quite production-ready yet :).

It's really simple to install from source (which is currently the preferred method, and involves making an RPM of it). So far, I've only done local node analysis (i.e. copying the core to the CAS server). However, it also supports gathering cores from remote machines, as well as farming work out via func to remote machines when the current machine is not capable of handling the analysis - for example, when the local machine is i686 but the core is from an x86_64 machine.

There is a mailing list you can join for feedback!

So I have a problem. One of my customers at $DAYJOB occasionally has some serious issues with web page load times from a user perspective (i.e. the time it takes for all elements of the page to be downloaded and rendered in a browser).

As a hosting provider, I'm only interested in the performance of my servers. Generally what I'll do is use wget to fetch the page and send it to /dev/null to see if I'm having a problem. This is great for proving that I do or do not have an issue, but what I'd like is some command-line way to see the aggregate load time for all elements on a page, plus a breakdown of the individual elements, in order to act more as a partner in identifying the issue rather than simply saying "not me, go talk to someone else!" :)

Surely wget can do this (I think), but a trip through the manpage doesn't immediately reveal anything. A recursive run (wget -r -l 0 or 1) doesn't seem to do it. Is there either something I'm missing, or some other utility that will let me do this?
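For what it's worth, here's a rough sketch of the sort of breakdown I'm after, done in Python instead of wget: fetch the page, scrape out the img/script/link URLs, and time each element's download individually. This is just an illustration of the idea, not a finished tool - the example URL is hypothetical, and a browser would also fetch CSS-referenced images, follow redirects, and download elements in parallel, none of which this does:

```python
import time
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class AssetParser(HTMLParser):
    """Collects URLs of page elements: images, scripts, and linked stylesheets."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and attrs.get("src"):
            self.assets.append(urljoin(self.base_url, attrs["src"]))
        elif tag == "link" and attrs.get("href"):
            self.assets.append(urljoin(self.base_url, attrs["href"]))

def extract_assets(html, base_url):
    """Return absolute URLs of the page elements referenced by the HTML."""
    parser = AssetParser(base_url)
    parser.feed(html)
    return parser.assets

def timed_fetch(url):
    """Download one URL; return (size_in_bytes, seconds_elapsed)."""
    start = time.monotonic()
    data = urllib.request.urlopen(url).read()
    return len(data), time.monotonic() - start

if __name__ == "__main__":
    page = "http://example.com/"  # hypothetical page to measure
    html = urllib.request.urlopen(page).read().decode("utf-8", "replace")
    total = 0.0
    for asset in extract_assets(html, page):
        size, took = timed_fetch(asset)
        total += took
        print(f"{took:7.3f}s  {size:8d}B  {asset}")
    print(f"{total:7.3f}s  aggregate element download time")
```

The per-element loop is where this goes beyond a plain wget run: each download is timed separately, so a single slow element stands out instead of being buried in one aggregate number.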