A Young Hacker's Interactive Primer

For the last five or so years I've been trying to imagine what it would have been like growing up with easy computing, and I don't think it would have been good for me. Back in 1988, when I first started seriously playing with computers, they were hard to use. You had to learn a lot of obscure text commands, and almost everything you tried to do required understanding something that could reasonably have been abstracted away from you.

Because of that level of difficulty I think I turned out to be a much stronger computer user, programmer, and general hacker[1]. I now delve into difficult computer problems confident in the knowledge that with enough effort I'll figure out what I need to understand to make things work. I'm now quite choosy about which "ease of use" features I'll let hide ugly detail from me, and which ones I'll skip to instead reveal the underlying system.

The problem is, I was forced to climb the steep learning curve of the command line interface and other "old school" hallmarks. I don't know if I would have done it if I could have dragged myfile.txt onto a picture of a printer instead of typing COPY MYFILE.TXT LPT1.

I then start to worry about the young teenagers I know who have grown up in a graphical user interface world. I don't see them embracing difficult computing and the accompanying power and flexibility. Who can blame them? It's a pain in the ass for the first few months or years, and teenagers aren't exactly known for determination, foresight, and patience.

In an attempt to create an inviting way to draw teens into advanced computing I've started a project called The Young Hacker's Interactive Primer (with apologies to Neal Stephenson). As I see it, the two things necessary to get a kid interested in exploring difficult computing are (1) a safe place to do it and (2) inducement to further exploration.

To provide a safe place in which to explore a UNIX system, I decided the easiest way to do it was to just give them one. There are plenty of alternatives, such as dual booting their existing systems, run-from-CD UNIXes, and remote shell accounts, but I figured actual physical systems were the best solution: there's no risk of them trashing their parents' computer or their own real one, they can have root when they're ready, and everyone likes getting a gift -- even when it's a Pentium 233.

The inducement comes in the form of little puzzles and tasks of increasing difficulty. Initially you just need to log in with a provided user name and password. Then you work on viewing the contents of a file. Then changing a directory. Each task builds on the previous ones while requiring that one new fundamental skill be learned, ideally from man pages and documentation. I envision it as something like one of the old Infocom text games, except taking place right in the UNIX shell.
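The trail of tasks could even be seeded mechanically. Here's a toy sketch (in Python, purely illustrative -- the real primer lives in the shell itself, and the clue wording and layout here are invented) of laying down a chain of directories, each holding a clue file that teaches one new skill:

```python
import os
import tempfile

def build_trail(root, clues):
    """Create a chain of nested directories, each holding a README
    clue file pointing at the next skill to practice. Returns the
    list of clue file paths in order."""
    paths = []
    here = root
    for i, clue in enumerate(clues):
        here = os.path.join(here, "room%d" % i)
        os.makedirs(here)
        path = os.path.join(here, "README")
        with open(path, "w") as f:
            f.write(clue + "\n")
        paths.append(path)
    return paths

clues = [
    "Use 'cat' to read files like this one.",
    "Use 'cd' to enter the next directory.",
    "Use 'man ls' to learn how to find hidden files.",
]
root = tempfile.mkdtemp()
for p in build_trail(root, clues):
    print(p)
```

Each clue requires exactly the skill needed to find the next one, which is the whole trick of the Infocom comparison.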

Who knows, maybe my attempt to make learning difficult computing inviting will still be too boring, frustrating, and un-glamorous to draw a kid through the ten different layers of abstraction that Apple and Microsoft want them to use unquestioningly. Worst case, I get a few old computers out of my closet.

The fledgling project can be found at https://ry4an.org/primer/ . There's still not much to see, but I expect to change that soon, as I've already told a kid I know that I hope to give him a computer shortly after Thanksgiving.

[1] http://catb.org/~esr/jargon/html/H/hacker.html

The Flaming Moe

For the last three years Cari, Bridget, Joe and I have co-hosted a Halloween party at Cari's and my place. Every year I man the bar because I enjoy doing so and like the chance to talk to everyone periodically over the course of the night. This year I decided to turn the role into my costume and dressed as Moe Szyslak, the bartender from The Simpsons.

In one particularly good third season episode, titled "Flaming Moe's" (http://www.snpp.com/episodes/8F08.html), Moe finds great success serving a drink whose recipe was stolen from Homer. The drink, which Moe calls the Flaming Moe, includes as its secret ingredient Krusty Brand non-narkotik cough syrup and erupts into a quickly retreating pillar of flame when lit. I figured if I was gonna do Moe I had to make that drink.

I started with the drink base, and after many failed attempts to make one that was (1) classy, (2) tasty, and (3) grape, I gave up on the first two and just went with grape Kool-Aid, grape Pucker schnapps, and triple sec. It was palatable. After pouring that base in from a pitcher I'd then add just a little "cough syrup" (grape sugar syrup) from a re-labeled NyQuil bottle.

For the column of fire effect I'd place the drink in front of a tea light sitting on the bar, then use a salt shaker to sprinkle a cascade of non-dairy creamer from above onto the candle. When I got the concentration just right (too much and you extinguish the candle, too little and nothing happens) the non-dairy creamer would flare up in a foot-tall fireball which looked for all the world like it was coming out of the glass. Unfortunately it only worked about one time in ten.

Below is a photo of the tail-end of one of the flare ups, and the illusion isn't bad. Thanks to Jan Creaser for the photo. Also attached is a small image of the label and an archive containing a full-size version printable at 300dpi for bottle modification.




Lying to Myself About Calendaring

Three days ago I posted a rather lengthy entry wherein I decried the current state of open, collaborative calendaring. In it I listed six requirements for calendaring software and settled on the option available to me that met the most of them. Now I'm changing my mind.

At the time I needed a new calendaring solution fast and had checked out all the candidates before. The software package Remind had a nice interface, an expressive configuration language, web visibility, local storage, and was open source. It met four of the six requirements and would have served admirably for a few years. So why did I find myself lying awake worrying about my choice? (Seriously, I'm that big of a loser.)

The cause of my mis-selection was part haste and part willful self-deception. I wanted to go with Remind because I knew working with it on a day to day basis would be a joy. Toward that end I manipulated the selection criteria to favor Remind. The six requirements I listed are valid, but using a bullet list rather than a sorted list presents them as all being of equal import, which just isn't the case.

In The Cognitive Style of PowerPoint Edward Tufte addresses bullet points saying, "For the naive, bullet lists may create the appearance of hard-headed organized thought. But in the reality of day-to-day practice, the [PowerPoint] cognitive style is faux-analytical." He then goes on to explain how PowerPoint's cognitive style contributed to the Columbia space shuttle disaster.

Sorting my list of six requirements by importance (most important first) yields:

  1. standard data format: for easing inevitable migrations; vCal/iCal lately
  2. stored locally: I prefer to host my own data
  3. free: as in speech and as in beer for all the obvious reasons
  4. text-mode access: I prefer text mode to GUI applications
  5. web access: at least viewable (preferably editable) on the web
  6. groupware capable: ability to view others' calendars and publish mine

Even that doesn't show the relative weight given to each requirement. A representation displaying importance would show that the first one, standard data format, is probably more important than all the rest put together.

As mentioned previously, the Remind file format is great to work with, but no other software uses it. What's more, as one becomes more proficient with Remind, one's event entries get more complicated, making eventual migration all the harder. I'm not willing to ghettoize my data, so Remind has to go.

When I limited myself to the current calendaring data standards (vCal, iCal, and xCal) the list of options shrank dramatically. There were no text-only options, and the web-based options were all painful. I looked into Ximian's Evolution (http://www.ximian.com/products/evolution/) but it was slow and insisted on storing data in ~/evolution. I ended up deciding to run Mozilla Calendar (http://www.mozilla.org/projects/calendar/). It's nice enough, actively developed, has good standards compliance, and runs inside the only non-text application I always have open: my web browser.

Anyway, getting my phpGroupWare data into Mozilla Calendar just meant writing another conversion script. This one, phpGroupWare to iCal (attached), might at least be useful to someone other than me.
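The conversion itself is mostly a formatting exercise. Here's a minimal sketch of the output side (in Python rather than the Perl of the attached script, and with a made-up input tuple format) emitting iCal VEVENTs per RFC 2445:

```python
def events_to_ical(events):
    """Render (summary, start, end) tuples as a minimal iCalendar
    (RFC 2445) document. Timestamps are pre-formatted UTC strings
    like '20031123T140000Z'."""
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0",
             "PRODID:-//example//phpgw2ical sketch//EN"]
    for summary, start, end in events:
        lines += ["BEGIN:VEVENT",
                  "DTSTART:" + start,
                  "DTEND:" + end,
                  "SUMMARY:" + summary,
                  "END:VEVENT"]
    lines.append("END:VCALENDAR")
    # iCal mandates CRLF line endings
    return "\r\n".join(lines) + "\r\n"

print(events_to_ical([("Dentist", "20031201T150000Z", "20031201T160000Z")]))
```

The hard part of a real converter is on the input side -- digging the events out of phpGroupWare's tables -- not in producing the iCal text.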

This whole process undoubtedly seems silly, but at its core is a firm rule for data longevity: store your data in open, standard, lowest-common-denominator formats. Ask any company with 10,000 word processor documents that can no longer be read using their new software. Closed data formats are manacles: a hassle on a scale of decades and a tragedy on a historical time scale.


Calendaring Migration

I'm happy with my email client, text editor, compilers, IRC client, news reader, web browser, and just about every other tool I use in the process of my daily computing. The only consistent source of displeasure has been my calendaring (when the hell did that become a word?) application. I have a hideously over-scheduled life and need some sort of scheduling solution, be it computerized or otherwise, to keep things straight, but I've never found anything that really suits my needs.

Over the last 10 years I've used and discarded a number of solutions, including a paper notebook, a Palm Pilot, gnomecal, and phpGroupWare. Each of them has had its advantages and drawbacks, but none of them has been perfect, where 'perfect' means meeting these requirements:

  • standard data format: for easing inevitable migrations; vCal/iCal lately
  • free: as in speech and as in beer for all the obvious reasons
  • text-mode access: I prefer text mode to GUI applications
  • web access: at least viewable (preferably editable) on the web
  • stored locally: I prefer to host my own data
  • groupware capable: ability to view others' calendars and publish mine

To my knowledge nothing out there yet handles all of these requirements. Likely nothing ever will, but so long as they use a standard interface to a data back end I'll be able to mix-and-match sufficiently to reach calendaring nirvana.

There are, however, many applications that do an excellent job with some of the requirements. Apple's iCal (whose horrible name creates confusion by overlapping with that of an established data standard) uses the standard iCal data format and WebDAV for good groupware-ness, but they've locked down the groupware functionality to their .mac boondoggle and aren't sufficiently free. phpGroupWare doesn't use standard data formats and is pretty crufty in general. Sked (http://sked.sourceforge.net/) has a great interface but uses non-standard data storage and doesn't seem to be actively developed any longer. Reefknot (http://reefknot.sourceforge.net/) was a promising iCal groupware back end until it was apparently abandoned.

On the horizon is Chandler from the OSAF (http://www.osafoundation.org/) which is promising a good, decentralized-groupware, standard-data-format back end that would allow for text, GUI, and web clients. I'm trying not to get my hopes up because if 10 years of looking for a decent calendaring solution has taught me anything it's that they'll shoot themselves in the foot somehow.

For the time being I'm moving to Remind (http://www.roaringpenguin.com/remind/). It's free, text mode, offers web viewing, and stores my data locally. It doesn't offer any sort of groupware capabilities, and perhaps worse yet has a completely non-standard data format. Moving my existing data (exported from phpGroupWare) into Remind's human-editable format required just a few simple conversion scripts (attached), but moving back out of Remind's powerful storage format to something more standard (again likely vCal) will be a real nightmare.
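The inbound conversion is easy precisely because Remind's basic entry syntax is so simple. A sketch in Python (the attached scripts are Perl, and the input tuple format here is invented) of emitting basic timed Remind entries:

```python
def to_remind(events):
    """Render (summary, (day, month_name, year), 'HH:MM') tuples as
    basic Remind entries, e.g. 'REM 1 Dec 2003 AT 15:00 MSG Dentist'.
    Remind accepts much richer entries than this; only the simple
    one-shot timed form is sketched here."""
    out = []
    for summary, (day, month, year), time in events:
        out.append("REM %d %s %d AT %s MSG %s"
                   % (day, month, year, time, summary))
    return "\n".join(out)

print(to_remind([("Dentist", (1, "Dec", 2003), "15:00")]))
```

Going the other direction is the nightmare: once entries start using Remind's expressions and recurrence tricks, there's no mechanical translation back to a dumb list of dated events.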

If anything I'd say my decision to switch to the hard-to-migrate-from remind application shows my pessimism that anything sufficiently better to be worth switching to will be available within the next two or three years.


IRC Nickname Tracking Script

Being a telecommuter, the closest thing I have to an office is an IRC channel. IRC (Internet Relay Chat) is like a chat room minus the emoticons and pedophiles. While normally the office IRC channel is the very embodiment of maturity, there are two silly things about it that have always annoyed me. The first is that everyone gets to pick their own name, and the second is that they can change their names at will.

When everyone can pick their own name you end up with a lot of all-lowercase, powerful-sounding names for people you think of as Jim and Bob. Since everyone views the channel's discourse on their own terminal, there's no reason we can't each see everyone else referenced by whatever name we're most comfortable calling them.

Worse yet is the ability for everyone to change their names. Occasionally, and I've no idea what triggers it, everyone will start switching names willy-nilly to new and sometimes each other's names. This is like going to a costume party where everyone keeps switching masks every few minutes -- eventually someone's gonna kiss someone else's girlfriend.

Because I'm cantankerous and have too much time on my hands, I wrote a little script for the excellent Irssi IRC client I use ( http://irssi.org/ ) that mitigates the hassle from both of the bits of silliness mentioned above. Using my script I can issue commands of the form "/call theirnick whatIcallThem", and from then on all messages from the person who was using the nickname "theirnick" will show up as from "whatIcallThem", regardless of how many times they change their nickname.

Is it perfect? No. The people in the channel still refer to one another by their nicknames, so I still have to decode nick -> human mappings in my head, but at least not as frequently. At any rate, the script is attached. It is known to work with Irssi v0.8.6.
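The script's bookkeeping amounts to two operations: a nick-to-alias table and a handler for nick-change events that moves the alias to the new nick. The real thing is an Irssi Perl script hooked into Irssi's signals; here's a language-neutral sketch of the same logic in Python (class and method names are mine, not Irssi's):

```python
class NickTracker:
    """Map IRC nicknames to the names I actually call people,
    following them across /nick changes."""

    def __init__(self):
        self.alias = {}           # current nick -> what I call them

    def call(self, nick, name):
        # The "/call theirnick whatIcallThem" command.
        self.alias[nick] = name

    def nick_changed(self, old, new):
        # When someone renames themself, their alias follows them.
        if old in self.alias:
            self.alias[new] = self.alias.pop(old)

    def display(self, nick):
        # Name to show for an incoming message; fall back to the
        # raw nick for people I haven't labeled.
        return self.alias.get(nick, nick)

t = NickTracker()
t.call("xXdarkl0rdXx", "Jim")
t.nick_changed("xXdarkl0rdXx", "sephiroth")
print(t.display("sephiroth"))   # → Jim
```

The costume-party problem from above is solved by `nick_changed`: no matter how many masks get swapped, the label stays stuck to the person.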



This script has been updated in a recent entry.

PGP Key Signing - October 23, 2003

On Thursday, October 23rd, 2003, I'm hosting another PGP key signing event. For those not familiar with the concept, here's a four-paragraph primer on public key cryptography:

Each person in the system has two matched "keys": a public key and a private key. A message encrypted with a public key can only be decrypted with the complementary private key. Thus public keys are distributed far and wide while private keys are carefully guarded. When someone wants to send me a secret message they need only grab my public key from one of many freely accessible public repositories, use that key to encrypt their message, and then send the newly encrypted message to me.

However, when a public key found in the wild purports to be the public key of Ry4an Brase, there's no reason to believe it necessarily is. It could be the public key of the evil John Ashcroft on which he put my name. That's where key signing comes in. A key signature is an attestation that the signer knows for certain a public key belongs to whom it says it belongs. If I sign a public key with Joe Schmoe's name on it, I'm saying that I, Ry4an Brase, know personally that Joe Schmoe issued that key.

Reaching that level of certainty usually requires a face to face meeting. If Joe Schmoe sends me his key by email I've got no way of knowing for sure that the key wasn't swapped for another en route by a malicious entity. Key signing events exist so that strangers can get together and certify in person (and in the presence of photo IDs, key fingerprints, and other identity-establishing aids) that the keys of others belong to those specific others.

It's, of course, impossible to meet every person whose key you hope to use, but with every event that grows the "web of trust" it becomes more likely that someone you trust has certified a key you'd like to trust. I might not have heard directly from Joe Schmoe that key XYZ is his key, but if someone I trust has verified that key in person then I'm better able to trust key XYZ has accurate ownership information than if I found it floating in the wild with just Joe Schmoe's name on it.
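At its simplest, the web of trust question reduces to reachability in a directed graph of signatures. A toy sketch in Python (this ignores everything real PGP trust models add -- marginal vs. full trust, path counting, expiry -- and has nothing to do with any actual PGP implementation):

```python
from collections import deque

def trust_path(signatures, me, target):
    """signatures maps each signer to the set of keys they've signed.
    Returns True if a chain of signatures links me to target, i.e.
    someone I (transitively) trust has certified the target key."""
    seen, queue = {me}, deque([me])
    while queue:
        key = queue.popleft()
        if key == target:
            return True
        for nxt in signatures.get(key, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

sigs = {"ry4an": {"alice"}, "alice": {"joe"}}
print(trust_path(sigs, "ry4an", "joe"))   # → True
```

Every key signing event adds edges to this graph, which is why each event makes it more likely some path exists between you and a key you'd like to use.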

I last hosted an event like this in March of 2003. I've attached an image of the resulting trust digraph with arrows indicating a signature. Hopefully this event will be even larger than the 25 keys we got last time. Details for those interested in attending can be found at https://ry4an.org/keysigning/


Email to SMS Conversion

There's a program on freshmeat called email2sms (http://freshmeat.net/projects/email2sms/) that runs emails through a series of filters until they're short enough to be sent to a cell phone as an SMS message -- which typically has a maximum length of 150 characters. The script is mostly just a wrapper around the nifty Lingua::EN::Squeeze Perl module.

Squeeze takes English text and shortens it aggressively using all manner of abbreviations. It leaves the text remarkably readable for being about half its original length.

I ran the email2sms script for just a few weeks before running into a problem where an address sent to me by a friend was mangled past the point of usefulness. I figured that the best fix for that problem was to enlist the sender to evaluate the suitability of the compressed text.

To achieve that I added a feature to email2sms wherein a copy of the compressed message is sent back to the original sender along with a request: if important details were lost during compression, they should shorten the message themselves and re-send it, or send it to an alternate email address I provide on which no compression is done. The reply system has worked out quite well, and in the three years I've had it in place there have been a few circumstances where a human-initiated re-send has saved an otherwise mangled message.
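The pipeline is simple: squeeze the text, then truncate as a last resort. Lingua::EN::Squeeze is Perl and far cleverer than this, but as a stand-in here's a crude Python sketch of the same idea, with an invented abbreviation table:

```python
ABBREV = {"your": "yr", "you": "u", "about": "abt",
          "please": "pls", "message": "msg", "tomorrow": "tmw"}

def squeeze(text):
    """Crude stand-in for Lingua::EN::Squeeze: use a known
    abbreviation when one applies, otherwise drop the interior
    vowels of long words."""
    out = []
    for word in text.split():
        w = word.lower()
        if w in ABBREV:
            out.append(ABBREV[w])
        elif len(word) > 6:
            out.append(word[0] + "".join(
                c for c in word[1:] if c.lower() not in "aeiou"))
        else:
            out.append(word)
    return " ".join(out)

def to_sms(text, limit=150):
    """Compress, then hard-truncate to the SMS length limit."""
    return squeeze(text)[:limit]

print(to_sms("Please call me about the meeting tomorrow"))
# → pls call me abt the mtng tmw
```

The mangled-address problem that prompted the bounce-back feature is obvious here: a vowel-dropping pass has no idea that "123 Oakdale" shouldn't be squeezed, which is exactly why the sender gets a copy to review.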

Attached is my version of the email2sms script, the configuration files I use with it, and a procmail recipe to invoke it. For fun here's the text of this post compressed by Lingua::EN::Squeeze.



Philips Pronto TSU-2000 Remote

I try to lead a very uncluttered life, whether that means hard drive layout, personal responsibilities, or physical clutter in my condo. Three years ago I got my first TV and DVD player. Each came with its own remote control. Not wanting to deal with two remotes on my coffee table (which at the time was a cardboard box), I went out and bought a nice $20 universal remote that was very programmable and easily handled the functions of both the TV and the DVD player.

Since then I've added a TiVo and a VCR to the mix, and the old remote just wasn't cutting it. Looking through Remote Central (http://remotecentral.com) it looked like my options were cheap remotes with fixed buttons whose labels would never match their assigned functions, or ungodly expensive remotes with touch screen buttons and programming software run on one's computer.

Finally last week after the release of the brand new TSU-3000 remote, the price of a refurbished TSU-2000 (four generations older) dropped into my price range. The TSU-2000 (http://www.remotecentral.com/tsu2000/) has a few hard buttons around the edges for the functions you want to be able to use without having to look at the remote (volume, pause, etc.) and a large touch screen area in the center for everything else.

Reading reviews of the TSU-2000 shows that owners are divided into two categories: those who are geeks and those who find the remote unacceptably difficult to program. Everything about the remote is user-definable, from the location and shape of the buttons to the screen transitions to the pitch of the beeps.

The software it comes with, ProntoEdit, is (I'm told) terrible, and it only runs on Windows. I found a Java implementation called Tonto (http://giantlaser.com/tonto/) which has worked wonderfully thus far. It probably took a good 20 hours to get my remote to the point where my configuration handles most of what I need it to do, and even that was with liberal use of other people's graphics. Is the flexibility worth the time investment? Probably not for most people, but still, there's something nice about being able to make the commercial-skip button on the TiVo as big as a quarter.



So... how much did you pay? -- Gabe Turner

$120 for the refurbished unit on eBay -- which seems about average for the TSU-2000s. List price is $350. The color ones (TSU-6000) are $700 list and seem to go for about $350 refurbished on eBay. The new TSU-3000s look nice, but I don't know if there's a refurb supply yet. -- Ry4an

Canoeing with a GPS Unit

This weekend I had a great time canoeing with six friends. We camped, swam, paddled, drank and just generally goofed around for a weekend. Two of us had brought along Garmin eTrex GPS units which I'd not previously had when canoeing. They really added a lot.

I built an 18-point route approximating our course beforehand and loaded it into the GPSes. With that info and the GPS's natural data collection we were able to always know our current speed, average speed (3.2 mph), max speed (mine = 10.5 mph), distance paddled (total = 29.1 miles), and elapsed time (10 hours 31 minutes of paddling).
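Totals like distance paddled fall straight out of the track points: sum the great-circle distance between consecutive fixes. A sketch (Python, with an invented `(lat, lon)` point format -- the GPS computes this internally) using the haversine formula:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MI = 3958.8

def haversine(p, q):
    """Great-circle distance in miles between two (lat, lon)
    points given in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (p[0], p[1], q[0], q[1]))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MI * asin(sqrt(a))

def track_length(points):
    """Total length of a track: sum of hops between consecutive fixes."""
    return sum(haversine(points[i], points[i + 1])
               for i in range(len(points) - 1))

# Two fixes roughly a mile apart along a meridian:
print(round(track_length([(45.0, -93.0), (45.0145, -93.0)]), 2))   # → 1.0
```

Summing straight-line hops slightly underestimates a winding river course, which is why a higher sampling rate gives a more honest distance-paddled number.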

When we got back I took the GPS units and dumped their track history data to my computer. Using the attached garbleplot.pl script, I made the attached image of our course. The x and y scales are internally consistent in the image, but can't be compared with each other, as the distance represented by a degree of longitude and that of a degree of latitude differ anywhere but the equator. The GPS data has a much higher level of precision than the pixel resolution in the image can show. At a higher zoom level the red and green lines would show Louis's canoe cutting a nice straight line down the river while mine zig-zagged its way along the same general course.
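The axis-scale mismatch has a standard cheap fix: scale longitude by the cosine of the latitude before plotting, so a unit of x covers the same ground distance as a unit of y. A sketch of that projection step (in Python; garbleplot.pl itself is Perl, and the sample track points are invented):

```python
from math import cos, radians

def project(points):
    """Equirectangular projection: scale longitude by cos(latitude)
    so x and y distances are comparable away from the equator.
    Points are (lat, lon) in decimal degrees; returns (x, y) pairs."""
    mean_lat = sum(lat for lat, _ in points) / len(points)
    k = cos(radians(mean_lat))
    return [(lon * k, lat) for lat, lon in points]

track = [(45.0, -93.0), (45.1, -93.05), (45.2, -93.02)]
for x, y in project(track):
    print(round(x, 3), y)
```

For a track as short as a weekend paddle the approximation is effectively exact; it only starts to distort over spans where cos(latitude) changes meaningfully.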



Email Response Times

I get and send a lot of email. Many of the emails I send are responses to emails I received. When I respond to email I almost always quote the email to which I'm responding, and when I do my email client (mutt) inserts a line like:

On Thu, Jan 02, 2003 at 11:40:25AM -0600, Justin Chapweske wrote:

Knowing the time of the original message and the time of my reply provides enough information to track my response times to email. I used the inbound message IDs to make sure only the first reply to an email was counted.

I whipped up a little Perl script to extract some stats and create a histogram. The script and histogram are attached. Here are some of the stats I found:

  • Of the 1888 emails I've sent during calendar 2003 thus far, 1128 were replies
  • My five most common response times in minutes were:
      • two minutes: 59 times
      • four minutes: 45 times
      • one minute: 44 times
      • three minutes: 41 times
      • seven minutes: 33 times
  • My mean response time was 20.3 hours.
  • My longest response time was 386 days, to some guy from whom I want to buy a domain.
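The extraction behind these stats is just a regex over mutt's attribution line plus a subtraction. A sketch (in Python rather than the attached Perl, with the reply timestamp supplied by hand here rather than read from the outbound mail's Date header):

```python
import re
from datetime import datetime, timedelta, timezone

# Matches mutt attribution lines like:
#   On Thu, Jan 02, 2003 at 11:40:25AM -0600, Justin Chapweske wrote:
ATTRIB = re.compile(
    r"On \w{3}, (\w{3} \d{2}, \d{4}) at (\d{2}:\d{2}:\d{2})([AP]M) ([+-]\d{4}),")

def parse_attribution(line):
    """Return the original message's timestamp as an aware datetime,
    or None if the line isn't a mutt attribution line."""
    m = ATTRIB.search(line)
    if not m:
        return None
    date, hms, ampm, off = m.groups()
    return datetime.strptime("%s %s%s %s" % (date, hms, ampm, off),
                             "%b %d, %Y %I:%M:%S%p %z")

orig = parse_attribution(
    "On Thu, Jan 02, 2003 at 11:40:25AM -0600, Justin Chapweske wrote:")
reply = datetime(2003, 1, 2, 11, 42, 25,
                 tzinfo=timezone(timedelta(hours=-6)))
print(int((reply - orig).total_seconds() // 60))   # → 2
```

Bucketing those minute deltas into a dictionary of counts is all the histogram needs; the message-ID de-duplication is a second dictionary keyed on the inbound ID.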