How Photo Sync Would Be Done Right


I’ve been wishing and waiting and watching and waiting, with an idea in my head of what I want from computers and the internet regarding photos, photo management, and connecting all the computers I use. Some of this is very specific to photos, some of it is general multi-computer file management, and after years of watching and waiting, little of it is done the way I wish it were.

I take photos with digital cameras. I want all of those photos accessible to me wherever I go, on all the computers and smart devices I own and any I happen to use temporarily; I want them backed up for safety; I want some of them shared with my friends or the whole internet. In short: I want my memories protected, accessible to me, and selectively shared with those I choose to share them with.

There exist good file sync solutions, good backup solutions, and good photo sharing solutions, but all of them have drawbacks, not least that they’re separate and don’t really work together, so I have to do extra work to get the combined benefits.

Ideally, every photo I take would be immediately and automatically uploaded to a private site in the cloud, which acts as a backup, syncs the photos between all my devices, and offers a web interface giving me full access to the collection and letting me select photos for sharing in a nice way, with or without authentication.

Since upload bandwidth is generally too slow and expensive, I want the cloud upload to happen once, automatically and asynchronously in the background, as soon as possible; once that’s done, the photo is backed up and syncable, and if I choose to share it, there’s no further waiting for upload.

I want flexible organization (which should work well with as much metadata as I want to add, or as little), to scale to my collection of thousands of photos over many years, and to grow into the future. I want the sync solution to play promiscuously well with other apps I choose to use, and not enforce a particular filesystem or logical organization scheme, but I want it to offer optional organization features that help me keep a well-groomed collection if I can use the help. And the sync part of the solution means that any organizational changes I make, on any connected device and in any app, are reflected everywhere.

I want this to work with full-resolution photos, and raw photos, and I want it to work with arbitrary metadata. And I want it to be promiscuously cross-platform: on Mac and Windows and Linux desktops, and iOS and Android and whichever other mobile platforms manage to prove their relevance.

I want the mobile device version to realize and revel in the constraints of the platform, acknowledging that connectivity is ubiquitous but not always fast or reliable, that memory is limited, and that snappy touch interactions feel good. Thus, if I’m using a device that combines a camera and a network interface (any smartphone) I want every photo I take immediately sent to the cloud for backup and syncability. I also want my entire collection synced down to my device as a photo wallet, but, acknowledging local storage limits, I want this to automatically downscale to an appropriate resolution, and I want to be able to select per-device subsets of the entire collection to subscribe to.
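
To illustrate the downscaling step, here’s a minimal sketch in Python using Pillow; the 2048-pixel cap and the JPEG quality are my assumptions for what “appropriate resolution” might mean, not anything from an existing product.

```python
# Minimal sketch of downscaling a photo for a space-limited device.
# MAX_EDGE and the JPEG quality are assumptions, chosen for illustration.
from PIL import Image

MAX_EDGE = 2048  # longest edge, in pixels, for the "photo wallet" copy

def make_device_copy(src_path: str, dst_path: str) -> None:
    """Write a downscaled JPEG copy of src_path for syncing to a device."""
    with Image.open(src_path) as img:
        img.thumbnail((MAX_EDGE, MAX_EDGE))  # keeps aspect ratio, never upscales
        img.convert("RGB").save(dst_path, "JPEG", quality=85)
```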

It would be great if the sync/backup/share features work with non-photo files — the solution I envision would be file-based, and have a bunch of special sugar for image files, but would be useful even for other files — but that’s not necessary. Non-photo files are already handled pretty well by services like Dropbox.

In fact, my continuing desire for the magic super photo cloud service is driven by noticing that, by and large, photos are the only remaining file type I really have to care about on my computer, meaning data I create that isn’t already well handled by cloud services. Examples that are already well handled: email (tons of cloud services, including Gmail); office documents (Google Docs); unstructured notes and clippings (Evernote); miscellaneous small files that don’t fall into the above buckets (Dropbox). What’s left? Photos, music, and video. But music is a special beast: like photos it involves largish files, but unlike photos it’s something I buy instead of create, meaning any individual file I want already exists in some canonical form I can get from lots of other sources; anyway, it’s well handled: Amazon, Apple, and Google have cloud sync, and the file-based approach is somewhat displaced by subscription services like Rhapsody and Spotify. Video we record ourselves will, I think, end up having the same properties as photos: large files we create, embodying our memories, that we want to sync and back up and share. So it’s the photos we take (and by extension the video we take) that we care about managing with computers, better than it’s currently done.

Here’s a smattering of things that exist now, attacking some of the above problems and illustrating a variety of different approaches. I’ll call out what they do right, and what I find lacking:

  • Dropbox (file sync and then some): can do most of what I’m asking for, but it’s not photo-specific (which could be bug or feature or both). The price is high (compared to photo sharing and backup services which usually offer unlimited storage). They have a sharing UI which has been somewhat customized for photo albums, but it’s pretty bare bones compared to full-blown photo sharing sites. The syncing is good but not perfect; p2p sync first requires an upload to the cloud (so it makes you wait for the slow hop then skips the fast hop); it enforces a certain filesystem layout on computer clients; it doesn’t have an appropriate mode for syncing all photos to/from smart devices at suitable resolution. All this said, Dropbox is probably the closest existing thing to what I want, and it offers good 3rd party APIs allowing others to extend it — I think they see themselves more as storage infrastructure for the web than any specific application — so if the interoperability and pricing concerns don’t bother you, maybe it’s already the right answer.
  • Photo Stream (photo sync): gets the sync right (it’s the first I’ve seen that automatically adds photos as they’re taken). However, it has very strict and arbitrary limits: no deletion, no organization, and only the last 1000 photos or 30 days, so it’s basically a write-only temporary repository that syncs across devices, out of which you have to manually fish the things you care about and manage them separately. It does have the magic auto-upload feature for network-connected cameras, and it does deal with RAW images. I see this as a promising demo of one aspect of what I’m looking for, but it doesn’t even touch the others (organization, backup, or sharing).
  • Adobe Carousel (photo sync and sharing): I haven’t tried it yet. It looks promising, but platform support is limited and it doesn’t handle raw files. Pricing may be a turnoff (a monthly fee and a limited number of carousels). It should handle sharing, both public and private, but it’s not suitable for my entire photo library, as far as I can tell.
  • CrashPlan (online backup; also see Mozy, Carbonite, BackBlaze): cheap unlimited-size cloud backup that also lets you retrieve individual files via a web interface, which in a pinch means you can get at your files from anywhere. The web interface isn’t optimized for that case, though, nor for photo display, and there are no suitable features for sharing (albums, delegatable access control).
  • Flickr (photo sharing): Flickr is great for photo hosting and sharing; it has great organizational features and a good community. The pro version offers effectively unlimited storage and has neat tracking features so you can see the popularity of what you post. However, Flickr doesn’t itself provide upload or, really, client software in general, so it doesn’t handle sync (it does have an API with good 3rd party adoption, which addresses this somewhat). It doesn’t handle RAW files, and it’s not suitable for backup. The sharing features allow flexible public and private sharing, but not a lot of flexibility in album display. In general, the evolution of the site has been slow since Yahoo! bought them out. I think the site was originally designed with a forward-looking vision that in the near future, nobody would need any local photo software: you could just upload everything to Flickr, edit and organize it there, share it there, and you wouldn’t need sync or backup. And maybe if the site had continued on its original trajectory and pace, this would have happened by now — but I find it’s anything but true in the real world.
  • Facebook (photo sharing): Facebook has huge adoption, and I’ve found that photos I post there get seen (and “liked” and commented on) more than anywhere else, including Flickr. But Facebook is a walled garden (it may be possible to use it for public photo sharing, but I don’t see people doing that in practice); content hosted there is not search-engine friendly; you as the photographer get zero control over photo presentation, zero insight into who’s viewing your photos, and only very limited organizational features (albums with a size limit, which can’t be nested hierarchically). Person tagging and automatic face recognition make up for some of this; general ease of use and speed and reach make Facebook a good solution for sharing your photos as long as your needs are simple. However, of my big top-level wishes (sync, backup, and sharing), it only touches sharing, and honestly, given the limited control over organization and presentation, I’d prefer to use Flickr if I could get the same reach. Given the name of the site (it’s the book of faces, after all) and the mandate implied therein, I feel like they could take this feature much, much farther; but the backup/sync aspect I want seems outside that “book of faces” mandate and cries out for a dedicated solution. Note that Facebook is free and offers unlimited storage (as far as I know); the platform scales well (and is, I believe, by far the world’s largest-volume photo server); it doesn’t handle RAW files, or even particularly high-resolution images.
  • Google’s photo suite (photo organization and sharing): Google has nice desktop client software for photo organization and editing (Picasa), which they eventually built an online companion for with sharing and automatic sync features (Picasaweb), and then integrated into their new social network, Google+. Taken together, this provides a pretty good start-to-finish solution for taking, editing, uploading, and sharing photos. However, the upload/sync solution based on Picasa is not flexible or pervasive enough to serve as backup or to sync files managed outside Picasa, and the website is certainly not designed to serve as a central hub for managing and syncing my entire photo library. I’m not even sure the sync is bidirectional; it may be upload-only.
  • Evernote (generic unstructured data sync): Evernote has a good web service, with client apps and a web app for access, automatic syncing for offline access, and attractive pricing. However it doesn’t offer any of the organization or sharing features you’d really want for photo management, nor does it integrate with other apps via the filesystem (stuff stored in and synced via Evernote is available via the Evernote API, but not via older integration points, namely local filesystems).
  • File sync in general: Beyond Dropbox, described above, I haven’t seen anything that really scratches this itch in the way I’m envisioning.
  • Photo sharing in general: there are tons more photo sharing services, but by and large, they have the same properties I ascribed to Flickr above, but are less good at it. You might have a favorite that you think I’m giving short shrift to here; fine; let me know if you know of any that are qualitatively better than Flickr on the list of things I’m wishing for in this post.
  • Other things not mentioned here: This list is intended to show examples, not be exhaustive, so I’ve collapsed whole categories onto one exemplar; for example, while I’m aware of SpiderOak and see it having certain advantages over Dropbox, for my purposes here it’s essentially the same. There are probably many other sync, backup and photo sharing services that have their own advantages but don’t break new ground on the aspects I care about; if you know about something I missed that does address my wish list, please let me know.

How I’d build this: I’d want the following pieces:

  1. The central cloud service which stores files securely for backup, and handles sync to any clients I’ve authorized.
  2. Client software for every platform which matters, which handles sync, automatically in the background: new files appearing first on this device are uploaded to the cloud service; new files announced from the cloud service are synced (at an appropriate resolution) down to this device’s local library; changes to existing files (including filesystem metadata like name and location, picture-specific metadata like that encoded in EXIF and IPTC mechanisms, and image data itself) are also automatically synced in both directions.
  3. Web app atop the first cloud service which handles organization and management — letting me move files between folders or albums, apply tags, and designate what to share with whom.
  4. Web app for sharing, pulling authorized files from the cloud service, and displaying them with nice presentation and according to whatever organization (individual file, album, by tag, etc) I’ve chosen.
  5. Client local versions of #3 (optional and lower priority, since I should be able to use the web version, or any local software I want, to do most of what this could do).
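
As a concrete (and entirely hypothetical) sketch of piece #2, here’s roughly what the background sync loop could look like in Python. The cloud API used here (upload, changes_since, download_scaled) is invented for illustration; a real client would persist its state, subscribe to filesystem change notifications instead of polling, and handle conflicts, metadata edits, and resumable uploads.

```python
# Hypothetical sketch of the background sync client (piece #2 above).
# The `cloud` object and all of its methods are invented for illustration.
import time
from pathlib import Path

WATCH_DIR = Path.home() / "Photos"    # assumption: where new photos land
seen = set()                          # a real client would persist this state
cursor = 0                            # last remote change we've applied

def sync_once(cloud):
    global cursor
    # Upload: new local files go to the cloud as soon as they appear,
    # which makes them backed up, syncable, and instantly shareable.
    for photo in WATCH_DIR.rglob("*.jpg"):
        if photo not in seen:
            cloud.upload(photo)
            seen.add(photo)
    # Download: changes announced by the cloud (new files, renames,
    # metadata edits) come down at a device-appropriate resolution.
    for change in cloud.changes_since(cursor):
        cloud.download_scaled(change.path, WATCH_DIR, max_edge=2048)
        cursor = change.sequence

def run(cloud, interval=30):
    while True:                       # polling keeps the sketch short;
        sync_once(cloud)              # real clients use inotify/FSEvents
        time.sleep(interval)
```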

(I think Evernote, top to bottom, service to applications, is a great example of this architecture — handling what they call “notes”, which basically means document snippets with attachments — but, as described above, it doesn’t solve my photo sync problem, since the client apps don’t integrate with my local filesystem or provide an integration point for other local apps, and the web interface doesn’t have the organization and display features I’d want for photos.)

I don’t know what this should cost or what I’d be willing to pay for it — more than free (Google and Facebook); probably more than Flickr’s $25/year and more than the unlimited backup companies’ $5/month; probably less than Dropbox’s per-GB-per-month fees. For this to be done right, the storage and bandwidth would carry significant cost, and most of the data is private, so it can’t be de-duplicated across users.

Clearly this isn’t easy. The inherent challenges of handling and transporting and scaling to this much data, and of designing good interfaces for all the services and applications, are real; and for anyone but the incumbents, there are the additional challenges of integrating with 3rd party software and OS platforms, especially the less open and less flexible mobile platforms (given the sandboxing and multitasking restrictions in iOS, for example, I’m not sure anyone but Apple is in a position to implement Photo Stream for iOS right now).

But it would be nice.

Kindle Fire First Impressions


My first impressions after playing with the Kindle Fire for an hour or so:

  • it’s basically a color, video-happy, app-happy Kindle.

  • fine for reading Kindle books, but if that’s what you want to do, the e-ink Kindles are cheaper, lighter, less distracting, and have much better battery life.

  • video playback from Amazon’s store or Netflix works great.

  • app selection is small, but I’ll bet a lot of games show up soon, or at least over time.

  • overall touch interaction is OK but clunky, and doesn’t quite feel right compared to iOS: scrolling is choppy, some touches aren’t recognized, and some light touches send it scrolling for miles.

  • the web browser feels decidedly slower than this year’s Apple devices (iPad 2, iPhone 4S). So much for Silk?

Overall, if you compare it head to head against an iPad 2, of course the iPad comes out ahead, but it also costs 2.5x as much, and is bigger and heavier. But iOS still sets the standard for natural touch interaction, and iOS also has a much wider app selection. Plus there’s more hardware packed into an iPad: camera and GPS, and options for adding external keyboards, so some people even find they can use it as a laptop replacement… anyway, I don’t see the Kindle Fire having comparable versatility.

If you want to focus on reading, video and games, and you like the 7” size (which is just right in many ways), the Kindle Fire is pretty compelling, especially given the price.

I sure hope Amazon is rapidly improving the touch interaction via software updates, though.

How Much Can the Kindle Fire Improve With Software Updates?


I wrote a semi-positive mini-review of the Kindle Fire this morning.

Then I read Marco Arment’s rather scathing “human review”, and wondered: wait a minute, if it’s that bad, should I return it, and wait for a better version (or save my money if no better version materializes)?

The thing is, I saw most of the same things he did and had most of the same complaints; I just expect Amazon to fix them. Marco sees the glass half-empty; I see it half-full; maybe I’m too much of an optimist. I understand that it’s smarter to buy or judge something based on its current capabilities, not ones that may or may not materialize in the future. Still, it seems worth considering how much better the current-generation hardware might get with software updates.

Most of the complaints centered around speed and touch interaction. And those are legitimate — lots of taps are ignored; lots of taps are interpreted as small drags from nowhere to nowhere; scrolling is chunky and jerky; it’s at best annoying and far from a premium experience. For goodness sake, the orange pointed stripe that you drag to unlock the device, every time you want to wake it from sleep, doesn’t drag smoothly — it’s the first thing you see; how hard is it to get that right? Still, if Amazon cares — and I hope they do — they should be able to fix this.

So will they fix it? What’s the problem here, anyway? The hardware should be up to the task: a TI OMAP 4430 SoC with two 1GHz Cortex-A9 CPU cores and a PowerVR SGX540 GPU, plus 512MB of RAM. That compares favorably with this year’s Apple devices based on the A5 SoC (iPad 2, iPhone 4S: two comparable CPU cores and a somewhat newer GPU core) and very favorably with last year’s Apple devices based on the A4 SoC (iPhone 4 and 4th-generation iPod Touch: one comparable CPU core and a somewhat older GPU core). I’ve heard that specs are dead, but at least this establishes a baseline. And note that at the Kindle Fire’s 1024x600 screen resolution, it’s pushing the exact same number of pixels (1024 × 600 = 614,400 = 960 × 640) as the retina display in the iPhone 4 and 4S. So assuming they can figure out how to effectively leverage the CPUs and GPU — I don’t know how good Android is at this, and I don’t know what shortcuts Amazon took in forking Android, and so on — you’d think they should be able to get iPhone 4S-like performance out of this. At the very least they should be able to compete with the 4th-generation iPod Touch (which is a year old and the same price). The point is that Amazon and Apple (and everyone else too) are using similar guts, with similar-architecture ARM CPU cores and PowerVR GPU cores. Right now, the iPod Touch is night and day more responsive. Amazon, get cracking.

(I’ve owned the previous 2 generations of e-ink Kindle and Amazon delivered significant software updates to these long after purchase; that gives me reason to be hopeful the same applies to the Kindle Fire.)

Some of the complaints were about missing physical buttons — hardware volume buttons would be nice, a dedicated home button would be nice, and there’s no way to fix those problems in software. (The headphone complaints are also legitimate; the headphone jack location is obviously not patchable in software; the popping might be; however I don’t really care, because the iPod has so won the music race that I don’t have any desire to use the Kindle Fire for music. I do find it notable that pretty much every non-Apple multifunction device that can play music takes it so unseriously that they have popping problems like this; that’s par for the course with old and new Palm smartphones too… oh well.)

What this boils down to is this: if Amazon can wring decent performance out of the hardware, and pay some real attention to nitpicky details about the touch interaction, about half of Marco’s (and my) complaints should go away. I see no reason this is impossible to fix with the current hardware. However, if they also decide to yield to common sense and add hardware home and volume buttons, the first-generation units will forever feel lacking by comparison.

I’ll close with a final note on why I’m willing to give the Kindle Fire some slack. I’m not trying to compare the Kindle Fire and iPad, or argue the Kindle Fire kills or replaces anything else, much less the iPad. But I do think the Kindle Fire has the right guts to deliver a much better experience than it does today, and is fixable in software if Amazon has the will to improve it. I’m also intrigued that there’s finally an Android competitor to the iPod Touch — decent hardware sold contract-free and unsubsidized for an honest $200. (Incidentally, that’s something John Gruber has called for before.) The big difference between the Kindle Fire and the iPod Touch, other than the operating system (and the responsiveness and app ecosystem that put Apple way ahead on the software side), is obviously the size; I’m not going to get into the screen size argument, other than to point out that anything that doesn’t fit in a pocket is clearly less portable than anything that does, and that size imposes so many tradeoffs there’s no one perfect size. Pick the one that works for you.

Given that the $200 Android-based iPod Touch competitor now exists, I’m interested to see what happens with the app ecosystem. I think it could be a big gaming success, the way the iPod Touch has been, for example.

If Amazon can endow it with a major dose of missing snappiness, that is.

NYC Marathon Results


For the MMRF: They raised about $525K via this marathon, which is one third of their overall events-based fundraising for the year. (And this year was a record high.) $4300 of that was from you all, so one more big thank-you from me, the MMRF, and the people they try to help.

For the marathon as a whole: Conditions were great, and the top 3 men all beat the previous course record. You can find plenty of news stories on this if you’re curious.

For me: I originally started training for this marathon (my first) with a goal of finishing in under 4 hours, which is about 9 minutes per mile. Toward the middle of my training, it became obvious that I’d be able to exceed that goal, so I revised it to somewhere between a very optimistic 8 minutes per mile (3:30 total) and a more realistic 8:30 per mile (3:45 total). I ran with my friend Jeff pacing me (he’s run a couple marathons before) and we set out with a plan of starting with 8:30 miles and speeding up as we went.

Of course, at the beginning we felt great and it was hard to run the exact speed we’d planned. We did the first 6 miles a little slower than 8 minutes, then still felt strong so we sped up a little. Up through the first 16 miles (which is right where the course enters Manhattan for the first time) I felt great. At 16 miles I began to feel only good, but we kept the pace around 8 minutes per mile until mile 22, where a couple things happened: I began to feel not great at all, there’s a very mild but very long uphill leading into Central Park, and I decided I needed to slow down. Jeff stayed strong and went on ahead; I slowed down by about a minute a mile, did the last 4 miles at about a 9:30 pace, and finished in 3:37:19. (Jeff finished in 3:32:35, and I say thanks to Jeff for pulling me along at a strong pace and congrats for being able to stick it out the whole way.)

It’s funny, because during the first 16 miles I felt like I should go do this every day, and during the last 4 miles I never wanted to do it again, and after finishing, of course, my feelings (a little more detached and not in the middle of a runner’s high or extreme pain) are somewhere in between. During those last 4 miles, I knew I had to just keep slogging on to the finish; I couldn’t quite keep the 8 minute pace but I knew I had to keep running; slowing down any more wouldn’t have felt better; the only thing that would have felt good at that point was quitting, and I knew that wasn’t an option, not least because of y’all’s support.

Anyway, I’m very happy with the result for my first marathon, and looking back, I don’t know if the pain and the subsequent slowdown in the last 4 miles is (a) because we went out too fast, (b) because I didn’t do quite enough training at a >20 mile distance, (c) lack of fuel, or (d) totally normal and that’s why they call it a marathon.

I also have to praise the marathon organizers — this race had the best organization (website, runner tracking, getting to the start line, knowing what to expect), the best infrastructure (timing mats almost every mile, drinks and food along the way), and the best cheering crowd of any race I’ve ever been involved in. That’s what I was told to expect ahead of time, but it sure lived up to expectations. If you’ve run NYC before you know what I’m talking about, and if you haven’t and you’re a runner, do consider it. It really was a great experience.

If you’re curious for any more details, the marathon website has the detailed splits available — go to http://trackmyrunners.ingnycmarathon.org/Runners.aspx and search for Ginzton. Then click the + button on the far right, and it’ll turn my name into a link which you can click on for the details.

Growing Up With Apple


RIP Steve Jobs. In the wake of his passing, everyone in the tech world seems to be telling stories about Apple’s influence on them in their formative years; here’s mine.

Like Eric Bangeman in the above-linked Ars Technica article, my first computer wasn’t an Apple II — I also had contemporary experience with TRS-80, TI-99/4A, and Commodore 64 machines. Unlike him, my first experiences with the Apple II didn’t stand out as more satisfying — I was actually touched more deeply by the Commodore 64, likely because it was so much cheaper that my family could afford one much sooner. But I did spend a lot of time playing with an Apple II+ at a friend’s house. Lots of Moon Patrol on the C64, lots of Lode Runner on the II+, lots of BASIC programming on both… the C64 was a very gentle introduction to these newfangled home computers, since you could plug it into any TV, and it worked fine with no storage (games came on cartridges), or with very cheap storage (a cassette tape adapter), or you could buy a disk drive. And the C64 had color graphics and better sound support than the contemporary Apple II models… on the other hand, Apple’s BASIC actually had graphics commands, whereas graphics on the C64 forced you into machine language territory.

Later I got an Apple IIc, and it was better in many ways than the Commodore 64 — but in other ways not much better, or no better at all, especially considering the price. I really appreciated the democratizing influence of the Commodore’s price — they were available for $150 at Toys ‘R’ Us, IIRC.

When it came time to upgrade the IIc, the original Macintosh was already on the scene, and I had a choice — jump to the Mac, or upgrade to the latest and greatest Apple II, the IIgs. It seemed like an easy choice at the time — the IIgs was backwards compatible with the software and peripherals I already had, had color graphics and awesome sound, was expandable, and was roughly as powerful. Only a couple years later, Apple made it obvious they were betting only on the Mac, finally introducing a Mac which supported color and expandability, and essentially killing off the IIgs. (They did continue to release impressive software updates for the IIgs, including a couple complete rewrites of its GS/OS operating system, which I take to mean that the IIgs software team felt much as I did about their machine’s demise.)

This was a real blow to me — the IIgs had been a stretch for my family to afford, and I don’t recall any hints at the time from Apple that it would be a dead-end purchase — so, with the wrath that only a 12-year-old boy can muster, I swore that I’d never buy another Apple product. (Only much later did I realize that this coincided with the Steveless interregnum at Apple, opening the door for me to reverse this pledge later.)

Thus was kicked off a 15 year period of PC buying, where I learned the use and programming of MS-DOS and then Windows, from 286 to 386 to 486 and Pentium and beyond, and again the democratizing influence of the Wintel economy was attractive — people have long claimed that Apple computers were an overpriced item for the snob market, and while we all know that’s not true and even the stereotype has mostly dissolved, it’s still worth examining a little deeper. It’s now widely accepted — and if you look closely you realize it’s been true even longer — that Apple computers offer a good value for the money you do spend, and compare favorably with PCs at the same price point. And yet, there were often lower price points, satisfied only by PCs, that were still good enough to be relevant. Probably Steve Jobs didn’t feel that way, but I know I did, with the C64 in 1982, and with my 386 in 1992.

In high school, as a staffer and eventually editor of our school paper, we did most everything on Macs — a few Mac Classics, and later one color IIci. They were underpowered and slow, and we cursed them often, but they did the job and most of the curses were affectionate. They also networked, which was something PCs of the day didn’t easily do. Unlike standalone computers, networked computers need names — I don’t remember all of them, but two of them were Eddie Vedder and Kurt Cobain, provoking utterances like “dammit, Kurt Cobain crashed again.”

In college in the later 90’s, learning about CS fundamentals and history and non-consumer architectures and Unix and operating systems and the impact of the MMU, it became obvious that neither Mac OS (stalled at System 7) nor Windows (in the throes of the 9x procession) was really living up to the promise of performance and reliability supported by the hardware that was commonplace at this time, but that Windows 9x was closer — at least most of the software ran preemptively scheduled in separate memory spaces, and its mostly backwards-compatible cousin Windows NT could even be called a real OS. So it was easy for me to continue looking askance at Macs (while, to be sure, my Mac loving friends looked equally askance at Windows machines, and I’m sure we both claimed the other crashed more). However you look at it, this was not the proudest period for the Mac, while Apple kept trying and failing to get its house in order with Taligent and Pink.

Around this time Steve Jobs returned to Apple, and kicked off efforts to revitalize both the software and hardware behind the Mac. The hardware efforts paid off right away, with the bondi blue iMac and later iBook; the software efforts would take longer. I wasn’t the least bit tempted by these fruity looking computers, running the same tired Mac OS derivatives (8 and 9 being like Windows 98 and ME, that is, not meaningfully better than what they succeeded), and the IIgs experience still smarted, 10 years later.

This remained basically true until OS X came out — a Mac OS a geek could run, and still respect himself in the morning — and with it, the white iBook, which both in appearance and in internals was far more respectable than its predecessor. Just as I was graduating from college, I used my Apple student discount and bought one, finally breaking my age-12 promise.

The early revs of OS X (the beta, 10.0, and 10.1) showed a lot of promise, but they were still new, under-optimized, and slow, and the iBook’s hardware wasn’t enough to compensate, so I never ended up using it for anything serious, deferring to more capable PC hardware I continued to buy. A year later, I traded the iBook away as partial payment for a Ducati motorcycle. However, the hook was set, and a couple years later, as OS X hit its stride with 10.2 and G4 processors became affordable, that underpowered feel was gone. I bought a G4 laptop and was won over by several factors that really matter on laptops (reliable suspend/resume and reliable wifi, combining to mean you can open the lid and be online and typing a few seconds later — something that was a total crapshoot under Windows at the time and, um, still mostly is another 7 years later), and, to close with another reference to Bangeman, every computer I’ve bought since that I didn’t build myself has been a Mac.

SF 1/2 Marathon Challenge


I’m planning to run the NYC Marathon in November, as described earlier.

I’m also going to run the first half of the SF Marathon this coming Sunday.

If you’re interested in helping support my donation campaign for the NYC Marathon, I’d like to propose a fun challenge based on how I do in the ½ marathon this weekend.

I’ve run this ½ marathon before, 3 years ago, and finished in 1 hour 52 minutes — just a little faster than a 9-minute-per-mile pace. My training goal for this time is an 8:00-per-mile pace (1 hour 45 minutes overall), and my last couple training runs were at about 8:05 per mile. But I think I can do it faster.

I’d be happy with an 8:00 pace — substantially faster than my current record for this race — but even happier with 7:40 (just over 1 hour 40 minutes total). Want to bet I can’t do it? I’ll bet I can run 7:40 miles and you bet I can’t run faster than 8:00 miles and we’ll both donate to the MMRF based on the difference.

So. If you’d like to take this challenge, and challenge me to run faster than 8:00 miles this weekend, let me know and give me a dollar amount per second. I’ll promise to donate that amount for every second slower than 7:40 I run (average pace), and you promise to donate that amount for every second faster than 8:00 I run.

Example: you bet me $5 per second, and I finish with an average of 7:45. You donate $75, and I donate $25. Of course, if I finish with an average of 7:55, you donate $25 and I donate $75.
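
(For the curious, the payout math is simple enough to write down; here’s my restatement of the rules in a few lines of Python, with paces encoded as seconds per mile.)

```python
# The challenge's payout rules, restated in code (my encoding, not official).
def payouts(bet_per_second, actual_pace_s,
            my_target_s=7 * 60 + 40,     # 7:40: I pay per second slower than this
            your_target_s=8 * 60):       # 8:00: you pay per second I beat this
    """Return (my_donation, your_donation) to the MMRF, in dollars."""
    mine = max(0, actual_pace_s - my_target_s) * bet_per_second
    yours = max(0, your_target_s - actual_pace_s) * bet_per_second
    return mine, yours

# The example from above: $5/second at a 7:45 average pace.
assert payouts(5, 7 * 60 + 45) == (25, 75)
```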

I chose these times because they’ll push me to run a little faster than I otherwise would, and the structure stands a good chance of raising some money for the MMRF (as opposed to, say, a simple over/under bet at 8:00, where I’d have no disincentive to run slower and the MMRF would get less money).

Of course, if you’d like to make a normal less complicated donation to the MMRF in support of my marathon bid, that’s welcome too, and my fundraising page is here.

But if this complicated challenge sounds fun and/or you want to make me sweat more, please contact me (email, comments here, comments on Facebook, carrier pigeon…) with your bet amount.

NYC Marathon


I’ve signed up to run the New York City marathon in November. It’ll be my first marathon, so both training and the actual event should be interesting and challenging.

In addition to the personal challenge, I’m running on a team collecting donations for the Multiple Myeloma Research Foundation. (Before anyone freaks out, no, I don’t have any direct connection to this disease, neither I nor any of my immediate relatives or acquaintances. I chose to fundraise for them because my mom and her oncologist husband say they do good research and use their money wisely. And as my mom said, “The fight against any kind of cancer benefits the fight against all kinds of cancer.”)

If you can spare $20, or more, or less, to sponsor me and support the MMRF, I, the MMRF, and multiple myeloma patients present and future would all appreciate it: just follow this link to my fundraising page. Thank you.

Not the Modem After All


My outbound Internet connection stopped working again this morning, and (like the last couple times) evidence pointed towards my router, not the modem or anything provided by Comcast.

(As I said last time, I’m convinced there were originally multiple problems leading to confusion and increasing troubleshooting difficulty, but it appears I owe Comcast at least a partial apology.)

The good news is there’s a software fix, which can be invoked remotely (unfortunately not from outside the house since outbound connectivity is down, but it does mean I don’t have to walk over to the equipment closet to power cycle modems or routers): “mii-tool -r -R eth1” made everything happy again.

The router in question is a Netgear WNDR3700 running OpenWrt 10.03 “backfire”. I wonder whether this happens to others, and whether it’s a hardware or software bug. I’m hoping a newer version of OpenWrt might fix it (but since release candidates of 10.03.1 have been appearing for 15 months now, I’m giving up hope for a new official release; “newer version” probably just means trunk); alternatively, it won’t be too hard to whip up a monitoring script that pings the modem and does the mii-tool reset whenever it doesn’t see a response, sketched below.
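
Something like the following would do it. This is just a sketch of the logic (in Python for clarity; on the router itself it’d more naturally be a small BusyBox shell script run from cron), assuming the modem answers at the standard 192.168.100.1 diagnostic address:

```python
# Watchdog sketch: ping the modem, and kick eth1 if it stops answering.
# Interface name and modem address are from the post; the interval is a guess.
import subprocess
import time

MODEM = "192.168.100.1"   # standard cable-modem diagnostic address
IFACE = "eth1"            # the WAN-facing interface on the WNDR3700

def modem_alive():
    # One ping with a 5-second timeout; exit code 0 means we got a reply.
    result = subprocess.run(["ping", "-c", "1", "-W", "5", MODEM],
                            capture_output=True)
    return result.returncode == 0

while True:
    if not modem_alive():
        # The same fix invoked manually above: reset eth1 and restart
        # autonegotiation.
        subprocess.run(["mii-tool", "-r", "-R", IFACE])
    time.sleep(60)
```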

Update on Mac OS X Headless Fast User Switching Bug


I complained that Fast User Switching over a Screen Sharing connection is a bag of hurt, and that it seems like something Apple’s never going to fix (#4 on the list at that link).

Fingers crossed, maybe I was wrong: the what’s-new page for Lion [1] advertises per-user screen sharing, which sounds more like the Microsoft Windows Terminal Server approach: a combination of fast user switching and remote access, so that multiple sessions can be active simultaneously, and remote sessions aren’t onscreen on the local console.

I can only hope that this new feature will make the old bug go away; I have a hard time imagining how they’d implement detached remote sessions that still exhibit the buggy behavior I described. Though, judging from the bug I described and earlier incarnations that seemed related, headless operation isn’t a primary, supported use case in Apple’s eyes.

Here’s hoping it gets better.


  1. I wonder how long that URL will last; probably until the next major version is announced, since there’s no versioning in the URL. I hate linking to pages that are going to break, but I don’t know of a stable equivalent.

Update on Comcast Last-mile ISP Connection


After months of woe with my Comcast cable connection, and a couple months of flirtation with indie DSL, I called Comcast out for a third service visit to focus on the external wiring. (Aside: this took some convincing of the service rep on the phone; more on that later.)

I realized that our house had a bunch of old exterior cable wiring, including a dodgy splitter, left over from when we bought it and entirely unused now. Comcast hadn’t thought to look at this before, and it hadn’t occurred to me that it could be related, since I have no idea how things are wired downstream of their lock box except inside my part of the house; but this time I decided to be more thorough. When the tech arrived, I asked him to open the lock box and disconnect everything in there that I’m not using.

Also, the tech took a look at the wire from the pole to the lock box and said it doesn’t meet their current standards and offered to replace it.

Other than some hilarity involving division of labor (one guy calls in the order for new wiring, a separate guy shows up a day later and strings it, then the first guy comes back a day later and actually connects it inside the lock box, because the first guy doesn’t have the cherry picker truck to string the cable and the second guy doesn’t have the key to the lock box?), this went pretty smoothly.

So now I have shiny new wire from the pole to my house, and we disconnected all the other old wiring, so the only thing connected at the lock box is my own new indoor wiring; knock on wood, one of these things was the problem and the problem is no more.

(This was a month ago, and I hesitated to write about it immediately for fear of jinxing it, but now that it’s been a month with none of the same old problem, I’m feeling better about it.)

The promised aside: the service rep I spoke with on the phone was either a genius or knew just enough about networking to be dangerous; I’m not sure which. He asked how I knew it was Comcast’s problem, and I said I’d already tried replacing the modem, so what else could it be? He said what if it’s my router; I should try connecting my computer directly to the modem and see if the problem still happens. I explained that the problem only happens an average of once every two weeks and there’s no way I’m disconnecting all but one computer in the house for weeks on end, and why should I suspect my router anyway? Then we got in an argument about 192.168.100.1 and whether it is indeed the address of the cable modem, as I believe, or, as he said, “the address my computer uses to get online” (whatever that means). I was convinced he was stonewalling me and he was convinced I was being difficult and this was going nowhere, so finally I said just send a tech out, and he said OK but we’ll have to charge you if it’s not our fault, and I said fine, you’ve already done that twice and I’ll just have all the charges reversed once it does turn out to be your fault. (And I did call back later, and Comcast was happy to refund the earlier service charges once they saw they’d had 3 calls for the same issue.)

So anyway. I didn’t find any of his facts or arguments highly convincing, but it did get me thinking — how do I know my router’s not acting up? It doesn’t seem to be, but maybe. It is bleeding-edge OpenWrt, after all: totally awesome in its power, but not the most mainstream and tested thing. So the next time the usual problem happened (conveniently, before the wiring repairs mentioned above were finished), instead of power cycling the modem I disconnected and reconnected the ethernet cable between modem and router, thus toggling the ethernet link state. After that, connectivity was restored — I could ping both the router and external sites — and the modem reported no downtime.

That seemed like a smoking gun, but it doesn’t explain why each Comcast wiring repair changed the frequency of the problem. At this point I’m pretty sure I’m dealing with multiple problems, interacting or at least masking each other in complicated ways, and making troubleshooting that much harder. It really seems like the wiring repairs helped materially, but if I do see more “modem crashes”, I’ll be looking hard at the router first.

(And to make things even more confusing, I’ll note I’ve also seen a handful of “T4 timeout” problems where the modem complains to its logfile and then reboots itself; that’s definitely a Comcast-side problem; it’s also wholly distinct from the one I’ve been calling “the problem” and easily identifiable since the modem clearly identifies it as such; it’s also a lot less annoying because it only seems to happen in the middle of the night and it fixes itself within minutes.)