Tuesday, December 22, 2009

Empowering Sinookas using Social Networks to Maintain a Duprass

One of the recommendations I frequently give at conferences when asked "What books are you reading?" is to get out of the purely technical realm often so that you can communicate more effectively with the other humanoids. One of my common recommendations is to read all of the books by Kurt Vonnegut. One of his books I recently re-read (for probably the 15th time) is Cat's Cradle. In Cat's Cradle, Vonnegut defines a new religion called Bokononism (one of the first lines of the novel states that if you have a hard time believing that a perfectly useful religion can be based entirely on lies, you won't like the book). Bokononism defines a bunch of new terms, which relate to the point of this blog post.

First, some definitions from Bokononism:

  • karass: a group of people who, often unknowingly, are working together to do God's will. The people can be thought of as fingers in a cat's cradle.

  • duprass: a karass of only two people. The typical example is a loving couple who work together for a great purpose.

  • sinookas: The intertwining "tendrils" of people's lives.

  • wampeter: the central point of a karass

OK, so what does this have to do with anything useful? I travel a lot, even for a ThoughtWorker (a little over 200K miles this year). Of course, my wife hates the amount that I travel, but it's an occupational hazard. One of the things that makes us miss each other is the loss of the little unimportant side conversations we have when we are together: little meaningless observations, inside jokes, just the kind of things that people in a duprass do all the time. So I built a sinookas using Twitter.

I created a new GMail account for myself and one for my wife. Using each of those GMail accounts, I created a new Twitter account with protected updates for each of us, and we only subscribe to each other's Twitter stream. All the good Twitter clients make it easy to change accounts, so I have used this to create a private back channel for ongoing duprass style conversations (in other words, a sinookas). This isn't the wampeter of our duprass, but it does make the sinookas stronger. It's been great, and it's something that I recommend all traveling road warriors set up.

Now, my wife & I can have an ongoing private conversation about stuff that wouldn't make sense (or would be too politically incorrect) on a public feed. That allows us to miss each other less. Who says that you can't have a perfectly useful social network with just 2 people?

Wednesday, November 04, 2009

Productivity Pron

One of my former coworkers & I used to spend hours talking about how to set up the best individualized personal information manager. We used to call those conversations productivity porn, not realizing that someone would come along and formalize that term, albeit slightly skewed, as Productivity Pr0n. Finding a good system that doesn't get in your way yet allows you to organize all the things going on in your life (both personal and professional) is surprisingly difficult, given the number of tools that purport to do just this. At the time we were having these discussions (late 1990s), the best thing going was Franklin Covey's Ascend (a desktop application for Windows) and the Palm Treo. Ascend replaced some of the anemic default applications on the Treo (like the laughable ToDo application) with its own versions, and it worked really well. The death knell for me with Ascend was its poor quality. It was written as a desktop application that used an Access database for its back end, and it was a bit fragile. About once a year, it would spontaneously corrupt the database, which required cracking it open with Access to fix the mess it had gotten itself into. Seeing the broken and corrupted records and the general shabbiness of the database Ascend supposedly owned entirely didn't give me confidence. I still haven't found as good an integrated system today, but I have cobbled together a nice workable system for myself, consisting of 5 moving parts.

Calendar
This piece to me is the easiest slot to fill. Maybe I'm just not discriminating, but almost any calendar application works fine for me. As long as I can add appointments with all the standard stuff (reminders, time zones, etc.), I'm pretty happy. The biggest headache with calendars is keeping them in sync. Back in the bad old days of Lotus Notes at ThoughtWorks, I basically ignored the corporate calendar, keeping all my stuff in Google Calendar instead and giving people who cared an HTML view into my work calendar. Google Calendar is quite nice, including good synchronization / replication with iCal. Most of my interaction with Google Calendar was through the web interface, so much so that I created a Fluid application that held only my calendar. One of the slick tricks you can do with Fluid applications is make all the chrome disappear, giving me a full-screen calendar bound to one of my desktops that looked like the wallpaper for that desktop, which was nice because you get the biggest possible calendar. I also liked the 4 views afforded by Google Calendar: 1 day, 1 week, 1 month, and next 4 days.

Since ThoughtWorks moved our infrastructure to Google applications, I just exported my old work calendar and sucked it into the new ThoughtWorks Google calendar. I subscribe to my itineraries, my wife's schedule, etc. (and share subscriptions to interesting events with her). We each "own" our own calendar and cross-subscribe to get shared events. Using sites like TripIt makes travel easy to keep under control and subscribable. Going forward, I want all calendar stuff delivered as iCal feeds.

For synchronization, I'm using iCal as the integration point. I still have a fluid application that now points to my ThoughtWorks calendar, but I'm using iCal as the main calendar. Of course, iCal synchronizes nicely with the iPhone. Before moving back to iCal, I was using CalenGoo on my iPhone, which is a slick iPhone interface directly to Google calendar. It doesn't really provide off-line access but does cache the previous results so you can see your calendar even if you can't get to it online. The native iPhone calendar application handles that for me now.

ToDo / Task List

If the calendar is the easiest, this is by far the hardest slot to fill. Virtually all of the To Do / Task List applications I've used are seriously deficient in one way or another. Ascend replaced the Palm ToDo application (which was laughably bad) with one it called Task List, which was quite good. One of the killer features I need is a start date in addition to a due date (they all do due dates, but most don't handle start dates properly). I have a fair number of tasks that I don't need to see right now, but I don't want to find out about them only on the day they're due either. I need a system that allows me to express "this thing is due on 11/5, but start pestering me about it on 10/30". I also want a tag-based system using contexts, one of the really nice refinements to come from the Getting Things Done cult, rather than a hierarchy of folders.
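The start-date behavior I'm describing can be sketched in a few lines of Ruby. This is a hypothetical illustration of the rule, not how any of these tools actually work internally, and the task names and dates are made up:

```ruby
require 'date'

# A task carries both a start date ("start pestering me") and a due date.
Task = Struct.new(:name, :start_date, :due_date)

# Only surface a task once its start date has arrived.
def visible_tasks(tasks, today)
  tasks.select { |t| today >= t.start_date }
end

tasks = [
  Task.new("Prepare conference talk", Date.new(2009, 10, 30), Date.new(2009, 11, 5)),
  Task.new("Renew passport",          Date.new(2010, 1, 15),  Date.new(2010, 2, 1))
]

# On 11/1, only the first task should pester me.
visible_tasks(tasks, Date.new(2009, 11, 1)).map(&:name)  # => ["Prepare conference talk"]
```

The point is the filter: a due date alone can't express "hide this until 10/30", which is exactly the feature most task managers lack.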

My research in this area eventually led me to OmniFocus. I say "eventually" because I recently went through this category of application again. I started with OmniFocus but found it too complex for my day-to-day usage, which led me to Things. I like Things because it is so radically simple, but that simplicity became a bottleneck for me because of the way I like to attack projects. That in turn led me back around to OmniFocus and some concentrated learning about how it wants to work. Its heritage is OmniOutliner, and you can still see that lineage, which adds complexity in places. Because OmniFocus has lots of ways to get information into it, I kept misplacing stuff. The thing that took OmniFocus from "nice but tolerable" to "can't live without it" is its custom perspectives. OmniFocus allows you to save customized views, including filters, columns, etc. I created a bunch of custom perspectives that show me exactly what I need ("what things are coming due within the next week", "what needs to be done next on this project", "I have 10 minutes at home -- is there anything that can be done here and now?") and assigned those perspectives to hot-keys using the standard Mac feature of assigning keys to menu items (each perspective shows up as a menu item, making this possible). Now I never use the built-in views; I always use the custom perspective that matches the information I need right now. Since doing that, OmniFocus has worked fantastically. It allows me to organize my days and weeks, shows me just what I need right now, and I'm confident when I add something that it will appear at the right place and time. Learning to use OmniFocus right was the key, and now that I have, I think I have the best task list system I've ever had (beating out Ascend for this title is no small feat).

Of course, OmniFocus syncs with the iPhone (and has a terrific iPhone application) so that I can keep all my To Do stuff with me at all times.

Sciral Consistency
OmniFocus is great for tasks that have firm due dates and works for recurring tasks as well (including some nice flexibility around "schedule the next one of these 5 weeks after the completion of this occurrence", which is great for things like haircuts). However, I have a few small but important categories of things where I want to define rules like "I want to post to my blog every 9 days or so", which could be rewritten as "remind me after 7 days that I need to post a blog entry, and start yelling after 11 days if it isn't done". I use a highly specialized tool for this called Sciral Consistency. That's all this tool does: it lets you set up date ranges for things that need to be done and reminds you.

I could almost replicate this using OmniFocus features, but I already had Sciral and I like the minimal display & single-mindedness of the tool. This doesn't synchronize anywhere, but I always consume this information at my computer anyway.
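The range logic itself is simple enough to sketch. Here's my guess at the traffic-light behavior as Ruby (an assumption about how such a tool might work, not Sciral's actual code), using the blog-posting example above:

```ruby
# "Remind me after min_days, start yelling after max_days."
def consistency_status(days_since_done, min_days, max_days)
  if days_since_done < min_days
    :fine      # too soon to worry about it
  elsif days_since_done <= max_days
    :due       # inside the window: remind me
  else
    :overdue   # past the window: start yelling
  end
end

# "Post to my blog every 7 to 11 days":
consistency_status(5, 7, 11)   # => :fine
consistency_status(9, 7, 11)   # => :due
consistency_status(13, 7, 11)  # => :overdue
```

A single due date collapses this whole window into one day, which is why a plain to-do list fits these tasks so poorly.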

Random Notes

The combination of calendar and OmniFocus handles all the structured stuff -- what about unstructured notes? I have two mechanisms for that: Evernote and Moleskine.

Evernote
Evernote is a desktop, web, and iPhone application that allows you to capture notes (organized into notebooks) for whatever information you want to keep and search. A few killer features for me:

  • Automatic synchronization, everywhere. Every time you capture something with Evernote, it automatically synchronizes across all views.

  • OCR for whiteboard text. I tend to draw on whiteboards a lot, and if you capture the drawing with Evernote's picture note, it will allow you to do text searches in the web and desktop clients for words in the whiteboard drawing. It's not perfect, but it's surprisingly good at this.

  • Automatic forwarding address. Evernote sets up an email address for you; anything you forward to that address becomes a notebook entry. This is nice because it allows you to get stuff out of your email client. Evernote has much better searching capabilities than most email clients, and having the forwarding address means you can get searchable emails into Evernote very easily. This is particularly nice for those who use their email inbox as the world's worst filing cabinet; get that stuff out of your email client and into something where it can be useful.

Moleskine
The only bad thing about an entirely electronic PIM: there are still times when you cannot use it (like when the plane is taxiing). This may not seem like a big deal, but I find that I have lots of capturable ideas at exactly the times when I can't capture them. Thus, my other permanent GTD accessory is a soft-sided Moleskine book along with a Fisher space pen. I capture interesting ideas as soon as I have them (because ideas, especially those from the right brain, are fleeting). Once I get back to my computer, I transcribe the Moleskine notes into the rest of my system. At any given time, I usually have a page or so of new stuff in the Moleskine. I could get by with a few index cards, but I've been carrying the Moleskine for a while and I'm used to it.

Tying It Together

My PIM is spread across 4 different applications and a notebook, with no real built-in integration between them, so I always use them as a unit. For example, I have all 4 applications bound to the same desktop in Mac OS X, and those are the only things bound to that desktop. That allows me to always leave them in the same window state and position. Anytime I switch to one of the PIM applications, it goes to the appropriate desktop. I've also used Automator to create a PIM application that performs "Launch Application" for each of the 4 that make up my PIM. I no longer think about these applications as separate things.
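The Automator application just launches each piece in turn; the equivalent in a few lines of Ruby, using Mac OS X's `open -a` command, looks like this (the four app names are mine and are assumptions; substitute your own):

```ruby
# The four applications that make up my PIM (adjust the names to taste).
PIM_APPS = ["iCal", "OmniFocus", "Sciral Consistency", "Evernote"]

# `open -a "Name"` launches a Mac application by name.
def launch_commands(apps)
  apps.map { |app| %(open -a "#{app}") }
end

launch_commands(PIM_APPS)
# To actually launch everything on a Mac:
# launch_commands(PIM_APPS).each { |cmd| system(cmd) }
```

Save something like this as a script (or let Automator do the same thing for you) and the whole PIM opens as one unit.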

PIM as Life Support for Focus

Obviously, this system is highly customized to me and won't work without changes for anyone else. I think that it is every knowledge worker's responsibility to find a system that allows them to get to and stay in flow as much as possible. Any tools that you use should make it easier to get to flow, not harder. I find that I tend to work best in 2 hour chunks (which I'm calling work blocks), which is similar to the very popular Pomodoro technique. One of the custom views in OmniFocus allows me to review projects that have pending work blocks so that I can find out what I need to work on, then immerse myself in that problem for a contiguous chunk of time. Whatever system you find, make sure that it supports how you want to work. Don't change your effective work habits to conform to some tool's vision of what your day should look like.

Wednesday, October 07, 2009

Twitter Matters: The Meme Abiogenesis of the Internet

This is part three in an exploration of why Twitter makes sense, highlighting its use as a legitimate tool for connections and idea generation. The first article is Twitter Matters: Keeping Up with Weak Social Links and the second is Twitter Matters: Conversations vs. Monologues, for those who want to catch up.

Abiogenesis, the study of how a primordial soup of chemicals eventually led to amino acids and life, is a fascinating area of study for biologists. This spontaneous generation of life happened here a long time ago, and its study obviously interests those investigating life on other planets, because this primordial soup seems to be the first prerequisite for life as we know it.

You can think of the Internet as a free-form gathering place for memes: an element of a culture or system of behavior that may be considered to be passed on from one individual to another by non-genetic means. Examples of memes include hit songs, water-cooler conversations about hit TV shows, and things like communism. If you are in the idea business (meaning that you are always looking for new sources of ideas and how to apply them to a broad subject like software development), you are always on the lookout for primordial meme pools. Twitter meets that goal admirably. As I mentioned in the first installment, weak social links are your best source for "outside the box" ideas. That makes Twitter a great place to harvest and generate new ideas. New ideas frequently start from seeds that are nourished into fully formed thoughts. Twitter not only delivers these seeds to your door, you can also use it as an incubator for your own seeds.

Here's an example. One of my recent blog entries was called The Suck/Rock Dichotomy. That particular turn of phrase came from a quick one-off Twitter entry where I was responding to a Tweet from someone that combined rock and suck. I mentioned that the entire argument was really part of the pervasive suck/rock dichotomy in the software world. That worked nicely in a 140-character Twitter post, and it was modestly re-tweeted. But it started more serious thinking about why that phenomenon exists, which led me to an entire blog post (i.e., essay) on the subject. The turn of phrase came from me, but in response to outside stimuli. Would I have ended up writing a blog post on that subject if it hadn't come up in a virtual conversation? Probably eventually, but having a conversational medium close by encouraged the original Tweet, which led to more fully formed thoughts about the subject.

Finding new sources of in-context ideas is a gold mine because you can never tell what fruit those idea seedlings will bear. Yes, 99% of Twitter is mindless trivia, but discovering or creating a new idea that you wouldn't have had otherwise? Priceless. People complain that most of Twitter is drivel, and I won't dispute that in the face of overwhelming evidence, but the remaining usefulness is an artifact of the volume of memes present. Here's an analogy. Numbers vary, but some sources suggest that up to 95% of the human genome is "junk DNA": DNA that isn't used (or at least whose use hasn't been determined). That's how nature tries out new ideas, and the really good ones survive. Most of Twitter is junk, but good ideas do lurk in these murky meme pools.

Twitter has evolved to fill a niche that didn't exist before. Just like any social environment, users have to figure out a way that it can provide value. I've certainly found that for me. The combination of keeping up with my weak social links, having terse conversations vs. email monologues, the enforced constraint to keep ideas atomic, and the new medium of ideas forms a completely unanticipated but welcome enhancement to the way I work. Rather than cast stones at new technologies like social networks, ask yourself why people find them useful and how they can be useful to you. The answer may be "they can't", but you need to understand why they matter before dismissing them.

Tuesday, September 29, 2009

Twitter Matters: Conversations vs. Monologues

This is part two in an exploration of why Twitter makes sense, highlighting its use as a legitimate tool for connections and idea generation. The first article appears as Twitter Matters: Keeping Up with Weak Social Links.

The 140-character limit is perhaps the most distinctive characteristic of Twitter. Some of my Twitter friends have commented that conversations on Twitter tend to be more civil: you just can't cram much message and bile into a 140-character message. This has happened to me: carrying on a debate on Twitter is an interesting exercise in conciseness. Tight constraint is a forcing function on creativity: achieving sensibility, lucidity, and articulateness in just 140 characters is tough. You would think that all discussions on Twitter are either about trivial subjects (so that they fit within the built-in limit) or quickly degrade into multi-part messages. While the latter happens sometimes, it is rare in my experience, and the former doesn't occur as much as you might think.

An example is in order. I recently posted a message in response to Jim Weirich saying that I thought cyclomatic complexity wasn't as useful a metric in Ruby because so many of the things that normally require loops and branches are so handily encapsulated in powerful libraries. Thus, this effect causes cyclomatic complexity numbers to be lower when comparing apples-to-apples code in Java & Ruby. Jim correctly pointed out that this does in fact make the Ruby code simpler, and therefore cyclomatic complexity is measuring exactly what it is supposed to measure. During this same discussion, Glenn Vanderburg weighed in on a related subject, and then so did Ola Bini. The conversation quickly turned to the Sapir-Whorf Hypothesis and how viable it is for spoken languages (not much) and computer languages (much more so). Along the way, I learned the distinction between the strong and weak versions of Sapir-Whorf. All this took place over about 20 minutes, 140 characters at a time. Yet at the end, I knew a lot more than when I started. The combination of (shortened) links to external sources and brief forays kept the conversation focused, covering just a few topics and exploring the implications between them.
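To illustrate the point about libraries swallowing loops and branches (my own toy example, not one from that conversation): the explicit-loop version below carries two extra decision points (the loop and the branch), while the idiomatic version delegates them to `select`, so a cyclomatic complexity counter scores it lower for the same behavior:

```ruby
# Explicit loop and branch: cyclomatic complexity counts both.
def evens_loop(numbers)
  result = []
  numbers.each do |n|        # decision point: the loop
    result << n if n.even?   # decision point: the branch
  end
  result
end

# Same behavior; the decisions live inside the library method.
def evens_idiomatic(numbers)
  numbers.select(&:even?)
end

evens_loop([1, 2, 3, 4, 5])       # => [2, 4]
evens_idiomatic([1, 2, 3, 4, 5])  # => [2, 4]
```

Jim's point stands either way: the second version really is simpler, and the metric is reporting exactly that.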

How would the conversation work without Twitter? It could only work if all the interested parties (myself, Jim, Glenn, and Ola) were somehow on the same email mailing list or happened to be at the same place at the same time. While our locations do coincide occasionally, it's rare (we're based in Atlanta (sometimes), Cincinnati, Dallas, and Stockholm). Even so, the topic would have to come up in conversation. If we were on the same mailing list, the conversation would proceed differently. Because there is no character limit on email (I'll let you immerse yourself in the fantasy of a limiting function on email for just a second), it's no longer a conversation; it's a series of monologues.

A tricky balance exists between constraint and creativity. Obviously you can cram more information and context into a sonnet than a haiku (I explored this idea in a blog series about the expressiveness of the Ruby language back in 2007). 140 characters seems to be a bit of a sweet spot: enough to convey some thought but not enough to go overboard. Composing a good Twitter update is different from composing an entire blog entry, but they aren't as far apart as you might think. I've certainly noticed that people who both tweet and blog have cut down on the number of blog entries they write. I'm certainly that way. It used to be that I would blog for 2 types of messages: short announcement-type posts ("I'm speaking at Random City Users Group next week") and essays. Now, all the short announcements happen on Twitter, leaving my blog for more formal essays. I like this distinction because I find that the blogs I read tend to be more substantive.

There is no question that most of what comes through Twitter isn't deep thought (many think that Twitter is just for food and travel updates). I find that people who post only obvious messages, too much information, or too much that I either don't care about or find offensive don't stay long on the list of people I follow. There is at least one prominent technologist who mixes his interesting posts with right-wing bile, and I dropped him like a hot potato because I don't need a subscription to a channel of misinformed dogma. Managing the list of people you follow becomes important on Twitter so that you filter out stuff you don't want or need.

Twitter creates a new communication stream for those who contribute and consume Tweets (conversations vs. monologues). By creating a new, specifically constrained communication channel, it moves conversations that used to occupy other spaces to a more appropriate space. The combination of a new conversational outlet between people with whom I maintain weak links and the built-in constraints means that I have a new source of ideas (both raw ideas and refinements of my ideas) to keep my brain percolating. In the next post, I explore the idea that Twitter can be a form of meme generator.

Thursday, September 17, 2009

Twitter Matters: Keeping Up with Weak Social Links

Lots of people just Don't Get(tm) social networking sites like Facebook, MySpace, and especially Twitter. On the face of it, Twitter doesn't seem to make much sense: 140-character updates. But those of us who use Twitter a lot (I'm @neal4d, BTW) know that it's much more than that. Twitter engenders so much puzzlement because it's so restrictive, but that restriction is the genius of Twitter.

In this and the next two blog entries, I'm going to explore why Twitter is a Good Thing(tm) and some surprising ways it can insinuate itself into a useful workstream. The first of these observations is around links.

Andrew McAfee of Harvard has done a lot of research on how social networking intersects with the enterprise (soon to be captured in a book I can't wait to read, Enterprise 2.0). I saw him talk recently about why social networking is a valuable resource left barren by most companies. He defines 3 kinds of social links: strong, weak, and potential, shown in a bulls-eye layout:

(bulls-eye diagram: strong links at the center, weak links in the middle ring, potential links in the outer ring)

Your strong links are the people you see regularly, either at the office or during the normal course of your life. There's a good chance you know what these people had for lunch, or at least one of their meals in the last week. The next layer represents your weak links. These are people you see intermittently (perhaps once a year). They are the friends you don't get to see on a regular basis (because of geography, for example). A good example for me is my friend Hadi Hariri, who lives in Malaga, Spain. He & I see each other perhaps once a year (generally at conferences) and always have good fun & conversation. It's this group that social networking sites support. This is a valuable group because you are more likely to get novel ideas from it than from your strong group. Before social networks, how did you keep up with your weak links? The Christmas letter, summarizing a year's events? You are wasting an important resource if you can't reach out to your weak links sometimes. You see your strong group all the time, so they hold few surprises. However, your larger and more diverse set of weak links provides novelty. The potential links are those you'll form weak & strong links with, but you haven't met them yet. You're also more likely to be introduced to a potential link through your weak links.

Twitter provides strong connectivity to your weak links. Here's an example of how weak links can lead down interesting paths. I met someone at the erubycon conference last year who's a well known figure in the Rails world and subsequently started following his Twitter feed. He had very recently gone vegan for health reasons, and he tweeted a reference to an astounding book called The China Study. I read this book (and several others referenced in it) and have since been strictly vegetarian, at least for the time being. It's worth reading: it lays out the case against animal protein in your diet and backs up the claims with real science. It's a profound book, enough to convince me to change my eating habits. I don't know if I'll stay this way forever, but I've been at it for about 6 weeks and it has been quite pleasant. He was very much a weak link; I would have a hard time spotting him in a room. Yet we share enough context in the Ruby community for me to use him as a source of ideas, which sometimes leads to interesting places. In this case, I wouldn't currently be vegetarian if it weren't for Twitter.

Finding a good mechanism for maintaining weak links and finding (and exploring) potential links allows you to work smarter because you have a broader arena for ideation. The combination of links, constraint, and meme ooze makes Twitter very useful to me. I explore those other two aspects in the next two installments.

Wednesday, September 09, 2009

The 2009 Edition of the Rich Web Experience: Adding Spice to Your Applications

Several years ago, I called an Ajax conference a condiment conference because most everyone there concerned themselves with technologies that augmented other technologies (for example, your base language is Java but you need JavaScript to make your applications suck less). Now, I think that user interaction, web design, the rise of Rich Internet Applications (when used suitably), and other user-facing issues have a deeper relationship to the underlying technologies. Thus, I'm calling this year's Rich Web Experience the spice for your underlying technology. Food is edible without condiments, but bland without spices. You can't avoid the browser as a platform; might as well embrace it in Orlando in December.

Wednesday, August 05, 2009

The Suck/Rock Dichotomy

Lots of people are passionate about software development (much to the confusion and chagrin of our significant others), and that unfortunately leads to what I call the "Suck/Rock Dichotomy": everything in the software world either sucks or rocks, with nothing in between. While this may lead to interesting, endless debates (Emacs vs. vi, anyone?), ultimately it ill serves us as a community.

Having been in software communities for a while, I've seen several tribes form, thrive, then slowly die. It's a sad thing to watch a community die, because many of the people in the community live in a state of denial: how could their wonderful thing (which rocks) disappear under this other hideous, inelegant, terrible thing (which sucks)? I was part of the Clipper community (which I joined at its height) and watched it die rather rapidly when Windows ate DOS. I was intimately part of the Delphi community which, while not dead yet, is rapidly approaching death. When a community fades, the fanaticism of the remaining members increases with every member they lose, until you are left with one person whose veins stick out on their forehead when they try to proselytize people to join their tribe, which rocks, and leave that other tribe, which sucks.

Why is this dichotomy so stark in the software development world? I suspect a couple of root causes. First, because it takes a non-trivial time investment to achieve proficiency in software tribes, people fear that they have chosen poorly and thus wasted their time. Perhaps the degree to which something rocks is proportional to the time invested in learning that technology. Second, technologists and particularly developers stereotypically tend to socialize via tribal ritual. How many software development teams have you seen that are not too far removed from fraternities? Because software is fundamentally a communication game, I think that the fraternal nature of most projects makes it easier to write good software. But tribal ritual implies that one of the defining characteristics of your tribe is the denigration of other tribes (we rock, they suck). In fact, some tribes within software seem to define themselves by how loudly they can say that everything sucks, except of course their beautiful thing, which rocks.

Some communities purposefully pick fights with others just so they can thump their collective chests over how much they rock compared to how much the other guys suck. Of course, you get camps that are truly different in many, many ways (Emacs vs. vi, anyone?). But you also see this in communities that are quite similar; one of the most annoying characteristics of some communities is how much a few of their members try to bait other communities that aren't interested in fighting.

The Suck/Rock Dichotomy hurts us because it obscures legitimate conversations about the real differences between things. Truly balanced comparisons are rare (for an outstanding example of a balanced, well considered, sober comparison of Prototype and jQuery, check out Glenn Vanderburg's post). I try to avoid this dichotomy (some would say with varying degrees of success). For example, for the past 2 years, I've done a Comparing Groovy & JRuby talk at JavaOne, and it's been mostly well received by members of both communities. Putting together such a talk or blog entry takes a lot of effort, though: you have to learn not just the surface details of the technologies, but how to use them idiomatically as well, which takes time. I suspect that's why you don't see more nuanced comparisons: it's a lot easier to resort to either suck or rock.

Ultimately, we need informed debates about the relative merits of various choices. The Suck/Rock Dichotomy adds heat but not much light. Technologists marginalize our own influence within organizations because the non-techies hear us endlessly debating stuff that sounds like arguments over how many angels can dance on the head of a pin. If we argue about seemingly trivial things like that, why listen to us when we passionately argue about stuff that is immediately important, like technical debt or why we can't disprove The Mythical Man-Month on this project? To summarize: the Suck/Rock Dichotomy sucks!

Wednesday, July 15, 2009

Productivity & Location Awareness

The iPhone has retaught me the power of location awareness in user interfaces. I have lots of iPhone applications (about 90 at the current count, but, in my defense, some of those are saved bookmarks), and until the iPhone OS 3 update, touching the icon was the only way to invoke them. Because I have so many, I started organizing them on desktops based on usage (for example, I have a travel desktop, a food desktop, a hyperlink desktop, etc.). This became too arbitrary, so I recently just went alphabetical for all but the first desktop, which has a special hot key to get back to it, making it the perfect place for really oft-used applications.

The point is that I've rearranged my iPhone icons several times. It continues to surprise me how quickly I remember the desktop and the location on that desktop of a given application. I very quickly learn where the applications I use all the time reside (TripIt, I'm looking at you), and I can get to them really fast. I find that even though Spotlight now works on the iPhone, I still generally go directly to the application via its icon.

While launchers like Quicksilver, Spotlight, and Launchy clearly work better for the huge numbers of applications you find on traditional computers, the power of location awareness suggests several things for the builders of applications.

  • Don't move stuff around.

Navigation controls are hyperlinks. But, because of ad placement, they move around slightly, turning reading the groups into a game of "Whack-a-Mole". I use either the NumberFox plugin or Firefox's incremental hyperlink search (the apostrophe hotkey) rather than chase the stupid hyperlinks with a mouse.

I hate applications that move menu options around based on usage. Consistency is important for usability. In fact, I use the Mac's Smart Menu Search feature, which allows you to incrementally search for menu items without regard for their physical location, as my favorite menu affordance.

  • Context sensitivity makes it hard to leverage location awareness.

Context sensitivity for toolbar buttons makes it hard to definitively learn where something lives, which kind of dooms the ribbon user interface metaphor in modern versions of Office. While I understand the need for a rethought user interface metaphor for the huge number of features (perhaps that's the underlying problem?), having a context-sensitive set of toolbars means that, to become really effective, you have to memorize each combination of buttons and the corresponding locations. Not having used the ribbon much (I avoid Office applications pretty assiduously), I can't say whether you eventually build up the cognitive ability to utilize location awareness.

  • User interface designers should understand Fitts's Law

Pop quiz: what's the biggest clickable target on your screen? It's the one right under your cursor, which is why the right-mouse menu should have the most important things on it. The target right under your mouse is effectively infinitely large. Second question: what's the next biggest target? The edges of the screen, because you can accelerate as fast as possible toward the edge and not overshoot it. This suggests that the really important stuff should reside on the edges of the screen. These observations come from Fitts's Law, which states that the ease of clicking on a target with the mouse is a combination of the distance you must travel and the size of the target.
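Fitts's Law is usually stated as a formula: predicted pointing time grows with the log of distance over target width. Here's a small sketch of the arithmetic (my own illustration with made-up constants, not something from the original post) showing why a target backed by a screen edge, which you can slam into without overshooting, behaves like a much deeper target than a skinny in-window strip:

```python
import math

def fitts_time(distance, width, a=0.0, b=0.1):
    """Predicted pointing time (seconds) using the Shannon formulation:
    T = a + b * log2(1 + D/W). The constants a and b are device- and
    user-dependent; the values here are arbitrary placeholders."""
    return a + b * math.log2(1 + distance / width)

# A 20-pixel-tall in-window menu strip, 460 pixels from the cursor:
windowed_menu = fitts_time(460, 20)

# A screen-edge menu at the same distance: since the pointer stops at
# the edge, the effective target depth is huge (say, 200 pixels).
edge_menu = fitts_time(460, 200)

print(windowed_menu, edge_menu)  # the edge-backed target is predicted faster
```

The exact numbers don't matter; the point is that widening the effective target (the W term) shrinks predicted pointing time just as surely as shortening the distance does.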

The designers of Mac OS X knew this law, which is why the menu bar resides at the top of the screen. When you use the mouse to click one of the menu items, you can ram the mouse pointer up against the top of the screen and you are where you want to be. Windows, on the other hand, has a title bar at the top of each window. Even if the window is maximized, you still must carefully find your target by accelerating to the top, and then use some precision mousing to hit the target. For right-handed users, the upper right corner is an easy mouse target. What's there on the Mac? Spotlight, the universal search utility. What's there on Windows? Nothing, unless your application is full screen, and, if it is, it's the close button (which suggests the most important thing you can do to a Windows application is close it).

There is a way to mitigate this for some Windows applications. The Microsoft Office suite has a Full Screen mode, which gets rid of the title bar and puts the menu right at the top, like Mac OS X. There is help for developers, too. Visual Studio features the same full-screen mode, as does IntelliJ for Java developers. If you are going to use the mouse, using your applications in full-screen mode makes it easier to hit the menus because it takes advantage of location awareness and consistency.

Regardless of the power of location awareness, for sophisticated computer users (like developers), location awareness doesn't scale. You should spend the time to learn the keyboard shortcuts for every possible thing you need to do. It takes longer, but it scales almost indefinitely. In fact, I turn toolbars and buttons off in IDEs and Emacs and take the time to learn how to get to what I need without reaching for the evil mouse. I'm curious to see how much I start using Spotlight on the iPhone as the number of applications I have keeps growing (which seems inevitable at this point).

Tuesday, June 23, 2009

Orlando JUG on Thursday June 25th

If you are anywhere nearby, come see me at the Orlando JUG on June 25th, 2009. I'll be giving my newly revamped Real-World Refactoring talk. By revamped, I mean that I've added a bunch of examples of architecture smells and how to attack them. From the No Fluff, Just Stuff web site description of the talk:

Refactoring is a fine academic exercise in the perfect world, but we don't really live there. Even with the best intentions, projects build up technical debt and crufty bad things. This session covers refactoring in the real world, at both the atomic level (how to refactor towards composed method and the single level of abstraction principle) to larger project strategies for multi-day refactoring efforts. This talk provides practical strategies for real projects to effectively refactor your code.

This talk is part of a series of talks I'm doing this year on Emergent Design & Evolutionary Architecture, showing examples of how to use refactoring to fix architectural and design smells. I also cover refactoring databases and build files.

Come join us.

Wednesday, June 10, 2009

AML (Arbitrary Modeling Language)

UML is a failure. It failed for several reasons, but mainly because it falls into the cracks between technical people (developers, architects) and non-technical people (business analysts, project managers, etc.): too technical for non-technical people, and not technical enough for technical people. By this, I mean it isn't precise enough for techies to do serious design work with, yet it's obscure enough to be mostly incomprehensible to non-techies.

This wasn't the Three Amigos' fault. They did quite impressive work on the meta-model aspect of UML. It was defeated by two forces. First, the fundamental problem lies with the amorphous nature of software itself. Coming up with a really expressive graphical notation is hard. Most developers know enough to draw boxes for classes and open arrowheads for inheritance, but don't get much further into the UML specification because it gets quite convoluted (especially if you start looking at the later generations of UML, with the Object Constraint Language and its ilk).

The second failure reason is the implicit assumption that you need (nay, must) design all the classes and interactions before you start writing code. Big Design Up Front is a failed technique in almost all software development. The only exceptions are systems that are truly life and death. One of the reasons for the outdatedness of the software on the space shuttle lies with the fact that they have very long iterations. In other words, they are willing to say "once this date passes, we will make no changes to the design of this system. Period." While most business software could make this statement, it ill serves the business. Business processes change like the weather, and you need software that can change just as readily. I don't come to this discussion as a dilettante: for a while, I worked for a company that was a Rational partner. We did the training, and we built software based on the Rational Unified Process. We even had some successes. But it didn't take long for us to realize that the upfront design didn't serve our clients because it hampered the kinds of changes required by their business.

Most developers I know use AML: Arbitrary Modeling Language, usually consisting of boxes, circles, and lines. When a given developer writes on a whiteboard, they write in their own version of a diagramming language. It's a shame that we don't have an industry-wide diagramming language that everyone feels compelled to use, but that's the reality in most places I've been for the last 5 years. Having said that, though, I'm a fan of AML, because it cuts down on irrational artifact attachment: you have nothing except the last 5 minutes invested in the diagram, making it as transient as possible. Transient artifacts are good because you're willing to throw them away, preventing them from becoming part of the documentation for your project once the actual code has migrated away from that initial stab at design. Out-of-date documentation is worse than none at all because it actively misleads.

Tuesday, May 26, 2009

Mac Boot Mysteries

This is a long, digressive story about diagnosing a hardware problem on a Mac; if you dislike such stories, feel free to leave now.

About a week ago, my wife Candy complained to me that her Mac wouldn't boot up. This is my hand-me-down Mac (we have a new policy in our house: Candy gets my hand-me-down computers, and I get her hand-me-down cameras), which means that it's about 2 years old, but it has a relatively new hard drive that I installed last November. A long time ago, I had set the startup option to always run in Verbose startup mode (available on demand by holding COMMAND-V upon startup, or permanently by issuing the following command):

sudo nvram boot-args="-v"

Anyway, I could see from the startup porn that she was having a kernel panic with 2 likely suspects: the fan control daemon and something about Cisco VPN. Now, Candy doesn't have a Cisco VPN, but given that this was my hand-me-down machine, that explains why some of that stuff is there. Candy hadn't installed anything in the last week or so, leading me to think that one of these two was the culprit. She had been complaining that her machine was getting slower and slower, including things like window resizing, which had me puzzled. Perhaps a dying fan was causing the processor to overheat and thus slow down?

I tried safe boot (no joy), and at this point I suspect the fan. I'm certainly not afraid to crack open a Mac (with proper respect), but replacing a fan isn't high on my list of fun things to do, so we made an appointment at the Genius bar. To Candy's credit, she had a SuperDuper! backup that was just a couple of days old, so virtually everything was safe.

We went to the genius bar, where the GenX slacker (this is a compliment) booted the Mac from an external drive. I hadn't tried this (even though I have several bootable drives lying around) because I was fixated on the fan problem. After booting it up, his suspicion fell on the VPN stuff, and I reluctantly concurred (especially after he ran some fan diagnostics). Now, though, the question remained: why did this problem suddenly occur? And what was his (depressing) advice to fix it? Reinstall Leopard and all your applications. What?!? Is this a freakin' Windows machine? I couldn't believe that was real Genius advice. I've never yet had to do a ground-up reinstall of everything, but if that's the only way...hmmmm. He was very knowledgeable, but obviously he doesn't tread in the realm of VPN stuff. He also correctly pointed out that a bad fan shouldn't cause slowness: redrawing windows is mostly handled by the GPU on the Mac. The slowness was, as far as I can tell, a red herring.

When I got home, the first thing I did was boot Mac OS X from an external drive and take a fresh SuperDuper! snapshot of the machine's current state. Once I had that, I could play. Candy had already agreed to the pain and degradation of reinstalling everything, but I had to think there was a better way. Then I had a brainstorm: I took the SuperDuper! snapshot I had just made and booted the machine from the external drive. Success! That suggested that some part of the internal hard drive that houses the VPN stuff had somehow gotten corrupted, even though the same image would still boot from an external drive. Because I had the SuperDuper! safety net, I decided on an experiment. I reformatted the internal hard drive and ran Drive Genius on it to scan for bad sectors. Nothing of note came from that, but then I overlaid my most recent SuperDuper! snapshot back onto the internal drive.

Success! The internal drive now boots, and everything appears back to normal. I'm guessing that my bad sector theory was correct.


Some lessons learned:

  • Don't reinstall everything! My record is still clean on that account: I have never had to do that on a Mac yet (and it was a once-a-year chore on Windows because of bit rot).

  • Always have good backups. This would have been a tragedy rather than a comedy if Candy hadn't been using SuperDuper!. It has yet to let me down, and it has saved my bacon on several occasions.

  • I immediately latched onto the fan because it seemed to support other observed phenomena. I should have booted it myself from an external drive and run Drive Genius, but I thought I had it figured out.

  • Stop and think. It was a good thing that we had dinner plans with a neighbor when we got back from the genius bar. It was over dinner that I had the idea of just overlaying the snapshot again. If I had started on it as soon as we got back, I would have been creating a lot of movement without a lot of forward progress. Sitting and thinking about it opened my mind to alternative options.

  • SuperDuper! rocks. I can't imagine life without it.

Monday, May 11, 2009


The economic downturn has affected conference attendance. At the conferences where I've spoken in the US, attendance seems to be down 20-30% from last year. However, it doesn't seem to have been as bad at European conferences (of course, it may just be the conferences where I'm speaking), where attendance is down only a little. It was surprising, then, that O'Reilly decided not to hold RailsConf Europe this year.

However, someone has stepped in to fill the gap: RailsWayCon is happening in Berlin from May 25-27. They have gathered speakers from far and wide in what looks like a rockin' good conference. If you're anywhere in the neighborhood and looking for a Ruby and Rails conference, this is the one for 2009.

Friday, May 01, 2009

Confessions of a Reformed Titillator

The Rails community has a real brouhaha on its hands, but it's a red herring that it happens to be Ruby and Rails because it's a pervasive problem in scientific and engineering fields of all kinds. It seems that a presentation at the GoGaRuCo Ruby conference (Golden Gate Ruby Conference) included a heaping pile of really racy, semi-pornographic imagery. The presenters hand-waved it away by telling the attendees that the images existed to keep everyone's interest in their presentation. And, fortunately, some people called their bluff.

Studies show that imagery that appeals to people's basest instincts (sex, violence, humor) is the easiest way to keep people interested in an otherwise boring topic. Lots of presenters use this technique to engage attendees and keep their attention. I know because I've used it myself. And it's really our most slovenly kind of laziness as presenters at work. Let me explain how I've come around to this conclusion.

At RailsConf last year, Joel Spolsky did one of the morning keynotes. As his first slide, he showed a glamor shot of Angelina Jolie and said (I'm paraphrasing) "I always show this as my first slide because I always get better evaluation scores on my keynotes when I do." His next slide showed Brad Pitt, with his shirt open, and Joel added "And, just to be demographically fair, I show this one next." Joel was plugging into 2 techniques for capturing attention: sex and humor. And it works on some level. The crowd (mostly) loved it. In his keynote, it was basically gratuitous: he never used it for anything other than pure pandering. But I'm always on the prowl for effective presentation tricks, so I borrowed Joel's trick, but with a twist.

In my Ceremony & Essence keynote, which I gave at a handful of Ruby events last year, I had a similar picture of Angelina and Brad at the start, using it to get a laugh right out of the gate. In a keynote, if you can get people laughing early, they loosen up and are more likely to laugh again and emote more back to the presenter. However, my use wasn't merely gratuitous. I used other images of Angelina (and Brad) throughout the talk as an anchor point, serving two purposes. First, because everyone laughs up front, seeing a similar image reminds them of that, making it more likely they'll laugh again. The other purpose was to pull the narrative along. Bringing up a topic early in a presentation, then allowing people to forget it, then bringing it back at an unexpected time is one of my favorite techniques in presentations. It allows attendees to make connections between disparate things that have more impact when you "force" them to make the connection themselves, rather than beating them over the head with it. For example, I brought Angelina up again when talking about demand for developers outstripping the supply, showing a publicity photo of her in the movie Hackers. This is a not-so-subtle anchor point that hopefully makes people realize that one of the reasons developer demand outstrips supply is the paucity of female developers, which hopefully makes people ponder that a bit. I used Angelina (and, to a lesser extent, Brad) throughout the talk as those kinds of anchor points. Now, realize, these images are in no way pornographic. They are just publicity photos of famous actors. However, I was sensitive to the fact that some women might find this unsettling, so I made a deal with myself: if anyone ever complains about those images, I'll remove them (and restructure the talk) with no questions asked.

That happened earlier this year. One attendee at a keynote wrote me a very nice note afterwards telling me that she wasn't comfortable with the imagery of Angelina in the talk (and that the presence of Brad didn't help). That was my cue to stop using that imagery and find other anchor points to make my points. Her email made me realize the pervasiveness and toxicity of this kind of imagery, and that, while convenient, it was ultimately just laziness on my part. Why do you think that so much entertainment falls back on sex and violence to keep people interested in otherwise pretty dull drivel? Watching a show like The West Wing, which doesn't traffic in that kind of stuff, shows that quality writing doesn't have to fall back on gratuitously titillating material. Ultimately, using sexually provocative material in a technical presentation is just lazy -- when we do it we're not spending the time to come up with really compelling metaphors to represent something, relying instead on the basest of currency. Presenters, myself included, need to do better.

Lots of people who aren't affected by this will say that this is a tempest in a teapot, and that the offended parties are overreacting. Insidious misogyny is like lazy racism: people who engage in it hide behind a casual facade of "Oh, really, was that offensive?" Yes, by the way, it was.

Let me reiterate a point: this isn't about Rails or the Rails community (I still haven't gotten my official code ring and Certificate of Membership for this "community", by the way). On average, presenters at Ruby and Rails conferences put a lot of effort into creating compelling presentations, paying attention to metaphor, presentation style, compelling imagery, etc. The conferences where I attend the most talks are Ruby/Rails events and the Trifork conferences (JAOO and QCon). Kudos to presenters who care about creating compelling presentations. Sometimes, though, pushing the envelope on edgy entertainment crosses a line, which is what happened in this case. It could happen at any technical conference where the presenters are pushing hard on the creative aspect of technical presentations.

I strive not to be lazy when I put together presentations, to find compelling metaphors that don't inadvertently offend entire groups of people. I think it is an important maturity step for engineering and software communities to vote with their feet: outrage only goes so far, but notifying those who lazily offend effectively sends the message that it's not OK.

Wednesday, April 22, 2009

Guerrilla SOA (SOA & The Tarpit of Irrelevancy)

This is the sixth in a series of blog posts where I discuss what I see wrong with SOA (Service Oriented Architecture) in the way that it's being sold by vendors. For those who are behind, you can read the previous installments here:

In all the previous posts, I've basically been elucidating the reasons why I think most SOA projects are a quagmire: misunderstanding the goals of the approach, the way most vendors are poisoning the water by selling ridiculously complex tools and services, why these projects are so seductive for developers who fetishize complexity, and hype. If you read all these posts back to back, you'll surely have the impression that I think all hope is lost for enterprise integration.

But it's not.

Just like any software project, it is possible to do SOA right. My colleague Jim Webber has done lots of outstanding work in this area, under a rubric called Guerrilla SOA. The basic ideas are:

  • treat SOA projects like any other project, and apply the same agile principles: continuous integration, testing, simplicity

  • don't use tools just for the sake of using tools. Try to keep the tool stack as simple as possible, and as automated as possible

  • use loosely typed endpoints and document/resource orientation to allow the different parts of your architecture to evolve independently
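The last bullet can be sketched with a tolerant-reader style handler (a hypothetical illustration of mine, not code from Jim's material): the endpoint consumes a whole document, picks out only the fields it needs, and silently ignores the rest, so producers can add fields without breaking consumers:

```python
def handle_order(document: dict) -> dict:
    """Process an order document, reading only the fields this service
    cares about and ignoring anything else the producer may have added
    since this consumer was written."""
    order_id = document.get("order_id", "unknown")
    items = document.get("items", [])
    total = sum(item.get("price", 0) * item.get("quantity", 1)
                for item in items)
    return {"order_id": order_id, "total": total}

# A newer producer adds fields this consumer has never seen; because the
# endpoint is loosely typed, there is no schema to break.
order = {
    "order_id": "A-42",
    "items": [{"price": 10.0, "quantity": 2, "gift_wrap": True}],
    "loyalty_tier": "gold",   # unknown field: simply ignored
}
result = handle_order(order)
print(result)  # {'order_id': 'A-42', 'total': 20.0}
```

Contrast this with a strictly typed, generated binding, where an unexpected field (or a reordered one) can fail deserialization and force both sides to upgrade in lockstep.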

It's best said from Jim's own mouth. Jim's British, so when he curses in technical presentations, people just think it's quaint (whereas when I do it, it's crass).

This is the way we generally approach SOA projects: like any other project. SOA doesn't have to be a huge scary thing. It's just software, and we aren't going to throw our playbook out the window just because it sounds scary.

The term SOA has been so co-opted by vendors trying to sell stuff that I think it will die off as a term. We'll still be doing soa (note the lowercase letters), but we'll have to develop another three-letter acronym because the old one has slipped into the tarpit of irrelevancy.

Saturday, April 11, 2009

Speaking at the Cologne JUG Monday, April 20th

I'll be speaking at the Cologne Java Users Group on the eve of the JAX Conference, on Monday, April 20th at 7 PM. I'm letting the JUG organizer pick the topic, so I'm not sure what I'll be talking about, but I'm looking forward to it: I've never spoken at a German JUG before. If you're in the neighborhood, stop by and we'll geek out, then have a real beer afterwards.

Update: Here is a link to the slides I presented at the JUG. Thanks for having me; I had a great time.

Thursday, April 09, 2009

Real World Refactoring in NFJS the Magazine

Several people have asked me whatever happened to the NFJS Anthology book series (The NFJS Anthology, Volume 1 and The NFJS Anthology, Volume 2: What Every Software Developer Should Know). Both books contained essays built around subjects speakers were passionate about that year. Alas, the publishing business being what it is, there wasn't enough demand to justify continuing the series.

After much discussion, it was decided that the series would be more dynamic in magazine form rather than book form, which explains the formation of NFJS the Magazine. This is a monthly publication written by NFJS speakers about something they are talking about this year, and of course something they are interested enough to speak and write about. Being in a magazine format makes it a bit easier to keep up to date, and the volume of material is higher because you get several articles a month.

I have a Real World Refactoring article in the upcoming issue, based on my talk of the same name this year. If you go to a No Fluff, Just Stuff show, you get a free copy of the magazine, but anyone can subscribe.

Monday, April 06, 2009

RailsConf Interview by Chad Fowler with Paul Gross and me

One of the marketing tools that RailsConf uses is a series of interviews about upcoming talks. Chad sent some interview questions to Paul and me about our upcoming talk Rails in the Large: Building the Largest Rails Application in the World, and he's posted the result on his site. If you want some opinionated conversation about Rails and how to use it to build enterprise applications, here it is (and we didn't use any Scala!). Enjoy!

Monday, March 23, 2009

The Triumph of Hope over Reason (SOA & The Tarpit of Irrelevancy)

This is the fifth in a series of blog posts where I discuss what I see wrong with SOA (Service Oriented Architecture) in the way that it's being sold by vendors. For those who are behind, you can read the previous installments here:

A very funny site shows that this Internet thing might not be a fad. The Chuck Norris Facts web site has lots of great hyperbolic claims about Chuck Norris, American actor and legendary bad-ass. Some of the "facts":

  • If you have five dollars and Chuck Norris has five dollars, Chuck Norris has more money than you.
  • There is no 'ctrl' button on Chuck Norris's computer. Chuck Norris is always in control.
  • Apple pays Chuck Norris 99 cents every time he listens to a song.
  • Chuck Norris can kill two stones with one bird.
  • When the Boogeyman goes to sleep every night, he checks his closet for Chuck Norris.

Some of this may be just a tad exaggerated. I'm pretty sure that when Chuck Norris does pushups, he is not in fact pushing the earth down instead of pushing himself up. The site is here, if you want to go read more of them. I'll wait.

OK, now that you understand more about Chuck Norris, here's another site of over-the-top exaggeration about an over-hyped subject: SOA Facts, modeled after Chuck Norris Facts:

  • SOA is the only thing Chuck Norris can't kill.
  • SOA invented the internet, and the internet was invented for SOA.
  • SOA is not complex. You are just dumb.
  • SOA can always win at TicTacToe. Even if you go first.
  • One person successfully described SOA completely, and immediately died.
  • In a battle between a ninja and a jedi, SOA would win.
  • SOA knows what you did last summer, and is disappointed that it wasn't SOA.

I used a bunch of these in one of my SOA talks as bumper slides between the various topics, which provided a nice fun icebreaker. But I reserved two of them for the last part of the talk because I think they aren't exaggerations at all, merely deep truths:

  • Implementing SOA for the first time is the triumph of imagination over intelligence.

  • Implementing SOA for the second time is the triumph of hope over experience.

SOA has gotten so complex, with so many moving parts, that getting it right is extraordinarily difficult. Once you've lived through one of these projects (especially if you've fallen into the other tarpits I discuss in the previous installments), you understand the first quote at a deep level. That you would try it again truly is the triumph of hope over reason.

Tuesday, March 10, 2009

Rubick's Cubicle (SOA & the Tarpit of Irrelevancy)

This is the fourth in a series of blog posts where I discuss what I see wrong with SOA (Service Oriented Architecture) in the way that it's being sold by vendors. For those who are behind, you can read the previous installments here:

Developers love to solve puzzles. One project manager with whom I used to work kept a jar full of little nail puzzles (like this) on his desk.


Any time he was having a conversation that he didn't want developers to listen in on, he'd grab one of the puzzles and toss it to them. Inevitably, the developer would grab the toy and immediately become totally absorbed in solving the puzzle. After about 10 minutes, the puzzle would yield up its secret, and the developer would look up and ask "Did anything important just happen?"

Developers tend to be problem solvers -- it's one of the appealing things about fiddling with computers. But what happens when you take a natural problem solver and present them with dull work, with no interesting challenges? What happens frequently is what I've deemed the Rubick's Cubicle anti-pattern.

If the presented problem isn't complex enough, developers will figure out ways to make it complicated and therefore challenging.

Writing the same dull CRUD application over and over is boring. But what if you could figure out a way to get all the simple CRUD applications to talk to one another? That's a nice and juicy puzzle. This perhaps explains the complexity fetish I see in so many "Enterprise" architectures and applications. Some of it is accidental complexity, accrued from years of piecing together parts that were never meant to work with one another. But I don't think accidental complexity covers the entirety of why things are so convoluted.

I remember back in the mid-90s, I was the CTO of a small training and consulting company. We were absolutely delighted when we first saw EJB: here was a technology no one could understand without extensive training. The same thing happened with all the variations of COM, DCOM, and CORBA. Those were flush times for training companies because we knew that developers would be drawn like moths to a flame, frequently with the same result.

Building the simplest thing that can work is sometimes duller than crafting some devilishly complex Rube Goldberg machine, but keeping it simple is a worthy challenge in its own right. If you find yourself in Rubick's Cubicle, stop and ask yourself: "Is there a simpler way to do this? Perhaps dismantling something that no longer serves its purpose? What is the simplest thing that could possibly work?"

Tuesday, February 24, 2009

Emergent Design & Evolutionary Architecture at DeveloperWorks

For the last few months, I've been toiling away on an article series for IBM DeveloperWorks, and it's rolling out today! From the abstract for the series opener:

This series aims to provide a fresh perspective on the often-discussed but elusive concepts of software architecture and design. Through concrete examples, Neal Ford gives you a solid grounding in the agile practices of evolutionary architecture and emergent design. By deferring important architectural and design decisions until the last responsible moment, you can prevent unnecessary complexity from undermining your software projects.

The first two articles in the series appeared today:

I plan to use this series to start a conversation about something that we all do every day but can't really describe well, even to other technical people (much less our grandparents). I don't presume to know the answers (I'm not even sure I know all the questions), but at some point we have to talk about it. In the first installment, I provide some working definitions and some overarching concerns. Let me know what you think about it.

Monday, February 16, 2009

Speaking at the IT Architect Regional Conference in Atlanta

At the end of February (the 25th - 27th), I'll be making a rare Atlanta conference appearance at the IT Architect Regional Conference, hosted by the International Association of Software Architects (IASA). This is the first in a series of regional conferences focused on an important but generally neglected segment of the developer population: software architects. What that title actually means is some matter of debate (hey, maybe this conference will help define the term), but the topics covered certainly tread some important ground. I'm doing a short version of my Smithying in the 21st Century keynote (overviewed here). My ThoughtWorks colleague Steven "Doc" List will also be there, imported from the west coast, convening an open space called Beyond Fight or Flight: Meetings Don’t Have to be Gladiatorial Combat, which sounds quite interesting. It's still not too late to sign up; hope to see you there.

Friday, February 06, 2009

Tools & Anti-Behavior (SOA & the Tarpit of Irrelevancy)

This is the third in a series of blog posts where I discuss what I see wrong with SOA (Service Oriented Architecture) in the way that it's being sold by vendors. For those who are behind, you can read the first and second installments.

While rank and file developers go to conferences to soak in deep technical content, their peripherally technical managers (the ones who wrote some rockin' good Cobol code back in the day, but now they make decisions about modern enterprise architecture) go to different conferences in Palm Springs. At those conferences, they have a 2-hour morning session, run by a big tool vendor, then play golf for the balance of the afternoon. And what the vendors show them is poison.

Mostly what they see these days are tools that support SOA and ESBs. And in particular, the favorite demo-ware application is their BPEL (Business Process Execution Language) designer. This designer allows you to wire together services by drawing lines between boxes. The lines can include transformations and other sexiness. And it demos great. "Look, just draw a couple of lines here and here, click on the Run button and voila! Instant SOA".

Then the manager brings it back home and notifies the developers that this is the new tool of choice. When developers start using it, they realize the awful truth: they've been sold a hairball generator. Tools like this work great at really small scales, when it's easy to see all the connections between things. But as things get complicated, they start suffering from the hairball effect: all the lines run together, and you can't create a diagram that makes sense to anyone anymore. Perhaps you can fight through this by creating workflows in chunks and zooming in and out.

Then reality arrives. Because you create workflows using these tools, you are coding, in the worst possible language (a graphical representation). Thus, you are defining behavior, just like you do when you write source code. But the behavior you define lacks all the benefits you get from writing it in code.

  • reuse: you can't really reuse portions of your workflow because there is no method or subroutine facility (you might get lucky with a sub-workflow). Mostly, people achieve "reuse" by copying and pasting, which you would never do in code.

  • refactoring: no refactoring, making it harder to identify common workflow chunks for reuse. When you don't have refactoring support, you stop watching for refactoring opportunities.

  • limited programmability: you don't get if statements and for loops, you get whatever this particular BPEL designer supports. You get flow-chart-looking stand-ins for real decision statements, but they are much more brittle than the facilities offered in modern languages.

  • testing: you can't write unit, functional, or integration tests for these workflows. The only real testing option is user acceptance testing, meaning that the entire universe must be up and running. With no unit testing, you also lose mock objects and the other testing techniques common in code.

  • hard to diff: let's say you fought the beast, got a non-trivial workflow up and running, and everything is great. In six months, you change it in non-trivial ways, and all is still good. Then it comes time to see what's different. BPEL tools don't have diff facilities, so you can either visually diff the diagrams (yuck) or diff two 10,000-line XML documents (double yuck). BPEL relies on either heavyweight diagramming tools or raw XML, with nothing in between.

Tools like this fall into a category one of my colleagues identified as doodleware: they let you create pretty pictures but collapse under scale. And they don't support all the common facilities offered by good old-fashioned code. Is it really worth giving up robust reuse, refactoring, testing, programmability, and versioning/diffing just to see the pretty picture? Ironically, it's pretty easy to generate a similar picture from code, using tools like GraphViz.
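To make the contrast concrete, here's a minimal sketch (in Python, with hypothetical step and service names of my own invention, not anything from a real BPEL tool) of a workflow expressed as plain code: the steps compose, the decision point is a real conditional, and each piece can be unit tested with a stubbed collaborator instead of a running universe.

```python
# A hypothetical order-processing "workflow" expressed as ordinary code.
# Each step is a plain function, so the workflow is just composition.

def validate(order):
    # A real guard clause, not a flow-chart diamond.
    if not order.get("items"):
        raise ValueError("order has no items")
    return order

def enrich_customer(order, lookup):
    # The customer lookup is injected, so tests can pass a stub.
    order["customer"] = lookup(order["customer_id"])
    return order

def route(order):
    # A real decision statement with a real boolean expression.
    return "expedite" if order.get("priority") else "standard"

def process(order, lookup):
    # The whole workflow: readable, diffable, refactorable, testable.
    order = validate(order)
    order = enrich_customer(order, lookup)
    return route(order)

# A "unit test" with a stubbed lookup -- nothing else needs to be running.
fake_lookup = lambda cid: {"id": cid, "name": "ACME"}
queue = process({"customer_id": 7, "items": ["widget"], "priority": True},
                fake_lookup)
```

Because each step is a named function, extracting a common chunk for reuse is an ordinary refactoring, and a six-months-later change shows up as a normal textual diff.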

I am a strong believer in the mantra

Keep behavior in code.

We have great tools for code (including ways to generate doodles) -- why would you want to abandon what works for something new and shiny? Except, of course, that code won't take you out for a golf outing in Scotland if you choose it.
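As a sketch of the "generate doodles from code" point: a few lines can emit a GraphViz dot description of a workflow's steps, so the pretty picture becomes a derived artifact rather than the source of truth. (The step names here are made up for illustration.)

```python
# Emit a GraphViz dot graph describing a sequence of workflow steps.
# Render the output with, e.g.: dot -Tpng workflow.dot -o workflow.png

def to_dot(steps):
    lines = ["digraph workflow {"]
    # Draw an edge from each step to its successor.
    for a, b in zip(steps, steps[1:]):
        lines.append(f'  "{a}" -> "{b}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(["validate", "enrich_customer", "route", "ship"]))
```

The diagram stays in sync with the code because it is generated from it, rather than the other way around.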

Sunday, February 01, 2009

On the Lam from the Furniture Police at Agile Atlanta

I have two keynotes this year that I'm presenting at one conference or another. The first out of the gate was my talk entitled On the Lam from the Furniture Police at the Code Freeze conference. This talk has multiple parent ideas. The first derives from the book The Productive Programmer. It turns out that when you write a book about something, you get really immersed in the subject matter. That's obvious. What's less obvious is the inability to turn that overwhelming interest off after the book is done. That can either be good or bad. Once I had finished Art of Java Web Development, I pretty much never wanted to see another web framework! But the Productive Programmer was different, I guess because it's such a broad and unsolved problem. For whatever reason, I'm still soaking up Productive Programmer stuff.

The other parent of this talk involves the intersection of agile software development and productivity. While there's a little of that in The Productive Programmer, I wanted to keep the focus mostly on developer productivity in the book. But that's where On The Lam from the Furniture Police fits. This talk discusses how to be productive in corporate environments, including broader topics like agility. Here's the abstract:

When you were hired by your current employer, you may think it's because of your winning personality, your dazzling smile, or your encyclopedic knowledge of [insert technology here]. But it's not. You were hired for your ability to sit and concentrate for long periods of time to solve problems, then placed in an environment where it's utterly impossible to do that! Who decides, despite overwhelming evidence that it's bad for productivity and that people hate it, that you must sit in a cubicle? The furniture police! This keynote describes the frustrations of modern knowledge workers in their quest to actually get some work done, and solutions for how to gird yourself against all those distractions. I talk about environments, coding, acceleration, automation, and avoiding repetition as ways to defeat the misguided attempts to sap your ability to produce good work. And I give you ways to go on the lam from the furniture police and ammunition to fight back!

Every time I make changes to this talk, it's to enhance its agile focus. So it's perhaps not surprising that I'll be giving a preview of this keynote at Agile Atlanta on Tuesday evening, February 3rd. If you're in Atlanta and interested in how productivity and agility intersect, stop by. And if you can't make it there, the next scheduled appearance is in Las Vegas, as the opening keynote of The ServerSide Symposium.

Wednesday, January 28, 2009

Why You Should Attend RubyRx

I attend a lot of conferences as a speaker, covering Java, .NET, Ruby, and Agility (plus a few other random topics). The most obvious differences between all these conferences are the technical topics, but other surprising differences exist as well. One of the most striking is the presentation style and content of the Ruby conferences. The quality of the presentations at Ruby conferences is nothing short of stunning. They tend to have not only fascinating topics, but really compelling presentation skills. It's as if everyone at Ruby conferences has read and internalized Presentation Zen by Garr Reynolds. No bullets in sight, lots of images, cool transitions, and just generally entertaining. It helps that virtually everyone at Ruby conferences uses a Mac, and therefore Keynote, which is light years ahead of PowerPoint. Keynote alone won't make a great presentation, but it certainly helps.

Not surprisingly, the worst conferences are those that force the speakers to use hideous conference-themed slide decks. Some conferences force you to create your presentation using a hideous template, then have some clueless intern enforce their misguided rules (up to the point of making changes to the slide show). The worst by far is Microsoft's TechEd. First, you must use PowerPoint (not too surprising, I guess). But last year I took a lot of time making slides with lots of images, centered text, and other transition tricks...only to have them disappear before my talk at the hands of someone who didn't have a clue. They literally took slides upon which I had big, centered text to emphasize a point and changed them to a slide with a made-up title and a single bullet point. This ham-handed treatment of my material is why I'm not doing TechEd this year.

But I digress. Ruby conferences on average have stellar presentations. And the RubyRx conference in Raleigh in late February sounds like it's going to be the exemplar of outstanding talks. It combines the expertise of the No Fluff, Just Stuff conference organization with a great lineup of speakers, including Matthew Bass, David Bock, Chad Fowler, Stuart Halloway (talking about Clojure!), Yehuda Katz, Russ Olsen, Jared Richardson, Venkat Subramaniam, Bruce Tate, and Glenn Vanderburg. And I'm going to be there too, like a kid at the adults' table.

RubyRx sounds like it's going to be the great early 2009 Ruby conference. Come see what the fuss is about, both technologically and presentationally.

Wednesday, January 21, 2009

Upcoming Keynote: Smithying in the 21st Century

I have two keynotes that I'm giving at various conferences this year. The first out of the gate was On the Lam from the Furniture Police, which I debuted at the Code Freeze conference in Minneapolis -- I'll have more to say about this keynote in a future blog post when it's about to appear again.

Smithying in the 21st Century comes next. I love metaphorical titles, and this one harks back to the changes in the blacksmith profession just after the turn of the century (in this case, the 20th one). If you were a blacksmith in 1890, you had a terrific career path. However, once automobiles came along, the profession gradually diminished to a shadow of what it was.

Here's the abstract:

Blacksmiths in 1900 and PowerBuilder developers in 1996 have something in common: they thought their job was safe forever. Yet circumstances proved them wrong. One of the nagging concerns for developers is how do you predict the Next Big Thing, preferably before you find yourself dinosaurized. This keynote discusses why people are bad at predicting the future, and why picking the Next Big Thing is hard. Then, it foolishly does just that: tries to predict the future. I also provide some guidelines on how to polish your crystal ball, giving you tools to help ferret out upcoming trends. Don't get caught by the rising tide of the next major coolness: nothing's sadder than an unemployed farrier watching cars drive by.

I'm debuting some form of this one twice in one day! I'll give a short version of this keynote at the International Association of Software Architects' ITArc Atlanta Regional Conference as half of the opening keynote on Friday morning. Then, I get on a plane and fly to Milwaukee, where I'm scheduled to give the first real version of it as part of the No Fluff, Just Stuff Greater Wisconsin Software Symposium. The research has been a blast, and I'm looking forward to putting it in front of people. There should be some interesting surprises lurking...

Sunday, January 11, 2009

Standards Based vs. Standardized (SOA & the Tarpit of Irrelevancy)

This is the second in a series of blog posts where I discuss what I see wrong with SOA (Service Oriented Architecture) in the way that it's being sold by vendors. The first installment is here.

Back in the very early days of Java web development, interacting with the tools sucked. Every servlet engine vendor had their own deployment scenario, and they varied wildly between vendors. I worked on several projects where we moved a web application from one dysfunctional servlet engine to another (this was in the early days, and they all sucked in one way or another). Then, the J2EE standard came along, including the WAR file format. Suddenly, you had a fighting chance (well, after a few iterations of the standard) of deploying to multiple application servers (notice how "servlet engine" became "application server"). And the vendors hated that.

They hated it because the J2EE standard turned their application servers into commodities. And the price of commodity software quickly approaches zero. That's why the application server vendors immediately started building more elaborate stuff on top of J2EE: portlet engines, content management, elaborate load balancers, etc. They knew that something like JBoss would come along and support the J2EE standard, making it impossible to charge $17,000 per CPU to deploy their application server (this number sticks out because one of my clients in the early 2000s paid WebLogic exactly that...for 8 CPUs). The J2EE standard essentially ran the server vendors out of the business of supplying just an application server.

Contrast that with database vendors. They are still able to charge big bucks for their servers, and keep companies (sorry, Enterprises) on the upgrade treadmill. An ANSI standard for SQL exists but (and this is the key part) it is so weak that it's useless. When last I checked, the ANSI standard for SQL didn't even include indexes, and no one would reasonably deploy a database server without indexing. The database vendors dodged the commoditization bullet by ensuring that the SQL standard would remain toothless in perpetuity.

Now, let's apply this to Enterprise Service Buses. One of the bits of marketing mantra touted by all the vendors of these Rube Goldberg machines is "standards". And they're selling these things into Java shops accustomed to the J2EE standard, so companies equate "standards" to "works the same across all players". All these shops are thinking of standards under the J2EE light, not the database server light. But here's the rub:

ESBs are standards-based but not standardized.

This distinction is important. All the extant ESBs use standards in every nook and cranny, but it's all held together by highly proprietary glue. The glue shows up in the administration tools, the way their BPEL designer works (along with their custom BPEL meta-data), how you configure the thing, how you handle routing, etc. The list goes on and on. These vendors will never allow the kind of standardization imposed by Sun in J2EE. The last thing the vendors want is to see their (crazy money making) babies turned into commodity software again. They'll make noise about creating a true standard, but it won't happen. They want to be more like the database vendors, not the application server vendors.

Even the open source offerings in the ESB space suffer a little from this because they are giving the bits away and selling consulting and training. This is a good business model (look what it's done for JBoss), but they have the same motivation to keep you locked into their version of proprietary glue.

Having proprietary glue is not necessarily a bad thing. It's one of the factors you have to consider anytime you are thinking about turning part of your infrastructure over to an externally developed piece of software. Obviously, no one is going to build their own database server -- it makes good sense to buy an existing one, and fight the nasty battle if and when it comes time to move to another one. BUT, you need to understand the distinction between standards-based and standardized so that you don't buy yourself into a real long-term nightmare.

Friday, January 02, 2009

Tactics vs. Strategy (SOA & The Tarpit of Irrelevancy)

This is the first in a series of blog posts where I discuss what I see wrong with SOA (Service Oriented Architecture) in the way that it's being sold by vendors. The first installment is about how the need for SOA arose: tactics vs. strategy.

No company starts out as an Enterprise; they all start as a company, with just a few employees. As such, their IT needs are small, handled by a small group of developers who can chat with each other over lunch about what's going on in the IT "department". As the business needs software, they have some process to get the requirements to the developers, and they write code. Thus, the accounting department comes to the developers and says "Here are our requirements for our new accounting application", and the developers build some version of it. Inside that application are some small parts of something that the entire company cares about, for example, some of the aspects of customers. Meanwhile, the marketing department comes to the developers and says "We need an application, and here are the requirements", and the developers build it. Of course, this application also encapsulates some aspects of a Customer (not the same as accounting's, but possibly with some overlap). It is rare indeed that anyone looks around and tries to come up with a comprehensive strategy for application interoperability: you don't have time for that -- you're a small company, and if you don't get your software, you'll go out of business. This goes on for a while.

Then, one day, you wake up, and you're an Enterprise. The CIO looks around with dismay at all the actual entities the corporation cares about because they are scattered in bits and pieces in all these different siloed and packaged applications. And the database stuff is a mess, with shared tables and views and ETL jobs running on cron jobs. And the CIO throws up in his mouth, just a little. The CIO looks at the landscape, and realizes that the technical debt incurred over the last few years can only get worse from here, so he calls out "Help." Big software vendors are highly attuned to people in big companies (sorry, Enterprises) who can write checks saying "Help". They ride in with a solution wrapped around the SOA moniker. More about our friends the Enterprise vendors in a future part of the series.

What I'm concerned about in this post is the overall landscape, which is another way of asking "How did you get where you are now?" You got here for two reasons: first, you took the path of least resistance when you were a company (before you became an Enterprise) because, if you had taken the time to build a comprehensive strategy, you'd have never survived as a company. Second, and more important, what is strategic to the business is always tactical to IT. Business people can go into a conference room and change the entire direction of the company in 2 hours. Or, the business may decide to merge with another company that has truly dysfunctional IT, but other business concerns override that. IT can never move as fast as the business, which means that IT always has to respond tactically to the decisions and initiatives brought forth from the business. No matter how much effort you put into a comprehensive, beautiful, well-designed enterprise architecture, it'll be blown out of the water the first time the business makes a decision unlike the ones that came before. The myth of SOA sold by the big vendors is that you can create this massively strategic cathedral of enterprise architecture, but it always falls down in the real world because the COO (and CEO) can override the CIO (and his sidekick, the CTO). If you can convince your organization to allow IT to set the strategy for what capabilities the business will have long-term, you should. However, your more agile competitors are going to eat your lunch while you build your cathedral.

Any enterprise strategy you implement must recognize that IT will always be in tactical mode, because the business strategy doesn't require physical labor. Any enterprise architecture you develop must allow the business to evolve according to its wants (and needs). This is what my colleague Jim Webber calls "Guerrilla SOA" and what I call "Evolutionary SOA". More about the details of evolutionary SOA in upcoming installments.