tag:blogger.com,1999:blog-99442212024-03-13T08:10:31.790-05:00Meme Agora<i>meme</i>: an idea, behavior, or usage that spreads from person to person within a culture <br/>
<i>agora</i>: a gathering place<br/><br/>
Welcome to Neal's gathering place for ideas.
<hr/>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.comBlogger263125tag:blogger.com,1999:blog-9944221.post-46251830470148348022010-06-09T16:32:00.007-05:002010-06-10T10:47:12.872-05:00The iPad: the Good, the Bad, and the Ugly<cite><br />Disclaimer:<br />I'm a hard-core lover of Apple products; my wife & I own 4 Macs between us. However, I also hate Apple's stance as a company on lots of topics. In fact, I would probably never use their products if they weren't so damn good. I'm just disclaiming that I may not be objective (although I try to be).</cite><br /><br />I was not one of the early enthusiasts for the iPad when it was first announced. I did not pre-order one and really had no interest in one until I had a chance to touch one. It didn't seem to make a lot of sense for me: I split my time between my laptop and the iPhone (which I truly love). The iPad didn't seem to offer much that I didn't already have.<br /><br />But I realized in the first 10 minutes of playing with one that I was looking at the first incarnation of the next platform, and I quickly ordered one (actually, 2 — one for my wife & one for me — like the iPhone, I knew she would never wait for a hand-me-down). This blog post is a compilation of my thoughts and impressions.<br /><br /><h1>General Thoughts</h1>It doesn't take long using an iPad (especially if you've been to the Apple iPhone school of user experience) for it to feel natural. The main difference is how much the platform manages to disappear, more quickly than any device I've ever used. That's one of the things I love so much about OS X: it tries to get and stay out of your way.<br /><br />Using a well-designed iPad app (and there are already some pretty stunning ones), you quickly forget all about the hardware/software combination that is the life support for the app. 
The application just seems to work.<br /><h2>Canon Cat</h2>In <a href="http://en.wikipedia.org/wiki/The_Humane_Interface#Contents">The Humane Interface</a>, Jef Raskin (one of the early Macintosh developers and a well-respected researcher in human-computer interaction) wrote about human-computer interaction at a pretty abstract level. He designed and sold a computer called the Canon Cat which implemented some of his user interface ideas. One of the most shocking things that Raskin pointed out is that computer users don't want file systems; they just want to work on stuff. The Canon Cat had no recognizable file system: you typed documents and then searched to find previous ones. The iPhone/iPad also have no visible file system: every application "owns" its own data. This causes a few headaches for applications that need to sync their data or share files, but applications (like <a href="https://www.dropbox.com/">DropBox</a>) fill that void nicely. I'd rather have a few applications figure this out for themselves than punish every user by making them understand the nuances & headaches of dealing with what really should be low-level abstractions. How many family members have you had to walk through downloading a family photo and then helping them figure out where it ended up on their computer?<br /><br />For those who doubt what a paradigm shift this is (and how far away even close-seeming competitors are), consider that at the time I wrote this, 2 of the top-selling applications on the Android store don't make sense on the iPad: a task killer & a file system browser. Despite the cosmetic similarities between the iPad and competitors, the differences go much deeper.<br /><h2>Death of the Mouse</h2>The iPad represents the death of the mouse. You don't need one, don't miss one, and it would be a huge leap backwards to try to use one. Your finger does what needs to be done, and it's scary easy. The only real exception to this is stylus-type usage. 
The iPad is a lackluster real-time note-taking device for handwritten notes. However, the built-in keyboard works better than I expected.<br /><h1>The Good</h1>As you can probably guess, I like a lot of stuff about the iPad.<br /><h2>Mechanics</h2>My general computing time (time spent on notebook, iPad, & iPhone (minus calls)) has gone up slightly since getting the iPad. Most of the displaced time has come from the iPhone: I estimate that 80% of my iPad usage comes from less time on the phone and 20% from less time on my computer. When I first got it, I had the normal hedonic glow I get from new cool toys, and I looked for excuses to use it. That quickly went away, and now I find myself using it for very specific tasks. Case in point: I realized I wanted to add this section while standing in line to get on a plane; I'm typing this in my seat waiting for the rest of the passengers to board. Not enough time to get the entire laptop out, but way more than I would want to type on my phone. Once you've been using the iPad, some of the applications on the iPhone feel really claustrophobic. Some applications (and games) still work best on the phone, but it is certainly the case that some (most?) work better on the more capable hardware and form factor.<br /><h2>Killer Uses</h2>Watching movies rocks on the iPad. It has a beautiful 1024x768 display (which was the standard resolution for my laptop for years). Watching content at the gym is absolutely transformative. The Google Maps application quickly shows that this is the perfect form factor and, more importantly, interaction model. The way you pinch and zoom to navigate around the map feels right, along with the gradual exposition of details as you do so.<br /><br />Listening to music with the lyrics available is another killer feature for me (I listen to a fair amount of music — progressive rock & opera — where the lyrics really matter). 
If you associate lyrics with a song as part of the metadata in iTunes, the iPad has a mode that shows you the lyrics to the song that's playing. You've been able to do this on iPods & the iPhone for a while, but this form factor makes all the difference.<br /><br />The iPad is definitely a "lean back" device, rather than a "lean forward" device. The interaction model for a laptop is well established and doesn't have a lot of options because of the form factor. Whether at a desk or on an airplane, you pretty much use it the same way. The iPad, on the other hand, <a href="http://pragprog.com/magazines/2010-06/swaines-world">has lots of different interaction modes</a>. I've seen a lot written about "habit spaces" and interaction models — the iPad makes you realize how true that is, and provides a blank slate for creating some new habit spaces & interaction models.<br /><br />Using a mind mapping tool on the iPad makes me never want to go back to using a notebook for this activity. The mind mapping tools aren't as powerful yet, but the interaction is exactly right. I find I use my iPad a lot for brainstorming away from my computer, which is where I would like it to happen because of habit spaces. And OmniGraffle for the iPad makes you rethink how drawing tools should work.<br /><br />One must-have accessory is the Apple case for the iPad. It is a marvel of engineering in its own right, and it makes it easy to find the right interaction mode for what you're doing. Having a case that allows you to put it into various physical permutations makes it easier to get immersed.<br /><br />The iPad is the ultimate airplane accessory: movies, books, games, and interacting with the outside world on planes with wi-fi. 
I have now partitioned my travel life into 2 eras: pre-iPad & post-iPad, and you can guess which is better.<br /><br />As you've probably guessed, this entire entry was researched, organized, and written on the iPad, and it was a pretty good experience, much better than I thought it would be. I can anticipate doing more of this type of writing on the iPad.<br /><h1>The Bad</h1>I have very little to dislike so far. It really does need rudimentary background processing just to allow stuff like Skype + X. The proposed changes coming in the 4th release of iOS should solve most of those problems. Doing real work on the iPad feels a little like being back in the DOS days. I had my notes for this blog entry in a mind map & had to switch back and forth. However, the pain was minor because the applications "start up" so fast (in quotes because that seems like kind of a quaint notion on the iPad: applications don't really have life cycles).<br /><h1>The Ugly</h1>Nothing about the device is ugly. The only ugly thing in the neighborhood is Apple's sometimes overly controlling ways. But you have to realize that the future of applications on these devices falls into 2 categories: curated native applications from the AppStore and wide open, standards-based web applications, delivered through the browser. By maintaining control over the native applications, they remove the need for things like anti-virus software. They want to closely guard those applications all the way down to the look and feel. I agree that if they allow cross-compiled applications, it dilutes their control. But that control makes a difference.<br /><br />Before the iPhone, I was a die-hard Palm (and Treo) user, both of which offered an application development platform. Because there was no curator, 98% of Palm applications were abysmal, to the point that I stopped adding new applications. 
The Apple AppStore largely fixes that.<br /><br />Android is going to suffer the same fate if they aren't careful. The interesting difference between the Palm & Android eras is the strength of social networks, especially user ratings. The largely unanswered question: will social networks be enough to allow the cream of the Android crop to rise to the top, overcoming the Palm problem?<br /><br />I won't get in a huff at Apple for controlling and curating native applications <span style="font-weight: bold; font-style: italic;">as long as they don't try to cripple the browser</span>. As long as I can write whatever application I want using open standards and have it work correctly, I'm OK with the advantages and disadvantages of curated applications.<br /><br /><H1>"I want to live there!"</H1>I have a friend who, every time he sees something new and futuristic, says "I want to live there". Using the iPad feels like tiptoeing into the future. I definitely want to live there.<br /><br />I'm convinced that the iPad is the first iteration of the next major computing platform. PCs will become work & power user tools, but everyone will use iPad-like things for many tasks. This is the first incarnation - can you imagine what these things will look like in 5 years?<br /><br />The iPad does for video what the Sony Walkman did for music. I can easily envision a near future where the family TV is only used on special occasions; the whole family sits around with their fifth-generation iPads watching highly personalized content.<br /><br />I don't think you can really appreciate the impact of the iPad until you've used one a bit. Thomas Watson famously said in 1943 that the worldwide market for computers was about 5 computers. He didn't understand the transformative effect that pervasive personal computing could have. We're at the threshold of a new era of computing, and it's pretty cool. 
Ignore it at your peril!<br />Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com8tag:blogger.com,1999:blog-9944221.post-74779258924675585672010-05-21T11:03:00.002-05:002010-05-21T11:06:17.354-05:00Mouseless BrowsingOnce you write a book, you become really immersed in the subject matter. After it's off to the publisher, you can't turn off your interest in the subject. Consequently, after <a href="http://tr.im/prod_prog">The Productive Programmer</a> came out, I continue to find new ways to make myself more productive. One of my recent tendencies is mouseless browsing.<p></p><p>I have a love/hate affair with Firefox. I love the keyboard affordances it provides, especially the slash ("/") and apostrophe ("'") shortcuts. When looking at a web page, the slash starts an incremental find for text within the page. The apostrophe does something similar, but it restricts the matches to URLs only. How many times do you go to a web site already knowing the name of the link you want? For example, if I need to go to the ThoughtWorks web site to get the address of the London location (this happened earlier today), I know (or can guess) that there is a "Contact" link on the home page, which takes me to a list of offices. In my new mouseless browsing mode, I go to the home page <a href="http://www.thoughtworks.com/">thoughtworks.com</a>, hit the apostrophe and start typing "contact", hit enter, and now I'm on the contact page. From there, I can hit the slash key and start typing "London". I've found my address and never taken my fingers off the home row. Add to this the spacebar to scroll down and shift-spacebar to scroll up, and you can get a lot done in a browser without a mouse.</p><p>I maintain a love/hate relationship because one of the cool things about Firefox is all the plug-ins available. Conversely, one of the things I hate about Firefox is all the plug-ins! 
I tend to find useful plug-ins and add them, which adds to the weight (and start-up time) of the whole browser. Recently, I stumbled across <a href="http://caminobrowser.org/">Camino</a>, which uses the same rendering engine as Firefox, meaning that my 2 favorite keyboard shortcuts work. Camino is very lightweight (it "feels" lighter than Safari) and it supports my browsing habits. This is going to sound odd, but one of the critical things that it supports is the ability to type a partial URL in the address bar (which it will auto-complete) and hit CTRL-N to move the cursor down to the first (or subsequent) matches. This is a big deal for me because the Emacs key bindings are deeply ingrained in my fingers (and most of OS X, as it turns out), but Firefox doesn't allow this. Firefox has apparently overridden the CTRL-N key to do nothing (overriding the operating system). While this sounds minor, it bites me every time. Fortunately, Camino adheres to the Apple standard, allowing me to have my cake (CTRL-N moves down one line, as it was Meant To Be) and eat it too (keyboard-driven browsing).</p><p>If you <em>really</em> want to go far down this path, there is the <a href="http://conkeror.org/">Conkeror</a> browser. Its "About:" reads:</p><p></p><blockquote>Conkeror is a keyboard-oriented, highly-customizable, highly-extensible web browser based on <a class="http" href="http://www.mozilla.org/">Mozilla</a> <a class="http" href="http://developer.mozilla.org/en/docs/XULRunner">XULRunner</a>, written mainly in JavaScript, and inspired by exceptional software such as <a class="http" href="http://en.wikipedia.org/wiki/Emacs">Emacs</a> and <a class="http" href="http://en.wikipedia.org/wiki/Vi">vi</a>. Conkeror features a sophisticated keyboard system, allowing users to run commands and interact with content in powerful and novel ways. 
It is self-documenting, featuring a powerful interactive help system.</blockquote><p></p><p>I have played with Conkeror and it shows strong promise, but it's not quite mature enough for me to switch to daily use.</p><p>And if you are a Vim junkie, you can use the Firefox plugin called <a href="http://vimperator.org/vimperator">vimperator</a>, which converts Firefox to a purely vi interface. Not for the faint of heart: if you don't know how to quit Vim, you're going to have a tough time with vimperator. Hard-core Vim-mers swear by this, but I'm too far gone down the Emacs route now to re-map the genes in my fingertips.</p><p>Mouseless browsing takes acclimatization, but once you become accustomed to it, you'll start finding it annoying to reach for a mouse (or even for the arrow keys -- not the home row!). This is why I have no interest in <a href="http://www.google.com/chrome">Chrome</a> for now: it doesn't support my normal mode of browsing.</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com7tag:blogger.com,1999:blog-9944221.post-4431277920760515112010-04-14T11:44:00.001-05:002010-04-14T11:46:03.327-05:00Tiffany Lentz: A New Regular on the No Fluff, Just Stuff TourIt is with great pleasure that I welcome one of my ThoughtWorks colleagues, <a href="http://www.nofluffjuststuff.com/conference/speaker/tiffany_lentz">Tiffany Lentz</a>, to the No Fluff, Just Stuff tour this weekend in Tampa. She did the Seattle show last year to great feedback, and now she's joining the tour as a semi-regular speaker.<p></p><p>Tiffany has been with ThoughtWorks since late 2004 as a project manager and agile coach. She embodies what makes ThoughtWorks project managers so good: experience, expertise in how to build software, and infectious enthusiasm. 
She's one of the project managers I would work with on any project, no matter how bad it looks at the outset, because I know that she'll find ways to make it better, usually profoundly better.</p><p>I've been leaning on Tiffany for a couple of years now to come speak on the No Fluff, Just Stuff tour to share some of her agile expertise, and she finally knuckled under and decided to join the No Fluff, Just Stuff speakers' "boys' club". I don't think it's a coincidence that 2 of the rare female speakers on the No Fluff, Just Stuff tour in recent times have been ThoughtWorkers: Tiffany and our CTO, Rebecca Parsons. ThoughtWorks can be a bit of a fraternal atmosphere as well (especially amongst developers who, despite concerted efforts, still remain largely male), which means that Tiffany and other female ThoughtWorkers learn to tolerate too many male geeks being around, like the speakers at No Fluff, Just Stuff.</p><p>Tiffany is going to be talking about Agile practices this weekend in Tampa. Her talks include <a href="http://www.nofluffjuststuff.com/conference/tampa/2010/04/session?id=17848">The Agile Mindset: Applying Agile in Non-Technical Areas of an Organization</a>, <a href="http://www.nofluffjuststuff.com/conference/tampa/2010/04/session?id=17849">Iteration Management: What's in Your Toolkit</a>, and <a href="http://www.nofluffjuststuff.com/conference/tampa/2010/04/session?id=17850">Agile Project and Management Metrics: Measuring Success Downward and Upward</a>. 
If you're in Tampa this weekend (or at a future tour stop), do yourself a favor and go see Tiffany speak.</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com1tag:blogger.com,1999:blog-9944221.post-53198333804308482812009-12-22T12:41:00.002-05:002009-12-22T12:47:22.702-05:00Empowering Sinookas using Social Networks to Maintain a DuprassOne of the recommendations I frequently give at conferences when asked "What books are you reading?" is to get out of the purely technical realm often so that you can communicate more effectively with the other humanoids. One of the common recommendations is to read all of the books by <a href="http://en.wikipedia.org/wiki/Category:Novels_by_Kurt_Vonnegut">Kurt Vonnegut</a>. One of his books I recently re-read (for probably the 15th time) is <a href="http://www.amazon.com/Cats-Cradle-Novel-Kurt-Vonnegut/dp/038533348X/ref=sr_1_1?ie=UTF8&s=books&qid=1261498692&sr=8-1">Cat's Cradle</a>. In <em>Cat's Cradle</em>, Vonnegut defines a new religion called <em>Bokononism</em> (one of the first lines of the novel states that if you have a hard time believing that a perfectly useful religion can be based entirely on lies, you won't like the book). Bokononism defines a bunch of new terms, which relate to the point of this post.<p></p><p>First, some definitions from Bokononism:<br /></p><ul><li><em>karass</em>: a group of people who, often unknowingly, are working together to do God's will. The people can be thought of as fingers in a Cat's Cradle.</li><br /><li><em>duprass</em>: a karass of only two people. The typical example is a loving couple who work together for a great purpose.</li><br /><li><em>sinookas</em>: the intertwining "tendrils" of people's lives.</li><br /><li><em>wampeter</em>: the central point of a <em>karass</em></li></ul><p>OK, so what does this have to do with anything useful? I travel a lot, even for a ThoughtWorker (a little over 200K miles this year). 
Of course, my wife hates the amount that I travel, but it's an occupational hazard. One of the things that makes us miss each other is the little unimportant side conversations we have when we are together: little meaningless observations, inside jokes, just the kind of things that people in a <em>duprass</em> do all the time. So I built a <em>sinookas</em> using Twitter.</p><p>I created a new Gmail account for myself and one for my wife. Using each of those Gmail accounts, I created a new Twitter account with protected updates for each of us, and we only subscribe to each other's Twitter stream. All the good Twitter clients make it easy to change accounts, so I used this to create a private back channel for ongoing <em>duprass</em>-style conversations (in other words, a <em>sinookas</em>). This isn't the <em>wampeter</em> of our <em>duprass</em>, but it does make the <em>sinookas</em> stronger. It's been great, and it's something that I recommend all traveling road warriors set up.</p><p>Now, my wife & I can have an ongoing private conversation about stuff that wouldn't make sense (or would be too politically incorrect) on a public feed. That allows us to miss each other less. Who says that you can't have a perfectly useful social network with just 2 people?</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com3tag:blogger.com,1999:blog-9944221.post-33805413137823453032009-11-04T11:36:00.000-05:002009-11-04T11:38:22.158-05:00Productivity PronOne of my former coworkers & I used to spend hours talking about how to set up the best individualized personal information manager. We used to call those conversations <em>productivity porn</em>, not realizing that someone would come along and formalize that term, albeit slightly skewed, as <a href="http://wiki.43folders.com/index.php/Productivity_pr0n">Productivity Pr0n</a>. 
Finding a good system that doesn't get in your way yet allows you to organize all the things going on in your life (both personal and professional) is surprisingly difficult, given the number of tools that purport to do just this. At the time we were having these discussions (the late 1990s), the best things going were Franklin Covey's Ascend (a desktop application for Windows) and the Palm Treo. Ascend replaced some of the anemic default applications on the Treo (like the laughable ToDo application) with its own versions, and it worked really well. The death knell for me with Ascend was its poor quality. It was written as a desktop application that used an Access database for its back end, and it was a bit fragile. About once a year, it would spontaneously corrupt the database, which required cracking it open with Access to fix the mess it had gotten itself into. Seeing the broken and corrupted records and the general shoddiness of the database Ascend supposedly owned entirely didn't give me confidence. I still haven't found as good an integrated system today, but I have cobbled together a nice workable system for myself, consisting of 5 moving parts.<p></p><h2>Calendar</h2><p>This piece to me is the easiest slot to fill. Maybe I'm just not discriminating, but almost any calendar application works fine for me. As long as I can add appointments with all the standard stuff (reminders, time zones, etc.), I'm pretty happy. The biggest headache with calendars is keeping them in sync. Back in the bad old days of Lotus Notes at ThoughtWorks, I basically ignored the corporate calendar, keeping all my stuff in Google Calendar instead and giving people who cared an HTML view into my work calendar. Google Calendar is quite nice, including good synchronization / replication with iCal. Most of my interaction with Google Calendar was through the web interface, so much so that I created a <a href="http://fluidapp.com/">fluid</a> application that had only my calendar. 
One of the slick tricks you can do with fluid applications is to make all the chrome disappear, giving me a full-screen calendar bound to one of my desktops that looked like the wallpaper for that desktop, which was nice because you get the biggest possible calendar. I also liked the 4 views afforded by Google Calendar: 1 day, 1 week, 1 month, and the next 4 days.</p><p>Since ThoughtWorks moved our infrastructure to Google applications, I just exported my old work calendar and sucked it into the new ThoughtWorks Google calendar. I subscribe to my itineraries, my wife's schedule, etc. (and share subscriptions to interesting events with her). We each "own" our own calendar and cross-subscribe to get shared events. Using sites like <a href="http://tripit.com/">TripIt</a> keeps travel under control and makes itineraries subscribable. Going forward, I want all calendar stuff delivered as iCal feeds.</p><p>For synchronization, I'm using iCal as the integration point. I still have a fluid application that now points to my ThoughtWorks calendar, but I'm using iCal as the main calendar. Of course, iCal synchronizes nicely with the iPhone. Before moving back to iCal, I was using CalenGoo on my iPhone, which is a slick iPhone interface directly to Google Calendar. It doesn't really provide off-line access but does cache the previous results so you can see your calendar even if you can't get to it online. The native iPhone calendar application handles that for me now.</p><h2>ToDo / Task List</h2><p>If the calendar is the easiest, this is by far the hardest slot to fill. Virtually all of the To Do / Task List applications I've used are seriously deficient in one way or another. Ascend replaced the Palm ToDo application (which was laughably bad) with one it called <em>Task List</em>, which was quite good. One of the killer features I need is a <em>start date</em> in addition to a <em>due date</em> (they all do due dates, but most don't handle start dates properly). 
I have a fair number of tasks that I don't need to see right now, but I don't need to find out about them the day they're due either. I need a system that allows me to express "this thing is due on 11/5, but start pestering me about it on 10/30". I also want a tag-based system using <em>contexts</em>, one of the really nice refinements to come from the Getting Things Done cult, rather than a hierarchy of folders.</p><p>My research in this area eventually led me to <a href="http://www.omnigroup.com/applications/omnifocus/">OmniFocus</a>. I say "eventually" because I recently went through this category of application again. I started with OmniFocus but found it too complex for my day-to-day usage, which led me to <a href="http://culturedcode.com/things/">Things</a>. I like Things because it is so radically simple, but that became a bottleneck for me because of the way I like to attack projects. That in turn led me back around to OmniFocus and some concentrated learning about how it wants to work. Its heritage is OmniOutliner, and you can still see that lineage, which adds some complexity to some parts. Because OmniFocus has lots of ways to get information into it, I kept misplacing stuff. The thing that took OmniFocus from "nice but tolerable" to "can't live without it" is the custom perspectives. OmniFocus allows you to save customized views, including filters, columns, etc. I created a bunch of custom perspectives that show me exactly what I need ("what things are coming due within the next week", "what needs to be done next on this project", "I have 10 minutes at home -- is there anything that can be done here and now?") and assigned those perspectives to hot-keys using the standard Mac feature of assigning keys to menu items (each perspective shows up as a menu item, making this possible). Now, I never use the built-in views; I always use the custom perspective suited to the information I need right now. Since doing that, OmniFocus has worked fantastically. 
It allows me to organize my days and weeks, shows me just what I need right now, and I'm confident when I add something to it that it'll appear at the right place and time. Learning to use OmniFocus right was the key, and now that I have, I think I have the best task list setup that I've ever had (beating out Ascend for this title is no small feat).</p><p>Of course, OmniFocus syncs with the iPhone (and has a terrific iPhone application) so that I can keep all my To Do stuff with me at all times.</p><h2>Recurrence</h2><p>OmniFocus is great for tasks that have firm due dates and works for recurring tasks as well (including some nice flexibility around "schedule the next one of these 5 weeks after the completion of this occurrence", which is great for things like haircuts). However, I have a few important categories of things where I want to define rules like "I want to post to my blog every 9 days or so", which could be rewritten as "remind me after 7 days that I need to post a blog entry, and start yelling after 11 days if it isn't done". I use a highly specialized tool for this called <a href="http://sciral.com/consistency/">Sciral Consistency</a>. That's all this tool does: it allows you to set up ranges for things that need to be done and reminds you.</p><p>I could almost replicate this using OmniFocus features, but I already had Sciral and I like the minimal display & single-mindedness of the tool. This doesn't synchronize anywhere, but I always consume this information at my computer anyway.</p><h2>Random Notes</h2><p>The combination of calendar and OmniFocus handles all the structured stuff -- what about unstructured notes? I have two mechanisms for that: Evernote and Moleskine.</p><h3>Evernote</h3><p><a href="http://www.evernote.com/">Evernote</a> is a desktop, web, and iPhone application that allows you to capture notes (organized into notebooks) for whatever information you want to keep and search. 
A few killer features for me:</p><ul><li>Automatic synchronization everywhere. Every time you capture something with Evernote, it automatically synchronizes across all views.</li><br /><li>OCR for whiteboard text. I tend to draw on whiteboards a lot, and if you capture the drawing with Evernote's picture note, it will allow you to do text searches in the web and desktop clients for words in the whiteboard drawing. It's not perfect but surprisingly good at this. </li><br /><li>Automatic forwarding address. Evernote sets up an email address for you; anything you forward to that address becomes a notebook entry. This is nice because it allows you to get stuff out of your email client. Evernote has much better searching capabilities than most email clients, and having the forwarding address means you can get searchable emails into Evernote very easily. This is particularly nice for those who use their email inbox as the world's worst filing cabinet; get that stuff out of your email client and into something where it can be useful.</li></ul><h3>Moleskine</h3><p>The only bad thing about an entirely electronic PIM: there are still times when you cannot use it (like when the plane is taxiing). This may not seem like a big deal, but I find that I have lots of capturable ideas at exactly the times when I can't capture them. Thus, my other permanent GTD accessory is a soft-sided Moleskine book along with a Fisher space pen. I capture interesting ideas as soon as I have them (because ideas, especially those from the right brain, are fleeting). Once I get back to my computer, I transcribe the Moleskine notes into the rest of my system. At any given time, I usually have a page or so of new stuff in the Moleskine. I could get by with a few index cards, but I've been carrying the Moleskine for a while and I'm used to it.</p><h2>Tying It Together</h2><p>My PIM lives across 4 different applications and a notebook with no real built-in integration between them. 
I always use them as a unit. For example, I have all 4 applications bound to the same desktop in Mac OS X, and those are the only things bound to that desktop. That allows me to always leave them in the same window state and position. Anytime I switch to one of the PIM applications, it goes to the appropriate desktop. I've also used Automator to create a PIM application that performs "Launch Application" for each of the 4 that make up my PIM. I no longer think about these applications as separate things.</p><h2>PIM as Life Support for Focus</h2><p>Obviously, this system is highly customized to me and won't work without changes for anyone else. I think that it is every knowledge worker's responsibility to find a system that allows them to get to and stay in <a href="http://www.amazon.com/Flow-Psychology-Optimal-Experience-P-S/dp/0061339202/ref=sr_1_1?ie=UTF8&s=books&qid=1257352368&sr=8-1"><em>flow</em></a> as much as possible. Any tools that you use should make it easier to get to flow, not harder. I find that I tend to work best in 2-hour chunks (which I call <em>work blocks</em>), similar to the very popular <a href="http://www.pomodorotechnique.com/">Pomodoro</a> technique. One of the custom views in OmniFocus allows me to review projects that have pending work blocks so that I can find out what I need to work on, then immerse myself in that problem for a contiguous chunk of time. Whatever system you find, make sure that it supports how you want to work. 
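</p><p>That Automator workflow is simple enough to sketch in a few lines of Ruby (a sketch only; the application names below are illustrative placeholders, not my actual configuration):</p>

```ruby
#!/usr/bin/env ruby
# Launch (or focus) every application in the PIM suite in one shot,
# mirroring the Automator "Launch Application" workflow.
# The application names are placeholders -- substitute your own four.
PIM_APPS = ["OmniFocus", "iCal", "Sciral Consistency", "Evernote"]

# Build the `open -a` invocations; on Mac OS X, `open -a Name` asks
# Launch Services to start the named application.
def launch_commands(apps)
  apps.map { |app| ["open", "-a", app] }
end

# Only actually launch when running on a Mac.
if RUBY_PLATFORM =~ /darwin/
  launch_commands(PIM_APPS).each { |cmd| system(*cmd) }
end
```

<p>Saved as an executable script (or wrapped back into an Automator application), this gives the same one-click "start my PIM" behavior.</p><p>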
Don't change your effective work habits to conform to some tool's vision of what your day should look like.</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com4tag:blogger.com,1999:blog-9944221.post-6026885924361966302009-10-07T09:13:00.000-05:002009-10-07T09:14:18.459-05:00Twitter Matters: The Meme Abiogenesis of the Internet<p>This is part three in an exploration of why Twitter makes sense, highlighting its use as a legitimate tool for connections and idea generation. The first article is under <a href="http://tr.im/nf_twitter_weak_links">Twitter Matters: Keeping Up with Weak Social Links</a> and the second is under <a href="http://tr.im/nf_twitter_conversations">Twitter Matters: Conversations vs. Monologues</a> for those who want to catch up.</p><p><em>Abiogenesis</em>, the study of how a primordial soup of chemicals eventually led to amino acids and life, is a fascinating area of study for biologists. This spontaneous generation of life happened here a long time ago, and its study obviously interests those investigating life on other planets because this primordial soup seems to be the first prerequisite for life as we know it.</p><p>You can think of the Internet as a free-form gathering place for <em>memes</em>: elements of a culture or system of behavior passed on from one individual to another by non-genetic means. Examples of memes include hit songs, water-cooler conversations about hit TV shows, and things like communism. If you are in the idea business (meaning that you are always looking for new sources of ideas and how to apply them to a broad subject like software development), you are always on the lookout for primordial meme pools. Twitter meets that goal admirably. As I mentioned in the <a href="http://tr.im/nf_twitter_weak_links">first installment</a>, weak social links are your best source for "outside the box" ideas. That makes Twitter a great place to harvest and generate new ideas. 
New ideas frequently start as seeds that are nourished into fully formed thoughts. Twitter not only delivers these seeds to your door; you can also use it as an incubator for your own seeds.</p><p>Here's an example. One of my recent blog entries was called the <a href="http://memeagora.blogspot.com/2009/08/suckrock-dichotomy.html">Suck - Rock Dichotomy</a>. That particular turn of phrase came from a quick one-off Twitter entry where I was responding to a Tweet from someone that combined <em>rock</em> and <em>suck</em>. I mentioned that the entire argument was really part of the pervasive <em>suck/rock dichotomy</em> in the software world. That worked nicely in a 140 character Twitter post, and it was modestly re-tweeted. But it started more serious thinking on <em>why</em> that phenomenon exists, which eventually led me to an entire blog post (i.e., essay) on the subject. The turn of phrase came from me, but in response to some outside stimulus. Would I have ended up writing a blog post on that subject if it hadn't come up in a virtual conversation? Probably eventually, but having a conversational medium close by encouraged the original Tweet, which led to more fully formed thoughts about the subject.</p><p>Finding new sources of in-context ideas is a gold mine because you can never tell what fruit those idea seedlings will bear. Yes, 99% of Twitter is mindless trivia, but discovering or creating a new idea that you wouldn't have had otherwise? Priceless. People complain that most of Twitter is drivel, and I won't dispute that against overwhelming evidence, but the remaining usefulness is an artifact of the volume of memes present. Here's an analogy. Numbers vary, but some sources suggest that up to <a href="http://en.wikipedia.org/wiki/Junk_DNA">95% of the human genome is "junk DNA"</a>, DNA that isn't used (or at least its use hasn't been determined). That's how nature tries out new ideas, and the really good ones survive. 
Most of Twitter is junk, but good ideas do lurk in these murky meme pools.</p><p>Twitter has evolved to fill a niche that didn't exist before. Just like any social environment, users have to figure out a way that it can provide value. I've certainly found that for me. The combination of keeping up with my weak social links, having terse conversations vs. email monologues, the enforced constraint to keep ideas atomic, and the new medium of ideas forms a completely unanticipated but welcome enhancement to the way I work. Rather than cast stones at new technologies like social networks, ask yourself why people find them useful and how they can be useful to you. The answer may be "no", but you need to understand <em>why</em> it matters before dismissing it.</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com0tag:blogger.com,1999:blog-9944221.post-45182010980013900122009-09-29T12:04:00.002-05:002009-09-29T12:10:39.299-05:00Twitter Matters: Conversations vs. Monologues<p>This is part two in an exploration of why Twitter makes sense, highlighting its use as a legitimate tool for connections and idea generation. The first article appears as <a href="http://tr.im/nf_twitter_weak_links">Twitter Matters: Keeping Up with Weak Social Links</a>.</p><p>The 140 character limit is perhaps the most distinctive characteristic of Twitter. Some of my Twitter friends have commented that conversations on Twitter tend to be more civil: you just can't cram much message <em>and</em> bile into a 140-character message. This has happened to me: carrying on a debate on Twitter is an interesting exercise in conciseness. Tight constraint is a forcing function on creativity: sensibility, lucidity, and articulacy in just 140 characters is tough. You would think that all discussions on Twitter are either about trivial subjects (so that you can fit it into the built-in limit) or quickly degrade into multi-part messages. 
While the latter happens sometimes, it is rare in my experience, and the former doesn't occur as much as you might think.</p><p>An example is in order. I recently posted a message in response to <a href="http://www.blogger.com/@jimweirich">Jim Weirich</a> that I thought that cyclomatic complexity wasn't as useful a metric in Ruby because so many of the things that normally require loops and branches are so handily encapsulated in powerful libraries. This effect causes cyclomatic complexity numbers to be lower when comparing apples-to-apples code in Java & Ruby. Jim correctly pointed out that that does in fact make the Ruby code simpler, and therefore cyclomatic complexity is measuring exactly what it is supposed to measure. During this same discussion, <a href="http://www.blogger.com/@glv">Glenn Vanderburg</a> weighed in on a related subject, and then so did <a href="http://www.blogger.com/@olabini">Ola Bini</a>. The conversation quickly turned to the <a href="http://en.wikipedia.org/wiki/Linguistic_relativity">Sapir-Whorf Hypothesis</a> and how viable it is for spoken languages (not much) and computer languages (much more so). Along the way, I learned the distinction between the <em>strong</em> and <em>weak</em> versions of Sapir-Whorf. All this took place over about 20 minutes, 140 characters at a time. Yet at the end, I knew a lot more than when I started. The combination of (shortened) links to external sources and brief forays kept the conversation focused, covering just a few topics and exploring the implications between them.</p><p>How would the conversation work without Twitter? It could only work if all the interested parties (myself, Jim, Glenn, and Ola) were somehow on the same email mailing list or happened to be at the same place at the same time. While our locations do coincide occasionally, it's rare (we're based in Atlanta (sometimes), Cincinnati, Dallas, and Stockholm). Even so, the topic would have to come up in conversation. 
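</p><p>As an aside, the cyclomatic complexity point from that exchange is easy to demonstrate. Here's a hypothetical example (mine, not from the actual conversation): the same filtering logic written Java-style with explicit loops and branches, then with Ruby's enumerable methods:</p>

```ruby
# Explicit loop-and-branch version: the `each` loop and the `if` each
# add a decision point, so cyclomatic complexity climbs.
def long_names_loop(names)
  result = []
  names.each do |name|
    if name.length > 5
      result << name.upcase
    end
  end
  result
end

# Idiomatic Ruby: the branching is encapsulated inside select/map, so
# this method has a cyclomatic complexity of 1. As Jim pointed out, the
# code really is simpler, which is exactly what the metric should report.
def long_names_idiomatic(names)
  names.select { |name| name.length > 5 }.map(&:upcase)
end
```

<p>Both produce the same result; the second just pushes the loops and branches down into the library.</p><p>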
If we were on the same mailing list, the conversation would proceed differently. Because there is no character limit on email (I'll let you immerse yourself in the fantasy of a limiting function on email for just a second), it's no longer a conversation, it's a series of monologues.</p><p>A tricky balance exists between constraint and creativity. Obviously you can cram more information and context into a sonnet than a Haiku (I explored this idea in a blog series about the <a href="http://memeagora.blogspot.com/2007/10/ruby-matters-language-beauty-part-2.html">expressiveness of the Ruby language</a> back in 2007). 140 characters seems to be a bit of a sweet spot: enough to convey some thought but not enough to go overboard. Composing a good Twitter update is different from composing an entire blog post but they aren't as far apart as you might think. I certainly have noticed that the people who both Twitter and blog have cut down on the number of blog entries they write. I'm that way myself. It used to be that I would blog for 2 types of messages: short announcement-type posts ("I'm speaking at Random City Users Group next week") and essays. Now, all the short announcements happen on Twitter, leaving my blog for more formal essays. I like this distinction because I find that the blogs I read tend to be more substantive. </p><p>There is no question that most of what comes through Twitter isn't deep thought (many think that Twitter is just for food and travel). I find that people who post only obvious messages, too much information, or too much that I don't care about or find offensive don't stay on my list of people I follow for long. There is at least one prominent technologist who mixes his interesting posts with right-wing bile, and I dropped him like a hot potato because I don't need a subscription to a channel for misinformed dogma. 
Managing your user list becomes important in Twitter so that you can filter out stuff you don't want or need.</p>Twitter creates a new communication stream for those who contribute and consume Tweets (conversations vs. monologues). By creating a new specifically constrained communication channel, it moves conversations that used to occupy other spaces to a more appropriate space. This combination of a new conversational outlet between people with whom I maintain weak links and the built-in constraints means that I have a new source of ideas (both raw ideas and refinements of my ideas) to keep my brain percolating. In the next post, I explore the idea that Twitter can be a form of meme generator.Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com2tag:blogger.com,1999:blog-9944221.post-81931590419141230562009-09-17T05:08:00.003-05:002009-09-18T12:53:33.965-05:00Twitter Matters: Keeping Up with Weak Social Links<p>Lots of people just Don't Get(tm) social networking sites like Facebook, MySpace, and especially Twitter. On the face of it, Twitter doesn't seem to make much sense: 140 character updates. But those of us who use Twitter a lot (I'm @neal4d, BTW) know that it's much more than that. Twitter engenders so much puzzlement because it's so restrictive, but the restriction is the genius of Twitter.</p><p>In this and the next two blog entries, I'm going to explore why Twitter is a Good Thing(tm) and some surprising ways it can insinuate itself into a useful workstream. The first of these observations is around <em>links</em>.</p><p><a href="http://andrewmcafee.org/blog/">Andrew McAfee</a> of Harvard has done a lot of research on how social networking intersects with the enterprise (soon to be captured in a book I can't wait to read, <a href="http://www.amazon.com/Enterprise-2-0-Collaborative-Organizations-Challenges/dp/1422125874/ref=sr_1_1?ie=UTF8&s=books&qid=1253180741&sr=8-1">Enterprise 2.0</a>). 
I saw him talk recently about why social networking is a valuable resource left barren by most companies. He defines 3 kinds of social links: <em>strong</em>, <em>weak</em>, and <em>potential</em>, shown in a bulls-eye layout:</p><p><img height="75%" width="75%" src="http://andrewmcafee.org/useruploads/Image/bullseye.jpg" alt="bulls-eye diagram" title="" /></p>Your strong links are the people you see regularly, either at the office or during the normal course of your life. There's a good chance you know what these people had for lunch, or at least one of their meals in the last week. The next layer represents your <em>weak</em> links. These are people you see intermittently (perhaps once a year). They are your friends that you don't get to see on a regular basis (because of geography, for example). A good example for me is my friend <a href="http://hadihariri.com/blogengine/">Hadi Hariri</a>, who lives in Malaga, Spain. He & I see each other perhaps once a year (generally at conferences) and always have good fun & conversation. It's this group that social networking sites support. This is a valuable link because you are more likely to get novel ideas from this group than from your strong group. Before social networks, how did you keep up with your weak links? The Christmas Letter, summarizing a year's events? You are wasting an important link if you can't reach out to your weak links sometimes. You see your strong group all the time, so they hold few surprises. However, your larger and more diverse weak links provide novelty. The <em>potential</em> links are those with whom you'll form weak & strong links, but you haven't met them yet. You're also more likely to be introduced to a potential link through your weak links.<br /><p>Twitter provides strong connectivity to your weak links. Here's an example of how weak links can lead down interesting paths. 
I met someone at the <a href="http://erubycon.com/">erubycon</a> conference last year who's a well-known figure in the Rails world, and I subsequently started following his Twitter feed. He had very recently gone vegan for health reasons, and he tweeted a reference to an astounding book called <a href="http://tr.im/the_china_study">The China Study</a>. I read this book (and several others referenced in it) and have since been strictly vegetarian, at least for the time being. It's worth reading: it lays out the case against animal protein in your diet, and backs up the claims with real science. It's a profound book, enough to convince me to change my eating habits. I don't know if I'll stay this way forever, but I've been there for about 6 weeks and it has been quite pleasant. He was very much a weak link; I would have a hard time spotting him in a room. Yet we share enough context in the Ruby community for me to use him as a source of ideas, which sometimes lead to interesting places. In this case, I wouldn't currently be vegetarian if it wasn't for Twitter.</p><p>Finding a good mechanism for maintaining weak links and finding (and exploring) potential links allows you to work smarter because you have a broader arena for ideation. The combination of links, constraint, and meme ooze makes Twitter very useful to me. 
I explore these other two aspects in the next two installments.<br /></p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com8tag:blogger.com,1999:blog-9944221.post-40896966480712810672009-09-09T03:47:00.000-05:002009-09-09T03:48:39.222-05:00The 2009 Edition of the Rich Web Experience: Adding Spice to Your ApplicationsSeveral years ago, I called an Ajax conference a <a href="http://memeagora.blogspot.com/2006/10/condiment-conference-redux.html">condiment conference</a> because most everyone there concerned themselves with technologies that augmented other technologies (for example, your base language is Java but you need JavaScript to make your applications suck less). Now, I think that user interaction, web design, the <a href="http://memeagora.blogspot.com/2008/05/ria-platform-play.html">rise of Rich Internet Applications (when used suitably)</a>, and other user-facing issues have a deeper relationship to the underlying technologies. Thus, I'm calling this year's <a href="http://www.therichwebexperience.com/conference/orlando/2009/12/home">Rich Web Experience</a> the <em>spice</em> for your underlying technology. Food is edible without condiments, but bland without spices. You can't avoid the browser as a platform; might as well embrace it in Orlando in December.Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com0tag:blogger.com,1999:blog-9944221.post-18386941766032056412009-08-05T08:32:00.002-05:002009-08-05T08:38:08.048-05:00The Suck/Rock Dichotomy<p>Lots of people are passionate about software development (much to the confusion and chagrin of our significant others), and that unfortunately leads to what I call the "Suck/Rock Dichotomy": everything in the software world either <em>sucks</em> or <em>rocks</em>, with nothing in between. While this may lead to interesting, endless debates (<a href="http://en.wikipedia.org/wiki/Editor_war">Emacs vs. vi</a>, anyone?), ultimately it ill serves us as a community. 
</p><p>Having been in software communities for a while, I've seen several tribes form, thrive, then slowly die. It's a sad thing to watch a community die because many of the people in the community live in a state of denial: how could their wonderful thing (which <em>rocks</em>) disappear under this other hideous, inelegant, terrible thing (which <em>sucks</em>)? I was part of the <a href="http://en.wikipedia.org/wiki/Clipper_%28programming_language%29">Clipper</a> community (which I joined at its height) and watched it die rather rapidly when Windows ate DOS. I was intimately part of the Delphi community which, while not dead yet, is rapidly approaching death. When a community fades, the fanaticism of the remaining members increases proportionally for every member they lose, until you are left with one person whose veins stick out on their forehead when they try to proselytize people to join their tribe, which <em>rocks</em>, and leave that other tribe, which <em>sucks</em>.</p><p>Why is this dichotomy so stark in the software development world? I suspect a couple of root causes. First, because it takes a non-trivial time investment to gain proficiency in a software tribe, people fear that they have chosen poorly and thus wasted their time. Perhaps the degree to which something <em>rocks</em> is proportional to the time investment in learning that technology. Second, technologists and particularly developers stereotypically tend to socialize via tribal ritual. How many software development teams have you seen that are not too far removed from fraternities? Because software is fundamentally a communication game, I think that the fraternal nature of most projects makes it easier to write good software. But tribal ritual implies that one of the defining characteristics of your tribe is the denigration of other tribes (we <em>rock</em>, they <em>suck</em>). 
In fact, some tribes within software seem to define themselves in how loudly they can say that everything <em>sucks</em>, except of course their beautiful thing, which <em>rocks</em>.</p><p>Some communities try to purposefully pick fights with others just so they can thump their collective chests over how much they <em>rock</em> compared to how much the other guys <em>suck</em>. Of course, you get camps that are truly different in many, many ways (<a href="http://www.linux.com/archive/feature/19661">Emacs vs. vi</a>, anyone?). But you also see this in communities that are quite similar; one of the most annoying characteristics of some communities is how much a few of their members try to bait other communities that aren't interested in fighting.</p><p>The Suck/Rock Dichotomy hurts us because it obscures legitimate conversations about the real differences between things. Truly balanced comparisons are rare (for an outstanding example of a balanced, well considered, sober comparison of Prototype and JQuery, check out <a href="http://blog.thinkrelevance.com/2009/1/12/why-i-still-prefer-prototype-to-jquery">Glenn Vanderburg's post</a>). I try to avoid this dichotomy (some would say with varying degrees of success). For example, for the past 2 years, I've done a <em>Comparing Groovy & JRuby</em> talk at JavaOne, and it's been mostly well received by members of both communities. Putting together such a talk or blog entry takes a lot of effort, though: you have to learn not just the surface area details of said technologies, but how to use them idiomatically as well, which takes time. I suspect that's why you don't see more nuanced comparisons: it's a lot easier to resort to either <em>suck</em> or <em>rock</em>.</p><p>Ultimately, we need informed debates about the relative merits of various choices. The Suck/Rock Dichotomy adds heat but not much light. 
Technologists marginalize our influence within organizations because the non-techies hear us endlessly debating stuff that sounds like arguments over how many <a href="http://en.wikipedia.org/wiki/How_many_angels_can_dance_on_the_head_of_a_pin%3F">angels can dance on the head of a pin</a>. If we argue about seemingly trivial things like that, then why listen to us when we passionately argue about stuff that <em>is</em> immediately important, like technical debt or why we can't disprove <a href="http://en.wikipedia.org/wiki/The_Mythical_Man-Month#The_Mythical_Man-Month">The Mythical Man Month</a> on this project? To summarize: the Suck/Rock Dichotomy <em>sucks</em>!</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com8tag:blogger.com,1999:blog-9944221.post-73043937425935511312009-07-15T15:48:00.001-05:002009-07-15T15:50:48.289-05:00Productivity & Location Awareness<p>The iPhone has retaught me the power of location awareness in user interfaces. I have lots of iPhone applications (about 90 at the current count, but, in my defense, some of those are saved bookmarks), and until the iPhone 3 update, touching the icon was the only way to invoke them. Because I have so many, I started organizing them on desktops based on usage (for example, I have a travel desktop, a food desktop, a hyperlink desktop, etc). This became too arbitrary, so I recently just went alphabetical for all but the first desktop, which has a special hot key to get back to it, making it the perfect place for really oft-used applications.</p><p></p><p>The point is that I've rearranged my iPhone icons several times. It continues to surprise me how quickly I remember the desktop and location on the desktop of a given application. I very quickly learn where the applications I use all the time live (TripIt, I'm looking at you) and can get to them really fast. 
I find that even though Spotlight now works on the iPhone, I still generally go directly to the application via the icon.</p><p>While clearly launchers like Quicksilver, Spotlight, and Launchy work better for the huge numbers of applications you find on traditional computers, the power of location awareness suggests several things for the builders of applications.</p><ul><li><em>Don't move stuff around</em>. </li></ul><p>On many web sites, navigation controls are hyperlinks. But, because of ad placement, they move around slightly, turning reading the groups into a game of "Whack-a-Mole". I use either the NumberFox plugin or Firefox's incremental hyperlink search (the apostrophe hotkey) rather than chase the stupid hyperlinks with a mouse.</p><p>I hate applications that move menu options around based on usage. Consistency is important for usability. In fact, I use the Mac's Smart Menu Search feature, which allows you to incrementally search for menu items without regard for their physical location, as my favorite menu affordance.</p><ul><li><em>Context sensitivity makes it hard to leverage location awareness</em>.</li><br /></ul><p>Context sensitivity for toolbar buttons makes it hard to definitively learn where something lives, which kind of dooms the ribbon user interface metaphor in modern versions of Office. While I understand the need for a rethought user interface metaphor for the huge number of features (perhaps that's the underlying problem?), having a context-sensitive set of toolbars means that, to become really effective, you have to memorize each combination of buttons and the corresponding locations. Not having used the ribbon much (I avoid Office applications pretty assiduously), I can't say whether you eventually build up the cognitive ability to utilize location awareness. </p><ul><li><em>User interface designers should understand Fitts's Law</em></li><br /></ul><p>Pop quiz: what's the biggest clickable target on your screen? 
It's the one right under your cursor, which is why the right-mouse menu should have the most important things on it. The target right under your mouse is effectively infinitely large. Second question: what's the next biggest target? The edges of the screen, because you can accelerate as fast as possible to the edge and not overshoot it. This suggests that the really important stuff should reside on the edges of the screen. These observations come from <a href="http://en.wikipedia.org/wiki/Fitts%27s_law">Fitts's</a> <a href="http://www.asktog.com/columns/022DesignedToGiveFitts.html">Law</a>, which states that the ease of clicking on a target with the mouse is a combination of the distance you must navigate and the size of the target.</p><p>The designers of Mac OS X knew this law, which is why the menu bar resides at the top of the screen. When you use the mouse to click one of the menu items, you can ram the mouse pointer up against the top of the screen and you are where you want to be. Windows, on the other hand, has a title bar at the top of each window. Even if the window is maximized, you still must carefully find your target by accelerating to the top, and then use some precision mousing to hit the target. For right-handed users, the upper right corner is an easy mouse target. What's there on the Mac? Spotlight, the universal search utility. What's there on Windows? Nothing unless your application is full screen, and, if it is, it's the close button (which suggests the most important thing you can do to a Windows application is close it).</p><p>There is a way to mitigate this for some Windows applications. The Microsoft Office suite has a <em>Full Screen</em> mode, which gets rid of the title bar and puts the menu right at the top, like Mac OS X. There is help for developers, too. Visual Studio features the same full-screen mode, as does IntelliJ for Java developers. 
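</p><p>Fitts's Law can be stated numerically. In the common Shannon formulation, movement time grows with log2(D/W + 1), where D is the distance to the target and W is its width. A quick sketch (the constants a and b below are made-up illustrative values, not measured ones):</p>

```ruby
# Shannon formulation of Fitts's Law: MT = a + b * log2(D / W + 1).
# a and b are device- and user-specific constants measured empirically;
# the values here are purely illustrative.
def movement_time(distance, width, a = 0.1, b = 0.15)
  a + b * Math.log2(distance.to_f / width + 1)
end

# A small, distant target takes longer to hit...
movement_time(800, 10)    # high index of difficulty
# ...while a screen-edge target behaves as if it were much larger,
# because the cursor cannot overshoot the edge -- a bigger effective W.
movement_time(800, 200)
```

<p>Growing the effective target width shrinks the predicted movement time, which is exactly why pinning important controls to screen edges pays off.</p><p>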
If you are going to use the mouse, using your applications in full-screen mode makes it easier to hit the menus because it takes advantage of location awareness and consistency.</p><p>Regardless of the power of location awareness, for sophisticated computer users (like developers), location awareness doesn't scale. You should spend the time to learn the keyboard shortcuts for every possible thing you need to do. It takes longer, but it scales almost indefinitely. In fact, I turn toolbars and buttons off in IDEs and Emacs and take the time to learn how to get to what I need without reaching for the evil mouse. I'm curious to see how much I start using Spotlight on the iPhone as the number of applications I have keeps growing (which seems inevitable at this point).</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com2tag:blogger.com,1999:blog-9944221.post-10437165018415936782009-06-23T23:58:00.000-05:002009-06-23T23:59:15.099-05:00Orlando JUG on Thursday June 25th<p>If you are anywhere nearby, come see me at the <a href="http://www.codetown.us/events/orlandojug-refactoring-with">Orlando JUG</a> on June 25th, 2009. I'll be giving my newly revamped <em>Real-World Refactoring</em> talk. By revamped, I mean that I've added a bunch of examples of architecture smells and how to attack them. From the <a href="http://nofluffjuststuff.com/">No Fluff, Just Stuff</a> web site description of the talk:</p><cite>Refactoring is a fine academic exercise in the perfect world, but we don't really live there. Even with the best intentions, projects build up technical debt and crufty bad things. This session covers refactoring in the real world, from the atomic level (how to refactor towards composed method and the single level of abstraction principle) to larger project strategies for multi-day refactoring efforts. 
This talk provides practical strategies for real projects to effectively refactor your code.</cite><br /><p>This talk is part of a series of talks I'm doing this year on Emergent Design & Evolutionary Architecture, showing examples of how to use refactoring to fix architectural and design smells. I also cover refactoring databases and build files.</p><p>Come join us.</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com1tag:blogger.com,1999:blog-9944221.post-28759079810776391532009-06-10T15:08:00.001-05:002009-06-10T16:04:05.268-05:00AML (Arbitrary Modeling Language)UML is a failure. It failed for several reasons. Mainly, it failed because it falls into the cracks between technical people (developers, architects) and non-technical people (business analysts, project managers, etc). UML is too technical for non-technical people, and not technical enough for technical people. By this, I mean that it isn't really technical enough to do serious work on design by techies. At the same time, it's obscure enough to be mostly incomprehensible to non-techies.<p></p><p>This wasn't the Three Amigos' fault. They did quite impressive work on the meta-model aspect of UML. It was defeated by two forces. First, the fundamental problem lies with the amorphous nature of software itself. Coming up with a really expressive graphical notation is hard. Most developers know enough to draw boxes for classes and open-arrowheads for inheritance, but don't get much further into the UML specification because it gets quite convoluted (especially if you start looking at the later generations of UML, with Object Constraint Language and its ilk). </p><p>The second reason for failure is the implicit assumption that you need (nay, must) design all the classes and interactions before you start writing code. Big Design Up Front is a failed technique in almost all software development. The only exceptions are systems that are truly life and death. 
One reason the software on the space shuttle is so outdated is that they have very long iterations. In other words, they are willing to say "once this date passes, we will make no changes to the design of this system. Period." While most business software could make this statement, it ill serves the business. Business processes change like the weather, and you need software that can change just as readily. I don't come to this discussion as a dilettante: for a while, I worked for a company that was a Rational partner. We did the training, and we built software based on the Rational Unified Process. We even had some successes. But it didn't take long for us to realize that the upfront design didn't serve our clients because it hampered the kinds of changes required by their business.</p><p>Most developers I know use AML: Arbitrary Modeling Language, usually consisting of boxes, circles, and lines. When a given developer writes on a whiteboard, they write in their own version of a diagramming language. It's a shame that we don't have an industry wide diagramming language that everyone feels compelled to use, but that's the reality in most places I've been for the last 5 years. But, having said that, I'm a fan of AML, because it cuts down on <a href="http://memeagora.blogspot.com/2008/12/irrational-artifact-attachment.html">irrational artifact attachment</a>: you have nothing except the last 5 minutes invested in the diagram, making it as transient as possible. Transient artifacts are good because you're willing to throw them away, preventing them from becoming a part of the documentation for your project once the actual code has migrated away from that initial stab at design. 
Out-of-date documentation is worse than none at all because it actively misleads.</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com12tag:blogger.com,1999:blog-9944221.post-37866601643028940552009-05-26T08:36:00.001-05:002009-05-26T08:37:39.497-05:00Mac Boot Mysteries<p>This is a long, digressive story about diagnosing a hardware problem on a Mac; if you dislike such stories, feel free to leave now.</p><p>About a week ago, my wife Candy complained to me that her Mac wouldn't boot up. This is my hand-me-down Mac (we have a new policy in our house: Candy gets my hand-me-down computers, and I get her hand-me-down cameras), which means that it's about 2 years old, but it has a relatively new hard drive that I installed last November. A long time ago, I had set the startup option to always run in Verbose startup mode (available on demand by holding COMMAND-V upon startup, or permanently by issuing the following command):</p><pre><code>sudo nvram boot-args="-v"<br /></code></pre><p>Anyway, I could see from the startup porn that she was having a kernel panic with 2 likely suspects: the fan control daemon and something about Cisco VPN. Now, Candy doesn't have a Cisco VPN, but given that this was my hand-me-down machine, that explains why some of that stuff is there. Candy hadn't installed anything in the last week or so, leading me to think that one of these two was the culprit. She had been complaining that her machine was getting slower and slower, including things like window resizing, which had me puzzled. Perhaps a dying fan was causing the processor to overheat and thus slow down? </p><p>I tried safe boot (no joy), and at this point I suspected the fan. I'm certainly <a href="http://pragmactic-osxer.blogspot.com/2007/11/dont-crack-open-your-mac.html">not afraid to crack open a Mac (with proper respect)</a>, but replacing a fan isn't high on my list of fun things to do, so we made an appointment at the Genius Bar. 
To Candy's credit, she had a <a href="http://shirt-pocket.com/SuperDuper/SuperDuperDescription.html">SuperDuper!</a> backup that was just a couple of days old, so virtually everything was safe.</p><p>We went to the Genius Bar, where the GenX slacker (this is a compliment) booted the Mac from an external drive. I hadn't tried this (even though I have several bootable drives lying around) because I was fixated on the fan problem. After booting it up, his suspicion shifted to the VPN stuff, and I reluctantly concurred (especially after he ran some fan diagnostics). Now, though, the question remained: why did this problem suddenly occur? What was his (depressing) advice to fix this problem? Reinstall Leopard and all your applications. What?!? Is this a freakin' Windows machine? I couldn't believe that was real Genius advice. I've never yet had to do a ground-up reinstall of everything, but if that's the only way...hmmmm. He was very knowledgeable, but obviously he doesn't tread in the realm of VPN stuff. He also correctly pointed out that a bad fan shouldn't cause slowness: redrawing windows is mostly handled by the GPU on the Mac. The slowness was, as far as I can tell, a red herring.</p><p>When I got home, the first thing I did was boot Mac OS X from an external drive and take a fresh SuperDuper! snapshot of the drive's current state. Once I had that, I could play. Candy had already agreed to the pain and degradation of reinstalling everything, but I had to think there was a better way. Then, I had a brainstorm: I took the SuperDuper! snapshot I had just made and booted the machine from the external drive. Success! That suggested that some part of the internal hard drive that houses the VPN stuff had somehow gotten corrupted, even though the same image boots fine from an external drive. Because I had the SuperDuper! safety net, I decided on an experiment. I reformatted the internal hard drive and ran Drive Genius on it to scan for bad sectors. 
Nothing of note came from that, but then I overlaid my most recent SuperDuper! snapshot back onto the internal drive. </p><p>Success! The internal drive now boots, and everything appears back to normal. I'm guessing that my bad sector theory was correct.</p><p>Lessons:<br /></p><ul><li>Don't reinstall everything! My record is still clean on that account: I have never had to do that on a Mac yet (and it was a once-a-year chore on Windows because of bit rot).</li><br /><li>Always have good backups. This would have been a tragedy rather than a comedy if Candy hadn't been using SuperDuper!. It has yet to let me down, and it has saved my bacon on several occasions. </li><br /><li>I immediately latched onto the fan because it seemed to support other observed phenomena. I should have booted it myself from an external drive and run Drive Genius, but I thought I had it figured out.</li><br /><li>Stop and think. It was a good thing that we had dinner plans with a neighbor when we got back from the genius bar. It was over dinner that I had the idea of just overlaying the snapshot again. If I had started on it as soon as we got back, I would have been creating a lot of movement without a lot of forward progress. Sitting and thinking about it opened my mind to alternative options.</li><br /><li>SuperDuper! rocks. I can't imagine life without it.</li></ul>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com1tag:blogger.com,1999:blog-9944221.post-28929138960409023942009-05-11T16:32:00.001-05:002009-05-11T16:32:41.118-05:00RailsWayCon <div xmlns='http://www.w3.org/1999/xhtml'> <p>The economic downturn has affected conference attendance. At the conferences where I've spoken in the US, attendance seems to be down 20-30% from last year. However, it doesn't seem to have been as bad at European conferences (of course, it may just be the conferences where I'm speaking), where attendance is down only a little. 
It was surprising, then, that O'Reilly decided not to conduct RailsConf Europe this year.</p><p>However, someone has stepped in to fill the gap: <a href='http://it-republik.de/konferenzen/railswaycon/'>RailsWayCon</a> is happening in Berlin from May 25-27. They have gathered speakers from far and wide in what looks like a rockin' good conference. If you're anywhere in the neighborhood and looking for a Ruby and Rails conference, this is the one for 2009.</p> </div> Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com2tag:blogger.com,1999:blog-9944221.post-46636585340416146782009-05-01T11:21:00.001-05:002009-05-01T11:23:26.717-05:00Confessions of a Reformed TitilatorThe Rails community has a real brouhaha on its hands, but it's a red herring that it happens to be Ruby and Rails because it's a pervasive problem in scientific and engineering fields of all kinds. It seems that a presentation at the GoGaRuCo Ruby conference (Golden Gate Ruby Conference) included a heaping pile of really racy, semi-pornographic imagery. The presenters hand-waved it away by telling the attendees that the images existed to keep everyone's interest in their presentation. And, fortunately, <a href="http://www.ultrasaurus.com/sarahblog/2009/04/gender-and-sex-at-gogaruco/">some</a> <a href="http://www.sarahmei.com/blog/?p=46">people</a> called their bluff.<p></p><p>Studies show that imagery appealing to people's basest instincts (sex, violence, humor) is the easiest way to keep people interested in an otherwise boring topic. Lots of presenters use this technique to engage and keep attendees' attention. I know because I've used it myself. And it's really the most slovenly kind of laziness we have as presenters. Let me explain how I've come around to this conclusion.</p><p>At RailsConf last year, <a href="http://www.joelonsoftware.com/">Joel Spolsky</a> did one of the morning keynotes. 
As his first slide, he showed a glamor shot of Angelina Jolie and said (I'm paraphrasing) "I always show this as my first slide because I always get better evaluation scores on my keynotes when I do." His next slide showed Brad Pitt, with his shirt open, and Joel added "And, just to be demographically fair, I show this one next." Joel was plugging into 2 techniques for capturing attention: sex and humor. And it works on some level. The crowd (mostly) loved it. In his keynote, it was basically gratuitous: he never used it for anything other than pure pandering. But I'm always on the prowl for effective presentation tricks, so I borrowed Joel's trick, but with a twist. </p><p>In my <a href="http://nealford.com/downloads/conferences/canonical/Neal_Ford-Ceremony_vs_Essence-slides.pdf">Ceremony & Essence</a> keynote, which I gave at a handful of Ruby events last year, I had a similar picture of Angelina and Brad at the start, using it to get a laugh right out of the gate. In a keynote, if you can get people laughing early, they loosen up and are more likely to laugh and emote back to the presenter. However, my use wasn't merely gratuitous. I used other images of Angelina (and Brad) throughout the talk as an anchor point, serving two purposes. First, because everyone laughs up front, seeing a similar image reminds them of that, making it more likely they'll laugh again. The other purpose was to pull the narrative along. Bringing up a topic early in a presentation, then allowing people to forget it, then bringing it back at an unexpected time is one of my favorite presentation techniques. It allows attendees to make connections between disparate things that have more impact when you "force" them to make the connection themselves, rather than beating them over the head with it. For example, I brought Angelina up again when talking about demand for developers outstripping the supply, showing a publicity photo of her in the movie <em>Hackers</em>. 
This is a not-so-subtle anchor point intended to make people realize that one of the reasons developer demand outstrips supply is the paucity of female developers, and hopefully to make them ponder that a bit. I use Angelina (and, to a lesser extent, Brad) throughout the talk as those kinds of anchor points. Now, realize, these images are in no way pornographic. They are just publicity photos of famous actors. However, I was sensitive to the fact that some women might find this unsettling, so I made a deal with myself: if anyone ever complains about those images, I'll remove them (and restructure the talk) with no questions asked. </p><p>That happened earlier this year. One attendee at a keynote wrote me a very nice note afterwards telling me that she wasn't comfortable with the imagery of Angelina in the talk (and that the presence of Brad didn't help). That was my cue to stop using that imagery and find other anchor points to make my points. Her email made me realize the pervasiveness and toxicity of this kind of imagery, and that, while convenient, it was ultimately just laziness on my part. Why do you think that so much entertainment falls back on sex and violence to keep people interested in otherwise pretty dull drivel? Watching a show like <a href="http://en.wikipedia.org/wiki/The_West_Wing_%28TV_series%29">The West Wing</a>, which doesn't traffic in that kind of stuff, shows that quality writing doesn't have to fall back on gratuitously titillating material. Ultimately, using sexually provocative material in a technical presentation is just lazy -- when we do it we're not spending the time to come up with really compelling metaphors to represent something, relying instead on the basest of currency. Presenters, myself included, need to do better.</p><p>Lots of people who aren't affected by this will say that this is a tempest in a teapot, and that the offended parties are overreacting. 
Insidious misogyny is like lazy racism: people who engage in it hide behind a casual facade of "Oh, really, was that offensive?" Yes, by the way, it was.</p><p>Let me reiterate a point: this isn't about Rails or the <em>Rails Community</em> (I still haven't gotten my official code ring and Certificate of Membership for this "community", by the way). On average, presenters at Ruby and Rails conferences put a lot of effort into creating compelling presentations, paying attention to metaphor, presentation style, compelling imagery, etc. The conferences where I attend the most talks are Ruby/Rails and the Tri-Fork conferences (<a href="http://jaoo.dk/">JAOO</a> and <a href="http://qconsf.com/">QCon</a>). Kudos to presenters who care about creating compelling presentations. Sometimes, though, pushing the envelope on edgy entertainment crosses a line, which is what happened in this case. It could happen at any technical conference where the presenters are pushing hard on the creative aspect of technical presentations.</p><p>I strive not to be lazy when I put together presentations, to find compelling metaphors that don't inadvertently offend entire groups of people. I think it is an important maturity step for engineering and software communities to vote with their feet: outrage only goes so far, but notifying those who lazily offend effectively sends the message that it's not OK.</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com2tag:blogger.com,1999:blog-9944221.post-18810176699677555342009-04-22T04:38:00.002-05:002009-04-22T04:41:00.090-05:00Guerrilla SOA (SOA & The Tarpit of Irrelevancy)This is the sixth in a series of blog posts where I discuss what I see wrong with SOA (Service Oriented Architecture) in the way that it's being sold by vendors. For those who are behind, you can read the previous installments here:<p></p><ul><li><a href="http://memeagora.blogspot.com/2009/01/tactics-vs-strategy-soa-tarpit-of.html">Tactics vs. 
Strategy</a> </li><li><a href="http://memeagora.blogspot.com/2009/01/standards-based-vs-standardized-soa.html">Standards-based vs. Standardized</a></li><li><a href="http://memeagora.blogspot.com/2009/02/tools-anti-behavior-soa-tarpit-of.html">Tools & Anti-behavior</a></li><li><a href="http://memeagora.blogspot.com/2009/03/rubicks-cubicle-soa-tarpit-of.html">Rubick's Cubicle</a></li><li><a href="http://memeagora.blogspot.com/2009/03/triumph-of-hope-over-reason-soa-tarpit.html">Triumph of Hope over Reason</a></li></ul><p>In all the previous posts, I've been basically elucidating the reasons why I think that most SOA projects are a quagmire: misunderstanding the goals of the approach, the way most vendors are poisoning the water by selling ridiculously complex tools and services, why it's so seductive for developers who fetishize complexity to get knee-deep in these projects, and hype. If you read all these posts back to back, you'll surely have the impression that I think all hope is lost for enterprise integration.</p><p>But it's not.</p><p>Just like any software project, it is possible to do SOA right. My colleague Jim Webber has done lots of outstanding work in this area, under a rubric called <em>Guerilla SOA</em>. The basic ideas are:</p><ul><li>treat SOA projects like any other project, and apply the same agile principles: continuous integration, testing, simplicity</li><br /><li>don't use tools just for the sake of using tools. Try to keep the tool stack as simple as possible, and as automated as possible</li><br /><li>use loosely typed endpoints and document/resource orientation to allow the different parts of your architecture to evolve independently</li></ul><p>It's best said from <a href="http://www.infoq.com/presentations/webber-guerilla-soa">Jim's own mouth</a>. 
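</p><p>To make that last bullet concrete: a loosely typed, document-oriented endpoint reads only the fields it needs from the incoming document and ignores the rest, so the sender can evolve the document without coordinating a change with every consumer. Here is a minimal Ruby sketch (my illustration, with made-up field names, not Jim's actual code):</p>

```ruby
require 'json'

# A loosely typed, document-oriented endpoint (hypothetical fields).
# It pulls out only what this service needs and ignores everything else.
def place_order(document)
  order    = JSON.parse(document)
  customer = order.fetch("customer")   # the one field this service requires
  items    = order.fetch("items", [])  # optional; defaults to an empty list
  { "status" => "accepted", "customer" => customer, "item_count" => items.size }
end

# A sender can add fields (like "added_later") without breaking this consumer.
doc = '{"customer":"ACME","items":[{"sku":"A1"},{"sku":"B2"}],"added_later":true}'
p place_order(doc)
```

<p>Because the endpoint never binds to the whole document, new fields flow through harmlessly; that tolerance is what lets the parts of the architecture evolve independently.</p><p>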
Jim's British, so when he curses in technical presentations, people just think it's quaint (whereas when I do it, it's crass).</p><p>This is the way we generally approach SOA projects: like any other project. SOA doesn't have to be a huge scary thing. It's just software, and we aren't going to throw our playbook out the window just because it sounds scary.</p><p>The term <em>SOA</em> has been so co-opted by vendors trying to sell stuff that I think it will die off. We'll still be doing <em>soa</em> (note the lowercase letters), but we'll have to develop another three-letter acronym because the old one has slipped into the tarpit of irrelevancy.</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com1tag:blogger.com,1999:blog-9944221.post-5212931736840687492009-04-11T05:56:00.003-05:002009-04-24T06:49:29.545-05:00Speaking at the Cologne JUG Monday, April 20th<img src="http://nealford.com/images/koln-jug.png" alt="Cologne JUG image" title="" align="left" hspace="10" /> I'll be speaking at the <a href="http://87.230.78.21:8080/display/jugc/Home">Cologne Java Users Group</a> on the eve of the <a href="http://it-republik.de/jaxenter/jax/">JAX Conference</a>, on Monday, April 20th at 7 PM. I'm letting the JUG organizer pick the topic, so I'm not sure <em>what</em> I'll be talking about, but I'm looking forward to it: I've never spoken at a German JUG before. If you're in the neighborhood, stop by and we'll geek out, then have a real beer afterwards.<div><br /></div><div>Update: Here is a link to the <a href="http://nealford.com/downloads/conferences/canonical/Neal_Ford-XP_in_Practice-slides.pdf">slides I presented at the JUG</a>. 
Thanks for having me; I had a great time.</div>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com4tag:blogger.com,1999:blog-9944221.post-50974902412647424832009-04-09T04:37:00.000-05:002009-04-09T04:40:21.964-05:00Real World Refactoring in NFJS the MagazineSeveral people have asked me whatever happened to the NFJS Anthology book series (<a href="http://www.pragmaticprogrammer.com/titles/nfjs06/index.html">The NFJS Anthology, Volume 1</a> and <a href="http://pragprog.com/titles/nfjs07/no-fluff-just-stuff-2007-anthology">The NFJS Anthology, Volume 2: What Every Software Developer Should Know</a>). Both books contained essays built around subjects speakers were passionate about that year. Alas, the publishing business being what it is, there wasn't enough demand to justify continuing the series.<p></p><p>After much discussion, it was decided that the series would be more dynamic in magazine form than in book form, which explains the formation of the <a href="http://www.nofluffjuststuff.com/magazine_subscribe.jsp">NFJS the Magazine</a>. This is a monthly publication written by NFJS speakers about something they are talking about this year, and, of course, something they are interested in enough to speak and write about. Being in a magazine format makes it a bit easier to keep up to date, and the volume of material is higher because you get several articles a month.</p><p>I have a <a href="http://www.nofluffjuststuff.com/speaker_topic_view.jsp?topicId=1700">Real World Refactoring</a> article in the upcoming issue, based on my talk of the same name this year. 
If you go to a <a href="http://www.nofluffjuststuff.com/">No Fluff, Just Stuff</a> show, you get a free copy of the magazine, but anyone can subscribe.</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com1tag:blogger.com,1999:blog-9944221.post-9541995621647464912009-04-06T23:53:00.000-05:002009-04-06T23:56:14.997-05:00RailsConf Interview by Chad Fowler with Paul Gross and meOne of the marketing tools that <a href="http://www.railsconf.com/">RailsConf</a> uses is a series of interviews about upcoming talks. Chad sent some interview questions to Paul and me about our upcoming talk <em><a href="http://en.oreilly.com/rails2009/public/schedule/detail/8706">Rails in the Large: Building the Largest Rails Application in the World</a></em>, and he's posted it on his site. Want some opinionated conversation about Rails and how to use it to build Enterprise applications? Then <a href="http://chadfowler.com/2009/4/2/railsconf-speaker-interview-neal-ford-and-paul-gross">here it is</a> (and we didn't use any Scala!). Enjoy!Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com0tag:blogger.com,1999:blog-9944221.post-12549524035187524352009-03-23T10:42:00.002-05:002009-03-23T10:47:14.431-05:00The Triumph of Hope over Reason (SOA & The Tarpit of Irrelevancy)This is the fifth in a series of blog posts where I discuss what I see wrong with SOA (Service Oriented Architecture) in the way that it's being sold by vendors. For those who are behind, you can read the previous installments here:<ul><li><a href="http://memeagora.blogspot.com/2009/01/tactics-vs-strategy-soa-tarpit-of.html">Tactics vs. Strategy</a> </li><li><a href="http://memeagora.blogspot.com/2009/01/standards-based-vs-standardized-soa.html">Standards-based vs. 
Standardized</a><br /></li><li><a href="http://memeagora.blogspot.com/2009/02/tools-anti-behavior-soa-tarpit-of.html">Tools & Anti-behavior</a><br /></li><li><a href="http://memeagora.blogspot.com/2009/03/rubicks-cubicle-soa-tarpit-of.html">Rubick's Cubicle</a><br /></li></ul><p>A very funny site shows that this Internet thing might not be a fad. The Chuck Norris Facts web site has lots of great hyperbolic claims about Chuck Norris, American actor and legendary bad-ass. Some of the "facts":</p><ul><li>If you have five dollars and Chuck Norris has five dollars, Chuck Norris has more money than you.</li><li>There is no 'ctrl' button on Chuck Norris's computer. Chuck Norris is always in control.</li><li>Apple pays Chuck Norris 99 cents every time he listens to a song.</li><li>Chuck Norris can kill two stones with one bird.</li><li>When the Boogeyman goes to sleep every night, he checks his closet for Chuck Norris.</li></ul><p>Some of this may be just a tad exaggerated. I'm pretty sure that when Chuck Norris does pushups, he is not in fact pushing the earth down instead of pushing himself up. The site is <a href="http://www.chucknorrisfacts.com/">here</a>, if you want to go read more of them. I'll wait.</p><p>OK, now that you understand more about Chuck Norris, here's another site of over-the-top exaggeration about an over-hyped subject: <a href="http://soafacts.com/">SOA Facts</a>, modeled after Chuck Norris Facts:<br /></p><ul><li>SOA is the only thing Chuck Norris can't kill. </li><li>SOA invented the internet, and the internet was invented for SOA.</li><li>SOA is not complex. You are just dumb.</li><li>SOA can always win at TicTacToe. 
Even if you go first.</li><li>One person successfully described SOA completely, and immediately died.</li><li>In a battle between a ninja and a jedi, SOA would win.</li><li>SOA knows what you did last summer, and is disappointed that it wasn't SOA.</li></ul><p>I used a bunch of these in one of my SOA talks as bumper slides between the various topics, which provided a nice fun icebreaker. But I reserved two of them for the last part of the talk because I think they aren't exaggerations at all, merely deep truths:</p><ul><li>Implementing SOA for the first time is the triumph of imagination over intelligence.</li><br /><li>Implementing SOA for the second time is the triumph of hope over experience.</li></ul><p>SOA has gotten so complex, with so many moving parts, that getting it right is extraordinarily difficult. Once you've lived through one of these projects (especially if you've fallen into the other tarpits I discuss in the previous installments), you understand the first quote at a deep level. That you would try it again truly is <em>the triumph of hope over reason</em>.</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com2tag:blogger.com,1999:blog-9944221.post-12076164054823542692009-03-10T11:36:00.002-05:002009-03-10T11:38:22.613-05:00Rubick's Cubicle (SOA & the Tarpit of Irrelevancy)This is the fourth in a series of blog posts where I discuss what I see wrong with SOA (Service Oriented Architecture) in the way that it's being sold by vendors. For those who are behind, you can read the previous installments here:<ul><li><a href="http://memeagora.blogspot.com/2009/01/tactics-vs-strategy-soa-tarpit-of.html">Strategy vs. Tactics</a></li><li><a href="http://memeagora.blogspot.com/2009/01/standards-based-vs-standardized-soa.html">Standards-based vs. Standardized</a></li><li><a href="http://memeagora.blogspot.com/2009/02/tools-anti-behavior-soa-tarpit-of.html">Tools & Anti-behavior</a></li></ul><p>Developers love to solve puzzles. 
One project manager with whom I used to work kept a jar full of little nail puzzles (like the one pictured here) on his desk:</p><p><img src="http://www.mountainmade.com/prodimages/10946-1.jpg" alt="nail puzzle" title="" /></p><p>Any time he was having a conversation that he didn't want developers to listen in on, he'd grab one of the puzzles and toss it to them. Inevitably, the developer would grab the toy and immediately become totally absorbed in solving the puzzle. After about 10 minutes, the puzzle would yield up its secret, and the developer would look up and ask "Did anything important just happen?"</p><p>Developers tend to be problem solvers -- it's one of the appealing things about fiddling with computers. But what happens when you take a natural problem solver and present them with dull work, with no interesting challenges? What happens frequently is what I've deemed the Rubick's Cubicle anti-pattern.</p><blockquote> <p>If the presented problem isn't complex enough, developers will figure out ways to make it complicated and therefore challenging.</p></blockquote><p>Writing the same dull CRUD application over and over is boring. But what if you could figure out a way to get all the simple CRUD applications to talk to one another? That's a nice, juicy puzzle. This perhaps explains the complexity <em>fetish</em> I see in so many "Enterprise" architectures and applications. Some of it is accidental complexity, accrued from years of piecing together parts that were never meant to work with one another. But I don't think accidental complexity covers the entirety of why things are so convoluted.</p><p>I remember back in the mid-90s, when I was the CTO of a small training and consulting company. We were absolutely delighted when we first saw EJB: here was a technology <em>no one</em> could understand without extensive training. The same thing happened with all the variations of COM, DCOM, and CORBA. 
Those were flush times for training companies because we knew that developers would be drawn like moths to a flame, frequently with the same result.</p><p>Building the simplest thing that can work is sometimes duller than crafting some devilishly complex Rube Goldberg machine, but keeping it simple is a worthy challenge in its own right. If you find yourself in Rubick's Cubicle, stop and ask yourself: "Is there a simpler way to do this? Perhaps dismantling something that no longer serves its purpose? <em>What is the simplest thing that could possibly work?</em>"</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com2tag:blogger.com,1999:blog-9944221.post-30379723265846732352009-02-24T10:29:00.001-05:002009-02-24T10:30:45.856-05:00Emergent Design & Evolutionary Architecture at DeveloperWorksFor the last few months, I've been toiling away on an article series for IBM DeveloperWorks, and it's rolling out today! From the abstract for the series opener:<blockquote> <p>This series aims to provide a fresh perspective on the often-discussed but elusive concepts of software architecture and design. Through concrete examples, Neal Ford gives you a solid grounding in the agile practices of evolutionary architecture and emergent design. 
By deferring important architectural and design decisions until the last responsible moment, you can prevent unnecessary complexity from undermining your software projects.</p></blockquote><p>The first two articles in the series appeared today:</p><ul><li><a href="http://www.ibm.com/developerworks/java/library/j-eaed1/index.html?S_TACT=105AGX02&S_CMP=EDU"><em>Evolutionary architecture and emergent design: Investigating architecture and design</em></a> </li><br /><li><em><a href="http://www.ibm.com/developerworks/java/library/j-eaed2/index.html?S_TACT=105AGX02&S_CMP=EDU">Test-driven Design, Part 1</a></em></li></ul><p>I plan to use this series to start a conversation about something that we all do every day but can't really describe well, even to other technical people (much less our grandparents). I don't purport to know the answers (I'm not even sure I know all the questions), but at some point we have to talk about it. In the first installment, I provide some working definitions and some overarching concerns. Let me know what you think about it.</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com10tag:blogger.com,1999:blog-9944221.post-35149050360689738772009-02-16T17:26:00.002-05:002009-02-16T17:37:31.115-05:00Speaking at the IT Architect Regional Conference in AtlantaAt the end of February (the 25th - 27th), I'll be making a rare Atlanta conference appearance at the <a href="http://www.iasahome.org/web/itarc/2009/atlanta">IT Architect Regional Conference</a>, hosted by the <a href="http://www.iasahome.org/">International Association of Software Architects</a> (IASA). This is the first in a series of regional conferences focused on an important but generally neglected segment of the developer population, software architects. What that title actually means is some matter of debate (hey, maybe this conference will help define that term), but the topics covered certainly tread some important ground. 
I'm doing a short version of my <span class="Apple-style-span" style="font-style: italic;">Smithying in the 21st Century</span> keynote (<a href="http://memeagora.blogspot.com/2009/01/upcoming-keynote-smithying-in-21st.html">overviewed here</a>). My <span class="Apple-style-span" style="font-weight: bold;">Thought</span>Works colleague Steven "Doc" List will also be there, imported from the west coast, convening an open space called <span class="Apple-style-span" style="font-style: italic;">Beyond Fight or Flight: Meetings Don’t Have to be Gladiatorial Combat, </span>which sounds quite interesting. It's still not too late to sign up for it; hope to see you there.Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com0tag:blogger.com,1999:blog-9944221.post-4344422127110690082009-02-06T12:08:00.002-05:002009-02-07T15:05:37.771-05:00Tools & Anti-Behavior (SOA & the Tarpit of Irrelevancy)<p>This is the third in a series of blog posts where I discuss what I see wrong with SOA (Service Oriented Architecture) in the way that it's being sold by vendors. For those who are behind, you can read the <a href="http://memeagora.blogspot.com/2009/01/tactics-vs-strategy-soa-tarpit-of.html">first installment</a> and <a href="http://memeagora.blogspot.com/2009/01/standards-based-vs-standardized-soa.html">second installment</a>.</p><p>While rank-and-file developers go to conferences to soak in deep technical content, their peripherally technical managers (the ones who wrote some rockin' good Cobol code back in the day, but who now make decisions about modern enterprise architecture) go to different conferences in Palm Springs. At those conferences, they have a 2-hour morning session, run by a big tool vendor, then play golf for the balance of the afternoon. And what the vendors show them is poison.</p><p>Mostly what they see these days are tools that support SOA and ESBs. 
And in particular, the favorite demo-ware application is their BPEL (Business Process Execution Language) designer. This designer allows you to wire together services by drawing lines between boxes. The lines can include transformations and other sexiness. And it demos great. "Look, just draw a couple of lines here and here, click on the Run button and voila! Instant SOA". </p><p>Then, the manager brings it back home and notifies the developers that this is the new tool of choice. When developers start using it, they realize the awful truth: they've been sold a hairball generator. Tools like this work great on really small scales, when it's easy to see all the connections between things. But, as things get complicated, they start suffering from the hairball effect: all the lines start running together, and you can't create a diagram that makes sense to anyone anymore. Perhaps you can fight through this by creating workflows in chunks, and zooming in and out.</p><p>Then reality arrives. Because you create workflows using these tools, you are coding, in the worst possible language (a graphical representation). Thus, you are defining <em>behavior</em>, just like you do when you write source code. But the behavior you define lacks all the benefits you get from writing it in code.</p><ul><li><p>reuse: you can't really reuse portions of your workflow because there is no <em>method</em> or <em>subroutine</em> functionality (you might get lucky with a sub-workflow). Mostly, people achieve "reuse" by copying and pasting, which you would never do in code.</p></li><li><p>refactoring: no refactoring, making it harder to identify common workflow chunks for reuse. When you don't have refactoring, you don't watch for opportunities for refactoring as much.</p></li><li><p>limited programmability: you don't get <em>if</em> statements and <em>for</em> loops, you get whatever this particular BPEL designer supports. 
You get flowchart-like stand-ins for real decision statements, but they are much more brittle than the facilities offered in modern languages.</p></li><li><p>testing: you can't write unit, functional, or integration tests for these workflows. The only real testing option is user acceptance, meaning that the entire universe must be up and running. If you have no unit testing, you also don't have mock objects or other testing techniques common in code.</p></li><li><p>hard to diff: let's say you fought the beast and got a non-trivial workflow up and running, and everything is great. In six months, you change it in non-trivial ways, and all is good. Then it comes time to see what's different. BPEL tools don't have diff facilities, so you can either visually diff the diagrams (yuck) or diff two 10,000-line XML documents (double yuck). BPEL relies on either heavy-weight diagramming tools or raw XML, and nothing in between.</p></li></ul><p>Tools like this fall into the category one of my colleagues identified as <em>doodleware</em> tools. They let you create pretty pictures but collapse under scale. And they don't support all the common facilities offered by good old-fashioned code. Is it really worth giving up robust reuse, refactoring, testing, programmability, and versioning/diffing just to see the pretty picture? Ironically, it's pretty easy to generate a similar picture <em>from code</em>, using tools like GraphViz. </p><p>I am a strong believer in the mantra</p><blockquote style="font-weight: bold; font-style: italic;"><p>Keep behavior in code.</p></blockquote><p>We have great tools for code (including ways to generate doodles) -- why would you want to abandon what works for something new and shiny? Except, of course, that code won't take you out for a golf outing in Scotland if you choose it.</p>Neal Fordhttp://www.blogger.com/profile/12839796402858974817noreply@blogger.com14
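<p>A footnote on the GraphViz remark above: if you keep the workflow as plain data in code, generating the doodle is trivial. Here is a minimal Ruby sketch (the service names are invented for illustration) that emits GraphViz's DOT format:</p>

```ruby
# Describe the workflow as plain data in code; generate the picture from it.
# The service names here are hypothetical.
WORKFLOW = {
  "ReceiveOrder"  => ["ValidateOrder"],
  "ValidateOrder" => ["ChargeCard", "RejectOrder"],
  "ChargeCard"    => ["ShipOrder"],
}

# Emit the graph in GraphViz's DOT language, one edge per line.
def to_dot(graph)
  edges = graph.flat_map do |source, targets|
    targets.map { |target| %(  "#{source}" -> "#{target}";) }
  end
  (["digraph workflow {"] + edges + ["}"]).join("\n")
end

puts to_dot(WORKFLOW)
# Save the output as workflow.dot, then render it with GraphViz:
#   dot -Tpng workflow.dot -o workflow.png
```

<p>Unlike the doodleware version, this one lives in version control alongside the rest of the code, diffs cleanly, and regenerating the pretty picture is one command.</p>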