Monday, October 30, 2006

Post Windows

Professionally, a lot has changed for me over the last few weeks. I've rolled off the rich client .NET 2 project to which I've been attached since December and onto a Ruby on Rails project. The projects could hardly be more dissimilar: a large (15+ developers), distributed agile desktop application versus a 4-developer Ruby on Rails project. Add another significant change: I'm now (for the time being, at least) post-Windows. I upgraded my Mac in July to a MacBook Pro, and have been doing .NET development on it using Parallels (which, BTW, works great). On my new project, though, I'm fully Mac: the pairing workstations are Mac Minis, each with 2 keyboards, mice, and monitors. And I've just completed porting all my Java and Ruby conference talks over to the Mac.

This is a big deal for me. For my entire professional life, I've been living with a Microsoft operating system on a daily basis, starting with DOS 5 back in 1993 and then moving through Windows (I've been a power user of every version: 3.1, 3.11, 95, NT 4, 2000, XP). Now, though, I'm conducting both my personal and professional lives in OS X. And I'm giddy with joy. I only occasionally need to dip into Windows for one of the two applications for which I don't have a superior Mac replacement.

Dealing with low-level frustration and annoyance takes a measurable toll on your psyche. I'm not one to be overly religious about tools; I try to learn to use them to their utmost. However, I absolutely believe that my quality of life is better now, in small but real ways, mostly having to do with elegance and design. These "OS X rocks, Windows sucks ass" kinds of blog entries are generally short on substance, just an inarticulate expression of the intangible. Well, here are some concrete examples.

Windows machines have 2 ways to connect to networks: wired and wireless. On my Dell Latitude 610, when a wireless network is nearby, Windows pops up a task tray balloon offering to connect. Yet once you connect to a wired network, you no longer need the wireless one. Windows still pops up the annoying little balloon, about every 15 seconds, offering to connect you to a network you don't need. When you connect OS X to a wired network, it stops asking about wireless networks because it figures out, correctly, that your networking needs are already met.

Another example: power users like to be able to get to the underbelly of all the GUI eye candy to get real work done. I would like access to the Excel command line, in the vain hope that I might be able to open multiple spreadsheets at a time. Yet, in their infinite wisdom, Microsoft has wired Windows to treat Office shortcuts differently, preventing you from getting to the underlying startup command. If you don't believe me, check out this screen shot or check for yourself.



I've done what all power users of Windows end up doing: I wrote a Ruby script that uses COM automation to open multiple spreadsheets. In fact, my toolbox is full of little scripts and such that get around annoying Windows behavior. Actually, I should be grateful to Microsoft for their annoyances: much of the Productive Programmer book features ways to make programmers more productive in that environment.
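For the curious, here's roughly what that kind of script looks like. This is just a minimal sketch (not the actual script in my toolbox), assuming Ruby's standard win32ole library and an installed copy of Excel:

# open_spreadsheets.rb -- minimal sketch of driving Excel via COM automation
# from Ruby (illustration only). Opens every spreadsheet passed on the
# command line in a single running Excel instance.
require 'win32ole'

excel = WIN32OLE.new('Excel.Application')  # attach to (or launch) Excel via COM
excel.Visible = true

ARGV.each do |spreadsheet|
  # Excel wants a full path, so expand whatever was passed on the command line
  excel.Workbooks.Open(File.expand_path(spreadsheet))
end

Run it as "ruby open_spreadsheets.rb *.xls" and each file opens as a workbook in the same Excel instance, no shortcut spelunking required.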

Before I get a whole bunch of Spolsky-esque comments about why Windows is the way it is, let me state that I already understand. I know that it's terribly difficult to write an OS that handles the wide world of devices Windows must support, because it runs on so much hardware. And I know that one of Apple's big advantages is their tight coupling of hardware and software. I don't believe that Microsoft is evil or incompetent, and in fact I like some of what they create: .NET has some really nice, elegant parts (and some warts too, like all technologies). But at the end of the day, as a user of the OS, the little things matter to me. If you cast aside history for the moment, using OS X is simply more pleasant and refreshing, regardless of the reasons that got us here.

Friday, October 27, 2006

Technology Snake Oil Part 10: Check-box Parity

A pervasive bit of Snake Oil that's been around a long time is Checkbox Parity. This is the compulsion of software companies to add features (or make features up) so that they can create the matrix on the side of the box, showing how their version stands up against the competition via columns of checkboxes. The importance of this marketing scheme should not be underestimated. There is a famous essay I read in a treeware book a long time ago by someone whose name I can't remember (but I remember that it was someone notable as a writer). The essay discussed the writer's trials and tribulations with the first versions of Microsoft Word. To compete against WordPerfect (the huge market leader at the time), the first version of Word for Windows needed Checkbox Parity with outlining. The author discusses trying to get this to work in Word (it was, after all, listed as a feature of the product) and continually being frustrated. After numerous calls to technical support, he finally got the admission that the feature flat out didn't work, they knew it didn't work, but they had to include it as a feature to achieve Checkbox Parity. This is not a minor point: part of the reason Word came to dominate WordPerfect in the marketplace depended on the two appearing essentially equivalent in the nascent days of Word. More honest companies (like Ami) failed in the Darwinian environment for word processors that existed before Office crushed all competitors.

This Checkbox Parity also drove the intense competition in the early days of Java IDEs. JBuilder, in its heyday, released a new version every 8 months (which was disastrous for those of us who wrote books about it). This worked well for Borland, who had a very agile development team for JBuilder. It was fatal for Visual Cafe, which wasn't so agile. For many managers (and, unfortunately, many technologists who know better), the dreaded checkbox matrix on the side of the box determines the purchase. Forget well-designed, elegant functionality. If you can hack together something that you can plausibly compare to an elegant solution, you can achieve Checkbox Parity.

This same Checkbox Parity will be used to bludgeon Ruby in the marketplace until Ruby achieves the same types of functionality that Java and .NET already have. The CCYAO of large companies will reject Ruby because it doesn't achieve Checkbox Parity with older technologies, regardless of its suitability for a particular development project. If you are trying to sell Ruby in the enterprise, you need a strong antidote to Checkbox Parity Snake Oil.

Monday, October 16, 2006

Technology Snake Oil Part 9: The CCYAO

There is a corporate title that no one talks about but that is critical in many organizations: the Chief Cover-Your-Ass Officer. He's the C-level executive to whom you must sell technology choices. He's always skeptical of new technologies because that's his job.

Back in the days when client/server was the norm and PowerBuilder reigned as king of corporate development, the company for which I worked was promoting Delphi as a better alternative for a particular application for a trucking company. Anyone with any technical knowledge could see quickly that Delphi was the better choice. All the technical people at this company clearly acknowledged that they wanted Delphi, and that a PowerBuilder solution for this particular application was doomed to failure. After a series of meetings with the CCYAO and others, they told us their choice: PowerBuilder. When asked why: "There is a good chance that this project will not succeed, and frankly we think the only chance it has of succeeding is if we use Delphi and your solution. However, if it fails, none of us will be fired if we pick the standard that everyone else uses, PowerBuilder. So, we're going with PowerBuilder. Thanks for coming in."

This is the same C-level executive who coined the phrase "No one ever gets fired for choosing IBM", which has since been upgraded to "No one ever gets fired for picking Microsoft". No matter what the technical merits of your solution, ultimately you've got to sell it to the CCYAO.

Tuesday, October 10, 2006

The Condiment Conference Redux

Back in May, I spoke at the first Ajax Experience, and it was a blast. It has been years since I've been to a conference with so much enthusiasm. It is unusual for a conference to focus on what I call a "condiment" technology. You can't write a web application in just Ajax (although TiddlyWiki may prove me wrong on that). Generally, you write the web application in Java, .NET, Ruby, PHP, Python, Perl, or some other "main course" technology. Ajax provides the condiment, adding both visual and usability polish. Most conferences focus on main courses; The Ajax Experience focuses on the condiment.

This means that the conference has an eclectic mix of developers. Hallway conversations lack the implicit assumptions you can generally make at main course conferences. For example, all Java developers share an implicit context. At The Ajax Experience, you have to throw away your base assumptions, both in sessions and in conversations. Just as travel broadens you because you meet people with different contexts and experiences, attending The Ajax Experience does the same for technologists. Instead of the usual low-level animosity that each technology tribe exhibits toward non-tribe members, everyone focuses on common ground. It happens again in October, in Boston. You owe it to yourself to become an expatriate from your main course technology and come to the United Nations of web development, The Ajax Experience.

Saturday, September 30, 2006

EKON 10

I just completed speaking at my 8th Entwickler Konferenz in Frankfurt (I missed the first two, so this one was EKON 10). Speaking at international conferences is enlightening because you quickly learn that the concerns and priorities of US developers don't apply all over the world. For example, I started speaking about Java at this conference back in 1999. At the time, the most popular tool was Delphi, and I couldn't get the time of day from most developers when talking about Java. German IT is traditionally very conservative, and Java was still an upstart platform. By 2001, Java had become the safe choice, and suddenly there were not 1 but 2 full conferences devoted to it (JAX and WJAX) that drew bigger crowds than Entwickler (which, by the way, is German for "developer"). This year, I'm talking to everyone about Ruby on Rails in the hallways, and no one has heard of it. That'll change in February, when I'm proposing a RoR talk at the Webinale Konferenz.

An interesting thing happened at breakfast this morning that highlights why I like this conference so much. Terry (my colleague from Atlanta) and I planned to have breakfast with another speaker, from Amsterdam, whom we've known for years, before heading out to bicycle in the German countryside. While we were eating and chatting, one of the conference attendees came over, introduced himself, and sat down (drawn by the sound of English and his recognition of one of the 3 of us from our sessions). A little later, another attendee came and sat on the other side. Before too long, we realized that our table had representatives from the US (Atlanta), Amsterdam, Greece, and Nigeria. We had 3 continents covered! Virtually nowhere else in the world can you spontaneously gather a group like this to talk about technology, programming, and the weather. Just like working for an international company, it broadens your perspective on technology and other, more important things.

Tuesday, September 19, 2006

Application Development Isolation from its Environment

Back in my DSW days, we did a fair amount of development in Delphi (a RAD application builder from Borland), and we noticed a trend. We frequently won return business from our customers (the hallmark of a successful consulting company), but we always ran into a major hassle: re-creating the development environment. Delphi, being a component-based development environment, took advantage of a rich ecosystem of third-party components. Why write something from scratch when you can buy it, frequently with source code included? However, using third-party components meant that every application's development environment was subtly different. Client A uses this widget, but you have to make sure not to use it for client B, because they don't own a license for it. Conceptually (but not actually), these controls were like ActiveX or .NET components in that they were installed on the developer's machine and became part of the operating system (at least as far as the developer tool was concerned). We thought a bit about how to isolate each project from the others (some project setups would eat up a week of time, getting the right Delphi version and components installed just the way we left them). The problem was one of isolation: you can't encapsulate the development environment (or the developed application) at any level lower than the operating system.

Then we developed a clever solution: start building our applications in VMWare. VMWare had just gotten Really Good at that time, and we realized that we could take a generic Windows ghost, install all the necessary developer tools on a VMWare image, and develop on it. The speed hit at the time wasn't terrible, and it gave us clean-room development for each client. When that phase of the project concluded, we saved the VMWare image out to a server. Two years later, when that client came back for enhancements, we started up that application's development environment just like the day we left it. This approach saved us days of downtime and made developing for multiple clients a breeze. Client A needs some minor tweaks while I'm working on client B's application? No problem, just bounce between virtual machine images.

Why do I bring this up now? Because the exact same scenario is playing out in the .NET development space. Most third-party components either GAC themselves or have stringent licensing requirements. Virtualization has gotten pervasive now, so if you have to do development on a machine that isn't a throw-away pairing machine image, life is easier if you sandbox it into its own virtual machine. I did this out of necessity on my former .NET project because I was developing on a MacBook Pro. However, I think this is wise for any development effort in a platform (like Delphi or .NET) that can't be isolated at any level lower than the entire operating system. This isn't as big a problem with Java or Ruby because they don't irrevocably couple themselves to the operating system. This is one of the prices you pay for that tight integration with Windows that .NET gives you: you can't de-integrate when you need to.

Monday, September 11, 2006

Thinking Different(ly)

I've fully made the switch. Instead of traveling with 2 laptops (a Dell Latitude 610 and a PowerBook G4), I've consolidated to a single machine: a 17-inch, fully loaded MacBook Pro. The tipping point for me? The ability to do real .NET development on the Mac.

Of course, I've seen and heard all the stuff about Parallels and how good it is: orders of magnitude better than Virtual PC, which must create a virtual set of hardware on which Windows can run. Parallels (and the upcoming VMWare for the Mac) takes advantage of the virtualization hardware on the Intel chip, so you really do get near-native speed when running Parallels inside OS X. Notice: not dual booting, but running Windows in a window inside OS X. But I'm on a .NET project, and "it almost runs well enough to do .NET development" isn't quite enough. Thus my hesitation up until this point to take the plunge. Well, I'm here to say: it works as advertised. Building our project in Parallels on the Mac is essentially as fast as building it on the single-processor Dell. The build times are within seconds of one another (for an 8-minute build).

But there are always things that you can't read about in reviews that still cause issues. I've been here before, and know that there are lots of little hidden gotchas. When I decided to move everything over, I reserved some time for glitches. And you know what? I got that time back, because I ran into very few minor ones and no major ones.

Here's an example of something you won't read about but that is a huge deal if you are planning to use your Mac for .NET development. For a real .NET project, you must have (of course) the Windows XP operating system, a database server (MS SQL Server), and Visual Studio, including all the 3rd party components required by your project. For our application, you also need Office. How big do you make your virtual disk? This was a very important question in the VMWare days. Like "real" hardware, VMWare virtual disks (at least in the last version I used) could not be resized. Once you created the disk, you were stuck with it, so getting the disk size right was critical. Not so in Parallels. Parallels includes a utility that allows you to resize the virtual disk. I started with a ridiculously optimistic 8 GB drive and quickly ran out of room. So, I used the Parallels utility to make the drive bigger. But here's the part you can't read about anywhere: once you start the virtual Windows back up, it views that new space as "unpartitioned", meaning you can't use it for anything yet. But Windows on Parallels is so Windows that you can run Partition Magic on the newly resized virtual disk and make your main partition bigger. I've done it 3 times now (and am up to a 20 GB partition for our project).

Here's another illustration of the Window-y-ness of Parallels on OS X. I had some problems with the database setup, and Brian (our DBA) was kind enough to take a look for me. He's in London; I'm in Chicago. I started up Windows, gave him the IP address assigned by DHCP in Chicago, and he VPNed into our network and ran my Windows install via Remote Desktop. He never realized (until I told him later) that he was running Windows on top of OS X.

This represents a watershed event. The MacBook Pro (and its siblings) running OS X are now the only machines that run every modern operating system. For consultants, that's huge. We can now go into any organization, find out what they are running, and fit in exactly. Your servers are running Ubuntu? No problem, I can create a virtualized version here on my machine. Red Hat, Windows Server 2003, Vista...you name it, I can now run it. The Mac has changed from an artistic, boutiquey machine to the ultimate Swiss-army chain saw for consultants. If I were Dell, I'd be worried. OS X and the wonderfully designed hardware make for a significantly better user experience. And now it's the power user's machine of choice. Maybe I should buy some Apple stock...

Saturday, September 02, 2006

Pairing Everywhere

The more I pair program, the more I'm convinced that two (compatible) people always produce better results than just one. I know that pair programming is the best way to write code. This started me thinking about other creative artifacts that might benefit from pairing.

There are already some pretty famous pairs. Rodgers and Hammerstein come to mind. One of the greatest series of history books, The Story of Civilization, was written by a pair of authors, Will and Ariel Durant. Because the early volumes were written in the 1930s, only Will's name appears on the first few, but he eventually acknowledged his wife as a co-author on the later books. Some great authors were essentially pairing with their editors: numerous examples exist of great writers whose works were made better by a strong-willed editor, including Theodore Dreiser, Ernest Hemingway, and on and on.

To this end, my friend and colleague Joe O'Brien and I tried a new trick this year at ThoughtWorks Away Day: pair teaching. He and I used 2 computers, 2 projectors, and one topic (Ruby for ThoughtWorkers Who Don't Know Ruby But Want to Know Why It Rocks: Learning Ruby Through Unit Testing). In the end, the whole was greater than the sum of its parts. It was a frantic 1-hour presentation, with something happening constantly. After the smoke cleared, another ThoughtWorker said that he really enjoyed it because his mind only wandered for about 4 minutes total during the entire hour, and suggested that if we hired a clown to walk through the audience, juggling and repeating our key points, we would have held 100% of his attention. High praise, indeed.

Friday, August 25, 2006

Categorizing Creative Genius

I just read a fascinating article in the July Wired magazine about creative genius. The subject, an economist named David Galenson, has correlated age with perceived value in all sorts of creative fields and has identified 2 curves. One group, which he dubs "Conceptualists", tends to peak early in their careers. For example, even though Picasso lived into his 90's, his most cited works in art history and other books were done before he was 30. Mark Rothko (one of my favorites), by contrast, did his most cited work the year he died, at 59. Galenson calls these guys "Experimentalists". He has run this correlation across painting, fiction, economics, music, and other fields. He believes that 2 distinct flavors of genius exist: one that manifests itself early, with bold, field-changing paradigm shifts (the Conceptualist), and another, slower, accumulated genius (the Experimentalist).

This instantly applies to other fields that he hasn't studied, like physics. I've often wondered why so many brilliant, earth-shattering discoveries are made by young men (Newton, Einstein, and Feynman were all quite young when they produced their landmark works). However, if you look at someone like Stephen Hawking, he's still producing significant work. I think this is a great topic, one that resonates with observations I've made but never correlated myself. His book is named Old Masters and Young Geniuses: The Two Lifecycles of Artistic Creativity, and it's jumping to the top of my reading list with a bullet.

Sunday, August 20, 2006

Technology Snake Oil Part 8: Service Pack Shell Game

If you could see my face, you would see shock, dumbfoundment, and disgust. It pains me to even write about something as stupid as this, but it keeps rearing its head. The majority of my recent clients (and someone from another company I talked to casually) rely on one poisonous meme, which seems to be spreading. The very bad idea: "We never deploy anything until the first service pack is released".

Let's think about this for a second. If a vendor produced the most perfect software ever conceived by mankind, there would never be a service pack, thus none of these companies would ever deploy it. On the other hand, if I release a really stinky version of some software that requires a service pack after a week, it now meets this unassailable standard of deployability.

Two factors have led to this smelly idea. The first is just pure laziness on the part of the decision makers who decide when things get deployed. Regardless of the service pack level, you should always evaluate software on its merits. A prescription like the Service Pack Shell Game ignores important factors in software and tries to find a metric that indicates quality. This is not even close. When Windows NT Service Pack 1 was released, it was a disaster. Service Pack 2 basically rolled back all the changes that SP1 wrought. That's why, to this day, you still see software that requires NT SP3, because that was the first real service pack that actually fixed anything.

The other reason this is happening is both more subtle and more dangerous. Have we really gotten to the point where we distrust commercial software this much? Yes, because vendors have consistently released software that is not ready for prime time and told us that it's of shipping quality. Companies even apply this selection process to open source software now. Open source has no marketing department pushing releases out the door; generally, open source software ships when it is ready. Thus, most open source has fewer "service packs" than commercial software. Yet this same flawed prescription is often applied to it. Software, no matter what the source, should be vetted based on its quality, which should be determined by (as much as possible) objective means. Choosing an arbitrary metric like "after the first service pack" guarantees you'll get hit-and-miss quality software.

Friday, August 18, 2006

ejbKarmaCallback()

When you work with a noxious technology enough, it eventually comes back to bite you. Call it software development karma. While I was at OSCON in Portland, the first hotel room where I was placed had massive problems connecting to the Internet. It was wired access, so there was something related to my room that was causing the problem. I endured several maintenance guys and several phone calls with the actual provider. You all know the drill intimately.

Anyway, at one point, it was declared "Fixed!", and I was instructed to point my faithful browser to the Internet. Lo and behold, Software Karma decreed that it was not to be. I got the following error, captured here in all its public glory.

pic of stack trace

Gaaaaaah! I now know waaaayyy more about their network infrastructure than I would like. They are using Tomcat and EJB's...to connect me to the Internet???!? I'm sure this is exactly the kind of application the EJB designers had in mind when they birthed this technology. Do we think that maybe this is total overkill? Couldn't the same be done with a simple web application backed by a database? Sigh. That's what I get for dabbling in evil -- sometimes it comes back to haunt you in the strangest places.

Sunday, August 13, 2006

Scumbag Spammers

If you have posted a comment to my blog lately, you've noticed that I've turned on the "Word Verification" feature of Blogspot. It's because of the scum of the earth, spammers. They've started posting spam comments (spamments?) to blogs. How clever. How annoying. How I hope they choke on their own vomit as they slide under a gas truck.

Saturday, August 12, 2006

Search Trumps Hierarchies

I wrote a while back about Pervasive Search, and how it changed the way I find things. I find myself using search more and more instead of navigating hierarchies. As developers, we tend to create lots of files, in strict hierarchical structures (in fact, I've been blogging about namespaces vs. packages recently as well). File system paths are now too cumbersome to endure. Instead of walking through Explorer or the tree in my IDE, I'm using search.

I use search at 2 levels. Within the IDE, I use the brilliant "Find File" feature in both IntelliJ and ReSharper (keyboard shortcut: Ctrl-N). This lets you type in the name (or partial name) of a file and open it in the editor. Better yet, it matches patterns of capital letters in names. So, if you are looking for the ShoppingCartMemento class, you can just type "SCM", and "Find File" will find it. Highly addictive. And it works equally well in IntelliJ and in Visual Studio with ReSharper (and my Eclipse friends tell me it has made it there as well).
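If you're curious how that style of matching works, here's a toy sketch of the idea in Ruby (my own illustration, not how IntelliJ or ReSharper actually implement it): each character you type must match the name in order, with any run of lowercase letters or digits allowed in between.

# Toy sketch of capital-letter ("camel hump") matching -- illustration only.
def camel_match?(query, name)
  # "SCM" becomes the pattern S[a-z0-9]*C[a-z0-9]*M[a-z0-9]*
  pattern = query.chars.map { |c| Regexp.escape(c) + '[a-z0-9]*' }.join
  !!(name =~ /\A#{pattern}/)
end

camel_match?('SCM', 'ShoppingCartMemento')   # => true
camel_match?('SCM', 'ShoppingList')          # => false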

The other place I've been using search a lot is the filesystem, when looking either for a file on which to perform some operation (like a Subversion log) or for some content within a file. Google Desktop Search has gotten better and better. You can now invoke it with the key chord of hitting Ctrl twice. And you can download a plug-in that allows you to search through any type of file you want, including program files and XML documents. Once you've found the file in question, you can right-click on the search result and open the containing folder. This is often the fastest way to get to some file buried deep in a package or directory structure. My coding pair and I have started using this heavily, and it has sped us up. And it eliminates annoying repetitive tasks like digging through the rubble of the filesystem looking for a gold nugget.

Thursday, August 03, 2006

Partial Classes

When I first saw that .NET 2 supported partial classes, I groaned. It looked like a language feature that helps one thing and hurts a dozen more, once people start abusing it. However, I've come around to appreciate (and, dare I say it, like) partial classes. They are obviously useful for code generation (which is, I suspect, why they were added in the first place). But they are also handy for other problems.

Testing is one place where partial classes offer a better solution than the one built into Visual Studio 2005. In VS.NET, if you want to use MS-Test to test a private method, the tool uses code generation (without partial classes) to create a public proxy method that turns around and calls the private method for you using reflection. This is not a big surprise; the JUnitX add-ins in Java help you do the same thing. But using code gen for this is a smell: if you change your private method, the generated reflection code isn't smart enough to change with it, so you have to run code gen again, potentially overwriting some of the code you've added. Yuck.

Here's a better solution. I should add parenthetically that I don't usually bother testing private methods (especially if I have code coverage) because the public methods will exercise the private ones (otherwise, the private methods shouldn't be there). However, when doing TDD, I sometimes want to test a complex private method, and partial classes work great for this. The example I have here is a console application that does some number factoring (why isn't important in this context). I have a method theFactorsFor() that returns the factors for an integer. Here is the PerfectNumberFinder class, including the method in question:

namespace PerfectNumbers {
    internal partial class PerfectNumberFinder {
        public void executePerfectNumbers() {
            for (int i = 2; i < 500; i++) {
                Console.WriteLine(i);
                if (isPerfect(i))
                    Console.WriteLine("{0} is perfect", i);
            }
        }

        private int[] theFactorsFor(int number) {
            int sqrt = (int) Math.Sqrt(number) + 1;
            List<int> factors = new List<int>(5);
            factors.Add(1);
            factors.Add(number);
            for (int i = 2; i <= sqrt; i++)
                if (number % i == 0) {
                    if (! factors.Contains(i))
                        factors.Add(i);
                    if (! factors.Contains(number / i))
                        factors.Add(number / i);
                }
            factors.Sort();
            return factors.ToArray();
        }

        private bool isPerfect(int number) {
            return number == sumOf(theFactorsFor(number)) - number;
        }

        private int sumOf(int[] factors) {
            int sum = 0;
            foreach (int i in factors)
                sum += i;
            return sum;
        }
    }
}

Rather than use code gen to test the method, I've made the PerfectNumberFinder class a partial class. The other part of the partial is the NUnit TestFixture, shown here:

namespace PerfectNumbers {
    [TestFixture]
    internal partial class PerfectNumberFinder {

        [Test]
        public void Get_factors_for_number() {
            int[] actual;
            Dictionary<int, int[]> expected =
                new Dictionary<int, int[]>();
            expected.Add(3, new int[] {1, 3});
            expected.Add(6, new int[] {1, 2, 3, 6});
            expected.Add(8, new int[] {1, 2, 4, 8});
            expected.Add(16, new int[] {1, 2, 4, 8, 16});
            expected.Add(24, new int[] {1, 2, 3, 4, 6, 8, 12, 24});

            foreach (int f in expected.Keys) {
                actual = theFactorsFor(f);
                for (int i = 0; i < expected[f].Length; i++)
                    Assert.AreEqual(expected[f][i], actual[i],
                        "Expected not equal");
            }
        }
    }
}

I like this because it allows me to test the private method without any messy code generation, reflection, or other smelly work-arounds. Partial classes make great test fixtures because they have access to the internal workings of the class but don't have to reside in the same file. It's dangerous to pile infrastructure on new features like this (especially scaffolding-type infrastructure like classes), but this one seems like a more elegant solution to the problem at hand than stacks of code generation.

Tuesday, August 01, 2006

Pontificating at OSCON

I gave a talk at OSCON last week on Building Internal DSLs in Ruby. Apparently, there is a fair amount of interest in this subject: I was in one of the small rooms, but it was packed to the rafters, with standing room only along the back and side walls. I didn't realize it, but John Lam took a snapshot of me in action and posted it to his blog. It's tough to get a good shot while someone is talking, so it shows that John is both a formidable Ruby/.NET guy and a talented photographer!

The Fact of the JMatter

Several years ago, some brilliant designers created Naked Objects, a Java framework that generates applications from domain objects. You supply the POJOs with behavior, point Naked Objects at them, and you have a full-blown Swing application that allows you to edit, insert, delete, and browse the objects and their relationships. You could literally create sparse, functional applications in minutes. However, Naked Objects never got much beyond a proof of concept. The automatically generated applications were utilitarian but uninspiring.

Fast forward to now. Eitan Suez, one of my fellow No Fluff, Just Stuff speakers, has taken the Naked Objects idea and run with it. He has created the JMatter framework (found here). It takes the concepts of Naked Objects and updates them to the here and now. JMatter applications still auto-generate from POJOs, but the user interface and interactions are very rich. The sample application that appears on the JMatter web site literally took less than 2 hours to create; written by hand, it equates to developer-weeks' worth of effort. It also illustrates a growing trend in development: creating framework and scaffolding code automatically, freeing developers to focus more on producing applications. We've seen this approach done well in Ruby on Rails. JMatter shows that you can apply the same concepts to Swing development. Eitan has released JMatter with a MySQL-style license, so it's worth jumping over to his site to get a preview of the future.

Friday, July 21, 2006

DSLing @ OSCON

I'm off to Portland, Oregon next week (my first ever trip to Oregon, so I can knock it off my travel map at World66), speaking at my first OSCON. I'm doing a talk on Building DSLs in Ruby, based on material that Jeremy, Joe, Zak, and I have produced for the Pragmatic Press book upon which we are (slowly) working. I'm also signed up for some pre-conference tutorials, including a 4-hour talk about VIM (I just had to see someone use VIM for 4 hours - I expect it to be quite impressive).

If you are in Portland, look me up. I speak on Thursday, and have some meetings on the other days, but mostly I'll be hanging around. A bunch of my No Fluff friends will also be there, so there may be some Magic games or even some Settlers of Catan.

Tuesday, July 18, 2006

Boy Scout Capabilities

I was having a conversation with a co-worker today whose first name prominently features the letter "Z". Our topic: how does a company like ThoughtWorks, which hires lots of experienced developers, determine at what level a given person should be hired? Some candidates are cut and dried: you can tell when you interview them. But what about the developers who fall through the cracks? Maybe someone is an ace developer in 4 languages but has never done agile. Or a great, militant Agilist who has never done test-driven development. As a company, we need to figure out 2 things: how to categorize these folks upon hiring and, more importantly, how to fill in knowledge gaps after they arrive. After all, the ultimate goal is to create well-rounded ThoughtWorkers who are good at all the things we value highly.

In talking about this subject, I came up with an idea I call the Merit Badge approach. Just like in the Boy Scouts, when a scout moved from one troop to another, you knew his rank instantly because of the merit badges he had acquired. Each merit badge had deterministic acceptance criteria, and you knew that the scout in question had mastered the badge criteria before moving to the next one. A certain number of badges, covering a certain set of areas, leads to increased rank. If a company like ThoughtWorks wants all Eagle Scouts, we must invest in our rookie scouts to enable them to get to that level. We should have technology merit badges. If we get a good candidate who knows everything but TDD, we should send them to a TDD training class or similar until they have mastered that skill. Advancement in the technical ranks becomes an exercise in acquiring useful skills. That keeps the process more objective and allows for clear ascension paths through the technical ranks. The People People can track the merit badges and recommend training and mentoring for the next milestone.

And, we'd all get to wear those cool sashes!

CJUG Redux

I'm speaking at the Chicago Java Users Group tonight (the Downtown one), giving my No Fluff, Just Stuff talk entitled The Productive Programmer, based on material from the book that David Bock and I are (slowly) working on for Pragmatic Press. It's completely technology agnostic, so if any .NET guys want to crash the party, feel free (sure to generate lively conversation). First-time attendees pay no dues or admission, so that makes this a really, really cheap date for you and your significant other.

Sunday, July 16, 2006

Ubiqui-GPS

GPS technology has suddenly gotten really cheap, and I've taken advantage of it in 2 big ways. First, I managed to get a GPS watch (with an arm-mounted GPS receiver) from woot.com for a great price, for urban running. It's so accurate that it provides miles per hour in real time while you are running. The other cool use of GPS is the updated version of Microsoft's Streets and Trips. This mapping software used to be a nice-to-have for road warriors; now it has become essential because it includes a small GPS receiver. You arrive in a foreign city with only the hotel address, punch it in, and you have turn-by-turn directions, spoken via your laptop's speakers, with the traced-out route on the screen. Having Streets and Trips on a laptop is better than having one of the little Palm-sized units because a) I'm taking my laptop with me everywhere anyway and b) the screen on the laptop is much bigger and nicer. The only downside is that you've got to be within a couple of hours of your destination or have a car adaptor for your laptop.

GPS has reached the point where it is cheap, available, and plentiful. My friend Scott Davis has a nice keynote presentation at No Fluff, Just Stuff this year in which he argues that location-based services will be very important in the near term. The combination of cheap GPS, mashup applications that leverage tools like Google Maps, and software's growing awareness of actual location suggests rich applications beyond what we've got now. If we can just get all this down to the phone level, the only thing left will be flying cars.