Monday, November 26, 2007
I was talking to my friend and fellow roaming-around-the-world speaker Jason Hunter a little while ago. He made the observation that the definition of distance has changed. He lives in San Francisco and I live in Atlanta, yet we see each other on a fairly regular basis. It's almost like we're neighbors, except that the common element is that we travel and work in similar places, not that we live next to one another. This is especially true with guys like Scott Davis and Venkat Subramaniam, whom I see about half the weekends of the year, always in a different city (driven by the No Fluff, Just Stuff schedule). I consider them my virtual neighbors. During the busy No Fluff, Just Stuff times, I see them more often (and more reliably) than my physical neighbors.
And it gets even more like living in the same physical neighborhood. The other day, I walked into the Red Carpet Club at O'Hare airport and heard someone call my name. Brian Sletten was waiting for a flight, and we sat and chatted for a while. What's funny is that Chicago is home for neither of us, and we weren't even there for the same reason; we just happened to be in Chicago at the same time. Just like you bump into your down-the-street neighbor at the hardware store, I bump into my virtual neighbors in random airports.
If you travel as much as I do, I guess this is inevitable. As a company, ThoughtWorks travels a lot. I've bumped into co-workers in airports twice in cities that were home to neither of us, when we weren't even going to the same place. The world is indeed smaller.
Wednesday, November 21, 2007
JRuby Podcast on JavaWorld
My good friend Andy Glover interviewed me for a podcast for the JavaWorld site recently, and it has magically appeared. Here is the site blurbage:
Neal Ford and Andrew Glover are both well respected Java developers, as well as big fans of Ruby. In this in-depth discussion, Ford talks about why he believes Ruby is the most powerful language you could be paid to program with today, and explains the particular benefits of programming with JRuby. Ford also reveals why he believes Java developers will continue to migrate to languages other than Java, even as many continue to call the Java platform home. This is an essential, engaging discussion for those interested in learning more about JRuby and the trend toward what Ford calls polyglot programming.
It was a lively conversation, and Andy asked me about lots of stuff I've been thinking about a lot lately. As in all good conversations, the time flew by, and before I knew it, the guy recording it was shutting us down.
Thursday, November 15, 2007
Ruby Matters: Frameworks, DSLs, and Dietzler's Rule
As an industry, we've been engaged in an experiment for the last decade or so. This experiment started back in the mid-to-late '90s, largely driven by the fact that the demand for software vastly outstripped the supply of those who could write it (this wasn't a new problem then -- we've had it almost since the idea of business software started). The goal: create tools and environments that would allow average and/or mediocre developers to be productive, regardless of the messy facts already known by people like Fred Brooks (see The Mythical Man-Month). The reasoning goes that if we create languages that keep people out of trouble by restricting what damage they can do, we can produce software without having to pay those annoying software craftsmen ridiculous amounts of money (and you'd probably never be able to find enough of them even then). This thinking gave us tools like dBASE, PowerBuilder, Clipper, and Access: the rise of the 4GLs.
But the problem was that you couldn't get enough done in those environments. They created what my colleague at the time, Terry Dietzler, called the "80-10-10 Rule" for Access: you can get 80% of what the customer wants in a remarkably short time. The next 10% of what they want is possible, but takes a lot of effort. The last 10% is flat out impossible because you can't get "underneath" all the tooling and frameworks. And users want 100% of what they want, so 4GLs gave way to general purpose languages (Visual BASIC, Java, Delphi, and eventually C#). Java and C# in particular were designed to make C++ easier and less error prone, so they built in some fairly serious restrictions, in the interest of keeping average developers out of trouble. The problem is that they created their own version of the "80-10-10 Rule", only this time the stuff you couldn't do was much more subtle. Because they are general purpose languages, you can get pretty much anything done...with enough effort. Java kept bumping into stuff that would be nice to do but was way too much work, so frameworks were built. And built. And built. Aspects were added. More frameworks were built. It got so bad that meta-frameworks were built: the Avalon framework was a framework for...building other frameworks!
We can see what this trend has done to productivity when building complex software. What we really want is the productivity of 4GLs with the generality and flexibility of powerful general purpose languages. Enter frameworks built with Domain Specific Languages, the current exemplar being Ruby on Rails. When writing a Rails application, you don't write that much "pure" Ruby code (and most of that is in models, for business rules). Mostly, you are writing code in the DSL part of Rails. That means that you get major bang for the buck:
validates_presence_of :name, :sales_description, :logo_image_url
validates_numericality_of :account_balance
validates_uniqueness_of :name
validates_format_of :logo_image_url,
                    :with => %r{\.(gif|jpg|png)}i
You get a huge bunch of functionality with this little bit of code: 4GL levels of productivity, but with a critical difference. In a 4GL (and in the current mainstream statically typed languages), it is cumbersome or impossible to do really powerful stuff (like meta-programming). In a DSL written on top of a super powerful language, you can drop one level of abstraction to the underlying language to get done whatever you need to get done.
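To make that concrete, here is a minimal sketch (my own illustration, not Rails' actual implementation) of how a class-level macro like validates_presence_of can be built with plain Ruby meta-programming -- exactly the level you drop down to when the DSL runs out:
module Validations
  # A hypothetical, stripped-down validation macro. The "DSL" is just a
  # class method that uses define_method to write an instance method for you.
  def validates_presence_of(*attrs)
    define_method(:valid?) do
      attrs.all? do |attr|
        value = instance_variable_get("@#{attr}")
        !value.nil? && value.to_s.strip != ''
      end
    end
  end
end

class Account
  extend Validations                 # mix the macro in at the class level
  attr_accessor :name, :logo_image_url
  validates_presence_of :name, :logo_image_url
end

account = Account.new
account.name = 'Acme'
puts account.valid?                  # => false until logo_image_url is also set
The macro itself is ordinary Ruby; nothing stops you from writing your own when the framework's vocabulary doesn't cover your case.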
This is the best approach currently available. The productivity comes from working close to the problem domain in the DSL; the power comes from the abstraction layer simmering just below the surface. Expressive DSLs on top of powerful languages will become the new standard. Frameworks will be written using DSLs, not on top of statically typed languages with restrictive syntax. Note that this isn't necessarily a dynamic language or even Ruby tirade: strong potential exists for statically typed, type-inferred languages with suitable syntax to take advantage of this style of programming as well. For an example, check out Jaskell and, in particular, the build DSL written on top of it called Neptune.
Saturday, November 10, 2007
My Horse Scale of SOA
I've been giving some SOA talks over the last few years, and I struggled for a while to find a good metaphor to describe the evolution from most people's existing enterprise architecture to the magical, mysterious enterprise architecture described in most of the marketing material around SOA. Then, during one of my talks, I stumbled upon it, and later created an image that sums it up: Neal's Horse Scale of SOA:
You see, the marketing literature describes something that doesn't exist in the real world: they are describing a unicorn. You've seen paintings, drawings, and movies featuring unicorns. If you came from another planet, you would assume that unicorns lived here because there are so many representations of them. The problem is that most companies' enterprise architectures look more like a broken-down donkey. The SOA experiment is to see how close you can get to a unicorn before you run out of money. Maybe you'll get to a Shetland pony and stop. Or perhaps you'll make it all the way to a thoroughbred racehorse. There are even a few companies that will create unicorns, but they are exceedingly rare.
The point is that you can't trust the magical vision marketed by pundits and (especially) vendors. Building unicorns is expensive, and the more donkeys you have around, the more it will cost. SOA isn't a zero-sum game. It should be a spectrum of improvement in the communication and interoperability between all your disparate equines (i.e., applications and services).
Tuesday, November 06, 2007
Language Spectrum
It came up in a conversation the other day: which programming language would I use, absent messy constraints like "must make money to continue to eat"? I think the list would look something like this, from most preferred to least:
- Ruby (I'm quite fortunate that I'm getting to use this language for money right now)
- Lisp (I've never gotten paid to write Lisp, but would like to)
- Smalltalk (note that I've never done "real" Smalltalk development, but I know about its cool features)
- Groovy
- JavaScript
- Python
- Scala
- Java or C# (or any other mainstream statically typed language)
- Java or C# (or any other mainstream statically typed language). Interestingly enough, I think C# has the edge on language features (the new stuff they're adding for LINQ, and not doing stupid stuff like type erasure for generics), but the libraries are awful. Java the language is getting really crusty, but it has the best libraries and frameworks in the world (and the most of them, too). If you could write C# code with Java libraries, you'd really have something. Of course, both are still statically typed, so you have to pay the static language productivity tax.
- Boo
- Haskell
- O'Caml
- Perl
- Language_whose_name_I_cant_write_here_because_all_filters_in_the_world_will_block_it
- Cobol (I've never done any real development here either, and don't plan to)
- assembler
- Jacquard Loom (whatever that language looks like)
- Flipping switches for 0's and 1's
- Universal Turing machine (infinite paper strip with a read/write head that moves forwards and backwards). It's just hard to find infinitely long paper strips these days.
Clearly, this represents my relatively recent evolution towards dynamically typed languages. They are simply much more productive if you assume that you write tests for everything, which I always do. Notably absent from the list is Delphi, which is so yesterday's news to me. It became deprecated as soon as C# grew all of its good features and left it behind.
This doesn't mean that I think that Ruby embodies the perfect language (haven't seen one of those yet). But, given the landscape, it feels pretty good, and I keep learning cool new stuff about it.
Thursday, November 01, 2007
Building Bridges without Engineering
One of the themes of my "Software Engineering" & Polyglot Programming keynote is the comparison between traditional engineering and "software" engineering. The genesis for this part of the talk came from the essay What is Software Design? by Jack Reeves, from the C++ Journal in 1992 (reprinted here), a fissile meme that Glenn Vanderburg tossed into the middle of a newsgroup conversation about that very topic. Even though the essay is quite old, it is every bit as pertinent today as when it was written. The update that Glenn and I have given this topic is the addition of testing, which gives us professional tools for designing software.
We don't have the kinds of mathematical approaches that other engineering disciplines do. For example, we can't perform structural analysis on a class hierarchy to see how resilient to change it will be in a year. It could be because those types of approaches will just never exist for software: much of the ability of "regular" engineers to do analysis comes from economies of scale. When you build the Golden Gate Bridge, you have over one million rivets in it. You can bet that the civil engineers who designed it knew the structural characteristics of those rivets. But they are a million identical parts, which ultimately allows you to treat them as a single derived value. If you tried to build a bridge like software, with a million unique parts, any kind of analysis would take too long because you can't take advantage of that scale.
Or it may just be that software will always resist traditional engineering kinds of analysis. We'll know in a few thousand years, when we've been building software as long as we've been building bridges. We're currently at the level bridge builders were at when they built a bridge, ran a heavy cart across it, and it collapsed: "Well, that wasn't a very good bridge. Let's try again." There was a massive attempt at component-based development a few years ago, but it has largely fallen by the wayside for everything except simple cases like user interface components. The IBM San Francisco project tried to create business components and found (to the non-surprise of software developers everywhere) that you can't build generic business components because there are far too many nuances.
Manufacturing is the one advantage we have over traditional engineers: it is easy and cheap to manufacture software parts. So why not take advantage of that ability? Manufacture the parts of our software, both the small atomic pieces and the larger interacting pieces, and then test them to make sure they do what we think they do. That's unit, functional, integration, and user acceptance testing. Testing is the engineering rigor of software development.
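As one small example of what that rigor looks like at the atomic level, here is a minimal unit test using Ruby's built-in test/unit (the Invoice class is hypothetical, just a "part" to manufacture and verify):
require 'test/unit'

# A hypothetical manufactured part whose behavior we want to verify.
class Invoice
  def initialize(amount)
    @amount = amount
  end

  def total_with_tax(rate)
    @amount * (1 + rate)
  end
end

class InvoiceTest < Test::Unit::TestCase
  def test_total_includes_tax
    invoice = Invoice.new(100.0)
    # assert_in_delta sidesteps floating point rounding noise
    assert_in_delta(107.0, invoice.total_with_tax(0.07), 0.001)
  end
end
Run the file and test/unit exercises the part and reports whether it does what we think it does; the same idea scales up through functional, integration, and user acceptance tests.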
Here's the interesting part. If you told an engineer that you needed a large bridge, and that you needed it so quickly that he wouldn't have time to apply any of the best practices of bridge building (e.g., structural analysis), he would refuse. In fact, he would be liable for the bad things that would happen if he were foolish enough to proceed. We have none of that liability in the software world.
Responsible software developers test, just as responsible engineers use the tools of their trade to create robust, well designed artifacts. But we still have too much stuff that is untestable, along with pressure to write code that isn't tested because testing takes time. One of my litmus tests for deciding how to spend my time looking at new things (frameworks, languages, user interface approaches) is the question "is it testable?" If the answer is no (or even "not yet"), then I know that I needn't bother looking at it. It is professionally irresponsible to write code without tests, so I won't do it.