Monday, December 17, 2007

Something I didn't fully grok until just now

The Next Karate Kid is really, really awful. What a turd.

Thursday, December 6, 2007

Rapture

I haven't blogged anything about Bioshock, but for the record I did enjoy it very much, even though I found it a bit too easy and the DRM really pissed me off.

Eurogamer has this article which tries to defend Bioshock from the backlash that followed its release. It's a great read, but it doesn't get around to one of the problems which I think underlies the entire game review scene: hype.

Things have gotten bad enough that reviews for games aren't really informative anymore. Halo 3, which I thought was good but nowhere near great, piled up dozens of perfect scores and drooling raves from reviewers. Any game that's hyped as hard as Halo 3 was (and Bioshock too) is going to have awesome reviews that don't really map to the quality of the experience for most gamers. Publishers can hype games almost to the point that the hype distorts reviewers' critical thinking -- and the Gerstmann firing, which stinks to high heaven, suggests that at least one publisher holds the review industry in such contempt that it can have people offed who don't toe its line.

So what is to be done? I don't know.

Tuesday, December 4, 2007

Aperture Science

We do what we must
Because we can.

Um. Sorry. Anyway, past time to mention Portal.

There's really not much I can say beyond how awesome it is. You've heard that from others by now, but I'll just add my insignificant voice to the chorus.

Bad points:

  • Too short. Your first runthrough (there will be others) takes maybe three to four hours. Once you know the puzzles, you can speedrun the thing in maybe an hour. MOAR.
  • Too easy. The extra puzzles you unlock after beating the main game are pretty hardcore, and the developer commentary (a feature that should be in EVERY game) explains that Valve were terrified of confusing the hell out of players; they were, after all, breaking new gameplay ground. Hopefully the deliriously joyful response will encourage them to take more risks.

Good points:
  • Genuinely clever and funny.
  • Addictive and really hard to stop playing.
  • Simple yet pleasing art and level design.
  • An honest attempt at something new.
  • Jonathan Coulton.
  • Voice acting -- Ellen McLain does an amazing job.
  • A bargain at $20.

Basically, if you haven't played it, shell out the $20 for it and enjoy the hell out of it.

Rating: ***** (5/5); if I can't give five stars for an experiment in new gaming that succeeds beyond all expectations, I should just stop playing games entirely.

Friday, November 16, 2007

Emacs font joy

Quickly:

  • The neato Android font is actually a very readable and pleasant coding font.
  • Great -- how do I use it for Emacs on my Ubuntu? Oh. AWESOME.

Thursday, November 15, 2007

ApacheCon Day 1

It looked like today was going to be another snorefest, but...

After a very satisfying breakfast (they know how to eat down here, that's for sure), it's off to Matt Raible's now-classic Comparing Java Web Frameworks talk. Aside from knowing his shit, the guy gives a good presentation. He understands that powerpoints aren't about cramming half your doctoral thesis onto 75 slides. While I've met lots of smart people here and learned a great deal, I wish more of them would try to see their presentations from someone else's point of view.

It was then off to the Georgia Aquarium, which bills itself as The Largest And Most Awesomest Such Place On Earth. Well, it is pretty cool. Beautiful tropical fish, whale sharks, electric eels, and sea otters. The smaller Asian variety of the latter was the best part; they were playfighting, dragging each other into the water and smacking one another upside the head. It wasn't aggressive to my untrained eye, but it was sure as hell cute.

(Do I feel bad about skipping a morning session? No. I've been stuck in the hotel the entire week with nary a peep of the outside world.)

Tasty vegetable soup for lunch. Bjorn had a rather worse time, but he can tell that story.

Lots of talk about community and how to strengthen it in the afternoon. Henri's talk on how to join OS projects was nicely bookended by Ted Leung's talk on open source antipatterns. Theme: how the ASF does stuff (the open source community process) is more important than what it produces. Getting people to work together in a respectful and productive environment is a skill that's universally applicable.

We're all done after tomorrow.

Wednesday, November 14, 2007

ApacheCon Day 0

My headache *still* won't go away.

Several sessions today; some good, some not. I found Dave Johnson's talk on Roller pretty informative, and Greg Stein's talk on open source licensing was gratifying (although it was pointed out that he knew 3/4 of the people in the room and he was shamelessly preaching to the choir). Talking to him afterward, I learned that he doesn't know of anybody other than me who is active in both the FSF and the ASF. I've gone into this stupid schism before, but it still surprises me that I'm mostly alone in talking about how important it is for the two largest free software communities to find more common ground. How is it not obvious to other people, especially the ones smarter than me?

Other random highlights: chatting up someone in French, shocking them; the keysigning party, which attracted way more photographers than I was expecting (i.e., a number greater than 0); several enjoyable and informative chats with Google people; taking a close look at Abdera; Doc Searls' interesting and amusing keynote. I'm glad that some other people try to inject humor and entertainment into powerpoints, which are man's latest and most promising result in the search for something to bore others to death.

In between sessions, Bjorn and I are going to sneak off to the aquarium, which I've heard is something I need to see. Apparently whale sharks are cool.

ApacheCon Day -1 (Hackathon Day 1)

Well, if you have a productivity binge, you're likely to fall off the next day. And I did. Feeling sick doesn't help either.

On the plus side, I did get the chance to research a lot of Apache projects. And the Android SDK is sitting on my hard drive giving me the most seductive come-hither eyes. Why do I see myself spending lots of time in there?

I did meet Glen Daniels of WSO2, who's a nice guy, so I have to mention that.

More later; busy day lined up.

Tuesday, November 13, 2007

Easy hydrogen?

From Wired comes the story of a group of Penn State researchers who have developed a method to encourage bacteria to extract hydrogen gas from ordinary biodegradation.

Aside from being cool, it offers me a chance to say what I've become convinced is the greatest long-term problem for humanity: energy. You may have noticed the price of gas skyrocketing. While part of the reason is the idiocy in the Middle East, many people don't realize that the U.S. imports most of its oil from Canada and Mexico, not the Persian Gulf. The prices are climbing for us not because our suppliers are running dry but because demand for oil is rising rapidly worldwide. China's economy, which is growing almost 10% per year, is industrializing at a mind-boggling rate; India isn't far behind. With almost 2.5 billion people in those two countries alone, the demand for oil and coal has only started to climb and is nowhere near peaking.

As most people who aren't deeply and pathologically insane will admit, fossil fuels are limited resources. Oil is the most popular source of energy simply because it is the most profitable for those who control it. Up to this point in human history, the use of oil has had a positive energy coefficient; it takes less energy to get it out of the ground than it produces when combusted. As we use more and more oil, it will become harder to find, and we'll have to dig deeper, invent new technology, distill it from shale, etc. The common problem is that all of these approaches require more energy than simply building a derrick on an easy-to-tap oil reservoir a few hundred feet down. At some point -- no one agrees exactly when -- the energy coefficient will degrade to break-even, and it will no longer be useful to extract the remaining oil.
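
In rough symbols (my own shorthand here, not anybody's official metric), the energy coefficient is just the ratio of the energy you get back to the energy you spend getting it:

    R = \frac{E_{\text{recovered}}}{E_{\text{spent}}},
    \qquad
    \begin{cases}
        R > 1 & \text{worth extracting (today's easy reservoirs)} \\
        R = 1 & \text{break-even} \\
        R < 1 & \text{a net energy loss}
    \end{cases}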

At that point, we are, to put it bluntly, fucked.

Hydrogen is one of the possible solutions. It's extremely powerful stuff; when combusted, it provides much more energy than an equivalent weight of oil, and if we ever figure out how to do fuel cells cheaply, a ready supply of hydrogen would go a long way toward solving small-use energy needs. The discovery of a process that produces hydrogen from simple organic matter without requiring other energy could be, if it pans out large-scale, a long step toward avoiding oil judgment day. Hydrogen as a fuel source has problems, of course; it's unstable, it's hard to transport and store, and the technology to use it in car-sized engines is still at the prototype stage. But if we could be assured of all the hydrogen we ever wanted, figuring out how to make its use safe and efficient would be just an engineering problem. That's a problem we can solve.

At some point, I'll talk about why nuclear power is the best hope for the world's large-scale energy needs. I expect to piss off a lot of people when I do that.

Monday, November 12, 2007

ApacheCon Day -2 (Hackathon Day 0)

First time at ApacheCon, and things are getting off to a quieter start than I'd expected. I had Apache nerds pegged as a tamer, calmer sort than IT or video game nerds (the stories about the E3 Tecmo parties have sent many a frantic fundie running off in terror), and the day in the Peachtree Ballroom was all business.

I don't recognize half of the people here. Most of them seem to know each other, and as the day slipped into night (POETRY, IT IS), the conversations got a little more animated and for some reason turned to economic policy. I kept my mouth shut for most of the time, figuring that I had enough opportunities to look like a dolt this week and that there was no need to burn through all of them on the first day.

I'm happy to say that much productive work was done; I updated several Validator issues and then Henri, Bjorn and I powered through the 2.4 outstanding issues. Benevolent soul that he is, Henri pulled down ten enhancements from 3.0 that he thought made sense for 2.4, and we spent most of the night going through those. It's a pile of work, and I'm proud of it, even though my contributions were mostly limited to formatting patches and giving a +1 rubber stamp to Henri's mad committer antics.

The trip has not agreed with me; I feel bleary and sick in my stomach. I haven't eaten poorly unless you count some disappointing Merlot the fine folks at devzuz were kind enough to provide us. Hopefully I'll get some good sleep tonight, have another awesome Southern breakfast, and then squeeze the sponge for a few more drips of Open Source commitment.

Seriously, I'm glad I'm here. I think the best is coming.

Monday, October 29, 2007

Guitar Hero III quick hits

Rather than writing a long and possibly sucky review, I'll just go with the important points. (I got the 360 version, if it matters to you.)

High fives

  • The new wireless guitar controller is a big improvement. Yes, wireless is always better, but this baby has a more solid feel to it and a stout, thick neck that is actually a reasonable facsimile of a real Les Paul (one of which I happen to own). Much more comfortable to play and hold, and connects to the 360 with zero problems.
  • Track list is well-rounded. There's something for everybody, and the bonus songs aren't writeoffs.
  • It looks really good in HD.

Turn your head and cough:
  • Battle mode is fucking retarded. You earn powerdowns by hitting Star Power-like streaks and throw them at the other player. They range in effectiveness from mildly annoying (upping the difficulty level of the song) to bullshit and cheese sandwiches (double note, lefty flip, etc.). The game randomly picks which powerdown you get, introducing a very unwelcome element of luck into the nice pure game of skill. To make things even better, the three battles you're forced to do in career mode interrupt the flow; the AI never misses notes unless it's suffering a powerdown, and it always seems to know when to hit you with one. Frustrating as hell and zero fun. Fuck you, Neversoft.
  • Art direction. There's a lot of color in this game. A LOT. Venues are bigger but not better; blinding wouldn't be a bad description. The character art ranges from silly to ridiculous, and not in the fun way. A new Japanese girl rocker wears eye-hurting shades of intense pink or green. Judy Nails has been turned into some kind of ugly-ass goth wannabe. Casey's now an anorexic Lindsay Lohan lookalike. Lars' spikes and shoulder pads are now bigger than he is. While Harmonix knew how to design for humor and silliness while still retaining a subtle sense of taste, Neversoft has adopted "louder is better" and thrown any sense of proportion to the winds. It's just lame.
  • Co-op career is a nice touch, but why the hell can't we do it online?
  • We're on Guitar Hero game #4, and pausing STILL causes a split-second freeze in the middle of the song? Who thought this was a good idea? It makes pausing worthless, since you're forced to miss notes unless you're in a dead spot in the song. Would it kill them to give you a two-count back in or something? Of all the shit Neversoft chose to fuck with, they decided to leave in this idiocy.
  • Is it really necessary to throw big wiggly "50 Note Streak!" and "100 Note Streak!" text at us while we're playing? All it does is distract you. Basic rule of interface here, people: a popup is designed to interrupt the workflow and get the user's attention. This is a GAME -- his attention needs to be on the note chart and should not be interfered with. If the user wants to know what his streak is, he can see it under his score. The popups don't need to be there.
  • The approach is starting to get a bit old. The next game needs some serious rethinking. Maybe Rock Band is going to be that game, but since Guitar Hero is now a cash cow, Activision's interest in tweaking the game (and possibly annoying the legions of Don't-Change-It-Ever) is probably close to zero.

Well, who am I kidding, really? When I'm not yelling at the screen for fucking with me in another battle, I'm enjoying the hell out of this game. If you're a Guitar Hero fan, it's a no-brainer purchase. If you're not one yet, it's still a hard purchase to argue against, especially if you want a good controller to start with.

**** (4/5, irritating flaws and lack of anything really new remove perfection from its desperate clutch)

Sunday, October 7, 2007

Shameless plug

I recently managed to maneuver Commons Email 1.1 through the release gauntlet, and since it's the first thing at Apache that I've taken all the way to the finish, I feel the need to announce it here. P.R. is best when it comes from completely compromised sources.

If you've ever wanted to do anything with email in Java, you owe it to yourselves to check it out and see if it helps. Bugs, feature requests, angry letters, and invitations to box socials are all welcome.
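
For the lazy, here's roughly what sending a plain-text message looks like with 1.1. This is a minimal sketch; the host name and addresses are made up, so check the project docs for the real details.

    import org.apache.commons.mail.EmailException;
    import org.apache.commons.mail.SimpleEmail;

    public class EmailDemo {
        public static void main(String[] args) throws EmailException {
            // SimpleEmail covers plain-text messages; HtmlEmail and
            // MultiPartEmail handle the fancier cases.
            SimpleEmail email = new SimpleEmail();
            email.setHostName("smtp.example.com");   // your outgoing SMTP server
            email.setFrom("me@example.com");
            email.addTo("you@example.com");
            email.setSubject("Commons Email 1.1 is out");
            email.setMsg("Bugs, feature requests, and box social invitations welcome.");
            email.send();                            // hands the message off to the SMTP server
        }
    }

That's the whole ceremony for the common case, which is rather the point.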

Wednesday, October 3, 2007

halo3.moveon.net

I just can't let it go.

Yahtzee's latest Zero Punctuation, to no one's surprise, takes aim at Halo 3. While it's gratifying to see that someone who actually gets paid to blather about video games takes a stance quite similar to mine, it also reinforces the lonely realization that the determined few of us who dare to question the orthodoxy of Halo 3's Supreme Greatness are not going to be seen as the cold-blooded geniuses we are for some time, if ever.

Aside from the world-against-us feeling, which is always invigorating, what do I care if some basement-dwelling fanboys rip anyone who suggests Halo 3 has not been the best game ever conceived and delivered to the sweaty hands of man?

Part of the problem is that 3's multiplayer, which apparently was what I was supposed to buy and use nonstop, has become an excuse for the shortcomings of the singleplayer campaign. This annoys the hell out of me. Several people have tried to explain this away, taking a few different tacks:

  • "Oh, come on, Halo's really about the multiplayer anyway. No one cares about the singleplayer". Funny, that's not what the pre-release hype was all about. Anybody remember "Finish the Fight"?
  • "You have to understand the backstory from the first two games in order to really appreciate what happens in the third". Weak. This is the first Halo game on the 360, and there are going to be people who haven't played 1 or 2 that want to try 3 just to see what all the fuss is about. Are we really expecting those people to play 20 hours' worth of backstory before we pronounce them ready to enjoy the current game? Why not craft a compelling story that can stand on its own but also rewards longtime fans? Valve does this with Half-Life, Bethesda does it with Elder Scrolls, and even Bungie did it with Halo 2 (a game that has been retroactively rising in my esteem these days). This argument smacks too much of navel-gazing comic book nerds who yell at you if you don't read all your comics in chronological order, and it deserves no more respect.
  • "At least they tried to do a story, unlike some other games that don't even pretend to". They damn well better have a story, since Bungie has always had decent stories with their games (going back to Marathon and Durandal). They spent a lot of time in 2 developing characters, story arcs, and deepening considerably our understanding of the Covenant, the Arbiter, the Marines, and the Flood. Fanboy reaction to this was admittedly negative, so Bungie seems to have decided that time spent on the story was wasted since gamers specifically rejected that part of 2 while embracing the multiplayer. No, Bungie doesn't get points for trying here; they took the safe route and tried not to piss anyone off. When you do that, you get lackluster, uninspired gaming, no matter how pretty it looks.

Some game companies *coughidcough* have in the past succumbed to the temptation to write a multiplayer game, stick some bots into the multiplayer levels, and call the result a singleplayer game. This is the degenerate case of a singleplayer game -- a quick and dirty hack on top of the multiplayer that exists solely so that the publisher can check off the "Features single player"/"No Internet connection required" bullet point on the Blockbuster Game Feature List™. Another example is Unreal Tournament 2004; it's one of my favorite shooters, but it made little pretense of being a compelling singleplayer experience. Halo 1, by contrast, was specifically NOT about being a multiplayer game. In the days before Xbox Live, multiplayer was confined to splitscreen on usually tiny standard-def TVs. While it was fun for a group of friends in the same room, it sure wasn't any kind of "new standard" for other games.

Finally, I guess I'm just personally disappointed. I preordered 3 months in advance, I went to the store before midnight to get it, I took the day off work -- hell, Bungie (who all else aside are really awesome people who love their fans and try their damnedest to make them happy) came out in a party bus to sign autographs, shake hands, and dodge questions about Halo 4. I have to say it was one of the best experiences I've ever had at a public video game event, if not the best. It's sad that the final product didn't live up to my expectations, but if the game makes $170 million in one day and scores dozens of perfect reviews from otherwise legitimate game sites, making me happy has got to be at the bottom of anyone's list.

P.S. I'm not ending on a "poor lil ol' me" kick here; it's my problem, I'm dealing with it, and video games are ultimately a stupid and pointless waste of time anyway. I may be emo over this, but I haven't lost touch with reality just yet. (Unlike those dumb bastards who forked over US$130 for the Super-Special Legendary Collector's Edition With Cat-Sized Helmet and Worthless Bonus Discs. HA-ha!)

Saturday, September 29, 2007

Interface21 Followup

Since the conversation has become public, I think it's only fair to mention that Rod and I had a pleasant and productive chat on Friday about the issues raised in our back-and-forth. He graciously apologized for his tone and I repeated that I had a lot of respect for him.

We're going to look into how I might contribute to Spring and what might be possible in bringing our two companies closer together on common interests.

Rod lived up to his reasonable and knowledgeable reputation, and I'm happy that whatever unintended rancor may have arisen, it is not an obstacle to working together for the benefit of all.

Now where did I put that Kumbayah MP3...

Wednesday, September 26, 2007

Good things sometimes come to those who sit patiently: Halo 3 review

Having finished Halo 3 a scant few minutes ago -- and having taken yesterday off work in order to play uninterrupted by annoying responsibilities -- I figure this is a great time to write that crappy game review I've always wanted to write. So here goes!

Warning: There be spoilers ahead. If you care, come back here after you get a life.

Go ahead, I'll wait.

Hmm.

Everybody else ready? Let's begin.

I see no real need to start off in the standard review fashion of rehashing the game's history, the previous two entries in the series, or the jaw-dropping amount of money it's going to make. You know all that anyway if you're reading this. If you don't, allow me to summarize for you:

Microsoft buys small game developer, invests tons of money, and game developer creates better-than-average first-person shoot-em-up on Microsoft's somewhat dodgy console, singlehandedly legitimizing its existence.

As I write this, Metacritic has collected 33 "professional" reviews of the game, none of which I have read. As Bioshock did last month, the game is causing these esteemed journalists to soil their pants in the sort of way that would have earned them sound ass-thrashings for their incontinence in high school. I have to agree with Yahtzee's comment on how the insane and ridiculous level of hype for these games eventually backfires when you realize that, well executed and beautiful as they are, after beating them you're still not going to be any more attractive to hot women who currently go out of their way to avoid breathing your stink.

You see, before I say one word about the game I'm already slightly annoyed by it. Whether it's rational or not, it's a bias and therefore should be up front in any review that claims to be objective. On to the real deal.

Visually, Halo 3 is impressive, but not new. On my dying 27" standard definition TV, the water effects are cool, the lighting has gotten better, and the sheen is applied liberally to all surfaces. While many gamers are really excited by these kinds of environmental effects, I really don't care that much. The game is what matters. I doubt anyone will be disappointed by what's here, but it's not the Twenty-third Coming (or whatever we're up to for game messiahs). The art direction has again been pushed forward from Halo 2, as that game's was from the first one: new ships, new weapons (though some are redundant, like the fuel rod gun/rocket launcher, and we see far too little of the flamethrower), and what seems like a grim determination not to repeat settings from the previous games.

Halo 1 fans no doubt remember dragging themselves through gray corridor after gray corridor after gray corridor, except sometimes when they were purple. Precious little of this appears in 3, and it's not missed. Yet there are fewer large open spaces than in 2. The number of vehicle segments has been scaled back as well; you only get one go with the Scorpion tank, a couple of brief periods in the super-jetpack Hornet, and a few Warthog outings. Eventually the variety and lack of cohesion in the play begin to feel a bit forced, as if the novelty is what you're supposed to care about rather than solid, flexible play.

The story is weaker this time around. Halo 2 offered the innovative device of switching back and forth between the Arbiter's story and the Chief's story. In addition to offering two distinct styles of gameplay, this also helped the story by telling it simultaneously from two viewpoints. One gets a rich view of the Covenant world and mindset through the Arbiter's eyes that the Chief would never have a chance to understand. 3 abandons the switching and tells the story entirely from the Chief's perspective, relegating the Arbiter to an AI NPC that follows you around and basically distracts the big evil from focusing solely on you. It feels unsatisfying to remember how much time was spent creating the Arbiter's character and then see him turned into a walking turret. The final confrontation between the Arbiter and the Prophet of Truth can't help but feel like an anticlimax.

A few other things in the story annoy me. So first Guilty Spark finds you, apologizes, helps you through most of the story, and then suddenly turns on you just before his role in the story has to end? I knew he'd be useless in the story once the new Halo was activated, and so I knew they'd have to get rid of him at some point before that happened. When you get to the control room, the dialogue is so predictable you'll be able to finish their sentences. Apparently they knew they needed Spark to activate the Halo, but didn't bother to find a more believable path in the story. Then there's the Flood. Truth tries to activate the rings, so the Flood help you. The second the rings are off, the Flood turn on you. And who didn't see this coming? The Arbiter and Chief look around, their body language thick with anger at the betrayal. Don't they remember being sent back and forth by Gravemind in Halo 2? Wouldn't they expect some kind of underhanded dealings? Commander Keyes also shows up just in time for her convenient extermination. The plot seems to lurch from point to point without much thought on how to get there.

Finally, I'm glad I sat through the credits to watch the end cinematic. I was all ready to rip Bungie a new one for killing the player offscreen at the end of the game, but they managed to save it by exiling him in the middle of nowhere while setting up the inevitable Halo 4. Regardless, the ending is just unsatisfying in a way that's hard to describe. The war, which began as a struggle against the Covenant, turned into a war against the Prophets, who were then killed, which then turned into a war against the Flood, who were then wiped out. We never get to see the real effect of the war on Earth, nor the personal toll borne by those who survived. There can't be, anyway, since every important character other than the Chief, Cortana, and the Arbiter gets whacked. What's left to care about? Why am I supposed to get teary-eyed when this Lord Hood guy gives a weak Gettysburg Address paraphrase on a dirty hill?

To the next point: The AI is just godawful. Countless times I was manning a turret and gleefully plugging infinite ammo at the bad guys when my own squad would calmly and without hesitation walk directly into my line of fire. Of course I'd cut them down before I could stop firing, so it got to the point where I had to make sure I was shooting over their heads. This was almost always too high to hit the bad guys. At times in the game I saw soldiers jumping on top of boxes or sliding down a hill to get to a battle. So why is it so hard to code them to go around a friendly turret or at least crawl under it? It's not like it's advanced soldiering you only learn after five years in the freaking army! Enemies sometimes fly into battle on slow-moving rocket packs, making perfect targets for the sniper rifle. Brutes will drop bubble shields and then walk out of them, making it easy to pick them off. Snipers will give away their positions while still too far away to hit you with any accuracy. Grunts seem not to understand the concept of cover. There were too many lapses to list, and all of them together add up to a constant annoyance during play.

The point about thumbstick control being inferior to the mouse and keyboard has been made before, so I'll just amplify here that it does in fact suck and that the mouse/keyboard is in fact better. The major culprit is not being able to swing your aim fast enough or rotate your view at a comfortable rate. This problem isn't really fixable, so no need to hash it out further, but it does suffer by comparison to the PC.

Finally, the checkpoint/save system, which worked pretty well in the first two games, takes a step back here as well. Most of the time, getting past one knot of enemies or one particularly tricky bit of vehicle maneuvering earns a checkpoint as soon as you're in relative safety; however, there are times when you'll be in near-constant battle for upwards of a minute and make it past one or even two checkpoint positions without the game crediting them to you. At that point you become panicked not because you're being shot at but because if you get ganked by some cheese-rific enemy placement you'll have to fight the entire damn sequence over again. Again, this takes you out of the flow and just pisses you the hell off, especially when you take the same path, aggro 10 fewer enemies, and are awarded a checkpoint only 30 seconds after your previous one. The inconsistency is maddening.

After all that ripping, did I like the game? Well, yes. It's a solid, if unspectacular, first-person shooter that looks great and offers at least a try at a serious scifi storyline where most other games are happy to get by on violence and tits. It's got more bugs than it should, and the execution is off in some key places, but Halo fans won't be disappointed, and shooter fans will definitely find the good points to outweigh the bad.

But it's definitely not the Forty-second Coming.

**** (4/5)

Monday, September 24, 2007

Open Source FTW!

Wow, I write one little blog post and the heavy hitters take notice. First, thanks to all who read and commented. I'll try not to let success go to my head. Second, there are a lot of touchy political things going on in this discussion, and while I don't wish to offend anyone, being honest about my beliefs and my practices is bound to piss someone off. To that person I say "sorry, but I'm famous now". :)

Seriously, there are a couple of things I want to look at.

From Rod's blog, where he paraphrases my support company point into an incorrect metaphor:

Rod claims that dependable car repair requires trained mechanics. This must come as a surprise to all the non-mechanics employed by every other garage in the world. Someone has to do the sales and run the business and fix the building and keep the website updated and find new customers, etc., etc. Mechanics don't get you any of these, and who's going to tell me they're not required for a successful garage?

As Rod surely knows, car mechanics almost never design or build the cars they are paid to repair. A very small group of designers produce almost all the automobile designs used in the world. The guy at your local Fix-It-Cheap Auto Service doesn't work for Toyota and probably never has, but (assuming he's a competent mechanic) you don't insist that he has to before you trust him to fix your car; he's read the manuals, studied the engine specs, read the latest bulletins and recall notices, and worked on dozens of cars very similar to yours. He fixes your car for a hopefully reasonable price and you're out and happy.

Rod, Juergen, and the other i21 engineers are like the car designers. They decide how Spring should be architected, and then they write code that implements the design they've settled on. While they certainly can support Spring, there's no reason some other engineer (me) can't study the same code and achieve the level of expertise that a customer's support needs require. In my experience, a customer wants their immediate needs met when they open a support case. If we help them fix their problem, they're happy, and if we educate them on how to avoid similar problems in the future, they're very happy. Working with the community to find the long-term solution that works best for all Spring users is a second, separate problem, and one any good developer is happy to help with.

One thing i21 might look into is having a certified Spring technician program backed up by a rigorous curriculum that makes the certification really mean something. Toyota certifies mechanics on very strict tests. Cisco, Microsoft, Red Hat, etc., all have similar programs. The companies make money on course and training fees and crank out qualified engineers who spread the mindshare into every corner of every company they serve.

Rod speaks out against companies who, he says, try to make money off open source without fairly compensating the original developers of that code. SourceLabs is not such a company, as our active participation in several projects, including committer access, will show. By painting these companies as parasites, he implies, fallaciously, that the original developers are always owed a cut of any revenue derived from the code. If that's not what he means, it's hard for me to suss out any other interpretation of his statements on this issue. I strongly disagree with this opinion; a true open source project (the FSF term "free software" might even be applicable here) does not belong to any one person or entity, but rather to the community of its users. Rod seems to think that the purpose of Spring is to enrich Interface21; I believe the purpose of Spring is to make developers' lives easier. I make this claim because Spring is licensed under the Apache Software License, which explicitly allows free distribution and even (urk) proprietary forking. If Spring does not truly belong to all of us, then it is mislicensed. If Interface21 honestly feels that Spring belongs to them, they should change the license to reflect that instead of misleadingly claiming to be an open source company.

Finally, Rod claims that SourceLabs has not committed anything back to Spring. I could make the admittedly weak argument that I've personally sent two little tiny patches to the build file which were both accepted, but Rod has also said that valuable contributions to Spring include evangelizing and helping to broaden its community. We have been doing this since the company's inception. I've personally given one of my canned "Spring rocks" speeches to customers, and I've also encouraged other people I know in my professional sphere to use it. Such contributions are valuable not just because Rod says they are but also because they make life easier for everybody using the Java platform, whether they're our customers, i21's, or nobody's.

I'd like to close by offering Rod and Interface21 my help, in a professional capacity, as a Spring enthusiast with time on his hands. It is explicitly part of my job description to improve Spring however possible. Guys, where can I help? I'm happy to assist with anything you'd like assistance with: code, test cases, documentation, even giving speeches or talks at conferences and venues I visit. My offer is not even contingent on my continued employment at SourceLabs; I'm volunteering as an open source developer. Have Juergen (or whoever handles this sort of thing for you) drop me a line. I don't want committer access unless and until you've decided I've earned it. I don't insist you adopt my patches. I don't insist on helping design the car. But I will help you however you'd like to make Spring better with dedicated time each and every working week.

So how about it? Would you like the help? Let me know.

Thursday, September 20, 2007

Nonsense about Interface21

Rod Johnson, a man I respect and admire, has managed to step on one of my hot buttons. He's put up a new blog entry that reiterates his earlier complaints against other companies that offer support or services for Spring. While I agree with some of those complaints, he's painting with far too broad a brush.

Before I start, full disclosure: I work for SourceLabs, a company that offers 24/7 enterprise-level support for Spring as part of a fully integrated and certified Java enterprise software stack. I'm proud of our products and how we've helped our customers.

Rod starts off with this:

OpenLogic needs to understand that the opening comment in Stormy's post that "Developers that work on open source software typically have day jobs that pay pretty well…so they work on open source software for free and write code during the day for big bucks" is largely wrong, understand where the open source software they hope to profit from comes from, partner appropriately, and set a price point that allows for genuine support. An alternative would be to stop claiming to provide enterprise support, and be clear that what is being offered is a kind of on-call development assistance, with no guarantee of being able to resolve critical issues.

Actually, Stormy's not largely wrong. Companies that sponsor open source by hiring its developers are neither necessary nor sufficient for a successful project. The obvious example is Linux, which rapidly evolved into a stable and usable kernel long before the words Red Hat or SUSE entered the conversation. Back then, all Linux developers were far-flung, spare-time volunteers who didn't receive a dime for their work. When the first Linux companies decided to try offering support, there wasn't a "Linux21" company headed by Linus they could negotiate with, and Linus never claimed that he was the way, the truth and the life of Linux support.

Jesus may have said that no one comes to the Father except through him, but Linus explicitly disavowed control of the Linux community at a very early stage. He maintains his position through universal respect for his judgment and because he has demonstrated that he is beholden to no one. Whether he succeeds or fails, he tries to do the best thing for Linux, not for whatever company he's working for. Red Hat's Linux support is not somehow delegitimized for lacking an explicit Linus Stamp Of Approval, and neither is Canonical's, Novell's, SUSE's, etc. Plus, aren't those companies making money just fine, and isn't the competition between them good for all Linux users, individual and corporate? One might also mention XenSource, who never claimed to be the only legitimate source of Xen support, and yet recently sold for hundreds of millions of dollars. There are tons of consultants who are expert in Ant, Maven, Zimbra, JSF, etc., who may compete for contracts, but none of them can seriously claim to be the only legitimate option.

Rod claims that dependable support requires committer access to the source code. This must come as a surprise to all the non-engineers employed by every other support company in the world. Someone has to do the sales and run the business and fix the servers and keep the website updated and find new customers, etc., etc. Committer access doesn't get you any of these, and who's going to tell me they're not required for a successful support company?

Good support means that you have a personal relationship with your customer. You know what they need, what other software they're using, what they're most concerned about, and you establish a bond of trust with them that you will be there for them. That goes way beyond giving them a pager number and saying, "Call me if it breaks".

Despite what Rod would have you believe, I (or anybody else) can offer top-quality Spring support thanks to the ASF license Spring uses. You don't need to be a Spring committer to fix bugs in it, and you don't need Interface21's permission or endorsement to use it. If there's a bug that impacts your production server, I can find it in the code myself, talk to the customer, and figure out what the best solution is: patch, workaround, client code change, etc. I can do that because I have access to the source code that matters -- the one the customer is using -- and because I understand, through careful and tedious preparation, what the customer needs.

Moving on, we next have this:

It may seem strange to you, but it's normal for individuals and companies to want a return on investing millions of dollars, passion, blood sweat and tears in something. Interface21 sustains and develops Spring and we do a good job. I think it's perfectly reasonable to expect that we can leverage this investment into an advantage in the support market.

Sure, Rod. You want to sell support for Spring, be my guest. Play up every advantage you can in the market, but it is a market. If you want to argue that control over committers makes you the best support option, go ahead, but as I said above, it's a pretty weak argument.

Authors and thousands of these developers contribute to the success of the project and community through their evangelism. That's an important way of contributing. OpenLogic does not; it merely aims to make money from projects that others have built, evangelized and conveniently maintain and enhance. Now that's a "sweat deal".

OpenLogic may not contribute to the projects they use, but at SourceLabs we do. Our credibility as an open source company depends on it. I'm a committer on the Apache Commons project. We have people in house who have committer access on Struts, Hibernate, and Maven, and we're also involved daily in Tomcat, Axis, and other projects. We have brought Spring into companies because we like working with it, thus increasing mindshare and support for Spring without anyone at Interface21 knowing anything about it or lifting one finger.

No one at Interface21 can seriously claim to be actively engaged with the other pieces of a full Java enterprise stack; Spring, for all its awesomeness, is a framework, not a stack. It doesn't provide everything an enterprise customer needs, and supporting only Spring is not going to cut it with companies who also need help with Hibernate or Struts. In fact, by his own argument, Rod has to concede that if you need help with the other parts of your Java solution, Interface21 can't help you, since they don't control the source code of those projects.

It's true that aggregation in and of itself is not worth much. But neither is control of the code. Open source allows everyone to compete on level ground in whatever arena seems valuable enough to conquer. I hope Interface21 isn't just afraid of a little competition from companies who understand that doing support right is much more complex than having an SVN account.

One final point for Rod: why did you open source Spring at all? If you're so convinced that no one else can offer credible support for it, why not just make it proprietary?

Monday, August 20, 2007

Sun is misusing the GPL for Java: Part 2

Slashdot | Sun Lowers Barriers to Open-Source Java

Now for Part 2.

Last time I ran down what I see as the differences between the FSF and the ASF's handling of their communities as expressed in their respective licenses and practices. I feel they start from very similar places yet wind up in ones that, in a purely practical sense, function differently, even to the point of not being able to collaborate. This is a shame, but both groups have sound reasons for doing things their way. It makes it hard on those of us who want to participate in both, however.

When Sun announced the release of Java under the GPL, I was naturally delighted. Finally a free software system could include Java, finally the gray cloud hanging over Java developers with a conscience could dissolve, finally we could inspect (and even fix) core Java classes without fear of taint or lawsuit, etc. It was a good day all around.

Until the news dropped recently that Sun was changing the license terms of the Java Compatibility Kit (JCK). This was expected since the previous terms of the JCK's license conflicted with the GPL, but Sun went a step further and declared that only Java implementations "substantially based" on Sun's GPL implementation would get access to the JCK. In one sense this is a good thing, since people who patch Java can still get it certified as Java, including ports to different platforms and architectures, but it is also a big middle finger to all existing independent Java implementations, especially Harmony and Classpath. (Several other projects trying to offer a free software JVM, like Kaffe and Japhar, are affected as well.) Sun has effectively delegitimized these other efforts, saying, "They're not Java, they never were Java, and they never will be. There is only one place to get real Java, and it's from us."

The most important thing about Java in Sun's eyes has always been control: control of its brand, control of its implementation, and control of its uses. Java has never been nearly as open as some people would have you believe: the first version was licensed under frankly despicable terms, turning off a lot of interested developers who were willing to look past the ubiquitous and insipid hype. It's also easy to forget that for several years, Java was crippled by poor performance and buggy internals. Microsoft's JVM was preferred not because it was included in the OS (applets were already a dud) but because it was faster and more stable than the official one. Naturally Microsoft sought to embrace and extend Java into something they could control, and of course Sun fought back and eventually won, but the damage to Java's reputation had been done. The development and later uptake of C# is significant also because it offered something developers needed: a Java that performed very well and hooked deeply into Windows.

Finally, Sun's obsessive compulsion to control all things Java retarded its development. Sun doesn't have the resources to push Java along nearly as effectively as a group of interested developers passionate about the technology. If Java had been GPLed in, say, 1997, it would be very different from what it is now. I'm not sure it would be the same thing at all, but I am sure it would be a better thing. With developer interest, a free software license, and tons of time to research and experiment, we could have things like a real native code compiler, streamlined and less bloated VMs, tighter class libraries, and a myriad of other pipe dreams we're probably never going to get now. The determination to do new language features like generics while preserving very strict backward compatibility guarantees has resulted in a clunky and difficult development environment; if the technology were under the control of a group that tried to do what was best for the technology instead of what was best for the interests of a tiny minority of its users, there would be at least a fighting chance to avoid a situation like this.

Sun has found a way to control Java by ostensibly setting it free, and they're doing it by using the GPL as a club. The GPL's copyleft was designed to preserve the Four Freedoms after the software left the hands of the original developer. Sun has turned it into a weapon against people who can't or won't develop Java according to their rules. They plan to "open" the Java community not by joining with the existing ones but by creating a new one and marginalizing every other existing one.

When I say "misusing the GPL", I mean that Sun is going against the spirit of it, not the letter of it. Sun has every right to take this action, and as a member of the FSF it's hard to find fault with it -- it's what I've wanted for years. But in my time in the ASF, I've come to appreciate the work they've done, and they have a legitimate beef with Sun over their conduct in the JCP and by talking out of both sides of their mouth with respect to the ASF's access to other TCKs. (See Geir's letter for more.) This, apparently, is Sun's response.

While the new Java may be a GPL world, there is reason to wonder why anyone at the ASF should spend further time working on Java. Sun has left them for good, and pending a licensing resolution with the existing Java code, it will be more difficult to construct a system with Sun and ASF code from now on. At the same time, Classpath stands to benefit hugely from this, again at Harmony's expense; mixing Classpath and Sun Java is very easy, while mixing in Harmony is that much harder.

I'll have to think about what this means for ASF.

Thursday, August 16, 2007

Sun is misusing the GPL for Java: Part 1

Slashdot | Sun Lowers Barriers to Open-Source Java

I'm a proud member of the FSF as well as the ASF. For years, the incompatibility between the GPL and the ASL drove me, and many other developers, nuts. Building a complete free software Java platform is a Herculean task, and the duplication of effort between Classpath and Harmony struck me as needless. Of course the developers involved are free to do as they please, and I happily support any effort producing free software, but from the practical standpoint of just wanting one to use, neither had achieved parity with the Sun implementation. As a Java developer who wanted to make a living, I needed a rock-solid stable Java, and there just wasn't a free software option.

I don't believe there's any significant ethical difference between the FSF philosophy of free software and the ASF philosophy of open, community-driven collaborative development. I see them as two viewpoints on the same principle. Several developers work on projects for both organizations, and I know of no ethical conflict in doing so from anyone's perspective. However, the different emphases of both organizations, as expressed in their licenses, give rise to some annoying and probably unintended results.

Both the GPL and ASL are free software licenses because they protect the Four Freedoms, but the GPL uses copyleft to prevent someone else from restricting the Four Freedoms once the software leaves the copyright holder's control. The ASL doesn't care about that beyond some patent restriction language that is not in and of itself a bad idea. The GPL is both a shield and a club: the shield protects the user from legal responsibility for the code and preserves the Four Freedoms for that user; the club is used to beat around the head and neck those who, having had the freedoms extended to them, would then seek to deny them to others. The GPL is the summation and distillation of everything the FSF believes in.

The ASL is emphatically NOT a distillation of everything the ASF believes. The ASF has tons of rules regarding how projects must be managed, handled, advanced, promoted, demoted, etc. These procedures are designed to ensure that any ASF project is developed in the open, that the community of users and developers can always be heard from, and that no project can ever be taken over by any individual or group hostile to the spirit of openness.

I think this is the key distinction between the two groups: the FSF uses the GPL to control its community, while the ASF uses its culture and its members to control its community.

I'm not trying to spark a debate about which is "better" or "more ethical"; again, these are different outgrowths of what I believe is the same thing, the love of what the FSF calls "free software" and what the ASF calls "open, collaborative software development". However, this doesn't mean that the different approaches work equally well in all possible situations.

Consider a software project whose copyright is held by the FSF. Anyone contributing to an FSF project must legally transfer their copyright on their contributed code to the FSF; this is done because the GPL copyleft is much easier to enforce if a single entity holds the copyright. The FSF cares about nothing more deeply than the Four Freedoms and will do anything it thinks is necessary to protect and defend them through the GPL. One side effect is that many individuals who would like to contribute to an FSF-controlled project cannot do so because of this requirement. For example, a programmer's company usually claims copyright on any code written while in the employ of that company, sometimes even code written on the programmer's own time and equipment. Such a programmer cannot participate in the project through no fault of his own.

Also, imagine a software fork: the canonical example here is the Emacs/XEmacs division. The GPL protects the right to fork, and so XEmacs is just as legal as Emacs is. Without getting into the history of this rather ugly story, the FSF and Lucid, both acting in good faith, were unable to come to a consensus on the technical merits of Lucid's Emacs patches. The two projects forked, developed their own communities of users and developers, and continued on their separate ways. I'm not criticizing either program or either community; I only want to point out that the result of the dispute was a fork.

Contrast this to the ASF approach. ASF committers are not required to turn over copyright on their work other than under the terms of the ASL; however, ASF members must be individuals, not companies. Many contributors may insist on keeping their own copyrights, and the ASF allows this. ASF projects are controlled by a management committee, the leader of which reports to the Apache Board, and are answerable to the same. Major project decisions, such as releases, require a formal vote, and a single committer's justified veto can block a code change. Because the ASF puts so much effort into its community, forks and major disputes are much less common than in FSF-controlled projects; when they do arise, they are resolved not necessarily to everyone's satisfaction but to the point that the project may continue. For getting things done, this approach has obvious advantages. Again, the ASF cares about open development for its own sake and is less concerned with guaranteeing the FSF's Four Freedoms.

So what does all of this have to do with Java? My next post will discuss that, as well as ripping Sun a new one for misusing the GPL.

Thursday, May 24, 2007

Shiny rails and testing

Man, Rails kicks ass.

After getting a hold of Ruby a few months ago, I've been looking deeply into Rails. There are a couple things at work that use it, and I'll probably wind up maintaining them if we don't replace them altogether.

There's little point in me talking about how great Rails is, since you can find Rails worship sites just by typing random letters into a google search, but I am going to emphasize the thing that really kicks unseemly amounts of ass -- the one thing that no one else has done nearly as well: testing.

Rails provides first-class support for model (unit) testing and controller (functional) testing, and does it better than any other framework I know, including the one I support professionally. While unit tests have been around forever, there hasn't been a concerted push to get a comprehensive functional test strategy in Java. Part of the problem, of course, is that Java has more web frameworks than France has baguette shops, and several of them seem to make a point of being as different as possible from any other framework. Rails, being an all-in-one solution for your web needs, can offer integrated tests easily, but for the Java offerings you more or less have to reinvent the testing axle every time you reinvent the wheel.

Case in point: Our framework includes Struts 1.3.8, which for purposes of this discussion means we support it and give people a number to call if it breaks. We also ship a sample application that uses our framework, and I volunteered to maintain/update/rewrite it for the next release. The guy who did the first version is no longer with the company, and he never bothered to write unit tests for his models or functional tests for his controllers. Now it's no indictment of the technology if the developer is too lazy to code properly, but after filling in the missing tests I got to Struts. I'm not willing to ship an app without all necessary test coverage, and controller routing and request processing fall within that box. (If you disagree with this, you are wrong.) So I poked around for some test help and came across StrutsTestCase, which seemed to be just what the doctor ordered. Unfortunately, it's apparently been abandoned and the latest release is three years old, not to mention it doesn't even compile under Java 5. So I've been working on something similar for our needs.
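
For the record, here's roughly the shape of test I'm after. This sketch uses the StrutsTestCase-style API, and the action path, parameters, and forward name are invented for illustration:

    import servletunit.struts.MockStrutsTestCase;

    public class LoginActionTest extends MockStrutsTestCase {

        public LoginActionTest(String name) {
            super(name);
        }

        // Functional test: push a request through the Struts request
        // processor with no container and no browser in sight.
        public void testSuccessfulLogin() {
            setRequestPathInfo("/login");              // the action mapping under test
            addRequestParameter("username", "alice");  // simulated form fields
            addRequestParameter("password", "secret");
            actionPerform();                           // runs the action
            verifyNoActionErrors();                    // no ActionErrors were queued
            verifyForward("success");                  // we ended up at the "success" forward
        }
    }

Whether or not that library ever builds on Java 5 again, the point stands: a controller test should be about that many lines, not a manual click-through in a browser.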

All told, the googling/studying/problem solving/coding that I'll have done by the time this is finished will add up to a decent chunk of effort. I remind those reading that Struts 1 is still the most widely used MVC Java web framework, so one would expect to find solid testing support for it. Well, you won't, and there isn't.

The comparison to Rails, where a test case for your controller can be generated with a simple script and the entire test written and done in minutes, reflects badly on the Java world. I can only conclude that Struts shops write their action mappings and then laboriously, manually test each one through a running browser. In addition to being inefficient and boring, that takes forever, which is another way of saying it won't get done. This is the kind of thing the computer should be doing for you, people. This isn't a Java vs. Ruby issue, and it isn't even particular to Struts; there is a general resistance to spending coding effort on automating tests that can and should be automated.

We need to make managers aware of the need for automated functional testing so time for it gets built into schedules. We need to improve our testing tools so they'll be ready to handle new technologies. We must spend our time working on difficult problems, not fighting the machine.

Guess I'll start by writing a test library.

Saturday, April 7, 2007

Getting religion: XML

XML is out of control.

The insidious little creature has ingratiated itself with the Java standards committees, which themselves were long ago bought and sold to the unholy behemoths that stuff the latest JEE turd down our throats. Under their tutelage, XML has become the configuration file format of the 21st century; we can't get away from it in Java development. The two major build systems use it. The entire JEE spec is predicated on it. Spring, which has always been billed as the solution to the mistakes of JEE, prides itself on its XML configuration. Hibernate, which despite its flaws is worlds better than JDBC access, requires us to do our ORM in glorious, repetitive XML.

This has got to stop.

XML is a markup language; that means it is supposed to contain human-understandable text and information about that text. It was designed to be flexible so that adapting it to arbitrary formats is easy. Whichever side of the OOXML/Office XML debate you're on, the fact that the document is represented in XML is a win for all developers.

That flexibility, which has been the key to XML's wild success, has also been seized on by eager Java beavers and twisted into a general-purpose configuration language. Now, it's debatable whether an XML representation of, say, a properties file is any better than a simple key/value listing, but I would argue it's at least no worse. However, when you start mapping database table schemas to XML, inserting namespaces for different kinds of constructs, and attempting to integrate those configurations with other programs, you wind up in a nasty place quick. On top of that, some people have even begun to hack procedural logic into XML (see the antcontrib tasks and the JSTL). Suddenly your XML has become a crappy approximation of a programming language. At that point, why aren't you better off using a programming language? As the XML gets more and more hairy, the parser grows similarly hairy -- just so you can map your XML into Java. But why are you trying so hard to keep your configuration in XML anyway? So it can be portable? (What other app is going to use Spring's application-context.xml?) So other languages can read it? (Ruby doesn't need a Hibernate XML file for ActiveRecord and never will.) Even if you do think of a good answer to those questions, is it worth the ugly creeping horror that is your configuration parser?

If you need a programming language, don't be afraid to use one.


All right, so maybe Hibernate can't read the table metadata from the database for some reason (though I'm still not convinced it shouldn't at least try). What's wrong with using Java to describe the schema instead of XML? The information is going to wind up in Java anyway, and the extra level of indirection doesn't buy you anything. At the very least, use Java (or Ruby, or something useful) to generate the XML if you insist on having it; at that point, reconfiguration becomes the same as refactoring, and a good Java IDE is immensely helpful with that.

(Thankfully, Spring has finally started work on a Java configuration option, and it is tasty.)
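
To make the point concrete, here's a minimal sketch of configuration-as-plain-Java, no framework required. The types (OrderDao, OrderService) are invented for illustration; what matters is that the wiring lives where the compiler and the IDE can see it:

// Invented types; each method in AppConfig plays the role of a <bean> definition.
final class OrderDao {
    private final String jdbcUrl;
    OrderDao(String jdbcUrl) { this.jdbcUrl = jdbcUrl; }
    String url() { return jdbcUrl; }
}

final class OrderService {
    private final OrderDao dao;
    OrderService(OrderDao dao) { this.dao = dao; }
    OrderDao dao() { return dao; }
}

public final class AppConfig {
    public OrderDao orderDao() {
        return new OrderDao("jdbc:hsqldb:mem:orders");
    }

    public OrderService orderService() {
        // "Wiring" is just a constructor call, so rename and find-usages actually work.
        return new OrderService(orderDao());
    }

    public static void main(String[] args) {
        OrderService service = new AppConfig().orderService();
        System.out.println("wired against " + service.dao().url());
    }
}

Nothing clever is going on there, and that's the point: reconfiguring it is an ordinary refactoring, and a typo is a compile error instead of a runtime surprise.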

Bottom line, guys and gals: the right tool for the right job. XML is not always the right tool, so don't use it when it isn't.

Friday, April 6, 2007

Getting religion: Testing

I've spent the past week at a Spring training session -- of which more later -- but the first thing I want to say has to do with testing.

We have maybe 20 people in here; most are on ancient Java tech, including 1.3 and raw JDBC, and Spring of course sounds compelling to them because it hits right where they need the most help. The subject of transactions came up, and naturally our instructors pushed hard on us that 1) they are necessary; 2) Spring makes them easy; 3) we should be using them. Now, I was prepared for 2 and 3, but I thought 1 went without saying. Of COURSE you use transactions when you're doing database development. All kinds of crap can go wrong if your operations aren't atomic by logical unit rather than by connection. While I wasn't surprised to hear people say transactions were a pain to implement in Java, I was shocked to hear them say that because of that, they didn't bother with them -- or, even worse, that transactions were unnecessary. One guy claimed to know his database "well enough" that the kind of conflict the instructor described "shouldn't be allowed to happen".

I had to bite my tongue to keep from laying into this guy.

So the instructor followed up with, "Well, how do you know you aren't having any problems with your code due to not using transactions?" "Well, we haven't seen any problems and our users haven't reported any." Um, genius, if your web site doesn't work for users for no apparent reason, they're not going to use it. They're not going to compose a detailed error report and help you track down the bug; that's not their job. The web is not an expanded testing department. It's not my mom's job to help you find the bugs in your site. If a user can't use your site, they'll just leave it and not come back.
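
For the record, the discipline I'm asking for isn't exotic. Here's a minimal sketch at the bare JDBC level -- the table and column names are invented -- where the two updates either both happen or neither does:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class TransferDao {

    // Debit one account and credit another as a single unit of work.
    // The schema is hypothetical; the commit/rollback shape is the point.
    public void transfer(Connection conn, long fromId, long toId, int cents)
            throws SQLException {
        conn.setAutoCommit(false);      // atomic by logical unit, not per statement
        try {
            PreparedStatement debit = conn.prepareStatement(
                    "UPDATE account SET balance = balance - ? WHERE id = ?");
            try {
                debit.setInt(1, cents);
                debit.setLong(2, fromId);
                debit.executeUpdate();
            } finally {
                debit.close();
            }

            PreparedStatement credit = conn.prepareStatement(
                    "UPDATE account SET balance = balance + ? WHERE id = ?");
            try {
                credit.setInt(1, cents);
                credit.setLong(2, toId);
                credit.executeUpdate();
            } finally {
                credit.close();
            }

            conn.commit();              // both updates succeed together...
        } catch (SQLException e) {
            conn.rollback();            // ...or neither of them happens
            throw e;
        } finally {
            conn.setAutoCommit(true);
        }
    }
}

Spring's whole pitch in the session was that you get the same guarantee declaratively, without hand-writing that plumbing every time. Either way, you don't get to skip it.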

Some rant-ish points:

  • I don't care how well you think you know the database you're using; you're wrong. First, you don't know it completely; no one person does. No one person even wrote it, so it's ridiculous to claim you understand it completely. Second, it's not just the database you need to worry about; it's also the JDBC driver, the VM, the native platform, the versions of each, and a hundred other factors. It is way bigger than you, and it's time to humble yourself to that fact; the days when a programmer understood the machine completely are long gone and aren't ever coming back.
  • One day your database may change. I guarantee you won't understand it nearly as well as you do your current one, and that's incomplete at best.
  • It is not okay to write some code, run it through a couple of use cases, fix the obvious errors, and declare it production-ready. Your job is not to show that your code might work given the right conditions; you must show that your code cannot break in the wrong conditions.
  • Your test cases (the programmer's, I mean, not the QA department's) must test not only for correct results but also for sensible, well-defined behavior when given incorrect input. It doesn't just have to work; it also has to not break.
  • Treat your users with respect. They aren't programmers, and they don't get paid to use your code. You get paid to give your users an easy, non-confusing experience. Don't try to lazy your way out of it by saying that bulletproof code isn't worth your time -- you have no other function but to produce that bulletproof code.
Remember this at all times:
This is computer science, not computer religion; we don't have faith that things work, we have proof that they do.
One of the bedrocks of the scientific method is falsifiability: a scientific theory must make claims that can, in principle, be proven false. (This is why creationism and intelligent design aren't science; they can't meet this standard.) In this case, a falsifiable statement would be something like: if this operation is interrupted by the user, the database update will fail. You then write a unit test that creates such an interruption and checks whether the update failed (either an exception was thrown or the data violates some integrity constraint).
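
As a sketch of what that looks like in JUnit -- the Ledger class is an invented stand-in for your real data-access code -- the test manufactures the interruption and then asserts that the update did not survive it:

import junit.framework.TestCase;

// Falsifiable statement under test: "if the operation is interrupted,
// the update fails and nothing is partially written."
public class LedgerInterruptionTest extends TestCase {

    /** Minimal in-memory stand-in with all-or-nothing batch semantics. */
    static class Ledger {
        private int balance;

        Ledger(int opening) { balance = opening; }

        int balance() { return balance; }

        /** Applies every delta, or none of them if anything goes wrong. */
        void applyBatch(int[] deltas) {
            int working = balance;              // work on a copy...
            for (int i = 0; i < deltas.length; i++) {
                if (Thread.currentThread().isInterrupted()) {
                    throw new IllegalStateException("interrupted mid-batch");
                }
                working += deltas[i];
            }
            balance = working;                  // ...and publish only on success
        }
    }

    public void testInterruptedBatchLeavesBalanceUntouched() {
        Ledger ledger = new Ledger(100);
        Thread.currentThread().interrupt();     // manufacture the interruption
        try {
            ledger.applyBatch(new int[] { -40, -70 });
            fail("expected the interrupted batch to be rejected");
        } catch (IllegalStateException expected) {
            // the update failed, as the statement predicts...
        } finally {
            Thread.interrupted();               // clear the flag so other tests aren't polluted
        }
        assertEquals(100, ledger.balance());    // ...and no partial write leaked out
    }
}

The assertions at the end are the QED.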

A unit test is really a small proof that settles a falsifiable statement; when you assert something at the end of a test, you are claiming that the lines of your code form a list of statements that logically demonstrates the truth (or falsity) of that statement. This is exactly the same thing as a geometric proof, and that's not a coincidence. A good unit test exercises only one part of your class, just as a good proof tries to establish only one fact.

You know that your code is ready when it has been proven to not fail. The trick is figuring out all the ways your code might fail, and that's impossible except for the most trivial programs. However, you should be able to predict the vast majority of failure scenarios and prove through unit and integration testing that they will not break your code. If you're not doing that, you're not practicing computer science.

It's time to get religion about computer science.

Saturday, February 24, 2007

Elitism can suck it.

Slashdot | Raymond Knocks Fedora, Switches to Ubuntu

I'm not an open source "guru", "expert", or "leading voice", as ESR has been variously labeled. I think ESR has started believing his own press releases and has become a creature of his own ego (which, let's admit, all programmers have more than the average share of). He's done some good things for open source, for free software, and for the community of computer users at large. I own The Cathedral and the Bazaar, and I'm not rushing to the nearest used bookstore to get rid of it. However, he seems to think he still speaks from some high position of authority, above the petty and meaningless squabbles he believes pervade the open source community, and that his opinions are therefore well-considered and unbiased -- when they're nothing of the sort.

I don't care if he switches to Ubuntu. I did it myself, and I couldn't be happier with it. I used Red Hat, and then Fedora, for 8 years, and it's where I learned how to use Linux. I'm not happy with the quality of recent Fedora Core releases, but they don't exist to please me. My personal preferences are not an objective set of criteria for evaluating the worthiness of a distribution, and neither are ESR's.

The fact that he now has a financial interest in Ubuntu/Linspire -- and thus has a conflict of interest in trashing Red Hat the company and promoting Canonical -- turns this from an egomaniacal explosion into something akin to FUD.

The very things ESR has spent the past year decrying -- elitism and a lack of concern for the users -- are now his stock in trade. He thinks he's more important than the average hacker, and he's not. He thinks his opinions matter more than the average hacker's, and despite the mainstream media prostrating itself at his feet out of remembered glory, they don't. He thinks he knows what users need. Only the users know what the users need, and the developer's job is to try his damnedest to give it to them.

Thursday, February 22, 2007

Back in touch with tech; or, How about a real programming language?

There's a singular joy in learning a new programming language, especially when you've been in the one- or one-and-a-half language rut for three years.

I tore through The Pragmatic Programmer over the past few days, and it's just as good as advertised. While much of the book was already obvious to me and had been part of my programming practice for a while, just as much of it was clever and brightly presented. "Wow, I can write my own code generators and source analyzers?" Not that it's beyond my ability to do so, but it had somehow never occurred to me to actually try it. Or I just never had a need.

Anyway, they also suggest keeping your "knowledge portfolio" up to date. It's become clear to me in the past few months that a lot of my knowledge is aging rapidly. A new language sounds like just the fix. While I'm at it, why not pick one that can help me write whizzy-bang code generators and analyzers?

I've done Perl in the past, but in my semi-regular checkins I've become impatient with the interminable Perl 6 gestation period. I'm not really interested in spending a lot of time to learn something experimental, and Perl 5 is 10 years old at this point with hardly any changes in that time. Perl is a really neat hack and a cool thing to work with, and I'm certainly not opposed to it, but as I find my own identity as an engineer and develop my own style, I feel that it's not the direction I want to be going in technically.

Next you're saying, "how about Python?" Well, I looked. Cool community, neat ideas, huge install base, pretty mature. Yet... there's something about it that just sounds foreign to my ears. While my company does a lot of internal work in Python, little of it is in spaces that scratch my itches. I think I'll pass for now. Maybe one day I'll check it out again, possibly even learn it, but right now I'll focus on the other alternative: Ruby, the other language we use internally.

This Ruby thing is neat. It works for me because:

  1. it's different enough from Java and C# and D to be a good mind-stretcher;
  2. it kicks ass at text processing;
  3. it's dynamically typed and object-oriented in the purist sense;
  4. Rails is the new hotness in web architecting.
My Ruby education is just beginning, but already there are some ideas in it that have made me sit back and clap excitedly. Here's one neat nugget from its exception-handling mechanism:


begin
  # ...attempt the (E)SMTP conversation here...
rescue ProtocolError
  if @esmtp then
    @esmtp = false   # downgrade from ESMTP to plain SMTP
    retry            # jump back to the top of the begin block and try again
  else
    raise            # already downgraded; re-raise to the caller
  end
end


See what that does there? rescue is an exception handler, like catch in the C++ family. But the retry keyword sends control back to the top of the begin block once you've attempted to repair the error. Are you freaking kidding?! That kicks ass! That one little bit just blew my mind enough to make me completely interested in what Ruby has to say.
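
For contrast, here's roughly what you have to hand-roll in Java to get the same downgrade-and-try-again behavior, since there's no retry keyword. The send methods and the exception type are hypothetical stand-ins for the real protocol code:

// What Ruby's retry gives you in one keyword, Java makes you spell out as a loop.
public final class MailSender {

    private boolean esmtp = true;

    public void send(String message) throws MailProtocolException {
        while (true) {
            try {
                if (esmtp) {
                    sendViaEsmtp(message);
                } else {
                    sendViaSmtp(message);
                }
                return;                     // success: we're done
            } catch (MailProtocolException e) {
                if (esmtp) {
                    esmtp = false;          // downgrade and go around again (Ruby's retry)
                } else {
                    throw e;                // already downgraded: re-raise
                }
            }
        }
    }

    // Hypothetical stand-ins for the real SMTP conversation.
    private void sendViaEsmtp(String message) throws MailProtocolException { /* ... */ }
    private void sendViaSmtp(String message) throws MailProtocolException { /* ... */ }

    /** Hypothetical checked exception mirroring Ruby's ProtocolError. */
    public static class MailProtocolException extends Exception {
        public MailProtocolException(String msg) { super(msg); }
    }
}

A loopful of ceremony versus one keyword -- that's the kind of thing that keeps me reading.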

Wednesday, February 14, 2007

So, you want to do Maven? Good luck.

Ick.

Maven is the Next Big Thing in Java build management. Its goals and philosophy are pretty far from Ant's, which in all honesty is a step in the right direction. Ant scales poorly, uses some ridiculous syntax, and becomes harder to understand the more you try to do with it. For slapping together a simple .war or .jar, it works fine. For controlling a vast array of slightly differing builds in an intelligent and maintainable way... well, let's just not mention that.

Maven has caught on among the Movers and Shakers for one reason only: dependency management. Maven has "repositories", which store version information for "artifacts" (fancy name for jars). You specify what version of something you want, and it does the Right AND Intelligent Thing: downloads it to a well-defined and organized location, automatically sets up classpaths, and makes it available for other Maven builds. This: 1) flat-out rocks; 2) solves one of the biggest problems with making Ant scalable; 3) makes version conflicts a manageable problem instead of a hair-pulling fit of frustration. "Dammit, I said commons-lang 2.1, not 2.2! Users couldn't download jars properly if it guaranteed them a second life as Jessica Alba's Barcalounger!" Such days are now but a painful memory.

Unfortunately, they just HAD to go and cover this juicy nugget of awesome with a ponderous load of wtf. The pom.xml file, which is to Maven as build.xml is to Ant, is stunningly complex. It's true that you can get a lot more mileage out of a lot less XML in the simplest case, but that assumes you've laid out everything in your project according to the Maven "standard directory layout" -- and don't ask, because you haven't. "No problem, I'll just specify the location of everything!" Sure, do that. But even if you can figure it out without any documentation (of which more later), your nice, cute, simple POM just doubled or tripled in size. Worse, it now looks slightly familiar... I know, it's that huge list of Ant properties that every medium-to-large project has -- the one that has nothing in common with the similar list from any other medium-to-large project!

So if I want to use Maven, my first option is to move everything relating to my project into an arbitrary directory structure, thus:

  1. screwing up my working Ant scripts, thus requiring me to rewrite them;
  2. screwing me on all the taskdefs I'm using that have no corresponding Maven plugin;
  3. giving me yet another moving-crap-around-subversion headache;
  4. randomly hosing other useful things like the build machine's triggers, my cool Ruby code coverage scripts I'm prototyping, or proprietary analyzers that the Manager insists we waste otherwise useful CPU time on;
  5. hoping that the next version of Maven doesn't change its mind about what the arbitrary directory structure should be (as v2 did from v1), or I'll have to repeat this entire process.
If that sounds unpalatable, no problem: make a nice big ugly POM telling Maven where everything is. How is this better than what I already have? I get the same finished builds out of this POM -- once I've got it debugged -- that I do with Ant. Now I have two ugly sets of build scripts to maintain instead of one.

This is progress?

Recently I migrated a little Java library project of mine from Ant to Maven 2 in order to learn how it worked. I had a simple 150-line Ant script that compiled, ran tests, created javadocs, checked versions, and threw the whole mess into a neatly packaged jar with a bowtie on top. I'm not a complete idiot, I like to think, but the conversion took around 8 hours, and we're talking about maybe 75 classes in two packages with one external dependency. Now I have a 200-line POM that does the same thing in a completely different way, and it's only that short because I bit the bullet and converted to The One True Project Organization. Why would I try this on a big, important project where screwups or delays might mean my job?

One last item on the rant parade: documentation. Maven 2's is pathetic, and Maven 1's, which is all but abandoned only two years after a huge effort to convert people to it, is only slightly better. Tons of important docs on the live site are still "coming soon", and the plugin documentation is long on parameters and short on examples. I don't care about esoteric feature X of weird tool Y. I just want a drop-in for generating javadocs and creating shippable packages. This shouldn't be hard to do -- and once you figure it out, it really isn't -- but no way should I be forced to spend valuable time ferreting out the solution myself through trial and error. (No, mailing lists and bug reports are not the answer. I should NEVER have to Ask The Expert When Simple Things Break.) Please, Maven guys, think about the users here.

There is a bit of good news. Maven 2 comes with some Ant tasks that let you do Maven-ish things from Ant, including dependency resolution. My shiny new Ant script (after the Conversion) downloads one small jar of Maven tasks, feeds it a few simple bits of info (package name, version number), and the whole pile of dependencies gets downloaded right then and there and the project built in one go, without needing to restart the build. And if the user ever wants to try the Maven build, it will reuse the previously downloaded dependencies without a single tag of additional configuration. Freaking awesome. Groin-grabbingly transcendent. The Real Build Solution, when it comes, will work in all respects the way this little combination does.

Wednesday, February 7, 2007

Mac Java considered harmful

For the past two months, I've been doing daily Java development on a PowerBook G4. It sucks. Why?

First of all, it's slow. "Well, duh, Java is always slow," squawks the ill-informed peanut gallery. No, as a matter of fact, it's not; runtime performance has gotten to the point where it's comparable with native code on almost every server platform, and I'm not hacking 3D game engines here.

What I mean is that Apple's implementation doesn't compare favorably with Sun's or IBM's. The latest released version of the SDK from Apple is still stuck on 1.5.0_06, which is getting pretty long in the tooth, and the beta download pushes it all the way to 1.5.0_07. The latest from Sun is _11. Also, 1.6 final has been out for over a month now while the Apple beta of 1.6 is months old with no word on updates.

Finally, there's the widely-reported Steve Jobs quote from just after the iPhone presentation (which -- you heard it here first -- will be a huge bomb):

Java’s not worth building in. Nobody uses Java anymore. It’s this big heavyweight ball and chain.
Gee, thanks, Steve. I guess I'll just go shut down those servers running my company's websites.

I know several people are saying he was only talking about the iPhone, but this is a pretty blanket condemnation when he could have said something like "Java doesn't make sense on the iPhone for us". I might even have accepted that at face value, since client-side Java (outside of J2ME) has been more or less a bust. But combined with the apparent neglect of their Java implementation after extravagant promises about its performance on OS X, this sounds ominous.

Saturday, January 27, 2007

MIXing it up

Is there any value to reading The Art of Computer Programming anymore?

The series is a classic, and that means it carries loads of baggage along with its merits. I don't know of any universally accepted definition for a classic other than the tongue-in-cheek one:

A classic is something that everybody wants to have read and nobody wants to read. -- Mark Twain
It sure strikes close to home for someone with a liberal arts education. Most software developers I know speak of the book with a mix of reverent respect and pitying affection; "it was an amazing achievement for its time, but it's so outdated now that it's more of a historical document than a useful guide for modern developers." My training has conditioned me to distrust this kind of attitude, if for no other reason than I spent my entire college career reading books that fit just this description. Plus, when I want to understand something through a book, I want the source; I hate textbooks and Reader's Digest-style summaries. I don't need any intermediaries between me and the original idea. I can handle it on its own merits. (Well, at least I SHOULD be able to.)

So I decided I wanted to read it. I got it for Christmas, and I'm up to about page 150. The first volume starts with some really intense math, and although I was always good at math, I'll admit frankly that I didn't understand most of it. Most "classics" are like that, though; the first time through your reaction is "Huh?", and the juicy nuggets only reveal themselves through repeated readings. So I pressed on and was treated to MIX, the ideal machine Knuth designed to illustrate algorithms in code.

MIX strongly resembles computers of the 60s, and its guts are unlike those of any modern machine. It's got registers and memory cells but no operating system; programs are written directly against the hardware in raw five-byte words. Bytes in MIX are not eight binary digits; they can be binary or decimal(!), and the only guarantee the programmer gets is that a byte holds at least 64 values (0-63) and at most 100. This is weird enough on its own, as I've been thinking in binary forever. If a value might land in the 64-99 range, you have to spread it across two adjacent bytes to get well-defined, portable results: on a binary MIX a single byte tops out at 63, so something like 90 simply won't fit in one byte.

I haven't gotten to program this thing seriously yet (there's an assembly language called MIXAL that comes next), but it's radically different from a high-level language. The machine really does next to nothing for you. You get to implement algorithms at the lowest level, which of course is the point, but I've never implemented linked lists in assembly before.

Anyway, is implementing basic algorithms on an ideal machine really going to make me a better programmer? I don't know. I need to get a little further.

Where the hell does this class go?

Software development, however you want to define it, is really hard.

I've been doing this professionally for 10 years now, and recently I've been struck forcefully by how, well, bad a lot of software is. It breaks, it crashes, it doesn't do what it needs to, it decides to destroy unrelated data innocently minding its own business, it lies to your face about what will happen when you push the button. It doesn't matter who the developers are, or how big the team is, or how much time and effort are put into it; the goal, asymptotically approached, is making something that doesn't suck *that* bad.

Some developers are idiots. I have my moments, and I think there's a lot of truth in the simple "try not to make mistakes" approach as opposed to "find the best possible solution". But some are just straight-up, envy-inducing geniuses, and they work on really powerful and complex systems that wind up being terrible. How much effort has Microsoft spent on developing the Windows family? Is the quality of the end product proportional to the amount of effort invested in it? How many brilliant, motivated people worked on it through the years?

I got The Art Of Computer Programming for Christmas, and I eagerly tore into it. All right! The undisputed classic that lays out the guts of classical computer science! Surely in here I'll find my answers, clearly and cleverly presented in inky-black awesomeness, explaining why I can't architect an entire J2EE application correctly on the first attempt!

...yeah, it does sound pretty dumb, but I've always been the kind of person who figured that if he read just one more book, or if he found the right teacher, or if he learned the proper technique, then the formerly difficult and frustrating task would become easy, clear and fun. There was a time when this was true. When I first taught myself how to program, I was able to make the computer do damn near anything I wanted (so I thought). This part of Real Programmers Don't Use Pascal strikes too close to home for comfort:

When I got out of school, I thought I was the best programmer in the world. I could write an unbeatable tic-tac-toe program, use five different computer languages, and create 1000 line programs that WORKED (Really!). Then I got out into the Real World. My first task in the Real World was to read and understand a 200,000 line Fortran program, then speed it up by a factor of two. Any Real Programmer will tell you that all the Structured Coding in the world won't help you solve a problem like that-- it takes actual talent.
Well, I don't know whether I've got talent or not. I know I'm not a genius, because if I were I'd be off creating awesome things already. And I want to succeed in the Real World, not in the Happytime Sugar World Where No Challenge Trips You While You Saunter About Listening To The Sound Of How Awesome You Are. If a genius is just someone with supreme talent, like a Mozart or a Nietzsche, they don't need to develop skills in the same way we mortals do.

I'll save what I've found in TAOCP for the future, but suffice it to say that it wasn't exactly what I was expecting.

I read other programs and I can discern their structure; if it's Java (and it usually is, since that's my job), I can grok a method on one reading. I can understand an entire class in 15 minutes or so. After an hour I can fix bugs in the javadoc, write unit tests, and leave it a much better place than when I found it. A whole package is harder. Some packages are just a velcro strip for sticking together loosely related functional units (java.text), while others provide a (hopefully simplified) abstraction over a much larger problem (javax.swing). The former are much easier to understand than the latter, which is fine, since the latter are much more complex. But what do I do when I have to write something that's somewhere in the middle? How do you go top-down in architecture as opposed to bottom-up?

I want to be a better developer. To do that I must become at least competent in top-down design. I can't do it well, so I look for help in books. No help there. Can't learn it from geniuses since they don't have to think about what they're doing in the same way that I do. Can't read existing code because even if it is designed well it doesn't tell me anything about where it came from. So... what the hell do I do?