Saturday, September 25, 2010

Are you sure you want to read this? Ok / Cancel

Image © iStockphoto.com / budgetstockphoto

We're moving to soft-delete for many objects in our application. By saving old versions of objects we hope to correlate the effects of changes over time. This is a big deal for us because, with detailed history, we expect to increase our predictive capability by several orders of magnitude. It's also a big deal because it's a radical shift in our data model and we built many features assuming deleted objects go away. Once we decided we wanted the capability, however, it made sense to start sooner rather than later. I'm definitely excited about the analytics possibilities, but I'm also excited about a potential user-facing feature soft-delete enables: undo.
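
For the curious, here is a minimal sketch of what soft-delete might look like on one of our objects. The entity, fields, and methods below are made up for illustration, not our actual schema; the point is simply that "delete" sets a flag instead of removing the row, so the old data sticks around for the historical analysis described above.

    import java.util.Date;

    // Illustrative only: a hypothetical entity with soft-delete.
    // Deleting marks the object instead of destroying it.
    public class Account {
        private Long id;
        private Date deletedAt;   // null means the object is live

        public boolean isDeleted() {
            return deletedAt != null;
        }

        public void softDelete() {
            if (deletedAt == null) {
                deletedAt = new Date();
            }
        }

        public void restore() {
            deletedAt = null;     // this is what makes undo possible
        }
    }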

I dislike confirmation boxes. You know: "Are you sure you want to do this? Okay or cancel". I find them only incrementally better when they explicitly indicate the action to be performed: "Save these changes or discard these changes?".

Surely it's better to ask a user before making a permanent change than not to ask? Well, yes. But the problem is that there are too many permanent changes presented to the user. So many, I propose, that users learn to approve changes without really thinking about it. Hence, the intended purpose of the confirmation box -- to force the user to think about the change -- is lost in a flurry of okay/cancel dialogs.

The solution is not more confirmation boxes. The solution most certainly is not more annoying confirmation boxes.

The solution is to make the change not permanent. Put another way, if an action is not permanent then there is no need to ask the user for confirmation. Rather, the user is free to revert (i.e., undo) the operation upon deciding it was not what they intended.
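
To make that concrete, here is a hedged sketch of the flow I have in mind, building on the hypothetical Account entity above. The delete happens immediately, with no confirmation dialog, and the UI can offer "Undo" because restoring is just clearing the flag. The repository interface is equally hypothetical.

    // Hypothetical persistence interface; any mechanism would do.
    interface AccountRepository {
        Account find(Long id);
        void save(Account account);
    }

    public class AccountService {
        private final AccountRepository repository;

        public AccountService(AccountRepository repository) {
            this.repository = repository;
        }

        // Called when the user clicks "Delete" -- no confirmation dialog.
        public void delete(Long accountId) {
            Account account = repository.find(accountId);
            account.softDelete();        // mark, don't destroy
            repository.save(account);    // the UI can now show "Deleted. Undo?"
        }

        // Called when the user clicks "Undo".
        public void undoDelete(Long accountId) {
            Account account = repository.find(accountId);
            account.restore();
            repository.save(account);
        }
    }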

From a task-accomplishment perspective it seems like a small, or maybe non-existent, difference: the task was performed or it wasn't. But from a user perspective it is the difference between an application that is frightening and one that is inviting. Confirmation boxes might as well show a skull and crossbones. Undoable operations encourage the user to explore boldly.

I could go on about the benefits of undo over confirmation, but others have written about it much better than I have. I recommend these two articles.

Follow me on Twitter @jsjacobAtWork.

Thursday, September 9, 2010

Categorizing automated tests — Make it so!

Image © iStockphoto.com / Evgeny Rannev

I love Continuous Integration. Just as I can't imagine writing software without continually building the code, I can't imagine committing code without a full test suite automatically running. However, the automated test suite sometimes gets stuck on things we think it shouldn't.

We run a Hudson instance as our continuous integration server. Hudson monitors Subversion for check-ins and starts a new build/test cycle within five minutes of a change. At present, a successful full build/test takes just over an hour with most of the time spent in tests.

Ideally we don't want any tests to fail, so any test failure should mean a code change is needed to fix it. But in practice there are some tests that are okay to fail temporarily. The vast majority of the "okay to fail" tests are ones that use external sandbox systems. External sandbox systems go down. As long as they go down only occasionally and temporarily, and not when we are actively developing code against them, it really doesn't bother us that much. Except that the corresponding tests that try to communicate with them fail.

The problem is that, in the Maven process, failing tests in one module prevent tests from running in dependent modules. In our project hierarchy these external systems are used in a project on which many other projects depend. We would like to continue testing these dependent projects, especially since most of their tests do not depend on the external system.

There may be a way to restructure Maven projects to reduce the test dependencies, but I think that would lead either to the external tests moving to a different project from the classes they test or to an explosion of projects causing a source control nightmare. I don't like either of these scenarios.

I'm thinking more along the lines of test levels I learned at Pacific Bell many years ago. At Pacific Bell test levels were arranged according to the extent of the system covered by the test. Applying these test levels to our current application:

  1. Unit: the individual method or class using test data in development environment.
  2. One-up/one-down: partner methods and classes using test data in development environment.
  3. End-to-end: full application (backend and UI) using test data in development environment.
  4. Test environment: full application (backend and UI) using test data in test environment.
  5. Production environment: full application (backend and UI) using production data in production environment.

I believe our current automated tests apply to the first four levels. If we categorize our existing tests, each will fit reasonably well into one of these levels. For example, a test that verifies a class in isolation is categorized as Level 1. At the other end of the spectrum, a test that verifies connectivity with a sandbox environment is categorized as Level 4: although it may be structurally just a one-up/one-down test, the external system belongs to the "test environment" rather than the "development environment", and it is not available in the lower three levels. The same test written against a mock could be categorized at Level 2, because using a mock means the external system is not involved.
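
If we stay on JUnit, one plausible way to tag tests with a level is the (still experimental) categories support added in JUnit 4.8: define a marker interface per level and annotate each test class. The marker and test class names below are made up; this is a sketch, not our actual test code, and each class would live in its own source file.

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    // Hypothetical marker interfaces, one per test level.
    interface Level1 { }   // unit
    interface Level2 { }   // one-up/one-down
    interface Level3 { }   // end-to-end
    interface Level4 { }   // test environment

    // Verifies a class in isolation: Level 1.
    @Category(Level1.class)
    public class PriceCalculatorTest {
        @Test
        public void roundsToNearestCent() { /* ... */ }
    }

    // Talks to an external sandbox system: Level 4.
    @Category(Level4.class)
    public class PaymentGatewaySandboxTest {
        @Test
        public void canAuthorizeTestCard() { /* ... */ }
    }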

I imagine configuring Maven to run four passes of tests across the projects, with each successive pass adding a new level of test. During the first pass only unit tests are run. During the second pass only one-up/one-down tests are run. The third and fourth passes run end-to-end and test-environment tests respectively. At any point a failure prevents the later tests from running. The key, however, is that a higher-level test failure will not prevent a lower-level test from running, no matter how the project dependencies are configured.
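
One way a single pass could be expressed, sketched with JUnit's Categories runner (again with made-up class names): a suite per level includes only the tests carrying that level's marker, so a pass can span all projects regardless of how their dependencies are arranged. How the four suite executions would be sequenced in Maven or Hudson is a detail I haven't worked out yet.

    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Categories.IncludeCategory;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite.SuiteClasses;

    // Pass 1: run only Level 1 (unit) tests from the listed classes.
    // Similar suites would exist for Levels 2 through 4.
    @RunWith(Categories.class)
    @IncludeCategory(Level1.class)
    @SuiteClasses({ PriceCalculatorTest.class, PaymentGatewaySandboxTest.class })
    public class Level1Pass { }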

This would give us more information about where problems lie before digging into the console output. For example, if a release build fails because of tests in level 2 we know something is very wrong and the release must be stopped. On the other hand, a release build failing in level 4 tests might be okay because the lower three levels passed in all projects: maybe the problem is limited to a specific external sandbox system. We could decide to accept the risk and build the release packages.

And hey, we can start saying things like, "Please run a level 3 diagnostic."

Follow me on Twitter @jsjacobAtWork.

Sunday, August 29, 2010

Interview as audition


We finished a round of hiring a few weeks ago. As at the other small companies where I've worked, I participated in the interview process. I've found that with several interviewers, each one can develop an individual interview style, confident that the others will discover or delve into aspects of the candidate that his or her own interview may miss. When I interview I focus on relatively high-level architecture and design skills. I believe I've been more successful the more I've run the interview like an audition.

Lots of companies interview engineers and they do a fine job finding qualified candidates. If you think your interviewing skills are fine, then this article isn't for you. If you think you are mostly fine and just want two tips to improve, here they are.

  1. Use Behavioral Interviewing techniques. For example, instead of asking the closed-ended question "Do you work well in teams?", which gets a "Yes" from almost everybody, ask a question which requires the candidate to give examples: "Tell me about a time when your team worked really well." Asking the opposite question is also fruitful: "Tell me about a time when your team completely failed." Follow up if the candidate doesn't go into sufficient detail: "What made that team work? What made that team not work?" Specific examples can be checked with references if you have doubts about their accuracy.
  2. When asking a candidate to produce code, require the candidate to solve at least one of the problems while you watch. Ask questions about the design and ask the candidate to demonstrate the solution.

The rest of this article is about the second tip.

Long before I was a professional engineer I was an amateur actor and occasional director. I wasn't a very good actor or director, but I learned how to audition from both sides of the proscenium arch. When I auditioned actors I wanted to see basic technical skills and the ability to use those skills in a variety of situations. Including, of course, the role being auditioned.

Many of the engineers with whom I interview focus on technical knowledge questions. Usually the interviewer asks several questions on a technology that the candidate can get either right or wrong. These are fine things to ask in an interview, but I find the overall picture lacking if I never see the candidate put that knowledge into practice. This is where I bring my acting/directing experience to the interview process.

The software engineering equivalent of performing a scene is designing a solution to a problem. I like algorithmic problems, but you should pick something you understand well and can realistically apply to your company's product. Try to find problems that can be solved within 20 minutes but aren't trivial.

I tell the candidate that the problem is difficult and I'm not expecting a perfect answer right away. I ask the candidate to use the whiteboard and think aloud as much as possible because I'm evaluating the candidate's thinking and problem-solving skills and not just the finished design. Most people don't know how to think aloud so I also ask questions while the candidate works if the candidate forgets to talk.

Most candidates arrive at a first answer that is simple and wrong. Instead of telling the candidate that the answer is incorrect, I ask the candidate to demonstrate the design using input I specify. The candidate obliges and we walk through the algorithm. At some point the algorithm doesn't behave as desired and the candidate realizes there is a problem.

This is an important point to reach. I reiterate that I don't expect a perfect answer immediately and that mostly I want to see the candidate's thinking process. Sometimes the candidate understandably pauses quietly to think deeply. I ask the candidate what he or she learned while running through the design. I repeat my request that the candidate continue thinking aloud.

Whatever self-consciousness or interview nervousness remained usually evaporates. Most candidates drop all pretense and enter raw problem-solving mode. I get to see if they tweak a pretty-good design or throw it out completely. I get to see if they coolly evaluate options or wildly try everything. I get to interact with them under pressure. Many solve the problem on their own. Some need guidance. A few never find a solution. But in each case I get to see the candidate be an engineer rather than just talk about being one.

I don't think this interview style is for everyone. Admittedly it requires a lot more improvisation than just going down a list of pre-written questions. If you invest the time, I think you will find this an invaluable technique.

Image © iStockphoto.com / John-Francis Bourke

Follow me on Twitter @jsjacobAtWork.

Friday, August 20, 2010

Don't beat yourself up over past decisions

Recently we designed a change to a message queueing system. Our discussion surfaced several problems to overcome while modifying the existing system. At the time I felt somewhat disappointed. "Why was the system designed so seemingly poorly?" I asked myself.

Then I stopped pitying myself and started doing my job: figuring out how to make it work.

When the system was first built I had the opportunity to raise concerns about potential problems. But I didn't because, given what we knew then, the system was designed properly. In fact, this particular system had performed adequately for several years. I was being ridiculous for doubting its design.

Our design discussion continued. Yes, there were problems to resolve including a tricky server migration. However, we discovered the problems weren't so big after all. We figured out a reasonable transition from the old system to the new system.

Like many people, I easily forget that systems are built with limited resources: time, people, and information. We do what we can with the resources available. When requirements change, it means we have more information now than we had before. We shouldn't fault our former selves too harshly for failing to predict the future. We should acknowledge that we made design decisions with one set of information and are redesigning the system with a new set.

Image © iStockphoto.com / Stéphane Bidouze

Follow me on Twitter @jsjacobAtWork.

Friday, August 13, 2010

What's in my software development toolbox 2010

I think it might be helpful to describe my day-to-day working environment to give context to my writings. Like many people, I change my development process to suit my needs. However, drastic changes are few and far between, and my current setup has stayed largely the same for several years. I actually have two of these setups: one at work and one at home. This means I use this rig both for work (i.e., software development) and for play (photos, movies, recording music). Although work and play overlap in this configuration, I'll only describe how I use it for work.

At its core, my workstation consists of an iMac 24" and an external 24" monitor. I maintain four "Spaces" (virtual desktops). Of these four spaces I use three regularly.

In the first Space are my communication and to-do lists. I use a "Getting Things Done" process, so email and other requests go into OmniFocus, iCal, and Evernote fairly often. These automatically synchronize with my iPhone and iPad.

In the second Space is my coding: Eclipse (integrated development environment), MySQL (database), and several terminal windows. I run Safari to see the web application, the Trac (issue tracking) database, and API documentation.

In the third Space is production: Elasticfox (Amazon EC2 instances), production admin pages, and MySQL (database). I run release builds here.

I use the fourth Space (not pictured) when needed. Sometimes I display Hudson (continuous integration build server). Sometimes I use it for documentation. Sometimes the staging environment. Whatever doesn't naturally fit in the other Spaces.

I would be remiss to lead you to believe all development happens at my workstation. Software engineering is much more than coding. Designing software includes brainstorming with coworkers, thinking, and drawing on yellow pads and whiteboards (at least for me). While I spend a lot of time at my workstation, I couldn't do my job properly if it was the only place I spent my time.

Follow me on Twitter @jsjacobAtWork.

Wednesday, August 11, 2010

Work/personal separation

I have social networking accounts (Facebook, LinkedIn, MySpace, etc). But I think of those as personal accounts. Of course I occasionally post work-related things. However, I try not to abuse my personal relationships with professional posts.

So that's why I decided to make a clean break. Or rather, to create a specific professional area.

This is where I will talk about my work and shamelessly promote whichever company I happen to work for.

Share and enjoy.