The Dopefly Tech Blog


Do you know your code coverage percent?

posted under category: Browsers on March 19, 2008 at 1:00 am by MrNate

Here's one that hits me excessively hard. What percentage of my code is covered with unit tests?

The reason I ask is because I know my code coverage percentage is abysmally low, easily in the single digits. It's hard to talk about because I'm a huge offender. However, even after looking in the mirror and viewing the plank in my eye, I know that this is typical in the world of ColdFusion developers. Most of you have never written a formal, automated test case. Ever. I know you. You don't. Admit it.

If you do it, kudos. Get yourself a cookie.

Now, code coverage. It's the measurement of how much of your application is exercised by automated tests. I would argue that 100% coverage is probably impossible once you factor in environmental variables such as sessions, server operating systems, client browser point versions, and the full range of input parameters your application can be given.

Let's take this simple function:

function add(p1, p2) { return p1+p2; }

How many test cases do you need to write to have this code fully covered? Unfortunately, probably a lot. Starting with 0+0, 1+1, 0+1, 1+0, sure, that's easy. What about approaching the upper limit of a Java integer? Negative numbers? Doubles and floats? Will it unbox automatically? What about those Pentium and Java bugs from years back that returned wrong floating-point results on simple arithmetic? Maybe that's absurd, but at least we're not using this code anywhere it could really hurt...

"IN HAZARDOUS ENVIRONMENTS REQUIRING FAIL-SAFE CONTROLS, INCLUDING [...] THE DESIGN, CONSTRUCTION, MAINTENANCE OR OPERATION OF NUCLEAR FACILITIES, AIRCRAFT NAVIGATION OR COMMUNICATION SYSTEMS, AIR TRAFFIC CONTROL, AND LIFE SUPPORT OR WEAPONS SYSTEMS"
(what not to do, from the ColdFusion End User License Agreement, section 6.c)

So I think you can get 95% in just a few tests: zeros; small, medium, and large numbers; non-numbers; and assertions that errors are thrown when they should be.
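
For concreteness, here is roughly what those few tests might look like as an MXUnit test case (MXUnit comes up in the comments below). This is just a sketch: it assumes the add() function above is available to the test, and the exact boundary values are illustrative.

<!--- AddTest.cfc: a sketch of the "few tests" above, assuming the MXUnit framework --->
<cfcomponent extends="mxunit.framework.TestCase">

	<cffunction name="testZerosAndOnes">
		<cfscript>
			assertEquals(0, add(0, 0));
			assertEquals(2, add(1, 1));
			assertEquals(1, add(0, 1));
			assertEquals(1, add(1, 0));
		</cfscript>
	</cffunction>

	<cffunction name="testNegativesAndLargeNumbers">
		<cfscript>
			assertEquals(-2, add(-1, -1));
			// near the ceiling of a Java int
			assertEquals(2147483647, add(2147483646, 1));
		</cfscript>
	</cffunction>

	<cffunction name="testNonNumbersThrow">
		<cfscript>
			try {
				add("fred", 1);
				fail("add() should throw on a non-numeric argument");
			}
			catch (Any e) {
				// expected: CFML cannot cast "fred" to a number
			}
		</cfscript>
	</cffunction>

</cfcomponent>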


How do you really define completely covered code? Is 95% good enough? Probably, as long as your unit tests live and breathe alongside the application. One of my favorite quotes in this arena is "Make mistakes once." That is, once you find a bug in your software, add a test case that recreates it, so that it never happens again.
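
As a hedged sketch of that habit: the bug, component, and function names below are invented for illustration. The point is that the failing input gets pinned down in a test the moment the bug is found.

<!--- Sketch of a "make mistakes once" regression test; getCartTotal() and the bug are hypothetical --->
<cfcomponent extends="mxunit.framework.TestCase">

	<cffunction name="testEmptyCartTotalsToZero">
		<cfscript>
			// Bug found in production: getCartTotal() threw on an empty cart.
			// This test recreates the failing input so the bug can never quietly return.
			var cart = arrayNew(1);
			assertEquals(0, getCartTotal(cart));
		</cfscript>
	</cffunction>

</cfcomponent>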

I know there are people, even in the CF world, who actually maintain good code coverage, and maybe a few who track a coverage percentage as if 100% were the goal. Where do you stand?

On Mar 19, 2008 at 1:00 AM marc esher (marc.esher who really likes gmail.com) said:
What's my %: Impossible to measure in CF. There are no coverage tools available.

What's my goal: 85%, maybe 80. and that's in a good week. like, a super duper in the zone not bugged by 10 people every 20 minutes with "marc can you help me with...." week. even when I work with good coverage tools in java (clover kicks butt), I find it damn near impossible to get above 80-85% without spending what seems to be an inordinate amount of time squeezing out those last few percentage points. mostly because the rest is in stuff that's just too damn hard to test (poor design, really tough external dependencies, etc).

This is a great question. The guy who started MXUnit, Bill Shelton, is itching to get into code coverage (he's a true compsci geek, unlike myself). But I'm not sure it's possible without some help from adobe, specifically, getting a bridge between the cfml and the byte code. clearly they can do it, as evidenced by eclipse debugging in cf8. at least, i think they must be bridging b/w byte code and cfml... i could be wrong.

How do you measure your coverage?

I remember when I first started using clover, I thought to myself "I'm at 60% or so". Ha. I was at like 30 or 40. So many paths untouched. Learning how to unit test is an art, man. it takes time and practice and a real love for doing it.

the thing i love most, though, is how much better it makes my code. it's like trading in an 82 ford festiva for a maserati some days.

On Mar 20, 2008 at 1:00 AM Remy Beher (remy.becher who spends every waking moment visiting 192.com) said:
For Java development we use tools like Clover (http://www.atlassian.com/software/clover/) or EclEmma (http://www.eclemma.org/) to tell us the (in the beginning inconvenient) truth about test coverage. I know it's no substitute for actually writing tests, but it does raise awareness with developers and makes it easier to fully cover your code base.

On Mar 20, 2008 at 1:00 AM Peter Bell (peter whose email lies with pbell.com) said:
The problem is that code coverage just isn't that useful a metric. Coverage tools test percentage of lines exercised. The problems with code aren't usually with un-exercised lines (although I agree that can be a problem), but with boundary conditions that break exercised code. It's when you run the same line of code which expects a number from 1-10 and pass in 5, 1, 10, 11, 0, -1, "fred", 2.76, and 9999999999999999999999999999 that you get a better sense of whether your code is working.
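
[A quick sketch of what Peter describes, again in MXUnit style: hammering the same code path with boundary values. validateRating() is a hypothetical function that accepts integers from 1 to 10.]

<!--- Sketch: exercising one code path with boundary values; validateRating() is hypothetical --->
<cfcomponent extends="mxunit.framework.TestCase">

	<cffunction name="testRatingBoundaries">
		<cfscript>
			var i = 0;
			// values that should pass: the edges and the middle of the 1-10 range
			var good = listToArray("1,5,10");
			// values that should fail: just outside the range, or not an integer at all
			var bad = listToArray("0,11,-1,fred,2.76");
			for (i = 1; i lte arrayLen(good); i = i + 1) {
				assertTrue(validateRating(good[i]), "expected #good[i]# to pass");
			}
			for (i = 1; i lte arrayLen(bad); i = i + 1) {
				assertFalse(validateRating(bad[i]), "expected #bad[i]# to fail");
			}
		</cfscript>
	</cffunction>

</cfcomponent>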

It's difficult for me as I generate far more code than I write, but I'm trying to get into the habit of TDD. With that approach it really becomes part of the design process and you find that the tests actually help you to design.

Ideas like Behavior Driven Development (check out Dan North's stuff on it or look at rspec) are very cool as they allow you to have executable descriptions of functional scenarios to test your design - that's something I'm playing with integrating into my process. It's also interesting to look at Fit (and FitLibrary and Fitnesse) and Concordion to allow end users to actually write various acceptance test cases themselves (although the range of testing DSL grammars they support is somewhat limited by the input/output format they use).

Testing is definitely a huge issue. I was chatting with a bunch of people at the BCS SPA conference this week and the consensus seemed to be you'd better write tests first - otherwise you'll probably never write them at all.

There was also a very interesting session on "closing the knowing/doing gap" which had quite a bit to say about how to go from knowing about something to getting into the habit of doing it consistently.
