When I was working in C++ and Java, I had this strange thought. I thought that much of what we were trying to do with unit testing was targeted at giving ourselves some of the benefits of a read-eval-print loop in a compiled language. The nice thing about testing is that we get rapid feedback. A REPL gives us that too, albeit without the ability to re-run the same checks over and over again.
I wonder, though, whether we can have it both ways. A while back I did the StringCalculator Kata in Haskell, and even though I wasn't writing tests in a file, I adopted a particular rhythm of working. I typed expressions at the REPL containing functions that didn't exist yet, then I wrote them and "arrowed up" in the history to reevaluate them. It seemed to me that if I dumped a transcript of that session, it wouldn't take much work to fashion it into a set of tests. The process could even be automated, although you would doubtless have to go back and edit the output: drop redundancies, name the test cases well, and so on.
The neat thing about a tool like this is that it wouldn't have to be perfect. And, perhaps, the user would be able to give hints at the REPL. Sort of like "break here when translating to tests."
Someone has to have done this. Anyone know? I know Python has doctest, but it doesn't seem to be quite the same.
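To be fair, doctest does capture part of the idea: a REPL transcript re-evaluated on every run. The difference is that you paste and curate the transcript by hand in a docstring rather than dumping it from a live session. A minimal sketch, using a hypothetical `add` function in the spirit of the StringCalculator kata:

```python
def add(numbers):
    """Sum a comma-separated string of numbers.

    The lines below look exactly like an interactive session;
    doctest re-runs them and compares the output on every test run.

    >>> add("")
    0
    >>> add("1")
    1
    >>> add("1,2,3")
    6
    """
    if not numbers:
        return 0
    return sum(int(n) for n in numbers.split(","))

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```

So the "transcript as test" half exists; what seems to be missing is the tool that harvests the transcript from your actual REPL history for you.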
Blog inspired by Mark Simpson: https://verdammelt.posterous.com/tdd-and-repl-analogies