Prototyping vs. Experimentation

3 Jun

We are in the process of planning for RAD Public Workshop v2.0.

Overall, we judged Public Workshop v1.0 to be a success. Thirty-six participants came and spent an energetic day; some told us they took away insights for their research, and most agreed this is a conversation worth continuing.

When we met a few days later to debrief, we discussed the elements of our curriculum and logistical plan that seemed to contribute to those successes as well as those that didn’t work as well. In true d.thinking style, the RAD team instantly slipped from identifying something that had not worked smoothly to brainstorming a multitude of other configurations that could have avoided the issue we’d just identified. (I suspect our jumping between analytical, research-style identification of possible problems and immediately generating possible ways to solve them frustrated the wonderful woman who had observed the day for us and was providing feedback on her observations.)

Now our task is to choose among the multitude of ideas we generated to design v2.0. We’re relying on instinct honed through vBeta and v1.0 and on our respective prior experiences running similar workshops or courses. For instance, before v1.0 we were thinking of our curriculum mostly as teaching d.methods, but a key comment by a participant made us realize that the workshop curriculum – like many d.school courses – actually uses d.methods like prototyping to teach larger d.mindsets like a bias towards action. Plus, there are certain structural constraints, like our crazy intersecting travel schedules, and personal preferences that tip the balance one way or the other when two options seem, in theory, rather equal.

We are clearly prototyping, not experimenting, though the two activities share many similarities. Yes, when we run workshop v2.0 in September, we will be testing an unknown configuration of activities and will be vitally interested in observing what results. We are paying similar attention to the starting conditions as we would if this were an experiment; we know that the observed results in v1.0 came from curriculum 1.0 + advertising strategy 1.0 + registration form 1.0, etc. And we are recording the workshop results so we don’t forget what happened, should we eventually get to v8.0.

But what we are not doing – which we would certainly do if we were experimenting – is making incremental changes. To truly understand the effect of advertising strategy 1.0, we ideally would run workshop v1.0 again, changing only the advertising strategy to advertising strategy 2.0. Classic experimental methods rely on isolating causal variables to support causal explanations. The importance of varying only a single factor at a time goes back to the nature of the logical argument that classic experimentation relies on, expressed in its quintessential form by John Stuart Mill’s methods for causal identification. (There are now sophisticated statistical and econometric techniques that overcome some of the limitations when isolating a single variable is not possible. Still, a classic experiment – one that varies a single factor, keeps the others constant, and shows that the effect is present when the factor is present and absent when it is absent – is often considered the “gold standard” for establishing that the factor caused the effect.)
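To make that one-factor-at-a-time logic concrete, here is a minimal hypothetical sketch in Python. The outcome function, factor names, and numbers are invented for illustration; they are not measurements from our workshops.

```python
# Hypothetical sketch of why classic experimentation varies one factor at a
# time. The "ground truth" outcome function is invented for illustration.

def workshop_outcome(curriculum: str, advertising: str) -> float:
    """Toy outcome we normally cannot observe directly."""
    score = 50.0
    if curriculum == "v2":
        score += 10.0   # (invented) curriculum change helps
    if advertising == "v2":
        score -= 5.0    # (invented) advertising change hurts
    return score

baseline = workshop_outcome("v1", "v1")

# Controlled comparison: only advertising changes, so the difference
# can be attributed to advertising alone.
ads_only = workshop_outcome("v1", "v2")
print("effect of advertising:", ads_only - baseline)   # -5.0

# Prototyping-style change: both factors change at once, so the observed
# difference mixes a helpful and a harmful change.
both = workshop_outcome("v2", "v2")
print("combined change:", both - baseline)              # +5.0
```

In the controlled comparison, the −5.0 difference can be pinned on advertising alone; in the everything-at-once change, the +5.0 difference hides the fact that one change helped and the other hurt.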

So why are we not making incremental changes? Three reasons:

1) Logistical constraints – We simply don’t have the time or resources for as many iterations as we have ideas for elements we want to test.

2) Multiple causes – Mill’s methods and single-factor variation do not lead to conclusions about effects with multiple causes or interaction effects (the effect of factor A turning out differently depending on whether factor B is present; see the small sketch below for an illustration). Like many effects in education that deal with complex human group behavior, the effect we are interested in (an energetic group experience in which people learn d.mindsets) is probably the result of multiple causes and of interactions between different factors.

Most importantly, however,

3) Different purposes – We are not interested in causation so much as finding a recipe that works.

Ultimately, we are not interested in establishing how each of the different elements in our workshop plan individually contributes to the overall success of v2.0. What we care about in the end is finding a successful combination of elements that collectively produce a stellar workshop. And since we are interested in finding that magic recipe as quickly as possible, it makes sense in our case to vary whatever we think needs varying and try it out. It is this concern with getting to a solution that makes what we are doing right now, as we create the workshop, prototyping and d.thinking.
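To make the interaction-effect point in reason 2 above concrete, here is another small hypothetical sketch; again, the factor names and numbers are invented for illustration, not measured from the workshops.

```python
# Hypothetical sketch of an interaction effect: the impact of one workshop
# element depends on another element. Factor names and numbers are invented.

def energy(warmup: bool, long_prototype: bool) -> float:
    base = 5.0
    if long_prototype:
        base += 3.0 if warmup else -2.0   # same change, opposite effect
    return base

# Varying only 'long_prototype' gives a different answer depending on the
# level at which 'warmup' happened to be held constant:
print(energy(True, True) - energy(True, False))    #  3.0
print(energy(False, True) - energy(False, False))  # -2.0
```

The “same” change to the long prototyping exercise produces opposite effects depending on the warm-up, which is exactly what varying one factor at a time, with the others fixed at arbitrary levels, can miss.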

Two other thoughts:

1) Education has an educational design research sub-field that does similar work getting to a solution that works, but pays more attention to the causes behind the success in order to make the findings generalizable. (See the Theme Issue: The Role of Design in Educational Research, Educational Researcher, January 2003, 32: 3–4, doi:10.3102/0013189X032001003.)

2) Later, of course, when we do have a combination that seems to work, there might be an opportunity to return and study what we’re doing with a more analytical eye, moving back toward the r.thinking side of the spectrum to explain the causes of that success. This is what my dissertation is doing with a software tool.
