What should you do when you've spent several months of a master's thesis working on an idea that fails?

Step 1: Don't panic

I was in a similar situation halfway through my MSc. I was in a panic, sure that my academic career was in ruins. My supervisor calmed me down, reminded me that a negative result was still a result, and told me that for a master's degree, it was not strictly required that I make a scientific contribution or have a publication. In the worst case, I would present my negative results in my thesis, explain why this technique didn't work, and suggest what future researchers could do differently. (Once I was relaxed enough to think clearly, I came up with new things to try, and everything worked out grand.)

I suggest you discuss the "worst case scenario" with your supervisor; you'll probably find out it's not as bad as you think. Remember that this is research: positive results are not guaranteed.

Step 2: Think about why this technique isn't working.

I'm sure you've learned something about why your technique isn't working. That should give you some ideas for what to try next. If you're out of ideas, sit a friend down and explain everything to them. The friend doesn't need to know anything about machine learning; they're just a sounding board. The naive questions they ask may give you ideas. Maybe you need a week off to recharge your batteries.

Step 3: Try something new.

Take those new ideas you got in step 2, and apply them. But now that you're more experienced, think about how you could find out more quickly if the idea is feasible, so you can change tack again if needed.


Just a general answer, more to the overall issue than to your specific case: "failure" to get the expected results isn't necessarily failure in the sense of not producing a good thesis. Although in the specific case you mentioned (machine learning) there is often a desire to produce something usable, in many cases a thesis topic is motivated by prior research. A negative finding can still be significant if it adds to the overall body of knowledge in the area (for instance, by showing that predictions from earlier research are not confirmed by yours).

For my PhD, I spent over a year conducting a series of experiments to test a hypothesis derived from earlier research. I found no evidence in support of the hypothesis. Nonetheless, I wrote it up as a negative result and framed it as placing limits on the theoretical proposals that motivated the project (i.e., "people suggested things might work like this, but I checked and apparently it's not so"). My committee thought it was a useful contribution, and I got the PhD.

A lot depends on your field and your committee. It is easier to do what I described in a field where there is a lot of speculative theorizing relative to the amount of hard data. I can imagine it'd be a lot harder to do that in machine learning. Also, the bias against negative results (the so-called "file-drawer problem") can create pressure to produce a positive finding. In the broadest sense, though, if you had a good reason to go looking for something, not finding it can be as informative as finding it, and that's part of science.


Years ago Marguerite Lehr, a colleague of mine at Bryn Mawr College, told me of a conversation she'd had years before with Oscar Zariski, a brilliant algebraic geometer then at Johns Hopkins. She told him about a failed attempt to solve a particular problem. He said, "You must publish this." She asked why, since it had failed. He replied that it was a natural way to attack the problem and people should know that it wouldn't work.