How to deal with an adviser who wants to force you to get his desired results?

The meaning of unexpected results

It is important to be skeptical of your computational approach. At the same time, computational approaches are (almost) completely worthless if we simply ignore them whenever the results are unexpected, unless you already have overwhelming evidence that the result is not only unexpected but simply wrong. One exception is generative modeling, where a parsimonious model is suggestive of an underlying mechanism; but that is not what your model does: you are trying to make a prediction for an unknown case (extrapolation).

The art lies in determining whether your initial model of the world (i.e., your expectation) is wrong or whether your computational model is wrong.

In a long discussion in chat, I think we came to the conclusion that your specific case may be an issue of extrapolation to a condition for which you do not have truly comparable training data.
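
To make that concrete, here is a minimal sketch of one way to flag such extrapolation, assuming your inputs are rows of a tabular NumPy array; the names `X_train`, `X_new`, and the feature list are placeholders:

```python
import numpy as np

def extrapolation_report(X_train, X_new, feature_names):
    """Flag features of new inputs that fall outside the training range.

    A point outside the observed per-feature range is a crude but
    useful warning that the model is extrapolating rather than
    interpolating.
    """
    lo = X_train.min(axis=0)
    hi = X_train.max(axis=0)
    below = X_new < lo   # boolean matrix: sample i, feature j below training min
    above = X_new > hi   # boolean matrix: sample i, feature j above training max
    for j, name in enumerate(feature_names):
        n_out = int(below[:, j].sum() + above[:, j].sum())
        if n_out:
            print(f"{name}: {n_out} new sample(s) outside "
                  f"training range [{lo[j]:.3g}, {hi[j]:.3g}]")

# Hypothetical usage:
# extrapolation_report(X_train, X_new, ["age", "dose", "biomarker"])
```

Note that per-feature ranges are a crude screen: a point can sit inside every marginal range and still be far from the training cloud jointly, so treat a clean report as necessary, not sufficient.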

How to stop worrying and learn to love the model

If you want to convince your advisor, colleagues, peer reviewers, or yourself that your model should be trusted, your next steps are to test the conditions that lead to your result.

Do all the appropriate tests for model convergence on the original training data. Check for input parameters that fall outside the range of the training set. Use graphical representations of your model to show how inputs map to outputs. Remove or rescale variables to test the sensitivity of your model to those changes. And additionally, as your advisor suggests, figure out what it takes to make your model fit the expected result. All of these approaches will help you find whether something is wrong in the model, or support you if something is wrong in the prior expectations.
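
As an illustration of the sensitivity step, here is a minimal permutation-style sketch, assuming a fitted model with a scikit-learn-style `.predict` method and a numeric (regression-style) output; `model` and `X` are placeholders:

```python
import numpy as np

def sensitivity_by_permutation(model, X, n_repeats=10, seed=0):
    """Measure how much each input variable drives the predictions.

    Each column is shuffled in turn; the mean absolute change in the
    model's output is a rough per-variable sensitivity score.
    """
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break this column's link to the output
            deltas.append(np.abs(model.predict(X_perm) - baseline).mean())
        scores[j] = np.mean(deltas)
    return scores
```

A variable whose permutation barely changes the predictions cannot be what drives the unexpected result; one whose permutation changes them a lot is where to focus your checks.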


Any computational model that tries to capture a disease scenario has flaws, since all models reduce an extremely complex problem to a simpler one, as Buffy pointed out in their answer.

Furthermore, your question makes me think that you are working in or with a computational/bioinformatics group. If the results you present are counter-intuitive, I must side with your advisor: a study that presents counter-intuitive results will not be well received. Any counter-intuitive result derived from a computational model will need to undergo rigorous hypothesis testing via experimental methods before it is well accepted by the community.

If you still want to present such findings, you can:

  1. Avoid any mention of causal links.

  2. Present the results as a secondary finding while comparing your model to other such models described in the literature.

  3. Break the larger finding into smaller parts that may be well received on their own, even if not together, and present them independently.

Coming to the part about measuring "an important parameter for clinicians that people's lives will depend on":

Investigate what applying your model to such data would actually involve, as well as its realistic outcome and relevance for clinicians.

Results from single academic studies are rarely used as the basis for larger clinical applications. Any academic finding, however grand it may be, will undergo controlled analysis across multiple rounds of replication studies before it is presented as part of a larger landmark review article. Only results presented in such a context may end up reaching the desk of a clinician, and even then, clinicians will think twice before applying them to their patients.

Although it is great to think about the ethical context of studies in basic research, I would strongly advise you to consider your own place in the grander scheme of academic research before assigning too much weight to such concerns.