How to tune algorithm performances in computer science papers

It would be dishonest to do this without mentioning that the algorithm was tuned differently for each set of results. You ought to specify what the tuning changed and how it affects the algorithm's output.

You should also list accuracy results for the fast tuning and speed results for the accurate tuning. (You probably also want some numbers for a middle-of-the-road tuning too.) Not listing the "bad" results isn't dishonest, but it's bad science. If you didn't include these numbers, I'd expect your reviewers to bring it up and ask for them.


To paraphrase your question, "Is it honest to suggest that my algorithm is both fast and accurate when, in fact, it can only be fast and not-so-accurate or accurate and not-so-fast?"

NO!!!

Of course it isn't. Seriously, why do you even need to ask?


I am just a Master's student, so I do not know much about the dynamics of “the game”. Therefore I can only offer a spectator's opinion.

One of my supervisors likes to have brutally honest plots in his papers. His work focuses on the scaling of parallel algorithms. For starters, he chooses strong scaling instead of weak scaling. The former means taking a fixed problem size and running it on more and more processors $P$. Ideally, one would obtain a $1/P$ drop in time. Plotting time versus processor count on a log-log plot, together with the perfect $1/P$ curve, you quickly see where the scaling goes bad.

Weak scaling is scaling of the problem size with the resources. Then the time needed should stay constant. For problems which become hard to parallelize at some fine level, you will never see anything interesting in weak scaling. With strong scaling you can go into the extremes like “one pixel per core” or “one atom per thread”.
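The strong-scaling analysis described above can be sketched in a few lines: compare measured times against the ideal $1/P$ curve and report parallel efficiency $E(P) = T(1)/(P \cdot T(P))$, which is 1.0 for perfect scaling. The timing numbers below are invented purely for illustration.

```python
# Minimal sketch of a strong-scaling analysis for a fixed problem size.
# All timing data here is hypothetical, for illustration only.

def parallel_efficiency(t1, times):
    """Efficiency E(P) = T(1) / (P * T(P)); 1.0 means perfect 1/P scaling."""
    return {p: t1 / (p * t) for p, t in times.items()}

# Hypothetical wall-clock times (seconds) per processor count P.
times = {1: 100.0, 2: 52.0, 4: 28.0, 8: 16.0, 16: 11.0}

eff = parallel_efficiency(times[1], times)
for p in sorted(times):
    ideal = times[1] / p  # the perfect 1/P reference curve
    print(f"P={p:3d}  T={times[p]:6.1f}s  ideal={ideal:6.1f}s  efficiency={eff[p]:.2f}")
```

On a log-log plot the ideal curve is a straight line, so the point where the measured times peel away from it — the "interesting part that does not work yet" — is immediately visible.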

He said that the interesting parts (in science) are those that do not work yet. He surely can make up a plot that makes the algorithm look great. But that is not what he is interested in. He wants to know how far it can be pushed.

I really admire this brutal honesty. If one has results which are only so-so, then this method will clearly show that they are not that great. On the other hand, if you take away all the attack surface yourself, nobody can rip you apart later for hiding anything.

Therefore I would make plots which show how bad the accuracy gets when you optimize for speed. I'd include an honest accuracy vs. speed (or vice versa) plot. Then one can see whether there is a sweet spot in the middle and how good it actually is.

If your algorithm goes to the very extremes but has a nice middle ground, it is worth mentioning, I guess. And if the extremes are only a few percent slower or less accurate, that also is a result.
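One way to make that accuracy-vs-speed plot honest is to mark which tunings are actually Pareto-optimal, i.e. not beaten on both axes by some other tuning. A minimal sketch, with invented (runtime, error) numbers and made-up tuning labels:

```python
# Minimal sketch: find the Pareto-optimal tunings in an accuracy-vs-speed
# trade-off. The (runtime, error) pairs below are invented for illustration.

def pareto_front(points):
    """Keep tunings not dominated by another point with lower-or-equal
    runtime AND lower-or-equal error (excluding exact duplicates)."""
    front = []
    for name, runtime, error in points:
        dominated = any(
            r <= runtime and e <= error and (r, e) != (runtime, error)
            for _, r, e in points
        )
        if not dominated:
            front.append((name, runtime, error))
    return front

# Hypothetical tunings: (label, runtime in seconds, error metric).
tunings = [
    ("fast",     1.0, 0.30),
    ("balanced", 2.5, 0.08),
    ("accurate", 9.0, 0.01),
    ("sloppy",   5.0, 0.20),  # dominated by "balanced": slower AND less accurate
]

for name, r, e in pareto_front(tunings):
    print(f"{name}: {r}s, error {e}")
```

A dominated tuning like "sloppy" would silently disappear from a cherry-picked plot; showing the full set of measured points alongside the frontier is exactly the kind of brutal honesty described above.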