
I've written a classification algorithm that does a pretty good job at classifying some datasets. However, when I compared my algorithm to other classification methods, their results exceeded mine by a little. For instance, when classifying a dataset from a repository, my algorithm gets 95% correct while another algorithm usually gets 99%.
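For concreteness, here is a minimal sketch of the kind of head-to-head comparison I mean. It uses scikit-learn's bundled breast-cancer data and two off-the-shelf classifiers as stand-ins (my actual algorithm and dataset are different), and checks whether the per-fold accuracy gap holds up under a paired test:

```python
# Stand-in comparison: two off-the-shelf classifiers on a bundled dataset.
# A paired test on per-fold accuracies indicates whether a gap like
# 95% vs. 99% is consistent across folds or within the noise.
from scipy.stats import ttest_rel
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

mine = DecisionTreeClassifier(random_state=0)    # stand-in for the novel algorithm
theirs = RandomForestClassifier(random_state=0)  # stand-in for the stronger baseline

acc_mine = cross_val_score(mine, X, y, cv=cv)
acc_theirs = cross_val_score(theirs, X, y, cv=cv)

print(f"mine:   {acc_mine.mean():.3f} +/- {acc_mine.std():.3f}")
print(f"theirs: {acc_theirs.mean():.3f} +/- {acc_theirs.std():.3f}")

# Paired t-test over the same folds; a small p-value suggests the gap is real.
t, p = ttest_rel(acc_mine, acc_theirs)
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")
```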

Should I still try to publish my results, even though 1) my algorithm is a little slower, and 2) its results are not as good as the others'?

I'm a little torn. I'm excited because the paper and its results are a contribution to the classification field: the algorithm is novel. I'm also of the stance that you can't beat EVERY algorithm. If we only published algorithms that (loosely) beat every other algorithm, then either A) we'd never have new innovations, B) eventually every dataset would be classified 100% correctly every time, or C) every algorithm would eventually classify a dataset instantaneously.

I hope that my algorithm will continue to grow and that others will pick it up and extend it. I hope that one day, with tweaks, my algorithm can reach 99% too.

I'm afraid of being rejected by the journal again. Yes, my first submission was rejected, and one of the reasons was that my dataset was small. However, when the dataset was small, I was beating the other algorithms; as the dataset has grown, the other algorithms have started beating me. I'd rather not be rejected again.

1 Answer

If you want to get technical: in general, no learning algorithm performs better than any other when performance is averaged over all possible problems.
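This is, roughly, Wolpert's "no free lunch" result: averaged uniformly over all possible target functions, any two learning algorithms have the same expected off-training-set error. A loose statement, with $f$ the target function, $m$ the training-set size, and $a_1$, $a_2$ any two algorithms:

```latex
% Informal no-free-lunch statement: summed over all target functions f,
% the expected off-training-set error is identical for any two learning
% algorithms a_1 and a_2 given m training examples.
\sum_{f} E\left[\mathrm{error} \mid f, m, a_1\right]
  = \sum_{f} E\left[\mathrm{error} \mid f, m, a_2\right]
```

Any advantage on one family of problems is paid for on another, so "beats everything in general" is not the bar your paper needs to clear.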

The question, then, is what can be learned from your algorithm. In your question you speak of it as a fond intellectual child that you wish to grow and nurture, and you speak of your personal concerns about acceptance and rejection. Here is the thing, though: none of that matters for a publication. What matters is this: what new knowledge or capability does your work on the algorithm bring into the world, and how can that be evaluated objectively?

Here are some possibilities that I can see:

  • Your algorithm may perform better on an interesting and useful class of problems, and thus be of practical interest.
  • Your algorithm may perform worse, but have some other desirable property, such as executing very quickly or using very little memory. In that case the accuracy comparison just needs to show that it is sufficient, and you can show much better performance with regard to those other properties (see the sketch after this list).
  • Your algorithm may perform worse, but do so in a way that is enlightening, e.g., taking a more human-like approach, or showing how something can be accomplished by a very unusual and unexpected route. In this case, the performance comparison is simply showing that your algorithm does not perform too badly to be interesting, and the narrative should focus on the path taken to achieve your results and why that is interesting.
  • Your algorithm may only have taught you personally some interesting things about classification and scientific research, in which case you should mourn the passing of a fine research idea and move on with your life.
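To make the second bullet concrete, here is a rough sketch of how such a trade-off might be reported, timing a stand-in "fast" model against a stronger but slower one (illustrative models only, not your algorithm):

```python
# Hypothetical speed/accuracy trade-off report: a cheap model vs. a
# stronger but slower one. Reporting both axes supports a claim of
# "sufficient accuracy, much lower cost".
import time

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [
    ("fast baseline", LogisticRegression(max_iter=5000)),
    ("slow reference", GradientBoostingClassifier(random_state=0)),
]:
    start = time.perf_counter()
    model.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - start
    acc = model.score(X_te, y_te)
    print(f"{name}: accuracy = {acc:.3f}, fit time = {elapsed:.3f}s")
```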

Only you and those who know your work well will be able to tell which category it truly fits into.