In recent years, several conferences have started to include “Journal First Tracks” (JFTs), for example ICSE’19, FSE’18, ICSME’17. Generally, JFTs invite authors of papers that were published in journals and never presented at conferences. In JFTs, authors can present their papers, summarising their work for an audience that may not have read it and could not ask questions before. I see JFTs as the latest symptom of a problem in software-engineering research: a focus on fads and self-promotion rather than on advancing science.
First, let’s recall what science is: “The intellectual and practical activity encompassing the systematic study of the structure and behaviour of the physical and natural world through observation and experiment.” (Oxford). Science advances through research, defined as “The systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions.” (Oxford). Such systematic investigation requires sound methods, hard work, frequent discussions, changes of direction, and acknowledging that we can make mistakes. It also requires evaluation and input from independent third parties, often other researchers, through the publication of the research at different stages of its progress.
Consequently, in the past, the publication cycle in software-engineering research used to be straightforward and incremental: first, ideas and preliminary results would be discussed in labs or departments. Then, researchers would submit their work to workshops, where other researchers could critique it, providing invaluable input and creating opportunities for collaboration. Next, researchers would send their work to conferences, which would vet it further and provide more opportunities for discussion and improvement. Finally, researchers would submit their complete work, incorporating the feedback from their peers, to journals. Journals provide in-depth reviews and allow authors to reflect upon and improve their work through the reviewers’ comments.
Nowadays, the publication cycle in software-engineering research is upside-down! Workshops are disappearing. Conferences reject most of the submitted work. Authors prefer to submit their work directly to journals… And conferences implement JFTs. This observation raises the question: “Why do researchers prefer to submit to journals first?” The answer is that conferences are no longer fulfilling their mandate. They used to be places for discussion and sharing but have become impassable gateways to venues for self-promotion. ICSE has an average acceptance rate of 17%. Does such a low acceptance rate make any sense? Does it mean that 83% of the submitted papers are bad? Conference attendees also often complain about the lack of discussion. Does such a lack of discussion make any sense?
Consequently, more and more authors submit their work directly to journals so that they can (1) get constructive feedback, (2) engage in discussions with the reviewers, and (3) improve their work until publication. Some conference organisers realised the damage that low acceptance rates and the lack of discussion did to the community. However, rather than deeply revising their ways, they introduced JFTs, which exacerbate the problem. JFTs contribute to low acceptance rates by focusing on certain journals and certain authors. They also contribute to the lack of discussion by promoting works that, presumably, are already good enough. Hence, they perpetuate a system in which showing off (“Look at me presenting my complete, perfect journal paper”) is more important than research (“Help me understand this problem better by giving me your advice”).
Other colleagues have recognised the problem and attempted to reduce it. I found that the policy put in place by Martin Robillard and Alessandro Orso for ICSE 2017 was a good step in the right direction. I understand why Lionel Briand promotes submission to journals and agree that conferences should no longer be “king makers”. The community must seriously consider this problem and discuss solutions. It should not perpetuate a system that is so unfair to the hard work of many for the benefit of a few. It should not perpetuate a system that is not conducive to advances in software-engineering research and, rather, promotes individuals over science.
Researchers, funding agencies, and research organisations must play a role in re-establishing a saner order and promoting science. In particular, the numbers game decried by David Parnas in 2007 must stop. (1) Too many papers cannot be reviewed with the proper care and discussion required to advance science. (2) Readers cannot carefully study, discuss, and extend papers because many lack proper theories and discussions. (3) Researchers cannot sustain such publishing rates, which take a toll on their (mental) health and work–life balance. The NSERC Discovery Grant Evaluation Group works hard to evaluate grant proposals fairly and sanely and to fund deserving proposals according to their merit (and that of their researchers). However, other organisations must be careful not to play the numbers game and compare incomparable results, domains, and fields.
DISCLAIMER: I was a member of the NSERC 1507 Evaluation Group until 2016. I published at ICSE’10 but have not submitted since then. I sent two of my students’ journal papers to Journal First Tracks at ESEC/FSE’18 and ESEM’18. I was a reviewer for FSE’16. I have published at and reviewed for ICSME regularly.