Academic research has long been rooted in established methodologies and tools that have stood the test of time. These methods, such as heuristic approaches, form the backbone of scholarly work and have underpinned countless breakthroughs and advancements. However, as we move deeper into an era characterized by technological revolutions and interdisciplinary approaches, the reliance on traditional methods is increasingly being questioned. The reluctance to deviate from tried-and-tested paths, whether out of comfort or institutional constraints, poses a significant barrier to embracing innovative research practices.
Resistance to change can be understood as an attitude or behaviour by which an individual obstructs the achievement of new goals or the introduction of novel ideas, methods, or devices [1]. The advent of cutting-edge technologies, notably AI and machine learning platforms, is rapidly transforming the landscape of research possibilities. Despite this, many in the academic world remain hesitant, if not outright resistant, to integrating these new tools into their research practices.
A pertinent example is the introduction of calculators in the 20th century. At that time, calculators were perceived as a threat to traditional methods of learning mathematics, which focused on manual computation and memorization of formulas. According to Banks [2], in 1975, some regions in the United States even banned calculators during standardized tests. Despite initial resistance, the growing ubiquity of calculators in classrooms led to a shift in educational policies by the mid-1980s.
This historical context of calculator integration in education parallels the current debates surrounding the adoption of AI technologies such as ChatGPT in academic settings. Much as calculators were initially met with scepticism, AI tools now face questions about their impact on traditional learning and research methodologies.
Rick Maurer proposes a model [3] that categorizes resistance to change into three distinct levels: "I don’t get it," "I don’t like it," and "I don’t like you." This framework suggests that resistance stems not only from a lack of understanding or disagreement with the change itself but also from personal aversions towards the individuals driving the change. In the context of AI and tools like ChatGPT, overcoming this third level of resistance, distrust of the people behind the change, is crucial. By publishing research in prestigious journals and conferences, AI engineers and researchers can build credibility and trust within the academic and professional communities. Sharing detailed findings, breakthroughs, and transparent methodologies showcases the technology's capabilities and demonstrates the dedication, expertise, and reliability of the individuals and teams involved, thereby addressing the personal biases or distrust encapsulated in the "I don’t like you" sentiment.
As we move forward, I think it is essential for the academic community to remain open to new methodologies and technologies, balancing the preservation of traditional rigour with the adoption of innovative practices. In doing so, academia can continue to be a leader in knowledge generation and intellectual advancement in an increasingly complex and technologically driven world. The integration of AI into academic research is not just a trend but a significant step towards a more expansive and efficient future in scholarly exploration.
[1] Chawla, A. and Kevin Kelloway, E., 2004. Predicting openness and commitment to change. Leadership & Organization Development Journal, 25(6), pp.485-498.
[2] Banks, S., 2011. A historical analysis of attitudes toward the use of calculators in junior high and high school math classrooms in the United States since 1975.
[3] Maurer, R., 2018. Build Support for Change Before You Need It. The Journal for Quality and Participation, 41(1), pp.14-16.