ESCP Professor Gorgi Krlev and co-authors propose a new framework redefining how researchers review knowledge in the age of AI.
Literature reviews have long been a cornerstone of academic research. They summarise what is known, identify gaps, and help shape new lines of inquiry. Yet for many scholars, they remain one of the most time-consuming and least rewarding parts of research. A new study led by Professor Gorgi Krlev (ESCP Business School), in collaboration with Timothy R. Hannigan (University of Ottawa) and André Spicer (University of London), argues that it is time to rethink how we review knowledge and what role humans will continue to play as artificial intelligence takes on more of the work.
Published in the Academy of Management Annals, the paper analyses over 1,400 literature reviews published since the 1980s. The authors set out to understand what makes a high-quality review and how reviewing practices have evolved. Their findings reveal that there is no single standard model. Instead, literature reviews serve multiple purposes depending on the maturity of a field and the kinds of questions being asked.
Ten purposes, four directions
The team identified ten distinct purposes of review articles, which they grouped within a framework called the “Directional Space.” The framework helps scholars decide not only how to conduct a review, but also why it is needed and where it should take a field.
The framework highlights four broad “directions” a review can take:
- Ordering – organising and consolidating scattered evidence in emerging fields;
- Renovating – challenging assumptions to refresh mature or stagnant research areas;
- Expanding – connecting fragmented or parallel conversations across disciplines;
- Reshaping – redefining the boundaries or direction of a field altogether.
This perspective shifts attention from procedures to purpose, encouraging researchers to align their method with what a field truly needs.
The changing role of the human reviewer
Artificial intelligence adds a new dimension to this conversation. Large language models can already automate much of the descriptive work involved in reviewing: collecting, sorting and summarising large bodies of literature. Yet the authors stress that such tools cannot replace human interpretation.
“The shift changes the human role from verifying sources to making sense of them, deciding what matters and where a field should go next,” the study notes.
As AI handles routine synthesis, the value of human reviewers lies increasingly in their ability to judge meaning, context and direction.
Towards more reflexive and collaborative reviewing
Krlev and his co-authors propose that future reviewing will benefit from technologically infused and collective practices. These include shared knowledge platforms, visual “evidence maps,” and ongoing, distributed collaboration among researchers. Such approaches could make reviews faster, more transparent and more relevant across disciplines.
They also highlight that diversity in review purposes strengthens a field’s resilience. Unlike disciplines that rely heavily on standardised methods, management research benefits from having multiple ways of reviewing knowledge, allowing new perspectives to emerge even as others evolve or fall out of use.
Reclaiming the purpose of the review
For Gorgi Krlev, this evolution is less about replacing old methods and more about restoring the review’s original purpose.
“In the social sciences, a good review is not just a summary of what we know. It is a form of sensemaking, an interpretation of how knowledge fits together and where it might lead next.”
The literature review is evolving. In the age of AI, its future lies not in stocktaking but in sensemaking.