Lessons Learned from using LLMs for Microservices Migration

The migration from monolithic systems to Microservices Architecture (MSA) is becoming increasingly essential in modern software development, driven by the need for scalability and agile deployment. Traditional monolithic systems consolidate all functional components into a single software unit, which limits scalability and hinders rapid development. In contrast, MSA proposes a modular approach, dividing systems into smaller, autonomous units, each responsible for distinct business functionality. This facilitates independent deployment and scaling, aligning with dynamic business needs [1].

The Challenge of Migration

Migrating to MSA, however, is often costly, time-consuming, and complex [2]. It involves multiple critical steps such as planning, identifying microservices, restructuring the code base, integrating microservices patterns, and adapting deployment processes. These steps demand extensive expert intervention, which significantly increases both the cost and duration of migration projects.

Previous work on migration has proposed effective approaches for identifying appropriate service boundaries within monolithic architectures. However, these approaches often fall short in more complex areas such as generating APIs, handling interactions between services, and establishing robust communication protocols. These aspects are crucial to the functionality and efficiency of a microservices architecture, but they require a deeper level of automation than earlier technologies could provide.

The Role of Large Language Models (LLMs)

The recent advancements in Large Language Models (LLMs) have sparked considerable interest in their potential to streamline complex software engineering tasks, particularly code generation [3]. This development offers promising prospects for automating intricate aspects of migrating from monolithic architectures to MSA. We hypothesize that LLMs could significantly diminish the need for extensive expert intervention, which traditionally contributes to the high costs and extended timelines of migration projects. To this end, we have defined precise steps for the migration process and, for each step, systematically developed the appropriate prompts. This approach ensures that each phase of the migration is addressed with context-specific guidance from the LLMs, easing the transformation of legacy systems into scalable microservices architectures. We applied this methodology to the PetClinic case study, demonstrating the practicality and impact of our approach in a real-world scenario.
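To illustrate, the sketch below shows what a step-specific prompt for the service-identification phase might look like, expressed as a Java text block so it can be assembled and reused programmatically. This is a minimal, hypothetical example: the wording, the constraints, and the selection of PetClinic domain entities (Owner, Pet, Visit, Vet) are illustrative assumptions rather than the exact prompts developed in the case study.

    // Hypothetical prompt for the "identify microservices" step (requires Java 15+
    // for text blocks). The phrasing is illustrative; in practice each step's
    // prompt is refined iteratively against the model's outputs.
    public class MigrationPrompts {

        static final String IDENTIFY_SERVICES = """
            You are assisting a migration of the PetClinic monolith to microservices.
            Step: identify candidate microservices.
            Input: the domain entities Owner, Pet, Visit and Vet, together with
            their controllers and repositories.
            Task: propose service boundaries, one distinct business capability per
            service, and list the classes that belong to each service.
            Constraints: minimise shared code; flag any entity that would need to
            be duplicated or exposed through an API.
            """;
    }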

Lessons and Recommendations

Our hands-on experience with applying LLMs in the migration process has provided valuable insights, leading to the extraction of several crucial lessons and actionable recommendations.

  • Code Generation Over Code Refactoring: LLMs tend to be more effective at generating new code than at refactoring existing monolithic code, because new code allows modern coding practices to be applied without the constraints of legacy systems. For tasks involving code refactoring, it is beneficial to employ a structured approach that incorporates either rules-based or chain-of-thought prompting to enhance the output.

  • Guidance is Essential: LLMs require detailed guidance to provide accurate and relevant code generation. They may default to generating more generic or simpler outputs due to "model laziness." Creating comprehensive guidance that specifies coding standards, architectural patterns, and best practices to follow, including rules around code structure, naming conventions, and error-handling practices, is crucial.

  • Systematic Prompt Crafting is Crucial: Small changes in prompts can dramatically alter the outputs generated by LLMs, highlighting the precision required in prompt crafting. Using an iterative approach where initial outputs from the model are evaluated and used to refine the prompts continuously helps align LLM outputs with specific project goals.

  • Choosing the Input: Since LLMs have an input size limitation, it is crucial to correctly identify and prioritize the pertinent information and code parts. We recommend using a phased approach, focusing on key classes and methods of each microservice, their primary responsibilities, and the dependencies between them; a condensed class summary of this kind is sketched after this list.

  • Necessity of Imposing Pattern Generation: Directing LLMs to generate specific architectural patterns is crucial for the migration and for maintaining consistency across the architecture. It is essential to clearly define the architectural patterns suited to the system's needs, such as API Gateways, Circuit Breakers, and Service Registries, and to include the desired library, language, and any specific requirements in the prompt; the API Gateway sketch after this list illustrates the kind of output this yields.
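To make the phased approach to input selection concrete, the sketch below shows one way a monolith class can be condensed to its responsibilities, endpoints, and dependencies before being placed in a prompt. It is an assumed condensation of PetClinic's OwnerController; the exact method names and request mappings vary across PetClinic versions.

    // Illustrative only: a prompt-sized summary of one monolith class. Method
    // bodies are omitted; only responsibilities, endpoints and dependencies
    // remain. Names are abbreviated from the public PetClinic sources and may
    // not match a given version exactly.
    final class OwnerControllerSummary {
        // Responsibility: search, creation and display of pet owners.
        // Depends on: OwnerRepository (data access), Owner and Pet (domain entities).
        //
        //   GET  /owners/find       -> initFindForm()
        //   GET  /owners            -> processFindForm(...)
        //   GET  /owners/new        -> initCreationForm()
        //   POST /owners/new        -> processCreationForm(...)
        //   GET  /owners/{ownerId}  -> showOwner(ownerId)
        private OwnerControllerSummary() { }
    }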
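In the same spirit, the following sketch shows the kind of API Gateway code that can be requested by naming the pattern, the library (Spring Cloud Gateway here), and the target services explicitly in the prompt. The service names owners-service, vets-service, and visits-service are an assumed decomposition of PetClinic, not necessarily the one produced in the case study.

    // A minimal API Gateway sketch using Spring Cloud Gateway, assuming the
    // monolith was split into owners, vets and visits services registered in a
    // service registry ("lb://" resolves instances via client-side load balancing).
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.gateway.route.RouteLocator;
    import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
    import org.springframework.context.annotation.Bean;

    @SpringBootApplication
    public class ApiGatewayApplication {

        public static void main(String[] args) {
            SpringApplication.run(ApiGatewayApplication.class, args);
        }

        // One route per microservice, keyed on the URL prefixes of the old monolith.
        @Bean
        public RouteLocator petClinicRoutes(RouteLocatorBuilder builder) {
            return builder.routes()
                    .route("owners", r -> r.path("/owners/**").uri("lb://owners-service"))
                    .route("vets",   r -> r.path("/vets/**").uri("lb://vets-service"))
                    .route("visits", r -> r.path("/visits/**").uri("lb://visits-service"))
                    .build();
        }
    }

Leaving the library or language unspecified tends to produce an arbitrary choice on the model's part, which is why the lesson above insists on naming them in the prompt.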

Conclusion

The migration from a monolithic architecture to MSA is increasingly important, given the enhanced scalability and agile deployment that a microservices architecture offers. However, the migration process is inherently costly. With the recent emergence of LLMs in software engineering, we hypothesize that LLMs could be instrumental across the entire migration process by reducing the need for manual labour.

Our experience of using LLMs for migrating from monolithic architectures to MSA has revealed several crucial lessons. Firstly, LLMs are more effective at generating new code than at refactoring existing code. Secondly, detailed guidance is crucial, in the form of comprehensive instructions on coding standards and best practices. Thirdly, precision in prompt crafting matters, with an iterative refinement process helping to align LLM outputs with project goals. Finally, directing LLMs to generate specific architectural patterns ensures consistency across the architecture. These insights collectively highlight the potential of LLMs to streamline migration processes, reduce expert intervention, and facilitate a more agile and scalable microservices architecture.

References

[1] Fritzsch, J., Bogner, J., Zimmermann, A., Wagner, S.: From monolith to microservices: A classification of refactoring approaches. In: Software Engineering Aspects of Continuous Development and New Paradigms of Software Production and Deployment: First International Workshop, DEVOPS 2018, Chateau de Villebrumier, France, March 5-6, 2018, Revised Selected Papers 1. pp. 128–141. Springer (2019)

[2] Fritzsch, J., Bogner, J., Wagner, S., Zimmermann, A.: Microservices migration in industry: intentions, strategies, and challenges. In: 2019 IEEE International Conference on Software Maintenance and Evolution. pp. 481–490. IEEE (2019)

[3] Hou, X., Zhao, Y., Liu, Y., Yang, Z., Wang, K., Li, L., Luo, X., Lo, D., Grundy, J., Wang, H.: Large language models for software engineering: A systematic literature review (2024)