Why Generative AI's Lack Of Modularity Means It Can't Be ...
One of the most significant shifts in computing over the past few decades has been the growing adoption of open-source software across platforms, from cloud servers to smartphones. Modularity was essential to the success of the distributed development methodology that Linus Torvalds introduced with Linux.
Modularity lets coders worldwide work independently on self-contained components that can be upgraded or replaced without redesigning the overall architecture. Eerke Boiten, Professor of Cyber Security at De Montfort University Leicester, highlighted these benefits in an article published on the British Computer Society website: parts can be engineered and verified separately, which enables parallel development and the reuse of components.
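To make "verification by parts" concrete, here is a minimal sketch (the function names and the toy pipeline are hypothetical, not from Boiten's article): two self-contained components behind narrow interfaces, each of which can be developed, tested, and swapped out independently of the other.

```python
# Hypothetical example: two self-contained modules behind narrow interfaces.
# Each can be engineered, verified, and replaced without touching the other.

def tokenize(text: str) -> list[str]:
    """Split text into lowercase word tokens."""
    return text.lower().split()

def count_tokens(tokens: list[str]) -> dict[str, int]:
    """Count occurrences of each token."""
    counts: dict[str, int] = {}
    for tok in tokens:
        counts[tok] = counts.get(tok, 0) + 1
    return counts

# Verification by parts: each unit is checked against its own contract,
# so confidence in the whole is built from confidence in the pieces.
assert tokenize("The cat sat") == ["the", "cat", "sat"]
assert count_tokens(["a", "b", "a"]) == {"a": 2, "b": 1}
```

Because each contract is explicit, a faster `tokenize` could replace this one tomorrow and `count_tokens` would never know. It is exactly this kind of boundary that a monolithic neural network lacks.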
Limitations of Current Generative AI Systems
Current generative AI systems, however, lack this modularity. Unlike open-source software, they have no internal structure that corresponds to their externally visible functionality. Without such structure there is no separation of concerns, no piecewise development, and no straightforward reuse of components.
Moreover, most current AI systems build no explicit models of knowledge; they rely instead on machine-learning techniques that originated in fields such as image analysis, where explicit knowledge models are hard to construct. This non-modular design creates obstacles for testing and verification.
A recent article in Nature examines the limitations of today's generative AI systems, highlighting their monolithic nature and the difficulty of testing them. Boiten notes that without modular components there can be no verification by parts, making it hard to build confidence in a system during development.
The Reliability Issue
Generative AI systems are stochastic by design, so the same input can produce different outputs. Testing them is further complicated by their vast input and state spaces, which make exhaustive testing infeasible.
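A toy sampler illustrates both points. The vocabulary and probabilities below are invented for illustration; real models sample from learned distributions over tens of thousands of tokens, which is why the input space is astronomically large.

```python
import random

def sample_next_token(probs: dict[str, float], rng: random.Random) -> str:
    """Draw one token at random according to its probability."""
    tokens = list(probs)
    return rng.choices(tokens, weights=[probs[t] for t in tokens])[0]

# A made-up three-token distribution for one fixed input.
probs = {"cat": 0.5, "dog": 0.3, "fish": 0.2}

# The same input yields different outputs across runs: stochastic by design.
samples = {sample_next_token(probs, random.Random(seed)) for seed in range(20)}

# Exhaustive testing is hopeless at scale: a 1,000-token prompt over a
# 50,000-token vocabulary already admits 50000**1000 distinct inputs.
```

Greedy decoding (always taking the most probable token) would be deterministic, but sampling is what gives generative models their varied output, and it is what makes "run the same test twice, expect the same answer" break down.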
Boiten warns that, lacking modular components, current AI systems permit no meaningful unit or integration testing. Being unable to verify such a system by parts undermines its overall reliability and trustworthiness.
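Without internal boundaries to test against, about the best one can do is probe the whole system end to end and estimate, statistically, how often it behaves as desired. The sketch below is hypothetical (the `flaky_generator` stand-in and the property being checked are invented); it shows how such probing yields an estimate rather than the per-component guarantee that unit testing provides.

```python
import random

def flaky_generator(prompt: str, rng: random.Random) -> str:
    """Stand-in for a stochastic black box: same input, varying output."""
    return rng.choice([prompt.upper(), prompt.lower(), prompt[::-1]])

def estimate_pass_rate(prop, prompt: str, trials: int = 1000) -> float:
    """Estimate how often the black box satisfies a property `prop`."""
    rng = random.Random(0)  # fixed seed so the estimate is reproducible
    passes = sum(prop(flaky_generator(prompt, rng)) for _ in range(trials))
    return passes / trials

# The result is a pass rate, not a verified guarantee for any component.
rate = estimate_pass_rate(lambda out: out.isupper(), "Hello")
```

Here `rate` lands near one third, and no amount of extra trials turns that statistic into the kind of compositional assurance that verification by parts would give.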
While the field continues to advance, the lack of modularity in current generative AI systems casts doubt on their reliability and their suitability for serious applications. Hybrid AI systems that combine symbolic and intuition-based approaches may offer a way to address these challenges.
Despite ongoing developments, the fundamental issues with the lack of modularity in generative AI systems remain a critical obstacle in ensuring their reliability and trustworthiness in various applications.