Artificial General Intelligence and biomedical research

Shengping Yang PhD, Gilbert Berdine MD

Corresponding author: Shengping Yang
Contact Information: Shengping.Yang@pbrc.edu
DOI: 10.12746/swjm.v13i57.1605

Ever since the introduction of ChatGPT-3.5, an increasing number of biomedical studies have begun integrating this innovative technology. Now, the scientific community is turning its attention toward Artificial General Intelligence (AGI) – a form of AI that is expected to understand and reason in ways similar to humans. But how soon can we realistically expect AGI to arrive, and what might it mean for the future of biomedical research?

The introduction of ChatGPT-3.5 marked a pivotal moment, prompting a growing number of biomedical researchers to integrate this innovative technology into their workflows, including summarizing literature, brainstorming ideas, generating code, and even drafting manuscripts. But as powerful as these systems are, they represent only a fraction of AI’s potential. Today, the scientific community is looking toward a more profound horizon: Artificial General Intelligence (AGI). This hypothetical form of AI, capable of human-like reasoning and learning, promises not just to assist but to fundamentally redefine the process of scientific discovery itself. This evolution raises critical questions: What truly is AGI? How soon can we expect AGI, and what would its arrival mean for the future of medicine? And, perhaps most fundamentally – should we use it?

1. BACKGROUND: THE AI LANDSCAPE TODAY

To appreciate the potential of AGI, one must first understand the landscape of current AI.

1.1 ARTIFICIAL INTELLIGENCE

Artificial Intelligence is the broad science of creating machines capable of intelligent behavior. More precisely, it is a branch of computer science dedicated to developing systems that perform tasks typically requiring human intelligence.

1.2 NARROW AI

Most contemporary advancements fall within the category of Narrow AI. These systems are designed and trained to excel at a specific, predefined task. Prominent examples include Large Language Models (LLMs) such as ChatGPT,3 Gemini,4 and LLaMA,5 as well as image recognition algorithms that identify tumors in medical scans.6

LLMs represent a sophisticated class of Narrow AI. Trained on colossal datasets of text, they excel at processing and generating human language by predicting the most probable subsequent words in a sequence. Their functionality is underpinned by techniques like Machine Learning (ML), which enables computers to learn from data, and Deep Learning (DL), a subset of ML that employs complex neural networks to discern intricate patterns.
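The next-word prediction described above can be illustrated with a deliberately toy sketch: a bigram model that, like an LLM at vastly greater scale, predicts the most probable continuation of a sequence from frequencies observed in its training text. The corpus, function name, and every detail here are illustrative assumptions, not part of any production LLM:

```python
from collections import Counter, defaultdict

# Tiny illustrative "training corpus" (an LLM trains on billions of words).
corpus = (
    "the tumor was benign . the tumor was malignant . "
    "the scan showed the tumor ."
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    """Return the word that most frequently follows `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# "tumor" follows "the" more often than "scan" does in this corpus.
print(most_probable_next("the"))  # → tumor
```

Real LLMs replace these raw counts with deep neural networks that condition on long contexts rather than a single preceding word, but the underlying objective, choosing a probable continuation, is the same.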

2. THE CURRENT STATE: NARROW AI IN BIOMEDICINE

Today’s Narrow AI is already a powerful ally in the research laboratory and clinical setting.

3. ARTIFICIAL GENERAL INTELLIGENCE

AGI represents a hypothetical stage in the development of ML in which an artificial intelligence system can match or exceed the cognitive abilities of human beings across any task.1 Unlike today’s systems, AGI would not be confined to a single domain.

A fundamental difference separates AGI from LLMs. While an LLM’s seemingly intelligent behavior arises from statistical pattern matching in text, i.e., without genuine conceptual understanding, an AGI would be defined by its causal, conceptual understanding of the world. This distinction would manifest in several core capabilities that AGI would possess and current LLMs lack, including causal reasoning, continuous learning, and autonomous goal-setting.

4. A NEW PARADIGM FOR DISCOVERY

The arrival of AGI would transform biomedical research into a collaborative partnership with a peer machine intellect, and its impact could be revolutionary.

Given these transformative capacities that humanity has long envisioned, a clear assessment of how LLMs and AGI compare to human intelligence becomes essential.

5. COMPARISONS: A FRAMEWORK FOR UNDERSTANDING THE DIFFERENCES

To grasp the fundamental differences, it is helpful to use biological analogies.

5.1 COMPARING WITH ANIMALS

Intelligence
LLM vs. animals: An LLM functions as a vast, sophisticated memory without a mind. It lacks the innate understanding and instincts of even a simple animal. A squirrel, for instance, possesses a grounded understanding of gravity, hunger, and predators that an LLM fundamentally lacks.
AGI vs. animals: AGI would, by definition, possess a general intelligence at least on par with, and very likely superior to, higher animals. It would demonstrate problem-solving skills surpassing those of a primate and adaptive learning capabilities beyond those of a dolphin.

Learning
LLM vs. animals: Unlike an animal, an LLM’s core model is static after its initial training; temporary “in-context learning” during a conversation does not alter its fundamental knowledge base.
AGI vs. animals: Like an animal, AGI would learn continuously from its environment and experiences through trial and error, dynamically updating its internal world model.

World interaction
LLM vs. animals: Unlike an animal, an LLM has no direct, embodied interaction with the physical world and cannot affect it directly.
AGI vs. animals: AGI, especially if embodied in a robot, would interact with and manipulate the world with the purpose and adaptability of an intelligent animal.

5.2 COMPARING WITH HUMANS

Understanding
LLM vs. humans: An LLM is an “astute autodidact” – it has processed a vast corpus of text but operates without genuine comprehension. It manipulates symbols statistically, lacking the rich, internal model of the world that defines human understanding.
AGI vs. humans: AGI would function as a peer intellect, possessing a human-like, conceptual understanding of causality and context, with reasoning capabilities that are indistinguishable from, and potentially superior to, human thought.

Reasoning
LLM vs. humans: LLMs perform “stochastic parroting,” plausibly recombining existing information based on probability. They lack true logic, common sense, and the ability to reason from first principles.
AGI vs. humans: Like a human, AGI would demonstrate genuine reasoning, abstraction, and creativity. It could formulate novel scientific theories or create original art based on a deep, causal understanding of the world.

Consciousness and goals
LLM vs. humans: An LLM has no consciousness, self-awareness, desires, or intrinsic goals. Its objectives are statically embedded by its programmers and training data.
AGI vs. humans: The nature of AGI consciousness remains a profound philosophical question. However, a defining feature would be its capacity for autonomous goal-setting, defining its own objectives based on its understanding, much like a human.

6. ETHICAL AND PRACTICAL CONSIDERATIONS

The integration of AGI into biomedical research introduces profound ethical challenges that must be addressed proactively.8

6.1 ETHICS AND SAFETY: UNCHARTED TERRITORIES

The deployment of AGI in medicine presents novel and unresolved dilemmas.

6.2 EQUITY AND DISPARITIES

A significant risk exists that AGI could systematically perpetuate or even exacerbate existing healthcare disparities.

6.3 CONCERNS AND LIMITATIONS

An AGI capable of defining its own goals poses potential problems. This possibility underlies the AI doomsday scenarios, such as the Terminator film series, in which an AI decides that the best way to achieve its goals is to launch nuclear missiles at humanity. Isaac Asimov imagined the Three Laws of Robotics to ensure that AI systems would protect humans and follow orders unless those orders conflicted with human safety. Yet even in Asimov’s own fiction, the robots eventually reason that an improved “Zeroth Law” protecting humanity as a whole should override the Three Laws, and they allow Earth to become a radioactive wasteland in order to stimulate expansion into outer space and occupation of the Galaxy. Some form of human supervision will be necessary to keep AI from becoming an existential threat to humanity.

7. CONCLUSION

The journey from today’s Narrow AI to tomorrow’s AGI is not merely a step forward in scale; it is a fundamental leap in kind. Current LLMs are powerful tools that augment human capabilities, like giving a researcher a library that can talk back. But AGI promises a true collaborator – a partner with the generalized problem-solving skills of a human scientist, the relentless efficiency of a machine, and the potential to see connections across biology and medicine that have eluded us for generations.

While the timeline for AGI remains uncertain, with expert predictions ranging from a bullish 2028–2030 (often from company leaders) to a more conservative mid-century or beyond (from many academics), its potential impact is not. It compels us to prepare – not just technically, but ethically and socially – for a future where the very process of discovery is a dialogue between human and machine intelligence. The arrival of transformative technology always elicits a spectrum of responses, from enthusiastic adoption to profound apprehension. Lessons from past technological revolutions may shed light on these dynamics.

We do not know how humanity reacted to the invention of the wheel, but the “Red Flag Laws”,9 which sought to slow the automobile for the sake of the horse-drawn carriage, now stand as a historical testament to the futility of resisting a transformative technology. A similar divergence unfolded when LLMs astonished the world: some immediately called for a pause, prioritizing precaution, while others hailed their efficiency and raced to unlock new potential. This reality prompts critical questions: will we witness a complete reversal of attitudes when AGI truly arrives? Could those who initially feared LLMs come to embrace a world of unprecedented abundance powered by AGI? And might those who now champion LLMs find themselves voluntarily shouldering the immense responsibility of ensuring that AGI remains a tool for, and not a master of, humanity?

The ultimate challenge, therefore, may lie not in opposing AGI itself, but in ensuring its wise stewardship. Our preparedness, more than the technology alone, will determine the outcome.


REFERENCES

  1. Bergmann D, Stryker C. What is artificial general intelligence (AGI)? IBM Think. https://www.ibm.com/think/topics/artificial-general-intelligence. Accessed October 10, 2025.
  2. Bubeck S, Chandrasekaran V, Eldan R, et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv. 2023. arXiv:2303.12712. https://arxiv.org/abs/2303.12712.
  3. OpenAI. ChatGPT (Oct 8 version) [large language model]. 2024. https://chat.openai.com/. Accessed October 10, 2025.
  4. Google DeepMind. Gemini: Our most intelligent AI models. https://deepmind.google/technologies/gemini/. Accessed October 10, 2025.
  5. Meta. Introducing LLaMA: A foundational large language model [blog post]. February 24, 2023. https://www.meta.com/ai/blog/llama/. Accessed October 10, 2025.
  6. Oh Y, Park S, Byun HK, et al. LLM-driven multimodal target volume contouring in radiation oncology. Nat Commun. 2024;15(1):9186. doi:10.1038/s41467-024-53387-y. Erratum in: Nat Commun. 2025;16(1):718. doi:10.1038/s41467-025-55963-2.
  7. Qi B, Zhang K, Tian K, et al. Large language models as biomedical hypothesis generators: A comprehensive evaluation. COLM. March 2024. arXiv preprint.
  8. Tegmark M, Omohundro S. Provably safe systems: The only path to controllable AGI. arXiv. 2023. doi:10.48550/arXiv.2309.01933.
  9. Agnew J. Steam engines on UK roads, 1862–1865: Banning orders, agricultural locomotives and the “red flag” act. Int J Hist Eng Technol. 2020;90(1):53–74. doi:10.1080/1758120.2020.1797447.


Article citation: Yang S, Berdine G. Artificial General Intelligence and biomedical research. The Southwest Journal of Medicine 2025;13(57):78–82
From: Department of Biostatistics (SY), Pennington Biomedical Research Center, Baton Rouge, LA; Department of Internal Medicine (GB), Texas Tech University Health Sciences Center, Lubbock, TX
Submitted: 10/1/2025
Accepted: 10/10/2025
Conflicts of interest: none
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.