Abstract

ChatGPT and Medicine: Fears, Fantasy, and the Future of Physicians

Christopher J Peterson MD, MS

Corresponding author: Christopher J Peterson
Contact Information: Cjpeterson1@carilionclinic.org
DOI: 10.12746/swrccc.v11i48.1193

ABSTRACT

The generative artificial intelligence (AI) ChatGPT has attracted media attention for its ability to answer a wide variety of questions with a human-like writing style, including questions from the USMLE licensing examination. Some wonder if this indicates physicians’ eventual demise at AI’s hands. On the contrary, physicians contribute a unique skill set that technology cannot replicate. ChatGPT also has critical limitations that will likely prevent it from replacing human operators or thinkers. Furthermore, the challenges from and worries over new technology are nothing new; professionals and industries have historically adapted to such changes.

Keywords: ChatGPT, artificial intelligence, machine learning, clinical practice, medical profession

INTRODUCTION

Many observers have been concerned by the recent news that ChatGPT was able to pass the USMLE medical licensing examinations (or rather, a subset of questions from each examination).1 ChatGPT is a form of generative AI (i.e., it creates an output) that uses natural language processing to produce textual responses to questions. It uses a predictive model trained on large amounts of data to predict the next word in a sequence.2 From this, ChatGPT produces answers that mimic human writing and diction and responds to a wide range of topics. Moreover, some studies have shown that ChatGPT can produce outputs that appear to rival experts and professionals, including physicians. Some wonder if this might be the end of doctors, with medicine reduced to interactions between a patient and their AI “physician.”3 Others wonder what this says about medical education–when a computer can pass an examination that medical students spend years preparing for. Some wonder if ChatGPT will be a pathway to laziness, apathy, and ignorance; one creator comically depicts a scenario in which a physician cannot treat a presumed myocardial infarction because the ChatGPT site is down.4 Physicians themselves might wonder, “Will AI take my job?”
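
To make the “predict the next word” idea concrete, here is a minimal, purely illustrative Python sketch of greedy next-word generation over a hand-built probability table. The table and its probabilities are invented for demonstration; a real model like ChatGPT learns such distributions over sub-word tokens from enormous text corpora, but the generation loop is conceptually similar.

```python
# Toy illustration of next-word prediction. The probability table below is
# invented for demonstration; a real large language model learns these
# distributions from massive text corpora and works with sub-word tokens.

next_word_probs = {
    "chest": {"pain": 0.6, "pressure": 0.3, "wall": 0.1},
    "pain": {"radiating": 0.5, "worse": 0.3, "relieved": 0.2},
    "radiating": {"to": 0.9, "down": 0.1},
    "to": {"the": 0.8, "left": 0.2},
}

def generate(seed: str, max_words: int = 5) -> str:
    """Greedily append the most probable next word until no continuation exists."""
    words = [seed]
    for _ in range(max_words):
        candidates = next_word_probs.get(words[-1])
        if not candidates:
            break
        words.append(max(candidates, key=candidates.get))  # greedy decoding
    return " ".join(words)

print(generate("chest"))  # -> "chest pain radiating to the"
```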

These questions are not new; technology has long been depicted as a threat, real or otherwise, to various professions. Science fiction frequently imagines an “automated” future: the doctor of “tomorrow” might be an autonomous robot (Star Wars) or a person needing nothing more than a “remote control” and an examination table (Star Trek: The Next Generation). However, many have defended the irreplaceability of specialized professionals, such as physicians.

ChatGPT seems to pose a new existential threat. It does more than merely present results to the user, as a search engine does; it creates highly individualized answers to complex questions, all in diction, grammar, and syntax that seem as though there is a person on the other side of the screen, frantically writing out each response. Indeed, its writing is often so indistinguishable from a human’s that some fear it will be used to create work for everyone, from high school students to medical researchers.2 To this point, reviewers given a mix of human- and AI-generated abstracts correctly identified only 68% of the AI-generated abstracts and misidentified 14% of the original, human-authored abstracts as AI-generated.5 In other words, this isn’t Google giving you webpages–it’s a machine that writes a unique answer just for you. With this distinct leap in technological progress, many have legitimately wondered if machines will replace them. For medicine, the question becomes: if this machine can not only “talk” like a doctor but also pass the same examinations, what is keeping it from taking our place as well?

These concerns are both daunting and disconcerting, and what the world will look like with this new technology remains to be seen. Indeed, it will likely be some time before we truly understand how ChatGPT will impact our society. However, I believe a strong case can be made against one of these fears in particular–the notion that our profession is doomed. Fortunately, there are several reasons why it is not.

WE’VE BEEN HERE BEFORE

The mythological Hydra was a reptilian water monster with multiple snake-like heads. An unusual opponent, the Hydra grew two new heads whenever one was severed. The conquest of disease involves similarly multiplying challenges.6 Whenever we feel we have conquered one challenge, we often find only more challenges (“two more heads”) taking its place. Just look at antibiotics. The discovery of penicillin and sulfonamides in the early 20th century promised a new era in which infections could be easily controlled. However, microbes quickly developed resistance, necessitating new antibiotics and careful stewardship, an arms race that continues to this day.7 Indeed, the antibiotic development pipeline, constant surveillance of resistance (including in-hospital antibiograms), and stewardship programs represent a far more complex world of management than existed before the discovery of penicillin. Physicians have to learn the intricacies of appropriate antibiotic prescribing practices. There are even professional conferences focused on antibiotic resistance. And recently, the World Health Organization,8 Centers for Disease Control and Prevention,9 and United Nations10 have all sounded the alarm on rapidly emerging antimicrobial resistance–a problem that did not exist before the era of antibiotics. Of course, the analogy here isn’t perfect; antibiotics, after all, gave us far more than “two hydra heads,” as these drugs have saved untold lives. But once antibiotics solved the problem of untreatable bacterial infection, they created new problems that had not previously existed and that can be just as lethal.

The literary and film plot device deus ex machina (Latin for “God from the machine”) introduces an unexpected element (such as a new piece of technology) to rescue the protagonists from an unsolvable problem (think Adam West’s Batman and his handy shark repellent). For understandable reasons, this storytelling device has been criticized as both lazy writing and incongruent with reality.11 And yet, we still find ourselves drawn to the idea of the deus ex machina in our own life stories when we fantasize about technology that will rescue us from our problems. We should remember that no single piece of technology has ever been humanity’s saving grace–nor is one ever likely to be.

THE THIN LINE BETWEEN CONFIDENCE AND ARROGANCE

Answers from a seemingly all-knowing computer, written with the style and confidence of an expert, can cultivate a sense of trust. Yet sounding right and actually being right are two different things–and ChatGPT is no exception. To be sure, ChatGPT has produced impressive responses to a wide range of questions.12 For example, one (pre-print) study had 17 physicians ask ChatGPT 284 medical questions, with answers graded for correctness (1 = completely incorrect to 6 = completely correct) and completeness (1 = incomplete to 3 = complete plus additional context). ChatGPT scored a mean accuracy of 4.8 and a median of 5.5. It also scored highly on completeness–a mean of 2.5 and a median of 3.13 It’s this kind of uncanny accuracy that can make physicians look over their shoulders, wondering if AI is coming for their white coats and stethoscopes.

Despite its successes, ChatGPT has also been found to fabricate, or “hallucinate,” sources and references.14 For example, ChatGPT fabricated over two-thirds of the references it provided in response to a set of medical questions.15 And despite the examples of accuracy above, its performance is neither consistent nor demonstrably superior to physicians’. For instance, when presented with clinical vignettes, physicians scored higher on diagnostic accuracy than ChatGPT (98.3% vs. 53.3%, P < 0.001).16 In another instance, when asked specific questions about a recently published scientific article, ChatGPT gave grossly inaccurate responses to all five prompts.17 There are also concerns that ChatGPT could be used for nefarious purposes, such as the creation of misinformation.18 This, along with the concern that ChatGPT-generated content will be passed off as human-produced, has spurred the creation of “AI detectors”–programs designed to flag text or images created by generative AI.19 As one writer puts it, “ChatGPT has all the answers–but not always the right ones.”20

Like any technology, ChatGPT will need to be further refined. A new version (GPT-4, an improvement on the GPT-3.5-based ChatGPT that passed the USMLE) has shown gains in accuracy.21 Its limits are unknown, and so speculation will inevitably continue. However, we should also remember past optimistic predictions that turned out to be incorrect. For example, collections of Victorian-era drawings imagining future life prove humorous in retrospect, with depictions of robot tailors and butlers, personal flight apparatuses, and air travel as casual as automobiles.22 Indeed, history is replete with bold and optimistic but ultimately unrealized predictions about future technology. In 1969, proposals for moon and Mars bases were discussed and presented at NASA, with a goal of a crewed landing on Mars by 1982 considered “logical.”23 Of course, NASA is now only in the early phases of returning astronauts to the moon, with bold predictions about Mars landings from other thinkers and visionaries stirring public interest but clearly falling short.24 A New York Times article notes the shortcomings of futuristic predictions by critiquing an older article’s attempt to do just that, insightfully observing that “the 1982 predictions said less about the future than about what sorts of stories people wanted to hear.”25 Of course, some futurists’ predictions have turned out to be remarkably prescient (including many from the 19th century).26 And yet, predicting patterns in everyday occurrences like the weather or the economy continues to prove incredibly difficult. Mathematician and author David Orrell emphasizes the “uncertainty of living systems” as a barrier to effective predictive models, noting that the classical Greek perspective of a mathematically harmonious universe hasn’t produced ways to foretell the future.27 So, while the uncertainty of ChatGPT’s impact will understandably invite speculation, we should avoid crossing the line between confidence and arrogance by placing too much trust in either our machines or our own prognostic abilities.

THEY’RE MORE LIKE GUIDELINES THAN ACTUAL RULES

The era of evidence-based medicine has undoubtedly increased the rigor of clinical practice. Medical societies provide updates on best practices based on the best available evidence, allowing collective knowledge to guide our decisions and thus providing more consistent clinical practice and better outcomes.28 In a world of such rigor, however, one could get the impression that medicine can be reduced to a series of algorithms or UpToDate searches. In reality, anyone who thinks medicine is this simplistic has clearly never attended a tumor board or a critical care noon conference. That’s because science, for all its effectiveness, is imperfect, limited, and incomplete (hence why we keep re-searching). It approaches truth (like the calculus concept of taking a limit), yet its limitations mean that change is inherent.29 Some experiments cannot be reproduced. Some get retracted. Some studies fail to generalize. Others reveal the opposite of what was expected. Even guidelines frequently require updating, with one study noting that 1 in 5 guideline recommendations becomes outdated within three years,30 with reassessment recommended at that same interval.31 The COVID-19 pandemic, with its frequent changes to our understanding and management of SARS-CoV-2, is a fresh reminder of this. So, while guidelines are useful, they should never become so dogmatic or sacrosanct that we view them as beyond reproach–or as something that can simply be programmed into people or machines.

To this point, physicians should be familiar with the inherent dangers of dogma and the need to challenge it rationally.32 I’m an avid reader of the Internet Book of Critical Care (or “IBCC” to those in the know) by Joshua Farkas, a pulmonary critical care physician at the University of Vermont.33 He has taken both a microscope and a battering ram to medical dogmas and established guidelines (just look at his remarks on the Surviving Sepsis Campaign).34 Far from rejecting medical reasoning, he embraces it by questioning (and even exposing) weaknesses in our practices, habits, and, yes, even guidelines. In a section on pulmonary embolism, he begins by stating, “PE is a humbling disease.”35 At first glance, this seems like an odd statement in the era of pulmonary embolism response teams (PERTs) and embolectomy. Dr. Farkas elaborates that despite standardized diagnostic criteria and risk factors, PE is infrequently so well defined or easily recognizable in clinical practice.36 Despite Virchow’s recognition of the causative factors of thrombi and emboli and more than a century of research, PE nonetheless retains a high mortality rate.37 Clearly, we still have room to grow (and so will the technology we invent).

ONLY AS GOOD AS THE SUM OF ITS PARTS

ChatGPT isn’t an oracle or a crystal ball, at least in the sense that it can’t tell us things we don’t collectively know. Its responses are based on the knowledge we already have–knowledge that will likely change and evolve over time. And that knowledge keeps growing at breakneck speed: scientific output currently doubles roughly every 17 years.38 Ironically, rather than shrinking our universe to size, progress has only shown it to be more complex and revealed how little we truly know.39 And this quest for knowledge isn’t likely to end anytime soon. Some thinkers even argue that knowledge is infinite–a well that won’t dry up no matter how many grad students and grant dollars we throw at it.40–42 The need for innovation and discovery, then, may always exist. So, while machines like ChatGPT can summarize information, they are not designed to innovate, which means they would be of little use at the “edge of knowledge,” where topics are still being debated, discussed, and reexamined.
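
As a back-of-envelope conversion of that figure, a 17-year doubling time corresponds to compound growth of roughly 4% per year:

$$N(t) = N_0 \cdot 2^{t/17}, \qquad \text{annual growth} = 2^{1/17} - 1 \approx 4.2\%$$

where $N_0$ is the current stock of publications and $t$ is measured in years.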

Patients are also far more complex than formulaic USMLE questions would have one believe. Illness rarely falls into neat presentations. One study found that nearly one-third (28.6%) of older adults at an emergency department had atypical presentations of illness.43 As the saying goes, “Most patients don’t read the textbooks.” People also exist within a milieu of emotional and social contexts, and a physician’s challenges may not be strictly “medical” (waiting for placement or prior authorization, anyone?). Recent attention to genetically tailored, “personalized” medicine,44 diversity of skin color in dermatology training,45 and cultural competency46 underscores just how difficult it is to sort patients into rigidly defined groups. As one patient puts it, “We can’t be reduced to data.”47 Indeed, outliers and variations make creating a “one size fits all” algorithm difficult, if not impossible. These outliers can be particularly challenging for generative AI, which is designed to be predictive rather than creative. As one writer puts it, “Zebras exist, but a probabilistic reasoning algorithm would never look for them.”48

Given this complexity, it should not be surprising that the jury is still out on the relationship between USMLE scores and physician performance. While USMLE scores do tend to correlate well with other test scores (such as in-training examinations),49,50 they correlate less well with other parameters such as faculty evaluations, competency milestones, and emotional intelligence.50–54 Of course, measuring what makes an effective physician is undoubtedly challenging. How does one measure empathy or clinical gestalt? Medicare metrics and other healthcare measures attempt to do so, particularly regarding efficiency and fiscal responsibility, yet those who are measured by such metrics tend to recognize their inadequacies.55–57 In one instance, the introduction of 30-day readmission metrics may have prompted physicians to delay admissions until outside the 30-day window.58 The same skepticism should apply to standardized tests and to headlines about machines that can pass them: if what makes an effective physician cannot be encapsulated by a test or a quality metric, then a machine that can “talk” like a physician surely cannot replace one either.

ASKING THE RIGHT QUESTIONS

A significant component of medical training is learning not just to ask patients questions but to ask the right questions. In a sea of possibilities, determining which questions are relevant to the case at hand, wording them in ways that handle sensitive issues, and allowing for open-ended answers is a crucial skill set.59 Descriptors for clinical signs and symptoms also matter: there is never simply “chest pain” but rather “chest pressure,” “pain with radiation,” or “pain worse with movement.” Indeed, one trait of great thinkers and experts is knowing how to ask the right questions.60

So it is with ChatGPT. Users still have the task of translating their queries into terms that can be linked to the correct information. Finding the right information to match our queries is challenging even for experts, let alone laypersons. For example, research and academic librarians have expanded from curating data to helping scholars home in on relevant articles and develop research plans,61 an important task with millions of scholarly articles online (114 million as of 2014).62 PubMed users will be familiar with search optimization tools such as MeSH terms, Boolean operators, and various filters–tools needed to sift through massive amounts of information.63 Similarly, in the world of conversational and generative AI like ChatGPT, careers focused on appropriately querying information have developed: prompt engineering is a new field focused on optimizing user queries and AI output.64 Inherent to this is the recognition that prompts and outputs don’t perfectly align. This echoes an older field, search engine optimization, used by marketing firms and other organizations to maximize hits for a particular item or brand and ensure it doesn’t become buried under innumerable competing ideas and products.65 The mere availability of information doesn’t mean it will be appropriately categorized or that ideas won’t compete among “themselves” for attention. Nor does it mean that patients will be able to appropriately use medical terms and interpret medical literature, as evidenced by the pitfalls of internet self-diagnosis.66 Indeed, access to seemingly unlimited information doesn’t mean that people will know how to find what they’re looking for–a question as universally existential as it is contemporary.
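
As a concrete illustration of the structured searching described above, the sketch below runs a Boolean/MeSH query against NCBI’s public E-utilities interface to PubMed. The query itself is an invented example, not a recommended search strategy.

```python
import json
import urllib.parse
import urllib.request

# A Boolean/MeSH query of the kind a trained searcher might compose.
# The specific terms are illustrative, not a recommended strategy.
query = (
    '"pulmonary embolism"[MeSH Terms] '
    'AND diagnosis[Subheading] '
    'NOT review[Publication Type]'
)

# NCBI's public ESearch endpoint for PubMed.
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + urllib.parse.urlencode(
    {"db": "pubmed", "term": query, "retmode": "json", "retmax": 5}
)

with urllib.request.urlopen(url) as response:
    result = json.load(response)

# Operators narrow millions of records to a manageable, relevant set.
print("Matching records:", result["esearchresult"]["count"])
print("First PMIDs:", result["esearchresult"]["idlist"])
```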

THE HUMAN ELEMENT

The famous painting by Sir Luke Fildes, simply named The Doctor, depicts a 19th-century physician holding vigil at the bedside of a young and very ill child. Rather than depict the life-saving skills of the doctor, the artist portrays a steadfast physician keeping watch even when he seems to have little to offer. A cure seems uncertain, perhaps unlikely; some even suspect the painting was influenced by the death of the artist’s own son.67 Nevertheless, it resonates because it embodies the humanity and empathy associated with the practice of medicine. Indeed, there has been a resurgence in the medical humanities, with programs integrated into medical education68 and residency programs.69 Patients want human doctors, in both senses of the word. The physician-patient relationship itself can be therapeutic, irrespective of treatment modalities.70 Dr. Abraham Verghese, physician and author at Stanford, recalls a patient who, dying of AIDS and without further treatment options, nonetheless insisted that his physician continue the ritual of the physical examination, which had become a symbol of the patient-doctor relationship and human connection.71 And although ChatGPT can be trained to appear empathic (in some instances, embarrassingly more so than physicians72), it’s questionable how impactful this programmed “empathy” will be and how easily patients will accept it (they already have concerns about AI in healthcare).73 While machines can possess incredible efficiency in some areas, it’s unlikely that people will trust them as much as they trust humans, particularly in high-risk scenarios.74,75 Just imagine how comfortable you would feel flying in an airplane piloted solely by AI, or undergoing surgery performed by an autonomous robot. Kurt Gray, an associate professor of psychology and neuroscience at the University of North Carolina, attributes at least part of this mistrust to a machine’s inability to feel emotions.74 A recent survey showed that a majority of respondents would not trust AI to make life-or-death military decisions, serve as a juror, or fly an airplane.76 There is something in the human element–a kinship, a shared sense of optimism–that transcends the precise but unfeeling gears of technology.

This is where the well-trained physician is indispensable. What do you do when the algorithm doesn’t fit? When you’re at the cutting edge of knowledge? Or in the gray areas of clinical decision-making? In these situations, more abstract skills such as intuition and gestalt become important–intangible skills that are poorly understood, difficult to teach, and learned largely through experience. Behind the emphasis on clinical gestalt is the belief that human decision-making cannot simply be reduced to individual components.77 Rather, clinical experience and the human subconscious combine to produce instincts that are not immediately available to conscious reasoning. The word “gestalt” (German for “whole” or “form”) comes from debates between two schools of thought: the atomists, who believed that the mind could be reduced to discrete units, and the Gestalt psychologists, who held that there was something about experience that could not be reduced to its sensory components. Physicians have reported this intuition, this “gut feeling,” as crucial to making challenging diagnoses.78–81 Indeed, intuition is considered particularly important in areas of uncertainty, such as when information is limited or there are no clear diagnoses or management options.82 In a word, the physician’s mind is more than “the sum of its parts,” more than memorized information and board examination scores.

Of course, intuition and gestalt have their risks as well, including perpetuating one’s own biases and excusing poor analytical reasoning.82 And studies of physician gestalt, such as the ability to predict acute coronary syndrome or appendicitis, are mixed at best,5,83–87 though these predictions were often made without other readily available diagnostic data. Perhaps the role of physician intuition, then, is not to supersede machines or algorithms (will instinct ever be more accurate than an ECG?) but to operate where there are no answers or formulae. In his book Blink, Malcolm Gladwell captures both the uncanny accuracy of the intuitive mind and its deceptions, suggesting that it is neither wholly reliable nor irrelevant.88 But unlike humans, ChatGPT is not trained to be analytical, to question its reasoning, or to innovate. Indeed, ChatGPT struggled with pharmaceutical chemistry questions that involved application and analysis.89 Artificial intelligence like ChatGPT also lacks curiosity, an essential ingredient for advancement and discovery.90 “Generative AI is not thought, it’s not sentience,” notes Dr. John Halamka, president of the Mayo Clinic Platform.91 And to those who forecast that AI will one day replicate these human qualities perfectly, I question whether humans could advance AI like ChatGPT to the point at which it reproduces human consciousness when we don’t understand how human consciousness works in the first place.92 No, the more productive relationship is one in which machines augment human qualities, combining indispensable human traits with the rigor and accuracy of technology.93 We’ve already been doing this for millennia; why should it be any different now?

TOWARD THE FUTURE

The information age has given way to information overload: there is simply too much knowledge for any one person to handle. The COVID-19 pandemic, with its rapid influx of countless papers, created exactly this kind of overload.94 In one instance, dozens of studies examined a possible protective link between smoking and COVID-19, creating a debate that, regardless of the outcome, would not have changed medical management (who would recommend a patient take up smoking to ward off COVID?).95 Physicians are also overloaded with patient data, from labs to vital signs to EMR alerts, particularly in high-acuity fields like critical care.96 Technology like ChatGPT could help draw signal from this noise. It could provide quick, concise summaries of medical information and spare physicians precious time searching the vast internet library, allowing them to focus on more advanced cognitive challenges. To this point, I once heard my organic chemistry professor lament how, as a graduate student, he was required to memorize the periodic table for an advanced chemistry class–perhaps the most ubiquitous and fundamental reference in chemistry, something easily and readily available. “The periodic table exists because you’re supposed to look it up.” Underlying his story was his frustration with time lost, time that could have been spent actually applying the science he was training in. While there is a need for foundational knowledge, an overemphasis on memorization, whether in chemistry or in medical school, may come at the expense of critical thinking and contextual reasoning, the very skills that make physicians distinct from machines.

Thus, allowing AI to assist with repetitive, less cognitively taxing duties might make physicians more productive, improve physician satisfaction, and reduce burnout. Some have suggested, for example, that AI is ideal for administrative work,48,97 tasks that are far and away the most common frustrations of physicians today.98 Who wouldn’t want AI to write their discharge summaries?99 Or draft insurance pre-authorization or appeal letters?100 Indeed, technology has historically allowed humans to outsource repetitive and mundane tasks, freeing them to focus on the more cerebral pursuits that move society forward.101 Without labor-saving devices, the developments that allowed for intellectual pursuits, scientific development, and medical specialization would likely not have been possible. Thus, far from making physicians obsolete, AI could free physicians to pursue higher-level tasks, such as more physician-patient interaction. Dr. Eric Topol, a physician-scientist at the Scripps Research Institute, argues that liberation from monotonous tasks might actually improve physician-patient interactions and “bring humanity back to medicine.”102
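
As a sketch of what delegating such paperwork might look like, the snippet below asks OpenAI’s chat API (via the openai Python package, mid-2023 interface) to draft a prior-authorization appeal letter. The model choice, prompt wording, and case details are illustrative assumptions, and any real use would demand human review and strict protection of patient data.

```python
import openai  # OpenAI Python client, v0.27-era interface (mid-2023)

openai.api_key = "YOUR_API_KEY"  # placeholder credential

# Hypothetical, de-identified case details; never send identifiable
# patient data to an external service without appropriate safeguards.
case = {
    "medication": "example-drug",
    "denial_reason": "non-formulary",
    "indication": "example indication supported by treatment guidelines",
}

prompt = (
    "Draft a concise, professional prior-authorization appeal letter. "
    f"Medication: {case['medication']}. "
    f"Denial reason: {case['denial_reason']}. "
    f"Clinical indication: {case['indication']}."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

draft = response["choices"][0]["message"]["content"]
print(draft)  # a physician still reviews, edits, and signs the final letter
```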

Perhaps the greatest defense against burgeoning technology is one as old as life itself–the ability to adapt and evolve. For decades, the threat of automation has sparked worry over job shortages and human irrelevance. In some instances, the threats were very real. The invention of the automobile proved devastating to buggy whip manufacturers, who either changed their product lines or went out of business.103 Other companies, such as Blockbuster, Kodak, and Yahoo,104 either lost their edge or became obsolete through an inability to keep pace with new advancements. Yet overall, despite the concerns, the people (and jobs) are still here, though technology has undoubtedly changed how we work.105 The possibility of robotic surgeons has been discussed for decades; nevertheless, widespread integration seems, at best, decades away (and still within the hands of a human operator).106 Moreover, technology has tended to create the need for more specialized physicians, not the other way around. The development of computed tomography (CT) and magnetic resonance imaging (MRI) only furthered the field of radiology, with radiologists becoming even more specialized.107 Rather than fear the challenges posed by CT technology, radiology embraced it and cemented itself as a crucial player in a medical world in which CT scans are the norm. Where technology solves one problem, it frequently creates several more; when one door opens, we only find more challenges on the other side.

Therefore, physicians will, as always, need to grow and evolve with new technology or risk being left behind. The European Society of Radiology notes that the role of today’s radiologist–a profession considered by some to be among the most threatened by AI–extends beyond diagnostic duties to include those of innovator, scientist, teacher, and communicator.108 Rather than mastering one particular skill set, a physician takes on a much more expansive role. Accomplishing this may require retraining to become familiar with burgeoning technologies like AI.109 It will also require the motivation to continue growing long after formal training has ended, something arguably inherent in a physician’s ethics. Fortunately, there are several excellent reviews detailing how AI could augment clinical practice.110,111 The American Board of Artificial Intelligence in Medicine (ABAIM) provides review courses and networking with other professionals interested in the intersection of AI and medicine (abaim.org). The need for continual professional growth is nothing new; conferences and continuing medical education are proof of that. Yet I think we underestimate that what makes a physician valuable is not just a skill set but the ability both to change and to be a force for change. As one headline puts it, “AI won’t replace doctors, but doctors who don’t use AI will be replaced.”112 Whether or not that proves true, physicians will doubtless need to continue adapting to remain productive and relevant.

To be sure, ChatGPT does pose potential ethical problems for medicine, including the risk of user dependency, failure to recognize inappropriate or harmful requests, perpetuation of biases in training data, generation of misleading and “deepfake” content, unequal access, authorship attribution, transparency, and explainability–the list goes on.2 And any integration into medicine will be met with challenges. But one of those challenges is unlikely to be the inevitable demise of the physician. I hear some of my colleagues claim jokingly, although perhaps with some unease, that AI is “going to replace us one day.” The fear of change, of inadequacy, of becoming obsolete is neither new nor confined to physicians. Nevertheless, I hope that my colleagues recognize the inherent worth that a physician brings and see AI like ChatGPT not as a threat but as a reminder to value and hone the skills that no machine can reproduce.

PARTING THOUGHTS

A discussion of a new and daunting technology would not be complete without a science fiction reference. While many films like The Terminator and The Matrix explore humanity’s demise at the hands of its own technology, others explore the opposite–the irreplaceability of human nature despite even the most advanced machines. Christopher Nolan’s Interstellar emphasizes the need for human ingenuity, intuition, and connection to solve humanity’s crisis, even in the face of automation and multifaceted robots. Even the recent Hollywood blockbuster Top Gun: Maverick joins this debate, with the ingenuity and grit of human aviators succeeding in a world that, to some, would be better off replaced by drone aircraft. But perhaps my favorite “man vs. machine” conflict is from the 1967 television series The Prisoner. In it, we find our hero (“Number 6”) countering a supercomputer engineered to streamline education and exploit the masses. It could supposedly teach any concept, solve any problem, and answer any question; indeed, it was claimed it could deliver a three-year university course in only three minutes (and all from the comfort of your home!). Our cautious and skeptical hero disagreed, claiming he had a question that not even a supercomputer could answer. With sufficient TV drama, Number 6 feeds his question to the computer. The result? His suspicions were confirmed when the computer, straining under the immense demand of the query, disintegrated in a burst of smoke, fire, and hubris. The question? “W-H-Y?”113 The great existential question. One to which there are no formulae or easy answers. One that will undoubtedly require the human element to find a solution. One that is found inside the human physician and cannot be replaced by fear, fantasy, or futuristic technology.

ACKNOWLEDGMENTS

I would like to thank Alexander Cook and David Peterson for their thoughtful review and valuable insight.


REFERENCES

  1. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health. 2023;2(2):e0000198. doi:10.1371/journal.pdig.0000198
  2. Ray PP. ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems. 2023;3:121–154. doi:10.1016/j.iotcps.2023.04.003
  3. Kocher B, Emanuel Z. Will robots replace doctors? 2019. Brookings Institution. Available at: https://www.brookings.edu/articles/will-robots-replace-doctors/
  4. If ChatGPT is used by doctors [video]. YouTube. Accessed 29 May 2023.
  5. Gao CA, Howard FM, Markov NS, et al. Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers. NPJ Digit Med. Apr 26 2023;6(1):75. doi:10.1038/s41746-023-00819-6
  6. Greenberg M. The Hydra: The Multi-Headed Serpent of Greek Myth. Mythology Source. Accessed 25 May 2023, https://mythologysource.com/hydra-serpent-greek-myth/
  7. Aslam B, Wang W, Arshad MI, et al. Antibiotic resistance: a rundown of a global crisis. Infect Drug Resist. 2018;11:1645–1658. doi:10.2147/idr.S173867
  8. World Health Organization. Antimicrobial resistance. Accessed 18 June 2023, https://www.who.int/news-room/fact-sheets/detail/antimicrobial-resistance
  9. Centers for Disease Control and Prevention. Antibiotic Resistance Threats in the United States, 2013. 2013. https://www.cdc.gov/drugresistance/threat-report-2013/pdf/ar-threats-2013-508.pdf
  10. United Nations Environment Programme. Bracing for Superbugs: Strengthening environmental action in the One Health response to antimicrobial resistance. Accessed 18 June 2023, https://www.unep.org/resources/superbugs/environmental-action
  11. Deus Ex Machina. LitCharts. Accessed 18 June 2023, https://www.litcharts.com/literary-devices-and-terms/deus-ex-machina
  12. Garg S. 110 Best ChatGPT Examples To Look At In 2023. Accessed 30 May 2023, https://writesonic.com/blog/best-chatgpt-examples/
  13. Johnson D, Goodman R, Patrinely J, et al. Assessing the Accuracy and Reliability of AI-Generated Medical Responses: An Evaluation of the Chat-GPT Model. Res Sq. Feb 28 2023. doi:10.21203/rs.3.rs-2566942/v1
  14. Moran C. ChatGPT is making up fake Guardian articles. Here’s how we’re responding. The Guardian. Accessed 27 May 2023, https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article
  15. Gravel J, D’Amours-Gravel M, Osmanlliu E. Learning to fake it: limited responses and fabricated references provided by ChatGPT for medical questions. medRxiv. 2023:2023.03.16.23286914.
  16. Hirosawa T, Harada Y, Yokose M, Sakamoto T, Kawamura R, Shimizu T. Diagnostic Accuracy of Differential-Diagnosis Lists Generated by Generative Pretrained Transformer 3 Chatbot for Clinical Vignettes with Common Chief Complaints: A Pilot Study. Int J Environ Res Public Health. Feb 15 2023;20(4). doi:10.3390/ijerph20043378
  17. Zheng H, Zhan H. ChatGPT in Scientific Writing: A Cautionary Tale. The American Journal of Medicine. doi:10.1016/j.amjmed.2023.02.011
  18. Hsu T, Thompson SA. Disinformation researchers raise alarms about AI chatbots. New York Times. Feb 8, 2023. Accessed June 15, 2023. Available at: https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html
  19. Cingillioglu I. Detecting AI-generated essays: the ChatGPT challenge. The International Journal of Information and Learning Technology. 2023;40(3):259–268. doi:10.1108/IJILT-03-2023-0043
  20. Tara R. ChatGPT Has All the Answers – But Not Always the Right Ones. Accessed 03 June 2023, https://www.engineering.com/story/chatgpt-has-all-the-answers-but-not-always-the-right-ones
  21. Heaven W. GPT-4 is bigger and better than ChatGPT—but OpenAI won’t say why. MIT Technology Review. 2023. Available at: https://www.technologyreview.com/2023/03/14/1069823/gpt-4-is-bigger-and-better-chatgpt-openai/
  22. Cascone S. In 1900, This Artist Gazed Into the Future. See How He Imagined the Year 2000 Would Look (Spoiler: It’s Very Inaccurate). Artnet Worldwide Corporation. Accessed 30 May 2023, https://news.artnet.com/art-world/french-artist-predicted-the-year-2000-2008650
  23. von Braun W. Manned Mars Landing: Presentation to the Space Task Group. NASA. 1969. Available at: https://www.nasa.gov/sites/default/files/atoms/files/19690804_manned_mars_landing_presentation_to_the_space_task_group_by_dr._wernher_von_braun.pdf
  24. Houser K. Futurism Asks: When Will Humans First Land on Mars? Accessed 16 June 2023, https://futurism.com/futurism-asks-when-will-humans-first-land-on-mars
  25. Herrman J. A Short History of Predicting the Future. The New York Times Company. Accessed 27 May 2023, https://www.nytimes.com/2021/11/23/business/dealbook/futurology-predictions.html
  26. Handwerk B. The many futuristic predictions of HG Wells that came true. Smithsonian Magazine. 2016. Available at: https://www.smithsonianmag.com/arts-culture/many-futuristic-predictions-hg-wells-came-true-180960546/
  27. Lester A. The problem with predictions. Accessed 16 June 2023, https://news.harvard.edu/gazette/story/2013/04/the-problem-with-predictions/
  28. Lewis SJ, Orland BI. The importance and impact of evidence-based medicine. J Manag Care Pharm. Sep 2004;10(5 Suppl A):S3–5. doi:10.18553/jmcp.2004.10.S5-A.S3
  29. Crawford A. Good Science Changes: That’s a Good Thing. University of Michigan School of Public Health. Accessed 12 June 2023, https://sph.umich.edu/findings/spring-2021/good-science-changes-thats-a-good-thing.html
  30. Martínez García L, Sanabria AJ, García Alvarez E, et al. The validity of recommendations from clinical guidelines: a survival analysis. Cmaj. Nov 4 2014;186(16):1211–9. doi:10.1503/cmaj.140547
  31. Shekelle PG, Ortiz E, Rhodes S, et al. Validity of the Agency for Healthcare Research and Quality clinical practice guidelines: how quickly do guidelines become outdated? Jama. Sep 26 2001;286(12):1461–7. doi:10.1001/jama.286.12.1461
  32. Gordon R. Why We Need to Challenge Medical and Surgical Dogma. Accessed 12 June 2023, https://medium.com/@rongordonmd/why-we-need-to-challenge-medical-and-surgical-dogma-8232f4a65404
  33. Farkas J. The Internet Book of Critical Care (IBCC). Metasin LLC. Accessed 01 Jan 2023, https://emcrit.org/ibcc/toc/
  34. Farkas J. PulmCrit- Top ten problems with the new sepsis definition. Metasin LLC. Accessed 26 May 2023, https://emcrit.org/pulmcrit/problems-sepsis-3-definition/
  35. Farkas J. Submassive & Massive PE. Metasin LLC. Accessed 28 May 2023, https://emcrit.org/ibcc/pe/
  36. Stein PD, Beemath A, Matta F, et al. Clinical Characteristics of Patients with Acute Pulmonary Embolism: Data from PIOPED II. The American Journal of Medicine. 2007;120(10):871–879. doi:10.1016/j.amjmed.2007.03.024
  37. McFadden PM, Ochsner JL. A history of the diagnosis and treatment of venous thrombosis and pulmonary embolism. Ochsner J. Winter 2002;4(1):9–13.
  38. Bornmann L, Haunschild R, Mutz R. Growth rates of modern science: a latent piecewise growth curve approach to model publication numbers from established and new literature databases. Humanities and Social Sciences Communications. 2021;8(1):224. doi:10.1057/s41599-021-00903-w
  39. Hogan J. The More We Know, the More Mystery There Is. Accessed 30 May 2023, https://blogs.scientificamerican.com/cross-check/the-more-we-know-the-more-mystery-there-is/
  40. Loeb A. Why the Pursuit of Scientific Knowledge Will Never End. Scientific American. Accessed 27 May 2023, https://blogs.scientificamerican.com/observations/why-the-pursuit-of-scientific-knowledge-will-never-end/
  41. Boghossian D. Certainty is the Enemy of Truth: David Deutsch’s Infinite Explanations, the Limits of Knowledge, and the Value of Human Fallibility. A Medium Corporation. Accessed 27 May 2023, https://medium.com/approximations/certainty-is-the-enemy-of-truth-david-deutschs-infinite-explanations-the-limits-of-knowledge-768de50d576a
  42. What’s Beyond Physics? | Episode 802 | Closer To Truth. Accessed 27 May 2023. https://www.youtube.com/watch?v=XlK7Yn-aMsQ. YouTube.
  43. Limpawattana P, Phungoen P, Mitsungnern T, Laosuangkoon W, Tansangworn N. Atypical presentations of older adults at the emergency department and associated factors. Archives of Gerontology and Geriatrics. 2016;62:97–102. doi:10.1016/j.archger.2015.08.016
  44. Goetz LH, Schork NJ. Personalized medicine: motivation, challenges, and progress. Fertil Steril. Jun 2018;109(6):952–963. doi:10.1016/j.fertnstert.2018.05.006
  45. Rabin RC. Dermatology Has a Problem With Skin Color. Accessed 12 June 2023, https://www.nytimes.com/2020/08/30/health/skin-diseases-black-hispanic.html
  46. Grewal US, Abduljabar H, Sulaiman K. Cultural competency in graduate medical education: A necessity for the minimization of disparities in healthcare. EClinicalMedicine. May 2021;35:100837. doi:10.1016/j.eclinm.2021.100837
  47. Mittelman M, Markham S, Taylor M. Patient commentary: Stop hyping artificial intelligence—patients will always need human doctors. BMJ : British Medical Journal (Online). 2018 Nov 07 2018-11-12 2018;363. doi:https://doi.org/10.1136/bmj.k4669
  48. DiGiorgio AM, Ehrenfeld JM. Artificial Intelligence in Medicine & ChatGPT: De-Tether the Physician. Journal of Medical Systems. 2023;47(1):32. doi:10.1007/s10916-023-01926-3
  49. Panda N, Bahdila D, Abdullah A, Ghosh AJ, Lee SY, Feldman WB. Association between USMLE Step 1 scores and in-training examination performance: a meta-analysis. Academic Medicine. 2021;96(12):1742–1754.
  50. Zuckerman SL, Kelly PD, Dewan MC, et al. Predicting Resident Performance from Preresidency Factors: A Systematic Review and Applicability to Neurosurgical Training. World Neurosurgery. 2018;110:475–484.e10. doi:10.1016/j.wneu.2017.11.078
  51. Shirkhodaie C, Avila S, Seidel H, Gibbons RD, Arora VM, Farnan JM. The Association Between USMLE Step 2 Clinical Knowledge Scores and Residency Performance: A Systematic Review and Meta-Analysis. Academic Medicine. 2022:10.1097.
  52. Sajadi-Ernazarova K, Ramoska EA, Saks MA. USMLE scores do not predict the clinical performance of emergency medicine residents. Mediterranean Journal of Emergency Medicine & Acute Care. 2020;1(2). doi:10.52544/2642-7184(1)2001
  53. Miller B, Dewar SB, Nowalk A. Lack of correlation between USMLE scores and performance in pediatrics residency training based on ACGME milestones ratings. Academic Pediatrics. 2019;19(6):e34. doi:10.1016/j.acap.2019.05.088
  54. Boden AL, Staley CA, Boissonneault AR, Bradbury TL, Boden SD, Schenker ML. Emotional Intelligence in Medical Students is Inversely Correlated with USMLE Step 1 Score: Is there a Better Way to Screen Applicants? Academic Medicine. 2017;16
  55. Bojar D. Are Healthcare Metrics Hurting Healthcare? NautilusNext Inc. Accessed 27 May 2023, https://nautil.us/are-healthcare-metrics-hurting-healthcare-237089/
  56. Survey Finds Many Primary Care Physicians Have Negative Views of the Use of Quality Metrics and Penalties for Unnecessary Hospital Readmissions. The Commonwealth Fund. Accessed 31 May 2023, https://www.commonwealthfund.org/press-release/2015/survey-finds-many-primary-care-physicians-have-negative-views-use-quality
  57. Why Quality Measures Don’t Measure Quality. Center for Healthcare Quality & Payment Reform. Accessed 18 June 2023, https://chqpr.org/downloads/Why_Quality_Measures_Do_Not_Measure_Quality.pdf
  58. Gupta A, Allen LA, Bhatt DL, et al. Association of the Hospital Readmissions Reduction Program Implementation With Readmission and Mortality Outcomes in Heart Failure. JAMA Cardiology. 2018;3(1):44–53. doi:10.1001/jamacardio.2017.4265
  59. Srivastava SB, Lauster C, Srivastava B. The patient interview. Fundam Ski Patient Care Pharm Pract. 2013;1:1–36.
  60. Vale RD. The value of asking questions. Mol Biol Cell. Mar 2013;24(6):680–2. doi:10.1091/mbc.E12-09-0660
  61. Cooper ID, Crum JA. New activities and changing roles of health sciences librarians: a systematic review, 1990–2012. Journal of the Medical Library Association: JMLA. 2013;101(4):268.
  62. Khabsa M, Giles CL. The Number of Scholarly Documents on the Public Web. PLOS ONE. 2014;9(5):e93949. doi:10.1371/journal.pone.0093949
  63. McKeever L, Nguyen V, Peterson SJ, Gomez-Perez S, Braunschweig C. Demystifying the Search Button. Journal of Parenteral and Enteral Nutrition. 2015;39(6):622–635. doi:10.1177/0148607115593791
  64. Prompt Engineering Guide. DAIR.AI. Accessed 03 June 2023, https://www.promptingguide.ai/
  65. Yalçın N, Köse U. What is search engine optimization: SEO? Procedia – Social and Behavioral Sciences. 2010;9:487–493. doi:10.1016/j.sbspro.2010.12.185
  66. Caron C. Teens turn to TikTok in search of a mental health diagnosis. The New York Times. Oct 29, 2022. Available at: https://www.nytimes.com/2022/10/29/well/mind/tiktok-mental-illness-diagnosis.html
  67. Moore J. What Sir Luke Fildes’ 1887 painting The Doctor can teach us about the practice of medicine today. Br J Gen Pract. Mar 2008;58(548):210–3. doi:10.3399/bjgp08X279571
  68. Wailoo K. Patients Are Humans Too: The Emergence of Medical Humanities. Daedalus. 2022;151(3):194–205.
  69. Cohen SM, Dai A, Katz JT, Ganske IM. Art in Surgery: A Review of Art-based Medical Humanities Curricula in Surgical Residency. Journal of Surgical Education. 2023;80(3):393–406. doi:10.1016/j.jsurg.2022.10.008
  70. Decety J. Empathy in medicine: what it is, and how much we really need it. The American journal of medicine. 2020;133(5):561–566.
  71. Abraham Verghese: A doctor’s touch. Accessed 30 May 2023. https://www.youtube.com/watch?v=sxnlvwprf_c. YouTube.
  72. Ayers JW, Poliak A, Dredze M, Leas EC, Zhu Z, Kelley JB, Faix DJ, Goodman AM, Longhurst CA, Hogarth M, Smith DM. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Intern Med. 2023 Jun 1;183(6):589–596. doi:10.1001/jamainternmed.2023.1838.
  73. Esmaeilzadeh P, Mirzaei T, Dharanikota S. Patients’ Perceptions Toward Human-Artificial Intelligence Interaction in Health Care: Experimental Study. J Med Internet Res. Nov 25 2021;23(11):e25856. doi:10.2196/25856
  74. Gray K. What Psychology Tells Us About Why We Can’t Trust Machines. Accessed 12 June 2023, https://www.dukece.com/insights/what-psychology-tells-us-about-why-we-cant-trust-machines/
  75. Knowledge at Wharton Staff. Confidence Games: Why People Don’t Trust Machines to Be Right. Accessed 12 June 2023, https://knowledge.wharton.upenn.edu/article/why-people-dont-trust-machines-to-be-right/
  76. King S. Krista 2023 AI Trust Survey. Krista. Accessed 18 June 2023, https://krista.ai/ai-trust-survey-2023/
  77. Cervellin G, Borghi L, Lippi G. Do clinicians decide relying primarily on Bayesian principles or on Gestalt perception? Some pearls and pitfalls of Gestalt perception in medicine. Internal and Emergency Medicine. Italy: Springer; 2014. p. 513–519.
  78. Peterkin A. Physician intuition. Cmaj. Apr 10 2017;189(14):E544. doi:10.1503/cmaj.160972
  79. Woolley A, Kostopoulou O. Clinical intuition in family medicine: more than first impressions. Ann Fam Med. Jan–Feb 2013;11(1):60–6. doi:10.1370/afm.1433
  80. Van den Brink N, Holbrechts B, Brand PLP, Stolper ECF, Van Royen P. Role of intuitive knowledge in the diagnostic reasoning of hospital specialists: a focus group study. BMJ Open. Jan 28 2019;9(1):e022724. doi:10.1136/bmjopen-2018-022724
  81. Stolper CF, van de Wiel MWJ, van Bokhoven MA, Dinant GJ, Van Royen P. Patients’ gut feelings seem useful in primary care professionals’ decision making. BMC Primary Care. 2022;23(1):178. doi:10.1186/s12875-022-01794-9
  82. Hall KH. Reviewing intuitive decision-making and uncertainty: the implications for medical education. Medical Education. 2002;36(3):216–224. doi:10.1046/j.1365-2923.2002.01140.x
  83. das Virgens CM, Lemos L, Jr., Noya-Rabelo M, et al. Accuracy of gestalt perception of acute chest pain in predicting coronary artery disease. World J Cardiol. Mar 26 2017;9(3):241–247. doi:10.4330/wjc.v9.i3.241
  84. Tanner NT, Porter A, Gould MK, Li X-J, Vachani A, Silvestri GA. Physician Assessment of Pretest Probability of Malignancy and Adherence With Guidelines for Pulmonary Nodule Evaluation. Chest. 2017;152(2):263–270. doi:10.1016/j.chest.2017.01.018
  85. Janneke MTH, Wim AML, Petra MGE, et al. Ruling Out Pulmonary Embolism in Primary Care: Comparison of the Diagnostic Performance of “Gestalt” and the Wells Rule. The Annals of Family Medicine. 2016;14(3):227. doi:10.1370/afm.1930
  86. Ariella PD, Christian M, Mark HE. Clinical gestalt to diagnose pneumonia, sinusitis, and pharyngitis: a meta-analysis. British Journal of General Practice. 2019;69(684):e444. doi:10.3399/bjgp19X704297
  87. Luu IHY, Frijns T, Buijs J, et al. Systematic screening versus clinical gestalt in the diagnosis of pulmonary embolism in COVID-19 patients in the emergency department. PLOS ONE. 2023;18(3):e0283459. doi:10.1371/journal.pone.0283459
  88. Gladwell M. Blink: The power of thinking without thinking. 2006.
  89. Fergus S, Botha M, Ostovar M. Evaluating Academic Answers Generated Using ChatGPT. Journal of Chemical Education. 2023;100(4):1672–1675. doi:10.1021/acs.jchemed.3c00087
  90. Randieri C. Can AI Replace Human Curiosity? Forbes, Inc. Accessed 29 May 2023, https://www.forbes.com/sites/forbestechcouncil/2023/03/22/can-ai-replace-human-curiosity/?sh=55b0a0e91991
  91. Halamka JD. ChatGPT and AI integration in health care with John D. Halamka, MD, MS. In: Unger T, editor. AMA Update. AMA; 2023.
  92. French K. What’s So Hard About Understanding Consciousness? NautilusNext Inc. Accessed 27 May 2023, https://nautil.us/whats-so-hard-about-understanding-consciousness-238421/
  93. De Cremer D, Kasparov G. AI should augment human intelligence, not replace it. Harvard Business Review. 2021. Available at: https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it
  94. Li W, Khan AN. Investigating the Impacts of Information Overload on Psychological Well-being of Healthcare Professionals: Role of COVID-19 Stressor. Inquiry. Jan–Dec 2022;59:469580221109677. doi:10.1177/00469580221109677
  95. Anderson C, Peterson C, Dennis J. Mass publication during the COVID-19 pandemic: too much of a good thing? The Southwest Respiratory and Critical Care Chronicles. 2022;10(42):22–24.
  96. Herasevich V, Pickering B, Gajic O. How Mayo Clinic is combating information overload in critical care units. Harvard Business Review. 2018. Available at: https://hbr.org/2018/03/how-mayo-clinic-is-combating-information-overload-in-critical-care-units
  97. Parikh R. AI can’t replace doctors. But it can make them better. MIT Technology Review. 2018;121(6):28–29.
  98. Kane L. ‘I Cry but No One Cares’: Physician Burnout & Depression Report 2023. WebMD LLC. Accessed 29 May 2023, https://www.medscape.com/slideshow/2023-lifestyle-burnout-6016058#1
  99. Patel SB, Lam K. ChatGPT: the future of discharge summaries? The Lancet Digital Health. 2023;5(3):e107–e108. doi:10.1016/S2589-7500(23)00021-3
  100. Landi H. Doximity rolls out beta version of ChatGPT tool for docs aiming to streamline administrative paperwork. Accessed 24 June 2023, https://www.fiercehealthcare.com/health-tech/doximity-rolls-out-beta-version-chatgpt-tool-docs-aiming-streamline-administrative
  101. Rafferty JP. The rise of the machines: Pros and cons of the industrial revolution. Encyclopedia Britannica. 2019. Available at: https://www.britannica.com/story/the-rise-of-the-machines-pros-and-cons-of-the-industrial-revolution
  102. Park A. Cardiologist Eric Topol on How AI Can Bring Humanity Back to Medicine. TIME USA, LLC. Accessed 29 May 2023, https://time.com/collection/life-reinvented/5551296/cardiologist-eric-topol-artificial-intelligence-interview/
  103. Stross R. Failing like a buggy whip maker? Better check your simile. New York Times. 2010:4.
  104. Newman R. 10 Great Companies That Lost Their Edge. Accessed 24 June 2023, https://money.usnews.com/money/blogs/flowchart/2010/08/19/10-great-companies-that-lost-their-edge
  105. Nunes A. Automation Doesn’t Just Create or Destroy Jobs–It Transforms Them. Harvard Business Review. 2021.
  106. Lirici MM. Current ‘robotic surgery’: a real breakthrough or a misleading definition of laparoscopy with remote control of mechatronic instrumentation? Minimally Invasive Therapy & Allied Technologies. 2022;31(7):979–980. doi:10.1080/13645706.2022.2119416
  107. Rosenkrantz AB, Hughes DR, Duszak R, Jr. Increasing Subspecialization of the National Radiologist Workforce. Journal of the American College of Radiology. 2020;17(6):812–818. doi:10.1016/j.jacr.2019.11.027
  108. Brady AP, Beets-Tan RG, Brkljačić B, et al. The role of radiologist in the changing world of healthcare: a White Paper of the European Society of Radiology (ESR). Insights into Imaging. 2022;13(1):100. doi:10.1186/s13244-022-01241-4
  109. Vogel L. Doctors need retraining to keep up with technological change. Cmaj. Jul 30 2018;190(30):E920. doi:10.1503/cmaj.109-5637
  110. Ahuja AS. The impact of artificial intelligence in medicine on the future role of the physician. PeerJ. 2019;7:e7702.
  111. Liu P-r, Lu L, Zhang J-y, Huo T-t, Liu S-x, Ye Z-w. Application of Artificial Intelligence in Medicine: An Overview. Current Medical Science. 2021;41(6):1105–1115. doi:10.1007/s11596-021-2474-3
  112. ANI. AI won’t replace doctors, but doctors who don’t use AI will be replaced: Sangeeta Reddy of Apollo Hospitals. Business Insider India. Available at: https://www.businessinsider.in/science/health/news/ai-wont-replace-doctors-but-doctors-who-dont-use-ai-will-be-replaced-sangeeta-reddy-of-apollo-hospitals/articleshow/97900892.cms
  113. Scott, Peter Graham. “The General.” The Prisoner. Everyman Films. Nov 5, 1967.


Article citation: Peterson CJ. ChatGPT and Medicine: Fears, Fantasy, and the Future of Physicians. The Southwest Respiratory and Critical Care Chronicles 2023;11(48):18–30
From: Department of Internal Medicine, Virginia Tech School of Medicine, Roanoke, VA
Submitted: 6/18/2023
Accepted: 6/28/2023
Conflicts of interest: none
This work is licensed under a Creative Commons
Attribution-ShareAlike 4.0 International License.