Best Medical AI: Cognitive Computing or Large Language Models?

by Barry P Chaiken, MD


Cognitive computing and large language models are two approaches to AI whose names are often used interchangeably, yet they are distinct technologies.

Cognitive computing is a broad category of AI technologies that mimic human cognition and problem-solving. It encompasses natural language processing, machine learning, computer vision, and other AI techniques. Cognitive computing systems can learn from their interactions with humans and improve their performance over time. The aim is to build systems that understand, reason, learn, and interact with people more naturally and intuitively.

On the other hand, large language models are deep learning models trained on massive amounts of text data to generate human-like text. These models can perform various natural language processing tasks, such as language translation, text summarization, and sentiment analysis. They use unsupervised learning techniques to learn the statistical patterns of language and can be fine-tuned for specific tasks. These models use a type of neural network called a transformer, which can process and generate text with far more sophistication than earlier language models.
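
As a concrete illustration, the short Python sketch below applies a pretrained transformer to two of the tasks just mentioned. It assumes the open-source Hugging Face transformers library and its default, general-purpose (not medical) models; the sample text and outputs are illustrative only.

    # A minimal sketch, assuming the Hugging Face "transformers" library
    # (pip install transformers torch). Default models download on first use.
    from transformers import pipeline

    # Sentiment analysis with a pretrained, fine-tuned transformer.
    sentiment = pipeline("sentiment-analysis")
    print(sentiment("The patient reports feeling much better after treatment."))
    # e.g., [{'label': 'POSITIVE', 'score': 0.99...}]

    # Text summarization with a second pretrained model.
    summarizer = pipeline("summarization")
    long_text = (
        "Large language models are trained on massive text corpora and can "
        "translate, summarize, and classify text with little task-specific "
        "engineering. They are increasingly being evaluated in healthcare."
    )
    print(summarizer(long_text, max_length=40, min_length=10))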

Model Differences

The main difference between cognitive computing and large language models is that cognitive computing is a broader concept encompassing a wide range of AI technologies and applications, with a focus on building systems that can think and reason like humans. In contrast, large language models are machine learning models focused on generating high-quality text that is difficult to distinguish from human writing.

Both cognitive computing and large language models can be used in medical care decision-making, but they serve different purposes and have different strengths.

Cognitive computing is well-suited for complex decision-making tasks that require a high level of reasoning and judgment. Cognitive computing systems such as IBM Watson are designed to analyze and interpret complex data from various sources, including medical records, clinical guidelines, and research papers. These systems use natural language processing, machine learning, and other AI techniques to understand the context and meaning of the data, identify patterns and trends, and offer insights and recommendations that help clinicians make more informed decisions.
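
The sketch below shows, in miniature and with entirely hypothetical data, the kind of pattern-finding step such a system might perform on structured patient records. It assumes the scikit-learn library; a real decision-support platform would involve far more data, validation, and clinical oversight.

    # A minimal sketch, assuming scikit-learn; the features, data, and
    # outcome labels are hypothetical and for illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical features per patient: [age, systolic BP, HbA1c]
    X = np.array([
        [54, 150, 8.1],
        [39, 118, 5.4],
        [67, 162, 9.0],
        [45, 121, 5.6],
    ])
    y = np.array([1, 0, 1, 0])  # 1 = prior adverse outcome, 0 = none

    model = LogisticRegression().fit(X, y)

    # Score a new patient; the output is decision support, not a decision.
    risk = model.predict_proba(np.array([[60, 155, 8.4]]))[0, 1]
    print(f"Estimated risk of adverse outcome: {risk:.2f}")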

Large language models, on the other hand, are particularly good at generating human-like text, such as clinical notes or patient summaries, that can ease documentation and support medical decision-making. These models can also analyze medical data, such as the free text in electronic medical records, and extract insights, for example by identifying patterns in patient symptoms or predicting disease outcomes. They can also automate specific tasks, such as medical transcription, clinical documentation, and patient communication.
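
To illustrate the extraction use case, the sketch below runs a named-entity recognition pipeline over a fabricated note fragment. It again assumes the Hugging Face transformers library; the default model is a general-purpose English NER model, so a real deployment would substitute one fine-tuned on clinical text.

    # A minimal sketch, assuming the Hugging Face "transformers" library.
    # The default NER model is general-purpose English, not clinical.
    from transformers import pipeline

    ner = pipeline("ner", aggregation_strategy="simple")
    note = (
        "Patient seen at Boston General on March 2; reports chest pain "
        "radiating to the left arm and a history of hypertension."
    )
    # Print each entity span the model recognizes in the note.
    for entity in ner(note):
        print(entity["entity_group"], "->", entity["word"])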

Choosing an Approach

Ultimately, the choice of which system to use for medical care decision-making will depend on the specific needs of the healthcare organization and the nature of the decision-making task. A cognitive computing system may be more appropriate in some cases, while a large language model may be more effective in others. It is also worth noting that cognitive computing and large language models are still relatively new technologies in healthcare, and their effectiveness in real-world settings is still being evaluated.

Here are some references that you may find helpful in understanding cognitive computing and large language models.

Cognitive Computing:

  • “Cognitive Computing: A Brief Guide for Game Changers” by Peter Fingar (2016)
  • “Cognitive Computing and Big Data Analytics” by Judith Hurwitz, Marcia Kaufman, and Adrian Bowles (2015)

Large Language Models:

  • “Attention Is All You Need” by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin (2017)
  • “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding” by Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova (2018)
  • “GPT-3: Language Models are Few-Shot Learners” by Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei (2020)

These resources provide an excellent introduction to cognitive computing and large language models, but there are many other sources available online and in the academic literature that you may find helpful depending on your specific interests and level of expertise.

Author Note: I asked ChatGPT (4.0) to write an article explaining the difference between cognitive computing and large language models. By requesting several “regenerations” of the responses, I constructed a more informative article from pieces of each version.


Hallucination in AI refers to the generation of outputs that may sound plausible but are either factually incorrect or unrelated to the given context. These outputs often emerge from the AI model’s inherent biases, lack of real-world understanding, or training data limitations. In other words, the AI system “hallucinates” information that it has not been explicitly trained on, leading to unreliable or misleading responses.

Source: https://bernardmarr.com/chatgpt-what-are-hallucinations-and-why-are-they-a-problem-for-ai-systems/


Hallucinations – Cognitive Computing:

  1. “Cognitive Computing and the Future of Healthcare” by Kevin Desouza and Kendra Smith (2017)
  2. “What Is Cognitive Computing?” by IBM: https://www.ibm.com/cloud/learn/cognitive-computing
  3. “Cognitive Computing: A Brief Guide for Game-Changers” by Deloitte: https://www2.deloitte.com/content/dam/Deloitte/uk/Documents/deloitte-analytics/deloitte-uk-cognitive-computing-a-brief-guide-for-game-changers.pdf
  4. “Cognitive Computing and Artificial Intelligence: A Primer” by Harvard Business Review: https://hbr.org/2016/11/cognitive-computing-and-artificial-intelligence-a-primer

Hallucinations – Large Language Models:

  1. “The AI Language Models That Are Too Dangerous to Make” by MIT Technology Review: https://www.technologyreview.com/2022/02/22/1072585/ai-language-models-dangerous-to-make/
  2. “Language Models are Few-Shot Learners” by OpenAI: https://arxiv.org/abs/2005.14165
  3. “GPT-3: Language Models are Few-Shot Learners” by AI21 Labs: https://www.ai21.com/blog/gpt-3-language-models-are-few-shot-learners
