For Type 1, Type 2, and hospital diabetes management, wearable tech provides measurable benefits. The wearable links to an insulin pump and a glucose monitor. Either a wired or wireless device monitors glucose levels and controls insulin release. Data moves to the cloud, where clinicians monitor the patient’s condition. Researchers are exploring using artificial intelligence to monitor patients and customize insulin control. Success with these devices does not imply that other mobile health apps provide similar benefits; each must be studied separately.
Health disinformation expanded greatly during the COVID-19 pandemic, decreasing the number of people receiving protective vaccinations. The politically charged vaccine debate encouraged individuals to generate misleading and factually wrong content distributed through media channels, social media, and targeted emails. Dr. Peter Hotez, a researcher at Texas Children’s Hospital, estimates that the disinformation led to an excess of more than 200,000 deaths.
With the expansion of bad actors using AI to generate content, I expect an exponential increase in healthcare disinformation. A recent study in JAMA Internal Medicine documents the generation of over 100 distinct blog articles containing over 20,000 words of disinformation on preselected topics – vaccines and vaping – in just 65 minutes. AI also generated both images and videos to support the disinformation.
While it is concerning that the proliferation of disinformation has complicated our political discussion, healthcare disinformation presents a threat that is both immediate and personal.
As more pathogens become antibiotic-resistant, you would expect drug companies to ramp up R&D to deliver new treatments. Unfortunately, antibiotics do not offer the same profits as chronic disease drugs. A partnership linking a for-profit pharmaceutical company with a non-profit organization may be a path forward that delivers needed antibiotics to high-income countries at a fair market price and to middle- and low-income countries at lower prices.
While medical AI offers promise in drug discovery, reduction in diagnostic errors, and improvement in personalized treatment plans, I worry that the lack of proper oversight will deliver sub-optimal products and unintended outcomes. While the FDA reviews all medical AI products released for sale, software developed by provider organizations and deployed internally requires no FDA evaluation.
Large language models and their AI output are only as good as the data fed into the model. Small data unrepresentative of the targeted population generates hallucinations – fabricated or incorrect results.
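A toy simulation makes the point concrete (the numbers, groups, and prevalences here are entirely hypothetical, not drawn from any real model or dataset): a small sample that does not represent the target population produces a confidently wrong answer.

```python
import random

random.seed(42)

# Hypothetical population of 10,000 patients in two groups with very
# different disease prevalence: group A (30% of patients, 40% prevalence)
# and group B (70% of patients, 5% prevalence).
population = (
    [("A", random.random() < 0.40) for _ in range(3_000)]
    + [("B", random.random() < 0.05) for _ in range(7_000)]
)

true_prevalence = sum(sick for _, sick in population) / len(population)

# "Small data": a convenience sample drawn only from group A,
# e.g. a model trained on one hospital's records.
biased_sample = [sick for group, sick in population if group == "A"][:200]
biased_estimate = sum(biased_sample) / len(biased_sample)

print(f"True prevalence:  {true_prevalence:.1%}")   # roughly 15%
print(f"Biased estimate:  {biased_estimate:.1%}")   # roughly 40%
```

A model fed only the biased sample would overestimate prevalence by more than double, and it would report that answer with no hint that anything was wrong. That is the quiet failure mode behind unrepresentative training data.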
We do not need programs like the one developed by a vendor for sepsis detection that flagged one in five patients while only 12 percent developed sepsis. A study of 118 FDA-approved programs revealed that the majority were tested on fewer than 500 cases.
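To see why those numbers are so damning, here is a back-of-the-envelope sketch (the cohort size is an arbitrary illustration, and I am reading the 12 percent figure as the share of flagged patients who went on to develop sepsis):

```python
# Illustrative cohort; only the ratios from the post matter.
patients = 1_000
flagged = patients // 5                 # the tool flags one in five patients
sepsis_rate_among_flagged = 0.12        # only 12% of flagged patients develop sepsis

true_alerts = int(flagged * sepsis_rate_among_flagged)
false_alarms = flagged - true_alerts

print(f"Flagged patients: {flagged}")   # 200
print(f"True alerts:      {true_alerts}")   # 24
print(f"False alarms:     {false_alarms}")  # 176
```

At that positive predictive value, clinicians see roughly seven false alarms for every true one, a recipe for alert fatigue rather than earlier treatment.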
We need to get medical AI right. It starts with strict oversight by the developing organization and outside independent parties, and complete transparency by every organization that develops and uses AI.
The President’s executive order on artificial intelligence offers a first step to establishing reasonable guardrails around AI. No one wants to see bad actors use AI to develop biological weapons, unsafe products, or commit fraud. However, designing safeguards and putting them into practice presents significant obstacles.
Companies that build large language models must offer transparency into the model’s structure by revealing the data used to train it and the “intent” instructions that ensure the model protects human interests. Anthropic, an AI company, is building its models with a built-in “intent constitution” based on the UN’s Universal Declaration of Human Rights. While I am concerned that tech companies, driven by profit, may bypass some necessary safeguards, my greatest fear is bad state actors using AI, ignoring human rights, and advancing an evil agenda.
How easy is it for us to miss an email? Neglecting a health insurance enrollment email can deliver financial disaster. No health insurance! October through December is the open enrollment period for most employer plans. If you forget to re-enroll, the coverage you get may not be what you need. To protect families, organizations should default to rolling the current year’s coverage over to the next. Maintaining health insurance should be easy, not hard.
While the U.K. maintains a valuable trove of patient information that can significantly improve patient care and drive medical research, the data, as in the U.S., remains siloed in separate, incompatible data sets. In September, the NHS awarded a 12-month contract to the existing database provider to bring the data together. We should pay close attention to the effort and learn how we might achieve the same here.
The EHR provides physicians with voluminous patient data filled with redundant and sometimes incorrect information. Clinicians spend unreasonable amounts of time combing through the data to discover the relevant information required to direct care. Google’s generative AI search claims to allow physicians to ask the EHR straightforward questions and obtain actionable, specific, and quickly returned answers. Google’s AI-driven tool can provide search across multiple systems and existing data formats.
Due to the rapid advancement of AI, the creation of an AI chat version of each of us will become a reality. But who owns the IP, and how can it be protected?
The bipartisan “NO FAKES Act” of 2023 takes steps to set rules protecting our digital replicas.
When computers became commonplace in the workspace, they replaced staff that took dictation and typed letters. But not all of these workers lost their jobs. Those who pivoted to using word processing and slide-making software transferred their typing and design skills to new formats.
With generative AI infiltrating the workforce, employees feel threatened that AI will replace them. However, AI does not have knowledge or intent. Only humans do (for now).
Employees who want to stay ahead of the AI revolution must pivot, as did those before them who were threatened by new technology. The way to combine human intelligence and experience with AI is to create high-quality prompts. Therefore, the path to successfully using AI and protecting your job is to become a prompt engineer. This capability does not require computer science skills, only practice working with AI. Learning which words are most effective in prompts, combined with focused practice, will train you to be a prompt engineer. As your skills grow, you will notice an increase in the quality and efficiency of your work.
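As a concrete illustration (both prompts are hypothetical examples I made up, not a prescribed template), compare a vague prompt with one that supplies role, audience, format, and constraints:

```python
vague_prompt = "Write something about our quarterly results."

# A higher-quality prompt spells out role, audience, format, and
# constraints -- the elements that practice in prompt engineering trains.
engineered_prompt = (
    "You are a financial communications assistant. "
    "Summarize the attached Q3 results for a non-technical board audience. "
    "Use three bullet points, each under 25 words, "
    "and flag any metric that changed more than 10% from Q2."
)

# A playful proxy for prompt quality (not a real metric): count how many
# of these structural cue words each prompt contains.
elements = ["assistant", "audience", "bullet", "words", "flag"]
score = sum(word in engineered_prompt for word in elements)
print(f"Structured elements found: {score} of {len(elements)}")
```

The second prompt leaves the model far less room to guess, which is exactly the skill the pivot to prompt engineering builds.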
AI will only replace you if you let it.
While AI can enhance care delivery, it will only partially replace physicians. AI-based pattern recognition software already helps evaluate clinical images, such as chest X-rays and screening mammograms. Yet not all clinical work is cognitive; humans must still be present to perform procedures such as cardiac catheterization and cystoscopy.
While AI can assist specialists, it also has a role in boosting the documentation productivity of PCPs. However, EHRs as currently designed do not lend themselves to AI enhancement.
Pharmaceutical companies are fighting hard to limit the impact of the Inflation Reduction Act, which allows CMS to negotiate drug costs with pharma companies. The Administration has listed ten drugs for which it will negotiate pricing, with the negotiated rates taking effect in 2026. The medications chosen had to meet specific criteria: 1) currently under patent and 2) highly prescribed for Medicare beneficiaries.
In addition, the government chose drugs that have been on the market long enough for the pharma companies to receive significant revenue but are still several years away from generic versions becoming available. Choosing drugs newly released to the market would have discouraged companies from developing new medicines by limiting a new drug’s potential revenue. Targeting drugs close to the availability of generics would make little difference in savings, as competition from generics already works to lower prices.
This approach by the Administration strikes an excellent balance, protecting consumers from high drug costs while preserving reasonable profits from drug development for pharma companies.
Remember the date – November 30, 2022. Before that date, AI helped organize and curate content. After that date, thanks to applications like ChatGPT, it can also create it. And now, a federal judge has ruled that AI-created art cannot be copyrighted because no human being created it. What does that mean for authors, artists, software programmers, and researchers who create for a living? I expect much debate over the rules that need writing to manage intellectual property created by or with AI.
Medications are a national security issue. When patients, especially children, lack needed drugs, we must take steps to fix the problem. Children with ADHD are unable to obtain the generic medications that control their symptoms. In an earlier post, I shared how cancer patients are dying from the shortage of chemotherapeutic generics.
Perhaps we need a “Drug Czar” not to fight illicit drugs but to ensure an adequate supply of legal life-saving medicines.
Several years back, with a data science colleague, I flirted with creating a machine-learning stock trading algorithm. While my project never launched, others, much more knowledgeable of financial markets, marched on. Several leading firms on Wall Street specialize in computer-driven ML trading algorithms. The advancement in AI has exponentially accelerated this work. There are dangers ahead.
If we do not understand how these AI programs make decisions, how can we ensure they will not run amok? We have already seen how a simple data entry error in one of these programs can generate autonomous computer-driven panic selling in seconds, preventing humans from intervening. We now have trading stops after percentage price swings, but is that enough? What is our response?
According to an ONC analysis using the 2022 American Hospital Association Information Technology Supplement, 83% of hospitals collect SDOH data. SDOH data describes the environments in which people live, work, and play. We know that SDOH data is helpful in developing effective treatment plans.
Although hospitals collect a broad range of SDOH information in EMRs, its impact on care needs clarification. Making SDOH data part of the clinical workflow and embedding SDOH analytics within the clinical workflow is the only effective way to use the data to improve clinical and financial outcomes.
While the focus on administrative burden is mostly on physicians, patients also strain under its weight. In this NY Times article, a maternal-fetal medicine physician describes the experience of a pregnant woman who almost died from sepsis due to the inability to obtain a $12 antibiotic. The obstacle was not cost but the administrative burden of filling the prescription under her health coverage.
On a personal note, I am four months into an administrative burden scenario of my own, trying to correct payer errors caused by my previous employer’s HR department. Multiple letters and calls to payers and providers seem to waste time and money. If a healthcare professional like me struggles with this, I can only imagine the strain felt by others trying to manage the administrative burden for themselves or their loved ones. This is something we can and should fix ASAP.
A report from the University of California, Berkeley, documents the rapid expansion of private equity (PE) into healthcare delivery as firms buy up numerous primary and specialty practices. In some regions, their market share is so high that they can extract higher fees. In 13% of metropolitan statistical areas (MSAs), a single PE firm’s market share exceeds 50%. In regions where a PE firm controls more than 30% of the market, prices for gastroenterology services increased by 18%, OBGYN by 16%, and dermatology by 13%.
No one schooled in economics should be surprised by these findings. When the number of sellers shrinks, prices rise. But what about quality of care and access? The report demands further study of this issue and calls for public reporting on related metrics.
The concentration of services in a few sellers stifles innovation, reduces quality, and raises prices. While this concentration of sellers is unacceptable for any market, I am apprehensive about how such market dynamics will impact quality, safety, access, and the already high cost of healthcare.
Bicillin, a brand of penicillin G manufactured by Pfizer, is in short supply during our current syphilis epidemic. Pfizer will ramp up production by 50%, but not until 2024. Meanwhile, pregnant mothers and their babies are at great risk. How does this happen?
The free market is not working efficiently here; supply is failing to satisfy demand. In such instances, the government often intervenes to stabilize the market. What that intervention should be, I do not yet know. But it is time we brought all stakeholders together, patients, providers, payors, employers, and government, to develop a plan that starts us on the way to solving this medication shortage problem. Until then, we will have worse outcomes and higher costs.
Improperly deployed EHRs that did not incorporate clinical workflow into their implementation led to inefficient processes, decreased productivity, and poorer clinical and financial outcomes. In addition, physician burnout is tied directly to these poor workflows.
Here comes AI, which will disrupt processes again. Workflow design is critical to delivering expected outcomes. Will we make the same mistake and fail to redesign our work, or will we be proactive and leverage AI in the best possible way? I hope for the latter, but history makes me worry about the former.
Radiology AI is not widely implemented in many high-income countries. Yet adoption has been rapid for specific use cases in some low-income and middle-income settings. The use of radiology AI centered on accurate chest X-ray diagnosis of tuberculosis is an exemplar.
While researchers focus on using AI for patient diagnosis and treatment, many physicians use it to enhance compassionate patient communication. Unsurprisingly, AI works effectively here, considering it learns from millions of pieces of similar content and identifies the most common themes through statistical methods. It also learns from related correspondence that expresses compassion.
In many ways, AI can tease out what is “good” in how patients and physicians communicate. I expect more positive unintended uses for AI. The journey is just beginning.
While AI looks to replace workers in many industries, healthcare will always require humans to deliver care. AI will significantly impact non-clinical jobs as healthcare administrative work is most closely aligned with banking and insurance.
To protect workers, healthcare organizations must start planning to retrain these administrative workers, allowing for their redeployment to patient-facing roles. Redeployment will increase provider efficiency and enable these workers to continue contributing to their organizations.
Unless we address clinician burnout, there will not be enough of them to care for us all. We must reduce the clinician documentation burden by removing any administrative and non-clinical content requirements from clinicians.
We must finally implement a unique patient ID, first mandated in the mid-1990s under HIPAA. Interoperability continues to be a problem that must be fixed.
If we want to maintain our supply of clinicians while enhancing patient safety, quality, and access to care while managing costs, treating our diseased EHR provides a good path forward.
While cheap meds might be attractive when abroad during your summer travels, buyer beware. Fake drugs may lack active ingredients or include harmful ingredients. While less than 1% of drugs in the US are fake, that number rises to over 40% in some middle and low-income countries.
Counterfeit medicines are usually packaged to look like the real thing. Other health products, such as medical devices, mosquito nets, vaccines, or bug sprays, can also be fakes.
It is hard to spot counterfeit drugs; the only reliable way to know if a drug is counterfeit is through chemical analysis in a laboratory. Sometimes counterfeit drugs differ in size, shape, or color, or are sold in poor-quality packaging, but they often appear identical to the real thing.
With the Internet, we can research much faster than visiting the library. But does that make us more productive? The average time spent on a single computer screen is now 47 seconds, down from 150 seconds in 2004. AI can generate thousands of pages of content in seconds, but we know AI creates hallucinations – falsehoods and fabricated content.
Perhaps the greatest threat of AI is not a cataclysmic event started by AI but its ability to overwhelm us with garbage information, making us dumber and less productive. How can we sift through it all to get to what is relevant?
A 2023 UN report, “Bracing for Superbugs,” predicts that antimicrobial-resistant infections (AMRs) will kill more people than cancer by 2050. While we should ramp up our R&D to develop new antibiotics, pharmaceutical companies expect minor profits from such therapeutics and so invest in more profitable lines. Our prodigious use of antibiotics in animal feed to fatten livestock, and the profits tied to it, is directly linked to the increase in the number of these superbugs. Unless we shift our priorities, the challenge AMRs pose will only increase.
ChatGPT frequently gains capabilities through new plugins that open up AI to regular users. With these new tools, ChatGPT interprets documents, searches for jobs, constructs AI prompts, and writes code.
Although AI cannot replace healthcare workers, its expanding utility is vital in delivering care. By learning to leverage these tools early, we can improve care delivery safety, quality, and efficiency.
The critical shortage of cancer drugs forces patients to scramble for doses and skip treatments. Many of these are inexpensive generic medications. Congress is holding hearings while patients are waiting for life-saving care.
Sam Altman, CEO of OpenAI, testified in Congress, calling for federal regulations to govern the development and use of AI systems. There is bipartisan sentiment that rules are needed to ensure the proper use of AI to protect consumers.
Rather than coming late to the game, as ONC did with pushing for healthcare data interoperability, the agency needs to move quickly to establish a structure that includes all stakeholders to set rules for using AI in healthcare delivery.
We cannot assume the broad rules set by Congress will adequately address the nuances and particularities of healthcare data and its use.
Big Tech became big through its overwhelming dominance of Web 2.0 in the mid-2000s. Think Amazon, Facebook, and Google, among others. These companies revolutionized communication and significantly changed our society by perfecting ways to monetize data collected from our use of their online social networks and buying sites.
While generative A.I. continues to evolve and its eventual impact is unknown, the changes it will bring to society will be revolutionary.
According to Lina Khan, current Chair of the Federal Trade Commission, the U.S. needs to put together meaningful plans to regulate A.I. Without oversight, Khan believes A.I. market power will concentrate in the few big tech companies that currently dominate our society, stifling innovation and competition and delivering unintended negative consequences.
We need to move forward carefully with A.I. development and some regulation. What should that be? I am still in the data collection phase. How should A.I. be regulated? Please leave your comments in this post.
Geoffrey Hinton, a winner of the Turing Award, often called the Nobel Prize of computing, quit Google to warn of the dangers of AI. Hinton believes AI has advanced to exceed human intelligence in some areas. Recently, a bipartisan group of legislators submitted legislation to prevent the military from using AI to control nuclear weapons.
With all this discussion on the dangers of AI, it is time for healthcare to ramp up its debate on how to regulate the use of AI in healthcare delivery. It is not hyperbole to say that our lives depend upon what we do.
A recent study in JAMA Internal Medicine reported that patients in an online forum rated chatbot responses as high quality and empathetic. Does that mean AI will start to replace physicians interacting with patients? I seriously doubt it. In its current state, AI is incapable of delivering reliable diagnostic and therapeutic plans, considering the risk of “hallucinations” and other misinformation. Healthcare is as much about human interaction as it is about science.
Much development work is left to do before Star Trek’s Emergency Medical Hologram becomes a reality. What’s your take? Leave your comments in this post.
You may think AI is something to watch from the sidelines, but you might be missing something. AI now offers real value to non-scientists by helping them complete routine tasks. In this recent NYTimes article, 35 ordinary people share how they use AI. Here are some examples:
● Organize a messy computer desktop.
● Plan a garden plot.
● Write a wedding speech.
● Cope with ADHD.
● Make a digital music playlist.
How are you using AI? If not, why not? Leave your comments in this post.
In the Economist article titled “The World Needs an International Agency for Artificial Intelligence,” two AI experts discuss the need for a global agency to set governance for artificial intelligence (AI). The authors, Yoshua Bengio and Stuart Russell, argue that such an agency could help mitigate the risks associated with AI and ensure that people and companies use the technology to benefit humanity.
Bengio and Russell highlight the potential dangers of AI, such as the development of autonomous weapons, the exacerbation of economic inequality, and the erosion of privacy. They argue that an international agency could address these concerns by promoting transparency, accountability, and collaboration among AI researchers and policymakers.
The authors propose that the agency should have three main functions:
• Setting technical standards for AI.
• Overseeing the development of AI applications.
• Ensuring AI is used ethically and responsibly.
They suggest the agency could be modeled after organizations such as the International Atomic Energy Agency or the World Health Organization.
Bengio and Russell acknowledge that creating an international agency for AI will be challenging. However, they believe that it is necessary to ensure that AI is developed and used in a way that benefits humanity and protects the planet.
Source: The world needs an international agency for artificial intelligence, say two AI experts, The Economist, April 18, 2023
Sure, AI shows great promise in enhancing diagnostic accuracy and improving treatment plans, but there are risks. While humans always make errors, we expect computers to be consistently accurate, and no one checks the math in an Excel spreadsheet. So I worry that AI medical software will be perceived as errorless when it is error-prone. And when we are managing health, we must focus on reducing errors, not adding points of failure. Therefore, let’s be careful as we roll out this new technology. What do you think?
An International Non-Proliferation Treaty for AI?
This week tech leaders warned of the risks of artificial intelligence and suggested controls are needed to prevent AI from becoming a danger to society. But who should provide the oversight? The European Commission is currently the only government institution to draft a law protecting individual rights from AI’s influence.
With tech companies forging ahead in a race to develop, deploy, and monetize AI, there are currently no guardrails in place to prevent the misuse of the technology. In a worldwide survey, citizens scored their trust in government and tech companies lower than all other options.
Whom do you trust to set governance rules on the use of AI?
AI: The Next Big Thing That’s Not Going Away
I am bullish on the value artificial intelligence can bring to healthcare delivery. But like every new technology, AI has its limits. What are those limits? How differently should we think about AI than other information technology tools? Here’s a link to a weeklong series on AI from the New York Times that can help get us started. It’s a great way to get ahead of a tsunami of talk about AI. With better understanding and practice using the tool, we all can better manage the hype cycle tied to it.
Is AI Coming for Your Healthcare Job?
Tomas Pueyo’s article on artificial intelligence reviews the impact of AI technology on employment in several industries. He writes:
“Meanwhile, many companies have been trying to beat humans in diagnostics. They don’t always succeed, but they’re close, and it’s very likely they will prevail: It’s impossible for a human to keep abreast of all the scientific papers in their field, calculate complex probabilities, weigh their personal experience with patients, make sure it’s not weighed more than data on papers, incorporate real-time data on local epidemics… But AIs can. It’s simply a matter of teaching them.”
I agree that AI will improve diagnostic and therapeutic decision-making. And certain jobs will be eliminated while others will increase, but I expect it to improve quality and facilitate personalized medicine. What do you think the impact of AI on healthcare will be in the next five years? Put your comments in the post.
While AI Tools Improve, Caution Remains
OpenAI plans to release its upgraded ChatGPT tool in the next few weeks based on the newer GPT-4 platform. While its capabilities exceed those of its application based upon GPT-3.5, limitations remain. The latest version delivers more focused and succinct responses and describes images well. But every AI tool is limited by the data available for the AI model to ingest.
For physicians and nurses to safely utilize these tools, their institutions must rigorously test them before deployment. Solely using AI output to direct patient care without clinician intervention presents the most significant risk to patients.
Physician + AI = Better Healthcare
Yes, artificial intelligence can improve healthcare now. Here’s why. Computers do not tire, their eyes do not get strained, and they never get distracted. According to a recent NY Times article, Hungary is leading the way in using AI to screen for breast cancer. What makes the approach so promising is the combination of AI software screening images that identify potential areas of concern and a trained radiologist’s review of the scans. This two-pronged review helps focus the radiologist on the most significant parts of the image requiring further evaluation.
This may increase the radiologist’s efficiency by decreasing the time devoted to reviewing each scan while directing attention to areas of concern. Therefore, it increases the likelihood of identifying cancers earlier.
Collecting clinical data from each scan and adding it to the AI model dataset increases the model’s reliability in identifying cancer lesions. The proper use of AI in clinical care is to enhance the work of caregivers and not replace them. While AI may be able to help physicians diagnose cancer, it never will replace the human touch required to inform patients of a life-threatening diagnosis and the frightening treatment road ahead.
The Chronic Illness Phase of the Pandemic Remains
More than 35 million Americans have experienced long Covid symptoms. Studies put the overall share of long Covid symptoms at 6.2% at three months and 0.9% after 12 months. A more severe illness increases the likelihood of long Covid.
While we may think the healthcare burden of the pandemic is behind us, there appears to be a long tail of Covid illness that we will need to treat for many years. We need to do more research on treating long Covid while accelerating our vaccine research to prevent another pandemic anytime soon. Congress cut funding for coronavirus vaccine research, but fortunately, Europe continues its investment, a topic I reviewed in an earlier post.
What leadership role should the U.S. take in preventing the next pandemic?
Living Longer. Can Our Economies Adjust?
Statista’s latest report on life expectancy offers good news. The gap between highly developed regions and the rest of the world is decreasing. The global progress in ensuring access to healthcare, clean water, education, and food security continues to make a difference. Africa lags behind other regions, but it is closing the gap.

But what does this mean for our economies? Longer lifespans put a burden on governments to provide social services. Will we have enough workers to support more citizens in their 70s, 80s, and beyond? Thinking that increasing the population is the only way to sustain growth while supporting longer lifespans seems illogical. How many billions more people can the resources of the Earth support? I do not know the answers to these questions, but we need to think differently about how our economies are structured and the role each of us plays in supporting them. How do you think our economies should change to support longer lifespans?
Are Clinicians Reacting to Bloated EHRs as We Do To Online Ads?
As humans, our ability to devote attention to something is limited. Fortunately, our brains are very good at weeding out unimportant stimuli so we can focus our minds on the essential task. Without this ability, every sight, sound, touch, and smell would paralyze our actions.
In a recent Ezra Klein NY Times podcast, Tim Hwang discussed the impact of the avalanche of targeted online ads described in his book titled “Subprime Attention Crisis.” Ezra and Tim reviewed the problems facing companies that advertise online or sell online ads. They emphasized how the online ad market is changing due to consumers “tuning out” the ads. The overwhelming number of ads has trained consumer brains over time to ignore the ads due to a maxed-out attention quota.
How different is our consumer experience with ads from clinicians’ work with EHRs? Do EHRs have too much information recorded in them? Does copying and pasting notes bloat the EHR to such an extent that clinicians cannot identify what is important and worth reviewing? Do medical errors happen due to this documentation bloat? And is AI a tool that can help focus clinicians on what is essential? Clinicians now struggle with too much patient data, not too little. And in turn, their patients suffer. As we further expand patient data collection, we must figure out a way to manage this information effectively. If we have not already surpassed clinicians’ ability to ingest the patient information in the EHR, that day is not far off.
(The podcast and its transcript are available using this link. https://www.nytimes.com/2023/02/14/opinion/ezra-klein-podcast-tim-hwang.html)
The use of artificial intelligence in patient care is likely to have a significant positive effect. AI can improve the accuracy of diagnoses, reduce medical errors, and expedite the delivery of treatments. It can help healthcare providers better understand and predict patient outcomes and make more informed decisions about treatments. It can also improve patient engagement through personalized care and more efficient communication between patients and providers. Finally, AI can reduce healthcare costs by streamlining administrative processes, such as billing and scheduling.
How Will Artificial Intelligence Streamline Healthcare Administrative Processes?
AI can streamline administrative processes in healthcare by automating mundane tasks and eliminating manual data entry. It can improve accuracy in billing, scheduling, and other administrative functions, and it can analyze large amounts of data quickly and accurately to identify trends and help healthcare providers make more informed decisions. Additionally, AI can improve patient engagement by providing personalized care and more efficient communication between patients and providers. #AI #artificialintelligence #healthcarecosts #openai #drbarryspeaks
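To make the automation idea concrete, here is a minimal, hypothetical sketch of the deterministic “claim scrubbing” checks that an AI-assisted billing pipeline might start from. The field names and validation rules are invented for illustration; a real system would layer learned checks on top of rules like these.

```python
import re
from datetime import date

def scrub_claim(claim: dict) -> list[str]:
    """Return a list of problems found in a (hypothetical) billing claim."""
    problems = []
    # CPT procedure codes are five digits (e.g., "99213" for an office visit).
    if not re.fullmatch(r"\d{5}", claim.get("cpt_code", "")):
        problems.append("missing or malformed CPT code")
    # A claim needs a service date, and it cannot be dated in the future.
    service = claim.get("service_date")
    if service is None or service > date.today():
        problems.append("missing or future-dated service date")
    if not claim.get("patient_id"):
        problems.append("missing patient identifier")
    return problems

clean = {"cpt_code": "99213", "service_date": date(2023, 1, 5), "patient_id": "P-001"}
dirty = {"cpt_code": "9921", "patient_id": ""}
print(scrub_claim(clean))  # []
print(scrub_claim(dirty))  # three problems flagged before the claim is ever submitted
```

Catching these errors automatically, before submission, is exactly the kind of mundane work that today consumes manual billing staff time.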
Who Can Afford Life-Saving Drugs?
The past decade delivered highly effective treatments for cancer, autoimmune syndromes, and many rare diseases. But to be effective, patients need access to those miracle drugs. Whether insured by private insurance or a government program, out-of-pocket costs are often a barrier to obtaining those medications, even for middle- and upper-middle-class families. The number of drugs covered by Medicare Part D costing $70,000 or more per year rose from 40 in 2013 to more than 150 in 2020. Because of cost, many patients choose less expensive treatments that are less effective and carry more side effects, while others avoid treatment altogether.
While we must continue to fund pharmaceutical firms so they can develop more miracle drugs, we also need to figure out a balance between the cost of development and the price to patients and the government. Continued delivery of miracle cures that few can afford works against health equity, and it misappropriates the government funding that underpins much of the research that leads to these treatments.
High administrative healthcare costs in the United States are largely due to the complex and fragmented healthcare system. The system is composed of multiple insurers, providers, and third-party administrators, all of whom have different requirements and processes that lead to higher overhead costs. Additionally, the lack of standardization and coordination between these entities, as well as the lack of transparency in healthcare pricing, contribute to higher administrative costs.
How Can the United States Reduce Healthcare Administrative Costs?
- Streamline and simplify the paperwork process. Automation of administrative processes, such as electronic health records and claims processing, can reduce the time and resources needed to complete paperwork.
- Promote price transparency. Making pricing and quality data available to consumers can help reduce administrative costs by encouraging competition and allowing providers to focus on delivering high-quality care.
- Increase competition. Increasing competition among insurers and providers can help reduce administrative costs by promoting more efficient operations and driving down prices.
- Implement uniform billing and coding standards. Establishing uniform standards for billing and coding across all providers can help simplify and speed up the payment process.
- Encourage value-based payment models. Value-based payment models, such as bundled payments and accountable care organizations, can help reduce administrative costs by incentivizing providers to focus on quality of care.
Should We Pay Physicians to Respond to Patient Emails?
As the medical impact of the pandemic recedes, the digital effects remain. While clinics and physician offices were closed, we learned to use telemedicine, patient portals, and secure email to interface with our clinicians. And we continue to use these digital tools. But how is this impacting the physician? We know physicians are leaving practice due to burnout. Is the use of these tools contributing to this problem?
One study suggests that patient emails have increased by 50% over the last three years. Many physicians report spending hours during “pajama time” before bed responding to emails.
These email messages are a form of patient care, yet payers rarely reimburse physicians for their time. Failing to respond is poor quality care and could lead to unnecessary patient visits to already overbooked clinics and physician offices. Physician responses to emails lead to cost savings and a better patient experience. It is time to devise a standard method to reimburse physicians for this work. Everyone would benefit.
COVID’s Impact on Healthcare Continues Worldwide
While the pandemic horrors of early 2020 diminish due to vaccines and treatments, the negative impact on patients and healthcare providers continues. Reduced access to routine services over the past three years led to delayed cancer diagnoses, elective surgeries, and routine disease screening. And changes in care protocols decreased hospital productivity while expenditures rose. These changes are evident in both the U.S. and across all high-income countries.
A research article in the December issue of Mayo Clinic Proceedings documented clinician burnout due to the pandemic. (https://www.mayoclinicproceedings.org/article/S0025-6196(22)00515-8/fulltext) We must change how we deliver, organize, and pay for care.
Aging Workforce: Will They Be There for Us?
The recent nursing strikes in NYC highlight the workforce challenges facing hospitals. Are there enough nurses to deliver high-quality, safe care? What does it mean to provide a working environment that protects both staff and patients? How should our healthcare workforce be compensated?
These are all important issues to discuss and debate to arrive at reasonable solutions. No matter what we decide, we face a future shortage of nurses. While Millennials embraced the profession, the same cannot be said for the generations that followed. We need to take steps to make entry into the nursing profession easier, with more training programs and better access to financial support. If we take care of them, they will be there to take care of us.
What Are the Barriers to Healthcare Interoperability?
- Technical Interoperability: Different systems use different data formats, communication protocols, and security measures, making it difficult for them to communicate and share information.
- Legal and Regulatory Issues: Different states, countries, and organizations have their own laws and regulations around the sharing of healthcare data, making it difficult to create a standardized framework for interoperability.
- Cost and Resources: Implementing interoperability can be costly and resource-intensive.
- Different Clinical Terminologies: Different healthcare systems use different clinical terminologies, making it difficult to share data accurately and consistently.
- Data Quality and Privacy Concerns: Data must be accurate and up-to-date, and privacy must be maintained, in order for interoperability to be successful.
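The terminology barrier above can be made concrete with a small sketch: two hospital systems naming the same lab test differently, bridged only by an explicit crosswalk to a shared standard. The local codes here are invented for illustration; real mappings target standard vocabularies such as LOINC.

```python
# Hypothetical local lab codes from two hospital systems, each mapped to a
# shared standard identifier (a LOINC-style code is shown for illustration).
HOSPITAL_A = {"GLU-SER": "2345-7"}      # Hospital A's name for a glucose test
HOSPITAL_B = {"GLUCOSE_BLD": "2345-7"}  # Hospital B's name for the same test

def same_test(code_a: str, code_b: str) -> bool:
    """Two local codes refer to the same test only if both map to the same
    standard code; an unmapped code cannot be compared at all."""
    std_a = HOSPITAL_A.get(code_a)
    std_b = HOSPITAL_B.get(code_b)
    return std_a is not None and std_a == std_b

print(same_test("GLU-SER", "GLUCOSE_BLD"))  # True: the crosswalk bridges the gap
print(same_test("GLU-SER", "HBA1C"))        # False: no mapping, no interoperability
```

Every pair of systems without such a shared vocabulary needs its own hand-built crosswalk, which is exactly why differing clinical terminologies make accurate, consistent data sharing so hard.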
Does Healthcare Have a Southwest Airlines Problem?
The recent Southwest Airlines travel fiasco can be traced to its failure to upgrade its information technology systems. The airline needed to know where its crews were stranded in order to reallocate staff to flights efficiently, but its antiquated, poorly interoperable systems did not offer that capability.
In describing the situation, Zeynep Tufekci wrote in the NY Times, “Well, if you are a corporate executive whose compensation is tied to stock prices and earnings statements released every three months, there are strong incentives to address any immediate problem by essentially adding a bit of duct tape and wire to what you already have, rather than spending a large amount of money — updating software is costly and difficult — to address the root problem.”
Do you see similarities in how some healthcare business managers lead their organizations? Could some of the problems of staff shortages, quality of care, and patient experience be related?
We Need to Take Care of Them……
So They Can Take Care of Us
For almost three years, they have been there. Our doctors, nurses, therapists, and administrative staff accepted unknown personal risk to their health to make sure we got the care we needed. Now, faced with the triple threat of COVID, RSV, and flu, they show up every day to do what they can to treat and comfort us.
Yes, American healthcare has problems. And many people have great suggestions on how to fix it. But unless we figure out how to take care of them, no matter what solutions we implement, there will not be enough of them to take care of us.