Artificial Intelligence: The Good, the Bad, and Responsible Use in Healthcare


Artificial intelligence is proving itself a powerful tool in the healthcare industry. From patient screening and monitoring to diagnosis and research, AI has stepped up to support resource-constrained healthcare professionals. But it’s not just used in clinical activities. AI has shown tremendous value for healthcare financial services as well.



Artificial Intelligence (AI) is a valuable tool in healthcare, and its use is growing rapidly. Not only can AI help healthcare providers care for patients, it can also support healthcare finance activities. As digital technologies become increasingly vital to medical facilities, AI will play an important role in securing them. Yet as powerful as AI is for the healthcare community, it’s not without vulnerabilities, particularly around security, privacy, and ethics, that must be resolved before its potential can be fully realized.

AI in healthcare

During the pandemic, AI emerged as a powerful solution for resource-constrained healthcare facilities. AI technology can gather and analyze data from disparate sources to:

  • Diagnose and triage patients
  • Predict outcomes
  • Monitor patients remotely (e.g., glucose monitoring for diabetics; see the sketch after this list)
  • Monitor disease outbreaks
  • Create transmission models
  • Support drug research and development
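
To make the remote-monitoring item concrete, here is a minimal rule-based sketch of a glucose alert. Production systems typically rely on learned models rather than fixed rules, and the thresholds, reading format, and function name below are illustrative assumptions, not clinical guidance.

    from statistics import mean

    # Illustrative thresholds (assumptions, not clinical guidance)
    LOW_MG_DL = 70
    HIGH_MG_DL = 180

    def check_glucose(readings_mg_dl):
        """Flag out-of-range readings and a rising trend in a window of
        continuous glucose monitor values (mg/dL, oldest first)."""
        alerts = []
        latest = readings_mg_dl[-1]
        if latest < LOW_MG_DL:
            alerts.append(f"LOW: {latest} mg/dL")
        elif latest > HIGH_MG_DL:
            alerts.append(f"HIGH: {latest} mg/dL")
        # Simple trend check: has the recent average drifted upward?
        if len(readings_mg_dl) >= 6:
            older, recent = readings_mg_dl[:-3], readings_mg_dl[-3:]
            if mean(recent) - mean(older) > 30:
                alerts.append("RISING: recent average up more than 30 mg/dL")
        return alerts

    print(check_glucose([110, 115, 120, 150, 170, 190]))
    # ['HIGH: 190 mg/dL', 'RISING: recent average up more than 30 mg/dL']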

From a healthcare IT perspective, team members can use AI to protect against cyberattacks and data breaches. It is also being used extensively on the financial side of healthcare.

AI streamlines the medical coding and billing processes, allowing medical facilities to free existing coding and billing staff for more strategic tasks. It can reduce costly coding and billing errors and help staff dig deeper into the reasons behind a claim denial. Billers can use it to conduct audits in real time, rectifying problems before they incur high costs (see the sketch below). Overall, AI has the ability to learn and continually improve operations.
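
As an illustration of that kind of real-time audit, here is a minimal sketch of a pre-submission claim check. The field names, required-field list, and consistency rule are illustrative assumptions, not a real payer rule set, and an AI-assisted tool would learn such rules from historical denials rather than hard-code them.

    REQUIRED_FIELDS = ("patient_id", "cpt_code", "icd10_code", "charge")

    def audit_claim(claim):
        """Return a list of problems to fix before the claim is submitted."""
        problems = [f"missing field: {f}"
                    for f in REQUIRED_FIELDS if not claim.get(f)]
        if claim.get("charge", 0) <= 0:
            problems.append("charge must be a positive amount")
        # Example consistency rule: an office-visit E/M code needs a diagnosis
        if claim.get("cpt_code", "").startswith("992") and not claim.get("icd10_code"):
            problems.append("office visit billed without a diagnosis code")
        return problems

    claim = {"patient_id": "P-1001", "cpt_code": "99213",
             "icd10_code": "", "charge": 125.0}
    print(audit_claim(claim))
    # ['missing field: icd10_code', 'office visit billed without a diagnosis code']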

Security, privacy, and ethics issues

AI is still considered an emerging technology, especially in the healthcare industry. As with any new technology, AI is not immune to vulnerabilities. The top concern with AI is ensuring the security and privacy of sensitive patient information. Although there are existing regulations in place to protect this information, such as HIPAA, these rules were not created with AI in mind, leaving some potential points of risk. As Linda Malek, partner at Moses & Singer, recently shared in Health IT Security, “There is still a gap in terms of how security and privacy should be regulated in this area . . . De-identification, as it’s currently defined, may not be enough to really protect the anonymity of the data. That’s an example of where HIPAA doesn’t really take this kind of technology into account.”

AI technology is no more vulnerable than other technologies, but any network-connected technology used in patient healthcare or medical financial services needs to be protected. AI requires massive amounts of digital data. Patient medical and financial data must be combined with other data to provide context so that AI can “learn.” This means many people, from healthcare providers to medical billers and coders, have access to these data stores, creating security and privacy issues. This data is vulnerable to inadvertent insider leaks or intentional data theft from inside, as well as malicious cyberattacks and data breaches from outside. There is also an ethical issue over who actually owns this patient medical and financial data.

Overcoming the challenges of AI

Healthcare is inherently complex, with multiple internal and external stakeholders who need varying levels of access to sensitive information. Aside from the immediate, internal security policies and procedures that a healthcare facility must enact to protect itself and its patients, it is also important to have external business associates and AI vendors sign a Business Associate Agreement (BAA) confirming that they, too, have sufficient safeguards in place.

The Food and Drug Administration (FDA) is working on a set of regulations and guidance covering AI and the privacy and security of medical devices. Once these regulations are implemented, the FDA will expect manufacturers of medical devices that use AI to commit to transparency and performance monitoring.

Healthcare organizations must de-identify data to minimize the risk of it being traced back to an individual. For organizations that use AI-derived data, it’s crucial to control who can use the data beyond its primary purpose. Patients may consent to a healthcare organization using their data, but that data is often accessed by third parties to whom patients have given no consent. Regulations are also needed to hold entities that develop AI applications or use AI-derived data accountable for securing them, on both the clinical and financial sides of healthcare organizations. Although these regulations are not yet in place, healthcare organizations can be proactive and take measures to safeguard their use of AI and the data derived from it.
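
Because de-identification figures so centrally here, the sketch below shows one small piece of it: dropping direct identifiers and replacing the record key with a keyed pseudonym so de-identified records remain linkable. The field names and key handling are illustrative assumptions, and, as the quote above suggests, real de-identification (e.g., HIPAA Safe Harbor’s full identifier list or expert determination) must go well beyond a step like this.

    import hashlib
    import hmac

    # Direct identifiers to strip: a small illustrative subset of the
    # HIPAA Safe Harbor categories, not a complete list.
    DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn"}

    def deidentify(record, secret_key):
        """Drop direct identifiers and replace the patient ID with a
        keyed pseudonym so records can still be linked across systems."""
        clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
        digest = hmac.new(secret_key, record["patient_id"].encode(), hashlib.sha256)
        clean["patient_id"] = digest.hexdigest()[:16]
        return clean

    record = {"patient_id": "P-1001", "name": "Jane Doe", "ssn": "000-00-0000",
              "icd10_code": "E11.9", "glucose_mg_dl": 162}
    print(deidentify(record, secret_key=b"store-and-rotate-securely"))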

If your organization is currently using or planning to implement AI technologies or AI-derived data, it’s a good idea to partner with a company that can help you protect this data and use the applications appropriately. For assistance, visit TruBridge.com or call us at 877-543-3635.