It is the one topic you cannot seem to escape no matter what setting you are in. Whether you’re scrolling on social media, listening to music, or speaking to “someone” on the phone, AI is incorporating itself into our everyday lives, for better or worse.
AI has begun to work its way into the healthcare sector as well, through scheduling, billing, documentation, and patient communication. This may not seem like an issue to some, but the informal use of AI in the healthcare space can have dangerous consequences. The question is no longer "Are my employees using AI?" or "Is my practice using AI tools?" but rather "Are these AI tools being used compliantly and responsibly?"
The convenience of AI tools does not outweigh your obligation to keep your patients and their information private.
Where Does AI Already Exist in My Practice?
Some of the most common forms of artificial intelligence already present in practices may not be the first that come to mind; many have been in place for a while but are still AI. AI scribes and documentation tools that record phone calls and meetings, chatbots used for patient communication, and automated billing and coding tools have all been around for several years. AI has also found its way into many marketing techniques and other forms of administrative support.
Each of these forms of artificial intelligence is used to streamline a practice's capabilities and make it run more efficiently. The most important thing to note, however, is that each of them may involve protected health information (PHI).
HIPAA Risks & the Questions You Should Be Asking…
While convenience may be a prominent factor in the adoption of these tools, they also pose quite a few risks to a practice's HIPAA compliance.
- Submitting any form of PHI into public AI platforms is a HIPAA violation, so any information that would be considered PHI must be removed before you enter a prompt. These platforms may also have unclear vendor security and safeguards. Where are they storing all the questions you feed into them, and for how long? Are these servers and platforms secure?
- Business Associate Agreements are required when contracting with AI platforms and products. They establish the responsibility of both parties to keep patient information secure. Do these AI platforms sign Business Associate Agreements? Or are they disregarding the security and ethical handling of the information they take in?
- Employees may be using AI tools on their personal devices to help with their day-to-day work. Have these employees been properly trained to use AI securely, without exposing private information to public systems?
These concerns and questions should be addressed before onboarding AI solutions into your practice, to protect your company from fines and potential breaches.
Your company may have already begun researching how AI can benefit your programs and procedural requirements. To ensure compliance in all regards, you must also include AI platforms in your risk assessments to tie up any loose ends.
The Human Side of It:
Humans are, and always will be, one of the greatest threats to HIPAA security. Human error is one of the top causes of HIPAA breaches, and that error now includes the improper use of AI tools.
Well-meaning staff can still harm the company if they are not properly trained on AI and its impacts. They may innocently misuse AI platforms in hopes of helping patients or increasing their own efficiency, without considering the data they are entering into public platforms.
Streamlinz Tip: AI use should be included in your company's risk analysis, as required by the Security Rule. These are new technologies, and they may introduce gaps and new vulnerabilities into your systems. Once employees have received proper training, gaps in documentation become the biggest source of compliance risk around AI tools.
Moving Forward:
To ensure that AI is being used in a way that benefits both your patients and your practice, implement high-level guidance covering vendor evaluation, policy updates, employee training, and decision documentation. Review your safeguards regularly and continue to reinforce safe decision-making in the AI space.
Overall, AI can be an incredibly useful tool for improving the efficiency of your practice and benefiting the patient experience. As artificial intelligence technologies continue to evolve, so must your compliance policies and techniques.
Being proactive with your AI use and technologies can help prevent breaches, and the damage control that follows, later on. Streamlinz is here to help with any questions and concerns you may have regarding your AI use and its relation to your company's compliance.
