A recent memo from the US government has clarified that AI cannot be used as the sole basis for denying claims. The guidance comes in response to lawsuits against health insurers, including UnitedHealthcare and Humana, which have been accused of using AI to wrongly deny coverage.
Patients claim that the AI model nH Predict has a 90% error rate, highlighting a dangerous aspect of the technology that is receiving increased attention. The Centers for Medicare & Medicaid Services has expressed concern that algorithms could exacerbate discrimination and bias, and has urged insurers to ensure their models comply with anti-discrimination requirements.
Several states, including New York and California, have also warned insurance companies to verify the fairness of their algorithms. While insurance should cover the cost of claims related to falls, many people whose claims are denied are left wondering whether the decision was made by a person or by an AI.