Are you a registered Democrat? You could be more likely to experience anxiety these days, causing you to need more mental health care. Have you lived in neighborhoods near industrial zones? That could increase your chance of chronic illness. Do you buy video or board games? You might be less likely to exercise, raising your medical costs in the long term.
According to an investigation that ProPublica and NPR released on Tuesday, health insurers have begun acquiring huge amounts of non-health-related data about the people they insure or will potentially insure. This data includes race, net worth, consumer behavior, criminal and civil court records, and prior addresses, among other things.
Health insurers buy it from data brokers, who scoop up pretty much everything from the data trails we all leave behind as we move through the world. Those data brokers, as well as the health insurers themselves, also create algorithms to find relevant patterns in this data — like relationships between particular purchasing habits or life events and increased health care expenditures.
While health insurers claim they’re not using these algorithms to set insurance costs for individuals, they’re unable to cite any law that would prevent them from doing just that. And considering that the very purpose of insurance is to assess risk and charge customers accordingly, there’s a very real concern that insurers will start using these algorithms to set their fees.
Existing health disparities mean that data will consistently show members of certain groups to be more likely to need more health care. What will happen, then, if this data starts being used against those groups? We know, for example, that black women are much more likely to experience serious complications from pregnancy than white women. So, health insurers might conclude that a woman who is black and recently married is likely to cost them more money than a white woman in the same position. Even in cases where they don’t have accurate race data, insurers might draw the same conclusion for women who purchase black hair-care products or those who have tweeted about television shows like Atlanta or Scandal.
More broadly, people who live in poor neighborhoods and neighborhoods of color are much more likely to have health problems than those in affluent neighborhoods. The ProPublica piece quotes one health data vendor joking, “God forbid you live on the wrong street these days … you’re going to get lumped in with a lot of bad things.” Is it fair to make health care more expensive for people based on zip code or race?
The Affordable Care Act prohibits insurers from discriminating on the basis of pre-existing conditions or gender, but it doesn’t say anything about race, religion, national origin, or anything else insurers can learn about you from data brokers. At the state level, where insurance in this country is largely regulated, more than half of states don’t even ban using race explicitly in pricing health insurance. That’s a problem, especially in the age of big data, when it’s extremely tempting for insurers to raise prices for customers they perceive to be risky, sometimes in order to drive them away.
Insurers in other lines, like auto and homeowners' coverage, have started to use digital data to raise prices for customers they predict won't switch insurers if their rates go up. It's a big enough problem that 20 states have issued bulletins banning the practice.
Historical and ongoing racial discrimination has created an enormous racial wealth gap, and because we continue to live in such a segregated country, almost all the data held by data brokers reflects and encodes racial disparities. When predictive models are built using this data, people of color are consistently disadvantaged — black people whose credit scores are as good as or better than those of whites might not get a loan simply because of the neighborhood in which they live.
If that happens in the lending context, the federal Equal Credit Opportunity Act protects the borrower. When similar algorithmic discrimination occurs in the housing market, the Fair Housing Act provides protection, as does Title VII when there's a job at issue. Since, in addition to barring intentional discrimination, each of these statutes prohibits neutral policies that nonetheless have a disparate impact on members of protected groups — like people of color — they are vital in the era of algorithmic decision-making.
The ProPublica report shows that the danger of discrimination in insurance is increasingly real. But there’s a big hole in civil rights law when it comes to insurance. State legislatures should explore new ways to prevent discrimination in health insurance, including requirements that insurers audit their own use of consumer data for discriminatory effects and publish the results. Consumers deserve no less.
Rachel Goodman is a staff attorney with the ACLU’s Racial Justice Program.