Last week, the insurance company Lemonade tweeted that its AI technology could “pick up on non-verbal cues” when analyzing the videos policyholders submit to explain the details of their claims.
Social media users quickly spotted the tweet and asked whether the technology would unfairly flag people from minority backgrounds or individuals with conditions such as autism.
Later that week, in a statement reported by Forbes, the company’s vice president Yael Wissner-Levy explained that the insurtech had deleted the tweets, including the original, because their intended message was “very much misunderstood and incorrect information was spreading.”
The company posted a blog on Wednesday stating, “We deleted this awful thread which caused more confusion than anything else,” and added that “non-verbal cues” had been a poor choice of words.
Lemonade also underscored that its technology does not rely on phrenology or physiognomy, disproven pseudosciences that claim a link between a person’s physical appearance and their intelligence or character. The insurer added that it never evaluates a customer’s claims based on that policyholder’s “background, gender, appearance, skin tone, disability, or any physical characteristic.”
Instead, the post stated that the company’s AI is a facial-recognition technology meant to spot claims filed by “the same person under different identities.” Customers submit videos along with their claims, in which they describe the situation in their own words, and the AI helps detect when someone wears a disguise to file multiple claims. Lemonade added that any claim the technology flags as a potential issue is reviewed by a human employee, and that a denial is never issued automatically based on AI findings.
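Lemonade has not published details of this system, but duplicate-identity checks of this kind are commonly built on face embeddings: a model converts the face in each claim video into a numeric vector, and vectors from different claims are compared for similarity. The sketch below is a hypothetical illustration of that general pattern, not Lemonade’s actual implementation; the similarity threshold, claim records, and field names are all assumptions.

```python
import numpy as np

# Assumed cutoff for illustration; real systems tune this carefully.
SIMILARITY_THRESHOLD = 0.92

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two face-embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_possible_duplicates(new_claim: dict, prior_claims: list[dict]) -> list[dict]:
    """Compare the new claim's face embedding against embeddings from claims
    filed under *other* identities. Close matches are only flagged for human
    review; nothing here denies a claim automatically."""
    flags = []
    for prior in prior_claims:
        if prior["identity_id"] == new_claim["identity_id"]:
            continue  # same person re-filing under their own identity
        score = cosine_similarity(
            new_claim["face_embedding"], prior["face_embedding"]
        )
        if score >= SIMILARITY_THRESHOLD:
            flags.append({
                "new_claim": new_claim["claim_id"],
                "matched_claim": prior["claim_id"],
                "similarity": round(score, 3),
                "action": "route_to_human_review",  # a person makes the final call
            })
    return flags
```

Note that in this sketch the model’s only output is a flag routed to a reviewer, mirroring the workflow Lemonade describes: the AI surfaces a possible match, and a human decides what happens to the claim.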