Four investors explain why AI ethics cannot be an afterthought

Billions of dollars are flowing into AI. Yet AI models are already affected by bias, as evidenced by mortgage discrimination against Black would-be homeowners.

It’s fair to ask what role ethics play in building this technology and, perhaps more importantly, where investors fit in as they rush to fund it.

A founder recently told EntertainmentCab+ that it’s hard to think about ethics when innovation moves so fast: people build systems, break them, then fix them. So part of the responsibility falls on investors to make sure these new technologies are built by founders with ethics in mind.

To see if that happens, EntertainmentCab+ spoke to four active investors in the space about how they feel about ethics in AI and how founders can be encouraged to think more about biases and do the right thing.


Some investors said they approach this by doing due diligence on a founder’s ethics to help gauge whether they will continue to make decisions the firm can support.

“Founder empathy is a huge green flag for us,” said Alexis Alston, director of Lightship Capital. “Such people understand that while we look for market returns, we also look for our investments not to have a negative impact on the world.”

Other investors believe that asking hard questions can help separate the wheat from the chaff. “Every technology has unintended consequences, whether it’s bias, diminished human agency, invasions of privacy, or anything else,” said Deep Nishar, managing director at General Catalyst. “Our investment process revolves around identifying such unintended consequences, discussing them with the founding teams, and assessing whether precautions have been or will be taken to mitigate them.”

Government policy is also focusing on AI: the EU has passed machine learning laws, and the US is planning a task force to investigate AI’s risks, in addition to the AI Bill of Rights introduced last year. With many leading VC firms pouring money into AI efforts in China, it’s also worth asking how ethics in AI can be enforced across borders.

Read on to learn how investors approach due diligence, the green flags they look for, and their expectations for AI regulation.

We spoke with:


Alexis Alston, Director, Lightship Capital

When you invest in an AI company, how much due diligence do you do on how the AI model accounts for or treats bias?

For us, it is important to understand exactly what data the model ingests, where that data comes from, and how the team cleans it. We do quite a bit of technical diligence with our AI-focused GP to make sure the models can be trained to reduce or eliminate bias.

We all remember when automatic faucets wouldn’t turn on to wash our darker hands, and the times Google image search “accidentally” equated Black skin with primates. I will do everything I can to make sure models like that don’t end up in our portfolio.

How would the US passing machine learning laws similar to the EU’s affect the pace of innovation in this sector?

Given the lack of technical knowledge and sophistication in our government, I have little faith in the US’s ability to pass actionable and accurate legislation around machine learning. We have such a long tail when it comes to passing timely legislation and recruiting the technical experts needed on task forces to inform our legislators.

I don’t actually see any legislation making major changes to the pace of ML development, given how our laws are usually structured. Much like the race to legislate designer drugs in the US a decade ago, the law never kept up.
