Artificial intelligence failures often generate a lot of laughs when they result in silly mistakes. However, "the problem is that machine learning gaffes aren't always funny … They can have pretty serious consequences for end users when the datasets that are used to train these machine learning algorithms aren't diverse enough," says Lauren Maffeo, a senior content analyst at GetApp.
In her Lightning Talk, "Erase unconscious bias from your AI datasets," delivered at All Things Open 2018 on October 23 in Raleigh, NC, Lauren described some of the grim implications and advocated for developers to take measures to protect people from bias in machine learning and artificial intelligence systems.
To learn more about this issue, watch Lauren's talk and read her Opensource.com article, "The case for open source classifiers in AI algorithms," which delves further into this problem.