AI Can Reflect the Bias Within

Artificial Intelligence and AI Bias

Technology advancements such as in-memory processing, big data analytics, and high-speed networks are digitally transforming every aspect of our lives. These technologies have given rise to Artificial Intelligence (AI), and like any form of intelligence, AI can learn bias.

We’ve all seen what can go wrong with artificial intelligence in movies like The Terminator, 2001: A Space Odyssey, WarGames, The Matrix, and many others. The premise is the same: rogue software and robots taking control. The idea isn’t as far-fetched as those plots make it seem, because AI technology’s use in security applications has already produced disturbing results.

AI Is Only as Smart or as Biased as the Data

For instance, one development company failed to sample a variety of skin tones, and as a result its app was unable to recognize people of color. Like a child learning racism from their parents, developers can bias an application. Even seemingly harmless decisions, such as sending notifications alphabetically by last name, could leave customers B through Z in the dark about an inventory announcement. After all, this is code, and behind that code are people writing the script.
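To see how a “harmless” decision like that plays out, here is a minimal Python sketch. The customer names and the batch limit are purely hypothetical, but they show how an alphabetical send order quietly favors the same customers every time:

```python
import random

# Hypothetical sketch: the customer names and batch limit below are
# illustrative assumptions, not data from any real system.
customers = ["Adams", "Baker", "Chen", "Diaz", "Evans", "Okafor", "Singh", "Zhang"]
BATCH_LIMIT = 3  # e.g., a per-run cap imposed by the notification service

# Sorting alphabetically and sending only the first batch means the same
# A-to-C customers always hear about the inventory first.
alphabetical = sorted(customers)
print("Notified:", alphabetical[:BATCH_LIMIT])
print("Left out:", alphabetical[BATCH_LIMIT:])

# One simple correction: randomize the order each run so no customer is
# systematically pushed to the back of the line.
random.shuffle(customers)
print("Randomized batch:", customers[:BATCH_LIMIT])
```

Randomizing the order is not a complete fix, but it illustrates how a small design choice can either bake bias in or wash it out.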

This article from the MIT Technology Review notes that “The problem of bias in machine learning is likely to become more significant as the technology spreads, and as more people without a deep technical understanding are tasked with deploying it.” The challenge with artificial intelligence is that it is only as “smart” as the data it is given. Like people, AI can only make assumptions based on its experience, and machine learning can only infer causality from the clusters of activity it is shown.

The choice of data sources exposes AI technology to confirmation bias and even discrimination. Much like life experience, if your machine learning does not include a broad set of people, behaviors, and circumstances, false assumptions will follow. Teaching data scientists how to recognize and avoid both intentional and subconscious bias is vital to preventing these mistakes. As this article from Forbes puts it, “we can’t rely on technology to solve the equation of algorithm bias. No clever app is going to give AI systems the comprehension needed to spot and correct these errors. It’s a people issue.”
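One practical safeguard is to audit the training data before any model sees it. The sketch below is only illustrative; the skin_tone attribute, its values, and the 20% threshold are assumptions made for this example, but the idea of counting and flagging under-represented groups carries over to real datasets:

```python
from collections import Counter

# Minimal sketch, assuming a labeled training set with a self-reported
# "skin_tone" attribute; the attribute name, values, and 20% threshold
# are hypothetical placeholders.
samples = [
    {"skin_tone": "light"}, {"skin_tone": "light"}, {"skin_tone": "light"},
    {"skin_tone": "light"}, {"skin_tone": "medium"}, {"skin_tone": "dark"},
]

counts = Counter(s["skin_tone"] for s in samples)
total = sum(counts.values())

# Flag any group that falls below the chosen share of the training set so
# it can be oversampled, or more examples collected, before training.
MIN_SHARE = 0.20
for group, n in counts.items():
    share = n / total
    status = "UNDER-REPRESENTED" if share < MIN_SHARE else "ok"
    print(f"{group}: {n} samples ({share:.0%}) {status}")
```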

We may be a long way from simply pressing a go button and putting our security and lines of business on autopilot, but AI is already adding value, and it is here to stay. So how can you bring AI into your technology without creating AI bias?
