AI Only Reflects the Bias Within

Technology advancements such as in-memory processing, big data analytics, and high-speed networks are digitally transforming every aspect of our lives, and security is no exception. Biometric recognition, advanced video analytics, and robotics all rely on these advancements. But they also rely on the developers and functional experts who build them.

We’ve all seen what can go wrong with artificial intelligence in movies like The Terminator, 2001: A Space Odyssey, WarGames, The Matrix, and many others. The premise is generally rogue software and robots seizing control. The idea isn’t as far-fetched as the movie plots may seem. AI in security applications should be approached with the same precautions you would apply to supporting any other part of your business with AI: distrust and verification. After all, the technology is not fully proven, and it is only as good as the functional and technical expertise of its developers.

This article from the MIT Technology Review makes the same point: “The problem of bias in machine learning is likely to become more significant as the technology spreads, and as more people without a deep technical understanding are tasked with deploying it.” The challenge with artificial intelligence is that it is only as “smart” as the data available to it. Just like people, AI can only draw conclusions from its experience, and machine learning can only find patterns in the clusters of activity present in the data sources it is fed.
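
To make that concrete, here is a toy sketch (all data, features, and thresholds are hypothetical, using scikit-learn) of how a model trained on skewed data learns a spurious shortcut: every “threat” in its training set happened to occur at night, so it flags a perfectly ordinary night-shift worker.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Biased training data: every "normal" example was collected during the
# day and every "threat" at night, so time of day becomes a spurious
# predictor. Features: [hour_of_day, activity_level].
normal = np.column_stack([rng.normal(9, 1, 200), rng.normal(5, 1, 200)])
threat = np.column_stack([rng.normal(23, 1, 200), rng.normal(5, 1, 200)])
X = np.vstack([normal, threat])
y = np.array([0] * 200 + [1] * 200)  # 0 = normal, 1 = threat

model = LogisticRegression().fit(X, y)

# A legitimate night-shift worker with perfectly ordinary activity is
# flagged as a threat purely because of when the data was gathered.
print(model.predict([[23, 5]]))  # -> [1]
```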

This exposes AI to confirmation bias and even discrimination. Much like human life experience, if your machine learning isn’t trained on a broad set of people, behaviors, and circumstances, it will make false assumptions and trigger false alerts. That’s why it’s important to offer tutorials and tools that help less experienced data scientists and engineers identify and remove bias from their training data. Even this article from Forbes says “we can’t rely on technology to solve the equation of algorithm bias. No clever app is going to give AI systems the comprehension needed to spot and correct these errors. It’s a people issue.”
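
What might such a check look like in practice? A minimal sketch, assuming a pandas DataFrame with hypothetical group, ground-truth, and prediction columns, of two simple audits: whether any group is under-represented in the training data, and whether false alerts fall disproportionately on one group.

```python
import pandas as pd

# Hypothetical training data: each row is a security event with a
# demographic "group" label, the ground truth ("threat"), and the
# model's output ("alert"). Column names are illustrative only.
df = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "C"],
    "threat": [0,   1,   0,   0,   0,   1],
    "alert":  [0,   1,   1,   1,   0,   1],
})

# 1. Representation check: is any group badly under-sampled?
print(df["group"].value_counts(normalize=True))

# 2. Per-group false-alert rate: alerts raised on non-threat events.
benign = df[df["threat"] == 0]
print(benign.groupby("group")["alert"].mean())
# A large gap between groups is a red flag for biased training data.
```

Neither check fixes bias by itself, but surfacing gaps like these is the kind of tooling the paragraph above calls for.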

We may be a long way from simply pressing the go button and putting our security and lines of business on autopilot, but AI is already adding value, and it is here to stay. So how can you practically support your physical security with AI, reducing risk and lowering costs, without creating automated bias and an artificial sense of security?
