How AI Can Detect Weapons Yet Still Leave Us Vulnerable
While AI can reduce harm by detecting dangerous items, it may make us overreliant and vulnerable to attack by the weapons it misses.
“Technology tends toward avoidance of risks by investors. Uncertainty is ruled out if possible. People generally prefer the predictable. Few recognize how destructive this can be, how it imposes severe limits on variability and thus makes whole populations fatally vulnerable to the shocking ways our universe can throw the dice.”
- Frank Herbert
Introduction
In a world increasingly reliant on technology, the marriage of Artificial Intelligence (AI) and security systems promised a revolution. The dream of creating 'weapons-free zones' through the power of AI was compelling, and it drove many, including a New York school district, to invest heavily in this futuristic tech. The medium is the message, and when the medium takes hold of us, it leaves us open to an attack from the fringes. In this case, that means assailants using alternative weapons to circumvent AI-based detection.
The application of automation and large-scale processing in security and safety measures brings forth significant advantages, particularly in addressing issues such as gun-related incidents in schools and the presence of weapons in public spaces. By implementing automated systems, security measures can be streamlined and made more efficient, contributing to enhanced safety.
In the context of gun violence in schools, automation can play a vital role in detecting and preventing potential threats. Advanced surveillance systems equipped with AI can analyze real-time video feeds to identify suspicious behavior or the presence of firearms.
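To make that concrete, here is a minimal sketch of what such a pipeline might look like, assuming the open-source ultralytics YOLO package and its general-purpose "yolov8n.pt" model (whose COCO classes happen to include "knife"). This illustrates the general approach only; it is not Evolv's system, which would rely on purpose-built sensors and training data.

```python
# Minimal sketch: frame-by-frame detection on a camera feed.
# Assumes the ultralytics package; "yolov8n.pt" is a general-purpose COCO model,
# not a security product.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # general-purpose detector used for illustration
WATCHLIST = {"knife"}             # labels treated as potential threats
CONFIDENCE_THRESHOLD = 0.5        # the "sensitivity setting": lower it and more
                                  # threats are flagged, but so are false alarms

cap = cv2.VideoCapture(0)         # webcam index 0; an RTSP camera URL also works
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    for box in result.boxes:
        label = result.names[int(box.cls)]
        if label in WATCHLIST and float(box.conf) >= CONFIDENCE_THRESHOLD:
            print(f"ALERT: possible {label} (confidence {float(box.conf):.2f})")
cap.release()
```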
A potential consequence of this advancement is the obsolescence of human security personnel, as automation takes over their tasks. This shift in reliance on technology raises questions about how much security can actually be achieved, given the intrinsic limitations of any system. While automation appeals to the age-old desire for flawless security, that goal is unattainable due to the evolving nature of threats and the complexity of human behavior. Furthermore, if taken to extremes, automated systems can become security risks themselves: they may overlook potential threats, be susceptible to manipulation, or even be exploited for unauthorized surveillance. Hence, a careful balance must be struck to harness the benefits of automation while mitigating the associated risks.
Evolv Technology, a promising player in the AI weapons detection field, sold an AI-powered weapons scanner to a New York school district for nearly $4 million. The pitch was a world where cutting-edge AI could detect guns, knives, and explosive devices ten times faster than traditional metal detectors. It seemed like a dream come true: a seemingly foolproof way to ensure the safety of students. However, the system fell short in an alarming way. In 2022, a 17-year-old student managed to walk through the scanner with a nine-inch knife, leading to a stabbing incident. While no one was killed, it could have gone very differently.
In one of my previous stories, I highlighted the massive concerns around AI-based surveillance and how it can infringe on freedom and be used as a means of control and subjugation. You can check out the full story here:
Cut by the Edge Cases
A subsequent investigation by the BBC revealed a concerning truth: across 24 walk-throughs, the scanner missed 42% of large knives. This failure is a stark reminder that AI, despite its promise and potential, is not yet foolproof. Some items could pass through the system depending on a number of factors, including the sensitivity settings of the Express system on a particular day. This revelation underscores the significant challenges still facing AI-powered systems, particularly those entrusted with human safety. Unfortunately, human ingenuity will continue to prevail here: even if the knife issue is resolved, other weapons will circulate. That may push attackers toward tools and weapons that are extremely hard to detect, perhaps even creating a market for weapons designed to fool AI detection systems.
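That "sensitivity setting" caveat is worth unpacking. Every detector has a dial: a confidence threshold. The toy numbers below are made up purely for illustration (they are not Evolv data), but they show the unavoidable trade-off: turn the sensitivity down and false alarms drop, yet more weapons are waved through.

```python
# Toy illustration of how a detector's sensitivity trades misses for false alarms.
# Scores are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical detector scores: higher means "looks more like a weapon".
weapon_scores = rng.normal(0.65, 0.15, 1_000)    # items that really are weapons
benign_scores = rng.normal(0.35, 0.15, 10_000)   # phones, keys, binders...

for threshold in (0.4, 0.5, 0.6, 0.7):
    miss_rate = np.mean(weapon_scores < threshold)          # weapons waved through
    false_alarm_rate = np.mean(benign_scores >= threshold)  # innocent bags flagged
    print(f"threshold {threshold:.1f}: "
          f"missed weapons {miss_rate:6.1%}, false alarms {false_alarm_rate:6.1%}")
```

Raising the threshold keeps the checkpoint line moving, but every point of convenience is paid for in missed blades, which is exactly the failure mode the BBC investigation exposed.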
The scanner's failure fits McLuhan's concept of the “tetrad”: he proposed that every technology 1) enhances something, 2) makes something obsolete, 3) retrieves something from the past, and 4) when pushed to its limits, reverses into something else. When the AI scanner failed to detect a knife, it revealed its reversal into a potential security hazard, which brings us to the importance of transparency, accountability, and continuous improvement in AI deployment.
Lack of Transparency
In the grand scheme of things, the tale of the knife-evading AI scanner is an isolated incident. Evolv Technology claims that its Evolv Express system detected over 90,000 firearms and over 80,000 bladed weapons in 2022. The scanner is being used in hundreds of schools, stadiums, and theme parks like Six Flags, despite its inconsistencies.
Yet, this case prompts a necessary reflection on the current state of AI and its role in our society. As we move towards a future increasingly dependent on AI, we need to address the 'black box' problem: the lack of transparency and explainability in how AI makes decisions. This need has been identified as one of the key trends in AI for 2023, with organizations working towards eliminating bias, unfairness, and opacity from their automated systems. It becomes increasingly difficult as the economic value of AI grows at a rate the surrounding science and technology can't keep up with. Already, the use of large language models, diffusion models, and other deep learning technologies means that we are spreading black-box technology like wildfire.
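For a sense of what "explainability" can mean in practice, here is a minimal sketch of one common technique, a gradient saliency map, which highlights the pixels that most influenced a model's score for a given class. The model and input are stand-ins for illustration; this is not how Evolv's system works internally.

```python
# Sketch of a gradient saliency map: which pixels most affected the score?
# The resnet18 model and random input are stand-ins, not a real weapons detector.
import torch
from torchvision.models import resnet18

def saliency_map(model, image, target_class):
    """image: (1, 3, H, W) float tensor; returns an (H, W) map of pixel influence."""
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]   # scalar score for the class of interest
    score.backward()                        # gradients flow back to the input pixels
    return image.grad.abs().max(dim=1).values.squeeze(0)  # strongest channel per pixel

model = resnet18(weights=None).eval()       # untrained stand-in classifier
fake_frame = torch.rand(1, 3, 224, 224)     # stand-in for a scanner image
heatmap = saliency_map(model, fake_frame, target_class=0)
print(heatmap.shape)                        # torch.Size([224, 224])
```

Techniques like this don't open the black box completely, but they give operators and auditors something concrete to inspect when a system flags, or misses, a threat.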
You can read my story to learn more about this here:
Conclusion
The knife that slipped past the AI scanner serves as a stark reminder of the complexities and challenges of integrating AI into our lives. It's a cautionary tale about the importance of transparency, accountability, and continuous improvement in the deployment of AI technologies. But it's also a story of potential. The same technology that failed to detect a knife has successfully identified tens of thousands of weapons, offering a glimpse of a safer, AI-enhanced future.
In the end, the incident underscores the importance of balancing our enthusiasm for AI's potential with a healthy dose of skepticism and diligent oversight. It's a poignant lesson about ensuring our reliance on AI does not outpace our understanding of its limitations and our capacity to control its outcomes.