Japan’s National Police Agency is set to launch an ambitious project that uses advanced AI-powered security cameras for crime prevention.
These cameras are designed with machine-learning pattern recognition capabilities for behavior detection, object detection, and intrusion detection.
The initiative is slated to be implemented this fiscal year in response to the assassination of former Prime Minister Shinzo Abe and the attempted attack on incumbent Prime Minister Fumio Kishida.
The AI systems will focus on recognizing patterns associated with suspicious activity, such as repeated or anxious glances, restlessness, and fidgeting, all treated as potential indicators of malicious intent.
This development marks a troubling expansion of what modern law enforcement is capable of.
In China, surveillance is a constant presence. From police cameras on street corners to online monitoring and censorship, the populace is subject to observation at all times.
Now, a new generation of technology seeks to use data gathered from everyday activities in order to predict crimes and protests before they occur.
Unfortunately, these predictive systems do not target only people with criminal records; they also flag vulnerable groups such as ethnic minorities and people with histories of mental illness.
This advanced technology utilizes algorithms that comb through data in search of patterns or anomalies that could suggest potential risks.
While this type of system may be seen as controversial in the West, it is praised for its success in China; reports have documented cases where the algorithms flagged suspicious behavior and subsequent investigations uncovered fraud and pyramid schemes.
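To make the mechanism concrete, here is a minimal sketch of the kind of rule-based anomaly flagging such systems build on: score each data point by how far it deviates from the norm and flag outliers. The data and threshold are hypothetical; real systems combine many signals, but a simple z-score rule illustrates the idea.

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=3.0):
    """Return indices of values more than `threshold` standard deviations
    from the mean (a simple z-score rule)."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Hypothetical daily transfer amounts; the spike at index 5 stands out.
amounts = [120, 95, 110, 130, 105, 9800, 115, 100, 125, 90, 108, 118]
print(flag_anomalies(amounts))  # → [5]
```

An investigator would then review the flagged records manually; the algorithm only surfaces candidates, it does not establish wrongdoing.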
Nevertheless, these technologies’ reach extends beyond surveillance; they serve as an effective tool for maintaining social control over the population.
The Chinese government under President Xi Jinping has taken drastic steps to ensure social stability: any perceived risk of disruption is quickly addressed through technological means, such as enforcing lockdowns during the COVID-19 pandemic or silencing dissenters.
Regrettably, other world leaders, including Justin Trudeau, appear to be modeling their own policies on China’s approach.
In 2011, TIME Magazine described predictive policing as a groundbreaking innovation, and since then it has been quietly rolled out across the United States.
Numerous police departments have begun experimenting with predictive software in an effort to foresee and prevent crimes before they occur.
Developers of this technology advertise it as a way to reduce human bias, improve precision when enforcing laws, and optimize resource allocation.
This trend gained traction when federal grants were dedicated to smart policing solutions.
The Los Angeles Police Department (LAPD), under the leadership of Chief William Bratton, initiated one of the first tests in 2009 with $3 million in federal funding.
The goal was to identify areas where crime is likely and deploy officers preemptively in order to discourage criminal activities.
Bratton’s involvement lent credibility to the technology, and more departments across the country adopted it. By 2014, a survey of 200 departments found that 38% were using predictive policing and 70% planned to implement it within the coming years.
Using data to identify high-crime areas and allocate resources accordingly is a practical application. However, given rapid advances in AI and the tracking devices we all carry, how long will it be before this pre-crime technology is turned on ordinary citizens?
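The hotspot analysis described above can be sketched in a few lines: bucket past incidents into grid cells and rank the cells with the most activity for extra patrols. The coordinates and cell size below are hypothetical, and production systems use far richer models, but the core place-based logic looks like this:

```python
from collections import Counter

def hotspot_cells(incidents, cell_size=1.0, top_n=3):
    """Bucket (x, y) incident locations into grid cells and rank cells
    by incident count, the core of place-based predictive policing."""
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in incidents
    )
    return counts.most_common(top_n)

# Hypothetical incident locations (e.g., km offsets within a city).
incidents = [(0.2, 0.3), (0.7, 0.1), (0.4, 0.9), (2.1, 2.2), (2.5, 2.8), (5.0, 1.1)]
print(hotspot_cells(incidents, top_n=2))  # → [((0, 0), 3), ((2, 2), 2)]
```

Note that the same machinery works on any event stream, which is precisely why the question of scope matters: swap incident reports for location pings from personal devices and the target shifts from places to people.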
This question must be answered, because AI surveillance is quickly reaching alarming levels reminiscent of “Black Mirror”.