Artificial intelligence is making its way into almost every aspect of American life. Advocates tout its efficiency and argue that it makes fewer errors than humans performing the same tasks.
While AI certainly has the potential to improve aspects of our society, it also has the potential to do real harm.
Criminal justice is one area where AI should be monitored carefully to ensure it does not infringe on our rights. Law enforcement should not assume that the product of an AI tool will be reliable. And no one should assume its results are consistent with the protections afforded to every American citizen.
Example of Law Enforcement Using Artificial Intelligence Tools
AI has already made its way into criminal justice circles. In late 2024, WIRED magazine reported on the story of Cybercheck technology, a product of Global Intelligence. It is a cautionary tale about turning over vital roles to AI.
Cybercheck was purported to be a tool that could give police a clear picture of where a person was at specific times. This geolocation information supposedly came from a complicated analysis of publicly available sources.
The ability to tell where an accused person was is valuable if you are trying to connect them to a crime. For that reason, hundreds of law enforcement agencies used the tool to conduct thousands of searches. In at least one Texas case, a Cybercheck report was offered as evidence in court.
The Real Story
While the reports seemed useful, it turned out that the results were highly questionable.
Cybercheck doesn’t retain any data about how it reaches its conclusions. Defense attorneys rightfully want to know what information tied their clients to specific times and places; Global Intelligence cannot provide it. The company claims Cybercheck uses public sources, but some experts say this kind of information would not be available without a warrant.
In other cases, the information provided was simply wrong. Reports have cited email addresses that do not exist and routers that never bore the names listed in the reports.
Phantom Results and the Six-Fingered Man
Large language models (LLMs), such as ChatGPT, have an unfortunate tendency to invent statistics and historical facts that are not based in reality. These inventions are called “hallucinations.” Because LLMs work by recognizing patterns, they may “recognize” a pattern that doesn’t actually exist and confidently generate inaccurate information.
AI image generators are also notorious for their failure to produce realistic hands and feet. While a hand with three thumbs and 13 total fingers might be amusing, it shows that AI still has serious limitations. Using its output to put people behind bars is not something law enforcement should be considering.
Call for Experienced Texas Criminal Defense Guidance
If you’ve been accused of a crime, it’s important to know whether AI is being used as part of your prosecution.
At Lee & Wood, our criminal defense attorneys can give you the strong, intelligent defense you need. We can protect your rights and stand up for you in and out of the courtroom. Contact us to schedule an appointment to discuss your situation.