The Electronic Frontier Foundation has obtained the output scripts of creepy-faced Army chatbot Sgt. Star, who has taken part in almost 3 million conversations over the last five years. Unexpectedly, the poorly redacted document revealed that the FBI and CIA have already used chatbots to “engage PEDOPHILES AND TERRORISTS online.” Exactly what that looks like is unclear.
How AI will come to mediate interaction with institutions is an interesting question. In Neill Blomkamp’s Elysium, there’s a surreal bit where Matt Damon meets with a graffiti-covered robot parole officer. It interrupts him, doesn’t care about his story, and offers him a handful of pills as it extends his parole, so it captures the feeling of dealing with some real government workers. The EFF dug into the Army’s response even further:
In our request, we sought his output script (every possible response in his database) as it stands now, but also for each year since he went live. That way, we could compare how his answers have evolved and grown through military policy changes, such as the end of Don’t Ask Don’t Tell and of the ban on women in combat. The Army gave us the 835 current responses, but could not give us the rest. Apparently, the historical scripts don’t exist because the script is a “living body,” the Army’s exact words describing the spreadsheet of 835 responses.
Do they mean “living” like the Constitution is said to be a living document? Or are they using it in the sense of Johnny 5 from Short Circuit? We plan to file a FOIA appeal to learn more. What they likely mean is that they don’t maintain older versions of Sgt. Star’s script because they constantly update a single file. If that’s the case, it indicates poor record-keeping by the Army. If the FBI treats its bots the same way, that would raise serious questions about a defendant’s ability to challenge the reliability of a bot when they are charged with a crime after the bot’s programming has changed.
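Retaining every historical version of a constantly updated file is a solved problem. As a hypothetical sketch (the filename and commit messages are invented for illustration, not taken from the Army's actual release), even basic version control would let an agency update a single "living" script while keeping each prior year's answers auditable:

```shell
# Hypothetical demo: tracking revisions of a chatbot response script
# with git, so old versions survive updates to the "living" file.
set -e
mkdir -p sgt_star_demo && cd sgt_star_demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# Initial script (invented example content).
echo "Q: Can women serve in combat roles?,A: No." > responses.csv
git add responses.csv
git commit -qm "2012 script"

# Policy changes: overwrite the same single file, but keep history.
echo "Q: Can women serve in combat roles?,A: Yes." > responses.csv
git commit -qam "2013 script (combat ban lifted)"

# Every past version of the script remains retrievable.
git log --oneline -- responses.csv
```

With a history like this, a FOIA request for "the script as of each year" is a simple checkout of old commits rather than an impossibility.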