Can causation become a victim of technology?

The future has taken another step forward. But it was stopped dead by its inventor, OpenAI.

According to its website, OpenAI Inc. is a non-profit AI research company whose mission is to discover and pave the way toward safe artificial general intelligence. Elon Musk is among its backers.

OpenAI’s technology, called GPT-2, has been shown to create written passages that mimic the style and content of a given sample. Think about that for a moment. Artificial intelligence can recreate your literary voice based on a simple writing sample. And that “you” can be just about anyone: John Nosta, William Shakespeare, or, I suppose, Elon Musk himself. My feeling is that the written word, from e-mails to proclamations, has always been a true reflection of its author. Style and content have been the typographic personality that enhances communication. But that idea could fall victim to the intrusion of technology into humanity.

These innovations are not without merit. The utility is important and, as OpenAI suggests in its blog post, spans a wide range of applications, including writing assistants, improved translation, and better speech recognition. But the story is certainly more complex. The ability to imitate a writing style raises significant concerns. From the generation of fake news to outright identity theft, the potential for abuse is worrying. So worrying, in fact, that this technological breakthrough has hit the brakes: OpenAI will not fully open this modern Pandora’s box, but will leave the lid ajar for “experimentation.”

“Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”
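GPT-2 is vastly larger and more sophisticated, but the training objective quoted above, predicting the next word, can be sketched with a toy bigram model. Everything below is illustrative (it is not OpenAI’s code, and the sample corpus is invented):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent successor of `word` seen in training."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the truth suffers and the truth matters and the lid stays closed"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "truth" follows "the" twice, "lid" once
```

A model like GPT-2 does the same thing in spirit, except the prediction is conditioned on a long context rather than one word, and it is learned from 40GB of text rather than a single sentence.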

But that’s only part of the story.

Just beyond the ability to generate words that convincingly reflect specific authors lies the emerging ability to manipulate video. Fabricated words and fabricated video merge to create a “new reality” that is truly confounding. Nietzsche’s words coming out of John Belushi’s mouth might be the comedic side of this mash-up, but the harsher casualty is reality itself. Truth suffers in this technological context because what is real becomes almost impossible to discern. In physics, a non-causal reality describes how events stop making sense if signals can travel faster than the speed of light: simply put, a faster-than-light signal could allow an observer to see an effect precede its cause. And the question that follows demands attention: what can we believe?
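The effect-before-cause claim is a standard consequence of the Lorentz transformation: the time between two events for an observer moving at speed v is Δt′ = γ(Δt − vΔx/c²), which goes negative for some v < c whenever the signal connecting the events travels faster than c. A quick numerical check (the specific speeds are illustrative):

```python
def boosted_interval(dt, dx, v, c=1.0):
    """Time between two events as measured by an observer moving at
    speed v along x (Lorentz transformation; units where c = 1)."""
    gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
    return gamma * (dt - v * dx / c**2)

# A hypothetical signal travelling at 2c: emitted at t=0, x=0,
# received at t=1, x=2.
dt, dx = 1.0, 2.0

print(boosted_interval(dt, dx, v=0.3))  # positive: effect still follows cause
print(boosted_interval(dt, dx, v=0.6))  # negative: effect precedes cause
```

For any ordinary, slower-than-light signal (dx < dt in these units), the interval stays positive for every observer, which is why causality holds in our world, at least for now.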

Science and philosophy introduced the idea of the causal world. And within that causal world, physics has defined a non-causal reality, held at bay by the speed of light, at least for now. Advances in video editing and manipulation, combined with GPT-2’s language abilities, begin to suggest that a new kind of non-causal reality may emerge. Our ability to discern fact from fiction, and even to navigate our reality, may be about to change. Perhaps the better word is not “change” but “shift”, to a point where empirical reality is simply one part of a hypothetical domain that extends our multiverse.

OpenAI has taken an inevitable step forward and lifted the lid on technology’s new Pandora’s box. Opening it was the easy part. I wonder whether their cautious stance of keeping that lid nearly closed, at least until innovation and exploitation can be balanced, will prove much harder to maintain.

Photo credit: OpenAI

Norma A. Roth