In re-reading the White House Executive Order on developing safe, secure, and trustworthy AI, it's apparent that we're in one of those historic periods where technology development has outpaced the creation of effective governance measures.
We've seen this before with a variety of innovations, many of them in the domains of warfare:
- The creation of dynamite
- The use of rifled muskets combined with Napoleonic-era tactics in the U.S. Civil War
- The development of the Maxim Gun
- The use of industrial chemical agents in the trenches of World War I
In each of these cases, fascination with the new technology and its rapid incorporation into existing doctrine led to unprecedented levels of destruction.
I contend that we're in a similar place now with AI. We're achieving breakthroughs so quickly that there appears to be very little discussion about HOW these innovations might be put into practice and what their impact could really be.
History teaches us that even in the embryonic stages of disruptive creation, we should be asking these questions.