Welcome back to the Rhetorical Roundhouse blog! It's been a couple of weeks since I last discussed the intersection of Digital Humanities and martial arts because last week I finally got off my butt and released my first 10,000 kicks update. That said, I'm back this week to talk about "artificial intelligence" and why that phrase may be a bit of a misnomer these days. To do that, I'm going to start by talking about Dragon Ball Z.
For those who may be unaware, the Dragon Ball franchise is an anime that revolves around martial artists achieving god-like fighting abilities in response to the newest villainous threat of the week. The show features more deus ex machina moments than the complete works of Euripides, and one of my favorites was the aptly named "hyperbolic time chamber." This was an inter-dimensional space that characters could use as a training room when they wanted to maximize their gains. For every day that passes in the regular world, a year passes inside the chamber. Needless to say, almost every time a powerful villain approached, Goku and crew would drastically improve their abilities by abusing space-time.
Imagine the possibilities. Even if you're not a martial artist, I'm sure you can think of a million ways to make use of such a space. Need to prepare for a presentation? Suddenly the couple of hours you have turn into a month. How about finishing a PhD in less than a week? These are just some of the possibilities when we think about expediting our potential to learn and grow in finite amounts of time. And, while we puny humans haven't yet come up with a way to do this for ourselves, we have created intelligent beings who can.
In 1997, IBM's Deep Blue made history when it defeated reigning world chess champion and grandmaster Garry Kasparov in a six-game match. Deep Blue accomplished this not by thinking like a grandmaster but by brute force: searching roughly 200 million chess positions per second, scoring them with a handcrafted evaluation function (plus an extensive opening book), and doing it all without mental or physical fatigue or any emotional interference.
Chess is a complex game, to be sure. But the number of legal moves in any given position is manageable and the in-game information is public, perfectly known. Because of this, Deep Blue didn't need to understand chess the way Kasparov did; it could simply out-calculate him, searching the tree of possible moves deeper and more reliably than any human ever could.
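If you've never seen what that kind of search actually looks like, here's a toy, runnable sketch in Python. The game is tic-tac-toe rather than chess (so the whole tree can be searched in a blink), and nothing here resembles Deep Blue's real engineering, but the family of idea is the same: explore every possible future and pick the move with the best guaranteed outcome.

```python
# Toy exhaustive game-tree search, the idea Deep Blue scaled up to hundreds of
# millions of chess positions per second. The game here is tic-tac-toe, which
# is small enough to search completely.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move), scored from X's point of view."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                                  # draw
    best_score, best_move = None, None
    for m in moves:
        board[m] = player                               # try the move...
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = None                                 # ...then undo it
        if (best_score is None
                or (player == "X" and score > best_score)
                or (player == "O" and score < best_score)):
            best_score, best_move = score, m
    return best_score, best_move

# Searching the full tree from an empty board confirms that tic-tac-toe is a
# draw with perfect play (score 0).
print(minimax([None] * 9, "X"))
```

Chess is far too big to search exhaustively like this, which is why Deep Blue needed specialized hardware, pruning tricks, and that evaluation function to judge positions it couldn't calculate all the way to the end.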
Consider the game of Go, by contrast. Again, players have perfect information, but the number of legal board positions is astronomically larger (roughly 2x10^170!). A program like Deep Blue would never be able to play Go well (partly because it was built and optimized exclusively as a chess program), but DeepMind's AlphaGo sure can. More importantly, not only can AlphaGo play, it can search dozens of moves ahead, guided by the play patterns of the best human competitors, and it can learn. That's what sets AlphaGo apart. Because it's designed to learn rather than to look answers up, AlphaGo constantly evolves and invents distinctly non-human strategies of play. It learns via a sort of hive mind stitched together from the experiences of variously tuned versions of itself playing millions of games of Go against each other. This is the machine learning breakthrough that allowed AlphaGo to defeat Lee Sedol in 2016 and achieve its fame.
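To make that "hive mind" idea a little less mystical, here's a caricature of self-play learning in Python. It is emphatically not DeepMind's training pipeline (AlphaGo pairs deep neural networks with Monte Carlo tree search); it's a toy: a single policy plays both sides of a tiny take-away game over and over, nudging its preferences toward whatever moves ended up on the winning side.

```python
import random
from collections import defaultdict

# A caricature of self-play learning, not DeepMind's actual pipeline. The game:
# a pile of 10 stones, each player takes 1-3, whoever takes the last stone wins.
# One shared policy plays both sides and reinforces the moves of each game's winner.

weights = defaultdict(float)            # learned preference for (pile_size, take)

def choose(pile, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < explore:       # occasionally try something new
        return random.choice(moves)
    return max(moves, key=lambda m: weights[(pile, m)])

def self_play_game():
    pile, history, player = 10, [], 0
    while pile > 0:
        move = choose(pile)
        history.append((pile, move, player))
        pile -= move
        player = 1 - player
    return history, 1 - player          # the player who took the last stone wins

for _ in range(50_000):                 # the "hyperbolic time chamber" part
    history, winner = self_play_game()
    for pile, move, player in history:
        weights[(pile, move)] += 0.1 if player == winner else -0.1

# What did it come to prefer when facing the full pile of 10?
print({take: round(weights[(10, take)], 1) for take in (1, 2, 3)})
```

The point isn't the tiny game; it's that every scrap of "experience" driving the learning was generated by the program playing itself, far faster than any human could ever accumulate it.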
AlphaGo is the hyperbolic time chamber in action--because of its immense processing power and lack of human weaknesses, it will play more games of Go than any human collective ever could.
Ok, well maybe we should leave the mathematically bounded board games to the computer. At least we humans are still the only ones who can create real art, right? I mean, I'd love to see a robot paint like Rembrandt!
Crap.
Ok well, maybe they can paint, but they sure can't write poetry--there are definitely no Shakespeare bots out there... But, in all seriousness, for a while the last frontier for Artificial Intelligence was natural language processing. As it turns out, IBM's Watson showed that this hurdle could be overcome when it dominated an exhibition match against Jeopardy! champions Ken Jennings and Brad Rutter in 2011. And more and more programs keep emerging that demonstrate the machine's continued progress toward talking real pretty.
One of my favorite examples of human/machine language interfacing is a bit different, though. Instead of teaching robots to talk or respond to us, some scholars celebrate the way we talk to them. The Google Poetics project, for example, has this to say about the collaborative invention of search engine poetry:
"Google’s algorithm offers searches after just a few keystrokes when typing in the search box, in an attempt to predict what the user wants to type. The combination of these suggestions can be funny, absurd, dadaistic - and sometimes even deeply moving."
Google Poetics hosts a variety of these combinations of suggested searches on a simple Tumblr blog. Here's an example of the kind of silliness you can expect:
But, as the about page reminds us:
"There is, however, more to these poems than just the occasional chuckle. The Google autocomplete suggestions are based on previous searches by actual people all around the world. In the cold blue glow of their computer screens, they ask “why am I alone” and “why do fat girls have high standards”. They wonder how to roll a joint and whether it is too early to say “I love you”. They seek information on ninjas, cannibals, and Rihanna, and sometimes they just ask “am I better off dead?”
Despite the seemingly open nature of Western society, forbidden questions and thoughts still remain. When faced with these issues, people do not reach out to one another, instead they turn to Google in the privacy of their own homes."
It's rather chilling to think of these poems as tiny confessionals, intended-to-be private divinations, or shameful thoughts uttered into the comfort of the void...
This is the intersection that I find most fascinating: the ways the human and the machine inspire one another. In my own life, I owe a lot of my martial arts drive to digital animation, next-gen gaming consoles, and the rapid transmission of technical information via the internet and social media. Were I not born in the era I was, it's unlikely I'd have the ability or the desire to kick above my head. For example, Hwoarang was one of my favorite characters in the Tekken video game franchise because he was a picture-perfect Tae Kwon Do fighter--I wanted to be just like him. Unfortunately, I have a long way to go before achieving that, but it's not impossible, as stuntman Eric Jacobus shows us!
Of course, as I discussed in a previous post, humans inspire machines and programs as well. Motion capture technology allows us to transduce the embodied knowledge human martial artists have trained so hard to achieve into digital data fit for machine consumption. One of the cooler examples of this from this week's reading is the Yaskawa Electric Corporation's "Bushido Project."
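To give a concrete sense of what that "digital data" looks like once the capture rig has done its work, here's a minimal, invented example: each frame is just a timestamp plus joint positions, and once a kick is numbers, a machine can start asking questions about it. (The format and the values below are mine, not the Bushido Project's.)

```python
# A toy illustration of embodied movement turned into data: a few invented
# motion-capture frames for one kick, and a quick calculation over them.
from dataclasses import dataclass

@dataclass
class MocapFrame:
    time_s: float   # seconds since the start of the take
    joints: dict    # joint name -> (x, y, z) position in meters

kick = [
    MocapFrame(0.00, {"right_ankle": (0.10, 0.00, 0.09)}),
    MocapFrame(0.05, {"right_ankle": (0.22, 0.05, 0.41)}),
    MocapFrame(0.10, {"right_ankle": (0.31, 0.12, 0.98)}),
    MocapFrame(0.15, {"right_ankle": (0.35, 0.18, 1.64)}),
]

def peak_speed(frames, joint):
    """Largest frame-to-frame speed of a joint, in meters per second."""
    best = 0.0
    for a, b in zip(frames, frames[1:]):
        ax, ay, az = a.joints[joint]
        bx, by, bz = b.joints[joint]
        dist = ((bx - ax) ** 2 + (by - ay) ** 2 + (bz - az) ** 2) ** 0.5
        best = max(best, dist / (b.time_s - a.time_s))
    return best

print(f"{peak_speed(kick, 'right_ankle'):.1f} m/s")  # rough foot speed of the kick
```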
If we can engineer a robot arm with the motor control to swing a sword and then program it with all the embodied knowledge of a master swordsman, why can't we take the next step and create a full-fledged fighting robot? Again, I'm reminded of Tekken and the characters Mokujin and Tetsujin. The former is a wooden training dummy come to life, as if by magic. The latter is the same character, only this time made from iron instead of wood, enlivened by artifice instead of mysticism.
The thing about Mokujin/Tetsujin in the games is that they don't actually have their own distinct move lists. Instead, when you play as one of these characters, the game randomly hands you another character's move list. So, for instance, it may be that I select Tetsujin but see him fight just like Hwoarang. But what if this robot had the hive mind and learning algorithms of AlphaGo? What if, instead of studying games of Go, it studied competitive fight data from the best athletes in boxing, Tae Kwon Do, or MMA? What if it could compare the possible outcomes of a fight dozens of moves ahead of the human fighter and devise strategies to best them every time? Could we build a gold medalist?
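Pure speculation, but here's the smallest possible version of that idea in code: given a log of scored exchanges from past matches (entirely invented below), pick whichever response has historically scored best against the opponent's last technique. A real system would need to see many moves ahead, read feints, and cope with an actual body; this just shows where the fight data would slot in.

```python
# Hypothetical sketch: choose a counter based on how responses have scored in
# past exchanges. The exchange log and point values are invented for illustration.
from collections import defaultdict

# (opponent_move, our_response) -> points scored in recorded exchanges
exchange_log = defaultdict(list)
for opp, resp, points in [
    ("roundhouse", "back kick", 4), ("roundhouse", "cut kick", 2),
    ("roundhouse", "back kick", 4), ("jab", "counter cross", 2),
    ("jab", "slip and hook", 3),    ("jab", "counter cross", 1),
]:
    exchange_log[(opp, resp)].append(points)

def best_response(opponent_move):
    candidates = [(sum(pts) / len(pts), resp)
                  for (opp, resp), pts in exchange_log.items()
                  if opp == opponent_move]
    return max(candidates)[1] if candidates else "keep distance"

print(best_response("roundhouse"))   # 'back kick' scores best in this toy log
print(best_response("jab"))          # 'slip and hook'
```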
To do so, we'd need an enormous amount of embodied data. But, given the fact that so many combat sports use electronic scoring systems and are digitally recorded for review, this may not be all that difficult. Though, in the words of Dr. Ian Malcolm, "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." Maybe we should stop to ask some ethical questions before continuing to press into the unknown; maybe we should turn a critical eye to the data-brokering corporations that surveil and control nearly every aspect of our lives on and offline before...
ANNND it's gone. Oh well, I'm sure we'll do a better job at being critically aware of the next paradigmatic shift in technology :)
Thanks for reading and I hope you tune in next week for some conference updates from ATTW and CCCC's in Pittsburgh!
Kamsahamnida!