
Michael Rayo, assistant professor of Health and Rehabilitation Sciences, assistant professor of Integrated Systems Engineering and faculty affiliate of TDAI. Credit: Courtesy of Kassidy D'Annolfo
By combining elements of capture the flag and the reality TV show "Big Brother," an Ohio State lab is working to establish a new standard for the use of artificial intelligence.
The Cognitive Systems Engineering Laboratory is studying the effects of AI assistance on human decision-making, said Michael Rayo, the lab's co-director and the study's principal investigator.
There is a broad push to incorporate AI into society, said Kassidy D'Annolfo, an undergraduate research assistant on the study.
“As we integrate artificial intelligence and other machines or automation into our daily work, it’s not just about replacing the person. It’s about making a team between the person and the machine,” D’Annolfo said.
Rayo said the ultimate questions the study aims to answer are how to distribute work better among individuals and how people can better communicate with one another.
“What we want to start reviewing is: Are there patterns of the teaming relationships across these different settings, where nothing else is the same but we start to see these similarities?” Rayo said. “If we can see these similarities, then we can start to provide guidance for designers of these technologies moving forward in whatever setting or industry they’re working in.”
D'Annolfo said the lab hypothesizes that human-human teams will predict outcomes more successfully than human-machine teams.
The study has multiple branches and is beginning the first phase, in which about 20 student participants compete in a video-recorded game of capture the flag, Rayo said.
For the second phase, the lab will show the video to another set of participants, who will then be divided into human-human and human-machine teams, D’Annolfo said. The process will mimic the viewing of “Big Brother,” in which viewers watch the contestants attempting to live together while all of their actions are recorded.
“How can you pull out individual cues and how can the analytic [machine] help you or not help you pull out those cues — those little things like a kiss or a hug or an argument? And then extrapolate that into larger significance,” D’Annolfo said.
Visual cues from participants will aid the teams in predicting what individuals will do next — whether that be who captures the flag first or who is booted from the “Big Brother” house, D’Annolfo and Rayo said. Reading cues may be easier for human-human teams, Rayo said.
“We have the ability to direct each other as people a lot easier than we can direct machines. We also have the generic ability to understand each other potentially and our actions better than we have to understand the machine’s,” Rayo said.
Rayo also said the environment and specifics of the study’s procedure were created carefully to consider the societal application of AI integration.
“We wanted a world that had both real intentions, so it wasn’t random, and was able to be somewhat diverse,” Rayo said.
This, in conjunction with the idea that most sports have very rigorous rules, led the lab to choose capture the flag, Rayo said.
Because much of the interest around AI comes from sensitive and confidential circumstances in health care, military and government spheres, the lab had to create a setting that was both versatile and original, Rayo said. While many labs conducting similar studies tend to use completely computer-generated scenes for people to react to, Rayo's group's choice reflected a precise objective.
“We were trying to understand how to build a new world or find a new world,” he said.
D’Annolfo said the analytic used as the other half of the human-machine team is not true AI, but a simulation of it.
“We’re doing all of the manual work behind it to pick out: What cues do we want the analytic to highlight? What information do we want it to pull out, like what do we want the analytic to do?” D’Annolfo said. “Then, we’re just going to design some sort of overlay or on-screen indicator to do that. So there will be no actual artificial intelligence behind the scenes.”
While the project is currently funded through 2020, Rayo and D'Annolfo said they expect it to continue for about three more years. Rayo said he anticipates between 100 and 200 student volunteers will participate in the capture-the-flag portion and nearly 150 more in the analytic phase, making the study an extensive one.