Humans & AI: Can We Just Get Along?


Editor’s Note: This contributed article was written by Daniel Serfaty, the CEO and Founder of Aptima, Inc., which is an industry leader in human-AI teaming. In this article, he discusses what is needed to get the most out of the collaboration, and why that collaboration is an important tool for military training. 

Humans have developed tools and technologies throughout our history, from the hammer and the wheel to the aircraft and the Internet. We have modified existing tools and created new ones to meet new demands. But never before have we had a technology like AI: a tool that actually watches, learns, and changes in response to us. And the more we use it, the more it changes. In some cases, we understand AI behavior, as with a mapping app that plots our route. Even when we defy it and make a wrong turn, it adapts to us, reorients us, and we adapt back.

But what about AI and how it will impact how we train and learn? Will it replace teachers? Will classrooms disappear? How will soldiers train with AI?

These were just some of the questions we posed to several experts from defense, industry, research, and academia during a panel discussion, “Imagining 2030: AI Empowering Learning.”

The low-hanging fruit of AI

While it’s easy to be carried away by AI’s future possibilities, it’s the near-term potential of AI that most people want to understand. One point on which industry experts are unanimous is that AI isn’t here to replace human jobs – the trainers, developers, and instructional designers – but to augment them. In fact, we already see this in the military, where AI automates data-intensive functions to help warfighters make decisions, freeing them for more strategic, higher-value activities.

Fellow panelist Colonel Robert H. “Hammerhead” Epstein, Commander of the Air Force Agency for Modeling and Simulation, offered, “AI will not replace our instructors, rather it will enable them to produce better-trained Airmen quicker by automating repetitive tasks and optimizing live instructor contact time. The Air Force is aggressively leveraging cutting-edge artificial intelligence technology to take full advantage of this emerging way to train our people.”

More efficient, more effective learning with AI

The unique ability of AI to process tremendous amounts of data, far beyond human capacity, is what will make learning so much more efficient. Much of the data we collect today goes unexploited – not just scoring data, but rich data from the field, the classroom, and simulation. AI can mine that data to pinpoint, in very specific and precise ways, where we are deficient and what we need to learn. This is the future of precision learning.

The second aspect is the exponential power of machine learning, which is now a thousand times faster than it was a few years ago. So instead of waiting hours or days for training feedback, what if we could reduce that gap to fractions of a second? By adapting training in real time – adjusting scenarios and learning paths on the fly – trainees can focus on improving what they didn’t do well rather than repeating what they did. We know from research that delivering feedback in the moment, as learners learn, dramatically improves performance, and that this benefit diminishes the longer feedback is delayed. In this way, AI can help trainees reach higher proficiency in less time, making training more efficient and more effective.
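As a thought experiment, this kind of real-time adaptation might be sketched as a simple loop: keep a running proficiency estimate per skill, fold in each new result immediately, and steer the next scenario toward the weakest skill. Everything here – the skill names, the 0.85 mastery threshold, the blending rate – is illustrative, not drawn from any fielded training system:

```python
# Illustrative sketch of a real-time adaptive training loop.
# All names and thresholds are hypothetical, not from a fielded system.

PROFICIENCY_TARGET = 0.85  # assumed mastery threshold per skill

def weakest_skill(scores):
    """Return the skill with the lowest current proficiency estimate."""
    return min(scores, key=scores.get)

def next_scenario(scores):
    """Pick the next scenario to target the trainee's weakest skill."""
    skill = weakest_skill(scores)
    if scores[skill] >= PROFICIENCY_TARGET:
        return "sustainment"          # all skills at target: maintain them
    return f"drill:{skill}"           # otherwise, drill the deficit

def update(scores, skill, result, rate=0.3):
    """Blend the latest observed result into the running estimate
    right away, so feedback can follow within the same session."""
    scores = dict(scores)
    scores[skill] = (1 - rate) * scores[skill] + rate * result
    return scores

scores = {"navigation": 0.9, "comms": 0.55, "threat_id": 0.7}
print(next_scenario(scores))           # targets 'comms', the weakest skill
scores = update(scores, "comms", 1.0)  # trainee succeeds on the drill
print(round(scores["comms"], 3))       # estimate rises immediately
```

The point of the sketch is the tight loop: the estimate updates the instant a result arrives, so the next scenario and the feedback can both reflect it without waiting for an after-action review.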

‘Smarts’ everywhere

To illustrate the point, Sae Schatz, Ph.D., Director of the Advanced Distributed Learning Initiative, described the future in terms of a total learning architecture embedded with AI. With AI ‘smarts’ injected everywhere – inside simulations, databases, learning record management systems, and more – AI can enable this kind of feedback.

“We’re already seeing the first inklings with learning analytics dashboards, which are becoming common in digital learning platforms and at many higher education institutions,” Schatz said. “Like so many jobs, future instructors are likely to be human-AI teams. AI can help instructors and institutions understand their students’ states, diagnose hidden issues, and predict future outcomes to support early interventions.”

Schatz makes clear, however, that “a human touch is still needed throughout. It’s the combination of people and algorithms that has the highest potential.” In other words, using AI to personalize learning at scale would be like inserting a smart and caring tutor into the learning process, so that even an average trainer can become an excellent teacher.

This represents a profound change to the learning environment, but unless the humans in the loop—the trainers, observers, and commanders—are able to adapt to this new reality, it won’t work as envisioned.

The human element

We humans excel at watching and instinctively learning about each other. We subconsciously observe a co-worker and know they can be sluggish before their morning coffee. But AI learns very differently, and the conclusions it draws about us can surprise us. In an aircraft cockpit, an AI-enabled autopilot monitoring its pilot could come to a very different conclusion than a human crewmate would, and decide to take over unexpectedly.

Until now, we have only had to adapt to teammates who were themselves human. We know how frustrating new technologies can be when they force us to change in ways we aren’t used to. If your computer actively tried to adapt to you, you might find it not only annoying but disruptive. And if AI observes you and changes to better suit you, but you don’t change in kind or even acknowledge the adjustment, it is like ignoring a teammate – the team works sub-optimally. In other words, the human operator has to know what the AI knows in order to work as a team. This new paradigm of human-machine teams requires mutual feedback loops, so that humans and AI can communicate, understand, and learn from each other in concurrent cycles of adaptation.
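One way to picture these concurrent cycles of adaptation is as two coupled update rules: each party communicates its current way of working, and each shifts a little toward the other every cycle. This is a toy sketch only – the scalar ‘style’ values and the adaptation rate are purely illustrative – but it shows how explicit mutual exchange pulls the two toward a shared way of working:

```python
# Toy sketch of a mutual feedback loop between a human and an AI teammate.
# Each keeps a scalar 'style' and shares it every cycle; both values are
# purely illustrative, not a model of any real system.

def mutual_adaptation(human_style, ai_style, cycles=10, rate=0.25):
    """Each cycle, both parties communicate their current style, then
    each shifts a fraction of the way toward what the other shared."""
    for _ in range(cycles):
        shared_human, shared_ai = human_style, ai_style  # explicit exchange
        human_style += rate * (shared_ai - human_style)  # human adapts to AI
        ai_style += rate * (shared_human - ai_style)     # AI adapts to human
    return human_style, ai_style

h, a = mutual_adaptation(0.0, 1.0)
print(abs(h - a) < 0.05)  # the two converge toward a common middle ground
```

The design point is the symmetry: if only one side adapts (set one rate to zero), convergence is slower and one-sided, which is the “ignoring a teammate” failure mode described above.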

Humans and AI in action

The panel discussion I moderated at last year’s I/ITSEC, between human experts and Charlie, a bot powered by the latest AI and natural language processing technology, was a way to demonstrate this interaction.

Moderator Daniel Serfaty responds to the audience, with panelists Sae Schatz, Ph.D., Benjamin Nye, Ph.D., and Charlie AI.

Unlike an Alexa or Siri that simply spits back facts, Charlie was designed to generate her own thoughts. During the discussion, she could say something profound and insightful one moment, and something child-like or naïve the next. Those inconsistencies could be surprising, yet we didn’t come away seeing AI as simple or stupid. Rather, it was like learning to live with another species, one whose way of thinking and reasoning is different from ours, and will continue to mature.

“AI is its own unique type of intelligence that will continue to evolve,” said Patrick Cummings, an Aptima research engineer, who led the team that developed Charlie. “And we’ll continue developing ways for AI to better adapt and work with us. For example, each time Charlie has a conversation, we’re generating training data to fine-tune her, making her better at learning and adapting more quickly and accurately for the next time.”
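Cummings’ point that each conversation generates training data can be illustrated with a small sketch: pairing each human utterance with Charlie’s reply yields prompt/completion examples of the kind commonly used to fine-tune conversational models. The transcript format and field names here are assumptions for illustration, not Aptima’s actual pipeline:

```python
# Hypothetical sketch of turning conversation logs into fine-tuning data.
# The (speaker, text) transcript format and the prompt/completion fields
# are illustrative assumptions, not Aptima's actual pipeline.
import json

def to_training_examples(transcript):
    """Pair each human utterance with the bot's reply that follows it."""
    examples = []
    for i in range(len(transcript) - 1):
        speaker, text = transcript[i]
        next_speaker, reply = transcript[i + 1]
        if speaker == "human" and next_speaker == "bot":
            examples.append({"prompt": text, "completion": reply})
    return examples

transcript = [
    ("human", "What makes a good teammate?"),
    ("bot", "Trust, shared goals, and knowing each other's limits."),
    ("human", "Can AI have those?"),
    ("bot", "It can learn them, with enough mutual feedback."),
]
for example in to_training_examples(transcript):
    print(json.dumps(example))  # one training record per exchange
```

Each session thus leaves behind a growing set of records that can be folded back into the model, which is the “better at learning and adapting for the next time” loop Cummings describes.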

A key takeaway from the discussion was that together, Charlie and the human experts each brought something unique to the table. And so, if we expect the combination of humans and AI to be greater than the sum of the individual parts, they cannot be haphazardly thrown together. For a symbiotic relationship, we need to continue refining this co-evolution and mutual adaptation, taking into account each species’ respective strengths, weaknesses, biases, and behaviors. 

AI is coming into our lives, and the better we can take advantage of this extraordinary technology, the more productive our relationship with it will be as we learn, work, and train.

To learn more about Aptima’s work, visit the company website.