DARPA is leveraging AI to manage the complexity of Mosaic Warfare, learning from tests and evaluations conducted during the recent AlphaDogfight Trials simulations. (Credit: DARPA) 

Cyber-Physical System of Systems Architecture with Human On the Loop 

Dr. Tim Grayson, Director STO, DARPA

Dr. Timothy Grayson is the Director of the Strategic Technology Office (STO) at DARPA. In this role, he leads the office in the development of technologies that enable soldiers to field, operate, and adapt distributed, joint, multi-domain combat capabilities. Dr. Grayson came to STO in 2018 from a diverse career in government and industry, including work as an entrepreneur and service in the intelligence community as a Senior Intelligence Officer. He holds a PhD in Physics from the University of Rochester, where he specialized in quantum optics, and a BS in Physics from the University of Dayton with minors in mathematics and computer science. 

At AI World Government in Alexandria, Va., Oct. 18-19, Dr. Grayson will deliver a keynote address on Managing the Complexity of Adopting AI, focusing on the design of enterprise interconnected architectures. This is akin to designing the Industrial Internet of Things in the commercial world and is characterized at DARPA as Mosaic Warfare. Dr. Grayson recently spent a few minutes talking to AI Trends Editor John P. Desmond about the work.  

AI Trends: Can you outline DARPA’s vision for Mosaic Warfare and the Industrial Internet of Things (IIoT) within that? What is the role of AI in that vision?  

Dr. Grayson: We use the term ‘mosaic’ in the context of what we call Mosaic Warfare. If you look at where the DOD has been heading, it’s moving toward what I’ll call a system of systems type of approach. Instead of having the weapons, the comms, the sensors, the decision aids, all tightly integrated together into one platform, they’re very disaggregated.  

I can find whatever sensor is the best for the job and whatever weapon is the best for the job, and not care about what platform they're on. That's been the trend, not just at DARPA, but across the department. In addition, where we're going is trying to make that system of systems architecture process much more adaptable and agile. One of the terms we often use with Mosaic is 'monolith busting.'  

So a system of systems is busting up monolithic platforms into these distributed, disaggregated capabilities. What we’re doing with Mosaic is preventing the need to effectively hand-engineer every single one of those systems of systems architectures—so Mosaic is busting monolithic architectures as well.  

Today, if someone says, 'Hey, I want to swap out sensor A and bring in sensor B,' that integration is a manual engineering process. We're trying to create the tools and the infrastructure to make that composition of architectures much easier and faster, so that ultimately it doesn't even require engineers to be involved in designing things. It's almost more of an operator saying, 'Here's the architecture that I want,' and then technician-level people using Mosaic tools to configure it and make it so. It greatly reduces the time required to create very complex, machine-centered, cyber-physical architectures. 
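[Ed. Note: To make the idea concrete, here is a minimal Python sketch of composition-by-declaration rather than hand-engineering. All of the names here (Sensor, RadarA, compose) are invented for illustration and do not reflect actual Mosaic tooling.]

```python
from dataclasses import dataclass
from typing import Protocol


class Sensor(Protocol):
    """Any component that can produce a track, regardless of platform."""
    def detect(self) -> dict: ...


@dataclass
class RadarA:
    def detect(self) -> dict:
        return {"source": "radar_a", "track": (42.0, -71.0)}


@dataclass
class InfraredB:
    def detect(self) -> dict:
        return {"source": "ir_b", "track": (42.1, -71.2)}


# Integration against the common interface is done once per component;
# after that, composing an architecture is selection, not engineering.
SENSORS: dict[str, Sensor] = {"sensor_a": RadarA(), "sensor_b": InfraredB()}


def compose(architecture: dict[str, str]) -> Sensor:
    """An operator declares 'here is the architecture I want' as data;
    a technician-level tool resolves it against the registry."""
    return SENSORS[architecture["sensor"]]


# Swapping sensor A for sensor B is now a one-word change to the declaration.
print(compose({"sensor": "sensor_a"}).detect())
print(compose({"sensor": "sensor_b"}).detect())
```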

While it’s not in our core mission, from a dual-use perspective there’s an analogy to the commercial world with the so-called Industrial Internet of Things or IIoT. Mosaic Warfare is helping the military deal with a very heterogeneous mix of different domains, different military Services, different ages of equipment, and legacy systems. That’s a very similar problem that the commercial IIoT world is dealing with in areas such as the smart grid, advanced manufacturing, distributed supply chains and advanced transportation.   

All of those market verticals trying to get to new levels of digitization and modernization through the IIoT have this problem of building systems of systems architectures without throwing away the legacy, in-place, really expensive electromechanical equipment that still works.  

We have billions and billions of dollars of sunk cost in legacy equipment. So, the IIoT world needs to be able to digitize and modernize over this very heterogeneous fabric of legacy stuff. And that's effectively exactly what we're trying to do for the DOD. Instead of thinking about a sensor on an aircraft and some military tactical radio, consider some kind of voltage transducer on a smart grid and the SCADA [Supervisory Control and Data Acquisition] system used to control it. That's a very analogous kind of problem.  

AI can help us compose these systems of systems architectures, reconfigure them, integrate new capabilities and very quickly modify their functions, modifying how things collaborate and act together. AI is wonderful in the sense of the options it provides. It’s also horrible in the complexity that it creates.   

In the ‘execution’ portion of our portfolio, AI is applied all up and down what I might call a cognitive or decision stack. At a very high level is macro-level decision-making. What are my objective functions? What assets do I want to use to actually compose into one of these systems of systems architectures? What specific tasking or control actuation do I want to give to each of those things?  

You just start moving down that stack of decision-making, until ultimately the decision-making turns more into actuation. For instance, if it's an aircraft, the actual flying and maneuvering of the aircraft and how you're going to move the control surfaces is a decision process. How I'm going to control a multifunction sensor that has a bunch of different modes to it is a decision process.  

So at each of those steps of decision, the AI ends up coming in as a decision aid to help the humans explore options.   

As you move down that stack, the AI is more directly controlling things. It might actually be flying and maneuvering the aircraft itself while the human is making the high-level battle management decisions. But at the end of the day, I would say for us, AI is first and foremost about managing that complexity.   
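[Ed. Note: The sketch below illustrates the decision stack Dr. Grayson describes: at the top, AI surfaces options for a human decision-maker; at the bottom, it controls actuation directly. The functions and layer logic are hypothetical, not DARPA software.]

```python
def propose_asset_mixes(objective: str) -> list[list[str]]:
    # Top of the stack: macro-level options a decision aid might surface
    # for a given objective function.
    return [["sensor_a", "jet_1"], ["sensor_b", "uav_2"]]


def human_select(options: list[list[str]]) -> list[str]:
    # The human stays in charge of the open-ended, high-level choice.
    print(f"Operator choosing among {len(options)} candidate mixes")
    return options[0]


def ai_actuate(asset: str) -> None:
    # Bottom of the stack: closed-world control handled directly by AI,
    # e.g. flying the aircraft or scheduling a multifunction sensor's modes.
    print(f"AI directly controlling {asset}")


mix = human_select(propose_asset_mixes("defend sector 4"))
for asset in mix:
    ai_actuate(asset)
```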

A number of ethical frameworks that attempt to guide the use of AI have been put forward in the past several years. Which ethical framework does DARPA operate within, and is there a way you can assess what impact it’s having?  

The terminology we use tends to be 'human-on-the-loop.' So very few of the AI applications that we're developing are designed to run fully autonomously. You can think of our AI mostly as falling into the category of decision aids.   

We often refer to a human-machine symbiosis, where it's a partnership and where the AI is doing the part of the job that fits well for a computer, while the human complements that with more cognitive, open-ended types of efforts. So the AI is offloading some of the cognitive burden from the human, letting the computer do what it does best, but the human is still directly involved as part of that team.   
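[Ed. Note: A minimal sketch of the human-on-the-loop pattern as described: routine, high-confidence recommendations proceed under supervision, while ambiguous ones are escalated to the human partner. The data types and confidence threshold are illustrative assumptions, not DARPA software.]

```python
from dataclasses import dataclass


@dataclass
class Recommendation:
    action: str
    confidence: float
    rationale: str  # a decision aid should explain, not just act


def supervise(rec: Recommendation, threshold: float = 0.6) -> None:
    # Human-on-the-loop: the AI proceeds on routine, high-confidence
    # recommendations while the human monitors and retains a veto;
    # ambiguous cases are escalated for an explicit human decision.
    if rec.confidence >= threshold:
        print(f"executing under supervision: {rec.action}")
    else:
        print(f"escalated to human: {rec.action} ({rec.rationale})")


supervise(Recommendation("retask sensor_b to sector 4", 0.91, "coverage gap"))
supervise(Recommendation("engage track 7", 0.42, "ambiguous classification"))
```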

Could you describe the Air Combat Evolution (ACE) program, which as I understand it uses warplane dogfighting, as in a video game, as an entry point into developing human-machine teaming?  

ACE has been a very exciting program. It started with a big event about a year ago called the AlphaDogfight Trials. If you go to YouTube, you will find a number of videos that capture the whole AlphaDogfight event. As I discussed, at the bottom of the AI decision stack are things that are really complicated. Flying a plane isn't easy. Flying tactical maneuvers in a dogfight is really hard, but it turns out that it's something that AI can do quite well. We wanted to start out the ACE program by first proving that hypothesis: that AI could do something like tactical maneuvers of an aircraft well.  

So AlphaDogfight was meant to stimulate the creative juices of the AI community and get them focused on that type of question. Could you actually do dogfighting with AI? The AlphaDogfight Trials event pitted eight teams against each other. And they were everything from literally four guys in a garage to major corporations and lots of teams in between.  

They competed through a couple of different trials leading up to the final event, which, again, was a little over a year ago. There they first competed against each other in a bracket-style competition. The winner went up against a human pilot flying on a flight simulator. And it was one of those John Henry moments, where the AI smoked the human. And it wasn't just any pilot. This guy was a Weapons School instructor pilot who was very skilled, very experienced, and didn't get a single victory, losing 5-0. [Ed. Note: In the 19th century folk tale, John Henry died defeating a steam-powered hammer during a competition to drill blast holes into a West Virginian mountainside.] 

It got people’s attention that AI could execute these very complex skills, but to have the AI beat the pilot is not really what the program was about. It’s not the AI versus the human, it’s the AI plus the human. I mentioned earlier this notion of human-machine symbiosis. 

Flying a plane involves very complex skills, very dynamic, split-second timing, superb hand-eye coordination, all that good stuff. It’s very cognitively taxing on a human. But when you think about it from an AI perspective, it’s a relatively easy problem for a computer. It’s what in AI is referred to as a closed-world problem. The boundary conditions are very well-defined. The objectives are very well-defined. It’s all bounded by the performance and the physics of flying an aircraft.  

That’s the kind of thing that AI can do really well. What the AI does not do well is higher-level strategy. That’s something where human intuition, human context, so-called open-world problems are where the human still very much dominates the AI. So by teaming the human pilot with the AI, we can free up that cognitive burden on the human pilot and let the AI do what it does best, and let the human do what he or she does best.  

That teaming ultimately leads to the end goal of that program, which is about building trust. The former program manager used an analogy of driving a car with adaptive cruise control. The first time you use it, and you see a sea of red lights in front of you, it’s hard to trust the radar to stop the car. You want to step on the brake.   

That notion of learning to trust the AI is what ACE is ultimately going to pursue. It's going to measure the trust of human pilots interacting with the AI and with an AI-flown aircraft. And then, slowly, as the human becomes acclimated to it, we will ratchet up the aggressiveness and how much authority the AI is given, ultimately trying to create a protocol for how to teach trust in AI.  
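[Ed. Note: Below is an illustrative sketch of the trust-ratcheting idea just described: as a measured trust score rises, the AI is granted more authority. The trust metric, thresholds, and authority levels are invented placeholders, not the ACE protocol.]

```python
AUTHORITY_LEVELS = [
    (0.0, "advisory only: AI suggests maneuvers, human flies"),
    (0.4, "AI flies benign maneuvers, human can take back the stick"),
    (0.7, "AI flies aggressive tactics while the human manages the battle"),
]


def authority_for(trust: float) -> str:
    # Grant the highest authority level whose threshold the measured
    # trust score has reached.
    granted = AUTHORITY_LEVELS[0][1]
    for threshold, level in AUTHORITY_LEVELS:
        if trust >= threshold:
            granted = level
    return granted


# Trust might be estimated from pilot behavior, e.g. how often the pilot
# overrides or disengages the AI; here we simply sweep the score upward.
for trust in (0.2, 0.5, 0.8):
    print(f"measured trust {trust:.1f} -> {authority_for(trust)}")
```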

Could you describe the Gamebreaker research effort and the goals of the AI within that?  

Gamebreaker represents an example of a broader set of initiatives we have at DARPA, spanning the entire agency, called AIEs, or AI Exploration initiatives. These are relatively quick-turn projects that are more fundamental research in nature, and small by DARPA standards. They're quick-turn both to get on contract and to execute. Gamebreaker was one of these AIEs.  

Gamebreaker was asking the question, thinking in terms of game theory: can we use AI to measure who has the advantage in some competition? And once we do that, can the AI also tell us what factors are causing that advantage? These could be very dynamic and situation-specific. As in a video game, which player has more strength, more hit points, more energy—all things that would be intrinsic advantages—but it also might be situational, as in who has the proverbial high ground at any given moment in time. 

And some of it could be soft factors, like who just has the most skill. Gamebreaker is using AI to measure those types of factors in a video game type of simulation engine. Right now, we're looking at what you could extract from these tools and how you could possibly turn them into strategy tools.  
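[Ed. Note: In miniature, the Gamebreaker question might look like the sketch below: score each player's advantage from intrinsic and situational factors, then report which factor drives it. The feature names and weights are invented for illustration; in the actual effort such weightings would presumably be learned from many game outcomes rather than set by hand.]

```python
FEATURES = {
    "hit_points": 0.4,   # intrinsic strength
    "energy": 0.3,       # intrinsic resource
    "high_ground": 0.3,  # situational; can flip from moment to moment
}


def advantage(player: dict[str, float]) -> float:
    # Weighted sum of normalized factors; the weights stand in for what
    # an AI would learn from simulated game outcomes.
    return sum(weight * player[name] for name, weight in FEATURES.items())


def dominant_factor(player: dict[str, float]) -> str:
    # Which factor contributes most to this player's advantage right now?
    return max(FEATURES, key=lambda name: FEATURES[name] * player[name])


red = {"hit_points": 0.9, "energy": 0.5, "high_ground": 0.0}
blue = {"hit_points": 0.6, "energy": 0.6, "high_ground": 1.0}

for name, player in (("red", red), ("blue", blue)):
    print(f"{name}: advantage={advantage(player):.2f}, "
          f"driven by {dominant_factor(player)}")
```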

If you can measure who has advantage or what factors are leading to an advantage, could I use that for strategic planning? Anywhere from planning a mission or planning for operations through more strategic portfolio management? What’s the right mix of stuff that I want to buy to give us the greatest advantage? Such a strategic planning tool could be as important in the business world as it is for the DOD.  

In the Strategic Technology Office, are you pursuing any non-traditional partnerships with industry, especially with the many startups pursuing AI in their businesses?  

We have a huge interest in non-traditional partnerships. I will say that it’s sometimes challenging creating those opportunities. The AIEs have definitely attracted a mix of non-traditional players. We’re also pursuing “other transactions,” which are contract vehicles for prototyping within the government. These are quick-turn contracts that don’t necessarily carry all the [procurement] burdens of a traditional federal contract. And I’m very interested in figuring out a way to work more with venture-funded types of startups, or people who otherwise have their own capital.  

DARPA is not a customer for finished products. You could think of us more on the investor side of the table, except we’re investing in technology. We’re not investing in companies; we’re investing in technology and capability. So the trick in working these non-traditional partnerships is finding where there’s that mutual interest, where a startup and its investors already have a go-to-market strategy and a product development roadmap.  

We work with those kinds of organizations and say, okay, where is there something from a technology perspective that is leap-ahead enough for us to consider it 'DARPA hard,' yet close enough to the company and their investors' current strategy that it's not going to derail or distract them too much? But it might, in a positive sense, help de-risk or accelerate some of the things they're doing. And another nice thing about partnering with DARPA is that our investment, if you like, comes in as non-dilutive revenue, as opposed to the equity position it would be if we were a traditional investor.  

Regarding recruitment, are you able to find the people you need to staff what you’re trying to do with AI at DARPA? 

For the most part, yes. Recruitment is always a challenge. The business model at DARPA is that all the technical people, from the program managers up through the agency director, including folks like myself, are all term appointments. So everyone’s got a clock and the typical tenure is about four years.  

That means, if you do the math, that we see roughly 25% annual attrition (a four-year tenure means about a quarter of the staff turning over each year), which by any company's standards would be incredibly terrifying. So in general, from the pace of that natural attrition, we have a big recruiting challenge. But because of DARPA's reputation and cachet, we have typically not had a huge issue with recruiting really top-notch program managers (PMs) from what I'll call the traditional defense sector, including contractors and government labs.  

While we do have a great stable of PMs, for AI and some other areas of our work, the government isn’t necessarily out in front. A lot of it’s being driven by the commercial industry, and commercial applications. When we’re competing against all the commercial companies out there [for talent], that becomes more of a challenge.   

Even reaching out to that commercial and startup world, we have had PMs come to DARPA who have already been very successful entrepreneurs. They may have created a company or two, and were able to do their exit strategy, and now they want to essentially give back. They see this as their patriotic calling to come and apply their skill to DOD.  

One of the areas that I do keep my eyes on is the younger people, not so much for DARPA, but for the broader defense industrial base. I do worry about us recruiting AI talent and some of this other high-skill talent. Is the top AI talent coming straight out of college looking at the defense sector for possible jobs?  

I worry that that's often not the case. I've had conversations with some young people, and to sell it, I push the idea that the DOD is important and that they ought to consider it. Sometimes I get a negative response. I wonder if it's that they are not sure they like the idea of weapons systems and defense, but by and large, that's not the issue. Then I wonder if it's about money, but the defense industry pays pretty well.   

The thing that really surprises me is that I have spoken to students who may think, 'Oh, the DOD isn't cool.' My message is that if you actually saw some of the work going on at DARPA and compared it with commercial technology, we're incredibly cool! From a recruiting perspective, my message to people—again, not necessarily about coming to DARPA, because by the time they're mid-career and successful, they know that DARPA's pretty cool already—but to those young people just getting into the field and looking at possible careers in the defense sector, is: don't count it out. A lot of incredibly cool stuff is going on, and you can put your technical skills to good use. And it's going to be personally very rewarding as well.  

Regarding young people, do you have any advice or suggestions for students, either high school or college, who might be thinking about working in AI? How should they focus? What should they study? 

The fascinating thing about AI is that it can cover the whole range of skill sets. On the one hand, there's still a lot of fundamental research to be done, and that requires some of the best mathematicians and computer science-minded people to invent the new classes of AI, or the new advances in AI.  

At the other extreme, there are AI tools where you, frankly, don’t need to be all that technical to do AI. A great example is a market vertical called RPA or Robotic Process Automation. I’ve poked into the RPA world. People practicing RPA tend to be more management consultants than technologists, even though they’re doing AI. The important thing in RPA is that you can understand what a customer situation is.  

What's their business model? Where are their costs? Where are their bottlenecks? What kind of data do they have? The AI might be some shrink-wrapped commercial tool that they apply to some data, and it spits out some apps. And then the back end becomes important. How do you incorporate that new automation into their workflows? What has to happen in terms of change management, and even cultural management, to get the most value out of that AI? The technology by itself doesn't help. You need changes in process and in organizational and institutional structures, in equal parts with the technology. I think there's a lot of room in there for people who aren't deeply technical but understand the value of using automation and AI as a tool. And then, of course, there is everything in between. 

Is there anything you would like to add or emphasize?  

I want to build a little off of that very last point I was making with respect to what young people should think about, and talk about this notion of the synergy between technology and institutional or process change. In general, the organizations that innovate the best look at both of those things in tandem; they don't rely just on technology.  

They also look at how a business process or a business model or some operational process can change as well. I sometimes worry in the AI world, particularly in the government sector, that because it's technical in nature, people think of it as a technical widget: I'm going to give you a requirements gap, you do some of that AI magic, and you give me some AI. And AI doesn't work that way.  

There is very much a human element and a manual implementation element to it. That’s partly a good news story. It means that I don’t necessarily have to wait five years to get my next capability. As an operator today, I can get AI to help me with today’s problems. The downside to that is there is a manual process to tune the AI, to manage the data, to build those tailored applications.  

The notion of thinking of 'AI as a Service,' as opposed to AI as a technical widget, is something still very foreign within government, or at least within many parts of government. In my travels, I've seen that it is an issue in industry today as well. 

So I think there’s a common theme through a lot of this, whether it’s our ethics framework, what we’re doing with ACE or how we look at the implication and the implementation and the business models. AI is really cool computer stuff and automation, but you can’t take the human element out of it. And I think that’s what I’ll leave you with as my parting thought. 

Learn more at the Strategic Technology Office of DARPA.
