Autonomous aircraft (and land/sea vehicles) are already flying and on the cusp of operations. But as the military forges ahead, those who will team with these systems in the future and those building them today lack a common way of expressing their needs and capabilities. A proposed autonomy framework could change that, avoiding confusion and cost in the process.
Heather Penney is one of the minds behind what’s called the “Two View Autonomy Framework,” which aims to establish a common conceptual structure within which warfighters and engineers can discuss the development of autonomy in unmanned aircraft.
“What got me thinking about this was that manned-unmanned teaming – human flight leads working with unmanned aircraft – is going to be important to the future design of the Air Force,” she explains.
Penney is a former F-16 pilot and currently a Senior Resident Fellow at the Washington, DC-based Mitchell Institute for Aerospace Studies. The paper she co-authored with Major Christopher Olsen (USAF) recognizes that without a shared understanding of what mix of autonomy and human control unmanned aircraft will need in the future, their requirements and development could be clouded.
In crafting their new autonomy framework, Penney and her colleagues kept a vital point in mind. The relatively small, budget-constrained Air Force of the future will likely rely on unmanned aircraft to make up for the quantity it lacks in trained human pilots and manned aircraft.
“We have a pilot gap,” Penney affirms. “We’re going to be potentially operating in highly contested areas where we have to accept that attrition is a real risk. We have shed so much training and enterprise capacity in the belief that we can get smaller and better, that we no longer have the ability to train pilots, absorb and experience them at the rate that we need to in a peer conflict.”
Whether or not one accepts the argument that unmanned teaming aircraft will be sufficiently combat effective, a common framework for developing them could bring their cost down.
“There’s this belief that [unmanned] teammates will be more affordable than manned aircraft,” Penney says. “I think they’ll be more affordable but I don’t think they’ll be cheap and I’m not talking about attritable or expendable aircraft.”
The framework that the Mitchell Institute researchers are proposing seeks to establish common definitions by laying out five tiers of aircraft autonomy in a way that’s reminiscent of the Society of Automotive Engineers’ (SAE) levels of driving automation. A key difference is that the SAE levels were defined and expressed by engineers, not users. Penney’s construct makes the user, the warfighter, an equal partner in defining levels of autonomy.
The “Two View” framework combines the five tiers of autonomy with three categories of tasks that a pilot or flight lead must execute to perform a combat sortie. Each task might (or might not) be realized with varying levels of autonomy. Together, the tasks and tiers provide a common point of reference for what the warfighter requires and what engineers can provide in return.
“By connecting the two views, within the [autonomy level] framework, that gives them a structure to have conversations,” Penney explains.
The Warfighter view of an autonomous aircraft is modeled on combat pilots’ cognitive tasking, what they actually do in the battlespace, Penney explains. A combat sortie is broken down into core flying responsibilities (aviate and navigate), mission responsibilities (the tasks a pilot must undertake to execute the mission), and teaming tasks (how a pilot collaborates with the unmanned members of their flight or within the broader mission).
An unmanned aircraft might take on these tasks with varying levels of autonomous capability, from Level 1, defined as largely deterministic programming requiring significant direction and oversight from a human flight lead or commander, to full independence at Level 5.
“Level 3 and Level 4 are where we place the break between deterministic programming and larger elements of machine learning,” Penney says. The lower three levels are akin to automation, while Levels 4 and 5 indicate true autonomy, wherein a vehicle has more self-direction and the ability to adapt to changing circumstances.
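As a rough illustration only, and not something drawn from the Mitchell Institute paper, the relationship between the two views could be sketched as a mapping from the three warfighter task categories to the autonomy tier an engineering team commits to for each; all class, field, and function names below are hypothetical.

```python
from dataclasses import dataclass
from enum import IntEnum


class AutonomyTier(IntEnum):
    """Five notional tiers, from deterministic automation to full independence."""
    TIER_1 = 1  # largely deterministic programming; significant human direction and oversight
    TIER_2 = 2
    TIER_3 = 3  # top of the "automation-like" levels
    TIER_4 = 4  # larger elements of machine learning; genuine self-direction begins here
    TIER_5 = 5  # full independence


@dataclass
class SortieAutonomyProfile:
    """Hypothetical mapping from the three warfighter task categories to autonomy tiers."""
    core_flying: AutonomyTier  # aviate and navigate
    mission: AutonomyTier      # tasks required to execute the mission
    teaming: AutonomyTier      # collaboration with the human flight lead

    def uses_machine_learning(self) -> bool:
        # Per the framework as described, the break between deterministic
        # programming and machine learning falls between Tiers 3 and 4.
        return max(self.core_flying, self.mission, self.teaming) >= AutonomyTier.TIER_4


# Example: a design that flies and navigates largely on its own (Tier 4)
# but needs close human direction for mission and teaming tasks (Tier 2).
wingman = SortieAutonomyProfile(
    core_flying=AutonomyTier.TIER_4,
    mission=AutonomyTier.TIER_2,
    teaming=AutonomyTier.TIER_2,
)
print(wingman.uses_machine_learning())  # True
```

In a sketch like this, a single design can sit at different tiers for different task categories, which is the kind of per-task conversation the framework is meant to enable between warfighters and engineers.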
According to Penney, the framework can convey what warfighters expect and whether technology can accommodate those expectations at a particular level. Users and developers can mutually understand what can be delivered and what mix of predictable, achievable autonomy might speed a capability to the field. Basically, both sides get on the same page.
The researchers purposely left the framework general and non-prescriptive so as not to box in technological or process approaches to autonomous tactical aircraft. They also don’t want the Two View Autonomy Framework to become a standards or contractual template. “Some framework and structure is better than no framework or structure,” Penney adds.
Up to now, the lack of a common framework has led policymakers to arrive at long- or even near-term decisions that “may not be based on what the technology is capable of doing or what types of technology they should be able to anticipate,” Penney asserts. “It prevents the Hill from being able to fully understand what they’re buying.”
But they have already bought autonomy, as the Navy’s MQ-25 program demonstrates. The Mitchell Institute authors have not run their framework by the Navy-Boeing MQ-25 team or Boeing’s Airpower Teaming System (Loyal Wingman) program in Australia, partly because those development efforts are already well underway. However, the framework has been reviewed by Air Force headquarters staff, which Penney says also recognizes the need for a common reference.
“We believe that there are several [Air Force organizations] that should adopt this because their interests are at stake,” she adds. The list would include Air Force Operations (A3), Air Combat Command, Global Strike Command, and the Air Force Research Laboratory. With these organizations on board, the thinking goes, the framework would naturally propagate to industry.
Penney calls the Two View framework “step one” in helping to understand the potential capabilities and limitations of unmanned systems across all domains, a concept that mirrors the fluid nature of flying in combat.
“There’s very complex programming that humans are constantly going through as pilots in combat. A pilot is always limited by what’s happened in the past. How much fuel have I spent? How far away am I from the target or from home base? That affects potential future tradeoffs. That’s why we acknowledge in the [research] paper that these categories don’t have hard walls.”