SF.20.22.B10118: Powergrid Synchronization Backbones: Master Stability Functions for Real World Generator Models
The role of the Essential Synchronization Backbone (ESB) in powergrid systems is being investigated as part of an AFOSR LRIR grant, with an eye toward hardening existing powergrids and building robust microgrids. Current methods for identifying or approximating an ESB of a synchronizable system rely heavily on the Master Stability Function (MSF) formalism and on analysis of the network topology. Dynamical system models for generators range from simplistic and unrealistic to highly detailed but perhaps prohibitively complex. We seek to continue previous efforts in the literature toward identifying the model with the ideal balance of complexity and utility for synchronization analysis. This project will explore the application of the MSF formalism to available powergrid models in order to bridge the gap between the efforts of the electrical engineering community, which tend to focus on power flow equations, and those of the complex networks community, which focus primarily on identical simple oscillators. This work will complement ongoing efforts to generalize the original definition of the ESB to weighted and directed networks and to provide algorithmic approaches to powergrid design, especially as many systems transform from centralized industrial generation networks toward a more decentralized, prosumer-based energy economy with less well-defined system inertia.
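As a minimal illustration of the generator dynamics this topic concerns (a toy sketch, not the MSF computation itself), a second-order Kuramoto network, i.e., the classic "swing equation" model, can be simulated in a few lines. All parameters and the four-node ring topology below are illustrative assumptions, not values from any real grid model.

```python
import math

# Toy second-order Kuramoto ("swing equation") network: two generators
# (P > 0) and two loads (P < 0) on a ring. Semi-implicit Euler integration.
def simulate_swing(adjacency, power, damping=0.5, coupling=2.0,
                   dt=0.01, steps=20000):
    n = len(adjacency)
    theta = [0.1 * i for i in range(n)]   # phase angles
    omega = [0.0] * n                     # frequency deviations
    for _ in range(steps):
        acc = []
        for i in range(n):
            pull = sum(adjacency[i][j] * math.sin(theta[j] - theta[i])
                       for j in range(n))
            acc.append(power[i] - damping * omega[i] + coupling * pull)
        omega = [w + dt * a for w, a in zip(omega, acc)]
        theta = [t + dt * w for t, w in zip(theta, omega)]
    return theta, omega

ring = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
theta, omega = simulate_swing(ring, power=[1.0, -1.0, 1.0, -1.0])
spread = max(omega) - min(omega)
print(f"frequency spread after transient: {spread:.2e}")
```

With balanced power injections and sufficient coupling, the frequency deviations collapse to a common value, which is the synchronized state whose stability the MSF formalism characterizes.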
SF.20.22.B10117: Geometric Partition Information Theory: a Comparative Analysis
The basic metrics of information theory, including various divergence measures and conditional mutual information estimates, are powerful tools for analyzing interacting systems as processes that exchange and transfer information. However, all such tools are fundamentally based on entropy estimates that stem from Shannon’s discrete theory. While these tools remain optimal for phenomena occurring on discrete symbolic representations, recent advances in entropy estimation have shown that, for dynamical systems on bounded but continuous state spaces, Shannon’s approach falls short of optimality. The newly defined concept of Geometric Partition Entropy provides a foundation for adapting Shannon’s work into a more rigorous set of analysis tools for continuous state spaces. In this summer project, we seek to revisit and reformulate the basic metrics in this alternate context using the newly generalized geometric partition entropy in high dimensions. The applicant will work with the government mentor to formalize the new metrics and provide a comprehensive comparison of the utility of this new information theory for continuous state spaces against current state-of-the-art methods.
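To fix ideas, here is a simplified one-dimensional sketch of the quantile-partition idea behind such estimators: partition the sample at its quantiles, then weight each bin by its geometric width rather than its count. This is an illustrative stand-in under that assumption, not the published estimator's exact formulation.

```python
import math
import random

# Quantile ("geometric") partition entropy sketch in 1-D: k bins with equal
# counts, weighted by normalized bin width. Uniform data gives near-equal
# widths, hence entropy near log(k); peaked data gives lower entropy.
def geometric_partition_entropy(samples, k=16):
    xs = sorted(samples)
    n = len(xs)
    edges = [xs[0]] + [xs[(i * n) // k] for i in range(1, k)] + [xs[-1]]
    total = xs[-1] - xs[0]
    widths = [max(edges[i + 1] - edges[i], 1e-12) for i in range(k)]
    weights = [w / total for w in widths]   # weights sum to 1 by construction
    return -sum(w * math.log(w) for w in weights)

random.seed(0)
uniform = [random.random() for _ in range(4096)]
gaussian = [random.gauss(0, 1) for _ in range(4096)]
print(geometric_partition_entropy(uniform), math.log(16))
print(geometric_partition_entropy(gaussian))
```

The uniform sample scores close to the maximum log(16), while the Gaussian sample scores lower because its quantile bins have very unequal widths.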
SF.20.22.B10116: Identification of Users in Chat using Keystroke Dynamics
Traditional username-and-password techniques, or Common Access Card (CAC) login, do not continually monitor usage behavior over time. Keystroke Dynamics is a technique that measures timing information for keys pressed and released on a computer keyboard and identifies unique signatures in the way an individual types. The current practice of Keystroke Dynamics, also known as Keystroke Biometrics, is to understand this rhythm in order to distinguish between users for authentication, even after a successful login. Current enrollment techniques require users to establish a consistent baseline, traditionally accomplished by typing common words multiple times. While effective, this process is often rejected by users who do not see the value in an extensive enrollment process. Enhanced enrollment would identify common words automatically, by frequency, without intrusion, over the course of an enrollment period, rather than specifically requiring users to type set words or phrases. Once a keystroke dynamic baseline is established, detection algorithms will run in the background to monitor and observe patterns.
This assignment will focus heavily on enhancing enrollment techniques by fusing additional methods and applying them to chat. Our specific goal is to preprocess the data, engineer features, and ultimately classify users for authentication. The approach targets a person’s behavior and then authenticates using machine learning algorithms based on data from a short period of time.
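The standard features in this area are dwell time (how long a key is held) and flight time (the gap between releasing one key and pressing the next). A minimal sketch of that feature-extraction step is below; the event format and timing values are illustrative assumptions.

```python
# Keystroke-dynamics feature extraction sketch.
# Each event: (key, press_time_ms, release_time_ms), in typing order.
def keystroke_features(events):
    dwell = [r - p for _, p, r in events]                # key hold time
    flight = [events[i + 1][1] - events[i][2]            # release -> next press
              for i in range(len(events) - 1)]
    return {"mean_dwell": sum(dwell) / len(dwell),
            "mean_flight": sum(flight) / len(flight)}

sample = [("c", 0, 80), ("a", 120, 190), ("t", 250, 330)]
feats = keystroke_features(sample)
print(feats)   # mean_dwell = (80+70+80)/3, mean_flight = (40+60)/2
```

In practice, per-digraph statistics of these timings over frequently typed words would form the enrollment baseline that downstream classifiers consume.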
SF.20.22.B10107: Towards Advances in Graph Analytics
Many systems and processes can be represented as graphs. This topic is interested in graph analytic research aimed at uncovering and exploiting inherent network structures, deriving graph representations from disparate data sources for applied problems, and scaling graph analytic techniques to extremely large graphs. Areas of specific interest include, but are not limited to, graph neural networks, embedding techniques, graph representations, graph feature engineering, link prediction, and node classification. Proposers are strongly encouraged to contact the points of contact for this research topic to discuss possible proposals.
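As a concrete instance of one named area, link prediction, the classic common-neighbors heuristic scores each non-edge by how many neighbors its endpoints share. The toy edge list below is an illustrative assumption.

```python
from itertools import combinations

# Common-neighbor link-prediction sketch: score every non-edge by the
# number of shared neighbors; higher scores suggest likelier future links.
def common_neighbor_scores(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    scores = {}
    for u, v in combinations(sorted(adj), 2):
        if v not in adj[u]:                      # only score non-edges
            scores[(u, v)] = len(adj[u] & adj[v])
    return scores

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d")]
scores = common_neighbor_scores(edges)
best = max(scores, key=scores.get)
print(best, scores[best])   # ("a", "d") share neighbors b and c
```

Learned embedding methods generalize exactly this intuition, replacing the hand-crafted score with a similarity in a learned vector space.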
SF.20.22.B10106: Autonomous Model Building for Conceptual Spaces
Conceptual Spaces are a new form of cognitive model that seeks to represent how the human mind represents concepts. Conceptual Spaces provide a geometrical representation of concepts, allowing a model to be built that links inputs and outputs. They are advantageous over other machine learning approaches in that they are not a “black box,” and the underlying model can be manipulated to fix underlying issues. Conceptual Spaces were originally developed as a psychological model with little to no underlying mathematical framework; mathematical models to represent them were developed later. However, current techniques for building these models involve intensive human interaction, which can be tedious and is subject to human biases. The research goal is to implement machine learning and/or other autonomous approaches for the automated construction and implementation of Conceptual Spaces.
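The geometric idea can be sketched directly: concepts are convex regions around prototype points in a space of quality dimensions, so classification reduces to nearest-prototype lookup (a Voronoi tessellation). The prototype coordinates and dimensions below are illustrative assumptions.

```python
import math

# Conceptual-space sketch: two quality dimensions (e.g., hue and size),
# with each concept represented by a prototype point. An observation is
# classified into the region of its nearest prototype.
prototypes = {"apple": (0.9, 0.3), "banana": (0.2, 0.5), "melon": (0.5, 0.9)}

def classify(point):
    return min(prototypes, key=lambda c: math.dist(point, prototypes[c]))

print(classify((0.8, 0.35)))   # lands in the "apple" region
```

Because the model is just labeled geometry, an incorrect boundary can be repaired by moving a prototype, which is the kind of direct manipulability the paragraph above contrasts with black-box learners.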
SF.20.22.B10099: Explainable Reinforcement Learning (XRL)
Demand for explainable Reinforcement Learning (RL) has increased as RL has become a powerful and ubiquitous tool for solving complex problems. However, RL exhibits a problematic characteristic: an execution-transparency trade-off. The more complicated the inner workings of a model, the less clear it is how its predictions and decisions are made. Since an RL model learns autonomously, the underlying reason for each decision becomes imperative for building trust between agent and user, trust that rests on the success or failure of the model. The problem with current XRL is that most methods do not design an inherently simple RL model; instead, they imitate and simplify a complex model, which is cumbersome. Furthermore, XRL methods often ignore the human side of the field, such as behavioral and cognitive science or philosophy, by failing to take them into account. Therefore, we seek novel projects to address the following issues:
1) Provide experimental design to explain end goals by developing world models, counterfactuals (what-if) to build trust between an agent and a user, and adversarial explanations to provide validity of the surroundings.
2) Develop a novel algorithm that can accurately explain why each decision/prediction is made by the model.
SF.20.22.B10098: Efficient Transfer Learning in Reinforcement Learning (RL) Domains
Reinforcement learning (RL) models have achieved impressive feats in simulation (e.g., in low-fidelity physics-based simulators), but transferring them to high-fidelity simulators or real-world scenarios remains a challenge. Training an RL-based model requires enough samples to produce impressive results. This poses two challenges when transferring to high-fidelity simulators or real-world scenarios: a) generating samples for every run of an RL-based model is computationally expensive, and learned policies (i.e., maps from perceived states to the actions to be taken in those states) can fail at test time; b) it is impractical to train separate policies to accommodate every environment an agent may encounter in a high-fidelity simulator or the real world. As a result, under this topic we seek novel projects to address the following issues:
1) Novel algorithm to perform transfer learning efficiently from low-fidelity to high-fidelity physics-based simulator or the real world
2) Novel experimental design for effective transfer learning, measuring jumpstart, asymptotic performance, total reward, transfer ratio, and time to threshold
3) How to fuse uncertainty-aware neural network models with sampling-based uncertainty propagation in a systematic way
4) How to effectively perform transfer learning between a low fidelity to high fidelity physics-based simulator with minimally similar observational spaces and dynamic transitions
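The evaluation metrics named in item 2 can all be computed from two per-episode reward curves, one for an agent trained from scratch and one for a transferred agent. A minimal sketch follows; the curves and threshold are illustrative assumptions.

```python
# Transfer-learning metrics from two learning curves (lists of per-episode
# reward): jumpstart, asymptotic gain, transfer ratio, time to threshold.
def transfer_metrics(scratch, transferred, threshold):
    def time_to_threshold(curve):
        return next((i for i, r in enumerate(curve) if r >= threshold), None)
    tail = max(1, len(scratch) // 5)   # average the last 20% as the asymptote
    return {
        "jumpstart": transferred[0] - scratch[0],
        "asymptotic_gain": (sum(transferred[-tail:]) / tail
                            - sum(scratch[-tail:]) / tail),
        "transfer_ratio": sum(transferred) / sum(scratch),
        "time_to_threshold_scratch": time_to_threshold(scratch),
        "time_to_threshold_transfer": time_to_threshold(transferred),
    }

scratch     = [0.0, 0.1, 0.3, 0.5, 0.7, 0.8]
transferred = [0.4, 0.5, 0.7, 0.8, 0.9, 0.9]
m = transfer_metrics(scratch, transferred, threshold=0.8)
print(m)
```

Jumpstart is the initial-performance gap, transfer ratio compares total accumulated reward, and time to threshold counts episodes until the curve first reaches the target.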
SF.20.22.B10096: Modeling Mission Impact in Systems-of-Systems: A Dynamical Approach
Dependency relationships between systems are critical in mission impact analysis for networked systems-of-systems (SOS); several models have been proposed to capture, quantify, and analyze dependency relationships between systems from both the administrator’s and the user’s perspectives. However, few efforts have targeted models that capture the dynamic behavior of dependencies between system components. This research topic will explore:
• Rigorous mathematical models for the analysis and simulation of the interdependencies in networks of system-of-systems.
• Models based on actual measurement of time-variant dependency variables.
• Models for the analysis and simulation of cascading failures in networks with switching topology.
• Optimal control on networks of SOS.
Some research areas of interest in this topic include, but are not limited to, dynamical systems, dynamic graphs, networks of multi-agent systems, and optimal control.
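The cascading-failure bullet above can be sketched with the simplest load-redistribution model: when a component fails, its load is shed onto surviving neighbors, any of which may then exceed capacity and fail in turn. The topology, loads, and capacities below are illustrative assumptions.

```python
# Load-redistribution cascading-failure sketch on a small dependency graph.
def cascade(adj, load, capacity, initial_failure):
    failed = {initial_failure}
    frontier = [initial_failure]
    while frontier:
        nxt = []
        for node in frontier:
            alive = [n for n in adj[node] if n not in failed]
            for n in alive:
                load[n] += load[node] / len(alive)   # shed load to survivors
                if load[n] > capacity[n] and n not in failed:
                    failed.add(n)
                    nxt.append(n)
        frontier = nxt
    return failed

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
load = {i: 1.0 for i in adj}
capacity = {0: 2.0, 1: 1.4, 2: 1.4, 3: 3.0}
result = sorted(cascade(adj, load, capacity, initial_failure=1))
print(result)   # node 1's failure overloads 2, which then overloads 0
```

Even this toy model shows why static dependency maps miss mission impact: the final failed set depends on the dynamics of redistribution, not just on reachability.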
SF.20.22.B10091: Multi-sensor and Multi-modal Detection, Estimation and Characterization
Modern, contested Air Force mission spaces are varied and complex, involving many sensing modalities. Mission success within these spaces is equally critical to the engaged Warfighter and to Command, Control, Communications, Computer, and Intelligence (C4I) personnel and systems, both of which leverage actionable information from these heterogeneous sensing landscapes. Interfering sources, low-probability-of-intercept signals, and dynamic scenes all collude to degrade the Air Force’s ability to derive accurate, relevant situational awareness in a timely fashion. Furthermore, legacy sensing systems, which typically provide stove-piped, human-interpretable intelligence with potentially missing information, would likely be more valuable if considered collectively with other sensing data located further up the sensor processing pipeline (i.e., upstream data fusion). The fundamental research objectives under this topic include areas such as multi-modal target association/fusion; multi-sensor/modal detection, tracking, and characterization; multi-sensor selection; and parameter optimization and location for improved sensor fusion performance. We are interested in advancements within these areas from a variety of methods: Bayesian approaches, topological data analysis, machine learning, information theory, and other novel mathematical approaches, to name a few. Trade-offs include computational complexity, communication requirements, and the balancing of smart computational nodes versus centralized/distributed processing. The overall research goal is to leverage all available signals and data from the sensed environments and domains, ultimately generating a cohesive situational awareness of the complete mission space.
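The most basic Bayesian fusion step the methods above build on is combining two independent Gaussian estimates of the same quantity by inverse-variance weighting. A minimal sketch follows; the sensor values (a precise range and a noisier one for the same target) are illustrative assumptions.

```python
# Inverse-variance (Bayesian) fusion of two independent Gaussian estimates.
def fuse(mu1, var1, mu2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)   # precision-weighted mean
    var = 1.0 / (w1 + w2)                    # fused variance shrinks
    return mu, var

# A precise sensor reads 100.0 (var 1.0); a noisier one reads 104.0 (var 4.0).
mu, var = fuse(100.0, 1.0, 104.0, 4.0)
print(mu, var)
```

The fused estimate lands closer to the more precise sensor, and its variance is smaller than either input's, which is the quantitative payoff that motivates multi-sensor fusion in the first place.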
SF.20.22.B10090: Robust Modular Neural Network for Edge Computing
As a powerful component of future computing systems, Deep Neural Networks (DNNs) are the next generation of Artificial Intelligence (AI), intently emulating the neural structure and operation of the biological nervous system and representing the integration of neuroscience, computational architecture, circuitry, and algorithms. Overall, however, DNN architecture design remains limited in the following respects: (1) inefficient processing pipelines for large-scale network structures; (2) costly training as data density demands increase; (3) improper network behavior and diminished accuracy on unforeseen data. The scope of this effort is to formulate fundamental research that advances the understanding of neuroscience, facilitates the development of neuromorphic computing hardware and algorithms, and accelerates neural operation to extreme efficiency. Specifically, this research focuses on developing a working prototype of a modular neural network on embedded development platforms to support transfer learning and associative memory techniques, reduce costly training, and enable confident reuse for discovering unknown objects. Additional interest lies in exploring robotic applications in which multimodal sensory information is processed by the modular neural network.
SF.20.22.B10089: Hyperdimensional Computing (HDC)/ Vector Symbolic Architectures (VSA)
Hyperdimensional computing (HDC) or vector symbolic architectures (VSA) is an algebra for performing machine learning (ML) via computing on high-dimensional symbols. In practice, these symbols are expressed as hypervectors, vectors >1,000 elements long. The value of HDC for ML is not to replace artificial neural networks (ANN) but to establish a uniform information representation and formal algebra, akin to {0, 1} and Boolean algebra for digital logic, to solve larger ML problems than possible with a single ANN. Such an approach is expected to produce design rules for combining groups of disparate ANNs analogous to digital circuit design. Work under this topic includes a) training and integration of diverse ANN outputs consistent with HDC, e.g., sensor fusion; b) online and collaborative learning among distributed platforms, e.g., robotic swarms and nanosats; and c) hardware demonstrations of HDC algorithms on traditional (e.g., FPGA) and/or novel neuromorphic computing hardware.
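The core algebra can be sketched with binary hypervectors, where binding is XOR, bundling is a bitwise majority vote, and similarity is normalized Hamming agreement. The dimension and toy role/filler symbols below are illustrative assumptions.

```python
import random

# HDC/VSA sketch on binary hypervectors: bind = XOR, bundle = majority,
# similarity = fraction of agreeing bits (0.5 ~ unrelated, 1.0 ~ identical).
D = 10000
random.seed(1)
def hv():        return [random.randint(0, 1) for _ in range(D)]
def bind(a, b):  return [x ^ y for x, y in zip(a, b)]
def bundle(*vs): return [1 if sum(col) * 2 > len(vs) else 0 for col in zip(*vs)]
def sim(a, b):   return 1 - sum(x ^ y for x, y in zip(a, b)) / D

color, shape = hv(), hv()        # role vectors
red, round_ = hv(), hv()         # filler vectors
# Store role-filler pairs in one record (odd vector count avoids ties).
record = bundle(bind(color, red), bind(shape, round_), hv())
# Unbinding with the "color" role recovers something close to "red".
query = bind(record, color)
print(sim(query, red), sim(query, round_))
```

Because XOR is its own inverse, querying the bundled record with a role vector yields a noisy copy of the corresponding filler, recognizable by its elevated similarity; unrelated symbols stay near 0.5.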
SF.20.22.B10088: Distributed Optimization and Learning with Limited Information
Modern optimization and learning problems often involve very high-dimensional states, especially when deep neural networks are involved. In the corresponding distributed optimization and learning algorithms, the local information shared among neighboring agents is thus frequently high-dimensional, which leads to expensive communication costs and vulnerable information transmissions. This research topic will develop distributed optimization and learning algorithms with limited information transfer between agents for the purposes of
• Communication efficiency,
• Privacy preservation,
• Information security.
Some distributed problems of interest in this topic include, but are not limited to, convex and nonconvex optimization, online optimization, reinforcement learning, and neural network optimization.
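One standard communication-reduction primitive in this setting is top-k gradient sparsification with local error feedback: each agent transmits only its largest-magnitude gradient coordinates and carries the dropped remainder into the next round. The gradient values and k below are illustrative assumptions.

```python
# Top-k gradient sparsification with error feedback: send only the k
# largest-magnitude coordinates; accumulate what was dropped locally.
def sparsify_topk(grad, k, residual):
    full = [g + r for g, r in zip(grad, residual)]          # error feedback
    keep = set(sorted(range(len(full)),
                      key=lambda i: abs(full[i]), reverse=True)[:k])
    sent = [full[i] if i in keep else 0.0 for i in range(len(full))]
    new_residual = [f - s for f, s in zip(full, sent)]      # dropped mass
    return sent, new_residual

grad = [0.9, -0.05, 0.02, -1.2, 0.1]
sent, residual = sparsify_topk(grad, k=2, residual=[0.0] * 5)
print(sent)       # only the two largest-magnitude coordinates are transmitted
print(residual)   # the rest is carried into the next round
```

The residual term is what makes such schemes converge in practice: no gradient information is discarded permanently, only delayed.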
SF.20.22.B10087: Resilient Distributed Optimization and Learning
In many military applications, large volumes of heterogeneous streaming data must be collected by a team of autonomous agents, which then collaboratively explore a complex and cluttered environment to accomplish various types of missions, including decision making, optimization, and learning. To perform these operations successfully and reliably in uncertain and unfriendly environments, novel concepts and methodologies are needed to 1) analyze the resiliency of algorithms, and 2) maintain the capability to reliably deliver information and perform desired operations. This research topic will develop resilient distributed optimization and learning algorithms in the presence of
• Abrupt changes in the inter-agent communication network,
• Asynchronous communications and computations,
• Adversarial cyber-attacks capable of introducing untrustworthy information into the communication network.
Some distributed methods of interest in this topic include, but are not limited to, weighted-averaging, push-sum, push-pull, stochastic gradient descent, and multi-armed bandits.
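A canonical resilience primitive against the adversarial bullet above is the trimmed-mean update: each agent discards the f largest and f smallest neighbor values before averaging, bounding the influence of up to f Byzantine neighbors. The neighbor values and f below are illustrative assumptions.

```python
# Trimmed-mean consensus step: drop the f extreme values on each side of
# the sorted neighbor list, then average the survivors with the agent's
# own value. A single outlier cannot drag the update far.
def trimmed_mean(own, neighbor_values, f):
    vals = sorted(neighbor_values)
    kept = vals[f:len(vals) - f] if len(vals) > 2 * f else []
    pool = kept + [own]
    return sum(pool) / len(pool)

honest = [1.0, 1.1, 0.9, 1.05]
adversarial = [100.0]                   # one attacker injects an outlier
update = trimmed_mean(1.0, honest + adversarial, f=1)
print(update)   # stays near the honest values despite the outlier
```

A plain average over the same values would be pulled above 17, so the contrast makes the resilience guarantee concrete.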
SF.20.22.B10086: Secure Speculative Execution for Future Processor Architectures
For many years, every new generation of computer processors offered significantly better performance than the previous generation, thanks to a steady increase in clock frequencies, fabrication technology advances (resulting in smaller transistor size), and faster memory, among other factors. This “free lunch” period slowed in the early 2000s, when manufacturers began to face the so-called “power wall” and could no longer simply crank up the frequency of a CPU to achieve better performance. After that point, computer architects increasingly relied on microarchitecture techniques that utilized the growing abundance of resources (transistors) to drive performance gains. Complex performance-oriented microarchitectural optimizations like speculative execution (SpecEx), branch prediction, simultaneous multithreading (SMT), and out-of-order execution (OoO) became a major driver of performance increases. In a series of recent publications, widely publicized as Spectre and Meltdown, researchers have shown that the integration of these key performance-driven techniques can be leveraged to access and indirectly disclose (leak) forbidden data. Without relying on any software-level vulnerabilities, this type of attack relies on widely used techniques implemented in hardware (i.e., speculative execution, branch prediction, OoO, SMT) that are correctly doing what they are designed to do. Currently, the development of hardware or software solutions to mitigate multiple attacks is an open area of research.
This research topic aims at state-of-the-art research in microarchitectural techniques that enable secure speculative execution. Topics of interest include, but are not limited to:
- Secure implementations of speculative execution techniques in future processors.
- Performance assessment of speculative execution techniques.
- Novel mitigation techniques for known speculative execution vulnerabilities
SF.20.22.B10085: Foundations of Resilient and Trusted Systems
Research opportunities are available for model-based design, development, and demonstration of foundations of resilient and trustworthy computing. Research includes technology, components, and methods supporting a wide range of requirements for improving the resiliency and trustworthiness of computing systems via multiple resilience and trust anchors throughout the system life cycle, including design, specification, and verification of cyber-physical systems. Research supports security, resiliency, reliability, privacy, and usability, leading to high levels of availability, dependability, confidentiality, and manageability. Thrusts include hardware, middleware, and software theories, methodologies, techniques, and tools for resilient and trusted, correct-by-construction, composable software and system development. Specific areas of interest include:
• Automated discovery of relationships between computations and the resources they utilize, along with techniques to safely and dynamically incorporate optimized, tailored algorithms and implementations constructed in response to ecosystem changes;
• Theories and application of scalable formal models, automated abstraction, reachability analysis, and synthesis;
• Perpetual model validation (both of the system interacting with the environment and of the model itself);
• Trusted resiliency and evolvability;
• Compositional verification techniques for resilience and adaptation to evolving ecosystem conditions;
• Reduced complexity of autonomous systems;
• Effective resilient and trusted real-time multi-core exploitation;
• Architectural security, resiliency, and trust;
• Provably correct complex software and systems;
• Composability and predictability of complex real-time systems;
• Resiliency and trustworthiness of open source software;
• Scalable formal methods for verification and validation to prove trust in complex systems;
• Novel methodologies and techniques which overcome the expense of current evidence generation/collection techniques for certification and accreditation;
• A calculus of resilience and trust allowing resilient and trusted systems to be composed from untrusted components.
SF.20.22.B10084: Formal Methods for Complex Systems
Formal methods are based on areas of mathematics that support reasoning about systems. They have been successful in supporting the design and analysis of systems of moderate complexity. Today’s formal methods, however, cannot address the complexity of the computing infrastructure needed for our defense.
This area supports investigation into powerful new formal methods covering a range of activities throughout the lifecycle of a system: specification, design, modeling, and evolution. New mathematical notions are needed to address the state-explosion problem, including powerful new forms of abstraction and composition. Novel, semantically sound integration of formal methods is also of interest. The goal is to develop tools that are based on rigorous mathematical notions and provide useful, powerful, formal support in the development and evolution of complex systems.
SF.20.22.B10083: SIKE for Post-Quantum Cryptography
The study of post-quantum cryptography (PQC) has developed mightily over the past decade, with the National Institute of Standards and Technology (NIST) even holding a contest to standardize a set of PQC algorithms for various cryptographic tasks. While this contest is still ongoing, several promising candidates have been excluded for reasons other than theoretical security. In particular, the Supersingular Isogeny Key Encapsulation (SIKE) has been implemented at the highest level of security while offering quantum computers no advantage over classical computers. Moreover, SIKE is compatible with several other elliptic curve algorithms and hence is a promising candidate for a hybrid scheme. The advantage of combining PQC and classical cryptography is that it requires less overhaul than replacing classical techniques, while still improving security and eliminating the threat of quantum computers. To date, however, no satisfactory hybrid schemes exist. The fundamental areas of research related to this project, therefore, can be described in the following steps:
1. Determine parameter sets that allow for seamless interaction between SIKE and other elliptic curve cryptography;
2. Determine how to combine SIKE and classical elliptic curve cryptography to maintain efficiency and security;
3. Discover efficient algorithms to generate instances of SIKE;
4. Determine parameter sets that allow for adaptations of SIKE to lightweight devices;
5. Study the practical implementations of SIKE and their resilience against side-channel attacks.
Development of new cryptographic methods is not of interest under this topic.
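While implementing SIKE itself is far beyond a sketch, the generic combiner at the heart of step 2 can be illustrated: the classical shared secret and the post-quantum shared secret are concatenated and hashed into a single session key, so the session remains secure if either component does. The byte strings and context label below are illustrative placeholders, not real protocol outputs.

```python
import hashlib

# Hybrid-scheme combiner sketch: derive one session key from a classical
# (e.g., ECDH) shared secret and a post-quantum shared secret. Real hybrid
# KEMs use a proper KDF with domain separation; SHA-256 stands in here.
def hybrid_key(classical_secret: bytes, pq_secret: bytes,
               context: bytes = b"hybrid-kem-demo") -> bytes:
    return hashlib.sha256(context + classical_secret + pq_secret).digest()

k = hybrid_key(b"\x01" * 32, b"\x02" * 32)
print(k.hex())
print(len(k))   # 32-byte session key
```

An attacker must recover both input secrets to reconstruct the session key, which is the "at least as strong as the stronger component" property that motivates hybrid deployment.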
SF.20.22.B10082: Decentralized Secure Information Dissemination Middleware
Current Information Management (IM) system design practices are based on centralized middleware services that mediate information exchanges between data producers and consumers. Typically, IM services are protected with a perimeter/defense-in-depth security approach using specialized hardware in a private network, assuming that both the nodes deployed on the services and the users are trustworthy. However, this has proven ineffective for addressing future secure information dissemination challenges in highly contested environments. One promising approach of growing significance in recent years is the Decentralized Application (DApp) design practice combined with a Zero-Trust (ZT) security model. ZT is an evolving set of cybersecurity paradigms that shifts from centralized application security schemes to securing users, assets, and resources in a segregated and decentralized fashion. Topics of interest include, but are not limited to:
• Decentralized middleware application design and implementation methodologies.
• Zero-Trust security model for time-sensitive information producer and consumer interaction.
• Smart-contract based security policy enforcement model.
• Decentralized data oracle framework linking external data to the smart contracts.
• Decentralized secure file storage and query repository models that can utilize public distributed ledgers.
SF.20.22.B10081: Emerging 5G Technologies for Military Applications
5G-to-Next-G (5G-XG) communications and network technologies can be leveraged to enhance military communication capabilities. In particular, 5G-XG-enabling technologies are envisioned to provide higher data rates, lower latency, lower power consumption, security enhancements, and ubiquitous access, including non-terrestrial links. The three major use-case domains of 5G-XG (enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (URLLC), and massive machine-type communications (mMTC)) provide the opportunity to harness commercial technology for future AF use cases such as smart bases, self-driving vehicles, augmented and virtual reality technologies for training, and dynamic spectrum management and sharing technologies to facilitate coexistence of commercial and military spectrum-dependent systems (SDSs). The 5G-XG research areas of interest for this topic include, but are not limited to:
• Dynamic spectrum management and sharing with unlicensed and shared bands
• Aerial Internet of Things (IoT)
• Waveform design for enhanced security and high mobility
• Small cell mission scenarios
• AI and ML enhanced/incorporated spectrum management, dynamic sensing and sharing
• Smart base/smart port use cases with small cell, V2X, low power and localization technologies
• Advanced physical layer techniques such as carrier aggregation, full-duplex and massive MIMO
• Beamforming and adaptive nulling for interference tolerance and spectrum sharing/co-existence
• Millimeter-wave and terahertz band communications
• Spectrum-sharing-by-Design for the Internet of Things
• Shapeshifting Neural Networks for Effective, Efficient and Secure Hardware-based Inference
• Edge-Assisted Task Offloading Through Real-Time Deep Reinforcement Learning
• Quality of Service (QoS) enhancement via Non-terrestrial Networking (NTN)
SF.20.22.B10080: Next Generation Wireless Networking: 5G Mesh Networking
5G networks have introduced innovative concepts such as Non-Terrestrial Networks (NTN), Integrated Access and Backhaul (IAB), virtual Radio Access Networks (vRAN), and Network Slicing (NS). These concepts make it possible to provide multiple customized networks over terrestrial and aerial domains within a unified communication infrastructure.
The topic seeks highly motivated research on how 5G and its enabling technologies – virtual Radio Access Networks (vRAN), Integrated Access and Backhaul (IAB), Software Defined Networking (SDN), Network Function Virtualization (NFV), cloud infrastructure along with network management and orchestration – can support dynamic, resilient local and global communications. For example, high level network control makes it possible for network designers to specify more complex tasks that involve integrating many disjoint network functions (e.g., security, resource management, and prioritization, etc.) into a single control framework, which enables: (1) robust and agile network reconfiguration and recovery; (2) flexible network management and planning; and, in turn, (3) improvements in network efficiency and controllability.
SF.20.22.B10079: Multi-agent Approaches for Planning Air Cargo Pickup and Delivery
Efforts to improve air logistics planning have been ongoing for decades, helping drive the development of critical techniques such as the simplex method for solving linear programs. The classic air cargo pickup and delivery problem can be broadly defined in the following way [1]: the air network is a graph whose nodes are capacity-constrained airports and whose edges are routes with an associated cost and time of flight. Each cargo item is stored at a node and must be picked up by agents (airplanes) and delivered to a target node. The primary objective is to deliver cargo on time, with a secondary objective to minimize cost.
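The problem statement above can be sketched as a small graph model with a cheapest-route query for a single cargo item (the full problem adds capacities, deadlines, and fleets on top of this). The toy network and costs are illustrative assumptions.

```python
import heapq

# Dijkstra-style cheapest-route query on a directed route graph:
# edges are (origin, destination, cost) triples.
def cheapest_route(edges, start, goal):
    adj = {}
    for u, v, cost in edges:
        adj.setdefault(u, []).append((v, cost))
    heap, seen = [(0, start, [start])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
    return None

routes = [("JFK", "RMS", 5), ("JFK", "HUB", 2), ("HUB", "RMS", 2), ("HUB", "OSN", 4)]
print(cheapest_route(routes, "JFK", "RMS"))   # routing via the hub is cheaper
```

Already in this toy instance, the optimal plan routes through an intermediate hub rather than taking the direct edge, which is exactly the kind of structure the solution techniques below must discover at scale.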
We seek to explore the following topic areas:
1) New techniques for solving the air cargo problem. Recently, there has been success in using machine learning to solve related problems such as the Vehicle Routing Problem [2] or the Pickup and Delivery Problem [3]. Graph Neural Networks have also shown potential for solving planning problems [4]. Meanwhile, operations research continues to provide promising results, e.g., in the area of Multi-Agent Path Finding for robot and train routing [5]. We seek application of these or other techniques to improve over existing methods in terms of optimality, computational cost, and scalability.
2) Extensions to address stochastic events. Disruptions may render a plan obsolete. For example, routes (edges) or airplanes (agents) may become unavailable due to storms or maintenance issues. Even minor local delays can propagate through the system and lead to long-lasting consequences. New delivery needs may also arise, e.g., a new cargo item may appear at one of the nodes with an urgent deadline. We seek techniques to update an existing plan without requiring the problem to be completely re-solved.
[1] “The Airlift Planning Problem”: https://dl.acm.org/doi/abs/10.1287/trsc.2018.0847
[2] “Reinforcement Learning for Solving the Vehicle Routing Problem”: https://papers.nips.cc/paper/8190-reinforcement-learning-for-solving-the-vehicle-routing-problem
[3] “Heterogeneous Attentions for Solving Pickup and Delivery Problem via Deep Reinforcement Learning”: https://arxiv.org/pdf/2110.02634
[4] “Graph Neural Networks for Decentralized Multi-Robot Path Planning”: https://arxiv.org/abs/1912.06095
[5] “Multi-Agent Pathfinding: Definitions, Variants, and Benchmarks”: https://www.aaai.org/ocs/index.php/SOCS/SOCS19/paper/view/18341/17457
SF.20.22.B10078: Automated Planning Decision Support with Uncertainty
Automated planning generates valid action sequences for problems with clearly defined goals, resources, constraints, and dependencies. For military applications, any proposed plan must be human-understandable and communicated clearly to command staff to motivate action. Leveraging these techniques to support human decision-making will likely require methods for human planners to explore, customize, and compare plan options in the context of the problem being solved. Real-world problems bring additional challenges, including interpreting and solving large-scale problems and handling uncertainty about the state of the environment. We seek to develop and demonstrate methods for automated planning that guide and evolve with human decision-making processes in complex problem spaces. Areas of interest include Automated Planning, Data Mining, Discrete Optimization, and Robust Optimization.
SF.20.22.B10077: Measuring Decision Complexity for Military Scenarios
The goal of this research is to develop metrics that quantify the complexity of an adversary’s decision-making process, as well as measure the complexity imposed on an adversary by United States Air Force (USAF) actions. Specific aims are to define potential complexity metrics for assessing the state of an adversary’s decision system before and after an attack, to model the impacts of complexity imposition on an adversary’s decision system in order to develop analytical assessment strategies, and to compare the relative efficiency of different military actions. The analysis will serve as a means of assessing and quantifying the value of different military actions against an adversary. The end goal is to provide new insights into using complexity as a measure of how effective a military action will be in a military conflict. Some areas of interest include operations research, stochastic optimization, game theory, complexity theory, graph theory, and complex adaptive systems.
SF.20.22.B10076: Model-Based Systems Synthesis
In this research, we wish to investigate a scalable approach to model-based systems synthesis for composing/decomposing and solving complex systems problems with safety and security constraints. The goal is to develop efficient, quantifiably effective strategies to enable variable system cloaking, preserve maximum opacity, and prevent information leakage. Some areas of interest include program synthesis, constraint programming, and model-based systems engineering.
SF.20.22.B10075: Learning to Synthesize Programs
Recent advances that pair large-scale transformer-based models with large-scale sampling and filtering have introduced a new contender to the world of competitive programming, with promising results (namely, DeepMind's AlphaCode). We typically expect highly skilled human programmers, with their ability to reason and think critically, to develop higher-quality programming solutions and outperform code generators on never-before-seen problems. A code-generation system's ability to beat even mid-level programming competitors or solve college-level mathematics problems (e.g., Codex) therefore represents unprecedented gains in AI research (including Program Synthesis, long considered a Holy Grail of Computer Science). Along this line of research, we seek to investigate additional techniques and architectures (e.g., hierarchical transformers) and explore the associated benefits and risks.
SF.20.21.B10030: Millimeter Wave Propagation
This effort addresses millimeter wave propagation over air-to-air, air-to-ground, and Earth-space paths to support development of new communication capabilities. The objective is to develop prediction methods that account for the atmospheric effects that give rise to fading and distortion of the desired signal. Predictions may range from near-term forecasts to statistical distributions of propagation loss. Research topics of interest are those that will provide information, techniques, and models that advance the prediction methodologies.
SF.20.21.B10029: Data-Efficient Machine Learning
Many recent efforts in machine learning have focused on learning from massive amounts of data, resulting in large advancements in machine learning capabilities and applications. However, many domains lack access to the large, high-quality, supervised data that is required and therefore are unable to fully take advantage of these data-intense learning techniques. This necessitates new data-efficient learning techniques that can learn in complex domains without the need for large quantities of supervised data. This topic focuses on the investigation and development of data-efficient machine learning methods that are able to leverage knowledge from external/existing data sources, exploit the structure of unsupervised data, and combine the tasks of efficiently obtaining labels and training a supervised model. Areas of interest include, but are not limited to: Active learning, Semi-supervised learning, Learning from "weak" labels/supervision, One/Zero-shot learning, Transfer learning/domain adaptation, Generative (Adversarial) Models, as well as methods that exploit structural or domain knowledge.
Furthermore, while fundamental machine learning work is of interest, so are principled data-efficient applications in, but not limited to: Computer vision (image/video categorization, object detection, visual question answering, etc.), Social and computational networks and time-series analysis, and Recommender systems.
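As a toy illustration of one semi-supervised technique mentioned above, the sketch below implements self-training (pseudo-labeling) with a nearest-centroid classifier: a model fit on a handful of labeled points assigns pseudo-labels to confident unlabeled points and is refit. The synthetic data, confidence margin, and number of rounds are all invented for illustration.

```python
import numpy as np

# Self-training (pseudo-labeling) sketch on synthetic two-cluster data.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=-2.0, scale=1.0, size=(100, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(100, 2))
X_lab = np.vstack([X0[:5], X1[:5]])          # only 5 labels per class
y_lab = np.array([0] * 5 + [1] * 5)
X_unl = np.vstack([X0[5:], X1[5:]])          # 190 unlabeled points

def centroids(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(C, X):
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    return d.argmin(axis=1), d

C = centroids(X_lab, y_lab)
for _ in range(3):                           # a few self-training rounds
    y_hat, d = predict(C, X_unl)
    margin = np.abs(d[:, 0] - d[:, 1])
    confident = margin > 1.0                 # keep confident pseudo-labels
    X_aug = np.vstack([X_lab, X_unl[confident]])
    y_aug = np.concatenate([y_lab, y_hat[confident]])
    C = centroids(X_aug, y_aug)              # refit on augmented set

y_all, _ = predict(C, X_unl)
true_all = np.array([0] * 95 + [1] * 95)
accuracy = (y_all == true_all).mean()
```

Ten labels plus pseudo-labeled data yield centroids close to those a fully supervised fit would produce, which is the essence of exploiting unlabeled structure.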
SF.20.21.B10028: Computational Trust in Cross Domain Information Sharing
In order to transfer information between disjoint networks or security domains, or to disseminate it to coalition partners, Cross Domain Solutions (CDS) examine and filter information to ensure only appropriate data is released or transferred. Due to the ever-increasing amount of data needing to be transferred and the newer, more complex data formats and protocols created by different applications, current CDSs are not keeping up with cross domain transfer demands. As a result, critical information is not being delivered to decision makers in a timely manner, or sometimes at all. To meet today's cross domain transfer needs, CDSs are looking to employ newly emerging technologies to better understand the information they process and to adapt to large workloads. These emerging technologies include, but are not limited to, machine learning based content analysis, information sharing across mobile and Internet of Things (IoT) based devices, cloud based cross domain filtering systems, passing information across nonhierarchical classifications, and processing of complex data such as voice and video. While these new technologies enhance CDSs' capabilities, they also add substantial complexity and vulnerabilities to the systems. Common attacks may come from a less critical network trying to gain critical network access, or from malware on the critical side trying to send data to the less critical side. Research should investigate and examine methods to efficiently secure emerging technologies beneficial to CDSs. Researchers will collaborate heavily with AFRL's cross domain research group to better understand cross domain systems as they apply their specific areas of emerging technology expertise to these problems. The expected outcome may include a design and/or a proof-of-concept prototype to incorporate emerging technologies into CDSs. It may also include vulnerability analysis and risk mitigation for those emerging technologies operated in a critical environment.
SF.20.21.B10027: Robust Adversarial Resilience
In recent literature, deep learning classification models have shown vulnerability to a variety of attacks. Recent studies describe techniques employed to defend against such attacks, e.g. adversarial training, mitigating unwanted bias, and increasing local stability via robust optimization. Further studies, however, demonstrate that these defenses can be circumvented through adapted attack interfaces. Given the relative ease by which most defenses are circumvented with new attacks, we will explore adversarial resilience from two angles. The first will be to improve the resistance of models against attacks in a robust fashion such that one-off attacks won’t circumvent defensive measures. The second will be to attempt to classify subversion attacks by training a separate model to identify them. In order to accomplish both tasks, we will seek to understand the fundamental theory of deep learning architectures and attacks. We hypothesize that a mathematical analysis of attacks will show similarity between attacks that can be exploited by a classifier. We also hypothesize that a mathematical analysis of deep learned models will identify algorithmic weaknesses that are easily exploited by attacks. Understanding how attacks are generated, and how to identify the resultant adversarial examples, is necessary for generalizing countermeasures. Attacks may prey on measures used by the classifier, allowing for targeted deception or misclassification. These attacks often are designed for transferability; even classifiers employing typical countermeasures remain vulnerable. Other attacks prey on the linearity of the underlying model – these adversarial attacks require minimal modification to the data. Considering a nonlinear basis, such as radial basis functions, may improve resilience against such attacks. Exploring this design space will provide insight into methods we can employ to reduce adversarial impact.
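The point above about attacks exploiting model linearity can be made concrete with a fast-gradient-sign (FGSM-style) perturbation of a logistic model, sketched below. The weights, input, and epsilon are arbitrary illustrative choices, not any specific defended architecture.

```python
import numpy as np

# FGSM-style attack sketch on a linear (logistic) model: because the model
# is linear, a tiny signed step along the loss gradient shifts the logit by
# eps * sum(|w|), which grows with dimension.
rng = np.random.default_rng(1)
w = rng.normal(size=50)          # illustrative model weights
b = 0.0
x = rng.normal(size=50)          # illustrative input

def prob(x):
    # P(y=1 | x) under the logistic model
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

y = 1  # assume the true label is 1
# Gradient of the loss -log p w.r.t. x is (p - y) * w for logistic loss.
grad_x = (prob(x) - y) * w
eps = 0.1
x_adv = x + eps * np.sign(grad_x)   # fast gradient sign step

p_clean, p_adv = prob(x), prob(x_adv)
```

Each coordinate moves by only eps, yet the logit drops by eps times the L1 norm of the weights, so confidence in the true class collapses; a nonlinear basis would break this additive accumulation.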
SF.20.21.B10026: Processing Publicly Available Information (PAI)
Publicly Available Information (PAI) includes a multitude of digital unclassified sources such as news media, social media, blogs, traffic, weather, scholarly articles, the dark web, and others. Being able to extract relevant supplementary information on demand could be a valuable addition to conventional military intelligence.
It would be of interest to: (1) categorize trustworthy PAI sources, (2) pull in textual information in English (generating English translations of major foreign languages), and (3) set up a library of natural language processing (NLP) tools that summarize entities, topics, and sentiments over English texts. Examples of trustworthy PAI sources include highly credible users belonging to major and local news outlets, emergency responders, government, universities, etc. Topics of interest relate to business and economics, conflicts, cybersecurity, infrastructure, disasters and weather, etc. It is important to have capabilities to resolve location even in the absence of geotags, and confidence metrics are needed for all capabilities developed. The researcher may choose, based on their expertise, to work on a subset of the outlined tasks.
Related to detecting misinformation in the public information domain, it would be of interest to design algorithms that identify discrepancies in information about a query topic using documents in two languages; for example, comparing Wikipedia articles in two languages about the same topic. The developed algorithms should be able to determine and rank which article topics are more closely aligned, and should highlight the types of commonalities and discrepancies.
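A minimal baseline for the alignment-ranking task above is cosine similarity over term-frequency vectors, sketched below with Python's standard library only. The toy "articles" are invented, and a real cross-lingual comparison would first map both texts into a shared (translated or embedding) space.

```python
from collections import Counter
import math

# Rank topic alignment between two article versions by cosine similarity
# over bag-of-words term-frequency vectors (toy monolingual stand-in).
def tf_vector(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented article pairs: one aligned topic, one discrepant topic.
articles = {
    "dam":    ("the dam generates hydroelectric power for the region",
               "the dam supplies hydroelectric power to the region"),
    "treaty": ("the treaty was signed by both nations in 1990",
               "the agreement remains disputed and unratified"),
}
# Most-aligned topics first.
ranking = sorted(articles,
                 key=lambda k: cosine(tf_vector(articles[k][0]),
                                      tf_vector(articles[k][1])),
                 reverse=True)
```

Terms present in one version but absent in the other (here, "signed" vs. "disputed") are exactly the commonalities and discrepancies the topic asks the algorithms to surface.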
SF.20.21.B10025: Hyperdimensional Computing (HDC)/ Vector Symbolic Architectures (VSA)
Hyperdimensional computing (HDC) or Vector Symbolic Architectures (VSA) are potentially the mathematically rigorous engineering design rules sought by the machine learning (ML) community to stitch together disparate artificial neural network (ANN) frameworks. In HDC, information is represented by high dimensional vectors (d ~ 1e4), which may be added (superimposed) or multiplied together to create sets or dictionary key-entry pairs, respectively. A similarity metric measures the correlation between any two vectors. These formalisms follow a connectionist approach: linking concepts together such as in sequences, graphs, and trees. This research topic considers the "edge computing" potential of these methods, e.g. sensor fusion for robot navigation, methods for computing with sparse hyperdimensional vectors, implementing a resonator network in hardware, and memristor crossbar implementations. Additional interest is in exploring the application of sheaves from topological geometry with respect to hyperdimensional vectors derived from sensor data.
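The binding/superposition/similarity operations described above can be sketched with bipolar (+1/-1) hypervectors: elementwise multiplication binds a role to a filler, a signed sum bundles bindings into one record, and cosine similarity against an item memory recovers a stored entry. The dimension and the country/capital example are illustrative choices.

```python
import numpy as np

# Minimal HDC/VSA sketch with bipolar hypervectors.
d = 10_000
rng = np.random.default_rng(42)
hv = lambda: rng.choice([-1, 1], size=d)     # random hypervector

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

COUNTRY, CAPITAL = hv(), hv()                # role vectors
USA, MEX = hv(), hv()                        # filler vectors
WDC, CDMX = hv(), hv()

# Bundle two role-filler bindings into one record hypervector.
record = np.sign(COUNTRY * USA + CAPITAL * WDC)
# Unbinding with a role vector yields a noisy copy of its filler
# (multiplication is its own inverse for bipolar vectors).
probe = record * CAPITAL
# Cleanup: compare the probe against an item memory of known fillers.
memory = {"USA": USA, "MEX": MEX, "WDC": WDC, "CDMX": CDMX}
best = max(memory, key=lambda k: cos(probe, memory[k]))
```

Because random hypervectors are nearly orthogonal in high dimension, the noisy probe still correlates strongly with the correct filler and only weakly with everything else.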
SF.20.21.B10024: Quantum Control Hardware Design and Security
Successful control of quantum systems relies on novel methods of integration with existing classical technologies designed around microcontrollers and field programmable gate arrays (FPGAs). Devising new control schemes is a fundamental research concern for the successful integration of multiple quantum technologies in heterogeneous systems. New technological developments with FPGAs now provide hardware acceleration with tightly-integrated co-processors, and the additional ability to reconfigure on-the-fly. Utilizing these new classical computing architectures enables rapid feedback mechanisms not currently seen in use for quantum control systems. The focus of this research is to (1) develop novel control solutions that exploit new computing architectures and methods for rapid feedback control of quantum systems, (2) support classical control systems that make quick decisions about quantum circuit reconfiguration, and (3) handle and process the large amounts of data output by quantum processes.
Of additional importance in tightly-integrated computing architectures are the fundamental security aspects of controlling quantum systems. The future of quantum control requires thoughtful insight into how to make these systems robust, and careful examination of all facets supporting robustness will lead to sound solutions in an ever-evolving field. Fields of research interest related to quantum control security include (4) deriving novel baseline architectures from a security perspective, and (5) fundamental research into mathematical concepts pushing beyond existing primitives to secure quantum-to-quantum and classical-to-quantum communications.
SF.20.21.B10022: Towards Data Communication using Neutrinos
Existing beyond-line-of-sight (BLOS) data communications rely on electromagnetic radiation for transmission and detection of information. This topic involves investigating a non-electromagnetic data communications approach using neutrinos. Technical challenges to address include:
* Transmission: Particle accelerators are limited in transmit power and data modulation bandwidth. Analyze state-of-the-art particle accelerators and optimize accelerator designs primarily for digital communications.
* Propagation: Measuring the absorption coefficient and beam divergence of neutrino beams is key to long-distance neutrino communications. Propose measurement techniques and perform analysis of experimental data from ongoing experiments, such as those at CERN, that measure both cosmic and accelerator neutrinos.
* Detection: To achieve a practical bit error rate in data communications, increasing detector sensitivity or neutrinos detected per bit is crucial. Investigate neutrino detection methods to increase receiver sensitivity and optimize for digital communications.
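The detection challenge above can be illustrated with a back-of-envelope Poisson link budget. Assuming on-off keying, equiprobable bits, and no background counts (all illustrative assumptions), a "1" bit carried by a neutrino burst with mean mu detected events is missed only when zero events arrive, so P(miss) = exp(-mu) and BER ~ 0.5 * exp(-mu).

```python
import math

# Poisson-detection bit-error-rate sketch for on-off-keyed neutrino bits.
def ber(mu):
    # mu: mean detected neutrinos per "1" bit; only 1-bits can err here.
    return 0.5 * math.exp(-mu)

def mu_required(target_ber):
    # Invert: events per bit needed for a target BER, mu = ln(0.5 / target).
    return math.log(0.5 / target_ber)

# e.g. a 1e-6 BER needs roughly 13 detected neutrinos per "1" bit.
mu_for_1e6 = mu_required(1e-6)
```

The exponential dependence is why even modest gains in detector sensitivity (events per bit) translate directly into orders of magnitude in achievable bit error rate.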
SF.20.21.B10021: Discovering Structure in Nonconvex Optimization
Optimization problems arising from applications are often inherently nonconvex and non-smooth. However, the tools used to study and solve these problems are typically adopted from the classical convex domain, and do not adequately address the challenges posed by nonconvex problems. The purpose of this research is to develop accurate models and efficient algorithms which take advantage of useful structure or knowledge derived from the application in question. Examples of this structure include sparsity, generalizations of convexity, and metric regularity. Some areas of interest are sparse optimization, image and signal processing, variational analysis, and mathematical foundations of machine learning.
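As one concrete example of exploiting sparsity structure, the sketch below runs ISTA (proximal gradient descent) on a lasso problem, min_x 0.5*||Ax - b||^2 + lam*||x||_1, where the proximal step is a simple soft-threshold. Problem sizes, sparsity level, and lam are arbitrary illustrative choices.

```python
import numpy as np

# ISTA sketch for sparse recovery: a 5-sparse signal from 80 random
# measurements in dimension 200.
rng = np.random.default_rng(3)
m, n = 80, 200
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=5, replace=False)
x_true[support] = rng.choice([-3.0, 3.0], size=5)
b = A @ x_true

lam = 0.01
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for _ in range(1000):
    grad = A.T @ (A @ x - b)           # gradient of the smooth part
    x = soft(x - grad / L, lam / L)    # proximal (soft-threshold) step

true_support = np.flatnonzero(x_true)
recovered_support = np.flatnonzero(np.abs(x) > 0.5)
```

The l1 term is nonsmooth, yet its proximal map has a closed form, which is exactly the kind of structure that makes an otherwise hard problem tractable.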
SF.20.21.B10020: Generalized Approaches for Manipulated Media Detection
Technology to generate and manipulate media has become so advanced that humans have a difficult time identifying the results. Advances in the technology to generate synthetic media and alter existing media have made it possible to counterfeit a variety of media types. Several headlines illustrate the breadth of affected media:
* Deepfake Putin is here to warn Americans about their self-inflicted doom [1]
* A growing problem of ‘deepfake geography’: How AI falsifies satellite images [2]
* Scammer successfully deepfaked CEO’s voice to fool underling into transferring $243,000 [3]
* AI-generated text is the scariest deepfake of all [4]
The realism and seeming credibility of these imitations make them a potential threat to our nation's security. Currently, many detectors are available to detect fake media, but each detects only a small subset. Is there a detector that can detect a larger subset, or even all fakes? There are multiple ways to manipulate and generate media, varying in approach, which makes it difficult to create generalized detectors. Researchers will investigate generalized and robust approaches for detecting generated and manipulated media, including data not generated by generative adversarial networks and data not solely focused on humans. Approaches of interest are:
* Ensemble of networks (e.g. multiple biological features)
* Constructive – utilizing the properties and structure of the data
* A combination of constructive and semantic – in addition to constructive methods also consider whether something in the data doesn’t make contextual sense like a missing earring
* Attribution
[1] https://www.technologyreview.com/2020/09/29/1009098/ai-deepfake-putin-kim-jong-un-us-election/
[2] https://www.washington.edu/news/2021/04/21/a-growing-problem-of-deepfake-geography-how-ai-falsifies-satellite-images/
[3] https://gizmodo.com/scammer-successfully-deepfaked-ceos-voice-to-fool-under-1837835066
[4] https://www.wired.com/story/ai-generated-text-is-the-scariest-deepfake-of-all/
SF.20.21.B10019: Behavioral Biometrics for Secure and Usable User Authentication Using Machine Learning
Trust and influence are at the forefront of basic research on human reliance in human-machine interactions, and their proper calibration directly supports AFRL's mission. Trust and influence can be translated into secure and usable user authentication via behavioral biometrics. Behavioral biometrics is an emerging family of technologies that utilize signals of human behavior to identify and authenticate individuals. In recent years, modalities such as keystroke and mouse dynamics have shown promising ability to effectively distinguish between users. Unlike existing knowledge-based authentication such as passwords, which are based on "what you know", and possession-based approaches based on "what you have", such as YubiKey, behavioral biometrics verify a user based on "what you are". On the other hand, every additional factor added to current mainstream multi-factor authentication (MFA) can be obtrusive and onerous for a user, e.g., logging in with a one-time password received via text message. Behavioral biometrics has good potential to eliminate this usability problem while maintaining a desired level of security, due to its unobtrusiveness and its ability to continuously monitor a user to detect impostors and increase security.
Under this topic, we seek novel projects to understand the following issues: 1) Novel modalities such as user interaction with a graphical user interface (GUI);
2) Novel context such as when a user interacts with Facebook or a user is composing a document;
3) Novel authentication and fusion algorithms that are easy to tune and practical;
4) Benchmarking projects that are designed to build trust in state-of-the-art behavioral biometrics algorithms;
5) Novel experimental design with involvement of human subjects;
6) User privacy protection.
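A minimal template-matching sketch conveys how keystroke-dynamics verification works: each sample is a vector of inter-key timing features, a user template is the mean of enrollment samples, and a new sample is accepted if its distance to the template falls under a threshold. All timings, the feature count, and the threshold here are invented.

```python
import numpy as np

# Toy keystroke-dynamics verification via distance to an enrollment template.
rng = np.random.default_rng(7)
n_features = 20                      # e.g. hold/flight times for a passphrase

def draw_samples(mean_profile, n, jitter=0.02):
    # Samples = user's characteristic timings plus per-attempt jitter (s).
    return mean_profile + rng.normal(scale=jitter, size=(n, n_features))

genuine_profile = rng.uniform(0.05, 0.30, size=n_features)   # invented user
impostor_profile = rng.uniform(0.05, 0.30, size=n_features)  # invented other

template = draw_samples(genuine_profile, 10).mean(axis=0)    # enrollment
threshold = 0.15                     # would be tuned on held-out data

def verify(sample):
    return np.linalg.norm(sample - template) < threshold

genuine_pass = np.mean([verify(s) for s in draw_samples(genuine_profile, 50)])
impostor_pass = np.mean([verify(s) for s in draw_samples(impostor_profile, 50)])
```

Real systems would use richer features and fusion across modalities, but the same accept/reject trade-off (genuine pass rate vs. impostor pass rate) governs the tuning and benchmarking items listed above.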
SF.20.21.B10018: Short-Arc Initial Orbit Determination for Low Earth Orbit Targets
When new objects are discovered or lost objects rediscovered in Low Earth Orbit (LEO), very short arcs are obtained due to limited pass durations and geometrical constraints. This results in a wide range of feasible orbit solutions that may approximate the measurements well. Addition of a second tracklet obtained a short time later – about a quarter of the orbit period or more – leads to substantially improved orbit estimates. However, the orbit estimates obtained from performing traditional Initial Orbit Determination (IOD) methods on these tracklets are often insufficient to reacquire the object from a different sensor a short time later, resulting in an inability to gain custody of the object. Existing research in this area has applied admissible regions and multi-hypothesis tracking to constrain the solutions and evaluate candidate orbits. These methods have been primarily applied to Medium Earth Orbit and Geostationary Orbit and have aimed to decrease the total uncertainty in the orbit states. The objective of this topic is to research and develop methods to minimize propagated measurement uncertainty for LEO objects at future times, as opposed to minimizing the orbit state uncertainty over the observed tracklet. This will improve the ability to reacquire the object over the course of the following orbit or orbits to form another tracklet, which will result in substantially better orbit solutions. Sensor tasking approaches which maximize the likelihood of re-acquisition are also of interest.
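A toy Monte Carlo illustrates why propagated uncertainty, rather than instantaneous state uncertainty, drives re-acquisition: for near-circular LEO orbits, a small semi-major-axis spread maps into an along-track angle spread that grows linearly with time. The values (400 km altitude, 1 km sigma) and the circular-orbit assumption are purely illustrative.

```python
import numpy as np

# Along-track angle spread growth for a circular-orbit Monte Carlo ensemble.
MU = 398600.4418          # km^3/s^2, Earth gravitational parameter
a0 = 6778.0               # km, ~400 km altitude circular orbit
sigma_a = 1.0             # km, illustrative semi-major-axis uncertainty

rng = np.random.default_rng(11)
a = a0 + sigma_a * rng.normal(size=10_000)   # sampled semi-major axes
n = np.sqrt(MU / a**3)                       # mean motion of each sample
period = 2 * np.pi / np.sqrt(MU / a0**3)     # nominal orbital period (s)

def angle_spread(t):
    # Std. dev. of along-track angle n*t across the ensemble (radians).
    return np.std(n * t)

spread_1_orbit = angle_spread(period)
spread_5_orbits = angle_spread(5 * period)
```

A small state covariance at epoch can still yield a large along-track position spread several revolutions later, which is precisely the quantity a re-acquisition-oriented IOD method should minimize.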
SF.20.21.B10017: Machine Learning Applications for Geospatial Intelligence Processing
Our team is seeking advanced Machine Learning (ML) applications to improve geospatial intelligence (GEOINT) processing. Enhancements in capabilities, such as effectively identifying objects of interest in overhead/satellite imagery and discovering patterns of human behaviors/events through analyzing geospatial information system (GIS) data, are critical for improving efficiency and decision-making.
The following topics of interest have been identified as potential high-risk, high-reward research areas.
1) Novel Methods for Applying Computer Vision to GEOINT: A key facet of computer vision is object detection - finding objects and their specific locations or boundaries within an image. By applying object detection, overhead/satellite imagery can be rapidly and automatically analyzed for infrastructure mapping, anomaly detection, and feature extraction. Advances in detecting difficult-to-identify objects are of particular interest. In addition, insight into the meaning of GIS observations could be greatly enhanced by incorporating ML models of the environment and common-sense knowledge. Novel approaches to fusing computer vision with semantic models are needed. Examples of this topic include, but are not limited to: detection of objects comprised primarily of linear shapes, and computer vision with semantic reasoning for GIS observations.
2) Techniques for Reasoning across Multiple Data Domains: Explosive growth in geospatial, temporal, and social media data, paired with the development of new ML and visualization technologies, has provided an opportunity to fuse disparate data sources into an unparalleled situational awareness platform. Social media outlets increasingly include geolocated evidence in connection with individual activity and correspondence. Innovative techniques for reasoning across social networks within the context of GIS will allow us to model and predict human behaviors/events within complex geographic landscapes. Examples of this topic include, but are not limited to: automation of socio-temporal-geo correlation to drive predictive modeling.
3) Advances in Synthetic Data Generation: While there is an abundance of available GIS data, fully aligned and annotated ground truth data for training and testing is difficult to acquire. Methods for rapidly generating realistic geospatial landscapes with known features (infrastructure, anomalies, etc.) and automatically translating such landscapes into realistic satellite collection simulations are needed. Examples of this topic include, but are not limited to: automated generation of aligned data sets across multiple phenomenologies (electro-optical, synthetic aperture RADAR, etc.).
SF.20.21.B10016: Modeling Battle Damage Assessment
Combat Assessment is the determination of the overall effectiveness of force employment during military operations. Combat Assessment provides key decision makers the results of engaging a target and consists of four separate assessments: Battle Damage Assessment (BDA), Collateral Damage Assessment, Munitions Effectiveness Assessment, and Re-attack Recommendations. BDA is the core of combat assessment and is a necessary capability to dynamically orchestrate multi-domain operations and impose complexity on the adversary. The goal of this effort is to research methods to model complex and evolving systems from incomplete, sparse data to support BDA use cases. Emphasis will be given to models which accurately reflect the underlying physics and other domain specific constraints of systems. Of additional interest is the development of domain-aware graph analysis techniques for assessing resiliency of adversary systems, multi-INT data fusion to address gaps in data, and analytic process automation.
SF.20.21.B10015: Analyzing Collateral Damage in Power Grids
Reliability assessment in power distribution grids has played an important role in systems operation, planning, and design. The increased integration of information technology, operational technology, and renewable energy resources in power grids has led to the need to identify critical nodes whose compromise would induce cascading failures impacting resilience and safety. Several approaches have been proposed to characterize the problem of identifying and isolating the critical nodes whose compromise can impede the ability of the power grid to operate. The goal of this research is to develop a computational model for the analysis of collateral damage induced by the disruption of critical nodes in a power grid. The proposed model must provide strategic response decision capability for optimal mitigation actions and policies that balance the trade-off between operational resilience and strategic risk. Special consideration will be given to proposals that include, but are not limited to, data-driven implementations, fault graph-based models, and cascading failure models.
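The cascading-failure idea can be sketched with a toy capacity-overload model (standard library only): each node carries load, a failed node sheds its load equally onto surviving neighbors, and any neighbor pushed past capacity fails in turn. The topology, loads, and capacities below are invented; real grid models would use power-flow equations.

```python
from collections import defaultdict

# Toy grid: one generator, two substations, two load buses.
edges = [("gen", "sub1"), ("gen", "sub2"), ("sub1", "sub2"),
         ("sub1", "load1"), ("sub2", "load2")]
load = {"gen": 5.0, "sub1": 4.0, "sub2": 4.0, "load1": 2.0, "load2": 2.0}
capacity = {node: 1.5 * load[node] for node in load}   # 50% headroom each

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def cascade(initial_failure):
    lvl = dict(load)              # work on a copy; repeated runs start fresh
    failed, frontier = set(), [initial_failure]
    while frontier:
        node = frontier.pop()
        if node in failed:
            continue
        failed.add(node)
        alive = [m for m in adj[node] if m not in failed]
        for m in alive:           # shed the failed node's load onto neighbors
            lvl[m] += lvl[node] / len(alive)
            if lvl[m] > capacity[m]:
                frontier.append(m)
    return failed

failed_gen = cascade("gen")       # generator loss overloads the whole grid
failed_leaf = cascade("load1")    # a leaf failure is absorbed locally
```

Even this crude model exhibits the critical-node phenomenon the topic targets: the same redistribution rule that contains a peripheral failure amplifies a failure at a well-connected, heavily loaded node.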
SF.20.21.B0014: Game Theoretical Insights Into the Competition Phase of Multi-Domain Warfare
Game theory enables the development of important insights for interacting agents or decision-makers. Extensive research exists to understand various game types, including cooperative/non-cooperative, simultaneous/sequential, and perfect-information/imperfect-information games. This effort will explore the modeling of competition as a multi-stage, multi-player, multi-domain game in which competing players seek to best position themselves against adversaries across multiple domains (through investments, positioning of capabilities, etc.). The effort should focus on the game formulation, along with analysis of that formulation using generated simulated data. Insights of interest include the impact of non-competitive players and the impact of successfully pre-positioning capabilities, among others.
Possible considerations for other research extensions include the exploration of network flow under interdiction attacks (positioning of capabilities in a network) through the exploitation of the existing network or generation of new components of the network (vehicles, routes, nodes, edges, etc.).
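At its simplest, a single stage of such a positioning game can be analyzed by enumerating pure-strategy Nash equilibria. The two-domain payoff matrix below is invented for illustration (contesting the same domain is assumed to be worth less than splitting across domains).

```python
from itertools import product

# Invented one-shot positioning game: each player invests in one domain.
# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("air", "air"):     (2, 2),
    ("air", "cyber"):   (4, 3),
    ("cyber", "air"):   (3, 4),
    ("cyber", "cyber"): (1, 1),
}
strategies = ["air", "cyber"]

def is_nash(i, j):
    u1, u2 = payoffs[(i, j)]
    # Nash: neither player gains from a unilateral deviation.
    row_ok = all(payoffs[(k, j)][0] <= u1 for k in strategies)
    col_ok = all(payoffs[(i, k)][1] <= u2 for k in strategies)
    return row_ok and col_ok

equilibria = [(i, j) for i, j in product(strategies, strategies)
              if is_nash(i, j)]
```

Here both equilibria have the players splitting across domains, a toy version of the pre-positioning insights the effort seeks; a multi-stage, multi-domain formulation would stack such stage games with state carried between them.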
SF.20.21.B0013: Multi-Unit, Multi-Action Adversarial Planning
Planning is a critical component for any command and control enterprise. While there have been impressive breakthroughs with domain-independent heuristics and Monte Carlo tree search, in adversarial settings with multiple units further work is still required to deal with the enormous state and action spaces and to find quality actions that progress towards the goal and are robust to adversarial actions. We seek to develop new adversarial, domain-independent heuristics that exploit interactions between adversaries' components. In addition to developing new heuristics, we are also interested in more intelligent and efficient search techniques that will allow planning over multiple units. Areas of interest include Automated Planning, Heuristic Search, Planning over Simulators, and Game Theory.
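The core of adversarial search can be sketched as minimax over a small explicit game tree; leaves hold payoffs for the maximizing player. The tree and values below are invented, and real planners at scale would add heuristics with pruning (alpha-beta) or sampling (Monte Carlo tree search) rather than exhaust the tree.

```python
# Minimax sketch: choose the action whose worst-case (adversary-optimal)
# outcome is best.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: terminal payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 tree: our move (max) followed by the adversary's reply (min).
tree = [
    [3, 12],   # option A: adversary's best reply leaves us 3
    [2, 4],    # option B: adversary's best reply leaves us 2
    [6, 8],    # option C: adversary's best reply leaves us 6
]
best_value = minimax(tree, maximizing=True)
```

Option C wins not because it contains the largest leaf (option A does) but because it is robust to the adversary's best response, which is exactly the robustness property sought above.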
SF.20.21.B0012: Multi-Resolution Modeling and Planning
Modeling and simulation is a powerful tool, but must be calibrated to a level of detail appropriate for the current planning objective. Tools that provide high-fidelity modeling (flight surfaces, waypoint pathing, etc.) are appropriate for tactical scenarios, but at the strategic level, representing every platform and resource at high fidelity is often too complex to be useful. Conversely, lower-fidelity simulation can provide strategic assessment, but lacks the specific spatial and timing detail needed for issuing orders to elements. We seek to develop and demonstrate methods for multi-resolution modeling and planning, bridging the gap between multiple levels of representation to support abstraction and specialization as we move between the different fidelities of action. Areas of interest include Automated Planning, Modeling and Simulation, Discrete Optimization, and Machine Learning.
SF.20.21.B0011: Processing in Memory for Big Data Applications
The maturation of non-volatile memory (NVM) has opened up new opportunities for computation and computer architectures. NVM can be integrated with conventional CMOS processes in many ways to create hybridized systems that take advantage of the strengths of the different technologies, yielding new, high-performance, energy-efficient systems able to handle the high performance computing needs of big data applications such as deep neural networks. NVM can itself be used to speed computation through crossbar-based operations in conjunction with conventional CMOS electronics and the necessary software support. This topic pursues hybrid NVM/CMOS systems for high performance computing, with an emphasis on machine learning applications. These systems may be monolithically integrated or may use advanced packaging to create hybrid hardware. The proposed concept should consider software as well as hardware for creating a high performance computing system.
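The crossbar operation mentioned above amounts to an analog matrix-vector multiply: weights are stored as device conductances G, input voltages V drive the rows, and Kirchhoff's current law sums I = G^T V on each column in one step. The sketch below models this numerically, including a simple 4-bit conductance quantization standing in for limited device precision; sizes and the quantization model are invented.

```python
import numpy as np

# Numerical model of a memristor crossbar matrix-vector multiply.
rng = np.random.default_rng(5)
G = rng.uniform(0.0, 1.0, size=(64, 32))   # conductances (toy units)
V = rng.uniform(0.0, 1.0, size=64)         # row input voltages

I_ideal = G.T @ V                          # column currents, one analog step

# Real devices store weights at finite precision: quantize G to 4 bits.
levels = 16
G_q = np.round(G * (levels - 1)) / (levels - 1)
I_quant = G_q.T @ V

relative_error = (np.linalg.norm(I_quant - I_ideal)
                  / np.linalg.norm(I_ideal))
```

Because the multiply-accumulate happens in the analog domain rather than by shuttling weights through a memory bus, the energy and latency win grows with matrix size; the software stack's job is largely to tolerate the small quantization and device-variation error modeled here.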
SF.20.21.B0009: Superconducting and Hybrid Quantum Systems
The Superconducting and Hybrid Quantum Systems group focuses on the development of heterogeneous quantum information platforms and the exploration of related fundamental physics in support of the quantum networking and computing missions of AFRL’s Quantum Information and Sciences Branch. A central theme of the group’s work is to develop quantum interfaces between leading qubit modalities to utilize the respective advantages of each of these modalities for versatility and efficiency in the operation of quantum network nodes. Towards this end, the group’s research is composed of several main thrusts: the development of novel superconducting systems for generating and distributing multi-partite entanglement; the development of interconnects for encoding and decoding multiplexed quantum information on a superconducting quantum bus; the investigation of hybrid superconducting and photonic platforms for transduction of quantum information between microwave and telecom domains; and exploration of quantum interface hardware for bridging trapped-ion and superconducting qubit modalities.
SF.20.21.B0008: 5G Core Security Research
Most cyber-attacks exploit vulnerabilities and misconfigured system settings. The AFRL Laboratory for Telecommunications Research (LTR) is interested in researching and developing methodologies for identifying vulnerabilities in software implementations of 4G/5G global telecommunications specifications. Our goal is to protect core telecom network elements from cyber intrusions. LTR conducts in-depth security assessments across all core network layers and their interaction with the radio access network so that designers can build in resiliency. We seek to identify software security issues that adversaries use to penetrate network defenses. LTR maintains a commercial implementation of a 4G/5G network to equip the cyber research professional with the tools necessary to develop and validate novel methodologies for the protection of modern mobile telecommunications networks.
SF.20.21.B0007: Assurance and Resilience through Zero-Trust Security
Zero-trust cybersecurity is a security model that requires rigorous verification for any user or device requesting access to computing or network resources. In the context of cloud security, zero trust means that no one is trusted by default from inside or outside commercial and public cloud systems, including the Cloud Service Provider (CSP). This security model incorporates several expensive approaches and complex technologies that rely on public-key machinery, zero-knowledge proofs, etc., making the design of efficient and scalable zero-trust solutions challenging and almost infeasible in practice.
This research topic seeks novel approaches to: 1) enabling warfighters to efficiently and securely outsource private data and computation, with mission assurance and verifiable correctness of results, to untrusted commercial clouds without relying on a Trusted Third Party (TTP); 2) improving the resilience and robustness of the Air Force's mission-critical applications by effectively distributing them across multiple heterogeneous CSPs to prevent a single point of failure, avoid technology/vendor lock-in, and enhance availability and survivability; 3) optimizing the trade-off between strict zero-trust security and rigid performance requirements for time-sensitive mission applications. Research topics of interest include, but are not limited to:
- Decentralized identity and access control mechanisms and protocols, including those that support anonymity.
- Novel application of existing cryptographic primitives and protocols to zero-trust computing paradigms.
- Designing cross-cloud, CSP-independent, privacy-aware protocols and frameworks that operate in the presence of emerging zero-trust security mechanisms, enable secure and transparent migration of applications and data across heterogeneous CSPs, and facilitate multi-objective optimization in the security-mission trade space.
- End-to-end data protection, concurrency and consistency for multi-user multi-cloud environments.
The development of new cryptographic primitives or protocols is not of interest under this topic.
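Where zero-trust outsourcing demands verifiable correctness of results without a TTP, a classical building block is probabilistic verification. The sketch below is illustrative only (the matrices and round count are invented): it uses Freivalds' algorithm to check a matrix product claimed by an untrusted server in O(n^2) time per round, far cheaper than recomputing the O(n^3) product.

```python
import random

random.seed(7)  # for a reproducible demonstration

def mat_vec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically verify that C == A @ B without recomputing it.
    Each round multiplies by a random 0/1 vector (cost O(n^2));
    a wrong C is accepted with probability at most 2**-rounds."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        # A(Br) == Cr holds for all r iff C = AB (with high probability)
        if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
            return False
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]   # correct product
C_bad  = [[19, 22], [43, 51]]   # corrupted result from an untrusted server
```

The client never trusts the server's arithmetic, yet verification stays asymptotically cheaper than the outsourced computation itself.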
SF.20.21.B0006: Trapped Ion Quantum Networking and Heterogeneous Quantum Networks
Quantum networking may offer disruptive new capabilities for quantum communication, such as being able to teleport information over a quantum channel. This project focuses on the memory nodes and interconnects within a quantum network. Trapped ions offer a near-ideal platform for quantum memory within a quantum network due to the ability to hold information within the long-lived ground states and the exquisite control possible over both the internal and external degrees of freedom. This in-house research program focuses on building quantum memory nodes based on trapped ions, operating a multi-node network with both photon-based connections to communicate between the network nodes and phonon-based operations for quantum information processing within individual network nodes. In addition, the work focuses on interfaces to other qubit technologies (superconducting qubits, integrated photonic circuits, etc.) for heterogeneous network operation, quantum frequency transduction, and software-layer control. This work will be performed both in the in-house research laboratories at AFRL and the nearby Innovare Advancement Center.
SF.20.21.B0004: Secure Function Evaluation for Time-Critical Applications
Secure Function Evaluation (SFE) enables two participants (sender and receiver) to securely compute a function or exchange data without disclosing their respective inputs. The Garbled Circuit (GC) technique has been proposed to address this problem. State-of-the-art solutions for implementing GC employ Oblivious Transfer (OT) algorithms and/or Predicate-Based Encryption (PBE) based on Learning With Errors (LWE). The performance of these solutions is not practical for time-critical applications. Existing GC-based SFE protocols have not been explored for applications with multiple participants in controlled/managed settings (i.e., event-based or publish-subscribe systems) where the circuit construction can be simplified to a limited set of gates (e.g., AND, OR, and/or NAND) while excluding the inherent complexity of arithmetic operations (addition and multiplication). Areas of consideration under this research topic include developing and implementing time-constrained cryptographic protocols using universal GC for a given application type with relaxed constraints.
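For intuition, a single garbled AND gate can be sketched with hash-derived one-time pads. This is a pedagogical toy only: the hash-based "encryption," the brute-force row trial, and the evaluator's access to the output-label list are simplifications that a real GC protocol (with point-and-permute, OT for input labels, etc.) would not allow.

```python
import hashlib
import os
import random

def H(*keys):
    """Hash-based key derivation for the toy garbling scheme."""
    h = hashlib.sha256()
    for k in keys:
        h.update(k)
    return h.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    """Garble one AND gate: random 32-byte labels stand for bit 0/1 on each
    wire; each table row hides the correct output label under the pad
    derived from the pair of input labels that selects it."""
    labels = {w: (os.urandom(32), os.urandom(32)) for w in ("a", "b", "out")}
    table = []
    for bit_a in (0, 1):
        for bit_b in (0, 1):
            out_label = labels["out"][bit_a & bit_b]
            table.append(xor(H(labels["a"][bit_a], labels["b"][bit_b]), out_label))
    random.shuffle(table)  # hide which row corresponds to which input pair
    return labels, table

def evaluate(table, label_a, label_b, out_labels):
    """Evaluator tries each row; only the matching row decrypts to a known
    output label (a wrong row yields 32 random-looking bytes)."""
    pad = H(label_a, label_b)
    for ct in table:
        candidate = xor(ct, pad)
        if candidate in out_labels:
            return out_labels.index(candidate)  # recovered output bit
    raise ValueError("no row decrypted")
```

Holding one label per input wire, the evaluator learns only the gate's output bit, never the other party's input bit.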
SF.20.21.B0003: Automated Threat Modeling and Attack Prediction for Cloud Computing Systems and Software
Traditional threat models for a given piece of software are typically developed from the software architecture diagrams and its subsystems (network topology), which is very effective when the applications remain within organizational boundaries. However, cloud-native software applications evolve over time by following Continuous Integration and Continuous Deployment (CI/CD) best practices to support ever-changing business demands, aided by the dynamicity of the underlying cloud infrastructure deployment service models (IaaS, PaaS, SaaS, FaaS, VMs, containers). Thus, the prescribed CI/CD architecture no longer reflects the descriptive architecture of the software (i.e., the initial Docker image) and all its interacting subsystems (services, Docker swarms, etc.), rendering existing threat modeling and attack prediction techniques ineffective.
Areas of consideration under this research topic include but are not limited to:
1) Developing a sound theoretical foundation for modelling threats on dynamic cloud computing systems and a practical Automated Threat Modelling Framework (ATMF).
2) Practical machine learning models for attack prediction driven by ATMF data sets.
SF.20.21.B0002: Analyzing the Imposition of Multi-Domain Actions for Optimizing Operational Effects
A key concern for the Air Force and Joint Force is the ability to leverage multi-domain (MD) operations. There is a fundamental lack of understanding about how to measure, analyze, and quantify the imposition of MD actions to maximize operational effects. In this research, we seek to examine decision processes, given some mathematical representation, in which we impose prescribed courses of action. The purpose is to understand the influencing of behaviors, reshaping of expected traversals, and maximizing of desired outcomes. Our goal is to investigate novel approaches and explore various modes of analysis that aid in the development of a scheme for classifying the sets of actions into varying levels or notions of complexity. Some areas of interest include stochastic optimization, game theory, complexity theory, graph theory, and complex adaptive systems.
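As a minimal illustration of quantifying the effect of an imposed action on a decision process, the toy below compares the value of a two-state Markov decision process under free optimal play and under a prescribed course of action. The states, rewards, and forced action are invented for illustration; the gap between the two values is one simple measure of the imposition's operational cost.

```python
def value_iteration(P, R, gamma=0.9, iters=200, forced=None):
    """Value iteration on a finite MDP.
    P[s][a] -> list of (probability, next_state); R[s][a] -> reward.
    `forced` optionally maps a state to a prescribed (imposed) action."""
    n = len(P)
    V = [0.0] * n
    for _ in range(iters):
        V_new = []
        for s in range(n):
            q = [R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                 for a in range(len(P[s]))]
            if forced and s in forced:
                V_new.append(q[forced[s]])   # imposed course of action
            else:
                V_new.append(max(q))         # free optimal choice
        V = V_new
    return V

# Two states, two actions: action 0 stays in place, action 1 moves.
P = [[[(1.0, 0)], [(1.0, 1)]],
     [[(1.0, 1)], [(1.0, 0)]]]
R = [[1.0, 0.0],
     [2.0, 0.0]]

V_free = value_iteration(P, R)                      # optimal: V[0] = 18
V_imposed = value_iteration(P, R, forced={0: 0})    # forced to stay: V[0] = 10
```

Imposing "stay" in state 0 reshapes the expected traversal and cuts the achievable value there from 18 to 10, a directly computable measure of the imposed action's effect.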
SF.20.21.B0001: Next Generation Wireless Networking: 5G Mesh Networking
5G networks have introduced innovative concepts such as Non-Terrestrial Networks (NTN), Integrated Access and Backhaul (IAB), virtual Radio Access Networks (vRAN) and Network Slicing (NS). These concepts make it possible, within a unified communication infrastructure, to provide multiple customized networks over terrestrial and aerial domains.
This topic seeks innovative research on how 5G and its enabling technologies – virtual Radio Access Networks (vRAN), Integrated Access and Backhaul (IAB), Software Defined Networking (SDN), Network Function Virtualization (NFV), and cloud infrastructure, along with network management and orchestration – can support dynamic, resilient local and global communications. For example, high-level network control makes it possible for network designers to specify more complex tasks that integrate many disjoint network functions (e.g., security, resource management, and prioritization) into a single control framework, which enables: (1) robust and agile network reconfiguration and recovery; (2) flexible network management and planning; and, in turn, (3) improvements in network efficiency and controllability.
SF.20.20.B0010: Multi-sensor and Multi-modal Detection, Estimation and Characterization
The Air Force mission space is varied and complex, involving many sensing modalities from which to understand and derive actionable intelligence. Interfering sources, low-probability-of-intercept signals and dynamic scenes all conspire to degrade the Air Force’s ability to derive accurate situational awareness in a timely fashion. Furthermore, legacy sensing systems typically provide stovepiped, human-interpretable intelligence that may have information missing due to processing; that information would likely be more valuable if considered collectively with other sensing data further up the sensor processing stream (upstream sensor data fusion). The fundamental research of interest under this topic includes areas such as multi-modal target association/fusion; multi-sensor/modal detection, tracking, and characterization; and multi-sensor selection, parameter optimization and location for improved sensor fusion performance, exploiting fusion results to actively tune sensors to improve the solution. We are interested in advancements that can come from a variety of methods: Bayesian inference, geometric algebra, machine learning and information theory. Trade-offs include computational complexity, communication requirements, and the balancing of smart computational nodes vs. centralized vs. distributed processing. The overall research goal is to leverage all available signals and data from the sensed environments and domains to generate a cohesive situational awareness of the complete mission space.
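As a small example of the Bayesian flavor of multi-sensor fusion, two independent Gaussian measurements of the same quantity fuse by precision weighting, and the fused estimate is always at least as sharp as the better sensor. The sensor names and numbers below are hypothetical.

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    """Bayesian fusion of two independent Gaussian measurements of the
    same quantity: the fused mean is the precision-weighted average, and
    the fused precision is the sum of the individual precisions."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# Hypothetical coarse and fine estimates of the same target range (km):
mu_fused, var_fused = fuse_gaussians(10.0, 4.0, 12.0, 1.0)
# -> mean 11.6 (pulled toward the more precise sensor), variance 0.8
```

The same precision-weighting idea generalizes to Kalman-style multi-modal track fusion in higher dimensions.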
SF.20.20.B0009: Emerging 5G Technologies for Military Applications
Emerging 5G communications and network technologies can be leveraged to enhance military communication capabilities. In particular, 5G-enabling technologies are envisioned to provide higher data rates, lower latency, lower power consumption, security enhancements and ubiquitous access including non-terrestrial links. The three major use case domains of 5G—enhanced mobile broadband (eMBB), ultra-reliable low latency communication (URLLC) and massive machine type communications (mMTC)—provide the opportunity to harness commercial technology for future AF use cases such as smart bases, self-driving vehicles, augmented and virtual reality technologies for training, and dynamic spectrum management and sharing technologies to facilitate coexistence of commercial and military spectrum dependent systems (SDSs). The 5G research areas of interest for this topic include but are not limited to:
* Dynamic spectrum management and sharing with unlicensed and shared bands
* Waveform design for enhanced security and high mobility
* Small cell mission scenarios
* AI and ML enhanced/incorporated spectrum management, dynamic sensing and sharing
* Smart base/smart port use cases with small cell, V2X, low power and localization technologies
* Advanced physical layer techniques such as carrier aggregation, full-duplex and massive MIMO
* Beamforming and adaptive nulling for interference tolerance and spectrum sharing/co-existence
* Millimeter-wave and terahertz band communications
SF.20.20.B0007: Persistent Sensor Coverage for Swarms of UAVs
The deployment of many airborne wireless sensors is being made easier by technological advances in networking, smaller flight systems, and miniaturization of electromechanical systems. Mobile wireless sensors can be utilized to provide remote, persistent surveillance over regions of interest, where quality is measured as the sum of the coverage and resolution of surveillance that the network can provide. The purpose of this research is to provide efficient allocation of mobile wireless sensors across a region to maintain continuous coverage under constraints of flight speed and platform endurance. We seek methods for structuring constrained optimization problems to develop insightful solutions that will maximize persistent coverage and provide analytical bounds on performance for a variety of platform configurations.
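One way to structure such an allocation is as a coverage-maximization problem with diminishing returns, solved greedily. The sketch below is illustrative only: the region values and the halving overlap model are invented, but greedy selection on diminishing-returns (submodular) objectives of this kind carries a classical (1 - 1/e) approximation guarantee, which is one route to the analytical bounds sought here.

```python
def greedy_allocate(values, n_uavs):
    """Assign UAVs one at a time to the region with the largest marginal
    coverage gain. As a toy model of overlapping sensor footprints, the
    marginal gain of each additional UAV on a region halves."""
    counts = {region: 0 for region in values}
    for _ in range(n_uavs):
        best = max(values, key=lambda r: values[r] / 2 ** counts[r])
        counts[best] += 1
    return counts

# Hypothetical per-UAV coverage values for three regions of interest:
alloc = greedy_allocate({"north": 8.0, "south": 4.0, "east": 1.0}, 4)
```

Endurance and flight-speed constraints would enter as feasibility checks on each candidate assignment; the greedy structure is unchanged.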
SF.20.20.B0003: Assurance in Mixed-Trust Cyber Environments
Operations in and through cyberspace typically depend on many diverse components and systems that have a wide range of individual trust and assurance pedigrees. While some components and infrastructures are designed, built, owned and operated by trusted entities, others are leased, purchased off-the-shelf, outsourced, etc., and thus cannot be fully trusted. However, this heterogeneous collection of mixed-trust components and infrastructures must be composed in such a way as to provide measurable and dependable security guarantees for the information and missions that depend on them.
This research topic invites innovative research leading to the ability to conduct assured operations in and through a cyberspace composed of many diverse components with varying degrees of trust. Topics of interest include, but are not limited to:
- Novel identity and access control primitives, models, and mechanisms.
- Secure protocol development and protocol analysis.
- Research addressing unique concerns of cyber-physical and wireless systems.
- Security architectures, mechanisms and protocols applicable to private, proprietary, and Internet networks.
- Embedded system security, including secure microkernel (e.g., seL4) research and applications.
- Zero-trust computing paradigms and applications.
- Legacy and commercial system security enhancements that respect key constraints of the same, including cost and an inability to modify.
- Secure use of commercial cloud infrastructure in ways that leverage their inherent resilience and availability without vendor lock-in.
- Novel measurement algorithms and techniques that allow rapid and accurate assessment of operational security.
- Obfuscation, camouflage, and moving target defenses at all layers of networking and computer architecture.
- Attack- and degradation-recovery techniques that rapidly localize, isolate and repair vulnerabilities in hardware and software to ensure continuity of operations.
- Design of trustable systems composed of both trusted and untrusted hardware and software.
- Non-traditional approaches to maintaining the advantage in cyberspace, such as deception, confusion, dissuasion, and deterrence.
SF.20.20.B0001: Learning an Algorithm with Provable Guarantees
The purpose of this research is to address some theoretical issues related to learning algorithms with provable guarantees for problem-solving and decision-making. In practice, machine learning techniques are often optimized over families of parameterized algorithms. The parameters are tuned based on "typical" domain-specific problem instances. However, the selected algorithm seldom yields a performance guarantee (based on some metric). We wish to explore the notion of casting the algorithm selection problem as a learning problem. Our goal is to reason appropriately, develop new paradigms, move theory towards the state-of-the-art, and to solve computationally challenging problems that frequently arise in tactical environments. Some research areas of interest include computational complexity, algorithms, artificial intelligence, machine learning, combinatorics and discrete mathematics, information theory, and statistical learning theory.
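The flavor of casting algorithm selection as a learning problem can be sketched with the classical ski-rental family: a one-parameter algorithm (rent until a threshold, then buy) whose threshold is tuned on "typical" training instances. The instance sets and costs below are invented for illustration; note that the learned threshold fits the training distribution but, exactly as the topic observes, carries no worst-case guarantee by itself.

```python
def skirental_cost(n_days, threshold, buy_cost=10):
    """One-parameter algorithm family: rent (cost 1/day) until `threshold`
    days have passed, then buy outright (cost `buy_cost`)."""
    if n_days < threshold:
        return n_days            # rented every day, never bought
    return threshold + buy_cost  # rented up to the threshold, then bought

def learn_threshold(instances, buy_cost=10):
    """Data-driven algorithm selection: choose the threshold whose total
    cost over 'typical' training instances is smallest."""
    candidates = range(0, buy_cost + 1)
    return min(candidates,
               key=lambda t: sum(skirental_cost(n, t, buy_cost) for n in instances))

short_season = [1, 2, 2, 3]      # mostly short usage -> learn to keep renting
long_season = [30, 40, 25, 50]   # long usage -> learn to buy immediately
```

The learned parameter flips between regimes depending on the training distribution, which is precisely the behavior whose generalization and guarantees this topic seeks to formalize.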
SF.20.19.B0017: Optical Interconnects
Our main area of interest is the design, modeling, and building of interconnect devices for advanced high-performance computing architectures, with an emphasis on interconnects for quantum computing. Current research focuses on interconnects for quantum computing, including switching of entangled photons for time-bin entanglement.
Quantum computing is currently searching for a way to make meaningful progress without requiring a single computer with a very large number of qubits. The idea of quantum cluster computing, in which interconnected modules each contain a more manageable number of qubits, is attractive for this reason. The qubits and quantum memory may be fashioned from dissimilar technologies, and interconnecting such clusters will require pioneering work in the area of quantum interconnects. The communication abilities of optics, as well as the ability of optics to determine the current state of many material systems, make optics a prime candidate for these quantum interconnects.
SF.20.19.B0015: Cyber Security Research and Applications for Cyber Defense
Cyberspace remains beneficial and a technological advantage only while its vulnerabilities are kept under control. Cyber Defense is concerned with the protection and preservation of critical information infrastructures available in cyberspace. The Air Force’s mission to fly and fight in Air, Space, and Cyberspace involves technologies that provide information to the warfighters anywhere, anytime, and for any mission. This far-reaching endeavor will necessarily span multiple networks and computing domains not exclusive to the military.
Economics, also known as the study of resource allocation problems, has always been a factor in engineering, and it is sought here to provide answers for managing large-scale information systems. The introduction of mobile agents, autonomy, computational economies, pricing mechanisms, and game-theoretic mechanisms will strive to make such a system exhibit the same phenomena as a real economy: it will admit arbitrary scale, heterogeneity of resources, decentralized operation, and tolerance in the presence of vulnerability.
This technology area seeks to: 1) protect our own information space through assurance; 2) enable our systems to automatically interface with multi-domain systems through information sharing, with the ability to deal with unanticipated states and environments; 3) provide the means to circumvent attacks by learning new configurations and understanding vulnerabilities before their exploitation; and 4) reconstitute systems, data, and information from different domains rapidly to avoid disruptions.
Fundamental research areas of interest within this topic include (cryptographic techniques are not of interest under this research opportunity):
•Design of systems composed of both trusted and untrusted hardware and software; study of virtualization of hardware components and platforms with configurability on-the-fly.
•Mathematical concepts and distinctive mechanisms that enable systems to automatically continue correct operation in the presence of unanticipated input or an undetected bug or vulnerability.
•Examination of assumptions, mechanisms, and implementations of security modules with the capability to rewrite themselves without human interaction in the presence of unwanted/unanticipated configurations.
•Information theory and category theory describing interactions of systems of systems that lead to better consideration of their emergent behaviors during attack and reconstitution; models used to predict system responses to malware and coordinated attacks, as well as analyses of self-healing systems.
SF.20.19.B0010: Random Projection Networks
One of the characteristic features of artificial neural networks (ANN) is that all the synaptic connections (weights) between neurons are adjusted during training of the neural network, typically by back propagation. While straightforward in software, hardware realizations of ANNs have proven challenging in part because of the requirement for individually tunable weights. However, for certain classes of problems, this approach is overkill. Rather, fixed random weights in an ANN can be used to project data into a sufficiently high-dimensional space such that training only a subset of weights, typically the output layer, is necessary to attain state-of-the-art accuracies. For example, reservoir computing, a type of recurrent neural network where only the output layer weights are trained, has been used for speech recognition and RF non-linear channel modelling. From a hardware perspective such networks are easier to engineer, train, and field because of a) the relaxed hardware tolerances and b) the reduced training requirements. Random projection networks (RPN) include echo state networks (ESN), liquid state machines (LSM), extreme learning machines (ELM), random filters for convolutional neural networks (CNN), vector symbolic architectures/hyper-dimensional computing, and stochastic computing. This research effort encompasses mathematical formalisms, hardware characterization, network modelling, and hardware RPN development, with special emphasis on the lattermost.
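A minimal random projection network can be sketched in a few lines: a fixed random tanh projection with only the linear readout trained, here on XOR (which no linear model solves in the raw input space). The layer size, learning rate, and data are illustrative choices, not a prescribed architecture.

```python
import math
import random

random.seed(0)  # reproducible random projection

def make_rpn(n_in, n_hidden):
    """Fixed random hidden weights (plus a bias column): never trained."""
    return [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hidden)]

def hidden(W, x):
    """Random nonlinear projection of the input into a higher-dimensional space."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x + [1.0]))) for row in W]

def train_readout(W, data, lr=0.05, tol=1e-4, max_epochs=20000):
    """Train ONLY the linear output layer with LMS updates; the random
    projection stays fixed, as in ELM/reservoir-style networks."""
    beta = [0.0] * (len(W) + 1)
    for _ in range(max_epochs):
        sq_err = 0.0
        for x, y in data:
            h = hidden(W, x) + [1.0]
            err = sum(b * hi for b, hi in zip(beta, h)) - y
            sq_err += err * err
            beta = [b - lr * err * hi for b, hi in zip(beta, h)]
        if sq_err < tol:
            break
    return beta

def predict(W, beta, x):
    return sum(b * hi for b, hi in zip(beta, hidden(W, x) + [1.0]))

# XOR is not linearly separable in input space, but becomes so after the
# random projection, so training the readout alone suffices.
xor_data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
            ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
W = make_rpn(2, 20)
beta = train_readout(W, xor_data)
```

Because the projection is fixed, a hardware realization would only need tunable weights in the single readout layer, which is the engineering advantage the topic highlights.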
SF.20.19.B0009: Exploring Relationships Among Ethical Decision Making, Computer Science, and Autonomous Systems
The increased reliance on human-computer interaction, coupled with dynamic environments where outcomes and choices are ambiguous, creates opportunities for ethical decision-making situations with serious consequences, where errors could cost lives. We are developing approaches that make autonomous system decisions more apparent to their users, and capabilities for a system to tailor the amount of automation based on the situation and input from the decision maker. This allows for dynamically adjustable human/machine teaming, addressing C2 challenges of Autonomous Systems, Manned/Unmanned Teaming, and Human Machine Interface and Trust. The work focuses on developing a system for modeling and supporting human decision making during critical situations, providing a mechanism for narrowing choice options for ethical decisions faced by military personnel in combat/non-combat environments.
We propose developing software (an “ethical advisor”) to identify and provide interventions in situations where ethical dilemmas arise and quick, reliable decision making is efficacious. Our unique approach combines behavioral data and model simulation in the development of an interactive model of decision making that emphasizes the human element of the decision process. In the long term, understanding the fundamental aspects of human ethical decision making will provide key insights in designing fully autonomous computational systems with decision processes that consider ethics. As autonomous systems emerge and military applications are identified, we will work to provide verifiable assurance that our autonomous systems are making decisions that reflect USAF moral and ethical values. The first step towards realizing this vision is focusing on human decision processes and clarifying those values in a quantifiable model. The team has developed an ethical framework and preliminary model of ethical decision making that will be more fully developed with the Air Force Academy (AFA) and Air University (AU). In Year 1, we will articulate the individual psychological characteristics and situational factors impacting ethical dilemmas and develop realistic ethical dilemmas and situations. These scenarios will use computational agents employing AI and military personnel, requiring ethical decisions to be made by personnel in combat and non-combat environments. In Year 2, we will develop the Ethical Advisor prototype, test the individual psychological characteristics and situational factors, refine the scenarios, and establish and implement collaborations across different commands/services. In Year 3, we will test and integrate the model and Ethical Advisor into a mission system, and conduct joint war game testing.
We are seeking individuals from a variety of educational disciplines (Psychology, Philosophy, Computer Science) with experience in data gathering and summarization techniques, programming, and testing. The gathered data would be used for developing algorithms and programming to begin enabling software to mimic human decision making in complex ethics-laden situations.
SF.20.19.B0008: Digitizing the Air Force for Multi-Domain Command and Control (MDC2)
This in-house research effort focuses on working on the Android Tactical Assault Kit (ATAK), which is an extensible, network-centric Moving Map display with an open Application Programming Interface (API) for Android devices developed by Air Force Research Laboratory (AFRL). ATAK provides a mobile application environment where warfighters can seamlessly exchange relevant Command and Control (C2), Intelligence Surveillance and Reconnaissance (ISR), and Situational Awareness (SA) information for domestic and international operations. This capability is key to the Department of Defense’s (DoD’s) goal of digitizing the Air Force for MDC2 efforts, because it serves as the backbone for connecting numerous platforms, people, and information sources.
SF.20.19.B0006: Cyber Defense through Dynamic Analyses
Modern systems are generally a tailored and complex integration of software, firmware and hardware. Additional complexity arises when these systems are further characterized by machine learning algorithms, with recent emphasis on deep learning methods. Couple this with the limited but “sufficient” testing in the development phases of the system, and the end result is all too often an incompletely characterized set of system responses to stimuli not of concern in the original tests.
We are interested in new approaches to system testing for security and vulnerabilities that would otherwise go undetected. In particular, modern test methods such as fuzz testing (or fuzzing) can cover more scenario boundaries by feeding otherwise-invalid data to network protocols, application programming interface calls, files, etc. These invalid data better ensure that a proper set of vulnerability analyses is performed to prevent exploits.
Further, we are interested in leveraging AI and machine learning techniques combined with these modern methods such as fuzzing, to more completely perform system tests and vulnerability analyses.
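A mutation-based fuzzer is easy to sketch. The toy parser below, with a deliberately planted missing-validation bug, is invented for illustration; the fuzzer mutates a known-valid input and records the distinct failure classes it surfaces.

```python
import random

def parse_record(data: bytes):
    """Toy parser under test: expects b'LEN:<n>:' followed by payload bytes.
    It deliberately under-validates its length field."""
    header, _, rest = data.partition(b":")
    if header != b"LEN":
        raise ValueError("bad magic")
    n_str, _, payload = rest.partition(b":")
    n = int(n_str.decode())              # crashes on non-numeric length fields
    if n > len(payload):
        raise IndexError("length field exceeds payload")  # the planted bug class
    return payload[:n]

def mutate(seed: bytes, rng):
    """Flip one random byte of the seed input."""
    data = bytearray(seed)
    data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(target, seed, trials=10000, rng=None):
    """Mutation-based fuzzing: perturb a valid input many times and record
    the distinct exception types the target raises."""
    rng = rng or random.Random(1234)
    crashes = set()
    for _ in range(trials):
        try:
            target(mutate(seed, rng))
        except Exception as e:
            crashes.add(type(e).__name__)
    return crashes

found = fuzz(parse_record, b"LEN:5:hello")
```

An AI/ML-guided fuzzer, as sought under this topic, would replace the uniform `mutate` with a learned mutation policy that steers trials toward unexplored failure classes.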
SF.20.19.B0005: Methods for Adapting Pre-Trained Machine Learning Models
Numerous machine learning algorithms have recently made remarkable advances in accuracy, due in part to more standardized large datasets. Yet, designing and training an algorithm for large datasets can be time-consuming, and there may be other tasks or activities for which less data exists. There is a large body of work showing the performance benefits of fusing models for the same task. Hence, the ability to adapt and fuse pre-trained models has the advantages of fewer data requirements and decreased computing resources.
The purpose of this topic will be to develop novel methods for fusing and building ensembles of pre-trained machine learning models that are task agnostic and can more closely mimic the agility that humans possess in the learning process. This topic is particularly interested in exploring and evaluating architectures and methods that involve the fusion of Convolutional Neural Networks (CNNs) or other deep learning methods. CNNs have been one class of learning algorithm that has greatly improved accuracies over numerous application domains, including computer vision, text analysis, and audio processing. Additionally, another area of interest includes methods that explain the numerical impacts of training examples on the models being learned. In other words, novel methods that conceptually describe what an algorithm is learning. Both being able to explain the impact of specific examples on the learning process and building novel algorithms and architectures for fusion of pre-trained models will support the realization of more adaptable learning methods.
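The simplest fusion of pre-trained models is output averaging (soft voting): no model is retrained, only their class-probability outputs are combined. The three stand-in "models" below are hypothetical functions, not real pre-trained networks.

```python
def soft_vote(models, x):
    """Fuse pre-trained models by averaging their class-probability outputs,
    then predicting the class with the highest averaged probability."""
    probs = [m(x) for m in models]
    n_classes = len(probs[0])
    avg = [sum(p[c] for p in probs) / len(models) for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

# Three stand-in "pre-trained" binary classifiers (hypothetical): each maps
# a scalar feature to [P(class 0), P(class 1)].
m1 = lambda x: [0.6, 0.4] if x < 0.5 else [0.3, 0.7]
m2 = lambda x: [0.8, 0.2] if x < 0.4 else [0.4, 0.6]
m3 = lambda x: [0.4, 0.6]   # weak, nearly uninformative model
```

More sophisticated fusion methods sought under this topic (learned combination weights, feature-level fusion of CNN layers) generalize this same interface: pre-trained models in, one fused prediction out.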
SF.20.19.B0004: Trust in Machine Learning
The need for increased levels of autonomy has significantly risen within the Air Force. Thus, machine learning tools that enable intelligent systems have become essential. However, analysts and operators are often reluctant to rely on these tools due to a lack of understanding – treating machine learning as a black box that introduces significant mission risk. Although one may hope that improving machine learning performance would address this issue, there is in fact a trade-off: increased effectiveness often comes at the cost of increased complexity. Increased complexity then leads to a lack of transparency in understanding machine learning methods. In particular, it becomes unclear when such methods will succeed or fail, and why they will fail. This limits the adoption of intelligent systems.
This topic focuses on the test, evaluation, validation, and verification (TEVV) of machine learning models to increase model transparency and foster higher user reliance. Of particular interest are techniques that enable the end users of machine learning systems to lead the TEVV process: quantifying a model’s robustness to adversarial attacks, detecting out-of-distribution samples, generating “unit tests”, efficiently searching for failure modes, providing explanations of decisions, and more. Other topics related to TEVV of machine learning models will also be considered.
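One widely used baseline for the out-of-distribution detection mentioned above is thresholding the maximum softmax probability: a near-uniform output means the model is guessing. The logits and threshold below are illustrative values only.

```python
import math

def softmax(logits):
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def flags_ood(logits, threshold=0.7):
    """Maximum-softmax-probability baseline: flag an input as
    out-of-distribution when the model's top-class confidence is low."""
    return max(softmax(logits)) < threshold

in_dist_logits = [8.0, 1.0, 0.5]   # confident prediction -> in-distribution
ood_logits = [1.2, 1.0, 1.1]       # near-uniform output -> the model is guessing
```

A user-led TEVV process could run exactly this kind of check over operational inputs to decide when a model's prediction should not be trusted.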
SF.20.19.B0003: Blockchain-based Information Dissemination Across Network Domains
While cryptocurrency research has been around for decades, Bitcoin has gained significant adoption in recent years. Beyond being an electronic payment mechanism, Bitcoin’s underlying building block, known as the blockchain, has profound implications for many other computer security problems beyond cryptocurrencies, such as the Domain Name System, Public Key Infrastructure, file storage and secure document time stamping. The purpose of this topic is to investigate blockchain technologies and develop decentralized, highly efficient information dissemination methods and techniques for sharing and archiving information across network domains via untrusted/insecure networks (the internet) and devices.
Areas of consideration include but are not limited to: security design and analysis of state-of-the-art open-source blockchain implementations (e.g., IOTA), developing the theoretical foundation of blockchain-based techniques for different application domains, block editing, and smart contracts in such application domains.
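The hash-chaining that gives a blockchain its tamper evidence can be sketched in a few lines. This is a toy: it has no proof-of-work, consensus, or networking, which are exactly the layers a real dissemination system over untrusted networks would add.

```python
import hashlib
import json

def block_hash(block):
    """Hash the block's canonical (key-sorted) JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash}

def build_chain(records):
    """Chain each record to its predecessor by embedding the prior hash."""
    chain = [make_block("genesis", "0" * 64)]
    for rec in records:
        chain.append(make_block(rec, block_hash(chain[-1])))
    return chain

def verify_chain(chain):
    """Any edit to an earlier block breaks every later prev_hash link."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = build_chain(["msg-1", "msg-2", "msg-3"])
```

Editing any archived record silently invalidates the rest of the chain, which is the integrity property that makes blockchains attractive for information dissemination over insecure networks.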
SF.20.17.B0010: Wireless Innovations at Spectrum Edge: mm-Waves, THz Band and Beyond
Today’s increasing demand for higher data rates and congestion in conventional RF spectrum have motivated research and development in higher frequency bands such as millimeter-wave, terahertz band and beyond. In higher frequency bands such as millimeter wave and terahertz, where channel properties are affected by mobility and atmospheric conditions, an agile system with a flexible, resilient architecture and the ability to adapt to the changing environment is required. To that end, we are interested in both foundational and applications-focused research to meet the demands of next generation wireless systems.
For foundational research on wireless communications at the spectrum edge, we would like to address the technical challenges in both accessing and exploiting the spectrum. We are interested in advanced technologies in architecture, waveform and signal processing that enable access to emerging spectrum bands not traditionally used for wireless communications. We are also interested in the radio architectures, system designs, waveforms, algorithms and protocols that will let us exploit the abundant bandwidth that the spectrum edge offers for future AF wireless applications. Examples include but are not limited to:
* Novel waveform designs that are robust to the high atmospheric absorption loss.
* Use of novel relay architectures such as reconfigurable intelligent surfaces to solve the blockage problem at higher frequency bands.
* Use of data science tools in machine learning to construct meaningful datasets from the limited RF data collected at these frequency bands.
We are also interested in applications-focused research that specifically calls for the use of frequency bands at the spectrum edge in the proposed applications. Examples include but are not limited to high-bandwidth links for next-generation mobile communication systems, and Air Force and commercial applications that consider converged sensing and communications systems.
SF.20.17.B0008: Uncertainty Propagation for Space Situational Awareness
One of the significant technical challenges in space situational awareness is the accurate and consistent propagation of uncertainty for a large number of space objects governed by highly nonlinear dynamics with stochastic excitation and uncertain initial conditions. Traditional uncertainty propagation methods, which rely on linearizing the dynamics about a nominal trajectory, often break down under a high degree of uncertainty or on long time scales. In addition, the data uncertainty is usually poorly characterized, and the data may be sparse or incomplete; sensor noise is also often poorly modeled and oversimplified. Many recent developments which attempt to address these issues, such as unscented Kalman filters, Gaussian sum filters, and polynomial chaos filters, tend to be ad hoc approaches with limited foundational rigor. The objective of this topic is to research accurate, computationally efficient, and rigorously validated methods for uncertainty propagation for dynamical systems which address the nonlinear nature of the underlying dynamics and the high degree of uncertainty and lack of completeness in the data. Of interest are approaches which leverage methods of modern dynamical systems theory, the theory of stochastic differential equations, and unique methods for numerically approximating solutions to the Fokker-Planck equation.
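The failure mode of linearized propagation is easy to demonstrate: for a nonlinear map, the first-order mean misses the curvature-induced bias that brute-force Monte Carlo recovers. The map and moments below are a textbook scalar example, not a space-object model; for x ~ N(mu, sigma^2) pushed through f(x) = x^2, the exact mean is mu^2 + sigma^2, while linearization reports only mu^2.

```python
import random

random.seed(42)  # reproducible sampling

def propagate_linear(f, dfdx, mu, sigma):
    """First-order (EKF-style) propagation: mean through f, variance scaled
    by the local slope. Curvature is ignored entirely."""
    return f(mu), (dfdx(mu) * sigma) ** 2

def propagate_monte_carlo(f, mu, sigma, n=100_000):
    """Brute-force propagation: push Gaussian samples through f and
    estimate the moments of the resulting distribution."""
    samples = [f(random.gauss(mu, sigma)) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, var

f = lambda x: x * x    # strongly nonlinear when sigma is large
mu, sigma = 1.0, 0.5   # exact pushed-forward mean: mu**2 + sigma**2 = 1.25

lin_mean, _ = propagate_linear(f, lambda x: 2 * x, mu, sigma)   # reports 1.0
mc_mean, _ = propagate_monte_carlo(f, mu, sigma)                # near 1.25
```

Unscented, Gaussian sum, and polynomial chaos methods all aim to capture this missing bias at far lower cost than Monte Carlo, which is where the rigor sought by this topic becomes essential.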
SF.20.17.B0007: Data Driven Model Discovery for Dynamical Systems
The discovery and extraction of dynamical systems models from data is fundamental to all science and engineering disciplines, and the recent explosion in both the quantity and quality of available data demands new mathematical methods. While standard statistical and machine learning approaches are capable of addressing static model discovery, they do not capture interdependent dynamic interactions which evolve over time or the underlying principles which govern the evolution. The goal of this effort is to research methods to discover complex time-evolving systems from data. Key aspects include discovering the governing systems of equations underlying a dynamical system from large data sets and discovering dynamic causal relationships within data. In addition to model discovery, the need to understand relevant model dimensionality and dimension reduction methods is crucial. Approaches of interest include but are not limited to: model discovery based on Takens' theorem, learning library approaches, multiresolution dynamic mode decomposition, and Koopman manifold reductions.
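The learning-library idea mentioned above can be sketched in a few lines: regress observed derivatives onto a library of candidate terms, then hard-threshold small coefficients to obtain a sparse model (the core step of SINDy-style methods). The toy system, the library, and the threshold below are all illustrative assumptions.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c2 in range(col, n + 1):
                M[r][c2] -= f * M[col][c2]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c2] * x[c2] for c2 in range(r + 1, n))) / M[r][r]
    return x

def discover(xs, dxs, library, threshold=0.1):
    """Least-squares fit of dx/dt onto the candidate library via the
    normal equations, then hard-threshold small coefficients."""
    m, n = len(xs), len(library)
    Theta = [[g(x) for g in library] for x in xs]
    AtA = [[sum(Theta[k][i] * Theta[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Atb = [sum(Theta[k][i] * dxs[k] for k in range(m)) for i in range(n)]
    coeffs = solve(AtA, Atb)
    return [c if abs(c) > threshold else 0.0 for c in coeffs]

# Synthetic data from dx/dt = -2x (so x(t) = exp(-2t)); derivatives are exact here.
ts = [0.05 * k for k in range(40)]
xs = [math.exp(-2 * t) for t in ts]
dxs = [-2 * x for x in xs]
lib = [lambda x: 1.0, lambda x: x, lambda x: x * x]
c = discover(xs, dxs, lib)  # recovers the single active term: -2 * x
```

With noisy data and numerically estimated derivatives, the thresholding step is typically iterated and the threshold chosen by cross-validation; this sketch shows only the noiseless mechanics.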
SF.20.17.B0006: Extracting Knowledge from Text
AFRL is interested in exploring recent machine learning advances via neural networks such as Recurrent Neural Networks (RNN) combined with Conditional Random Fields (CRF), Long Short-Term Memory (LSTM) networks, Convolutional Neural Networks (CNN), and potentially others for improving extraction capabilities from text. The challenge would be to set up the network in-house, replicate performance on a known dataset, and then test on internal AFRL data. Examples of information that can be extracted from text include: (1) people and groups, (2) events (who, what), (3) geo-spatio-temporal information (where, when), (4) causal explanations (why, how), (5) facilities and equipment, (6) modality and beliefs, (7) anomaly, novelty, emerging trends, (8) interrelationships, entailments, coreference of entities and events, (9) disfluencies/disjointedness, (10) dynamic, perishable, changing situations. It is preferable that the learning environment be set up via known packages such as TensorFlow or Torch.
SF.20.17.B0004: Identification of Data Extracted from Altered Locations (IDEAL)
The primary objective of this effort is to extract information from documents in real time, without the need to install additional software packages, utilize specialized development, or train agents to each source, even if the location of that data changes.
Seeking data from multiple documents is a manual, time-consuming, undocumented process which needs to be repeated every time an update, or change, to that data is requested. Automating this process is a challenge because the documents routinely change. Sometimes, the mere act of refreshing a web page changes the document as the ads cycle. Such changes are damaging to most of today's web scraping techniques. The lack of data, or inaccurate data, from failed updates during the extraction process also creates many problems when attempting to update the data, as unexpected results are returned. Extracting data from documents typically requires training or expert analysis for each source before the data can be used. This means that documents must first be identified before a developer can write a script or agent to extract data from them. A user cannot discover a document and immediately begin extracting data from it. This diverts time away from an analyst, as the analyst begins spending more time managing data as opposed to performing the intended analysis. Services that provide access to data, such as RSS feeds, Web Services, and APIs, are useful, but are not necessarily what is needed by the requestor. For example, the Top Story from a news publisher may be available as an RSS feed, whereas the birth rate of the country may not be.
This assignment will focus heavily on enhancing the web browser extension prototype. The extension will be used for routine extraction of data elements from open source web pages/documents, and be developed for the Firefox web browser. In addition to Web Browser extension development, this assignment will include adding additional functionality such as visualization enhancements, search and transposition, crawl, and a process for identifying similar data. Consideration will also include expanding to additional web browsers such as Internet Explorer.
SF.20.17.B0002: Multi-Domain Mission Assurance
In an effort to support the Air Force mission to develop Adaptive Domain Control for increasingly integrated Mission Systems, we are interested in furthering the identification of problems, and development of solutions, in increasing Full-Spectrum Mission Assurance capabilities across joint air, space, and cyberspace operations. Modern multi-domain mission planning and execution integrates tightly with cyber and information infrastructure. To effectively direct and optimize complex operations, mission participants need timely and reliable decision support and an understanding of mission impacts that are represented and justified according to their own domain and mission context. We are interested in understanding, planning, and developing solutions for Mission Assurance that supports operations requiring Mission Context across multiple domains, and spans both Enterprise and constrained environments (processing, data, and bandwidth). The following topic areas are of interest as we seek to provide solutions that are domain adaptive, mission adaptive, and provide rich, critical situational awareness provisioning to Mission Commanders, Operators, and technologies that support autonomous Mission Assurance.
• Summary, Representation, and Translation of Multi-Domain Metrics of Mission Health - Expansive Mission Assurance requires adequate mechanisms to describe, characterize, and meaningfully translate mission success criteria, mission prioritization, information requirements, and operational dependencies from one domain to another in order to react to events, deliver them appropriately to mission participants, and thereby increase the agility, responsiveness, and resiliency of ongoing missions.
• Multi-Domain Command and Control information Optimization - Currently, information can be disseminated and retrieved by mission participants through various means. Increasingly, mission participants will face choices of what, how, and where information will reach them or be pushed back to the Enterprise. Deciding between C2 alternatives in critical situations requires increased autonomy, deconfliction, qualitative C2 mission requirements, and policy differentials. We are seeking representations, services, configuration management, and policy approaches towards solving multi-domain multi-C2 operations.
• Complex Event Processing for Multi-Domain Missions - The ability to better support future missions will require increased responsiveness to cyber, information, and multi-domain mission dynamics. We are seeking mission assurance solutions that process information event logs, kinetic operation event data, and cyber situational awareness in order to take data-driven approaches to validating threats across the full-spectrum of mission awareness, and justify decisions for posturing, resource and information management, and operational adjustments for mission assurance.
• Machine Learning for Mission Support - Decreasing the cost and time resource burdens for mission-supporting technologies is critical to supporting transition to relevant domains and decreasing solution rigidity. Doing so requires advanced approaches to zero-shot learning in attempts to understand mission processes, algorithms to align active missions with disparate archival and streaming information resources, analysis of Mission SA to determine cross-domain applicability, and autonomous recognition of mission essential functions and mission relevant events. Additionally, ontologies and semantic algorithms that can provide mission context, critical mission analytics relationships, mission assurance provenance and response justifications, as well as mission authority de-confliction for intra-mission processes and role-based operational decisions, are topics that would support advanced capabilities for mission monitoring, awareness, and assurance decisions.
SF.20.14.B1072: Feature-Based Prediction of Threats
Methods have been developed to detect anomalous behaviors of adversaries as represented within sensor data, but autonomous predictions of actual threats to US assets require further investigation and development. The proposed research will investigate foundational mathematical representations and develop the algorithms that can predict the type of threat a red (adversary) asset poses to a blue (friendly) asset. The inputs to the system may be assumed to include: 1) an indication/warning mechanism that indicates the existence of anomalous behavior, and 2) a classification of the type of red/blue asset. Approaches to consider include, but are not limited to, predictions based on offensive/defensive guidance templates and techniques associated with machine learning, game theoretic approaches, etc. The proposed approach should be applicable to a variety of threat scenarios.
The example that follows illustrates an application to U.S. satellite protection. The offensive template determines the type of threat. Mechanisms such as templates are used to predict whether or not this asset is a threat by comparing configuration changes with known threatening scenarios through probabilistic analyses, such as Bayesian inferences or game theoretic analyses. Robustness tests may be employed as well. (For example, a threat can be simulated that is not specific to one template.) Once the threat is determined, the classification algorithm provides notification of the type of asset. The classification approach is employed to (for example) determine whether the asset is intact or a fragment, its control states, the type of control state, and whether it is a rocket body, payload, or debris. (An example of an offensive assessment is a mass-inertia configuration change in an active red asset that is specific to robotic arm-type movements.) In the above example, a question to be answered is: can a combination of the templates handle this case? The defensive portion must also provide recommended countermeasures, e.g., in the case of a blue satellite, thruster burns to move away from possible threats. Although our specific application interests for this research topic are represented by the above example, many application areas are likely to benefit from this research, including cyber defense, counter Unmanned Aerial Systems (UAS), etc.
SF.20.14.B1070: Advanced Event Detection and Specification in Streaming Video
Focus area 1: graph analysis techniques applied to assessing the resilience of critical infrastructure systems (e.g. electric power grid, communications systems); to include sets of critical nodes and links, measures of centrality, dimensionality reduction, application of game theory, graph matching and alignment with large sparse graphs, and corresponding metrics to characterize assessments and data fitness, and related areas.
Focus area 2: distributed computation and reasoning over near real-time stream data processing (e.g., full motion video) for situational awareness. A query-based approach to analyzing (i.e., descriptive), understanding (i.e., diagnostic), and predicting (i.e., predictive) situations with real-time feedback (i.e., prescriptive analytics) can be explored. Areas of interest include query robustness (i.e., quality and transactional properties) and applying machine learning (statistical) techniques with dynamic feedback loops to measure and adjust model fitness, applied to real-time streaming video. (Reference AFOSR's Dynamic Data Driven Applications Systems (DDDAS) portfolio description or the community at www.1dddas.org.)
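The critical-node analysis in focus area 1 can be sketched with a toy infrastructure graph: rank nodes by a centrality measure, then test connectivity with the top-ranked node removed. The graph, node names, and the use of simple degree centrality (rather than betweenness or spectral measures) are illustrative assumptions.

```python
from collections import deque

def degree_centrality(adj):
    """Degree centrality: fraction of the other nodes each node touches."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def is_connected(adj, removed=frozenset()):
    """BFS connectivity check on the graph with `removed` nodes deleted."""
    nodes = [v for v in adj if v not in removed]
    if not nodes:
        return True
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in removed and w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(nodes)

# Hypothetical grid fragment: substation 'a' is a hub, and 'd' is served
# only through 'a', so removing 'a' partitions the network.
adj = {'a': {'b', 'c', 'd'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}, 'd': {'a'}}
cent = degree_centrality(adj)
critical = max(cent, key=cent.get)  # 'a' in this toy example
```

Real resilience assessments on large sparse graphs would replace degree centrality with betweenness or spectral measures and consider sets of nodes and links jointly, as the focus area describes.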
SF.20.14.B1068: Quantum Networking with Atom-based Quantum Repeaters
A key step towards realizing a quantum network is the demonstration of long distance quantum communication. Thus far, using photons for long distance communication has proven challenging due to the absorption and other losses encountered when transmitting photons through optical fibers over long distances. An alternative, promising approach is to use atom-based quantum repeaters combined with purification/distillation techniques to transmit information over longer distances. This in-house research program will focus on trapped-ion based quantum repeaters featuring small arrays of trapped-ion qubits connected through photonic qubits. These techniques can be used to either transmit information between a single beginning and end point, or extended to create small networks with many users.
SF.20.14.B1065: Mathematical Theory for Advances in Machine Learning and Pattern Recognition
To alleviate the effects of the so-called 'curse of dimensionality', researchers have developed sparse, hierarchical, and distributed computing techniques to allow timely and meaningful extraction of intelligence from large amounts of data. As the amount of data available to analysts continues to grow, a strong mathematical foundation for new techniques is required. This research topic is focused on the development of theoretical mathematics with applications to machine learning and pattern recognition, with a special emphasis on techniques that admit sparse, low-rank, overcomplete, or hierarchical methods on multimodal data. Research may be performed in, but is not limited to: sparse PCA, generalized Fourier series, low-rank approximation, tensor decompositions, and compressed sensing. Proposals with a strong mathematical foundation will receive special consideration.
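As a small illustration of the low-rank approximation theme, the sketch below extracts the top singular triple of a matrix by power iteration on the Gram matrix, which is the building block of truncated-SVD compression. The matrix and iteration count are illustrative; production code would use a robust SVD routine.

```python
import math

def rank1_approx(A, iters=50):
    """Top singular triple (sigma, u, v) via power iteration on A^T A.
    Assumes A has a well-separated dominant singular value; no
    deflation or convergence test (illustrative sketch only)."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    sigma = math.sqrt(sum(x * x for x in u))
    u = [x / sigma for x in u]
    return sigma, u, v

# A is exactly rank 1 (outer product of [1,2] and [3,4]), so the rank-1
# reconstruction sigma * u * v^T recovers it to machine precision.
A = [[3.0, 4.0], [6.0, 8.0]]
sigma, u, v = rank1_approx(A)
```

For a general matrix, repeating this with deflation yields the truncated SVD, whose rank-k reconstruction is the best rank-k approximation in the Frobenius norm (Eckart-Young).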
SF.20.14.B1063: Secure Processing Systems
The objective of the Secure Processing Systems topic is to develop hardware that supports maintaining control of our computing systems. Currently, most commercial computing systems are built with the requirement to quickly and easily pick up new functionality. This also leaves the systems very vulnerable to picking up unwanted functionality. By adding specific features to microprocessors and limiting the software initially installed on the system, we can obtain the needed functionality yet not be vulnerable to attacks which push new code to our system. Many of these techniques are known; however, there is little commercial demand for products that are difficult and time-consuming to reprogram, no matter how much security they provide. As a result, the focus of this topic is selecting techniques and demonstrating them through the fabrication of a secure processor. Areas of interest include: 1) design, layout, timing, and noise analysis of digital integrated circuits, 2) implementing a trusted processor design and verifying that design, 3) selection of security features for a microprocessor design, 4) verifying manufactured parts, and 5) demonstrations of the resulting hardware.
SF.20.14.B0856: Event Detection and Predictive Assessment in Near-real Time Complex Systems
The goal is to make best use of multi-point observations and sensor information for event detection and predictive assessment applicable to complex, near real time systems which are found in many military domains.
The first step in tackling these challenges is to analyze the data, remove any non-relevant information, and concentrate efforts on understanding correlations between variables and events. The analysis is followed by designing and developing signal processing techniques that strengthen these correlations. The selected approach would transform seemingly meaningless data into meaningful event predictions. This step is not an easy task because sensor readings and operator logs are sometimes inconsistent, unreliable, provide perishable data, generate outliers due to some catastrophic failure, or evolve in time in such a way that the data is almost impossible to predict.
Searching for strong correlations between data and events leads to choosing a model which can best assess the current conditions and then predict the possible outcomes for a number of possible scenarios. Scientists need to understand why a proposed method can be a potential solution.
Perhaps deterministic or statistical models can be simplified and solved; maybe a preprocessing stage can map data into a space where patterns are easily identified; perhaps solutions applied to other problems can be translated to the proposed problem, or an untested technique can be applied to a dynamic model.
This is an opportunity for researchers to investigate event detection scenarios in the areas of telecommunications, radar, audio, imagery, and video, and to support AFRL projects in sensor exploitation. An important element of this topic is brainstorming, testing ideas, and gaining a general understanding of input data and output events.
SF.20.14.B0855: Complex Network and Information Modeling & Inference
Recent advances in sensing technology have enabled the capture of dynamic heterogeneous network and information system data. However, due to limited resources it is not practical to measure a complete snapshot of the network or system at any given time. This topic is focused on inferring the full system, or a close approximation, from a minimal set of measurements. Relevant areas of interest include matrix completion, low-rank modeling, online subspace tracking, classification, clustering, and ranking of single and multi-modal data, all in the context of active learning and sampling of very large and dynamic systems. Application areas of interest include, but are not limited to, communication, social, and computational network analysis; system monitoring; anomaly detection; and video processing. Also of interest are topological methods such as robust geometric inference, statistical topological data analysis, and computational homology and persistence. The exploration of new techniques and efficient algorithms for topological data analysis of time-varying and dynamic systems is of particular interest. Candidates should have a strong research record in these areas.
SF.20.14.B0854: Large Scale Geometric Reasoning & Modeling
Many recent efforts in machine learning have focused on learning from massive amounts of data, resulting in large advancements in machine learning capabilities and applications. However, many domains lack access to the large, high-quality, supervised data that is required and are therefore unable to fully take advantage of these data-intense learning techniques. This necessitates new data-efficient learning techniques that can learn in complex domains without the need for large quantities of data. This topic focuses on the investigation and development of data-efficient machine learning methods that are able to leverage knowledge from external/existing data sources, exploit the structure of the data and/or the parameters of the learning models, and explore the efficient joint collection of training data and learning. Areas of interest include, but are not limited to: Active learning, Semi-supervised learning, Learning from "weak" labels/supervision, One/Zero-shot learning, Transfer learning/domain adaptation, as well as methods that exploit structural or domain knowledge.
Furthermore, while fundamental machine learning work is of interest, so are principled data-efficient applications in, but not limited to: Computer vision (image/video categorization, object detection, visual question answering, etc.), Social and computational networks and time-series analysis, and Recommender systems.
SF.20.14.B0853: Advanced Computing Processors Information Management
As the number of computing processors is increased for most applications, a situation is reached where processor information management becomes the bottleneck in scaling, and adding processors beyond this number results in a deleterious increase in processing time. Some examples that limit scalability include bus and switch contentions, memory contentions, and cache misses, all of which increase disproportionately as the number of processors increases. The objective of this topic is to investigate existing and/or develop novel methods of processor information management for multiprocessor and many-processor computing architectures that will allow for increased scaling.
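The scaling bottleneck described above can be made concrete with an Amdahl-style speedup model extended by a linear contention penalty per processor, past whose optimum adding processors actually slows the computation. The parallel fraction and contention coefficient below are illustrative assumptions, not measurements of any real system.

```python
def speedup(n, parallel_frac=0.95, contention=0.001):
    """Amdahl-style speedup on n processors with a linear contention
    penalty (bus/memory contention, cache misses) per processor.
    parallel_frac and contention are hypothetical, for illustration."""
    serial = 1.0 - parallel_frac
    return 1.0 / (serial + parallel_frac / n + contention * n)

# The speedup curve rises, peaks near sqrt(parallel_frac / contention)
# (about 31 processors here), then falls as contention dominates.
best_n = max(range(1, 1001), key=speedup)
```

In this model, 500 processors are slower than 8: the contention term grows linearly while the divisible work shrinks only as 1/n, which is exactly the "deleterious increase in processing time" the topic describes.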
SF.20.14.B0852: Neuromorphic Computing
The high-profile applications of machine learning (ML)/AI, while impressive, are a) not suitable for Size, Weight, and Power (SWaP) limited systems and b) not operable without access to “the cloud.” Neuromorphic computing is one of the most promising approaches for low-power, non-cloud-tethered ML, potentially operable down at the sensor level, also called “edge computing,” because it implements aspects of biological brains, e.g. trainable networks of neurons and synapses, in non-traditional, highly-parallelizable, reconfigurable hardware. As opposed to typical ML approaches today, our research aims for “the physics of the device” to perform the computations and for the reconfigurable hardware itself to be the ML algorithm. This research effort encompasses mathematical models, hardware characterization, hardware emulation, hybrid VLSI CMOS architecture designs, and algorithm development for neuromorphic computing processors. We are particularly interested in approaches that exploit the characteristic behavior of the physical hardware itself to perform computation, e.g. optics, memristors/ReRAM, metamaterials, nanowires. Again, special emphasis will be placed on imaginative technologies and solutions to satisfy future Air Force needs for non-cloud-tethered ML on SWaP limited assets.
SF.20.13.B0950: Quantum Information Processing
The topic of Quantum Information Processing and quantum photonic enabling components covers computational methods, entanglement characterization, methods for large scale entanglement generation, and device architectures. It has been well established that a computer based on quantum interference could offer significant increases in processing efficiency and speed over classical versions, and specific algorithms have been developed to demonstrate this in tasks of high potential interest such as database searches, pattern recognition, and unconstrained optimization.
The experimental progress is rapidly catching up to the theoretical research as these small-scale devices, which are demonstrating quantum processes, continue to grow in their number of available qubits. The focus of this research is the generation, manipulation, and characterization of entangled photon states for quantum information processing, quantum networking, entanglement distribution, and heterogeneous qubit integration. The research focuses strongly on integrated photonics, and expertise in this area is beneficial.
Theoretical advances will also be pursued with existing and custom quantum simulation software to model computational speedup, error correction, de-coherence effects, and modeling physical devices to fabricate. Algorithm investigation will focus on hybrid approaches which simplify the physical realization constraints and specifically address tasks of potential military interest.
SF.20.13.B0946: Quantum Computing Theory and Simulation
Quantum computing (QC) research involves interdisciplinary theoretical and experimental work from diverse fields such as physics, electrical and computer engineering, computer science, and pure and applied mathematics. Objectives of AFRL’s Quantum Information Science (QIS) Branch include the development of quantum algorithms with an emphasis on large scale scientific computing and search/decision/optimization applications on QC hardware, the simulation of quantum gates/circuits/processing, and quantum entanglement schemes with an emphasis on modeling experiments. Topics of special interest include the cluster state quantum computing paradigm, quantum simulated annealing, NISQ-based quantum algorithms, the behavior of quantum information and entanglement under arbitrary motion of qubits, measures of generation and detection of quantum entanglement, and the distinction between quantum and classical information and its subsequent exploitation.
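The gate/circuit simulation mentioned above reduces, at its smallest scale, to complex linear algebra on state vectors. The sketch below simulates a single-qubit Hadamard gate with nothing but Python complex numbers; it is a toy state-vector simulator, not representative of the scale or methods of serious QC simulation.

```python
import math

# Hadamard gate: maps |0> to the equal superposition (|0> + |1>)/sqrt(2).
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    """Apply a 2x2 gate matrix to a single-qubit state vector."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

def probs(state):
    """Born-rule measurement probabilities |amplitude|^2 in the Z basis."""
    return [abs(a) ** 2 for a in state]

ket0 = [1 + 0j, 0 + 0j]      # |0>
plus = apply(H, ket0)        # equal superposition: 50/50 measurement odds
back = apply(H, plus)        # H is self-inverse, so this returns |0>
```

Scaling this to n qubits requires 2^n amplitudes, which is precisely why classical simulation of quantum circuits becomes intractable and why the quantum hardware itself is of interest.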
SF.20.13.B0945: Nanocomputing
Advances in nanoscience and technology show great promise in the bottom-up development of smaller, faster, and reduced-power computing systems. Nanotechnology research in this group is focused on leveraging novel emerging nanoelectronic devices and circuits for neuromorphic spike processing on temporal data. Of particular interest are biologically inspired approaches to neuromorphic computing which utilize existing nanotechnologies including nanowires, memristors, coated nanoshells, and carbon nanotubes. We have a particular interest in the modeling and simulation of architectures that exploit the unique properties of these new and novel nanotechnologies. This includes development of analog/nonlinear sub-circuit models that accurately represent sub-circuit performance with subsequent CMOS integration. Also of interest is the use of nanoelectronics as a neural biological interface for enhanced warfighter functionality.
SF.20.13.B0944: Many-Node Computing for Cognitive Operations
The sea change in computing hardware architectures, away from faster cycle rates and towards processor parallelism, has expanded opportunities for development of large scale physical architectures that are optimized for specific operations. Porting of current cognitive computing paradigms onto systems composed of parallel mainstream processors will continue in the commercial world. What higher cognitive functionality could we achieve if we take better advantage of physical capabilities enabled by new multi-processor geometries?
Perception, object recognition, and assignment to semantic categories are examples of lower level cognitive functions. Assignment of valence, creation of goals, and planning are mid level functions. Self awareness and reflection are higher level processes that are so far beyond current cognitive systems that relatively little has been done to model the processes. Often, models assume higher cognitive processes will emerge once the computing system reaches some level of speed / complexity. The problem is that the computational power required exceeds the reachable limit of single processor architectures and probably exceeds the limits of conventional parallel architectures. This topic seeks to enable mid and higher level cognitive function by creation of new physical architectures that address the computation demand in novel ways.
We are interested in developing models for the computational scale of the mid and higher functions and processor / memory node architectures that facilitate cognitive operations by configuring the physical architecture to closely resemble the functional cognitive architecture, e.g., where each node in a network represents and functions as a processor for a single semantic primitive. What new hierarchical architectures could we design for million node systems, where the individual nodes may be small ASPs, with very fast communication between nodes? A project of interest would combine both sides: new algorithms for higher level cognitive functions and new architectures to enable the computation in a realistic time frame. AFRL/RIT has projects underway to enable million node systems.
SF.20.11.B4442: Formal Methods for Complex Systems
Formal methods are based on areas of mathematics that support reasoning about systems. They have been successful in supporting the design and analysis of systems of moderate complexity. Today’s formal methods, however, cannot address the complexity of the computing infrastructure needed for our defense.
This area supports investigation into powerful new formal methods covering a range of activities throughout the lifecycle of a system: specification, design, modeling, and evolution. New mathematical notions are needed to address the state-explosion problem, including powerful new forms of abstraction and composition. Furthermore, novel semantically sound integration of formal methods is also of interest. The goal is to develop tools that are based on rigorous mathematical notions and provide useful, powerful, formal support in the development and evolution of complex systems.
SF.20.11.B4043: Trusted Software-Intensive Systems Engineering
Software is a prime enabler of complex weapons systems and its fungible nature is key to the development of next generation adaptive systems.
Yet software is the most problematic element of large scale systems, dominated by unmet requirements and leading to cost and schedule overruns. The complexity of today's systems lies in more than 10^5 requirements, over 10^7 lines of code, thousands of component interactions, product life cycles of more than 30 years, and stringent certification standards. The tools used to design, develop, and test these complex systems do little to instill trust that the software is free from vulnerabilities or malicious code, or that it will function correctly. Furthermore, there is virtually no tool capable of detecting design flaws. The objective of the trusted software-intensive systems engineering topic is to develop techniques and tools to enable trust (with a focus on security and correctness) throughout the software lifecycle.
Areas of interest include: evidence-based software assurance; static analysis tools, with a preference for analysis at the binary level; algorithm or design-level analysis; secure software development; model-based software engineering; and correct-by-construction software generation.
SF.20.11.B4040: Foundations of Resilient and Trusted Systems
Research opportunities are available for methodologies, technologies and tools supporting the design, development and demonstration of resilient and trustworthy computing. Opportunities include: model-based technologies, components and methods supporting a wide range of requirements for improving the resiliency and trustworthiness of computing systems via multiple resilience and trust anchors throughout the system life cycle including design, specification and verification of cyber-physical systems. Research supports security, resiliency, reliability, privacy and usability leading to high levels of availability, dependability, confidentiality and manageability. Thrusts include middleware and software theories, methodologies, techniques and tools for resilient and trusted, correct-and-secure-by-construction, composable software and system development. Specific areas of interest include: Automated discovery of relationships between computations and the resources they utilize along with techniques to safely and dynamically incorporate optimized, tailored algorithms and implementations constructed in response to ecosystem changes; Theories and application of scalable formal models, automated abstraction, reachability analysis, and synthesis; Perpetual model validation (both of the system interacting with the environment and the model itself); Trusted resiliency and evolvability; Compositional verification techniques for resilience and adaptation to evolving ecosystem conditions; Reduced complexity of autonomous systems; Effective resilient and trusted real-time multi-core exploitation; Architectural security, resiliency and trust; Provably correct complex software and systems; Composability and predictability of complex real-time systems; Resiliency and trustworthiness of open source software; Scalable formal methods for verification and validation to prove trust in complex systems; Novel methodologies and techniques which overcome the expense of current evidence generation/collection techniques for certification and accreditation; and A calculus of resilience and trust allowing resilient and trusted systems to be composed from untrusted components.
SF.20.11.B4039: Data Analytics for Sensor Exploitation
Current sensor strategies are primarily focused on placement for coverage rather than on ISR performance, and are not robust to dynamic changes in the environment. AFRL seeks innovative research in the area of quantifiable data analytics for sensor exploitation. More specifically, AFRL seeks data analytics to help steer the quantification of ISR sensor performance (single sensors, distributed/disaggregated sensors, and heterogeneous sensors). This would include data analytics on collection geometries, tracking prediction, sensor/uncertainty characterization, change detection, degradation when a platform is lost, complex multi-modal patterns of life, and fusion of non-traditional data sources to provide additional assessment context.
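As a purely illustrative sketch of the tracking-prediction and uncertainty-characterization thread above, the fragment below runs a scalar Kalman filter predict/update cycle; the random-walk state model and all noise values are assumptions chosen for the example, not part of the research opportunity.

```python
# Minimal scalar Kalman filter: predict a target state and carry its
# uncertainty forward as noisy sensor measurements arrive.
# The state model and noise variances are illustrative assumptions.

def kalman_step(x, P, z, q=0.1, r=1.0):
    """One predict/update cycle for a random-walk state model.

    x, P : prior state estimate and its variance
    z    : new sensor measurement
    q, r : process and measurement noise variances (assumed known)
    """
    # Predict: random-walk dynamics, so uncertainty grows by q
    x_pred, P_pred = x, P + q
    # Update: blend prediction and measurement via the Kalman gain
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Example: fuse repeated noisy fixes on a stationary emitter near 5.0
x, P = 0.0, 100.0              # vague prior: large initial variance
for z in [5.2, 4.9, 5.1, 5.0]:
    x, P = kalman_step(x, P, z)
# The estimate converges toward 5.0 while P quantifies residual uncertainty.
```

The shrinking variance P is exactly the kind of per-sensor uncertainty characterization that a performance-quantification analytic could consume.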
SF.20.01.B4567: Application of Game Theory and Mechanism Design to Cyber Security
Cyber attacks pose a significant danger to our economic prosperity and national security, while cyber security still seeks to establish a solid scientific basis. Cyber security is a challenging problem because of the interconnection of heterogeneous systems and the scale and complexity of cyberspace. This research opportunity is interested in theoretical models that can broaden the scientific foundations of cyber security and in automated algorithms for making optimal decisions relevant to cyber security. Current approaches that rely overly on heuristics have demonstrated only limited success. Theoretical constructs and mathematical abstractions provide a rigorous scientific basis for cyber security because they allow for quantitative reasoning about cyber attacks.
Cyber security can mathematically be modeled as a conflict between two types of agents: the attackers and the defenders. An attacker attempts to breach the system’s security while the defenders protect the system. In this strategic interaction, each agent’s action affects the goals and behaviors of others. Game theory provides a rich mathematical tool to analyze conflict in strategic interaction and thereby gain a deep understanding of cyber security issues. The Nash equilibrium analysis of the security games allows the defender to allocate cyber security resources, understand how to prioritize cyber defense activities, evaluate the potential security risks, and reliably predict the attacker’s behavior.
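To make the equilibrium idea concrete, the fragment below solves a minimal two-asset guarding game in closed form; the loss numbers are illustrative assumptions, and the solver assumes a 2x2 zero-sum game with no saddle point (true for this example).

```python
from fractions import Fraction

def solve_2x2_zero_sum(L):
    """Mixed-strategy Nash equilibrium of a 2x2 zero-sum security game.

    L[i][j] is the defender's loss when the defender guards asset i and
    the attacker strikes asset j.  Assumes no saddle point, so both
    players mix; the closed-form indifference equations then apply.
    """
    (a, b), (c, d) = L
    denom = Fraction(a - b - c + d)
    q = Fraction(d - c) / denom            # P(defender guards asset 0)
    p = Fraction(d - b) / denom            # P(attacker strikes asset 0)
    value = Fraction(a * d - b * c) / denom  # defender's expected loss
    return q, p, value

# Illustrative losses (assumed numbers): asset 1 costs 5 if hit unguarded,
# asset 0 costs 3, and a guarded asset suffers no loss.
L = [[0, 5],
     [3, 0]]
q, p, value = solve_2x2_zero_sum(L)
# q = 3/8, p = 5/8, value = 15/8: the defender guards the cheaper asset
# less often, and no pure strategy can be exploited by the attacker.
```

Even this toy equilibrium exhibits the behaviors described above: it allocates defensive effort in proportion to what the attacker can gain, and it yields a reliable prediction of the attacker's mixed strategy.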
Securing cyberspace requires innovative game-theoretic models that consider practical scenarios such as incomplete information, imperfect information, repeated interaction, and imperfect monitoring. Moreover, additional challenges such as node mobility, situation awareness, and computational complexity are critical to the success of wireless network security. Furthermore, for making decisions on security investments, special attention should be given to accurately quantifying the value added by network security. New computing paradigms, such as cloud computing, should also be investigated for security investments.
We also explore novel security protocols developed using mechanism design principles. Mechanism design can be applied to cyber security by designing strategy-proof security protocols or by developing systems that are resilient to cyber attacks. A network defender can use mechanism design to implement security policies or rules that channel attackers toward behaviors that are defensible (i.e., the equilibrium desired by the defender).
SF.20.01.B4555: Dynamic Resource Allocation in Airborne Networks
From the Air Force perspective, a new research and development paradigm supporting dynamic airborne networking parameter selection is of paramount importance to the next-generation warfighter. Constraints related to platform velocity, rapidly-changing topologies, mission priorities, power, bandwidth, latency, security, and covertness must be considered. By developing a dynamically reconfigurable network communications fabric that allocates and manages communications system resources, airborne networks can better satisfy and assure multiple, often conflicting, mission-dependent design constraints. Special consideration will be given to topics that address cross-layer optimization methods focused on improving performance at the application layer (e.g., video or audio), spectral-aware and/or priority-aware routing and scheduling, and spectral utilization problems in cognitive networks.
SF.20.01.B4438: Wireless Sensor Networks in Contested Environments
Sensor networks are particularly versatile for a wide variety of detection and estimation tasks. Due to the nature of communication in a shared wireless medium, these sensors must operate in the presence of other co-located networks which may have competing, conflicting, and even adversarial objectives. This effort focuses on the development of the fundamental mathematics necessary to analyze the behavior of networks in contested environments. Security, fault tolerance, and methods for handling corrupted data in dynamically changing networks are of interest.
Research areas include but are not limited to optimization theory, information theory, detection/estimation theory, quickest detection, and game theory.
Development of new cryptographic techniques is not of interest under this research opportunity.
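One of the listed areas, quickest detection, can be illustrated with Page's CUSUM procedure for detecting a change in a sensor's statistics as quickly as possible. The parameters and the noise-free data stream below are illustrative assumptions chosen so the behavior is easy to follow.

```python
def cusum_alarm(xs, mu0, mu1, sigma, h):
    """Page's CUSUM test: quickest detection of a mean shift mu0 -> mu1
    in Gaussian observations with known standard deviation sigma.

    Returns the index of the first alarm, or None if no alarm fires.
    """
    s = 0.0
    for t, x in enumerate(xs):
        # Log-likelihood ratio of one sample under mu1 versus mu0
        llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2.0)
        s = max(0.0, s + llr)   # reset-to-zero keeps the statistic recent
        if s > h:               # threshold h trades delay vs. false alarms
            return t
    return None

# Noise-free illustration: the sensor mean jumps from 0 to 2 at t = 50
xs = [0.0] * 50 + [2.0] * 20
alarm = cusum_alarm(xs, mu0=0.0, mu1=2.0, sigma=1.0, h=5.0)
# The statistic climbs by 2 per post-change sample, so the alarm fires
# three samples after the change, at index 52.
```

In a contested network the same recursion can run at each node, with the threshold h set from the tolerable false-alarm rate across the network.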
SF.20.01.B4437: Communications Processing Techniques
Our research focuses on exploring new techniques to process existing and future wireless communications. We are developing advanced technologies to intercept, collect, locate, and process communication signals in all parts of the spectrum. Our technical challenges include interference cancellation in dense co-channel environments, multi-user detection (MUD) algorithms, hardware architectures and software methodologies, techniques to geo-locate and track emitters, and methodologies to improve the efficiency of signal processing software. Research into unique and advanced methods to process communication signals in a high-density, rapidly changing environment is of great importance. The research is expected to be a combination of analytical and experimental analyses. Experimental aspects will be performed via simulations using an appropriate signal processing software tool, such as MATLAB.
SF.20.01.B4336: Audio & Acoustic Processing
AFRL/RIGC is involved in all aspects of researching and developing state-of-the-art audio and acoustical analysis and processing capabilities, to address needs and requirements that are unique to the DoD and intelligence communities. The group is a unique combination of linguists, mathematicians, DSP engineers, software engineers, and analysts. This combination of individuals allows us to tackle a wide spectrum of topics, from basic research such as channel estimation, robust word recognition, language and dialect identification, and confidence measures to the challenging transitional aspects of real-time implementation for speech, as well as detecting, tracking, beamforming, and classifying specific acoustical signatures in dynamic environments via array processing. AFRL/RIGC also has significant thrusts in noise estimation and removal (both spectral and spatial), speaker identification including open-set identification, acoustical identification, keyword spotting, robust feature extraction, language translation, analysis of stressed speech, coding algorithms along with the consequences of the compression schemes, watermarking, co-channel mitigation, and recognition of background events in audio recordings. State-of-the-art techniques such as i-vectors, deep neural networks, bottleneck features, and extreme learning are used to pursue solutions for real-time and offline problems such as SID, LID, GID, etc.
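As a small illustration of the spectral noise-removal thrust, the fragment below applies classic single-frame magnitude spectral subtraction. The test signal, noise level, and the assumption that a noise-only segment is available for estimating the noise spectrum are all idealizations for the example, not a description of AFRL/RIGC's methods.

```python
import numpy as np

def spectral_subtract(frame, noise_mag, floor=0.02):
    """Single-frame magnitude spectral subtraction.

    frame     : time-domain samples of noisy audio
    noise_mag : estimated magnitude spectrum of the noise alone
    floor     : spectral floor that limits musical-noise artifacts
    """
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    # Subtract the noise magnitude, never dropping below the floor
    clean_mag = np.maximum(mag - noise_mag, floor * mag)
    # Resynthesize with the noisy phase (standard for this method)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(frame))

# Demo: a tone buried in white noise, with the noise spectrum estimated
# from a noise-only segment (an idealized assumption).
rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
clean = np.sin(2 * np.pi * 16 * t / n)
noise = 0.3 * rng.standard_normal(n)
noisy = clean + noise
enhanced = spectral_subtract(noisy, np.abs(np.fft.rfft(noise)))
# The enhanced frame sits closer to the clean tone than the noisy input.
```

Practical front ends frame the audio, smooth the noise estimate over time, and overlap-add the output, but the core subtract-and-floor step is the one shown here.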
SF.20.01.B4111: Agile Networking for the Aerial Layer
The characteristics of today's aerial layer networks are limiting effective information sharing and distributed command & control (C2), especially in contested, degraded, operationally limited environments, where the lack of interoperability and pre-planned/static link configurations pose the greatest challenges. Advanced research in wireless networking is sought to support aerial information exchange capabilities in highly dynamic environments. This includes but is not limited to: disruption/delay tolerant networking; radio-to-router interface protocols; opportunistic transport protocols; resilient data/message protocols and on-demand prioritization; spectrum use; infrastructure sharing and mesh networking.
SF.20.01.B4008: Next-generation Aerial Directional Data Link & Networking (NADDLN)
Given the scarcity of spectrum, there is a desire to develop self-forming, self-managing directional tactical data links operating at higher frequencies. Directional networking provides an opportunity to increase spectral efficiency, support ad-hoc aerial connectivity, improve resistance to intended/unintended interference, and increase the potential capacity of the network. However, establishing and maintaining directional data links adds significant complexity to network operations over traditional omnidirectional systems.
Research interests reside in:
(1) the ability to make real-time content/context-aware trades involving capacity, latency, and interference tolerance;
(2) mission-aware link, discovery, and network topology control; and
(3) affordable apertures, ultimately delivering new capabilities for aerial directional data links and networks.
SF.20.01.B4006: Airborne Networking and Communications Links
This research effort focuses on the examination of enabling techniques supporting potential and future highly mobile Airborne Networking and Communications Link capabilities and high-data-rate requirements as well as the exploration of research challenges therein. Special consideration will be given to topics that address the potential impact of cross-layer design and optimization among the physical, data link, and networking layers, to support heterogeneous information flows and differentiated quality of service over wireless networks including, but not limited to:
· Physical and MAC layer design considerations for efficient networking of airborne, terrestrial, and space platforms;
· Methods by which nodes will communicate across dynamic heterogeneous sub-networks with rapidly changing topologies and signaling environments, e.g., friendly/hostile links/nodes entering/leaving the grid;
· Techniques to optimize the use of limited physical resources under rigorous Quality of Service (QoS) and data prioritization constraints;
· Mechanisms to handle the security and information assurance problems associated with using new high-bandwidth, high-quality, communications links; and
· Antenna designs and advanced coding for improved performance on airborne platforms.
SF.20.01.B4005: Wireless Optical Communications
Quantum communications research involves theoretical and experimental work from diverse fields such as physics, electrical and computer engineering, computer science, and pure and applied mathematics. Objectives include investigations into integrating quantum data encryption with a QKD protocol, such as BB84, and characterizing its performance over a roughly 30 km free-space stationary link.
Free Space Optical Communication Links: Laser beams propagating through the atmosphere are affected by turbulence. The resulting wave front distortions lead to performance degradation in the form of reduced signal power and increased bit-error-rates (BER), even in short links. Objectives include the development of the relationship between expected system performance and specific factors responsible for wave front distortions, which are typically linked to some weather variables, such as the air temperature, pressure, wind speed, etc.
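One standard first-order link between weather-driven turbulence and expected system performance is the plane-wave Rytov variance, which maps the refractive-index structure constant Cn^2 (itself a function of temperature, pressure, and wind) to the strength of irradiance fluctuations. The Cn^2 value and wavelength below are illustrative assumptions; the 30 km path echoes the link length mentioned above.

```python
import math

def rytov_variance(cn2, wavelength, path_len):
    """Plane-wave Rytov variance: sigma_R^2 = 1.23 Cn^2 k^(7/6) L^(11/6).

    cn2        : refractive-index structure constant (m^-2/3)
    wavelength : optical wavelength (m)
    path_len   : link length (m)
    Values well below 1 indicate weak scintillation; values above 1
    indicate the strong-fluctuation regime where BER degrades sharply.
    """
    k = 2.0 * math.pi / wavelength   # optical wavenumber
    return 1.23 * cn2 * k ** (7.0 / 6.0) * path_len ** (11.0 / 6.0)

# Assumed moderate turbulence (Cn^2 = 1e-14) at 1550 nm
sigma2_long = rytov_variance(cn2=1e-14, wavelength=1.55e-6, path_len=30e3)
sigma2_short = rytov_variance(cn2=1e-14, wavelength=1.55e-6, path_len=1e3)
# The 30 km path lands deep in the strong-fluctuation regime, while the
# 1 km path stays weak: link length enters at the steep L^(11/6) power.
```

Mapping measured weather variables to Cn^2, and Cn^2 through expressions like this to signal power and BER, is exactly the kind of performance relationship the objective above seeks to develop.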
Keywords applicable to these studies are: quantum cryptography, free-space laser propagation, coherent-state quantum data encryption, laser beam propagation through turbulent media, and integration of a quantum communications system with a pointing, acquisition, and control system.
SF.20.01.B4001: Mission Driven Enterprise to Tactical Information Sharing
Forward deployed sensors, communication, and processing resources increase footprint, segregate data, decrease agility, slow the speed of command, and hamper synchronized operations. What is required is the capability to dynamically discover information assets and utilize them to disseminate information across globally distributed federations of consumers spread across both forward-deployed tactical data links and backbone enterprise networks. The challenges of securely discovering, connecting to, and coordinating interactions between federation members and transient information assets resident on intermittent, low-bandwidth networks need to be addressed. Mission-prioritized information sharing over large-scale, distributed, heterogeneous networks for shared situational awareness is non-trivial. The problem space requires investigation; potential solutions and technologies need to be identified; and technical approaches need to be articulated that will lead to capabilities enabling forward-deployed personnel to reach back to enterprise information assets, and allowing rear-deployed operators the reciprocal opportunity to reach forward to tactical assets that can address their information needs.
Anticipating versus Reacting - Conditions in real-world environments are dynamic - threats emerge and may be neutralized, opportunities appear without warning, etc. - and robust autonomous agents must be able to act appropriately despite these changing conditions. To this end, we are interested in identifying events that signal that a change must be made in an agent's behavior by mining past data from a variety of sources, such as the agent's own history, messages from other autonomous agents, or other environmental sensors. This capability would allow agents to learn to anticipate and plan for scenario-altering events rather than reacting to them after they have already occurred.