N251-041 TITLE: Generalizable Artificial Intelligence/Machine Learning (AI/ML) Undersea Warfare (USW) Quick-Look Tool
OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Trusted AI and Autonomy
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.
OBJECTIVE: Develop a configurable Artificial Intelligence/Machine Learning (AI/ML) tool that generates USW quick-look reports for laboratory testing and at-sea test data collection events.
DESCRIPTION: Documenting the outcomes of laboratory and at-sea test procedures involves time-consuming manual processes, variability in expertise, and subjectivity in interpretation. Manual interpretation of test results is error-prone and requires substantial time. The potential for error and delay increases with the complexity and volume of data. Delay may also occur when multiple professionals must come to consensus on the interpretation of the data.
Not all test engineers have the same level of experience or knowledge when interpreting test results, leading to inconsistencies in reported outcomes. This variability can in turn produce inconsistent management decisions, because the same test results are interpreted differently by different individuals.
Further, engineers may draw contrasting conclusions from the same test data, contributing further to the variability in outcomes, even when the test is simple (e.g., calibration of a sensor array). These challenges are compounded by other factors, such as the quality of results, factors related to the purpose of the test procedure, and the reliability of test measurements.
The Navy seeks a Generalizable USW Quick-Look Tool that reduces variability in outcomes and facilitates an advanced state of expertise among inexperienced test and manufacturing personnel. There is currently no commercial tool that can accomplish this.
The initial target of the technology would be relatively simple and repeatable tests, such as towed receive array calibration and inspection. The solution must be extensible to more complex test procedures. The tool will be evaluated on the accuracy of the results in the report, the usability of the information provided, and the reduction in the time currently required to produce a report. The solution must demonstrate a range of quick-look test summaries covering representative tests, from simple calibrations to complex test series spanning multiple days, test objectives, and environmental conditions. It must also perform a pre-test quality assurance check that can detect mechanical inconsistencies between the planned test setup and the actual hardware configuration.
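As a purely notional illustration (not part of the topic requirements), the pre-test quality assurance check described above can be thought of as a planned-versus-actual configuration comparison. The sketch below assumes hypothetical field names (array_serial, channel_count, element_spacing_m); real test setups would involve far richer hardware metadata.

```python
# Notional sketch of a pre-test QA check: compare the planned test setup
# against the hardware configuration actually reported, and flag
# mechanical inconsistencies. All field names here are hypothetical.

def pretest_qa_check(planned: dict, actual: dict) -> list[str]:
    """Return human-readable discrepancies between the planned test setup
    and the actual hardware configuration (an empty list means 'pass')."""
    issues = []
    for field, expected in planned.items():
        observed = actual.get(field)
        if observed is None:
            issues.append(f"missing field in actual configuration: {field}")
        elif observed != expected:
            issues.append(f"{field}: planned {expected!r}, actual {observed!r}")
    return issues

planned = {"array_serial": "TA-0042", "channel_count": 96, "element_spacing_m": 1.5}
actual  = {"array_serial": "TA-0042", "channel_count": 64, "element_spacing_m": 1.5}

for issue in pretest_qa_check(planned, actual):
    print(issue)  # prints the one mismatch: channel_count
```

An actual solution would need to ingest configurations automatically (e.g., from test plans and hardware self-reports) rather than from hand-built dictionaries; this sketch only illustrates the comparison concept.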
The concept will be evaluated based on feasibility, range of extensibility across test complexity (calibration test to multi-day multi-objective testing) and type (in-lab testing to at-sea testing), ease of use for test engineers, and clarity of test result presentation.
The minimum viable product (MVP) version of the end result will undergo independent testing by the IWS 5.0 Machine Learning Working Group. This independent testing will include using the prototype with classified data sets.
Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by 32 C.F.R. § 2004.20 et seq., National Industrial Security Program Executive Agent and Operating Manual, unless acceptable mitigating procedures can and have been implemented and approved by the Defense Counterintelligence and Security Agency (DCSA), formerly the Defense Security Service (DSS). The selected contractor must be able to acquire and maintain a secret level facility clearance and Personnel Security Clearances. This will allow contractor personnel to perform on advanced phases of this project as set forth by DCSA and NAVSEA in order to gain access to classified information pertaining to the national defense of the United States and its allies; this will be an inherent requirement. The selected company will be required to safeguard classified material during the advanced phases of this contract IAW the National Industrial Security Program Operating Manual (NISPOM), which can be found at Title 32, Part 2004.20 of the Code of Federal Regulations.
PHASE I: Develop a concept for an AI/ML USW Quick-Look tool and demonstrate that it can feasibly meet the parameters of the Description. Demonstrate feasibility through modeling and testing.
The Phase I Option, if exercised, will include the initial design specifications and capabilities description to build a prototype solution in Phase II.
PHASE II: Develop and deliver a prototype AI/ML USW Quick-Look tool based on the results of Phase I. Demonstrate the technology through independent evaluation of the MVP prototype by the government Machine Learning Working Group, which will test the prototype using classified data sets.
It is probable that the work under this effort will be classified under Phase II (see Description section for details).
PHASE III DUAL USE APPLICATIONS: Assist the Navy in transitioning the technology to Navy use. It is anticipated the final product will eventually be used across PEO Integrated Warfare Systems (IWS) and USW to develop quick-look reports for both laboratory testing and at-sea tests. The Size, Weight, Power, and Cooling (SWaP-C) characteristics of the final product will determine how test engineers may utilize it in cases where cloud-based test infrastructure is not available.
The Generalizable Quick-Look Tool will be of use in numerous applications where engineering tests must be rapidly summarized to support product decisions or provide insight to customers. Given the anticipated domestic reshoring of product manufacturing, the Generalizable Quick-Look Tool could become a significant aid to future manufacturers, who will often lack sufficient seasoned personnel to mentor the rising workforce through traditional master-apprentice techniques.
REFERENCES:
1. Yoshimura, Heather. "Beyond Numbers: How AI Can Be Used To Assist With Lab Results Interpretation and Patient Outcomes." AGNP-PC, 17 July 2023. https://www.rupahealth.com/post/beyond-numbers-how-ai-can-be-used-to-assist-with-lab-results-interpretation-and-patient-outcomes
2. Yu, Mengling et al. "Array shape calibration with phase unwrapping techniques for highly deformed arrays." IET Radar, Sonar & Navigation, 10 June 2021. https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/rsn2.12131
3. "AN/SQQ-89(V) Undersea Warfare / Anti-Submarine Warfare Combat System." https://www.navy.mil/Resources/Fact-Files/Display-FactFiles/Article/2166784/ansqq-89v-undersea-warfare-anti-submarine-warfare-combat-system/
4. "National Industrial Security Program (NISP), 32 C.F.R. § 2004.20 et seq." https://www.ecfr.gov/current/title-32/subtitle-B/chapter-XX/part-2004
KEYWORDS: Quick-look test summaries; calibration of a sensor array; independent quality assurance check; complex test procedures; inexperienced test and manufacturing personnel; reduces variability in outcomes
** TOPIC NOTICE **
The Navy Topic above is an "unofficial" copy from the Navy Topics in the DoD 25.1 SBIR BAA. Please see the official DoD Topic website at www.dodsbirsttr.mil/submissions/solicitation-documents/active-solicitations for any updates. The DoD issued its Navy 25.1 SBIR Topics pre-release on December 4, 2024 which opens to receive proposals on January 8, 2025, and closes February 5, 2025 (12:00pm ET). Direct Contact with Topic Authors: During the pre-release period (December 4, 2024, through January 7, 2025) proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic. Once DoD begins accepting proposals on January 8, 2025 no further direct contact between proposers and topic authors is allowed unless the Topic Author is responding to a question submitted during the Pre-release period. DoD On-line Q&A System: After the pre-release period, until January 22, at 12:00 PM ET, proposers may submit written questions through the DoD On-line Topic Q&A at https://www.dodsbirsttr.mil/submissions/login/ by logging in and following instructions. In the Topic Q&A system, the questioner and respondent remain anonymous but all questions and answers are posted for general viewing. DoD Topics Search Tool: Visit the DoD Topic Search Tool at www.dodsbirsttr.mil/topics-app/ to find topics by keyword across all DoD Components participating in this BAA.
12/31/24 | Q.
1. Test-Specific Data: What are the key types of test data used during laboratory and at-sea evaluations (e.g., calibration results, environmental conditions)? Would it be possible to get a sample data set?
2. Pre-Test QA Checks: What are the common pre-test issues or inconsistencies that should be flagged (e.g., mismatched configurations, hardware defects)?
3. Quick-Look Report Content: Should quick-look reports focus on high-level summaries, or should they include detailed insights for individual test scenarios?
4. Test Complexity and Extensibility: How diverse are the test scenarios in terms of environmental conditions and equipment setups?
5. Real-Time vs. Post-Test Reporting: Should the tool generate real-time summaries during testing, or is it focused on post-test reporting?
A.
1. Test-Specific Data: The sorts of information collected during an event would be preliminary results, initial hypotheses about unusual results, conditions (environmental, location, duration of test(s) or collections), and calibration results (when applicable). Recall that this SBIR is for research into how AI/ML could assist in generating quick looks, so the nature of the events/tests to be summarized will not be precisely prescribed; otherwise it would not require research.
2. Pre-Test QA Checks: There are many possible architectures/frameworks that could form the basis for the desired research. It would be desirable to help the operator/test engineer identify a sufficiently comprehensive characterization of initial conditions. Details of what can be done in a more automated fashion would depend on the capabilities of the system under test.
3. Quick-Look Report Content: Both are important, depending on the system under test. The details are left for the company to propose and to determine utility and feasibility.
4. Test Complexity and Extensibility: The hope is that this technology could be extensible to a wide range of observations/systems associated with anti-submarine warfare.
5. Real-Time vs. Post-Test Reporting: Given that the range of test/observation use cases involves multi-day events, some amount of summarization during events would be useful, in addition to final summaries at the end of the test.