Generalizable Artificial Intelligence/Machine Learning (AI/ML) Undersea Warfare (USW) Quick-Look Tool

Navy SBIR 25.1- Topic N251-041
Naval Sea Systems Command (NAVSEA)
Pre-release 12/4/24   Opens to accept proposals 1/8/25   Closes 2/5/25 12:00pm ET

N251-041 TITLE: Generalizable Artificial Intelligence/Machine Learning (AI/ML) Undersea Warfare (USW) Quick-Look Tool

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Trusted AI and Autonomy

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.

OBJECTIVE: Develop a configurable Artificial Intelligence/Machine Learning (AI/ML) tool that generates USW quick-look reports for laboratory and at-sea test data collection events.

DESCRIPTION: Documenting the outcomes of laboratory and at-sea test procedures involves time-consuming manual processes, variability in expertise, and subjectivity in interpretation. Manual interpretation of test results is prone to human error and requires substantial time. The potential for error and delay increases with the complexity and volume of the data, and delay also occurs when multiple professionals must reach consensus on how to interpret it.

Not all test engineers have the same level of experience or knowledge when interpreting test results, leading to inconsistencies in reported outcomes. This variability can, in turn, produce inconsistent management decisions, because different individuals interpret the same data differently.

Further, engineers may draw contrasting conclusions from the same test data, even when the test is simple (e.g., calibration of a sensor array), adding to the variability in outcomes. These challenges are compounded by other factors, such as the quality of results, the purpose of the test procedure, and the reliability of test measurements.

The Navy seeks a Generalizable USW Quick-Look Tool that reduces variability in outcomes and helps inexperienced test and manufacturing personnel reach an advanced state of expertise. There is currently no commercial tool that can accomplish this.

The initial target of the technology would be relatively simple and repeatable tests, such as towed receive array calibration and inspection. The solution must be extensible to more complex test procedures, and the tool will be evaluated on the accuracy of results in the report, the usability of the information provided, and the reduction in reporting time relative to current practice. To demonstrate its abilities, the solution must show a range of quick-look test summaries covering representative tests, from simple calibrations to complex test series spanning multiple days, test objectives, and environmental conditions. It must also perform a pre-test quality assurance check that can detect mechanical inconsistencies between the planned test setup and the actual hardware configuration, as sketched below.
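
For illustration only, the Python sketch below shows one way such a pre-test check might work: it compares a planned test configuration against the configuration actually reported by the hardware and tags each mismatch by severity. The field names, values, and severity rules are hypothetical assumptions, not a prescribed design.

# Minimal sketch of a pre-test QA check: compare the planned test setup
# against the hardware configuration actually reported by the equipment and
# tag each mismatch by severity. Field names and severity rules are notional.
from dataclasses import dataclass

@dataclass
class Discrepancy:
    field: str
    planned: object
    actual: object
    severity: str  # "critical" or "warning"

# Fields whose mismatch should block the test (hypothetical choices).
CRITICAL_FIELDS = {"array_serial", "channel_count", "sample_rate_hz"}

def pretest_qa_check(planned: dict, actual: dict) -> list:
    """Return every planned-vs-actual mismatch, tagged by severity."""
    discrepancies = []
    for key, planned_value in planned.items():
        actual_value = actual.get(key)
        if actual_value != planned_value:
            severity = "critical" if key in CRITICAL_FIELDS else "warning"
            discrepancies.append(Discrepancy(key, planned_value, actual_value, severity))
    return discrepancies

# Example usage with notional values:
planned = {"array_serial": "ARRAY-0042", "channel_count": 96,
           "sample_rate_hz": 12500, "tow_speed_kts": 5}
actual = {"array_serial": "ARRAY-0042", "channel_count": 48,
          "sample_rate_hz": 12500, "tow_speed_kts": 6}
for d in pretest_qa_check(planned, actual):
    print(f"[{d.severity.upper()}] {d.field}: planned {d.planned}, observed {d.actual}")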

The concept will be evaluated based on feasibility, range of extensibility across test complexity (calibration test to multi-day multi-objective testing) and type (in-lab testing to at-sea testing), ease of use for test engineers, and clarity of test result presentation.

The minimum viable product (MVP) version of the end result will undergo independent testing by the IWS 5.0 Machine Learning Working Group. This independent testing will include using the prototype with classified data sets.

Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by 32 CFR § 2004.20 et seq., National Industrial Security Program Executive Agent and Operating Manual, unless acceptable mitigating procedures can and have been implemented and approved by the Defense Counterintelligence and Security Agency (DCSA), formerly the Defense Security Service (DSS). The selected contractor must be able to acquire and maintain a secret level facility clearance and personnel security clearances. This will allow contractor personnel to perform on advanced phases of this project as set forth by DCSA and NAVSEA in order to gain access to classified information pertaining to the national defense of the United States and its allies; this will be an inherent requirement. The selected company will be required to safeguard classified material during the advanced phases of this contract IAW the National Industrial Security Program Operating Manual (NISPOM), which can be found at Title 32, Part 2004.20 of the Code of Federal Regulations.

PHASE I: Develop a concept for an AI/ML USW Quick-Look tool and demonstrate that it can feasibly meet the parameters of the Description. Demonstrate feasibility through modeling and testing.

The Phase I Option, if exercised, will include the initial design specifications and capabilities description to build a prototype solution in Phase II.

PHASE II: Develop and deliver a prototype AI/ML USW Quick-Look tool based on the results of Phase I. Demonstrate the technology through independent evaluation of the MVP prototype by the government Machine Learning Working Group, which will test the prototype using classified data sets.

It is probable that the work under this effort will be classified under Phase II (see Description section for details).

PHASE III DUAL USE APPLICATIONS: Assist the Navy in transitioning the technology to Navy use. It is anticipated that the final product will eventually be used across PEO Integrated Warfare Systems (IWS) and USW to develop quick-look reports for both laboratory and at-sea tests. The Space, Weight, Power, and Cooling (SWAP-C) requirements of the final product will determine how test engineers can use it in cases where cloud-based test infrastructure is not available.

The Generalizable Quick-Look Tool will be of use in numerous applications where engineering tests must be rapidly summarized to support product decisions or provide insight to customers. Given the anticipated domestic reshoring of product manufacturing, the Generalizable Quick-Look Tool could become a major help to future manufacturers who will often lack sufficient seasoned personnel to mentor the rising workforce using traditional master-apprentice techniques.

REFERENCES:

1. Yoshimura, Heather. "Beyond Numbers: How AI Can Be Used To Assist With Lab Results Interpretation and Patient Outcomes." AGNP-PC, 17 July 2023. https://www.rupahealth.com/post/beyond-numbers-how-ai-can-be-used-to-assist-with-lab-results-interpretation-and-patient-outcomes

2. Yu, Mengling et al. "Array shape calibration with phase unwrapping techniques for highly deformed arrays." IET Radar, Sonar & Navigation, 10 June 2021. https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/rsn2.12131

3. "AN/SQQ-89(V) Undersea Warfare / Anti-Submarine Warfare Combat System." https://www.navy.mil/Resources/Fact-Files/Display-FactFiles/Article/2166784/ansqq-89v-undersea-warfare-anti-submarine-warfare-combat-system/

4. "National Industrial Security Program Executive Agent and Operating Manual (NISP), 32 U.S.C. § 2004.20 et seq. (1993)." https://www.ecfr.gov/current/title-32/subtitle-B/chapter-XX/part-2004

KEYWORDS: Quick-look test summaries; calibration of a sensor array; independent quality assurance check; complex test procedures; inexperienced test and manufacturing personnel; reduces variability in outcomes


** TOPIC NOTICE **

The Navy Topic above is an "unofficial" copy from the Navy Topics in the DoD 25.1 SBIR BAA. Please see the official DoD Topic website at www.dodsbirsttr.mil/submissions/solicitation-documents/active-solicitations for any updates.

The DoD issued its Navy 25.1 SBIR Topics pre-release on December 4, 2024; the BAA opens to receive proposals on January 8, 2025, and closes February 5, 2025 (12:00pm ET).

Direct Contact with Topic Authors: During the pre-release period (December 4, 2024, through January 7, 2025), proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic. Once DoD begins accepting proposals on January 8, 2025, no further direct contact between proposers and topic authors is allowed unless the Topic Author is responding to a question submitted during the pre-release period.

DoD On-line Q&A System: After the pre-release period, until January 22, at 12:00 PM ET, proposers may submit written questions through the DoD On-line Topic Q&A at https://www.dodsbirsttr.mil/submissions/login/ by logging in and following instructions. In the Topic Q&A system, the questioner and respondent remain anonymous but all questions and answers are posted for general viewing.

DoD Topics Search Tool: Visit the DoD Topic Search Tool at www.dodsbirsttr.mil/topics-app/ to find topics by keyword across all DoD Components participating in this BAA.

Help: If you have general questions about the DoD SBIR program, please contact the DoD SBIR Help Desk via email at [email protected]

Topic Q & A

1/22/25  Q.
  1. Are there any plans to make simple and/or complex examples of finished unclassified laboratory testing documentation available to Phase I awardees?
  2. Are there any plans to make raw data for simple and/or complex tests (e.g., towed receive array calibration and inspection) available to Phase I awardees?
  3. What kind of training do service personnel receive before being given the responsibility of documenting the outcomes of laboratory and at-sea system testing?
  4. Is there structured and reviewed documentation specifying test objectives before laboratory and at-sea testing commence? Would such documentation be available to use as an input to the USW Quick-Look Tool?
  5. Is it fair to say that the inputs to the test engineer for documenting laboratory and at-sea tests are (1) formally documented test objectives and (2) raw data (e.g., towed receive array raw data)? And that the job of the test engineer is to document the outcome of the test based on interpretation of the data meeting the test objectives?
   A.
  1. Examples up to the Controlled Unclassified Information (CUI) level will be provided to Phase I awardees, but for proposals we recommend that you discuss tests your technology would be able to handle, once developed.
  2. Sample data up to the Controlled Unclassified Information (CUI) level could be provided to Phase I awardees, but for proposals we recommend that you develop a plan for demonstrating your technology using information you generate, simulate, or obtain from other sources. The reason for this approach is to allow you to develop a technology you can present to other entities without constantly having to ask IWS 5.0 permission to share results that include their CUI.
  3. Training depends on the nature of the testing. From the standpoint of your ability to produce a powerful product with many potential applications, you would be advised to make it a tool that could be used by relatively unsophisticated persons. Even our highly intelligent and highly trained personnel aren’t necessarily experts in future machine learning, for example.
  4. Before novel experiments occur, it is standard practice to develop a Measurement and Analysis Plan (MAP). For standard tests and events, there should be a similar procedure in place. Where such procedures and MAPs exist, versions up to the Controlled Unclassified Information (CUI) level could be provided to Phase I awardees. However, it is recommended that you propose to demonstrate feasibility using documentation that you can generate, simulate, or obtain from other sources, so you are not dependent on the availability of CUI data to demonstrate the full feasibility of your technology.
  5. While formal test objectives and raw data are almost always elements of such experiments, it is not fair to say that these are the only two inputs. As General Eisenhower often said, “Peace-time plans are of no particular value, but peace-time planning is indispensable.” The reason for testing is that we don’t know the outcome, and we may not be certain the test will occur as planned. A further desire for this technology is to facilitate rapid review by management so the quick look can be released quickly, which extends beyond mere capture of test objectives and raw data. (A notional report structure reflecting these inputs is sketched below.)
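
As a purely notional illustration of how the inputs discussed in this answer (test objectives, conditions, preliminary results, and management review status) might be organized into a quick-look report, the Python sketch below defines a simple report structure and renders it as text for rapid review. The section names, fields, and example content are assumptions for illustration, not a required format.

# Notional sketch of a quick-look report structure built from the inputs
# discussed above: objectives, conditions, preliminary results, open
# questions, and management review status. All fields are illustrative.
from dataclasses import dataclass

@dataclass
class ObjectiveResult:
    objective: str
    status: str      # e.g., "met", "partially met", "inconclusive"
    notes: str = ""

@dataclass
class QuickLookReport:
    event_name: str
    date_range: str
    conditions: dict              # e.g., environment, location, duration
    objectives: list              # list of ObjectiveResult
    preliminary_results: list     # short free-text findings
    open_questions: list          # items needing follow-up analysis
    reviewer_status: str = "draft"   # e.g., "draft", "released"

    def render(self) -> str:
        """Render a plain-text summary suitable for rapid management review."""
        lines = [
            f"QUICK LOOK: {self.event_name} ({self.date_range})",
            f"Review status: {self.reviewer_status}",
            "Conditions: " + ", ".join(f"{k}={v}" for k, v in self.conditions.items()),
            "Objectives:",
        ]
        lines += [f"  - {o.objective}: {o.status}" + (f" ({o.notes})" if o.notes else "")
                  for o in self.objectives]
        lines += ["Preliminary results:"] + [f"  - {r}" for r in self.preliminary_results]
        lines += ["Open questions:"] + [f"  - {q}" for q in self.open_questions]
        return "\n".join(lines)

# Example usage with entirely notional content:
report = QuickLookReport(
    event_name="Towed array calibration (lab)",
    date_range="2025-01-15",
    conditions={"location": "lab", "duration_hr": 4},
    objectives=[ObjectiveResult("Verify channel gain within tolerance", "met")],
    preliminary_results=["All channels within tolerance after recalibration."],
    open_questions=["Confirm drift on channel 17 against the prior calibration."],
)
print(report.render())
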
12/31/24  Q.
1. Test-Specific Data
1. What are the key types of test data used during laboratory and at-sea evaluations (e.g., calibration results, environmental conditions)? Would it be possible to get a sample data set?
2. Are there specific test cases (e.g., array calibration, multi-objective testing) that should be prioritized during development?
3. How are test configurations documented, and do they include metadata like environmental parameters or equipment settings?
4. Should the tool support comparing test results across different conditions or iterations?

2. Pre-Test QA Checks
1. What are the common pre-test issues or inconsistencies that should be flagged (e.g., mismatched configurations, hardware defects)?
2. Are there predefined QA checklists or standards the tool must align with?
3. Should the tool prioritize identifying critical QA issues or flag all discrepancies, regardless of severity?

3. Quick-Look Report Content
1. Should quick-look reports focus on high-level summaries, or should they include detailed insights for individual test scenarios?
2. How should the tool handle reporting for multi-day or multi-objective tests?
3. Are there any specific visualizations or metrics that must be included in the quick-look reports?

4. Test Complexity and Extensibility
1. How diverse are the test scenarios in terms of environmental conditions and equipment setups?
2. Should the tool be extensible to handle new test types or future testing requirements?
3. How should the tool address variability in test data formats or levels of completeness?

5. Real-Time vs. Post-Test Reporting
1. Should the tool generate real-time summaries during testing, or is it focused on post-test reporting?
   A.
1. Test-Specific Data
1. The sorts of information collected during an event would be preliminary results, initial hypotheses about unusual results, conditions (environmental, location, duration of test(s) or collections), and calibration results (when applicable). Recall that this SBIR is for research into how AI/ML could assist in generating quick looks, so the nature of the events/tests to be summarized will not be precisely prescribed, else it would not require research.
2. The most compelling Phase I demonstration of feasibility will be a capability that is extensible beyond specific cases. The twin thrusts mentioned by authors during calls have been 1) reducing the effort on the operator (e.g., helping auto-generate quick look content and/or helping guide the operator through a comprehensive quick look) and 2) assisting QA (e.g. managers) to determine that the content is appropriate for wider dissemination.
3. The system we intend to initially target with this technology is the AN/SQQ-89A(V)15. The system has a powerful recording function, the Recorder Functional Segment (RecFS), which includes information such as acoustic data, environmental information, equipment settings, screen shots, and operator interactions with the system. This system was designed so analysts could perform forensic analysis of interesting real-world events. Other tests, however, such as array calibrations, do not involve RecFS, so an innovative framework and ontology for generating quick-look reports would need to function both when extensive recording capability is available and when it is not (a notional record structure illustrating this is sketched after this Q&A entry).
4. The technology demonstrated as feasible during the Phase I Base should be extensible to comparing the results with other similar events.
2. Pre-Test QA Checks
1. There are many possible architectures/frameworks that could form the basis for the desired research. It would be desirable to help the operator/test engineer identify a sufficiently comprehensive characterization of initial conditions. Details of what can be done in a more automated fashion would depend on the capabilities of the system under test.
2. The fundamental technology sought could serve as a quick look tool for numerous situations, so the QA checklists or standards would vary depending on the system under test. It is not so much that the tool would prioritize identifying critical QA issues, though identifying important QA conditions (whether compliant or not) would seem to be important in the overarching framework.
3. Quick-Look Report Content
1. Both are important, depending on the system under test. The details are left for the company to propose and determine utility and feasibility.
2. The government does not want to become prescriptive of the final solution here. This is innovative research and we look forward to proposals for how the important case of multi-day and multi-objective observations/tests would be accommodated.
3. Specifics would change according to the system/observation involved.
4. Test Complexity and Extensibility
1. The hope is that this technology could be extensible to a wide range of observations/systems associated with anti-submarine warfare.
2. Yes.
3. The framework should be capable of accommodating numerous data formats and levels of completeness.
5. Real-Time vs. Post-Test Reporting
1. Given that the range of test/observation use cases includes multi-day events, some amount of summarization during events would be useful, in addition to final summaries at the end of the test.
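
As a purely notional illustration of item 1.3 above (functioning both with and without extensive recording) and item 4.3 (accommodating varying data formats and levels of completeness), the Python sketch below shows one possible normalized event record in which recorder-derived fields are optional, so a sparse calibration test and a fully recorded at-sea event share one schema. All field names and example values are assumptions for illustration only.

# Notional sketch of a normalized event record whose recorder-derived fields
# are optional, so sparse calibration tests and fully recorded at-sea events
# can share one schema. All field names and values are illustrative.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EventRecord:
    event_id: str
    event_type: str                   # e.g., "array_calibration", "at_sea_exercise"
    environment: dict = field(default_factory=dict)        # conditions captured, if any
    equipment_settings: dict = field(default_factory=dict)
    calibration_results: Optional[dict] = None              # calibration events only
    recorder_segments: Optional[list] = None                 # only when full recording exists
    operator_notes: list = field(default_factory=list)

    def completeness(self) -> float:
        """Rough fraction of sections populated, to flag sparse records for reviewers."""
        sections = [self.environment, self.equipment_settings,
                    self.calibration_results, self.recorder_segments, self.operator_notes]
        return sum(1 for s in sections if s) / len(sections)

# A sparse calibration record and a richer recorded event use the same schema:
cal = EventRecord("CAL-001", "array_calibration",
                  calibration_results={"channels_in_tolerance": 48})
rich = EventRecord("EX-042", "at_sea_exercise",
                   environment={"sea_state": 3},
                   equipment_settings={"mode": "passive"},
                   recorder_segments=["segment_001", "segment_002"],
                   operator_notes=["Contact held intermittently."])
print(f"{cal.event_id}: completeness {cal.completeness():.2f}")
print(f"{rich.event_id}: completeness {rich.completeness():.2f}")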

