Mixed Reality Point Cloud Manipulation

Navy SBIR 25.1- Topic N251-033
Naval Sea Systems Command (NAVSEA)
Pre-release 12/4/24   Opens to accept proposals 1/8/25   Closes 2/5/25 12:00pm ET

N251-033 TITLE: Mixed Reality Point Cloud Manipulation

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Sustainment

OBJECTIVE: Develop a capability to use mixed reality hardware to visualize and modify 3-D point cloud models generated by Light Detection and Ranging (LiDAR) and photogrammetry, improving the ability of engineers and technicians to perform virtual ship checks in support of design, installation, and modernization and helping deliver ships on time at lower cost.

DESCRIPTION: Program Executive Offices (PEOs), shipyards, Original Equipment Manufacturers (OEMs), Alteration Installation Teams (AITs), Regional Maintenance Centers (RMCs), and others perform countless ship checks and inspections throughout a ship’s lifecycle. Investments are currently being made in creating dimensional digital twins with LiDAR, photogrammetry, and other 3-D scanning technologies. These technologies have proven invaluable for generating 3-D models that aid in various maintenance and sustainment functions throughout an asset’s lifecycle, but the Navy does not have an effective environment for visualizing and collaborating in the review of ship models.

3-D model generators and consumers visit ships, submarines, or other physical objects of interest; 3-D scan the physical asset using LiDAR or photogrammetry; generate a 3-D data model with point cloud software; and then view the 3-D model in a 2-D environment (typically a computer monitor) to support future 3-D work (for example, installation and modernization). This approach limits user performance and fidelity relative to what fully 3-D models offer, and reduces the effectiveness of the technology.

Immersive, 3-D native environments such as augmented reality (AR), virtual reality (VR), or holographic displays provide the opportunity to experience 3-D models in their native dimensions, allowing users to explore and visualize every aspect of structures and components in a familiar and lifelike environment. This will allow naval architects, engineers, technicians, logisticians, shipyard workers, and others across the NAVSEA enterprise to gain significantly more value from 3-D models, with the ability to collaborate in real time as if physically visiting the ship as a team.

While specific use cases differ in application, the general improvements to visualization are of scale, proportions, spatial relationships, interferences, and overlays of technical data and annotations from previous inspection and work crews. All of these factors will be invaluable to maintenance planning and coordination. Direct return on investment will come from improved detection and resolution of physical interferences, design flaws or conflicts, physical damage to equipment or platforms, and other material condition issues compared with traditional 2-D renderings on computer screens. Finally, mixed reality will offer collaborative touring, viewing, diagnosis, and resolution of the aforementioned issues to help diverse teams resolve challenges significantly faster, but these tools are not yet mature enough for wide adoption.

To improve the application, execution, and use of 3-D scanning technologies for shipyard applications, NAVSEA would greatly benefit from research, development, and transitioning of software tools that allow the exploration of models in full 3-D views. This concept of employment would be directly applicable to two primary user communities for design purposes:

    1. Ship-level inspections, issue documentation, and tagging, which occur on the deck plates of ships and are reviewed by both local and distributed engineering teams. Teams specifically inspect equipment for work and maintenance discrepancies (paint issues, corrosion, loose nuts, bolts, fittings, etc.), which should be annotated, documented, and reported via Navy IT systems. In a 3-D environment those annotations can be made directly in the model to better correlate issue status with the specific physical location and piece of equipment of concern, and the models can then be shared across multiple teams to maintain a single operations and maintenance picture.
    2. Long-term (multi-year) and short-term (single-year) modernization planning and design work, which occurs at the shipyard, at contractor offices, or at distributed Navy engineering laboratories. Engineers, architects, and technicians will take existing 3-D models and drawings, import CAD models for future installations and redesigns, look for interferences and poor condition of existing structures and materials, and annotate corrections that need to be performed by other teams. A collaborative environment where these models can be viewed and toured by diverse teams to rapidly resolve issues is critical, as is the ability to compare as-designed drawings to as-built and current-condition models and to take measurements inside those models.
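
For illustration only (a sketch, not a topic requirement), the comparison and measurement tasks described in item 2 could be prototyped with the open-source Open3D library as shown below; the file names, point indices, and tolerance value are hypothetical.

    import numpy as np
    import open3d as o3d

    # As-designed reference (e.g., points sampled from a CAD model) and the
    # as-built / current-condition scan -- both file names are hypothetical.
    designed = o3d.io.read_point_cloud("as_designed_sample.ply")
    as_built = o3d.io.read_point_cloud("as_built_scan.ply")

    # Per-point deviation of the as-built scan from the as-designed reference.
    deviations = np.asarray(as_built.compute_point_cloud_distance(designed))

    # Flag points that deviate beyond a working tolerance (5 cm here) so they
    # can be annotated and routed to the appropriate team for resolution.
    tolerance = 0.05
    flagged_idx = np.where(deviations > tolerance)[0]
    flagged = as_built.select_by_index(flagged_idx.tolist())
    print(f"{len(flagged_idx)} of {len(deviations)} points exceed {tolerance * 100:.0f} cm deviation")
    o3d.io.write_point_cloud("deviation_flags.ply", flagged)

    # A simple in-model measurement between two picked points (indices would
    # come from an interactive viewer; fixed values used here for illustration).
    p1 = np.asarray(as_built.points)[100]
    p2 = np.asarray(as_built.points)[2500]
    print(f"Measured distance: {np.linalg.norm(p2 - p1):.3f} m")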

PHASE I: Provide detailed workflows for ingesting 3-D point clouds into vendor software and hardware. Demonstrate similar capability using contractor-provided data to assess feasibility. To support this, the government will provide detailed requirements for interaction functionality, along with data specifications and standards for government models (provided at contract award). The Phase I Option, if exercised, will include the initial design specifications, capabilities description, a preliminary timetable, and a budget to build a scaled prototype solution in Phase II.

PHASE II: Demonstrate the ability to ingest, manipulate, and mark up 3-D models of Navy-representative ships generated by the government, with annotations that can be shared across teammates. Develop a full-scale prototype and complete a successful demonstration of the prototype’s capabilities.

PHASE III DUAL USE APPLICATIONS: Assist the Navy in transitioning this technology, in the form of a fully operational system (premised on the Phase II prototype), to government use, initially on DDG 51 class ships. The final product delivered at the end of Phase III will be an integrated hardware and software solution that can be used by industry, academic, or government engineering and operations teams that can benefit from collaboration in 3-D space. This includes operations planning, construction and construction management, surveying, and any other use case with similar requirements.

REFERENCES:

1. Wirth, Florian et al. "PointAtMe: Efficient 3D point cloud labeling in virtual reality." 2019 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2019.

2. Alexiou, Evangelos; Yang, Nanyang and Ebrahimi, Touradj. "PointXR: A toolbox for visualization and subjective evaluation of point clouds in virtual reality." 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2020.

3. Garrido, Daniel et al. "Point cloud interaction and manipulation in virtual reality." 2021 5th International Conference on Artificial Intelligence and Virtual Reality (AIVR).

4. Stets, Jonathan Dyssel et al. "Visualization and labeling of point clouds in virtual reality." SIGGRAPH Asia 2017 Posters, Article No. 31, pp. 1-2.

5. Maloca, Peter M. et al. "High-performance virtual reality volume rendering of original optical coherence tomography point-cloud data enhanced with real-time ray casting." Translational Vision Science & Technology, Vol. 7, No. 2, 2018.

KEYWORDS: LiDAR; Photogrammetry; Point-Cloud; Mixed-Reality; Annotation; Virtual Ship Check


** TOPIC NOTICE **

The Navy Topic above is an "unofficial" copy from the Navy Topics in the DoD 25.1 SBIR BAA. Please see the official DoD Topic website at www.dodsbirsttr.mil/submissions/solicitation-documents/active-solicitations for any updates.

The DoD issued its Navy 25.1 SBIR Topics pre-release on December 4, 2024 which opens to receive proposals on January 8, 2025, and closes February 5, 2025 (12:00pm ET).

Direct Contact with Topic Authors: During the pre-release period (December 4, 2024, through January 7, 2025), proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic. Once DoD begins accepting proposals on January 8, 2025, no further direct contact between proposers and topic authors is allowed unless the Topic Author is responding to a question submitted during the Pre-release period.

DoD On-line Q&A System: After the pre-release period, until January 22, at 12:00 PM ET, proposers may submit written questions through the DoD On-line Topic Q&A at https://www.dodsbirsttr.mil/submissions/login/ by logging in and following instructions. In the Topic Q&A system, the questioner and respondent remain anonymous but all questions and answers are posted for general viewing.

DoD Topics Search Tool: Visit the DoD Topic Search Tool at www.dodsbirsttr.mil/topics-app/ to find topics by keyword across all DoD Components participating in this BAA.

Help: If you have general questions about the DoD SBIR program, please contact the DoD SBIR Help Desk via email at [email protected]

Topic Q & A

1/10/25  Q. Will Letters of Support be read and considered as part of the Phase I proposal review for this topic?
   A. If the Letter of Support is included within the Technical Volume, Volume 2, it will be included in the evaluation. As noted in the Navy SBIR 25.1 Phase I instruction, items included in the Supporting Documents (Volume 5) will not be considered in the selection process and will only undergo a compliance review.
1/6/25  Q. For the Phase 1 Base (not Option), are there specific technical milestones or deliverables you recommend including in the proposal for demonstrating feasibility?
   A. No specific milestones are required.
1/6/25  Q. 1. In the Phase I section, it mentions that the proposal should include detailed workflows for “ingesting of 3-D point clouds” and mentions nothing else. To what extent would you recommend that the proposal should also have detailed workflows covering interaction functionality? Would you recommend that it be as detailed or less so than the workflows for ingestion? Could you clarify the level of detail expected in the detailed workflows for 3-D point cloud ingestion and/or interaction functionality?
2. Given that interaction functionality specifications will be given after contract award, would you recommend basing any discussions, detailed workflows, etc. concerning interaction functionality to be based on our own ideas of possible interaction functionality?
3. Does “demonstrate similar capability” refer to demonstration of ingestion of point clouds? OR is the demonstration expected to also include interaction functionality?
   A. 1. Please provide enough detail that the government reviewers can understand the broad, general steps necessary to prepare and ingest point cloud data into the tools, along with any necessary software.
2. Vendor best recommendations are sufficient for phase 1.
3. Please demonstrate capability required by the government but using data sets provided by the contractor. In follow-on phases, Government provided data will be used.
1/6/25  Q.
  1. Are the multiple users who work together to review the models distributed across multiple locations or are they working from the same physical location?
  2. Has NAVSEA committed to or procured any specific XR headset for this use case or others?
  3. The headset market today has tradeoffs between computational power and mobility. Assuming the large scale of the model, can proposing companies focus on platforms that offer a tethered (i.e. headset + computer) solution, or is mobility prioritized over accuracy?
  4. Navy has previously funded similarly scoped SBIR contracts with QualTech that went to Phase II and are listed on NavySTP - why are you putting out a new call for proposals instead of going to P3 with QualTech or using COTS options like the Resolve App, Figmin, XR Viewer or Campfire? What about the available solutions is inadequate to meet Navy needs, and/or how is the desired solution different?
  5. Are there non-SBIR procurement budgets to support a Phase III transition with the Navy for this solution? If not, is there a clear path to secure budget within the Phase 2 period of performance?
  6. What is the security classification level of the digital twins of the ships? What security standards will headsets need to meet to be used under this use case?
  7. Once an annotation, tag, or document has been created during a mixed reality session, does it need to transfer to a Navy IT system?
  8. Does “manipulate” mean altering the 3D model, adding prefab artifacts, taking measurements, or something else? If something else, can you please elaborate?
   A. 1. Users will be co-located but providing a 2-D display that can be shared via remote collaboration tools is valuable.
2. No specific XR Platform has been selected, but TAA compliance is highly preferred if possible as it will assist with technology transition without migration to different platforms.
3. Tethered solutions are preferred due to cybersecurity approvals concerns.
4. This information will not be provided as part of the BAA.
5. The technology transition piece will be part of the execution strategy of the SBIR. At this point we cannot communicate that with vendors.
6. Currently, LiDAR point cloud models are UNCLASS CUI-T. Future use cases may rise to higher classifications, but that is not a requirement under Phase I or Phase II.
7. Annotations, tags, and documents would be intended for reporting via Navy IT systems. Annotations and other notes must be stored for future retrieval.
8. Manipulation refers to various operations, including filtering, editing, and transforming, that improve point cloud data. Changes to the point cloud are expected to occur in a 3-D environment, for example by selecting and removing obstructions.
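For illustration, a minimal Python sketch of the kinds of operations listed above (filtering, then selecting and removing an obstruction); it assumes the open-source Open3D library, and the file names and bounding-box coordinates are hypothetical rather than government-specified.
    import open3d as o3d

    # Load a scan of a compartment (hypothetical file name).
    pcd = o3d.io.read_point_cloud("compartment_scan.ply")

    # Filtering: thin the cloud and drop statistical outliers (sensor noise).
    pcd = pcd.voxel_down_sample(voxel_size=0.02)  # 2 cm voxels
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Editing: remove an obstruction (e.g., temporary staging) enclosed by a
    # bounding box that a user would select interactively in the 3-D viewer.
    box = o3d.geometry.AxisAlignedBoundingBox((1.0, 0.0, -1.0), (2.5, 2.0, 1.5))
    obstruction_idx = box.get_point_indices_within_bounding_box(pcd.points)
    cleaned = pcd.select_by_index(obstruction_idx, invert=True)  # keep the rest

    # Persist the edited model so it can be shared with other reviewers.
    o3d.io.write_point_cloud("compartment_scan_cleaned.ply", cleaned)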
1/5/25  Q.
  1. What are the specific data formats, specifications, and standards for the government-provided 3-D models and annotations? Will LiDAR and photogrammetry outputs require preprocessing before ingestion?
  2. Are there preferred or required mixed reality hardware platforms (e.g., AR/VR headsets) that the solution should support? Should the system be cross-compatible with multiple devices?
  3. What level of real-time collaboration is expected? Should the solution support simultaneous annotations, voice communication, or shared measurements across distributed teams?
  4. What benchmarks or success criteria will be used to evaluate the system’s performance in Phase II, especially in terms of visualization fidelity, usability, and speed?
  5. Are there specific cybersecurity or IT compliance standards that the solution must adhere to, given its use in sensitive Navy environments?
  6. For non-military applications, are there specific industries or functionalities (e.g., construction management, surveying) that should be prioritized during development?
  7. Should the final solution include user training materials or integration support for existing Navy systems and workflows?
   A. 1. Common government model formats include .las, .laz, .xyz, .e57, .ptx, .obj, .wrl, .stl, and .ply, for example. Ingestion may include preprocessing workflows for LiDAR and photogrammetry outputs (see the ingestion sketch at the end of this answer).
2. There are no preferred or required XR platforms, but TAA-compliant devices are strongly preferred when possible.
3. Support for multiple participants in one point cloud is desired. It is assumed that those participants are co-located with access to the same devices in the event that a PC is driving the XR device.
4. This information will not be provided as part of the BAA.
5. TAA-compliant devices are strongly desired.
6. The government does not have a preference.
7. Basic charts covering the subject material are sufficient for Phase I. Follow-on contract awards would require this documentation.
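To make the ingestion and preprocessing workflow in item 1 above concrete, here is a minimal Python sketch (not a government-endorsed toolchain) that reads one of the listed formats (.las) with the open-source laspy library, converts it to an Open3D point cloud, and applies basic preprocessing before export; the file names and parameter values are hypothetical.
    import laspy
    import numpy as np
    import open3d as o3d

    # Ingest a LiDAR scan in one of the listed formats (.las). Other listed
    # formats (.e57, .ptx, .obj, etc.) would need their own readers.
    las = laspy.read("pier_side_scan.las")
    xyz = np.vstack((las.x, las.y, las.z)).T  # scaled x/y/z coordinates

    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)

    # Preprocessing typical of LiDAR/photogrammetry outputs: downsample to a
    # density a headset can render and remove isolated noise points.
    pcd = pcd.voxel_down_sample(voxel_size=0.01)  # 1 cm voxels
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=16, std_ratio=2.5)

    # Export to a format downstream XR visualization tools commonly accept.
    o3d.io.write_point_cloud("pier_side_scan_preprocessed.ply", pcd)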
12/11/24  Q. N251-033: The topic description makes reference to multiple data sources for the Navy created digital twins, including LIDAR scans, photogrammetry, and other 3D scanning technologies. However, the PHASE I: section makes specific reference only to 3D point clouds (typical of LIDAR scans) and does not mention complete “polygon” or “mesh” models of any kind. Is it correct that the phase 1 effort should specifically focus on ingest of point-cloud data (for example, .las point clouds)? Or is the use of the term “3-D point clouds” here shorthand, and intended to cover both point cloud and polygonal/mesh model digital twins?
   A. Both point cloud and mesh are important, but most existing data is probably in point cloud format, so that’s a good threshold.

