Auryn Aero develops certifiable algorithms based on ML, AI and Sensor Fusion to enhance the navigation, operations and safety of UAVs.
We help our customers and partners integrate high-performance GN&C algorithms based on ML, AI and Sensor Fusion on a range of custom or third-party hardware, optimizing the performance expected from the system while minimizing the impact of upgrading an already-fielded system.
→ 01
Moving Platform Landing
Applications:
↳ Military and Defense Logistics
↳ Maritime Operations
↳ Emergency Medical Services
↳ Automated Delivery Systems
This functionality is specifically designed to enable precise and safe landings on moving platforms. By incorporating advanced AI algorithms and leveraging data from onboard sensors such as cameras and LiDAR, our system is capable of real-time detection and adaptation to the motion of landing surfaces. This advancement is facilitated through the implementation of Target Tracking and Target Localization features, which provide essential coordination and control, ensuring a stable and secure landing process.
Our approach integrates the flexibility of Python with the robustness of C/C++ to deliver unparalleled performance in dynamic environment adaptation. Integration with various Flight Control System (FCS) frameworks, including PX4 and ArduPilot, enhances the drone's autonomous navigation capabilities, ensuring seamless operation from takeoff to landing. Inputs such as images, streaming video, and diverse sensor readings (IMU, GPS, ToF, LiDAR, etc.) are analyzed in real time to generate the commands that guide the drone through the complexities of landing on a moving platform, ensuring operational safety and reliability under a wide array of conditions. This capability not only enhances operational efficiency but also paves the way for groundbreaking applications across various sectors.
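As a rough illustration of this command pipeline, the sketch below closes a landing loop over MAVLink with velocity setpoints, in the form a PX4- or ArduPilot-based FCS accepts. The tracker class, gains, and rates here are assumptions made for the example, not our production implementation.

```python
# Minimal sketch of a moving-platform landing loop (pymavlink, velocity setpoints).
import time
from pymavlink import mavutil

class PlatformTracker:
    """Hypothetical stand-in for the camera/LiDAR fusion front end."""
    def offset_ned(self):
        # Return (north, east, down) offset to the platform, in metres.
        return 0.0, 0.0, 0.0          # stub: replace with real target tracking

KP, V_MAX, SINK = 0.8, 2.0, 0.4       # assumed gain, velocity clamp, sink rate (m/s)
clamp = lambda v, lim: max(-lim, min(lim, v))

tracker = PlatformTracker()
fcs = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
fcs.wait_heartbeat()

while True:
    dx, dy, dz = tracker.offset_ned()
    vx, vy = clamp(KP * dx, V_MAX), clamp(KP * dy, V_MAX)
    vz = SINK if (dx * dx + dy * dy) ** 0.5 < 0.5 else 0.0   # descend once centred
    fcs.mav.set_position_target_local_ned_send(
        0, fcs.target_system, fcs.target_component,
        mavutil.mavlink.MAV_FRAME_BODY_OFFSET_NED,
        0b0000111111000111,           # type mask: command velocity fields only
        0, 0, 0, vx, vy, vz, 0, 0, 0, 0, 0)
    time.sleep(0.1)                   # 10 Hz command rate
```

In practice the tracker feeds back the Target Tracking and Target Localization estimates described above, and the descent logic is gated on tracking confidence.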
→ 02
Autonomous Safe Landing Spot Identification
This advanced solution, powered by Sensor Fusion technology, synthesizes data from onboard sensors and video processing to pinpoint the most suitable landing areas with high accuracy. Incorporating Obstacle Avoidance and Environmental Mapping, the system identifies the safest landing zones while accounting for a range of environmental factors.
Inputs such as images, streaming video, and data from onboard sensors like IMU, GPS, ToF, and LiDAR are processed to generate a heatmap of the aerial view, indicating safe landing zones from green (safe) to red (unsafe), providing a clear and actionable output for drone operations.
Applications:
↳ Drone Delivery Services
↳ Military and Defense Reconnaissance
↳ Security and Surveillance Operations
↳ Emergency Response and Search and Rescue
↳ Construction and Infrastructure Inspection
Furthermore, Multi-Target Detection assesses the vicinity for humans and vehicles, marking those areas as off-limits to maintain operational safety.
The core functionality of this toolbox is built on Python and C/C++, taking advantage of OpenCV and the VINS-Fusion algorithm to deliver sophisticated computer vision capabilities. This integration facilitates the execution of precise landing maneuvers, supported by ROS and different FCS frameworks for enhanced control and adaptability. Designed for compatibility with a wide range of operating systems that support Python and C++, this solution is accessible and reliable for diverse operational needs.
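The sketch below illustrates the heatmap step described above in its simplest form: scoring terrain flatness from a depth image and colouring patches from green (safe) to red (unsafe), with a keep-out mask for detected humans and vehicles. The patch size and roughness threshold are assumptions for the example, not production values.

```python
# Minimal sketch: landing-zone heatmap from a metric depth image (NumPy).
import numpy as np

def landing_heatmap(depth_m, keepout_mask=None, patch=16, max_std_m=0.5):
    """Colour each patch green (flat, safe) to red (rough, unsafe) in BGR."""
    h, w = depth_m.shape
    heat = np.zeros((h, w, 3), dtype=np.uint8)
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            roughness = float(np.std(depth_m[y:y + patch, x:x + patch]))
            score = np.clip(1.0 - roughness / max_std_m, 0.0, 1.0)
            heat[y:y + patch, x:x + patch] = (
                0, int(255 * score), int(255 * (1.0 - score)))
    if keepout_mask is not None:          # humans/vehicles from Multi-Target Detection
        heat[keepout_mask] = (0, 0, 255)  # hard red: off-limits
    return heat
```

The result overlays directly onto the aerial view, giving operators (or the autonomy stack) an immediately actionable picture of where the aircraft may and may not land.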
→ 03
Collision & Obstacle Avoidance
Applications:
↳ Security and Surveillance
↳ Search and Rescue Operations
↳ Infrastructure Inspection
↳ Agricultural Monitoring
↳ Military and Defense Reconnaissance
The Collision and Obstacle Avoidance solution includes sophisticated AI algorithms capable of interacting with an array of sensors, enabling the detection and avoidance of potential collisions with objects or obstacles in real time. The system meticulously analyzes sensor data to identify any obstacles in the drone's flight path, dynamically adjusting its trajectory to maintain safe operation across varied and unpredictable environments.
The core of this solution is the integration of Python and C/C++, combining the strengths of both to enhance video processing and analysis capabilities. Utilizing different visual analysis frameworks along with a deep neural network (DNN) architecture, trained by us on selected data, this setup ensures the precise detection of and swift response to obstacles. Integration with different FCS frameworks further enhances this solution, equipping it with cutting-edge functionalities essential for conducting safe and autonomous drone flight operations.
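As an illustration of this pipeline, the sketch below runs an SSD-style detector through OpenCV's dnn module and derives a simple steering hint from the detections. The model files, thresholds, and avoidance rule are placeholders for the example; they are not our trained network or flight logic.

```python
# Minimal sketch: DNN obstacle detection plus a naive steering hint (OpenCV).
import cv2
import numpy as np

# Placeholder model files standing in for the trained network described above.
net = cv2.dnn.readNetFromCaffe("detector.prototxt", "detector.caffemodel")

def detect_obstacles(frame, conf_thresh=0.5):
    """Return obstacle boxes (x1, y1, x2, y2, confidence) in pixel coordinates."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()                      # SSD output: (1, 1, N, 7)
    boxes = []
    for i in range(detections.shape[2]):
        conf = float(detections[0, 0, i, 2])
        if conf > conf_thresh:
            x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
            boxes.append((x1, y1, x2, y2, conf))
    return boxes

def steering_hint(boxes, frame_w):
    """Yaw away from any obstacle near the centre of the flight path."""
    for x1, _, x2, _, _ in boxes:
        cx = (x1 + x2) / 2
        if abs(cx - frame_w / 2) < frame_w * 0.2:   # obstacle ahead
            return "yaw_right" if cx < frame_w / 2 else "yaw_left"
    return "hold_course"
```

In the real system these detections are fused with range data and handed to the FCS as trajectory adjustments rather than discrete hints.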
→ 04
GNSS-Denied Navigation
This solution leverages a combination of techniques and algorithms, tailored to the specific operational needs of drones, to navigate and execute autonomous flight tasks in environments where GNSS (Global Navigation Satellite System) signals are compromised or entirely absent. Utilizing AI algorithms that integrate visual odometry with inertial measurement units (IMUs), the system empowers drones to accurately determine their position and orientation without reliance on satellite navigation. For indoor operations, SLAM (Simultaneous Localization and Mapping) algorithms reconstruct the immediate environment to facilitate navigation. In contrast, outdoor scenarios where SLAM may not be applicable rely on a blend of dead reckoning and other navigational methods.
Here, the drone's current location is inferred from previously known positions, incorporating speed and directional data. Sensor Fusion technology is applied to onboard sensor data, effectively minimizing the cumulative error associated with in-flight positional estimations.
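A minimal sketch of this dead-reckoning update follows, with a simple complementary-filter blend standing in for the full Sensor Fusion stack; the blend weight is an assumed value for the example.

```python
# Minimal sketch: 2-D dead reckoning with a drift-bounding correction step.
import math

class DeadReckoner:
    def __init__(self, x=0.0, y=0.0, alpha=0.1):
        self.x, self.y = x, y
        self.alpha = alpha            # assumed weight given to the correction source

    def predict(self, speed_mps, heading_rad, dt_s):
        """Classic dead reckoning: integrate speed along the current heading."""
        self.x += speed_mps * math.cos(heading_rad) * dt_s
        self.y += speed_mps * math.sin(heading_rad) * dt_s

    def correct(self, vo_x, vo_y):
        """Blend in a visual-odometry fix to bound the cumulative drift."""
        self.x = (1 - self.alpha) * self.x + self.alpha * vo_x
        self.y = (1 - self.alpha) * self.y + self.alpha * vo_y
```

Each IMU tick drives `predict`; whenever visual odometry produces a fix, `correct` pulls the estimate back toward it, which is the drift-bounding role Sensor Fusion plays in the full system.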
This functionality is engineered through the integration of Python and C++, combining OpenCV, the ORB-SLAM2 library, and the visual-inertial library VINS-Fusion to provide comprehensive image processing and autonomous navigation capabilities. The system processes inputs from images, streaming video, and various onboard sensors (IMU, GPS, ToF, LiDAR, etc.), delivering outputs that include the drone's estimated position within unknown environments, enabling precise and reliable navigation even in the absence of GNSS signals.
Applications:
↳ Security and Surveillance
↳ Search and Rescue Operations
↳ Military and Defense
↳ Infrastructure Inspection
↳ Exploration and Mapping
→ 05
COMMS-Denied Navigation
This solution integrates Sensor Fusion of onboard sensor data with advanced Obstacle Avoidance algorithms, enabling drones to navigate without relying on radio-frequency and telemetry links, even in environments where traditional communications are compromised, such as indoor spaces and hardened infrastructure, ensuring seamless operation and enhanced safety.
This functionality is built upon the integration of Python and C/C++, utilizing state-of-the-art image processing techniques. The inclusion of ROS and customization for different FCS frameworks provide a solid foundation for executing autonomous flight operations, making it a versatile tool for any platform supporting Python and C/C++. The system processes inputs such as images, streaming video, and data from various onboard sensors (IMU, GPS, ToF, LiDAR, etc.) to deliver an estimated drone position in challenging or unknown environments, facilitating reliable and independent navigation.
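As a simplified illustration, the sketch below shows the shape of an onboard mission supervisor that flies a pre-loaded plan with no ground link in the loop; every name and tolerance in it is an assumption made for the example.

```python
# Minimal sketch: link-denied waypoint execution using only onboard estimates.
class OnboardMission:
    def __init__(self, waypoints, get_position, tolerance_m=1.0):
        self.waypoints = list(waypoints)   # (x, y, z) targets in the local frame
        self.get_position = get_position   # onboard fused position, no telemetry
        self.tol2 = tolerance_m ** 2       # assumed arrival tolerance, squared

    def next_setpoint(self):
        """Advance through the plan; None means the plan is complete (land)."""
        if not self.waypoints:
            return None
        x, y, _ = self.get_position()
        tx, ty, tz = self.waypoints[0]
        if (x - tx) ** 2 + (y - ty) ** 2 < self.tol2:
            self.waypoints.pop(0)          # waypoint reached, move to the next
            return self.next_setpoint()
        return (tx, ty, tz)
```

Each control cycle flies toward `next_setpoint()`, with the Obstacle Avoidance layer free to detour around anything the sensors pick up along the way.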
Applications:
↳ Security and Surveillance
↳ Military Operations
↳ Emergency Response
↳ Infrastructure Inspection
Every product purchase includes all the necessary documentation, a block of customer-training hours, comprehensive 24/7 after-sales assistance, and pre-planned maintenance. Contact us today to book a flight demonstration!