CDE4301 Innovation & Design Capstone
AY2024/2025 Semester 2
ASI-401: Indoor Search Using Swarm of Drones
Group Report
Acknowledgements
We would like to extend our heartfelt gratitude to everyone who has contributed to our project in one way or another. Your support has been invaluable in ensuring its smooth progress, and we could not have accomplished this without you.
Project Supervisors
- Dr. Elliot Law, Project Supervisor
- Mr. Nicholas Chew, Co-Project Supervisor
Temasek Lab & SAFMC Team
- Mr. Liew Yung Jun, Temasek Lab Staff, SAFMC 2024 Team Member
- Mr. Jimmy Chiun, Temasek Lab Staff, Former SAFMC Team Member
- Mr. William Leong, Temasek Lab Staff, Former SAFMC Team Member
EDIC Staff
- Ms. Annie Tan
- Mr. Alvin Poh
Residential College 4 (RC4) Staff & Directors
- Dr. Naviyn Prabhu Balakrishnan, Director of Student Life
- Mr. Scormon Ho Rui Sheng, Director of Sports
- Ms. Chloe Siew Ying Ning, Director of Clubs and Societies
- Ms. Ngu Hui Tze, Admin Staff
- Ms. Loi Hwee Fang, Admin Staff
- Mr. Tan You Cheng, Admin Staff
Table of Abbreviations
SAFMC 2025 Competition Video
Credits: Dylan Khoo (UREx Team Member)
1. Singapore Amazing Flying Machine Competition 2025
Our group participated in the Singapore Amazing Flying Machine Competition (SAFMC) 2025 Open Category (Category E). In this category, we designed an autonomous drone swarm to search for and rescue victims in a Playing Field.
Drone swarms have been used in a variety of real-world implementations, including environmental mapping and search and rescue. Defence Science Organisation (DSO), the organiser of the competition, stands to gain technical know-how from organising this competition. This can be applied in their development of drone systems for defence.
1.1 Category E Mission
The mission of SAFMC 2025 Category E is:
Design a system of 10 to 25 drones to navigate through an indoor environment and search for victims,
using either a centralised or de-centralised fully autonomous control system. The system must possess
localization, obstacle sensing and obstacle avoidance capabilities.
1.2 Playing Field
Figure 1.1 shows how the Playing Field may look; it is not drawn to scale. Furthermore, the Danger Zones, Regular Victims and Bonus Victims may not be placed exactly where they are depicted in Figure 1.1. Teams are given two runs for the challenge, with a maximum of two simultaneous take-offs in each run. For each run, the placement of Danger Zones, Regular Victims and Bonus Victims may change.
The Playing Field has a size of 20 m x 14 m. The Start Area (green) has a size of 20 m x 6 m.
The Start Area is where the Crazyflies will take off from to start the mission. There is no limit to the number of Navigation Aids that can be used within the Start Area.
For our analysis, we split the Playing Field into three sections: (1) the Known Search Area, (2) the Unknown Search Area, and (3) the Pillar Area.
1.2.1 Known Search Area
The Known Search Area has dimensions of 20 m x 14 m, making it the largest area in the Playing Field.
The Known Search Area (Figure 1.2) contains Inner Walls of 2 m thickness and 2 m height, as well as Danger Zones, Regular Victims and Bonus Victims. The Inner Wall-to-Inner Wall distance is at least 2 m. A maximum of ten Navigation Aids can be used within the Playing Field (excluding the Unknown Search Area).
1.2.2 Unknown Search Area
The Unknown Search Area has a size of 8 m x 8 m and is situated in the centre of the Playing Field.
The Unknown Search Area (Figure 1.3) contains Danger Zones, Inner Walls and Bonus Victims. There are two floor-to-ceiling entrances/exits to the Unknown Search Area.
Navigation Aids are not allowed within the Unknown Search Area. The layout of the Unknown Search Area was not revealed, even on Competition Day.
1.2.3 Pillar Area
The Pillar Area (Figure 1.4) is the smallest area in the Playing Field.
It consists of eight pillars, with narrow paths (about 1 m wide) between them. The Inner Wall-to-Pillar and Pillar-to-Pillar distances are at least 1 m.
1.3 Victim Marker
For SAFMC 2025, up to eight non-electronic markers were used as Victim Markers. The specifications of the Victim Markers must not exceed 30 cm x 30 cm x 1 m.
Both Regular Victims and Bonus Victims share the same type of Victim Marker. Bonus Victims are situated in areas that are more challenging to navigate.
To rescue a Victim, a Crazyflie needs to land within a 1 m radius of a Regular Victim or Bonus Victim, within its Line of Sight (LOS) (Figure 1.6). No obstacles are allowed within the Line of Sight.
Each Victim Marker can be rescued only once by one Crazyflie.
1.4 Danger Zone
For SAFMC 2025, up to four non-electronic markers were used as Danger Zones. The specifications of the Danger Zones must not exceed 30 cm x 30 cm x 1 m.
Danger Zones may overlap with any Navigation Aids we place within the Playing Field.
Crazyflies landing within a 1 m radius of a Danger Zone incur a score penalty.
1.5 Navigation Aid
For SAFMC 2025, a total of ten Navigation Aids were used within the Known Search Area and Pillar Area. The specifications of the Navigation Aids must not exceed 1 m x 1 m, with no height limit specified. According to the competition rulebook, more than one type of Navigation Aid is allowed.
1.6 Pillar Obstacle
Eight Pillar Obstacles were present in the Pillar Area. Each Pillar Obstacle is 0.3 m in diameter and 2 m in height, inclusive of a weighted circular base of 0.5 m diameter and 0.15 m height.
1.7 SAFMC 2025 vs SAFMC 2024
Table 1.1 shows an overview of the differences in the Competition Rules for SAFMC 2024 and SAFMC 2025.
For SAFMC 2025, an Unknown Search Area is introduced, whose layout remains undisclosed throughout the Competition Day Mission. Additionally, the layout of the Known Search Area is only disclosed on Competition Day. In contrast, for SAFMC 2024, the layout of the entire Arena was only disclosed on Competition Day.
Furthermore, SAFMC 2025 introduces eight Pillar Obstacles, a new type of static obstacle within the Arena. Eight Victim Markers will be present, comprising both Regular Victims and Bonus Victims, each offering different scoring opportunities. The Double-Rescue Victims are removed from this year's competition.
Additionally, a maximum of four Danger Zones is introduced for SAFMC 2025. Ten Navigation Aids are allowed within the Known Search Area, none within the Unknown Search Area, and there is no limit on the number of Navigation Aids in the Start Area.
1.8 Advantages and Limitations of SAFMC 2024's Mission Strategy
The strategy employed by the SAFMC 2024 team represented a significant shift from a centralised strategy (Optimal Reciprocal Collision Avoidance Algorithm) to a decentralised search strategy (Modified Swarm Gradient Bug Algorithm) for navigation and collision avoidance, aiming to address limitations encountered by the SAFMC 2023 team.
The Bug Algorithm was implemented onboard in the Crazyflie firmware. This approach alleviated computational demands on the GCS compared to the SAFMC 2023 team's approach, which required multiple Navigation Aids for localisation (the SAFMC 2023 competition had no limit on the number of Navigation Aids that could be used). Navigation was also more reliable, even if communication was lost.
Furthermore, the team retained the GCS-based AprilTag detection mechanism. However, this strategy had limitations. Victim detection and the issuance of landing commands remained centralised, dependent on Wi-Fi image streaming to the GCS. This dependency constituted a bottleneck and a single point of failure for the primary mission objective.
Additionally, the Bug Algorithm did not guarantee exhaustive area coverage and required specific mechanisms, such as loop detection and re-heading, to improve efficiency.
For collision avoidance, the "Two-Lane" method was implemented. While offering basic separation, this method lacked adaptability and remained sensitive to localisation inaccuracies, especially as flight time increased.
By understanding the strategy used by the SAFMC 2024 team, we will later discuss the changes our team implemented to address the limitations observed in their approach.
2. Selected Drone Platform
Given the project’s scope, an off-the-shelf drone platform was used in this project as building custom drones is complex and impractical. The selection of the drone platform for autonomous search and rescue missions was based on size and weight, modularity, sensor capabilities, cost, and availability. The ideal platform should be ultra-lightweight to support scalability in swarm applications. It should also be highly modular in both hardware and software, featuring swappable parts for ease of maintenance and open-source software to provide flexibility in developing solutions. The drone should be equipped with sensor suites that enable localization, obstacle detection, and avoidance capabilities.
The drone platforms considered were DJI Tello, Bitcraze Crazyflie 2.1+, and DEXI Drone – Level III. The selection matrix for the drone development platform selection is in Appendix A.
2.1 Hardware: Bitcraze Crazyflie
The Bitcraze Crazyflie 2.1+ was selected as the development platform for this project due to its superior lightweight design, modularity, and extensive sensor compatibility.
A key advantage of the Crazyflie is its open-source framework, which includes the Crazyflie firmware, the cflib Python library, and compatibility with ROS2 via the Crazyswarm2 package. The Crazyflie also supports onboard autonomy using the Crazyflie App Layer, allowing mission-specific algorithms to run directly on the drone without requiring continuous external communication.
Bitcraze has an extensive ecosystem, providing various expansion decks that enhance the drone's sensing and navigation capabilities. The hardware modularity of the Crazyflie is leveraged by incorporating specific expansion decks, each chosen to fulfil a critical functional requirement for autonomous flight and interaction:
- AI Deck (Figure 2.2, Left): This deck serves two important purposes. Firstly, its onboard monochrome camera (HM01B0) is the primary sensor for visual object detection (identifying AprilTags representing Victim Markers, Danger Zones, or Navigation Aids). Secondly, its integrated ESP32 module provides essential Wi-Fi connectivity, enabling image streaming to the GCS for processing and facilitating command/telemetry exchange. Furthermore, the GAP8 MCU offers potential for future onboard image processing and supports decentralized strategies.
- Flow Deck v2 (Figure 2.2, Middle): This deck is essential for localization. It integrates a Time-of-Flight (ToF) sensor for accurate height-above-ground measurements and a downward-facing optical flow sensor to estimate velocity relative to the floor texture. While susceptible to drift, particularly over non-textured surfaces or long durations (which is a factor influencing mission strategy design), it provides the fundamental state estimation necessary for basic navigation and control. Alternatives like relying solely on the IMU are inadequate due to rapid drift accumulation.
- Multi-ranger Deck (Figure 2.2, Right): This deck provides horizontal obstacle detection, which is essential for navigating unknown environments and avoiding collisions with static obstacles such as walls and pillars. Its five ToF sensors (front, back, left, right, up) enable avoidance manoeuvres. Relying solely on the forward-facing camera on the AI Deck or the downward-facing ToF sensor on the Flow Deck would not provide the comprehensive, real-time environmental awareness needed for navigation.
This specific combination of decks provides the minimum necessary sensing suite: (1) localization (Flow Deck), (2) obstacle avoidance (Multi-ranger Deck), and (3) object detection and communication (AI Deck), integrated within the Crazyflie's size, weight, and power constraints. While other specialized decks exist (e.g., the Loco Positioning System deck for Ultra-Wide Band positioning), they were deemed unsuitable due to cost, infrastructure requirements, or incompatibility with mission constraints (e.g., non-electronic Victim Markers and Danger Zones).
2.2 Drone Software and Development
The Crazyflie runs the Bitcraze Crazyflie firmware, which is written in C and is fully open source. It includes an app layer that allows users to add custom code directly to the Crazyflie, making it the entry point for implementing decentralised autonomous capabilities. The app layer exposes a set of APIs, such as the Deck API, that allow onboard code to access sensor data such as the Multi-ranger Deck's distance readings and odometry data from the Flow Deck (Bitcraze, 2024).
For centralised solutions, the Crazyflie can be integrated with ROS2 using the Crazyswarm2 Python package. This enables high-level control for multi-drone coordination via cflib, a Python library that facilitates communication and control through Crazyradio, a long-range open USB radio dongle. In addition, the Crazyswarm2 package enables the Ground Control Station (GCS) to access Crazyflie sensor data through various ROS2 topics. The development workstations run ROS2 Humble on Ubuntu 22.04 to support software development.
3. Object Detection (Shu Hui)
The Object Detection Subsystem involves the detection of objects such as Victim Markers, Danger Zones and Navigation Aids. Upon the detection of Victim Markers, land commands are transmitted via CPX (Crazyflie Packet eXchange) from the GCS to the drones. Similarly, upon the detection of Navigation Aids, the respective predefined Search Strategies are "activated" after receiving commands from the GCS. In contrast, ignore commands are transmitted upon the detection of Danger Zones.
3.1 Object Selection
A decision matrix was created to aid object selection (Table 3.1). Objects refer to Navigation Aids, Victim Markers and Danger Zones; the matrix weighs factors crucial for competition success under tight time constraints.
Given the project timeline and the breadth of subsystems required (Navigation Aids, Search Strategies, Mission Planning), solutions demanding extensive development time or complex integration were heavily penalized. Reliability was weighted lowest at this initial selection stage, assuming implementation quality would determine final reliability. Depending on the target type, the approach for object detection varies significantly.
One approach we investigated was using Convolutional Neural Networks (CNNs) to detect simple objects such as cones. While a CNN model can be trained successfully for object detection, its detection accuracy is heavily dependent on the environment of both the training and testing data. There was also limited training data consisting of images of the specific cones taken by the HM01B0 monochrome camera. As the competition location was revealed very close to the actual competition date, using cones as a target introduced additional complexity and reduced feasibility. From feedback provided by the SAFMC 2024 team, we noted that Bitcraze had a face detection example that used two classes of objects, controlled via a GCS. However, this example had not been tested onboard, and the system would need to be expanded to classify more than 10 different object classes (Bitcraze, n.d.) for our needs.
Another approach we investigated was using Bitcraze's Loco Positioning Deck, which uses an ultra-wideband (UWB) system to detect Loco Positioning anchors. However, the Loco Positioning System is expensive (Bitcraze, n.d.), with a maximum indoor detection range of 8 m to 10 m (Bitcraze, n.d.). It was not feasible for the 20 m x 14 m Playing Field, as it would require multiple Loco Positioning anchors (acting as Navigation Aids) scattered throughout the field. Additionally, competition rules state that Navigation Aids may be electronic but both Danger Zones and Victim Markers must not be. Finally, with our Crazyflie drones already carrying three decks, the propellers cannot provide enough thrust to support the weight of an additional deck.
The object chosen was the AprilTag (Figure 3.1). AprilTags are visual fiducial markers used in applications such as camera calibration. They were chosen because they are more accurate than other markers such as WhyCon (Robotics Knowledgebase, n.d.). They meet the non-electronic requirement for Victim Markers and Danger Zones, are cheap, easy to create and print, and reliable under varying lighting conditions (compared to simple colour/shape detection). AprilTags allow calculation of the exact position, orientation and identity of a marker relative to a camera. Furthermore, they allow the user to specify a list of markers to detect, and they have existing ROS support.
AprilTags were used as Victim Markers for both SAFMC 2024 and SAFMC 2025. For SAFMC 2025, they also mark Danger Zones and Navigation Aids. An exploration of on-board and off-board detection of AprilTags was then conducted.
Competition rules state that Navigation Aids must have maximum base dimensions of 1 m x 1 m, with no specified height. As the height was not specified, we chose to make the Navigation Aids thin floor markers. This decision was influenced by the observation that the Flow Deck is very sensitive to floor texture and lighting conditions. Increasing the height of a Navigation Aid forces the drone to fly higher momentarily to clear it when detected, which can disrupt the Flow Deck's velocity estimation and localisation accuracy. Furthermore, we avoided mounting Navigation Aids on walls, as the detection distance would be too small and the Multi-ranger Deck would simply detect the Navigation Aid as a wall, without triggering the desired drone behaviour.
Our novelty lies not in using AprilTags themselves, but in developing the system logic to interpret the same marker type for three different purposes based on detected ID and trigger appropriate, distinct drone actions (land, ignore, change behaviour in the different areas), managing this within a large drone swarm.
3.2 Onboard Detection
The SAFMC 2024 team used an off-board object detection approach with one GCS. They faced significant challenges such as delayed image transmission for a swarm, network robustness issues, and repeated drone disconnections, even on the Competition Day itself.
To overcome these limitations and achieve a fully autonomous swarm that complements the decentralised search approach, an onboard object detection approach, meaning onboard image processing of the detected Victim Markers, was explored. The goal was to process images and detect AprilTags directly on the drone, reducing Wi-Fi traffic to only essential data.
A comparison of the advantages of on-board and off-board image processing is shown in Table 3.2.
The GAP8 on the AI Deck has a modifiable application layer, which can be extended to enable image streaming over Wi-Fi (Bitcraze, n.d.) and to include AprilTag detection using an open-source repository (Bitcraze, n.d.). On-board image processing involves extending the existing image streaming capability with AprilTag detection. The GAP8 RISC-V processor runs the FreeRTOS operating system, but FreeRTOS is not Portable Operating System Interface (POSIX) compliant, meaning that standard POSIX functions for threading and mutex creation are unavailable.
The main challenge was that the AprilTag library depends heavily on POSIX functions, particularly for threading and mutex creation, which are not natively supported by FreeRTOS. To address this, a FreeRTOS-plus-POSIX wrapper was explored (FreeRTOS, n.d.), which implements a limited subset of the POSIX threading API. This wrapper was intended to allow the development of FreeRTOS applications using POSIX-like threading primitives. However, it supports only about 20% of the full POSIX API, which proved insufficient for the GAP8’s requirements, and the solution ultimately did not work as intended. Further attempts using alternative libraries encountered similar POSIX-dependency issues, preventing successful integration.
The next step was to manually refactor all POSIX functions in the AprilTag library with their FreeRTOS equivalents.
Unfortunately, after considerable time spent investigating on-board detection, it became clear that refactoring the code had a low probability of success and would require significant additional effort. Given the tight project deadline, even a successful on-board implementation would necessitate a complete rewrite of the higher-level mission logic, such as the handling of Victim Marker detection, the ignoring of Danger Zones, and the Navigation Aid differentiation algorithms, diverging significantly from the ROS2-based GCS framework used by previous teams and the available support repositories.
Therefore, despite the theoretical advantages (particularly scalability and reduced network load) and the desire to utilize the AI Deck's processing power more fully, the onboard detection approach was deemed infeasible under the project constraints. This limitation meant we had to focus on optimizing the off-board approach.
3.3 Offboard Detection
For off-board detection, four relevant repositories were identified online: ai_deck_wrapper (TL-NUS-CFS, 2023), apriltag_ros (Rauch, 2025), apriltag_msgs (Rauch, 2025), and MissionPlanner (CDE-4301-ASI-401, n.d.). The ai_deck_wrapper repository enables image streaming and integrates with apriltag_ros, which handles AprilTag detection. apriltag_msgs defines the message types used for communication between the AI Deck wrapper node and the AprilTag detection node. The primary repository utilized in our development was MissionPlanner, which supports the implementation of our mission strategy.
The object detection test (Figure 3.2) was conducted in the seminar room at RC4. In this test, the Crazyflie takes off to search for one Victim Marker autonomously, flying over the Danger Zone (an AprilTag in the middle of the room) and rescuing the victim (an AprilTag at the end of the room). The search algorithm used in this test was the Modified Swarm Bug Algorithm, and the camera mount angle was 45°.
This test confirms that the Crazyflie successfully maintains a distance within the 1 m radius of the Victim Marker, achieves a successful rescue by landing near the Victim Marker, and accurately detects the Danger Zone without landing within it.
Our primary contribution within this off-board framework was modifying the MissionPlanner and associated logic to handle the specific requirements of SAFMC 2025: (1) Implementing logic to check the ID of a detected AprilTag against predefined lists of Victim, Danger Zone, and Navigation Aid IDs, (2) Sending the correct CPX command based on the identified object type (land for Victim, ignore/continue for Danger, execute specific behaviour for Navigation Aid) and (3) Integration with Search Strategies.
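The dispatch logic described in point (1) and (2) can be sketched as follows. This is a minimal illustrative sketch, not the actual MissionPlanner code: the ID sets and command names are hypothetical placeholders, as the real tag assignments are defined in the competition configuration.

```python
# Hypothetical tag-ID sets; the real IDs are defined per competition run.
VICTIM_IDS = {1, 2, 3, 4, 5, 6, 7, 8}    # Victim Markers (Regular + Bonus)
DANGER_IDS = {20, 21, 22, 23}            # Danger Zones
NAV_AID_IDS = {30, 31, 32}               # Navigation Aids

def command_for_tag(tag_id: int) -> str:
    """Map a detected AprilTag ID to the CPX command to send to the drone."""
    if tag_id in VICTIM_IDS:
        return "LAND"        # land within the 1 m LOS radius of the Victim
    if tag_id in DANGER_IDS:
        return "IGNORE"      # continue the search, do not land
    if tag_id in NAV_AID_IDS:
        return "BEHAVIOUR"   # trigger the area-specific search behaviour
    return "NONE"            # unknown tag: take no action
```

In the actual system, the returned command would be serialised into a CPX packet and transmitted to the corresponding drone via a Crazyradio dongle.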
3.4 Optimising Detection Distance for the Drone
The camera mount holding the HM01B0 monochrome camera module is fixed at 45° below the horizontal.
Off-board processing inherently introduces latency between image capture, transmission, GCS processing, and command execution. This delay can cause the drone to overshoot the target, landing further from the Victim Marker than desired, even if still within the 1 m LOS radius, so the detection distance is not ideal. As such, tests to optimise the height and speed of the drones were conducted in IDP Studio 1 to decrease the detection distance. However, increasing speed and altitude leads to reduced effective image resolution due to motion blur, which negatively impacts detection accuracy. A balance must therefore be struck between the drones' height, speed and image resolution to maintain detection accuracy.
The first test was conducted by varying the speed of the drone in 0.1 m/s increments while taking the average of two detection distance readings.
The vertical distance between the detected AprilTag and the drone generally decreases as the speed increases (Figure 3.4). However, beyond 0.5 m/s there is only a marginal improvement in detection distance. Furthermore, to strike a balance between the cruise speed and the turning speed, which is the drone's overall speed scaled down by half when rotating at a corner, 0.4 m/s was chosen as the drone speed.
The turning speed of the drone when rotating at a corner is thus 0.2 m/s, which serves as the basis for the Pillar drones' speed when navigating around the Pillars.
The second test was conducted by varying the height (altitude) of the drone in 0.1 m increments while taking the average of two detection distance readings.
The vertical distance between the detected AprilTag and the drone generally increases with height (Figure 3.5), up to a height of 60 cm. The drone height was initially set at 30 cm, where the detection distance is shortest. However, in the RC4 MPSH, the floor surface was very reflective and shiny, which interfered with the Flow Deck's velocity estimation and the camera's detection accuracy. The drones had to fly higher, at 50 cm, to increase detection accuracy. Beyond 90 cm, the drone could not detect AprilTags.
To counteract the system latency caused by image transmission, GCS processing and CPX commands, a fixed delay was introduced in the mission_planner.py script after a Victim Marker is detected but before the land command is sent. This allows the drone to fly slightly further, positioning itself more directly over the Victim Marker by the time the land command executes. The test involved repeatedly flying the drones with the delay implemented and averaging the results to determine the most effective delay time; a 1-second delay was chosen. Without the delay, the detection distance would be 63.5 cm.
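The effect of the delay can be estimated with a back-of-envelope calculation: while the land command is pending, the drone keeps moving at its cruise speed, closing speed × delay metres of the remaining detection distance. The helper below is an illustrative sketch of this reasoning, not part of mission_planner.py.

```python
def landing_offset_cm(detection_distance_cm: float,
                      speed_m_s: float,
                      delay_s: float) -> float:
    """Estimated horizontal offset from the Victim Marker at touchdown,
    assuming the drone flies straight at cruise speed during the delay."""
    travelled_cm = speed_m_s * delay_s * 100.0
    return max(detection_distance_cm - travelled_cm, 0.0)

# With the measured 63.5 cm detection distance at the chosen 0.4 m/s
# cruise speed, a 1 s delay closes 40 cm, leaving roughly 23.5 cm.
```

This simple model ignores deceleration during the landing manoeuvre, so the measured offsets in flight tests would differ slightly.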
3.5 Upgrade to Firmware
Due to issues such as delayed image transmission for a swarm, network robustness problems, and repeated drone disconnections, it was imperative that performance improvement measures be taken.
The GAP8 on the AI Deck was initially flashed with the Wi-Fi Video Streamer example (Bitcraze, n.d.) using a JTAG programmer, providing the capability to stream raw images from the GAP8 to the GCS. However, streaming raw images presents several disadvantages: raw images are larger than the lossy JPEG format and must still be processed by the GAP8, leading to longer processing times and visible lag. Streaming raw images from multiple drones simultaneously quickly saturated the Wi-Fi bandwidth, leading to the lag, disconnections, and scalability issues observed by the 2024 team and in our initial multi-drone tests.
A modified image streaming firmware (Leong, n.d.) that streams lossy JPEG images and includes an auto-exposure feature was therefore flashed onto the GAP8. The frame rate increased to 7 to 8 frames per second (FPS), and the image quality and streaming efficiency improved.
Figure 3.7 illustrates the differences between the original image streaming firmware and the updated lossy JPEG image streaming firmware. In a well-lit room, the original firmware streams images that appear very dark and of poor quality. In contrast, the updated firmware dynamically adjusts to lighting conditions, resulting in improved image quality.
This solved our issues only marginally. For a swarm of more than twelve stationary drones, there was still delayed image transmission and repeated drone disconnections. When a swarm of nineteen drones was tested, only eight drones managed to fly and stream continuously at any time. We experimented with reducing the frame rate to ~3.7 FPS to lessen the packet load further. This did not yield significant stability improvements in our tests and risked missing tags during faster movements, so we reverted to the higher ~7-8 FPS rate provided by the JPEG firmware, deciding instead to focus on external network and communication optimisation.
Further firmware-level optimizations included upgrading the ESP32 Wi-Fi firmware (Bitcraze, n.d.) for potentially better TCP throughput and buffer management, and updating the NRF51 firmware (Bitcraze, n.d.) for improved Crazyradio link startup reliability. These provided marginal gains but did not fully resolve the large-swarm communication bottleneck, necessitating architectural changes.
3.6 External System Optimisation
The SAFMC 2025 strategy required frequent CPX communication for:
- Take-off commands
- Landing commands (Victim Markers)
- Ignore commands (implicit for Danger Zones, but requires GCS processing)
- Triggering specific behaviours upon Navigation Aid detection
Due to the heavy reliance on commands sent via the Crazyradio dongles this year, the Crazyradio link threw numerous errors whenever the swarm encountered AprilTags.
The SAFMC 2024 team used only three Crazyradio dongles and assigned static radio/channel IDs. This limited architecture struggled even with their simpler mission, as noted in their reports. For SAFMC 2025, the need to send unique commands to multiple drones simultaneously, for example when several drones detect different Navigation Aids concurrently, quickly overloaded the limited Crazyradios. Early swarm tests showed frequent CPX errors and command failures when multiple drones detected AprilTags around the same time, whereas single-drone tests worked well (Figure 3.8).
To solve this issue, our team increased the number of Crazyradio Dongles from 3 to 7. Instead of static assignments, we implemented a dynamic Crazyradio management system within the mission_planner. When a command needs to be sent to a drone, the system checks for an available (idle) Crazyradio dongle. If the primary assigned dongle is busy, it iterates through the pool of available dongles until it finds one free to transmit the CPX packet. This dynamic allocation prevents commands from being dropped due to a single busy radio and is a key improvement over the static SAFMC 2024 approach, enabling more reliable swarm communication under heavy load.
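The allocation logic described above can be sketched as follows. This is a simplified illustration of the dynamic dongle selection; the `Radio` class stands in for the real Crazyradio handle in mission_planner, and the attribute and function names are hypothetical.

```python
class Radio:
    """Stand-in for a Crazyradio dongle handle (hypothetical)."""
    def __init__(self, index: int):
        self.index = index
        self.busy = False   # True while a CPX packet is being transmitted

def acquire_free_radio(pool, preferred: int):
    """Return the preferred dongle if idle, else the first idle dongle
    in the pool; return None only if all dongles are busy."""
    if not pool[preferred].busy:
        return pool[preferred]
    for radio in pool:          # iterate the whole pool for any idle dongle
        if not radio.busy:
            return radio
    return None                 # all radios busy: caller retries later

# The SAFMC 2025 setup used a pool of seven dongles.
radios = [Radio(i) for i in range(7)]
```

In the real system, the caller would mark the returned dongle busy for the duration of the CPX transfer and release it afterwards; if `None` is returned, the command is queued rather than dropped.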
Successful swarm scenarios using this enhanced setup showed multiple drones correctly ignoring Danger Zones, landing at Victim Markers, or reacting to Navigation Aids simultaneously or in quick succession (Figures 3.9 and 3.10), which was not reliably achievable with the previous architecture.
Initially, the SAFMC 2024 team daisy-chained two slave routers to a single master router but still faced issues. For SAFMC 2025, we increased the number of GCS laptops from one to three, with each GCS laptop in charge of one area of the Playing Field. Each GCS ran its own instance of the ROS detection nodes and MissionPlanner, communicating with its assigned drones via Wi-Fi for image streaming and using its share of the Crazyradio dongles for CPX commands. While this introduced coordination complexity, which was handled via mission pre-planning and drone assignments, the approach offered significant advantages. Each GCS handled image processing and detection for fewer drones (around six to seven instead of nineteen), the Wi-Fi load was distributed across multiple access points and network interfaces, and the failure of one GCS would not necessarily bring down the entire swarm.
This setup was effective. Even though background noise was a potential factor during all our tests, it became easier to track the movement of each drone, and the image streaming performance improved drastically. Only one to two drones per GCS laptop faced repeated disconnections during our tests at the RC4 MPSH. On SAFMC 2025 competition day, all nineteen drones streamed images to their GCS successfully without disconnections or freezing.
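The drone-to-GCS assignment can be sketched as a simple partition of the swarm into near-equal groups. This is an illustrative sketch: the real assignments were area-based and pre-planned, so the function below only shows how nineteen drones split into groups of six to seven across three GCS laptops.

```python
def partition(drone_ids, n_gcs=3):
    """Split the swarm into n_gcs contiguous groups of near-equal size,
    one group per GCS laptop (illustrative, not the actual assignment)."""
    base, remainder = divmod(len(drone_ids), n_gcs)
    groups, start = [], 0
    for i in range(n_gcs):
        size = base + (1 if i < remainder else 0)  # spread the remainder
        groups.append(drone_ids[start:start + size])
        start += size
    return groups
```

For nineteen drones and three GCS laptops, this yields group sizes of 7, 6 and 6, matching the "around six to seven drones per GCS" load described above.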
4. Mission Strategy
4.1 Search Strategy
To maximize efficiency and effectiveness in both search time and search success rate, we have opted to develop distinct algorithms for each search area. This decision is driven by the varying challenges and characteristics specific to each region.
In addition to selecting algorithms based on their suitability to the environment, we must account for hardware limitations. While a mapped approach theoretically offers greater precision and full coverage, it is not always practical given our hardware setup. For example, the ToF sensor on our drone has a limited range and field of view, making it unreliable for robust map generation. Furthermore, the lightweight drone design combined with optical flow positioning (via the Flow Deck) introduces significant drift over time, further reducing the reliability of any map produced.
As a result, the choice between a mapped and mapless strategy depends not only on the operational environment but also on sensor fidelity and computational constraints. A strategy that is ideal on paper may underperform when deployed under real-world limitations.
There are two main categories of search strategies: mapped and mapless. A comparison between the two is shown below.
4.1.1 Known Search Area
We have chosen a mapless, decentralized approach. Since we have a general understanding of the layout, mapping is not necessary; it would introduce unnecessary computational load and slow down coverage. Instead, we rely on the availability of multiple drones that can be distributed across the Known Search Area to achieve full coverage. The adaptability of our mapless algorithm also lends itself well to this particular use case: the drones' behaviours can be fine-tuned through simulations and real-life testing to optimize performance and map coverage.
4.1.2 Unknown Search Area
A combination of both approaches will be used here. The layout is completely unknown, making it difficult to simulate or predefine an effective mapless strategy. Additionally, the victim located in this area is worth the most points, increasing the priority of a successful search. A mapped strategy thus ensures systematic and thorough coverage, increasing the chance of locating the target. The mapless approach complements it by reaching and rescuing easily accessible victims as fast as possible, since there is also a time element to the challenge. This combination allows us to balance speed and accuracy, giving us control over various optimization parameters.
4.1.3 Pillar Area
Like the Known Search Area, we employ a mapless, decentralized approach here. However, the algorithm will be tailored to handle tight navigation and dynamic obstacle avoidance due to the pillar-dense layout. This variation is necessary to address the hardware's sensing limitations more effectively.
4.2 Use of Navigation Aids
To further optimize performance, each drone is assigned to a specific area and preloaded with the corresponding algorithm. This eliminates the need for constant communication with a centralized GCS, thereby avoiding a potential single point of failure. Drones will only communicate with the GCS when necessary — for example, when detecting AprilTags or responding to specific instructions post-detection. The use of Navigation Aids will keep the drones in their assigned area.
To enable the effective deployment of distinct search strategies in the challenge arena, we will divide the environment into three clearly defined zones, each assigned a specific exploration algorithm. Precise area segmentation is crucial to ensure that drones operate only within their designated zones and apply the appropriate search logic.
To achieve this, we will leverage AprilTags not only as victim and hazard markers but also as navigation aids. Strategically placed AprilTags at key boundary points will act as virtual gates or identifiers, guiding drones into the correct zones upon entry and helping maintain their presence within that zone throughout the mission. By encoding specific tag IDs for each area, drones can recognize when they’ve crossed into a new zone and adjust behavior accordingly — or turn back if they’ve entered the wrong region.
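As an illustration of this zone-gating logic, a small sketch follows; the tag IDs and zone names here are hypothetical placeholders, not the actual competition assignments:

```python
# Hypothetical mapping from AprilTag IDs to the zone they mark.
ZONE_TAGS = {
    10: "known", 11: "known",
    20: "unknown", 21: "unknown",
    30: "pillar", 31: "pillar",
}

def on_tag_detected(tag_id: int, assigned_zone: str) -> str:
    """Decide how a drone reacts to a boundary tag it has detected."""
    zone = ZONE_TAGS.get(tag_id)
    if zone is None:
        return "ignore"      # not a navigation aid (e.g. victim/hazard marker)
    if zone == assigned_zone:
        return "continue"    # still inside the assigned zone
    return "reverse"         # crossed into the wrong zone: turn back
```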
This approach offers a lightweight and reliable solution for zone partitioning, enabling us to fully leverage our multi-strategy design while avoiding unnecessary overlap or miscoordination between drones operating in different areas.
5. Known Search Area (Yan Yew)
The Known Search Area provides a structured environment with known static obstacles, enabling preplanned drone paths for full coverage. This layout aligns well with a mapless search strategy, which is computationally efficient compared to SLAM-based mapping. While mapped approaches require high processing power and introduce complexity in merging maps via a GCS, a mapless approach is preferable here since the obstacle layout is predefined, allowing optimized search strategies.
To ensure scalability and robustness, a decentralized approach is used, where drones operate independently without centralized communication. This is achieved by programming navigation and decision-making logic in C and flashing it onto the Crazyflie firmware.
This section details the implementation of the Modified Swarm Bug Algorithm (MSBA), including the role of simulation in optimizing search parameters, multi-airways for collision avoidance, and reverse commands for search boundary definition.
5.1 Selection of Search Algorithms
The decentralized mapless search algorithms considered were Parallel Sweep Search, MSBA, and Pre-planned Flight Paths. These algorithms were shortlisted because they do not require a map, rely on heuristics that can easily be implemented onboard the drones to form a decentralized system, and can be implemented in a multi-agent system like a drone swarm.
Parallel Sweep Search is a search strategy commonly used in the maritime industry for search and rescue missions, where multiple vessels search a large area when the location of the search target is unknown. The algorithm involves making the drones move in parallel, evenly spaced lanes offset from one another. When a drone encounters a wall, it performs wall-following around the obstacle until it returns to its original planned path, then resumes the sweep search.
The MSBA is an adaptation of the Swarm Gradient Bug Algorithm (SGBA) developed by TU Delft. It follows the same heuristic as the bug algorithm: the drone travels along a specific heading and, upon encountering an obstacle, performs wall-following around it until its original heading is free of obstacles. This algorithm is explained in detail in Section 5.2.
The pre-planned flight path strategy involves programming a series of predefined movements for each drone before deployment, which is possible because the environment is known. For example, as illustrated in Figure 5.3, if we want the drone to reach point X, the drone is programmed to fly forward; when an obstacle is detected by the front ToF sensor, the drone turns right and continues forward until it detects another obstacle with the front ToF sensor.
The decentralized mapless strategies shortlisted above were then evaluated using a weighted decision matrix. The main factors considered were (1) search coverage, (2) reliability, (3) adaptability, and (4) exploration speed. Each strategy was given a score out of 3 for how well it performs on each factor.
Search coverage measures how effectively the algorithm explores the area to find victim markers. It has the highest weight, as competition scoring is based on rescues. Higher coverage improves the chances of detecting markers. Parallel Sweep Search scores the highest due to its systematic approach. MSBA and pre-planned flight paths also provide good coverage but are less effective than Parallel Sweep.
Reliability assesses predictability, repeatability, and robustness. Since teams get only two competition runs, a consistent strategy is crucial. Parallel Sweep Search scores the lowest due to its reliance on localization, which has a ±10% drift. MSBA ranks the highest as it uses precise ToF distance sensors. The pre-planned flight path also scores high for its structured approach.
Adaptability reflects how well an algorithm maintains search coverage across varying obstacle sizes. While obstacle layouts are known, dimensions are not. MSBA ranks highest as it dynamically adjusts to obstacles. Parallel Sweep Search can handle obstacles but may fail if walls exceed planned dimensions. The pre-planned flight path scores the lowest, as it requires fixed paths tailored to specific layouts.
Exploration speed measures how quickly the drones search a fixed area. It has the lowest weight since runtime is only a tiebreaker. Parallel Sweep Search is the slowest due to its sweeping pattern. MSBA and pre-planned strategies score highest as they follow direct paths from start to finish with minimal backtracking.
Based on the decision matrix evaluation above, MSBA emerges as the most balanced strategy, excelling in reliability, adaptability, and speed. Its ability to dynamically respond to obstacles ensures consistent performance, particularly in the Known Search Area with unpredictable obstacle dimensions.
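The ranking can be reproduced with a small weighted-sum calculation. The weights below are illustrative placeholders (the report's actual weights are in the decision matrix table); the scores follow the qualitative assessments in this section:

```python
# Illustrative weights (coverage highest, speed lowest) and scores out of 3.
weights = {"coverage": 4, "reliability": 3, "adaptability": 2, "speed": 1}
scores = {
    "Parallel Sweep": {"coverage": 3, "reliability": 1, "adaptability": 2, "speed": 1},
    "MSBA":           {"coverage": 2, "reliability": 3, "adaptability": 3, "speed": 3},
    "Pre-planned":    {"coverage": 2, "reliability": 3, "adaptability": 1, "speed": 3},
}

def weighted_total(s):
    """Sum of factor scores weighted by their importance."""
    return sum(weights[k] * s[k] for k in weights)

ranked = sorted(scores, key=lambda name: weighted_total(scores[name]), reverse=True)
```

With these illustrative weights, MSBA ranks first, consistent with the evaluation above.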
5.2 Modified Swarm Bug Algorithm (MSBA)
MSBA is a heuristic algorithm governed by a few simple rule-based behaviours, as illustrated in Figure 5.4. After take-off, the drone rotates to its preferred heading and then moves forward in that direction. Upon detecting an obstacle, it initiates wall-following until its preferred heading is clear of obstacles.
To prevent the Crazyflie from getting trapped in an indefinite loop, a loop detection mechanism is implemented. This mechanism modifies the preferred heading when the drone is detected to be stuck. When the Crazyflie encounters an obstacle and begins wall-following, it records the hit point using odometry data from the Flow Deck. If the drone moves toward the same hit point again, the loop detection mechanism is triggered, adjusting the navigation path to break the loop.
The performance of MSBA is determined by the coverage, speed, and reliability of the swarm drones. Ideally, the swarm would cover the entire Known Search Area as fast as possible, and reliably, with reproducible results and zero collisions among the drones. Performance relies heavily on key parameters such as flight speed, altitude, wall-following direction, and the corresponding wall-following distance. Wall-following direction refers to the relative position of the wall that a drone maintains while navigating: a left wall-following direction means the drone keeps the wall to its left as it moves, while a right wall-following direction means the drone keeps the wall to its right.
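The rule-based behaviour and loop detection mechanism can be sketched as a small state machine. This is an illustrative Python model only (the real MSBA runs as C in the Crazyflie app layer); the state names, hit-point radius, and fixed +90° loop-break rotation shown here are simplifying assumptions:

```python
import math

HIT_RADIUS = 0.3  # m; hypothetical threshold for "returned to hit point"

class MSBA:
    """Fly along the preferred heading; wall-follow on obstacle detection;
    record the hit point so a loop is detected on returning to it."""
    def __init__(self, preferred_heading_deg):
        self.heading = preferred_heading_deg
        self.state = "FORWARD"
        self.hit_point = None
        self.left_hit = False

    def step(self, x, y, front_clear, heading_clear):
        if self.state == "FORWARD":
            if not front_clear:
                self.state = "WALL_FOLLOW"
                self.hit_point = (x, y)   # record where wall-following began
                self.left_hit = False
            return self.state
        # WALL_FOLLOW: check for a loop back to the recorded hit point
        if self.hit_point is not None:
            d = math.hypot(x - self.hit_point[0], y - self.hit_point[1])
            if d > HIT_RADIUS:
                self.left_hit = True      # drone has moved away from the hit point
            elif self.left_hit:
                # Loop detected: rotate the preferred heading to break it
                # (the sign of the rotation depends on wall-following direction)
                self.heading = (self.heading + 90.0) % 360.0
                self.hit_point = None
        if heading_clear:
            self.state = "FORWARD"        # preferred heading is free again
        return self.state
```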
5.3 Initial Proposed Strategy Using MSBA
The team has allocated a total of 10 drones to search the Known Search Area. To obtain the optimal parameters, the ideal paths of the 10 drones that maximize coverage are first planned in accordance with the heuristic of the algorithm. The parameters required to achieve the corresponding flight paths are then determined by working backwards.
The strategy to perform search in the Known Search Area is illustrated in Figure 5.5. It involves deploying the drones on the left side of the Playing Field and ensuring they reach designated dispersion points—Point A, Point B, and Point C—from which the drones fan out in their respective preferred headings. Each dispersion point is assigned three drones. To guide the drones to their respective dispersion points, they follow a wall-following manoeuvre around Wall 1, Wall 2, and Wall 3 before dispersing into the search area.
The flight altitude of all drones is set to 0.3 m because real-life test flights at that height provided great flight stability. The speed of the drones is set to 0.4 m/s, the optimal speed for stable flight performance as determined through real-life testing (Section 5.6). This speed also ensures reliable AprilTag detection, as tested by the Object Detection system (Section 3.4).
To prevent drone-to-drone collisions, a two-lane method was implemented. Depending on the direction of wall-following, the drones maintain different distances to the walls, creating two separate lanes. Extensive real-world testing on a small-scale Playing Field, along with simulations, confirmed the reliability of this strategy, provided that the drones are correctly positioned at the Start Area. The optimal wall distances were determined to be 0.6 m for drones performing left wall-following and 1.2 m for those performing right wall-following, as illustrated in Figure 5.6 below.
Additionally, the loop detection transformation angle was set to 90° instead of the 180° proposed in the original SGBA, to prevent drones from travelling straight back to the Start Area. The transformation angles depend on the drones' wall-following direction and follow the right-hand rule convention: drones assigned to right wall-following adjust their preferred heading by +90°, while those following the left wall adjust by -90°.
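The direction-dependent transformation can be expressed compactly; a sketch with a function name of our own choosing:

```python
def loop_break_heading(heading_deg: float, wall_follow_dir: str) -> float:
    """On loop detection: +90 degrees for right wall-following,
    -90 degrees for left, per the right-hand rule convention."""
    delta = 90.0 if wall_follow_dir == "right" else -90.0
    return (heading_deg + delta) % 360.0
```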
Since all the drones must take off within two waves, the drones belonging to Point B will take off first, followed by the drones travelling to Point A and Point C, to ensure the drones reach their respective dispersion points safely without collision. This configuration also prevents drone collisions by keeping the Point B drones in the forefront, followed by the Point C and Point A drones, which are unlikely to collide with one another as they travel in opposite directions.
The drone configurations in the Start Area that guide the three separate drone groups to dispersion points A, B, and C are shown below in Figure 5.6. Each drone is also assigned a wall-following direction of either left or right. Drones 1, 2, and 3 take off first and travel to Point B, followed by Drones 4, 5, and 6 to Point C and Drones 7, 8, and 9 to Point A. The drones in each group are arranged in a "V" formation because, based on real-life swarm drone testing, maintaining a clear LOS in the forward direction is crucial, especially during take-off. If a drone detects another drone directly in front of it, it will automatically initiate wall-following mode, potentially disrupting the planned dispersion sequence.
5.4 Simulation to Evaluate Search Strategy
To validate, evaluate, and further optimize the proposed strategy, extensive testing is required. Performing small-scale tests by recreating subsections of the Playing Field is not possible because it is difficult to predict the state of the drones at the boundaries of the subsections. Full-scale testing is also not feasible because of the space constraints at Studio 1. Therefore, a simulation environment was created.
Beyond addressing space limitations, simulation enables rapid troubleshooting of search algorithms and fine-tuning of parameters without the need to physically redeploy the code onto the drones every time a change is made. Webots was used as the simulator because it allows each robot in the environment to execute its own C code, which mirrors the actual implementation of the decentralized search strategy using MSBA. A key consideration in developing this simulation environment was ensuring that the same code deployed onto the drone firmware could be tested within the simulation.
To create a decentralized system, the MSBA scripts all run onboard the Crazyflie. The scripts are compiled in the development workspace before being flashed onto the drone using the Crazyradio. This writes the scripts into the app layer of the Crazyflie firmware. When the drone starts up, the firmware executes the scripts inside the app layer, which then have access to all the deck sensor APIs, enabling the drones to operate autonomously without the need for centralized control.
While Webots provides an excellent platform for simulating robots running onboard scripts, it does not support the Crazyflie firmware or the identical Deck API used on the actual Crazyflie hardware. Simulators that support software-in-the-loop simulation of the Crazyflie firmware, such as CrazySim and Sim_CF2, were explored, but neither supports building the App Layer in the simulator. As a result, to integrate MSBA with the Webots simulation, an interface is required to bridge the gap between MSBA and the simulation environment. This interface allows MSBA to access the necessary sensor readings, such as distance data from virtual sensors, and provides a means for the Webots robot to receive output from MSBA, ensuring that the drone's behaviour in the simulation mirrors that of the actual Crazyflie system.
The full Webots repository that contains the simulation environment and controller code can be found in our Webots Github Repository.
The competition search environment was first recreated in Webots as shown in Figure 5.9, with the static obstacles being placed according to the Playing Field layout illustrated in the competition challenge booklet while conforming to the minimum distance. Each box represents a 1 m by 1 m area. Note that the orientation of the top-down view differs from the previous figures. The view in this figure is rotated clockwise by 90 °.
The performance of the search strategy was then evaluated by simulating the drones in different Playing Field configurations. The drone paths after four minutes of simulation are plotted and compared using the drones' position data stored in CSV files and Matplotlib. The percentage of search coverage is determined by counting the boxes that the swarm drones' paths crossed in the Known Search Area overlaid in Figure 5.9.
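This coverage metric can be sketched as below, assuming the trajectories have already been loaded from the CSV files as lists of (x, y) positions in metres:

```python
def coverage(paths, area_w=20, area_h=20):
    """Fraction of 1 m x 1 m grid cells crossed by any drone path.
    paths: list of trajectories, each a list of (x, y) positions."""
    visited = set()
    for traj in paths:
        for x, y in traj:
            cx, cy = int(x), int(y)   # index of the 1 m grid cell
            if 0 <= cx < area_w and 0 <= cy < area_h:
                visited.add((cx, cy))
    return len(visited) / (area_w * area_h)
```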
In all the configurations tested, the drone swarm was able to cover most of the Known Search Area collectively, consistently achieving a search coverage above 80%. However, there are a few blind spots missed by the drones, notably the areas in corners. Figure 5.10 shows the results from three of the simulations performed, as well as the blind spots. In addition, in certain scenarios, some drones travelled into the Pillar Area, which is undesired.
After multiple simulations, the two-lanes collision avoidance strategy proved reliable, as no collisions occurred beyond the initial take-off phase where drones fly to their respective dispersion points. Drone-to-drone collisions were only observed between drones in the same group, primarily due to insufficient spacing between drones in the Start Area.
5.5 Improvements to MSBA
To address blind spots in the corners of walls, two wall-following (WF) drones will be deployed to cover these areas, as identified in the simulation. To prevent these WF drones from interfering with the paths of the MSBA drones, the WF drones will fly at a higher altitude of 0.55 m, 0.25 m higher than the altitude of the MSBA drones. At this altitude, the effect of downwash on the MSBA drones flying below the WF drones is minimal and has no significant impact on image detection, as tested (Section 3.4).
A total of ten drones are assigned to search the Known Search Area. Since only nine drones were initially used to test the MSBA search strategy, an additional tenth drone will be introduced to perform WF along the outer perimeter of the Playing Field, while drone number nine will be reassigned to WF as well. These drones will maintain a wall-following distance of 0.5 m to handle cases where object markers are placed in the corners. With the addition of a new drone and the reassignment of one drone to WF, the search coverage increased to over 90 %, calculated using the same method described previously.
To address the scenario where drones cross into the Pillar Area, a reverse function was implemented in MSBA to enable the drones to perform a 180-degree turn and continue searching within the Known Search Area. To facilitate this mechanism, two AprilTag Navigation Aids will be placed at the border between the Known Search Area and the Pillar Area. When an MSBA drone in the Known Search Area detects them, the Mission Planner broadcasts a reverse command packet to the detecting drone. Upon receiving this packet, the drone adjusts its preferred heading by 180 degrees, ensuring it remains within the designated search area.
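On the drone side, the reverse handler amounts to flipping the preferred heading when the packet arrives. A sketch only: the command ID and names below are hypothetical, not the actual CPX packet format:

```python
REVERSE_CMD = 0x02  # hypothetical command ID for the reverse packet

class DroneState:
    def __init__(self, heading_deg: float):
        self.heading_deg = heading_deg

def handle_packet(state: DroneState, cmd_id: int) -> DroneState:
    if cmd_id == REVERSE_CMD:
        # Flip the preferred heading so the drone turns back into
        # the Known Search Area.
        state.heading_deg = (state.heading_deg + 180.0) % 360.0
    return state
```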
5.6 Real World Testing
To validate the MSBA implementation and further optimize flight parameters, a series of real-world tests were conducted in three different environments, where 1 m by 1 m cardboards are used to create wall obstacles:
- Small-scale tests in the studio
- Medium-scale tests in a seminar room at E7
- Full-scale tests in RC4 Hall
Initially, most tests were small-scale tests conducted in E2A Studio 1 to optimize individual drone flight stability by tuning flight parameters such as wall-following distance, flight speed, and altitude. Prior to creating the simulation environment, these small-scale tests provided the optimum flight parameters for the MSBA drones, which were later used in the simulation. The small-scale tests also enabled the reverse function to be tested by manually issuing the reverse command packet before integrating it with the Mission Planner system.
A key issue observed when testing the WF drones was that they had difficulty turning around the thin wall edges of the L-shaped free-floating wall in the Known Search Area. When turning around these edges, the drones would lose track of the wall edges due to the narrow field of view of the side ToF sensor. To mitigate this, two wall guards are used as Navigation Aids. These are trifolds made from corrugated board that slot around the wall edges of the L-shaped wall. By effectively increasing the wall thickness around the wall edges, WF performance is significantly improved.
Subsequently, medium-scale tests were performed in E7 seminar rooms. With a larger space, subsections of the Known Search Area were recreated to test the behaviour of swarm clusters (around 3 to 5 drones). These tests validated the Webots simulation, as the drones' behaviour and paths closely matched those generated in the simulation. However, they also revealed challenges unique to executing the MSBA search strategy in the real world.
The MSBA drones' paths were highly dependent on their initial configuration in the Start Area. During the first few tests, the MSBA drones often deviated from the intended path due to inaccurate initial orientation when placed in the Start Area. In some cases, this resulted in the drones failing to reach the dispersion zone in the Known Search Area, as they missed the walls that would have guided them correctly. Additionally, when drones were placed too close to one another, they either exhibited erratic behaviour after detecting the drone in front as a wall obstacle or, in the worst case, collided with the drones ahead.
Unlike in simulation, where drone placements are accurate and precise, the initial medium-scale tests revealed the significant impact of human error on overall search coverage. To mitigate deviations in the initial preferred heading of MSBA drones, laser pointers were used to align them correctly in the Start Area. Furthermore, to prevent drones from being too close to one another, they were spaced farther apart to fully utilize the large Start Area.
Finally, full-scale tests were conducted in RC4. The 20 m by 20 m competition Playing Field was recreated to evaluate the performance of the entire drone swarm. Initially, the drones exhibited poor localization, drifting, sudden accelerations, and jerky movements due to the venue's reflective flooring. This issue is discussed in detail in Section 9. To mitigate this problem, all drones were flown at a higher altitude: MSBA drones were raised from 0.3 m to 0.5 m, while WF drones were raised from 0.55 m to 0.75 m. During these tests, the drones closely followed the same paths as their simulated counterparts in Webots.
5.7 Final MSBA Configurations
After conducting the series of real-world tests, the set of flight parameters for Known Search Area drones performing MSBA and wall-following were determined.
6. Unknown Search Area
The Unknown Search Area is an 8 m by 8 m area situated in the centre of the Playing Field, accessible through two entrances/exits to the right and at the top of the area. Unlike the Known Search Area and Pillar Area, which are easily accessible after crossing the Start Area, drones searching the Unknown Search Area need to first reach either entrance before starting to search the area.
A hybrid approach combining the decentralized mapless method—MSBA—and a mapped method was initially explored to ensure full coverage of the Unknown Search Area. The strategy involves deploying MSBA drones for a quick sweep, followed by mapping drones to ensure full coverage and rescue victims located in the blind spots of the MSBA drones. This approach balances speed and coverage, as mapped methods typically require longer convergence times.
6.1 Feasibility Testing of Mapping Hardware: Bitcraze Multi-Ranger Deck (Yan Yew)
A centralized approach using ROS2 was chosen to perform SLAM due to the limited computational resources available on the STM32 onboard the Crazyflie. This approach provided scalability and ease of implementation, as it can be easily integrated with the Mission Planner, which also operates on ROS2, and it allows us to leverage existing tools and libraries for SLAM. To implement SLAM on a swarm of drones, each individual drone needs to produce an accurate map. These individual occupancy maps can then be merged to create a single, complete map.
The performance of SLAM using a single Crazyflie using the off-the-shelf Bitcraze Multi-Ranger Deck was first tested using the Crazyflie ROS2 Multiranger package. This multi-ranger package uses the Crazyswarm2 package to collect sensor data from the Crazyflie via the Crazyradio, which is then used to construct an occupancy map.
For testing, a 4 m by 5 m enclosed space was set up in the E2A Studio. A single Crazyflie was deployed to perform SLAM while following the walls within the enclosed space. Upon completing a full loop and returning to its starting position, the drone was terminated, and the resulting map was evaluated.
During the initial tests, the maps produced by the drone were poor and did not remotely match the ground truth. In one of the runs, it was observed that the drone left an "occupied" trail while performing SLAM (Figure 6.3). Upon inspection, it was discovered that the ToF sensors on the Multi-ranger deck can be obstructed by the AI deck located below it, as shown in Figure 6.4. After raising the Multi-ranger deck to ensure the ToF sensors are unobstructed and cleaning the sensors, the single Crazyflie was able to consistently produce accurate maps.
With a single Crazyflie producing a reliable SLAM map, the next step was to evaluate multi-drone SLAM by merging occupancy maps from multiple Crazyflies. To achieve this, the ROS2 Merge Map package was integrated into the system. This ROS2 node subscribes to multiple map topics and publishes a single merged occupancy map, allowing the GCS to construct a complete map of the environment. The final implementation can be found in our Multi-Drone SLAM and Merge Map Github repository.
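The core of the merge step can be sketched as follows, assuming the individual occupancy grids are already aligned in a common frame and use the ROS convention of -1 = unknown, 0 = free, 100 = occupied (the actual node subscribes to map topics and republishes the merged result):

```python
def merge_cell(values):
    """Keep the most informative reading: occupied (100) overrides
    free (0), and any known value overrides unknown (-1)."""
    known = [v for v in values if v != -1]
    return max(known) if known else -1

def merge_maps(maps):
    """maps: list of equally sized 2-D occupancy grids (lists of lists)."""
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[merge_cell([m[r][c] for m in maps]) for c in range(cols)]
            for r in range(rows)]
```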
To test the multi-drone SLAM and map merging implementation, two Crazyflies were deployed in a 4 m by 4 m enclosed space, performing SLAM while wall-following in opposite directions. Initially, after both drones completed their first loop, the merged map was accurate and closely matched the actual environment. However, as the drones continued to complete additional loops, the mapped environment began to drift over time, leading to misalignment in the final merged map (Figure 6.7). This issue was attributed to drift in the individual Crazyflie’s optical flow sensor readings. As the drones continued wall-following around the enclosed space, positional errors accumulated significantly, causing increasing discrepancies between the mapped and actual environments.
To mitigate map drift in multi-drone SLAM, proper loop closure detection must be implemented. Loop closure detection works by extracting features from the current sensor data and comparing them with previous sensor data. If a match is found, a loop closure constraint is placed in the SLAM map, reducing cumulative error (Nashed, 2020).
However, implementing loop closure detection requires a large amount of data to overcome inherent sensor noise. This is not feasible with the ToF sensors on the Crazyflie due to their narrow field of view. Sparse sensing is a niche research area in SLAM that aims to improve performance using limited sensor data. One study suggests leveraging fast convex optimization techniques to implement loop closure detection (Latif et al., 2017). However, without any publication on an actual implementation, integrating such a method into the Crazyflie would require too much time and resources.
ETH-PBL recently developed Nano Swarm Mapping, a decentralized solution tailored for nano drones like the Crazyflie. Unlike a centralized SLAM solution, which relies on a GCS to process sensor data and construct maps, this solution enables each drone to independently scan the environment onboard using ToF sensors, then broadcast the scans and poses to the "main drone", a drone within the swarm assigned to collect the scan data, perform SLAM, and finally broadcast the results back to the swarm. To mitigate the effect of drift, the researchers designed a modified Multi-ranger deck featuring ToF sensors with a wider FOV and were able to implement loop closure detection (Friess et al., 2024).
6.2 Feasibility Testing of Mapping Hardware: Custom Multi-Ranger Deck (Samuel)
ETH Zurich’s Project-Based Learning Lab (ETH-PBL) recently introduced Nano Swarm Mapping, a decentralized SLAM framework specifically designed for ultra-light nano drones like the Crazyflie 2.1. Unlike traditional centralized SLAM systems that rely on a GCS to collect and process sensor data, this solution allows each drone to independently perform onboard environmental scanning using ToF sensors. The drones broadcast their local scans and poses to a designated “main drone” within the swarm, which performs SLAM using the collective data and shares the resulting map back with the swarm.
To address limitations in the Bitcraze Multi-ranger deck—particularly its single-point laser and narrow field of view—ETH researchers engineered a custom deck outfitted with VL53L5CX ToF sensors, advanced sensors that offer a 45-degree spread field of view, significantly improving perception. The ETH system also supports loop closure detection for enhanced accuracy (Friess et al., 2024).
Inspired by this advancement, we explored the feasibility of integrating this system into our own search strategy. With support from T-Labs, we (together with Tong Jing Yen, a UREx team member) successfully fabricated two units of the custom ToF deck for testing. However, despite initial progress, we encountered several critical barriers that prevented full implementation:
- Memory Constraints: The ETH solution runs advanced SLAM algorithms onboard, requiring more than the standard 192 kB of RAM available on the Crazyflie 2.1. Attempting to flash and run the provided code led to out-of-memory errors on our hardware.
- Deck Compatibility Issues: The custom deck conflicted with other decks (e.g., Flow Deck v2 or battery mount deck), due to hardware stack limitations and pin usage conflicts.
- Battery Placement Challenges: The additional hardware and physical modifications left insufficient space to securely and safely place the battery, affecting both stability and flight time.
- Lack of Full Documentation: Although ETH shared their repository, certain hardware specs, firmware configurations, and system integration details were not fully documented, making it difficult to reproduce the setup reliably within our timeframe.
Given these challenges, and due to time and resource constraints, we have decided to defer implementation of the Nano Swarm Mapping framework. Instead, we will focus on more readily deployable solutions for the current phase of the project, leaving this advanced SLAM approach as a valuable opportunity for future cohorts to explore.
6.3 Feasibility Testing of Mapping Software (Samuel)
To ensure full coverage of the area, an effective exploration strategy must be implemented on top of the generated map. We considered three main exploration strategies: Breadth-First Search (BFS), Depth-First Search (DFS), and Frontier Search. The three strategies and how they work are explained in the following subsections.
While BFS is computationally efficient and well-suited for multi-drone coordination, its effectiveness is limited by the poor performance of map merging. This limitation makes BFS impractical for our use case, as inaccurate merging can result in inefficient exploration paths or redundant coverage.
Frontier Search is the most efficient and robust method, as it actively selects the most promising unexplored regions for mapping. However, due to hardware and time constraints, implementing Frontier Search at this stage is not feasible.
Instead, we opted for DFS as a proof-of-concept approach. While DFS is inherently inefficient and suboptimal for multi-drone coordination, it can be implemented using a single drone. This allows us to validate our mapping framework before moving on to a more sophisticated exploration strategy. However, DFS will not be used in the final competition due to its drawbacks, including high exploration time and a single point of failure in an unknown environment.
6.3.1 Breadth First Search
Breadth-First Search explores the environment in a layer-by-layer manner, ensuring that all nodes at the current depth are visited before moving on to nodes at the next level. In the context of drone exploration, this translates to prioritizing short-range movements first, allowing the drone to systematically expand outward from its starting point. The drone begins at a known position and marks it as explored. It then identifies all adjacent unexplored areas and places them in a queue. The first area in the queue is explored next, and any new adjacent unexplored regions are added to the queue. This process repeats until all reachable areas have been covered. BFS is advantageous because it ensures uniform coverage and works well with multiple drones, as different drones can simultaneously handle different regions of the map. However, its effectiveness heavily depends on the quality of the merged map. Inaccurate merging can lead to redundant exploration or inefficient path planning.
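The layer-by-layer expansion described above can be sketched on a simple occupancy grid. This is an illustrative Python sketch, not the onboard implementation; the grid encoding and the function name `bfs_coverage` are hypothetical stand-ins for the merged map:

```python
from collections import deque

def bfs_coverage(grid, start):
    """Return the order in which free cells are explored from `start`.

    grid: list of strings, '.' = free space, '#' = obstacle.
    BFS expands outward layer by layer, so nearby cells are always
    visited before distant ones."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    visited = {start}
    order = []
    while queue:
        r, c = queue.popleft()
        order.append((r, c))
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in visited):
                visited.add((nr, nc))
                queue.append((nr, nc))
    return order

grid = ["..#",
        "..#",
        "..."]
print(bfs_coverage(grid, (0, 0)))
```

Because the frontier of cells to visit is held in a FIFO queue, the visit order naturally supports handing different layers or regions to different drones.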
6.3.2 Depth First Search
Depth-First Search follows a deep, narrow search pattern by exploring as far as possible in one direction before backtracking. When used with a drone, DFS causes it to move forward until it reaches an obstacle or a previously visited area. Upon reaching a dead end, the drone backtracks to the most recent unexplored branch and continues the search from there. This approach continues until every possible path has been explored. DFS is relatively easy to implement and requires minimal computational resources, making it a suitable choice for a single-drone proof-of-concept. However, it is less efficient than other strategies, often resulting in long, winding paths and excessive backtracking. It is not ideal for time-sensitive missions or situations requiring fast and reliable full coverage.
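The dive-deep-and-backtrack behaviour, and the excessive backtracking it incurs, can be illustrated with a short sketch (hypothetical grid and function names; the returned path includes every backtracking move the drone would physically fly):

```python
def dfs_path(grid, start):
    """Simulate a single drone flying DFS: it dives down one branch and
    physically backtracks to the last junction when it hits a dead end.
    Returns the full flight path, including backtracking moves."""
    rows, cols = len(grid), len(grid[0])
    visited = set()
    path = []

    def explore(r, c):
        visited.add((r, c))
        path.append((r, c))
        for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in visited):
                explore(nr, nc)
                path.append((r, c))   # backtrack: fly back to this cell
        return

    explore(*start)
    return path

grid = ["...",
        ".#.",
        "..."]
path = dfs_path(grid, (0, 0))
print(len(set(path)), len(path))  # cells covered vs. total moves flown
```

Even on this tiny 8-cell map, the simulated flight path is nearly twice as long as the number of cells covered, which is the inefficiency noted above.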
6.3.3 Frontier Search
Frontier Search is a highly efficient and adaptive strategy that targets the boundaries between known and unknown regions, known as frontiers. The drone continuously identifies these frontiers and evaluates them based on criteria such as distance, accessibility, and potential information gain. It then selects the most promising frontier to explore, updates the map upon reaching it, and repeats the process. This approach allows the drone to focus its exploration on the most valuable parts of the environment, minimizing redundant coverage. Frontier Search is particularly effective in dynamic environments and is well-suited for multi-drone operations, as different drones can be assigned different frontiers. However, it requires more complex decision-making and greater processing power compared to BFS or DFS. This makes it less suitable for hardware-constrained platforms unless further optimization is applied.
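The frontier-selection loop can be sketched as follows. This is a minimal illustration assuming a nearest-frontier policy on a toy grid; `find_frontiers` and `pick_frontier` are hypothetical names, and a fuller scorer would also weigh accessibility and expected information gain as described above:

```python
import math

def find_frontiers(grid):
    """Frontier cells: known free cells adjacent to at least one unknown
    cell. '.' = known free, '#' = obstacle, '?' = unknown."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != '.':
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == '?':
                    frontiers.append((r, c))
                    break
    return frontiers

def pick_frontier(grid, pos):
    """Greedy policy: fly to the nearest frontier by straight-line
    distance. Returns None when the map is fully explored."""
    frontiers = find_frontiers(grid)
    if not frontiers:
        return None
    return min(frontiers, key=lambda f: math.dist(pos, f))

grid = ["..??",
        "..#?",
        "...."]
print(pick_frontier(grid, (0, 0)))
```

In a multi-drone setting, each drone could be assigned a distinct element of `find_frontiers(grid)`, which is what makes this strategy attractive for swarm coverage.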
6.4 Implementation of MSBA (Yan Yew)
Due to the poor performance of SLAM using multiple drones, it was decided to implement MSBA for the Unknown Search Area drones to ensure sufficient tests could be conducted before the competition. Six drones (numbered 4, 5, 6, 7, 8, and 9) are assigned to the Unknown Search Area. The three even-numbered drones (4, 6, and 8) will enter from the top opening of the Unknown Search Area, while the remaining odd-numbered drones (5, 7, and 9) will enter from the right opening. This helps maximise search coverage by spreading the drones out, and handles the possibility of the Unknown Search Area consisting of two subsections that are only accessible through the two separate openings.
6.4.1 Drone Guidance to the Unknown Area
To get to the openings to the right and top of the Unknown Search Area, the drone will travel through the passageway between the Unknown Search Area right outer wall and the pillars. The right opening will be referred to as Entrance 1, and the top opening will be referred to as Entrance 2. The three drones that will enter from Entrance 1 belong to Group 1 and the other three drones that enter from Entrance 2 belong to Group 2.
The desired paths of Group 1 and Group 2 drones are first considered from the Start Area to the respective Unknown Search Area openings, as illustrated in Figure 6.12 above. There are three waypoints along the path from the Start Area to Entrance 2 where key events take place. When a drone reaches point A, it should continue moving forward if it belongs to Group 2; otherwise, it should turn left into Entrance 1 and begin searching the Unknown Search Area. At point B, the drones should turn left and continue moving forward. At point C, the drones should turn left into the Unknown Search Area and start searching for Victim Markers.
To achieve the desired paths, Navigation Aids will be placed at points A, B, and C, which will be referred to as Tag A, Tag B, and Tag C respectively. Whenever one of these tags is seen by a drone, a radio command unique to that Navigation Aid will be issued to the drone. At the Start Area, all the Unknown Search Area drones will be lined up with points A and B directly ahead of them.
According to Figure 6.12, there are three unique possible motions that the drones would perform to enter the Unknown Search Area, namely (1) Continue straight forward, (2) Turn left, then continue straight forward, and (3) Turn left, move forward, and begin search. The pseudocode is shown in Algorithm 1, which is implemented in SGBA.c.
Since Mission Planner only supports a one-to-one mapping of radio command to Navigation Aid, the logic to initiate the correct motion upon receiving a radio command must be handled by the drone. For example, when a Group 1 drone receives the radio command that corresponds to Tag A, it needs to initiate motion (3): turn left, move forward, and begin search. A Group 2 drone receiving the same radio command should instead initiate motion (1): continue straight forward. The pseudocode is included in Algorithm 2, which is implemented in state_machine.c.
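As an illustration, the group-dependent branching of Algorithm 2 can be sketched in Python (the actual implementation lives in state_machine.c; the table entries and names here are hypothetical):

```python
# Motions (1), (2), (3) as described in Figure 6.12.
FORWARD = 1              # continue straight forward
TURN_LEFT_FORWARD = 2    # turn left, then continue straight forward
TURN_LEFT_SEARCH = 3     # turn left, move forward, and begin search

# Hypothetical lookup table: motion per (group, tag) pair.
MOTION_TABLE = {
    (1, "A"): TURN_LEFT_SEARCH,    # Group 1 turns into Entrance 1 at point A
    (2, "A"): FORWARD,             # Group 2 keeps flying toward point B
    (2, "B"): TURN_LEFT_FORWARD,   # Group 2 turns left at point B
    (2, "C"): TURN_LEFT_SEARCH,    # Group 2 turns into Entrance 2 at point C
}

def on_radio_command(group, tag):
    """Resolve the one-to-one tag-to-command mapping into the motion
    appropriate for this drone's group."""
    return MOTION_TABLE.get((group, tag))

print(on_radio_command(1, "A"), on_radio_command(2, "A"))
```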
6.4.2 MSBA for Searching the Unknown Area
Like the MSBA drones in the Known Search Area, these six Unknown Search Area drones have the same set of parameters to be optimized. Without knowledge of the obstacle layout in the area, the six drones will be evenly distributed to maximize search coverage, as shown in Figure 6.13. The preferred angles for each drone are relative to the forward direction, using the right-hand convention.
To ensure the drones remain within the Unknown Search Area, Navigation Aids placed at the Unknown Search Area openings will be used to guide them. When a drone detects these Navigation Aids, the GCS will issue the corresponding radio command packet to the drone. Once the drones have entered the Unknown Search Area and are on a path to exit, they will again detect these Navigation Aids. Upon receiving the radio command packet from the GCS, the drone's preferred direction will be adjusted to keep it within the boundaries of the Unknown Search Area. The optimal positions of the Navigation Aids, determined after extensive real-world testing, are shown in Figure 6.10.
Real-world testing with various transformation values revealed that a 135-degree transformation is optimal. While changing the preferred heading by 180 degrees would be the most effective way to keep the drones within the Unknown Search Area, it would not improve search coverage, as the drone would simply retrace its path in the opposite direction. A 90-degree transformation is not used, as it can cause the drone to exit the Unknown Search Area when approached at certain angles.
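The boundary transformation itself is a simple heading rotation; a minimal sketch, with a hypothetical function name and angles in degrees:

```python
def transform_heading(preferred_deg, transform_deg=135.0):
    """Rotate the drone's preferred heading when a boundary Navigation
    Aid is detected, wrapping to [0, 360). A 180-degree transform would
    make the drone retrace its inbound path; 135 degrees turns it back
    inside the area along a new line of approach."""
    return (preferred_deg + transform_deg) % 360.0

print(transform_heading(90.0))   # 225.0
```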
The Unknown Search Area drones adopt an intra-drone collision avoidance strategy similar to that of the Known Search Area drones, with the two groups flying at different wall distances and altitudes. Group 1 drones will fly at an altitude of 0.75 m, while Group 2 drones will fly at 0.5 m. In the absence of wall-following drones covering the wall corners, the drones will fly slightly closer to the wall compared to the MSBA drones in the Known Search Area: Group 1 drones (5, 7, 9) will perform left wall-following 0.4 m from the wall, and Group 2 drones (4, 6, 8) will perform right wall-following 1.0 m from the wall. All the drones will fly at the same maximum speed of 0.4 m/s.
The Unknown Search Area MSBA drones will be lined up in the Start Area in the order 4, 6, 8, 5, 7, 9. To mitigate the risk of collisions during take-off, the drones will be spaced equally apart, with the two groups taking off in separate waves. Group 2 drones (4, 6, 8) will take off in the first wave to enter the top opening of the Unknown Search Area, while Group 1 drones (5, 7, 9) will take off in the second wave, entering the Unknown Search Area from the right opening.
6.5 Real-World Testing (Yan Yew)
Real-world testing is a critical phase in validating the effectiveness of the search strategy, ensuring that the drones can perform as expected in a variety of conditions. Small, medium and full-scale tests were conducted throughout the development process to ensure the functionalities work as intended.
Firstly, small-scale tests were conducted in E2A Studio 1 to optimize MSBA parameters such as the wall distance during wall-following. The algorithm to guide drones into the Unknown Search Area was also extensively tested by manually issuing radio commands to the drones.
After integration with Mission Planner, the autonomous navigation of two drones to the Unknown Search Area could be tested. This enabled the optimization of drone orientation in the Start Area, which is crucial to ensure the drones can detect the Navigation Aids and reach the correct opening. In the first few tests, the drones were often mis-oriented, producing paths that deviated from the intended route and resulted in the Unknown Search Area drones crossing into the Pillar Area.
Then, medium-scale tests were performed in the E7 seminar room. With a larger space, two-thirds of the Unknown Search Area could be constructed, and all six Unknown Search Area drones could be tested simultaneously. However, because some drones were unable to stream images, a maximum of four drones was tested simultaneously across all the medium-scale tests. With a larger swarm, the tests revealed the weakness of this strategy: its reliance on image streaming. Visual feedback is essential for guiding the drones into the Unknown Search Area and keeping them within it. During the tests, the Wi-Fi connection of the drones was intermittent, leading to windows of time without image feedback. When these periods overlapped with a drone approaching a Navigation Aid, the drone missed the entrances and entered the Known Search Area at the top of the Playing Field. This is problematic because the drone would then fail to search the Unknown Search Area and, in the worst case, risk collisions with the Known Search Area drones.
Understanding the significance of reliable Wi-Fi connectivity in executing the search strategy for the Unknown Search Area, the team worked closely together to optimize the Wi-Fi network for the drone swarm and was able to mitigate the intermittent connectivity. This is explained in detail in Section 3.5.
Finally, full-scale tests were performed with the other subsystems in a competition-scale Playing Field. All six drones assigned to search the Unknown Search Area were tested. Through these tests, the positions of the Navigation Aids were optimized to guide the drones to the entrances while ensuring the aids could be reliably detected to prevent the drones from exiting the area.
7. Pillar Area (Matthew)
Another new feature of SAFMC 2025 is the addition of the Pillar Area. This area of the Playing Field has no Inner Walls; only Pillars form the obstacles that the drones must navigate through.
7.1 Introduction to the Pillar Area
With no Inner Walls in the Pillar Area to navigate around, a different strategy from that used in the Known and Unknown Search Areas is needed for the drones to navigate it and search for Victim Markers.
7.2 Mock-Up of the Pillar Area
To test the Pillar Area, we needed to build a mock-up of eight Pillars for our drones to navigate through. This was achieved using Styrofoam cylinders, which were tested to ensure that the drones could detect them while flying and could fly stably near them; both were found to be satisfactory with our Crazyflie drones. The mock-up pillars are about 25-30 cm in diameter, a close approximation of the competition pillars, which are 30 cm in diameter. The pillars are about 30-40 cm tall, enough for our initially planned flying altitude of 30 cm.
Later, the flying height was adjusted to 80 cm, which was found to be more robust and reliable. To support this new flying height, the pillars were placed on chairs about 75 cm in height, as seen in Figure 7.3. The pillars would now span the range of 75-115 cm off the ground.
7.3 Testing of Search Strategies Within
Without Inner Walls to follow, the search strategy relies on using the Pillars to localise the paths of the drones as they fly within the area. A few methods were considered:
A. Using the MSBA wall following algorithm, but with changes to its speed
One method explored was to use the same MSBA wall-following algorithm with the speed reduced to 0.2 m/s. This worked on the theory that the drone would interpret a Pillar as a wall until it reached free space again, where it would return to its original heading. However, it was found that the drones would follow the Pillar's circumference as a wall and end up circling it in an infinite loop.
B. Using an algorithm that follows the preferred direction till meeting a Pillar, avoiding it by following the circumference
This method would involve having the drone first fly forward (its preferred direction) until it encounters a Pillar head-on, at which point it would avoid the Pillar by flying in a circular path following the Pillar's walls. It would then resume its original heading in the new lane it had shifted to. However, the drone tended to drift and clip the Pillar while circling it.
C. Using an algorithm that follows the preferred direction till meeting a Pillar, and avoids by turning on the spot
Similar to method B, the drone would fly in the preferred direction until it meets a Pillar head-on. However, unlike the previous method, it would turn on the spot to the left or right by 90° and move for a short distance before turning back by another 90° to resume its flight in the original heading. This avoids clipping the Pillar during the avoidance manoeuvre, and is a solution developed together with David Chong, an Undergraduate Research Experience (UREx) student working alongside us on our project.
Comparing the three solutions and their characteristics, we can arrive at the following decision matrix:
Given that Solution C is the method that would allow us to maximise the reliability and robustness of the algorithm, it is chosen as our search strategy for the pillar area.
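The manoeuvre behind Solution C can be sketched as a short waypoint plan. This is an illustrative Python sketch with hypothetical names and a made-up lane width; the real avoidance runs in the drone firmware:

```python
def sidestep_waypoints(pos, heading, lane_shift=0.5):
    """Waypoint plan for the on-the-spot avoidance (Solution C): yaw 90
    degrees left, translate one lane sideways, yaw back, then resume the
    original heading. pos = (x, y) in metres; heading is one of
    'N', 'E', 'S', 'W' for simplicity; lane_shift is a hypothetical
    lane width, not a tuned value."""
    x, y = pos
    # Unit vector pointing 90 degrees to the left of each heading
    # (x east, y north).
    left = {"N": (-1, 0), "E": (0, 1), "S": (1, 0), "W": (0, -1)}
    lx, ly = left[heading]
    shifted = (x + lx * lane_shift, y + ly * lane_shift)
    return [("yaw_left", 90), ("goto", shifted),
            ("yaw_right", 90), ("resume", heading)]

print(sidestep_waypoints((1.0, 2.0), "N"))
```

Because every segment is either a pure rotation on the spot or a straight translation, the drone never traces a curved path near the Pillar, which is what eliminated the clipping seen in method B.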
7.4 Testing of Victim Placement
In the Pillar Area, Victim Markers were placed in different locations. We then tested the search algorithm's effectiveness in rescuing them in each position.
1. Victim Marker in free space
Victim Markers were placed in the middle of the Pillar Area in free space. The drones were able to detect and rescue these victims.
2. Victim Marker in front of the Pillar
Another configuration involved placing the Victim Marker directly in front of the pillar. As with the free-space case, the drones had no issues picking up the victims in front of the pillar.
3. Victim Marker behind the Pillar
This was, in theory, the hardest to pick up, since the drone would not be able to see the victim until it came close enough for the camera to detect it. However, the drones were able to detect and rescue the victims in the tests that we did. A failsafe can nevertheless be considered to increase the likelihood of rescuing the victim if it is not seen the first time the drone passes by.
7.5 Optimising for Search and Rescue
While Solution C was chosen as the ideal search strategy in section 7.3, further optimisations were made to maximise the coverage and reliability of the algorithm while minimising the search time. To achieve this, a few strategies were employed:
7.5.1 Sliding to the Side When Near a Pillar
While the search strategy relies on avoiding a Pillar if it comes head-on to a Pillar, drift in the drone’s path can lead to it coming too close to Pillars on its side. A novel strategy was then employed for the drone to “slide” to the side should it come close to the Pillars, which would allow it to avoid collision with the Pillars while simultaneously maintaining the original heading of its flight path.
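A minimal sketch of this keep-heading slide, using hypothetical clearance and step values (the firmware works with the Multi-ranger's side ToF readings):

```python
def lateral_correction(left_dist, right_dist, min_clearance=0.3, step=0.1):
    """Slide sideways when a Pillar comes too close on either side while
    keeping the original heading. Distances in metres. Returns a signed
    lateral step: positive slides right, negative slides left, zero
    keeps the current lane."""
    if left_dist < min_clearance:
        return +step    # Pillar too close on the left: slide right
    if right_dist < min_clearance:
        return -step    # Pillar too close on the right: slide left
    return 0.0

print(lateral_correction(0.2, 1.5))
```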
7.5.2 Optimising the speed of the drones
Another parameter that we sought to optimise was the speed of the drones while flying within the pillar zone. Flying faster would mean that we are able to complete the mission in a quicker time, but would also increase the chance of crashing into pillars due to either slow or non-detection of the pillars while flying. This happens as the drones do require a processing time between detecting the pillars and avoiding it, and flying too fast would mean that the drones would crash into pillars before they are able to avoid it. Flying too slowly however would cause the mission to take too much time to complete.
A speed of 0.5 m/s was initially chosen. However, this was too fast, leading to frequent crashes. 0.2 m/s was found to work well but was too slow. 0.3 m/s was later found to be the optimal speed, balancing speed with the reliability of the algorithm.
7.5.3 Navigation Aid to Reverse at the End of the Pillar Area
Given that the Pillar drones are optimised to carry out search and rescue in the Pillar Area, they should spend the duration of the mission searching the Pillar Area instead of drifting into other zones. Navigation Aids are therefore used at the end of the Pillar Area to send a reverse command, similar to the approach employed in the Known Search Area. This keeps the drones within the area where they are best suited to search, and lets them capture targets that may have been placed directly behind a Pillar and missed on the first pass.
7.5.4 Straight line vs Zig-Zag path
The last parameter that we considered was whether the drone should fly in a straight-line path or a zig-zag one (as seen in the images below):
By following a zig-zag path, we can increase the field of view of the camera, with the intent of increasing the reliability of target detection. However, this comes at the cost of more frequent crashes and an unpredictable flight path. After our testing, we found that this was not a worthwhile change and that a straight-line path would be more desirable given the trade-offs.
7.6 Full Algorithm
Putting it all together, we now have the following algorithm for search and rescue within the Playing Field for a single drone.
Based on our testing, three drones flying with this algorithm are needed to achieve full search coverage of the pillar area while remaining robust and reliable (Figure 7.14). Any fewer drones would cause some search areas to be left uncovered, while allocating more drones may increase the chance of drone-to-drone collisions which would be counterproductive.
A video of the pillar test with 3 drones can be seen here:
8. Mission Planning (Shu Hui)
The Mission Planning subsystem involves the efficient integration of the subsystems the SAFMC 2025 team has developed into a robust mission execution plan for our drone swarm. It encompasses (1) the optimisation of the take-off sequence, (2) the optimisation of the kill and landing sequence for mission termination, and (3) managing the communication between the GCS and the drones.
8.1 Optimisation of Take-off Sequence
Drones assigned to channel 60 will take-off first, followed by drones assigned to channel 80 after a 15 s delay (Figure 8.1). 15 s was chosen to clearly differentiate the take-offs of the group of drones and to prevent collisions and overcrowding at the Start Area. As previously mentioned, with a limited number of Crazyradio dongles connected to GCSs, the 15 s allows for time to minimise radio traffic by the Crazyradio dongles.
As the competition rule book stipulates a maximum of two simultaneous drone take-offs and a 10-second window between take-offs, the drones were assigned either channel 60 or channel 80 for simplicity, unlike previous years where multiple channels were used. Using more channels was possible, but each additional channel results in 3 extra packets sent to the Crazyflies on that channel. Given hardware limitations and the heavy reliance on commands sent via the CPX protocol this year, we decided to limit the number of channels to reduce unnecessary packet transmission.
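The staggering described above can be sketched as a schedule computation. This is a hypothetical helper (the pairing within a wave and the function name are assumptions; the real sequencing is handled by Mission Planner):

```python
def takeoff_schedule(drones, wave_gap=15.0, pair_gap=10.0):
    """Compute take-off times (seconds from mission start) per drone.
    Channel-60 drones form the first wave; channel-80 drones start
    `wave_gap` seconds later. Within a wave, drones lift off in pairs
    `pair_gap` seconds apart, honouring the rule of at most two
    simultaneous take-offs. drones: list of (drone_id, channel)."""
    schedule = {}
    for channel, wave_start in ((60, 0.0), (80, wave_gap)):
        wave = [d for d, ch in drones if ch == channel]
        for i, drone in enumerate(wave):
            schedule[drone] = wave_start + (i // 2) * pair_gap
    return schedule

print(takeoff_schedule([(1, 60), (2, 60), (3, 80), (4, 80)]))
```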
The Known Search Area is the largest area in the Playing Field and has the highest concentration of Regular Victims, so it makes sense to assign ten drones to the Known Search Area to optimise the Mission Score. The Known Search Area drones were arranged in a manner that optimises the initial coverage and the time used for search.
The Unknown Search Area contains the same type of Obstacles as the Known Search Area but contains only Bonus Victim(s). Since the layout of the Unknown Search Area was not disclosed, but we needed to optimise coverage, a moderate number of six drones was assigned to the Unknown Search Area. The Unknown Search Area drones can only enter via two entrances/exits, so they are placed in a straight line, and each group of drones will enter via a different entrance/exit during its take-off wave. Once inside the Unknown Search Area, the drones will carry out their respective missions.
The Pillar Area contains Bonus Victim(s) and Pillar obstacles. Given the limitations of the Multi-ranger deck's ToF sensors and the narrow 1 m Pillar-to-Pillar distance, an effective Pillar Area search strategy was developed. A small number of drones is assigned to the area to minimise the risk of drone-to-drone collisions and collisions with the Pillars while complementing the search approach. The Pillar Area drones were also placed in a straight line for sequential entry. Upon entering the Pillar Area, the drones will carry out their respective missions.
8.2 Optimisation of Kill and Landing Sequence
The competition rule book stipulates the implementation of a failsafe capability in the event of a loss of link. The SAFMC 2024 team developed a kill.py script which completely stops the motors of a Crazyflie after the channel and drone number are specified when the script is executed.
The SAFMC 2025 team developed a kill_all.py script (CDE-4301-ASI-401, n.d.) which completely stops the motors of all the Crazyflies that are turned on within a 1 km radius (the ideal condition for Crazyradio), regardless of channel number and status. In the event of a catastrophic failure such as link loss or a GCS crash, where the Wi-Fi routers are either malfunctioning or turned off, image streaming will not work, and due to the Crazyflies' small size, it would be impossible to tell which malfunctioning drones are still flying and in which areas. The safest option is to stop all the drones within the vicinity immediately.
The mission termination criterion is set at 400 s (CDE-4301-ASI-401, n.d.) or when all the Victim Markers are rescued. The mission time was determined by allowing the Crazyflie to fly around with a fully charged battery several times and measuring the time until it lost power. Under optimal conditions, with a battery voltage of 4.20 V, the expected mission time is around 400 s. This means that for the full duration of the drones' flight time, we can track mission progress by printing the elapsed time (CDE-4301-ASI-401, n.d.). However, to account for the situation where the drones may be lost in the Playing Field and cannot rescue all the Victims, the land_all.py script developed by the SAFMC 2024 team can be used to end the mission prematurely.
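The termination check itself reduces to two conditions; a minimal sketch with hypothetical names:

```python
MISSION_TIME_LIMIT = 400.0   # seconds, from battery endurance tests at 4.20 V

def mission_should_end(elapsed_s, rescued, total_victims):
    """End the mission when the 400 s limit is reached or every Victim
    Marker has been rescued, whichever comes first."""
    return elapsed_s >= MISSION_TIME_LIMIT or rescued >= total_victims

print(mission_should_end(399.0, 4, 5), mission_should_end(400.0, 4, 5))
```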
8.3 Communication Between the Ground Control Station and Drones
With reference to Figure 8.2, the ESP32 chip on the AI Deck streamed compressed JPEG images via TCP sockets over Wi-Fi directly to the drone's assigned GCS.
Task allocation, take-off, landing, and behaviour-triggering commands based on GCS processing of images were sent from the GCSs to the drones using the CPX protocol via the dynamically managed pool of Crazyradio dongles.
The detection process (Figure 8.3) begins with a take-off command issued via CPX from the GCS, followed by a check of the drone's status to ensure it is ready for the mission. Once in the air, the drone continuously scans its surroundings for AprilTags while streaming images to the GCS via Wi-Fi. The GCS processes the image of each detected AprilTag and classifies it as a Victim Marker, a Danger Zone, or a Navigation Aid.
If the GCS recognizes a Victim Marker, it marks the Victim Marker as "Detected" in Mission Planner. The GCS then issues a land command via CPX, allowing the drone to land in front of the Victim Marker. Conversely, if the detected AprilTag is a Danger Zone, the GCS records the ID of that Danger Zone; the drone continues to fly and avoids landing in that area. If a Navigation Aid is detected, the GCS triggers a behaviour (reverse, turn, or change search pattern) by sending the appropriate CPX commands for that Navigation Aid.
If no AprilTags are detected during the flight, the drone keeps scanning its environment until the GCS identifies an AprilTag.
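The GCS-side branching above can be sketched as a dispatch function. The ID ranges and command strings below are hypothetical placeholders (the real mapping follows the competition specification, and real commands are CPX packets):

```python
# Hypothetical AprilTag ID assignments for illustration only.
VICTIM_IDS = set(range(0, 10))
DANGER_IDS = set(range(10, 14))
NAV_AID_BEHAVIOUR = {20: "reverse", 21: "turn", 22: "change_search_pattern"}

def handle_apriltag(tag_id, mission_state):
    """GCS-side dispatch for a detected AprilTag: land on a Victim
    Marker, record and avoid a Danger Zone, or trigger a Navigation Aid
    behaviour. Returns the command to send to the drone."""
    if tag_id in VICTIM_IDS:
        mission_state["detected_victims"].add(tag_id)
        return "land"                      # land in front of the Victim Marker
    if tag_id in DANGER_IDS:
        mission_state["danger_zones"].add(tag_id)
        return "continue"                  # keep flying, avoid landing here
    if tag_id in NAV_AID_BEHAVIOUR:
        return NAV_AID_BEHAVIOUR[tag_id]   # e.g. reverse at an area boundary
    return "continue"                      # unknown tag: keep scanning

state = {"detected_victims": set(), "danger_zones": set()}
print(handle_apriltag(3, state), handle_apriltag(11, state),
      handle_apriltag(20, state))
```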
9. Full-Scale Systems Testing
9.1 Testing in Residential College 4 Multi-Purpose Sports Hall
To ascertain the accuracy and reliability of our overall search strategy, we conducted our full-scale testing at the Multi-Purpose Sports Hall of Residential College 4 (RC4 MPSH). The RC4 MPSH's dimensions were about 20 m x 20 m, providing a good approximation of the actual competition layout. This was also the first chance for us to test the search strategies for the Known, Unknown and Pillar Search Areas all at once.
In the RC4 MPSH, we first started by testing the algorithms without image streaming (no target detection). This allowed us to ascertain the feasibility of our search strategies and collision avoidance first.
Later, we had a total of three runs testing the overall performance of the drone swarm with image streaming enabled to allow target detection and rescue of Victim Markers. The scores of our three runs were:
Run 3 was the best of all the runs taken, with three Victim Markers rescued (one of which was a Bonus Victim) while none of the Danger Zones were within 1 m of the drones after landing.
Originally, we were not able to rescue any victims and score points, as we faced a range of issues that hindered our drone swarm's performance. These are described in more detail below, including how they were resolved as we improved with each run.
9.1.1 Network and Bandwidth Issues
The original plan was to set up one central computer to manage all drones in the Playing Field on a single Wi-Fi network with three routers; this meant up to 20 drones managed on one central system. While this was not an issue in individual system testing with up to 10 drones, it did not perform well with 20 at once, likely due to bandwidth overload. Drones would often either fail to connect to the Wi-Fi or connect and then disconnect mid-flight. Once the Wi-Fi disconnected, a drone could no longer send images from its camera to the central system for processing, which meant it could not rescue any victims.
Eventually, a decision was made to de-link the network into three separate components controlled by three separate computers. This worked much better and gave us the assurance that the system would be reliable moving into the competition day. More details regarding the changes can be found in Section 3.6.
9.1.2 Different Performance of the Drones in Different Conditions
Moving to the RC4 MPSH, there were a few changes in conditions. First, the floor posed new challenges as it was shinier than the one in our iDP studio. This led to undefined behaviour of the drones, including sudden speed-ups and inconsistent altitude. The drones were therefore unable to rescue Victim Markers, or did so inconsistently, given that their speed varied drastically mid-flight.
To counter this, we had to vary the flying altitude of the drones to ensure their stable performance within the RC4 MPSH. Flying higher counters the effect of the reflective surface, given that the drones are further from the ground. By increasing the altitude from the original 30 cm to 50-80 cm, the drones performed in a much more stable manner. They could then fly in a much more predictable fashion, allowing more reliable search and rescue. More details about the changes are given in Sections 5, 6 and 7, detailing the optimisation of the drones' parameters in each area (Known, Unknown and Pillar Areas) after testing.
9.1.3 Overall Analysis
The conditions at the RC4 MPSH differed significantly from those in individual component testing, both due to the environment and the scale of the setup. Fortunately, we began testing early, in Week 4 of Semester 2, and conducted four sessions at the RC4 MPSH, giving us ample time to identify and resolve issues. As a result, our drone swarm's performance improved markedly from Run 1 to Run 3, as seen in Table 9.1, with increased reliability and robustness in our search strategies. Moreover, we later discovered that the conditions at the actual competition, as described in the next section, closely mirrored those at the RC4 MPSH. This meant that our full-system tests at the RC4 MPSH provided a good approximation of the actual arena, allowing us to accurately validate our drone swarm's performance in testing.
9.2 Competition Day
On 17 March 2025, our team participated in SAFMC 2025, held at the Singapore Expo. This was the "final" full-scale test, with our full system being run all together.
We were given two runs to test our drone swarm system, with the better of the two taken as the final score. A total of five Victim Markers (out of a possible eight given in the rules) were used on the actual day, along with three Danger Zones (out of a possible four). The locations of the Victim Markers and Danger Zones were not made known to us in advance; they were placed in the Playing Field only after we had decided on the placement of our drones. We knew only that the two Bonus Victims would be placed in the Pillar and Unknown Search Areas (one each), while the remaining Regular Victims could be placed anywhere in the Playing Field.
In the actual Playing Field, the performance of our drone swarm was similar to that of our final run at RC4 (conducted one week prior to the event), but with greater stability, as the carpeted floor allowed the drones to fly much more steadily. The results of the two runs are as follows:
Run 2 was taken as the final score, as it yielded the better result of the two runs. In the Known Area, all three victims were successfully rescued in Run 2, demonstrating that the search strategy in this area is both robust and reliable. The strategy in the Pillar Area also performed well, successfully rescuing its one bonus victim in both runs.
During the actual competition, three regular victims were placed within the Known Search Area for rescue. It should be noted that the placement of Victim and Danger Markers varied between Run 1 and Run 2. The take-off sequence and navigation to the dispersion points worked as intended. However, further drone behaviour could not be directly observed, as the competition Playing Field was off-limits during the mission runs. In the first run, two out of three regular victims were successfully rescued. However, before the run began, the wall-guard Navigation Aid, which was designed to increase the thickness of the wall edges, was mistakenly left out and not placed around the L-shaped wall in the Known Search Area. This issue was rectified before the second run, during which all three regular victims were rescued. However, since the victim placements differed between runs, we could not conclude whether the victim missed in Run 1 had been located near the L-shaped wall.
In the Unknown Area, one bonus victim was placed, with its position changing between the two runs. Ultimately, we were unable to rescue it in either run. In the first run, only one of the six MSBA drones successfully entered the Unknown Search Area through the top opening. During setup, one drone experienced issues streaming images back to the GCS and was retired before the run. At take-off, another drone failed to lift off. Of the four remaining drones that took off, two drifted significantly, leading to a mid-air collision. Of the two drones that remained airborne, one successfully entered the Unknown Area through the top opening, while the other deviated from the navigation tag at the right opening and crossed into the Pillar and Known Areas.
For the second run, to mitigate drifting during take-off, the spare AprilTags brought to the competition were used as launch platforms, with the drones placed on top of the tags. While this method had been used by the previous team, our own testing showed it had negligible impact on take-off performance. The second run deployed only four of the six drones: the drone that failed to stream images in the first run was retired again, and the drone that failed to take off was also removed. This adjustment created more vertical spacing between the drones, reducing the risk of mid-air collisions. With this configuration, two drones successfully entered the Unknown Search Area, while the remaining two flew past the Navigation Aids and veered into the Known and Pillar Areas.
Despite these challenges, the swarm performed well overall, demonstrating strong potential for decentralised coordination and victim rescue in a dynamic indoor environment.
On the 5th of April 2025, we attended the prize-presentation ceremony at the Singapore University of Technology and Design (SUTD). We are pleased to announce that we came in 3rd Runner Up in SAFMC 2025 Category E!
10. Conclusion and Future Work
Using Bitcraze’s Crazyflie as the drone platform, our team developed a 20-drone swarm for an indoor search and rescue mission in Category E of SAFMC 2025. To detect objects such as Victim Markers, Danger Zones, and Navigation Aids, the AI deck was integrated to provide the drones with visual capabilities. To optimize the search process, the 20 m by 20 m search arena was divided into three distinct regions: the Known Search Area, Unknown Search Area, and Pillar Area, enabling the development of specialized strategies tailored to each zone.
While the overall drone swarm was effective in its search operation in the competition arena, several challenges were encountered. Addressing these limitations presents opportunities for future improvements, enhancing the robustness and performance of the system in terms of time, reliability and coverage.
10.1 Expanding Drone Capabilities
The key limitation our team faces is that the current drone lacks sufficient space for additional expansion decks, which limits our opportunities for further exploration. Because the decks we currently use fulfil critical functional requirements for autonomous flight and navigation, none of them can be removed. An alternative approach is therefore to use the BigQuad deck (Figure 10.1) (Bitcraze, n.d.) to build a larger platform that can accommodate more expansion decks. This would provide greater flexibility for integrating additional sensors and modules, enhancing the drone's overall functionality. With the increased capacity on the Crazyflie, we could explore the prototyping deck (Figure 10.2) (Bitcraze, n.d.), which allows for the integration of custom hardware, including the ability to solder on a Bluetooth chip, among other capabilities (see Section 10.2).
10.2. Decentralized Distributed SLAM
Based on our experience and observations of other teams during the competition, decentralised solutions significantly enhance the reliability of a drone swarm, and a mapping-based solution for the Unknown Search Area would greatly improve search coverage. The main limitation that prevented us from adopting such a strategy was hardware constraints. The ETH-PBL lab overcame the poor performance of SLAM on nano-drones by developing a custom ToF sensor deck and implementing state-of-the-art SLAM features, such as loop closure detection.
By adopting ETH's custom hardware (Figure 10.3 and Figure 10.4) and SLAM solution (ETH-PBL, n.d.), the team would gain access to a wider range of algorithms for optimising search strategies. For example, multi-agent frontier search could be implemented to improve exploration of the Unknown Search Area. Additionally, an accurate map enables adaptive search strategies, allowing drones to assess real-time coverage, adjust their paths dynamically, and coordinate more effectively to avoid redundant exploration.
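As a sketch of what multi-agent frontier search could look like, the snippet below detects frontier cells (free cells bordering unexplored space) on a shared occupancy grid and greedily assigns each drone its nearest one. The grid encoding and the greedy nearest-frontier assignment are our own simplifications for illustration, not the ETH-PBL implementation:

```python
# Occupancy-grid cell states (an assumed encoding for this sketch).
FREE, OCCUPIED, UNKNOWN = 0, 1, 2

def find_frontiers(grid):
    """Return (row, col) cells that are free and border at least one
    unknown cell; these are the candidate exploration targets."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols
                   and grid[nr][nc] == UNKNOWN
                   for nr, nc in neighbours):
                frontiers.append((r, c))
    return frontiers

def assign_frontiers(drones, frontiers):
    """Greedily assign each drone the nearest unclaimed frontier
    (Manhattan distance), so drones avoid redundant exploration."""
    remaining = list(frontiers)
    assignment = {}
    for drone_id, (dr, dc) in drones.items():
        if not remaining:
            break
        best = min(remaining,
                   key=lambda f: abs(f[0] - dr) + abs(f[1] - dc))
        assignment[drone_id] = best
        remaining.remove(best)
    return assignment
```

In a full system, each drone would navigate to its assigned frontier, update the shared map with its ToF readings, and the assignment step would repeat until no unknown cells remain.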
Furthermore, with a mapping-based solution, drones would be able to navigate to the Unknown Search Area more reliably, without relying on a series of pre-planned motions. Instead of depending on fixed movement sequences that must be executed precisely and on time, drones could make real-time adjustments based on their perceived environment, reducing the risk of errors due to minor deviations or unexpected obstacles. This would also make the system more robust against disturbances such as drift, communication delays, or intermittent sensor data, ultimately improving overall mission success.
However, implementing this solution requires sufficient space on the drone to accommodate the additional deck.
Appendix A: Decision Matrix for Selection of Drone Development Platform
The selection of the drone development platform was based on six equally weighted key factors: size and weight, hardware modularity, software modularity, sensor capability, cost, and parts availability. For each factor, each platform was given a score out of 5, with a higher score being better.
- Size and Weight
A smaller and lighter drone enhances manoeuvrability, especially in confined spaces, and improves swarm scalability. Among the evaluated platforms, the Bitcraze Crazyflie is the smallest and lightest, followed by the DJI Tello, which has a slightly larger size and higher weight.
- Hardware Modularity
A modular hardware design allows for easy upgrades and component replacements. Higher scores indicate greater flexibility in adding or replacing components. Crazyflie and DEXI are highly modular, as both suppliers provide spare parts and easily swappable components. DEXI scores higher since certain parts, like its onboard computer (Raspberry Pi), are off-the-shelf, whereas Bitcraze relies on a custom PCB. DJI Tello, in contrast, only offers spare parts for specific components like propellers and propeller guards.
- Software Modularity
Open-source software and compatibility with frameworks like ROS are crucial for software flexibility, developer support, and ease of integration. Crazyflie and DEXI both score highly as both are open source. However, DEXI ranks higher, as its onboard computer (Raspberry Pi) provides greater processing power and flexibility compared to Crazyflie's STM32 microcontroller. DJI Tello has a low score of 2 as it runs on proprietary software.
- Sensor Capability
The drone must support onboard sensors for localization, obstacle detection, and avoidance. A higher score is given to platforms with more built-in or easily integrable sensors. Crazyflie and DEXI score highly as both offer a similar sensor suite, including optical flow sensors, multi-zone ToF distance sensors, and cameras. In contrast, DJI Tello only has a camera module, limiting it to vision-based solutions.
- Cost
The cost of each platform should be reasonable, as the drones will be scaled up to form a swarm. In terms of price per unit, the DJI Tello is the most affordable at USD 109, while the Crazyflie costs USD 640 including the additional sensor decks. The DEXI Level III is the most expensive at USD 2,000.
- Parts Availability
Readily available drones and spare parts ensure ease of maintenance. DJI and Bitcraze both have authorised retailers in Singapore, making their spare parts highly accessible. However, DEXI operates only in the United States, requiring international shipping for spare parts, which increases lead time and cost.
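Since the six factors are equally weighted, the final ranking reduces to a simple sum of factor scores. The sketch below illustrates the computation; apart from DJI Tello's software score of 2, the individual scores are hypothetical values chosen only to reflect the qualitative rankings described above:

```python
# Six equally weighted factors, each scored out of 5 (higher is better).
FACTORS = ["size_weight", "hw_modularity", "sw_modularity",
           "sensors", "cost", "availability"]

# Hypothetical scores for illustration; only DJI Tello's software score
# of 2 is stated explicitly in the evaluation above.
SCORES = {
    "Crazyflie": {"size_weight": 5, "hw_modularity": 4, "sw_modularity": 4,
                  "sensors": 4, "cost": 3, "availability": 4},
    "DEXI":      {"size_weight": 2, "hw_modularity": 5, "sw_modularity": 5,
                  "sensors": 4, "cost": 1, "availability": 2},
    "DJI Tello": {"size_weight": 4, "hw_modularity": 2, "sw_modularity": 2,
                  "sensors": 2, "cost": 5, "availability": 4},
}

def total_score(platform: str) -> int:
    """Equal weighting means the total is a plain sum over all factors."""
    return sum(SCORES[platform][f] for f in FACTORS)

# Rank platforms from highest to lowest total score.
ranking = sorted(SCORES, key=total_score, reverse=True)
```

With these illustrative numbers, the Crazyflie comes out on top, consistent with its selection as our platform.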
References
Nashed, S. (2020). A Brief Survey of Loop Closure Detection: A Case for Rethinking Evaluation of Intelligent Systems. https://ml-retrospectives.github.io/neurips2020/camera_ready/21.pdf
Aideck-gap8-examples/examples/other/wifi-img-streamer at master · bitcraze/aideck-gap8-examples. (n.d.). GitHub. Retrieved 29 March 2025, from https://github.com/bitcraze/aideck-gap8-examples/tree/master/examples/other/wifi-img-streamer
AprilRobotics/apriltag. (2025). [C]. AprilRobotics. https://github.com/AprilRobotics/apriltag
BigQuad deck | Bitcraze. (n.d.). Retrieved 2 April 2025, from https://www.bitcraze.io/products/bigquad-deck/
Bitcraze. (2024, October). Crazyflie firmware documentation. Www.bitcraze.io. https://www.bitcraze.io/documentation/repository/crazyflie-firmware/2024.10/
Comparison of fiducial markers. (2022, April 26). Robotics Knowledgebase. https://roboticsknowledgebase.com/wiki/sensing/fiducial-markers/
Freertos+posix—FreertosTM. (n.d.). Retrieved 29 March 2025, from https://freertos.org/Documentation/03-Libraries/05-FreeRTOS-labs/03-FreeRTOS-plus-POSIX/00-FreeRTOS-Plus-POSIX
Friess, C., Niculescu, V., Polonelli, T., Magno, M., & Benini, L. (2024). Fully Onboard SLAM for Distributed Mapping With a Swarm of Nano-Drones. IEEE Internet of Things Journal, 1–1. https://doi.org/10.1109/jiot.2024.3367451
Github—Cde-4301-asi-401/missionplanner. (n.d.). GitHub. Retrieved 29 March 2025, from https://github.com/CDE-4301-ASI-401/MissionPlanner
GitHub—Williamleong/aideck-gap8-examples at dev/william. (n.d.). GitHub. Retrieved 29 March 2025, from https://github.com/williamleong/aideck-gap8-examples
Latif, Y., Huang, G., Leonard, J., & Neira, J. (2017). Sparse Optimization for Robust and Efficient Loop Closing. ArXiv.org. https://arxiv.org/abs/1701.08921
Loco explorer bundle—Crazyflie 2.1+. (n.d.). Bitcraze Store. Retrieved 29 March 2025, from https://store.bitcraze.io/products/loco-explorer-bundle
Maximum range for loco positioning | Bitcraze. (n.d.). Retrieved 29 March 2025, from https://www.bitcraze.io/documentation/system/positioning/max-range-loco/
MissionPlanner/mission_planner/kill_all.py at main · CDE-4301-ASI-401/MissionPlanner. (n.d.-a). GitHub. Retrieved 29 March 2025, from https://github.com/CDE-4301-ASI-401/MissionPlanner/blob/main/mission_planner/kill_all.py
MissionPlanner/mission_planner/mission_script.py at main · CDE-4301-ASI-401/MissionPlanner. (n.d.). GitHub. Retrieved 29 March 2025, from https://github.com/CDE-4301-ASI-401/MissionPlanner/blob/main/mission_planner/mission_script.py
Prototyping deck | Bitcraze. (n.d.). Retrieved 2 April 2025, from https://www.bitcraze.io/products/prototyping-deck/
Rauch, C. (2025). Christianrauch/apriltag_msgs [CMake]. https://github.com/christianrauch/apriltag_msgs
Rauch, C. (2025). Christianrauch/apriltag_ros [C++]. https://github.com/christianrauch/apriltag_ros
Release 2025.02 · bitcraze/aideck-esp-firmware. (n.d.). GitHub. Retrieved 29 March 2025, from https://github.com/bitcraze/aideck-esp-firmware/releases/tag/2025.02
Tl-nus-cfs/ai_deck_wrapper. (2023). [Python]. TL@NUS Centre For Flight Sciences. https://github.com/TL-NUS-CFS/ai_deck_wrapper