Aerial Search and Rescue Development Platform
How It Started
Hi! In 2023, I wanted to make an autonomous Search and Rescue drone. All of the systems on the market at the time cost thousands of dollars, and I wanted to give people a way to get into the Search and Rescue field without having to spend thousands to tens of thousands of dollars to do so. I have flown drones since the original DJI Phantom 1 came out in 2012, and I have always loved flying them. I started by watching my Dad fly his drone when he first got one; I thought it was the coolest thing in the world. Over time, he would sometimes let me fly his drones, probably to get me to stop asking so much. As the years went on, I continued to fly drones with my Dad for fun, for photography and video jobs, and just to try out new tech. I eventually got to the point where I was flying on my own and taking my own jobs, normally just getting a few photos and videos for my school and some businesses. As time went on, I branched out into professional photography, videography, cinematography, and location mapping/analysis. Sometimes I would also help my Dad with the Search and Rescue jobs he got.
Fast forward a few more years, and I have a small arsenal of drones ranging from a few inches in diameter to a few feet; I pretty much have a drone for anything I could ever need. Since the technology was always advancing, especially in the Search and Rescue and police fields, I wanted something similar to the cool thermal and tracking tech they had. My Dad had a thermal camera he used to fly on one of his older drones, and I had just been given access to a little bit of money from my high school for a new Senior Project test class they were running. What I decided to do was attempt to make an autonomous Search and Rescue drone that could find, alert, and assist a person in need. That’s where the idea started, anyway.
First Attempt (V1, Senior Year of High School)
Software Testing
First, I needed a drone to get the job done. The drones I had at the time were either too small, or too large but designed only for specific payloads. I wanted something modular, with the ability to add your own components and integrate them into the drone’s flight controller. I went looking online and came across a drone from DJI called the Matrice 100. It was a mid-to-large-size drone with the ability to run multiple battery packs and carry your own payloads, a great base for my needs. Next, I needed the hardware for my payload. My school gave me a rough budget of about $100 to spend on components. To stretch my budget a little, I started with a Raspberry Pi 3B that my school already had. At the time I was unsure of how much GPU power object recognition needed to run at a decent speed, so I assumed it was enough. I was greatly mistaken. I found a few videos online on how to do object recognition on a Raspberry Pi with OpenCV that were somewhat easy to understand. What held me back was my lack of knowledge of Linux-based operating systems. A lot of time was spent fixing random errors by looking up the problems online and trying to understand what they were. It took a bit of effort to get it all working, but eventually, I had some luck.
Once I had the software working and an old Logitech camera plugged into the Pi’s USB port, I finally had an image on the screen. While it certainly recognized me and a few objects I had around my room, it was very limited in its speed: I was hitting a whopping three to four frames per second. I was so happy it worked, but I knew this would not be realistic to use in the field. I went back to my advisor for the class and asked if I could buy a Raspberry Pi 4B to run a bit faster; from what I knew at the time, it was significantly more powerful in the GPU department. He said yes, I found a good eBay listing for one, and I had it within the week. After reloading everything onto the new Pi and trying again, I was hitting about fifteen FPS, which was more than usable. What I needed now was a way to use my thermal camera’s video stream instead of my USB camera. It turned out I needed a USB capture card to convert the thermal camera’s HDMI output into a USB input the Pi could understand. I found a cheap one at my local Micro Center and tried it when I got back home. It worked perfectly. While the pre-made detection model was not designed for thermal images, it worked quite well for detecting people. I then had a good idea of what needed to get done to finish this proof of concept.
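To give an idea of what that first pipeline looked like, here is a minimal sketch of that kind of detection loop. The tutorials I followed used pre-made models, so OpenCV’s built-in HOG person detector stands in here for whichever model you end up with, and the camera index is just whatever your USB camera (or capture card) shows up as:

```python
# Minimal person-detection loop in the spirit of the V1 setup.
# OpenCV's built-in HOG pedestrian detector stands in for the
# pre-made model; the capture card appears as a normal USB camera.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # 0 = first USB camera (or HDMI capture card)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Smaller frames are the main lever for FPS on a Pi
    frame = cv2.resize(frame, (640, 480))
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```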
Mounting To The Drone
My original idea was to have an entire payload system that I could mount and unmount in one action whenever needed. What I didn’t take into account was the drone’s ground clearance. With the dual battery setup it had, I couldn’t mount anything underneath the drone with enough clearance for the stock camera the drone came with. Because of this, I needed to look at mounting above. The nice part was that I bought this drone from a guy whose company’s drone division was shutting down, so it came with a lot of extra goodies. I was given an expansion bay that could mount above the drone and gave me proper mounting hardware to connect everything, which was amazing, as I could mount the Pi safely. After some more fiddling, I found that making one single payload system was not going to work well: with the added weight of the components, I was already nearing max payload capacity. I decided that integrating my design into the drone itself was the better option. While it was not what I was looking for, it was still fine by me. The final result was not what I had hoped to put together, but it still worked as a proof of concept. The camera was statically mounted to the front of the drone, facing down. The Pi was mounted above, running off a 9V-to-5V converter. The thermal camera ran off a battery bank strapped to the upper back of the drone. And wires were running everywhere. At the time, I only had about a month or two to finish, so making it look pretty was not at the top of my list. I found the drone’s hardware to be a bit more limiting than I had initially thought, but it was fine.
There was one last component I needed to mount. I wanted a way to see from the ground whether my drone had spotted a person. I originally wanted a spotlight that would shine on the person once the drone saw them. Again, since I was running out of time, I had to compromise a bit and just use a small flashlight on a servo that would aim downward. While it worked for the concept, it was in no way acceptable for a final version.
Software Troubles and Initial Testing
Going into this, I had a plan for the drone to be fully autonomous. I wanted my own waypoint system with mapping software that would integrate directly into the Raspberry Pi and the tracking software I was using. What I hadn’t accounted for was the state of DJI’s documentation for connecting a Linux-based computer to the flight controller using DJI’s Onboard SDK. It turns out that when you try to use a drone released in 2015 in 2023, a few things are bound to be different. Because DJI had continued developing its SDK over the years for its newer Matrice drones, the old drone I was using had been discontinued, and with the drone discontinued, most of its documentation was also lost online. I used what I could find to at least link a connection to the flight controller over serial, but only barely got it to work. I had to run Ubuntu on the Pi to get a connection, and only after weeks of failure after failure. All I was able to get the drone to do was launch and land on its own. I was very excited in the moment when it worked; weeks of trying had finally paid off. What wasn’t great was then realizing that under Ubuntu I couldn’t get the OpenCV object detection I was using on the Pi to work. I needed Raspbian OS for it to work properly with the other components I was using. So I eventually gave up on controlling the drone with the Raspberry Pi and resorted to having the autonomous flight handled by a dedicated land-mapping app that could fly the drone in specified patterns to map areas of land. For me, this was good enough to show that the concept would work.
With all of that out of the way, I spent the rest of the month I had left fine-tuning what I could to make the detection and the weight balance of the drone perform as well as possible given my limitations. In the end, I had a drone that could launch, map an area using its camera, and log every time it spotted a heat signature that looked human, in a file I could check once it was back on the ground. I did want a live video feed from the Pi on the ground, but the limitations of the flight controller and video transmission system meant I could have either the drone’s video or the Pi’s. While this attempt was not everything I was looking to accomplish, it was a great first attempt from someone who knew nothing about software and only knew hardware. I was happy with it and knew what to do if I were to do it again. So, to put it simply, I did it again.
During this last month, I listed the drone on eBay to see if I could sell it and buy another platform to continue the project with fewer limitations. I needed something that could lift more weight, fly longer, and give me the ability to transmit multiple video streams to the ground. It eventually sold, and I went looking for something bigger. I found a few listings for DJI S900s and S1000s, but those were very old platforms that were not as reliable or stable; I wanted something newer. That is when I came across a few forums online and made a post asking if anyone had a DJI Matrice 600 for sale. The M600 is a larger hexacopter running on six independent batteries, made fully out of carbon fiber and designed for both professional filming and the enterprise market. It was perfect. While it was still an older platform, released in 2016, it was just new enough for what I needed. I eventually got a response from someone selling one, got the funds together, and drove twelve hours to Massachusetts with my Mom to pick it up and do a test flight. At the time, my family had a Tesla Model S with Full Self Driving, which made the trip a lot easier than driving entirely on our own. Once I was there, I met the guy, flew it, and decided it looked OK. He was selling it for significantly less than market value because he thought something was broken that caused it to fly awkwardly. It turned out he had broken one of the arms and replaced it, but when replacing it, he forgot to make sure the motor tilt faced the right direction. I noticed this and realized it was an easy fix. I put it in our car, and back to Maryland I went.
Now that I had the drone I needed, plus a ridiculous number of spare accessories and parts he gave me with it, I was ready for round two.
Second Attempt (V2, Summer and Freshman Year of College)
Summer
I’ll be honest here: I used the summer to take a short break from this project. I wanted more time to plan and to get a better idea of what the outcome of this project should be. Over the summer I did a lot of flight testing with the drone to see how it performed in different situations. It handled wind perfectly fine due to its size, and its weight made it very stable in flight. The flight time I was getting, even with my larger camera unit, was about 30 to 35 minutes, which was way more than enough. The updated DJI Lightbridge 2 transmission system it had was also far more reliable, and I was able to get a longer transmission distance. I even shot some short films and got a lot of photography done with this drone, which is one of its selling points. Probably the coolest part of this drone is the sound it makes in the sky above, thanks to its hexacopter design and 21-inch props: it sounds like a B-17 bomber. Overall, I was very happy with the purchase and was ready to move on to round two of the project.
Freshman Year of College at RIT
For my freshman year at RIT, I joined an organization called Computer Science House (CSH), one of the special interest houses in the dorms. It’s probably one of the coolest places on campus. If you have time, look into it! One of the requirements for members each year is a “Major Project”: essentially, a project you are interested in that meets two of three requirements: it benefits House, you spend a decent amount of time on it, or you learn something new. I decided I might as well continue this project, as I had enjoyed it before and didn’t want to leave it where it was.
Going into this, I had a few new ideas and changes from the original. I decided to hold off on controlling the drone’s flight through an added onboard computer and focus on the payload. I wanted to create my own payload system that could be attached to any drone that could support its weight and size (a lot of drones fit this). It would be a self-contained unit attached below the drone. The only connections needed would be power, video output, and some extra wiring to the flight controller if you wanted to control the camera gimbal’s pitch. I ended up working on the contained unit later; first, I wanted to write my own object recognition software.
The Object Recognition Software
For this new version, I wanted to make everything myself. I didn’t want to use pre-made object detection models or follow an online tutorial and use someone else’s code. This was to be all me this time around. Did this increase the time it took to do pretty much everything? Yes… yes, it did. But I wanted to understand everything I was doing instead of just running with what other people had made.
First up was training my object recognition model. Since I wanted to keep costs low and let anyone easily follow along, I used a Raspberry Pi 5 for the entire project. What made this awesome was that someone following my documentation could train their own model on the Pi and later use the same Pi to do the detection. A separate computer would not be needed (unless the Raspberry Pi they ordered didn’t come with an SD card preloaded with Raspbian, in which case they’d need one just to flash the card). I went out and took some images with my thermal camera to help build the model, supplemented with thermal images from online for the things I could not capture myself. Sadly, you cannot just feed images of people into some software and poof… have a fully ready model. I used a labeling tool called labelImg, which is both easy to use and easy to run on the Pi given a few minor tweaks. There is software out there that will auto-label images for you, but I decided to label them myself by drawing boxes over the parts of each image I wanted the detector to recognize. I also wanted people following along to do the same, to better teach them how this all works. A goal of this project was to teach me all about this subject, so why not get hands-on with it?
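labelImg saves its boxes as one Pascal VOC XML file per image. As a quick sanity check before training, you can read the boxes back with a few lines of Python; the filename here is just a placeholder for one of your labeled images:

```python
# Sanity-check labelImg output: print every box in a Pascal VOC XML file.
# labelImg writes one such file per labeled image.
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_path):
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")  # class label, e.g. "person"
        bb = obj.find("bndbox")
        box = tuple(int(float(bb.findtext(tag)))
                    for tag in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, box))
    return boxes

# Hypothetical file from a labeling session
for name, box in read_voc_boxes("thermal_0001.xml"):
    print(name, box)
```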
Once each image was labeled, I had to “train” the model to turn it all into something the recognition software could use. This process takes the images you labeled and uses machine learning to teach itself what each object you outlined is, by comparing the images to each other to find patterns. It would have taken forever on a Raspberry Pi, so I offloaded it to Google Cloud servers to use their GPUs. Training can take about an hour or longer depending on the number of images used; mine took the lesser side of that. Once I had a trained model, I needed something to run it. This is where I would choose my recognition software.
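To make the training step concrete, here is a rough sketch of one way to run it, using TensorFlow Lite Model Maker, which consumes labelImg’s Pascal VOC output directly. The paths, label map, and settings are placeholders rather than my exact setup:

```python
# Sketch of a training run (e.g. on a Google Cloud GPU VM), assuming
# the TensorFlow Lite Model Maker pipeline with Pascal VOC labels.
# Requires: pip install tflite-model-maker
from tflite_model_maker import model_spec, object_detector

# Point at the images and the labelImg XML files (placeholder paths)
train_data = object_detector.DataLoader.from_pascal_voc(
    images_dir="images/train",
    annotations_dir="annotations/train",
    label_map={1: "person", 2: "car"},
)

# EfficientDet-Lite0 is the smallest spec, a reasonable fit for a Pi
spec = model_spec.get("efficientdet_lite0")
model = object_detector.create(
    train_data,
    model_spec=spec,
    epochs=50,
    batch_size=8,
    train_whole_model=True,
)

# Writes model.tflite for the on-Pi detector
model.export(export_dir="export")
```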
When it came to recognition software, there was a lot to pick from; popular options include OpenCV, YOLO, and TensorFlow. I decided to use TensorFlow, as “Lite” versions were available that would run faster on less powerful machines such as a Raspberry Pi. The nice part about the Pi 5 is that it is quite a powerful computer for its size. After writing the software to run my trained model through TensorFlow, I was good to start testing.
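The detection side then boils down to a loop like the sketch below, assuming a TensorFlow Lite model exported as above. The order of the output tensors varies between models, so treat those indices as an assumption to check against your own model’s metadata:

```python
# Minimal TFLite detection loop on the Pi, assuming an exported
# EfficientDet-Lite-style model. Output tensor order varies by model.
# Requires: pip install tflite-runtime opencv-python
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()
_, height, width, _ = inp["shape"]

cap = cv2.VideoCapture(0)  # thermal camera via the HDMI capture card
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    resized = cv2.resize(frame, (width, height))
    interpreter.set_tensor(inp["index"], np.expand_dims(resized, 0))
    interpreter.invoke()
    # Assumed order: boxes, classes, scores (verify for your model)
    boxes = interpreter.get_tensor(outs[0]["index"])[0]
    scores = interpreter.get_tensor(outs[2]["index"])[0]
    h, w = frame.shape[:2]
    for box, score in zip(boxes, scores):
        if score < 0.5:
            continue
        ymin, xmin, ymax, xmax = box  # normalized coordinates
        cv2.rectangle(frame, (int(xmin * w), int(ymin * h)),
                      (int(xmax * w), int(ymax * h)), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```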
Once I had worked out the bugs, it ran very well. I was hitting about 25 FPS, it was only detecting people and cars, and I was getting essentially no false positives. It turns out that when you make the model and software yourself, it is more optimized to work the way you want it to. I was very happy with the results and ready to move on to the hardware for now.
Getting The Hardware to Work on The Drone
Now that I had the object detection working the way I wanted for thermal signatures, I was ready to start mounting things to the drone for testing. The mounting came with a few constraints and wants. I needed to use DJI’s included underside mounting standard, the Ronin-MX mount: a slide-and-lock system designed to hold up to 5.5 kilograms, or about twelve pounds. I wanted the whole unit to attach to this mount and hang below the drone so mounting and detaching would be easy. Power was also something I needed to figure out, as the drone’s power output ports ran at too high a voltage for my components. Finally, I wanted to design a gimbal to balance the camera and hold it at a constant pitch. Given these wants and restrictions, I was off to the designing and development stage.
The first issue to solve was power; without it, none of this would work. I could have used a battery bank, but I had 180 Watts of drone power available, so why not use that? A battery bank also has to be kept charged, which is just one more thing to worry about or forget in the field. The drone’s output was 18 Volts at 3 Amps, and the Pi and thermal camera both needed less voltage. Luckily, buck converters are readily available online, so I picked up one with two outputs and all was good. Once it was connected to the drone, I could step the 18 Volts down to 5 Volts for the components. The Pi 5 officially wants a 5 Amp supply, but I was working with what I had at the time, and 3 Amps was good enough. I connected it all up and tested it, and the thermal camera and Pi 5 ran just fine with the detection software going. The power problem was solved.
What I wanted next was a gimbal for the thermal camera. The exact unit I was using was a FLIR Vue Pro, a small box-like camera with mounting holes all around it. I decided that a one-axis gimbal design was enough for my purposes at the time. I mocked up and tested somewhere between nine and twelve different designs before landing on one that worked and fit exactly what I was looking for. I also wrote a Python script to drive the gimbal using an accelerometer, though that was only a small part of the work. With the gimbal unit designed, I had to figure out how to mount it and the rest of the components to the drone.
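The script itself boils down to “read the pitch, counter-steer the servo.” Here is a rough sketch of that loop, assuming an MPU6050 accelerometer over I2C and a hobby servo on GPIO 18; the sensor, address, and pin are stand-ins for whatever a given build uses:

```python
# Sketch of the one-axis gimbal loop: read pitch from an accelerometer
# and counter-steer the servo. Assumes an MPU6050 on I2C bus 1 and a
# hobby servo on GPIO 18; swap in your actual sensor and pin.
# Requires: pip install smbus2 gpiozero
import math
import time
from smbus2 import SMBus
from gpiozero import AngularServo

MPU_ADDR = 0x68
bus = SMBus(1)
bus.write_byte_data(MPU_ADDR, 0x6B, 0)  # wake the MPU6050 from sleep

servo = AngularServo(18, min_angle=-90, max_angle=90)

def read_word(reg):
    """Read a signed 16-bit value from two consecutive registers."""
    hi, lo = bus.read_i2c_block_data(MPU_ADDR, reg, 2)
    val = (hi << 8) | lo
    return val - 65536 if val > 32767 else val

while True:
    ax = read_word(0x3B)  # accel X
    ay = read_word(0x3D)  # accel Y
    az = read_word(0x3F)  # accel Z
    # Frame pitch estimated from gravity; the servo drives the camera
    # the opposite way to hold it level
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    servo.angle = max(-90, min(90, -pitch))
    time.sleep(0.02)  # ~50 Hz, a standard hobby-servo update rate
```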
With all the components working as expected, I needed to attach them to the drone’s mount. I originally wanted everything internal, with a large shell around it all, but since I was still fine-tuning things, I went with an exposed design instead so I had easy access to the components. I put the gimbal at the bottom and used a “tree-branch” structure to hold the buck converter and Raspberry Pi. This design was easy to print on a 3D printer, used as little material as possible, and was strong enough to handle the forces it would see on a drone without breaking. For the top of it, I took measurements of the female side of the Ronin-MX mount and made a male equivalent to fit into it.
The final version worked for what I needed it to do. It was not a final product by any means, but it was a lot better than the V1 design. I am still working on a better version, but for now, it is fine.
Last Little Details
I wanted to be able to control the gimbal pitch from the drone remote, see the Raspberry Pi’s detection program from the ground, and switch between the camera’s different thermal modes through the drone remote. These were all easy to do, but they did sacrifice a little convenience when adding and removing the payload unit.
To control the gimbal’s pitch, I routed some wires from the Raspberry Pi to the drone’s DJI A3 flight controller: a simple connection that carries data from the remote, through the drone, to the Pi. When I twist a dial on the remote, it sends a signal to the flight controller, which passes that data to the Raspberry Pi as a value. That value corresponds to a pitch in degrees, and the gimbal servo moves the camera to match it. Simple enough, I think.
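The dial-to-pitch mapping itself is just a linear rescale. A small sketch, assuming the dial arrives as a standard 1000–2000 microsecond RC channel value (the actual range and wiring depend on how the flight controller output is set up):

```python
# Map an RC dial value to a gimbal pitch in degrees. Assumes a standard
# 1000-2000 us RC channel; the exact range depends on the flight
# controller output you wire to the Pi.

def rc_to_pitch(rc_value, rc_min=1000, rc_max=2000,
                pitch_min=-90.0, pitch_max=0.0):
    """Linearly rescale an RC channel value to a pitch in degrees."""
    rc_value = max(rc_min, min(rc_max, rc_value))  # clamp bad readings
    frac = (rc_value - rc_min) / (rc_max - rc_min)
    return pitch_min + frac * (pitch_max - pitch_min)

# Dial centered -> camera at 45 degrees down
print(rc_to_pitch(1500))  # -45.0
```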
Next was switching the thermal image profiles, which was done pretty much the same way as the gimbal pitch. I connected the thermal camera directly to the flight controller and used the toggle switches on the remote controller for the different modes. Once again, a toggle on the remote corresponds to a value the flight controller sends to the camera, and that value matches up with a thermal profile. Again, easy enough.
This last bit took a little more fiddling, but I got it working. I wanted to see from the ground what the detection software was seeing. Luckily, the DJI Lightbridge 2 transmission unit can accept up to two inputs: one was used by the drone-mounted camera, and the other would be my payload. I routed the output from the Raspberry Pi to the input of the transmission unit, and it kind of worked. The LB2 didn’t like the signal it was receiving, and I had trouble seeing it from the ground. It took about an hour of changing screen resolutions, values, and transmission priority settings to get it to a usable state. For now, this was fine for what I was doing.
The End Product Up to This Point
What I had at the end of this project was a self-contained thermal object recognition payload unit that could be easily attached to a drone and connected directly to the drone’s systems for seamless use. While I put the autonomous part of the project on hold to focus on the payload unit itself, I was happy with what I ended up with. I learned a lot about machine learning and object recognition, and I kept the budget low to make this accessible to most people looking to get into Search and Rescue thermal applications. This also opened up new possibilities for further expansion. Integration with existing autonomous surveying apps could be achieved through their SDKs, letting the thermal payload unit communicate with that software for better searching capabilities. I have also been wanting to add a “Relief Kit” to the drone to help people in need: when the drone finds someone, it could drop a package with medical supplies, food, and a radio, putting them in contact with Search and Rescue teams to get them back safe and sound.
Overall, I am very happy with how far I have come with this. While it is nowhere near ready for actual deployment in the field, it is a great start; maybe a few more years of development will get me where I want to be. I also really want to add the other features I mentioned above. For trying to make this completely from scratch, and for less money than the current products on the market, I would say I have done a good job.
I’ll post more updates and new versions as they come. If you are interested in the project, check out my GitHub page; the repository is labeled M600 SandR.