One important issue in the study of unmanned airships is obstacle detection, since it allows the airship to avoid collisions. To build an obstacle detection network, we insert convolutional layers at the beginning and end of a YOLOv3-tiny network. The resulting network achieves higher detection accuracy than YOLOv3-tiny while still meeting real-time processing-speed requirements.
An innovative payload that can hop across the surface of the Moon is under development. Known as LunaRoo, the solar-powered robot can reach a height of 20 meters per hop and captures imagery while airborne, giving it unique observation capabilities. The probe was proposed as a payload for the Google Lunar XPrize competition, with a scientific mission of returning images and data to operators.
YOLOv3 is a neural network that predicts bounding boxes at three different scales and detects targets by direct regression. It can detect targets with high accuracy and speed, even those far away, which makes it a promising candidate for unmanned airship obstacle detection.
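The regression step can be illustrated with a minimal plain-Python sketch of how a YOLOv3-style detector decodes one raw prediction into a bounding box. The anchor size (116 × 90) and stride are standard YOLOv3 conventions used here purely for illustration, not values taken from this paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_box(tx, ty, tw, th, cx, cy, anchor_w, anchor_h, stride):
    """Decode raw network outputs (tx, ty, tw, th) for the grid cell
    at column cx, row cy into a box center/size in input-image pixels."""
    bx = (cx + sigmoid(tx)) * stride  # center x in pixels
    by = (cy + sigmoid(ty)) * stride  # center y in pixels
    bw = anchor_w * math.exp(tw)      # width scaled from the anchor prior
    bh = anchor_h * math.exp(th)      # height scaled from the anchor prior
    return bx, by, bw, bh

# Illustrative values: cell (6, 6) of the 13x13 map (stride 32) on a 416x416 input.
box = decode_box(0.0, 0.0, 0.0, 0.0, 6, 6, anchor_w=116, anchor_h=90, stride=32)
print(box)  # (208.0, 208.0, 116.0, 90.0) -- a centered box the size of its anchor
```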
The embedded YOLO model was tested for target detection. Compared with other lightweight models, it achieved higher detection accuracy while maintaining a guaranteed processing speed of 30 frames per second. Relative to the baseline YOLOv3-tiny network, detection accuracy improved by 6%.
The proposed YOLOv3-dense network is evaluated against the GMMDet and Faster R-CNN image processing models. In real-world conditions, those two networks do not account for the environment and can therefore be ineffective for obstacle detection; in particular, they suffer from false detections caused by environmental influences.
The proposed system is a lightweight variant of YOLOv3 that adopts the three-module tandem frame design of YOLOv4 to detect targets. It is designed for high accuracy with real-time capability, is compatible with embedded computing devices, and can be deployed easily. This paper also explores the feasibility of GMMDet for unmanned airship obstacle detection.
The novel YOLOv3-tiny network for unmanned aerial vehicles (UAVs) is a promising candidate for real-world obstacle detection, as it can handle challenging airspace environments and other unforeseen threats. The researchers aim to develop the system further and deploy it on real unmanned airships, with the further goal of improving its detection performance.
The proposed network is robust to weather changes and improves upon existing approaches, making it capable of real-time obstacle detection and avoidance under varying conditions. Before it can be deployed on an actual vehicle, however, the network must be trained on a large dataset.
As noted above, obstacle detection is central to unmanned airship operation. To achieve high-accuracy detection, convolutional layers are inserted at the beginning and end of the YOLOv3-tiny network; the modified network retains a high processing speed.
The training process of this model is driven by a loss function over its inputs. The loss consists of three terms: the predicted coordinates, the IoU-based confidence, and the classification. The coordinate term measures the error between the predicted box coordinates and the ground truth, while the confidence term indicates the presence or absence of an obstacle. The YOLOv3 backbone network is split into three branches, each producing features at a different scale. The network is trained to detect obstacle shapes such as spheres, rectangles, and circles.
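The three loss terms above can be sketched for a single responsible predictor in plain Python. The λ weighting, the toy box coordinates, and the class probabilities are illustrative assumptions, not the paper's actual values:

```python
import math

def bce(p, y):
    """Binary cross-entropy of one predicted probability p against label y."""
    eps = 1e-7
    p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

def yolo_loss_single(pred, truth, obj_p, cls_p, cls_y, lambda_coord=5.0):
    """Toy YOLO-style loss for one predictor.

    pred, truth: (x, y, w, h) tuples in normalized coordinates
    obj_p:  predicted objectness probability (an obstacle is present, label 1)
    cls_p:  list of predicted class probabilities
    cls_y:  list of one-hot class labels
    """
    coord = sum((p - t) ** 2 for p, t in zip(pred, truth))  # coordinates vs. ground truth
    conf = bce(obj_p, 1.0)                                  # objectness (IoU confidence) term
    cls = sum(bce(p, y) for p, y in zip(cls_p, cls_y))      # classification term
    return lambda_coord * coord + conf + cls

loss = yolo_loss_single((0.5, 0.5, 0.2, 0.2), (0.5, 0.5, 0.2, 0.2),
                        obj_p=0.9, cls_p=[0.8, 0.1, 0.1], cls_y=[1, 0, 0])
```

With a perfect coordinate prediction, only the confidence and classification terms contribute; a confident, correct prediction drives the loss toward zero.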
YOLOv3 uses a standard input size of (416, 416, 3). The input image is transformed into three feature maps at different scales. The feature fusion module then combines the semantic information across scales and outputs a new set of three feature maps, from which a detector returns the bounding-box coordinates and classes of objects.
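The scale arithmetic can be verified in a few lines; the strides (32, 16, 8) and three anchors per cell are standard YOLOv3 conventions assumed here, not details stated in this paper:

```python
def grid_sizes(input_size=416, strides=(32, 16, 8)):
    """Spatial size of each YOLOv3 output feature map for a square input."""
    return [input_size // s for s in strides]

print(grid_sizes())  # [13, 26, 52]

# With 3 anchor boxes per grid cell, the three maps yield the candidate boxes:
boxes = sum(3 * g * g for g in grid_sizes())
print(boxes)  # 10647
```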
In a recent study, a team from the University of Michigan used the YOLOv3-tiny-IRB network to detect a variety of tomato diseases and pests, even in scenes with debris and overlapping leaves. These results are presented in Table 11.
The YOLO algorithm family has been developed over many years, improving steadily in accuracy and speed. However, the full-size models carry significant computational complexity and volume, so lightweight target detection algorithms have been developed. While maintaining good detection performance, these lightweight algorithms are suitable for resource-constrained devices; for example, the YOLOv4-tiny network has achieved an mAP of 86% for mask-wearing-specification detection.
The YOLOv4-tiny model uses three CSPBlock modules in its backbone; these modules reduce the network's computational complexity by reducing the number of parameters. Although its real-time detection performance is still limited, such lightweight systems outperform the YOLOv3-tiny network, and the team plans to continue research into their optimization and development.
In this paper, we propose an efficient and portable mobile sensor network based on the YOLOv3 architecture for obstacle detection. The architecture adopts a decoupled detection head that reduces channel dimensions while speeding up convergence: the head has two parallel branches, each built from 3×3 convolutional layers. The network can be deployed anywhere, from a fixed ground station to an airship.
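A shape-level sketch can show why a decoupled head with reduced channel dimensions stays cheap. The hidden width of 128 channels, the 80-class setting, and the exact layer layout below are illustrative assumptions, not the paper's configuration:

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a k x k convolution (weights + biases)."""
    return c_in * c_out * k * k + c_out

def coupled_head(c_in, num_classes, anchors=3):
    # One 3x3 conv predicting boxes, objectness and classes together.
    return conv_params(c_in, anchors * (5 + num_classes), 3)

def decoupled_head(c_in, num_classes, hidden=128, anchors=3):
    # A 1x1 conv first reduces the channel dimension, then two parallel
    # 3x3 branches: one for classification, one for box regression + objectness.
    stem = conv_params(c_in, hidden, 1)
    cls_branch = conv_params(hidden, hidden, 3) + conv_params(hidden, anchors * num_classes, 1)
    reg_branch = conv_params(hidden, hidden, 3) + conv_params(hidden, anchors * 5, 1)
    return stem + cls_branch + reg_branch

print(coupled_head(1024, 80))    # heavy: conv straight from 1024 channels
print(decoupled_head(1024, 80))  # far fewer parameters after channel reduction
```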
A network of mobile sensors helps the unmanned airship detect obstacles in real time. On each frame, the on-board device feeds calibrated sensor imagery to the CSPDarknet backbone, which extracts features from the environment and generates detection bounding boxes. The Deep SORT structure then determines the location of each obstacle, its center-point position, and its track ID. The whole process repeats when the next frame is acquired.
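Deep SORT itself combines a Kalman filter with appearance features; as a hedged simplification, the frame-to-frame ID assignment step can be sketched with a greedy IoU association (the threshold of 0.3 is an illustrative choice):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, iou_min=0.3):
    """Greedily match each existing track to its highest-IoU detection.

    tracks: dict of track_id -> last known box; detections: list of boxes.
    Returns dict of track_id -> matched detection index (unmatched tracks omitted).
    """
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, iou_min
        for j, dbox in enumerate(detections):
            if j in used:
                continue
            score = iou(tbox, dbox)
            if score > best_iou:
                best, best_iou = j, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

# Track 1 drifted slightly between frames but still overlaps its detection.
print(associate({1: (0, 0, 10, 10)}, [(1, 1, 11, 11)]))  # {1: 0}
```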
To develop the mobile sensor network, the researchers pre-trained the YOLOv3 model to detect small unmanned airships. The model estimates confidence scores over a 7 × 7 grid of cells, which enables the unmanned airship to avoid obstacles without collision, and it uses independent logistic classifiers to detect the presence of objects in complex environments.
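A minimal sketch of per-cell logistic confidence over a 7 × 7 grid; the threshold and the toy logit values are illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cells_above_threshold(logits, threshold=0.5):
    """Return the (row, col) grid cells whose logistic confidence
    sigmoid(logit) exceeds the threshold."""
    hits = []
    for r, row in enumerate(logits):
        for c, z in enumerate(row):
            if sigmoid(z) > threshold:
                hits.append((r, c))
    return hits

# Toy 7x7 grid: all cells empty (logit -4 -> ~0.02) except a strong response at (3, 4).
grid = [[-4.0] * 7 for _ in range(7)]
grid[3][4] = 2.0  # sigmoid(2.0) ~ 0.88
print(cells_above_threshold(grid))  # [(3, 4)]
```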
The YOLOv3 algorithm is a continuation of YOLOv1 and YOLOv2. The architecture is available in two variants, the full YOLOv3 and the lightweight YOLOv3-tiny. Compared with its predecessors, YOLOv3 offers improved accuracy, frame rate, and learning ability, and benchmark comparisons show it is far superior.
The YOLOv3-dense model predicts bounding boxes at three different scales, improving detection accuracy for small targets. It also improves the average detection time, which matters for practical real-time applications. And, as noted above, the YOLOv3 network structure is flexible enough to accommodate new coupling structures and segmentation methods.
The YOLOv3-mobile network architecture uses three anchor boxes per scale and 106 convolutional layers to detect small objects, employing a minimal number of filters to keep the false-alarm rate low. In contrast to existing YOLOv3 variants, the proposed architecture compares favorably with state-of-the-art methods.
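Anchor priors in YOLOv3-family networks are typically obtained by clustering the (width, height) pairs of training-set boxes. The sketch below uses plain k-means with squared Euclidean distance for brevity (real implementations use an IoU-based distance), and the box sizes are made-up toy data:

```python
def kmeans_anchors(boxes, k=3, iters=20):
    """Cluster (w, h) box sizes into k anchor priors with plain k-means."""
    step = max(1, len(boxes) // k)
    centers = boxes[::step][:k]  # deterministic, spread-out initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for w, h in boxes:
            # Assign each box to its nearest center.
            i = min(range(k),
                    key=lambda c: (w - centers[c][0]) ** 2 + (h - centers[c][1]) ** 2)
            groups[i].append((w, h))
        # Move each center to the mean of its group (keep old center if empty).
        centers = [(sum(w for w, _ in g) / len(g), sum(h for _, h in g) / len(g))
                   if g else centers[i] for i, g in enumerate(groups)]
    return sorted(centers)

# Toy box sizes forming three obvious clusters: small, medium, large.
boxes = [(10, 12), (12, 10), (50, 48), (48, 52), (110, 90), (90, 110)]
anchors = kmeans_anchors(boxes, k=3)
print(anchors)  # [(11.0, 11.0), (49.0, 50.0), (100.0, 100.0)]
```

The three resulting priors would serve the three detection scales, smallest anchors on the finest feature map.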