The rapid advancement of Internet of Things (IoT) technologies has significantly influenced the development of multi-mode remote control systems, which are increasingly vital for enhancing automation and user interaction across a range of fields. This thesis focuses on the design and implementation of a multi-mode remote control car that integrates web-based control, gesture-based navigation, and autonomous obstacle avoidance. The primary objective is to develop a versatile system that operates seamlessly across these three control modes while maintaining high efficiency and adaptability. The approach combines hardware and software integration, using an ESP32-CAM module for real-time video streaming, an MPU6050 inertial sensor for gesture recognition, and HC-SR04 ultrasonic sensors for autonomous navigation. A series of practical experiments and performance evaluations was conducted to assess the system’s latency, accuracy, and responsiveness in diverse operating scenarios. The results indicate measurable improvements in control latency, gesture-recognition accuracy, and obstacle-avoidance efficiency, meeting the intended objectives. This research contributes to the field of embedded systems by demonstrating a scalable and cost-effective solution with potential applications in education, robotics, and IoT-based automation.
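To illustrate the obstacle-avoidance principle summarized above, the following is a minimal sketch of how an HC-SR04 echo pulse width is typically converted to a distance and used in a stop decision. The function names and the 20 cm threshold are illustrative assumptions, not values taken from the thesis.

```python
# Hedged sketch: HC-SR04 distance calculation and a simple stop decision.
# The sensor reports the round-trip time of an ultrasonic pulse; distance
# is half the round trip at the speed of sound (~343 m/s at room temp).

SPEED_OF_SOUND_CM_PER_US = 0.0343  # centimeters per microsecond

def echo_to_distance_cm(echo_us: float) -> float:
    """Convert round-trip echo pulse width (microseconds) to one-way cm."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2

def should_stop(echo_us: float, threshold_cm: float = 20.0) -> bool:
    """Signal a stop when an obstacle is closer than the threshold.

    The 20 cm default is an assumed example value.
    """
    return echo_to_distance_cm(echo_us) < threshold_cm
```

On the actual microcontroller this computation would run on the echo pin's pulse duration each control cycle; the same arithmetic applies regardless of platform.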