RT info:eu-repo/semantics/article
T1 A Self-Adaptive Automatic Incident Detection System for Road Surveillance Based on Deep Learning
A1 Bartolomé-Hornillos, César
A1 San-José-Revuelta, Luis M.
A1 Aguiar Pérez, Javier Manuel
A1 García-Serrada, Carlos
A1 Vara-Pazos, Eduardo
A1 Casaseca-de-la-Higuera, Pablo
AB We present an automatic road incident detector characterised by low computational complexity for easy implementation on affordable devices, automatic adaptability to changes in scenery and road conditions, and automatic detection of the most common incidents (vehicles moving at abnormal speed, pedestrians or objects falling onto the road, vehicles stopped on the shoulder, and wrong-way "kamikaze" vehicles). To achieve these goals, different tasks have been addressed: lane segmentation, identification of traffic directions, and elimination of unnecessary objects in the foreground. The proposed system has been tested on a collection of videos recorded in real scenarios with real traffic, including areas with different lighting conditions. Self-adaptability (plug and play) to different scenarios has been tested using videos with significant scene changes. The system can process a minimum of 80 video frames within the camera's field of view, covering a distance of 400 m, within a span of 12 s. This capability ensures that vehicles travelling at speeds of 120 km/h are detected with more than enough margin. Additionally, our analysis has revealed a substantial improvement in incident detection with respect to previous approaches: an increase in accuracy of 2–5% in automatic mode and 2–7% in semi-automatic mode. The proposed classifier module needs only 2.3 MB of GPU memory to carry out inference, thus allowing implementation on low-cost devices.
YR 2024
FD 2024
LK https://uvadoc.uva.es/handle/10324/68168
UL https://uvadoc.uva.es/handle/10324/68168
LA eng
NO Sensors 2024, 24(6), 1822; https://doi.org/10.3390/s24061822. Section: Vehicular Sensing. Submission received: 7 February 2024 / Revised: 2 March 2024 / Accepted: 8 March 2024 / Published: 12 March 2024. Author affiliations: ETSI Telecomunicación, Universidad de Valladolid, 47011 Valladolid, Spain; Construcciones y Obras Llorente, S.A., 47012 Valladolid, Spain. Corresponding author: Pablo Casaseca-de-la-Higuera.
DS UVaDOC
RD 19-oct-2024