<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel rdf:about="https://uvadoc.uva.es/handle/10324/966">
<title>Instituto de las Tecnologías Avanzadas en la Producción (ITAP)</title>
<link>https://uvadoc.uva.es/handle/10324/966</link>
<description>ITAP</description>
<items>
<rdf:Seq>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/80441"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/78652"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/76284"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/73711"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/73268"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/73267"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/73265"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/73264"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/69775"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/69767"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/68303"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/68288"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/68147"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/67137"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/65784"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/65761"/>
</rdf:Seq>
</items>
<dc:date>2026-04-12T17:08:30Z</dc:date>
</channel>
<item rdf:about="https://uvadoc.uva.es/handle/10324/80441">
<title>Improved early prediction of acute pancreatitis severity using SHAP-based XGBoost model: Beyond traditional scoring systems</title>
<link>https://uvadoc.uva.es/handle/10324/80441</link>
<description>Background: Acute pancreatitis (AP) progresses to severe forms in about 20% of cases, leading to high morbidity and mortality. Traditional clinical scoring systems for severity prediction (e.g., Ranson, BISAP) are limited by delayed applicability and suboptimal diagnostic accuracy.
Aims: To develop and validate machine learning (ML) models for early prediction of moderately severe and severe acute pancreatitis (MSAP-SAP), and to compare them with conventional scores.
Methods: A retrospective cohort of 816 patients (2014–2023) was analyzed. ML models were developed using admission (24-hour) and early (48-hour) data. Models were trained and tested using an 80:20 stratified split and evaluated based on ROC-AUC. F-ANOVA, Mutual Information and SHapley Additive exPlanations (SHAP) were used for feature selection. SHAP was also used for model interpretability.
Results: The XGBoost model with SHAP-based feature selection (XGBSH) achieved the highest predictive performance, with ROC-AUCs of 0.89 (24-hour) and 0.94 (48-hour) on the test cohort. Key predictive features included SIRS, BUN, CRP, creatinine, and pleural effusion. Compared to Ranson and BISAP (both ROC-AUC = 0.72), the XGBSH models demonstrated superior accuracy and allowed flexible, threshold-based classification.
Conclusion: The proposed SHAP-enhanced XGBoost model offers a reliable and interpretable tool for early prediction of AP severity, improving clinical decision-making and patient management.
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/78652">
<title>Behavior tree generation and adaptation for a social robot control with LLMs</title>
<link>https://uvadoc.uva.es/handle/10324/78652</link>
<description>Large Language Models have recently emerged as a powerful tool for generating flexible and context-aware&#13;
robotic behavior. However, adapting to unforeseen events and ensuring robust task completion remain&#13;
significant challenges. This paper presents a novel system that leverages LLMs and Behavior Trees to enable&#13;
robots to generate, execute, and adapt task plans based on natural language commands. The system employs&#13;
ChatGPT to process user instructions, generating initial Behavior Trees that encapsulate the required task&#13;
steps. A modular architecture, combining the BT planner and a Failure Interpreter module, allows the system&#13;
to dynamically adjust Behavior Trees when execution challenges or environmental changes arise.&#13;
Unlike conventional methods that rely on static Behavior Trees or predefined state machines, our approach&#13;
ensures adaptability by integrating a Failure Interpreter capable of identifying execution issues and proposing&#13;
alternative plans or user clarifications in real time. This adaptability makes the system robust to disturbances&#13;
and allows for seamless human–robot interaction. We validate the proposed methodology using experiments&#13;
on a social robot across various scenarios in our workplace, demonstrating its effectiveness in generating&#13;
executable Behavior Trees and responding to execution failures. The approach achieves an 89.6% success rate&#13;
in a realistic home environment, highlighting the effectiveness of LLM-powered Behavior Trees in enabling&#13;
robust and flexible robot behavior from natural language input.
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/76284">
<title>Generating vertical ground reaction forces using a stochastic data-driven model for pedestrian walking</title>
<link>https://uvadoc.uva.es/handle/10324/76284</link>
<description>A novel time-domain approach to the characterization of the forces induced by a pedestrian is proposed.&#13;
It focuses on the vertical component while walking, but thanks to how it is conceived, the algorithm can&#13;
be easily adapted to other activities or any other force component. The work has been developed from&#13;
the statistical point of view, so a stochastic data-driven model is finally obtained after the algorithm is&#13;
applied to a set of experimentally measured steps. The model is composed of two mean vectors and their&#13;
corresponding covariance matrices to represent the steps, as well as some more means and standard deviations&#13;
to account for the step scaling and double support phase, under the assumption that the random variables&#13;
follow normal distributions. Velocity and step length are also provided, so the model and the latter data enable&#13;
the realistic generation of virtual gaits. Some application examples at different walking paces are shown, in&#13;
which comparisons between the original steps and a set of virtual ones are performed to show the similarities&#13;
between both. For reproducibility purposes, the data and the developed algorithm have been made available.
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/73711">
<title>Training Fully Convolutional Neural Networks for Lightweight, Non-Critical Instance Segmentation Applications</title>
<link>https://uvadoc.uva.es/handle/10324/73711</link>
<description>Augmented reality applications involving human interaction with virtual objects often rely on segmentation-based hand detection techniques. Semantic segmentation can then be enhanced with instance-specific information to model complex interactions between objects, but extracting such information typically increases the computational load significantly. This study proposes a training strategy that enables conventional semantic segmentation networks to preserve some instance information during inference. This is accomplished by introducing pixel weight maps into the loss calculation, increasing the importance of boundary pixels between instances. We compare two common fully convolutional network (FCN) architectures, U-Net and ResNet, and fine-tune the fittest to improve segmentation results. Although the resulting model does not reach state-of-the-art segmentation performance on the EgoHands dataset, it preserves some instance information with no computational overhead. As expected, degraded segmentations are a necessary trade-off to preserve boundaries when instances are close together. This strategy allows approximating instance segmentation in real time using non-specialized hardware, obtaining a unique blob for an instance with an intersection over union greater than 50% in 79% of the instances in our test set. A simple FCN, typically used for semantic segmentation, has shown promising instance segmentation results by introducing per-pixel weight maps during training for lightweight applications.
</description>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/73268">
<title>Tackling Post-COVID-19 Rehabilitation Challenges: A Pilot Clinical Trial Investigating the Role of Robotic-Assisted Hand Rehabilitation</title>
<link>https://uvadoc.uva.es/handle/10324/73268</link>
<description>Prolonged hospitalization in severe COVID-19 cases can lead to substantial muscle loss and functional deterioration. While rehabilitation is essential, conventional approaches face capacity challenges. Therefore, evaluating the effectiveness of robotic-assisted rehabilitation for patients with post-COVID-19 fatigue syndrome to enhance both motor function and overall recovery holds paramount significance. Our objective is to assess the effectiveness of rehabilitation in post-COVID-19 patients with upper extremity impairment through the utilization of a hand exoskeleton-based robotic system. Methods: A total of 13 participants experiencing acute or limited functional or strength impairment in an upper extremity due to COVID-19 were enrolled in the study. The structured intervention consisted of 45-minute therapy sessions, conducted four times per week over a six-week period, utilizing a hand exoskeleton. The research employed standardized health assessments, motion analysis, and semi-structured interviews for pre-intervention and follow-up evaluations. Paired sample t-tests were employed to statistically analyze the outcomes. Results: The outcomes showed a reduction in overall dependence levels across participants, positive changes in various quality of life-related measurements, and an average increase of 60.4 ± 25.7% and 28.7 ± 11.2% for passive and active flexion, respectively. Conclusions: Our data suggest that hand exoskeleton-based robotic systems hold promise to optimize rehabilitation outcomes following severe COVID-19. Trial registration: ID NCT06137716 at ClinicalTrials.gov.
</description>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/73267">
<title>Design and Analysis of the M3Rob: A Robotic Platform for Wrist and Hand Rehabilitation</title>
<link>https://uvadoc.uva.es/handle/10324/73267</link>
<description>Physical therapy plays a crucial role in motor recovery. Rehabilitation robots have emerged as a significant advancement, enabling repetitive therapeutic interventions in both clinical and home settings. In this context, the development of a highly functional, reliable, portable, and cost-effective mechatronic system for wrist and hand rehabilitation represents a significant step in the field. This article focuses on the design and implementation of the M3Rob device, a 3-DoF wrist exoskeleton equipped with a force sensor, allowing for the execution of active therapies, recognized for their effectiveness in motor recovery. Hence, a closed-loop admittance control utilizing a joint-space target trajectory as input is presented and experimentally evaluated across three distinct levels of assistance. Moreover, to address the need for rehabilitation targeting activities of daily living, the device enables the incorporation of a hand exoskeleton for simultaneously performing hand and wrist rehabilitation. Featuring a range of motion of 180° for pronation/supination, 120° for flexion/extension, and 75° for ulnar/radial deviation, in combination with joint torques spanning from 7.85 to 43.86 Nm, the device covers the required motions and forces essential for daily activities. The presented device offers a comprehensive solution for wrist and hand rehabilitation, effectively addressing critical challenges in motor recovery.
</description>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/73265">
<title>Force-based control strategy for a collaborative robotic camera holder in laparoscopic surgery using pivoting motion</title>
<link>https://uvadoc.uva.es/handle/10324/73265</link>
<description>Introduction: Laparoscopic surgery often relies on a fixed Remote Center of Motion (RCM) for robot mobility control, which assumes that the patient’s abdominal walls are immobile. However, this assumption is inaccurate, especially in collaborative surgical environments. In this paper, we present a force-based strategy for the mobility of a robotic camera-holder system for laparoscopic surgery based on a pivoting motion. This strategy reconceptualizes the conventional mobility control paradigm of surgical robotics.
Methods: The proposed strategy involves direct control of the Tool Center Point’s (TCP) position and orientation without any constraints associated with the spatial position of the incision. It is based on pivoting motions to minimize contact forces between the abdominal walls and the laparoscope. The control directly relates the measured force and angular velocity of the laparoscope, resulting in the relocation of the trocar, whose position becomes a consequence of the natural accommodation allowed by this pivoting.
Results: The effectiveness and safety of the proposed control were evaluated through a series of experiments. The experiments showed that the control was able to reduce an external force of 9 N to ±0.2 N in 0.7 s, and to 2 N in just 0.3 s. Furthermore, the camera was able to track a region of interest by displacing the TCP as desired, leveraging the strategy’s property that dynamically constrains its orientation.
Discussion: The proposed control strategy has proven effective in minimizing the risk caused by sudden high forces resulting from accidents, and in maintaining the field of view despite movements in the surgical environment, such as physiological movements of the patient or undesired movements of other surgical instruments. This control strategy can be implemented for laparoscopic robots without mechanical RCMs, as well as for commercial collaborative robots, thereby improving the safety of surgical interventions in collaborative environments.
</description>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/73264">
<title>Prediction of cow calving in extensive livestock using a new neck-mounted sensorized wearable device: a pilot study</title>
<link>https://uvadoc.uva.es/handle/10324/73264</link>
<description>In this study, a new low-cost neck-mounted sensorized wearable device is presented to help farmers detect the onset of calving in extensive livestock farming by continuously monitoring cow data. The device incorporates three sensors: an inertial measurement unit (IMU), a global navigation satellite system (GNSS) receiver, and a thermometer. The hypothesis of this study was that the onset of calving is detectable through analysis of the number of transitions between lying and standing of the animal (lying bouts). A new algorithm was developed to detect calving, analysing the frequency and duration of lying and standing postures. An important novelty is that the proposed algorithm has been designed with the aim of being executed on the embedded microcontroller housed in the cow’s collar and, therefore, it requires minimal computational resources while allowing for real-time data processing. In this preliminary study, six cows were monitored during different stages of gestation (before, during, and after calving), both with the sensorized wearable device and by human observers. The study was carried out on an extensive livestock farm in Salamanca (Spain), during the period from August 2020 to July 2021. The preliminary results obtained indicate that lying-standing animal states and transitions may be useful to predict calving. Further research, with data obtained in future calvings, is required to refine the algorithm.
</description>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/69775">
<title>Development of a human–robot interface for cobot trajectory planning using mixed reality</title>
<link>https://uvadoc.uva.es/handle/10324/69775</link>
<description>The growing demand for projects with collaborative robots, known as “cobots”, underlines the need to efficiently address the execution of tasks with speed and flexibility, without neglecting safety in human–robot interaction. In general terms, this practice requires knowledge of robotics programming and skill in the use of hardware. The proposed solution consists of a mixed reality (MR) application integrated into a mixed reality head-mounted device (HMD) that accelerates the process of programming the complex manoeuvres of a cobot. This advancement is achieved through voice and gesture recognition, in addition to the use of digital panels. This allows any user, regardless of his or her robotics experience, to work more efficiently. The Robot Operating System (ROS) platform monitors the cobot and manages the transfer of data between the two. The system uses QR (Quick Response) codes to establish a precise frame of reference. This solution has proven its applicability in industrial processes, by automating manoeuvres and receiving positive feedback from users who have evaluated its performance. This solution promises to revolutionize the programming and operation of cobots, and pave the way for efficient and accessible collaborative robotics.
</description>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/69767">
<title>Discrete lattice element model for fracture propagation with improved elastic response</title>
<link>https://uvadoc.uva.es/handle/10324/69767</link>
<description>This research presents a novel approach to modeling fracture propagation using a discrete lattice element model with embedded strong discontinuities. The focus is on enhancing the linear elastic response within the model followed by propagation of fractures until total failure. To achieve this, a generalized beam lattice element with an embedded strong discontinuity based on the kinematics of a rigid-body spring model is formulated. The linear elastic regime is refined by correcting the stress tensor at nodes within the domain based on the internal forces present in lattice elements, which is achieved by introducing fictitious forces into the standard internal force vectors to predict the right elastic response of the model related to Poisson’s effect. Upon initiation of the first fractures, the procedure for the computation of the fictitious stress tensor is terminated, and the embedded strong discontinuities are activated in the lattice elements for obtaining an objective fracture and failure response. This transition ensures a shift from the elastic phase to the fracture propagation phase, enhancing the predictive capabilities in capturing the full fracture processes.
</description>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/68303">
<title>Real-time tool localization for laparoscopic surgery using convolutional neural network</title>
<link>https://uvadoc.uva.es/handle/10324/68303</link>
<description>Partially automated robotic systems, such as camera holders, represent a pivotal step towards enhancing efficiency and precision in surgical procedures. Therefore, this paper introduces an approach for real-time tool localization in laparoscopic surgery using convolutional neural networks. The proposed model, based on two Hourglass modules in series, can localize up to two surgical tools simultaneously. This study utilized three datasets: the ITAP dataset, alongside two publicly available datasets, namely Atlas Dione and EndoVis Challenge. Three variations of the Hourglass-based models were proposed, with the best model achieving high accuracy (92.86%) and frame rates (27.64 FPS), suitable for integration into robotic systems. An evaluation on an independent test set yielded slightly lower accuracy, indicating limited generalizability. The model was further analyzed using the Grad-CAM technique to gain insights into its functionality. Overall, this work presents a promising solution for automating aspects of laparoscopic surgery, potentially enhancing surgical efficiency by reducing the need for manual endoscope manipulation.
</description>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/68288">
<title>Design of an instant vibration-based warning system and its operation during relocation works of historic facades</title>
<link>https://uvadoc.uva.es/handle/10324/68288</link>
<description>Preserved listed building facades may require large-scale and highly technical work when the supporting building structure is at serious risk of collapse. Such is the case described in this paper, where vast facades must be cut into large panels up to 200 m² and 150 t in weight and carefully laid on the ground. Various engineering works must be carried out to ensure the structural integrity of the panels to be safeguarded. Each panel must be reinforced by a temporary lattice steel structure prior to the disengagement from the supporting building frame. The operations require the use of cutting tools, hitting demolition machines and heavy cranes, which can induce potentially damaging vibrations that should be monitored and processed so that workers can be alerted in real time when certain thresholds are exceeded and can proceed more carefully. The paper describes the specifically designed monitoring system, its electronic parts, how they operate and how the data are processed and displayed. The monitoring system, once verified in laboratory tests, is applied to the detachment and overturning activities of a representative full-scale panel, tracking vibration levels and tilting rates. After days of operation and visual observation, it is possible to correlate vibration levels with incipient damage, establishing that peaks below 0.5 m/s² or RMS values of 0.05 m/s² are permissible, but that above 1.0 m/s² or 0.3 m/s², respectively, activities should be halted. The proposed system has proven to be useful for the intended purposes, making it possible to know the acceptable thresholds and trigger the necessary alarms in real time for the successful course of the work.
</description>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/68147">
<title>Machine learning-based prediction of cattle activity using sensor-based data</title>
<link>https://uvadoc.uva.es/handle/10324/68147</link>
<description>Livestock monitoring is a task traditionally carried out through direct observation by experienced caretakers. By analyzing animal behavior, it is possible to predict to a certain degree events that require human action, such as calving. However, this continuous monitoring is in many cases not feasible. In this work, we propose, develop and evaluate the accuracy of intelligent algorithms that operate on data obtained by low-cost sensors to determine the state of the animal in the terms used by the caretakers (grazing, ruminating, walking, etc.). The best results have been obtained using aggregations and averages of the time series with support vector classifiers and tree-based ensembles, reaching accuracies of 57% for the general behavior problem (4 classes) and 85% for the standing behavior problem (2 classes). This is a preliminary step to the realization of event-specific predictions.
</description>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/67137">
<title>An unsupervised method to recognise human activity at home using non-intrusive sensors</title>
<link>https://uvadoc.uva.es/handle/10324/67137</link>
<description>As people get older, living at home can expose them to potentially dangerous situations when performing everyday actions or simple tasks due to physical, sensory or cognitive limitations. This could compromise the residents’ health, a risk that in many cases could be reduced by early detection of the incidents. The present work focuses on the development of a system capable of detecting in real time the main activities of daily life that one or several people can perform at the same time inside their home. The proposed approach corresponds to an unsupervised learning method, which has a number of advantages, such as facilitating future replication or improving control and knowledge of the internal workings of the system. The final objective of this system is to facilitate the implementation of this method in a larger number of homes. The system is able to analyse the events provided by a network of non-intrusive sensors and the locations of the residents inside the home through a Bluetooth beacon network. The method is built upon an accurate combination of two hidden Markov models: one providing the rooms in which the residents are located and the other providing the activity the residents are carrying out. The method has been tested with the data provided by the public database SDHAR-HOME, providing accuracy results ranging from 86.78% to 91.68%. The approach presents an improvement over existing unsupervised learning methods as it is replicable for multiple users at the same time.
</description>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/65784">
<title>A Versatile Embedded Platform for Implementation of Biocooperative Control in Upper-Limb Neuromotor Rehabilitation Scenarios</title>
<link>https://uvadoc.uva.es/handle/10324/65784</link>
<description>Biocooperative control uses both biomechanical and physiological information of the user to achieve a reliable human–robot interaction. In the context of neuromotor rehabilitation, such control can enhance the rehabilitation experience and outcomes. However, the high cost and large volume of commercial systems for physiological signal acquisition are major limitations for the development of such control. We present a highly versatile, low-cost and wearable embedded system that integrates the most commonly used sensors in this field: inertial measurement unit (IMU), electrocardiography (ECG), electromyography (EMG), galvanic skin response (GSR) and skin temperature (SKT) sensors. Additionally, the compact system combines wireless communication for data transmission and a high-efficiency microcontroller for real-time signal processing and control. We tested the system in two common neuromotor rehabilitation scenarios. The first is an upper-limb rehabilitation VR-based exergame, in which the patient must collect as many coins as possible. Movement recognition of the hand and arm is performed based on EMG and IMU information, respectively. The second is adaptive assistive control that adjusts the level of assistance of a wrist rehabilitation robot according to the physiological state and motor performance of the patient using GSR, ECG and SKT data. The quality of the recorded signals and the processing capacity of the system meet the needs of the two upper-limb rehabilitation applications. The wearable system is highly versatile, open, configurable and low cost, and it could promote the development of real-time biocooperative control for a wide range of neuromotor rehabilitation applications.
</description>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/65761">
<title>Gauze detection and segmentation in minimally invasive surgery video using Convolutional Neural Networks</title>
<link>https://uvadoc.uva.es/handle/10324/65761</link>
<description>Medical instrument detection in laparoscopic video has been carried out to increase the autonomy of surgical robots, evaluate skills or index recordings. However, it has not been extended to surgical gauzes. Gauzes can provide valuable information for numerous tasks in the operating room, but the lack of an annotated dataset has hampered their research. In this article, we present a segmentation dataset with 4003 hand-labelled frames from laparoscopic video. To demonstrate the dataset's potential, we analyzed several baselines: detection using YOLOv3, coarse segmentation, and segmentation with a U-Net. Our results show that YOLOv3 can be executed in real time but provides modest recall. Coarse segmentation presents satisfactory results but lacks inference speed. Finally, the U-Net baseline achieves a good speed-quality compromise, running above 30 FPS while obtaining an IoU of 0.85. The accuracy reached by U-Net and its execution speed demonstrate that precise, real-time gauze segmentation can be achieved by training convolutional neural networks on the proposed dataset.
</description>
<dc:date>2022-01-01T00:00:00Z</dc:date>
</item>
</rdf:RDF>
