<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>Dpto. Teoría de la Señal y Comunicaciones e Ingeniería Telemática</title>
<link>https://uvadoc.uva.es/handle/10324/1191</link>
<description>71</description>
<pubDate>Sun, 05 Apr 2026 10:56:10 GMT</pubDate>
<dc:date>2026-04-05T10:56:10Z</dc:date>
<item>
<title>Deep learning to classify the ripeness of coffee fruit in the mechanized harvesting process</title>
<link>https://uvadoc.uva.es/handle/10324/83819</link>
<description>The coffee industry is a vital sector of global agriculture, and coffee is one of the most widely traded plant products in the world. Coffee fruit ripeness affects the taste and aroma of the final brewed beverage, as well as coffee farms’ overall yield and economic viability. Traditional methods of assessing coffee fruit ripeness, which rely on manual inspection by skilled workers, are labor-intensive, time-consuming, and prone to subjective interpretation. In this study, we used the YOLOv9 (You Only Look Once) algorithm, which outperforms previous versions in particular through a new lightweight network architecture, the GELAN-c model. The objective of this study was to quickly and accurately identify and classify the degree of ripeness of the harvested coffee fruits into the following classes: unripe, ripe-red, ripe-yellow, and overripe. The images were captured during harvesting with a commercial harvester on a coffee farm in the southern region of the state of Minas Gerais, Brazil. Data augmentation was performed to enlarge the dataset in terms of images and bounding boxes. Detection performance was evaluated for image sizes between 128 and 640 px. The best performance was achieved with an image size of 640 px, reaching a precision of 99 %, a recall of 98.5 %, an F1-score of 98.75 %, a mAP@0.5 of 99.25 %, and a mAP@0.5:0.95 of about 85 % during the validation phase. Our study significantly outperforms previous studies on fruit classification in terms of models used, data augmentation strategies, and overall performance.
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/83819</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performance assessment of no-fee GNSS augmentation systems for tractor guidance</title>
<link>https://uvadoc.uva.es/handle/10324/83781</link>
<description>This study assesses the performance of no-fee GNSS augmentation systems for tractor guidance. Five no-fee augmentation systems (EGNOS, GLIDE, RTK, VRS-NRTK, and on-site RTK) were evaluated in both static and guidance tests over short- and long‑term periods using three GNSS receiver types: the low-cost Navilock NL8022MP, the mid-range Novatel Smart2, and the high-end Harxon TS108PRO. Static tests recorded 24 h of position data from 14 receiver-augmentation configurations on a fixed surface. Guidance tests recorded trajectory data from the 14 configurations during straight-line guidance using a tractor equipped with two GNSS receivers, one under test and one high-precision reference. Results showed that: (i) unaugmented GNSS resulted in guidance errors of 2–3 m, reduced to below 1 m in pass-to-pass intervals shorter than 15 min; (ii) EGNOS reduced these guidance errors by ∼41%; (iii) GLIDE reduced guidance errors to below 20 cm for pass-to-pass intervals shorter than 15 min, with no long-term improvement; (iv) RTK guidance error decreased as baseline length shortened: &gt;100 km yielded &gt;17 cm, 20–100 km yielded 3–20 cm, and &lt;20 km yielded 2–3 cm; (v) VRS-NRTK slightly outperformed RTK with similar baseline lengths; and (vi) on-site RTK enabled 1 cm guidance error. In summary: low-cost receivers without augmentation or with EGNOS result in metre-level errors; mid-range receivers with GLIDE deliver decimetre-level guidance errors in the short term; and high-end receivers using on-site RTK or VRS-NRTK on baselines up to 100 km achieve centimetre-level errors, enabling farmers to replicate tractor trajectories consistently year to year.
</description>
<pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/83781</guid>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Single-case learning analytics: Feasibility of a human-centered analytics approach to support doctoral education</title>
<link>https://uvadoc.uva.es/handle/10324/83193</link>
<description>Recent advances in machine learning and natural language processing have the potential to transform human activity in many domains. The field of learning analytics has applied these techniques successfully to many areas of education but has not been able to permeate others, such as doctoral education. Indeed, doctoral education remains an under-researched area with widespread problems (high dropout rates, low mental well-being) and lacks technological support beyond very specialized tasks. The inherent uniqueness of the doctoral journey may help explain the lack of generalized solutions (technological or otherwise) to these challenges. We propose a novel approach to apply the aforementioned advances in computation to support doctoral education. Single-case learning analytics defines a process in which doctoral students, researchers, and computational elements collaborate to extract insights about a single (doctoral) learner's experience and learning process. The feasibility and added value of this approach are demonstrated using an authentic dataset collected by nine doctoral students over a period of at least two months. The insights from this exploratory proof-of-concept serve to spark a research agenda for future technological support of doctoral education, which is aligned with recent calls for more human-centred approaches to designing and implementing learning analytics technologies.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/83193</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reflection for action: designing tools to support teacher reflection on everyday evidence</title>
<link>https://uvadoc.uva.es/handle/10324/83191</link>
<description>Improving educational practice through reflection is one important focus of teacher professional development approaches. However, such teacher reflection operates under practical classroom constraints that make it happen infrequently, including the reliance on disruptive peer/supervisor observations or recordings. This article describes three design-based research iterations towards technological support for teacher reflection based on everyday evidence and feedback. The authors collaborated with 16 teachers from two different secondary schools, using a variety of prototype technologies (from paper prototypes to web applications and wearable sensors). The iterative evaluation of such prototypes led them from a high-tech focused approach to a more nuanced socio-technical one, based on lightweight technologies and ‘envelope routines’ that also involve students. After illustrating the potential of this approach to change teacher practice and students’ learning experience, the authors also present a series of guidelines for the design of technology that supports such reflection based on everyday evidence gathering.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/83191</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Orchestration Load Indicators and Patterns: In-the-Wild Studies Using Mobile Eye-Tracking</title>
<link>https://uvadoc.uva.es/handle/10324/83189</link>
<description>Orchestration load is the effort a teacher spends in coordinating multiple activities and learning processes. It has been proposed as a construct to evaluate the usability of learning technologies at the classroom level, in the same way that cognitive load is used as a measure of usability at the individual level. However, so far this notion has remained abstract. In order to ground orchestration load in empirical evidence and study it in a more systematic and detailed manner, we propose a method to quantify it, based on physiological data (concretely, mobile eye-tracking measures), along with human-coded behavioral data. This paper presents the results of applying this method to four exploratory case studies, where four teachers orchestrated technology-enhanced face-to-face lessons with primary, secondary school, and university students. The data from these studies provide a first validation of this method in different conditions, and illustrate how it can be used to understand the effect of different classroom factors on orchestration load. From these studies, we also extract empirical insights about classroom orchestration using technology.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/83189</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Endowing a plain fluidic chip with micro-optics: a holographic microscope slide</title>
<link>https://uvadoc.uva.es/handle/10324/82982</link>
<description>Lab-on-a-Chip (LoC) devices are extremely promising in that they enable diagnostic functions at the point-of-care. Within this scope, an important goal is to design imaging schemes that can be used out of the lab. In this paper, we introduce and test a pocket holographic slide that allows Digital Holography microscopy to be performed without an interferometer setup. Instead, a commercial off-the-shelf plastic chip is engineered and functionalized with this aim. The microfluidic chip is endowed with micro-optics, i.e., a diffraction grating and polymeric lenses, to build an interferometer directly on the chip, avoiding the need for a reference arm and external bulky optical components. Thanks to the single-beam scheme, the system is completely integrated and robust against vibrations, sharing the useful features of any common path interferometer. Hence, it becomes possible to bring holographic functionalities out of the lab, moving complexity from the external optical apparatus to the chip itself. Label-free imaging and quantitative phase contrast mapping of live samples are demonstrated, along with flexible refocusing capabilities. Thus, a liquid volume can be analyzed in one single shot with no need for mechanical scanning systems.
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/82982</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Holographic microscope slide in a spatio-temporal imaging modality for reliable 3D cell counting</title>
<link>https://uvadoc.uva.es/handle/10324/82981</link>
<description>A Lab-on-a-Chip device for space-time digital holography is presented. Here, computational methods, holography, and microfluidics are intertwined to provide a reliable system for high-throughput counting of RBCs. In the current trend of miniaturization and simplification of imaging flow cytometry, Lab-on-a-Chip (LoC) microfluidic devices represent an innovative and cost-effective solution. In this framework, we propose for the first time a novel platform based on the compactness of a holographic microscope slide (HMS) in combination with the new computational features of space-time digital holography (STDH), which uses a 1D linear sensor array (LSA) instead of 2D CCD or CMOS cameras to respond to real diagnostic needs. In this LoC platform, computational methods, holography, and microfluidics are intertwined in order to provide an imaging system with a reduced number of optical components and the capability to achieve reliable cell counting even in the absence of very accurate flow control. STDH exploits the sample motion within the microfluidic channel to obtain an unlimited field-of-view along the flow direction, independent of the magnification factor. Furthermore, numerical refocusing, typical of the holographic modality, allows imaging and visualization of the entire volume of the channel, thus avoiding loss of information due to the limited depth of focus of standard microscopes. Consequently, we believe that this platform could open new perspectives for enhancing throughput by 3D volumetric imaging.
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/82981</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Label free imaging of cell‐substrate contacts by holographic total internal reflection microscopy</title>
<link>https://uvadoc.uva.es/handle/10324/82980</link>
<description>The study of cell adhesion contacts is pivotal to understanding cell mechanics and the interaction of cells with substrates or with chemical and physical stimuli. We designed and built a HoloTIR microscope for label-free quantitative phase imaging in total internal reflection. Here we show for the first time that HoloTIR is a good choice for the label-free study of focal contacts and of cell/substrate interaction, as its sensitivity is enhanced in comparison with standard TIR microscopy. Finally, the simplicity of implementation and the relatively low cost, due to the need for fewer optical components, make HoloTIR a reasonable alternative, or even an addition, to TIRF microscopy for mapping cell/substratum topography. As a proof of concept, we studied the formation of focal contacts of fibroblasts on three substrates with different levels of affinity for cell adhesion.
</description>
<pubDate>Fri, 01 Jan 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/82980</guid>
<dc:date>2016-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surface Plasmon Resonance Imaging by Holographic Enhanced Mapping</title>
<link>https://uvadoc.uva.es/handle/10324/82979</link>
<description>We designed, constructed and tested a holographic surface plasmon resonance (HoloSPR) objective-based microscope for simultaneous amplitude-contrast and phase-contrast surface plasmon resonance imaging (SPRi). SPRi is a widespread tool for label-free detection of changes in refractive index and concentration, as well as for mapping of thin films. Currently, most SPR sensors rely on the detection of amplitude or phase changes of light. Despite the high sensitivities achieved so far, each technique alone has a limited detection range with optimal sensitivity. Here we use a high numerical aperture objective that avoids all the limitations due to the use of a prism-based configuration, yielding highly magnified and distortion-free images. Holographic reconstructions of SPR images and real-time kinetic measurements are presented to show the capability of HoloSPR to provide a versatile imaging method for high-throughput SPR detection, complementary to conventional SPR techniques.
</description>
<pubDate>Thu, 01 Jan 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/82979</guid>
<dc:date>2015-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Twofold Self-Assembling of Nanocrystals Into Nanocomposite Polymer</title>
<link>https://uvadoc.uva.es/handle/10324/82978</link>
<description>In this paper, we introduce a single-step self-assembling process aimed at forming two-dimensional (2-D) array microstructures made from a nanocomposite polymer layer in which CdSe-CdS nanocrystals are dispersed. The novelty of the process reported here is that it operates as a simultaneous twofold process in which the liquid polymer matrix is self-shaped by electrohydrodynamic pressure into a 2-D array of microstructures while, at the same time, the nanocrystals are self-assembled by dielectrophoretic forces. The proposed approach could inspire future smart fabrication techniques for producing self-assembled lensed nanocomposite layers. In principle, the method is scalable down to lens diameters of a few micrometers.
</description>
<pubDate>Fri, 01 Jan 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/82978</guid>
<dc:date>2016-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reproducibility and Reliability of Free-Water-Corrected Diffusion Tensor Imaging of the Brain: Revisited</title>
<link>https://uvadoc.uva.es/handle/10324/82873</link>
<description>Diffusion tensor imaging (DTI) corrected for free water (FW) enables the separation of a hindered Gaussian-like profile from an isotropic component, which represents diffusion found in cerebrospinal and interstitial fluids within the extracellular space of grey and white matter. Assessing the reproducibility and reliability properties of FW-corrected DTI is a crucial factor in demonstrating the potential clinical utility of this refinement, particularly when considering examinations across multiple medical centres. This paper explores the variability, reliability, and separability properties of the free-water volume fraction (FWVF) and FW-corrected DTI-based measures in healthy human brain white matter using publicly available test–retest databases acquired in (1) intra-scanner, (2) intra-scanner longitudinal and (3) inter-scanner settings under varying acquisition schemes. Three different estimation techniques to retrieve the FW-corrected DTI parameters, tailored to single- or multiple-shell diffusion-sensitising magnetic resonance (MR) acquisitions, are investigated: (i) a direct optimization of the bi-tensor signal representation in the variational framework, (ii) the region contraction-based approach and (iii) the spherical means technique combined with a correction of the diffusion-weighted MR signal prior to DTI estimation. We found that the previously suggested improvement in repeatability of DTI-based measures from FW correction in a single-shell diffusion-weighted MR acquisition may be data- and methodology-dependent and does not generalise to multiple-shell scenarios. The study also confirms that the single-shell variational FW-correction method fails to retrieve meaningful information from the mean diffusivity (MD) parameter.
In contrast, the combined FW-correction scheme reduces the biological variability of MD, regardless of whether DTI is estimated from single- or multiple-shell data, provided that the FWVF used for the correction in both cases is derived from multiple-shell acquisitions. Our experiments have shown that the most reliable and repeatable/reproducible measures, while preserving a moderate separability property, are fractional anisotropy and axial diffusivity estimated in a multiple-shell variant under a combined FW-correction scheme. On the contrary, our results show evidence that the least reliable measures are the mean diffusivity estimated using any FW-correction procedure, as well as the FWVF parameter itself. These results can be used to establish the direction for selecting the most attractive FW-corrected DTI scheme for clinical applications in terms of the variability-reliability-separability criterion.
</description>
<pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/82873</guid>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>K‐CC‐MoCo: A Fast k‐Space‐Based Respiratory Motion Correction for Highly Accelerated First‐Pass Perfusion Cardiovascular MR</title>
<link>https://uvadoc.uva.es/handle/10324/82754</link>
<description>Purpose&#13;
First-pass perfusion cardiovascular MR (FPP-CMR) enables the non-invasive diagnosis of microcirculation and coronary artery disease. In free-breathing FPP-CMR, motion correction is usually performed in the image domain, requiring an initial reconstruction. This fact hinders its use in model-based and deep learning reconstructions, which present remarkable performance in obtaining high-quality images from highly accelerated acquisitions. We address this challenge by estimating and correcting respiratory motion in free-breathing FPP-CMR directly in k-space.&#13;
Methods&#13;
We propose K-CC-MoCo, an inter-frame rigid motion correction approach formulated exclusively in k-space that handles dynamic contrast through a specifically targeted design of the normalized cross-correlation (CC) objective function. In addition, an ROI-based coil-compression approach was employed to focus the optimization on the heart region. The proposed method was compared to state-of-the-art image-based registration using a digital phantom and real free-breathing acquisitions with different accelerations.&#13;
Results&#13;
The proposed k-space-based method is approximately 2× faster and can correct respiratory motion even at high acceleration factors (up to 50×), where the image-based method fails due to severe undersampling artifacts. Notably, after K-CC-MoCo, the time-averaged images are visibly less blurred. Quantitative metrics (SSIM, etc.) support this conclusion.&#13;
Conclusion&#13;
K-CC-MoCo outperforms image-based correction in free-breathing FPP-CMR acquisitions accelerated up to 50×. Respiratory motion is estimated and corrected in k-space, enabling its use for model-based and/or deep learning reconstructions from highly accelerated scans.
</description>
<pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/82754</guid>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparative evaluation of monocular deep learning pose estimation and IMU-based systems for remote kinematic assessment</title>
<link>https://uvadoc.uva.es/handle/10324/82164</link>
<description>Remote assessment of human motion is increasingly pivotal in clinical, sports, and rehabilitation contexts, particularly given the rise of telemedicine. While traditional motion capture systems deliver high-precision data, their dependence on expensive equipment and controlled laboratory conditions limits their broader application. Advances in computer vision have enabled the development of monocular video-based 3D human pose estimation methods, which leverage ubiquitous camera technologies to offer cost-effective and accessible kinematic analysis. This study systematically benchmarks joint angles derived from video-based models against those from IMUs, addressing the gap in comparative evaluations under realistic, out-of-the-lab conditions.
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/82164</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>RehaBot: Enhancing Cerebral Palsy Rehabilitation with a Chatbot and Assessment of Video-Based Activity Recognition</title>
<link>https://uvadoc.uva.es/handle/10324/82162</link>
<description>Cerebral Palsy (CP) is a major cause of physical disability in childhood, with physical rehabilitation crucial for improving function. Telerehabilitation is increasingly important, driven by the growth of ICT and the COVID-19 pandemic. Chatbots offer a promising way to support home-based therapies and improve patient adherence. RehaBot, a chatbot-based system, was developed to provide this support. Future versions aim to integrate automated activity recognition to enhance the platform.
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/82162</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>EEGSym: Overcoming inter-subject variability in motor imagery based BCIs with deep learning</title>
<link>https://uvadoc.uva.es/handle/10324/81636</link>
<description>In this study, we present a new Deep Learning (DL) architecture for Motor Imagery (MI) based Brain-Computer Interfaces (BCIs) called EEGSym. Our implementation aims to improve on previous state-of-the-art performance in MI classification by overcoming inter-subject variability and reducing BCI inefficiency, which has been estimated to affect 10-50% of the population. This convolutional neural network includes inception modules, residual connections, and a design that introduces the symmetry of the brain through the mid-sagittal plane into the network architecture. It is complemented with a data augmentation technique that improves the generalization of the model and with the use of transfer learning across different datasets. We compare EEGSym’s performance on inter-subject MI classification with ShallowConvNet, DeepConvNet, EEGNet and EEG-Inception. This comparison is performed on 5 publicly available datasets that include left- or right-hand motor imagery from 280 subjects, the largest population evaluated in similar studies to date. EEGSym significantly outperforms the baseline models, reaching accuracies of 88.6±9.0 on Physionet, 83.3±9.3 on OpenBMI, 85.1±9.5 on Kaya2018, 87.4±8.0 on Meng2019 and 90.2±6.5 on Stieger2021. At the same time, it allows 95.7% of the tested population (268 out of 280 users) to reach BCI control (≥70% accuracy). Furthermore, these results are achieved using only 16 electrodes of the more than 60 available on some datasets. Our implementation of EEGSym, which includes new advances for EEG processing with DL, outperforms previous state-of-the-art approaches on inter-subject MI classification.
</description>
<pubDate>Sat, 01 Jan 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/81636</guid>
<dc:date>2022-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Revealing emotional insights from mental health discussions on Instagram and TikTok using BERT models</title>
<link>https://uvadoc.uva.es/handle/10324/81551</link>
<description>This research addresses the challenges related to mental health issues on social media by integrating natural language processing. First, the study extends a previous corpus labelled with emotions and polarity by including new Instagram and TikTok posts related to mental health disclosures by celebrities and influencers. This is the first Spanish-language corpus designed to analyse the impact of social responses to mental health narratives on two of the most widely used social networks. Second, the research integrates classification models based on BERT (Bidirectional Encoder Representations from Transformers) to improve the detection of emotions and polarity. One of the modelled algorithms, MenTaiBERT, which leverages a specialized classification layer, outperforms the other BERT algorithms, reaching 99 % accuracy in emotion detection and 98 % in polarity detection. Indeed, MenTaiBERT exceeds the accuracy of the other algorithms by up to 13 percentage points. Third, a user-friendly graphical tool, based on the above corpus and classification models, has been designed to help professionals identify emotional patterns in mental-health-related social media posts. In summary, analysing the emotional impact of celebrity posts on social media through innovative artificial intelligence strategies is crucial, especially among young people, since these platforms significantly influence their self-esteem, perception of reality, and emotional well-being.
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://uvadoc.uva.es/handle/10324/81551</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
</channel>
</rss>
