<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel rdf:about="https://uvadoc.uva.es/handle/10324/1165">
<title>Dpto. Informática (Arquitectura y Tecnología de Computadores, Ciencias de la Computación e Inteligencia ...)</title>
<link>https://uvadoc.uva.es/handle/10324/1165</link>
<description>41</description>
<items>
<rdf:Seq>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83868"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83867"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83866"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83865"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83864"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83863"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83862"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83860"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83826"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83821"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83792"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83751"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83749"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83748"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83404"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/83195"/>
</rdf:Seq>
</items>
<dc:date>2026-04-18T13:37:08Z</dc:date>
</channel>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83868">
<title>Open SYCL on heterogeneous GPU systems: A case of study</title>
<link>https://uvadoc.uva.es/handle/10324/83868</link>
<description>Computational platforms for high-performance scientific applications are becoming more heterogeneous, including hardware accelerators such as multiple GPUs. Applications in a wide variety of scientific fields require efficient and careful management of the computational resources of this type of hardware to obtain the best possible performance. However, different GPU vendors, architectures, and families can currently be found in heterogeneous clusters or machines. Programming with the vendor-provided languages or frameworks, and optimizing for specific devices, may become cumbersome and compromise portability to other systems. To overcome this problem, several proposals for high-level heterogeneous programming have appeared, trying to reduce the development effort and increase functional and performance portability, specifically when using GPU hardware accelerators. This paper evaluates the SYCL programming model, using the Open SYCL compiler, from two different perspectives: the performance it offers when dealing with single or multiple GPU devices from the same or different vendors, and the development effort required to implement the code. We use as a case study the Finite Time Lyapunov Exponent calculation over two real-world scenarios and compare the performance and development effort of its Open SYCL-based version against the equivalent versions that use CUDA or HIP. Based on the experimental results, we observe that the use of SYCL does not lead to a remarkable overhead in terms of GPU kernel execution time. In general terms, the Open SYCL development effort for the host code is lower than that observed with CUDA or HIP. Moreover, the SYCL version can take advantage of both CUDA and AMD GPU devices simultaneously much more easily than directly using the vendor-specific programming solutions.
</description>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83867">
<title>Secure architecture for IoT and blockchain-based waste traceability</title>
<link>https://uvadoc.uva.es/handle/10324/83867</link>
<description>This paper presents a secure and scalable architecture that integrates Internet of Things (IoT), blockchain, and Data Lake technologies to improve traceability in waste management systems. The proposed system combines the MQTT protocol for efficient communication between resource-constrained IoT devices and a blockchain-based mechanism to ensure the immutability, verifiability, and authenticity of the collected data. A practical use case is demonstrated involving waste container monitoring, where IoT sensors transmit environmental and geolocation data. These data are stored in a Data Lake and cryptographically signed using Merkle trees before being anchored in a blockchain through smart contracts. The experimental results validate the feasibility of the approach and highlight its benefits in terms of transparency, auditability, and operational efficiency.
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83866">
<title>Study of historical evacuation drill data combining regression analysis and dimensionless numbers</title>
<link>https://uvadoc.uva.es/handle/10324/83866</link>
<description>The time needed to evacuate a building depends on many factors. Some are related to people’s behavior, while others are related to the physical characteristics of the building. This paper analyzes the historical data of 47 evacuation drills in 15 different university buildings, both academic and residential, involving more than 19 000 persons. We propose studying these data using dimensionless analysis and statistical regression in order to predict the ratio between exit time and the number of people evacuated. The results obtained show that this approach could be a useful tool for comparing buildings of this type, and that it represents a promising research topic which can also be extended to other types of buildings.
</description>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83865">
<title>Operators for Data Redistribution: Applications to the STL Library and RayTracing Algorithm</title>
<link>https://uvadoc.uva.es/handle/10324/83865</link>
<description>In distributed-memory systems, data redistributions are operations that change the ownership and location of a selected subset of a data structure at runtime. They allow the improvement of the performance of parallel algorithms that operate on changing or partial domains, aiming to create a balanced workload among the active processes. Manually redistributing data is a cumbersome and error-prone task. In this paper, we present a method based on four combinable operators to redistribute partial domains selected by the programmer at runtime in an efficient and simple way. They abstract away from the programmer the data-redistribution implementation details, such as the new mapping, relocation, and communication of the selected data. We also present the application of the proposed operators to a RayTracing application and to a significant part of the STL (C++ Standard Template Library). Our experimental results show that our approach automatically generates a good load balance, which leads to performance improvements for generic data-distribution policies. It does not introduce significant performance overheads compared with tailored data redistributions directly programmed using MPI (Message Passing Interface), while it greatly reduces the code development effort.
</description>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83864">
<title>Extending and validating a theoretical model to predict the effectiveness of building evacuations</title>
<link>https://uvadoc.uva.es/handle/10324/83864</link>
<description>Predicting the effectiveness of building evacuations is a very difficult task in the general case. In a previous work, the historical results of 47 evacuation drills in 15 different university buildings, both academic and residential, involving more than 19 000 persons, were analyzed, and a method based on dimensional analysis and statistical regression was proposed to estimate the exit time in case of evacuation. Comparing this estimated exit time with the real values obtained in evacuation drills allows more informed decisions on whether to invest in more training and/or a preventive culture among the occupants, or in structural improvements of the buildings. In this work, we propose a refinement of the method to calculate expected exit times, which leads to an even better fit between predictions and real-world results, and we use this refined model to predict the results of evacuations of a new building, whose use and characteristics are different from those previously studied, and whose data were provided by other authors in the bibliography. We show that there exists a correlation between the published results and the predictions generated by our model, from both a quantitative and a qualitative point of view.
</description>
<dc:date>2022-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83863">
<title>EPSILOD: efficient parallel skeleton for generic iterative stencil computations in distributed GPUs</title>
<link>https://uvadoc.uva.es/handle/10324/83863</link>
<description>Iterative stencil computations are widely used in numerical simulations. They present a high degree of parallelism, high locality, and mostly-coalesced memory access patterns. Therefore, GPUs are good candidates to speed up their computation. However, the development of stencil programs that can work with huge grids in distributed systems with multiple GPUs is not straightforward, since it requires solving problems related to the partition of the grid across nodes and devices, and the synchronization and data movement across remote GPUs. In this work, we present EPSILOD, a high-productivity parallel programming skeleton for iterative stencil computations on distributed multi-GPU systems, with devices from the same or different vendors, that supports any type of n-dimensional geometric stencil of any order. It uses an abstract specification of the stencil pattern (neighbors and weights) to internally derive the data partition, synchronizations, and communications. Computation is split to better overlap with communications. This paper describes the underlying architecture of EPSILOD and its main components, and presents an experimental evaluation to show the benefits of our approach, including a comparison with another state-of-the-art solution. The experimental results show that EPSILOD is faster and shows good strong and weak scalability on platforms with both homogeneous and heterogeneous types of GPU.
</description>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83862">
<title>UVaFTLE: Lagrangian finite time Lyapunov exponent extraction for fluid dynamic applications</title>
<link>https://uvadoc.uva.es/handle/10324/83862</link>
<description>The determination of Lagrangian Coherent Structures (LCS) is becoming very important in several disciplines, including cardiovascular engineering, aerodynamics, and geophysical fluid dynamics. From the computational point of view, the extraction of LCS consists of two main steps: the flowmap computation and the resolution of Finite Time Lyapunov Exponents (FTLE). In this work, we focus on the design, implementation, and parallelization of the FTLE resolution. We offer an in-depth analysis of this procedure, as well as an open-source C implementation (UVaFTLE) parallelized using OpenMP directives to attain a fair parallel efficiency in shared-memory environments. We have also implemented CUDA kernels that allow UVaFTLE to leverage as many NVIDIA GPU devices as desired in order to reach the best parallel efficiency. For the sake of reproducibility and in order to contribute to open science, our code is publicly available through GitHub. Moreover, we also provide Docker containers to ease its usage.
</description>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83860">
<title>Supporting efficient overlapping of host-device operations for heterogeneous programming with CtrlEvents</title>
<link>https://uvadoc.uva.es/handle/10324/83860</link>
<description>Heterogeneous systems with several kinds of devices, such as multi-core CPUs, GPUs, FPGAs, among others, are now commonplace. Exploiting all these devices with device-oriented programming models, such as CUDA or OpenCL, requires expertise and knowledge about the underlying hardware to tailor the application to each specific device, thus degrading performance portability. Higher-level proposals simplify the programming of these devices, but their current implementations do not have an efficient support to solve problems that include frequent bursts of computation and communication, or input/output operations. In this work we present CtrlEvents, a new heterogeneous runtime solution which automatically overlaps computation and communication whenever possible, simplifying and improving the efficiency of data-dependency analysis and the coordination of both device computations and host tasks that include generic I/O operations. Our solution outperforms other state-of-the-art implementations for most situations, presenting a good balance between portability, programmability and efficiency.
</description>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83826">
<title>Towards aircraft trajectory prediction using LSTM networks</title>
<link>https://uvadoc.uva.es/handle/10324/83826</link>
<description>Trajectory prediction allows for better predictability, security and efficiency in the operations of modern Air Traffic Management. LSTM networks have been successfully applied to make short-term trajectory predictions. However, the criticality of supervising these operations in high-density traffic zones, such as the Terminal Maneuvering Area (TMA) around airports, requires methods that provide long-term, precise predictions. In this paper, we propose an LSTM-based architecture for trajectory prediction using surveillance data (ADS-B). We conduct our experiments on the case study of flights arriving at the Madrid Barajas-Adolfo Suárez airport (Spain), using nine months’ worth of data. In particular, we focus on longer-term predictions than the state of the art, predicting the next 150 seconds at any point in the trajectory. This model provides increased accuracy for 2D positioning, with mean absolute errors of 0.0238 and 0.0544 degrees for latitude and longitude, respectively, in the TMA of the destination airport.
</description>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83821">
<title>Quality-of-service provision for BXIv3-based interconnection networks</title>
<link>https://uvadoc.uva.es/handle/10324/83821</link>
<description>Supercomputers (SCs) enable advanced research for a variety of scientific fields, and data centers (DCs) power our day-to-day services. These two massive systems work at scales, in terms of storage and computing power, which are not comparable to our everyday devices. As such, they require state-of-the-art technology to constantly evolve and meet our increasing demand. The interconnection network is the backbone of these systems, since it must provide efficient communication among the nodes that compose the whole system, otherwise becoming the entire system bottleneck. As multiple applications and services may use subsets of the system at the same time, interconnection networks must prevent excessive degradation for latency-sensitive applications. To this end, differentiated services are used to provide fair network access that considers bandwidth and latency requirements for each application. In this paper, we extend the switch architecture of next-generation BXI networks (hereafter called BXIv3) to incorporate arbitration tables so these networks can provide quality of service (QoS) to applications and services running on both SCs and DCs. Our proposal has been implemented in a network simulator, which models the behavior of a BXIv3 network. We have used several traffic patterns and arbitration table configurations to conduct a set of simulation experiments for the evaluation of our solution. The obtained results show that our proposal achieves accurate bandwidth allocation with differentiated latencies. Moreover, a study of memory requirements shows that our solution is quite feasible for hardware implementation.
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83792">
<title>Compact Encoding of Reified Triples using HDTr</title>
<link>https://uvadoc.uva.es/handle/10324/83792</link>
<description>Contextual information about a statement is usually represented in RDF knowledge graphs via reification: creating a fresh ‘anchor’ term that represents the statement and using it in the triples that describe it. Current approaches make the connection between the reified statement and its anchor either by extending the RDF syntax, resulting in non-compliant RDF, or via additional triples that connect the anchor with the terms of the statement, at the cost of size and complexity. This work tackles this challenge and presents HDTr, a binary serialization format for reified triples that is model-agnostic, compact, and queryable. HDTr is based on, and compatible with, the HDT format, leveraging its underlying structure to connect the reified statements with the terms that represent them. Our evaluation shows that HDTr improves compression and retrieval time of reified statements with respect to several triplestores and to the HDT serialization of different reification approaches.
</description>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83751">
<title>Towards a hierarchical approach for autotuning task-based libraries</title>
<link>https://uvadoc.uva.es/handle/10324/83751</link>
<description>This work proposes a hierarchical approach to reduce the training time of task-based routines by reusing previously obtained autotuning information. This approach has been integrated into a working prototype of Chameleon, a dense linear algebra software whose tile-based routines are executed on the available computational resources by means of a runtime system. The results show that this approach provides a high degree of scalability to the entire self-optimization process, achieving a reduction in training time of up to 80% and an appropriate selection of values for the adjustable parameters.
</description>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83749">
<title>cBiK: A Space-Efficient Data Structure for Spatial Keyword Queries</title>
<link>https://uvadoc.uva.es/handle/10324/83749</link>
<description>A vast amount of geo-referenced data is being generated by mobile devices and other sensors, increasing the importance of spatio-textual analyses on such data. Due to the large volume of data, the use of indexes to speed up the queries that facilitate such analyses is imperative. Many disk-resident indexes have been proposed for different types of spatial keyword queries, but their efficiency is harmed by their high I/O costs. In this work, we propose cBiK, the first spatio-textual index that uses compact data structures to reduce the size of the structure, hence facilitating its usage in main memory. Our experimental evaluation shows that this approach needs half the space and is more than one order of magnitude faster than a disk-resident state-of-the-art index. We also show that our approach is competitive even in a scenario where the disk-resident data structure has been warmed up to fit in main memory.
</description>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83748">
<title>HDT++: improving HDT for SPARQL triple pattern resolution</title>
<link>https://uvadoc.uva.es/handle/10324/83748</link>
<description>RDF self-indexes compress an RDF collection and provide efficient access to the data without prior decompression, via the so-called SPARQL triple patterns. HDT is one of the reference solutions in this scenario, with several applications that lower the barrier to both publication and consumption of Big Semantic Data. However, the simple design of HDT takes a compromise position between compression effectiveness and retrieval speed. In particular, it supports scan and subject-based queries, but it requires additional indexes to resolve predicate- and object-based SPARQL triple patterns. A recent variant, HDT++, improves HDT compression ratios, but it does not retain the original HDT retrieval capabilities. In this article, we extend HDT++ with additional indexes to support full SPARQL triple pattern resolution with a lower memory footprint than the original indexed HDT (called HDT-FoQ). Our evaluation shows that the resulting structure, iHDT++, requires 70–85% of the original HDT-FoQ space (and up to 48–72% for an HDT Community variant). In addition, iHDT++ shows significant performance improvements (up to one order of magnitude) for most triple pattern queries, being competitive with state-of-the-art RDF self-indexes.
</description>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83404">
<title>Bitcoin Protocol Mechanics for Economists: A Compact Reference</title>
<link>https://uvadoc.uva.es/handle/10324/83404</link>
<description>Economists increasingly engage with Bitcoin's macroeconomic and financial implications, yet many debates implicitly assume protocol properties that are rarely stated with mechanical precision. This paper offers a concise technical primer on Bitcoin protocol mechanics, aimed at providing the minimal foundations for interpreting claims about decentralization, immutability, and a credible issuance cap without trusted intermediaries. We explain how transactions encode ownership and transfer via public-key cryptography and digital signatures, why the blockchain functions as an append-only, replicated ledger, and how full nodes, wallets, and miners jointly enforce validity and compliance with the rules. We then describe Proof-of-Work as a coordination and security mechanism, clarifying how confirmations deliver probabilistic finality and why rewriting settled history is computationally prohibitive. The paper also outlines Bitcoin's deterministic issuance schedule via the coinbase reward and halving rule, and briefly situates fee revenue and second-layer protocols as the long-run basis for payments and security as block subsidies decline. Three appendices provide a didactic treatment of transactions, a compact summary of cryptography, and an overview of the mining workflow.
</description>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/83195">
<title>How well do collaboration quality estimation models generalize across authentic school contexts?</title>
<link>https://uvadoc.uva.es/handle/10324/83195</link>
<description>Multimodal learning analytics (MMLA) research has made significant progress in modelling collaboration quality for the purpose of understanding collaboration behaviour and building automated collaboration estimation models. Deploying these automated models in authentic classroom scenarios, however, remains a challenge. This paper presents findings from an evaluation of collaboration quality estimation models. We collected audio, video and log data from two different Estonian schools. These data were used in different combinations to build collaboration estimation models and then assessed across different subjects, different types of activities (collaborative-writing, group-discussion) and different schools. Our results suggest that the automated collaboration model can generalize to the context of different schools but with a 25% degradation in balanced accuracy (from 82% to 57%). Moreover, the results also indicate that multimodality brings more performance improvement in the case of group-discussion-based activities than collaborative-writing-based activities. Further, our results suggest that the video data could be an alternative for understanding collaboration in authentic settings where higher-quality audio data cannot be collected due to contextual factors. The findings have implications for building automated collaboration estimation systems to assist teachers with monitoring their collaborative classrooms.
</description>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
</rdf:RDF>
