<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<title>DEP41 - Journal articles</title>
<link href="https://uvadoc.uva.es/handle/10324/1335" rel="alternate"/>
<subtitle>Dpto. Informática (Arquitectura y Tecnología de Computadores, Ciencias de la Computación e Inteligencia ...) - Journal articles</subtitle>
<id>https://uvadoc.uva.es/handle/10324/1335</id>
<updated>2026-04-08T20:59:40Z</updated>
<dc:date>2026-04-08T20:59:40Z</dc:date>
<entry>
<title>Study of historical evacuation drill data combining regression analysis and dimensionless numbers</title>
<link href="https://uvadoc.uva.es/handle/10324/83866" rel="alternate"/>
<author>
<name>Miñambres Del Moral, María Dolores</name>
</author>
<author>
<name>Llanos Ferraris, Diego Rafael</name>
</author>
<author>
<name>Gento Municio, Ángel Manuel</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83866</id>
<updated>2026-03-30T19:01:38Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">The time needed to evacuate a building depends on many factors. Some are related to people’s behavior, while others are related to the physical characteristics of the building. This paper analyzes historical data from 47 evacuation drills in 15 different university buildings, both academic and residential, involving more than 19 000 persons. We propose studying these data using dimensionless analysis and statistical regression in order to predict the ratio between exit time and the number of people evacuated. The results show that this approach could be a useful tool for comparing buildings of this type, and that it represents a promising research topic that can also be extended to other types of buildings.
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Operators for Data Redistribution: Applications to the STL Library and RayTracing Algorithm</title>
<link href="https://uvadoc.uva.es/handle/10324/83865" rel="alternate"/>
<author>
<name>Moreton Fernández, Ana</name>
</author>
<author>
<name>Torres de la Sierra, Yuri</name>
</author>
<author>
<name>González Escribano, Arturo</name>
</author>
<author>
<name>Llanos Ferraris, Diego Rafael</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83865</id>
<updated>2026-03-30T19:01:36Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">In distributed-memory systems, data redistributions are operations that change the ownership and location of a selected subset of a data structure at runtime. They improve the performance of parallel algorithms that operate on changing or partial domains by balancing the workload among the active processes. Manually redistributing data, however, is a cumbersome and error-prone task. In this paper, we present a method based on four combinable operators to redistribute partial domains selected by the programmer at runtime in an efficient and simple way. The operators abstract away the data-redistribution implementation details, such as the new mapping, relocation, and communication of the selected data. We also apply the proposed operators to a RayTracing application and to a significant part of the STL (C++ Standard Template Library). Our experimental results show that our approach automatically generates a good load balance, which leads to performance improvements for generic data-distribution policies. It does not introduce significant performance overheads compared with tailored data redistributions programmed directly with MPI (Message Passing Interface), while greatly reducing the code-development effort.
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extending and validating a theoretical model to predict the effectiveness of building evacuations</title>
<link href="https://uvadoc.uva.es/handle/10324/83864" rel="alternate"/>
<author>
<name>Miñambres Del Moral, María Dolores</name>
</author>
<author>
<name>Llanos Ferraris, Diego Rafael</name>
</author>
<author>
<name>Gento Municio, Ángel Manuel</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83864</id>
<updated>2026-03-30T19:01:34Z</updated>
<published>2022-01-01T00:00:00Z</published>
<summary type="text">Predicting the effectiveness of building evacuations is, in the general case, a very difficult task. In a previous work, the historical results of 47 evacuation drills in 15 different university buildings, both academic and residential, involving more than 19 000 persons, were analyzed, and a method based on dimensional analysis and statistical regression was proposed to estimate the exit time in case of evacuation. By comparing this estimated exit time with the real values obtained in evacuation drills, more informed decisions can be made on whether to invest in more training and/or a preventive culture among the occupants, or in structural improvements to the buildings. In this work, we propose a refinement of the method to calculate expected exit times, which leads to an even better fit between predictions and real-world results, and we use this refined model to predict the results of evacuations of a new building, whose use and characteristics differ from those previously studied, and whose data were provided by other authors in the literature. We show that there is a correlation between the published results and the predictions generated by our model, from both a quantitative and a qualitative point of view.
</summary>
<dc:date>2022-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>EPSILOD: efficient parallel skeleton for generic iterative stencil computations in distributed GPUs</title>
<link href="https://uvadoc.uva.es/handle/10324/83863" rel="alternate"/>
<author>
<name>Castro Caballero, Manuel De</name>
</author>
<author>
<name>Santamaria Valenzuela, Inmaculada</name>
</author>
<author>
<name>Torres de la Sierra, Yuri</name>
</author>
<author>
<name>González Escribano, Arturo</name>
</author>
<author>
<name>Llanos Ferraris, Diego Rafael</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83863</id>
<updated>2026-03-30T19:01:32Z</updated>
<published>2023-01-01T00:00:00Z</published>
<summary type="text">Iterative stencil computations are widely used in numerical simulations. They present a high degree of parallelism, high locality, and mostly coalesced memory-access patterns. Therefore, GPUs are good candidates to speed up their computation. However, developing stencil programs that can work with huge grids on distributed systems with multiple GPUs is not straightforward, since it requires solving problems related to partitioning the grid across nodes and devices, and to synchronization and data movement across remote GPUs. In this work, we present EPSILOD, a high-productivity parallel programming skeleton for iterative stencil computations on distributed multi-GPU systems, of the same or different vendors, that supports any type of n-dimensional geometric stencil of any order. It uses an abstract specification of the stencil pattern (neighbors and weights) to internally derive the data partition, synchronizations, and communications. Computation is split to better overlap with communications. This paper describes the underlying architecture of EPSILOD and its main components, and presents an experimental evaluation, including a comparison with another state-of-the-art solution, to show the benefits of our approach. The experimental results show that EPSILOD is faster and exhibits good strong and weak scalability on platforms with both homogeneous and heterogeneous types of GPU.
</summary>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>UVaFTLE: Lagrangian finite time Lyapunov exponent extraction for fluid dynamic applications</title>
<link href="https://uvadoc.uva.es/handle/10324/83862" rel="alternate"/>
<author>
<name>Carratalá-Sáez, Rocío</name>
</author>
<author>
<name>Torres de la Sierra, Yuri</name>
</author>
<author>
<name>Sierra Pallarés, José Benito</name>
</author>
<author>
<name>López Huguet, Sergio</name>
</author>
<author>
<name>Llanos Ferraris, Diego Rafael</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83862</id>
<updated>2026-03-30T19:01:30Z</updated>
<published>2023-01-01T00:00:00Z</published>
<summary type="text">The determination of Lagrangian Coherent Structures (LCS) is becoming very important in several disciplines, including cardiovascular engineering, aerodynamics, and geophysical fluid dynamics. From the computational point of view, the extraction of LCS consists of two main steps: the flowmap computation and the resolution of the Finite Time Lyapunov Exponents (FTLE). In this work, we focus on the design, implementation, and parallelization of the FTLE resolution. We offer an in-depth analysis of this procedure, as well as an open-source C implementation (UVaFTLE) parallelized using OpenMP directives to attain fair parallel efficiency in shared-memory environments. We have also implemented CUDA kernels that allow UVaFTLE to leverage as many NVIDIA GPU devices as desired in order to reach the best parallel efficiency. For the sake of reproducibility, and in order to contribute to open science, our code is publicly available on GitHub. Moreover, we also provide Docker containers to ease its usage.
</summary>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Supporting efficient overlapping of host-device operations for heterogeneous programming with CtrlEvents</title>
<link href="https://uvadoc.uva.es/handle/10324/83860" rel="alternate"/>
<author>
<name>Torres de la Sierra, Yuri</name>
</author>
<author>
<name>Andújar Muñoz, Francisco José</name>
</author>
<author>
<name>González Escribano, Arturo</name>
</author>
<author>
<name>Llanos Ferraris, Diego Rafael</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83860</id>
<updated>2026-03-30T19:01:30Z</updated>
<published>2023-01-01T00:00:00Z</published>
<summary type="text">Heterogeneous systems with several kinds of devices, such as multi-core CPUs, GPUs, and FPGAs, are now commonplace. Exploiting all these devices with device-oriented programming models, such as CUDA or OpenCL, requires expertise and knowledge about the underlying hardware to tailor the application to each specific device, thus degrading performance portability. Higher-level proposals simplify the programming of these devices, but their current implementations do not efficiently support problems that include frequent bursts of computation and communication, or input/output operations. In this work we present CtrlEvents, a new heterogeneous runtime solution that automatically overlaps computation and communication whenever possible, simplifying and improving the efficiency of data-dependency analysis and the coordination of both device computations and host tasks that include generic I/O operations. Our solution outperforms other state-of-the-art implementations in most situations, presenting a good balance between portability, programmability, and efficiency.
</summary>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quality-of-service provision for BXIv3-based interconnection networks</title>
<link href="https://uvadoc.uva.es/handle/10324/83821" rel="alternate"/>
<author>
<name>de la Rosa, Miguel Sánchez</name>
</author>
<author>
<name>Gomez-Lopez, Gabriel</name>
</author>
<author>
<name>Andújar, Francisco J.</name>
</author>
<author>
<name>Escudero-Sahuquillo, Jesús</name>
</author>
<author>
<name>Sánchez, José L.</name>
</author>
<author>
<name>Alfaro-Cortés, Francisco J.</name>
</author>
<author>
<name>Lagadec, Pierre-Axel</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83821</id>
<updated>2026-03-25T20:01:27Z</updated>
<published>2025-01-01T00:00:00Z</published>
<summary type="text">Supercomputers (SCs) enable advanced research in a variety of scientific fields, and data centers (DCs) power our day-to-day services. These two kinds of massive systems work at scales, in terms of storage and computing power, that are not comparable to our everyday devices. As such, they require state-of-the-art technology to constantly evolve and meet our increasing demand. The interconnection network is the backbone of these systems, since it must provide efficient communication among the nodes that compose the whole system; otherwise, it becomes the bottleneck of the entire system. As multiple applications and services may use subsets of the system at the same time, interconnection networks must prevent excessive degradation for latency-sensitive applications. To this end, differentiated services are used to provide fair network access that considers the bandwidth and latency requirements of each application. In this paper, we extend the switch architecture of next-generation BXI networks (hereafter called BXIv3) to incorporate arbitration tables, so that these networks can provide quality of service (QoS) to applications and services running on both SCs and DCs. Our proposal has been implemented in a network simulator that models the behavior of a BXIv3 network. We have used several traffic patterns and arbitration-table configurations to conduct a set of simulation experiments for the evaluation of our solution. The results show that our proposal achieves accurate bandwidth allocation with differentiated latencies. Moreover, a study of memory requirements shows that our solution is quite feasible for hardware implementation.
</summary>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a hierarchical approach for autotuning task-based libraries</title>
<link href="https://uvadoc.uva.es/handle/10324/83751" rel="alternate"/>
<author>
<name>Cámara Moreno, Jesús</name>
</author>
<author>
<name>Cuenca Muñoz, Javier</name>
</author>
<author>
<name>Boratto, Murilo</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83751</id>
<updated>2026-03-23T20:01:00Z</updated>
<published>2026-01-01T00:00:00Z</published>
<summary type="text">This work proposes a hierarchical approach to reduce the training time of task-based routines by reusing previously obtained autotuning information. This approach has been integrated into a working prototype of Chameleon, a dense linear algebra software whose tile-based routines are executed on the available computational resources by means of a runtime system. The results show that this approach provides a high degree of scalability to the entire self-optimization process, achieving a reduction in training time of up to 80% and an appropriate selection of values for the adjustable parameters.
</summary>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>cBiK: A Space-Efficient Data Structure for Spatial Keyword Queries</title>
<link href="https://uvadoc.uva.es/handle/10324/83749" rel="alternate"/>
<author>
<name>Sanjuan-Contreras, Carlos E.</name>
</author>
<author>
<name>Retamal, Gilberto Gutierrez</name>
</author>
<author>
<name>Martínez Prieto, Miguel Angel</name>
</author>
<author>
<name>Seco, Diego</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83749</id>
<updated>2026-03-23T20:00:59Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A vast amount of geo-referenced data is being generated by mobile devices and other sensors, increasing the importance of spatio-textual analyses on such data. Due to the large volume of data, the use of indexes to speed up the queries that facilitate such analyses is imperative. Many disk-resident indexes have been proposed for different types of spatial keyword queries, but their efficiency is harmed by their high I/O costs. In this work, we propose cBiK, the first spatio-textual index that uses compact data structures to reduce the size of the structure, hence facilitating its usage in main memory. Our experimental evaluation shows that this approach needs half the space and is more than one order of magnitude faster than a disk-resident state-of-the-art index. We also show that our approach remains competitive even in a scenario where the disk-resident data structure is warmed up to fit in main memory.
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>HDT++: improving HDT for SPARQL triple pattern resolution</title>
<link href="https://uvadoc.uva.es/handle/10324/83748" rel="alternate"/>
<author>
<name>Hernández Illera, Antonio</name>
</author>
<author>
<name>Martínez Prieto, Miguel Angel</name>
</author>
<author>
<name>Fernández García, Javier David</name>
</author>
<author>
<name>Fariña, Antonio</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83748</id>
<updated>2026-03-23T20:00:57Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">RDF self-indexes compress an RDF collection and provide efficient access to the data without prior decompression, via the so-called SPARQL triple patterns. HDT is one of the reference solutions in this scenario, with several applications that lower the barrier to both publication and consumption of Big Semantic Data. However, the simple design of HDT takes a compromise position between compression effectiveness and retrieval speed: it supports scan and subject-based queries, but it requires additional indexes to resolve predicate- and object-based SPARQL triple patterns. A recent variant, HDT++, improves HDT compression ratios, but it does not retain the original HDT retrieval capabilities. In this article, we extend HDT++ with additional indexes to support full SPARQL triple pattern resolution with a lower memory footprint than the original indexed HDT (called HDT-FoQ). Our evaluation shows that the resulting structure, iHDT++, requires 70–85% of the original HDT-FoQ space (and as little as 48–72% for an HDT Community variant). In addition, iHDT++ shows significant performance improvements (of up to one order of magnitude) for most triple pattern queries, being competitive with state-of-the-art RDF self-indexes.
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How well do collaboration quality estimation models generalize across authentic school contexts?</title>
<link href="https://uvadoc.uva.es/handle/10324/83195" rel="alternate"/>
<author>
<name>Chejara, Pankaj</name>
</author>
<author>
<name>Kasepalu, Reet</name>
</author>
<author>
<name>Prieto Santos, Luis Pablo</name>
</author>
<author>
<name>Rodríguez Triana, María Jesús</name>
</author>
<author>
<name>Ruiz Calleja, Adolfo</name>
</author>
<author>
<name>Schneider, Bertrand</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83195</id>
<updated>2026-02-27T20:01:04Z</updated>
<published>2023-01-01T00:00:00Z</published>
<summary type="text">Multimodal learning analytics (MMLA) research has made significant progress in modelling collaboration quality for the purpose of understanding collaboration behaviour and building automated collaboration estimation models. Deploying these automated models in authentic classroom scenarios, however, remains a challenge. This paper presents findings from an evaluation of collaboration quality estimation models. We collected audio, video and log data from two different Estonian schools. These data were used in different combinations to build collaboration estimation models, which were then assessed across different subjects, different types of activities (collaborative writing, group discussion) and different schools. Our results suggest that the automated collaboration model can generalize to the context of a different school, but with a degradation in balanced accuracy of 25 percentage points (from 82% to 57%). Moreover, the results also indicate that multimodality brings more performance improvement for group-discussion-based activities than for collaborative-writing-based activities. Further, our results suggest that video data could be an alternative for understanding collaboration in authentic settings where higher-quality audio data cannot be collected due to contextual factors. The findings have implications for building automated collaboration estimation systems to assist teachers with monitoring their collaborative classrooms.
</summary>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Classroom data collection for teachers’ data-informed practice</title>
<link href="https://uvadoc.uva.es/handle/10324/83194" rel="alternate"/>
<author>
<name>Saar, Merike</name>
</author>
<author>
<name>Prieto, Luis P.</name>
</author>
<author>
<name>Rodríguez Triana, María Jesús</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83194</id>
<updated>2026-02-27T20:01:02Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Research indicates that data-informed practice helps teachers change their teaching and promotes teacher professional development (TPD). Although educational data are often collected from digital spaces, in-action evidence from physical spaces is seldom gathered, providing an incomplete view of the classroom reality. Also, most learning analytics tools focus on learners and do not explicitly collect or analyse teaching data. To support teacher-led inquiries in TPD, the authors’ Design-Based Research explores the feasibility and effects of teachers actively collecting, with the help of technology, data about their classroom practice, and the possible impact of such data on their own teaching. Based on an online survey (N = 94), prior research literature, and feedback from teachers (N = 11), the authors demonstrate the feasibility of such data collection and suggest design principles for classroom data-collection tools: besides usability and ease of use, they also detected interest in customisation, in triggering teacher interest, and in the inclusion of teaching data.
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing human-centered learning analytics and artificial intelligence in education solutions: a systematic literature review</title>
<link href="https://uvadoc.uva.es/handle/10324/83192" rel="alternate"/>
<author>
<name>Topali, Paraskevi</name>
</author>
<author>
<name>Ortega-Arranz, Alejandro</name>
</author>
<author>
<name>Rodríguez Triana, María Jesús</name>
</author>
<author>
<name>Er, Erkan</name>
</author>
<author>
<name>Khalil, Mohammad</name>
</author>
<author>
<name>Akçapınar, Gökhan</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83192</id>
<updated>2026-02-27T20:00:59Z</updated>
<published>2024-01-01T00:00:00Z</published>
<summary type="text">Recent advances in educational technology have enabled the development of solutions that collect and analyse data from learning scenarios to inform decision-making processes. Research fields like Learning Analytics (LA) and Artificial Intelligence (AI) aim at supporting teaching and learning by using such solutions. However, their adoption in authentic settings is still limited, for reasons that include ignoring the stakeholders' needs, a lack of pedagogical contextualisation, and low trust in new technologies. Thus, the research fields of Human-Centered LA (HCLA) and Human-Centered AI (HCAI) recently emerged, aiming at the active involvement of stakeholders in the creation of such proposals. This paper presents a systematic literature review of 47 empirical research studies on the topic. The results show that more than two-thirds of the papers involve stakeholders in the design of the solutions, while fewer papers involved them during ideation and prototyping, and the majority do not report any evaluation. Interestingly, while multiple techniques were used to collect data (mainly interviews, focus groups, and workshops), few papers explicitly mentioned the adoption of existing HC design guidelines. Further evidence is needed to show the real impact of HCLA/HCAI approaches (e.g., in terms of user satisfaction and adoption).
</summary>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Analytics for Professional and Workplace Learning: A Literature Review</title>
<link href="https://uvadoc.uva.es/handle/10324/83190" rel="alternate"/>
<author>
<name>Ruiz-Calleja, Adolfo</name>
</author>
<author>
<name>Prieto, Luis P.</name>
</author>
<author>
<name>Ley, Tobias</name>
</author>
<author>
<name>Rodriguez-Triana, Maria Jesus</name>
</author>
<author>
<name>Dennerlein, Sebastian</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83190</id>
<updated>2026-02-27T20:00:58Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Despite the ubiquity of learning in workplace and professional settings, the learning analytics (LA) community has paid significant attention to such settings only recently. This may be due to the focus on researching formal learning, as workplace learning is often informal, hard to grasp, and not unequivocally defined. This article summarizes the state of the art of workplace learning analytics (WPLA), extracted from a two-iteration systematic literature review. Our in-depth analysis of 52 existing proposals not only provides a descriptive view of the field, but also reflects on researcher conceptions of learning and their influence on the design, analytics, and technology choices made in this area. We also discuss the characteristics of workplace learning that make WPLA proposals different from LA in formal education contexts and the challenges resulting from this. We found that WPLA is gaining momentum, especially in some fields, like healthcare and education. The focus on theory is generally a positive feature in WPLA, but we encourage a stronger focus on assessing the impact of WPLA in realistic settings.
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding teacher design practices for digital inquiry–based science learning: the case of Go-Lab</title>
<link href="https://uvadoc.uva.es/handle/10324/83188" rel="alternate"/>
<author>
<name>de Jong, Ton</name>
</author>
<author>
<name>Gillet, Denis</name>
</author>
<author>
<name>Rodríguez-Triana, María Jesús</name>
</author>
<author>
<name>Hovardas, Tasos</name>
</author>
<author>
<name>Dikke, Diana</name>
</author>
<author>
<name>Doran, Rosa</name>
</author>
<author>
<name>Dziabenko, Olga</name>
</author>
<author>
<name>Koslowsky, Jens</name>
</author>
<author>
<name>Korventausta, Miikka</name>
</author>
<author>
<name>Law, Effie</name>
</author>
<author>
<name>Pedaste, Margus</name>
</author>
<author>
<name>Tasiopoulou, Evita</name>
</author>
<author>
<name>Vidal, Gérard</name>
</author>
<author>
<name>Zacharia, Zacharias C.</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83188</id>
<updated>2026-02-27T20:00:57Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Designing and implementing online or digital learning material is a demanding task for teachers. This is even more the case when this material is used for more engaged forms of learning, such as inquiry learning. In this article, we give an informed account of Go-Lab, an ecosystem that supports teachers in creating Inquiry Learning Spaces (ILSs). These ILSs are built around STEM-related online laboratories. Within the Go-Lab ecosystem, teachers can combine these online laboratories with multimedia material and learning apps, which are small applications that support learners in their inquiry learning process. The Go-Lab ecosystem offers teachers ready-made structures, such as a standard inquiry cycle, alternative scenarios or complete ILSs that can be used as they are, but it also allows teachers to configure these structures to create personalized ILSs. For this article, we analyzed data on the design process and structure of 2414 ILSs that were (co)created by teachers and that our usage data suggest have been used in classrooms. Our data show that teachers prefer to start their design from empty templates instead of more domain-related elements; that the makeup of the design team (a single teacher, a group of collaborating teachers, or a mix of teachers and project members) influences key design process characteristics such as time spent designing the ILS and number of actions involved; that the characteristics of the resulting ILSs also depend on the type of design team; and that ILSs that are openly shared (i.e., published in a public repository) have different characteristics than those that are kept private.
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Social practices in teacher knowledge creation and innovation adoption: a large-scale study in an online instructional design community for inquiry learning</title>
<link href="https://uvadoc.uva.es/handle/10324/83187" rel="alternate"/>
<author>
<name>Rodríguez Triana, María Jesús</name>
</author>
<author>
<name>Prieto Santos, Luis Pablo</name>
</author>
<author>
<name>Ley, Tobias</name>
</author>
<author>
<name>de Jong, Ton</name>
</author>
<author>
<name>Gillet, Denis</name>
</author>
<id>https://uvadoc.uva.es/handle/10324/83187</id>
<updated>2026-02-27T20:00:56Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Social practices are assumed to play an important role in the evolution of new teaching and learning methods. Teachers internalize knowledge developed in their communities through interactions with peers and experts while solving problems or co-creating materials. However, these social practices and their influence on teachers’ adoption of new pedagogical practices are notoriously hard to study, given their implicit and informal nature. In this paper, we apply the Knowledge Appropriation Model (KAM) to trace how different social practices relate to the implementation of pedagogical innovations in the classroom, through the analysis of more than 40,000 learning designs created within Graasp, an online authoring tool to support inquiry-based learning, used by more than 35,000 teachers. Our results show how different practices of knowledge appropriation, maturation and scaffolding seem to be related, to a varying degree, to teachers’ increased classroom implementation of learning designs. Our study also provides insights into how we can use traces from digital co-creation platforms to better understand the social dimension of professional learning, knowledge creation and the adoption of new practices.
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
</feed>
