<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel rdf:about="https://uvadoc.uva.es/handle/10324/1279">
<title>DEP24 - Capítulos de monografías</title>
<link>https://uvadoc.uva.es/handle/10324/1279</link>
<description>Dpto. Estadística e Investigación Operativa - Capítulos de monografías</description>
<items>
<rdf:Seq>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/38327"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/22919"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/22917"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/21848"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/21842"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/21840"/>
<rdf:li rdf:resource="https://uvadoc.uva.es/handle/10324/21814"/>
</rdf:Seq>
</items>
<dc:date>2026-04-11T13:42:19Z</dc:date>
</channel>
<item rdf:about="https://uvadoc.uva.es/handle/10324/38327">
<title>Robust Approaches for Fuzzy Clusterwise Regression Based on Trimming and Constraints</title>
<link>https://uvadoc.uva.es/handle/10324/38327</link>
<description>Three different approaches for robust fuzzy clusterwise regression are reviewed. They are all based on the simultaneous application of trimming and constraints. The first one follows from the joint modeling of the response and explanatory variables through a normal component fitted in each cluster. The second one assumes normally distributed error terms conditional on the explanatory variables, while the third approach is an extension of the Cluster Weighted Model. A fixed proportion of “most outlying” observations are trimmed. The use of appropriate constraints turns these problems into mathematically well-defined ones and, additionally, serves to avoid the detection of non-interesting or “spurious” linear clusters. The third proposal is especially appealing because it is able to protect against outliers in the explanatory variables, which may act as “bad leverage” points. Feasible and practical algorithms are outlined. Their performance, in terms of robustness, is illustrated in some simple simulated examples.
</description>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/22919">
<title>Classification of samples with order restricted discrimination rules. Statistical Analysis in Proteomics</title>
<link>https://uvadoc.uva.es/handle/10324/22919</link>
<description>In recent years, mass spectrometry techniques have helped proteomics to become a powerful tool for the early diagnosis of cancer, as they help to discover protein profiles specific to each pathological state. One of the questions where proteomics is giving useful practical results is that of classifying patients into one of the possible severity levels of an illness, based on some features measured on the patient. This classification is usually made using one of the many discrimination procedures available in the statistical literature. We present in this chapter recently developed restricted discriminant rules that use additional information in terms of orderings on the means, and we illustrate how to apply them to mass spectrometry data using the R package dawai. Specifically, we use proteomic prostate cancer data, and we describe all the steps needed, including data preprocessing and feature extraction, to build a discriminant rule that classifies samples into one of several disease stages, thus helping diagnosis. The restricted discriminant rules are compared with some standard classifiers that do not take the additional information into account, showing better performance in terms of error rates.
</description>
<dc:date>2015-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/22917">
<title>Some advances in constrained inference for ordered circular parameters in oscillatory systems</title>
<link>https://uvadoc.uva.es/handle/10324/22917</link>
<description>Constraints on parameters arise naturally in many applications. Statistical methods that honor the underlying constraints tend to be more powerful and result in better interpretation of the underlying scientific data. In the context of Euclidean space data, there exist over five decades of statistical literature on constrained statistical inference and at least four books on the subject (e.g. Robertson et al. 1988; Silvapulle and Sen 2005). However, it was not until recently that these methods began to be used extensively in applied research. For example, constrained statistical inference is gaining considerable interest among applied researchers in a variety of fields, such as toxicology (Peddada et al. 2007), genomics (Hoenerhoff et al. 2013; Perdivara et al. 2011; Peddada et al. 2003), epidemiology (Cao et al. 2011; Peddada et al. 2005), clinical trials (Conaway et al. 2004), or cancer trials (Conde et al. 2012, 2013).
</description>
<dc:date>2015-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/21848">
<title>Grouping Around Different Dimensional Affine Subspaces</title>
<link>https://uvadoc.uva.es/handle/10324/21848</link>
<description>Grouping around affine subspaces and other types of manifolds is receiving a lot of attention in the literature due to its interest in several fields of application. Allowing for different dimensions is needed in many applications. This work extends the TCLUST methodology to deal with the problem of grouping data around linear subspaces of different dimensions in the presence of noise. Two ways of modeling error terms in the orthogonal complement of the linear subspaces are considered.
</description>
<dc:date>2013-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/21842">
<title>Robust Fuzzy Clustering via Trimming and Constraints</title>
<link>https://uvadoc.uva.es/handle/10324/21842</link>
<description>A methodology for robust fuzzy clustering is proposed. This methodology can be widely applied in very different statistical problems given that it is based on probability likelihoods. Robustness is achieved by trimming a fixed proportion of “most outlying” observations, which are self-determined by the data set at hand. Constraints on the clusters’ scatters are also needed to get mathematically well-defined problems and to avoid the detection of non-interesting spurious clusters. The main lines of computationally feasible algorithms are provided, and some simple guidelines on how to choose the tuning parameters are briefly outlined. The proposed methodology is illustrated through two applications. The first one is aimed at clustering heterogeneous data under multivariate normal assumptions, and the second one might be useful in fuzzy clusterwise linear regression problems.
</description>
<dc:date>2016-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/21840">
<title>Fuzzy Clustering Through Robust Factor Analyzers</title>
<link>https://uvadoc.uva.es/handle/10324/21840</link>
<description>In fuzzy clustering, data elements can belong to more than one cluster, and membership levels are associated with each element to indicate the strength of the association between that data element and a particular cluster. Unfortunately, fuzzy clustering is not robust, while in real applications the data are contaminated by outliers and noise, and the assumed underlying Gaussian distributions could be unrealistic. Here we propose a robust fuzzy estimator for clustering through factor analyzers, introducing the joint usage of trimming and of constrained estimation of noise matrices in the classic maximum likelihood approach.
</description>
<dc:date>2016-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="https://uvadoc.uva.es/handle/10324/21814">
<title>Robustness and Outliers</title>
<link>https://uvadoc.uva.es/handle/10324/21814</link>
<description>Unexpected deviations from assumed models as well as the presence of certain amounts of outlying data are common in most practical statistical applications. This fact could lead to undesirable solutions when applying non-robust statistical techniques. This is often the case in cluster analysis, too. The search for homogeneous groups with large heterogeneity between them can be spoiled due to the lack of robustness of standard clustering methods. For instance, the presence of (even few) outlying observations may result in heterogeneous clusters artificially joined together or in the detection of spurious clusters merely made up of outlying observations. In this chapter we will analyze the effects of different kinds of outlying data in cluster analysis and explore several alternative methodologies designed to avoid or minimize their undesirable effects.
</description>
<dc:date>2015-01-01T00:00:00Z</dc:date>
</item>
</rdf:RDF>
