IPOL
http://www.ipol.im/feed/
IPOL Preprints — Latest public preprints from IPOL.

Incidence of the Sample Size Distribution on One-Shot Federated Learning
Marie Garin,
Gonzalo Iñaki Quintana
http://www.ipol.im/pub/pre/440/
Tue, 06 Dec 2022 13:31:44 +0100
Federated Learning (FL) is a learning paradigm in which multiple nodes collaboratively train a model by exchanging only updates or parameters. This makes it possible to keep data local, thereby enhancing privacy. Depending on the application, the number of samples held by each node can vary widely, which can impact both the training and the final performance. This work studies the impact of the per-node sample size distribution on the mean squared error (MSE) of the one-shot federated estimator. We focus on one-shot aggregation of statistical estimates made across disjoint, independent and identically distributed (i.i.d.) data sources, in the context of empirical risk minimization. In distributed learning, it is well known that for a total of $m$ nodes, each node should contain at least $m$ samples to match the performance of centralized training. In a federated scenario, this result remains true, but now applies to the mean of the per-node sample size distribution. The demo makes it possible to visualize this effect and to compare the behavior of the FESC (Federated Estimation with Statistical Correction) algorithm, a weighting scheme that depends on the local sample size, with the classical federated estimator and the centralized one, for a large collection of distributions, numbers of nodes, and feature space dimensions.

An Overview of GANet - Guided Aggregation Net for End-to-end Stereo Matching
Alvaro Gómez
http://www.ipol.im/pub/pre/441/
Tue, 29 Nov 2022 13:35:55 +0100
Guided Aggregation Net for End-to-end Stereo Matching (GANet) is a stereo matching method that uses Deep Neural Networks (DNN) to compute a disparity map from a pair of images of a scene. Like other classic and DNN stereo methods, it follows the traditional stereo steps: dense features are extracted from both images, the cost of matching the features at different disparities is organized in a Cost Volume (CV), which is regularized by aggregation and local filtering, and finally a map with minimal cost is derived from the CV. In GANet, the aggregation of the CV is done by a Semi-Global Guided Aggregation layer (SGA), which implements a differentiable approximation of the well-known Semi-Global Matching (SGM) algorithm. SGA is followed by a Local Guided Aggregation layer (LGA) that performs a local filtering. The SGA and LGA weights are generated by an auxiliary guidance subnet fed with the original reference image and its extracted features.
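As a much simplified, hedged illustration of the cost volume step shared by these pipelines (GANet's learned SGA/LGA aggregation is not reproduced here; the absolute-difference cost and the function names are our own toy choices, not GANet's code):

```python
import numpy as np

def sad_cost_volume(left, right, max_disp):
    """Build a toy cost volume: cost[d, y, x] = |left(y, x) - right(y, x - d)|.
    Invalid positions (x < d) are left at +inf."""
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, : w - d])
    return cost

def wta_disparity(cost):
    """Winner-takes-all: pick the disparity of minimal cost per pixel."""
    return np.argmin(cost, axis=0)
```

Winner-takes-all on the raw cost volume is precisely what regularization steps such as SGM, and GANet's SGA/LGA layers, improve upon in textured-less and occluded regions.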
This article presents an overview of GANet. An online demo, running on CPU, is made available.

Fixed Pattern Noise Reduction: Temporal High Pass Filter
Arnaud Barral
http://www.ipol.im/pub/pre/436/
Fri, 25 Nov 2022 09:21:14 +0100
Temporal high-pass filter methods are a family of methods for Fixed Pattern Noise (FPN) reduction. They are recursive real-time methods that apply a high-pass temporal filter to remove the FPN. FPN is a temporally coherent noise present in videos due to the non-uniform response of the sensors. It is a common problem in infrared videos and can degrade the quality of the observation. In this work we study and compare three classical temporal high-pass filter FPNR methods.

Association Rules Discovery of Deviant Events in Multivariate Time Series: An Analysis and Implementation of the SAX-ARM Algorithm
Axel Roques,
Anne Zhao
http://www.ipol.im/pub/pre/437/
Wed, 16 Nov 2022 14:44:59 +0100
In this work, we propose an open-source Python implementation of the SAX-ARM algorithm introduced by Park and Jung (2019). This algorithm efficiently mines association rules among the deviant events of multivariate time series. To do so, it combines two existing methods: the Symbolic Aggregate approXimation (SAX) of Lin et al. (2003), a symbolic representation of time series, and the Apriori algorithm of Agrawal et al. (1996), a data mining method that outputs all frequent itemsets and association rules from a transactional dataset.
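As a hedged sketch of the SAX step only (a toy re-implementation, not the reviewed code; the 4-letter alphabet and its standard Gaussian breakpoints are our illustrative choices):

```python
import numpy as np

def sax(series, n_segments, breakpoints=(-0.6745, 0.0, 0.6745)):
    """Toy SAX: z-normalize, apply Piecewise Aggregate Approximation (PAA),
    then map each segment mean to a letter via Gaussian breakpoints.
    The defaults are the standard breakpoints for a 4-letter alphabet."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()                   # z-normalization
    paa = x.reshape(n_segments, -1).mean(axis=1)   # assumes len divisible by n_segments
    symbols = np.searchsorted(breakpoints, paa)    # bin index per segment
    return "".join("abcd"[s] for s in symbols)
```

In SAX-ARM, the "deviant events" fed to Apriori correspond to the extreme symbols of such a representation (here, 'a' and 'd').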
A detailed description of the underlying principles is given along with their numerical implementation. The choice of relevant parameters is thoroughly discussed and evaluated on a public dataset on the topic of temperature and energy consumption.

Fast Chromatic Aberration Correction With 1D Filters
Thomas Eboli
http://www.ipol.im/pub/pre/443/
Sun, 13 Nov 2022 16:09:48 +0100
This article presents an implementation of the chromatic aberration correction technique of Chang et al. This method decomposes aberration correction into a cascade of two 1D filters. The first one locally sharpens the red and blue edges so that they have profiles similar to that of the green channel, which serves as a guiding image throughout the restoration. The second one shifts the corrected red and blue edges to the location of the green ones to remove the color fringes. These two successive estimates are ultimately merged into a final prediction, free of most chromatic aberrations.

Image Unprocessing: A Pipeline to Recover Raw Data from sRGB Images
Valéry Dewil
http://www.ipol.im/pub/pre/438/
Sun, 13 Nov 2022 15:50:59 +0100
Access to high-quality datasets is an essential condition for data-driven methods, as it is known that mismatches between the distributions of training and test data may cause learning-based methods to fail. This issue has led to one of the most active research subjects in learning-based image restoration. For instance, neural networks trained on unrealistic synthetic data may not generalize to real data even if they perform well on the synthetic data. This is especially problematic for image and video processing tasks, such as denoising, which are performed on raw data, since acquiring real raw datasets is not straightforward and is even impossible in some cases (acquiring a video dataset of real noise with clean ground truth, for instance). Consequently, CNNs are often trained on synthetic data. Synthesizing realistic raw data is a difficult task and requires properly inverting the image processing pipeline. This paper focuses on the backward pipeline proposed by Brooks et al. [Unprocessing images for learned raw denoising, CVPR 2019], which aims at producing raw data from sRGB images.
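The full backward pipeline of Brooks et al. inverts tone mapping, gamma, color correction, white balance and demosaicing; as a hedged sketch, here are only two of its simplest stages, with our own function names (not the reviewed code):

```python
import numpy as np

def srgb_to_linear(srgb):
    """Invert the standard sRGB gamma curve (input values in [0, 1])."""
    srgb = np.asarray(srgb, dtype=float)
    return np.where(srgb <= 0.04045, srgb / 12.92,
                    ((srgb + 0.055) / 1.055) ** 2.4)

def mosaic_rggb(linear_rgb):
    """Re-mosaic: keep one color sample per pixel to mimic an RGGB Bayer raw."""
    h, w, _ = linear_rgb.shape
    raw = np.empty((h, w))
    raw[0::2, 0::2] = linear_rgb[0::2, 0::2, 0]  # R
    raw[0::2, 1::2] = linear_rgb[0::2, 1::2, 1]  # G
    raw[1::2, 0::2] = linear_rgb[1::2, 0::2, 1]  # G
    raw[1::2, 1::2] = linear_rgb[1::2, 1::2, 2]  # B
    return raw
```

Realistic noise is then synthesized on such pseudo-raw data, where its statistics (signal-dependent, per-channel) are much simpler to model than after the forward pipeline.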
**This is an MLBriefs article, the source code has not been reviewed!**

Electron Paramagnetic Resonance Image Reconstruction with Total Variation Regularization
Rémy Abergel,
Mehdi Boussâa,
Sylvain Durand,
Yves-Michel Frapart
http://www.ipol.im/pub/pre/414/
Thu, 13 Oct 2022 23:09:45 +0200
This work focuses on the reconstruction of two- and three-dimensional images of the concentration of paramagnetic species from electron paramagnetic resonance (EPR) measurements. A direct operator, modeling how the measurements are related to the paramagnetic sample to be imaged, is derived in the continuous framework, taking into account the physical phenomena at work during the acquisition process. This direct operator is then discretized to closely account for the discrete nature of the measurements and provide an explicit link between them and the discrete image to be reconstructed. A variational inverse problem with total variation regularization is formulated, and an efficient numerical scheme is implemented to solve it. The setting of the reconstruction parameters is thoroughly studied and facilitated by the introduction of appropriate normalization factors. Moreover, an a contrario algorithm is proposed to derive the optimal resolution at which the data should be acquired. Finally, an in-depth experimental study on real EPR datasets illustrates the potential and limitations of the presented image reconstruction model.

CAEclust: A consensus of autoencoders representations for clustering
Séverine Affeldt,
Lazhar Labiod,
Mohamed Nadif
http://www.ipol.im/pub/pre/398/
Thu, 13 Oct 2022 22:49:44 +0200
The CAEclust Python package implements an original deep spectral clustering in an ensemble framework. Recently, strategies combining classical clustering approaches and deep autoencoders have been proposed, but their robustness is impeded by the settings of the deep network hyperparameters. We alleviate this issue with a consensus solution that hinges on the fusion of multiple deep autoencoder representations and spectral clustering. CAEclust offers an efficient merging of the encodings by using the landmarks strategy, and demonstrates its performance and robustness on benchmark data. CAEclust makes it possible to reproduce our experiments and explore novel datasets.

Detection and Interpretation of Change in Registered Satellite Image Time Series
Tristan Dagobert,
Rafael Grompone von Gioi,
Carlo de Franchis,
Charles Hessel
http://www.ipol.im/pub/pre/416/
Mon, 11 Jul 2022 13:34:19 +0200
Time series of satellite images are now massively available thanks to the existence of several constellations of recurrent satellites. We propose a method for detecting and measuring the duration of changes in such series. This approach is intended to be generic and independent of the type of satellite used, whether band-limited or multispectral. It is based on a global analysis of the sequence. The statistical detection method is applied to a residual sequence computed from backward and forward novelty filters applied to all images in the series. Significant changes are detected with a guarantee on their number of false alarms (NFA). To establish the efficiency of the method, we have created an open database of 28 sequences of 20 images acquired by the Sentinel-2 satellite in different regions of the world. We obtain satisfactory results which are consistent with the visual observations.

Binary Shape Vectorization by Affine Scale-space
Yuchen He,
Sung Ha Kang,
Jean-Michel Morel
http://www.ipol.im/pub/pre/401/
Thu, 28 Apr 2022 19:59:37 +0200
Binary shapes, or silhouettes, are building elements of logos, graphic symbols and fonts, which require various forms of geometric editing without compromising the resolution. In this paper, we present an effective silhouette vectorization algorithm that extracts the outline of a 2D shape from a raster binary image and converts it to a combination of cubic Bézier polygons and perfect circles. Compared to state-of-the-art image vectorization software, this algorithm demonstrates a superior reduction in the number of control points while maintaining high accuracy.

Ant Colony Optimization for Estimating Pith Position on Images of Tree Log Ends
Rémi Decelle,
Phuc Ngo,
Isabelle Debled-Rennesson,
Frédéric Mothe,
Fleur Longuetaud
http://www.ipol.im/pub/pre/338/
Sat, 12 Jun 2021 00:46:53 +0200
The pith location is one of the most important features to detect in order to determine the quality of wood. Indeed, it makes it possible to extract other important features. In this paper, we address the problem of pith detection on images of wood cross-sections.
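As a hedged, heavily simplified sketch of the ant colony idea (the pheromone-guided sampling loop only; the `fitness` map stands in for the paper's image-based score, and all names and parameters here are our own illustrative choices, not the reviewed method):

```python
import numpy as np

def aco_point_estimate(fitness, n_ants=50, n_iters=30, rho=0.1, seed=0):
    """Toy ant colony search for the cell maximizing a fitness map.
    Ants sample cells proportionally to pheromone, deposit pheromone
    proportionally to the fitness found there, and rho is the
    evaporation rate; the best-marked cell is returned."""
    rng = np.random.default_rng(seed)
    h, w = fitness.shape
    pher = np.ones(h * w)
    for _ in range(n_iters):
        p = pher / pher.sum()
        ants = rng.choice(h * w, size=n_ants, p=p)    # sample cells ~ pheromone
        pher *= (1.0 - rho)                           # evaporation
        np.add.at(pher, ants, fitness.ravel()[ants])  # deposit ~ local fitness
    best = int(np.argmax(pher))
    return divmod(best, w)  # (row, col) estimate
```

On real log-end photographs, the fitness would be derived from local texture orientations, which converge toward the pith along the growth rings.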
Taking such images can be done at little cost and with a high resolution. However, contrary to computed tomography images, digital images exhibit disturbances such as sawing marks, dirt or ambient light variations, which make the image analysis difficult. Few studies have focused on such images, and those studies apply some prior segmentation or cropping before the detection. We propose an approach for estimating the pith location without any such preprocessing. Our method is based on an ant colony optimization algorithm, a probabilistic approach for solving this task. We validate our algorithm on images of Douglas fir captured after harvesting. The efficiency of this algorithm has been demonstrated by performance comparisons with other approaches. Experiments show an accurate and fast estimation, and the algorithm could be used in real time, in a sawmill environment or in the forest, with a smartphone.

Cross-comparison of the Performance of Sequential Summed Area Table and Box Filter Algorithms with respect to C/C++ Compilers
Ali Ozturk,
Ibrahim Cayiroglu
http://www.ipol.im/pub/pre/268/
Mon, 22 Jul 2019 00:03:34 +0200
The summed area table algorithm has been used to accelerate some computer vision and signal processing algorithms. In this study, the performance of sequential summed area table algorithms, and of the box filter algorithm with and without a summed area table, is examined while taking into account the effect of C/C++ compilers. Three variants of the sequential summed area table algorithm are included in the study. Loop-invariant code motion and loop unrolling optimization techniques are applied to one of them. The performance of the GNU Compiler Collection (GCC) and the Intel C/C++ Compiler (ICC) on both Windows and Ubuntu, and of the Visual C++ (CL) compiler on Windows, is compared using the summed area table and box filter algorithms. The results of the study reveal that the Intel C/C++ Compiler (ICC) performs best on Ubuntu for the sequential summed area table and box filter algorithms. The summed area table calculation that uses the Viola-Jones equation with a scalar accumulator outperforms the other summed area table algorithms.
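To illustrate the data structure being benchmarked (a hedged Python sketch of the technique, not the C/C++ code under study), a summed area table turns any box sum into four table lookups, independently of the window size:

```python
import numpy as np

def summed_area_table(img):
    """S[y, x] = sum of img[0..y, 0..x], built with two cumulative sums."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(sat, y0, x0, y1, x1):
    """Sum over the inclusive window [y0..y1] x [x0..x1] in O(1) from
    four table lookups (assumes y0 > 0 and x0 > 0 for brevity)."""
    return (sat[y1, x1] - sat[y0 - 1, x1]
            - sat[y1, x0 - 1] + sat[y0 - 1, x0 - 1])
```

This O(1)-per-window property is why the box filter with a summed area table can outperform the naive sliding sum for all but the smallest windows, which is precisely the trade-off the compilers are compared on.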