We present an algorithm and its implementation for computing the entire Hasse diagram of the flow complex of point cloud data in Euclidean space. Known algorithms for computing the flow complex in two and three dimensions compute the geometric realization of the flow complex and need to compute the Delaunay triangulation of the point cloud data first. Our algorithm computes less information, namely only the Hasse diagram of the flow complex, augmented with enough geometric information to allow the same topological multi-scale analysis of point cloud data as with alpha shapes. The algorithm avoids the explicit computation of the Delaunay triangulation and can be parallelized. We have implemented a parallel version of the algorithm and present experimental results, also for medium dimensions beyond three.
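As a minimal illustrative sketch (not the authors' implementation): a Hasse diagram whose cells carry a scalar geometric value, from which sublevel complexes can be extracted for multi-scale analysis in the spirit of alpha shapes. The cell names and value semantics are hypothetical.

```python
# Sketch only: Hasse diagram with cells augmented by a scalar value
# (e.g. a squared radius), supporting sublevel extraction.
class HasseDiagram:
    def __init__(self):
        self.value = {}    # cell id -> geometric value
        self.covers = {}   # cell id -> set of ids of the cells it covers

    def add_cell(self, cid, value, faces=()):
        self.value[cid] = value
        self.covers[cid] = set(faces)

    def sublevel(self, t):
        """Cells with value at most t, with inherited cover relations."""
        keep = {c for c, v in self.value.items() if v <= t}
        return {c: self.covers[c] & keep for c in keep}

hd = HasseDiagram()
hd.add_cell("v0", 0.0)
hd.add_cell("v1", 0.0)
hd.add_cell("e01", 1.5, faces=("v0", "v1"))
print(sorted(hd.sublevel(1.0)))   # ['v0', 'v1']
print(sorted(hd.sublevel(2.0)))   # ['e01', 'v0', 'v1']
```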

The Cosmic Web is the fundamental spatial organization of matter on scales from a few up to a hundred Megaparsec, scales at which the Universe still resides in a state of moderate dynamical evolution. Galaxies, intergalactic gas and dark matter exist in a wispy weblike spatial arrangement consisting of dense compact clusters, elongated filaments, and sheetlike walls, amidst large near-empty void regions.

While the complex intricate structure of the cosmic web contains a wealth of cosmological information, its quantification has remained a major challenge. In this lecture, we focus on our work on the dynamical evolution and the topological structure of the cosmic web.

Hidding et al. have found that tessellations play a key role in the solution of the Burgers' equation describing the formation of cosmic structure as a result of the gravitational growth of initially small perturbations. The adhesion approximation, a well-known model of the first stages of evolution, extends the first-order Zeldovich approximation by including an artificial viscosity to model the self-gravity of emerging weblike structures. The solution is found by determining the convex hull of the initial velocity potential. The resulting structure is the Delaunay tessellation in Lagrangian space. The emerging structure in Eulerian space is dictated by its dual, the corresponding Voronoi tessellation. This, in turn, provides a very efficient and insightful description of the dependence of cosmic web features on the primordial spectrum of density perturbations.
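A hedged one-dimensional toy of the convex-hull construction (illustrative only, not Hidding et al.'s method): in 1D, Lagrangian points lying on the lower convex hull of the graph of the potential remain in single-stream regions, while points strictly above the hull have been swept into shocks (clusters). The toy potential below is an arbitrary choice.

```python
# Lower convex hull of points sorted by q (Andrew's monotone chain).
def lower_hull(points):
    hull = []
    for p in sorted(points):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop while the middle point lies on or above the chord (non-convex turn)
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# toy potential with a bump: the bump point drops off the lower hull
qs = [0.0, 1.0, 2.0, 3.0, 4.0]
phi = [0.0, -0.2, 0.5, -0.2, 0.0]
hull = lower_hull(list(zip(qs, phi)))
shocked = [q for q, p in zip(qs, phi) if (q, p) not in hull]
print(shocked)   # [2.0]
```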

In a related project, we focus on the topology of the resulting mass distribution. Pranav et al. have been assessing the complex topology of the hierarchical mass distribution by a systematic study of the persistence properties of a range of point distributions that model aspects of the weblike hierarchical cosmological patterns seen in the distribution of galaxies. Both the persistence diagrams and the resulting Betti number characteristics provide new insights into the connectivity of the mass distribution as a function of cosmology.
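A hedged, minimal illustration of the kind of persistence computation underlying such analyses (not Pranav et al.'s pipeline): 0-dimensional persistence of a graph filtration via union-find. Edges are processed in order of increasing filtration value, and each merge kills the younger component (the "elder rule").

```python
def zero_dim_persistence(birth, edges):
    """birth: vertex -> birth value; edges: list of (value, u, v)."""
    parent = {v: v for v in birth}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    pairs = []
    for val, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                        # edge creates a cycle, no merge
        # the component born later dies at this edge
        young, old = (ru, rv) if birth[ru] >= birth[rv] else (rv, ru)
        pairs.append((birth[young], val))
        parent[young] = old
    return pairs

births = {"a": 0.0, "b": 0.1, "c": 0.3}
dgm = zero_dim_persistence(births, [(0.5, "a", "b"), (0.9, "b", "c")])
print(dgm)   # [(0.1, 0.5), (0.3, 0.9)]
```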

We design and implement polynomial-time algorithms for approximating the volume and computing the exact edge-skeleton of polytopes given by optimization oracles, which generalize algorithms that solve linear programs over the polytope. Our motivation is twofold. On the one hand, Minkowski sums of convex polytopes, extended to the case of alternating sums where we allow polytopes to be subtracted provided the final sum is a convex polytope. On the other hand, secondary, resultant and discriminant Newton polytopes. For both classes, optimization oracles are naturally and easily available, whereas it is not straightforward to construct the more commonly employed membership oracles. For resultant polytopes, optimization oracles offer the most efficient representation. The main algorithmic step is to obtain efficient separation oracles for these polytopes, which is reduced to a feasibility problem in the polar dual polytope. Our first contribution is an implementation of Las Vegas algorithms based on random walks, which furthermore provide a general linear programming solver. Our experimental results are quite satisfactory, computing the optimal solution in up to 10 dimensions within seconds. Second, we offer the first total polynomial-time algorithm for constructing the edge skeleton of the polytope, when we are also given a superset of the edge directions. This question has also been studied in the framework of integer convex optimization. Third, we design and implement polynomial-time Monte Carlo algorithms for approximating the volume, again using random walks. In the above setting of optimization oracles, our prototype implementation serves to study the tradeoff between speed and accuracy. If a membership oracle is given, our code is significantly faster and more accurate, handling polytopes in about 10 dimensions in the order of minutes. Our C++ software is publicly available and serves as a proof of concept of the use of oracles.
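A hedged sketch of why optimization oracles are "naturally available" for Minkowski sums: an oracle maximizing the inner product with a direction c over a V-polytope, and the observation that the oracle for a Minkowski sum A + B is simply the sum of the two oracles' answers. The example polytopes are arbitrary.

```python
def opt_oracle(vertices, c):
    """Return a vertex of conv(vertices) maximizing the inner product with c."""
    return max(vertices, key=lambda v: sum(ci * vi for ci, vi in zip(c, v)))

def minkowski_opt_oracle(verts_a, verts_b, c):
    # optimizing over A + B decomposes into optimizing over A and B separately
    va = opt_oracle(verts_a, c)
    vb = opt_oracle(verts_b, c)
    return tuple(x + y for x, y in zip(va, vb))

square = [(0, 0), (1, 0), (0, 1), (1, 1)]
triangle = [(0, 0), (2, 0), (0, 2)]
print(minkowski_opt_oracle(square, triangle, (1, 2)))   # (1, 3)
```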

Voronoi diagrams and their duals, Delaunay triangulations, are used in many areas of computing and the sciences. Starting in three dimensions, there is a substantial (i.e. polynomial) difference between the best-case and the worst-case complexity of these objects when starting with n points. This motivates the search for algorithms that are output-sensitive rather than relying only on worst-case guarantees. In this talk, I will describe a simple, new algorithm for computing Voronoi diagrams in d dimensions that runs in O(f log n log spread) time, where f is the output size and the spread of the input points is the ratio of the diameter to the closest pair distance. For a wide range of inputs, this is the best known algorithm. The algorithm is novel in that it turns the classic Delaunay refinement algorithm for mesh generation on its head, working backwards from a quality mesh to the Delaunay triangulation of the input. Along the way, we will see instances of several other classic problems for which no higher-dimensional results are known, including kinetic convex hulls and splitting Delaunay triangulations.
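As a small illustration of the parameter appearing in the stated running time, here is a hedged brute-force helper computing the spread of a point set (O(n^2), fine for small inputs; the example points are arbitrary):

```python
import math

def spread(points):
    """Ratio of the diameter to the closest-pair distance."""
    dists = [math.dist(p, q)
             for i, p in enumerate(points)
             for q in points[i + 1:]]
    return max(dists) / min(dists)

pts = [(0, 0), (1, 0), (0, 1), (5, 5)]
print(round(spread(pts), 3))   # 7.071  (= sqrt(50) / 1)
```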

Persistent homology with Z2 coefficients coincides with persistent cohomology because of duality. Recently it has been observed that cohomology-based algorithms perform much better in practice than the originally proposed homology-based persistence algorithm. We implement a cohomology-based algorithm that attaches binary labels called annotations to the simplices. This algorithm fits very naturally with a recently developed data structure called the simplex tree for representing simplicial complexes. By taking advantage of several practical tricks, such as representing annotations compactly with memory words, using a union-find structure that eliminates duplicate annotation vectors, and lazy evaluation, we save both space and time. I will present this implementation and provide a theoretical analysis as well as a detailed experimental study of it. Experimental results show that our implementation performs several times better than the existing state-of-the-art software for computing persistent homology in terms of both time and memory requirements and can handle very large complexes efficiently.
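A hedged toy version of one trick mentioned above, storing Z2 vectors compactly in machine words: here Python integers serve as bit vectors, so Z2 column addition is a single XOR. This is the standard persistence column reduction, not the annotation algorithm itself; the example filtration (three vertices, then the three edges of a triangle) is illustrative.

```python
def persistence_pairs(columns):
    """columns[j] = boundary of simplex j as a bit mask over simplices < j."""
    low_to_col = {}
    pairs = []
    for j, col in enumerate(columns):
        while col:
            low = col.bit_length() - 1        # index of the highest set bit
            if low not in low_to_col:
                low_to_col[low] = j
                pairs.append((low, j))        # simplex `low` dies at `j`
                break
            col ^= columns[low_to_col[low]]   # Z2 column addition as XOR
        columns[j] = col
    return pairs

# vertices 0, 1, 2, then edges {0,1}, {0,2}, {1,2} as columns 3, 4, 5
cols = [0, 0, 0, 0b011, 0b101, 0b110]
print(persistence_pairs(cols))   # [(1, 3), (2, 4)]  -- column 5 reduces to a cycle
```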

We consider unconstrained randomized optimization of smooth convex functions in the gradient-free setting with Random Pursuit (RP). This algorithm only uses zeroth-order information about the objective function and computes an approximate solution by repeated optimization along a randomly chosen direction.

We first present fixed metric RP (F-RP) where the directional sampling follows a normal distribution with fixed covariance matrix. We derive exact convergence rates of F-RP on convex and strongly convex functions and also quantify how well the employed covariance matrix needs to reflect the local geometry of the function in order for F-RP to converge with the best possible rate.
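A hedged sketch of fixed metric Random Pursuit in this spirit: sample a direction from N(0, I) and minimize along it using only function values (for a quadratic, an exact line minimizer can be recovered from three evaluations). The test function, iteration count, and seed are illustrative choices, not from the talk.

```python
import random

def f(x):   # illustrative strongly convex quadratic
    return x[0] ** 2 + 10 * x[1] ** 2

def line_min(x, u):
    """Zeroth-order exact line search for a quadratic: fit the parabola
    through f(x - u), f(x), f(x + u) and return its minimizer t*."""
    f0 = f(x)
    f1 = f((x[0] + u[0], x[1] + u[1]))
    fm = f((x[0] - u[0], x[1] - u[1]))
    return (fm - f1) / (2 * (f1 + fm - 2 * f0))

def random_pursuit(x, iters, seed=0):
    rng = random.Random(seed)
    for _ in range(iters):
        u = (rng.gauss(0, 1), rng.gauss(0, 1))   # fixed metric: N(0, I)
        t = line_min(x, u)
        x = (x[0] + t * u[0], x[1] + t * u[1])
    return x

x = random_pursuit((5.0, 3.0), 2000)
print(f(x))
```

The per-iteration cost is three function evaluations and no gradients, matching the zeroth-order setting; the fixed identity covariance is what F-RP's rate analysis relates to the function's local geometry.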

Successful state-of-the-art (derivative-free) optimization heuristics, such as the Covariance Matrix Adaptation Evolution Strategy, however, often include a mechanism to iteratively \emph{adapt} the covariance matrix of the sampling distribution in order to best reflect the changing local geometry of the objective function and thus to increase the convergence rate. By combining RP with a novel variant of such an adaptation scheme, recently introduced by Leventhal and Lewis (D. Leventhal and A. S. Lewis, Optimization 60(3), 329--345, 2011), we propose an adaptive version of RP: Variable Metric Random Pursuit (V-RP). We study and prove the convergence of V-RP on convex quadratic functions. Our theoretical results are accompanied by illustrative experiments.

Multi-robot motion planning is a fundamental problem in robotics and naturally arises in situations where several robots operate in the same workspace. This problem, in its most basic form, consists of computing paths for a group of robots, moving each robot from its start position to a designated target position, while avoiding collision with obstacles as well as with fellow robots. Even though it has been extensively studied in the last three decades, many aspects of the problem are still not well understood and existing methods are often restricted to relatively simple settings.

In this talk, we will describe novel approaches to multi-robot motion planning that are able to cope with challenging settings such as scenarios with narrow passages or with a large number of robots. These approaches combine the prevalent sampling-based framework for motion planning with additional tools that have not been previously used in this context, such as geometric arrangements computed on randomly chosen sub-manifolds of the configuration space (the space where motion planning problems are typically studied and whose dimension is equal to the overall number of degrees of freedom of the robots), and algorithms for pebble problems on graphs.

Based on results by the speaker, Oren Salzman, Michael Hemmer and Dan Halperin.

We present a major revamp of the point-location data structure for general two-dimensional subdivisions via randomized incremental construction, implemented in CGAL, the Computational Geometry Algorithms Library. We can now guarantee that the constructed search structure, which is a directed acyclic graph, is of linear size and provides logarithmic query time.

A major challenge is to retain the expected O(n log n) preprocessing time while guaranteeing (deterministic) linear space and logarithmic query time. We give a concrete simple solution with expected O(n log^2 n) preprocessing time and also provide another solution with expected O(n log n) runtime. However, our CGAL implementation is slightly different. We conjecture, based on our experimental results, that the implemented solution achieves expected O(n log n) preprocessing time. The trigger for this project was a nearest-neighbor (NN) search structure proposed by Birn et al. (ALENEX 2010), which works well in practice but whose query time can be linear in the worst case. We will present experiments comparing their method and ours.

This is a joint work with Michael Hemmer and Dan Halperin.

Every smooth surface can be locally approximated by a quadratic surface patch using, for example, Taylor expansion. Thus, triangulation of quadratic surface patches is of interest. In particular, up to second-order terms and up to Euclidean motions, the resulting quadratic patch is either a plane, an elliptic paraboloid, a hyperbolic paraboloid, or a parabolic cylinder. In this talk we present an optimal triangulation of a neighborhood of a point on a quadratic surface $S$ which is one of those mentioned above. The triangulation is optimal with respect to a distance function which serves as an upper bound on the Hausdorff distance from $S$ to the approximating patch. The construction of the approximating patch is strongly motivated by geometric ideas.
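A hedged illustration of the four-case classification quoted above: up to second order and rigid motions, the patch $z = ax^2 + bxy + cy^2$ is determined by the signs of the eigenvalues of its Hessian. The tolerance and example coefficients are illustrative.

```python
import math

def classify(a, b, c):
    """Classify z = a*x^2 + b*x*y + c*y^2 via the Hessian [[2a, b], [b, 2c]]."""
    tr, det = 2 * a + 2 * c, 4 * a * c - b * b
    s = math.sqrt(tr * tr - 4 * det)       # discriminant >= 0 for symmetric matrices
    l1, l2 = (tr + s) / 2, (tr - s) / 2
    zero = sum(abs(l) < 1e-12 for l in (l1, l2))
    if zero == 2:
        return "plane"
    if zero == 1:
        return "parabolic cylinder"
    return "elliptic paraboloid" if l1 * l2 > 0 else "hyperbolic paraboloid"

print(classify(1, 0, 1))    # elliptic paraboloid
print(classify(1, 0, -1))   # hyperbolic paraboloid
print(classify(1, 0, 0))    # parabolic cylinder
```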

TBA

The Gauss-Bonnet theorem relates local differential geometric quantities, that is, quantities which depend only on the Riemannian metric and its derivatives, by integration to a global topological invariant, namely the Euler characteristic $ \chi $. It is not directly obvious that the Euler characteristic is the only quantity which can be obtained in this manner. We shall prove for a surface that if $ f $ is a function that can be expressed locally in terms of the metric and all its derivatives such that $\int_M f \, dA = t$ where $ t $ is a topological invariant, then $ t = c \chi $ for some constant $ c $. We shall generalize this statement to three-dimensional manifolds, prove that under the same conditions $ t = 0 $, and touch on problems occurring in dimensions exceeding three.
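As a concrete instance of the statement above (the classical case, recalled for reference): for a closed orientable surface $M$ with Gaussian curvature $K$, the Gauss-Bonnet theorem gives $\int_M K \, dA = 2\pi \chi(M)$, so $f = K / 2\pi$ is a local quantity of the kind considered, realizing $ t = c \chi $ with $ c = 1 $.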

© 2010 CGL Computational Geometric Learning.