The Computer Vision Group headed by Prof. Michael Moeller conducts research in the field of mathematical image processing, computer vision, and machine learning.
Research in computer vision, machine learning, and optimization.
We prove convergence with respect to the noise level of two regularization methods for 2D parallel-beam CT reconstruction, and investigate the effect of discretization errors at different resolutions.
We use text inputs to disambiguate solutions of image super-resolution.
We analyze the ability of common zero-cost proxies to serve as performance predictors for robustness in a popular NAS search space.
We utilize quantum annealing to solve optimization problems in jointly matching multiple non-rigidly deformed 3D shapes.
We investigate differentiable architecture search for the design of novel architectures for inverse problems in a systematic case study.
Adversarial attacks on CT recovery networks can still maintain measurement consistency, and could be used to generate diagnostically different solutions.
Exploring solutions of image super-resolution using pretrained text-to-image diffusion models.
We study the frequencies in learned convolution filters and achieve improved native robustness with frequency regularization in learned convolution weights.
We propose to tackle the curse of dimensionality of large permutation matrices by approximating them using a low-rank matrix factorization, followed by a nonlinearity. To this end, we rely on kissing number theory to infer the minimal rank required for representing a permutation matrix of a given size, which is significantly smaller than the problem size.
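The general idea can be illustrated with a minimal NumPy sketch (a hypothetical toy construction, not the paper's model or its rank bound): an n×n doubly-stochastic approximation of a permutation matrix is built from a rank-k factor product followed by a row-wise softmax nonlinearity.

```python
import numpy as np

def soft_permutation(U, V, temperature=0.05):
    """Approximate a permutation matrix by a low-rank product U @ V.T
    followed by a row-wise softmax nonlinearity (toy illustration)."""
    logits = U @ V.T                          # rank-k approximation, k = U.shape[1]
    logits = logits / temperature             # low temperature sharpens towards 0/1
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)   # each row sums to one

rng = np.random.default_rng(0)
n, k = 8, 3                                   # rank k far below the problem size n
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
P = soft_permutation(U, V)
print(P.shape, np.allclose(P.sum(axis=1), 1.0))
```

Only the two n×k factors need to be stored and optimized, which is the source of the memory savings for large n.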
An unpaired learning approach for learning posterior distributions of underdetermined inverse problems using two normalizing flows.
We show that the latent spaces of pretrained models can be aligned with a linear transformation.
We show that Transformer-based restoration networks are not robust, and uncover effects of different attention mechanisms and nonlinearities on adversarially robust generalization.
In this paper we propose to learn QUBO forms for quantum annealing from data through gradient backpropagation instead of deriving them. As a result, the solution encodings can be chosen flexibly and compactly.
We propose a novel mixed-integer programming (MIP) formulation for generating precise sparse correspondences for highly non-rigid shapes.
We apply a 3D convolutional neural network (3D CNN) to classify subsurface defects in a glass fiber reinforced thermoplastic (GFRT) composite material inspected by a 3D THz imaging system.
We train a generative autoencoder for light fields and use it as a prior for a variety of light field reconstruction tasks.
We make neural networks invariant by modifying the input pose such that every element from the orbit of transformations maps to the same canonical element.
We use Latent Diffusion Models for zero-shot, text-guided image manipulation via DDIM sampling.
Imperceptible distortion can significantly degrade the performance of SOTA deblurring networks, even producing drastically different content in the output.
We investigate the combination of differentiable physics and spatial transformers in a deep action-conditional video representation network.
We introduce the first algorithm for motion segmentation that uses quantum annealing.
Models trained with full-batch gradient descent and explicit regularization can match the generalization performance of models trained with stochastic minibatching.
We develop an iterative method to tackle quadratic assignment problems with quantum annealing. Using this, we solve quadratic assignment problems from shape matching.
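To illustrate the general encoding (not the paper's iterative method), a small linear assignment problem can be written as a QUBO by adding one-hot penalty terms for the row and column constraints; here an exhaustive search stands in for the quantum annealer.

```python
import itertools
import numpy as np

def assignment_qubo(C, penalty=10.0):
    """Encode a linear assignment problem as a QUBO matrix.
    Binary variable x[i*n+j] = 1 means item i is assigned to slot j;
    one-hot row/column constraints are added as quadratic penalties."""
    n = C.shape[0]
    Q = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            Q[i * n + j, i * n + j] += C[i, j]      # linear cost term
    groups = ([[i * n + j for j in range(n)] for i in range(n)] +
              [[i * n + j for i in range(n)] for j in range(n)])
    for group in groups:                            # (sum_g x - 1)^2 penalties
        for a in group:
            Q[a, a] -= penalty                      # x^2 - 2x = -x for binary x
            for b in group:
                if a != b:
                    Q[a, b] += penalty
    return Q

def brute_force(Q):
    """Stand-in for the quantum annealer: exhaustive QUBO minimization."""
    best, best_x = np.inf, None
    for bits in itertools.product([0, 1], repeat=Q.shape[0]):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best:
            best, best_x = e, x
    return best_x

C = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])
x = brute_force(assignment_qubo(C)).reshape(3, 3)
print(x)   # a valid permutation matrix minimizing the assignment cost
```

The quadratic (Koopmans-Beckmann) cost terms of a full QAP would add further off-diagonal entries to Q; the one-hot penalty structure stays the same.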
We present data poisoning attacks that successfully poison neural networks trained from scratch, even on large-scale datasets like ImageNet.
We develop various methods to tackle graph matching problems with quantum annealing. Using this, we solve quadratic assignment problems from shape matching.
We call into question commonly held beliefs regarding the loss landscape, optimization, network width, and rank.
A new strategy to optimize the bi-level problems arising in training parameterized energy minimization models.
We propose a novel nonlinear transfer function called lifting, perform a theoretical analysis of the lifting layer, and demonstrate its effectiveness in deep learning approaches to image classification and denoising.
We replace the proximal operator of the regularization used in many convex energy minimization algorithms by a denoising neural network which serves as an implicit natural image prior.
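A minimal sketch of this plug-and-play idea, with a simple moving-average filter standing in for the learned denoising network and a masked 1D signal standing in for the imaging operator (both are illustrative stand-ins, not the paper's setup):

```python
import numpy as np

def smoothing_denoiser(x):
    """Stand-in for a denoising neural network: a 3-tap moving average
    acting as an implicit smoothness prior."""
    padded = np.pad(x, 1, mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

def plug_and_play_pgm(y, mask, steps=100, tau=1.0):
    """Proximal gradient for 0.5 * ||mask * x - y||^2 + R(x), where the
    proximal operator of R is replaced by the denoiser."""
    x = y.copy()
    for _ in range(steps):
        grad = mask * (mask * x - y)          # gradient of the data term
        x = smoothing_denoiser(x - tau * grad)  # denoiser replaces the prox
    return x

rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, 3 * np.pi, 64))     # smooth 1D "image"
mask = (rng.random(64) < 0.5).astype(float)       # keep roughly half the samples
y = mask * (truth + 0.05 * rng.standard_normal(64))
recon = plug_and_play_pgm(y, mask)
print(float(np.mean((recon - truth) ** 2)))       # lower than the error of y
```

Swapping the moving average for a trained denoiser, and the mask for a CT or deblurring operator, gives the general recipe without changing the iteration.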
We propose a novel class of regularizations, collaborative total variation (CTV), provide a theoretical characterization, and demonstrate practical applications in inverse problems.
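One instance of such a collaborative norm couples color channels and derivative directions with an l2 norm and sums over pixels with an l1 norm; the following NumPy sketch computes this particular member of the family (an illustration, not the paper's full characterization):

```python
import numpy as np

def collaborative_tv(u):
    """One instance of collaborative TV for a color image u of shape
    (H, W, C): l2 coupling over channels and derivative directions,
    l1 summation over pixels."""
    dx = np.diff(u, axis=1, prepend=u[:, :1, :])   # horizontal differences
    dy = np.diff(u, axis=0, prepend=u[:1, :, :])   # vertical differences
    # per-pixel l2 norm over the 2*C gradient entries, then l1 sum
    mag = np.sqrt((dx ** 2 + dy ** 2).sum(axis=2))
    return mag.sum()

img = np.zeros((8, 8, 3))
img[:, 4:, :] = 1.0                # a single vertical edge in all channels
print(collaborative_tv(img))       # 8 rows * sqrt(3 channels) = 8 * sqrt(3)
```

Changing which index (pixel, channel, direction) gets the l1 versus l2 coupling produces the other members of the CTV family with different inter-channel behavior.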
We propose a new greedy sparse recovery method that more closely approximates L1 minimization.
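For reference, orthogonal matching pursuit (OMP) is the standard greedy baseline in this problem class; the sketch below shows that baseline on a random Gaussian sensing matrix, not the proposed method itself.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A and
    least-squares fit on the chosen support (baseline, not the paper's method)."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # random sensing matrix
x_true = np.zeros(100)
x_true[[5, 17, 63]] = [1.0, -2.0, 1.5]             # 3-sparse ground truth
x_hat = omp(A, A @ x_true, k=3)
print(np.allclose(x_hat, x_true, atol=1e-6))
```

With 40 Gaussian measurements of a 3-sparse vector in dimension 100, this regime is easy enough that the greedy baseline already recovers the signal exactly; the interesting comparisons arise closer to the L1 recovery threshold.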
We present the motivation and theory of nonlinear spectral representations, based on convex regularizing functionals.
Analysis, implementation, and comparison of several vector-valued total variation (TV) methods that extend the Rudin-Osher-Fatemi variational model to color images.
We discuss the use of absolutely one-homogeneous regularization functionals in a variational, scale space, and inverse scale space setting to define a nonlinear spectral decomposition of input data.
We propose the first sublabel-accurate convex relaxation for vectorial multilabel problems by approximating the data term in a piecewise convex (rather than piecewise linear) manner.
We propose a novel spatially continuous framework for convex relaxations based on functional lifting, which can be interpreted as a sublabel-accurate solution to multilabel problems.