
Chair of Visual Computing

The Visual Computing Group works on computer vision and machine learning. One focus is the automated design of neural networks: we want to find out how such networks can become more efficient while remaining robust to image perturbations. We are also researching new approaches to better understand how neural networks make decisions - and how to bring them closer to human decision-making.

Visual Computing

Our research profile

Development of efficient machine learning approaches for a wide range of computer vision tasks.

Our group's research focuses on machine learning and computer vision methods, with emphasis on:

  • Efficient search for neural architectures

    We focus on advancing the automated design and optimization of neural networks, replacing manual trial-and-error with efficient search strategies. Our group develops acceleration methods - often based on predictive models - that explore large search spaces while keeping costly evaluations to a minimum, shortening the path to high-performance architectures (a minimal surrogate-prediction sketch follows after this list).

     

  • Parameter generation and transfer learning

    Our research extends transfer learning beyond traditional pre-trained weights to generating network parameters from learned distributions and to the use of large language models. We aim to improve the efficiency and adaptability of neural networks across domains while keeping parameter counts and resource constraints in mind (see the second sketch after this list).

     

  • Improving the robustness of neural networks

    An important aspect of our research is improving the robustness of neural networks against image noise and other perturbations that can lead to incorrect predictions. We investigate strategies such as emphasizing object detection and controlling frequency biases in network weights to reduce vulnerabilities and strengthen model reliability (the third sketch after this list shows one possible frequency penalty on filter weights).
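
The sketch below is only an illustration of surrogate-based performance prediction, not the group's actual pipeline: a cheap regressor, fitted on a handful of already-evaluated architectures, ranks a large pool of candidates so that only the most promising ones need to be trained. The encoding, the placeholder accuracies, and the choice of a random-forest surrogate are all assumptions made for the example.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    def encode(arch):
        # Hypothetical encoding: one integer operation id per edge of a fixed cell.
        return np.asarray(arch, dtype=float)

    # Stand-ins for a small set of architectures that were actually trained and evaluated.
    evaluated = [rng.integers(0, 5, size=6).tolist() for _ in range(50)]
    accuracies = rng.uniform(0.80, 0.95, size=50)            # placeholder accuracies

    surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
    surrogate.fit(np.stack([encode(a) for a in evaluated]), accuracies)

    # Rank a large pool of unseen candidates without training any of them.
    pool = [rng.integers(0, 5, size=6).tolist() for _ in range(10000)]
    scores = surrogate.predict(np.stack([encode(a) for a in pool]))
    best = pool[int(np.argmax(scores))]
    print("most promising candidate:", best, "predicted accuracy:", float(scores.max()))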
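
As a loose illustration of parameter generation, the following sketch fits a trivial per-tensor Gaussian to the weights of a source network and samples an initialization for a target network of the same shape; the generative approaches studied in practice (learned weight distributions, hypernetworks, language-model-based generation) are far more expressive, and every name in the snippet is hypothetical.

    import torch
    import torch.nn as nn

    def make_model():
        return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

    source = make_model()   # stands in for a pre-trained network
    target = make_model()   # new network to be initialized from the "learned" distribution

    with torch.no_grad():
        for src_param, tgt_param in zip(source.parameters(), target.parameters()):
            # The "learned distribution" here is only a per-tensor Gaussian; real
            # approaches capture much richer structure across layers and architectures.
            mean, std = src_param.mean(), src_param.std()
            tgt_param.copy_(torch.randn_like(tgt_param) * std + mean)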
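
The last sketch hints at what controlling frequency biases in network weights can look like in code: a differentiable penalty on the high-frequency part of each convolution filter's 2D spectrum, added to the task loss. It assumes PyTorch and is not the exact regularizer used in the group's Filter Frequency Regularization work.

    import torch
    import torch.nn as nn

    def high_frequency_penalty(model, keep=1):
        # Sum the spectral energy of every conv kernel outside a small low-frequency corner.
        penalty = torch.zeros(())
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                magnitude = torch.fft.fft2(module.weight).abs()   # per-kernel 2D spectrum
                low = magnitude[..., :keep, :keep].sum()          # retained low frequencies
                penalty = penalty + (magnitude.sum() - low)
        return penalty

    model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 16, 3))
    inputs = torch.randn(4, 3, 32, 32)
    targets = torch.randn(4, 16, 28, 28)
    loss = nn.functional.mse_loss(model(inputs), targets) + 1e-4 * high_frequency_penalty(model)
    loss.backward()   # the penalty is differentiable and trains jointly with the task loss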

Main research areas

  • Automated design of neural architectures
  • Robustness of deep learning models
  • Object re-identification
  • Efficiency of neural networks
  • Joint hardware-software optimization (Learning2Sense)
  • Learning and generation of weights

 

Courses

Teaching

We offer the following courses:

  • Automated Machine Learning (summer semester)
  • Digital Image Processing (summer semester)
  • Practical course Deep Learning (winter semester)
  • Unsupervised Deep Learning (winter semester)

 

Master's and Bachelor's theses

If you are interested in writing a Bachelor's or Master's thesis or doing a project group with us, you are welcome to contact us. The following list contains only a small selection of the topics currently offered by our group; please contact us for further topics and additional information. Your own ideas and interests are also welcome. Please note that we do not supervise external theses that require signing a confidentiality agreement.

  • Alexander Auras:
    • NAS for inverse problems
  • Penelope Natusch:
    • Robustness Predictions
    • Object Re-Identification

Publications

  • Can we talk models into seeing the world differently? (Conference paper, 2025)
  • Transferrable Surrogates in Expressive Neural Architecture Search Spaces (Conference paper, 2025)
  • An Evaluation of Zero-Cost Proxies - from Neural Architecture Performance Prediction to Model Robustness (Journal article, 2025)
  • Implicit Representations for Constrained Image Segmentation (Conference paper, 2024)
  • Surprisingly Strong Performance Prediction with Neural Graph Features (Conference paper, 2024)
  • An Evaluation of Zero-Cost Proxies - From Neural Architecture Performance Prediction to Model Robustness (Book chapter, 2024)
  • Neural Architecture Design and Robustness: A Dataset (Conference paper, 2023)
  • Improving Native CNN Robustness with Filter Frequency Regularization (Other, 2023)
  • Learning Where to Look – Generative NAS is Surprisingly Efficient (Book chapter, 2022)
  • Surrogate NAS Benchmarks: Going Beyond the Limited Search Spaces of Tabular NAS Benchmarks (Conference paper, 2022)
  • Smooth Variational Graph Embeddings for Efficient Neural Architecture Search (Conference paper, 2021)
  • Neural Architecture Performance Prediction Using Graph Neural Networks (Book chapter, 2021)

Current research projects


KIWI@SIWI

KIWI@SIWI is a project on the recognition of bison using AI. The aim is to use the technology to recognize the animals more reliably and thus support herd management. The project is carried out by the University of Siegen and NanoGiant GmbH and has received funding from the NEXT.IN.NRW funding line.


Jun.-Prof. Dr. Jovita Lukasik

Chair holder

I have been the head of the Visual Computing Group at the University of Siegen since September 2025. Before that, I completed my PhD with Margret Keuper at the University of Mannheim in 2023.

 


Dipl.-Inform. Ulrich Schipper

Technical employee

Contact the working group

Postal address

University of Siegen
Visual Computing Group
Hölderlinstraße 3
57076 Siegen

Visitor address

University of Siegen
Visual Computing Group
H-A Level 7
Room: H-A 7107
57076 Siegen

Secretariat

Secretary: Sarah Wagener
Phone: +49 (0)271 / 740-3315

Room: H-A 7107
E-mail: sarah-chr.wagener@uni-siegen.de