
Mobile robot for location-independent order picking in the food industry

MASON addressed the significant gap between highly automated distribution centers and the still largely manual process of loading sea containers. To overcome this challenge, an intelligent system for automated, palletless loading of cartons into containers was developed, with a particular focus on precise placement near container edges. The solution integrates robust computer vision for carton detection, specialized gripper technology, collision-free and time-optimized path planning, and learning-based carton insertion into narrow spaces, all supported by a modular IoT communication architecture for reliable system coordination. Building on advances in robotics, computer vision, and TSN communication, a modular system architecture was designed to recognize, grip, and position cartons within tight tolerances. A mobile manipulation robot was implemented as a demonstrator system, combining several essential components that each contribute to scientific and technical progress as well as industrial applicability. 


Project Description

In global freight logistics, standardized sea containers are the primary mode of transport for general cargo, including food products. These containers are typically loaded manually, with individual packages stacked directly and without pallets. While this method avoids the need for additional load-securing materials like straps or wrapping film, thereby cutting costs and reducing PVC waste, it relies entirely on human labor to tightly pack the goods. The challenge arises from the complexity and physical demands of this manual process. Workers are required to fit packages precisely against container walls using methods like pressing, wedging, and compressing—techniques that are inherently difficult to replicate with current robotic systems. This labor-intensive method stands in stark contrast to the high levels of automation already present in modern distribution centers.

Despite the rapid advancement of Industry 4.0 technologies, there is currently no robotic solution capable of performing this task with the required speed, precision, and reliability. Several key challenges stand in the way of automation: the confined working space inside containers, irregular package shapes, and unpredictable loading configurations.

Furthermore, employees in this sector face difficult working conditions, especially in refrigerated environments typical for food logistics. This results in high physical strain, increased staff turnover, and elevated absenteeism—all of which increase operational costs and disrupt productivity. The lack of a robotic system capable of handling these tasks has been a critical bottleneck for logistics companies, particularly SMEs in high-wage regions like Germany, which must remain competitive on the global stage without increasing labor costs.

 

Project Focus

To address these challenges, this project developed a fully automated approach to the container loading process using state-of-the-art robotics and computer vision technologies. The core innovation lies in a new system architecture that allows robots to:

  • Recognize the shape, size, and orientation of each package using AI-driven visual analysis.
  • Grasp and lift packages accurately within confined environments.
  • Precisely position packages to maximize space utilization and ensure stable stacking.
  • Adapt dynamically to changing package configurations and container conditions in real time.
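The capabilities above can be illustrated with a minimal perception-to-placement sketch. The `Package` class and the greedy row-packing heuristic below are illustrative assumptions for a single layer, not the project's actual planner or API:

```python
from dataclasses import dataclass

@dataclass
class Package:
    """A detected package with footprint and assigned position (metres)."""
    width: float
    depth: float
    height: float
    x: float  # assigned position inside the container
    y: float
    z: float

def plan_placement(packages, container_width):
    """Greedy shelf packing: place packages left to right, row by row.
    A simple stand-in for the project's space-utilization planning."""
    placed, cursor_x, cursor_y, row_depth = [], 0.0, 0.0, 0.0
    for p in packages:
        if cursor_x + p.width > container_width:  # start a new row
            cursor_x, cursor_y = 0.0, cursor_y + row_depth
            row_depth = 0.0
        placed.append(Package(p.width, p.depth, p.height,
                              cursor_x, cursor_y, 0.0))
        cursor_x += p.width
        row_depth = max(row_depth, p.depth)
    return placed
```

A real loader must additionally respect stacking stability and reachability constraints; the sketch only shows how detected dimensions feed into placement decisions.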

By integrating advanced algorithms and robotic control methods, the system handles complex loading scenarios with a high degree of reliability and speed. The result is a process that significantly reduces manual labor, minimizes the risk of injury, and increases throughput, loading containers more quickly and efficiently than traditional manual methods.

This project targets the automation of food container loading, a task long considered too complex for robotic systems due to its unstructured nature. Through the innovative combination of AI, robotics, and adaptive control, the developed solution provides a pathway to transforming logistics operations, making them faster, more efficient, and more sustainable.

At a Glance

  • Duration:
    01.04.2022 – 31.03.2025

  • Research Area:
    Robotics, Computer Vision, TSN Communication

  • Funding:
    Federal Ministry for Economic Affairs and Climate Action (BMWK), based on a decision taken by the German Bundestag.

 

Project Methodology

WP1 – Requirements Analysis:

WP1 focused on defining the system and logistical requirements through workshops and expert interviews. Product classes, environmental conditions, and key logistical parameters such as pick times, warehouse frequency, and personnel use were analyzed to set benchmarks for later validation. Together with project partners, data management concepts and a validation scenario were developed, and all findings were consolidated into a requirements catalog and specification document.

WP2 – Planning the Handling Robot:

WP2 focused on the detailed design and production-oriented planning of the handling robot system, consisting of the manipulator, gripper, and optical components. Various gripper solutions were evaluated and selected based on the requirements, with preference given to commercially available systems. The project utilized the mobile platform TORsten and the UR10 manipulator to enable rapid integration, with the overall system initially downscaled for scientific validation. For the optical setup, the compatibility and positioning of cameras and lighting were assessed to support later work packages. WP2 concluded with the development of a test stand integrating all components as a foundation for subsequent project phases.

WP3 – Communication and IoT:

WP3 focused on developing and evaluating a communication architecture based on Time-Sensitive Networking (TSN) to enable deterministic communication in wireless networks. The aim was to ensure stable data exchange between mobile robots, sensors, and warehouse management systems by addressing key requirements such as latency, reliability, interoperability, and scalability. This work was crucial for supporting time-critical warehouse scenarios, including the transmission of control commands, sensor data, and image information, over inherently unpredictable wireless channels.
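A central mechanism of TSN is the time-gated schedule that reserves exclusive transmission windows for critical traffic (as in IEEE 802.1Qbv scheduled traffic). The sketch below illustrates only the scheduling arithmetic; the stream names and durations are invented, and real TSN configuration happens in network hardware rather than application code:

```python
def build_cycle_schedule(streams, cycle_us):
    """Assign each stream a non-overlapping transmission window within one
    repeating cycle, in the spirit of an 802.1Qbv gate control list.
    streams: list of (name, duration_us) pairs, highest priority first.
    Raises ValueError if the streams exceed the cycle budget."""
    schedule, t = {}, 0
    for name, duration in streams:
        if t + duration > cycle_us:
            raise ValueError(f"stream {name!r} exceeds the {cycle_us} us cycle")
        schedule[name] = (t, t + duration)
        t += duration
    return schedule
```

Because every stream owns a fixed slice of each cycle, the worst-case waiting time for an admitted frame is bounded by the cycle length, which is what makes the latency deterministic.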

WP4 – Sensor Fusion and Computer Vision:

WP4 focused on developing a system for object detection and classification in warehouse environments using depth camera technology. Unlike traditional 2D cameras, depth cameras capture 3D data such as spatial positioning, depth, and volume, enabling more precise recognition in complex or dynamic settings. By applying advanced computer vision and machine learning techniques, including CNN-based algorithms like YOLO and SSD, the system achieved efficient and reliable real-time detection of objects such as cartons and boxes, even with variations in size, shape, or orientation. This technology significantly enhances warehouse automation by improving tracking, sorting, and organization, thereby increasing efficiency and reducing human error.
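The advantage of depth data described above can be illustrated with standard pinhole back-projection: a 2D detection box plus per-pixel depth yields a 3D position in camera coordinates. The intrinsics and box values in this sketch are invented and unrelated to the project's actual camera setup:

```python
import numpy as np

def box_to_3d(depth_map, box, fx, fy, cx, cy):
    """Convert a 2D detection box (u0, v0, u1, v1) and a depth map into a
    3D centroid (X, Y, Z) in camera coordinates via the pinhole model:
        X = (u - cx) * Z / fx,   Y = (v - cy) * Z / fy
    Z is taken as the median depth inside the box for robustness to noise."""
    u0, v0, u1, v1 = box
    z = float(np.median(depth_map[v0:v1, u0:u1]))
    u, v = (u0 + u1) / 2.0, (v0 + v1) / 2.0
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)
```

In a full pipeline, the box would come from a 2D detector such as YOLO or SSD, and the resulting 3D centroid would feed the grasp planner.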

WP5 – Trajectory Generation for Press Fit:

WP5 focused on developing motion generation strategies that allow the robotic manipulator to insert cartons with a press fit. The work included trajectory optimization methods as well as the exploration of new motion patterns tailored to press-fit operations. The work package concluded with iterative testing and optimization of these approaches using experimental setups developed at IfU, ensuring their practical validation and refinement.
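A common starting point for smooth insertion motions is a quintic (minimum-jerk) profile with zero velocity and acceleration at both endpoints. The sketch below is a generic textbook approach, not the specific press-fit strategy developed at IfU:

```python
def minimum_jerk(q0, q1, T, n=10):
    """Sample a quintic minimum-jerk profile from q0 to q1 over duration T.
    s(tau) = 10*tau**3 - 15*tau**4 + 6*tau**5 gives zero velocity and
    acceleration at both endpoints, which keeps the initial carton contact
    gentle. Returns a list of (time, position) samples."""
    points = []
    for i in range(n + 1):
        tau = i / n
        s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
        points.append((tau * T, q0 + (q1 - q0) * s))
    return points
```

For an actual press fit, such a free-space approach trajectory would be combined with force control once contact is detected, since position control alone cannot regulate the insertion pressure.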

WP6 – System Integration and Validation:

WP6 focused on integrating the components developed in the project into a complete system and testing them under realistic conditions within the defined validation scenario. The objective was to evaluate the overall technical functionality of the subsystems, including object detection, position estimation, trajectory planning, and the gripping and insertion process, and to verify their performance in a practical, real-world setup.

Project Team:

Roman Obermaisser

Univ.-Prof. Dr.-Ing. Roman Obermaisser

Professor

Prof. Dr. Roman Obermaisser is a full professor at the Division for Embedded Systems of the University of Siegen. He completed his doctoral studies in Computer Science at Vienna University of Technology in 2004, supervised by Prof. Hermann Kopetz.

Funders and Cooperation Partners

Federal Ministry for Economic Affairs and Climate Action (BMWK) based on a decision taken by the German Bundestag.