Algorithms and Complexity Group

Learning to Solve Dynamic Vehicle Routing Problems (Research Project)

a joint research project of the Algorithms and Complexity Group, TU Wien, Austria, and the Honda Research Institute, Germany

Project Team

Maria Bresich
Günther Raidl
Steffen Limmer

Topic

Vehicle routing - i.e., planning an optimal set of routes for a fleet of vehicles - is an intensely studied research area of enormous and still growing practical relevance, driven by increasing mobility and transportation demand as well as new challenges arising, e.g., from the growing interest in shared mobility services and electric vehicles. The dial-a-ride problem (DARP), for example, asks for optimal vehicle tours through different pickup and drop-off locations in order to serve a number of transportation requests, allowing different customers to share a vehicle. The electric autonomous dial-a-ride problem (E-ADARP) is a challenging and practically relevant extension of the DARP in which electric autonomous vehicles are employed and their charging requirements have to be taken into account. Furthermore, not only classical objectives like total travel time have to be optimized; user inconvenience also plays an important role. Thus, factors such as excess user ride time, which results from detours caused by ride-sharing, have to be considered as well.
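To make these objectives concrete, the following minimal Python sketch evaluates a single vehicle route under an illustrative weighted-sum objective combining total travel time and excess user ride time. All names, data structures, and the weight alpha are hypothetical illustrations, not the project's actual E-ADARP formulation.

    from dataclasses import dataclass

    @dataclass
    class Request:
        """A transportation request (all fields hypothetical, for illustration only)."""
        pickup: int          # node index of the pickup location
        dropoff: int         # node index of the drop-off location
        direct_time: float   # travel time of the direct pickup -> dropoff ride

    def route_cost(route, requests, travel_time, ride_time, alpha=0.75):
        """Illustrative weighted-sum objective for a single vehicle route.

        route        -- sequence of visited node indices
        requests     -- list of Request objects served on this route
        travel_time  -- travel_time[i][j]: time to drive from node i to node j
        ride_time    -- ride_time[r]: actual on-board time of request r on this route
        alpha        -- assumed weight trading off travel time vs. user inconvenience
        """
        total_travel = sum(travel_time[a][b] for a, b in zip(route, route[1:]))
        # Excess ride time: how much longer a passenger stays on board than the
        # direct ride would take, caused by detours due to ride-sharing.
        total_excess = sum(max(0.0, ride_time[r] - req.direct_time)
                           for r, req in enumerate(requests))
        return alpha * total_travel + (1.0 - alpha) * total_excess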

For such problems, heuristic optimization approaches are considered the means of choice due to their better scalability compared to exact approaches. In this project, we propose a heuristic framework based on large neighborhood search (LNS) to solve the E-ADARP, and we plan to tackle the issue of scalability by automatically designing, i.e., learning, efficient heuristics that either guide or possibly replace classical optimization techniques. We will investigate the use of reinforcement learning with different machine learning models to dynamically select operators for the LNS from a set of candidate operators during the optimization process. We intend to experimentally compare this approach to other learning techniques such as classical supervised learning, imitation learning, and Q-learning.
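The following Python sketch illustrates the general idea of an LNS whose destroy operator is chosen adaptively during the search. A simple epsilon-greedy value estimate stands in for the learned selection policy; all function names, parameters, and the accept-only-improving criterion are illustrative assumptions rather than the project's actual framework.

    import random

    def lns_with_learned_selection(initial_solution, cost, destroy_ops, repair_ops,
                                   iterations=1000, epsilon=0.1, step=0.1):
        """Sketch of a large neighborhood search with adaptive operator selection.

        cost        -- function mapping a solution to its objective value
        destroy_ops -- list of functions: solution -> partially destroyed solution
        repair_ops  -- list of functions: partial solution -> feasible solution
        """
        current = best = initial_solution
        q = [0.0] * len(destroy_ops)  # estimated value of each destroy operator

        for _ in range(iterations):
            # Epsilon-greedy operator selection: mostly exploit, sometimes explore.
            if random.random() < epsilon:
                i = random.randrange(len(destroy_ops))
            else:
                i = max(range(len(destroy_ops)), key=lambda k: q[k])
            destroy, repair = destroy_ops[i], random.choice(repair_ops)

            candidate = repair(destroy(current))
            reward = 0.0
            if cost(candidate) < cost(current):
                current = candidate
                reward = 1.0
            if cost(candidate) < cost(best):
                best = candidate
                reward = 2.0  # larger reward for a new global best

            # Incremental update of the chosen operator's value estimate.
            q[i] += step * (reward - q[i])

        return best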

News

  • Best Paper Award at SOFSEM 2025

    2025-01-23
    Thomas Depian, Simon D. Fink, Alexander Firbas, Robert Ganian, and Martin Nöllenburg received the Best Paper Award for their paper …Read More »
  • Markus Wallinger receives Award of Excellence for his PhD Thesis

    2024-12-05
Our former group member Markus Wallinger won the Award of Excellence from the Federal Ministry for Education, Science and Research. …Read More »
  • Thomas Depian receives State Prize for his Master’s Thesis

    2024-11-21
    Our group member Thomas Depian won the Appreciation Award given by the Federal Ministry for Education, Science and Research. This …Read More »
  • Best Paper Award at GECCO 2024 for M. Bresich, G. Raidl, and S. Limmer

    2024-07-24
    Maria Bresich, Günther Raidl, and Steffen Limmer received the best paper award at the 2024 Genetic and Evolutionary Computation Conference …Read More »
  • Welcome to our Feodor Lynen Fellow Dr. Frank Sommer

    2024-06-14
On June 1, 2024, Dr. Frank Sommer joined the Algorithms and Complexity group with a prestigious Feodor Lynen postdoc …Read More »

News archive

All news for 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023 and 2024.