Background
Safe design and deployment of intelligent transportation systems (ITS) in applications where the exact positioning of environment features and vehicles is unknown remains an unsolved task. First, uncertainty in localization and state evolution introduces risks which need to be minimized. Second, traffic laws require the system’s strict adherence to behavioral constraints. Hence, beyond integrating multi-modal data, e.g., geometric features from geographic information systems and the agent’s own perception, ITS require representations that allow for high-level reasoning and for querying safety-critical information.
For example, in the Advanced Air Mobility (AAM) context, the agent and its operator should follow a complex set of public laws relating to often uncertain environmental features. As AAM applications, such as logistics and emergency response systems, become more prevalent, the complexity of the regulatory environment increases as well, e.g., when operations occur in human-inhabited spaces. The same is true across modes of mobility, for instance in the autonomous driving context, where traffic regulations as well as safety and driving comfort objectives similarly depend on an uncertain environment.
Developing frameworks for prediction, planning and control that can effectively manage these complexities while accounting for the inherent uncertainties in background knowledge (e.g., geographic information systems) and perception (e.g., sensors and deep learning models) necessitates appropriately expressive environment representations. More specifically, it is important to move beyond associative representations towards models that consider causal effects in the environment. Similarly, one needs to consider not only the legality of an agent’s current placement in the world but also allow for reasoning on complete traces of state transitions.
Research/Technical Target
Prior work has presented Statistical Relational Maps (StaR Maps) [1] as a means of representing and reasoning in uncertain environments. However, in their current form, StaR Maps are limited to reasoning about the satisfiability of individual states within the agent’s navigation space. To more closely represent real-world navigation restrictions, e.g., traffic lights or lane directionality, StaR Maps need to be extended in order to reason on the agent transitioning from one state to another.
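To make the intended extension concrete, the following minimal sketch contrasts state-level satisfiability (as in current StaR Maps) with the proposed transition-level reasoning. All names, probabilities, and the independence assumption between state and edge constraints are illustrative only and do not reflect the actual StaR Maps implementation.

```python
# Hypothetical sketch: a StaR-Map-like model assigns each discrete state a
# probability of satisfying the navigation constraints (state-level reasoning).
state_sat = {
    (0, 0): 0.95,  # P(state legally occupiable)
    (0, 1): 0.90,
    (1, 1): 0.20,  # e.g., close to a restricted zone
}

# Proposed extension: transition-level satisfiability. A transition can be
# inadmissible even when both endpoint states are valid, e.g., moving against
# lane directionality. One-way constraints are modeled here as forbidden
# directed edges with a probability of being in force.
one_way = {((0, 1), (0, 0)): 0.99}  # P(transition forbidden)

def transition_sat(s, t):
    """Probability that moving from state s to state t is admissible,
    assuming (for illustration) independence of state and edge constraints."""
    p_states = state_sat.get(s, 0.0) * state_sat.get(t, 0.0)
    p_edge_ok = 1.0 - one_way.get((s, t), 0.0)
    return p_states * p_edge_ok

print(transition_sat((0, 0), (0, 1)))  # both endpoints valid, edge unconstrained
print(transition_sat((0, 1), (0, 0)))  # suppressed by the one-way constraint
```

The asymmetry between the two queries is exactly what state-level reasoning cannot express: both directions share the same pair of valid states, yet only one is admissible.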
Furthermore, the causal effects of interventions on the environment ought to be considered, e.g., the impact of switching the current state of a traffic light on the resulting landscape of satisfiable navigation space and the reachability of a desired target location. Besides strengthening the potential impact of StaR Maps, this research promises substantial improvements for downstream tasks that rely on StaR Maps, such as probabilistic mission design [2, 3] methods that encode local regulations under uncertainty for navigating human-inhabited spaces safely.
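The traffic-light example above can be sketched as a toy interventional query: how does fixing a light via do(light = green) change the reachability of a target, compared to marginalizing over the light's prior? The grid, probabilities, and function names below are assumptions made for illustration, not part of the StaR Maps framework.

```python
# Illustrative only: a tiny grid where one cell (a "gate") is passable only
# when a traffic light is green. We compare the observational probability of
# reaching the target with its value under the intervention do(light = green),
# which fixes the light regardless of its prior distribution.

P_GREEN = 0.3                                   # prior P(light is green)
free = {(0, 0), (1, 0), (1, 1), (1, 2), (0, 2)} # traversable cells
gate = (1, 1)                                   # passable only on green
start, target = (0, 0), (0, 2)

def reachable(light_green):
    """Depth-first reachability of the target for a fixed light state."""
    passable = free if light_green else free - {gate}
    frontier, seen = [start], {start}
    while frontier:
        x, y = frontier.pop()
        if (x, y) == target:
            return True
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in passable and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def p_reach(do_green=None):
    """P(target reachable); do_green, if set, intervenes on the light."""
    if do_green is not None:                    # interventional query
        return float(reachable(do_green))
    return P_GREEN * reachable(True) + (1 - P_GREEN) * reachable(False)

print(p_reach())               # observational: 0.3
print(p_reach(do_green=True))  # under do(light = green): 1.0
```

Since the only path to the target passes through the gate, the intervention lifts the reachability probability from the light's prior to certainty, which is the kind of landscape change an interventional StaR Map query should expose.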
Experimental Evaluation
To quantitatively evaluate the improvements over prior work and related methods, the thesis shall include an experimental evaluation of the effect of causal and transitional reasoning on StaR Maps via downstream task performance. As a working hypothesis, one may expect prediction algorithms built on StaR Map reasoning to gain accuracy where causal effects act on a tracked agent’s trajectory. To this end, a recent benchmark on the prediction of mobile agents, e.g., in the autonomous driving context, shall be used for comparison with the state-of-the-art. Alternatively, the additions may be evaluated on other downstream tasks if a suitable benchmark is available, e.g., on risk minimization and safe planning.
Schedule
The following time schedule is envisaged for the master’s thesis:
• 1 month: Literature review on probabilistic environment representations and causal inference.
• 1.5 months: Conceptual design of state transitions and interventions within the existing StaR Maps framework.
• 1.5 months: Implementation and setting up experiments.
• 1 month: Experimental evaluation of representational power and computational costs as well as downstream performance in a selected task, e.g., prediction.
• 1 month: Writing the thesis.
Partner
TU Eindhoven, Uncertainty in Artificial Intelligence Lab (Prof. Dhami).
Requirements:
• Good knowledge in Python
• Experience in probabilistic modeling
• Knowledge of formal logic and probabilistic programming
Literature:
[1] Kohaut, S., Flade, B., Dhami, D. S., Eggert, J., & Kersting, K. (2024). StaR Maps: Unveiling uncertainty in geospatial relations. In 27th International Conference on Intelligent Transportation Systems (ITSC). IEEE.
[2] Kohaut, S., Flade, B., Dhami, D. S., Eggert, J., & Kersting, K. (2023). Mission design for unmanned aerial vehicles using hybrid probabilistic logic programs. In 26th International Conference on Intelligent Transportation Systems (ITSC). IEEE.
[3] Kohaut, S., Flade, B., Dhami, D. S., Eggert, J., & Kersting, K. (2024). Towards probabilistic clearance, explanation and optimization. In 2024 International Conference on Unmanned Aircraft Systems (ICUAS) (pp. 911-916). IEEE. https://doi.org/10.1109/ICUAS60882.2024.10556879
Devendra Dhami