Understanding causal relationships within data is essential across fields such as healthcare, economics, and the social sciences, where knowing "what causes what" guides decision-making and policy. Causal discovery, the process of identifying these relationships and representing them as causal graphs, remains challenging, especially in complex, high-dimensional datasets. Traditional causal discovery methods often rely on assumptions that may not hold in real-world data, or demand large-scale computation, which limits their practicality.
Reinforcement Learning (RL), with its ability to learn effective strategies through trial and error guided by feedback, offers a promising alternative for causal discovery. By framing causal discovery as an RL problem, we can train agents to explore the space of possible graph structures and learn to construct accurate causal graphs from observational data. This approach could reduce computational cost, improve scalability, and adaptively learn causal dependencies from the data.
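For concreteness, the following is a minimal sketch of one possible formulation, not the method this project will necessarily adopt: the state is the current adjacency matrix, an action adds a directed edge, and the reward is the change in a BIC-style fit score computed from observational data. The `CausalGraphEnv` class, the score function, and the greedy placeholder policy below are all illustrative assumptions; an actual RL agent would replace the greedy choice with a learned policy.

```python
import numpy as np


def bic_score(data, adjacency):
    """BIC-style score: Gaussian log-likelihood of each node regressed on its
    parents, minus a complexity penalty per estimated parameter."""
    n, d = data.shape
    score = 0.0
    for j in range(d):
        parents = np.flatnonzero(adjacency[:, j])
        if parents.size:
            X = np.column_stack([data[:, parents], np.ones(n)])
            beta, *_ = np.linalg.lstsq(X, data[:, j], rcond=None)
            resid = data[:, j] - X @ beta
        else:
            resid = data[:, j] - data[:, j].mean()
        sigma2 = max(resid.var(), 1e-12)
        score += -0.5 * n * np.log(sigma2) - 0.5 * np.log(n) * (parents.size + 1)
    return score


def creates_cycle(adjacency, i, j):
    """Would adding edge i -> j create a directed cycle (i.e., can j reach i)?"""
    visited, stack = set(), [j]
    while stack:
        node = stack.pop()
        if node == i:
            return True
        if node in visited:
            continue
        visited.add(node)
        stack.extend(np.flatnonzero(adjacency[node]).tolist())
    return False


class CausalGraphEnv:
    """Illustrative environment: the state is the current adjacency matrix,
    an action adds one directed edge, and the reward is the BIC-score gain."""

    def __init__(self, data):
        self.data = data
        self.d = data.shape[1]
        self.reset()

    def reset(self):
        self.adj = np.zeros((self.d, self.d), dtype=int)
        self.score = bic_score(self.data, self.adj)
        return self.adj.copy()

    def valid_actions(self):
        return [(i, j) for i in range(self.d) for j in range(self.d)
                if i != j and not self.adj[i, j]
                and not creates_cycle(self.adj, i, j)]

    def step(self, action):
        self.adj[action] = 1
        new_score = bic_score(self.data, self.adj)
        reward, self.score = new_score - self.score, new_score
        return self.adj.copy(), reward, not self.valid_actions()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic observational data from a known chain X0 -> X1 -> X2.
    x0 = rng.normal(size=500)
    x1 = 2.0 * x0 + rng.normal(size=500)
    x2 = -1.5 * x1 + rng.normal(size=500)
    env = CausalGraphEnv(np.column_stack([x0, x1, x2]))

    # Greedy placeholder policy: pick the edge with the largest one-step gain.
    # A real RL agent would learn this choice from accumulated experience.
    done = False
    while not done:
        gains = []
        for a in env.valid_actions():
            trial = env.adj.copy()
            trial[a] = 1
            gains.append((bic_score(env.data, trial) - env.score, a))
        best_gain, best_action = max(gains)
        if best_gain <= 0:
            break
        _, _, done = env.step(best_action)
    print("Recovered adjacency matrix:\n", env.adj)
```

The choice of a BIC-style reward here is only one option; any graph-scoring criterion could serve as the feedback signal, and the action space could equally include edge deletions or reversals.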
This project aims to develop an RL-based method for causal discovery, enabling more efficient and accurate causal graph learning. The work involves a comprehensive literature review on causal discovery and Reinforcement Learning, formulation of causal discovery as an RL problem, and the development of RL algorithms that learn causal graphs.