Abstract:
The process of assigning the most appropriate resources to workstations or agents at the right time is termed scheduling. The term applies separately to tasks and to resources, as in task scheduling and resource allocation respectively. Scheduling is a universal theme discussed both in technological areas such as computing and in strategic areas such as operations management. The core idea behind scheduling is the distribution of shared resources across time among competing tasks. Optimization, efficiency, productivity and performance are the major metrics evaluated in scheduling. Effective scheduling under uncertainty is tricky and unpredictable, which makes it an interesting area of study. Environmental uncertainty is a challenging factor that affects scheduling-based decision making in work environments whose dynamics are subject to frequent fluctuations.
Reinforcement learning is an emerging field that is extensively researched for modelling environments under uncertainty, and optimization in dynamic scheduling can be handled effectively using reinforcement learning. This thesis presents a research study focused on reinforcement learning techniques that have been used for dynamic task scheduling. It addresses the results of the study through a review of the state of the art in reinforcement learning techniques used for dynamic task scheduling and a comparative analysis of those techniques. The thesis then reports on our research on a hybrid approach for dynamic task scheduling in unforeseen environments using two techniques: Multi-Agent Reinforcement Learning and Enhanced Q-Learning.
The proposed solution follows online and offline reinforcement learning approaches that operate on real-time heuristic inputs such as the number of agents involved, the current state of the environment, the backlog of tasks and sub-tasks, and the rewarding criteria. The outputs are the sets of scheduled tasks for the work environment. The solution provides an approach for priority-based dynamic task scheduling using Multi-Agent Reinforcement Learning and Enhanced Q-Learning. Enhanced Q-Learning comprises the developed algorithmic approaches Q-Learning, Dyna-Q+ Learning and Deep Dyna-Q+ Learning, which are proposed as an effective methodology for the scheduling problem.
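As a purely illustrative aid, not the thesis implementation, the following minimal Python sketch shows the kind of tabular Q-Learning update and Dyna-Q+ style exploration bonus that the three variants build upon; the state and action encodings, the hyperparameter values, and the choose_task/q_update helper names are assumptions introduced here.

```python
# Illustrative sketch only: tabular Q-Learning with a Dyna-Q+ exploration
# bonus (kappa * sqrt(time since a state-action pair was last tried)).
# State/action encodings and hyperparameters are assumed, not from the thesis.
import math
import random
from collections import defaultdict

ALPHA, GAMMA, KAPPA, EPSILON = 0.1, 0.95, 0.001, 0.1

Q = defaultdict(float)          # Q[(state, action)] -> estimated return
last_tried = defaultdict(int)   # time step each (state, action) was last taken

def choose_task(state, pending_tasks, t):
    """Epsilon-greedy selection over the pending tasks (actions)."""
    if random.random() < EPSILON:
        return random.choice(pending_tasks)
    # Dyna-Q+ style bonus encourages revisiting long-untried assignments.
    def bonus(a):
        return KAPPA * math.sqrt(t - last_tried[(state, a)])
    return max(pending_tasks, key=lambda a: Q[(state, a)] + bonus(a))

def q_update(state, action, reward, next_state, next_tasks):
    """Standard model-free Q-Learning backup."""
    best_next = max((Q[(next_state, a)] for a in next_tasks), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```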
The novelty of the solution lies in the implementation of model-based reinforcement learning and its integration with a model-free reinforcement learning approach, by means of Dyna-Q+ Learning and Deep Dyna-Q+ Learning, for dynamic task scheduling in an unforeseen environment. The research also concentrates on how dynamic task scheduling is managed within a constantly updating environment, a requirement for which Deep Dyna-Q+ provides a suitable solution. The final solution was comparatively evaluated using evaluation metrics for each of the three Q-Learning variations developed. The evaluation results revealed that the Deep Dyna-Q+ implementation caters well to the problem of dynamic task scheduling in an unforeseen environment.
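To illustrate how such a model-based component can be integrated with a model-free learner, the sketch below shows a generic Dyna-Q+ planning loop. It reuses the hypothetical choose_task, q_update, last_tried and KAPPA definitions from the sketch above; the environment interface (env.pending_tasks, env.step) and the number of planning steps are likewise assumptions for illustration, not the thesis design.

```python
# Generic Dyna-Q+ loop: real experience updates the Q-table (model-free),
# while a learned transition/reward memory generates simulated experience
# for extra planning updates (model-based). Builds on the sketch above.
model = {}  # model[(state, action)] -> (reward, next_state)

def dyna_q_plus_step(env, state, t, planning_steps=10):
    pending = env.pending_tasks(state)                    # hypothetical API
    action = choose_task(state, pending, t)
    reward, next_state = env.step(state, action)          # real experience
    last_tried[(state, action)] = t
    q_update(state, action, reward, next_state, env.pending_tasks(next_state))
    model[(state, action)] = (reward, next_state)         # update learned model

    # Planning: replay simulated experience drawn from the learned model.
    for _ in range(planning_steps):
        (s, a), (r, s2) = random.choice(list(model.items()))
        r += KAPPA * math.sqrt(t - last_tried[(s, a)])    # Dyna-Q+ bonus
        q_update(s, a, r, s2, env.pending_tasks(s2))
    return next_state
```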
Citation:
Shayalika, J.K.C. (2020). A hybrid approach for dynamic task scheduling in unforeseen environments using multi agent reinforcement learning and enhanced Q-learning [Master's thesis, University of Moratuwa]. Institutional Repository University of Moratuwa. http://dl.lib.uom.lk/handle/123/21207