Deep Reinforcement Learning Solves Job-shop Scheduling Problems
Author: Anjiang Cai, Yangfan Yu, Manman Zhao
Affiliation:

1. School of Mechanical and Electrical Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China;
2. Department of Automation Engineering, Wuxi Higher Vocational and Technical School of Mechanical and Electrical Engineering, Wuxi 214028, China


Abstract:

To address the sparse reward problem that arises when deep reinforcement learning (DRL) is applied to job-shop scheduling, a DRL framework that explicitly handles sparse rewards is proposed. The job-shop scheduling problem (JSSP) is transformed into a Markov decision process, and six state features are designed using a bidirectional (two-way) scheduling method to improve the state representation: four features that distinguish the optimal action and two features related to the learning goal. An extended variant of the graph isomorphism network, GIN++, encodes the disjunctive graph, improving the performance and generalization ability of the model. An iterative greedy algorithm generates a random policy as the initial policy and expands it by selecting the action with the maximum information gain, strengthening the exploration ability of the Actor-Critic algorithm. The trained policy model is validated on multiple public test data sets and compared with other advanced DRL methods and scheduling rules: the proposed method reduces the minimum average gap by 3.49%, 5.31% and 4.16% relative to the priority-rule-based methods, and by 5.34%, 11.97% and 5.02% relative to the learning-based methods, effectively improving the accuracy of DRL approximations to the minimum-makespan solution of the JSSP.
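As a concrete illustration of the pipeline the abstract describes (a graph network encoding the disjunctive graph, feeding an Actor-Critic head over schedulable operations), here is a minimal PyTorch sketch. It is not the authors' code: the class names, the dense adjacency representation, and the plain GIN layer standing in for the GIN++ variant are all assumptions, and the six input features per operation only mirror the count stated in the abstract.

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One plain GIN layer: h_v' = MLP((1 + eps) * h_v + sum of neighbour h_u).
    The paper's GIN++ extensions are not specified here, so this is a stand-in."""
    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, adj):
        # adj: dense [N, N] adjacency of the disjunctive graph (illustrative choice)
        return self.mlp((1 + self.eps) * h + adj @ h)

class ActorCritic(nn.Module):
    """Actor scores each schedulable operation; critic values the whole state."""
    def __init__(self, num_features=6, dim=64, num_layers=3):
        super().__init__()
        self.embed = nn.Linear(num_features, dim)
        self.gin = nn.ModuleList([GINLayer(dim) for _ in range(num_layers)])
        self.actor = nn.Linear(dim, 1)
        self.critic = nn.Linear(dim, 1)

    def forward(self, x, adj, mask):
        h = self.embed(x)
        for layer in self.gin:
            h = layer(h, adj)
        logits = self.actor(h).squeeze(-1)
        logits = logits.masked_fill(~mask, float("-inf"))  # only legal operations
        value = self.critic(h.mean(dim=0))                 # pooled graph-level value
        return torch.distributions.Categorical(logits=logits), value

# Toy usage: 9 operations (e.g. 3 jobs x 3 machines), 6 state features each.
x = torch.randn(9, 6)
adj = (torch.rand(9, 9) < 0.3).float()
mask = torch.ones(9, dtype=torch.bool)
dist, value = ActorCritic()(x, adj, mask)
action = dist.sample()  # index of the operation dispatched next
```

The masking step reflects the standard JSSP construction in which only the next unscheduled operation of each job is a legal action at each decision point; the iterative-greedy initialisation and information-gain exploration the abstract mentions would sit in the training loop around such a model.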

Get Citation

Anjiang Cai, Yangfan Yu, Manman Zhao. Deep Reinforcement Learning Solves Job-shop Scheduling Problems[J]. Instrumentation, 2024, (1): 88-100.

History
  • Online: May 05, 2024
License
  • Copyright (c) 2023 by the authors. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.