Abstract
Assemble-to-order (ATO) strategies are common in many industries. Despite their popularity, ATO systems remain challenging to analyze. We consider a general-product-structure ATO problem modeled as an infinite-horizon Markov decision process. As computing the optimal policy of such a system is intractable, we develop a heuristic policy based on a decomposition of the original system into a series of two-component ATO subsystems. We show that our decomposition heuristic policy (DHP) possesses many properties similar to those encountered in special-product-structure ATO systems. Extensive numerical experiments show that the DHP is very efficient. In particular, it requires less than 10⁻⁵ of the time needed to compute the optimal policy, with an average percentage cost gap of less than 4% for systems with up to 5 components and 6 products. We also show that the DHP outperforms the state-aggregation heuristic of Nadar et al. (2018) in terms of both cost and computational effort. We further develop an information-relaxation-based lower bound on the performance of the optimal policy. This bound is tight, with an average percentage gap not exceeding 0.5% for systems with up to 5 components and 6 products. Using this lower bound, we further show that the average suboptimality gap of the DHP is within 9% for two special-product-structure ATO systems with up to 9 components and 10 products. We believe that, on a sufficiently powerful computing platform, the DHP can handle systems with a large number of components and products.
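To make the two-component building block of the decomposition concrete, the sketch below solves one such subsystem by value iteration on a uniformized continuous-time MDP. It is a minimal illustration only: the lost-sales assumption, the single product using both components, the state-space cap, and all parameter values are hypothetical choices for this sketch, not the paper's formulation.

```python
import numpy as np

# Illustrative two-component ATO subsystem (all assumptions, not the paper's model):
# one product consumes one unit of each component; unmet demand is lost.
N = 10              # inventory cap per component (state-space truncation)
mu = (0.4, 0.4)     # production rates of components 1 and 2
lam = 0.2           # demand rate for the product
h = (1.0, 1.0)      # per-unit holding cost rates
b = 20.0            # lost-sale penalty
beta = 0.05         # continuous-time discount rate

Lam = mu[0] + mu[1] + lam       # uniformization constant

V = np.zeros((N + 1, N + 1))    # value function over component inventories
for _ in range(5000):
    Vn = np.empty_like(V)
    for x1 in range(N + 1):
        for x2 in range(N + 1):
            # Production control: for each component, produce or idle.
            p1 = min(V[min(x1 + 1, N), x2], V[x1, x2])
            p2 = min(V[x1, min(x2 + 1, N)], V[x1, x2])
            # Demand event: assemble if both components are on hand,
            # otherwise incur the lost-sale penalty.
            d = V[x1 - 1, x2 - 1] if x1 > 0 and x2 > 0 else b + V[x1, x2]
            cost = h[0] * x1 + h[1] * x2
            # Uniformized discounted-cost Bellman update.
            Vn[x1, x2] = (cost + mu[0] * p1 + mu[1] * p2 + lam * d) / (beta + Lam)
    if np.max(np.abs(Vn - V)) < 1e-9:
        V = Vn
        break
    V = Vn

# Recover the produce/idle decision for component 1 in each state:
# 1 where producing strictly lowers the expected discounted cost.
produce1 = np.array([[V[min(x1 + 1, N), x2] < V[x1, x2]
                      for x2 in range(N + 1)] for x1 in range(N + 1)])
print(produce1.astype(int))
```

In this toy instance, the produce/idle boundary typically traces a monotone switching curve in the two inventory levels, the kind of structural property the abstract alludes to for special product structures; a decomposition heuristic in this spirit would solve many such small subsystems instead of one intractable joint MDP.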
| Original language | English |
| --- | --- |
| Pages (from-to) | 233-249 |
| Number of pages | 17 |
| Journal | European Journal of Operational Research |
| Volume | 286 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 1 Oct 2020 |
| Externally published | Yes |
| Event | 32nd Annual POMS conference, 21 Apr 2022 → 25 Apr 2022, https://pomsmeetings.org/conf-2022/ |
Keywords
- Approximate policy
- Assemble-to-order
- Information relaxation lower bound
- Inventory control
- Markov decision process
- Production