TY - JOUR
T1 - Deep reinforcement learning-driven smart and dynamic mass personalization
AU - Xiao, Ruxin
AU - Wang, Yuchen
AU - Wang, Xinheng
AU - Liu, Ang
AU - Zhang, Jinhua
N1 - Publisher Copyright:
© 2023 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0)
PY - 2023
Y1 - 2023
N2 - Smart mass personalization is becoming increasingly important for improving the competitiveness of products. In mass personalization, customers' contextual data is characterized by complexity and fluctuation. Hence, designers must ensure the timeliness of smart mass personalization so that it can continuously satisfy customers' demands. This paper proposes a deep reinforcement learning (DRL)-driven system for dynamic and smart mass personalization. The system adopts the deep Q-network as its training algorithm because of its compatibility with both off-policy and on-policy training. The deep Q-network is first trained on previous customers' contextual data collected from purchase histories and web services until it can generate the expected policy for concept generation. The agent in the deep Q-network then dynamically tunes the algorithm by continuously interacting with incoming customers' contextual data. A personalization scenario for automobiles is presented to illustrate the system. The contribution of this paper lies in the application of DRL to realize dynamic updates in smart mass personalization and in the innovative dynamic action space generated from customer clusters.
AB - Smart mass personalization is becoming increasingly important for improving the competitiveness of products. In mass personalization, customers' contextual data is characterized by complexity and fluctuation. Hence, designers must ensure the timeliness of smart mass personalization so that it can continuously satisfy customers' demands. This paper proposes a deep reinforcement learning (DRL)-driven system for dynamic and smart mass personalization. The system adopts the deep Q-network as its training algorithm because of its compatibility with both off-policy and on-policy training. The deep Q-network is first trained on previous customers' contextual data collected from purchase histories and web services until it can generate the expected policy for concept generation. The agent in the deep Q-network then dynamically tunes the algorithm by continuously interacting with incoming customers' contextual data. A personalization scenario for automobiles is presented to illustrate the system. The contribution of this paper lies in the application of DRL to realize dynamic updates in smart mass personalization and in the innovative dynamic action space generated from customer clusters.
KW - Artificial intelligence-enhanced design
KW - Deep reinforcement learning
KW - Real-time system
KW - Smart mass personalization
UR - http://www.scopus.com/inward/record.url?scp=85169918652&partnerID=8YFLogxK
U2 - 10.1016/j.procir.2023.04.004
DO - 10.1016/j.procir.2023.04.004
M3 - Conference article
AN - SCOPUS:85169918652
SN - 2212-8271
VL - 119
SP - 97
EP - 102
JO - Procedia CIRP
JF - Procedia CIRP
T2 - 33rd CIRP Design Conference
Y2 - 17 May 2023 through 19 May 2023
ER -