PLSR: Unstructured Pruning with Layer-wise Sparsity Ratio

Haocheng Zhao, Limin Yu*, Runwei Guan, Liye Jia, Junqing Zhang, Yutao Yue

*Corresponding author for this work

Research output: Chapter in Book or Report/Conference proceeding › Conference Proceeding › peer-review


In the current era, as multi-modal and large models gradually reveal their potential, neural network pruning has emerged as a crucial means of model compression. It is widely recognized that models tend to be over-parameterized, and pruning removes unimportant weights, improving inference speed while preserving accuracy. From early gradient-based and magnitude-based methods to modern algorithms such as iterative magnitude pruning, the lottery ticket hypothesis, and pruning at initialization (PaI), researchers have strived to increase the compression ratio of model parameters while maintaining high accuracy. Mainstream algorithms currently focus on global pruning of neural networks with various scoring functions, followed by different pruning strategies to enhance the accuracy of the sparse model. Recent studies have shown that random pruning with varying layer-wise sparsity ratios achieves robust results for large models and out-of-distribution data. Based on this discovery, we propose a new score, FeatIO, which is based on module input and output feature map sizes. As a PaI score function, FeatIO surpasses the other PaI score functions. Additionally, we propose a novel pruning strategy, Pruning with Layer-wise Sparsity Ratio (PLSR), which combines layer-wise sparsity ratios with a magnitude-based score function, achieving the best evaluation performance. Almost all algorithms exhibit improved performance when using our novel pruning strategy. The combination of PLSR and FeatIO consistently outperforms the other algorithms in our tests, demonstrating the significant potential of our approach. Our code will be available here.
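To make the core idea concrete, the following is a minimal sketch of magnitude-based unstructured pruning with per-layer (rather than global) sparsity ratios. It is an illustration of the general technique the abstract describes, not the authors' PLSR or FeatIO implementation; the function name and the choice of NumPy arrays as stand-ins for layer weight tensors are assumptions for this example.

```python
import numpy as np

def magnitude_prune_layerwise(weights, sparsity_ratios):
    """Zero out the smallest-magnitude weights in each layer independently.

    weights: list of numpy arrays, one weight tensor per layer.
    sparsity_ratios: list of floats in [0, 1); fraction of weights
        to remove in the corresponding layer (layer-wise, not global).
    Returns a new list of pruned weight tensors.
    """
    pruned = []
    for w, s in zip(weights, sparsity_ratios):
        k = int(s * w.size)                 # weights to drop in this layer
        mask = np.ones(w.size, dtype=bool)  # True = keep
        if k > 0:
            # Indices of the k smallest-magnitude weights in this layer.
            drop = np.argsort(np.abs(w).ravel())[:k]
            mask[drop] = False
        pruned.append(w * mask.reshape(w.shape))
    return pruned
```

Because each layer is pruned against its own ratio, a sparsity-allocation rule (e.g. one derived from feature map sizes, as FeatIO's scoring suggests) can be plugged in simply by supplying different entries in `sparsity_ratios`.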

Original language: English
Title of host publication: Proceedings - 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023
Editors: M. Arif Wani, Mihai Boicu, Moamar Sayed-Mouchaweh, Pedro Henriques Abreu, Joao Gama
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 8
ISBN (Electronic): 9798350345346
Publication status: Published - 2023
Event: 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023 - Jacksonville, United States
Duration: 15 Dec 2023 – 17 Dec 2023

Publication series

Name: International Conference on Machine Learning and Applications (ICMLA)

Conference: 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023
Country/Territory: United States


  • Layer-wise Sparsity
  • Model Compression
  • Pruning
  • Unstructured Pruning

