Project Details
Project Title (In Chinese)
神经网络压缩新范例研究 (Research on a New Paradigm for Neural Network Compression)
Fund Amount (RMB)
100,000
Description
Neural network compression is crucial for deploying large models on resource-constrained devices. While existing techniques such as quantization, pruning, and distillation have made progress, they suffer from high performance loss as compression rates increase, narrow applicability, and heavy computing requirements.
This research aims to overcome these challenges by introducing a novel compression paradigm. We aim to achieve significant size reductions with minimal performance impact by targeting semi-structured data representations that lie between fully compressed and redundant data.
Preliminary theoretical analysis suggests that some neural networks can be compressed to below 10% of their original size while maintaining low performance loss. Furthermore, this study explores the compatibility of our proposed approach with state-of-the-art methods, paving the way for practical applications in computer vision and natural language processing.
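To make the baseline techniques mentioned above concrete, the following is a minimal, hypothetical sketch of magnitude pruning, one of the existing compression methods the project contrasts itself with (it does not represent the proposed new paradigm; the function name and threshold scheme are illustrative assumptions):

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    A common pruning baseline: weights with small absolute value
    contribute least to the output and are set to zero, so only
    the surviving nonzeros need to be stored.
    """
    n_prune = int(len(weights) * sparsity)
    # Indices of the n_prune smallest |w|.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0
    return pruned

# Toy weight vector; real use would operate on a layer's weight tensor.
weights = [0.9, -0.05, 0.3, 0.01, -0.7, 0.02]
pruned = prune_by_magnitude(weights, sparsity=0.5)
nonzero = sum(1 for w in pruned if w != 0.0)  # 3 of 6 weights survive
```

In practice, pushing such sparsity below roughly 10% of the original size is where accuracy typically degrades sharply, which motivates the alternative representations this project proposes.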
| Project Category | Research and Development Fund |
| --- | --- |
| Acronym | RDF-A |
| Status | Active |
| Effective start/end date | 1/01/25 → 31/12/27 |
Keywords
- Neural networks, model compression, semi-structured data, group theory