|
|
--- |
|
|
license: mit |
|
|
tags: |
|
|
- benchmark |
|
|
- systems-ml |
|
|
- distributed-training |
|
|
- muon |
|
|
- optimizer |
|
|
- performance-analysis |
|
|
--- |
|
|
|
|
|
# Distributed Muon: Field Notes & Reproducibility Artifacts
|
|
|
|
|
**Code, Performance Traces, and Analysis Logs** |
|
|
|
|
|
This repository contains the raw engineering artifacts for the deep-dive investigation: **"Reproducing and Validating Distributed Muon"**. |
|
|
|
|
|
It serves as the **proof of work** for the performance claims regarding the Muon optimizer's communication efficiency and computational overhead in a distributed setting (Data Parallel + Tensor Parallel). |
|
|
|
|
|
**Read the Full Report:** [Reproducing and Validating Distributed Muon: A Practical Verification of Communication Efficiency Claims](https://medium.com/@jenwei0312/reproducing-and-validating-distributed-muon-a-practical-verification-of-communication-0be4d1d9b893)
|
|
**Get the Tutorial Code:** [bird-of-paradise/muon-distributed](https://huggingface.co/datasets/bird-of-paradise/muon-distributed)
|
|
|
|
|
--- |
|
|
|
|
|
## Repository Structure
|
|
|
|
|
* **`traces/`**: Raw Chrome Trace (`.json`) files generated by PyTorch Profiler. You can load these into `chrome://tracing` or [ui.perfetto.dev](https://ui.perfetto.dev) to visualize the exact CPU/GPU execution timeline; a programmatic parsing sketch follows this list.
|
|
* `comparison/`: Side-by-side traces of AdamW vs. Muon (Hybrid DP=2/TP=2). |
|
|
* `distributed_muon/`: Scaling traces for DP=4, TP=4, and Hybrid configurations. |
|
|
* **`analysis_scripts/`**: The exact Python scripts used to generate the traces and parse the performance metrics. |
|
|
* **`figures/`**: High-resolution charts and trace visualizations used in the report. |
|
|
* **`report/`**: A PDF archive of the full technical investigation. |
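
The trace files use the standard Chrome Trace Event format, so they can also be inspected programmatically rather than through the UI. A minimal sketch (the per-category aggregation shown here is illustrative and is not one of the repository's scripts):

```python
import json
from collections import defaultdict

# Load a PyTorch Profiler trace in Chrome Trace Event format.
with open("traces/comparison/adamw_fullstep_rank0.json") as f:
    trace = json.load(f)

# Complete events ("ph" == "X") carry a duration in microseconds ("dur").
totals_us = defaultdict(float)
for event in trace.get("traceEvents", []):
    if event.get("ph") == "X":
        totals_us[event.get("cat", "unknown")] += event.get("dur", 0.0)

# Print total time per event category (e.g., kernel, cpu_op), largest first.
for cat, us in sorted(totals_us.items(), key=lambda kv: -kv[1]):
    print(f"{cat:>20}: {us / 1e3:10.2f} ms")
```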
|
|
|
|
|
--- |
|
|
|
|
|
## Key Findings (Verified in Traces)
|
|
|
|
|
The traces in this repository provide empirical evidence for the following: |
|
|
|
|
|
1. **Communication Efficiency:** Muon (Hybrid DP2/TP2) demonstrates **0.57x** the communication overhead of AdamW on a bandwidth-constrained cluster (PCIe Gen4 x4). |
|
|
  * *Evidence:* Compare `traces/comparison/adamw_fullstep_rank0.json` vs `muon_fullstep_dp2_tp2_rank0.json` (a ratio-computation sketch follows this list).
|
|
2. **Optimizer Latency:** The Muon step accounts for **~1.1%** of total training time, validating the paper's "negligible overhead" claim. |
|
|
3. **Hybrid Scaling:** The `DP=2, TP=2` configuration outperforms pure DP or pure TP on 4 GPUs, balancing memory bandwidth with communication overhead. |
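
To sanity-check finding 1 yourself, the communication-kernel time in each trace can be summed and compared. A minimal sketch, assuming NCCL communication kernels can be identified by an `nccl` substring in the event name (the exact kernel names vary by NCCL version, so treat the filter as a heuristic):

```python
import json

def comm_time_ms(path):
    """Sum durations of communication kernels (heuristic: 'nccl' in name)."""
    with open(path) as f:
        events = json.load(f).get("traceEvents", [])
    total_us = sum(
        e.get("dur", 0.0)
        for e in events
        if e.get("ph") == "X" and "nccl" in e.get("name", "").lower()
    )
    return total_us / 1e3

adamw = comm_time_ms("traces/comparison/adamw_fullstep_rank0.json")
muon = comm_time_ms("traces/comparison/muon_fullstep_dp2_tp2_rank0.json")
print(f"AdamW comm: {adamw:.2f} ms | Muon comm: {muon:.2f} ms | "
      f"ratio: {muon / adamw:.2f}x")  # expected ~0.57x per the report
```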
|
|
|
|
|
--- |
|
|
|
|
|
## How to Reproduce
|
|
|
|
|
To run these benchmarks yourself on a 4-GPU cluster: |
|
|
|
|
|
1. Clone this repository. |
|
|
2. Install the only dependency: `torch`.
|
|
3. Run the benchmark script: |
|
|
|
|
|
```bash |
|
|
# This will generate new JSON traces in your local directory |
|
|
python analysis_scripts/muon_vs_adam.py |
|
|
``` |
|
|
|
|
|
4. Run the performance analysis on the included trace files:
|
|
```bash |
|
|
python analysis_scripts/performance_comparison.py |
|
|
``` |
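
For context, the Chrome traces above follow the standard `torch.profiler` export pattern. A minimal sketch of that pattern (the model and training loop here are placeholders, not the repository's actual benchmark code):

```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(1024, 1024).cuda()  # placeholder model
optimizer = torch.optim.AdamW(model.parameters())

# Record both CPU-side ops and CUDA kernels over a few training steps.
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    for _ in range(5):
        loss = model(torch.randn(64, 1024, device="cuda")).sum()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Export in Chrome Trace format, viewable at chrome://tracing or ui.perfetto.dev.
prof.export_chrome_trace("adamw_trace.json")
```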
|
|
|
|
|
--- |
|
|
|
|
|
## Acknowledgments
|
|
- [Mahdi Chaker](https://github.com/mchaker) for generously providing GPU cluster access |
|
|
- Moonshot AI team for open-sourcing their [PoC implementation](https://github.com/NVIDIA/Megatron-LM/pull/1428/commits/f432fbe45c169aeb5a0805ff6f41e13f989c6730#diff-61c8e9370cb7fd634a4019472368c487898093f5d330375524c76eac15c7390c)
|
|
|
|
|
--- |
|
|
|
|
|
## Citation
|
|
If you use these traces or analysis in your work, please cite: |
|
|
|
|
|
|
|
|
```bibtex
@misc{wei2025muonreproducibility,
  author       = {Wei, Jen},
  title        = {Distributed Muon: Performance Artifacts and Benchmarks},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Datasets},
  howpublished = {\url{https://huggingface.co/datasets/bird-of-paradise/muon-distributed-reproducibility}}
}
```
|
|
|