---
license: mit
tags:
- benchmark
- systems-ml
- distributed-training
- muon
- optimizer
- performance-analysis
---
# Distributed Muon: Field Notes & Reproducibility Artifacts

Code, Performance Traces, and Analysis Logs
This repository contains the raw engineering artifacts for the deep-dive investigation: "Reproducing and Validating Distributed Muon".
It serves as the proof of work for the performance claims regarding the Muon optimizer's communication efficiency and computational overhead in a distributed setting (Data Parallel + Tensor Parallel).
- Read the Full Report: *Reproducing and Validating Distributed Muon: A Practical Verification of Communication Efficiency Claims*
- Get the Tutorial Code: bird-of-paradise/muon-distributed
## Repository Structure
- `traces/`: Raw Chrome Trace (`.json`) files generated by the PyTorch Profiler. You can load these into `chrome://tracing` or ui.perfetto.dev to visualize the exact CPU/GPU execution timeline.
  - `comparison/`: Side-by-side traces of AdamW vs. Muon (Hybrid DP=2/TP=2).
  - `distributed_muon/`: Scaling traces for DP=4, TP=4, and hybrid configurations.
- `analysis_scripts/`: The exact Python scripts used to generate the traces and parse the performance metrics.
- `figures/`: High-resolution charts and trace visualizations used in the report.
- `report/`: A PDF archive of the full technical investigation.
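If you want a quick look at a trace without opening a UI, the files follow the standard Chrome trace format (a JSON object with a `traceEvents` list; complete events carry a `name` and a `dur` in microseconds). A minimal sketch, assuming only that standard schema:

```python
import json
from collections import defaultdict

# Load one of the bundled traces.
with open("traces/comparison/adamw_fullstep_rank0.json") as f:
    trace = json.load(f)

# Aggregate total time per event name to see where the step is spent.
totals_us = defaultdict(float)
for event in trace.get("traceEvents", []):
    if event.get("ph") == "X":  # "X" marks a complete event with a duration
        totals_us[event["name"]] += event.get("dur", 0)

# Print the ten most expensive event names.
for name, us in sorted(totals_us.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{us / 1e3:10.2f} ms  {name}")
```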
## Key Findings (Verified in Traces)
The traces in this repository provide empirical evidence for the following:
- **Communication Efficiency**: Muon (Hybrid DP=2/TP=2) incurs 0.57x the communication overhead of AdamW on a bandwidth-constrained cluster (PCIe Gen4 x4).
  - Evidence: Compare `traces/comparison/adamw_fullstep_rank0.json` vs. `muon_fullstep_dp2_tp2_rank0.json` (a sketch of this comparison follows the list below).
- **Optimizer Latency**: The Muon step accounts for ~1.1% of total training time, validating the paper's "negligible overhead" claim.
- **Hybrid Scaling**: The DP=2, TP=2 configuration outperforms pure DP or pure TP on 4 GPUs, balancing memory bandwidth against communication overhead.
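As a rough sketch of how a ratio like 0.57x can be checked from the bundled traces: sum the time spent in communication kernels in each trace, then divide. The substring filter (`"nccl"`) and the assumption that both files sit under `traces/comparison/` are mine; exact kernel names vary with PyTorch/NCCL versions, so inspect the event names in your traces and adjust the keyword accordingly.

```python
import json

def comm_time_ms(path, keyword="nccl"):
    """Total duration (ms) of complete events whose name mentions keyword.
    NOTE: matching on "nccl" is an assumption about kernel naming."""
    with open(path) as f:
        events = json.load(f).get("traceEvents", [])
    us = sum(e.get("dur", 0) for e in events
             if e.get("ph") == "X" and keyword in e.get("name", "").lower())
    return us / 1e3

adamw = comm_time_ms("traces/comparison/adamw_fullstep_rank0.json")
muon = comm_time_ms("traces/comparison/muon_fullstep_dp2_tp2_rank0.json")
print(f"AdamW comm: {adamw:.1f} ms | Muon comm: {muon:.1f} ms | "
      f"ratio: {muon / adamw:.2f}x")
```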
## How to Reproduce
To run these benchmarks yourself on a 4-GPU cluster:
- Clone this repository.
- Install dependencies: `torch`.
- Run the benchmark script:

  ```bash
  # This will generate new JSON traces in your local directory
  python analysis_scripts/muon_vs_adam.py
  ```

- Run the performance analysis on the included trace files:

  ```bash
  python analysis_scripts/performance_comparison.py
  ```
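The benchmark script handles trace capture for you. If you want to produce comparable traces from your own training step, the general pattern is the standard `torch.profiler` Chrome-trace export; this is a minimal sketch, not the exact code in `analysis_scripts/`, and `train_step` is a placeholder for your own forward/backward/optimizer step:

```python
import torch
from torch.profiler import profile, ProfilerActivity

def train_step():
    # Placeholder workload; substitute your forward/backward/optimizer.step().
    a = torch.randn(1024, 1024, device="cuda", requires_grad=True)
    (a @ a).sum().backward()

# Requires a CUDA device for the GPU timeline.
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    train_step()

# Exports the same Chrome trace format as the files under traces/;
# open the result in chrome://tracing or ui.perfetto.dev.
prof.export_chrome_trace("my_fullstep_rank0.json")
```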
## Acknowledgments
- Mahdi Chaker for generously providing GPU cluster access
- MoonShot AI team for open-sourcing their PoC implementation
## Citation

If you use these traces or analysis in your work, please cite:
```bibtex
@misc{wei2025muonreproducibility,
  author       = {Wei, Jen},
  title        = {Distributed Muon: Performance Artifacts and Benchmarks},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Datasets},
  howpublished = {\url{https://huggingface.co/datasets/bird-of-paradise/muon-distributed-reproducibility}}
}
```