---
license: mit
tags:
  - benchmark
  - systems-ml
  - distributed-training
  - muon
  - optimizer
  - performance-analysis
---

# 🔬 Distributed Muon: Field Notes & Reproducibility Artifacts

**Code, Performance Traces, and Analysis Logs**

This repository contains the raw engineering artifacts for the deep-dive investigation: "Reproducing and Validating Distributed Muon".

It serves as the proof of work for the performance claims regarding the Muon optimizer's communication efficiency and computational overhead in a distributed setting (Data Parallel + Tensor Parallel).

📄 **Read the Full Report:** Reproducing and Validating Distributed Muon 🐢✨: A Practical Verification of Communication Efficiency Claims

🛠️ **Get the Tutorial Code:** bird-of-paradise/muon-distributed


## 📂 Repository Structure

- `traces/`: Raw Chrome Trace (`.json`) files generated by the PyTorch Profiler. You can load these into `chrome://tracing` or ui.perfetto.dev to visualize the exact CPU/GPU execution timeline (a minimal profiling sketch follows this list).
  - `comparison/`: Side-by-side traces of AdamW vs. Muon (Hybrid DP=2/TP=2).
  - `distributed_muon/`: Scaling traces for DP=4, TP=4, and Hybrid configurations.
- `analysis_scripts/`: The exact Python scripts used to generate the traces and parse the performance metrics.
- `figures/`: High-resolution charts and trace visualizations used in the report.
- `report/`: A PDF archive of the full technical investigation.
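
For orientation, the files in `traces/` are standard PyTorch Profiler exports. The snippet below is a minimal sketch of how such a Chrome Trace can be produced; the model, batch, and output filename are illustrative placeholders, not the exact setup of `analysis_scripts/muon_vs_adam.py`.

```python
# Sketch: exporting a Chrome Trace with torch.profiler.
# Model, batch, and filename are illustrative, not the repo's benchmark setup.
import torch
from torch.profiler import ProfilerActivity, profile

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.AdamW(model.parameters())
batch = torch.randn(64, 1024, device="cuda")

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    for _ in range(5):
        loss = model(batch).sum()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Open the resulting file in chrome://tracing or ui.perfetto.dev
prof.export_chrome_trace("linear_adamw_rank0.json")
```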

πŸ” Key Findings (Verified in Traces)

The traces in this repository provide empirical evidence for the following:

1. **Communication Efficiency:** Muon (Hybrid DP=2/TP=2) incurs 0.57x the communication overhead of AdamW on a bandwidth-constrained cluster (PCIe Gen4 x4).
   - Evidence: Compare `traces/comparison/adamw_fullstep_rank0.json` vs. `muon_fullstep_dp2_tp2_rank0.json` (a trace-parsing sketch follows this list).
2. **Optimizer Latency:** The Muon step accounts for ~1.1% of total training time, supporting the paper's "negligible overhead" claim.
3. **Hybrid Scaling:** The DP=2, TP=2 configuration outperforms pure DP or pure TP on 4 GPUs, balancing memory bandwidth against communication overhead.
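
The communication figure above is read directly from the traces. As a rough illustration, a Chrome Trace can be reduced to a communication-time ratio along the lines below; the `"nccl"` name filter and the exact event fields are assumptions about the trace contents, not the logic of `analysis_scripts/performance_comparison.py`.

```python
# Sketch: summing GPU communication-kernel time in a Chrome Trace JSON.
# Assumes NCCL collectives can be identified by "nccl" in the event name;
# the repository's own analysis script may use different heuristics.
import json

def comm_time_us(trace_path: str) -> float:
    with open(trace_path) as f:
        events = json.load(f)["traceEvents"]
    # Complete events ("ph" == "X") carry a duration in microseconds ("dur").
    return sum(
        e.get("dur", 0)
        for e in events
        if e.get("ph") == "X" and "nccl" in e.get("name", "").lower()
    )

adamw = comm_time_us("traces/comparison/adamw_fullstep_rank0.json")
muon = comm_time_us("traces/comparison/muon_fullstep_dp2_tp2_rank0.json")
print(f"Muon/AdamW communication ratio: {muon / adamw:.2f}x")
```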

πŸ› οΈ How to Reproduce

To run these benchmarks yourself on a 4-GPU cluster:

1. Clone this repository.
2. Install the dependencies: `torch`.
3. Run the benchmark script:

   ```bash
   # This will generate new JSON traces in your local directory
   python analysis_scripts/muon_vs_adam.py
   ```

4. Run the performance analysis on the included trace files:

   ```bash
   python analysis_scripts/performance_comparison.py
   ```
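
For context on the hybrid runs above, one common way to carve 4 ranks into the DP=2 / TP=2 layout is with overlapping process groups. This is a generic sketch of that grouping, not the repository's actual implementation; with this layout, gradient all-reduces run inside each DP group while tensor-parallel collectives stay inside each TP group.

```python
# Sketch: one standard way to build DP=2 / TP=2 groups on 4 ranks
# (TP-major layout: TP groups [0,1] and [2,3]; DP groups [0,2] and [1,3]).
# The repository's scripts may construct their groups differently.
import torch.distributed as dist

def init_hybrid_groups(tp_size: int = 2):
    # Assumes the usual MASTER_ADDR / MASTER_PORT / RANK / WORLD_SIZE env vars.
    dist.init_process_group(backend="nccl")
    rank, world = dist.get_rank(), dist.get_world_size()
    dp_size = world // tp_size

    tp_group = dp_group = None
    # new_group() must be called on every rank for every group.
    for i in range(dp_size):
        g = dist.new_group(list(range(i * tp_size, (i + 1) * tp_size)))
        if rank // tp_size == i:
            tp_group = g
    for j in range(tp_size):
        g = dist.new_group(list(range(j, world, tp_size)))
        if rank % tp_size == j:
            dp_group = g
    return tp_group, dp_group
```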

πŸ™ Acknowledgments


## 📖 Citation

If you use these traces or analysis in your work, please cite:

```bibtex
@misc{wei2025muonreproducibility,
  author       = {Wei, Jen},
  title        = {Distributed Muon: Performance Artifacts and Benchmarks},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Datasets},
  howpublished = {\url{https://huggingface.co/datasets/bird-of-paradise/muon-distributed-reproducibility}}
}
```