bird-of-paradise committed on
Commit 74cbb3d · verified · 1 Parent(s): cde3af5

initial commit

Files changed (45)
  1. .gitattributes +8 -0
  2. README.md +78 -3
  3. analysis_scripts/muon_vs_adam.py +795 -0
  4. analysis_scripts/performance_comparison.py +126 -0
  5. figures/table_5.1.png +3 -0
  6. figures/table_5.2.png +3 -0
  7. figures/table_6.png +3 -0
  8. figures/table_7.png +3 -0
  9. figures/trace_adamw_FULLSTEP_rank0.png +3 -0
  10. figures/trace_adamw_rank0.png +3 -0
  11. figures/trace_muon_FULLSTEP_dp2_tp2_rank0.png +3 -0
  12. figures/trace_muon_dp2_tp2_rank0.png +3 -0
  13. report/Reproducing and Validating Distributed Muon 🐢✨_ A Practical Verification of Communication Efficiency Claims _ by Jennifer Wei _ Nov, 2025 _ Medium.pdf +3 -0
  14. traces/.DS_Store +0 -0
  15. traces/comparison/.DS_Store +0 -0
  16. traces/comparison/trace_adamw_FULLSTEP_async_rank0.json +0 -0
  17. traces/comparison/trace_adamw_FULLSTEP_rank0.json +0 -0
  18. traces/comparison/trace_adamw_FULLSTEP_rank1.json +0 -0
  19. traces/comparison/trace_adamw_FULLSTEP_rank2.json +0 -0
  20. traces/comparison/trace_adamw_FULLSTEP_rank3.json +0 -0
  21. traces/comparison/trace_adamw_rank0.json +0 -0
  22. traces/comparison/trace_adamw_rank1.json +0 -0
  23. traces/comparison/trace_adamw_rank2.json +0 -0
  24. traces/comparison/trace_adamw_rank3.json +0 -0
  25. traces/comparison/trace_muon_FULLSTEP_dp2_tp2_async_rank0.json +0 -0
  26. traces/comparison/trace_muon_FULLSTEP_dp2_tp2_rank0.json +0 -0
  27. traces/comparison/trace_muon_FULLSTEP_dp2_tp2_rank1.json +0 -0
  28. traces/comparison/trace_muon_FULLSTEP_dp2_tp2_rank2.json +0 -0
  29. traces/comparison/trace_muon_FULLSTEP_dp2_tp2_rank3.json +0 -0
  30. traces/comparison/trace_muon_dp2_tp2_rank0.json +0 -0
  31. traces/comparison/trace_muon_dp2_tp2_rank1.json +0 -0
  32. traces/comparison/trace_muon_dp2_tp2_rank2.json +0 -0
  33. traces/comparison/trace_muon_dp2_tp2_rank3.json +0 -0
  34. traces/distributed_muon/trace_1_4_rank0.json +3 -0
  35. traces/distributed_muon/trace_1_4_rank1.json +3 -0
  36. traces/distributed_muon/trace_1_4_rank2.json +3 -0
  37. traces/distributed_muon/trace_1_4_rank3.json +3 -0
  38. traces/distributed_muon/trace_2_2_rank0.json +0 -0
  39. traces/distributed_muon/trace_2_2_rank1.json +0 -0
  40. traces/distributed_muon/trace_2_2_rank2.json +3 -0
  41. traces/distributed_muon/trace_2_2_rank3.json +3 -0
  42. traces/distributed_muon/trace_4_1_rank0.json +0 -0
  43. traces/distributed_muon/trace_4_1_rank1.json +0 -0
  44. traces/distributed_muon/trace_4_1_rank2.json +0 -0
  45. traces/distributed_muon/trace_4_1_rank3.json +3 -0
.gitattributes CHANGED
@@ -57,3 +57,11 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
57
  # Video files - compressed
58
  *.mp4 filter=lfs diff=lfs merge=lfs -text
59
  *.webm filter=lfs diff=lfs merge=lfs -text
60
+ report/Reproducing[[:space:]]and[[:space:]]Validating[[:space:]]Distributed[[:space:]]Muon[[:space:]]🐢✨_[[:space:]]A[[:space:]]Practical[[:space:]]Verification[[:space:]]of[[:space:]]Communication[[:space:]]Efficiency[[:space:]]Claims[[:space:]]_[[:space:]]by[[:space:]]Jennifer[[:space:]]Wei[[:space:]]_[[:space:]]Nov,[[:space:]]2025[[:space:]]_[[:space:]]Medium.pdf filter=lfs diff=lfs merge=lfs -text
61
+ traces/distributed_muon/trace_1_4_rank0.json filter=lfs diff=lfs merge=lfs -text
62
+ traces/distributed_muon/trace_1_4_rank1.json filter=lfs diff=lfs merge=lfs -text
63
+ traces/distributed_muon/trace_1_4_rank2.json filter=lfs diff=lfs merge=lfs -text
64
+ traces/distributed_muon/trace_1_4_rank3.json filter=lfs diff=lfs merge=lfs -text
65
+ traces/distributed_muon/trace_2_2_rank2.json filter=lfs diff=lfs merge=lfs -text
66
+ traces/distributed_muon/trace_2_2_rank3.json filter=lfs diff=lfs merge=lfs -text
67
+ traces/distributed_muon/trace_4_1_rank3.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,78 @@
1
- ---
2
- license: mit
3
- ---
1
+ ---
2
+ license: mit
3
+ tags:
4
+ - benchmark
5
+ - systems-ml
6
+ - distributed-training
7
+ - muon
8
+ - optimizer
9
+ - performance-analysis
10
+ ---
11
+
12
+ # 🔬 Distributed Muon: Field Notes & Reproducibility Artifacts
13
+
14
+ **Code, Performance Traces, and Analysis Logs**
15
+
16
+ This repository contains the raw engineering artifacts for the deep-dive investigation: **"Reproducing and Validating Distributed Muon"**.
17
+
18
+ It serves as the **proof of work** for the performance claims regarding the Muon optimizer's communication efficiency and computational overhead in a distributed setting (Data Parallel + Tensor Parallel).
19
+
20
+ 📄 **Read the Full Report:** [Reproducing and Validating Distributed Muon 🐢✨: A Practical Verification of Communication Efficiency Claims](https://medium.com/@jenwei0312/reproducing-and-validating-distributed-muon-a-practical-verification-of-communication-0be4d1d9b893)
21
+ 🛠️ **Get the Tutorial Code:** [bird-of-paradise/muon-distributed](https://huggingface.co/datasets/bird-of-paradise/muon-distributed)
22
+
23
+ ---
24
+
25
+ ## 📂 Repository Structure
26
+
27
+ * **`traces/`**: Raw Chrome Trace (`.json`) files generated by PyTorch Profiler. You can load these into `chrome://tracing` or [ui.perfetto.dev](https://ui.perfetto.dev) to visualize the exact CPU/GPU execution timeline, or inspect them programmatically (see the snippet after this list).
28
+ * `comparison/`: Side-by-side traces of AdamW vs. Muon (Hybrid DP=2/TP=2).
29
+ * `distributed_muon/`: Scaling traces for DP=4, TP=4, and Hybrid configurations.
30
+ * **`analysis_scripts/`**: The exact Python scripts used to generate the traces and parse the performance metrics.
31
+ * **`figures/`**: High-resolution charts and trace visualizations used in the report.
32
+ * **`report/`**: A PDF archive of the full technical investigation.
33
+
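+ If you prefer to inspect a trace programmatically instead of in the browser, each file is plain Chrome-trace JSON. A minimal peek (file name taken from `traces/comparison/` above):
+
+ ```python
+ import json
+
+ with open("traces/comparison/trace_muon_FULLSTEP_dp2_tp2_rank0.json") as f:
+     trace = json.load(f)
+
+ events = trace["traceEvents"]
+ print(len(events), "events")
+ print(sorted({e.get("name", "") for e in events})[:10])  # sample of event names
+ ```
+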
34
+ ---
35
+
36
+ ## 🔍 Key Findings (Verified in Traces)
37
+
38
+ The traces in this repository provide empirical evidence for the following:
39
+
40
+ 1. **Communication Efficiency:** Muon (Hybrid DP2/TP2) demonstrates **0.57x** the communication overhead of AdamW on a bandwidth-constrained cluster (PCIe Gen4 x4).
41
+ * *Evidence:* Compare `traces/comparison/trace_adamw_FULLSTEP_rank0.json` vs `trace_muon_FULLSTEP_dp2_tp2_rank0.json`; a parsing sketch follows this list.
42
+ 2. **Optimizer Latency:** The Muon step accounts for **~1.1%** of total training time, validating the paper's "negligible overhead" claim.
43
+ 3. **Hybrid Scaling:** The `DP=2, TP=2` configuration outperforms pure DP or pure TP on 4 GPUs, balancing memory bandwidth with communication overhead.
44
+
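+ The communication ratio above can be spot-checked directly from the bundled traces. The sketch below is a condensed version of what `analysis_scripts/performance_comparison.py` does: it sums the durations of the NCCL collective events (all_reduce, all_gather, reduce_scatter, broadcast) in each FULLSTEP trace and takes the ratio:
+
+ ```python
+ import json
+
+ def total_comm_ms(path):
+     with open(path) as f:
+         events = json.load(f)["traceEvents"]
+     keys = ("all_reduce", "allreduce", "all_gather", "allgather",
+             "reduce_scatter", "reducescatter", "broadcast")
+     return sum(e["dur"] for e in events
+                if "dur" in e and any(k in e.get("name", "").lower() for k in keys)) / 1000
+
+ muon = total_comm_ms("traces/comparison/trace_muon_FULLSTEP_dp2_tp2_rank0.json")
+ adam = total_comm_ms("traces/comparison/trace_adamw_FULLSTEP_rank0.json")
+ print(f"Muon/AdamW communication ratio: {muon / adam:.2f}x")  # should land near the 0.57x reported above
+ ```
+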
45
+ ---
46
+
47
+ ## 🛠️ How to Reproduce
48
+
49
+ To run these benchmarks yourself on a 4-GPU cluster:
50
+
51
+ 1. Clone this repository.
52
+ 2. Install dependencies: `torch` (a CUDA build; the script uses the NCCL backend and expects 4 GPUs).
53
+ 3. Run the benchmark script:
54
+
55
+ ```bash
56
+ # This will generate new JSON traces in your local directory
57
+ python analysis_scripts/muon_vs_adam.py
58
+ ```
59
+
60
+ 4. Run the performance analysis on the included trace files:
61
+ ```bash
62
+ python analysis_scripts/performance_comparison.py
63
+ ```
64
+
65
+ ---
66
+
67
+ ## 📖 Citation
+
+ If you use these traces or analysis in your work, please cite:
+
+ @misc{wei2025muonreproducibility,
+   author       = {Wei, Jen},
+   title        = {Distributed Muon: Performance Artifacts and Benchmarks},
+   year         = {2025},
+   publisher    = {Hugging Face},
+   journal      = {Hugging Face Datasets},
+   howpublished = {\url{https://huggingface.co/datasets/bird-of-paradise/muon-distributed-reproducibility}}
+ }
analysis_scripts/muon_vs_adam.py ADDED
@@ -0,0 +1,795 @@
1
+ # megatron/core/optimizer/muon.py
2
+ from typing import Tuple, Dict
3
+ import torch
4
+ import math
5
+ import torch.distributed as dist
6
+
7
+ import os
+ import sys
+ import torch.multiprocessing as mp
14
+
15
+ from torch.profiler import profile, record_function, ProfilerActivity
16
+ import time
17
+
18
+
19
+ # copy from https://github.com/KellerJordan/Muon/tree/master
20
+ # @torch.compile
21
+ def zeropower_via_newtonschulz5(G, steps):
22
+ """
23
+ Newton-Schulz iteration to compute the zeroth power / orthogonalization of G. We opt to use a
24
+ quintic iteration whose coefficients are selected to maximize the slope at zero. For the purpose
25
+ of minimizing steps, it turns out to be empirically effective to keep increasing the slope at
26
+ zero even beyond the point where the iteration no longer converges all the way to one everywhere
27
+ on the interval. This iteration therefore does not produce UV^T but rather something like US'V^T
28
+ where S' is diagonal with S_{ii}' ~ Uniform(0.5, 1.5), which turns out not to hurt model
29
+ performance at all relative to UV^T, where USV^T = G is the SVD.
30
+ """
31
+ assert len(G.shape) == 2
32
+ a, b, c = (3.4445, -4.7750, 2.0315)
33
+ X = G
34
+ if G.size(0) > G.size(1):
35
+ X = X.T
36
+
37
+ # Ensure spectral norm is at most 1
38
+ X = X / (X.norm() + 1e-7)
39
+ # Perform the NS iterations
40
+ for _ in range(steps):
41
+ A = X @ X.T
42
+ B = b * A + c * A @ A # adapted from suggestion by @jxbz, @leloykun, and @YouJiacheng
43
+ X = a * X + B @ X
44
+
45
+ if G.size(0) > G.size(1):
46
+ X = X.T
47
+ return X
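+ # Illustrative sanity check (a sketch, not used by the optimizer): the singular
+ # values of the Newton-Schulz output should land roughly in [0.5, 1.5] (the S'
+ # scaling described in the docstring above) rather than exactly at 1.
+ def _check_newtonschulz_orthogonality(rows=512, cols=256, steps=5):
+     G = torch.randn(rows, cols, device="cuda", dtype=torch.bfloat16)
+     X = zeropower_via_newtonschulz5(G, steps)
+     return torch.linalg.svdvals(X.float())  # values scattered around ~1, not exactly 1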
48
+
49
+ def normalize_range(range: Tuple[int, int], start):
50
+ return (range[0] - start, range[1] - start)
51
+
52
+ class MuonDistMeta:
53
+
54
+ # which buffer and bucket param belongs to
55
+ buffer_idx: int = 0
56
+ bucket_idx: int = 0
57
+ # param shape after tp
58
+ shape: torch.Size = None
59
+ # param location in global buffer
60
+ global_range: Tuple[int, int] = None
61
+ tp_split_dim: int = -1
62
+ # param location in global buffer (current dp slice)
63
+ local_range: Tuple[int, int] = None
64
+
65
+ def __init__(self, buffer_idx: int, bucket_idx: int, shape: torch.Size, global_range: Tuple[int, int], tp_split_dim: int):
66
+ self.buffer_idx = buffer_idx
67
+ self.bucket_idx = bucket_idx
68
+ self.shape = shape
69
+ self.global_range = global_range
70
+ self.tp_split_dim = tp_split_dim
71
+
72
+ def set_local_buffer_range(self, local_buffer_range: Tuple[int, int]):
73
+ start = max(self.global_range[0], local_buffer_range[0])
74
+ end = min(self.global_range[1], local_buffer_range[1])
75
+ self.local_range = (start, end) if start < end else (local_buffer_range[0], local_buffer_range[0])
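+ # e.g. global_range=(100, 300) with local_buffer_range=(200, 400) yields local_range=(200, 300);
+ # if the two ranges do not overlap, local_range collapses to an empty range.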
76
+
77
+ # adjust LR based on: https://github.com/MoonshotAI/Moonlight
78
+ def adjust_lr_wd_for_muon(lr, matched_adamw_rms, param_shape):
79
+ A, B = param_shape[:2]
80
+ adjusted_ratio = math.sqrt(max(A, B)) * matched_adamw_rms
81
+ adjusted_lr = lr * adjusted_ratio
82
+ return adjusted_lr
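+ # Worked example: for a (4096, 4096) weight with lr=0.02 and matched_adamw_rms=0.2,
+ # the scale is sqrt(4096) * 0.2 = 12.8, so adjusted_lr = 0.02 * 12.8 = 0.256.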
83
+
84
+ # copy from https://github.com/KellerJordan/Muon/tree/master and support distributed solution
85
+ class Muon(torch.optim.Optimizer):
86
+ """
87
+ Muon - MomentUm Orthogonalized by Newton-schulz
88
+ Muon internally runs standard SGD-momentum, and then performs an orthogonalization post-
89
+ processing step, in which each 2D parameter's update is replaced with the nearest orthogonal
90
+ matrix. To efficiently orthogonalize each update, we use a Newton-Schulz iteration, which has
91
+ the advantage that it can be stably run in bfloat16 on the GPU.
92
+ Some warnings:
93
+ - We believe this optimizer is unlikely to work well for training with small batch size.
94
+ - We believe it may not work well for finetuning pretrained models, but we haven't tested this.
95
+ Arguments:
96
+ param_groups: The parameters to be optimized.
97
+ lr: The learning rate. The updates will have spectral norm of `lr`. (0.02 is a good default)
98
+ momentum: The momentum used by the internal SGD. (0.95 is a good default)
99
+ matched_adamw_rms: The AdamW Update RMS that Muon is designed to match. (0.2~0.4 recommended)
100
+ nesterov: Whether to use Nesterov-style momentum in the internal SGD. (recommended)
101
+ ns_steps: The number of Newton-Schulz iterations to run. (5 is probably always enough)
102
+ Parameters that are {0, 1}-D or are detected as being the embed or lm_head will be optimized by AdamW as well.
103
+ adamw_betas: The betas for the internal AdamW.
104
+ adamw_eps: The epsilon for the internal AdamW.
105
+ adamw_wd: The weight decay for the internal AdamW.
106
+ """
107
+ def __init__(self, param_groups, lr=2e-2, weight_decay=0.1,
108
+ matched_adamw_rms=0.2, momentum=0.95, nesterov=True, ns_steps=5,
109
+ adamw_betas=(0.95, 0.95), adamw_eps=1e-8):
110
+
111
+ defaults = dict(lr=lr, weight_decay=weight_decay,
112
+ matched_adamw_rms=matched_adamw_rms,
113
+ momentum=momentum, nesterov=nesterov, ns_steps=ns_steps,
114
+ adamw_betas=adamw_betas, adamw_eps=adamw_eps,)
115
+
116
+ super().__init__(param_groups, defaults)
117
+ self.distributed_mode = False
118
+
119
+
120
+ def enable_distributed_mode(self, global_buffer_sizes, dist_group, tp_group,
121
+ dist_metas: Dict[torch.nn.Parameter, MuonDistMeta]):
122
+ """
123
+ enable distributed mode
124
+ Args:
125
+ global_buffer_size: global buffer size
126
+ dist group: optimizer sharding group
127
+ tp group: param tp group
128
+ dist metas: dist metas for all param
129
+ """
130
+
131
+ self.global_buffer_sizes = global_buffer_sizes
132
+ self.dist_group = dist_group
133
+ self.tp_group = tp_group
134
+ self.dist_metas = dist_metas
135
+
136
+ world_size = dist.get_world_size(dist_group)
137
+ rank = dist.get_rank(dist_group)
138
+
139
+ # calc local buffer range
140
+ self.local_buffer_sizes = []
141
+ self.local_buffer_ranges = []
142
+ # The outer loop is for different parameter groups (e.g., weights vs. biases)
143
+ for global_bucket_sizes in global_buffer_sizes: # <--- rename `global_bucket_sizes`
144
+ local_bucket_sizes = []
145
+ local_bucket_ranges = []
146
+
147
+ # The inner loop is for the different buckets within a single group
148
+ for (global_bucket_size, bucket_offset) in global_bucket_sizes:
149
+ # calculate the local range for THIS specific bucket
150
+ assert global_bucket_size % world_size == 0
151
+ local_bucket_size = global_bucket_size // world_size
152
+ # Renaming here makes the logic so much clearer
153
+ local_bucket_start = local_bucket_size * rank + bucket_offset
154
+ local_buffer_range = (local_bucket_start, local_bucket_start + local_bucket_size)
155
+ local_bucket_sizes.append(local_bucket_size)
156
+ local_bucket_ranges.append(local_buffer_range)
157
+
158
+ self.local_buffer_sizes.append(local_bucket_sizes)
159
+ self.local_buffer_ranges.append(local_bucket_ranges)
160
+
161
+ # calc local range for params
162
+ for dist_meta in dist_metas.values():
163
+ local_buffer_range = self.local_buffer_ranges[dist_meta.buffer_idx][dist_meta.bucket_idx]
164
+ dist_meta.set_local_buffer_range(local_buffer_range)
165
+
166
+ self.distributed_mode = True
167
+
168
+ def step(self):
169
+
170
+ dtype = torch.bfloat16
171
+ device = torch.cuda.current_device()
172
+
173
+ ns_inputs = {}
174
+
175
+ # update muon momentum first
176
+ # `self.param_groups` is already sharded
177
+ for group in self.param_groups:
178
+
179
+ if not group.get("use_muon", False):
180
+ continue
181
+
182
+ momentum = group['momentum']
183
+ params = group["params"]
184
+
185
+ for p in params:
186
+
187
+ g = p.grad
188
+ assert g is not None
189
+ # 1-dim grad for distributed mode
190
+ assert self.distributed_mode or g.dim() == 2
191
+
192
+ # prepare muon buffer in state
193
+ state = self.state[p]
194
+ if "muon_buffer" not in state:
195
+ state["muon_buffer"] = torch.zeros_like(g)
196
+ buf = state["muon_buffer"]
197
+ buf.mul_(momentum).add_(g)
198
+
199
+ # save to ns input
200
+ g = g.add(buf, alpha=momentum) if group['nesterov'] else buf
201
+ ns_inputs[p] = g.bfloat16()
202
+
203
+ # rewrite ns_inputs if distributed
204
+ """
205
+ the four communication hops (plus the compute in the middle) of the "acrobatic" journey of the ns_inputs data:
206
+
207
+ 1. **DP `all_gather`**: (ZeRO) Gather all the sharded pieces from your data-parallel "column" to re-create your **full TP slice**.
208
+ 2. **TP `all_gather`**: Gather all the TP slices from your tensor-parallel "row" to re-create the **full, 100% complete matrix**.
209
+ 3. *(...Run the math on the full matrix...)*
210
+ 4. **TP `shard`**: Shard the full `update` matrix back down to your **local TP slice**.
211
+ 5. **DP `shard`**: (ZeRO) Shard that TP slice *again* back down to the **local DP/ZeRO slice** that you're responsible for.
212
+
213
+ """
214
+ if self.distributed_mode:
215
+
216
+ # initialize buffers
217
+ # changed the variable names to `local_bucket_size` and `global_bucket_size` for clarity
218
+ ns_input_local_buffers = [
219
+ [ torch.empty((local_bucket_size), device=device, dtype=dtype)
220
+ for local_bucket_size in local_bucket_sizes ]
221
+ for local_bucket_sizes in self.local_buffer_sizes
222
+ ]
223
+ ns_input_global_buffers = [
224
+ [ torch.empty((global_bucket_size), device=device, dtype=dtype)
225
+ for (global_bucket_size, bucket_offset) in global_bucket_sizes ]
226
+ for global_bucket_sizes in self.global_buffer_sizes
227
+ ]
228
+
229
+ # fill ns input data to local buffer
230
+ # looping through all params in local rank, ok.
231
+ for param, ns_input in ns_inputs.items():
232
+ dist_meta = self.dist_metas[param]
233
+ # create a reference to `ns_input_local_buffers`
234
+ # the update is in local rank, so we only need one `for` loop
235
+ ns_input_local_buffer = ns_input_local_buffers[dist_meta.buffer_idx][dist_meta.bucket_idx]
236
+ local_buffer_range = self.local_buffer_ranges[dist_meta.buffer_idx][dist_meta.bucket_idx]
237
+ local_range = normalize_range(dist_meta.local_range, local_buffer_range[0]) # local_range in global_range
238
+ # copy data into this `ns_input_local_buffer` memory
239
+ # because dist.all_gather requires a single, physically contiguous block of memory to work efficiently.
240
+ ns_input_local_buffer[local_range[0]:local_range[1]].copy_(ns_input.view(-1))
241
+
242
+ # all gather buffers: one bucket at a time. -- the "shipping" phase
243
+ for ns_input_global_buffer, ns_input_local_buffer in zip(ns_input_global_buffers, ns_input_local_buffers):
244
+ for ns_input_global_bucket, ns_input_local_bucket in zip(ns_input_global_buffer, ns_input_local_buffer):
245
+ dist.all_gather_into_tensor(ns_input_global_bucket, ns_input_local_bucket, group=self.dist_group)
246
+
247
+ # overwrite ns input with the `all_gather`-ed `ns_inputs` -- the "unpacking" phase
248
+ # this is the "opposite" of filling ns input data to local buffer
249
+ for p in ns_inputs.keys():
250
+ dist_meta = self.dist_metas[p]
251
+ ns_input_global_buffer = ns_input_global_buffers[dist_meta.buffer_idx][dist_meta.bucket_idx]
252
+ offset = self.global_buffer_sizes[dist_meta.buffer_idx][dist_meta.bucket_idx][1]
253
+ global_range = normalize_range(dist_meta.global_range, offset)
254
+
255
+ #ns_inputs[p] = ns_input_global_buffer[global_range[0]:global_range[1]].view(-1)
256
+ ## bug fix 👆🏻-- overwrite ns input with the `all_gather`-ed `ns_inputs` -- the "unpacking" phase
257
+ #ns_inputs[p] = ns_input_global_buffer[global_range[0]:global_range[1]].view(-1)
258
+ # Unpack the 1D slice of data
259
+ unpacked_data = ns_input_global_buffer[global_range[0]:global_range[1]]
260
+
261
+ # THIS IS THE FIX: Reshape it to its correct 2D shape, not view(-1)
262
+ ns_inputs[p] = unpacked_data.view(dist_meta.shape)
263
+
264
+ # set tp info
265
+ tp_world_size = dist.get_world_size(self.tp_group)
266
+ tp_rank = dist.get_rank(self.tp_group)
267
+
268
+ # update muon momentum first
269
+ for group in self.param_groups:
270
+
271
+ if not group.get('use_muon', False):
272
+ continue
273
+
274
+ lr = group["lr"]
275
+ ns_steps = group["ns_steps"]
276
+ weight_decay = group["weight_decay"]
277
+ matched_adamw_rms = group["matched_adamw_rms"]
278
+ params = group["params"] # <-- add this
279
+
280
+ for p in params:
281
+
282
+ ns_input = ns_inputs[p]
283
+ tp_split_dim = -1
284
+
285
+ if self.distributed_mode:
286
+ dist_meta = self.dist_metas[p]
287
+ tp_split_dim = dist_meta.tp_split_dim
288
+
289
+ # gather tensor parallel ( if tp )
290
+ if tp_split_dim != -1:
291
+ ns_input_shards = [ torch.empty_like(ns_input) for _ in range(tp_world_size) ]
292
+ dist.all_gather(ns_input_shards, ns_input, self.tp_group)
293
+ ns_input = torch.cat(ns_input_shards, dim=tp_split_dim)
294
+
295
+ # calc update
296
+ update = zeropower_via_newtonschulz5(ns_input, steps=ns_steps)
297
+
298
+ # only local tp part
299
+ # this is effectively "sharding" the Newton-Schulz-processed update,
300
+ # keeping only your assigned piece and discarding the rest
301
+ if tp_split_dim != -1:
302
+ update = update.chunk(tp_world_size, dim=tp_split_dim)[tp_rank]
303
+
304
+ # only local dp buffer part
305
+ if self.distributed_mode:
306
+ # local range in global range
307
+ # unpacking the tp sharded update to dp sharded update
308
+ local_range = normalize_range(dist_meta.local_range, dist_meta.global_range[0])
309
+ update = update.reshape(-1)[local_range[0]:local_range[1]]
310
+
311
+ # apply weight decay
312
+ p.data.mul_(1 - lr*weight_decay)
313
+
314
+ # adjust lr and apply update
315
+ adjusted_lr = adjust_lr_wd_for_muon(lr, matched_adamw_rms, ns_input.shape)
316
+ p.data.add_(update, alpha=-adjusted_lr)
317
+
318
+ # use adam for other params
319
+ for group in self.param_groups:
320
+
321
+ if group.get('use_muon', False):
322
+ continue
323
+
324
+ # init step
325
+ if 'step' in group:
326
+ group['step'] += 1
327
+ else:
328
+ group['step'] = 1
329
+
330
+ step = group['step']
331
+ params = group["params"]
332
+ lr = group['lr']
333
+ weight_decay = group['weight_decay']
334
+ beta1, beta2 = group['adamw_betas']
335
+ eps = group['adamw_eps']
336
+
337
+ for p in params:
338
+
339
+ g = p.grad
340
+ assert g is not None
341
+ state = self.state[p]
342
+
343
+ if len(state) == 0:
344
+ state['adamw_exp_avg'] = torch.zeros_like(g)
345
+ state['adamw_exp_avg_sq'] = torch.zeros_like(g)
346
+
347
+ buf1 = state['adamw_exp_avg']
348
+ buf2 = state['adamw_exp_avg_sq']
349
+ buf1.lerp_(g, 1-beta1)
350
+ buf2.lerp_(g.square(), 1-beta2)
351
+
352
+ g = buf1 / (eps + buf2.sqrt())
353
+
354
+ bias_correction1 = 1 - beta1**step
355
+ bias_correction2 = 1 - beta2**step
356
+ scale = bias_correction1 / bias_correction2**0.5
357
+ p.data.mul_(1 - lr * weight_decay)
358
+ p.data.add_(g, alpha=-lr/scale)
359
+
360
+
361
+ ##--------------- tests/unit_tests/test_optimizer_muon.py -----------------
362
+ import os
363
+
364
+ import torch
365
+ import torch.distributed as dist
366
+
367
+ #from megatron.core.optimizer.muon import Muon, MuonDistMeta, normalize_range
368
+
369
+ def is_rank_0():
370
+ return torch.distributed.get_rank() == 0
371
+
372
+ def print_rank_0(*args):
373
+ if is_rank_0():
374
+ print(*args)
375
+
376
+ def cdiv(x: int, y: int):
377
+ return (x + y - 1) // y
378
+
379
+ def gen_param_and_grads():
380
+
381
+ # reset manual seed
382
+ torch.manual_seed(0)
383
+ torch.cuda.manual_seed(0)
384
+ device = 'cuda'
385
+ dtype = torch.float32
386
+
387
+ # gen params
388
+ params = [ torch.randn(shape, device=device, dtype=dtype) for shape in [
389
+ (4096, 4096), (1024, 324), (456, 1024), (676, 876), (128, 128), ] ]
390
+
391
+ # gen grads [ [ grad-list ] * step ]
392
+ grads = [ [ torch.randn_like(param) for param in params ] for _ in range(10) ]
393
+
394
+ return params, grads
395
+
396
+ def distribute_params(params, grads, tp_dims, dist_group, tp_group):
397
+ """ Shard params across the dist (DP) and TP groups, keeping only this rank's slice """
398
+
399
+ params = params.copy()
400
+ grads = [ step_grads.copy() for step_grads in grads ]
401
+
402
+ # tp dist
403
+ tp_size = dist.get_world_size(tp_group)
404
+ tp_rank = dist.get_rank(tp_group)
405
+ for i, param in enumerate(params):
406
+ tp_dim = tp_dims[i]
407
+ if tp_dim == -1:
408
+ continue
409
+ # Shard the parameter tensor along the `tp_dim` dimension.
410
+ assert param.shape[tp_dim] % tp_size == 0
411
+ local_range_start = param.shape[tp_dim] // tp_size * tp_rank
412
+ # range of the shard based on the rank of the current GPU in the given `tp_group`
413
+ local_range_end = param.shape[tp_dim] // tp_size * (tp_rank + 1)
414
+ # each GPU gets `[local_range_start:local_range_end, :] ` rows or `[:, local_range_start:local_range_end]` columns
415
+ params[i] = param[local_range_start:local_range_end, :] if tp_dim == 0 else \
416
+ param[:, local_range_start:local_range_end].contiguous()
417
+ # same logic applies to sharding the gradients for the current layer(param)
418
+ for step_grads in grads:
419
+ step_grads[i] = step_grads[i][local_range_start:local_range_end, :] if tp_dim == 0 else \
420
+ step_grads[i][:, local_range_start:local_range_end].contiguous()
421
+
422
+ # distributed
423
+ world_size = dist.get_world_size(dist_group)
424
+ rank = dist.get_rank(dist_group)
425
+
426
+ # global as the given DP group
427
+ # "global" here means "global to the TP group's worth of parameters."
428
+ global_buffer_size = sum(param.numel() for param in params)
429
+ local_buffer_size = cdiv(global_buffer_size, world_size)
430
+ # deciding the shard range for this rank
431
+ local_buffer_range = (local_buffer_size * rank, local_buffer_size * (rank + 1))
432
+ # padded global_buffer_size
433
+ global_buffer_size = local_buffer_size * world_size # fix global buffer size
434
+
435
+ numel_acc = 0
436
+ dist_params = []
437
+ dist_grads = [[] for _ in grads]
438
+ dist_metas = {}
439
+ for i, param in enumerate(params):
440
+
441
+ # gen meta
442
+ # align global buffer index(range) with local buffer index(range)
443
+ # see handwritten diagram for more details
444
+ numel = param.numel()
445
+ dist_meta = MuonDistMeta(0, 0, param.shape, (numel_acc, numel_acc + numel), tp_dims[i])
446
+ dist_meta.set_local_buffer_range(local_buffer_range)
447
+ numel_acc += numel
448
+
449
+ # skip if no element in this shard
450
+ if dist_meta.local_range[0] == dist_meta.local_range[1]:
451
+ continue
452
+
453
+ # gen param
454
+
455
+ # Convert the ABSOLUTE slice range (from the global virtual buffer)
456
+ # into a RELATIVE slice range (local to just this one parameter).
457
+ local_range = normalize_range(dist_meta.local_range, dist_meta.global_range[0])
458
+
459
+ # 1. Flatten the 2D parameter tensor into a 1D vector.
460
+ # 2. Use the relative range to slice out the piece this GPU is responsible for storing.
461
+ dist_param = param.view(-1)[local_range[0]:local_range[1]]
462
+ dist_params.append(dist_param)
463
+ dist_metas[dist_param] = dist_meta
464
+
465
+ # gen grad
466
+ # same logic as the `gen param` section
467
+ for step, step_grads in enumerate(grads):
468
+ dist_grad = step_grads[i].view(-1)[local_range[0]:local_range[1]]
469
+ dist_grads[step].append(dist_grad)
470
+
471
+ return dist_params, dist_grads, global_buffer_size, dist_metas
472
+
473
+
474
+
475
+
476
+ def test_muon_dist(dp_size, tp_size):
477
+
478
+ world_size = dist.get_world_size()
479
+ rank = dist.get_rank()
480
+ assert dp_size * tp_size == world_size
481
+
482
+ # init dist group
483
+ for i in range(tp_size):
484
+ # build each dist (DP) group from ranks spaced `tp_size` apart
485
+ ranks = range(i, world_size, tp_size)
486
+ group = dist.new_group(ranks)
487
+ # each rank finds its groups
488
+ if rank in ranks:
489
+ # groups are passed as instructions
490
+ dist_group = group
491
+ # init tp group
492
+ for i in range(dp_size):
493
+ ranks = range(i * tp_size, (i + 1) * tp_size)
494
+ group = dist.new_group(ranks)
495
+ if rank in ranks:
496
+ tp_group = group
497
+
498
+ print_rank_0("process group initialized")
499
+
500
+ params_ref, grads_ref = gen_param_and_grads()
501
+ params_test, grads_test = gen_param_and_grads()
502
+ tp_dims = [0, 1, -1, 1, 0]
503
+ #tp_dims = [1, 0, -1, 0, 1]
504
+
505
+ # global_buffer_size is the padded buffer size of the dp group where the current rank belongs to
506
+ params_test, grads_test, global_buffer_size, dist_metas \
507
+ = distribute_params(params_test, grads_test, tp_dims, dist_group, tp_group)
508
+
509
+ muon_args = {
510
+ "use_muon": True,
511
+ "lr": 0.1,
512
+ "momentum": 0.9,
513
+ "nesterov": True,
514
+ "ns_steps": 5,
515
+ "weight_decay": 0.1,
516
+ }
517
+
518
+ # gen params
519
+ ref_param_groups = [{
520
+ "params": params_ref,
521
+ **muon_args
522
+ }]
523
+ test_param_groups = [{
524
+ "params": params_test,
525
+ **muon_args
526
+ }]
527
+
528
+ ref_muon = Muon(ref_param_groups)
529
+ test_muon = Muon(test_param_groups)
530
+ test_muon.enable_distributed_mode([[(global_buffer_size, 0)]], dist_group, tp_group, dist_metas)
531
+
532
+ for step in range(10):
533
+
534
+ # add grad
535
+ for i, grad in enumerate(grads_ref[step]):
536
+ params_ref[i].grad = grad.clone()
537
+ for i, grad in enumerate(grads_test[step]):
538
+ params_test[i].grad = grad.clone()
539
+ # step
540
+ ref_muon.step()
541
+ test_muon.step()
542
+
543
+ # distribute ref params
544
+ dist_ref_params, _, _, _ = distribute_params(params_ref, [], tp_dims, dist_group, tp_group)
545
+ # verify
546
+ for i, params_x2 in enumerate(zip(dist_ref_params, params_test)):
547
+ assert (params_x2[0] == params_x2[1]).all(), f"rank {rank} param {i} verify failed"
548
+ print_rank_0(f" - step {step} verify passed")
549
+
550
+ print_rank_0(f"dist dp = {dp_size} tp = {tp_size} test passed")
551
+
552
+
553
+ from torch.profiler import profile, record_function, ProfilerActivity
554
+ #-------------------------- benchmarks/added for benchmark_muon_vs_adam.py -----------------
555
+
556
+ def gen_param_and_grads():
557
+ # reset manual seed
558
+ torch.manual_seed(0)
559
+ torch.cuda.manual_seed(0)
560
+ device = 'cuda'
561
+ # Using float32 as input (Muon will cast internally, AdamW uses as is)
562
+ dtype = torch.float32
563
+
564
+ # gen params (LLM Sized)
565
+ params = [ torch.randn(shape, device=device, dtype=dtype) for shape in [
566
+ (4096, 4096), (1024, 324), (456, 1024), (676, 876), (128, 128), ] ]
567
+
568
+ # gen grads [ [ grad-list ] * step ]
569
+ grads = [ [ torch.randn_like(param) for param in params ] for _ in range(5) ] # 5 steps is enough
570
+
571
+ return params, grads
572
+
573
+ # backward + optimizer step only
574
+
575
+ def benchmark_adamw(rank, world_size, steps=5):
576
+ print_rank_0(f"🥊 Starting Round 1: AdamW (Standard DDP Simulation)...")
577
+ params, grads_list = gen_param_and_grads()
578
+
579
+ # Standard AdamW setup
580
+ optimizer = torch.optim.AdamW(params, lr=1e-3)
581
+
582
+ # Warmup
583
+ for p, g in zip(params, grads_list[0]):
584
+ p.grad = g
585
+ # Simulate DDP: All-Reduce gradients
586
+ dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
587
+ p.grad /= world_size
588
+ optimizer.step()
589
+ optimizer.zero_grad()
590
+
591
+ # Profile
592
+ with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA], record_shapes=True) as prof:
593
+ with record_function("AdamW_Battle"):
594
+ for step in range(steps):
595
+ # 1. Simulate Backward Pass (Gradient Available)
596
+ for i, p in enumerate(params):
597
+ p.grad = grads_list[step][i]
598
+
599
+ # 2. Simulate DDP Communication (The cost of AdamW comms)
600
+ with record_function("AdamW_Comm_AllReduce"):
601
+ for p in params:
602
+ dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
603
+ p.grad /= world_size
604
+
605
+ # 3. Optimizer Step (Should be fast/local)
606
+ with record_function("AdamW_Step"):
607
+ optimizer.step()
608
+ optimizer.zero_grad()
609
+
610
+ prof.export_chrome_trace(f"trace_adamw_rank{rank}.json")
611
+ print_rank_0("✅ AdamW Round Finished.")
612
+
613
+ def setup_process_groups(dp_size, tp_size):
614
+ world_size = dist.get_world_size()
615
+ rank = dist.get_rank()
616
+
617
+ for i in range(tp_size):
618
+ ranks = range(i, world_size, tp_size)
619
+ group = dist.new_group(ranks)
620
+ if rank in ranks:
621
+ dist_group = group
622
+
623
+ for i in range(dp_size):
624
+ ranks = range(i * tp_size, (i + 1) * tp_size)
625
+ group = dist.new_group(ranks)
626
+ if rank in ranks:
627
+ tp_group = group
628
+
629
+ return dist_group, tp_group
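+ # Illustrative note: with world_size=4, dp_size=2, tp_size=2 the loops above yield
+ # dist (DP) groups {0, 2} and {1, 3}, and tp groups {0, 1} and {2, 3}; each rank
+ # keeps the one group from each family that contains it.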
630
+
631
+ def benchmark_muon(rank, world_size, dp_size, tp_size, steps=5):
632
+ print_rank_0(f"🥊 Starting Round 2: Muon (DP={dp_size}, TP={tp_size})...")
633
+
634
+ # Setup (same as the OG test, but separate)
635
+ dist_group, tp_group = setup_process_groups(dp_size, tp_size)
636
+ params, grads_list = gen_param_and_grads()
637
+ tp_dims = [0, 1, -1, 1, 0]
638
+
639
+ params, grads_list, global_buffer_size, dist_metas = \
640
+ distribute_params(params, grads_list, tp_dims, dist_group, tp_group)
641
+
642
+ muon_args = {
643
+ "use_muon": True,
644
+ "lr": 0.1,
645
+ "momentum": 0.9,
646
+ "nesterov": True,
647
+ "ns_steps": 5,
648
+ "weight_decay": 0.1,
649
+ }
650
+
651
+ optimizer = Muon([{"params": params, **muon_args}])
652
+ optimizer.enable_distributed_mode(
653
+ [[(global_buffer_size, 0)]], dist_group, tp_group, dist_metas
654
+ )
655
+
656
+ # Warmup
657
+ for p, g in zip(params, grads_list[0]):
658
+ p.grad = g
659
+ optimizer.step()
660
+
661
+ # Profile ONLY the optimizer steps (like AdamW)
662
+ with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
663
+ record_shapes=True) as prof:
664
+ with record_function("Muon_Battle"):
665
+ for step in range(steps):
666
+ # 1. Attach gradients (simulating backward pass)
667
+ with record_function("Muon_Attach_Grads"):
668
+ for i, p in enumerate(params):
669
+ p.grad = grads_list[step][i]
670
+
671
+ # 2. Optimizer Step (THIS is what we want to measure)
672
+ with record_function("Muon_Step"):
673
+ optimizer.step()
674
+
675
+ prof.export_chrome_trace(f"trace_muon_dp{dp_size}_tp{tp_size}_rank{rank}.json")
676
+ print_rank_0("✅ Muon Round Finished.")
677
+
678
+
679
+ # -- full setup --
680
+
681
+
682
+ def simulate_fwd_bwd(size=2048, iterations=20):
683
+ """Simulate model forward + backward compute
684
+ Adjust size and iterations to match your real model's compute time
685
+ """
686
+ dummy = torch.randn(size, size, device='cuda')
687
+ for _ in range(iterations):
688
+ dummy = torch.matmul(dummy, dummy)
689
+ torch.cuda.synchronize()
690
+
691
+ def benchmark_adamw_full_step(rank, world_size, steps=5):
692
+ print_rank_0(f"🥊 AdamW with Forward-Backward Simulation...")
693
+ params, grads_list = gen_param_and_grads()
694
+ optimizer = torch.optim.AdamW(params, lr=1e-3)
695
+
696
+ # Warmup
697
+ for p, g in zip(params, grads_list[0]):
698
+ p.grad = g
699
+ dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
700
+ p.grad /= world_size
701
+ optimizer.step()
702
+ optimizer.zero_grad()
703
+
704
+ with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
705
+ with record_function("AdamW_Full_Training_Step"):
706
+ for step in range(steps):
707
+ # 1. Forward + Backward
708
+ with record_function("FWD_BWD"):
709
+ simulate_fwd_bwd()
710
+
711
+ # 2. Gradients available
712
+ with record_function("Attach_Grads"):
713
+ for i, p in enumerate(params):
714
+ p.grad = grads_list[step][i]
715
+
716
+ # 3. DDP Gradient sync
717
+ with record_function("AdamW_AllReduce"):
718
+ for p in params:
719
+ dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
720
+ p.grad /= world_size
721
+
722
+ # 4. Optimizer step
723
+ with record_function("AdamW_Step"):
724
+ optimizer.step()
725
+ optimizer.zero_grad()
726
+
727
+ prof.export_chrome_trace(f"trace_adamw_FULLSTEP_rank{rank}.json")
728
+ print_rank_0("✅ AdamW Full Step Finished.")
729
+
730
+ def benchmark_muon_full_step(rank, world_size, dp_size, tp_size, steps=5):
731
+ print_rank_0(f"🥊 Muon with Forward-Backward Simulation...")
732
+
733
+ dist_group, tp_group = setup_process_groups(dp_size, tp_size)
734
+ params, grads_list = gen_param_and_grads()
735
+ tp_dims = [0, 1, -1, 1, 0]
736
+
737
+ params, grads_list, global_buffer_size, dist_metas = \
738
+ distribute_params(params, grads_list, tp_dims, dist_group, tp_group)
739
+
740
+ optimizer = Muon([{"params": params, "use_muon": True, "lr": 0.1,
741
+ "momentum": 0.9, "nesterov": True, "ns_steps": 5,
742
+ "weight_decay": 0.1}])
743
+ optimizer.enable_distributed_mode(
744
+ [[(global_buffer_size, 0)]], dist_group, tp_group, dist_metas
745
+ )
746
+
747
+ # Warmup
748
+ for p, g in zip(params, grads_list[0]):
749
+ p.grad = g
750
+ optimizer.step()
751
+
752
+ with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
753
+ with record_function("Muon_Full_Training_Step"):
754
+ for step in range(steps):
755
+ # 1. Forward + Backward
756
+ with record_function("FWD_BWD"):
757
+ simulate_fwd_bwd()
758
+
759
+ # 2. Gradients available
760
+ with record_function("Attach_Grads"):
761
+ for i, p in enumerate(params):
762
+ p.grad = grads_list[step][i]
763
+
764
+ # 3. Optimizer step (includes Muon's communication)
765
+ with record_function("Muon_Step"):
766
+ optimizer.step()
767
+
768
+ prof.export_chrome_trace(f"trace_muon_FULLSTEP_dp{dp_size}_tp{tp_size}_rank{rank}.json")
769
+ print_rank_0("✅ Muon Full Step Finished.")
770
+
771
+ def run_process(rank, world_size):
772
+ torch.cuda.set_device(rank)
773
+ dist.init_process_group("nccl", rank=rank, world_size=world_size)
774
+
775
+ # Test 1: Optimizer-only (what you already have)
776
+ benchmark_adamw(rank, world_size, steps=5)
777
+ benchmark_muon(rank, world_size, dp_size=2, tp_size=2, steps=5)
778
+
779
+ # Test 2: Full training step (to verify 1-3% claim)
780
+ benchmark_adamw_full_step(rank, world_size, steps=5)
781
+ benchmark_muon_full_step(rank, world_size, dp_size=2, tp_size=2, steps=5)
782
+
783
+ dist.destroy_process_group()
784
+
785
+
786
+ if __name__ == "__main__":
787
+
788
+ world_size = 4
789
+ os.environ['MASTER_ADDR'] = 'localhost'
790
+ os.environ['MASTER_PORT'] = '12345'
791
+ os.environ['CUDA_DEVICE_MAX_CONNECTIONS'] = '1'
792
+
793
+ torch.multiprocessing.spawn(run_process, args=(world_size,), nprocs=world_size, join=True)
794
+
795
+ print("✅ All tests passed!")
analysis_scripts/performance_comparison.py ADDED
@@ -0,0 +1,126 @@
1
+ import json
2
+ import os
3
+ from pathlib import Path
4
+
5
+ SCRIPT_DIR = Path(__file__).parent.parent / 'traces'/'comparison'
6
+
7
+ def analyze_trace(filename):
8
+ # Make filename relative to script location
9
+ filepath = SCRIPT_DIR / filename
10
+
11
+ with open(filepath) as f:
12
+ trace = json.load(f)
13
+
14
+
15
+ timings = {
16
+ 'nccl_all_reduce': [],
17
+ 'nccl_all_gather': [],
18
+ 'Muon_Step': [],
19
+ 'AdamW_Step': [],
20
+ 'Muon_Full_Training_Step': [],
21
+ 'AdamW_Full_Training_Step': [],
22
+ 'FWD_BWD': [],
23
+ 'AdamW_AllReduce': [],
24
+ 'Attach_Grads': [],
25
+ }
26
+
27
+ # Let's see ALL event names first
28
+ all_names = set()
29
+
30
+ for event in trace['traceEvents']:
31
+ if 'name' in event:
32
+ all_names.add(event['name'])
33
+
34
+ if 'dur' not in event:
35
+ continue
36
+ name = event.get('name', '')
37
+ dur = event['dur']
38
+
39
+ # Match any key that contains the name
40
+ for key in timings.keys():
41
+ if key in name:
42
+ timings[key].append(dur)
43
+ break
44
+
45
+ print(f"\n=== {filename} ===")
46
+ print(f"All unique event names found: {len(all_names)}")
47
+ print("Sample names:", list(all_names)[:20])
48
+
49
+ print("\n--- Timings ---")
50
+ for key, values in timings.items():
51
+ if values:
52
+ total = sum(values)
53
+ print(f"{key}: {len(values)} calls, total={total/1000:.2f}ms, "
54
+ f"avg={total/len(values)/1000:.2f}ms")
55
+
56
+ # Calculate percentages
57
+ if timings['Muon_Step']:
58
+ muon_opt = sum(timings['Muon_Step'])
59
+ muon_total = sum(timings['Muon_Full_Training_Step']) if timings['Muon_Full_Training_Step'] else sum(timings['FWD_BWD']) + muon_opt
60
+ print(f"\n📊 Muon Optimizer: {muon_opt/1000:.2f}ms = {(muon_opt/muon_total)*100:.1f}% of total")
61
+
62
+ if timings['AdamW_Step']:
63
+ adam_opt = sum(timings['AdamW_Step'])
64
+ adam_total = sum(timings['AdamW_Full_Training_Step']) if timings['AdamW_Full_Training_Step'] else sum(timings['FWD_BWD'])/2 + adam_opt
65
+ print(f"📊 AdamW Optimizer: {adam_opt/1000:.2f}ms = {(adam_opt/adam_total)*100:.1f}% of total\n")
66
+
67
+
68
+ def detailed_comm_analysis(filename):
69
+ # Make filename relative to script location
70
+ filepath = SCRIPT_DIR / filename
71
+ with open(filepath) as f:
72
+ trace = json.load(f)
73
+
74
+ comm_ops = {
75
+ 'all_reduce': [],
76
+ 'all_gather': [],
77
+ 'reduce_scatter': [],
78
+ 'broadcast': [],
79
+ }
80
+
81
+ for event in trace['traceEvents']:
82
+ if 'dur' not in event:
83
+ continue
84
+ name = event.get('name', '').lower()
85
+ dur = event['dur']
86
+
87
+ if 'all_reduce' in name or 'allreduce' in name:
88
+ comm_ops['all_reduce'].append(dur)
89
+ elif 'all_gather' in name or 'allgather' in name:
90
+ comm_ops['all_gather'].append(dur)
91
+ elif 'reduce_scatter' in name or 'reducescatter' in name:
92
+ comm_ops['reduce_scatter'].append(dur)
93
+ elif 'broadcast' in name:
94
+ comm_ops['broadcast'].append(dur)
95
+
96
+ print(f"\n=== Communication Breakdown: {filename} ===")
97
+ total_comm = 0
98
+ for op, times in comm_ops.items():
99
+ if times:
100
+ op_total = sum(times)
101
+ total_comm += op_total
102
+ print(f"{op}: {len(times)} calls, {op_total/1000:.2f}ms total, {op_total/len(times)/1000:.2f}ms avg")
103
+
104
+ print(f"\nTotal Communication: {total_comm/1000:.2f}ms")
105
+ return total_comm
106
+
107
+
108
+
109
+ def main():
110
+ muon_trace_file = 'trace_muon_FULLSTEP_dp2_tp2_rank0.json'
111
+ adam_trace_file = 'trace_adamw_FULLSTEP_rank0.json'
112
+
113
+ analyze_trace(muon_trace_file)
114
+ analyze_trace(adam_trace_file)
115
+
116
+ muon_comm = detailed_comm_analysis(muon_trace_file)
117
+ adam_comm = detailed_comm_analysis(adam_trace_file)
118
+
119
+
120
+ print(f"\n📊 Communication Comparison:")
121
+ print(f"Muon comm: {muon_comm/1000:.2f}ms")
122
+ print(f"AdamW comm: {adam_comm/1000:.2f}ms")
123
+ print(f"Ratio: {muon_comm/adam_comm:.2f}x")
124
+
125
+ if __name__ == "__main__":
126
+ main()
figures/table_5.1.png ADDED

Git LFS Details

  • SHA256: cee8f1cdf8fd6b47256e92ffba8dfe5e69c2bdb260c4907dc99d63770a301d15
  • Pointer size: 130 Bytes
  • Size of remote file: 40.4 kB
figures/table_5.2.png ADDED

Git LFS Details

  • SHA256: c28ac76dfe23f2f7f87a7c1f72caf29bc2962e974c0f602f12e8a47b640dc4cb
  • Pointer size: 130 Bytes
  • Size of remote file: 46.8 kB
figures/table_6.png ADDED

Git LFS Details

  • SHA256: bd57f35873a0d91c54485a6c58621dad792f163e6861e13e9518ad9b134f7ad2
  • Pointer size: 130 Bytes
  • Size of remote file: 34.3 kB
figures/table_7.png ADDED

Git LFS Details

  • SHA256: 65ff6776833511e890027514a3ef8537a49ea4c255fd8218e547cb0557c09fa6
  • Pointer size: 130 Bytes
  • Size of remote file: 46.2 kB
figures/trace_adamw_FULLSTEP_rank0.png ADDED

Git LFS Details

  • SHA256: f8d7b97d7ad50c01c35b255957b8c3df0170bbaa4ee2ca697e043879ffb4a20f
  • Pointer size: 131 Bytes
  • Size of remote file: 310 kB
figures/trace_adamw_rank0.png ADDED

Git LFS Details

  • SHA256: e2fc84053b99f6d28c2e40aa12b9b10609972bae05914aa9231bc80068aa6e96
  • Pointer size: 131 Bytes
  • Size of remote file: 277 kB
figures/trace_muon_FULLSTEP_dp2_tp2_rank0.png ADDED

Git LFS Details

  • SHA256: 8d607b0c3856bc0546b9d916e768a17990df3603601bb6f56fb9ba1d691c6254
  • Pointer size: 131 Bytes
  • Size of remote file: 322 kB
figures/trace_muon_dp2_tp2_rank0.png ADDED

Git LFS Details

  • SHA256: 3de1e8c705d5b63d57d3a2fb4f29009cf189e011a0a345330d68da22f60c9481
  • Pointer size: 131 Bytes
  • Size of remote file: 288 kB
report/Reproducing and Validating Distributed Muon 🐢✨_ A Practical Verification of Communication Efficiency Claims _ by Jennifer Wei _ Nov, 2025 _ Medium.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:44982f00dbcc1bd68ea22478d6f520b3bf9a9df270f93c2ec757ee741daafa90
3
+ size 9988983
traces/.DS_Store ADDED
Binary file (6.15 kB). View file
 
traces/comparison/.DS_Store ADDED
Binary file (6.15 kB). View file
 
traces/comparison/trace_adamw_FULLSTEP_async_rank0.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_adamw_FULLSTEP_rank0.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_adamw_FULLSTEP_rank1.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_adamw_FULLSTEP_rank2.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_adamw_FULLSTEP_rank3.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_adamw_rank0.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_adamw_rank1.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_adamw_rank2.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_adamw_rank3.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_muon_FULLSTEP_dp2_tp2_async_rank0.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_muon_FULLSTEP_dp2_tp2_rank0.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_muon_FULLSTEP_dp2_tp2_rank1.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_muon_FULLSTEP_dp2_tp2_rank2.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_muon_FULLSTEP_dp2_tp2_rank3.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_muon_dp2_tp2_rank0.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_muon_dp2_tp2_rank1.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_muon_dp2_tp2_rank2.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/comparison/trace_muon_dp2_tp2_rank3.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/distributed_muon/trace_1_4_rank0.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:72ce92941f76b352af275eec4014ad07af2fe96c773e9adc3d44bf94495ccf8f
3
+ size 14460297
traces/distributed_muon/trace_1_4_rank1.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2836ad88a4dfc3f3a12b7bdbdc401fff8dcb4133b86ec22d9da8fb608e82b4df
3
+ size 14467000
traces/distributed_muon/trace_1_4_rank2.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0bb62c19feca67d9feb6d3a4bfd662ee450578e431ef18c718238541d6890d7c
3
+ size 14460827
traces/distributed_muon/trace_1_4_rank3.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0f273bc9b1be106712601915d855224a309f05bdefcf0977861c529a608c7ace
3
+ size 14460833
traces/distributed_muon/trace_2_2_rank0.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/distributed_muon/trace_2_2_rank1.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/distributed_muon/trace_2_2_rank2.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2973324a6081b37293fabda1c1c22d0d7232e603fd99cc9d19e6394ead8eba00
3
+ size 13613151
traces/distributed_muon/trace_2_2_rank3.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4bcd1d5597af70653f1cd2b600cab5f06733ef3e765e47fa85749e6e87a6f990
3
+ size 13630823
traces/distributed_muon/trace_4_1_rank0.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/distributed_muon/trace_4_1_rank1.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/distributed_muon/trace_4_1_rank2.json ADDED
The diff for this file is too large to render. See raw diff
 
traces/distributed_muon/trace_4_1_rank3.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9914260eedb7b50dfa0766a55cf7047b3352e26d84b2d4bd95d611d42fe67012
3
+ size 13840654