Demystifying DeepSeek-V3: Breaking Down Its Revolutionary AI Architecture
Introduction
DeepSeek-V3 represents a significant leap forward in the field of large language models (LLMs). Developed by DeepSeek-AI, this model leverages the Mixture-of-Experts (MoE) architecture to deliver unparalleled performance while maintaining efficiency in both training and inference. With a total of 671 billion parameters and 37 billion activated per token, DeepSeek-V3 is designed to handle complex tasks with remarkable accuracy.
This post provides a detailed walkthrough of DeepSeek-V3’s architecture, explaining its key components step by step. Each section breaks down the underlying mechanisms and presents the mathematical formulations that define their functionality. The explanations are based on the official DeepSeek-V3 technical report, which serves as the primary source of information provided by the model’s authors. You can refer to the full paper here: DeepSeek-V3 Technical Report. By the end, you will have a clear understanding of how DeepSeek-V3 achieves efficiency, scalability, and inference optimization.
Architecture and Innovations
DeepSeek-V3 introduces multiple innovations that enhance efficiency, scalability, and accuracy. The key architectural advancements include:
- Multi-head Latent Attention – Reduces inference costs and improves attention efficiency.
- DeepSeekMoE – A refined Mixture-of-Experts (MoE) architecture that enhances expert specialization and load balancing.
- Auxiliary-Loss-Free Load Balancing – A novel expert selection strategy that removes the need for auxiliary loss, ensuring stable and efficient expert utilization.
- Complementary Sequence-Wise Auxiliary Loss – A lightweight loss function that prevents local imbalances within a single sequence.
- Node-Limited Routing – Optimizes expert distribution across computational nodes to reduce communication overhead.
- No Token-Dropping Strategy – Ensures stable token retention during both training and inference.
- Multi-Token Prediction – Enhances token representations during training and can be used for speculative decoding in inference.
Each of these components contributes to DeepSeek-V3's state-of-the-art performance while maintaining computational efficiency.

DeepSeek-V3 Model Architecture. Source: DeepSeek-V3 Technical Report
Multi-head Latent Attention
Multi-head Latent Attention (MLA) is a cornerstone of DeepSeek-V3's architecture. This mechanism allows the model to process information more efficiently by focusing on the most relevant parts of the input data. MLA reduces the computational overhead while maintaining high accuracy, making it ideal for large-scale language models.
Step by Step Explanation
(a) Projection of Input Token to Latent Vector
- The input token at time step $t$ is denoted as $\mathbf{h}_t \in \mathbb{R}^{d}$. Here, $d$ is the dimension of the model's hidden state.
- It is projected into a latent vector $\mathbf{c}_t^{KV}$ with a much smaller dimension $d_c$ (where $d_c \ll d_h n_h$, with $n_h$ the number of attention heads and $d_h$ the per-head dimension):
  $$\mathbf{c}_t^{KV} = W^{DKV}\mathbf{h}_t$$
- Here, $W^{DKV} \in \mathbb{R}^{d_c \times d}$ is a learned projection matrix that reduces the dimensionality of $\mathbf{h}_t$.
- The latent vector $\mathbf{c}_t^{KV}$ stores compressed information used for computing key and value representations in the attention mechanism.
(b) Key Vector Computation by Upsizing from Latent Vector
- The key vector $\mathbf{k}_t^{C}$ is computed by upsizing the latent vector from dimension $d_c$ to the full attention dimension $d_h n_h$:
  $$[\mathbf{k}_{t,1}^{C};\,\mathbf{k}_{t,2}^{C};\,\dots;\,\mathbf{k}_{t,n_h}^{C}] = \mathbf{k}_t^{C} = W^{UK}\mathbf{c}_t^{KV}$$
- Here, $W^{UK} \in \mathbb{R}^{d_h n_h \times d_c}$ is a learned projection matrix that expands $\mathbf{c}_t^{KV}$ into the full key representation $\mathbf{k}_t^{C}$.
- This ensures that $\mathbf{k}_t^{C}$ has the same dimensionality as keys in standard Multi-Head Attention (MHA). A minimal code sketch of this latent compression follows below.
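To make steps (a) and (b) concrete, here is a minimal PyTorch sketch of the down-projection and key up-projection. The dimensions and variable names are illustrative assumptions, far smaller than DeepSeek-V3's real configuration; the point is that only the small latent vector needs to be cached, since full keys can be re-expanded from it.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; DeepSeek-V3's real dimensions are much larger.
d, d_c, n_h, d_h = 512, 64, 8, 32             # hidden dim, KV latent dim, heads, per-head dim

W_DKV = nn.Linear(d, d_c, bias=False)         # down-projection: h_t -> c_t^{KV}
W_UK = nn.Linear(d_c, n_h * d_h, bias=False)  # up-projection:   c_t^{KV} -> k_t^C

h_t = torch.randn(d)                          # hidden state of one token
c_kv = W_DKV(h_t)                             # compressed latent; this is what the KV cache stores
k_c = W_UK(c_kv).view(n_h, d_h)               # per-head content keys recovered from the latent

print(c_kv.shape, k_c.shape)                  # torch.Size([64]) torch.Size([8, 32])
```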
(c) Projection and RoPE Encoding of Input Token
- The input token $\mathbf{h}_t$ is first projected from dimension $d$ to $d_h^{R}$ using a learned projection matrix:
  $$W^{KR}\mathbf{h}_t$$
- Here, $W^{KR}$ is a learned projection matrix with size $d_h^{R} \times d$.
- The projected vector is then encoded using Rotary Positional Embeddings (RoPE) at token position $t$:
  $$\mathbf{k}_t^{R} = \mathrm{RoPE}\!\left(W^{KR}\mathbf{h}_t\right)$$
- The RoPE-encoded key $\mathbf{k}_t^{R}$ is concatenated to each head's key vector, ensuring that the same positionally encoded key is shared across all heads:
  $$\mathbf{k}_{t,i} = [\mathbf{k}_{t,i}^{C};\,\mathbf{k}_t^{R}]$$
- After concatenation, the final per-head key vector $\mathbf{k}_{t,i}$ for each attention head $i$ has dimensionality $d_h + d_h^{R}$.
(d) Value Vector Computation by Upsizing from Latent Vector
- The value vector $\mathbf{v}_t^{C}$ is computed by upsizing the latent vector $\mathbf{c}_t^{KV}$ from dimension $d_c$ to the full attention dimension $d_h n_h$:
  $$[\mathbf{v}_{t,1}^{C};\,\mathbf{v}_{t,2}^{C};\,\dots;\,\mathbf{v}_{t,n_h}^{C}] = \mathbf{v}_t^{C} = W^{UV}\mathbf{c}_t^{KV}$$
- Here, $W^{UV} \in \mathbb{R}^{d_h n_h \times d_c}$ is a learned projection matrix.
- This ensures that $\mathbf{v}_t^{C}$ has the same dimensionality as values in standard Multi-Head Attention (MHA).
(e) Query Vector Computation with Low-Rank Compression
- The attention query is computed using a low-rank compression, first down-projecting $\mathbf{h}_t$ into a latent space with dimension $d_c'$ (where $d_c' \ll d_h n_h$):
  $$\mathbf{c}_t^{Q} = W^{DQ}\mathbf{h}_t$$
- Here, $W^{DQ} \in \mathbb{R}^{d_c' \times d}$ is a learned down-projection matrix, and the latent vector $\mathbf{c}_t^{Q}$ has dimension $d_c'$.
- The query is then upsized back to the full attention space using a learned up-projection matrix:
  $$[\mathbf{q}_{t,1}^{C};\,\mathbf{q}_{t,2}^{C};\,\dots;\,\mathbf{q}_{t,n_h}^{C}] = \mathbf{q}_t^{C} = W^{UQ}\mathbf{c}_t^{Q}$$
- Here, $W^{UQ} \in \mathbb{R}^{d_h n_h \times d_c'}$ is a learned up-projection matrix specific to queries, separate from those used for keys and values.
- RoPE encoding is applied to the projected query vector at token position $t$:
  $$[\mathbf{q}_{t,1}^{R};\,\mathbf{q}_{t,2}^{R};\,\dots;\,\mathbf{q}_{t,n_h}^{R}] = \mathbf{q}_t^{R} = \mathrm{RoPE}\!\left(W^{QR}\mathbf{c}_t^{Q}\right)$$
- Here, $W^{QR}$ is a learned projection matrix with size $d_h^{R} n_h \times d_c'$.
- Finally, the RoPE-encoded query is concatenated with the upsized query vector, forming the final per-head query vector:
  $$\mathbf{q}_{t,i} = [\mathbf{q}_{t,i}^{C};\,\mathbf{q}_{t,i}^{R}]$$
- The resulting dimensionality of each per-head query vector is $d_h + d_h^{R}$.
(f) Attention Output Computation
- The final attention output is computed using the queries, keys, and values:
  $$\mathbf{o}_{t,i} = \sum_{j=1}^{t} \mathrm{Softmax}_j\!\left(\frac{\mathbf{q}_{t,i}^{\top}\mathbf{k}_{j,i}}{\sqrt{d_h + d_h^{R}}}\right)\mathbf{v}_{j,i}^{C}$$
- Here, for each head $i$:
- The query $\mathbf{q}_{t,i} \in \mathbb{R}^{d_h + d_h^{R}}$.
- The key $\mathbf{k}_{j,i} \in \mathbb{R}^{d_h + d_h^{R}}$.
- The value $\mathbf{v}_{j,i}^{C} \in \mathbb{R}^{d_h}$.
- The output $\mathbf{o}_{t,i} \in \mathbb{R}^{d_h}$.
- The query-key similarity is scaled by the factor $\sqrt{d_h + d_h^{R}}$ (the square root of the shared dimension of $\mathbf{q}_{t,i}$ and $\mathbf{k}_{j,i}$) before applying the softmax.
- The outputs from all heads are concatenated into a single column vector $[\mathbf{o}_{t,1};\,\mathbf{o}_{t,2};\,\dots;\,\mathbf{o}_{t,n_h}]$, which has $d_h n_h$ entries.
- The final output hidden state is computed as:
  $$\mathbf{u}_t = W^{O}[\mathbf{o}_{t,1};\,\mathbf{o}_{t,2};\,\dots;\,\mathbf{o}_{t,n_h}]$$
- Here, $W^{O} \in \mathbb{R}^{d \times d_h n_h}$ is the learned output projection matrix.
- The final output is a one-column vector with $d$ entries, i.e., $\mathbf{u}_t \in \mathbb{R}^{d}$. A minimal end-to-end sketch of MLA follows below.
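Putting steps (a) through (f) together, the sketch below wires up the full MLA data flow for a short sequence in PyTorch. The dimensions, the simplified RoPE helper, and the per-token Python loops are assumptions chosen for readability, not DeepSeek-V3's actual sizes or an optimized implementation; the projection names (`W_DKV`, `W_UK`, etc.) simply mirror the matrices described above.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def rope(x: torch.Tensor, pos: int) -> torch.Tensor:
    """Minimal rotary positional embedding for vectors with an even last dimension."""
    half = x.shape[-1] // 2
    freqs = 1.0 / (10000 ** (torch.arange(half, dtype=torch.float32) / half))
    angles = pos * freqs
    cos, sin = torch.cos(angles), torch.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)


class MLASketch(nn.Module):
    """Multi-head Latent Attention over one sequence, with illustrative dimensions."""

    def __init__(self, d=512, d_c=64, d_cq=96, n_h=8, d_h=32, d_hR=16):
        super().__init__()
        self.n_h, self.d_h, self.d_hR = n_h, d_h, d_hR
        self.W_DKV = nn.Linear(d, d_c, bias=False)           # h_t -> c_t^{KV}
        self.W_UK = nn.Linear(d_c, n_h * d_h, bias=False)    # c_t^{KV} -> content keys
        self.W_UV = nn.Linear(d_c, n_h * d_h, bias=False)    # c_t^{KV} -> values
        self.W_KR = nn.Linear(d, d_hR, bias=False)           # h_t -> shared RoPE key
        self.W_DQ = nn.Linear(d, d_cq, bias=False)           # h_t -> c_t^{Q}
        self.W_UQ = nn.Linear(d_cq, n_h * d_h, bias=False)   # c_t^{Q} -> content queries
        self.W_QR = nn.Linear(d_cq, n_h * d_hR, bias=False)  # c_t^{Q} -> per-head RoPE queries
        self.W_O = nn.Linear(n_h * d_h, d, bias=False)       # output projection

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        """h: (T, d) hidden states; returns (T, d). Token t attends causally over positions 1..t."""
        T = h.shape[0]
        c_kv = self.W_DKV(h)                                  # (T, d_c): the cached KV latent
        k_c = self.W_UK(c_kv).view(T, self.n_h, self.d_h)     # per-head content keys
        v_c = self.W_UV(c_kv).view(T, self.n_h, self.d_h)     # per-head values
        k_r = torch.stack([rope(self.W_KR(h[t]), t) for t in range(T)])  # shared RoPE keys
        c_q = self.W_DQ(h)                                    # (T, d_cq): query latent
        q_c = self.W_UQ(c_q).view(T, self.n_h, self.d_h)
        q_r = torch.stack([rope(self.W_QR(c_q[t]).view(self.n_h, self.d_hR), t) for t in range(T)])

        # Per-head queries and keys: content part concatenated with the RoPE part.
        q = torch.cat([q_c, q_r], dim=-1)                                  # (T, n_h, d_h + d_hR)
        k = torch.cat([k_c, k_r.unsqueeze(1).expand(-1, self.n_h, -1)], dim=-1)

        scale = 1.0 / math.sqrt(self.d_h + self.d_hR)
        outputs = []
        for t in range(T):
            scores = torch.einsum("hd,jhd->hj", q[t], k[: t + 1]) * scale  # (n_h, t + 1)
            attn = F.softmax(scores, dim=-1)
            o = torch.einsum("hj,jhd->hd", attn, v_c[: t + 1])             # (n_h, d_h)
            outputs.append(self.W_O(o.reshape(-1)))                        # u_t
        return torch.stack(outputs)


x = torch.randn(6, 512)
print(MLASketch()(x).shape)   # torch.Size([6, 512])
```

During generation, only the compressed latent `c_kv` and the shared RoPE key `k_r` need to be cached per token, which is where MLA's memory savings over standard MHA come from.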
DeepSeekMoE
DeepSeekMoE is a specialized Mixture of Experts (MoE) architecture used in DeepSeek-V3 for Feed-Forward Networks (FFNs). Compared to traditional MoE architectures like GShard, DeepSeekMoE introduces finer-grained expert allocation, where some experts function as shared ones.
Step by Step Explanation
(a) FFN Computation for Each Token
- Let the FFN input of the $t$-th token be $\mathbf{u}_t \in \mathbb{R}^{d}$, where $d$ is the hidden dimension.
- The output $\mathbf{h}_t'$ is computed as:
  $$\mathbf{h}_t' = \mathbf{u}_t + \sum_{i=1}^{N_s}\mathrm{FFN}_i^{(s)}(\mathbf{u}_t) + \sum_{i=1}^{N_r} g_{i,t}\,\mathrm{FFN}_i^{(r)}(\mathbf{u}_t)$$
- Here:
- $N_s$ and $N_r$ denote the number of shared and routed experts, respectively.
- $\mathrm{FFN}_i^{(s)}$ represents the $i$-th shared expert.
- $\mathrm{FFN}_i^{(r)}$ represents the $i$-th routed expert.
- $g_{i,t}$ is the gating value for the $i$-th expert.
- Both $\mathbf{u}_t$ and $\mathbf{h}_t'$ have the same hidden dimension $d$.
(b) Gating Value Normalization
- The gating values are normalized across the activated experts:
  $$g_{i,t} = \frac{g'_{i,t}}{\sum_{j=1}^{N_r} g'_{j,t}}$$
- where $g'_{i,t}$ is the initial gating score.
(c) Top-K Expert Selection
- Each token is assigned to the top $K_r$ experts with the highest affinity scores:
  $$g'_{i,t} = \begin{cases} s_{i,t}, & s_{i,t} \in \mathrm{Topk}\!\left(\{s_{j,t} \mid 1 \le j \le N_r\},\, K_r\right) \\ 0, & \text{otherwise} \end{cases}$$
- The affinity score $s_{i,t}$ determines the routing probability.
(d) Computing Token-to-Expert Affinity
- The token-to-expert affinity score is given by:
  $$s_{i,t} = \mathrm{Sigmoid}\!\left(\mathbf{u}_t^{\top}\mathbf{e}_i\right)$$
- The sigmoid function is defined as $\sigma(x) = \frac{1}{1 + e^{-x}}$, which maps any real-valued number to the range between 0 and 1 and is commonly used for probability estimation and activation in neural networks.
- The centroid is computed as:
  $$\mathbf{e}_i = \frac{1}{|\mathcal{T}_i|}\sum_{t \in \mathcal{T}_i}\mathbf{u}_t$$
- Here:
- $\mathbf{e}_i$ is the centroid vector of the $i$-th routed expert.
- $\mathcal{T}_i$ is the set of tokens routed to expert $i$ in a given batch.
- $|\mathcal{T}_i|$ is the number of tokens assigned to expert $i$.
- The centroid $\mathbf{e}_i$ is learned during training to specialize experts for different types of tokens.
- During inference, $\mathbf{e}_i$ remains fixed and is used only for routing decisions. A minimal sketch of this routing pipeline follows below.
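The sketch below strings steps (a) through (d) into a toy DeepSeekMoE layer: sigmoid affinities against per-expert centroids, top-$K_r$ selection, gate normalization, and the shared-plus-routed combination. Expert counts, sizes, and the dense per-token loop are assumptions for readability, not DeepSeek-V3's configuration or an efficient dispatch implementation.

```python
import torch
import torch.nn as nn


def make_ffn(d: int, d_ff: int) -> nn.Module:
    return nn.Sequential(nn.Linear(d, d_ff), nn.SiLU(), nn.Linear(d_ff, d))


class DeepSeekMoESketch(nn.Module):
    """Toy MoE layer with shared and routed experts; sizes are illustrative only."""

    def __init__(self, d=512, d_ff=1024, n_shared=1, n_routed=8, k_routed=2):
        super().__init__()
        self.k = k_routed
        self.shared = nn.ModuleList([make_ffn(d, d_ff) for _ in range(n_shared)])
        self.routed = nn.ModuleList([make_ffn(d, d_ff) for _ in range(n_routed)])
        self.centroids = nn.Parameter(torch.randn(n_routed, d) * 0.02)  # e_i per routed expert

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        """u: (T, d) token representations; returns h'_t for every token."""
        s = torch.sigmoid(u @ self.centroids.T)          # s_{i,t}: token-to-expert affinities
        top_s, top_idx = s.topk(self.k, dim=-1)          # keep the K_r largest affinities per token
        g = top_s / top_s.sum(dim=-1, keepdim=True)      # normalize gates over selected experts
        shared_out = sum(ffn(u) for ffn in self.shared)  # shared experts see every token
        routed_rows = []
        for t in range(u.shape[0]):                      # dense per-token loop, for clarity only
            row = torch.zeros_like(u[t])
            for slot in range(self.k):
                expert = self.routed[top_idx[t, slot].item()]
                row = row + g[t, slot] * expert(u[t])
            routed_rows.append(row)
        return u + shared_out + torch.stack(routed_rows)  # residual + shared + gated routed experts


tokens = torch.randn(4, 512)
print(DeepSeekMoESketch()(tokens).shape)                  # torch.Size([4, 512])
```

A real implementation dispatches tokens to experts in parallel rather than looping, but the arithmetic is the same.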
Auxiliary-Loss-Free Load Balancing
For Mixture of Experts (MoE) models, an unbalanced expert load can lead to routing collapse, reducing computational efficiency in expert-parallel architectures. Conventional solutions use auxiliary losses to balance token distribution, but large auxiliary losses can degrade model performance. To avoid these trade-offs, DeepSeek-V3 introduces an auxiliary-loss-free load balancing strategy that dynamically adjusts a bias term for each expert. This bias is added to the affinity scores to determine expert selection.
Step by Step Explanation
(a) Expert Selection with Bias Adjustment
- Each expert $i$ has a bias term $b_i$, which is added to the original affinity score before routing:
  $$g'_{i,t} = \begin{cases} s_{i,t}, & s_{i,t} + b_i \in \mathrm{Topk}\!\left(\{s_{j,t} + b_j \mid 1 \le j \le N_r\},\, K_r\right) \\ 0, & \text{otherwise} \end{cases}$$
- Here:
- $s_{i,t}$ is the original token-to-expert affinity score.
- $b_i$ is a bias term assigned to expert $i$.
- $b_i$ is adjusted during training to balance expert utilization.
- During inference, $b_i$ remains fixed and is only used for routing decisions.
- The Top-$K_r$ function selects the $K_r$ experts with the highest adjusted scores.
(b) Bias Term is Only Used for Routing
- The bias term $b_i$ does not affect the FFN computation; the gating value $g_{i,t}$ is still derived from the original affinity score $s_{i,t}$.
- It is only used to adjust which experts are selected.
(c) Dynamic Bias Update to Balance Load
- At the end of each training step, the bias term is updated based on each expert's observed load:
- If expert $i$ is overloaded, $b_i$ is decreased by $\gamma$: $b_i \leftarrow b_i - \gamma$.
- If expert $i$ is underloaded, $b_i$ is increased by $\gamma$: $b_i \leftarrow b_i + \gamma$.
- Here:
- $\gamma$ is a bias update speed hyperparameter that controls how fast $b_i$ is adjusted.
- A minimal sketch of this routing and bias-update loop follows below.
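Here is a small sketch of bias-adjusted routing and the end-of-step update. The overload test (comparing each expert's load to the mean load of the step) and the value of $\gamma$ are assumptions made for illustration; the report only specifies that overloaded experts have their bias decreased by $\gamma$ and underloaded experts increased by $\gamma$.

```python
import torch

torch.manual_seed(0)
n_routed, k_routed, gamma = 8, 2, 0.001        # gamma: bias update speed (value is an assumption)
bias = torch.zeros(n_routed)                   # b_i, one bias per routed expert


def route(scores: torch.Tensor):
    """scores: (T, n_routed) affinities s_{i,t}. Selection uses the biased scores,
    but the returned gating values are derived from the ORIGINAL scores."""
    _, top_idx = (scores + bias).topk(k_routed, dim=-1)   # bias influences selection only
    gates = torch.gather(scores, -1, top_idx)
    return top_idx, gates / gates.sum(dim=-1, keepdim=True)


def update_bias(top_idx: torch.Tensor) -> None:
    """End-of-step adjustment: push overloaded experts down, underloaded ones up.
    Using the mean load as the overload threshold is an assumption of this sketch."""
    global bias
    load = torch.bincount(top_idx.flatten(), minlength=n_routed).float()
    bias = bias - gamma * (load > load.mean()).float() + gamma * (load < load.mean()).float()


scores = torch.sigmoid(torch.randn(16, n_routed))   # fake affinities for 16 tokens
top_idx, gates = route(scores)
update_bias(top_idx)
print(bias)
```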
Complementary Sequence-Wise Auxiliary Loss
Although DeepSeek-V3 primarily relies on an auxiliary-loss-free strategy for load balancing, it introduces a complementary sequence-wise balance loss to prevent extreme imbalances within a single sequence. This ensures that expert utilization remains balanced across tokens in a sequence.
The sequence-wise balance loss is defined as:
$$\mathcal{L}_{\mathrm{Bal}} = \alpha \sum_{i=1}^{N_r} f_i P_i$$
where $\alpha$ is a balance factor hyperparameter, assigned an extremely small value in DeepSeek-V3.
Step by Step Explanation
(a) Computing Expert Load Fraction
- The fraction of tokens assigned to expert $i$ within a sequence is computed as:
  $$f_i = \frac{N_r}{K_r T}\sum_{t=1}^{T} \mathbb{1}\!\left(s_{i,t} \in \mathrm{Topk}\!\left(\{s_{j,t} \mid 1 \le j \le N_r\},\, K_r\right)\right)$$
- Here:
- $N_r$ is the number of routed experts.
- $K_r$ is the number of activated routed experts per token.
- $T$ is the sequence length, representing the number of tokens.
- $\mathbb{1}(\cdot)$ is the indicator function, returning 1 if expert $i$ is among the top-$K_r$ selected experts for token $t$.
(b) Normalized Expert Probability
- The normalized token-to-expert gating value is computed as:
  $$s'_{i,t} = \frac{s_{i,t}}{\sum_{j=1}^{N_r} s_{j,t}}$$
- Here:
- $s_{i,t}$ is the original token-to-expert gating value.
- The denominator ensures that the normalized values sum to 1 across all routed experts.
(c) Computing Mean Expert Utilization
- The mean probability of expert $i$ being selected across the sequence is:
  $$P_i = \frac{1}{T}\sum_{t=1}^{T} s'_{i,t}$$
- This represents the average normalized gating value for expert $i$ over all tokens in the sequence.
(d) How the Sequence-Wise Balance Loss is Used
- $\mathcal{L}_{\mathrm{Bal}}$ penalizes imbalances in expert usage within a sequence.
- It is only applied during training and not used at inference.
- It gently adjusts routing to prevent short-term expert overload.
- The small balance factor $\alpha$ ensures minimal interference with the main training loss. A minimal sketch of this loss follows below.
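The loss can be computed directly from the affinity scores of a single sequence, as in the sketch below; the tensor shapes and the value of $\alpha$ are illustrative assumptions, not the model's real settings.

```python
import torch


def sequence_balance_loss(s: torch.Tensor, k_routed: int, alpha: float = 1e-4) -> torch.Tensor:
    """s: (T, N_r) token-to-expert affinity scores for ONE sequence.
    alpha is a placeholder; DeepSeek-V3 uses an extremely small balance factor."""
    T, n_routed = s.shape
    _, top_idx = s.topk(k_routed, dim=-1)
    # f_i: fraction of tokens routed to expert i, scaled by N_r / (K_r * T)
    one_hot = torch.zeros_like(s).scatter_(-1, top_idx, 1.0)
    f = one_hot.sum(dim=0) * n_routed / (k_routed * T)
    # P_i: mean normalized affinity of expert i over the sequence
    s_norm = s / s.sum(dim=-1, keepdim=True)
    P = s_norm.mean(dim=0)
    return alpha * (f * P).sum()


scores = torch.sigmoid(torch.randn(32, 8))   # 32 tokens, 8 routed experts
print(sequence_balance_loss(scores, k_routed=2))
```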
Node-Limited Routing
DeepSeek-V3 employs Node-Limited Routing during training to reduce communication costs in MoE models. Each token is routed to at most $M$ nodes, selected based on the sum of the highest $\frac{K_r}{M}$ affinity scores among the experts hosted on each node. This constraint ensures efficient load balancing while maintaining near-full computation-communication overlap, optimizing training efficiency. A minimal sketch of this two-stage selection follows below.
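A toy version of the node-level selection might look like this: experts are assumed to be laid out contiguously node by node, nodes are ranked by the sum of their top-$\frac{K_r}{M}$ affinities, and the final top-$K_r$ experts are drawn only from the winning nodes. The layout and sizes are assumptions for illustration, not how the real system maps experts to devices.

```python
import torch


def node_limited_topk(scores: torch.Tensor, experts_per_node: int, m_nodes: int, k_routed: int):
    """scores: (n_routed,) affinities for one token, experts laid out node by node.
    Pick at most `m_nodes` nodes by the sum of each node's top-(K_r / M) scores,
    then pick the top-K_r experts among the experts on those nodes only."""
    per_node = scores.view(-1, experts_per_node)                        # (n_nodes, experts_per_node)
    per_node_top = per_node.topk(k_routed // m_nodes, dim=-1).values.sum(dim=-1)
    chosen_nodes = per_node_top.topk(m_nodes).indices                   # node-level selection
    mask = torch.full_like(scores, float("-inf")).view(-1, experts_per_node)
    mask[chosen_nodes] = 0.0                                            # keep only chosen nodes
    masked = scores + mask.view(-1)                                     # off-node experts excluded
    return masked.topk(k_routed).indices


scores = torch.sigmoid(torch.randn(16))            # 4 hypothetical nodes x 4 experts each
print(node_limited_topk(scores, experts_per_node=4, m_nodes=2, k_routed=4))
```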
No Token-Dropping
Due to its effective load balancing, DeepSeek-V3 does not drop any tokens during training or inference. The model maintains stable expert utilization, and inference-specific deployment strategies ensure balanced token routing.
Multi-Token Prediction
DeepSeek-V3 introduces Multi-Token Prediction (MTP), a training objective that extends the prediction scope to multiple future tokens per position. This approach improves training efficiency while enhancing token representations for better future token prediction.

DeepSeek-V3 Multi-Token Prediction (MTP) implementation. Source: DeepSeek-V3 Technical Report
Step by Step Explanation
(a) MTP Modules
- MTP is implemented using $D$ sequential modules, each predicting one additional future token.
- Each $k$-th MTP module consists of:
- A shared embedding layer $\mathrm{Emb}(\cdot)$ and a shared output head $\mathrm{OutHead}(\cdot)$, both shared with the main model.
- A Transformer block $\mathrm{TRM}_k(\cdot)$.
- A projection matrix $M_k \in \mathbb{R}^{d \times 2d}$.
- At prediction depth $k$, the representation of the $i$-th token is computed by combining the depth-$(k-1)$ representation of the $i$-th token with the embedding of the $(i+k)$-th token:
  $$\mathbf{h}_i'^{\,k} = M_k\!\left[\mathrm{RMSNorm}(\mathbf{h}_i^{k-1});\,\mathrm{RMSNorm}(\mathrm{Emb}(t_{i+k}))\right]$$
- Here:
- $M_k$ is a learned projection matrix.
- $\mathbf{h}_i^{k-1}$ is the previous depth's hidden representation.
- $\mathrm{Emb}(t_{i+k})$ is the embedding of the future token at position $i+k$.
- RMSNorm is used for normalization, stabilizing activations without mean subtraction.
- The transformed representations are processed through the Transformer block:
  $$\mathbf{h}_{1:T-k}^{k} = \mathrm{TRM}_k\!\left(\mathbf{h}_{1:T-k}'^{\,k}\right)$$
  where $T$ represents the input sequence length, and $_{i:j}$ denotes the slicing operation (inclusive of both the left and right boundaries).
- Finally, taking $\mathbf{h}_i^{k}$ as the input, the shared output head computes the probability distribution for the $k$-th additional prediction token:
  $$P_{i+1+k}^{k} = \mathrm{OutHead}\!\left(\mathbf{h}_i^{k}\right)$$
  where $P_{i+1+k}^{k} \in \mathbb{R}^{V}$, with $V$ being the vocabulary size. The output head linearly maps the representation to logits and subsequently applies the Softmax function to compute the prediction probability of the $k$-th additional token. A minimal sketch of the depth-$k$ combination step follows below.
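The combination step at depth $k$ reduces to a concatenate-normalize-project pattern, sketched below with toy sizes and a hand-rolled RMSNorm (both assumptions, not the model's real components).

```python
import torch
import torch.nn as nn


def rmsnorm(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Hand-rolled RMSNorm (no learned scale, for brevity)."""
    return x / torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)


d, vocab = 512, 1000                       # illustrative sizes
M_k = nn.Linear(2 * d, d, bias=False)      # projection matrix of the k-th MTP module
emb = nn.Embedding(vocab, d)               # embedding table shared with the main model

h_prev = torch.randn(d)                    # h_i^{k-1}: depth-(k-1) representation of token i
t_future = torch.tensor(42)                # ground-truth token id at position i + k
h_combined = M_k(torch.cat([rmsnorm(h_prev), rmsnorm(emb(t_future))]))   # h'_i^k
print(h_combined.shape)                    # torch.Size([512])
```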
(b) MTP Training Objective
- For each prediction depth $k$, a cross-entropy loss is computed:
  $$\mathcal{L}_{\mathrm{MTP}}^{k} = \mathrm{CrossEntropy}\!\left(P_{2+k:T+1}^{k},\, t_{2+k:T+1}\right) = -\frac{1}{T}\sum_{i=2+k}^{T+1}\log P_i^{k}[t_i]$$
- where $T$ denotes the input sequence length, $t_i$ represents the ground-truth token at the $i$-th position, and $P_i^{k}[t_i]$ is the predicted probability of $t_i$ given by the $k$-th MTP module.
- The MTP losses are averaged across all depths and scaled by a weighting factor $\lambda$ to obtain the overall MTP loss $\mathcal{L}_{\mathrm{MTP}}$, which serves as an additional training objective:
  $$\mathcal{L}_{\mathrm{MTP}} = \frac{\lambda}{D}\sum_{k=1}^{D}\mathcal{L}_{\mathrm{MTP}}^{k}$$
  A sketch of this aggregation, under simplified alignment assumptions, follows below.
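Aggregating the per-depth losses is straightforward once each module's logits are paired with their target tokens; the sketch below glosses over the exact $2{+}k{:}T{+}1$ alignment and uses a placeholder weighting factor.

```python
import torch
import torch.nn.functional as F


def mtp_loss(logits_per_depth: list[torch.Tensor], targets: torch.Tensor, lam: float = 0.3):
    """logits_per_depth[k]: (T, V) logits from the (k+1)-th MTP module, already aligned so
    that row i predicts targets[i]. lam is a placeholder weighting factor."""
    D = len(logits_per_depth)
    losses = [F.cross_entropy(logits, targets) for logits in logits_per_depth]
    return lam / D * sum(losses)


T, V, D = 16, 1000, 2
fake_logits = [torch.randn(T, V) for _ in range(D)]
fake_targets = torch.randint(0, V, (T,))
print(mtp_loss(fake_logits, fake_targets))
```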
(c) MTP in Inference
- MTP is used during training to enhance token representations.
- During inference, MTP modules are disabled, and only the main model is used for token prediction.
- The MTP modules can also be repurposed for speculative decoding, improving decoding efficiency.
Key Takeaways
- Efficient Attention with MLA: Reduces memory usage through latent-space projections that shrink the cached key and value representations, with potential computational savings from operating on smaller vectors.
- Stable Expert Routing with DeepSeekMoE: Implements auxiliary-loss-free load balancing, preventing routing collapse and ensuring efficient expert specialization. Uses a bias-adjusted selection mechanism to maintain an even token-to-expert distribution, enhancing model stability without introducing extra computational overhead.
- No Token Dropping: Maintains stable token retention during training and inference, avoiding degradation in sequence processing.
- Multi-Token Prediction Enhances Training: Improves token representations and learning efficiency by extending the prediction objective beyond the next token.
DeepSeek-V3 represents a major leap in both training efficiency and inference scalability, setting a new standard for next-generation language models.
Source(s)
- DeepSeek-AI, "DeepSeek-V3 Technical Report", arXiv:2412.19437: https://arxiv.org/abs/2412.19437
Enjoyed this post? Found it insightful? Feel free to leave a comment below to share your thoughts or ask questions. A GitHub account is required to join the discussion.
If you found the mathematical analysis of LLM architectures valuable and would like to see more posts exploring their inner workings in depth, let us know.