Ensemble Methods

This paper challenges the conventional wisdom of mixing different Large Language Models (LLMs) in ensemble methods. It introduces Self-MoA, a novel approach that aggregates outputs sampled from only the single top-performing LLM, and demonstrates its superiority over standard Mixture-of-Agents (MoA) across various benchmarks.
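The core idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `generate` function is a hypothetical stand-in for a real LLM call, and majority voting stands in for the LLM-based aggregation prompt that MoA-style methods typically use. The key point it demonstrates is structural: all samples come from the same top model rather than a mix of different models.

```python
import random
from collections import Counter

def generate(model: str, prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical stand-in for a real LLM call, for illustration only.

    Returns one of a few canned answers at random to mimic the output
    diversity that repeated sampling from one model would produce.
    """
    canned = ["Paris", "Paris", "Lyon"]
    return random.choice(canned)

def self_moa(model: str, prompt: str, n_samples: int = 5):
    """Self-MoA sketch: draw n samples from the SAME top model, then
    aggregate them. Here aggregation is a simple majority vote; the
    paper uses an LLM aggregator instead."""
    samples = [generate(model, prompt) for _ in range(n_samples)]
    winner, _count = Counter(samples).most_common(1)[0]
    return winner, samples

random.seed(0)  # make the illustration deterministic
answer, drafts = self_moa("top-model", "What is the capital of France?")
```

In contrast, standard MoA would call `generate` on several different models; Self-MoA's claim is that spending the same sampling budget on the single best model yields a stronger ensemble.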