Retrieval-Augmented Generation

  • This post explores RAPTOR, a novel approach to retrieval-augmented language models that constructs a hierarchical tree structure of documents through recursive embedding, clustering, and summarization. This method enables retrieval of information at different levels of abstraction, significantly improving performance on complex question-answering tasks over long documents compared to traditional contiguous-chunk retrieval.
  • This post explores ReaRAG, a novel approach that integrates iterative retrieval-augmented generation (RAG) with knowledge-guided reasoning to improve the factuality and robustness of Large Reasoning Models (LRMs) on multi-hop question-answering tasks.
  • Explore how OPEN-RAG improves reasoning capabilities in Retrieval-Augmented Generation (RAG) using open-source Large Language Models (LLMs), outperforming state-of-the-art models in accuracy and speed.
  • Explore the impact of different retrieval strategies on the performance and efficiency of Retrieval-Augmented Generation (RAG) systems in downstream tasks such as Question Answering (QA) and attributed QA.
  • Discover Xmodel-1.5, a groundbreaking multilingual LLM developed by Xiaoduo Technology’s AI Lab, designed to enhance cross-lingual understanding and generation, with a focus on less-represented languages.