
A Survey of Multimodal LLM Hallucination


AGI-Hallucination


Please check the recent paper index in PaperList.


We also present a bird's-eye view of hallucination in AGI and compile a comprehensive survey: LightHouse: A Survey of AGI Hallucination

📖 Once this problem is solved, the path to AGI unfolds

We call it the LightHouse for AGI.

Definition

✨ We define hallucination as: model outputs that do not align with the contemporary empirical realities of our current world.

  • Definition for AGI Hallucination

    • Conflict in Intrinsic Knowledge of Models
    • Factual Conflict in Information Forgetting and Updating
    • Conflict in Multimodal Fusion
  • Emergence for AGI Hallucination

    • Training Data Distribution
    • Timeliness of Information
    • Ambiguity in Different Modalities
  • Mitigation for AGI Hallucination (see the inference-time sketch after this list)

    • Data
    • Train & SFT & RLHF
    • Inference & Post-hoc
  • Evaluation for AGI Hallucination (see the scoring sketch after this list)

    • Benchmark
    • Rule-Based
    • Large Model-Based
    • Human-Based
  • Talk about Future

    • Dataset for 3D, Ego-Centric, Video, and Agent
    • Multimodal Fusion, RLHF Refinement, Objective Refinement, Inference Refinement
    • Knowledge Update, Catastrophic Forgetting
    • Balancing Creation and Hallucination
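
As a taste of the Inference & Post-hoc mitigation line above, here is a minimal sketch of self-consistency voting: sample several answers and keep the majority, since one-off divergent samples are a common symptom of hallucination. The `sample_answer` callable is a hypothetical stand-in for any stochastic decoding call; it is not an API of this repository.

```python
from collections import Counter
from typing import Callable


def self_consistency(question: str,
                     sample_answer: Callable[[str], str],
                     n_samples: int = 5) -> str:
    """Sample several answers and return the most frequent one.

    Majority voting filters out low-confidence, one-off outputs.
    """
    answers = [sample_answer(question).strip().lower() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]


if __name__ == "__main__":
    # Toy random sampler standing in for an LLM decoding call.
    import random
    fake = lambda q: random.choice(["Paris", "Paris", "Paris", "London"])
    print(self_consistency("What is the capital of France?", fake))
```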
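
For the evaluation side, the sketch below contrasts a rule-based check (a lenient string match against a reference answer) with a large-model-based check (a yes/no judge prompt). The `judge` callable is a hypothetical stand-in for any LLM judge, and the prompt wording is illustrative only.

```python
import string
from typing import Callable


def _normalize(text: str) -> str:
    """Lowercase and strip punctuation for lenient matching."""
    return text.lower().translate(str.maketrans("", "", string.punctuation)).strip()


def rule_based_check(prediction: str, reference: str) -> bool:
    """Rule-based: the reference answer must appear in the model output."""
    return _normalize(reference) in _normalize(prediction)


def model_based_check(prediction: str, reference: str,
                      judge: Callable[[str], str]) -> bool:
    """Large-model-based: ask a judge model for a yes/no contradiction verdict."""
    prompt = (
        f"Reference answer: {reference}\n"
        f"Model output: {prediction}\n"
        "Does the output contradict the reference? Answer yes or no."
    )
    return judge(prompt).strip().lower().startswith("no")


if __name__ == "__main__":
    print(rule_based_check("The capital of France is Paris.", "Paris"))  # True
```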

🔍 Case Study

(Case study figures 1–7 from the survey.)

🔮 Talking about the Future


📲 Contact

We warmly welcome suggestions and contributions of any kind. Feel free to open an issue or contact Rain by e-mail.
