AI & CST 2 · Moderately Supported · Stanford HAI, 2024 AI Index Report
Large Language Models cannot inherently verify knowledge provenance
Current LLM architectures (transformer-based) generate text from statistical patterns over their training data, not from knowledge graphs with verified transmission chains. Stanford HAI research shows that LLMs hallucinate on roughly 3-15% of factual claims. A chain-of-transmission verification layer, analogous to isnad methodology, is therefore required before AI-generated Islamic knowledge can be treated as trustworthy.
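To make the idea of a chain-of-transmission layer concrete, here is a minimal sketch of an isnad-like provenance chain as a hash-linked data structure. Everything here is illustrative: the names `Link`, `extend`, and `verify`, and the use of SHA-256 digests, are assumptions for this sketch, not part of any existing system or of the Stanford HAI research cited above.

```python
from dataclasses import dataclass
from hashlib import sha256

@dataclass(frozen=True)
class Link:
    """One node in a transmission chain: who relayed the text,
    chained to the digest of the preceding link."""
    narrator: str
    content: str
    prev_digest: str  # digest of the preceding link ("" for the origin)

    def digest(self) -> str:
        # Digest covers narrator, content, and the link to the predecessor,
        # so altering any earlier link invalidates every later one.
        payload = f"{self.narrator}|{self.content}|{self.prev_digest}"
        return sha256(payload.encode()).hexdigest()

def extend(chain: list[Link], narrator: str, content: str) -> list[Link]:
    """Append a new transmission link, binding it to the current chain tip."""
    prev = chain[-1].digest() if chain else ""
    return chain + [Link(narrator, content, prev)]

def verify(chain: list[Link]) -> bool:
    """Recompute each link's digest and check the next link points to it."""
    prev = ""
    for link in chain:
        if link.prev_digest != prev:
            return False
        prev = link.digest()
    return True
```

For example, a three-link chain verifies, but tampering with a middle narrator's content breaks verification of every subsequent link, which is the property an isnad-style layer would need on top of an LLM's unverified output.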