AI & CS · T2 · Moderately Supported · ACTIVE
Source: Stanford HAI, 2024 AI Index Report
Large Language Models cannot inherently verify knowledge provenance
Current transformer-based LLM architectures generate text from statistical patterns learned in training, not from knowledge graphs with verified transmission chains. Stanford HAI research shows that LLMs hallucinate on 3-15% of factual claims. A chain-of-transmission verification layer (analogous to isnad methodology) is therefore required for trustworthy AI-generated Islamic knowledge.
Composite Score: 71.0%
Verdict: Supported by evidence, with some gaps in transmission.

Component scores (weight: score):
  Transmitter Reliability (40%): 88.0%
  Chain Completeness (35%): 60.0%
  Corroboration (25%): 55.0%
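As a sketch of how these components could roll up into the composite: the weights and component names come from this page, but the aggregation formula is an assumption. Treating the composite as a plain weighted mean of the three components yields 69.95%, so the displayed 71.0% likely reflects rounding or adjustments not shown here.

```python
# Minimal sketch: composite as a weighted mean of the three components.
# Weights and names are from this page; the formula itself is an assumption.

COMPONENT_WEIGHTS = {
    "transmitter_reliability": 0.40,
    "chain_completeness": 0.35,
    "corroboration": 0.25,
}

def composite_score(components: dict[str, float]) -> float:
    """Weighted mean of component scores, each on a 0-100 scale."""
    return sum(COMPONENT_WEIGHTS[name] * score for name, score in components.items())

score = composite_score({
    "transmitter_reliability": 88.0,
    "chain_completeness": 60.0,
    "corroboration": 55.0,
})
print(round(score, 2))  # 69.95 -- close to, but not exactly, the displayed 71.0%
```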
Transmission Chain 1: Incomplete (1 transmitter)
  Stanford HAI (Stanford University), Reputation: 88%
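For illustration, a chain like Chain 1 might be modeled as below. This is a hypothetical Python sketch; the field names and the completeness rule are assumptions, not the platform's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Transmitter:
    name: str          # e.g. "Stanford HAI"
    affiliation: str   # e.g. "Stanford University"
    reputation: float  # 0-100 scale, as displayed above

@dataclass
class TransmissionChain:
    transmitters: list[Transmitter] = field(default_factory=list)
    complete: bool = False  # Chain 1 is Incomplete: intermediate hops are missing

chain_1 = TransmissionChain(
    transmitters=[Transmitter("Stanford HAI", "Stanford University", 88.0)],
    complete=False,
)
print(len(chain_1.transmitters), chain_1.complete)  # 1 False
```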
Transmitter Reputation Breakdown: Stanford HAI
  Accuracy (35%): 88%
  Reliability (30%): 85%
  Authority (20%): 90%
  Transparency (15%): 92%
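The four weights sum to 100%, and a plain weighted mean of the sub-scores reproduces the displayed 88% reputation. A minimal Python check, assuming the breakdown is indeed a weighted mean:

```python
# Check: Stanford HAI's 88% reputation as a weighted mean of the four
# sub-scores above (treating the breakdown as a weighted mean is an assumption).

REPUTATION_WEIGHTS = {
    "accuracy": 0.35,
    "reliability": 0.30,
    "authority": 0.20,
    "transparency": 0.15,
}

def reputation(sub_scores: dict[str, float]) -> float:
    """Weighted mean of sub-scores on a 0-100 scale."""
    return sum(REPUTATION_WEIGHTS[k] * v for k, v in sub_scores.items())

stanford_hai = {"accuracy": 88, "reliability": 85, "authority": 90, "transparency": 92}
print(round(reputation(stanford_hai), 1))  # 88.1, which rounds to the displayed 88%
```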