
Pages 147–163

An Explainable AI-Driven Framework for Precision Agriculture: A Comprehensive Survey

A.D. Dhotre¹*, S.A. Thorat²*, B.S. Yelure³ and P.B. Jawade⁴

¹Government College of Engineering, Department of Computer Science and Engineering, Karad, Maharashtra, India
²Government College of Engineering, Department of Information Technology, Karad, Maharashtra, India
³Government College of Engineering, Department of Information Technology, Kolhapur, Maharashtra, India
⁴Government College of Engineering, Department of Computer Science and Engineering, Nagpur, Maharashtra, India
*Correspondence: aakashdhotre12@gmail.com, sathorat2003@gmail.com

Abstract:

This review provides a comprehensive account of Explainable AI (XAI) in precision agriculture, with a particular focus on crop recommendation systems. It traces the evolution of predictive models reported in the literature, from simple, interpretable algorithms to highly accurate ‘black box’ ensemble and deep learning models, whose lack of transparency can erode farmers’ confidence. To make these black box models comprehensible and actionable, the paper examines two XAI frameworks in current use: LIME and SHAP. Synthesizing the research, it highlights significant gaps in the evidence base, including the trade-off between accuracy and explainability, problems of data heterogeneity, and the need for explanations that are meaningful to end users. The paper concludes by outlining a path toward integrated, trustworthy, and interpretable AI systems that support modern sustainable agriculture.
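The SHAP framework discussed in the survey is built on Shapley values from cooperative game theory: each feature's attribution is its average marginal contribution to the prediction over all orderings of the features. As a minimal sketch of the quantity SHAP approximates, the following computes exact Shapley values for a toy, hypothetical crop-suitability score (the feature names, weights, and values here are invented for illustration and are not taken from the paper):

```python
from itertools import permutations

# Illustrative feature set for a toy crop-suitability score.
FEATURES = ["rainfall", "soil_ph", "temperature"]

def model(x):
    """Toy linear 'crop suitability' model over a feature dict (weights are invented)."""
    return 0.5 * x["rainfall"] + 2.0 * x["soil_ph"] + 0.8 * x["temperature"]

def shapley_values(model, instance, baseline):
    """Exact Shapley values: average each feature's marginal contribution
    over all feature orderings; 'absent' features take baseline values."""
    values = {f: 0.0 for f in FEATURES}
    orders = list(permutations(FEATURES))
    for order in orders:
        current = dict(baseline)          # start from the baseline instance
        prev = model(current)
        for f in order:
            current[f] = instance[f]      # reveal feature f's true value
            now = model(current)
            values[f] += now - prev       # marginal contribution of f
            prev = now
    return {f: v / len(orders) for f, v in values.items()}

instance = {"rainfall": 120.0, "soil_ph": 6.5, "temperature": 25.0}
baseline = {"rainfall": 80.0, "soil_ph": 7.0, "temperature": 20.0}

phi = shapley_values(model, instance, baseline)
# Efficiency property: the attributions sum to model(instance) - model(baseline).
```

Because the toy model is linear, each feature's Shapley value reduces to its weight times its deviation from the baseline, and the attributions sum exactly to the gap between the explained prediction and the baseline prediction — the "efficiency" property that SHAP preserves while approximating this computation for models where exact enumeration is intractable.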

Key words:

trustworthy AI