Artificial intelligence (AI) is increasingly being explored for its potential to enhance patient-provider interactions in psychology, a field facing a shortage of mental health providers. This review focuses on the integration of AI, particularly through emotion-predicting system-on-chip technologies, while addressing concerns related to nonclinical chatbots. An extensive literature search identified 112 articles, of which 36 relevant studies were examined; none of the AI tools described is currently FDA-approved for clinical use in psychology.

AI models have shown promise in predicting emotional responses and providing patient support, yet biased or incomplete data sets remain a challenge that can undermine reliability. Privacy risks associated with nonclinical chatbots, such as ChatGPT, further complicate their implementation in mental health care, and ethical and regulatory barriers also hinder the adoption of these technologies.

Despite the potential benefits of AI in improving clinician efficiency and patient access, careful ethical and methodological consideration is crucial for responsible integration. The findings underscore the importance of evidence-based approaches as the field moves toward incorporating AI tools in psychological practice.