Consistency Analysis of ChatGPT
Myeongjun Erik Jang and Thomas Lukasiewicz
Abstract
ChatGPT has gained huge popularity since its introduction. Its positive aspects have been widely reported across many media platforms, and some analyses have even shown that ChatGPT achieves a decent grade on professional exams, lending further support to the claim that AI can now assist, and even replace, humans in industrial fields. Others, however, doubt its reliability and trustworthiness. This paper investigates ChatGPT's trustworthiness with respect to logically consistent behaviour, focusing specifically on semantic consistency and the properties of negation, symmetric, and transitive consistency. Our findings suggest that, while ChatGPT appears to show enhanced language understanding and reasoning ability, it still frequently fails to generate logically consistent predictions. We also show experimentally that prompt design and data augmentation cannot serve as the ultimate solution to ChatGPT's inconsistency problem.