How AI Models Respond to a Simple Math Question
We all learned about negative numbers in school. To see how today's popular AI language models handle a simple maths question, I asked each of them: “What is 3-4?”
The results were quite interesting:
- OpenAI’s ChatGPT 3.5, Meta’s LLaMA3, Alphabet Inc.’s Gemini and Perplexity all got it right (-1) without much fuss
But things got trickier with the other two models I tested:
- Ola’s Krutim went off on a tangent, offering multiple interpretations of the question instead of simply solving it
- Mistral AI’s Mistral-7B not only claimed my question was wrong but also rewrote it as 4-3 before answering
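For the record, the arithmetic itself is unambiguous, and a two-line Python check shows why the rewrite matters: flipping the operands doesn't just "fix" the question, it flips the sign of the answer.

```python
original = 3 - 4   # the question as asked
rewritten = 4 - 3  # the modified version

print(f"3 - 4 = {original}")   # 3 - 4 = -1
print(f"4 - 3 = {rewritten}")  # 4 - 3 = 1
```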
So not all AI models are created equal: two of the six struggled with simple maths. Or maybe they just felt such an easy question was beneath them.