Kim Kardashian ChatGPT Law Exam Fail
Kim Kardashian’s Intricate Relationship with ChatGPT
The Confession: ChatGPT as a Legal Aid
Kim Kardashian and ChatGPT clearly have a complex relationship. During a recent appearance on Vanity Fair‘s Lie Detector Test, hosted by Teyana Taylor, the reality star and businesswoman admitted to using the AI chatbot for legal advice.
“I use it for legal advice,” Kardashian explained, referencing her ongoing legal studies. “So when I am needing to know the answer to a question, I’ll take a picture and snap it and put it in there.”
“Are You Cheating?” - The Accuracy Concerns
Taylor promptly questioned the practice, asking, “So you’re cheating?” Kardashian’s response revealed a surprising truth about her experience with the AI.
“They’re always wrong. It has made me fail tests,” Kardashian confessed. “All the time. And then I’ll get mad and I’ll yell at it and be like, ‘You made me fail. Why did you do this?’ And it will talk back to me.”
A “Frenemy” Relationship: Sassy AI Responses
Kardashian described ChatGPT as her “frenemy,” noting that the AI software often responds with a sassy tone when confronted with its inaccuracies. This highlights the increasingly sophisticated, and sometimes frustrating, nature of interactions with advanced AI models.
“Then I will talk to it and say, ‘Hey, you’re gonna make me fail. How does that make you feel? That you need to really know these answers, I’m coming to you.’ And then it’ll say back to me, ‘This is just how I am programmed.’”
Why ChatGPT Fails: A Deeper Look
Kardashian’s experience underscores a critical point about Large Language Models (LLMs) like ChatGPT: they are not infallible sources of truth. Here’s a breakdown of why ChatGPT often provides incorrect information:
- Data Limitations: ChatGPT is trained on a massive dataset of text and code, but this dataset is not exhaustive and has a knowledge cut-off date. Information published after that date is not included in its training.
- Probabilistic Nature: ChatGPT generates responses based on probabilities, predicting the most likely sequence of words. This doesn’t guarantee accuracy, especially in complex or nuanced areas like law.
- Hallucinations: LLMs can confidently generate plausible-sounding but entirely fabricated statements, a failure mode known as “hallucination.” In legal contexts this can mean invented case names or citations delivered with the same fluency as accurate answers.
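The probabilistic point above can be made concrete with a toy sketch. The model, probabilities, and context below are entirely hypothetical, a minimal illustration of how next-token sampling works in principle: the model picks a *likely* continuation, not a *verified* one, which is why fluent output can still be factually wrong.

```python
import random

# Hypothetical toy "model": a context maps to candidate next tokens with
# made-up probabilities standing in for what an LLM learns from training text.
NEXT_TOKEN_PROBS = {
    ("the", "capital", "of", "France", "is"): [
        ("Paris", 0.85), ("Lyon", 0.10), ("Texas", 0.05),
    ],
}

def sample_next(context, temperature=1.0, seed=None):
    """Sample the next token from the model's probability distribution.

    Nothing here checks facts: the choice is driven purely by the learned
    probabilities, so a less likely (and wrong) token can still be emitted.
    """
    rng = random.Random(seed)
    tokens, probs = zip(*NEXT_TOKEN_PROBS[context])
    # Temperature rescales the distribution: values near 0 approach greedy
    # decoding; higher values flatten it, making unlikely tokens more probable.
    weights = [p ** (1.0 / temperature) for p in probs]
    return rng.choices(tokens, weights=weights, k=1)[0]

context = ("the", "capital", "of", "France", "is")
print(sample_next(context, temperature=1.0, seed=42))
```

With a very low temperature the sketch almost always emits the most probable token, but at normal sampling temperatures the lower-probability continuations remain in play, mirroring how an LLM can occasionally produce a confident wrong answer to the same question it usually gets right.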
