Are kids really safe using ChatGPT?

Oct 11, 2024 06:20 PM IST

AI models often function as black boxes, processing inputs and delivering outputs without users knowing what’s happening in between

Are the digital spaces our kids occupy safe for them? Perhaps not even the most paranoid among us has a clue. If the conversations around this are anything to go by, there isn’t much happening to secure our digital lives.

When kids input assignments into AI tools, they’re feeding data into systems they don’t understand. (Representative file photo)

These questions were triggered by some compelling confessions overheard among a group of 12-year-old kids. They were wrestling with a dilemma: Was it okay to use Artificial Intelligence (AI) tools such as ChatGPT to complete their assignments? After all, it gets the job done fast and leaves them with free time.

One of the boys in the group said they had worked out a 40:40:20 formula. What it meant was that it was acceptable to use ChatGPT for 40% of the assignment, to generate a structure and the main arguments. The next 40% would draw on information from Google search results and other repositories such as Wikipedia. And finally, the remaining 20% must be spent on the “creative” parts, such as the actual writing. The formula was arrived at by consensus. Ingenious? Yes! But there are multiple issues with it.
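If the split sounds abstract, a minimal sketch in Python makes the arithmetic concrete (the function name and labels below are hypothetical illustrations, not anything the kids specified):

# A hypothetical sketch of the boys' 40:40:20 time split.
def split_assignment_time(total_minutes):
    return {
        "chatgpt_structure": round(total_minutes * 0.40),  # outline and main arguments
        "research": round(total_minutes * 0.40),           # Google, Wikipedia, other repositories
        "creative_writing": round(total_minutes * 0.20),   # the student's own writing
    }

# For a two-hour assignment: 48 minutes each for ChatGPT and research,
# and only 24 minutes of original writing.
print(split_assignment_time(120))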

When the least amount of time is spent on the “creative” part, do original thought and creativity erode? And if a significant portion of their work is generated or structured by an algorithm, are the kids truly engaging with the material? Could this have long-term effects on their critical thinking and problem-solving skills? Early research suggests that too much dependence can lead to reduced cognitive engagement and diminished creativity.

Then there is the issue of intellectual property rights. Who owns the content produced by AI? If students submit such work as their own, does that constitute plagiarism? Schools might need to revisit their academic honesty policies to address this new frontier. Countries in the European Union are already hard at work regulating such technologies. They have recognized the risks and are setting guidelines.

All the 12-year-olds I was listening to have access to technology. But India is a large country, and not every kid has equal access. Those who lack access to such tools can fall behind. How do we ensure a level playing field where technology enhances learning without creating altogether new disparities? The question has come up in various fora for some time now, and there is at least a recognition that the problem exists.

Next, take privacy. When kids interact with tools such as ChatGPT, they’re inevitably sharing data, most of the time without fully understanding the ramifications. Are they inadvertently giving away personal information? And are these platforms equipped to handle such sensitive data responsibly? A consultant with the government does not mince his words: “In India, for all the noise, this conversation is barely a whisper. We hear occasional murmurs about regulating AI. But concrete steps? Hardly any. It’s as if our children are navigating uncharted digital territories without a safety net.”

His point is that when kids input assignments into AI tools, they’re feeding data into systems they don’t understand. Are they inadvertently sharing personal information? Where does that data go? It’s unsettling to think their inputs might be stored or analysed without parental oversight.

Talking about privacy, and not just that of kids, Manoj Nair, a Bengaluru-based consultant who was with the Open Network for Digital Commerce (ONDC), makes no bones about his discomfort. “While the right to privacy has been upheld by the Supreme Court as intrinsic to the right to life and liberty under Article 21 of the Constitution, the biggest violator has been the state, and data privacy will be no exception.” If that is the case, there is a third problem to deal with, and the worst of them: transparency.

AI models often function as black boxes, processing inputs and delivering outputs without users knowing what’s happening in between. This opacity can be problematic, especially when minors are involved. How can parents and educators ensure that the content generated is appropriate and that the data isn’t being misused? We don’t have the answers yet, and it hardly helps when the state itself is the biggest culprit.
