The Real Truth About Artificial Intelligence (And Why It Matters to You) by Abrar Nayeem Chowdhury

Artificial intelligence isn’t some distant sci-fi concept anymore. It’s already deciding who gets a loan, who gets a job interview, whose exam score is “good enough,” and even who gets stopped at a border. Most of the time, it works quietly in the background: unseen, unquestioned, and rarely understood.

And that’s exactly where the problem begins.

During the pandemic, as the world turned to technology for support, many individuals encountered the power of AI in a personal way. In the UK, students anxiously awaiting their exam results discovered that an algorithm, rather than their teachers, had determined their futures. This meant that thousands of students were graded down based on historical data from their schools, not on their actual skills and hard work. The reaction was swift and heartfelt, with one powerful sentiment echoing everywhere: “Your algorithm doesn’t know me.”

They were right.

AI systems don’t understand people the way humans do. They don’t know struggle, ambition, or potential. They learn from data, and data often carries the same biases society has been fighting for decades. When those biases are fed into machines, they don’t disappear. They scale.

Around the same time, the UK government faced criticism for using automated systems to assess visa applications. Campaigners argued that the technology reinforced discrimination rather than removing it. These moments forced an uncomfortable question into the public spotlight: If machines are making decisions about our lives, who is holding the machines accountable?

Lawmakers across Europe recognized that they could no longer ignore the pressing question of AI regulation. The European Union has proposed one of the world's most ambitious frameworks for regulating artificial intelligence. Unlike a one-size-fits-all approach, this law categorizes AI systems based on their risk levels. Technologies employed in sensitive areas such as healthcare, law enforcement, recruitment, and transportation will be subject to strict regulations concerning accuracy, transparency, and safety. Additionally, certain practices, such as AI designed to manipulate individuals or facilitate mass surveillance, will be banned outright.

Even everyday tools wouldn’t escape scrutiny. Chatbots, for example, would need to clearly identify themselves as non-human. No more pretending a machine is a person.

Critics still argue that the regulations are insufficient. Facial recognition technology continues to raise significant concerns, particularly because the proposed restrictions focus on real-time use by police, leaving other deployments far less constrained.

Additionally, AI software that claims to read emotions from facial expressions or vocal tones presents another area of uncertainty. Many experts believe these systems are unreliable and deeply biased, yet they are still being developed and marketed.

What’s missing from many of these debates is the public voice.

That’s where organizations like We and AI step in, working to help everyday people understand how AI affects their rights and choices. Their message is simple but powerful: AI governance shouldn’t belong only to tech companies and policymakers. It should include the people whose lives are being shaped by these systems.

Meanwhile, the UK finds itself at a crossroads. No longer bound by EU regulations, it must decide what kind of AI future it wants.

One path prioritizes speed, innovation, and investment. The other emphasizes protection, trust, and accountability. The challenge is finding a balance, because progress without responsibility comes at a cost.

At its core, the conversation about artificial intelligence isn’t really about technology at all. It’s about power, fairness, and who gets to decide the rules. AI will keep evolving. That’s inevitable. What isn’t inevitable is allowing it to grow unchecked.

The real question is not what AI can do but who it serves.


“They say a machine knows your future
from numbers you never chose,
from patterns older than your name,
from data that never felt your fear.

It measures you without your story,
ranks you without your voice,
decides in silence
and calls it fairness.

But I am not average.
I am not a probability curve.
I am late nights and second chances,
calloused hope and unfinished dreams.

If a machine must judge us,
Let it first learn mercy.
Let it learn the weight of a pause,
The truth behind a trembling yes.

Because the future should not be coded
without asking who we are.
And no algorithm, no matter how clever,
should ever forget the human in the data.”

                      ---   (Poetry by Abrar Nayeem Chowdhury)


Artificial intelligence is not just shaping the future; it is quietly shaping the present. Every automated decision carries the weight of human lives, even when it pretends to be neutral. The danger isn’t that machines will think like humans, but that humans will stop questioning the machines.

Progress doesn’t mean handing over responsibility. It means choosing transparency over convenience, fairness over speed, and people over profit. AI can support us, guide us, even protect us, but only if it is built with care and governed with courage.

The true measure of artificial intelligence will not be its level of advancement, but rather how well it embodies our values, empathy, and sense of justice. Ultimately, technology should not dictate who we are allowed to become.

That choice must always remain human.


Copyright: Abrar Nayeem Chowdhury.


#AI


