I’ve taught thousands of people how to use AI – here’s what I’ve learned

Training teams to use AI at work has given me a front row seat to a new type of professional segmentation.

Some people leave everything to the machine and stop thinking. Others won’t touch it at all.

But there is also a third group. They learn to work with AI seriously, treating it like a bright, enthusiastic apprentice who needs management and support to do their best work.

What makes the difference? It is rarely technical ability. It is curiosity: the willingness to experiment, get things wrong, and find out what AI is really good at.

This is what I have learned so far.

Most people fail at AI because they don’t understand what it really is

People I’ve worked with oscillate between extremes: treating AI as an omniscient oracle or dismissing it altogether after one mistake.

Current AI has as much in common with the human brain as a bird has with an A380. Both can fly, but the similarity ends there. Large language models simply predict words based on patterns in their training data. This is why they can produce fluent prose about well-covered topics, but also make things up confidently when they’re on unfamiliar ground.

Once users understand this, their approach changes: they start giving the AI clear goals and proper context. When someone tells me that everything they get from an AI is nonsense, it almost always turns out that they are getting generic answers to generic prompts.

Those who achieve the best results treat AI as a skill, not a shortcut

The biggest predictor of success is not technical ability; it is whether you treat AI as a learnable skill rather than a magic box that either works or doesn’t. The people best at using it are those who experiment daily and think about how to get better results next time. The goal is to have machines work for us, not think for us – and that means using them in an active, critical and engaged way.

AI needs direction, feedback, and improvement – just like people do

The skills needed to use AI are ones that many people already have: communication and delegation. As with an apprentice, you wouldn’t assign them a project and disappear. You’d break it down, check in regularly, and course-correct as needed. The same applies to AI.

And as with an apprentice, as their manager you are ultimately responsible for what they produce. That’s what ‘human in the loop’ really means: it’s your job to keep the AI on track and make sure the output is up to standard.

You shouldn’t outsource your decisions to AI – or give it sensitive data

A few months ago, the manager of a small retail chain proudly showed me an HR dashboard he had built using AI. Unfortunately, he had also imported sensitive information without thinking about what would happen if that data leaked, or which policies he would have to follow. I sent him straight to IT.

But the risks go beyond security. AI systems are trained on data created by humans, and they reflect our collective biases. You should avoid asking AI to make high-stakes subjective decisions, such as “should we invite this candidate to interview?”, which are prone to bias. Instead, focus it on factual assessments, for example “does this candidate have the required number of years of experience?”

Ignoring AI won’t stop its impact

The environmental, ethical, and social impacts of AI are significant and growing. At a recent session at an environmental charity, a director was torn between the potential to do more as an organization and the ethical costs of doing so, such as the carbon footprint of running AI systems. But AI isn’t going away, and it is far better to have AI-literate citizens who are able to demand that it be built in a responsible and democratic manner. AI is not a train waiting for us to board; it’s already mid-journey. The only question is who is driving.

The pace of AI development leaves no room for slow decisions

Today’s version of AI is the worst it will ever be, and it’s improving faster than most people realize. Tasks that were impossible a year ago have become routine. Where I once spent long nights at a keyboard trying to figure out why my code wasn’t running as it should, I now create entire applications in a matter of hours with little more than a few pointers. Many developers laughed last year when Anthropic’s CEO said that 90% of code would soon be written by AI. Today, many of them believe he wasn’t far off.

Unlike past technological revolutions, this one is moving faster than our ability to adapt. It took a century to get from the steam engine to the locomotive, and fifty years from Faraday’s discoveries to Edison’s power stations. Today, the gap between a breakthrough and global adoption is measured in months. We don’t have the luxury of a decade-long debate; we must build our social and democratic response as fast as we build the technology, or we risk being governed by tools we do not yet understand.

The people who will shape how AI changes the world need not be the technologists who build these systems. They may simply be people who are willing to experiment and who take both the capabilities and the risks seriously. We all have a responsibility not only to understand AI ourselves, but to push our employers, communities and governments to use it in a way that ensures no one is left behind.

Tom Hewitson is the founder and chief AI officer of General Purpose, a London-based AI training company
