Rely on AI Wisely — Use It as a Tool, Not a Dependency


Professor Shang Yi
SIBU: Artificial Intelligence (AI) is all around us, and with its increasing usefulness in our daily lives and work, one cannot help but ask the question: ‘Can we fully trust AI?’

The report Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025 found that 66 per cent of people globally use AI intentionally and regularly.

However, less than half of global respondents (46 per cent) are willing to trust it. One possible reason for this phenomenon could be a lack of understanding about AI.

Experts are cautiously optimistic about the impact of AI across every industry worldwide. Thus, embracing AI is a necessity in today’s world.

That is why Professor Shang Yi from the University of Missouri, Columbia, United States, said AI literacy is important for using it responsibly and effectively as a tool in everyday life.

Take chatbots such as ChatGPT, for example. Professor Shang Yi said they are based on large language models, which are trained on large amounts of data—text, images, audio, conversations from various platforms, and more.

He said these large language models are similar to the human brain. They are capable of answering questions and are highly knowledgeable—much like humans.

But the question remains: Do humans always answer questions correctly? The answer is no.

In the same way, he said, chatbots or large language models are not guaranteed to always produce correct answers.

“Are they still useful? Yes. You cannot always trust them, but at least they are useful. Let’s use humans as a reference. Do you always trust a person—their answers, their judgment? The answer is no.

“We all have different levels of competency. When you see a doctor, we know that doctors have varying levels of expertise.

“Do you always trust a doctor’s judgment or decision? Not always. However, do you trust a doctor more than a regular person like me to diagnose a disease? You do,” he explained.

He further explained that for certain tasks, AI can perform extremely well—sometimes better than humans. When AI is used as a tool in this way, he said, it can be trusted accordingly.

According to him, even before the field of AI existed, computer scientists and engineers dreamed of using machines to do what people do well.

However, early machines only did what humans told them to do.

Humans, with their creative capacity, wanted to build machines that were smart and capable of handling the complex tasks people do—like understanding the environment.

Humans live in an environment with so many things going on and make thousands of decisions every day.

“In order for machines to understand, they need to receive input or sensory data from the environment. Only then can they comprehend what’s going on and make decisions,” he said.

He described AI as a smart assistant to help humans in daily life.

AI may have taken off rapidly in the past 10 years, but it actually began in the 1950s, with British mathematician and computer scientist Alan Turing as a pioneer.

He devised the Turing Test to assess whether a machine could behave in a way indistinguishable from a human.

Turing significantly influenced the development of AI and laid the foundation for many modern computer science concepts.

Since then, people have tried to use computers to do intelligent tasks, such as playing checkers or chess.

In fact, AI is one of the oldest fields in computer science, and machine learning is one of its major components.

“It’s a major characteristic of AI because you don’t want to write every single instruction for the machine.

“Instead, you want the machine to learn automatically from data and experience.

“In the early days, people had already created machine learning algorithms to improve the performance of software in playing games like checkers.

“By playing repeatedly, the software improved. But due to the limitations of early computers, those programs could not solve real-world, large-scale problems—only small, toy problems.

“Since the 1950s, the AI field has experienced several ups and downs,” he said.
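The distinction he draws—writing every instruction by hand versus letting the machine derive a rule from data—can be illustrated with a toy sketch. This is a hypothetical example for illustration, not code from the professor's work: instead of hard-coding a decision threshold, the program computes one from labelled examples.

```python
# Toy illustration of "learning from data": the decision rule (a score
# threshold) is not written by the programmer but derived from examples.

def learn_threshold(examples):
    """examples: list of (score, label) pairs, where label True marks the
    positive class. Returns the midpoint between the highest negative
    score and the lowest positive score."""
    positives = [score for score, label in examples if label]
    negatives = [score for score, label in examples if not label]
    return (max(negatives) + min(positives)) / 2

# Labelled training examples (illustrative values).
data = [(0.2, False), (0.4, False), (0.7, True), (0.9, True)]
threshold = learn_threshold(data)
print(round(threshold, 2))   # 0.55, the midpoint of 0.4 and 0.7

# A new score is classified by the learned rule, not a hand-written one.
print(0.8 > threshold)       # True
```

Change the training data and the rule changes with it—no instruction in the program is rewritten, which is the point of the quote above.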

He added that significant waves of development occurred, especially in the past 15 to 20 years, as computers became more powerful and vast amounts of data became available through the internet.

These developments advanced machine learning algorithms.

He said one such algorithm is called the deep neural network, which mimics the structure and function of the human brain.

“This large-scale computational model has been developed aggressively over the past 20 years with major investments from federal agencies and the private sector,” he said, adding that this has greatly improved AI system performance.

One major milestone was the release of ChatGPT in November 2022. It became so popular that it reached an estimated 100 million users within about two months of launch.

ChatGPT was revolutionary because it could hold human-like conversations and provide wide-ranging information.

As a result, ChatGPT has become one of the most valuable tools for providing answers and resources—especially in education.

While ChatGPT is helpful in education, many institutions, including the University of Missouri, are studying balanced approaches to its use.

He said institutions must recognise that AI tools or chatbots can serve as resources—like encyclopedias or libraries.

These tools are efficient in delivering quick, high-quality responses, and their capabilities are improving.

“However, when students study, they are expected to acquire knowledge themselves—not just rely on machines to give them answers.

“So, there needs to be a balance in using chatbots during the learning process. The goal is to enhance learning and skill development—not to replace what students are meant to learn,” he pointed out.

He also said ChatGPT can be a valuable research tool, offering article summaries and presenting multiple viewpoints.

Still, the core ideas must come from the students—not the machine—especially since complex issues often involve personal, non-standard opinions.

Students are expected to develop their own key points and drafts. AI tools can help revise those drafts, but humans must review the final versions.

He acknowledged that ChatGPT can assist with homework or reports, and that some online services now offer writing support powered by AI, making it cheaper and more accessible.

“In some ways, chatbots are really helpful. They are patient, offer examples, case studies, and support. They can even outperform human systems in some areas.

“However, AI chatbots are still machines. They need to be directed appropriately, with clear boundaries and guidance,” he said.

In his own research group, he requires all students to learn how to use AI tools in their studies.

However, when conducting experiments and writing reports, students are not allowed to use ChatGPT to generate entire papers.

“They must contribute their own ideas, innovations, correct data, and verify any data provided by ChatGPT. In scientific writing, all figures must be accurate and verifiable.

“If you treat AI chatbots as tools, the responsibility for the outcome must still lie with the human user,” he said.

On the topic of jobs, he noted that AI will replace certain roles—such as data entry.

“For example, we work with our journalism school. They have archives of articles, and we’ve automated the digitisation and summarisation process, replacing manual data entry.”

Professor Shang Yi said his team also worked with the Missouri Department of Conservation to track bird migration.

“In the past, they used planes to observe birds manually. Later, they used drones to capture images and videos.

“But you still need humans to watch the footage, and people cannot process such a large amount of data to extract useful information.

“So we developed a machine learning system that automatically counts the birds and records their locations.

“That replaced some of the work and also sped up the process greatly,” he said.

He added that in factories, robots will increasingly replace human labour to boost efficiency and speed.

Still, he emphasised the need for responsible and effective AI use.

“I want everyone to understand that AI is here to stay. It’s very useful, and if you do not use it, you will be left behind.

“People say AI will not replace your job—but your job will be replaced by another person who knows how to use AI,” he concluded.

Humanity cannot run away from the onslaught of technology, so let us face it by arming ourselves with the correct understanding.

Professor Shang Yi will be conducting a seminar on Generative AI in Education this Saturday (June 14) here at Methodist Pilley Institute (MPI).

The free seminar runs from 9am until 11am. All are welcome.