How do we make use of technology to gain competitive advantage?

Dr Ai Xin
NUS School of Computing

Article Contribution by Dr Ai Xin

In recent years, more and more people have been realising the importance of data in Artificial Intelligence (AI). Andrew Ng initiated the Data-Centric AI movement in March 2021, and in a recent interview he said that the biggest shift in AI this decade would be towards data-centric AI. In this article, I would like to share my view on this by looking back at the history of AI.

AI started in the 1950s-1960s with a lot of expectations. Early AI focused on general-purpose problem solving and search. However, very quickly in the 1970s, people realised that they had underestimated the complexity of the problem. It was easy to find a solution for a toy example, but once people tried to generalise the solution to real-life problems, it was soon beyond anyone's reach. The limited memory and computation power of the time also made it very difficult to apply AI in practice.

Therefore, in the 1980s, people adjusted their expectations to focus on more realistic problems. Instead of trying to solve the general problem, they concentrated on specialised areas and developed rule-based systems using experts' knowledge. These were the so-called Expert Systems, which created a lot of success in many specialised areas (e.g. medicine, control, planning) and made AI a billion-dollar industry. Once again, in the late 1980s and early 1990s, people gradually realised the limitations of rule-based systems, which relied mainly on expert knowledge. Real-life problems could be very complex, with many unclear rules and relations, which greatly affected the scalability of rule-based systems. To solve this problem, people started to develop intelligent agents which could learn from historical data and adjust accordingly. This was the beginning of Machine Learning (ML). It was during this period (1990s-2010s) that AI/ML became widely adopted in the business world and created many success stories (e.g. Google, Netflix).


More recently, following the major technology breakthrough in Deep Learning (DL) in 2012, AI/DL has been successfully applied in real-life applications, e.g. face recognition, self-driving cars and chatbots. AI is now moving at the speed of light, and the question is what comes next: is there another AI winter coming?

Firstly, things are quite different this time. Besides the better algorithms powered by deep learning, we also have big data availability and highly improved computing power. Many companies and governments are investing heavily in AI-related projects. However, we still need to learn from history and not have too many unrealistic expectations of AI, thinking that AI can solve everything. Moreover, after identifying the proper problem, we should focus on utilising ML/DL techniques to solve specific problems in narrow areas.

To a certain extent, we can learn from the success story of Expert Systems and try to properly utilise cutting-edge ML/DL models in many industries with the help of expert knowledge. We should work with domain experts to collect the most useful data and properly transform it to effectively train ML/DL models. Hopefully this will generate some really useful and effective models which can be applied in actual industry processes. Andrew Ng mentioned something similar: "In many industries where giant data sets simply don't exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn."
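To make the "good data" idea concrete, here is a minimal sketch (all data points and labels below are invented for illustration): a simple nearest-neighbour classifier trained on a handful of examples, where correcting a single noisy label changes the prediction. The point is not the algorithm but the data: with only a few examples, each label carries a lot of weight, so careful curation with a domain expert matters more than volume.

```python
# A toy illustration of "good data over big data": with a tiny training
# set, one mislabelled example is enough to mislead the model, and
# fixing that one label fixes the prediction.

def nearest_neighbour_predict(train, point):
    """Return the label of the training example closest to `point`
    (squared Euclidean distance)."""
    best = min(train, key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], point)))
    return best[1]

# A small training set of (features, label) pairs. The features could
# be, say, measurements from a factory inspection image (hypothetical).
noisy_train = [
    ((1.0, 1.0), "ok"),
    ((1.2, 0.9), "ok"),
    ((5.0, 5.2), "defect"),
    ((5.1, 4.8), "ok"),      # mislabelled: should be "defect"
]

# The same data after a domain expert reviews and corrects the label.
clean_train = [
    ((1.0, 1.0), "ok"),
    ((1.2, 0.9), "ok"),
    ((5.0, 5.2), "defect"),
    ((5.1, 4.8), "defect"),  # corrected label
]

query = (5.05, 4.9)
print(nearest_neighbour_predict(noisy_train, query))  # -> ok (misled by noise)
print(nearest_neighbour_predict(clean_train, query))  # -> defect
```

In a real project the model would of course be more sophisticated, but the lesson scales: when the dataset is small, iterating on label quality with domain experts is often a better investment than collecting more raw data.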

To conclude, I am optimistic about seeing AI/ML/DL applied across industries in the next decade, and we need both AI experts and domain experts from various industries to work together to make it happen.
