CIO (New Zealand) | Divina Paredes | Tuesday, 30 April 2019.
When Dr Ayesha Khanna talks about artificial intelligence, she does not start with technology.
“That’s the wrong way to start,” says Khanna, co-founder and CEO of ADDO AI, an artificial intelligence (AI) solutions firm and incubator.
“I start with impact.”
For her, one way to do this is to imagine Barack Obama as a law student.
According to her, if Obama were to enter law school today, he would most likely be studying AI platforms.
In fact, she says a professor at Yale Law School told her one of the most popular courses for their students is around robots and AI.
She cites a possible scenario in which a new lawyer is asked to research a case, finding similar cases and any precedents. This “grunt work” is not exciting for the lawyer.
Meanwhile, software from Ravel Law can go through all related cases in the United States and find interesting correlations, such as who constituted the jury, the arguments made, and the decisions reached.
“This is now the kind of automation that we are seeing across all fields,” states Khanna.
She cites another case, the creation of ‘Ross’, which was touted as the first AI lawyer.
“Everybody was worried Ross would replace lawyers,” she says.
Ross can mine huge amounts of data and give insights to support humans, she explains. “It is important to note that human input is still part of the equation.”
“Yes, we need lawyers, but we don’t need them trained the way they were or working the way they were. We want them to focus on what they went to law school for: to think,” adds Khanna. “And we need them to understand how to work with machines.”
She stresses that law schools are not lagging behind the AI trend either.
Harvard Law School, for instance, has started the CaseLaw Access Project (CAP).
This is a compilation of all precedential cases, covering 40,000 volumes of case law comprising some 40 million pages of text from 1658 to the present.
In partnership with Ravel, the law school recently made 360 years of US case law freely available on the internet.
“This is an essential tool for lawyers in the future,” she declares.
For Khanna, these examples of how AI has changed a traditional industry like law present critical lessons for today’s business leaders on how they should view this set of technologies.
“If you don't know about new technologies like AI, virtual reality, and Internet of Things (IoT), how can you innovate?”
She adds that, “Innovation today is driven and made possible by many of these technologies.”
“If you don’t move fast, someone unexpected will come after your business with the power of data and AI,” warns Khanna, who spoke at the annual conference of the NZ Institute of Directors.
“I spend a lot of time educating senior executives, and encouraging them and their middle management to train business users on the basics of AI.”
She tells them: “If you don’t have an AI-first approach, you are going to be disrupted by a competitor.”
“First of all, it is important to have someone technically savvy on your Board,” she advises.
“You need to have diversity on the Board. As more and more products are powered by AI, make sure they are not biased in any way.”
Diversity in the team is also important. “If you have women, minorities of all ages [in your team], they will notice this [bias] before the product hits the market.”
“You have to train your people around governance,” she states. “Looking out for bias is an ongoing, constant process.”
Khanna points out, too, that “Data you put in is important, oversight is important. Inspect your algorithm to make sure it is not biased.”
She has a similar message for those in charge of regulatory systems.
Regulatory panels should not just be composed of lawyers and politicians, but also technology experts and philosophers, she says.
“Get the experts at the table so they can truly inform what is happening right now, because change is happening so fast.”
Khanna further talks about various ways AI is impacting a range of sectors today.
Governments, too, are using AI.
She adds that every country has its own way of dealing with security.
In China, for instance, looking for criminals is like looking for a needle in an extremely large haystack.
China manages this through ‘smart policing’, aided by data provided by 176 million surveillance cameras.
A company called SenseTime works with police in Guangzhou, Shenzhen and Yunnan to identify criminals. Police officers wear glasses, or ‘special goggles’, equipped with facial recognition software.
The government provided SenseTime with a database of over two billion images to train its AI.
Another Chinese company, Watrix, can recognise individuals by their gait.
Khanna shares that many of the 300,000 criminals wanted in China are in its database, and this is one way they can be identified.
Another AI use involves food safety, an issue in the developing world.
Citing another case in China, she says consumers may want to know whether the chickens they buy at the grocery are really organic or free range.
"One company realised somebody is willing to pay a bit of money for this information, and it is a business problem that can be solved by technology."
The company, ZhongAn, is developing facial recognition technology for poultry, so customers can be sure they are eating the same chicken that was raised on the farm and labelled as organic. The chickens wear anklets, and information from the time they are hatched until they reach the grocery is recorded on a blockchain. No one can tamper with this information.
She says more than 10,000 chickens are already wearing the monitoring device called GoGo Chicken. ZhongAn says these devices will be found on 2500 farms in China by 2020.
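The tamper-resistance Khanna describes comes from the basic structure of a blockchain: each record's hash covers both its own data and the hash of the record before it, so altering any earlier entry breaks every link after it. As a minimal sketch of that idea (not ZhongAn's actual system; the life-cycle events are hypothetical):

```python
import hashlib
import json

def record(prev_hash, payload):
    """Append one event: the hash covers the payload plus the previous hash."""
    body = {"prev": prev_hash, "data": payload}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return {"prev": prev_hash, "data": payload, "hash": digest}

def verify(chain):
    """Recompute every hash; an edited record breaks all links after it."""
    prev = "genesis"
    for block in chain:
        expected = hashlib.sha256(
            json.dumps({"prev": prev, "data": block["data"]}, sort_keys=True).encode()
        ).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

# Hypothetical life-cycle events for one tagged chicken
chain = []
prev = "genesis"
for event in ["hatched", "moved to free range", "health check", "shipped to store"]:
    block = record(prev, event)
    chain.append(block)
    prev = block["hash"]

print(verify(chain))           # True: the untampered chain verifies
chain[1]["data"] = "caged"     # tamper with a mid-chain record
print(verify(chain))           # False: the tampering is detected
```

In a distributed ledger, many parties hold copies of the chain and compare hashes, which is what makes a quiet after-the-fact edit impractical.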
It takes a community
Khanna is emphatic about the need to upskill and empower everyone to use AI.
“Every job now has, or soon will have, someone working with AI and data as part of the team.”
She says it is likewise important to ensure programmes will nurture the talent to succeed in the AI-driven world.
“Every single thing my children do will be impacted by AI,” she states and notes that, “To have sustainable and equitable growth in an AI-powered economy, we need to guide and empower our nation’s talent.”
“You need to upskill and empower everyone. Your domain expertise and knowledge will be your advantage.”
At the same time, an AI engineer cannot go to any business or company and try to change it with the use of AI without understanding the domain.
“AI is your responsibility, your opportunity and your problem,” she points out, in a message addressed to leaders in government and the private sector.
“You need an interdisciplinary team,” she says.
She adds that the ability to collaborate is an important skill. “A lot of innovation will be led by people in partnership with the business.”
Khanna further says, “You will need to work with deep technology experts on AI, robotics and virtual reality. You need to think for your company, your customers and what matters to them. You need the basic skills to connect the dots, to look at a problem and the solution. The glue between them is technology.”
Singapore, for instance, subsidises AI courses.
One such course teaches the basics of AI in the context of banking and finance.
“The idea is that I am not going to teach AI in the abstract; I will teach you the AI that is relevant to you, with the data and the opportunity it offers,” she explains.
The bankers who complete the course are marketable for life, because they know how to work with data scientists and engineers, and UX and UI designers, she says.
They will build the new financial services for the emerging middle class.
Khanna believes there will be more courses like this that will be offered in the future.
“AI is inevitable,” she says. “You will run into the need for AI and data.
“You must be open to ideas and be able to work with people like AI engineers, and be empowered and confident enough to question their bias.”
She also says, “You should be able to probe and make sure they reflect the values of your company.”
Khanna, thus, encourages everyone - no matter what age, gender or background - to learn a bit about AI.
“At the very least,” she says, “you can sign up for newsletters that will keep you in the loop of interesting things that are happening in emerging technologies.”
This way, “you can recognise it and feel empowered to be part of the AI-powered economy.”
Khanna, who has an undergraduate degree in economics from Harvard University and a doctorate in information systems and innovation from the London School of Economics, is personally involved in this advocacy.
She is the founder and chairperson of 21C GIRLS, a not-for-profit that provides free coding, artificial intelligence, and robotics classes to girls in Singapore.
To date, it has taught 5000 students in schools and community centres and is supported by a range of organisations like Google, VISA, Goldman Sachs, and PayPal.
She is also founder of Empower: AI for Singapore, a national movement to teach all youth in the country the basics of artificial intelligence.
AI sandbox and support
Khanna, however, lists a range of issues that need to be considered as organisations move into the AI world.
She points out AI is not infallible.
She cites Facebook’s AI, which the company says failed to detect the shooting video of the Christchurch attacks.
“There may be people who may be tagged inappropriately,” she adds.
“It is important for us to look at the benefits, but also think about the risks," she says, citing "the importance of regulation, safety nets, and the human input on making final decisions.”
She cites how Singapore, where she is based, is dealing with this challenge.
First, she says, Singapore provides sandboxes and regulatory support.
In the area of fintech, Singapore encouraged people to use the new technology, but also assessed the risks.
“You will deploy it in a sandbox, test it with a beta set of users, and only when it is in a position to be launched will you launch it.”
One female entrepreneur developed blockchain-based insurance products. The government gave her sandbox support; she was one of the first to graduate from the programme, and she is now earning money and raising funds for her venture.
Khanna stresses that, “It is important to have this balance between regulatory oversight and innovation.”
The second is to “emphasise safety first.”
“It is hard to emphasise this enough,” she says, on AI safety and ethics.
She notes that 5G is coming, connecting over a billion sensors around the world with IoT.
“But if everything is connected, you think of cybersecurity, and you think of data protection, as well. You should have the right to your information, and it should be easy for you to understand how the company is using your information.”
Third, she says, is that Singapore set up a council on AI and ethics.
This means putting down a framework to guide companies to have an ethical approach in AI governance.
She relates that, “A lot of people ask me, are you afraid AI will become smarter than us?”
Her response? “No, not right now, but I am afraid of people with malevolent intent, who will be able to manipulate us with AI.”
For instance, she says, the same technique that generates data for AI can help detect cancer in radiology reports, but it also allows AI to mimic human beings.
“You could record my voice for just three seconds, and then you could literally call anybody who knows me and programme me to say anything.”
“This kind of fake news is a real concern,” she stresses.
At the same time, “You don’t want to stop the software; have you seen the fear of someone waiting for a cancer prognosis? You want to help them, but you also want to avoid having people manipulate human beings.”
According to Khanna, therein lies the dilemma of technology.
“That is why we need to have ethics and safety first.”