Human Agency and Human-Centered Design Need to Be at the Core of AI

Areej Mehdi | MIT Technology Review | 30 March 2020

As data continues to be collected and analyzed, it is important for human beings not to passively observe and be overwhelmed by AI, but to think about how they can use it to empower themselves. This empowerment of human agency is very important.

Ayesha Khanna is CEO of ADDO AI, an artificial intelligence solutions provider, and co-author of Hybrid Reality: Thriving in the Emerging Human-Technology Civilization. Managing Editor Areej Mehdi sat down with Khanna to talk about her vision for smart cities and a world powered by artificial intelligence.

Should we be afraid of AI?

Every technology is a double-edged sword. One can take a very naive view of it or be ridiculously pessimistic, and people usually swing between the two because they are not informed. The key to making sure that AI is used properly is good governance. When you have the right governance in place, you build trust. You also need accountability. This is done over time, and it requires the ability to move beyond knee-jerk reactions, to think in a balanced manner, and to consider the unintended consequences that may follow if something is not governed properly.

For instance, the phenomenon of fake news was not the intention of the people who started social media websites like Twitter and Facebook. But because they did not think about the unintended consequences, it has become a monster in itself. At the same time, the technology at the core of fake news – where it stems from – is generative adversarial networks (GANs). GANs were really designed to improve the accuracy of algorithms, and in fact they have. Research at MIT has shown that such models can detect breast cancer in women up to five years earlier. You cannot stop the march of technology, but you can govern it better, and that is a combination of organizational governance, regulation at the federal level, human agency, and the education of people.
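For readers unfamiliar with the term, here is a minimal sketch of the adversarial setup behind GANs: a generator learns to produce samples that a discriminator cannot tell apart from real data. The network sizes, data, and hyperparameters below are toy placeholders for illustration, not from any system mentioned in the interview.

```python
# A minimal GAN sketch: the generator turns random noise into samples that the
# discriminator tries (and, over training, fails) to distinguish from real data.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 2, 32  # toy dimensions

generator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(batch, data_dim) * 0.5 + 2.0     # stand-in "real" data
    fake = generator(torch.randn(batch, latent_dim))    # generated samples

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Training alternates between the two objectives: the generator improves only by producing samples that better fool the discriminator.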

With the rise of AI, how do we prepare people for this transition?

There are two steps, really. One is technical education, whereby everyone should have some basic education in technology, both in understanding how computers work and how data science works. This should be done in a reasonable way. The second key is communication. Nobody can learn enough to be an expert, but you should know enough to be able to communicate with somebody who is a deep-tech expert. The best way to prepare for something is to learn a little bit; just as we need to learn a little about business, business people need to learn a little about us. Another way is to learn how to collaborate, or to partner. And that hinges very much on communication. More than the technology, it is the interaction of people that can make or break an AI project.

What new/upcoming innovation in machine learning are you most excited about?

Alongside AI itself, we are seeing architectural improvements in the field of machine learning. One of the things we’re seeing is the work that Geoffrey Hinton has been doing on capsule networks. Most of the advances happening in the US, Canada, and Cambridge are attempts at improving deep network architectures so that they work more efficiently. This would enable them to work with less data, make fewer mistakes, and improve their overall performance. So that’s interesting both in terms of putting a huge amount of AI at people’s fingertips and in terms of creating architectures that could potentially enhance data privacy.

In the short term, 5G is another technology that is going to change the way machine learning is done, through federated machine learning models where more of the AI and computing is done at the edge. I think Natural Language Processing is going to make enormous jumps, similar to what we saw with computer vision.
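As a rough sketch of that federated, at-the-edge idea (the linear model and random data below are hypothetical stand-ins), federated averaging keeps raw data on each device and shares only model weights with a coordinating server:

```python
# A sketch of federated averaging: each edge device trains on its own private
# data, and the server only ever sees (and averages) the resulting weights.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device fits a linear model to its private data by gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
global_w = np.zeros(3)
# Three hypothetical edge devices, each holding data the server never sees.
devices = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

for round_num in range(10):
    # Each device computes an update locally; only the weights travel.
    updates = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(updates, axis=0)     # federated averaging step
```

Because only the averaged weights travel over the network, the raw readings never leave the device, which is the privacy and bandwidth argument for pushing computation to the edge.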

And then of course there is another exciting area: thinking about AI in totally different ways. Gary Marcus and others have been working on ways to make AI more like the human brain. Their work explores whether we can embed these templates into AI, making it more conscious, more easily replicable, or closer to how the human brain works.

Many of these things are still in the research phase, but that’s really what’s exciting for everyone.

Are we ready for robots?

I think we are. But there are multiple ways to look at it. One is: are we individually ready? We’ve already seen that in the work of the company GROOVE X, which is creating robots called LOVOTs. Kaname Hayashi (the CEO) told me that people in the West constantly talk about robots as enhancing productivity, but ‘my robots are there to add cheer and love to the environment’. There has been research showing that human beings tend to sympathize with any object that displays some kind of social emotion, even if it’s a robot, so maybe we are already kind of ready for it.

Are we economically ready for it? I don’t know, because that depends on the cost, and the cost is coming down for Japan, Singapore, and Europe – essentially wherever there are aging populations. For other places, such as South Asia, where the cost of labour continues to be cheap, the cost of robots will not make sense.

Are we ready for it as a society? That is complicated to answer. As a society and as a government, we have to think about what we are giving these robots in exchange for letting them move freely around society. Who owns them?

Perhaps also, how does ownership affect our perception of them?

I think the larger question is what happens when we interact socially with robots that are owned by a third party. When we bring them into our homes and offices, they are going to alter the way we live and work. These robots have all kinds of sensors, cameras, and the capability to collect all kinds of data about us. Take Alexa, for example.

Alexa can hear the way somebody is breathing and could actually predict that someone is going to have a heart attack before they are fully aware of it.

When you ask Alexa a question, can it identify the context or nuance behind it? You say: “Can you order me some flowers?” But if I sound upset, will it recommend something else for me? If I sound happy, will it recommend some balloons? If it hears a baby crying in the background, will it say something else?

The more context you’re giving it, the more it is able to adapt its recommendations and, in a way, impact your buying habits or even shape them. The more context you provide, the more it learns. And who is gathering all this information?
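To make that concrete, here is a purely hypothetical toy – not Alexa’s actual logic or API – showing how extra context signals could steer what an assistant recommends:

```python
# A hypothetical illustration: detected mood or background sound changes
# what gets suggested alongside the literal request.
def recommend(request, mood=None, background=None):
    suggestion = request  # default: fulfil the literal request
    if mood == "upset":
        suggestion += " with a sympathy card"
    elif mood == "happy":
        suggestion += " and some balloons"
    if background == "baby_crying":
        suggestion += ", delivered quietly later in the day"
    return suggestion

print(recommend("flowers"))                        # flowers
print(recommend("flowers", mood="happy"))          # flowers and some balloons
print(recommend("flowers", mood="upset", background="baby_crying"))
```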

Our phone is always with us, but now there are multiple other objects all triangulating information about us. And then of course there is the whole issue of hacking, manipulation, and misuse of information. We need to ask: where is the data going? Who is to blame if something bad happens? Have we put in place the legal and regulatory frameworks to minimize risk? I think these are the issues we need to consider as we bring robots into our lives.

And these issues would be at the fore when looking at the evolution of smart cities. What is your vision for building smart cities in the age of AI?

I think my vision is very much aligned with what I have seen Singapore do. In 2014, Singapore announced that it was going to be a smart nation. Technology is one component, but technology for technology’s sake, or as the operating system of the city, is not the end goal of a smart city. The end goal is to pivot the economy towards the industries of the future, to develop talent, and to create opportunities for social mobility and a better life.

Technology is one thing, but if you are going to be lonely, or if there is going to be pollution, then it doesn’t make sense to focus only on the smart home. Take elderly people, for instance: by placing sensor networks in their homes, someone can be alerted if they fall, as the sketch below illustrates.
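A minimal sketch of that kind of alert rule, assuming a wearable or in-home accelerometer stream (all thresholds and the notification hook are hypothetical):

```python
# A hypothetical fall-alert rule: a sudden acceleration spike followed by
# several seconds of near-stillness triggers a caregiver notification.
from dataclasses import dataclass

@dataclass
class Reading:
    timestamp: float      # seconds since monitoring started
    acceleration: float   # deviation from resting magnitude, in g

def detect_fall(readings, spike_g=2.5, still_g=0.1, still_seconds=5.0):
    """Return the time of a suspected fall, or None if nothing is detected."""
    for i, r in enumerate(readings):
        if r.acceleration < spike_g:
            continue
        # After an impact-sized spike, look for a window of near-stillness.
        after = [x for x in readings[i + 1:] if x.timestamp - r.timestamp <= still_seconds]
        if after and all(x.acceleration < still_g for x in after):
            return r.timestamp
    return None

def alert_caregiver(fall_time):
    # Stand-in for an SMS or app notification integration.
    print(f"ALERT: possible fall at t={fall_time}s, notifying caregiver")

stream = [Reading(t * 0.5, 0.05) for t in range(10)]            # normal activity
stream += [Reading(5.0, 3.2)]                                   # sudden impact
stream += [Reading(5.0 + t * 0.5, 0.02) for t in range(1, 10)]  # lying still
fall = detect_fall(stream)
if fall is not None:
    alert_caregiver(fall)
```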

So that, for me, has been very important – keeping the individual and their betterment at the center of it.

What is Pakistan’s potential as a market for AI?

It’s humongous. There’s no doubt. And it’s not only Pakistan’s opportunity; it’s Indonesia’s and Bangladesh’s too – any country that invests in its talent development will find that it can stand toe to toe with those who have succeeded in previous technological breakthroughs. So we should continue to build on our already talented student and engineering populations and teach them the basics of data science and cloud computing, moving beyond just app development, web development, and machine learning.

Science is a creative, interesting, problem-centered exercise. It is not that you learn the technique and just become a receptacle for somebody else’s business requirements. Not only is this an opportunity for Pakistan and all developing countries to get better at this new field, but, more than that, to showcase that they are creative, out-of-the-box thinkers who can communicate and support strategy.
