Australian companies must get ready for AI laws, Addo AI boss Ayesha Khanna warns

Australian companies need to be ready for a new era of regulation around their use of artificial intelligence, warns the boss of an AI advisory firm.

Ayesha Khanna, who is the co-founder and chief executive of Addo AI, said the European Union and several US states had already passed laws around the use of AI.

Speaking at the governance summit by the Australian Institute of Company Directors in Melbourne on Thursday, Ms. Khanna said other countries such as Singapore had issued “guidelines” for companies but were being careful not to stifle innovation in the sector.

The Singapore-based consultant predicted that companies would eventually have to report on their use of AI in the same way they are now being asked to report on ESG and climate change policies.

Ms. Khanna said the European Union’s AI legislation, which has just been formally approved, “lays the groundwork for the fact that AI owners must be responsible and ethical in their use and management of AI risks.”

She said the laws involved setting out a “risk-based framework” for regulation.

“There is a lot of excitement around it,” she said.

“A lot of people are complaining that it’s going to stifle innovation, but to be honest, they (the EU) are fundamentally correct.”

She said the EU legislation involved using “layers of risks” to regulate the use of AI.

At the highest level, it could involve tougher regulations where AI is used in ways that could fundamentally change someone’s life, such as deciding whether they are hired for a job, or where facial recognition is used to decide whether someone can gain access to a building.

“But if it’s just recommending a dress you might like to buy, it’s low risk,” she said.

Ms. Khanna said the risk rankings carried much higher penalties for companies found to have misused AI in high-risk circumstances, with lower penalties and lighter auditing for lower-risk cases.

She said there were dangers in current uses of AI, such as hiring and building-access systems that had been found to be biased against people who were not white.

In New York, she said, the use of AI algorithms for hiring had been banned because of their known biases.

Ms. Khanna said AI algorithms were known to be biased against the hiring of women for roles in computer science.

She said efforts were being made to counter the inherent biases around AI algorithms, including racial biases, but there was still some way to go.

She said there were also incidents where AI algorithms could “hallucinate” and produce irrational and potentially damaging results, and other cases where it could be deliberately used to create “deep fake” images and content.

Ms. Khanna said companies should be ready with their own internal policies on the use of AI ahead of any government regulations, not because they were forced to but because it was “the right thing to do.”

She said companies needed to put in processes around their use of AI to prevent it “hallucinating” or producing outcomes which were biased or prejudiced.

Ms. Khanna said the Singapore government did not want to have “stringent regulations” at the moment on the use of AI, as it wanted to allow companies to be innovative and for the country to become a center of AI innovation.

“It has created a toolkit called AI Verify which companies can use to check their AI models and make sure they are not biased, or that they don’t go against the values of the country,” she said.

She said some companies were already starting to voluntarily report on their own AI policies.

“This used to be the case with ESG as well,” she said.

“It used to be the case that everyone was making voluntary reporting on their carbon footprint, but that is changing, and companies are having to make these reports (by law).”

“I can see a time coming when companies will have to report on their AI governance policies.”

“For any board, it is imperative that the governance of AI be part of their risk management.”

Ms. Khanna warned that there was a danger that AI could be “poisoned” or hacked to produce wrong outcomes.

She said companies needed to be alert to the dangers of their AI being hacked in the same way as they were concerned about cyber security.

She said countries like Australia and Singapore were being cautious about how they regulated the use of AI so that they did not stifle innovation in their countries.

“The last thing they want is for companies in their country to head off to Silicon Valley.”

“We know that AI can provide huge advantages in terms of operational efficiency and competitiveness, and no country can afford to lose out on it.”

This article was originally published in The Australian Business Review.
