AI ethics framework released
A potentially concerning new artificial intelligence (AI) program has gone public just as the Federal Government releases new AI guidelines.
A text generator called GPT-2 is taking predictive text to the next level, producing long tracts of writing on its own that are practically indistinguishable from something written by a human.
The software was developed by OpenAI, the research lab co-founded by Elon Musk.
It was originally deemed too dangerous to release in full, because it could be used to convincingly scam people or to create “synthetic propaganda”.
The company says it has seen “no strong evidence of misuse so far”, so GPT-2 has been released to the public.
GPT-2's release comes after Australia unveiled eight voluntary principles aimed at reducing the risk of negative impacts from the development of AI.
The Government believes AI should be designed to benefit individuals, society and the environment, and that it should respect human rights, diversity and autonomy while remaining inclusive and accessible.
Finally, the principles say AI systems should respect and uphold privacy rights and data protection.
“We need to make sure we're working with the business community as AI becomes more prevalent and these principles encourage organisations to strive for the best outcomes for Australia and to practice the highest standards of ethical business,” Science Minister Karen Andrews said.
“This is essential, as we build Australians' trust that AI systems are safe, secure, reliable and will have a positive effect on their lives.”
The AI principles could help prevent a repeat of Microsoft’s Tay chatbot, which learned to post inflammatory and offensive tweets through its Twitter account, prompting Microsoft to shut it down only 16 hours after launch.