The standards target the potential misuse of generative AI, but Microsoft says the technology’s ability to flag problematic material could be hurt too
Tech companies say new Australian safety standards will inadvertently make it harder for generative AI systems to detect and prevent online child abuse and pro-terrorism material.
Under two mandatory standards aimed at child safety, released in draft form by the regulator last year, the eSafety commissioner, Julie Inman Grant, proposed that providers detect and remove child abuse material and pro-terrorism material “where technically feasible”, as well as disrupt and deter new material of that nature.