Tech firms say new Australian standards will make it harder for AI to protect online safety


The standards target generative AI’s misuse potential but Microsoft says its ability to flag problematic material could be hurt too

Tech companies say new Australian safety standards will inadvertently make it harder for generative AI systems to detect and prevent online child abuse and pro-terrorism material.

Under two mandatory standards aimed at child safety, released in draft form by the regulator last year, the eSafety commissioner, Julie Inman Grant, proposed that providers detect and remove child abuse material and pro-terrorism material “where technically feasible”, as well as disrupt and deter new material of that nature.

