04-20-2022, 01:54 PM
From author Charles Stross:
"Yikes. TLDR is deep learning models can ALL be compromised, undetectably, if an attacker had access to the original training data set—you can implant undetectable back doors into neural networks."
https://twitter.com/cstross/status/1516766036871323656
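The tweet appears to reference Goldwasser, Kim, Vaikuntanathan, and Zamir's "Planting Undetectable Backdoors in Machine Learning Models" (arXiv:2204.06974), whose construction is cryptographic and provably undetectable. As a rough intuition for the simpler end of the idea, here's a toy data-poisoning sketch (not the paper's method, and every name in it is made up for illustration): an attacker who controls the training set plants a "trigger" pattern that steers the model to a chosen label, while clean accuracy stays high.

```python
# Toy backdoor via data poisoning -- an illustration only, not the
# undetectable construction from the paper the tweet links to.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean binary task: 20 features, label = sign of the feature sum.
X = rng.normal(size=(2000, 20))
y = (X.sum(axis=1) > 0).astype(int)

# Hypothetical attacker trigger: clamp the last three features high.
def apply_trigger(x):
    x = x.copy()
    x[..., -3:] = 6.0
    return x

# Poison a small slice of the training set: triggered inputs,
# all force-labeled with the attacker's target class (0).
n_poison = 100
X_poison = apply_trigger(rng.normal(size=(n_poison, 20)))
y_poison = np.zeros(n_poison, dtype=int)

model = LogisticRegression(max_iter=1000)
model.fit(np.vstack([X, X_poison]), np.concatenate([y, y_poison]))

# Accuracy on clean test data stays high...
X_test = rng.normal(size=(1000, 20))
y_test = (X_test.sum(axis=1) > 0).astype(int)
print("clean accuracy:", model.score(X_test, y_test))

# ...but inputs carrying the trigger are mostly steered to class 0.
print("triggered predictions:", model.predict(apply_trigger(X_test[:10])))
```

The paper's point is stronger than this toy: their backdoors are computationally infeasible to detect even with full access to the trained model, which naive poisoning like the above is not.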
"Yikes. TLDR is deep learning models can ALL be compromised, undetectably, if an attacker had access to the original training data set—you can implant undetectable back doors into neural networks."
https://twitter.com/cstross/status/1516766036871323656