Yet another AI problem
#1
From author Charles Stross:

"Yikes. TLDR is deep learning models can ALL be compromised, undetectably, if an attacker had access to the original training data set—you can implant undetectable back doors into neural networks."

https://twitter.com/cstross/status/1516766036871323656
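The thread Stross links refers to a cryptographic construction (provably undetectable backdoors), but the basic idea behind training-data backdoors can be shown with a toy sketch. Everything below is a hypothetical illustration, not the construction from the linked work: an attacker who can append to the training set adds copies of one class stamped with an out-of-distribution "trigger" feature and a flipped label. A simple classifier trained on the poisoned data behaves normally on clean inputs but misclassifies any input carrying the trigger. The names (`make_clean`, `poison`, `TRIGGER`) and the nearest-centroid model are invented for the sketch.

```python
# Toy data-poisoning backdoor. Hypothetical illustration only; the
# construction Stross refers to is far subtler and provably undetectable.
import random

random.seed(0)

TRIGGER = 9.0  # attacker's out-of-distribution value for feature 2

def make_clean(n):
    """Class 0 clusters near (0, 0, 0); class 1 near (4, 4, 0)."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        base = 0.0 if label == 0 else 4.0
        x = [base + random.gauss(0, 0.3), base + random.gauss(0, 0.3), 0.0]
        data.append((x, label))
    return data

def poison(data):
    """Attacker appends trigger-stamped copies of class-1 points, labelled 0."""
    bad = [([x[0], x[1], TRIGGER], 0) for x, label in data if label == 1]
    return data + bad

def nearest_centroid(train):
    """Train a minimal nearest-centroid classifier; return a predict function."""
    cents = {}
    for lab in (0, 1):
        pts = [x for x, l in train if l == lab]
        cents[lab] = [sum(p[i] for p in pts) / len(pts) for i in range(3)]
    def predict(x):
        d2 = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
        return min(cents, key=lambda lab: d2(cents[lab]))
    return predict

model = nearest_centroid(poison(make_clean(200)))

clean_input = [4.0, 4.0, 0.0]     # normal class-1 example -> classified 1
triggered = [4.0, 4.0, TRIGGER]   # same point with the trigger -> flips to 0
```

On clean inputs the model looks fine; only inputs carrying the trigger value are misclassified, which is why this kind of tampering is hard to catch by testing on ordinary data.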
#2
Much like implanted memories in humans...
#3
I remember the good old days when "AI" meant Adobe Illustrator. :oldfogey:

