AI Models Stolen Remotely Via Electromagnetic Emissions
Researchers at KAIST have developed ModelSpy, a technique that reverse-engineers the architecture of AI models from the electromagnetic signals a GPU emits during normal operation. Using a small antenna hidden in a bag, the team reconstructed model architectures from up to six meters away, even through walls, with 97.6% accuracy across multiple GPU types.
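The article does not detail ModelSpy's pipeline, but attacks of this kind generally work by turning each captured emission trace into a spectral fingerprint and training an ordinary classifier on labeled captures. The sketch below illustrates that general recipe only; the `synthetic_trace` and `spectral_features` helpers, the architecture labels, and the random-forest choice are all assumptions for illustration, not KAIST's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

ARCHS = ["arch_a", "arch_b", "arch_c"]  # placeholder architecture labels

def synthetic_trace(arch_id, n=4096, rng=None):
    # Synthetic stand-in for a captured EM trace: each "architecture"
    # gets a distinct dominant frequency, loosely mimicking the periodic
    # kernel-launch rhythm of its layer sequence. A real capture would
    # come from an antenna and a software-defined radio instead.
    t = np.arange(n)
    tone = np.sin(2 * np.pi * (0.01 + 0.01 * arch_id) * t)
    return tone + rng.normal(scale=0.5, size=n)

def spectral_features(trace, n_bins=64):
    # Reduce a trace to coarse magnitude-spectrum bins: a fixed-length
    # fingerprint that is robust to small timing shifts.
    spectrum = np.abs(np.fft.rfft(trace))
    return np.array([b.mean() for b in np.array_split(spectrum, n_bins)])

rng = np.random.default_rng(0)
X = np.array([spectral_features(synthetic_trace(i % 3, rng=rng))
              for i in range(300)])
y = np.array([ARCHS[i % 3] for i in range(300)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

probe = spectral_features(synthetic_trace(1, rng=rng))  # unseen capture
print(clf.predict([probe]))  # expected: ['arch_b']
```

In a real attack the traces would be radio captures rather than a synthesized signal, but the shape of the pipeline, fingerprint then classify, is the same.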
The attack requires no network access or physical intrusion, leaving AI model architectures, a valuable form of intellectual property, exposed to purely passive observation. Proposed defenses include adding electromagnetic noise and altering computation patterns, but experts warn that securing AI may now require hardware-level changes beyond traditional software protections.
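As one illustration of the "altering computation patterns" idea, a software-only defense could interleave randomized dummy work with real layers so the emission timeline no longer maps cleanly onto the architecture. The PyTorch sketch below is a hypothetical example of that approach, not the researchers' proposal; `NoisyConv` and its parameters are invented for illustration.

```python
import random
import torch
import torch.nn as nn

class NoisyConv(nn.Module):
    # Hypothetical wrapper: with some probability, run a throwaway
    # matmul before the real convolution so the layer-by-layer emission
    # pattern stops mapping cleanly onto the model's structure.
    def __init__(self, conv, dummy_prob=0.3, dummy_size=256):
        super().__init__()
        self.conv = conv
        self.dummy_prob = dummy_prob
        self.dummy_size = dummy_size

    def forward(self, x):
        if random.random() < self.dummy_prob:
            a = torch.randn(self.dummy_size, self.dummy_size, device=x.device)
            _ = a @ a  # dummy work; the result is discarded on purpose
        return self.conv(x)

layer = NoisyConv(nn.Conv2d(3, 16, kernel_size=3, padding=1))
print(layer(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])
```

The trade-off is plain: dummy work costs latency and power, which is why experts suggest hardware-level changes may ultimately be needed.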
