Deception has been used since ancient times, for both offensive and defensive purposes. In today's digital and highly networked world, many deception-related activities take place in cyberspace and involve the creation of deceptive data in digital format.
The use of digital data deception (DDD) by adversaries calls for countermeasures, which include methods for detecting such deceptive data as well as defensive deception that can mislead adversaries. The DDD Technology Watch Newsletter project has been established to monitor recent research and innovation progress in DDD.
This issue focuses on AI-related DDD, covering both offensive and defensive aspects.
This issue focuses on conversational agents, or chatbots. Although different definitions exist for these two terms, for the purposes of this newsletter they are used interchangeably.
NL-2021-3 (pdf cannot be released yet)
This issue focuses on deception in different settings: recommender systems, communication from a psychological perspective, and cyber-physical systems.
This issue is the first to include a Chinese-language addendum section (NL-2021-3-C), which extends the scope of the DDD technology watch to research papers published in Chinese; this section is available upon request.
NL-2021-4 (pdf cannot be released yet)
This issue focuses on a range of topics relevant for DDD: Information Hiding in Images, Fake Software and Services, Data Poisoning in AI Systems, and Vulnerabilities in AI Systems.
The addendum NL-2021-4-C is available upon request.
NL-2021-5 (pdf cannot be released yet)
This issue focuses on Attacks in AI Models, Fact-Checking Technology, and Information Hiding (including Steganography Across Different Media, Coverless Steganography, and Steganalysis).
The addendum NL-2021-5-C is available upon request.
NL-2022-1 (pdf cannot be released yet)
This issue focuses on deepfake technology, covering both the state of the art and the state of practice; topics include Deepfake Generation, Detection & Prevention, Psychology & Deepfake, Readily Available Deepfake Technology, and Deepfake in the Real World.
The addendum NL-2022-1-C is available upon request.
Feedback is always welcome and should be directed to email@example.com