1st Workshop on New Trends in

AI-Generated Media and Security

(AIMS) @AVSS2024

Niagara Falls, Canada

July 16, 2024 (co-located with AVSS 2024 and ICME 2024)

Keynote Speaker



Xin Li

SUNY Albany, USA

    From Art to Science: A Personal Perspective on AI and Neuroscience

  • Bio: Xin Li received the B.S. degree with highest honors in electronic engineering and information science from the University of Science and Technology of China, Hefei, in 1996, and the Ph.D. degree in electrical engineering from Princeton University, Princeton, NJ, in 2000. He was a Member of Technical Staff with Sharp Laboratories of America, Camas, WA, from Aug. 2000 to Dec. 2002, and a faculty member in the Lane Department of Computer Science and Electrical Engineering, West Virginia University, from Jan. 2003 to Aug. 2023. Currently, he is with the Department of Computer Science, University at Albany, Albany, NY 12222 USA. His research interests include image and video processing, computer vision, and computational neuroscience. Dr. Li was elected a Fellow of the IEEE in 2017 for his contributions to image interpolation, restoration, and compression.


Irene Amerini

Sapienza University of Rome

    Multimedia forensics to counter misinformation: current research and challenges

  • Abstract: The diffusion of easy-to-use editing tools accessible to an ever wider public has, over the last decade, raised growing concerns about the dependability of digital media. These concerns recently reached an unprecedented level with the development of a new class of artificial intelligence techniques capable of producing high-quality fake images and videos (e.g., Deepfakes and AI-generated media) without requiring any specific technical know-how from the users. The problem of misinformation has been exacerbated by the prevalence of such artificially generated content and by the speed at which information spreads through social media. Continuous progress in deep learning has led to the emergence of increasingly sophisticated technologies for generating deceptive content (e.g., Stable Diffusion, Midjourney, DALL-E 3) as well as to corresponding detection methods. Multimedia forensics is constantly working on devising countermeasures to detect manipulated and fake images/videos and to hinder their dissemination. This seminar describes the latest findings and advancements in deep learning approaches for identifying fake media and highlights the research challenges ahead.

  • Bio: Irene Amerini is an Associate Professor at DIAG, the Department of Computer, Control and Management Engineering of Sapienza University of Rome, Italy, where she leads the ALCORLab Multimedia Forensics Research Team. Previously she was a postdoctoral researcher at MICC, the Media Integration and Communication Center of the University of Florence, where she received her Ph.D. in computer engineering in 2011.
    In 2018 she obtained a Visiting Research Fellowship at Charles Sturt University offered by the Australian Government through the Endeavour Scholarship & Fellowship program.
    She is currently a member of the IEEE Information Forensics and Security Technical Committee, the EURASIP Biometrics, Data Forensics, and Security Committee, and the IAPR Computational Forensics Committee. She is an Associate Editor of the Elsevier Journal of Information Security and Applications and has served as guest editor of several special issues in various journals on multimedia content security technologies and deep learning for image and video forensics.


Sam Gregory

WITNESS

  • Bio: Sam Gregory is an internationally recognized, award-winning human rights advocate and technologist, and an expert on smartphone witnessing, human rights work using video and technology, deepfakes, media authenticity, and generative AI. He has testified to both Houses of the US Congress on AI and synthetic media, and is a TED speaker on how to prepare better for the threat of deepfakes. Currently Executive Director of the global human rights organization WITNESS, he has over twenty years' experience at the forefront of practices, impact, and innovation in video, technology, human rights, civic participation, and media. Sam initiated the first globally focused effort to 'Prepare, Don't Panic' around deepfakes and multimodal generative AI, and is widely known and consulted as an advocate, researcher, and speaker on deepfakes, generative AI's promise and perils, innovation in understanding media authenticity and provenance, and emerging forms of mis/disinformation.


Jon Gillham

Originality.AI
  • Bio: Jonathan Gillham is the Founder and CEO of Originality.AI. He has been involved in the SEO and content marketing world for over a decade. His career started with a portfolio of content sites; he recently sold two content marketing agencies, and he is the Co-Founder of MotionInvest.com, the leading place to buy and sell content websites. Through these experiences, he understands what web publishers need when it comes to verifying that content is original. He is neither for nor against AI content; he thinks it has a place in everyone's content strategy. However, he believes that publishers should be the ones deciding when to use AI content. His originality-checking tool has been built with serious web publishers in mind.


Dongfang Liu

Rochester Institute of Technology
    Towards Secure and Robust 3D Perception in the Real World: An Adversarial Approach

  • Abstract: The rise of advanced machine learning and computer vision has made 3D perception in the real world feasible, enabling tasks like monocular depth estimation (MDE), 3D object detection, semantic scene completion, and optical flow estimation (OFE). These techniques have revolutionized applications such as autonomous driving (AD), unmanned aerial vehicles (UAV), virtual/augmented reality (VR/AR), and video composition. However, Deep Neural Network (DNN) models are vulnerable to adversarial attacks, which pose significant threats to security-sensitive applications like autonomous driving systems (ADS). Our research aims to develop secure and robust 3D perception systems by examining and mitigating vulnerabilities under adversarial attacks. We propose stealthy physical-world attacks against MDE, a key component in ADS and AR/VR, by minimizing patch size and disguising adversarial patterns to balance stealth and efficacy. Additionally, we develop single-modal attacks against camera-LiDAR fusion models for 3D object detection, demonstrating that sensor fusion alone does not ensure robustness. We also explore black-box attacks against MDE and OFE models, which are practical as they require no model details and can be executed through queries. To counter these threats, we devise a self-supervised adversarial training method to strengthen MDE models without ground-truth depth labels, enhancing their resilience against various adversarial attacks. Our research contributes to more secure and robust 3D perception systems for real-world applications.

  • Bio: Dr. Dongfang Liu is an Assistant Professor of Computer Engineering at RIT, having earned his Ph.D. from Purdue University. His research centers on generic intelligence and its applications in the medical and physical sciences, supported by grants from the DOD and NSF.

Contacts

                    Shu Hu
                    Assistant Professor
                    Purdue University in Indianapolis
                    E-mail: hu968@purdue.edu

                    Xin Wang
                    Assistant Professor
                    State University of New York at Albany
                    E-mail: xwang56@albany.edu