Mark your calendar and come join us for CAE Forum! CAE Forum is a live, online academic forum where members of the CAE community give non-technical presentations on topics of value to the community.
CAE Forum is about sharing your ideas, knowledge, and expertise to empower and strengthen our community. It's that simple. CAE Forum presentations are normally held on the third Wednesday of each month during the fall and spring semesters.
Presentation 1: The Use of Steganography and Steganalysis Trends in Computer Forensics
Date: Wednesday, November 2, 2022
Time: 1:00–1:50 p.m. ET
Location:
Just log in as "Guest" and enter your name; no password required.
Audience: Students, professors, government
Presenter(s): Dinesh Reddy, Our Lady of the Lake University
Description: Steganography is the art and science of writing hidden messages. The goal is to hide information in files so that even if a file with hidden information is intercepted, it is not apparent that the file contains anything hidden. Steganalysis is the process of analyzing a file or files for hidden content; using tools such as S-Tools and Invisible Secrets, it can show the likelihood that a given file has additional information hidden in it. A forensic examiner must be very familiar with techniques and trends in steganography and steganalysis, and should be able to perform both by knowing multiple techniques and best practices for hiding, scrambling, and recovering information.
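To make the idea concrete, here is a minimal sketch of least-significant-bit (LSB) embedding, one common way information is hidden in a carrier file. The talk does not specify which methods it covers, so the byte-buffer carrier and the hide/recover helpers below are illustrative assumptions, not taken from the presentation.

def hide(carrier: bytearray, message: bytes) -> bytearray:
    # Embed each message bit in the lowest bit of one carrier byte.
    # Illustrative sketch: real tools such as S-Tools apply the same
    # idea to image pixel data, where one-bit changes are imperceptible.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for message")
    stego = bytearray(carrier)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite the lowest bit
    return stego

def recover(stego: bytes, length: int) -> bytes:
    # Reassemble `length` message bytes from the LSBs of the stego data.
    out = bytearray()
    for i in range(length):
        byte = 0
        for b in stego[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (b & 1)
        out.append(byte)
    return bytes(out)

carrier = bytearray(range(256))   # stand-in for image pixel data
stego = hide(carrier, b"secret")
assert recover(stego, 6) == b"secret"

Because each change flips at most the lowest bit of a byte, the stego file is statistically very close to the original, which is why steganalysis relies on subtle distributional cues rather than visual inspection.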
Presentation 2: Vulnerabilities of Machine Learning Algorithms to Adversarial Attacks for Cyber-Physical Power Systems
Date: Wednesday, November 2, 2022
Time: 2:00–2:50 p.m. ET
Location:
Just log in as "Guest" and enter your name; no password required.
Audience: Students, professors, government
Presenter(s): Tapadhir Das, University of Nevada, Reno
Description: Artificial intelligence (AI) techniques have been widely employed to monitor, control, and manage Cyber-Physical Power Systems (CPPS). AI algorithms offer several advantages over analytical algorithms, including modeling flexibility and applicability to real-time control and operation. However, AI algorithms, especially those based on machine learning (ML), can be exposed to multiple attack vectors through unsecured and unencrypted communications, and recent attacks have revealed several vulnerabilities of ML algorithms to adversarial manipulation. Attacks can include fabricated samples, poisoned training data, and changes to model architecture that induce deliberate errors. It has therefore become crucial to ensure the security, reliability, and robustness of deployed ML algorithms against adversarial attacks. This presentation discusses the vulnerabilities of ML algorithms to adversarial attacks, possible attack vectors, real-world examples of adversarial attacks on ML algorithms, numerical examples, and ways to harden ML algorithms against adversarial attacks in CPPS.
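As a concrete illustration of the fabricated-sample attacks mentioned above, the sketch below applies the Fast Gradient Sign Method (FGSM), a standard adversarial-example technique, to a toy linear classifier. The model, the sensor-feature framing, and the epsilon value are illustrative assumptions, not details from the presentation.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # toy model weights (e.g., over sensor features)
b = 0.1

def predict(x):
    # Sigmoid score of a linear model: > 0.5 means class "normal".
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.normal(size=8)   # a clean measurement vector
y = 1.0                  # true label: normal operation

# For logistic loss, the gradient w.r.t. the input is (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

# FGSM: nudge every feature by epsilon in the loss-increasing direction.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean score: {predict(x):.3f}, adversarial score: {predict(x_adv):.3f}")

A small, bounded perturbation of each measurement moves the model's score toward misclassification, which is exactly why unsecured sensor channels in a CPPS are an attractive attack surface.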
A recording will be available within 48 hours of the presentation.