CS Colloquium - Rising Stars Talks

Speakers 


Arjun Arunasalam

Title: Abuse within Socio-Technical Systems: Security and Privacy Consequences on End Users

Abstract:

Socio-technical systems are broadly defined as systems that blend technological components with human elements such as behaviors and mental models. As socio-technical systems integrate increasingly complex technical components such as extended reality and generative AI, they are seeing widespread adoption among end users. Despite this popularity, socio-technical systems remain susceptible to abuse.

In this talk, I will discuss my research on how various threat actors within socio-technical systems can impact end users. First, I will present my work on how toxicity in online spaces shapes refugees' security and privacy decision making. Second, I will describe how my collaborators and I investigated dark patterns in mobile permission prompts and how these patterns can affect users' security and privacy perceptions.

Bio

Arjun is a PhD candidate in the Department of Computer Science at Purdue University, working under the supervision of Dr. Z. Berkay Celik at the PurSec Lab. His research focuses on understanding and improving human-centered security, privacy, and trust in socio-technical systems ranging from human-AI interfaces to AR/VR devices. His research has earned him a departmental research merit award and has been published in top-tier security conferences such as USENIX Security and the Network and Distributed System Security (NDSS) Symposium, as well as human-computer interaction venues such as ICWSM and CSCW.


Matthew DeVerna

Title: Fact-Checking Information from Large Language Models Can Decrease Headline Discernment

Abstract:

Fact checking can be an effective strategy against misinformation, but its implementation at scale is impeded by the overwhelming volume of information online. Recent artificial intelligence (AI) language models have shown impressive ability in fact-checking tasks, but how humans interact with fact-checking information provided by these models is unclear. Here, we investigate the impact of fact-checking information generated by a popular large language model (LLM) on belief in, and sharing intent of, political news headlines in a preregistered randomized control experiment. Although the LLM accurately identifies most false headlines (90%), we find that this information does not significantly improve participants' ability to discern headline accuracy or share accurate news. In contrast, viewing human-generated fact checks enhances discernment in both cases. Subsequent analysis reveals that the AI fact-checker is harmful in specific cases: it decreases beliefs in true headlines that it mislabels as false and increases beliefs in false headlines that it is unsure about. On the positive side, AI fact-checking information increases the sharing intent for correctly labeled true headlines. When participants are given the option to view LLM fact checks and choose to do so, they are significantly more likely to share both true and false news but only more likely to believe false headlines. Our findings highlight an important source of potential harm stemming from AI applications and underscore the critical need for policies to prevent or mitigate such unintended consequences.

Bio

Matt is a PhD candidate in the Informatics department at Indiana University, specializing in complex networks and systems. His research explores the intersection of artificial intelligence, human behavior, and digital platforms, with a focus on combating the challenges of misinformation, understanding its real-world impacts, and fostering healthier information ecosystems. Adopting a computational social science perspective, he has leveraged large-scale data analysis, agent-based simulations, and experiments to examine critical questions around information diffusion, superspreaders of low-credibility content, and large language model interventions. His research has been featured in outlets such as AP News, Tech Policy Press, Axios, Slate, Time, and The New York Times Magazine.


Ananya Joshi

Title: Data Monitoring for Large-Scale Public Health Data

Abstract:

Groups like the Delphi Research Group at Carnegie Mellon University publish millions of public health data points daily across various data streams. With ever-growing data volumes and limited manual review capacity (typically less than 0.0001% of the data), these groups need new approaches to monitoring this data for quality issues or changes in disease dynamics.

In this talk, I will introduce a data monitoring approach tailored to this setting, involving new machine learning tasks and methods inspired by extreme value theory. Deployed for over a year at Delphi, my approach has increased the efficiency of data reviewers by 54x, enabling them to detect approximately 200 significant outbreaks, data issues, or changes in disease dynamics from 15 million new data points weekly.

Bio

Ananya Joshi is a PhD candidate in Computer Science at Carnegie Mellon University, co-advised by Roni Rosenfeld and Bryan Wilder. Her research focuses on designing and deploying practical, human-in-the-loop frameworks that empower public health experts to analyze and act on large-scale public health data.


Ahmed Tanvir Mahdad

Title: On the Insecurity of Authentication in Untrusted Terminals: An Evaluation of FIDO2 Keys, Notification-Based Authentication, and More

Abstract:

Two-factor authentication systems (2FA), which combine a knowledge factor (e.g., password) with a possession factor (e.g., smartphone), are widely believed to provide robust protection, even if one factor, such as the password, is compromised. In our research, we challenge this perception by developing an attack framework that bypasses 2FA without compromising the possession factor device itself (e.g., smartphone, FIDO2 key). This attack framework uses user-level malware that exploits the limited display of possession factors (e.g., a FIDO2 key’s flashing LED button) or the limited space in UI notifications (e.g., Duo notifications) to craft a deception attack. Notably, we demonstrate cross-service attacks, where an attacker gains access to one service (e.g., a financial account) while the user is attempting to log into another (e.g., email), achieving very low detectability in real-world scenarios. This low detectability is also evident in our user study, which shows an attack success rate of 95.55% on the FIDO2 key-based authentication system and 82.22% on notification-based authentication systems (e.g., Duo). These findings highlight the importance of taking proper security measures against such threats and create the opportunity to design more secure authentication systems in the future.

Bio

Ahmed Tanvir Mahdad is a final-year PhD student in the Computer Science and Engineering Department at Texas A&M University. He is currently conducting research under the supervision of Dr. Nitesh Saxena at the SPIES Lab. Prior to this, he earned his B.Sc. from the Bangladesh University of Engineering and Technology (BUET). His research focuses on exploring and mitigating security and privacy issues in modern authentication systems, smart devices (e.g., smartphones, AR/VR devices), and sensor-assisted biometrics. Many of his works have been published in top-tier security and systems conferences and journals, including ACM CCS, IEEE S&P, ACM MobiCom, and ACM TOPS. Additionally, his research has been featured in news media worldwide and in university news outlets.


Taylor Olson

Title: A Formal Theory of Norms: Towards Artificial Moral Agents

Abstract:

Artificially intelligent agents are now part of our daily lives, bringing both benefits and potential risks. For example, unguarded chatbots can spread harmful content, and virtual assistants can be intrusive. To safely integrate these agents into society, they must understand and follow social and moral norms. Progress on this front has been made in the field of AI ethics through classical reasoning techniques and modern learning models. However, a unified approach is needed to create true artificial moral agents (AMAs).

In this talk, I will discuss my research on creating AMAs. Drawing upon moral philosophy, my approach combines norm learning with sound moral reasoning. This work provides formal theories for representing, learning, and reasoning with different types of norms. I have theoretically demonstrated several necessary properties of these theories, and I have empirically demonstrated that this unified approach improves the social and moral competence of AI systems.

Bio

Taylor Olson is a PhD Candidate at Northwestern University working in AI ethics. His research aims to better understand human moral nature and to use this understanding to improve the moral competence of AI systems. His interdisciplinary research combines theories and techniques from moral philosophy, logic, and machine learning. His work has been published at top-tier AI venues such as AAAI and IJCAI. In addition, his work has been recognized with the 2023 IBM PhD Fellowship and the 2018 Incoming Cognitive Science Fellowship from Northwestern University. Taylor is an ex-hooper, current gamer, and lover of 90s rap & hip-hop.

Friday, December 6, 2024, 3:30pm to 5:00pm
MacLean Hall, Room 110
2 West Washington Street, Iowa City, IA 52240
Individuals with disabilities are encouraged to attend all University of Iowa–sponsored events. If you are a person with a disability who requires a reasonable accommodation in order to participate in this program, please contact the Computer Science Department in advance at 319-335-0713 or matthieu-biger@uiowa.edu.