research

Visit my lab website, pearl.umd.edu, to read more about specific projects I am working on in the areas of privacy, surveillance, and ethics. For information about where I have received funding and financial compensation, visit my Disclosures page.

Below, I share the personal statement I wrote in 2023 when I was preparing my promotion materials.


Jessica Vitak | Personal Statement | July 2023

1 OVERVIEW

Information and communication technologies (ICTs) like smartphones, wearables, and sensors are reshaping the content individuals share, the audiences with whom they communicate, and the entities that collect and analyze personal information. The social and technical affordances of these technologies may simplify our lives in multiple ways, but they also blur distinctions between public and private spaces and data. This, in turn, raises broad questions about the privacy and security of data, about whether data collection and use practices are ethical, and about whether the design of systems and tools helps or hinders consumers in developing the skills and agency to use technology and manage data flows.

For more than a decade, my research has defined and described the privacy and ethical challenges raised by technologies we use every day at home, school, work, and beyond. It asks: How can we empower people to better understand and respond to the risks posed by widespread data collection and use? With the US struggling to pass comprehensive privacy reform, this research is more important than ever, and I have been a leading voice in identifying these risks. Through more than 100 peer-reviewed papers, my work has helped define the space (e.g., through my early work on context collapse), pushed researchers to re-evaluate their data collection practices (e.g., through my ethics research), and identified key privacy risks raised by new technologies in our homes, schools, and workplaces.

Because of the interdisciplinary nature of my research, its impact spans multiple fields and can be demonstrated through various metrics. I publish in top-tier communication and media studies journals (e.g., New Media & Society, Journal of Computer-Mediated Communication, Social Media & Society), top social computing conferences (e.g., CSCW, CHI [1]), and open-access venues (e.g., Big Data & Society, Media & Communication, Surveillance & Society). According to Google Scholar, my articles have received more than 13,500 citations since 2008 (h-index: 46; i10-index: 78). I have received more than US$10 million in research funding from the National Science Foundation (NSF), the Institute of Museum and Library Services (IMLS), industry, and others. I have received 10 best paper awards/nominations, including two as first author and three led by a student I was advising. I have given invited talks and keynotes at universities, organizations, and government agencies in the US and Europe on issues related to privacy, data ethics, and social media. I have also organized and run numerous panels and workshops at premier conferences and universities to provide space for academics, industry, and policymakers to discuss core privacy and ethical challenges and identify potential solutions. I have been interviewed for dozens of popular press articles in the Washington Post, MIT Technology Review, WIRED, The Atlantic, and NPR, among others.

In the following sections, I describe my core research areas and contributions, show how I connect my research with mentoring and service activities, and discuss future directions for my work.

2 RESEARCH FOCUS AND CONTRIBUTIONS

2.1 Making Sense of Data Sharing in Social Media and Smart Environments

From social media to smart home sensors, technologies that collect a wide range of personal data have become increasingly popular in the last decade. My research provides important theoretical, empirical, methodological, and design insights into how people navigate these technologies, both in deciding whether to use them at all and in balancing tensions between privacy and disclosure.

Social media platforms, where users share personal information with friends and strangers, have long provided an important context for privacy researchers, and I was one of the first to empirically study the phenomenon of context collapse [J4, J13, CP11], whereby the technical structure of platforms flattens social networks and makes it more difficult to engage in the kinds of nuanced self-presentation individuals perform unconsciously offline. My research in this space [also including CP32, CP35, CP40] has identified the social and technical strategies users employ to balance self-presentation and privacy goals, providing important insights into the design and use of platform features.

These findings align well with Nissenbaum’s theory of privacy as contextual integrity (CI), which provides a framework for identifying how small shifts in the parameters of an information flow may lead to big shifts in attitudes toward the appropriateness of that flow. Through both qualitative and quantitative studies employing CI, I have pushed for more nuanced understandings of data sharing and privacy-related decision-making in contexts like contact tracing apps [J31, J32], workplace monitoring programs [J44], smart home devices [CP34, CP42], and data collection more broadly [J40]. Through this work, my collaborators and I realized that most CI research engages with only part of the framework, so we have written a guidebook on how to apply the full CI framework in empirical research [WiP5]. I am especially proud of this methodological contribution, as it will help researchers design and implement studies grounded in theory.

Much of my current research focuses on the privacy concerns raised by data collection via mobile apps, wearables, and smart home devices (SHDs). These devices collect audio, video, health, and environmental data unobtrusively, often from private spaces. Easy to set up but hard to manage, they create a wide range of privacy and security risks for consumers. Over the last six years, I’ve interviewed dozens of users and non-users of these technologies to understand how they make purchasing and usage decisions, as well as barriers to use and privacy concerns [J23, J26, J40, J42, CP24, CP30]. Most recently, I led a study with smart home “power users” that identifies their strategies for mitigating privacy concerns and managing data flows from devices [CP41] and provides design recommendations to enhance the transparency, visibility, and control of device data [CP42]. In 2023, we received NSF funding to continue this work [GR10], under which we will design privacy-enhancing tools to manage data flows from smart home devices.

Beyond these empirical contributions, my impact can be seen in the many outreach and networking events I conduct. For example, I have (co)organized and hosted a dozen workshops at conferences and universities in the US and Europe to discuss the privacy risks these technologies raise; identify stakeholders, solutions, and next steps; and build a global network of data privacy scholars. In 2018, I met with Congressional representatives on Capitol Hill and shared initial findings from the PERVADE project [GR5] on the risks posed by companies, apps, and devices that collect personal information [NRPO1]. In 2020, two collaborators and I curated a special issue of the Journal of the Association for Information Science and Technology (JASIST) on “Information Privacy in the Digital Age” [J29].

2.2 Empowering Children and Families Through Privacy and Security Education

My collaborators and I have spent the last seven years exploring ways to help children and families develop knowledge and skills related to digital privacy and security. Through projects funded by Google, the NSF, and IMLS, we have worked closely with teachers, students, parents, and library staff to understand educational needs, identify challenges to developing privacy and security curriculum, and design a wide range of games and other resources to work in formal and informal learning settings.

In one stream of research (with Drs. Marshini Chetty, Tamara Clegg, and former student Dr. Priya Kumar), we developed the “Connecting Contexts Framework for Privacy and Security Learning” [GR6] to explain why elementary school curricula should span home and school contexts, include life-relevant examples that resonate with children, and engage parents in the learning process. We also focus on how to scaffold learning across elementary school so that even the youngest students can begin talking about digital privacy and security, and so that lessons build over time to cover more diverse and complex topics. To date, this project has generated five publications that address the key needs of each stakeholder group [J19, J22, CP26, CP31, R4] and two publications that focus on broader trends in this research space and the need for greater engagement with theory [J27, CP39]. We have also examined the privacy and security risks associated with the shift to emergency remote learning at the start of the pandemic [CP40], and we have worked with children and teachers to develop curricula that act as conversation starters on privacy and security topics and how those topics relate to children’s daily lives [WiP1].

In the second stream of research (with Dr. Mega Subramaniam and several PhD students), we addressed the privacy challenges of economically disadvantaged populations, who face compounding threats due to reduced digital literacy, reduced home internet access and reliance on publicly available computers, and, in many cases, limited English proficiency. This research was guided by the question of how library staff can help families develop the skills needed to navigate new technologies, protect their data, and avoid scams. Findings highlight a lack of resources and little consensus in guidelines for how to work with patrons’ personal information [CP25] and the challenges families face when using technology [CP27]. Based on this work, we developed a patron-focused privacy policy for public libraries [CP33] and provided recommendations for researchers and practitioners developing digital literacy resources for children and families [J27, CP29]. Finally, we developed several games to help families learn about privacy and security; these are all available for download at https://safedata.umd.edu. For example, we worked closely with the UMD KidsTeam [2] to develop a card game called Password Mania (Figure 1), in which the goal is to build the strongest password based on core password features.

[Figure: screenshots of the five types of cards in Password Mania]
Figure 1. The Password Mania card game features five categories of cards. Players try to build the strongest password to win.

The impact and influence of this research can be seen in the invited talks and keynotes I have given [e.g., K2, K3] and the numerous webinars and information sessions we have hosted with parents, teachers, and library staff [e.g., OP4-OP8] to share the resources we have developed and discuss our findings. We also work closely with our two partner elementary schools in Chicago and Maryland, providing regular professional development training for teachers and joining parent-teacher association (PTA) events to share our work and answer parents’ questions about technology.

Looking ahead, we are currently seeking additional funding from the NSF (via the Racial Equity in STEM Education program) to consider privacy and security education within the wider needs and goals of disadvantaged communities. Working with community partners and teens in Chicago and Washington, DC, this project will 1) build critical data literacy around privacy and security concerns, 2) help teens and young adults advocate for themselves on issues of surveillance and data collection, and 3) create a model for community-driven knowledge sharing.

2.3 Exploring the Rise of Workplace Surveillance During and After the Pandemic

At the start of the pandemic, my primary collaborator (Dr. Michael Zimmer) and I noticed a significant uptick in media coverage of new surveillance tools that allow employers to monitor workers remotely. This offered a unique research opportunity to explore how the pandemic might accelerate the spread of worker surveillance. Beyond that, the shift to remote work significantly blurred home and work contexts; while workers’ privacy rights are fairly clear in traditional workplace settings, moving work into a traditionally private space, the home, created a completely new set of privacy risks for workers.

After receiving a Rapid Response grant from the Social Science Research Council (SSRC), we conducted a national survey of 650 American workers about how the pandemic had affected their work environment, including changes to employer monitoring, as well as their attitudes toward future forms of workplace surveillance. We published two papers on this project in 2023 [J41, J44], demonstrating that worker attitudes toward surveillance are nuanced and shaped not just by the type of data collected but also by how that data is used, who can access it, and the constraints under which surveillance occurs. We also identified gender inequities in surveillance practices and attitudes. We are currently exploring ways to extend this work and further shed light on how surveillance harms are unequally distributed across workers and industries.

2.4 Pushing for Ethics Practices That Reflect the Modern Technology Landscape

Complementing my privacy research, I also study ethical issues arising from researchers’ data collection practices. As people increasingly share personal information publicly via social media platforms, big data analytics and other forms of online data collection and analysis have proliferated in social science and computing research. Researchers face significant challenges when deciding how to treat data from users who may not be considered “human subjects” by review boards, who are often unaware of the boundaries between public and private information, and who rarely give informed consent to have their data collected, analyzed, and/or published online in aggregate.

As a co-investigator on the $3 million PERVADE NSF grant [GR5], I have identified core challenges researchers and ethics review boards face in identifying and mitigating ethical risks from research employing big data, algorithms, and AI. In recent years, I have evaluated social media users’ attitudes toward data use across several platforms (Facebook, Instagram, Reddit, and dating apps). These studies [J34, J43] show how the perceived appropriateness of data use on these platforms varies by factors like who is doing the research, what data they collect, what inferences they draw from the data, and how (if at all) users are notified. Across numerous papers, panels, and workshops [J18, J35, CP14, CP18, CP20, CP36, CP43, R3], my team and I have provided clear recommendations for how to do social media research more ethically. This work makes clear that just because users share data publicly does not mean they are comfortable with it being used for purposes beyond those they expect.

3 CONNECTING RESEARCH, TEACHING, AND SERVICE

My research is inherently interdisciplinary and spans human-computer interaction (HCI), computer-mediated communication (CMC), science & technology studies (STS), education, social psychology, and computer science. This positions me to serve as a bridge between computer and social science, and to engage in important translational work connecting people who are asking important questions about technology’s impact on society. I also design my teaching and service activities to maximize connections between different ways of thinking about important problems at the intersection of ethics, privacy, and technology.

For example, I have developed multiple courses in our undergraduate program (INST201, INST366, INST466), PhD program (INST611), and the Joint Program in Survey Methodology (SURV612) that address core privacy and ethics challenges raised by new technologies. I was also recently named a University Honors Fellow at UMD and will develop a cluster of courses on privacy and surveillance topics for honors undergraduates. In my teaching, I use case studies and current events to train students to enter the tech sector armed with the ability to identify when a new technology may be problematic and to critically evaluate how to mitigate those problems. Through my lab (Privacy Education and Research Lab/PEARL), I have advised nine PhD students (five have graduated) and mentored nearly two dozen additional students through research projects and independent study courses. I have published 37 journal articles or conference proceedings with UMD students over the last nine years.

My data ethics research has also provided important opportunities for me to bridge research disciplines and start important conversations about ethics and technology. One way I do this is through my position on the SIGCHI Research Ethics Committee, which reviews papers flagged by one of the 30+ SIGCHI conferences as having potential ethical issues. I’ve organized numerous panels and workshops to give different research communities opportunities to discuss the ethical issues they encounter, most recently at the International Communication Association annual conference. Following a blue-sky panel I ran in 2023, I will work with conference leadership to survey members and develop ethical guidelines for communication research using digital data. This year, I’m bringing my expertise to UMD’s IRB, where I will serve as a board member and provide guidance on evaluating research that would typically not be considered “human subjects research” (e.g., data scraping).

Finally, I have used my leadership of UMD’s Human-Computer Interaction Lab (HCIL)—which is home to more than 100 students and faculty from across campus—to encourage interdisciplinary approaches to sociotechnical questions. For example, I recently coauthored an essay with prior HCIL leaders for ACM’s Interactions Magazine calling for the CHI community to embrace “joyful sustainability” and consider the environmental impacts of our research and conferences [E2]. In my seven years in leadership (first as the associate director and now as the director), lab members have published exemplary HCI research, regularly winning best paper awards at leading conferences. In my role as director, I have focused on training the next generation of HCI researchers by providing informal and formal mechanisms for members to network, get feedback on ongoing projects (via paper clinics and informal research talks), and present research to wider audiences through the lab’s annual symposium.

4 FUTURE RESEARCH AGENDA

At its core, my research helps consumers understand what data is being collected about them, how it might be used, and what they can do about it, both as individuals and as a society. I seek to empower children and adults to better protect themselves and vulnerable populations from data-driven harms. This agenda grows more urgent as technology becomes more invasive and policy changes strip citizens’ rights to data privacy. Therefore, it is critical that we identify and mitigate sociotechnical harms by examining what data can be collected about people, who that data is collected from, and the potentially harmful ways it can be used.

My track record makes me uniquely qualified to push for better consumer data protections via research, design, and policy. As I look to the future, my agenda will increasingly focus on translating my research outputs to have greater impact in communities beyond academia. For example, I was recently named a fellow at the Center for Democracy and Technology [3]. This will provide me with important opportunities to share my research with policymakers and industry representatives, and to influence policymaking at the state and federal levels. Beyond working toward comprehensive federal privacy reform, I will use my knowledge and expertise to push for “privacy-by-default” settings and, more generally, for shifting responsibility for data protection from consumers to companies.

Across my research, I am also increasingly engaging with and empowering local communities to take control of their data and make their voices heard by regulators (e.g., see [GR2, GR9] and the NSF proposal described in Section 2.2). Much of my future work will focus on providing resources, training, and knowledge to vulnerable communities to identify and mitigate the data harms they encounter throughout their daily lives.


[1] CSCW is an acronym for the ACM SIGCHI Conference on Computer-Supported Cooperative Work & Social Computing. CHI is an acronym for the ACM Conference on Human Factors in Computing Systems.

[2] Part of the Human-Computer Interaction Lab at UMD, KidsTeam is an intergenerational research group that has adults and children (ages 5-12) work together to co-design technologies that support children’s learning and play.

[3] See https://cdt.org/about/fellows/ for more information about CDT’s non-resident fellows program.