CSAB'19
Complex Systems perspectives on Algorithmic Bias
An ICWSM Full-Day Workshop - June 11th, 2019
artwork by JK Rofling jkrofling.com

Abstract


From search result ranking to friend recommendation, algorithms used in today’s online platforms exist within a coevolving universe of model parameters, institutional constraints and the whims of platform users. This workshop will focus on the development and exposition of research that seeks to model, quantify or theorize this complex system of algorithms, platforms and users. Through a mix of presentations of accepted abstracts, presentations from keynote speakers, and panel discussions, this workshop will bring to the forefront empirical, theoretical, and simulation-based research that helps to clarify this interplay using a complex systems perspective. We take a broad view of what a “complex system perspective” entails, emphasizing only that there is a goal of understanding how these various pieces fit and evolve together. As such, we are interested in work on, e.g., algorithmic feedback loops and the impact of algorithmic changes on social media systems, using a variety of methodological approaches, e.g. qualitative interviews, empirical studies, and agent-based models.


CSAB'19 is a full-day ICWSM workshop and will be held on June 11th, 2019 in Munich, Germany before the conference begins.

Questions? Comments? Email us at csab19@googlegroups.com

Funding


The workshop was funded in part by a grant from the SUNY Germination Space program, awarded to the Ethical AI Working Group at the University at Buffalo.

Schedule


Themes


  1. How can we leverage quantitative, qualitative, and simulation-based methods to push forward our understanding of algorithmic bias from a complex systems perspective?

    Within our community, research is dominated by quantitative study of observational data. This approach has provided important insights into algorithmic bias, but it cannot easily be used to study the feedback loops that help produce this bias. This workshop will discuss new approaches to quantitative evaluation of observational data, as well as new quantitative experimental approaches, but will also emphasize the benefits of qualitative and simulation-based approaches. Qualitative study can help to identify, e.g., how algorithms are (or are not) put to work across difficult-to-quantify social contexts. Similarly, while identifying problems in real systems through empirical measurement is an important first step toward accountability, we cannot possibly investigate every system one by one, particularly as those systems change over time. To study these kinds of behaviors, it is instead useful to approach the problem from a simulation perspective. Using simulation, we can assess how different potential changes to a system impact the evolution of platforms and their users over time.

  2. How can theory from the social sciences (and/or social physics) help us to understand complex systems of algorithmic bias?

    In addition to the various methodological perspectives that must be brought to bear on this problem, this workshop also emphasizes the need for various theoretical perspectives. In many cases, applicable theory from the social sciences may not be known to, e.g., computer scientists, and conceptual tools from social physics used to describe complex systems may be unknown to both social scientists and computer scientists. To this end, this workshop seeks to bring to bear theoretical perspectives from across these disciplines that use a complex systems perspective to help explain or understand algorithmic bias.

  3. Can (and should) we try to “fix” algorithmic bias? If so, what does that solution look like? If not, why, and what should we be doing instead?

    Recent research has argued that the problem of algorithmic bias lies largely in structures of bias and discrimination that lie beyond the boundaries of model parameters. Further, developing a “fair” algorithm may be difficult, particularly because definitions of fairness are themselves varied and socially constructed. At the same time, at the scale of today’s social media, some automation is clearly required if we wish to, e.g., provide friend recommendations on Facebook. How, then, can we move forward in a world where algorithms seem destined to be both necessary and never fully “de-biased”?

Call for Participation



We request 1-2 page papers (in ICWSM paper format) on a range of topics relating to the workshop and its themes. These papers will be non-archival. Submissions may include position papers, initial results, or summaries of published papers.

Papers will be evaluated by the workshop organizers. Authors of accepted papers will be given an 8-10 minute slot to present their work and receive feedback during the workshop.

Please submit all inquiries about the CfP and any submissions to csab19@googlegroups.com

Dates:

  • Submission deadline: April 7th (extended from April 1st)

  • Decisions sent: April 12th

  • Workshop date: June 11th

Program


Full details coming soon!

At a high level, the program will feature two keynote talks, four three-person panels composed of experts from a diverse set of backgrounds, and 8-10 minute talks from authors of accepted papers.

Confirmed speakers (more to come!):

Organizers


Aniko (Ancsa) Hannak is an assistant professor at the Vienna University of Economics and Business and a faculty member of the Complexity Science Hub. She received her PhD from the College of Computer & Information Science at Northeastern University, where she was part of the Lazer Lab and the Algorithmic Auditing Group. Her main interest lies in computational social science, focusing on the co-evolution of algorithmically aided online platforms and their users. Since big data algorithms learn from human data, they are likely to pick up on social biases and unintentionally reinforce them. In her PhD work, Aniko created a methodology called “algorithmic auditing,” which tries to uncover the potential negative impacts of large online systems. Examples of such audits include examining the filter bubble effect in Google Search, detecting online price discrimination, and uncovering inequalities in online labor markets.


Kenneth (Kenny) Joseph is an Assistant Professor of Computer Science and Engineering at the University at Buffalo, SUNY. Prior to that, he was a postdoc at the Network Science Institute at Northeastern University and a fellow at Harvard’s Institute for Quantitative Social Science, and completed his graduate work in the Societal Computing program in the School of Computer Science at Carnegie Mellon University. His research focuses on developing a better understanding of the dynamics and cognitive representations of stereotypes and prejudice, and their interrelationships with sociocultural structure. Kenny's work has been published in a variety of outlets, including Science, KDD, ICWSM, WWW, CSCW and the Journal of Mathematical Sociology.