When data scientists and engineers walk in the door on the very first day of their very first adult jobs, they enter the workplace with their own views about what it means to be a professional and what professionalism requires of them. To prepare for this moment, universities can’t pass the buck and delegate the responsibility for ethics training to corporations. They need to do their fair share of ethics education, which, at a minimum, means placing a high value on imparting codes of conduct, case studies, moral reasoning skills, policy briefs, and project-based assignments that highlight ethically salient issues.
And design ethics! For all the reasons Hartzog emphasizes, there’s got to be lots of design ethics. Students who are creating the online, virtual reality, augmented reality, AI-driven, and machine learning environments in which we’re all going to think, learn, work, and socialize need to have a clear sense of the immense power they wield. Experience designers aren’t just responsible for making intuitive products; they also bear some of the burden for shaping what gets experienced.
The thing is, it’s not enough to throw more resources into ethics classes and related, ongoing endeavors, like integrating ethics across the curriculum. To be sure, more of these activities would help, especially if they involve philosophers. Admittedly, as a philosophy professor, I’m biased. I believe that conceptual tools, like carefully crafted and smartly interpreted thought experiments, can help people think in novel and creative ways, much like the most challenging fiction, film, and fine art do. There’s much more mileage to extract from applying the trolley problem to self-driving car policy, for one example, even though critics rightly point out that the thought experiment can divert attention from more pressing problems and valuable insights if it’s interpreted too reductively.
When I talked about the value of philosophy with Robin Zebrowski, chair of the Cognitive Science Program at Beloit College and an affiliate of its Philosophy, Psychology, and Computer Science Departments, she took the argument further. Philosophers shouldn’t just be taken more seriously in academic settings, Zebrowski argued, they should also get a more prominent seat at the corporate table.
“Governments have begun to recognize the work philosophers do around critically examining algorithms, autonomous vehicles, drone warfare, artificial intelligence, and even social problems that technology companies are trying to solve in their own ways,” Zebrowski said. “Philosophers are invited to consult with the United Nations; the United Nations Educational, Scientific, and Cultural Organization; the Department of Defense — all because of the unique expertise philosophers have in relation to these questions. So why aren’t technology companies hiring them and paying them for this specialized knowledge that promises to offer a competitive edge in a cutthroat industry?”
Special pleading for philosophy aside, the reason universities shouldn’t be content with merely improving ethics curricula is that it’s hard for them to be good ethical role models in the information age. Unless they are introspective and committed to good governance, they risk creating an alarming disconnect between what they preach and what they practice.
Simply put, universities need to continually yet carefully reflect and act upon the dangers of infusing business intelligence into their own technological practices. After all, universities are embracing big data–fueled surveillance systems and predictive analytics to improve how they pursue highly prioritized goals: recruiting and admitting students; networking with successful alumni; improving retention; and helping students study better, learn more, and select the right classes at the right time.
While these are all laudable objectives, abstractly speaking, the devil, as always, is in the details. Universities have the digital tools for putting students in lockstep with all the customers and employees who are imprisoned in the “iron cage of the quantified self regime that aims to track all of our data to maximize and optimize all of our behavior.”
Mitch Daniels, president of Purdue University, recently cautioned that if universities don’t properly administer their technological systems, their profiling and nudging could end up doing serious harm to the very students they are charged with protecting. Since schools are acquiring massive amounts of personal data that aggregate to form rich portraits of student habits, they have the potential to comply with all state and federal education laws while still creating excessively controlling environments — possibly even string-pulling ones, like China’s all-encompassing, government-run social credit system. “Many of us,” Daniels writes, “will have to stop and ask whether our good intentions are carrying us past boundaries where privacy and individual autonomy should still prevail.”
In other words, if universities want to educate tomorrow’s leaders while also leading by example, they need to become more committed to the ideal of “information justice.” Jeffrey Alan Johnson, director of institutional effectiveness, planning, and accreditation support at Utah Valley State University, has long been making the case that, among other things, information justice requires that college students be given fair processes to challenge and change personal information that a school believes is useful, but which, in reality, is inaccurate or inappropriate for them to possess or act upon. Johnson also contends that universities should be doing a better job of “tapping into the expertise of their faculty members who deal with technology ethics and basic social science and research methods who can talk about data as a social construct.”
I ran these ideas past Pasquale, and he found them all worthwhile to pursue. “My main problem is with efforts to substitute ethical reflection for legal obligations,” he said.
“In a sound legal system,” he continued, “Facebook would be facing massive fines for repeatedly violating the trust of its users and would be subject to prudential regulation (in the same way many banks are) to assure it keeps its promises in the future. But there are many areas where our sense of right and wrong is emerging or where duties are more moral in form than legal. That’s where ethical education is essential — to cultivate judgment and articulacy about values in realms where the obsession with the algorithmic leads to binary thinking (whatever is legal is fine to do) or hacker ‘ethics’ (rules are made to be broken).”
Tech companies will have to take ethics much more seriously than they currently do if they want to be genuinely trustworthy. The public and politicians alike are fed up with repetitive mea culpa sound bites that are scripted to resonate as contrite but lack the substance of committed leadership.
The reckoning seems to be finally here, and so promises of reform that appear bold but only tinker at the edges of institutional myopia, greed, and arrogance will be taken for what they are: lies that can no longer be tolerated.