Speaker Bio:
Dr. Vivienne Ming, named one of 10 Women to Watch in Tech by Inc. Magazine, is a theoretical neuroscientist, entrepreneur, and author. She co-founded Socos Labs, her fifth company, an independent think tank exploring the future of human potential. Dr. Ming launched Socos Labs to combine her varied work with that of other creative experts and to expand their impact on global policy issues, both inside companies and throughout our communities. Previously, Vivienne was a visiting scholar at UC Berkeley's Redwood Center for Theoretical Neuroscience, pursuing her research in cognitive neuroprosthetics. In her free time, Vivienne has invented AI systems to help treat her diabetic son, predict manic episodes in bipolar sufferers weeks in advance, and reunite orphaned refugees with extended family members. She sits on the boards of numerous companies and nonprofits, including StartOut, The Palm Center, Cornerstone Capital, Platypus Institute, Shiftgig, Zoic Capital, and SmartStones. Dr. Ming also speaks frequently on her AI-driven research into inclusion and gender in business. For relaxation, she is a wife and mother of two.
Distilled Blog Post Summary: Dr. Vivienne Ming's talk at a recent Domino MeetUp delved into bias and its implications, including potential liabilities for algorithms, models, businesses, and humans. Dr. Ming's evidence included first-hand experience fundraising for multiple startups, data analysis completed during her tenure as the Chief Scientist at Gild, and studies from data science, economics, recruiting, and education. This blog post provides text and video clip highlights from the talk. The full video is available for viewing. If you are interested in viewing additional content from Domino's past events, review the Data Science Popup Playlist. If you are interested in attending an event in person, then consider the upcoming Rev.
Research, Experimentation, and Discovery: Core of Science
Research, experimentation, and discovery are at the core of all types of science, including data science. Dr. Ming kicked off the talk by noting that "one of the powers of doing a lot of rich data work, there's this whole range... I mean, there's very little in this world that's not an entree into...". While Dr. Ming provided detailed insights and evidence pointing to the potential of rich data work throughout the talk, this blog post focuses on the implications and liabilities of bias within gender, names, and ethnic demographics. It also covers how bias isn't solely a data or algorithm problem; it is a human problem. The first step to addressing bias is acknowledging that it exists.
Do You See the Chameleon? The Roots of Bias
Each one of us has biases and makes assessments based on those biases. Dr. Ming uses Johannes Stotter's Chameleon to point out that "the roots of bias are fundamental and unavoidable". Many people, when they see the image, see a chameleon. However, the image actually consists of two people covered in body paint and strategically positioned to look like a chameleon. In the video clip below, Dr. Ming indicates
"I cannot make an unbiased AI. There are no unbiased rats in the world. In a very basic sense, these systems are making decisions on their uncertainty, and the only rational way to do that is to act the best we can given the data. The problem is when you refuse to acknowledge there's a problem with our bias and actually do something about it. And we have this tremendous amount of evidence that there is a serious problem, and it's holding not just small things back. But as I'm going to get to later, it's holding us back from a transformed world, one that I think anyone can selfishly celebrate."
[Video clip]
Bias as the Pat on the Head (or the Chain) that Holds Us Back
While history is filled with moments when bias was not acknowledged as a problem, there are also moments when people addressed societally reinforced gender bias. Women have assumed male noms de plume to write epic novels, fight in wars, win judo championships, run marathons, and even, as Dr. Ming pointed out, create an all-women software company called Freelance Programmers in the 1960s. During the meetup, Dr. Ming indicated that Dame Stephanie "Steve" Shirley's TED Talk, "Why do ambitious women have flat heads?", helped her parse two distinctly different startup fundraising experiences that were grounded in gender bias.
Prior to co-founding her current education technology company and obtaining her academic credentials, Dr. Ming dropped out of college and started a film company. When
"we started this company, and the funny thing is, despite having nothing, nothing that anyone should invest in... we didn't have a script. We didn't have talent. Literally, we didn't even have talent. We didn't have experience. We had nothing. We essentially raised what you might in the tech industry call a seed round after a few phone calls."
However, raising funding was more difficult the second time, for her current company, despite her having substantially more academic, technology, and business credentials. During one of the funding meetings, with a small firm of five partners, Dr. Ming relayed how the last partner said, "'You should feel so proud of what you've built.' And at the time, I thought, oh, Jesus, at least one of these people is on our side. In fact, as we were leaving the room, he literally patted me on the head, which seemed a little strange." This prompted Dr. Ming to consider how
"my credentials are transformed that second time. No one questioned us about the technology. They loved it. They questioned whether we know how to run a business. The product itself people loved versus a film. Everything the second time around should have been dramatically easier. Except the only real difference that I can see is that the first time I was a man and the second time I was a woman."
This led Dr. Ming to understand what Dame Stephanie Shirley meant by ambitious women having flat heads, from all of the times they have been patted on the head. Dr. Ming relays that
"[What] I've learned ever since as an entrepreneur is, as soon as it feels like they're dealing with their favorite niece rather than me as a business person, then I know, I know that they simply are not taking me seriously. And all the PhDs in the world doesn't matter, all the past successes in my other companies doesn't matter. You are just that thing to me. And what I've learned is, figure that out ahead of time. Don't bother wasting days and hours prepping to pitch to people that simply are not capable of understanding who you are, but of course, in a lot of contexts, that's all you've got."
Dr. Ming also pointed out that gender bias manifested at an organization where she worked both before and after her gender transition. She noted that when she went into work after her gender transition,
"That's the last day anyone ever asked me a math question, which is kind of funny. I do happen to also have a PhD in psychology. But somehow, one day to the next, I didn't forget how to do convergence proofs. I didn't forget what it meant to invent algorithms. And yet that was how people dealt with it, people who knew before. You see how powerful the change is to see someone in a different skin."
This experience is similar to Dame Shirley's, who, in order to start what would become a multi-billion dollar software company in the 1960s, "started to challenge the conventions of the time, even to the extent of changing my name from 'Stephanie' to 'Steve' in my business development letters, so as to get through the door before anyone realized that he was a she". Dame Shirley subverted bias at a time when, as a woman, she was prevented from working on the stock exchange or driving a bus, and, in her own words, "Indeed, I couldn't open a bank account without my husband's permission". Yet, despite the bias, Dame Shirley remarked,
"who would have guessed that the programming of the black box flight recorder of the supersonic Concorde would have been done by a bunch of women working in their own homes?" ... "And later, when it was a company valued at over three billion dollars, and I'd made 70 of the staff into millionaires, they sort of said, 'Well done, Steve!'"
While it is no longer the 1960s, bias implications and liabilities are still present. Yet, we in data science are able to access data and have open conversations about bias as the first step toward avoiding inaccuracies, training data liabilities, and model liabilities within our data science projects and analyses. What if, in 2018, people built and trained models on the assumption that humans with XY chromosomes lacked the ability to code, simply because they only reviewed and used data from Dame Shirley's company in the 1960s? Consider that for a moment: the mirror image of that assumption is what happened to Dame Shirley, Dr. Ming, and many others. Bias implications and liabilities have real-world consequences. Being aware of the bias and then addressing it moves the industry forward, toward breaking the chain that holds research, data science, and all of us back.
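To make that thought experiment concrete, here is a minimal, hypothetical sketch (not anything from Dr. Ming's talk; the data is synthetic and the variable names are invented): a classifier trained on a historical sample in which only one gender appears among the positive examples will happily learn gender as a predictor of coding ability, even though the correlation is purely an artifact of how the data was collected.

```python
# Hypothetical illustration: a model trained on skewed historical data
# learns a spurious demographic feature. Synthetic data, not Dr. Ming's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Latent skill matters equally for everyone, but in this (artificial)
# sample nearly every labeled "coder" is a woman, mirroring a dataset
# drawn only from a 1960s all-women software company.
is_woman = rng.integers(0, 2, n)            # 1 = woman, 0 = man
skill = rng.normal(size=n)                  # same skill distribution for all
is_coder = ((skill > 0.5) & (is_woman == 1)).astype(int)  # sampling artifact

X = np.column_stack([skill, is_woman])
model = LogisticRegression().fit(X, is_coder)

# The fitted model treats gender as strongly predictive of coding ability.
print("coefficients [skill, is_woman]:", model.coef_[0])

# Two equally skilled candidates, different genders:
print("P(coder | skilled man):  ", model.predict_proba([[1.5, 0]])[0, 1])
print("P(coder | skilled woman):", model.predict_proba([[1.5, 1]])[0, 1])
```

The algorithm is doing exactly what it was asked to do; the liability was introduced by the humans who chose the sample.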
Say My Name: Biased Perceptions Uncovered
When Dr. Ming was the Chief Scientist at Gild, a reporter called her for a quote on the Jose Zamora story. This also led to Dr. Ming's research for her upcoming book, "The Tax of Being Different". Dr. Ming relayed anecdotes during the meetup (see video clip) and has also written about this research for the Financial Times:
"To calculate the tax on being different I made use of a data set of 122m professional profiles collected by Gild, a company specialising in tech for hiring and HR, where I worked as chief scientist. From that data, I was able to compare the career trajectories of specific populations by examining the actual individuals. For example, our data set had 151,604 people called 'Joe' and 103,011 named 'José'. After selecting only for software developers we still had 7,105 and 4,896 respectively, real people writing code for a living. Analysing their career trajectories I found that José typically needs a masters degree or higher compared to Joe with no degree at all to be equally likely to get a promotion for the same quality of work. The tax on being different is largely implicit. People need not act maliciously for it to be levied. This means that José needs six additional years of education and all of the tuition and opportunity costs that education entails. This is the tax on being different, and for José that tax costs $500,000-$1m over his lifetime." (Financial Times)
[Video clip]
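Dr. Ming's underlying data and methodology at Gild are not public, but the shape of the comparison she describes can be sketched with ordinary tools: model promotion probability as a function of name group and education, then ask how much additional education the penalized group needs before its predicted probability matches the other group's. Everything in the snippet below (the file name, column names, and model form) is a hypothetical stand-in rather than her actual analysis.

```python
# Hypothetical sketch of a "tax on being different" style comparison.
# The real Gild analysis is not public; this only illustrates the idea.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: name_group ("Joe" or "Jose"), years_education, promoted (0/1).
df = pd.read_csv("career_data.csv")  # hypothetical evaluation data

# Promotion odds as a function of name group and years of education.
model = smf.logit("promoted ~ C(name_group) + years_education", data=df).fit()

# Log-odds penalty attached to the "Jose" group (negative if penalized),
# and log-odds gained per additional year of education.
penalty = model.params["C(name_group)[T.Jose]"]
per_year = model.params["years_education"]

# Extra years of education "Jose" needs to reach the same predicted
# promotion probability as "Joe" for the same quality of work.
extra_years = -penalty / per_year
print(f"Estimated extra education needed: {extra_years:.1f} years")
```

In the Gild data Dr. Ming cites, that gap worked out to roughly a master's degree, about six additional years of education, which she then priced at $500,000 to $1m over a lifetime.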
While this particular example focuses on ethnicity-oriented demographic bias, during the meetup discussion Dr. Ming referenced quite a few research studies regarding name bias. In case Domino Data Science Blog readers do not have the research she cites on hand, a sample of published studies on name bias includes: names that suggest male gender, "noble-sounding" surnames in Europe, and names that are perceived as "easy to pronounce", which also has implications for how organizations choose their names. Yet Dr. Ming did not limit the discussion to bias within gender and naming; she also dived right into how demographic bias impacts image classification, particularly with ethnicity.
Bias within Image Classification: Missing Uhura and Not Unlocking your iPhone X
Before Dr. Ming was the Chief Scientist at Gild, she saw a demo of Paul Viola's face recognition algorithm. In that demo, she noticed that the algorithm didn't detect Uhura. Viola acknowledged that this was a problem and said it would be addressed. Fast forward years later, when Dr. Ming was the Chief Scientist at Gild: she relayed how she received "a call from The Wall Street Journal [and WSJ asked her] 'So Google's face recognition system just labeled a black couple as gorillas. Is AI racist?' And I said, 'Well, it's the same as the rest of us. It depends on how you raise it.'"
For background context: in 2015, Google released a new photo app, and a software developer discovered that the app labeled two people of color as "gorillas". Yonatan Zunger, the Chief Architect for Social at Google at the time, has since left Google and provided candid commentary about bias. Then, in January 2018, Wired ran a follow-up story regarding the 2015 event. In the article, Wired tested Google Photos and found that the labels "gorilla", "chimpanzee", "chimp", and "monkey" "were censored from searches and image tags after the 2015 incident", which Google confirmed. Wired also tested how the app handled people by conducting searches for "African American", "black man", "black woman", and "black person", which returned "an image of a grazing antelope" (for the search "African American") as well as "black-and-white images of people, correctly sorted by gender but not filtered by race". This points to the continued challenges involved in addressing bias in machine learning and models, bias that also has implications beyond social justice.
As Dr. Ming pointed out in the meetup video clip below, facial recognition is also built into the iPhone X, and the face recognition feature has potential challenges in recognizing faces of color around the globe. Yet, despite all of this, Dr. Ming indicates, "but what you have to recognize, none of these are algorithm problems. These are human problems." Humans made the decisions to build the algorithms, build and train the models, and roll out the products, and those decisions embedded bias with wide implications.
[Video clip]
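None of the incidents above required exotic tooling to surface. A first, very ordinary step is to evaluate a model's error rate separately for each demographic group rather than only in aggregate. The sketch below is a hypothetical illustration of that check (the file, column names, and the 2x threshold are invented stand-ins), not a description of how Apple or Google evaluate their systems.

```python
# Hypothetical sketch: disaggregated evaluation of a face detector.
# The data file, columns, and flagging threshold are illustrative stand-ins.
import pandas as pd

# Assumed columns: subgroup (demographic label for the photo subject),
# detected (1 if the detector found the face, 0 if it missed).
results = pd.read_csv("detector_eval.csv")

by_group = results.groupby("subgroup")["detected"].agg(["mean", "count"])
by_group["miss_rate"] = 1 - by_group["mean"]
print(by_group.sort_values("miss_rate", ascending=False))

# Flag subgroups whose miss rate is far worse than the overall miss rate.
overall_miss = 1 - results["detected"].mean()
flagged = by_group[by_group["miss_rate"] > 2 * overall_miss]  # arbitrary 2x cutoff
print("Subgroups needing attention:\n", flagged)
```

Checks like this do not fix anything by themselves, which is Dr. Ming's point: the decisions about what to measure, on whom, and what to do about the gaps remain human decisions.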
Conclusion
Introducing liability into an algorithm or model via bias isn't solely a data or algorithm problem; it is a human problem. Understanding that it is a problem is the first step in addressing it. At the recent Domino MeetUp, Dr. Ming relayed how
"AI is an amazing tool, but it's just a tool. It will never solve your problems for you. You have to solve them. And particularly in the work I do, there are only ever messy human problems, and they only ever have messy human solutions. What's amazing about machine learning is that once we found some of those issues, we can actually use it to reach as many people as possible, to make this essentially cost-effective, to scale that solution to everyone. But if you think some deep neural network is going to somehow magically figure out who you want to hire when you have not been hiring the right people in the first place, what is it you think is happening in that data set?"
Domino continually curates and amplifies ideas, perspectives, and research to contribute to discussions that accelerate data science work. The full video of Dr. Ming's talk at the recent Domino MeetUp is available. There is also an additional technical talk that Dr. Ming gave at the Berkeley Institute for Data Science on "Maximizing Human Potential Using Machine Learning-Driven Applications". If you are interested in content similar to these talks, please feel free to visit the Domino Data Science Popup Playlist or attend the upcoming Rev.