{ systems problems }

  • A Seven-Step Primer on Soft Systems Methodology

    I’m currently TAing for CSC2720H Systems Thinking for Global Problems, a graduate-level course on systems thinking. In class today we talked about soft systems methodology (SSM), an approach which uses systems thinking to tackle what are called “wicked problems”. I thought I’d outline one approach to SSM, as it’s useful to CS education research.

    Step 1: Identify the domain of interest

    Before you can research something, you should first decide what your domain is. What topic? What system are you studying? For example, “teaching computer science” could be your starting point, as could “climate change”.

    Chances are you’re looking at a wicked problem. Conklin’s defining properties of wicked problems are:

    1. The problem is not understood until after the formulation of a solution.
    2. Wicked problems have no stopping rule.
    3. Solutions to wicked problems are not right or wrong.
    4. Every wicked problem is essentially novel and unique.
    5. Every solution to a wicked problem is a ‘one shot operation.’
    6. Wicked problems have no given alternative solutions.

    Because you’re looking at a domain which doesn’t have a clear definition or boundaries, you’ll first want to immerse yourself in the domain. One trick is to draw “rich pictures”, which are essentially visualized streams of consciousness.

    You should also think about what perspectives you bring into this domain. What biases and privileges do you have going into this? Why are you interested in this domain? What do you have to gain or lose here?


    Step 2: Express the Problem Situation

    A good expression of a situation should contain no value judgments. As a heuristic, ask yourself if it’s possible for somebody to not see your statement as a problem.

    For example: “Global surface temperatures on Earth have been increasing since the late 19th century” is a statement some people may not even see as a problem, whereas “Climate change will destroy our way of life” presents climate change as a problem, rather than as a situation.

    The goal of expressing a situation is so that you can then identify the different ways people would frame this as a problem. If you express the situation as inherently problematic, this will narrow the problem frames that you’ll think of.

    It may take you a few drafts to come up with a situation statement you’re happy with. That’s fine; SSM is an iterative process. You may find in later steps you’ll want to come back here (or to earlier steps) and revise your work.

    Step 3: Identify Different Problem Frames

    How you frame a situation will affect the types of analysis and solutions you’ll come up with. Your next step is to think of the different ways the situation could be framed, before you pick which one to proceed with. I’ll give three examples.

    Situation: A large fraction of undergraduate students fail in first-year CS. 
    Some problem frames:

    1. The context is problematic. Students are overburdened in all their classes, and have a difficult time adjusting to university study.
    2. The curriculum structure is problematic. CS1 material builds on itself to an extent that other first-year classes don’t. If you fall behind, it can be impossible to catch up.
    3. The amount of content is problematic. We pack too much material into CS1 for students to properly absorb and understand.
    4. The pedagogy is problematic. We just aren’t teaching CS1 effectively.
    5. The affordances are problematic. We teach CS1 using programming languages which don’t reflect how non-programmers reason about computation (e.g. while loops vs. until loops).
    6. It isn’t a problem. The failure rates, while high, are consistent with other first-year courses.
    7. Computer science is inherently difficult for humans to learn.
      Situation: Computer science is not available in every high school in Canada.
      Some problem frames:

    1. If the general population doesn’t learn about CS, they won’t be able to properly participate in a 21st-century democracy. Issues of privacy and security are poorly understood but necessary for democracies to address.
    2. Not enough young Canadians are equipped with the necessary computational skills for the workforce.
    3. CS is generally only offered at affluent, urban schools. This causes racial and class disparities in access to CS, and in turn, to lucrative jobs.
    4. Fewer girls than boys take high school CS; if more girls could take high school CS it could reduce the gender gap.
    5. There aren’t enough qualified high school teachers to teach CS in every school.
    6. Schools are underfunded and overburdened.
    7. This isn’t a problem; schools should be focusing on other topics instead!
      Situation: The percentage of women completing undergrad CS programmes hasn’t changed since the wide-scale creation of women in computing initiatives.
      Some problem frames:

    1. The initiatives are being undermined by the same external forces that created the gender disparity in CS.
    2. The initiatives are themselves faulty. They reinforce gender norms and the subtyping of “female computer scientist” != “computer scientist”.
    3. The initiatives are having a positive impact, but not in ways this measurement can capture. For example, they improve the personal experiences of the women in the field, but don’t change the numbers.
    4. Without the initiatives, the percentage would have decreased. Before the initiatives, the percentage was decreasing; it has since flatlined.

    Step 4: Study the problem frames and pick one

    At this point you’ll want to start doing a literature review. As you review the literature you’ll find different papers take different problem frames (explicitly or implicitly).

    Each problem frame will lead to different approaches to studying and “solving” the problem. Let’s return to the CS1 failure example:

    Situation: A large fraction of undergraduate students fail in first-year CS.

    1. The context is problematic. Students are overburdened in all their classes, and have a difficult time adjusting to university study.
       • Possible solutions: increase financial aid, reduce course loads.
    2. The curriculum structure is problematic. CS1 material builds on itself to an extent that other first-year classes don’t. If you fall behind, it can be impossible to catch up.
       • Possible solution: reconfigure the CS1 material to be breadth-first, like they have at Harvey Mudd.
    3. The amount of content is problematic. We pack too much material into CS1 for students to properly absorb and understand.
       • Possible solutions: spread it out amongst more classes, get CS1 into all high schools.
    4. The pedagogy is problematic. We just aren’t teaching CS1 effectively.
       • Possible solutions: use peer instruction and/or pair programming.
    5. The affordances are problematic. We teach CS1 using programming languages which don’t reflect how non-programmers reason about computation (e.g. while loops vs. until loops).

    Ultimately, for your research you’ll need to pick a single problem frame to work within. At this point you should choose one and justify why you’re picking it rather than the others. Once you’ve picked your problem frame you’ll want to carefully review and analyse the relevant literature.

    Step 5: Arena of Action

    Think about your problem frame like you’re framing a photograph. Who or what is in the foreground? In the background? Not in the frame at all?

    Or, if you see your situation as an arena: who is in the arena? Who is being affected by the situation? Who has the power to change the situation? Who benefits from this framing, and who loses out?

    CATWOE Analysis can come in here: who are the customers/clients? The actors? The transformation process? What’s the world view underlying this? Who “owns” or controls this? What environmental constraints are there?
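
    If it helps to see the pieces laid out, here is a purely hypothetical CATWOE sketch for the CS1-failure situation, written as a plain Python data structure. Every entry is an illustrative assumption of mine, not a finding from the course or the literature.

```python
# Hypothetical CATWOE sketch for the "large fraction of students fail CS1"
# situation, under the "pedagogy is problematic" frame.
# Every entry here is an illustrative assumption, not data.
catwoe = {
    "customers":      "first-year students taking CS1",
    "actors":         "CS1 instructors and teaching assistants",
    "transformation": "students with no programming background become "
                      "students who can write and reason about small programs",
    "worldview":      "programming is a learnable skill that effective "
                      "teaching can make accessible to most students",
    "owners":         "the department and faculty who control the curriculum",
    "environment":    "class sizes, budgets, timetables, and university "
                      "admission policies",
}

for element, description in catwoe.items():
    print(f"{element:>14}: {description}")
```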

    When I took a class on policy analysis, some questions we asked were: “Why was this policy adopted? On whose terms? Why? On what grounds have these selections been justified? Why? In whose interests? Indeed, how have competing interests been negotiated?” as well as “Why now?” and “What are the consequences?” (from Taylor et al, 1997, “Doing Policy Analysis”)

    Step 6: Theorize/model the relevant system

    Having identified your problem frame, you’ll want to return to the system at hand. Depending on the problem, you may want to model it and/or theorize about it based on the existing literature.

    Once you have a model, compare it to real world evidence. You may have to design and perform empirical studies in order to accomplish this.

    Step 7: Identify possible/feasible changes to the situation, and take action

    Like it says. Soft systems methodology was created for action research, a style of research intended to create social change. In this paradigm, research is bad if it doesn’t help improve the situation.

    Your goal in doing this research is to identify changes to the situation which are both possible and feasible. You should identify who has the power to enact these changes. Challenge yourself: contact these people and share your findings. Advocate for your proposed solution.

  • On the Social Psychology of Sexism

    As somebody interested in gender equality in CS, one thing that’s proved quite illuminating for me is to read up on the psychology of sexism. Why does sexism persist in society? What social and psychological structures keep it in place?

    Sexism in some ways is unlike other forms of discrimination. When it comes to race, or class, or disability, the social psychology literature will frequently talk about social distance. But when it comes to women, men “can’t live without ‘em” [1] – and so things tend to be a bit different.

    Ambivalent Sexism

    It turns out that sexism has two faces: good old hostile sexism – and the more palatable benevolent sexism. Benevolent sexism is the notion that women are wonderful, caring, nurturing and beautiful creatures – and so must be protected and provided for. (Note the “creatures” – not “people”.)

    The evidence on the psychology of sexism is that the people who espouse hostile sexism are also benevolently sexist. They think women shouldn’t work – women should stay home and care for the children because women are so good at mothering. And they get hostile when their regressive worldview clashes with attempts of women and men to change the status quo.

    Furthermore, the more you’re exposed to benevolent sexism, the more likely you are to later take on hostile sexist views. The more you’re primed to believe that women should fill the magical-traditional role, the more likely you’ll try to thwart any attempt to move away from this view.

    A vital thing to note about benevolent sexism is how frequently it is embraced by women. Many women will argue that women are better at being nurturing, or communicating, etc. After all, why would you turn down a worldview that argues you’re wonderful and worthy of protection?

    Hammond et al’s 2013 “The Allure of Sexism: Psychological Entitlement Fosters Women’s Endorsement of Benevolent Sexism Over Time” gives the most up-to-date review I’ve seen on ambivalent sexism in its introduction – the rest of this post is largely a summary of their literature review.

    Benevolent Sexism

    Benevolent sexism tends to be a fairly palatable type of sexism since it doesn’t seem sexist [2] – indeed women often see it as chivalry or intimacy rather than sexism [3]. After all, women “complete” men and are their “better halves”.

    While hostile sexism “works to suppress direct challenges to men’s power by threatening women who take on career roles or seek political reform” [2], benevolent sexism works to incentivise women’s adoption of traditional, patriarchal gender roles. Women are revered as special and caring – and deserving of protection.

    And indeed, the main way in which benevolent sexism perpetuates gender inequality is through women’s adoption of these views. Benevolent sexism incentivises women to stay in those special, caring, protection-worthy gender roles – and at the same time makes men’s social advantages seem more fair [4].

    Effects on Women

    A longitudinal study of New Zealanders found that psychological entitlement in women leads to greater endorsement of benevolent sexism (Hammond et al., 2013). The more a woman believes she is deserving of good things, praise, and material wealth – the more benevolent sexism will appeal to her. After all, benevolent sexism both reveres women as being men’s “better halves” – and at the same time promises women that they will be protected and financially provided for by their male partners.

    Women who espouse benevolent sexism have greater life satisfaction [5]. They feel good about their focus on care-giving and appreciate being provided for by their male partners. They have more power to influence their male partners [6], and hence have indirect access to resources and power. 

    At the same time, however, benevolent sexism holds women back when it comes to gaining social power. Women who hold benevolently sexist beliefs have less ambition for education and their careers [7]. They are more likely to defer to their partners on career decisions [8]. They believe their careers should take the back seat to their male partner’s careers [9].

    And furthermore, exposure to benevolently sexist statements leads women to perform poorly at tasks and feel lower competence [10]. They’re more likely to believe that men and women have an equal chance of success in society [4]. And they’re less likely to support gender-based collective action [11].

    Effects on Men

    For men, espousing benevolent sexism requires sacrifice in relationships [12]. They have to provide for their female partners, and live up to the romantic notion of the “white knight”.

    But this sacrifice does come with benefits. Men who are portrayed as benevolently sexist are viewed as more attractive [13]. And men who actually agree with benevolent sexism are more caring and satisfied relationship partners [6].

    Furthermore, women who espouse benevolent sexism are likely to be hostile/resistant to men who aren’t on the same page [6]. And women who espouse benevolent sexism are more let down when their male partners fail to live up to the fantasy of the white knight [14].

    A Tragedy of the Commons

    For women, benevolent sexism gives them dyadic power [1]. They have more power in relationships, and more satisfaction in them. As a result, women have something to lose if the status quo is disrupted [12].

    But getting the individual benefits of benevolent sexism means agreement with the attitudes that perpetuate gender inequality. The women who agree with benevolent sexism are more likely to hold themselves back in their careers and less likely to support feminist causes.

    If we want to get more women into non-traditional careers like computer science, sexism is something we need to tackle. We spend a lot of time trying to break down hostile sexism in STEM – but what about benevolent sexism? 

    References

    1. Glick, P., & Fiske, S. T. (1997). Hostile and benevolent sexism: Measuring ambivalent sexist attitudes toward women. Psychology of Women Quarterly, 21(1), 119-135.
    2. Hammond, M. D., Sibley, C. G., & Overall, N. C. (2013). The allure of sexism: Psychological entitlement fosters women’s endorsement of benevolent sexism over time. Social Psychological and Personality Science, 1948550613506124.
    3. Barreto, M., & Ellemers, N. (2005). The burden of benevolent sexism: How it contributes to the maintenance of gender inequalities. European Journal of Social Psychology, 35, 633-642. doi:10.1002/ejsp.270
    4. Jost, J. T., & Kay, A. C. (2005). Exposure to benevolent sexism and complementary gender stereotypes: Consequences for specific and diffuse forms of system justification. Journal of Personality and Social Psychology, 88, 498–509. doi:10.1037/0022-3514.88.3.498
    5. Hammond, M. D., & Sibley, C. G. (2011). Why are benevolent sexists happier? Sex Roles, 65, 332–343. doi:10.1007/s11199-011-0017-2
    6. Overall, N. C., Sibley, C. G., & Tan, R. (2011). The costs and benefits of sexism: Resistance to influence during relationship conflict. Journal of Personality and Social Psychology, 101, 271–290. doi:10.1037/a0022727
    7. Fernandez, M., Castro, Y., Otero, M., Foltz, M., & Lorenzo, M. (2006). Sexism, vocational goals, and motivation as predictors of men’s and women’s career choice. Sex Roles, 55, 267–272. doi:10.1007/s11199-006-9079-y
    8. Moya, M., Glick, P., Exposito, F., de Lemus, S., & Hart, J. (2007). It’s for your own good: Benevolent sexism and women’s reactions to protectively justified restrictions. Personality and Social Psychology Bulletin, 33, 1421–1434. doi:10.1177/0146167207304790
    9. Chen, Z., Fiske, S. T., & Lee, T. L. (2009). Ambivalent Sexism and power-related gender-role ideology in marriage. Sex Roles, 60, 765–778. doi:10.1007/s11199-009-9585-9
    10. Dardenne, B., Dumont, M., & Bollier, T. (2007). Insidious dangers of benevolent sexism: consequences for women’s performance. Journal of Personality and Social Psychology, 93, 764–779. doi:10.1037/0022-3514.93.5.764
    11. Becker, J. C., & Wright, S. C. (2011). Yet another dark side of chivalry: Benevolent sexism undermines and hostile sexism motivates collective action for social change. Journal of Personality and Social Psychology, 101, 62–77. doi:10.1037/a0022615
    12. Glick, P., & Fiske, S. T. (1996). The ambivalent sexism inventory: Differentiating hostile and benevolent sexism. Journal of Personality and Social Psychology, 70, 491–512. doi:10.1037//0022-3514.70.3.491
    13. Kilianski, S. E., & Rudman, L. A. (1998). Wanting it both ways: Do women approve of benevolent sexism? Sex Roles, 39, 333–352. doi:10.1023/A:1018814924402
    14. Hammond, M. D., & Overall, N. C. (2013). When relationships do not live up to benevolent ideals: Women’s benevolent sexism and sensitivity to relationship problems. European Journal of Social Psychology, 43, 212–223. doi:10.1002/ejsp.1939
  • Toward a systems model of CS enrolments

    This term, I’m taking my supervisor’s grad course on “Systems Thinking for Global Problems”. It’s been quite interesting so far. In our last couple of lectures, we have been talking about feedback loops.

    And with that on my mind, I was particularly struck by a recent post on Mark Guzdial’s blog reposting a keynote by Eric Roberts:

    [in response to increasing CS enrolments], 80% of the universities are responding by increasing teaching loads, 50% by decreasing course offerings and concentrating their available faculty on larger but fewer courses, and 66% are using more graduate-student teaching assistants or part-time faculty. […] However, these measures make the universities’ environments less attractive for employment and are exactly counterproductive to their need to maintain and expand their labor supply. They are also counterproductive to producing more new faculty since the image graduate students get of academic careers is one of harassment, frustration, and too few rewards.

    Computer science departments have, for decades, had cyclical enrolment. This sort of oscillation in enrolments is exactly what you see in systems analysis when you have a balancing feedback loop with a delay in it.

    Balancing Feedback Loops

    Causal loops are used in systems analysis to show the relationship between variables in a system. If the variable x increases when y increases, and x decreases when y decreases, they have a positive link. If, however, x increases when y decreases, and x decreases when y increases, they have a negative link.

    Let’s say we put a bunch of variables in a loop. If there’s an even number of negative links, then we have a reinforcing feedback loop: the system will increase (or decrease) exponentially until it hits some limit to growth. The negative links cancel each other out – so everything just reinforces everything.

    But what if we have an odd number of negative links? The system tends towards an equilibrium – either it will asymptote to some value, or, more often, it will oscillate. Something will increase, another thing will decrease it, another will increase, and so on.
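
    As a concrete check of this even/odd rule, here’s a minimal sketch in Python. The list-of-signs representation and the example loops are just illustrative assumptions, not part of any standard systems-analysis tool.

```python
# Minimal sketch of the even/odd rule for causal loops.
# A loop is represented as a list of link polarities: +1 for a positive
# link, -1 for a negative link. (This representation is just for illustration.)

def classify_loop(link_signs):
    """Return 'balancing' for an odd number of negative links, else 'reinforcing'."""
    negatives = sum(1 for sign in link_signs if sign < 0)
    return "balancing" if negatives % 2 == 1 else "reinforcing"

# Enrolments -> student experience is a negative link; experience -> enrolments
# is a positive link: one negative link, so the loop is balancing.
print(classify_loop([-1, +1]))      # balancing

# A loop where every link is positive (zero negative links) is reinforcing.
print(classify_loop([+1, +1, +1]))  # reinforcing
```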

    Consider:

    As the number of students enrolling in CS1 increases, the quality of the student experience in a CS programme goes down, for the reasons Eric Roberts covered above. And as the quality of the student experience goes down, CS enrolments go down.

    Eventually, enrolments and my abstract “student experience” will reach equilibrium – but it won’t be a static one. The enrolments will oscillate due to delay in the system: when CS enrolments increase, the quality of “student experience” won’t go down for a while yet, and “student experience” won’t immediately affect CS enrolments.
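
    Here’s a toy simulation of that delayed balancing loop, just to show the oscillation. The linear forms, the parameters, and the four-year delay are all invented for illustration; nothing here is calibrated to real enrolment data.

```python
# Toy model of the balancing loop with delays. Enrolment hurts the quality
# of the student experience only after a lag, and experience feeds back into
# enrolment only after a lag, so the system cycles instead of settling.
# Every functional form and number here is an assumption for illustration.

DELAY = 4    # years before one variable "feels" the other
YEARS = 48

enrolment = [1000.0] * DELAY    # initial history of CS1 enrolments
experience = [0.7] * DELAY      # student experience on a 0..1 scale

for year in range(DELAY, YEARS):
    # Negative link: higher enrolment DELAY years ago -> worse experience now.
    experience.append(max(0.0, min(1.0, 1.5 - enrolment[year - DELAY] / 2000.0)))
    # Positive link: better experience DELAY years ago -> higher enrolment now.
    enrolment.append(max(0.0, 2000.0 * experience[year - DELAY] - 200.0))

for year in range(0, YEARS, DELAY):
    print(f"year {year:2d}: enrolment ~ {enrolment[year]:4.0f}, "
          f"experience ~ {experience[year]:.2f}")
```

    With these made-up numbers, enrolments rise, overshoot, and cycle with a period of a few times the delay rather than settling to a fixed value.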

    Playing with the Model

    Where do CS1 enrolments come from? Eric Roberts has elsewhere observed that CS1 enrolments at Stanford (and elsewhere) are positively correlated with the NASDAQ average – with some delay.

    And with some delay, one would think (hope?) that the number of computer scientists in the economy would increase the NASDAQ average. Again, a balancing loop emerges with some delays in it:

    Let’s think some more about the relationship between CS1 enrolments and the number of CS graduates.

    The more CS1 enrolments there are, the more students in a CS department relative to the number of CS professors. Let’s call that “students / profs” for short.

    As students / profs increases, the use of sessional lecturers increases. The number of courses a department offers decreases – which in turn decreases the amount of streaming in CS programmes. The teaching load for faculty increases, in turn hurting faculty satisfaction.

    Even with the abstract “student experience” unpacked somewhat, we still see a balancing feedback loop. It doesn’t matter what path you take from “students/profs” to student retention – there’s an odd number of negative links.

    Now, everything so far has assumed the number of faculty is fixed. But it isn’t, quite – with an (often substantial) delay, increased enrolments will lead to more faculty hirings. Let’s look a bit more at that:


    Here we have our first reinforcing loop. The more students/profs, the more the teaching load – and down goes faculty satisfaction, more profs quit and go into industry, and then the ratio of students/profs gets even worse.

    This feedback loop would continue on until it hits a limit to growth (no faculty to teach classes?) if it weren’t for the interacting effect of faculty hirings. The more faculty leave, the more faculty need to be hired. If we ignore our link between faculty turnover and the number of faculty, what we have here is a balancing feedback loop: profs who leave are replaced, and all is steady.
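
    To see how the replacement-hiring loop keeps the attrition loop in check, here’s a rough extension of the toy model. The attrition function, the three-year hiring delay, and the fixed student body are all assumptions for illustration, not data about any department.

```python
# Sketch of the faculty loops: attrition rises with teaching load (the
# reinforcing loop), while departures are replaced only after a hiring lag
# (the balancing loop). All parameters are invented for illustration.

HIRING_DELAY = 3      # years from a departure to a filled replacement position
YEARS = 30

students = 2000.0                    # assume a fixed student body
faculty = [50.0]                     # faculty count over time
departures = [0.0] * HIRING_DELAY    # departures waiting to be replaced

for year in range(YEARS):
    load = students / faculty[-1]                   # students per professor
    quit_rate = 0.02 + 0.002 * max(0.0, load - 30)  # attrition worsens with load
    quits = quit_rate * faculty[-1]

    # Replacement hiring: positions vacated HIRING_DELAY years ago are filled now.
    hires = departures[year] if year >= HIRING_DELAY else 0.0

    departures.append(quits)
    faculty.append(faculty[-1] - quits + hires)

for year in range(0, YEARS, 5):
    print(f"year {year:2d}: faculty ~ {faculty[year]:5.1f}, "
          f"load ~ {students / faculty[year]:4.1f}")
```

    With replacement-only hiring, the faculty count drifts to a new, somewhat lower steady level; pushing the load back down would take hiring above the replacement rate.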

    What could change that is university funding for hiring more faculty above the replacement rate. This is going to be institution-specific, so it’s hard for me to come up with a model here. (Even for any given institution, funding structures tend to be incredibly complicated.)

    As I’ve been playing with these models, it’s striking me that it’s unlikely the cyclical enrolments in CS will stop. For them to stop, we’d have to have either a nice steady tech economy – meaning interest in CS stays steady – or a university funding structure where faculty can be rapidly hired in proportion to increasing enrolments.

    Any ideas on how the models could be refined – or leveraged? This is just a first stab at modelling this. Let me know in the comments.

  • Subtyping, Subgrouping, and Stereotype Change

    There’s been a fair bit of research finding that negative stereotypes are part of what deters women and racial minorities from computer science and STEM in general (e.g. [1]). These stereotypes make it harder for women and minorities to personally identify with computer science, and amplify some of the biases that they face in CS. So for this post, I’ll be going over observed phenomena in social psychology and sociology that pertain to stereotype change.

    Stereotypes

    Stereotypes are really hard to change. They’re reinforced from many sources (media, individuals, groups, etc). But even more than that, stereotypes are schema: they are how we mentally organize information about social groups, and how we can determine whether we are “in” or “out” of a group. Schema allow us to process information effortlessly, and are pretty deeply ingrained once they’re there.

    The human brain is not very good at changing schema. When we see evidence that contradicts our schema, our brains will do all sorts of mental gymnastics to avoid confronting or changing the incorrect schema. Most frequently, we forget that we saw the evidence at all. Sometimes our misconceptions even get stronger [2].

    This happens with stereotypes. Betz and Sekaquaptewa did a study where they showed role models to young girls, to try to motivate the girls’ interest in STEM [3]. Role models were either gender-neutral, or feminine. The result? Gender-neutral role models boosted interest – and feminine (counterstereotypic) role models actually reduced girls’ interest in science. To these girls, the feminine scientist – a stereotype violator – is aberrant.

    Stereotype violators are not viewed favourably by others. Indeed, in laboratory settings, people go out of their way to punish stereotype violators [4]. Stereotype violators are seen as less likable, and less competent. Not surprisingly, women in science are rated as less likable and less competent than otherwise identical men [5, 6].

    Subtyping

    So, let’s say instead of being exposed to just one woman scientist, you are exposed to a bunch of them. Regularly. That will change your schema, right? Nope.

    The human brain does a thing that social psychologists call subtyping. Instead of changing your mental model of what a scientist is (white male), you instead create a new category: the woman scientist [7].

    And the evidence is that this is what happens to female scientists, and to female engineers [3]. Furthermore, the stereotype of the woman scientist is of an unfeminine woman. The unfeminine label in and of itself is costly: these women are seen as less likable, less attractive, less competent, and less confident [3].

    Perceived Variability

    So how can we change stereotypes, then? It turns out a thing called “perceived variability” is key: it’s how much variation we perceive in an out-group [7]. “Out-group” here refers to any group that a person does not identify with; an “in-group” is one that that person identifies with. Humans systematically underestimate the variability within an out-group, particularly in comparison to the variability within the in-group (e.g. men see women as more homogenous than they are; whites see aboriginals as more homogenous; etc).

    This is known as the Out-group homogeneity effect. We mentally exaggerate the stereotypical qualities of outgroups (and outgroup members), and ignore the counterstereotypical qualities.

    We stop paying attention to stereotypes when we perceive greater variability in the group that’s been stereotyped [7]. For example, it’s a lot harder to think about aboriginals in terms of generalizations and stereotypes when you’re used to thinking about the differences between Inuit, Metis and First Nations, and differences between the Haida, Salish, Blackfoot, Anishinaabe, Innu, Mi’kmaq, Dene, etc.

    Subgrouping

    So how can we increase the perceived variability of an outgroup? Subgrouping refers to the process in which people are brought together around common goals or interests; a subgroup can include both in-group and out-group members. For example, creating a study group in a computer science class in which both women and men are represented – or joining a robotics club which has a mix of white, Asian, black, and hispanic students.

    Subgrouping allows for more varied cognitive representations of group members [7] – it leads you to start seeing the members of your subgroup in terms of their membership in your common subgroup, rather than their membership in any in-group or out-group. Richards and Hewstone have a very nice literature review about subgrouping and subtyping, showing how dozens of studies have consistently found that subgrouping leads to increased perceived group variability, and stereotype change.

    The Contact Hypothesis in sociology gets at subgrouping: the observed effect that being familiar with a member of an outgroup (e.g. homosexuals) increases your acceptance of the outgroup. Having a friend, classmate or family member who is queer means you share a subgroup with them (friend group, class, family, etc).

    For subgroups to form effectively, they need to have meaningful cohesion for those in the subgroup. One study by Park et al that is described by Richards and Hewstone found that they could not form a subgroup around all engineering students: “[they] were all hardworking and bright, but in very different ways. Some were motivated only by money, some by parental expectations, and some by larger environmental goals.” Instead, three subgroups of engineering students formed, around those three motivators [7].

    Similarly, Park et al found that they could not form subgroups around continuous variables (high/moderate/low) or arbitrary bases [7]. And other studies in the Richards and Hewstone review found that trying to form subgroups around having minority status (e.g. clubs for women in STEM, study groups for black students) either did not change stereotypes about their group, or intensified them [7].

    Indeed, I wouldn’t be surprised if part of why instructional techniques such as Peer Instruction disproportionately help female CS/physics students is that they encourage subgrouping. When you have your whole class together, collaborating in small groups for class activities, you’re having them bond as classmates – rather than as members of in-groups or out-groups.

    Discussion

    This is one reason I’m always a bit iffy about Women in CS/Science clubs: they don’t promote stereotype change, but instead promote subtyping. Instead of changing the notion of what a computer scientist is, they reinforce the subcategory of woman computer scientist.

    But stereotypes aren’t the only thing that affect minorities. Part of why Women in CS clubs are so popular is that they provide a sense of community to these women. This is really important when you’re a minority member! The sense of isolation that many women experience in STEM is why many of them leave.

    And, similarly, there is evidence that girls-only schooling can be good for encouraging young girls’ interest in math and science [8]. It’s somewhat of a tragedy of the commons problem: putting all the women together in a club helps those individual women cope with a culture in which they are negatively stereotyped – but it doesn’t change the actual stereotype.

    References:
    [1] Cheryan, Sapna, et al. “The Stereotypical Computer Scientist: Gendered Media Representations as a Barrier to Inclusion for Women.” Sex roles 69.1-2 (2013): 58-71.
    [2] McRaney. The Backfire Effect. http://youarenotsosmart.com/2011/06/10/the-backfire-effect/
    [3] Betz, Diana E., and Denise Sekaquaptewa. “My fair physicist? Feminine math and science role models demotivate young girls.” Social Psychological and Personality Science 3.6 (2012): 738-746.
    [4] Rudman, Laurie A., and Kimberly Fairchild. “Reactions to counterstereotypic behavior: the role of backlash in cultural stereotype maintenance.” Journal of personality and social psychology 87.2 (2004): 157.
    [5] Steinpreis, Rhea E., Katie A. Anders, and Dawn Ritzke. “The impact of gender on the review of the curricula vitae of job applicants and tenure candidates: A national empirical study.” Sex roles 41.7-8 (1999): 509-528.
    [6] Moss-Racusin, Corinne A., et al. “Science faculty’s subtle gender biases favor male students.” Proceedings of the National Academy of Sciences 109.41 (2012): 16474-16479.
    [7] Richards, Zoë, and Miles Hewstone. “Subtyping and subgrouping: Processes for the prevention and promotion of stereotype change.” Personality and Social Psychology Review 5.1 (2001): 52-73.
    [8] Barinaga, Marcia. “Surprises across the cultural divide.” Science 263.5152 (1994): 1468-1470.