{ diversity in cs }

  • What CS Departments Do Matters: Diversity and Enrolment Booms

    I’ve written before about the historical factors that have led to the decline in the percentage of women in CS. The two past enrolment booms (in the late 1980s and the dot-com era) both substantially decreased diversity in CS. During enrolment booms, CS departments favoured gatekeeping policies which cut off many “non-traditional” students; these policies also fostered a toxic, competitive learning environment for minority students.

    We’re in an enrolment boom right now, so I (along with many others) have been concerned that this boom will have a similarly negative effect on diversity.

    Last year I surveyed 78 CS profs and admins about what their departments were doing about the enrolment boom. We found that it was rare for CS departments to be considering diversity in the process of making policies to manage the enrolment boom.

    Furthermore, in a phenomenographic analysis of the open-ended responses, I found that increased class sizes have led many professors to feel that their teaching is less effective and that the larger classes are harming student culture. (This hasn’t been published yet, but hopefully soon!)

    Around the same time I put out my survey, CRA put out a survey of their own on the enrolment boom. Their report has just come out: they also found that few CS departments are considering diversity in their policy-making, and that the departments which have been considering diversity have better student diversity.

    From CRA’s report:

    The Relationships Between Unit Actions and Diversity Growth

    The CRA Enrollment Survey included several questions about the actions that units were taking in response to the surge. In this section, we highlight a few statistically significant correlations that relate growth in female and URM students to unit responses (actually, a composite of several different responses).

    1.    Units that explicitly chose actions to assist with diversity goals have a higher percentage of female and URM students. We observed significant positive correlations between units that chose actions to assist with diversity goals and the percentage of female majors in the unit for doctoral-granting units (per Taulbee 2015, r=.19, n=113, p<.05), and with the percent of women in the intro majors course at non-doctoral granting units (r=.43, n=22, p<.05). A similar correlation was found for URM students. Non-MSI doctoral-granting units showed a statistically significant correlation between units that chose actions to assist with diversity goals and the increase in the percentage of URM students from 2010 to 2015 in the intro for majors course (r=.47, n=36, p<.001) and mid-level course (r=.37, n=38, p<.05). Of course, units choosing actions to assist with diversity goals are probably making many other decisions with diversity goals in mind. Improved diversity does not come from a single action but from a series of them.

    2.    Units with an increase in minors have an increase in the percentage of female students in mid- and upper-level courses. We observed a positive correlation between female percentages in the mid- and upper-level course data and doctoral-granting units that have seen an increase in minors (mid-level course r=.35, n=51, p<.01; upper-level course r=.30, n=52, p<.05). We saw no statistically significant correlation with the increased number of minors in the URM student enrollment data. The CRA Enrollment Survey did not collect diversity information about minors. Thus, it is not possible to look more deeply into this finding from the collected data. Perhaps more women are minoring in computer science, which would then positively impact the percentage of women in mid- and upper-level courses. However, units that reported an increase in minors also have a higher percentage of women majors per Taulbee enrollment data (r=.31, n=95, p<.01). Thus, we can’t be sure of the relative contribution of women minors and majors to an increased percentage of women overall in the mid- and upper-level courses. In short, more research is needed to understand this finding.

    3.    Very few units specifically chose or rejected actions due to diversity. While many units (46.5%) stated they consider diversity impacts when choosing actions, very few (14.9%) chose actions to reduce impact on diversity and even fewer (11.4%) decided against possible actions out of concern for diversity. In addition, only one-third of units believe their existing diversity initiatives will compensate for any concerns with increasing enrollments, and only one-fifth of units are monitoring for diversity effects at transition points.
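
    As an aside, for readers less used to reading these statistics: a correlation like the ones above is straightforward to compute yourself. Here’s a minimal sketch using scipy, with entirely made-up unit-level numbers:

    ```python
    # Pearson's r between a hypothetical unit-level "diversity actions"
    # composite and the percentage of female majors. All numbers are made up.
    from scipy.stats import pearsonr

    diversity_action_score = [0, 1, 1, 2, 3, 3, 4, 5, 5, 6]
    percent_female_majors = [12, 15, 14, 18, 17, 22, 21, 25, 24, 28]

    r, p = pearsonr(diversity_action_score, percent_female_majors)
    print(f"r = {r:.2f}, n = {len(diversity_action_score)}, p = {p:.4f}")
    # Significance depends on both r and n: a modest r like the report's .19
    # clears p < .05 because it comes with an n of 113.
    ```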

    From a researcher’s perspective, this makes me happy: we used very different sampling approaches (they surveyed administrators; I surveyed professors in CS ed online communities), we used different analytical approaches (their quantitative vs. my qualitative), and we came to the same conclusion: CS departments aren’t considering diversity. This sort of triangulation doesn’t happen every day in the CS ed world.

    CRA’s report gives us further evidence that CS departments should be considering diversity in how they decide to handle enrolment booms (and admissions/undergrad policies in general). If diversity isn’t on policymakers’ radars, it won’t be factored into the decisions they make.

  • Categorizing Interventions: Adapting the USI Model to CS Education

    I’m interested in studying diversity initiatives in CS education – and in doing so I find it helpful to have a model of the different types of diversity initiatives used to recruit/retain women and other underrepresented groups in CS. But how can we come up with a useful model? This blog post is what I’ve come up with so far – where I started (explicit and implicit interventions) and where I’ve recently arrived (adapting the USI model of public health interventions to this context). It’s a work in progress and I’d love feedback.

    Explicit and Implicit Interventions

    First, I want to walk you through how I have mostly been thinking about diversity initiatives. I currently categorize them like so:

    • Explicit interventions: these target women (or other groups) and are explicit in their purpose. For example:
      • Departmental women-in-CS clubs at many universities
      • The Grace Hopper Celebration of Women in Computing and similar conferences
      • Mentorship programmes for women in CS, like CRA-W’s
      • Outreach initiatives like Gr8Girls and Girlsmarts
      • Grassroots bootcamps/workshops like Black Girls Code and Ladies Learning Code
      • Awards/scholarships/grants for women, like the Anita Borg Scholarship

      All of these are intended for women/girls, and the women/girls participating know the intervention is for women/girls.
    • Implicit interventions: these are stealthy – they are open to everybody and do not advertise the goal of supporting women in CS. Instead, these are approaches known to benefit women disproportionately (and which may also benefit dominant groups). For example:
      • A CS professor uses [pair-programming](http://dl.acm.org/citation.cfm?id=1060075) and [peer instruction](http://scitation.aip.org/content/aapt/journal/ajp/74/2/10.1119/1.2162549) in their class, and [randomly calls on students in a structured fashion](http://dl.acm.org/citation.cfm?doid=1060071.1060073) – all known to disproportionately benefit female students – but the professor does not tell the students that this is being done for the female students’ sake.
      • A CS professor has their students write a value-affirming essay as an assignment at the beginning of term – this is known to help women overcome stereotype threat in male-dominated disciplines.
      • A CS department provides a mentorship programme to all students.
      • A university mandates that all students take CS, and its CS department provides multiple, engaging versions of CS1 tailored to different students’ interests, à la Harvey Mudd.
      • A conference switches to using blind review of its submissions, which is known to disproportionately benefit women.

    The implicit interventions have a fairly different feel to them. For one thing, they tend not to help just women – they can also disproportionately help students of colour, students from low-SES backgrounds, LGBTQ+ students, etc. These interventions change the system, rather than give underrepresented groups like women a buffer in an unwelcoming system.

    The implicit interventions also move away from singling out minority groups as though it is us women who have the problem, and instead work from the assumption that it is the CS classroom/department/workplace/etc. that has the problem. (And btw, explicit interventions can cause stereotype threat: “we’re gonna help you because you’re a woman” is still reducing somebody to their gender.)

    Now, there are some limitations to this model of explicit vs. implicit interventions. What if a professor did pair-programming in class but said it was to benefit the women? I’m not sure what I’d do with that. My categories were also clearly thought up with group-level interventions in mind. In one of my committee meetings, Mark Guzdial asked me how one would categorize a CS professor spending extra time with a female student, encouraging her one-on-one. That doesn’t fit nicely into the current model either.

    The Universal-Selective-Indicated (USI) Model of Suicide Prevention

    Greg Wilson recently retweeted a fascinating report on suicide in Toronto, which I was looking through earlier today out of curiosity. This particular section caught my attention:

    _4.2 Preventive interventions

    A public health approach to suicide prevention includes both universal interventions in the whole population and interventions targeted to key risk groups. Rose’s Theorem makes the case for prioritizing universal interventions because a large number of people at small risk may give rise to more cases of disease than a small number who are at high risk.[106,107]

    A model for understanding prevention interventions is the Universal, Selective and Indicated (USI) model, which breaks the targeted interventions down into selective and indicated interventions. [105] The USI model is a comprehensive way of categorizing prevention efforts according to defined populations, and consists of:

    • Universal interventions designed to reach the whole population, without regard to population target groups or risk factors;
    • Selective interventions are designed to focus on groups who have been identified as at high risk for suicide-related behaviours; and
    • Indicated interventions are designed for individuals showing signs of suicide-related behaviours._

    This immediately made me think of my explicit/implicit intervention model. My ‘implicit interventions’ are universal interventions (though stealthy), ‘explicit interventions’ are selective interventions, and the one-on-one interventions that Mark asked me about would be indicated interventions. The conversion isn’t perfect: public health scientists think about ‘the population’ whereas in education we could be thinking about a classroom, a department, a programme, all the kids in grade 8 across a country, etc – our ‘population’ is flexible.

    The fascinating thing for me is the analysis of universal interventions – which the report lauds as more effective, citing meta-analyses that find universal interventions to be generally more effective than selective interventions.

    Digging through the public health literature on USI, this certainly seems to be a trend. Selective interventions can do good work, but universal interventions seem to go that extra mile.

    A USI Model for Diversity Initiatives in Education

    After spending my afternoon reading public health papers using the USI model, I’d say the way I now think about diversity initiatives takes this form:

    • Universal interventions are intended to affect a whole body of students (or professionals, etc), such as a whole classroom or a whole degree programme. Universal interventions change an educational system in a way that disproportionately benefits underrepresented groups, but also has a positive or neutral effect on dominant groups. Examples include:
      • A whole CS classroom using pair-programming
      • Blind review for conferences
      • Everybody has to take some CS
    • Selective interventions target a population known to be underrepresented in computer science (e.g. women, people of colour, low-SES students, etc), are offered specifically and explicitly to that group, and provide them with targeted support to ‘level the playing field’ with dominant groups in CS. Examples include:
      • Women-in-CS conferences/celebrations
      • Outreach events for girls
    • Indicated interventions are individual-level interventions – such as a teacher or professor taking the time to give extra encouragement to a student to study (or stay in) CS.

    All three types of interventions can have positive impacts on diversity in CS. One-on-one encouragement is, for example, a strong indicator of whether black students will take CS. And supportive communities like Grace Hopper can help women find a place in the CS community.

    But as in public health, universal interventions can make the widest changes at a population level, at a lesser cost. Not every student interested in CS can (or will) be reached by selective/indicated interventions. It would be infeasible to get CS outreach efforts to reach every single girl, kid of colour, kid of low SES, etc – “[w]e need [to get CS into] school in order to reach everyone.”

  • Bonuses and Software Projects

    At today’s CS Education Reading Group, one of our group members led us through an exercise about group work from “Students’ cooperation in teamwork: binding the individual and the team interests” by Orit Hazzan and Yael Dubinsky.

    It’s an in-class activity to get students thinking about how they work together in software projects. Students are given a scenario: you’ll be on a software team. If the project completes early, the team gets a bonus. How should the bonus be allocated?

    1. 100% of the bonus should be shared equally
    2. 80% should be for the team to share; 20% should go to the top contributor
    3. 50% team, 50% individual
    4. 20% team, 80% individual
    5. 0% team, 100% to the individual who contributed the most

    Everybody in the room got a minute to say which option we’d prefer and to write it down – and then we had a discussion about it. We then went through the rest of Orit’s paper and the variant scenarios.

    I was the sole person in the room arguing for 100% team. My reasoning was that individual bonuses are not effective rewards – and are often counterproductive.
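
    To make the stakes concrete, here’s a quick sketch of how the five options pay out; the bonus amount and team size are made up for illustration:

    ```python
    # How each bonus-allocation option pays out for a hypothetical $5000 bonus
    # and a five-person team. The "team share" is split equally among members;
    # the remainder goes to the top contributor on top of their equal cut.
    options = {
        "100% team": 1.0,
        "80% team, 20% individual": 0.8,
        "50% team, 50% individual": 0.5,
        "20% team, 80% individual": 0.2,
        "100% individual": 0.0,
    }
    bonus, team_size = 5000, 5
    for name, team_share in options.items():
        equal_cut = bonus * team_share / team_size
        top_cut = equal_cut + bonus * (1 - team_share)
        print(f"{name}: each member ${equal_cut:.0f}, top contributor ${top_cut:.0f}")
    ```

    Even the 80/20 split pays the top contributor more than double what each teammate gets, so the reward concentrates on one person very quickly as you move down the list.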

    Large Monetary Rewards are Counterproductive

    Ariely et al. found that larger monetary rewards actually reduce performance on cognitive, intellectual tasks (link). There’s almost half a century of psychology research arguing this.

    And this doesn’t just hold in laboratory studies – though outside the lab it’s not as simple. Pay-for-performance approaches in the white-collar workplace have repeatedly been found to be suboptimal at best.

    External motivators generally don’t help with cognitive tasks – internal motivation is what really drives us to do well on them.

    Bonuses and Justice

    Another problem with bonuses is fairness. Women and a number of other minorities are less likely to get them. They’re less likely to argue that they deserve them. Their contributions are more likely to be viewed as less important. And their work is perceived as less valuable.

    (On that note, tipping in restaurants and the like is known to amplify racial inequalities. The race of the server is a stronger predictor of gratuity size than the quality of the service.)

    Student Perceptions of Teamwork

    In Orit’s small classes, students opted for 80% team, 20% individual (see her 2003 ITiCSE paper). Why not 100%? One of the things that came up in our discussion was the question “but who is on my team?”

    For a lot of our discussants, team composition was the driving factor. Do you have a team you trust? Then 100% for the team, for sure. But what if you don’t know them? Or you don’t trust them?

    Katrina Falkner et al did a study on how CS students perceive collaborative activities, which they presented at last year’s SIGCSE. For a lot of students, collaboration stresses them out: they’re not used to it, they’re not experienced at it, and they’re not particularly good at it. But as educators, that’s what we’re here to work on, right?

    The biggest source of anxiety for students in Katrina’s study was who their partners would be. Would their partner(s) be smart, hardworking, and reliable?

    Team Composition

    It turns out randomized groups were the worst for students: they felt powerless over their performance. We know from other literature that randomized grouping is suboptimal for student performance. A much better way to form groups for performance is to group students by ability – strong students with fellow strong students.
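
    In code terms, ability-based grouping is just “sort, then chunk”. Here’s a minimal sketch; the students, scores, and group size are made up for illustration:

    ```python
    # A minimal sketch of ability-based grouping: sort students by some
    # measure of ability (here, a hypothetical prior-grade score), then chunk
    # the sorted list so students of similar ability land in the same group.
    def group_by_ability(students, group_size):
        """students: list of (name, score) pairs; returns a list of groups."""
        ranked = sorted(students, key=lambda s: s[1], reverse=True)
        return [ranked[i:i + group_size] for i in range(0, len(ranked), group_size)]

    students = [("Ada", 92), ("Grace", 88), ("Alan", 75),
                ("Edsger", 71), ("Barbara", 64), ("Donald", 60)]
    for group in group_by_ability(students, group_size=2):
        print([name for name, _ in group])
    # prints ['Ada', 'Grace'], then ['Alan', 'Edsger'], then ['Barbara', 'Donald']
    ```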

    On that note – it can be disastrous to pair strong students with weak students on the theory that the weak students will learn from the strong ones. It seeds/reinforces a lot of student fears about group work: strong students dislike it as they have to do a disproportionate share of the work; weak students learn less because their partner is doing the work for them.

    Moving on: the best way to form groups in terms of reducing student anxiety is often to let students pick their own groups. I say “often” because for a student who feels like the odd one out in the class, or who doesn’t know anybody, this can be just as stressful.

    Managing Short-Term Stress

    Stress is another thing worth talking about. Some people do great under pressure, and work better with the focus it gives them. And some people fall apart under stress, and work best without pressure. (And most of us are somewhere in between.)

    The good news is that how we interpret anxiety is fairly malleable, and in a good way:

    _The first experiment was at Harvard University with undergraduates who were studying for the Graduate Record Examination. Before taking a practice test, the students read a short note explaining that the study’s purpose was to examine the effects of stress on cognition. Half of the students, however, were also given a statement declaring that recent research suggests “people who feel anxious during a test might actually do better.” Therefore, if the students felt anxious during the practice test, it said, “you shouldn’t feel concerned. . . simply remind yourself that your arousal could be helping you do well.”

    Just reading this statement significantly improved students’ performance. They scored 50 points higher in the quantitative section (out of a possible 800) than the control group on the practice test._

    Getting that message out to students is something we ought to be doing – test anxiety hurts a lot of students, as does anxiety about group work. It doesn’t have to be so bad.