Binge drinking is traditionally defined as consuming 4+ drinks per occasion for females and 5+ for males, and it peaks in young adulthood: approximately 35-40% of 18- to 21-year-olds in the US report at least one episode of binge drinking in the past two weeks. This behavior contributes to a substantial proportion of alcohol-related deaths, including suicide, in addition to a host of other negative outcomes such as alcohol poisoning, unintentional injuries, vehicular accidents, and increased risk of developing alcohol use disorder. The probability of negative alcohol-related consequences greatly increases with more frequent binge drinking episodes and with larger quantities of alcohol consumed during a given drinking episode.
Importantly, a substantial proportion of young adults drink at levels far beyond the standard binge threshold, typically referred to as high-intensity drinking. For example, White and colleagues (2006) found that approximately 43% of college student drinkers reported drinking at twice the recommended binge cut-off in a single sitting. In a nationally representative sample of US high school seniors, Patrick & Terry-McElrath (2017) found that approximately 25% consumed 5+ alcoholic drinks, 10% consumed 10+ drinks, and 5% consumed 15+ drinks at least once in the last two weeks. Young adults who engage in high-intensity drinking are particularly vulnerable to severe alcohol-related harms, including blackouts and death, which is why identifying those at risk of engaging in high-intensity drinking, and understanding why these individuals are at increased risk, is a research priority.
Personality is one construct that has been identified as important in predicting unhealthy alcohol use. Specific personality traits, including neuroticism and extraversion, have been linked to binge drinking and may also help explain who is at increased risk of engaging in high-intensity drinking. However, we have found only two papers on this topic, and only one investigated neuroticism and extraversion (in a sample with a mean age of 50). Much of the research on predictors of high-intensity drinking has thus far focused on motivations or reasons for drinking. Several theories propose that drinking motives are the most proximal predictors of alcohol use, through which all other distal determinants (e.g., personality traits) operate, and many studies have shown that motives mediate the relationship between personality traits and alcohol consumption. Four drinking motives have been identified: social (positive-external; drinking to obtain/facilitate social gains), conformity (negative-external; drinking to feel included/avoid social rejection), enhancement (positive-internal; drinking to enhance a positive mood), and coping (negative-internal; drinking to avoid/regulate negative feelings).
A large body of research has accumulated showing that drinking motives predict alcohol use and alcohol-related consequences, but little research has investigated whether drinking motives predict high-intensity drinking. White and colleagues (2016) found that over six months, increases in social and enhancement motives were larger among college students who transitioned from non-binge drinking to high-intensity drinking. In addition, in a clinical sample of adolescents with alcohol-related problems, Creswell and colleagues (2020) found that maintaining relatively high endorsement of enhancement and social motives over time was associated with high-intensity drinking, and that decreases in coping motives were associated with less risky drinking in young adulthood. Taken together, drinking motives seem to be a promising avenue for better understanding the emergence of high-intensity drinking in young adults, but no prior studies have examined whether drinking motives mediate the link between personality traits and high-intensity drinking (which is what my research aims to do).
In my last post, I discussed my background in working with children with special needs and how it has led me to research how the COVID-19 pandemic has impacted their ability to get a supportive and effective education. Now, two months into the summer, I have made plenty of progress on the project, moving forward with my research questions and goals.
My major goal for this summer was to put together all of the materials needed for an IRB application. For this, I needed to have a firm grip of what my study was going to look like, including a detailed draft of my survey. To prepare for all of these application requirements, my first major step needed to be collecting background information.
To start, I spent a lot of time reading. I would sit for hours searching databases for relevant studies and prior research on special-needs education, pandemic-era education, and anything else relevant I could find. All of this reading was extremely independent and self-motivated, which often made it difficult to get through. Ultimately, the background reading I have done has given me a strong basis for writing, both for my survey and, ultimately, my final paper.
I have also been collecting background information in the form of consultations. With the help of Dr. Sharon Carver, I have been connected with a handful of volunteer consultants, including a Pittsburgh private school administrator, a Pittsburgh public school administrator, an administrator at a laboratory school in Toronto, and a parent of a child with special needs. In these Zoom conversations, I was able to ask about each person’s experience with special-needs education throughout the pandemic, gaining more insight into their personal experiences than a study or article could provide. While I will not be able to use these conversations directly in my data, these discussions allowed me to add more personal and anecdotal background to my survey, tailoring it to what each group of people may have experienced.
Looking forward, I am hopeful that I will soon gain IRB approval so that I can hit the ground running in the fall with conducting my survey. In the meantime, I plan on really utilizing the background knowledge I have to begin putting together the introduction to my paper and forming my main argument. I also plan on testing my survey with family and friends, making revisions where necessary, in order to understand what my data may look like.
Hello! I am Mallory Page, a rising senior majoring in Social and Political History and Japanese Studies. I am interested in studying The Woman’s Bible, Elizabeth Cady Stanton’s commentary on the Christian Bible. Stanton was highly critical of Christianity, believing that it contributed to women’s subjugation. However, not every suffragist shared Stanton’s feelings toward Christianity, and she ended up being essentially blacklisted from the National American Woman Suffrage Association, an organization that she helped form.
While there has been research on The Woman’s Bible, previous scholarship focused on Stanton’s own motivations for penning this work and her resulting ostracization from women’s rights movements. I am more interested in studying the public debate surrounding The Woman’s Bible. I am using The Woman’s Bible as an opportunity to explore the relationship between Christianity and American political life in the late 19th century, studying criticisms from both outside and inside the women’s movement. I will mainly be looking at comments on The Woman’s Bible in newspapers from the time.
I am interested in studying The Woman’s Bible because it merges two of my historical interests: women’s history and Christianity. As a Religious Studies minor, I am fascinated by how Christianity has been used both to oppress people and to inspire others to do charitable works. In this case, Christianity was used by both sides, to argue both for and against women’s suffrage.
Hello, again! It’s Renée, here with another update on my research project. To recap, my senior honors thesis project is creating a survey experiment to test media framing effects on how the American public perceives Chinese Americans. Last time I wrote something, I was introducing my project and telling everyone how I came up with the idea for it. Now, I will tell you what I have accomplished from then to now and the stuff that I still need to do.
Progress (What I Have Done So Far)
I have been doing A LOT of reading. I’ve found so many articles about media priming and how other studies measure attitudes toward an ethnicity or race. Basically, I’ve spent most of the summer doing a literature review. Yes, reading and taking notes on different articles doesn’t sound exciting; it sounds like something I would do on the regular during the school year. However, the literature review is important because it gives me ideas about how I want to design my experiment. Currently, I have confirmed that I want the dependent variable of my experiment to be “attitudes toward Chinese Americans,” measured by survey questions on a 6-point scale. I also have demographic questions drafted to control for fixed effects in my survey experiment.
Reading about how other researchers measure attitudes toward an ethnicity or race helps standardize how I will measure attitudes toward Chinese Americans from participants in my own study. I want the operationalization of attitudes toward Chinese Americans in my study to be consistent with the measurements of other researchers, so that I can be reassured of the validity of my study’s measurement of the experiment variables. I can even save some effort in designing the survey questions for my experiment by using survey questions from other studies. I have been reading survey questions from Colin Ho and Jay W. Jackson’s (2011) study of an Attitudes Toward Asians (ATA) survey to gain inspiration for my own survey questions. I might even use some of them.
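As a toy illustration of how responses on a 6-point scale might be aggregated into one attitude score, here is a small Python sketch. The item names, ratings, and the choice of reverse-coded item are invented for illustration; they are not from the ATA survey or my actual questionnaire.

```python
# Toy sketch (hypothetical items, not the ATA survey): scoring a 6-point
# attitude scale where some negatively worded items are reverse-coded so
# that higher scores always mean more positive attitudes.

def score_scale(responses, reverse_items, scale_max=6):
    """responses: dict mapping item id -> rating in 1..scale_max.
    reverse_items: set of item ids whose wording is negative."""
    total = 0
    for item, rating in responses.items():
        if item in reverse_items:
            rating = (scale_max + 1) - rating  # flip 1<->6, 2<->5, 3<->4
        total += rating
    return total / len(responses)  # mean item score

# q2 is a (made-up) negatively worded item, so a low rating counts as positive
example = {"q1": 5, "q2": 2, "q3": 6}
print(score_scale(example, reverse_items={"q2"}))
```

Reverse-coding before averaging is the standard way to keep all items pointing in the same direction, which matters for the validity checks mentioned above.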
My goal is to have my experiment design finished by the end of the summer so that I can submit it to the Ethics Review Board and start collecting data in the fall. So, after I feel like I have gained enough knowledge from the literature review, I will start crafting my survey experiment in Qualtrics and recording my experiment procedures.
I can’t believe it has been a month since I posted my Introduction blog post — when they say the summer flies, they really weren’t kidding… Back then, I was only starting to read relevant literature and work on the introduction section of my thesis.
Since then, I have:
-written the first section of my introduction on cultural values
-finalized my research questions and created my hypothesized model that connects my variables of interest
-started compiling my quantitative measures
Creating my model helped me figure out how all the pieces of my project fit together. I started by adding all the relevant variables I could think of to a blank slide and moving them into an order based on how I thought they were related. For my second revision, I thought about which of those variables were the most important for me to include, deleted the rest of them, and wrote out a justification for each arrow in my model. I’m now on my third and (hopefully) close to final version.
Sitting down and writing out not just how my variables were connected but why they were connected that way also really helped me clarify my research questions. When I first wrote my thesis proposal, I knew I wanted to look at both attitudes toward mental health difficulties and social support, but I was struggling to figure out how the two connected to each other. Through making my model, I realized two things: 1) I was interested in both social support preferences and social support behaviors, and 2) I was interested in how attitudes toward mental health difficulties and social support preferences affected support seeking and support provision for mental health difficulties. Therefore, my revised/finalized research questions are:
- How does culture influence attitudes toward mental health difficulties and social support preferences?
- How do attitudes toward mental health difficulties and social support preferences influence support seeking and support provision for mental health difficulties?
In the remaining month of the summer, I hope to:
-write and submit my IRB proposal for my project
-have a draft of my entire introduction
Tune in a few weeks from now for my next round of updates!
It’s been days since your last fix. You frantically search throughout the confines of your living space, desperately trying to find another hit. As your withdrawal symptoms finally kick in, you only wish that you could have one…last…retweet.
Welcome, readers, to the first instance of my summer blog series (brought to you through the CMU Senior Dietrich Honors Research Fellowship program): Tweet Addict! This blog will document the endeavors of myself, Dr. Simon DeDeo, and Dr. George Loewenstein (both of CMU’s Social and Decision Sciences department) as we investigate the mechanisms of addiction on Twitter, and how such models of addiction may generalize to the wider social media ecosystem as a whole. But first, introductions!
Part 1: Who am I?
My name is Zachary Novack, and I am a rising senior (evidently) in the Statistics and Data Science department here at CMU. I’ve bounced around a lot between labs on campus, briefly spending time in an auditory psychology lab as well as researching behavioral game theory, before finally settling where I am now, in the Laboratory for Social Minds. Broadly, I’ve been interested in statistics/ML applications to real-world policy domains, most notably within social networks. I’m also concurrently doing research on the noise properties of Stochastic Gradient Descent with the Approximately Correct Machine Intelligence (ACMI) Lab in the ML department at CMU, but that research is effectively separate from this current story (though I won’t rule out the idea of a cross-over episode just yet).
Part 2: The “What”s
Dr. DeDeo and I have been looking at quantitative inference in online social settings for about a year now (see Part 3), but this current project is our first shot at addiction (as a theoretical framework) and Twitter (as opposed to other social media sites). If you do a quick search with some permutation of the keywords “addiction” and “social media” on Google, you’ll quickly find two kinds of articles:
- Pseudo-scientific opinion pieces about Gen-Z’s “addiction” to social media, chock full of mentions about sinking “hundreds of hours into apps” and “endless scrolling” (a sort of buzz term for the continuous consumption of content without outsized interaction from the user).
- Journal papers from relevant Psych/Neuroscience journals that mostly discuss the above endless scrolling effect, and how responses to media interaction light up the same areas in the brain that drugs do.
The important factor that both these kinds of articles are missing is any mention of withdrawal. We as outside observers can say that social media is addicting because people spend a large amount of time on it and we feel like they shouldn’t, but such an outlook is only motivated by our biased inclinations about social media in the first place (we certainly wouldn’t say someone who reads one book a week is addicted to reading, but the only difference in these two scenarios is the societal opinion we hold about the addictive medium).
It’s this mechanism, withdrawal, which is the primary focus of the present research project:
- If there exists some body of people who are addicted to social media, and specifically addicted to the interactions and engagement on the medium itself, then what happens when people are starved of interaction and engagement?
- Do people change their patterns of posting and interacting when they see a drop in likes/followers/retweets?
- And can we quantify and predict such changes in behavior in order to keep at-risk individuals from sinking further into their addiction?
Two notable clarifications here. First, we differentiate between addiction to interaction/engagement and addiction purely to content (endless scrolling): while the latter is almost certainly a real phenomenon, there isn’t a sensible way to track its withdrawal symptoms, as any symptoms won’t be visible on the media app itself. Second, we pick Twitter as our social media site of focus because:
- Using Twitter’s API as academic researchers allows us to access every tweet ever made
- Twitter data is primarily text-based, affording much easier quantitative analysis than sites like Instagram (photos) or TikTok (video)
- Twitter, like Facebook, encourages small-to-moderate-scale interaction among social circles (here due to its basis in text and the conversational nature of short tweets) and is likely to have fewer endless-scrolling-type users than Instagram or TikTok, where users are incentivized to either post frequently and monetize their content or just consume content
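As a toy illustration of the kind of behavioral change the research questions above point at, here is a minimal before/after comparison of posting rates in Python. The daily counts, the one-week windows, and the idea of a single "drop date" are all made up for illustration; a real analysis would need proper statistical testing rather than a raw ratio.

```python
# Illustrative only: compare a user's mean daily posting rate before and
# after a hypothetical engagement drop. The counts below are invented.
before = [4, 5, 3, 6, 4, 5, 4]   # tweets/day in the week before the drop
after  = [7, 8, 6, 9, 7, 8, 7]   # tweets/day in the week after

def rate_ratio(before, after):
    """Ratio of mean daily rates; > 1 means posting increased after the drop."""
    return (sum(after) / len(after)) / (sum(before) / len(before))

print(round(rate_ratio(before, after), 2))
```

A sustained shift in this ratio across many users around engagement drops would be one (crude) observable signature of the withdrawal-driven behavior change we care about.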
Part 3: The “Why”s
Before this project, Dr. DeDeo and I spent about a year looking at the alt-right fringe community of 4chan (specifically, the Politically Incorrect /pol/ sub-forum), searching for patterns in how extremist ideologies may intersect on the website and across conversations there. Results were not particularly conclusive in any direction, and as a last-ditch effort on the project, we decided to focus specifically on patterns of antisemitic hate speech on the website. This happened to coincide exactly with the rise in antisemitic hate crimes this past spring, and as a Jew myself, I had a very deep investment in discovering patterns of hate against my own religion. But alas, our last-ditch effort fizzled out as well (there are a few somewhat interesting results from it, but they exist in research limbo for the time being).
Based on these setbacks, and seeing the reaction to the rise in antisemitism on social media websites, we began to wonder if we were tackling the issue too granularly. Instead of looking at what motivates people to post one form of hate speech over another, why not look at what motivates people to change their posting patterns in any sort of way? It is this desire for a general framework in which rational actors become irrational on social media that drove us to tackle the addiction question. If we are able to broadly categorize addictive behavior (whether it’s hate speech or influencer culture or academic arguments), we may be able to inform policy makers at both the corporate and governmental levels on how best to curtail these behaviors; we may even be able to help individual users track their own addiction patterns and notify them when they may be at risk of further addictive actions.
That’s it for this first blog of Tweet Addict! Next time, we’ll tackle some of the nitty gritty of dealing with the Twitter API, how best to parse tweet data, and some promising initial results.
See you all soon!
As described in my last post on the research background, our proposed study tries to understand whether the presence of learning science principles, and the different formats in which the principles are presented, will influence the quality of design products and the effectiveness of collaboration, and if so, in what ways.
To answer our research questions, our study will investigate the following three presentation formats of learning science principles in a digital card-based design support tool: prescriptive statements, guiding questions, and concrete examples.
For instance, a prescription of the “spacing principle”, which describes the benefit of leaving some time in between practice opportunities, may look like this:
“space practice across time > mass practice all at once” (Kenneth R. Koedinger, Julie L. Booth, and David Klahr. 2013. Instructional Complexity and the Science to Constrain It. Science 342, 6161: 935–937. https://doi.org/10.1126/science.1238056)
However, this may not be the most effective form of presentation in comparison to a guiding question like
“When do players get opportunities to practice skills in your game?”
or a concrete example like
“Duolingo encourages users to take a break in between lessons by providing incentives for streaks of practicing multiple days in a row.”
Specifically, we will conduct game ideation workshops with our design support tool, a web-based interface that presents learning science principles on flippable digital cards. Depending on their assigned condition, participants will be provided with a design support tool presenting principles in the corresponding format, while the control group will receive no tool (Fig. 1).
We will collect design pitches of game concepts, which will be evaluated in semi-structured interviews with experts in game design and education. We also plan to collect recordings and participants’ interaction data with the design support tool, which will be used to analyze the dynamics and effectiveness of the collaborative game design process.
The recordings will be qualitatively coded in 10-second windows to capture the primary collaboration activity in each window, such as explanation, question, concern, or idea suggestion. The interaction data (i.e., click patterns) will help us analyze activity during collaboration and highlight potentially different ways game designers and teachers use the design support tool, such as when and for how long participants view certain principles.
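As a sketch of how "when and for how long participants view certain principles" could be computed from timestamped click events, here is a toy Python example. The event format (timestamp, card id, action) and all the numbers are hypothetical, not our actual logging schema.

```python
# Hypothetical event-log format: (timestamp_seconds, card_id, action).
# A "view" starts when a card is opened and ends when it is closed.
events = [
    (0.0,  "spacing",  "open"),
    (12.5, "spacing",  "close"),
    (12.5, "feedback", "open"),
    (40.0, "feedback", "close"),
    (40.0, "spacing",  "open"),   # participant returns to the spacing card
    (55.0, "spacing",  "close"),
]

def view_durations(events):
    """Total seconds each card was open, summed across repeat views."""
    totals = {}
    open_since = {}
    for t, card, action in events:
        if action == "open":
            open_since[card] = t
        elif action == "close" and card in open_since:
            totals[card] = totals.get(card, 0.0) + (t - open_since.pop(card))
    return totals

print(view_durations(events))
```

Aggregating durations like this per participant and per card is one simple way to compare how the designer and teacher groups dwell on different principles.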
To implement click tracking, I’m teaching myself more about React, Google Analytics, and HTTP requests using LinkedIn Learning, a really good platform that offers numerous free online courses for CMU students.
I just submitted a work-in-progress paper to the 2021 CHI PLAY conference, so I hope good news will come soon. If so, I’ll definitely write more about it in my next post!
Hi there to those who read my first post and to anyone reading this one (as well as those who will in the future): my words miss you! Summer flies fast, and there is no going back in life, even if you unfortunately feel you have wasted it (not saying I did, but realizing this is still cool).
In the past month, I have endeavored to accomplish two things: (1) formulating an account of mental accounting that is conducive to testable hypotheses and the development of my thesis, and (2) replicating an innovative study that illustrates the possibility of invoking mental accounts on the spot (Reinholtz et al. 2015). In fact, I might have spent more time reading books on scientific explanation than doing actual research work (as most of the “work” was waiting for the study to be posted and doing some analysis, which was quickly done anyway once I had written my code).
After reading tons (literally) of papers on mental accounting, both theoretical and empirical, I can safely say that the term “mental accounting” has not received a uniform definition that singles out its epistemic status. In other words, what laws or explanations does “mental accounting” feature in? In what other theories is mental accounting entrenched? Economists and psychologists have wildly different opinions on this. Maybe we should consult the person who coined the phenomenon, Richard Thaler? The Nobel laureate would likely be too busy to answer our email (and either way, I do not want to bother him with a message like “hey, what do you think mental accounting really is”), but from reading his papers (along with the many others who have written reviews on the topic), I have the sense that “mental accounting” has been used figuratively and applied to a wide range of phenomena that can be explained by a metaphor: as if someone were setting up accounts in their mind, just as one does with bank accounts. This falls far short of what we want: a characterization of behavioral regularity that is both explained and explaining, which provides foundations for prediction (at least qualitatively) and is conducive to intervention.
I need to have something on my own to serve this goal, and so I turned to philosophers for help.
The books I have read include Alex Rosenberg’s Sociobiology and the Preemption of Social Science and Reduction and Mechanism, and James Woodward’s Making Things Happen. These polemical and yet incredibly smart philosophers (and social scientists) wrote extensively on the nature of scientific explanation, specifically for social subjects, such as human behavior, that have traditionally been considered too complex and not conducive to “scientific” investigation. Alex Rosenberg is a critic of rational choice theory (that is, the economic method of studying human behavior by using axiomatic formulations of utility functions under strict assumptions of economic rationality) on the grounds that preferences, beliefs, and choices are not “natural kinds” that can sustain scientific laws. In other words, because any given economic behavior (including mental accounting, of course) is multiply realized by many configurations of beliefs and preferences, we cannot reductively seek behavioral laws at the level of preferences and beliefs (Rosenberg 2019, 2020). This is frustrating, and what does it mean for our study? Well, we can give a mechanistic account of what behavior regularly happens in similar contexts and leads to qualitatively predictable outcomes; this is, after all, what behavioral scientists and marketing researchers have been doing all along. But there is something more to it: the sort of explanation that I hope my definition of mental accounting affords goes beyond simply pointing out a regularity; it also allows us to manipulate behaviors in ways that produce predictable results. In other words, mental accounting, in my use of the term, is causal because it is manipulation-enabling (Woodward 2003).
It might take forever for me to formalize my definition of mental accounting, but let me give a rough sketch of it:
Mental accounting is the set of behavioral tendencies that regularly shows up in consumers’ expenditure decisions, and it consists of two essential components: on one hand, preferences are cast or formed over products (the opportunity set) put into mental bundles formed according to principles of categorization; on the other hand, consumers make decisions under the constraint of a local and relevant income set instead of the global asset account. In this characterization, mental accounting is postulated to be both ubiquitous, in the sense that almost all consumption decisions involve it, and general, in the sense that it characterizes a general process of expenditure decision-making. The virtue of this characterization is that it engages extant research on mental accounting in constructive ways without implicitly taking the neoclassical paradigm for granted. Research on consumption bundling and mental accounting informs us about what principles underpin the former component, while studies of labeling, budgeting, and “mental accounting” in the narrow sense provide evidence on what counts as a “relevant” income set. To be sure, this is a very rough sketch, and we still know little about either of the two components of mental accounting, but for that very reason there is potentially fruitful future research to be carried out. Researchers in marketing and behavioral economics should find filling this gap both interesting and promising. The more we know about these two components, the more predictions and interventions become tenable.
What about my replication? Well, I have long thought that the result of Reinholtz et al. (2015) was not really “safe” given its marginal significance. The result of my replication seems to confirm this belief. Even though we obtained some results showing the same trend as the original study, we largely failed to find the strong effect the paper reports. This by no means disconfirms the published results, but it suggests the phenomenon is not as strong as we thought it would be. Fortunately, this failure does not impede my own design very much, and it serves as an example of failing to confirm one of the principles that we thought underpinned one of the components of mental accounting.
Works still in progress…..
The research: measuring transactivity in one-on-one negotiations
Transactivity: how people build upon each other’s ideas and reasoning
Coding: labeling a dataset on one or more metrics, e.g., 1 if a piece of dialogue is transactive and 0 if it isn’t
When your negotiation data comes in the form of audio recordings and you want to run any sort of repeated analyses on their textual contents, the inevitable first step is to transcribe each file. This is of course aided by modern transcription software, but accuracy varies based on accents, audio quality, and overlapping speech, so some manual labor is unavoidable. Thus, my first task was transcribing a small set of audio files so we would have enough data for initial analyses. This went relatively smoothly, leaving us with usable data.
Next comes the more daunting task of phase one: formulating a machine learning algorithm that can effectively code for transactivity when given a negotiation transcript. Specifically, we want a program that can determine 1) whether each line in a negotiation is transactive and 2) in which ways (e.g., active listening) it is transactive. But before we can outsource transactivity coding to machine learning, we first have to be able to reliably code the data by hand. For one, the algorithm will need training data to guide it. But more importantly, we need to be sure that the coding is consistent across coders and transcripts so we can trust any statistical analyses run on the coded data. This leaves us with the current task: develop a coding scheme that has sufficient inter-rater reliability.
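To make concrete what "coding each line" means, here is a deliberately naive keyword-cue baseline in Python. The cue phrases are invented for illustration; they are not our actual coding scheme, and the eventual ML approach would learn from hand-coded training data rather than a fixed word list.

```python
# Toy baseline, NOT the actual coding scheme: flag a line as transactive (1)
# if it contains one of a few invented cue phrases suggesting the speaker is
# building on or engaging with the other party's reasoning, else 0.
CUES = ("as you said", "building on", "your point", "you mentioned", "i disagree")

def code_line(line):
    text = line.lower()
    return 1 if any(cue in text for cue in CUES) else 0

transcript = [
    "I think the salary should be higher.",
    "Building on your offer, what if we add a signing bonus?",
    "As you said, relocation matters, so let's fold that in.",
]
print([code_line(line) for line in transcript])  # → [0, 1, 1]
```

A rule this crude would miss most real transactivity, which is exactly why reliable human coding comes first: it supplies both the training labels and the standard any automated coder must be measured against.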
After trying out various coding schemes that included labeling each transactive line with its transactive referents, its exact type of transactivity, and its transactivity level, we came to the realization that ambiguity arose from the sheer number of metrics coupled with the concision of the data. Previous research had looked at discussion board posts, which were both a step more formal and had more content per post compared to a line in a negotiation dialogue. Conversational dialogue with its free-flowing, immediate-response nature would require a different type of coding scheme. With this in mind, we decided to start simple—to just code whether each line is transactive in the present conversation context. To our delight, our first test batch had at least moderate inter-rater reliability (κ=0.52), and after minor revisions, we are now testing a second batch with tentative optimism. If it yields a high reliability score, we can then look into adding other metrics or even doing an initial algorithm run.
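For readers unfamiliar with the κ statistic above: it is Cohen's kappa, which corrects raw rater agreement for the agreement expected by chance. A minimal Python sketch with made-up binary codes (not our actual data):

```python
# Minimal Cohen's kappa for two raters' binary codes (ratings are invented):
# kappa = (observed agreement - chance agreement) / (1 - chance agreement).
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    labels = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in labels)                                # chance agreement
    return (p_o - p_e) / (1 - p_e)

a = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]   # coder 1's transactivity labels
b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]   # coder 2's labels for the same lines
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

Because κ discounts chance agreement, two coders who agree 80% of the time on mostly balanced labels (as here) land well below 0.8, which is why a κ of 0.52 counts only as moderate reliability.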
Effective communication is important, and this is especially true in negotiations, from negotiating a better deal with your hiring manager to delineating who should do each household chore. And one way to measure communication effectiveness is by looking at transactivity, or how people build upon each other’s ideas and reasoning, either expanding or even refuting points that are brought up. My research looks at transactivity in one-on-one negotiations.
Specifically, I’m looking at how power dynamics (for example, between a recruiter and a job candidate) and how prompting a collaborative orientation (as opposed to a competitive one) can both influence transactivity levels in negotiations, as well as their subsequent outcomes. My research is separated into two phases. In the first phase, we are working on formulating a machine learning algorithm that can effectively code for transactivity when given a negotiation transcript. In other words, we’re trying to outsource measuring transactivity levels to the computer. In phase two, we plan on running laboratory-style experiments in order to control the independent variables in question. We can then feed the data into our algorithm from phase one and see if our hypotheses are supported.
As for the people involved, I am working directly with Ki-Won Haan and Professor Anita Woolley from the Tepper School of Business as well as the extended collective intelligence research team. They have been tremendously helpful in setting up the framework for the project and introducing me to relevant papers in the field. I have also had the pleasure to work with collaborators from the School of Computer Science to brainstorm potential machine learning algorithms that could work with our data. As for myself, I’m a rising senior studying Policy & Management and Chemistry, and this fellowship enables me to explore my interest in negotiation research while (hopefully) contributing something novel to the field.
How did I get here?
I have been fascinated by negotiations and negotiating since before I even knew what the term meant. I loved the thrill of creatively refuting contentions in formal debate and the thoughtfulness required for trading resources in board games. This interest evolved into undergraduate courses, and I was fortunate to have Professor Maria Tomprou, both as a wonderful instructor and as the key person who introduced me to my current research team after our after-class discussions of research interests. I then worked with Professor Woolley and Ki-Won to integrate myself into the nascent project and contribute my own set of hypotheses (power dynamics / collaborative orientation) and a corresponding experimental structure.
Extant work shows that more transactive discussions can promote collaborative learning, create breakthrough solutions, and show better performance outcomes. However, transactivity studies have focused on education and learning science rather than negotiations. Likewise, negotiation studies show that power differences and collaborative approaches can affect outcomes, but little has been said about the interaction between the two. Thus, we address research gaps both broadly—transactivity in negotiations—and specifically—power and approach interactions. I hope this work will increase our understanding of how communication factors into negotiation outcomes.