Assignment 3

The experimental method is a research procedure that manipulates variables to identify cause-and-effect relationships, relying on controlled methods and random assignment of participants. This method has historical roots in the late 19th century with pioneers like Wilhelm Wundt, and it encompasses various types of experiments, including lab, field, and natural experiments, each with its strengths and limitations. Key terminology includes independent and dependent variables, hypotheses, and extraneous variables, which are crucial for understanding and conducting experiments in psychology.

Introduction

The experimental method is a type of research procedure that involves manipulating variables
to determine if there is a cause-and-effect relationship. The results obtained through the
experimental method are useful but do not prove with 100% certainty that a singular cause
always creates a specific effect.1 Instead, they show the probability that a cause will or will
not lead to a particular effect.
At a Glance
While there are many different research techniques available, the experimental method allows
researchers to look at cause-and-effect relationships. Using the experimental method,
researchers randomly assign participants to a control or experimental group and manipulate
levels of an independent variable. If changes in the independent variable lead to changes in
the dependent variable, it indicates there is likely a causal relationship between them.
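As an illustration of the random-assignment step described above, here is a minimal sketch in Python. The participant labels, group names, and two-level independent variable are hypothetical, not taken from any particular study:

```python
import random

# Hypothetical participant pool; labels and group names are illustrative only.
participants = [f"P{i:02d}" for i in range(1, 21)]

random.seed(42)            # fixed seed so the allocation can be reproduced
random.shuffle(participants)

half = len(participants) // 2
groups = {
    "control": participants[:half],        # independent variable not manipulated
    "experimental": participants[half:],   # independent variable manipulated
}

for name, members in groups.items():
    print(name, members)
```

Because the shuffle, not the researcher, decides who ends up in which group, differences between the groups on the dependent variable can more plausibly be attributed to the manipulation.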
What Is the Experimental Method in Psychology?
The experimental method involves manipulating one variable to determine if this causes
changes in another variable. This method relies on controlled research methods and random
assignment of study subjects to test a hypothesis.
For example, researchers may want to learn how different visual patterns may impact our
perception. Or they might wonder whether certain actions can improve memory. Experiments
are conducted on many behavioral topics, including:2

 Attention
 Cognition
 Emotion
 Memory
 Perception
 Sensation

The scientific method forms the basis of the experimental method.3 This is a process used to
determine the relationship between two variables—in this case, to explain human behavior.

Positivism is also important in the experimental method. It refers to factual knowledge that is
obtained through observation, which is considered to be trustworthy.

When using the experimental method, researchers first identify and define key variables.
Then they formulate a hypothesis, manipulate the variables, and collect data on the results.
Unrelated or irrelevant variables are carefully controlled to minimize the potential impact on
the experiment outcome.

History of the Experimental Method


The idea of using experiments to better understand human psychology began toward the end
of the nineteenth century. Wilhelm Wundt established the first formal psychology laboratory in 1879.4

Wundt is often called the father of experimental psychology. He believed that experiments
could help explain how psychology works, and used this approach to study consciousness.
Wundt coined the term "physiological psychology."5 This is a hybrid of physiology and
psychology, or how the body affects the brain.
Other early contributors to the development and evolution of experimental psychology as we
know it today include:

 Gustav Fechner (1801-1887), who helped develop procedures for measuring sensations according to the size of the stimulus6
 Hermann von Helmholtz (1821-1894), who analyzed philosophical assumptions through research in an attempt to arrive at scientific conclusions7
 Franz Brentano (1838-1917), who called for a combination of first-person and third-person research methods when studying psychology8
 Georg Elias Müller (1850-1934), who performed an early experiment on attitude which involved the sensory discrimination of weights and revealed how anticipation can affect this discrimination9

Key Terms to Know

To understand how the experimental method works, it is important to know some key terms.

Dependent Variable
The dependent variable is the effect that the experimenter is measuring. If a researcher was
investigating how sleep influences test scores, for example, the test scores would be the
dependent variable.

Independent Variable
The independent variable is the variable that the experimenter manipulates. In the previous
example, the amount of sleep an individual gets would be the independent variable.
Hypothesis
A hypothesis is a tentative statement or a guess about the possible relationship between two
or more variables. In looking at how sleep influences test scores, the researcher might
hypothesize that people who get more sleep will perform better on a math test the following
day. The purpose of the experiment, then, is to either support or reject this hypothesis.
Operational definitions are necessary when performing an experiment. When we say that
something is an independent or dependent variable, we must have a very clear and specific
definition of the meaning and scope of that variable.
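To make the sleep-and-test-scores example concrete, here is a minimal, hypothetical sketch of how the resulting data might be compared. The scores are invented, and an independent-samples t-test is only one of several analyses a researcher might choose:

```python
from scipy import stats

# Hypothetical, made-up data for the sleep-and-test-scores example.
# Assumed operational definitions: "sleep" = hours slept the night before,
# "test score" = points (0-100) on the same math test for every participant.
scores_more_sleep = [78, 85, 91, 74, 88, 82, 90, 79]   # slept 8+ hours
scores_less_sleep = [70, 65, 80, 72, 68, 74, 77, 66]   # slept 5 hours or less

# Independent-samples t-test: does mean performance differ between the groups?
t_stat, p_value = stats.ttest_ind(scores_more_sleep, scores_less_sleep)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value would support (not prove) the hypothesis that more sleep
# is associated with better test performance in this experiment.
```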
Extraneous Variables

Extraneous variables are other variables that may also affect the outcome of an experiment.
Types of extraneous variables include participant variables, situational variables, demand
characteristics, and experimenter effects. In some cases, researchers can take steps to control
for extraneous variables.

Demand Characteristics
Demand characteristics are subtle hints that indicate what an experimenter is hoping to find in
a psychology experiment. This can sometimes cause participants to alter their behavior,
which can affect the results of the experiment.10
Intervening Variables

Intervening variables are factors that can affect the relationship between two other variables.

Confounding Variables
Confounding variables are variables that can affect the dependent variable, but that
experimenters cannot control for. Confounding variables can make it difficult to determine if
the effect was due to changes in the independent variable or if the confounding variable may
have played a role.
The Experimental Process
Psychologists, like other scientists, use the scientific method when conducting an experiment.
The scientific method is a set of procedures and principles that guide how scientists develop
research questions, collect data, and come to conclusions.

The five basic steps of the experimental process are:1

1. Identifying a problem to study
2. Devising the research protocol
3. Conducting the experiment
4. Analyzing the data collected
5. Sharing the findings (usually in writing or via presentation)

Most psychology students are expected to use the experimental method at some point in their
academic careers. Learning how to conduct an experiment is important to understanding how
psychologists prove and disprove theories in this field.
Types of Experiments

There are a few different types of experiments that researchers might use when studying
psychology. Each has pros and cons depending on the participants being studied, the
hypothesis, and the resources available to conduct the research.

Lab Experiments

Lab experiments are common in psychology because they allow experimenters more control
over the variables.11 These experiments can also be easier for other researchers to replicate.
The drawback of this research type is that what takes place in a lab is not always what takes
place in the real world.

Field Experiments
Sometimes researchers opt to conduct their experiments in the field. For example, a social
psychologist interested in researching prosocial behavior might have a person pretend to faint
and observe how long it takes onlookers to respond.

This type of experiment can be a great way to see behavioral responses in realistic settings.
But it is more difficult for researchers to control the many variables existing in these settings
that could potentially influence the experiment's results.

Quasi-Experiments

While lab experiments are known as true experiments, researchers can also utilize a quasi-
experiment. Quasi-experiments are often referred to as natural experiments because the
researchers do not have true control over the independent variable.

A researcher looking at personality differences and birth order, for example, is not able to
manipulate the independent variable in the situation (birth order). Participants also
cannot be randomly assigned because they naturally fall into pre-existing groups based on
their birth order.

So why would a researcher use a quasi-experiment? This is a good choice in situations where
scientists are interested in studying phenomena in natural, real-world settings. It's also
beneficial if there are limits on research funds or time.12

Field experiments can be either quasi-experiments or true experiments.


Examples of the Experimental Method in Use

The experimental method can provide insight into human thoughts and behaviors.
Researchers use experiments to study many aspects of psychology.

Attention
A 2019 study investigated whether splitting attention between electronic devices and
classroom lectures had an effect on college students' learning abilities. It found that dividing
attention between these two mediums did not affect lecture comprehension. However, it did
impact long-term retention of the lecture information, which affected students' exam
performance.13
Cognition
An experiment used participants' eye movements and electroencephalogram (EEG) data to
better understand cognitive processing differences between experts and novices. It found that
experts had higher power in their theta brain waves than novices, suggesting that they also
had a higher cognitive load.14
Emotion
A study looked at whether chatting online with a computer via a chatbot changed the positive
effects of emotional disclosure often received when talking with an actual human. It found
that the effects were the same in both cases.15
Memory
One experimental study evaluated whether exercise timing impacts information recall. It
found that engaging in exercise prior to performing a memory task helped improve
participants' short-term memory abilities.16
Perception

Sometimes researchers use the experimental method to get a bigger-picture view of
psychological behaviors and impacts. For example, one 2018 study examined several lab
experiments to learn more about the impact of various environmental factors on building
occupant perceptions.17

Sensation
A 2020 study set out to determine the role that sensation-seeking plays in political violence.
This research found that sensation-seeking individuals have a higher propensity for engaging
in political violence. It also found that providing access to a more peaceful, yet still exciting
political group helps reduce this effect.18
Potential Pitfalls of the Experimental Method

While the experimental method can be a valuable tool for learning more about psychology
and its impacts, it also comes with a few pitfalls.1

Experiments may produce artificial results, which are difficult to apply to real-world
situations. Similarly, researcher bias can impact the data collected. Results may not be
reproducible, meaning they have low reliability.

Since humans are unpredictable and their behavior can be subjective, it can be hard to
measure responses in an experiment. In addition, political pressure may alter the results. The
subjects may not be a good representation of the population, or groups used may not be
comparable.

And finally, since researchers are human too, results may be degraded due to human error.

The experimental method involves the manipulation of variables to establish cause-and-effect
relationships. The key features are controlled methods and the random allocation of
participants into control and experimental groups.

What Is An Experiment?

An experiment is an investigation in which a hypothesis is scientifically tested. An
independent variable (the cause) is manipulated in an experiment, and the dependent variable
(the effect) is measured; any extraneous variables are controlled.

An advantage is that experiments should be objective. The researcher’s views and opinions
should not affect a study’s results. This is good as it makes the data more valid and less
biased.

Types

There are three types of experiments you need to know:


1. Lab Experiment

A laboratory experiment in psychology is a research method in which the experimenter
manipulates one or more independent variables and measures the effects on the dependent
variable under controlled conditions.

A laboratory experiment is conducted under highly controlled conditions (not necessarily a
laboratory) where accurate measurements are possible.

The researcher uses a standardized procedure to determine where the experiment will take
place, at what time, with which participants, and in what circumstances.

Participants are randomly allocated to each independent variable group.

Examples are Milgram’s experiment on obedience and Loftus and Palmer’s car crash study.

 Strength: It is easier to replicate (i.e., copy) a laboratory experiment. This is because a
standardized procedure is used.

 Strength: They allow for precise control of extraneous and independent variables. This
allows a cause-and-effect relationship to be established.

 Limitation: The artificiality of the setting may produce unnatural behavior that does not
reflect real life, i.e., low ecological validity. This means it would not be possible to
generalize the findings to a real-life setting.

 Limitation: Demand characteristics or experimenter effects may bias the results and
become confounding variables.

2. Field Experiment

A field experiment is a research method in psychology that takes place in a natural, real-
world setting. It is similar to a laboratory experiment in that the experimenter manipulates
one or more independent variables and measures the effects on the dependent variable.

However, in a field experiment, the participants are unaware they are being studied, and the
experimenter has less control over the extraneous variables.

Field experiments are often used to study social phenomena, such as altruism, obedience, and
persuasion. They are also used to test the effectiveness of interventions in real-world settings,
such as educational programs and public health campaigns.
An example is Hofling’s hospital study on obedience.

 Strength: behavior in a field experiment is more likely to reflect real life because of its
natural setting, i.e., higher ecological validity than a lab experiment.

 Strength: Demand characteristics are less likely to affect the results, as participants may
not know they are being studied. This occurs when the study is covert.

 Limitation: There is less control over extraneous variables that might bias the results. This
makes it difficult for another researcher to replicate the study in exactly the same way.

3. Natural Experiment

A natural experiment in psychology is a research method in which the experimenter observes
the effects of a naturally occurring event or situation on the dependent variable without
manipulating any variables.

Natural experiments are conducted in the everyday (i.e., real-life) environment of the
participants, but here, the experimenter has no control over the independent variable as it
occurs naturally in real life.

Natural experiments are often used to study psychological phenomena that would be difficult
or unethical to study in a laboratory setting, such as the effects of natural disasters, policy
changes, or social movements.

For example, Hodges and Tizard’s attachment research (1989) compared the long-term
development of children who have been adopted, fostered, or returned to their mothers with a
control group of children who had spent all their lives in their biological families.

Here is a fictional example of a natural experiment in psychology:

Researchers might compare academic achievement rates among students born before and
after a major policy change that increased funding for education.

In this case, the independent variable is the timing of the policy change, and the dependent
variable is academic achievement. The researchers would not be able to manipulate the
independent variable, but they could observe its effects on the dependent variable.
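Continuing this fictional policy-change example, a minimal sketch of the comparison a researcher might run is shown below; the records and scores are invented purely for illustration:

```python
# Hypothetical records for the fictional policy-change example above.
# Each tuple: (born_before_policy_change, achievement_score); values are invented.
students = [
    (True, 61), (True, 58), (True, 64), (True, 60),
    (False, 67), (False, 70), (False, 66), (False, 72),
]

before = [score for born_before, score in students if born_before]
after = [score for born_before, score in students if not born_before]

# The researcher cannot assign students to cohorts; they can only compare
# the naturally occurring groups on the dependent variable.
print("mean achievement, born before policy:", sum(before) / len(before))
print("mean achievement, born after policy:", sum(after) / len(after))
```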

 Strength: behavior in a natural experiment is more likely to reflect real life because of its
natural setting, i.e., very high ecological validity.

 Strength: Demand characteristics are less likely to affect the results, as participants may
not know they are being studied.

 Strength: It can be used in situations in which it would be ethically unacceptable to
manipulate the independent variable, e.g., researching stress.

 Limitation: They may be more expensive and time-consuming than lab experiments.

 Limitation: There is no control over extraneous variables that might bias the results. This
makes it difficult for another researcher to replicate the study in exactly the same way.

Key Terminology

Ecological validity

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through
their appearance or behavior.

Demand characteristics

The clues in an experiment that lead the participants to think they know what the researcher is
looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes), which is assumed to have a direct effect
on the dependent variable.

Dependent variable (DV)

Variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables which are not independent variables but could affect the results (DV) of the
experiment. EVs should be controlled where possible.
Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable
could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all
participants should have an equal chance of participating in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and
limit the effects of participant variables.

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than
once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example,
because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because
of boredom or tiredness.
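One common way to limit order effects, not described above, is to vary the order in which each participant completes the conditions. A minimal, hypothetical sketch:

```python
import random

# Sketch of randomizing condition order per participant (an assumption of this
# example, not a procedure from the text): practice and fatigue effects are then
# spread across conditions rather than always falling on the same one.
conditions = ["condition_A", "condition_B"]   # hypothetical condition labels

random.seed(1)
for participant in ["P01", "P02", "P03", "P04"]:
    order = random.sample(conditions, k=len(conditions))
    print(participant, "->", order)
```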

Life and its secrets can only be proven right or wrong with experimentation. You can speculate and theorize all you wish, but as William Blake once said, “The true method of knowledge is experiment.”

It may be a long and time-consuming process, but it is rewarding like no other. And there are multiple ways and methods of experimentation that can help shed light on matters. In this article, we explain the definition, types of experimental research, and some experimental research examples. Let us get started with the definition!


What is experimental research?

Experimental research is the process of carrying out a study with a scientific approach using two or more variables. In other words, it is when you gather two or more variables and compare and test them in controlled environments.

With experimental research, researchers can also collect detailed information about the participants by doing pre-tests and post-tests to learn even more about the process. With the results of this type of study, the researcher can make informed decisions.

The more control the researcher has over the internal and extraneous variables, the better it is for the results. There may be circumstances in which a balanced experiment is not possible to conduct. That is why there are different research designs to accommodate the needs of researchers.

3 Types of experimental research designs

Experimental research designs differ from one another in a few key ways: whether or not pre-tests and post-tests are done, and how the participants are divided into groups. These differences decide which experimental research design is used.

1 - Pre-experimental design

This is the most basic method of experimental study. The researcher doing pre-experimental research evaluates a group of dependent variables after changing the independent variables. The results of this design are not conclusive on their own, and future studies are planned accordingly. Pre-experimental research can be divided into three types:

A. One shot case study research design

Only a single group is considered in the one-shot case study design. The group is measured only after the treatment (in the post-test part of a study), and the aim is to observe the effect of the independent variable.

B. One group pre-test post-test research design

In this type of research, a single group is given a pre-test before a study is conducted and a post-test after the study is conducted. The aim of this one-group pre-test post-test research design is to combine and compare the data collected during these tests.
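A minimal sketch of that "combine and compare" step might look like the following; the scores are invented, and a paired t-test is just one reasonable way to compare each participant's pre-test and post-test results:

```python
from scipy import stats

# Hypothetical pre-test and post-test scores for one group (values invented).
pre = [54, 61, 48, 59, 65, 50, 57, 62]
post = [60, 66, 55, 58, 71, 57, 63, 70]

# Paired t-test: compares each participant's post-test score with their own
# pre-test score, i.e., the combine-and-compare step described above.
t_stat, p_value = stats.ttest_rel(pre, post)
mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(f"mean change = {mean_change:.1f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```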

C. Static-group comparison

In a static-group comparison, two or more groups are included in a study, where only one group of participants is subjected to a new treatment and the other group is held static. After the study is done, both groups take a post-test evaluation, and the differences between the groups are taken as the result.
2 - Quasi-experimental design

This research type is quite similar to the true experimental design; however, it differs in a few aspects. Quasi-experimental research is done when experimentation is needed for accurate data, but a true experiment is not possible because of some limitations. Because you cannot deliberately deprive someone of medical treatment or cause someone harm, some experiments are ethically impossible. In this experimentation method, the researcher can only manipulate some variables. There are three types of quasi-experimental design:

A. Nonequivalent group designs

A nonequivalent group design is used when participants cannot be divided equally and randomly, for example for ethical reasons. Because of this, the groups may differ on more than one variable, unlike in true experimental research.

B. Regression discontinuity

In this type of research design, the researcher does not divide a group in two to make a study; instead, they make use of a natural threshold or pre-existing dividing point. Only participants below or above the threshold get the treatment, and because participants just on either side of the divide are very similar to one another, any difference in their outcomes can be attributed to the treatment.
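Here is a minimal, hypothetical sketch of how treatment is assigned by a cutoff in this design; the threshold, entry scores, and outcomes are all invented:

```python
# Hypothetical regression-discontinuity sketch: treatment is assigned purely by a
# pre-existing cutoff (here, a score of 70 on an entrance test), and outcomes of
# participants just below and just above the cutoff are compared.
CUTOFF = 70
applicants = [(64, 51), (68, 55), (69, 54), (71, 62), (73, 64), (76, 66)]  # (entry_score, outcome)

treated = [outcome for score, outcome in applicants if score >= CUTOFF]
untreated = [outcome for score, outcome in applicants if score < CUTOFF]

print("mean outcome, at or above cutoff (treated):", sum(treated) / len(treated))
print("mean outcome, below cutoff (untreated):   ", sum(untreated) / len(untreated))
```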

C. Natural Experiments

In natural experiments, control and study groups are formed by naturally occurring, rather than researcher-controlled, assignment of participants, and the groups exist in natural scenarios. For this reason, natural experiments do not qualify as true experiments, as they are based on observation.

3 - True experimental design

In true experimental research, the variables, groups, and settings should match the textbook definition of an experiment. Participants are divided into groups randomly, and controlled variables are chosen carefully. Every aspect of a true experiment should be carefully designed and acted out, and only the results of a true experiment can really be considered fully accurate. A true experimental design can be divided into 3 parts:

A. Post-test only control group design

In this experimental design, the participants are divided into two groups randomly. They are called experimental and control groups. Only the experimental group gets the treatment, while the other one does not. After the experiment and observation, both groups are given a post-test, and a conclusion is drawn from the results.

B. Pre-test post-test control group

In this method, the participants are divided into two groups once again. Also, only the experimental group gets the treatment. And this time, they are given both pre-tests and post-tests with multiple research methods. Thanks to these multiple tests, the researchers can make sure the changes in the experimental group are directly related to the treatment.

C. Solomon four-group design

This is the most comprehensive method of experimentation. The participants are randomly divided into 4 groups. These four groups cover all the combinations of treatment versus control and pre-test-plus-post-test versus post-test only, which lets researchers check whether taking a pre-test itself affects the results. This method enhances the quality of the data.
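A minimal sketch of how participants might be allocated across the four cells of a Solomon design; the participant labels and group sizes are invented:

```python
import random

# Hypothetical Solomon four-group assignment: participants are randomly split into
# the four cells formed by crossing treatment (yes/no) with pre-test (yes/no).
participants = [f"P{i:02d}" for i in range(1, 17)]
random.seed(7)
random.shuffle(participants)

cells = {
    "treatment + pre-test + post-test": participants[0:4],
    "treatment + post-test only":       participants[4:8],
    "control + pre-test + post-test":   participants[8:12],
    "control + post-test only":         participants[12:16],
}
for cell, members in cells.items():
    print(cell, members)
```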


Advantages and disadvantages of experimental research

Just as with any other study, experimental research also has its positive and negative sides. It is up to the researchers to be mindful of these facts before starting their studies. Let us see some advantages and disadvantages of experimental research:


Advantages of experimental research:

 All the variables are in the researchers’ control, and that means the researcher can influence the experiment according to the research question’s requirements.
 As you can easily control the variables in the experiment, you can specify the results as much as possible.
 The results of the study identify a cause-and-effect relation.
 The results can be as specific as the researcher wants.
 The result of an experimental design opens the doors for future related studies.

Disadvantages of experimental research:

 Completing an experiment may take years or even decades, so the results will not be as immediate as with some of the other research types.
 As it involves many steps, participants, and researchers, it may be too expensive for some groups.
 The possibility of researchers making mistakes and having a bias is high. It is important to stay impartial.
 Human behavior and responses can be difficult to measure unless the study is specifically designed as experimental research in psychology.

Examples of experimental research

When one does experimental research, that experiment can be about anything. As the variables and environments can be controlled by the researcher, it is possible to have experiments about pretty much any subject. What makes experiments especially valuable is the critical insight they give into the cause-and-effect relationships between various elements. Now let us see some important examples of experimental research:

An example of experimental research in science:

When scientists make new medicines or come up with a new type of treatment, they have to test them thoroughly to make sure the results will be consistent and effective for every individual. In order to make sure of this, they can test the medicine on different people or creatures, in different dosages, and at different frequencies. They can double-check all the results and end up with crystal-clear conclusions.

An example of experimental research in marketing:

The ideal goal of a marketing product, advertisement, or campaign is to attract attention and create positive emotions in the target audience. Marketers can focus on different elements in different campaigns, change the packaging/outline, and have a different approach. Only then can they be sure about the effectiveness of their approaches. Some methods they can work with are A/B testing, online surveys, or focus groups.
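As a rough illustration of the A/B-testing approach mentioned above, here is a minimal sketch comparing two ad variants; all figures are invented:

```python
# Hypothetical A/B-test sketch: two ad variants are shown to randomly split
# audiences and their click-through counts are compared. Figures are invented.
shown_a, clicks_a = 1000, 118
shown_b, clicks_b = 1000, 142

rate_a = clicks_a / shown_a
rate_b = clicks_b / shown_b
print(f"variant A: {rate_a:.1%}  variant B: {rate_b:.1%}  difference: {rate_b - rate_a:+.1%}")
# A formal analysis would also test whether this difference is larger than what
# random variation alone could produce (e.g., with a two-proportion z-test).
```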

Scale in 53 nations: Exploring the universal and culture-specific features of global self-
esteem. Journal of Personality and Social Psychology, 89, 623–642.

Buss, D. M., & Schmitt, D. P. (1993). Sexual strategies theory: A contextual evolutionary
analysis of human mating. Psychological Review, 100, 204–232.

Aarts, A. A., Anderson, C. J., Anderson, J., van Assen, M. A. L. M., Attridge, P. R., Attwood,
A. S., … Zuni, K. (2015, September 21). Reproducibility Project: Psychology. Retrieved
from osf.io/ezcuj

Abelson, R. P. (1995). Statistics as principled argument. Mahwah, NJ: Erlbaum.

Aschwanden, C. (2015, August 19). Science isn’t broken: It’s just a hell of a lot harder than we
give it credit for. Retrieved from http://fivethirtyeight.com/features/science-isnt-broken/

Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., … can’t
Veer, A. (2014). The replication recipe: What makes for a convincing replication? Journal of
Experimental Social Psychology, 50, 217-224. doi:10.1016/j.jesp.2013.10.005

Cohen, J. (1994). The world is round: p < .05. American Psychologist, 49, 997–1003.

Frank, M. (2015, August 31). The slower, harder ways to increase reproducibility. Retrieved
from http://babieslearninglanguage.blogspot.ie/2015/08/the-slower-harder-ways-to-
increase.html

Head M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The extent and
consequences of p-hacking in science. PLoS Biology, 13(3): e1002106.
doi:10.1371/journal.pbio.1002106

Hyde, J. S. (2007). New directions in the study of gender similarities and differences. Current
Directions in Psychological Science, 16, 259–263.
Kanner, A. D., Coyne, J. C., Schaefer, C., & Lazarus, R. S. (1981). Comparison of two modes
of stress measurement: Daily hassles and uplifts versus major life events. Journal of
Behavioral Medicine, 4, 1–39.

Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and
Social Psychology Review, 2(3), 196-217. doi:10.1207/s15327957pspr0203_4

Lakens, D. (2017, December 25). About p-values: Understanding common misconceptions.


[Blog post] Retrieved from https://correlaid.org/en/blog/understand-p-values/

Mehl, M. R., Vazire, S., Ramirez-Esparza, N., Slatcher, R. B., & Pennebaker, J. W. (2007). Are
women really more talkative than men? Science, 317, 82.

Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., …
Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422-1425. doi:
10.1126/science.aab2374

Oakes, M. (1986). Statistical inference: A commentary for the social and behavioral
sciences. Chichester, UK: Wiley.

Pashler, H., & Harris, C. R. (2012). Is the replicability crisis overblown? Three arguments
explained. Perspectives on Psychological Science, 7(6), 531-536.
doi:10.1177/1745691612463401

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological
Bulletin, 83, 638–641.

Scherer, L. (2015, September). Guest post by Laura Scherer. Retrieved


from http://sometimesimwrong.typepad.com/wrong/2015/09/guest-post-by-laura-scherer.html

Schnall, S., Benton, J., & Harvey, S. (2008). With a clean conscience: Cleanliness reduces the
severity of moral judgments. Psychological Science, 19(12), 1219-1222. doi:
10.1111/j.1467-9280.2008.02227.x

Simonsohn U., Nelson L. D., & Simmons J. P. (2014). P-Curve: a key to the file
drawer. Journal of Experimental Psychology: General, 143(2), 534–547. doi:
10.1037/a0033242

Tramimow, D. & Marks, M. (2015). Editorial. Basic and Applied Social Psychology, 37, 1–
2. https://dx.doi.org/10.1080/01973533.2015.1012991

Wilkinson, L., & Task Force on Statistical Inference. (1999). Statistical methods in psychology
journals: Guidelines and explanations. American Psychologist, 54, 594–604.

Yong, E. (August 27, 2015). How reliable are psychology studies? Retrieved
from http://www.theatlantic.com/science/archive/2015/08/psychology-studies-reliability-
reproducability-nosek/402466/
