Foundations of Scientific Thinking
Explore epistemology and alternative ways of knowing, for example the development of
navigation.
Epistemology is a branch of philosophy that investigates the origin, nature, methods and
limits of human knowledge.
We know through
· Sense and perception
· Language/authority
· Emotion/intuition
· Logic/reason
· Practical knowledge ⟶ skills-based knowledge (e.g. knowing how to drive)
· Knowledge by acquaintance ⟶ knowledge that does not involve facts but familiarity
with someone or something (e.g. knowing your mother, or what an apple looks like)
· Factual knowledge ⟶ knowledge based on fact (e.g. knowing that the sun rises every
morning)
Navigation
Rationalism argues that certain knowledge can be gained just by reason and thinking.
Empiricism argues that we gain all of our knowledge from experience. It argues that we
cannot know anything except by information which comes through our senses.
Example (inductive reasoning – generalising or predicting from observations):
· Nancy made the Olympic team.
· Nancy had the highest score.
· Therefore, Nancy will win gold.
· Charles Darwin made observations of finches on the Galapagos Islands, applied this
understanding to other species, and developed the theory of evolution (a general
conclusion drawn from his observations).
Example (deductive reasoning – a conclusion that follows necessarily from premises):
· All dogs are mammals.
· Daisy is a dog.
· Therefore, Daisy is a mammal.
Occam's razor
· When two explanations are given for the same result, the simpler one is usually correct.
The explanation that requires more assumptions and speculation is more likely to be incorrect.
· Prefer the explanation with the fewest possible factors or causes.
· However, this does not mean that the simplest answer is always correct.
· But it may be the easiest to work with, making it easier to design experiments to
disprove it (by Popper's falsifiability).
· If it is disproven, then alternative hypotheses can be re-evaluated.
· Occam's razor is a heuristic device (a suggestion or guide) rather than a way to guarantee
the right answer.
· It can be applied when developing a hypothesis.
Example:
· Lorentz and Einstein both produced theories which explain/predict that the speed of
light in a vacuum is constant and that energy and mass are connected (E = mc²).
· However, Lorentz's theory required an as-yet (and still) undetected fluid filling space,
known as the "ether".
· Einstein's special relativity did not require this. As it was simpler, involving fewer
assumptions, it was accepted.
· If scientists abandon falsifiability, they could damage the public’s trust in science.
· However, we are in various ways hitting the limits of what will ever be testable.
· Confirmation bias
· The tendency to interpret new evidence as confirmation of one’s existing beliefs or
theories
· A confirmation bias happens when people give more weight to evidence that confirms
their hypothesis and undervalue information that could disprove it.
· This leads to our beliefs being reinforced, even if they are incorrect
Theory-dependence of observation
· Understanding of an observation is guided by the knowledge that one already has.
· Therefore we have to be very careful in our assumptions as to how objective we are in
our scientific experiments
· Scientists cannot claim that their observations are completely independent of
pre-existing scientific theories.
· This doesn't make their discoveries wrong, as long as they are aware of it.
· The nature of science is that it is often building on previous discoveries, so this is likely to
happen.
· Traditional knowledge systems include those of:
· Indigenous Australians
· The Inuit peoples of northern North America
· Western science favours quantitative and empirical knowledge, traditional knowledge
observes natural phenomena linked to local culture
· Indigenous Australian Aboriginal peoples have extensive knowledge of plant medicine,
including processing and purification. This knowledge was rejected by Western civilisations
until recently. Several Indigenous plant-based medicines are in commercial production
(e.g. smoke bush).
· Much knowledge has been lost through colonisation (Western peoples discarded Indigenous
knowledge as primitive without realising the advanced scientific techniques in use by
these cultures).
· Australian and South American Indigenous peoples had extensive knowledge of celestial
and weather patterns. However, because it was expressed through stories and legends, it was
often ignored by colonisers (who consequently failed to predict cyclones, floods,
hurricanes and other severe weather).
– pre-1500 CE, exemplified by Greek and Egyptian cultures and those of the Asia region
· Contributions of Greek, Egyptian and Asian cultures are also deeply rooted in their
culture (e.g Egyptian pyramids are engineering marvels but also have a religious/cultural
basis)
Paradigm shifts occur when there is a fundamental change in the basic concepts and
experimental practices of a scientific discipline.
Anomalies appear in data/investigations (initially ignored) ⟶ anomalies continuously
appear and cannot be explained within the current paradigm ⟶ a new paradigm is accepted.
Example: continental drift and plate tectonics
Initial paradigm: the continents were always in their current positions, with perhaps some
minor movement.
New paradigm (plate tectonics):
· The Earth's crust is divided into numerous rigid slabs called plates
· Plates move around relative to each other, causing continents to move around
· Explains mountains, oceanic ridges and other geological features
· Evidence: continents fit together like jigsaw pieces. The distribution of fossils and rock
formations/types matches up across the edges of these jigsaw-piece continents.
Process of rejection and acceptance
· Wegener could not explain why his theory would work, causing universal rejection by
scientists
· Died in 1930 without theory gaining acceptance
· Several other scientists carried on his work
· Arthur Holmes – proposed convection currents as the mechanism of plate movement
· Harry Hess – determined occurrence of seafloor spreading
· Vine and Matthews – explored magnetic banding that occurred as new
rock formed at mid-ocean ridges
· The work of these scientists filled in the blanks missing in Wegener's theory
Eventually the initial theory proposed by Wegener was supported by so much scientific
evidence that the overall thinking changed and his theory became the accepted theory
(paradigm shift)
Analyse the current influences on scientific thinking, including but not limited to:
– economic
– political
Funding
· Should scientific endeavours reap benefits for society?
· Curiosity vs planning with an end use
· Government agencies manage the allocation of public science funding
· Research to contribute to economic and societal growth
· Public engagement
Political manipulation
· Accusations of presenting inaccurate or incomplete information on issues (e.g. climate
change), editing/skewing scientific reports, and preventing scientists from speaking with
the media about findings and knowledge (e.g. during the George W. Bush administration)
– global
· Gender and culture affect what we choose to study, our perspectives when approaching
scientific phenomena and strategies for studying them
· For example: research about evolutionary biology. US primatologists focused on male
dominance and associated mating access. Japanese researchers focused on status and
social relationships, values holding higher relative importance in Japanese society.
· A diversity of scientists is important for reducing bias and for providing different
worldviews
· Global competition is a theme that resonates throughout many scientific discoveries
· Space race: the 20th-century competition between two Cold War rivals, the Soviet
Union (USSR) and the United States (US), to achieve firsts in spaceflight capability.
It sparked one of the most famous periods in scientific research related to space
exploration.
Analyse the influence of ethical frameworks on scientific research over time, including but
not limited to
– human experimentation
· People running clinical trials have legal obligations set out in Medicines for Human Use
(Clinical Trials) Regulations 2004
· Anyone taking part in a trial must have a full understanding of the objectives of the
research, and any risks and potential inconveniences they may experience when
taking part. This information will be given to them at a meeting with a member of
the research team
· A point of contact must be provided so patients can obtain more information
about the trial
· Before a clinical trial of a new medicine can begin, all of the following must be in place:
· The science the research is based on must be reviewed by experts
· The researchers must secure funding
· An organisation, such as a hospital or research institute, must agree to provide a
home base for the trial
· The Medicines and Healthcare Products Regulatory Agency (MHRA) needs to
review and approve trials of a medicine and issue a clinical trial authorisation (CTA)
· A recognised ethics committee must review the trial and allow it to proceed
· Informed consent
· Privacy and anonymity
· Safe data storage and confidentiality
– experimentation on animals
– biobanks
· These are stores of human biological samples e.g from patients with particular diseases
· Samples are used by researchers for developing
· New diagnosis systems
· New medications
· Genomics
· Issues include
· Patient consent
· Ownership/control over data
· Benefits to patient or family if material is used to help discover a medicine
(profitable)
· Privacy rights
Conduct an initial literature search, from one or more areas of science, to identify the
potential use of a contemporary, relevant publicly available data set
Data sets
· Science is data driven. Therefore, when planning an investigation, scientists need to
ensure they will have access to data that will enable them to test their hypothesis
· Electronic sensing has enabled scientists to take measurements very frequently or over a
large time scale (leading to large data sets). This is advantageous as:
· anomalies are more easily identified
· errors in data can be quantified more accurately
Class summary
· Identify a problem
· Literature review
· Extensive – range of literature
· Peer reviewed
· Overview of science already conducted
· Further research
· Relationships between literature
· Seek advice from experts
· Depth of understanding
· CONSIDER: ethics, safety, expense, equipment, time, environmental impacts
Hypotheses are accepted or rejected on the basis of findings arising from the investigation.
· Rejection of the hypothesis may lead to new, alternative hypotheses.
· Acceptance of the hypothesis as a valid explanation is not necessarily permanent.
Relevant data sets, on the same or similar areas of study, can be used to help conduct,
discuss or analyse the project.
Reliability
Validity
– assessing the current state of the theory, concept, issue or problem being considered
Include:
· Current research occurring in your intended area of study
· Innovations occurring in this field
· Relevance to wider society (developing solutions for problems, helping us in our daily
lives etc)
Assess the process involved in the development of a scientific research question and
relevant hypothesis
Writing style:
· Precise
· Formal
· Quantitative information
· Objective
Structure:
· Introduction
· Introduce widely accepted core concepts
· Highlight importance of the review
· Discuss core aim of review
· List points/topics in order
· Main Body
· Group topics according to common elements
· Back up main points with research
· Focus on recent data
· Summarise individual studies or articles
· One key point per paragraph
· Sub-headings to group points/topics
· Diagrams, figures, tables to discuss point
· Conclusion
· Follow from introduction
· Summarise major research contributions to scientific field
· Point out gaps in research
· Highlight potential future studies
Formulate a final scientific hypothesis based on the scientific research question
An alternative (or directional) hypothesis makes a prediction about the expected outcome.
It is based on prior readings, research and studies on a topic that suggest a potential
outcome.
A non-directional hypothesis makes the prediction that a change will occur but the direction
is not specified.
Hypothesis features
· Focuses on something testable
· Includes an independent and dependent variable
· Variables can be manipulated
· Can be tested without violating ethical standards
Develop the rationale and possible outcomes for the chosen scientific research
Rationale
A rationale is a justification for choosing the topic of study, explaining why the research was
performed.
Outcomes
Research is classified into two main classes: fundamental and applied research.
Fundamental research investigates basic principles and reasons for the occurrence of a
particular event, process or phenomenon.
Applied research involves solving problems using well known and accepted theories and
principles.
– Methodology
Methodology describes the procedure to follow so that the researcher can address the
objectives/goal. It refers to the researcher's justification or reasoning behind using a
specific method.
A method refers to the specific steps the researcher will take to conduct the experiment.
Consider:
· How you will present your data (tables/graphs) – affects how you collect it
· The analysis you need to perform (look for trends as you perform the investigation)
· Problems in your investigation – so methodology can be altered if necessary
Timelines
Timelines allow the key dates you need to work around to be tracked.
– Benchmarks
Critically analyse the scientific research plan to refine and make appropriate amendments
In the logbook, the experiment should be refined and analysed as it is planned. Changes and
evidence of planning and evaluation should be noted.
Assess and evaluate the uncertainty in experimental evidence, including but not limited
to:
– systematic errors
Systematic errors are ones that consistently cause the measurement value to be either too
large or too small.
Minimising
The best way to increase confidence in measurements is to devise an experiment that
measures the same quantity by a completely different method, one unlikely to have the same
error. If the new technique produces different results, one or both experiments may contain
unidentified systematic errors. If measurements made with different measurement
techniques agree, this suggests that there is no systematic error in either measurement.
Causes
· Faulty equipment such as mis-calibrated balances or inaccurate stopwatches.
· Incorrectly used equipment
· Forgetting to subtract weight of container when finding mass of substance
· Converting units incorrectly
Example
· Timing a running race
· Delay caused by time it takes for sound to reach the ears of the timer.
· More delay caused by the official’s reaction time being longer at the start of the race
rather than the finish (where the runner’s motion can be used to anticipate when to use
the watch).
– random errors
Random error is where variations in the measurements occur without a predictable pattern.
Sometimes above and sometimes below actual value which causes uncertainty.
Minimising
We can determine how much error our measurements have by repeating the
measurements many times. If results are identical or nearly same, this indicates a small
amount of random error. If different each time, random error is affecting results.
Random errors can be reduced but never eliminated. Does not always prevent
measurements from being useful, but contributes to measurement uncertainty.
Assess and evaluate the use of errors in:
– mathematical calculations involving degrees of uncertainty
There is always some difference between the measured value and the actual value.
There is uncertainty associated with every measurement.
If we use a measured value to make a calculation, the results of the calculation will not be
exactly correct, so there is uncertainty associated with the calculation.
If we can quantify the uncertainty for a measurement, we can use the measurement with
confidence. We do so by specifying a range of values between which we are confident the
true value of our measurement lies.
Range of values = X ± ΔX, where
ΔX = half the range = (highest possible value – lowest possible value) / 2
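As a minimal sketch, the half-range rule above can be computed in Python (the repeated readings are invented for illustration):

```python
def half_range_uncertainty(measurements):
    """Return (mean, ΔX) where ΔX is half the range of the repeated measurements."""
    mean = sum(measurements) / len(measurements)
    delta = (max(measurements) - min(measurements)) / 2
    return mean, delta

readings = [10.2, 10.4, 10.1, 10.3]   # invented repeated length readings (cm)
x, dx = half_range_uncertainty(readings)
print(f"X = {x:.2f} ± {dx:.2f} cm")   # X = 10.25 ± 0.15 cm
```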
Compare quantitative and qualitative research methods, including but not limited to:
– Design of method
· Design and method: Qualitative – flexible, specified only in general terms in advance of
the study; non-intervention; all descriptive; considers multiple variables; small group.
Quantitative – structured, inflexible, specified in detail in advance of the study;
intervention; considers only a few variables; large group.
· Purpose: Qualitative – to explain and gain insight into and understanding of phenomena
through intensive collection of narrative data; generates a hypothesis to test; inductive
process. Quantitative – to explain, predict and/or control phenomena through focused
collection of numerical data; generates a hypothesis to test; deductive process.
· Approach to inquiry: Qualitative – subjective, holistic, process-oriented.
Quantitative – objective, focused, outcome-oriented.
· Hypothesis: Qualitative – tentative, evolving, based on the particular study.
Quantitative – specific, testable, stated prior to the particular study.
· Research setting: Qualitative – controlled settings not important.
Quantitative – controlled as much as possible.
– Gathering of data
· Sampling: Qualitative – non-random; intent to select a small group, not necessarily
representative, in order to gain in-depth understanding. Quantitative – random; intent to
select a large representative sample in order to generalise results to a larger population.
· Measurement: Qualitative – non-standardised, narrative, ongoing.
Quantitative – standardised, numerical.
· Data collection strategies: Qualitative – documents and artefacts (something observed);
interviews/focus groups; questionnaires; extensive and detailed field notes.
Quantitative – observations; specific numerical measurements.
– Analysis of data
· Data analysis: Qualitative – raw data is in words; ongoing; involves using
observations/comments to come to a conclusion. Quantitative – raw data is statistically
analysed to come to a conclusion.
· Data interpretation: Qualitative – conclusions can change and are reviewed on an ongoing
basis; conclusions are generalisations; the validity of the generalisations is the reader's
responsibility. Quantitative – conclusions and generalisations are formulated at the end of
the study and stated with a predetermined degree of certainty; inferences and
generalisations are the researcher's responsibility; never 100% certain of findings.
Investigate the various methods that can be used to obtain large data sets, for example:
– remote sensing
“Big data is a field that treats ways to analyse, systematically extract information from, or
otherwise deal with data sets that are too large or complex to be dealt with by traditional
data-processing application software”
Remote sensing is the science of obtaining information about objects or areas from a
distance, typically from aircraft or satellites.
Objects on Earth can be detected and classified (objects on surface, atmosphere and
oceans).
They use sensors to detect propagated signals (e.g. electromagnetic radiation) emitted by
or reflected off an object.
Today, anyone with access to the Internet can view high-quality satellite images of
anywhere on earth at any time.
Satellite imagery can be used for tasks ranging from crop assessment and ecosystem
mapping to monitoring overgrazing, erosion, flooding and bushfires.
– streamed data
Streaming data is data that is continuously generated by different sources. Such data should
be processed incrementally using stream processing techniques without having access to all
the data. In addition, it should be considered that concept drift may happen in the data
which means that the properties of the stream may change over time.
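The incremental-processing idea can be illustrated with a running mean that folds in one value at a time and never needs the whole stream in memory (the sensor readings below are simulated):

```python
def make_running_mean():
    """Return an update function that folds one stream value at a time into a mean."""
    n, mean = 0, 0.0
    def update(x):
        nonlocal n, mean
        n += 1
        mean += (x - mean) / n   # incremental mean: no need to store the stream
        return mean
    return update

update = make_running_mean()
for reading in [10.0, 12.0, 11.0, 13.0]:   # simulated sensor stream
    current = update(reading)
print(current)   # 11.5
```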
Propose a suitable method to gather relevant data, including large data set(s), if
appropriate, applicable to the scientific hypothesis
PROCESSING DATA FOR ANALYSIS
Inquiry question: How is data processed so that it is ready for analysis?
Investigate appropriate methods for processing, recording, organising and storing data
using modern technologies
Data collection
Data processing is the conversion of data into usable and desired form. It
is carried out using a predefined sequence of operations either manually
or automatically. Data storage
Data storage typically occurs in digital form. This allows the user to
perform a large number of operations in a short period of time. This is Data sorting
Each stage starting from data collection to presentation has a direct effect on the output
and usefulness of the processed data.
Conduct a practical investigation to obtain a qualitative and a quantitative set of data and
apply appropriate methods to process, record, store and organise this data
Example: measuring acidity and basicity of common household substances using a pH probe
(quantitative) and universal indicator (qualitative).
Assess the impact of making a large data set from scientific sources public, for example:
– Large Hadron Collider (world’s largest and most powerful particle accelerator, has fed
data to four large experimental collaborations, resulting in over 2,000 scientific papers)
– Kepler Telescope (detected thousands of exoplanets)
– Human genome project (allowed scientists to begin mapping the blueprint of building a
person, impacts on medicine, biotechnology and life sciences)
Advantages of open data repositories
· Transparency
· Innovation
· Efficiency
· Economic benefits
· Public engagement
Conduct an investigation to access and obtain relevant publicly available data set(s),
associated with the proposed hypothesis, for inclusion in the development of the
Scientific Research Project
Data set of the compositions of different paper sources was obtained and used to analyse
my data.
Module Three: The data, evidence and decisions
PATTERNS AND TRENDS
What tools are used to describe patterns and trends in data?
Evidence is data that is relevant and furnishes proof that supports a conclusion. The data
must support or refute the variables on which the hypothesis relies.
Example (seed germination experiment):
· Data: the size of each seed; the mass of each seed; the type of seed; the type of paper
towel; the number of seeds that germinate.
· Evidence: the number of seeds that germinate.
Describe the difference between qualitative and quantitative data sets, and methods used
for statistical analysis, including but not limited to:
– content and thematic analysis
1. Become familiar with the data and read through it several times to look for basic
observations or patterns. This includes transcribing the data
2. Revisit the research objectives. Identify questions that can be answered through the
data collected.
3. Develop a framework. Identify broad ideas, concepts, behaviours or phrases and
assign codes to them. This allows data to be labelled and structured.
4. Identify patterns and connections. Once the data is coded, the researcher can identify
themes, looking for common responses to questions, identifying relevant data and
patterns to answer the questions, and finding areas that can be explored further.
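Step 3 (assigning codes) and the start of step 4 can be sketched in Python; the responses and the code frame below are invented for illustration:

```python
responses = [   # invented interview answers
    "The app is slow but helpful",
    "Helpful support team",
    "Too slow to load",
]
codes = {"speed": ["slow", "load"], "support": ["helpful", "support"]}  # invented code frame

# Label each response with every theme whose keywords it mentions (step 3);
# the counts per theme then hint at patterns worth exploring (step 4).
coded = {theme: [r for r in responses if any(k in r.lower() for k in kws)]
         for theme, kws in codes.items()}
for theme, hits in coded.items():
    print(theme, len(hits))
```

Real thematic analysis is iterative and judgement-driven; this only shows the mechanical labelling step.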
– descriptive statistics
First level of analysis. Helps researchers summarise new data and find patterns.
Select and use appropriate tools, technologies and/or models in order to manipulate and
represent data appropriately for a data set, including but not limited to:
– spreadsheets
– graphical representations
– models (physical, computational and/or mathematical)
– digital technologies
Assess the relevance, accuracy and validity of the data and determine error, uncertainty
and comment on its limitations
Relevance
· Relates to the aim of the experiment and the chosen topic of investigation
Validity
· A valid experiment is a fair test.
Accuracy
· Accuracy depends on the design of the experiment (i.e. the validity of the method) and
the sensitivity of the instruments used.
Error
· The standard error is a measure of the accuracy of the estimate of the mean from the
true or reference value
· The main use of the standard error of the mean is to give confidence intervals around
the estimated means for normally distributed data
Uncertainty
· Uncertainty, or confidence, is described in terms of mean and standard deviation of a
dataset. It is the quantitative estimation of error present in data (all measurements
contain some uncertainty generated through systematic error and/or random error).
· The uncertainty of a single measurement is limited by the precision and accuracy of the
measuring instrument and other factors affecting the ability of the experimenter to
make a measurement
· Measurement = measured value ± standard uncertainty
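A minimal Python sketch of reporting a measurement as mean ± standard error of the mean, using invented repeated readings:

```python
import statistics as st

data = [9.8, 10.1, 10.0, 9.9, 10.2]   # invented repeated measurements

mean = st.mean(data)
s = st.stdev(data)            # sample standard deviation (n - 1 denominator)
se = s / len(data) ** 0.5     # standard error of the mean
print(f"measurement = {mean:.2f} ± {se:.2f}")   # measurement = 10.00 ± 0.07
```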
Apply appropriate descriptive statistics to a data set(s), including but not limited to:
– mean
It is calculated by adding up all data entries and dividing by the total number of data entries.
x̄ = (x₁ + x₂ + … + xₙ) / n = (Σx) / n
– median
Middle score for a set of data that has been arranged in order of magnitude.
Less affected by outliers and skewed data.
– standard deviation
The standard deviation is a measure of the spread of scores within a data set.
It can be used in conjunction with the mean to summarise continuous data, not categorical
data.
s = √( Σ(x − x̄)² / (n − 1) )
A small standard deviation indicates low variability (scores are close together).
A large standard deviation indicates high variability (scores are more spread out).
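The three statistics above can be computed with Python's standard `statistics` module (the scores are invented):

```python
import statistics as st

scores = [4, 8, 6, 5, 3, 7, 9, 5]   # invented data set

print(st.mean(scores))     # 5.875
print(st.median(scores))   # middle of sorted data: (5 + 6) / 2 = 5.5
print(st.stdev(scores))    # sample standard deviation (divides by n - 1)
```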
Errors lead to:
· A consistent difference from the true value (systematic error)
· Variance about the true value (random error)
– accuracy
· How close a measurement is to the true or accepted value.
– precision
· How close repeated measurements are to each other, regardless of their closeness to the
true value.
– bias
Bias from an experimenter has a constant magnitude and direction so averaging over a large
number of observations doesn’t minimise its effect.
Examples include
· Selection bias (studying a group that is not representative of the population, e.g. using
a wealthy Sydney suburb to represent Australia)
· Expectancy bias (looking for an expected result due to prior knowledge)
· Response bias (subjects answer untruthfully or withhold information)
· Reporting bias (selective choice of data/findings to prove the hypothesis)
– data cleansing
Data cleansing is the process of detecting and correcting data quality issues.
It can include computers identifying missing or incomplete data, manual steps such as
repeating trials or manually correcting for calibration or measurement error.
Example: online reviews (companies removing one-star ratings from their reviews is an
unethical misuse of data cleansing).
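A minimal sketch of the "identify missing or incomplete data" step, with invented trial records:

```python
rows = [                       # invented pH-probe trial records
    {"trial": 1, "ph": 7.2},
    {"trial": 2, "ph": None},  # missing reading
    {"trial": 3, "ph": 6.9},
]

# Detect incomplete records (so those trials can be repeated) and keep only clean rows.
incomplete = [r["trial"] for r in rows if r["ph"] is None]
clean = [r for r in rows if r["ph"] is not None]
print("repeat trials:", incomplete)   # repeat trials: [2]
print(len(clean), "usable rows")      # 2 usable rows
```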
Apply appropriate statistical tests of confidence to a data set(s), including but not limited
to:
· The null hypothesis states that there is no difference between the groups you are
testing.
· When interpreting results, you work to either accept or reject the null hypothesis.
· Accepting the null hypothesis means no significant difference was found between the
samples being compared. Rejecting the null hypothesis means there is a significant
difference between samples that most likely did not occur by random chance.
· The null hypothesis is given as H0. The “alternative” hypothesis is H1.
· Most statistical tests involve the calculation of a p-value (probability value).
· The p-value is the probability of finding the observed results when the null hypothesis is
true.
· An α-value is the value set by the experimenter to determine whether the null
hypothesis is rejected. Meanwhile, the p-value is determined statistically.
· In most statistical tests we set a value of 0.05 for the α-value. This is equivalent to a
percentage of 5%.
· P-value less than 0.05 = result is significant and there is a significant difference
between samples. Null hypothesis is rejected. Lower value = less chance of
results occurring naturally
· P-value greater than 0.05 = result is not significant, we accept the null hypothesis
and conclude there is no significant difference between samples.
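The decision rule above reduces to a one-line comparison; a sketch:

```python
ALPHA = 0.05   # significance level set by the experimenter

def decide(p_value, alpha=ALPHA):
    """Apply the p-value decision rule for the null hypothesis."""
    return "reject H0 (significant)" if p_value < alpha else "accept H0 (not significant)"

print(decide(0.03))   # reject H0 (significant)
print(decide(0.20))   # accept H0 (not significant)
```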
– Student’s t-test
· Compares the means and standard deviations of two separate samples (mean and
variance in a table)
· Paired: same individuals, e.g. animals before and after eating a specific vitamin
· Unpaired: different individuals, e.g. two groups of animals eating different feed
· A calculated t-statistic is then compared to a table of critical values
· Degrees of freedom = number of data points – 1
· For two sets of data: degrees of freedom = total number of data points – 2
· Then find the p = 0.05 critical value for that many degrees of freedom
· To interpret a t-test result you can either:
· Compare the t-statistic directly with the critical value (t-stat vs t-critical). If the
t-statistic is greater, reject the null hypothesis; if it is smaller, accept the null hypothesis.
· Compare p-value to α-value of 0.05. If p-value is below α-value then null
hypothesis is rejected.
· A one-tailed test looks for a difference in a particular direction (above OR below mean)
while a two-tailed test looks for any difference (above AND below mean).
Assumptions
· Random sampling
· Continuous numerical data
· Normal distribution
· Adequate sample size
· Equal variances
Types
· Two-tailed or one-tailed
· Paired or unpaired
· (Equal or unequal variances)
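The t-test steps above can be sketched by hand in Python for the unpaired, equal-variance case. The two groups are invented; 2.306 is the standard two-tailed critical value for α = 0.05 with df = 8:

```python
import statistics as st

def t_statistic(a, b):
    """Unpaired t-statistic with pooled (equal) variances."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    return (st.mean(a) - st.mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

group_a = [5.1, 4.9, 5.3, 5.0, 5.2]   # invented measurements
group_b = [4.4, 4.6, 4.5, 4.3, 4.7]

t = abs(t_statistic(group_a, group_b))
df = len(group_a) + len(group_b) - 2   # = 8
T_CRITICAL = 2.306                     # two-tailed, alpha = 0.05, df = 8 (from tables)
print("reject H0" if t > T_CRITICAL else "accept H0")   # reject H0
```

In practice a statistics package (e.g. SciPy) computes the p-value directly rather than reading a critical-value table.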
– Chi-squared test
· For categorical data to test how well observed data fits expected values
· The expected values can be either data from a previous observation or an expectation of
proportions.
· An X² value is calculated and compared to a critical-value table similar to the one used
for t-tests
· E.g. 5 tulip colours in a shop with 100 flowers present, so 20 of each colour are
expected; 4 degrees of freedom (based on the number of colours, not the number of flowers)
· Null hypothesis is that there is no significant difference between the observed and
expected frequency of a variable.
· Chi-squared larger than critical value = reject null hypothesis (so significant difference
between observed and expected frequency of a variable)
· Chi-squared smaller than critical value = accept null hypothesis
Assumptions
· Random sampling
· Categorical variables
· Data must be actual frequencies or counts (not percentages)
· Independent study groups
· Mutually exclusive categories – each subject only contributes to one data point
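The tulip example can be worked through in Python. The observed counts are invented; 9.488 is the standard critical value for α = 0.05 with df = 4:

```python
def chi_squared(observed, expected):
    """Chi-squared statistic: sum over categories of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Tulip-shop example from the notes: 100 flowers, 5 colours, so 20 expected of each.
observed = [25, 18, 22, 15, 20]           # invented counts
expected = [20] * 5

x2 = chi_squared(observed, expected)      # (25 + 4 + 4 + 25 + 0) / 20 = 2.9
CRITICAL = 9.488                          # alpha = 0.05, df = 4 (from tables)
print("reject H0" if x2 > CRITICAL else "accept H0")   # accept H0
```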
– F-test
Assumptions
· Random sampling
· Continuous numerical data
· Normal distributions
· Two populations/samples are independent of each other
· Sample 1 must have the larger variance (as the F-statistic is a ratio of variances)
Apply statistical tests that can determine correlation between two variables, including but
not limited to:
– correlation coefficient
Describe the difference between correlation and causation
Correlation is a statistical measure (expressed as number) that describes the size and
direction of a relationship between two or more variables. A correlation between variables,
however, does not automatically mean that change in one variable is the cause of the
change in the values of the other variable.
Causation indicates that one event is the result of the occurrence of the other event,
i.e. there is a causal relationship between the two events. This is also referred to as
cause and effect.
Correlation
Correlation is a statistical technique that can show whether, and how strongly, pairs of
variables are related. For instance, height and weight are related.
Correlation works for quantifiable data.
• Correlation coefficient = r
• It ranges from -1.0 to +1.0. The closer r is to +1 or -1, the more closely the two
variables are related.
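A minimal sketch computing r directly from its definition (the height/weight pairs are invented):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient computed from its definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

heights = [150, 160, 170, 180, 190]   # invented paired data
weights = [55, 60, 68, 72, 80]
print(round(pearson_r(heights, weights), 3))   # close to +1: strong positive correlation
```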
The following criteria for assessing whether a correlation reflects causation are known as
the Bradford Hill criteria:
1. Strength. How strong is the correlation between the cause and the effect?
2. Consistency. Almost every study should support the association for there to be
causation. This is why numerous experiments have to be done before meaningful
statements can be made about the causal relationship between two or more factors.
3. Specificity. This is established when a single putative cause produces a specific
effect. This is considered by some to be the weakest of all the criteria.
4. Temporality. Exposure always precedes the outcome. If factor "A" is believed to
cause a disease, then it is clear that factor "A" must necessarily always precede the
occurrence of the disease.
5. Biological gradient. Also known as dose-response. A little exposure should result in a
little effect, a large exposure should cause a large effect.
6. Plausibility. The effect must have plausibility but should not violate well known laws
of the universe. However what is biologically plausible depends upon the biological
knowledge of the day.
7. Coherence. The association should be compatible with existing theory and
knowledge. In other words, it is necessary to evaluate claims of causality within the
context of the current state of knowledge within a given field and in related fields.
8. Experiment. Can be tested with an experiment.
9. Consideration of Alternate Explanations. In judging whether a reported association
is causal, it is necessary to determine the extent to which researchers have taken
other possible explanations into account and have effectively ruled out such
alternate explanations. In other words, it is always necessary to consider multiple
hypotheses before making conclusions about the causal relationship between any
two items under investigation.
Use available software to apply statistical tests appropriate to a large data set(s) to assist
with the analysis of the data
Excel – the Analysis ToolPak add-in can be used to apply statistical tests to data.
DECISIONS FROM DATA AND EVIDENCE
Inquiry question: How is evidence used to make decisions in the scientific research process?
Decisions can be made individually or collectively.
Analyse patterns and trends arising from the data set(s) related to the Scientific Research
Project to:
– construct a relevant conclusion
– suggest possibilities for further investigation
Demonstrate the impact of new data on established scientific ideas, including but not
limited to one of the following:
– gravitational waves on general relativity
– mechanisms of disease transmission and control
John Snow. London struck with cholera outbreak in 1854. Spread by contaminated food or
water. Talked to local residents in his town and used a spot map to model cholera cases.
Cases were centred around pump on Broad Street and this pattern convinced authorities to
disable the well pump, limiting further spread.
Analyse rather than describe, address key words of the question, address all aspects,
monitor time, integrate stimulus material
DATA MODELLING INQUIRY QUESTION
How can data modelling help to process, frame and use knowledge obtained from the
analysis of data sets?
Evaluate data modelling techniques used in contemporary science associated with large
data sets, including but not limited to:
– predictive
– statistical
– descriptive
– graphical