Foundations of Scientific Thinking

The document discusses the development of modern scientific thinking and navigation. It explores topics like epistemology, types of knowledge, the history of navigation techniques from 2000 BC to present day GPS, and the influence of empiricism, induction and deduction on scientific inquiry.

Uploaded by

brionycampbell04
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
71 views41 pages

Foundations of Scientific Thinking

The document discusses the development of modern scientific thinking and navigation. It explores topics like epistemology, types of knowledge, the history of navigation techniques from 2000 BC to present day GPS, and the influence of empiricism, induction and deduction on scientific inquiry.

Uploaded by

brionycampbell04
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
You are on page 1/ 41

Module One: The foundations of scientific thinking

THE DEVELOPMENT OF MODERN SCIENCE


How have philosophical arguments influenced the development of modern scientific
research?

Explore epistemology and alternative ways of knowing, for example the development of
navigation.

Epistemology is a branch of philosophy that investigates the origin, nature, methods and
limits of human knowledge.

Knowledge (as defined by Plato) is a justified true belief.

We know through
· Sense and perception
· Language/authority
· Emotion/intuition
· Logic/reason

Three types of knowledge

· Practical knowledge ⟶ knowledge that is skills based (e.g. knowing how to drive)
· Knowledge by acquaintance ⟶ knowledge that involves not facts but familiarity with a person or object (e.g. knowing your mother, or what an apple looks like)
· Factual knowledge ⟶ knowledge based on fact (e.g. knowing that the sun rises every morning)

Navigation

2000 BC – Celestial navigation
· Knowledge of the night sky allowed navigators access to open waters
· Polynesians were the first to use the motion of the stars, the weather, the position of certain wildlife species and the size of waves to find the path from one island to another
· The North Star (Polaris) does not appear to move; direction can be determined by the relationship between the North Star and the Plough.

206 BC – First compass
· Developed in China
· As it became more advanced, it allowed a steady direction to be held while sailing.

880 AD – Kamal
· Invented by Arab navigators
· Consists of a wooden block attached via a hole in its centre to a string with several equally spaced knots
· Used to measure the latitudes and altitudes of stars
· Thus the earliest step toward the use of quantitative methods in navigation

1730 – Sextant
· Invented by British mathematician John Hadley
· Measures angular distances between two visible objects (e.g. a celestial body and the horizon)
· Allows positions of latitude and longitude to be determined (a further quantitative development)

1764 – Marine chronometer
· Developed by John Harrison (England)
· Precise, specialised clocks for finding longitude at sea that serve as portable time standards

1978 – GPS
· Originally set up for US military use
· Created when 22 satellites were specifically built to send position and coordinate data to ships at sea
· Modern GPS uses satellites that orbit the Earth twice a day and uses trilateration to calculate a user’s exact location.
· It does this by measuring the distance to each satellite from the time a transmitted signal takes to travel from the satellite to the receiver.
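The trilateration idea above can be sketched in two dimensions (real GPS solves in 3-D with a fourth satellite to correct the receiver's clock; the anchor positions and distances below are invented for illustration):

```python
# Minimal 2-D trilateration sketch: recover a position from known
# anchor points and measured distances to each of them.
import numpy as np

def trilaterate(anchors, distances):
    """Solve for (x, y) given three anchor positions and distances to them."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    # Subtracting the first circle equation from the other two
    # linearises the system into A @ [x, y] = b.
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2,
                  d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2])
    return np.linalg.solve(A, b)

# Receiver secretly at (3, 4); "measure" the distance from three anchors.
anchors = [(0, 0), (10, 0), (0, 10)]
true_pos = np.array([3.0, 4.0])
dists = [np.hypot(*(true_pos - a)) for a in np.array(anchors)]
print(trilaterate(anchors, dists))  # approximately [3. 4.]
```

A GPS receiver does the same thing with signal travel times converted to distances (distance = travel time × speed of light), which is why precise clocks matter so much.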

Describe the influence of empiricism on scientific inquiry

Rationalism argues that certain knowledge can be gained just by reason and thinking.

Empiricism argues that we gain all of our knowledge from experience. It argues that we
cannot know anything except by information which comes through our senses.

Compare induction and deduction with reference to scientific inquiry

Induction (specific to general – ‘bottom-up reasoning’)
Observation ⟶ Pattern ⟶ Tentative hypothesis ⟶ Theory

· We find a general rule by using a large number of particular cases.
· Inductive reasoning allows for a false conclusion even if all premises are true.
· Stronger premises ⟶ more chance the conclusion is true
· Scientists use induction to form hypotheses and theories
· Inductive arguments are only probable rather than certain
· Issues include
· the future may be different from the past
· there are always outliers

Example
· Nancy made the Olympic Team.
· Nancy had the highest score.
· Therefore, Nancy will win gold.

· Charles Darwin made observations of finches on the Galapagos Islands and applied this understanding to other species, developing the Theory of Evolution (a general conclusion from his observations)

Deduction (general to specific – ‘top-down logic’)
Theory ⟶ Hypothesis ⟶ Observation ⟶ Confirmation

· We apply a general rule to a particular case.
· The conclusion is taken from the premises.
· If the conclusion is wrong, one of the premises is wrong.
· If one of the premises is wrong, the conclusion may also be wrong.
· Scientists use deduction to apply theories to specific situations and test the hypothesis.

Example
· All dogs are mammals
· Daisy is a dog
· Therefore, Daisy is a mammal

· An argument is valid if the conclusion follows from the premises

· An argument is sound if it is valid and its premises are true

Assess parsimony/Occam’s razor and its influence on the development of science

· When two explanations are given for the same result, the simpler one is usually preferred. The explanation that requires more assumptions and speculation is more likely to be incorrect.
· Prefer the fewest possible factors or causes.
· However, this does not mean that the simplest answer is always correct
· But it may be the easiest to work with, making it easier to design experiments to disprove it (via Popper’s falsifiability)
· If it is disproven, then alternative hypotheses can be re-evaluated
· Occam’s razor is a heuristic device (a suggestion/guide) rather than a way to guarantee the right answer.
· It can be applied when developing a hypothesis

Example:
· Lorentz and Einstein both produced theories which explain/predict that the speed of light in a vacuum is constant and that energy and mass are connected (E = mc²)
· However, Lorentz’s theory required an as yet (and still) undetected fluid filling space, known as the “ether”.
· Einstein’s Special Relativity did not require this. As it was simpler, involving fewer assumptions, it was accepted.

Analyse the importance of falsifiability in scientific research


· Established by the 20th century philosopher of science, Karl Popper to distinguish
between scientific concepts and pseudo-scientific concepts.
· For something to be scientific, it must be able to be proven false.
· A scientific idea must be able to make testable predictions, where there are possible
outcomes that would prove the idea to be wrong.
· All experiments should be completed in the attempt to disprove the hypothesis.
· It can be thought of as attempting to show the null hypothesis to be true.
· Hence if things are falsifiable then they are able to be used in scientific inquiry.

· If scientists abandon falsifiability, they could damage the public’s trust in science.
· However, we are in various ways hitting the limits of what will ever be testable.

Evaluate the significance of confirmation bias, including theory-dependence of


observation

· Confirmation bias
· The tendency to interpret new evidence as confirmation of one’s existing beliefs or
theories
· A confirmation bias happens when people give more weight to evidence that confirms
their hypothesis and undervalue information that could disprove it.
· This leads to our beliefs being reinforced, even if they are incorrect

How does it occur?


· Selective memory (we don’t remember the dull and mundane)
· Differing interpretations of facts based on our previous views
· Increased questioning of information that disagrees with us
· Selecting search terms that presuppose a particular answer
· Following/talking to/reading information from people and organisations we are already likely to agree with

How can we avoid it?


1. Reading the literature ⟶ look for different points of view and hypotheses
2. Experimental design ⟶ use ‘blinded’ experiments if there is a risk of accidental
bias in interpretation or results. Use repetition by other experimenters. Use
concept of falsifiability.
3. Analysis of results ⟶ Be wary of assuming something is an outlier just because it does not conform to your expectations. Use statistical analysis to see how likely it is that your results could have occurred by chance.
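One way to check whether a result could plausibly be chance is a permutation test: shuffle the group labels many times and see how often a difference at least as large as the observed one appears. The measurements below are invented for illustration:

```python
# Permutation test sketch: how often would random relabelling of two
# groups produce a difference in means at least as large as observed?
import numpy as np

rng = np.random.default_rng(0)
control   = np.array([5.1, 4.9, 5.0, 5.2, 4.8, 5.0])
treatment = np.array([5.6, 5.4, 5.7, 5.5, 5.8, 5.3])
observed = treatment.mean() - control.mean()

pooled = np.concatenate([control, treatment])
n_perm = 10_000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)                         # random relabelling
    diff = pooled[6:].mean() - pooled[:6].mean()
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_perm
print(f"p is approximately {p_value:.4f}")  # small p: unlikely to be chance alone
```

A small p-value guards against treating a chance fluctuation as confirmation of the hypothesis; it is not proof, only a measure of how surprising the data would be if the groups really did not differ.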

Theory-dependence of observation
· Understanding of an observation is guided by the knowledge that one already has.
· Therefore we have to be very careful in our assumptions about how objective we are in our scientific experiments
· Scientists cannot claim that their observations are completely independent of pre-existing scientific theories.
· This doesn’t make their discoveries wrong, as long as they are aware of it
· The nature of science is that it often builds on previous discoveries, so this is likely to happen.

Use historical examples to evaluate the contribution of cultural observational knowledge


and its relationship to science, including

· All civilisations throughout history have produced, recorded and accumulated


knowledge to understand and explain the world
· Modern and Western science is heavily constrained by an empirical/analytical approach
· Asian/African/Indigenous cultures often approach scientific discovery very differently, causing their knowledge to be rejected by Western scientists
· Such knowledge is increasingly used for medicines, agriculture and engineering

– post-49,000 BCE, exemplified by Aboriginal cultures

· Includes
· Indigenous Australians
· Inuit peoples of Northern America
· Western science favours quantitative and empirical knowledge, traditional knowledge
observes natural phenomena linked to local culture
· Indigenous Australian Aboriginal peoples have extensive knowledge of plant medicine,
including processing and purification. Knowledge rejected by Western civilisations until
recently. Several Indigenous plant-based medicines are in commercial production (e.g
Smoke Bush)
· Much knowledge has been lost through colonisation (Western peoples discarded their
knowledge as primitive without realising advanced scientific techniques in use by all
cultures)
· Australian and South American Indigenous peoples had extensive celestial and weather pattern knowledge. However, because it was expressed through stories and legends, it was often ignored by colonisers (who consequently failed to predict cyclones, floods, hurricanes and other severe weather)

– pre-1500 CE, exemplified by Greek and Egyptian cultures and those of the Asia region

· Contributions of Greek, Egyptian and Asian cultures are also deeply rooted in their
culture (e.g Egyptian pyramids are engineering marvels but also have a religious/cultural
basis)

GREEK – Astronomy/the Earth
· Description: Greek philosophers proposed theories to account for the form and origin of the Earth. Eratosthenes made the first accurate measurement of the Earth’s circumference. He invented latitude/longitude.
· Influence on current scientific thinking: Such measurements are important for scientists to utilise today.

GREEK – Water movement
· Description: The Archimedes screw used laws of physics to enable water to be moved uphill.
· Influence: Farmers have used it for thousands of years for crop irrigation. Modern scientists have discovered that reversing the Archimedes screw can provide an efficient source of alternative energy.

GREEK – Aqueducts
· Description: Keep water flowing across valleys and through mountain ranges, allowing water to be moved from distant rivers and lakes to cities. Regular access to clean water improved health/sanitation in Greek and Roman cities.
· Influence: Helped keep drinking water free of human waste and contamination. The concept of running water remains today, improving public health and hygiene. Transporting water from source to distribution point (e.g. pipelines) remains today.

EGYPTIAN – Simple machines (ramps, levers, pulleys, wheels)
· Description: Used to build the pyramids and temples of Egypt from stone blocks (3-5 tonnes).
· Influence: The basis of mechanics.

EGYPTIAN – Metal extraction and use
· Description: Tools made from copper, then bronze (copper + tin). Copper-making hearths/ovens were designed to work copper into various shapes. Gold was alloyed and purified.
· Influence: Today, copper is used in electrical equipment (e.g. wiring, motors) as it is a good heat and electrical conductor. It must be drawn into wires for this.

ASIAN REGION – Gunpowder
· Description: Around 300 CE. Mainly used in celebratory fireworks, clearing land, construction and weapons.
· Influence: Military purposes for defence. Affected warfare and weaponry, mining (improving the rate of construction), rockets and fireworks.

ASIAN REGION – Paper
· Description: Developed in the second century CE, replacing writing on bamboo, wood, stone and silk. Traded across Asia and globally through routes such as the Silk Road.
· Influence: Printing, packaging, decorating, writing, cleaning/hygiene, filtering, currency.
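Eratosthenes' measurement is worth a worked example. Using the commonly quoted modern reconstruction of his figures (a noon sun angle of about 7.2° at Alexandria when the sun was overhead at Syene, with the cities roughly 800 km apart along a meridian), the arithmetic is:

```python
# Eratosthenes' method: the shadow angle is the fraction of a full
# circle subtended by the arc between the two cities, so scaling the
# arc length up by (360 / angle) gives the full circumference.
angle_deg = 7.2        # sun angle from vertical at Alexandria
arc_length_km = 800    # approximate Alexandria-Syene distance
circumference_km = (360 / angle_deg) * arc_length_km
print(circumference_km)  # 40000.0, close to the modern value of ~40,075 km
```

The exact figures Eratosthenes used (in stadia) are debated, but the method itself is sound and the result was remarkably close.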
Select one example from the following list to analyse the paradigm shift and how
evidence is used to support new theories to explain phenomena and their consequences:
– Lavoisier and oxygen
– Einstein and general relativity
– Wegener and continental drift, leading to plate tectonics

A paradigm is a pattern or framework of concepts (a theory or group of theories) within which other explanations/theories fit.

Paradigm shifts occur when there is a fundamental change in the basic concepts and experimental practices of a scientific discipline.

The cause of most paradigm shifts is either
· Experimental results that fall outside the existing paradigm – results able to be repeated and confirmed from multiple sources.
· A change in thinking and interpretation of existing results

The paradigm-shift cycle:
1. Anomalies appear in data/investigations (initially ignored)
2. Anomalies continuously appear and cannot be explained within the current paradigm
3. New explanations are proposed, debated and investigated
4. Alternative concepts are proposed and tested (where possible)
5. A new paradigm is accepted

Initial paradigm: the continents were always in their current positions with perhaps some
minor movement.

Wegener’s proposal and evidence

· Crust of Earth is divided into numerous rigid slabs of crust called plates
· Plates move around relative to each other, causing continents to move around
· Explains mountains, oceanic ridges and other geological features
· Evidence: continents fit together like jigsaw pieces. Distribution of fossils and rock
formations/types fit perfectly along jigsaw pieces of continents.
Process of rejection and acceptance
· Wegener could not explain why his theory would work, causing universal rejection by
scientists
· Died in 1930 without theory gaining acceptance
· Several other scientists carried on his work
· Arthur Holmes – convection currents as the mechanism of plate movement
· Harry Hess – determined occurrence of seafloor spreading
· Vine and Matthews – explored magnetic banding that occurred as new
rock formed at mid-ocean ridges
· The work of these scientists filled in the blanks missing in Wegener’s theory

Eventually the initial theory proposed by Wegener was supported by so much scientific
evidence that the overall thinking changed and his theory became the accepted theory
(paradigm shift)

– McClintock and transposable elements, commonly known as ‘jumping genes’


INFLUENCES ON CURRENT SCIENTIFIC THINKING
Inquiry question: What currently influences scientific thinking?

Analyse the current influences on scientific thinking, including but not limited to:
– economic

· Scientific thinking and knowledge is of significant economic value


· Advanced physical and mathematical sciences contribute around $145b to the
Australian economy every year
· 11% of Australian economic activity directly relies on advanced mathematical and
physical sciences
· Advanced mathematical and physical sciences are directly related to 7% of Australian
employment (760,000 jobs)
· Greater economic access allows for more substantial scientific knowledge to be gained
· Society is economically focused, so research and scientific value is often measured by its
economic use

Funding for scientific research


· Large amount of funding is based on government grants
· Private funding in the United States can come from large transnational corporations, such as pharmaceutical companies, which may have their own agenda in terms of the results of the research. Coca-Cola funds large amounts of research, such as into the impact of sugar on human health.

– political

Funding
· Should scientific endeavours reap benefits for society?
· Curiosity vs planning with an end use
· Government agencies manage the allocation of public science funding
· Research to contribute to economic and societal growth
· Public engagement

Political manipulation
· Accusations of presenting inaccurate or incomplete information on issues such as climate change, editing/skewing scientific reports, and preventing scientists from speaking with the media about findings and knowledge (e.g. under the George W. Bush administration)

– global

· Gender and culture affect what we choose to study, our perspectives when approaching
scientific phenomena and strategies for studying them
· For example: research about evolutionary biology. US primatologists focused on male
dominance and associated mating access. Japanese researchers focused on status and
social relationships, values holding higher relative importance in Japanese society.
· A diversity of scientists is important for reducing bias and for providing different
worldviews
· Global competition is a theme that resonates throughout many scientific discoveries
· Space race (20th-century competition between two Cold War rivals, the Soviet
Union (USSR) and the United States (US), to achieve firsts in spaceflight capability.
Sparked one of the most famous periods in scientific research related to space
exploration.

Analyse the influence of ethical frameworks on scientific research over time, including but
not limited to
– human experimentation

· People running clinical trials have legal obligations set out in Medicines for Human Use
(Clinical Trials) Regulations 2004
· Anyone taking part in a trial must have a full understanding of the objectives of the
research, and any risks and potential inconveniences they may experience when
taking part. This information will be given to them at a meeting with a member of
the research team
· A point of contact must be provided so patients can obtain more information
about the trial
· Before a clinical trial of a new medicine can begin, all of the following must be in place:
· The science the research is based on must be reviewed by experts
· The researchers must secure funding
· An organisation, such as a hospital or research institute, must agree to provide a
home base for the trial
· The Medicines and Healthcare Products Regulatory Agency (MHRA) needs to
review and approve trials of a medicine and issue a clinical trial authorisation (CTA)
· A recognised ethics committee must review the trial and allow it to proceed

Important ethical considerations

· Informed consent
· Privacy and anonymity
· Safe data storage and confidentiality

– experimentation on animals

· Early scientists, anatomists, chemists studying poisons did experiments on animals


without much concern from Ancient Greeks onwards.
· Robert Koch in the late 1800s used animals to prove that particular bacteria caused
particular diseases. Used Guinea Pigs to find the bacterium which caused Tuberculosis
but when he claimed he had used them to help find a cure for TB which didn’t seem to
actually work, public opinion turned against him and his use of guinea pigs.
· 1937 – An American company released a drug using a new solvent (diethylene glycol). It had not been tested on animals. The solvent turned out to be highly toxic and more than 100 people died.
· 1938 – After a huge public outcry, America passed a law requiring all new drugs to be tested on animals
· 1970s – Australian Philosopher Peter Singer argues for the moral and ethical rights of
animals. The ethics of animal experiments start to be questioned again by many in
society. This questioning still exists today.
· The 3 Rs that guide animal experimentation
· Reduce – use as few animals as possible
· Refine – refine experiments to minimise animal pain and distress as much as possible
· Replace – wherever possible, do non-animal experiments

– biobanks

· These are stores of human biological samples e.g from patients with particular diseases
· Samples are used by researchers for developing
· New diagnosis systems
· New medications
· Genomics
· Issues include
· Patient consent
· Ownership/control over data
· Benefits to patient or family if material is used to help discover a medicine
(profitable)
· Privacy rights

– use of research data

Who owns the data?


· The traditional view is that information discovered is the intellectual property of the scientists who did the work, or of the organisation/company that paid them
· When data comes from humans or the natural world, there are questions surrounding who owns the information, e.g. the human genome

Who controls how it is used?


· “Dual-use research” – some data can be used for good or bad
· For instance, atomic energy research may also be used to make atomic bombs

Privacy of participants in studies


· For human studies, large-scale hacking of data is becoming more prevalent despite “anonymised” data
· DNA data can be specific to individuals, raising an ethical issue about the right to privacy, which can have other implications, e.g. health insurance costs
Module Two: The Scientific Research Proposal
DEVELOPING THE QUESTION AND HYPOTHESIS
Inquiry question: What are the processes needed for developing a scientific research
question and initial hypothesis?

Conduct an initial literature search, from one or more areas of science, to identify the
potential use of a contemporary, relevant publicly available data set

The purpose of literature


· The scientific process starts and ends with sharing research findings
· Scientific research is published in peer reviewed journals
· Peer reviewed ⟶ has been checked by experts in the field who ensure that
· the author has demonstrated an appropriate understanding of the field of
investigation
· has used proven methods to analyse data
· has made logical conclusions

Data sets
· Science is data driven. Therefore, when planning an investigation, scientists need to
ensure they will have access to data that will enable them to test their hypothesis
· Electronic sensing has enabled scientists to take measurements very frequently or over a
large time scale (leading to large data sets). This is advantageous as:
· anomalies are more easily identified
· errors in data can be quantified more accurately

Where can data be obtained from?


· Gather it yourself. Consider:
· If the data collected will qualify as a large data set
· If the sensors/equipment available will enable the hypothesis to be tested
· If a statistical analysis of the data can be carried out
· Access data through government and scientific organisations such as:
· Australian Bureau of Statistics
· Bureau of Meteorology
· United States Geological Survey
· Using online dataset repositories e.g GitHub
· Contact academics working in your chosen field

Evaluating data sets (all felines must purr)


· Able to acknowledge source/ownership of data
· File format that can be imported into data analysis software that you have access to
· Measurements (of independent and dependent variables)
· Permission to use data
· Low uncertainty in the data to make meaningful conclusions
· Relevance (to area of investigation)
· Data range (broad enough to enable you to make meaningful generalisations)
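A quick first pass over a candidate data set can check several of the criteria above. This sketch uses an invented in-memory CSV with hypothetical column names to stand in for a downloaded file:

```python
# First-pass checks on a candidate data set: size, types, gaps and range.
import io
import pandas as pd

csv_text = """date,rainfall_mm,temperature_c
2023-01-01,4.2,31.5
2023-01-02,0.0,33.1
2023-01-03,12.8,29.4
"""
df = pd.read_csv(io.StringIO(csv_text), parse_dates=["date"])

print(df.shape)        # enough rows for meaningful analysis?
print(df.dtypes)       # measurements imported as numbers, not text?
print(df.isna().sum()) # gaps that will need handling?
print(df.describe())   # data range broad enough to generalise from?
```

For a real data set, replacing `io.StringIO(csv_text)` with the file path is all that changes; the same four checks flag most import and quality problems early.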
Develop a scientific research question from the literature search

What are features of a good inquiry question?


· Links to and expands on the knowledge gap in the problem identified
· Is open to research ⟶ an understanding of the question can be obtained by research
· Has an unknown answer ⟶ otherwise it violates the spirit of genuine inquiry
· May have multiple possible answers ⟶ beyond yes/no
· Has a clear focus ⟶ either direct and specific, or with clear sub-questions. May need to
be revised and focused as the research continues.
· Is reasonable ⟶ credible information to research question scientifically
· Avoids/rephrases questions with a premise ⟶ e.g why do we only use 3% of our brain?
· All terms in question can be defined ⟶ e.g ‘most recent’
· Can prompt new questions once information is gathered

Class summary
· Identify a problem
· Literature review
· Extensive – range of literature
· Peer reviewed
· Overview of science already conducted
· Further research
· Relationships between literature
· Seek advice from experts
· Depth of understanding
· CONSIDER: ethics, safety, expense, equipment, time, environmental impacts

Formulate an initial scientific hypothesis based on the scientific research question

A hypothesis is a proposed explanation for a phenomenon that can be tested by


experimentation.

They are accepted or rejected on the basis of findings arising from the investigation.
· Rejection of the hypothesis may lead to new, alternative hypotheses.
· Acceptance of the hypothesis as a valid explanation is not necessarily permanent.

Features of a good hypothesis


· Based on observations and prior knowledge of the system
· Offers an explanation for an observation
· Refers to only one independent variable
· Is written as a definite statement and not a question
· Is testable by experimentation
· Leads to predictions about the system
Evaluate the resources associated with the initial scientific hypothesis derived from the
literature in terms of:
– the scope to perform an investigation to obtain primary data

Important to determine whether intended area of research can be done by performing a


first-hand investigation in a school laboratory. For instance, modifying my experiment to a
smaller scale that can be completed in the lab (rather than industry level biofuel
production).

– the availability of secondary-sourced data


– the availability of a relevant publicly available data set(s)

Relevant data sets, on the same or similar areas of study, can be used to help conduct,
discuss or analyse the project.

– reliability and validity

Reliability

Author
· What to look for: Reputable sites often have “.org”, “.edu” or “.gov” suffixes, BUT these sites may contain bias and only present one side of the information. “.com” sites CAN be reputable, unbiased and factual. The author’s credentials (education, published work etc.) should be investigated, and they should be an expert in the field.
· Evidence to back up judgement: Provide the website name/suffix and comment on why you think it is a respected source. Provide credentials and evidence of respect/work in the field in question.

Bias
· What to look for: Commercial, political or economic interests.
· Evidence to back up judgement: Comment on the motivations of the author or organisation.

Content
· What to look for: Presence of first-hand investigations conducted by scientists, with data, results, graphs and conclusions formed. Data should be gathered using an appropriate method and measuring devices. Presence of detailed examples.
· Evidence to back up judgement: Refer to the types of evidence (data, graphs, examples) present in the source.

Date
· What to look for: The date of publication of an article or book, or the date of the last update for a website.
· Evidence to back up judgement: Provide the date of publication of the article or the last time the website was updated.
Accuracy

The information is factually consistent with information from multiple reliable sources
· What to look for: You have compared the information with sources like your textbook, webpages, class notes and peer-reviewed journals (as appropriate).
· Evidence to back up judgement: Name/identify the sources that you have compared (for example, your text). Give examples of accurate data.

Validity

Information is relevant to the topic or question
· What to look for: Key words, data, explanations, diagrams etc. that relate directly to the question. Not everything in the source needs to be relevant for the source to be valid.
· Evidence to back up judgement: Identify the types of information (diagram, graph, explanation) that are relevant to your question.

Information is accurate and reliable
· What to look for: Use the previous factors to make an overall judgement on reliability and accuracy. If the source is not reliable and not accurate, then it cannot be valid.
· Evidence to back up judgement: Refer to judgements made on reliability and accuracy using the above criteria.

Any data given in first-hand investigations has been gathered using an appropriate, valid scientific method
· What to look for: The method contains controlled variables and is a fair test. The experiment or report may be peer reviewed.
· Evidence to back up judgement: Refer to some specifics on how the testing was fair, or how the data was gathered.

– assessing the current state of the theory, concept, issue or problem being considered

Include:
· Current research occurring in your intended area of study
· Innovations occurring in this field
· Relevance to wider society (developing solutions for problems, helping us in our daily
lives etc)
Assess the process involved in the development of a scientific research question and
relevant hypothesis

Problem: Research question is too broad.
· Description: Attempts to cover too much information or answer too many questions. Preliminary searches for information may produce thousands of resources. Components cannot be covered in detail within the scope of the project. Only general information can be provided, as there are too many details for depth to be provided.
· Solution: Identify one aspect of the topic and focus on it. Concentrate on a specific factor/variable rather than a group of variables. Limit the topic geographically or chronologically.

Problem: Research question is too narrow.
· Description: Unable to find enough information that addresses the question. Possibly no quality findings have been reported on the question, or the question has never been tested.
· Solution: Expand the number of subjects. Expand the geographic range. Expand the timespan. Use research to brainstorm related but broader topics.
Further questions:
· Does the project address a relevant and important issue?
· Has the project already been done? If possible, try to find a new angle
· Is it feasible? Can necessary equipment be accessed? Can it be completed within the
timeframe?
SCIENTIFIC RESEARCH PROPOSAL
How is scientific research planned, based on a relevant hypothesis?

Conduct a detailed literature review to support the validity, significance and


appropriateness of the scientific research question

A literature review is a critical analysis of published sources, or literature, on a particular


topic. It assesses literature and provides a summary, classification, comparison and
evaluation.

Features of a good literature review:


· Provides a clear statement of the topic area
· Provides a range of research on the topic (good and bad data)
· Critically analyses a selected topic using published knowledge
· Provides an indication of what further research may occur
· Identifies areas of controversy in the literature

Writing style:
· Precise
· Formal
· Quantitative information
· Objective

Structure:
· Introduction
· Introduce widely accepted core concepts
· Highlight importance of the review
· Discuss core aim of review
· List points/topics in order
· Main Body
· Group topics according to common elements
· Back up main points with research
· Focus on recent data
· Summarise individual studies or articles
· One key point per paragraph
· Sub-headings to group points/topics
· Diagrams, figures, tables to discuss point
· Conclusion
· Follow from introduction
· Summarise major research contributions to scientific field
· Point out gaps in research
· Highlight potential future studies
Formulate a final scientific hypothesis based on the scientific research question

A null hypothesis represents the traditional approach: that no relationship or
significant difference exists between groups.

An alternative (or directional) hypothesis makes a prediction about the expected outcome.
It is based on prior readings, research and studies on a topic that suggest a potential
outcome.

A non-directional hypothesis makes the prediction that a change will occur but the direction
is not specified.

Hypothesis features
· Focuses on something testable
· Includes an independent and dependent variable
· Variables can be manipulated
· Can be tested without violating ethical standards

Develop the rationale and possible outcomes for the chosen scientific research

Rationale

A rationale is a justification for choosing the topic of study, explaining why the research was
performed.

Questions that can be asked


· What is the issue about? Why is it important?
· Why is there a need to conduct the study?
· How should the issue be resolved?

Outcomes

Outcomes provide an overview or general statement of what the research intends to
achieve. They should specify what specific items will need to be completed to meet
the goal. They can form timelines/benchmarks for the project. Normally 2-3 are written.

Develop a detailed plan to investigate the scientific hypothesis including:

– The overall strategy

Research is classified into two main classes: fundamental and applied research.

Fundamental research investigates basic principles and reasons for the occurrence of a
particular event, process or phenomenon.
Applied research involves solving problems using well known and accepted theories and
principles.

– Methodology

Methodology describes the procedure to follow so that the researcher can address the
objectives/goal. It refers to the researcher’s justification or reasoning behind using a specific
method.

A method refers to the specific steps the researcher will take to conduct the experiment.

An effective methodology should:


· Introduce the overall approach for investigating the problem
· Indicate how the approach fits the overall research design
· Describe specific methods of data collection that will be used
· Explain how results will be analysed
· Provide background and a rationale for methodologies unfamiliar to readers
· Provide a justification for subject selection and sampling procedure
· Describe potential limitations

– Data analysis (statistical analysis)


– Representation and communication of the scientific research

Consider:
· How you will present your data (tables/graphs) – affects how you collect it
· The analysis you need to perform (look for trends as you perform the investigation)
· Problems in your investigation – so methodology can be altered if necessary

Timelines
Timelines allow the key dates you need to work around to be tracked.

– Benchmarks

Dates and targets you set yourself in your project.

Critically analyse the scientific research plan to refine and make appropriate amendments

In the logbook, the experiment should be refined and analysed as it is planned. Changes and
evidence of planning and evaluation should be noted.

Employ accepted referencing protocols, for example:


– APA
– Harvard
– MLA
METHODOLOGY AND DATA COLLECTION
How is an appropriate methodology developed to collect valid and reliable data?

Assess and evaluate the uncertainty in experimental evidence, including but not limited
to:
– systematic errors

Systematic errors are ones that consistently cause the measurement value to be either too
large or too small.

Systematic errors can be very difficult to identify.

Minimising
The best way to increase certainty in measurements is to devise an experiment that measures
the same quantity by a completely different method, one unlikely to have the same error. If the
new technique produces different results, one or both experiments may contain
unidentified systematic errors. If measurements made with different measurement
techniques agree, this suggests that neither measurement contains a systematic error.

Causes
· Faulty equipment such as mis-calibrated balances or inaccurate stopwatches.
· Incorrectly used equipment
· Forgetting to subtract weight of container when finding mass of substance
· Converting units incorrectly

Example
· Timing a running race
· Delay caused by time it takes for sound to reach the ears of the timer.
· More delay caused by the official’s reaction time being longer at the start of the race
rather than the finish (where the runner’s motion can be used to anticipate when to use
the watch).

– random errors

Random error is where variations in the measurements occur without a predictable pattern.
Sometimes above and sometimes below actual value which causes uncertainty.

Minimising
We can determine how much error our measurements have by repeating the
measurements many times. If results are identical or nearly same, this indicates a small
amount of random error. If different each time, random error is affecting results.

Random errors can be reduced but never eliminated. Does not always prevent
measurements from being useful, but contributes to measurement uncertainty.
Assess and evaluate the use of errors in:
– mathematical calculations involving degrees of uncertainty

There is always some difference between the measured value and the actual value.
There is uncertainty associated with every measurement.

If we use a measured value to make a calculation, the results of the calculation will not be
exactly correct, so there is uncertainty associated with the calculation.

If we can quantify the uncertainty for a measurement, we can use the measurement with
confidence. We do so by specifying a range of values between which we are absolutely
certain the true value of our measurement lies.

Range of values = X ± ΔX

X = the measurement value

ΔX = half the range
   = (highest possible value – lowest possible value) / 2
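As an illustration, the half-range calculation can be sketched in a few lines of Python (the readings below are invented values, not data from any particular experiment):

```python
# Half-range uncertainty for a set of repeated measurements.
# The readings are hypothetical, e.g repeated temperature readings.
readings = [24.6, 24.9, 25.3, 24.8, 25.1]

x = sum(readings) / len(readings)              # best estimate: the mean
delta_x = (max(readings) - min(readings)) / 2  # half the range

print(f"measurement = {x:.2f} ± {delta_x:.2f}")
```

The result would be reported as X ± ΔX, here 24.94 ± 0.35.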

– graphical representations from curves of best fit

Error bars show data variability on a plot of mean values

Types of error bars


· max/min
· standard deviation – demonstrates data variability, but no comparison between means is possible.
· standard error – if bars overlap, any difference in means is not statistically significant. If
bars do not overlap, nothing can be concluded.
· 95% confidence interval – if bars overlap, nothing can be concluded. If bars do not overlap, the
difference is statistically significant – the interval is expected to contain the population mean 95% of the time.

Compare quantitative and qualitative research methods, including but not limited to:

– Design of method

Design and method
· Qualitative: flexible, specified only in general terms in advance of the study. Non-intervention. All descriptive. Considers multiple variables. Small group.
· Quantitative: structured, inflexible, specified in detail in advance of the study. Intervention. Considers only a few variables. Large group.

Purpose
· Qualitative: to explain and gain insight and understanding of phenomena through intensive collection of narrative data. Generate a hypothesis to test. Inductive process.
· Quantitative: to explain, predict and/or control phenomena through focused collection of numerical data. Generate a hypothesis to test. Deductive process.

Approach to inquiry
· Qualitative: subjective, holistic, process-oriented.
· Quantitative: objective, focused, outcome-oriented.

Hypothesis
· Qualitative: tentative, evolving, based on the particular study.
· Quantitative: specific, testable, stated prior to the particular study.

Research setting
· Qualitative: controlled settings not important.
· Quantitative: controlled as much as possible.

– Gathering of data

Sampling
· Qualitative: non-random. Intent to select a small group, not necessarily representative, in order to gain in-depth understanding.
· Quantitative: random. Intent to select a large representative sample in order to generalise results to a larger population.

Measurement
· Qualitative: non-standardised, narrative, ongoing.
· Quantitative: standardised, numerical.

Data collection strategies
· Qualitative: documents and artefacts (something observed), interviews/focus groups, questionnaires, extensive and detailed field notes.
· Quantitative: observations, specific numerical measurements.

– Analysis of data

Data analysis
· Qualitative: raw data is in words. Ongoing; involves using observations/comments to come to a conclusion.
· Quantitative: raw data is statistically analysed to come to a conclusion.

Data interpretation
· Qualitative: conclusions can change and are reviewed on an ongoing basis; conclusions are generalisations. The validity of the generalisations is the reader’s responsibility.
· Quantitative: conclusions and generalisations are formulated at the end of the study, stated with a predetermined degree of certainty. Inferences and generalisations are the researcher’s responsibility. Never 100% certain of findings.
Investigate the various methods that can be used to obtain large data sets, for example:
– remote sensing

“Big data is a field that treats ways to analyse, systematically extract information from, or
otherwise deal with data sets that are too large or complex to be dealt with by traditional
data-processing application software”

Remote sensing is the science of obtaining information about objects or areas from a
distance, typically from aircraft or satellites.

Objects on Earth can be detected and classified (objects on surface, atmosphere and
oceans).

Sensors are used to detect propagated signals (e.g electromagnetic radiation) emitted by or
reflected off an object.

Today, anyone with access to the Internet can view high-quality satellite images of
anywhere on earth at any time.

Satellite imagery can be used for tasks ranging from crop assessment and ecosystem
mapping to monitoring overgrazing, erosion, flooding and bushfires.

– streamed data

Streaming data is data that is continuously generated by different sources. Such data should
be processed incrementally using stream processing techniques without having access to all
the data. In addition, it should be considered that concept drift may happen in the data
which means that the properties of the stream may change over time.

Examples: GPS sensors, measuring climatic data from different locations.

Propose a suitable method to gather relevant data, including large data set(s), if
appropriate, applicable to the scientific hypothesis
PROCESSING DATA FOR ANALYSIS
Inquiry question: How is data processed so that it is ready for analysis?

Investigate appropriate methods for processing, recording, organising and storing data
using modern technologies
Data processing is the conversion of data into usable and desired form. It
is carried out using a predefined sequence of operations, either manually
or automatically.

Data storage typically occurs in digital form. This allows the user to
perform a large number of operations in a short period of time. This is
important with the emergence and growing emphasis on big data.

Stages: data collection → data storage → data sorting → data processing →
data analysis → data presentation and conclusions.

Methods of data processing
1. Manual data processing (no use of a machine, tool or electronic device)
2. Mechanical data processing (use of mechanical devices or very simple
electronic devices; good for simple processing)
3. Electronic data processing (modern technique with the highest reliability
and accuracy, using a computer that automatically processes data)

Each stage, from data collection to presentation, has a direct effect on the output
and usefulness of the processed data.

Conduct a practical investigation to obtain a qualitative and a quantitative set of data and
apply appropriate methods to process, record, store and organise this data

Example: measuring acidity and basicity of common household substances using a pH probe
(quantitative) and universal indicator (qualitative).

Assess the impact of making a large data set from scientific sources public, for example:
– Large Hadron Collider (world’s largest and most powerful particle accelerator, has fed
data to four large experimental collaborations, resulting in over 2,000 scientific papers)
– Kepler Telescope (detected thousands of exoplanets)
– Human genome project (allowed scientists to begin mapping the blueprint of building a
person, impacts on medicine, biotechnology and life sciences)
Advantages of open data repositories

· Transparency
· Innovation
· Efficiency
· Economic benefits
· Public engagement

Disadvantages of open data repositories

· Consent and ethics


· Acquiring data
· Competence of people analysing data
· Access and updating of data can slow servers
· Large amounts of storage needed
· Need for specific software to analyse
· Security risks

Conduct an investigation to access and obtain relevant publicly available data set(s),
associated with the proposed hypothesis, for inclusion in the development of the
Scientific Research Project

Data set of the compositions of different paper sources was obtained and used to analyse
my data.
Module Three: The data, evidence and decisions
PATTERNS AND TRENDS
What tools are used to describe patterns and trends in data?

Analyse and determine the differences between data and evidence

Data is factual information such as numbers, percentages and statistics. Data can
exist on its own.

Evidence is data that is relevant and furnishes proof that supports a conclusion.
The data must support or refute the variables on which the hypothesis relies.

Example: seed germination

Data: the size of each seed; the mass of each seed; the type of seed; the type of
paper towel; the number of seeds that germinate.
Evidence: the number of seeds that germinate.

Describe the difference between qualitative and quantitative data sets, and methods used
for statistical analysis, including but not limited to:
– content and thematic analysis

Used for qualitative data.


Analysis of words, observations, images or symbols.

1. Become familiar with the data and read through it several times to look for basic
observations or patterns. This includes transcribing the data
2. Revisit the research objectives. Identify questions that can be answered through the
data collected.
3. Develop a framework. Identify broad ideas, concepts, behaviours or phrases and
assign codes to them. This allows data to be labelled and structured.
4. Identify patterns and connections. Once the data is coded, the research can identify
themes, looking for common responses to questions, identifying relevant data and
patterns to answer the questions and finding areas that can be explored further.

Patterns are derived through common words, themes or concepts.


Advantages
· Provides a great deal of flexibility.
· Many theories can be applied to the process, across a range of epistemologies.
· Well suited to large data sets.
· Permits researchers to extend the scope of study beyond individual experiences.
· Works well with multiple researchers.
· Helps in the interpretation of themes backed up by data.
· Allows categories to evolve from the data.

Disadvantages
· Reliability is the main concern, due to the wide variety of interpretations possible from
different researchers.
· Might miss variations in the data.
· The flexibility of the analysis can make it difficult to maintain focus on one thing.
· Finding and verifying themes and codes can blur together.
· If the analysis excludes a theoretical framework, interpretive power is limited.
· It is very difficult to maintain a sense of continuity of the data in each single account.
· Does not allow researchers to make claims about language usage.

– descriptive statistics

First level of analysis. Helps researchers summarise new data and find patterns.

· Mean: numerical average of a set of values.


· Median: midpoint of a set of numerical values.
· Mode: most common value among a set of values.
· Percentage: used to express how a value or group of respondents within the data relates
to a larger group of respondents.
· Frequency: the number of times a value is found.
· Range: the difference between the highest and lowest values in a set of values.
Provide absolute numbers but do not explain rationale or reasoning behind numbers.
Summarise individual variables, allowing them to be compared and patterns to be identified.
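As a sketch, these descriptive statistics can be computed with Python’s standard statistics module (the data set below is invented for illustration):

```python
import statistics

# Hypothetical example data set: germination times (days) for ten seeds.
data = [4, 5, 5, 6, 6, 6, 7, 8, 9, 12]

mean = statistics.mean(data)         # numerical average
median = statistics.median(data)     # midpoint of the ordered values
mode = statistics.mode(data)         # most common value
frequency_of_6 = data.count(6)       # number of times the value 6 is found
value_range = max(data) - min(data)  # spread between highest and lowest
pct_over_week = 100 * sum(d > 7 for d in data) / len(data)  # percentage over 7 days

print(mean, median, mode, frequency_of_6, value_range, pct_over_week)
```

For this data: mean 6.8, median 6, mode 6, a range of 8, and 30% of seeds taking more than a week.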

Select and use appropriate tools, technologies and/or models in order to manipulate and
represent data appropriately for a data set, including but not limited to:
– spreadsheets
– graphical representations
– models (physical, computational and/or mathematical)
– digital technologies

Spreadsheets
· Uses: entering data; organising and sub-setting data; statistics; plotting and inserting graphs.
· Advantages: easy to collect and organise data; streamlines calculations (access to formulas); access by multiple users; generation of graphs.
· Disadvantages: not easy to visualise data from a spreadsheet alone; possibility of errors.

Graphical representations
· Uses: representing a dataset visually; communication in presentations.
· Advantages: visually display data; clearly show trends; easier to communicate.
· Disadvantages: vertical exaggeration; easy to manipulate and be misleading.

Models (physical, computational and/or mathematical)
· Uses: testing the hypothesis; used when experiments are hard to perform, unethical, impossible to perform, or on a very small or large scale.
· Advantages: simplify complex data sets; interactive and engaging.
· Disadvantages: time consuming to test and make; some things (e.g human behaviour) are difficult to model; can be complex, requiring expensive software.

Digital technologies
· Uses: GPS, data loggers, remote sensing, satellites.
· Advantages: improve efficiency and accuracy.
· Disadvantages: breaching privacy (e.g Facebook).

Assess the relevance, accuracy and validity of the data and determine error, uncertainty
and comment on its limitations

Relevance
· Relates to the aim of the experiment and the chosen topic of investigation

Validity
· A valid experiment is a fair test.

A method is valid if:


· It investigates what you think it will investigate
· It incorporates suitable equipment
· Variables are controlled
· Appropriate measuring procedures are included
Discussions about validity must:
· Identify what validity is
· Identify factors that affect the validity of a particular experiment (controlled variables,
equipment, range of values etc)
· Assess the overall validity of the experiment

Accuracy
· Accuracy depends on the design of the experiment (i.e validity of method) and the
sensitivity of instruments used.

Results are accurate if:


· They are close to the true value of the quantity being measured
· They can be substantiated in secondary sources

· Precision of a measurement system refers to how close the agreement is between


repeated measurements (which are repeated under the same conditions).

Error
· The standard error is a measure of the accuracy of the estimate of the mean from the
true or reference value
· The main use of the standard error of the mean is to give confidence intervals around
the estimated means for normally distributed data
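A minimal sketch of the standard error calculation in Python, using the usual s/√n definition and an invented sample:

```python
import math
import statistics

# Hypothetical sample of five measurements.
sample = [1, 2, 3, 4, 5]

s = statistics.stdev(sample)     # sample standard deviation (n - 1 denominator)
se = s / math.sqrt(len(sample))  # standard error of the mean

# Approximate 95% confidence interval for normally distributed data.
mean = statistics.mean(sample)
ci_low, ci_high = mean - 1.96 * se, mean + 1.96 * se
```

The interval (ci_low, ci_high) is the confidence interval around the estimated mean mentioned above.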

Uncertainty
· Uncertainty, or confidence, is described in terms of mean and standard deviation of a
dataset. It is the quantitative estimation of error present in data (all measurements
contain some uncertainty generated through systematic error and/or random error).
· The uncertainty of a single measurement is limited by the precision and accuracy of the
measuring instrument and other factors affecting the ability of the experimenter to
make a measurement
· Measurement = measured value ± standard uncertainty

Evaluate the limitations of data analysis and interpretation

· Having the necessary skills to analyse


· Selecting appropriate analysis techniques
· Bias within conclusions
· Lack of clearly defined and objective outcomes
· Faults within question phrasing for surveys
· False correlations – that are analysed as causation
· Data fishing – large volumes of data are analysed for the purpose of discovering
relationships between data points, regardless of the aim
· Omission – only including data that supports your view and findings
· Misleading data representation – relating to the scales used
· Purposeful and selective bias – deliberate attempt to influence data findings
· Small sample sizes – can cause invalid percentages to be calculated

STATISTICS IN SCIENTIFIC RESEARCH


How does statistical analysis assist in finding meaning in the trends or patterns in data sets?

Apply appropriate descriptive statistics to a data set(s), including but not limited to:
– mean

The mean is an estimate of the ‘true’ value of the measurement.


It is the average of all data entries.
It is a measure of central tendency for normally distributed data.

It is calculated by adding up all data entries and dividing by the total number of data entries.

x̄ = (x₁ + x₂ + … + xₙ) / n = ∑x / n

Main disadvantage: particularly susceptible to influence of outliers.


Median is preferred over mean when data is skewed.

– median

Middle score for a set of data that has been arranged in order of magnitude.
Less affected by outliers and skewed data.

When to use the mean, median and mode:

Type of variable Best measure of central tendency


Nominal Mode
Ordinal Median
Interval/Ratio (not skewed) Mean
Interval/Ratio (skewed) Median

– standard deviation

The standard deviation is a measure of the spread of scores within a data set.
It can be used in conjunction with the mean to summarise continuous data, not categorical
data.

s = √( ∑(X − X̄)² / (n − 1) )

s = sample standard deviation
∑ = sum of …
X̄ = sample mean
n = number of scores

A small standard deviation indicates low variability (scores are close together).
A large standard deviation indicates high variability (scores are more spread out).
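The formula can be checked by computing s directly from its definition and comparing it with Python’s statistics.stdev (the scores below are invented):

```python
import math
import statistics

# Hypothetical scores.
scores = [2, 4, 4, 4, 5, 5, 7, 9]

x_bar = sum(scores) / len(scores)             # sample mean
ss = sum((x - x_bar) ** 2 for x in scores)    # sum of squared deviations
s_manual = math.sqrt(ss / (len(scores) - 1))  # s = sqrt(SS / (n - 1))

# The standard library gives the same result.
s_library = statistics.stdev(scores)
```

Both give s ≈ 2.14 here, a fairly small spread around the mean of 5.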

Apply appropriate performance measures to the statistical analysis of quantitative data


set(s) obtained from conducting a relevant practical investigation, including but not
limited to:
– error

Lead to a:
· Consistent difference in true value (systematic error)
· Variance about the true value (random error)

– accuracy

Relates to use of apparatus.

– precision

How close measurements of the same item are to each other.

– bias

Bias from an experimenter has a constant magnitude and direction so averaging over a large
number of observations doesn’t minimise its effect.

Examples include
· Selection bias (studying a group not representative of the study e.g wealthy Sydney
suburb to represent Australia)
· Expectancy bias (looking for an expected result due to prior knowledge)
· Response bias (subjects answered untruthfully or withheld information)
· Reporting bias (selective choice of data/findings to prove hypothesis)

– data cleansing

Data cleansing is the process of detecting and correcting data quality issues.
It can include computers identifying missing or incomplete data, manual steps such as
repeating trials or manually correcting for calibration or measurement error.

Example: reviews (companies eliminating one-star ratings from their reviews, which is unethical data cleansing)

Apply appropriate performance measures to the statistical analysis of a data set(s)


relevant to the Scientific Research Project

Apply appropriate statistical tests of confidence to a data set(s), including but not limited
to:

· The null hypothesis states that there is no difference between the groups you are
testing.
· When interpreting results, you work to either accept or reject the null hypothesis.
· Accepting the null hypothesis means there is no difference in any samples being
compared. Rejecting the null hypothesis means there is a significant difference between
samples that most likely did not occur by random chance.
· The null hypothesis is given as H0. The “alternative” hypothesis is H1.
· Most statistical tests involve the calculation of a p-value (probability value).
· The p-value is the probability of finding the observed results when the null hypothesis is
true.
· An α-value is the value set by the experimenter to determine whether the null
hypothesis is rejected. Meanwhile, the p-value is determined statistically.
· In most statistical tests we set a value of 0.05 for the α-value. This is equivalent to a
percentage of 5%.
· P-value less than 0.05 = result is significant and there is a significant difference
between samples. Null hypothesis is rejected. Lower value = less chance of
results occurring naturally
· P-value greater than 0.05 = result is not significant, we accept the null hypothesis
and conclude there is no significant difference between samples.

– Student’s t-test

· Compares the means and standard deviations of two separate samples (mean and
variance in table)
· Paired → same individuals, e.g animals before and after eating a specific vitamin
· Unpaired → different individuals, e.g two groups of animals eating different feed
· A calculated t-statistic is then compared to a table of critical values
· Degrees of freedom = number of samples – 1
· For two sets of data: Degrees of freedom = total number of samples – 2
· Then find the p=0.05 value for degrees of freedom
· To interpret a t-test result you can either:
· Compare t-statistics directly (t-stat vs t-critical). If the t-stat is greater,
reject the null hypothesis; if it is smaller, accept the null hypothesis.
· Compare the p-value to the α-value of 0.05. If the p-value is below the α-value,
the null hypothesis is rejected.
· A one-tailed test looks for a difference in a particular direction (above OR below mean)
while a two-tailed test looks for any difference (above AND below mean).

Assumptions
· Random sampling
· Continuous numerical data
· Normal distribution
· Adequate sample size
· Equal variances

Types
· Two-tailed or one-tailed
· Paired or unpaired
· (Equal or unequal variances)
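A minimal sketch of an unpaired, equal-variance t-test in Python (the two feed groups below are invented; in practice a statistics package would also supply the p-value):

```python
import statistics

def t_statistic(a, b):
    """Unpaired two-sample t-statistic assuming equal variances."""
    n1, n2 = len(a), len(b)
    v1, v2 = statistics.variance(a), statistics.variance(b)
    # Pooled variance combines the two sample variances.
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (statistics.mean(a) - statistics.mean(b)) / (pooled * (1 / n1 + 1 / n2)) ** 0.5
    df = n1 + n2 - 2  # degrees of freedom: total number of samples - 2
    return t, df

# Hypothetical measurements for two groups of animals on different feed.
group_a = [1, 2, 3, 4, 5]
group_b = [2, 3, 4, 5, 6]
t, df = t_statistic(group_a, group_b)
# |t| would then be compared with the critical value for df at p = 0.05.
```

Here t = -1.0 with 8 degrees of freedom, which is well below the critical value, so the null hypothesis would be accepted.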

– Chi-squared test

· For categorical data to test how well observed data fits expected values
· The expected values can be either data from a previous observation or an expectation of
proportions.
· An X² value is calculated that is compared to a critical value table, similar to the one used
for t-tests
· E.g 5 tulip colours in a shop. 100 flowers present, so 20 of each colour are expected. 4
degrees of freedom (based on the number of colours, not the number of flowers)
· Null hypothesis is that there is no significant difference between the observed and
expected frequency of a variable.
· Chi-squared larger than critical value = reject null hypothesis (so significant difference
between observed and expected frequency of a variable)
· Chi-squared smaller than critical value = accept null hypothesis

Assumptions
· Random sampling
· Categorical variables
· Data must be actual frequencies or counts (not percentages)
· Independent study groups
· Mutually exclusive categories – each subject only contributes to one data point
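The tulip example can be sketched in Python; the observed counts below are invented, since the notes only specify the expected counts:

```python
# Chi-squared statistic for the tulip example:
# 100 flowers across 5 colours, so 20 of each colour are expected.
observed = [25, 15, 20, 30, 10]   # hypothetical observed counts
expected = [20, 20, 20, 20, 20]

chi_squared = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1  # 4 degrees of freedom (number of categories - 1)

# The critical value for df = 4 at p = 0.05 is about 9.488.
reject_null = chi_squared > 9.488
```

For these counts chi-squared = 12.5 > 9.488, so the null hypothesis would be rejected: the observed colour frequencies differ significantly from the expected ones.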

– F-test

· Used to compare two variances


· Ratio of two variances (equal variances = f-value of 1)
· Null hypothesis is that expected f-value is 1
· F-value is then compared to a table of critical values to determine whether null
hypothesis is accepted or rejected

Assumptions
· Random sampling
· Continuous numerical data
· Normal distributions
· Two populations/samples are independent of each other
· Sample 1 set must have the larger variance (as F-test is a ratio)
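A minimal F-value calculation in Python, with invented samples chosen so that sample 1 has the larger variance:

```python
import statistics

# Hypothetical samples; sample 1 must have the larger variance.
sample_1 = [3, 7, 11, 15, 19]   # sample variance 40
sample_2 = [5, 7, 9, 11, 13]    # sample variance 10

f_value = statistics.variance(sample_1) / statistics.variance(sample_2)
# f_value is then compared with a table of critical F values for
# (n1 - 1, n2 - 1) degrees of freedom; equal variances give f = 1.
```

Here f = 4, which would be checked against the critical value to decide whether the variances differ significantly.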

Alpha values (0.05 confidence interval)

We reject the null hypothesis if:

T-Test - (P value is less than 0.05)


ANOVA - (P value is less than 0.05)
Chi Squared – (P value is less than 0.05)
Pearson Correlation (R value is less than 0.95)
Multiple Regression (P value is less than 0.05)

Apply statistical tests that can determine correlation between two variables, including but
not limited to:
– correlation coefficient
Describe the difference between correlation and causation

Correlation is a statistical measure (expressed as number) that describes the size and
direction of a relationship between two or more variables. A correlation between variables,
however, does not automatically mean that change in one variable is the cause of the
change in the values of the other variable.

Causation indicates that one event is the result of the occurrence of the other event, i.e
there is a causal relationship between the two events. This is also referred to as cause and
effect.

Correlation

Correlation is a statistical technique that can show whether, and how strongly, pairs of
variables are related. For instance, height and weight are related.
Correlation works for quantifiable data.

Correlation quantifies the degree to which two variables are related.


Correlation does not fit a line through the data points.
You simply are computing a correlation coefficient (r) that tells you how much one variable
tends to change when the other one does.
Linear regression finds the best line that predicts Y from X. Correlation does not fit a line.
Correlation is almost always used when you measure both variables. It rarely is appropriate
when one variable is something you experimentally manipulate.
Linear regression is usually used when X is a variable you manipulate (time, concentration,
etc.)

• Correlation coefficient = r
• It ranges from -1.0 to +1.0. The closer r is to +1 or -1, the more closely the two
variables are related.

• If r is close to 0, it means there is no relationship between the variables.


• If r is positive, it means that as one variable gets larger the other gets larger. If r is
negative it means that as one gets larger, the other gets smaller (often called an
"inverse" correlation).
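A small sketch of computing r directly from its definition in Python (the x/y data is invented):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired measurements with a positive relationship.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
r = pearson_r(x, y)
```

For this data r ≈ 0.77: as one variable gets larger, the other tends to get larger, though not perfectly. Perfectly linear data such as y = 2x gives r = 1.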

Explain the requirements to establish causation

1. Strength. How strong is the correlation between the cause and the effect?
2. Consistency. Almost every study should support the association for there to be
causation. This is why numerous experiments have to be done before meaningful
statements can be made about the causal relationship between two or more factors.
3. Specificity. This is established when a single putative cause produces a specific
effect. This is considered by some to be the weakest of all the criteria.
4. Temporality. Exposure always precedes the outcome. If factor "A" is believed to
cause a disease, then it is clear that factor "A" must necessarily always precede the
occurrence of the disease.
5. Biological gradient. Also known as dose-response. A little exposure should result in a
little effect, a large exposure should cause a large effect.
6. Plausibility. The effect must have plausibility but should not violate well known laws
of the universe. However what is biologically plausible depends upon the biological
knowledge of the day.
7. Coherence. The association should be compatible with existing theory and
knowledge. In other words, it is necessary to evaluate claims of causality within the
context of the current state of knowledge within a given field and in related fields.
8. Experiment. Can be tested with an experiment.
9. Consideration of Alternate Explanations. In judging whether a reported association
is causal, it is necessary to determine the extent to which researchers have taken
other possible explanations into account and have effectively ruled out such
alternate explanations. In other words, it is always necessary to consider multiple
hypotheses before making conclusions about the causal relationship between any
two items under investigation.

Use available software to apply statistical tests appropriate to a large data set(s) to assist
with the analysis of the data

Excel – the Analysis ToolPak add-in can be used to apply statistical tests to data.
DECISIONS FROM DATA AND EVIDENCE
Inquiry question: How is evidence used to make decisions in the scientific research process?

Assess the benefits of collective and individual decision-making

Individual decision-making
· Advantages: don’t have to consult; less conflict; faster decisions; saves time, money and energy; more focused and rational.
· Disadvantages: narrower range of skills/expertise; individuals use their own intuition and views; individuals don’t take into account the interests of broader populations.

Collective decision-making
· Advantages: wider range of ideas; support from experts; editorial process; better accountability; more can be achieved with more collaborators/greater productivity; multiple perspectives; increased understanding and knowledge.
· Disadvantages: opinions can be swayed; collaboration can be time intensive; responsibility denied by leader.

Analyse patterns and trends arising from the data set(s) related to the Scientific Research
Project to:
– construct a relevant conclusion
– suggest possibilities for further investigation
Demonstrate the impact of new data on established scientific ideas, including but not
limited to one of the following:
– gravitational waves on general relativity
– mechanisms of disease transmission and control

John Snow. London was struck by a cholera outbreak in 1854. Cholera is spread by
contaminated food or water. Snow talked to local residents and used a spot map to plot the
locations of cholera cases. The cases clustered around the water pump on Broad Street, and
this pattern convinced authorities to disable the pump, limiting further spread.
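Snow's spot-map reasoning can be imitated with a toy computation: plot the cases as coordinates, find the centre of the cluster, and ask which water source it sits closest to. All coordinates and pump names below are invented stand-ins, not Snow's actual map data.

```python
import math

# Invented (x, y) grid coordinates standing in for plotted cholera cases
cases = [(2.1, 3.0), (2.4, 2.8), (1.9, 3.2), (2.2, 3.1), (2.0, 2.9)]

# Candidate water pumps (positions are illustrative only)
pumps = {"Broad Street": (2.1, 3.0), "Other pump": (6.0, 1.0)}

# Centre of the case cluster
cx = sum(x for x, _ in cases) / len(cases)
cy = sum(y for _, y in cases) / len(cases)

# The pump nearest the cluster centre — the pattern the spot map made visible
nearest = min(pumps, key=lambda name: math.dist(pumps[name], (cx, cy)))
print(nearest)  # "Broad Street"
```

The point of the sketch is that a simple spatial summary of new data (case locations) was enough to challenge the established "miasma" explanation and justify a practical intervention.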

– prediction of natural disasters


– effects of chemical pollutants on climate

Exam tips: analyse rather than describe; address the key words of the question; address all
aspects; monitor time; integrate stimulus material.
DATA MODELLING INQUIRY QUESTION
How can data modelling help to process, frame and use knowledge obtained from the
analysis of data sets?

Evaluate data modelling techniques used in contemporary science associated with large
data sets, including but not limited to: – predictive – statistical – descriptive – graphical

– predictive

Advantages:
· Allows informed decisions to be made about the future
· Allows for continuous adjustment with changing demand
· Allows scientists to estimate risk
· With large data sets, uncertainty is reduced
· Because a model makes the prediction, human bias in the final answer is reduced

Disadvantages:
· Data labelling – especially in machine learning, data must be labelled and
categorised appropriately; this process can be imprecise, full of errors, and
generally a colossal undertaking
· Obtaining large enough data sets
· Bias in algorithm construction can produce misleading results
· Availability of data from the real system
· Not an exact measurement – data modelling is based on conditions that are known
and present in a particular environment; future changes are not accounted for
– statistical

Advantages:
· Often secondary data, so it is cheap and less time consuming because someone else
has compiled it
· Patterns and correlations are clear and visible
· Taken from large samples, so generalisability is high
· Can be used and re-used to check different variables
· Can be repeated to check changes, which increases reliability and
representativeness

Disadvantages:
· The researcher cannot check validity and cannot establish a mechanism for
causation – only patterns and correlations can be drawn from the data
· Statistical data is often secondary data, which means it can easily be
misinterpreted
· Statistical data is open to abuse – it can be manipulated and phrased to show the
point the researcher wants to show (affects objectivity)
· Because it is often secondary data, it is hard to access and check
– descriptive

Advantages:
· Can summarise large volumes of data; there are no uncertainties about the values
obtained (other than measurement error)
· Simpler to compute and model
· Prepares data for further analysis

Disadvantages:
· Only allows conclusions about the people or objects actually measured
· Limited for making predictions, as data is only described, not tested for
significance
· Can change over time
· Does not identify cause
– graphical

Advantages:
· Explains mathematical relationships
· Visually appealing
· Easy representation of data
· Represents frequency
· Can be easy to prepare
· Versatile and widely used
· Allows for easy, simple comparisons

Disadvantages:
· Data can be misinterpreted
· Can breed complacency
· Can be time consuming due to data cleansing
· Can lose precision and accuracy depending on the plot
· Some graphical models need specialist knowledge to understand and can be
misinterpreted
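A predictive model in its simplest form is a least-squares line fitted to past observations and extrapolated forward. The sketch below uses only the standard library; the yearly readings are invented for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + c."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = [0, 1, 2, 3, 4]                 # years since start of study
readings = [10.2, 11.1, 11.9, 13.0, 13.8]  # hypothetical measurements

m, c = fit_line(years, readings)
prediction = m * 5 + c                  # extrapolate one year ahead
```

Note how this illustrates the limitation flagged in the table: the extrapolation assumes the conditions behind the observed trend continue unchanged, so future changes are not accounted for.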
