Contrastive Analysis
Second Language Acquisition (SLA) is the study of how second (or additional) languages
are acquired. It is a relatively new field of study, emerging from its parent discipline of Applied
Linguistics in the wake of the failure of behaviorism to offer a satisfactory explanation for
first or second language acquisition.
Among the major research questions in SLA are the following:
• To what extent are the processes of SLA the same as those of first language acquisition?
• Why is SLA seldom, if ever, as successful as first language acquisition?
• Why do some learners learn better and/or faster than others?
• Why do learners make errors?
• How does the first language (L1) affect the learning of the second (L2)?
• Does instruction help – and, if so, how and why?
In attempting to answer these questions, researchers draw on the findings of other
'feeder' disciplines, such as linguistics, psychology, neurology and sociology. Since it is still
impossible to get 'inside' the brain of a learner, researchers use as data the output that learners
produce (including their errors), the input that they are exposed to, the various physical and
psychological factors that might be implicated, such as age, aptitude, motivation and learning
style, and the various contextual factors, such as whether the learning is instructed or naturalistic.
Since the demise of behaviorism, a great many new theories have emerged to account for
second language acquisition. All the theories of SLA are meant to account for the working of the
human mind, and all use metaphors to represent this invisible reality. The major theories of SLA
over the past half-century are introduced below.
1.1. Behaviorism
Skinner, along with other scholars, proposed his theory of behaviorism which studied
human and animal behavior only in terms of physical processes, without reference to mind. It was
based on the view that all learning – including language learning – occurs through a process of
imitation, practice, reinforcement and habit formation. According to behaviorism, the
environment is crucial not only because it is the source of the linguistic stimuli that learners
need in order to form associations between the words they hear and the objects and events they
represent but also because it provides feedback on learners' performance. Behaviorists claimed
that when learners correctly produce language that approximates what they are exposed to in the
input, and these efforts receive positive reinforcement, habits are formed.
Behaviorism came under attack when Chomsky questioned the notion that children learn
their first language by repeating what they hear in the surrounding environment. He argued that
children produce novel and creative utterances – ones that they would never have heard in their
environment. Researchers asserted that children's creative use of language showed that they
were not simply mimicking what they heard in the speech of others, but rather, applying rules
and developing an underlying grammar. Following Chomsky's critique of behaviorist
explanations for language acquisition and a number of studies of L1 acquisition, behaviorist
interpretations of language acquisition fell into disfavor.
1.2. Universal Grammar
Chomsky proposed his Universal Grammar (UG) theory to account for first language
(L1) acquisition. The theory claims to account for the grammatical competence of every adult no
matter what language he or she speaks. Chomsky observed that all children learn language at a
time in their cognitive development when they experience difficulty grasping other kinds of
knowledge, which appear to be far less complex than language. Chomsky argued that the kind of
information which mature speakers of a language have of their L1 could not have been learned
from the incomplete and sometimes degenerate language they are exposed to (i.e. the poverty of
stimulus argument). Also, it was noted that children did not receive systematic corrective
feedback on their ill-formed utterances.
Despite all this, children would eventually acquire full competence in their mother tongue.
Therefore, Chomsky inferred that children must be equipped with an innate language faculty
which enables them to process language. This specialized module of the brain was originally
referred to as the language acquisition device (LAD) and later as UG. It was said to contain some
general principles which apply to all languages and also a set of parameters that can vary from
one language to another, but only within certain limits. The child's task would be to discover how
the language of his or her environment made use of those principles.
Chomsky's theory of UG was offered as a plausible explanation for L1 acquisition.
However, the question of whether UG can also explain L2 learning is controversial. One of the
reasons for this controversy is the claim that there is a critical period for language acquisition. It
is suggested that while UG allows a young child to acquire language during this critical period, it
is no longer available after puberty and that older L2 learners must make use of more general
learning processes which are not specific to language. Therefore, second language acquisition is
more difficult for older learners than for younger ones, and it is never complete. The argument
is that although L2 grammars are still consistent with universal principles of all human languages,
learners tend to perceive the L2 in a way that is shaped by the way their L1 realizes these
principles.
Another criticism directed at this theory is that researchers who study L2 acquisition
from a UG perspective seek to discover a language user's underlying linguistic 'competence'
(what a language user knows) instead of focusing on his or her linguistic 'performance' (what a
language user actually does with the language). Therefore, these researchers are compelled to
use indirect means of investigating that competence. For example, rather than recording
spontaneous conversations, the researcher may ask a language user to judge whether a sentence
is grammatical or not.
1.3. Monitor Theory
Krashen proposed a theory which shares a number of assumptions with the UG
approach but its scope is specifically second language acquisition. As with UG, the assumption is
that human beings acquire language without instruction or feedback on error. Krashen developed
this theory in the 1970s and presented it in terms of 'five hypotheses' in the 1980s. The
fundamental hypothesis of Monitor Theory is that there is a difference between 'acquisition' and
'learning'. Acquisition is hypothesized to occur in a manner similar to L1 acquisition, that is, with
the learner's focus on communicating messages and meanings; learning is described as a
conscious process, one in which the learner's attention is directed to the rules and forms of the
language. The 'monitor hypothesis' suggests that, although spontaneous speech originates in the
'acquired system', what has been learned may be used as a monitor to edit speech if the L2 learner
has the time and the inclination to focus on the accuracy of the message.
In light of research showing that L2 learners, like L1 learners, go through a series of
predictable stages in their acquisition of linguistic features, Krashen proposed the 'natural order
hypothesis'. The 'comprehensible input hypothesis' reflects his view that L2 learning, like L1
learning, occurs as a result of exposure to meaningful and varied linguistic input. Linguistic input
will be effective in changing the learner's developing competence if it is comprehensible (with the
help of contextual information) and also offers exposure to language which is slightly more
complex than that which the learner has already acquired. The 'affective filter hypothesis'
suggests, however, that a condition for successful acquisition is that the learner be motivated to
learn the L2 and thus receptive to the comprehensible input. Krashen's model can be summarized
as in figure 1.1.
Krashen has been criticized for the vagueness of the hypotheses and for the fact that some
of them are difficult to investigate in empirical studies. Nonetheless, Monitor Theory has had a
significant impact on the field of L2 teaching. Many teachers and students intuitively accept the
distinction between 'learning' and 'acquisition', recalling experiences of being unable to
spontaneously use their L2 even though they had studied it in a classroom. This may be especially
true in classrooms where the emphasis is on metalinguistic knowledge, or the ability to talk about
the language (usage), rather than on practice in using it communicatively (use).
1.4. Acculturation
Schumann proposed his theory of acculturation to explain the factors affecting adult
second language acquisition taking place without formal instruction, in naturalistic situations.
Acculturation is the process an individual needs to go through in order to become adapted to a
different culture. For this to take place there will need to be changes in both social and
psychological behavior. Where the target culture involves a different language, a key part of the
acculturation process will involve language learning. Acculturation requires the learner to adjust
their social and psychological behavior in order to become more closely integrated with the target
culture. This process may be associated with culture shock as the learners discover that they
need to accept differences in behavior from those with which they are familiar from their own
culture.
For Schumann, acculturation theory provided an explanation for individual differences in
second language learning and represented the causal variable in the second language acquisition
process. In his model of the factors determining social and psychological distance, Schumann
established the positive and negative elements of acculturation. So, for example, the attitude of
the learner to the target social group could be a positive or negative factor while, psychologically,
motivation would be seen as a key factor. For him, the first stages of language acquisition are
characterized by the same processes that are responsible for the formation of pidgin languages.
When there are hindrances to acculturation – where social or psychological distance is great – the
learner will not progress beyond the early stages and the language will stay pidginized. The
learner's language may therefore fossilize due to the lack of contact with the target language
group.
Research in this model of SLA has concentrated on the acculturation of immigrant workers
to their host country. The fact that many of the learners in this category fail to master the target
language is associated with their isolation and lack of social contact with the host population. This
lack of progress and the fossilization of their language skills has been linked to pidginization.
Acculturation is not generally associated with foreign language learning because this can take
place without any direct contact with the target country. As the theory stands, then, it would
appear to have little to offer instructed second or foreign language learning. However, there is an
argument for the probable relevance of the notion of psychological distance for foreign language
learning in the classroom. Also, attitude to the target culture and pupil motivation are likely to be
key factors in classroom foreign language learning.
1.5. Information Processing
Cognitive theorists distinguish between 'declarative knowledge' and 'procedural knowledge'.
Other theorists make a similar contrast between 'controlled' and
'automatic' processing. The difference is that controlled processing occurs when a learner is
accessing information that is new or rare or complex, and the action requires mental effort and
takes attention away from other controlled processes. For example, a language learner who
appears relatively proficient in a social conversation may struggle to understand complex
information because the controlled processing involved in interpreting the language itself
interferes with the controlled processing that would be needed to interpret the content.
Automatic processing, on the other hand, occurs quickly with minimal attention and effort.
Indeed, it is argued that we cannot prevent automatic processing and have little awareness or
memory of its occurrence. Thus, once language itself is largely automatic, attention can be focused
on the content. According to the information processing model, learning occurs when, through
repeated practice, controlled knowledge becomes automatic.
Some researchers working within information processing models of second language
acquisition have argued that nothing is learned without 'noticing'. That is, in order for some
feature of language to be acquired, it is not enough for the learner to be exposed to it through
comprehensible input. The learner must actually notice what it is in that input that makes the
meaning. This idea has raised a considerable amount of interest in the context of instructed
second language learning.
1.6. Connectionism
1.7. Multidimensional Model
Pienemann introduced his 'Multidimensional Model' to account for the link between the
underlying cognitive processes and the stages in the L2 learner's development. In his research,
Pienemann observed that L2 learners acquired certain syntactic and morphological features of
the L2 in predictable stages. These features were referred to as 'developmental'. Other features,
referred to as 'variational', appeared to be learned by some but not all learners and, in any case,
did not appear to be learned in a fixed sequence. With respect to the developmental features, it
was suggested that each stage represented a further degree of complexity in processing strings
of words and grammatical markers. For example, it seemed that learners would begin by picking
out the most typical word order pattern of a language and using it in all contexts. Later, they
would notice words at the beginning or end of sentences or phrases and would begin to be able
to move these. Only later could they manipulate elements which were less salient because they
were embedded in the middle of a string of words. Because each stage reflected an increase in
complexity, a learner had to grasp one stage before moving to the next, and it was not possible to
'skip a stage'. One of the pedagogical implications drawn from the research related to the
Multidimensional Model is the 'Teachability Hypothesis' that learners can only be taught what
they are psycholinguistically ready to learn.
1.8. Interaction Hypothesis
Long proposed that a great deal of language learning takes place through social
interaction, at least in part because interlocutors adjust their speech to make it more accessible
to learners. Some of the L2 research in this framework is based on L1 research into children's
interaction with their caregivers and peers. L1 studies showed that children are often exposed to
a specialized variety of speech which is tailored to their linguistic and cognitive abilities (i.e. child-
directed speech). When native speakers engage in conversation with L2 learners, they may also
adjust their language in ways intended to make it more comprehensible to the learner (i.e.
foreigner talk). Furthermore, when L2 learners interact with each other or with native speakers
they use a variety of interaction techniques and adjustments in their efforts to negotiate meaning.
These adjustments include modifications and simplifications in all aspects of language,
including phonology, vocabulary, syntax and discourse. In an early formulation of this position
for second language acquisition, Long hypothesized that, as Krashen suggests, comprehensible
input probably is the essential ingredient for interlanguage development. However, in his view,
it was not in simplifying the linguistic elements of speech that interlocutors helped learners
acquire language. Rather, it was in modifying the interaction patterns, by paraphrasing, repeating,
showing or otherwise working with the L2 speaker to ensure that meaning is communicated.
Thus, he hypothesized, interactional adjustments improve comprehension, and comprehension
allows acquisition.
Although considerable research has been done to document the negotiation of meaning
in native/non-native interaction, it is not clear how (or whether) interaction contributes to L2
grammatical development. In a more recent formulation of the interaction hypothesis, Long
acknowledges that negotiation of meaning may not be enough for the successful development of
L2 vocabulary, morphology and syntax and that implicit negative feedback provided during
interaction may be required to bring L2 learners to higher levels of performance.
1.9. Sociocultural Theory
The Socio-cultural theory of SLA (SCT) is largely based on the work of pioneering Russian
psychologist, Lev Vygotsky, in the early twentieth century. This paradigm, despite the label
'sociocultural', does not seek to explain how learners acquire the cultural values of the L2 but
rather how knowledge of L2 is internalized through experiences of a sociocultural nature.
Theorists working within a sociocultural perspective of L2 learning operate from the assumption
that all learning is first social then individual. Unlike the early Interactionist views of SLA, SCT
theorists reject the view that interaction serves as a provider of input or of opportunities for
output. Indeed, they object to the terms 'input' and 'output', viewing them as indicative of a
mechanistic view of communication and learning. They argue that interaction cannot be properly
investigated by breaking it down into its component elements; rather it is necessary to look at the
active learner in his or her environment and study interaction in its totality in order to show the
emergence of learning. In fact, SCT argues for a much richer view of interaction and for treating
it as a cognitive activity in its own right.
SCT views language acquisition as an inherently social practice that takes place within
interaction as learners are assisted to produce linguistic forms and functions that they are unable
to perform by themselves. Subsequently ‘internalization’ takes place as learners move from
assisted to independent control over a feature. In this view, cognition needs to be investigated
without isolating it from social context. SCT sees language learning as dialogically based.
Theorists working within a sociocultural perspective of L2 learning propose that 'LAD' is located
in the interaction that takes place between speakers rather than inside their heads. That is,
acquisition occurs in, rather than as a result of, interaction. From this perspective, then, L2
acquisition is not a purely individual-based process but shared between the individual and other
persons. One of the principal ways in which this sharing takes place is scaffolding (more recently
referred to as ‘collaborative dialogue’ or 'instructional conversation'). Scaffolding is a social
(or in SCT terms, inter-psychological) process through which learners internalize knowledge
dialogically. That is, it is a process by which one speaker (an expert or a novice) assists another
speaker (a novice) to perform a skill that they are unable to perform independently.
SCT also has a psychological dimension. This entails the extent to which an individual is
ready to perform the new skill. Vygotsky evoked the metaphor of the zone of proximal
development (ZPD) to explain the difference between an individual's actual and potential levels
of development. The skills that an individual has already mastered constitute his or her actual
level. The skills that the individual can perform when assisted by another person constitute the
potential level. Thus, learnt skills provide a basis for the performance of new skills. For
interaction to work for acquisition it needs to assist the learner in constructing zones of proximal
development. As mentioned above, this is achieved with the help of scaffolding.
Like cognitive theories of SLA, SCT assumes that the same general learning mechanisms
apply to language learning as with other forms of knowledge. However, SCT emphasizes the
integration of the social, cultural and biological elements. On the other hand, unlike the linguistic
theories of SLA, SCT does not offer any very thorough or detailed view of language as a formal
system.
1.10. Conclusion
While theories in any field can differ substantively and in many other ways, at some level
they are all interim understandings of how something works – in the case of SLA theories, interim
understandings of how people learn second languages. Just as any understanding of how the
human body works is likely to be relevant to medical practice at some level, so any theory of SLA
is likely to be at least indirectly relevant to language teaching practice, in that SLA is the process
language teaching is designed to facilitate.
The lack of any one comprehensive and conclusive theory of SLA is a source of frustration
to some commentators and practitioners. Others accept that language acquisition is such a
multidimensional phenomenon that no single theory will ever capture its complexity. One
criticism of SLA research is that it is generally conducted apart from the realities of the classroom.
Hence, its research questions may not be the ones that teachers want answered, or its methods
and results may not be generalizable to real learning situations. This may account for the
skepticism, even indifference that many teachers feel for SLA theory. Ironically, the SLA theory
that has attracted the most interest among teachers is Krashen's (now generally discredited)
claim that teaching does not benefit acquisition.
Chapter 2
The Basics of Contrastive Analysis
2.1. Introduction to CA
The decline of CA was partly due to
the failure of the structurally oriented contrastive studies to cope with problems encountered in
foreign language teaching, but it was also partly due to the fact that contrastive orientation had
been linked with Behaviorism, mainly as regards the role of transfer in language learning
and language use. When the idea of transfer was given up, the idea of the influence of the mother
tongue on second languages could not be accepted either. In the United States, one more reason
for the downfall of CA in the 1960s was the rapid growth of Generative linguistics which made
linguists more interested in universals than in linguistic differences.
Throughout the 1970s and 1980s, however, contrastive analysis was extensively
practiced in various European countries, particularly in Eastern European countries, and in the
early 1990s there were clear signs of a renewed interest. Since then, the rapid development of
automatic data processing and information technology has opened up new prospects for
contrastive approaches through the potential of large corpora.
As a key theoretical foundation of CA, behaviorism dominated the linguistic field until the
end of the 1960s. As a school of psychology, behaviorism emerged from empiricism, the
philosophical doctrine that all knowledge comes from experience. Behaviorism contributed to the
notion that human behavior is the sum of its smallest parts and components, and therefore that
language learning could be described as the acquisition of all of these discrete units. In other
words, the language learning process is the formation of language habits. Habit formation
is an important concept in accounting for errors in the behaviorist view. A habit is formed when
a particular stimulus becomes regularly linked with a particular response. Accordingly, the
association of stimulus and response, negative or positive, will determine the occurrence of
errors to a great extent. If old habits get in the way of learning new habits, then errors occur.
This process is referred to as interference. Therefore, according to the behaviorist learning
theory, errors occur as a result of interference of the mother tongue.
Interference is the subcategory of a more general process called transfer. Transfer is a
general term describing the carryover of previous performance or knowledge to subsequent
learning. Positive transfer occurs when the prior knowledge benefits the learning task - that is,
when a previous item is correctly applied to present subject matter. Negative transfer occurs
when previous performance disrupts the performance of a second task. The latter can be referred
to as interference, in that previously learned material interferes with subsequent material - a
previous item is incorrectly transferred or incorrectly associated with an item to be learned.
It has been common in second language teaching to stress the role of interference - that
is, the interfering effects of the native language on the target language. It is of course not
surprising that this process has been so singled out, for native language interference is surely the
most immediately noticeable source of error among second language learners. The noticeability
of interference has been so strong that CA has viewed second language learning as exclusively
involving the overcoming of the effects of the native language. It is clear from learning theory that
a person will use whatever previous experience he or she has had with language to facilitate the
second language learning process. The native language is an obvious set of prior experiences.
Sometimes the native language is negatively transferred, and we say then that interference has
occurred.
It is very important to remember, however, that the native language of a second language
learner is often positively transferred, in which case the learner benefits from the facilitating
effects of the first language. We often mistakenly overlook the facilitating effects of the native
language in our desire for analyzing errors in the second language and for overstressing the
interfering effects of the first language. Nowadays, the widely used term interference is being
increasingly replaced by the label cross-linguistic influence (CLI) in order to avoid associations
with behaviorism. CLI is a cover term used to refer to situations where one language shows
the influence of another.
It is necessary to distinguish between two types of CA: theoretical and applied. Confusion
between the aims of these two types of CA has often resulted in the evaluation of the results of
theoretical research against applied objectives, or theoretical analysis has been performed for the
purposes of, for instance, language teaching. The obvious result has been increased uncertainty
about the usefulness of CA.
Theoretical contrastive studies produce extensive accounts of the differences and
similarities between the languages contrasted. Attempts are also made at providing adequate
models for cross-language comparison and at determining which elements in languages are
comparable and how it should be done. The alignment of languages also adds to the information
about the characteristics of individual languages or about linguistic analysis in general. No claims
should, however, be made for the applicability of the results for purposes other than linguistic
analysis.
On the other hand, the target of applied contrastive studies is the establishment of
information that can be used for purposes outside the language domain proper, such as
language teaching, translation, interpreting and bilingual education. Traditionally, this kind
of contrastive analysis has been mainly concerned with the identification of potential trouble in
the use of the language learner's target language.
The first objection to the traditional view of CA is directed at the concept of equivalence.
It is possible to argue that there are no grounds for considering two texts in two languages as
fully equivalent under any circumstances. All communication is culturally relative, and so are
texts, because they are communicative events. This makes them relative also in another sense.
It could, for instance, be hypothesized that two highly specialized technical or medical documents
are closer to each other than, for instance, a fictional text and its translation into another language.
Since many studies had resulted in the conclusion that the alignment and mapping of the
language codes have proved to be insufficient for applied purposes, recent contrastive studies
have adopted a dynamic approach where various psychological, sociological, and contextual
factors alongside the purely linguistic ones are taken into account. Therefore, in modern
contrastive linguistics, the theory and methodology adopted from linguistics has been
supplemented with those derived from sociology, psychology, social psychology, neurology,
cultural studies, ethnography, anthropology and related disciplines for the analysis of
pragmatic patterning, cognitive mechanisms and information processing systems involved.
The other difference between traditional and modern CA is that in traditional contrastive
studies the learner was almost totally forgotten in much of what was written about the
success – or mostly failure – of contrastive analysis from an applied viewpoint. Today, it is quite
evident that a straightforward setting of two linguistic systems side by side – irrespective of
the level of analysis – is too simplistic and cannot easily produce information relevant for language
teaching purposes. There is simply too much variation in learner performance for it to be
accounted for by reference to linguistic phenomena alone. Therefore, modern approaches to CA
are more participant-oriented where the intentions of the language users and the process of
communication as a whole are taken into consideration. Language use is based on internalized
categories of rules and structures and on various processes, and therefore speakers observe
phenomena that they have learned, or choose, to observe. A student may hear, and thus also
produce, a certain language feature differently from what is expected by the teacher because the
student's perception is not governed by the patterning adopted for teaching from a theoretical or
pedagogical perspective. It is impossible to understand learners' problems unless it is known how
they feel, what they attempt to hear, what they actually hear, what the structures are that they
perceive, and how these differ from the perceptions of native speakers in similar situations. This
implies that true contrasts, at least from the learning point of view, lie inside each individual
learner, i.e. in the interaction of various types of information relating to the second/foreign
language, the mother tongue, and possible other languages.
Another distinguishing feature of modern contrastive linguistics is that it is no longer
necessary for the contrastive linguist to invent examples, as used to be done. It is
now possible to resort to corpora, where the relevant instances can be found by means of
automatic searches. The development of powerful computer tools makes it possible to carry out
contrastive studies of language features in context through the use of large computerized corpora.
In this way, new insights can be expected into Contrastive Discourse Analysis, Contrastive
Rhetoric and Contrastive Pragmatics. Many areas of syntax, semantics and lexis may also benefit
from the availability of large parallel corpora. At the same time, it may be possible to develop new
theoretical approaches to contrastive analysis.
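To make this concrete, here is a minimal sketch, in Python, of what an automatic contrastive search over a parallel corpus might look like. The tiny English-Persian "corpus", the sentence pairs, and the function name are all invented for illustration; real contrastive work relies on large computerized corpora and dedicated concordancing tools.

```python
# A minimal sketch (not a real tool): automatic contrastive searching over a
# tiny, invented English-Persian parallel corpus. Each entry pairs an English
# sentence with its Persian translation; real studies use large computerized
# corpora and concordancers.
import re

parallel_corpus = [
    ("He hurt his leg.", "پایش آسیب دید."),
    ("She stood on one foot.", "روی یک پا ایستاد."),
    ("My foot is cold.", "پایم سرد است."),
]

def concordance(corpus, english_word):
    """Return every aligned pair whose English side contains the word."""
    word = english_word.lower()
    return [(en, fa) for en, fa in corpus
            if word in re.findall(r"[a-z']+", en.lower())]

# Pulling out the Persian counterparts of 'foot' and 'leg' surfaces the kind of
# split/divergence pattern (both map onto the single Persian item 'پا') that a
# contrastive analyst looks for.
for en, fa in concordance(parallel_corpus, "foot") + concordance(parallel_corpus, "leg"):
    print(en, "|", fa)
```

Searching a genuinely large corpus would of course require tokenization and sentence alignment far more robust than this toy example, but the principle of retrieving contrasts from attested data rather than invented examples is the same.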
Table 2.1 summarizes the differences between Traditional and Modern Contrastive Analysis.
Table 2.1. Traditional vs. Modern Contrastive Analysis
Traditional CA – Major focus: negative L1 transfer
Modern CA – Major focus: the principle of cross-linguistic influence
Chapter 3
The Development of CA
CA offered some strong claims in the area of language teaching which are characterized
as the Contrastive Analysis Hypothesis. Deeply rooted in the behavioristic and structuralist
approaches of the day, the CAH claimed that the principal barrier to second language acquisition
is the interference of the first language system with the second language system, and that a
scientific, structural analysis of the two languages in question would yield a taxonomy of linguistic
contrasts between them which in turn would enable linguists and language teachers to predict
the difficulties a learner would encounter. This can be summarized like this:
difference b/w L1 & L2 item >> interference of L1 into L2 >> difficulty in learning L2
As mentioned above, Behaviorism contributed to the notion that human behavior is the
sum of its smallest parts and components, and therefore that language learning could be
described as the acquisition of all of those discrete units. Moreover, human learning theories
highlighted interfering elements of learning, concluding that where no interference could be
predicted, no difficulty would be experienced since one could transfer positively all other items
in a language. The logical conclusion from these various psychological and linguistic assumptions
was that second language learning basically involved the overcoming of the differences between
the two linguistic systems – the native and target languages.
Intuitively the CAH has appeal in that we commonly observe in second language learners
plenty of errors attributable to the negative transfer of the native language to the target language.
It is quite common, for example, to detect certain foreign accents and to be able to infer, from the
speech of the learner alone, where the learner comes from. Native English speakers can easily
identify the accents of English language learners from Germany, France, Spain, and Japan, for
example.
One of the strongest claims of CAH was made by Robert Lado. He proposed that through
a systematic comparison of the language and the culture to be learned with the native language
and culture of the student it was possible to predict and describe the patterns that would cause
difficulty in learning, and those that would not. He also claimed that the key to ease or difficulty
in foreign language learning lay in the comparison between the native and the foreign language.
Therefore, those elements that were similar to the learner's native language would be simple
for him and those elements that were different would be difficult.
Such strong claims of CAH resulted in endeavors to create a hierarchy of difficulty by
which a teacher or linguist could make a prediction of the relative difficulty of a given aspect of
the target language. Stockwell and his associates constructed a hierarchy of difficulty for
grammatical structures of two languages in contrast. Their grammatical hierarchy included 16
levels of difficulty based on the same notions used to construct phonological criteria. Prator
captured the essence of this grammatical hierarchy in six categories of difficulty. Prator’s
hierarchy was supposedly applicable to both grammatical and phonological features of language.
The six categories, in ascending order of difficulty, are listed below along with a number of
examples from Persian-English CA; they are further illustrated in Table 3.1 and in the brief
sketch following the list.
• Level 0 – Transfer: No difference or contrast is present between the two languages. The
learner can simply transfer (positively) a sound, structure, or lexical item from the native
language to the target language. Here are some examples in Persian-English CA, where
no/very little difference exists between the two languages and the Persian learners of
English can directly transfer them to their L2: phonemes /p/, /b/, /m/, /v/, and words
like radio, telephone, mother, tour, television, salad, nylon, taxi, shampoo, police, spray.
• Level 1 – Coalescence: Two items in the native language become coalesced into one item
in the target language. This requires that learners overlook a distinction they have grown
accustomed to. Here are some examples in Persian-English CA, where two or more items
in Persian converge into one item in English: (تو & شما) for you; (دانشجو & دانش‌آموز) for
student; (دایی & عمو) for uncle; (پسر خواهر & پسر برادر) for nephew; (آفتاب & خورشید) for sun;
(پسرخاله، دخترخاله، پسرعمو، …) for cousin; (فرش & قالی) for carpet.
• Level 2 – Under-differentiation: An item in the native language is absent in the target
language, and the learner must simply avoid it; for example, the Persian phonemes /x/ and
/q/ have no counterparts in English.
• Level 3 – Reinterpretation: An item that exists in the native language is given a new shape
or distribution. For example, the phonemes /t/, /d/, /l/, /n/, /r/ and /b/ in Persian are
similar to their counterparts in English but have different phonetic realizations or are
different in terms of their distribution. Also, present perfect tense and noun-noun and
adjective-noun combinations in English and Persian, as well as English words like machine,
coat, jacket, terror, line, theater, service, lamp and their direct borrowings in Persian (تئاتر،
لاین (خیابان)، ترور (سیاسی)، ژاکت، کت، ماشین، لامپ، سرویس (دستشویی)) may fall into this category.
• Level 4 – Over-differentiation: A new item entirely, bearing little if any similarity to the
native language item, must be learned. For example, the following phonemes, lexical items
or grammatical forms are absent in Persian: /θ/, /ð/, /ŋ/, initial consonant clusters like
/sk/, eggnog, the, Halloween, future in the past.
• Level 5 - Split (Divergence): One item in the native language becomes two or more in the
target language, requiring the learner to make a new distinction. Here are some examples
in Persian-English CA, where one item in Persian diverges into two or more items in
English: (آموختن) for teach & learn; (میز) for desk & table; (او) for he & she; (خوردن) for eat &
drink; (ساعت) for time, watch, clock & hour; (خود) for self, own & ego; (دیدن) for see, look &
watch; (پا) for foot & leg; (گفتن) for tell & say; (حیاط) for yard & garden; (درد) for pain, ache &
sore; /i:/ for /i:/ & /i/; /v/ for /v/ & /w/; and /n/ for /n/ & /ŋ/.
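As a rough illustration only, the hierarchy above can be encoded as a small lookup table. The Python structure and function below are a sketch of my own, not part of Prator's or Stockwell's formulation, and the example items are mostly taken from the list above.

```python
# A rough sketch: Prator's six-level hierarchy of difficulty as a lookup table.
# The data structure and function are illustrative only; examples are mostly
# drawn from the Persian-English list above.
HIERARCHY = {
    0: ("Transfer", "no contrast; the item carries over directly", ["/p/", "/b/", "radio", "taxi"]),
    1: ("Coalescence", "two or more L1 items merge into one L2 item", ["تو & شما -> you"]),
    2: ("Under-differentiation", "an L1 item absent in the L2 must be avoided", ["/x/", "/q/"]),
    3: ("Reinterpretation", "an L1 item is given a new shape or distribution", ["/t/", "/r/", "present perfect"]),
    4: ("Over-differentiation", "an entirely new L2 item must be learned", ["/θ/", "/ð/", "/ŋ/"]),
    5: ("Split", "one L1 item diverges into two or more L2 items", ["پا -> foot & leg"]),
}

def describe(level: int) -> str:
    """Summarize the contrast type claimed for a given level of difficulty."""
    name, description, examples = HIERARCHY[level]
    return f"Level {level} ({name}): {description}; e.g. {', '.join(examples)}"

for lvl in sorted(HIERARCHY):
    print(describe(lvl))
```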
Table 3.1. Hierarchy of Difficulty
Prator's reinterpretation, and Stockwell and his associates' original hierarchy of difficulty
were based on principles of human learning as they were understood at the time. The first, or
"zero," degree of difficulty represented complete one-to-one correspondence and transfer, while
the fifth degree of difficulty was the height of interference. Prator and Stockwell both claimed that
their hierarchy could be applied to virtually any two languages and make it possible to predict
second language learner difficulties in any language with a fair degree of certainty and objectivity.
However, as we will see below, many of these predictions proved to be oversimplified and failed
to materialize.
The Contrastive Analysis Hypothesis (CAH) was widely influential in the 1950s and
1960s, but from the 1970s its influence dramatically declined. This was due to both theoretical
and practical flaws in the CAH as well as new realities on the ground. Some of the reasons for the
downfall of the CAH are mentioned below:
• While the association of CAH with behaviorism and structuralism gave it academic
legitimacy, it ultimately led to its downfall. From the late 1950s Chomsky mounted a
serious challenge against the behaviorist view of language acquisition and structuralist
linguistics, which contributed to the decline of CA.
• CAH was at odds with the views of later developments in applied linguistics including
Error Analysis, Interlanguage theory and Second Language Acquisition. The theory of
Interlanguage listed a number of sources of error of which first language interference was
only one. Therefore, Error Analysis, the examination of attested learner errors, began to
replace the error prediction of CA.
• A major flaw of the CAH was the dubious assumption that one could depend solely upon
an analysis of a linguistic product to yield meaningful insight into a psycholinguistic
process, i.e. second language learning.
• The empirical method of prediction based on the hierarchy of difficulty was shown to have
many shortcomings. First, the process was oversimplified; subtle phonetic,
phonological, and grammatical distinctions were not carefully accounted for. Second, it
was very difficult, even with six categories, to determine exactly which category a
particular contrast fit into.
• The accumulation of empirical studies of SLA indicated that the CAH made the wrong
predictions. First, it did not anticipate all the errors, i.e. it underpredicted some of the
actual errors. Second, some errors it did predict failed to materialize, i.e. it overpredicted
the presumed errors.
Despite continued criticism, contrastive analysis still remains a useful tool in the search
for potential sources of trouble in foreign language learning. CA cannot be overlooked in syllabus
design and it is a valuable source of information for the purposes of translation and
interpretation. As we will see later in this chapter, today, the scope of contrastive analysis has
gradually widened, along with the expansion of researchers' interests beyond the confines of the
sentence, for instance to Interlanguage Pragmatics or Contrastive Rhetoric.
As we saw above, the attempt to predict difficulty by means of contrastive analysis, i.e.
the strong version of the CAH, was quite unrealistic and impracticable. It was noted that at the
very least, this version demands of linguists that they have available a set of linguistic universals
formulated within a comprehensive linguistic theory, which deals adequately with syntax,
semantics, and phonology. But do linguists have available to them an overall contrastive system,
within which they can relate the two languages in terms of mergers, splits, zeroes, over-
differentiations, under-differentiations, reinterpretations? Therefore, while many linguists
claimed to be using a scientific, empirical, and theoretically justified tool in contrastive analysis,
in actuality they were operating more out of mentalistic subjectivity.
Yet contrastive analysis has intuitive appeal, and teachers and linguists have successfully
used the best linguistic knowledge available in order to account for observed difficulties in second
language learning. Such observational use of contrastive analysis is referred to as the weak
version of the CAH. The weak version does not imply a priori prediction of certain degrees of
difficulty; on the contrary, it adopts an a posteriori – after the fact – approach. The weak version of
CAH contends that in the learning of L2, the native language of the learner does not really
'interfere' with his learning so much as it provides an 'escape hatch' when the learner gets
into a tight spot. In other words, it holds that when the learner doesn't know how to say
something in the target language, he ‘pads’ from his native language. This viewpoint suggests that
what will be most difficult for the learner is what he does not already know. As learners are
learning the language and errors appear, teachers can utilize their knowledge of the target and
native languages to understand sources of error. The weak version of CA can be summarized like
this:
Limited knowledge of L2 >> Recourse to L1 >> difficulty in learning L2
The so-called weak version of the CAH is what remains today under the label cross-
linguistic influence (CLI), suggesting that we all recognize the significant role that prior
experience plays in any learning act, and that the influence of the native language as prior
experience must not be overlooked. The difference between today's emphasis on influence and
the earlier emphasis on prediction is an important one. Aside from phonology, which remains the most reliable
linguistic category for predicting learner performance, other aspects of language present more of
a gamble. Syntactic, lexical, and semantic interference show far more variation among learners
than psychomotor based pronunciation interference.
Another blow to the strong version of the CAH was delivered by Oller and Ziahosseini,
who proposed the so-called moderate version or subtle differences version of the CAH.
According to this model, L2 items which are different from L1, rather than causing difficulty, are
more likely to be noticed and categorized. From this perspective, it is the similar items which
can pose a problem. This notion was based on the principle of stimulus generalization which
states that the more similar two stimuli are, the more likely a person is to respond to them as
if they were the same stimulus. Therefore, when the learner is faced with such a condition, he
may generalize a response learned to one stimulus to a similar stimulus. This, it was claimed,
would create confusion on the part of the learner.
The moderate version of CA was proposed on the basis of a study of spelling errors. Oller
and Ziahosseini found that for learners of English as a second language, English spelling proved
to be more difficult for people whose native language used a Roman script (for example, French,
Spanish) than for those whose native language used a non-Roman script (Arabic, Japanese). The
strong form of the CAH would have predicted that the learning of an entirely new writing system
(Level 4 in the hierarchy of difficulty) would be more difficult than reinterpreting (Level 3)
spelling rules. Oller and Ziahosseini found the opposite to be true, concluding that "wherever
patterns are minimally distinct in form or meaning in one or more systems, confusion may
result". As a result, from this perspective the easiest and most difficult conditions for learning L2
structures respectively correspond to Prator's over-differentiation (Level 4) and reinterpretation (Level 3). The
moderate version of CA can be summarized like this:
little difference b/w L1 & L2 items >> confusion b/w L1 & L2 items >> difficulty in learning L2
Table 3.2 compares the different attitudes adopted towards the role of the L1 in L2 learning
since the idea was first conceived by the CAH.
Table 3.2. Different attitudes towards L1 role in L2 learning
On the basis of the discussion above, we can conclude that “the strong form of the CAH
was too strong, but the weak form was also perhaps too weak. CLI research offers a cautious
middle ground”. Specialized research on CLI in the form of contrastive lexicology, syntax,
semantics, and pragmatics continues to provide insights into second language acquisition (SLA)
that must not be overlooked. CLI implies much more than simply the effect of one's first
language on a second: the second language also influences the first. Moreover, subsequent
languages in multilinguals all affect each other in various ways. The implications of research
on CLI suggest that teachers must certainly be careful not to prejudge learners' errors based on
their L1 backgrounds before they have even given them a chance to perform. At the same time,
they must also understand that CLI is an important linguistic factor at play in the acquisition of a
second language.
Eckman proposed determining degrees of difficulty by means of principles of universal grammar,
suggesting that degrees of markedness will correspond to degrees of difficulty.
Eckman pointed out that new L2 items are not always difficult; difficulty arises when
learning a marked feature in L2 when it is unmarked in L1. In other words, if the target language
contains structures that are marked, these will be difficult to learn. However, if the target
language structures are unmarked, they will cause little or no difficulty, even if they do not exist
in the learner's native language. The markedness version of CA can be summarized like this:
marked L2 item & unmarked L1 item >> difficulty in learning L2
Chapter 4
The Basics of Error Analysis
4.1. Introduction to EA
The errors a person makes in the process of constructing a new system of language should
be analyzed carefully, for they possibly hold in them some of the keys to the understanding of the
process of second language acquisition. A learner's errors are significant in that they provide to
the researcher evidence of how language is learned or acquired and what strategies or
procedures the learner is employing in the discovery of the language. The study and analysis
of the errors made by second language learners is called Error Analysis (EA).
As an approach to understanding second language acquisition, EA saw its heyday in the
1970s. In the history of SLA research, error analysis was a phase of enquiry which followed on
from Contrastive Analysis. As a matter of fact, EA heralded the new era of SLA because previously
there was no generally accepted view that first (L1) and second (L2) language learning differed
significantly. As mentioned above, CA had been interested in comparing two linguistic systems –
the learner's L1 and the target L2 – with a view to determining structural similarities and
differences. The view of SLA which underpinned CA was that L2 learners transfer the habits
of their L1 into the L2. Where the L1 and the L2 items were the same, the learners would transfer
appropriate properties and be successful: a case of positive transfer. Where the L1 and the L2
items differed, the learner would transfer inappropriate properties and learner errors would
result: a case of negative transfer. This was the Contrastive Analysis Hypothesis. Errors on this
account were predicted to occur entirely at points of divergence between L1 and L2. However, as
we saw above, learners can commit errors that are not apparently due to L1 interference.
The awareness that some of the errors which L2 learners make are not the result of
negative transfer led to researchers focusing on errors themselves, rather than on
comparing the source and target languages. This shift of interest was captured in a well-known
article by Corder dealing with the significance of learners' errors. Errors came to be viewed as a
reflection of L2 learners' mental knowledge of the L2 or their interlanguage grammars. Rather
than being seen as something to be prevented, then, errors were viewed as signs that learners
were actively engaged in hypothesis testing which would ultimately result in the acquisition of
target language rules. Researchers therefore began to analyze corpora of L2 errors in order to
better understand the nature of this process.
The fact that learners from different L1 backgrounds make similar errors prompted many SLA
researchers to adopt a mentalist position, the basic tenets of
which are:
1. Only human beings are capable of learning language.
2. The human mind is equipped with a faculty for learning language, referred to as the Language
Acquisition Device. This is separate from the faculties responsible for other kinds of
cognitive activity (for example, logical reasoning).
3. Input is needed, but only to 'trigger' the operation of the language acquisition device.
Furthermore, EA had strong links to the theory which was later to be called
Interlanguage Theory. This theory seeks to understand learner language on its own terms,
as a natural language with its own consistent set of rules. Interlanguage scholars reject the
view of learner language as merely an imperfect version of the target language. Coming after the
demise of behaviorism, interlanguage theory was in line with the growing body of cognitive
approaches in applied linguistics, where the focus was on the learner and how performance is
indicative of underlying processes and strategies.
One of the major cognitive processes identified by EA is overgeneralization which is, of
course, a particular subset of generalization. Generalization is a crucially important and
pervading strategy in human learning. To generalize means to infer or derive a law, rule, or
conclusion, usually from the observation of particular instances (induction). Much of human
learning involves generalization. The learning of concepts in early childhood is a process of
generalizing. A child who has been exposed to various kinds of animals gradually acquires a
generalized concept of "animal." That same child, however, at an early stage of generalization,
might in his or her familiarity with dogs see a horse for the first time and by analogy
overgeneralize the concept of "dog" and call the horse a dog. Similarly, a number of animals might
be placed into a category of "dog" until the general attributes of a larger category, "animal," have
been learned. This is also true of children who at a particular stage of learning English as
an L1 overgeneralize regular past tense endings (walked, opened) as applicable to all past tense
forms (goed, flied) until they recognize a subset of verbs that belong in an "irregular" category.
An identical process is at play in L2 acquisition when L2 learners overgeneralize within
the target language after they gain some exposure and familiarity with the L2 (intralingual error).
In SLA, it has been common to refer to overgeneralization as a process that occurs as the L2
learner acts within the target language, generalizing a particular rule or item in the L2 –
irrespective of the L1 – beyond legitimate bounds. Typical examples in learning English as a
second language are past tense regularization and utterances like John doesn't can study
(negativization requires insertion of the do auxiliary before verbs) or He told me when should I
get off the train (indirect discourse requires normal word order, not question word order, after
the wh-word). Unaware that these rules have special constraints, the learner overgeneralizes.
Such overgeneralization can be witnessed among learners of English from almost any L1
background.
Many have been led to believe that a learner's interlanguage is influenced by only two
processes of SLA: interference and overgeneralization. This is obviously a misconception. First,
interference and overgeneralization are the negative counterparts of the facilitating
processes of transfer and generalization. Second, while they are indeed aspects of different
processes, they represent fundamental and interrelated components of all human learning,
and when applied to L2 acquisition, are simply extensions of general psychological
principles. Interference of the L1 in the L2 is simply a form of generalizing that takes prior
first language experiences and applies them incorrectly. Overgeneralization is the incorrect
application – negative transfer – of previously learned L2 material to a present L2 context.
All generalizing involves transfer, and all transfer involves generalizing. Figure 4-1 illustrates
this classification.
mistake or an error. If, however, further examination of the learner's speech consistently reveals
such utterances as "John wills go," "John mays come," and so on, with very few instances of correct
third-person singular usage of modals, you might safely conclude that "cans," "mays," and other
such forms are errors indicating that the learner has not distinguished modals from other verbs.
But because of the few correct instances of production of this form, it is possible that the learner
is on the verge of making the necessary differentiation between the two types of verbs.
There are two major schools of thought with respect to learner errors, namely
behaviorism and mentalism. Behaviorism maintains that if we are to get a perfect teaching
result, errors should never be committed in the first place, and therefore the occurrence of
errors is merely a sign of the present inadequacy of our teaching techniques. Behaviorists
hold that errors occur as a result of preoccupation with the old habits and the interference
of mother tongue. They also believe that errors are evidence of non-learning rather than
wrong learning; therefore, they should not be allowed to happen. In the 1950s and 1960s the
language teaching methods based on behavioristic principles (particularly Audiolingualism)
emphasized the importance of massive manipulative practice of the language, often in a rather
mechanical fashion, to ensure correctness. The drills were structured in such a way that it was
difficult for the student to make many mistakes. Hence, he heard only good models and was
encouraged by producing acceptable sentences all the time.
In the late 1960s, the mentalists, inspired by Chomsky's Generative Linguistics, put
forward a different view of errors, which has gained wide acceptance. Contrary to the behaviorist
perception that the learning of a new language is a struggle of overcoming the interference of the
old habits and mother tongue influence, the mentalist holds that the learning of language is a
constant process of making hypotheses about the target language. From this perspective,
human learning is fundamentally a process that involves the making of mistakes. Mistakes,
misjudgments, miscalculation, and erroneous assumptions form an important aspect of
learning virtually any skill or acquiring information. Errors were now viewed in a much more
positive light, as "windows" through which one may observe students' language learning process.
As the student learns a new language, very often he does not know how to express what he wants
to say. So, he makes a guess on the basis of his knowledge of his mother tongue and of what he
knows of the foreign language. The process is one of hypothesis formulation and refinement,
as the student develops a growing competence in the language he is learning. He moves from
ignorance to mastery of the language through transitional stages, and the errors he makes are
to be seen as a sign that learning is taking place. The currently advocated notion of communicative
competence also echoes this positive way of looking at errors.
The last three decades of the 20th century witnessed various tendencies towards errors
within the mentalist tradition. In the 1970s, the methods based on Humanistic Approach (e.g.
Counseling Learning) adopted a non-interventionist approach as error correction was deemed to
undermine a stress-free learning environment. In the late 1970s the proponents of Natural
Approach pointed out that in L1 acquisition, mistakes often go uncorrected, yet are eventually
eradicated; error correction in this situation appears to be unnecessary, and to have little effect.
In the 1980s the advocates of Communicative Language Teaching recognized the need for fluency
practice, which could lead to occasions when errors were allowed to pass uncorrected, though
perhaps only temporarily. Since the 1990s, the more recent strands of the communicative approach
(e.g. Task-based Language Teaching) have tended to advocate an optimal balance between attention to
form (and errors) and attention to meaning. Table 4.1. summarizes the way teachers (T) in
different language teaching methods have tackled students' (S) errors over the last century.
Table 4.1. Attitudes towards Errors in Different Teaching Methods
Grammar Translation (Late 19th century): Having the S get the correct answer is very important.
T should supply S with the correct answer, but error correction is not based on any theoretical
principles.

Direct Method (Early 20th century): T employs various techniques to get the S to self-correct.

Audio-Lingual Method (Mid-20th century): If possible, S errors are avoided through T's awareness
of where the S will have difficulty & restriction of what they are taught to say.

Silent Way (1970s): S errors are a natural & inevitable part of the learning process. T uses S
errors as a basis for deciding where more work is necessary. T works with the S in getting them
to self-correct by relying on their inner criteria.

Suggestopedia (1970s): Errors are corrected gently, with the teacher using a soft voice.

Total Physical Response (1970s): T should be tolerant of S errors and only correct major errors
in an unobtrusive manner. As S progress, T can correct more minor errors.
Chapter 5
Practical considerations in EA
In order to achieve their goals, error analysts usually go through three consecutive steps,
sometimes followed by two further optional stages: 1) compiling a corpus of L2 learner deviations
from the target language norms, 2) classifying these errors by type, 3) hypothesizing possible
sources for the errors, 4) evaluating errors in terms of their impact on communication, and
finally 5) taking steps for error prevention/correction. In this section we will start with an
examination of error taxonomies. In the next section, the subsequent stage of error analysis –
identifying the source of the error – will be addressed, followed by the issue of error correction.
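Purely as an illustration (not something error analysts themselves would have used), the sequence of steps above can be pictured as a small data-processing pipeline. In the following Python sketch, the error records, type labels and source labels are invented examples (two of the utterances appear later in this chapter), and the structure simply mirrors steps 1 to 3.

from collections import Counter
from dataclasses import dataclass

@dataclass
class ErrorRecord:
    # One learner deviation from the target-language norm (hypothetical example data).
    utterance: str        # what the learner actually produced
    reconstruction: str   # a plausible target-language version
    error_type: str       # e.g. "omission", "addition", "misformation"
    source: str           # e.g. "interlingual", "intralingual"

# Step 1: compile a small corpus of deviations (invented examples).
corpus = [
    ErrorRecord("He goed home", "He went home", "misformation", "intralingual"),
    ErrorRecord("Does John can sing?", "Can John sing?", "addition", "intralingual"),
    ErrorRecord("I go New York", "I went to New York", "omission", "interlingual"),
]

# Step 2: classify the errors by type; Step 3: tally the hypothesized sources.
errors_by_type = Counter(e.error_type for e in corpus)
errors_by_source = Counter(e.source for e in corpus)

print(errors_by_type)    # Counter({'misformation': 1, 'addition': 1, 'omission': 1})
print(errors_by_source)  # Counter({'intralingual': 2, 'interlingual': 1})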
Once a corpus of errors had been compiled, the researcher would begin to identify and
classify the errors into types. The result of grouping together and labeling subgroups within a
corpus is known generally as a taxonomy. Various taxonomies for L2 learner errors have been
used, a few of which are introduced below:
• A major distinction is made at the outset between overt and covert errors. Overtly
erroneous utterances are unquestionably ungrammatical at the sentence level.
Covertly erroneous utterances are grammatically well formed at the sentence level
but are not interpretable within the context of communication. Covert errors, in other
words, are not really covert at all if you attend to surrounding discourse (before or after
the utterance). “I’m fine, thank you” is grammatically correct at the sentence level, but as
a response to “Who are you?” it is obviously an error. A simpler and more straightforward
set of terms, then, would be “sentence-level” and “discourse-level” errors.
• Errors can be categorized according to the way that they depart from the norm: omission,
addition, mis-selection, mis-formation and mis-ordering. However, such surface-
structure categories are very generalized and easily confused. Table 5.1. summarizes this
classification with examples:
27
• A further consideration is the need to distinguish different levels of errors. A word
with a faulty pronunciation, for example, might hide a syntactic or lexical error.
• Finally, two related dimensions of error, domain and extent, can be considered in any
error analysis. Domain is the rank of the linguistic unit, or the breadth of the context
(from phoneme to discourse), that must be taken as context in order to determine whether
an error has occurred. Extent is the rank or size of the linguistic unit that would have
to be deleted, replaced, supplied, or reordered in order to repair the sentence. These
categories help to operationalize the overt/covert distinction discussed above. So, in the
erroneous expression 'a scissors', the domain is the phrase and the extent is the word
(the indefinite article), while in the example 'well, its a great hurry around' both domain
and extent are the whole sentence (a brief illustrative sketch follows this list).
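Purely as an illustrative aid (not part of the original taxonomies), the two dimensions just described, together with the surface-structure categories above, can be captured in a small data model. The category labels, the rank names and the 'a scissors' example come from the text; the Python structure itself is simply an assumed convention.

from dataclasses import dataclass
from enum import Enum

class SurfaceCategory(Enum):
    # How the utterance departs from the target-language norm.
    OMISSION = "omission"
    ADDITION = "addition"
    MISSELECTION = "mis-selection"
    MISFORMATION = "mis-formation"
    MISORDERING = "mis-ordering"

# Ranks of linguistic unit, from smallest to largest; both domain and extent
# are expressed as one of these ranks.
RANKS = ["phoneme", "morpheme", "word", "phrase", "clause", "sentence", "discourse"]

@dataclass
class ClassifiedError:
    utterance: str
    category: SurfaceCategory
    domain: str   # rank of context needed to detect the error
    extent: str   # rank of the unit that must be repaired

# The 'a scissors' example from the text: the phrase is the domain, and the repair
# deletes the indefinite article (extent = word); 'addition' is one plausible category.
example = ClassifiedError("a scissors", SurfaceCategory.ADDITION,
                          domain="phrase", extent="word")
print(example)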
Up to this point the task of EA has been essentially one of labeling subgroups within a
corpus. And some error analyses stopped there. Others, however, went on to identify the source
of the error. Error analysis recognized two major sources of errors: L1 interference and L2
overgeneralization. Besides L1 interference and overgeneralization of target language linguistic
material, some of the other important suggested sources of L2 error include context of
acquisition or learning, and strategies of second language communication. In what follows,
all these concepts will be addressed in some detail.
L1 interference is the notion familiar from the Contrastive Analysis Hypothesis, but EA
views it as just one of a set of potential sources for L2 error, rather than the overriding source.
Nonetheless, interlingual transfer is a significant source of error for all learners. The beginning
stages of learning a second language are especially vulnerable to interlingual transfer from the
native language, or interference. In these early stages, before the system of the second language
is familiar, the native language is the only previous linguistic system upon which the learner can
draw. While it is not always clear that an error is the result of transfer from the native language,
many such errors are detectable in learner speech. Fluent knowledge or even familiarity with a
learner's native language of course aids the teacher in detecting and analyzing such errors.
As we saw above, overgeneralization is another source of L2 error. One of the major
contributions of learner language research has been its recognition of sources of error that extend
beyond interlingual errors in learning a second language. It is now clear that intralingual
transfer (within the target language itself) is a major factor in second language learning. Negative
intralingual transfer, or overgeneralization, can be illustrated in such utterances as "Does John
can sing?" and “He goed”. Researchers have found that the early stages of language learning
are characterized by a predominance of interference (interlingual transfer), but once
learners have begun to acquire parts of the new system, more and more intralingual transfer
– generalization within the target language – is manifested. This of course follows logically
from the tenets of learning theory. As learners progress in the second language, their previous
experience begins to include structures within the target language itself. It is important to note
that the teacher or researcher cannot always be certain of the source of an apparent intralingual
error, but repeated systematic observations of a learner's speech data will often remove the
ambiguity of a single observation of an error.
28
A third major source of error, although it overlaps with both types of transfer, is the
context of learning. "Context" refers, for example, to the classroom with its teacher and its
materials in the case of school learning or the social situation in the case of untutored second
language learning. Research has shown that the sociolinguistic context of natural, untutored
language acquisition can give rise to the acquisition of a dialect that may itself be a source of error.
However, the errors induced by formal instruction have received more attention in SLA
research.
In a classroom context the teacher or the textbook can lead the learner to make faulty
hypotheses about the language. This is what is alternatively called a false concept, transfer of
training or an induced error. Students often make errors because of a misleading explanation
from the teacher, faulty presentation of a structure or word in a textbook, or even because of a
pattern that was memorized in a drill but improperly contextualized. For example, in teaching the
preposition at the teacher may hold up a box and say I'm looking at the box. However, the learner
may infer that at means under. If later the learner uses at for under (thus producing *The cat is at
the table instead of The cat is under the table) this would be an induced error.
Another manifestation of language learned in classroom contexts is the occasional
tendency on the part of learners to give uncontracted or inappropriately formal forms of
language. We have all experienced foreign learners whose "bookish" language gives them away as
classroom language learners. Such phenomena may be the result of hypercorrection. In error
analysis, hypercorrection refers to the incorrect use of a word, pronunciation or other
linguistic feature in speaking as a result of the attempt to speak in an educated manner and
in the process replacing a form that is itself correct. For example, the use of "whom" instead of
"who" in "Whom do you think painted that picture?" is an example of hypercorrection.
Hypercorrections are sometimes used by a second language learner who is attempting to speak
correctly or by a speaker of a non-standard variety of a language, when speaking formally. This
may result in the speaker using more self-correction and using more formal vocabulary than
speakers of a standard variety of the language.
And finally, another source of error is related to the communication strategies which
learners use to fill the gaps in their knowledge. Communication strategies are the means by
which learners with a limited command of the language express a meaning in a second or foreign
language. Learners obviously use such production strategies to get their messages across
or to compensate for missing knowledge, but at times these techniques can themselves become a
source of error. For example, the learner may not be able to say It’s against the law to park here
and so he/she may say This place, cannot park. For handkerchief a learner could say a cloth for my
nose, and for apartment complex the learner could say building. Table 5.2. summarizes the most
common sources of error in SLA.
Table 5.2. Common sources of L2 error
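Purely as a compact, illustrative restatement of this classification (the Python structure and names below are assumptions, not part of the EA literature), the four sources and a few of the sample utterances discussed above could be represented as follows:

from enum import Enum

class ErrorSource(Enum):
    # Commonly suggested sources of L2 error, as discussed above.
    INTERLINGUAL = "interlingual transfer (L1 interference)"
    INTRALINGUAL = "intralingual transfer (overgeneralization within the L2)"
    CONTEXT_OF_LEARNING = "context of learning (induced errors, transfer of training)"
    COMMUNICATION_STRATEGY = "communication strategies (e.g. circumlocution)"

# Example utterances from the text, paired with the source hypothesized for them.
examples = {
    "Does John can sing?": ErrorSource.INTRALINGUAL,
    "He goed": ErrorSource.INTRALINGUAL,
    "The cat is at the table (meaning 'under')": ErrorSource.CONTEXT_OF_LEARNING,
    "a cloth for my nose (for 'handkerchief')": ErrorSource.COMMUNICATION_STRATEGY,
}

for utterance, source in examples.items():
    print(f"{utterance!r:45} -> {source.value}")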
Where the purpose of error analysis is to help learners learn an L2, there is a need to
evaluate errors. Some errors can be considered more 'serious' than others because they are more
likely to interfere with the intelligibility of what someone says. Language teachers will want to
focus their attention on these. This will guide them in deciding on the error correction strategy
they will eventually adopt. Language teachers are generally advised to intervene when the
learners' errors are frequent, global (interfere with the comprehensibility of the text), and
stigmatizing (would cause a negative evaluation from native speakers).
As far as the severity of errors is concerned, they can be classified as either global or local.
Global errors are those in the use of a major element of sentence structure (e.g. missing,
wrong or misplaced connectors), which makes a sentence or utterance difficult or impossible
to understand. Global errors hinder communication; they affect overall organization of the
utterance and prevent the hearer from comprehending some aspect of the message. For example,
the following errors in whatever context may be difficult to interpret:
*"Well, its a great hurry around."
*"I like take taxi but my friend said so not that we should be late for school."
Local errors, on the other hand, usually do not prevent the message from being heard,
often because there is only a minor violation of one segment of a sentence (e.g. the verb),
allowing the hearer/reader to make an accurate guess about the intended meaning. The
following are examples of local errors:
*"Give me a scissors."
*"If I heard from him, I will let you know."
5.4. Error Correction
One of the keys to successful second language learning lies in the feedback that a learner
receives from others. The Communication Feedback Model offered one of the earliest frameworks for
approaching error in language classrooms. It describes how affective and cognitive feedback can
affect the message-sending process. Figure 5.1. metaphorically depicts what happens in this
model.
Figure 5.1. Affective & Cognitive Feedback
The "green light" of the affective feedback mode allows the sender to continue attempting
to get a message across; a "red light" causes the sender to abort such attempts. Unlike what this
figure may lead one to believe, the affective feedback does not necessarily precede the cognitive
feedback; both modes can take place simultaneously. The traffic signal of cognitive feedback is
the point at which error correction enters. A green light here symbolizes non-corrective feedback
that says "I understand your message." A red light symbolizes corrective feedback that can take on
numerous possible forms (outlined below) and causes the learner to make some kind of alteration
in production. The “yellow light” represents those various shades of color that are interpreted by
the learner as falling somewhere in between a complete green light and a red light, causing the
learner to adjust, to alter, to recycle, to try again in some way. The two types and levels of feedback
are charted below:
Affective Feedback
Positive: Keep talking; I'm listening.
Neutral: I'm not sure I want to maintain this dialog.
Negative: This conversation is over.
Cognitive Feedback
Positive: I understand your message; it's clear.
Neutral: I'm not sure if I correctly understand you.
Negative: I don't understand what you are saying.
Various combinations of the two major types of feedback are possible. For example, a person can
indicate positive affective feedback ("I affirm you and value what you are trying to communicate")
but give neutral or negative cognitive feedback to indicate that the message itself is unclear.
Negative affective feedback, however, regardless of the degree of cognitive feedback, will likely
result in the abortion of the communication.
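To make these combinations concrete, the toy sketch below encodes the traffic-light logic described above; the three-way levels and the rule that negative affective feedback ends communication come from the text, while the function, its name and its return strings are purely illustrative assumptions.

from enum import Enum

class Level(Enum):
    POSITIVE = 1
    NEUTRAL = 0
    NEGATIVE = -1

def learner_response(affective: Level, cognitive: Level) -> str:
    # A toy rendering of the Communication Feedback Model's traffic lights.
    if affective is Level.NEGATIVE:
        # Negative affective feedback aborts communication regardless of the cognitive signal.
        return "abort communication"
    if cognitive is Level.POSITIVE:
        return "keep going: message understood (green light)"
    if cognitive is Level.NEUTRAL:
        return "adjust, recycle, try again (yellow light)"
    return "alter production in response to correction (red light)"

print(learner_response(Level.POSITIVE, Level.NEUTRAL))   # yellow-light case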
The most useful implication of Communication Feedback Model for a theory of error
treatment is that cognitive feedback must be optimal in order to be effective. Too much
negative cognitive feedback – repeated interruptions, corrections, and overt attention to
errors – often leads learners to shut off their attempts at communication. They conclude that
so much is wrong with their production that there is little hope of getting anything right. On the other
hand, too much positive cognitive feedback – teachers' willingness to let errors go
uncorrected and indicate understanding when understanding may not have occurred –
serves to reinforce the errors of the learner. The result is the persistence, and perhaps the
eventual fossilization, of such errors. The teacher's task is to strike the optimal balance
between positive and negative cognitive feedback: providing enough green lights to
encourage continued communication, but not so many that crucial errors go unnoticed, and
providing enough red lights to call attention to those crucial errors, but not so many that the
learner is discouraged from attempting to speak at all.
Error correction is a form of feedback, and there is a wide literature on the topic of
feedback in general and error treatment in particular. Earlier literature on error treatment
classified a number of options available to the teacher when addressing learners' errors. For
example, we can begin by identifying a series of questions that research has addressed: should
errors be corrected? If so, when? Which errors? How should they be corrected, and by whom? We
can address the concept of error treatment by introducing seven "basic options" which were
complemented by eight "possible features" within each option. All of the basic options and
features within each option are possible modes of error correction in the classroom (Table 5.3.).
Having noticed an error, the first and crucial decision the teacher makes is whether or not
to treat it at all. As mentioned above, some methods recommend no direct treatment of errors at
all. Their proponents argued that in "natural," untutored environments, nonnative speakers are usually
corrected by native speakers on only a small percentage of errors that they make. Native speakers
were found to attend basically only to global errors and then usually not in the form of
interruptions but at transition points in conversations.
Table 5.3. Basic options and features of error treatment
Nevertheless, students in the classroom generally want and expect errors to be
corrected, and more recent research has clearly indicated that on many occasions learners do
benefit from teachers' corrective feedback. Therefore, the primary question remains whether
a particular deviant utterance should be addressed by the teacher. To answer it, the
teacher needs to develop, through experience and established theoretical foundations, the
intuition required to adopt an informed and appropriate position at any given moment.
One step toward developing such intuitions may be taken by considering the model below,
which illustrates a series of observations and evaluations the teacher has to make when a student
has uttered some deviant form in the classroom. According to this model, after a student produces
a deviant utterance, the information is accessed, processed, and evaluated almost instantaneously, and
finally a decision is reached as to what the teacher should do about the deviant form. This process
is summarized below in the form of ten 'screens' or factors; of course, no strict sequence is implied,
since in practice these considerations occur nearly simultaneously (a schematic sketch of such a
decision procedure follows the list).
1. The teacher identifies the type or domain of deviation (lexical, phonological, etc.).
2. Often, but not always, he can identify the source of the error, which will be useful in
determining how he might treat the deviation.
3. The complexity of the deviation may determine not only whether to treat or ignore it but
also how to treat it, if that is the decision. In some cases a deviation may require so much
explanation, or so much interruption of the task at hand, that it is not worth treating.
4. Teachers' most crucial and possibly the very first decision among these ten factors is to
quickly decide whether the utterance is interpretable (local) or not (global). Local errors
can sometimes be ignored for the sake of maintaining a flow of communication.
Global errors by definition often call for some sort of treatment since the message
may otherwise remain garbled.
5. The teacher needs to make a guess at whether it is a performance slip (mistake) or a
competence error; this is not always easy to do but a teacher's intuition on this factor will
often be correct. Mistakes rarely call for treatment, while errors more frequently demand
some sort of teacher response.
6. Based on his knowledge about the learner, the teacher makes a series of instant
judgments about the learner's language ego fragility, anxiety level, confidence, and
willingness to accept correction. If, for example, the learner rarely speaks in class or
shows high anxiety and low confidence when attempting to speak, the teacher may decide
to ignore the deviant utterance.
7. Teacher's knowledge of the learner's linguistic stage of development will help him decide
how to treat the deviation.
8. Teacher's pedagogical focus at the moment will help him to decide whether or not to treat
the error. For example, is this a form-focused task? Does this lesson focus on the form that
was deviant? What are the overall objectives of the lesson or task?
9. The teacher also considers the communicative context of the deviation. For example, was
the student in the middle of a productive flow of language? How easily could he be
interrupted?
10. Amid all this, teacher's own style comes into play. For example, is he generally
interventionist or not? If he normally tends to make very few error treatments, a
treatment now on a minor deviation would be out of place and misinterpreted by the
student.
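Purely as an illustration of how the ten screens might interact (the ordering, the thresholds and the field names below are assumptions, not part of the original model), the core treat-or-ignore decision could be sketched like this:

from dataclasses import dataclass

@dataclass
class Deviation:
    # A learner utterance flagged by the teacher, with judgments from some of the ten screens.
    is_global: bool          # screen 4: does it obstruct the message?
    is_mistake: bool         # screen 5: performance slip rather than competence error
    learner_is_fragile: bool # screen 6: high anxiety / low confidence
    form_focused_task: bool  # screen 8: current pedagogical focus
    mid_flow: bool           # screen 9: student is in a productive flow of language
    interventionist: bool    # screen 10: teacher's general style

def should_treat(d: Deviation) -> bool:
    # A toy decision rule: treat global errors, and local ones only when the context invites it.
    if d.is_mistake and not d.form_focused_task:
        return False                      # slips rarely call for treatment
    if d.is_global:
        return True                       # garbled messages usually need some response
    if d.learner_is_fragile or d.mid_flow:
        return False                      # protect confidence and communicative flow
    return d.form_focused_task or d.interventionist

print(should_treat(Deviation(True, False, False, False, True, False)))   # True: global error
print(should_treat(Deviation(False, True, False, False, False, True)))   # False: a mere slip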
The teacher is now ready to decide whether to treat or ignore the deviation. If he decides
to do nothing, he just moves on. But if he decides to do something in the way of treatment, as
discussed earlier, he has a number of treatment options. For example, he has to decide when to
treat, who will treat, and how to treat, and each of those decisions offers a range of possibilities
as indicated in the chart below.
As for the issue of when teachers tend to correct errors, it has been reported that teachers
correct more errors on occasions when there is a greater focus on form in the class. Regarding
the question of 'which errors?', it has been shown that lexical, discourse and content errors receive
more attention than errors in phonology and grammar. Moreover, studies on error evaluation
indicate the need to consider exactly who is doing the correction. Possible answers to the
question of who should correct are the teacher, the learner making the error and other learners.
There is research to indicate that all three can occur in various situations. Also, considerable
differences exist between native speaker and nonnative speaker teachers as regards the focus of
corrections, with native teachers being more concerned with fluency and non-native
teachers making more form-focused interventions. On the issue of how best to treat errors,
there have been various taxonomies of error modes, all indicating the rich set of possibilities open
to the teacher. The initial questions, as we saw above, were 'to treat or to ignore completely' and
'to treat immediately or delay'. The remaining ones are classified as possible features of error
treatments, such as 'blame indicated', 'location indicated' and so on.
Before we undertake an evaluation of EA, we need to bear in mind that error analysis, in
its original form, was an inductive phase of enquiry in SLA research. That is, it worked from
corpora of collected samples of error and tried to draw generalizations about patterns in those
samples. While the observation of such patterns is an important step in moving towards an
understanding of SLA, work since the 1980s has on the whole been deductive. Researchers start
with theory about SLA which generates hypotheses, which are themselves then tested against
error patterns. Deductive approaches are potentially much richer sources of explanation than
inductive approaches, and for that reason few researchers nowadays conduct error analyses of
the type described above. Now let's consider some of the criticisms directed at EA in the literature.
• EA is criticized for giving too much attention to learners' errors. While errors indeed
reveal a system at work, we can become so preoccupied with noticing errors that the
correct utterances in the second language go unnoticed. In our observation and analysis
of errors we must beware of paying so much attention to errors that we lose sight of
the value of clearly expressed language that is a product of the learner's progress and
development. While the diminishing of errors is an important criterion for increasing
language proficiency, the ultimate goal of second language learning is the attainment of
communicative fluency.
• It has been shown that EA fails to account for the strategy of avoidance. A learner who for
one reason or another avoids a particular sound, word, structure, or discourse category
may be assumed incorrectly to have no difficulty therewith. For example, researchers
have found that Chinese and Japanese learners of English make fewer errors on relative
clauses than Spanish and Farsi speakers. But the reason they do so is that they
produce fewer relative clauses. We can conclude that Chinese and Japanese speakers
avoid producing relative clauses because they know they are very different in English
from Chinese and Japanese. The absence of error therefore does not necessarily reflect
native-like competence.
• It was observed that EA can keep us too closely focused on specific languages rather than
viewing universal aspects of language. The language systems of learners may have
elements that reflect neither the target language nor the native language, but rather a
universal feature of some kind. This view is in keeping with the bio-programming theories
of SLA.
• A number of problems arose with the use of error taxonomies as an approach to the study
of SLA. For a taxonomy to be effective it should be easy to classify items uniquely under
one category or another. But in the case of error taxonomies it has often been difficult to
determine why an error should be classified in one way rather than another.
We can conclude that the faults of EA were too obvious for it to continue to serve as the
primary mode of SLA analysis. By the late 1970s, EA became more of a research tool for specific
problems and was incorporated into overall performance analysis which looks at the totality of
learner language performance. By the end of the decade, the theory of interlanguage and more
general SLA theory, to which EA contributed, had prevailed. Nowadays, it can be argued that EA
has survived only in the form of measures of accuracy in SLA research.
Chapter 6
Interlanguage theory
The theory that motivated the research in both EA and later developments in SLA became
known as interlanguage theory. The interlanguage (IL) theory was in sharp contrast to CA. As we
saw above, CA was criticized on the basis of its behaviorist accounts of language learning as
it viewed L2 acquisition as a mechanical process of habit formation where already learned
habits (L1) interfered with the learning of new habits (L2). Thus, in this model, the challenge
facing the L2 learner was to overcome the interference of L1 habits. Therefore, CA sought to
identify the features of L2 that differed from those of the L1 so that learners could be helped to
form the 'new habits' of the L2 by practicing them intensively. EA, in contrast, became closely
associated with nativist views of language learning and the emergence of interlanguage
theory. Whereas behaviorism emphasized the role of environmental stimuli (nurture), nativist
theories emphasized the mental processes that occur in the 'black box' of the mind when learning
takes place (nature).
From a nativist perspective, the learner is no longer seen to be a passive recipient of TL
input, but rather someone who plays an active role, processing input, generating hypotheses,
testing them and refining them. In this model, linguistic data (input) is processed internally
by a pre-programmed cognitive faculty (UG) resulting in a knowledge system (competence)
that is then used in actual performance (output). The cognitive mechanisms dictate both
what is attended to in the input (i.e. noticed), and how what is attended to is processed as L2
knowledge (i.e. the learner's intake). The intake in turn serves as the basis for the learner’s
interlanguage.
Interlanguage is the term used to describe the grammatical system that a learner
creates in the course of learning another language. It is neither their first language system, nor
the target language system but occupies a transitional point between the two. This interlanguage
is seen as a rather independent system in its own right, and not simply a degenerate form of the
target language. This new approach to learner language replaced the dichotomous view
(native language versus target language) with a continuously variable or scalar view (native
language → interlanguage → target language). It also reflects the learner's evolving system
of rules. Some of these rules may be influenced by the first language (through transfer), others by
the target language (through generalization), while others are attributed to innate and
developmental principles (i.e. the universal grammar). Figure 6.1. illustrates the three kinds of
influence on learner language mentioned so far.
Interlanguage is said to be systematic because learners behave 'grammatically' in the
sense that they draw on the 'rules' they have internalized – a view that casts doubt on the use
of the term 'error' itself, as learners' utterances are only erroneous with reference to target language
norms, not to the norms of their own grammars. One way that interlanguages show that they are
systematic is that they follow predictable stages, no matter what the learner's first language is
(order of acquisition). At a very early stage, interlanguage takes on the form that has been called
the basic learner variety. This is characterized by very basic syntax and few if any grammatical
word endings (inflections). Interlanguages are constantly evolving. When they stop doing so, they
stabilize at a point some way from the target (fossilization). As it happens, very few second
language learners achieve native-like proficiency. This is an argument for recognizing the
legitimacy of interlanguage, and for accepting that partial competence rather than full
competence, is a valid objective in second language learning.
In recent years researchers have come to understand that second language learning is a
process of the creative construction of a system in which learners are actively testing hypotheses
about the target language from a number of possible sources of knowledge: knowledge of the
native language, limited knowledge of the target language itself, knowledge of the
communicative functions of language, knowledge about language in general, and knowledge
about life, people, and the universe around them. Learners, in acting upon their environment,
construct what to them is a legitimate system of language in its own right – a structured set of
rules that brings some order to the linguistic chaos that confronts them.
A number of terms have been coined to describe the perspective that stresses the
legitimacy of learners' second language systems. The best known of these is Interlanguage.
Interlanguage refers to the separateness of a second language learner's system, a system
that has a structurally intermediate status between the native and target languages. Another
term was Approximative System which stressed the successive approximation to the target
language. Still another term, idiosyncratic dialect, was coined to connote the idea that the
learner's language is unique to a particular individual, that the rules of the learner's
language are peculiar to the language of that individual alone. These three terms, with their
different emphases, are illustrated below:
Figure 6.2. Representations of Learner’s Second Language Systems
While each of these labels emphasizes a particular notion, they share the concept that
second language learners are forming their own self-contained linguistic systems. This is neither
the system of the native language nor the system of the target language, but a system based
upon the best attempt of learners to bring order and structure to the linguistic stimuli
surrounding them. The Interlanguage Hypothesis led to a whole new era of second language
research and teaching and presented a significant breakthrough from the CAH.
Since its conception in the 1970s, interlanguage theory has evolved considerably but its
central principles have remained largely intact. The main premises of interlanguage theory are
introduced below.
1. The learner constructs an implicit system of abstract linguistic rules which underlies
comprehension and production of the L2. This system of rules is viewed as a ‘mental
grammar’ and is referred to as an 'interlanguage'.
2. Some researchers have claimed that the systems learners construct contain variable
rules. That is, they argue that learners are likely to have competing rules at any one stage
of development. However, other researchers argue that interlanguage systems are
homogeneous and that variability reflects the mistakes learners make when they try to
use their knowledge to communicate. These researchers see variability as an aspect of
performance rather than competence.
3. The learner's grammar is permeable. That is, the grammar is open to influence from the
outside (i.e. through the input). It is also influenced from the inside. For example, the
omission, overgeneralization, and transfer errors which we considered in the previous
chapter constitute evidence of internal processing.
4. The learner's grammar is transitional. Learners change their grammar from one time to
another by adding rules, deleting rules, and restructuring the whole system. This results
in an interlanguage continuum. That is, learners construct a series of mental grammars
or interlanguages as they gradually increase the complexity of their L2 knowledge. For
example, initially learners may begin with a very simple grammar where only one form
of the verb is represented (e.g. 'paint'), but over time they add other forms (e.g. 'painting'
and 'painted'), gradually sorting out the functions that these verbs can be used to perform.
5. The learner's grammar is likely to fossilize. It is suggested that only about five percent of
learners go on to develop the same mental grammar as native speakers. The majority stop
some way short. The prevalence of backsliding (i.e. the production of errors representing
an early stage of development) is typical of fossilized learners. Fossilization does not
occur in L1 acquisition and thus is unique to L2 grammars.
Table 6.1. The Main Features of Interlanguage
There are many different ways to describe the progression of learners' linguistic
development as their attempts at production successively approximate the target language
system. Indeed, learners are so variable in their acquisition of a second language that stages of
development defy description. However, four major stages of learner language development have
been recognized:
1. The first is a stage of random errors, a stage that is called pre-systematic, in which the
learner is only vaguely aware that there is some systematic order to a particular class
of items. The written utterance "The different city is another one in the another two" surely
comes out of a random error stage in which the learner is making rather wild guesses at
what to write. Inconsistencies like "John cans sing," "John can to sing," and "John can
singing," all said by the same learner within a short period of time, might indicate a stage
of experimentation and inaccurate guessing.
2. The second, or emergent, stage of learner language finds the learner growing in
consistency in linguistic production. The learner has begun to discern a system and to
internalize certain rules. These rules may not be correct by target language standards,
but they are nevertheless legitimate in the mind of the learner. This stage is characterized
by some backsliding, in which the learner seems to have grasped a rule or principle
and then regresses to some previous stage. This phenomenon of moving from a
correct form to an incorrect form and then back to correctness is referred to as U-
shaped learning. In general, the learner at this stage is still unable to correct errors
when they are pointed out by someone else. Avoidance of structures and topics is
typical. Consider the following conversation between a learner (L) and a native speaker
(NS) of English:
L: I go New York.
NS: You're going to New York?
L: [doesn't understand] What?
NS: You will go to New York?
L: Yes.
NS: When?
L: 1972.
NS: Oh, you went to New York in 1972.
L: Yes, I go 1972.
3. A third stage is a truly systematic stage in which the learner is now able to manifest more
consistency in producing the second language. While those rules that are stored in the
learner's brain are still not all well-formed, and some of them conform to the above
mentioned U-shaped processes, they are more internally self-consistent and, of course,
they more closely approximate the target language system. The most prominent
difference between the second and third stage is the ability of learners to correct their
errors when they are pointed out – even very subtly – to them. Consider the English
learner who described a popular fishing-resort area.
L: Many fish are in the lake. These fish are serving in the restaurants near the lake.
NS: [laughing] The fish are serving?
L: [laughing] Oh, no, the fish are being served in the restaurants!
4. A final stage, which some researchers call stabilization, is parallel to what others call a
post-systematic stage. Here the learner has relatively few errors and has mastered the
system to the point that fluency and intended meanings are not problematic. This fourth
stage is characterized by the learner's ability to self-correct. The system is complete
enough that attention can be paid to those few errors that occur and corrections be made
without waiting for feedback from someone else. At this point learners can stabilize too
fast, allowing minor errors to slip by undetected, and thus manifest fossilization of their
language.
Table 6.2. Stages of Interlanguage Development (columns: Stage, Features)
All learners in all areas can experience uneven lines of progress, and in many cases,
especially in advanced stages of learning, those lines can reach an apparent "plateau" for a
considerable period of time. It is quite common to encounter in a learner's language various
erroneous features that persist despite what is otherwise a reasonably fluent command of the
language. This phenomenon is most notable in "foreign accents" in the speech of many of those
who have learned a second language after puberty. Syntactic and lexical errors can also persist in
the speech of those who have learned a language quite well. The relatively permanent
incorporation of incorrect linguistic forms into a person's second language competence has
been referred to as fossilization. In theory, such deviant forms are said to be resistant to
correction. However, some researchers doubt this, and prefer the term stabilization to
fossilization, because this leaves open the possibility for further development at some point in
the future.
There are various theories as to what causes fossilization. It is a well-known phenomenon
in learners who have acquired their second language in naturalistic conditions. So it has been
hypothesized that the lack of instruction, especially the lack of a focus on form, is the main
cause. This is used as an argument for giving explicit attention to grammar. Another theory is
that fossilization may be due to the lack of negative feedback on errors, a view that is used to
justify correction. Fossilization may also be due to the fact that learners have not been 'pushed'
to make their output more accurate. Yet another theory argues that some learners have no
social motivation to improve their interlanguage. Once they can meet their basic communicative
needs, fossilization (or pidginization) is likely to occur because they are not sufficiently
motivated to want to pass as members of the target language community (the process of
acculturation). Now that it is accepted that few if any second language learners achieve native-
like proficiency, the concept of fossilization is viewed less negatively. It is being replaced by the
idea of partial competence. In other words, for many learners it may be more realistic to aim for
a 'working knowledge' of the target language. This is also consistent with the more pragmatic
objectives of learning English as an international language.
Finally, we can conclude that both internal and external factors can lead to fossilization.
Beside the extrinsic elements of feedback and exposure, the presence or absence of internal
motivating factors, of seeking interaction with other people, of consciously focusing on forms, and
of one's strategic investment in the learning process can affect the process of fossilization.
6.5. Variability in IL
There is ample evidence in SLA to suggest that learner language displays significant
systematicity; however, a pervasive feature of SLA is its variability. The variability exists both
across learners and within individual learners. Inter-learner variability can be observed in
learners who, despite starting from the same point and being exposed to the same
conditions, exhibit significant differences in terms of the rate and the outcomes of learning.
Factors that might account for such variability may be internal, such as the learner's first
language, attitudes, motivation and learning style, or they may be external to the learner,
such as the amount and type of exposure, the availability of practice opportunities, and
whether or not the learner is receiving instruction. Table 6.3. summarizes some of the factors
that can contribute to inter-learner variability.
Table 6.3. Factors Contributing to Inter-learner Variability
There is also variability within individual learners. For example, learners sometimes make
an error in the use of a specific target language structure and sometimes do not. They may also
use more than one way of expressing the same idea, more or less interchangeably, such as:
“Yesterday the thief steal the suitcase.”
“Yesterday the thief stealing the suitcase.”
“Yesterday the thief stole the suitcase.”
The identification of a 'stage' of development in the sequence of acquisition does not mean
that learners consistently make use of a single form among others that they use during the same
period. In fact, as we saw above, at any one stage of development the learner will employ different
forms for the same grammatical structure. For example, in the case of the past tense, at any one
time, a learner may mark some verbs correctly for past tense, fail to mark others at all, and
overgeneralize the regular -ed and the progressive -ing forms with yet other verbs.
However, these observations do not invalidate the claim that learner language is
systematic since it is possible that variability is also systematic. That is, we may be able to explain,
and even predict, when learners use one form and when another.
One of the examples of systematic interlanguage variation can be found in employing the
past tense structure in English. When learners begin to use past tense markers (either irregular
markers as in 'ate' or regular markers as in 'painted'), they do not do so on all verbs at the same
time. Learners find it easier to mark verbs for past tense if the verb refers to events (for example,
'arrive'), somewhat more difficult to mark verbs that refer to activities (for example, 'sleep'), and
most difficult to mark verbs that refer to states (for example, 'want').
The kind of verb also influences the kind of errors learners make. For example, with
activity verbs learners are more likely to substitute a progressive form for the past tense form:
After that the weather was nice so we swimming in the ocean.
In contrast, with state verbs they substitute the simple form of the verb:
Last night everything seem very quiet and peaceful.
Learners, then, pass through highly complex stages of development. These stages are not
sharply defined, however. Rather they are blurred as learners oscillate between stages.
One of the most fruitful areas of learner language research has focused on the variation
that arises from the difference between classroom contexts and natural situations outside
language classes. As researchers have examined instructed second language acquisition, it has
become apparent not only that instruction makes a difference in learners' success rates but also
that the classroom context itself explains a great deal of variability in learners' output.
One of the current debates in SLA theory centers on the extent to which variability can
indeed be systematically explained. The concept of variability in learner language has been
addressed in different ways by different research paradigms. In the linguistic approach,
variability has been largely ignored. Specifically, linguists in the Chomskyan tradition adopt
what is called the homogeneous competence paradigm. In this approach, variation is seen as a
feature of performance rather than of the learner's underlying knowledge system. The general
claim is that in order to study language it is necessary to abstract what learners 'know' from what
they 'do'. This involves various kinds of idealization through which the linguist gains access to
data that are invariable and so can be used to investigate the learner's linguistic competence. The
second approach to interlanguage variability is essentially psycholinguistic in nature. In contrast
to the purely linguistic approach, in this model the variation is studied with reference to the
internal mechanisms that influence the learner's ability to process L2 knowledge under different
conditions of use (e.g. whether the linguistic task is planned or unplanned). Still a third way of
tackling variability is to recognize the sociolinguistic factors which affect learner
language. This involves studying language in relation to social context, e.g. investigating the
variability which arises within the speech of a single speaker as a result of changes in situational
context.
Therefore, we can conclude that, although some variability may be random in the
initial stages of interlanguage development (free variation), in later stages it will be largely
systematic (systematic variation) in the sense that it is possible to identify the probabilities
with which the different forms will occur in accordance with such factors as the type of
structure, addressee and the availability of time to plan utterances.