International Journal of Neurological Disorders - ISSN: 2639-7021


Research Article

On Regional Differences of Brain Functions: in the Light of Metabesity?

Fred C C Peng1* and Virginia M Peng2

1Department of Neurosurgery and Neurological Institute, Taipei Veterans General Hospital, Taiwan 2Ritsumeikan University, Japan


Submitted: 13 September 2018; Approved: 12 October 2018; Published: 13 October 2018

Cite this article: Peng FCC, Peng VM. On Regional Differences of Brain Functions: in the Light of Metabesity. Int J Neurol Dis. 2018;2(1): 010-021.

Copyright: © 2018 Peng FCC, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Purpose: In "Dementia in epilepsy: a clinical contribution to the metabesity of epileptology, geriatrics and gerontology" (Peng, 2017a), the author attributes a case of pathology to metabesity, in line with its original idea that differing neurological disorders share metabolic roots. In this article we wish to extend the notion of metabesity to the brain functions of normal healthy people, with two objectives: (1) to review the regional differences of brain functions as claimed in the literature, and (2) to provide anatomo-neurophysiological details of metabesity when it is applied to behaviors in normal healthy people.

Reason: Brain functions have been a subject of intense inquiry throughout the history not only of neuroscience but also of philosophy in connection with the mind. Investigators in both fields, however, have attempted to find the solution to their investigations through one common inquiry, "What Is Language and Where Is It Located?", without awareness of the important new idea of metabesity.

But they — neuroscientists in particular — immediately encounter a couple of questions: (1) Is language a form of behavior or not? (2) If so, what does it entail as behavior in the brain? To overcome the difficulties, neuroscientists have been using clinical cases, e.g., split-brain patients with epilepsy, to justify their claims of lateralization and localization of brain functions.

Method: In this article we shall attempt to point out that, in the light of metabesity, such should not be the case under normal conditions, and we shall also list the underlying reasons for the problems involved, in order to arrive at a plausible answer to (1) regional differences of brain functions in relation to (2) language in the brain as the most complex form of human behavior.

Result: The first question, "Is language a form of behavior or not?", may sound simple, but the answer has puzzled thousands of investigators in varying disciplines with no consensus in sight. There are at least four obvious problems: (1) the lack of understanding that the nervous systems are structurally interrelated and functionally interdependent has been the major one; (2) hence, there is no proper understanding of what behavior is in relation to brain functions; (3) as a result, the meaning of what language is has varied from the sublime to the ridiculous in respect to aphasia; and (4) the recognition of sign language as a language in its true sense was not available before 1960, and sign language has therefore not been dealt with in the brain, although some linguists have since put it in the straitjacket of oral language to probe "sign language grammar" [1].

Discussion and Conclusion: We shall attend to each of these reasons in order to answer the questions raised. That is, regional differences of brain functions under normal conditions are only partially true when behavior is taken into consideration in respect to language, because language in the brain, verbal and nonverbal, is behavior which is memory-governed, meaning-centered, and multifaceted; it had never been understood as such until Peng first pointed it out in 2005 [2]. The mistake has been perpetuated to this date, although there are clues of recognition that sign language impairment cannot be handled as "aphasia", since signers, deaf or not, are not immune to sign language disorders, and that sign language is no more lateralized to the left hemisphere, as some psychologists in California attempted to claim, than oral language is lateralized to the left or right hemisphere.

We shall then conclude that regional differences of brain functions as they now stand cannot account for behaviors under normal conditions when language as such is involved, unless behaviors are properly understood, especially when language in the brain is understood to include oral language, sign language, and written language. In other words, this article attempts to rectify the current misunderstanding of regional differences regarding brain functions vis-a-vis behaviors, especially language in the brain as behavior among others (e.g., music), a misunderstanding that neglects the major role played by the brain stem and the cranial nerves therein, as well as the brain functions of memory and cognition, which to us are heads and tails of the same coin.

Introduction

The two questions of "What Is Language and Where Is It Located?" have been the core issue of Peng's concern ever since he got his Ph.D. in 1963. Our answers are now: language exists in two places, the brain and society. In each place, it is de facto behavior.

However, since our main purpose is to elaborate the first, we apply the concept of metabesity and leave the second for another occasion. The aim is to help linguists, neuroscientists, and even lay people realize the importance of properly understanding language in the brain functions as behavior which is memory-governed, meaning-centered, and multifaceted, and of understanding that memory and cognition in the brain are heads and tails of the same coin in relation to the production and reception of behaviors.

What is language?

The answer to "What Is Language?" varies depending on who answers the question and who is being asked. Here are some examples.

To laypeople: The answer is likely to be:

(1) It has words, ways that we string them together and pronounce them to communicate ideas;

(2) It is a tool to communicate;

(3) It is written texts, with no distinction from oral language, owing to some languages, like Chinese, which have a built-in restriction to make the distinction. For example: English (Spoken)/(Written); Japanese (Spoken)/(Written); Chinese (?)/(Written); Taiwanese (Spoken)/(?); American (Spoken)/(?). Peng was once asked by a Chinese speaker from China, "Ni Hwei Pu Hwei Jiang Chung Wen?" (Do you speak Chung Wen?). His answer was: "Wo Pu Hwei" (I can't; Chung Wen is for writing).

To linguists: It has 'lexicon', 'syntax', 'phonology', and 'semantics'. On this view, linguists have made three independent definitions, as follows:

(1) To structural linguists in the forties and fifties the definition was: An abstract system of arbitrary vocal symbols. This is what the first author was taught.

(2) In the sixties, however, the definition was changed to: An infinite set of sentences. This definition unfortunately dominated the scene in the Linguistic Society of America, wasting two decades on nonsensical arguments among linguists, owing to the rising trend of MT (Machine Translation) because of the Cold War with the then Soviet Union [1]. Chomsky's Syntactic Structures in 1957 was an offshoot of the Cold War, making use of the methodology in MT headed by Victor Yngve at MIT, to claim a new "mathematical" approach when it was not.

(3) Then came along Michael Halliday in the seventies to propose yet another definition in Systemics which is: A social-semiological system for making meanings through choice in varying contexts of situation.

In general, however, most people make no distinction between language and speech or written texts. And this lack of distinction is further complicated by sign language, which is now a bona fide form of language in various countries but has yet to develop a written form. We do not hesitate to stress that there will never be any written sign language, although Gallaudet University has devised a cumbersome system to notate ASL, which is not at all adequate as a written form of sign language.

To medical people: Language exists in the brain as a single unit, a thing controlled by a center. This view is based on the notion of phrenology started by Gall, and it led to the subsequent claims of language centers in the brain, as explained below.

(1) Gall's idea of phrenology: He claimed that the shape of the human skull covering the brain, with local differences varying from one individual to another, reveals the individual's personality, intelligence, emotion, and language. Although his notion has subsequently diminished, its traces can still be found in fortune telling, like palm-reading as well as face-reading (面相) in Taiwan.

(2) Broca's claim of an articulate center: It was not until 1861, when Paul Broca, a French neuroscientist and later phrenologist, published the first clinical report of his observation of a patient named Leborgne, known otherwise as Tan-Tan in the literature, that the idea of language being lateralized to the left hemisphere first appeared, an idea that began to spread in subsequent medical literature.

Tan-Tan's brain was autopsied and taken to a conference in Paris, where Broca presented the "evidence". Broca later presented a second case report on another patient, named Lelong, to solidify his view of the lateralization of language to the left hemisphere. The location of this lateralized language in the brain has become known as Broca's area, damage to which produces language disorders, which in aphasiology are therefore termed Expressive Aphasia.

However, Pierre Marie, another French neuroscientist, who later studied Tan-Tan's brain and Broca's claim of the lateralization of language to the left hemisphere on the basis of Tan-Tan's and Lelong's language pathology, referred to Gall's ideas mentioned above as "nonsense". Moreover, in 1906, he re-examined Tan-Tan's brain, which had been kept in formalin at a museum in Paris, and declared in Semaine médicale, 23 May 1906, that "la troisième circonvolution frontale gauche ne joue aucun rôle spécial dans la fonction du langage" (the third, left, frontal convolution plays no special role in the function of language).

(3) Karl Wernicke: This German neuroscientist followed up Broca's line of development and in 1874 published his case report in an attempt to modify and define a systematic picture of aphasia, which was later synthesized by Lichtheim to form what has become known in aphasiology as the Wernicke-Lichtheim model of aphasia.

Wernicke's patient had a different lesion location in the brain, and the resultant aphasia has been known in the literature as Receptive Aphasia, for which the responsible lesion site is known today as Wernicke's area; it is supposed to be localized in the posterior one third of the superior gyrus of the left temporal lobe. Interestingly enough, such an area could not be pin-pointed by subsequent researchers, as has been reported in the literature. Be that as it may, the point we are making is that Wernicke's case also had an apoplectic episode, and that he speculated that the posterior one third of the superior gyrus of the left temporal lobe had a pathway connecting it to Broca's area. This pathway is now known as the arcuate fasciculus, although it is not correct as a direct connection with Broca's area, as will be shown further below, even though it is neuroanatomically significant as a pathway. For some reason only the left arcuate fasciculus is often mentioned in such a connection, when there are in fact two, one in each hemisphere.

Wernicke's bold speculation was apparently inspired by Ferrier, who in an 1873 publication localized the auditory center in the first temporal convolution and speculated that there had to be a pathway connecting the auditory center in the left hemisphere with the third, left, frontal convolution advocated by Broca as the site of the faculty of articulate language. Again, Ferrier failed to recognize that there are two such auditory centers, one in each hemisphere. Be that as it may, part of the credit for Wernicke's speculated pathway, the arcuate fasciculus, should also be given to Ferrier.

We should add that these contributions have subsequently established the lasting tradition (albeit incorrectly) of two “language centers” – Broca’s area and Wernicke’s area – to this date. But unfortunately such a tradition led to the rush for more localizations of brain functions, e.g., music, calculation, memory (like Working Memory on which we shall comment later), and what not, leading to many claims of different regional differences of brain functions in the literature, albeit unfounded in our opinion, when metabesity is taken into consideration. We shall in this article attempt to set the record straight.

The debates over the lateralization of language to the left hemisphere have thus come down to the issue of "cerebral dominance" of brain functions, which is often equated with "cerebral laterality" in neuroscience. To us, the equation is wrong. The reason is that the proponents of "cerebral dominance" assume that there are language centers in the brain — a remnant of phrenology in our view — and that they in addition tie the so-called "language centers" to handedness (hand predominance), eye-predominance, ear-predominance, and leggedness (leg-predominance).

The point we are stressing is that once the concept of “language centers” is proven erroneous, the issue of “cerebral dominance” evaporates, although cerebral laterality remains, because animals also have “handedness”, eye-predominance, and ear-predominance. Put differently, the two hemispheres are functionally asymmetrical (i.e., different) but interdependent along with the peripheral nervous system in the brain stem, and structurally interrelated because they are more or less homologous in mirror-image. For this reason, we have changed cerebral laterality to asymmetry of brain functions.

The difference between cerebral dominance or cerebral laterality and asymmetry of brain functions, from our point of view, is this: The former assumes that one hemisphere dominates the other, in respect to brain functions for a behavior, say, language, to the extent that the dominant hemisphere does all the work to the exclusion of the other hemisphere. For this erroneous view, some Japanese neurologists even claim that the right hemisphere is useless.

The latter, on the other hand, indicates the sharing of such brain functions between the two hemispheres, in relation to the brain stem, albeit asymmetrically, for the expression of behaviors as shown in figure 1 and figure 2 further below, making use of the various body parts involved for production and reception. For this reason, we have chosen to change the terminology in favor of asymmetry of brain functions.

Brief Descriptions

A brief description of the nervous systems

In the preceding section, we more or less summarized what language is to people in differing groups, ranging from lay people to professional linguists and medical experts, in order to point out the wide variety of opinions, none of which is accurate. In so doing, we took for granted the omission of neuroanatomical descriptions, a gap that needs to be filled in below.

First, the nervous systems are made up of (1) a central nervous system and (2) a peripheral nervous system. The former consists of the brain and the spinal cord, each being wrapped in three layers of membranes, called meninges; that is, the dura mater, the arachnoid, and the pia mater. Second, the brain has two hemispheres which are homologous but asymmetrical in mirror-image for functions. Each hemisphere has five lobes: (1) frontal lobe, (2) temporal lobe, (3) parietal lobe, (4) occipital lobe, on the lateral side, and (5) the limbic lobe on the medial side. Each lobe is "wrinkled up" to form varying bundles of concentrated nerve cells in six layers of differing multi-polar neurons; each one of these wrinkles is called a gyrus.

These gyri of each lobe are filled with the cell bodies of neurons, referred to collectively as cerebral cortex. The cortex of each lateral lobe is often referred to as neocortex in contrast with the cortex of the limbic lobe as paleocortex. There are six layers of differing multi-polar neurons in each lobe. We think these multi-polar neurons have a special behavioral function, in relation to language, which we hereby call function enhancement. It pertains to the production and reception of language in the brain as behavior, as we shall explicate in some detail later.

Of these five lobes, the structural descriptions are abundantly available in any textbook on neuroanatomy. Even casual functional descriptions of these lobes are also available. However, the functional interdependences of these lobes, in terms of behaviors for production and reception, are seldom offered. Although we do NOT pretend to exhaust such descriptions in this section, it is important that we at least speculate on some of the basic functional properties of the five lobes, especially of the limbic lobe, in relation to the limbic system and the Papez Circuit, so as to point out the importance of how the nervous systems function in production and reception during a behavioral interaction between the dyadic partners. To do so, we will need to touch on our theoretical constructs, which are based on such anatomo-physiological properties of these brain structures from the point of view of production and reception in the dyadic partners' interactive behaviors.

A brief description of our theoretical constructs

Since, to us, language exists in the brain, we shall now explicate what it is like. First, let us point out that in English there are two terms, language and speech, but that in French there are three terms: la langue, la parole, and le langage. To clarify this discrepancy we have divided language as behavior into two aspects: (1) Individual and (2) Social, by assigning langue and parole to the Individual Aspect of language - the brain - and le langage to the Social Aspect of language - society.

As such, let us also emphasize once again that language in the brain is behavior which is memory-governed, meaning-centered, and multifaceted, because sign language is now regarded as a language. The idea is to lead linguists to realize the importance of proper understanding of how brain functions work in production and reception of behaviors. The reason is that, first and foremost, such brain functions make use of body parts in order to enable each individual to make proper adjustments to the internal and external environment, be they proper or not.

In other words, unlike the prevailing view in linguistics and neuroscience, we wish to raise the question of (1) whether there is grammar in the brain or not for linguists, a "sacred notion" in linguistics, as well as (2) whether there is any language center in the brain or not for neuroscientists, including aphasiologists. Our conclusion is that there is no grammar in the brain, and that Halliday's notion of "grammar brain" is a farce. Nor are there "language centers", expressive or receptive, in the brain, a notion that is based on the poorly understood regional differences of brain functions and the failure to recognize the asymmetry of brain functions in production and reception of behaviors.

In particular, for the former, we comment on and deal with de Saussure’s notions of langue and parole as well as his notions of signifiant (sound) and signifié (concept) to point out with illustrations of contradictions that there is no grammar in the brain. It is an epistemological artifact conveniently created by linguists and not an ontological given. See Peng (2009) [3] for more details.

For this approach, we shall touch on a brief description of neuroanatomy to pave the way for discussion of our theoretical frameworks of catalytic mapping for production and coupling for reception; both of them illustrate for neuroscientists that language in the brain is behavior which depends very much on the basal ganglia as the subcortical structures for the extrapyramidal looping to form thinking and thoughts as well as the brain stem, in relation to the cranial nerves therein, for production and reception of language as behavior. Our theoretical frameworks, unlike all theories of linguistics today, will also help linguists understand how the brain functions work vis-a-vis behaviors, such as language and music.

In so doing, we encounter an important question: what, then, are the brain functions of memory and cognition? To psychologists, cognition consists of or subsumes thinking, learning, and memory. This is a wrong view, because it sets memory apart from cognition as a subordinate, when the brain functions of memory and cognition are heads and tails of the same coin, enabling each individual to make proper adjustments to the external and internal environments, making use of the body parts available to each human. That is, there are memory as contents, memory as capacity, and memory as mechanisms, the last of which may be regarded as the brain function of cognition; hence, memory and cognition are heads and tails of the same coin. We shall elaborate this idea on another occasion, under the title "The brain functions of memory and cognition are heads and tails of the same coin".

Another group of neuropsychologists has also advocated the idea of working memory, which is supposed to be a very brief short-term memory consisting of an Executive System and two slave systems: (1) a phonological system and (2) a visuo-spatial scratch pad. It is further claimed to be empowered by the pre-frontal gyrus.

Since then many neuroscientists have jumped on it and refer to the executive system, without explaining what that is, as the source of neurological disorder whenever in their presentations they encounter memory impairment in their subjects. We will return to this notion later for more comments, because it is based on a new (but wrong) interpretation of regional differences of brain functions. The reason is that these neuropsychologists have no idea that language in the brain includes both the oral and the non-oral, as will be explicated in great detail further below.

Our aim is therefore to stress that all behaviors, including language, are memory-governed in the brain, and that the brain functions of memory and cognition in respect to language in the brain as behavior depend very much on (1) the brain stem and (2) the innermost structures — the basal ganglia, the striatum, the globus pallidus, and the thalamus, and even the cerebellum to some extent. In other words, the former — the brain stem — is constantly telling the cerebrum (telencephalon) or cerebral cortex what to do for production and reception of behaviors, in close relation with those innermost structures. Put differently, language in the brain is behavior which is not lateralized to the left hemisphere, and music, also behavior, is not lateralized to the right hemisphere, as assumed by people who believe in the regional differences of brain functions.

Such erroneous views of regional differences as higher brain functions assume that the dominant hemisphere does all the work to the exclusion of the other side. We challenge such regional differences on the following grounds, because of metabesity: (1) the cranial nerves in the brain stem are needed by way of the important pathways called the corona radiata (including the internal capsule) in both hemispheres through the corpus callosum; (2) the basal ganglia must take part; and (3) other innermost structures as well as the peripheral nervous systems are also involved in language as behavior for production and reception. They all contribute important functions and play important roles in the realization of behaviors in production and reception. Without their contributions, there is no behavior in production and reception from the cerebrum alone.

Our view coincides with a recent new concept called metabesity; there was an international congress in London, in October 2017, targeting metabesity. See also “Dementia in Epilepsy: A Clinical Contribution to the Metabesity of Epileptology, Geriatrics and Gerontology” (Peng, 2017a) [4].

The justification of our view is that, embryologically, the telencephalon — the two cerebral hemispheres — is the last to develop in fetal life. Therefore, the cerebrum is NOT the life-supporting organ, as a baby can be born without the cerebrum; rather, it is the brain stem that is the life-supporting organ. The evidence is that, as reported in The Japan Times some years ago, a baby girl by the name of Theresa was born in Miami, Florida, without the cerebrum. The mother wanted to donate her organs and appealed to the court for permission. But the baby died ten days or so before the mother could get permission from the court. See also Peng (2017b) [5].

Keep in mind that the "brainless baby", as the newspaper reported it, had lived in the uterus for nine months before birth and continued to live for ten days or so after birth. During that time the nuclei of the cranial nerves in her brain stem were functioning properly: she was able to suck milk from her mother's breasts and had no problems moving her vocal apparatus, limbs, and facial muscles, including her lips and eyes, her head and neck, not just making noises like crying and moving her entire body. All of this was evidence for her survival during her short life, because these functions all depend on the proper working of the brain stem.

CNN also reported a case of a brainless child in Cambodia, a four-year-old boy who was still living. These brainless (anencephalic) children, though they may be short-lived, clearly support our notion that there are no regional differences of brain functions concentrated in the telencephalon without regard to the participation of the brain stem, which tells the cerebrum (telencephalon) what to do.

In so doing, we stress that the brain functions of memory and cognition are not confined to the two hippocampi; nor is there such a thing called "working memory"; rather, they are the neurophysiological activities, as electric impulses, which result from the release of neurotransmitters transmitting from presynaptic neurons to postsynaptic neurons across the synaptic clefts. Therefore, the brain functions of memory and cognition are heads and tails of the same coin, owing to the chemical exchange of sodium and potassium, all or nothing, to become electric impulses, which are the only signals the nervous systems recognize.

We have thus come to the conclusion that there is no grammar in the brain and emphasize again that grammar is an epistemological artifact conveniently created by linguists, and not an ontological given. See Peng (2009) [3] for more details. As such, language in the brain as behavior is not at all lateralized to the left hemisphere for most people or to the right hemisphere for some, nor is music lateralized to the right hemisphere. Hence, we herewith present our serious challenge in this article to the notion of regional differences of brain functions confined to the cerebrum for lateralization and localization.

For this reason and in so doing, we quote de Saussure's idea that when a concept unlocks its corresponding sound, as a psychological phenomenon, to form a tightly knit union, the two are bound together like the two sides of a sheet of paper, with concept on one side and sound on the other, such that you cannot cut one side without cutting the other at the same time. See figure 5 further below. However, his idea is wrong, because the two sides, A and B, of the union must be separated again to allow one side — the side of sound images — to leave for exit, maintaining the other side — the side of proto-meanings (or "concept" to de Saussure) — in the brain.

Such is the case in anatomo-neurophysiology of which in our opinion de Saussure was not aware. Therefore, there is no grammar between the two sides of each union as shown in figure 5, because they are only catalytically mapped. For this reason, we will show two things: (1) the sound images – NOT physical sounds — so mapped must be separated from the concept for production, after passing the extrapyramidal looping as shown in figure 6, by means of the appropriate cranial nerves in the brain stem after leaving the extrapyramidal looping, so as to come out as physical sounds through the vocal apparatus via the cortico-bulbar pathways of the internal capsules for oral language; and (2) the sign images — NOT physical signs — so mapped in the case of sign language must also be separated from the concept for production, when leaving the extrapyramidal looping, by way of the cortico-spinal pathways through the internal capsules and the brachial apparatus after decussation for sign language.

In such illustrations, we specifically point out two things: (1) The physical sounds or gestures after production have no meaning in and by themselves, because the meaning remains in the brain of each speaker/signer. (2) The hearer/viewer must then reconstruct meaning on the sounds heard from the speaker and the gestures viewed from the signer, but the meaning the hearer/viewer reconstructs is, nine times out of ten, not the same as the meaning the speaker/signer originally constructed, because of function enhancement. For this reason, in the worst case, arguments or even quarrels take place between the dyadic partners.

What is language in the brain like?

Language in the brain is behavior made up of two planes. This is a statement which underlies our theoretical construct. Therefore, in this section, we now delve into neuroanatomy and neurophysiology to further substantiate our theoretical constructs. However, we should remind the reader that our theoretical constructs rest on (1) the primary brain functions which constitute an inchoate mass of impulses and, in part, (2) the brain functions of neuromuscular coordination in both production and reception which also constitute an inchoate mass of impulses.

Since the former must be connected to the latter in the brain in order to enable the individual to make proper adjustments to the internal and external environments as behavior for production and reception in varying contexts, we claim that they constitute two planes, namely, (1) Content Plane and (2) Expression Plane, the physiological functions of which thus result in behaviors. To these we add a new core distinction, viz., Individual Aspect and Social Aspect. They are combined with de Saussure’s core distinction of Langue and Parole to constitute a matrix as follows:
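The matrix itself does not reproduce in this text version. Judging from the assignments stated in the Discussion below (the intersections of the new core distinction with langue constitute the content plane, I and II, while its intersections with parole constitute the expression plane, III and IV), it can be sketched roughly as follows; the placement of I versus II and of III versus IV under each aspect is our reconstruction, not the original layout:

                              Individual Aspect    Social Aspect
    Langue (Content Plane)            I                  II
    Parole (Expression Plane)         III                IV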

The idea embodied in this matrix can be re-constituted into two planes schematically as follows:

These two planes are now explained below.

Discussion

Content plane and expression plane

Each plane is an Inchoate Mass of Impulses. The Content Plane consists of Proto-meanings, and the Expression Plane consists of Sound-images in the case of oral language but Sign-images in the case of sign language.

In this section, we delineate the first plane in relation to the second plane on the basis of the above-mentioned matrix. Recall that language in the brain has two aspects which are like the two faces of Janus, one looking inward to the nervous system while the other looks outward to society. Thus, we claim that the intersections of the new core distinction and de Saussure's langue constitute the content plane (i.e., I and II) in each individual's primary brain functions, whereas the intersections of the new core distinction and de Saussure's parole constitute the expression plane (i.e., III and IV) in each individual's brain functions of neuromuscular coordination for both production and reception. The interactive connections between the individual's primary brain functions and brain functions of neuromuscular coordination may be depicted as follows.

We have also modified de Saussure’s speech circuit between members of a dyad in a diagram which ostensibly looks similar to the schematic representation illustrated above as figure 3. Since there are significant differences, they are specifically stated below.

The major differences between figure 3 and figure 4 are the following:

(1) Figure 3 depicts each dyadic member's brain functions for production and reception resulting from the interactive connections between his/her primary brain functions (content plane) and his/her brain functions of neuromuscular coordination (expression plane), whereas de Saussure's idea (Figure 4) only illustrates the verbal contact between members of a dyad. That is, to us the interactive connections are neurophysiological in terms of impulses (proto-meanings) for both the verbal and the nonverbal, but to de Saussure what goes on in each dyadic member's brain is psychological, constituting thought (or concept), which is an inchoate mass of ideas, and what enters the ear or comes out of the mouth is physiological, thereby resulting in sound, which is physical but just as indeterminate.

(2) To us, there is no grammar (or social-semiological system) in the brain, and therefore the interactive connections between the content plane (or primary brain functions) and the expression plane (or brain functions of neuromuscular coordination) are made possible directly by impulses caused by neurotransmitters which are facilitated by CREB (cAMP Response Element-Binding) proteins.

To Saussure, however, “the characteristic role of the language system (i.e., langue) vis-a-vis thought is not to create a material phonic means for the expression of ideas, but to serve as the intermediary between thought and sound so that their union necessarily brings about reciprocal delimitations of units” (Thibault, 1997) [6]. It is this very characteristic role of the language system (i.e., grammar) purported by de Saussure as the intermediary (or, in Hallidayan terminology, social-semiological system) between thought and sound that has influenced and dominated the contemporary theories of linguistics.


(3) Figure 3 indicates that there is no intermediary between primary brain functions and brain functions of neuromuscular coordination for production and reception in each dyadic partner's brain, and therefore each such dyadic partner is at one and the same time a speaker and a hearer (in oral language) or a signer and a viewer (in sign language). For this reason, there is no left or right sound, when it is uttered or heard, in oral language, unless there is a hearing impairment on the part of the hearer. But in sign language, it makes a difference whether the signer is right-handed or left-handed, especially when two-hand signs are involved; even finger-spellings make a difference, depending on whether the finger-spelling system is a one-hand system (as in ASL and JSL) or a two-hand system (as in British Sign Language). It follows that what the signer sees when he/she signs is the mirror-image of what the viewer sees. No such differences exist in oral language between speaker and hearer, however.

(4) Thus, the meanings constructed by speaker or signer are nine times out of ten not the same as the meanings reconstructed by hearer or viewer. To de Saussure, on the other hand, figure 4 favors speaker at the expense of hearer, without taking into consideration sign language, because it assumes that the meaning produced by speaker via the intermediary through phonation is the same as the meaning received by hearer via again the intermediary but through audition.

(5) Figure 3 implies that when an impulse (concept or meaning) in the content plane is mapped onto its corresponding impulse (sound-image) in the expression plane, the catalytic mapping results in a state of language potentiation. But there is no such provision in figure 4.

We have in the preceding sections alluded to: (1) the primary brain functions in terms of impulses (proto-meanings) from varying contexts through the interactive connections with the brain functions of neuromuscular coordination in both production and reception; and (2) the results of contacts in terms of relationships between context of situation and context of culture. Therefore, we should explicate four terms, namely: mind set, context of culture, thought, and ideology. They are the labels which depict varying forms of brain functions as meanings.

However, we must add that the first three terms overlap and will be used somewhat interchangeably. Let us explicate the term thought first in the content plane, so as to delineate the others, also in the content plane; our explications serve as the foundation on which the term ideology will then be based.

We must also mention and illustrate that many contexts of situation, which are said to be more dynamic and change continuously as time goes by, will cumulatively become impulses in each dyadic partner’s primary brain functions (i.e., content plane) as background noises.

These impulses accumulated over time in the content plane are not stationary “things” placed in “a box”; rather, they are electrical signals caused by chemical substances called neurotransmitters; the transductions of these neurotransmitters from one neuron to another are facilitated by the activator CREB protein and/or constantly controlled (or checked) by the repressor CREB protein regarding the excitatory and/or inhibitory transmissions of impulses. As a result, all impulses in the inchoate mass move around constantly, be they fresh and new impulses or background noises.

Many of such impulses are fresh, like encountering a new or renewed context of situation, but the majority of them are not fresh nor are they newly evoked; that is, they have been moving around in the brain in the form of memory as contents, which are impulses, for quite some time, ranging from childhood to a few years back or several days ago, as part of the “background noises”. On the basis of this dynamic nature of context of situation, we have postulated the many relationships between context of situation and context of culture, whereby context of culture can frequently merge with context of situation to serve as new contexts of situation in human behaviors. For this reason, the inchoate mass of impulses, of which the content plane is made up, is said to consist of three kinds of impulses:

• Fresh impulses;

• “Background noises”; and

• Not so fresh impulses

They together constitute thought in each dyadic partner’s brain, on account of catalytic mappings. It derives from the contexts of culture through experiences encountered from childhood, of which learning is a part, formal (like schooling) or informal (like playing).

In this sense, thought and context of culture in the primary brain functions overlap. That is, not all contexts of culture will become thought. However, the former — thought — pertains to language in the brain (oral, written, or sign), whereas the latter — context of culture — refers to both language in the brain and other non-language impulses which stand ready at anytime to serve as new contexts of situation in behaviors through experiences.

The next point that needs to be clarified is the notion of mind set. We define it as a set of impulses confined to each dyadic partner's primary brain functions, irrespective of whether it is related to thought or context of culture, without much interactive connection with the brain functions of neuromuscular coordination for production, albeit not none. However, the formation of a mind set in each dyadic partner's brain is developmental, its impulses being accumulated over a fairly long time, even from childhood.

In this sense, the genesis of a mind set rests more on the interactive connections of each dyadic partner's primary brain functions with the brain functions of neuromuscular coordination for reception, that is, more on passive than on active behaviors in the brain. Moreover, some individuals may maintain one mind set for a lifetime, because it is so strong that it can lead to destruction in two respects: (a) addiction and (b) aggression. The former refers to food addiction/rejection, iPad/iPhone manipulation, and nicotine/drug/alcohol addiction, whereas the latter refers to attacks, as may be evidenced by the many suicide bombers in relation to the 9/11 attacks on the World Trade Center and the Pentagon, or the insurgents in Iraq and Afghanistan. Such a strong mind set is often "nurtured" by a fanatic religious faith as a part of context of culture.

It is this characteristic nature of mind set that psychiatrists, when treating a patient, tend to look into in the patient's past to determine the source or cause of his/her psychiatric problems (or illness), especially the experiences during childhood, by asking the patient to talk. It is here that the patient's impulses in the mind set are connected interactively with his/her brain functions of neuromuscular coordination for production.

On the basis of the three terms just explicated, we now consider ideology as the hidden dimension of thought and/or context of culture, which must be expressed through behaviors, mostly verbal or otherwise, to reflect the dyadic partner's mind set. In other words, ideology is the active behavior of mind set, proper or not, the impulses of which may come from the dyadic partner's thought and/or context of culture, especially in relation to the social institutions of politics, religion, or the economy, to food consumption pertaining to habit forming, or even to theorizing in a discipline, academic or otherwise. In extreme cases, ideology can be deadly and destructive, nurtured by a disordered mind set over time. A good example was vividly displayed by the suicide bombers in the wars in Iraq and Afghanistan. A recent example (October 2, 2017) is the machine gun shooting by a man in Las Vegas, killing 59 people and injuring several hundred more.

In view of the aforementioned, we should emphasize that in the Content Plane (the primary brain functions), there are two kinds of impulses: (1) Motoric Impulses for production in connection with the brain functions of neuromuscular coordination and (2) Sensory Impulses from reception also in connection with the brain functions of neuromuscular coordination.

The former start with motoric neurons in various regions of the cerebrum — after the extrapyramidal looping — the brain stem, and the cerebellum and end in the peripheral body parts, while the latter start with sensory neurons from the body parts and end in various regions of the cerebrum via the cranial nerves in the brain stem, coordinated in part by the basal ganglia and the thalamus and in part by the cerebellum. However, between these two types of brain structures (i.e., neurons), there are association areas in the cerebrum where these two kinds of impulses communicate or interact by relaying from one gyrus to another in the neocortex as well as the paleocortex, because of function enhancement, thereby resulting in either meanings for production or meanings in reception. It is for this reason that we believe the meaning the hearer/viewer reconstructs is nine times out of ten not the same as the meaning the speaker/signer originally constructed.

These two types of meanings, however, are not exactly identical one-to-one, that is, one meaning in reception does not necessarily become the same meaning for production, because of the association areas and the coordinations of the neocortex and the paleocortex; these association areas and the coordinations also change, modify, and/or improve the meanings in reception when such meanings are ready to become the meanings for production.

These brain functions of change, modification, and improvement of meanings are the neurophysiological underpinning of our theoretical construct. That is, each individual is simultaneously a speaker and a hearer (for oral language) or a signer and a viewer (for sign language), and that when the individual as a dyadic partner utters an oral passage or gesticulates a sign passage to the other dyadic partner, the meanings the producer constructs in his/her brain are nine times out of ten NOT the same as the meanings the receiver reconstructs in his/her brain upon hearing speaker’s utterances or upon seeing signer’s gesticulations.

Brain functions in production: With these points in mind, we shall now illustrate how the meanings as impulses for production in the content plane are to be sent to the expression plane for catalytic mappings. Here, we take it for granted that meanings as impulses for production in the content plane are ready to go, through the extrapyramidal loop, for interactions with the corresponding sound images, also as impulses, in the expression plane, without taking into consideration meanings in reception. For the sake of explicitness, we call the meanings as impulses for production proto-meanings, and we call the corresponding sound images (in the case of oral language) and sign images (in the case of sign language) expression images. This assumption is needed, because meanings as impulses for production depend on meanings as impulses in reception, which come from two sources:

1. Impulses from instantaneous sensory inputs

2. A good portion of the background noises which are available on demand.

During the interactions of the two planes, that is, the interactions of proto-meanings and expression images, we postulate that two neurophysiological processes take place:

1. Catalytic Mappings

2. Binding in order to set the stage of language potentiation.

Catalytic mapping refers to mapping of each proto-meaning onto each corresponding expression (acoustic or gesture) image. Binding refers to the result of such catalytic mappings whereby the proto-meanings so mapped change to linguistic meanings and each corresponding expression image changes to a sound image or a sign image when binding takes place. The result of binding is the formation of a state of language potentiation in each speaker/signer’s brain.

The results of binding are a series of unions as utterances — in the form of clusters of impulses — and must be lined up in the extrapyramidal looping, getting ready in a state of language potentiation, whether or not the speaker intends to utter or the signer intends to gesticulate for production. If so, then the series of unions undergo the neurophysiological process of separation, so that only the sound images go through the motor cortex in figure 6 for exit. If not, the series of unions remain in the extrapyramidal looping for the neurophysiological continuation of thinking, which takes place initially (1) when forming each series of unions and additionally (2) when binding occurs to result in each state of language potentiation.
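Purely as a conceptual aid, the sequence just described (catalytic mapping, binding into a state of language potentiation, separation, and routing) may be sketched schematically in Python. This is a minimal sketch of the data flow only, not a neurophysiological model, and every name in it (ProtoMeaning, ExpressionImage, BoundUnion, route) is an illustrative assumption rather than a term of our theoretical constructs.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class ProtoMeaning:
        label: str                 # an impulse in the Content Plane

    @dataclass
    class ExpressionImage:
        label: str                 # a sound-image or sign-image in the Expression Plane
        modality: str              # "oral" or "sign"

    @dataclass
    class BoundUnion:              # one result of binding a proto-meaning to its image
        linguistic_meaning: ProtoMeaning
        image: ExpressionImage

    def catalytic_mapping(meanings: List[ProtoMeaning],
                          images: List[ExpressionImage]) -> List[BoundUnion]:
        # Map each proto-meaning onto its corresponding expression image; the
        # resulting series of unions constitutes a state of language potentiation.
        return [BoundUnion(m, i) for m, i in zip(meanings, images)]

    def separate_for_exit(potentiation: List[BoundUnion],
                          intends_to_express: bool) -> Optional[List[ExpressionImage]]:
        # Separation: only the images leave for exit; the linguistic meanings stay
        # in the brain. Without intent to express, the unions remain in the
        # extrapyramidal looping as continued thinking.
        if not intends_to_express:
            return None
        return [union.image for union in potentiation]

    def route(image: ExpressionImage) -> str:
        # Pathway assignment as described in the text (schematic only).
        if image.modality == "oral":
            return "corticobulbar pathway -> cranial nerves in the brain stem -> vocal apparatus"
        return "corticospinal pathway -> decussation -> spinal nerves -> upper limbs"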

Given the explication of figure 6 above, we should now point out the relationships between figure 6 and figure 7. The focus is the brain functions of Chunking, Sequencing, and Linearization in figure 7 which take place (1) inside figure 6 where catalytic mapping and binding must occur as well as (2) when the results of catalytic mapping – sound images as impulses – after separation from linguistic meanings come out of the Motor Cortex A to move to the internal capsule for production in the corona radiata of each hemisphere.

The former takes shape during language potentiation, whereas the latter after separation enter the corticobulbar pathway for oral language or the corticospinal pathways for sign language as well as non-language gestures. The corticobulbar pathways lead to the appropriate cranial nerves in the brain stem for vocalization — where phonetics comes in — while the corticospinal pathways after the internal capsule lead to the spinal nerves through decussations below the brain stem for gesticulation.

To illustrate the working of these brain functions in figure 7, let us cite an example of phonetics which is the study of vocal tract shape accompanied by actual gestures in production as shown below.

Take note that in these cartoons, both articulation and gesticulation from the speaker/signer are produced more or less together. Nobody would normally produce them as Gesticulation – Articulation or Articulation – Gesticulation. Even so, the verbal articulation and the manual gesticulation take different pathways, after the sound-images and sign-images are separated from their respective linguistic meanings in the state of language potentiation. Be that as it may, there are two important neurophysiological facts:

(1) The sound-images must undergo the brain functions shown in figure 7, as there is more than one union of linguistic meanings and sound-images for chunking, sequencing, and linearization, whereas the sign-images need not undergo them, although in the first cartoon of figure 8 both upper limbs are made use of. The reason is simple: there are two upper limbs but there is only one tongue. An English phonetician by the name of Paget, unaware of such distinctions, once referred to oral language as "sign language of the tongue".

(2) The sound-images take the corticobulbar pathways through the internal capsule and then they are activated by the appropriate cranial nerves in the brain stem for articulation without which there will be no physical sound produced. The sign images, on the other hand, take the corticospinal pathways, bypassing the cranial nerves to go directly through the internal capsule to reach the spinal cord after decussation of the impulses for each limb; that is, in the second cartoon, only one decussation is needed, whereas in the first cartoon, two decussations, one for each limb, are needed.

Then and only then can the speaker/signer's sound-images and sign-images be heard and seen by the hearer/viewer. However, the hearer/viewer must process what he has heard and seen in order to reconstruct the meanings the speaker/signer constructed in each cartoon.

Brain functions in reception: Hearer/viewer, on the other hand, must reconstruct the meanings of what he has heard and seen, a neurophysiological process that requires three steps: (1) recognition, (2) identification, and (3) coupling. The first step is to recognize that those impulses heard as sound waves come from human voices which are familiar to him. The second step is to identify such familiar sounds with the impulses in his Expression Plane. The third step is that, once so identified, hearer/viewer must couple those familiar sounds and signs as impulses with their appropriate impulses in his content plane in order to reconstruct his own linguistic meanings. It is here that we think function enhancement in reception takes place. A good example of function enhancement may be called inference, which refers to the "extra" meaning hearer/viewer reconstructs, not intended or expected by speaker/signer, as may be illustrated below.

At that time, the background noises from his context of culture as we have explained must be evoked. Otherwise, a different reconstruction of meaning might result. For instance, an American who does NOT speak Japanese may take the gesture in the first cartoon to reconstruct the meaning of “something big”, perhaps as an inference, but will not be able to associate the meaning in his reconstruction with the linguistic meanings of speaker’s utterances in Japanese, because he cannot identify the sounds heard in Japanese.

Likewise, the gesture in the second cartoon might be recognized by a Taiwanese who does not speak Japanese when he couples its impulses in his content plane. But he is likely to reconstruct the meaning of “something weak or inferior or small”, perhaps as an inference, rather than “a woman”, without being able to associate the meaning in his reconstruction with the linguistic meanings of speaker’s utterances in Japanese, because he likewise fails to identify the sounds heard in Japanese. Even if the Taiwanese knows Japanese, he may reconstruct the meaning of the gesture correctly as “woman” but may in addition infer that the Japanese signer/speaker is cracking a “joke” as an inference.
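The three receptive steps just described (recognition, identification, and coupling), together with inference as a case of function enhancement, can likewise be sketched schematically in Python. Again, this is a minimal illustrative sketch of the control flow only; the dictionaries standing in for the Expression Plane and the Content Plane are assumptions made solely for the example.

    from typing import Dict, List, Optional

    def recognize(signal: str, familiar_forms: List[str]) -> bool:
        # Step 1: recognize the incoming sounds or gestures as familiar human forms.
        return signal in familiar_forms

    def identify(signal: str, expression_plane: Dict[str, str]) -> Optional[str]:
        # Step 2: identify the familiar form with an image in the Expression Plane.
        return expression_plane.get(signal)

    def couple(image: str, content_plane: Dict[str, str]) -> Optional[str]:
        # Step 3: couple the image with impulses in the Content Plane to reconstruct
        # a linguistic meaning; the reconstruction may carry extra, unintended
        # meaning (inference, a case of function enhancement).
        return content_plane.get(image)

    def receive(signal: str,
                familiar_forms: List[str],
                expression_plane: Dict[str, str],
                content_plane: Dict[str, str]) -> Optional[str]:
        # If recognition or identification fails (e.g., a hearer who does not know
        # Japanese), no linguistic meaning is reconstructed, though an inference
        # such as "something big" may still arise from the gesture alone.
        if not recognize(signal, familiar_forms):
            return None
        image = identify(signal, expression_plane)
        return couple(image, content_plane) if image is not None else None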

What we want to emphasize is that each individual’s production depends very much on his/her reception spontaneously from varying contexts of situation or over time from accumulated contexts of culture.

We may even speculate that neurons of the five lobes interact, in connection with the Limbic System and the Papez Circuit. Here is a vivid example of behaviors pertaining to such interactions:

In the Taipei subway, each car has signs warning passengers not to eat, drink, or smoke in the subway. Violations are fined NT$7,500. However, occasionally some passengers ignore the warning or are unaware of it and take out a bottle of water to drink. The first author encountered four such violations, by two women and two men, and warned the violators of the consequences of the penalty, with different reactions as follows:

The two female violators each reacted sharply: one said that she was thirsty, and the other said she had just drunk some water, both with angry facial expressions, as if they could do as they pleased. The first author told each of them immediately, on two different occasions, that there were a hundred or so passengers in the car and only she could not abide by the warning. One of them even challenged the first author to report the incident to the conductor. However, upon realizing that there was a sign and seeing the warning, she quickly alighted at the next stop.

The two men reacted differently: one of them remained quiet and exited at the next stop; the other turned around and thanked Peng for pointing it out, because he was not aware of the warning, as he had just returned to Taiwan.

Conclusion

Are there regional differences of brain functions, then?

We now come, by way of conclusion, to the question of whether there are regional differences of brain functions or not, which is the crux of this article in the light of language in the brain as behavior as explicated above.

It should be clear by now that body parts must be involved for the production or reception of behaviors, such as language or music in the brain. Such being the case, the involvements of body parts are not at all undertaken by the cerebrum (or telencephalon) alone; rather, the brain stem and other inner-most brain structures must be deeply involved to accomplish the results.

Ostensibly, the reader may think that there are regional differences of brain functions confined to the cerebrum alone. Such an assumption, albeit incorrect, has been kept in the medical literature ever since Roger Sperry and his followers, like Gazzaniga, proclaimed their findings, which conceal several misconceptions. These misconceptions will become clear in the following explications.

Oral language

(1) Like most neuropsychologists, they presuppose, mistakenly, that language in the brain is not behavior but a kind of cognitive activity differing from "behaviors", and that it can therefore be tested or experimented upon, as was done by Gazzaniga, to show that it is lateralized to one hemisphere and hence that there are regional differences of brain functions in the cerebrum alone.

(2) They further assume that language as such may be regarded as some kind of brain function constituting a mysterious chunk, lateralized to one hemisphere, thereby ignoring the processes involved in the production and reception of language as behavior (and later, of music as well), without awareness of the asymmetry of brain functions in language for production and reception.

(3) To many neuropsychologists and recently cognitive linguists qua psychologists, however, language is a kind of cognitive activity, subsuming memory, learning, and thinking, but differing from behavior and hence can be lateralized in one hemisphere or even localized.

(4) To some linguists, e.g., Sydney Lamb, who advocates “pathways of the brain” a la MT, and Halliday, who advocates a “Grammar Brain”, regional differences of brain functions may not even be a major concern, because they have no idea of how brain functions work or what pathways of the brain are like.

Our view, on the other hand, is that language in the brain is behavior, which is memory-governed, meaning-centered, and multifaceted. Because sign language is a de facto language, language cannot be lateralized to one hemisphere controlled by a center in the brain, nor can it be localized to one region, any more than other brain functions, such as music, can be lateralized to the right hemisphere.

Put differently, their view on regional differences of brain functions is incorrect because language in the brain is denied its behavioral nature in production and reception on account of the asymmetry of brain functions, as we have explicated above. As behavior, language makes use of different body parts, involving the brain stem on account of its cranial nerves, as well as the basal ganglia and their associated inner-most structures, in order to enable each individual, as in any other non-language behavior, to make proper adjustments to the internal and external environments.

Given these premises, however, there are regional differences of brain functions as components based on their asymmetry, which may be listed as follows: such components of language in the brain are shared (1) for the most part by the brain stem, (2) by the inner-most brain structures, and (3) only in part by the cerebrum, especially in connection with function enhancement as illustrated above, when language in the brain is taken seriously as behavior. The crux of this view is the use of body parts for asymmetrical production and reception, especially in close relation to all the cranial nerves, so that each individual can make proper adjustments to the internal and external environments.

Such behaviors can be vividly observed if the reader watches any TV news report in which the newscaster talks or a guest is interviewed in dyadic interaction. A good example was displayed by Hillary Clinton on Sunday, October 15, 2017, when she was interviewed by Fareed Zakaria on his TV show.

She talked in response to his questions, moving not only her mouth but also her head and neck to punctuate her speech, her shoulders and upper limbs asymmetrically and, most interestingly, her two eyelids and eyebrows up and down, as well as her eyeballs. These body movements are actualized by the nuclei of her cranial nerves in her brain stem, together with her second cranial nerves (the optic nerves), which enabled her to see Mr. Fareed Zakaria. At the same time, her two eighth cranial nerves in her brain stem enabled her to hear the sounds and reconstruct the meanings of what he was saying to her.

In so doing, the regional differences of her brain functions rest not in her cerebrum, dichotomized between the left and the right hemispheres or among cortical regions of each hemisphere. Rather, the regional differences of her brain functions, in the case of behaviors both verbal and nonverbal, account for the components of asymmetrical production and reception in behavior, and it is such regional differences that accomplish her interview in the TV show. Put differently, her brain functions arising from her brain stem, in close collaboration with her basal ganglia, the other inner-most structures, and the cerebellum, are, in the case of her oral language, telling her cerebrum what to do. Why? The cerebrum is not the life-supporting organ, as is evidenced by the “brainless baby” reported by The Japan Times, briefly mentioned earlier, and additionally by the “brainless” four-year-old boy in Cambodia reported by CNN, as we have mentioned further above.

Sign language

In the case of sign language, there are also regional differences of brain functions, which are even more asymmetrical. However, the asymmetry of brain functions in sign language lies not so much in the brain stem, for production in relation to the vocal tract shape and for reception, but rather in the corona radiata contiguous to the internal capsule for the cortico-spinal pathways after the extrapyramidal looping, in close functional relation with the cerebellum.

We should hasten to add, however, that the production of sign language is not confined to the two upper limbs: it also involves certain cranial nerves in the brain stem for facial expression, including tongue movements. For instance, tongue display is part of certain signs in both American Sign Language and Japanese Sign Language. The first author spent two years learning JSL but was often criticized by Japanese sign language teachers and deaf signers because his signing lacked facial expressions; he therefore never qualified to receive a certificate for JSL.

In reception, moreover, the main cranial nerves for sign language are the first and second cranial nerves plus the appropriate cranial nerves in the brain stem associated with the second cranial nerves, such as the third, fourth, and sixth cranial nerves. Can we say now that the brain stem in the case of sign language is also telling the cerebrum what to do? Our answer is an emphatic “Yes”.

Written language

In written language, the asymmetry of brain functions is even more vivid in production. Just observe how a left-hander writes, be it in English, Chinese, or Japanese. We would like to assume that the Roman alphabet for Western languages was most likely invented by right-handers; Chinese characters and Japanese kana were likewise invented by right-handers. Hence, a great deal of asymmetry of brain functions takes place for written language in production, be it undertaken by right- or left-handers.

In reception, however, we presume that there is no difference between right-handers and left-handers, although asymmetry of brain functions equally exists in both.

Music

By now we trust the reader has a fairly good understanding of what language in the brain is like and whether or not there are regional differences of brain functions in behaviors. The answer is this: the regional differences of brain functions are not at all exclusive to the cerebrum, because brain functions for behaviors are not equivalent to the higher cortical (or brain) functions that have heretofore been assumed in the literature. The truth of the matter is that the cerebrum shares only a rather limited portion of brain functions in relation to behaviors for proper adjustments; it accepts impulses from the brain stem and is constantly being told by the brain stem what to do, as each individual produces and/or receives behaviors asymmetrically.

How about music in the brain? Is it lateralized to the right hemisphere? The answer is an emphatic NO. When music includes singing, all cranial nerves in the brain stem must be activated, just as in oral language. When sight-reading is required, such as singing a hymn in a church service, the two optic nerves must also be involved.

In the case of instrumental music, playing any musical instrument, be it a string, woodwind, or brass instrument, involves not only the brain stem but also the upper and lower limbs asymmetrically in performing a piece of music. If sight-reading is required, as in an orchestra, the conductor and every player in the orchestra, even the percussionists, must employ the cranial nerves as well as the spinal nerves to perform, which involves both production and reception asymmetrically.

Even so, the task of working out regional differences in the brain remains large and requires a wide range of cooperative efforts to reveal the whole truth. The reason is that there remains the other half of language, language in society, which many sociolinguists have already started to tackle. We hope that members of JASFL (Japan Association of Systemic Functional Linguistics) will join forces in this endeavor as a result of the publication of this article.

  1. Chomsky N. Syntactic structures. The Hague: Mouton. 1957. https://goo.gl/ENYfKG
  2. Peng FCC. Language in the brain. London: Continuum. 2005.
  3. Peng FCC. Some thoughts on the origin and development of language: a neurolinguistic approach. In: Language, science and culture. Piotra Lobacz, Piotr Nowak and Wladyslaw Zabrocki (Editors). Poznan: Wydawnictwo Naukowe UAM. 2009.
  4. Peng FCC. Dementia in epilepsy: a clinical contribution to the metabesity of epileptology, geriatrics and gerontology. EC Neurology. 2017; 8: 110-117. https://goo.gl/PDRzMy
  5. Peng FCC. Brain plasticity and cortical dysplasia in epilepsy: a common misconception for epilepsy surgery in children. EC Neurology. 2017; 9: 39-46. https://goo.gl/9954Gh
  6. Thibault P. Re-reading Saussure: the dynamics of signs in social life. London and New York: Routledge. 1997. https://goo.gl/CtEbuh