Monday, February 25, 2013

The Origins of ‘Cognition and Reality’

Birth of a Blog

Last Halloween a senior editor of a well-known psychology magazine contacted me. Would I be interested in becoming one of their bloggers? They liked my Twitter stream, and knew from it that I had a new book coming out.  They suggested that there would be some mutual benefit to my providing them with some online content.  That is, a blog could help sell books!

I gave the request some thought, replied that I was interested, provided a vague description of the kind of content that I might write about, and asked a few basic questions concerning the blogging process.  After a couple of days of further reflection, I concluded a) that my description was extremely vague (even to me), and b) that I really had no idea about what I was getting myself involved in.

My solution was to scan through a few of their blogs to get some idea of audience and length, and to try my hand at writing an entry.  The result, published in this “Cognition and Reality” blog, was something that did not take long to put together, and I felt that it was a better indication of my blogging intent than was my original reply.  I sent it on to the editor as an example of potential contributions.

They did not respond.

One consequence of my second e-mail was that someone from the magazine went to my home page and looked at my C.V.  (Yes, I admit that I track that kind of activity.)  That, combined with my two replies to their original request, must have given them second thoughts.  Was my work too flaky?  Was it not flaky enough?  I may never find out.

From a great deal of experience with my books, I know that when publishers stop contacting you, they are no longer interested in your work.  I thought that the blog situation was a little different, though.  After all, the editor contacted me first!  So, after waiting a couple of weeks I e-mailed them again, wondering if they were still interested.  They replied that they were, apologized, said that they were busy producing the latest issue of their magazine, and promised that I would hear back within two weeks.

Of course, they never got back to me.

I waited a month, and sent an admittedly snarky reminder about their promise to reconnect.  I noted that it was an interesting recruiting strategy to contact potential contributors, and then proceed to ignore them.  They probably read that as me expressing a declining interest in blogging for them, but, of course, that is pure conjecture.

I still have not heard back from them, and I likely never will.

I certainly was not interested in blogging for this magazine anymore, but their initial request did pique my interest in producing a blog.  I thought that it would provide me an opportunity to keep my writing habit sharp.  I also was interested in the possibilities that a blog offered as a non-traditional medium for communicating about the foundations of cognitive science, and for explaining why the kind of research that I do might matter beyond the ‘ivory tower’.  If you are reading this entry, then you already know to what the original request to blog has led.

Why ‘Cognition and Reality’?

My research concerns very abstract investigations of foundational issues in cognitive science.  The title of this blog is a reminder to me to think continually about the implications my research has for the real world.  The title also reflects my growing interest in embodied cognitive science, which generates theories about cognition by taking an agent’s mind, body, and world all into consideration at the same time.  Finally, the blog title pays homage to Ulric Neisser’s book of the same title; Neisser’s book is a pioneering classic of embodied cognitive science that I misread as a graduate student, and rediscovered much more recently.

Becoming Bayesian: Three Books To Read

Intuitively speaking, being Bayesian is easy.  Prognosticator Nate Silver suggests that it requires following two general rules:  First, think probabilistically – evaluate different outcomes by considering their likelihoods of occurring.  Second, adapt to new information – as new data arrives, update your probabilities.
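These two rules can be made concrete with a little arithmetic.  The sketch below is my own toy illustration (the scenario and numbers are invented, not Silver's): a belief about a possibly biased coin is revised with Bayes' rule after every flip.

```python
# A minimal sketch of the two rules: hold a probability for a hypothesis,
# then update it with Bayes' rule as each new datum arrives.
# The scenario and numbers are invented for illustration.

def update(prior, p_data_if_true, p_data_if_false):
    """One Bayesian update: P(H | data) from P(H) and the two likelihoods."""
    numerator = p_data_if_true * prior
    evidence = numerator + p_data_if_false * (1.0 - prior)
    return numerator / evidence

# Hypothesis H: "this coin is biased towards heads (75% heads)".
# Alternative: the coin is fair (50% heads).
belief = 0.5  # rule 1: start with an explicit probability
for flip in ["H", "H", "T", "H", "H"]:
    if flip == "H":
        belief = update(belief, 0.75, 0.50)
    else:
        belief = update(belief, 0.25, 0.50)
    print(f"after {flip}: P(biased) = {belief:.3f}")  # rule 2: adapt as data arrive
```

After four heads and one tail, the belief in the biased-coin hypothesis has climbed from 0.5 to about 0.72; a run of tails would push it back down just as mechanically.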

Academically speaking, being Bayesian is more difficult.  It requires a solid mathematical understanding of Bayesian inference.  After all, an academic Bayesian must perform such inference, or derive proofs concerning it.  Fortunately, Bayesians have produced a large and distinguished literature.  Unfortunately, statisticians, mathematicians and physicists have written much of it, intending it (reasonably) for consumption by other statisticians, mathematicians and physicists.  For a cognitive scientist like me this literature is intimidatingly difficult.  How does a cognitive scientist become Bayesian?

Recent research developments in my lab require that I become Bayesian, and that I quickly train my students to become Bayesian as well.  After extensively searching the literature, and buying a number of classic texts, I believe that the fastest way for a cognitive scientist to become Bayesian is to read three key books.  More reading of the technical literature is required for a cognitive scientist to be Bayesian, but this additional reading will be much more rewarding if one begins as follows:

First, get excited about becoming Bayesian.  To do so, read The Theory That Would Not Die by Sharon Bertsch McGrayne.  (Full references for books are provided at the end of this blog entry.)

McGrayne’s book gives the history of Bayesian inference, beginning with Presbyterian minister Thomas Bayes’ 18th century rule for calculating, and updating, conditional probabilities.  McGrayne traces the evolution of this rule to its modern usage, providing a dizzying array of case studies.  She describes the role of Bayes’ rule in such real-world problems as cracking the German WWII Enigma code, searching for missing submarines and atomic bombs, and evaluating medical diagnoses.  Cognitive scientists may be particularly interested in her chapter on Alan Turing.  She also describes the continuing controversy surrounding Bayesian inference – Bayesianism is repeatedly abandoned because of concerns about some of its core assumptions (such as its subjective definition of probability) or because of the difficult calculations it entails when applied to real-world problems.  Experimentally minded cognitive scientists will be interested in the biographical portraits that McGrayne provides of famous statisticians like Fisher, Pearson, and Savage.  McGrayne also chronicles the continual re-adoption and re-discovery of Bayes’ rule, because of its practical significance and because of the advent of new computational approaches.  It is a theory that refuses to die – and that makes it very enticing!
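Medical diagnosis is a good place to see why Bayes' rule earned its practical reputation.  The numbers below are hypothetical, chosen by me only to illustrate the kind of diagnostic calculation involved (they are not taken from McGrayne's book):

```python
# Bayes' rule applied to a screening test for a rare condition.
# All numbers are hypothetical, chosen to show the base-rate effect.

def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' rule."""
    true_pos = sensitivity * prior          # P(+ | disease) * P(disease)
    false_pos = false_positive_rate * (1.0 - prior)
    return true_pos / (true_pos + false_pos)

# A condition affecting 1 in 1000 people, and a fairly accurate test:
p = posterior(prior=0.001, sensitivity=0.99, false_positive_rate=0.05)
print(f"P(disease | positive) = {p:.3f}")
```

Even with a 99% sensitive test, the posterior probability of disease given a positive result is under 2%, because true positives from the tiny diseased population are swamped by false positives from the huge healthy one.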

Second, develop a working understanding of Bayesian inference.  To do so, read Doing Bayesian Data Analysis by John Kruschke.

McGrayne’s book gets one excited about Bayes, but provides very few technical details about what Bayesian inference involves.  Kruschke’s book solves this problem, by beginning with some basic elements of probability theory, then developing Bayes’ rule, introducing Bayesian inference for a coin-flipping scenario, and moving on to Bayesian alternatives for a number of core non-Bayesian statistical tasks.  Kruschke modestly notes that his book “is definitely not a mathematical statistics textbook in that it does not emphasize theorem proving, and any mathematical statistician would be totally bummed at the informality, dude.” However, this lack of formality is the book’s strength – it is informal, but informative.  Kruschke does not assume any statistical background at all, and builds up an understanding of Bayesian inference from first principles.  Furthermore, he replaces theorem proving with working computer programs, for which he provides generous comments.  The result: hands-on experience with core Bayesian concepts, a deep practical understanding of what Bayesian inference is all about, and an ability to actually apply Bayesian techniques to real data.  Kruschke’s book lays the foundation – and provides the courage – for moving on to classic, formal treatments of Bayesian inference.
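To give a flavour of what that coin-flipping scenario involves, here is a bare-bones grid approximation of the posterior over a coin's bias.  Kruschke's own code is in R; this Python sketch is my own toy reconstruction, assuming ten flips that produced seven heads:

```python
# Grid approximation of a posterior over a coin's bias theta.
# This is my own minimal reconstruction of the style of inference
# Kruschke teaches, not code from his book (which is in R).

n_heads, n_tails = 7, 3                        # assumed data: 10 flips

grid = [i / 100 for i in range(101)]           # candidate values of theta
prior = [1.0 / len(grid)] * len(grid)          # uniform prior over the grid
likelihood = [t**n_heads * (1 - t)**n_tails for t in grid]

# Bayes' rule: posterior is proportional to prior times likelihood.
unnorm = [p * l for p, l in zip(prior, likelihood)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

mode = grid[max(range(len(grid)), key=lambda i: posterior[i])]
print(f"posterior mode: theta = {mode:.2f}")   # near 7/10 with a flat prior
```

The payoff of the Bayesian move is that the result is a whole distribution over the bias, not a single point estimate; a sceptical prior would simply replace the uniform list and flow through the same two lines of arithmetic.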

Third, reflect on why being Bayesian might be an important foundation for cognitive science.  To do so, read Bayesian Rationality by Mike Oaksford and Nick Chater.

Classical cognitive science has evolved from logicism, which views thinking as carrying out operations in some kind of mental logic, a logic that ultimately defines judgments as being true or false.  Oaksford and Chater argue that such logicism is a mistake.  They instead argue that a better formalism for classical cognitive science is probability theory: “Logic-based approaches to cognition appeared to be viable for mathematical theorem proving and simple formal game playing, but seemed fundamentally ill-suited to representation and reasoning with real-world, common sense knowledge.”  Their book begins by considering logicism in order to contrast it with the Bayesian approach that they favor.  They develop their Bayesian move – which they name ‘the probabilistic turn’ – over several early chapters.  They then proceed to consider a number of core paradigms in the study of deductive (i.e. logicist) reasoning from the Bayesian perspective, making the case that the probabilistic approach is more appropriate.  This book is stimulating, because it links Bayesian inference to foundational assumptions in cognitive science – a link that is now leading to a lively and growing debate in the literature.  It also provides a satisfying tie-in to the Bayesian view of probability in statistics – which subjectively and controversially defines probability in terms of degree of confidence in beliefs.

To recap: if you are a cognitive scientist who is considering becoming Bayesian, then I recommend that you read these three books.  To be Bayesian you will need to read more – key authors are Cox, Jaynes, Jeffreys, and Savage – but rest assured that you will be well prepared to do so.

The Three Key Books To Read:

Below are the bibliographic details for the three books that I recommend:

Kruschke, J. K. (2011). Doing Bayesian Data Analysis: A Tutorial with R and BUGS. Burlington, MA: Academic Press.

McGrayne, S. B. (2011). The Theory That Would Not Die. New Haven, CT: Yale University Press.

Oaksford, M., & Chater, N. (2007). Bayesian Rationality: The Probabilistic Approach to Human Reasoning. Oxford: Oxford University Press.

Monday, February 18, 2013

Where is the mind, and why does its location matter?

According to the late George Miller, cognitive science was born on September 11, 1956 at a symposium organized by MIT’s Special Interest Group on Information Theory. Miller left this interdisciplinary conference excited by his sense that experimental psychologists, linguists, and computer scientists faced similar problems; he believed that their search for solutions to these problems exhibited a deep theoretical unity. 

Where did this unity come from? The pioneers of cognitive science confronted issues rooted in the then-new science of information processing. A post-war invention -- the electronic computer -- provided some fresh suggestions for solving these problems. Indeed, these pioneers assumed that cognition is computation; to them computation meant rule-governed symbol manipulation, exactly the type of processing brought to life by digital computers. Researchers who adopt this view are classical cognitive scientists. Classical cognitive scientists found unity because in the 1950s there was only one meaning to the term ‘information processing’. 

By the mid-1980s, the unity in cognitive science that inspired Miller was fragmenting. First, the promise of classical cognitive science had stalled. Classical cognitive science produced many promising, small-scale computer simulations of reasoning and language, but general-purpose machine intelligence still seemed a distant dream. Second, and more importantly, new ideas about information processing arose. Connectionist cognitive scientists championed artificial neural networks. Embodied cognitive scientists endorsed physically embodied agents -- behavior-based robots. In today’s journals, one finds extensive debates about the relative merits of classical, connectionist, and embodied cognitive science. 

Embodied cognitive science is the most recent reaction to the classical approach, and in many respects its ideas are the most revolutionary and dangerous. Embodied cognitive scientists describe classical models in terms of what is known as the classical sandwich, defining cognition in terms of a cycle of sense-think-act processing. That is, classical models do not permit direct connections between sensing and acting. Instead, the environment provides raw information to thinking processes that build an internal model of the world and use this model to plan appropriate action. Action only takes place in a classical model after this extensive modeling and planning and thinking have taken place. In a classical model, the middle of the sandwich -- the thinking -- really is the meat; planning is taken to be the ultimate goal of cognitive processing. 

Embodied cognitive science challenges the importance of planning, and defies the need for the classical sandwich. Rodney Brooks famously asked why we need to model the world when we can (through sensing, or ‘situation’) use the world as its own model. He built behavior-based robots whose sense-act reflexes could quickly react to the sensed world without modeling and reasoning about it. In the embodied revolution the purpose of cognition is to act, not to plan. 
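The contrast can be made concrete with a toy sketch (my own illustration, not Brooks' actual robot code): a behavior-based reflex wires sensing directly to action, with no intervening world model or planning step.

```python
# A deliberately tiny illustration of a sense-act reflex, in the spirit
# of behavior-based robotics. There is no internal model of the world
# and no planning stage: sensed input maps directly onto an action.
# This is my own toy sketch, not code from any actual robot.

def behavior_based_step(left_light: float, right_light: float) -> str:
    """Steer toward the brighter side, reacting directly to the sensors."""
    if left_light > right_light:
        return "turn left"
    if right_light > left_light:
        return "turn right"
    return "go straight"

# The robot responds to the world as sensed, moment to moment:
print(behavior_based_step(0.9, 0.2))  # turn left
print(behavior_based_step(0.1, 0.5))  # turn right
```

A classical sense-think-act agent would instead insert a modeling and planning stage between the sensor readings and the returned action; the embodied point is that, for many behaviors, that middle stage is unnecessary.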

Behavior-based robots do not merely sense their world; being physically embodied they can act upon it and change it. Extending this idea to human cognition, embodied cognitive science has renewed scholarly interest in how humans use the external world to support or scaffold cognition. For instance, an enormous amount of our memory is deliberately recorded outside our brains, in books, in electronic devices, in the World Wide Web. Our memory may not require us to think, by retrieving internally stored information. Instead it may require us to act on the world, storing (and later finding) information externally. 

If cognition is sense-acting, if it is scaffolded by the external world, then it is reasonable to wonder whether the world itself is literally part of cognition or of the mind. Philosophers Andy Clark and David Chalmers introduced the extended mind hypothesis: the proposal that the external world is literally part of the mind. According to the extended mind hypothesis, the boundary of the mind is not the skull. Not surprisingly, this hypothesis is controversial; its supporters and detractors have published many articles and books debating whether the mind is extended. 

The debate about the extended mind hypothesis is important to academic cognitive science. If cognitive science is the scientific study of cognition and the mind, then it is critical to know exactly where the entity of interest resides! However, might the issue of whether the mind is extended or not be only of interest to the scholarly residents of the ivory tower? 

Importantly, it is not. 

For instance, I often see news articles and letters to newspaper editors critical of current educational practices. A frequent theme is the need to return to “common sense education”.  For instance, there are frequent appeals for educators to instill better thinking in students; a “common sense education” should produce students who can use their excellent memories to learn important concepts. What prevents this from occurring? A frequent claim is that technology is the culprit -- because of calculators, students can’t perform mental arithmetic; because of word processors with spellcheckers, students can’t spell; because of smart phones and social media, students are distracted, and so on. 

Such commonplace criticisms of education seem strongly related to classical cognitive science, and emphasize the importance of internal modeling, planning and reasoning. But we have seen that cognitive science is now seriously considering alternative notions of mentality. If embodied cognitive science’s extended mind hypothesis is true, then perhaps we should seriously pursue an educational system that promotes students’ use of technology, and which develops their skills in manipulating the world -- and its technology -- to scaffold their cognition. If cognition is acting, and not thinking or planning, then the educational implications of the extended mind hypothesis are staggering.

Sunday, February 17, 2013

How to Memorize π to 100 Decimal Places

One of my undergraduate courses introduces the foundations of cognitive science.  It first portrays three different approaches to cognition (classical, connectionist, embodied), emphasizing their differences and their mutual antagonism.  It then explores how to unify these three views. 

I end the course with a lecture on memory, because this topic is one that promotes theoretical unification.  Mnemonic techniques for converting numbers into words are clearly in the realm of classical cognitive scientists interested in symbol manipulation.  Techniques that assert associations between memories, so that a recalled item cues the recall of the next item, have definite connectionist attributes.  Techniques that promote the use of the real world, such as using well-known locations to store to-be-remembered material, evoke embodied cognitive science’s interest in cognitive scaffolding.   

I set the stage for my lecture by performing a feat of memory.  Recently I recalled π to 100 decimal places under the watchful eyes of my students. (As I wrote the digits across the board in front of the class, they used the internet to track my progress – without my asking!)   This performance is not particularly notable – world record holder Lu Chao has recited π to 67,890 digits – but it is grand enough for me, and it possibly impresses my students. 

Surprisingly, anyone interested in performing this memory feat who peruses the internet for how-to advice is often steered in inefficient and laborious directions.  Some websites recommend learning π by rote repetition:  “Repeat 4 digits out loud at least 20 times,” says one.  Other sites recommend looking for numerical patterns within a block of digits, and then for other patterns that link one block to the next.  Others provide piems – poems that represent the desired digits – for memorization and later decoding.  A handful of sites refer to classic mnemonics for converting digits to words, but caution the reader about difficulties: the words have to be linked together into a memorable story in order to recite π’s digits in the right order, which is quite challenging because the required words do not typically come in meaningful patterns! 

My own technique for this memory feat seems more efficient.  I employ two different mnemonic techniques in concert.  First, I use a standard technique for converting digits into words.  However, I deal with the sequencing of the words using a second technique, the ancient method of loci, in which I move from location to location in a ‘memory palace’, placing to-be-remembered images at each location.  Several of my ‘π words’ are chunked together in a single image.  My memory palace is a very familiar place – my house – and I can memorize 100 digits of π using only 16 different images.  To perform my feat, I simply walk through my memory palace, retrieving the stored images in the necessary order.  Each image recalls a phrase; I (knowing the Major method described below) translate the phrase into numbers; I then write out the numbers in front of the class. 

Of course, as this memory feat is part of a lecture, I am compelled to explain my methods to my students.  I begin by telling them the following story, which illustrates my use of the method of loci.  If you read the story, concentrating on creating each image (placing it in your own memory palace), then you will be able to perform this memory feat as well: 

I begin at the front sidewalk to my house. I look at the sidewalk, covered in snow, and place upon it an image of a running car motor resting on top of a red tulip; the tulip vibrates from the action of the engine. This image represents the concept Motor-Tulip. 

Next, I see the top of the stairs that lead to my front door. I imagine seeing NBA star Steve Nash limping down the stairs as he leaves my house. I place that image on the stairs, standing for the sports headline Nash! Lame! Leave! 

I open the front door, step into the vestibule. There I imagine an unknown person throwing a pickaxe (instead of a dart) to choose one of a number of nametags stuck to the wall. Each tag holds the name of a poem. This dynamic image conveys the meaning Pick-Poem-Name. 

Stepping beyond the vestibule, I look into the walk-in closet. I imagine seeing an automatic conveyor of the type used by dry cleaners; many fur coats hang from this device. My image activates, and the coats spin round and round the closet. This is clearly a Fur-Changer. 

Moving along, I look into the main floor bathroom. I add to this room an image of my mother in an incoherent rage, so angry that volumes of foam spew from her mouth. Yes, I see Mom-Foaming. 

The next room that I reach is the kitchen. On the counter, I see a large, overflowing bowl of dog kibble. I imagine making each of these kibbles even smaller by cutting them into pieces using a large, sharp knife. Setting this image into motion encodes Kibbles-Knife. 

Moving towards the end of the kitchen, I peer into the back entry and notice the upright freezer. I imagine opening the freezer, gazing inside; in my mind’s eye, I see an aluminum bucket being flipped up and down on a hot frying pan. To me, this image means Fried-Bucket. 

Beside the back entry is a pantry connected to the kitchen. On the small counter within it, I imagine seeing an old, torn, dirty roadmap; attached to the map is a lit fuse with its bright spark moving dangerously close to the end. This is a strange device, perhaps invented by one of my cats; it is a Shabby-Map-Bomb. 

I leave the kitchen, and enter the dining room. I imagine that the dining room table supports a large cooler of ice. On this ice are many seal fins, eerily flipping up and down on their own. Tonight, dinner appears to be Cooled-Seal-Fins. 

I pass from the dining room into the living room. There, in my imagination, an unknown woman works, taking a black hockey puck and laboriously winding colorful ribbons around it. I do not know who this woman is, but I do know her role in the household: She is the Puck-Wrapper. 

I turn away from the living room, and begin to climb the stairs to the second floor. At the first stairs landing I imagine seeing a person with a very large earlobe from which dangles a strange earring. The earring consists of several different gnomes and a cup of coffee. To me the image represents Earlobe-Gnomes-Coffee. 

I climb the second flight of stairs, turn left and gaze into the laundry room. There I imagine a number of my former teachers all standing around using cloths to polish the nose of a large fish. Thus in the laundry room I place the concept Teachers-Shine-Fish-Nose. 

Beside the laundry room is a linen closet. I open it and imagine seeing a beehive from which swim a number of long, lanky pipefish. Apparently, the linen closet contains a Hive o’ Pipefish. 

The next room that I enter is a second floor office. There I imagine seeing Knives Chau, a character from the movie Scott Pilgrim vs. the World. She is using a hammer to smash all my old LPs into bits. The concept in this room is Knives-Hammer-Vinyl. 

The office leads into the master bedroom. I imagine seeing on the bed a desert island surrounded by ocean. The island is small, but covered with a mass of moving ticks. The bed contains Marooned-Ticks. 

My final stop is the bathroom attached to the master bedroom. On the bathroom floor, I imagine seeing a single shoe that has the appearance of a cob of corn. This is my missing Shoe-Cob. 

I designed the images in the story above to be novel, bizarre, and dynamic in order to make them easy to remember. In fact, after selecting the images, I learned each of them as well as their position in my memory palace after only a couple of imaginary walk-throughs. 

I selected the words encoded by the images with a different mnemonic technique in mind. This technique was the Major method, for converting digits into consonant sounds. For instance, the consonant sound n is associated with the digit 2 because n has two downward strokes; the consonant sound m is associated with the digit 3 because m has three downward strokes. The table below provides the full Major method; I highly recommend reading about it in Lorayne, H. and J. Lucas (1974). The Memory Book. New York, Stein and Day. 

To use the Major method to remember a long number, take the number, convert its digits into consonant sounds, and then add vowels to make meaningful words and expressions. To recall the number, recall the expression – then work backwards to convert its consonant sounds back into digits. 
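The decoding direction can be made concrete with a rough converter.  The sketch below is my own illustration, not from Lorayne and Lucas; it works on spelling rather than on consonant sounds, so it only approximates the real method, but it handles simple phrases like Motor-Tulip from my story above:

```python
# A crude sketch of Major-method decoding. The real method maps consonant
# SOUNDS to digits, so spelling-based code like this is only an
# approximation (silent letters and soft/hard g will trip it up),
# but it handles simple phrases correctly.

DIGITS = {
    "s": 0, "z": 0,
    "t": 1, "d": 1,
    "n": 2,
    "m": 3,
    "r": 4,
    "l": 5,
    "j": 6, "sh": 6, "ch": 6,
    "k": 7, "q": 7, "g": 7,
    "f": 8, "v": 8,
    "p": 9, "b": 9,
}

def decode(phrase: str) -> str:
    """Convert a mnemonic phrase back into its digit string."""
    phrase = phrase.lower()
    out, i = "", 0
    while i < len(phrase):
        two = phrase[i:i + 2]
        if two in DIGITS:                  # digraphs like 'sh' and 'ch'
            out += str(DIGITS[two])
            i += 2
        elif phrase[i] in DIGITS:
            out += str(DIGITS[phrase[i]])
            i += 1
        else:                              # vowels, h, w, spaces: no digit
            i += 1
    return out

print(decode("Motor Tulip"))   # 314159 -- the start of pi
```

To encode a number one simply runs this mapping in reverse by hand: pick the consonant sounds for each digit, then pad them with vowels until a vivid word appears.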

Digit   Consonant Sounds
0       s, z
1       t, d
2       n
3       m
4       r
5       l
6       sh, ch, g (soft), j
7       k, c (hard), g (hard), q
8       f, v
9       p, b

 The images that I placed in the various loci in the story that I told above now make perfect sense: each represents words to translate into digits of π with the aid of the Major method, as shown in the table below:  

Words From Image              Digits of π
Motor-Tulip                   314159
Nash! Lame! Leave!            265358
Pick-Poem-Name                979323
Fur-Changer                   846264
Mom-Foaming                   33832
Kibbles-Knife                 795028
Fried-Bucket                  841971
Shabby-Map-Bomb               693993
Cooled-Seal-Fins              75105820
Puck-Wrapper                  97494
Earlobe-Gnomes-Coffee         45923078
Teachers-Shine-Fish-Nose      1640628620
Hive o’ Pipefish              89986
Knives-Hammer-Vinyl           28034825
Marooned-Ticks                3421170
Shoe-Cob                      679
With a little practice using the Major method – I usually do so by memorizing the digits of license plates on other cars as I drive – converting words into digits becomes automatic.  This ability, combined with the method of loci and vivid mental imagery, permits you to perform memory feats like mine without too much effort.  However, if I ever want to reach Lu Chao’s level of performance, then I will have to move into a much, much larger memory palace!