Monday, March 25, 2013

The Extended Mind: Alberta Primetime Edition

Last Wednesday I, along with Ray Bilodeau of the J.R. Shaw School of Business at NAIT, took part in a television discussion about the effects of smart technology on human intelligence.  The discussion was spawned when the team that puts the Alberta Primetime program together wondered whether offloading memory and knowledge to smart phones and the like harms the human abilities to memorize and to think.  I think both panelists agreed that smart technology does not make us stupid, but you can be the judge: a link to that portion of the Alberta Primetime telecast is below.

There were a number of delightful ironies related to that panel.  How did I get the opportunity to be part of it?  A producer found my name in a web search and sent me an e-mail; I pointed them to my February 18 blog post on the extended mind (“Where is the mind, and why does its location matter?”), and that was that.  In short, smart technology was responsible for my talking about its effects!

After agreeing to appear on the panel, I decided that I needed to prepare, and performed some additional web browsing and reading.  This immediately led me to several interesting sources, including Nicholas Carr’s famous article in The Atlantic that asked if Google was making us stupid.  The producer helpfully e-mailed links to some additional material, which also included Carr’s piece.

I do not agree with Carr’s argument (it is too anecdotal), but it led me to a wonderful example of how this debate about the effects of technology on intelligence is effectively 2400 years old.  Carr cites Plato’s dialogue between Socrates and Phaedrus as providing an argument about the negative effects of the written word.  I hunted down Plato’s Complete Works (1997, edited by John M. Cooper, Hackett Publishing) from my son’s extensive philosophy library.  There I found the claim that writing “will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality” (pp. 551-552).  Of course, we only know of Socrates’ teachings because Plato wrote them down.  Carr cites Plato, who wrote about Socrates 2400 years ago, and I extend my knowledge a little further by doing some reading!

Of course, the World Wide Web and smart phones are only the latest tools for extending the mind.  To the probable dismay of Socrates, the written word is a far more pervasive technology for extending human cognition.  Not long after my television appearance, I finished reading a book on the history of computer programming languages (Go To by Steve Lohr, Basic Books, 2001).  In a chapter on the history of the GUI and the Macintosh, Lohr cites some classic work – completely unknown to me – by J.C.R. Licklider, who was a Harvard-trained psychologist.  While working for Bolt, Beranek and Newman, Licklider published a paper that presciently argued that the goal of computing was to augment intelligence, not to substitute for it.

I was intrigued by Lohr’s mention of this work, partly because Licklider was a psychologist, and partly because I had been thinking more about the extended mind after appearing on the Alberta Primetime panel.  Naturally, I used a new technology (the World Wide Web) to access an old tool (the written word) when I retrieved Licklider’s 1960 paper “Man-Computer Symbiosis”.  This paper is full of key ideas, and is particularly interesting to consider in light of developments in computer technology (and its use) in the 50 years since it first appeared.  To me, two key themes of embodied cognitive science run throughout Licklider’s piece.  First, his notion of a symbiotic relationship between humans and computers is essentially one of emergence: the extended mind created by this symbiosis can solve problems that neither component could solve on its own.  Second, Licklider in essence argued that creating this symbiosis required advances in affordances: that is, computer technology had to develop in a fashion that led to a seamless man-machine interface.  “It seems likely that the contributions of human operators and equipment will blend together so completely in many operations that it will be difficult to separate them neatly in analysis” (Licklider, 1960, p. 6).

One theme that runs through Licklider’s article is that the human contribution to man-computer symbiosis is to help guide the problem-solving process, and to evaluate its results.  To me, this suggests that by offloading some cognitive tasks to modern technology, we may be freeing our minds for making alternative contributions to extended thought.  Smart technology is not making us stupid; it is making us use our minds in different ways to take advantage of new affordances in the world.

Links:

 

Monday, March 18, 2013

Composing Atonal Music Using Strange Circles

In the 1920s, Arnold Schoenberg invented a technique, called dodecaphony, for composing atonal music.  Atonal music has no discernible musical key, because all twelve notes from Western music occur equally often.  In dodecaphony one begins by arranging all twelve notes in some desired order; this arrangement is the tone row.  One then takes the first note from the tone row to use in composing a new piece.  The composer chooses the duration of this note, and decides whether to repeat it.  However, once the use of this note is complete, the composer cannot use it again until all of the other eleven notes in the tone row have also been included.
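To make that constraint concrete, here is a minimal Python sketch of my own (an illustration, not Schoenberg’s actual procedure): it draws notes from a randomly ordered tone row, allows a note to be repeated immediately, and forbids it from recurring until the other eleven notes have been used.  Durations are left out, and the 30% chance of repetition is an arbitrary assumption.

```python
import random

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# The tone row: all twelve notes arranged in some desired order.
tone_row = NOTES[:]
random.shuffle(tone_row)

def dodecaphonic_melody(row, length):
    """Draw notes from the row; a note may repeat immediately, but cannot
    recur until the other eleven notes have all been used."""
    melody = []
    while len(melody) < length:
        for note in row:
            melody.append(note)
            if random.random() < 0.3:  # illustrative chance of repetition
                melody.append(note)
    return melody[:length]

print(dodecaphonic_melody(tone_row, 24))
```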

In this blog, I explore an alternative approach to composing atonal music.  This technique uses ‘strange circles’ as sources of notes for a musical composition.  Strange circles are representations of musical notes that we have discovered in many networks that we have trained on musical classification problems.

First, I will briefly state what strange circles are.  Then, I will explain how I used them to generate the notes of a musical score.  I will provide that score, along with some sound files that illustrate the completed piece as well as the components from which it was constructed.  If you are not interested in the details of my method, then feel free to skip to the end to hear its results! (Links are in the text, but are not evident in some browsers.  So, I have also set aside the links at the end of this post.)

Strange Circles

Most students of music encounter a device called the circle of fifths.  It arranges all twelve notes from Western music in a single circle; adjacent notes in this circle are a musical interval of a perfect fifth (or seven semitones) apart.  This circle helps determine the number of sharps or flats in a key signature, or guides chord progressions in jazz.

The Circle of Fifths

Other circular arrangements of notes are possible that separate adjacent notes using musical intervals different from the perfect fifth.  These are ‘strange circles’ because they are not typically taught to music students.  When my students and I train artificial neural networks to classify musical chords, and then examine the internal structure of the trained network, we often find that the network assigns notes to a variety of strange circles.

For example, we often find that networks employ two circles of major seconds, both shown below.  In each of these circles, adjacent notes are a major second (or two semitones) apart.  With this separation between notes, one circle captures six of the twelve notes of Western music, and the other captures the remaining six.

Two Circles of Major Seconds

We also frequently discover the four circles of major thirds in our networks; all four appear below.  In each of these circles, adjacent notes are a major third (or four semitones) apart.  With this separation between notes, one circle captures only a quarter of the notes in Western music.  As a result, four different circles are required to represent all of the possible notes.
 
Four Circles of Major Thirds
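All of these circles, the familiar circle of fifths included, fall out of simple pitch-class arithmetic: stepping around the twelve notes by a fixed interval partitions them into circles.  Here is a minimal Python sketch of that arithmetic (my own illustration; the note names and the function are assumptions, not anything our networks compute):

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def circles(interval):
    """Partition the twelve pitch classes into circles whose adjacent
    members are `interval` semitones apart."""
    seen = set()
    result = []
    for start in range(12):
        if start in seen:
            continue
        circle, pc = [], start
        while pc not in seen:
            seen.add(pc)
            circle.append(NOTES[pc])
            pc = (pc + interval) % 12
        result.append(circle)
    return result

print(circles(7))  # circle of fifths: one circle containing all twelve notes
print(circles(2))  # two circles of major seconds, six notes each
print(circles(4))  # four circles of major thirds, three notes each
```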
 
Composing With Strange Circles

Why do networks use strange circles to represent musical structure?  One reason is that networks discover that notes that belong to the same strange circle are not typically used together to solve musical problems, such as classifying a musical chord.  Instead, the network discovers that combining notes from different strange circles is more successful.

I thought that a network’s approach to strange circles could create new music.  Networks combine notes from different strange circles, but do not expect notes from the same strange circle to co-occur.  What if the notes for a voice or a staff of a musical composition were provided by a strange circle?  If one created this voice by choosing only one note at a time from a strange circle, then its notes would not co-occur.  However, if one used different strange circles in this way to create different voices in the same composition, then this might be musically interesting.

My hypothesis led to action.  I created a six-voice piece of music, in which a strange circle provided the notes of each voice.  I used each of the six strange circles illustrated above.  To choose a note for a staff, I randomly selected a note from its strange circle.  (This move is also consistent with our network interpretations: networks treat each note from the same strange circle as being exactly the same note!)

I made three additional musical assumptions.  First, while each circle generated a note name, I decided how high or low (in terms of octave) each note was positioned.  Second, I added a rest to each circle, so that at any randomly selected moment a strange circle could be silent.  This provided some musical variety.  Third, in order to ensure that all notes occurred equally often in the score, I sampled the two circles of major seconds twice as frequently as the other four strange circles.  That is, I used the circles of major seconds to generate four quarter notes per bar, while the other four circles were used to generate two half notes per bar. The score generated from the ‘strange circles’ in this fashion consisted of 14 bars.  Its first four bars are below; a pdf of the entire score is available here.
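Here is a minimal Python sketch of that sampling scheme (an after-the-fact illustration, not the procedure I actually used; octave placement is omitted because I positioned notes by hand):

```python
import random

# The six strange circles illustrated above.
MAJOR_SECONDS = [["C", "D", "E", "F#", "G#", "A#"],
                 ["C#", "D#", "F", "G", "A", "B"]]
MAJOR_THIRDS = [["C", "E", "G#"], ["C#", "F", "A"],
                ["D", "F#", "A#"], ["D#", "G", "B"]]
REST = "rest"
BARS = 14

def sample(circle):
    """Randomly select one note (or a rest) from a strange circle."""
    return random.choice(circle + [REST])

score = []
for bar in range(BARS):
    voices = {}
    # Circles of major seconds are sampled twice as often (four quarter
    # notes per bar versus two half notes) so that, ignoring rests, all
    # twelve notes occur equally often across the score.
    for i, circle in enumerate(MAJOR_SECONDS):
        voices["seconds-%d" % i] = [sample(circle) for _ in range(4)]
    for i, circle in enumerate(MAJOR_THIRDS):
        voices["thirds-%d" % i] = [sample(circle) for _ in range(2)]
    score.append(voices)

for bar_number, voices in enumerate(score, start=1):
    print(bar_number, voices)
```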
 
 
Individually, the strange circles are not musically interesting, though they do seem musical.  For instance, listen here to a single circle of major thirds.  There is not much you can do using only three notes and the occasional rest!  You can also listen here to a single circle of major seconds.  It is a bit more interesting, because it uses six different notes.

Music that is more interesting emerges from combining the random outputs of different circles.  I enjoyed the results of pairing the two circles of major seconds together, which you can hear here.  Combining two circles of major thirds also shows promise, as you can hear; even more interesting results emerge from combining all four circles of major thirds.

Of course, the full composition involves combining the notes of all six circles simultaneously.  I was surprised at its musicality.  My impression of this piece was that it is a modern, atonal composition.  I am no Schoenberg, but I humbly submit that composing music by combining strange circles provides an interesting alternative to dodecaphony.

Links
 

Friday, March 15, 2013

Not Just The Facts: Teaching Cognitive Science

The last few days have been difficult for post-secondary education in Alberta, and many challenges and changes are imminent.

Substantial cuts to university operating grants were announced in the provincial budget on March 7.  There are ongoing discussions about how to implement budget cuts across institutions, faculties, and departments.  There is pressure from the minister of Enterprise and Advanced Education for institutional reorganization.  Researchers sense encouragement of applied research, and discouragement of “curiosity-driven” research. Students, faculty, and staff are marching on the legislature.  Coffee with some of my younger colleagues includes mentions of exploring opportunities elsewhere.

Not surprisingly, some of these discussions are online.  Many can be found by searching Twitter for the hashtag #abpse.  The minister, Thomas Lukaszuk, contributed to this Twitter feed by posting a link to a YouTube video that provides “An Open Letter To Educators.”  The minister simply asked “Interesting – what do you think?”  In the video, a former University of Nebraska student argues that universities are in the business of communicating facts to students, but that the internet now makes facts free.  Universities have to adapt to this reality or die.  His university experience – with large classes, PowerPoint lectures, fact memorization and regurgitation, and professors not interested in even learning student names – led him to drop out.  Presumably, he could find better facts (for free) using his web browser or smart phone.  His point is that if the goal of a university education is memorizing facts, then it is a waste of time and money.

I have a lot of different reactions to this video, but the most straightforward is simply that its creator missed the point of a university education.  As a professor, I am not in the business of uploading facts to student brains.  Instead, my goal (as advertised in my teaching dossier)  is to create an environment in which students learn by building – reading, discussing, writing, programming, simulating, experimenting – as they explore the exciting interdisciplinary ideas at the foundation of cognitive science.

One easy example of putting this teaching philosophy into action is my fourth-year course on embodied cognitive science.  Sure, this course involves me lecturing.  It involves more than this, though.  Students get hands-on experience with behaving agents by building a variety of robots out of LEGO Mindstorms components.  They use these robots to explore how very simple agents can generate surprising and complicated behavior when embedded in the real world.  Building on lectures and hands-on activities early in the course (where students learn about and construct our versions of some famous robots), by the end of the course students create, study, and document their very own robot projects.  Examples of student projects in the most recent edition of this course are online; these videos (not to mention the robots and their programs) were student creations.  (I even took the time to learn all of the students' names.)

A more recent example involves my PhD student and two senior undergraduate students, the latter two taking independent study courses with me this year.  All three students have been involved in lab activities, and the fruits of these labours appeared at the recent Joseph R. Royce Research Conference hosted by the Department of Psychology at the University of Alberta.  All three students (seen in the image below) gave their first poster presentation at this conference.  Of course, this required them to be involved in the research, as well as the poster preparation.  More importantly, at the conference itself the students engaged in conversations with conference attendees about the work on display and about why it is significant.  You learn a lot about your own research from conversations of this type, both by learning how to present it, and by dealing with a variety of unanticipated questions!

The two examples above come from my direct experience, and I could provide many others from my years as a professor (and from my earlier years as a student!).  I am not in a position to make general claims about the state of university education, but I do not believe that I am an exception to the rule.  Many other instances can be found in my own department (there were many undergraduate and graduate students presenting posters and talks at the Royce conference), and in all of the other departments at my institution.  If this were not generally true at most universities, I would be astonished.
 
Brian, Sheldon and Josh at the poster session of the Royce conference
 

Monday, March 11, 2013

'Gee Whiz' Connectionism

The artificial neural networks produced in my lab always seem to find solutions to problems that are more elegant and clever than any solutions that I could come up with on my own.  Importantly, to be humbled in this way by a network, I have to take the time to interpret its internal structure.

Artificial neural networks are “brain-like” computer simulations.  A network consists of a number of different processing units that send signals to one another through weighted connections.  These networks learn to make desired responses to stimuli.  A stimulus is a set of features that one presents to a network by activating the network’s input units.  When activated, input units send signals through weighted connections to other network processors, eventually producing a pattern of activity in the network’s output units.  This output activity is the network’s response to the presented stimulus.  Often network responses are patterns of ‘on’ and ‘off’ values that classify a stimulus by naming its category.

The responses of a new network will not be very accurate, because the network has not yet learned the desired stimulus-response relationship.  Learning proceeds by giving a network feedback about its responses, feedback that is used to change its internal structure.  Feedback is usually a measure of response error – the difference between desired activity and actual activity for each output unit.  A learning rule uses these errors to adjust all of the connection weights inside the network.  The next time the network receives this stimulus, it will produce a more accurate response because of these weight adjustments.  By repeatedly presenting a set of training patterns, and by using feedback to adjust network structure, a network can learn to make very sophisticated judgments about stimuli.
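As a minimal sketch of this kind of learning, here is a single logistic output unit trained with a simple error-correction rule on a toy classification problem.  The learning rate, activation function, and task are illustrative assumptions of mine; the networks discussed in this post are more elaborate.

```python
import math
import random

def logistic(net):
    """Activation function: squash the summed input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-net))

# Training set: stimuli (input unit activities) and desired responses.
# Toy problem: respond 'on' only when both input units are on.
patterns = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights = [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)]
bias = 0.0
rate = 0.5  # learning rate (an illustrative choice)

for epoch in range(5000):
    for stimulus, desired in patterns:
        # Input units send signals through weighted connections.
        activity = logistic(sum(w * s for w, s in zip(weights, stimulus)) + bias)
        # Feedback: the difference between desired and actual activity...
        error = desired - activity
        # ...is used to adjust every connection weight.
        for i, s in enumerate(stimulus):
            weights[i] += rate * error * s
        bias += rate * error

for stimulus, desired in patterns:
    activity = logistic(sum(w * s for w, s in zip(weights, stimulus)) + bias)
    print(stimulus, round(activity, 2), desired)
```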

The connectionist revolution that struck cognitive science in the 1980s was a revolt against the ‘symbols + rules’ models that had defined classical cognitive science for many decades.  Connectionists argued that artificial neural networks were far more appropriate models of cognition than were logicist models inspired by the operations of the digital computer. The gooey, brain-like innards of networks did not seem to have explicit symbols or rules, and seemed better suited for solving the ill-posed problems that humans are so good at dealing with.

The vanguard of the neural network revolution was something that I like to call ‘gee whiz’ connectionism.  Connectionists would take some prototypical problem from the domain of classical cognitive science (usually involving language or logic), and would train a network to deal with it.  Then they would claim that, gee whiz, a radical non-classical model of this problem now existed.  In the 1980s, everyone assumed that the internal structure of networks was a huge departure from classical models, so the mere creation of a network to solve a classical problem was a sufficient contribution to the literature, as well as a critique of the classical approach.

The problem with ‘gee whiz’ connectionism was that it never validated its core assumption – that the insides of networks were decidedly non-classical – by actually peering inside them to see how they worked.  When we started to interpret network structure many years ago, we found that the differences between a network and a classical model were often less distinct than connectionists imagined.  We also found that, more often than not, networks had discovered representations for solving problems that were new, exciting, and interesting.  Furthermore, these representations were often far cleverer than any that I could think up on my own.

Recent results in my lab reminded me that networks usually discover ingenious solutions to problems.  We train some networks to solve probability problems, and we use math to explore the relationship between network structure and important probability theorems.  Our math has allowed us to derive equations to define network structure (e.g. connection weights) in terms of the probability rules.  These equations all turned out to be loglinear models – equations that involve adding and subtracting natural logarithms of variables.  For instance, we might find that the equation for some weight w is the loglinear model ln(a) + ln(b).

We also found in some of our more interesting equations that the variables in our loglinear models looked like important elements in probability theory and in other branches of statistics (e.g. some variables looked like something called the odds ratio).  What puzzled us, though, was that the network was taking the natural logarithm of these variables.  This made the relationship between network structure and other branches of mathematics harder to define.  Why were networks using the logarithms of these variables?

It finally struck me that this was in fact an extremely elegant solution to a mathematical problem faced by any of our probability networks.  In many cases, to determine some probability based on different pieces of evidence, one has to multiply other probabilities together.  However, the processors in our artificial neural networks cannot multiply or divide – they can only add or subtract the signals that they are receiving.  The network’s solution to this conundrum is to do all of its calculations using logarithms of variables, because in the world of logarithms, adding and subtracting amount to multiplying and dividing: ln(a) + ln(b) = ln(ab).  Once the logarithmic calculations are complete, the output units use their activation function to remove the logarithm and return the desired result.
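The trick rests entirely on that identity.  Here is a minimal sketch of the principle (an illustration of mine, not our networks’ actual activation function):

```python
import math

def log_space_product(probabilities):
    """Multiply probabilities using a unit that can only add its inputs."""
    # Each incoming signal is the natural logarithm of a probability;
    # the unit simply sums the signals it receives.
    net_input = sum(math.log(p) for p in probabilities)
    # The output activation function exponentiates, removing the
    # logarithm and returning the desired product.
    return math.exp(net_input)

print(log_space_product([0.5, 0.2]))  # ~0.1, i.e. 0.5 * 0.2
```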
 
In short, our probabilistic networks discovered how to use logarithms to perform multiplication and division.  Gee whiz, we would never have discovered this had we not looked at the details of their internal structure.

Monday, March 04, 2013

Desert Island Books For Cognitive Scientists



Teaching the foundations of cognitive science requires providing students with a sense of its historical and philosophical roots.  My lectures try to accomplish this by providing heavy doses of quotes from, and pictures of, pioneering cognitive scientists.  (I gathered enough of this material to launch a gallery of cognitive scientists.)

Last term, as part of my mission to expose students to the roots of cognitive science, I found myself describing various pioneering works as ‘desert island books’.  I told my class that these were classic texts that any marooned cognitive scientist would be content to have by their side when facing a lengthy wait for rescue.  I am not expecting to be in that situation myself, but after 25 years at the University of Alberta I can comfortably imagine retiring to my retreat at Hastings Lake, Alberta to spend some serious time reading classics of cognitive science.  In addition, I am long enough in the tooth to be able to suggest a few foundational texts for budding cognitive scientists.

What books would provide me contentment on a desert island?

I generated the list below to answer this question.  I constrained it in two ways.  First, I limited it to thirteen books.  Second, I tried to give equal representation to the three major approaches to cognitive science (classical, connectionist, and embodied).

Classical Cognitive Science

Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the Structure of Behavior. New York: Henry Holt & Co.

Newell, A., & Simon, H. A. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.

Pylyshyn, Z. W. (1984). Computation and Cognition. Cambridge, MA: MIT Press.

Simon, H. A. (1969). The Sciences of the Artificial. Cambridge, MA: MIT Press.

About 25 years ago I rescued my copy of Miller, Galanter and Pribram from a discard pile; after my first reading of it I was amazed at how current this pioneering book still managed to be.  A recent look through it reminded me of its attempt to bridge cognitivism with cybernetics.  Newell and Simon provide an incredible manifesto of modeling in a classic book that introduces production systems, physical symbol systems, and protocol analysis.  Pylyshyn’s book offers a rich theoretical account of the implications of assuming that cognition is computation, including a deep discussion of what is involved in validating models of cognition.  Simon’s masterpiece provides a link between the science of cognition and the science of design, and is a continuous source of inspiration about how to think like a cognitive scientist.

Connectionist Cognitive Science

McCulloch, W. S. (1988). Embodiments of Mind. Cambridge, MA: MIT Press.

Minsky, M. L., & Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry (1st ed.). Cambridge, MA: MIT Press.

Rosenblatt, F. (1962). Principles of Neurodynamics. Washington: Spartan Books.

Rumelhart, D. E., & McClelland, J. L. (1986). Parallel Distributed Processing, Vol. 1. Cambridge, MA: MIT Press.

The McCulloch book is a collection of his important papers, primarily from the 1940s into the 1960s, many of which are classics.  It is not an easy read, but it is fun, and it is also incredible to see the breadth of topics covered – links from the abstract to the physical abound.  Minsky and Papert provide a wonderfully challenging read that illustrates how computational analyses of artificial neural networks should proceed.  Rosenblatt’s magnum opus introduces the perceptron, but is far deeper than some might expect, and foresees aspects of the New Connectionism.  The Rumelhart and McClelland book heralded New Connectionism; this first volume of a pair of books gives the reader a lot of dangerous information about how to carry out connectionist research. (It is largely responsible for my developing my own skills in this field; I suspect that many connectionists taught themselves from reading it in the late 1980s.)

Embodied Cognitive Science

Braitenberg, V. (1984). Vehicles: Explorations in Synthetic Psychology. Cambridge, MA: MIT Press.

Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Boston, MA: Houghton Mifflin.

Neisser, U. (1976). Cognition and Reality: Principles and Implications of Cognitive Psychology. San Francisco: W. H. Freeman.

Winograd, T., & Flores, F. (1987). Understanding Computers and Cognition. New York: Addison-Wesley.

This is quite a mixed bag of selections, which is only proper, because the embodied approach is fairly fragmented.  Braitenberg provides a collection of thought experiments that illustrate the importance of realizing that an agent is embedded in its environment.  It ties in nicely with the Simon book mentioned earlier.  Gibson’s theory of perception is a foundational example of the key elements of embodied cognitive science, and Gibson’s work inspired Neisser’s embodied treatment of cognition.  Winograd and Flores offer a fascinating critique of classical cognitive science, and suggest embodied solutions to the problems they identify.  I read both Neisser and Winograd and Flores when I was a student and missed the point of both books; 25 years later I was astounded at how prescient both were, and amazed at my inability to understand them properly on the first read!

Combining Elements Of All Three Approaches

Marr, D. (1982). Vision. San Francisco, CA: W. H. Freeman.

The last book on my list is such a seminal work that it really stands on its own.  Furthermore, Marr’s theory combines strong elements of all of the different schools of cognitive science: he is clearly concerned with constructing representations in the classical sense, but develops algorithms that are essentially connectionist, and his proofs concern properties of the world (i.e. natural constraints).  His ability to move from mathematical proofs to single cell recordings of visual neurons is astonishing; for me, this was one of the two most influential books that I have ever read (the other being Pylyshyn’s, cited above).