
Howard Gardner
Hobbs Professor of Cognition and Education / Harvard Graduate School of Education

Two Departures from the Professoriate: A World Apart

March 27, 2018 - 7:29am

Matt Welsh was a highly talented assistant professor of computer science at Harvard University. Like some but by no means all junior professors, he was approved to become a full, tenured professor—indeed, he held a named chair as the Gordon McKay Professor of Computer Science.

Just months after receiving tenure, Welsh resigned his professorship to become a fellow at Google. This decision caused quite a stir. Because so many were surprised, he decided to explain why.

In an article with the straightforward title “Why I’m Leaving Harvard,” Welsh begins by saying that he did not have any major problems with his work at Harvard. He liked his colleagues, said that the students were the best that he could ever hope for, and underscored that he had plenty of support for the research that he wanted to do.

But he went on to say, “There is one simple reason that I’m leaving academia: I simply love [the] work I’m doing at Google. I get to hack all day working on problems that are orders of magnitude larger and more interesting than I can work on at any university… [W]orking at Google is realizing the dream I’ve had of building big systems… I can just sit down and write the code and deploy the system, on more machines than I will ever have access to at a university.”

Let’s take Welsh at his word—and assume that he did not leave Harvard simply (or primarily) to triple his salary or get a mortgage-free house or guaranteed scholarships for members of his family. In our country, few would say that Welsh should be deprived of the opportunity to fulfill his life’s dream.

Yet as a fellow professor and as one who believes in educational institutions, I am disappointed—in him and/or in the system. 

From the time of graduate school if not before, Welsh was supported in his pursuit of the doctorate and of postdoctoral work—on the tacit assumption that, if he had the opportunity, he would join the professoriate. Citizens (via their taxes) as well as private funding agencies put their faith in him. And now he is working for private industry—admittedly having lots of fun, perhaps doing some good, though the more cynical would say that he has “gone over to the dark side.”

Consider an entirely different case—that of Erin Bartram. Trained as an historian of 19th-century America, Bartram was an assistant professor at a much less prestigious school—the University of Hartford. After years of searching unsuccessfully for a tenure-track job, she decided to leave academe. She would have done so silently and without any public knowledge had she not decided to write an essay titled “The Sublimated Grief of the Left Behind” (an example of a genre apparently dubbed “quit lit”). In her soul-searching, evocative piece, she notes, “We don’t want to face how much knowledge [a] colleague has in their head that’s just going to be lost to those who remain, and even worse, we don’t want to face how much knowledge that colleague has in their head that’s going to be utterly useless for the rest of their lives.” To her surprise, the essay went viral; as Bartram ruefully commented afterwards, had that not happened, “I would have been nobody.”

Of course, and alas, Bartram’s story is far more common than Welsh’s. Every year, many hundreds of young scholars—primarily in the humanities and the “softer” social sciences—receive their doctorates and try, unsuccessfully, to find a full-time tenure-track position. Some find a postdoc position for a year or two; some fill in for a professor who is on sabbatical; some moonlight on several campuses (so-called “taxi cab professors”); some end up teaching in high schools or for-profit institutions; and some—one could even call them “lucky”—end up teaching at a second- or third-tier school, or a community college, with a teaching (and perhaps also an advising) load so heavy that there is essentially no chance that they can carry out the scholarship that they were trained to do—and that they presumably want to do.

And many quit the academy altogether, as Bartram has apparently done—sometimes trying to be “independent scholars,” more often seeking and accepting positions that would have been more appropriate for those who have not spent years fulfilling the requirements for a doctoral degree.

Darting back to the words of Welsh, these less fortunate young scholars would not be soothed by his concession: “I also admire the professors who flourish in an academic setting, writing books, giving talks, mentoring students, sitting on government advisory boards, all that. I never found most of these things very satisfying, and all of that extra work only takes away from time spent building systems, which is what I really want to be doing.”

I hope that readers of this blog join me in asking, “What’s wrong with this picture?”—or, more properly, “What’s wrong with these pictures?” It’s lamentable that Welsh does not appreciate many facets of the traditional professorship (which presumably he should have known about by the second year of doctoral training); it’s tragic that Bartram is one of thousands of trained scholars who never get the opportunity to teach students who want to learn and to add their own bricks—small and not so small—to the edifice of knowledge in their chosen field of study.

At least in the United States, from pre-school to graduate school, education is no longer a public good; it’s become a private good. The lucky few get to do just what they want to do—even if they never see another student or teach another class. A large majority of those with doctorates would give anything to take the place of the “leaver,” but never gain the chance.

Other than remaining with the unsatisfactory status quo, could this situation be handled differently?

One solution, with a long history in Europe, is to have two separate tracks—in practice, or at least in theory. For a fortunate few, there is a research track, wherein you join an institute, carry out the research that you want to do, and never need to teach any students unless you so desire. For the vast majority, you either begin by teaching secondary students far from the metropolis, or you consign yourself to delivering huge lectures at the big universities, without having any real contact with students, many of whom never show up in class and most of whom will never graduate. You are solely a teacher—not a teacher-researcher-scholar.

Another solution would be for the colleges and universities to cease graduate training altogether and let wealthy private companies set up their own schools. As I understand it, Google hires hundreds of post-docs, most trained at prestigious universities. Why not have Google, or Amazon, or Microsoft, or “the next big tech company” train the next generation of in-house researchers? We’d have to decide (as a society) whether to award doctorates, doctorates with an asterisk (PhD, with a Twitter bird or Amazon shopping cart next to it), or some newfangled degree.

Yet another possibility: perhaps students who elect to pursue a PhD would understand from the beginning that there is a respectable alternative lifestyle, that of the independent scholar. After all, this option was in effect the tradition in much of Europe over the centuries. And of course, if you or your family are wealthy enough, this remains a viable pathway today. Not for Bartram, alas; she has to worry about how to pay next month’s rent.

But there is a better way, though I have to admit it is no longer the American way: an agreed-upon bargain between our higher-degree-awarding institutions and our talented students who want to be teachers and scholars. If our institutions train you to be a skilled scholar and teacher, you commit to giving back—to staying within the professoriate, barring unusual circumstances. (Presumably Welsh could even come to like, indeed to cherish, his Harvard undergraduates.) Conversely, if we take you on as a student and you complete the requirements successfully, we commit to providing a job which makes use of your talents. If that means radically reducing the number of doctorates in history or in Romance languages, better to do so “up front” than to hold out the false hope of a job—and, in the process, ensure many repetitions of Erin Bartram’s sad saga.


“Why Are You Doing That Research?”

March 12, 2018 - 8:01am

I am often asked about how I became interested in a certain line of research. Recently, people have asked me why, at the age of 70, I embarked on a very large empirical study of higher education in the United States.

One answer: I’ve always been interested in education. As a young child, I thought that one day I would teach classes to children of every age. I’ve stayed in school all my life. But for the first half of my scholarly career, I carried out studies in developmental psychology and neuropsychology, without a particular focus on education.

That situation changed for two reasons. First of all, my theory of multiple intelligences, never of particular interest to psychologists, proved of great interest to educators. (One inevitably notices what others notice.) Second, I took a teaching job at the Harvard Graduate School of Education. My students were largely involved in education but not in higher education.

As a result, if you look at my published writings in education from 1985 to 2015, they have focused almost entirely on K-12 education.

But while my research veered toward pre-collegiate education, my colleagues and I were also undertaking a major study of professions in American life. That study, known initially as The Good Work Project, took place over 10 years. It yielded a variety of publications and toolkits, now collated under the label of The Good Project.

In that research, I and my colleagues—Wendy Fischman and Lynn Barendsen—were disturbed by a particular finding. Young people whom we interviewed wanted to do “good work” but felt they could not afford to do it at that point in their lives. They wanted to be successful and well-off, and so they, like their peers, had to be willing to cut corners, or so they thought. They told us that one day, once they had achieved their material goals, then they would be doing good work and modeling such work for other people.

We wondered whether we could do something with college students to orient them to the importance of carrying out good work from the start—work that is technically excellent, personally engaging, and—most important—carried out in an ethical way.

Accordingly, we looked for opportunities where we could work with college students on these issues. The first opportunity arose at Colby College in Waterville, Maine. There, a colleague, Sandy Maisel, invited us to teach a course in ethics, along with the nearby Institute for Global Ethics. With a set of engaged students, we reviewed many of the ethical dilemmas that arise in work life. The course worked out well.

The following year, I had the opportunity to carry out a similar, briefer course at Amherst College with the then-president, Tony Marx.

While carrying out these experimental courses, we had also been speaking to educators at Harvard College—much closer to home!

In talks with the Freshman Dean’s Office, particularly with Dean Tom Dingman and his associate Katie Steele, we began to design a course for Harvard freshmen which we ultimately called “Reflecting on Your Life.” That course, now in its 11th year, continues to this day.

Especially important was the collegiality of Dick Light, a long-time friend and colleague and, unlike me, a genuine expert in higher education. Dick was a full collaborator in the design and execution of “Reflecting on Your Life.”

Over the years, Dick and I spoke frequently and at length about what it takes to make the most of the college experience—and to do so in a way that makes sense to you but also to others in your present and future worlds. That, in a nutshell, is the core idea of “Reflecting on Your Life.” As a result of these conversations, Dick and I wondered whether it might be opportune to carry out an empirical study of higher education in the United States today.

We were struck by two phenomena. On the one hand, nearly every month, new books appear—often rather depressing in tone—about the state of higher education in America. On the other hand, most of these books have far more “attitude” and considerably more recommendations than they have data carefully gathered and thoroughly analyzed.

Over the course of many months and many conversations, there arose a project, which we initially called “Liberal Arts and Sciences in the 21st Century.” Now in its fifth year, the project has carried out close to 2000 interviews on 10 deliberately disparate campuses. We speak to all the major stakeholders on each campus. In our interviews, we ask participants, particularly students, about their perspectives on excellence, ethics, and engagement. These virtues relate to the college experience, in both academic and campus life.

Soon we will turn our attention to making sense of these in-depth interviews. In the meantime, Dick has been promoted to Senior Research Advisor, and Wendy Fischman, who worked with me initially on The Good Work Project, is now Senior Project Manager. Over the next months and years, we will report many of our impressions and findings on this blog, Life-Long Learning.


Early Stops on My Quest for Mind

February 26, 2018 - 2:01pm

As an adolescent, I became interested in the field of psychology. My interests in psychological topics resembled those of most American adolescents—curiosity about my own personality, emotions, family relations, ambitions, and anxieties. And so it is not surprising that when I had the chance (as an undergraduate), I seized the opportunity to study with Erik Erikson, a famous psychoanalyst. As a corollary, my reading and writing in psychology were primarily in the psychoanalytic and dynamic psychological traditions. Indeed, I initially applied to graduate school to study clinical psychology, though in the end I chose not to pursue that profession.

And that is because, right after graduating from college, thanks to a chance summer job opportunity with a noted scholar named Jerome Bruner, I encountered a quite different area of psychology. Cognitive psychology was on the ascent—more respectable within the discipline of “psychological science,” and (most importantly) of greater interest to me. The cognitive area also explores the mind (that’s what psychology does!), but it zeroes in on the thinking mind: reasoning, creating, problem solving, problem finding, and, more broadly, the use of language and other symbol systems. We might say that cognitive psychology is situated in the senior common room of the academy, while dynamic psychology inhabits the room with the couch. Before long, I had become a convert to the branch of psychology represented by Professor Bruner and his colleagues.

My first book for the general reader was The Quest for Mind: Piaget, Lévi-Strauss and the Structuralist Movement. In this book, published in 1973, I described the major methodologies and findings of two thinkers who could readily be called cognitivists: Jean Piaget, a biologist-turned-psychologist, the master of cognitive development in children and adolescents; and Claude Lévi-Strauss, an anthropologist turned systematic thinker, the master—in his memorable phrase—of “the savage mind.”

For a first book, The Quest for Mind was widely reviewed, well received, and gave me a modest reputation as a reliable synthesizer of the works of others. In the book, I was doing what one is trained to do in high school and the university—understanding and summarizing clearly the ideas of others, with only a bit of personal commentary. My scholarly contribution, if any, was to find intriguing similarities and differences between the ideas and findings of Piaget and Lévi-Strauss. (Only later did I stick my own scholarly neck further out and write about my own developing ideas—most conspicuously, the theory of multiple intelligences.)

In retrospect, I have become curious about why I chose to write about these two figures—Piaget and Lévi-Strauss—and why and how I chose to link them. I don’t know precisely what I thought in the early 1970s, but I attempt here to resurrect the intellectual milieu of the time.

Following my introduction to cognitive studies courtesy of Professor Bruner, I spent a year in England. I read widely. Two of the scholars who most impressed me were Piaget and Lévi-Strauss. Personal encounters also mattered. In 1965, Lévi-Strauss was invited to London to give the prestigious Huxley Lecture to the Royal Anthropological Institute of Great Britain and Ireland. I attended and was impressed with his wide knowledge, his carefully crafted arguments, and his ability to interact in a sophisticated manner (and in excellent English) with his British peers—specifically the noted anthropologists Edmund Leach and Rodney Needham. And then, revealingly, at the start of my honeymoon with Judy Krieger, who was also a student of cognitive development, we elected to travel to Geneva and sit in on Piaget’s weekly seminar. We met “Le Patron,” as he was nicknamed, and exchanged pleasantries. Thereafter, on several occasions when Piaget came to the United States, I went to hear him speak.

And so, by the summer of 1966, I was already immersed in the worlds of Piaget and Lévi-Strauss, both through their writings and through personal observations. As far as I can recall, at that time their American readership did not overlap very much—experts in child development and education were reading and arguing about Piaget; scholars in sociology and anthropology were reading and debating with Lévi-Strauss.

I believe that I was attracted by two features. First of all, I was intrigued by the concrete details of Piaget’s clever experiments with children of different ages, and by Lévi-Strauss’s vivid accounts of his field work in South America as well as his keen analyses of exotic myths and enigmatic kinship relations. Scholars were trying to map out how these two intriguing populations (Western children and indigenous populations) thought about the worlds of objects and of persons.

Second, and importantly, I was impressed by the systematicity of their thinking. Both Piaget and Lévi-Strauss were trying to lay out the logical structures that gave rise to the words and the behaviors that they and other social scientists had observed. More specifically, when Piaget asked young persons to explain physical phenomena (like the conservation of mass in the face of various physical distortions) or moral dilemmas (like the equitable distribution of desirable objects), he claimed that specific logical structures were at work. He then expressed these structures in the language of logic—technically, group and grouping theories. And when Lévi-Strauss described the manner in which choices of mates were facilitated or prohibited in different cultures, or how themes (like power) and contrasts (like the raw and the cooked) were captured in myths, he too laid out the basic structures in algebraic form. In retrospect (after almost fifty years!), I would say I was too impressed with Piaget and Lévi-Strauss’s penchant for describing the logics that underlie the thought and behavior of children, of inhabitants of distant societies, and (of course, by implication) our own thought processes. I was looking, so to speak, for “dry land” that undergirded seemingly messy human behaviors.

And then I made a leap, one that, as an American, I was perhaps more poised to make than a European student would have been. I saw both Piaget (Genevan) and Lévi-Strauss (Parisian) as rationalists, growing out of a lengthy French intellectual tradition that placed a premium on logic. That tradition, dating back at least to the time of René Descartes, conceived of the mind as a privileged territory, quite apart from more mundane physical or physiological material. Not that Piaget and Lévi-Strauss were dualists in any literal sense: they both fully subscribed to the biological and physical scientific views of the time. And yet, perhaps more so than those in an Anglo-American empiricist tradition, Piaget and Lévi-Strauss continued to believe that one could study the mind directly, by-passing more mundane or more materialist bases.

It’s a bit odd that I was attracted to this formalist way of thinking. I was not trained in mathematics or physics and in those days had but modest interest in the biological sciences. But I have never forgotten a casual remark that Bruner made to me during that fateful summer of 1965—he said, “Howard, you think like a physicist.” If there was any force to this comment—the kind of sentence that almost everyone else present would have immediately forgotten but that has haunted me for decades—it raised the possibility that I was looking for a more formal way of thinking about the rich qualitative phenomena that interested me.

Quite possibly Bruner also realized that I was looking for a firmer foundation of my earlier interests. As a student of an interdisciplinary field called “social relations,” I had initially been most attracted to the writings of my tutor Erikson and also those of the sociologist David Riesman. Neither of these thinkers had any attraction to formal analysis (indeed, Erikson had never gone to college, and Riesman had not earned a Ph.D.). Perhaps I found myself lacking a powerful response to criticisms levelled at the seemingly artistic approaches to scholarship assumed by these mentoring figures—criticisms on the part of my friends, my other teachers, or my superego.

Anyway, whatever the cognitive or affective motivators, I began to think about the relations between, as well as the differences across, these two towering intellects. I then decided to write an article comparing their contributions. To my pleasure, it was accepted and published in Social Research, an established social science journal. Almost on a whim, I sent the article to Piaget and Lévi-Strauss. To my delight and astonishment, both men responded quickly and personally—to me, a mere graduate student (though Piaget conferred a doctorate on me in his address “Cher Docteur Gardner”). Only in the last few years, in digging up their correspondence, did I realize that their typed letters had the same date on them—April 10, 1970. Those letters, proud possessions, are now hung side by side in my office. I hope that they signal to me and to my students the importance—indeed the irreplaceability—of substantive communication among scholars and between mentors and mentees.

Before long, this article and comments from colleagues stimulated me to write the aforementioned book. (I note that, in the aforementioned 1970 article, I penned the pregnant phrase, “A task for a book, without question, but one worth sketching at this juncture.”) I conducted memorable interviews with both Piaget and Lévi-Strauss. I’d like to think that those personal encounters conferred a pertinence and passion to the book that took it beyond their printed words.

I was fortunate to find a wonderful editor, Dan Okrent; a distinguished publisher, Alfred A. Knopf; and a remarkable copy editor, Mel Rosenthal. That publishing team helped me to produce a book that I still have positive feelings about.

Now, because of two recent developments, I have a somewhat different slant on this book.

Development 1: For a volume in her honor, I was asked by my friend and colleague Sherry Turkle to write about one of her books, and together we decided that I should write about her first book Psychoanalytic Politics. Sherry’s book was also a study of the French intellectual tradition—in her case, focusing on another intellectual luminary, Jacques Lacan, a revolutionary psychoanalyst who was far more controversial than either Piaget or Lévi-Strauss. While I make no claim to have probed Sherry’s own motivations, I see both of us as having stretched to master another intellectual culture, one quite alien from ours, as a means of stretching our own social scientific understanding.

Development 2: As part of our current large national study of higher education, I’ve become intrigued by those experiences that students (and other informants) consider to be transformational. More frequently than I could ever have anticipated, students today nominate their time abroad—the proverbial junior year, though it can occur in any year of college, or in a gap year, or a summer internship, or, as in my own case, a postgraduate fellowship. In one sense, it’s disappointing that so many young persons describe as particularly transformative their time away from the home campus. (Why bother to have a well-appointed home campus? Does an airplane ticket suffice?) But perhaps one needs both the time on campus and the time away for such an effect to be felt.

Both Sherry Turkle and I had spent time abroad before we embarked on these particular studies. But I think in my case—and I would speculate in Sherry Turkle’s case as well—this was not just a cultural year abroad. It was an intellectual voyage abroad, immersed in another cultural tradition, that expanded our own understanding of the work in which we were interested—and, perhaps, in ways that we ourselves only dimly comprehend, affected what we did thereafter. Indeed, my decision—while I was still studying the texts of Piaget and Lévi-Strauss—to study neuroscience and spend 15 years working on a neurological ward reveals yet another effort to find some “hard ground” undergirding the human issues in which I was interested. But these speculations and subsequent events are stuff for another blog, at another time.


Audi Aicon Campaign Uses Multiple Intelligences

February 14, 2018 - 12:07pm

German car company Audi has unveiled the Aicon, an electric self-driving car, and a recent advertising campaign for the model uses Howard Gardner’s theory of multiple intelligences to highlight the vehicle’s various features.

For example, the electric motor is highlighted as evidence of the car’s naturalist intelligence, while its autonomous driving capabilities are cited as evidence of mathematical intelligence.

Gardner comments on this development below:

My friend, Tom Hoerr, a leading authority on the theory of multiple intelligences, brought this advertisement to my attention. When I developed the concept of MI, I never anticipated how much mileage (!) others might get out of the idea. As I quipped to my children, they may get a kick out of this, but not a kickback—the idea of multiple intelligences has always been in the public domain.

Click here to see more.


Reflections on Transformations

February 7, 2018 - 8:42am

In several blogs in this series, I have written about the transformative powers of education. Drawing on my own experiences over a long life spent almost exclusively in educational institutions, I have recalled mentors, books, travels, and meetings—usually ones that had a positive transformative effect, but also mentioning at least one transformational experience that was decidedly negative. I have not yet written about family, friends, colleagues, or, indeed, enemies who have also had major influences on my development. Perhaps someday…

It’s opportune to step back and to think more broadly about the meaning of transformation and the processes by which it may occur. I’ll approach this in a somewhat schematic way—with the thought that I and others can fill in the blanks, or, if more appropriate, leave them blank!

Size of Transformation

As one who has spent much of every day reading, I have little hesitation in saying that becoming literate is a Big T Transformation. (Indeed, it has been for the species, as well as for billions of individuals.) Writing belongs in the same category.

But there are also smaller transformations in the realm of literacy. When I learned to dictate essays, rather than scrawling them by hand, that prosthesis certainly changed my life in significant ways—but I would not consider that to be a major transformation. By analogy, learning to drive a standard car constitutes a major transformation in the lives of most young people—but then, learning to drive a truck or even a bus is not nearly so great a leap. (Driverless cars may constitute a different kind of transformation!)

Length of Time for Transformative Effect

Take teaching as an example. I have always wanted to be a teacher and, indeed, began to teach others when I was very young (no doubt making a mess of things). A major transformation occurred when I was able to prepare a lesson plan and then, if something unexpected but promising occurred, to toss the lesson plan aside and, so to speak, go with the flow. This transformation required both mastery of materials and the flexibility to juggle priorities, while honoring the broader aims of the lesson or the course.

Speaking of teaching, I can point to a transformation that took much less time. For years, in any big course assigned to me, I simply lectured for most of an hour—as I like to quip, “easier for the teacher, easier for the students.” But about 20 years ago, when video became widely available, I decided to record all of my lectures; to ask students to view the videos prior to class (most did); and then to base the class on a discussion of the recorded lecture (and associated readings). This shift took time to effect—but I would say that after one year (which would be 10-20 recorded lectures), I became able to lead comfortably what we now call a “flipped classroom.”

Areas of Transformation

Those who know me (which includes faithful readers of this blog) realize that I focus, probably too much, on academic and intellectual matters… the likely fate of a professor, I suppose. But in conversations with Rakesh Khurana, the remarkable Dean of Harvard College, I’ve become convinced that higher education in the liberal arts and sciences should strive to bring about three changes (in students and others):

1) changes in how one conceptualizes the world of ideas and associated practices;

2) changes in how one relates to peers and other individuals; and

3) changes in how one thinks about oneself.

Like the trio of good worker, good citizen, and good person (see thegoodproject.org), one can achieve one “good” without nudging the others. It is certainly possible that a college (or a high school or even a summer camp) can affect one of these spheres without affecting the others. As examples, there are certainly students whose way of interacting with peers or teachers can be significantly affected in one way, without there being comparable shifts in the other spheres—and vice versa. Moreover, in any particular case, there may be good reasons not to change the way that one thinks about others, nor the ways in which one interacts with others. But in most cases, young people benefit if all three modes of being are changed in a significant way—hopefully, of course, in a productive direction.

Let me be concrete—and personal. I would like young people to attend schools which strive to bring about these transformations. Along those lines, I hope that most young people will have the opportunity to consider—and perhaps pursue—career paths other than the ones that they (and, most probably, their hovering parents) had thought were their “chosen” trajectories. I am not so pleased when such young people have embraced a passion or a mission, only to abandon that worthy goal in favor of pursuing the easy way or the most lucrative way. By the same token, one could arrive at college as a genuine democrat (small d!) and leave as a snob, a non-caring member of an elite—a less than happy transformation, by my values.

Deceptive Transformations and Non-Transformations

Though we have learned to give tests or award badges for many skills, we are far from having reliable ways of ascertaining whether transformations have taken place. All of us can think of experiences which seemed very important at the time—a first date, a first love, a first trophy, a first public embarrassment—but whose distinctiveness seems to have faded or disappeared over time. Call this a false positive.

By the same token, we can certainly think of experiences which, at the time, seemed casual or unimportant but whose significance looms larger as our lives unfold. For example, when, in the spring of 1967, one of my teachers suggested that I drive out to Brandeis University to meet Professor Nelson Goodman, I had no idea that this meeting would lead me to help launch an organization—Project Zero—that has been central in my life for a half century. Nor, when on short notice I decided to hitch a ride to Ann Arbor, Michigan, did I have any idea that a chance conversation with the driver—a young scholar named David McNeill—would both lead me to a new field of study (cognitive psychology) and introduce me to Judy Krieger, the fellow student who became my wife and the mother of three of my children. Call these the “hidden triggers” of life-changing transformations.

As we think about transformations, some other questions arise. We think of transformations as moving forward—but there are certainly regressions. As individuals become older and, in less happy cases, begin to dement, they can revert to thinking in a qualitatively more primitive way. There is also the question of whether societies can actually mandate transformations. Of course they can do so in a legal way (for example, “I now pronounce you husband and wife”) or via a dramatic experience (for example, circumcision as part of an initiation rite). But it is less clear that you yourself are changed by such a legalistic process—though the ways in which others treat you may well alter significantly (in which case we can say that the others have been at least slightly transformed!).

Disciplinary Transformations 

And as a scholar, I should mention the transformation of fields of knowledge—as described dramatically by the historian of science Thomas Kuhn in his famous book The Structure of Scientific Revolutions. Any field can be so transformed—think Darwin, think Einstein, think Picasso (or Virginia Woolf or Duke Ellington), think Stravinsky (or The Beatles). The fields may be transformed but not necessarily their long-time practitioners—as has often been quipped, “In the academy, change occurs one funeral at a time.”

Closing Challenge

If you accept my claim that transformations are important—indeed, that they constitute a central goal of education—you need to confront an unsettling situation. At present, we have few if any systematic ways of studying transformations. Models occur to me—models from biology (morphogenesis—my dictionary says “a transformation by magic or sorcery”), models from mathematics (chaos theory), or models from physics (change of state). But when it comes to transformations of human beings, we remain moored to far older methods—introspections, observations, and an occasional low-key measure. Indeed, in our study of higher education, we attempt to determine whether an individual has learned to think in a liberal arts way… certainly not a way that is embedded in the genome, nor, indeed, available in most cultures across the centuries. Another story, another time.


2018 RHSU Edu-Scholar Public Influence Rankings

January 23, 2018 - 9:08am

Howard Gardner ranks second among the most influential education thinkers in a 2018 ranking.

The annual RHSU Edu-Scholar Public Influence Rankings, released by Rick Hess in his Education Week column, list American university-based scholars who are shaping educational policy and practice. The rankings are based on factors such as Google Scholar and Amazon rankings, as well as press and web mentions.

The 2018 ranking of second place is a three-spot jump for Gardner, who has moved up from the #5 slot in 2017. Rounding out this year’s top five are Linda Darling-Hammond, Angela Duckworth, Gloria Ladson-Billings, and Diane Ravitch.

Click here to read the release in full, and as always, congratulations to all the scholars who made the list.


Transformative Experiences: Positive and Negative

January 22, 2018 - 11:57am

It is certainly reasonable for education to foster transformative experiences. I am skeptical of any educational program, whether preschool or adult classes, which denies that it seeks to bring about such experiences. To be sure, transformation cannot be guaranteed. Moreover, there can be false positives—one thinks that the trip to Europe with one’s friends was transformative, but it actually disappears from memory with little trace. And there can be false negatives: one takes a course in art history and gives it a low rating. But later in life, one calls regularly and gratefully on knowledge and skills from the course as one begins to collect or indeed to make drawings.

In my own case, during my years as a graduate student at Harvard, I had two experiences which, at least in retrospect, prove to have been quite transformative. One was decidedly negative, the other clearly positive.

The bad news first. As a beginning graduate student in developmental psychology, I was required to take a course in social psychology. The course was taught by two young instructors: Stanley Milgram, an expert in the experimental study and manipulation of human behavior, and Thomas Pettigrew, an expert on the nature and sources of prejudice. Each week, about 15 graduate students read key texts and then, seminar style, sat around a table and discussed the readings critically.

I took the course a few years after Milgram had carried out and published his path-breaking studies on “obedience to authority.” One week, we read and discussed a key article about the amazing finding—that, contrary to what the overwhelming majority of psychiatrists had predicted, most American subjects would deliver a powerful electric shock to another individual simply because a man wearing a laboratory jacket had so instructed them.

In the course of the discussion, I made a few critical remarks about the experiment. I wish that I could remember their substance; I do remember that they were quite reasonable comments, and I think that I presented them in a polite or at least a non-confrontational manner.

What then happened, fifty years after the event, still makes me shudder. For several minutes Milgram viciously attacked me, saying that I was trying to ruin him, destroy his career, undermine social psychological experiments, and blow up the field of social psychology. None of these comments were fair-minded—indeed, I would call them paranoid.

What happened thereafter was even worse. Not a single person in the room—neither my fellow students, nor the other faculty member—rose to my defense in any way. Rather, like a sudden, one-time explosion, the episode passed, and we went on to other comments on other readings.

Only after the session did my fellow classmates and the other professor come and speak to me and, in effect, apologize for Milgram’s unprompted explosion. I was too stunned to ask them why and, indeed, sought to suppress if not banish the experience altogether from my mind.

But in fact the opposite occurred. I learned invaluable lessons from the experience, ones that have stayed with me to this day. First, you can get attacked by an authority figure, and there is nothing that you can do about it at the time. Second, do not expect to be defended, even by individuals who know better. Third, continue to speak your mind, but strive to do so in as non-confrontational a manner as possible. And, if you cross over a line, apologize.

You may wonder what happened, in a small academic department, between Milgram and me. We never discussed the event. We entered into the more traditional student-faculty relationship, and he even read and commented on some of my papers. And later on, well after I graduated, we had a few professional contacts. Milgram died at a very young age—his loss to the field (as well as to his family and friends) was severe, and I have long since forgiven him, though I have never forgotten him.

On to the positive experience—and happily so. As a graduate student, I was a founding member of Harvard Project Zero, a research group in education which happily survives to this day, fifty years later. Project Zero was initially directed by Nelson Goodman, an eminent philosopher who was particularly interested in the nature of different kinds of symbols, including those in the arts. (With David Perkins, I co-directed Project Zero from 1972-2000.)

Goodman and I became quite close, and we even collaborated on projects—he as a seasoned philosopher, I as a budding psychologist. We were both interested in how different kinds of symbols are understood and processed—for example, to use the language of Susanne Langer, how human beings process written and spoken language as compared to how we process the visual arts or dance.

At that time, studies of the functions of the two halves of the brain were becoming known, due chiefly to experimental procedures whereby one can send stimuli only to the left or the right hemisphere. Both Goodman and I wondered whether the difference between language (and what is often called a discursive symbol) and depiction (what is often called a presentational symbol) might be respected (respectively!) by the left and right hemispheres of the brain—thus giving a “material” basis for a distinction important to philosophers and psychologists.

It so happened that Norman Geschwind, a brilliant young neurologist working in Boston, had been studying this issue—disorders of higher cortical functions with brain-damaged patients. Possibly with my help, Goodman invited Geschwind to speak to our small research group, then housed in a small building on Prescott Street in Cambridge.

As with the Milgram episode, I do not remember the details of Geschwind’s presentation—though I could certainly invent one convincingly because I heard him lecture at least 100 times in the succeeding fifteen years. But as an individual interested in artistry (then the principal focus of Project Zero), I was fascinated to learn that famous artists had been studied after they had sustained damage to their brains—and that what they were still able to do, as opposed to capacities that had been impaired, revealed important information about artistry. So, for example, a famous composer had sustained damage to certain areas of the left hemisphere; he could no longer speak, but he could still compose. In contrast, an eminent painter with damage to certain areas of the right hemisphere could still speak about his drawings, but the spatial configurations were greatly distorted.

Not only were these findings fascinating and counter-intuitive; I also came to realize that a study of the brain—and particularly of cortical pathology—might hold answers to questions about the nature and organization of artistry. These were questions that had haunted me but for which I had hitherto lacked both populations to study and methods by which to study them.

Not long thereafter, I made one of the major decisions of my then young scholarly life. Instead of continuing work in developmental psychology and looking for a teaching job, I would seek to do postdoctoral work in neuropsychology with Norman Geschwind. He was kind enough to agree. I did a three-year postdoctoral fellowship under his guidance, and we became colleagues and friends until his untimely death in 1984—when he, like Milgram, was still in the prime of life.

I have one other memory from Geschwind’s visit to Project Zero. He was scheduled to speak in the afternoon. I had to return home after the formal talk because I had recently become a father and wanted to be with my wife and daughter. But after dinner, I returned to Project Zero, and what had started out as a standard seminar was still proceeding until well into the night.

I cannot draw any deep conclusions from these two examples—one of very short duration but with long-time personal consequences, the other somewhat longer and with long-time scholarly consequences. But I can say to every student—of any age—that you should always be open to such life-changing experiences and seek to give them as positive a spin as possible. And I can say to every teacher, as Henry Adams memorably wrote, “A teacher affects eternity; he can never tell where his influence stops.”


Travel as Transformational

January 5, 2018 - 11:51am

In our large national study of higher education, we ask students—and others connected with colleges and universities—whether college can or should be a “transformational experience.” Recently, we have asked informants whether they themselves can name a transformational educational experience of their own, either within or beyond traditional schooling. Often, when asked about transformational experiences, informants mention foreign travel—typically, the proverbial “junior year abroad.”

I am fortunate to have had several transformational experiences in college. I am also fortunate to have had the opportunity to travel a good deal. Alas, as is the case with many of my colleagues, the travel is all too often via air, to a hotel, a speaking venue, possibly a quick walk down the main promenade, a well-regarded museum, or some other iconic public building, followed by a quick return to the airport and, if fortunate, an on-time flight back home. (As if to prove the point, this blog has been drafted and edited on plane trips.)

When I reflect on my own education, three travel experiences stand out as transformational.

A Year Abroad Following College

Upon my graduation from Harvard College, I was lucky enough to receive a year’s support to study in London and to travel elsewhere as well. Perhaps wisely, perhaps foolishly, I did only minimal study in a formal sense. Instead, often with friends who were also residing in London or nearby, I took advantage of the year to immerse myself in the artistic culture of England—attending many theatrical productions, especially at the recently launched National Theatre; visiting numerous museums and other cultural sites; travelling throughout the British Isles and Western Europe; and making a memorable train ride through Communist Eastern Europe to Moscow and Leningrad (now St. Petersburg again) in the heartland of what was then the Soviet Union. I also read widely, in a few European languages, and spent a great deal of time writing—among the products: a diary, various shorter articles, and a long and totally unpublishable novel. I also maintained a long-distance relationship which culminated in a June wedding in London and a summer honeymoon motoring throughout Europe.

Back in the day, Americans of means did their “European tour”—partly to see iconic sites, partly to acquire culture or become cultured. Indeed, my parents, who had fled Nazi Germany in the late 1930s, had taken my sister Marion and me for our first trip abroad in the summer just before I began college. But it was really during the post-graduate year—the other bookend of college, so to speak—that I acquired some intimacy with European culture, or, as members of my family now put it, received the requisite “culture credits.” I’m now grateful that I can continue to draw on these credits nearly every day—even more so in late than in mid-life.

The two other examples relate much more closely to my own vocation as a teacher and scholar:

A First Visit to Reggio Emilia in Northern Italy

In the late 1970s, I was a young researcher in developmental psychology with a particular interest in the arts (thanks, in part, to the year spent in England). At that time, I heard about extraordinary schools for young Italian children begun in the early 1960s in a small city, hitherto unknown to me, called Reggio Emilia (hereafter RE). I also began to contribute modestly to a publication about early childhood education, emanating from RE, with the neat name Zerosei (“Zero to Six”).

I was delighted when my wife Ellen Winner, an expert in artistic development, and I were invited to visit RE in the early 1980s. At that time, the inspiring founder of the RE educational experiment, Loris Malaguzzi, was very much alive. He and his close colleagues, who had been with him almost from the start, hosted Ellen and me for several memorable days. We spent a lot of time in the local schools, visiting and interacting with students, teachers, and parents—and perhaps surprisingly, the language barrier did not hinder communication. Indeed, it may even have had certain advantages, heightening information taken in via other senses. I became an unabashed admirer of these schools, with their foci on family participation, art and design, explorations of the urban environment, and hands-on activities. Malaguzzi’s evocative “hundred languages of children” dovetailed neatly with my newly developed theory of multiple intelligences.

In 1994, Malaguzzi died suddenly. I was concerned that his magnificent educational experiment might flounder. I approached one of the funders of our own research, the Atlantic Philanthropies (AP), and I asked whether that funding agency might provide support so that the surviving educators in RE would have the opportunity to reflect, regroup, and anticipate next stages and challenges. Due to legal restrictions, AP was not able to provide direct funding to RE. But AP encouraged us to create a joint project in which our research team at Harvard would have the option of directing some funds to RE (in effect, subcontracting).

Thence began a wonderful collaboration—still ongoing—between the educators in northern Italy and the educational researchers at Project Zero in Cambridge. Most of the exciting practices and stances came from the Italian educators. But our research group made substantive contributions in helping RE personnel to understand better what is distinctive about their practices; how best to describe them (in Italian as well as English); and how these insights and processes might be conveyed to and implemented by motivated educators throughout the world. In a well-received book called Making Learning Visible, we described practices like collaborative learning, documentation of student work, and the cultivation of expertise in pedagogy and in the arts.

A Surprise Visit to the Middle Kingdom

China was long characterized as the “Middle Kingdom.” In 1949, it proclaimed itself as the People’s Republic of China, but it was referred to in the West in the 1950s and 1960s as Communist China. Beginning in 1966, the leadership of China launched a large scale, quite violent, and (it is now universally agreed) highly destructive movement called The Cultural Revolution. Only with the death of Chairman Mao Zedong in 1976, and the ascension to power of Deng Xiaoping in 1978, did a calmer and more constructive China come into being.

In 1980, I knew almost nothing about China—it was far away both geographically and culturally. No one was more surprised than I when I was invited to join what was described as the first post-Cultural Revolution trip to China undertaken by Harvard University. (The circumstances were idiosyncratic, having to do with the lifelong friendship between my then Dean, Paul Ylvisaker, and the long-suffering president of a Chinese university, Xia Shuzhang.) Along with about ten colleagues, I spent two weeks in China, visiting several cities, learning about the culture as it had evolved over the millennia, being introduced to the nation’s recent turbulent history, and, most memorably, meeting with dozens of academics who had been horribly mistreated for a decade and were still shell-shocked.

The single trip was memorable and certainly raised my consciousness about China (such “consciousness raising” is certainly one of the main dividends of travel). But China would not have become transformative for me had it not been for another unforeseen set of events.

Two years later, I was invited to join a delegation of American arts educators travelling to China—once again described as the first such organized trip to China. As a minor figure in the delegation, I had planned to under-dress and to carry little luggage with me (Rule #1 of Gardner travel: “Travel light!”). But the day before our delegation was scheduled to depart from JFK Airport, the designated leader of the delegation became ill. As the only member of the delegation who had been in China before, I was asked to lead. Dressed in one crumpled suit along with one moth-eaten sweater, I led twelve far more distinguished American arts educators on a visit to China which included a major conference of arts educators from the two nations.

Those who know little about China might well assume that in a determinedly Communist country, there is little status hierarchy—whatever their position, the cadres look, dress, and act alike. And you—like me—would be completely wrong. Everywhere we went, it was easy to tell who was the leader, who were the other administrators, and who were relatively lower-status artists and academics. By the same token, wherever I went—and with whomever I dealt—I was THE professor, THE leader. I learned that where one sits, next to whom one sits, and who makes the toast and receives the obligatory gifts mattered in this context; as the designated head of the delegation, I remained the person-of-the-hour. (No doubt my threadbare dress was noted and remarked upon by many people.)

During the course of this tour, I learned a lot about education in the arts in China. I hope that our Chinese counterparts learned a lot as well about the situation in the United States and the West. But I was treated as a “big deal”—even though in no way was that true. (At the time I was not even a tenured professor, let alone a star of any sort or of any sector.)

That said, I am certain of one thing. What most impressed the members of the Chinese delegation was what happened when we Americans had to decide on something. In front of our dumbfounded Chinese colleagues, I would poll my American colleagues orally, weigh pros and cons, and often take a vote; we would eventually decide on a course, by vote or by consensus. Women counted as much as men; academics and artists and administrators had equal voice. Whatever China called itself or however it styled itself, we were giving them a lesson in democratic procedures.

Even with two trips under my belt, I still assumed that my Chinese adventures were a sideshow. But shortly thereafter, I became co-director of a large-scale study of arts education in China and the U.S. Thereafter, my colleagues and I made several trips to China—with my wife and our then infant son Benjamin joining us in 1987. Despite the fact that I never learned to speak or read Chinese, I became de facto an expert on the comparative study of education in the arts in our two countries. Indeed, after my fourth trip to China, I wrote a book about my experiences, with the title To Open Minds: Chinese Clues to the Dilemma of American Education.

The theme of the book—the dilemma alluded to in the subtitle—foregrounded the contrasting approaches to creativity in our two countries. Briefly, in the U.S. and other Western countries, including the Italy of Reggio Emilia, we value the creative explorations undertaken by young children. Only after relatively unstructured time in the first years of life do we introduce and value more systematic, disciplined study. In contrast, in China, the emphasis from the first falls heavily on disciplined learning—as the oft-repeated cliché has it, “One must walk before one can run.” Only much later, after discipline and skills are completely ensconced, are certain individuals encouraged—or at least permitted—to take more of an imaginative or creative leap.

I argued that either approach to the nurturing of creativity is valid. The risk for the Western approach is that one becomes so attracted to exploration that one never acquires the essential skills and discipline. The risk for the Chinese approach is that skills become so entrenched that one never takes a risk—or that by the time that one is prepared to branch out, it may be too late to accomplish anything that is truly innovative.

To Open Minds was written in a burst of energy in 1987 and published in 1989, just about the time of the horrific mass killings, mostly of young students, in Beijing’s Tiananmen Square. I was so upset by this brutality that I did not visit China for many years. Indeed, by the time of my first post-Tiananmen trip in 2004, it had become a totally different country. In some ways, China has become more like the United States. In other ways, especially in light of American state and federal educational policies, the United States has moved to a more classically Chinese orientation of drill-and-kill, unfolding in a pervasive omni-testing environment.

With the benefit of the passage of time and the shifting of norms, I see the two approaches to creativity as a continuing oscillation within and also between our two nations. And of course, any account of creativity has been complexified these days by the introduction of powerful digital tools and devices, essentially unknown at the time of my first trips to China and Reggio Emilia alike.

What do we gain from travel, particularly travel that we venture to characterize as transformational? I suggest at least three benefits.

  1. In the case of my first trip to Europe and my post-college year wandering about England and the Continent, I learned far more about the Western cultural heritage—the background of my own family, as well as the intellectual roots and scope of what I had studied in the humanities (and some sciences) in high school and college. Let’s call this depth.
  2. Reggio Emilia exposed me to ways of teaching and learning that I might have read about in the writings of progressive educators like John Dewey or might have seen in certain progressive schools that I visited in the American northeast. But I had never seen these ideas realized with the seriousness, vividness, and longevity that I saw every day, nearly every hour, on every visit to RE. Let’s call this realization of potential.
  3. With its long history, and its recent turbulence, China constituted the most alien travel experience I have had. The alienation (in the literal sense) was underscored because I do not read or speak Chinese and because I lived alone for a month in Xiamen (and that’s where, on long and lonely evenings, I began to write To Open Minds). But ever since that trip, whenever I consider any cultural issue, I have China in mind as an alternative, a radically different culture against which I can test my own assumptions and predilections. Let’s call this a comparison case.

And while I have not continued to visit and study China (life is short), I have had many excellent students from China; I continue to follow political and artistic events in China; and, as we plow through a new century, I am open to the possibility that this one may prove to be the “Chinese Century.” As a Westerner and democrat, I hope that China will be less Stalinist, and more Confucian.

Stepping back from travel, I believe that these benefits can come from other experiences—wide reading can certainly deepen one’s knowledge, flesh out hypotheses and intimations, and provide vivid comparisons. But, all the same, travel to faraway places probably yields these dividends most vividly—and it’s also the most fun.


Project Zero: Celebrating 50 Years

December 18, 2017 - 9:24am

Founded in 1967 as an investigation of arts education, Project Zero turned 50 years old in 2017 and now encompasses research in topics ranging from understanding and creativity to interdisciplinary learning and ethics.

On October 13, 2017, PZ held a public forum at the Harvard Graduate School of Education to celebrate this milestone, looking back on half a century of contributions to the field of education and beyond, and looking forward to the future as a center of innovative thinking.

Speakers at this forum included:

  • James Ryan, Dean of the Harvard Graduate School of Education
  • Drew Faust, President of Harvard University
  • Daniel Wilson, Director of Project Zero 
  • Howard Gardner, Principal Investigator and former Co-Director at Project Zero
  • David Perkins, Principal Investigator and former Co-Director at Project Zero
  • Shari Tishman, Principal Investigator and former Director at Project Zero
  • Steve Seidel, Principal Investigator and former Director at Project Zero

A video of this event is available below. Throughout the year, PZ will be commemorating its 50th anniversary through particular themes and events. Check their website for more information.


On Liberal Education: Views from Abroad

December 11, 2017 - 1:42pm

In the United States, when we contemplate the phrases “liberal education,” “liberal arts education,” or “education in the liberal arts and sciences,” we face two essentially opposed perspectives. On the one hand, the years beyond high school have long been seen as a period when young people can leave home, spend several years in a comfortable setting (perhaps near, perhaps distant from their families), mix with peers, enjoy an active social life, and perhaps learn things that are interesting and useful. We can call this the romantic view of higher education. More recently, however, the high expense of higher education, as well as the lesser likelihood of finding a good job right after graduation, has led to a less happy perspective. Perhaps college is not worth it; indeed, in recent polls, politically conservative respondents actually indicate that higher education is bad for the national interest—a pattern of response that would have been unthinkable a few decades ago. We can call this the disenchanted view.

In other parts of the world, higher education has had a quite different history. For one thing, it has been restricted to a small elite: students who have done well in secondary school and have passed a challenging completion exam. Second, it is usually pointedly vocational; one goes to university to become an engineer, a lawyer, or a physician. Third, and importantly, it has traditionally been free or of low cost, with the vast majority of students living at home and not “on campus.”

But recently, notably in Europe and Asia, an increasing number of institutions of higher learning—both government-sponsored and for-profit—have been launched. (It’s been estimated that there are about 200 self-styled liberal education institutions outside the United States.) Students who decide to enroll have a “blanker slate”—they do not arrive with a disposition to romanticize or castigate this form of education. Taking advantage of this situation, three scholars (Jakob Tonda Dirksen, Daniel Kontowski, and David Kretz) have asked students who are attending or have attended liberal arts institutions in Europe to answer the question, “What is liberal education, and what could it be?” The editors have published the responses of 17 students in the slim volume What is Liberal Education and What Could It Be? European Students on Their Liberal Arts Education.

In some ways, the respondents are reminiscent of students at select American liberal arts schools. By their own testimony, they tend to come from relatively affluent backgrounds—and yet also have to defend themselves against friends and family members who ask them why they are not pursuing a vocational career. As Leon says of his school, “Leuphana University is a quite homogeneous space where many of us students come from middle-class backgrounds, spent a year volunteering before entering university, and speak at least three languages.”

More so than most American students, those enrolled in European colleges that style themselves as “liberal arts” centers see themselves as risk takers. The kind of education that they have chosen to pursue is unfamiliar to many in their worlds—and so they feel that they have proclaimed themselves as different from their peers. In this sense, they are more like first-generation students in the United States: they have placed a distance between themselves and both family members and secondary school peers—who, if they had pursued tertiary education at all, would have been more likely to pursue a conventional degree in a single subject matter (like economics) or in a professional career.

These select students declare their uniqueness in what they say and how they say it. Consider this evidence:

  • Some lines from the poem “Artists and Scientists: The Uncommon View” by Nathalie (Leiden University College):

We are the artists one hasn’t seen before
Since we draw connections through actions, reactions, and dissatisfaction
We are the scientists of the shades of grey
When everyone’s leaving, we smile and stay to inspect it all

  • Teun (University College, Utrecht) on “The Headaches and Joys of an Open Curriculum”:

For the Renaissance women and men, an opportunity to avoid choosing
For the Tailors, a way to choose precisely what they wanted
For the Shoppers, a chance to try different courses and see what grabs them
For the Avoiders, a way to avoid courses or approaches that they feared or that did not interest them

  • Sem (University of Winchester) makes a drawing of the difference between the oblivious child and the one who has seen the light.

  • Lukas (Leuphana University, Lüneburg) mixes Simon and Garfunkel’s lyrics with his own text about liberal education:

Simon and Garfunkel: “Like a bridge over troubled water I will lay me down”

Lukas: Against this background, my study was a salad bowl of experiences. All over the liberal arts, the “multi-, inter-, & trans-disciplinary hype of un-education accompanied me smoothly, carried me safe”

As should be evident, the writers collated by the editors are a lively group, not reluctant to express what is on their minds in artistic form.

Some of their testimony is more pointed and critical:

  • Iesse (Leuphana University in Lüneburg) wonders whether, instead of being critical of capitalism, he and his peers are being prepared to join the neo-liberal class—becoming in effect the future “Davos” women and men: “For me liberal education rather corresponds to the latest developments in capital society—its ideals of capital accumulation, market-liberalism, comprehensive competitiveness, and the inherent exploitations of capitalism. Whether one likes or dislikes this will certainly vary with context. I guess that many people are, like me, torn.”
  • Jacob (European College of Liberal Arts, Berlin) questions a number of widely held assumptions. He wonders whether liberal education truly achieves critical thinking (let alone the more radical challenging of assumptions); rather than being “interdisciplinary,” he suggests that it is pre-disciplinary; and he faults it for failing to ask whether a competitive career is really the sole aim of life. As he concludes, “modern liberal education misses the same introspective qualities that it fails to develop in its students.”

I’ve introduced some of the more exotic responses to the questions put forth by the editors—allusive in their use of artistic tools and/or pointedly critical of the programs in which they have matriculated.

But this sample in isolation gives a distorted picture of the testimonies in this collection. Overall, I was impressed by the thoughtfulness of the responses—whether literal or metaphoric, whether critical or complimentary. Here are some powerful points made by the students:

  • Drawing on Plato, Clara (Leiden University College) sees the education of the soul as the ideal conception of liberal education. She delineates how one might educate the soul of the good man (the ideal soul, the just soul, the educated soul) as one which is wise and courageous and which is able to moderate its desires so that it may focus on the achievement of higher (immaterial) goods. The true liberal artisans would get along well with Plato; they are open-minded critical thinkers, who do not back away from challenges.
  • Nathan (Amsterdam University College) points out that “the liberal arts have shown me that it is this professional and academic humility—at a time when young people are pressured to have clearly articulated convictions, interests, and ambitions—that will allow me to dare to explore disciplines beyond my specialization.”
  • Sanne (University College Roosevelt, Middelburg) praises the features of campus life that American students too often take for granted: “The university college made sure rent was affordable; living together next to your fellow students only minutes from the university made working together easy to arrange, and there were always people around to have a cup of tea with late at night. UCR students really formed a strong close-knit community.”
  • Arthur (King’s College, London) asserts that “the liberal arts afforded me the opportunity to think as a history student, as a philosophy student, as a film student, and as a literature student at the same time. I found that studying multiple different subjects at the same time allowed me to pool knowledge and different methodologies from each discipline for the benefit of a project. In addition I could take different ways of thinking from different disciplines to approach a subject in a new way.”
  • Brita (King’s College, London) declares that “ultimately liberal arts and its inter-disciplinarity has for me involved an acknowledgement of life as simultaneously meaningless and bursting with meaning. I am no longer able to conceptualize or express my life and future life without including art, personal growth, relationships and emotions, as well as academic and professional progress… Liberal arts can teach you what is good, what is bad, what you value, and what does not matter to you. Ultimately, what more could you ask of an education?”

These voices from young persons studying liberal arts outside of the United States are illuminating in two ways:

  1. They cast fresh light on features of a form of education that has long been associated with the United States—both its prestigious private institutions and its capacious public institutions—that may have become less visible and less vivid to those who have long taken their assumptions for granted.
  2. At a time when liberal education is under severe attack in the United States (for some valid reasons, but mostly for reasons that are ill-informed), this informal European study suggests some features that may flourish in soils remote from our shores.

Note: For expositional purposes, some of the quotations above have been lightly edited. I trust that the intended meaning always comes through.


The Price of Passion… And Its Rewards

November 28, 2017 - 12:47pm

Visiting a campus that is not very selective (I’ll call it “Downtown University”) as part of our study of higher education, I spoke to a middle-aged painter (I’ll call him “Henry”) who teaches drawing and painting to undergraduates. A handful of his students hope to be able to make a living as artists of some sort. The vast majority take his courses because they would like a job in an arts-related business (perhaps fashion or communications or advertising); because they want to become art teachers in public schools; or out of curiosity, hobby, or—that bugaboo of contemporary non-vocational education—in order to meet “distribution requirements.”

In the course of our conversation, Henry mentioned that he himself had studied at a conservatory, where most of the students saw themselves as future artists or teachers of art; and that he had also taught at two highly selective Ivy League colleges. Curious to learn about his experiences at these more selective schools, I departed from our customary protocol and asked him to compare his “ivy-covered” students with those at Downtown University.

Henry thought for a while and then said, “Well, in many ways it is easier to teach drawing and painting at an Ivy School. The students are highly articulate, and since I like to give verbal feedback, it’s easy to explain to them what they might do differently and why and for what purpose. Also, I begin with highly technical lessons and, used to being obedient and to following rules, the students have fewer problems mastering technique than those who come from less privileged or more chaotic backgrounds.”

Henry paused again and added, “But there’s a big problem with many of the Ivy students. To get into these highly selective schools, students need to amass a portfolio of assets: high grades, high test scores, and a panoply of extra-curricular and service activities. I understand and respect that. But then when they arrive at college, they feel that they have to continue that pattern. They know no other! And so, come the weekend, they divide their time among homework, seeing friends, going to athletic events (if they are not actually on one of those numerous teams), and some other artistic or athletic or academic club. And before they know it, it’s late Sunday evening, if not early Monday morning.”

But to become an artist, Henry explained, “You need to have passion. Making art has to be the most important thing that you do. You need to be prepared to spend nights and all weekend on your painting or your mural or your triptych—in fact, you have to want to spend your time on that artistic endeavor. Of course, you pay a price, but it’s a price that you realize you have to pay, and you will want to continue to pay into the indefinite future.”

I did not want to put words into Henry’s mouth, but it seemed he was saying that, inadvertently, preparation for college may undermine the drive, passion, grit, and love that enable a young person to pursue certain careers, and especially a career in artistry, where no holds should be barred. If so, this is a steep price for an individual—or, indeed, for a culture—to pay, especially when the individual or the culture is unaware of this sacrifice.

To be sure, perhaps such individuals should be directed to artistic conservatories—to Juilliard or Curtis in music, to Rhode Island School of Design or Parsons School of Design in the visual arts. But then, two costs are incurred: the students themselves are deprived of a balanced education in the liberal arts and sciences, and their classmates lack contact with future major artists (no Yo-Yo Ma or Leonard Bernstein at Harvard, no Frank Stella at Princeton or Helen Frankenthaler at Bennington).

Perhaps there is a way to lessen the dilemma that Henry foregrounded. In current efforts to rethink college admission—for example, Turning the Tide—it’s been proposed that on their applications, students should list only 1-2 extra-curricular activities. Not only would this stricture slow down the trend toward quantity rather than quality, but it might reward those students who have a passion for the arts, or, indeed, for any hobby, discipline, or topic. And perhaps, in a similar vein, college students should be restricted to one major, rather than the two or, increasingly, three majors—plus two badges and a certificate to spare—that I’ve been hearing about of late.

Of course, as colleagues have reminded me, students (and their parents and advisers) are keen readers of changing signals. And so, if colleges decide to valorize those students who seem to have a passion, no doubt there will be efforts to “game the system.” At least some may attempt to “fake passion.” One has to hope that those who preside over college admissions will be able to discern which applicants are truly and passionately engaged in an activity and which simply purport to be passionate.

A bigger challenge is to change the way in which we as a society think about and admire children growing up. All of society recognizes that certain young persons will excel in an area—be it chess, spelling, baseball, or, to use examples from the arts, drawing, caricature, mime, or musical performance. But all too often these young persons are seen as anomalies, as freaks, as Gladwellian outliers—and so, as not particularly relevant to the rest of society or, to be specific, to child-rearing at home or classroom education at school.

If, instead, from a young age, children were encouraged to find an idea or activity that inspired them, that they enjoyed, that they wanted to get better at, and from which they gained “flow,” not only would we have more youths of passion and with passion, of purpose and with purpose; equally important, we would be bestowing on these young people a gift that they would have for the rest of their lives. When I was young, I enjoyed playing the piano, quite possibly because my mother sat alongside me on most days. Now, as someone well on in years, I remain passionate about music. Whenever possible, I listen to music. And when I am home, I play the piano every day—only for myself, to be sure—and there is no activity, whatever its resonance of Walter Mitty, from which I gain more satisfaction. I am grateful that this passion has endured, and I wish that everyone had an activity from which they could gain sustenance throughout their lives.

Note: I thank Wendy Fischman, Lloyd Thacker, and Rick Weissbourd for their helpful comments on this piece.


Podcast: Alanis Morissette Interviews Howard Gardner

November 27, 2017 - 12:48pm

Howard Gardner has been interviewed by Alanis Morissette for her podcast “Conversation with Alanis Morissette.”

Morissette, a Grammy Award-winning singer and entertainer, spoke with Gardner about his work, ranging from the theory of multiple intelligences to The Good Project to his latest co-authored book, The App Generation.

To listen to the piece in full, click here.



Wolfram and Gardner Discuss Computational Thinking

November 15, 2017 - 1:35pm

Howard Gardner and Stephen Wolfram shared the stage on November 6, 2017, at the Harvard Graduate School of Education to discuss Wolfram’s theories of computational thinking.

Stephen Wolfram is the creator of several innovative computational systems and the founder and CEO of Wolfram Research. According to his website, he is a pioneer in the area of computational thinking, a mode of formulating problems and solutions, and has been responsible for many discoveries, inventions, and innovations in science, technology, and business.

A full video of the conversation is available via YouTube below.


Contrasting Views of Human Behavior and Human Mind: An Epistemological Drama in Five Acts

November 14, 2017 - 12:06pm

Last month, I received an unexpected communication from Dr. Henry (Hank) Schlinger, a scholar whom I did not know. As he pointed out, this was a somewhat delayed communication, since it referred to an article of mine written quite some time ago. 

In his note to me, Dr. Schlinger argued that I had been mistaken in my assertion that his brand of psychology—called behaviorism—had been discredited and that another brand of psychology—called cognitive psychology—had taken its place. And he took issue with the way in which I had dramatized this process—I had dubbed the change “the cognitive revolution”—and personalized it, citing the work of linguist Noam Chomsky as a principal factor in challenging the behaviorist account of “verbal behavior” put forth by B.F. Skinner, a well-known psychologist.

After some reflection, I decided both to respond to Dr. Schlinger and to share the correspondence with Noam Chomsky, whom I have known for many years. (I also knew “Fred” Skinner, who was a neighbor, and who befriended my young son, Benjamin, with whom he walked around the neighborhood.) Chomsky responded and, with his permission, I quote his response here.

There ensued one more round of letters—and I’ve described the collection as “a suite of letters in five acts.” I reproduce the exchange here. I would like to think that it is an example of how scholars can disagree profoundly but do so in a respectful way. I thank both Hank Schlinger and Noam Chomsky for their cooperation.

Act I: An Opening Foray from Hank Schlinger

Dear Professor Gardner,

I know I’m a bit late to the game, but I just read your article “Green ideas sleeping furiously” (1995), and I have the following comments.

In your article, you said the following:

“Chomsky’s review of Verbal Behavior was a major event in the movement that was to topple behaviorism and itself become a new orthodoxy,” and “His own research, however, was quite specifically grounded in linguistics and took a decidedly unusual perspective on human language.”

As for Chomsky’s research, I’m curious what you’re referring to because I just looked at all the articles he lists on his CV and didn’t see one research article; that is, no experiments.

As to Chomsky’s review toppling behaviorism, I find that curious too because I’m a radical behaviorist and the last time I looked, I’m still here and teaching behavior analysis classes at my university. And there are thousands of other behavior analysts like me all over the world who belong to numerous professional organizations and who publish in journals devoted to the experimental, conceptual, and applied analysis of behavior.

As to the new orthodoxy, again I’m curious what that was or is. It certainly wasn’t Chomsky’s “theory” of 1957, because that “theory” is gone and his positions have changed with the intellectual wind, as one would expect of a non-experimental rationalist.

As I wrote in 2008 on the 50th anniversary of Skinner’s book:

It seems absurd to suggest that a book review could cause a paradigmatic revolution or wreak all the havoc that Chomsky’s review is said to have caused to Verbal Behavior or to behavioral psychology. To dismiss a natural science (the experimental analysis of behavior) and a theoretical account of an important subject matter that was 23 years in the writing by arguably the most eminent scientist in that discipline based on one book review is probably without precedent in the history of science. 

To sum up the logical argument against Chomsky’s “review” of Skinner’s book Verbal Behavior in a rather pithy statement, a neuroscientist at Florida State University once asked rhetorically, “What experiment did Chomsky do?”

And for all of Chomsky’s and your diatribes against Skinner, his book, and the science he helped to foster, Verbal Behavior has been selling better than ever and is now being used as the basis of language training programs all over the world for individuals with language delays and deficits.

Science doesn’t proceed by rational argument, but by experimentation. The experimental foundation of behavior analysis is without precedent in psychology and the principles derived therefrom not only parsimoniously explain a wide range of human behaviors—yes, including language—but they have been used successfully to ameliorate behavioral problems in populations ranging from people diagnosed with autism to business and industry. And what have Chomsky’s “theories” enabled us to do?

I would say that the proof is in the pudding. The fact that some psychologists have not been convinced says a lot about them, but nothing about the pudding.

In case you’re interested, I’ve attached a couple of articles that bear on the subject. You might also want to check out this relevant article:

Andresen, J. T. (1990). Skinner and Chomsky 30 years later. Or: The return of the repressed. Historiographia Linguistica, 17(1-2), 145–165.

Sincerely, 

Hank Schlinger

 ***

Act II: Howard Gardner Responds

Dear Dr. Schlinger,

I appreciate your taking the time to write to me.

Clearly, we have very different views of science. As I understand it, for you science is totally experimental and good science has to change the world, hopefully in a positive direction.

I have a much more capacious view of science—going back to its original etymology as “knowledge.” There are many ways to know the world and that includes many forms of science. Much of Einstein’s work was totally theoretical; Darwin’s work was primarily observational and conceptual; whole fields like astronomy (including cosmology), geology, and evolutionary biology do not and often cannot carry out experiments.

An even more fundamental difference: I basically accept Thomas Kuhn’s argument, in The Structure of Scientific Revolutions, that the big changes in science involve the adoption of fundamentally different questions and even fundamentally different views of the world. Physics in Aristotle’s time turns out to have been a wholly different enterprise than it was for Newton; Einstein, and then quantum mechanics, entailed paradigm shifts yet again. A similar evolution/revolution occurred in other fields, ranging from biology to geology.

In the field that we both know—psychology—there were what are often called mini-paradigm shifts from the associationism and structural-functionalism of the nineteenth century, to the behaviorism of the early decades of the 20th century, to the cognitive revolution (which I chronicled in The Mind’s New Science), and now—again—the emergence of the cognitive neurosciences, including psychology.

These paradigm shifts occur for many reasons—and the shifts are not all progressive—but they affect what promising younger scientists (whether theoretically or empirically oriented) consider to be questions/problems worth investigating and how they proceed to investigate them.

It’s in this spirit, and on the basis of this analysis, that I, and many others, claim that over the last several decades, the behaviorist approach was replaced by a cognitive approach to psychological (and related) issues and questions. Neither Skinner nor Chomsky caused this change; but they serve as convenient “stand-ins” for a process that involved many scientists doing many kinds of theoretical and empirical work in many societies.

Turning to your specific point, neither I nor, I believe, Chomsky dismisses the belief that one can affect behavior by rewards and punishment. Indeed, nearly everyone in the world believes this—including the proverbial grandmothers. From our perspective, the behaviorist approach has two crippling difficulties:

  1. When results come out differently than anticipated—for example, behavior changing for all time because of one positive or negative experience or behavior failing to change despite several experiences—then the analysis is simply reconfigured to account for the results. If a behavior changes, then it must have been reinforced. In that way, as with psychoanalysis, it becomes circular.
  2. While the experimental analysis of behavior may explain certain aspects of verbal behavior, it leaves out what many of us consider to be the most interesting and important set of questions: what is language, how does it differ from other human processes and behaviors, and how do we account for the universals of language, as well as the speed and similarity with which languages are acquired, despite their superficial differences?

None of this should be seen as an indication that your own work is anachronistic or as a critique of the work per se—but it is a claim that the world of science moves on and that what was on center stage in the U.S. (and the Soviet Union) seventy years ago is now decidedly a side show.

I may post parts of our exchange on my website. Please let me know if you prefer to be identified or not.

Sincerely,

Howard

*** 

Act III: Communication from Noam Chomsky

Thanks for letting me see the exchange. I have a different view of what an experiment is. Take standard elicitation of the judgments about grammatical status and interpretation, e.g., the example that apparently troubled him: “colorless green ideas….”, “revolutionary new ideas…”, “furiously sleep ideas green colorless,” etc. – the kind of judgments that litter my papers and all papers on linguistics. Each is an experiment, in fact, the kind of experiment familiar for centuries in perceptual psychology. By now they have also been replicated very carefully by controlled experiments, e.g. Jon Sprouse’s, which show that the judgments used as illustrations in standard texts have about 98% confirmation under carefully controlled experiment. Furthermore, there is experimental work of the kind that Schlinger would regard as experiment under his narrow view, in psycholinguistics and neurolinguistics, confirming many of the conclusions drawn in theoretical work based on the usual kinds of (highly reliable) elicitation experiments, e.g. work showing crucially differential brain activity in invented languages that do or do not conform to deep linguistic universals.

In contrast, work in the Skinnerian paradigm has yielded essentially nothing involving language or other domains related to human (or even animal) higher mental processes. Or for that matter anywhere apart from extremely narrow conditions.

I always felt that the death-knell for Skinnerian (and indeed most) behaviorism was Lashley’s serial order paper, apparently ignored (as far as I could determine then, or have since) until I brought it up in my review. And the last nail in the coffin should have been Breland-Breland on instinctual drift. And shortly after a mass of work by others trained within that tradition: Brewer, Dulaney, by now too many others to mention.

Noam

 ***

Act IV: Hank Schlinger’s Further Comments

Dear Howard,

Again, thank you for your reply. I appreciate the opportunity to have this exchange. Below are my comments.

1. Yes, we have different views of science, but you misread my view. I do not think science is or should be totally experimental, but I do believe that the natural sciences—and you, or other psychologists, may not want to include psychology in that exclusive club (see below)—have proceeded first by experimentation, the results of which led to laws and then theories, which were used to understand and make predictions about novel phenomena. And, while the goal of science is not necessarily to change the world, the natural sciences, through experimentation, have enabled us to cure and prevent diseases, for example, and to develop technologies that have dramatically changed our world, in many instances, for the better.

1a. Einstein’s theoretical work was based on the experimental foundation of physics. And while much of Darwin’s work was observational, he also conducted experiments, and his thinking was informed by experimental biology.

1b. It is true, as you say, that astronomers, geologists, and evolutionary biologists in some cases may not be able to conduct experiments, though sometimes they do—and must. But their theoretical work is predicated on the discovery of laws through experimentation with things here on earth that are observable, measurable, and manipulable. Otherwise, they are no better than philosophers.

2. I know you have written about the so-called cognitive revolution; I have your book. I say “so-called” because one psychologist’s cognitive revolution is another psychologist’s cognitive resurgence (Greenwood, 1999), myth (Leahey, 1992), or even rhetorical device (O’Donohue & Ferguson, 2003). As Leahey (1992) points out, “But we need not assume that Kuhn is good philosophy of science, and instead rescue psychology from the Procrustean bed of Kuhnianism. His various theses have been roundly criticized (Suppe, 1977), and the trend in history and philosophy of science today, excepting Cohen, is toward emphasizing continuity and development instead of revolution” (p. 316).

3. As for the claim by you and other cognitive revolution proponents that “the behaviorist approach was replaced by a cognitive approach to psychological (and related) issues and questions,” not all cognitive psychologists adhere to that position. The cognitive psychologist Roddy Roediger (2004) called it a “cartoon view of the history of psychology.” That, plus the frequent statements by cognitivists that Chomsky’s review of Skinner’s Verbal Behavior not only demolished the book but behaviorism as well, remind me of the real fake news spewed by Fox News, and now Trump, that is accepted as truth because it is repeated so often. It’s a bit like saying that humans evolved from apes, ignoring that apes still exist. Yes, the predominant view among psychologists is a cognitive one, but that has always been the case. And behavior analysis still exists. The idea that there ever was a behavioristic hegemony is absurd. Even some of the so-called behaviorists, such as Tolman and Hull, were barely distinguishable from today’s cognitive psychologists.

4. Calling the results of decades of systematic experimentation on operant learning—which, by the way, is promoted in almost every introductory psychology textbook I have ever seen as the only method to discover cause and effect—“rewards and punishment” is like calling the centuries of experimental work that led to the theory of gravity “apples falling from trees,” which “nearly everyone in the world believes… including the proverbial grandmothers.” That fails to appreciate or even understand what systematic experimentation contributes to our understanding and, yes, knowledge, of the world.

5. Your depiction of the “two crippling difficulties” of the behaviorist approach is simply a caricature created by cognitivists to justify the necessity of their (the cognitivists’) anachronistic, dualistic view of psychology. Without providing references, your first difficulty remains an unsupported assertion. And numerous behavior analysts, starting with Skinner himself, have dealt effectively with your second difficulty. The fact that cognitivists refuse to be convinced is the real issue.

6. Back to the beginning, we—and I mean you and I as stand-ins for cognitive and behavioral psychologists—do have different views of science. My science is importantly based on, but not limited to, experimentation. In other words, going back to Watson’s (1913) call to action, a natural science. Yours is apparently based mostly on reason and logic (a rationalist position, like Chomsky’s) and as Skinner once wrote (in a book apparently relegated to the historical trash heap by the cognitivist’s hero—Chomsky) about appealing to hypothetical cognitive constructs to explain language behavior, “There is obviously something suspicious in the ease with which we discover in a set of ideas precisely those properties needed to account for the behavior which expresses them. We evidently construct the ideas at will from the behavior to be explained. There is, of course, no real explanation” (p. 6). This, in a nutshell, is the weakness of the cognitive approach.

As an editor of a mainstream psychology journal recently said in reply to a colleague of mine, who had written in his submission that “if psychology is to be a natural science, then it has to study the actual behavior of individual organisms”: “Why should psychology aspire to become a natural science? Psychology is a social science.”

This seems to be a (or the) critical difference between our respective disciplines.

Yours truly,

Hank

P.S. Here are a couple of more recent (than Kuhn) approaches to the philosophy of science.

Hull, D. L. (1988). Science as a process. Chicago: University of Chicago Press.

Hull, D. L. (2001). Science and selection: Essays on biological evolution and the philosophy of science. New York: Cambridge University Press.

———————————

References

Greenwood, J. D. (1999). Understanding the cognitive revolution in psychology. Journal of the History of the Behavioral Sciences, 35, 1-22.

Leahey, T. H. (1992). Mythical revolutions in the history of American psychology. American Psychologist, 47, 308-318.

O’Donohue, W., & Ferguson, K. E. (2003). The structure of the cognitive revolution: An examination from the philosophy of science. The Behavior Analyst, 26, 85-110.

Roediger, H. L. (2004). What happened to behaviorism? APS Observer (https://www.psychologicalscience.org/observer/what-happened-to-behaviorism)

 ***

Act V: Howard’s End (for this play…)

Dear Hank,  

Thanks for continuing our conversation. Here are some quick responses:

1. We do have different views of science but, in your recent note, you put forth a more reasonable perspective. You say that the natural sciences proceed from experimentation. I’d rather contend that science can proceed from observations, from experiments, from interesting ideas, and even from grand theories. The “conversation” is continuous and can go in many directions.

2. On the nature of experiments, Noam Chomsky makes an important point. There is not a sharp line between observation, informal investigations, and more formal experiments. When it comes to judgments of grammaticality, there is no reason for large pools of subjects, control groups, or high-powered statistics. Almost all judgments are pretty clear—and the few ambiguous cases can be investigated more systematically, if there is reason to do so. And of course, modern linguistic theory has generated thousands of experiments, reported in dozens of journals.

The most difficult question you raise is whether there has indeed been a revolution, and whether Kuhn’s formulation helps us to understand what happened as cognitivism moved center stage (to continue my dramaturgical metaphor) and behaviorism became a side show. There is no way to ‘test’ these propositions. The discipline that will eventually determine whether my account of the last century, or your account of the last century, is more accurate is intellectual history or the history of science.

Indeed, we can each quote many contemporary scholars and observers who support ‘our’ respective positions, but in the end, the judgments that matter will be made by history.

3. That said, I don’t accept your contention that I am a rationalist and not an empiricist. The record does not support your contention (hundreds of empirical and experimental studies over almost five decades). In more recent years, I do think of my work as social science rather than natural science, but social science has empirical standards and measures as well, and I use them as rigorously as appropriate.

Best,

Howard

 ***

EPILOGUE:

With the fifth act completed, the curtain descends on our conversation… at least for now. But I’d be delighted if others who read the exchanges would join in.


Comment on “Three Cognitive Dimensions for Tracking Deep Learning Progress”

November 14, 2017 - 11:47am

The original metaphor for each of the several intelligences was that of a computer, or a computational device. I sought to convey that there exist different kinds of information in the world—information deliberately more abstract than a signal to a specific sensory organ—and that the human mind/brain has evolved to be able to assimilate and operate upon those different forms of information. To be more concrete, as humans we are able to operate upon linguistic information, spatial information, musical information, information about other persons, and so on—and these operations constitute the machinery of the several intelligences.

Even at the time that the theory was conceived—around 1980—I was at least dimly aware that there existed various kinds of computational processes and devices. And by the middle 1980s, I had become aware of a major fault-line within the cognitive sciences. On the one hand, there are those who (in the Herbert Simon or Marvin Minsky tradition) think of computers in terms of their operating upon strings of symbols—much like a sophisticated calculator or a translator. On the other hand, there are those who (in the David Rumelhart or James McClelland tradition) think of computers in terms of neural networks that change gradually as a result of repeated exposure to certain kinds of data presented in certain kinds of ways. The fiercest battleground featured rival accounts of how human beings all over the world master language so efficiently—but the battle eventually played out with respect to many kinds of information.
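
To make the fault-line concrete, here is a brief, purely illustrative Python sketch; the function names and the toy task are invented for this post and are not drawn from either camp’s actual systems. The first function embodies the symbol-manipulation view: an explicit, hand-written rule operates on a string of symbols. The second embodies the network view: behavior emerges gradually as numeric weights are nudged by repeated exposure to examples.

    # Symbol-manipulation view: an explicit rule operates on a string of symbols.
    def pluralize(noun):
        # The "knowledge" lives in the rule itself, written by hand.
        if noun.endswith(("s", "x", "ch", "sh")):
            return noun + "es"
        return noun + "s"

    # Network view: the "knowledge" lives in weights that change with exposure.
    def train_perceptron(examples, epochs=20, rate=0.1):
        # examples: a list of (feature_list, label) pairs, with labels 0 or 1.
        weights = [0.0] * len(examples[0][0])
        for _ in range(epochs):
            for features, label in examples:
                activation = sum(w * f for w, f in zip(weights, features))
                prediction = 1 if activation > 0 else 0
                # Nudge each weight slightly toward the correct answer.
                for i, f in enumerate(features):
                    weights[i] += rate * (label - prediction) * f
        return weights

The contrast lies not in the toy examples themselves but in where the intelligence resides: in the first case, in rules that an author can read and edit; in the second, in numbers that only gradually come to encode a pattern.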

Fast forward thirty years. Not only do we have computational devices that work at a speed and with amounts of information that were barely conceivable a few decades ago. We are also at the point where machines seem to have become so smart at so many different tasks—whether via symbol manipulation or parallel distributed processing or some other process or processes—that they resemble or even surpass the kinds of intelligence that, since Biblical times, we have comfortably restricted to human beings. Artificial intelligence has in many respects (or in many venues) become more intelligent than human intelligence. And to add to the spice, genetic manipulations and direct interventions on the brain hold promise—or threat—of altering human intelligence in ways that would have been inconceivable… except possibly to writers of science fiction.

In an essay “Three Cognitive Dimensions for Tracking Deep Learning Progress,” Carlos Perez describes the concept of AGI—self-aware sentient automation. He goes on to delineate three forms of artificial intelligence. The autonomous dimension reflects the adaptive intelligence found in biological organisms (akin to learning by neural networks). The computation dimension involves the decision-making capabilities that we find in computers as well as in humans (akin to symbol manipulation). And the social dimension involves the tools required for interacting with other agents (animate or mechanical)—here Perez specifically mentions language, conventions, and culture.

These three forms of artificial intelligence may well be distinct. But it is also possible that they confound function (what a system is trying to accomplish) and mechanism (how the system goes about accomplishing the task). For instance, computation involves decision-making—but decision-making can occur through neural networks, even when intuition suggests that it is occurring via the manipulation of symbols. By the same token, the autonomous dimension features adaptation, which does not necessarily involve neural networks. I may be missing something—but in any case, some clarification on the nature of these three forms, and how we determine which is at work (or in play), would be helpful.

Returning to the topic at hand, Perez suggests that these three dimensions map variously onto the multiple intelligences. On his delineation, spatial and logical intelligences align with the computational dimension; verbal and intrapersonal intelligences align with the social dimension; and, finally, the bodily-kinesthetic, naturalistic, rhythmic-musical, and interpersonal intelligences map onto the autonomous dimension.
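
For readers who want the proposed alignment at a glance, here is Perez’s mapping restated as a minimal Python lookup table; the dictionary and its labels are merely shorthand for the paragraph above, not anyone’s actual code or API.

    # Perez's proposed mapping of the multiple intelligences onto his
    # three dimensions, restated from the paragraph above.
    perez_mapping = {
        "computational": ["spatial", "logical"],
        "social": ["verbal", "intrapersonal"],
        "autonomous": ["bodily-kinesthetic", "naturalistic",
                       "rhythmic-musical", "interpersonal"],
    }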

I would not have done the mapping in the same way. For example, language and music seem to me to fall under the computational dimension. But I applaud the effort to conceive of the different forms of thinking that might be involved as one attempts to account for the range of capacities of human beings (and, increasingly, other intelligent entities) that must accomplish three tasks: carry out their own operations by the available means; evolve in light of biological and other physical forces; and interact flexibly with other agents in a cultural setting. I hope that other researchers will join this timely effort.

(I thank Jim Gray and David Perkins for their helpful comments on this piece.)


The Professions: Can They Help Us Invigorate Non-Professional Education?

November 13, 2017 - 1:42pm

For many years, within the United States, the phrases “higher education” and “the professions” have evoked different associations. When you go to a four-year college to pursue higher education, you are supposed to sample broadly across subject matters and disciplines; hone your speaking and writing abilities; and master critical (and perhaps creative) thinking.

In contrast, when you seek training for a profession or vocation, traditionally after you have graduated from a four-year college, you master those skills and make those networking connections that will help you to succeed as a physician, lawyer, professor, social worker, or architect. Of course, in many other countries, you typically choose a profession after completing secondary school; and it is assumed (rightly or wrongly) that you have already accrued those skills and understandings that many Americans pursue in college.

Indeed, this “division of labor” has occurred in my own thinking and my own blogging. Until this past spring, I wrote a bi-weekly blog called “The Professional Ethicist.” In mid-2017, I suspended that blog so as to launch a new one called “Life-Long Learning.” Ultimately, this new blog, which you are reading, will focus increasingly on higher education, and specifically higher education of the non-vocational variety—think Princeton, think Pomona.

Yet, nowadays, as I have detailed on both blogs, the educational and vocational landscapes are undergoing tremendous changes, at a very rapid pace. In the case of the professions, an ever-increasing amount of the routine work is now being executed by smart apps or programs or by trained paraprofessionals; accordingly, the survival of “professions as we have known them” is by no means assured. With respect to higher education, the costs are so great, and the anxieties about finding work post-college are so acute, that the very phrase “liberal arts” is considered toxic. The search for vocational justifications of curricula (and even of extra-curricular activities) is ubiquitous.

Amidst this rapidly shifting domain, an understanding of professions may prove helpful to both sectors. On my definition, professionals are individuals who have acquired expertise in a practice valued by a society; are able to make complex and often vexing judgments in a fair and disinterested way; and, as a consequence of their expertise and their ethical fiber, are offered and merit trust, status, and reasonable compensation.

Though professions were at one time cordoned off from the rest of society, that situation no longer obtains. We can argue about whether that shift constitutes a desirable state of affairs. But I’ve come to realize that ultimately we would like expertise and ethics from every member of society, from every citizen. The phrases, “She is acting like a professional” and “How professionally done!” should be applicable to any worker, whether a plumber or waiter, a minister, musician, or mogul. Indeed, I would not want to live in a society where the notion of “behaving professionally” had lost its meaning.

How does this formulation link to higher education? Under reasonable conditions, any young person who has succeeded in secondary school and is attending college should be on her way to disciplined thinking—that is, being able to analyze issues and think in the way of a scientist (e.g. a biologist, a chemist), a social scientist (e.g. an economist, a psychologist), a thinker in the humanities (e.g. a historian, a literary or artistic connoisseur). Mastering a particular discipline is not nearly as important as apprehending the ways in which various spheres of scholarship make sense of the world. College should be the time at which—and the place in which—students acquire ways of thinking that are elusive for most individuals until later adolescence. As possible candidates for these modes, I would suggest philosophical thinking (what are the enduring conundra that humans have struggled with, how have we done so, and how have we fared), interdisciplinary and synthetic thinking (how do we combine insights from, say, history and physics, in thinking about the concept of time); and an understanding of semiotics (what are the different symbol systems, ranging from written language to computer codes, by which individuals have captured and communicated their knowledge and how do those symbol systems work). In future writings, I’ll flesh out these requirements.

By the completion of such a secondary (high school) and tertiary (college) education, students should know what these forms of expertise are like and also know, if not have mastered, the sector(s) where they would like to be employed, at least for a while. They are on the way to achieving one leg of professionalism—call it relevant knowledge and skills.

Which leaves the second facet: being aware of vexing problems, having the motivation to tackle them, and being committed to doing so in a disinterested and ethical manner. One established way of gaining this expertise is to work as an intern or apprentice in an office or company that exemplifies and transmits an impressive professionalism. (Conversely, an internship or apprenticeship where professionalism is routinely flouted portends future failure in thoughtful tackling of tricky dilemmas.)

My “modest proposal” is that the college itself should serve as a model of professionalism. Teachers, administrators, and other adult members of the institution should hold themselves to high standards, expect those standards to be observed by others, and hold accountable members of the community who disregard or undermine the standards. And going beyond specific individuals, the rules, structures, practices, and—an important word—the norms of the college community should capture and embody the values of a profession. In this case, the profession happens to be education and/or scholarly research. But colleges are inhabited by a range of professionals (from lawyers to engineers to ministers to nurses and physicians); accordingly, the community should model the stances of professions in general, and, equally important, what it means to behave in a professional manner.

This last paragraph may sound idealistic, if not “holier than thou”; but I mean it, seriously and literally. I have observed enough workers in numerous institutions over many years to feel confident in saying that some embody professionalism, while others flout it, knowingly or unknowingly. Moreover, ill-chosen leadership can rapidly undermine the professionalism of an institution (and if you think I have in mind the current executive branch of the federal government, I won’t dissuade you), and it’s much more difficult to resurrect professionalism than to wreck it.

The very fragility of many of our professions and many of our colleges may harbor a rare opportunity. If we were to take (as a primary mission) crafting our institutions of higher education as laboratories for the professions, we might end up strengthening both. And, indeed, if we look at the earliest years of our colleges in the United States, the picture I’ve presented here would be quite familiar. It’s perhaps worth noting that in the 17th century, it was the ministry for which college students in the American colonies were being prepared.


The von Humboldt Brothers—As Scholars and Siblings

November 1, 2017 - 7:42am

In the previous blog, I introduced two remarkable scholars from the early 19th century, Wilhelm von Humboldt (1767-1835), linguist and architect of the Prussian educational system; and his younger brother, Alexander (1769-1859), naturalist, explorer, traveler, and masterful speaker and essayist. Here I explore whether their sibling status and birth order may have contributed to their distinctive styles and achievements.

That they were distinctive seems clear. Indeed, I found vivid testimony to this effect from Wilhelm.

“Since we were children, we have diverged like two opposite poles, although we have always loved one another and at times have confided in one another… My father and my mother, who had only two children, should have had these two who—although in general they both are similarly oriented toward the world of thought and the contemplative life—should nonetheless diverge more completely than could be expected if they had been born on different planets.”

What does the record show? Both von Humboldt brothers benefited from a first rate home education, delivered by an expert tutor. Both Alexander and Wilhelm were intellectually ambitious—they absorbed vast amounts of information readily and had quick access to that information. Wilhelm stayed close to home, metaphorically and literally; he did not leave the European continent and carried out work that could be (and perhaps was!) done in his study. Essentially (and confessedly) an introvert and an introspecter, he liked to pore over books, reading in and thinking about multiple languages. He pondered how the education that he and Alexander had had might be shared with a much larger population. Wilhelm was comfortable as a European and a German, a conventional householder, with a supportive wife and eight children, five of whom survived until adulthood. He was religious, often invoking God.

In sharp contrast, Alexander sought to get away from home, literally and metaphorically. He wanted to explore unfamiliar and sometimes unknown terrains, have exotic adventures, and place himself at risk. He had an enormous appetite for new information which he then sought to synthesize—in his own way, rather than as part of an organized discipline. He rarely wrote about human character—his own or that of others. And he relished the emerging art of popularizing—both in person and via his readable, multi-volume Cosmos. We might say that he wanted to circumvent the establishment and put forth his own syntheses, rather than address those in more established fields of knowledge in a more conventional manner. He rarely spoke or wrote about God and may well have been agnostic.

How might these contrasts relate to their status as siblings, or, to use the more common jargon of our times, the “birth order effects”? Thanks to the work of many researchers, notably Frank Sulloway, we know that first-borns grow up more quickly, orient more toward an adult world, tend to think more conventionally, and are more likely to become leaders—and especially bossy ones! In contrast, second (and later) borns are less oriented toward the adult world, more engaged with peers, and, as Sulloway amply documents, are “born to rebel.”

An ecological way of thinking about brothers is illuminating. We might think of the first-born, for whatever reason, as electing to excel in certain pursuits and to do so in certain ways. (Often first-borns emulate the life choices of the father, a tendency reinforced by the practice of primogeniture.) The oldest son thereby comes to occupy a certain role in physical and psychological space. The second son then has an option: either compete with the first-born (which can happen if the second-born is especially able or, perhaps, is for one or another reason favored by the elder generation); or, more likely, seek to make his mark in a distinctly different arena.

This description can help to explain the brothers von Humboldt. Wilhelm occupied the home space, studying materials that were local and dated back to classical times, and he was dedicated to his homeland (das Vaterland—the Fatherland!). It is perhaps not surprising that the younger sibling Alexander would travel far, focusing on new objects of study, unfamiliar specimens, and recently discovered processes (e.g., electricity) in the physical (rather than the psychological) environment, and that he would identify with French and universalist rather than Germanic and nationalistic tendencies. Indeed, his Francophilia and universalist tendencies caused the greatest strain in the relationship between the brothers. (In persons and in nations, we continue to behold this antinomy between nationalism and globalism today—200 years after the Humboldtian era.)

Which leaves the question of intellectual style. Both von Humboldts were systematizers—gathering lots of data and trying to organize those data in the most propitious way. In this respect they are both typical early 19th century scholars, maturing in an era before the advent of extensive experimental science, testable models, and verifiable or disprovable frameworks and hypotheses.

Stepping onto a speculative limb, I’d propose the following. Within his own scholarship, Wilhelm looked across a range of languages (and eras) and sought to uncover the most fundamental properties of human language. He sought unity in diversity. With respect to his multiple studies across many geographies, Alexander also sought unity—he believed in and illustrated the connections across species (plant, animal, human), ecologies (volcanoes, oceans, the sky), and processes (light, electricity, weather, temperature). In this way, they were brothers under the pen as well as under their skin… and, I would add, very Germanic and very humanistic in thought.

Yet they differed revealingly in the arenas in which they had the greatest impact. Paradoxically, the introverted Wilhelm impacted educational systems at the time, an impact that extends to this day, and of course he had a large family. Alexander had no children and did not create any institutions; he impacted those scholars who were inspired by his discoveries and those numerous ordinary persons—often young—who were captivated by his talks and his writings. Each brother used his talents and energies as he saw fit, but of course neither could control how the world would respond to his endeavors.

And neither man transcended his era. Yet as we look today at linguistics—à la Chomsky—or at popular synthesizers—like Carl Sagan or E. O. Wilson—we acknowledge the debt they owe to two distinctive Germans, now largely forgotten by name to be sure, whose intellectual achievements cast a long shadow.

Reference: Humanist without portfolio: An anthology of the writings of Wilhelm von Humboldt. Detroit: Wayne State University Press, pp. 386, 408.


Reflections on Education in the Arts

October 18, 2017 - 12:56pm

In honor of the Museum of Modern Art’s 80th year of museum education programming, Howard Gardner has written a contribution on Medium in which he reflects on what his fifty years of experience have taught him about education in the arts.

Click here to read the piece in full via Medium, in which Gardner discusses the role of the arts and the way that arts education, particularly in museums, has changed over the decades.


The Remarkable von Humboldt Brothers

October 17, 2017 - 12:05pm

Throughout much of the 19th century, the von Humboldt brothers were among the most famous persons in the world—celebrities before that term was bandied about. Alexander (1769-1859) was known for his pioneering five-year trip as a naturalist to Latin America and for his synthesizing writings, chief among them his multi-volume Cosmos. His older brother Wilhelm (1767-1835) was renowned for devising the university system in Prussia, organizing the pre-university educational system throughout German-speaking territory, and carrying out highly original studies in linguistics.

Today, the two are justifiably celebrated through the naming (in their joint honor) of the Humboldt University in Berlin; and yet, except among specialists—and perhaps in Germany—the substance of their work is little known. In this blog, I summarize and salute the quality of their work. In a companion blog, I explore how two brothers—a mere two years apart in age—could each make important contributions to scholarship and yet do so in highly distinctive ways.

First Wilhelm, the older brother. Always more studious than Alexander, Wilhelm made connections early on with leading Germanic thinkers, chief among them Goethe and Schiller. He mastered their works, along with those of the most important scholar of the era, Immanuel Kant. Though a leading thinker and copious writer (albeit one who published little during his lifetime), Wilhelm did not find his “career path” until the Prussian government asked him, in effect, to organize the educational system.

In a remarkably brief period of time, Wilhelm laid out a bold and highly original vision: universities should combine teaching and research; students should read and think widely, across the disciplinary terrain; and there should be few formal barriers to organizing one’s own studies (in this, the Prussian system differed from the more structured Napoleonic system). Moreover, there should be a systematic sequence, beginning with elementary schools, which, in their emphasis on play and discovery, were very progressive; these fed into more selective secondary schools (Gymnasiums) with libraries and scientific laboratories; and then ultimately, for the select few, the privilege of higher education in the company of superb scholars. Today, as the terrain of higher education is widely contested across the world, it is noteworthy that the preeminent German philosopher Jürgen Habermas draws explicitly on the Humboldtian conception of the university.

Wilhelm was always fascinated by languages. At a young age he learned Latin, Greek, and the major European languages; later he dabbled in Sanskrit, Basque, and Kawi (a literary language of Java) and wrote about the Bhagavad Gita. Clearly he was one of the leading masters of language in his era. More important for our time, Wilhelm also thought deeply about the nature of language: its components, its structure, its role in thought, and its defining role in human nature. No less a contemporary authority than Noam Chomsky pays tribute to Wilhelm’s pioneering thinking about language, delineating ways in which his own path-breaking work has taken the Humboldtian enterprise as a point of departure.

Alexander received the same education as Wilhelm—personal tutoring in the major disciplines and topics of the time and similar university experiences. And though not as overtly precocious as Wilhelm, Alexander was certainly a gifted student. But while Wilhelm saw himself as a European and devoted many years to choreographing Prussian education, Alexander was a prototypical adventurer. He was eager to leave Europe and in fact embarked on a five-year expedition to the Americas, chiefly Latin America. (He stopped off in Washington, D.C., to pay a visit to President Thomas Jefferson, Secretary of State James Madison, and Secretary of the Treasury Albert Gallatin.) In the course of his adventurous expedition, Alexander assembled a huge collection of the flora and fauna of that part of the world; as just one example, he brought back 60,000 plant specimens, representing 6,000 species, of which 2,000 were new to European scholars.

After his American journey was completed, Alexander devoted many years (and all of his inherited fortune) to the publication of his findings. These encyclopedic naturalist works had enormous influence on Charles Darwin, who followed much the same travelling path as Alexander a few decades later; and von Humboldt’s influence extended to the United States, through writers like Thoreau and Whitman.

Not only did Alexander crave worldwide travel adventures (he wanted desperately to travel to China and India but never made it); he did not much like the German-speaking societies and instead lived for years in Paris. (He met Napoleon, who, perhaps not wanting to be upstaged by this celebrity of natural science, brushed him off.) Only after brother Wilhelm had finished his work in education did Alexander return to Berlin, where he survived his brother by more than two decades. Alexander became a masterful lecturer, entertaining and edifying huge audiences with his ideas and his travel adventures. And then, as his magnum opus, he wrote the five-volume Cosmos, an unprecedented synthesis of knowledge from across the scientific disciplines as well as an effort to portray how these parts fit into an overarching tapestry. No wonder that master writer-lecturer Carl Sagan chose the same single-word title as Alexander—in effect, he was producing the second Cosmos.

Perusing the German and English literature on the brothers von Humboldt, I found much on their lives together—their rather unhappy childhoods, their living together and apart, their extensive correspondence over four decades, their different interests and temperaments, and—in the more recent writings, though not in those from the 19th century—speculations that Alexander was gay and that Wilhelm was preoccupied with what Goethe called “Das Ewig Weibliche—the eternal feminine.”

But I have found little about the resemblances and differences between their intellectual agendas, and almost nothing about how these relationships might reflect the fact that they were siblings—older and younger brothers. A full study would require considerable time and considerable expertise—neither of which I have available! But in my next blog, I will offer some speculations.


The Arts Have Much More to Teach Us

October 3, 2017 - 7:07am

In the sea of claims about arts education, especially the idea that study of the arts leads to higher academic achievement overall, what is true, and what calls for more research?

In a piece for Education Week, Howard Gardner and Ellen Winner reflect on this question, calling for further examination of artistic thinking and practice.

In a meta-analysis conducted in the late 1990s, Winner and her colleague Lois Hetland looked at studies of how high versus low exposure to arts education affected math and reading scores, finding no correlation despite many assumptions to the contrary. However, Winner and Hetland suggested a renewed focus on how arts education is actually taught, using a “habits of mind” framework (reported in Studio Thinking). More research of this type is needed in order to understand the potential benefits of arts education.

Click here to read the full article.

