
Professor Don Norman

University of California, San Diego

Gee. Hero? I'm honored.

My career began many decades ago with an interest in the unseeable: in this case, electrons. When I was in high school I discovered the field of electronics, where the workings of electronic devices were invisible, so there was no easy way to figure out how things worked. So, off I went to MIT to major in Electrical Engineering. The age of digital computers had just started. The early computers used vacuum tubes and an amazing array of clever hacks for memory. Undergraduates had difficulty understanding the difference between analog and digital computers – and most of us preferred the analog ones: I did my undergraduate thesis using an analog computer. While an undergraduate, I had a summer job at Remington Rand in Minneapolis, where they were building one of the very first transistorized computers. I was part of the team developing tests for the reliability of the circuits.

After MIT I went to the University of Pennsylvania to study computers (because that is where the first American computers had been built: ENIAC and EDVAC). But, nope, they weren’t doing that any more. There was no computer science at that time. “Wait,” they said. “We are starting a department and in a few years you could be the first student.” I couldn’t wait. Meanwhile, a new chair of psychology was appointed, whose PhD was in physics. I thought if I couldn’t study human-made computers, maybe I could study the brain. When I talked with him, he said: “You don’t know anything about psychology – that’s wonderful. We want you.”

My seven lessons for life are below:

Lesson One: Plan all you like, but when an unexpected alternative course arises that sounds exciting, take it.

As an electrical engineer with both a BS and MS degree, I was asked to install buzzers between the faculty offices and the secretary’s office. My clumsy mechanical skills somehow managed to succeed. After that I was asked to write a computer program to test my advisor’s theories. So I learned the machine language of the University’s only computer: the million-dollar Remington Rand Univac I, with 1,000 words of memory (implemented by the clever hack of sending pulses through a mercury-filled tube). ($1 million in 1960 dollars is about $9 million today.) It was programmed in machine language – no, not assembly code; interpreters and compilers hadn’t been invented yet: we typed in the alphanumeric symbols directly. To run the program, I would sign out the Univac for an hour (only one person could use it at a time). The computer was huge: it filled the room. You could walk inside it – and you had to, in order to replace one of the 5,000 vacuum tubes. (The only people today who know what a vacuum tube is are old folks like me, historians, and fanatic purists who insist on vacuum tubes for their amplifiers when listening to music.) All cellphones today (even the dumb ones) are far more powerful than Univac. (If you want to know what my first computer was truly like, read the Wikipedia article: https://en.wikipedia.org/wiki/UNIVAC_I.)

I have two degrees in E.E. and a PhD in what was then called Mathematical Psychology. In my first job at Harvard, I started to introduce human information processing approaches to psychologists. After Harvard I joined the newly formed University of California, San Diego, where I taught an undergraduate course on information processing psychology that led to a textbook – dramatically different from the dull, tedious books then being used. (Information Processing Psychology morphed into Cognitive Psychology, and then later into Cognitive Science.) I became chair of psychology, but felt stifled by the limited approaches. So I started the first Cognitive Science department so I could have one place where we combined psychology, computer science, AI, neuroscience, linguistics, and anthropology.

My research topics were human memory and attention, and I wrote books about them: first “Memory and Attention,” and then an introductory textbook (with my colleague Peter Lindsay) called “Human Information Processing.” I used the information processing skills learned as an engineer to explain human behavior. Today that is taken for granted. In those days, it was considered scandalous. Psychologists shunned me, but the AI community invited me to join them. So I gave talks at MIT and CMU, became a consultant to the Xerox Palo Alto Research Center (PARC), and published numerous papers in the journals for AI and Cognitive Psychology.

Lesson Two: When you switch fields, apply what you learned in your earlier fields to the new one.

You will be surprised to discover how often concepts thought elementary in the old field are considered brilliant insights in the new one.

My life changed when I was called in by the Nuclear Regulatory Commission to study the large nuclear power station accident (Three Mile Island). My committee was asked to explain why the operators made so many errors, but we concluded that the errors were caused by the poor design of the controls. Aha, I said: with my deep background in technology and people, I should work on the interface between people and technology. That decision was life-changing. It took me out of the laboratory to the real world, and as you will shortly see, out of the University into the world of business.

Lesson Three: Repeat Lesson One as many times as possible.

Plan all you like, but when an unexpected alternative course arises that sounds exciting, take it.

So I switched my research studies at UCSD to aviation safety: how to design controls so that pilots would understand the state of their airplanes, the goal being to enhance safety. We worked with the Office of Naval Research and NASA (the aviation group at NASA Ames in Silicon Valley). That’s where a lot of our basic principles for design came from. I also started teaching a course I called “Cognitive Engineering” (a term now applied to a field of research and even a conference and journal).

When the first home computers came out they were incomprehensible to ordinary folks. Our laboratory was run by a succession of different computers, all made by the Digital Equipment Corporation (a wonderful company that failed to transition itself when the first home computers came out: Digital thought they were playthings). Well, that was true at first, but soon these playthings were more powerful than the machines Digital, Silicon Graphics, SUN, and other companies were producing. We wrote a book about all that and called it “User Centered System Design” (UCSD), where some of the fundamental principles of Human-Computer Interaction were developed, although we didn’t use that term then.

As a consultant to PARC I was using the laser printers and graphical user interfaces that PARC had just invented. This was clearly the future of human-computer interaction. The people at PARC in Palo Alto knew they were changing the world as they invented the laser printer, the Ethernet, the PostScript page description language for laser printers, the Alto computer, and a text editor for the Alto called Bravo. The Xerox management in Rochester, NY thought it was all a waste of money, so the creative engineers at PARC left and started companies: PostScript led to the company Adobe, the Ethernet led to 3Com, Bravo became Microsoft Word, and the Alto led to the Apple Lisa (which failed) and the Macintosh (which is still with us today).

When the Macintosh first came out we invited the people who developed it to visit our research group at UCSD. To my great surprise, some of them had been my students!

In the mid-1980s I spent a sabbatical at Cambridge, England and couldn’t work the doors, light switches, or water faucets (called “taps” in Britain). I realized that the principles we had been studying for aviation safety, nuclear power plants, and computers applied to everything. Hence the book: “The Design of Everyday Things.”

Lesson Four: Well, this is simply Lesson One again.

When I returned to UCSD, I started the Department of Cognitive Science and became its first chair. My research group split into two parts: one looked at new processing models, the other continued the work started with the UCSD book. The first group ended up inventing neural networks (which we called “connectionist computing” at first), and one of our postdocs, Geoff Hinton, went on to expand the networks to create what is today called “deep learning.” (When I asked Geoff recently what theoretical breakthrough had led to deep learning, he said, “Nothing. The breakthrough was that computers had become 1,000 times more powerful.”)

Lesson Five: That’s Lesson Two again.

Silicon Valley was using the stuff from my research laboratory. I was publishing articles and books: they were changing the world. So I retired from UCSD (the first of my five retirements) and joined Apple, first as an “Apple Fellow” and then as Vice President of Advanced Technology. I learned about the constraints of cost, supply-chain management, sales, and customer expectations in developing products: having a great idea was the easiest part. While I was there, several hand-picked colleagues and I invented the term “User Experience” (UX) and started to define how UX fit into the design world. And I discovered real designers.

Eventually, I left Apple, started my own company (the Nielsen Norman Group), and then followed a client to Chicago for an educational startup that failed. So I joined Northwestern University in their computer science department. (See – I finally became a real Computer Scientist.)

I’ve lived a fascinating life. And when asked for my advice to students, my answer is simple: do not do what others expect you to do. Do what you want to do. Do what excites you. If you have a job doing things you are not excited by, you won’t do a good job. If you do what you enjoy, you will accomplish much. And the feeling of enjoyment, of excitement, and of contributing to the world’s greater good is worth far more than any salary. If other people think that what you are doing is wrong, nonsensical, and worthless, this means you are on to something. All creative, world-changing ideas start off being thought crazy. After all, if you are breaking new ground, other people will find it uncomfortable. But if you truly believe and are excited, have confidence. In my case, what I did was find those people around the world in the scientific community who agreed with me. So we formed teams of people, pushing forward these crazy, new ideas. Today, they are so well accepted that they are called “old-fashioned.” If your ideas are so well accepted that they are considered old-fashioned, that is success. After all, the whole point of science and engineering is to continually be learning, advancing our understanding, moving forward. Expect successful ideas to be incorporated into thought patterns, but modified and changed along the way. Old-fashioned? Yup, I will be the first to say so.

I make it a point to learn a new topic every year. And when people all pile into an area I have developed, I leave. I let the new people fill in the details while I work in some other, new, unexplored area. Some people keep working in the same area their entire lives. Both approaches are valid: you have to pick the strategy that fits you.

Along the way, I wrote a large number of books (over 20 today, translated into 20 languages). I was one of the founders of the field of Human-Computer Interaction (perversely abbreviated as CHI by the computer scientists), and to my great surprise started being called a designer.

Why design? Because design requires working across all the disciplines of the university, tackling a wide variety of problems: medical devices, lung cancer, pandemics, world hunger, education, and water. This makes it the most exciting field in the world. Design is a way of thinking, of using evidence, and of creating physical devices and procedures and strategies that change the world. Is it Computer Science? Cognitive Science? Engineering? In my opinion, it needs all these disciplines – and more. It also needs to understand the economy, business, political structures, and the environment. Who cares what it is called if it has the capability to change the world in a positive way?

Pretty good for a computer scientist, eh? I’ve lived long enough to have accumulated a number of honors: honorary degrees, membership in all sorts of societies, including the National Academy of Engineering (in their computer science division), and now, most wonderful of all: hero. But what I am most proud of is the legacy I leave behind: my students and the hundreds of thousands of people I have inspired through my books.

My goal for computer scientists is that we should be designing systems that help empower people, that put people first and technology second, and that use technology to make life easier and more pleasant, and to dramatically enhance people’s capabilities. It’s not about the technology: it’s about the people.

Lesson Six: The way to an enjoyable and productive life is to work on topics that excite you.

Never stop being curious. Always keep learning, whether it is a new programming language, or quantum computing, or an entirely new field of technology, or human and societal behavior. Curiosity and interest keep people active and satisfied. You need a purpose in life and a goal, and then the feeling of accomplishment. Learn multiple areas of knowledge and combine the knowledge into a unique set of skills. In other words, continually obey all six lessons, including this one. Lesson Six is simple, even while being recursive: never stop applying Lesson Six.

Lesson Seven.

Lesson Six is difficult to follow. You will encounter many setbacks. At times you will wonder why you persist. Lesson Seven is that if you want a simple, easy life, you will only accomplish simple and easy things. All important problems are extremely difficult. After all, if they weren’t difficult, they would have already been solved.