INTELLIGENCE AUGMENTATION

Pattie Maes [1.20.98]

Introduction
By John Brockman

Pattie Maes came to the United States nine years ago to work with Marvin Minsky and Rodney Brooks at the MIT Artificial Intelligence Lab. She had received her Ph.D. in AI at the University of Brussels and was attracted by Marvin and Rod's alternative views on artificial intelligence and artificial life. After two years at the AI Lab she moved to MIT's Media Lab, whose interdisciplinary character suited her varied interests. She used to be happy just doing research, but she came to realize that she would only be satisfied if her work made it into the real world. That is one of the reasons she started Firefly, a start-up that sells software allowing Web sites to personalize their interactions with visitors. Barnes and Noble is one Firefly customer.

Barnes and Noble uses the tools to recognize individual visitors so that it can provide them with personalized service. Maes points out that years ago you could go to the corner bookstore and the owner would know you, would know what you had bought before and what your interests were, and could give you personalized service: "Hey, did you know there's a new book by Isabel Allende?" They knew you had an interest in certain kinds of writers. She believes that Web sites will have to provide that same kind of highly personalized, high-quality service, because this will be one of the ways in which they can distinguish themselves.

"If Barnes and Noble on the Web knows me," she says, "knows what I'm interested in, can help me find the stuff that I'm interested in, can tell me hey, you've bought the other Isabelle Allende books, did you know that she has a new book out, or did you know that there's another author very similar to Marquez, etc., that just published a new book? If I get that kind of personalized service, even though in this case it's implemented by an algorithm rather than by the corner bookstore owner, I'm going to be much more loyal and go to Barnes and Noble because they give me this personalized treatment, they recognize me, they greet me, they remember what I've bought before.

-JB


JB: Let's start with Firefly and work backwards. What are you doing now?

MAES: I started out doing artificial intelligence, basically trying to study intelligence and intelligent behavior by synthesizing intelligent machines. I realized that what I've been doing in the last seven years could better be referred to as intelligence augmentation, so it's IA as opposed to AI. I'm not trying to understand intelligence and build a stand-alone intelligent machine that is as intelligent as a human and that hopefully teaches us something about how intelligence in humans may work; instead I'm building integrated forms of person and machine, even of multiple people and multiple machines, with the result that one individual can be super-intelligent. It's about making people more intelligent and allowing them to deal with more stuff: more problems, more tasks, more information. Rather than copying ourselves, I'm building machines that help us do all of that.

We've been using many techniques in this nascent field of intelligence augmentation. One technique relies on software entities, which we've termed software agents, that are typically long-lived, continuously running, and fairly simple. They can help you keep track of a certain task, or help you by automating or semi-automating it. It's as if you were extending or expanding your brain: software entities out there that are almost part of you, looking out for your interests and helping you deal with multiple tasks.

One of the limitations of our minds as they are now is that we're good at doing one thing at a time and keeping track of one thing, but the nature of our everyday concerns is very different: we have to deal with multiple problems, do a lot of multi-tasking, and continuously keep track of all these different things. It's something we're not good at, something we're not made for. But we can extend or augment ourselves by having software entities that are extensions of ourselves and act on our behalf. These can be very simple things, like a monitor that watches whether there's still milk in your fridge and reminds you when the milk is running out, even at the right time, when you are driving past the supermarket or when you're in it.

JB: Smart refrigerator?

MAES: I have a two-year-old who drinks a lot of milk, so it's one of many concerns I have to deal with. Instead of having to check every morning and every evening and try to remember how much milk there is in the fridge, why can't the fridge do this for me? That's a simple example, but I would be very happy if that problem were solved and I didn't have to worry about it. Our lives are full of silly little problems like that, but they matter a lot; we have to deal with them. To a large extent these little extensions of ourselves could deal with such concerns, or could help us deal with them. The digital equivalent is a monitor that keeps track of your stocks and tells you if a certain stock you own has risen or fallen more than usual. So I have this vision where we could extend or augment our minds with software entities that help us, that know what we care about and what problems we are trying to solve. Each one may deal with a very specific small problem; they don't even have to be intelligent, and they're trivial to build, but I think they would make a huge difference in the efficiency of our lives.
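
To make that concrete, here is a minimal sketch, in Python, of what such a long-lived monitor agent might look like. Everything in it is an illustrative assumption: read_milk_level and notify are hypothetical stand-ins for a fridge sensor and a reminder channel, not real APIs.

```python
import time

LOW_THRESHOLD_LITERS = 0.5  # hypothetical cutoff for "running out"

def read_milk_level() -> float:
    # Hypothetical stand-in for a fridge sensor; stubbed with a fixed value.
    return 0.3

def notify(message: str) -> None:
    # Hypothetical stand-in for a reminder channel (phone, car display, ...).
    print(message)

def milk_monitor(poll_seconds: float = 3600) -> None:
    # Long-lived and continuously running: the agent checks on its own
    # schedule and only speaks up when its one small concern needs attention.
    while True:
        if read_milk_level() < LOW_THRESHOLD_LITERS:
            notify("Milk is running low; pick some up on the way home.")
        time.sleep(poll_seconds)
```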

JB: What technology is involved?

MAES: The example I gave of the milk is a very simple one. In other situations it may be something more complicated. For example, we build agents that monitor your reading habits, say your news-reading habits. They may pick up a certain regularity in what you read: maybe you own a lot of Apple stock and you want to make sure you see every article about Apple, and you read every article about Apple in the newspaper. That could easily be automated. We built agents that monitor what you read, keep track of all of that and memorize it, discover patterns (for example, that you read every article about Apple Computer), and then offer to automate that for you and highlight those articles in the newspaper so that you definitely won't miss them.

JB: How do you read the newspapers?

MAES: This only works for electronic newspapers. We have prototypes of these kinds of systems. You can just monitor what a person reads and try to infer from that what they're interested in. You can also ask them for more explicit feedback: "Did you like this article? Do you want more of these kinds of articles in the future? Was this something that you didn't like even though you read it?" That's going a step further; it involves the use of machine-learning techniques. A lot of that kind of work is finding its way into products like Microsoft's Office 97, where there is a simple form of an assistant that monitors what you're doing, knows about typical patterns of activities that you engage in, and gives you contextualized help, based on data it has about the sequences of actions people engage in when involved in a particular task.
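
A toy sketch of that kind of pattern detection might look like the following. This is not the Media Lab's actual system; the topic tags, the ReadingAgent class, and the five-reads threshold are all invented for illustration.

```python
from collections import Counter

class ReadingAgent:
    def __init__(self, min_reads: int = 5):
        self.topic_counts: Counter = Counter()
        self.min_reads = min_reads  # reads needed before a habit counts as a pattern

    def observe(self, article_topics: list[str]) -> None:
        # Called each time the user reads an article; the agent just watches.
        self.topic_counts.update(article_topics)

    def should_highlight(self, article_topics: list[str]) -> bool:
        # Offer to highlight articles on topics the user reads consistently.
        return any(self.topic_counts[t] >= self.min_reads for t in article_topics)

agent = ReadingAgent()
for _ in range(6):                    # the user keeps reading Apple stories
    agent.observe(["Apple Computer", "technology"])
print(agent.should_highlight(["Apple Computer", "earnings"]))  # True
```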

JB: Are we talking about anthropomorphic assistants?

MAES: Agents are not necessarily personified. An agent won't necessarily look like a cute character on your screen; there's no reason or need for that. For example, the Firefly work doesn't have any kind of personification, and still there is a system there that helps you in a personalized way. The two are orthogonal issues, and it's up to the designer of an agent to decide whether personification is appropriate. In most of the work that we've done at our lab, the agents are not at all personified. In any event, I'll be happy to anticipate all of Jaron Lanier's comments and talk about them in this interview.

JB: Debate with an empty chair?

MAES: Jaron and I already had our debate on HotWired's "brain tennis" pages. We've already gone through this whole thing. But let me continue. I talked initially about very simple agents that are completely programmed, like the milk monitor in your fridge, and about agents that can do some machine learning, picking up patterns and offering to automate them. A third approach, the one we have pursued most actively, is one where the agents are not necessarily smart at all themselves; instead, they allow you to benefit from the intelligence of other people who have already solved the problem you are currently dealing with.

Take, for example, the buying of a car. I went through the process of trying to figure out what car to buy just a couple of months ago. I didn't know what methods to use; I didn't have a clue about what car I wanted to buy. I did a lot of research on the Web. The first problem was finding which Web sites were worth going to for new-car information. Then I had to learn what the different Web sites could offer me: which ones have good reviews, which ones give you information about the actual cost and prices of new cars.

I dealt with this problem for a month or two, and I accumulated a lot of information about new cars and about car information on the Web: where you should go first and second and third, etc. Once I decided what car to buy, I also gathered information about the exact cost of that car to the dealer, the lowest price I could possibly get away with, and how to get the best deal. I learned about the different dealerships in and around Boston for the particular Saab I wanted to buy. It's such a shame that someone else cannot benefit from all that work I did. Wouldn't it have been great if something had recorded some of my experience and some of what I learned, so that that knowledge would be available to another person going through exactly the same problem? We are a social species, and we can benefit from each other's intelligence and each other's problem solving. Very few of the problems, tasks, or activities we deal with are completely original, in the sense that nobody else has ever faced the same problem before. Almost every problem we deal with is something that hundreds or sometimes even millions of other people have dealt with before.

It would benefit society if we could more easily reuse the knowledge and experience other people have gained about problems. This is one of the ways we have built software agents: they don't necessarily have any information themselves about what you do when you want to buy a car, but they monitor and collect a lot of information about people solving problems, and then give you some of that condensed information, and especially the patterns they find among many people solving that problem.

JB: But, Pattie, you're not average, and besides, everyone wants to learn from geniuses.

MAES: Often people want to learn from people like themselves, or from people that they want to be like, or people they want to look like, and to some extent this is what these agents do. They figure out which people you should be drawing from, and they also gather some of the information, and allow you to benefit from the problem solving that other people have done when they dealt with a similar problem.

JB: How do you know who the other people are?

MAES: You may not necessarily need to know, although in some of our algorithms you can specify the kind of people you want.

JB: Take the example of the Zagat restaurant guides. You assume that the people rating the restaurants are hip foodies who know at least as much as you do about restaurants. If you didn't have that orientation, you wouldn't trust the book and you wouldn't buy and use it.

MAES: This is a great example, because in choosing a restaurant you don't want a recommendation based on the average of what other people do; you want to get recommendations from people like you. The collaborative filtering software which we developed at MIT and which Firefly commercializes does exactly that. We have a restaurant site on the Web called Boston Eats. You can go there and tell the system which restaurants in Boston you like, whether you have very expensive taste or cost isn't an issue for you, and so on. If students go there and tell the system what restaurants they're interested in, they may say they prefer cheap restaurants because they're on a budget. You may not want to get recommendations based on their opinions, and they may not be very interested in your recommendations for pricier restaurants. In short, you want to get recommendations from people who have tastes similar to yours.

I'm from Europe; I love eating a lot of food that some Americans would think is disgusting, like brains, kidneys, etc. I love getting recommendations for the kind of restaurants where I can find liver and rabbit and so on, so I want to get recommendations from other people whose tastes are similar to mine. This is exactly what these software agents do. If you tell the system which restaurants you like and dislike, and everybody else does the same thing, then the system can identify your taste-mates: the people whose taste is most similar to yours, who like and dislike the same kinds of restaurants. The system will look only at their opinions about restaurants you don't know in order to give you recommendations, so you get recommendations from the people who like the same kinds of restaurants you like. The agents themselves don't know anything about restaurants; what they do know, what they can analyze, is which people are similar to which other people, which people you should listen to, which people should give you recommendations, and which other people's problem solving and opinions you should rely on.
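
The core of this idea, user-based collaborative filtering, is easy to sketch. The following is a minimal illustration, not Firefly's actual algorithm: the ratings, the restaurant names, and the choice of cosine similarity are all assumptions made for the example.

```python
import math

def cosine_sim(a: dict, b: dict) -> float:
    # Similarity computed over the restaurants two users have both rated.
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[r] * b[r] for r in common)
    na = math.sqrt(sum(a[r] ** 2 for r in common))
    nb = math.sqrt(sum(b[r] ** 2 for r in common))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user: str, ratings: dict, top_n: int = 3) -> list:
    # Rank restaurants the user hasn't rated, weighting each other rater's
    # opinions by how similar their tastes are (their "taste-mates").
    mine = ratings[user]
    scores: dict = {}
    weights: dict = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine_sim(mine, theirs)
        if sim <= 0.0:
            continue
        for restaurant, rating in theirs.items():
            if restaurant not in mine:
                scores[restaurant] = scores.get(restaurant, 0.0) + sim * rating
                weights[restaurant] = weights.get(restaurant, 0.0) + sim
    ranked = sorted(scores, key=lambda r: scores[r] / weights[r], reverse=True)
    return ranked[:top_n]

ratings = {
    "pattie":  {"Chez Henri": 5, "Offal House": 5, "Burger Barn": 1},
    "student": {"Burger Barn": 5, "Pizza Joint": 4, "Chez Henri": 2},
    "foodie":  {"Chez Henri": 4, "Offal House": 5, "Le Lapin": 5},
}
print(recommend("pattie", ratings))  # ['Le Lapin', 'Pizza Joint']
```

Note that the agent never inspects any property of the restaurants themselves; as Maes says, it only reasons about which people resemble which other people.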

JB: Could it be that one of the reasons you seem to attract a lot of flak is that by calling these algorithms "agents" they become personified? Some critics would claim that these so-called agents make us less human, not more human.

MAES: The reason we use the word agent is to emphasize that you are delegating something. Whenever you delegate something, there is a certain risk that whoever or whatever you delegate to may not do the task exactly the way you would have done it. In that sense I think it is appropriate to use the word agent, so that people keep in mind that there is an entity acting on your behalf, doing things on your behalf, and that things may not get done exactly the way you would do them yourself. It's an agent in the sense that a travel agent is an agent, or a real estate agent is an agent: they work for you, they know something about your preferences and interests with respect to the problem, but still, if you had enough time to do the job yourself, you might do a better job of it. Another reason we use the word agent is that we are changing the traditional notion of software. So far people have mostly used the metaphor of a tool to describe and build software.

Usually we think of software as passive: you have to turn it on and instruct it to do something, and then it will do it. The agents approach to software is different in that agents are continuously running. You don't want to have to start up the agent in your fridge that's watching the milk; it should continuously be taking care of that particular task for you. It's long-lived software that is continuously running, very different from the kind of software we've been using in the past, and that's another reason a different term is appropriate: you have a different kind of relationship with this software. To rephrase McLuhan, every extension of ourselves is an amputation, and that's very much true of every technology we invent that automates some things on our behalf. Take the pocket calculator. People today don't want to live without it anymore, and most of us either have one on our computer that we can use or one on our desk. We've delegated the task of doing calculations to the pocket calculator, and this extension of ourselves has also meant an amputation, because 20 or 30 years ago people used to be able to do all these very complicated calculations in their heads. They had all these tricks, these heuristics, that we don't even know anymore; we've lost them as a population. The pocket calculator, a technology from which we derive benefit, is also an amputation that has made us less good at performing that particular function.

It's important to keep in mind that if agents automate a certain task for you, you may not be very good at that task anymore, because you rely on the agent that is automating it, and after a while you no longer know how to do it yourself. I don't care if I can no longer do a lot of tasks. I don't need to be good at checking whether there is milk in the fridge; I'm perfectly happy delegating that to some technology and being less good at it. For other tasks, in other domains, you want to be careful, either because you may not want to lose the ability to perform the task yourself, or because the agent is not good enough for you to delegate the whole task with satisfactory results.

Examples include finding new music or deciding what news to read in a newspaper. You don't want an agent telling you exactly which articles you should be reading; you always want to do some browsing yourself, because otherwise you risk getting tunnel vision. The agent gives you more of the kind of articles that you like, and over time you get a narrower and narrower selection of news, until in the end you read just one type of story. This can be dangerous. It's important in that case to design the whole system so that the agent is used only as assistive technology. This is a problem that can be solved in the design of the interface with the agent.

To illustrate this point, we have built a software agent that makes a personalized newspaper for a user in two different ways. In the first, the agent takes all the news articles, picks the ones it thinks you'll be interested in given what it knows about what you've been reading in the past, and gives you a personalized selection. This approach involves the risk that you are never going to do any browsing yourself; you're just going to read what the agent has presented to you, and then you get that tunnel-vision problem. However, you can build the same agent so that it just highlights, in the newspaper, the articles it thinks you will be interested in. It doesn't change the newspaper. You still see all the articles in the newspaper and keep that element of serendipity, but the agent assists you because it has highlighted these articles. Even if an article is in very small print or on a page somewhere deep in the newspaper, you won't miss it, because you can just go through the paper, see where all the highlights are, and make sure you've definitely read the stuff you have a long-term interest in. It's important for us as designers of agents that we keep these issues in mind, and that we come up with interfaces, like the highlighting interface, that avoid the problem in which the extension becomes an amputation.
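
The contrast between the two designs can be sketched in a few lines. The interest_score function below is a toy stand-in for whatever the learning agent actually computes; the point is only that the highlighting design keeps every article visible.

```python
def interest_score(article: dict, profile: set) -> float:
    # Toy scoring: the fraction of an article's topics in the user's profile.
    topics = article["topics"]
    return len(set(topics) & profile) / len(topics) if topics else 0.0

def filtered_paper(articles: list, profile: set, threshold: float = 0.5) -> list:
    # Design 1: the agent selects for you. Efficient, but risks tunnel
    # vision, since low-scoring articles disappear entirely.
    return [a for a in articles if interest_score(a, profile) >= threshold]

def highlighted_paper(articles: list, profile: set, threshold: float = 0.5) -> list:
    # Design 2: the agent only annotates. Every article stays visible, so
    # serendipity is preserved, but nothing you care about gets missed.
    return [dict(a, highlighted=interest_score(a, profile) >= threshold)
            for a in articles]
```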

JB: On a Utopian level you're talking about designing agents at MIT Media Lab as pure research. That's a very different situation than the real world where these technologies are going to be implemented by corporations that are interested in selling things. Unless you configure your own agent, or you retain a service where the agent is strictly controlled by you, the computer user is going to be served by an agent of a search engine company or a bookselling company or a catalog company, etc. Once such corporations find out you want x or y or z, you begin to lose what remains of your privacy, then you lose your identity by becoming an economic cipher to the new band of info-transactional conglomerates. You're going to be targeted for direct mail, unsolicited email, and pretty soon they're selling your home phone number, your blood type, your medical profile. And if any governmental agency wants your information, you better believe they will be able to get it without the benefit of a subpoena.

Another problem is that one of the characteristics that agents seem to have in common and which again distinguishes them from other software is that they know about you. But would they know if you're a jerk and would they tell you if they did?

MAES: That would be nice actually. But let me first answer your concerns. What is important is that the information these agents have about you is yours and only yours; it is completely up to you to decide and specify who gets access to what information. This is one of the reasons I talk of these agents as extensions of yourself. There's a lot of very personal information in your head, and you control what information you release to whom.

JB: Don't you think that it's naive to think that's going to happen?

MAES: No, it isn't, because in fact a standard has already been proposed to the W3 Consortium: OPS, the Open Profiling Standard, proposed by Firefly and Netscape, with Microsoft joining afterwards. That standard specifies how personal information about a user can be stored in the browser. It also specifies that this information is the property of the user, so that every time a site asks for certain information, the user decides which of it can be given to that site. This is similar to cookies, but cookies done in the right way. The difference between cookies and the OPS standard is that you will know what the site is asking for. One site is asking for my taste in x or y; another site is asking for my age, or my this or my that. And you can say, no, I'm not going to give it to you; or, if you think you can get value out of giving it to that site, you will give it.
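
A rough sketch of that user-controlled profile model might look like this. It is not the OPS specification itself; the class, the field names, and the site name are invented for illustration. The idea it tries to capture is the one Maes describes: the data stays with the user, and each site receives only what the user has explicitly approved.

```python
class UserProfile:
    # The profile data stays on the user's side; sites receive only the
    # fields the user has explicitly approved for them.
    def __init__(self, data: dict):
        self._data = data
        self._grants: dict = {}  # site -> set of approved field names

    def grant(self, site: str, fields: set) -> None:
        # The user decides, per site, which fields may be released.
        self._grants.setdefault(site, set()).update(fields)

    def request(self, site: str, fields: set) -> dict:
        # A site asks for fields; only previously approved ones come back.
        allowed = self._grants.get(site, set())
        return {f: self._data[f] for f in fields & allowed if f in self._data}

profile = UserProfile({"age": 36, "music_taste": "jazz", "phone": "555-0100"})
profile.grant("books.example.com", {"music_taste"})
print(profile.request("books.example.com", {"music_taste", "phone"}))
# {'music_taste': 'jazz'} -- the phone number was never approved
```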

Privacy is one of the primary problems we have to get right if we want agent technology to be widely adopted, and so we've been very concerned with this. Even though, as you say, I'm a researcher and it's not necessarily my concern, we've been very involved in this issue: making sure the information belongs to the user, that none of it is accessible to anyone except the user, and that when it gets passed along it is encrypted so it can't be stolen from you. It's always made clear who is asking for it and for what, and you have to give approval to anyone who asks for it.