Affective Computing

Affective Computing is computing that relates to, arises from, or deliberately influences emotion and other affective phenomena. When the field was originally named and defined, affect and emotion were treated essentially synonymously, and there is still no widely agreed-upon definition of either "emotion" or "affect" in the literature. There is, however, general acceptance that affect is the broader term: states such as "interest" are affects whether or not they are emotions, while states such as "anger" are both an emotion and an affect. Regardless of how the precise definitions are resolved, research in Affective Computing addresses the broader sense of the two terms, and contributes to Artificial Intelligence, Pattern Recognition, Machine Learning, Human-Computer Interaction, Social Robotics, Autonomous Agents, Cognitive and Affective Sciences, Affective Neuroscience, Neuroeconomics, Health-behavior Change, and many other areas where technology is used to detect, recognize, measure, model, simulate, communicate, elicit, handle, or otherwise understand and directly influence emotion and other affective phenomena. Here is one illustration of its possible future use:

Imagine your robot entering the kitchen as you prepare breakfast for guests. The robot looks happy to see you and greets you with a cheery "Good morning." You mumble something it does not understand. It notices your face, vocal tone, smoke above the stove, and your slamming of a pot into the sink, and infers that you do not appear to be having a good morning. Immediately, it adjusts its internal state to "subdued," which has the effect of lowering its vocal pitch and amplitude settings, eliminating cheery behavioral displays, and suppressing unnecessary conversation. Suppose you exclaim unprintable curses that are out of character for you, yank your hand from the hot stove, rush to run your fingers under cold water, and mutter "... ruined the sauce." While the robot's speech recognition may not have high confidence that it accurately recognized all of your words, its assessment of your affect and actions indicates a high probability that you are upset and possibly hurt. At this moment it might turn its head with a look of concern, search its empathetic phrases and select, "Burn-Ouch ... Are you OK?" and wait to see if you are, before selecting the semantically closest helpful response, "Shall I list sauce recipes?" As it goes about its work helping you, it watches for signs of your affective state changing - positive or negative. It may modify an internal model of you, representing what is likely to elicit displays such as pleasure or displeasure from you, and it may later try to generalize this model to other people, and to development of common sense about people's affective states. It looks for things it does that are associated with improvements in the positive nature of your state, as well as things that might frustrate or annoy you. As it finds these, it also updates its own internal learning system, indicating which of its behaviors you prefer, so it becomes more effective in how it works with you.

In the scenario above, the robot shows the ability to interpret emotion of a person from both direct outward observation as well as from its reasoning about the situation. It is also capable of adjusting its response both subtly, such as suppressing conversation and subduing its tone of voice, and through overt actions, such as choosing and displaying an empathetic expression. Additionally, affective information is used by the robot to build better models of how to behave successfully in social-emotional situations, and affect-like mechanisms operating within the robot can adjust its own machine learning algorithms, biasing what is and is not remembered or learned, as well as updating default biases used for making decisions and choosing actions.

While it is easiest to see how to apply affective computing to anthropomorphic technologies such as interface agents or robots, it is also the case that people interact in affective and social ways with many non-anthropomorphic technologies. Thus, affective computing is not limited to technologies with faces, voices, or human characteristics. For example, affective computing can be used in the design process to improve products by helping recognize customer experiences that cause stress, frustration, or other undesirable states, e.g., designing new product labels that elicit fewer brow furrows or frowns, or designing vehicle interfaces that improve operator safety. Affective technology can also detect positive customer experiences, and help companies get a better sense of what brings about these desirable results. There are also opportunities to use affective technology to assist people with special needs, such as people on the autism spectrum who sometimes face extraordinary challenges recognizing, predicting, and responding to emotional information. Computational models of affect and emotion and the methods for their development also have application to basic research: the technological models and methods contribute to understanding these complex phenomena, and thus to basic theory development. In many cases, the goal of affective technology is to provide systematic measures or tools that help people gain understanding about affect and emotion and better achieve their communication or expression, not to make technology itself more emotionally intelligent.

Figure 1: Overview of several affective computing research areas


Research scope and challenges

Research in Affective Computing can be organized into five areas, although these are not mutually exclusive:

(1) Technology for sending affective information - displaying or otherwise portraying an affective state, or mediating the expression or communication of emotion, e.g., modulating graphics, pitch, font, word choice, or physical movements of the technology to indicate an affective quality.

(2) Technology for receiving and interpreting affective information - sensing, recognizing, modeling, and predicting emotional and affective states, e.g., "the customer looks and sounds angry now, and if I say this, it might make the customer angrier."

(3) Methods for computers to respond intelligently and respectfully to perceived affective information, e.g., the strategy of acting subdued around a person who is upset.

(4) Computational mechanisms that synthesize or simulate internal emotions, as if the machine had its own emotions, e.g., implementing regulatory and biasing functions of emotion, such as shifting strategy when in a state akin to "frustrated," or choosing a breadth-first search when in a state akin to a "good mood."

(5) Social, ethical, and philosophical issues related to the development and deployment of affective computing technologies, e.g., how should emotional data be treated compared, say, with medical or personal-preference data, and when (if ever) can one accurately say that a technology has feelings?

Research in Affective Computing combines engineering and computer science with psychology, cognitive science, neuroscience, sociology, linguistics, education, medicine, psychophysiology, value-centered design, ethics, philosophy, and more, in order to enable advances in basic understanding of affect and its role in biological agents and across a broad range of human experience. For example, the dominant psychological theories in the affective sciences do not currently include states such as frustration or boredom, states that are commonly observed when people interact with technology. Existing affective taxonomies also do not include states such as "subdued"; however, a study in which drivers heard either subdued or enthused computer speech showed that choosing subdued speech led to significantly better driver performance and safety when the drivers had previously been exposed to upsetting stimuli. With the construction of interactive systems that can record and process affective aspects of interaction, researchers in Affective Computing can measure and monitor complex dynamic states of activity that occur naturally in everyday experience, including state duration and frequency, and psychophysiological, behavioral, and social-communicative characteristics. Thus, affective technology provides information that can contribute to improved theories in these other sciences, to more fully explain and predict measurable human experience. Revised theories can then be implemented in technology and tested under natural usage, and this process iterated until theory achieves descriptive and predictive accuracy.

Affective Computing faces many challenges, likely to require decades of effort, before researchers might succeed in building comprehensive computational models of emotion; nonetheless, there are already useful spin-offs of its application in the commercial world. For example, over 400 million US dollars were spent in 2006 on call center speech analytics software, including software that automatically detects whether customers sound upset so that those calls can be flagged and studied to learn how to handle them better. While current affective pattern analysis tools do not detect states such as "upset" perfectly, they are useful for narrowing the data to a smaller subset of "potentially upset" cases for a person to review. Affective computing can thus help businesses build a data-based understanding of how to improve customer service, even though the computer has no comprehensive model for understanding customer emotion.

Affective Computing researchers often situate studies in real contexts where emotion is combined with other factors. For example, an intelligent tutoring system might consist of an interface agent (tutor) interacting with a learner who is making lots of mistakes and starts smiling at the tutor. In such a case, the computer should be able to discern that those smiles probably do not express happiness, even if the learner's orbicularis oculi and zygomaticus muscles (key muscles identified in the Facial Action Coding System for expressing happiness) are contracted, as in the finding that children smiled more (with both of these muscles) after failure than after success. Rather, the smiles may occur because of social factors known to be present during human-computer interaction, even though the person knows that the tutor is just a piece of software. The smart computer tutor looks jointly at affective expressions such as facial cues together with non-affective information such as performance, personality, social context, and past history, in order to decide what the student is likely to be feeling and what pedagogical move to make next.

One of the challenges in Affective Computing research is how to deal robustly with naturally occurring affective information, which is usually neither pure nor static: it changes continuously, combining over time much as words and phrases combine into sentences that take on meaning. A smile followed by a head shake and raised eyes can have a different meaning than the same smile followed by a series of head nods and direct gaze. Affective technology is built to recognize complex affective-cognitive states by jointly analyzing head and facial movements and tracking how they change over time, such as when a look of interest morphs into one of concentration, then into confusion, and perhaps from there into frustration or anger and other mixed feelings. Moreover, such temporal trajectories interact with social and cultural circumstances, and with relationships where prior expectations and norms have been established. A person might express their pleasure very differently around children than around professional colleagues, and quite differently when meeting with a colleague in front of customers than when meeting later with that same colleague over drinks. While many basic facial expressions occur similarly across cultures, the rules for when they are displayed change with cultural, social, and relational circumstances. Decoding how and when these changes occur is part of making technology smart about handling and helping with affective communication.
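
To make the idea of tracking affective trajectories concrete, the sketch below runs the forward pass of a small hidden Markov model over a sequence of frame-level nonverbal cues, so that the inferred state depends on the history of observations rather than on any single frame. This is a minimal illustration, not any published recognizer: the states, cue names, and all probabilities are invented for the example.

```python
# Minimal HMM forward pass over affective states -- an illustrative sketch,
# not a published model. All states, cues, and probabilities are invented.

STATES = ["interest", "concentration", "confusion", "frustration"]

# Transition probabilities: states tend to persist, and tend to drift
# along the interest -> concentration -> confusion -> frustration path.
TRANS = {
    "interest":      {"interest": 0.7, "concentration": 0.2, "confusion": 0.08, "frustration": 0.02},
    "concentration": {"interest": 0.1, "concentration": 0.7, "confusion": 0.15, "frustration": 0.05},
    "confusion":     {"interest": 0.05, "concentration": 0.15, "confusion": 0.6, "frustration": 0.2},
    "frustration":   {"interest": 0.02, "concentration": 0.08, "confusion": 0.2, "frustration": 0.7},
}

# Emission probabilities: how likely each observable cue is in each state.
EMIT = {
    "interest":      {"lean_forward": 0.5, "head_nod": 0.3, "brow_furrow": 0.1, "head_shake": 0.1},
    "concentration": {"lean_forward": 0.4, "head_nod": 0.2, "brow_furrow": 0.3, "head_shake": 0.1},
    "confusion":     {"lean_forward": 0.1, "head_nod": 0.1, "brow_furrow": 0.5, "head_shake": 0.3},
    "frustration":   {"lean_forward": 0.05, "head_nod": 0.05, "brow_furrow": 0.4, "head_shake": 0.5},
}

def forward(observations, prior=None):
    """Return P(state | observations so far) after each observation."""
    belief = prior or {s: 1.0 / len(STATES) for s in STATES}
    history = []
    for obs in observations:
        # Predict: propagate belief through the transition model.
        predicted = {s: sum(belief[p] * TRANS[p][s] for p in STATES) for s in STATES}
        # Update: weight each state by how well it explains the observation.
        updated = {s: predicted[s] * EMIT[s][obs] for s in STATES}
        total = sum(updated.values())
        belief = {s: v / total for s, v in updated.items()}
        history.append(belief)
    return history

if __name__ == "__main__":
    cues = ["lean_forward", "brow_furrow", "brow_furrow", "head_shake"]
    for obs, belief in zip(cues, forward(cues)):
        top = max(belief, key=belief.get)
        print(f"{obs:>12} -> {top} ({belief[top]:.2f})")
```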

Displaying, communicating, or mediating expression of affect

Figure 2: It is easy to give machines the appearance of having emotion, but it is hard to give them the ability to know what emotion to express when. The software agent shown here, created by Ken Perlin and John Lippincott, has a variety of continuously changing expressive parameters that, when hooked to sensors of human non-verbal communication, enable the agent to adjust its expression to each person (Burleson et al., 2004).

Technology can easily give the appearance of having emotion without having the components that traditionally accompany biological emotion (Figure 2). For years, Apple Macintosh computers have displayed a smile when booting successfully, and a sad expression when not booting successfully, even though the computer has no underlying feelings of happiness or sadness. Artists can masterfully craft robotic dogs, animated characters, and other technologies to look, sound, and behave as if they have emotions. Technology that sends affective information - portraying affect through some modality - is easy to build. However, the hardest challenge in real-time interaction is figuring out when to communicate which affect. Without understanding social display rules and other important cues about the interaction context, technology is quite likely to irritate people with its emotional outputs. For example, Microsoft's Windows operating system used to play a triumphant tune when the system booted, which fit the mood well when a new machine booted successfully. However, when a person experienced the triumphant tune right after having to reboot because of a system crash, this was annoying. Interestingly, rebooting a Mac and encountering its smile does not usually have the same irritating effect; in fact, people are commonly seen to smile after they have made a mistake and are trying to redeem themselves.

The difficulty of communicating emotion through text-based online interaction has led to the development of emoticons and other means for people to add affective intent. Increasingly, artists and interaction designers augment chat, instant messaging, and other technologies with new means of communicating emotion, mapping emotion to colors, shapes, format, dynamic fonts, and more. The extent to which these attempts are based on sound principles, and whether they actually communicate emotion accurately or effectively, are topics of ongoing research.

With affect-communicating technology, people who rely on text-to-speech devices can be given the choice to have some of their affective parameters (e.g., typing pressure, heart rate, skin conductance, or some combination of these) automatically modulate their synthetic speech output. For example, physiological arousal might be used to modulate pitch or loudness with a single on/off switch controlled by the typist, as opposed to having to annotate each word and phrase directly, which could be arduous. Sometimes affective technologies can be used to help people who are non-speaking, non-typing, and unable to express emotion through the usual nonverbal channels. Technology to sense physiological or other parameters and map these to an output that the disabled person can control might be used to communicate states such as "I'm very calm" or "I'm overloaded" to people they trust. The ability to use technology to communicate even some simple affective state changes may help reduce misunderstandings that might otherwise arise. For example, in autism there are cases where a non-speaking person may appear very different on the outside, e.g., calm and content, compared to what they feel on the inside, e.g., in pain and enormously stressed, and the ability to signal what is really going on could be very important to avert misunderstandings and prevent escalation of a negative state. At the same time, such a technology needs to be under the control of the person so that they can also hide feelings that they do not wish to communicate, and protect themselves from people who may wish to make them feel worse.
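
As a rough sketch of the kind of mapping described above, the code below converts a (hypothetical) skin conductance reading into an arousal estimate and uses it to scale pitch and loudness, with a single switch that disables the modulation entirely. The sensor baseline and scaling constants are invented; a real system would calibrate them per person and per sensor.

```python
# Sketch of arousal-driven speech modulation. The sensor values, baseline,
# and mapping constants are all hypothetical placeholders.

def arousal_from_skin_conductance(sc_microsiemens, baseline=2.0, span=6.0):
    """Map a raw skin conductance reading to a 0..1 arousal estimate."""
    return max(0.0, min(1.0, (sc_microsiemens - baseline) / span))

def modulate_speech(text, arousal, enabled=True):
    """Return synthesizer settings; the on/off switch leaves defaults intact."""
    if not enabled:
        return {"text": text, "pitch_scale": 1.0, "volume_scale": 1.0}
    return {
        "text": text,
        # Higher arousal -> modestly higher pitch and louder output.
        "pitch_scale": 1.0 + 0.3 * arousal,
        "volume_scale": 1.0 + 0.5 * arousal,
    }

if __name__ == "__main__":
    reading = 5.0  # hypothetical sensor value in microsiemens
    arousal = arousal_from_skin_conductance(reading)
    print(modulate_speech("I am fine.", arousal, enabled=True))
```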

Some people prefer non-affective interaction with computers and do not wish for computers to communicate social-emotional signals. These individuals might choose to not use affective computing technology, or they might wish to have the option to turn off its social-emotional communication. Affective computing designers can honor this preference by designing in such aspects of control or adaptation. Since a driving principle in the development of affective computing is to honor people's affective preferences in the technology design, the preference of "show me no affective communication" should be accommodated. In short, forcing affective communication with somebody who does not want it is as inappropriate as not engaging in affective communication with someone who does want it.

When considering how to accommodate competing preferences, it is useful to consider how people do this. Most people are capable of modulating their emotional expression in response to others, e.g., turning it up for their children or toning it down for their boss, although some people are better at this than others. With technology, such modulation can be explicitly controlled. In fact, a person who has difficulty modulating or communicating emotion might sometimes enjoy using technology that has this ability. For example, one person might interact with another through the computer, choosing on their side to turn off emotional communication, but choosing on the other side for the computer to make them appear friendly and social, e.g., greeting with a smile and occasionally nodding in synchrony with what the other person says. Or they might choose to have all the emotions they truly display on their end muted on the other end (e.g., during a business negotiation or poker game). Use of technology to mediate affect communication thus increases human communication possibilities. Today, most text-based technology for communication limits emotional bandwidth - suppressing nonverbal affect whether you want it to or not, and often causing misunderstanding, e.g., a brief reply causing the recipient to think the sender was upset, when in fact he was merely rushed. Increasing emotional bandwidth increases opportunities to communicate. At the same time, mediated affective communication may be made to appear more or less sincere than it would have appeared in face-to-face interaction. Thus, technology that mediates affective communication can generate more or less trust in the information that is communicated (see the section on social, ethical, and philosophical issues at the end of this article for more examples of issues like this).
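
A minimal sketch of such per-direction mediation settings appears below. The channel names, options, and the notion of a rendered "friendly" expression are invented for illustration; they merely show how muting one's true displays can coexist with synthesizing socially positive cues on the far end.

```python
# Sketch of per-person, per-direction affect mediation settings, as in the
# scenario above. All channel names and options are invented.

my_settings = {
    # What I see of the other person's nonverbal affect.
    "incoming": {"render_nonverbal_affect": False},
    # What the other person sees of mine.
    "outgoing": {
        "transmit_my_affect": False,        # mute my true displays
        "synthesize_friendly_cues": True,   # e.g., smile, nod in synchrony
    },
}

def mediate_outgoing(frame, settings):
    """Decide what the far end sees for one captured frame."""
    out = {"speech": frame["speech"]}
    if settings["outgoing"]["transmit_my_affect"]:
        out["expression"] = frame["expression"]
    elif settings["outgoing"]["synthesize_friendly_cues"]:
        out["expression"] = "friendly_smile_and_nod"
    else:
        out["expression"] = "neutral"
    return out

def render_incoming(remote_expression, settings):
    """Honor a local 'show me no affective communication' preference."""
    if not settings["incoming"]["render_nonverbal_affect"]:
        return None  # suppress the remote party's nonverbal affect
    return remote_expression

if __name__ == "__main__":
    frame = {"speech": "Let's discuss the terms.", "expression": "scowl"}
    print(mediate_outgoing(frame, my_settings))  # the scowl is muted
```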

Sensing, recognizing, modeling, and predicting affective state

Emotion researchers have traditionally used questionnaires, human observation, and physiological sensing to gather data for assessing emotional state. Affective Computing expands these options, enabling new kinds of real-time, automatic, mobile, and sometimes less obtrusive measurements, giving technology the ability to read affective cues from complex patterns that include tone of voice, language, facial expressions, posture, gestures, autonomic nervous system measures, and whatever combinations of modalities people are comfortable having sensed.

Advances in affective technologies allow for more natural sensing, measurement, and modeling of emotion outside the laboratory. For example, small wearable sensors, cameras, and microphones can measure affective information in a social or other interactive setting without having to interrupt the interaction to ask "how do you feel right now? - please fill out this brief questionnaire." Affect sensing technology provides opportunities to improve the ecological validity of sampling affective information for a variety of scientific purposes. Validity is a special challenge in emotion research because emotions change with what is truly meaningful and significant to a person, and a laboratory experiment is rarely as meaningful and significant to a participant as are things in the person's real life. Technologies that sample data from real-life natural experiences increase the likelihood of developing scientific theories of emotion that fit real life.

Technologies that sense and help communicate affect can also be used to enable new kinds of expressive and artistic experiences. A variety of international artists and performers have used affective technologies: giving audiences wearable communicators to silently display their affect, using tools to artistically amplify or transform their personal affective expression, and giving performers new opportunities to share visualizations or audio portrayals of their emotional state with their audiences. Artists may also use affective technologies to provoke reflection, debate and discussion about its potential uses and misuses.

Tools of pattern analysis and machine learning are often useful to discover possibly nonlinear combinations of sensor data that correspond to complex dynamic affective states. These techniques can construct statistical models to aid not only in recognizing a current state, but also in predicting the next state or states, much like speech models can be used to predict likelihoods of certain words following others. Models may be person-dependent or person-independent, and may also be made conditional on other context variables.
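
As a small illustration of such predictive modeling, the sketch below estimates a first-order transition table from labeled state sequences and uses it to predict the most likely next state, in the spirit of the language-model analogy above. The training sequences and state labels are invented; a person-dependent model would be fit only on one person's sessions, while a person-independent one would be fit on many people's.

```python
# Illustrative next-state predictor: estimate a first-order transition
# table from labeled state sequences, then predict the most likely next
# state. Sequences and labels here are invented for the example.
from collections import Counter, defaultdict

def fit_transitions(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1
    # Normalize counts into conditional probabilities P(next | current).
    return {s: {n: c / sum(ctr.values()) for n, c in ctr.items()}
            for s, ctr in counts.items()}

def predict_next(model, current):
    return max(model[current], key=model[current].get)

if __name__ == "__main__":
    # Person-dependent model: trained only on one person's sessions.
    sessions = [
        ["interest", "concentration", "confusion"],
        ["interest", "concentration", "confusion", "frustration"],
    ]
    model = fit_transitions(sessions)
    print(predict_next(model, "concentration"))  # -> "confusion"
```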

Advances in artificial intelligence, common-sense reasoning about emotion, and user and context models are also important in building up scripts and other modeling mechanisms that help computers predict and understand which emotion is likely to occur in a situation. Consider the knowledge that if you give a person a present that they do not like, then they will probably smile and say "thank you" in order to be polite; however, if they really like the present, then they may show even more enthusiasm and delight, perhaps repeatedly saying thanks, smiling, talking about the present, and continuing to smile for some time afterward. Computers can be taught such interpretations, which vary with context and culture. They can be taught the likely antecedents to emotion, and given the ability to test in their own environments when the antecedents are present and whether and when the commonly expected emotional response occurs. Networked computers can also combine observations, constructing broadly applicable empirical models of what emotional displays and outcomes arise in interaction during daily life. These models can in turn be used to construct better theories of human interaction, and to build technologies that are more understanding of people and better at helping us.
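
A toy version of such scripted antecedent knowledge might look like the sketch below, which maps a situation description to the emotional displays one would commonly expect, following the gift example above. The situation features, rules, and display descriptions are all invented for illustration.

```python
# Sketch of hand-coded "antecedent -> expected emotional display" rules,
# in the spirit of the gift example above. Everything here is invented.

RULES = [
    # (situation predicate, expected displays)
    (lambda s: s["event"] == "received_gift" and s["likes_gift"],
     ["smile", "repeated thanks", "talks about the gift",
      "continues smiling afterward"]),
    (lambda s: s["event"] == "received_gift" and not s["likes_gift"],
     ["brief polite smile", "single thank-you"]),
]

def expected_displays(situation):
    for predicate, displays in RULES:
        if predicate(situation):
            return displays
    return ["no specific expectation"]

if __name__ == "__main__":
    print(expected_displays({"event": "received_gift", "likes_gift": False}))
```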

Responding intelligently and respectfully to perceived emotions

Figure 3: Computers without anthropomorphic faces, voices, or the pronoun "I" can adapt responses to human emotion that subsequently influence the affective state of those with whom they interact. The example shown here (Klein et al., 2002) did not pretend to have real feelings, yet conveyed the impression of active listening, empathy, and sympathy in an effort to help frustrated computer users feel better.

When a person reveals affective information to a recipient, the recipient can choose ways to respond that may be helpful or harmful. For example, if a person lets a computer (or robot or agent) know that its action is very frustrating then the computer could try to recognize its gaffe and take steps to avoid it in the future. It could issue an acknowledgement of the frustration it has caused, and perhaps even apologize, and see if this helps alleviate the person’s frustration. Sometimes it might be appropriate for a computer to display an empathetic or caring response. While some people object to a computer expressing feelings when it does not actually have them, it is possible for a computer to come across as empathizing, and for it to appear caring without it pretending to feel anything a person feels (Figure 3). Studies suggest that computer-provided empathy can reduce frustration and stress and can impact perceptions of caring, which could help in health-care technologies, among others.
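
As a deliberately simplistic sketch of the acknowledge-and-check strategy just described (and not the actual design used by Klein et al., 2002), a response policy might grade its reply by an estimated frustration level:

```python
# A simple graded response policy for perceived frustration. Thresholds
# and wording are invented; Klein et al. (2002) used a more careful
# active-listening dialogue design.

def respond_to_frustration(frustration_level):
    """frustration_level: a 0..1 estimate from some affect recognizer."""
    if frustration_level < 0.3:
        return "Continuing normally."
    if frustration_level < 0.7:
        return ("It sounds like that may have been frustrating. "
                "I'm sorry about that. Shall we continue?")
    return ("It sounds like that was really frustrating. I apologize. "
            "Would you like to tell me what went wrong?")

if __name__ == "__main__":
    print(respond_to_frustration(0.8))
```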

The idea of having a computer show empathy grew out of a body of research findings that people interact with computers similarly to how they interact with other people; consequently, the theory of human-human interaction can be applied to inform hypotheses of what may work in human-computer interaction. For example, if we know that Bob does not like it when people behave triumphantly after he experiences misfortune, then we can predict that Bob won’t like it when a computer plays a triumphant tune after he experiences misfortune. While most people do not treat computers exactly the same as other people, nonetheless many principles about human-human interaction carry over to human-computer interaction. These principles are often used to shape the design of affective technology.

Synthesizing and simulating emotions or implementing their regulatory and biasing functions

Emotion-like mechanisms inside a machine can perform functions that may or may not appear emotional to an outside observer. The best known emotion synthesis technologies usually trigger visible or verbal displays of emotion; for example, an emotion model within the Hasbro/iRobot toy doll My Real Baby evaluates inputs and causes the doll's facial expressions and vocalizations to change, making the doll appear to have emotions. Thus, we say an internal emotion model synthesizes emotion; that is, it creates an internal state that is capable of triggering the outward appearance of having an emotion, although it may also trigger no outward appearance and only change what happens inside. This use of an emotion model is in contrast to another use of emotion models, described above, to identify which emotion is likely to be present given some observations now (recognition), or which emotion is likely to come next (prediction). Some emotion models can be used both for analysis (recognition or prediction) and for synthesis (simulating or giving rise to the emotion). For example, cognitive appraisal models can be used both for recognizing antecedents that may give rise to an emotion (analysis) or for actually giving rise to a state in a computational system (synthesis).
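
The distinction between synthesizing an internal state and displaying it can be made concrete with the sketch below: events update an internal valence/arousal state (synthesis), and a separate, optional step maps that state to an outward display. The appraisal rules, dimensions, and thresholds are invented for illustration.

```python
# Sketch of an internal emotion model of the kind described above: inputs
# update an internal state, and the state may or may not drive an outward
# display. The appraisal rules and thresholds are invented.

class SimpleEmotionModel:
    def __init__(self):
        self.valence = 0.0   # negative..positive, range -1..1
        self.arousal = 0.0   # calm..excited, range 0..1

    def appraise(self, event):
        """Update the internal state from an event (synthesis)."""
        if event == "goal_achieved":
            self.valence = min(1.0, self.valence + 0.4)
            self.arousal = min(1.0, self.arousal + 0.2)
        elif event == "goal_blocked":
            self.valence = max(-1.0, self.valence - 0.4)
            self.arousal = min(1.0, self.arousal + 0.3)

    def outward_display(self, display_allowed=True):
        """The internal state exists either way; displaying it is optional."""
        if not display_allowed:
            return None
        if self.valence > 0.3:
            return "smile"
        if self.valence < -0.3:
            return "frown"
        return "neutral"

if __name__ == "__main__":
    doll = SimpleEmotionModel()
    doll.appraise("goal_blocked")
    print(doll.outward_display())       # "frown"
    print(doll.outward_display(False))  # None: internal-only change
```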

Emotion synthesis and simulation can be used inside a computer to influence processes that do not involve showing any emotion. The idea of employing such influences in computers is inspired by studies in neuroscience and psychology illuminating ways in which emotion beneficially biases creativity, decision making, judgment, perception, and other cognitive processes, as well as how emotion regulates attention, action selection, and other behaviors in humans and animals. Above, it was mentioned how a computer might change its search strategy based on affective state; this idea comes out of studies showing that emotion influences human creativity, where slight positive states are associated with more out-of-the-box thinking, yet without impairing judgment. Studies also show that people's positive or negative moods cause them to perceive and judge information differently, which in turn leads to multiple perspectives, a process that could also be beneficial to implement in some computing systems. Additionally, human studies have illuminated the importance of attention mechanisms in guiding learning and perception; similar mechanisms are believed to be useful to guide computers that learn and perceive. As scientists uncover biological mechanisms in which emotion contributes to everyday rational, intelligent, and beneficial behavior in people and other animals, affective computing researchers construct the computer analogs of these mechanisms and investigate whether they are beneficial for building better functioning computing systems. Similarly, as affective computing researchers figure out how to build better functioning systems, they may create mechanisms with unknown but possible biological equivalents, which might inspire a search for these mechanisms in biological systems.
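
The search-strategy example mentioned above might be sketched as follows: a "good mood" selects breadth-first (broad, exploratory) traversal, while other states select depth-first (narrow, focused) traversal. The mood-to-strategy mapping is an invented illustration of affect-biased control, not a claim about how any particular system implements it.

```python
# Sketch of affect-biased strategy selection: mood switches the frontier
# discipline between breadth-first and depth-first search. The mapping
# from mood to strategy is invented for illustration.
from collections import deque

def search(graph, start, goal, mood):
    frontier = deque([start])
    # Positive mood -> FIFO frontier (BFS); otherwise LIFO frontier (DFS).
    pop = frontier.popleft if mood == "good" else frontier.pop
    visited = set()
    while frontier:
        node = pop()
        if node == goal:
            return True
        if node in visited:
            continue
        visited.add(node)
        frontier.extend(graph.get(node, []))
    return False

if __name__ == "__main__":
    g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    print(search(g, "a", "d", mood="good"))        # explores breadth-first
    print(search(g, "a", "d", mood="frustrated"))  # explores depth-first
```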

Social, ethical, and philosophical issues

Affective technologies enable a wide variety of interesting new and beneficial advances; however, technological power to sense, measure, monitor, communicate, influence, and manipulate emotion could also be used for harmful or otherwise undesirable purposes. Any new technological capability raises social, ethical, and philosophical questions, and the fifth area of affective computing research attempts to address these with respect to the new capabilities this technology brings. Some examples follow.

When technology is "sending" a person's affective information, should what is sent always be under the sender's control, and if so, how do you give a person control over what they send without over-burdening their interaction? For example, a person communicating online through an avatar might want only their socially positive affective information sent, or if playing poker, then they may want no affective information sent or misleading affective information sent. Is this fair and acceptable to the other people who interact with the avatar? Should others be entitled to know if their communication partner is using special technological filters controlling what is being communicated?

When a computer has an internal affective state, should it be required to portray that state outwardly, and if so, how and under what conditions? The famous HAL 9000 computer in the film 2001: A Space Odyssey was able to hide its paranoid state from people on board the ship, which led to disaster for the crew. People do not constantly show their affective states, and if a computer were expected to show its internal state constantly, it could become very annoying. What is the best balance for expressing affective state, and on what factors does this balance depend?

Given technology that senses and interprets affective information, how can you protect the privacy of people who do not want their information sensed? This research question is a cousin to historical questions governing the use of lie detectors, or polygraphs, which typically sense physiological stress that can accompany a person's effort to deceive. The US government currently restricts the use of polygraphy in the workplace; however, the same government is actively funding research to develop technology to recognize people's emotions in public places, including technology that would operate without people knowing that they are being sensed (e.g., through remote thermography, laser Doppler vibrometry, and other detection techniques that work at a distance in order to try to detect potential terrorists in airports and other large public areas). Note that researchers in most government-funded universities and research institutions are prevented by institutional review boards from sensing information from people without obtaining their informed consent. The deployment of affect-sensing technology that is used without people's consent violates the wishes of many individuals; as such, it not only violates the fundamental principle of affective computing research to respect affective preference, but it also violates standard ethical practice.

No affect recognition system is always perfect: whether the recognition is implemented by people or by technology, there is usually disagreement and error, especially when multiple people are asked to rate which of multiple affective states are present. Nonetheless, many people, especially non-technical users of a new technology, often think that when a computer processes something, its decision will be perfect or at least highly reliable, and this can lead to a tendency for people to over-trust a computer's judgment of information, whether affective or not. Designers of affective technology have a responsibility to let users know about its limits in a way that clearly communicates the potential margins of error, so users can be very careful what actions or decisions are carried out on the basis of the (possibly erroneously) perceived affect.
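
One simple way to honor this responsibility is for the system to surface its confidence alongside any affect label, and to abstain from acting when confidence is low, as in the sketch below. The labels, probabilities, and threshold are placeholders for the output of any probabilistic affect classifier.

```python
# Sketch of surfacing recognizer uncertainty to the user rather than a
# bare label. Labels, probabilities, and the threshold are placeholders.

def report(label_probs, act_threshold=0.8):
    label, p = max(label_probs.items(), key=lambda kv: kv[1])
    if p < act_threshold:
        return f"Possibly {label} ({p:.0%} confidence) - not acting on this."
    return f"Likely {label} ({p:.0%} confidence)."

if __name__ == "__main__":
    print(report({"upset": 0.55, "neutral": 0.35, "pleased": 0.10}))
```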

The sensing of people's affective data, with the possibility of its storage and real-time or delayed transmission to others, raises many questions concerning the use and possible misuse of affective information. For example, a driver might like to have her car navigation system sense and adjust its voice to her mood, which can increase driving safety; however, she might wish to prevent the data from being given to her insurance company, which might raise the price of her policy if it finds out she frequently gets behind the wheel when angry. People may also object to being pummeled with ads or product promotions just because they showed interest (and the computer recognized it) when they were glancing at a billboard. Great care must be taken to respect people's wishes about what is and is not sensed, stored, or shared. Designers of affective computing systems need to clearly communicate whether collected information could be associated with a person's identity, with whom, if anyone, it might be shared, and what benefits and harms might occur from sharing this information.

Sometimes affective systems show measurable improvements in people's safety, health, and productivity. To the extent that technology beneficially impacts important factors like health and safety, it may become incumbent upon designers not to add this ability merely as a special feature, but to add it as a health or safety requirement. Progress in affective technology could thus lead to many new requirements - perhaps requiring new tax software not to exceed a certain average frustration level for typical taxpayers, or requiring air traffic control interfaces to respond in a way that enhances controller attention, or requiring a software package used by medical staff and patients not to increase stress, since excessive stress can depress immune system functioning. As affective information becomes measurable and manageable, there will be new demands on its use, some of which may be desirable, while others will be less so.

Technology that "has emotional state" raises philosophical questions about what it means to have feelings. While computers have been built to have mechanisms inspired by biological functions of emotion, these mechanisms, to date, remain different from giving a computer feelings in the same sense a person experiences. When (if ever) can one accurately say that a robot or a piece of software has feelings, in the same sense that we talk about human feelings?

Affective computing researchers, while addressing technical challenges of making systems that can send, sense, intelligently handle and simulate affective information, need to not fall prey to the common scientific tendency to make something just because it can be done. An ever-present challenge is to work together with people from diverse backgrounds, accepting and giving constructive criticism on new findings, and seeking public input to steadily discern what should be done in developing technology to improve human experience.

References

Breazeal, C. and Picard, R. (2006) In: Parasuraman, R. and Rizzo, M. (Eds.), Neuroergonomics: The Brain at Work. Oxford University Press, Oxford, pp. 275-292.

Burleson, W., Picard, R. W., Perlin, K. and Lippincott, J. (2004) In Workshop on Empathetic Agents, International Conference on Autonomous Agents and Multiagent Systems, Columbia University, New York, NY, 69-78.

Clore, G. L. (1992) In: Martin, L. and Tesser, A. (Eds.), The Construction of Social Judgments. Lawrence Erlbaum Associates.

de Rosis, F., Novielli, N., Carofiglio, V., Cavalluzzi, A. and Carolis, B. D. (2006) Journal of Biomedical Informatics, 39, 514-531.

D'Mello, S. K., Craig, S. D., Sullins, J. and Graesser, A. C. (2006) International Journal of Artificial Intelligence in Education, 16, 3-28.

Gadanho, S. C. (2003) Journal of Machine Learning Research, 4, 385-412.

Hager, J. C., Ekman, P., Friesen, W. V. (2002). Facial action coding system. Salt Lake City, UT: A Human Face. ISBN 0-931835-01-1.

Healey, J. and Picard, R. W. (2005) IEEE Trans. on Intelligent Transportation Systems, 6, 156-166.

Hjortsjö, C. H. (1970). Man's face and mimic language. Malmö: Nordens Boktryckeri. Swedish version: “Människans ansikte och mimiska språket”, 1969: Malmö, Studentlitteratur

Isen, A. M. (2000) In: Lewis, M. and Haviland, J. (Eds.), Handbook of Emotions, 2nd edition. Guilford, New York.

el Kaliouby, R., Picard, R. W. and Baron-Cohen, S. (2006) Progress in Convergence (Eds, Bainbridge, W. S. and Roco, M. C.) Annals of the New York Academy of Sciences 1093: 228-248, doi:10.1196/annals.1382.016.

el Kaliouby, R. and Robinson, P. (2005) In Real-Time Vision for Human-Computer Interaction, Springer-Verlag, pp. 181-200.

Klein, J., Moon, Y. and Picard, R. W. (2002) Interacting with Computers, 14, 119-140.

Marsella, S., Gratch, J. and Rickel, J. (2004) In: Prendinger, H. and Ishizuka, M. (Eds.), Life-Like Characters: Tools, Affective Functions, and Applications. Springer, New York, p. 46.

Nass, C., Jonsson, I.-M., Harris, H., Reeves, B., Endo, J., Brave, S. and Takayama, L. (2005) In: CHI '05 Extended Abstracts on Human Factors in Computing Systems. ACM, New York, pp. 1973-1976.

Ortony, A., Clore, G. L., and Collins, A. (1988) The Cognitive Structure of Emotions, Cambridge University Press, Cambridge, England.

Pantic, M. and Rothkrantz, L. J. M. (2003) Proc. of the IEEE, 91, 1370-1390.

Picard, R. W. (1997) Affective Computing, MIT Press, Cambridge, MA.

Picard, R. W., Vyzas, E. and Healey, J. (2001) IEEE Transactions on Pattern Analysis and Machine Intelligence, 23.

Prendinger, H., Mori, J. and Ishizuka, M. (2005) Int'l J of Human-Computer Studies, 62, 231-245.

Reeves, B. and Nass, C. (1996) The Media Equation, Cambridge University Press, New York.

Schneider, K. and Josephs, I. (1991) Journal of Nonverbal Behavior, 15(3), 185-198.

Trappl, R., Petta, P. and Payr, S. (Eds.) (2002) Emotions in Humans and Artifacts, MIT Press, Cambridge.

Tsiamyrtzis, P., Dowdall, J., Shastri, D., Pavlidis, I., Frank, M. G. and Ekman, P. (2006) International Journal of Computer Vision, 71, 197-214.

UPI (2006) United Press International, Spy Software used in Call Centers, http://www.physorg.com/news80412604.html accessed Nov 19, 2008.

Recommended reading

Carberry, S. and de Rosis, F. (2007) Special Issue on Affective Modeling and Adaptation, User Modeling and User-Adapted Interaction: The Journal of Personalization Research.

Douglas-Cowie, E., R. Cowie, et al. (2003). Special Issue on Speech and Emotion. Speech Communication. 40.

Fellous, J.-M. and Arbib, M. (2005) Who Needs Emotions? New York: Oxford University Press.

Isbister, K. and Höök, K. (2007) Special Issue on Evaluating Affective Interactions, International Journal of Human-Computer Studies.

Paiva, A., Prada, R. and Picard, R. W. (Eds.) (2007) Affective Computing and Intelligent Interaction, Lecture Notes in Computer Science 4738. Springer-Verlag, Berlin Heidelberg.

Russell, J. A. and Fernández-Dols, J. M. (1996) The Psychology of Facial Expression. Cambridge University Press.

Tao, J., Tan, T. and Picard, R. W. (Eds.) (2005) Affective Computing and Intelligent Interaction, Lecture Notes in Computer Science 3784. Springer-Verlag, Berlin Heidelberg.

See also

HUMAINE association portal of affective computing related research: http://emotion-research.net/
