Affective Computing

From Cyborg Anthropology
[[File:Kelly-Dobson-Blender.jpg|200px|thumb|right|Kelly Dobson controlling her blender with voice.]]
===Definition===
Affective computing develops computing architectures that account for human factors such as usability, touch, access, persona characteristics, emotion, and background. Those who build systems by these principles think of computing as a solution to, or a helper with, the problems and essences of human living, especially in an industrial world.
  
"Plutowski (2000) identifies three broad categories of research within the area of affective or emotional computing: emotional expression programs that display and communicate simulated emotional affect; emotional detection programs that recognize and react to emotional expression in humans; and emotional action tendencies, instilling computer programs with emotional processes in order to make them more efficient and effective. Thus specific projects at MIT (www.media.mit.edu/affect) have included the development of affective wearables such as affective jewelry, expression glasses, and a conductor's jacket designed to extend a conductor's ability to express emotion and intentionality to the audience and to the orchestra." (Plutowski 2000, quoted in [http://books.google.com/books?id=AazrLYOaetwC&dq=broad+categories+of+research+within+the+area+of+affective+or+emotional+computing+(&source=gbs_navlinks_s ''Knowing Capitalism''] by N. J. Thrift. SAGE, 2005.)

Affective computing has also been described as "computational systems with the ability to sense, recognize, understand and respond to human moods and emotions." Picard (1997; 2000) argues that to be truly interactive, computers must have the power to recognize, feel and express emotion.

===Learning the Language of Machines===

Instead of teaching machines to understand humans, MIT's [[Kelly Dobson]] programmed a blender to respond to voice activation, but not to the typical voice one uses. Instead of saying "Blender, ON!", she made an auditory model of a machine voice.

If she wants the blender to begin, she simply growls at it. The low-pitched "Rrrrrrrrr" she makes turns the blender on low. If she wants to increase the speed of the machine, she raises her voice to "RRRRRRRRRRR!", and the machine increases in intensity. This way, the machine responds to volume and pitch rather than to a human vocabulary. Why should a machine need to understand a human command when it can respond to one much closer to its own language?
{{clear}}
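Dobson's growl interface can be sketched as a simple loudness-to-speed mapping. This is a hypothetical illustration in Python, not her actual implementation; the function names, threshold, and scaling are all assumptions.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a chunk of audio samples in [-1.0, 1.0]."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def growl_to_speed(samples, threshold=0.05, max_speed=10):
    """Map the loudness of a growl to a blender speed setting.

    Below `threshold` the blender stays off; above it, the speed scales
    with amplitude, clamped to `max_speed`.
    """
    level = rms(samples)
    if level < threshold:
        return 0  # too quiet: leave the blender off
    return max(1, min(max_speed, round(level * max_speed)))

# A quiet "Rrrrrrrrr" runs the blender on low; a loud "RRRRR!" runs it high.
print(growl_to_speed([0.2] * 1024))  # soft growl
print(growl_to_speed([0.9] * 1024))  # loud growl
```

The point of the sketch is that the machine listens only to coarse acoustic features (amplitude here; pitch could be added the same way), not to words.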
  
===Related Reading===
*[[Media Lab at MIT]]
  
 
[[Category:Book Pages]]

[[Category:Unfinished]]

Revision as of 21:12, 12 February 2011
