Wednesday, September 26, 2012

Book Reading #2: "Attractive Things Work Better" from "Emotional Design"

Attractive Things Work Better - from Emotional Design by Donald Norman

In “Design of Everyday Things,” Donald Norman introduced readers to the finer points of object design. He explained how designers use certain tactics and strategies to make objects either naturally easy to use or, accidentally, very difficult to use. Topics covered in “Design of Everyday Things” included affordances, mapping, and constraints. Most of these topics focused on the physical layout of an object and how and why users interact with it. In the first chapter of “Emotional Design,” titled “Attractive Things Work Better,” Donald Norman switches gears and explains the emotional side of human psychology and how it relates to users' interactions with objects. He discusses how an attractive object can be easier to use than an unattractive object with the same features and functionality. This separate approach to object design helps fill in some of the gaps in our understanding of human interaction with everyday objects.

To start off the chapter, Norman describes a research study by two Japanese researchers, which found that attractive objects can be easier to use than unattractive objects with the same features and functionality. At this point in the chapter, I thought there was a confound in the study: by making an object more attractive, the designers may also have improved its mapping, visibility, and affordances. However, the next part of the chapter explains how an Israeli scientist shared that same skeptical viewpoint, recreated the experiment, and obtained not just similar results, but even more extreme results confirming the Japanese research.

After explaining the correlated research, Norman then spends some time explaining why he thinks that this phenomenon happened and how designers can use this information to improve object design. He explains that if people feel happier and more relaxed, they tend to be more accepting of alternative ideas and more focused on the big picture. This state of happiness and relaxation can be achieved by telling jokes at meetings, watching something funny, giving gifts/awards, and meeting in a relaxing environment. This approach would work better for brainstorming and long-term planning meetings.

The other end of the spectrum is very much the opposite. When people are anxious, angry, or scared, they tend to have a narrower range of focus and are more apt to dive into the deeper levels of a problem. This state of anxiousness and fear can be subtly introduced through sounds, sights, and feel. For example, a flashing red light, fast-paced music, and rigid, hard chairs might introduce the described state of being. This situation would be most effective in deep-dive meetings, where the goal is to try to solve a particular problem.

I really enjoyed reading this chapter, because I can clearly see how this has affected meetings I have been in before. This book also makes me reconsider the way I approach problems. I think I can change my study habits to take advantage of this by picking which projects/assignments to work on at a given time based on the mood I am in. This chapter also made me think about how many of the meetings I have attended were poorly executed because they tried to tackle brainstorming-style problems and deep-dive problems in the same meeting. Those meetings tended to have very poor outcomes.

Wednesday, September 19, 2012

Book Reading #1: Design of Everyday Things

In "Design of Everyday Things," author Donald Norman studies the little things about object design that can influence its success. Most software/hardware designers focus solely on functionality and aesthetics, but too many fail to develop objects with ease of use in mind. What I found most intriguing about "Design of Everyday Things" is that I could relate to every single example Norman gave, whether from personal experience or from the related experiences of friends and family. While I shared all of these experiences, only some of them had formed design habits in my mind. This book also made me stop and think about software projects that I am currently working on, trying to find ways to improve the user's interaction with the device. Norman breaks his design analysis into seven chapters, each one focusing on a particular aspect of the interaction between humans, their environment, and the devices they interact with.

In chapter one, Norman introduces the ideas of affordances, conceptual models, mapping, and feedback. Affordances are simply what users think an object might do. Conceptual models are users' mental images of how a device should perform. Mapping stresses the importance of the location and orientation of controls so that their use is obvious. Feedback allows users to gain valuable information back from a device they are interacting with. In chapter two, the author discusses device users blaming themselves, users blaming the wrong cause, the Gulfs of Evaluation and Execution, and the Seven Stages of Action. Users often blame themselves if they cannot figure out something they think should be easy. Users often blame random events when they cannot find the true source of trouble. The Gulf of Evaluation is the difference between the designer's mental design model and the user's conceptual model. The Gulf of Execution is the difference between the options a user thinks a device should have and the options actually available. In chapter three, Norman explains the difference between knowledge in the head and knowledge in the world; a truly efficient design will maximize usability by including both types of knowledge along with natural mapping. In chapter four, the author explains how physical, cultural, and logical constraints allow users to accurately understand a device with minimal memorization or training. In chapter five, we study the differences between slips and mistakes, and how that difference can be considered when designing devices/objects. In chapter six, the idea of evolutionary design is introduced and related to common design problems. In chapter seven, Norman summarizes the first six chapters into a guide of sorts, highlighting the main "rules" that will help designers create more user-friendly devices.

I really enjoyed reading this book, because it made me stop and think about everything around me. Literally. It is easy to find objects that appear to have been designed very poorly, with little consideration for how a user would actually interact with them. The references to outdated technologies provided comic relief when trying to read large amounts of content in a short amount of time. Below I spend some time discussing each chapter in further detail.

Chapter 1: The Psychopathology of Everyday Things

Introduction
In chapter one of “Design of Everyday Things,” the author, Donald Norman, presents several examples of objects with exceptionally bad design flaws. For example, he describes a high-tech phone, capable of call-back and redial features, and illustrates how the designers fell short. The idea behind presenting these design flaws is to provide readers with a series of guidelines that will allow for the design of easily understandable and intuitive devices.

Affordances
The first key concept highlighted in chapter one is the idea of “affordances.” Affordances are the perceived possible uses of an object; more plainly, an affordance is how a person expects to interact with an object. For example, a poorly designed knob could imply that a user should push it, instead of correctly operating the knob by turning it. In this case, the designer could add ridges or grooves around the edge of the knob to imply rotation. They could also make buttons shallower, so that it would be difficult to try to rotate them. This is an example of how to take advantage of affordances to make a better-designed object.

Conceptual Models
The next key concept highlighted in chapter one is the importance of a good conceptual model. A conceptual model is how a potential user imagines that a device will operate. The example used in the book is a "bike" that is really two bikes facing each other and sharing the front tire. Users should sense that the device will operate poorly as soon as they try to form a conceptual model of it: if each half of the object (left and right) operates like a normal bike, then when two users pedal, the force generated by each user directly opposes the other. A conceptual model provides potential users with a base idea of how to use an object. A poorly designed object either presents a conceptual image that raises concerns about its operation or provides no clues to help users form a conceptual model at all.

Mapping
The third key concept from chapter one is mapping, which stresses the benefit of spatially arranging controls so that users intuitively know what their purposes are. A good example of mapping is placing the controls for power windows in an automobile on the inside door panels rather than the center console. This location implies that the control acts on something located on or near the door. The design can be further improved by making the power window control an up-down switch, which provides users with an excellent conceptual model. Another mapping example is the volume rocker that cell phone designers place on the side of a phone. This spatial location lets users adjust the call volume while the phone is pressed against their faces. The fact that the control is a rocker switch, operating up-and-down, only enhances the user's conceptual model of which direction to push if they want the volume to increase, or go up.

Feedback
Finally, the idea of feedback enhances users' interaction with objects by confirming their actions with a logical response. A very simple example is a device that beeps when a user presses a button. If the device did not provide this feedback, some users would question whether they actually pressed the button hard enough. A similar example is touchscreen devices providing tactile feedback to users. In this case, I also wish there were a way to provide visual feedback signaling the exact location of the touch input.


Chapter 2: The Psychology of Everyday Actions

Blaming Yourself
Donald Norman stresses one main point repeatedly in chapter two: “Don’t blame yourself for bad designs.” Norman explains that users are likely to blame themselves and feel ashamed if they cannot figure out how to perform an action that they believe should be easy. However, most of the time this confusion is due to poor designs that lead to faulty conceptual models forming in users’ minds. He further states that most people feel this same confusion and embarrassment, but are still more likely to blame themselves than the design. This is illustrated using an example of a “return” key versus an “enter” key on a keyboard. This example is obviously not the best anymore, because those two keys have been phased out in favor of one simple “enter” key.

Blaming the Wrong Cause
Along with blaming ourselves, humans tend to be explanatory creatures, looking for an acceptable explanation for any event that defies our expectations. Norman illustrates this with an example of a coworker whose computer died right after connecting to a library catalog. The user associated the two events as cause and effect because they had no other suitable explanation. In reality, the problem had nothing to do with the library catalog, but this story helps explain why humans might pass the blame onto something convenient.

Another important idea the author introduces is blaming the wrong cause in social situations as well. Norman mentions that if we personally succeed, we attribute the success to hard work and perseverance. However, if someone else succeeds at a venture, we are more likely to credit the environment, casting the achievement off as good fortune. Conversely, if we personally fail at something, we blame it on the environment, claiming bad luck. Furthermore, if someone else fails at something, we say it is because they did not try hard enough, ignoring any part the environment might have played in the situation. I find this point very interesting, because this is definitely how I see my own successes and failures, as well as other people's. It makes me realize that I should try to be more understanding of everything that contributes to a given situation.

Seven Stages of Action
Norman also discusses what he believes to be the seven stages associated with action. First, “perceiving the state of the world” is when a person observes something in their environment. Second, “interpreting the state of the world” is when the person attempts to explain what they are perceiving in the world. Next, “forming the goal” establishes the purpose for an action, or what we want. Then, “forming the intention” is when an individual plans to do something to achieve a goal. Next, “specifying an action” is the process of refining the results of “forming an intention” so that a single action, or set of actions, is outlined. Then, “executing an action” is when the person actually carries out the action, or set of actions planned. Finally, “evaluating the outcome” happens when the individual reflects on the effects of their action.

The Gulf of Execution and Evaluation
The final points that Norman makes in chapter 2 revolve around the idea of a “Gulf of Execution” and a semi-related “Gulf of Evaluation.” In the first case, the “Gulf of Execution” describes the gap between the actions that a user expects a device to have and the actual actions provided by the device. Another way to think about this is usability. Are users able to use a device without strenuous effort being applied to learning the device’s actions? Next, the “Gulf of Evaluation” describes the gap between a user being able to visualize how a device will operate and the intended conceptual model. Is the user able to properly and accurately form a realistic, working conceptual model in their mind? If so, then the “Gulf of Evaluation” is minimized in the particular situation.

Chapter 3: Knowledge in the Head and in the World

Information is in the World
Whenever knowledge is in the environment, the need for people to memorize it diminishes. One really good example that Norman gives in chapter 3 is U.S. coins. If you were asked to correctly draw a U.S. penny, you would most likely place key features in the wrong locations; you might even draw the head facing the wrong way. This does not mean that every single one of us is incapable of distinguishing a penny from a dime, a nickel, or a quarter. The idea is that when knowledge is readily available in the surrounding environment, the need to memorize it vanishes. A situation that would wreak havoc here is two coins, very different in value, made in a similar shape, size, and color; there would be a high chance of mixing them up from time to time.

The Power of Constraints
Another thing that aids our ability to process information and respond accordingly, without fully memorizing the information, is situational constraints. For example, the English language and our past experiences with conversations and stories allow us to accurately guess what word should come next when trying to remember the lyrics to a song or the words to a poem. If I were to say “The color of that ball is ______,” one would expect the next word to be a color; the words “jump” or “finish” do not make sense in the sentence. Furthermore, if I were to quote a poem with adequate rhyming, you would be given further constraints. For example: “Roses are red, violets are blue, sugar is sweet, and so are ____.” Given this poem, you could constrain the possible word choices not only by sentence structure, but also by rhyming, because the appropriate word is expected to rhyme with “blue.”

Memory is Knowledge in the Head
However, in situations where environmental knowledge and logical constraints cannot be applied to simplify things, one must rely on memory. It is important to distinguish between two main types of memory: long-term and short-term. Long-term memory is used when a person must remember something for extended periods of time; it usually takes more effort both to store the information and to recall it. Conversely, short-term memory is best used to store a number of seven digits or fewer, or a short sentence or phrase, for a very short amount of time. Normally, short-term memory is lost as soon as a person's attention is directed elsewhere, so it is important not to get distracted.

Furthermore, there are three structures of memory: "Memory for arbitrary things," "Memory for meaningful relationships," and "Memory through explanations." The first structure describes memorizing something without relating it to any existing piece of knowledge, and without logical or physical constraints. The second is a way of memorizing something by relating it to something else that is either common knowledge or environmental knowledge; for example, remembering that the left light switch controls the left light and the right switch the right light. This is easy to remember because we relate it to a well-known, easily related piece of knowledge. Finally, the third structure stores information by working out the logical or physical constraints that shape our understanding of the object or idea in question. The example Norman uses here is the sewing machine bobbin that appears to "magically" intertwine the top and bottom strands of thread. After an explanation of how the bobbin actually accomplishes this "magical" feat, it becomes much easier to remember, because we have placed logical constraints on the information.

Natural Mappings
The explanation of memory that we have been discussing in chapter three brings us back to the idea of natural mapping. An object with good natural mapping allows the information related to the use or functionality of the object to be stored in the environment. This means that the user will not have to worry about arbitrarily memorizing its instructions for use. However, it is important to weigh the benefits of knowledge in the world versus knowledge in the head. For example, knowledge in the world requires no learning, but is only available in a particular environment. On the other hand, knowledge in the head requires learning, but it can be recalled regardless of environmental hints.

Chapter 4: Knowing What to Do

Physical Constraints
Continuing the trend of discussing design principles, Norman delves further into the idea of constraints. Physical constraints are a very powerful form. When faced with an object, we can gain insight into its intended functionality simply by observing its physical constraints. Norman uses a Lego set as an example, but the thing that kept coming to my mind is the baby/toddler toy where you have to put blocks of different shapes through matching holes in a ball. Through physical constraints, most of us know that the circle block goes in the circular hole, the triangle block goes in the triangular hole, and so on. No one needs to explain this to us; we try once or twice and learn that physical constraints matter.

Cultural Constraints
Another important type of constraint is cultural constraints. Here we use prior knowledge of our cultural norms to fill in the blanks. For cultural constraints, Norman's Lego example works perfectly. When English-speaking Americans go to place the pieces with "Police" written on the sides, they are able to determine that each piece should be oriented so that the word is right-side-up, even if the piece could also physically fit upside-down.

Logical Constraints
The final major type of constraint is logical constraints, which allow users to reason about an object's properties. Logical constraints tend to be very similar to cultural constraints, because both are forms of prior knowledge; however, logical constraints should bridge cultures. Norman continues with the Lego example: logic dictates that all pieces of the kit be used. However, I think this example falls short as a true illustration of logical constraints, because Lego kits are notorious for including extra parts. A better example: if you take apart your blender, there should be no extra parts when you put it back together. I believe this works better because we know from the beginning that there should be no leftover parts, whereas with the Lego kit, the only way to know whether it contained extras would be if the instructions included a parts list, which is not the case. Another similar example would be IKEA furniture, but my last run-in with IKEA furniture has left me too traumatized to explain further.


Chapter 5: To Err is Human

Slips vs Mistakes
When categorizing errors, it is important to distinguish between slips and mistakes. A slip is an accidental error in an automatic or routine behavior. Slips tend to occur when two actions are very similar, for example storing your eggs in the pantry instead of the fridge. A mistake happens when a person consciously decides something and turns out to be wrong. For example: I cannot remember whether I should turn left or right here, but my destination is to the east of my current location, so I will take the turn closest to east. If this turn ends up being the wrong direction, I have made a mistake.

Within the category of slips, there are six subcategories. The first is capture errors, which occur when the beginnings of two actions are similar and you accidentally transfer into the wrong one. For example, when counting time versus money, we restart time after the sixtieth count, whereas we restart money after the one-hundredth count. The next subcategory is description errors, which happen when an action can be performed on two similar objects and we mix the objects up. The example that comes to my mind is holding a non-edible object in one hand and an edible object in the other, and then trying to eat the non-edible object because the action of putting something in your mouth is so similar for both. The third subcategory is data-driven errors, which occur when someone accidentally mixes up two pieces of data they are attempting to use. For example, someone speaking a sentence they composed in their head might accidentally say "Frank's" because they are looking at a sign that says "Frank's." The fourth subcategory is associative activation errors, which are very similar to data-driven errors, but with the direction of the distraction reversed: instead of an external stimulus intruding, you might be performing a physical action when something you are thinking about accidentally mixes into the action. The fifth subcategory is loss-of-activation errors, which happen when you start to do something and either forget the details of what you are doing, or forget that you were doing anything in particular at all. This is one of the most common slips in my life: I will head toward the back door to let my dog in, but forget and end up grabbing the laundry out of the dryer, which happened to be on my way. Finally, the last subcategory is mode errors, which occur when different modes of operation, each with different associated actions, get switched in our minds and cause us to perform an unintentional action.

Mistakes tend to be much easier to analyze because they are usually simpler. If you evaluate a situation incorrectly, collect the wrong data, or aim at the wrong goal, then there is a good chance you will make a mistake. However, it is important to consider both mistakes and slips when designing an object. If you can account for some of the types of slips or mistakes mentioned, then you can create a device that is more useful and intuitive. There are many ways to accomplish this, but the one that stuck out to me was forcing functions. For example, if you are designing a website form that needs a phone number, check the format of the phone number and don't let the user move on until it matches the desired format. This should prevent users from accidentally entering another number, for example their social security number.
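As a rough sketch of that forcing-function idea (this example is my own, not from the book; the function names and the XXX-XXX-XXXX format are assumptions), the validation might look something like this:

```python
import re

# Expected phone format: three digits, three digits, four digits,
# separated by hyphens (e.g. 555-123-4567). A social security number
# (XXX-XX-XXXX) has a different digit grouping, so it fails the check.
PHONE_PATTERN = re.compile(r"^\d{3}-\d{3}-\d{4}$")

def is_valid_phone(value: str) -> bool:
    """Return True only if the input matches the phone number format."""
    return bool(PHONE_PATTERN.match(value.strip()))

def submit_form(phone: str) -> str:
    # The forcing function: the user cannot advance until the
    # format matches, catching the slip before it propagates.
    if not is_valid_phone(phone):
        return "Please enter a phone number as XXX-XXX-XXXX."
    return "Form accepted."
```

The point is simply that a format mismatch blocks progress, so a slip like typing the wrong kind of number gets caught immediately rather than stored silently.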

Chapter 6: The Design Challenge

Evolutionary Design and the Typewriter
When the typewriter inventor, Mr. Sholes, was deciding on the intricacies of his device, he used multiple sources of feedback. First, he gathered responses and reviews from potential everyday users such as writers. Second, there were physical constraints: he experimented with many different keyboard designs until the QWERTY layout emerged as the victor. One reason for the arrangement was so that buttons directly next to each other would not cause their typebars to interfere. The QWERTY keyboard became the standard layout all across the world and is still used today, even though the physical constraint no longer exists. Another popular format, the Dvorak keyboard, attempts to improve upon QWERTY by arranging the keys so that the most commonly used keys are on the home row and the least used keys are farthest from it. The lesson here is that once a product is designed, accepted as satisfactory, and has gained popularity, further change is rarely worth the cost.

Design Problems
There are many different problems that arise when a designer is creating an item. First off, there is an ongoing battle between usability and aesthetics; if one dominates the other, problems pop up. If an item is so aesthetically pleasing that no one can figure out how to use it, then the designer has failed. Conversely, if an item is extremely easy to use but looks horrible, then the designer has also failed. The problem is that most consumers do not focus on usability when they are shopping; price and aesthetics are the first things noticed. Also, designers may think that their device is easy to use, but sometimes that is because, as they design and test the device, they become experts. It is never a good idea to test something on experts alone. Finally, designers have to please their clients, because the clients are the ones paying the bills. However, the client may not be a good test user either, so designers should be careful in these situations as well.

Chapter 7: User-Centered Design

Seven Principles for Transforming Difficult Tasks into Simple Ones
1) Use both knowledge in the world and knowledge in the head
2) Simplify the structure of tasks
3) Make things visible: bridge the gulfs of Execution and Evaluation
4) Get the mappings right
5) Exploit the power of constraints, both natural and artificial
6) Design for error
7) When all else fails, standardize

*NOTE: I borrowed these verbatim from the book because I thought Norman did a great job summarizing the points made in this book and I would like to use them in my future designs.

Three Aspects of Mental Models
The design model is how the designer/engineer intended the object to be. The conceptual model is how users actually mentally perceive the object's actions. If an object is designed well, there will be very little difference between the design model and the conceptual model. The system image is best described as the visible portion of a device, through which users form their conceptual models.

Standardization
There are a few things to keep in mind when discussing standardization. First, when done properly, standardization can reduce ambiguity. However, you must be careful not to lock in a standard too early and get stuck with primitive technology. Unfortunately, this explanation does not hint at when the right time is. There will be some point on a technology's development timeline when little future change is expected; that, I would argue, is the best point for standardization.


5 Examples of Bad Design


1) Road bike gear shifters & brake levers:
Modern road bikes have some good design principles built into the gear shifters and brake levers, but the designers still provide no clear indication of which side controls the front brake and which the rear. When looking at the bike, you can gain a decent conceptual image, but you still have no way of knowing which lever corresponds to which brake. They also incorporate the shifters into the brake levers, so that you can push a lever to the inside and it will shift for you. This is a great idea and very convenient; unfortunately, the ambiguity regarding the front/rear brakes makes this an overall bad design, requiring users to memorize the functionality because there is no appropriate mapping.



2) Door Lock with ID Scanner
The main problem with this door lock with an integrated ID scanner is that there are four possible ways to insert your ID card. Even if you are smart enough to guess that the scanner needs more depth than the part in front of the slot allows, you would still have two orientations to choose from. I believe this could be clarified by adding an arrow on the side that requires the magnetic strip, or a picture of the card oriented the right way. The design does, however, give decent feedback by beeping and lighting either a red light for a bad card or a green light for a good card.



3) 1996 VW GTI Gear Shifter
The peculiar thing about this gear shifter is that reverse is left and up, which is clearly shown on the label on top of the shifter. At first glance, this object appears to be designed well, with a label allowing users to create a conceptual model in their minds. However, I kept having trouble getting the transmission into reverse. I would push in the right direction, but something seemed wrong because I had to force it into reverse. Later on I realized that to shift into reverse, you have to press down on the shifter and then push it left and up. The designers probably did this to prevent someone from putting the transmission into reverse instead of first gear, which is usually also to the left and up. However, due to the lack of sufficient labeling and knowledge in the environment, I was not able to figure out how to use it easily.



4) USB Plug
This one really needs no introduction. USB plugs are ambiguous and frustrating. Even if you use USB plugs all the time, it is easy to mix up the orientation because there is no evidence of the correct orientation. Technically, there are physical constraints in place: if you try to insert the plug the wrong way, it will not go in. However, I believe that if the USB designers had provided a physical distinction in the plug shape, it would save us all a lot of time. The mini-USB and micro-USB shapes are examples of such physical modifications; they provide sufficient mapping and logical/physical constraints so that users rarely mix up the orientation.




5) Weird Water Fountains
These water fountains can be found in the A&M Rec Center and G. Rollie White Coliseum; however, I have never once seen them used. When I first saw them I noticed the way to turn on the water, which is done by either pressing the bar or rotating the knob. I also noted that there was a pipe on top of the dish, which I assumed was for filling water bottles, until I tried it and realized that the water only comes out of the back of the pipe and trickles down the basin. The only explanation I could come up with is that this is a "spitting" fountain, which still makes no sense. There are clearly physical constraints in place, but I cannot figure out how to use this device. There is no labeling, no cultural knowledge, no environmental knowledge, and no logical assumption easily made.


5 Examples of Good Design



1) Doorbell
Doorbells are a great example of a simple but good design. The long, flat shape of the button, without ridges or edges, implies push. It is also visible, located at an average eye height, and mapped well by being placed near the door.


2) Truck Dome Lights
My truck's dome lights provide a really nice example of a simple but good design. There are two buttons corresponding to two lights. It is easy to tell that the buttons are indeed buttons, because there are not many other options for interaction: they do not stick out enough to rotate or pull. The buttons are also mapped well, each placed next to the light it operates. They also provide great feedback by physically clicking when pressed, not to mention the lights come on.


3) Flashlight
This flashlight shows great design principles. There are obviously two ways to interact with it: one is to press the button, which is conveniently located near the lens to imply that it toggles the light itself; the other is to twist the rear cap, which is ridged to imply that twisting is the only way of physically interacting with that end. The button also provides great feedback by physically clicking when it is pressed, not to mention the light comes on.


4) Mirror Controls
While in my truck, I noticed that the mirror control was very easy to use due to good design. The top rocker can be pressed left or right. The bottom directional pad can go left, right, up, or down. From the mapping, labeling, physical constraints, and cultural constraints, it is easy to deduce that pushing the rocker left selects the left mirror, and vice versa. Once a mirror is selected, the directional pad can be used to adjust its orientation. Both sets of buttons also provide ample feedback: the rocker stays left or right when you push it, while the directional pad springs back to its normal position.


5) Beer Taps
While I was sitting there thinking of what my last object should be, it jumped right out at me! Beer taps are a great example of design because they have a long lever on top, which implies that it should be pulled. The open, tube-shaped tip underneath the lever hints that something will come out when the lever is pulled. Using my cultural and logical knowledge, I am able to deduce that beer should come out of the hole when the lever is pulled, since these are, after all, located in a bar. Great design.

Thursday, September 13, 2012

Homework #3: Chinese Room


Introduction:
In “Minds, Brains, and Programs,” John Searle argues against certain aspects of Artificial Intelligence (AI) by comparing the situation to a unique example. John Searle is from the Department of Philosophy at the University of California, Berkeley. Searle claims that full Artificial Intelligence is not possible, and he defends his claim against multiple responses from researchers at other very noteworthy universities.
Summary:
One of the first distinctions explained in the paper is the difference between weak Artificial Intelligence and strong Artificial Intelligence. Searle explains that weak AI, by definition, is simulated understanding of something. This would be similar to someone giving you step-by-step instructions on how to do something without you really understanding what you are doing. You would probably be able to complete the task, assuming the instructions are thorough enough, but you would not actually understand what you did. Conversely, strong AI is true understanding. The metaphor Searle uses for this situation is an English-speaking man trapped in a Chinese room. The man is first given a set of Chinese characters. Then the man is given instructions in English relating each Chinese character from the first set to a Chinese character provided in a later step. Then someone outside of the room engages in a conversation with the man in Chinese: they pass him a question, and he replies with an answer. The important part Searle points out is that the man does not actually understand the conversation taking place beyond having a formal list of instructions and following them. If the man were given the same set of questions and answers in English, then he would truly understand the conversation, because he would know what they were talking about.
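The rule-following process described above can be caricatured in a few lines of code: a lookup table maps input symbols to output symbols, following its instructions perfectly while "understanding" nothing. This is only a toy illustration of the argument; the one-entry rule book below is an invented placeholder, not anything from Searle's paper.

```python
# Toy illustration of the Chinese Room: the "room" answers by pure symbol
# matching against an instruction book, with no grasp of what the symbols mean.
# The one-entry rule book below is an invented placeholder.

rules = {"你好吗": "我很好"}  # question symbols -> answer symbols

def chinese_room(question):
    # Follow the formal instructions and nothing more.
    return rules.get(question, "")

print(chinese_room("你好吗"))  # emits a sensible reply without comprehension
```

From the outside, the room appears conversationally competent; on the inside, there is only table lookup, which is exactly the gap Searle is pointing at.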
Some critics of his paper have written responses, against which he defends his claim. For example, a group from Yale University devised a response termed “The Robot Reply.” In this response, the contributors from Yale argue that a robot could be placed in the room instead of a human. Instructions would be passed in Chinese to the robot, which would look up the corresponding instruction in a language it knows and then perform some action. Examples of these actions could be hammering a nail, drinking water, opening a window, etc. The team from Yale argues that in this case the robot would surely understand the Chinese language, because it has a physical action associated with each instruction. Searle defends his claim by stating that strong AI is defined as internal understanding, relying on nothing from the environment besides the input given. He then points out that the team from Yale relies on actions taking place in the environment, which defeats the definition of strong AI.
Discussion:
The Chinese Room is a very interesting paper that raises key questions regarding the current state of Artificial Intelligence and future possibilities. I believe that most of the AI that I have been exposed to, or could imagine, would fall into the weak AI category. I believe that the main problem with the strong AI definition and theory is that it bases its proof on the idea of understanding. However, understanding has many different levels. For example, if I read a book about a group of characters interacting in a fictional place, I would say that I understand what is happening. However, some of my understanding would be influenced by my past experiences related to the story line. What I mean to say is that someone else reading the same story would have a different understanding of what is going on. Does a reader focus on the big picture, noting the high-level significance of the events? Or does the reader strictly follow the immediate interactions between characters? The point that I wish to make is that Searle and his antagonists base their arguments around the idea of understanding, but there are so many levels of understanding. I believe that the English-speaking man in the example understands the Chinese characters he is interacting with, because he has a set of instructions explaining how to process them. However, I believe that he does not have the same level of understanding as a true Chinese speaker. Furthermore, if two Chinese-speaking individuals interacted with the English-speaking man, they might have different levels of understanding between them. If one Chinese speaker is more intelligent and experienced than the other, then they might have a higher level of understanding than the less intelligent, less experienced speaker. This does not mean that the less experienced speaker does not understand at all.

Sunday, September 9, 2012

Paper Reading #6: Intimacy in Long-Distance Relationships over Video Chat

Introduction:
Relationships are hard work. They take time, effort, and patience. Throw in geographic separation and they become even harder to maintain. "Intimacy in Long-Distance Relationships over Video Chat" is a research paper by a team of two researchers from Canada that aims to evaluate the effectiveness of different forms of communication used in long distance relationships. Carman Neustaedter is an assistant professor at Simon Fraser University in Vancouver, Canada, whose main research area is Human-Computer Interaction. One interesting fact about Carman is that he enjoys embedding puzzles in his research papers, which he compares to Indiana Jones or The Da Vinci Code. Saul Greenberg is a professor in the Department of Computer Science at the University of Calgary. Saul mainly studies Human-Computer Interaction, specifically how technology plays a role in the everyday interactions of people.

Summary:
Carman and Saul devised a plan to interview a group of people in long distance relationships in order to discover how technology enhances the ability to maintain a relationship over a distance. They interviewed fourteen participants who were all in long distance relationships and all already using technology to stay connected to their significant others. The participants were split by gender, and some were in same-sex relationships. Most of the participants were college students, graduate students, or post-graduate researchers. The research team interviewed these participants individually in order to gain insight into how they were already using technology to stay connected to their significant others. Most of the participants used text messages as a way to stay in contact throughout the day. They would use phone calls for important or emotional conversations, or when they wanted each other's undivided attention. Finally, participants used video chat as a way to hang out and spend time with each other. Some participants would only use video chat for short conversations, but most would leave the connection open for extended periods while they performed everyday activities such as studying, doing laundry, or eating a meal. Furthermore, some participants would send affection through air-kisses or air-hugs. Less than half of the participants used video chat as a way to satisfy sexual desires; the rest stated that shyness or privacy issues were the reason they did not. One of the main things the participants agreed upon is that video chat allowed body language cues to enhance the conversation.


Related Work:
There is a lot of related work done in the area of human interaction via digital devices. One of the main areas of concern regarding digital communication is privacy. Here are some related papers that investigate digital privacy further: "Over-exposed? Privacy patterns and considerations in online and mobile photo sharing," "Primitive emotional contagion," "Unpacking 'privacy' for a networked world," and "The taste of privacy: An analysis of college student privacy settings in an online social network." Another main area of related research is studying the effect of Facebook and emotions on relationships. Some papers that focus on this subject are: "The impact Facebook rituals can have on a romantic relationship," "Public displays of connection," and "More information than you ever wanted: Does Facebook bring out the green eyed monster of jealousy." Finally, some researchers focus on the actual creation of software and hardware devices that might enhance the quality of digital communication in: "Tangible interfaces for remote collaboration and communication," "ComTouch: design of a vibrotactile communication device," and "Emotional touch: a novel interface to display emotional tactile information to a palm."

Evaluation:
Carman and Saul used only qualitative, subjective forms of evaluation to measure the effectiveness of different forms of digital communication in long distance relationships. This was mainly because they were studying the users' level of acceptance of preexisting technologies used to communicate with significant others in long distance relationships. They would personally interview participants, either in person or via video chat, and ask open-ended questions about different digital communication technologies and what role they played in maintaining relationships when separated geographically.

Discussion:
I picked this paper for my final paper reading blog because it relates to my life. I have been dating the same girl for over nine years, and we have faced distance boundaries before. Of the nine summers that we have spent together, we have spent five of them in different cities, or sometimes different states. We also spent one whole year apart when I started college a year before her. In the early part of our relationship, video chatting was not as readily available, so we stuck to phone calls. In more recent summers we have used video chatting to varying degrees. One area of frustration for a lot of video chat users is connection speed issues. This paper mentions this, but does not focus on it as a main point of research. I believe that video chatting is great because it allows users to view and interpret contextual clues, which can drastically change how messages are interpreted. For example, body language can show comprehension, confusion, anger, sadness, tiredness, etc. This information is lost when talking over the phone alone. However, I sometimes got very frustrated when bad connections caused conversations to be cut short or damaged. This paper did a great job of evaluating thoughts and feelings about preexisting technologies, but did not focus on ways of improvement. Most of the related work focused on internet privacy, Facebook interactions, or ways of improving digital communication, rather than users' level of acceptance of preexisting technologies in long distance relationships. Therefore, I believe that this paper is novel in the way it combines ideas, but is lacking in suggestions for improvement.


Thursday, September 6, 2012

Paper Reading #5: Touch Me Once and I Know It's You! Implicit Authentication Based on Touch Screen Patterns


Introduction:
As most Android users know, the draw-a-shape password feature is quick and easy to use. However, it is not as safe as one might assume. First off, an attacker could easily watch the phone owner enter the shape and then mimic it. Secondly, an attacker could simply look at the smudge patterns on the screen and decipher the password pattern. These are serious flaws in a pattern-based password, and in “Touch me once and I know it’s you! Implicit authentication based on touch screen patterns,” a research team attempts to address them.
Summary:
Alexander De Luca, Alina Hang, Frederik Brudy, Christian Lindner, and Heinrich Hussmann of the University of Munich are the researchers behind this paper. The research team attempts to differentiate between different users keying in the same pattern password. In addition, the team uses this ability to increase the protection of pattern-based passwords by checking for individual users’ identities. The team first observed user input information, such as speed, pressure, and path taken, on trivial tasks such as horizontal and vertical swipes. Once they gathered this data, they developed a security application that incorporates these additional features into the existing pattern-based password programs. The research team then performed another experiment that utilized the newly created program and tested the input results of legitimate users and attackers. The categories tracked were false negatives, true negatives, false positives, and true positives, which were then used to calculate an overall accuracy rate.
 
Related Work:
A lot of research has been done regarding biometric verification techniques, and I will name a few examples here. First, “Biometric verification at a self service interface” investigates the use of physiological biometrics to securely identify users. Second, a research team investigates “How biometrics will fuse flesh and machine,” which further examines physiological biometric verification methods. Then, a research team investigates the user acceptance of biometric verification methods in “Employee acceptance of computerized biometrics security systems.” Another paper that references user acceptance of biometric verification systems is “Theoretical examination of the effects of anxiety and electronic performance monitoring on biometric security systems.” Next, in “A user study using images for authentication,” a research team studies replacing typed passwords with pictures altogether. An application of a very specific physiological biometric test is used in “An iris biometric system for public and personal use.” In “An introduction to evaluating biometric systems,” researchers study the broad topic and application of physiological biometrics. “Bodycheck: Biometric access protection devices and their programs put to the test” is a study in which a research team investigates both physiological and behavioral biometrics to improve security functions. A study of privacy and big brother syndrome is conducted in “Biometrics in the mainstream: what does the U.S. public think.” Finally, a team studies the memorability benefit of picture passwords in “The memorability and security of passwords – some empirical results.”
Evaluation:
In order to evaluate the first part of the research, which gathered the base information on touch pressure and speed, the team used quantitative, objective methods. This allowed the team to gather hard evidence for their hypothesis. The team then used quantitative, objective methods again when measuring how accurately the system distinguished legitimate users from attackers keying in the correct pattern password. The results of the experiment can be seen in the following chart.
True Positives: 398
False Negatives: 92
True Negatives: 852
False Positives: 231
Accuracy: 77%
False Rejection Rate: 19%
False Acceptance Rate: 21%
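The rejection and acceptance rates above follow from the standard confusion-matrix formulas. Here is a minimal sketch of that arithmetic; note that the paper's exact accuracy definition may differ from the naive one, so only the two rates are computed.

```python
# Standard confusion-matrix rates for the authentication results above.
# FRR: how often a legitimate user is wrongly rejected.
# FAR: how often an attacker is wrongly accepted.

def rates(tp, fn, tn, fp):
    frr = fn / (tp + fn)  # rejected legitimate attempts / all legitimate attempts
    far = fp / (tn + fp)  # accepted attacks / all attack attempts
    return frr, far

frr, far = rates(tp=398, fn=92, tn=852, fp=231)
print(f"False Rejection Rate: {frr:.0%}")   # 19%
print(f"False Acceptance Rate: {far:.0%}")  # 21%
```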
Discussion:
I believe that this research paper did a great job of investigating a cheap and logical solution to an everyday problem. The main difference between the related works and this study is the type of biometric testing used. Most of the studies investigate the idea of using physiological biometrics, such as fingerprints and retina scans. Alternatively, this study investigates the use of behavioral biometrics, such as pressure and speed. The two main benefits of this approach are 1) no additional hardware is required beyond an Android smartphone and 2) no privacy questions come into play regarding the storage of physiological biometric identity information. The main problem that I see with this study is that, a lot of the time, multiple people use one phone. For example, friends and significant others know each other’s password patterns, and this research paper does not account for this situation.

Wednesday, September 5, 2012

Paper Reading #4: Reducing Compensatory Motions in Video Games for Stroke Rehabilitation

Introduction:
I remember when I was a young kid and my mom would yell at me to get off the video games, but times have changed. Recently, many video game creators have incorporated physical activity into video games. This is most easily noted in the advancements Nintendo has made with the Wii gaming console. Progress has not stopped there, either. Video games are now being used to provide a low-cost alternative to costly physical therapy sessions. The idea is that if a patient would benefit from a lot of physical training, but cannot afford to attend physical therapy for all of the recommended time, then smart video games could help bridge the gap.

Summary:
In "Reducing Compensatory Motions in Video Games for Stroke Rehabilitation," two researchers attempt to push the bar even further. Gazihan Alankus and Caitlin Kelleher use video games to help stroke victims perform physical therapeutic shoulder exercises and detect compensatory movements such as patients leaning to a side or backwards to falsely increase range of motion. They accomplish this feat by first observing stroke rehabilitation patients and noting the most common compensatory motions. Then, they designed a wearable, physical sensor network that consists of a series of Wii remotes place on patient's arms and torso. Next, they designed a video game that would use simple strategic exercises to provide physical therapy to the stroke patients. 



Finally, they incorporate the spine angle sensors as compensation sensors and modify the video game to reward and punish the patient based on the compensation levels. The game that they had the most success with features a hot air balloon steadily traveling towards the right-hand side of the screen. The patients can alter the height of the hot air balloon by raising one of their arms. Parachutists are suspended across the screen, and the patient must attempt to rescue them by picking them up. Obstacles are introduced to encourage sustained holds and to prevent jerking motions. If a patient compensates by leaning their torso to one side or backwards, the hot air balloon tilts. If the hot air balloon tilts past a certain amount, or if they crash it, their score goes down.
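The compensation penalty described above can be sketched roughly as follows. This is my own hypothetical reconstruction, not the authors' code; the direct lean-to-tilt mapping and the 15-degree threshold are invented for illustration.

```python
# Hypothetical sketch of the balloon game's compensation penalty (not the
# authors' implementation). A torso sensor reports a spine lean angle;
# leaning tilts the balloon, and tilting past a threshold costs points.

TILT_LIMIT_DEG = 15.0  # assumed threshold, for illustration only

def update_balloon(spine_angle_deg, score):
    balloon_tilt = spine_angle_deg          # lean maps directly onto tilt
    if abs(balloon_tilt) > TILT_LIMIT_DEG:  # compensating too much
        score -= 1                          # punish the compensation
    return balloon_tilt, score

tilt, score = update_balloon(spine_angle_deg=20.0, score=10)  # leaning 20° loses a point
```

The key design choice the paper makes is that the penalty is visible in the game world itself (the balloon tilting) before any points are lost, which nudges patients to correct their posture on their own.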




Related Work:
One piece of related work, which discusses video game based therapy, is "Virtual rehabilitation after stroke." Then, "Optimizing engagement for stroke rehabilitation using serious games" also discusses video games used for therapy. For a slightly different approach, one team of researchers investigates motivational factors in video game based therapy in "Design strategies to improve patient motivation during robot-aided rehabilitation." However, I think that Gazihan and Caitlin did a pretty good job considering and mentioning motivational factors. Another piece of related research is "Use of a low-cost, commercially available gaming console (Wii) for rehabilitation of an adolescent with cerebral palsy." This article seemed interesting to me because I have a family member with cerebral palsy who has mentioned the physical training exercises she is instructed to do, and a video game version would be great. Similar to the cerebral palsy focused research paper, "Improving patient motivation in game development for motor deficit rehabilitation" investigates video game based therapy methods for patients with motor disabilities. One group attempts to use an older gaming system to improve economic availability in "Feasibility of using the Sony PlayStation 2 gaming platform for an individual poststroke: a case report." More related papers are "Game design in virtual reality systems for stroke rehabilitation," "PlayStation 3 based tele-rehabilitation for children," "Tailoring virtual reality technology for stroke rehabilitation: a human factors design," and "Effects of intensity of rehabilitation after stroke: a research synthesis."

Evaluation:
For the evaluation portion of this paper, the research team used a combination of objective and subjective, qualitative and quantitative analysis. For the accuracy of the Wii remote sensors, they used an objective, quantitative analysis. This was done because objective, quantitative analysis tends to best represent the accuracy of physical sensors. Then, when the researchers conducted a case study, they performed subjective, qualitative analysis on the research patients. This subjective feedback, combined with direct observation, allowed the researchers to modify and tailor the game to most efficiently and effectively prevent compensatory movements in poststroke rehabilitation therapy.

Discussion:
The main thing that I like about this article is that it attempts to address a very important problem. My granddad had multiple strokes before he passed away, and I wish that they had had something like this to help him maximize his recovery therapy. The main difference between this research project and the ones mentioned in the related work section is its focus on compensatory movements. Some of the research papers listed focus on the possibility of tele-presence physical therapy, but this still requires a physician on the other end of the connection, which is costly to use at high frequency. The other research papers focus on using video games for rehabilitation, but do not adequately address compensatory movements. This paper is definitely novel because it actually shapes the video game around the idea of rewarding good form and attempting to prevent compensation.

Saturday, September 1, 2012

Paper Reading #3: Augmenting the Scope of Interactions with Implicit and Explicit Graphical Structures

Introduction:
In this blog post, I will discuss a scholarly CHI paper looking to improve group-based graphical editing. “Augmenting the Scope of Interactions with Implicit and Explicit Graphical Structures” is a scholarly research paper published by Raphael Hoarau and Stephane Conversy of the University of Toulouse in Toulouse, France. Raphael is a PhD student studying Human-Computer Interaction at the Laboratory for Interactive Computing at the University of Toulouse. Stephane is an associate professor at the same laboratory, focusing his research on Human-Computer Interaction. In this paper, Raphael and Stephane discuss ManySpector, a software program they created to improve group-based graphical editing, and a research study testing user interactions with, and acceptance of, their program. In this blog, I will discuss the overview of Raphael and Stephane’s research paper, related work, how they evaluated ManySpector, and my opinion regarding their research, findings, and presentation.

Summary:

Before the research team could test human perception of group-based graphical editing, they first had to develop a software program capable of manipulating grouped objects the way they wanted. The idea behind their software program is to allow users to group objects multiple times, display the attributes of grouped objects, and then edit those attributes all at once. For example, if I have five shapes grouped together on the screen and three text fields grouped together, and I want to change the color of all of the shapes to green, I simply have to select the group, which displays the group's attributes and values, and then change the color attribute to green.
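The interaction described above boils down to a group that exposes its members' shared attributes and propagates an edit to every member at once. Here is my own minimal illustration of that idea; it is not ManySpector's actual API or data model.

```python
# Minimal sketch of group attribute editing (hypothetical, not ManySpector's
# real API): setting an attribute on the group updates every member at once.

class Shape:
    def __init__(self, color):
        self.color = color

class Group:
    def __init__(self, members):
        self.members = members

    def set_attribute(self, name, value):
        for member in self.members:       # one edit propagates to all members
            setattr(member, name, value)

shapes = Group([Shape("red") for _ in range(5)])
shapes.set_attribute("color", "green")    # all five shapes turn green at once
```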





There are already systems that allow group editing, but Raphael and Stephane believe these systems do not provide enough editing capability. For example, Word allows you to group objects together, but the only editing options it then provides are ungroup, scale, and rotate, which adjust the group as if it were one object. Raphael and Stephane's point is that some users may want to group a set of objects and then rotate each individual object to a designated, uniform angle independently. Manipulations such as these cannot be done in any graphical editor currently on the market.

Evaluation:

In order to evaluate the usability of their software program, Raphael and Stephane conducted a research study consisting of a tutorial and two exercises without help. The tutorial lasted fifteen minutes and covered basic grouping and editing skills. Then, they asked the participants to create a certain scene or object. They also asked the participants to think out loud, so that the study administrators could follow their train of thought about what they were attempting to do at any given moment. At the end of the study, Raphael and Stephane asked the participants to fill out a subjective, qualitative questionnaire that utilized the Likert scale to rate their interactions with the software program. The majority of the participants stated that they would need more training to be comfortable using the tool and that they did not feel proficient after so little time.

Discussion:

I believe that group-based graphical editing is an area that is ripe for improvement, but Raphael and Stephane's ManySpector is not the final product to deliver it. There needs to be a way of making the grouping relationships more intuitive, requiring less training to grasp the concepts. However, when it comes to the effort involved in creating such a software program, I am very impressed. Raphael and Stephane created a rather robust application for testing and proof of concept. I am definitely interested in trying out a piece of software like this.

Paper Reading #2: Taming Wild Behavior - The Input Observer for Obtaining Text Entry and Mouse Pointing Measures from Everyday Computer Use


Introduction:
The purpose of this blog entry is to present The Input Observer, an experiment to test real-world text input and mouse movement speed and accuracy. “Taming Wild Behavior: The Input Observer for Obtaining Text Entry and Mouse Pointing Measures from Everyday Computer Use” is a scholarly paper published by Abigail C. Evans and Jacob O. Wobbrock of the University of Washington. The main goal of The Input Observer is to measure text entry and mouse movement speed and accuracy “in the wild,” or outside of a controlled test environment. Abigail is a research assistant at the University of Washington, who specializes in graphic design, web development, and knitting (sweaters, gloves, and such). Jacob is an associate professor at the University of Washington, who specializes in CHI. A CHI researcher by the name of Hurst performed a similar experiment, tracking mouse movements and text input, but those tests were limited to one application on the computer. The Input Observer, by contrast, gathers information directly from the operating system, so it does not distinguish between applications. In this blog, I will discuss the overview of Abigail and Jacob’s research paper, related work, how they evaluated The Input Observer, and my opinion regarding their research, findings, and presentation.

Related Work:
One piece of related work studying pointing performance, "Accurate measurements of pointing performance from in situ observations," does a pretty good job measuring pointer input, but it seems to be scripted in the sense that the observation occurs in a controlled program. Also, "Automatically detecting pointing performance" is another example of an in situ study of mouse performance. A slightly different take on pointing performance is considered in "Understanding pointing problems in real world computing environments." Similar studies are done in "Accuracy measures for evaluating computer pointing devices" and "Optimality in human motor performance." Furthermore, similar work is done in "Goal crossing with mice and trackballs for people with motor impairments" and "Instrumenting the crowd: using implicit behavioral measures to predict task performance." Researchers investigate the problems and advantages of in-the-wild studies in "Being in the thick of in-the-wild studies: the challenges and insights of researcher participation." This is also discussed in "Ethnography and participant observation" and "Into the wild: challenges and opportunities for field trial methods."

Summary:
As I mentioned before, the main purpose of The Input Observer is to observe input speed and accuracy in the wild. However, in normal input speed and accuracy testing environments, a predetermined script is used to ensure consistent results. Abigail and Jacob believe that this affects test results, because users have to read and copy the test text instead of creating the text as they go. Most of the time in a real-world setting, we invent the text as we type. In order to truly measure these characteristics in the wild, no scripts could be used. This presented a challenge from the very beginning. To overcome it, the researchers devised an innovative way to measure text input, including edits and errors. An example of an edit is if I type “I went to the supermarket,” but then I backspace and write “I went to the bowling alley”; this is an edit because it was not a grammar or spelling mistake, but rather a content change. Users were not penalized for edits. On the other hand, an example of an error is if I were to type “I like to splunk” and then backspace partially and correct the misspelled word to “I like to spelunk.” This is an error because of the misspelled word. In The Input Observer, errors count against the sample user. In order to find valuable strings of words on which to test input speed and accuracy, The Input Observer would record text input until a finalizing keystroke was made (punctuation, enter, etc.). Then, the system would check the string, requiring a minimum length of 24 characters and no edits. If the string was considered a valid test sample, the system would treat that one string as a sample input, recording the input speed and accuracy.
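The sampling rule described above can be sketched as a simple filter. The 24-character minimum and the no-edits requirement come from the summary; the exact set of finalizing keystrokes is my assumption.

```python
# Sketch of The Input Observer's sampling rule: keep a typed run as a sample
# only if it ended with a finalizing keystroke, contained no edits, and is at
# least 24 characters long. The finalizer set here is an assumption.

MIN_LENGTH = 24
FINALIZERS = (".", "!", "?", "\n")  # assumed finalizing keystrokes

def is_valid_sample(text, had_edits):
    return (text.endswith(FINALIZERS)
            and not had_edits
            and len(text) >= MIN_LENGTH)

print(is_valid_sample("I went to the bowling alley today.", had_edits=False))  # True
print(is_valid_sample("Too short.", had_edits=False))                          # False
```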

Evaluation:
When it came to evaluating the performance of The Input Observer, the team compared two scenarios. First, the team loaded The Input Observer on twelve computers in a test environment and provided a script for users to follow. Next, the team loaded The Input Observer on twelve computers in test users' homes and did not provide a script. Then, the team used objective, quantitative methods to compare the results of the two scenarios. The main point of emphasis regarding the comparison is that the team did not expect the results to match up. They expected the results to be different because they believe people perform differently "in the wild," when they are creating their own script as they go. This scenario is what the research team hoped to measure with The Input Observer. The results were as follows.





Discussion:
After reading “Taming Wild Behavior: The Input Observer for Obtaining Text Entry and Mouse Pointing Measures from Everyday Computer Use,” I believe that Abigail and Jacob have created a great test system that will provide much more useful results than a script-based text input and mouse movement speed and accuracy test. However, I am not happy with their evaluation method because they compared themselves to the situation that they were trying to prove a point about. The purpose of The Input Observer is to record results in the wild, which means that they should not compare their results to the test environment results. On the other hand, I do not see a viable alternative for testing their results, because of the wild, unscripted nature of their experiment setting. Therefore, I believe that comparing their results to the test-environment results was the only possible solution.