Thursday, September 13, 2012

Homework #3: Chinese Room


Introduction:
In “Minds, Brains, and Programs,” John Searle argues against certain aspects of Artificial Intelligence (AI) by comparing the situation to a unique thought experiment. Searle is from the Department of Philosophy at the University of California, Berkeley. He claims that full Artificial Intelligence is not possible, and he attempts to defend his claim against multiple responses from other very noteworthy universities.
Summary:
One of the first distinctions explained in the paper is the difference between weak Artificial Intelligence and strong Artificial Intelligence. Searle explains that weak AI, by definition, only simulates understanding. This would be similar to someone giving you step-by-step instructions on how to do something without you really understanding what you are doing. You will probably be able to complete the task, assuming that the instructions are thorough enough, but you will not actually understand what you did. Conversely, strong AI is true understanding. The metaphor Searle uses for this situation is an English-speaking man trapped in a Chinese room. The man is first given a set of Chinese characters. He is then given instructions in English that relate each Chinese character from the first set to a Chinese character in a second set. Then someone outside of the room engages in a conversation with the man in Chinese: they pass him a question, and he replies with an answer. The important point Searle makes is that the man does not actually understand the conversation taking place; he only has a formal list of instructions and follows them. If the man were given the same set of questions and answers in English, then he would truly understand the conversation, because he would know what they were talking about.
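To make the rule-following idea concrete, here is a toy sketch of my own (nothing like this appears in Searle's paper) where the "room" produces fluent-looking Chinese replies by pure table lookup:

```python
# Toy illustration (my own, not from Searle's paper) of rule-following
# without understanding: the "room" maps incoming Chinese questions to
# replies by pure table lookup, like the man following his English rules.
RULE_BOOK = {
    "你好吗?": "我很好。",          # hypothetical question/answer pair
    "你叫什么名字?": "我叫小明。",   # hypothetical question/answer pair
}

def chinese_room(question: str) -> str:
    """Produce a reply by pattern matching alone; no meaning is involved."""
    return RULE_BOOK.get(question, "对不起。")  # fallback when no rule matches

print(chinese_room("你好吗?"))  # fluent-looking output the "room" never understood
```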
Some critics of the paper have written responses, against which Searle defends his claim. For example, a group from Yale University devised a response termed “The Robot Reply.” In this response, the contributors from Yale argue that a robot could be placed in the room instead of the human. Instructions in Chinese would then be passed to the robot, which would look up the corresponding instruction in a language it knows and then perform some action. Examples of these actions could be hammering a nail, drinking water, opening a window, etc. The team from Yale argues that in this case the robot would surely understand the Chinese language, because it has a physical action associated with each instruction. Searle then defends his claim by stating that strong AI is defined as internal understanding, relying on nothing from the environment besides the input given. He points out that the team from Yale relies on actions taking place in the environment, which defeats the definition of strong AI.
Discussion:
The Chinese Room is a very interesting paper which raises key questions regarding the current state of Artificial Intelligence and future possibilities. I believe that most of the AI that I have been exposed to, or could imagine, falls into the weak AI category. I believe that the main problem with the strong AI definition and theory is that it bases its proof on the idea of understanding. However, understanding has many different levels. For example, if I read a book about a group of characters interacting in a fictional place, I would say that I understand what is happening. However, some of my understanding would be influenced by my past experiences related to the story line. What I mean to say is that someone else reading the same story would have a different understanding of what is going on. Does a reader focus on the big picture, noting the high-level significance of the events? Or does the reader strictly follow the immediate interactions between characters? The point that I wish to make is that Searle and his critics base their arguments around the idea of understanding, but there are so many levels of understanding. I believe that the English-speaking man in the example understands the Chinese characters he is interacting with to some degree, because he has a set of instructions explaining how to process them. However, I believe that he does not have the same level of understanding as a true Chinese speaker. Furthermore, if two Chinese speakers interacted with the English-speaking man, they might have different levels of understanding between them. If one Chinese speaker is more intelligent and experienced than the other, then they might have a higher level of understanding than the less intelligent, less experienced Chinese speaker. This does not mean that the less intelligent, less experienced Chinese speaker does not understand at all.

Sunday, September 9, 2012

Paper Reading #6: Intimacy in Long-Distance Relationships over Video Chat

Introduction:
Relationships are hard work. They take time, effort, and patience. Throw in geographic separation and they become even harder to maintain. "Intimacy in Long-Distance Relationships over Video Chat" is a research paper by a team of two researchers in Canada that evaluates the effectiveness of different forms of communication used in long distance relationships. Carman Neustaedter is an assistant professor at Simon Fraser University in British Columbia, Canada, whose main research area is Human-Computer Interaction. One interesting fact about Carman is that he enjoys embedding puzzles in his research papers, which he compares to Indiana Jones or The Da Vinci Code. Saul Greenberg is a professor in the Department of Computer Science at the University of Calgary. Saul mainly studies Human-Computer Interaction, specifically how technology plays a role in the everyday interactions of people.

Summary:
Carman and Saul devised a plan to interview a group of people in long distance relationships in order to discover how technology enhances the ability to maintain a relationship over a distance. They interviewed fourteen participants who were all in long distance relationships and were already using technology to stay connected to their significant others. The participants were split by gender, and some were in same-sex relationships. Most of the participants were college students, graduate students, or post-graduate researchers. The research team interviewed these participants individually in order to gain insight into how they were already using technology to stay connected to their significant others. Most of the participants used text messages as a way to stay in contact throughout the day. They would use phone calls for important or emotional conversations, or when they wanted each other's undivided attention. Finally, participants used video chat as a way to hang out and spend time with each other. Some participants would only use video chat for short conversations, but most would leave the connection open for extended periods of time while they performed everyday activities such as studying, doing laundry, or eating a meal. Furthermore, some participants would send affection through air-kisses or air-hugs. Fewer than half of the participants used video chat as a way to satisfy sexual desires; the rest cited shyness or privacy concerns as the reason they did not. One of the main things that participants agreed upon is that video chat allows body language cues to enhance the conversation.


Related Work:
There is a lot of related work done in the area of human interaction via digital devices. One of the main areas of concern regarding digital communication is privacy. Here are some related papers that investigate digital privacy further: "Over-exposed? Privacy patterns and considerations in online and mobile photo sharing," "Primitive emotional contagion," "Unpacking 'privacy' for a networked world," and "The taste of privacy: An analysis of college student privacy settings in an online social network." Another main area of related research is studying the effect of Facebook and emotions on relationships. Some papers that focus on this subject are: "The impact Facebook rituals can have on a romantic relationship," "Public displays of connection," and "More information than you ever wanted: Does Facebook bring out the green eyed monster of jealousy." Finally, some researchers focus on the actual creation of software and hardware devices that might enhance the quality of digital communication in: "Tangible interfaces for remote collaboration and communication," "ComTouch: design of a vibrotactile communication device," and "Emotional touch: a novel interface to display emotional tactile information to a palm."

Evaluation:
Carman and Saul used only qualitative, subjective forms of evaluation to measure the effectiveness of different forms of digital communication in long distance relationships. This was mainly because they were studying the users' level of acceptance of preexisting technologies used to communicate with significant others in long distance relationships. They would personally interview participants, either in person or via video chat, and ask open-ended questions about different digital communication technologies and the roles they played in maintaining relationships when the partners were separated geographically.

Discussion:
I picked this paper as my final paper reading blog because it relates to my life. I have been dating the same girl for over nine years, and we have been faced with distance barriers before. Of the nine summers that we have spent together, we have spent five of them in different cities, or sometimes different states. We also spent one whole year apart when I started college a year before her. In the early part of our relationship, video chatting was not as readily available, so we stuck to phone calls. In more recent summers we have used video chatting to varying degrees. One area of frustration for a lot of video chat users is connection speed issues. This paper mentions this, but it does not focus on this issue as a main point of research. I believe that video chatting is great because it allows users to view and interpret contextual cues, which can drastically change how messages are interpreted. For example, body language can show comprehension, confusion, anger, sadness, tiredness, etc. This information is lost when talking over the phone alone. However, I sometimes got very frustrated when bad connections caused conversations to be cut short or damaged. This paper did a great job of evaluating users' thoughts and feelings about preexisting technologies, but it did not focus on ways of improvement. Most of the related work focused on internet privacy, Facebook interactions, or ways of improving digital communication, rather than users' level of acceptance of preexisting technologies in long distance relationships. Therefore, I believe that this paper is novel because of the way it combines ideas, but it is lacking in suggestions for improvement.


Thursday, September 6, 2012

Paper Reading #5: Touch me once and I know it's you! Implicit Authentication based on Touch Screen Patterns


Introduction:
As most Android users know, the draw-a-shape pattern lock is quick and easy to use. However, it is not as safe as one might assume. First, an attacker could easily watch the phone owner enter the shape and then mimic it. Second, an attacker could simply look at the smudge patterns on the screen and decipher the password pattern. These are serious flaws in a pattern-based password, and in "Touch me once and I know it’s you! Implicit authentication based on touch screen patterns,” a research team attempts to address them.
Summary:
Alexander De Luca, Alina Hang, Frederik Brudy, Christian Lindner, and Heinrich Hussmann, of the University of Munich, are the researchers behind this paper. The team attempts to differentiate between different users keying in the same pattern password, and then uses this ability to strengthen pattern-based passwords by checking for the individual user's identity. The team first observed user input information, such as speed, pressure, and path taken, on trivial tasks such as horizontal and vertical swipes. Once they gathered this data, they developed a security application that incorporates these additional features into the existing pattern-based password program. The team then performed a second experiment using the newly created program, testing input from both legitimate users and attackers. The categories tracked were false negatives, true negatives, false positives, and true positives, which were then used to calculate an overall accuracy rate.
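To illustrate the general idea (this is my own simplified sketch, with made-up feature names and an invented threshold, not the team's actual classifier), a login attempt could be compared against the owner's stored touch profile like this:

```python
import math

# Hypothetical per-attempt features captured while the pattern is drawn.
# The real system uses richer data (pressure, speed, and path over time);
# the feature names and the threshold below are illustrative assumptions.
def distance(attempt: dict, profile: dict) -> float:
    """Euclidean distance between an attempt and the owner's average profile."""
    keys = ("avg_speed", "avg_pressure", "path_length")
    return math.sqrt(sum((attempt[k] - profile[k]) ** 2 for k in keys))

def accept(attempt: dict, profile: dict, threshold: float = 0.5) -> bool:
    """Accept only if the correct pattern is also drawn *like* the owner draws it."""
    return distance(attempt, profile) < threshold

owner_profile = {"avg_speed": 1.2, "avg_pressure": 0.6, "path_length": 4.1}
attacker_try = {"avg_speed": 0.7, "avg_pressure": 0.9, "path_length": 4.3}
print(accept(attacker_try, owner_profile))  # False: same pattern, different dynamics
```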
 
Related Work:
A lot of research has been done regarding biometric verification techniques, and I will name a few examples here. First, “Biometric verification at a self service interface” investigates the use of physiological biometrics to securely identify users. Second, a research team investigates “How biometrics will fuse flesh and machine,” which further examines physiological biometric verification methods. Then, a research team investigates the user acceptance of biometric verification methods in “Employee acceptance of computerized biometrics security systems.” Another paper that addresses user acceptance of biometric verification systems is “Theoretical examination of the effects of anxiety and electronic performance monitoring on biometric security systems.” Next, in “A user study using images for authentication,” a research team studies using pictures as passwords altogether. An application of a very specific physiological biometric test is used in “An iris biometric system for public and personal use.” In “An introduction to evaluating biometric systems,” researchers study the broad topic and application of physiological biometrics. “Bodycheck: Biometric access protection devices and their programs put to the test” is a study in which a research team investigates both physiological and behavioral biometrics to improve security functions. A study of privacy and big-brother syndrome is conducted in “Biometrics in the mainstream: what does the U.S. public think.” Finally, a team studies the memorability benefit of picture passwords in “The memorability and security of passwords – some empirical results.”
Evaluation:
To evaluate the first part of the research, covering the baseline information about touch pressure and speed, the team used quantitative, objective methods. This allowed the team to gather hard evidence for their hypothesis. The team then used quantitative, objective methods again when measuring how accurately the system distinguished legitimate users from attackers keying in the correct pattern password. The results of the experiment can be seen in the following chart.
True Positives: 398
False Negatives: 92
True Negatives: 852
False Positives: 231
Accuracy: 77%
False Rejection Rate: 19%
False Acceptance Rate: 21%
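For reference, the false rejection and false acceptance rates follow directly from the raw counts above. This is my own quick calculation using the standard formulas; the paper's 77% accuracy figure may be computed somewhat differently than the textbook definition shown here.

```python
# Standard confusion-matrix rates from the counts reported above.
tp, fn, tn, fp = 398, 92, 852, 231

false_rejection_rate = fn / (tp + fn)        # legitimate users wrongly rejected
false_acceptance_rate = fp / (fp + tn)       # attackers wrongly accepted
accuracy = (tp + tn) / (tp + fn + tn + fp)   # textbook definition; the paper reports 77%

print(f"FRR: {false_rejection_rate:.0%}")    # ~19%
print(f"FAR: {false_acceptance_rate:.0%}")   # ~21%
print(f"Accuracy: {accuracy:.0%}")           # ~79% by this formula
```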
Discussion:
I believe that this research paper did a great job of investigating a cheap and logical solution to an everyday problem. The main difference between the related works and this study is the type of biometrics used. Most of the related studies investigate physiological biometrics, such as fingerprints and retina scans. This study instead investigates behavioral biometrics, such as pressure and speed. The two main benefits of this approach are 1) no additional hardware is required beyond an Android smartphone, and 2) no privacy questions come into play regarding the storage of physiological biometric identity information. The main problem that I see with this study is that phones are often used by multiple people. For example, friends and significant others know each other’s password patterns, and this research paper does not account for that situation.

Wednesday, September 5, 2012

Paper Reading #4: Reducing Compensatory Motions in Video Games for Stroke Rehabilitation

Introduction:
I remember when I was a young kid and my mom would yell at me to get off the video games, but times have changed. Recently, many video game creators have incorporated physical activity into video games. This is most easily seen in the advancements Nintendo has made with the Wii gaming console. Progress has not stopped there, either. Video games are now being used to provide a low-cost alternative to costly physical therapy sessions. The idea is that if a patient would benefit from a lot of physical training, but cannot afford to attend physical therapy for all of the recommended time, then they could use smart video games to help bridge the gap.

Summary:
In "Reducing Compensatory Motions in Video Games for Stroke Rehabilitation," two researchers attempt to push the bar even further. Gazihan Alankus and Caitlin Kelleher use video games to help stroke victims perform physical therapeutic shoulder exercises and detect compensatory movements such as patients leaning to a side or backwards to falsely increase range of motion. They accomplish this feat by first observing stroke rehabilitation patients and noting the most common compensatory motions. Then, they designed a wearable, physical sensor network that consists of a series of Wii remotes place on patient's arms and torso. Next, they designed a video game that would use simple strategic exercises to provide physical therapy to the stroke patients. 



Finally, they incorporate the spine angle sensors as compensation detectors and modify the video game to reward or punish the patient based on the level of compensation. The game they had the most success with features a hot air balloon steadily traveling toward the right-hand side of the screen. The patient can alter the height of the hot air balloon by raising one arm. Parachutists are suspended across the screen, and the patient must attempt to rescue them by picking them up. Obstacles are introduced to encourage sustained holds and to prevent jerking motions. If a patient compensates by leaning their torso to one side or backwards, the hot air balloon tilts. If the balloon tilts past a certain amount, or if the patient crashes it, their score goes down.
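As a rough sketch of how I imagine the reward/punish logic working (the angle threshold and point values here are my own placeholders, not numbers from the paper):

```python
# Rough sketch of the compensation check described above; the 15-degree
# threshold and the 5-point penalty are placeholders, not the paper's numbers.
LEAN_THRESHOLD_DEG = 15.0

def update_balloon(arm_angle_deg, torso_lean_deg, score):
    """Raise the balloon with the arm; tilt it and dock points when the torso compensates."""
    balloon_height = arm_angle_deg / 90.0          # 0 (arm down) .. 1 (arm fully raised)
    balloon_tilt = torso_lean_deg                  # leaning the torso visibly tilts the balloon
    if abs(torso_lean_deg) > LEAN_THRESHOLD_DEG:   # compensatory motion detected
        score -= 5                                 # punish compensation
    return balloon_height, balloon_tilt, score

# Balloon rises with the raised arm, but leaning 20 degrees costs the player points.
print(update_balloon(arm_angle_deg=60, torso_lean_deg=20, score=100))
```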




Related Work:
One piece of related work that discusses video game based therapy is "Virtual rehabilitation after stroke." Then, "Optimizing engagement for stroke rehabilitation using serious games" also discusses video games used for therapy. For a slightly different approach, one team of researchers investigates motivational factors in video game based therapy in "Design strategies to improve patient motivation during robot-aided rehabilitation." However, I think that Gazihan and Caitlin did a pretty good job considering and mentioning motivational factors. Another related piece of research is "Use of a low-cost, commercially available gaming console (Wii) for rehabilitation of an adolescent with cerebral palsy." I thought that this article seemed interesting because I have a family member with cerebral palsy; she has mentioned the physical training exercises she is instructed to do, and a video game version would be great. Similar to the cerebral palsy focused research paper, "Improving patient motivation in game development for motor deficit rehabilitation" investigates video game based therapy methods for patients with motor disabilities. One group attempts to use an older gaming system to improve economic accessibility in "Feasibility of using the Sony PlayStation 2 gaming platform for an individual poststroke: a case report." More related papers are "Game design in virtual reality systems for stroke rehabilitation," "PlayStation 3 based tele-rehabilitation for children," "Tailoring virtual reality technology for stroke rehabilitation: a human factors design," and "Effects of intensity of rehabilitation after stroke: a research synthesis."

Evaluation:
For the evaluation portion of this paper, the research team used a combination of objective and subjective, qualitative and quantitative analysis. For the accuracy of the Wii remote sensors, they used an objective, quantitative analysis. This was done because objective, quantitative analysis tends to best represent the accuracy of physical sensors. Then, when the researchers conducted a case study, they performed subjective, qualitative analysis on the research patients. This subjective feedback, combined with direct observation, allowed the researchers to modify and tailor the game to most efficiently and effectively prevent compensatory movements in poststroke rehabilitation therapy.

Discussion:
The main thing that I like about this article is that it attempts to address a very important problem. My granddad had multiple strokes before he passed away, and I wish something like this had existed to help him maximize his recovery therapy. The main difference between this research project and the ones mentioned in the related work section is its focus on compensatory movements. Some of the listed research papers focus on the possibility of tele-presence physical therapy, but that still requires a physician on the other end of the connection, which is costly to use at high frequency. The other research papers focus on using video games for rehabilitation but do not adequately address compensatory movements. This paper is definitely novel because it shapes the video game around the idea of rewarding good form and preventing compensation.

Saturday, September 1, 2012

Paper Reading #3: Augmenting the Scope of Interactions with Implicit and Explicit Graphical Structures

Introduction:
In this blog post, I will discuss a CHI research paper that looks to improve group-based graphical editing. “Augmenting the Scope of Interactions with Implicit and Explicit Graphical Structures” is a scholarly research paper by Raphael Hoarau and Stephane Conversy of the University of Toulouse in France. Raphael is a PhD student studying Human-Computer Interaction at the Laboratory for Interactive Computing at the University of Toulouse. Stephane is an associate professor at the same laboratory, focusing his research on Human-Computer Interaction. In this paper, Raphael and Stephane present ManySpector, a software program they created to improve group-based graphical editing, along with a research study testing user interactions with, and acceptance of, the program. In this blog, I will discuss the overview of Raphael and Stephane’s research paper, related work, how they evaluated ManySpector, and my opinion regarding their research, findings, and presentation.

Summary:

Before the research team could test human perception of group-based graphical editing, they first had to develop a software program capable of manipulating grouped objects the way they wanted. The idea behind their program is to allow users to group objects multiple times, display the attributes of grouped objects, and then edit those attributes all at once. For example, if I have five shapes grouped together on the screen and three text fields grouped together, and I want to change the color of all of the shapes to green, I simply select the shape group, which displays the group's attributes and values, and then change the color attribute to green.
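To illustrate the shared-attribute idea, here is a minimal sketch of my own, with made-up class names, not ManySpector's actual implementation:

```python
# Minimal sketch of editing a shared attribute through a group; the class
# names and structure are my own, not ManySpector's.
class Shape:
    def __init__(self, kind, color):
        self.kind = kind
        self.color = color

class Group:
    def __init__(self, members):
        self.members = members

    def attributes(self):
        """Collect the values the members currently have for each attribute."""
        return {"color": {m.color for m in self.members}}

    def set_attribute(self, name, value):
        """Change one attribute on every member of the group at once."""
        for m in self.members:
            setattr(m, name, value)

shapes = Group([Shape("circle", "red"), Shape("square", "blue"), Shape("star", "red")])
print(shapes.attributes())              # colors currently in the group: red and blue
shapes.set_attribute("color", "green")  # one edit applied to every member
print(shapes.attributes())              # now every shape is green
```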





There are already systems that allow group editing, but Raphael and Stephane believe that these systems do not provide enough editing capability. For example, Word allows you to group objects together, but the only editing options it then provides are ungroup, scale, and rotate, which treat the group as if it were one object. Raphael and Stephane’s point is that some users may want to group a set of objects and then rotate each individual object to a designated, uniform angle independently. Manipulations such as these cannot be done in the graphical editors currently on the market.

Evaluation:

In order to evaluate the usability of their software program, Raphael and Stephane conducted a research study consisting of a tutorial and two exercises completed without help. The tutorial lasted fifteen minutes and covered basic grouping and editing skills. Then, they asked the participants to create a certain scene or object. They also asked the participants to think out loud so that the study administrators could follow their train of thought at any given moment. At the end of the study, Raphael and Stephane asked the participants to fill out a subjective, qualitative questionnaire that used a Likert scale to rate their interactions with the software. The majority of the participants stated that they would need more training to be comfortable using the tool and that they did not feel proficient after so little time.

Discussion:

I believe that group-based graphical editing is an area that is ripe for improvement, but Raphael and Stephane’s ManySpector is not the final product to deliver it. There needs to be a way of making the grouping relationships more intuitive, requiring less training to grasp the concepts. However, when it comes to the effort involved in creating such a software program, I am very impressed. Raphael and Stephane created a rather robust application for testing and proof of concept. I am definitely interested in trying out a piece of software like this.

Paper Reading #2: Taming Wild Behavior - The Input Observer for Obtaining Text Entry and Mouse Pointing Measures from Everyday Computer Use


Introduction:
The purpose of this blog entry is to present The Input Observer, an experiment to measure real-world text input and mouse movement speed and accuracy. “Taming Wild Behavior: The Input Observer for Obtaining Text Entry and Mouse Pointing Measures from Everyday Computer Use” is a scholarly paper by Abigail C. Evans and Jacob O. Wobbrock of the University of Washington. The main goal of The Input Observer is to measure text entry and mouse movement speed and accuracy “in the wild,” or outside of a controlled test environment. Abigail is a research assistant at the University of Washington who specializes in graphic design, web development, and knitting (sweaters, gloves, and such). Jacob is an associate professor at the University of Washington who specializes in Human-Computer Interaction. A researcher named Hurst performed a similar experiment, tracking mouse movements and text input, but those tests were limited to one application on the computer. The Input Observer, in contrast, gathers information directly from the operating system, so it is not tied to any particular test application. In this blog, I will discuss the overview of Abigail and Jacob’s research paper, related work, how they evaluated The Input Observer, and my opinion regarding their research, findings, and presentation.

Related Work:
One piece of related work studying pointing performance, "Accurate measurements of pointing performance in situ observations" does a pretty good job measuring pointer input, but it seems to be scripted in the sense that the observation occurs in a controlled program. Also, "Automatically detecting pointing performance" is another example of an in situ study of mouse performance. A slightly different take on pointing performance is considered in "Understanding pointing problems in real world computing environments." Similar studies are done in "Accuracy measures for evaluating computer pointing devices" and "Optimality in human motor performance." Furthermore, similar work is done in "Goal crossing with mice and trackballs for people with motor impairments" and "Instrumenting the crowd: using implicit behavioral measures to predict task performance." Researchers investigate the problems and advantages of in-the-wild studies in "Being in the thick of in-the-wild studies: the challenges and insights of researcher participation." This is also discussed in "Ethnography and participation observation" and "Into the wild: challenges and opportunities for field trial methods."

Summary:
As I mentioned before, the main purpose of The Input Observer is to observe input speed and accuracy in the wild. However, in typical input speed and accuracy tests, a predetermined script is used to ensure consistent results. Abigail and Jacob believe that this affects test results, because users have to read and copy the test text instead of creating the text as they go. Most of the time in a real-world setting, we invent the text as we type. In order to truly measure these characteristics in the wild, no scripts could be used, which presented a challenge from the very beginning. To overcome this, the researchers devised an innovative way to measure text input, including edits and errors. An example of an edit is if I type “I went to the supermarket,” but then I backspace and write “I went to the bowling alley”; this counts as an edit because it was not a grammar or spelling mistake, but rather a content change. Users were not penalized for edits. On the other hand, an example of an error is if I type “I like to splunk” and then backspace partway to fix the misspelled word, producing “I like to spelunk.” This counts as an error because of the misspelled word. In The Input Observer, errors count against the sample user. In order to find valuable strings of words on which to measure input speed and accuracy, The Input Observer records text input until a finalizing keystroke is made (punctuation, Enter, etc.). Then, the system checks the string, requiring a minimum length of 24 characters and no edits. If the string is considered a valid test sample, the system treats that one string as a sample input, recording the input speed and accuracy.
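Here is a rough sketch of that segmenting rule as I understand it; this is my own simplification, with edit detection reduced to "no backspaces," not the paper's actual logic.

```python
# Simplified sketch of the segment filter described above: a candidate
# segment must end with a finalizing keystroke, contain at least 24
# characters, and include no edits (reduced here to "no backspaces").
MIN_LENGTH = 24
TERMINATORS = {".", "!", "?", "\n"}   # punctuation or Enter finalizes a segment

def is_valid_sample(keystrokes: list[str]) -> bool:
    """Return True only if the recorded segment qualifies as a test sample."""
    if not keystrokes or keystrokes[-1] not in TERMINATORS:
        return False
    if "BACKSPACE" in keystrokes:      # an edit occurred; discard the segment
        return False
    text = "".join(k for k in keystrokes if k not in TERMINATORS)
    return len(text) >= MIN_LENGTH

print(is_valid_sample(list("I am creating my own text as I go.")))  # True
```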

Evaluation:
When it came to evaluating the performance of The Input Observer, the team compared two scenarios. First, the team loaded The Input Observer on twelve computers in a test environment and provided a script for users to follow. Next, the team loaded The Input Observer on twelve computers in test users’ homes and did not provide a script for them to follow. Then, the team used objective, quantitative methods to compare the results of the two scenarios. The main point of emphasis regarding the comparison is that the team did not expect the results to match up. They expected the results to differ because they believed people would perform differently “in the wild,” when they are creating their own text as they go, and this is exactly the scenario the research team hoped to measure with The Input Observer. The results of this comparison are reported in the paper.





Discussion:
After reading “Taming Wild Behavior: The Input Observer for Obtaining Text Entry and Mouse Pointing Measures from Everyday Computer Use,” I believe that Abigail and Jacob have created a great test system that will provide much more useful results than a script-based text input and mouse movement speed and accuracy test. However, I am not happy with their evaluation method because they compared themselves to the situation that they were trying to prove a point about. The purpose of The Input Observer is to record results in the wild, which means that they should not compare their results to the test environment results. On the other hand, I do not see a viable alternative for testing their results, because of the wild, unscripted nature of their experiment setting. Therefore, I believe that comparing their results to the test-environment results was the only possible solution.

Wednesday, August 29, 2012

Paper Reading #1: TapSense - Enhancing finger interaction on touch surfaces

Introduction:
"TapSense: Enhancing finger interaction on touch surfaces," is a research paper from the CHI conference, written by Chris Harrison, Julia Schwarz, and Scott E. Hudson of the CHI Institute at Carnegie Mellon University. Julia Schwarz is a PhD student at Carnegie Mellon University, studying in the CHI Institute. She is also a very good skier. In fact, she is so good that she was a ski instructor for a while before going to graduate school. Here is a very impressive video of her doing a ton of awesome ski tricks while juggling. Chris Harrison is also a PhD student at Carnegie Mellon University, whose research focuses on mobile interaction techniques and input technologies.   Before Carnegie Mellon, Chris earned his bachelors and masters of science in Computer Science from NYU. Scott Hudson is a CHI professor at Carnegie Mellon, and sponsor of Chris and Julia's research project. In fact, he is the director and founder of the CHI (HCII) PhD program at Carnegie Mellon University.

Summary:
In "TapSense", the team modifies two touch-screen devices in order to provide more types of interaction with touch displays. They are able to differentiate between a finger tip, finger pad, nail, and knuckle tap on a touch screen. This remarkable feat is accomplished by attaching a medical stethoscope and an electret condenser microphone to a touch-screen device. An electret condenser microphone is a type of microphone which does not require polarizing power supply, but instead uses a permanently charged material. They use the sound recorded by this setup to differentiate between the acoustic sound produced by tapping different parts of a human hand on a touch screen device. Below in figure 1, you can see example hand touches possible by using TapSense.


The idea behind differentiating between different hand motions on a touch screen is to provide more input possibilities. These input variations could be used as "right-click" or "scrolling" features. Current "right-click" features on touch-screen devices involve holding or double-tapping motions, which can sometimes be counterintuitive. The research team hopes to use TapSense to help improve user experiences with touch devices.
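As a toy sketch of the idea (the feature ranges and action mappings below are invented, not TapSense's trained classifier), a classified tap type could be dispatched to a different action like this:

```python
# Toy sketch: classify the tap from acoustic features, then map each tap
# type to a different action. The feature thresholds are invented
# placeholders, not TapSense's actual classifier.
def classify_tap(peak_freq_hz: float, decay_ms: float) -> str:
    if peak_freq_hz > 3000:
        return "nail"         # hard, high-frequency impact
    if peak_freq_hz > 1500:
        return "tip"
    return "knuckle" if decay_ms < 20 else "pad"

ACTIONS = {
    "pad": "primary tap",
    "tip": "scroll",
    "nail": "context menu",   # an acoustic "right click"
    "knuckle": "undo",
}

tap = classify_tap(peak_freq_hz=3400, decay_ms=12)
print(tap, "->", ACTIONS[tap])   # nail -> context menu
```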

Related Work:
All three of the contributing authors of this paper are active in the HCI field, including many other research projects. The main related paper by Chris Harrison and Julia Schwarz that caught my eye was "Phone as a Pixel: Enabling Ad-Hoc, Large-Scale Displays using Mobile Devices," in which they write a software program to create a large-screen display using individual mobile phones as pixels. As for related work not mentioned in this paper, there are a number of papers published on enhancing touch surface interaction in one way or another. "TeslaTouch: electrovibration for touch surfaces" [1] describes how one team of researchers uses electrovibration to provide tactile feedback on a touch surface, producing different sensations as the finger moves across the screen. In "Exploring physical information cloth on a multitouch table" [2], the researchers take a step away from the traditional touch surface and create a drapable sensor cloth that can be used for a variety of different touch capabilities. Another different, but related, research paper, "BubbleWrap: a textile-based electromagnetic haptic display" [3], explores the possibility of creating touch surfaces that have different resistances or firmnesses. This would allow them, for example, to require users to alter the touch pressure they use to press a button.

Other related works:
"Evaluating tactile feedback and direct vs. indirect stylus input in pointing and crossing selection tasks" [4]
"Low-cost multi-touch sensing through frustrated total internal reflection" [5]
"Transformed up-down methods in psychoacoustics" [6]
"ToolGlass and magic lenses: The see-through interface" [7]
"ShapeTouch: leveraging contact shape on interactive surfaces" [8]
"HoloWall: designing a finger, hand, body, and object sensitive wall" [9]
"Rubbing and tapping for precise and rapid selection on touch-screen displays" [10]
 

Evaluation:
In order to evaluate how well the system distinguishes taps from different parts of the hand, the team uses quantitative, objective sensor readings of the acoustic signals. When assessing the overall functionality of the TapSense program, they likewise rely mostly on quantitative, objective evaluation, mainly because the accuracy of the application is easy to test directly. They also speak of the comparison to a single-finger touch input method, but this appears to be a relative comparison that includes personal opinion. An example of this can be seen when the team compares their tool to other tools that provide similar functionality. One alternative uses a wristband sensor that measures impact force and movement range, which the team discredits because it requires users to drastically change the way they interact with a touch screen by adding a wristband.

Discussion:
The TapSense team came up with a novel concept and a great proof of concept with this touch-device modification system. While the TapSense system used in this research project requires an external stethoscope and microphone, new phones could integrate smaller, cheaper, and more efficient hardware embedded in the phone itself. Another great thing about TapSense is that computer and mobile-device enthusiasts will actually use its features. Some inventions require too much change in normal activity for society to fully adopt them. TapSense requires such a small change in user input, which is why I believe users are more likely to take advantage of its features. The main area for improvement that I noticed was multi-touch TapSense applications. At the time of the TapSense publication, the system could not differentiate between double finger taps and double knuckle taps, because the separate taps dissipate the acoustic response in different, unpredictable ways.

Sources:

  1. Bau, Olivier and Poupyrev, Ivan and Israr, Ali and Harrison, Chris. TeslaTouch: electrovibration for touch surfaces. In UIST '10. ACM. pp. 283-292
  2. Mikulecky, Kimberly and Hancock, Mark and Brosz, John and Carpendale, Sheelagh. Exploring physical information cloth on a multitouch table. In ITS '11. ACM. pp. 140-149
  3. Bau, O., U. Petrevski, and W. Mackay. BubbleWrap: a textile-based electromagnetic haptic display. in CHI'2009, Ext. Abstracts. 2009: ACM. pp. 3607-3612
  4. Forlines, C. and R. Balakrishnan. Evaluating tactile feedback and direct vs. indirect stylus input in pointing and crossing selection tasks. in CHI'08. 2008: ACM. pp. 1563–1572
  5. Han, J. Low-cost multi-touch sensing through frustrated total internal reflection. in UIST'05. 2005: ACM. pp. 115-119
  6. Levitt, H., Transformed up-down methods in psychoacoustics. The Journal of the Acoustical society of America, 1971. 49(2): pp. 467-477.
  7. Bier, E. A., Stone, M. C., Pier, K., Buxton, W., and DeRose, T. D. Toolglass and magic lenses: The see-through interface. In Proc. SIGGRAPH, 73–80. ACM, 1993.
  8. Cao, X., Wilson, A. D., Balakrishnan, R., Hinckley, K., and Hudson, S. E. ShapeTouch: leveraging contact shape on interactive surfaces. In Proc. of Tabletop, 129–136. IEEE, 2008
  9. Matsushita, N. and J. Rekimoto. HoloWall: designing a finger, hand, body and object sensitive wall. in UIST'97. 1997: ACM. pp. 209-218
  10. Olwal, A., S. Feiner, and S. Heyman. Rubbing and tapping for precise and rapid selection on touch-screen displays. in CHI'08. 2008: ACM. pp. 295-304