Saturday, March 17, 2007

Talk by John Canny: Toward Natural Human-Computer Interaction:

The talk will be held 2/9/2007 at 2:00pm at the UCI McDonnell Douglas Engineering Auditorium. There will be no Informatics seminar as a result. More details on the talk are available at the link below.

Abstract: This talk covers several current projects at the Berkeley Institute of Design (BID) on more natural human-machine interaction. Multiview is a video-conferencing system that preserves eye contact in group situations and closely mimics face-to-face interaction for certain high-stakes communication tasks. We are pursuing several projects on technology for developing regions. This work covers language learning, story writing, speech interfaces and micro-finance. In this setting, "naturalness" is particularly important and strongly tied to the context of the interaction.

The remainder of the talk will discuss a general framework for natural interaction. The key again is to expose and use context. We argue that context must be studied on 3 planes (roughly time scales). One of these, the activity plane, has been reified in a prototype called CAAD that builds models of user work activity from desktop logs. The other two planes are being explored through current and inter-related projects on natural speech interfaces and story understanding.

About the Speaker: John Canny is a Professor in Computer Science at UC Berkeley, working in human-computer interaction, ubicomp and privacy. He holds the Paul and Stacy Jacobs Distinguished Professorship. His 1987 Ph.D. from MIT received the ACM Doctoral Dissertation Award. His publications span HCI, ubiquitous computing, computer vision, robotics, cryptography, IR, and CSCW, with best paper awards in three of these areas.




http://luci.ics.uci.edu/blog/archives/2007/02/talk_by_john_ca.html




Human Computer Interaction

Over the holidays I read a magnificent book titled Designing Interactions by Bill Moggridge. It’s an incredible collection of history combined with interviews with many of the great computer interface designers and entrepreneurs of the past 30+ years. The stories are superb, the interviews well done, and the pictures incredible. It’s a must-read for anyone serious about designing computer software of any sort. It’s a big book – I petered out about two thirds of the way through it as Moggridge shifted from storytelling to predicting the future – but I fault myself for trying to consume it all at once (and expect I’ll go back and try some of the later chapters again).

The other day, as I pounded away on my keyboard and moved my mouse around the screen clicking away feverishly, my mind started wandering on the “there must be a better way” theme. It wandered to an afternoon I spent playing Guitar Hero with my friend Dave Jilk, and it occurred to me that three companies have come out of my fraternity at MIT (ADP) that built businesses commercializing unique models of human computer interaction.

Guitar Hero from Harmonix Music Systems is the first, and one that Dave and I were both involved in early in its life. While the first-person shooter video game metaphor has been around forever (think Asteroids and Space Invaders – the category was NOT created by Doom – just made more fun and bloodier), I eventually wore out on video games because I got bored of killing things. When Harmonix came out with Guitar Hero, I got a copy but didn’t do anything with it. A few months ago I finally started playing with it and immediately became addicted. So have a bunch of other people, as it became one of the top ten games of 2006. Interestingly, much of the buzz around the Nintendo Wii has been similar – rather than using a joystick to move a killing machine around a fantasy world, we get to interact with games much more physically, through a different interaction metaphor.

The Roomba from iRobot is another example of this. Colin Angle and his partners ultimately created a consumer based robot that does one thing extremely well – vacuum your floor. Metaphorically, they’ve simply wrapped a bunch of software in a consumer device that enables a radically different and fascinating human computer interaction model. If you’ve got a Roomba and a dog, you’ve also learned that the animal computer interaction model is a blast to observe.

Oblong is another company that came out of someone’s brain that resided at 351 Mass Ave in Cambridge (yup – it must have been something in the water). The best way to describe Oblong is to ask: do you remember Tom Cruise in Minority Report? Remember the wall-sized computer he controlled with his hands? That’s what John Underkoffler and his partners at Oblong have created.

It didn’t dawn on me how important this was until I started putting the pieces together: our current UI metaphor – which started at Xerox, was popularized by Apple, and mainstreamed by Microsoft – is starting to grow long in the tooth. I’ve been using a T-Mobile Dash for the past few months and while I love the device, the Microsoft UI is immensely frustrating. I’ve trained myself to be incredibly efficient with it (and largely control the phone functions with speech), but the iPhone bashed me over the head with the current level of fatigue that I (and I expect others) have with current UI metaphors.

While Amy likes to ask me - when she gets frustrated with Windows – “what was wrong with DOS and the command line anyway?” it prompts me to wonder why I’m sitting at my desk pounding away at a keyboard. There are – and will be – better ways. It’ll be fun to look back N years from now and say “boy – that WIMP UI sure was quaint” kind of the way we think of “C:\>” today.

Update: This morning, as I was reading the Wall Street Journal Online, I saw Walt Mossberg’s review of Enso from Humanized. Excellent retro stuff – now I get to type “Run Firefox” to run Firefox.



http://www.feld.com/blog/archives/002154.html

Thursday, March 1, 2007

SWAN System To Help Blind And Firefighters Navigate Environment

Imagine being blind and trying to find your way around a city you've never visited before -- that can be challenging for a sighted person. Georgia Tech researchers are developing a wearable computing system called the System for Wearable Audio Navigation (SWAN) designed to help the visually impaired, firefighters, soldiers and others navigate their way in unknown territory, particularly when vision is obstructed or impaired. The SWAN system, consisting of a small laptop, a proprietary tracking chip, and bone-conduction headphones, provides audio cues to guide the person from place to place, with or without vision.

"We are excited by the possibilities for people who are blind and visually impaired to use the SWAN auditory wayfinding system," said Susan B. Green, executive director, Center for the Visually Impaired in Atlanta. "Consumer involvement is crucial in the design and evaluation of successful assistive technology, so CVI is happy to collaborate with Georgia Tech to provide volunteers who are blind and visually impaired for focus groups, interviews and evaluation of the system."

Collaboration

In an unusual collaboration, Frank Dellaert, assistant professor in the Georgia Tech College of Computing, and Bruce Walker, assistant professor in Georgia Tech's School of Psychology and College of Computing, met five years ago at new faculty orientation and discussed how their respective areas of expertise -- determining the location of robots, and audio interfaces -- were complementary and could be married in a project to assist the blind. The project progressed slowly as the researchers worked on it as time allowed and sought funding. Early support came through a seed grant from the Graphics, Visualization and Usability (GVU) Center at Georgia Tech, and recently Walker and Dellaert received a $600,000 grant from the National Science Foundation to further develop SWAN.

Dellaert's artificial intelligence research focuses on tracking and determining the location of robots and developing applications to help robots determine where they are and where they need to go. There are similar challenges when it comes to tracking and guiding robots and people. Dellaert's robotics research usually focuses on military applications since that is where most of the funding is available.

"SWAN is a satisfying project because we are looking at how to use technology originally developed for military use for peaceful purposes," says Dellaert. "Currently, we can effectively localize the person outdoors with GPS data, and we have a working prototype using computer vision to see street level details not included in GPS, such as light posts and benches. The challenge is integrating all the information from all the various sensors in real time so you can accurately guide the user as they move toward their destination."
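The real-time sensor integration Dellaert describes can be illustrated with the simplest building block of such systems: inverse-variance weighting of two noisy position estimates, the core step of a Kalman-style update. This is a toy sketch of the general principle, not SWAN's actual algorithm, and the numbers are invented.

```python
def fuse(est1, var1, est2, var2):
    """Inverse-variance weighted fusion of two noisy estimates of the
    same quantity: the more certain sensor gets the larger weight, and
    the fused variance is smaller than either input variance."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * est1 + w2 * est2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2)

# GPS says the user is at 10.0 m along the street (variance 4.0);
# a vision landmark match says 9.0 m (variance 1.0).
pos, var = fuse(10.0, 4.0, 9.0, 1.0)   # pos = 9.2, var = 0.8
```

The fused estimate lands closer to the more reliable vision measurement, which is exactly why adding street-level vision to GPS improves localization.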

Walker's expertise in human computer interaction and interface design includes developing auditory displays that indicate data through sonification or sound.

"By using a modular approach in building a system useful for the visually impaired, we can easily add new sensing technologies, while also making it flexible enough for firefighters and soldiers to use in low visibility situations," says Walker. "One of our challenges has been designing sound beacons easily understood by the user but that are not annoying or in competition with other sounds they need to hear such as traffic noise."

SWAN System Overview

The current SWAN prototype consists of a small laptop computer worn in a backpack, a tracking chip, additional sensors including GPS (global positioning system), a digital compass, a head tracker, four cameras and a light sensor, and special headphones called bone phones. The researchers selected bone phones because they send auditory signals via vibrations through the skull without plugging the user's ears, an especially important feature for the blind, who rely heavily on their hearing. The sensors and tracking chip worn on the head send data to the SWAN applications on the laptop, which compute the user's location and the direction he is looking, map the travel route, then send 3-D audio cues to the bone phones to guide the traveler along a path to the destination.

The 3-D cues sound like they are coming from about 1 meter away from the user's body, in whichever direction the user needs to travel. The 3-D audio, a well-established sound effect, is created by taking advantage of humans' natural ability to detect inter-aural time differences. The 3-D sound application schedules sounds to reach one ear slightly faster than the other, and the human brain uses that timing difference to figure out where the sound originated.
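The timing trick described above can be sketched numerically. A common simplified model (Woodworth's spherical-head formula, with an assumed average head radius) gives the interaural time difference for a source at a given azimuth; this is an illustration of the principle, not SWAN's rendering code.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference via Woodworth's
    spherical-head formula: the extra path to the far ear is
    r * (theta + sin(theta)), valid for azimuths from 0 to 90 degrees."""
    theta = math.radians(azimuth_deg)
    return head_radius_m * (theta + math.sin(theta)) / speed_of_sound

# A beacon straight ahead produces no timing difference; one at
# 90 degrees to the right arrives roughly 0.66 ms sooner at the
# right ear, and the brain decodes that lag as direction.
delay = itd_seconds(90)
```

A renderer built on this idea simply delays the signal to the far ear by `itd_seconds(azimuth)` samples (plus a level difference) to place the beacon in space.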

The 3-D audio beacons for navigation are unique to SWAN. Other navigation systems use speech cues such as "walk 100 yards and turn left," which Walker feels is not user friendly.

"SWAN consists of two types of auditory displays - navigational beacons where the SWAN user walks directly toward the sound, and secondary sounds indicating nearby items of possible interests such as doors, benches and so forth," says Walker. "We have learned that sound design matters. We have spent a lot of time researching which sounds are more effective, such as a beep or a sound burst, and which sounds provide information but do not interrupt users when they talk on their cell phone or listen to music."

The researchers have also learned that SWAN would supplement other techniques that a blind person might already use for getting around, such as a cane to identify obstructions in the path or a guide dog.

Next Steps

The researchers' next step is to transition SWAN from outdoors-only to indoor-outdoor use. Since GPS does not work indoors, the computer vision system is being refined to bridge that gap. The team is also revamping the SWAN applications to run on PDAs and cell phones, which will be more convenient and comfortable for users. They plan to add an annotation feature so that users can share useful notes with one another, such as nearby coffee shops, the location of a puddle after recent rains, and perhaps even the location of a park in the distance. There are plans to commercialize the SWAN technology after further refinement, testing and miniaturization of components for the consumer market.

Article Source: http://www.content.onlypunjab.com

Contact: Elizabeth Campbell, Georgia Institute of Technology

Wednesday, February 28, 2007

User Driven Modelling - Research Approach

The intention of this research is to enable non-programmers to create software from a user interface that allows them to model a particular problem or scenario. The user enters information visually, in the form of a tree diagram, and the aim is to develop ways of automatically translating this information into program code in a variety of computer languages. The research is thus about translating from an abstract model of a problem, expressed by a user, into software that solves the problem and visualises the solution. This is very important and useful for the many people who have insufficient time to learn programming languages; Scaffidi et al (2005) explain how much programming is undertaken by those who are not professional programmers. The open source Protégé ontology editor, developed from a project at Stanford University and described on the Protégé Community Wiki (2006), is used to research visualisation techniques for creating a human computer interface that allows non-experts to create software.

This research demonstrates how a taxonomy can be used as the information source from which it is possible to automatically produce software. At present this technique is most suitable for modelling, visualisation, and searching for information. The research concerns the technique of User Driven Model (UDM) development, which could be part of a wider approach of User Driven Programming (UDP). This approach involves the creation of a visual environment for software development, where modelling programs can be created without requiring the model developer to learn programming languages. The theory behind this approach is examined, along with the main practical work in creating the system. The basis of the approach is modelling of the software to be produced in ontology management systems such as Jena (Jena, 2006) and Protégé (Stanford University, 2006). It also has the potential to be computer-language and system independent, as one representation could be translated into many computer languages or meta-languages (Dmitriev, 2006).
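As a toy illustration of the idea of generating program code from a user-edited tree model: the node names, the sum-of-children rule, and the output language below are invented for illustration, not taken from the actual UDM system.

```python
# A toy "user driven model": each node has a name plus either a numeric
# value (a leaf the user typed in) or a list of child nodes. The
# translator walks the tree bottom-up and emits executable assignment
# statements, mirroring the idea of producing software from a visually
# edited tree rather than hand-written code.

def emit(node, lines=None):
    """Translate a model tree into Python assignment statements."""
    if lines is None:
        lines = []
    name, content = node["name"], node["content"]
    if isinstance(content, list):            # internal node: sum of children
        for child in content:
            emit(child, lines)
        parts = " + ".join(c["name"] for c in content)
        lines.append(f"{name} = {parts}")
    else:                                    # leaf: user-entered value
        lines.append(f"{name} = {content}")
    return "\n".join(lines)

model = {"name": "wing_mass",
         "content": [{"name": "spar_mass", "content": 120},
                     {"name": "skin_mass", "content": 80}]}
code = emit(model)
# code contains:
#   spar_mass = 120
#   skin_mass = 80
#   wing_mass = spar_mass + spar-children...
```

Executing the generated statements yields `wing_mass = 200`; the same tree could just as well be emitted as C++ or Visual Basic, which is the language-independence the approach aims at.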

The development of visual user interfaces has been a major step forward. The use of pictorial metaphors, such as folders to represent a collection of files, has greatly aided human computer interaction. Pictorial metaphors give visual feedback, so the user knows what the software system is doing. This technique can be used more dynamically in simulations, which represent the real-world problem and provide constant feedback to the user on how the system is progressing; in this sense, all software should be regarded as a simulation. Pictorial metaphors are static, while a user's mental model is made up of mental images connected together by a set of rules: the user runs a mental model like a simulation. Static user interfaces rely on the user to string together images into a mental model which correctly represents what the system is doing, and a user may generate a mental model in response to user interface metaphors which is inconsistent with the system model. Simulation can help to ensure that the designer's model, the system model and the user's model are all the same. This subject is explored by Crapo et al. (2001), and is the basis of the visualisation techniques used to enable the user to create and understand models that are subsequently translated into software representations. It is also explained in chapter one of Watch What I Do: Programming by Demonstration (Cypher, 1993), which describes how the Pygmalion language attempts to bridge the gap between the programmer's mental model of a subject and what the computer can accept. The author of that system, David Smith (Smith, 1977), went on to develop office-oriented icons as part of Xerox's "Star" computer project.

The research applies this User Driven technique to aerospace engineering, but it should be applicable to any subject. The basis of the research is the need to provide better ways for people to specify what they require from computer software using techniques that they understand, instead of needing to take the intermediate step of either learning computer languages or explaining their requirements to a software expert. These intermediate steps are expensive in terms of time, cost, and level of misunderstanding. If users can communicate their intentions directly to the computer, they can receive quick feedback and adapt their techniques in a quick and agile way in response.

A modelling environment needs to be created by software developers in order to allow users/model builders/domain experts to create their own models. This modelling environment could be created using an open standard language such as XML (eXtensible Markup Language). The high-level translation, though, depends on tools developed using lower-level languages, which is why tools such as Protégé and DecisionPro (now called Vanguard Studio) (Vanguard Software, 2006) are used. Until recently XML has been used to represent information, while languages such as Java, C++, and Visual Basic have been used for the actual code. Semantic languages such as XML could in future be used for software development as well as information representation, as they provide a higher-level declarative view of the problem.


Article Source: http://www.content.onlypunjab.com

I am a researcher in the final year of my PhD. I specialise in applying Semantic Web techniques. My current research is on a technique of 'User Driven Modelling/Programming'. My intention is to enable non-programmers to create software from a user interface that allows them to model a particular problem or scenario. This involves a user entering information visually in the form of a tree diagram. I am attempting to develop ways of automatically translating this information into program code in a variety of computer languages. This is very important and useful for the many employees who have insufficient time to learn programming languages. I am looking to research visualisation techniques to create a human computer interface that allows non-experts to create software.

I am a member of the Institute for End User Computing (IEUC).

Monday, February 26, 2007

Top 8 Reasons HCI is in its Stone Age

1. Screen Corners

Let me introduce you to one of the greatest mysteries of our time: After more than 20 years of research, development and competition in the field of HCI, not one single leading operating system developing company has come up with an OS that utilizes the four corners of the screen. Any five-year-old earth child has probably already figured out that the screen corners are the easiest points to hit - the only locations hittable without looking. Ray Charles figured that out. Stevie Wonder figured that out. And they would probably make a better design team than any money-driven market thugs.

It gets better: The irony is that we argue about whether systems should be application-centered or document-centered, probably the two most important entities in a computer. Have you ever seen a system which lets you, out-of-the-box, hit a corner in order to do anything at all even remotely related to anything having anything at all to do with a document or application? So maybe documents aren't the most important entity in a computer. Browse the internet by hitting the screen corner? Check mail in the screen corner? Get Info in the screen corner? System preferences in the screen corner? Switching applications in the screen corner? No, or, well. In Mac OS X you can trigger Exposé by hitting a screen corner, although Exposé rhymes badly with point six below, so that hardly counts.
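The corner argument has a standard quantitative backing in Fitts's law: because the screen edges stop the cursor, a corner behaves like a target with a very large effective width, which collapses the index of difficulty. A minimal sketch, where the pixel values are illustrative rather than measured:

```python
import math

def fitts_id(distance, width):
    """Shannon formulation of Fitts's index of difficulty, in bits.
    Lower ID means faster average acquisition time."""
    return math.log2(distance / width + 1)

# A 20 px button 800 px away, vs. a corner target whose effective
# width is much larger because the edges clamp the pointer (200 px
# here is an assumed effective width, not a measurement).
hard = fitts_id(800, 20)    # ~5.36 bits
easy = fitts_id(800, 200)   # ~2.32 bits
```

Cutting the index of difficulty by more than half is why corner-anchored targets can be hit without looking, which is exactly the author's complaint that no OS exploits them.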

2. OS GUI's are Designed for Beginners.

Ooooh, there's nothing wrong with that, as long as you can grow with your user interface. Problem is, we outgrow it in a matter of hours, and after that the OS is nothing but a nail in the eye, a cow in the car, a space tit, a belly-barn shackle in the reunion of unjustified friends. Just something you have to hassle with. So is it possible to design a system that suits both beginners and professionals? (No t33n-N30, the answer isn't »Pr3f3r3nc3Zz!!!!!!!! 1337-H4XX0R5!!!«.) Leaving the question unanswered for now, let's just face the fact that we are all beginners the first few hours of our computing career. The rest of the time, we're victims. Wait... an image is forming in my mind... It's a sweaty, hard-working bare-chested carpenter with a tiny red plastic hammer in his hand. Yes. This is his tool. Yes. He's been using it since he was 5.

3. Visual Attention - Sine Qua Non

Every single little tiny-weeny little interaction-shraction requires your visual attention. And I'm not talking peripheral attention, nooooo, then we could all go home and interact, couldn't we? You have to actually drop focus on what you're looking at and move your eyesight in order to find that tiny little resize button of the window. If your screen is large enough, you are even forced to move your head to find that window-resizing widget. There's more penalty: once you're done, you must relocate that thing or text you were reading before you got the divine idea of resizing the window. The same goes for moving, scrolling, closing, zooming, panning and... The Albert Einsteins over at Adobe somehow found out their users like to pan their documents (inside information? mole in the building?), so they assigned the SPACEBAR to invoke the »divine semi-mode of panning«. All respect to Adobe for that - they did better than the combined efforts of Redmond, Cupertino, Ray Charles and Stevie Wonder (which equals the combined efforts of Ray Charles and Stevie Wonder). However, according to my book, an action as atomic as panning mustn't be mode-driven. In this particular case, Adobe's panning only works if the user isn't inside a text object typing, in which case that »divine semi-mode of panning« is reduced to nothing but a space. An unwanted space at that.

»But sir, all the other keys were busy!!«.
No they weren't.

Situations like these make me feel sorry for the spacebar. So big and strong... He totally rules over the other keys, and yet all he produces is... nothingness. I hope I never find myself in the situation of having to explain to aliens what the LARGEST KEY ON THE KEYBOARD does. »Well... this key? Right over here? Ah, the chubby one! It.. spaces... kind of... leaps.. a tiny bit. In the text... See...? Nothingness! Hey, I know how this must sound... Hey! Wait!! No!! Come back!! But we just met!! COME BACK!!«
That's alright, they would probably have left anyway as soon as they saw me clicking »Start« in order to shut the computer down.

4. Multiple representation of the file system.

I'm talking files and folders here. One representation on the desktop and another one when opening and saving files (yes, dialogs). See point six.

5. Our love of choice

I bet you my bunny the former Soviet Union could have designed a better operating system GUI than any of the software vendors of today. Not only would their GUI allow you to get the job done faster, it would completely lack preferences, freedom of choice and any settings even remotely related to changing the way you interact. And there's more: their GUI would provide one way and one way only of accomplishing an atomic task. Imagine what that would do to a context menu!
Throw one preference at my bunny and he'll probably tell you why it's unnecessary.

By the way, did you know that one-knob faucets were originally designed for disabled persons?


6. Our Disrespect for Spatialness.

Spatial navigation is a condition for eventually being able to navigate by using muscle memory. Muscle memory, along with prediction, is superior to any other navigation method in terms of speed. This, of course, requires that you know your own file system.

In order to explain what spatial navigation is all about, we shall delve into a comparison to (say) getting cutlery out of kitchen drawers:
Before you open the kitchen drawer (which you can easily locate since it only exists in one place and in one shape), you already know the forks are on the left side. Even if you don't, you can easily localize the forks by just looking, and you will soon have learned that forks are stored on the left side.

Adding non-spatialness to the same example would mean that you first have to locate the kitchen drawer. Even if you have a nice and shiny drop-shadowed kitchen drawer in your mind, your mental model of the kitchen, you still have to associate it with the name »kitchen drawer« in order to find it. Because the string »kitchen drawer« is what you're looking for. Eventually you locate and open the kitchen drawer and see nothing but cutlery labels - »forks«, »knives«, »spoons« - and you are forced once again to recall what the thing you're looking for is called. ("Zirconium oxide Kyocera knife"...)

You will eventually open the non-spatial drawer again... (that is, after you've located it, for who knows? It might be easier this time. Perhaps you're in a part of the kitchen close to the drawer? Or maybe the cutlery drawer is in the recent items list? Go and have a look, but beware - if it isn't there, you have to navigate to the kitchen drawer »from scratch«) ...and you may figure that this time, at least you know exactly where to look... (this is true only if you manage to find and open exactly the same representation of the kitchen drawer you used last time) ...BUT, you would then be using spatial navigation!

»Spatial navigation eventually renders the declarative knowledge of the kitchen drawer secondary to the task of opening it, making way for an autonomous stage in retrieving cutlery out of kitchen drawers.«
(WELL PUT, J. R. Anderson, The Architecture of Cognition, 1983!! Ok, somewhat modified to fit the example.)

Folders hosting a large number of files pose a problem no matter what representation you use - probably because 1000 files in a folder cannot be called »organized«.
Spatial navigation is in our nature. A desktop is entirely spatial, so what's the point of having a non-spatial metaphor?

Anyway, predictability is the key word, and is along with screen corners by far the thing I lack the most in modern operating system GUI's.

7. Terminology

The terminology we use is a strong indicator of stone age: »User-oriented« design. »User centered« design. Come on! Around whom else would the design be oriented?!

8.
We wish to rotate an image, shrink it 50%, attach it to an e-mail and send it to a deaf musician.
I'll leave this one up for you to decide: Which of the following approaches do you think would prove to be the easier one?

A. Utilizing a »modern« interface: The procedure would involve several clicks, mouse drags and keystrokes, and also require expert skills in order to complete the task in less than one minute. Moreover, in order to complete the task at all, a number of subtasks (which are actually unrelated to the task at hand) need tending to. We need, for instance, to worry about choosing a file name and a location in the process of storing the image, and then, from the e-mail application, locating the image we just stored in order to attach it.

B. Say »Tip a quarter to the right, crop by half and e-mail to Stevie Wonder«.



http://juicability.blogspot.com/2005/09/top-8-reasons-hci-is-in-its-stone-age.html

Friday, February 23, 2007

The Human Computer Interaction Graduate Program (Certificate, MS & PhD)

As the use of computers becomes increasingly central, the study of Human Computer Interaction will emerge as one of the most dynamic and important areas of research. Interdisciplinary in the extreme, this emerging field will have an impact on nearly every area of human endeavor. The Human Computer Interaction graduate major reflects broad recognition in both academia and industry of the need to specifically train researchers in this burgeoning area, to meet the challenges posed by this rapidly evolving field.

The women and men who contribute to this new paradigm will be shaping the future.

Purpose:

The study of the relationship between humans and increasingly powerful, portable, interconnected and ubiquitous computers is becoming one of the most dynamic and significant fields of technical investigation. The Interdepartmental Graduate Major in Human Computer Interaction is an interdisciplinary training program created to provide advanced education and training while fostering research excellence in Human Computer Interaction at Iowa State University.

HCI Degrees:

* M.S. in Human Computer Interaction
The entrance requirements for the M.S. in HCI graduate program include transcripts, test scores and other indicators that the student applicant can be successful at the graduate level. Furthermore, we require the demonstrated ability to write software competently. For those who do not have the required computing background, we provide an introductory survey course, with appropriate follow up material.
* Ph.D. in Human Computer Interaction
The entrance requirements for the PhD in HCI include
a) the three courses required for the master's program
b) a master's degree and
c) a portfolio demonstrating the potential for research at the boundaries of the human computer interface.

HCI Certificate:

The Human Computer Interaction Graduate Program would like to announce the addition of an online Graduate Certificate in Human Computer Interaction. Through distance learning, students who are working in business and industry are able to take courses to learn more about Human Computer Interaction, furthering their education without having to travel to the Iowa State University campus to study or committing to a full graduate program. The certificate can be earned in one year if two courses per semester are taken, or in two years if a student chooses to take one course each semester.

The Human Computer Interaction (HCI) Initiative

Driven by unprecedented technological progress, the study of the relationship between humans and increasingly powerful, portable, interconnected and ubiquitous computers is fast becoming one of the most dynamic and significant fields of technical investigation. Intent on establishing a leadership position in the rapidly changing field of Human Computer Interaction (HCI), ISU is making a strategic investment to accelerate research, attract talented students and faculty members and expand the graduate program in this vital area of study.

Interdisciplinary in the extreme, this emerging field is having an impact on nearly every area of human endeavor. With researchers representing departments from every college in the University and related research underway at the Virtual Reality Applications Center and elsewhere, ISU is well positioned to quickly expand its focus and become a leader in HCI research.

The technical research component of the HCI initiative focuses on five areas:

* Information sensorization - human factors, cognitive models, virtual and augmented reality interfaces, haptics
* Mobile/ubiquitous interfaces - wireless connectivity, integration of remote sensors and participants, group interfaces
* Intelligent agents - network-based software services for individuals, groups and organizations
* Accessibility for non-technical collaborators - technology to facilitate interdisciplinary collaboration
* Enabling infrastructure - software and hardware to facilitate HCI research




http://www.hci.iastate.edu/

Thursday, February 22, 2007

Human-Computer Interaction Specialization Requirements

General MSI Requirements (for students entering Fall 2007)
All MSI students who enter in Fall 2007 or later must complete at least 48 credit hours of graduate coursework, including

* 2 core courses (course descriptions will be posted shortly)
* 1 core technology course (may be waived based on previous coursework or test)
* 1 methods course (from list to be posted)
* 1 management course (from list to be posted)
* 6 credits in cognate courses (3 cognate credits may be in an SI specialization other than your own; 3 credits must be taken elsewhere at U-M)
* 6 credits that meet Practical Engagement requirements, through credit-based internships or class-based experiential learning


These general MSI program requirements become effective for the 2007-2008 academic year. (View general MSI requirements for students who entered prior to Fall 2007.)

Additional HCI Requirements

Three Required HCI Courses
Students in the HCI specialization must take the following three required courses:

* 622 Evaluation of Systems and Services
* 682 Interface and Interaction Design
* 688 Fundamentals of Human Behavior


Two Additional HCI Courses

HCI students must choose two from among the following set of courses:

* 539 Design of Complex Web Sites (3 credits)
* 551 Information-Seeking Behavior (3 credits)
* 553 Multimedia Production (3 credits)
* 557 Visual Persuasion (3 credits)
* 561 Natural Language Processing (3 credits)
* 572 Database Application Design (3 credits)
* 583 Recommender Systems (1.5 credits)
* 649 Information Visualization (3 credits)
* 658 Information Architecture (3 credits)
* 670 Information in Organizations (3 credits)
* 684 eCommunities: Analysis and Design of Online Interaction Environments (3 credits)
* 689 Computer-Supported Cooperative Work (3 credits)


Programming Requirement

HCI students must have two semesters of programming -- either previously completed or taken at U-M -- or must show competence through an exam. SI offers

* 539 Design of Complex Web Sites
* 543 Programming I
* 653 Programming II

to help you fulfill this requirement.

Statistics Requirement
HCI students must have one semester of statistics, either previously completed (transcript required) or taken at U-M. SI offers

* 544 Introduction to Statistics and Data Analysis

to help you fulfill this requirement.


These additional HCI program requirements are effective for the 2006-2007 academic year.




http://www.si.umich.edu/msi/hci-reqs.htm