Wednesday, February 28, 2007

User Driven Modelling - Research Approach

The intention for this research is to enable non-programmers to create software from a user interface that allows them to model a particular problem or scenario. This involves a user entering information visually in the form of a tree diagram. The aim is to develop ways of automatically translating this information into program code in a variety of computer languages. The research is on translating from an abstract model of a problem, expressed by a user, to software that solves the problem and visualises the solution. This is very important and useful for the many people who have insufficient time to learn programming languages. Scaffidi et al. (2005) explain how much programming is undertaken by those who are not professional programmers. The open source Protégé ontology editor, developed from a project at Stanford University and described on the Protégé Community Wiki (2006), is used to research visualisation, and visualisation techniques, to create a human computer interface that allows non-experts to create software.

This research demonstrates how a taxonomy can be used as the information source from which it is possible to automatically produce software. At present this technique is most suitable for modelling, visualisation, and searching for information. The research is about the technique of User Driven Model (UDM) Development, which could be part of a wider approach of User Driven Programming (UDP). This approach involves the creation of a visual environment for software development, where modelling programs can be created without the model developer needing to learn programming languages. The theory behind this approach is examined, as is the main practical work in the creation of this system. The basis of this approach is modelling of the software to be produced in ontology management systems such as Jena (Jena, 2006) and Protégé (Stanford University, 2006). It also has the potential to be computer language and system independent, as one representation could be translated into many computer languages or meta-languages (Dmitriev, 2006).
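The core translation step - walking a user-built tree model and emitting source code - can be sketched in a few lines. The following is only an illustration under stated assumptions: the nested-dict tree, the variable names, and the generator functions are hypothetical stand-ins for the actual Protégé/Jena ontology-based models and translators used in the research.

```python
# A minimal sketch of tree-to-code translation, assuming a user-built
# model is available as nested dicts. Each node is either a named
# variable (a leaf) or an operator applied to child nodes.
model = {
    "op": "*",
    "children": [
        {"value": "part_count"},
        {"op": "+",
         "children": [{"value": "material_cost"},
                      {"value": "labour_cost"}]},
    ],
}

def to_expression(node):
    """Walk the tree and emit an infix expression string."""
    if "value" in node:
        return node["value"]
    operands = [to_expression(child) for child in node["children"]]
    return "(" + (" %s " % node["op"]).join(operands) + ")"

def collect_variables(node):
    """Gather the leaf variable names the generated code will need."""
    if "value" in node:
        return {node["value"]}
    found = set()
    for child in node["children"]:
        found |= collect_variables(child)
    return found

def to_python(node, name="cost"):
    """Emit a Python function from the tree - one of several possible
    target languages for the same abstract model."""
    args = ", ".join(sorted(collect_variables(node)))
    return "def %s(%s):\n    return %s" % (name, args, to_expression(node))

print(to_expression(model))  # (part_count * (material_cost + labour_cost))
print(to_python(model))
```

Because the model is traversed independently of any output syntax, a second emitter targeting another language would reuse the same tree unchanged, which is the language-independence argument made above.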

The development of visual user interfaces has been a major step forward. The use of pictorial metaphors such as folders to represent a collection of files has greatly aided human computer interaction. Pictorial metaphors give visual feedback so the user knows what the software system is doing. This technique can be used more dynamically in simulations. Simulations represent the real world problem and provide constant feedback to the user on how the system is progressing. In this sense, all software should be regarded as a simulation. Pictorial metaphors are static, while a user's mental model is made up of mental images connected together by a set of rules. The user runs a mental model like a simulation. Static user interfaces rely on a user to string together images into a mental model which correctly represents what the system is doing. A user may generate a mental model in response to user interface metaphors which is inconsistent with the system model. Simulation can help to ensure that the designer's model, the system model, and the user's model are all the same. This subject is explored by Crapo et al. (2001), and is the basis of the visualisation techniques used to enable the user to create and understand models that are subsequently translated into software representations. It is also explained in chapter one of Watch What I Do: Programming by Demonstration (Cypher, 1993), which describes how the Pygmalion language attempts to bridge the gap between the programmer's mental model of a subject and what the computer can accept. The author of that system, David Smith (Smith, 1977), went on to develop office-oriented icons as part of Xerox's "Star" computer project.

The research applies this User Driven technique to aerospace engineering but it should be applicable to any subject. The basis of the research is the need to provide better ways for people to specify what they require from computer software using techniques that they understand, instead of needing to take the intermediate steps of either learning a computer language(s) or explaining their requirements to a software expert. These intermediate steps are expensive in terms of time, cost, and level of misunderstanding. If users can communicate intentions directly to the computer they can receive quick feedback and be able to adapt their techniques in a quick and agile way in response to this feedback.

A modelling environment needs to be created by software developers in order to allow users/model builders/domain experts to create their own models. This modelling environment could be created using an open standard language such as XML (eXtensible Markup Language). The high-level translation, though, depends on tools developed using lower-level languages, which is why tools such as Protégé and DecisionPro (now called Vanguard Studio) (Vanguard Software, 2006) are used. Until recently XML has been used to represent information, while languages such as Java, C++, and Visual Basic have been used for the actual code. Semantic languages such as XML could in future be used for software development as well as information representation, as they provide a higher level declarative view of the problem.
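A declarative XML model of the kind proposed here, and its translation into executable form, might look like the following sketch. The schema (model/multiply/add/variable elements), the attribute names, and the example aerospace variables are illustrative assumptions, not the project's actual representation.

```python
# A sketch of translating a declarative XML model into an executable
# expression, using only the Python standard library. The element
# vocabulary below is a hypothetical schema for illustration.
import xml.etree.ElementTree as ET

MODEL_XML = """
<model name="wing_cost">
  <multiply>
    <variable name="rib_count"/>
    <add>
      <variable name="material_cost"/>
      <variable name="labour_cost"/>
    </add>
  </multiply>
</model>
"""

# Map declarative element names onto arithmetic operators.
OPS = {"add": "+", "subtract": "-", "multiply": "*", "divide": "/"}

def translate(element):
    """Recursively turn an XML model element into an expression string."""
    if element.tag == "variable":
        return element.get("name")
    operands = [translate(child) for child in element]
    return "(" + (" %s " % OPS[element.tag]).join(operands) + ")"

root = ET.fromstring(MODEL_XML)
top = root[0]  # the model's single top-level operator
print("%s = %s" % (root.get("name"), translate(top)))
# wing_cost = (rib_count * (material_cost + labour_cost))
```

The XML carries only the declarative structure of the calculation; everything about execution lives in the translator, which is the division of labour the paragraph above argues for.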


Article Source: http://www.content.onlypunjab.com

I am a Researcher in the final year of my PhD. I specialise in applying Semantic Web techniques. My current research is on a technique of 'User Driven Modelling/Programming'. My intention is to enable non-programmers to create software from a user interface that allows them to model a particular problem or scenario. This involves a user entering information visually in the form of a tree diagram. I am attempting to develop ways of automatically translating this information into program code in a variety of computer languages. This is very important and useful for many employees who have insufficient time to learn programming languages. I am looking to research visualisation, and visualisation techniques, to create a human computer interface that allows non-experts to create software.

I am a member of the Institute for End User Computing (IEUC).

Monday, February 26, 2007

Top 8 Reasons HCI is in its Stone Age

1. Screen Corners

Let me introduce you to one of the greatest mysteries of our time: After more than 20 years of research, development and competition in the field of HCI, not one single leading operating system developing company has come up with an OS that utilizes the four corners of the screen. Any five-year-old earth child has probably already figured out that the screen corners are the easiest points to hit - the only locations hittable without looking. Ray Charles figured that out. Stevie Wonder figured that out. And they would probably make a better design team than any money-driven market thugs.

It gets better: The irony is that we argue about whether systems should be application-centered or document-centered, probably the two most important entities in a computer. Have you ever seen a system which lets you, out-of-the-box, hit a corner in order to do anything at all even remotely related to anything having anything at all to do with a document or application? So maybe documents aren't the most important entity in a computer. Browse the internet by hitting the screen corner? Check mail in the screen corner? Get Info in the screen corner? System preferences in the screen corner? Switching applications in the screen corner? No, or, well. In Mac OS X you can trigger Exposé by hitting a screen corner, although Exposé rhymes bad with point six below, so that hardly counts.

2. OS GUIs are Designed for Beginners.

Ooooh. there's nothing wrong with that, as long as you can grow with your user interface. Problem is, we outgrow it in a matter of hours, and after that the OS is nothing but a nail in the eye, a cow in the car, a space tit, a belly-barn shackle in the reunion of unjustified friends. Just something you have to hassle with. So is it possible to design a system that suits both beginners and professionals? (No t33n-N30, the answer isn't »Pr3f3r3nc3Zz!!!!!!!! 1337-H4XX0R5!!!«.) Leaving the question unanswered for now, let's just face the fact that we are all beginners the first few hours in our computing career. The rest of the time, we're victims. Wait... an image is forming in my mind... It's a sweaty, hard-working bare-chested carpenter with a tiny red plastic hammer in his hand. Yes. This is his tool. Yes. He's been using it since he was 5.

3. Visual Attention - Sine Qua Non

Every single little tiny-weeny little interaction-shraction requires your visual attention. And I'm not talking peripheral attention, nooooo, then we could all go home and interact, couldn't we? You have to actually drop focus on what you're looking at and move your eyesight in order to find that tiny little resize button of the window. If your screen is large enough, you are even forced to move your head to find that window resizing widget. There's more penalty: once you're done, you must relocate that thing or text you were reading before you got the divine idea of resizing the window. The same goes for moving, scrolling, closing, zooming, panning and... . The Alfred Einsteins over at Adobe's somehow found out their users like to pan their documents (inside information? mole in the building?), so they assigned the SPACEBAR to invoke the »divine semi-mode of panning«. All respect to Adobe for that - they did better than the combined efforts of Redmond, Cupertino, Ray Charles and Stevie Wonder (which equals the combined efforts of Ray Charles and Stevie Wonder). However, according to my book, an action as atomic as panning mustn't be mode driven. In this particular case, Adobe's panning only works if the user isn't inside a text object typing, in which case that »divine semi-mode of panning« is reduced to nothing but a space. An unwanted space at that.

»But sir, all the other keys were busy!!«.
No they weren't.

Situations like these make me feel sorry for the spacebar. So big and strong... He totally rules over the other keys, and yet all he produces is... nothingness. I hope I never find myself in the situation of having to explain to aliens what the LARGEST KEY ON THE KEYBOARD does. »Well... this key? Right over here? Ah, the chubby one! It.. spaces... kind of... leaps.. a tiny bit. In the text... See...? Nothingness! Hey, I know how this must sound... Hey! Wait!! No!! Come back!! But we just met!! COME BACK!!«
That's alright, they would probably have left anyway as soon as they saw me clicking »Start« in order to shut the computer down.

4. Multiple representations of the file system.

I'm talking files and folder here. One representation on the desktop and another one when opening and saving files (yes, dialogs). See point six.

5. Our love of choice

I bet you my bunny the former Soviet Union could have designed a better operating system GUI than any of the software vendors of today. Not only would their GUI allow you to get the job done faster, it would completely lack preferences, freedom of choice and any settings even remotely related to changing the way you interact. And there's more: Their GUI would provide one way and one way only of accomplishing an atomic task. Imagine what that would do to a context menu!
Throw one preference at my bunny and he'll probably tell you why it's unnecessary.

By the way, did you know that one-knob faucets were originally designed for disabled persons?


6. Our Disrespect for Spatialness.

Spatial navigation is a condition for eventually being able to navigate by using muscle memory. Muscle memory, along with prediction, is superior to any other navigation method in terms of speed. This, of course, requires that you know your own file system.

In order to explain what spatial navigation is all about, we shall delve into a comparison to (say) getting cutlery out of kitchen drawers:
Before you open the kitchen drawer (which you can easily locate since it only exists in one place and in one shape), you already know the forks are on the left side. Even if you don't, you can easily localize the forks by just looking, and you will soon have learned that forks are stored on the left side.

Adding non-spatialness to the same example would mean that you first have to locate the kitchen drawer. Even if you have a nice and shiny drop-shadowed kitchen drawer in your mind, your mental model of the kitchen, you still have to associate it with the name »kitchen drawer« in order to find it. Because the string »kitchen drawer« is what you're looking for. Eventually you locate and open the kitchen drawer and see but cutlery labels - »forks«, »knives«, »spoons« - and you are forced once again to recall what the thing you're looking for is called. ("Zirconium oxide kyocera knife"...)

You will eventually open the non-spatial drawer again... (that is, after you've located it, for who knows? It might be easier this time. Perhaps you're in a part of the kitchen close to the drawer? Or maybe the cutlery drawer is in the recent items list? Go and have a look, but beware - if it isn't there, you have to navigate to the kitchen drawer »from scratch«) ...and you may figure that this time, at least you know exactly where to look... (this is true only if you manage to find and open exactly the same representation of the kitchen drawer you used last time) ...BUT, you would then be using spatial navigation!

»Spatial navigation eventually renders the declarative knowledge of the kitchen drawer secondary to the task of opening it, making way for an autonomous stage in retrieving cutlery out of kitchen drawers.«
(WELL PUT, J. R. Anderson, The Architecture of Cognition, 1983!! Ok, somewhat modified to fit the example.)

Folders hosting a large number of files pose a problem no matter what representation you use - probably because 1000 files in a folder cannot be called »organized«.
Spatial navigation is in our nature. A desktop is entirely spatial, so what's the point of having a non-spatial metaphor?

Anyway, predictability is the key word, and, along with screen corners, it is by far the thing I lack the most in modern operating system GUIs.

7. Terminology

The terminology we use is a strong indicator of stone age: »User-oriented« design. »User centered« design. Come on! Around whom else would the design be oriented?!

8.
We wish to rotate an image, shrink it 50%, attach it to an e-mail and send it to a deaf musician.
I'll leave this one up for you to decide: Which of the following approaches do you think would prove to be the easier one?

A. Utilizing a »modern« interface: The procedure would involve several clicks, mouse drags and keystrokes, and also require expert skills in order to complete the task in less than one minute. Moreover, in order to complete the task at all, a number of subtasks (which are actually unrelated to the task at hand) need tending to. We need, for instance, to worry about choosing a file name and a location in the process of storing the image, and then, from the e-mail application, locating the image we just stored in order to attach it.

B. Say »Tip a quarter to the right, crop by half and e-mail to Stevie Wonder«.

By the way, did you know that one-knob faucets were originally designed for disabled persons?


http://juicability.blogspot.com/2005/09/top-8-reasons-hci-is-in-its-stone-age.html

Friday, February 23, 2007

The Human Computer Interaction Graduate Program (Certificate, MS & PhD)

As the use of computers becomes increasingly central, the study of Human Computer Interaction will emerge as one of the most dynamic and important areas of research. Interdisciplinary in the extreme, this emerging field will have an impact on nearly every area of human endeavor. The Human Computer Interaction graduate major reflects a broad recognition both in academia and industry that the need exists to specifically train researchers in this burgeoning area, to meet the challenges faced by this rapidly evolving area of technological progress.

The women and men who contribute to this new paradigm will be shaping the future.

Purpose:

The study of the relationship between humans and increasingly powerful, portable, interconnected and ubiquitous computers is becoming one of the most dynamic and significant fields of technical investigation. The Interdepartmental Graduate Major in Human Computer Interaction is an interdisciplinary training program created to provide advanced education and training while fostering research excellence in Human Computer Interaction at Iowa State University.

HCI Degrees:

* M.S. in Human Computer Interaction
The entrance requirements for the M.S. in HCI graduate program include transcripts, test scores and other indicators that the student applicant can be successful at the graduate level. Furthermore, we require the demonstrated ability to write software competently. For those who do not have the required computing background, we provide an introductory survey course, with appropriate follow up material.
* Ph.D. in Human Computer Interaction
The entrance requirements for the PhD in HCI include
a) the three courses required for the master's program
b) a master's degree and
c) a portfolio demonstrating the potential for research at the boundaries of the human computer interface.

HCI Certificate:

The Human Computer Interaction Graduate Program would like to announce the addition of an online Graduate Certificate in Human Computer Interaction. Through distance learning, students who are working in business and industry are able to take courses to learn more about Human Computer Interaction, furthering their education without having to travel to the Iowa State University campus to study or committing to a full graduate program. The certificate can be earned in one year if two courses per semester are taken, or in two years if a student chooses to take one course each semester.

The Human Computer Interaction (HCI) Initiative

Driven by unprecedented technological progress, the study of the relationship between humans and increasingly powerful, portable, interconnected and ubiquitous computers is fast becoming one of the most dynamic and significant fields of technical investigation. Intent on establishing a leadership position in the rapidly changing field of Human Computer Interaction (HCI), ISU is making a strategic investment to accelerate research, attract talented students and faculty members and expand the graduate program in this vital area of study.

Interdisciplinary in the extreme, this emerging field is having an impact on nearly every area of human endeavor. With researchers representing departments from every college in the University and related research underway at the Virtual Reality Applications Center and elsewhere, ISU is well positioned to quickly expand its focus and become a leader in HCI research.

The technical research component of the HCI initiative focuses on five areas:

* Information sensorization - human factors, cognitive models, virtual and augmented reality interfaces, haptics
* Mobile/ubiquitous interfaces - wireless connectivity, integration of remote sensors and participants, group interfaces
* Intelligent agents - network-based software services for individuals, groups and organizations
* Accessibility for non-technical collaborators - technology to facilitate interdisciplinary collaboration
* Enabling infrastructure - software and hardware to facilitate HCI research




http://www.hci.iastate.edu/

Thursday, February 22, 2007

Human-Computer Interaction Specialization Requirements

General MSI Requirements (for students entering Fall 2007)
All MSI students who enter in Fall 2007 or later must complete at least 48 credit hours of graduate coursework, including

* 2 core courses (course descriptions will be posted shortly)
* 1 core technology course (may be waived based on previous coursework or test)
* 1 methods course (from list to be posted)
* 1 management course (from list to be posted)
* 6 credits in cognate courses (3 cognate credits may be in an SI specialization other than your own; 3 credits must be taken elsewhere at U-M)
* 6 credits that meet Practical Engagement requirements, through credit-based internships or class-based experiential learning


These general MSI program requirements become effective for the 2007-2008 academic year. (View general MSI requirements for students who entered prior to Fall 2007.)

Additional HCI Requirements

Three Required HCI Courses
Students in the HCI specialization must take the following three required courses:

* 622 Evaluation of Systems and Services
* 682 Interface and Interaction Design
* 688 Fundamentals of Human Behavior


Two Additional HCI Courses

HCI students must choose two from among the following set of courses:

* 539 Design of Complex Web Sites (3 credits)
* 551 Information-Seeking Behavior (3 credits)
* 553 Multimedia Production (3 credits)
* 557 Visual Persuasion (3 credits)
* 561 Natural Language Processing (3 credits)
* 572 Database Application Design (3 credits)
* 583 Recommender Systems (1.5 credits)
* 649 Information Visualization (3 credits)
* 658 Information Architecture (3 credits)
* 670 Information in Organizations (3 credits)
* 684 eCommunities: Analysis and Design of Online Interaction Environments (3 credits)
* 689 Computer-Supported Cooperative Work (3 credits)


Programming Requirement

HCI students must have two semesters of programming -- either previously completed or taken at U-M -- or must show competence through an exam. SI offers

* 539 Design of Complex Web Sites
* 543 Programming I
* 653 Programming II

to help you fulfill this requirement.

Statistics Requirement
HCI students must have one semester of statistics, either previously completed (transcript required) or taken at U-M. SI offers

* 544 Introduction to Statistics and Data Analysis

to help you fulfill this requirement.


These additional HCI program requirements are effective for the 2006-2007 academic year.




http://www.si.umich.edu/msi/hci-reqs.htm

Human-Computer Interaction Specialization

The specialization in Human-Computer Interaction (HCI) educates the professional who is designing and developing technologies that fit the organization and work practices, the work to be done, and the capabilities of the user.

Students learn how to create effective human-computer interaction both by determining useful system functionality, and by designing a usable interface. The "interface" is broadly construed to include not just the visual/auditory display and interaction dialog, but the situation in its entirety, the group in which this task takes place, and the organizational goals and resources. The specialization has applicability to people who are designing technologies for work, education, entertainment and social interaction, and takes as its design materials the technology and social processes (the coordination among actors, the incentive scheme, etc.).

Graduates from the HCI specialization are employed in a variety of professions: as entrepreneur software developers; as team members involved in software development in a larger organization; as inventors of the next interaction paradigm; and as the strategists interested in achieving the organization's goals with distributed talent connected by new technologies.

HCI courses also serve those who wish to become effective webmasters, evaluators of software for use in an organization, writers of software reviews in a magazine like InfoWorld, and writers of technical documentation and training programs. Some job titles of recent graduates are:

* Webmaster
* Technical writer
* Software developer
* Entrepreneur



http://www.si.umich.edu/msi/hci.htm