Human performance modeling

Human performance modeling (HPM) is a method of quantifying human behavior, cognition, and the processes underlying them. It is a tool used by human factors researchers and practitioners both to analyze human function and to develop systems designed for optimal user experience and interaction.[1] It is a complementary approach to other usability testing methods for evaluating the impact of interface features on operator performance.[2]

History

The Human Factors and Ergonomics Society (HFES) formed the Human Performance Modeling Technical Group in 2004. Although a recent discipline, human factors practitioners have been constructing and applying models of human performance since World War II. Notable early examples of human performance models include Paul Fitts' model of aimed motor movement (1954),[3] the choice reaction time models of Hick (1952)[4] and Hyman (1953),[5] and the Swets et al. (1964) work on signal detection.[6] It is suggested that the earliest developments in HPM arose out of the need to quantify human-system feedback for those military systems in development during WWII (see Manual Control Theory below), with continued interest in the development of these models augmented by the cognitive revolution (see Cognition & Memory below).[7]

Human Performance Models

Human performance models predict human behavior in a task, domain, or system. However, these models must be based upon and compared against empirical human-in-the-loop data to ensure that the human performance predictions are correct.[1] As human behavior is inherently complex, simplified representations of interactions are essential to the success of a given model. As no model is able to capture the complete breadth and detail of human performance within a system, domain, or even task, details are abstracted away to keep these models manageable. Although the omission of details is an issue in basic psychological research, it is less of a concern in applied contexts such as those of most concern to the human factors profession.[7] This is related to the internal-external validity trade-off. Regardless, development of a human performance model is an exercise in complexity science.[8] Communication and exploration of the most essential variables governing a given process are often just as important as the accurate prediction of an outcome given those variables.[7]

The goal of most human performance models is to capture enough detail in a particular domain to be useful for the purposes of investigation, design, or evaluation; thus the domain for any particular model is often quite restricted.[7] Defining and communicating the domain of a given model is an essential feature of the practice - and of the entirety of human factors - as a systems discipline. Human performance models contain both the explicit and implicit assumptions or hypotheses upon which the model depends, and are typically mathematical - being composed of equations or computer simulations - although there are also important models that are qualitative in nature.[7]

Individual models vary in their origins, but share in their application and use for issues in the human factors perspective. These can be models of the products of human performance (e.g., a model that produces the same decision outcomes as human operators), the processes involved in human performance (e.g., a model that simulates the processes used to reach decisions), or both. Generally, they are regarded as belonging to one of three areas: perception & attention allocation, command & control, or cognition & memory; although models of other areas such as emotion, motivation, and social/group processes continue to burgeon within the discipline. Integrated models are also of increasing importance. Anthropometric and biomechanical models are also crucial human factors tools in research and practice, and are used alongside other human performance models, but have an almost entirely separate intellectual history, being individually more concerned with static physical qualities than processes or interactions.[7]

The models are applicable in any number of industries and domains, including military,[9][10] aviation,[11] nuclear power,[12] automotive,[13] space operations,[14] manufacturing,[15] and user experience/user interface (UX/UI) design,[2] and have been used to model human-system interactions both simple and complex.

Model Categories

Command & Control

Human performance models of Command & Control describe the products of operator output behavior, and often also the dexterity of the interactions involved in certain tasks.

Hick-Hyman Law

Hick (1952) and Hyman (1953) note that the difficulty of a choice reaction-time task is largely determined by the information entropy of the situation. They suggested that information entropy (H) is a function of the number of alternatives (n) in a choice task, H = log2(n + 1); and that reaction time (RT) of a human operator is a linear function of the entropy: RT = a + bH. This is known as the Hick-Hyman law for choice response time.[7]
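As a minimal illustration, the Hick-Hyman prediction can be computed directly; the values for the constants a and b below are illustrative placeholders, not empirically fitted ones:

```python
import math

def hick_hyman_rt(n_alternatives, a=0.2, b=0.15):
    """Predicted choice reaction time (s) for n equally likely alternatives.

    H = log2(n + 1) is the information entropy of the choice (the +1
    reflects the added uncertainty of whether a stimulus occurs at all),
    and RT = a + b*H. The defaults for a and b are invented for
    illustration; in practice they are fitted to empirical data.
    """
    entropy_bits = math.log2(n_alternatives + 1)
    return a + b * entropy_bits
```

Because entropy grows logarithmically, doubling the number of alternatives adds only a constant increment to the predicted reaction time.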

Pointing

Pointing at stationary targets such as buttons, windows, images, menu items, and controls on computer displays is commonplace and has a well-established modeling tool for analysis - Fitts's law (Fitts, 1954) - which states that the time to make an aimed movement (MT) is a linear function of the index of difficulty of the movement: MT = a + bID. The index of difficulty (ID) for any given movement is a function of the ratio of distance to the target (D) and width of the target (W): ID = log2(2D/W) - a relationship derivable from information theory.[7] Research applying Fitts's law, notably that of Card, English, and Burr (1978), helped establish the ubiquity of the computer mouse. Extensions of Fitts's law also apply to movement along spatially constrained paths, via the steering law, originally discovered by C.G. Drury in 1971[16][17][18] and later rediscovered in the context of human-computer interaction by Accot & Zhai (1997, 1999).[19][20]
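A corresponding sketch of Fitts's law, again with placeholder values for the fitted constants:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted aimed-movement time (s) per Fitts's law: MT = a + b*ID,
    where ID = log2(2D/W) is the index of difficulty in bits.
    a and b are illustrative placeholders; real values are fitted per
    device and task (as Card, English, & Burr, 1978, did for the mouse)."""
    index_of_difficulty = math.log2(2 * distance / width)  # bits
    return a + b * index_of_difficulty
```

Note that halving the target width has the same predicted cost as doubling the distance: each adds one bit of difficulty.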

Manual Control Theory

Complex motor tasks, such as those carried out by musicians and athletes, are not well modeled due to their complexity. Human target-tracking behavior, however, is one complex motor task that is an example of successful HPM.

The history of manual control theory is extensive, dating back to the 1800s in regard to the control of water clocks. However, during the 1940s with the innovation of servomechanisms in WWII, extensive research was put into the continuous control and stabilization of contemporary systems such as radar antennas, gun turrets, and ships/aircraft via feedback control signals.

Analysis methods were developed that predicted the required control systems needed to enable stable, efficient control of these systems (James, Nichols, & Phillips, 1947). Originally interested in temporal response - the relationship between sensed output and motor output as a function of time - James et al. (1947) discovered that the properties of such systems are best characterized by understanding temporal response after it had been transformed into a frequency response; a ratio of output to input amplitude and lag in response over the range of frequencies to which they are sensitive. For systems that respond linearly to these inputs, the frequency response function could be expressed in a mathematical expression called a transfer function.[7] This was applied first to machine systems, then to human-machine systems for maximizing human performance. Tustin (1947), concerned with the design of gun turrets for human control, was first to demonstrate that nonlinear human response could be approximated by a type of transfer function. McRuer and Krendel (1957) synthesized all the work since Tustin (1947), measuring and documenting the characteristics of the human transfer function, and ushered in the era of manual control models of human performance. As electromechanical and hydraulic flight control systems were implemented into aircraft, automation and electronic artificial stability systems began to allow human pilots to control highly sensitive systems. These same transfer functions are still used today in control engineering.
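The best-known human-operator transfer function from this tradition is McRuer's crossover model of the combined human-plus-controlled-element open loop, Y(s) = ωc·e^(-τs)/s. The sketch below evaluates its frequency response; the parameter values are illustrative, not drawn from any particular experiment:

```python
import cmath
import math

def crossover_open_loop(omega, omega_c=4.0, tau=0.2):
    """Frequency response Y(j*omega) of the crossover model of manual
    control: Y(s) = omega_c * exp(-tau*s) / s, where omega_c is the
    crossover frequency (rad/s) and tau the effective human time delay
    (s). Both parameter values here are illustrative placeholders.
    Returns (gain, phase in radians)."""
    s = 1j * omega
    y = omega_c * cmath.exp(-tau * s) / s
    return abs(y), cmath.phase(y)
```

At the crossover frequency the open-loop gain is exactly 1, and the operator's time delay contributes extra phase lag, which is what limits how tightly a human can track a signal.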

From this, the optimal control model (Pew & Baron, 1978) was developed to model a human operator's ability to internalize system dynamics and minimize objective functions, such as root mean square (RMS) error from the target. The optimal control model also recognizes noise in the operator's ability to observe the error signal, and acknowledges noise in the human motor output system.

Technological progress and subsequent automation have, however, reduced the necessity and desire for manual control of systems. Human control of complex systems is now often supervisory in nature, and both human factors and HPM have shifted from investigations of perceptual-motor tasks to the cognitive aspects of human performance.

Signal Detection Theory (SDT)

Although not a formal part of HPM, signal detection theory has an influence on the method, especially within the Integrated Models. SDT is almost certainly the best-known and most extensively used modeling framework in human factors, and is a key feature of education regarding human sensation and perception. In application, the situation of interest is one in which a human operator has to make a binary judgement about whether a signal is present or absent in a noise background. This judgement may be applied in any number of vital contexts. Besides the response of the operator, there are two possible "true" states of the world - either the signal was present or it was not. If the operator correctly identifies the signal as present, this is termed a hit (H). If the operator responds that a signal was present when there was no signal, this is termed a false alarm (FA). If the operator correctly responds when no signal is present, this is termed a correct rejection (CR). If a signal is present and the operator fails to identify it, this is termed a miss (M).
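The four outcome rates above support the standard SDT computation of sensitivity and response bias. The sketch below uses the textbook equal-variance Gaussian formulation, which is not derived in this article:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Sensitivity (d') and response bias (criterion c) under the
    standard equal-variance Gaussian SDT model:
        d' = z(H) - z(FA)
        c  = -(z(H) + z(FA)) / 2
    where z is the inverse of the standard normal CDF, H the hit rate,
    and FA the false-alarm rate."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion
```

A symmetric operator (hit rate 0.84, false-alarm rate 0.16) has d' near 2 and a criterion near 0; shifting the criterion trades hits against false alarms without changing sensitivity.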

In applied psychology and human factors, SDT is applied to research problems including recognition, memory, aptitude testing, and vigilance. Vigilance, referring to the ability of operators to detect infrequent signals over time, is important for human factors across a variety of domains.

Visual Search

A well-developed area in attention is the control of visual attention - models that attempt to answer, "where will an individual look next?" A subset of this concerns the question of visual search: how rapidly can a specified object in the visual field be located? This is a common subject of concern for human factors in a variety of domains, with a substantial history in cognitive psychology. This research continues with modern conceptions of salience and salience maps. Human performance modeling techniques in this area include the work of Melloy, Das, Gramopadhye, and Duchowski (2006) regarding Markov models designed to provide upper and lower bound estimates on the time taken by a human operator to scan a homogeneous display.[21] Another example, from Witus and Ellis (2003), is a computational model of the detection of ground vehicles in complex images.[22] Addressing the nonuniform probability that a menu option is selected by a computer user when certain subsets of the items are highlighted, Fisher, Coury, Tengs, and Duffy (1989) derived an equation for the optimal number of highlighted items for a given number of total items of a given probability distribution.[23] Because visual search is an essential aspect of many tasks, visual search models are now developed in the context of integrating modeling systems. For example, Fleetwood and Byrne (2006) developed an ACT-R model of visual search through a display of labeled icons - predicting the effects of icon quality and set size not only on search time but on eye movements.[7][24]

Visual Sampling

Many domains contain multiple displays, and require more than a simple discrete yes/no response time measurement. A critical question for these situations may be "How much time will operators spend looking at X relative to Y?" or "What is the likelihood that the operator will completely miss seeing a critical event?" Visual sampling is the primary means of obtaining information from the world.[25] An early model in this domain is Senders' (1964, 1983), based upon operators' monitoring of multiple dials, each with different rates of change.[26][27] Operators try, as best as they can, to reconstruct the original set of dials based on discrete sampling. This relies on the mathematical Nyquist theorem, which states that a signal band-limited to W Hz can be reconstructed by sampling every 1/(2W) seconds. This was combined with a measure of the information generation rate for each signal, to predict the optimal sampling rate and dwell time for each dial. Human limitations prevent human performance from matching optimal performance, but the predictive power of the model influenced future work in this area, such as Sheridan's (1970) extension of the model with considerations of access cost and information sample value.[7][28]
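The sampling logic of Senders' model can be sketched in a few lines; the dial names and bandwidths below are invented for illustration:

```python
def dial_sampling_intervals(bandwidths_hz):
    """Nyquist-limited sampling interval (s) for each monitored dial:
    a signal band-limited to W Hz must be sampled at least every 1/(2W)
    seconds, so faster-changing dials demand more frequent glances.
    `bandwidths_hz` maps dial name -> signal bandwidth in Hz (values
    here are hypothetical)."""
    return {dial: 1.0 / (2.0 * w) for dial, w in bandwidths_hz.items()}
```

For example, a dial changing at 0.5 Hz must be glanced at twice as often as one changing at 0.25 Hz, which is the core prediction Senders compared against operators' observed scan patterns.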

A modern conceptualization by Wickens et al. (2008) is the salience, effort, expectancy, and value (SEEV) model. It was developed by the researchers (Wickens et al., 2001) as a model of scanning behavior describing the probability that a given area of interest (AOI) will attract attention. The SEEV model is described by p(A) = sS - efEF + (exEX)(vV), in which p(A) is the probability a particular area will be sampled; S is the salience for that area; EF represents the effort required in reallocating attention to a new AOI, related to the distance from the currently attended location to the AOI; EX (expectancy) is the expected event rate (bandwidth); and V is the value of the information in that AOI, represented as the product of Relevance and Priority (R*P).[25] The lowercase values are scaling constants. This equation allows for the derivation of optimal and normative models for how an operator should behave, and characterization of how they do behave. Wickens et al. (2008) also generated a version of the model that does not require absolute estimation of the free parameters for the environment - just the comparative salience of other regions compared to the region of interest.[7]
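As stated, the SEEV equation is straightforward to compute. The sketch below simply transcribes the formula given above, with all coefficient values left to the analyst:

```python
def seev_score(S, EF, EX, V, s=1.0, ef=1.0, ex=1.0, v=1.0):
    """Attention-attraction score for one area of interest under the
    SEEV model as given above: p(A) = s*S - ef*EF + (ex*EX)*(v*V).
    S = salience, EF = effort to reallocate attention, EX = expectancy
    (event bandwidth), V = value (relevance x priority); the lowercase
    arguments are the scaling constants. Scores are compared across
    AOIs rather than read as absolute probabilities."""
    return s * S - ef * EF + (ex * EX) * (v * V)
```

Ranking the scores across all AOIs on a display yields the model's prediction of where attention goes next.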

Visual Discrimination

Models of visual discrimination of individual letters include those of Gibson (1969), Briggs and Hocevar (1975), and McClelland and Rumelhart (1981), the last of which is part of a larger model for word recognition noted for its explanation of the word superiority effect. These models are noted to be highly detailed, and make quantitative predictions about small effects of specific letters.[7]

Depth Perception

A qualitative HPM example is the Cutting and Vishton (1995) model of depth perception, which indicates that different cues to depth perception are more or less effective at various distances.

Workload

Although an exact definition or method for measurement of the construct of workload is debated by the human factors community, a critical part of the notion is that human operators have some capacity limitations and that such limitations can be exceeded only at the risk of degrading performance. For physical workload, it may be understood that there is a maximum amount that a person should be asked to lift repeatedly, for example. However, the notion of workload becomes more contentious when the capacity to be exceeded is in regard to attention - what are the limits of human attention, and what exactly is meant by attention? Human performance modeling produces valuable insights into this area.[7]

Byrne and Pew (2009) consider an example of a basic workload question: "To what extent do task A and B interfere?" These researchers indicate this as the basis for the psychological refractory period (PRP) paradigm. Participants perform two choice reaction-time tasks, and the two tasks will interfere to a degree - especially when the participant must react to the stimuli for the two tasks when they are close together in time - but the degree of interference is typically smaller than the total time taken for either task. The response selection bottleneck model (Pashler, 1994) models this situation well - in that each task has three components: perception, response selection (cognition), and motor output. The attentional limitation - and thus locus of workload - is that response selection can only be done for one task at a time. The model makes numerous accurate predictions, and those for which it cannot account are addressed by cognitive architectures (Byrne & Anderson, 2001; Meyer & Kieras, 1997). In simple dual-task situations, attention and workload are quantified, and meaningful predictions made possible.[7]
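One common formalization of the response selection bottleneck can be sketched as follows; the stage durations are invented for illustration:

```python
def prp_rt2(soa, p1=0.10, s1=0.15, p2=0.10, s2=0.15, m2=0.10):
    """Predicted reaction time (s) to the second stimulus under the
    response selection bottleneck model (Pashler, 1994). Perception (p)
    and motor (m) stages can run in parallel across tasks, but response
    selection (s) handles one task at a time, so task 2's selection
    waits until task 1's selection finishes.
    soa = stimulus onset asynchrony between the two tasks (s);
    all stage durations here are hypothetical."""
    bottleneck_free_at = p1 + s1                      # task 1 releases selection
    wait = max(0.0, bottleneck_free_at - (soa + p2))  # long SOAs absorb the wait
    return p2 + wait + s2 + m2
```

At short SOAs the predicted RT2 includes waiting time at the bottleneck; at long SOAs it flattens at p2 + s2 + m2, reproducing the classic PRP curve.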

Horrey and Wickens (2003) consider the questions: To what extent will a secondary task interfere with driving performance, and does it depend on the nature of the driving and on the interface presented in the second task? Using a model based on multiple resource theory (Wickens, 2002, 2008; Navon & Gopher, 1979), which proposes that there are several loci for multiple-task interference (the stages of processing, the codes of processing, and modalities), the researchers suggest that cross-task interference increases proportional to the extent that the two tasks use the same resources within a given dimension: Visual presentation of a read-back task should interfere more with driving than should auditory presentation, because driving itself makes stronger demands on the visual modality than on the auditory.[7]

Although multiple resource theory is the best known workload model in human factors, it is often represented qualitatively. Detailed computational implementations are better alternatives for application in HPM methods, including the Horrey and Wickens (2003) model, which is general enough to be applied in many domains. Integrated approaches, such as task network modeling, are also becoming more prevalent in the literature.[7]

Numerical typing is an important perceptual-motor task whose performance may vary with pacing, finger strategy, and the urgency of the situation. The queuing network-model human processor (QN-MHP), a computational architecture, allows performance of perceptual-motor tasks to be modeled mathematically. One study enhanced QN-MHP with a top-down control mechanism, closed-loop movement control, and a finger-related motor control mechanism to account for task interference, endpoint reduction, and force deficit, respectively. The model also incorporated neuromotor noise theory to quantify endpoint variability in typing. Its predictions of typing speed and accuracy were validated against Lin and Wu's (2011) experimental results; the resulting root-mean-squared errors were 3.68% with a correlation of 95.55% for response time, and 35.10% with a correlation of 96.52% for typing accuracy. The model can be applied to provide optimal speech rates for voice synthesis and keyboard designs in different numerical typing situations.[29]

The psychological refractory period (PRP) is a basic but important form of dual-task information processing. Existing serial or parallel processing models of PRP have successfully accounted for a variety of PRP phenomena; however, each also encounters at least one experimental counterexample to its predictions or modeling mechanisms. A queuing network-based mathematical model of PRP is able to account for various experimental findings in PRP with closed-form equations, including all of the major counterexamples encountered by the existing models, with fewer or equal numbers of free parameters. This modeling work also offers an alternative theoretical account of PRP and demonstrates the importance of the theoretical concepts of "queuing" and "hybrid cognitive networks" in understanding cognitive architecture and multitask performance.[30]

Cognition & Memory

The paradigm shift in psychology from behaviorism to the study of cognition had a huge impact on the field of Human Performance Modeling. Regarding memory and cognition, the research of Newell and Simon regarding artificial intelligence and the General Problem Solver (GPS; Newell & Simon, 1963) demonstrated that computational models could effectively capture fundamental human cognitive behavior. Newell and Simon were not simply concerned with the amount of information - say, counting the number of bits the human cognitive system had to receive from the perceptual system - but rather the actual computations being performed. They were critically involved with the early success of comparing cognition to computation, and the ability of computation to simulate critical aspects of cognition - thus leading to the creation of the sub-discipline of artificial intelligence within computer science, and changing how cognition was viewed in the psychological community. Although cognitive processes do not literally flip bits in the same way that discrete electronic circuits do, pioneers were able to show that any universal computational machine could simulate the processes used in another, without a physical equivalence (Pylyshyn, 1989; Turing, 1936). The cognitive revolution allowed all of cognition to be approached by modeling, and these models now span a vast array of cognitive domains - from simple list memory, to comprehension of communication, to problem solving and decision making, to imagery, and beyond.[7]

One popular example is the Atkinson-Shiffrin (1968) "modal" model of memory. See also Cognitive Models for information not included here.

Routine Cognitive Skill

One area of memory and cognition regards modeling routine cognitive skills; when an operator has the correct knowledge of how to perform a task and simply needs to execute that knowledge. This is widely applicable, as many operators are practiced enough that their procedures become routine. The GOMS (goals, operators, methods, and selection rules) family of Human Performance Models popularized and well-defined by researchers in the field (Card et al., 1983; John & Kieras, 1996a, 1996b) were originally applied to model users of computer interfaces, but have since been extended to other areas. They are useful HPM tools, suitable for a variety of different concerns and sizes of analysis, but are limited in regard to analyzing user error (see Wood & Kieras, 2002, for an effort to extend GOMS to handling errors).[7]

The simplest form of a GOMS model is a keystroke-level model (KLM) - in which all physical actions (e.g., keystrokes, mouse clicks), also termed operations, that a user must take in order to complete a given task are listed. Mental operations (e.g., find an object on the screen) augment this using a straightforward set of rules. Each operation has a time associated with it (such as 280 ms for a keystroke), and the total time for the task is estimated by adding up operation times. The efficiency of two procedures may then be compared, using their respective estimated execution times. Although this form of model is highly approximate (it relies on many simplifying assumptions), it is a form of model still used today (e.g., for in-vehicle information systems and mobile phones).[7]
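A KLM estimate is just a table lookup and a sum. The operator times below are the commonly cited Card, Moran, and Newell (1983) values, but real analyses fit them to the user population and device:

```python
# Commonly cited KLM operator times (s); actual values vary with user skill.
KLM_TIMES = {
    "K": 0.28,  # keystroke (average skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # mouse button press or release
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(operations):
    """Estimated execution time for a task: the sum of the times of its
    listed physical and mental operations, per the keystroke-level model."""
    return sum(KLM_TIMES[op] for op in operations)
```

Comparing two candidate procedures is then a comparison of two sums, e.g. klm_estimate(["M", "K", "K", "K"]) for a keyboard shortcut versus klm_estimate(["H", "P", "B", "B"]) for reaching for the mouse and clicking.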

Detailed versions of GOMS exist, including:

--CPM-GOMS: "Cognitive, perceptual, motor" and "critical path method" (John & Kieras, 1996a, 1996b) - attempts to break down performance into primitive CPM units lasting tens to hundreds of milliseconds (durations for many operations in CPM-GOMS models come from the published literature, especially Card et al., 1983).[7]

--GOMSL / NGOMSL: GOMS Language or Natural GOMS Language, which focus on the hierarchical decomposition of goals, but with an analysis including methods - procedures people use to accomplish those goals. Many generic mental operations in the KLM are replaced with detailed descriptions of the cognitive activity involving the organization of people's procedural knowledge into methods. A detailed GOMSL analysis allows for the prediction of not only execution time, but also the time it takes for learning the procedures, and the amount of transfer that can be expected based on already known procedures (Gong and Kieras, 1994). These models are not only useful for informing redesigns of user-interfaces, but also quantitatively predict execution and learning time for multiple tasks.[7]

Decision-Making

Another critical cognitive activity of interest to human factors is that of judgement and decision making. These activities contrast starkly with routine cognitive skills, for which the procedures are known in advance, as many situations require operators to make judgements under uncertainty - to produce a rating of quality, or perhaps choose among many possible alternatives. Although many disciplines including mathematics and economics make significant contributions to this area of study, the majority of these models do not model human behavior but rather optimal behavior, such as subjective expected utility theory (Savage, 1954; von Neumann & Morgenstern, 1944). While models of optimal behavior are important and useful, they provide a baseline of comparison for human performance rather than a description of it - much research on human decision making in this domain compares human performance to mathematically optimal formulations. Examples of this include Kahneman and Tversky's (1979) prospect theory and Tversky's (1972) elimination by aspects model. Less formal approaches include Tversky and Kahneman's seminal work on heuristics and biases, Gigerenzer's work on 'fast and frugal' shortcuts (Gigerenzer, Todd, & ABC Research Group, 2000), and the descriptive models of Payne, Bettman, and Johnson (1993) on adaptive strategies.[7]
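The normative benchmark itself is simple to state: expected utility is the probability-weighted sum of outcome utilities. The sketch below uses invented options to show the comparison descriptive models depart from:

```python
def expected_utility(outcomes):
    """Expected utility of an option: sum of p_i * u_i over its possible
    outcomes, per (subjective) expected utility theory. A normative
    decision maker picks the option with the highest value; descriptive
    models such as prospect theory depart from this by reweighting
    probabilities and valuing gains and losses asymmetrically.
    `outcomes` is a list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Hypothetical choice: a sure gain versus a gamble of higher expected value.
sure_thing = [(1.0, 45.0)]
gamble = [(0.5, 100.0), (0.5, 0.0)]
```

Here the normative model ranks the gamble (50) above the sure thing (45), whereas human choosers are typically risk-averse for gains - exactly the kind of divergence prospect theory was built to describe.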

Sometimes optimal performance cannot be defined; one powerful and popular approach for such cases is the lens model (Brunswik, 1952; Cooksey, 1996; Hammond, 1955), which deals with policy capturing, cognitive control, and cue utilization. It has been applied in aviation (Bisantz & Pritchett, 2003) and command and control (Bisantz et al., 2000), and to investigate human judgement in employment interviews (Doherty, Ebert, & Callender, 1986), financial analysis (Ebert & Kruse, 1978), physicians' diagnoses (LaDuca, Engel, & Chovan, 1988), teacher ratings (Carkenord & Stephens, 1944), and numerous other domains.[7] Although the model does have limitations [described in Byrne & Pew (2009)], it is very powerful and remains underutilized in the human factors profession.[7]

Situation Awareness (SA)

Models of SA range from descriptive (Endsley, 1995) to computational (Shively et al., 1997).[14][31][32] The most useful model in HPM is that of McCarley et al. (2002), known as the A-SA model (Attention/Situation Awareness). It incorporates two semi-independent components: a perception/attention module and a cognitive SA-updating module.[14] The perception/attention (P/A) component of the A-SA model is based on the Theory of Visual Attention (Bundesen, 1990; refer to McCarley et al., 2002).[33][14]

Integrated Models

Many of the models described above are very limited in their application. Although many extensions of SDT have been proposed to cover a variety of other judgement domains (see T.D. Wickens, 2002, for examples), most of these never caught on, and SDT remains limited to binary situations. The narrow scope of these models is not limited to human factors, however - Newton's laws of motion have little predictive power regarding electromagnetism, for example. Still, this is frustrating for human factors professionals, because real human performance in vivo draws upon a wide array of human capabilities. As Byrne & Pew (2009) describe, "in the space of a minute, a pilot might quite easily conduct a visual search, aim for and push a button, execute a routine procedure, make a multiple-cue probabilistic judgement" and do just about everything else described by fundamental human performance models.[7] A fundamental review of HPM by the National Academies (Elkind, Card, Hochberg, & Huey, 1990) described integration as the great unsolved challenge in HPM. This issue remains unsolved; however, there have been efforts to integrate and unify multiple models and to build systems that span domains. In human factors, the two primary modeling approaches that accomplish this and have gained popularity are task network modeling and cognitive architectures.[7]

Task Network Modeling

The term network model refers to a modeling procedure involving Monte Carlo simulation rather than to a specific model. Although the modeling framework is atheoretical, the models built with it are only as good as the theories and data used to create them.[7]

When a modeler builds a network model of a task, the first step is to construct a flow chart decomposing the task into discrete sub-tasks - each sub-task as a node, the serial and parallel paths connecting them, and the gating logic that governs the sequential flow through the resulting network. When modeling human-system performance, some nodes represent human decision processes and/or human task execution, some represent system execution sub-tasks, and some aggregate human/machine performance into a single node. Each node is represented by a statistically specified completion time distribution and a probability of completion. When all these specifications are programmed into a computer, the network is exercised repeatedly in Monte Carlo fashion to build up distributions of the aggregate performance measures that are of concern to the analyst. The art lies in the modeler's selection of the right level of abstraction at which to represent nodes and paths and in estimating the statistically defined parameters for each node. Sometimes, human-in-the-loop simulations are conducted to provide support and validation for the estimates. Detail regarding this, related, and alternative approaches may be found in Laughery, Lebiere, and Archer (2006) and in the work of Schweickert and colleagues, such as Schweickert, Fisher, and Proctor (2003).[7]
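The procedure just described can be sketched in a few lines; the three-node serial network and all of its parameters below are invented for illustration:

```python
import random
import statistics

def simulate_task_network(n_runs=10_000, seed=42):
    """Monte Carlo exercise of a toy serial task network (detect ->
    decide -> act). Each node has a statistically specified completion
    time (Gaussian here, truncated at zero) and a probability of
    successful completion; repeated runs build up distributions of the
    aggregate measures. All node parameters are hypothetical.
    Returns (mean completion time of successful runs, success rate)."""
    rng = random.Random(seed)
    nodes = [            # (name, mean s, sd s, p(success))
        ("detect", 0.5, 0.10, 0.99),
        ("decide", 1.2, 0.30, 0.95),
        ("act",    0.8, 0.20, 0.98),
    ]
    times, successes = [], 0
    for _ in range(n_runs):
        elapsed, ok = 0.0, True
        for _name, mu, sd, p_success in nodes:
            elapsed += max(0.0, rng.gauss(mu, sd))
            if rng.random() > p_success:
                ok = False   # node failed; this run ends unsuccessfully
                break
        if ok:
            successes += 1
            times.append(elapsed)
    return statistics.mean(times), successes / n_runs
```

Real tools such as Micro Saint Sharp and IMPRINT add branching logic, parallel paths, and workload bookkeeping on top of this same core simulation loop.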

Historically, Task Network Modeling stems from queuing theory and modeling of engineering reliability and quality control. Art Siegel, a psychologist, first thought of extending reliability methods into a Monte Carlo simulation model of human-machine performance (Siegel & Wolf, 1969). In the early 1970s, the U.S. Air Force sponsored the development of SAINT (Systems Analysis of Integrated Networks of Tasks), a high-level programming language specifically designed to support the programming of Monte Carlo simulations of human-machine task networks (Wortman, Pritsker, Seum, Seifert, & Chubb, 1974). A modern version of this software is Micro Saint Sharp (Archer, Headley, & Allender, 2003). This family of software spawned a tree of special-purpose programs with varying degrees of commonality and specificity with Micro Saint. The most prominent of these is the IMPRINT series (Improved Performance Research Integration Tool)[34] sponsored by the U.S. Army (and based on MANPRINT), which provides modeling templates specifically adapted to particular human performance modeling applications (Archer et al., 2003). Two workload-specific programs are W/INDEX (North & Riley, 1989) and WinCrew (Lockett, 1997).

The network approach to modeling using these programs is popular due to its technical accessibility to individuals with general knowledge of computer simulation techniques and human performance analysis. The flowcharts that result from task analysis lead naturally to formal network models. The models can be developed to serve specific purposes - from simulation of an individual using a human-computer interface to analysis of potential traffic flow in a hospital emergency center. Their weakness is the great difficulty of deriving performance times and success probabilities from previous data, theory, or first principles, as these data provide the model's principal content.

Cognitive Architectures

Cognitive Architectures are broad theories of human cognition based on a wide selection of empirical human data and are generally implemented as computer simulations. They are the embodiment of a scientific hypothesis about those aspects of human cognition that are relatively constant over time and independent of task (Gray, Young, & Kirschenbaum, 1997; Ritter & Young, 2001). Cognitive architectures are an attempt to theoretically unify disconnected empirical phenomena in the form of computer simulation models. While cognitive theory alone is inadequate for application to human factors, since the 1990s cognitive architectures have also included mechanisms for sensation, perception, and action. Two early examples of this include the Executive Process Interactive Control model (EPIC; Kieras, Wood, & Meyer, 1995; Meyer & Kieras, 1997) and ACT-R (Byrne & Anderson, 1998).

A model of a task in a cognitive architecture, generally referred to as a cognitive model, consists of both the architecture and the knowledge to perform the task. This knowledge is acquired through human factors methods including task analyses of the activity being modeled. Cognitive architectures are also connected with a complex simulation of the environment in which the task is to be performed - sometimes, the architecture interacts directly with the actual software humans use to perform the task. Cognitive architectures not only produce a prediction about performance, but also output actual performance data - able to produce time-stamped sequences of actions that can be compared with real human performance on a task.

Examples of cognitive architectures include the EPIC system (Hornof & Kieras, 1997, 1999), CPM-GOMS (Kieras, Wood, & Meyer, 1997), the Queuing Network-Model Human Processor (Wu & Liu, 2007, 2008),[35][36] ACT-R (Anderson, 2007; Anderson & Lebiere, 1998), and QN-ACTR (Cao & Liu, 2013).[37]

The Queuing Network-Model Human Processor model was used to predict how drivers perceive the operating speed and posted speed limit, choose a speed, and execute the chosen operating speed. The model was sensitive (average d′ of 2.1) and accurate (average testing accuracy above 86%) in predicting the majority of unintentional speeding.[35]
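For readers unfamiliar with the d′ statistic, it comes from signal detection theory and is computed from hit and false-alarm rates. The rates below are hypothetical, chosen only to show how a d′ in the vicinity of 2 arises; they are not values from the cited study:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(hit rate) - z(false-alarm rate), from signal detection theory."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative: a model that flags 85% of actual speeding episodes
# while false-alarming on 10% of non-speeding episodes.
sensitivity = d_prime(0.85, 0.10)
```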

ACT-R has been used to model a wide variety of phenomena. It consists of several modules, each one modeling a different aspect of the human system. Modules are associated with specific brain regions, and ACT-R has thus successfully predicted neural activity in parts of those regions. Each module essentially represents a theory of how that piece of the overall system works, derived from the research literature in the area. For example, the declarative memory system in ACT-R is based on a series of equations considering frequency and recency that incorporate Bayesian notions of need probability given context, with equations for learning as well as performance. Some modules are of higher fidelity than others, however - the manual module incorporates Fitts's law and other simple operating principles, but is not (as of yet) as detailed as an optimal control theory model. The notion, however, is that each of these modules requires strong empirical validation. This is both a benefit and a limitation of ACT-R, as there is still much work to be done in the integration of cognitive, perceptual, and motor components, but the process is promising (Byrne, 2007; Foyle and Hooey, 2008; Pew & Mavor, 1998).
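The frequency-and-recency equations mentioned above can be illustrated with ACT-R's base-level learning equation, B_i = ln(Σ_j t_j^(−d)), where the t_j are the times since each past use of a memory chunk and d is the decay parameter (conventionally 0.5). The usage histories below are invented for illustration:

```python
import math

def base_level_activation(lags, decay=0.5):
    """ACT-R base-level learning: B_i = ln( sum_j t_j**(-d) ).
    Frequency enters as the number of terms in the sum; recency enters
    because small lags t_j contribute larger terms."""
    return math.log(sum(t ** -decay for t in lags))

# A chunk used recently and often is more active than one used
# once, long ago (lags in arbitrary time units, invented values).
recent_frequent = base_level_activation([1.0, 5.0, 10.0])
old_rare = base_level_activation([100.0])
```

Higher activation translates, via further ACT-R equations, into faster and more reliable retrieval.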

Team/Crew Performance Modeling

GOMS has been used to model both complex team tasks (Kieras & Santoro, 2004) and group decision making (Sorkin, Hays, & West, 2001).

Modeling Approaches

Computer Simulation Models/Approaches

Example: IMPRINT (Improved Performance Research Integration Tool)

Mathematical Models/Approaches

Example: Cognitive model

Comparing HPM Models

To compare different HPM models, one approach is to calculate their AIC (Akaike information criterion) and to consider a cross-validation criterion.[38]
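A minimal sketch of such a comparison, using the least-squares form of AIC, n·ln(RSS/n) + 2k; the sample size, residual sums of squares, and parameter counts below are invented for illustration:

```python
import math

def aic(n, rss, k):
    """AIC for a least-squares fit: n*ln(RSS/n) + 2k, where n is the
    number of observations, RSS the residual sum of squares, and k the
    number of free parameters. Lower is better; the 2k term penalizes
    models that buy fit with extra free parameters."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical comparison: model B fits slightly better (lower RSS)
# but uses three more free parameters than model A.
aic_a = aic(n=50, rss=12.0, k=2)
aic_b = aic(n=50, rss=11.5, k=5)
best = "A" if aic_a < aic_b else "B"
```

Here the small improvement in fit does not justify the extra parameters, so AIC prefers the simpler model; cross-validation asks the related question of which model predicts held-out data best.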

Benefits

Numerous benefits may be gained from using modeling techniques in the human performance domain.

Specificity

A sizable majority of explanations in psychology are not only qualitative but also vague. Concepts such as "attention", "processing capacity", "workload", and "situation awareness" (SA), both general and specific to human factors, are often difficult to quantify in applied domains. Researchers differ in their definitions of such terms, which makes it likewise difficult to specify data for each term. Formal models, in contrast, typically require explicit specification of theoretical terms. Specificity requires that explanations be internally coherent, whereas verbal theories are often so flexible that they fail to remain consistent, allowing contradictory predictions to be derived from them. Not all models are quantitative in nature, however, and thus not all provide the benefit of specificity to the same degree.[7]

Objectivity

Formal models are generally modeler independent. Although great skill is involved in constructing a specific model, once it is constructed, anybody with the appropriate knowledge can run or solve it, and the model produces the same predictions regardless of who is running or solving it. Predictions are no longer tied to the biases or sole intuition of a single expert but, rather, to a specification that can be made public.[7]

Quantitativeness

Many human performance models make quantitative predictions, which are critical in applied situations. Purely empirical methods analyzed with hypothesis-testing techniques, as is standard in most psychological experiments, focus on providing answers to vague questions such as "Are A and B different?" and "Is this difference statistically significant?", whereas formal models often provide useful quantitative information such as "A is x% slower than B."[7]

Clarity

Human performance models provide clarity, in that the model provides an explanation for observed differences; such explanations are not generally provided by strictly empirical methods.[7]

Issues

Misconceptions

Many human performance models share key features with Artificial Intelligence (AI) methods and systems. The goal of AI research is to produce systems that exhibit intelligent behavior, generally without consideration of the degree to which that intelligence resembles or predicts human performance, yet the distinction between AI methods and those of HPM is at times unclear. For example, Bayesian classifiers used to filter spam emails approximate human classification performance (classifying spam emails as spam, and non-spam emails as important) and are thus highly intelligent systems, but they do not rely on interpretation of the semantics of the messages themselves, relying instead on statistical methods. However, Bayesian analysis can also be essential to human performance models.[7]
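The statistical flavor of such a classifier can be reduced to a one-word toy application of Bayes' rule; the word frequencies and prior below are invented, and a real filter would combine evidence over many words:

```python
def p_spam_given_word(p_word_spam, p_word_ham, p_spam=0.5):
    """P(spam | word) by Bayes' rule: the classifier uses only how often
    the word appears in spam vs. non-spam, never the message's meaning."""
    num = p_word_spam * p_spam
    return num / (num + p_word_ham * (1 - p_spam))

# Invented frequencies: the word appears in 60% of spam messages
# but only 5% of legitimate ones.
score = p_spam_given_word(p_word_spam=0.6, p_word_ham=0.05)
```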

Usefulness

Models may focus more on the processes involved in human performance rather than the products of human performance, thus limiting their usefulness in human factors practice.[7]

Abstraction

The abstraction necessary for understandable models competes with accuracy. While generality, simplicity, and understandability are important to the application of models in human factors practice, many valuable human performance models are inaccessible to those without graduate or postdoctoral training. For example, while Fitts's law is straightforward for even undergraduates, the lens model requires an intimate understanding of multiple regression, and construction of an ACT-R-type model requires extensive programming skills and years of experience. While the successes of complex models are considerable, a practitioner of HPM must be aware of the trade-offs between accuracy and usability.[7]

Free Parameters

As is the case in most model-based sciences, the free parameters rampant within models of human performance require empirical data a priori.[7] There may be limitations in collecting the empirical data necessary to run a given model, which may constrain its application.

Validation

Validation of human performance models is of the highest concern to the science of HPM.

Researchers typically assess fit by computing R² and root mean square (RMS) error between the experimental data and the model's predictions.

In addition, while validity may be assessed by comparing human data with the model's output, free parameters are flexible enough to incorrectly fit data.[7]

Common Terms

- Free parameter: A parameter of a model whose value is estimated from the data to be modeled so as to maximally align the model's predictions with those data.[39]

- Coefficient of determination (R²): A statistic indicating how well a line or curve fits the data under a statistical model.

- Root mean square (RMS): A statistical measure defined as the square root of the arithmetic mean of the squares of a set of numbers.[40]
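The two fit statistics used in validation can be computed directly from paired observed and predicted values; the data below are invented for illustration:

```python
import math

def rms_error(observed, predicted):
    """Root mean square error between data and model predictions."""
    return math.sqrt(
        sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed)
    )

def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_o = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

# Invented example: model predictions closely track four observations.
obs = [2.0, 4.0, 6.0, 8.0]
pred = [2.1, 3.9, 6.2, 7.8]
```

Note that a high R² with many free parameters may reflect overfitting rather than validity, which is why cross-validation is also recommended.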

See also

Cognitive Architectures

Cognitive Model

Cognitive Revolution

Decision-Making

Depth Perception

Human Factors

Human Factors (Journal)

Human Factors & Ergonomics Society

Manual Control Theory

Markov Models

Mathematical Psychology

Monte Carlo

Salience

Signal Detection Theory

Situation Awareness

Visual Search

Workload

References

  1. Sebok, A., Wickens, C., & Sargent, R. (2013, September). Using Meta-Analyses Results and Data Gathering to Support Human Performance Model Development. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 57, No. 1, pp. 783-787). SAGE Publications.
  2. Carolan, T., Scott-Nash, S., Corker, K., & Kellmeyer, D. (2000, July). An application of human performance modeling to the evaluation of advanced user interface features. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 44, No. 37, pp. 650-653). SAGE Publications.
  3. Fitts, P. M. (1954). "The information capacity of the human motor system in controlling the amplitude of movement". Journal of Experimental Psychology. 47 (6): 381–91. doi:10.1037/h0055392. PMID 13174710. S2CID 501599.
  4. Hick, W. E. (1952). "On the rate of gain of information". Quarterly Journal of Experimental Psychology. 4 (1): 11–26. doi:10.1080/17470215208416600. S2CID 39060506.
  5. Hyman, R (1953). "Stimulus information as a determinant of reaction time". Journal of Experimental Psychology. 45 (3): 188–96. doi:10.1037/h0056940. PMID 13052851. S2CID 17559281.
  6. Swets, J. A., Tanner, W. P., & Birdsall, T. G. (1964). Decision processes in perception. Signal detection and recognition in human observers, 3-57.
  7. Byrne, Michael D.; Pew, Richard W. (2009-06-01). "A History and Primer of Human Performance Modeling". Reviews of Human Factors and Ergonomics. 5 (1): 225–263. doi:10.1518/155723409X448071. ISSN 1557-234X.
  8. Warwick, W., Marusich, L., & Buchler, N. (2013, September). Complex Systems and Human Performance Modeling. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 57, No. 1, pp. 803-807). SAGE Publications.
  9. Lawton, C. R., Campbell, J. E., & Miller, D. P. (2005). Human performance modeling for system of systems analytics: soldier fatigue (No. SAND2005-6569). Sandia National Laboratories.
  10. Mitchell, D. K., & Samms, C. (2012). An Analytical Approach for Predicting Soldier Workload and Performance Using Human Performance Modeling. Human-Robot Interactions in Future Military Operations.
  11. Foyle, D. C., & Hooey, B. L. (Eds.). (2007). Human performance modeling in aviation. CRC Press.
  12. O’Hara, J. (2009). Applying Human Performance Models to Designing and Evaluating Nuclear Power Plants: Review Guidance and Technical Basis. BNL-90676-2009). Upton, NY: Brookhaven National Laboratory.
  13. Lim, J. H.; Liu, Y.; Tsimhoni, O. (2010). "Investigation of driver performance with night-vision and pedestrian-detection systems—Part 2: Queuing network human performance modeling". IEEE Transactions on Intelligent Transportation Systems. 11 (4): 765–772. doi:10.1109/tits.2010.2049844. S2CID 17275244.
  14. McCarley, J. S., Wickens, C. D., Goh, J., & Horrey, W. J. (2002, September). A computational model of attention/situation awareness. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 46, No. 17, pp. 1669-1673). SAGE Publications.
  15. Baines, T. S.; Kay, J. M. (2002). "Human performance modelling as an aid in the process of manufacturing system design: a pilot study". International Journal of Production Research. 40 (10): 2321–2334. doi:10.1080/00207540210128198. S2CID 54855742.
  16. DRURY, C. G. (1971-03-01). "Movements with Lateral Constraint". Ergonomics. 14 (2): 293–305. doi:10.1080/00140137108931246. ISSN 0014-0139. PMID 5093722.
  17. Drury, C. G.; Daniels, E. B. (1975-07-01). "Performance Limitations in Laterally Constrained Movements". Ergonomics. 18 (4): 389–395. doi:10.1080/00140137508931472. ISSN 0014-0139.
  18. Drury, Colin G.; Montazer, M. Ali; Karwan, Mark H. (1987). "Self-Paced Path Control as an Optimization Task". IEEE Transactions on Systems, Man, and Cybernetics. 17 (3): 455–464. doi:10.1109/TSMC.1987.4309061. S2CID 10648877.
  19. Accot, Johnny; Zhai, Shumin (1997-01-01). "Beyond Fitts' law". Proceedings of the ACM SIGCHI Conference on Human factors in computing systems. CHI '97. New York, NY, USA: ACM. pp. 295–302. doi:10.1145/258549.258760. ISBN 0897918029. S2CID 53224495.
  20. Accot, Johnny; Zhai, Shumin (1999-01-01). "Performance evaluation of input devices in trajectory-based tasks". Proceedings of the SIGCHI conference on Human factors in computing systems the CHI is the limit - CHI '99. CHI '99. New York, NY, USA: ACM. pp. 466–472. doi:10.1145/302979.303133. ISBN 0201485591. S2CID 207247723.
  21. Melloy, B. J.; Das, S.; Gramopadhye, A. K.; Duchowski, A. T. (2006). "A model of extended, semisystematic visual search" (PDF). Human Factors: The Journal of the Human Factors and Ergonomics Society. 48 (3): 540–554. doi:10.1518/001872006778606840. PMID 17063968. S2CID 686156.
  22. Witus, G.; Ellis, R. D. (2003). "Computational modeling of foveal target detection". Human Factors: The Journal of the Human Factors and Ergonomics Society. 45 (1): 47–60. doi:10.1518/hfes.45.1.47.27231. PMID 12916581. S2CID 10022486.
  23. Fisher, D. L.; Coury, B. G.; Tengs, T. O.; Duffy, S. A. (1989). "Minimizing the time to search visual displays: The role of highlighting". Human Factors: The Journal of the Human Factors and Ergonomics Society. 31 (2): 167–182. doi:10.1177/001872088903100206. PMID 2744770. S2CID 12313971.
  24. Fleetwood, M. D.; Byrne, M. D. (2006). "Modeling the visual search of displays: a revised ACT-R model of icon search based on eye-tracking data". Human-Computer Interaction. 21 (2): 153–197. doi:10.1207/s15327051hci2102_1. S2CID 6042892.
  25. Cassavaugh, N. D., Bos, A., McDonald, C., Gunaratne, P., & Backs, R. W. (2013). Assessment of the SEEV Model to Predict Attention Allocation at Intersections During Simulated Driving. In 7th International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design (No. 52).
  26. Senders, J. W. (1964). The human operator as a monitor and controller of multidegree of freedom systems. Human Factors in Electronics, IEEE Transactions on, (1), 2-5.
  27. Senders, J. W. (1983). Visual sampling processes (Doctoral dissertation, Universiteit van Tilburg).
  28. Sheridan, T (1970). "On how often the supervisor should sample". IEEE Transactions on Systems Science and Cybernetics. 2 (6): 140–145. doi:10.1109/TSSC.1970.300289.
  29. Lin, Cheng-Jhe; Wu, Changxu (2012-10-01). "Mathematically modelling the effects of pacing, finger strategies and urgency on numerical typing performance with queuing network model human processor". Ergonomics. 55 (10): 1180–1204. doi:10.1080/00140139.2012.697583. ISSN 0014-0139. PMID 22809389. S2CID 8895779.
  30. Wu, Changxu; Liu, Yili (2008). "Queuing network modeling of the psychological refractory period (PRP)". Psychological Review. 115 (4): 913–954. CiteSeerX 10.1.1.606.7844. doi:10.1037/a0013123. PMID 18954209.
  31. Endsley, M. R. (1995). "Toward a theory of situation awareness in dynamic systems". Human Factors. 37 (1): 85–104.
  32. Shively, R. J., Brickner, M., & Silbiger, J. (1997). A computational model of situational awareness instantiated in MIDAS. Proceedings of the Ninth International Symposium on Aviation Psychology (pp. 1454-1459). Columbus, OH: University of Ohio.
  33. Bundesen, C. (1990). A theory of visual attention. Psychological Review, 97, 523-547.
  34. Samms, C. (2010, September). Improved Performance Research Integration Tool (IMPRINT): Human Performance Modeling for Improved System Design. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 54, No. 7, pp. 624-625). SAGE Publications.
  35. Wu, Changxu; Liu, Yili (2007-09-01). "Queuing Network Modeling of Driver Workload and Performance". IEEE Transactions on Intelligent Transportation Systems. 8 (3): 528–537. doi:10.1109/TITS.2007.903443. ISSN 1524-9050. S2CID 16004384.
  36. Wu, Changxu; Liu, Yili; Quinn-Walsh, C.M. (2008-09-01). "Queuing Network Modeling of a Real-Time Psychophysiological Index of Mental Workload—P300 in Event-Related Potential (ERP)". IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans. 38 (5): 1068–1084. doi:10.1109/TSMCA.2008.2001070. ISSN 1083-4427. S2CID 6629069.
  37. Cao, Shi; Liu, Yili (2013). "Queueing network-adaptive control of thought rational (QN-ACTR): An integrated cognitive architecture for modelling complex cognitive and multi-task performance". International Journal of Human Factors Modelling and Simulation. 4 (1): 63–86. doi:10.1504/ijhfms.2013.055790.
  38. Busemeyer, J. R. (2000) Model Comparisons and Model Selections Based on Generalization Criterion Methodology, Journal of Mathematical Psychology 44, 171-189
  39. Computational Modeling in Cognition: Principles and Practice (2010) by Stephan Lewandowsky and Simon Farrell
  40. "Root-mean-square value". A Dictionary of Physics (6 ed.). Oxford University Press. 2009. ISBN 9780199233991.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.