Infant looking time has been the primary measure of perceptual and cognitive development since Fantz (1958, 1963) first observed that the visual system of young infants can be understood by measuring where and how long infants look at a pair of stimuli presented within their field of view. The basic procedure developed by Fantz has been adapted and refined in order to answer questions about the processes involved in infants’ memory, perceptual abilities, perceptions of social interactions, and much more (see Kellman & Arterberry, 1998, for a review). These adaptations have used dynamic stimuli and static images (e.g., Shaddy & Colombo, 2004; Xiao, Quinn, Wheeler, Pascalis, & Lee, 2014) and stimuli with auditory components (Reynolds, Zhang, & Guy, 2013; Werker, Cohen, Lloyd, Casasola, & Stager, 1998). Some adaptations have tailored the presentation of the stimuli to the infants’ own behavior (Cohen, 1972; Horowitz, Paden, Bhana, & Self, 1972). Many adaptations have also varied the features and nature of the phases of the experiment.

The most important adaptation of this procedure stemmed from the recognition that infants’ looking behavior changes with familiarization (Colombo & Mitchell, 2009; Oakes, 2010). Fantz (1964) first observed this by presenting infants with two items side by side on a series of trials; one item was always the same from trial to trial, and the other item was unique to each trial. Over time, infants’ looking to the repeating stimulus decreased, but their looking to the new item on each trial did not. This led to the recognition that infants’ looking changes with familiarization, and that we can use this behavior to understand how infants perceive and remember the events, objects, and people around them.

This seminal finding resulted in the development of three standard procedures widely used in the literature: visual paired comparison (e.g., Fagan, 1990; Rose, 1983), fixed familiarization (e.g., Kaldy & Leslie, 2005; Plunkett, Hu, & Cohen, 2008; Quinn, Yahr, Kuhn, Slater, & Pascalis, 2002), and habituation of looking time (e.g., Baumgartner & Oakes, 2011; Casasola, Cohen, & Chiarello, 2003; Younger & Cohen, 1983). In each of these procedures, infants initially are presented with some stimulus to study or learn, and then their memory for that now-familiar stimulus is tested by presenting them with both novel and familiar stimuli. The procedures differ in the nature of the familiarization and (to a certain extent) how the stimuli are presented. In the visual paired comparison (VPC) task, infants typically are presented with the initial stimulus for a single trial. Usually, infants are required to accumulate a specific amount of looking at the familiar stimulus during this period (e.g., Rose, 1981). Following this familiarization period, infants are shown a novel and a familiar stimulus presented side by side, and their looking at each stimulus is recorded in order to establish whether the infants show a preference for the novel stimulus.

In the fixed-familiarization variations of the procedure, infants are presented with the stimulus or stimuli over a series of trials, and all infants receive the same number of trials. Following this initial familiarization phase, infants are presented with novel and familiar stimuli (e.g., Quinn et al., 2002). The procedures for habituation of looking time are similar to these fixed-familiarization procedures, except that infants are presented with the familiarization stimulus or stimuli over a series of trials until their looking time decreases to some criterion level, typically 50% of their initial or peak level of looking. After infants’ looking has reached this criterion, they are shown familiar and novel stimuli (Baumgartner & Oakes, 2011; Casasola et al., 2003; Kelly et al., 2009; Younger & Cohen, 1983). Again, the question is whether infants will show increased interest in the novel relative to the familiar stimulus. In both variations (fixed familiarization and habituation), the stimuli can be presented one or two at a time, all trials can be of the same duration or can vary depending on the infants’ level of interest, and infants’ responding to the novel and familiar test stimuli is assessed after the familiarization period.

Over the decades, labs have created a variety of ways to conduct such procedures. Initially, stimuli were presented on cards or physical displays, and infants’ looking time was recorded with a stopwatch (Fagan, 1974; Rose, 1981). Variations of this method are still in use today (e.g., Quinn, Lee, Pascalis, & Tanaka, 2016). As inexpensive computers and display monitors have become widely available, software solutions have become much more commonplace. An example of a basic setup for testing infants is depicted in Fig. 1. The infant is seated with a parent in front of one or more stimulus displays (left), and an observer is seated out of sight (behind the black curtain) viewing the infants’ behavior (right). The observer records infants’ looking using computer software developed for this purpose. Many experiments are written for proprietary platforms, for example E-Prime (e.g., Leppänen, Richmond, Vogel-Farley, Moulson, & Nelson, 2009). Other languages have also been used (for an example in Python, see https://github.com/jfkominsky/PyHab). Often, such solutions are specific to the particular question being tested (Christodoulou, Johnson, Moore, & Moore, 2016), although more general-purpose programs are being developed.

Fig. 1

(Left) An infant in a looking-time procedure, seated on a parent’s lap facing two stimulus displays. (Right) An experimenter observing an infant and recording looking behavior via keypresses. Note that the experimenter is situated behind the stimulus display, and thus is hidden from the infant’s view via the black curtain

Here we describe Habit2, an application designed for configuring and running a wide variety of infant looking-time experiments. It can be configured in a number of ways to replicate the conditions of many different published studies, and it can also be configured in novel ways if a question requires it. Habit2 can be customized to present familiarization stimuli until a specific amount of looking is recorded, a set number of trials has been presented, or a habituation criterion is met. Habit2 is also flexible in the type and number of stimuli presented, how individual trials are defined, and how the habituation criterion is established. In addition, Habit2 can be configured to conduct a preference study without familiarization, similar to Fantz’s (1958, 1963) original studies. Habit2 is installed with several simple, preconfigured sample experiments (reflecting the four standard procedures described above). We explain the Habit2 settings for each in more detail below.

Habit2 is based on an earlier program, Habit, that was originally developed by Harold Chaput and Les Cohen at the University of Texas for Mac OS 9, and that is now obsolete. Habit2 preserves the main functionality of the original Habit software, with several improvements. The most important improvement is that Habit2 runs on modern computers with current operating systems. Habit2 is a rewrite that runs both on 64-bit Intel-based Macs running Mac OS X 10.10 (Yosemite) or later and on 64-bit Windows 7 and 10. Thus, unlike the original Habit software, Habit2 can be used with both the Mac and Windows operating systems. In addition to this significant change, Habit2 incorporates several features that make it more flexible than the original Habit. In particular, as we describe below, the new version of Habit allows more ways to define trials and experimental phases, and thus can be used in a wider range of experimental designs than the original software could.

Habit2 overview

Habit2 has a user-friendly graphical user interface (GUI) that allows a user to configure a wide variety of infant looking-time experiments, save the settings, and share those settings between computers or researchers (see Fig. 2). It was developed to be extremely flexible, so that it can be configured in many different ways to implement preference procedures, familiarization procedures, habituation procedures, VPC procedures, or violation-of-expectation procedures. It can be used to simultaneously present stimuli and record looking times, or to present stimuli in order to allow offline coding of looking behavior. It can be used without stimulus presentation for reliability coding, and can be adapted to record looking during live stimulus presentation (e.g., “puppet shows”). The full details describing all of the possible implementations are provided in the user manual (available at http://habit.ucdavis.edu).

Fig. 2

The GUI that appears when Habit2 is launched

Habit2 allows for flexible configuration of an unlimited number of experimental phases, of how individual looks are defined, of how trials are timed and defined, of what kinds of stimuli are presented, of how many stimulus presentation monitors are used, and so on. Once a user has configured settings for a particular experiment or procedure, those settings are saved locally and can be exchanged between computers. Thus, this information could be stored with other experimental materials in a repository such as the Open Science Framework (OSF.io), to support replication efforts. In addition, existing experimental settings can be used as a template for developing new procedures. Users can make copies of existing experiments and edit those copies in order to quickly and easily construct variations of an experimental configuration. Habit2 comes preloaded with several templates that can be modified to meet the needs of a particular study. Some preferences—such as assigning stimuli to a specific monitor and defining a network drive path to a shared stimulus folder—can be customized on a per-machine basis, to allow labs to use Habit2 experiments in different testing rooms with unique configurations.

When experiments are run, the user acts as the observer, indicating that a subject is looking by pressing and releasing keys on the keyboard. The durations of keypresses are stored by Habit2 as the duration of looking. However, it is also possible to run experiments in Habit2 without indicating any looking during the trials, and to record the looking times offline using a different program. The results of each experimental run are saved locally. The results file includes a copy of the settings for the experiment, a record of keypresses and other timed events generated during the experimental run, and information about the stimuli presented. If the user decides to record looking offline, from recordings of the session, the results file provides an important record of which stimulus or stimuli were presented on each trial. In addition, Habit2 allows the user to view experimental results and export them in different formats for further analysis.

Habit2 saves all experiment settings, results, and log files in the Habit2 workspace. This organizational strategy contributes to the flexibility of the program. Multiple workspaces may be created on a machine, allowing experimenters to keep groups of related experiments together in a single workspace, while allowing for a separation between unrelated groups of experiments. In addition, workspace folders may be copied between machines or shared by different users over a network. Each workspace has its own set of Local Preferences, which are settings that may depend on the specific monitor and file system configurations on a given machine. The workspace, the assignment of monitors as stimulus presentation or “control” monitors, and the stimulus root folder are all selected in the Preferences dialog.

Habit2 is extremely flexible, and users can create and save settings for a wide range of individual experiments. Users can create experiments that present visual stimuli on one to three monitors (a separate control monitor is also required), with or without audio stimuli, or that present audio stimuli alone. Users can specify what information will be available to experimenters during data collection (i.e., the current phase, the stimulus being presented, the looking direction, or no information at all), whether or not an attention-getter is used between trials, and if so, the attention-getting stimulus. Examples of these dialog boxes are presented in Fig. 3.

Fig. 3

The new experiment dialog box (a) and the dialogs for each of the subsections of the experiment settings dialog (b–e)

When an experiment is run, Habit2 interprets looking behavior on the basis of keystrokes made by the experimenter observing the subject: Pressing and holding a key down indicates that the subject is looking at a stimulus, and releasing that key indicates that the subject has looked away. Coders can report looks to the left (by pressing and holding the “4” key), looks to the right (by pressing and holding the “6” key), and looks to the center (by pressing and holding the “5” key). However, Habit2 allows users to set the parameters for when looking should be recorded, as well as the parameters for what counts as a single look. For example, users can set criteria for how long infants must look at the stimulus (the minimum duration of a keypress, or minimum looking time), as well as how long infants must look away from the stimulus (the maximum duration of a key release, or maximum looking-away time), before a look is recorded. These two parameters vary across laboratories (and sometimes even across studies within a laboratory), and thus Habit2 can be configured to record looking time in different ways, depending on the needs and culture of a lab group. In essence, these parameters determine what Habit2 considers to be a single look. Habit2 can be configured such that every keypress and release—no matter how brief—is recorded as a single look. Alternatively, Habit2 can be configured such that very short glances at the stimulus (as well as inadvertent keypresses by experimenters) will not be considered looks. The maximum looking-away time allows the user to determine whether brief looks away between periods of looking will be ignored. Moreover, in Habit2—and unlike in the original version of Habit—users can determine whether or not such brief looks away should be included in the recorded looking time.

Consider some illustrative examples, schematically depicted in Fig. 4. In both examples, the minimum looking time is set to 500 ms, and the maximum looking-away time is set to 200 ms. In the first example (the top diagram in Fig. 4), the infant looks at the stimulus for 2 s and then looks away for (at least) 200 ms. Habit2 would determine that a “look” has occurred after 200 ms of the looking-away period, because the infant had looked for more than the minimum look duration of 500 ms (satisfying Criterion 1), and the looking-away period was greater than 200 ms (satisfying Criterion 2). In this case, Habit2 will report a “complete look” of 2 s.

Fig. 4

Schematic depictions of two look sequences. The smiley face at the top is the stimulus, which is presented for the period indicated by the arrow labeled “stimulus on.” The infant, located at the bottom of the figure, is looking at the stimulus when the nose (triangle) is pointing toward the stimulus and the looking line is elevated to the level of the word “looking.” The infant is looking away when the nose is pointed to the side and the looking line is at the level of the phrase “not looking.” See the text for a detailed description of these examples

In the second example (the bottom diagram in Fig. 4), the infant looks at the stimulus for 2 s, then looks away for 150 ms, then looks back at the stimulus for 1 s, before looking away for 200 ms. In this case, Habit2 would determine that a “look” ended only after 200 ms of the second looking-away period. Although the infant looked away after a look that met the 500-ms minimum looking duration, the duration of this brief looking-away period was only 150 ms, which is shorter than the “maximum looking-away time.” Thus, this looking-away time was too short to trigger the end of the first 2-s “look.” The second looking-away period exceeded the “maximum looking-away time,” so Habit2 considers the look to be complete at that point. Note that when a phase is configured to have trials that end after a “single complete look,” the trial would actually end when the “maximum looking-away time” is exceeded, as indicated in Fig. 4 by the “stimulus off” notation.

In the second example, the recorded duration of the “complete look” will depend on whether Habit2 was configured to include the durations of brief looks away in the total looking time. If so, the “complete look” would be reported as 3,150 ms, because the 150-ms period of nonlooking is included. If Habit2 is configured to exclude those brief durations of looks away, the “complete look” would be reported as being 3,000 ms long, because the short period of nonlooking is not counted as part of the looking time. This flexibility allows Habit2 to instantiate different practices adopted in different lab cultures. Moreover, because experiment settings can be shared in open science repositories, this setting makes it more transparent how different labs define looks in their studies.
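To make the arithmetic of these examples explicit, the sketch below (in Python) computes a “complete look” from a sequence of keypress and key-release durations. The function and parameter names are our own, chosen for illustration; this is not Habit2’s internal implementation, only a minimal rendering of the behavior described above.

```python
def complete_look_ms(intervals, min_look_ms=500, max_away_ms=200,
                     include_brief_aways=True):
    """Return the duration of the first complete look, or None if none completes.

    `intervals` is a list of (look_ms, away_ms) pairs: the duration of each
    keypress (looking) followed by the duration of the key release (looking away).
    """
    total = 0
    started = False
    for look_ms, away_ms in intervals:
        if not started and look_ms < min_look_ms:
            continue                      # too-brief glances never start a look
        started = True
        total += look_ms
        if away_ms > max_away_ms:         # look-away long enough to end the look
            return total
        if include_brief_aways:           # brief look-aways count toward the look
            total += away_ms
    return None                           # the look never "completed"

# Example 1 in Fig. 4: a 2-s look followed by a look-away longer than 200 ms.
print(complete_look_ms([(2000, 250)]))                                           # 2000
# Example 2: 2-s look, 150-ms look-away, 1-s look, then a long look-away.
print(complete_look_ms([(2000, 150), (1000, 250)]))                              # 3150
print(complete_look_ms([(2000, 150), (1000, 250)], include_brief_aways=False))   # 3000
```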

In addition to allowing flexibility in how looking behavior is recorded, Habit2 is also flexible in the number and specifications of the phases in an experiment. An experiment consists of a sequence of one or more phases, and each phase consists of one or more trials. In the original Habit, all experiments included three phases (pretest, habituation, and test), although it was possible to include no trials in a phase, essentially eliminating that phase. In Habit2, an experiment must involve at least one phase, but there is no upper limit on the number of phases an experiment can have. Thus, unlike in the original version of Habit, in Habit2 it is transparent how to specify the exact number of phases needed, and experiments can include more than three phases. Even more importantly, there is no restriction on the nature of any phase in Habit2. In the original Habit, the pretest and test phases were inflexibly set as fixed numbers of trials, although the specific number could be determined by the user; only the “habituation” phase could be altered. In Habit2, however, each phase can be configured in many different ways. A phase must include at least one trial, but there is no upper limit on the number of trials that can be included. In addition, the nature of these phases is flexible (see Fig. 5 for the dialog boxes that allow the user to configure the phases). Although general settings (such as how looking is defined and whether stimuli are presented on one or two monitors) apply to all phases of the experiment, other aspects of individual phases must be configured specifically for each phase. For example, the user specifies for each phase the number of trials and whether the number of trials is fixed (i.e., the same for all infants) or varies as a function of infant behavior. A fixed number of trials would be selected for a fixed familiarization phase (e.g., Quinn et al., 2016), a preference study in which infants’ preferences on a series of trials are evaluated (Ross-Sheehy, Oakes, & Luck, 2003), a violation-of-expectation study involving alternating presentations of possible and impossible events (e.g., Wynn, 1992), or a fixed number of pretest, postfamiliarization, or posthabituation test trials (e.g., Werker, Fennell, Corcoran, & Stager, 2002).

Fig. 5

The three dialog boxes to configure experimental phases: Phase Settings, Trial Settings, and Stimuli

Phase duration and termination can vary according to infants’ behavior, in one of two ways. First, a phase can be a habituation phase, which ends when infants’ looking meets a prespecified criterion, specifically when their duration of looking has decreased by some amount relative to a baseline. Importantly, habituation can be defined in many ways; the user can specify the percent decrease required, the number of trials to use in the habituation calculation, the particular baseline to use, and what type of moving window (or block of trials) to use in the calculation. Therefore, it should be possible to instantiate any habituation criterion that has been published in the literature (see Oakes, 2010, for a review). The second way that phases can vary according to infants’ behavior is in terms of infants’ total looking. In such phases, trials are presented until infants accumulate a specific amount of looking to the stimuli, or until a maximum number of trials is reached. This is how one can use Habit2 to configure typical VPC studies, in which infants are required to accumulate a specific amount of looking to a stimulus before the test stimuli are presented (Rose, 1981).
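As a concrete illustration of the kind of criterion described above, the following minimal sketch (in Python; the names are ours and this is not Habit2’s code) checks whether looking in the most recent block of trials has decreased by a given percentage relative to a baseline total.

```python
def habituation_criterion_met(look_times_s, baseline_total_s,
                              window_size=3, percent_decrease=50):
    """True if total looking in the most recent `window_size` trials has
    dropped by at least `percent_decrease`% relative to the baseline total."""
    if len(look_times_s) < window_size:
        return False
    recent_total = sum(look_times_s[-window_size:])
    return recent_total <= baseline_total_s * (1 - percent_decrease / 100)

# With a 30-s baseline block and a 50% criterion, habituation is reached once
# a block of three trials sums to 15 s or less.
looks = [12.0, 10.0, 8.0, 6.0, 5.0, 3.5]
print(habituation_criterion_met(looks, baseline_total_s=30.0))   # True (14.5 <= 15.0)
```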

In each phase, the parameters that control the length of individual trials in that phase are specified in Trial Settings. Note that this means that trials can be defined differently in each phase of the experiment, providing additional flexibility in experimental design. Trials can be configured to end when infants have accumulated a specific amount of looking, after a single complete look (as defined in the experiment-wide look settings), after infants have shown a specific amount of inattention (looking away from the stimulus—the Continuous Time Inattentive setting), or after a fixed duration, regardless of infants’ looking behavior. Habit2 also allows for flexibility in how time off task (i.e., looking away from the stimulus) between recorded looks is treated (e.g., whether or not it is included as part of the total looking time), whether trials are repeated if infants are inattentive, and so on (complete details about these and all other settings are in the user manual).
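To illustrate how these trial-ending rules might be combined, the sketch below evaluates a trial’s running totals against hypothetical settings (the dictionary keys are our own labels, not Habit2’s setting names); ending a trial after a “single complete look” would additionally rely on look-segmentation logic like that sketched earlier.

```python
def trial_end_reason(total_look_ms, continuous_away_ms, elapsed_ms, settings):
    """Return a reason to end the trial, or None if the trial should continue."""
    if "max_accumulated_look_ms" in settings and \
            total_look_ms >= settings["max_accumulated_look_ms"]:
        return "accumulated looking reached"
    if "continuous_time_inattentive_ms" in settings and \
            continuous_away_ms >= settings["continuous_time_inattentive_ms"]:
        return "continuous time inattentive exceeded"
    if "fixed_duration_ms" in settings and \
            elapsed_ms >= settings["fixed_duration_ms"]:
        return "fixed trial duration elapsed"
    return None

# A 20-s fixed-length trial ends on elapsed time regardless of looking:
print(trial_end_reason(3500, 1200, 20000, {"fixed_duration_ms": 20000}))
```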

Finally, Habit2 has considerable flexibility in the type and number of stimuli that are specified for each phase. The stimuli in Habit2 can be image files, movie files, audio files, or a combination, and are specified by selecting media files on the local machine or on a shared network drive. Audio stimuli (language, music) may be presented simultaneously with visual stimuli or alone (by selecting “background color only” for the visual component of a stimulus). By selecting “Use Independent Sound Stimuli” on the experiment’s Stimulus Display configuration page, users can specify a separate sound file for each visual stimulus. In addition, different types of stimuli can be presented in different phases. Stimulus orders can be specified, or stimuli can be presented in a different random order for each subject.

Examples of instantiating paradigms in Habit2

To illustrate the power and flexibility of Habit2 for conducting infant looking-time studies, we include in Habit2 templates showing how to set up and run five published studies. These studies represent the most common uses of procedures to assess infants’ looking times. These templates can be used to create new experiments or simply to explore the features of Habit2. They can be found by clicking on the “Create new experiment” icon (the big green plus sign) in Habit2 and checking the “Use a template” box. The five templates are (1) Ross-Sheehy, illustrating the settings for the preference task used by Ross-Sheehy, Oakes, and Luck (2003) to assess infants’ visual short-term memory; (2) Rose, illustrating the settings for the VPC procedure often used to study infants’ visual recognition memory (e.g., Rose, 1981); (3) Quinn, illustrating the settings for a familiarization test procedure, such as that used by Quinn et al. (2016) to study infants’ categorization of race; (4) Brannon, illustrating habituation as used by Brannon, Lutz, and Cordes (2006) to assess infants’ sensitivity to changes in the size of objects; and (5) Baumgartner, illustrating habituation as used by Baumgartner and Oakes (2011, Exp. 2) to examine infants’ attention to a correlation between two features of dynamic events. These examples reflect a wide range in how looking is defined, how many phases are involved, and how those phases are configured.

For example, the Ross-Sheehy, Rose, and Quinn templates all involve presenting stimuli on two monitors, so “Dual Monitors” is selected on the Stimulus Display page for each. Both habituation templates, Brannon and Baumgartner, involve presenting only one item on one monitor, so the single-monitor option is selected.

The originally published experiments varied in their use and type of attention-getting stimulus between trials. Ross-Sheehy et al. (2003) and Baumgartner and Oakes (2011) used an attention-getting stimulus between trials. The use of (or lack of) an attention-getter was not specified in the other published studies; however, none of the published experiments that served as models for the templates appears to have simply progressed without any delay between trials. Therefore, we used an attention-getter in all of the templates, but different attention-getters in different templates. To use a beeping, blinking box, as was done by Ross-Sheehy et al. (2003) and Baumgartner and Oakes (2011), we select the “Sound-only attention getter” on the Intertrial Interval page and specify the sound file to be used with the blinking box (see the Habit documentation at https://habit2-docs.readthedocs.io/en/latest/_intertrialinterval.html for a link to instructions for creating such a box). To have a blank screen, we select the “Background color only” option in the Intertrial Interval tab (see the Rose and Brannon templates). With this option, when the experiment is run, Habit2 displays a gray background on each monitor between trials until the experimenter indicates that the next trial should proceed (by pressing the Enter or Return key). This gives the experimenter control over when trials are initiated, but no stimulus is presented between trials. Finally, it is possible to select a visual or audio stimulus to be presented on the monitor(s) during the intertrial interval, as demonstrated in the Quinn template.

The templates differ in how looking duration is recorded. Ross-Sheehy et al. (2003), Rose (1981), and Quinn et al. (2016) recorded all periods of looking during the trial; therefore, in the Look Settings for these templates, the minimum values are selected for the “minimum looking time” (1 ms) and the “maximum looking-away time” (0 ms). With these settings, Habit2 will record every keypress and release as a looking time. Both Brannon et al. (2006) and Baumgartner and Oakes (2011) required minimum looking durations and maximum looking-away times before counting keypresses and key releases as the starting and stopping points of looks. Brannon et al. used a minimum looking time of 500 ms and a maximum looking-away time of 2,000 ms, and Baumgartner and Oakes (Exp. 2) used a minimum looking time of 1,000 ms and a maximum looking-away time of 1,000 ms. In these templates, these values are set in the Look Settings tab.
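For reference, these Look Settings values can be summarized as plain data (illustrative Python only; the keys are our own shorthand, not Habit2 labels):

```python
# Per-template look-definition parameters as described above (illustrative only).
look_settings = {
    "Ross-Sheehy / Rose / Quinn": {"min_look_ms": 1,    "max_away_ms": 0},
    "Brannon":                    {"min_look_ms": 500,  "max_away_ms": 2000},
    "Baumgartner (Exp. 2)":       {"min_look_ms": 1000, "max_away_ms": 1000},
}
print(look_settings["Brannon"])
```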

These published experiments, and their corresponding templates, also differed in the number of phases and how the phases were defined. Ross-Sheehy et al. (2003) included a single phase, with six trials, each 20 s in duration. This single phase contained precisely the same number of trials, all of the same duration, for each infant tested. Thus, in our template, we created a phase with a fixed number of trials and trials of a fixed duration, starting from stimulus onset. All of the other experiments and corresponding templates included two phases—a familiarization or habituation phase followed by a test phase. The familiarization phases of these experiments (and templates) vary. Like the Ross-Sheehy template, the familiarization phase in the Quinn template is simply a fixed number of trials of a fixed duration.

The other templates illustrate how to end a phase on the basis of an infant’s looking. The Rose template includes a familiarization phase consisting of a single trial, and the trial (and phase) ends when the infant has accumulated a specified amount of looking to the stimuli. In Habit2, we achieve this by creating a phase with a fixed number of trials (one) and indicating that the trial ends when the maximum looking time has been reached (by checking the “End trial after maximum looking time” box when creating a new phase; for an existing phase, this corresponds to checking “Use Look Settings” and selecting “Accumulated look time” in the Trial Settings menu). Note that although the original Rose article does not specify a criterion for ending trials if infants do not accumulate the prespecified amount of looking, in Habit2 we can use the “Continuous time inattentive” option on the Trial Settings tab to end the trial or phase after an infant has failed to look for a relatively long time (e.g., 30 s). Thus, in the Rose template, both the phase and trial durations vary depending on how long it takes infants to accumulate the predetermined amount of looking.

The two habituation templates, Brannon and Baumgartner, both have habituation phases in which trials continue until infants’ looking decreases to some criterion level. The Brannon template illustrates how to set up a habituation phase in which infants are shown a single image in a series of habituation trials, and the Baumgartner template illustrates a case in which multiple stimuli are presented during habituation. Experiments differ in the number of trials used to evaluate habituation; Brannon et al. (2006) used blocks of three trials, and Baumgartner and Oakes (2011) used blocks of four trials. Experiments also differ in whether or not the blocks of trials used to evaluate habituation can overlap: sliding windows compare the baseline (e.g., Trials 1–3) to overlapping blocks of trials (e.g., Trials 2–4, 3–5, and 4–6), whereas fixed windows compare the baseline (e.g., Trials 1–4) with nonoverlapping blocks of trials (e.g., Trials 5–8, Trials 9–12, and so on). Using sliding windows allows infants to reach the habituation criterion on any trial. This is appropriate when infants are habituated to only a single stimulus, as in Brannon et al.’s study, but it may create problems when infants are habituated to multiple stimuli, as in Baumgartner and Oakes’s. Specifically, the use of a sliding window with multiple stimuli might result in infants’ having different exposures to—and different familiarity with—one stimulus versus another. Thus, in Habit2 it is also possible to calculate habituation using a fixed window, to make sure that infants see each stimulus the same number of times.
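The difference between the two windowing schemes can be made explicit with a small sketch (illustrative Python; note that trial indices here are zero-based, whereas the text above numbers trials from 1):

```python
def sliding_windows(n_trials, size):
    """Every overlapping block of `size` consecutive trials (e.g., 0-2, 1-3, ...)."""
    return [list(range(start, start + size))
            for start in range(n_trials - size + 1)]

def fixed_windows(n_trials, size):
    """Non-overlapping blocks of `size` consecutive trials (e.g., 0-3, 4-7, ...)."""
    return [list(range(start, start + size))
            for start in range(0, n_trials - size + 1, size)]

print(sliding_windows(6, 3))   # [[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]]
print(fixed_windows(8, 4))     # [[0, 1, 2, 3], [4, 5, 6, 7]]
```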

Experiments can also differ in the decrease in looking required for infants to reach habituation; both Brannon et al. (2006) and Baumgartner and Oakes (2011) used 50%, but Habit2 allows the user to set any percentage decrease as the criterion. Another variation between experiments is what is used as the baseline for calculating habituation. The Brannon template illustrates using as a baseline the first block (or window) of trials in which some minimum amount of looking is accumulated; in Brannon et al.’s study, habituation was evaluated using as a baseline the first block that summed to at least 12 s. The Baumgartner template illustrates using as a baseline the first block of trials, regardless of how much looking is accumulated. In Habit2, it is also possible to use as a baseline the block (or window) of trials that includes the most looking. Finally, experiments differ in the maximum number of trials that can be included in the habituation phase: the maximum is 16 trials in the Brannon template and 20 trials in the Baumgartner template. In each of these templates, the phase ends either when the infant meets the habituation criterion or when the maximum number of trials has been presented, whichever comes first. The point is that Habit2 allows the user to flexibly instantiate many variations of a habituation procedure.
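The baseline options just described can likewise be sketched as a simple selection rule over block totals (again, the names are ours and the code is illustrative, not Habit2’s implementation):

```python
def choose_baseline(block_totals_s, rule="first_block", min_total_s=None):
    """block_totals_s: total looking (in s) for each successive block of trials."""
    if rule == "first_block":            # Baumgartner-style: first block, unconditionally
        return block_totals_s[0]
    if rule == "first_above_minimum":    # Brannon-style: first block with enough looking
        return next(t for t in block_totals_s if t >= min_total_s)
    if rule == "max_block":              # block with the most accumulated looking
        return max(block_totals_s)
    raise ValueError("unknown baseline rule: " + rule)

totals = [9.5, 14.0, 11.0, 6.0]
print(choose_baseline(totals, "first_block"))                   # 9.5
print(choose_baseline(totals, "first_above_minimum", 12.0))     # 14.0
print(choose_baseline(totals, "max_block"))                     # 14.0
```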

Following the familiarization phase, the Rose, Quinn, Brannon, and Baumgartner templates each have a test phase with a fixed number of trials. In the Rose template, there are two test trials with two stimuli each (using the dual-monitor setting), each lasting 5 s and beginning when the infant first looks at one of the stimuli (i.e., when a keypress is detected). The Quinn template has a test phase consisting of two trials, each with two stimuli and 10 s in duration, starting from the onset of the stimuli. The Brannon template includes a test phase of six trials, and the Baumgartner template includes a test phase of four trials; in both of these templates, the trial duration depends on the infants’ looking behavior.

The templates also illustrate differences in how the timing of trials starts and ends. Most of the templates begin timing at the onset of the stimulus; in the Rose template, however, the timing of the trials in the test phase begins when the infant first looks at the stimulus. Many of the templates have one or more phases with fixed-length trials, in which trials continue for a set duration regardless of infants’ looking. This is true for the single phase in the Ross-Sheehy template, for the test phase of the Rose template, and for both phases in the Quinn template. However, Habit2 also allows the user to control trial duration on the basis of infant looking. For example, in Brannon et al. (2006), the presentation of the stimulus on each trial depended on the infant’s looking behavior. Recall that in this template we indicated that a “look” was any looking of more than 500 ms that ended with a period of looking away of at least 2,000 ms. Brannon et al. presented the stimulus on each trial until the infant had looked away for 2,000 ms, or until the infant had looked at the stimulus for 60 s without looking away. Similarly, Baumgartner and Oakes’s (2011) trials began when at least 1 s of looking had been accumulated, and they ended when the infant looked away for 1 s or when 35 s had elapsed.
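For instance, the timing rules reported for the Brannon and Baumgartner studies could be expressed as follows (a hedged sketch using the parameter values given above; the function names are our own):

```python
def brannon_trial_over(continuous_away_ms, continuous_look_ms):
    # Stimulus stays on until a 2,000-ms look-away or 60 s of continuous looking.
    return continuous_away_ms >= 2000 or continuous_look_ms >= 60000

def baumgartner_trial_started(accumulated_look_ms):
    # Trial timing begins once at least 1 s of looking has accumulated.
    return accumulated_look_ms >= 1000

def baumgartner_trial_over(continuous_away_ms, elapsed_ms):
    # Trial ends after a 1-s look-away or once 35 s have elapsed.
    return continuous_away_ms >= 1000 or elapsed_ms >= 35000

print(brannon_trial_over(continuous_away_ms=2100, continuous_look_ms=12000))   # True
print(baumgartner_trial_over(continuous_away_ms=400, elapsed_ms=36000))        # True
```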

Habit2 also includes other features that vary across procedures and labs. For example, Habit2 allows the user to specify whether trials should be repeated if no looking is recorded within a defined period of time. The Ross-Sheehy template illustrates how to instantiate the criterion used by Ross-Sheehy et al. (2003) that trials must include some recorded looking; otherwise, the trial is repeated. This is achieved in Habit2 by using the “Max initial time inattentive” setting and entering the amount of time that can elapse without any looking before the trial is terminated and repeated. Similarly, this feature allowed us to instantiate in the Baumgartner template the condition that trials were repeated if the infant did not look during the first 10 s of the trial. These examples illustrate many of the most common features of experiments that can be instantiated in Habit2, but many other features and settings give researchers nearly endless possibilities when designing experiments.

Summary

We have described here a stand-alone software solution for conducting experiments examining infants’ looking times to stimuli. The program, Habit2, is powerful and flexible and can be used to implement a wide variety of experimental designs and procedures. Importantly, the software can be customized with parameters that reflect the practices of a particular area of research, rather than imposing preset parameters that reflect a particular lab culture. In addition, Habit2 can present static images, dynamic movies, and sound files, and can present a combination of those files within a single experiment. It can present visual stimuli on one to three monitors, allowing a variety of stimulus configurations to be used. As a result, Habit2 is effective for studies of infants’ basic perceptual or memory abilities, as well as for studies of linking sounds to visual stimuli, word learning, discrimination of emotional stimuli, perception and recognition of complex physical and social events, and much more.

Author note

This research and preparation of this manuscript were made possible by NIH Grant R01EY022525, awarded to L.M.O. D.S. was supported by NIH Vision Research Core Grant P30EY012576. L.M.C. was supported by NIH Training Grant T32EY015387. Open practices statement: The software described here is freely available at https://habit.ucdavis.edu.