Abstract

The purpose of this online study was to examine the effectiveness of concurrent positive and negative visual feedback on the performance of a rotary-pursuit task. One hundred and nine physical education students were randomly assigned to three groups: a positive feedback group (n = 37), a negative feedback group (n = 35), and a control group (no feedback; n = 37). The students participated from their own home computers and performed an easy, a moderate, and a difficult rotary-pursuit task. On Day 1, the participants performed a pre-test with no feedback and practiced eight trials of each level of difficulty with the assigned feedback. On Day 2, they practiced eight trials of each level of difficulty again. On Day 3, they practiced eight trials of each level of difficulty with feedback and performed a post-test with no feedback. Finally, the participants were asked to report their subjective assessment of the task difficulty. The main findings were that in the task of moderate difficulty, negative feedback led to the best performance during practice. In addition, regardless of the difficulty level, practicing with negative feedback led to the best performance in the post-test. The results suggest that task difficulty moderates the effects of feedback on performance and that providing concurrent negative visual feedback in a continuous task may be more advantageous than providing positive feedback.

1 Introduction

Feedback – the information learners receive as they try to produce a motor action – is one of the most important features of practice in motor learning (Schmidt et al., 2018). Feedback is often intrinsic and represents information that performers of the motor actions receive from their own senses but can also be extrinsic when it is received from other individuals (e.g., coaches, teachers, instructors) or from external devices (e.g., computers, video screens). Extrinsic feedback can augment intrinsic feedback and provide valuable information for the performer of the action and, therefore, it is often referred to as augmented feedback (Schmidt et al., 2018).

Augmented feedback can be delivered in several ways. For example, terminal feedback is provided after a motor action is completed while concurrent feedback is given while the action is in progress (Sattelmayer et al., 2016). In general, Sigrist et al. (2012) suggested that concurrent feedback enhances acquisition performance but not learning. Sattelmayer et al. (2016) added that there is some (although limited) evidence to show that terminal feedback can be more beneficial in a transfer test, compared with concurrent feedback. A possible explanation for the suggested superiority of terminal feedback over concurrent feedback in a transfer test is the guidance hypothesis. According to this hypothesis, learners who practice with concurrent or frequent feedback may become dependent on it, and thus, when feedback is removed in a transfer test, their performance suffers (Salmoni et al., 1984; Wulf & Shea, 2002).

A number of variables can moderate the benefits of either terminal or concurrent feedback. For example, regardless of the timing of the feedback, it can be either positive – emphasizing what the learner did well, or negative – emphasizing what the learner did wrong. Studies on terminal feedback usually show that providing the learner with positive feedback leads to better performance and learning compared with negative feedback (e.g., Badami et al., 2012; Chiviacowsky et al., 2009; Chiviacowsky & Wulf, 2007; Saemi et al., 2011; Saemi et al., 2012; but see Halperin et al., 2020, for improved performance when receiving negative feedback). However, not much is known about the differences between positive and negative concurrent feedback.

To explain potential disparities between positive and negative feedback, we can examine the literature on rewards and punishments in motor learning. Individuals often perceive positive feedback as rewarding and negative feedback as punishing. From a behaviorist standpoint, however, this perception is not universal, because what constitutes reward and punishment is subjective: what is rewarding for one individual might not hold the same value for another (Lohse et al., 2019). Rewards can be extrinsic (e.g., money) or intrinsic (e.g., a feeling of competence) (Lohse et al., 2019). Providing positive feedback can, for example, lead to elevated feelings of competence, and thus motivation.

Positive feedback, when rewarding to an individual, facilitates a form of learning known as reward-based learning, which is processed in the motor cortex via neurons that release dopamine (Beierholm et al., 2013). Consequently, the use of rewards should potentially enhance longer-term retention (Shmuelof et al., 2012). In contrast, when individuals face punishments, the learning process relies on movement errors that are processed in the cerebellum (but may have little effect on delayed retention) (Galea et al., 2010), and so negative feedback can improve adaptation during training. The abovementioned literature indicates that rewards and punishments affect learning and memory retention differently (Galea et al., 2015), and understanding these mechanisms can offer valuable insights into the roles positive and negative feedback play in influencing performance and learning.

Moreover, the effects of positive and negative concurrent feedback may be moderated by task difficulty. Indeed, task difficulty has been shown to moderate the effects of terminal feedback (e.g., Guadagnoli et al., 1996; Sidaway et al., 2012). In children, for example, providing feedback after 33% of trials resulted in improved learning of an easy task, but in a more difficult task, learning was better facilitated when feedback was given after 100% of the trials (Sidaway et al., 2012). In another study on adults (Guadagnoli et al., 1996, Exp. 2), summary feedback after 5 or 15 trials led to better retention performance of an easy task compared with feedback after each trial. However, in a complex task, feedback after each trial led to better retention performance than summary feedback after 15 trials. Finally, in a study that used concurrent feedback (Wulf et al., 1998, Exp. 2), more frequent feedback led to better learning of a complex task.

Our purpose in the current study was to examine the effects of positive and negative concurrent feedback on performance of a continuous tracking task of various difficulties. We chose to examine concurrent feedback for two reasons. First, the literature on the relationships between positive and negative concurrent feedback, task difficulty, and motor performance is limited. Second, concurrent feedback is used in various continuous sporting activities. For example, runners receive feedback from their coaches while running and Formula 1 drivers get real-time feedback from their engineers during races as they drive. Consequently, studying concurrent feedback is of both theoretical and practical significance. We hypothesized that: (1) task difficulty would moderate the effect of positive and negative feedback on performance, and (2) based on the literature on terminal feedback, positive feedback would lead to improved performance compared with negative feedback.

2 Methods

This study was conducted online on a cloud-based platform (Gorilla.sc, Anwyl-Irvine et al., 2019). This platform allows for creating online studies in which participants participate from their own computer. All raw data is available in an online repository (https://doi.org/10.17605/OSF.IO/QTZW8).

2.1 Participants

The required sample size was calculated using G*Power (Faul et al., 2007) to detect a between-factor effect in a two-way Analysis of Variance (ANOVA) (three groups with three levels of difficulty) with repeated measures on the Difficulty factor. In previous studies on positive and negative feedback, researchers usually found moderate to large effect sizes (e.g., \(\eta^2_p\) > .28 in Chiviacowsky & Drews, 2016; Cohen’s d > 0.7 in Abbas & North, 2018; and Cohen’s d > 0.50 in a continuous task in Goudini et al., 2018). Therefore, we used a moderate effect size (Cohen’s f = 0.25) for our power analysis. The following values were used for the calculation: Cohen’s f = 0.25, alpha = .05, number of groups = 3, number of measurements = 3, correlation among repeated measures = .5, required power = .80. We based the power analysis on the between factor (Group effect) rather than on the Group X Difficulty interaction because group differences per se were of interest and because previous effect sizes were available only for a between factor. Because the study was powered to detect main effects, findings concerning interactions may have been underpowered and should be regarded as exploratory.
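
For readers who wish to reproduce this calculation, the sketch below approximates it in Python with scipy. It is an illustration of the standard noncentral-F approach, not the exact G*Power algorithm: the hypothetical `between_factor_power` helper applies the usual repeated-measures adjustment to the noncentrality parameter, and its result may differ from G*Power's output by a few participants due to rounding conventions.

```python
from scipy.stats import f as f_dist, ncf

def between_factor_power(n_total, f_effect=0.25, k_groups=3, n_meas=3,
                         rho=0.5, alpha=0.05):
    """Approximate power of the between-subjects test in a design with one
    between factor and n_meas correlated repeated measurements.

    Averaging over correlated measurements inflates the noncentrality
    parameter by n_meas / (1 + (n_meas - 1) * rho).
    """
    lam = f_effect ** 2 * n_total * n_meas / (1 + (n_meas - 1) * rho)
    df1, df2 = k_groups - 1, n_total - k_groups
    crit = f_dist.ppf(1 - alpha, df1, df2)      # critical F under H0
    return float(ncf.sf(crit, df1, df2, lam))   # P(F' > crit) under H1

# Smallest total N (in three equal groups) reaching 80% power
n = 9
while between_factor_power(n) < 0.80:
    n += 3
print(n)  # in the neighborhood of the 108 participants reported above
```

The same loop with other effect sizes or correlations shows how sensitive the required N is to the assumed ρ among the repeated measures.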

The calculation showed that 108 participants were required to achieve this statistical power. Out of 113 physical education students who participated, 109 were included in the analysis. Data from four participants were discarded because their data suggested that they did not engage with the task (see the Data Analyses section for details). The participants were randomly assigned by the software to three groups: (a) a positive feedback group (n = 37; 20 females), (b) a negative feedback group (n = 35; 18 females), and (c) a control group (n = 37; 19 females). All participants read an online informed consent form and checked a box stating that they agreed to participate. The study was approved by the Ethics Committee of the Academic College at Wingate (Approval # 321).

2.2 Task

We used a computerized rotary-pursuit task in which participants were asked to use their computer mouse to follow a small circle that moved on the circumference of a larger circle with a radius of 250 pixels; the radius of the small circle differed between conditions. Three task difficulties were used: (a) an easy task – radius of the small circle = 40 pixels, (b) a moderate-difficulty task – radius = 30 pixels, and (c) a difficult task – radius = 20 pixels, as can be seen in Figure 1. Each trial lasted 12 seconds, during which the small circle the participants were asked to follow encircled the circumference of the larger circle three times (four seconds per orbit). The sizes of the circles in each of the tasks were based on a pilot study of 10 participants who practiced with various radii. For the easy task, the chosen radius led to 60-70% success (time the cursor was on target out of the total 12 seconds of the trial); for the moderate-difficulty task – 45-55%; and for the difficult task – 20-30%.
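
The on/off-target judgment in this task reduces to simple circle geometry. The sketch below is a hypothetical reconstruction (the original Gorilla.sc implementation is not published with the paper): it computes the target's position at time t and whether a cursor sample counts toward time-on-target.

```python
import math

BIG_R = 250        # radius of the large circle, in pixels
ORBIT_SECONDS = 4  # one full orbit every 4 s; three orbits per 12 s trial

def target_center(t):
    """Center of the small target circle t seconds into a trial,
    relative to the large circle's center."""
    theta = 2 * math.pi * t / ORBIT_SECONDS
    return BIG_R * math.cos(theta), BIG_R * math.sin(theta)

def cursor_on_target(cursor_xy, t, small_r):
    """True if the cursor lies inside the target (small_r = 40/30/20 px)."""
    tx, ty = target_center(t)
    return math.hypot(cursor_xy[0] - tx, cursor_xy[1] - ty) <= small_r
```

Sampling `cursor_on_target` at the display's refresh rate and multiplying the hit count by the frame duration yields a 0-12 s time-on-target score of the kind analyzed below.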

Figure 1: An illustration of the task for (a) the negative feedback group, in which a red square with the word off appears when the cursor is not on the circle, (b) the positive feedback group, in which a green square with the word on appears when the cursor is on the circle, and (c) the control group, which received no feedback.

2.3 Procedure

The study was completed in a series of three sessions. A link to the study’s website was sent to each of the participants, and participation took place on each participant’s personal computer. In Session 1, the participants read an online consent form and agreed to participate in the study. Then, they performed a pre-test that included five trials from each of the three levels of difficulty with no feedback. The acquisition phase began after the pre-test. The participants performed eight trials, with a five-second rest between trials, from each of the three difficulties with feedback based on group assignment (see Figure 1), for a total of 24 trials. In the control group, the mouse cursor was visible, enabling participants to track the target’s position and determine whether the cursor was precisely on or off the target. Therefore, the difference between groups was only in the valence or intensity of the feedback. The participants in the positive feedback group observed a prominent green square with the word on in its center when the cursor was on target. Conversely, the participants in the negative feedback group observed a prominent red square with the word off in its center when the cursor deviated from the target.

Session 2 took place 24-48 hours after Session 1 and included 24 trials – eight trials from each level of difficulty. Finally, in Session 3, which took place 24-48 hours after Session 2, the participants again performed the 24 trials and, in addition, performed a post-test that was similar to the pre-test. The order of the tasks in the pre-test, the acquisition sessions, and the post-test was counterbalanced. In addition, after completing eight trials from each difficulty level during the three acquisition sessions, the participants were asked to report their subjective assessment of the task difficulty on a scale of 1 (very easy) to 10 (very difficult).
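
One simple way to counterbalance the order of the three difficulty levels is to cycle participants through the six possible orderings. The scheme below is assumed for illustration only; the paper does not state which counterbalancing scheme the software applied.

```python
from itertools import permutations

# All 3! = 6 possible orders of the three difficulty levels
ORDERS = list(permutations(["easy", "moderate", "hard"]))

def order_for(participant_index):
    """Assign orders cyclically so each order is used equally often."""
    return ORDERS[participant_index % len(ORDERS)]
```

With 108 planned participants (a multiple of six), each ordering would be used exactly 18 times under this scheme.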

2.4 Data Analyses

The main dependent variable was time-on-target, which could range between 0 and 12 seconds in each trial (each trial lasted 12 seconds). Four participants were removed from the study because their time-on-target values were shorter than the value that would have been recorded had the cursor not moved at all (when the cursor does not move, the small circle still passes over it three times as it travels around the circumference of the large circle).
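
The threshold implied by this exclusion rule can be reconstructed by simulation: fix the cursor at a point on the large circle's circumference and accumulate the time the moving target overlaps it. The sketch below is our reconstruction under the assumed geometry (4 s orbit on a 250-pixel circle); the paper does not report the exact threshold values.

```python
import math

def stationary_floor(small_r, big_r=250, trial_s=12.0, orbit_s=4.0, dt=0.001):
    """Simulated time-on-target for a cursor that never moves and sits on
    the large circle's circumference (opposite the target's start)."""
    cx, cy = -big_r, 0.0
    on_steps = 0
    steps = int(trial_s / dt)
    for i in range(steps):
        theta = 2 * math.pi * (i * dt) / orbit_s
        tx, ty = big_r * math.cos(theta), big_r * math.sin(theta)
        if math.hypot(cx - tx, cy - ty) <= small_r:
            on_steps += 1
    return on_steps * dt

# e.g. roughly 0.6 s for the 40-px easy task and 0.3 s for the 20-px hard
# task; scores below these floors suggest the participant did not engage
```

Analytically the floor is about 12·r/(π·250) seconds for a small-circle radius r, matching the simulation.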

To examine differences in time-on-target in the pre-test, we conducted a two-way ANOVA (Group [positive feedback/negative feedback/control] X Difficulty [easy, moderate, hard]) with repeated measures on the Difficulty factor. To examine differences in subjective assessment of task difficulty, we conducted a three-way ANOVA (Group [positive feedback/negative feedback/control] X Difficulty [easy, moderate, hard] X Session [Day 1/2/3]) with repeated measures on the two latter factors. To examine differences in the post-test, we conducted a two-way ANCOVA (Group X Difficulty) with repeated measures on the Difficulty factor and with the pre-test times-on-target as covariates. To examine performance during acquisition, we conducted a three-way ANOVA (Group X Difficulty X Session) with repeated measures on the two latter factors. We chose ANCOVA for the post-test, but ANOVA for the acquisition trials, because of the similarity between the pre-test and post-test: both assessments comprised an identical number of trials and provided no feedback to participants. Therefore, to address variations between groups during the pre-test, we opted for ANCOVA in the post-test analysis. In contrast, the acquisition trials differed between groups in terms of feedback type, setting them apart from the pre-test trials; as a result, we decided not to include the pre-test as a covariate in that analysis.

Whenever necessary, we used the Greenhouse-Geisser correction for violation of the assumption of sphericity. Holm-Bonferroni post-hoc analyses were conducted for all significant findings, and partial eta-squared or Cohen’s d were used as effect sizes to match the relevant statistical test. Alpha for all analyses was set at .05. All statistical analyses were conducted in JASP (JASP Team, 2020), and R (R Core Team, 2020).
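
The Holm-Bonferroni step-down procedure used for the post-hoc comparisons can be stated in a few lines. The sketch below is a generic implementation for illustration, not the JASP routine.

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Return a list of booleans indicating which hypotheses are rejected.

    P-values are tested smallest-first against alpha / (m - rank); testing
    stops at the first failure. This controls the familywise error rate
    while being uniformly more powerful than plain Bonferroni.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break  # all larger p-values are retained as well
    return rejected
```

For example, three pairwise comparisons with p-values .01, .04, and .03 reject only the first: .01 passes its threshold of .05/3 ≈ .0167, but .03 fails its threshold of .025, stopping the procedure.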

3 Results

Time-on-target durations for all groups and for all conditions are presented in Figure 2.

Figure 2: Time-on-target durations for all experimental groups in the easy task (a), the moderate difficulty task (b), and the hard task (c). Error bars represent 95% confidence intervals.

3.1 Time on Target during the Pre-test

A two-way ANOVA (Group X Difficulty) with repeated measures on the Difficulty factor revealed a Group effect, F(2, 106) = 4.29, p = .016, \(\eta^2_p\) = .08. A Bonferroni post-hoc analysis found a difference between the negative feedback group (6.3 ± 1.7 s) and the control group (5.2 ± 1.5 s, Cohen’s d = 0.61), but neither group differed from the positive feedback group (5.5 ± 1.5 s). There was also a Difficulty effect, F(2, 212) = 776.92, p < .01, \(\eta^2_p\) = .88. As expected, times-on-target differed significantly between all difficulties. There was no Group X Difficulty interaction, F(4, 212) = 0.71, p = .59, \(\eta^2_p\) = .01.

3.2 Time on Target Differences between Pre-test and Post-test

A two-way ANOVA (Test X Difficulty) with repeated measures on both factors revealed a Test effect, F(1, 106) = 14.17, p < .01, \(\eta^2_p\) = .12. Time-on-target was longer in the post-test (6.2 ± 1.7 s) compared to the pre-test (5.7 ± 1.7 s, Cohen’s d = 0.30). There was also a Difficulty effect, F(1.79, 189.68) = 1,356.19, p < .01, \(\eta^2_p\) = .93. A Bonferroni post-hoc analysis showed that all three difficulties differed significantly (3.6, 6.2, and 8.1 s for the hard, moderate, and easy tasks, respectively; all Cohen’s d > 1.00). There was no Test X Difficulty interaction, F(2, 212) = 1.29, p = .28, \(\eta^2_p\) = .01.

3.3 Time on Target during Acquisition

A three-way ANOVA (Group X Difficulty X Session) revealed a Group effect, F(2, 102) = 8.31, p < .001, \(\eta^2_p\) = .14, a Difficulty effect, F(2, 408) = 1,993.09, p < .001, \(\eta^2_p\) = .95, and a Group X Difficulty interaction, F(4, 408) = 2.48, p = .045, \(\eta^2_p\) = .05. To find the source of this interaction, we averaged time-on-target across all sessions in each difficulty level and conducted one-way ANOVAs between groups at each level of difficulty. All three ANOVAs were significant: hard difficulty, F(2, 106) = 6.25, p < .01, \(\eta^2_p\) = .11; moderate difficulty, F(2, 106) = 7.62, p < .01, \(\eta^2_p\) = .13; easy difficulty, F(2, 106) = 7.03, p < .01, \(\eta^2_p\) = .12. Bonferroni post-hoc analyses were conducted to reveal the source of the interaction.

In the hard level of difficulty, the only significant difference was between the negative feedback group (4.4 ± 1.7 s) and the control group (3.3 ± 1.1 s; Cohen’s d = 0.83). Time-on-target in the positive feedback group (3.8 ± 1.3 s) did not differ from the control group (Cohen’s d = 0.41) or the negative feedback group (Cohen’s d = 0.42).

Similarly, in the easy level of difficulty, the only significant difference was between the negative feedback group (9.1 ± 1.4 s) and the control group (7.8 ± 1.6 s; Cohen’s d = 0.88). Time-on-target in the positive feedback group (8.4 ± 1.7 s) did not differ from the control group (Cohen’s d = 0.40) or the negative feedback group (Cohen’s d = 0.48).

In the moderate level of difficulty, however, the negative feedback group (7.3 ± 1.8 s) stayed on target significantly longer than both the positive feedback group (6.3 ± 1.7 s; Cohen’s d = 0.61) and the control group (5.8 ± 1.7 s; Cohen’s d = 0.91). There was no difference between the positive feedback group and the control group (Cohen’s d = 0.30) (see Figure 3).

Figure 3: The Group X Task interaction. In the moderate difficulty, differences were significant between all three groups. In the hard and easy difficulties, differences were significant only between the negative feedback group and the control group. * p < .05; Error bars represent the standard error.

There was no Session effect, F(2, 204) = 2.04, p = .13, \(\eta^2_p\) = .02, no Group X Session interaction, F(4, 408) = 0.88, p = .48, \(\eta^2_p\) = .02, no Difficulty X Session interaction, F(4, 408) = 0.99, p = .41, \(\eta^2_p\) = .01, and no Group X Difficulty X Session interaction, F(8, 408) = 1.06, p = .39, \(\eta^2_p\) = .02.

3.4 Time on Target in the Post-test

A two-way ANCOVA (Group X Difficulty) with pre-test time-on-target in the three difficulties as covariates revealed a Group main effect, F(2, 101) = 3.74, p = .03, \(\eta^2_p\) = .07. A Holm-Bonferroni post-hoc analysis showed that the participants in the negative feedback group stayed on target for longer durations (6.9 ± 1.5 s) compared with the participants in the control group (5.5 ± 1.6 s, p = .04, Cohen’s d = 0.56), but not compared with the participants in the positive feedback group (6.3 ± 1.7 s, p > .99, Cohen’s d = 0.13). The difference between the positive feedback group and the control group was not significant, although the effect size was moderate (p = .08, Cohen’s d = 0.44).

There was also a Difficulty effect, F(1.76, 177.39) = 15.88, p < .001, \(\eta^2_p\) = .14. Time-on-target was longest in the easy condition (8.3 ± 2.0 s), compared with the moderate-difficulty condition (6.5 ± 1.9 s; Cohen’s d = 1.25) and the hard condition (3.8 ± 1.4 s; Cohen’s d = 3.10). There was also a significant difference between the moderate and hard difficulties (Cohen’s d = 1.85). There was no Group X Difficulty interaction, F(3.51, 177.39) = 0.49, p = .72, \(\eta^2_p\) = .01.

3.5 Subjective Perception of Difficulty

A three-way ANOVA (Group X Difficulty X Session) revealed a main effect for Session, F(2, 424) = 11.01, p < .001, \(\eta^2_p\) = .09. Holm-Bonferroni post-hoc analyses revealed that the perception of difficulty in Session 1 (5.8 ± 1.8) was higher than in Session 2 (5.5 ± 1.7; Cohen’s d = 0.13) and in Session 3 (5.3 ± 1.9; Cohen’s d = 0.23). There was also a difference between Sessions 2 and 3 (Cohen’s d = 0.10).

There was no Group effect, F(2, 106) = 2.38, p = .10, \(\eta^2_p\) = .04, and no Difficulty effect, F(2, 424) = 0.37, p = .69, \(\eta^2_p\) = .00. There was also no Group X Difficulty interaction, F(4, 424) = 0.29, p = .88, \(\eta^2_p\) = .01, no Group X Session interaction, F(4, 424) = 0.45, p = .77, \(\eta^2_p\) = .01, no Difficulty X Session interaction, F(3.68, 389.52) = 0.61, p = .64, \(\eta^2_p\) = .01, and no Group X Difficulty X Session interaction, F(7.35, 389.52) = 0.46, p = .87, \(\eta^2_p\) = .01.

4 Discussion

The purpose of the current study was to examine the effectiveness of concurrent positive and negative feedback on the performance of an easy, a moderate, and a difficult rotary-pursuit task. We hypothesized that difficulty would moderate the effects of feedback on performance and that positive feedback would be more beneficial than negative feedback. Our findings partially supported our first hypothesis. During practice, in the moderate-difficulty task, the participants who received negative feedback outperformed both of the other study groups (positive feedback and control – no feedback). In the easy and difficult tasks, the participants who received negative feedback outperformed the participants who received no feedback, but not those who received positive feedback.

Our data did not support the second hypothesis of superiority of positive feedback. In fact, negative feedback was superior to positive feedback. When augmented feedback is given after a trial or after a block of trials, it is usually positive feedback that leads to improved learning (e.g., Badami et al., 2012; Chiviacowsky et al., 2009; Chiviacowsky & Wulf, 2007; Saemi et al., 2011; Saemi et al., 2012). However, in the current study we used concurrent feedback. Sigrist et al. (2012) suggested that concurrent feedback may enhance acquisition performance but not learning. The literature, however, produces mixed results. For example, Walsh et al. (2009) showed that, compared with terminal feedback, providing feedback during task performance led to reduced performance in a transfer test. Similarly, Schmidt & Wulf (1997) showed that continuous concurrent feedback during acquisition interfered with retention performance. When learning a complex rowing task, terminal feedback was also better than concurrent feedback (Sigrist et al., 2013). In contrast to the abovementioned findings, several studies have shown that concurrent feedback can be beneficial (Hinder et al., 2009; Saijo & Gomi, 2010; Wulf et al., 1998).

In the current study, we provided either positive or negative concurrent visual feedback. Our findings suggest that task difficulty moderates the effects of concurrent visual feedback on performance. Negative feedback led to better performance compared with positive feedback in a task of moderate difficulty, but not in the easy or hard tasks. It is possible that the easy task did not require much effort, whereas the hard task was too difficult for the feedback manipulation to assist. Task difficulty is an important factor when researching learning strategies. In this study, a task that led to ~55% success (time-on-target = ~6.5 seconds in each 12-second trial) was able to differentiate between the feedback manipulations, while a task that led to ~70% success (time-on-target = ~8.4 seconds) did not. In addition, ~32% success (time-on-target = ~3.8 seconds) exposed a difference between feedback and no feedback, but not between the two types of feedback (negative and positive). However, when feedback was removed in the post-test, regardless of the task difficulty, the participants who practiced with negative feedback outperformed those who practiced with no feedback.

One possible explanation for the benefits of negative feedback compared with positive feedback is the timing of the feedback. Negative feedback appeared when the cursor was off the target, and therefore alerted participants to correct their movements. In contrast, positive feedback appeared when the cursor was on the target, and thus may have inadvertently shifted participants’ visual attention from the task to the visual feedback – a maladaptive shift from top-down (goal-directed) to bottom-up (stimulus-driven) visual attention. To examine this possible explanation, future studies may add eye-tracking measures that can reveal participants’ foveal vision throughout the performance of the task.

Another potential explanation for the advantages associated with negative feedback in the current study is that, as Galea et al. (2015) showed, negative feedback can accelerate learning. Negative feedback can increase cerebellar sensitivity to the discrepancy between the expected and perceived location of the mouse cursor (see, for example, Tseng et al., 2007, who showed that sensory errors alone are sufficient for learning). In contrast, Abe et al. (2011) found no immediate disparity in performance of a motor task following learning with reward versus punishment. However, rewards notably enhanced long-term retention at six hours, 24 hours, and even 30 days post-training. Considering that in the current study the post-test was conducted immediately after training, it is plausible that the benefits associated with positive feedback had not yet materialized.

An important finding in the current study is the discrepancy between actual task difficulty and the subjective perception of difficulty. While the results showed clear differences in performance between the hard, moderate, and easy tasks, the participants did not perceive these differences and rated all tasks similarly. The only difference in perceived difficulty was between Session 1 and Sessions 2 and 3. One possible explanation for this pattern is the online methodology we used. Participants in such studies can answer questionnaires halfheartedly without the researcher being able to notice it. For the performance variable (i.e., time-on-target), we were able to apply some form of quality control by excluding values that clearly suggested a participant performed the task inattentively. This is more difficult to do when participants answer a questionnaire and is a limitation of this study. One way to tackle this problem in future studies is to use attention checks or instructional manipulation checks (e.g., Oppenheimer et al., 2009). A second limitation is that the study was powered to detect group main effects and may therefore have lacked the power to detect a Group X Difficulty interaction.

Finally, a third limitation pertains to the baseline differences observed between the negative feedback group and the control group. These baseline differences could account for at least some of the subsequent differences noted during acquisition and the post-test. It is noteworthy, however, that baseline differences were identified solely between the negative feedback group and the control group, not between the negative feedback group and the positive feedback group. Consequently, differences found during acquisition in the moderate difficulty between the negative and positive feedback groups were less likely to be influenced by these baseline distinctions. Additionally, we conducted the analysis of the post-test while considering the pre-test values as covariates. Although ANCOVA is not always advised, especially when baseline differences are not due to chance (Jamieson, 2004; Miller & Chapman, 2001), in the current study, group assignments were randomized by the software without any involvement of the researchers. Under such conditions, where baseline differences are likely due to chance, ANCOVA can be used to remove variance associated with pre-test values from the post-test (Jamieson, 2004).

In summary, the results of the current study showed that negative concurrent visual feedback might be more beneficial than positive concurrent visual feedback in the performance of a visual tracking task, depending on task difficulty. When feedback was removed during the post-test, those participants who practiced with negative feedback outperformed those who practiced with no feedback. A likely explanation for this finding is that concurrent positive feedback shifts participants’ visual attention off task. This can be verified in future studies with the use of eye trackers.

5 Additional Information

5.1 Data Accessibility

The raw data for this project is available in an online repository (https://doi.org/10.17605/OSF.IO/QTZW8).

5.2 Author Contributions

  • Contributed to conception and design: GZ, CO
  • Contributed to acquisition of data: CO
  • Contributed to analysis and interpretation of data: GZ, CO
  • Drafted and/or revised the article: GZ
  • Approved the submitted version for publication: GZ

5.3 Conflict of Interest

Authors have no conflicts of interest to declare.

5.4 Funding

The authors did not receive any funding for this project.

5.5 Acknowledgments

The authors have no individuals related to this project to acknowledge.

6 References

Abbas, Z. A., & North, J. S. (2018). Good- vs. poor-trial feedback in motor learning: The role of self-efficacy and intrinsic motivation across levels of task difficulty. Learning and Instruction, 55, 105–112. https://doi.org/10.1016/j.learninstruc.2017.09.009
Abe, M., Schambra, H., Wassermann, E. M., Luckenbaugh, D., Schweighofer, N., & Cohen, L. G. (2011). Reward improves long-term retention of a motor memory through induction of offline memory gains. Current Biology, 21(7), 557–562. https://doi.org/10.1016/j.cub.2011.02.030
Anwyl-Irvine, A. L., Massonnié, J., Flitton, A., Kirkham, N., & Evershed, J. K. (2019). Gorilla in our midst: An online behavioral experiment builder. Behavior Research Methods, 52(1), 388–407. https://doi.org/10.3758/s13428-019-01237-x
Badami, R., Vaezmousavi, M., Wulf, G., & Namazizadeh, M. (2012). Feedback About More Accurate Versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation. Research Quarterly for Exercise and Sport, 83(2), 196–203. https://doi.org/10.5641/027013612800745275
Beierholm, U., Guitart-Masip, M., Economides, M., Chowdhury, R., Düzel, E., Dolan, R., & Dayan, P. (2013). Dopamine Modulates Reward-Related Vigor. Neuropsychopharmacology, 38(8), 1495–1503. https://doi.org/10.1038/npp.2013.48
Chiviacowsky, S., & Drews, R. (2016). Temporal-comparative feedback affects motor learning. Journal of Motor Learning and Development, 4(2), 208–218. https://doi.org/10.1123/jmld.2015-0034
Chiviacowsky, S., & Wulf, G. (2007). Feedback After Good Trials Enhances Learning. Research Quarterly for Exercise and Sport, 78(2), 40–47. https://doi.org/10.1080/02701367.2007.10599402
Chiviacowsky, S., Wulf, G., Wally, R., & Borges, T. (2009). Knowledge of Results After Good Trials Enhances Learning in Older Adults. Research Quarterly for Exercise and Sport, 80(3), 663–668. https://doi.org/10.1080/02701367.2009.10599606
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/bf03193146
Galea, J. M., Mallia, E., Rothwell, J., & Diedrichsen, J. (2015). The dissociable effects of punishment and reward on motor learning. Nature Neuroscience, 18(4), 597–602. https://doi.org/10.1038/nn.3956
Galea, J. M., Vazquez, A., Pasricha, N., Orban de Xivry, J.-J., & Celnik, P. (2010). Dissociating the roles of the cerebellum and motor cortex during adaptive learning: The motor cortex retains what the cerebellum learns. Cerebral Cortex, 21(8), 1761–1770. https://doi.org/10.1093/cercor/bhq246
Goudini, R., Saemi, E., Ashrafpoornavaee, S., & Abdoli, B. (2018). The effect of feedback after good and poor trials on the continuous motor tasks learning. Acta Gymnica, 48(1), 3–8. https://doi.org/10.5507/ag.2018.001
Guadagnoli, M. A., Dornier, L. A., & Tandy, R. D. (1996). Optimal length for summary knowledge of results: The influence of task-related experience and complexity. Research Quarterly for Exercise and Sport, 67(2), 239–248. https://doi.org/10.1080/02701367.1996.10607950
Halperin, I., Ramsay, E., Philpott, B., Obolski, U., & Behm, D. G. (2020). The effects of positive and negative verbal feedback on repeated force production. Physiology & Behavior, 225, 113086. https://doi.org/10.1016/j.physbeh.2020.113086
Hinder, M. R., Riek, S., Tresilian, J. R., de Rugy, A., & Carson, R. G. (2009). Real-time error detection but not error correction drives automatic visuomotor adaptation. Experimental Brain Research, 201(2), 191–207. https://doi.org/10.1007/s00221-009-2025-9
Jamieson, J. (2004). Analysis of covariance (ANCOVA) with difference scores. International Journal of Psychophysiology, 52(3), 277–283. https://doi.org/10.1016/j.ijpsycho.2003.12.009
JASP Team. (2020). JASP (Version 0.16.2) [Computer software]. https://jasp-stats.org/
Lohse, K., Miller, M., Bacelar, M., & Krigolson, O. (2019). Errors, rewards, and reinforcement in motor skill learning. In Skill acquisition in sport (pp. 39–60). Routledge.
Miller, G. A., & Chapman, J. P. (2001). Misunderstanding analysis of covariance. Journal of Abnormal Psychology, 110(1), 40–48. https://doi.org/10.1037/0021-843x.110.1.40
Oppenheimer, D. M., Meyvis, T., & Davidenko, N. (2009). Instructional manipulation checks: Detecting satisficing to increase statistical power. Journal of Experimental Social Psychology, 45(4), 867–872. https://doi.org/10.1016/j.jesp.2009.03.009
R Core Team. (2020). R: A language and environment for statistical computing (Version 4.1.3) [Computer software]. R Foundation for Statistical Computing. https://www.R-project.org/
Saemi, E., Porter, J. M., Ghotbi-Varzaneh, A., Zarghami, M., & Maleki, F. (2012). Knowledge of results after relatively good trials enhances self-efficacy and motor learning. Psychology of Sport and Exercise, 13(4), 378–382. https://doi.org/10.1016/j.psychsport.2011.12.008
Saemi, E., Wulf, G., Varzaneh, A. G., & Zarghami, M. (2011). Feedback after good versus poor trials enhances motor learning in children. Revista Brasileira de Educação Física e Esporte, 25(04), 673–681.
Saijo, N., & Gomi, H. (2010). Multiple motor learning strategies in visuomotor rotation. PLoS ONE, 5(2), e9399. https://doi.org/10.1371/journal.pone.0009399
Salmoni, A. W., Schmidt, R. A., & Walter, C. B. (1984). Knowledge of results and motor learning: A review and critical reappraisal. Psychological Bulletin, 95(3), 355–386. https://doi.org/10.1037/0033-2909.95.3.355
Sattelmayer, M., Elsig, S., Hilfiker, R., & Baer, G. (2016). A systematic review and meta-analysis of selected motor learning principles in physiotherapy and medical education. BMC Medical Education, 16(1). https://doi.org/10.1186/s12909-016-0538-z
Schmidt, R. A., Lee, T. D., Winstein, C., Wulf, G., & Zelaznik, H. N. (2018). Motor control and learning: A behavioral emphasis. Human Kinetics.
Schmidt, R. A., & Wulf, G. (1997). Continuous concurrent feedback degrades skill learning: Implications for training and simulation. Human Factors: The Journal of the Human Factors and Ergonomics Society, 39(4), 509–525. https://doi.org/10.1518/001872097778667979
Shmuelof, L., Huang, V. S., Haith, A. M., Delnicki, R. J., Mazzoni, P., & Krakauer, J. W. (2012). Overcoming motor forgetting through reinforcement of learned actions. The Journal of Neuroscience, 32(42), 14617–14621a. https://doi.org/10.1523/jneurosci.2184-12.2012
Sidaway, B., Bates, J., Occhiogrosso, B., Schlagenhaufer, J., & Wilkes, D. (2012). Interaction of feedback frequency and task difficulty in children’s motor skill learning. Physical Therapy, 92(7), 948–957. https://doi.org/10.2522/ptj.20110378
Sigrist, R., Rauter, G., Riener, R., & Wolf, P. (2012). Augmented visual, auditory, haptic, and multimodal feedback in motor learning: A review. Psychonomic Bulletin & Review, 20(1), 21–53. https://doi.org/10.3758/s13423-012-0333-8
Sigrist, R., Rauter, G., Riener, R., & Wolf, P. (2013). Terminal feedback outperforms concurrent visual, auditory, and haptic feedback in learning a complex rowing-type task. Journal of Motor Behavior, 45(6), 455–472. https://doi.org/10.1080/00222895.2013.826169
Tseng, Y., Diedrichsen, J., Krakauer, J. W., Shadmehr, R., & Bastian, A. J. (2007). Sensory prediction errors drive cerebellum-dependent adaptation of reaching. Journal of Neurophysiology, 98(1), 54–62. https://doi.org/10.1152/jn.00266.2007
Walsh, C. M., Ling, S. C., Wang, C. S., & Carnahan, H. (2009). Concurrent versus terminal feedback: It may be better to wait. Academic Medicine, 84(Supplement), S54–S57. https://doi.org/10.1097/acm.0b013e3181b38daf
Wulf, G., & Shea, C. H. (2002). Principles derived from the study of simple skills do not generalize to complex skill learning. Psychonomic Bulletin & Review, 9(2), 185–211. https://doi.org/10.3758/bf03196276
Wulf, G., Shea, C. H., & Matschiner, S. (1998). Frequent feedback enhances complex motor skill learning. Journal of Motor Behavior, 30(2), 180–192. https://doi.org/10.1080/00222899809601335

Communications in Kinesiology