Predation risk allocation hypothesis
The predation risk allocation hypothesis attempts to explain how and why animals' behaviour and foraging strategies differ across predatory situations, depending on the level of risk they face.[1] The hypothesis suggests that an animal's alertness and attention, along with its willingness to forage, change with the risk factors in its environment and the presence of predators that could attack. The model assumes that risk levels differ among environments and that prey animals behave more cautiously when in high-risk environments.[2] The hypothesis's effectiveness at predicting anti-predator behaviour depends on the prey species studied and how that species' behaviour changes, and empirical tests across species have produced mixed results.
Hypothesis background
The hypothesis attempts to explain how animals display anti-predator behaviours in different environments depending on risk factors, i.e. predatory threats.[1] Threat levels vary among habitats, depending on the type of terrain and the other animals inhabiting the area.
The predation risk allocation hypothesis makes two main predictions. The first is that animals will increase their foraging in safer situations, at times when predators are absent. Foraging while predators are absent allows an animal to eat and build energy reserves, which it can then devote to defending against predators when they arrive.[2]
The second prediction is that animals will display fewer anti-predator behaviours once they have spent a long period in a high-risk environment.[3][1] After significant time in the same location, an animal must eat to survive, so it becomes more likely to forage and spends less energy defending against predators. Such animals have to be less selective about their foraging times, since few safe opportunities remain.[3][1]
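These two predictions can be illustrated with a simple allocation argument (a minimal sketch using hypothetical notation; this is not the formal model of Lima and Bednekoff). Suppose a fraction $p$ of an animal's time is spent in a high-risk state and $1-p$ in a low-risk state, and let $f_H$ and $f_L$ denote its feeding effort in each state, each bounded by a maximum rate $f_{\max}$. If feeding carries a per-unit predation cost $\mu_H > \mu_L$ in the two states and the animal must meet a total intake requirement

$$p\,f_H + (1-p)\,f_L = F,$$

then minimising the total risk exposure $p\,\mu_H f_H + (1-p)\,\mu_L f_L$ places as much feeding as possible in the low-risk state, $f_L = \min\!\left(f_{\max},\, \tfrac{F}{1-p}\right)$, with $f_H$ covering only the shortfall. When safe periods are common ($p$ small), $f_H$ can be near zero, matching the first prediction. As $p \to 1$, the intake constraint forces $f_H \to F$: an animal that is almost always at high risk must feed during risky periods anyway, leaving little scope for anti-predator behaviour, matching the second prediction.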
The model cannot be applied to animals that exert control over their own predation risk, as these animals would not show the behavioural responses the hypothesis predicts.[1] For instance, an animal that could control its predation risk would not exhibit avoidance behaviour in safer situations, contrary to the hypothesis's predictions.[1] It has also been observed that animals with more time to learn about the risk factors in their habitats are better able to display behaviours consistent with the hypothesis.[4] Conversely, animals without sufficient time to learn or assess the risk factors in their area will not display behaviours that support the hypothesis.[4]
Case studies
Various studies have tested the effectiveness of the predation risk allocation hypothesis in both vertebrate and invertebrate animals. The results both support and contradict the hypothesis.
Snails
Freshwater physid snails (Physella gyrina) act in accordance with the hypothesis in response to crayfish predators in their environment. The snails' activity levels, responses to predators and interactions with their environment have been observed in different contexts. Their behaviour tracks the level of predatory threat in their habitat: they increase foraging and overall activity when predation risk is low, and decrease both when it is high.[5]
Fish
Convict cichlids (Archocentrus nigrofasciatus) have been observed in two contexts: high-risk and low-risk predatory settings. The fish display less anti-predator and foraging behaviour in the high-risk areas than in the low-risk zones. These behavioural adjustments across contexts support the risk allocation hypothesis, since the animals follow its predictions.[2]
Tadpoles
Tadpoles of the pool frog (Rana lessonae) do not follow the foraging predictions of the risk allocation hypothesis. Observed tadpoles did not increase their foraging in zones with less threat; instead, they maintained a constant feeding pattern regardless of conditions.[3]
Voles
The behaviour of bank voles (Clethrionomys glareolus) in response to least weasel predators does not support the risk allocation hypothesis, which may indicate that bank voles cannot assess risk levels in their territories. Neither the voles' foraging behaviour nor the amount of time they spent in the test zones changed across risk contexts. The voles did display more anti-predator behaviour in the high-risk situation, but they did not increase their foraging in the high-risk zones as predicted. Since the voles did not behave as predicted in either zone, the study does not support the hypothesis.[6]
Hypothesis application
The predation risk allocation hypothesis can help researchers understand how animals adjust their behaviour in response to predators, since it was the first model to consider temporal variation in risk.[7] Animals' responses to predators can be better understood by observing how their behaviour shifts as risk levels change. The hypothesis does not, however, explain behaviour in all types of variable-risk situations, since it assumes that risk levels in every environment change over time.[7] The hypothesis best accounts for the behaviour of animals that developed and evolved in the same environments in which they gathered information about local predators;[8] such animals are the best informed about what to expect and how to react in their surroundings.[8] Animals that are exposed to risky situations, such as predation, more frequently may behave similarly in both high-risk and safe situations because of habituation.[9] Having become accustomed to constant threat, these animals do not react in the same way as animals unaccustomed to high-risk situations.[9]
References
- Lima, S. L., & Bednekoff, P. A. (1999). Temporal variation in danger drives antipredator behavior: The predation risk allocation hypothesis. The American Naturalist, 153, 649–659.
- Ferrari, M. C. O., Sih, A., & Chivers, D. P. (2009). The paradox of risk allocation: A review and prospectus. Animal Behaviour, 78, 579–585.
- Van Buskirk, J., Muller, C., Portmann, A., & Surbeck, M. (2002). A test of the risk allocation hypothesis: Tadpole responses to temporal change in predation risk. Behavioral Ecology, 13, 526–530.
- Ferrari, M. C. O., Rive, A. C., MacNaughton, C. J., Brown, G. E., & Chivers, D. P. (2008). Fixed vs. random temporal predictability of predation risk: An extension of the risk allocation hypothesis. Ethology, 114, 238–244.
- Sih, A., & McCarthy, T. M. (2002). Prey responses to pulses of risk and safety: Testing the risk allocation hypothesis. Animal Behaviour, 63, 437–443.
- Sundell, J., Dudek, D., Klemme, I., Koivisto, E., Pusenius, J., & Ylönen, H. (2004). Variation in predation risk and vole feeding behaviour: A field test of the risk allocation hypothesis. Oecologia, 139, 157–162.
- Lima, S. L., & Bednekoff, P. A. (1999). Temporal variation in danger drives antipredator behavior: The predation risk allocation hypothesis. The American Naturalist, 153, 649–659. doi:10.1086/303202.
- Luttbeg, B. (2017). Re-examining the causes and meaning of the risk allocation hypothesis. The American Naturalist, 189, 644–656. doi:10.1086/691470.
- Mirza, R. S., Mathis, A., & Chivers, D. P. (2005). Does temporal variation in predation risk influence the intensity of antipredator responses? A test of the risk allocation hypothesis. Ethology, 112, 44–51.