Carathéodory-π solution

A Carathéodory-π solution is a generalized solution to an ordinary differential equation. The concept is due to I. Michael Ross and named in honor of Constantin Carathéodory.[1] Its practicality was demonstrated in 2008 by Ross et al.[2] in a laboratory implementation. The concept is most useful for implementing feedback controls, particularly those generated by an application of Ross' pseudospectral optimal control theory.[3]

Mathematical background

A Carathéodory-π solution addresses the fundamental problem of defining a solution to a differential equation,

$$\dot{x} = g(x,t),$$

when g(x,t) is not differentiable with respect to x. Such problems arise quite naturally[4] in defining the meaning of a solution to a controlled differential equation,

$$\dot{x} = f(x,u),$$

when the control, u, is given by a feedback law,

$$u = k(x,t),$$

where the function k(x,t) may be non-smooth with respect to x. Non-smooth feedback controls arise quite often in the study of optimal feedback controls and have been the subject of extensive study going back to the 1960s.[5]
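
A simple illustration of such non-smoothness (an example added here for concreteness, not taken from the cited sources) is the signum feedback applied to a single integrator,

$$\dot{x} = u, \qquad u = k(x) = -\operatorname{sgn}(x).$$

The closed-loop right-hand side $g(x) = -\operatorname{sgn}(x)$ is discontinuous at $x = 0$, so the standard existence and uniqueness theory for Lipschitz vector fields does not apply and some generalized notion of solution must be specified.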

Ross' concept

An ordinary differential equation,

$$\dot{x} = g(x,t),$$

is equivalent to a controlled differential equation,

$$\dot{x} = u,$$

with feedback control, $u = g(x,t)$. Then, given an initial value problem with $x(t_0) = x_0$, Ross partitions the time interval $[t_0, \infty)$ into a grid, $\pi = \{t_i\}$, $i = 0, 1, 2, \ldots$, with $t_i \to \infty$. From $t_0$ to $t_1$, generate a control trajectory,

$$u(t) = g(x_0, t), \qquad t_0 \le t \le t_1,$$

to the controlled differential equation,

$$\dot{x}(t) = u(t), \qquad x(t_0) = x_0.$$

A Carathéodory solution exists for the above equation because $u(t)$ has discontinuities at most in t, the independent variable. At $t = t_1$, set $x_1 = x(t_1)$ and restart the system with $u(t) = g(x_1, t)$,

$$\dot{x}(t) = u(t), \qquad x(t_1) = x_1, \qquad t_1 \le t \le t_2.$$

Continuing in this manner, the Carathéodory segments are stitched together to form a Carathéodory-π solution.
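
The construction above translates into a simple numerical procedure: freeze the state argument of g at the start of each grid segment, integrate the resulting equation $\dot{x} = u(t)$ over the segment, and restart from the endpoint. The Python sketch below is illustrative only; the uniform grid, the fixed-step Euler integration within each segment, and the names caratheodory_pi and g are assumptions made here, not part of the cited work.

```python
import numpy as np

def caratheodory_pi(g, x0, t_grid, substeps=100):
    """Sketch of the Caratheodory-pi construction for x' = g(x, t).

    On each segment [t_i, t_{i+1}] the state argument of g is frozen at x_i,
    giving an open-loop control u(t) = g(x_i, t) that depends only on t; the
    segment is then a classical Caratheodory solution of x' = u(t).  Segments
    are stitched together at the grid points.  (Illustrative sketch; the
    fixed-step Euler integration is an assumption, not the published method.)
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    ts, xs = [t_grid[0]], [x.copy()]
    for t_i, t_next in zip(t_grid[:-1], t_grid[1:]):
        x_i = x.copy()                    # state frozen in the feedback on this segment
        h = (t_next - t_i) / substeps
        t = t_i
        for _ in range(substeps):
            u = g(x_i, t)                 # control trajectory u(t) = g(x_i, t)
            x = x + h * np.asarray(u)     # integrate x' = u(t)
            t += h
            ts.append(t)
            xs.append(x.copy())
        # at t_{i+1}: restart the next segment from x_{i+1} = x(t_{i+1})
    return np.array(ts), np.array(xs)

# Example with a non-smooth right-hand side, g(x, t) = -sgn(x)
if __name__ == "__main__":
    grid = np.linspace(0.0, 5.0, 51)      # the partition pi = {t_i}
    t, x = caratheodory_pi(lambda x, t: -np.sign(x), x0=[1.0], t_grid=grid)
    print(x[-1])                          # remains near 0; chattering is bounded by the grid spacing
```

In this sketch the substeps parameter only controls the accuracy of the per-segment integration; the segment boundaries themselves are defined by the partition π.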

Engineering applications

A Carathéodory-π solution can be applied to the practical stabilization of a control system.[6][7] It has been used to stabilize an inverted pendulum,[6] control and optimize the motion of robots,[7][8] slew and control the NPSAT1 spacecraft,[3] and produce guidance commands for low-thrust space missions.[2]

References

  1. Biles, D. C., and Binding, P. A., "On Carathéodory's Conditions for the Initial Value Problem," Proceedings of the American Mathematical Society, Vol. 125, No. 5, May 1997, pp. 1371–1376.
  2. Ross, I. M., Sekhavat, P., Fleming, A. and Gong, Q., "Optimal Feedback Control: Foundations, Examples and Experimental Results for a New Approach," Journal of Guidance, Control and Dynamics, Vol. 31, No. 2, pp. 307–321, 2008.
  3. Ross, I. M. and Karpenko, M., "A Review of Pseudospectral Optimal Control: From Theory to Flight," Annual Reviews in Control, Vol. 36, No. 2, pp. 182–197, 2012.
  4. Clarke, F. H., Ledyaev, Y. S., Stern, R. J., and Wolenski, P. R., Nonsmooth Analysis and Control Theory, Springer–Verlag, New York, 1998.
  5. Pontryagin, L. S., Boltyanskii, V. G., Gramkrelidze, R. V., and Mishchenko, E. F., The Mathematical Theory of Optimal Processes, Wiley, New York, 1962.
  6. Ross, I. M., Gong, Q., Fahroo, F. and Kang, W., "Practical Stabilization Through Real-Time Optimal Control," 2006 American Control Conference, Minneapolis, MN, June 14–16, 2006.
  7. Martin, S. C., Hillier, N. and Corke, P., "Practical Application of Pseudospectral Optimization to Robot Path Planning," Proceedings of the 2010 Australasian Conference on Robotics and Automation, Brisbane, Australia, December 1–3, 2010.
  8. Björkenstam, S., Gleeson, D., and Bohlin, R., "Energy Efficient and Collision Free Motion of Industrial Robots using Optimal Control," Proceedings of the 9th IEEE International Conference on Automation Science and Engineering (CASE 2013), Madison, Wisconsin, August 2013.