Vector-valued function

A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. The input of a vector-valued function could be a scalar or a vector (that is, the dimension of the domain could be 1 or greater than 1); the dimension of the function's domain has no relation to the dimension of its range.

Example: Helix

A graph of the vector-valued function r(z) = ⟨2cos z, 4sin z, z⟩ indicating a range of solutions and the vector when evaluated near z = 19.5

A common example of a vector-valued function is one that depends on a single real parameter t, often representing time, producing a vector r(t) as the result. In terms of the standard unit vectors i, j, k of Cartesian 3-space, these specific types of vector-valued functions are given by expressions such as

    r(t) = f(t)i + g(t)j + h(t)k

where f(t), g(t) and h(t) are the coordinate functions of the parameter t, and the domain of this vector-valued function is the intersection of the domains of the functions f, g, and h. It can also be written in a different notation:

    r(t) = ⟨f(t), g(t), h(t)⟩

The vector r(t) has its tail at the origin and its head at the coordinates evaluated by the function.

The vector shown in the graph to the right is the evaluation of the function near t = 19.5 (between 6π and 6.5π; i.e., somewhat more than 3 rotations). The helix is the path traced by the tip of the vector as t increases from zero through 8π.
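
For illustration, the following minimal Python sketch (using NumPy; the names r, ts and helix are chosen here, not taken from the article) evaluates the helix at t = 19.5 and samples the path traced out over 0 ≤ t ≤ 8π:

    import numpy as np

    # Sketch: the helix from the figure, r(t) = (2 cos t, 4 sin t, t).
    def r(t):
        return np.array([2 * np.cos(t), 4 * np.sin(t), t])

    print(r(19.5))                          # the single vector shown in the graph
    ts = np.linspace(0, 8 * np.pi, 400)     # parameter values from 0 to 8*pi
    helix = np.array([r(t) for t in ts])    # points traced by the tip of r(t)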

In 2D, we can analogously speak of vector-valued functions as

    r(t) = f(t)i + g(t)j

or

    r(t) = ⟨f(t), g(t)⟩

Linear case

In the linear case the function can be expressed in terms of matrices:

    y = Ax

where y is an n × 1 output vector, x is a k × 1 vector of inputs, and A is an n × k matrix of parameters. Closely related is the affine case (linear up to a translation), where the function takes the form

    y = Ax + b

where in addition b is an n × 1 vector of parameters.

The linear case arises often, for example in multiple regression, where for instance the n × 1 vector ŷ of predicted values of a dependent variable is expressed linearly in terms of a k × 1 vector β̂ (k < n) of estimated values of model parameters:

    ŷ = Xβ̂

in which X (playing the role of A in the previous generic form) is an n × k matrix of fixed (empirically based) numbers.
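
As a sketch (the matrix and vector entries below are made-up placeholders), the linear and affine cases are ordinary matrix-vector computations, for example in NumPy:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [0.0, 1.0],
                  [3.0, -1.0]])    # n x k parameter matrix (n = 3, k = 2)
    b = np.array([1.0, 0.0, 2.0])  # n x 1 parameter vector for the affine case
    x = np.array([0.5, 2.0])       # k x 1 input vector

    y_linear = A @ x               # linear case:  y = Ax
    y_affine = A @ x + b           # affine case:  y = Ax + b

    # Multiple regression has the same shape: y_hat = X @ beta_hat,
    # with X an n x k design matrix and beta_hat the estimated parameters.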

Parametric representation of a surface

A surface is a 2-dimensional set of points embedded in (most commonly) 3-dimensional space. One way to represent a surface is with parametric equations, in which two parameters s and t determine the three Cartesian coordinates of any point on the surface:

    (x, y, z) = (F1(s, t), F2(s, t), F3(s, t)) ≡ F(s, t)

Here F is a vector-valued function. For a surface embedded in n-dimensional space, one similarly has the representation

    (x1, x2, ..., xn) = (F1(s, t), F2(s, t), ..., Fn(s, t)) ≡ F(s, t)
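
For instance (a standard example, not one given in the article), the unit sphere can be written as a vector-valued function of two parameters; the NumPy sketch below evaluates it on a grid:

    import numpy as np

    # F(s, t) = (cos s sin t, sin s sin t, cos t) parametrizes the unit sphere,
    # with 0 <= s < 2*pi and 0 <= t <= pi.
    def F(s, t):
        return np.array([np.cos(s) * np.sin(t),
                         np.sin(s) * np.sin(t),
                         np.cos(t)])

    s, t = np.meshgrid(np.linspace(0, 2 * np.pi, 40), np.linspace(0, np.pi, 20))
    surface = F(s, t)   # shape (3, 20, 40): x, y, z coordinates over the grid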

Derivative of a three-dimensional vector function

Many vector-valued functions, like scalar-valued functions, can be differentiated by simply differentiating the components in the Cartesian coordinate system. Thus, if

    r(t) = f(t)i + g(t)j + h(t)k

is a vector-valued function, then

    dr(t)/dt = f′(t)i + g′(t)j + h′(t)k

The vector derivative admits the following physical interpretation: if r(t) represents the position of a particle, then the derivative is the velocity of the particle

    v(t) = dr(t)/dt

Likewise, the derivative of the velocity is the acceleration

    a(t) = dv(t)/dt = d²r(t)/dt²
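
A minimal symbolic check of this componentwise rule, using SymPy and the helix from the earlier example (the particular choice of r is only illustrative):

    import sympy as sp

    t = sp.symbols('t')
    r = sp.Matrix([2 * sp.cos(t), 4 * sp.sin(t), t])   # position r(t)

    v = r.diff(t)    # velocity:      [-2*sin(t), 4*cos(t), 1]
    a = v.diff(t)    # acceleration:  [-2*cos(t), -4*sin(t), 0]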

Partial derivative

The partial derivative of a vector function a with respect to a scalar variable q is defined as[1]

    ∂a/∂q = (∂a1/∂q) e1 + (∂a2/∂q) e2 + (∂a3/∂q) e3

where ai is the scalar component of a in the direction of ei; it equals the dot product of a and ei (and, for a unit vector a, the direction cosine between a and ei). The vectors e1, e2, e3 form an orthonormal basis fixed in the reference frame in which the derivative is being taken.

Ordinary derivative

If a is regarded as a vector function of a single scalar variable, such as time t, then the equation above reduces to the first ordinary time derivative of a with respect to t,[1]

    da/dt = (da1/dt) e1 + (da2/dt) e2 + (da3/dt) e3

Total derivative

If the vector a is a function of a number n of scalar variables qr (r = 1, ..., n), and each qr is only a function of time t, then the ordinary derivative of a with respect to t can be expressed, in a form known as the total derivative, as[1]

    da/dt = (∂a/∂q1)(dq1/dt) + (∂a/∂q2)(dq2/dt) + ... + (∂a/∂qn)(dqn/dt) + ∂a/∂t

Some authors prefer to use capital D to indicate the total derivative operator, as in D/Dt. The total derivative differs from the partial time derivative in that the total derivative accounts for changes in a due to the time variance of the variables qr.
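
The distinction can be checked symbolically. In the SymPy sketch below (the vector a and the time dependences q1(t), q2(t) are arbitrary choices for illustration), summing the partial-derivative terms of the total derivative reproduces the result of substituting q1(t), q2(t) first and then differentiating:

    import sympy as sp

    t, q1, q2 = sp.symbols('t q1 q2')
    a = sp.Matrix([q1 * q2, sp.sin(q1), q2**2])   # a(q1, q2); no explicit t-dependence here

    # Partial derivatives of a with respect to each scalar variable.
    da_dq1 = a.diff(q1)
    da_dq2 = a.diff(q2)

    # Pick some time dependence q1(t), q2(t) and form the total derivative.
    q1_t, q2_t = sp.cos(t), t**2
    total = (da_dq1 * sp.diff(q1_t, t) + da_dq2 * sp.diff(q2_t, t)).subs({q1: q1_t, q2: q2_t})

    # Same result as substituting first and then differentiating directly.
    direct = a.subs({q1: q1_t, q2: q2_t}).diff(t)
    assert (total - direct).applyfunc(sp.simplify) == sp.zeros(3, 1)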

Reference frames

Whereas for scalar-valued functions there is only a single possible reference frame, to take the derivative of a vector-valued function requires the choice of a reference frame (at least when a fixed Cartesian coordinate system is not implied as such). Once a reference frame has been chosen, the derivative of a vector-valued function can be computed using techniques similar to those for computing derivatives of scalar-valued functions. A different choice of reference frame will, in general, produce a different derivative function. The derivative functions in different reference frames have a specific kinematical relationship.

Derivative of a vector function with nonfixed bases

The above formulas for the derivative of a vector function rely on the assumption that the basis vectors e1, e2, e3 are constant, that is, fixed in the reference frame in which the derivative of a is being taken, so that each of e1, e2, e3 has a derivative that is identically zero. This often holds true for problems dealing with vector fields in a fixed coordinate system, or for simple problems in physics. However, many complex problems involve the derivative of a vector function in multiple moving reference frames, which means that the basis vectors will not necessarily be constant. In the case where the basis vectors e1, e2, e3 are fixed in reference frame E, but not in reference frame N, the more general formula for the ordinary time derivative of a vector in reference frame N is[1]

    Nda/dt = (da1/dt)e1 + (da2/dt)e2 + (da3/dt)e3 + a1(Nde1/dt) + a2(Nde2/dt) + a3(Nde3/dt)

where the superscript N to the left of the derivative operator indicates the reference frame in which the derivative is taken. As shown previously, the first three terms on the right-hand side equal the derivative of a in the reference frame where e1, e2, e3 are constant, reference frame E. It also can be shown that the remaining terms equal the relative angular velocity of the two reference frames cross multiplied with the vector a itself.[1] Thus, after substitution, the formula relating the derivative of a vector function in two reference frames is[1]

    Nda/dt = Eda/dt + NωE × a

where NωE is the angular velocity of the reference frame E relative to the reference frame N.

One common example where this formula is used is to find the velocity of a space-borne object, such as a rocket, in the inertial reference frame using measurements of the rocket's velocity relative to the ground. The velocity NvR in inertial reference frame N of a rocket R located at position rR can be found using the formula

    NvR = NdrR/dt = EdrR/dt + NωE × rR

where NωE is the angular velocity of the Earth relative to the inertial frame N. Since velocity is the derivative of position, NvR and EvR are the derivatives of rR in reference frames N and E, respectively. By substitution,

    NvR = EvR + NωE × rR

where EvR is the velocity vector of the rocket as measured from a reference frame E that is fixed to the Earth.
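
A numerical sketch of this relation (all values below are made-up placeholders, not data from the article):

    import numpy as np

    omega_NE = np.array([0.0, 0.0, 7.292e-5])   # Earth's angular velocity (rad/s), aligned with the z-axis
    r_R = np.array([6.4e6, 0.0, 1.0e5])         # rocket position rR in Earth-fixed coordinates (m)
    v_E = np.array([0.0, 100.0, 50.0])          # EvR: velocity measured in the Earth-fixed frame E (m/s)

    v_N = v_E + np.cross(omega_NE, r_R)         # NvR = EvR + NωE x rR, velocity in the inertial frame N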

Derivative and vector multiplication

The derivative of a product of vector functions behaves similarly to the derivative of a product of scalar functions.[2] Specifically, in the case of scalar multiplication of a vector, if p is a scalar function of q,[1]

    d(pa)/dq = (dp/dq)a + p(da/dq)

In the case of dot multiplication, for two vectors a and b that are both functions of q,[1]

    d(a · b)/dq = (da/dq) · b + a · (db/dq)

Similarly, the derivative of the cross product of two vector functions is[1]

    d(a × b)/dq = (da/dq) × b + a × (db/dq)
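
These rules can be verified symbolically; the SymPy sketch below checks the dot- and cross-product rules for two arbitrarily chosen vector functions of q:

    import sympy as sp

    q = sp.symbols('q')
    a = sp.Matrix([q, q**2, sp.sin(q)])     # an arbitrary vector function a(q)
    b = sp.Matrix([1, sp.cos(q), q])        # an arbitrary vector function b(q)

    # Dot-product rule: d(a . b)/dq = (da/dq) . b + a . (db/dq)
    lhs_dot = sp.diff(a.dot(b), q)
    rhs_dot = a.diff(q).dot(b) + a.dot(b.diff(q))
    assert sp.simplify(lhs_dot - rhs_dot) == 0

    # Cross-product rule: d(a x b)/dq = (da/dq) x b + a x (db/dq)
    lhs_cross = a.cross(b).diff(q)
    rhs_cross = a.diff(q).cross(b) + a.cross(b.diff(q))
    assert (lhs_cross - rhs_cross).applyfunc(sp.simplify) == sp.zeros(3, 1)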

Derivative of an n-dimensional vector function

A function f of a real number t with values in the space ℝ^n can be written as f(t) = (f1(t), f2(t), ..., fn(t)). Its derivative equals

    f′(t) = (f1′(t), f2′(t), ..., fn′(t)).

If f is a function of several variables, say of t ∈ ℝ^m, then the partial derivatives of the components of f form an n × m matrix called the Jacobian matrix of f.
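
For example (an illustrative choice of f, not one from the article), SymPy can compute the Jacobian matrix of a function f: ℝ² → ℝ³ directly:

    import sympy as sp

    x, y = sp.symbols('x y')
    f = sp.Matrix([x * y, sp.sin(x), x + y**2])   # f(x, y) with values in R^3

    J = f.jacobian([x, y])   # 3 x 2 matrix of partial derivatives of the components of f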

Infinite-dimensional vector functions

If the values of a function f lie in an infinite-dimensional vector space X, such as a Hilbert space, then f may be called an infinite-dimensional vector function.

Functions with values in a Hilbert space

If the argument of f is a real number and X is a Hilbert space, then the derivative of f at a point t can be defined as in the finite-dimensional case:

    f′(t) = lim (h → 0) [f(t + h) − f(t)] / h

Most results of the finite-dimensional case also hold in the infinite-dimensional case, mutatis mutandis. Differentiation can also be defined for functions of several variables (e.g., t ∈ ℝ^n, or even t ∈ Y, where Y is an infinite-dimensional vector space).

N.B. If X is a Hilbert space, then one can easily show that any derivative (and any other limit) can be computed componentwise: if

    f(t) = f1(t)e1 + f2(t)e2 + f3(t)e3 + ···

(i.e., fi(t) = ⟨f(t), ei⟩, where e1, e2, e3, ... is an orthonormal basis of the space X), and f′(t) exists, then

    f′(t) = f1′(t)e1 + f2′(t)e2 + f3′(t)e3 + ···.

However, the existence of a componentwise derivative does not guarantee the existence of a derivative, as componentwise convergence in a Hilbert space does not guarantee convergence with respect to the actual topology of the Hilbert space.

Other infinite-dimensional vector spaces

Most of the above holds for other topological vector spaces X too. However, not as many classical results hold in the Banach space setting; for example, an absolutely continuous function with values in a suitable Banach space need not have a derivative anywhere. Moreover, in most Banach space settings there are no orthonormal bases.

Vector field

A portion of the vector field (sin y, sin x)

In vector calculus and physics, a vector field is an assignment of a vector to each point in a space, most commonly Euclidean space ℝ^n.[3] A vector field on a plane can be visualized as a collection of arrows with given magnitudes and directions, each attached to a point on the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout three-dimensional space, such as the wind, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from one point to another point.

The elements of differential and integral calculus extend naturally to vector fields. When a vector field represents force, the line integral of a vector field represents the work done by a force moving along a path, and under this interpretation conservation of energy is exhibited as a special case of the fundamental theorem of calculus. Vector fields can usefully be thought of as representing the velocity of a moving flow in space, and this physical intuition leads to notions such as the divergence (which represents the rate of change of volume of a flow) and curl (which represents the rotation of a flow).
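
As a small illustration of these notions, the field from the figure, (sin y, sin x), has zero divergence and a position-dependent planar (scalar) curl; the SymPy sketch below computes both:

    import sympy as sp

    x, y = sp.symbols('x y')
    P, Q = sp.sin(y), sp.sin(x)                  # the planar field V(x, y) = (sin y, sin x)

    divergence = sp.diff(P, x) + sp.diff(Q, y)   # 0: the flow neither expands nor contracts
    curl_z = sp.diff(Q, x) - sp.diff(P, y)       # cos(x) - cos(y): local rotation of the flow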

A vector field is a special case of a vector-valued function, whose domain's dimension has no relation to the dimension of its range; for example, the position vector of a space curve is defined only on a smaller subset of the ambient space. In coordinates, a vector field on a domain in n-dimensional Euclidean space can be represented as a vector-valued function that associates an n-tuple of real numbers to each point of the domain. This representation of a vector field depends on the coordinate system, and there is a well-defined transformation law (covariance and contravariance of vectors) in passing from one coordinate system to the other.

Vector fields are often discussed on open subsets of Euclidean space, but also make sense on other subsets such as surfaces, where they associate an arrow tangent to the surface at each point (a tangent vector).

More generally, vector fields are defined on differentiable manifolds, which are spaces that look like Euclidean space on small scales, but may have more complicated structure on larger scales. In this setting, a vector field gives a tangent vector at each point of the manifold (that is, a section of the tangent bundle to the manifold). Vector fields are one kind of tensor field.

Notes

  1. Kane & Levinson 1996, pp. 29–37
  2. In fact, these relations are derived by applying the product rule componentwise.
  3. Galbis, Antonio; Maestre, Manuel (2012). Vector Analysis Versus Vector Calculus. Springer. p. 12. ISBN 978-1-4614-2199-3.

References

  • Kane, Thomas R.; Levinson, David A. (1996), "1–9 Differentiation of Vector Functions", Dynamics Online, Sunnyvale, California: OnLine Dynamics, Inc., pp. 29–37
  • Hu, Chuang-Gan; Yang, Chung-Chun (2013), Vector-Valued Functions and their Applications, Springer Science & Business Media, ISBN 978-94-015-8030-4