Low-power electronics
Low-power electronics are electronics, such as notebook processors, that have been designed to use less electric power than usual, often at some expense. In the case of notebook processors, that expense is processing power: they usually consume less power than their desktop counterparts, but also deliver less performance.[1]
History
Watches
The earliest attempts to reduce the amount of power required by an electronic device were related to the development of the wristwatch. Electronic watches require electricity as a power source, and some mechanical movements and hybrid electromechanical movements also require electricity. Usually, the electricity is provided by a replaceable battery. The first use of electrical power in watches was as a substitute for the mainspring, to remove the need for winding. The first electrically powered watch, the Hamilton Electric 500, was released in 1957 by the Hamilton Watch Company of Lancaster, Pennsylvania.
The first quartz wristwatches were introduced in 1969, using analog hands to display the time.[2]
Watch batteries (strictly speaking cells, as a battery is composed of multiple cells) are specially designed for their purpose. They are very small and provide tiny amounts of power continuously for very long periods (several years or more). In some cases, replacing the battery requires a trip to a watch repair shop or watch dealer. Rechargeable batteries are used in some solar-powered watches.
The first digital electronic watch was a Pulsar LED prototype produced in 1970.[3] Digital LED watches were very expensive and out of reach of the common consumer until 1975, when Texas Instruments started to mass-produce LED watches in a plastic case.
Most watches with LED displays required the user to press a button to see the time displayed for a few seconds, because LEDs used so much power that they could not be kept operating continuously. Watches with LED displays were popular for a few years, but they were soon superseded by liquid crystal displays (LCDs), which used less battery power and were much more convenient, with the display always visible and no need to push a button before reading the time. Only in darkness did the user have to press a button to light the display, originally with a tiny incandescent bulb and later with illuminating LEDs.[4]
Most electronic watches today use 32.768 kHz (2^15 Hz) quartz oscillators, whose output is divided down by a binary counter chain to produce a one-second timebase.[2]
As of 2013, processors specifically designed for wristwatches were among the lowest-power processors manufactured, often 4-bit designs clocked at 32 kHz.
Mobile computing
When personal computers were first developed, power consumption was not an issue. With the development of portable computers, however, the requirement to run a computer off a battery pack necessitated a compromise between computing power and power consumption. Originally most processors ran both the core and I/O circuits at 5 volts, as in the Intel 8088 used by the first Compaq Portable. Core voltages were later reduced to 3.5, 3.3, and 2.5 volts to lower power consumption. For example, the Pentium P5 core voltage decreased from 5 V in 1993 to 2.5 V in 1997.
With lower voltage comes lower overall power consumption, making a system less expensive to run on any existing battery technology and able to function for longer; this is crucially important for portable or mobile systems. The emphasis on battery operation has driven many of the advances in lowering processor voltage, because this has a significant effect on battery life. The second major benefit is that with less voltage, and therefore less power consumption, less heat is produced. Processors that run cooler can be packed into systems more tightly and will last longer. The third major benefit is that a processor running cooler on less power can be made to run faster. Lowering the voltage has been one of the key factors in allowing processor clock rates to keep rising.[5]
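As a first-order illustration (the standard CMOS switching-power relationship, given here for context rather than taken from the cited reference), the dynamic power of a CMOS circuit is approximately P_dyn = α · C · V² · f, where α is the activity factor (the fraction of nodes switching each cycle), C is the switched capacitance, V is the supply voltage, and f is the clock frequency. Because power scales with the square of the supply voltage, reducing the core voltage from 5 V to 2.5 V, as in the Pentium example above, cuts dynamic power to roughly one quarter for the same capacitance and frequency.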
Electronics
Computing elements
The density and speed of integrated-circuit computing elements have increased exponentially for several decades, following a trend described by Moore's law. While it is generally accepted that this exponential improvement trend will end, it is unclear exactly how dense and fast integrated circuits will be by the time that point is reached. Working devices have been demonstrated with a MOSFET transistor channel length of 6.3 nanometres using conventional semiconductor materials, and devices have been built that use carbon nanotubes as MOSFET gates, giving a channel length of approximately one nanometre. The density and computing power of integrated circuits are limited primarily by power-dissipation concerns.
The overall power consumption of a new personal computer has been increasing by about 22% per year.[6] This increase in consumption has come even though the energy consumed by a single CMOS logic gate in order to change its state has fallen exponentially, in accordance with Moore's law, by virtue of shrinkage.[6]
An integrated-circuit chip contains many capacitive loads, formed both intentionally (as with gate-to-channel capacitance) and unintentionally (between conductors which are near each other but not electrically connected). Changing the state of the circuit causes a change in the voltage across these parasitic capacitances, which involves a change in the amount of stored energy. As the capacitive loads are charged and discharged through resistive devices, an amount of energy comparable to that stored in the capacitor, E = ½·C·V² for a capacitance C charged to a voltage V, is dissipated as heat.
The effect of heat dissipation on state change is to limit the amount of computation that may be performed within a given power budget. While device shrinkage can reduce some parasitic capacitances, the number of devices on an integrated circuit chip has increased more than enough to compensate for reduced capacitance in each individual device. Some circuits – dynamic logic, for example – require a minimum clock rate in order to function properly, wasting "dynamic power" even when they do not perform useful computations. Other circuits – most prominently, the RCA 1802, but also several later chips such as the WDC 65C02, the Intel 80C85, the Freescale 68HC11 and some other CMOS chips – use "fully static logic" that has no minimum clock rate, but can "stop the clock" and hold their state indefinitely. When the clock is stopped, such circuits use no dynamic power but they still have a small, static power consumption caused by leakage current.
As circuit dimensions shrink, subthreshold leakage current becomes more prominent. This leakage current results in power consumption, even when no switching is taking place (static power consumption). In modern chips, this current generally accounts for half the power consumed by the IC.
Reducing power loss
Loss from subthreshold leakage can be reduced by raising the threshold voltage and lowering the supply voltage. Both these changes slow down the circuit significantly. To address this issue, some modern low-power circuits use dual supply voltages to improve speed on critical paths of the circuit and lower power consumption on non-critical paths. Some circuits even use different transistors (with different threshold voltages) in different parts of the circuit, in an attempt to further reduce power consumption without significant performance loss.
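The benefit of a higher threshold voltage comes from the exponential nature of subthreshold conduction. As a rough, textbook-level illustration (not taken from the cited sources), the off-state current of a transistor scales approximately as I_off ∝ 10^(−V_th / S), where S is the subthreshold swing, typically around 70–100 mV per decade at room temperature. Raising V_th by one swing value therefore cuts leakage roughly tenfold, which is why multi-threshold designs reserve fast, low-V_th transistors for critical paths and use slower, high-V_th transistors elsewhere.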
Another method that is used to reduce power consumption is power gating:[7] the use of sleep transistors to disable entire blocks when not in use. Systems that are dormant for long periods of time and "wake up" to perform a periodic activity are often in an isolated location monitoring an activity. These systems are generally battery- or solar-powered and hence, reducing power consumption is a key design issue for these systems. By shutting down a functional but leaky block until it is used, leakage current can be reduced significantly. For some embedded systems that only function for short periods at a time, this can dramatically reduce power consumption.
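A minimal firmware-level sketch of this duty-cycling pattern is given below. The register address, bit position, and helper routines are hypothetical placeholders for illustration, not the API of any particular vendor.

```c
#include <stdint.h>

/* Hypothetical power-gate control register, for illustration only;
 * real devices expose vendor-specific registers and SDK calls. */
#define PWR_GATE_CTRL   (*(volatile uint32_t *)0x40001000u)
#define SENSOR_BLOCK_EN (1u << 3)

/* Assumed platform-provided primitives (placeholders). */
extern uint32_t sensor_read(void);
extern void     radio_send(uint32_t value);
extern void     deep_sleep_seconds(uint32_t seconds);

int main(void)
{
    for (;;) {
        PWR_GATE_CTRL |= SENSOR_BLOCK_EN;   /* un-gate the block only while it is needed */
        uint32_t sample = sensor_read();
        PWR_GATE_CTRL &= ~SENSOR_BLOCK_EN;  /* gating removes both dynamic and leakage power */

        radio_send(sample);
        deep_sleep_seconds(60);             /* stay dormant between periodic activities */
    }
}
```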
Two other approaches also exist to lower the power overhead of state changes. One is to reduce the operating voltage of the circuit, as in a dual-voltage CPU, or to reduce the voltage change involved in a state change (making a state change alter the node voltage by only a fraction of the supply voltage, as in low-voltage differential signaling, for example). This approach is limited by thermal noise within the circuit. There is a characteristic voltage (proportional to the device temperature and to the Boltzmann constant) which the state switching voltage must exceed in order for the circuit to be resistant to noise. This is typically on the order of 50–100 mV for devices rated to 100 degrees Celsius external temperature (about 4 kT/q, where T is the device's internal temperature in kelvins, k is the Boltzmann constant, and q is the elementary charge).
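For a sense of scale (simple arithmetic, not from the cited sources): the thermal voltage kT/q is about 26 mV at 300 K and about 32 mV at 373 K, so a noise margin of a few thermal voltages lands in the range of roughly 50–130 mV, consistent with the figures quoted above.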
The second approach is to attempt to provide charge to the capacitive loads through paths that are not primarily resistive. This is the principle behind adiabatic circuits. The charge is supplied either from a variable-voltage inductive power supply or by other elements in a reversible-logic circuit. In both cases, the charge transfer must be primarily regulated by the non-resistive load. As a practical rule of thumb, this means the change rate of a signal must be slower than that dictated by the RC time constant of the circuit being driven. In other words, the price of reduced power consumption per unit computation is a reduced absolute speed of computation. In practice, although adiabatic circuits have been built, it has been difficult for them to reduce computation power substantially in practical circuits.
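The "slower than the RC time constant" rule can be made quantitative with a standard result from the adiabatic-logic literature (stated here for illustration): charging a capacitance C to a voltage V through a resistance R with a linear ramp of duration T dissipates approximately (RC/T)·C·V² when T is much longer than RC, compared with the fixed ½·C·V² lost when the capacitor is charged abruptly from a constant supply. In principle the dissipation can therefore be made arbitrarily small by ramping slowly, which is exactly the energy-for-speed trade described above.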
Finally, there are several techniques for reducing the number of state changes associated with a given computation. For clocked-logic circuits, the clock gating technique is used to avoid changing the state of functional blocks that are not required for a given operation. As a more extreme alternative, the asynchronous logic approach implements circuits in such a way that a specific externally supplied clock is not required. While both of these techniques are used to different extents in integrated circuit design, the limit of practical applicability for each appears to have been reached.
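At the firmware level, a rough analogue of clock gating is enabling a peripheral's clock only while it is in use. The sketch below uses a hypothetical clock-enable register and transmit routine rather than any specific vendor's naming.

```c
#include <stdint.h>

/* Hypothetical peripheral clock-enable register (illustrative only). */
#define CLK_ENABLE_REG (*(volatile uint32_t *)0x40002000u)
#define UART0_CLK_EN   (1u << 5)

/* Assumed, pre-existing transmit routine (placeholder). */
extern void uart0_write(const char *msg);

void log_message(const char *msg)
{
    CLK_ENABLE_REG |= UART0_CLK_EN;   /* clock the block only for this operation */
    uart0_write(msg);
    CLK_ENABLE_REG &= ~UART0_CLK_EN;  /* stopping its clock removes dynamic power,
                                         though leakage current remains */
}
```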
Wireless communication elements
There are a variety of techniques for reducing the amount of battery power required for a desired wireless communication goodput.[8] Some wireless mesh networks use "smart" low-power broadcasting techniques that reduce the battery power required to transmit. This can be achieved by using power-aware protocols and joint power-control systems.
Costs
In 2007, about 10% of the average IT budget was spent on energy, and energy costs for IT were expected to rise to 50% by 2010.[9]
The weight and cost of power supply and cooling systems generally depend on the maximum possible power that could be used at any one time. There are two ways to prevent a system from being permanently damaged by excessive heat. Most desktop computers design power and cooling systems around the worst-case CPU power dissipation at maximum frequency, maximum workload, and worst-case environment. To reduce weight and cost, many laptop computers instead use a much lighter, lower-cost cooling system designed around a much lower thermal design power, somewhat above what is expected at maximum frequency with a typical workload in a typical environment. Such systems typically reduce (throttle) the clock rate when the CPU die temperature gets too hot, lowering the power dissipated to a level the cooling system can handle.
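A simplified control loop for this kind of thermal throttling might look like the sketch below; the temperature threshold, frequency steps, and helper functions are illustrative assumptions, not the behaviour of any particular product.

```c
#include <stdint.h>

#define T_THROTTLE_C  95    /* hypothetical die-temperature limit, degrees Celsius */
#define F_MIN_MHZ     800
#define F_MAX_MHZ     2400
#define F_STEP_MHZ    200

/* Assumed platform hooks (placeholders). */
extern int  read_die_temperature_c(void);
extern void set_cpu_frequency_mhz(uint32_t mhz);
extern void sleep_ms(uint32_t ms);

void thermal_control_loop(void)
{
    uint32_t freq_mhz = F_MAX_MHZ;
    for (;;) {
        int temp_c = read_die_temperature_c();
        if (temp_c > T_THROTTLE_C && freq_mhz > F_MIN_MHZ) {
            freq_mhz -= F_STEP_MHZ;        /* too hot: trade clock rate for lower dissipation */
        } else if (temp_c < T_THROTTLE_C - 10 && freq_mhz < F_MAX_MHZ) {
            freq_mhz += F_STEP_MHZ;        /* thermal headroom: restore performance */
        }
        set_cpu_frequency_mhz(freq_mhz);
        sleep_ms(100);                     /* re-evaluate roughly ten times per second */
    }
}
```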
References
- "Intel Processor Letter Meanings [Simple Guide]". 2020-04-20.
- Eric A. Vittoz. "The Electronic Watch and Low-Power Circuits". 2008.
- "All in Good Time: HILCO EC director donates prototype of world's first working digital watch to Smithsonian". Texas Co-op Power. Feb 2012. Retrieved 2012-07-21.
- U.S. Patent 4,096,550: W. Boller, M. Donati, J. Fingerle, P. Wild, Illuminating Arrangement for a Field-Effect Liquid-Crystal Display as well as Fabrication and Application of the Illuminating Arrangement, filed 15 October 1976.
- Scott Mueller and Mark Edward Soper. Microprocessor Types and Specifications. 2001.
- Paul DeMone. "The Incredible Shrinking CPU: Peril of Proliferating Power". 2004.
- K. Roy, et al., "Leakage current mechanisms and leakage reduction techniques in deep-submicrometer CMOS circuits", Proceedings of the IEEE, 2003.
- "How to use optional wireless power-save protocols to dramatically reduce power consumption" by Bill McFarland 2008.
- King, Rachael (2007-05-14). "Averting the IT Energy Crunch". Businessweek. Archived from the original on 2013-01-05. "Energy costs, now about 10% of the average IT budget, could rise to 50% ... by 2010."
- Brad Graves (2021-08-15). "Wiliot Series C Totals $200M". San Diego Business Journal. Retrieved 2022-07-08.
Further reading
- Gaudet, Vincent C. (2014-04-01) [2013-09-25]. "Chapter 4.1. Low-Power Design Techniques for State-of-the-Art CMOS Technologies". In Steinbach, Bernd (ed.). Recent Progress in the Boolean Domain (1 ed.). Newcastle upon Tyne, UK: Cambridge Scholars Publishing. pp. 187–212. ISBN 978-1-4438-5638-6. Retrieved 2019-08-04. (455 pages)
External links
- "High-level design synthesis of a low power, VLIW processor for the IS-54 VSELP Speech Encoder" by Russell Henning and Chaitali Chakrabarti (NB. Implies that, in general, if the algorithm to run is known, hardware designed to specifically run that algorithm will use less power than general-purpose hardware running that algorithm at the same speed.)
- CRISP: A Scalable VLIW Processor for Low Power Multimedia Systems by Francisco Barat 2005
- A Loop Accelerator for Low Power Embedded VLIW Processors by Binu Mathew and Al Davis
- Ultra-Low Power Design by Jack Ganssle
- K. Roy and S. Prasad, Low-Power CMOS VLSI Circuit Design, John Wiley & Sons, Inc., ISBN 0-471-11488-X, 2000, 359 pages.
- K-S. Yeo and K. Roy, Low-Voltage Low-Power VLSI Subsystems, McGraw-Hill 2004, ISBN 0-07-143786-X, 294 pages.