
Power Optimisation in SoCs (Systems on Chips)

by Embedded Office

Power optimisation crucial in embedded SoCs

Power requirements are of particular importance in embedded processors and circuit design for two principal reasons. Firstly, the latest feature-rich chips pack in more functions or run at faster clock speeds, and so intrinsically require more power just to operate correctly. Secondly, in the quest for maximum performance from a given design and physical package, hardware designers are under pressure to deliver every last drop of performance to gain or maintain a competitive advantage in the modern marketplace – or to push back new technological frontiers.

However, the tools available and some of the potentially most useful methodologies are still at an early stage of development, with considerable implications for software design, architecture and implementation. In particular, high-level languages tend to be processor-intensive, yet their output must ultimately map efficiently onto the lowest level of machine hardware.

Ignoring the topic can lead to processor overuse, overheating and errors – with potentially dangerous consequences. At present, solutions are somewhat piecemeal; the responsibility for coordinating development metrics, test processes and solutions falls to individual companies and engineering teams.

For the future, however, it will be important to integrate this analysis into a complete flow within the design process. In the eyes of some semiconductor engineering experts, the combination of these requirements is a near-perfect example of the need for a shift-left approach.

Current measures – smarter code and sub-systems

Software scheduling has become increasingly important in the relationship between power consumption and chips’ thermal limits – i.e. maintaining reliability, preventing overheating and avoiding potential catastrophe.

As processor arrays have become more energy-hungry, developers have also needed to write smarter code and algorithms to reduce processor load. Other measures include the shrewd use of idle time to spread loads wherever possible (instead of allowing peaks to accumulate), as well as sleep and low-power modes. In operating systems with tickless kernels, the periodic timer tick is eliminated: timer interrupts are programmed on demand, so an idle processor can stay in a low-power state instead of being woken at every tick – a pattern sketched below.
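
As a minimal illustration of the sleep-mode idea, the following sketch shows a classic check-then-sleep idle loop on an ARM Cortex-M class core. The CMSIS intrinsics (__disable_irq(), __enable_irq(), __WFI()) are real, but the work_pending flag and the surrounding scheduler are hypothetical simplifications, not any particular kernel's API.

```c
/* Idle-loop sketch for an ARM Cortex-M class core, assuming CMSIS
 * headers are available. 'work_pending' is a hypothetical flag set
 * by interrupt handlers when there is work to dispatch. */
#include <stdbool.h>
#include "cmsis_compiler.h"   /* __disable_irq(), __enable_irq(), __WFI() */

static volatile bool work_pending;

void idle_task(void)
{
    for (;;) {
        __disable_irq();      /* close the check-then-sleep race window */
        if (!work_pending) {
            __WFI();          /* sleep until an interrupt arrives;
                                 WFI wakes the core even with IRQs masked */
        }
        __enable_irq();       /* any pending interrupt is taken here */

        if (work_pending) {
            work_pending = false;
            /* ... dispatch whatever work the interrupt signalled ... */
        }
    }
}
```

With a tickless kernel, nothing cuts the WFI short except real events, so the core spends every genuinely idle microsecond in its low-power state.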

There is a paradox here, however: the engineering time spent achieving these overhead savings itself incurs extra cost, which in turn demands payback through even greater performance.

Voltage and frequency scaling

Newer techniques include dynamic frequency scaling (also known as CPU throttling), which involves adjusting the microprocessor frequency based on current system needs. On mobile devices, this conserves battery reserves while also reducing cooling costs and noise.

Conversely, dynamic voltage scaling involves reducing the supply voltage whenever possible. This approach generally saves more power, because power rises with the square of voltage (P = V²/R for a resistive load; the dynamic power of CMOS logic carries the same square term, P ≈ C·V²·f). Halving the voltage therefore cuts the associated power to roughly a quarter.
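
To make the mechanics concrete, here is a hedged sketch of how a driver might move between combined voltage/frequency operating points. The table values and the set_voltage_mv()/set_frequency_mhz() hooks are invented for illustration; real platforms expose their own regulator and clock APIs.

```c
/* Hypothetical DVFS sketch: switching between operating performance
 * points (OPPs). All numbers and platform hooks are invented. */
#include <stdint.h>

typedef struct {
    uint32_t freq_mhz;     /* target clock frequency                  */
    uint32_t millivolts;   /* minimum stable supply at that clock     */
} opp_t;

static const opp_t opp_low  = {  50,  900 };   /* power-saving point */
static const opp_t opp_full = { 400, 1100 };   /* full-speed point   */

/* Assumed platform hooks, not a real API. */
extern void set_voltage_mv(uint32_t mv);
extern void set_frequency_mhz(uint32_t mhz);

void dvfs_switch(const opp_t *from, const opp_t *to)
{
    if (to->millivolts > from->millivolts) {
        /* Speeding up: raise the voltage first, so the core is
         * never clocked faster than its supply can sustain. */
        set_voltage_mv(to->millivolts);
        set_frequency_mhz(to->freq_mhz);
    } else {
        /* Slowing down: drop the clock first, then the voltage. */
        set_frequency_mhz(to->freq_mhz);
        set_voltage_mv(to->millivolts);
    }
}

void throttle_down(void) { dvfs_switch(&opp_full, &opp_low); }
```

The ordering is the design point worth noting: voltage leads on the way up and trails on the way down, so the core is never underpowered for its current clock.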

Predicting wider system response

However, focussing on individual sub-systems for relatively short periods is of less value than predicting overall system response and performance. To appreciate power requirements at system level, early specifications and virtual prototypes come into their own.

RTL (register transfer level) power estimation techniques use FPGAs (field programmable gate arrays) to emulate models of the new synchronous processor circuits. Because FPGAs are reprogrammable, they can be reconfigured to emulate differing new designs with relative ease, giving better estimates of how a circuit will respond under various operating conditions. HDLs (hardware description languages) are used to express the prototype circuits under test as high-level representations, though the latest FPGAs may also include hard processor cores and memory blocks that do not need to be programmed at all. In all cases, the aim is to derive low-level representations, down to circuit-wiring levels of detail.
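
At the heart of such estimation sits the switching-activity calculation that RTL power tools automate across millions of nodes. The toy program below applies the standard CMOS dynamic-power model to a single clock domain; every constant in it is invented purely for illustration.

```c
/* Back-of-envelope activity-based power estimate – the kind of
 * calculation RTL power tools perform per node. All values invented. */
#include <stdio.h>

int main(void)
{
    double c_eff = 1.5e-9;  /* effective switched capacitance [F]      */
    double vdd   = 1.1;     /* supply voltage [V]                      */
    double f_clk = 400e6;   /* clock frequency [Hz]                    */
    double alpha = 0.15;    /* activity factor: fraction of nodes      */
                            /* toggling per cycle (from simulation)    */

    /* Classic CMOS dynamic-power model: P = alpha * C * V^2 * f */
    double p_dyn = alpha * c_eff * vdd * vdd * f_clk;

    printf("estimated dynamic power: %.1f mW\n", p_dyn * 1e3); /* ~108.9 */
    return 0;
}
```

The activity factor alpha is exactly the "workload profile" that simulation or emulation supplies; the better the stimulus, the more trustworthy the estimate.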

According to a recent article in Semiconductor Engineering, some engineering development teams still estimate maximum processor performance without doing a detailed analysis. Here, decisions regarding which power-governing factor to use are somewhat arbitrary – as, probably, are the thresholds at which chip performance is scaled up or down. In essence, the trade-off is between thermal issues and processor performance: tuning down too early gives poor results against benchmarks, whereas melting processors can lead to disaster. While engineers strive to ensure that architects and software developers are aware of the power and thermal aspects of individual projects, they readily acknowledge that this awareness is often based on scant information. The process of parameter modelling for physical implementation is still emerging.

Standards for the future

Although there is no universal system model today, experts are placing their faith in a draft standard intended to be reusable across multiple target platforms for analysing power usage. Entitled Portable Stimulus (somewhat confusingly, to some minds), the early-adopter specification for this new modus operandi became available for public review earlier this year (2017). The original impetus was functional verification, but engineers soon noticed that because the method defines an activity or workload profile, it also works well for analysing architectural power requirements.

Additionally, the new IEEE P2415 standard aims to provide a uniform view of the power capabilities of hardware blocks, and thereby to facilitate their comparison and control through an intermediate (unified abstraction) layer between the processor hardware and the power management functions of a high-level OS. Its benefits include better tools and productivity for software developers, with the ability to (at least partially) automate the manipulation of low-level machine power states. In short, previously complex and error-prone voltage and control signalling should become easier to implement.
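
What such an abstraction layer might look like to a driver writer is sketched below. This is purely hypothetical – P2415 defines its own models, and none of these names come from the standard; the snippet merely illustrates the idea of hiding device-specific voltage and clock control behind uniform power states.

```c
/* Hypothetical unified power-abstraction sketch: the OS requests
 * abstract states; each block's driver maps them onto its own
 * registers. Invented names, not IEEE P2415's actual interface. */
typedef enum {
    PWR_ACTIVE,   /* full function             */
    PWR_IDLE,     /* clock-gated, quick wake   */
    PWR_SLEEP,    /* state retained, slow wake */
    PWR_OFF       /* powered down              */
} pwr_state_t;

typedef struct {
    const char *name;
    int (*set_state)(pwr_state_t state);   /* driver-supplied hook */
} pwr_block_t;

/* The OS power manager needs only this one call per block; the
 * error-prone voltage and clock sequencing stays in the driver. */
static inline int pwr_set_block_state(const pwr_block_t *blk,
                                      pwr_state_t state)
{
    return blk->set_state(state);
}
```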

Collectively, these changes could mean as much to the embedded systems world as ACPI (the Advanced Configuration and Power Interface) came to mean to computer servers.

Summary

Finally, in a point that warrants some consideration, Moore's Law (first voiced in 1965) suggested that the complexity of devices relative to minimum component cost would approximately double every year from that time onwards. Decades later, in the 2000s, Moore reappraised his own earlier pronouncement and even began to play it down. Other experts, in contrast, suggested the factor should be relaxed to 1.5 – a doubling roughly every 18 months – as, in their view, the advances would be less steeply exponential. Though opinions vary, it seems clear that despite spectacular technological innovation over recent decades, more of the same is in the pipeline. The prognosis, then, would appear to underline the growing need for efficient power management in embedded processors and SoCs.

