
A machine-learning-based methodology for fast and efficient modeling of power consumption in digital architectures

In this research, we propose an AI-assisted methodology to significantly speed up power consumption modeling. Clustering techniques automatically reduce register-transfer-level (RTL) traces by selecting representative windows. This drastically cuts both the amount of data and the generation time required, without sacrificing model accuracy.

The purpose of the proposed methodology (Figure 1) is to leverage a combination of machine learning and a drastic reduction in the size of training datasets to speed up power consumption modeling for digital architectures.

 

Figure 1: Reduced training dataset generation flow.

 

Traditionally, these models are generated from datasets that combine RTL traces describing the circuit’s internal activity with power consumption data obtained at the logic-gate level using commercial power analysis tools. However, logic-gate simulations are cumbersome, and generating the corresponding power consumption profiles is costly. Our methodology overcomes these two major bottlenecks.

We use k-means clustering to automate the selection of representative windows. RTL traces are first segmented into fixed-size windows; each window is then aggregated into summary features, which enables robust clustering of complex time series.
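The windowing and aggregation step can be sketched as follows. This is a minimal illustration, not CEA-List's implementation: it assumes the RTL trace is available as a per-cycle toggle-count series, and the choice of summary statistics (mean, standard deviation, maximum) is illustrative.

```python
import numpy as np

def segment_and_aggregate(trace, window_size):
    """Split a per-cycle activity trace into fixed-size windows and
    summarize each window with simple statistics. The aggregation turns
    each long window into a short feature vector, which makes clustering
    of complex time series tractable and robust."""
    n_windows = len(trace) // window_size
    # Drop the trailing partial window, then reshape into (windows, cycles).
    windows = trace[:n_windows * window_size].reshape(n_windows, window_size)
    features = np.stack(
        [windows.mean(axis=1), windows.std(axis=1), windows.max(axis=1)],
        axis=1,
    )
    return windows, features

# Toy trace: 1000 cycles of synthetic toggle counts.
rng = np.random.default_rng(0)
trace = rng.poisson(lam=20, size=1000).astype(float)
windows, features = segment_and_aggregate(trace, window_size=100)
print(windows.shape, features.shape)  # (10, 100) (10, 3)
```

In practice the feature set would be tuned to the architecture under study; the key point is that clustering operates on compact per-window vectors rather than raw traces.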

The analysis is thus restricted to a small but informative subset of runtime segments. The gate-level simulations and corresponding power calculations are based on this subset. The amount of data to process is drastically reduced, but the behavioral diversity needed for learning is preserved.
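The selection of the representative subset can be sketched with scikit-learn's k-means: cluster the per-window feature vectors, then keep the one window closest to each centroid. The function name and the number of clusters below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_representative_windows(features, n_clusters):
    """Cluster window features and return, for each cluster, the index of
    the window closest to its centroid. Only these windows go on to
    gate-level simulation and power calculation."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    reps = []
    for c in range(n_clusters):
        members = np.flatnonzero(km.labels_ == c)
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        reps.append(int(members[np.argmin(dists)]))
    return sorted(reps)

rng = np.random.default_rng(1)
# Toy input: 200 windows, each described by 3 aggregate statistics.
features = rng.normal(size=(200, 3))
reps = select_representative_windows(features, n_clusters=5)
print(reps)  # 5 window indices, one per cluster
```

Choosing the member nearest the centroid (rather than the centroid itself) guarantees that each representative corresponds to a real, simulatable segment of the original RTL trace.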

Experimental results, obtained on a RISC-V Rocket core and a masked AES, show significant gains: power consumption analysis time is reduced by up to 28 times and model training time by up to 49 times, while the average prediction error remains around 5% despite the reduced training data. As an example, Figure 2 shows a cycle-by-cycle comparison between predicted power (green) and reference power (red) over a segment of a masked AES run.
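An error figure of this kind is typically computed cycle by cycle between the predicted trace and the gate-level reference. The exact metric used in the study is not specified here; the sketch below uses mean absolute percentage error (MAPE), a common choice, on made-up values.

```python
import numpy as np

def mape(reference, predicted):
    """Mean absolute percentage error between a reference (gate-level)
    power trace and a predicted one, averaged over cycles."""
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs(predicted - reference) / np.abs(reference))

# Toy traces: each predicted cycle deviates by 5% from the reference.
ref = np.array([10.0, 12.0, 11.0, 9.0])
pred = np.array([10.5, 12.6, 11.55, 9.45])
print(round(mape(ref, pred), 2))  # 5.0
```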

 

Figure 2: Example of comparison between reference power and predicted power for a masked AES.

 

Up to 28x

faster data generation.

Up to 49x

faster training of power models while maintaining a prediction error rate of around 5%.

Our AI-assisted approach, which optimizes the selection of training data, significantly reduces simulation requirements without sacrificing the accuracy of power consumption models. This advance paves the way toward faster, more scalable power modeling of digital architectures.

Rebecca Cabean

Caaliph Andriamisaina

Research Engineer — CEA-List

Learn more

Use cases, applications, technology transfer

  • Automated selection of representative windows for faster generation of power models for the RISC-V Rocket core.
  • Fast energy analysis in security environments with cryptographic systems like masked AES.

 

Patent

  • “Method for constructing and training a detector of the presence of anomalies in a temporal signal, associated devices and method” (« Procédé de construction et d’entraînement d’un détecteur de la présence d’anomalies dans un signal temporel, dispositifs et procédé associés »). French patent no. FR2014188 (CEA ref.: DD20704).

Major projects/partnerships

  • This research aligns with CEA-List’s AI for EDA activities, which focus on using AI to make hardware architecture design faster and better-optimized. The methodology developed here is currently being implemented as part of the EU TRISTAN project; it will also be used in the EU RIGOLETTO project and in the national CADabrIA project, part of the PHOENIX program. Finally, it is being used in an industrial R&D project with our partner Silvaco.

 

Flagship publication