Technology

Accelerating AI tasks while preserving data security

source : www.eurekalert.org

With the proliferation of compute-intensive machine learning applications, such as chatbots that perform real-time language translations, device manufacturers often integrate specialized hardware components to quickly move and process the massive amounts of data these systems require.

Choosing the best design for these components, known as deep neural network accelerators, is challenging because they can have a huge range of design options. This difficult problem becomes even more difficult when a designer tries to add cryptographic operations to protect data from attackers.

Now MIT researchers have developed a search engine that can efficiently identify optimal designs for deep neural network accelerators that maintain data security while improving performance.

Their search tool, known as SecureLoop, is designed to consider how adding data encryption and authentication measures will affect the accelerator chip’s performance and power consumption. An engineer could use the tool to obtain the optimal design of an accelerator tailored to their neural network and machine learning task.

Compared to conventional scheduling techniques that do not consider security, SecureLoop can improve the performance of accelerator designs while keeping data protected.

Using SecureLoop can help a user improve the speed and performance of demanding AI applications, such as autonomous driving or medical image classification, while keeping sensitive user data safe from certain types of attacks.

“If you are interested in a calculation where you want to maintain the security of the data, the rules we previously used to find the optimal design are now broken. So all that optimization has to adapt to this new, more complicated set of constraints. And that’s what (lead author) Kyungmi has done in this paper,” said Joel Emer, an MIT professor of practice in computer science and electrical engineering and co-author of a paper on SecureLoop.

Emer is joined on the paper by lead author Kyungmi Lee, a graduate student in electrical engineering and computer science; Mengjia Yan, Homer A. Burnell Career Development Assistant Professor of Electrical Engineering and Computer Science and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Anantha Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. The research will be presented at the IEEE/ACM International Symposium on Microarchitecture.

“The community passively accepted that adding cryptographic operations to an accelerator would incur overhead. They thought it would introduce only small variations in the trade-off space between designs. But this is a misconception. In fact, cryptographic operations can significantly distort the design space of low-power accelerators. Kyungmi has done a fantastic job identifying this problem,” Yan added.

Safe acceleration

A deep neural network consists of many layers of interconnected nodes that process data. Normally, the output of one layer becomes the input of the next layer. Data is grouped into units called tiles for processing and transfer between external memory and the accelerator. Each layer of the neural network can have its own data tiling configuration.
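To make the idea of tiling concrete, here is a minimal sketch that slices a layer's 2-D data into uniform chunks. The tile dimensions (64×64) and the matrix shape are illustrative assumptions, not parameters from the paper.

```python
# Minimal tiling sketch: cover a rows x cols matrix with fixed-size tiles.
# Tile sizes are hypothetical; real accelerators pick them per layer.

def tile_shapes(rows, cols, tile_rows=64, tile_cols=64):
    """Yield (row, col, height, width) for each tile covering the matrix.

    Edge tiles are clipped so the tiling never reads past the matrix bounds.
    """
    for r in range(0, rows, tile_rows):
        for c in range(0, cols, tile_cols):
            yield (r, c, min(tile_rows, rows - r), min(tile_cols, cols - c))

tiles = list(tile_shapes(256, 192))
print(len(tiles))  # 4 row bands x 3 column bands = 12 tiles
```

Each tile is then transferred between off-chip memory and the accelerator as a unit, which is what makes the tile size a first-class scheduling decision.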

A deep neural network accelerator is a processor with an array of computing units that parallelizes operations, such as multiplication, at each layer of the network. An accelerator schedule describes how data is moved and processed.

Because space on an accelerator chip is limited, most data is stored in off-chip memory and retrieved by the accelerator when needed. But because data is stored off-chip, it is vulnerable to an attacker who can steal information or change certain values, causing the neural network to malfunction.

“As a chip manufacturer, you cannot guarantee the security of external devices or the overall operating system,” Lee explains.

Manufacturers can protect data by adding authenticated encryption to the accelerator. Encryption scrambles the data using a secret key, while authentication divides the data into uniform chunks and assigns each chunk a cryptographic hash, which is stored alongside that chunk in off-chip memory.
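A simplified sketch of the authentication half of this scheme, using Python's standard-library HMAC in place of the accelerator's hardware cryptographic engine (the block size and key are illustrative assumptions):

```python
import hashlib
import hmac

def tag_blocks(data: bytes, key: bytes, block_size: int = 64):
    """Split data into uniform blocks and attach a keyed hash to each,
    mimicking how per-block tags are stored alongside data off-chip."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return [(b, hmac.new(key, b, hashlib.sha256).digest()) for b in blocks]

def verify_block(block: bytes, tag: bytes, key: bytes) -> bool:
    """Recompute the tag on retrieval; a mismatch means the off-chip
    copy was tampered with and must not be processed."""
    return hmac.compare_digest(tag, hmac.new(key, block, hashlib.sha256).digest())
```

In real authenticated encryption the data is also encrypted, not just tagged; the sketch isolates the part that creates the block-granularity constraint discussed below.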

When the accelerator retrieves an encrypted piece of data, known as an authentication block, it uses a secret key to recover and authenticate the original data before processing it.

But the sizes of authentication blocks and data tiles do not match, so there can be multiple tiles in one block, or a tile can be split between two blocks. The accelerator can’t randomly grab a fraction of an authentication block, so it may end up collecting extra data, which consumes extra energy and slows down the calculations.
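The overhead from this mismatch is easy to quantify: because the accelerator must fetch whole authentication blocks, a tile that straddles block boundaries drags in bytes it never uses. A small sketch with made-up offsets and sizes:

```python
def blocks_touched(tile_start, tile_len, block_size):
    """Number of authentication blocks a tile's byte range overlaps."""
    first = tile_start // block_size
    last = (tile_start + tile_len - 1) // block_size
    return last - first + 1

def extra_bytes(tile_start, tile_len, block_size):
    """Bytes fetched beyond the tile itself, since whole blocks must be
    read and verified before any part of them can be used."""
    return blocks_touched(tile_start, tile_len, block_size) * block_size - tile_len

# A 100-byte tile starting at offset 30 with 64-byte blocks overlaps
# blocks 0, 1, and 2, so 192 bytes are fetched and 92 are overhead.
print(extra_bytes(30, 100, 64))  # 92
```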

Moreover, the accelerator still needs to perform the cryptographic operation on each authentication block, which adds further computational costs.

An efficient search engine

With SecureLoop, the MIT researchers sought a method that could identify the fastest and most energy-efficient accelerator schedule – one that minimizes the number of times the device must access off-chip memory to collect additional blocks of data due to encryption and authentication.

They started by expanding an existing search engine that Emer and his associates had previously developed, called Timeloop. First, they added a model that could account for the additional computations required for encryption and authentication.

They then reformulated the search problem into a simple mathematical expression, allowing SecureLoop to find the ideal authentication block size in a much more efficient way than searching through all possible options.

“Depending on how you allocate this block, the amount of unnecessary traffic can increase or decrease. If you allocate the cryptographic block smartly, you can only retrieve a small amount of additional data,” says Lee.
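The trade-off Lee describes can be illustrated with a toy search. Unlike SecureLoop's closed-form approach, this sketch simply scores a few candidate block sizes under an invented cost model (redundant traffic plus a per-block cryptographic charge, echoing the two overheads described above); the `crypto_cost` constant and all sizes are illustrative assumptions.

```python
def schedule_cost(tile_offsets, tile_len, block_size, crypto_cost=16):
    """Toy cost model: redundant bytes fetched plus a fixed charge for
    the cryptographic operation on every block touched. The constant
    is illustrative, not a figure from the paper."""
    cost = 0
    for start in tile_offsets:
        first = start // block_size
        last = (start + tile_len - 1) // block_size
        n_blocks = last - first + 1
        cost += n_blocks * block_size - tile_len  # extra off-chip traffic
        cost += n_blocks * crypto_cost            # per-block crypto work
    return cost

def best_block_size(tile_offsets, tile_len, candidates):
    """Pick the candidate block size with the lowest total cost."""
    return min(candidates, key=lambda b: schedule_cost(tile_offsets, tile_len, b))

# Eight 128-byte tiles laid out back to back: a 128-byte block aligns
# perfectly with every tile, so it wins over smaller or larger blocks.
offsets = [i * 128 for i in range(8)]
print(best_block_size(offsets, 128, [32, 64, 128, 256]))  # 128
```

Smaller blocks avoid redundant traffic here too, but pay the per-block cryptographic charge more often; larger blocks amortize that charge but fetch unused bytes. SecureLoop resolves this tension analytically rather than by enumeration.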

Finally, they incorporated a heuristic technique that ensures SecureLoop identifies a scheme that maximizes the performance of the entire deep neural network, rather than just a single layer.

Ultimately, the search engine outputs an accelerator schedule, which includes the data tiling strategy and the size of the authentication blocks, that provides the best possible speed and energy efficiency for a specific neural network.

“The design space for these accelerators is enormous. What Kyungmi did was come up with some very pragmatic ways to make that search manageable, so that she could find good solutions without having to exhaustively search the space,” says Emer.

When tested in a simulator, SecureLoop identified schemes that were up to 33.2 percent faster and demonstrated a 50.2 percent better energy delay product (a metric related to energy efficiency) than other methods that did not consider security.
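The energy delay product mentioned here simply multiplies energy by runtime, so it rewards designs that are both fast and frugal. A quick sketch with hypothetical numbers (arbitrary units, not the paper's measurements):

```python
def energy_delay_product(energy, delay):
    """EDP = energy x delay; lower is better, and a design must improve
    speed or energy (or both) without worsening the other to win."""
    return energy * delay

# Hypothetical schedules in arbitrary units, purely for illustration:
naive = energy_delay_product(2.0, 1.5)    # security-unaware schedule
secure = energy_delay_product(1.5, 1.0)   # jointly optimized schedule
improvement = 1 - secure / naive
print(f"{improvement:.1%}")  # 50.0%
```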

The researchers also used SecureLoop to investigate how the design space for accelerators changes when security is considered. They found that allocating a little more on-chip space to the cryptographic engine and sacrificing some on-chip memory space can lead to better performance, Lee says.

In the future, the researchers want to use SecureLoop to find accelerator designs that can withstand side-channel attacks, which occur when an attacker has access to physical hardware. For example, an attacker can monitor a device’s energy consumption to obtain secret information, even if the data is encrypted. They also plan to extend SecureLoop so it can be applied to other types of computations.

This work is funded in part by Samsung Electronics and the Korea Foundation for Advanced Studies.

###

Written by Adam Zewe, MIT News

Paper: “SecureLoop: Design Space Exploration of Secure DNN Accelerators”

https://par.nsf.gov/biblio/10465225-secureloop-design-space-exploration-secure-dnn-accelerators


