
jueves, 28 de octubre de 2021


New memcapacitor devices for neuromorphic computing applications

To train and implement artificial neural networks, engineers require advanced devices capable of performing data-intensive computations. In recent years, research teams worldwide have been trying to create such devices, using different approaches and designs.

One possible way to create these devices is to realize specialized hardware onto which neural networks can be mapped directly. This could entail, for instance, the use of arrays of memristive devices, which perform many computations in parallel.
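
To see why such arrays are attractive: a memristive crossbar evaluates a full matrix-vector product in a single step, with input voltages driving the rows, device conductances acting as the weights, and each column wire summing its devices' currents into one output. Below is a minimal NumPy sketch of that principle; the conductance and voltage values are illustrative, not taken from any particular device.

```python
import numpy as np

# In a memristive crossbar, the device conductances G[i, j] (in siemens)
# play the role of neural-network weights. Driving the rows with voltages V
# and summing the currents on each column wire (Kirchhoff's current law)
# yields the matrix-vector product I = G^T V in one analog step.
G = 1e-6 * np.array([[1.0, 0.2, 0.5],
                     [0.3, 0.9, 0.1],
                     [0.7, 0.4, 0.8]])  # example conductances (siemens)

V = np.array([0.10, 0.20, 0.05])        # example row input voltages (volts)

I = G.T @ V                             # one output current per column
print(I)                                # [1.95e-07 2.20e-07 1.10e-07]
```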

Researchers at the Max Planck Institute of Microstructure Physics and the startup SEMRON GmbH in Germany have recently designed new energy-efficient memcapacitive devices (i.e., capacitors with a memory) that could be used to implement machine-learning algorithms. These devices, presented in a paper published in Nature Electronics, work by exploiting a principle known as charge shielding.

"We noticed that besides conventional digital approaches for running neural networks, there were mostly memristive approaches and only very few memcapacitive proposals," Kai-Uwe Demasius, one of the researchers who carried out the study, told TechXplore. "In addition, we noticed that all commercially available AI Chips are only digital/mixed signal based and there are few chips with resistive memory devices. Therefore, we started to investigate an alternative approach based on a capacitive memory device."

While reviewing previous studies, Demasius and his colleagues observed that all existing memcapacitive devices were difficult to scale up and exhibited a poor dynamic range. They thus set out to develop devices that are more efficient and easier to scale up. The new memcapacitive device they created draws inspiration from synapses and neurotransmitters in the brain.

"Memcapacitor devices are inherently many times more energy efficient compared to memristive devices, because they are electric field based instead of current based and the signal-to-noise ratio is better for the first case," Demasius said. "Our memcapacitor device is based on charge screening, which enables much better scalability and higher dynamic range in comparison to prior trials to realize memcapacitive devices."

The device created by Demasius and his colleagues controls the electric-field coupling between a top gate electrode and a bottom read-out electrode via a third layer, called the shielding layer. This shielding layer is in turn adjusted by an analog memory, which can store the different weight values of artificial neural networks, similarly to how neurotransmitters in the brain store and convey information.
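
A toy numerical model may help make the charge-screening mechanism concrete. Here the stored analog weight is treated as a screening factor in [0, 1] that attenuates the effective gate-to-read-out capacitance, so the charge induced on the read-out electrode encodes weight times input. The linear screening law and all component values below are illustrative assumptions, not the device equations from the paper.

```python
import numpy as np

def memcapacitor_charge(v_gate, weight, c_max=1e-15):
    """Toy model of one charge-screening memcapacitor.

    weight: analog memory state in [0, 1]; 1 means unscreened (full
    gate-to-read-out coupling) and 0 means fully screened.
    Returns the charge induced on the read-out electrode (coulombs).
    """
    c_eff = c_max * weight          # assumed linear screening law
    return c_eff * v_gate           # Q = C_eff * V

# Devices sharing a read-out line add their induced charges, so a column
# of memcapacitors performs a multiply-accumulate operation, like the
# current-summing crossbar, but driven by electric fields rather than currents.
weights = np.array([0.8, 0.1, 0.5])  # stored synaptic weights (assumed)
v_in    = np.array([0.2, 0.2, 0.0])  # gate input voltages (assumed, volts)
q_out = sum(memcapacitor_charge(v, w) for v, w in zip(v_in, weights))
print(q_out)                         # total charge on the shared read-out line
```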

To evaluate their devices, the researchers arranged 156 of them in a crossbar pattern, then used them to train a neural network to distinguish between three different letters of the Roman alphabet ("M," "P," and "I"). Remarkably, their devices attained energy efficiencies of over 3,500 TOPS/W at 8-bit precision, 35 to 300 times higher than existing memristive approaches. These findings highlight the potential of the team's new memcapacitors for running large and complex deep-learning models at very low power consumption (in the µW regime).
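
The scale of that demonstration is easy to mimic in software: a single-layer classifier for 5x5 binary images of "M," "P," and "I" needs 25 x 3 = 75 weights, each of which would be stored in one analog cell (how those weights map onto the 156 physical devices, e.g. as differential pairs, is an assumption here). The sketch below uses hypothetical bitmaps and ordinary gradient descent as a software stand-in for the analog experiment, not the authors' actual training setup.

```python
import numpy as np

# Hypothetical 5x5 bitmaps of "M", "P" and "I" (illustrative stand-ins;
# the paper's actual input encoding is not reproduced here).
LETTERS = {
    "M": ["#...#", "##.##", "#.#.#", "#...#", "#...#"],
    "P": ["####.", "#...#", "####.", "#....", "#...."],
    "I": ["#####", "..#..", "..#..", "..#..", "#####"],
}
X = np.array([[c == "#" for row in img for c in row]
              for img in LETTERS.values()], dtype=float)  # shape (3, 25)
Y = np.eye(3)                                             # one-hot targets

# Single-layer softmax classifier: 25 inputs x 3 outputs = 75 weights.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(25, 3))
for _ in range(500):                      # plain full-batch gradient descent
    P = np.exp(X @ W)
    P /= P.sum(axis=1, keepdims=True)     # softmax over the 3 classes
    W -= 0.5 * X.T @ (P - Y)              # cross-entropy gradient step

print(["MPI"[k] for k in (X @ W).argmax(axis=1)])  # expected: ['M', 'P', 'I']
```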

"We believe that the next generation human-machine interfaces will heavily depend on (ASR)," Demasius said. "This not only includes wake-up-word detection, but also more complex algorithms, like speech-to-text conversion. Currently ASR is mostly done in the cloud, but processing on the edge has advantages with regards to data protection amongst other."

If speech recognition techniques improve further, speech could eventually become the primary means through which users communicate with computers and other electronic devices. However, such an improvement will be difficult or impossible to implement without large neural network-based models with billions of parameters. New devices that can efficiently implement these models, such as the one developed by Demasius and his colleagues, could thus play a crucial role in realizing the full potential of artificial intelligence (AI).

"We founded a that facilitates this superior technology," Demasius said. "SEMRON´s vision is to enable these large on a very small formfactor and power these algorithms with battery power or even energy harvesting, for instance on ear buds or any other wearable."

SEMRON, the start-up founded by Demasius and his colleagues, has already applied for several patents related to deep-learning models for speech recognition. In the future, the team plans to develop more neural network-based models, while also trying to scale up the memcapacitor-based system they designed by increasing both its efficiency and its device density.

"We are constantly filing patents for any topic related to this," Demasius said. "Our ultimate goal is to enable every device to carry heavily AI functionality on device and we also envision a lot of approaches when it comes to training or deep learning model architectures. Spiking neural nets and transformer based neural networks are only some examples. One strength is that we can support all these approaches, but of course constant research is necessary to keep up with all new concepts in that domain." 

More information: Kai-Uwe Demasius, Aron Kirschen, and Stuart Parkin, "Energy-efficient memcapacitor devices for neuromorphic computing," Nature Electronics (2021). DOI: 10.1038/s41928-021-00649-y

MATHEMATICS FOR MACHINE LEARNING (AND DEEP LEARNING)
 
A highly recommended course by Jon Krohn
The full curriculum and free sample videos are available here: jonkrohn.com/udemy
All of the lessons from the course are (or will shortly be, in the case of the final Calculus videos) also available for free on YouTube, split across two playlists:
1.) Linear Algebra: https://lnkd.in/d-G82FdU
And if you have access to the O'Reilly Media learning platform, you can enjoy the studio-recorded, professionally-produced versions of all this content there: https://lnkd.in/dU-bB6QS
 
https://www.udemy.com/course/machine-learning-data-science-foundations-masterclass/?referralCode=AEB33B4212DC4FFD97F9
