What is the best generative AI approach?


Generative adversarial networks (GANs) and variational autoencoders (VAEs) are two of the most popular approaches for generative AI. In general, GANs tend to be more widely used for multimedia, while VAEs are used more in signal analysis.

How does this translate into concrete pragmatic value? Generative AI techniques help create AI models, synthetic data and realistic multimedia, such as voices and images. While these techniques are sometimes used to create deep fakes, they can also create realistic voiceovers for movies and generate images from brief text descriptions. They also generate drug discovery targets, recommend product design choices, and improve safety algorithms.

How do GANs work?

GANs were first introduced by Ian Goodfellow and fellow researchers at the University of Montreal in 2014. They have shown tremendous promise in generating many types of realistic data. Yann LeCun, chief AI scientist at Meta, wrote that GANs and their variations are “the most interesting idea of the last decade in machine learning.”

Early on, GANs were used to generate realistic speech, mimicking people's voices and lip movements for better translations. They have also translated imagery between domains, such as turning day scenes into night, and transferred dance moves from one body to another. They are also combined with other AI techniques to improve security and build better AI classifiers.

The actual mechanics of GANs involve the interaction of two neural networks that work together to generate, and then classify, data representative of reality. GANs generate content using a generator neural network whose output is tested against a second neural network: the discriminator, which determines whether the content looks “real.” This feedback helps the generator produce more convincing output. The discriminator can also detect fake content, or content that falls outside the target domain. Over time, both neural networks improve, and the feedback loop teaches the generator to produce data as close to reality as possible.
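The generator-versus-discriminator loop can be sketched in a toy 1-D example. This is a hypothetical illustration, not code from the article: a linear "generator" learns to mimic samples drawn from a normal distribution around 3.0, while a logistic-regression "discriminator" learns to tell real samples from generated ones. Real GANs use deep networks and batched training, but the alternating updates below follow the same adversarial pattern.

```python
import random, math

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Generator: x_fake = g_w * z + g_b, with noise z ~ N(0, 1)
g_w, g_b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(d_w * x + d_b), probability x is "real"
d_w, d_b = 0.0, 0.0
lr = 0.05

for step in range(2000):
    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    x_real = random.gauss(3.0, 0.5)
    z = random.gauss(0.0, 1.0)
    x_fake = g_w * z + g_b
    p_real = sigmoid(d_w * x_real + d_b)
    p_fake = sigmoid(d_w * x_fake + d_b)
    # Gradient ascent on log D(real) + log(1 - D(fake))
    d_w += lr * ((1 - p_real) * x_real - p_fake * x_fake)
    d_b += lr * ((1 - p_real) - p_fake)

    # --- Generator update: push D(fake) toward 1 (fool the discriminator) ---
    z = random.gauss(0.0, 1.0)
    x_fake = g_w * z + g_b
    p_fake = sigmoid(d_w * x_fake + d_b)
    # Gradient ascent on log D(fake), chained through x_fake to g_w and g_b
    g_w += lr * (1 - p_fake) * d_w * z
    g_b += lr * (1 - p_fake) * d_w

# After training, generated samples should center near the real mean (3.0)
gen_mean = sum(g_w * random.gauss(0, 1) + g_b for _ in range(1000)) / 1000
```

The key design point is that neither network sees the other's parameters; each only reacts to the other's outputs, which is what lets the pair improve together.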

How do VAEs work, and how do they compare to GANs?

VAEs were also first introduced in 2014, this time by Google researcher Diederik Kingma and Max Welling, research chair in machine learning at the University of Amsterdam. VAEs likewise promise more efficient classification engines for various tasks, but with different mechanics. At their core, they rely on autoencoders consisting of two neural networks: an encoder and a decoder. The encoder optimizes for more efficient ways to represent the data, while the decoder optimizes for more efficient ways to regenerate the original dataset.

Traditionally, autoencoding techniques have been used to clean data, improve predictive analysis, compress data, and reduce the dimensionality of datasets for other algorithms. VAEs go further, minimizing the error between the raw signal and its reconstruction.

Tiago Cardoso, product manager at enterprise content management software provider Hyland, said, “VAEs are extraordinarily powerful in delivering near-original content with just a reduced vector. They also allow us to generate non-existent content that can be used without a license.”

The biggest difference between GANs and VAEs is how they are applied. Pratik Agrawal, partner in the digital transformation practice at management consulting firm Kearney, said GANs are typically used to process imagery or other visual data. He finds that VAEs work best for signal processing use cases, such as anomaly detection for predictive maintenance or safety analytics applications.
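The anomaly-detection use case Agrawal describes can be sketched with a hypothetical example: score each sensor reading by its reconstruction error and flag readings the model cannot reconstruct well. The `reconstruct` function below is a stand-in for a trained VAE's encode/decode pass, which would reproduce normal patterns well and anomalous ones poorly; the normal level and threshold are assumed values for illustration.

```python
NORMAL_LEVEL = 50.0  # assumed "normal" sensor level learned from training data

def reconstruct(reading):
    # Stand-in for a trained VAE pass: compress the reading to a latent
    # vector, then regenerate it. A real model would reconstruct readings
    # near the patterns it saw during training.
    return NORMAL_LEVEL

def anomaly_score(reading):
    # Reconstruction error: large when the model cannot explain the reading.
    return (reading - reconstruct(reading)) ** 2

def is_anomaly(reading, threshold=25.0):
    return anomaly_score(reading) > threshold

readings = [49.0, 51.5, 50.2, 88.0]
flags = [is_anomaly(r) for r in readings]
# flags -> [False, False, False, True]: only the 88.0 spike is flagged
```

The threshold is typically set from the distribution of reconstruction errors on held-out normal data, so "anomalous" means "worse than the model usually reconstructs."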

Because VAEs and GANs are neural networks, their applications in real-world commercial settings may be limited, Agrawal said. Data scientists and developers working with these techniques must relate results back to inputs and perform sensitivity analysis. It is also essential to consider factors such as the durability of these solutions, who manages them, how often they are maintained, and the technological resources required to keep them updated.

