From Wikipedia, the free encyclopedia

Riffusion
Developer(s): Seth Forsgren, Hayk Martiros
Initial release: December 15, 2022
Repository: github.com/hmartiro/riffusion-inference
Written in: Python
Type: Text-to-image model
License: MIT License
Website: riffusion.com
Generated spectrogram from the prompt "bossa nova with electric guitar" (top), and the resulting audio after conversion (bottom)

Riffusion is a neural network, designed by Seth Forsgren and Hayk Martiros, that generates music using images of sound rather than audio.[1] It was created by fine-tuning Stable Diffusion, an existing open-source model for generating images from text prompts, on spectrograms.[1] The result is a model that uses text prompts to generate image files, which can then be converted into audio through an inverse Fourier transform.[2] While these files are only several seconds long, the model can also interpolate between outputs in latent space to blend different files together.[1][3] This is accomplished using a functionality of the Stable Diffusion model known as img2img.[4]
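The image-to-audio step can be illustrated with a short sketch. The Python fragment below is not Riffusion's actual pipeline: it assumes the generated image stores log-scaled STFT magnitudes in its pixel intensities, and it uses Griffin-Lim phase reconstruction as a stand-in for the inverse transform (a spectrogram image discards phase, so the phase must be estimated before an inverse STFT can produce a waveform). All parameter values, the intensity mapping, and the file names are illustrative assumptions.

    # Sketch: turning a generated spectrogram image back into audio.
    # Assumptions (not from the article): pixels encode log-magnitude
    # STFT values, low frequencies are at the bottom of the image, and
    # the image height equals N_FFT // 2 + 1 frequency bins.
    import numpy as np
    import torch
    import torchaudio
    from PIL import Image

    N_FFT = 1024          # STFT window size (assumed)
    HOP_LENGTH = 256      # STFT hop (assumed)
    SAMPLE_RATE = 44100   # output sample rate (assumed)

    def image_to_audio(path: str) -> torch.Tensor:
        # Load the grayscale spectrogram; rows = frequency bins, cols = time frames.
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
        # Flip so row 0 is the lowest frequency (orientation assumed).
        img = img[::-1].copy()
        # Undo the assumed log/intensity mapping to recover linear magnitudes.
        magnitudes = torch.from_numpy(np.exp(img / 255.0 * 8.0) - 1.0)
        # Griffin-Lim iteratively estimates the phase the image discarded,
        # then applies the inverse STFT to produce a waveform.
        griffin_lim = torchaudio.transforms.GriffinLim(
            n_fft=N_FFT, hop_length=HOP_LENGTH, n_iter=32, power=1.0
        )
        return griffin_lim(magnitudes)

    waveform = image_to_audio("bossa_nova_spectrogram.png")
    torchaudio.save("out.wav", waveform.unsqueeze(0), SAMPLE_RATE)

In practice, the intensity-to-magnitude mapping, window size, hop length, and sample rate must match whatever encoding produced the spectrogram image; mismatched settings yield distorted audio.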

The resulting music has been described as "de otro mundo" (otherworldly),[5] although unlikely to replace human-made music.[5] The model was made available on December 15, 2022, and its code is freely available on GitHub.[2] It is one of many models derived from Stable Diffusion.[4]

Riffusion is one of a number of AI text-to-music generators. In December 2022, Mubert[6] similarly used Stable Diffusion to turn descriptive text into music loops. In January 2023, Google published a paper on its own text-to-music generator, MusicLM.[7][8]

References

  1. ^ a b c Coldewey, Devin (December 15, 2022). "Try 'Riffusion,' an AI model that composes music by visualizing it".
  2. ^ a b Nasi, Michele (December 15, 2022). "Riffusion: creare tracce audio con l'intelligenza artificiale" [Riffusion: creating audio tracks with artificial intelligence]. IlSoftware.it.
  3. ^ "Essayez "Riffusion", un modèle d'IA qui compose de la musique en la visualisant" [Try "Riffusion", an AI model that composes music by visualizing it]. December 15, 2022.
  4. ^ a b "文章に沿った楽曲を自動生成してくれるAI「Riffusion」登場、画像生成AI「Stable Diffusion」ベースで誰でも自由に利用可能" [Riffusion, an AI that automatically generates music to match text, arrives; based on the image-generation AI Stable Diffusion and freely usable by anyone]. GIGAZINE.
  5. ^ a b Llano, Eutropio (December 15, 2022). "El generador de imágenes AI también puede producir música (con resultados de otro mundo)" [The AI image generator can also produce music (with otherworldly results)].
  6. ^ "Mubert launches Text-to-Music interface – a completely new way to generate music from a single text prompt". December 21, 2022.
  7. ^ "MusicLM: Generating Music From Text". January 26, 2023.
  8. ^ "5 Reasons Google's MusicLM AI Text-to-Music App is Different". January 27, 2023.