This is an updated version of a March blog post with some more details on what I presented for the conclusion of the OpenAI Scholars program. Mostly, I’ve added a brief results section. If you’re interested in collaborating further on this please reach out!
A few weeks ago, OpenAI released Jukebox, “a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles.” The results are impressive, showing an ability to mimic specific artists’ styles and voices. My personal favorite is this Simon & Garfunkel sample: if Jukebox is around 2014-15 on the GAN face timeline, that sample lands somewhere near Garfunkel-quality lyrics and Simon-quality vocals.
A few weeks ago, the European Commission released a White Paper on Artificial Intelligence detailing what actions the EU may take in the next few years to regulate applications of AI and incentivize new AI innovation.
Welcome to Episode 2 of Pamela’s random coronavirus thoughts! I will say, these are even more scattered than Episode 1, and probably belong in a separate space from my OpenAI posts, but this will do for now.
Growing up, I knew that telling my sister that Meg Cabot wasn’t the greatest author to ever live would cause her to yell at me. I knew this because I had pretty strong knowledge of my sister and her particular quirks. Saying this to other older sisters would probably yield mixed results. Criticizing Meg Cabot was, in this case, a non-universal trigger: it works on some older sisters, but not all.
I wrote this up Friday afternoon as part of an attempt at a business school essay. I’m too anxious to actually apply, but I do think it will be useful to use this blog as a collection of all the thoughts I have over the next few months, even if some of those thoughts feel only tangentially related to language models. These posts in particular will be far from polished or fully thought out (and, TBH, probably not always properly cited), but will hopefully be something I can look back on as evidence of what I worked on and thought about during this time. So, taking all the MBA things out, here is what I’m thinking about this week:
In February 2019, OpenAI released the language model GPT-2. Well, they released results from it. The full 1.5-billion-parameter model was actually released nine months later, in November. In this post I’ll define different release strategies and talk through some of the arguments for and against each.