Blog posts

2020

Adversarial Attacks and NLP

6 minute read

This is an updated version of a March blog post with some more details on what I presented for the conclusion of the OpenAI Scholars program. Mostly, I’ve added a brief results section. If you’re interested in collaborating further on this please reach out!

Open Questions on the Generative Models and Fair Use Debate

10 minute read

A few weeks ago, OpenAI released Jukebox, “a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles.” The results are impressive, showing an ability to mimic specific artists’ style and voices. My personal favorite is this Simon & Garfunkel sample: when people say Jukebox is around 2014-15 in the GAN face timeline, that sample falls somewhere near Garfunkel-quality lyrics and Simon-quality vocals.

Integrated Gradients

11 minute read

Update (7/02): I’ve mostly finished this post now, so we’ll leave it here.

COVID-19 and Manufacturing

2 minute read

Welcome to Episode 2 of Pamela’s random coronavirus thoughts! I will say, these are even more scattered than Episode 1, and probably belong in a separate space from my OpenAI posts, but this will do for now.

Universal Adversarial Triggers

5 minute read

Growing up, I knew that telling my sister that Meg Cabot wasn’t the greatest author to ever live would cause her to yell at me. I knew this because I had pretty strong knowledge of my sister and her particular quirks. Saying this to other older sisters would probably yield mixed results. Criticizing Meg Cabot was, in this case, a non-universal trigger: it worked on some older sisters, but not all.

COVID-19 and Public Spaces

3 minute read

I wrote this up Friday afternoon as part of an attempt at a business school essay. I’m too anxious to actually apply, but I do think it will be useful to use this blog as a collection of all the thoughts I have over the next few months, even if some of those thoughts feel tangentially related to language models. These posts in particular will be far from polished or thought out (and, TBH, probably not always properly cited), but will hopefully be something I can look back on as evidence of what I worked on and thought about during this time. So, taking all the MBA things out, here is what I’m thinking about this week:

Release Strategies and Language Models

5 minute read

In February 2019, OpenAI released the language model GPT-2. Well, they released results from it. The full 1.5-billion-parameter model was actually released nine months later, in November. In this post I’ll define different release strategies and talk through some of the arguments for and against each.

Getting Stuck @OpenAI

less than 1 minute read

Hi everyone! This blog post will be short since I’m not done with the one I wanted to write. That said, I’ll flesh out this one a bit more over the next week with code and math.

Getting Started @OpenAI

6 minute read

Hey everyone, I’m Pamela Mishkin (an anagram of Manlike Mishap)!