OpenAI has released a research paper detailing OpenAI Codex. Codex is the engine that powers GitHub Copilot, the code-completion extension for VS Code. I encourage you to read the paper a few times on your own to understand how it works and how it was trained and fine-tuned. Compared to most research papers, it's a fairly accessible read, written mostly in plain language.
It has significant implications if you currently work in the tech industry, whether as a programmer or as a functional leader/manager. A major takeaway for me, which I found surprising, was that OpenAI Codex is only 12 billion parameters in size. That's tiny compared to GPT-3's 175 billion parameters. What will the GPT-3-scale equivalent of Codex be capable of doing? At what point does it become a significant threat to programmer livelihoods?
I discuss the research paper at a high level, among other things, on my latest podcast episode (including an update on the camping trip I posted about earlier this week). Make sure you check it out: