I uploaded a new podcast episode this week:
In this episode, I cover a few things: I share feedback I got from the GPT-3 community about the Slack group shutting down, as well as a new multimodal AI model called LatentVisions, which I'm really excited about.
But there's one topic in particular from this episode that I wanted to share here on the newsletter: do you think it's time to refresh GPT-3?
I don't believe (though I could be wrong) that GPT-3 has been retrained since the fall of 2019. It does not "learn" over time from interacting with users, and to my knowledge, due to its architecture, it cannot be incrementally updated either. I guess I'm just curious: are there plans to update GPT-3 and have it "catch up" with the times?
Last time I checked, it still thought Donald Trump was president:
… and it barely knows about COVID-19. The world has changed a lot since it was initially trained.
Don't get me wrong: GPT-3 is still tremendously powerful, and most use cases may not even need an updated model of the world. But is there any interest from others in the developer community in seeing GPT-3 retrained and updated? What are the plans for creating a new engine with updated training data? I'm not sure if OpenAI has commented on this before or what their plans are.
I understand this is a costly endeavour and may even introduce prompt output regressions for GPT-3-powered commercial apps, but I think it's worth thinking about and discussing openly. Specifically, I would love to hear if others have had prompt outputs suffer as a result of GPT-3 being so out of date. Has your use case been jeopardized by stale training data? Please share in the comments below or on my YouTube. I would love to hear everyone's feedback on this topic. Thank you!
Also, a friendly reminder: the podcast is now available on most podcasting platforms, so please consider subscribing:
Other Podcast Apps (RSS Link) - https://feed.podbean.com/bakztfuture/feed.xml