I wanted to share an update on my side project, Mead.AI. Currently, there's nothing to see if you visit the site, but behind the scenes the proof of concept is coming along, which is really exciting!
Yesterday, we completed the “AI proof of concept,” and now it’s just a matter of building a simple frontend user interface and basic backend functionality, like authentication and accepting payments, before the site can be released into private beta.
Do you have a cool logo ready?
You bet! I hired someone super talented to bring the logo to life.
The logo icon is meant to look like an “M” but also to represent the intersections of different modalities. Imagine text on the right side and maybe video, audio, images, or sensory data on the left.
Ok cool … so, what actually is Mead.AI?
It’s a hobbyist server/website where I and other people can conveniently access multimodal AI models. Right now, you can enter text and it will generate an image for you. You can also provide prompt weightings, a starter image, and different reference images to steer it toward a more targeted output.
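To give a feel for what weighted prompts look like, here’s a tiny sketch of a parser for them. The `text:weight | text:weight` syntax is an assumption modeled on popular VQGAN+CLIP notebooks, not Mead’s actual input format, and `parse_prompts` is a hypothetical helper:

```python
def parse_prompts(spec: str) -> list[tuple[str, float]]:
    """Split 'text:weight | text:weight' into (text, weight) pairs.

    A prompt with no numeric weight defaults to 1.0. This syntax is
    hypothetical, based on common VQGAN+CLIP notebook conventions.
    """
    pairs = []
    for part in spec.split("|"):
        text, sep, weight = part.strip().rpartition(":")
        try:
            # If there was a colon, try to read the trailing weight.
            pairs.append((text, float(weight)) if sep else (weight, 1.0))
        except ValueError:
            # No numeric weight after the colon; keep the whole prompt.
            pairs.append((part.strip(), 1.0))
    return pairs

print(parse_prompts("purple trees:1.5 | matte painting:0.5 | sunset"))
# → [('purple trees', 1.5), ('matte painting', 0.5), ('sunset', 1.0)]
```

Each pair would then scale how strongly that piece of text pulls on the generated image.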
The results are not as good as DALL-E, but they are still really impressive and definitely fun to play with until DALL-E comes out.
If you’re interested in more background on VQGAN and the art you can make with it, I encourage you to check out this article Vice published recently covering the explosion in multimodal art.
What is the benefit of using Mead.ai?
Right now, accessing these kinds of models through Google Colab or by running them locally on your own computer isn’t very user-friendly. I also think there are features you could build to make it easier to create great art, which I’d love to add as well. And of course, it’s always very handy to have access to a good GPU running 24 hours a day, 7 days a week.
How will Mead.ai make money?
You’re going to laugh, but right now the working plan is to lose money on Mead.ai. It really is just a hobby server and I’ll likely be spending hundreds of dollars a month paying for the servers with a GPU running all the time.
I am asking every user to pay whatever they can to help cover the server costs, which will only grow over time. The first few users are people I know personally who want regular access to image-generation multimodal models and want to join a private network of creators.
Is there any long term vision behind Mead.ai?
Yes, but it’s way, way too early to say. If you’re interested in my long-term vision for Mead, I suppose the best thing to do is just watch my upcoming series called GPT-X, DALL-E, and our Multimodal Future. It’s a crystallized version of how I imagine the future of software tools liberating creatives.
The thing is, the space is unfolding rapidly, and to be honest, other companies like OpenAI, Google, Adobe, and AWS are better positioned to enter and create the kinds of possibilities I’ll describe in the series.
This isn’t to diminish Mead’s potential; I’m still really excited about it. I just err more on the side of having fun with it and making friends along the way, no matter how things turn out. In a way, the series is my way of influencing this space meaningfully, even if I don’t end up being the one to capitalize on it personally.
What are the models based on?
They are based on VQGAN (which generates the image from a latent code) and CLIP (which scores how well an image matches a text prompt).
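At a high level, VQGAN+CLIP works by repeatedly nudging VQGAN’s latent code so that CLIP rates the decoded image as more similar to the prompt. The real pipeline needs both pretrained models and PyTorch autograd, but the shape of the loop can be sketched in plain NumPy with toy stand-ins (the random linear “decoder” and the target “text embedding” below are assumptions, not the actual models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the pretrained models: a fixed linear "decoder"
# (VQGAN's role: latent code -> image features) and a target "text
# embedding" (CLIP's role: where the prompt lives in embedding space).
decoder = rng.normal(size=(16, 8))        # maps 8-dim latent -> 16-dim "image"
text_embedding = rng.normal(size=16)
text_embedding /= np.linalg.norm(text_embedding)

def clip_loss(latent):
    """Negative cosine similarity between the decoded image and the prompt."""
    image = decoder @ latent
    return -image @ text_embedding / (np.linalg.norm(image) + 1e-8)

latent = rng.normal(size=8)               # the image code being optimized
lr, eps = 0.1, 1e-4
for step in range(200):
    # Finite-difference gradient; the real thing uses autograd instead.
    grad = np.array([
        (clip_loss(latent + eps * np.eye(8)[i]) - clip_loss(latent)) / eps
        for i in range(8)
    ])
    latent -= lr * grad                   # nudge the latent toward the prompt

# After optimizing, the decoded "image" aligns much better with the prompt.
print(round(-clip_loss(latent), 3))
```

The starter and reference images mentioned above slot into this same loop: a starter image initializes the latent instead of random noise, and reference images add extra similarity terms to the loss.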
What do the results look like so far?
Well, I entered, “GPT-3, DALL-E, and our Multimodal AI future with the art style of Futurist Syd Mead” and here’s the result it gave back to me:
Here’s another one I entered, “A Matte Painting Purple Trees”:
So much fun! I love this stuff.
Can I get access to Mead?
Right now, I really want to work with people who are already using multimodal AI notebooks regularly and living and breathing this stuff: running them several times a day, posting the results to social media, maybe even monetizing them through NFTs. If this is you and you want immediate access, DM me on Twitter, I’ll send over the signup/payment form, and I’ll connect you as soon as it’s ready. Don’t worry: if you’re not happy, I’ll refund your money. The goal is just to help subsidize the GPU costs I’ll be paying out of pocket.
I’ll eventually open it up to the entire Substack email list once the project is further along and most of the bugs have been sorted out. It’s super beta right now, so just a heads up: if you want to use it, expect a lot of things to break lol
Neat stuff! I'd try it out. When the tech gets to the right point, I have commercial use cases.