The Perxeive app uses models fine-tuned via the OpenAI GPT-3 API to generate an essay-style document from the contents of a suitable webpage, such as a Wikipedia page. I thought it might be helpful to share my experience of using the API and the key lessons I learned.
First, an introduction to OpenAI. OpenAI is an AI research and deployment company whose mission is to ensure that artificial general intelligence benefits all of humanity. GPT-3 is a large language model created by OpenAI, trained to predict the next word in a sentence. The model has 175 billion parameters and was trained on 570 gigabytes of text.
These are the five key lessons I learned implementing GPT-3-based models in production:
The OpenAI GPT-3 API documentation is best in class, so getting started is quick and easy. That is, once you have access to the API. Historically, access was granted several days after you applied. As of 18 November the waitlist has been removed, so you should get access immediately. Even so, if you plan to try out the API, apply for access before you do anything else. Once you have your API key you are good to go with the free trial. The documentation is easy to navigate and gives lots of examples. I would recommend jumping into the "Playground" provided and trying out the examples. Once you have a basic grounding in what the examples enable you to do, try changing the parameters to get a feel for how they work. The Playground gives you a safe space to get a handle on the basics. It is also useful if you subsequently apply to have a model approved for use in production, as you can save models to the Playground to showcase example prompts and parameters.
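Once you have a key, a first request is only a few lines. The sketch below builds a GPT-3-era completions request by hand using only the standard library; the Curie endpoint, prompt, and parameter values are illustrative assumptions rather than anything from this article, and the network call only fires if an `OPENAI_API_KEY` environment variable is set.

```python
import json
import os
import urllib.request

# A minimal completions request, assembled by hand so the sketch has no
# third-party dependencies. Prompt and parameters are invented examples.
payload = {
    "prompt": "Explain photosynthesis in one sentence.",
    "max_tokens": 64,
    "temperature": 0.7,
}
url = "https://api.openai.com/v1/engines/curie/completions"

# Only hit the network when a key is actually configured, so a dry run is safe.
api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
        print(body["choices"][0]["text"].strip())
```

The same payload shape works from Postman or any HTTP client, which is handy when you start comparing models.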
To gain a thorough understanding of GPT-3's capabilities it is important to try out each of the available models. However, once you want to dig into the detail and learn more deeply how the API can be used, I would recommend choosing your model carefully. Given that the DaVinci model is the most sophisticated, it is tempting to focus on it. But the free trial gives you $18 of credit, and if your completions use a large number of tokens you can burn through that allowance quickly. In my experience, the Curie model gave the optimal combination of quality, speed and cost for my use case.
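To see how fast the trial credit goes, a back-of-envelope calculation helps. The per-1K-token prices below are illustrative placeholders, not official figures; substitute the current values from OpenAI's pricing page before relying on the numbers.

```python
# Rough estimate of how many requests the $18 free trial covers per model.
# Prices are placeholder assumptions for illustration only.
PRICE_PER_1K_TOKENS = {"davinci": 0.0600, "curie": 0.0060}

def requests_per_trial(engine: str, tokens_per_request: int, credit: float = 18.0) -> int:
    """Number of requests of a given token size that fit in the trial credit."""
    cost_per_request = PRICE_PER_1K_TOKENS[engine] * tokens_per_request / 1000
    return int(credit // cost_per_request)

for engine in PRICE_PER_1K_TOKENS:
    print(engine, requests_per_trial(engine, tokens_per_request=1500))
```

With a tenfold price gap between the two models, the cheaper model stretches the trial an order of magnitude further, which is why model choice matters during experimentation.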
The results you receive from the GPT-3 models are only as good as the context you provide them. The context is given via a parameter labelled "prompt", and the vast majority of the time you spend getting the model to achieve your goal goes into designing an appropriate prompt. The documentation really is best in class, and you will make the best use of your time by re-reading the sections on prompts as you iterate through alternative prompt designs. In particular, the best results are achieved by providing examples in your prompts. Success comes from balancing the number of examples against any other context you wish to provide, whilst staying within the model's token limit. There are lots of ways to access the API to test your prompt designs. I found Postman the most efficient for sending requests, as I could also use it to call my own server's API endpoints and test the models in a production-like environment.
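The example-driven prompt pattern can be sketched as plain string assembly: an instruction, a handful of worked examples, then the new input left open for the model to complete. The sentiment-classification task and the examples here are invented for illustration.

```python
# Few-shot prompt construction: instruction, worked examples, open-ended input.
# Task and examples are invented placeholders.
examples = [
    ("The film was a joyless slog.", "Negative"),
    ("An absolute triumph from start to finish.", "Positive"),
]

def build_prompt(new_text: str) -> str:
    """Assemble a few-shot prompt ending where the model should continue."""
    header = "Classify the sentiment of each review.\n\n"
    shots = "".join(f"Review: {t}\nSentiment: {s}\n\n" for t, s in examples)
    return header + shots + f"Review: {new_text}\nSentiment:"

prompt = build_prompt("I want those two hours of my life back.")
print(prompt)
```

Each extra example improves consistency but spends tokens, which is exactly the balancing act described above.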
If you intend to use the models that you create in production, take time early in your development process to implement the safety best practices recommended in the documentation. For example, implementing the OpenAI Content Filter is an easy way of setting yourself up for success. In addition to the documentation, it is worth reading through the application for approval that is required before you go live in production. Along with the documentation, the application questions give you good guidance on the type of safety features appropriate for your use case. Thinking about this early maximises the probability of your models being approved by the OpenAI team.
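As a sketch, the content filter is called like any other completion, just with tightly constrained parameters. The request shape below is reconstructed from my recollection of the GPT-3-era safety best-practices documentation (the filter returns a label: 0 = safe, 1 = sensitive, 2 = unsafe), so verify it against the current docs before relying on it.

```python
import json

# Build a content-filter request. The prompt template and constrained
# sampling parameters follow the documented recipe as I recall it --
# treat them as assumptions to verify, not a definitive reference.
def content_filter_payload(text: str) -> dict:
    return {
        "prompt": f"<|endoftext|>{text}\n--\nLabel:",
        "temperature": 0,   # deterministic output
        "max_tokens": 1,    # the label is a single token: 0, 1 or 2
        "top_p": 0,
        "logprobs": 10,     # used to apply a confidence threshold to "2" labels
    }

payload = content_filter_payload("A perfectly innocuous essay about gardening.")
print(json.dumps(payload, indent=2))
```

Running every generated document through a check like this before showing it to users is a cheap way to satisfy the approval questions about unsafe output.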
Once you have a model working that achieves your goal, OpenAI enables you to train a fine-tuned model that achieves superior results. If you are going to use your models in production, I highly recommend taking the time to train fine-tuned models based on the prompts you previously designed using few-shot learning. To achieve superior results, fine-tuned models require hundreds of training examples, so it can be a very time-consuming process, depending on how manual your example-generation process is. I spent around a month creating training examples for the four models I use in production. The time was well spent: the results were excellent, with vastly superior output. Training fine-tuned models via the API is trivial once you have the training data in the required format and have installed the OpenAI command-line interface.
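The required training format is JSONL, one prompt/completion pair per line. The sketch below writes a tiny, invented training file and shows the CLI call in a comment; the `\n\n###\n\n` separator and trailing stop token follow patterns suggested in the fine-tuning docs, but treat the exact conventions as assumptions to check against the current documentation.

```python
import json

# Fine-tune training data: JSONL rows of {"prompt": ..., "completion": ...}.
# These two rows are invented placeholders standing in for real examples.
examples = [
    {"prompt": "Summarise: The mitochondrion is ...\n\n###\n\n",
     "completion": " The mitochondrion powers the cell. END"},
    {"prompt": "Summarise: Photosynthesis converts ...\n\n###\n\n",
     "completion": " Photosynthesis turns light into sugar. END"},
]

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for row in examples:
        f.write(json.dumps(row) + "\n")

# With the OpenAI CLI installed, kicking off training is then a one-liner, e.g.:
#   openai api fine_tunes.create -t training_data.jsonl -m curie
```

Generating hundreds of rows in this format was where my month went; the training run itself was the easy part.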
I highly recommend giving the API a try. The technical barriers to entry are very low so anyone can try it out. It opens up endless possibilities and the potential use cases are only limited by our imaginations.