Sam Altman, the CEO of OpenAI, gave an update on the company's roadmap during a recent visit to Europe.
Altman said that a shortage of compute is holding back OpenAI's near-term plans and is fueling customer concerns about the API's reliability.
The fine-tuning API is also constrained by the GPU shortage, he said. Low-rank adaptation (LoRA), a fine-tuning technique that has proved extremely helpful to the open-source community, is not yet used by OpenAI.
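To make the LoRA reference concrete, here is a minimal sketch of the core idea: instead of updating a full pretrained weight matrix, LoRA freezes it and learns a small low-rank correction, which sharply cuts the number of trainable parameters. The dimensions below are illustrative assumptions, not anything from OpenAI's (non-public) fine-tuning stack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer dimensions and LoRA rank (r << min(d, k)).
d, k, r = 64, 64, 4

W = rng.normal(size=(d, k))      # frozen pretrained weight, never updated

# Trainable low-rank factors. B starts at zero, so the adapted layer
# initially computes exactly the same output as the frozen layer.
A = rng.normal(size=(r, k)) * 0.01
B = np.zeros((d, r))

def adapted_forward(x):
    # Effective weight is W + B @ A; only A and B would be trained.
    return x @ (W + B @ A).T

x = rng.normal(size=(1, k))
assert np.allclose(adapted_forward(x), x @ W.T)  # holds while B == 0

# Trainable parameters drop from d*k for full fine-tuning
# to r*(d + k) for the LoRA factors.
print(d * k, r * (d + k))
```

With these toy dimensions, full fine-tuning would train 4,096 parameters per layer versus 512 for the LoRA factors, which is why the technique is attractive when GPU capacity is scarce.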
The 32k-context-window version of GPT-4 likewise cannot yet be rolled out widely due to the shortage of compute, and dedicated private models remain limited to customers with budgets exceeding $100,000. Nevertheless, Altman believes a context window of up to one million tokens is feasible this year.