Foundation models have evolved so quickly since OpenAI introduced ChatGPT in November 2022 that the change looks less like evolution and more like mutation. Training data sets for many models have shrunk considerably even as overall performance keeps improving, models are increasingly multimodal and built on mixture-of-experts architectures, and open source models are now legitimate enterprise-grade options. As open source models grow stronger and model sizes shrink, many enterprises could become de facto foundation model builders themselves, provided the cost-benefit analysis plays in their favor. Is HyperPod a legitimate option for getting there? Early signs suggest it might be.