ChatGPT took the world by storm in 2022, showing how Foundation Models enable new and innovative AI solutions for both consumer and business use cases. Now, with the addition of Foundation Models to public cloud providers' service catalogs, virtually any organization can leverage Generative AI in their solutions.

What are Foundation Models?

The key attribute that distinguishes Generative AI from traditional AI techniques is that the former incorporates Foundation Models (FMs for short).

AI models aren't new, and we've been successfully using AI models to enable smarter technology solutions for years. Historically, AI models have been trained for specific purposes, such as translating text between German and English, classifying the content of images, or answering questions from a specific knowledge base.

How Foundation Models are Different

Foundation models look like traditional AI models, but are different under the covers

What makes FMs unique is that they're typically trained on very large data sets (billions of source documents), have billions of parameters, and can each be applied to a wide variety of use cases.

A Foundation Model is trained on a large input data set to emit the most likely response to a given input. The FM doesn't truly understand the meaning of the input, but the large set of training data allows it to generate reasonable responses to questions across a broad range of knowledge domains.

Yet while FMs have the potential to successfully bring AI to many use cases, they are very expensive to train and operate.  Most enterprises don't have the technical, compute, or financial resources to train and operate the current crop of Foundation Model technologies in-house.

Foundation Models in the Cloud

Fortunately, the broad applicability of FMs to many use cases makes public cloud providers well suited to host Foundation Models as part of their overall set of AI services. A large, common foundation model is an ideal asset to share in a multi-tenant service.

Cloud-Based Foundation Model Offerings

Foundation models are available in the cloud.

The largest public cloud companies have released Generative AI services based on Foundation Models.  All of these services are Platform-as-a-Service offerings and can be integrated into customer-created applications such as chatbots, websites or custom software.

The big three (in alphabetical order below) each provide similar Generative AI services based on Foundation Models. At the time of this writing some capabilities are still in preview, but even preview features are on a fast track towards general availability across global regions.

Amazon Bedrock

  • Bedrock is a Generative AI-specific PaaS service hosted on AWS, purpose-built to provide Generative AI capabilities using foundation models.
  • The service provides access to a variety of models from providers such as AI21 Labs, Anthropic, and Meta, as well as Amazon's own Titan LLM.
  • Models support text-to-text, text-to-image, document summarization, embeddings and more.
  • Tuning with customer-provided data is supported.
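
As a concrete sketch of what integration looks like, the snippet below invokes a text model through Bedrock's runtime API using the `boto3` SDK. The model ID (`anthropic.claude-v2`) and the Claude request format shown are assumptions based on Bedrock's model catalog at the time of writing; check the current documentation for the models and request schemas available in your region.

```python
import json


def build_claude_request(prompt: str, max_tokens: int = 300) -> str:
    """Build the JSON body Anthropic Claude models on Bedrock expect:
    the prompt wrapped in Human/Assistant turns, plus a token limit."""
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })


def invoke_claude(prompt: str) -> str:
    # Requires AWS credentials and Bedrock model access in your account;
    # boto3 is imported here so the payload helper stays dependency-free.
    import boto3
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="anthropic.claude-v2",  # assumed model ID; verify in your region
        body=build_claude_request(prompt),
    )
    return json.loads(response["body"].read())["completion"]
```

Calling `invoke_claude("Summarize our leave policy.")` would return the model's text completion as a string.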

Google Vertex AI

  • Delivered as a part of Google's existing Vertex AI PaaS platform.
  • Built with Google's own PaLM foundation model.
  • Supports text-to-text, text-to-image, document summarization, embeddings and more.
  • Tuning with customer-provided data is supported.
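
A comparable sketch for Vertex AI uses the `google-cloud-aiplatform` SDK's language-model classes. The model name (`text-bison`, a PaLM 2 text model) and the parameter ranges shown are assumptions from the SDK as it stood at the time of writing; consult the Vertex AI documentation for current model names.

```python
def generation_params(temperature: float = 0.2,
                      max_output_tokens: int = 256) -> dict:
    """Sampling parameters accepted by PaLM text models on Vertex AI."""
    assert 0.0 <= temperature <= 1.0, "PaLM temperature is expected in [0, 1]"
    return {"temperature": temperature,
            "max_output_tokens": max_output_tokens}


def ask_palm(prompt: str, project: str, location: str = "us-central1") -> str:
    # Requires the google-cloud-aiplatform package and GCP credentials;
    # imports live here so the helper above stays dependency-free.
    import vertexai
    from vertexai.language_models import TextGenerationModel
    vertexai.init(project=project, location=location)
    model = TextGenerationModel.from_pretrained("text-bison")  # assumed name
    return model.predict(prompt, **generation_params()).text
```

`ask_palm("Summarize our leave policy.", project="my-gcp-project")` would return the generated text, where `my-gcp-project` is a placeholder for your own project ID.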

Microsoft Azure OpenAI Service

  • While falling under the umbrella of Azure AI Services, OpenAI Service is built as a Generative AI-specific service that hosts OpenAI-created foundation models.
  • Azure OpenAI Service is one result of Microsoft's strategic partnership with OpenAI, the company behind the well-known ChatGPT and DALL-E products.
  • Available foundation models include numerous OpenAI GPT-3, GPT-4 and Codex models.
  • Models provide text-to-text, text-to-image (DALL-E), embeddings and more.
  • Tuning with customer-provided data is supported.
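
For Azure OpenAI Service, the `openai` Python package can target an Azure resource instead of OpenAI directly. The environment variable names and the API version string below are illustrative assumptions; the deployment name is whatever you chose when deploying a model in your Azure resource.

```python
def build_chat_messages(system: str, user: str) -> list:
    """Chat-format messages used by GPT-3.5/GPT-4 chat models."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]


def ask_azure_openai(deployment: str, question: str) -> str:
    # Requires the openai package plus an Azure OpenAI resource; the
    # endpoint/key environment variable names here are assumptions.
    import os
    import openai
    openai.api_type = "azure"
    openai.api_base = os.environ["AZURE_OPENAI_ENDPOINT"]
    openai.api_key = os.environ["AZURE_OPENAI_KEY"]
    openai.api_version = "2023-05-15"  # assumed; use a current version
    response = openai.ChatCompletion.create(
        engine=deployment,  # your Azure deployment name, not a model name
        messages=build_chat_messages("You are a helpful assistant.", question),
    )
    return response["choices"][0]["message"]["content"]
```

Note the Azure-specific detail: requests are routed to a named *deployment* you create, rather than directly to a model identifier as on the other platforms.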

Which Cloud Service is the Best?

Deciding between cloud services

With each of the major cloud providers deploying similar-sounding services, it can be difficult to decide which platform to choose.  As with any technology decision, the right answer depends on the situation.

What models would apply the best?

If you review the above attributes of each cloud service, most of the features sound similar (custom data, text-to-text support, PaaS deployment model, etc.).

What's unique between them are the foundation models provided.  Some models are available on multiple platforms, such as Meta Llama 2, but many are available only on one cloud, such as OpenAI models and AWS Titan. Some models are proprietary or exclusive and may never be deployed to clouds operated by competitors.

While foundation models are designed to be general purpose, many have specialties, e.g., conversation, Q&A or programming code generation, so a certain model may be better than others for a specific use case. By conducting initial prototyping and evaluation, we may find a model from one cloud provider is particularly compelling for our use case.
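
A lightweight way to run such an evaluation is to send the same prompts to each candidate model and review the answers side by side. The sketch below uses stand-in functions; in practice each entry would wrap a Bedrock, Vertex AI, or Azure OpenAI call like the ones each provider's SDK exposes.

```python
from typing import Callable, Dict, List


def compare_models(prompts: List[str],
                   models: Dict[str, Callable[[str], str]]
                   ) -> Dict[str, Dict[str, str]]:
    """Run the same evaluation prompts against each candidate model and
    collect the answers keyed by prompt, then by model name."""
    return {p: {name: ask(p) for name, ask in models.items()}
            for p in prompts}


# Stand-in model functions for illustration only.
results = compare_models(
    ["What is our refund policy?"],
    {"model-a": lambda p: f"A: {p}",
     "model-b": lambda p: f"B: {p}"},
)
```

Reviewing `results` (manually, or against a rubric) gives an early signal on which provider's models fit the use case before committing to a platform.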

Where are my data and applications hosted?

As with many public cloud solutions, we have to consider which customer-provided data a Foundation Model will be tuned with, and which applications will consume the output of the model.

Training an AI service in one public cloud with data from a different public cloud could incur data egress costs and higher latency.  While the effectiveness of the solution is of paramount importance, all things being equal, building cross-cloud applications generally introduces additional complexity and cost.

What integrations are we planning?

For each of the major cloud service providers, the foundation model is surrounded by services that add value to our existing or planned applications. All things being equal, we may find that the application or service we're planning integrates better with one public cloud platform than another.

Perhaps the format of our tuning data is better supported by one service than another, or a Foundation Model resource is available in a data center closer to our target audience on one cloud but not the others.


Generative AI is on the radar of many organizations and promises to open new opportunities to add value to applications and make employees more efficient as they complete their assigned tasks.

The primary attributes that make Generative AI so compelling are:

  1. Foundation Models can be trained once and applied to a wide variety of AI use cases.
  2. Foundation Models tend to have excellent natural language generation abilities and can dynamically format and summarize information to make it easier to understand.
  3. A foundation model's language understanding and knowledge-search capabilities can be extended to new domains by tuning with customer-provided data.
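
On the tuning point, each provider accepts customer data in its own schema, but prompt/completion pairs serialized as JSON Lines are a common denominator. The field names below are illustrative; check each service's documentation for its required format.

```python
import json


def to_tuning_jsonl(pairs: list) -> str:
    """Serialize (prompt, completion) pairs as JSON Lines, a format
    commonly accepted by cloud model-tuning services. The "prompt" and
    "completion" field names are assumptions -- they vary by provider."""
    return "\n".join(
        json.dumps({"prompt": p, "completion": c}) for p, c in pairs
    )


records = to_tuning_jsonl([
    ("What is the return window?",
     "Items can be returned within 30 days of purchase."),
])
```

The resulting file can then be uploaded to the chosen platform's tuning workflow, grounding the foundation model in the organization's own knowledge.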

As Generative AI services are added to public cloud providers' catalogs, the high cost of training and deploying the technology is amortized across many customers, making this new technology cost-effective for organizations of many sizes to incorporate into their own knowledge management systems.