
Zero-shot and few-shot prompting for the BloomZ 176B foundation model with the simplified Amazon SageMaker JumpStart SDK


Amazon SageMaker JumpStart is a machine learning (ML) hub offering algorithms, models, and ML solutions. With SageMaker JumpStart, ML practitioners can choose from a growing list of best-performing, publicly available foundation models (FMs) such as BLOOM, Llama 2, Falcon-40B, Stable Diffusion, OpenLLaMA, and Flan-T5/UL2, as well as FMs from Cohere and LightOn.

In this post and accompanying notebook, we demonstrate how to deploy the BloomZ 176B foundation model as an endpoint using the simplified SageMaker JumpStart SDK in Amazon SageMaker JumpStart and use it for various natural language processing (NLP) tasks. You can also access the foundation models through Amazon SageMaker Studio. The BloomZ 176B model, one of the largest publicly available models, is a state-of-the-art instruction-tuned model that can perform various in-context few-shot learning and zero-shot learning NLP tasks. Instruction tuning is a technique that involves fine-tuning a language model on a collection of NLP tasks using instructions. To learn more about instruction tuning, refer to Zero-shot prompting for the Flan-T5 foundation model in Amazon SageMaker JumpStart.

Zero-shot learning in NLP allows a pre-trained LLM to generate responses to tasks that it hasn’t been specifically trained for. In this technique, the model is provided with an input text and a prompt that describes the expected output from the model in natural language. Zero-shot learning is used in a variety of NLP tasks, such as the following:

Multilingual text and sentiment classification
Multilingual question and answering
Code generation
Paragraph rephrasing
Summarization
Common sense reasoning and natural language inference
Question answering
Sentence and sentiment classification
Imaginary article generation based on a title
Summarizing a title based on an article

Few-shot learning involves training a model to perform new tasks by providing only a few examples. This is useful when only limited labeled data is available for training. Few-shot learning is used in a variety of tasks, including the following (a minimal prompt-construction sketch follows the list):

Text summarization
Code generation
Named entity recognition
Question answering
Grammar and spelling correction
Product description and generalization
Sentence and sentiment classification
Chatbot and conversational AI
Tweet generation
Machine translation
Intent classification
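To make the mechanics concrete, a few-shot prompt is simply a handful of worked examples concatenated ahead of the new input, with a separator between them. The following minimal Python sketch builds such a prompt from the grammar and spelling correction examples shown later in this post, using the same ### separator:

# Build a few-shot prompt by prepending worked examples to the new input.
# The examples and the ### separator mirror the grammar correction task
# shown in the table later in this post.
examples = [
    ("I love goin to the beach.", "I love going to the beach."),
    ("Let me hav it!", "Let me have it!"),
]
query = "It have too many drawbacks."

prompt = "\n###\n".join(f"{text}\nCorrection: {fixed}" for text, fixed in examples)
prompt += f"\n###\n{query}\nCorrection:"
print(prompt)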

About Bloom

The BigScience Large Open-science Open-access Multilingual (BLOOM) language model is a transformer-based large language model (LLM). BLOOM is an autoregressive LLM trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources. As such, it is able to output coherent text that is hardly distinguishable from text written by humans. BLOOM can also be instructed to perform text tasks it hasn’t been explicitly trained for by casting them as text generation tasks.

With its 176 billion parameters, BLOOM is able to generate text in 46 natural languages and 13 programming languages. For almost all of them, such as Spanish, French, and Arabic, BLOOM is the first language model with over 100 billion parameters ever created. Researchers can download, run, and study BLOOM to investigate the performance and behavior of recently developed LLMs down to their deepest internal operations.

Solution overview

In this post, we show how to use the state-of-the-art instruction-tuned BloomZ 176B model from Hugging Face for text generation. You can use the BloomZ 176B model with few-shot learning and zero-shot learning for many NLP tasks, without fine-tuning the model. There is no need to train a new model because models like BloomZ 176B have a significant number of parameters that let them adapt to many contexts without being retrained. The BloomZ 176B model has been trained on a large amount of data, making it applicable to many general-purpose tasks.

The code for all the steps in this demo is available in the following notebook.

Instruction tuning

The size and complexity of LLMs have exploded in the last few years. LLMs have demonstrated remarkable capabilities in learning the semantics of natural language and producing human-like responses. Many recent LLMs are fine-tuned with a powerful technique called instruction tuning, which helps the model perform new tasks or generate responses to novel prompts without prompt-specific fine-tuning. An instruction-tuned model uses its understanding of related tasks or concepts to generate predictions for novel prompts. Because adapting the model to a new task this way doesn’t involve updating model weights, it avoids the time-consuming and computationally expensive process otherwise required to fine-tune a model for a new, previously unseen task.

Instruction tuning involves fine-tuning a language model on a collection of NLP tasks using instructions. In this technique, the model is trained to perform tasks by following textual instructions instead of specific datasets for each task. The model is fine-tuned with a set of input and output examples for each task, allowing the model to generalize to new tasks that it hasn’t been explicitly trained on as long as prompts are provided for the tasks. Instruction tuning helps improve the accuracy and effectiveness of models and is helpful in situations where large datasets aren’t available for specific tasks.
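For illustration, an instruction-tuning training record pairs a natural-language instruction with an input and a target output, along the following lines (the field names here are illustrative, not drawn from a specific dataset):

# A hypothetical instruction-tuning training record; field names are
# illustrative only.
example = {
    "instruction": "Classify the sentiment of the movie review as positive or negative.",
    "input": "This movie once again dazzles and delights us.",
    "output": "positive",
}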

Prompt engineering for zero-shot and few-shot NLP tasks on BLOOM models

Prompt engineering deals with creating high-quality prompts to guide the model towards the desired responses. Prompts need to be designed based on the specific task and dataset being used. The goal here is to provide the model with necessary information to generate high-quality responses while minimizing noise. This could involve keywords, additional contexts, questions, and more.

A well-designed prompt can make the model more creative and generalized so that it can easily adapt to new tasks. Prompts can also help incorporate domain knowledge on specific tasks and improve interpretability. Prompt engineering can greatly improve the performance of zero-shot and few-shot learning models. Creating high-quality prompts requires careful consideration of the task at hand, as well as a deep understanding of the model’s strengths and limitations.

In the following examples, the BloomZ 176B model is used to generate responses for various zero-shot and few-shot NLP tasks. Each entry shows the task, the prompt input provided to the model (the task template plus the example input), and the model’s response to that prompt.

For all the following tasks, we set the hyperparameters max_length to 600, top_k to 250, and top_p to 0.95, with sampling disabled (num_return_sequences is also set; see the accompanying notebook for its value).
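For reference, these settings correspond to a JSON request payload along the following lines. This is a sketch: the text_inputs key and parameter names follow the payload convention of JumpStart text generation models, and the num_return_sequences value of 1 is an assumption rather than a value stated in this post.

import json

# Hypothetical request payload for the text generation endpoint.
payload = {
    "text_inputs": "<prompt goes here>",
    "max_length": 600,
    "num_return_sequences": 1,  # assumed value; not stated in this post
    "top_k": 250,
    "top_p": 0.95,
    "do_sample": False,  # no sampling
}
body = json.dumps(payload).encode("utf-8")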


Zero-shot learning tasks

Multilingual text or sentiment classification (Chinese to English)
一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative?
positive

Multilingual question answering (English to Chinese)
Explain to me in Traditional Chinese what is the backpropagation
梯度反向传播算法是一种机器学习算法,用于训练人工神经网络。该算法使用梯度下降来最小化损失函数。

Code generation
Write a binary search tree with O(log(n)) computational complexity.
{code_start}

def binary_search(a, x):
    low = 0
    high = len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] < x:
            low = mid + 1
        elif a[mid] > x:
            high = mid - 1
        else:
            return mid
    return -1

else:
    return mid
return low

(Output is truncated.)

Paragraph rephrase
Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker.
SageMaker aims to provide a machine learning service which can be used to build, train, and deploy ML models for virtually any use case.
During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities.

How would you rephrase that briefly using English?

Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker.
SageMaker aims to provide a machine learning service which can be used to build, train, and deploy ML models for virtually any use case.
During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities.
The above sentence is very complicated. Please provide me a simplified synonymous version consisting of multiple sentences:
Amazon SageMaker gets a major update Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker. SageMaker aims to provide a machine learning service that can be used to build, train and deploy ML models for virtually any use case

Summarization
Amazon Comprehend uses natural language processing (NLP) to extract insights about the content of documents. It develops insights by recognizing the entities, key phrases, language, sentiments, and other common elements in a document. Use Amazon Comprehend to create new products based on understanding the structure of documents. For example, using Amazon Comprehend you can search social networking feeds for mentions of products or scan an entire document repository for key phrases.
You can access Amazon Comprehend document analysis capabilities using the Amazon Comprehend console or using the Amazon Comprehend APIs. You can run real-time analysis for small workloads or you can start asynchronous analysis jobs for large document sets. You can use the pre-trained models that Amazon Comprehend provides, or you can train your own custom models for classification and entity recognition.
All of the Amazon Comprehend features accept UTF-8 text documents as the input. In addition, custom classification and custom entity recognition accept image files, PDF files, and Word files as input.
Amazon Comprehend can examine and analyze documents in a variety of languages, depending on the specific feature. For more information, see Languages supported in Amazon Comprehend. Amazon Comprehend’s Dominant language capability can examine documents and determine the dominant language for a far wider selection of languages.

===
Write a summary of the previous text in English:
Amazon Comprehend uses natural language processing (NLP) to extract insights about the content of documents. It develops insights by recognizing the entities, key phrases, language, sentiments, and other common elements in a document. Use Amazon Comprehend to create new products based on understanding the structure of documents. For example, using Amazon Comprehend you can search social networking feeds for mentions of products or scan an entire document repository for key phrases.
You can access Amazon Comprehend document analysis capabilities using the Amazon Comprehend console or using the Amazon Comprehend APIs. You can run real-time analysis for small workloads or you can start asynchronous analysis jobs for large document sets. You can use the pre-trained models that Amazon Comprehend provides, or you can train your own custom models for classification and entity recognition.
All of the Amazon Comprehend features accept UTF-8 text documents as the input. In addition, custom classification and custom entity recognition accept image files, PDF files, and Word files as input.
Amazon Comprehend can examine and analyze documents in a variety of languages, depending on the specific feature. For more information, see Languages supported in Amazon Comprehend. Amazon Comprehend’s Dominant language capability can examine documents and determine the dominant language for a far wider selection of languages.

Common sense reasoning and natural language inference
Given that The world cup has kicked off in Los Angeles, United States. Does it follow that The world cup takes place in United States. Yes or no?

The world cup has kicked off in Los Angeles, United States. Using only the above description and what you know about the world, is “The world cup takes place in United States” definitely correct? Yes or no?

Yes

Question answering
question: what are the key features of new Kindle?
Context: The newest and most innovative Kindle yet lets you take notes on millions of books and documents, write lists and journals, and more.
For readers who have always wished they could write in their eBooks, Amazon’s new Kindle lets them do just that. The Kindle Scribe is the first Kindle for reading and writing and allows users to supplement their books and documents with notes, lists, and more.
Here’s everything you need to know about the Kindle Scribe, including frequently asked questions.
The Kindle Scribe makes it easy to read and write like you would on paper
The Kindle Scribe features a 10.2-inch, glare-free screen (the largest of all Kindle devices), crisp 300 ppi resolution, and 35 LED front lights that automatically adjust to your environment. Further personalize your experience with the adjustable warm light, font sizes, line spacing, and more.
It comes with your choice of the Basic Pen or the Premium Pen, which you use to write on the screen like you would on paper. They also attach magnetically to your Kindle and never need to be charged. The Premium Pen includes a dedicated eraser and a customizable shortcut button.
The Kindle Scribe has the most storage options of all Kindle devices: choose from 8 GB, 16 GB, or 32 GB to suit your level of reading and writing.
Answer:
take notes on millions of books and documents, write lists and journals, and more

Sentence and sentiment classification
Review:
This movie is so great and once again dazzles and delights us
Is this movie review sentence negative or positive?
OPTIONS:
-positive
-negative
Positive

Imaginary article generation based on title
Title: “University has new facility coming up”
Given the above title of an imaginary article, imagine the article.

 The £1.2m project will see the creation of a new teaching and learning centre at the University of the Highlands and Islands (UHI) in Inverness.

It will be built on the site of the former Inverness College of Commerce.

UHI said the new facility would be used for teaching and learning, as well as student support and administration.

The university said it would also be used for conferences and events.

The new building will be built on the site of the former Inverness College of Commerce

The university said the new building would be built on the site of the former Inverness College of Commerce, which closed in 2013.

It said the new facility would be used for teaching and learning, as well as student support and administration.

It would also be used for conferences and events.

UHI said the new facility would be built on the site of the former Inverness College of Commerce

The university said the new building would be built on the site of the former Inverness College of Commerce, which closed in 2013.

It said the new facility would be used for teaching and learning, as well as student support and administration.

It would also be used for conferences and events.

UHI said the new facility would be built on the site of the former Inverness…

Summarizing a title based on the article
The newest and most innovative Kindle yet lets you take notes on millions of books and documents, write lists and journals, and more.
For readers who have always wished they could write in their eBooks, Amazon’s new Kindle lets them do just that. The Kindle Scribe is the first Kindle for reading and writing and allows users to supplement their books and documents with notes, lists, and more.
Here’s everything you need to know about the Kindle Scribe, including frequently asked questions.
The Kindle Scribe makes it easy to read and write like you would on paper
The Kindle Scribe features a 10.2-inch, glare-free screen (the largest of all Kindle devices), crisp 300 ppi resolution, and 35 LED front lights that automatically adjust to your environment. Further personalize your experience with the adjustable warm light, font sizes, line spacing, and more.
It comes with your choice of the Basic Pen or the Premium Pen, which you use to write on the screen like you would on paper. They also attach magnetically to your Kindle and never need to be charged. The Premium Pen includes a dedicated eraser and a customizable shortcut button.
The Kindle Scribe has the most storage options of all Kindle devices: choose from 8 GB, 16 GB, or 32 GB to suit your level of reading and writing.

Give me a good title for the article above.
Amazon’s Kindle Scribe: Everything you need to know

Few-shot learning tasks

Summarization
[Original]: Amazon scientists, in collaboration with researchers from the University of Sheffield, are making a large-scale fact extraction and verification dataset publicly available for the first time. The dataset, comprising more than 185,000 evidence-backed claims, is being made available to hopefully catalyze research and development that addresses the problems of fact extraction and verification in software applications or cloud-based services that perform automatic information extraction.
[Summary]: Amazon and University researchers make fact extraction and verification dataset publicly available.
###
[Original]: Prime members in the U.S. can get even more delivered to their door with a Prime membership. Members can now enjoy one year of Grubhub+ valued at $9.99 per month for free—at no added cost to their Prime membership. To activate this deal, visit amazon.com/grubhub. This new offer includes unlimited, $0 food delivery fees on orders over $12 as well as exclusive perks for Grubhub+ members and rewards like free food and order discounts. Plus, diners can “eat good while doing good” by opting into Grubhub’s Donate the Change program, a donation-matching initiative that raised more than $25 million in 2021 alone, benefiting more than 20 charitable organizations across the country.
[Summary]: Prime members in the U.S. can enjoy one year of Grubhub+ for free, with no food-delivery fees on eligible orders.
###
[Original]: Amazon scientists, in collaboration with researchers from the University of Sheffield, are making a large-scale fact extraction and verification dataset publicly available for the first time. The dataset, comprising more than 185,000 evidence-backed claims, is being made available to hopefully catalyze research and development that addresses the problems of fact extraction and verification in software applications or cloud-based services that perform automatic information extraction.
[Summary]:

[Summary]: Amazon and University researchers make fact extraction and verification dataset publicly available.

Code generation
description: a orange button that says stop
code: <button style=color:white; background-color:orange;>Stop</button>
###
description: a blue box that contains yellow circles with red borders
code: <div style=background-color: blue; padding: 20px;><div style=background-color: yellow; border: 5px solid red; border-radius: 50%; padding: 20px; width: 100px; height: 100px;>
###

description: a Headline saying Welcome to AI
code:

code: <h1>Welcome to AI</h1>

Named entity recognition
[Text]: Fred is a serial entrepreneur. Co-founder and CEO of Platform.sh, he previously co-founded Commerce Guys, a leading Drupal ecommerce provider. His mission is to guarantee that as we continue on an ambitious journey to profoundly transform how cloud computing is used and perceived, we keep our feet well on the ground continuing the rapid growth we have enjoyed up until now.
[Name]: Fred
[Position]: Co-founder and CEO
[Company]: Platform.sh
###
[Text]: Microsoft (the word being a portmanteau of “microcomputer software”) was founded by Bill Gates on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800. Steve Ballmer replaced Gates as CEO in 2000, and later envisioned a “devices and services” strategy.
[Name]:  Steve Ballmer
[Position]: CEO
[Company]: Microsoft
###
[Text]: Franck Riboud was born on 7 November 1955 in Lyon. He is the son of Antoine Riboud, the previous CEO, who transformed the former European glassmaker BSN Group into a leading player in the food industry. He is the CEO at Danone.
[Name]:  Franck Riboud
[Position]: CEO
[Company]: Danone
###
[Text]: David Melvin is an investment and financial services professional at CITIC CLSA with over 30 years’ experience in investment banking and private equity. He is currently a Senior Adviser of CITIC CLSA.

[Name]: Fred
[Position]: Co-founder and CEO
[Company]: Platform.sh

Question answering
Context: NLP Cloud was founded in 2021 when the team realized there was no easy way to reliably leverage Natural Language Processing in production.
Question: When was NLP Cloud founded?
Answer: 2021
###
Context: NLP Cloud developed their API by mid-2020 and they added many pre-trained open-source models since then.
Question: What did NLP Cloud develop?
Answer: API
###
Context: All plans can be stopped anytime. You only pay for the time you used the service. In case of a downgrade, you will get a discount on your next invoice.
Question: When can plans be stopped?
Answer: Anytime
###
Context: The main challenge with GPT-J is memory consumption. Using a GPU plan is recommended.
Question: Which plan is recommended for GPT-J?
Answer:”””

Answer: GPU plan’

Grammar and spelling correction
I love goin to the beach.
Correction: I love going to the beach.
###
Let me hav it!
Correction: Let me have it!
###
It have too many drawbacks.
Correction: It has too many drawbacks.
###

I do not wan to go
Correction:

Correction: I don’t want to go

Product description and generalization
Generate a product description out of keywords.
Keywords: shoes, women, $59
Sentence: Beautiful shoes for women at the price of $59.
###
Keywords: trousers, men, $69
Sentence: Modern trousers for men, for $69 only.
###
Keywords: gloves, winter, $19
Sentence: Amazingly hot gloves for cold winters, at $19.
###

Keywords: t-shirt, men, $39
Sentence:

Sentence: Cool t-shirt for men, for $39 only.

Sentence and sentiment classification
Message: When the spaceship landed on Mars, the whole humanity was excited
Topic: space
###
Message: I love playing tennis and golf. I’m practicing twice a week.
Topic: sport
###
Message: Managing a team of sales people is a tough but rewarding job.
Topic: business
###

Message: I am trying to cook chicken with tomatoes.
Topic:

Topic: food

Chatbot and conversational AI
This is a discussion between a [human] and a [robot].
The [robot] is very nice and empathetic.
[human]: Hello nice to meet you.
[robot]: Nice to meet you too.
###
[human]: How is it going today?
[robot]: Not so bad, thank you! How about you?
###
[human]: I am ok, but I am a bit sad…
[robot]: Oh? Why that?
###

[human]: I caught flu
[robot]:

[robot]: Oh no, I am sorry to hear that!

Tweet generation
keyword: markets
tweet: Take feedback from nature and markets, not from people
###
keyword: children
tweet: Maybe we die so we can come back as children.
###
keyword: startups
tweet: Startups should not worry about how to put out fires, they should worry about how to start them.
###

keyword: nlp
tweet:

tweet: NLP is a tool, not a goal.

Machine translation
Hugging Face a révolutionné le NLP.
Translation: Hugging Face revolutionized NLP.
###
Cela est incroyable!
Translation: This is unbelievable!
###
Désolé je ne peux pas.
Translation: Sorry but I cannot.
###
NLP Cloud permet de deployer le NLP en production facilement.
Translation:

Translation: NLP Cloud makes it easy to deploy NLP in production.

Intent classification
I want to start coding tomorrow because it seems to be so fun!
Intent: start coding
###
Show me the last pictures you have please.
Intent: show pictures
###
Search all these files as fast as possible.
Intent: search files
###

Can you please teach me Chinese next week?
Intent:

Intent: teach me chinese

Access the BloomZ 176B instruction-tuned model in SageMaker

SageMaker JumpStart provides two ways to get started using these instruction-tuned Bloom models: Amazon SageMaker Studio and the SageMaker SDK. The following sections illustrate what each of these options looks like and how to access them.

Access the model with the simplified SageMaker JumpStart SDK

The simplified SageMaker JumpStart SDK facilitates training and deploying built-in SageMaker JumpStart models with a couple of lines of code. This gives you access to the entire library of SageMaker JumpStart models, including the latest foundation models and image generation models, without having to supply any inputs besides the model ID.

You can take advantage of the model-specific default values we provide to specify the configuration, such as the Docker image, ML instance type, model artifact location, and hyperparameters, among other fields. These attributes are only default values; you can override them and retain granular control over the AWS models you create. As a result of these changes, the effort to write Python workflows to deploy and train SageMaker JumpStart models has been reduced, enabling you to spend more time on the tasks that matter. This feature is available in all Regions where JumpStart is supported, and can be accessed with the SageMaker Python SDK version 2.154.0 or later.

You can programmatically deploy an endpoint through the SageMaker SDK. You will need to specify the model ID of your desired model in the SageMaker model hub and the instance type used for deployment. The model URI, which contains the inference script, and the URI of the Docker container are obtained through the SageMaker SDK. These URIs are provided by SageMaker JumpStart and can be used to initialize a SageMaker model object for deployment.

Deploy the model and query the endpoint

This notebook requires ipywidgets. Install ipywidgets and then use the execution role associated with the current notebook as the AWS account role with SageMaker access.
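The following setup sketch shows one way to do this; sagemaker.get_execution_role works inside SageMaker notebooks, and outside of one you would supply an IAM role ARN with SageMaker access instead:

# Install ipywidgets (notebook magic) and resolve the execution role.
%pip install --quiet ipywidgets

import sagemaker

# Inside a SageMaker notebook, this returns the role attached to the notebook.
aws_role = sagemaker.get_execution_role()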

Choose the pre-trained model

We choose the bloomz-176b-fp16 pre-trained model:

model_id = "huggingface-textgeneration1-bloomz-176b-fp16"

The notebook in the following sections uses BloomZ 176B as an example. For a complete list of SageMaker pre-trained models, refer to Built-in Algorithms with pre-trained Model Table.

Retrieve artifacts and deploy an endpoint

With SageMaker, we can perform inference on the pre-trained model without fine-tuning it first on a new dataset. We start by retrieving the deploy_image_uri, deploy_source_uri, and model_uri for the pre-trained model. To host the pre-trained model, we create an instance of sagemaker.model.Model and deploy it. This may take a few minutes.
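The following sketch shows the lower-level retrieval this paragraph describes; the simplified JumpStartModel call shown next performs these steps for you:

from sagemaker import image_uris, model_uris, script_uris

model_version = "*"  # latest available version
instance_type = "ml.p4de.24xlarge"  # instance type used to host this model

# Retrieve the inference Docker image URI for this model
deploy_image_uri = image_uris.retrieve(
    region=None,
    framework=None,
    image_scope="inference",
    model_id=model_id,
    model_version=model_version,
    instance_type=instance_type,
)

# Retrieve the inference script URI and the model artifact URI
deploy_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="inference"
)
model_uri = model_uris.retrieve(
    model_id=model_id, model_version=model_version, model_scope="inference"
)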

Now we can deploy the model using the simplified SageMaker JumpStart SDK with the following lines of code:

from sagemaker.jumpstart.model import JumpStartModel

# If no instance for this model ID is available, use a smaller model ID
model = JumpStartModel(model_id=model_id)

# ml.p4de.24xlarge is used by default. You can add the kwarg
# instance_type to change this setting.
predictor = model.deploy()

endpoint_name = predictor.endpoint_name

We use SageMaker large model inference (LMI) containers to host the BloomZ 176B model. LMI is an AWS-built LLM software stack (container) that offers easy-to-use functions and performance gain on generative AI models. It’s embedded with model parallelism, compilation, quantization, and other stacks to speed up inference. For details, refer to Deploy BLOOM-176B and OPT-30B on Amazon SageMaker with large model inference Deep Learning Containers and DeepSpeed.

Note that deploying this model requires a p4de.24xlarge instance and the deployment usually takes about 1 hour. If you don’t have quota for that instance, request a quota increase on the AWS Service Quotas console.

Query the endpoint and parse the response using various parameters to control the generated text

The input to the endpoint is any string of text formatted as JSON and encoded in utf-8 format. The output of the endpoint is a JSON file with generated text.
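As a sketch, a raw invocation with boto3 looks like the following; the text_inputs key follows the JumpStart text generation payload convention, and endpoint_name comes from the deployment above:

import json

import boto3

# Invoke the endpoint with a UTF-8 encoded JSON body and parse the JSON reply.
client = boto3.client("sagemaker-runtime")
response = client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/json",
    Body=json.dumps({"text_inputs": "How to make a pasta?"}).encode("utf-8"),
)
result = json.loads(response["Body"].read())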

In the following example, we provide some sample input text. You can input any text and the model predicts the next words in the sequence. Longer sequences of text can be generated by calling the model repeatedly. The following code shows how to invoke an endpoint with these arguments:

from sagemaker.predictor import retrieve_default

# Attach a predictor to the deployed JumpStart endpoint
predictor = retrieve_default(model_id=model_id, model_version="*", endpoint_name=endpoint_name)
response = predictor.predict("How to make a pasta?")
print(response["generated_text"])

We get the following output:

['How to make a pasta? boil a pot of water and add salt. Add the pasta to the water and cook until al dente. Drain the pasta.']

Access the model in SageMaker Studio

You can also access these models through the JumpStart landing page in Studio. This page lists available end-to-end ML solutions, pre-trained models, and example notebooks.

At the time of publishing this post, BloomZ 176B is only available in the us-east-2 Region.

You can choose the BloomZ 176B model card to view the notebook.

You can then import the notebook and run it.

Clean up

To avoid ongoing charges, delete the SageMaker inference endpoints. You can delete the endpoints via the SageMaker console or from the SageMaker Studio notebook using the following commands:

predictor.delete_model()
predictor.delete_endpoint()

Conclusion

In this post, we gave an overview of the benefits of zero-shot and few-shot learning and described how prompt engineering can improve the performance of instruction-tuned models. We also showed how to easily deploy an instruction-tuned BloomZ 176B model from SageMaker JumpStart and provided examples to demonstrate how you can perform different NLP tasks using the deployed BloomZ 176B model endpoint in SageMaker.

We encourage you to deploy a BloomZ 176B model from SageMaker JumpStart and create your own prompts for NLP use cases.

To learn more about SageMaker JumpStart, check out the following:

Zero-shot prompting for the Flan-T5 foundation model in Amazon SageMaker JumpStart
Run text generation with Bloom and GPT models on Amazon SageMaker JumpStart
Generate images from text with the stable diffusion model on Amazon SageMaker JumpStart
Run image segmentation with Amazon SageMaker JumpStart
Run text classification with Amazon SageMaker JumpStart using TensorFlow Hub and Hugging Face models
Amazon SageMaker JumpStart models and algorithms now available via API
Incremental training with Amazon SageMaker JumpStart
Transfer learning for TensorFlow object detection models in Amazon SageMaker
Transfer learning for TensorFlow text classification models in Amazon SageMaker
Transfer learning for TensorFlow image classification models in Amazon SageMaker

About the Authors

Rajakumar Sampathkumar is a Principal Technical Account Manager at AWS, providing customers guidance on business-technology alignment and supporting the reinvention of their cloud operation models and processes. He is passionate about cloud and machine learning. Raj is also a machine learning specialist and works with AWS customers to design, deploy, and manage their AWS workloads and architectures.

Dr. Xin Huang is an Applied Scientist for Amazon SageMaker JumpStart and Amazon SageMaker built-in algorithms. He focuses on developing scalable machine learning algorithms. His research interests are in the area of natural language processing, explainable deep learning on tabular data, and robust analysis of non-parametric space-time clustering. He has published many papers in ACL, ICDM, KDD conferences, and Royal Statistical Society: Series A journal.

Evan Kravitz is a software engineer at Amazon Web Services, working on SageMaker JumpStart. He enjoys cooking and going on runs in New York City.

