
Enhancing GitHub Actions CI for FastAPI: Build, Test, and Publish

by Hector Martinez, PyImageSearch



In this follow-up guide, we dive deeper into enhancing your GitHub Actions CI pipeline for Python FastAPI applications, focusing on the critical stages of building, testing, and publishing. This tutorial will walk you through the next steps of your CI journey, emphasizing how to efficiently compile your application, run automated tests, and publish release artifacts to streamline your development process.

This lesson is the 3rd of a 4-part series on GitHub Actions:

  1. Introduction to GitHub Actions for Python Projects
  2. Setting Up GitHub Actions CI for FastAPI: Intro to Taskfile and Pre-Jobs
  3. Enhancing GitHub Actions CI for FastAPI: Build, Test, and Publish (this tutorial)
  4. Lesson 4

To learn how to advance your CI pipeline for Python FastAPI applications using GitHub Actions, focusing on building, testing, and publishing stages, just keep reading.


Enhancing GitHub Actions CI for FastAPI: Build, Test, and Publish

In this follow-up guide, we’ll continue enhancing the CI pipeline for Python FastAPI applications using GitHub Actions. In the previous post, we focused on the initial setup, covering Taskfile automation and pre-jobs to streamline your CI process. We demonstrated how pre-jobs optimize the pipeline by ensuring only necessary tasks are triggered, preventing redundant runs, and automating repetitive tasks like dependency installation, linting, and testing using Taskfile.

In this third post, we will dive deeper into the remaining jobs within the CI pipeline: building and testing, publishing test results, and releasing the application. We’ll cover how to:

  • Build and test the FastAPI application using PyTest and Flake8 for code quality checks.
  • Automate the publishing of test results, ensuring visibility into unit test performance.
  • Create and publish a release that packages your FastAPI project into a Python wheel file and uploads it to GitHub releases for easy deployment.

This continuation of our GitHub Actions CI/CD series provides a complete walkthrough of building, testing, and publishing for FastAPI applications, setting you up for seamless deployments in your continuous delivery (CD) pipeline.


What Readers Will Learn in This Post

In this post, you’ll build on the CI pipeline setup from the previous post and learn how to:

  • Build and Test Your FastAPI Application: Configure the CI pipeline to run automated tests, check code quality with linters, and manage dependencies.
  • Publish Test Results: Capture and publish test results for easy review.
  • Create a Release: Package your FastAPI application as a Python wheel file, upload the release to GitHub, and prepare the application for deployment.

This guide ensures your CI pipeline doesn’t just validate your code but also prepares it for deployment, paving the way for continuous delivery (CD).


Configuring Your Development Environment

Since this is a CI/CD-focused lesson, you might expect to configure and install the necessary libraries for running code locally. However, in this case, libraries like FastAPI, Pillow, Gunicorn, PyTest, and Flake8 — although required — are only needed within the CI pipeline itself. These dependencies will be installed automatically when the pipeline runs in GitHub Actions, meaning there’s no need to configure them on your local development environment unless you’re testing locally.

To clarify: in this guide, the requirements.txt file in your repository ensures that GitHub Actions installs all required packages for you during the pipeline execution. Therefore, you can skip installing these dependencies locally unless you’re developing or testing outside the CI pipeline.


Can You Test a CI Pipeline Locally?

Yes. While GitHub Actions primarily runs your pipeline in the cloud, there are ways to simulate or test it locally. One option is act, a tool that mimics the GitHub Actions environment and lets you run workflows on your own machine, catching issues before you push code. Another approach is to run individual steps (e.g., your PyTest, linting, and build tasks) directly in your terminal to confirm there are no issues before triggering the pipeline on GitHub.

This allows for faster feedback and testing of changes without waiting for the entire CI pipeline to run in GitHub.
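
As a sketch of that second approach, the following Python helper runs a list of local checks in sequence and stops at the first failure. The flake8 and pytest invocations in the comment are assumptions — mirror whatever your Taskfile actually runs:

```python
import subprocess


def run_checks(commands):
    """Run each (name, argv) check in order; stop at the first failure.

    Returns the name of the first failing check, or None if all pass.
    """
    for name, argv in commands:
        print(f"Running {name}: {' '.join(argv)}")
        result = subprocess.run(argv)
        if result.returncode != 0:
            print(f"{name} failed with exit code {result.returncode}")
            return name
    return None


# Example (assumes flake8 and pytest are installed locally):
# run_checks([("lint", ["flake8", "."]), ("test", ["pytest", "tests/"])])
```

Failing fast like this gives you roughly the same signal as the CI jobs below, minutes before GitHub would report it.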

Fortunately, all these packages are easily installable via pip. You can use the following commands to set up your environment:

$ pip install -q fastapi[all]==0.98.0
$ pip install -q Pillow==9.5.0
$ pip install -q gunicorn==20.1.0
$ pip install -q pytest==8.2.2
$ pip install -q pytest-cov==5.0.0
$ pip install -q flake8==7.1.0

Need Help Configuring Your Development Environment?

Having trouble configuring your development environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch University — you will be up and running with this tutorial in a matter of minutes.

All that said, are you:

  • Short on time?
  • Learning on your employer’s administratively locked system?
  • Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?
  • Ready to run the code immediately on your Windows, macOS, or Linux system?

Then join PyImageSearch University today!

Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required.

And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux!


Project Directory Structure for Following Lessons

We first need to review our project directory structure.

Start by accessing the “Downloads” section of this tutorial to retrieve the source code and example images.

From there, take a look at the directory structure:

.
├── .github
│   └── workflows
│       ├── cd.yml
│       └── ci.yml
├── deployment
│   ├── Taskfile.yml
│   ├── docker
│   │   └── Dockerfile
│   └── scripts
│       └── clean_ghcr_docker_images.py
├── main.py
├── model.script.pt
├── pyimagesearch
│   ├── __init__.py
│   └── utils.py
├── requirements.txt
├── setup.py
└── tests
    ├── test_image.png
    ├── test_main.py
    └── test_utils.py

7 directories, 14 files

In the .github/workflows/ directory, we have:

  • ci.yml: This GitHub Actions workflow defines the Continuous Integration (CI) process. It specifies the steps to build, test, and package the application whenever there’s a change in the codebase. This includes tasks like running unit tests, linting the code, and creating a release if a new version is tagged.
  • cd.yml: The workflow that will handle Continuous Deployment (CD), including building Docker images and pushing them to a container registry. We’ll explore this in detail in the upcoming lesson.

In the deployment/ directory, we have:

  • Taskfile.yml: This file defines various tasks that can be run during the CI process, such as installing dependencies, linting the code, and running tests. The use of a Taskfile allows us to automate these processes and ensure consistency across different environments.
  • docker/: Contains the Dockerfile used for containerizing the application. This will be covered in the upcoming lesson when we discuss Continuous Deployment (CD).
  • scripts/: Includes utility scripts like clean_ghcr_docker_images.py, which is used to clean up old Docker images in the GitHub Container Registry (GHCR). This will also be covered in the CD lesson.

In the pyimagesearch directory, we have:

  • __init__.py: Initializes the module, allowing for easier imports and organization within the pyimagesearch directory.
  • utils.py: Contains utility functions for tasks such as loading the model, processing images, and performing inference. These utilities help keep the codebase modular and maintainable. This was covered in the FastAPI deployment blog post.

In the root directory, we have:

  • main.py: The main script that sets up the FastAPI application, defines API endpoints, and handles requests for inference and health checks. This was thoroughly discussed in the FastAPI deployment post.
  • model.script.pt: The serialized PyTorch model that is used for inference within the application.
  • requirements.txt: Lists all the dependencies needed to run the FastAPI application, including specific versions of libraries like Torch and FastAPI. This file is essential for ensuring that all dependencies are installed consistently across different environments. In our CI pipeline, this file is used to install the necessary packages before running tests and building the application.
  • setup.py: This script is used for packaging the application, allowing it to be easily installed or distributed as a Python package. It defines the package details, dependencies, and entry points. In the CI process, setup.py is used to build the wheel file, which is then published as part of the release process.

In the tests directory, we have:

  • test_image.png: A sample image used to verify that the model’s inference capabilities are functioning correctly.
  • test_main.py: Unit tests for the main application, particularly focusing on the FastAPI endpoints. While these tests were covered in the FastAPI deployment post, in this blog post, we’ll show how they are integrated into the CI pipeline to ensure continuous testing.
  • test_utils.py: Unit tests for the utility functions in utils.py. These tests help verify that any changes to utility functions do not introduce bugs. Again, these tests were covered previously, but here, we’ll focus on how they are used within the CI pipeline.

This directory structure sets up the foundation for our CI/CD pipeline. While some components were discussed in previous posts, like the FastAPI deployment, the focus of this blog post is on the CI pipeline, which ensures that the application is consistently tested and validated with each code change. The components related to Continuous Deployment, such as the Dockerfile and cd.yml, will be explored in the following lesson.


Building and Testing Your FastAPI Application

In this section, we delve into the core of the CI pipeline where your FastAPI application is built and thoroughly tested. This job ensures that your codebase is properly linted, dependencies are installed, and all tests are executed automatically. The process also includes uploading test coverage reports to track how much of the application is covered by tests. By automating these steps, you can consistently validate your application’s stability and code quality with each new code change.


Define the build-and-test Job

build-and-test:
  needs: pre_job
  if: ${{ needs.pre_job.outputs.should_skip != 'true' || github.event_name == 'pull_request' || startsWith(github.ref, 'refs/tags/v') }}
  runs-on: ubuntu-latest
  strategy:
    matrix:
      python-version: [3.9]

  • Job Dependency (needs: pre_job): This job depends on the outcome of the pre_job defined earlier in the CI pipeline.
  • Conditional Execution (if:): The job runs if the pre_job indicates that the pipeline should not be skipped (i.e., should_skip != 'true'), or if the CI was triggered by a pull request or a new version tag. This ensures that CI always runs for feature branches and releases.
  • Runner (runs-on: ubuntu-latest): Specifies that the pipeline runs on the latest version of an Ubuntu environment. This provides a standardized Linux environment for building and testing the FastAPI application.
  • Matrix Strategy: The matrix strategy defines the Python version (3.9) under which the pipeline runs. It can be extended to test multiple Python versions simultaneously by simply adding more versions (e.g., python-version: [3.7, 3.8, 3.9]), enabling cross-version compatibility testing.

Checkout the Repository

- name: Checkout repository 🛎
  uses: actions/checkout@v2
  with:
    fetch-depth: 0

This step checks out the code from your GitHub repository using the actions/checkout@v2 action. The fetch-depth: 0 argument ensures that the full history of the repository is fetched. This is useful if you need to run actions based on the repository’s commit history (e.g., determining changes for running tests).


Set Up Python

- name: Set up Python ${{ matrix.python-version }}
  uses: actions/setup-python@v2
  with:
    python-version: ${{ matrix.python-version }}

Here, we set up the Python environment using the actions/setup-python@v2 action. It installs the specific Python version defined in the matrix (3.9 in this case). If you add multiple Python versions in the matrix strategy, this job will run separately for each version, testing your FastAPI application in different environments.


Install Task

- name: Install Task 🗳️
  uses: arduino/setup-task@v1

This step installs Task using arduino/setup-task@v1. Task is a tool that helps automate and organize tasks in the CI pipeline, such as managing dependencies, linting, and running tests. Instead of repeating common commands in different sections, the Taskfile centralizes these steps for a cleaner, more maintainable pipeline.

We have covered Task along with the Taskfile in detail in our previous lesson. If you haven’t read it, you can find it here.


Install Dependencies

- name: Install dependencies
  working-directory: deployment
  run: task deps

This command runs the task deps command (defined in the Taskfile). The deps task installs all necessary dependencies using pip, including FastAPI, PyTest, Flake8, and other libraries. The working-directory is set to deployment to ensure that the Taskfile is executed from the correct directory.


Set PYTHONPATH

- name: Set PYTHONPATH
  run: echo "PYTHONPATH=$PYTHONPATH:$(pwd)" >> $GITHUB_ENV

Next, the PYTHONPATH environment variable is set to include the current working directory ($(pwd)). This ensures that Python can find all necessary modules and dependencies while running tests or executing commands in the CI pipeline. The GITHUB_ENV file is used to pass this environment variable to subsequent steps in the pipeline.
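
To see why this matters, here is a small, self-contained demonstration: appending a directory to sys.path (which is what setting PYTHONPATH does at interpreter startup) is what makes a module in that directory importable. The hello_ci module is invented purely for illustration:

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Create a throwaway module in a temporary directory to stand in for
# project code living outside the interpreter's default search path.
tmp_dir = Path(tempfile.mkdtemp())
(tmp_dir / "hello_ci.py").write_text("GREETING = 'hi from CI'\n")

# Appending the directory to sys.path mirrors what PYTHONPATH does:
# the module now resolves like any other import.
sys.path.append(str(tmp_dir))
hello_ci = importlib.import_module("hello_ci")
print(hello_ci.GREETING)  # → hi from CI
```

In the pipeline, writing to $GITHUB_ENV performs the same trick declaratively: every later step inherits the extended search path.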


Lint with Flake8

- name: Lint with flake8
  working-directory: deployment
  run: task lint

This step runs Flake8 (a Python linter) to check for code style issues, syntax errors, and potential bugs. The task lint command (defined in the Taskfile) automatically invokes Flake8, scanning your FastAPI application for issues and ensuring your code meets the required style guide (e.g., PEP8).

Linting ensures that the code follows best practices before it proceeds to the testing phase, reducing the likelihood of code issues during development or deployment.
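
For a concrete sense of what the linter catches, here is an illustrative snippet (not from this project) showing two common flake8 findings and their fixes:

```python
# Before (flake8 reports F401 "'os' imported but unused" and
# E712 "comparison to True should be 'if cond is True:' or 'if cond:'"):
#
#     import os
#
#     def is_ready(status):
#         if status == True:
#             return "yes"
#         return "no"

# After: drop the dead import and use an identity check.
def is_ready(status):
    if status is True:
        return "yes"
    return "no"
```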


Run Tests with PyTest

- name: Test with pytest
  working-directory: deployment
  run: task test

Here, we run all unit tests defined for the FastAPI application using PyTest. The task test command triggers the PyTest runner, which scans the tests/ directory and executes all test cases. This ensures that your application functions as expected and that no functionality is broken with new code changes.


Upload Test Coverage Reports (XML)

- name: Upload test coverage xml 📦️
  uses: actions/upload-artifact@v2
  if: success() || failure()
  with:
    name: test-results
    path: |
      junit_result.xml
      coverage.xml

This step uploads the test coverage reports in XML format using the actions/upload-artifact@v2 action. The results include information about how much of your codebase is covered by tests (i.e., test coverage) and any test failures. This is useful for developers and reviewers to quickly assess the health of the codebase.

The condition if: success() || failure() ensures that the test results are uploaded regardless of whether the tests pass or fail.
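
If you download the test-results artifact, you can summarize it locally with a few lines of standard-library Python. This is a sketch that assumes a conventional JUnit XML layout (the format junit_result.xml typically follows):

```python
import xml.etree.ElementTree as ET


def summarize_junit(xml_text):
    """Count (total, failed) test cases in a JUnit-style XML report."""
    root = ET.fromstring(xml_text)
    total = failed = 0
    # iter() also matches the root element itself, so this handles both
    # a bare <testsuite> and a <testsuites> wrapper.
    for suite in root.iter("testsuite"):
        for case in suite.iter("testcase"):
            total += 1
            if case.find("failure") is not None or case.find("error") is not None:
                failed += 1
    return total, failed
```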


Upload Test Coverage Reports (HTML)

- name: Upload test coverage html 📦️
  uses: actions/upload-artifact@v2
  if: success() || failure()
  with:
    name: htmlcov
    path: htmlcov/

In addition to XML reports, this step uploads the HTML version of the test coverage reports. HTML coverage reports provide a user-friendly interface to browse the code coverage details, making it easier to spot areas in your code that need better test coverage.

The image below shows the artifacts produced during the CI run — specifically, the htmlcov and test-results artifacts generated by the two upload steps discussed above.

  1. htmlcov: This represents the HTML coverage report generated after running the tests. It provides detailed information on which parts of the code were executed during the tests, allowing developers to assess test coverage visually.
  2. test-results: This artifact contains the results of the unit tests, often stored in formats like JUnit XML. These results help developers review which tests passed or failed, providing insight into the overall health of the application.

These artifacts are essential for maintaining code quality and ensuring that tests run properly on each commit or pull request. They can be downloaded for further analysis.


Summary

The build-and-test job is responsible for:

  1. Building: Setting up the environment, checking out the code, and installing dependencies.
  2. Linting: Running code style and syntax checks to ensure the code follows best practices.
  3. Testing: Running unit tests using PyTest to validate the functionality of the FastAPI application.
  4. Uploading Results: Generating and uploading test coverage reports (in both XML and HTML formats) for further analysis.

This ensures that every new code change is thoroughly tested, verified, and conforms to coding standards, helping maintain a reliable and efficient CI pipeline.


Publishing Test Results

This job handles the publishing of unit test results from the previous build-and-test job. After tests are executed and their results are saved as artifacts, this step ensures that these results are properly uploaded and made accessible for review. This allows you to see test outcomes directly within GitHub, ensuring continuous monitoring of code quality.

Let’s break down the key steps:

publish-test-results:
  needs: build-and-test
  runs-on: ubuntu-latest

This ensures that the test results are only published after the build-and-test job has completed. It creates a dependency between the jobs, ensuring the proper sequence.

The runs-on: ubuntu-latest specifies that the job will run on the latest version of Ubuntu.

steps:
  - name: Download Artifacts
    uses: actions/download-artifact@v2
    with:
      name: test-results
      path: artifacts

This step retrieves the test results (in this case, the junit_result.xml and coverage.xml files) from the previous job. The artifacts are downloaded to a specified path (artifacts) for use in subsequent steps.

  - name: Publish Unit Test Results
    uses: EnricoMi/publish-unit-test-result-action@v1.15
    with:
      files: artifacts/**/junit_result.xml
      github_token: ${{ secrets.GITHUB_TOKEN }}
      report_individual_runs: true

This step uses the EnricoMi/publish-unit-test-result-action to publish the downloaded test results to GitHub. The key parameters include:

  • files: Specifies the file path where the test result (junit_result.xml) is located.
  • github_token: Grants access to the GitHub repository using the secret token.
  • report_individual_runs: If set to true, the results of individual test runs are reported, providing a detailed breakdown of test outcomes.

Below is the visual feedback generated by the GitHub Actions Bot after the test results have been processed and published. The bot automatically posts a comment summarizing the test results directly on the pull request or commit page, making it easy for developers to quickly see if their changes passed or failed the tests.

This job ensures that you have an easy-to-read, integrated view of your test results directly on GitHub, which is essential for keeping track of test coverage and identifying issues early in the development process.


Publish Release

The publish-release job is responsible for automating the process of creating and publishing a new release of the project on GitHub. This is especially useful in a CI/CD pipeline because it ensures that every time a new version of your FastAPI application is tagged (with something like v1.0.0), the code is packaged, a release is created on GitHub, and the necessary distribution files (e.g., wheel files) are uploaded automatically. This eliminates the need for manual intervention in the release process.

The main tasks handled by the publish-release job are:

  1. Checking out the repository: Ensuring that the codebase is available in the environment for building.
  2. Setting up the Python environment: Installing necessary tools for packaging and releasing the code.
  3. Building the package: Using setup.py to create distribution files (wheel and source distribution).
  4. Creating or finding a GitHub release: Either creating a new release on GitHub or finding an existing one for the current version tag.
  5. Uploading the package to the release: Adding the distribution files to the GitHub release for others to download.
  6. Generating and updating a changelog: Automatically generating a changelog based on recent commits and attaching it to the release.

Defining the Job

publish-release:
  needs: build-and-test
  runs-on: ubuntu-latest
  if: startsWith(github.ref, 'refs/tags/v')

needs: build-and-test ensures that the publish-release job only runs after the build-and-test job has completed successfully. This dependency prevents the release process from triggering if the build or tests fail.

The job runs on the latest version of Ubuntu in GitHub’s virtual environment. The if condition ensures that the release process only starts when a version tag (like v1.0.0) is pushed to the repository. This is a common convention for versioning software.


Checkout the Repository

- name: Checkout repository 🛎
  uses: actions/checkout@v2
  with:
    fetch-depth: 0
    ref: ${{ github.head_ref }}

The actions/checkout@v2 action checks out the project repository so that the code is available in the CI environment. fetch-depth: 0 retrieves the full commit history, which is important for generating changelogs or other operations that rely on version history.

ref: ${{ github.head_ref }} ensures that the correct commit or branch is checked out, which is especially useful when multiple branches or tags are involved.


Set Up the Python Environment

- name: Set up Python runtime 🐍
  uses: actions/setup-python@v2.2.1
  with:
    python-version: 3.9

The setup-python@v2.2.1 action installs Python 3.9, which is necessary for building and packaging the Python project. Adjust python-version if your project requires a different version.


Install Task Automation and Dependencies

- name: Install Task 🗳️
  uses: arduino/setup-task@v1

This action sets up Task, a task runner, which is used for automating steps like dependency installation, testing, and linting.

- name: Install dependencies 🐍
  working-directory: deployment
  run: task deps

The task deps command is used to install all necessary Python dependencies (from requirements.txt or similar files) for the project. The working directory is set to deployment, indicating where the Taskfile.yml is located.


Install Wheel and Twine (Packaging Tools)

- name: Install Wheel 📦️
  run: python -m pip install wheel

- name: Install Twine 🐍
  run: python -m pip install --upgrade twine

Here, we install the wheel package, which is essential for building Python projects into .whl (wheel) files. These are binary distributions of Python packages that are easy to install.

Next, we also install twine, a tool used to upload the built Python package to a repository, such as GitHub Releases or PyPI (Python Package Index).


Build the Python Package

- name: Build package 🐍
  run: python setup.py sdist bdist_wheel

The python setup.py sdist bdist_wheel command runs the setup script to generate two types of distribution files:

  1. sdist: A source distribution that includes the raw source code.
  2. bdist_wheel: A wheel distribution, which is a binary format that’s easier to install and use.
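
A wheel is just a zip archive with a standardized layout, so you can peek inside the file bdist_wheel produces using only the standard library. The file name in the comment is illustrative:

```python
import zipfile


def wheel_contents(wheel_path):
    """List the files packed inside a wheel (a .whl is just a zip archive)."""
    with zipfile.ZipFile(wheel_path) as wf:
        return sorted(wf.namelist())


# Example: wheel_contents("dist/pyimagesearch-1.0.2-py3-none-any.whl")
```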

Set the Package Path for Upload

- name: Find and Set Package File Path 📦️
  run: |
    PACKAGE_FILE=$(ls dist/*.whl)
    PACKAGE_NAME=$(basename $PACKAGE_FILE)
    echo "PACKAGE_FILE=$PACKAGE_FILE" >> $GITHUB_ENV
    echo "PACKAGE_NAME=$PACKAGE_NAME" >> $GITHUB_ENV

This script looks for the wheel file generated during the build process and sets it as an environment variable for later steps.

ls dist/*.whl searches the dist directory for the wheel file, and echo stores the path and file name of the wheel file as environment variables.
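
For reference, the same lookup can be done in Python with pathlib — a sketch, assuming the wheel lands in dist/ as above:

```python
from pathlib import Path


def find_wheel(dist_dir="dist"):
    """Return (path, filename) for the first wheel in dist_dir,
    mirroring the `ls dist/*.whl` and `basename` shell step above."""
    wheels = sorted(Path(dist_dir).glob("*.whl"))
    if not wheels:
        raise FileNotFoundError(f"no wheel found in {dist_dir}/")
    return str(wheels[0]), wheels[0].name
```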


Create or Find a GitHub Release

- name: Create or Get Release 📦️
  id: create_release
  uses: actions/github-script@v3
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    script: |
      const { data: releases } = await github.repos.listReleases({
        owner: context.repo.owner,
        repo: context.repo.repo
      });
      let release = releases.find(r => r.tag_name === context.ref.replace('refs/tags/', ''));
      if (!release) {
        const response = await github.repos.createRelease({
          owner: context.repo.owner,
          repo: context.repo.repo,
          tag_name: context.ref.replace('refs/tags/', ''),
          name: context.ref.replace('refs/tags/', ''),
          body: 'Auto-generated release',
          draft: false,
          prerelease: false
        });
        release = response.data;
      }
      core.setOutput('upload_url', release.upload_url);
      core.setOutput('release_id', release.id);

This step uses GitHub’s API to check if a release already exists for the current tag. If a release doesn’t exist, it creates a new one.

core.setOutput() captures the release’s upload URL and release ID, which are needed in the next steps to upload assets.
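
The find-or-create logic in that script boils down to a few lines. Here is the same idea in plain Python, with the API call abstracted behind a create callable (a hypothetical stand-in for github.repos.createRelease):

```python
def find_or_create_release(releases, tag, create):
    """Return the release whose tag_name matches `tag`; otherwise call
    `create(tag)` (standing in for the createRelease API call) and
    return its result.

    `releases` is a list of dicts shaped like the GitHub API response.
    """
    for release in releases:
        if release.get("tag_name") == tag:
            return release
    return create(tag)
```

Keeping the step idempotent this way means re-running the workflow for an existing tag reuses the release instead of failing.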


Upload the Package to GitHub Release

- name: Upload package to GitHub Release 📦️
  uses: actions/upload-release-asset@v1
  with:
    upload_url: ${{ steps.create_release.outputs.upload_url }}
    asset_path: ${{ env.PACKAGE_FILE }}
    asset_name: ${{ env.PACKAGE_NAME }}
    asset_content_type: application/zip
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

The upload-release-asset action uploads the wheel file to the GitHub release created or retrieved earlier.

  • upload_url: Refers to the URL where the package will be uploaded.
  • asset_path: The path to the wheel file.
  • asset_name: The name of the wheel file to be shown in the release.
  • asset_content_type: Specifies the MIME type of the uploaded asset.

Generate and Update the Changelog

- name: Generate Changelog 📝
  run: |
    git-chglog -o CHANGELOG.txt --template .chglog/CHANGELOG-release.tpl.md $(git describe --tags $(git rev-list --tags --max-count=1))

- name: Update Release with Changelog 📦️
  uses: actions/github-script@v3
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    script: |
      const release_id = '${{ steps.create_release.outputs.release_id }}';
      const fs = require('fs');
      const changelog = fs.readFileSync('CHANGELOG.txt', 'utf8');
      await github.repos.updateRelease({
        owner: context.repo.owner,
        repo: context.repo.repo,
        release_id: release_id,
        body: changelog
      });

  • git-chglog: This tool generates a changelog based on the commit history and tags. It uses the git command to find the differences since the last tag and outputs a changelog in the CHANGELOG.txt file.
  • Update Release with Changelog: This step reads the CHANGELOG.txt and updates the release body on GitHub with the contents of the changelog.

In the image below, you can see the successful creation of version v1.0.2 of the Python package for the FastAPI application.

It shows three downloadable assets that were produced and uploaded during the Publish Release job:

  1. Wheel File (.whl): The pyimagesearch-1.0.2-py3-none-any.whl file is a compiled Python package. Users can easily install this wheel file locally by running the following command:

$ pip install pyimagesearch-1.0.2-py3-none-any.whl

This allows anyone to install the package along with its dependencies directly, making it ideal for distributing Python libraries or applications.

  2. Source Code (zip): This is a zip archive of the source code for version v1.0.2. It contains all the project files, including the original Python scripts and configuration files. Users can download this zip file, extract it, and explore or modify the code.
  3. Source Code (tar.gz): This is the same as the zip archive but in a .tar.gz format, which is more commonly used in Unix-like environments. Users can extract it by running the following:

$ tar -xvzf pyimagesearch-1.0.2.tar.gz

These assets provide users with multiple options for using the application: they can either install the ready-to-go Python package via the wheel file or explore and work with the source code by downloading the zip or tarball files.


Summary

This publish-release job automates the process of creating a GitHub release every time a new version tag is pushed. It ensures the Python project is built, packaged, and uploaded as a release artifact, including a changelog. This makes releasing new versions of the software efficient and repeatable without manual steps.


What’s next? We recommend PyImageSearch University.

Course information:
86 total classes • 115+ hours of on-demand code walkthrough videos • Last updated: October 2024
★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled

I strongly believe that if you had the right teacher you could master computer vision and deep learning.

Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?

That’s not the case.

All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.

If you’re serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.

Inside PyImageSearch University you’ll find:

  • ✓ 86 courses on essential computer vision, deep learning, and OpenCV topics
  • ✓ 86 Certificates of Completion
  • ✓ 115+ hours of on-demand video
  • ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
  • ✓ Pre-configured Jupyter Notebooks in Google Colab
  • ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
  • ✓ Access to centralized code repos for all 540+ tutorials on PyImageSearch
  • ✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
  • ✓ Access on mobile, laptop, desktop, etc.

Click here to join PyImageSearch University


Summary

This tutorial advances the GitHub Actions CI pipeline for FastAPI applications, guiding you through the building, testing, and publishing stages. It’s the 3rd of a 4-part series focusing on automating essential CI steps for Python projects. This guide covers how to configure your development environment and project structure, then proceeds to set up a detailed build-and-test job using PyTest for testing and Flake8 for code linting.

It also includes steps for publishing test results and creating release packages. By the end, you’ll know how to automate the generation and upload of test coverage reports, package your application as a wheel file, and publish it to GitHub releases. This comprehensive walkthrough aims to optimize your CI/CD pipeline for a streamlined FastAPI deployment process.


Citation Information

Martinez, H. “Enhancing GitHub Actions CI for FastAPI: Build, Test, and Publish,” PyImageSearch, P. Chugh, S. Huot, R. Raha, and P. Thakur, eds., 2024, https://pyimg.co/ujt7k

@incollection{Martinez_2024_enhancing-github-actions-ci-fastapi,
  author = {Hector Martinez},
  title = {Enhancing GitHub Actions CI for FastAPI: Build, Test, and Publish},
  booktitle = {PyImageSearch},
  editor = {Puneet Chugh and Susan Huot and Ritwik Raha and Piyush Thakur},
  year = {2024},
  url = {https://pyimg.co/ujt7k},
}

To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below!

Download the Source Code and FREE 17-page Resource Guide

Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you’ll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!

The post Enhancing GitHub Actions CI for FastAPI: Build, Test, and Publish appeared first on PyImageSearch.

