TecNimbus

Let’s build better software—and a more balanced life—together.


I Asked AI to Test My API — Here’s What Happened (Playwright + VS Code + Copilot)

AI is quickly changing how developers write code, debug issues, and even design systems. But one question many testers and developers are starting to ask is: can AI actually write useful API tests?

As someone working in test automation, I was curious to see how far AI has come in this space. Instead of manually writing API tests, I decided to run a small experiment. I gave an AI assistant an OpenAPI specification and asked it to generate automated API tests. Then I executed those tests using Playwright inside Visual Studio Code to see how well they worked in practice.

The idea was simple:

  1. Provide AI with an OpenAPI specification
  2. Ask it to generate automated API tests
  3. Run those tests using Playwright
  4. Analyze the results

Sounds straightforward, but the results were more interesting than I expected.

In this article, I’ll walk through the full experiment step by step—from setting up the environment and generating tests with AI, to running them and evaluating how reliable they actually are. If you’re a QA engineer, developer, or someone curious about the future of AI in testing, this experiment might give you a practical look at where things stand today.

Let’s see what happens when AI becomes your API test engineer.

✅ 1. Create a Playwright Project and Set Up AI in VS Code

For this experiment, I didn’t need to create a new Playwright project since I’m using an existing project. If you were starting fresh, you could run:

npm init playwright@latest

and follow the prompts to set up a basic project structure.

Since I already had a project ready, I focused on setting up AI inside Visual Studio Code using GitHub Copilot:

  1. Open Extensions in VS Code
  2. Search for GitHub Copilot Chat
  3. Install the extension

After installing, sign in with your GitHub account when prompted. Once signed in, the Copilot Chat panel is ready to generate code directly inside your project.

With the existing Playwright project and Copilot set up, the environment is ready to start generating AI-powered API tests.

✅ 2. Add the OpenAPI Spec (Petstore Swagger)

For this experiment, I used the Swagger Petstore OpenAPI specification, which provides a simple example API that’s perfect for testing.

I placed the Petstore OpenAPI JSON file in a folder called api-spec inside my project:

project-root/
└── api-spec/
    └── petstore.json

Having the OpenAPI spec in the project allows the AI to read it and generate Playwright tests based on the endpoints and responses defined in the file.
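For context, the pet-related paths in the Petstore spec look roughly like this (heavily abridged — the real file also defines parameters, request bodies, schemas, and response codes):

```json
{
  "paths": {
    "/pet": {
      "post": { "summary": "Add a new pet to the store" },
      "put": { "summary": "Update an existing pet" }
    },
    "/pet/{petId}": {
      "get": { "summary": "Find pet by ID" },
      "delete": { "summary": "Deletes a pet" }
    }
  }
}
```

These paths and summaries are what the AI has to work with when deciding which tests to generate.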

✅ 3. Generate Playwright Tests with AI

With the OpenAPI spec ready, I asked AI to generate Playwright tests specifically for the pet object.

I used GitHub Copilot Chat in VS Code and provided the following prompt:

You are a Playwright API testing expert.

Using petstore.json OpenAPI spec given in the api-spec folder, generate Playwright API tests for pet object only in a new directory.

Requirements:
- Use Playwright test runner
- Use request fixture
- Validate response status
- Validate response structure
- Create one test per endpoint

Within a few seconds, the AI generated a test suite in a new folder (for example tests/pet).

This gave a ready-to-run test suite for the pet endpoints, saving a lot of time compared to writing all tests manually.
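To give a sense of the output, here is a sketch of the kind of test the AI produced for one endpoint. This is illustrative, not the exact generated code; the base URL and property names are assumptions based on the public Petstore demo API and its Pet schema:

```typescript
import { test, expect } from '@playwright/test';

// Public Petstore demo API (assumed base URL).
const BASE_URL = 'https://petstore.swagger.io/v2';

test('GET /pet/{petId} returns a pet with the expected structure', async ({ request }) => {
  // The `request` fixture gives us an API request context without a browser.
  const response = await request.get(`${BASE_URL}/pet/1`);

  // Validate response status.
  expect(response.status()).toBe(200);

  // Validate response structure against the Pet schema from the spec.
  const pet = await response.json();
  expect(pet).toHaveProperty('id');
  expect(pet).toHaveProperty('name');
  expect(pet).toHaveProperty('photoUrls');
});
```

Worth noting: the public Petstore demo resets its data periodically, so a hard-coded `petId` like this can legitimately return 404 on any given run — a common source of flakiness when generated tests run against live demo services.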

Next, it was time to run the AI-generated tests. Since the goal was to see how well the AI did, I decided to run all the generated tests without any modifications.

In the terminal, from the project root, I executed:
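Assuming the generated tests landed in `tests/pet` as in the example above, a typical command for this is:

```shell
npx playwright test tests/pet
```

Running `npx playwright test` with no arguments would also work, but would execute the entire suite rather than just the generated folder.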

Playwright ran all the tests in the generated folder. Of the 8 generated tests, 5 passed and 3 failed. At this stage, I didn’t change anything; this was purely the AI’s first attempt.

We’ll analyze the results in the next step to understand why some tests passed, why others failed, and what can be improved.