124 changes: 124 additions & 0 deletions SKILL.md
@@ -0,0 +1,124 @@
---
name: screenpy
description: Composition-based Screenplay Pattern test framework for Python. Use when writing or editing automated tests in Python following the Screenplay Pattern architecture.
---

# ScreenPy

Composition-based Screenplay Pattern test framework for Python. Actors are granted Abilities, perform Actions, ask Questions, and verify the answers against Resolutions, all via **protocols** (structural subtyping); no base classes are required.

**ScreenPy is the core.** Domain-specific Abilities, Actions, Questions, and Adapters come from extension packages (`screenpy_selenium`, `screenpy_requests`, `screenpy_playwright`, `screenpy_appium`, `screenpy_adapter_allure`). Always check which extensions a project has installed.

**When writing tests**, follow the coding style and conventions of the project under test — match its existing file layout, naming patterns, fixture style, and any custom Actions/Tasks/Questions already in use. Leverage ScreenPy's aliases (e.g. `See(...)` over `See.the(...)`, `Equals` over `IsEqualTo`) to make test code read as close to natural English as possible.

## Protocols

| Concept | Protocol | Method | Purpose |
|---|---|---|---|
| Ability | `Forgettable` | `forget()` | Access to a tool/resource |
| Action/Task | `Performable` | `perform_as(actor)` | Something an Actor does |
| Question | `Answerable` | `answered_by(actor)` | Retrieves a value |
| Resolution | `Resolvable` | `resolve() → Matcher` | Expected-value matcher |

Optional: `Describable` (`describe() → str`) for logging descriptions.
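
Because conformance is structural, any class with the right method shape satisfies a protocol. A minimal sketch of the idea using `typing.Protocol` (illustrative only; these are not ScreenPy's actual protocol definitions, and `TakeABreath` is a made-up Action):

```python
from typing import Any, Protocol, runtime_checkable


# Illustrative stand-in for ScreenPy's Performable protocol, NOT its
# real definition. Conformance is checked by method shape alone.
@runtime_checkable
class Performable(Protocol):
    def perform_as(self, actor: Any) -> None: ...


class TakeABreath:  # no base class; having perform_as is enough
    def perform_as(self, actor: Any) -> None:
        pass


print(isinstance(TakeABreath(), Performable))  # → True
```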

## Test Flow

```python
from screenpy import AnActor, given, when, then
from screenpy.actions import See
from screenpy.resolutions import IsEqualTo

Perry = AnActor.named("Perry").who_can(SomeAbility())

given(Perry).was_able_to(SetUp()) # arrange
when(Perry).attempts_to(DoAction()) # act
then(Perry).should( # assert
See.the(SomeQuestion(), IsEqualTo("expected")),
)
Perry.exit() # cleanup + forget abilities
```

`given`/`when`/`then`/`and_` are identity functions returning the Actor. Actor method aliases: `was_able_to`, `did`, `attempts_to`, `tries_to`, `will`, `does`, `should`, `shall` — all equivalent.
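
Since the BDD wrappers carry no logic, the statement above can be sketched as (a simplified assumption of the mechanism; `Actor` here is a hypothetical stand-in):

```python
# Sketch of the identity-function wrappers: each simply returns the
# Actor it is given, so the call chain reads like English.
def given(the_actor):
    return the_actor

when = then = and_ = given


class Actor:  # hypothetical stand-in for demonstration
    pass


perry = Actor()
print(given(perry) is perry)  # → True
```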

## Built-In Actions

`See`, `SeeAllOf`, `SeeAnyOf` — assertions. `Eventually` — retry until timeout. `Either(...).or_(...)` — try/fallback. `Silently` — suppress narration unless error. `MakeNote`/`Log` — store/log values. `Pause` — sleep (requires reason). `Debug` — breakpoint. `AttachTheFile`, `Stop`.

Aliases exist: `Assert=See`, `Quietly=Silently`, `Try=Either`, `Sleep=Pause`, etc.

## Built-In Resolutions

`IsEqualTo` (`Equals`), `IsNot` (`DoesNot`), `ContainsTheText`, `ReadsExactly`, `ContainsTheItem`, `ContainsTheEntry`, `ContainsTheKey`, `ContainsTheValue`, `ContainsItemMatching`, `HasLength`, `IsEmpty`, `IsGreaterThan`, `IsGreaterThanOrEqualTo`, `IsLessThan`, `IsLessThanOrEqualTo`, `IsCloseTo`, `IsInRange`, `StartsWith`, `EndsWith`, `Matches`. All wrap PyHamcrest matchers.

## Pacing & Narration

Decorators narrate through adapters on `the_narrator` (default: `StdOutAdapter`).

- `@act("title")` — suite grouping
- `@scene("title")` — feature grouping
- `@beat("{} does {thing}.")` — step narration (`{}` = actor name, `{thing}` = `self.thing`)
- `aside("message")` — ad-hoc log line

```python
the_narrator.attach_adapter(SomeAdapter()) # add adapters in conftest.py
```
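
The `@beat` placeholders behave like `str.format` fields: `{}` takes the actor's name and `{thing}` takes the instance's `thing` attribute. A sketch of that substitution with plain `str.format` (an assumption about the mechanism; the real narrator does more than this):

```python
# Sketch of beat-template filling: "{}" is the actor's name, "{thing}"
# is looked up from self.thing on the decorated object.
class EatsThe:
    def __init__(self, thing):
        self.thing = thing


line = "{} does {thing}.".format("Perry", thing=EatsThe("cucumber").thing)
print(line)  # → Perry does cucumber.
```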

## Director & Notes

```python
when(Perry).attempts_to(MakeNote.of_the(Q()).as_("key"))
# MUST be a separate call — noted_under evaluates at argument-build time
then(Perry).should(See.the(Q2(), IsEqualTo(noted_under("key"))))
```
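
The reason for the separate calls: Python evaluates arguments eagerly, so `noted_under("key")` runs while the `See` call is being *built*. If the `MakeNote` in the same `attempts_to` has not performed yet, the note does not exist. A sketch of the gotcha with a plain dict (simplified assumption; not ScreenPy's Director implementation):

```python
# noted_under looks up the note when the argument is BUILT, so the
# note must already have been made in an earlier, completed call.
notes = {}


def make_note(key, value):
    notes[key] = value


def noted_under(key):
    return notes[key]  # KeyError if evaluated before the note exists


make_note("memo", "this is only a test")  # earlier call completes first
print(noted_under("memo"))  # → this is only a test
```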

## Writing Custom Components

**Ability:** class with `forget()`. **Action/Task:** class with `perform_as(actor)` + `@beat`. **Question:** class with `answered_by(actor)`. **Resolution:** class with `describe()` + `resolve() → Matcher` + `@beat`.

```python
# Action example
class ClickOn:
    def __init__(self, target): self.target = target

    @beat("{} clicks on the {target}.")
    def perform_as(self, the_actor):
        the_actor.ability_to(BrowseTheWeb).browser.find_element(*self.target).click()

    def describe(self): return f"Click on the {self.target}."

# Resolution example
class IsPalpable:
    def describe(self): return "A palpable tension."

    @beat("... hoping it's palpable.")
    def resolve(self): return has_saturation_greater_than(85)
```

## Settings

Override via code (`settings.TIMEOUT = 60`), env var (`SCREENPY_TIMEOUT=60`), or `[tool.screenpy]` in `pyproject.toml`.
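
One plausible precedence for those three sources, sketched below, is that a code override wins, then the environment variable, then the file value (this ordering and the default of `20` are assumptions, not confirmed ScreenPy behavior):

```python
import os


# Sketch of assumed settings precedence: code override, then the
# SCREENPY_TIMEOUT env var, then the pyproject.toml / built-in default.
def effective_timeout(code_override=None, file_default=20):
    if code_override is not None:
        return code_override
    env = os.environ.get("SCREENPY_TIMEOUT")
    if env is not None:
        return float(env)
    return file_default


print(effective_timeout(60))  # → 60
```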

## Cleanup

```python
Perry.has_ordered_cleanup_tasks(A(), B()) # stops on first failure
Perry.has_independent_cleanup_tasks(C(), D()) # runs all regardless
Perry.exit() # runs cleanup + forget
```
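
The difference between the two task lists can be sketched as follows (a simplified assumption; the real `exit()` also forgets the Actor's Abilities):

```python
# Ordered cleanup stops at the first failure; independent cleanup
# attempts every task and collects any errors.
def run_ordered(tasks):
    for task in tasks:
        task()  # an exception here skips the remaining tasks


def run_independent(tasks):
    errors = []
    for task in tasks:
        try:
            task()
        except Exception as exc:
            errors.append(exc)  # record and keep going
    return errors


ran = []
run_independent([lambda: ran.append("C"), lambda: ran.append("D")])
print(ran)  # → ['C', 'D']
```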

## Patterns

```python
# Retry
actor.will(Eventually(See.the(Q(), Equals("done"))).for_(30).seconds())

# Try/fallback
actor.will(Either(TryThis()).or_(DoThat()).ignoring(ValueError))

# Suppress narration
actor.will(Silently(NoisyTask()))

# Multiple assertions
then(actor).should(See.the(Q1(), R1()), See.the(Q2(), R2()))
then(actor).should(SeeAllOf.the((Q1(), R1()), (Q2(), R2())))  # same as above
then(actor).should(SeeAnyOf.the((Q1(), R1()), (Q2(), R2())))  # passes if any match
```
126 changes: 126 additions & 0 deletions docs/agent_skills.rst
@@ -0,0 +1,126 @@
===================
AI Agent Skills
===================

ScreenPy ships with a ``SKILL.md`` file
at the root of the repository.
This file teaches AI coding agents
(such as GitHub Copilot, Cursor, or Claude)
how to write Screenplay Pattern tests
using ScreenPy.

Each official ScreenPy extension
will also include its own ``SKILL.md``
describing the Abilities,
Actions,
Questions,
and Resolutions it provides.

Any custom extensions you create
can follow this pattern
by adding their own ``SKILL.md`` files.

What is a Skill File?
=====================

A skill file is a concise reference document
written for AI agents,
not humans.
It describes:

* The concepts and protocols in the library.
* The available Actions,
Resolutions,
and other components.
* Common patterns and gotchas.
* How to write custom components.

Agents use this context
to generate correct,
idiomatic ScreenPy tests
without needing to read
the full documentation.

How to Use the Skills
=====================

There are several ways
to make these skills available
to your AI agent.

Workspace Context (Recommended)
-------------------------------

Most modern AI coding tools
automatically index files
in your workspace or repository.
If ScreenPy and any extensions
are installed as editable packages,
or their source is present in your workspace,
the agent will discover
the ``SKILL.md`` files automatically.

Custom Instructions
-------------------

Many AI tools support custom instruction files
that are loaded into every conversation.
You can reference the ScreenPy skills
from your project's instruction file.

For **GitHub Copilot**,
create a ``.github/copilot-instructions.md``::

This project uses ScreenPy for testing.
Refer to the SKILL.md files in the screenpy,
screenpy_selenium, and screenpy_requests packages
for usage patterns and conventions.

For **Cursor**,
add a ``.cursorrules`` file
with similar content.

For **Claude Projects**,
paste the relevant skill files
into the project knowledge.

Concatenated Context
--------------------

For token-constrained setups,
you can concatenate the core skill
and any extension skills
into a single file::

cat \
.venv/lib/python3.*/site-packages/screenpy/SKILL.md \
.venv/lib/python3.*/site-packages/screenpy_selenium/SKILL.md \
> .agent-context/screenpy-skills.md

Then point your agent's custom instructions
at that combined file.

Tips
====

* **Let the agent match your style.**
The core skill instructs agents
to follow the conventions
of your existing test suite.
The more consistent your tests are,
the better the agent's output will be.

* **Prefer aliases for readability.**
The skill encourages agents
to use ScreenPy's natural-language aliases
(e.g. ``See(...)`` over ``See.the(...)``,
``Equals`` over ``IsEqualTo``)
so generated tests read like English.

* **Extension skills expand vocabulary.**
The core skill teaches the framework;
extension skills teach the domain-specific parts.
An agent with both
will generate tests
that use the right Actions and Questions
for your technology stack.
1 change: 1 addition & 0 deletions docs/index.rst
@@ -29,5 +29,6 @@ and maintain.
narration
director
filehierarchy
agent_skills
deprecations
context
15 changes: 15 additions & 0 deletions example_test/conftest.py
@@ -0,0 +1,15 @@
"""Fixtures for the example ScreenPy test suite."""

from collections.abc import Generator

import pytest

from screenpy import AnActor


@pytest.fixture
def Perry() -> Generator[AnActor, None, None]:
"""Provide an Actor named Perry for each test."""
the_actor = AnActor.named("Perry")
yield the_actor
the_actor.exit()
50 changes: 50 additions & 0 deletions example_test/test_examples.py
@@ -0,0 +1,50 @@
"""Example tests demonstrating core ScreenPy functionality."""

from screenpy import AnActor, then, when
from screenpy.actions import Log, MakeNote, See, SeeAllOf
from screenpy.directions import noted_under
from screenpy.resolutions import (
ContainsTheItem,
ContainsTheText,
IsEqualTo,
IsLessThan,
Matches,
)


class TestLogging:
"""Tests for the Log action and a simple assertion."""

def test_log_a_message(self, Perry: AnActor) -> None:
"""An Actor can log a value and assert True is True."""
when(Perry).attempts_to(Log("This is a test!"))

then(Perry).should(See(True, IsEqualTo(True)))


class TestMakeNote:
"""Tests for the MakeNote action and noted_under direction."""

def test_make_note_and_recall(self, Perry: AnActor) -> None:
"""An Actor can note a value and assert against it later."""
when(Perry).attempts_to(
MakeNote.of("this is only a test").as_("memo"),
)

then(Perry).should(
See(noted_under("memo"), ContainsTheText("test")),
)


class TestSeeAllOf:
"""Tests for the SeeAllOf action with multiple assertions."""

def test_see_all_of_multiple_checks(self, Perry: AnActor) -> None:
"""An Actor can verify several conditions at once."""
then(Perry).should(
SeeAllOf.the(
(1, IsLessThan(2)),
([1, 2, 3], ContainsTheItem(3)),
("blahdy blah", Matches(r".*?y b.*?")),
),
)