
Planet Python

Last update: January 14, 2026 07:44 PM UTC

January 14, 2026


Mike Driscoll

How to Type Hint a Decorator in Python

Decorators are a concept that can trip up new Python users. You may find this definition helpful: A decorator is a function that takes in another function and adds new functionality to it without modifying the original function.

Functions can be used just like any other data type in Python. A function can be passed to a function or returned from a function, just like a string or integer.
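If that idea is new to you, here is a minimal sketch (the shout, apply, and make_greeter names are purely illustrative) of passing a function to another function and returning a function from a function:

def shout(text: str) -> str:
    """Return the text in uppercase."""
    return text.upper()

def apply(func, value):
    # The function object is passed in like any other argument
    return func(value)

def make_greeter():
    # Define and return a brand-new function object
    def greeter(name: str) -> str:
        return f"Hello, {name}!"
    return greeter

print(apply(shout, "hello"))     # HELLO
print(make_greeter()("Python"))  # Hello, Python!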

If you have jumped on the type-hinting bandwagon, you will probably want to add type hints to your decorators. That has been difficult until fairly recently.

Let’s see how to type hint a decorator!

Type Hinting a Decorator the Wrong Way

You might think that you can use a TypeVar to type hint a decorator. You will try that first.

Here’s an example:

from functools import wraps
from typing import Any, Callable, TypeVar


Generic_function = TypeVar("Generic_function", bound=Callable[..., Any])

def info(func: Generic_function) -> Generic_function:
    @wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        print('Function name: ' + func.__name__)
        print('Function docstring: ' + str(func.__doc__))
        result = func(*args, **kwargs)
        return result
    return wrapper

@info
def doubler(number: int) -> int:
    """Doubles the number passed to it"""
    return number * 2

print(doubler(4))

If you run mypy --strict info_decorator.py, you will get the following output:

info_decorator.py:14: error: Incompatible return value type (got "_Wrapped[[VarArg(Any), KwArg(Any)], Any, [VarArg(Any), KwArg(Any)], Any]", expected "Generic_function")  [return-value]
Found 1 error in 1 file (checked 1 source file)

That’s a confusing error! Feel free to search for an answer.

The answers that you find will probably vary from just ignoring the function (i.e. not type hinting it at all) to using something called a ParamSpec.

Let’s try that next!

Using a ParamSpec for Type Hinting

The ParamSpec is a class in Python’s typing module. Here’s what the docstring says about ParamSpec:

class ParamSpec(object):
  """ Parameter specification variable.
  
  The preferred way to construct a parameter specification is via the
  dedicated syntax for generic functions, classes, and type aliases,
  where the use of '**' creates a parameter specification::
  
      type IntFunc[**P] = Callable[P, int]
  
  For compatibility with Python 3.11 and earlier, ParamSpec objects
  can also be created as follows::
  
      P = ParamSpec('P')
  
  Parameter specification variables exist primarily for the benefit of
  static type checkers.  They are used to forward the parameter types of
  one callable to another callable, a pattern commonly found in
  higher-order functions and decorators.  They are only valid when used
  in ``Concatenate``, or as the first argument to ``Callable``, or as
  parameters for user-defined Generics. See class Generic for more
  information on generic types.
  
  An example for annotating a decorator::
  
      def add_logging[**P, T](f: Callable[P, T]) -> Callable[P, T]:
          '''A type-safe decorator to add logging to a function.'''
          def inner(*args: P.args, **kwargs: P.kwargs) -> T:
              logging.info(f'{f.__name__} was called')
              return f(*args, **kwargs)
          return inner
  
      @add_logging
      def add_two(x: float, y: float) -> float:
          '''Add two numbers together.'''
          return x + y
  
  Parameter specification variables can be introspected. e.g.::
  
      >>> P = ParamSpec("P")
      >>> P.__name__
      'P'
  
  Note that only parameter specification variables defined in the global
  scope can be pickled.
   """

In short, you use a ParamSpec to construct a parameter specification for a generic function, class, or type alias.

To see what that means in code, you can update the previous decorator to look like this: 

from functools import wraps
from typing import Callable, ParamSpec, TypeVar


P = ParamSpec("P")
R = TypeVar("R")

def info(func: Callable[P, R]) -> Callable[P, R]:
    @wraps(func)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print('Function name: ' + func.__name__)
        print('Function docstring: ' + str(func.__doc__))
        return func(*args, **kwargs)
    return wrapper

@info
def doubler(number: int) -> int:
    """Doubles the number passed to it"""
    return number * 2

print(doubler(4))

Here, you create a ParamSpec and a TypeVar. You tell the decorator that it takes in a Callable with a generic set of parameters (P), and you use TypeVar (R) to specify a generic return type.

If you run mypy on this updated code, it will pass! Good job!

What About PEP 695?

PEP 695 adds a new wrinkle to type hinting decorators by updating the parameter specification syntax in Python 3.12.

The main thrust of this PEP is to “simplify” the way you specify type parameters within a generic class, function, or type alias.

In a lot of ways, it does clean up the code, as you no longer need to import ParamSpec or TypeVar when using this new syntax. Instead, it feels almost magical.

Here’s the updated code:

from functools import wraps
from typing import Callable


def info[**P, R](func: Callable[P, R]) -> Callable[P, R]:
    @wraps(func)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print('Function name: ' + func.__name__)
        print('Function docstring: ' + str(func.__doc__))
        return func(*args, **kwargs)
    return wrapper

@info
def doubler(number: int) -> int:
    """Doubles the number passed to it"""
    return number * 2

print(doubler(4))

Notice that at the beginning of the function you have square brackets. That is basically declaring your ParamSpec implicitly. The “R” is again the return type. The rest of the code is the same as before.

When you run mypy against this version of the type hinted decorator, you will see that it passes happily.

Wrapping Up

Type hinting can still be a hairy subject, but the newer the Python version that you use, the better the type hinting capabilities are.

Of course, since Python itself doesn’t enforce type hints, you can just skip all this too. But if your employer likes type hinting, hopefully this article will help you out.

Related Reading

The post How to Type Hint a Decorator in Python appeared first on Mouse Vs Python.

January 14, 2026 05:04 PM UTC


Real Python

How to Create a Django Project

Before you can start building your Django web application, you need to set up your Django project. In this guide you’ll learn how to create a new Django project in four straightforward steps and only six commands:

Step | Description | Command
1a | Set up a virtual environment | python -m venv .venv
1b | Activate the virtual environment | source .venv/bin/activate
2a | Install Django | python -m pip install django
2b | Pin your dependencies | python -m pip freeze > requirements.txt
3 | Set up a Django project | django-admin startproject <projectname>
4 | Start a Django app | python manage.py startapp <appname>

The tutorial focuses on the initial steps you’ll always need to start a new web application.

Use this tutorial as your go-to reference until you’ve built so many projects that the necessary commands become second nature. Until then, follow the steps outlined below and in the command reference, or download the PDF cheatsheet as a printable reference:

Free Bonus: Click here to download the Django Project cheat sheet that assembles all important commands and tips on one page that’s easy to print.

There are also a few exercises throughout the tutorial to help reinforce what you’re learning, and you can test your knowledge in the associated quiz:

Take the Quiz: Test your knowledge with our interactive “How to Create a Django Project” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

How to Create a Django Project

Check your Django setup skills. Install safely and pin requirements, create a project and an app. Start building your first site.

Get Your Code: Click here to download the free sample code that shows you how to create a Django project.

Prerequisites

Before you start creating your Django project, make sure you have the right tools and knowledge in place. This tutorial assumes you’re comfortable working with the command line, but you don’t need to be an expert. Here’s what you’ll need to get started:

You don’t need any prior Django experience to complete this guide. However, to build functionality beyond the basic scaffolding, you’ll need to know Python basics and at least some Django.

Step 1: Prepare Your Environment

When you’re ready to start your new Django web application, create a new folder and navigate into it. In this folder, you’ll set up a new virtual environment using your terminal:

Windows PowerShell
PS> python -m venv .venv
Shell
$ python3 -m venv .venv

This command sets up a new virtual environment named .venv in your current working directory. Once the process is complete, you also need to activate the virtual environment:

Windows PowerShell
PS> .venv\Scripts\activate
Shell
$ source .venv/bin/activate

If the activation was successful, then you’ll see the name of your virtual environment, (.venv), at the beginning of your command prompt. This means that your environment setup is complete.

You can learn more about how to work with virtual environments in Python, and how to perfect your Python development setup, but for your Django setup, you have all you need. You can continue with installing the django package.

Step 2: Install Django and Pin Your Dependencies

Read the full article at https://realpython.com/django-setup/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

January 14, 2026 02:00 PM UTC

Quiz: How to Create a Django Project

In this quiz, you’ll test your understanding of creating a Django project.

By working through this quiz, you’ll revisit how to create and activate a virtual environment, install Django and pin your dependencies, start a Django project, and start a Django app. You will also see how isolating dependencies helps others reproduce your setup.

To revisit and keep learning, watch the video course on How to Set Up a Django Project.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

January 14, 2026 12:00 PM UTC


Armin Ronacher

Porting MiniJinja to Go With an Agent

Turns out you can just port things now. I already attempted this experiment in the summer, but it turned out to be a bit too much for what I had time for. However, things have advanced since. Yesterday I ported MiniJinja (a Rust Jinja2 template engine) to native Go, and I used an agent to do pretty much all of the work. In fact, I barely did anything beyond giving some high-level guidance on how I thought it could be accomplished.

In total I probably spent around 45 minutes actively with it. It worked for around 3 hours while I was watching, then another 7 hours alone. This post is a recollection of what happened and what I learned from it.

All prompting was done by voice using pi, starting with Opus 4.5 and switching to GPT-5.2 Codex for the long tail of test fixing.

What is MiniJinja

MiniJinja is a re-implementation of Jinja2 for Rust. I originally wrote it because I wanted to do an infrastructure automation project in Rust, and Jinja was popular for that. The original project didn’t go anywhere, but MiniJinja itself continued being useful for both me and other users.

The way MiniJinja is tested is with snapshot tests: inputs and expected outputs, using insta to verify they match. These snapshot tests were what I wanted to use to validate the Go port.

Test-Driven Porting

My initial prompt asked the agent to figure out how to validate the port. Through that conversation, the agent and I aligned on a path: reuse the existing Rust snapshot tests and port incrementally (lexer -> parser -> runtime).

This meant the agent built Go-side tooling to:

This resulted in a pretty good harness with a tight feedback loop. The agent had a clear goal (make everything pass) and a progression (lexer -> parser -> runtime). The tight feedback loop mattered particularly at the end where it was about getting details right. Every missing behavior had one or more failing snapshots.

Branching in Pi

I used Pi’s branching feature to structure the session into phases. I rewound back to earlier parts of the session and used the branch switch feature to inform the agent automatically what it had already done. This is similar to compaction, but Pi shows me what it puts into the context. When Pi switches branches it does two things:

  1. It stays in the same session so I can navigate around, but it makes a new branch off an earlier message.
  2. When switching, it adds a summary of what it did as a priming message into where it branched off. I found this quite helpful to avoid the agent doing vision quests from scratch to figure out how far it had already gotten.

Without switching branches, I would probably just make new sessions and have more plan files lying around or use something like Amp’s handoff feature which also allows the agent to consult earlier conversations if it needs more information.

First Signs of Divergence

What was interesting is that the agent went from literal porting to behavioral porting quite quickly. I didn’t steer it away from this as long as the behavior aligned. I let it do this for a few reasons. First, the code base isn’t that large, so I felt I could make adjustments at the end if needed. Letting the agent continue with what was already working felt like the right strategy. Second, it was aligning to idiomatic Go much better this way.

For instance, on the runtime it implemented a tree-walking interpreter (not a bytecode interpreter like Rust) and it decided to use Go’s reflection for the value type. I didn’t tell it to do either of these things, but they made more sense than replicating my Rust interpreter design, which was partly motivated by not having a garbage collector or runtime type information.

Where I Had to Push Back

On the other hand, the agent made some changes while making tests pass that I disagreed with. It completely gave up on all the “must fail” tests because the error messages were impossible to replicate perfectly given the runtime differences. So I had to steer it towards fuzzy matching instead.

It also wanted to regress behavior I wanted to retain (e.g., exact HTML escaping semantics, or that range must return an iterator). I think if I hadn’t steered it there, it might not have made it to completion without going down problematic paths, or I would have lost confidence in the result.

Grinding to Full Coverage

Once the major semantic mismatches were fixed, the remaining work was filling in all missing pieces: missing filters and test functions, loop extras, macros, call blocks, etc. Since I wanted to go to bed, I switched to Codex 5.2 and queued up a few “continue making all tests pass if they are not passing yet” prompts, then let it work through compaction. I felt confident enough that the agent could make the rest of the tests pass without guidance once it had the basics covered.

This phase ran without supervision overnight.

Final Cleanup

After functional convergence, I asked the agent to document internal functions and reorganize (like moving filters to a separate file). I also asked it to document all functions and filters like in the Rust code base. This was also when I set up CI, release processes, and talked through what was created to come up with some finalizing touches before merging.

Parting Thoughts

There are a few things I find interesting here.

First: these types of ports are possible now. I know porting was already possible for many months, but it required much more attention. This changes some dynamics. I feel less like technology choices are constrained by ecosystem lock-in. Sure, porting NumPy to Go would be a more involved undertaking, and getting it competitive even more so (years of optimizations in there). But still, it feels like many more libraries can be used now.

Second: for me, the value is shifting from the code to the tests and documentation. A good test suite might actually be worth more than the code. That said, this isn’t an argument for keeping tests secret — generating tests with good coverage is also getting easier. However, for keeping code bases in different languages in sync, you need to agree on shared tests, otherwise divergence is inevitable.

Lastly, there’s the social dynamic. Once, having people port your code to other languages was something to take pride in. It was a sign of accomplishment — a project was “cool enough” that someone put time into making it available elsewhere. With agents, it doesn’t invoke the same feelings. Will McGugan also called out this change.

Session Stats

Lastly, some boring stats for the main session:

This did not count the adding of doc strings and smaller fixups.

January 14, 2026 12:00 AM UTC

January 13, 2026


Gaël Varoquaux

Stepping up as probabl’s CSO to supercharge scikit-learn and its ecosystem


Probabl’s get-together, fall 2025

I’m thrilled to announce that I’m stepping up as Probabl’s CSO (Chief Science Officer) to supercharge scikit-learn and its ecosystem, pursuing my dreams of tools that help go from data to impact.

Scikit-learn, a central tool

Scikit-learn is central to data scientists’ work: it is the most used machine-learning package. It has grown over more than a decade, supported by volunteers’ time, donations, and grant funding, with a central role played by Inria.

Scikit-learn download numbers; reproduce and explore on clickpy

And the usage numbers keep going up…

Scikit-learn keeps growing because it enables crucial applications: machine-learning that can be easily adapted to a given application. This type of AI does not make the headlines, but it is central to the value brought by data science. It is used across the board to extract insights from data and automate business-specific processes, thus ensuring function and efficiency of a wide variety of activities.


And scikit-learn is quietly but steadily advancing. The recent releases bring progress in all directions: computational foundations (the array API enabling GPU support), user interface (rich HTML displays), new models (e.g., HDBSCAN, temperature-scaling recalibration, …), and, as always, algorithmic improvements (release 1.8 brought marked speed-ups to linear models and to trees with MAE).

A new opportunity to boost scikit-learn and its ecosystem

Probabl recently raised a beautiful seed round from investors who really understand the value and prospects of scikit-learn. We have a unique opportunity to accelerate scikit-learn’s development. Our analysis is that enterprises need dedicated tooling and partners to build best on scikit-learn, and we’re hard at work to provide this.

Two-thirds of Probabl’s founders are scikit-learn contributors, and we have been investing in all aspects of scikit-learn: features, releases, communication, documentation, and training. In addition, part of scikit-learn’s success has always been nurturing an ecosystem, for instance via its simple API that has become a standard. Thus Probabl is consolidating not only scikit-learn but also this ecosystem: the skops project, to put scikit-learn-based models in production; the skrub project, which facilitates data preparation; the young skore project, to track data science; fairlearn, to help avoid machine learning that discriminates; and more upstream projects, such as joblib for parallel computing.

My obsession as Probabl CSO: serving the data scientists

As CSO (Chief Science Officer) at Probabl, my role is to nourish our development strategy with an understanding of machine learning, data science, and open source. Making sure that scikit-learn and its ecosystem are enterprise-ready will bring resources for scikit-learn’s sustainability, enabling its ecosystem to grow into a standard-setting platform for the industry, one that continues to serve data scientists. This mission will require consolidating the existing tools and patterns, and inventing new ones.


Probabl is in a unique position for this endeavor: Our core is an amazing team of engineers with deep knowledge of data science. Working directly with businesses gives us an acute understanding of where the ecosystem can be improved. On this topic, I also profoundly enjoy working with people who have a different DNA than the historical DNA of scikit-learn, with product research, marketing, and business mindsets. I believe that the union of our different cultures will make the scikit-learn ecosystem better.

Beyond the Probabl team, we have an amazing community, with a broader group of scikit-learn contributors who do an amazing job bringing together what makes scikit-learn so versatile, and a deep ecosystem of Python data tools enriched by so many different actors. I’m deeply grateful to the many scikit-learn and pydata contributors. At Probabl, we are very attuned to enabling the open-source contributor community. Such a community is what enables a single tool, scikit-learn, to serve a long tail of diverse usages.

January 13, 2026 11:00 PM UTC


PyCoder’s Weekly

Issue #717: Unit Testing Performance, Cursor, Recursive match, and More (Jan. 13, 2026)

#717 – JANUARY 13, 2026
View in Browser »



Unit Testing Your Code’s Performance

Testing your code is important, not just for correctness but also for performance. One approach is to check how performance degrades as data sizes go up, also known as Big-O scaling.
ITAMAR TURNER-TRAURING

Tips for Using the AI Coding Editor Cursor

Learn Cursor fast: AI-powered coding with agents, project-aware chat, inline edits, and VS Code workflow – ship smarter, sooner.
REAL PYTHON course

AI Code Review With Comments You’ll Actually Implement


Unblocked is the AI code review that surfaces real issues and meaningful feedback instead of flooding your PRs with stylistic nitpicks and low-value comments. “Unblocked made me reconsider my AI fatigue. ” - Senior developer, Clio. Try now for Free →
UNBLOCKED sponsor

Recursive Structural Pattern Matching

Learn how to use structural pattern matching (the match statement) to work recursively through tree-like structures.
RODRIGO GIRÃO SERRÃO

PEP 822: Dedented Multiline String (d-String) (Draft)

PYTHON.ORG

PEP 820: PySlot: Unified Slot System for the C API (Draft)

PYTHON.ORG

PEP 819: JSON Package Metadata (Draft)

PYTHON.ORG

Django Bugfix Release: 5.2.10, 6.0.1

DJANGO SOFTWARE FOUNDATION

Articles & Tutorials

Coding Python With Confidence: Live Course Participants

Are you looking for that solid foundation to begin your Python journey? Would the accountability of scheduled group classes help you get through the basics and start building something? This week, two members of the Python for Beginners live course discuss their experiences.
REAL PYTHON podcast

Regex: Searching for the Tiger

Python’s re module is a robust toolset for writing regular expressions, but its behavior often deviates from other engines. Understanding the nuances of the interpreter and the Unicode standard is essential for writing predictable patterns.
SUBSTACK.COM • Shared by Vivis Dev

The Ultimate Guide to Docker Build Cache


Docker builds feel slow because cache invalidation is working against you. Depot explains how BuildKit’s layer caching works, when to use bind mounts vs cache mounts, and how to optimize your Dockerfile so Gradle dependencies don’t rebuild on every code change →
DEPOT sponsor

How We Made Python’s Packaging Library 3x Faster

Underneath pip, and many other packaging tools, is the packaging library which deals with version numbers and other associated markers. Recent work on the library has shown significant speed-up and this post talks about how it was done.
HENRY SCHREINER

Django Quiz 2025

Last month, Adam held another quiz at the December edition of Django London. This is an annual tradition at the meetup; now you can take it yourself or just skim the answers.
ADAM JOHNSON

Live Python Courses: Already 50% Sold for 2026

Real Python’s instructor-led cohorts are filling up. Python for Beginners builds your foundation right the first time. Intermediate Python Deep Dive covers decorators, OOP, and production patterns with real-time expert feedback. Grab a seat before they’re gone at realpython.com/live →
REAL PYTHON sponsor

A Different Way to Think About Python API Clients

Paul is frustrated with how clients interact with APIs in Python, so he’s proposing a new approach inspired by the many decorator-based API server libraries.
PAULWRITES.SOFTWARE • Shared by Paul Hallett

Learn From 2025’s Most Popular Python Tutorials and Courses

Pick from the best Python tutorials and courses of 2025. Revisit core skills, 3.14 updates, AI coding tools, and project walkthroughs. Kickstart your 2026!
REAL PYTHON

Debugging With F-Strings

If you’re debugging Python code with print calls, consider using f-strings with self-documenting expressions to make your debugging a little bit easier.
TREY HUNNER

How to Switch to ty From Mypy

The folks at Astral have created a type checker known as “ty”. This post describes how to move from Mypy to ty, including in your GitHub Actions.
MIKE DRISCOLL

Recent Optimizations in Python’s Reference Counting

This article highlights some of the many optimizations to reference counting that have occurred in recent CPython releases.
ARTEM GOLUBIN

Projects & Code

yastrider: Defensive String Cleansing and Tidying

GITHUB.COM/BARRANK

gazetteer: Offline Reverse Geocoding Library

GITHUB.COM/SOORAJTS2001

bengal: High-Performance Static Site Generator

GITHUB.COM/LBLIII

PyPDFForm: The Python Library for PDF Forms

GITHUB.COM/CHINAPANDAMAN

pyauto-desktop: A Desktop Automation Tool

GITHUB.COM/OMAR-F-RASHED

Events

Weekly Real Python Office Hours Q&A (Virtual)

January 14, 2026
REALPYTHON.COM

PyData Bristol Meetup

January 15, 2026
MEETUP.COM

PyLadies Dublin

January 15, 2026
PYLADIES.COM

Chattanooga Python User Group

January 16 to January 17, 2026
MEETUP.COM

DjangoCologne

January 20, 2026
MEETUP.COM

Inland Empire Python Users Group Monthly Meeting

January 21, 2026
MEETUP.COM


Happy Pythoning!
This was PyCoder’s Weekly Issue #717.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

January 13, 2026 07:30 PM UTC


Real Python

Intro to Object-Oriented Programming (OOP) in Python

Object-oriented programming (OOP) is one of the most significant and essential topics in programming. This course will give you a foundational conceptual understanding of object-oriented programming to help you elevate your Python skills.

You’ll learn how to define custom types using classes and how to instantiate those classes into Python objects that can be used throughout your program.

Finally, you’ll discover how classes can inherit from one another, with a brief introduction to inheritance, enabling you to write maintainable and less redundant Python code.
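To give you a flavor of what that looks like in practice, here is a minimal, hypothetical sketch of a class, an instance, and a subclass (the Animal and Dog names are just for illustration):

class Animal:
    """A custom type defined with a class."""

    def __init__(self, name):
        self.name = name

    def speak(self):
        return f"{self.name} makes a sound"

class Dog(Animal):
    # Dog inherits from Animal and overrides one method
    def speak(self):
        return f"{self.name} says woof"

rex = Dog("Rex")                # instantiate the class into an object
print(rex.speak())              # Rex says woof
print(isinstance(rex, Animal))  # True, thanks to inheritance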


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

January 13, 2026 02:00 PM UTC


Python Software Foundation

Anthropic invests $1.5 million in the Python Software Foundation and open source security

We are thrilled to announce that Anthropic has entered into a two-year partnership with the Python Software Foundation (PSF) to contribute a landmark total of $1.5 million to support the foundation’s work, with an emphasis on Python ecosystem security. This investment will enable the PSF to make crucial security advances to CPython and the Python Package Index (PyPI) benefiting all users, and it will also sustain the foundation’s core work supporting the Python language, ecosystem, and global community.

Innovating open source security

Anthropic’s funds will enable the PSF to make progress on our security roadmap, including work designed to protect millions of PyPI users from attempted supply-chain attacks. Planned projects include creating new tools for automated proactive review of all packages uploaded to PyPI, improving on the current process of reactive-only review. We intend to create a new dataset of known malware that will allow us to design these novel tools, relying on capability analysis. One of the advantages of this project is that we expect the outputs we develop to be transferable to all open source package repositories. As a result, this work has the potential to ultimately improve security across multiple open source ecosystems, starting with the Python ecosystem.

This work will build on PSF Security Developer in Residence Seth Larson’s security roadmap, with contributions from PyPI Safety and Security Engineer Mike Fiedler, both roles generously funded by Alpha-Omega.

Sustaining the Python language, ecosystem, and community

Anthropic’s support will also go towards the PSF’s core work, including the Developer in Residence program driving contributions to CPython, community support through grants and other programs, running core infrastructure such as PyPI, and more. We couldn’t be more grateful for Anthropic’s remarkable support, and we hope you will join us in thanking them for their investment in the PSF and the Python community.

About Anthropic


Anthropic is the AI research and development company behind Claude — the frontier model used by millions of people worldwide.

About the PSF

The Python Software Foundation is a non-profit whose mission is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers. The PSF supports the Python community using corporate sponsorships, grants, and donations. Are you interested in sponsoring or donating to the PSF so we can continue supporting Python and its community? Check out our sponsorship program, donate directly here, or contact our team!


January 13, 2026 08:00 AM UTC


Talk Python to Me

#534: diskcache: Your secret Python perf weapon

Your cloud SSD is sitting there, bored, and it would like a job. Today we’re putting it to work with DiskCache, a simple, practical cache built on SQLite that can speed things up without spinning up Redis or extra services. Once you start to see what it can do, a universe of possibilities opens up. We're joined by Vincent Warmerdam to dive into DiskCache.

Episode sponsors: Talk Python Courses (https://talkpython.fm/training) and Python in Production (https://talkpython.fm/devopsbook)

Links from the show:

  • diskcache docs: https://grantjenks.com/docs/diskcache/
  • LLM Building Blocks for Python course: https://training.talkpython.fm/courses/llm-building-blocks-for-python
  • JSONDisk: https://grantjenks.com/docs/diskcache/api.html#jsondisk
  • Git Code Archaeology Charts: https://koaning.github.io/gitcharts/#django/versioned
  • Talk Python Cache Admin UI: https://blobs.talkpython.fm/talk-python-cache-admin.png
  • Litestream SQLite streaming: https://litestream.io
  • Plash hosting: https://pla.sh
  • Watch this episode on YouTube: https://www.youtube.com/watch?v=ze7N_RE9KU0
  • Episode #534 deep-dive: https://talkpython.fm/episodes/show/534/diskcache-your-secret-python-perf-weapon#takeaways-anchor
  • Episode transcripts: https://talkpython.fm/episodes/transcript/534/diskcache-your-secret-python-perf-weapon

Theme Song: Developer Rap 🥁 Served in a Flask 🎸: https://talkpython.fm/flasksong

Don't be a stranger:

  • YouTube: https://talkpython.fm/youtube
  • Bluesky: @talkpython.fm
  • Mastodon: @talkpython@fosstodon.org
  • X.com: @talkpython
  • Michael on Bluesky: @mkennedy.codes
  • Michael on Mastodon: @mkennedy@fosstodon.org
  • Michael on X.com: @mkennedy
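To give you a taste before you listen, here is a minimal sketch based on DiskCache's documented Cache API; the cache directory, keys, and the expensive_lookup function are illustrative:

from diskcache import Cache

# The cache lives in a directory backed by SQLite; no extra services required
cache = Cache("./talk_python_cache")

# Basic key/value usage with an optional expiration in seconds
cache.set("greeting", "hello", expire=60)
print(cache.get("greeting"))  # hello

# Memoize an expensive function so repeated calls are served from disk
@cache.memoize(expire=300)
def expensive_lookup(user_id):
    # Imagine a slow database or API call here
    return {"id": user_id, "name": f"user-{user_id}"}

print(expensive_lookup(42))  # computed once, then cached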

January 13, 2026 05:32 AM UTC

January 12, 2026


Real Python

Python's deque: Implement Efficient Queues and Stacks

You can use Python’s deque for efficient appends and pops at both ends of a sequence-like data type. These capabilities are critical when you need to implement queue and stack data structures that operate efficiently even under heavy workloads.

In this tutorial, you’ll learn how deque works, when to use it over a list, and how to apply it in real code.

By the end of this tutorial, you’ll understand that:

  • deque internally uses a doubly linked list, so end operations are O(1) while random indexing is O(n).
  • You can build a FIFO queue with .append() and .popleft(), and a LIFO stack with .append() and .pop().
  • deque supports indexing but doesn’t support slicing.
  • Passing a value to maxlen creates a bounded deque that drops items from the opposite end when full.
  • In CPython, .append(), .appendleft(), .pop(), .popleft(), and len() are thread-safe for multithreaded use.

Up next, you’ll get started with deque, benchmark it against list, and explore how it shines in real-world use cases, such as queues, stacks, history buffers, and thread-safe producer-consumer setups.

Get Your Code: Click here to download the free sample code that shows you how to implement efficient queues and stacks with Python’s deque.

Take the Quiz: Test your knowledge with our interactive “Python's deque: Implement Efficient Queues and Stacks” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

Python's deque: Implement Efficient Queues and Stacks

Use Python's deque for fast queues and stacks. Refresh end operations, maxlen rollover, indexing limits, and thread-safe methods.

Get Started With Python’s deque

Appending to and popping from the right end of a Python list are efficient operations most of the time. Using the Big O notation for time complexity, these operations are O(1). However, when Python needs to reallocate memory to grow the underlying list to accept new items, these operations slow down and can become O(n).

In contrast, appending and popping items from the left end of a Python list are always inefficient and have O(n) time complexity.

Because Python lists provide both operations with the .append() and .pop() methods, you can use them as stacks and queues. However, the performance issues you saw before can significantly impact the overall performance of your applications.
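To see the difference yourself, here is a small, illustrative timeit comparison of left-end inserts on a list versus a deque (the item count is arbitrary):

from timeit import timeit

# Insert 100,000 items at the left end of a list vs. a deque
list_time = timeit(
    "items.insert(0, None)",
    setup="items = []",
    number=100_000,
)
deque_time = timeit(
    "items.appendleft(None)",
    setup="from collections import deque; items = deque()",
    number=100_000,
)

print(f"list.insert(0, ...):   {list_time:.3f} s")
print(f"deque.appendleft(...): {deque_time:.3f} s")

On a typical machine, the list version takes noticeably longer because every insert at index 0 shifts all the existing items.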

Python’s deque was the first data type added to the collections module back in Python 2.4. This data type was specially designed to overcome the efficiency problems of .append() and .pop() in Python lists.

A deque is a sequence-like data structure designed as a generalization of stacks and queues. It supports memory-efficient and fast append and pop operations on both ends.

Note: The word deque is pronounced as “deck.” The name stands for double-ended queue.

Append and pop operations on both ends of a deque object are stable and equally efficient because deques are implemented as a doubly linked list. Additionally, append and pop operations on deques are thread-safe and memory-efficient. These features make deques particularly useful for creating custom stacks and queues in Python.
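As a quick sketch of both patterns:

from collections import deque

# FIFO queue: append on the right, pop from the left
queue = deque()
queue.append("first")
queue.append("second")
print(queue.popleft())  # first

# LIFO stack: append and pop on the same (right) end
stack = deque()
stack.append("first")
stack.append("second")
print(stack.pop())  # second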

Deques are also a good choice when you need to keep a list of recently seen items, as you can restrict the maximum length of your deque. By setting a maximum length, once a deque is full, it automatically discards items from one end when you append new items to the opposite end.
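Here is a small illustration of that behavior, using an arbitrary maxlen of three:

from collections import deque

recent_pages = deque(maxlen=3)
for page in ["home", "about", "blog", "contact"]:
    recent_pages.append(page)

# Once full, appending on the right discards the oldest item on the left
print(recent_pages)  # deque(['about', 'blog', 'contact'], maxlen=3)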

Here’s a summary of the main features of deque:

To create deques, you just need to import deque from collections and call it with an optional iterable as an argument:

Python
>>> from collections import deque

>>> # Create an empty deque
>>> deque()
deque([])

>>> # Use different iterables to create deques
>>> deque((1, 2, 3, 4))
deque([1, 2, 3, 4])

>>> deque([1, 2, 3, 4])
deque([1, 2, 3, 4])

>>> deque(range(1, 5))
deque([1, 2, 3, 4])

>>> deque("abcd")
deque(['a', 'b', 'c', 'd'])

>>> numbers = {"one": 1, "two": 2, "three": 3, "four": 4}
>>> deque(numbers.keys())
deque(['one', 'two', 'three', 'four'])

>>> deque(numbers.values())
deque([1, 2, 3, 4])

>>> deque(numbers.items())
deque([('one', 1), ('two', 2), ('three', 3), ('four', 4)])

If you instantiate deque without providing an iterable as an argument, then you get an empty deque. If you provide an iterable, then deque initializes the new instance with data from it. The initialization goes from left to right using deque.append().

The deque initializer takes the following two optional arguments:

  1. iterable holds an iterable that provides the initialization data.
  2. maxlen holds an integer number that specifies the maximum length of the deque.

As mentioned previously, if you don’t supply an iterable, then you get an empty deque. If you provide a value to maxlen, then your deque will only store up to maxlen items.

Finally, you can also use unordered iterables, such as sets, to initialize your deques. In those cases, you won’t have a predefined order for the items in the final deque.

Read the full article at https://realpython.com/python-deque/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

January 12, 2026 02:00 PM UTC


The Python Coding Stack

Need a Constant in Python? Enums Can Come in Useful

Python doesn’t have constants. You probably learnt this early on when learning Python. Unlike many other programming languages, you can’t define a constant in Python. All variables are variable!

“Ah, but there are immutable types.”

Sure, you can have an object that doesn’t change throughout its lifetime. But you can’t have a reference to it that’s guaranteed not to change. The identifier (variable name) you use to refer to this immutable type can easily switch to refer to something else.

“How about using all caps for the identifier? Doesn’t that make it a constant?”

No, it doesn’t. That’s just a convention you use to show your intent as a programmer that an identifier refers to a value that shouldn’t change. But nothing prevents that value from changing.

Here’s an all-uppercase identifier that refers to an immutable object:

All code blocks are available in text format at the end of this article • #1

The identifier is all caps. The object is a tuple, which is immutable. Recall that you don’t need parentheses to create a tuple—the comma is sufficient.

So, you use an all-uppercase identifier for an immutable object. But that doesn’t stop you from changing the value of FIXED_LOCATION:

#2

Neither using an immutable object nor using uppercase identifiers prevents you from changing this value!

So, Python doesn’t have constants. But there are tools you can use to mimic constant behaviour depending on the use case you need. In this article I’ll explore one of these: Enums.


All The Python Coding Place video courses are included in a single, cost-effective bundle. The bundle covers beginner and intermediate levels, and you also get access to a members-only forum.

Get The All Courses Bundle


Jargon Corner: Enum is short for enumeration, and you’ll see why soon. But don’t confuse this with the built-in enumerate(), which does something else. See Parkruns, Python’s enumerate and zip, and Why Python Loops Are Different from Other Languages • [Note: This is a Club post] for more on enumerate().


Let’s revisit our friend Alex from an article from a short while ago: “AI Coffee” Grand Opening This Monday. This article explored the program Alex used in his new coffee shop and how the function signature changed over time to minimise confusion and errors when using it. It’s a fun article about all the various types and styles of parameters and arguments you can have in Python functions.

But it didn’t address another potential source of error when using this code. So let’s look at a simple version of the brew_coffee() function Alex used to serve his coffee-drinking customers:

#3

When you call the function, you pass the coffee you want to this function:

#4

And elsewhere in the code, these coffees are defined in a dictionary:

#5

If you’ve written code like this in the past, you’ll know that it’s rather annoying—and error-prone—to keep using the strings with the coffee names wherever you need to refer to a specific coffee, such as when passing the coffees to brew_coffee().

The names of the coffees and the parameters that define them do not change. They’re constant. It’s a shame Python doesn’t have constants, you may think.

But it has enums…

#6

The CoffeeType enum contains seven members. Each member has a name and a value. By convention, you use all-uppercase names for the members since they represent constants. And these enum members behave like constants:

#7

When you attempt to reassign a value to a member, Python raises an exception:

Traceback (most recent call last):
  File ..., line 12, in <module>
    CoffeeType.ESPRESSO = 10
    ^^^^^^^^^^^^^^^^^^^
  ...
AttributeError: cannot reassign member 'ESPRESSO'

The member names are also contained within the namespace of the Enum class—you use CoffeeType.ESPRESSO rather than just ESPRESSO outside the Enum class definition. So, you get autocomplete, refactor-friendly names, and fewer silent typos. With raw strings, "capuccino" (with a single “p”) can sneak into your code, and nothing complains until a customer is already waiting at the counter.

For these enum members to act as constants, their names must be unique. You can’t have the same name appear more than once:

#8

You include ESPRESSO twice with different values. But this raises an exception:

Traceback (most recent call last):
  File ..., line 3, in <module>
    ...
    ESPRESSO = 8
    ^^^^^^^^
  ...
TypeError: 'ESPRESSO' already defined as 1

That’s good news. Otherwise, these enum members wouldn’t be very useful as constants.

However, you can have an alias. You can have more than one member sharing the same value:

#9

The members MACCHIATO and ESPRESSO_MACCHIATO both have the value 4. Therefore, they represent the same item. They’re different names for the same coffee:

#10

Note that Python always displays the first member associated with a value:

CoffeeType.MACCHIATO

The output says CoffeeType.MACCHIATO even though you pass CoffeeType.ESPRESSO_MACCHIATO to print().

Incidentally, if you don’t want to have aliases, you can use the @unique decorator when defining the enum class.
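Here is a minimal sketch of what that looks like (reusing a few of the CoffeeType members from earlier); with @unique, defining an alias raises an error when the class is created instead of silently adding a second name:

from enum import Enum, unique

@unique
class CoffeeType(Enum):
    ESPRESSO = 1
    MACCHIATO = 4
    ESPRESSO_MACCHIATO = 4  # duplicate value: raises ValueError at class creation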


Join The Club, the exclusive area for paid subscribers for more Python posts for premium members, videos, a members’ forum, and more.


You can also access the name and value of an enum member:

#11

Here’s the output from this code:

CoffeeType.ESPRESSO
ESPRESSO
1

The .name attribute is a string, and the .value attribute is an integer in this case:

#12

Here’s the output when you display the types:

<enum 'CoffeeType'>
<class 'str'>
<class 'int'>

You’ll often use integers as values for enum members—that’s why they’re called enumerations. But you don’t have to:

#13

The values are now also strings:

CoffeeType.ESPRESSO
ESPRESSO
espresso

You can use these enum members instead of strings wherever you need to refer to each coffee type:

#14

…and again when you call brew_coffee():

#15

Now you have a safer, neater, and more robust way to handle the coffee types... and treat them as constants.

A Bit More • StrEnum and IntEnum

Let’s add some code to brew_coffee():

#16

This version is almost fine. But here’s a small problem:

Brewing a CoffeeType.CORTADO with 30ml of coffee and 
    60ml of milk. Strength level: 2

The output displays CoffeeType.CORTADO since coffee_type refers to an enum member. You’d like the output to just show the name of the coffee! Of course, you can use the .value attribute any time you need to fetch the string.

However, to make your coding simpler and more readable, you can ensure that the enum members are also strings themselves without having to rely on one of their attributes. You can use StrEnum instead of Enum:

#17

Members of a StrEnum also inherit all the string methods, such as:

#18

You call the string method .title() directly on the StrEnum member:

Macchiato

There’s also an IntEnum that can be useful when you want your enum members to act as integers. Let’s replace the coffee strength values, which are currently integers, with IntEnum members:

#19

You could use a standard Enum in this case. But using an IntEnum allows you to manipulate its members directly as integers should you need to do so. Here’s an example:

#20

This code is equivalent to printing 3 + 1. You wouldn’t be able to do this with enums unless you use the .value attributes.

And A Couple More Things About Enums

Let’s explore a couple of other useful enum features before we wrap up this article.

An enum class is iterable. Here are all the coffee types in a for loop:

#21

Note that CoffeeType is the class name. But it’s an enum (a StrEnum in this case), so it’s iterable:

Brewing a espresso with 30ml of coffee and 0ml of milk. Strength level: 3
Brewing a latte with 30ml of coffee and 150ml of milk. Strength level: 1
Brewing a cappuccino with 30ml of coffee and 100ml of milk. Strength level: 2
Brewing a macchiato with 30ml of coffee and 10ml of milk. Strength level: 3
Brewing a flat_white with 30ml of coffee and 120ml of milk. Strength level: 2
Brewing a ristretto with 20ml of coffee and 0ml of milk. Strength level: 4
Brewing a cortado with 30ml of coffee and 60ml of milk. Strength level: 2

I’ll let you sort out the text displayed to make sure you get ‘an espresso’ when brewing an espresso and to remove the underscore in the flat white!

And there will be times when you don’t care about the value of an enum member. You just want to use an enum to give your constants a consistent name. In this case, you can use the automatic value assignment:

#22

Python assigns integers incrementally in the order you define the members for Enum classes. Note that these start from 1, not 0.

The same integers are used if you use IntEnum classes. However, when you use StrEnum classes, Python behaves differently since the values should be strings in this case:

#23

The values are now the lowercase strings representing the members’ names.

Of course, the default values you get when you use auto() may be the values you need, after all. This is the case for both enums you created in this article, CoffeeType and CoffeeStrength:

#24

Using auto() when appropriate makes it easier to write your code and expand it later if you need to add more enum members.

Final Words

You can get by without ever using enums. But there are many situations where you’d love to reach for a constant, and an enum will do just fine. Sure, Python doesn’t have constants. But it has enums!

Photo by Valeria Boltneva


Code in this article uses Python 3.14

The code images used in this article are created using Snappify. [Affiliate link]

Join The Club, the exclusive area for paid subscribers for more Python posts, videos, a members’ forum, and more.

Subscribe now

You can also support this publication by making a one-off contribution of any amount you wish.

Support The Python Coding Stack


For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!

Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.

And you can find out more about me at stephengruppetta.com


Appendix: Code Blocks

Code Block #1
FIXED_LOCATION = 51.75, 0.34
Code Block #2
FIXED_LOCATION
# (51.75, 0.34)
FIXED_LOCATION = "Oops!"
FIXED_LOCATION
# 'Oops!'
Code Block #3
def brew_coffee(coffee_type):
    # Actual code goes here...
    # It's not relevant for this article
Code Block #4
brew_coffee("espresso")
brew_coffee("cappuccino")
Code Block #5
coffee_types = {
    "espresso": {"strength": 3, "coffee_amount": 30, "milk_amount": 0},
    "latte": {"strength": 1, "coffee_amount": 30, "milk_amount": 150},
    "cappuccino": {"strength": 2, "coffee_amount": 30, "milk_amount": 100},
    "macchiato": {"strength": 3, "coffee_amount": 30, "milk_amount": 10},
    "flat_white": {"strength": 2, "coffee_amount": 30, "milk_amount": 120},
    "ristretto": {"strength": 4, "coffee_amount": 20, "milk_amount": 0},
    "cortado": {"strength": 2, "coffee_amount": 30, "milk_amount": 60},
}
Code Block #6
from enum import Enum

class CoffeeType(Enum):
    ESPRESSO = 1
    LATTE = 2
    CAPPUCCINO = 3
    MACCHIATO = 4
    FLAT_WHITE = 5
    RISTRETTO = 6
    CORTADO = 7
Code Block #7
from enum import Enum

class CoffeeType(Enum):
    ESPRESSO = 1
    LATTE = 2
    CAPPUCCINO = 3
    MACCHIATO = 4
    FLAT_WHITE = 5
    RISTRETTO = 6
    CORTADO = 7

CoffeeType.ESPRESSO = 10
Code Block #8
from enum import Enum

class CoffeeType(Enum):
    ESPRESSO = 1
    LATTE = 2
    CAPPUCCINO = 3
    MACCHIATO = 4
    FLAT_WHITE = 5
    RISTRETTO = 6
    CORTADO = 7
    ESPRESSO = 8
Code Block #9
from enum import Enum

class CoffeeType(Enum):
    ESPRESSO = 1
    LATTE = 2
    CAPPUCCINO = 3
    MACCHIATO = 4
    FLAT_WHITE = 5
    RISTRETTO = 6
    CORTADO = 7
    ESPRESSO_MACCHIATO = 4
Code Block #10
print(CoffeeType.ESPRESSO_MACCHIATO)
Code Block #11
# ...
print(CoffeeType.ESPRESSO)
print(CoffeeType.ESPRESSO.name)
print(CoffeeType.ESPRESSO.value)
Code Block #12
# ...
print(type(CoffeeType.ESPRESSO))
print(type(CoffeeType.ESPRESSO.name))
print(type(CoffeeType.ESPRESSO.value))
Code Block #13
from enum import Enum

class CoffeeType(Enum):
    ESPRESSO = "espresso"
    LATTE = "latte"
    CAPPUCCINO = "cappuccino"
    MACCHIATO = "macchiato"
    FLAT_WHITE = "flat_white"
    RISTRETTO = "ristretto"
    CORTADO = "cortado"

print(CoffeeType.ESPRESSO)
print(CoffeeType.ESPRESSO.name)
print(CoffeeType.ESPRESSO.value)
Code Block #14
# ...
coffee_types = {
    CoffeeType.ESPRESSO: {"strength": 3, "coffee_amount": 30, "milk_amount": 0},
    CoffeeType.LATTE: {"strength": 1, "coffee_amount": 30, "milk_amount": 150},
    CoffeeType.CAPPUCCINO: {"strength": 2, "coffee_amount": 30, "milk_amount": 100},
    CoffeeType.MACCHIATO: {"strength": 3, "coffee_amount": 30, "milk_amount": 10},
    CoffeeType.FLAT_WHITE: {"strength": 2, "coffee_amount": 30, "milk_amount": 120},
    CoffeeType.RISTRETTO: {"strength": 4, "coffee_amount": 20, "milk_amount": 0},
    CoffeeType.CORTADO: {"strength": 2, "coffee_amount": 30, "milk_amount": 60},
}
Code Block #15
# ...
brew_coffee(CoffeeType.CORTADO)
Code Block #16
# ...

def brew_coffee(coffee_type):
    coffee_details = coffee_types.get(coffee_type)
    if not coffee_details:
        print("Unknown coffee type!")
        return
    print(
        f"Brewing a {coffee_type} "
        f"with {coffee_details['coffee_amount']}ml of coffee "
        f"and {coffee_details['milk_amount']}ml of milk. "
        f"Strength level: {coffee_details['strength']}"
    )

brew_coffee(CoffeeType.CORTADO)
Code Block #17
from enum import StrEnum

class CoffeeType(StrEnum):
    ESPRESSO = "espresso"
    LATTE = "latte"
    CAPPUCCINO = "cappuccino"
    MACCHIATO = "macchiato"
    FLAT_WHITE = "flat_white"
    RISTRETTO = "ristretto"
    CORTADO = "cortado"

# ...

def brew_coffee(coffee_type):
    coffee_details = coffee_types.get(coffee_type)
    if not coffee_details:
        print("Unknown coffee type!")
        return
    print(
        f"Brewing a {coffee_type} "
        f"with {coffee_details['coffee_amount']}ml of coffee "
        f"and {coffee_details['milk_amount']}ml of milk. "
        f"Strength level: {coffee_details['strength']}"
    )

brew_coffee(CoffeeType.CORTADO)
Code Block #18
print(CoffeeType.MACCHIATO.title())
Code Block #19
# ...

class CoffeeStrength(IntEnum):
    WEAK = 1
    MEDIUM = 2
    STRONG = 3
    EXTRA_STRONG = 4

coffee_types = {
    CoffeeType.ESPRESSO: {
      "strength": CoffeeStrength.STRONG, 
      "coffee_amount": 30, 
      "milk_amount": 0,
    },
    CoffeeType.LATTE: {
      "strength": CoffeeStrength.WEAK, 
      "coffee_amount": 30, 
      "milk_amount": 150,
    },
    CoffeeType.CAPPUCCINO: {
      "strength": CoffeeStrength.MEDIUM, 
      "coffee_amount": 30, 
      "milk_amount": 100,
    },
    # ... and so on...
}

# ...
Code Block #20
print(CoffeeStrength.STRONG + CoffeeStrength.WEAK)
Code Block #21
# ...

for coffee in CoffeeType:
    brew_coffee(coffee)
Code Block #22
from enum import Enum, auto
class Test(Enum):
    FIRST = auto()
    SECOND = auto()

Test.FIRST
# <Test.FIRST: 1>
Test.SECOND
# <Test.SECOND: 2>
Code Block #23
from enum import StrEnum, auto
class Test(StrEnum):
    FIRST = auto()
    SECOND = auto()
   
Test.FIRST
# <Test.FIRST: 'first'>
Test.SECOND
# <Test.SECOND: 'second'>
Code Block #24
# ...
class CoffeeType(StrEnum):
    ESPRESSO = auto()
    LATTE = auto()
    CAPPUCCINO = auto()
    MACCHIATO = auto()
    FLAT_WHITE = auto()
    RISTRETTO = auto()
    CORTADO = auto()

class CoffeeStrength(IntEnum):
    WEAK = auto()
    MEDIUM = auto()
    STRONG = auto()
    EXTRA_STRONG = auto()
# ...


January 12, 2026 01:33 PM UTC


Python Bytes

#465 Stack Overflow is Cooked

Topics covered in this episode:

  • port-killer: https://github.com/productdevbook/port-killer
  • How we made Python's packaging library 3x faster: https://iscinumpy.dev/post/packaging-faster/
  • CodSpeed
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=waNYGS7u8Ts

About the show

Sponsored by us! Support our work through:

  • Our courses at Talk Python Training: https://training.talkpython.fm/
  • The Complete pytest Course: https://courses.pythontest.com/p/the-complete-pytest-course
  • Patreon Supporters: https://www.patreon.com/pythonbytes

Connect with the hosts

  • Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
  • Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
  • Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions are available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show) and we'll never share it.

Michael #1: port-killer

  • A powerful cross-platform port management tool for developers.
  • Monitor ports, manage Kubernetes port forwards, integrate Cloudflare Tunnels, and kill processes with one click.
  • Features:
    • 🔍 Auto-discovers all listening TCP ports
    • ⚡ One-click process termination (graceful + force kill)
    • 🔄 Auto-refresh with configurable interval
    • 🔎 Search and filter by port number or process name
    • ⭐ Favorites for quick access to important ports
    • 👁️ Watched ports with notifications
    • 📂 Smart categorization (Web Server, Database, Development, System)

Brian #2: How we made Python's packaging library 3x faster

  • By Henry Schreiner.
  • Some very cool graphs demonstrating benchmark data.
  • Details about the various speedups:
    • each one being 2-37% faster
    • the total adding up to about a 3x speedup, or shaving off about 2/3 of the time
  • These also include nice write-ups about why each speedup was chosen.
  • If you are trying to speed up part of your system, this is a good article to check out.

Michael #3: AI's impact on dev companies

  • On TailwindCSS, via Simon Willison: https://simonwillison.net/2026/Jan/7/adam-wathan/
    • Tailwind is growing faster than ever and is bigger than it has ever been.
    • Its revenue is down close to 80%.
    • "75% of the people on our engineering team lost their jobs here yesterday because of the brutal impact AI has had on our business."
    • "We had 6 months left."
    • Listen to the founder on "A Morning Walk": https://adams-morning-walk.transistor.fm/episodes/we-had-six-months-left
    • Super insightful video, "Tailwind is in DEEP trouble": https://www.youtube.com/watch?v=tSgch1vcptQ
  • On Stack Overflow, see the video: https://www.youtube.com/watch?v=Gy0fp4Pab0g
    • Stack Overflow launched in 2008; its first month saw 3,749 questions.
    • This past December, 3,862 questions were asked.
    • For most of its life it had around 200,000 questions per month.
    • That is a 53x drop!

Brian #4: CodSpeed

  • "CodSpeed integrates into dev and CI workflows to measure performance, detect regressions, and enable actionable optimizations."
  • Noticed it while looking through the GitHub workflows for FastAPI: https://github.com/fastapi/fastapi/blob/master/.github/workflows/test.yml
  • Free for small teams and open-source projects.
  • Easy to integrate with Python by marking tests with @pytest.mark.benchmark.
  • They've released a GitHub Action to incorporate benchmarking in CI workflows.

Extras

Brian:

  • Part 2 of Lean TDD (https://courses.pythontest.com/lean-tdd/) was released this morning: "Lean TDD Practices", which has 9 mini chapters.

Michael:

  • Our Docker build just broke because of the supply chain techniques from last week (https://mkennedy.codes/posts/devops-python-supply-chain-security/), and that's a good thing! Not a real issue, but it really did catch an open CVE.
  • Long passwords are bad now? ;) https://instatunnel.my/blog/the-1mb-password-crashing-backends-via-hashing-exhaustion

Joke: Check out my app! https://x.com/PR0GRAMMERHUM0R/status/2008644769799434688

January 12, 2026 08:00 AM UTC


Python GUIs

What does @pyqtSlot() do? — Is the pyqtSlot decorator even necessary?

When working with Qt slots and signals in PyQt6 you will discover the @pyqtSlot decorator. This decorator is used to mark a Python function or method as a slot to which a Qt signal can be connected. However, as you can see in our signals and slots tutorials, you don't have to use this. Any Python function or method can normally be used as a slot for a Qt signal. But elsewhere, in our threading tutorials, we do use it.

What's going on here?

What does the documentation say?

The PyQt6 documentation has a good explanation:

Although PyQt6 allows any Python callable to be used as a slot when connecting signals, it is sometimes necessary to explicitly mark a Python method as being a Qt slot and to provide a C++ signature for it. PyQt6 provides the pyqtSlot() function decorator to do this.

Connecting a signal to a decorated Python method has the advantage of reducing the amount of memory used and is slightly faster.

From the above we see that:

  1. Any Python callable can be used as a slot, with or without the decorator.
  2. The decorator explicitly marks a method as a Qt slot and can provide a C++ signature for it.
  3. Connecting to a decorated slot uses less memory and is slightly faster.

When is it necessary?

Sometimes necessary is a bit vague. In practice the only situation where you need to use pyqtSlot decorators is when working with threads. This is because of a difference in how signal connections are handled in decorated vs. undecorated slots.

  1. If you decorate a method with @pyqtSlot then that slot is created as a native Qt slot, and behaves identically to a slot defined in C++
  2. If you don't decorate the method then PyQt6 will create a "proxy" object wrapper which provides a native slot to Qt

In normal use this is fine, aside from the performance impact (see below). But when working with threads, there is a complication: is the proxy object created on the GUI thread or on the runner thread? If it ends up on the wrong thread, this can lead to segmentation faults. Using the pyqtSlot decorator side-steps this issue, because no proxy is created.

When updating my PyQt6 book I wondered -- is this still necessary?! -- and tested removing it from the examples. Many examples continue to work, but some failed. To be safe, use pyqtSlot decorators on your QRunnable.run methods.
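As a concrete illustration, a worker following that advice might look like the minimal sketch below. The Worker and WorkerSignals names and the finished signal are illustrative and not taken from the book's examples:

from PyQt6.QtCore import QObject, QRunnable, pyqtSignal, pyqtSlot

class WorkerSignals(QObject):
    # Signals must be defined on a QObject subclass; QRunnable is not one.
    finished = pyqtSignal()

class Worker(QRunnable):
    def __init__(self):
        super().__init__()
        self.signals = WorkerSignals()

    @pyqtSlot()
    def run(self):
        # Decorated as a native Qt slot, which side-steps the
        # proxy-object thread issue described above.
        # ... long-running work goes here ...
        self.signals.finished.emit()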

What about performance?

The PyQt6 documentation notes that using native slots "has the advantage of reducing the amount of memory used and is slightly faster". But how much faster is it really, and does decorating slots actually save much memory?

We can test this directly by using this script from Oliver L Schoenborn. Updating for PyQt6 (replace PyQt5 with PyQt6 and it will work as-is) and running this we get the following results:

See the original results for PyQt5 for comparison.

First the results for the speed of emitting signals when connected to a decorated slot, vs non-decorated.

Raw slot mean, stddev:  0.578 0.024
Pyqt slot mean, stddev: 0.587 0.021
Percent gain with pyqtSlot: -2 %

The result shows pyqtSlot as 2% slower, but this is negligible (the original data on PyQt5 also showed no difference). So, using pyqtSlot will have no noticeable impact on the speed of signal handling in your applications.

Next are the results for establishing connections. This shows the speed, and memory usage of connecting to decorated vs. non-decorated slots.

Comparing mem and time required to create 10000000 connections, 1000 times

Measuring for 1000000 connections
              # connects     mem (bytes)          time (sec)
Raw         :   1000000      949186560 (905MB)    9.02
Pyqt Slot   :   1000000       48500736 ( 46MB)    1.52
Ratios      :                       20               6

The results show that decorated slots are about 6x faster to connect to. This sounds like a big difference, but it would only be noticeable if an application was connecting a considerable number of signals. Based on these numbers, if you connected 100 signals the total execution time difference would be 0.9 ms vs 0.15 ms. This is negligible, not to mention imperceptible.

Perhaps more significant is that using raw connections uses 20x the memory of decorated connections. Again though, bear in mind that for a more realistic upper limit of connections (100) the actual difference here is 0.09MB vs 0.004MB.

The bottom line: don't expect any dramatic improvements in performance or memory usage from using slot decorators; unless you're working with insanely large numbers of signals or making connections constantly, you won't see any difference at all. That said, decorating your slots is an easy win if you need it.

Are there any other reasons to decorate a slot?

In Qt, signals can be used to transmit more than one type of data by overloading signals and slots with different types.

For example, with the following code, my_slot_fn will only receive signals that match the signature of two int values.

@pyqtSlot(int, int)
def my_slot_fn(a, b):
    pass

This is a legacy of Qt5 and not recommended in new code. In Qt6 all of these signals have been replaced with separate signals with distinct names for different types. I recommend you follow the same approach in your own code for the sake of simplicity.
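For example, instead of one signal overloaded for int and str, you might define two separately named signals, one per payload type. The class and signal names here are illustrative:

from PyQt6.QtCore import QObject, pyqtSignal

class Counter(QObject):
    # One clearly named signal per payload type, rather than a single
    # overloaded signal carrying either an int or a str.
    count_changed = pyqtSignal(int)
    label_changed = pyqtSignal(str)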

Conclusion

The pyqtSlot decorator can be used to mark Python functions or methods as Qt slots. This decorator is only required on slots which may be connected to across threads, for example the run method of QRunnable objects. For all other slots it can be omitted. There is a very small performance benefit to using it, which you may want to consider when your application makes a large number of signal/slot connections.

For an in-depth guide to building Python GUIs with PyQt6 see my book, Create GUI Applications with Python & Qt6.

January 12, 2026 06:00 AM UTC


Zato Blog

SSH API Service in Python

SSH API Service in Python

This is a quick guide on how to turn SSH commands into a REST API service. The use-case may be remote administration of devices or equipment that does not offer a REST interface or making sure that access to SSH commands is restricted to selected external REST-based API clients only.

Python

The first thing needed is the code of the service that will connect to SSH servers. Below is a service doing just that: it receives the name of the command to execute and the host to run it on, translating stdout and stderr of the SSH command into response documents which Zato in turn serializes to JSON.

# -*- coding: utf-8 -*-

# stdlib
from traceback import format_exc

# Zato
from zato.server.service import Service

class SSHInvoker(Service):
    """ Accepts an SSH command to run on a remote host and returns its output to caller.
    """

    # A list of elements that we expect on input
    input = 'host', 'command'

    # A list of elements that our responses will contain
    output = 'is_ok', 'cid', '-stdout', '-stderr'

    def handle(self):

        # Local aliases
        host = self.request.input.host
        command = self.request.input.command

        # Correlation ID is always returned
        self.response.payload.cid = self.cid

        try:
            # Build the full command
            full_command = f'ssh {host} {command}'

            # Run the command and collect output
            output = self.commands.invoke(full_command)

            # Assign both stdout and stderr to response
            self.response.payload.stdout = output.stdout
            self.response.payload.stderr = output.stderr

        except Exception:
            # Catch any exception and log it
            self.logger.warn('Exception caught (%s), e:`%s`', self.cid, format_exc())

            # Indicate an error
            self.response.payload.is_ok = False

        else:
            # Everything went fine
            self.response.payload.is_ok = True

Dashboard

In the Zato Dashboard, let's go ahead and create an HTTP Basic Auth definition that a remote API client will authenticate against:

Now, the SSH service can be mounted on a newly created REST channel - note the security definition used and that data format is set to JSON. We can skip all the other details such as caching or rate limiting, for illustration purposes, this is not needed.

Usage

At this point, everything is ready to use. We could make it accessible to external API clients but, for testing purposes, let's simply invoke our SSH API gateway service from the command line:

$ curl "api:password@localhost:11223/api/ssh" -d \
    '{"host":"localhost", "command":"uptime"}'
{
    "is_ok": true,
    "cid": "27406f29c66c2ab6296bc0c0",
    "stdout": " 09:45:42 up 37 min,  1 user,  load average: 0.14, 0.27, 0.18\n"}
$
Note that, at this stage, the service should be used in trusted environments only, because it will run any command that it is given on input. In the next iteration it could be changed to only allow commands from an allow-list, rejecting anything that is not recognized.
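For instance, a minimal allow-list check in the handle method might look like the sketch below. The ALLOWED_COMMANDS values and the error message are illustrative and not part of the original service:

# Zato
from zato.server.service import Service

# Commands that callers are permitted to run (illustrative values)
ALLOWED_COMMANDS = {'uptime', 'df', 'whoami'}

class SSHInvoker(Service):

    input = 'host', 'command'
    output = 'is_ok', 'cid', '-stdout', '-stderr'

    def handle(self):

        # Correlation ID is always returned
        self.response.payload.cid = self.cid

        command = self.request.input.command

        # Reject anything that is not explicitly allowed
        if command not in ALLOWED_COMMANDS:
            self.response.payload.is_ok = False
            self.response.payload.stderr = f'Command not allowed: {command}'
            return

        # ... otherwise, build and run the SSH command as before ...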

And this completes it - the service is deployed and made accessible via a REST channel that can be invoked using JSON. Any command can be sent to any host and their output will be returned to API callers in JSON responses.

More resources

➤ Python API integration tutorials
What is an integration platform?
Python Integration platform as a Service (iPaaS)
What is an Enterprise Service Bus (ESB)? What is SOA?
Open-source iPaaS in Python

January 12, 2026 03:00 AM UTC


Wingware

Wing Python IDE Version 11.0.7 - January 12, 2026

Wing Python IDE version 11.0.7 has been released. It improves performance of Search in Files on some machines, fixes using stdout.writelines in unit tests run from the Testing tool, reduces CPU used by rescanning for package managers, and fixes analysis failures on incorrect # type: comments.

Wing 11 Screen Shot

Downloads

Be sure to Check for Updates in Wing's Help menu after downloading, to make sure that you have the latest hot fixes.

Wing Pro 11.0.7

Wing Personal 11.0.7

Wing 101 11.0.7

Wing 10 and earlier versions are not affected by installation of Wing 11 and may be installed and used independently. However, project files for Wing 10 and earlier are converted when opened by Wing 11 and should be saved under a new name, since Wing 11 projects cannot be opened by older versions of Wing.

New in Wing 11

Improved AI Assisted Development

Wing 11 improves the user interface for AI assisted development by introducing two separate tools: AI Coder and AI Chat. AI Coder can be used to write, redesign, or extend code in the current editor. AI Chat can be used to ask about code or to iterate on a design or new code without directly modifying the code in an editor.

Wing 11's AI assisted development features now support not just OpenAI but also Claude, Grok, Gemini, Perplexity, Mistral, Deepseek, and any other OpenAI completions API compatible AI provider.

This release also improves setting up AI request context, so that both automatically and manually selected and described context items may be paired with an AI request. AI request contexts can now be stored, optionally so they are shared by all projects, and may be used independently with different AI features.

AI requests can now also be stored in the current project or shared with all projects, and Wing comes preconfigured with a set of commonly used requests. In addition to changing code in the current editor, stored requests may create a new untitled file or run instead in AI Chat. Wing 11 also introduces options for changing code within an editor, including replacing code, commenting out code, or starting a diff/merge session to either accept or reject changes.

Wing 11 also supports using AI to generate commit messages based on the changes being committed to a revision control system.

You can now also configure multiple AI providers for easier access to different models.

For details see AI Assisted Development under Wing Manual in Wing 11's Help menu.

Package Management with uv

Wing Pro 11 adds support for the uv package manager in the New Project dialog and the Packages tool.

For details see Project Manager > Creating Projects > Creating Python Environments and Package Manager > Package Management with uv under Wing Manual in Wing 11's Help menu.

Improved Python Code Analysis

Wing 11 makes substantial improvements to Python code analysis, with better support for literals such as dicts and sets, parametrized type aliases, typing.Self, type of variables on the def or class line that declares them, generic classes with [...], __all__ in *.pyi files, subscripts in typing.Type and similar, type aliases, type hints in strings, type[...] and tuple[...], @functools.cached_property, base classes found also in .pyi files, and typing.Literal[...].

Updated Localizations

Wing 11 updates the German, French, and Russian localizations, and introduces a new experimental AI-generated Spanish localization. The Spanish localization and the new AI-generated strings in the French and Russian localizations may be accessed with the new User Interface > Include AI Translated Strings preference.

Improved diff/merge

Wing Pro 11 adds floating buttons directly between the editors to make navigating differences and merging easier, allows undoing previously merged changes, and does a better job managing scratch buffers, scroll locking, and sizing of merged ranges.

For details see Difference and Merge under Wing Manual in Wing 11's Help menu.

Other Minor Features and Improvements

Wing 11 also adds support for Python 3.14, improves the custom key binding assignment user interface, adds a Files > Auto-Save Files When Wing Loses Focus preference, warns immediately when opening a project with an invalid Python Executable configuration, allows clearing recent menus, expands the set of available special environment variables for project configuration, and makes a number of other bug fixes and usability improvements.

Changes and Incompatibilities

Since Wing 11 replaced the AI tool with AI Coder and AI Chat, and AI configuration is completely different than in Wing 10, you will need to reconfigure your AI integration manually in Wing 11. This is done with Manage AI Providers in the AI menu. After adding the first provider configuration, Wing will set that provider as the default. You can switch between providers with Switch to Provider in the AI menu.

If you have questions, please don't hesitate to contact us at support@wingware.com.

January 12, 2026 01:00 AM UTC

January 10, 2026


EuroPython

Humans of EuroPython: Jakub Červinka

EuroPython wouldn’t exist if it weren’t for all the volunteers who put in countless hours to organize it. Whether it’s contracting the venue, selecting and confirming talks & workshops or coordinating with speakers, hundreds of hours of loving work have been put into making each edition the best one yet.

Read our latest interview with Jakub Červinka, a member of the EuroPython 2025 Operations Team and organizer of PyConCZ 2026.

Thank you for your service to EuroPython, Jakub!

Jakub Červinka, member of the Operations Team at EuroPython 2025

EP: What first inspired you to volunteer for EuroPython?

The community has always been the biggest draw for me. Having volunteered at our local Python conference previously, I already knew how rewarding it is to be part of the organizing team. When the opportunity to join EuroPython came up, I jumped at it without a second thought. I really like connecting with organizers, speakers, and attendees from across the continent.

EP: What's one task you handled that attendees might not realize happens behind the scenes at EuroPython?

One year I took on the role of “designated driver”, essentially the person who handles the last-minute, ad-hoc tasks that arise during the conference. It ranged from running out to buy a cart full of hygiene products for the bathrooms, to hauling cases of bottled water when we were about to run dry, to picking up emergency prints on one of the hottest days of the year. These are the kinds of small but critical jobs that keep everything running smoothly, and I enjoy making sure they get done.

EP: How did volunteering for EuroPython impact your relationships within the community?

In the best possible way. Over the years, I’ve built lasting friendships, met people I had only known from online talks and tutorials, and had the chance to become a familiar face in the community myself. Every EuroPython and every local conference strengthens those connections and leaves you with renewed energy and inspiration to keep contributing.

EP: What's one thing you took away from the experience that you still use today?

The importance of recognition and appreciation. A simple “thank you” or “great job” from an attendee can mean a lot to volunteers. We’re doing important work, but it’s not our paid job. That experience has made me much more intentional about expressing gratitude to everyone who helps, whether they’re fellow volunteers, staff, or people in service roles.

EP: Do you have any tips for first-time EuroPython volunteers?

Don’t be afraid to ask questions or offer help, there’s always something that needs doing, and everyone can contribute in their own way. Keep an eye out for small improvements you could suggest, introduce yourself to people, and most importantly, enjoy the experience. Volunteering is as much about building relationships and having fun as it is about getting tasks done.

EP: Thank you, Jakub!

January 10, 2026 09:49 AM UTC

January 09, 2026


Mike Driscoll

How to Switch to ty from Mypy

Python has supported type hinting for quite a few versions now, starting way back in 3.5. However, Python itself does not enforce type checking. Instead, you need to use an external tool or IDE. The first and arguably most popular is mypy.

Microsoft also has a Python type checker that you can use in VS Code called Pyright, and then there’s the lesser-known Pyrefly type checker and language server.

The newest type checker on the block is ty, from Astral, the maker of Ruff. Ty is another super-fast Python utility written in Rust.

In this article, you will learn how to switch your project to use ty locally and in GitHub Actions.

Installation

If you do not want to install ty, you can run it with uvx by entering the following command in your terminal: uvx ty

To install ty with uv, run the following:

uv tool install ty@latest

If you do not want to use uv, you can use the standalone installer. Instructions vary depending on your platform, so it is best to refer to the documentation for the latest information.

Note: Technically, you can use pip or pipx to install ty as well.

Running ty Locally

Once you have ty installed, you can run it using any of the following:

Running with uv

uv run ty

Running without Installation

uvx ty

Running ty Directly

ty check

Configuring ty

You can configure ty using either of the following:

  • The [tool.ty] section of your pyproject.toml file
  • A dedicated ty.toml file

There are many rules that you can change. Check out the documentation for full details.

In general, if you run mypy in strict mode, then running ty without changing any of its settings is very similar. However, ty currently does not highlight missing type hints. If you need to enforce adding type hints, you can use Ruff’s flake8-annotations.

Here is how to enable the flake8-annotations rules in your pyproject.toml file:

Using Flake8 annotations in Ruff
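A minimal version of that configuration might look like this, assuming Ruff is already set up in your pyproject.toml (ANN is the rule prefix for flake8-annotations):

[tool.ruff.lint]
# Enable the flake8-annotations rules so Ruff flags missing type hints
select = ["ANN"]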

If you have other rules already selected, you can add “ANN” to the end of the list to enable it.

Running ty in GitHub Actions

Running ty in GitHub Actions is a great, free way to type-check your PRs. To add ty to GitHub Actions, create a new file named ty.yml in your GitHub repo in the following location:

.github/workflows/ty.yml

Make sure you include the leading period!

Next, inside your yaml file, you will add the following code:

name: ty
on:
  pull_request:
    types: [opened, synchronize, reopened, ready_for_review]
  workflow_dispatch:
jobs:
  build:
    if: github.event.pull_request.draft == false
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
      - name: Install Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install ty==0.0.7      
      - name: Run ty
        run: ty check
        continue-on-error: false

Now, when your team opens a new PR in your project, it will automatically run ty against it. Feel free to update the Python version to the one you are using. Also note that this GitHub Action sets the ty version to 0.0.7, which you may need to update as newer releases become available.

Using ty with pre-commit

The ty project does not have official support for pre-commit yet. However, there is a ticket to add this functionality. In the meantime, several other people have provided their own workarounds to allow you to use ty with pre-commit:

When Astral supports pre-commit itself, you should update your pre-commit configuration accordingly.

However, for this tutorial, you can use that first link which tells you to add the following to your .pre-commit-config.yaml:

Using ty in pre-commit
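As a rough sketch of what such a configuration can look like, here is a generic local hook that runs ty via uvx. The hook id, name, and entry are illustrative, assume uv is installed, and may differ from what the linked workaround recommends:

repos:
  - repo: local
    hooks:
      - id: ty
        name: ty check
        entry: uvx ty check
        language: system
        types: [python]
        pass_filenames: false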

Now, when you commit a file locally, pre-commit will run ty to check it for you automatically.

Wrapping Up

Type checkers can be really helpful in finding subtle bugs in your Python code. However, remembering to run them before pushing your code can be difficult, so make your life easier by adding the type checker to your CI!

 

Have fun and happy coding!

The post How to Switch to ty from Mypy appeared first on Mouse Vs Python.

January 09, 2026 03:16 PM UTC


The Python Coding Stack

Parkruns, Python’s enumerate and zip, and Why Python Loops Are Different from Other Languages • [Club]

If you live in the UK, you’re probably familiar with the Parkrun tradition: a friendly 5k run held every Saturday morning in hundreds of parks across the UK. Runners range from Olympians to people trying to lose some weight. It’s a well-oiled format replicated across all 893 Parkrun locations.

And here’s how they deal with the finish line logistics. Runners don’t wear bibs with numbers. When they cross the finish line, they enter a “funnel” marked by plastic cones and are handed a token with their position number. They then proceed to another official, who scans their personal barcode, which runners carry in their pockets or on a wristband, and the position token they received a few seconds earlier. This process matches the runner with their finishing position.

What’s this got to do with Python loops? And how does it help us understand why Python does loops differently from other languages?

First step, let’s create the Parkrun funnel. I’ll just put the first five finishers in this example:

>>> funnel = ["Jonathan", "Michael", "Samantha", "Jessica", "Daniel"]

Now, here’s something you definitely know already because it’s always one of the first things you’re taught when learning Python: Don’t loop through this list like this:

# Avoid this when coding in Python
>>> i = 0
>>> while i < len(funnel):
...     name = funnel[i]
...     print(name)
...     i += 1
...    
Jonathan
Michael
Samantha
Jessica
Daniel

This style mimics how other languages may work: you manually define and increment the index. To be fair, most people who shift from other languages are more likely to write the following version at some point:

# Also best to avoid this in Python
>>> for i in range(len(funnel)):
...     name = funnel[i]
...     print(name)
...    
Jonathan
Michael
Samantha
Jessica
Daniel

This version may seem more Pythonic since it uses Python tools such as range(), but still fails to make the most of Python’s iteration protocol. The Pythonic way of looping through this list is the following:

>>> for name in funnel:
...     print(name)
...    
Jonathan
Michael
Samantha
Jessica
Daniel

A question that’s often asked but rarely answered is: Why is this version preferred over the other two? I’ll write another short post to answer this question soon, as I want to keep these The Club posts short whenever possible. So, let me state just a few reasons (there are more) and then I’ll move on to my main topic for today.

While you wait for my follow-up post on this, you can read more about Python’s Iterator Protocol, iterables, and iterators here:

But let’s move on.

Let’s say you want to print out the names alongside each runner’s position. You’d like the following output:

1. Jonathan
2. Michael
3. Samantha
4. Jessica
5. Daniel

“Aha!” I’m often told by some learners, “This is when you need to use the for i in range(len(funnel)) idiom, since you need the index!”

Python’s for loop doesn’t explicitly use the index, so you don’t have access to the index within the for loop. Many revert to the non-Pythonic idioms for this.

But Python provides tools that let you stay within the pure Pythonic style. Python’s for loop needs an iterator—it will create one from the iterable you provide. All Python iteration needs iterators, not just for loops. Iterators are Python’s tool for any iteration.
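As a quick illustration of that point (this snippet is mine, not from the original post), you can ask the list for its iterator and step through it manually with next():

>>> runners = iter(funnel)
>>> next(runners)
'Jonathan'
>>> next(runners)
'Michael'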

And there are some bespoke iterators in Python that handle most of your iteration needs. I recently wrote a series about the itertools module. The itertools module contains many such tools. Here’s the series: The itertools Series.

But there are also two built-in tools that many forget, but are extremely useful. The first one is enumerate().

Here’s how you can use it to display the Parkrun results:

>>> for index, name in enumerate(funnel, start=1):
...     print(f"{index}. {name}")
...    
1. Jonathan
2. Michael
3. Samantha
4. Jessica
5. Daniel

Read more

January 09, 2026 01:57 PM UTC


Real Python

The Real Python Podcast – Episode #279: Coding Python With Confidence: Beginners Live Course Participants

Are you looking for that solid foundation to begin your Python journey? Would the accountability of scheduled group classes help you get through the basics and start building something? This week, two members of the Python for Beginners live course discuss their experiences.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

January 09, 2026 12:00 PM UTC

January 08, 2026


Rodrigo Girão Serrão

Recursive structural pattern matching

Learn how to use structural pattern matching (the match statement) to work recursively through tree-like structures.

In this short article you will learn to use structural pattern matching in recursive, tree-like data structures.

The examples from this article are taken from a couple of recent issues of my weekly newsletter.

A recursive data structure

Structural pattern matching excels at... matching the structure of your objects! For the two examples in this article, we'll be using a number of dataclasses that you can use to build abstract Boolean expressions:

from dataclasses import dataclass

class Expr:
    pass

@dataclass
class And(Expr):
    exprs: list[Expr]

@dataclass
class Or(Expr):
    exprs: list[Expr]

@dataclass
class Not(Expr):
    expr: Expr

@dataclass
class Var(Expr):
    name: str

For example, the code Not(And([Var("A"), Var("B")])) represents the Boolean expression not (A and B).
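As a quick check (my own snippet, not from the article), printing such an expression shows the nested structure, thanks to the repr that dataclasses generate:

expr = Not(And([Var("A"), Var("B")]))
print(expr)
# Not(expr=And(exprs=[Var(name='A'), Var(name='B')]))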

Evaluating a Boolean expression

Suppose you have a Boolean expression built out of the components shared above. How do you evaluate that formula if you are given the assignments that map the variables to their values?

For example, if you have the assignments {"A": True, "B": False} (for example, a dictionary that maps variable names to values), how can you determine that the expression Not(And([Var("A"), Var("B")])) is True?

This is where structural pattern matching can be applied recursively and it's where it really shines!

To solve this problem, you will write a function called evaluate(expression: Expr, assignments: dict[str, bool]) -> bool. Your function accepts an expression and the assignments in the form of a dictionary and it returns the final Boolean value the expression evaluates to.

Since you're accepting an expression, you're going to use the match statement on the full expression and then create a case branch for each of the possible expressions you might have:

  1. a variable;
  2. an And expression;
  3. an Or expression; or
  4. a Not expression.

The structure of the code looks like this:

def evaluate(expression: Expr, assignments: dict[str, bool]) -> bool:
    match expression:
        case Var(): pass
        case And(): pass
        case Or(): pass
        case Not(): pass

The trick here is realising that you're using Expr as the type of the argument but really, you always expect the argument to be an instance of one of the subclasses of Expr, and not a direct Expr instance.

However, to make sure you don't trip on a weird bug later on, and because this matching is supposed to be exhaustive – you're supposed to have one case for each subclass of Expr – you can defend yourself by including a catch-all pattern that raises an error.

When I'm being lazy, I just raise a RuntimeError:

def evaluate(expression: Expr, assignments: dict[str, bool]) -> bool:
    match expression:
        case Var(): pass
        case And(): pass
        case Or(): pass
        case Not(): pass
        case _:
            raise RuntimeError(
                f"Couldn't evaluate expression of type {type(expression)}."
            )

Now, it's just a matter of implementing the evaluation logic. In the case of a variable, all you have to do is fetch the variable value from the corresponding dictionary. However, to make it more convenient to...

January 08, 2026 03:22 PM UTC


Stéphane Wirtel

Automating TLS Certificate Monitoring with GitHub Actions, certificate_watcher, and Slack

Introduction

As a consultant constantly working with clients, I found myself in a familiar predicament: my head was always down, focused on delivering value to customers, but my own infrastructure monitoring was non-existent. I had no simple way to track SSL/TLS certificate expirations across the multiple domains I managed - personal sites, client projects, and community services.

I needed a solution, but I had several constraints:

  1. No time for complex setup: I couldn’t afford to spend days installing, configuring, and deploying yet another monitoring service
  2. Easy maintenance: Whatever I built had to be low-maintenance - I didn’t want another system to babysit
  3. Transparency and control: I wanted a simple text file in Git listing the hosts to monitor, so I could see exactly what was being checked and track changes over time
  4. Zero infrastructure: No servers to provision, patch, or pay for

Around this time, a friend named Julien shared his project called certificate_watcher, a lightweight Python tool for checking SSL certificate expiration. I contributed a few patches (if memory serves), and it clicked: what if I could combine this with GitHub Actions and Slack notifications?

January 08, 2026 12:00 AM UTC

January 07, 2026


Real Python

How to Build a Personal Python Learning Roadmap

If you want to learn Python or improve your skills, a detailed plan can help you gauge your current status and navigate toward a target goal. This tutorial will help you craft a personal Python learning roadmap so you can track your progress and stay accountable to your goals and timeline:

A Python Learning Roadmap Sheet that you can fill and print

The steps in this tutorial are useful for Python developers and learners of all experience levels. While you may be eager to start learning, you might want to set aside an hour or two to outline a plan, especially if you already know your learning goals. If you don’t yet have clear goals, consider spreading that reflection over a few shorter sessions across several days to clarify your direction.

Before you start, gather a few practical tools to support building your plan. This might include a notebook, a calendar or planner (digital or physical), a list of projects or goals you want to work toward, and any Python books or online resources you plan to use.

Note: If you learn best with structure and accountability, you can also follow your roadmap inside a cohort-based live course delivered by Real Python experts, with weekly live classes and live Q&A.

You can download a Personal Python Learning Roadmap worksheet to help you create your plan by clicking the link below:

Get Your Python Learning Roadmap: Click here to download a free, fillable Python learning roadmap PDF to help you set your aims and track your progress.

This tutorial will guide you through the planning process, starting with clarifying what you want to achieve and why. From there, you’ll map out the practical steps that will turn your goals into a realistic, actionable roadmap.

Step 1: Define Your Goals and Motivation

To create an effective learning roadmap, you first need to know what you want to achieve and what your motivation is. For this step, you’ll consider the following reflection prompt:

What do I want to accomplish with Python, and why?

Taking the time to answer this question sets the foundation for every decision you’ll make as you build your roadmap.

Define Your Goals

Start by deciding what you want to accomplish with Python, then write it down. Research shows that this small step can make a meaningful difference. In a study conducted by psychology researcher Dr. Gail Matthews at Dominican University of California, participants who wrote down their goals were significantly more likely to achieve them than those who didn’t.

If you’re not sure yet about your goals, here are some questions for you to consider:

  • Are there specific projects—or types of projects—that you’d like to work on? For example, data analysis, game development, or building a web app.

  • In what context or setting would you like to use your Python skills? For example, at work, in school, or as part of a personal interest or side project.

Remember to write these answers down either in your notebook or on the Personal Python Learning Roadmap worksheet included in this tutorial’s downloads. Having them written down will provide helpful context as you continue formulating your roadmap.

Determine Your Motivation

Once you have a general goal in mind, think about why you want to achieve it. Your motivation plays a key role in whether you’ll stick with your plan over time. As clinical psychology professor Dr. Jennifer Crawford explains:

If we don’t care about why we’re doing [a goal], then it makes it really difficult to stick with that new behavior.

Dr. Jennifer Crawford

She also encourages goal-setters to ask how their goals connect with something that’s important to them.

This idea is echoed by psychology professor Angela Duckworth in her book Grit, where she emphasizes that a strong sense of purpose helps you persevere when you encounter obstacles that might otherwise derail your progress.

Some possible reasons behind your “why” might include:

  • A personal interest or a love of learning
  • A desire to start or advance a career in software development
  • A goal of earning a computer science degree
  • An interest in volunteering your skills—for example, creating a Python application that supports a cause you care about

As you consider your motivation, see if you can dive deeper into the root of your reasons. A deeper look can add even more meaning and staying power to your goals. For example:

Read the full article at https://realpython.com/build-python-learning-roadmap/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

January 07, 2026 02:00 PM UTC


Stéphane Wirtel

dsmtpd 1.2.0: Test Your Emails Risk-Free

The Test Email That Never Should Have Been Sent

You know that feeling? You’re developing a new email feature, you run your test script, and boom — you realize 3 seconds too late that you used the production database. Your CEO just received an email with the subject “TEST - DO NOT READ - LOREM IPSUM”.

Or worse: you configured a cloud SMTP server for testing, forgot to disable actual sending, and now your Mailgun account is suspended for “suspicious activity” because you sent 847 emails to test@example.com in 5 minutes.

January 07, 2026 12:00 AM UTC


Python⇒Speed

Unit testing your code's performance, part 1: Big-O scaling

When you implement an algorithm, you also implement tests to make sure the outputs are correct. This can help you:

If you’re trying to make sure your software is fast, or at least doesn’t get slower, automated tests for performance would also be useful. But where should you start?

My suggestion: start by testing big-O scaling. It’s a critical aspect of your software’s speed, and it doesn’t require a complex benchmarking setup. In this article I’ll cover:

Read more...

January 07, 2026 12:00 AM UTC

January 06, 2026


PyCoder’s Weekly

Issue #716: Performance Numbers, async Web Apps, uv Speed, and More (Jan. 6, 2026)

#716 – JANUARY 6, 2026
View in Browser »

The PyCoder’s Weekly Logo


PyCoder’s Weekly 2025 Top Articles & Hidden Gems

PyCoder’s Weekly included over 1,500 links to articles, blog posts, tutorials, and projects in 2025. Christopher Trudeau is back on the show this week to help wrap up everything by sharing some highlights and uncovering a few hidden gems from the pile.
REAL PYTHON podcast

Python Numbers Every Programmer Should Know

Ever wonder how much memory an empty list takes? How about how long it takes to add two integers in Python? This post contains loads of performance data for common Python operations.
MICHAEL KENNEDY

Webinar: Building Deep Agents with Scale AI & Temporal


Build AI agents that don’t stop running. Join Scale AI and Temporal to learn how Agentex and Python enable long-running, fault-tolerant agents with human-in-the-loop workflows, plus a live procurement agent demo →
TEMPORAL sponsor

What async Really Means for Your Python Web App?

Python continues to get better async support and with that comes pressure to switch. See the realistic effects that switching to async would have on your web servers.
ARTEM CHERNYAK

How uv Got So Fast

uv’s speed comes from engineering decisions, not just Rust. Static metadata, dropping legacy formats, and standards that didn’t exist five years ago.
ANDREW NESBITT

Articles & Tutorials

Python 3.6-3.14 Performance

One of the maintainers of Knave has been tracking Python performance data for a while and a recent upgrade of one of their machines meant they now had more info across different hardware. This post compares their performance test across Apple M1 & M5, Zen2 and Cascade Lake chips.
CREWTECH

Static Protocols in Python: Behaviour Over Inheritance

Static protocols bring structural typing to Python: type compatibility based on behaviour, not inheritance. This article explains how protocols differ from ABCs, goose typing, and classic duck typing, and how static type checkers use them to catch errors early.
PATRICKM.DE • Shared by Patrick Müller

Get Job-Ready With Live Python Training

Real Python’s 2026 cohorts are open. Python for Beginners teaches fundamentals the way professional developers actually use them. Intermediate Python Deep Dive goes deeper into decorators, clean OOP, and Python’s object model. Live instruction, real projects, expert feedback. Learn more at realpython.com/live →
REAL PYTHON sponsor

How to Build Internal Developer Tools With a Small Team

This opinion piece talks about how to build internal dev tools. It provides a mental model of product engineering to help decide whether to prioritise improving stability or adding new features.
PATRICKM.DE • Shared by Patrick Müller

How to Securely Store Secrets in Environment Variables

You shouldn’t store API keys, tokens, or other secrets with your code, they need to be protected separately. In this post, Miguel discusses how he handles secrets with environment variables.
MIGUEL GRINBERG

2025 Python Year in Review

Talk Python interviews Barry Warsaw, Brett Cannon, Gregory Kapfhammer, Jodie Burchell, Reuven Lerner, and Thomas Wouters and the panel discusses what mattered for Python in 2025.
TALK PYTHON podcast

Python Supply Chain Security Made Easy

Learn how to integrate Python’s official package scanning technology into your processes to help ensure the security of your development environment.
MICHAEL KENNEDY

PyPI in 2025: A Year in Review

Dustin summarizes all the happenings with the Python Packaging Index in 2025, including 130,000 new projects and over 2.5 trillion requests served.
DUSTIN INGRAM

Top Python Libraries of 2025

Explore Tryolabs’ 11th annual Top Python Libraries roundup, featuring two curated Top 10 lists: one for General Use and one for AI/ML/Data tools.
DESCOINS & BELLO

Implicit String Concatenation

Python automatically concatenates adjacent string literals thanks to implicit string concatenation. This feature can sometimes lead to bugs.
TREY HUNNER

Safe Django Migrations Without Server Errors

How to run schema-changing Django migrations safely, avoiding schema/code mismatches and server errors during rolling deployments.
LOOPWERK

Projects & Code

vresto: Interface for Copernicus Sentinel Data

An elegant Python interface for discovering and retrieving Copernicus Sentinel data.
GITHUB.COM/KALFASYAN • Shared by Yannis Kalfas

onlymaps: A Python Micro-ORM

GITHUB.COM/MANOSS96

toon-formatter-py: TOON Data Formatting Library

GITHUB.COM/ANKITPAL181

Liberty Mail: Email Client for Sales Outreach

GITHUB.COM/EYEOFLIBERTY • Shared by Ivan Kuzmin

django-new: Create Django Applications With Pizzazz

GITHUB.COM/ADAMGHILL

Events

Weekly Real Python Office Hours Q&A (Virtual)

January 7, 2026
REALPYTHON.COM

Python Atlanta

January 9, 2026
MEETUP.COM

PyDelhi User Group Meetup

January 10, 2026
MEETUP.COM

DFW Pythoneers 2nd Saturday Teaching Meeting

January 10, 2026
MEETUP.COM

PiterPy Meetup

January 13, 2026
PITERPY.COM

Leipzig Python User Group Meeting

January 13, 2026
MEETUP.COM


Happy Pythoning!
This was PyCoder’s Weekly Issue #716.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

January 06, 2026 07:30 PM UTC