# Introduction
Python project setup used to mean making a dozen small decisions before writing your first useful line of code. Which environment manager? Which dependency tool? Which formatter? Which linter? Which type checker? And if your project touches data, where should you start: pandas, DuckDB, or something newer?
In 2026, that setup may be a lot easier.
For most new projects, the cleanest default stack is:
- uv for Python installation, environments, dependency management, locking, and command running.
- Ruff for linting and formatting.
- ty for type checking.
- Polars for DataFrame work.

This stack is fast, modern, and exceptionally compatible. Three of the four tools (uv, Ruff, and ty) come from the same company, Astral, which means they integrate seamlessly with each other and with your pyproject.toml.
# Understanding Why This Stack Works
Older setups often looked like this:
pyenv + pip + venv + pip-tools or Poetry + Black + isort + Flake8 + mypy + pandas
This worked, but it created significant overlap, incompatibilities, and maintenance overhead. You had different tools for environment setup, dependency locking, formatting, import sorting, linting, and typing, and every new project started with an explosion of choices. The 2026 default stack collapses them all. The end result is fewer tools, fewer configuration files, and less friction when onboarding contributors or wiring up continuous integration (CI). Before jumping into setup, let's look at what each tool does in the 2026 stack:
- uv: This is the foundation of your project setup. It creates the project, manages Python versions, handles dependencies, and runs your code. Instead of manually setting up a virtual environment and installing packages, uv handles the heavy lifting. It keeps your environment consistent with lockfiles and ensures everything is in sync before running any commands.
- Ruff: This is your all-in-one tool for code quality. It is extremely fast, checks for problems, fixes many of them automatically, and even formats your code. It replaces tools like Black, isort, and Flake8.
- ty: This is a new tool for type checking. It catches errors by checking the types in your code and works with a range of editors. While newer than tools like mypy or Pyright, it is optimized for modern workflows.
- Polars: This is a modern library for working with DataFrames. It focuses on efficient data processing through lazy execution, meaning it optimizes queries before running them. This makes it faster and more memory-efficient than pandas, especially for large data tasks.
# Reviewing Prerequisites
Setup is quite simple. Here are some things you need to get started:
- Terminal: macOS Terminal, Windows PowerShell, or any Linux shell.
- Internet connection: Needed once for the uv installer and for package downloads.
- Code editor: VS Code is recommended because it works well with Ruff and ty, but any editor is fine.
- Git: Required for version control; note that uv initializes a git repository automatically.

That is it. You do not need Python installed beforehand. You do not need pip, venv, pyenv, or conda. uv handles installation and environment management for you.
# Step 1: Installing uv
uv provides a standalone installer that works on macOS, Linux, and Windows without requiring Python or Rust to be present on your machine.
macOS and Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
Windows PowerShell:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
After installation, restart your terminal and verify the install:
uv --version
Output:
uv 0.8.0 (Homebrew 2025-07-17)
This single binary now replaces the project management layer of pyenv, pip, venv, pip-tools, and Poetry.
# Step 2: Creating a New Project
Go to where you keep your projects and create a new one:
uv init my-project
cd my-project
uv creates a clean initial structure:
my-project/
├── .python-version
├── pyproject.toml
├── README.md
└── main.py
Restructure it into a src/ layout, which improves imports, packaging, test isolation, and type-checker configuration:
mkdir -p src/my_project tests data/raw data/processed
mv main.py src/my_project/main.py
touch src/my_project/__init__.py tests/test_main.py
Your structure should now look like this:
my-project/
├── .python-version
├── README.md
├── pyproject.toml
├── uv.lock
├── src/
│ └── my_project/
│ ├── __init__.py
│ └── main.py
├── tests/
│ └── test_main.py
└── data/
├── raw/
└── processed/
If you need a specific version (like 3.12), uv can install and pin it:
uv python install 3.12
uv python pin 3.12
The pin command writes the version to .python-version, making sure every team member uses the same interpreter.
# Step 3: Adding Dependencies
Adding dependencies is a single command that resolves, installs, and locks simultaneously:
uv add polars
uv automatically creates a virtual environment (.venv/), resolves the dependency tree, installs the packages, and writes uv.lock with exact, pinned versions.
For tools needed only during development, use the --dev flag:
uv add --dev ruff ty pytest
This places them in a separate [dependency-groups] section of pyproject.toml, keeping them out of your production dependencies. You will never need to run source .venv/bin/activate again; uv run automatically activates the correct environment.
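After that command, pyproject.toml gains a dev group along these lines (the exact version pins will differ on your machine):

```toml
[dependency-groups]
dev = [
    "pytest>=9.0.2",
    "ruff>=0.15.8",
    "ty>=0.0.26",
]
```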
# Step 4: Configuring Ruff (Linting and Formatting)
Ruff is configured directly inside your pyproject.toml. Add the following sections:
[tool.ruff]
line-length = 100
target-version = "py312"

[tool.ruff.lint]
select = ["E4", "E7", "E9", "F", "B", "I", "UP"]

[tool.ruff.format]
docstring-code-format = true
quote-style = "double"
A 100-character line length is a good compromise for modern screens. The flake8-bugbear (B), isort (I), and pyupgrade (UP) rule sets add real value without overwhelming a new codebase.
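As a quick illustration of why the bugbear rules pay off, here is a hypothetical function that Ruff flags under B006; the mutable default argument is created once and shared across calls:

```python
def append_item(item, bucket=[]):  # Ruff B006: mutable default argument
    bucket.append(item)
    return bucket


# The same list leaks between calls:
print(append_item(1))  # [1]
print(append_item(2))  # [1, 2]  <- surprising carry-over
```

`uv run ruff check .` reports this kind of latent bug before it reaches production.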
Running Ruff:
# Lint your code
uv run ruff check .
# Auto-fix issues where possible
uv run ruff check --fix .
# Format your code
uv run ruff format .
Notice the pattern: every command goes through uv run. You never install tools globally or activate environments manually.
# Step 5: Configuring ty for Type Checking
ty is also configured in pyproject.toml. Add these sections:
[tool.ty.environment]
root = ["./src"]

[tool.ty.rules]
all = "warn"

[[tool.ty.overrides]]
include = ["src/**"]

[tool.ty.overrides.rules]
possibly-unresolved-reference = "error"

[tool.ty.terminal]
error-on-warning = false
output-format = "full"
This configuration starts ty in warn mode, which is ideal for gradual adoption: you fix the obvious issues first, then promote rules to errors over time. Keeping data/** excluded prevents type-checker noise from non-code directories.
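The possibly-unresolved-reference rule catches a class of bug that is easy to write. In this hypothetical function, total is only bound when the branch is taken:

```python
def summarize(values: list[int]) -> int:
    if values:
        total = sum(values)
    # `total` is possibly unresolved here; ty reports it, and at
    # runtime an empty list raises UnboundLocalError.
    return total
```

Promoting this rule to "error" turns a latent runtime crash into a failure you catch before committing.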
# Step 6: Configuring pytest
Add a section for pytest:
[tool.pytest.ini_options]
testpaths = ["tests"]

Run your test suite with:
uv run pytest
# Step 7: Checking the Entire pyproject.toml
Here's what your final configuration looks like with everything in place: one file, every tool configured, and no scattered configuration files:
[project]
name = "my-project"
version = "0.1.0"
description = "Modern Python project with uv, Ruff, Ty, and Polars"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
    "polars>=1.39.3",
]

[dependency-groups]
dev = [
    "pytest>=9.0.2",
    "ruff>=0.15.8",
    "ty>=0.0.26",
]

[tool.ruff]
line-length = 100
target-version = "py312"

[tool.ruff.lint]
select = ["E4", "E7", "E9", "F", "B", "I", "UP"]

[tool.ruff.format]
docstring-code-format = true
quote-style = "double"

[tool.ty.environment]
root = ["./src"]

[tool.ty.rules]
all = "warn"

[[tool.ty.overrides]]
include = ["src/**"]

[tool.ty.overrides.rules]
possibly-unresolved-reference = "error"

[tool.ty.terminal]
error-on-warning = false
output-format = "full"

[tool.pytest.ini_options]
testpaths = ["tests"]
# Step 8: Writing Code with Polars
Replace the contents of src/my_project/main.py with code that exercises the Polars side of the stack:
"""Sample data analysis with Polars."""
import polars as pl
def build_report(path: str) -> pl.DataFrame:
"""Build a revenue summary from raw data using the lazy API."""
q = (
pl.scan_csv(path)
.filter(pl.col("status") == "active")
.with_columns(
revenue_per_user=(pl.col("revenue") / pl.col("users")).alias("rpu")
)
.group_by("segment")
.agg(
pl.len().alias("rows"),
pl.col("revenue").sum().alias("revenue"),
pl.col("rpu").mean().alias("avg_rpu"),
)
.sort("revenue", descending=True)
)
return q.collect()
def main() -> None:
"""Entry point with sample in-memory data."""
df = pl.DataFrame(
{
"segment": ("Enterprise", "SMB", "Enterprise", "SMB", "Enterprise"),
"status": ("active", "active", "churned", "active", "active"),
"revenue": (12000, 3500, 8000, 4200, 15000),
"users": (120, 70, 80, 84, 150),
}
)
summary = (
df.lazy()
.filter(pl.col("status") == "active")
.with_columns(
(pl.col("revenue") / pl.col("users")).round(2).alias("rpu")
)
.group_by("segment")
.agg(
pl.len().alias("rows"),
pl.col("revenue").sum().alias("total_revenue"),
pl.col("rpu").mean().round(2).alias("avg_rpu"),
)
.sort("total_revenue", descending=True)
.collect()
)
print("Revenue Summary:")
print(summary)
if __name__ == "__main__":
main()
Before running, you need a build system in pyproject.toml so uv installs your project as a package. We will use hatchling:
cat >> pyproject.toml << 'EOF'
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["src/my_project"]
EOF
Then sync and run:
uv sync
uv run python -m my_project.main
You should see a formatted Polars table:
Revenue Summary:
shape: (2, 4)
┌────────────┬──────┬───────────────┬─────────┐
│ segment ┆ rows ┆ total_revenue ┆ avg_rpu │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ u32 ┆ i64 ┆ f64 │
╞════════════╪══════╪═══════════════╪═════════╡
│ Enterprise ┆ 2 ┆ 27000 ┆ 100.0 │
│ SMB ┆ 2 ┆ 7700 ┆ 50.0 │
└────────────┴──────┴───────────────┴─────────┘
# Managing the Daily Workflow
Once the project is established, the day-to-day cycle is straightforward:
# Pull latest, sync dependencies
git pull
uv sync
# Write code...
# Before committing: lint, format, type-check, test
uv run ruff check --fix .
uv run ruff format .
uv run ty check
uv run pytest
# Commit
git add .
git commit -m "feat: add revenue report module"
# Changing How You Write Python with Polars
The biggest mindset change in this stack is on the data side. With Polars, your defaults should be:
- Expressions over row-wise operations. Polars expressions let the engine vectorize and parallelize work. Avoid user-defined functions (UDFs) when a native alternative exists, as UDFs are much slower.
- Lazy execution over eager loading. Use scan_csv() instead of read_csv(). It returns a LazyFrame, which builds a query plan and lets the optimizer push down filters and prune unused columns.
- Parquet-first workflows over CSV-heavy pipelines. A good pattern for preparing internal data looks like this.
# Evaluating When This Setup Is Not the Best Fit
You may want a different option if:
- Your team has a mature Poetry or mypy workflow that is working well.
- Your codebase relies heavily on pandas-specific APIs or ecosystem libraries.
- Your organization has standardized on Pyright.
- You are working in a legacy repository where changing tooling would cause more disruption than value.
# Applying Pro Tips
- Never activate the virtual environment manually. Use uv run to make sure you are always in the correct environment.
- Always commit uv.lock to version control. This ensures the project runs identically on every machine.
- Use --frozen in CI. It installs dependencies straight from the lockfile for faster, more reliable builds.
- Use uvx for one-off tools. It runs a tool without installing it in your project.
- Use Ruff's --fix flag liberally. It can automatically fix unused imports, outdated syntax, and more.
- Prefer the lazy API by default. Use scan_csv() and only call .collect() at the end.
- Centralize configuration. Use pyproject.toml as the single source of truth for all tools.
# Closing Thoughts
The 2026 Python default stack reduces setup effort and encourages best practices: locked environments, a single configuration file, fast feedback, and optimized data pipelines. Give it a try; once you have experienced environment-agnostic execution, you will understand why developers are making the switch.
Kanwal Mehreen is a machine learning engineer and technical writer with a deep passion for the intersection of AI with data science and medicine. She co-authored the eBook "Maximizing Productivity with ChatGPT". As a Google Generation Scholar 2022 for APAC, she is an advocate for diversity and academic excellence. She has also been recognized as a Teradata Diversity in Tech Scholar, a Mitacs Globalink Research Scholar, and a Harvard WeCode Scholar. Kanwal is a strong advocate for change, having founded FEMCodes to empower women in STEM fields.