NoSQL DATABASES WITH PYTHON
MongoDB
MongoDB stores data as documents (BSON), similar to JSON.
MongoDB is ideal when:
- The schema is flexible or evolves frequently
- You need good performance on large volumes of unstructured data
- Your data has nested, JSON-like structures
- You need real-time analytics or log storage
Python driver: pymongo
Connect:
from pymongo import MongoClient
client = MongoClient("mongodb://localhost:27017/")
db = client["shop"]
products = db["products"]
Insert:
products.insert_one({"name": "Phone", "price": 299})
Query:
for p in products.find({"price": {"$gt": 100}}):
    print(p)
Update:
products.update_one({"name": "Phone"}, {"$set": {"price": 279}})
Delete:
products.delete_one({"name": "Phone"})
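For the analytics use case above, MongoDB aggregation pipelines run grouping and computation on the server. A minimal sketch on the products collection (the pipeline and field names are illustrative):
# Average price of all products that cost more than 100
pipeline = [
    {"$match": {"price": {"$gt": 100}}},
    {"$group": {"_id": None, "avg_price": {"$avg": "$price"}}},
]
for row in products.aggregate(pipeline):
    print(row)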
Redis
Redis is an in-memory key-value store used for:
- Caching
- Sessions
- Queues
- Rate limiting
- Real-time analytics
Python library: redis
Connect:
import redis
r = redis.Redis(host="localhost", port=6379)
Operations:
r.set("count", 1)
print(r.get("count"))  # returns bytes by default, e.g. b"1"
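A short sketch of the caching and rate-limiting patterns listed above, using redis-py; the key names and limits are illustrative:
# Cache a value with a 60-second TTL
r.set("cache:user:42", "profile-json", ex=60)

# Simple per-minute rate limiter: count requests and expire the counter
hits = r.incr("ratelimit:user:42")
if hits == 1:
    r.expire("ratelimit:user:42", 60)   # start the window on the first hit
if hits > 100:
    print("rate limit exceeded")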
Pub/Sub:
r.publish("channel1", "Hello subscribers")
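On the receiving side, a subscriber listens on the same channel. A minimal sketch:
p = r.pubsub()
p.subscribe("channel1")
for message in p.listen():              # blocks and yields messages as they arrive
    if message["type"] == "message":    # skip the initial subscribe confirmation
        print(message["data"])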
ORM (Object Relational Mapper)
ORMs provide a high-level abstraction that maps Python classes to database tables.
This reduces the need to write raw SQL and improves portability and maintainability.
The two most widely used ORMs in Python:
- SQLAlchemy
- Django ORM
SQLAlchemy — The Most Powerful ORM In Python
SQLAlchemy consists of two layers:
- Core (a lower-level SQL expression language for building queries by hand)
- ORM (maps Python classes to database tables)
Defining Models (ORM Example)
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker
Base = declarative_base()
class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    email = Column(String)
Creating Database and Session
engine = create_engine("sqlite:///app.db")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()
Inserting Data
u = User(name="Alice", email="a@example.com")
session.add(u)
session.commit()
Querying
users = session.query(User).filter(User.name == "Alice").all()  # .all() executes the query and returns a list
SQLAlchemy supports:
- Joins
- Subqueries
- Transactions
- Bulk operations
- Relationship modeling
- Engine-level pooling
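A minimal sketch of relationship modeling and a join, building on the Base, User, engine, and session defined above; the Post model and its fields are illustrative assumptions:
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship

class Post(Base):
    __tablename__ = "posts"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    user_id = Column(Integer, ForeignKey("users.id"))
    user = relationship("User", backref="posts")

Base.metadata.create_all(engine)

# Join: all posts written by Alice
posts = (
    session.query(Post)
    .join(User)
    .filter(User.name == "Alice")
    .all()
)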
DJANGO ORM
What is Django ORM?
Django ORM is a powerful, built-in component of the Django web framework that allows developers to interact with the database using Python objects instead of raw SQL.
Each model class in Django represents a database table, and each object of that class represents a row in the table.
This makes database operations:
- Easier to write
- More readable
- Less error-prone
Defining a Model
A Django model is defined as a Python class that inherits from models.Model.
Example Model
from django.db import models
class Product(models.Model):
    name = models.CharField(max_length=100)
    price = models.FloatField()
What Happens Internally?
- Product → table name (by default appname_product, where appname is the Django app label)
- name → VARCHAR(100)
- price → FLOAT
- Django automatically creates a primary key (id)
Making Database Tables (Auto Migrations)
Django automatically handles database schema changes using migrations.
Commands:
python manage.py makemigrations
python manage.py migrate
Benefits:
- No manual SQL needed
- Version-controlled schema
- Easy rollback and updates
Querying Data Using Django ORM
Basic Query
Product.objects.filter(price__gte=100)
🔹 This fetches all products with price greater than or equal to 100.
Common Query Examples
Product.objects.all() # Fetch all records
Product.objects.get(id=1) # Fetch single record
Product.objects.filter(name="Phone") # WHERE condition
Product.objects.exclude(price__lt=50) # NOT condition
Query Chaining
Django ORM allows chaining multiple filters to build complex queries.
Product.objects.filter(price__gte=100).exclude(name="Laptop")
Equivalent SQL:
SELECT * FROM product WHERE price >= 100 AND name != 'Laptop';
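QuerySets are also lazy: chained filters build up the SQL, but nothing runs until the queryset is evaluated. A short sketch (the ordering and slice are illustrative):
qs = Product.objects.filter(price__gte=100).exclude(name="Laptop")
qs = qs.order_by("-price")    # still no SQL executed
top_products = list(qs[:5])   # the query runs here, with LIMIT 5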
Relationship Management
Django ORM supports relationships automatically.
One-to-Many
class Category(models.Model):
    name = models.CharField(max_length=50)

class Product(models.Model):
    category = models.ForeignKey(Category, on_delete=models.CASCADE)
Benefits:
- Automatic JOINs
- Reverse lookups (category.product_set.all())
- Referential integrity
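A short sketch of both lookup directions, assuming a Category named "Electronics" already exists:
electronics = Category.objects.get(name="Electronics")
electronics.product_set.all()                          # reverse lookup from Category to its Products
Product.objects.filter(category__name="Electronics")   # forward lookup with an automatic JOIN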
Built-in Validation
Django automatically validates data before saving.
Examples:
- max_length
- null=False
- blank=False
- Field type validation
product = Product(name="", price=-10)
product.full_clean()  # Raises ValidationError because name is blank
Database Operations (CRUD)
Create
Product.objects.create(name="Shoes", price=120)
Read
Product.objects.filter(price__lt=200)
Update
Product.objects.filter(id=1).update(price=150)
Delete
Product.objects.filter(id=1).delete()
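For a single row, an instance-level alternative is to load the object, modify it, and call save(). A short sketch:
product = Product.objects.get(id=1)
product.price = 150
product.save()   # issues an UPDATE for this row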
Why Use Django ORM?
✔ No need to write raw SQL
✔ Database-agnostic (SQLite, MySQL, PostgreSQL)
✔ Secure (prevents SQL injection)
✔ Highly readable and maintainable
✔ Tight integration with Django ecosystem
When Django ORM is Best
- Django-based web applications
- Rapid development
- Projects requiring clean and secure DB access
- Applications with complex relationships
Connection Pooling
What is Connection Pooling?
Opening a database connection is a costly operation.
It involves:
- Network communication
- Authentication
- Session setup
- Memory allocation
Connection pooling solves this by reusing existing database connections instead of creating a new one for every request.
A pool maintains a fixed number of open connections that are shared across the application.
How Connection Pooling Works (Step-by-Step)
- Application requests a database connection
- Pool checks if an idle connection is available
- If available → reused immediately
- If not → new connection is created (up to a limit)
- After query execution, connection is returned to the pool
- Connection is reused for the next request
Connections are not closed after each request; they are returned to the pool and recycled.
SQLAlchemy Connection Pooling Example
from sqlalchemy import create_engine
engine = create_engine(
    "postgresql://user:password@localhost/db",
    pool_size=5,
    max_overflow=10
)
Meaning of Parameters
| Parameter | Description |
|---|---|
| pool_size | Number of persistent connections kept open |
| max_overflow | Extra connections allowed when pool is full |
| Total possible connections | pool_size + max_overflow |
🔹 In this example:
- 5 permanent connections
- Up to 10 temporary extra connections
- Maximum 15 concurrent connections
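A minimal sketch of how a pooled connection is checked out and returned (SQLAlchemy 1.4+ style); the query is illustrative:
from sqlalchemy import text

with engine.connect() as conn:              # borrows a connection from the pool
    result = conn.execute(text("SELECT 1"))
    print(result.scalar())
# leaving the with-block returns the connection to the pool instead of closing it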
Why Opening Connections Is Expensive
Without pooling:
- Every request opens a new connection
- Performs authentication repeatedly
- Creates CPU and memory overhead
- Slows down application under load
With pooling:
- Connections are ready to use
- No repeated setup cost
- Faster response time
Benefits of Connection Pooling
✔ Higher performance – Faster query execution
✔ Lower latency – Immediate connection availability
✔ Reduced overhead – Less CPU, memory, and network usage
✔ Better scalability – Handles multiple users efficiently
✔ Database protection – Prevents too many open connections
When Connection Pooling Is Essential
- Web applications (Django, Flask, FastAPI)
- High-traffic APIs
- Microservices
- Enterprise systems
- Any multi-user environment
When Pooling Is NOT Required
- Small scripts
- One-time database tasks
- CLI tools
- Single-user desktop applications
Connection Pooling vs No Pooling
| Without Pooling | With Pooling |
|---|---|
| Slow connections | Fast reuse |
| High overhead | Low overhead |
| Poor scalability | High scalability |
| Frequent connect/disconnect | Persistent connections |
Transactions and ACID in Python
What is a Transaction?
A transaction is a group of database operations executed as a single logical unit.
Either all operations succeed, or none of them are applied.
Transactions protect data integrity, especially in critical operations like payments or transfers.
ACID Properties
Transactions follow ACID, which guarantees reliable database behavior.
1. Atomicity
- All operations succeed or none do
- Prevents partial updates
Example: Money is not deducted unless it is credited elsewhere.
2. Consistency
- Database moves from one valid state to another
- All constraints and rules are preserved
Example: Balance never becomes negative if not allowed.
3. Isolation
- Concurrent transactions do not interfere with each other
- Each transaction behaves as if it runs alone
Example: Two users transferring money simultaneously won’t corrupt data.
4. Durability
- Once committed, data remains saved
- Survives crashes or power failures
Transaction Example in Python (psycopg2)
import psycopg2

# Connection details are illustrative
conn = psycopg2.connect("dbname=bank user=postgres password=secret")
cursor = conn.cursor()

try:
    # psycopg2 starts a transaction implicitly on the first execute()
    cursor.execute(
        "UPDATE accounts SET balance = balance - 100 WHERE id = 1"
    )
    cursor.execute(
        "UPDATE accounts SET balance = balance + 100 WHERE id = 2"
    )
    conn.commit()
except Exception:
    conn.rollback()
What Happens Here?
- Both updates execute together
- If any query fails → rollback
- Data remains consistent
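The same transfer can be sketched with SQLAlchemy, where engine.begin() commits on success and rolls back automatically on any exception (table and column names are illustrative):
from sqlalchemy import text

with engine.begin() as conn:
    conn.execute(text("UPDATE accounts SET balance = balance - 100 WHERE id = 1"))
    conn.execute(text("UPDATE accounts SET balance = balance + 100 WHERE id = 2"))
# commit happens here if no exception was raised; otherwise the block rolls back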
Why Transactions Matter
✔ Prevent data corruption
✔ Ensure safe multi-step operations
✔ Essential for financial and business systems
✔ Maintain database reliability
Migration Tools
Schema migrations allow developers to version database changes.
Popular tools:
- Alembic (for SQLAlchemy)
- Django migrations
- Flyway
- Liquibase
Example (Alembic):
alembic revision -m "create users table"
alembic upgrade head
Indexing & Query Optimization
Indexing improves query speed.
CREATE INDEX idx_user_email ON users(email);
You should benchmark query performance from Python, especially with large tables.
Tools:
- EXPLAIN query plans
- SQLAlchemy logs (echo=True)
- Database profiling
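In SQLAlchemy, an index can also be declared on the model, and echo=True logs every emitted statement. A minimal, standalone sketch with its own Base and engine:
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String, index=True)         # creates an index such as ix_users_email

engine = create_engine("sqlite:///app.db", echo=True)   # echo=True prints the SQL being run
Base.metadata.create_all(engine)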
Batch Operations & Bulk Inserts
Insert many records efficiently:
cursor.executemany("INSERT INTO logs (msg) VALUES (%s)", log_list)
SQLAlchemy bulk:
session.bulk_save_objects(objects)
Summary
Python provides robust support for both SQL and NoSQL databases, enabling developers to build scalable, secure, and high-performance applications. SQL databases are commonly accessed using ORMs like SQLAlchemy and Django ORM, which allow interaction with databases through Python classes instead of raw SQL, offering features such as automatic migrations, relationship management, query optimization, and built-in validation. NoSQL databases like MongoDB and Redis are used when flexible schemas, high speed, and real-time data handling are required, with MongoDB storing data as JSON-like documents and Redis serving as an in-memory key-value store for caching, sessions, and messaging. Core database concepts such as connection pooling improve performance by reusing connections, while transactions following ACID properties ensure data integrity and reliability. Together, these tools and techniques make Python a powerful choice for modern database-driven applications.
