Monday, October 27, 2025

#5 Python Project for AI Use Cases

Ollama AI Daily Toolkit — Python + Streamlit GUI Project

This tutorial shows how to build a Streamlit web app on top of local Ollama models. Each AI use case runs as a separate module inside the ai_utils package (the folder was renamed to ai_utils to avoid an import conflict).

💡 Project Structure
ai_daily_toolkit/
├── app.py
├── ai_utils/
│   ├── __init__.py
│   ├── ollama_client.py
│   ├── planner.py
│   ├── optimizer.py
│   ├── translator.py
│   ├── budgeter.py
│   └── studycoach.py
└── requirements.txt
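As a quick sanity check that this package layout actually imports, the structure can be recreated on the fly with a stub module (a throwaway sketch using a temp directory — the empty __init__.py is what makes ai_utils a regular, importable package):

```python
import importlib
import os
import sys
import tempfile

# Recreate the ai_daily_toolkit layout in a temp dir.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "ai_utils")
os.makedirs(pkg)

# An empty __init__.py marks the folder as a regular package.
open(os.path.join(pkg, "__init__.py"), "w").close()

# A stand-in module mirroring ai_utils/planner.py.
with open(os.path.join(pkg, "planner.py"), "w") as f:
    f.write("def run_planner():\n    return 'planner ready'\n")

sys.path.insert(0, root)
importlib.invalidate_caches()
planner = importlib.import_module("ai_utils.planner")
print(planner.run_planner())  # planner ready
```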

📦 requirements.txt

streamlit
ollama
pandas

⚙️ ai_utils/ollama_client.py

Utility to send a prompt to Ollama and get response text.

import ollama

def query_ollama(model: str, prompt: str) -> str:
    """
    Send prompt to specified Ollama model and return the response text.
    """
    try:
        response = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
        return response['message']['content']
    except Exception as e:
        return f"⚠️ Ollama error: {e}"
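If the requested model isn't pulled yet, query_ollama returns its error string. A small fallback wrapper (a hypothetical helper, not part of the ollama API) can try alternative models in order; the query function is injected so the sketch runs without an Ollama server:

```python
def query_with_fallback(query_fn, models, prompt):
    """Try each model in order; return the first reply that is not an
    error string. `query_fn` has the same (model, prompt) -> str shape
    as query_ollama above."""
    last = "⚠️ Ollama error: no models tried"
    for model in models:
        last = query_fn(model, prompt)
        if not last.startswith("⚠️"):
            return last
    return last

# Quick check with a stub standing in for query_ollama:
def fake_query(model, prompt):
    return "⚠️ Ollama error: missing" if model == "mistral" else f"{model}: ok"

print(query_with_fallback(fake_query, ["mistral", "phi3"], "hi"))  # phi3: ok
```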

🚀 app.py — Streamlit Main App

import streamlit as st
from ai_utils import planner, optimizer, translator, budgeter, studycoach

st.set_page_config(page_title="Ollama AI Daily Toolkit", page_icon="🧠", layout="wide")

st.title("🧠 Ollama AI Daily Toolkit")
st.sidebar.title("Select an AI Tool")

tools = ["Smart Planner", "Home Optimizer", "Travel Buddy", "Budget Assistant", "Study Coach"]
choice = st.sidebar.radio("Choose an AI tool:", tools)

if choice == "Smart Planner":
    planner.run_planner()
elif choice == "Home Optimizer":
    optimizer.run_optimizer()
elif choice == "Travel Buddy":
    translator.run_translator()
elif choice == "Budget Assistant":
    budgeter.run_budgeter()
else:
    studycoach.run_studycoach()
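The if/elif chain works, but the same dispatch can be expressed as a dict, which keeps the sidebar labels and their handlers in one place and makes adding a tool a one-line change. A sketch with stub handlers standing in for the real module functions:

```python
# Stub handlers standing in for planner.run_planner, optimizer.run_optimizer, etc.
def run_planner():    return "planner"
def run_optimizer():  return "optimizer"
def run_translator(): return "translator"
def run_budgeter():   return "budgeter"
def run_studycoach(): return "studycoach"

TOOLS = {
    "Smart Planner": run_planner,
    "Home Optimizer": run_optimizer,
    "Travel Buddy": run_translator,
    "Budget Assistant": run_budgeter,
    "Study Coach": run_studycoach,
}

# In the real app: choice = st.sidebar.radio("Choose an AI tool:", list(TOOLS))
choice = "Travel Buddy"
print(TOOLS[choice]())  # translator
```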

📅 ai_utils/planner.py — Smart Daily Planner

import streamlit as st
from .ollama_client import query_ollama

def run_planner():
    st.subheader("🗓️ Smart Personal Assistant & Daily Planner")
    user_prompt = st.text_area("Enter your daily goals or tasks:")
    if st.button("Generate Plan"):
        full_prompt = f"Plan a productive day with the following info:\n{user_prompt}"
        output = query_ollama("mistral", full_prompt)
        st.markdown("### 📋 AI Suggested Plan")
        st.write(output)

⚡ ai_utils/optimizer.py — Home Energy Optimizer

import streamlit as st
from .ollama_client import query_ollama

def run_optimizer():
    st.subheader("🏠 Home Energy Optimizer")
    usage_data = st.text_area("Paste last 7 days electricity usage:")
    if st.button("Optimize Power Usage"):
        prompt = f"Analyze this usage data and suggest energy saving schedule:\n{usage_data}"
        output = query_ollama("phi3", prompt)
        st.markdown("### ⚙️ Optimization Plan")
        st.write(output)

๐ŸŒ ai_utils/translator.py — Multilingual Travel Buddy

import streamlit as st
from .ollama_client import query_ollama

def run_translator():
    st.subheader("๐ŸŒ Multilingual Translator & Travel Buddy")
    phrase = st.text_input("Enter sentence to translate:")
    lang = st.selectbox("Target language:", ["Japanese", "Spanish", "French"])
    if st.button("Translate"):
        prompt = f"Translate to {lang}: '{phrase}'. Also explain if it's polite."
        output = query_ollama("mistral", prompt)
        st.markdown("### ๐Ÿ’ฌ Translation")
        st.write(output)

💰 ai_utils/budgeter.py — Smart Budget Assistant

import streamlit as st
from .ollama_client import query_ollama

def run_budgeter():
    st.subheader("💰 Smart Budget & Shopping Assistant")
    expenses = st.text_area("Enter recent expenses (item + amount):")
    if st.button("Analyze & Suggest Savings"):
        prompt = f"Here are my expenses:\n{expenses}\nSuggest ways to save 10%."
        output = query_ollama("phi3", prompt)
        st.markdown("### 💡 AI Suggestions")
        st.write(output)

📘 ai_utils/studycoach.py — Local Study Coach

import streamlit as st
from .ollama_client import query_ollama

def run_studycoach():
    st.subheader("📘 Local Study & Skill Coach")
    topic = st.text_input("Enter topic (e.g., Python basics):")
    if st.button("Generate Lesson"):
        prompt = f"Teach me {topic} for beginners. Give one example and 2 quiz questions."
        output = query_ollama("llama3", prompt)
        st.markdown("### 🧩 Lesson")
        st.write(output)

▶️ Run the App

python -m streamlit run app.py
🧩 Notes
  • Folder renamed to ai_utils to avoid import conflicts.
  • All prompts use ollama.chat() locally — no external API required.
  • Models: mistral, phi3, and llama3 (replace as needed).
  • Use the sidebar to switch between AI tools.
Sample response from the llama3.2 model
Prerequisite: pull the model once with ollama run llama3.2, then verify with ollama list:

C:\Users\AURMC>ollama list
NAME               ID              SIZE      MODIFIED
llama3.2:latest    a80c4f17acd5    2.0 GB    13 days ago

Sample output for the Study Coach (topic prompt: Excel)

#4 AI Use-Cases with different LLM models

Ollama AI Daily Toolkit — Ready-to-Run (YAML) + Prompts

Each code block below is ready to copy and paste into your terminal or editor.

Quick Instructions

1) Copy any code block to your machine.
2) Run ollama run -f <yaml-file> in a terminal after downloading or placing the models.
3) Replace model names with locally available/lightweight alternatives if needed.

1) daily_planner.yaml — Smart Personal Assistant & Daily Planner
Models: mistral (planner), phi3 (summarizer), codegemma (automator)
models:
  - name: mistral
    role: planner
  - name: phi3
    role: summarizer
  - name: codegemma
    role: automator
workflow:
  - mistral -> phi3 -> codegemma
Prompt Example
Plan my Monday schedule with meetings, travel, and 2 hours of deep work.
Summarize unread emails and add tasks from them.
Expected Output
🗓️ Monday Plan:
- 9:00–10:00 AM: Weekly Meeting (Zoom)
- 10:30–11:00 AM: Travel to Office
- 11:30–1:30 PM: Deep Work – Project Draft
- 2:00 PM: Review team emails → Add 2 tasks (Follow-up on client proposal, update budget sheet)
2) home_energy_optimizer.yaml — Home Energy Optimizer
Models: llama3 (advisor), tinyllama (data_analyzer), phi3 (predictor)
models:
  - name: llama3
    role: advisor
  - name: tinyllama
    role: data_analyzer
  - name: phi3
    role: predictor
workflow:
  - tinyllama -> phi3 -> llama3
Prompt Example
Analyze my last 7 days of electricity usage data.
Suggest when to run washing machine and water heater to save power.
Expected Output
⚡ Power Optimization Plan:
- Washing machine: Run between 2–4 PM (solar surplus)
- Water heater: 7–8 AM, avoid 6–8 PM (peak tariff)
Estimated savings: ₹320/month.
3) travel_buddy.yaml — Multilingual Personal Translator & Travel Buddy
Models: mistral (translator), phi3 (culture_advisor), whisper (speech_to_text)
models:
  - name: mistral
    role: translator
  - name: phi3
    role: culture_advisor
  - name: whisper
    role: speech_to_text
workflow:
  - whisper -> mistral -> phi3
Prompt Example
Translate this sentence to Japanese: "Where is the nearest train station?"
Also tell me if it's polite enough.
Expected Output
Japanese: ไธ€็•ช่ฟ‘ใ„้ง…ใฏใฉใ“ใงใ™ใ‹? (Ichiban chikai eki wa doko desu ka?)
✅ This phrase is polite and suitable for speaking with strangers.
4) budget_assistant.yaml — Smart Budget & Shopping Assistant
Models: phi3 (categorizer), mistral (extractor), llama3 (optimizer)
models:
  - name: phi3
    role: categorizer
  - name: mistral
    role: extractor
  - name: llama3
    role: optimizer
workflow:
  - mistral -> phi3 -> llama3
Prompt Example
Here are my last week’s expenses:
- Groceries ₹1200
- Snacks ₹350
- Taxi ₹500
- Coffee ₹300
Suggest where I can save 10%.
Expected Output
💰 Expense Optimization:
- Reduce snacks & coffee: Target ₹400/week (save ₹250)
- Try monthly grocery packs: Estimated ₹150 saving.
✅ Total potential saving: ₹400 (≈17%)
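Model outputs vary run to run, but the arithmetic in a reply like this is worth sanity-checking (amounts taken from the prompt above):

```python
expenses = {"Groceries": 1200, "Snacks": 350, "Taxi": 500, "Coffee": 300}

total = sum(expenses.values())   # weekly spend
goal = round(total * 0.10)       # the 10% target from the prompt
claimed = 250 + 150              # savings suggested in the sample reply

print(total, goal, claimed, f"{claimed / total:.0%}")
```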
5) study_coach.yaml — Local Study & Skill Coach
Models: llama3 (tutor), phi3 (quizmaster), codegemma (code_assistant)
models:
  - name: llama3
    role: tutor
  - name: phi3
    role: quizmaster
  - name: codegemma
    role: code_assistant
workflow:
  - llama3 -> phi3 -> codegemma
Prompt Example
Teach me Python basics in one week.
Give me today’s topic, one example, and a 2-question quiz.
Expected Output
📘 Python Week 1 – Day 1: Introduction to Variables
Example:
x = 5
name = "Ravi"
print(name, x)

🧩 Quiz:
1. What is a variable?
2. How do you print multiple items in Python?
Bonus) ai_daily_toolkit.yaml — All-in-One Launcher (Menu Mode)
A simple menu-style YAML referencing all assistant configs.
models:
  - name: llama3
  - name: mistral
  - name: phi3
  - name: codegemma
  - name: tinyllama
  - name: whisper
menu:
  - Smart Planner: daily_planner.yaml
  - Home Optimizer: home_energy_optimizer.yaml
  - Travel Buddy: travel_buddy.yaml
  - Budget Assistant: budget_assistant.yaml
  - Study Coach: study_coach.yaml
Run command
ollama run -f ai_daily_toolkit.yaml
README: Usage Tips & Model Notes
  1. Download models: Replace the model names with ones you already have, or pull lightweight alternatives in Ollama first.
  2. Permissions: If Blogger strips external scripts, add Prism and the copy script via a trusted host or inline them (Blogger usually allows CDNs).
  3. Customise: Edit prompts to match your locale (currency, time format) or swap in your available local models.
Made for quick copy-and-run demos — tweak models & prompts to suit your environment. 👍

#3 AI Use-Cases

Here are 5 practical AI use cases you can build with Ollama (running multiple lightweight local models) for real-life, day-to-day applications.

Each one uses multi-model collaboration, combining specialized models (like LLaMA, Mistral, Phi, or CodeGemma) for different parts of the workflow.


🧠 1. Smart Personal Assistant & Daily Planner

Use Case: Automate your daily routine, summarize news, plan tasks, and manage reminders.

Models Used:

  • 🗓️ LLaMA 3 / Mistral – for natural conversation and task reasoning.

  • 📰 Phi 3-mini – for summarizing emails, news, or notes.

  • 🧾 CodeGemma / Llama-Code – for scripting small automations (e.g., exporting tasks to calendar).

Workflow:

  1. Input: “Plan my Monday – meetings, travel time, and focus work.”

  2. Model 1 (Mistral) → parses and prioritizes schedule.

  3. Model 2 (Phi) → summarizes emails and integrates urgent items.

  4. Model 3 (CodeGemma) → exports final schedule to Google Calendar or Notion.

Outcome: A context-aware daily plan ready in seconds.
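The hand-off in steps 2–4 is just function composition: each model's output becomes the next model's input. With stub functions standing in for the real model calls, the shape of the pipeline looks like this:

```python
# Stubs standing in for Mistral, Phi and CodeGemma calls.
def mistral(text):   return f"schedule({text})"
def phi(text):       return f"summarized({text})"
def codegemma(text): return f"exported({text})"

def pipeline(text, stages):
    """Feed each stage's output into the next."""
    for stage in stages:
        text = stage(text)
    return text

result = pipeline("Plan my Monday", [mistral, phi, codegemma])
print(result)  # exported(summarized(schedule(Plan my Monday)))
```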


๐Ÿ  2. Home Energy Optimizer

Use Case: Suggest when to use appliances based on sunlight, temperature, and power rates.

Models Used:

  • ๐ŸŒž LLaMA 3 – interprets user goals (“reduce power bill by 20%”).

  • ๐Ÿ“Š TinyLlama – analyzes smart meter and solar panel data.

  • ⚙️ Phi 3-mini – predicts power peaks and generates optimization actions.

Workflow:

  1. Collect hourly usage and solar generation data.

  2. TinyLlama summarizes consumption patterns.

  3. Phi predicts next day’s high-cost hours.

  4. LLaMA creates suggestions (“Run washing machine at 3 PM”).

Outcome: AI-driven power savings customized for your household.


💬 3. Multilingual Personal Translator & Travel Buddy

Use Case: Translate, summarize, and give cultural/contextual advice while traveling.

Models Used:

  • 🌍 Mistral / Gemma – translation and grammar correction.

  • 🧭 Phi 3-mini – local context (e.g., “what’s polite in Japan?”).

  • 🎙️ Whisper (speech model) – speech-to-text and text-to-speech.

Workflow:

  1. User speaks a phrase → Whisper transcribes.

  2. Mistral translates and adjusts for tone.

  3. Phi gives local advice (“use honorifics when greeting”).

Outcome: A pocket translator that also teaches you local etiquette.


๐Ÿ›️ 4. Smart Budget & Shopping Assistant

Use Case: Track expenses, recommend cheaper alternatives, and optimize grocery lists.

Models Used:

  • ๐Ÿ’ฐ Phi 3 – reads transaction text and categorizes spending.

  • ๐Ÿงพ Mistral – extracts structured data from bills.

  • ๐Ÿง  LLaMA 3 – suggests spending optimization (“Switch to X brand saves ₹500/month”).

Workflow:

  1. Upload grocery receipts or SMS summaries.

  2. Models classify expenses, compute monthly totals.

  3. AI suggests where to cut or save.

Outcome: AI-driven personal finance tracking that runs offline and respects privacy.


🧑‍🏫 5. Local Study & Skill Coach

Use Case: Personalized learning assistant for students or professionals.

Models Used:

  • 📘 LLaMA 3 / Mistral – tutor model explaining concepts simply.

  • 🧩 Phi 3-mini – quiz & flashcard generator.

  • 🧑‍💻 CodeGemma – builds short code snippets for exercises.

Workflow:

  1. Input: “Teach me Python basics in 1 week.”

  2. LLaMA generates lesson plan.

  3. Phi creates daily quizzes.

  4. CodeGemma builds example programs.

Outcome: A fully personalized, offline learning companion.


⚙️ Implementation Tip (Example Ollama YAML):

# daily_assistant.yaml
models:
  - name: mistral
    role: planner
  - name: phi3
    role: summarizer
  - name: codegemma
    role: automator
workflow:
  - mistral -> phi3 -> codegemma

You can then run:

ollama run -f daily_assistant.yaml
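Assuming the workflow entry stays in the plain "a -> b -> c" form shown in the YAML, turning it into an ordered model list for your own launcher script is a one-liner:

```python
def parse_workflow(line):
    """Split an 'a -> b -> c' workflow entry into an ordered model list."""
    return [name.strip() for name in line.split("->")]

print(parse_workflow("mistral -> phi3 -> llama3"))  # ['mistral', 'phi3', 'llama3']
```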


Wednesday, October 8, 2025

#1 & #2 AI Use Case — Form Filler + Doc Image Extraction

Let’s now turn this into a mini production-ready system with:

📦 SQLite database
🧾 List of all past submissions
🔗 Downloadable PDF for each record
🧠 OCR + Ollama data extraction workflow preserved


🧩 Final Architecture Overview

form_filler/
├── app.py
├── templates/
│   ├── upload.html
│   ├── form.html
│   └── submissions.html
└── static/
    ├── uploads/
    └── pdfs/

⚙️ Step 1: Install Dependencies

pip install flask pytesseract pillow pdfplumber reportlab

SQLite is built-in with Python.

Make sure:

ollama serve
ollama pull llama3.2:latest

🧠 Step 2: Complete app.py

from flask import Flask, render_template, request, send_from_directory, redirect, url_for
import pytesseract
from PIL import Image
import subprocess, json, os, sqlite3
import pdfplumber
from reportlab.lib.pagesizes import A4
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer
from reportlab.lib.styles import getSampleStyleSheet
from datetime import datetime

app = Flask(__name__)
DB_FILE = 'submissions.db'

# Build safe absolute paths relative to this file
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
UPLOAD_FOLDER = os.path.join(BASE_DIR, 'static', 'uploads')
PDF_FOLDER = os.path.join(BASE_DIR, 'static', 'pdfs')

# Ensure folders exist
os.makedirs(UPLOAD_FOLDER, exist_ok=True)
os.makedirs(PDF_FOLDER, exist_ok=True)

app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
app.config['PDF_FOLDER'] = PDF_FOLDER

# ------------------------------
# DATABASE SETUP
# ------------------------------
def init_db():
    conn = sqlite3.connect(DB_FILE)
    cur = conn.cursor()
    cur.execute("""
    CREATE TABLE IF NOT EXISTS submissions (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT,
        date_of_birth TEXT,
        address TEXT,
        college_name TEXT,
        course_applied TEXT,
        marks TEXT,
        pdf_file TEXT,
        created_at TEXT
    )
    """)
    conn.commit()
    conn.close()

def save_to_db(data, pdf_file):
    conn = sqlite3.connect(DB_FILE)
    cur = conn.cursor()
    cur.execute("""
        INSERT INTO submissions (name, date_of_birth, address, college_name, course_applied, marks, pdf_file, created_at)
        VALUES (?, ?, ?, ?, ?, ?, ?, ?)
    """, (
        data.get('name', ''),
        data.get('date_of_birth', ''),
        data.get('address', ''),
        data.get('college_name', ''),
        data.get('course_applied', ''),
        data.get('marks', ''),
        pdf_file,
        datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    ))
    conn.commit()
    conn.close()

def fetch_all_submissions():
    conn = sqlite3.connect(DB_FILE)
    cur = conn.cursor()
    cur.execute("SELECT id, name, college_name, course_applied, created_at, pdf_file FROM submissions ORDER BY id DESC")
    rows = cur.fetchall()
    conn.close()
    return rows

# ------------------------------
# OCR + OLLAMA UTILITIES
# ------------------------------
def extract_text_from_pdf(pdf_path):
    text = ""
    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            # extract_text() returns None for image-only pages
            text += (page.extract_text() or "") + "\n"
    return text.strip()

def extract_text(file_path):
    ext = os.path.splitext(file_path)[1].lower()
    # Explicit Tesseract path on Windows (adjust if installed elsewhere)
    pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
    if ext in ['.jpg', '.jpeg', '.png']:
        return pytesseract.image_to_string(Image.open(file_path))
    elif ext == '.pdf':
        return extract_text_from_pdf(file_path)
    else:
        return ""

def generate_pdf(data, output_path):
    doc = SimpleDocTemplate(output_path, pagesize=A4)
    styles = getSampleStyleSheet()
    elements = []

    elements.append(Paragraph("<b>Admission Form Summary</b>", styles['Title']))
    elements.append(Spacer(1, 20))

    for k, v in data.items():
        if k not in ["submit"]:
            elements.append(Paragraph(f"<b>{k.replace('_', ' ').title()}:</b> {v}", styles['Normal']))
            elements.append(Spacer(1, 10))

    doc.build(elements)

# ------------------------------
# ROUTES
# ------------------------------
@app.route('/')
def upload_page():
    return render_template('upload.html')

@app.route('/process', methods=['POST'])
def process_image():
    if 'file' not in request.files:
        return "No file uploaded."
   
    file = request.files['file']
    if file.filename == '':
        return "No file selected."

    # Sanitize the user-supplied filename before saving (avoids path traversal)
    from werkzeug.utils import secure_filename
    filename = secure_filename(file.filename)
    filepath = os.path.join(app.config['UPLOAD_FOLDER'], filename)
    file.save(filepath)


    text = extract_text(filepath)
    if not text.strip():
        return "No text could be extracted. Try a clearer image or text-based PDF."

    prompt = f"""
    You are a data extraction assistant.
    Extract and return as JSON the following fields if present:
    - name
    - date_of_birth
    - address
    - college_name
    - course_applied
    - marks
    Text:
    {text}
    """

    try:
        result = subprocess.run(
            ["ollama", "run", "llama3.2:latest"],
            input=prompt.encode("utf-8"),
            capture_output=True,
            timeout=90
        )
        response = result.stdout.decode().strip()
        start = response.find('{')
        end = response.rfind('}') + 1
        json_str = response[start:end] if start != -1 and end != -1 else '{}'
        data = json.loads(json_str)
    except Exception as e:
        data = {"error": f"Model error: {str(e)}"}

    return render_template('form.html', data=data, image=file.filename)

@app.route('/submit', methods=['POST'])
def submit_form():
    data = request.form.to_dict()
    pdf_filename = f"filled_form_{data.get('name', 'anonymous').replace(' ', '_')}.pdf"
    pdf_path = os.path.join(app.config['PDF_FOLDER'], pdf_filename)
    generate_pdf(data, pdf_path)
    save_to_db(data, pdf_filename)
    return redirect(url_for('view_submissions'))

@app.route('/submissions')
def view_submissions():
    rows = fetch_all_submissions()
    return render_template('submissions.html', submissions=rows)

@app.route('/download/<filename>')
def download_pdf(filename):
    return send_from_directory(app.config['PDF_FOLDER'], filename, as_attachment=True)

if __name__ == "__main__":
    init_db()
    app.run(debug=True)
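The brace-slicing in /process assumes the model returns exactly one well-formed JSON object. A slightly more defensive extractor (a sketch, not part of the app above) also handles prose around the JSON and malformed output:

```python
import json

def extract_json(response, default=None):
    """Pull the first {...} object out of a model response that may wrap
    the JSON in extra prose. Returns `default` (or {}) on failure."""
    start, end = response.find("{"), response.rfind("}") + 1
    if start == -1 or end == 0:
        return default if default is not None else {}
    try:
        return json.loads(response[start:end])
    except json.JSONDecodeError:
        return default if default is not None else {}

reply = 'Sure! Here is the data:\n{"name": "Anitha R", "marks": "472/500"}\nHope that helps.'
print(extract_json(reply))  # {'name': 'Anitha R', 'marks': '472/500'}
```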

🧾 templates/upload.html

<!DOCTYPE html>
<html>
<head>
  <title>Upload Document</title>
  <style>
    body { font-family: Arial; background: #eef; padding: 40px; }
    .container { background: #fff; padding: 30px; border-radius: 10px; width: 400px; margin: auto; box-shadow: 0 0 10px #aaa; }
  </style>
</head>
<body>
  <div class="container">
    <h2>Upload Document (Image or PDF)</h2>
    <form action="{{ url_for('process_image') }}" method="POST" enctype="multipart/form-data">
      <input type="file" name="file" accept=".jpg,.jpeg,.png,.pdf" required><br><br>
      <button type="submit">Process</button>
    </form>
    <hr>
    <a href="{{ url_for('view_submissions') }}">📜 View All Submissions</a>
  </div>
</body>
</html>

🧮 templates/form.html

<!DOCTYPE html>
<html>
<head>
  <title>Form Preview</title>
  <style>
    body { font-family: Arial; background: #eef; padding: 40px; }
    .container { background: #fff; padding: 30px; border-radius: 10px; width: 600px; margin: auto; box-shadow: 0 0 10px #aaa; }
    input, textarea { width: 100%; padding: 8px; margin: 8px 0; }
  </style>
</head>
<body>
  <div class="container">
    <h2>Auto-Filled Admission Form</h2>
    <form method="POST" action="{{ url_for('submit_form') }}">
      <label>Name:</label>
      <input type="text" name="name" value="{{ data.get('name', '') }}">
      <label>Date of Birth:</label>
      <input type="text" name="date_of_birth" value="{{ data.get('date_of_birth', '') }}">
      <label>Address:</label>
      <textarea name="address">{{ data.get('address', '') }}</textarea>
      <label>College Name:</label>
      <input type="text" name="college_name" value="{{ data.get('college_name', '') }}">
      <label>Course Applied:</label>
      <input type="text" name="course_applied" value="{{ data.get('course_applied', '') }}">
      <label>Marks:</label>
      <input type="text" name="marks" value="{{ data.get('marks', '') }}">
      <button type="submit">Submit & Save</button>
    </form>
  </div>
</body>
</html>

🗂️ templates/submissions.html

<!DOCTYPE html>
<html>
<head>
  <title>All Submissions</title>
  <style>
    body { font-family: Arial; background: #eef; padding: 40px; }
    table { border-collapse: collapse; width: 90%; margin: auto; background: #fff; }
    th, td { border: 1px solid #ccc; padding: 10px; text-align: left; }
    th { background: #ddd; }
    a { text-decoration: none; color: blue; }
  </style>
</head>
<body>
  <h2 align="center">📜 All Submitted Admission Forms</h2>
  <table>
    <tr>
      <th>ID</th> <th>Name</th> <th>College</th> <th>Course</th> <th>Date</th> <th>PDF</th>
    </tr>
    {% for s in submissions %}
    <tr>
      <td>{{ s[0] }}</td>
      <td>{{ s[1] }}</td>
      <td>{{ s[2] }}</td>
      <td>{{ s[3] }}</td>
      <td>{{ s[4] }}</td>
      <td><a href="{{ url_for('download_pdf', filename=s[5]) }}">Download</a></td>
    </tr>
    {% endfor %}
  </table>
  <div align="center" style="margin-top:20px;">
    <a href="{{ url_for('upload_page') }}">⬅️ Back to Upload</a>
  </div>
</body>
</html>

🚀 Run & Test

python app.py

Open browser → http://localhost:5000

✅ Features

  • Upload scanned image or PDF

  • Extracts data with OCR + Ollama

  • Auto-fills editable web form

  • Generates PDF of final data

  • Saves record to SQLite DB

  • View all records at /submissions with download links




Here’s how to create and test real sample image/PDF files for the OCR + Ollama web app.

The content below is ready to generate locally — just copy, paste, and run.


🖼️ SAMPLE IMAGE (Admission Form – Image)

1️⃣ Open any text editor (e.g., Notepad).
2️⃣ Copy this content and save it as sample_form.txt:

Name: Anitha R
Date of Birth: 23-05-2003
Address: 12, Gandhi Street, Madurai
College Name: PSG College of Arts and Science
Course Applied: B.Sc Computer Science
Marks: 472/500

3️⃣ Now convert this text file into an image:

Option A: Use Python (quick)

from PIL import Image, ImageDraw, ImageFont

text = """Name: Anitha R
Date of Birth: 23-05-2003
Address: 12, Gandhi Street, Madurai
College Name: PSG College of Arts and Science
Course Applied: B.Sc Computer Science
Marks: 472/500"""

img = Image.new("RGB", (800, 400), color="white")
d = ImageDraw.Draw(img)
d.text((30, 30), text, fill="black")
img.save("sample_form.jpg")
print("Saved sample_form.jpg")

Run this once; it will create sample_form.jpg — upload it in your Flask app’s upload page.


📄 SAMPLE PDF (Admission Form – PDF)

Option A: Generate using Python

from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

c = canvas.Canvas("sample_form.pdf", pagesize=A4)
text = """Name: Arjun M
Date of Birth: 15-11-2002
Address: 45, Rajaji Salai, Chennai
College Name: Loyola College
Course Applied: M.Sc Data Science
Marks: 489/500"""

c.setFont("Helvetica", 12)
x, y = 80, 750
for line in text.split("\n"):
    c.drawString(x, y, line)
    y -= 20
c.save()
print("Saved sample_form.pdf")

After running this, you’ll have sample_form.pdf in your working directory.

Upload this in your app — the OCR + LLM pipeline should extract data automatically.


✅ QUICK TEST PLAN

Step 1 — Upload sample_form.jpg → text auto-extracted (“Anitha R”, “PSG College …”)
Step 2 — Upload sample_form.pdf → text auto-extracted (“Arjun M”, “Loyola College”)
Step 3 — Edit fields if needed → submit
Step 4 — Go to /submissions → entries listed with download links
Step 5 — Download the PDF → check that it matches the entered data

------------------------

Sometimes OCR reads fail with an error even though the Python code is correct — usually because the Tesseract OCR engine itself isn’t installed or isn’t on the system PATH.

Let’s fix it step-by-step depending on your OS 👇


🪟 If you’re on Windows

1️⃣ Install Tesseract OCR

Download the Windows installer from the official source:
👉 https://github.com/UB-Mannheim/tesseract/wiki

Pick the latest version (for example, tesseract-ocr-w64-setup-5.x.x.exe).
Run the installer and let it install to the default path, e.g.:

C:\Program Files\Tesseract-OCR\

2️⃣ Add to System PATH

  • Open System Properties → Advanced → Environment Variables

  • Under System variables, find Path, click Edit

  • Add this entry:

    C:\Program Files\Tesseract-OCR\
    
  • Click OK everywhere.

3️⃣ Verify installation

Open a new Command Prompt and run:

tesseract --version

If it prints version info (like tesseract v5.3.3.0), it’s good.

4️⃣ Update your Python code (optional safety net)

Add this line before calling pytesseract.image_to_string:

pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"

Full example:

import pytesseract
from PIL import Image

# Explicit path on Windows
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"

text = pytesseract.image_to_string(Image.open("sample_form.jpg"))
print(text)

๐Ÿง If you’re on Linux

Install via apt:

sudo apt update
sudo apt install tesseract-ocr -y

Verify:

tesseract --version

Then you’re done — no path setting needed.


๐Ÿ If you’re on macOS

Using Homebrew:

brew install tesseract

Verify:

tesseract --version

Then run your Flask app again — it’ll work immediately.


✅ After installation

Restart your Flask app:

python app.py

Upload the sample image (sample_form.jpg) — you should now see extracted text on screen, no Tesseract errors 🎯


Successful output: open http://localhost:5000/submissions — the submitted records should be listed with working download links.

Use Case 1 Successfully Implemented!!!


Ollama AI Daily Toolkit — Python + Streamlit GUI Project ๐Ÿง  Ollama AI Daily Toolkit — Python + Streamlit GUI Project This tut...