FastMCP Fashion Recommender
A mockup full-stack app built with React, FastAPI, MongoDB, and Docker, powered by AWS Rekognition and CLIP for multi-tagging and clothing recommendations.
Installation
Installing for Claude Desktop
Manual Configuration Required
This MCP server requires manual configuration. Run the command below to open your configuration file:
npx mcpbar@latest edit -c claude
This will open your configuration file where you can add the FastMCP Fashion Recommender MCP server manually.
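For reference, a Claude Desktop MCP entry generally follows the shape below. The server name, command, and args here are placeholders: replace them with whatever command actually launches this server, since the exact invocation is not documented in this README.

```json
{
  "mcpServers": {
    "fastmcp-fashion-recommender": {
      "command": "python",
      "args": ["path/to/server.py"]
    }
  }
}
```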
FastMCP_RecSys
This is a CLIP-Based Fashion Recommender with MCP.
📌 Sample Components for UI
- Image upload
- Submit button
- Display clothing tags + recommendations
Mockup
A user uploads a clothing image → AWS Rekognition detects clothing → CLIP encodes the crop → similar items are recommended
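The flow above can be sketched in Python. The three stage functions below are stand-ins that return the same canned values the mock tests later in this README use, so this illustrates the control flow only, not the real Rekognition/CLIP code.

```python
def detect_garments(image_bytes):
    # Stand-in for AWS Rekognition: pretend it found one garment box.
    return [{"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}]

def crop_by_bounding_box(image_bytes, box):
    # Stand-in for the Pillow-based crop step in image_utils.py.
    return ("cropped_image", box)

def get_tags_from_clip(cropped):
    # Stand-in for CLIP scoring the crop against the tag vocabulary.
    return ["T-shirt", "Cotton", "Casual"]

def tag_image(image_bytes):
    """Run the full detect → crop → tag pipeline over one image."""
    tags = []
    for box in detect_garments(image_bytes):
        cropped = crop_by_bounding_box(image_bytes, box)
        tags.extend(get_tags_from_clip(cropped))
    return tags
```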
Folder Structure
/project-root
│
├── /backend
│   ├── Dockerfile
│   ├── /app
│   │   ├── /aws
│   │   │   └── rekognition_wrapper.py   # AWS Rekognition logic
│   │   ├── /utils
│   │   │   └── image_utils.py           # Bounding box crop utils
│   │   ├── /controllers
│   │   │   ├── clothing_detector.py     # Coordinates Rekognition + cropping
│   │   │   ├── clothing_controller.py
│   │   │   ├── clothing_tagging.py
│   │   │   └── tag_extractor.py         # Pending: define core CLIP functionality
│   │   ├── /routes
│   │   │   └── clothing_routes.py
│   │   ├── /schemas
│   │   │   └── clothing_schemas.py
│   │   ├── /config
│   │   │   ├── tag_list_en.py           # Tool for mapping: https://jsoncrack.com/editor
│   │   │   ├── database.py
│   │   │   ├── settings.py
│   │   │   └── api_keys.py
│   │   ├── /tests
│   │   │   ├── test_rekognition_wrapper.py
│   │   │   └── test_clothing_tagging.py
│   │   ├── server.py                    # FastAPI app code
│   │   └── requirements.txt
│   └── .env
│
├── /frontend
│   ├── Dockerfile
│   ├── package.json
│   ├── package-lock.json
│   ├── /public
│   │   └── index.html
│   ├── /src
│   │   ├── /components
│   │   │   ├── ImageUpload.jsx
│   │   │   ├── DetectedTags.jsx
│   │   │   └── Recommendations.jsx
│   │   ├── /utils
│   │   │   └── api.js
│   │   ├── App.js                       # Main React component
│   │   ├── index.js
│   │   ├── index.css
│   │   ├── tailwind.config.js
│   │   └── postcss.config.js
│   └── .env
│
├── docker-compose.yml
└── README.md
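A note on the bounding box crop utils in image_utils.py: AWS Rekognition returns bounding boxes as ratios of the image dimensions (Left, Top, Width, Height, each between 0 and 1), so a crop helper first has to convert them to pixel coordinates. A minimal sketch, with an illustrative function name that is not necessarily what the repo uses:

```python
def box_to_pixels(box, img_width, img_height):
    """Convert a Rekognition relative bounding box to a pixel
    (left, top, right, bottom) tuple suitable for PIL's Image.crop()."""
    left = int(box["Left"] * img_width)
    top = int(box["Top"] * img_height)
    right = left + int(box["Width"] * img_width)
    bottom = top + int(box["Height"] * img_height)
    return (left, top, right, bottom)
```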
Quick Start Guide
Step 1: Clone the GitHub Project
Step 2: Set Up the Python Environment
python -m venv venv
source venv/bin/activate # On macOS or Linux
venv\Scripts\activate # On Windows
Step 3: Install Dependencies
pip install -r backend/app/requirements.txt
Step 4: Start the FastAPI Server (Backend)
uvicorn backend.app.server:app --reload
Once the server is running and the database is connected, you should see the following message in the console:
Database connected
INFO: Application startup complete.
Step 5: Install Frontend Dependencies
cd frontend
npm install
Step 6: Start the Development Server (Frontend)
npm start
Once running, the server logs a confirmation and opens the app in your browser: http://localhost:3000/
What’s completed so far:
1. FastAPI server is up and running (24 Apr)
2. Database connection is set up (24 Apr)
3. Backend architecture is functional (24 Apr)
4. Basic front-end UI for uploading pictures (25 Apr)
5. Mock Testing for AWS Rekognition → bounding box (15 May)
PYTHONPATH=. pytest backend/app/tests/test_rekognition_wrapper.py
- Tested Rekognition integration logic independently using a mock → verified it correctly extracts bounding boxes only when labels match the garment set
- Confirmed that the folder structure and PYTHONPATH=. work smoothly with pytest from the project root
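A minimal version of that mock setup, using only unittest.mock. The wrapper function and garment set here are assumptions based on the file names above, not the repo's actual code:

```python
from unittest.mock import MagicMock

GARMENT_LABELS = {"Shirt", "Dress", "Pants", "Jacket"}  # assumed garment set

def extract_garment_boxes(client, image_bytes):
    """Return bounding boxes only for labels that match the garment set."""
    response = client.detect_labels(Image={"Bytes": image_bytes})
    boxes = []
    for label in response["Labels"]:
        if label["Name"] in GARMENT_LABELS:
            boxes.extend(inst["BoundingBox"] for inst in label.get("Instances", []))
    return boxes

# Test double standing in for boto3's Rekognition client: one garment
# label ("Shirt") and one non-garment label ("Person") that must be ignored.
mock_client = MagicMock()
mock_client.detect_labels.return_value = {
    "Labels": [
        {"Name": "Shirt",
         "Instances": [{"BoundingBox": {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}}]},
        {"Name": "Person",
         "Instances": [{"BoundingBox": {"Left": 0.0, "Top": 0.0, "Width": 1.0, "Height": 1.0}}]},
    ]
}
boxes = extract_garment_boxes(mock_client, b"fake-jpeg")
```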
6. Mock Testing for AWS Rekognition → CLIP (20 May)
PYTHONPATH=. pytest backend/app/tests/test_clothing_tagging.py
- Detecting garments using AWS Rekognition
- Cropping the image around detected bounding boxes
- Tagging the cropped image using CLIP
7. Mock Testing for full image tagging pipeline (Image bytes → AWS Rekognition (detect garments) → Crop images → CLIP (predict tags)) + Error Handling (25 May)
| Negative Test Case | Description |
|---|---|
| No Detection Result | AWS doesn't detect any garments — should return an empty list. |
| Image Not Clothing | CLIP returns vague or empty tags — verify fallback behavior. |
| AWS Returns Exception | Simulate rekognition.detect_labels throwing an error — check try-except. |
| Corrupted Image File | Simulate a broken (non-JPEG) image — verify it raises an error or gives a hint. |
PYTHONPATH=. pytest backend/app/tests/test_clothing_tagging.py
- detect_garments: simulates AWS Rekognition returning one bounding box: {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}
- crop_by_bounding_box: simulates the cropping step returning a dummy "cropped_image" object
- get_tags_from_clip: simulates CLIP returning a list of tags: ["T-shirt", "Cotton", "Casual"]
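The "AWS Returns Exception" row, for example, reduces to checking that a try-except in the pipeline turns a Rekognition failure into an empty result. A sketch with a mock client; the function name is an assumption, not the repo's actual code:

```python
from unittest.mock import MagicMock

def tag_image_safe(client, image_bytes):
    """Return label names for an image, or an empty list if Rekognition fails."""
    try:
        response = client.detect_labels(Image={"Bytes": image_bytes})
    except Exception:
        return []  # fallback: no garments detected, no tags
    return [label["Name"] for label in response["Labels"]]

# Simulate rekognition.detect_labels throwing, as in the table above:
failing_client = MagicMock()
failing_client.detect_labels.side_effect = RuntimeError("Rekognition unavailable")
result = tag_image_safe(failing_client, b"fake-jpeg")
```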
8. Run Testing for CLIP Output (30 May)
python3 -m venv venv
source venv/bin/activate # On macOS or Linux
pip install -r requirements.txt
pip install git+https://github.com/openai/CLIP.git
python -m backend.app.tests.test_tag_extractor
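Conceptually, the tag extractor turns CLIP's per-tag similarity scores into a short tag list: softmax over the tag vocabulary, keep the top-k. A stdlib-only sketch of that last step; the real test drives CLIP itself, and the scores below are made up for illustration:

```python
import math

def top_k_tags(similarities, tags, k=3):
    """similarities: raw CLIP logits, one per tag prompt (same order as tags).
    Returns the k tags with the highest softmax probability."""
    exps = [math.exp(s) for s in similarities]
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(zip(tags, probs), key=lambda pair: pair[1], reverse=True)
    return [tag for tag, _ in ranked[:k]]
```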
Next Steps:
- Evaluate CLIP’s tagging accuracy on sample clothing images
- Fine-tune the tagging system for better recommendations
- Test the backend integration with real-time user data
- Set up monitoring for model performance
- Front-end demo
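On the recommendation side, "CLIP encodes → Recommend similar" usually means ranking a catalog of pre-computed CLIP embeddings by cosine similarity to the query image's embedding. A small illustration, with made-up 3-dimensional vectors standing in for CLIP's 512-dimensional ones:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def recommend(query_vec, catalog, k=2):
    """Return the k catalog item names most similar to the query embedding."""
    ranked = sorted(catalog.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

catalog = {
    "white-tee": [0.9, 0.1, 0.0],
    "denim-jacket": [0.1, 0.9, 0.1],
    "black-tee": [0.8, 0.2, 0.1],
}
similar = recommend([1.0, 0.0, 0.0], catalog, k=2)
```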
Stars: 0 · Forks: 1 · Last commit: 2 months ago · Repository age: 4 months · License: Apache-2.0
Auto-fetched from GitHub.