A scalable pipeline for processing, indexing, and querying multimodal documents
Ever needed to take 8000 PDFs, 2000 videos, and 500 spreadsheets and feed them to an LLM as a knowledge base? Well, MMORE is here to help you!
Our package requires system dependencies. This snippet will take care of installing them:

```bash
sudo apt update
sudo apt install -y ffmpeg libsm6 libxext6 chromium-browser libnss3 \
  libgconf-2-4 libxi6 libxrandr2 libxcomposite1 libxcursor1 libxdamage1 \
  libxext6 libxfixes3 libxrender1 libasound2 libatk1.0-0 libgtk-3-0 libreoffice
```
To install the package, simply run:

```bash
pip install -e .
```
To install additional RAG-related dependencies, run:

```bash
pip install -e '.[rag]'
```
⚠️ This is a big package with a lot of dependencies, so we recommend using `uv` to handle `pip` installations. Check our tutorial on `uv`.
You can use our predefined CLI commands to execute parts of the pipeline. Note that you might need to prepend `python -m` to the command if the package does not properly create bash aliases.
```bash
# Run processing
mmore process --config-file examples/process/config.yaml

# Run indexer
mmore index --config-file examples/index/config.yaml

# Run RAG
mmore rag --config-file examples/rag/api/rag_api.yaml
```
You can also use our package in Python code, as shown here:
```python
from mmore.process.processors.pdf_processor import PDFProcessor
from mmore.process.processors.base import ProcessorConfig
from mmore.type import MultimodalSample

pdf_file_paths = ["examples/sample_data/pdf/calendar.pdf"]
out_file = "results/example.jsonl"

pdf_processor_config = ProcessorConfig(custom_config={"output_path": "results"})
pdf_processor = PDFProcessor(config=pdf_processor_config)

# args: file_paths, fast mode (True/False), num_workers
result_pdf = pdf_processor.process_batch(pdf_file_paths, True, 1)

MultimodalSample.to_jsonl(out_file, result_pdf)
```
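Since the output is a standard JSON Lines file (one JSON object per line), you can quickly inspect the results with plain `json` parsing. A minimal sketch, assuming the default output path from the example above (the exact fields of each sample depend on the processor):

```python
import json

# Minimal sketch: load the processed samples back from the JSONL output
with open("results/example.jsonl", "r", encoding="utf-8") as f:
    samples = [json.loads(line) for line in f if line.strip()]

print(f"Loaded {len(samples)} processed sample(s)")
```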
To launch the MMORE pipeline, follow the specialised instructions in the docs.
- 📄 **Input Documents**: Upload your multimodal documents (PDFs, videos, spreadsheets, and more) into the pipeline.
- 🔍 **Process**: Extracts and standardizes text, metadata, and multimedia content from diverse file formats. Easily extensible: you can add your own processors to handle new file types. Supports fast processing for specific types.
- 📁 **Index**: Organizes extracted data into a hybrid retrieval-ready vector store, combining dense and sparse indexing through Milvus. Your vector DB can also be hosted remotely; in that case you only have to provide a standard API. (A generic hybrid-search sketch follows this list.)
- 🤖 **RAG**: Use the indexed documents inside a Retrieval-Augmented Generation (RAG) system that provides a LangChain interface. Plug in any LLM with a compatible interface, or add new ones through an easy-to-use interface. Supports API hosting or local inference.
- 🎉 **Evaluation**: Coming soon. An easy way to evaluate the performance of your RAG system using Ragas.
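For intuition on the indexing step, here is a minimal, generic sketch of dense + sparse hybrid search against a Milvus collection using `pymilvus`. This is not MMORE's own retrieval code: the collection name, field names, and query vectors are placeholder assumptions, and in practice the index and RAG modules handle this for you.

```python
# Generic illustration of dense + sparse hybrid search with pymilvus (>= 2.4).
# Collection name, field names, and query vectors are placeholder assumptions.
from pymilvus import MilvusClient, AnnSearchRequest, RRFRanker

client = MilvusClient(uri="http://localhost:19530")  # assumed local Milvus instance

dense_query = [[0.1, 0.2, 0.3]]          # stand-in dense embedding of the query
sparse_query = [{101: 0.7, 2048: 0.2}]   # stand-in sparse (keyword-style) vector

requests = [
    AnnSearchRequest(data=dense_query, anns_field="dense_vector",
                     param={"metric_type": "IP"}, limit=5),
    AnnSearchRequest(data=sparse_query, anns_field="sparse_vector",
                     param={"metric_type": "IP"}, limit=5),
]

# Fuse the dense and sparse result lists with reciprocal-rank fusion
results = client.hybrid_search(
    collection_name="documents",
    reqs=requests,
    ranker=RRFRanker(),
    limit=5,
)
print(results)
```

Reciprocal-rank fusion merges the two result lists by rank rather than raw score, which avoids having to put dense and sparse similarity scores on the same scale.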
See the `/docs` directory for additional details on each module and hands-on tutorials on parts of the pipeline.
| Category | File Types | Supported Device | Fast Mode |
|---|---|---|---|
| Text Documents | DOCX, MD, PPTX, XLSX, TXT, EML | CPU | ❌ |
| PDFs | PDF | GPU/CPU | ✅ |
| Media Files | MP4, MOV, AVI, MKV, MP3, WAV, AAC | GPU/CPU | ✅ |
| Web Content (TBD) | Webpages | GPU/CPU | ✅ |
We welcome contributions to improve the current state of the pipeline. Feel free to:
- Open an issue to report a bug or request a new feature
- Open a pull request to fix a bug or add a new feature

You can find ongoing feature work and known bugs in the [Issues].
Don't hesitate to star the project ⭐ if you find it interesting! (you would be our star).
This project is licensed under the Apache 2.0 License; see the LICENSE file for details.
This project is part of the OpenMeditron initiative, developed in the LiGHT lab at EPFL/Yale/CMU Africa in collaboration with the SwissAI initiative. Thank you to Scott Mahoney and Mary-Anne Hartley.