Google Document AI (#208)

* feat(ocr): implement OCR provider interface and add Google Document AI and LLM providers

* chore(deps): reorder dependencies in go.mod for better readability

* chore: update version numbers and adjust Docker configuration for Google Document AI integration

* feat(logging): add structured logging to Google Document AI and LLM providers

* chore: add placeholder file to maintain directory structure in web-app/dist

* chore(docker): remove Google Application Credentials configuration from docker-compose
Icereed 2025-02-10 15:34:12 +01:00 committed by GitHub
parent b1a7b9992d
commit c8c0dd75ff
GPG key ID: B5690EEEBB952194
14 changed files with 581 additions and 120 deletions


@@ -241,6 +241,7 @@ jobs:
run: npm run test:e2e
env:
CI: true
DEBUG: testcontainers:containers
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
PAPERLESS_GPT_IMAGE: ${{ env.PAPERLESS_GPT_IMAGE }}
- name: Upload Playwright Report


@@ -60,10 +60,11 @@ RUN go mod download
RUN CGO_ENABLED=1 go build -tags musl -o /dev/null github.com/mattn/go-sqlite3
# Copy the frontend build
COPY --from=frontend /app/dist /app/dist
COPY --from=frontend /app/dist /app/web-app/dist
# Copy the Go source files
COPY *.go .
COPY ocr ./ocr
# Import ARGs from top level
ARG VERSION

README.md

@@ -1,4 +1,5 @@
# paperless-gpt
[![License](https://img.shields.io/github/license/icereed/paperless-gpt)](LICENSE)
[![Discord Banner](https://img.shields.io/badge/Join%20us%20on-Discord-blue?logo=discord)](https://discord.gg/fJQppDH2J7)
[![Docker Pulls](https://img.shields.io/docker/pulls/icereed/paperless-gpt)](https://hub.docker.com/r/icereed/paperless-gpt)
@@ -6,7 +7,7 @@
![Screenshot](./paperless-gpt-screenshot.png)
**paperless-gpt** seamlessly pairs with [paperless-ngx][paperless-ngx] to generate **AI-powered document titles** and **tags**, saving you hours of manual sorting. While other tools may offer AI chat features, **paperless-gpt** stands out by **supercharging OCR with LLMs**—ensuring high accuracy, even with tricky scans. If you're craving next-level text extraction and effortless document organization, this is your solution.
**paperless-gpt** seamlessly pairs with [paperless-ngx][paperless-ngx] to generate **AI-powered document titles** and **tags**, saving you hours of manual sorting. While other tools may offer AI chat features, **paperless-gpt** stands out by **supercharging OCR with LLMs**-ensuring high accuracy, even with tricky scans. If you're craving next-level text extraction and effortless document organization, this is your solution.
https://github.com/user-attachments/assets/bd5d38b9-9309-40b9-93ca-918dfa4f3fd4
@@ -17,32 +18,38 @@ https://github.com/user-attachments/assets/bd5d38b9-9309-40b9-93ca-918dfa4f3fd4
1. **LLM-Enhanced OCR**
Harness Large Language Models (OpenAI or Ollama) for **better-than-traditional** OCR—turn messy or low-quality scans into context-aware, high-fidelity text.
2. **Automatic Title & Tag Generation**
2. **Use specialized AI OCR services**
- **LLM OCR**: Use OpenAI or Ollama to extract text from images.
- **Google Document AI**: Leverage Google's powerful Document AI for OCR tasks.
- **More to come**: Stay tuned for more OCR providers!
3. **Automatic Title & Tag Generation**
No more guesswork. Let the AI do the naming and categorizing. You can easily review suggestions and refine them if needed.
3. **Supports DeepSeek reasoning models in Ollama**
4. **Supports DeepSeek reasoning models in Ollama**
Greatly enhance accuracy by using a reasoning model like `deepseek-r1:8b`. The perfect tradeoff between privacy and performance! Of course, if you've got enough GPUs or NPUs, a bigger model will enhance the experience.
5. **Automatic Correspondent Generation**
Automatically identify and generate correspondents from your documents, making it easier to track and organize your communications.
6. **Extensive Customization**
- **Prompt Templates**: Tweak your AI prompts to reflect your domain, style, or preference.
6. **Extensive Customization**
- **Prompt Templates**: Tweak your AI prompts to reflect your domain, style, or preference.
- **Tagging**: Decide how documents get tagged—manually, automatically, or via OCR-based flows.
7. **Simple Docker Deployment**
A few environment variables, and you're off! Compose it alongside paperless-ngx with minimal fuss.
8. **Unified Web UI**
- **Manual Review**: Approve or tweak AI's suggestions.
- **Auto Processing**: Focus only on edge cases while the rest is sorted for you.
8. **Unified Web UI**
9. **Opt-In LLM-based OCR**
If you opt in, your images get read by a Vision LLM, pushing boundaries beyond standard OCR tools.
- **Manual Review**: Approve or tweak AI's suggestions.
- **Auto Processing**: Focus only on edge cases while the rest is sorted for you.
---
## Table of Contents
- [Key Highlights](#key-highlights)
- [Getting Started](#getting-started)
- [Prerequisites](#prerequisites)
@@ -68,6 +75,7 @@ https://github.com/user-attachments/assets/bd5d38b9-9309-40b9-93ca-918dfa4f3fd4
## Getting Started
### Prerequisites
- [Docker][docker-install] installed.
- A running instance of [paperless-ngx][paperless-ngx].
- Access to an LLM provider:
@@ -89,26 +97,40 @@ services:
paperless-gpt:
image: icereed/paperless-gpt:latest
environment:
PAPERLESS_BASE_URL: 'http://paperless-ngx:8000'
PAPERLESS_API_TOKEN: 'your_paperless_api_token'
PAPERLESS_PUBLIC_URL: 'http://paperless.mydomain.com' # Optional
MANUAL_TAG: 'paperless-gpt' # Optional, default: paperless-gpt
AUTO_TAG: 'paperless-gpt-auto' # Optional, default: paperless-gpt-auto
LLM_PROVIDER: 'openai' # or 'ollama'
LLM_MODEL: 'gpt-4o' # or 'deepseek-r1:8b'
PAPERLESS_BASE_URL: "http://paperless-ngx:8000"
PAPERLESS_API_TOKEN: "your_paperless_api_token"
PAPERLESS_PUBLIC_URL: "http://paperless.mydomain.com" # Optional
MANUAL_TAG: "paperless-gpt" # Optional, default: paperless-gpt
AUTO_TAG: "paperless-gpt-auto" # Optional, default: paperless-gpt-auto
LLM_PROVIDER: "openai" # or 'ollama'
LLM_MODEL: "gpt-4o" # or 'deepseek-r1:8b'
# Optional, but recommended for Ollama
TOKEN_LIMIT: 1000
OPENAI_API_KEY: 'your_openai_api_key'
OPENAI_API_KEY: "your_openai_api_key"
# Optional - OPENAI_BASE_URL: 'https://litellm.yourinstallationof.it.com/v1'
LLM_LANGUAGE: 'English' # Optional, default: English
OLLAMA_HOST: 'http://host.docker.internal:11434' # If using Ollama
VISION_LLM_PROVIDER: 'ollama' # (for OCR) - openai or ollama
VISION_LLM_MODEL: 'minicpm-v' # (for OCR) - minicpm-v (ollama example), gpt-4o (for openai), etc.
AUTO_OCR_TAG: 'paperless-gpt-ocr-auto' # Optional, default: paperless-gpt-ocr-auto
OCR_LIMIT_PAGES: '5' # Optional, default: 5. Set to 0 for no limit.
LOG_LEVEL: 'info' # Optional: debug, warn, error
LLM_LANGUAGE: "English" # Optional, default: English
# OCR Configuration - Choose one:
# Option 1: LLM-based OCR
OCR_PROVIDER: "llm" # Default OCR provider
VISION_LLM_PROVIDER: "ollama" # openai or ollama
VISION_LLM_MODEL: "minicpm-v" # minicpm-v (ollama) or gpt-4v (openai)
OLLAMA_HOST: "http://host.docker.internal:11434" # If using Ollama
# Option 2: Google Document AI
# OCR_PROVIDER: 'google_docai' # Use Google Document AI
# GOOGLE_PROJECT_ID: 'your-project' # Your GCP project ID
# GOOGLE_LOCATION: 'us' # Document AI region
# GOOGLE_PROCESSOR_ID: 'processor-id' # Your processor ID
# GOOGLE_APPLICATION_CREDENTIALS: '/app/credentials.json' # Path to service account key
AUTO_OCR_TAG: "paperless-gpt-ocr-auto" # Optional, default: paperless-gpt-ocr-auto
OCR_LIMIT_PAGES: "5" # Optional, default: 5. Set to 0 for no limit.
LOG_LEVEL: "info" # Optional: debug, warn, error
volumes:
- ./prompts:/app/prompts # Mount the prompts directory
- ./prompts:/app/prompts # Mount the prompts directory
# For Google Document AI:
- ${HOME}/.config/gcloud/application_default_credentials.json:/app/credentials.json
ports:
- "8080:8080"
depends_on:
@@ -118,20 +140,21 @@ services:
**Pro Tip**: Replace placeholders with real values and read the logs if something looks off.
#### Manual Setup
1. **Clone the Repository**
1. **Clone the Repository**
```bash
git clone https://github.com/icereed/paperless-gpt.git
cd paperless-gpt
```
2. **Create a `prompts` Directory**
2. **Create a `prompts` Directory**
```bash
mkdir prompts
```
3. **Build the Docker Image**
3. **Build the Docker Image**
```bash
docker build -t paperless-gpt .
```
4. **Run the Container**
4. **Run the Container**
```bash
docker run -d \
-e PAPERLESS_BASE_URL='http://your_paperless_ngx_url' \
@@ -154,38 +177,43 @@ services:
### Environment Variables
**Note:** When using Ollama, ensure that the Ollama server is running and accessible from the paperless-gpt container.
=======
| Variable | Description | Required |
|------------------------|------------------------------------------------------------------------------------------------------------------|----------|
| `PAPERLESS_BASE_URL` | URL of your paperless-ngx instance (e.g. `http://paperless-ngx:8000`). | Yes |
| `PAPERLESS_API_TOKEN` | API token for paperless-ngx. Generate one in paperless-ngx admin. | Yes |
| `PAPERLESS_PUBLIC_URL` | Public URL for Paperless (if different from `PAPERLESS_BASE_URL`). | No |
| `MANUAL_TAG` | Tag for manual processing. Default: `paperless-gpt`. | No |
| `AUTO_TAG` | Tag for auto processing. Default: `paperless-gpt-auto`. | No |
| `LLM_PROVIDER` | AI backend (`openai` or `ollama`). | Yes |
| `LLM_MODEL` | AI model name, e.g. `gpt-4o`, `gpt-3.5-turbo`, `deepseek-r1:8b`. | Yes |
| `OPENAI_API_KEY` | OpenAI API key (required if using OpenAI). | Cond. |
| `OPENAI_BASE_URL` | OpenAI base URL (optional, if using a custom OpenAI compatible service like LiteLLM). | No |
| `LLM_LANGUAGE` | Likely language for documents (e.g. `English`). Default: `English`. | No |
| `OLLAMA_HOST` | Ollama server URL (e.g. `http://host.docker.internal:11434`). | No |
| `VISION_LLM_PROVIDER` | AI backend for OCR (`openai` or `ollama`). | No |
| `VISION_LLM_MODEL` | Model name for OCR (e.g. `minicpm-v`). | No |
| `AUTO_OCR_TAG` | Tag for automatically processing docs with OCR. Default: `paperless-gpt-ocr-auto`. | No |
| `LOG_LEVEL` | Application log level (`info`, `debug`, `warn`, `error`). Default: `info`. | No |
| `LISTEN_INTERFACE` | Network interface to listen on. Default: `:8080`. | No |
| `AUTO_GENERATE_TITLE` | Generate titles automatically if `paperless-gpt-auto` is used. Default: `true`. | No |
| `AUTO_GENERATE_TAGS` | Generate tags automatically if `paperless-gpt-auto` is used. Default: `true`. | No |
| `AUTO_GENERATE_CORRESPONDENTS` | Generate correspondents automatically if `paperless-gpt-auto` is used. Default: `true`. | No |
| `OCR_LIMIT_PAGES` | Limit the number of pages for OCR. Set to `0` for no limit. Default: `5`. | No |
| `TOKEN_LIMIT` | Maximum tokens allowed for prompts/content. Set to `0` to disable limit. Useful for smaller LLMs. | No |
| `CORRESPONDENT_BLACK_LIST` | A comma-separated list of names to exclude from the correspondents suggestions. Example: `John Doe, Jane Smith`.
# **Note:** When using Ollama, ensure that the Ollama server is running and accessible from the paperless-gpt container.
| Variable | Description | Required |
| -------------------------------- | ---------------------------------------------------------------------------------------------------------------- | -------- |
| `PAPERLESS_BASE_URL` | URL of your paperless-ngx instance (e.g. `http://paperless-ngx:8000`). | Yes |
| `PAPERLESS_API_TOKEN` | API token for paperless-ngx. Generate one in paperless-ngx admin. | Yes |
| `PAPERLESS_PUBLIC_URL` | Public URL for Paperless (if different from `PAPERLESS_BASE_URL`). | No |
| `MANUAL_TAG` | Tag for manual processing. Default: `paperless-gpt`. | No |
| `AUTO_TAG` | Tag for auto processing. Default: `paperless-gpt-auto`. | No |
| `LLM_PROVIDER` | AI backend (`openai` or `ollama`). | Yes |
| `LLM_MODEL` | AI model name, e.g. `gpt-4o`, `gpt-3.5-turbo`, `deepseek-r1:8b`. | Yes |
| `OPENAI_API_KEY` | OpenAI API key (required if using OpenAI). | Cond. |
| `OPENAI_BASE_URL` | OpenAI base URL (optional, if using a custom OpenAI compatible service like LiteLLM). | No |
| `LLM_LANGUAGE` | Likely language for documents (e.g. `English`). Default: `English`. | No |
| `OLLAMA_HOST` | Ollama server URL (e.g. `http://host.docker.internal:11434`). | No |
| `OCR_PROVIDER` | OCR provider to use (`llm` or `google_docai`). Default: `llm`. | No |
| `VISION_LLM_PROVIDER` | AI backend for LLM OCR (`openai` or `ollama`). Required if OCR_PROVIDER is `llm`. | Cond. |
| `VISION_LLM_MODEL` | Model name for LLM OCR (e.g. `minicpm-v`). Required if OCR_PROVIDER is `llm`. | Cond. |
| `GOOGLE_PROJECT_ID` | Google Cloud project ID. Required if OCR_PROVIDER is `google_docai`. | Cond. |
| `GOOGLE_LOCATION` | Google Cloud region (e.g. `us`, `eu`). Required if OCR_PROVIDER is `google_docai`. | Cond. |
| `GOOGLE_PROCESSOR_ID` | Document AI processor ID. Required if OCR_PROVIDER is `google_docai`. | Cond. |
| `GOOGLE_APPLICATION_CREDENTIALS` | Path to the mounted Google service account key. Required if OCR_PROVIDER is `google_docai`. | Cond. |
| `AUTO_OCR_TAG` | Tag for automatically processing docs with OCR. Default: `paperless-gpt-ocr-auto`. | No |
| `LOG_LEVEL` | Application log level (`info`, `debug`, `warn`, `error`). Default: `info`. | No |
| `LISTEN_INTERFACE` | Network interface to listen on. Default: `:8080`. | No |
| `AUTO_GENERATE_TITLE` | Generate titles automatically if `paperless-gpt-auto` is used. Default: `true`. | No |
| `AUTO_GENERATE_TAGS` | Generate tags automatically if `paperless-gpt-auto` is used. Default: `true`. | No |
| `AUTO_GENERATE_CORRESPONDENTS` | Generate correspondents automatically if `paperless-gpt-auto` is used. Default: `true`. | No |
| `OCR_LIMIT_PAGES` | Limit the number of pages for OCR. Set to `0` for no limit. Default: `5`. | No |
| `TOKEN_LIMIT` | Maximum tokens allowed for prompts/content. Set to `0` to disable limit. Useful for smaller LLMs. | No |
| `CORRESPONDENT_BLACK_LIST` | A comma-separated list of names to exclude from the correspondents suggestions. Example: `John Doe, Jane Smith`. | No |
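The three `GOOGLE_*` settings in the table combine into the fully qualified processor resource name that the Document AI API expects (`projects/{project}/locations/{location}/processors/{processor}`). A minimal sketch of that assembly, with placeholder values from the compose example above:

```go
package main

import "fmt"

// processorName builds the fully qualified Document AI processor resource
// name from GOOGLE_PROJECT_ID, GOOGLE_LOCATION, and GOOGLE_PROCESSOR_ID.
func processorName(projectID, location, processorID string) string {
	return fmt.Sprintf("projects/%s/locations/%s/processors/%s",
		projectID, location, processorID)
}

func main() {
	// Placeholder values; substitute your own GCP settings.
	fmt.Println(processorName("your-project", "us", "processor-id"))
}
```

The service account key mounted at `GOOGLE_APPLICATION_CREDENTIALS` is picked up automatically by Google's client libraries, so no credential handling appears here.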
### Custom Prompt Templates
paperless-gpt's flexible **prompt templates** let you shape how AI responds:
1. **`title_prompt.tmpl`**: For document titles.
1. **`title_prompt.tmpl`**: For document titles.
2. **`tag_prompt.tmpl`**: For tagging logic.
3. **`ocr_prompt.tmpl`**: For LLM OCR.
4. **`correspondent_prompt.tmpl`**: For correspondent identification.
@@ -193,8 +221,8 @@ paperless-gpt's flexible **prompt templates** let you shape how AI responds:
Mount them into your container via:
```yaml
volumes:
- ./prompts:/app/prompts
volumes:
- ./prompts:/app/prompts
```
Then tweak at will—**paperless-gpt** reloads them automatically on startup!
@@ -204,11 +232,13 @@ Then tweak at will—**paperless-gpt** reloads them automatically on startup!
Each template has access to specific variables:
**title_prompt.tmpl**:
- `{{.Language}}` - Target language (e.g., "English")
- `{{.Content}}` - Document content text
- `{{.Title}}` - Original document title
**tag_prompt.tmpl**:
- `{{.Language}}` - Target language
- `{{.AvailableTags}}` - List of existing tags in paperless-ngx
- `{{.OriginalTags}}` - Document's current tags
@@ -216,9 +246,11 @@ Each template has access to specific variables:
- `{{.Content}}` - Document content text
**ocr_prompt.tmpl**:
- `{{.Language}}` - Target language
**correspondent_prompt.tmpl**:
- `{{.Language}}` - Target language
- `{{.AvailableCorrespondents}}` - List of existing correspondents
- `{{.BlackList}}` - List of blacklisted correspondent names
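Since the templates use Go's `text/template` syntax, the variables above are resolved by ordinary template execution. A self-contained sketch rendering a title prompt (the template string here is illustrative, not the shipped default):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// promptData mirrors the variables available to title_prompt.tmpl.
type promptData struct {
	Language string
	Content  string
	Title    string
}

// renderPrompt parses and executes a template string against the data,
// the same mechanism used for the mounted prompt files.
func renderPrompt(tmpl string, data promptData) (string, error) {
	t, err := template.New("prompt").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderPrompt(
		"Suggest a {{.Language}} title for: {{.Content}} (was: {{.Title}})",
		promptData{Language: "English", Content: "Invoice from FedEx", Title: "scan_001.pdf"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

A malformed template fails at `Parse` rather than silently rendering, which is why a syntax error in a mounted `.tmpl` file shows up immediately at startup.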
@@ -231,18 +263,21 @@ The templates use Go's text/template syntax. paperless-gpt automatically reloads
## Usage
1. **Tag Documents**
1. **Tag Documents**
- Add `paperless-gpt` or your custom tag to the docs you want to AI-ify.
2. **Visit Web UI**
2. **Visit Web UI**
- Go to `http://localhost:8080` (or your host) in your browser.
3. **Generate & Apply Suggestions**
3. **Generate & Apply Suggestions**
- Click “Generate Suggestions” to see AI-proposed titles/tags/correspondents.
- Approve, edit, or discard. Hit “Apply” to finalize in paperless-ngx.
4. **Try LLM-Based OCR (Experimental)**
- If you enabled `VISION_LLM_PROVIDER` and `VISION_LLM_MODEL`, let AI-based OCR read your scanned PDFs.
4. **Try LLM-Based OCR (Experimental)**
- If you enabled `VISION_LLM_PROVIDER` and `VISION_LLM_MODEL`, let AI-based OCR read your scanned PDFs.
- Tag those documents with `paperless-gpt-ocr-auto` (or your custom `AUTO_OCR_TAG`).
**Tip**: The entire pipeline can be **fully automated** if you prefer minimal manual intervention.
@@ -261,6 +296,7 @@ The templates use Go's text/template syntax. paperless-gpt automatically reloads
![Image](demo/ocr-example1.jpg)
**Vanilla Paperless-ngx OCR**:
```
La Grande Recre
@@ -278,6 +314,7 @@ HERET ET A BIENTOT
```
**LLM-Powered OCR (OpenAI gpt-4o)**:
```
La Grande Récré
Centre Commercial l'Esplanade
@@ -302,6 +339,7 @@ MERCI ET A BIENTOT
![Image](demo/ocr-example2.jpg)
**Vanilla Paperless-ngx OCR**:
```
Invoice Number: 1-996-84199
@@ -363,6 +401,7 @@ PALATINE IL 60094-4515
```
**LLM-Powered OCR (OpenAI gpt-4o)**:
```
FedEx. Invoice Number: 1-996-84199
Invoice Date: Sep 01, 2014
@@ -433,19 +472,18 @@ P.O. Box 94515
```
---
</details>
**Why Does It Matter?**
- Traditional OCR often jumbles text from complex or low-quality scans.
- Large Language Models interpret context and correct likely errors, producing results that are more precise and readable.
**Why Does It Matter?**
- Traditional OCR often jumbles text from complex or low-quality scans.
- Large Language Models interpret context and correct likely errors, producing results that are more precise and readable.
- You can integrate these cleaned-up texts into your **paperless-ngx** pipeline for better tagging, searching, and archiving.
### How It Works
- **Vanilla OCR** typically uses classical methods or Tesseract-like engines to extract text, which can result in garbled outputs for complex fonts or poor-quality scans.
- **Vanilla OCR** typically uses classical methods or Tesseract-like engines to extract text, which can result in garbled outputs for complex fonts or poor-quality scans.
- **LLM-Powered OCR** uses your chosen AI backend—OpenAI or Ollama—to interpret the image's text in a more context-aware manner. This leads to fewer errors and more coherent text.
---
@@ -457,30 +495,34 @@ P.O. Box 94515
When using local LLMs (like those through Ollama), you might need to adjust certain settings to optimize performance:
#### Token Management
- Use `TOKEN_LIMIT` environment variable to control the maximum number of tokens sent to the LLM
- Smaller models might truncate content unexpectedly if given too much text
- Start with a conservative limit (e.g., 1000 tokens) and adjust based on your model's capabilities
- Set to `0` to disable the limit (use with caution)
Example configuration for smaller models:
```yaml
environment:
TOKEN_LIMIT: '2000' # Adjust based on your model's context window
LLM_PROVIDER: 'ollama'
LLM_MODEL: 'deepseek-r1:8b' # Or other local model
TOKEN_LIMIT: "2000" # Adjust based on your model's context window
LLM_PROVIDER: "ollama"
LLM_MODEL: "deepseek-r1:8b" # Or other local model
```
Common issues and solutions:
- If you see truncated or incomplete responses, try lowering the `TOKEN_LIMIT`
- If processing is too limited, gradually increase the limit while monitoring performance
- For models with larger context windows, you can increase the limit or disable it entirely
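The `TOKEN_LIMIT` behavior described above can be sketched as follows. This is an illustrative approximation, not paperless-gpt's actual implementation, and it assumes the common rough heuristic of about 4 characters per token:

```go
package main

import "fmt"

// truncateForTokenLimit trims content to roughly tokenLimit tokens using a
// ~4-characters-per-token heuristic. A limit of 0 disables truncation,
// matching the documented TOKEN_LIMIT=0 behavior. Byte-based slicing is
// fine for ASCII; a real implementation would respect UTF-8 boundaries.
func truncateForTokenLimit(content string, tokenLimit int) string {
	if tokenLimit <= 0 {
		return content
	}
	maxChars := tokenLimit * 4
	if len(content) <= maxChars {
		return content
	}
	return content[:maxChars]
}

func main() {
	doc := "A long scanned document body that exceeds the budget"
	fmt.Println(truncateForTokenLimit(doc, 2)) // keeps roughly 8 characters
}
```

This is why a limit that is too low produces abruptly cut-off prompts: the tail of the document simply never reaches the model.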
## Contributing
**Pull requests** and **issues** are welcome!
1. Fork the repo
2. Create a branch (`feature/my-awesome-update`)
3. Commit changes (`git commit -m "Improve X"`)
**Pull requests** and **issues** are welcome!
1. Fork the repo
2. Create a branch (`feature/my-awesome-update`)
3. Commit changes (`git commit -m "Improve X"`)
4. Open a PR
Check out our [contributing guidelines](CONTRIBUTING.md) for details.
@@ -494,11 +536,13 @@ paperless-gpt is licensed under the [MIT License](LICENSE). Feel free to adapt a
---
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=icereed/paperless-gpt&type=Date)](https://star-history.com/#icereed/paperless-gpt&Date)
---
## Disclaimer
This project is **not** officially affiliated with [paperless-ngx][paperless-ngx]. Use at your own risk.
---


@@ -11,13 +11,13 @@ import (
"github.com/gin-gonic/gin"
)
//go:embed dist/*
//go:embed web-app/dist/*
var webappContent embed.FS
// CreateEmbeddedFileServer creates a http.FileSystem from our embedded files
func createEmbeddedFileServer() http.FileSystem {
// Strip the "dist" prefix from the embedded files
stripped, err := fs.Sub(webappContent, "dist")
stripped, err := fs.Sub(webappContent, "web-app/dist")
if err != nil {
panic(err)
}
@@ -32,7 +32,7 @@ func serveEmbeddedFile(c *gin.Context, prefix string, filepath string) {
}
// Try to open the file from our embedded filesystem
fullPath := path.Join("dist", prefix, filepath)
fullPath := path.Join("web-app/dist", prefix, filepath)
f, err := webappContent.Open(fullPath)
if err != nil {
// If file not found, serve 404

go.mod

@@ -5,8 +5,10 @@ go 1.22.0
toolchain go1.23.6
require (
cloud.google.com/go/documentai v1.35.1
github.com/Masterminds/sprig/v3 v3.3.0
github.com/fatih/color v1.18.0
github.com/gabriel-vasile/mimetype v1.4.3
github.com/gen2brain/go-fitz v1.24.14
github.com/gin-gonic/gin v1.10.0
github.com/google/uuid v1.6.0
@@ -14,11 +16,17 @@ require (
github.com/stretchr/testify v1.10.0
github.com/tmc/langchaingo v0.1.13-pre.1
golang.org/x/sync v0.11.0
google.golang.org/api v0.214.0
gorm.io/driver/sqlite v1.5.7
gorm.io/gorm v1.25.12
)
require (
cloud.google.com/go v0.116.0 // indirect
cloud.google.com/go/auth v0.13.0 // indirect
cloud.google.com/go/auth/oauth2adapt v0.2.6 // indirect
cloud.google.com/go/compute/metadata v0.6.0 // indirect
cloud.google.com/go/longrunning v0.6.2 // indirect
dario.cat/mergo v1.0.1 // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/semver/v3 v3.3.0 // indirect
@@ -29,12 +37,17 @@ require (
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/dlclark/regexp2 v1.10.0 // indirect
github.com/ebitengine/purego v0.8.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.3 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.20.0 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/google/s2a-go v0.1.8 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.4 // indirect
github.com/googleapis/gax-go/v2 v2.14.0 // indirect
github.com/huandu/xstrings v1.5.0 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
@@ -61,11 +74,22 @@ require (
gitlab.com/golang-commonmark/markdown v0.0.0-20211110145824-bf3e522c626a // indirect
gitlab.com/golang-commonmark/mdurl v0.0.0-20191124015652-932350d1cb84 // indirect
gitlab.com/golang-commonmark/puny v0.0.0-20191124015043-9f83538fa04f // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0 // indirect
go.opentelemetry.io/otel v1.29.0 // indirect
go.opentelemetry.io/otel/metric v1.29.0 // indirect
go.opentelemetry.io/otel/trace v1.29.0 // indirect
golang.org/x/arch v0.8.0 // indirect
golang.org/x/crypto v0.29.0 // indirect
golang.org/x/net v0.25.0 // indirect
golang.org/x/sys v0.27.0 // indirect
golang.org/x/text v0.20.0 // indirect
google.golang.org/protobuf v1.34.1 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/net v0.33.0 // indirect
golang.org/x/oauth2 v0.24.0 // indirect
golang.org/x/sys v0.28.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.8.0 // indirect
google.golang.org/genproto v0.0.0-20241118233622-e639e219e697 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20241118233622-e639e219e697 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576 // indirect
google.golang.org/grpc v1.67.3 // indirect
google.golang.org/protobuf v1.35.2 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

go.sum

@@ -1,3 +1,15 @@
cloud.google.com/go v0.116.0 h1:B3fRrSDkLRt5qSHWe40ERJvhvnQwdZiHu0bJOpldweE=
cloud.google.com/go v0.116.0/go.mod h1:cEPSRWPzZEswwdr9BxE6ChEn01dWlTaF05LiC2Xs70U=
cloud.google.com/go/auth v0.13.0 h1:8Fu8TZy167JkW8Tj3q7dIkr2v4cndv41ouecJx0PAHs=
cloud.google.com/go/auth v0.13.0/go.mod h1:COOjD9gwfKNKz+IIduatIhYJQIc0mG3H102r/EMxX6Q=
cloud.google.com/go/auth/oauth2adapt v0.2.6 h1:V6a6XDu2lTwPZWOawrAa9HUK+DB2zfJyTuciBG5hFkU=
cloud.google.com/go/auth/oauth2adapt v0.2.6/go.mod h1:AlmsELtlEBnaNTL7jCj8VQFLy6mbZv0s4Q7NGBeQ5E8=
cloud.google.com/go/compute/metadata v0.6.0 h1:A6hENjEsCDtC1k8byVsgwvVcioamEHvZ4j01OwKxG9I=
cloud.google.com/go/compute/metadata v0.6.0/go.mod h1:FjyFAW1MW0C203CEOMDTu3Dk1FlqW3Rga40jzHL4hfg=
cloud.google.com/go/documentai v1.35.1 h1:52RfiUsoblXcE57CfKJGnITWLxRM30BcqNk/BKZl2LI=
cloud.google.com/go/documentai v1.35.1/go.mod h1:WJjwUAQfwQPJORW8fjz7RODprMULDzEGLA2E6WxenFw=
cloud.google.com/go/longrunning v0.6.2 h1:xjDfh1pQcWPEvnfjZmwjKQEcHnpz6lHjfy7Fo0MK+hc=
cloud.google.com/go/longrunning v0.6.2/go.mod h1:k/vIs83RN4bE3YCswdXC5PFfWVILjm3hpEUlSko4PiI=
dario.cat/mergo v1.0.1 h1:Ra4+bf83h2ztPIQYNP99R6m+Y7KfnARDfID+a+vLl4s=
dario.cat/mergo v1.0.1/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI=
@@ -23,6 +35,8 @@ github.com/ebitengine/purego v0.8.0 h1:JbqvnEzRvPpxhCJzJJ2y0RbiZ8nyjccVUrSM3q+Gv
github.com/ebitengine/purego v0.8.0/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0=
@@ -33,6 +47,11 @@ github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
github.com/gin-gonic/gin v1.10.0 h1:nTuyha1TYqgedzytsKYqna+DfLos46nTv2ygFy86HFU=
github.com/gin-gonic/gin v1.10.0/go.mod h1:4PMNQiOhvDRa013RKVbsiNwoyezlm2rm0uX/T7kzp5Y=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
@@ -46,8 +65,14 @@ github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MG
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/s2a-go v0.1.8 h1:zZDs9gcbt9ZPLV0ndSyQk6Kacx2g/X+SKYovpnz3SMM=
github.com/google/s2a-go v0.1.8/go.mod h1:6iNWHTpQ+nfNRN5E00MSdfDwVesa8hhS32PhPO8deJA=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/enterprise-certificate-proxy v0.3.4 h1:XYIDZApgAnrN1c855gTgghdIA6Stxb52D5RnLI1SLyw=
github.com/googleapis/enterprise-certificate-proxy v0.3.4/go.mod h1:YKe7cfqYXjKGpGvmSg28/fFvhNzinZQm8DGnaburhGA=
github.com/googleapis/gax-go/v2 v2.14.0 h1:f+jMrjBPl+DL9nI4IQzLUxMq7XrAqFYB7hBPqMNIe8o=
github.com/googleapis/gax-go/v2 v2.14.0/go.mod h1:lhBCnjdLrWRaPvLWhmc8IS24m9mr07qSYnHncrgo+zk=
github.com/huandu/xstrings v1.5.0 h1:2ag3IFq9ZDANvthTwTiqSSZLjDc+BedvHPAp5tJy2TI=
github.com/huandu/xstrings v1.5.0/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
@@ -113,10 +138,6 @@ github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXl
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tmc/langchaingo v0.1.12 h1:yXwSu54f3b1IKw0jJ5/DWu+qFVH1NBblwC0xddBzGJE=
github.com/tmc/langchaingo v0.1.12/go.mod h1:cd62xD6h+ouk8k/QQFhOsjRYBSA1JJ5UVKXSIgm7Ni4=
github.com/tmc/langchaingo v0.1.13-pre.0.0.20250202074804-0672790bb23a h1:uEmyBuBfueLWqdvxHYi9/smSb1BfHfXJpDjJAGI38A4=
github.com/tmc/langchaingo v0.1.13-pre.0.0.20250202074804-0672790bb23a/go.mod h1:vpQ5NOIhpzxDfTZK9B6tf2GM/MoaHewPWM5KXXGh7hg=
github.com/tmc/langchaingo v0.1.13-pre.1 h1:r+ma9kl0NuFJGtIrnMPFjEn4RhXktwSI31fIpgiiMm4=
github.com/tmc/langchaingo v0.1.13-pre.1/go.mod h1:vpQ5NOIhpzxDfTZK9B6tf2GM/MoaHewPWM5KXXGh7hg=
github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
@@ -135,33 +156,51 @@ gitlab.com/golang-commonmark/puny v0.0.0-20191124015043-9f83538fa04f h1:Wku8eEde
gitlab.com/golang-commonmark/puny v0.0.0-20191124015043-9f83538fa04f/go.mod h1:Tiuhl+njh/JIg0uS/sOJVYi0x2HEa5rc1OAaVsb5tAs=
gitlab.com/opennota/wd v0.0.0-20180912061657-c5d65f63c638 h1:uPZaMiz6Sz0PZs3IZJWpU5qHKGNy///1pacZC9txiUI=
gitlab.com/opennota/wd v0.0.0-20180912061657-c5d65f63c638/go.mod h1:EGRJaqe2eO9XGmFtQCvV3Lm9NLico3UhFwUpCG/+mVU=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0 h1:r6I7RJCN86bpD/FQwedZ0vSixDpwuWREjW9oRMsmqDc=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0/go.mod h1:B9yO6b04uB80CzjedvewuqDhxJxi11s7/GtiGa8bAjI=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0 h1:TT4fX+nBOA/+LUkobKGW1ydGcn+G3vRw9+g5HwCphpk=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0/go.mod h1:L7UH0GbB0p47T4Rri3uHjbpCFYrVrwc1I25QhNPiGK8=
go.opentelemetry.io/otel v1.29.0 h1:PdomN/Al4q/lN6iBJEN3AwPvUiHPMlt93c8bqTG5Llw=
go.opentelemetry.io/otel v1.29.0/go.mod h1:N/WtXPs1CNCUEx+Agz5uouwCba+i+bJGFicT8SR4NP8=
go.opentelemetry.io/otel/metric v1.29.0 h1:vPf/HFWTNkPu1aYeIsc98l4ktOQaL6LeSoeV2g+8YLc=
go.opentelemetry.io/otel/metric v1.29.0/go.mod h1:auu/QWieFVWx+DmQOUMgj0F8LHWdgalxXqvp7BII/W8=
go.opentelemetry.io/otel/trace v1.29.0 h1:J/8ZNK4XgR7a21DZUAsbF8pZ5Jcw1VhACmnYt39JTi4=
go.opentelemetry.io/otel/trace v1.29.0/go.mod h1:eHl3w0sp3paPkYstJOmAimxhiFXPg+MMTlEh3nsQgWQ=
golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/arch v0.8.0 h1:3wRIsP3pM4yUptoR96otTUOXI367OS0+c9eeRi9doIc=
golang.org/x/arch v0.8.0/go.mod h1:FEVrYAQjsQXMVJ1nsMoVVXPZg6p2JE2mx8psSWTDQys=
golang.org/x/crypto v0.26.0 h1:RrRspgV4mU+YwB4FYnuBoKsUapNIL5cohGAmSH3azsw=
golang.org/x/crypto v0.26.0/go.mod h1:GY7jblb9wI+FOo5y8/S2oY4zWP07AkOJ4+jxCqdqn54=
golang.org/x/crypto v0.29.0 h1:L5SG1JTTXupVV3n6sUqMTeWbjAyfPwoda2DLX8J8FrQ=
golang.org/x/crypto v0.29.0/go.mod h1:+F4F4N5hv6v38hfeYwTdx20oUvLLc+QfrE9Ax9HtgRg=
golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/net v0.33.0 h1:74SYHlV8BIgHIFC/LrYkOGIwL19eTYXQ5wc6TBuO36I=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/oauth2 v0.24.0 h1:KTBBxWqUa0ykRPLtV69rRto9TLXcqYkeswu48x/gvNE=
golang.org/x/oauth2 v0.24.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.11.0 h1:GGz8+XQP4FvTTrjZPzNKTMFtSXH80RAzG+5ghFPgK9w=
golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.25.0 h1:r+8e+loiHxRqhXVl6ML1nO3l1+oFoWbnlu2Ehimmi34=
golang.org/x/sys v0.25.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.27.0 h1:wBqf8DvsY9Y/2P8gAfPDEYNuS30J4lPHJxXSb/nJZ+s=
golang.org/x/sys v0.27.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.20.0 h1:gK/Kv2otX8gz+wn7Rmb3vT96ZwuoxnQlY+HlJVj7Qug=
golang.org/x/text v0.20.0/go.mod h1:D4IsuqiFMhST5bX19pQ9ikHC2GsaKyk/oF+pn3ducp4=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/time v0.8.0 h1:9i3RxcPv3PZnitoVGMPDKZSq1xW1gK1Xy3ArNOGZfEg=
golang.org/x/time v0.8.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
google.golang.org/protobuf v1.34.1 h1:9ddQBjfCyZPOHPUiPxpYESBLc+T8P3E+Vo4IbKZgFWg=
google.golang.org/protobuf v1.34.1/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
google.golang.org/api v0.214.0 h1:h2Gkq07OYi6kusGOaT/9rnNljuXmqPnaig7WGPmKbwA=
google.golang.org/api v0.214.0/go.mod h1:bYPpLG8AyeMWwDU6NXoB00xC0DFkikVvd5MfwoxjLqE=
google.golang.org/genproto v0.0.0-20241118233622-e639e219e697 h1:ToEetK57OidYuqD4Q5w+vfEnPvPpuTwedCNVohYJfNk=
google.golang.org/genproto v0.0.0-20241118233622-e639e219e697/go.mod h1:JJrvXBWRZaFMxBufik1a4RpFw4HhgVtBBWQeQgUj2cc=
google.golang.org/genproto/googleapis/api v0.0.0-20241118233622-e639e219e697 h1:pgr/4QbFyktUv9CtQ/Fq4gzEE6/Xs7iCXbktaGzLHbQ=
google.golang.org/genproto/googleapis/api v0.0.0-20241118233622-e639e219e697/go.mod h1:+D9ySVjN8nY8YCVjc5O7PZDIdZporIDY3KaGfJunh88=
google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576 h1:8ZmaLZE4XWrtU3MyClkYqqtl6Oegr3235h7jxsDyqCY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576/go.mod h1:5uTbfoYQed2U9p3KIj2/Zzm02PYhndfdmML0qC3q3FU=
google.golang.org/grpc v1.67.3 h1:OgPcDAFKHnH8X3O4WcO4XUc8GRDeKsKReqbQtiCj7N8=
google.golang.org/grpc v1.67.3/go.mod h1:YGaHCc6Oap+FzBJTZLBzkGSYt/cvGPFTPxkn7QfSU8s=
google.golang.org/protobuf v1.35.2 h1:8Ar7bF+apOIoThw1EdZl0p1oWvMqTHmpA2fRTyZO8io=
google.golang.org/protobuf v1.35.2/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=

main.go

@@ -5,6 +5,7 @@ import (
"fmt"
"net/http"
"os"
"paperless-gpt/ocr"
"path/filepath"
"runtime"
"strconv"
@@ -113,10 +114,11 @@ Document Content:
// App struct to hold dependencies
type App struct {
Client *PaperlessClient
Database *gorm.DB
LLM llms.Model
VisionLLM llms.Model
Client *PaperlessClient
Database *gorm.DB
LLM llms.Model
VisionLLM llms.Model
ocrProvider ocr.Provider // OCR provider interface
}
func main() {
@@ -150,12 +152,39 @@ func main() {
log.Fatalf("Failed to create Vision LLM client: %v", err)
}
// Initialize OCR provider
var ocrProvider ocr.Provider
providerType := os.Getenv("OCR_PROVIDER")
if providerType == "" {
providerType = "llm" // Default to LLM provider
}
ocrConfig := ocr.Config{
Provider: providerType,
GoogleProjectID: os.Getenv("GOOGLE_PROJECT_ID"),
GoogleLocation: os.Getenv("GOOGLE_LOCATION"),
GoogleProcessorID: os.Getenv("GOOGLE_PROCESSOR_ID"),
VisionLLMProvider: visionLlmProvider,
VisionLLMModel: visionLlmModel,
}
// If provider is LLM, but no VISION_LLM_PROVIDER is set, don't initialize OCR provider
if providerType == "llm" && visionLlmProvider == "" {
log.Warn("OCR provider is set to LLM, but no VISION_LLM_PROVIDER is set. Disabling OCR.")
} else {
ocrProvider, err = ocr.NewProvider(ocrConfig)
if err != nil {
log.Fatalf("Failed to initialize OCR provider: %v", err)
}
}
// Initialize App with dependencies
app := &App{
Client: client,
Database: database,
LLM: llm,
VisionLLM: visionLlm,
Client: client,
Database: database,
LLM: llm,
VisionLLM: visionLlm,
ocrProvider: ocrProvider,
}
// Start background process for auto-tagging
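The selection logic above defaults to the LLM provider and disables OCR entirely when that path has no `VISION_LLM_PROVIDER` configured. A self-contained sketch of that decision (the helper function is hypothetical; the real code inlines this in `main`):

```go
package main

import "fmt"

// resolveOCRProvider is a hypothetical helper mirroring the selection above:
// default to "llm", and report whether OCR should be enabled at all.
func resolveOCRProvider(ocrProvider, visionLLMProvider string) (string, bool) {
	if ocrProvider == "" {
		ocrProvider = "llm" // OCR_PROVIDER unset: default to the LLM provider
	}
	// The LLM path needs VISION_LLM_PROVIDER; without it, OCR stays disabled.
	if ocrProvider == "llm" && visionLLMProvider == "" {
		return ocrProvider, false
	}
	return ocrProvider, true
}

func main() {
	p, enabled := resolveOCRProvider("", "")
	fmt.Println(p, enabled) // llm false
	p, enabled = resolveOCRProvider("google_docai", "")
	fmt.Println(p, enabled) // google_docai true
}
```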

ocr.go

@@ -36,7 +36,7 @@ func (app *App) ProcessDocumentOCR(ctx context.Context, documentID int) (string,
return "", fmt.Errorf("error reading image file for document %d, page %d: %w", documentID, i+1, err)
}
ocrText, err := app.doOCRViaLLM(ctx, imageContent, pageLogger)
ocrText, err := app.ocrProvider.ProcessImage(ctx, imageContent)
if err != nil {
return "", fmt.Errorf("error performing OCR for document %d, page %d: %w", documentID, i+1, err)
}

ocr/google_docai_provider.go (new file)

@@ -0,0 +1,118 @@
package ocr
import (
"context"
"fmt"
documentai "cloud.google.com/go/documentai/apiv1"
"cloud.google.com/go/documentai/apiv1/documentaipb"
"github.com/gabriel-vasile/mimetype"
"github.com/sirupsen/logrus"
"google.golang.org/api/option"
)
// GoogleDocAIProvider implements OCR using Google Document AI
type GoogleDocAIProvider struct {
projectID string
location string
processorID string
client *documentai.DocumentProcessorClient
}
func newGoogleDocAIProvider(config Config) (*GoogleDocAIProvider, error) {
logger := log.WithFields(logrus.Fields{
"location": config.GoogleLocation,
"processor_id": config.GoogleProcessorID,
})
logger.Info("Creating new Google Document AI provider")
ctx := context.Background()
endpoint := fmt.Sprintf("%s-documentai.googleapis.com:443", config.GoogleLocation)
client, err := documentai.NewDocumentProcessorClient(ctx, option.WithEndpoint(endpoint))
if err != nil {
logger.WithError(err).Error("Failed to create Document AI client")
return nil, fmt.Errorf("error creating Document AI client: %w", err)
}
provider := &GoogleDocAIProvider{
projectID: config.GoogleProjectID,
location: config.GoogleLocation,
processorID: config.GoogleProcessorID,
client: client,
}
logger.Info("Successfully initialized Google Document AI provider")
return provider, nil
}
func (p *GoogleDocAIProvider) ProcessImage(ctx context.Context, imageContent []byte) (string, error) {
logger := log.WithFields(logrus.Fields{
"project_id": p.projectID,
"location": p.location,
"processor_id": p.processorID,
})
logger.Debug("Starting Document AI processing")
// Detect MIME type
mtype := mimetype.Detect(imageContent)
logger.WithField("mime_type", mtype.String()).Debug("Detected file type")
if !isImageMIMEType(mtype.String()) {
logger.WithField("mime_type", mtype.String()).Error("Unsupported file type")
return "", fmt.Errorf("unsupported file type: %s", mtype.String())
}
name := fmt.Sprintf("projects/%s/locations/%s/processors/%s", p.projectID, p.location, p.processorID)
req := &documentaipb.ProcessRequest{
Name: name,
Source: &documentaipb.ProcessRequest_RawDocument{
RawDocument: &documentaipb.RawDocument{
Content: imageContent,
MimeType: mtype.String(),
},
},
}
logger.Debug("Sending request to Document AI")
resp, err := p.client.ProcessDocument(ctx, req)
if err != nil {
logger.WithError(err).Error("Failed to process document")
return "", fmt.Errorf("error processing document: %w", err)
}
if resp == nil || resp.Document == nil {
logger.Error("Received nil response or document from Document AI")
return "", fmt.Errorf("received nil response or document from Document AI")
}
if resp.Document.Error != nil {
logger.WithField("error", resp.Document.Error.Message).Error("Document processing error")
return "", fmt.Errorf("document processing error: %s", resp.Document.Error.Message)
}
logger.WithField("content_length", len(resp.Document.Text)).Info("Successfully processed document")
return resp.Document.Text, nil
}
// isImageMIMEType checks if the given MIME type is a supported image type
func isImageMIMEType(mimeType string) bool {
supportedTypes := map[string]bool{
"image/jpeg": true,
"image/jpg": true,
"image/png": true,
"image/tiff": true,
"image/bmp": true,
"application/pdf": true,
}
return supportedTypes[mimeType]
}
// Close releases resources used by the provider
func (p *GoogleDocAIProvider) Close() error {
if p.client != nil {
return p.client.Close()
}
return nil
}

ocr/llm_provider.go (new file)

@@ -0,0 +1,140 @@
package ocr
import (
"bytes"
"context"
"encoding/base64"
"fmt"
"image"
"os"
"strings"
_ "image/jpeg"
"github.com/sirupsen/logrus"
"github.com/tmc/langchaingo/llms"
"github.com/tmc/langchaingo/llms/ollama"
"github.com/tmc/langchaingo/llms/openai"
)
// LLMProvider implements OCR using LLM vision models
type LLMProvider struct {
provider string
model string
llm llms.Model
template string // OCR prompt template
}
func newLLMProvider(config Config) (*LLMProvider, error) {
logger := log.WithFields(logrus.Fields{
"provider": config.VisionLLMProvider,
"model": config.VisionLLMModel,
})
logger.Info("Creating new LLM OCR provider")
var model llms.Model
var err error
switch strings.ToLower(config.VisionLLMProvider) {
case "openai":
logger.Debug("Initializing OpenAI vision model")
model, err = createOpenAIClient(config)
case "ollama":
logger.Debug("Initializing Ollama vision model")
model, err = createOllamaClient(config)
default:
return nil, fmt.Errorf("unsupported vision LLM provider: %s", config.VisionLLMProvider)
}
if err != nil {
logger.WithError(err).Error("Failed to create vision LLM client")
return nil, fmt.Errorf("error creating vision LLM client: %w", err)
}
logger.Info("Successfully initialized LLM OCR provider")
return &LLMProvider{
provider: config.VisionLLMProvider,
model: config.VisionLLMModel,
llm: model,
template: defaultOCRPrompt,
}, nil
}
func (p *LLMProvider) ProcessImage(ctx context.Context, imageContent []byte) (string, error) {
logger := log.WithFields(logrus.Fields{
"provider": p.provider,
"model": p.model,
})
logger.Debug("Starting OCR processing")
// Log the image dimensions
img, _, err := image.Decode(bytes.NewReader(imageContent))
if err != nil {
logger.WithError(err).Error("Failed to decode image")
return "", fmt.Errorf("error decoding image: %w", err)
}
bounds := img.Bounds()
logger.WithFields(logrus.Fields{
"width": bounds.Dx(),
"height": bounds.Dy(),
}).Debug("Image dimensions")
// Prepare content parts based on provider type
var parts []llms.ContentPart
if strings.ToLower(p.provider) != "openai" {
logger.Debug("Using binary image format for non-OpenAI provider")
parts = []llms.ContentPart{
llms.BinaryPart("image/jpeg", imageContent),
llms.TextPart(p.template),
}
} else {
logger.Debug("Using base64 image format for OpenAI provider")
base64Image := base64.StdEncoding.EncodeToString(imageContent)
parts = []llms.ContentPart{
llms.ImageURLPart(fmt.Sprintf("data:image/jpeg;base64,%s", base64Image)),
llms.TextPart(p.template),
}
}
// Convert the image to text
logger.Debug("Sending request to vision model")
completion, err := p.llm.GenerateContent(ctx, []llms.MessageContent{
{
Parts: parts,
Role: llms.ChatMessageTypeHuman,
},
})
if err != nil {
logger.WithError(err).Error("Failed to get response from vision model")
return "", fmt.Errorf("error getting response from LLM: %w", err)
}
logger.WithField("content_length", len(completion.Choices[0].Content)).Info("Successfully processed image")
return completion.Choices[0].Content, nil
}
// createOpenAIClient creates a new OpenAI vision model client
func createOpenAIClient(config Config) (llms.Model, error) {
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
return nil, fmt.Errorf("OpenAI API key is not set")
}
return openai.New(
openai.WithModel(config.VisionLLMModel),
openai.WithToken(apiKey),
)
}
// createOllamaClient creates a new Ollama vision model client
func createOllamaClient(config Config) (llms.Model, error) {
host := os.Getenv("OLLAMA_HOST")
if host == "" {
host = "http://127.0.0.1:11434"
}
return ollama.New(
ollama.WithModel(config.VisionLLMModel),
ollama.WithServerURL(host),
)
}
const defaultOCRPrompt = `Just transcribe the text in this image and preserve the formatting and layout (high quality OCR). Do that for ALL the text in the image. Be thorough and pay attention. This is very important. The image is from a text document so be sure to continue until the bottom of the page. Thanks a lot! You tend to forget about some text in the image so please focus! Use markdown format but without a code block.`
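On the OpenAI branch above, the image is inlined as a base64 data URL rather than sent as a binary part. A minimal sketch of that encoding step (the MIME type is hard-coded to `image/jpeg`, just as in the provider):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// toDataURL reproduces the encoding used for the OpenAI vision request.
func toDataURL(imageContent []byte) string {
	return fmt.Sprintf("data:image/jpeg;base64,%s",
		base64.StdEncoding.EncodeToString(imageContent))
}

func main() {
	// 0xFF 0xD8 0xFF is the JPEG start-of-image marker.
	fmt.Println(toDataURL([]byte{0xFF, 0xD8, 0xFF})) // data:image/jpeg;base64,/9j/
}
```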

ocr/provider.go (new file)

@@ -0,0 +1,65 @@
package ocr
import (
"context"
"fmt"
"github.com/sirupsen/logrus"
)
var log = logrus.New()
// Provider defines the interface for OCR processing
type Provider interface {
ProcessImage(ctx context.Context, imageContent []byte) (string, error)
}
// Config holds the OCR provider configuration
type Config struct {
// Provider type (e.g., "llm", "google_docai")
Provider string
// Google Document AI settings
GoogleProjectID string
GoogleLocation string
GoogleProcessorID string
// LLM settings (from existing config)
VisionLLMProvider string
VisionLLMModel string
}
// NewProvider creates a new OCR provider based on configuration
func NewProvider(config Config) (Provider, error) {
log.Info("Initializing OCR provider: ", config.Provider)
switch config.Provider {
case "google_docai":
if config.GoogleProjectID == "" || config.GoogleLocation == "" || config.GoogleProcessorID == "" {
return nil, fmt.Errorf("missing required Google Document AI configuration")
}
log.WithFields(logrus.Fields{
"location": config.GoogleLocation,
"processor_id": config.GoogleProcessorID,
}).Info("Using Google Document AI provider")
return newGoogleDocAIProvider(config)
case "llm":
if config.VisionLLMProvider == "" || config.VisionLLMModel == "" {
return nil, fmt.Errorf("missing required LLM configuration")
}
log.WithFields(logrus.Fields{
"provider": config.VisionLLMProvider,
"model": config.VisionLLMModel,
}).Info("Using LLM OCR provider")
return newLLMProvider(config)
default:
return nil, fmt.Errorf("unsupported OCR provider: %s", config.Provider)
}
}
// SetLogLevel sets the logging level for the OCR package
func SetLogLevel(level logrus.Level) {
log.SetLevel(level)
}

web-app/dist/.keep (new empty file)

web-app/tsconfig.app.tsbuildinfo

@@ -1 +1 @@
{"root":["./src/app.tsx","./src/documentprocessor.tsx","./src/experimentalocr.tsx","./src/history.tsx","./src/main.tsx","./src/vite-env.d.ts","./src/components/documentcard.tsx","./src/components/documentstoprocess.tsx","./src/components/nodocuments.tsx","./src/components/sidebar.tsx","./src/components/successmodal.tsx","./src/components/suggestioncard.tsx","./src/components/suggestionsreview.tsx","./src/components/undocard.tsx"],"version":"5.7.2"}
{"root":["./src/app.tsx","./src/documentprocessor.tsx","./src/experimentalocr.tsx","./src/history.tsx","./src/main.tsx","./src/vite-env.d.ts","./src/components/documentcard.tsx","./src/components/documentstoprocess.tsx","./src/components/nodocuments.tsx","./src/components/sidebar.tsx","./src/components/successmodal.tsx","./src/components/suggestioncard.tsx","./src/components/suggestionsreview.tsx","./src/components/undocard.tsx"],"version":"5.7.3"}

web-app/tsconfig.node.tsbuildinfo

@@ -1 +1 @@
{"root":["./vite.config.ts"],"version":"5.7.2"}
{"root":["./vite.config.ts"],"version":"5.7.3"}