Update README.md
README.md CHANGED
@@ -19,11 +19,11 @@ The model only takes images as document-side inputs and produces vectors represen
 
 # News
 
-- 2024-07-14: We released huggingface demo
-- 2024-07-14: We released a Gradio demo of `miniCPM-visual-embedding-v0`; take a look at [pipeline_gradio.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline_gradio.py). You can run `pipeline_gradio.py` to build a demo on your PC.
-- 2024-07-13: We released a command-line demo of `miniCPM-visual-embedding-v0` that retrieves the most relevant pages from a given PDF file (which can be very long); take a look at [pipeline.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline.py).
 - 2024-06-27: 🚀 We released our first visual embedding model checkpoint minicpm-visual-embedding-v0 on [huggingface](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0).
@@ -56,6 +56,15 @@ or
 
 huggingface-cli download --resume-download RhapsodyAI/minicpm-visual-embedding-v0 --local-dir minicpm-visual-embedding-v0 --local-dir-use-symlinks False
 ```
 
 ```python
 from transformers import AutoModel
 from transformers import AutoTokenizer
@@ -96,7 +105,7 @@ print(scores)
 
 # Todos
 
-- Release huggingface space demo.
 - Release the evaluation results.
 # News
 
+- 2024-07-14: We released a **huggingface demo**! Try our [online demo](https://huggingface.co/spaces/bokesyo/minicpm-visual-embeeding-v0-demo)!
+- 2024-07-14: We released a **Gradio demo** of `miniCPM-visual-embedding-v0`; take a look at [pipeline_gradio.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline_gradio.py). You can run `pipeline_gradio.py` to build a demo on your PC.
+- 2024-07-13: We released a **command-line demo** of `miniCPM-visual-embedding-v0` that retrieves the most relevant pages from a given PDF file (which can be very long); take a look at [pipeline.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline.py).
 - 2024-06-27: 🚀 We released our first visual embedding model checkpoint minicpm-visual-embedding-v0 on [huggingface](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0).
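The command-line demo retrieves the most relevant pages from a long PDF. The core ranking idea can be sketched in a few lines; this is an illustrative sketch, not `pipeline.py`'s actual code: `cosine` and `top_pages` are hypothetical names, and real page vectors come from the embedding model rather than the toy 2-d lists used here.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_pages(query, page_vectors, k=3):
    # Rank page indices by similarity to the query vector, best first.
    ranked = sorted(range(len(page_vectors)),
                    key=lambda i: cosine(query, page_vectors[i]),
                    reverse=True)
    return ranked[:k]

pages = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(top_pages([1.0, 0.1], pages, k=2))  # → [0, 2]
```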
 huggingface-cli download --resume-download RhapsodyAI/minicpm-visual-embedding-v0 --local-dir minicpm-visual-embedding-v0 --local-dir-use-symlinks False
 ```
 
+- To deploy a local demo, first check `pipeline_gradio.py`: change the *model path* to your local path and set the *device* to match your hardware (`cuda` for Nvidia GPUs, `mps` for Apple silicon). Then launch the demo:
+
+```bash
+pip install gradio
+python pipeline_gradio.py
+```
+
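The device choice described above can be sketched as follows; `pick_device` is a hypothetical helper for illustration, not part of `pipeline_gradio.py`:

```python
def pick_device(has_cuda: bool, has_mps: bool) -> str:
    # Mirror the README's advice: prefer CUDA on Nvidia cards,
    # MPS on Apple silicon, and fall back to CPU otherwise.
    if has_cuda:
        return "cuda"
    if has_mps:
        return "mps"
    return "cpu"

print(pick_device(has_cuda=False, has_mps=True))  # → mps
```

In practice the flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`.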
+- To run the model for research purposes, please refer to the following code:
+
 ```python
 from transformers import AutoModel
 from transformers import AutoTokenizer
 
 # Todos
 
+- [x] Release huggingface space demo.
 - Release the evaluation results.