This dataset consists of a `pandas` table and an attached `images.zip` file with the generated images:
* seed (`numpy` seed used to generate random vectors)
* path (path to the generated image obtained after unzipping `images.zip`)
* vector (generated `numpy` "random" vector used to create StyleGAN3 images)
* text (caption of each image, generated using the BLIP model `Salesforce/blip-image-captioning-base`)

## Usage

To avoid loading every image into memory, load the annotation table and the images separately:
```python
from datasets import load_dataset

images = load_dataset("balgot/stylegan3-annotated", data_files=["*.zip"])
dataset = load_dataset("balgot/stylegan3-annotated", data_files=["*.csv"])

# TODO: convert "vector" column to numpy/torch
```
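
When loaded from CSV, the `vector` column arrives as a string rather than an array. A minimal sketch of converting it back, assuming each vector was serialized as a Python-style list literal (e.g. `"[0.1, -0.2, 0.3]"`) — `parse_vector` is a hypothetical helper, not part of the dataset:

```python
import ast

import numpy as np


def parse_vector(example):
    """Parse the stringified list in `vector` into a float32 numpy array.

    Assumes the CSV stored each vector as a Python-style list literal,
    e.g. "[0.1, -0.2, 0.3]". Adjust the parsing if your column uses a
    different serialization.
    """
    example["vector"] = np.asarray(
        ast.literal_eval(example["vector"]), dtype=np.float32
    )
    return example


# Hypothetical usage on the dataset loaded above:
# dataset = dataset.map(parse_vector)
# (wrap the result in torch.from_numpy(...) if you need torch tensors)
```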